This change integrates canned queries with Datasette's new SQL-based permissions system by making the following changes:

1. **Default canned_queries plugin hook**: Added a new hookimpl in `default_permissions.py` that returns canned queries from Datasette configuration. This extracts config-reading logic into a plugin hook, allowing `QueryResource` to discover all queries.
2. **Async resources_sql()**: Converted `Resource.resources_sql()` from a synchronous class method returning a string to an async method that receives the `datasette` instance. This allows `QueryResource` to call plugin hooks and query the database.
3. **QueryResource implementation**: Implemented `QueryResource.resources_sql()` to gather all canned queries by:
   - Querying `catalog_databases` for all databases
   - Calling `canned_queries` hooks for each database with `actor=None`
   - Building a UNION ALL SQL query of all (database, query_name) pairs
   - Properly escaping single quotes in resource names
4. **Simplified get_canned_queries()**: Removed config-reading logic, since it is now handled by the default plugin hook.
5. **Added view-query to default allow**: Added `"view-query"` to the `default_allow_actions` set so canned queries are accessible by default.
6. **Removed xfail markers**: Removed test `xfail` markers from:
   - `tests/test_canned_queries.py` (entire module)
   - `tests/test_html.py` (2 tests)
   - `tests/test_permissions.py` (1 test)
   - `tests/test_plugins.py` (1 test)

All canned query tests now pass with the new permission system.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
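The UNION ALL construction described in step 3 can be sketched as follows. This is an illustrative standalone sketch, not Datasette's actual implementation; the function and column names here are hypothetical:

```python
# Hypothetical sketch of step 3: build a UNION ALL query over
# (database, query_name) pairs, escaping single quotes in names.

def escape_single_quotes(value):
    # SQL string literals escape a single quote by doubling it
    return value.replace("'", "''")

def build_resources_sql(pairs):
    # Each pair becomes one SELECT; rows are combined with UNION ALL
    selects = [
        "SELECT '{}' AS database, '{}' AS query_name".format(
            escape_single_quotes(database), escape_single_quotes(name)
        )
        for database, name in pairs
    ]
    return "\nUNION ALL\n".join(selects)
```

For example, `build_resources_sql([("content", "latest")])` produces a single `SELECT 'content' AS database, 'latest' AS query_name` row, and a name like `it's` is emitted as `it''s` so the generated SQL stays valid.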
An open source multi-tool for exploring and publishing data
Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API.
Datasette is aimed at data journalists, museum curators, archivists, local governments, scientists, researchers and anyone else who has data that they wish to share with the world.
Explore a demo, watch a video about the project or try it out on GitHub Codespaces.
- datasette.io is the official project website
- Latest Datasette News
- Comprehensive documentation: https://docs.datasette.io/
- Examples: https://datasette.io/examples
- Live demo of current `main` branch: https://latest.datasette.io/
- Questions, feedback or want to talk about the project? Join our Discord
Want to stay up-to-date with the project? Subscribe to the Datasette newsletter for tips, tricks and news on what's new in the Datasette ecosystem.
## Installation
If you are on a Mac, Homebrew is the easiest way to install Datasette:
```
brew install datasette
```
You can also install it using pip or pipx:
```
pip install datasette
```
Datasette requires Python 3.8 or higher. We also have detailed installation instructions covering other options such as Docker.
## Basic usage
```
datasette serve path/to/database.db
```
This will start a web server on port 8001 - visit http://localhost:8001/ to access the web interface.
`serve` is the default subcommand; you can omit it if you like.
Use Chrome on OS X? You can run datasette against your browser history like so:
```
datasette ~/Library/Application\ Support/Google/Chrome/Default/History --nolock
```
Now visiting http://localhost:8001/History/downloads will show you a web interface to browse your downloads data.
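The accompanying JSON API can also run SQL directly, via a `?sql=` query-string parameter on a database's `.json` endpoint. As a rough sketch of how such a URL is composed (the `query_url` helper below is made up for illustration):

```python
# Compose a Datasette JSON API URL that runs a SQL query.
# query_url is an illustrative helper, not part of Datasette itself.
from urllib.parse import urlencode

def query_url(base, database, sql):
    # Builds a URL like http://localhost:8001/History.json?sql=...
    return "{}/{}.json?{}".format(base, database, urlencode({"sql": sql}))

print(query_url("http://localhost:8001", "History", "select * from downloads limit 5"))
```

`urlencode` takes care of percent-encoding the SQL, so arbitrary queries can be pasted in safely.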
## metadata.json
If you want to include licensing and source information in the generated Datasette website, you can do so using a JSON file that looks something like this:
```json
{
    "title": "Five Thirty Eight",
    "license": "CC Attribution 4.0 License",
    "license_url": "http://creativecommons.org/licenses/by/4.0/",
    "source": "fivethirtyeight/data on GitHub",
    "source_url": "https://github.com/fivethirtyeight/data"
}
```
Save this in metadata.json and run Datasette like so:
```
datasette serve fivethirtyeight.db -m metadata.json
```
The license and source information will be displayed on the index page and in the footer. They will also be included in the JSON produced by the API.
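As a quick sanity check (a standalone sketch, not part of Datasette), you can write the metadata file from Python and confirm the attribution keys from the example above are all present:

```python
import json

# The attribution keys shown in the metadata.json example above
EXPECTED_KEYS = {"title", "license", "license_url", "source", "source_url"}

metadata = {
    "title": "Five Thirty Eight",
    "license": "CC Attribution 4.0 License",
    "license_url": "http://creativecommons.org/licenses/by/4.0/",
    "source": "fivethirtyeight/data on GitHub",
    "source_url": "https://github.com/fivethirtyeight/data",
}

with open("metadata.json", "w") as fp:
    json.dump(metadata, fp, indent=2)

with open("metadata.json") as fp:
    loaded = json.load(fp)

missing = EXPECTED_KEYS - set(loaded)
print("missing attribution keys:", sorted(missing))
```

An empty `missing` set means every key Datasette uses for the footer and index-page attribution is in place.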
## datasette publish
If you have Heroku or Google Cloud Run configured, Datasette can deploy one or more SQLite databases to the internet with a single command:
```
datasette publish heroku database.db
```
Or:
```
datasette publish cloudrun database.db
```
This will create a Docker image containing both the Datasette application and the specified SQLite database files. It will then deploy that image to Heroku or Cloud Run and give you a URL to access the resulting website and API.
See Publishing data in the documentation for more details.
## Datasette Lite
Datasette Lite is Datasette packaged using WebAssembly so that it runs entirely in your browser, no Python web application server required. Read more about that in the Datasette Lite documentation.