Fix numerous typos (#561)

Thanks, @minho42!
This commit is contained in:
Min ho Kim 2019-07-26 20:25:44 +10:00 committed by Simon Willison
commit 27cb29365c
6 changed files with 10 additions and 10 deletions


@@ -409,7 +409,7 @@ def render_pep440_old(pieces):
The ".dev0" means dirty.
- Eexceptions:
+ Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
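The docstring touched by this hunk describes versioneer's ``render_pep440_old`` style: ``TAG[.postDISTANCE[.dev0]]``, with ``.dev0`` marking a dirty tree. A minimal sketch consistent with that docstring (not the vendored file itself; it assumes ``pieces`` carries ``closest-tag``, ``distance`` and ``dirty`` keys, as elsewhere in versioneer):

```python
def render_pep440_old(pieces):
    # TAG[.postDISTANCE[.dev0]] -- ".dev0" means the working tree is dirty.
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            rendered += ".post%d" % pieces["distance"]
            if pieces["dirty"]:
                rendered += ".dev0"
    else:
        # Exception: no tags at all -> 0.postDISTANCE[.dev0]
        rendered = "0.post%d" % pieces["distance"]
        if pieces["dirty"]:
            rendered += ".dev0"
    return rendered
```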


@@ -387,7 +387,7 @@ class DataView(BaseView):
return await self.as_csv(request, database, hash, **kwargs)
if _format is None:
- # HTML views default to expanding all foriegn key labels
+ # HTML views default to expanding all foreign key labels
kwargs["default_labels"] = True
extra_template_data = {}


@@ -142,7 +142,7 @@ Datasette can still run against immutable files and gains numerous performance b
Faceting improvements, and faceting plugins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Datasette :ref:`facets` provide an intuitive way to quickly summarize and interact with data. Previously the only supported faceting technique was column faceting, but 0.28 introduces two powerful new capibilities: facet-by-JSON-array and the ability to define further facet types using plugins.
+ Datasette :ref:`facets` provide an intuitive way to quickly summarize and interact with data. Previously the only supported faceting technique was column faceting, but 0.28 introduces two powerful new capabilities: facet-by-JSON-array and the ability to define further facet types using plugins.
Facet by array (`#359 <https://github.com/simonw/datasette/issues/359>`__) is only available if your SQLite installation provides the ``json1`` extension. Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns - useful for modelling things like tags without needing to break them out into a new table. See :ref:`facet_by_json_array` for more.
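Conceptually, facet-by-array reduces to a ``json_each()`` join with a ``GROUP BY``. A hedged sketch with an invented table (it assumes your Python's SQLite build ships the ``json1`` functions, which most modern builds do):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT, tags TEXT)")
conn.executemany(
    "INSERT INTO places VALUES (?, ?)",
    [("Park", '["outdoor", "free"]'), ("Museum", '["indoor", "free"]')],
)
# json_each() yields one row per element of each JSON array,
# which we group into per-tag facet counts
facets = conn.execute(
    """
    SELECT j.value AS tag, count(*) AS n
    FROM places, json_each(places.tags) AS j
    GROUP BY j.value
    ORDER BY n DESC, tag
    """
).fetchall()
print(facets)  # [('free', 2), ('indoor', 1), ('outdoor', 1)]
```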
@@ -153,7 +153,7 @@ The new :ref:`plugin_register_facet_classes` plugin hook (`#445 <https://github.
datasette publish cloudrun
~~~~~~~~~~~~~~~~~~~~~~~~~~
- `Google Cloud Run <https://cloud.google.com/run/>`__ is a brand new serverless hosting platform from Google, which allows you to build a Docker container which will run only when HTTP traffic is recieved and will shut down (and hence cost you nothing) the rest of the time. It's similar to Zeit's Now v1 Docker hosting platform which sadly is `no longer accepting signups <https://hyperion.alpha.spectrum.chat/zeit/now/cannot-create-now-v1-deployments~d206a0d4-5835-4af5-bb5c-a17f0171fb25?m=MTU0Njk2NzgwODM3OA==>`__ from new users.
+ `Google Cloud Run <https://cloud.google.com/run/>`__ is a brand new serverless hosting platform from Google, which allows you to build a Docker container which will run only when HTTP traffic is received and will shut down (and hence cost you nothing) the rest of the time. It's similar to Zeit's Now v1 Docker hosting platform which sadly is `no longer accepting signups <https://hyperion.alpha.spectrum.chat/zeit/now/cannot-create-now-v1-deployments~d206a0d4-5835-4af5-bb5c-a17f0171fb25?m=MTU0Njk2NzgwODM3OA==>`__ from new users.
The new ``datasette publish cloudrun`` command was contributed by Romain Primet (`#434 <https://github.com/simonw/datasette/pull/434>`__) and publishes selected databases to a new Datasette instance running on Google Cloud Run.
@@ -592,7 +592,7 @@ Mostly new work on the :ref:`plugins` mechanism: plugins can now bundle static a
- Longer time limit for test_paginate_compound_keys
It was failing intermittently in Travis - see `#209 <https://github.com/simonw/datasette/issues/209>`_
- - Use application/octet-stream for downloadable databses
+ - Use application/octet-stream for downloadable databases
- Updated PyPI classifiers
- Updated PyPI link to pypi.org


@@ -62,7 +62,7 @@ Each of the top-level metadata fields can be used at the database and table leve
Source, license and about
-------------------------
- The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optionaly.
+ The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional.
**source** and **source_url** should be used to indicate where the underlying data came from.
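As an illustrative sketch only (all values here are invented), these fields and their ``*_url`` companions slot into ``metadata.json`` like so:

```json
{
    "source": "Example City Open Data",
    "source_url": "https://data.example.com/",
    "license": "CC BY 4.0",
    "license_url": "https://creativecommons.org/licenses/by/4.0/",
    "about": "Example project notes",
    "about_url": "https://example.com/about"
}
```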


@@ -39,7 +39,7 @@ Then later you can start Datasette against the ``counts.json`` file and use it t
datasette -i data.db --inspect-file=counts.json
- You need to use the ``-i`` immutable mode agaist the databse file here or the counts from the JSON file will be ignored.
+ You need to use the ``-i`` immutable mode against the databse file here or the counts from the JSON file will be ignored.
You will rarely need to use this optimization in every-day use, but several of the ``datasette publish`` commands described in :ref:`publishing` use this optimization for better performance when deploying a database file to a hosting provider.


@@ -180,7 +180,7 @@ two common reasons why `setup.py` might not be in the root:
`setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI
distributions (and upload multiple independently-installable tarballs).
* Source trees whose main purpose is to contain a C library, but which also
- provide bindings to Python (and perhaps other langauges) in subdirectories.
+ provide bindings to Python (and perhaps other languages) in subdirectories.
Versioneer will look for `.git` in parent directories, and most operations
should get the right version string. However `pip` and `setuptools` have bugs
@@ -805,7 +805,7 @@ def render_pep440_old(pieces):
The ".dev0" means dirty.
- Eexceptions:
+ Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
@@ -1306,7 +1306,7 @@ def render_pep440_old(pieces):
The ".dev0" means dirty.
- Eexceptions:
+ Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]: