Compare commits

...

61 commits

Author SHA1 Message Date
Simon Willison
b9bcd6b6ae Release 0.65.1
Refs #2443
2024-11-28 17:11:46 -08:00
Simon Willison
496463bd4c Back to httpx>=0.20, refs #2443 2024-11-28 17:03:44 -08:00
Simon Willison
bd818aef62 Backport new DatasetteClient implementation, refs #2443 2024-11-28 17:01:57 -08:00
Simon Willison
2eba9a6630 httpx>=0.28.0, refs #2443 2024-11-28 17:00:15 -08:00
Simon Willison
753ddc9028 Release 0.65
Refs #2434
2024-10-07 10:33:20 -07:00
Simon Willison
dbed80826f Upgrade Furo
Refs https://github.com/simonw/datasette/issues/2434#issuecomment-2397501988
2024-10-07 10:30:21 -07:00
Simon Willison
c07e9946e5 Vendor Pint, support Python 3.13, drop 3.8
Refs #2434
2024-10-07 10:22:07 -07:00
Simon Willison
5cac74c4ac Apply blacken-docs 2024-06-21 16:39:42 -07:00
Simon Willison
868d34d7b9 Test 0.64.x against Python 3.12
And drop 3.7 tests because it is not compatible with the new Black version
2024-06-21 16:36:04 -07:00
Simon Willison
0006a45b58 Bump Black dependency to latest for 0.64.x branch 2024-06-21 16:33:46 -07:00
Simon Willison
0af9a89460 Release 0.64.8
Refs #2353, #2359
2024-06-21 16:31:32 -07:00
Simon Willison
10dae61213 Applied latest Black 2024-06-21 16:28:25 -07:00
Simon Willison
2e62282170 Backport derive_named_parameters fix to 0.64.x
Refs #2354
2024-06-21 16:24:48 -07:00
Simon Willison
7aa4bf9506 Backport of #2359 for 0.64.x 2024-06-21 16:18:38 -07:00
Simon Willison
6c941e30c5 More deploy workflow wrangling 2024-06-12 16:04:17 -07:00
Simon Willison
c302357dc3 Publish using gh-action-pypi-publish@release/v1 2024-06-12 15:57:37 -07:00
Simon Willison
ead2882f01 Fix for pyodide test failure, refs #2351 2024-06-12 15:46:55 -07:00
Simon Willison
a0a9b8023d Update ignore-words 2024-06-12 15:46:37 -07:00
Simon Willison
b7c44de56d Release 0.64.7
Refs #2353
2024-06-12 15:45:40 -07:00
Simon Willison
47333e43c6 Workaround for #2353 2024-06-12 15:39:11 -07:00
Simon Willison
4e3e8a0124 Fix typo in plugin docs 2023-12-22 15:16:28 -08:00
Simon Willison
c4571c4f17 Release 0.64.6
Refs #2214
2023-12-22 15:14:50 -08:00
Simon Willison
324090919d Backport of CSV labels fix from #2214 2023-12-22 15:12:50 -08:00
Simon Willison
5800eaeb5b Release 0.64.5
Refs #2197
2023-10-08 09:03:37 -07:00
Simon Willison
5e932f3e27 Drop click-default-group-wheel, refs #2197 2023-10-08 09:00:59 -07:00
Simon Willison
2b44d6a3fc Fixed typo in docs 2023-09-21 12:28:45 -07:00
Simon Willison
d61687cbeb Release 0.64.4
Refs #2189
2023-09-21 12:25:52 -07:00
Simon Willison
4b01cec374 Applied latest Black 2023-09-21 12:23:48 -07:00
Simon Willison
c0d58a71b9 Stop running queries in parallel, refs #2189 2023-09-21 12:23:17 -07:00
Simon Willison
c0227ac7c7 Fix for flaky test_hidden_sqlite_stat1_table, refs #2179 2023-09-21 12:21:24 -07:00
Simon Willison
917272c864 Fix for Python httpx test failure, refs #2066 2023-04-27 07:55:25 -07:00
Simon Willison
0746e785fc Release 0.64.3
Refs #2065
2023-04-27 07:53:33 -07:00
Simon Willison
4d50fc7cd7 Depend on setuptools and pip, refs #2065 2023-04-27 07:45:39 -07:00
Simon Willison
2a0a94fe97 Release 0.64.2
Refs #2036
2023-03-08 12:41:13 -08:00
Simon Willison
cb9fa71878 Use service-specific image ID for Cloud Run deploys, refs #2036 2023-03-08 12:27:15 -08:00
Simon Willison
2e064641ac Release 0.64.1
Refs #1985, #1987
2023-01-11 10:21:37 -08:00
Simon Willison
a1c60bab86 Fix Sphinx warning turned error 2023-01-11 10:21:15 -08:00
Simon Willison
b7df546a45 Raise ValueError if Datasette(files=) is a string, refs #1985 2023-01-11 10:12:53 -08:00
Simon Willison
0a1de5d7b9 Link to non-spam Python 3 setup instructions
Refs #1987
2023-01-11 10:08:49 -08:00
Simon Willison
cd65558aee setup-gcloud 318.0.0
Refs https://til.simonwillison.net/googlecloud/gcloud-error-workaround
2023-01-09 16:02:28 -08:00
Simon Willison
0084daa50a Release 0.64, with a warning against arbitrary SQL with SpatiaLite
Refs #1409, #1771, #1979

Refs https://github.com/simonw/datasette.io/issues/132
2023-01-09 08:37:21 -08:00
Simon Willison
02987e342d Explicitly explain allow_sql: false 2023-01-09 08:25:19 -08:00
Simon Willison
fdb4d975a1 Better error for --load-extensions, refs #1979 2023-01-07 15:58:30 -08:00
Simon Willison
3b88ac671e What to do if extensions will not load, refs #1979 2023-01-07 15:48:08 -08:00
Simon Willison
1ec9c9995c Backported default_allow_sql for 0.63.x, closes #1409 2023-01-05 09:21:07 -08:00
Simon Willison
b8cf864fa6 Fixed broken example links in _where= docs 2023-01-04 16:53:39 -08:00
Simon Willison
1bff3f1a70 Fixed table_action example in docs 2023-01-04 16:53:31 -08:00
Simon Willison
7402294018 .select-wrapper:focus-within for accessibility, closes #1771 2023-01-04 16:53:16 -08:00
Simon Willison
9ec58da6ec Deploy docs on publish using Python 3.9
A workaround for gcloud setup, see:

https://til.simonwillison.net/googlecloud/gcloud-error-workaround

Refs #1963
2022-12-17 19:24:34 -08:00
Simon Willison
e05998bc85 Added missing word, refs #1963 2022-12-17 19:08:01 -08:00
Simon Willison
a9ffcbd42e Release 0.63.3
Refs #1955, #1958, #1963
2022-12-17 19:06:03 -08:00
Simon Willison
d93f975b3d On publish run tests same way as for test 2022-12-17 18:52:52 -08:00
Simon Willison
e6d94f9ffa Run new HTTPS test in CI, refs #1955 2022-12-17 18:51:17 -08:00
Simon Willison
0bd3eaa2dd Move HTTPS test to a bash script
See https://github.com/simonw/datasette/issues/1955#issuecomment-1356627931
2022-12-17 18:51:17 -08:00
Simon Willison
5649e547ef Put AsgiLifespan back so server starts up again, refs #1955 2022-12-17 18:51:17 -08:00
Simon Willison
96b3a86d7f Replace AsgiLifespan with AsgiRunOnFirstRequest, refs #1955 2022-12-17 18:51:17 -08:00
Simon Willison
4ba8d57bb1 Try click.echo() instead
This ensures the URL is output correctly when running under Docker.

Closes #1958
2022-12-15 16:57:45 -08:00
Simon Willison
d67f812b73 Release 0.63.2
Refs #1904, #1905
2022-11-18 16:53:05 -08:00
Simon Willison
c4d002fef5 Pin httpx in Pyodide test, refs #1904
Should help get tests to pass for #1896 too
2022-11-18 16:52:09 -08:00
Simon Willison
a93ccc63c7 Upgrade to Python 3.11 on Heroku, refs #1905 2022-11-18 16:49:08 -08:00
Simon Willison
9cca381033 --generate-dir option to publish heroku, refs #1905 2022-11-18 16:49:02 -08:00
129 changed files with 17587 additions and 258 deletions

@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13.0-rc.3"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
@@ -31,11 +31,17 @@ jobs:
pip install -e '.[test]'
- name: Run tests
run: |
pytest
pytest -n auto -m "not serial"
pytest -m "serial"
# And the test that exercises a localhost HTTPS server
tests/test_datasette_https_server.sh
deploy:
runs-on: ubuntu-latest
needs: [test]
environment: release
permissions:
id-token: write
steps:
- uses: actions/checkout@v3
- name: Set up Python
@@ -51,14 +57,12 @@ jobs:
${{ runner.os }}-publish-pip-
- name: Install dependencies
run: |
pip install setuptools wheel twine
- name: Publish
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
pip install setuptools wheel build
- name: Build
run: |
python setup.py sdist bdist_wheel
twine upload dist/*
python -m build
- name: Publish
uses: pypa/gh-action-pypi-publish@release/v1
deploy_static_docs:
runs-on: ubuntu-latest
@@ -69,7 +73,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.10'
python-version: '3.9'
- uses: actions/cache@v2
name: Configure pip caching
with:
@@ -90,7 +94,7 @@ jobs:
- name: Set up Cloud Run
uses: google-github-actions/setup-gcloud@v0
with:
version: '275.0.0'
version: '318.0.0'
service_account_email: ${{ secrets.GCP_SA_EMAIL }}
service_account_key: ${{ secrets.GCP_SA_KEY }}
- name: Deploy stable-docs.datasette.io to Cloud Run

@@ -27,4 +27,4 @@ jobs:
- name: Check spelling
run: |
codespell docs/*.rst --ignore-words docs/codespell-ignore-words.txt
codespell datasette -S datasette/static --ignore-words docs/codespell-ignore-words.txt
codespell datasette -S datasette/static -S datasette/vendored --ignore-words docs/codespell-ignore-words.txt

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13.0-rc.3"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
@@ -35,6 +35,8 @@ jobs:
run: |
pytest -n auto -m "not serial"
pytest -m "serial"
# And the test that exercises a localhost HTTPS server
tests/test_datasette_https_server.sh
- name: Check if cog needs to be run
run: |
cog --check docs/*.rst

@@ -64,16 +64,14 @@ from .utils import (
)
from .utils.asgi import (
AsgiLifespan,
Base400,
Forbidden,
NotFound,
Request,
Response,
AsgiRunOnFirstRequest,
asgi_static,
asgi_send,
asgi_send_file,
asgi_send_html,
asgi_send_json,
asgi_send_redirect,
)
from .utils.internal_db import init_internal_db, populate_schema_tables
@@ -118,6 +116,11 @@ SETTINGS = (
True,
"Allow users to specify columns to facet using ?_facet= parameter",
),
Setting(
"default_allow_sql",
True,
"Allow anyone to run arbitrary SQL queries",
),
Setting(
"allow_download",
True,
@@ -215,6 +218,8 @@ class Datasette:
self.config_dir = config_dir
self.pdb = pdb
self._secret = secret or secrets.token_hex(32)
if files is not None and isinstance(files, str):
raise ValueError("files= must be a list of paths, not a string")
self.files = tuple(files or []) + tuple(immutables or [])
if config_dir:
db_files = []
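The new two-line guard above rejects a common mistake: passing a single path string where `files=` expects a list, which Python would otherwise happily iterate character by character. A minimal sketch of the same check, using a hypothetical `Demo` class in place of `Datasette`:

```python
class Demo:
    def __init__(self, files=None, immutables=None):
        # Reject a bare string early rather than iterating it char-by-char
        if files is not None and isinstance(files, str):
            raise ValueError("files= must be a list of paths, not a string")
        self.files = tuple(files or []) + tuple(immutables or [])

Demo(files=["one.db", "two.db"])  # fine
try:
    Demo(files="one.db")
except ValueError as e:
    print(e)  # → files= must be a list of paths, not a string
```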
@@ -1260,7 +1265,7 @@ class Datasette:
async def setup_db():
# First time server starts up, calculate table counts for immutable databases
for dbname, database in self.databases.items():
for database in self.databases.values():
if not database.is_mutable:
await database.table_counts(limit=60 * 60 * 1000)
@@ -1274,10 +1279,8 @@
)
if self.setting("trace_debug"):
asgi = AsgiTracer(asgi)
asgi = AsgiLifespan(
asgi,
on_startup=setup_db,
)
asgi = AsgiLifespan(asgi)
asgi = AsgiRunOnFirstRequest(asgi, on_startup=[setup_db, self.invoke_startup])
for wrapper in pm.hook.asgi_wrapper(datasette=self):
asgi = wrapper(asgi)
return asgi
@@ -1565,45 +1568,40 @@ class DatasetteClient:
path = f"http://localhost{path}"
return path
async def _request(self, method, path, **kwargs):
async with httpx.AsyncClient(
transport=httpx.ASGITransport(app=self.app),
cookies=kwargs.pop("cookies", None),
) as client:
return await getattr(client, method)(self._fix(path), **kwargs)
async def get(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.get(self._fix(path), **kwargs)
return await self._request("get", path, **kwargs)
async def options(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.options(self._fix(path), **kwargs)
return await self._request("options", path, **kwargs)
async def head(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.head(self._fix(path), **kwargs)
return await self._request("head", path, **kwargs)
async def post(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.post(self._fix(path), **kwargs)
return await self._request("post", path, **kwargs)
async def put(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.put(self._fix(path), **kwargs)
return await self._request("put", path, **kwargs)
async def patch(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.patch(self._fix(path), **kwargs)
return await self._request("patch", path, **kwargs)
async def delete(self, path, **kwargs):
await self.ds.invoke_startup()
async with httpx.AsyncClient(app=self.app) as client:
return await client.delete(self._fix(path), **kwargs)
return await self._request("delete", path, **kwargs)
async def request(self, method, path, **kwargs):
await self.ds.invoke_startup()
avoid_path_rewrites = kwargs.pop("avoid_path_rewrites", None)
async with httpx.AsyncClient(app=self.app) as client:
async with httpx.AsyncClient(
transport=httpx.ASGITransport(app=self.app),
cookies=kwargs.pop("cookies", None),
) as client:
return await client.request(
method, self._fix(path, avoid_path_rewrites), **kwargs
)
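The refactor above collapses seven near-identical method bodies into a single `_request` helper dispatched by method name. A stripped-down sketch of the same consolidation pattern, with a hypothetical `FakeClient` standing in for `httpx.AsyncClient` so the example is self-contained:

```python
import asyncio

class FakeClient:
    # Stand-in for httpx.AsyncClient: just records the method and URL
    async def request(self, method, path, **kwargs):
        return (method.upper(), path)

class Client:
    def _fix(self, path):
        # Absolute paths are resolved against a fake localhost origin
        if path.startswith("/"):
            path = f"http://localhost{path}"
        return path

    async def _request(self, method, path, **kwargs):
        # One place to construct the client instead of one copy per verb
        client = FakeClient()
        return await client.request(method, self._fix(path), **kwargs)

    async def get(self, path, **kwargs):
        return await self._request("get", path, **kwargs)

    async def post(self, path, **kwargs):
        return await self._request("post", path, **kwargs)

print(asyncio.run(Client().get("/db.json")))  # → ('GET', 'http://localhost/db.json')
```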

@@ -4,6 +4,7 @@ import click
from click import formatting
from click.types import CompositeParamType
from click_default_group import DefaultGroup
import functools
import json
import os
import pathlib
@@ -11,6 +12,7 @@ import shutil
from subprocess import call
import sys
from runpy import run_module
import textwrap
import webbrowser
from .app import (
OBSOLETE_SETTINGS,
@@ -126,7 +128,7 @@ class Setting(CompositeParamType):
def sqlite_extensions(fn):
return click.option(
fn = click.option(
"sqlite_extensions",
"--load-extension",
type=LoadExtension(),
@@ -135,6 +137,26 @@ def sqlite_extensions(fn):
help="Path to a SQLite extension to load, and optional entrypoint",
)(fn)
# Wrap it in a custom error handler
@functools.wraps(fn)
def wrapped(*args, **kwargs):
try:
return fn(*args, **kwargs)
except AttributeError as e:
if "enable_load_extension" in str(e):
raise click.ClickException(
textwrap.dedent(
"""
Your Python installation does not have the ability to load SQLite extensions.
More information: https://datasette.io/help/extensions
"""
).strip()
)
raise
return wrapped
@click.group(cls=DefaultGroup, default="serve", default_if_no_args=True)
@click.version_option(version=__version__)
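The error-handling wrapper added above translates a cryptic `AttributeError` mentioning `enable_load_extension` into a readable CLI message. The same pattern in isolation, with a plain `RuntimeError` standing in for `click.ClickException` so no third-party import is needed:

```python
import functools
import textwrap

def friendly_extension_errors(fn):
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except AttributeError as e:
            # Only translate the specific failure we recognize
            if "enable_load_extension" in str(e):
                raise RuntimeError(
                    textwrap.dedent(
                        """
                        Your Python installation does not have the ability to load SQLite extensions.
                        """
                    ).strip()
                )
            raise  # anything else propagates unchanged

    return wrapped

@friendly_extension_errors
def load():
    raise AttributeError(
        "'sqlite3.Connection' object has no attribute 'enable_load_extension'"
    )

try:
    load()
except RuntimeError as e:
    print(e)
```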
@@ -607,7 +629,7 @@ def serve(
url = "http://{}:{}{}?token={}".format(
host, port, ds.urls.path("-/auth-token"), ds._root_token
)
print(url)
click.echo(url)
if open_browser:
if url is None:
# Figure out most convenient URL - to table, database or homepage

@@ -36,12 +36,16 @@ def permission_allowed(datasette, actor, action, resource):
return None
return actor_matches_allow(actor, allow)
elif action == "execute-sql":
# Only use default_allow_sql setting if it is set to False:
default_allow_sql = (
None if datasette.setting("default_allow_sql") else False
)
# Use allow_sql block from database block, or from top-level
database_allow_sql = datasette.metadata("allow_sql", database=resource)
if database_allow_sql is None:
database_allow_sql = datasette.metadata("allow_sql")
if database_allow_sql is None:
return None
return default_allow_sql
return actor_matches_allow(actor, database_allow_sql)
return inner
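The backported logic gives a database-level `allow_sql` block precedence over the top-level block, and only consults the new `default_allow_sql` setting when neither is present. A table-driven sketch of that precedence, using hypothetical plain-dict stand-ins for Datasette's settings and metadata lookups:

```python
def execute_sql_allowed(settings, metadata, database, actor_allowed=True):
    # Only use the default_allow_sql setting when it is False;
    # None means "no opinion" and lets later permission checks decide.
    default_allow_sql = None if settings.get("default_allow_sql", True) else False
    # allow_sql from the database block wins over the top-level block
    allow = metadata.get(database, {}).get("allow_sql")
    if allow is None:
        allow = metadata.get("allow_sql")
    if allow is None:
        return default_allow_sql
    return actor_allowed  # stand-in for actor_matches_allow(actor, allow)

# No allow_sql blocks anywhere, setting turned off: SQL is denied
print(execute_sql_allowed({"default_allow_sql": False}, {}, "mydb"))  # → False
# No blocks, setting left at its default: no opinion
print(execute_sql_allowed({}, {}, "mydb"))  # → None
# A database-level block still applies even when the setting is off
print(execute_sql_allowed(
    {"default_allow_sql": False},
    {"mydb": {"allow_sql": {"id": "root"}}},
    "mydb",
))  # → True
```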

@@ -80,9 +80,9 @@ def search_filters(request, database, table, datasette):
"{fts_pk} in (select rowid from {fts_table} where {fts_table} match {match_clause})".format(
fts_table=escape_sqlite(fts_table),
fts_pk=escape_sqlite(fts_pk),
match_clause=":search"
if search_mode_raw
else "escape_fts(:search)",
match_clause=(
":search" if search_mode_raw else "escape_fts(:search)"
),
)
)
human_descriptions.append(f'search matches "{search}"')
@@ -99,9 +99,11 @@ def search_filters(request, database, table, datasette):
"rowid in (select rowid from {fts_table} where {search_col} match {match_clause})".format(
fts_table=escape_sqlite(fts_table),
search_col=escape_sqlite(search_col),
match_clause=":search_{}".format(i)
if search_mode_raw
else "escape_fts(:search_{})".format(i),
match_clause=(
":search_{}".format(i)
if search_mode_raw
else "escape_fts(:search_{})".format(i)
),
)
)
human_descriptions.append(

@@ -173,7 +173,7 @@ def publish_subcommand(publish):
print(fp.read())
print("\n====================\n")
image_id = f"gcr.io/{project}/{name}"
image_id = f"gcr.io/{project}/datasette-{service}"
check_call(
"gcloud builds submit --tag {}{}".format(
image_id, " --timeout {}".format(timeout) if timeout else ""

@@ -3,7 +3,9 @@ from datasette import hookimpl
import click
import json
import os
import pathlib
import shlex
import shutil
from subprocess import call, check_output
import tempfile
@@ -28,6 +30,11 @@ def publish_subcommand(publish):
"--tar",
help="--tar option to pass to Heroku, e.g. --tar=/usr/local/bin/gtar",
)
@click.option(
"--generate-dir",
type=click.Path(dir_okay=True, file_okay=False),
help="Output generated application files and stop without deploying",
)
def heroku(
files,
metadata,
@@ -49,6 +56,7 @@ def publish_subcommand(publish):
about_url,
name,
tar,
generate_dir,
):
"Publish databases to Datasette running on Heroku"
fail_if_publish_binary_not_installed(
@@ -105,6 +113,16 @@ def publish_subcommand(publish):
secret,
extra_metadata,
):
if generate_dir:
# Recursively copy files from current working directory to it
if pathlib.Path(generate_dir).exists():
raise click.ClickException("Directory already exists")
shutil.copytree(".", generate_dir)
click.echo(
f"Generated files written to {generate_dir}, stopping without deploying",
err=True,
)
return
app_name = None
if name:
# Check to see if this app already exists
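The new `--generate-dir` branch above copies the generated application files to a directory and stops before deploying. A sketch of that copy-and-bail step in isolation, exercised against a temporary directory rather than real Heroku build output (function and file names here are hypothetical):

```python
import pathlib
import shutil
import tempfile

def write_generated_files(source_dir, generate_dir):
    # Refuse to clobber an existing directory, matching the diff's guard
    if pathlib.Path(generate_dir).exists():
        raise RuntimeError("Directory already exists")
    # Recursively copy the generated application files
    shutil.copytree(source_dir, generate_dir)
    return f"Generated files written to {generate_dir}, stopping without deploying"

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "src"
    src.mkdir()
    (src / "Procfile").write_text("web: datasette serve ...")
    dest = pathlib.Path(tmp) / "out"
    print(write_generated_files(src, dest))
    print(sorted(p.name for p in dest.iterdir()))  # → ['Procfile']
```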
@@ -176,7 +194,7 @@ def temporary_heroku_directory(
fp.write(json.dumps(metadata_content, indent=2))
with open("runtime.txt", "w") as fp:
fp.write("python-3.8.10")
fp.write("python-3.11.0")
if branch:
install = [

@@ -573,6 +573,9 @@ form button[type=button] {
display: inline-block;
margin-right: 0.3em;
}
.select-wrapper:focus-within {
border: 1px solid black;
}
.select-wrapper.filter-op {
width: 80px;
}

@@ -402,9 +402,9 @@ def make_dockerfile(
apt_get_extras = apt_get_extras_
if spatialite:
apt_get_extras.extend(["python3-dev", "gcc", "libsqlite3-mod-spatialite"])
environment_variables[
"SQLITE_EXTENSIONS"
] = "/usr/lib/x86_64-linux-gnu/mod_spatialite.so"
environment_variables["SQLITE_EXTENSIONS"] = (
"/usr/lib/x86_64-linux-gnu/mod_spatialite.so"
)
return """
FROM python:3.11.0-slim-bullseye
COPY . /app
@@ -416,9 +416,11 @@ RUN datasette inspect {files} --inspect-file inspect-data.json
ENV PORT {port}
EXPOSE {port}
CMD {cmd}""".format(
apt_get_extras=APT_GET_DOCKERFILE_EXTRAS.format(" ".join(apt_get_extras))
if apt_get_extras
else "",
apt_get_extras=(
APT_GET_DOCKERFILE_EXTRAS.format(" ".join(apt_get_extras))
if apt_get_extras
else ""
),
environment_variables="\n".join(
[
"ENV {} '{}'".format(key, value)
@@ -1114,17 +1116,24 @@ class StartupError(Exception):
pass
_re_named_parameter = re.compile(":([a-zA-Z0-9_]+)")
_single_line_comment_re = re.compile(r"--.*")
_multi_line_comment_re = re.compile(r"/\*.*?\*/", re.DOTALL)
_single_quote_re = re.compile(r"'(?:''|[^'])*'")
_double_quote_re = re.compile(r'"(?:\"\"|[^"])*"')
_named_param_re = re.compile(r":(\w+)")
async def derive_named_parameters(db, sql):
explain = "explain {}".format(sql.strip().rstrip(";"))
possible_params = _re_named_parameter.findall(sql)
try:
results = await db.execute(explain, {p: None for p in possible_params})
return [row["p4"].lstrip(":") for row in results if row["opcode"] == "Variable"]
except sqlite3.DatabaseError:
return possible_params
# Remove single-line comments
sql = _single_line_comment_re.sub("", sql)
# Remove multi-line comments
sql = _multi_line_comment_re.sub("", sql)
# Remove single-quoted strings
sql = _single_quote_re.sub("", sql)
# Remove double-quoted strings
sql = _double_quote_re.sub("", sql)
# Extract parameters from what is left
return _named_param_re.findall(sql)
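The rewritten `derive_named_parameters` above no longer runs an `explain` query against the database: it strips comments and string literals, then extracts the remaining `:name` tokens with a regex. A self-contained synchronous version of the same approach, using the regexes from the diff:

```python
import re

_single_line_comment_re = re.compile(r"--.*")
_multi_line_comment_re = re.compile(r"/\*.*?\*/", re.DOTALL)
_single_quote_re = re.compile(r"'(?:''|[^'])*'")
_double_quote_re = re.compile(r'"(?:\"\"|[^"])*"')
_named_param_re = re.compile(r":(\w+)")

def derive_named_parameters(sql):
    # Remove everything that can legally contain a ":" without
    # being a bound parameter: comments and quoted literals
    sql = _single_line_comment_re.sub("", sql)
    sql = _multi_line_comment_re.sub("", sql)
    sql = _single_quote_re.sub("", sql)
    sql = _double_quote_re.sub("", sql)
    # Whatever :names survive are the real parameters
    return _named_param_re.findall(sql)

sql = """
select :name, ':not_a_param'  -- nor :this_one
from people /* or :this */ where age > :min_age
"""
print(derive_named_parameters(sql))  # → ['name', 'min_age']
```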
def add_cors_headers(headers):

@@ -428,3 +428,18 @@ class AsgiFileDownload:
content_type=self.content_type,
headers=self.headers,
)
class AsgiRunOnFirstRequest:
def __init__(self, asgi, on_startup):
assert isinstance(on_startup, list)
self.asgi = asgi
self.on_startup = on_startup
self._started = False
async def __call__(self, scope, receive, send):
if not self._started:
self._started = True
for hook in self.on_startup:
await hook()
return await self.asgi(scope, receive, send)
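The new `AsgiRunOnFirstRequest` wrapper defers startup hooks until the first incoming request, rather than relying on an ASGI lifespan event. A minimal exercise of the class with a toy ASGI app, showing that the hooks run exactly once:

```python
import asyncio

class AsgiRunOnFirstRequest:
    def __init__(self, asgi, on_startup):
        assert isinstance(on_startup, list)
        self.asgi = asgi
        self.on_startup = on_startup
        self._started = False

    async def __call__(self, scope, receive, send):
        if not self._started:
            # Hooks fire lazily, on the first request only
            self._started = True
            for hook in self.on_startup:
                await hook()
        return await self.asgi(scope, receive, send)

calls = []

async def setup_db():
    calls.append("setup_db")

async def app(scope, receive, send):
    calls.append("request")

wrapped = AsgiRunOnFirstRequest(app, on_startup=[setup_db])
asyncio.run(wrapped({}, None, None))
asyncio.run(wrapped({}, None, None))
print(calls)  # → ['setup_db', 'request', 'request']
```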

@@ -4,6 +4,7 @@ Backported from Python 3.8.
This code is licensed under the Python License:
https://github.com/python/cpython/blob/v3.8.3/LICENSE
"""
import os
from shutil import copy, copy2, copystat, Error

@@ -0,0 +1,33 @@
Copyright (c) 2012 by Hernan E. Grecco and contributors. See AUTHORS
for more details.
Some rights reserved.
Redistribution and use in source and binary forms of the software as well
as documentation, with or without modification, are permitted provided
that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* The names of the contributors may not be used to endorse or
promote products derived from this software without specific
prior written permission.
THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER
OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

@@ -0,0 +1,144 @@
"""
pint
~~~~
Pint is a Python module/package to define, operate and manipulate
**physical quantities**: the product of a numerical value and a
unit of measurement. It allows arithmetic operations between them
and conversions from and to different units.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from importlib.metadata import version
from .delegates.formatter._format_helpers import formatter
from .errors import ( # noqa: F401
DefinitionSyntaxError,
DimensionalityError,
LogarithmicUnitCalculusError,
OffsetUnitCalculusError,
PintError,
RedefinitionError,
UndefinedUnitError,
UnitStrippedWarning,
)
from .formatting import register_unit_format
from .registry import ApplicationRegistry, LazyRegistry, UnitRegistry
from .util import logger, pi_theorem # noqa: F401
# Default Quantity, Unit and Measurement are the ones
# build in the default registry.
Quantity = UnitRegistry.Quantity
Unit = UnitRegistry.Unit
Measurement = UnitRegistry.Measurement
Context = UnitRegistry.Context
Group = UnitRegistry.Group
try: # pragma: no cover
__version__ = version("pint")
except Exception: # pragma: no cover
# we seem to have a local copy not installed without setuptools
# so the reported version will be unknown
__version__ = "unknown"
#: A Registry with the default units and constants.
_DEFAULT_REGISTRY = LazyRegistry()
#: Registry used for unpickling operations.
application_registry = ApplicationRegistry(_DEFAULT_REGISTRY)
def _unpickle(cls, *args):
"""Rebuild object upon unpickling.
All units must exist in the application registry.
Parameters
----------
cls : Quantity, Magnitude, or Unit
*args
Returns
-------
object of type cls
"""
from datasette.vendored.pint.util import UnitsContainer
for arg in args:
# Prefixed units are defined within the registry
# on parsing (which does not happen here).
# We need to make sure that this happens before using.
if isinstance(arg, UnitsContainer):
for name in arg:
application_registry.parse_units(name)
return cls(*args)
def _unpickle_quantity(cls, *args):
"""Rebuild quantity upon unpickling using the application registry."""
return _unpickle(application_registry.Quantity, *args)
def _unpickle_unit(cls, *args):
"""Rebuild unit upon unpickling using the application registry."""
return _unpickle(application_registry.Unit, *args)
def _unpickle_measurement(cls, *args):
"""Rebuild measurement upon unpickling using the application registry."""
return _unpickle(application_registry.Measurement, *args)
def set_application_registry(registry):
"""Set the application registry, which is used for unpickling operations
and when invoking pint.Quantity or pint.Unit directly.
Parameters
----------
registry : pint.UnitRegistry
"""
application_registry.set(registry)
def get_application_registry():
"""Return the application registry. If :func:`set_application_registry` was never
invoked, return a registry built using :file:`defaults_en.txt` embedded in the pint
package.
Returns
-------
pint.UnitRegistry
"""
return application_registry
# Enumerate all user-facing objects
# Hint to intersphinx that, when building objects.inv, these objects must be registered
# under the top-level module and not in their original submodules
__all__ = (
"Measurement",
"Quantity",
"Unit",
"UnitRegistry",
"PintError",
"DefinitionSyntaxError",
"LogarithmicUnitCalculusError",
"DimensionalityError",
"OffsetUnitCalculusError",
"RedefinitionError",
"UndefinedUnitError",
"UnitStrippedWarning",
"formatter",
"get_application_registry",
"set_application_registry",
"register_unit_format",
"pi_theorem",
"__version__",
"Context",
)

@@ -0,0 +1,51 @@
from __future__ import annotations
from collections.abc import Callable
from decimal import Decimal
from fractions import Fraction
from typing import TYPE_CHECKING, Any, Protocol, TypeVar, Union
from .compat import Never, TypeAlias
if TYPE_CHECKING:
from .facets.plain import PlainQuantity as Quantity
from .facets.plain import PlainUnit as Unit
from .util import UnitsContainer
HAS_NUMPY = False
if TYPE_CHECKING:
from .compat import HAS_NUMPY
if HAS_NUMPY:
from .compat import np
Scalar: TypeAlias = Union[float, int, Decimal, Fraction, np.number[Any]]
Array = np.ndarray[Any, Any]
else:
Scalar: TypeAlias = Union[float, int, Decimal, Fraction]
Array: TypeAlias = Never
# TODO: Change when Python 3.10 becomes minimal version.
Magnitude = Union[Scalar, Array]
UnitLike = Union[str, dict[str, Scalar], "UnitsContainer", "Unit"]
QuantityOrUnitLike = Union["Quantity", UnitLike]
Shape = tuple[int, ...]
S = TypeVar("S")
FuncType = Callable[..., Any]
F = TypeVar("F", bound=FuncType)
# TODO: Improve or delete types
QuantityArgument = Any
T = TypeVar("T")
class Handler(Protocol):
def __getitem__(self, item: type[T]) -> Callable[[T], None]: ...

@@ -0,0 +1,146 @@
"""
pint.babel
~~~~~~~~~~
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .compat import HAS_BABEL
_babel_units: dict[str, str] = dict(
standard_gravity="acceleration-g-force",
millibar="pressure-millibar",
metric_ton="mass-metric-ton",
megawatt="power-megawatt",
degF="temperature-fahrenheit",
dietary_calorie="energy-foodcalorie",
millisecond="duration-millisecond",
mph="speed-mile-per-hour",
acre_foot="volume-acre-foot",
mebibit="digital-megabit",
gibibit="digital-gigabit",
tebibit="digital-terabit",
mebibyte="digital-megabyte",
kibibyte="digital-kilobyte",
mm_Hg="pressure-millimeter-of-mercury",
month="duration-month",
kilocalorie="energy-kilocalorie",
cubic_mile="volume-cubic-mile",
arcsecond="angle-arc-second",
byte="digital-byte",
metric_cup="volume-cup-metric",
kilojoule="energy-kilojoule",
meter_per_second_squared="acceleration-meter-per-second-squared",
pint="volume-pint",
square_centimeter="area-square-centimeter",
in_Hg="pressure-inch-hg",
milliampere="electric-milliampere",
arcminute="angle-arc-minute",
MPG="consumption-mile-per-gallon",
hertz="frequency-hertz",
day="duration-day",
mps="speed-meter-per-second",
kilometer="length-kilometer",
square_yard="area-square-yard",
kelvin="temperature-kelvin",
kilogram="mass-kilogram",
kilohertz="frequency-kilohertz",
megahertz="frequency-megahertz",
meter="length-meter",
cubic_inch="volume-cubic-inch",
kilowatt_hour="energy-kilowatt-hour",
second="duration-second",
yard="length-yard",
light_year="length-light-year",
millimeter="length-millimeter",
metric_horsepower="power-horsepower",
gibibyte="digital-gigabyte",
# 'temperature-generic',
liter="volume-liter",
turn="angle-revolution",
microsecond="duration-microsecond",
pound="mass-pound",
ounce="mass-ounce",
calorie="energy-calorie",
centimeter="length-centimeter",
inch="length-inch",
centiliter="volume-centiliter",
troy_ounce="mass-ounce-troy",
gram="mass-gram",
kilowatt="power-kilowatt",
knot="speed-knot",
lux="light-lux",
hectoliter="volume-hectoliter",
microgram="mass-microgram",
degC="temperature-celsius",
tablespoon="volume-tablespoon",
cubic_yard="volume-cubic-yard",
square_foot="area-square-foot",
tebibyte="digital-terabyte",
square_inch="area-square-inch",
carat="mass-carat",
hectopascal="pressure-hectopascal",
gigawatt="power-gigawatt",
watt="power-watt",
micrometer="length-micrometer",
volt="electric-volt",
bit="digital-bit",
gigahertz="frequency-gigahertz",
teaspoon="volume-teaspoon",
ohm="electric-ohm",
joule="energy-joule",
cup="volume-cup",
square_mile="area-square-mile",
nautical_mile="length-nautical-mile",
square_meter="area-square-meter",
mile="length-mile",
acre="area-acre",
nanometer="length-nanometer",
hour="duration-hour",
astronomical_unit="length-astronomical-unit",
liter_per_100kilometers="consumption-liter-per-100kilometers",
megaliter="volume-megaliter",
ton="mass-ton",
hectare="area-hectare",
square_kilometer="area-square-kilometer",
kibibit="digital-kilobit",
mile_scandinavian="length-mile-scandinavian",
liter_per_kilometer="consumption-liter-per-kilometer",
century="duration-century",
cubic_foot="volume-cubic-foot",
deciliter="volume-deciliter",
# pint='volume-pint-metric',
cubic_meter="volume-cubic-meter",
cubic_kilometer="volume-cubic-kilometer",
quart="volume-quart",
cc="volume-cubic-centimeter",
pound_force_per_square_inch="pressure-pound-per-square-inch",
milligram="mass-milligram",
kph="speed-kilometer-per-hour",
minute="duration-minute",
parsec="length-parsec",
picometer="length-picometer",
degree="angle-degree",
milliwatt="power-milliwatt",
week="duration-week",
ampere="electric-ampere",
milliliter="volume-milliliter",
decimeter="length-decimeter",
fluid_ounce="volume-fluid-ounce",
nanosecond="duration-nanosecond",
foot="length-foot",
karat="proportion-karat",
year="duration-year",
gallon="volume-gallon",
radian="angle-radian",
)
if not HAS_BABEL:
_babel_units = {}
_babel_systems: dict[str, str] = dict(mks="metric", imperial="uksystem", US="ussystem")
_babel_lengths: list[str] = ["narrow", "short", "long"]

@@ -0,0 +1,394 @@
"""
pint.compat
~~~~~~~~~~~
Compatibility layer.
:copyright: 2013 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import math
import sys
from collections.abc import Callable, Iterable, Mapping
from decimal import Decimal
from importlib import import_module
from numbers import Number
from typing import (
Any,
NoReturn,
)
if sys.version_info >= (3, 10):
from typing import TypeAlias # noqa
else:
from typing_extensions import TypeAlias # noqa
if sys.version_info >= (3, 11):
from typing import Self # noqa
else:
from typing_extensions import Self # noqa
if sys.version_info >= (3, 11):
from typing import Never # noqa
else:
from typing_extensions import Never # noqa
if sys.version_info >= (3, 11):
from typing import Unpack # noqa
else:
from typing_extensions import Unpack # noqa
if sys.version_info >= (3, 13):
from warnings import deprecated # noqa
else:
from typing_extensions import deprecated # noqa
def missing_dependency(
package: str, display_name: str | None = None
) -> Callable[..., NoReturn]:
"""Return a helper function that raises an exception when used.
It provides a way to delay a missing dependency exception until it is used.
"""
display_name = display_name or package
def _inner(*args: Any, **kwargs: Any) -> NoReturn:
raise Exception(
"This feature requires %s. Please install it by running:\n"
"pip install %s" % (display_name, package)
)
return _inner
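The closure returned by `missing_dependency` defers the failure: the stand-in can be assigned and passed around freely, and only raises at the point of use. A self-contained sketch of the same pattern (package names here are illustrative):

```python
from __future__ import annotations

from typing import Any, Callable, NoReturn


def missing_dependency(
    package: str, display_name: str | None = None
) -> Callable[..., NoReturn]:
    # Mirror of the helper above: build a callable that raises only
    # when it is actually invoked, not at import time.
    display_name = display_name or package

    def _inner(*args: Any, **kwargs: Any) -> NoReturn:
        raise Exception(
            f"This feature requires {display_name}. "
            f"Please install it by running:\npip install {package}"
        )

    return _inner


# Assigning the stand-in is harmless ...
fake_parse = missing_dependency("Babel")

# ... calling it is what fails.
try:
    fake_parse("en_US")
except Exception as exc:
    print(exc)
```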
# TODO: remove this warning after v0.10
class BehaviorChangeWarning(UserWarning):
pass
try:
from uncertainties import UFloat, ufloat
unp = None
HAS_UNCERTAINTIES = True
except ImportError:
UFloat = ufloat = unp = None
HAS_UNCERTAINTIES = False
try:
import numpy as np
from numpy import datetime64 as np_datetime64
from numpy import ndarray
HAS_NUMPY = True
NUMPY_VER = np.__version__
if HAS_UNCERTAINTIES:
from uncertainties import unumpy as unp
NUMERIC_TYPES = (Number, Decimal, ndarray, np.number, UFloat)
else:
NUMERIC_TYPES = (Number, Decimal, ndarray, np.number)
def _to_magnitude(value, force_ndarray=False, force_ndarray_like=False):
if isinstance(value, (dict, bool)) or value is None:
raise TypeError(f"Invalid magnitude for Quantity: {value!r}")
elif isinstance(value, str) and value == "":
raise ValueError("Quantity magnitude cannot be an empty string.")
elif isinstance(value, (list, tuple)):
return np.asarray(value)
elif HAS_UNCERTAINTIES:
from datasette.vendored.pint.facets.measurement.objects import Measurement
if isinstance(value, Measurement):
return ufloat(value.value, value.error)
if force_ndarray or (
force_ndarray_like and not is_duck_array_type(type(value))
):
return np.asarray(value)
return value
def _test_array_function_protocol():
# Test if the __array_function__ protocol is enabled
try:
class FakeArray:
def __array_function__(self, *args, **kwargs):
return
np.concatenate([FakeArray()])
return True
except ValueError:
return False
HAS_NUMPY_ARRAY_FUNCTION = _test_array_function_protocol()
NP_NO_VALUE = np._NoValue
except ImportError:
np = None
class ndarray:
pass
class np_datetime64:
pass
HAS_NUMPY = False
NUMPY_VER = "0"
NUMERIC_TYPES = (Number, Decimal)
HAS_NUMPY_ARRAY_FUNCTION = False
NP_NO_VALUE = None
def _to_magnitude(value, force_ndarray=False, force_ndarray_like=False):
if force_ndarray or force_ndarray_like:
raise ValueError(
"Cannot force to ndarray or ndarray-like when NumPy is not present."
)
elif isinstance(value, (dict, bool)) or value is None:
raise TypeError(f"Invalid magnitude for Quantity: {value!r}")
elif isinstance(value, str) and value == "":
raise ValueError("Quantity magnitude cannot be an empty string.")
elif isinstance(value, (list, tuple)):
raise TypeError(
"lists and tuples are valid magnitudes for "
"Quantity only when NumPy is present."
)
elif HAS_UNCERTAINTIES:
from datasette.vendored.pint.facets.measurement.objects import Measurement
if isinstance(value, Measurement):
return ufloat(value.value, value.error)
return value
try:
from babel import Locale
from babel import units as babel_units
babel_parse = Locale.parse
HAS_BABEL = hasattr(babel_units, "format_unit")
except ImportError:
HAS_BABEL = False
babel_parse = missing_dependency("Babel") # noqa: F811 # type:ignore
babel_units = babel_parse
try:
import mip
mip_model = mip.model
mip_Model = mip.Model
mip_INF = mip.INF
mip_INTEGER = mip.INTEGER
mip_xsum = mip.xsum
mip_OptimizationStatus = mip.OptimizationStatus
HAS_MIP = True
except ImportError:
HAS_MIP = False
mip_missing = missing_dependency("mip")
mip_model = mip_missing
mip_Model = mip_missing
mip_INF = mip_missing
mip_INTEGER = mip_missing
mip_xsum = mip_missing
mip_OptimizationStatus = mip_missing
# Defines Logarithm and Exponential for Logarithmic Converter
if HAS_NUMPY:
from numpy import (
exp, # noqa: F401
log, # noqa: F401
)
else:
from math import (
exp, # noqa: F401
log, # noqa: F401
)
# Define location of pint.Quantity in NEP-13 type cast hierarchy by defining upcast
# types using guarded imports
try:
from dask import array as dask_array
from dask.base import compute, persist, visualize
except ImportError:
compute, persist, visualize = None, None, None
dask_array = None
# TODO: merge with upcast_type_map
#: List upcast type names
upcast_type_names = (
"pint_pandas.pint_array.PintArray",
"xarray.core.dataarray.DataArray",
"xarray.core.dataset.Dataset",
"xarray.core.variable.Variable",
"pandas.core.series.Series",
"pandas.core.frame.DataFrame",
"pandas.Series",
"pandas.DataFrame",
"xarray.core.dataarray.DataArray",
)
#: Map type name to the actual type (for upcast types).
upcast_type_map: Mapping[str, type | None] = {k: None for k in upcast_type_names}
def fully_qualified_name(t: type) -> str:
"""Return the fully qualified name of a type."""
module = t.__module__
name = t.__qualname__
if module is None or module == "builtins":
return name
return f"{module}.{name}"
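For builtins the module prefix is dropped, so plain names like `dict` never collide with the fully qualified entries in `upcast_type_names`. A quick illustration, using a self-contained copy of the helper:

```python
from decimal import Decimal


def fully_qualified_name(t: type) -> str:
    # Same logic as above: "module.QualName", with builtins unprefixed.
    module = t.__module__
    name = t.__qualname__
    if module is None or module == "builtins":
        return name
    return f"{module}.{name}"


print(fully_qualified_name(dict))     # builtins stay unqualified
print(fully_qualified_name(Decimal))  # other types keep their module path
```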
def check_upcast_type(obj: type) -> bool:
"""Check if the type object is an upcast type."""
# TODO: merge or unify name with is_upcast_type
fqn = fully_qualified_name(obj)
if fqn not in upcast_type_map:
return False
else:
module_name, class_name = fqn.rsplit(".", 1)
cls = getattr(import_module(module_name), class_name)
upcast_type_map[fqn] = cls
# This is to check that we are importing the same thing,
# and to avoid weird problems. Maybe instead of returning
# False we should raise an error here.
return obj in upcast_type_map.values()
def is_upcast_type(other: type) -> bool:
"""Check if the type object is an upcast type."""
# TODO: merge or unify name with check_upcast_type
if other in upcast_type_map.values():
return True
return check_upcast_type(other)
def is_duck_array_type(cls: type) -> bool:
"""Check if the type object represents a (non-Quantity) duck array type."""
# TODO (NEP 30): replace duck array check with hasattr(other, "__duckarray__")
return issubclass(cls, ndarray) or (
not hasattr(cls, "_magnitude")
and not hasattr(cls, "_units")
and HAS_NUMPY_ARRAY_FUNCTION
and hasattr(cls, "__array_function__")
and hasattr(cls, "ndim")
and hasattr(cls, "dtype")
)
def is_duck_array(obj: type) -> bool:
"""Check if an object represents a (non-Quantity) duck array type."""
return is_duck_array_type(type(obj))
def eq(lhs: Any, rhs: Any, check_all: bool) -> bool | Iterable[bool]:
"""Comparison of scalars and arrays.
Parameters
----------
lhs
left-hand side
rhs
right-hand side
check_all
if True, reduce sequence to single bool;
return True if all the elements are equal.
Returns
-------
bool or array_like of bool
"""
out = lhs == rhs
if check_all and is_duck_array_type(type(out)):
return out.all()
return out
def isnan(obj: Any, check_all: bool) -> bool | Iterable[bool]:
"""Test for NaN or NaT.
Parameters
----------
obj
scalar or vector
check_all
if True, reduce sequence to single bool;
return True if any of the elements are NaN.
Returns
-------
bool or array_like of bool.
Always return False for non-numeric types.
"""
if is_duck_array_type(type(obj)):
if obj.dtype.kind in "ifc":
out = np.isnan(obj)
elif obj.dtype.kind in "Mm":
out = np.isnat(obj)
else:
if HAS_UNCERTAINTIES:
try:
out = unp.isnan(obj)
except TypeError:
# Not a numeric or UFloat type
out = np.full(obj.shape, False)
else:
# Not a numeric or datetime type
out = np.full(obj.shape, False)
return out.any() if check_all else out
if isinstance(obj, np_datetime64):
return np.isnat(obj)
elif HAS_UNCERTAINTIES and isinstance(obj, UFloat):
return unp.isnan(obj)
try:
return math.isnan(obj)
except TypeError:
return False
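The scalar fall-through at the end makes `isnan` total: values that `math.isnan` rejects simply report `False` instead of raising. A minimal sketch of just that branch:

```python
import math
from typing import Any


def scalar_isnan(obj: Any) -> bool:
    # Mirrors the final branch of isnan(): NaN floats are True,
    # anything math.isnan() cannot handle counts as "not NaN".
    try:
        return math.isnan(obj)
    except TypeError:
        return False


print(scalar_isnan(float("nan")))  # True
print(scalar_isnan(1.5))           # False
print(scalar_isnan("hello"))       # False (non-numeric, no exception)
```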
def zero_or_nan(obj: Any, check_all: bool) -> bool | Iterable[bool]:
"""Test if obj is zero, NaN, or NaT.
Parameters
----------
obj
scalar or vector
check_all
if True, reduce sequence to single bool;
return True if all the elements are zero, NaN, or NaT.
Returns
-------
bool or array_like of bool.
Always return False for non-numeric types.
"""
out = eq(obj, 0, False) + isnan(obj, False)
if check_all and is_duck_array_type(type(out)):
return out.all()
return out


@@ -0,0 +1,74 @@
# Default Pint constants definition file
# Based on the International System of Units
# Language: english
# Source: https://physics.nist.gov/cuu/Constants/
# https://physics.nist.gov/PhysRefData/XrayTrans/Html/search.html
# :copyright: 2013,2019 by Pint Authors, see AUTHORS for more details.
#### MATHEMATICAL CONSTANTS ####
# As computed by Maxima with fpprec:50
pi = 3.1415926535897932384626433832795028841971693993751 = π # pi
tansec = 4.8481368111333441675396429478852851658848753880815e-6 # tangent of 1 arc-second ~ arc_second/radian
ln10 = 2.3025850929940456840179914546843642076011014886288 # natural logarithm of 10
wien_x = 4.9651142317442763036987591313228939440555849867973 # solution to (x-5)*exp(x)+5 = 0 => x = W(5/exp(5))+5
wien_u = 2.8214393721220788934031913302944851953458817440731 # solution to (u-3)*exp(u)+3 = 0 => u = W(3/exp(3))+3
eulers_number = 2.71828182845904523536028747135266249775724709369995
#### DEFINED EXACT CONSTANTS ####
speed_of_light = 299792458 m/s = c = c_0 # since 1983
planck_constant = 6.62607015e-34 J s = ℎ # since May 2019
elementary_charge = 1.602176634e-19 C = e # since May 2019
avogadro_number = 6.02214076e23 # since May 2019
boltzmann_constant = 1.380649e-23 J K^-1 = k = k_B # since May 2019
standard_gravity = 9.80665 m/s^2 = g_0 = g0 = g_n = gravity # since 1901
standard_atmosphere = 1.01325e5 Pa = atm = atmosphere # since 1954
conventional_josephson_constant = 4.835979e14 Hz / V = K_J90 # since Jan 1990
conventional_von_klitzing_constant = 2.5812807e4 ohm = R_K90 # since Jan 1990
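Since the 2019 SI redefinition, the derived constants in the next section follow exactly from the defined values above. A quick numeric spot-check of two of them (values copied from this file; the CODATA reference values in the comments are the expected results):

```python
# Defined exact constants, as listed above
k_B = 1.380649e-23    # boltzmann_constant, J/K
N_A = 6.02214076e23   # avogadro_number, 1/mol
e = 1.602176634e-19   # elementary_charge, C

# Derived: molar gas constant and Faraday constant
R = k_B * N_A   # expect ~8.314462618 J/(mol K)
F = e * N_A     # expect ~96485.33212 C/mol

print(R)
print(F)
```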
#### DERIVED EXACT CONSTANTS ####
# Floating-point conversion may introduce inaccuracies
zeta = c / (cm/s) = ζ
dirac_constant = ℎ / (2 * π) = ħ = hbar = atomic_unit_of_action = a_u_action
avogadro_constant = avogadro_number * mol^-1 = N_A
molar_gas_constant = k * N_A = R
faraday_constant = e * N_A
conductance_quantum = 2 * e ** 2 / ℎ = G_0
magnetic_flux_quantum = ℎ / (2 * e) = Φ_0 = Phi_0
josephson_constant = 2 * e / ℎ = K_J
von_klitzing_constant = ℎ / e ** 2 = R_K
stefan_boltzmann_constant = 2 / 15 * π ** 5 * k ** 4 / (ℎ ** 3 * c ** 2) = σ = sigma
first_radiation_constant = 2 * π * ℎ * c ** 2 = c_1
second_radiation_constant = ℎ * c / k = c_2
wien_wavelength_displacement_law_constant = ℎ * c / (k * wien_x)
wien_frequency_displacement_law_constant = wien_u * k / ℎ
#### MEASURED CONSTANTS ####
# Recommended CODATA-2018 values
# To some extent, what is measured and what is derived is a bit arbitrary.
# The choice of measured constants is based on convenience and on available uncertainty.
# The uncertainty in the last significant digits is given in parentheses as a comment.
newtonian_constant_of_gravitation = 6.67430e-11 m^3/(kg s^2) = _ = gravitational_constant # (15)
rydberg_constant = 1.0973731568160e7 * m^-1 = R_∞ = R_inf # (21)
electron_g_factor = -2.00231930436256 = g_e # (35)
atomic_mass_constant = 1.66053906660e-27 kg = m_u # (50)
electron_mass = 9.1093837015e-31 kg = m_e = atomic_unit_of_mass = a_u_mass # (28)
proton_mass = 1.67262192369e-27 kg = m_p # (51)
neutron_mass = 1.67492749804e-27 kg = m_n # (95)
lattice_spacing_of_Si = 1.920155716e-10 m = d_220 # (32)
K_alpha_Cu_d_220 = 0.80232719 # (22)
K_alpha_Mo_d_220 = 0.36940604 # (19)
K_alpha_W_d_220 = 0.108852175 # (98)
#### DERIVED CONSTANTS ####
fine_structure_constant = (2 * ℎ * R_inf / (m_e * c)) ** 0.5 = α = alpha
vacuum_permeability = 2 * α * ℎ / (e ** 2 * c) = µ_0 = mu_0 = mu0 = magnetic_constant
vacuum_permittivity = e ** 2 / (2 * α * ℎ * c) = ε_0 = epsilon_0 = eps_0 = eps0 = electric_constant
impedance_of_free_space = 2 * α * ℎ / e ** 2 = Z_0 = characteristic_impedance_of_vacuum
coulomb_constant = α * hbar * c / e ** 2 = k_C
classical_electron_radius = α * hbar / (m_e * c) = r_e
thomson_cross_section = 8 / 3 * π * r_e ** 2 = σ_e = sigma_e


@@ -0,0 +1,22 @@
"""
pint.context
~~~~~~~~~~~~
Functions and classes related to context definitions and application.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
pass
#: Regex to match the header parts of a context.
#: Regex to match variable names in an equation.
# TODO: delete this file


@@ -0,0 +1,75 @@
"""
pint.converters
~~~~~~~~~~~~~~~
Functions and classes related to unit conversions.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from dataclasses import dataclass
from dataclasses import fields as dc_fields
from typing import Any, ClassVar
from ._typing import Magnitude
from .compat import HAS_NUMPY, Self, exp, log # noqa: F401
@dataclass(frozen=True)
class Converter:
"""Base class for value converters."""
_subclasses: ClassVar[list[type[Converter]]] = []
_param_names_to_subclass: ClassVar[dict[frozenset[str], type[Converter]]] = {}
@property
def is_multiplicative(self) -> bool:
return True
@property
def is_logarithmic(self) -> bool:
return False
def to_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
return value
def from_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
return value
def __init_subclass__(cls, **kwargs: Any):
# Get constructor parameters
super().__init_subclass__(**kwargs)
cls._subclasses.append(cls)
@classmethod
def get_field_names(cls, new_cls: type) -> frozenset[str]:
return frozenset(p.name for p in dc_fields(new_cls))
@classmethod
def preprocess_kwargs(cls, **kwargs: Any) -> dict[str, Any] | None:
return None
@classmethod
def from_arguments(cls, **kwargs: Any) -> Converter:
kwk = frozenset(kwargs.keys())
try:
new_cls = cls._param_names_to_subclass[kwk]
except KeyError:
for new_cls in cls._subclasses:
p_names = frozenset(p.name for p in dc_fields(new_cls))
if p_names == kwk:
cls._param_names_to_subclass[kwk] = new_cls
break
else:
params = "(" + ", ".join(tuple(kwk)) + ")"
raise ValueError(
f"There is no class registered for parameters {params}"
)
kw = new_cls.preprocess_kwargs(**kwargs)
if kw is None:
return new_cls(**kwargs)
return cls.from_arguments(**kw)
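`from_arguments` dispatches on the *set* of keyword names: a cache maps each frozenset of parameter names to the matching subclass, and a linear scan over `_subclasses` fills the cache on first use. The idea in miniature (the `Scale`/`ScaleOffset` subclasses here are hypothetical, not Pint's real converters):

```python
from __future__ import annotations

from dataclasses import dataclass, fields
from typing import Any, ClassVar


@dataclass(frozen=True)
class MiniConverter:
    _subclasses: ClassVar[list[type["MiniConverter"]]] = []
    _by_params: ClassVar[dict[frozenset[str], type["MiniConverter"]]] = {}

    def __init_subclass__(cls, **kwargs: Any):
        super().__init_subclass__(**kwargs)
        cls._subclasses.append(cls)

    @classmethod
    def from_arguments(cls, **kwargs: Any) -> "MiniConverter":
        key = frozenset(kwargs)
        try:
            new_cls = cls._by_params[key]
        except KeyError:
            # Linear scan once, then cache the match by parameter set.
            for new_cls in cls._subclasses:
                if {f.name for f in fields(new_cls)} == key:
                    cls._by_params[key] = new_cls
                    break
            else:
                raise ValueError(f"No converter for parameters {sorted(key)}")
        return new_cls(**kwargs)


@dataclass(frozen=True)
class Scale(MiniConverter):
    scale: float


@dataclass(frozen=True)
class ScaleOffset(MiniConverter):
    scale: float
    offset: float


print(MiniConverter.from_arguments(scale=2.0))
print(MiniConverter.from_arguments(scale=2.0, offset=1.0))
```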


@@ -0,0 +1,898 @@
# Default Pint units definition file
# Based on the International System of Units
# Language: english
# :copyright: 2013,2019 by Pint Authors, see AUTHORS for more details.
# Syntax
# ======
# Units
# -----
# <canonical name> = <relation to another unit or dimension> [= <symbol>] [= <alias>] [ = <alias> ] [...]
#
# The canonical name and aliases should be expressed in singular form.
# Pint automatically deals with plurals built by adding 's' to the singular form; plural
# forms that don't follow this rule should be instead explicitly listed as aliases.
#
# If a unit has no symbol and one wants to define aliases, then the symbol should be
# conventionally set to _.
#
# Example:
# millennium = 1e3 * year = _ = millennia
#
#
# Prefixes
# --------
# <prefix>- = <amount> [= <symbol>] [= <alias>] [ = <alias> ] [...]
#
# Example:
# deca- = 1e+1 = da- = deka-
#
#
# Derived dimensions
# ------------------
# [dimension name] = <relation to other dimensions>
#
# Example:
# [density] = [mass] / [volume]
#
# Note that primary dimensions don't need to be declared; they can be
# defined for the first time in a unit definition.
# E.g. see below `meter = [length]`
#
#
# Additional aliases
# ------------------
# @alias <canonical name or previous alias> = <alias> [ = <alias> ] [...]
#
# Used to add aliases to already existing unit definitions.
# Particularly useful when one wants to enrich definitions
# from defaults_en.txt with custom aliases.
#
# Example:
# @alias meter = my_meter
# See also: https://pint.readthedocs.io/en/latest/defining.html
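The `<canonical name> = <relation> [= <symbol>] [= <alias>] ...` grammar above is simple enough to sketch a toy line parser for, splitting on `=` and treating `_` as "no symbol". This is an illustrative sketch only, not Pint's actual parser:

```python
from __future__ import annotations

from typing import NamedTuple


class UnitDef(NamedTuple):
    name: str
    relation: str
    symbol: str | None
    aliases: tuple[str, ...]


def parse_unit_line(line: str) -> UnitDef:
    # "<name> = <relation> [= <symbol>] [= <alias>] ..." with an
    # optional trailing "# comment"; "_" means "no symbol".
    parts = [p.strip() for p in line.split("#", 1)[0].split("=")]
    name, relation, *rest = parts
    symbol = rest[0] if rest and rest[0] != "_" else None
    aliases = tuple(rest[1:])
    return UnitDef(name, relation, symbol, aliases)


print(parse_unit_line("millennium = 1e3 * year = _ = millennia"))
print(parse_unit_line("meter = [length] = m = metre"))
```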
@defaults
group = international
system = mks
@end
#### PREFIXES ####
# decimal prefixes
quecto- = 1e-30 = q-
ronto- = 1e-27 = r-
yocto- = 1e-24 = y-
zepto- = 1e-21 = z-
atto- = 1e-18 = a-
femto- = 1e-15 = f-
pico- = 1e-12 = p-
nano- = 1e-9 = n-
# The micro (U+00B5) and Greek mu (U+03BC) are both valid prefixes,
# and they often use the same glyph.
micro- = 1e-6 = µ- = μ- = u- = mu- = mc-
milli- = 1e-3 = m-
centi- = 1e-2 = c-
deci- = 1e-1 = d-
deca- = 1e+1 = da- = deka-
hecto- = 1e2 = h-
kilo- = 1e3 = k-
mega- = 1e6 = M-
giga- = 1e9 = G-
tera- = 1e12 = T-
peta- = 1e15 = P-
exa- = 1e18 = E-
zetta- = 1e21 = Z-
yotta- = 1e24 = Y-
ronna- = 1e27 = R-
quetta- = 1e30 = Q-
# binary_prefixes
kibi- = 2**10 = Ki-
mebi- = 2**20 = Mi-
gibi- = 2**30 = Gi-
tebi- = 2**40 = Ti-
pebi- = 2**50 = Pi-
exbi- = 2**60 = Ei-
zebi- = 2**70 = Zi-
yobi- = 2**80 = Yi-
# extra_prefixes
semi- = 0.5 = _ = demi-
sesqui- = 1.5
#### BASE UNITS ####
meter = [length] = m = metre
second = [time] = s = sec
ampere = [current] = A = amp
candela = [luminosity] = cd = candle
gram = [mass] = g
mole = [substance] = mol
kelvin = [temperature]; offset: 0 = K = degK = °K = degree_Kelvin = degreeK # older names supported for compatibility
radian = [] = rad
bit = []
count = []
#### CONSTANTS ####
@import constants_en.txt
#### UNITS ####
# Common and less common, grouped by quantity.
# Conversion factors are exact (except when noted),
# although floating-point conversion may introduce inaccuracies
# Angle
turn = 2 * π * radian = _ = revolution = cycle = circle
degree = π / 180 * radian = deg = arcdeg = arcdegree = angular_degree
arcminute = degree / 60 = arcmin = arc_minute = angular_minute
arcsecond = arcminute / 60 = arcsec = arc_second = angular_second
milliarcsecond = 1e-3 * arcsecond = mas
grade = π / 200 * radian = grad = gon
mil = π / 32000 * radian
# Solid angle
steradian = radian ** 2 = sr
square_degree = (π / 180) ** 2 * sr = sq_deg = sqdeg
# Information
baud = bit / second = Bd = bps
byte = 8 * bit = B = octet
# byte = 8 * bit = _ = octet
## NOTE: B (byte) symbol can conflict with Bell
# Ratios
percent = 0.01 = %
permille = 0.001 = ‰
ppm = 1e-6
# Length
angstrom = 1e-10 * meter = Å = ångström = Å
micron = micrometer = µ = μ
fermi = femtometer = fm
light_year = speed_of_light * julian_year = ly = lightyear
astronomical_unit = 149597870700 * meter = au # since Aug 2012
parsec = 1 / tansec * astronomical_unit = pc
nautical_mile = 1852 * meter = nmi
bohr = hbar / (alpha * m_e * c) = a_0 = a0 = bohr_radius = atomic_unit_of_length = a_u_length
x_unit_Cu = K_alpha_Cu_d_220 * d_220 / 1537.4 = Xu_Cu
x_unit_Mo = K_alpha_Mo_d_220 * d_220 / 707.831 = Xu_Mo
angstrom_star = K_alpha_W_d_220 * d_220 / 0.2090100 = Å_star
planck_length = (hbar * gravitational_constant / c ** 3) ** 0.5
# Mass
metric_ton = 1e3 * kilogram = t = tonne
unified_atomic_mass_unit = atomic_mass_constant = u = amu
dalton = atomic_mass_constant = Da
grain = 64.79891 * milligram = gr
gamma_mass = microgram
carat = 200 * milligram = ct = karat
planck_mass = (hbar * c / gravitational_constant) ** 0.5
# Time
minute = 60 * second = min
hour = 60 * minute = h = hr
day = 24 * hour = d
week = 7 * day
fortnight = 2 * week
year = 365.25 * day = a = yr = julian_year
month = year / 12
# decade = 10 * year
## NOTE: decade [time] can conflict with decade [dimensionless]
century = 100 * year = _ = centuries
millennium = 1e3 * year = _ = millennia
eon = 1e9 * year
shake = 1e-8 * second
svedberg = 1e-13 * second
atomic_unit_of_time = hbar / E_h = a_u_time
gregorian_year = 365.2425 * day
sidereal_year = 365.256363004 * day # approximate, as of J2000 epoch
tropical_year = 365.242190402 * day # approximate, as of J2000 epoch
common_year = 365 * day
leap_year = 366 * day
sidereal_day = day / 1.00273790935079524 # approximate
sidereal_month = 27.32166155 * day # approximate
tropical_month = 27.321582 * day # approximate
synodic_month = 29.530589 * day = _ = lunar_month # approximate
planck_time = (hbar * gravitational_constant / c ** 5) ** 0.5
# Temperature
degree_Celsius = kelvin; offset: 273.15 = °C = celsius = degC = degreeC
degree_Rankine = 5 / 9 * kelvin; offset: 0 = °R = rankine = degR = degreeR
degree_Fahrenheit = 5 / 9 * kelvin; offset: 233.15 + 200 / 9 = °F = fahrenheit = degF = degreeF
degree_Reaumur = 4 / 5 * kelvin; offset: 273.15 = °Re = reaumur = degRe = degreeRe = degree_Réaumur = réaumur
atomic_unit_of_temperature = E_h / k = a_u_temp
planck_temperature = (hbar * c ** 5 / gravitational_constant / k ** 2) ** 0.5
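Offset units like these cannot be converted by a bare multiplier; the `offset:` term enters as `kelvin = scale * unit + offset`. For Fahrenheit the values above give K = (5/9)·°F + 233.15 + 200/9, which can be checked by hand (a plain calculation, not Pint's API):

```python
def fahrenheit_to_kelvin(f: float) -> float:
    # scale 5/9, offset 233.15 + 200/9, per the degree_Fahrenheit
    # definition above (233.15 + 200/9 == 459.67 * 5/9)
    return 5 / 9 * f + 233.15 + 200 / 9


def celsius_to_kelvin(c: float) -> float:
    # scale 1, offset 273.15, per the degree_Celsius definition above
    return c + 273.15


print(fahrenheit_to_kelvin(32.0))   # freezing point of water, ~273.15 K
print(fahrenheit_to_kelvin(212.0))  # boiling point of water, ~373.15 K
print(celsius_to_kelvin(100.0))     # ~373.15 K
```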
# Area
[area] = [length] ** 2
are = 100 * meter ** 2
barn = 1e-28 * meter ** 2 = b
darcy = centipoise * centimeter ** 2 / (second * atmosphere)
hectare = 100 * are = ha
# Volume
[volume] = [length] ** 3
liter = decimeter ** 3 = l = L = = litre
cubic_centimeter = centimeter ** 3 = cc
lambda = microliter = λ
stere = meter ** 3
# Frequency
[frequency] = 1 / [time]
hertz = 1 / second = Hz
revolutions_per_minute = revolution / minute = rpm
revolutions_per_second = revolution / second = rps
counts_per_second = count / second = cps
# Wavenumber
[wavenumber] = 1 / [length]
reciprocal_centimeter = 1 / cm = cm_1 = kayser
# Velocity
[velocity] = [length] / [time]
[speed] = [velocity]
knot = nautical_mile / hour = kt = knot_international = international_knot
mile_per_hour = mile / hour = mph = MPH
kilometer_per_hour = kilometer / hour = kph = KPH
kilometer_per_second = kilometer / second = kps
meter_per_second = meter / second = mps
foot_per_second = foot / second = fps
# Volumetric Flow Rate
[volumetric_flow_rate] = [volume] / [time]
sverdrup = 1e6 * meter ** 3 / second = sv
# Acceleration
[acceleration] = [velocity] / [time]
galileo = centimeter / second ** 2 = Gal
# Force
[force] = [mass] * [acceleration]
newton = kilogram * meter / second ** 2 = N
dyne = gram * centimeter / second ** 2 = dyn
force_kilogram = g_0 * kilogram = kgf = kilogram_force = pond
force_gram = g_0 * gram = gf = gram_force
force_metric_ton = g_0 * metric_ton = tf = metric_ton_force = force_t = t_force
atomic_unit_of_force = E_h / a_0 = a_u_force
# Energy
[energy] = [force] * [length]
joule = newton * meter = J
erg = dyne * centimeter
watt_hour = watt * hour = Wh = watthour
electron_volt = e * volt = eV
rydberg = * c * R_inf = Ry
hartree = 2 * rydberg = E_h = Eh = hartree_energy = atomic_unit_of_energy = a_u_energy
calorie = 4.184 * joule = cal = thermochemical_calorie = cal_th
international_calorie = 4.1868 * joule = cal_it = international_steam_table_calorie
fifteen_degree_calorie = 4.1855 * joule = cal_15
british_thermal_unit = 1055.056 * joule = Btu = BTU = Btu_iso
international_british_thermal_unit = 1e3 * pound / kilogram * degR / kelvin * international_calorie = Btu_it
thermochemical_british_thermal_unit = 1e3 * pound / kilogram * degR / kelvin * calorie = Btu_th
quadrillion_Btu = 1e15 * Btu = quad
therm = 1e5 * Btu = thm = EC_therm
US_therm = 1.054804e8 * joule # approximate, no exact definition
ton_TNT = 1e9 * calorie = tTNT
tonne_of_oil_equivalent = 1e10 * international_calorie = toe
atmosphere_liter = atmosphere * liter = atm_l
# Power
[power] = [energy] / [time]
watt = joule / second = W
volt_ampere = volt * ampere = VA
horsepower = 550 * foot * force_pound / second = hp = UK_horsepower = hydraulic_horsepower
boiler_horsepower = 33475 * Btu / hour # unclear which Btu
metric_horsepower = 75 * force_kilogram * meter / second
electrical_horsepower = 746 * watt
refrigeration_ton = 12e3 * Btu / hour = _ = ton_of_refrigeration # approximate, no exact definition
cooling_tower_ton = 1.25 * refrigeration_ton # approximate, no exact definition
standard_liter_per_minute = atmosphere * liter / minute = slpm = slm
conventional_watt_90 = K_J90 ** 2 * R_K90 / (K_J ** 2 * R_K) * watt = W_90
# Momentum
[momentum] = [length] * [mass] / [time]
# Density (as auxiliary for pressure)
[density] = [mass] / [volume]
mercury = 13.5951 * kilogram / liter = Hg = Hg_0C = Hg_32F = conventional_mercury
water = 1.0 * kilogram / liter = H2O = conventional_water
mercury_60F = 13.5568 * kilogram / liter = Hg_60F # approximate
water_39F = 0.999972 * kilogram / liter = water_4C # approximate
water_60F = 0.999001 * kilogram / liter # approximate
# Pressure
[pressure] = [force] / [area]
pascal = newton / meter ** 2 = Pa
barye = dyne / centimeter ** 2 = Ba = barie = barad = barrie = baryd
bar = 1e5 * pascal
technical_atmosphere = kilogram * g_0 / centimeter ** 2 = at
torr = atm / 760
pound_force_per_square_inch = force_pound / inch ** 2 = psi
kip_per_square_inch = kip / inch ** 2 = ksi
millimeter_Hg = millimeter * Hg * g_0 = mmHg = mm_Hg = millimeter_Hg_0C
centimeter_Hg = centimeter * Hg * g_0 = cmHg = cm_Hg = centimeter_Hg_0C
inch_Hg = inch * Hg * g_0 = inHg = in_Hg = inch_Hg_32F
inch_Hg_60F = inch * Hg_60F * g_0
inch_H2O_39F = inch * water_39F * g_0
inch_H2O_60F = inch * water_60F * g_0
foot_H2O = foot * water * g_0 = ftH2O = feet_H2O
centimeter_H2O = centimeter * water * g_0 = cmH2O = cm_H2O
sound_pressure_level = 20e-6 * pascal = SPL
# Torque
[torque] = [force] * [length]
foot_pound = foot * force_pound = ft_lb = footpound
# Viscosity
[viscosity] = [pressure] * [time]
poise = 0.1 * Pa * second = P
reyn = psi * second
# Kinematic viscosity
[kinematic_viscosity] = [area] / [time]
stokes = centimeter ** 2 / second = St
# Fluidity
[fluidity] = 1 / [viscosity]
rhe = 1 / poise
# Amount of substance
particle = 1 / N_A = _ = molec = molecule
# Concentration
[concentration] = [substance] / [volume]
molar = mole / liter = M
# Catalytic activity
[activity] = [substance] / [time]
katal = mole / second = kat
enzyme_unit = micromole / minute = U = enzymeunit
# Entropy
[entropy] = [energy] / [temperature]
clausius = calorie / kelvin = Cl
# Molar entropy
[molar_entropy] = [entropy] / [substance]
entropy_unit = calorie / kelvin / mole = eu
# Radiation
becquerel = counts_per_second = Bq
curie = 3.7e10 * becquerel = Ci
rutherford = 1e6 * becquerel = Rd
gray = joule / kilogram = Gy
sievert = joule / kilogram = Sv
rads = 0.01 * gray
rem = 0.01 * sievert
roentgen = 2.58e-4 * coulomb / kilogram = _ = röntgen # approximate, depends on medium
# Heat transmission
[heat_transmission] = [energy] / [area]
peak_sun_hour = 1e3 * watt_hour / meter ** 2 = PSH
langley = thermochemical_calorie / centimeter ** 2 = Ly
# Luminance
[luminance] = [luminosity] / [area]
nit = candela / meter ** 2
stilb = candela / centimeter ** 2
lambert = 1 / π * candela / centimeter ** 2
# Luminous flux
[luminous_flux] = [luminosity]
lumen = candela * steradian = lm
# Illuminance
[illuminance] = [luminous_flux] / [area]
lux = lumen / meter ** 2 = lx
# Intensity
[intensity] = [power] / [area]
atomic_unit_of_intensity = 0.5 * ε_0 * c * atomic_unit_of_electric_field ** 2 = a_u_intensity
# Current
biot = 10 * ampere = Bi
abampere = biot = abA
atomic_unit_of_current = e / atomic_unit_of_time = a_u_current
mean_international_ampere = mean_international_volt / mean_international_ohm = A_it
US_international_ampere = US_international_volt / US_international_ohm = A_US
conventional_ampere_90 = K_J90 * R_K90 / (K_J * R_K) * ampere = A_90
planck_current = (c ** 6 / gravitational_constant / k_C) ** 0.5
# Charge
[charge] = [current] * [time]
coulomb = ampere * second = C
abcoulomb = 10 * C = abC
faraday = e * N_A * mole
conventional_coulomb_90 = K_J90 * R_K90 / (K_J * R_K) * coulomb = C_90
ampere_hour = ampere * hour = Ah
# Electric potential
[electric_potential] = [energy] / [charge]
volt = joule / coulomb = V
abvolt = 1e-8 * volt = abV
mean_international_volt = 1.00034 * volt = V_it # approximate
US_international_volt = 1.00033 * volt = V_US # approximate
conventional_volt_90 = K_J90 / K_J * volt = V_90
# Electric field
[electric_field] = [electric_potential] / [length]
atomic_unit_of_electric_field = e * k_C / a_0 ** 2 = a_u_electric_field
# Electric displacement field
[electric_displacement_field] = [charge] / [area]
# Reduced electric field
[reduced_electric_field] = [electric_field] * [area]
townsend = 1e-21 * V * m^2 = Td
# Resistance
[resistance] = [electric_potential] / [current]
ohm = volt / ampere = Ω
abohm = 1e-9 * ohm = abΩ
mean_international_ohm = 1.00049 * ohm = Ω_it = ohm_it # approximate
US_international_ohm = 1.000495 * ohm = Ω_US = ohm_US # approximate
conventional_ohm_90 = R_K / R_K90 * ohm = Ω_90 = ohm_90
# Resistivity
[resistivity] = [resistance] * [length]
# Conductance
[conductance] = [current] / [electric_potential]
siemens = ampere / volt = S = mho
absiemens = 1e9 * siemens = abS = abmho
# Capacitance
[capacitance] = [charge] / [electric_potential]
farad = coulomb / volt = F
abfarad = 1e9 * farad = abF
conventional_farad_90 = R_K90 / R_K * farad = F_90
# Magnetic flux
[magnetic_flux] = [electric_potential] * [time]
weber = volt * second = Wb
unit_pole = µ_0 * biot * centimeter
# Inductance
[inductance] = [magnetic_flux] / [current]
henry = weber / ampere = H
abhenry = 1e-9 * henry = abH
conventional_henry_90 = R_K / R_K90 * henry = H_90
# Magnetic field
[magnetic_field] = [magnetic_flux] / [area]
tesla = weber / meter ** 2 = T
gamma = 1e-9 * tesla = γ
# Magnetomotive force
[magnetomotive_force] = [current]
ampere_turn = ampere = At
biot_turn = biot
gilbert = 1 / (4 * π) * biot_turn = Gb
# Magnetic field strength
[magnetic_field_strength] = [current] / [length]
# Electric dipole moment
[electric_dipole] = [charge] * [length]
debye = 1e-9 / ζ * coulomb * angstrom = D # formally 1 D = 1e-10 Fr*Å, but we generally want to use it outside the Gaussian context
# Electric quadrupole moment
[electric_quadrupole] = [charge] * [area]
buckingham = debye * angstrom
# Magnetic dipole moment
[magnetic_dipole] = [current] * [area]
bohr_magneton = e * hbar / (2 * m_e) = µ_B = mu_B
nuclear_magneton = e * hbar / (2 * m_p) = µ_N = mu_N
# Refractive index
[refractive_index] = []
refractive_index_unit = [] = RIU
# Logarithmic Unit Definition
# Unit = scale; logbase; logfactor
# x_dB = [logfactor] * log( x_lin / [scale] ) / log( [logbase] )
# Logarithmic Units of dimensionless quantity: [ https://en.wikipedia.org/wiki/Level_(logarithmic_quantity) ]
decibelwatt = watt; logbase: 10; logfactor: 10 = dBW
decibelmilliwatt = 1e-3 watt; logbase: 10; logfactor: 10 = dBm
decibelmicrowatt = 1e-6 watt; logbase: 10; logfactor: 10 = dBu
decibel = 1 ; logbase: 10; logfactor: 10 = dB
# bell = 1 ; logbase: 10; logfactor: 1 = B
## NOTE: B (Bell) symbol conflicts with byte
decade = 1 ; logbase: 10; logfactor: 1
## NOTE: decade [time] can conflict with decade [dimensionless]
octave = 1 ; logbase: 2; logfactor: 1 = oct
neper = 1 ; logbase: 2.71828182845904523536028747135266249775724709369995; logfactor: 0.5 = Np
# neper = 1 ; logbase: eulers_number; logfactor: 0.5 = Np
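The `x_dB = logfactor * log(x_lin / scale) / log(logbase)` relation above is easy to check for dBm (scale 1 mW, logbase 10, logfactor 10). This is a hand computation, not Pint's converter:

```python
import math


def to_dbm(power_watt: float) -> float:
    # decibelmilliwatt per the definition above:
    # scale = 1e-3 W, logbase = 10, logfactor = 10
    return 10 * math.log(power_watt / 1e-3) / math.log(10)


print(to_dbm(1e-3))  # 1 mW  ->  ~0 dBm
print(to_dbm(1.0))   # 1 W   -> ~30 dBm
```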
#### UNIT GROUPS ####
# Mostly for length, area, volume, mass, force
# (customary or specialized units)
@group USCSLengthInternational
thou = 1e-3 * inch = th = mil_length
inch = yard / 36 = in = international_inch = inches = international_inches
hand = 4 * inch
foot = yard / 3 = ft = international_foot = feet = international_feet
yard = 0.9144 * meter = yd = international_yard # since Jul 1959
mile = 1760 * yard = mi = international_mile
circular_mil = π / 4 * mil_length ** 2 = cmil
square_inch = inch ** 2 = sq_in = square_inches
square_foot = foot ** 2 = sq_ft = square_feet
square_yard = yard ** 2 = sq_yd
square_mile = mile ** 2 = sq_mi
cubic_inch = in ** 3 = cu_in
cubic_foot = ft ** 3 = cu_ft = cubic_feet
cubic_yard = yd ** 3 = cu_yd
@end
@group USCSLengthSurvey
link = 1e-2 * chain = li = survey_link
survey_foot = 1200 / 3937 * meter = sft
fathom = 6 * survey_foot
rod = 16.5 * survey_foot = rd = pole = perch
chain = 4 * rod
furlong = 40 * rod = fur
cables_length = 120 * fathom
survey_mile = 5280 * survey_foot = smi = us_statute_mile
league = 3 * survey_mile
square_rod = rod ** 2 = sq_rod = sq_pole = sq_perch
acre = 10 * chain ** 2
square_survey_mile = survey_mile ** 2 = _ = section
square_league = league ** 2
acre_foot = acre * survey_foot = _ = acre_feet
@end
@group USCSDryVolume
dry_pint = bushel / 64 = dpi = US_dry_pint
dry_quart = bushel / 32 = dqt = US_dry_quart
dry_gallon = bushel / 8 = dgal = US_dry_gallon
peck = bushel / 4 = pk
bushel = 2150.42 cubic_inch = bu
dry_barrel = 7056 cubic_inch = _ = US_dry_barrel
board_foot = ft * ft * in = FBM = board_feet = BF = BDFT = super_foot = superficial_foot = super_feet = superficial_feet
@end
@group USCSLiquidVolume
minim = pint / 7680
fluid_dram = pint / 128 = fldr = fluidram = US_fluid_dram = US_liquid_dram
fluid_ounce = pint / 16 = floz = US_fluid_ounce = US_liquid_ounce
gill = pint / 4 = gi = liquid_gill = US_liquid_gill
pint = quart / 2 = pt = liquid_pint = US_pint
fifth = gallon / 5 = _ = US_liquid_fifth
quart = gallon / 4 = qt = liquid_quart = US_liquid_quart
gallon = 231 * cubic_inch = gal = liquid_gallon = US_liquid_gallon
@end
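The liquid-volume chain above bottoms out at `gallon = 231 * cubic_inch`, so every unit in the group reduces to cubic inches. A minimal stdlib-only sketch of that reduction (illustrative, not pint itself):

```python
# All values in cubic inches, following the definitions above.
GALLON = 231.0            # gallon = 231 * cubic_inch
QUART = GALLON / 4        # quart = gallon / 4
PINT = QUART / 2          # pint = quart / 2
FLUID_OUNCE = PINT / 16   # fluid_ounce = pint / 16
print(FLUID_OUNCE)  # 1.8046875
```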
@group USCSVolumeOther
teaspoon = fluid_ounce / 6 = tsp
tablespoon = fluid_ounce / 2 = tbsp
shot = 3 * tablespoon = jig = US_shot
cup = pint / 2 = cp = liquid_cup = US_liquid_cup
barrel = 31.5 * gallon = bbl
oil_barrel = 42 * gallon = oil_bbl
beer_barrel = 31 * gallon = beer_bbl
hogshead = 63 * gallon
@end
@group Avoirdupois
dram = pound / 256 = dr = avoirdupois_dram = avdp_dram = drachm
ounce = pound / 16 = oz = avoirdupois_ounce = avdp_ounce
pound = 7e3 * grain = lb = avoirdupois_pound = avdp_pound
stone = 14 * pound
quarter = 2 * stone
bag = 94 * pound
hundredweight = 100 * pound = cwt = short_hundredweight
long_hundredweight = 112 * pound
ton = 2e3 * pound = _ = short_ton
long_ton = 2240 * pound
slug = g_0 * pound * second ** 2 / foot
slinch = g_0 * pound * second ** 2 / inch = blob = slugette
force_ounce = g_0 * ounce = ozf = ounce_force
force_pound = g_0 * pound = lbf = pound_force
force_ton = g_0 * ton = _ = ton_force = force_short_ton = short_ton_force
force_long_ton = g_0 * long_ton = _ = long_ton_force
kip = 1e3 * force_pound
poundal = pound * foot / second ** 2 = pdl
@end
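The `g_0`-based definitions above turn mass units into force units. A quick sanity check with hard-coded SI values (an illustrative sketch, not pint):

```python
g_0 = 9.80665            # standard gravity in m/s**2, as used by force_pound above
POUND_KG = 0.45359237    # avoirdupois pound in kg (exact by definition)
lbf_newtons = g_0 * POUND_KG   # force_pound = g_0 * pound
print(round(lbf_newtons, 6))   # 4.448222
```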
@group AvoirdupoisUK using Avoirdupois
UK_hundredweight = long_hundredweight = UK_cwt
UK_ton = long_ton
UK_force_ton = force_long_ton = _ = UK_ton_force
@end
@group AvoirdupoisUS using Avoirdupois
US_hundredweight = hundredweight = US_cwt
US_ton = ton
US_force_ton = force_ton = _ = US_ton_force
@end
@group Troy
pennyweight = 24 * grain = dwt
troy_ounce = 480 * grain = toz = ozt
troy_pound = 12 * troy_ounce = tlb = lbt
@end
@group Apothecary
scruple = 20 * grain
apothecary_dram = 3 * scruple = ap_dr
apothecary_ounce = 8 * apothecary_dram = ap_oz
apothecary_pound = 12 * apothecary_ounce = ap_lb
@end
@group ImperialVolume
imperial_minim = imperial_fluid_ounce / 480
imperial_fluid_scruple = imperial_fluid_ounce / 24
imperial_fluid_drachm = imperial_fluid_ounce / 8 = imperial_fldr = imperial_fluid_dram
imperial_fluid_ounce = imperial_pint / 20 = imperial_floz = UK_fluid_ounce
imperial_gill = imperial_pint / 4 = imperial_gi = UK_gill
imperial_cup = imperial_pint / 2 = imperial_cp = UK_cup
imperial_pint = imperial_gallon / 8 = imperial_pt = UK_pint
imperial_quart = imperial_gallon / 4 = imperial_qt = UK_quart
imperial_gallon = 4.54609 * liter = imperial_gal = UK_gallon
imperial_peck = 2 * imperial_gallon = imperial_pk = UK_pk
imperial_bushel = 8 * imperial_gallon = imperial_bu = UK_bushel
imperial_barrel = 36 * imperial_gallon = imperial_bbl = UK_bbl
@end
@group Printer
pica = inch / 6 = _ = printers_pica
point = pica / 12 = pp = printers_point = big_point = bp
didot = 1 / 2660 * m
cicero = 12 * didot
tex_point = inch / 72.27
tex_pica = 12 * tex_point
tex_didot = 1238 / 1157 * tex_point
tex_cicero = 12 * tex_didot
scaled_point = tex_point / 65536
css_pixel = inch / 96 = px
pixel = [printing_unit] = _ = dot = pel = picture_element
pixels_per_centimeter = pixel / cm = PPCM
pixels_per_inch = pixel / inch = dots_per_inch = PPI = ppi = DPI = printers_dpi
bits_per_pixel = bit / pixel = bpp
@end
@group Textile
tex = gram / kilometer = Tt
dtex = decitex
denier = gram / (9 * kilometer) = den
jute = pound / (14400 * yard) = Tj
aberdeen = jute = Ta
RKM = gf / tex
number_english = 840 * yard / pound = Ne = NeC = ECC
number_meter = kilometer / kilogram = Nm
@end
#### CGS ELECTROMAGNETIC UNITS ####
# === Gaussian system of units ===
@group Gaussian
franklin = erg ** 0.5 * centimeter ** 0.5 = Fr = statcoulomb = statC = esu
statvolt = erg / franklin = statV
statampere = franklin / second = statA
gauss = dyne / franklin = G
maxwell = gauss * centimeter ** 2 = Mx
oersted = dyne / maxwell = Oe = ørsted
statohm = statvolt / statampere = statΩ
statfarad = franklin / statvolt = statF
statmho = statampere / statvolt
@end
# Note this system is not commensurate with SI, as ε_0 and µ_0 disappear;
# some quantities with different dimensions in SI have the same
# dimensions in the Gaussian system (e.g. [Mx] = [Fr], but [Wb] != [C]),
# and therefore the conversion factors depend on the context (not in the pint sense)
[gaussian_charge] = [length] ** 1.5 * [mass] ** 0.5 / [time]
[gaussian_current] = [gaussian_charge] / [time]
[gaussian_electric_potential] = [gaussian_charge] / [length]
[gaussian_electric_field] = [gaussian_electric_potential] / [length]
[gaussian_electric_displacement_field] = [gaussian_charge] / [area]
[gaussian_electric_flux] = [gaussian_charge]
[gaussian_electric_dipole] = [gaussian_charge] * [length]
[gaussian_electric_quadrupole] = [gaussian_charge] * [area]
[gaussian_magnetic_field] = [force] / [gaussian_charge]
[gaussian_magnetic_field_strength] = [gaussian_magnetic_field]
[gaussian_magnetic_flux] = [gaussian_magnetic_field] * [area]
[gaussian_magnetic_dipole] = [energy] / [gaussian_magnetic_field]
[gaussian_resistance] = [gaussian_electric_potential] / [gaussian_current]
[gaussian_resistivity] = [gaussian_resistance] * [length]
[gaussian_capacitance] = [gaussian_charge] / [gaussian_electric_potential]
[gaussian_inductance] = [gaussian_electric_potential] * [time] / [gaussian_current]
[gaussian_conductance] = [gaussian_current] / [gaussian_electric_potential]
@context Gaussian = Gau
[gaussian_charge] -> [charge]: value / k_C ** 0.5
[charge] -> [gaussian_charge]: value * k_C ** 0.5
[gaussian_current] -> [current]: value / k_C ** 0.5
[current] -> [gaussian_current]: value * k_C ** 0.5
[gaussian_electric_potential] -> [electric_potential]: value * k_C ** 0.5
[electric_potential] -> [gaussian_electric_potential]: value / k_C ** 0.5
[gaussian_electric_field] -> [electric_field]: value * k_C ** 0.5
[electric_field] -> [gaussian_electric_field]: value / k_C ** 0.5
[gaussian_electric_displacement_field] -> [electric_displacement_field]: value / (4 * π / ε_0) ** 0.5
[electric_displacement_field] -> [gaussian_electric_displacement_field]: value * (4 * π / ε_0) ** 0.5
[gaussian_electric_dipole] -> [electric_dipole]: value / k_C ** 0.5
[electric_dipole] -> [gaussian_electric_dipole]: value * k_C ** 0.5
[gaussian_electric_quadrupole] -> [electric_quadrupole]: value / k_C ** 0.5
[electric_quadrupole] -> [gaussian_electric_quadrupole]: value * k_C ** 0.5
[gaussian_magnetic_field] -> [magnetic_field]: value / (4 * π / µ_0) ** 0.5
[magnetic_field] -> [gaussian_magnetic_field]: value * (4 * π / µ_0) ** 0.5
[gaussian_magnetic_flux] -> [magnetic_flux]: value / (4 * π / µ_0) ** 0.5
[magnetic_flux] -> [gaussian_magnetic_flux]: value * (4 * π / µ_0) ** 0.5
[gaussian_magnetic_field_strength] -> [magnetic_field_strength]: value / (4 * π * µ_0) ** 0.5
[magnetic_field_strength] -> [gaussian_magnetic_field_strength]: value * (4 * π * µ_0) ** 0.5
[gaussian_magnetic_dipole] -> [magnetic_dipole]: value * (4 * π / µ_0) ** 0.5
[magnetic_dipole] -> [gaussian_magnetic_dipole]: value / (4 * π / µ_0) ** 0.5
[gaussian_resistance] -> [resistance]: value * k_C
[resistance] -> [gaussian_resistance]: value / k_C
[gaussian_resistivity] -> [resistivity]: value * k_C
[resistivity] -> [gaussian_resistivity]: value / k_C
[gaussian_capacitance] -> [capacitance]: value / k_C
[capacitance] -> [gaussian_capacitance]: value * k_C
[gaussian_inductance] -> [inductance]: value * k_C
[inductance] -> [gaussian_inductance]: value / k_C
[gaussian_conductance] -> [conductance]: value / k_C
[conductance] -> [gaussian_conductance]: value * k_C
@end
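The context above converts Gaussian charge to SI charge by dividing by `k_C ** 0.5`. A numeric check in plain Python (illustrative; `k_C` is the Coulomb constant hard-coded in SI units):

```python
import math

k_C = 8.9875517873681764e9          # Coulomb constant, N*m**2/C**2
# 1 franklin = erg**0.5 * cm**0.5, expressed in SI base units:
franklin_si = math.sqrt(1e-7 * 1e-2)
# [gaussian_charge] -> [charge]: value / k_C ** 0.5
coulombs = franklin_si / math.sqrt(k_C)
print(coulombs)  # ~3.34e-10, the familiar statcoulomb-to-coulomb factor
```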
# === ESU system of units ===
# (where different from Gaussian)
# See note for Gaussian system too
@group ESU using Gaussian
statweber = statvolt * second = statWb
stattesla = statweber / centimeter ** 2 = statT
stathenry = statweber / statampere = statH
@end
[esu_charge] = [length] ** 1.5 * [mass] ** 0.5 / [time]
[esu_current] = [esu_charge] / [time]
[esu_electric_potential] = [esu_charge] / [length]
[esu_magnetic_flux] = [esu_electric_potential] * [time]
[esu_magnetic_field] = [esu_magnetic_flux] / [area]
[esu_magnetic_field_strength] = [esu_current] / [length]
[esu_magnetic_dipole] = [esu_current] * [area]
@context ESU = esu
[esu_magnetic_field] -> [magnetic_field]: value * k_C ** 0.5
[magnetic_field] -> [esu_magnetic_field]: value / k_C ** 0.5
[esu_magnetic_flux] -> [magnetic_flux]: value * k_C ** 0.5
[magnetic_flux] -> [esu_magnetic_flux]: value / k_C ** 0.5
[esu_magnetic_field_strength] -> [magnetic_field_strength]: value / (4 * π / ε_0) ** 0.5
[magnetic_field_strength] -> [esu_magnetic_field_strength]: value * (4 * π / ε_0) ** 0.5
[esu_magnetic_dipole] -> [magnetic_dipole]: value / k_C ** 0.5
[magnetic_dipole] -> [esu_magnetic_dipole]: value * k_C ** 0.5
@end
#### CONVERSION CONTEXTS ####
@context(n=1) spectroscopy = sp
# n index of refraction of the medium.
[length] <-> [frequency]: speed_of_light / n / value
[frequency] -> [energy]: planck_constant * value
[energy] -> [frequency]: value / planck_constant
# allow wavenumber / kayser
[wavenumber] <-> [length]: 1 / value
@end
@context boltzmann
[temperature] -> [energy]: boltzmann_constant * value
[energy] -> [temperature]: value / boltzmann_constant
@end
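The boltzmann context is just multiplication or division by k_B; a plain-Python sketch with the exact SI value:

```python
boltzmann_constant = 1.380649e-23   # J/K, exact since the 2019 SI redefinition
energy_J = boltzmann_constant * 300.0          # [temperature] -> [energy]
temperature_K = energy_J / boltzmann_constant  # [energy] -> [temperature]
print(energy_J)  # ~4.14e-21 J for room temperature
```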
@context energy
[energy] -> [energy] / [substance]: value * N_A
[energy] / [substance] -> [energy]: value / N_A
[energy] -> [mass]: value / c ** 2
[mass] -> [energy]: value * c ** 2
@end
@context(mw=0,volume=0,solvent_mass=0) chemistry = chem
# mw is the molecular weight of the species
# volume is the volume of the solution
# solvent_mass is the mass of solvent in the solution
# moles -> mass require the molecular weight
[substance] -> [mass]: value * mw
[mass] -> [substance]: value / mw
# moles/volume -> mass/volume and moles/mass -> mass/mass
# require the molecular weight
[substance] / [volume] -> [mass] / [volume]: value * mw
[mass] / [volume] -> [substance] / [volume]: value / mw
[substance] / [mass] -> [mass] / [mass]: value * mw
[mass] / [mass] -> [substance] / [mass]: value / mw
# moles/volume -> moles requires the solution volume
[substance] / [volume] -> [substance]: value * volume
[substance] -> [substance] / [volume]: value / volume
# moles/mass -> moles requires the solvent (usually water) mass
[substance] / [mass] -> [substance]: value * solvent_mass
[substance] -> [substance] / [mass]: value / solvent_mass
# moles/mass -> moles/volume require the solvent mass and the volume
[substance] / [mass] -> [substance]/[volume]: value * solvent_mass / volume
[substance] / [volume] -> [substance] / [mass]: value / solvent_mass * volume
@end
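Each rule in the chemistry context is a simple scaling by one of the parameters, e.g. moles to grams via the molecular weight (a sketch; 18.015 g/mol is an illustrative value for water):

```python
mw = 18.015          # g/mol, illustrative molecular weight (water)
moles = 0.5
grams = moles * mw   # [substance] -> [mass]: value * mw
back = grams / mw    # [mass] -> [substance]: value / mw
print(grams)  # 9.0075
```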
@context textile
# Allow switching between Direct count system (i.e. tex) and
# Indirect count system (i.e. Ne, Nm)
[mass] / [length] <-> [length] / [mass]: 1 / value
@end
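The reciprocal rule above is what links direct counts (tex, g/km) to indirect counts (Nm, km/kg). Worked by hand in plain Python:

```python
tex_value = 25.0                   # direct count: grams per kilometer
kg_per_m = tex_value * 1e-3 / 1e3  # same linear density in kg/m
m_per_kg = 1.0 / kg_per_m          # [mass]/[length] <-> [length]/[mass]: 1/value
Nm = m_per_kg / 1e3                # indirect count: kilometers per kilogram
print(Nm)  # 40.0
```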
#### SYSTEMS OF UNITS ####
@system SI
second
meter
kilogram
ampere
kelvin
mole
candela
@end
@system mks using international
meter
kilogram
second
@end
@system cgs using international, Gaussian, ESU
centimeter
gram
second
@end
@system atomic using international
# based on unit m_e, e, hbar, k_C, k
bohr: meter
electron_mass: gram
atomic_unit_of_time: second
atomic_unit_of_current: ampere
atomic_unit_of_temperature: kelvin
@end
@system Planck using international
# based on unit c, gravitational_constant, hbar, k_C, k
planck_length: meter
planck_mass: gram
planck_time: second
planck_current: ampere
planck_temperature: kelvin
@end
@system imperial using ImperialVolume, USCSLengthInternational, AvoirdupoisUK
yard
pound
@end
@system US using USCSLiquidVolume, USCSDryVolume, USCSVolumeOther, USCSLengthInternational, USCSLengthSurvey, AvoirdupoisUS
yard
pound
@end
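In the `imperial` and `US` systems above, yard and pound replace meter and kilogram as base units, with `yard = 0.9144 * meter` as the exact bridge. A hand-rolled sketch of that conversion:

```python
METERS_PER_YARD = 0.9144   # exact, per the yard definition above

def meters_to_yards(meters: float) -> float:
    return meters / METERS_PER_YARD

print(round(meters_to_yards(100.0), 3))  # 109.361
```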


@@ -0,0 +1,47 @@
"""
pint.definitions
~~~~~~~~~~~~~~~~
Kept for backwards compatibility
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import flexparser as fp
from . import errors
from .delegates import ParserConfig, txt_defparser
class Definition:
"""This is kept for backwards compatibility"""
@classmethod
def from_string(cls, input_string: str, non_int_type: type = float) -> Definition:
"""Parse a string into a definition object.
Parameters
----------
input_string
Single line string.
non_int_type
Numerical type used for non integer values.
Raises
------
DefinitionSyntaxError
If a syntax error was found.
"""
cfg = ParserConfig(non_int_type)
parser = txt_defparser.DefParser(cfg, None)
pp = parser.parse_string(input_string)
for definition in parser.iter_parsed_project(pp):
if isinstance(definition, Exception):
raise errors.DefinitionSyntaxError(str(definition))
if not isinstance(definition, (fp.BOS, fp.BOF)):
return definition
# TODO: What should we do in this return path?
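`Definition.from_string` feeds a single `name = value = alias…` line through the flexparser-based parser. For intuition only, a toy stdlib parser of that line shape (not the real implementation):

```python
def parse_definition_line(line: str):
    """Toy split of 'name = value = alias1 = alias2'; '_' aliases are placeholders."""
    name, value, *aliases = (part.strip() for part in line.split("="))
    return name, value, [a for a in aliases if a != "_"]

print(parse_definition_line("mile = 1760 * yard = mi = international_mile"))
# ('mile', '1760 * yard', ['mi', 'international_mile'])
```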


@@ -0,0 +1,17 @@
"""
pint.delegates
~~~~~~~~~~~~~~
Defines methods and classes to handle autonomous tasks.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from . import txt_defparser
from .base_defparser import ParserConfig, build_disk_cache_class
from .formatter import Formatter
__all__ = ["txt_defparser", "ParserConfig", "build_disk_cache_class", "Formatter"]


@@ -0,0 +1,111 @@
"""
pint.delegates.base_defparser
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Common class and function for all parsers.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
import itertools
import numbers
import pathlib
from dataclasses import dataclass
from typing import Any
import flexcache as fc
import flexparser as fp
from datasette.vendored.pint import errors
from datasette.vendored.pint.facets.plain.definitions import NotNumeric
from datasette.vendored.pint.util import ParserHelper, UnitsContainer
@dataclass(frozen=True)
class ParserConfig:
"""Configuration used by the parser in Pint."""
#: Indicates the output type of non integer numbers.
non_int_type: type[numbers.Number] = float
def to_scaled_units_container(self, s: str):
return ParserHelper.from_string(s, self.non_int_type)
def to_units_container(self, s: str):
v = self.to_scaled_units_container(s)
if v.scale != 1:
raise errors.UnexpectedScaleInContainer(str(v.scale))
return UnitsContainer(v)
def to_dimension_container(self, s: str):
v = self.to_units_container(s)
invalid = tuple(itertools.filterfalse(errors.is_valid_dimension_name, v.keys()))
if invalid:
raise errors.DefinitionSyntaxError(
f"Cannot build a dimension container with {', '.join(invalid)} that "
+ errors.MSG_INVALID_DIMENSION_NAME
)
return v
def to_number(self, s: str) -> numbers.Number:
"""Try parse a string into a number (without using eval).
The string can contain a number or a simple equation (3 + 4)
Raises
------
NotNumeric
If the string cannot be parsed as a number.
"""
val = self.to_scaled_units_container(s)
if len(val):
raise NotNumeric(s)
return val.scale
@dataclass(frozen=True)
class PintParsedStatement(fp.ParsedStatement[ParserConfig]):
"""A parsed statement for pint, specialized in the actual config."""
@functools.lru_cache
def build_disk_cache_class(chosen_non_int_type: type):
"""Build disk cache class, taking into account the non_int_type."""
@dataclass(frozen=True)
class PintHeader(fc.InvalidateByExist, fc.NameByFields, fc.BasicPythonHeader):
from .. import __version__
pint_version: str = __version__
non_int_type: str = chosen_non_int_type.__qualname__
@dataclass(frozen=True)
class PathHeader(fc.NameByFileContent, PintHeader):
pass
@dataclass(frozen=True)
class ParsedProjecHeader(fc.NameByHashIter, PintHeader):
@classmethod
def from_parsed_project(
cls, pp: fp.ParsedProject[Any, ParserConfig], reader_id: str
):
tmp = (
f"{stmt.content_hash.algorithm_name}:{stmt.content_hash.hexdigest}"
for stmt in pp.iter_statements()
if isinstance(stmt, fp.BOS)
)
return cls(tuple(tmp), reader_id)
class PintDiskCache(fc.DiskCache):
_header_classes = {
pathlib.Path: PathHeader,
str: PathHeader.from_string,
fp.ParsedProject: ParsedProjecHeader.from_parsed_project,
}
return PintDiskCache
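`build_disk_cache_class` derives cache names from file content plus the pint version and the non-int type, so a cached parse is invalidated when any of them changes. The naming idea, sketched with hashlib (a hypothetical helper, not the flexcache API):

```python
import hashlib

def cache_name(content: bytes, pint_version: str, non_int_type: str) -> str:
    """Hypothetical sketch: hash the content together with the header fields."""
    h = hashlib.sha256()
    h.update(content)
    h.update(pint_version.encode())
    h.update(non_int_type.encode())
    return h.hexdigest()[:16]

print(cache_name(b"mile = 1760 * yard", "0.23", "float"))
```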


@@ -0,0 +1,27 @@
"""
pint.delegates.formatter
~~~~~~~~~~~~~~~~~~~~~~~~
Easy to replace and extend string formatting.
See pint.delegates.formatter.plain.DefaultFormatter for a
description of a formatter.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .full import FullFormatter
class Formatter(FullFormatter):
"""Default Pint Formatter"""
pass
__all__ = [
"Formatter",
]


@@ -0,0 +1,327 @@
"""
pint.delegates.formatter._compound_unit_helpers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Convenient functions to help organize compound units.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
import locale
from collections.abc import Callable, Iterable
from functools import partial
from itertools import filterfalse, tee
from typing import (
TYPE_CHECKING,
Any,
Literal,
TypedDict,
TypeVar,
)
from ...compat import TypeAlias, babel_parse
from ...util import UnitsContainer
T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")
W = TypeVar("W")
if TYPE_CHECKING:
from ...compat import Locale, Number
from ...facets.plain import PlainUnit
from ...registry import UnitRegistry
class SortKwds(TypedDict):
registry: UnitRegistry
SortFunc: TypeAlias = Callable[
[Iterable[tuple[str, Any, str]], Any], Iterable[tuple[str, Any, str]]
]
class BabelKwds(TypedDict):
"""Babel related keywords used in formatters."""
use_plural: bool
length: Literal["short", "long", "narrow"] | None
locale: Locale | str | None
def partition(
predicate: Callable[[T], bool], iterable: Iterable[T]
) -> tuple[filterfalse[T], filter[T]]:
"""Partition entries into false entries and true entries.
If *predicate* is slow, consider wrapping it with functools.lru_cache().
"""
# partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9
t1, t2 = tee(iterable)
return filterfalse(predicate, t1), filter(predicate, t2)
def localize_per(
length: Literal["short", "long", "narrow"] = "long",
locale: Locale | str | None = locale.LC_NUMERIC,
default: str | None = None,
) -> str:
"""Localized singular and plural form of a unit.
THIS IS TAKEN FROM BABEL format_unit. But
- No magnitude is returned in the string.
- If the unit is not found, the default is given.
- If the default is None, then the same value is given.
"""
locale = babel_parse(locale)
patterns = locale._data["compound_unit_patterns"].get("per", None)
if patterns is None:
return default or "{}/{}"
patterns = patterns.get(length, None)
if patterns is None:
return default or "{}/{}"
# babel 2.8
if isinstance(patterns, str):
return patterns
# babel 2.15
return patterns.get("compound", default or "{}/{}")
@functools.lru_cache
def localize_unit_name(
measurement_unit: str,
use_plural: bool,
length: Literal["short", "long", "narrow"] = "long",
locale: Locale | str | None = locale.LC_NUMERIC,
default: str | None = None,
) -> str:
"""Localized singular and plural form of a unit.
THIS IS TAKEN FROM BABEL format_unit. But
- No magnitude is returned in the string.
- If the unit is not found, the default is given.
- If the default is None, then the same value is given.
"""
locale = babel_parse(locale)
from babel.units import _find_unit_pattern, get_unit_name
q_unit = _find_unit_pattern(measurement_unit, locale=locale)
if not q_unit:
return measurement_unit
unit_patterns = locale._data["unit_patterns"][q_unit].get(length, {})
if use_plural:
grammatical_number = "other"
else:
grammatical_number = "one"
if grammatical_number in unit_patterns:
return unit_patterns[grammatical_number].format("").replace("\xa0", "").strip()
if default is not None:
return default
# Fall back to a somewhat bad representation.
# nb: This is marked as no-cover, as the current CLDR seemingly has no way for this to happen.
fallback_name = get_unit_name(
measurement_unit, length=length, locale=locale
) # pragma: no cover
return f"{fallback_name or measurement_unit}" # pragma: no cover
def extract2(element: tuple[str, T, str]) -> tuple[str, T]:
"""Extract display name and exponent from a tuple containing display name, exponent and unit name."""
return element[:2]
def to_name_exponent_name(element: tuple[str, T]) -> tuple[str, T, str]:
"""Convert unit name and exponent to unit name as display name, exponent and unit name."""
# TODO: write a generic typing
return element + (element[0],)
def to_symbol_exponent_name(
el: tuple[str, T], registry: UnitRegistry
) -> tuple[str, T, str]:
"""Convert unit name and exponent to unit symbol as display name, exponent and unit name."""
return registry._get_symbol(el[0]), el[1], el[0]
def localize_display_exponent_name(
element: tuple[str, T, str],
use_plural: bool,
length: Literal["short", "long", "narrow"] = "long",
locale: Locale | str | None = locale.LC_NUMERIC,
default: str | None = None,
) -> tuple[str, T, str]:
"""Localize display name in a triplet display name, exponent and unit name."""
return (
localize_unit_name(
element[2], use_plural, length, locale, default or element[0]
),
element[1],
element[2],
)
#####################
# Sorting functions
#####################
def sort_by_unit_name(
items: Iterable[tuple[str, Number, str]], _registry: UnitRegistry | None
) -> Iterable[tuple[str, Number, str]]:
return sorted(items, key=lambda el: el[2])
def sort_by_display_name(
items: Iterable[tuple[str, Number, str]], _registry: UnitRegistry | None
) -> Iterable[tuple[str, Number, str]]:
return sorted(items)
def sort_by_dimensionality(
items: Iterable[tuple[str, Number, str]], registry: UnitRegistry | None
) -> Iterable[tuple[str, Number, str]]:
"""Sort a list of units by dimensional order (from `registry.formatter.dim_order`).
Parameters
----------
items : iterable
an iterable of (display name, exponent, unit name) tuples.
registry : UnitRegistry | None
the registry to use for looking up the dimensions of each unit.
Returns
-------
list
the list of units sorted by most significant dimension first.
Raises
------
KeyError
If unit cannot be found in the registry.
"""
if registry is None:
return items
dim_order = registry.formatter.dim_order
def sort_key(item: tuple[str, Number, str]):
_display_name, _unit_exponent, unit_name = item
cname = registry.get_name(unit_name)
cname_dims = registry.get_dimensionality(cname) or {"[]": None}
for cname_dim in cname_dims:
if cname_dim in dim_order:
return dim_order.index(cname_dim), cname
raise KeyError(f"Unit {unit_name} (aka {cname}) has no recognized dimensions")
return sorted(items, key=sort_key)
def prepare_compount_unit(
unit: PlainUnit | UnitsContainer | Iterable[tuple[str, T]],
spec: str = "",
sort_func: SortFunc | None = None,
use_plural: bool = True,
length: Literal["short", "long", "narrow"] | None = None,
locale: Locale | str | None = None,
as_ratio: bool = True,
registry: UnitRegistry | None = None,
) -> tuple[Iterable[tuple[str, T]], Iterable[tuple[str, T]]]:
"""Format compound unit into unit container given
an spec and locale.
Returns
-------
iterable of display name, exponent, canonical name
"""
if isinstance(unit, UnitsContainer):
out = unit.items()
elif hasattr(unit, "_units"):
out = unit._units.items()
else:
out = unit
# out: unit_name, unit_exponent
if len(out) == 0:
if "~" in spec:
return ([], [])
else:
return ([("dimensionless", 1)], [])
if "~" in spec:
if registry is None:
raise ValueError(
f"Can't short format a {type(unit)} without a registry."
" This is usually triggered when formatting a instance"
" of the internal `UnitsContainer`."
)
_to_symbol_exponent_name = partial(to_symbol_exponent_name, registry=registry)
out = map(_to_symbol_exponent_name, out)
else:
out = map(to_name_exponent_name, out)
# We keep unit_name because the sort or localizing functions might need it.
# out: display_unit_name, unit_exponent, unit_name
if as_ratio:
numerator, denominator = partition(lambda el: el[1] < 0, out)
else:
numerator, denominator = out, ()
# numerator: display_unit_name, unit_exponent, unit_name
# denominator: display_unit_name, unit_exponent, unit_name
if locale is None:
if sort_func is not None:
numerator = sort_func(numerator, registry)
denominator = sort_func(denominator, registry)
return map(extract2, numerator), map(extract2, denominator)
if length is None:
length = "short" if "~" in spec else "long"
mapper = partial(
localize_display_exponent_name, use_plural=False, length=length, locale=locale
)
numerator = map(mapper, numerator)
denominator = map(mapper, denominator)
if sort_func is not None:
numerator = sort_func(numerator, registry)
denominator = sort_func(denominator, registry)
if use_plural:
if not isinstance(numerator, list):
numerator = list(numerator)
numerator[-1] = localize_display_exponent_name(
numerator[-1],
use_plural,
length=length,
locale=locale,
default=numerator[-1][0],
)
return map(extract2, numerator), map(extract2, denominator)
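`partition` above splits one pass over an iterable into false entries and true entries via `tee`. A self-contained demo of the same helper:

```python
from itertools import filterfalse, tee

def partition(predicate, iterable):
    """Split iterable into (entries where predicate is false, entries where true)."""
    t1, t2 = tee(iterable)
    return filterfalse(predicate, t1), filter(predicate, t2)

falses, trues = partition(lambda n: n % 2, range(10))
evens, odds = list(falses), list(trues)
print(evens, odds)  # [0, 2, 4, 6, 8] [1, 3, 5, 7, 9]
```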


@@ -0,0 +1,234 @@
"""
pint.delegates.formatter._format_helpers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Convenient functions to help string formatting operations.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import re
from collections.abc import Callable, Generator, Iterable
from contextlib import contextmanager
from functools import partial
from locale import LC_NUMERIC, getlocale, setlocale
from typing import (
TYPE_CHECKING,
Any,
TypeVar,
)
from ...compat import ndarray
from ._spec_helpers import FORMATTER
try:
from numpy import integer as np_integer
except ImportError:
np_integer = None
if TYPE_CHECKING:
from ...compat import Locale, Number
T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")
W = TypeVar("W")
_PRETTY_EXPONENTS = "⁰¹²³⁴⁵⁶⁷⁸⁹"
_JOIN_REG_EXP = re.compile(r"{\d*}")
def format_number(value: Any, spec: str = "") -> str:
"""Format number
This function might disappear in the future.
Right now it is aiding backwards-compatible migration.
"""
if isinstance(value, float):
return format(value, spec or ".16n")
elif isinstance(value, int):
return format(value, spec or "n")
elif isinstance(value, ndarray) and value.ndim == 0:
if issubclass(value.dtype.type, np_integer):
return format(value, spec or "n")
else:
return format(value, spec or ".16n")
else:
return str(value)
def builtin_format(value: Any, spec: str = "") -> str:
"""A keyword enabled replacement for builtin format
format has positional only arguments
and this cannot be partialized
and np requires a callable.
"""
return format(value, spec)
@contextmanager
def override_locale(
spec: str, locale: str | Locale | None
) -> Generator[Callable[[Any], str], Any, None]:
"""Given a spec a locale, yields a function to format a number.
IMPORTANT: When the locale is not None, this function uses setlocale
and therefore is not thread safe.
"""
if locale is None:
# If locale is None, just return the builtin format function.
yield ("{:" + spec + "}").format
else:
# If locale is not None, change it and return the backwards compatible
# format_number.
prev_locale_string = getlocale(LC_NUMERIC)
if isinstance(locale, str):
setlocale(LC_NUMERIC, locale)
else:
setlocale(LC_NUMERIC, str(locale))
yield partial(format_number, spec=spec)
setlocale(LC_NUMERIC, prev_locale_string)
def pretty_fmt_exponent(num: Number) -> str:
"""Format an number into a pretty printed exponent."""
# unicode dot operator (U+22C5) looks like a superscript decimal
ret = f"{num:n}".replace("-", "").replace(".", "\u22C5")
for n in range(10):
ret = ret.replace(str(n), _PRETTY_EXPONENTS[n])
return ret
def join_u(fmt: str, iterable: Iterable[Any]) -> str:
"""Join an iterable with the format specified in fmt.
The format can be specified in two ways:
- PEP3101 format with two replacement fields (eg. '{} * {}')
- The concatenating string (eg. ' * ')
"""
if not iterable:
return ""
if not _JOIN_REG_EXP.search(fmt):
return fmt.join(iterable)
miter = iter(iterable)
first = next(miter)
for val in miter:
ret = fmt.format(first, val)
first = ret
return first
def join_mu(joint_fstring: str, mstr: str, ustr: str) -> str:
"""Join magnitude and units.
This avoids `3` and `1 / m` becoming `3 1 / m`.
"""
if ustr == "":
return mstr
if ustr.startswith("1 / "):
return joint_fstring.format(mstr, ustr[2:])
return joint_fstring.format(mstr, ustr)
def join_unc(joint_fstring: str, lpar: str, rpar: str, mstr: str, ustr: str) -> str:
"""Join uncertainty magnitude and units.
Uncertainty magnitudes might require extra parenthesis when joined to units.
- YES: 3 +/- 1
- NO : 3(1)
- NO : (3 +/- 1)e-9
This avoids `(3 +/- 1)` and `meter` becoming `((3 +/- 1) meter)`.
"""
if mstr.startswith(lpar) or mstr.endswith(rpar):
return joint_fstring.format(mstr, ustr)
return joint_fstring.format(lpar + mstr + rpar, ustr)
def formatter(
numerator: Iterable[tuple[str, Number]],
denominator: Iterable[tuple[str, Number]],
as_ratio: bool = True,
single_denominator: bool = False,
product_fmt: str = " * ",
division_fmt: str = " / ",
power_fmt: str = "{} ** {}",
parentheses_fmt: str = "({0})",
exp_call: FORMATTER = "{:n}".format,
) -> str:
"""Format a list of (name, exponent) pairs.
Parameters
----------
numerator : list
a list of (name, exponent) pairs for the numerator.
denominator : list
a list of (name, exponent) pairs for the denominator.
as_ratio : bool, optional
True to display as ratio, False as negative powers. (Default value = True)
single_denominator : bool, optional
if True, all terms with negative exponents are
collected into a single denominator. (Default value = False)
product_fmt : str
the format used for multiplication. (Default value = " * ")
division_fmt : str
the format used for division. (Default value = " / ")
power_fmt : str
the format used for exponentiation. (Default value = "{} ** {}")
parentheses_fmt : str
the format used for parenthesis. (Default value = "({0})")
exp_call : callable
(Default value = lambda x: f"{x:n}")
Returns
-------
str
the formula as a string.
"""
if as_ratio:
fun = lambda x: exp_call(abs(x))
else:
fun = exp_call
pos_terms: list[str] = []
for key, value in numerator:
if value == 1:
pos_terms.append(key)
else:
pos_terms.append(power_fmt.format(key, fun(value)))
neg_terms: list[str] = []
for key, value in denominator:
if value == -1 and as_ratio:
neg_terms.append(key)
else:
neg_terms.append(power_fmt.format(key, fun(value)))
if not pos_terms and not neg_terms:
return ""
if not as_ratio:
# Show as Product: positive * negative terms ** -1
return join_u(product_fmt, pos_terms + neg_terms)
# Show as Ratio: positive terms / negative terms
pos_ret = join_u(product_fmt, pos_terms) or "1"
if not neg_terms:
return pos_ret
if single_denominator:
neg_ret = join_u(product_fmt, neg_terms)
if len(neg_terms) > 1:
neg_ret = parentheses_fmt.format(neg_ret)
else:
neg_ret = join_u(division_fmt, neg_terms)
return join_u(division_fmt, [pos_ret, neg_ret])
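`formatter` renders (name, exponent) pairs either as a ratio or as a product with negative powers. A much-reduced sketch of the ratio path (not the configurable original):

```python
def format_ratio(pairs):
    """Toy ratio formatting: positive exponents on top, negatives below."""
    pos = [n if e == 1 else f"{n} ** {e}" for n, e in pairs if e > 0]
    neg = [n if e == -1 else f"{n} ** {abs(e)}" for n, e in pairs if e < 0]
    numerator = " * ".join(pos) or "1"
    return " / ".join([numerator] + neg) if neg else numerator

print(format_ratio([("meter", 1), ("second", -2)]))  # meter / second ** 2
```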


@@ -0,0 +1,131 @@
"""
pint.delegates.formatter._spec_helpers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Convenient functions to deal with format specifications.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
import re
import warnings
from collections.abc import Callable
from typing import Any
FORMATTER = Callable[
[
Any,
],
str,
]
# Extract just the type from the specification mini-language: see
# http://docs.python.org/2/library/string.html#format-specification-mini-language
# We also add uS for uncertainties.
_BASIC_TYPES = frozenset("bcdeEfFgGnosxX%uS")
REGISTERED_FORMATTERS: dict[str, Any] = {}
def parse_spec(spec: str) -> str:
"""Parse and return spec.
If an unknown item is found, raise a ValueError.
This function still needs work:
- what happens if two distinct values are found?
"""
result = ""
for ch in reversed(spec):
if ch == "~" or ch in _BASIC_TYPES:
continue
elif ch in list(REGISTERED_FORMATTERS.keys()) + ["~"]:
if result:
raise ValueError("expected ':' after format specifier")
else:
result = ch
elif ch.isalpha():
raise ValueError("Unknown conversion specified " + ch)
else:
break
return result
def extract_custom_flags(spec: str) -> str:
"""Return custom flags present in a format specification
(i.e. those not part of Python's format specification mini-language)
"""
if not spec:
return ""
# sort by length, with longer items first
known_flags = sorted(REGISTERED_FORMATTERS.keys(), key=len, reverse=True)
flag_re = re.compile("(" + "|".join(known_flags + ["~"]) + ")")
custom_flags = flag_re.findall(spec)
return "".join(custom_flags)
def remove_custom_flags(spec: str) -> str:
"""Remove custom flags present in a format specification
(i.e. those not part of Python's formatting mini-language)
"""
for flag in sorted(REGISTERED_FORMATTERS.keys(), key=len, reverse=True) + ["~"]:
if flag:
spec = spec.replace(flag, "")
return spec
@functools.lru_cache
def split_format(
spec: str, default: str, separate_format_defaults: bool = True
) -> tuple[str, str]:
"""Split format specification into magnitude and unit format."""
mspec = remove_custom_flags(spec)
uspec = extract_custom_flags(spec)
default_mspec = remove_custom_flags(default)
default_uspec = extract_custom_flags(default)
if separate_format_defaults in (False, None):
# should we warn always or only if there was no explicit choice?
# Given that we want to eventually remove the flag again, I'd say yes?
if spec and separate_format_defaults is None:
if not uspec and default_uspec:
warnings.warn(
(
"The given format spec does not contain a unit formatter."
" Falling back to the builtin defaults, but in the future"
" the unit formatter specified in the `default_format`"
" attribute will be used instead."
),
DeprecationWarning,
)
if not mspec and default_mspec:
warnings.warn(
(
"The given format spec does not contain a magnitude formatter."
" Falling back to the builtin defaults, but in the future"
" the magnitude formatter specified in the `default_format`"
" attribute will be used instead."
),
DeprecationWarning,
)
elif not spec:
mspec, uspec = default_mspec, default_uspec
else:
mspec = mspec or default_mspec
uspec = uspec or default_uspec
return mspec, uspec
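To see what `split_format` produces, here is a minimal, self-contained sketch of the flag extraction it relies on; the flag list below is hypothetical, standing in for the keys of `REGISTERED_FORMATTERS`:

```python
import re

# Hypothetical flag registry, sorted longest-first so "Lx" wins over "L";
# "~" is the unit-shortening flag.
known_flags = ["Lx", "D", "H", "P", "L", "C", "~"]
flag_re = re.compile("(" + "|".join(known_flags) + ")")

def extract_custom_flags(spec):
    # Keep only the flags that are NOT part of Python's format mini-language.
    return "".join(flag_re.findall(spec)) if spec else ""

def remove_custom_flags(spec):
    # Strip the custom flags, leaving a plain Python format spec.
    for flag in known_flags:
        spec = spec.replace(flag, "")
    return spec

spec = ".3f~P"
print(remove_custom_flags(spec), "|", extract_custom_flags(spec))  # .3f | ~P
```

The magnitude spec (`.3f`) goes to Python's `format`, while the unit spec (`~P`) selects the unit formatter.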


@@ -0,0 +1,132 @@
"""
pint.delegates.formatter.base_formatter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Common class and function for all formatters.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from collections.abc import Callable
from typing import TYPE_CHECKING, Any, Iterable
from ..._typing import Magnitude
from ...compat import Unpack, ndarray, np
from ...util import UnitsContainer
from ._compound_unit_helpers import BabelKwds, prepare_compount_unit
from ._format_helpers import join_mu, override_locale
from ._spec_helpers import REGISTERED_FORMATTERS, split_format
from .plain import BaseFormatter
if TYPE_CHECKING:
from ...facets.plain import MagnitudeT, PlainQuantity, PlainUnit
from ...registry import UnitRegistry
def register_unit_format(name: str):
"""register a function as a new format for units
The registered function must have a signature of:
.. code:: python
def new_format(unit, registry, **options):
pass
Parameters
----------
name : str
The name of the new format (to be used in the format mini-language). An error is
raised if the new format would overwrite an existing format.
Examples
--------
.. code:: python
@pint.register_unit_format("custom")
def format_custom(unit, registry, **options):
result = "<formatted unit>" # do the formatting
return result
ureg = pint.UnitRegistry()
u = ureg.m / ureg.s ** 2
f"{u:custom}"
"""
# TODO: kwargs missing in typing
def wrapper(func: Callable[[PlainUnit, UnitRegistry], str]):
if name in REGISTERED_FORMATTERS:
raise ValueError(f"format {name!r} already exists") # or warn instead
class NewFormatter(BaseFormatter):
spec = name
def format_magnitude(
self,
magnitude: Magnitude,
mspec: str = "",
**babel_kwds: Unpack[BabelKwds],
) -> str:
with override_locale(
mspec, babel_kwds.get("locale", None)
) as format_number:
if isinstance(magnitude, ndarray) and magnitude.ndim > 0:
# Use custom ndarray text formatting--need to handle scalars differently
# since they don't respond to printoptions
with np.printoptions(formatter={"float_kind": format_number}):
mstr = format(magnitude).replace("\n", "")
else:
mstr = format_number(magnitude)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
**babel_kwds: Unpack[BabelKwds],
) -> str:
numerator, _denominator = prepare_compount_unit(
unit,
uspec,
**babel_kwds,
as_ratio=False,
registry=self._registry,
)
if self._registry is None:
units = UnitsContainer(numerator)
else:
units = self._registry.UnitsContainer(numerator)
return func(units, registry=self._registry)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
if registry is None:
mspec, uspec = split_format(qspec, "", True)
else:
mspec, uspec = split_format(
qspec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
joint_fstring = "{} {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, **babel_kwds),
)
REGISTERED_FORMATTERS[name] = NewFormatter()
return wrapper
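The decorator pattern above can be reduced to its core. Here is a minimal sketch using a plain function registry rather than the `NewFormatter` class the real code builds; the toy rendering is hypothetical:

```python
REGISTERED_FORMATTERS = {}

def register_unit_format(name):
    # Reject duplicate names, then store the function under its spec name.
    def wrapper(func):
        if name in REGISTERED_FORMATTERS:
            raise ValueError(f"format {name!r} already exists")
        REGISTERED_FORMATTERS[name] = func
        return func
    return wrapper

@register_unit_format("custom")
def format_custom(unit, registry=None, **options):
    # Toy rendering: "<name>**<power>" pairs joined by spaces.
    return " ".join(f"{u}**{p}" for u, p in sorted(unit))

print(format_custom([("meter", 1), ("second", -2)]))  # meter**1 second**-2
```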


@@ -0,0 +1,267 @@
"""
pint.delegates.formatter.full
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Implements:
- Full: dispatch to other formats, accept defaults.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import locale
from typing import TYPE_CHECKING, Any, Iterable, Literal
from ..._typing import Magnitude
from ...compat import Unpack, babel_parse
from ...util import iterable
from ._compound_unit_helpers import BabelKwds, SortFunc, sort_by_unit_name
from ._to_register import REGISTERED_FORMATTERS
from .html import HTMLFormatter
from .latex import LatexFormatter, SIunitxFormatter
from .plain import (
BaseFormatter,
CompactFormatter,
DefaultFormatter,
PrettyFormatter,
RawFormatter,
)
if TYPE_CHECKING:
from ...compat import Locale
from ...facets.measurement import Measurement
from ...facets.plain import (
MagnitudeT,
PlainQuantity,
PlainUnit,
)
from ...registry import UnitRegistry
class FullFormatter(BaseFormatter):
"""A formatter that dispatches to other formatters.
Has a default format, locale and babel_length
"""
_formatters: dict[str, Any] = {}
default_format: str = ""
# TODO: This can be overridden by the registry definitions file
dim_order: tuple[str, ...] = (
"[substance]",
"[mass]",
"[current]",
"[luminosity]",
"[length]",
"[]",
"[time]",
"[temperature]",
)
default_sort_func: SortFunc | None = staticmethod(sort_by_unit_name)
locale: Locale | None = None
def __init__(self, registry: UnitRegistry | None = None):
super().__init__(registry)
self._formatters = {}
self._formatters["raw"] = RawFormatter(registry)
self._formatters["D"] = DefaultFormatter(registry)
self._formatters["H"] = HTMLFormatter(registry)
self._formatters["P"] = PrettyFormatter(registry)
self._formatters["Lx"] = SIunitxFormatter(registry)
self._formatters["L"] = LatexFormatter(registry)
self._formatters["C"] = CompactFormatter(registry)
def set_locale(self, loc: str | None) -> None:
"""Change the locale used by default by `format_babel`.
Parameters
----------
loc : str or None
None (do not translate), 'sys' (detect the system locale) or a locale id string.
"""
if isinstance(loc, str):
if loc == "sys":
loc = locale.getdefaultlocale()[0]
# We call babel parse to fail here and not in the formatting operation
babel_parse(loc)
self.locale = loc
def get_formatter(self, spec: str):
if spec == "":
return self._formatters["D"]
for k, v in self._formatters.items():
if k in spec:
return v
for k, v in REGISTERED_FORMATTERS.items():
if k in spec:
orphan_fmt = REGISTERED_FORMATTERS[k]
break
else:
return self._formatters["D"]
try:
fmt = orphan_fmt.__class__(self._registry)
spec = getattr(fmt, "spec", spec)
self._formatters[spec] = fmt
return fmt
except Exception:
return orphan_fmt
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
mspec = mspec or self.default_format
return self.get_formatter(mspec).format_magnitude(
magnitude, mspec, **babel_kwds
)
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
uspec = uspec or self.default_format
sort_func = sort_func or self.default_sort_func
return self.get_formatter(uspec).format_unit(
unit, uspec, sort_func=sort_func, **babel_kwds
)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
spec: str = "",
**babel_kwds: Unpack[BabelKwds],
) -> str:
spec = spec or self.default_format
# If Compact is selected, do it at the beginning
if "#" in spec:
spec = spec.replace("#", "")
obj = quantity.to_compact()
else:
obj = quantity
del quantity
locale = babel_kwds.get("locale", self.locale)
if locale:
if "use_plural" in babel_kwds:
use_plural = babel_kwds["use_plural"]
else:
use_plural = obj.magnitude > 1
if iterable(use_plural):
use_plural = True
else:
use_plural = False
return self.get_formatter(spec).format_quantity(
obj,
spec,
sort_func=self.default_sort_func,
use_plural=use_plural,
length=babel_kwds.get("length", None),
locale=locale,
)
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
**babel_kwds: Unpack[BabelKwds],
) -> str:
meas_spec = meas_spec or self.default_format
# If Compact is selected, do it at the beginning
if "#" in meas_spec:
meas_spec = meas_spec.replace("#", "")
obj = measurement.to_compact()
else:
obj = measurement
del measurement
use_plural = obj.magnitude.nominal_value > 1
if iterable(use_plural):
use_plural = True
return self.get_formatter(meas_spec).format_measurement(
obj,
meas_spec,
sort_func=self.default_sort_func,
use_plural=babel_kwds.get("use_plural", use_plural),
length=babel_kwds.get("length", None),
locale=babel_kwds.get("locale", self.locale),
)
#######################################
# This is for backwards compatibility
#######################################
def format_unit_babel(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
spec: str = "",
length: Literal["short", "long", "narrow"] | None = None,
locale: Locale | None = None,
) -> str:
if self.locale is None and locale is None:
raise ValueError(
"format_babel requires a locale argument if the Formatter locale is not set."
)
return self.format_unit(
unit,
spec or self.default_format,
sort_func=self.default_sort_func,
use_plural=False,
length=length,
locale=locale or self.locale,
)
def format_quantity_babel(
self,
quantity: PlainQuantity[MagnitudeT],
spec: str = "",
length: Literal["short", "long", "narrow"] | None = None,
locale: Locale | None = None,
) -> str:
if self.locale is None and locale is None:
raise ValueError(
"format_babel requires a locale argument if the Formatter locale is not set."
)
use_plural = quantity.magnitude > 1
if iterable(use_plural):
use_plural = True
return self.format_quantity(
quantity,
spec or self.default_format,
sort_func=self.default_sort_func,
use_plural=use_plural,
length=length,
locale=locale or self.locale,
)
################################################################
# This allows to format units independently of the registry
#
REGISTERED_FORMATTERS["raw"] = RawFormatter()
REGISTERED_FORMATTERS["D"] = DefaultFormatter()
REGISTERED_FORMATTERS["H"] = HTMLFormatter()
REGISTERED_FORMATTERS["P"] = PrettyFormatter()
REGISTERED_FORMATTERS["Lx"] = SIunitxFormatter()
REGISTERED_FORMATTERS["L"] = LatexFormatter()
REGISTERED_FORMATTERS["C"] = CompactFormatter()
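`FullFormatter.get_formatter` dispatches on the first registered key found inside the spec string, which is why `"Lx"` is registered before `"L"` above. A minimal sketch of that lookup, with placeholder names instead of formatter instances:

```python
# Insertion order matters: "Lx" must be checked before "L".
_formatters = {"raw": "raw", "D": "default", "H": "html",
               "P": "pretty", "Lx": "siunitx", "L": "latex", "C": "compact"}

def get_formatter(spec):
    # Empty spec or no match falls back to the default formatter.
    if spec == "":
        return _formatters["D"]
    for key, fmt in _formatters.items():
        if key in spec:
            return fmt
    return _formatters["D"]

print(get_formatter("~P"))     # pretty
print(get_formatter(".3fLx"))  # siunitx
print(get_formatter(".2f"))    # default
```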


@@ -0,0 +1,188 @@
"""
pint.delegates.formatter.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Implements:
- HTML: suitable for web/jupyter notebook outputs.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import re
from typing import TYPE_CHECKING, Any, Iterable
from ..._typing import Magnitude
from ...compat import Unpack, ndarray, np
from ...util import iterable
from ._compound_unit_helpers import (
BabelKwds,
SortFunc,
localize_per,
prepare_compount_unit,
)
from ._format_helpers import (
formatter,
join_mu,
join_unc,
override_locale,
)
from ._spec_helpers import (
remove_custom_flags,
split_format,
)
from .plain import BaseFormatter
if TYPE_CHECKING:
from ...facets.measurement import Measurement
from ...facets.plain import MagnitudeT, PlainQuantity, PlainUnit
_EXP_PATTERN = re.compile(r"([0-9]\.?[0-9]*)e(-?)\+?0*([0-9]+)")
class HTMLFormatter(BaseFormatter):
"""HTML localizable text formatter."""
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
with override_locale(mspec, babel_kwds.get("locale", None)) as format_number:
if hasattr(magnitude, "_repr_html_"):
# If magnitude has an HTML repr, nest it within Pint's
mstr = magnitude._repr_html_() # type: ignore
assert isinstance(mstr, str)
else:
if isinstance(magnitude, ndarray):
# Need to override for scalars, which are detected as iterable,
# and don't respond to printoptions.
if magnitude.ndim == 0:
mstr = format_number(magnitude)
else:
with np.printoptions(formatter={"float_kind": format_number}):
mstr = (
"<pre>" + format(magnitude).replace("\n", "") + "</pre>"
)
elif not iterable(magnitude):
# Use plain text for scalars
mstr = format_number(magnitude)
else:
# Use monospace font for other array-likes
mstr = (
"<pre>"
+ format_number(magnitude).replace("\n", "<br>")
+ "</pre>"
)
m = _EXP_PATTERN.match(mstr)
_exp_formatter = lambda s: f"<sup>{s}</sup>"
if m:
exp = int(m.group(2) + m.group(3))
mstr = _EXP_PATTERN.sub(r"\1×10" + _exp_formatter(exp), mstr)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
numerator, denominator = prepare_compount_unit(
unit,
uspec,
sort_func=sort_func,
**babel_kwds,
registry=self._registry,
)
if babel_kwds.get("locale", None):
length = babel_kwds.get("length") or ("short" if "~" in uspec else "long")
division_fmt = localize_per(length, babel_kwds.get("locale"), "{}/{}")
else:
division_fmt = "{}/{}"
return formatter(
numerator,
denominator,
as_ratio=True,
single_denominator=True,
product_fmt=r" ",
division_fmt=division_fmt,
power_fmt=r"{}<sup>{}</sup>",
parentheses_fmt=r"({})",
)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
if iterable(quantity.magnitude):
# Use HTML table instead of plain text template for array-likes
joint_fstring = (
"<table><tbody>"
"<tr><th>Magnitude</th>"
"<td style='text-align:left;'>{}</td></tr>"
"<tr><th>Units</th><td style='text-align:left;'>{}</td></tr>"
"</tbody></table>"
)
else:
joint_fstring = "{} {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds),
)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
unc_str = format(uncertainty, unc_spec).replace("+/-", " &plusmn; ")
unc_str = re.sub(r"\)e\+0?(\d+)", r")×10<sup>\1</sup>", unc_str)
unc_str = re.sub(r"\)e-0?(\d+)", r")×10<sup>-\1</sup>", unc_str)
return unc_str
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = remove_custom_flags(meas_spec)
joint_fstring = "{} {}"
return join_unc(
joint_fstring,
"(",
")",
self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds),
)
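The `_EXP_PATTERN` substitution used by `HTMLFormatter.format_magnitude` turns plain scientific notation into HTML superscripts. A self-contained sketch of just that step:

```python
import re

_EXP_PATTERN = re.compile(r"([0-9]\.?[0-9]*)e(-?)\+?0*([0-9]+)")

def htmlize_exponent(mstr):
    # Rewrite "1.5e-05" as "1.5×10<sup>-5</sup>", dropping zero padding.
    m = _EXP_PATTERN.match(mstr)
    if m:
        exp = int(m.group(2) + m.group(3))
        mstr = _EXP_PATTERN.sub(r"\1×10" + f"<sup>{exp}</sup>", mstr)
    return mstr

print(htmlize_exponent("1.5e-05"))  # 1.5×10<sup>-5</sup>
print(htmlize_exponent("2.0e+07"))  # 2.0×10<sup>7</sup>
```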


@@ -0,0 +1,420 @@
"""
pint.delegates.formatter.latex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Implements:
- Latex: uses vanilla LaTeX.
- SIunitx: uses latex siunitx package format.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
import re
from collections.abc import Iterable
from typing import TYPE_CHECKING, Any
from ..._typing import Magnitude
from ...compat import Number, Unpack, ndarray
from ._compound_unit_helpers import (
BabelKwds,
SortFunc,
prepare_compount_unit,
)
from ._format_helpers import (
FORMATTER,
formatter,
join_mu,
join_unc,
override_locale,
)
from ._spec_helpers import (
remove_custom_flags,
split_format,
)
from .plain import BaseFormatter
if TYPE_CHECKING:
from ...facets.measurement import Measurement
from ...facets.plain import MagnitudeT, PlainQuantity, PlainUnit
from ...registry import UnitRegistry
from ...util import ItMatrix
def vector_to_latex(
vec: Iterable[Any], fmtfun: FORMATTER | str = "{:.2n}".format
) -> str:
"""Format a vector into a latex string."""
return matrix_to_latex([vec], fmtfun)
def matrix_to_latex(matrix: ItMatrix, fmtfun: FORMATTER | str = "{:.2n}".format) -> str:
"""Format a matrix into a latex string."""
ret: list[str] = []
for row in matrix:
ret += [" & ".join(fmtfun(f) for f in row)]
return r"\begin{pmatrix}%s\end{pmatrix}" % "\\\\ \n".join(ret)
def ndarray_to_latex_parts(
ndarr: ndarray, fmtfun: FORMATTER = "{:.2n}".format, dim: tuple[int, ...] = tuple()
) -> list[str]:
"""Convert a numpy array into an iterable of elements to be printed.
e.g.
- if the array is 2d, it will return an iterable of rows.
- if the array is 3d, it will return an iterable of matrices.
"""
if isinstance(fmtfun, str):
fmtfun = fmtfun.format
if ndarr.ndim == 0:
_ndarr = ndarr.reshape(1)
return [vector_to_latex(_ndarr, fmtfun)]
if ndarr.ndim == 1:
return [vector_to_latex(ndarr, fmtfun)]
if ndarr.ndim == 2:
return [matrix_to_latex(ndarr, fmtfun)]
else:
ret = []
if ndarr.ndim == 3:
header = ("arr[%s," % ",".join("%d" % d for d in dim)) + "%d,:,:]"
for elno, el in enumerate(ndarr):
ret += [header % elno + " = " + matrix_to_latex(el, fmtfun)]
else:
for elno, el in enumerate(ndarr):
ret += ndarray_to_latex_parts(el, fmtfun, dim + (elno,))
return ret
def ndarray_to_latex(
ndarr: ndarray,
fmtfun: FORMATTER | str = "{:.2n}".format,
dim: tuple[int, ...] = tuple(),
) -> str:
"""Format a numpy array into string."""
return "\n".join(ndarray_to_latex_parts(ndarr, fmtfun, dim))
def latex_escape(string: str) -> str:
"""Prepend characters that have a special meaning in LaTeX with a backslash."""
return functools.reduce(
lambda s, m: re.sub(m[0], m[1], s),
(
(r"[\\]", r"\\textbackslash "),
(r"[~]", r"\\textasciitilde "),
(r"[\^]", r"\\textasciicircum "),
(r"([&%$#_{}])", r"\\\1"),
),
str(string),
)
def siunitx_format_unit(
units: Iterable[tuple[str, Number]], registry: UnitRegistry
) -> str:
"""Returns LaTeX code for the unit that can be put into an siunitx command."""
def _tothe(power) -> str:
if power == int(power):
if power == 1:
return ""
elif power == 2:
return r"\squared"
elif power == 3:
return r"\cubed"
else:
return rf"\tothe{{{int(power):d}}}"
else:
# limit float powers to 3 decimal places
return rf"\tothe{{{power:.3f}}}".rstrip("0")
lpos = []
lneg = []
# loop through all units in the container
for unit, power in sorted(units):
# remove unit prefix if it exists
# siunitx supports \prefix commands
lpick = lpos if power >= 0 else lneg
prefix = None
# TODO: fix this to be more efficient and also detect aliases.
for p in registry._prefixes.values():
p = str(p.name)
if len(p) > 0 and unit.find(p) == 0:
prefix = p
unit = unit.replace(prefix, "", 1)
if power < 0:
lpick.append(r"\per")
if prefix is not None:
lpick.append(rf"\{prefix}")
lpick.append(rf"\{unit}")
lpick.append(rf"{_tothe(abs(power))}")
return "".join(lpos) + "".join(lneg)
_EXP_PATTERN = re.compile(r"([0-9]\.?[0-9]*)e(-?)\+?0*([0-9]+)")
class LatexFormatter(BaseFormatter):
"""Latex localizable text formatter."""
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
with override_locale(mspec, babel_kwds.get("locale", None)) as format_number:
if isinstance(magnitude, ndarray):
mstr = ndarray_to_latex(magnitude, mspec)
else:
mstr = format_number(magnitude)
mstr = _EXP_PATTERN.sub(r"\1\\times 10^{\2\3}", mstr)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
numerator, denominator = prepare_compount_unit(
unit,
uspec,
sort_func=sort_func,
**babel_kwds,
registry=self._registry,
)
numerator = ((rf"\mathrm{{{latex_escape(u)}}}", p) for u, p in numerator)
denominator = ((rf"\mathrm{{{latex_escape(u)}}}", p) for u, p in denominator)
# Localized latex
# if babel_kwds.get("locale", None):
# length = babel_kwds.get("length") or ("short" if "~" in uspec else "long")
# division_fmt = localize_per(length, babel_kwds.get("locale"), "{}/{}")
# else:
# division_fmt = "{}/{}"
# division_fmt = r"\frac" + division_fmt.format("[{}]", "[{}]")
formatted = formatter(
numerator,
denominator,
as_ratio=True,
single_denominator=True,
product_fmt=r" \cdot ",
division_fmt=r"\frac[{}][{}]",
power_fmt="{}^[{}]",
parentheses_fmt=r"\left({}\right)",
)
return formatted.replace("[", "{").replace("]", "}")
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
joint_fstring = r"{}\ {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds),
)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
# uncertainties handles everything related to latex.
unc_str = format(uncertainty, unc_spec)
if unc_str.startswith(r"\left"):
return unc_str
return unc_str.replace("(", r"\left(").replace(")", r"\right)")
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = remove_custom_flags(meas_spec)
# TODO: ugly. uncertainties recognizes L
if "L" not in unc_spec:
unc_spec += "L"
joint_fstring = r"{}\ {}"
return join_unc(
joint_fstring,
r"\left(",
r"\right)",
self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds),
)
class SIunitxFormatter(BaseFormatter):
"""Latex localizable text formatter with siunitx format.
See: https://ctan.org/pkg/siunitx
"""
def format_magnitude(
self,
magnitude: Magnitude,
mspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
with override_locale(mspec, babel_kwds.get("locale", None)) as format_number:
if isinstance(magnitude, ndarray):
mstr = ndarray_to_latex(magnitude, mspec)
else:
mstr = format_number(magnitude)
# TODO: Why this is not needed in siunitx?
# mstr = _EXP_PATTERN.sub(r"\1\\times 10^{\2\3}", mstr)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
if registry is None:
raise ValueError(
"Can't format as siunitx without a registry."
" This is usually triggered when formatting an instance"
' of the internal `UnitsContainer` with a spec of `"Lx"`'
" and might indicate a bug in `pint`."
)
# TODO: not sure if I should call format_compound_unit here.
# siunitx_format_unit requires certain specific names?
# should unit names be translated?
# should unit names be shortened?
# units = format_compound_unit(unit, uspec, **babel_kwds)
try:
units = unit._units.items()
except Exception:
units = unit
formatted = siunitx_format_unit(units, registry)
if "~" in uspec:
formatted = formatted.replace(r"\percent", r"\%")
# TODO: is this the right behaviour? Should we return the \si[] when only
# the units are returned?
return rf"\si[]{{{formatted}}}"
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
joint_fstring = "{}{}"
mstr = self.format_magnitude(quantity.magnitude, mspec, **babel_kwds)
ustr = self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds)[
len(r"\si[]") :
]
return r"\SI[]" + join_mu(joint_fstring, "{%s}" % mstr, ustr)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
# SIunitx requires space between "+-" (or "\pm") and the nominal value
# and uncertainty, and doesn't accept "+/-"
# SIunitx doesn't accept parentheses, which the uncertainties package
# uses with scientific notation ('e' or 'E' and sometimes 'g' or 'G').
return (
format(uncertainty, unc_spec)
.replace("+/-", r" +- ")
.replace("(", "")
.replace(")", " ")
)
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = remove_custom_flags(meas_spec)
joint_fstring = "{}{}"
return r"\SI" + join_unc(
joint_fstring,
r"",
r"",
"{%s}"
% self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds)[
len(r"\si[]") :
],
)
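Two details of the LaTeX formatters above are worth seeing in isolation: the `latex_escape` backslash-escaping, and the square-bracket placeholder trick `LatexFormatter.format_unit` uses so that literal `{}` braces in templates don't collide with `str.format` placeholders. A self-contained sketch:

```python
import functools
import re

def latex_escape(string):
    # Prepend a backslash (or a replacement word, for backslash/tilde/caret)
    # to characters that have a special meaning in LaTeX.
    return functools.reduce(
        lambda s, m: re.sub(m[0], m[1], s),
        (
            (r"[\\]", r"\\textbackslash "),
            (r"[~]", r"\\textasciitilde "),
            (r"[\^]", r"\\textasciicircum "),
            (r"([&%$#_{}])", r"\\\1"),
        ),
        str(string),
    )

print(latex_escape("100%_done"))  # 100\%\_done

# Build templates with [] placeholders, then swap them for braces at the
# end, so "{}" stays unambiguous as a str.format placeholder.
formatted = r"\frac[{}][{}]".format(r"\mathrm{m}", r"\mathrm{s}^[2]")
print(formatted.replace("[", "{").replace("]", "}"))
# \frac{\mathrm{m}}{\mathrm{s}^{2}}
```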


@@ -0,0 +1,486 @@
"""
pint.delegates.formatter.plain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Implements plain text formatters:
- Raw: as simple as it gets (not locale aware, no unit formatter).
- Default: used when no string spec is given.
- Compact: like default but with fewer spaces.
- Pretty: pretty printed formatter.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import itertools
import re
from typing import TYPE_CHECKING, Any, Iterable
from ..._typing import Magnitude
from ...compat import Unpack, ndarray, np
from ._compound_unit_helpers import (
BabelKwds,
SortFunc,
localize_per,
prepare_compount_unit,
)
from ._format_helpers import (
formatter,
join_mu,
join_unc,
override_locale,
pretty_fmt_exponent,
)
from ._spec_helpers import (
remove_custom_flags,
split_format,
)
if TYPE_CHECKING:
from ...facets.measurement import Measurement
from ...facets.plain import MagnitudeT, PlainQuantity, PlainUnit
from ...registry import UnitRegistry
_EXP_PATTERN = re.compile(r"([0-9]\.?[0-9]*)e(-?)\+?0*([0-9]+)")
class BaseFormatter:
def __init__(self, registry: UnitRegistry | None = None):
self._registry = registry
class DefaultFormatter(BaseFormatter):
"""Simple, localizable plain text formatter.
A formatter is a class with methods to format into string each of the objects
that appear in pint (magnitude, unit, quantity, uncertainty, measurement)
"""
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
"""Format scalar/array into string
given a string formatting specification and locale related arguments.
"""
with override_locale(mspec, babel_kwds.get("locale", None)) as format_number:
if isinstance(magnitude, ndarray) and magnitude.ndim > 0:
# Use custom ndarray text formatting--need to handle scalars differently
# since they don't respond to printoptions
with np.printoptions(formatter={"float_kind": format_number}):
mstr = format(magnitude).replace("\n", "")
else:
mstr = format_number(magnitude)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
"""Format a unit (can be compound) into string
given a string formatting specification and locale related arguments.
"""
numerator, denominator = prepare_compount_unit(
unit,
uspec,
sort_func=sort_func,
**babel_kwds,
registry=self._registry,
)
if babel_kwds.get("locale", None):
length = babel_kwds.get("length") or ("short" if "~" in uspec else "long")
division_fmt = localize_per(length, babel_kwds.get("locale"), "{} / {}")
else:
division_fmt = "{} / {}"
return formatter(
numerator,
denominator,
as_ratio=True,
single_denominator=False,
product_fmt="{} * {}",
division_fmt=division_fmt,
power_fmt="{} ** {}",
parentheses_fmt=r"({})",
)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
"""Format a quantity (magnitude and unit) into string
given a string formatting specification and locale related arguments.
"""
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
joint_fstring = "{} {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds),
)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
"""Format an uncertainty magnitude (nominal value and stdev) into string
given a string formatting specification and locale related arguments.
"""
return format(uncertainty, unc_spec).replace("+/-", " +/- ")
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
"""Format a measurement (uncertainty and units) into string
given a string formatting specification and locale related arguments.
"""
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = remove_custom_flags(meas_spec)
joint_fstring = "{} {}"
return join_unc(
joint_fstring,
"(",
")",
self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds),
)
class CompactFormatter(BaseFormatter):
"""Simple, localizable plain text formatter without extra spaces."""
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
with override_locale(mspec, babel_kwds.get("locale", None)) as format_number:
if isinstance(magnitude, ndarray) and magnitude.ndim > 0:
# Use custom ndarray text formatting--need to handle scalars differently
# since they don't respond to printoptions
with np.printoptions(formatter={"float_kind": format_number}):
mstr = format(magnitude).replace("\n", "")
else:
mstr = format_number(magnitude)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
numerator, denominator = prepare_compount_unit(
unit,
uspec,
sort_func=sort_func,
**babel_kwds,
registry=self._registry,
)
# Division format in compact formatter is not localized.
division_fmt = "{}/{}"
return formatter(
numerator,
denominator,
as_ratio=True,
single_denominator=False,
product_fmt="*", # TODO: Should this just be ''?
division_fmt=division_fmt,
power_fmt="{}**{}",
parentheses_fmt=r"({})",
)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
joint_fstring = "{} {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds),
)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
return format(uncertainty, unc_spec).replace("+/-", "+/-")
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = remove_custom_flags(meas_spec)
joint_fstring = "{} {}"
return join_unc(
joint_fstring,
"(",
")",
self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds),
)
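The compact unit string above comes from pint's `formatter` helper. A minimal standalone sketch of how its settings here (`as_ratio=True`, `single_denominator=False`, `product_fmt="*"`, `power_fmt="{}**{}"`) combine terms — `simple_formatter` is an illustrative name, and the handling of multi-term denominators is an assumption, not pint's implementation:

```python
def simple_formatter(
    numerator,
    denominator,
    product_fmt="*",
    division_fmt="{}/{}",
    power_fmt="{}**{}",
):
    """Join (name, exponent) pairs into a ratio string, approximating what
    pint's `formatter` helper yields with CompactFormatter's settings
    (as_ratio=True, single_denominator=False)."""

    def join(terms):
        return product_fmt.join(
            name if exp == 1 else power_fmt.format(name, exp)
            for name, exp in terms
        )

    # as_ratio=True: an empty numerator is rendered as "1".
    out = join(numerator) or "1"
    # single_denominator=False: each denominator term gets its own "/".
    for name, exp in denominator:
        term = name if exp == 1 else power_fmt.format(name, exp)
        out = division_fmt.format(out, term)
    return out
```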
class PrettyFormatter(BaseFormatter):
"""Pretty printed localizable plain text formatter without extra spaces."""
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
with override_locale(mspec, babel_kwds.get("locale", None)) as format_number:
if isinstance(magnitude, ndarray) and magnitude.ndim > 0:
# Use custom ndarray text formatting--need to handle scalars differently
# since they don't respond to printoptions
with np.printoptions(formatter={"float_kind": format_number}):
mstr = format(magnitude).replace("\n", "")
else:
mstr = format_number(magnitude)
m = _EXP_PATTERN.match(mstr)
if m:
exp = int(m.group(2) + m.group(3))
mstr = _EXP_PATTERN.sub(r"\1×10" + pretty_fmt_exponent(exp), mstr)
return mstr
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
numerator, denominator = prepare_compount_unit(
unit,
uspec,
sort_func=sort_func,
**babel_kwds,
registry=self._registry,
)
if babel_kwds.get("locale", None):
length = babel_kwds.get("length") or ("short" if "~" in uspec else "long")
division_fmt = localize_per(length, babel_kwds.get("locale"), "{}/{}")
else:
division_fmt = "{}/{}"
return formatter(
numerator,
denominator,
as_ratio=True,
single_denominator=False,
product_fmt="·",
division_fmt=division_fmt,
power_fmt="{}{}",
parentheses_fmt="({})",
exp_call=pretty_fmt_exponent,
)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
joint_fstring = "{} {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds),
)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
return format(uncertainty, unc_spec).replace("±", " ± ")
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = meas_spec
joint_fstring = "{} {}"
return join_unc(
joint_fstring,
"(",
")",
self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds),
)
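The exponent prettifying in `format_magnitude` can be exercised standalone. The regex and superscript mapping below are assumed reconstructions of pint's `_EXP_PATTERN` and `pretty_fmt_exponent`:

```python
import re

# Assumed shape of pint's _EXP_PATTERN: mantissa, 'e', optional sign, exponent digits.
_EXP_PATTERN = re.compile(r"([0-9]\.?[0-9]*)e(-?)\+?0*([0-9]+)")

_SUPERSCRIPTS = str.maketrans("0123456789-", "⁰¹²³⁴⁵⁶⁷⁸⁹⁻")


def pretty_fmt_exponent(exp: int) -> str:
    """Render an integer exponent with Unicode superscript characters."""
    return str(exp).translate(_SUPERSCRIPTS)


def prettify(mstr: str) -> str:
    """Rewrite '1.5e+06'-style magnitudes as '1.5×10⁶', as format_magnitude does."""
    m = _EXP_PATTERN.match(mstr)
    if m:
        exp = int(m.group(2) + m.group(3))
        mstr = _EXP_PATTERN.sub(r"\1×10" + pretty_fmt_exponent(exp), mstr)
    return mstr
```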
class RawFormatter(BaseFormatter):
"""Very simple non-localizable plain text formatter.
Ignores all pint custom string formatting specification.
"""
def format_magnitude(
self, magnitude: Magnitude, mspec: str = "", **babel_kwds: Unpack[BabelKwds]
) -> str:
return str(magnitude)
def format_unit(
self,
unit: PlainUnit | Iterable[tuple[str, Any]],
uspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
numerator, denominator = prepare_compount_unit(
unit,
uspec,
sort_func=sort_func,
**babel_kwds,
registry=self._registry,
)
return " * ".join(
k if v == 1 else f"{k} ** {v}"
for k, v in itertools.chain(numerator, denominator)
)
def format_quantity(
self,
quantity: PlainQuantity[MagnitudeT],
qspec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
qspec, registry.formatter.default_format, registry.separate_format_defaults
)
joint_fstring = "{} {}"
return join_mu(
joint_fstring,
self.format_magnitude(quantity.magnitude, mspec, **babel_kwds),
self.format_unit(quantity.unit_items(), uspec, sort_func, **babel_kwds),
)
def format_uncertainty(
self,
uncertainty,
unc_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
return format(uncertainty, unc_spec)
def format_measurement(
self,
measurement: Measurement,
meas_spec: str = "",
sort_func: SortFunc | None = None,
**babel_kwds: Unpack[BabelKwds],
) -> str:
registry = self._registry
mspec, uspec = split_format(
meas_spec,
registry.formatter.default_format,
registry.separate_format_defaults,
)
unc_spec = remove_custom_flags(meas_spec)
joint_fstring = "{} {}"
return join_unc(
joint_fstring,
"(",
")",
self.format_uncertainty(measurement.magnitude, unc_spec, **babel_kwds),
self.format_unit(measurement.units, uspec, sort_func, **babel_kwds),
)
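RawFormatter's unit join is simple enough to sketch on its own. This assumes, as the code above implies, that denominator terms arrive carrying negative exponents:

```python
import itertools


def raw_unit_string(numerator, denominator):
    """Chain every (name, exponent) pair with ' * ', as RawFormatter.format_unit
    does; denominator terms are assumed to carry negative exponents."""
    return " * ".join(
        k if v == 1 else f"{k} ** {v}"
        for k, v in itertools.chain(numerator, denominator)
    )
```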


@@ -0,0 +1,17 @@
"""
pint.delegates.txt_defparser
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Parser for the original textual Pint Definition file.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .defparser import DefParser
__all__ = [
"DefParser",
]


@@ -0,0 +1,52 @@
"""
pint.delegates.txt_defparser.block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Classes for Pint Blocks, which are defined by:
@<block name>
<content>
@end
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Generic, TypeVar
import flexparser as fp
from ..base_defparser import ParserConfig, PintParsedStatement
@dataclass(frozen=True)
class EndDirectiveBlock(PintParsedStatement):
"""An EndDirectiveBlock is simply an "@end" statement."""
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[EndDirectiveBlock]:
if s == "@end":
return cls()
return None
OPST = TypeVar("OPST", bound="PintParsedStatement")
IPST = TypeVar("IPST", bound="PintParsedStatement")
DefT = TypeVar("DefT")
@dataclass(frozen=True)
class DirectiveBlock(
Generic[DefT, OPST, IPST], fp.Block[OPST, IPST, EndDirectiveBlock, ParserConfig]
):
"""Directive blocks have beginning statement starting with a @ character.
and ending with a "@end" (captured using a EndDirectiveBlock).
Subclass this class for convenience.
"""
def derive_definition(self) -> DefT: ...


@@ -0,0 +1,59 @@
"""
pint.delegates.txt_defparser.common
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Definitions for parsing an Import Statement.
Also defines DefinitionSyntaxError.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from dataclasses import dataclass, field
import flexparser as fp
from ... import errors
from ..base_defparser import ParserConfig
@dataclass(frozen=True)
class DefinitionSyntaxError(errors.DefinitionSyntaxError, fp.ParsingError):
"""A syntax error was found in a definition. Combines:
DefinitionSyntaxError: which provides a message placeholder.
fp.ParsingError: which provides raw text, and start and end column and row
and an extra location attribute in which the filename or resource is stored.
"""
location: str = field(init=False, default="")
def __str__(self) -> str:
msg = (
self.msg + "\n " + (self.format_position or "") + " " + (self.raw or "")
)
if self.location:
msg += "\n " + self.location
return msg
def set_location(self, value: str) -> None:
super().__setattr__("location", value)
@dataclass(frozen=True)
class ImportDefinition(fp.IncludeStatement[ParserConfig]):
value: str
@property
def target(self) -> str:
return self.value
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[ImportDefinition]:
if s.startswith("@import"):
return ImportDefinition(s[len("@import") :].strip())
return None
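ImportDefinition's parsing is plain string slicing; a standalone sketch (`parse_import` is an illustrative name, not pint API):

```python
def parse_import(s: str):
    """Mirror ImportDefinition.from_string: strip the '@import' keyword
    and surrounding whitespace, returning the target or None."""
    if s.startswith("@import"):
        return s[len("@import"):].strip()
    return None
```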


@@ -0,0 +1,203 @@
"""
pint.delegates.txt_defparser.context
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Definitions for parsing Context and their related objects
Notice that some of the checks are done within the
format-agnostic parent definition class.
See each one for a slightly longer description of the
syntax.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import numbers
import re
import typing as ty
from dataclasses import dataclass
from typing import Union
import flexparser as fp
from ...facets.context import definitions
from ..base_defparser import ParserConfig, PintParsedStatement
from . import block, common, plain
# TODO check syntax
T = ty.TypeVar("T", bound="Union[ForwardRelation, BidirectionalRelation]")
def _from_string_and_context_sep(
cls: type[T], s: str, config: ParserConfig, separator: str
) -> T | None:
if separator not in s:
return None
if ":" not in s:
return None
rel, eq = s.split(":")
parts = rel.split(separator)
src, dst = (config.to_dimension_container(s) for s in parts)
return cls(src, dst, eq.strip())
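A standalone sketch of `_from_string_and_context_sep`'s splitting logic, using an illustrative `parse_relation` helper that returns plain strings instead of dimension containers:

```python
def parse_relation(s: str, separator: str):
    """Mirror _from_string_and_context_sep: '<src> <sep> <dst>: <equation>'.
    Returns (src, dst, equation), or None when the line is not a relation."""
    if separator not in s or ":" not in s:
        return None
    rel, eq = s.split(":")
    src, dst = (p.strip() for p in rel.split(separator))
    return src, dst, eq.strip()
```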
@dataclass(frozen=True)
class ForwardRelation(PintParsedStatement, definitions.ForwardRelation):
"""A relation connecting a dimension to another via a transformation function.
<source dimension> -> <target dimension>: <transformation function>
"""
@classmethod
def from_string_and_config(
cls, s: str, config: ParserConfig
) -> fp.NullableParsedResult[ForwardRelation]:
return _from_string_and_context_sep(cls, s, config, "->")
@dataclass(frozen=True)
class BidirectionalRelation(PintParsedStatement, definitions.BidirectionalRelation):
"""A bidirectional relation connecting a dimension to another
via a simple transformation function.
<source dimension> <-> <target dimension>: <transformation function>
"""
@classmethod
def from_string_and_config(
cls, s: str, config: ParserConfig
) -> fp.NullableParsedResult[BidirectionalRelation]:
return _from_string_and_context_sep(cls, s, config, "<->")
@dataclass(frozen=True)
class BeginContext(PintParsedStatement):
"""Being of a context directive.
@context[(defaults)] <canonical name> [= <alias>] [= <alias>]
"""
_header_re = re.compile(
r"@context\s*(?P<defaults>\(.*\))?\s+(?P<name>\w+)\s*(=(?P<aliases>.*))*"
)
name: str
aliases: tuple[str, ...]
defaults: dict[str, numbers.Number]
@classmethod
def from_string_and_config(
cls, s: str, config: ParserConfig
) -> fp.NullableParsedResult[BeginContext]:
try:
r = cls._header_re.search(s)
if r is None:
return None
name = r.groupdict()["name"].strip()
aliases = r.groupdict()["aliases"]
if aliases:
aliases = tuple(a.strip() for a in r.groupdict()["aliases"].split("="))
else:
aliases = ()
defaults = r.groupdict()["defaults"]
except Exception as exc:
return common.DefinitionSyntaxError(
f"Could not parse the Context header '{s}': {exc}"
)
if defaults:
txt = defaults
try:
defaults = (part.split("=") for part in defaults.strip("()").split(","))
defaults = {str(k).strip(): config.to_number(v) for k, v in defaults}
except (ValueError, TypeError) as exc:
return common.DefinitionSyntaxError(
f"Could not parse Context definition defaults '{txt}' {exc}"
)
else:
defaults = {}
return cls(name, tuple(aliases), defaults)
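The `_header_re` pattern above can be exercised in isolation; `parse_context_header` is an illustrative wrapper, not pint API:

```python
import re

# Same header regex as BeginContext above.
_header_re = re.compile(
    r"@context\s*(?P<defaults>\(.*\))?\s+(?P<name>\w+)\s*(=(?P<aliases>.*))*"
)


def parse_context_header(s: str):
    """Extract (name, aliases, raw defaults) from an @context header line."""
    r = _header_re.search(s)
    if r is None:
        return None
    d = r.groupdict()
    aliases = tuple(a.strip() for a in d["aliases"].split("=")) if d["aliases"] else ()
    return d["name"].strip(), aliases, d["defaults"]
```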
@dataclass(frozen=True)
class ContextDefinition(
block.DirectiveBlock[
definitions.ContextDefinition,
BeginContext,
ty.Union[
plain.CommentDefinition,
BidirectionalRelation,
ForwardRelation,
plain.UnitDefinition,
],
]
):
"""Definition of a Context
@context[(defaults)] <canonical name> [= <alias>] [= <alias>]
# units can be redefined within the context
<redefined unit> = <relation to another unit>
# can establish unidirectional relationships between dimensions
<dimension 1> -> <dimension 2>: <transformation function>
# can establish bidirectional relationships between dimensions
<dimension 3> <-> <dimension 4>: <transformation function>
@end
See BeginContext, Equality, ForwardRelation, BidirectionalRelation and
Comment for more parsing related information.
Example::
@context(n=1) spectroscopy = sp
# n index of refraction of the medium.
[length] <-> [frequency]: speed_of_light / n / value
[frequency] -> [energy]: planck_constant * value
[energy] -> [frequency]: value / planck_constant
# allow wavenumber / kayser
[wavenumber] <-> [length]: 1 / value
@end
"""
def derive_definition(self) -> definitions.ContextDefinition:
return definitions.ContextDefinition(
self.name, self.aliases, self.defaults, self.relations, self.redefinitions
)
@property
def name(self) -> str:
assert isinstance(self.opening, BeginContext)
return self.opening.name
@property
def aliases(self) -> tuple[str, ...]:
assert isinstance(self.opening, BeginContext)
return self.opening.aliases
@property
def defaults(self) -> dict[str, numbers.Number]:
assert isinstance(self.opening, BeginContext)
return self.opening.defaults
@property
def relations(self) -> tuple[BidirectionalRelation | ForwardRelation, ...]:
return tuple(
r
for r in self.body
if isinstance(r, (ForwardRelation, BidirectionalRelation))
)
@property
def redefinitions(self) -> tuple[plain.UnitDefinition, ...]:
return tuple(r for r in self.body if isinstance(r, plain.UnitDefinition))


@@ -0,0 +1,80 @@
"""
pint.delegates.txt_defparser.defaults
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Definitions for parsing Default sections.
See each one for a slightly longer description of the
syntax.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import typing as ty
from dataclasses import dataclass, fields
import flexparser as fp
from ...facets.plain import definitions
from ..base_defparser import PintParsedStatement
from . import block, plain
@dataclass(frozen=True)
class BeginDefaults(PintParsedStatement):
"""Being of a defaults directive.
@defaults
"""
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[BeginDefaults]:
if s.strip() == "@defaults":
return cls()
return None
@dataclass(frozen=True)
class DefaultsDefinition(
block.DirectiveBlock[
definitions.DefaultsDefinition,
BeginDefaults,
ty.Union[
plain.CommentDefinition,
plain.Equality,
],
]
):
"""Directive to store values.
@defaults
system = mks
@end
See Equality and Comment for more parsing related information.
"""
@property
def _valid_fields(self) -> tuple[str, ...]:
return tuple(f.name for f in fields(definitions.DefaultsDefinition))
def derive_definition(self) -> definitions.DefaultsDefinition:
for definition in self.filter_by(plain.Equality):
if definition.lhs not in self._valid_fields:
raise ValueError(
f"`{definition.lhs}` is not a valid key "
f"for the default section. {self._valid_fields}"
)
return definitions.DefaultsDefinition(
*tuple(self.get_key(key) for key in self._valid_fields)
)
def get_key(self, key: str) -> str:
for stmt in self.body:
if isinstance(stmt, plain.Equality) and stmt.lhs == key:
return stmt.rhs
raise KeyError(key)


@@ -0,0 +1,143 @@
from __future__ import annotations
import pathlib
import typing as ty
import flexcache as fc
import flexparser as fp
from ..base_defparser import ParserConfig
from . import block, common, context, defaults, group, plain, system
class PintRootBlock(
fp.RootBlock[
ty.Union[
plain.CommentDefinition,
common.ImportDefinition,
context.ContextDefinition,
defaults.DefaultsDefinition,
system.SystemDefinition,
group.GroupDefinition,
plain.AliasDefinition,
plain.DerivedDimensionDefinition,
plain.DimensionDefinition,
plain.PrefixDefinition,
plain.UnitDefinition,
],
ParserConfig,
]
):
pass
class _PintParser(fp.Parser[PintRootBlock, ParserConfig]):
"""Parser for the original Pint definition file, with cache."""
_delimiters = {
"#": (
fp.DelimiterInclude.SPLIT_BEFORE,
fp.DelimiterAction.CAPTURE_NEXT_TIL_EOL,
),
**fp.SPLIT_EOL,
}
_root_block_class = PintRootBlock
_strip_spaces = True
_diskcache: fc.DiskCache | None
def __init__(self, config: ParserConfig, *args: ty.Any, **kwargs: ty.Any):
self._diskcache = kwargs.pop("diskcache", None)
super().__init__(config, *args, **kwargs)
def parse_file(
self, path: pathlib.Path
) -> fp.ParsedSource[PintRootBlock, ParserConfig]:
if self._diskcache is None:
return super().parse_file(path)
content, _basename = self._diskcache.load(path, super().parse_file)
return content
class DefParser:
skip_classes: tuple[type, ...] = (
fp.BOF,
fp.BOR,
fp.BOS,
fp.EOS,
plain.CommentDefinition,
)
def __init__(self, default_config: ParserConfig, diskcache: fc.DiskCache):
self._default_config = default_config
self._diskcache = diskcache
def iter_parsed_project(
self, parsed_project: fp.ParsedProject[PintRootBlock, ParserConfig]
) -> ty.Generator[fp.ParsedStatement[ParserConfig], None, None]:
last_location = None
for stmt in parsed_project.iter_blocks():
if isinstance(stmt, fp.BOS):
if isinstance(stmt, fp.BOF):
last_location = str(stmt.path)
continue
elif isinstance(stmt, fp.BOR):
last_location = (
f"[package: {stmt.package}, resource: {stmt.resource_name}]"
)
continue
else:
last_location = "orphan string"
continue
if isinstance(stmt, self.skip_classes):
continue
assert isinstance(last_location, str)
if isinstance(stmt, common.DefinitionSyntaxError):
stmt.set_location(last_location)
raise stmt
elif isinstance(stmt, block.DirectiveBlock):
for exc in stmt.errors:
exc = common.DefinitionSyntaxError(str(exc))
exc.set_position(*stmt.get_position())
exc.set_raw(
(stmt.opening.raw or "") + " [...] " + (stmt.closing.raw or "")
)
exc.set_location(last_location)
raise exc
try:
yield stmt.derive_definition()
except Exception as exc:
exc = common.DefinitionSyntaxError(str(exc))
exc.set_position(*stmt.get_position())
exc.set_raw(stmt.opening.raw + " [...] " + stmt.closing.raw)
exc.set_location(last_location)
raise exc
else:
yield stmt
def parse_file(
self, filename: pathlib.Path | str, cfg: ParserConfig | None = None
) -> fp.ParsedProject[PintRootBlock, ParserConfig]:
return fp.parse(
filename,
_PintParser,
cfg or self._default_config,
diskcache=self._diskcache,
strip_spaces=True,
delimiters=_PintParser._delimiters,
)
def parse_string(
self, content: str, cfg: ParserConfig | None = None
) -> fp.ParsedProject[PintRootBlock, ParserConfig]:
return fp.parse_bytes(
content.encode("utf-8"),
_PintParser,
cfg or self._default_config,
diskcache=self._diskcache,
strip_spaces=True,
delimiters=_PintParser._delimiters,
)


@@ -0,0 +1,111 @@
"""
pint.delegates.txt_defparser.group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Definitions for parsing Group and their related objects
Notice that some of the checks are done within the
format-agnostic parent definition class.
See each one for a slightly longer description of the
syntax.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import re
import typing as ty
from dataclasses import dataclass
import flexparser as fp
from ...facets.group import definitions
from ..base_defparser import PintParsedStatement
from . import block, common, plain
@dataclass(frozen=True)
class BeginGroup(PintParsedStatement):
"""Being of a group directive.
@group <name> [using <group 1>, ..., <group N>]
"""
#: Regex to match the header parts of a definition.
_header_re = re.compile(r"@group\s+(?P<name>\w+)\s*(using\s(?P<used_groups>.*))*")
name: str
using_group_names: ty.Tuple[str, ...]
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[BeginGroup]:
if not s.startswith("@group"):
return None
r = cls._header_re.search(s)
if r is None:
return common.DefinitionSyntaxError(f"Invalid Group header syntax: '{s}'")
name = r.groupdict()["name"].strip()
groups = r.groupdict()["used_groups"]
if groups:
parent_group_names = tuple(a.strip() for a in groups.split(","))
else:
parent_group_names = ()
return cls(name, parent_group_names)
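The group header regex can likewise be exercised standalone; `parse_group_header` is an illustrative wrapper, not pint API:

```python
import re

# Same header regex as BeginGroup above.
_header_re = re.compile(r"@group\s+(?P<name>\w+)\s*(using\s(?P<used_groups>.*))*")


def parse_group_header(s: str):
    """Extract (name, parent group names) from an @group header line."""
    r = _header_re.search(s)
    if r is None:
        return None
    name = r.groupdict()["name"].strip()
    groups = r.groupdict()["used_groups"]
    parents = tuple(a.strip() for a in groups.split(",")) if groups else ()
    return name, parents
```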
@dataclass(frozen=True)
class GroupDefinition(
block.DirectiveBlock[
definitions.GroupDefinition,
BeginGroup,
ty.Union[
plain.CommentDefinition,
plain.UnitDefinition,
],
]
):
"""Definition of a group.
@group <name> [using <group 1>, ..., <group N>]
<definition 1>
...
<definition N>
@end
See UnitDefinition and Comment for more parsing related information.
Example::
@group AvoirdupoisUS using Avoirdupois
US_hundredweight = hundredweight = US_cwt
US_ton = ton
US_force_ton = force_ton = _ = US_ton_force
@end
"""
def derive_definition(self) -> definitions.GroupDefinition:
return definitions.GroupDefinition(
self.name, self.using_group_names, self.definitions
)
@property
def name(self) -> str:
assert isinstance(self.opening, BeginGroup)
return self.opening.name
@property
def using_group_names(self) -> tuple[str, ...]:
assert isinstance(self.opening, BeginGroup)
return self.opening.using_group_names
@property
def definitions(self) -> tuple[plain.UnitDefinition, ...]:
return tuple(el for el in self.body if isinstance(el, plain.UnitDefinition))


@@ -0,0 +1,279 @@
"""
pint.delegates.txt_defparser.plain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Definitions for parsing:
- Equality
- CommentDefinition
- PrefixDefinition
- UnitDefinition
- DimensionDefinition
- DerivedDimensionDefinition
- AliasDefinition
Notice that some of the checks are done within the
format-agnostic parent definition class.
See each one for a slightly longer description of the
syntax.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from dataclasses import dataclass
import flexparser as fp
from ...converters import Converter
from ...facets.plain import definitions
from ...util import UnitsContainer
from ..base_defparser import ParserConfig, PintParsedStatement
from . import common
@dataclass(frozen=True)
class Equality(PintParsedStatement, definitions.Equality):
"""An equality statement contains a left and right hand separated
lhs and rhs should be space stripped.
"""
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[Equality]:
if "=" not in s:
return None
parts = [p.strip() for p in s.split("=")]
if len(parts) != 2:
return common.DefinitionSyntaxError(
f"Exactly two terms expected, not {len(parts)} (`{s}`)"
)
return cls(*parts)
@dataclass(frozen=True)
class CommentDefinition(PintParsedStatement, definitions.CommentDefinition):
"""Comments start with a # character.
# This is a comment.
## This is also a comment.
The captured value does not include the leading # character and is space stripped.
"""
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[CommentDefinition]:
if not s.startswith("#"):
return None
return cls(s[1:].strip())
@dataclass(frozen=True)
class PrefixDefinition(PintParsedStatement, definitions.PrefixDefinition):
"""Definition of a prefix::
<prefix>- = <value> [= <symbol>] [= <alias>] [ = <alias> ] [...]
Example::
deca- = 1e+1 = da- = deka-
"""
@classmethod
def from_string_and_config(
cls, s: str, config: ParserConfig
) -> fp.NullableParsedResult[PrefixDefinition]:
if "=" not in s:
return None
name, value, *aliases = s.split("=")
name = name.strip()
if not name.endswith("-"):
return None
name = name.rstrip("-")
aliases = tuple(alias.strip().rstrip("-") for alias in aliases)
defined_symbol = None
if aliases:
if aliases[0] == "_":
aliases = aliases[1:]
else:
defined_symbol, *aliases = aliases
aliases = tuple(alias for alias in aliases if alias not in ("", "_"))
try:
value = config.to_number(value)
except definitions.NotNumeric as ex:
return common.DefinitionSyntaxError(
f"Prefix definition ('{name}') must contain only numbers, not {ex.value}"
)
try:
return cls(name, value, defined_symbol, aliases)
except Exception as exc:
return common.DefinitionSyntaxError(str(exc))
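The splitting logic of `PrefixDefinition.from_string_and_config` can be sketched standalone (`parse_prefix` is an illustrative name; the pint Converter machinery is replaced by a plain `float`):

```python
def parse_prefix(s: str):
    """Mirror PrefixDefinition's splitting: a trailing '-' marks a prefix;
    the first alias slot is the symbol, and '_' means 'no symbol'."""
    if "=" not in s:
        return None
    name, value, *aliases = s.split("=")
    name = name.strip()
    if not name.endswith("-"):
        return None
    name = name.rstrip("-")
    aliases = tuple(alias.strip().rstrip("-") for alias in aliases)
    defined_symbol = None
    if aliases:
        if aliases[0] == "_":
            aliases = aliases[1:]
        else:
            defined_symbol, *aliases = aliases
    aliases = tuple(alias for alias in aliases if alias not in ("", "_"))
    return name, float(value), defined_symbol, aliases
```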
@dataclass(frozen=True)
class UnitDefinition(PintParsedStatement, definitions.UnitDefinition):
"""Definition of a unit::
<canonical name> = <relation to another unit or dimension> [= <symbol>] [= <alias>] [ = <alias> ] [...]
Example::
millennium = 1e3 * year = _ = millennia
Parameters
----------
reference : UnitsContainer
Reference units.
is_base : bool
Indicates if it is a base unit.
"""
@classmethod
def from_string_and_config(
cls, s: str, config: ParserConfig
) -> fp.NullableParsedResult[UnitDefinition]:
if "=" not in s:
return None
name, value, *aliases = (p.strip() for p in s.split("="))
defined_symbol = None
if aliases:
if aliases[0] == "_":
aliases = aliases[1:]
else:
defined_symbol, *aliases = aliases
aliases = tuple(alias for alias in aliases if alias not in ("", "_"))
if ";" in value:
[converter, modifiers] = value.split(";", 1)
try:
modifiers = {
key.strip(): config.to_number(value)
for key, value in (part.split(":") for part in modifiers.split(";"))
}
except definitions.NotNumeric as ex:
return common.DefinitionSyntaxError(
f"Unit definition ('{name}') must contain only numbers in modifier, not {ex.value}"
)
else:
converter = value
modifiers = {}
converter = config.to_scaled_units_container(converter)
try:
reference = UnitsContainer(converter)
# reference = converter.to_units_container()
except common.DefinitionSyntaxError as ex:
return common.DefinitionSyntaxError(f"While defining {name}: {ex}")
try:
converter = Converter.from_arguments(scale=converter.scale, **modifiers)
except Exception as ex:
return common.DefinitionSyntaxError(
f"Unable to assign a converter to the unit {ex}"
)
try:
return cls(name, defined_symbol, tuple(aliases), converter, reference)
except Exception as ex:
return common.DefinitionSyntaxError(str(ex))
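The name/value/symbol/alias splitting in `UnitDefinition.from_string_and_config` can be sketched on its own (`split_unit_definition` is an illustrative name; conversion of the value into a pint Converter is omitted):

```python
def split_unit_definition(s: str):
    """Mirror UnitDefinition's splitting of '<name> = <value> [= <symbol>]
    [= <alias>] ...'; '_' in the symbol slot means 'no symbol'."""
    if "=" not in s:
        return None
    name, value, *aliases = (p.strip() for p in s.split("="))
    defined_symbol = None
    if aliases:
        if aliases[0] == "_":
            aliases = aliases[1:]
        else:
            defined_symbol, *aliases = aliases
    aliases = tuple(alias for alias in aliases if alias not in ("", "_"))
    return name, value, defined_symbol, aliases
```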
@dataclass(frozen=True)
class DimensionDefinition(PintParsedStatement, definitions.DimensionDefinition):
"""Definition of a root dimension::
[dimension name]
Example::
[volume]
"""
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[DimensionDefinition]:
s = s.strip()
if not (s.startswith("[") and "=" not in s):
return None
return cls(s)
@dataclass(frozen=True)
class DerivedDimensionDefinition(
PintParsedStatement, definitions.DerivedDimensionDefinition
):
"""Definition of a derived dimension::
[dimension name] = <relation to other dimensions>
Example::
[density] = [mass] / [volume]
"""
@classmethod
def from_string_and_config(
cls, s: str, config: ParserConfig
) -> fp.NullableParsedResult[DerivedDimensionDefinition]:
if not (s.startswith("[") and "=" in s):
return None
name, value, *aliases = s.split("=")
if aliases:
return common.DefinitionSyntaxError(
"Derived dimensions cannot have aliases."
)
try:
reference = config.to_dimension_container(value)
except common.DefinitionSyntaxError as exc:
return common.DefinitionSyntaxError(
f"In {name} derived dimensions must only be referenced "
f"to dimensions. {exc}"
)
try:
return cls(name.strip(), reference)
except Exception as exc:
return common.DefinitionSyntaxError(str(exc))
@dataclass(frozen=True)
class AliasDefinition(PintParsedStatement, definitions.AliasDefinition):
"""Additional alias(es) for an already existing unit::
@alias <canonical name or previous alias> = <alias> [ = <alias> ] [...]
Example::
@alias meter = my_meter
"""
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[AliasDefinition]:
if not s.startswith("@alias "):
return None
name, *aliases = s[len("@alias ") :].split("=")
try:
return cls(name.strip(), tuple(alias.strip() for alias in aliases))
except Exception as exc:
return common.DefinitionSyntaxError(str(exc))


@@ -0,0 +1,117 @@
"""
pint.delegates.txt_defparser.system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import re
import typing as ty
from dataclasses import dataclass
import flexparser as fp
from ...facets.system import definitions
from ..base_defparser import PintParsedStatement
from . import block, common, plain
@dataclass(frozen=True)
class BaseUnitRule(PintParsedStatement, definitions.BaseUnitRule):
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[BaseUnitRule]:
if ":" not in s:
return cls(s.strip())
parts = [p.strip() for p in s.split(":")]
if len(parts) != 2:
return common.DefinitionSyntaxError(
f"Exactly two terms expected for rule, not {len(parts)} (`{s}`)"
)
return cls(*parts)
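The rule parsing above is a two-way split; a standalone sketch (`parse_base_unit_rule` is an illustrative name):

```python
def parse_base_unit_rule(s: str):
    """Mirror BaseUnitRule.from_string: 'new_unit : old_unit'; a bare name
    means the old unit (and the colon) were omitted."""
    if ":" not in s:
        return (s.strip(),)
    parts = [p.strip() for p in s.split(":")]
    if len(parts) != 2:
        raise ValueError(f"Exactly two terms expected for rule, not {len(parts)} (`{s}`)")
    return tuple(parts)
```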
@dataclass(frozen=True)
class BeginSystem(PintParsedStatement):
"""Being of a system directive.
@system <name> [using <group 1>, ..., <group N>]
"""
#: Regex to match the header parts of a system directive.
_header_re = re.compile(r"@system\s+(?P<name>\w+)\s*(using\s(?P<used_groups>.*))*")
name: str
using_group_names: ty.Tuple[str, ...]
@classmethod
def from_string(cls, s: str) -> fp.NullableParsedResult[BeginSystem]:
if not s.startswith("@system"):
return None
r = cls._header_re.search(s)
if r is None:
raise ValueError("Invalid System header syntax '%s'" % s)
name = r.groupdict()["name"].strip()
groups = r.groupdict()["used_groups"]
# If the system has no groups, it automatically uses the root group.
if groups:
group_names = tuple(a.strip() for a in groups.split(","))
else:
group_names = ("root",)
return cls(name, group_names)
@dataclass(frozen=True)
class SystemDefinition(
block.DirectiveBlock[
definitions.SystemDefinition,
BeginSystem,
ty.Union[plain.CommentDefinition, BaseUnitRule],
]
):
"""Definition of a System:
@system <name> [using <group 1>, ..., <group N>]
<rule 1>
...
<rule N>
@end
See Rule and Comment for more parsing related information.
The syntax for the rule is:
new_unit_name : old_unit_name
where:
- old_unit_name: a root unit part which is going to be removed from the system.
- new_unit_name: a non root unit which is going to replace the old_unit.
If the new_unit_name and the old_unit_name are the same, the latter and the colon can be omitted.
"""
def derive_definition(self) -> definitions.SystemDefinition:
return definitions.SystemDefinition(
self.name, self.using_group_names, self.rules
)
@property
def name(self) -> str:
assert isinstance(self.opening, BeginSystem)
return self.opening.name
@property
def using_group_names(self) -> tuple[str, ...]:
assert isinstance(self.opening, BeginSystem)
return self.opening.using_group_names
@property
def rules(self) -> tuple[BaseUnitRule, ...]:
return tuple(el for el in self.body if isinstance(el, BaseUnitRule))


@@ -0,0 +1,255 @@
"""
pint.errors
~~~~~~~~~~~
Functions and classes related to unit definitions and conversions.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import typing as ty
from dataclasses import dataclass, fields
OFFSET_ERROR_DOCS_HTML = "https://pint.readthedocs.io/en/stable/user/nonmult.html"
LOG_ERROR_DOCS_HTML = "https://pint.readthedocs.io/en/stable/user/log_units.html"
MSG_INVALID_UNIT_NAME = "is not a valid unit name (must follow Python identifier rules)"
MSG_INVALID_UNIT_SYMBOL = "is not a valid unit symbol (must not contain spaces)"
MSG_INVALID_UNIT_ALIAS = "is not a valid unit alias (must not contain spaces)"
MSG_INVALID_PREFIX_NAME = (
"is not a valid prefix name (must follow Python identifier rules)"
)
MSG_INVALID_PREFIX_SYMBOL = "is not a valid prefix symbol (must not contain spaces)"
MSG_INVALID_PREFIX_ALIAS = "is not a valid prefix alias (must not contain spaces)"
MSG_INVALID_DIMENSION_NAME = "is not a valid dimension name (must follow Python identifier rules and enclosed by square brackets)"
MSG_INVALID_CONTEXT_NAME = (
"is not a valid context name (must follow Python identifier rules)"
)
MSG_INVALID_GROUP_NAME = "is not a valid group name (must not contain spaces)"
MSG_INVALID_SYSTEM_NAME = (
"is not a valid system name (must follow Python identifier rules)"
)
def is_dim(name: str) -> bool:
"""Return True if the name is flanked by square brackets `[` and `]`."""
return name[0] == "[" and name[-1] == "]"
def is_valid_prefix_name(name: str) -> bool:
"""Return True if the name is a valid python identifier or empty."""
return str.isidentifier(name) or name == ""
is_valid_unit_name = is_valid_system_name = is_valid_context_name = str.isidentifier
def _no_space(name: str) -> bool:
"""Return False if the name contains a space in any position."""
return name.strip() == name and " " not in name
is_valid_group_name = _no_space
is_valid_unit_alias = is_valid_prefix_alias = is_valid_unit_symbol = (
is_valid_prefix_symbol
) = _no_space
def is_valid_dimension_name(name: str) -> bool:
"""Return True if the name is consistent with a dimension name.
- flanked by square brackets.
- empty dimension name or identifier.
"""
# TODO: shall we also check for spaces?
return name == "[]" or (
len(name) > 1 and is_dim(name) and str.isidentifier(name[1:-1])
)
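The dimension-name rule above can be sketched standalone. This is a minimal reimplementation for illustration (not an import from the vendored module):

```python
# Standalone sketch of the dimension-name validation above:
# a dimension name is "[]" or an identifier flanked by square brackets.
def is_dim(name: str) -> bool:
    """Return True if the name is flanked by square brackets."""
    return name[0] == "[" and name[-1] == "]"


def is_valid_dimension_name(name: str) -> bool:
    return name == "[]" or (
        len(name) > 1 and is_dim(name) and name[1:-1].isidentifier()
    )


assert is_valid_dimension_name("[length]")
assert is_valid_dimension_name("[]")
assert not is_valid_dimension_name("length")  # missing brackets
assert not is_valid_dimension_name("[2nd]")   # not an identifier
```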
class WithDefErr:
"""Mixing class to make some classes more readable."""
def def_err(self, msg: str):
return DefinitionError(self.name, self.__class__, msg)
@dataclass(frozen=True)
class PintError(Exception):
"""Base exception for all Pint errors."""
@dataclass(frozen=True)
class DefinitionError(ValueError, PintError):
"""Raised when a definition is not properly constructed."""
name: str
definition_type: type
msg: str
def __str__(self):
msg = f"Cannot define '{self.name}' ({self.definition_type}): {self.msg}"
return msg
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class DefinitionSyntaxError(ValueError, PintError):
"""Raised when a textual definition has a syntax error."""
msg: str
def __str__(self):
return self.msg
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class RedefinitionError(ValueError, PintError):
"""Raised when a unit or prefix is redefined."""
name: str
definition_type: type
def __str__(self):
msg = f"Cannot redefine '{self.name}' ({self.definition_type})"
return msg
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class UndefinedUnitError(AttributeError, PintError):
"""Raised when the units are not defined in the unit registry."""
unit_names: str | tuple[str, ...]
def __str__(self):
if isinstance(self.unit_names, str):
return f"'{self.unit_names}' is not defined in the unit registry"
if (
isinstance(self.unit_names, (tuple, list, set))
and len(self.unit_names) == 1
):
return f"'{tuple(self.unit_names)[0]}' is not defined in the unit registry"
return f"{tuple(self.unit_names)} are not defined in the unit registry"
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class PintTypeError(TypeError, PintError):
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class DimensionalityError(PintTypeError):
"""Raised when trying to convert between incompatible units."""
units1: ty.Any
units2: ty.Any
dim1: str = ""
dim2: str = ""
extra_msg: str = ""
def __str__(self):
if self.dim1 or self.dim2:
dim1 = f" ({self.dim1})"
dim2 = f" ({self.dim2})"
else:
dim1 = ""
dim2 = ""
return (
f"Cannot convert from '{self.units1}'{dim1} to "
f"'{self.units2}'{dim2}{self.extra_msg}"
)
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class OffsetUnitCalculusError(PintTypeError):
"""Raised on ambiguous operations with offset units."""
units1: ty.Any
units2: ty.Optional[ty.Any] = None
def yield_units(self):
yield self.units1
if self.units2:
yield self.units2
def __str__(self):
return (
"Ambiguous operation with offset unit (%s)."
% ", ".join(str(u) for u in self.yield_units())
+ " See "
+ OFFSET_ERROR_DOCS_HTML
+ " for guidance."
)
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class LogarithmicUnitCalculusError(PintTypeError):
"""Raised on inappropriate operations with logarithmic units."""
units1: ty.Any
units2: ty.Optional[ty.Any] = None
def yield_units(self):
yield self.units1
if self.units2:
yield self.units2
def __str__(self):
return (
"Ambiguous operation with logarithmic unit (%s)."
% ", ".join(str(u) for u in self.yield_units())
+ " See "
+ LOG_ERROR_DOCS_HTML
+ " for guidance."
)
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class UnitStrippedWarning(UserWarning, PintError):
msg: str
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class UnexpectedScaleInContainer(Exception):
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
@dataclass(frozen=True)
class UndefinedBehavior(UserWarning, PintError):
msg: str
def __reduce__(self):
return self.__class__, tuple(getattr(self, f.name) for f in fields(self))
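Every error class above defines the same ``__reduce__``. The reason is that a frozen dataclass inheriting from an exception does not round-trip through pickle by default, because its generated ``__init__`` signature differs from ``Exception.args``. A minimal sketch with a hypothetical ``DemoError``:

```python
import pickle
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class DemoError(ValueError):
    name: str
    msg: str

    def __reduce__(self):
        # Recreate the instance from its dataclass fields when unpickling,
        # mirroring the pattern used by the Pint error classes above.
        return self.__class__, tuple(getattr(self, f.name) for f in fields(self))


err = pickle.loads(pickle.dumps(DemoError("metre", "bad definition")))
assert (err.name, err.msg) == ("metre", "bad definition")
```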

View file

@@ -0,0 +1,106 @@
"""
pint.facets
~~~~~~~~~~~
Facets are a way to add a specific set of functionalities to Pint. It is more
an organization logic than anything else. It aims to enable growth while
keeping each part small enough to be hackable.
Each facet contains one or more of the following modules:
- definitions: classes describing specific unit-related definitions.
These objects must be immutable, picklable and must not reference the registry (e.g. ContextDefinition)
- objects: classes and functions that encapsulate behavior (e.g. Context)
- registry: implements a subclass of PlainRegistry or a class that can be
mixed with it (e.g. ContextRegistry)
In certain cases, some of these modules might be collapsed into a single one
when the code is very short (like in dask) or expanded when the code is too long
(like in plain, where the quantity and unit objects are in their own modules).
Additionally, certain facets might lack some of these modules.
An important part of this scheme is that each facet should export only a few
classes in its __init__.py, and everything else should not be accessed by any
other module (except for testing). This is Python, so access cannot really be
limited; it is more an agreement than a rule.
It is worth noticing that a Pint Quantity or Unit is always connected to a
*specific* registry. Therefore we need to provide a way in which functionality
can be added to a Quantity class in an easy way. This is achieved beautifully
using specific class attributes. For example, the NumpyRegistry looks like this:
class NumpyRegistry:
Quantity = NumpyQuantity
Unit = NumpyUnit
This tells pint that it should use NumpyQuantity as base class for a quantity
class that belongs to a registry that has NumpyRegistry as one of its bases.
Currently the following facets are implemented:
- plain: basic manipulation and calculation with multiplicative
dimensions, units and quantities (e.g. length, time, mass, etc).
- nonmultiplicative: manipulation and calculation with offset and
log units and quantities (e.g. temperature and decibel).
- measurement: manipulation and calculation of a quantity with
an uncertainty.
- numpy: using numpy array as magnitude and properly handling
numpy functions operating on quantities.
- dask: allows pint to interoperate with dask by implementing
dask magic methods.
- group: allows making collections of units that can then be
addressed together.
- system: redefine base units for dimensions for a particular
collection of units (e.g. imperial)
- context: provides the means to interconvert between incompatible
units through well defined relations (e.g. spectroscopy allows
converting between spatial wavelength and temporal frequency)
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .context import ContextRegistry, GenericContextRegistry
from .dask import DaskRegistry, GenericDaskRegistry
from .group import GenericGroupRegistry, GroupRegistry
from .measurement import GenericMeasurementRegistry, MeasurementRegistry
from .nonmultiplicative import (
GenericNonMultiplicativeRegistry,
NonMultiplicativeRegistry,
)
from .numpy import GenericNumpyRegistry, NumpyRegistry
from .plain import GenericPlainRegistry, MagnitudeT, PlainRegistry, QuantityT, UnitT
from .system import GenericSystemRegistry, SystemRegistry
__all__ = [
"ContextRegistry",
"DaskRegistry",
"FormattingRegistry",
"GroupRegistry",
"MeasurementRegistry",
"NonMultiplicativeRegistry",
"NumpyRegistry",
"PlainRegistry",
"SystemRegistry",
"GenericContextRegistry",
"GenericDaskRegistry",
"GenericFormattingRegistry",
"GenericGroupRegistry",
"GenericMeasurementRegistry",
"GenericNonMultiplicativeRegistry",
"GenericNumpyRegistry",
"GenericPlainRegistry",
"GenericSystemRegistry",
"QuantityT",
"UnitT",
"MagnitudeT",
]
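The class-attribute composition described in the module docstring above can be sketched standalone. This is a hypothetical simplification, not Pint's actual machinery: each registry mixin declares its ``Quantity`` base, and the final registry class derives a combined ``Quantity`` from every base found along the MRO.

```python
class PlainQuantity:
    pass


class NumpyQuantity(PlainQuantity):
    pass


class PlainRegistry:
    Quantity = PlainQuantity

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Collect the Quantity base declared by each registry class in the
        # MRO, preserving order and dropping duplicates.
        bases = tuple(dict.fromkeys(
            c.__dict__["Quantity"]
            for c in cls.__mro__
            if "Quantity" in c.__dict__
        ))
        cls.Quantity = type("Quantity", bases, {})


class NumpyRegistry:
    Quantity = NumpyQuantity


class UnitRegistry(NumpyRegistry, PlainRegistry):
    pass


assert issubclass(UnitRegistry.Quantity, NumpyQuantity)
assert issubclass(UnitRegistry.Quantity, PlainQuantity)
```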

View file

@@ -0,0 +1,18 @@
"""
pint.facets.context
~~~~~~~~~~~~~~~~~~~
Adds to Pint the capability to handle contexts: predefined conversions
between incompatible dimensions.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .definitions import ContextDefinition
from .objects import Context
from .registry import ContextRegistry, GenericContextRegistry
__all__ = ["ContextDefinition", "Context", "ContextRegistry", "GenericContextRegistry"]

View file

@@ -0,0 +1,157 @@
"""
pint.facets.context.definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import itertools
import numbers
import re
from collections.abc import Callable, Iterable
from dataclasses import dataclass
from typing import TYPE_CHECKING
from ... import errors
from ..plain import UnitDefinition
if TYPE_CHECKING:
from ..._typing import Quantity, UnitsContainer
@dataclass(frozen=True)
class Relation:
"""Base class for a relation between different dimensionalities."""
_varname_re = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")
#: Source dimensionality
src: UnitsContainer
#: Destination dimensionality
dst: UnitsContainer
#: Equation connecting both dimensionalities from which the transformation
#: will be built.
equation: str
# Instead of defining __post_init__ here,
# it will be added to the container class
# so that the name and a meaningful class
# could be used.
@property
def variables(self) -> set[str]:
"""Find all variables names in the equation."""
return set(self._varname_re.findall(self.equation))
@property
def transformation(self) -> Callable[..., Quantity]:
"""Return a transformation callable that uses the registry
to parse the transformation equation.
"""
return lambda ureg, value, **kwargs: ureg.parse_expression(
self.equation, value=value, **kwargs
)
@property
def bidirectional(self) -> bool:
raise NotImplementedError
@dataclass(frozen=True)
class ForwardRelation(Relation):
"""A relation connecting a dimension to another via a transformation function.
<source dimension> -> <target dimension>: <transformation function>
"""
@property
def bidirectional(self) -> bool:
return False
@dataclass(frozen=True)
class BidirectionalRelation(Relation):
"""A bidirectional relation connecting a dimension to another
via a simple transformation function.
<source dimension> <-> <target dimension>: <transformation function>
"""
@property
def bidirectional(self) -> bool:
return True
@dataclass(frozen=True)
class ContextDefinition(errors.WithDefErr):
"""Definition of a Context"""
#: name of the context
name: str
#: other names for the context
aliases: tuple[str, ...]
defaults: dict[str, numbers.Number]
relations: tuple[Relation, ...]
redefinitions: tuple[UnitDefinition, ...]
@property
def variables(self) -> set[str]:
"""Return all variable names in all transformations."""
return set().union(*(r.variables for r in self.relations))
@classmethod
def from_lines(cls, lines: Iterable[str], non_int_type: type):
# TODO: this is to keep it backwards compatible
from ...delegates import ParserConfig, txt_defparser
cfg = ParserConfig(non_int_type)
parser = txt_defparser.DefParser(cfg, None)
pp = parser.parse_string("\n".join(lines) + "\n@end")
for definition in parser.iter_parsed_project(pp):
if isinstance(definition, cls):
return definition
def __post_init__(self):
if not errors.is_valid_context_name(self.name):
raise self.def_err(errors.MSG_INVALID_CONTEXT_NAME)
for k in self.aliases:
if not errors.is_valid_context_name(k):
raise self.def_err(
f"refers to '{k}' that " + errors.MSG_INVALID_CONTEXT_NAME
)
for relation in self.relations:
invalid = tuple(
itertools.filterfalse(
errors.is_valid_dimension_name, relation.src.keys()
)
) + tuple(
itertools.filterfalse(
errors.is_valid_dimension_name, relation.dst.keys()
)
)
if invalid:
raise self.def_err(
f"relation refers to {', '.join(invalid)} that "
+ errors.MSG_INVALID_DIMENSION_NAME
)
for definition in self.redefinitions:
if definition.symbol != definition.name or definition.aliases:
raise self.def_err(
"can't change a unit's symbol or aliases within a context"
)
if definition.is_base:
raise self.def_err("can't define plain units within a context")
missing_pars = set(self.defaults.keys()) - self.variables
if missing_pars:
raise self.def_err(
f"Context parameters {missing_pars} not found in any equation"
)
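The ``__post_init__`` above collects offending dimension names with ``itertools.filterfalse``, which keeps the items for which the predicate is False. A standalone sketch with a toy validator:

```python
import itertools


# Toy validator (the real check also requires the bracketed part to be a
# Python identifier; see is_valid_dimension_name above).
def is_valid_dimension_name(name: str) -> bool:
    return name.startswith("[") and name.endswith("]")


names = ["[length]", "time", "[mass]", "current"]
invalid = tuple(itertools.filterfalse(is_valid_dimension_name, names))
assert invalid == ("time", "current")
```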

View file

@@ -0,0 +1,336 @@
"""
pint.facets.context.objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import weakref
from collections import ChainMap, defaultdict
from collections.abc import Callable, Iterable
from typing import TYPE_CHECKING, Any, Generic, Protocol
from ..._typing import Magnitude
from ...facets.plain import MagnitudeT, PlainQuantity, PlainUnit, UnitDefinition
from ...util import UnitsContainer, to_units_container
from .definitions import ContextDefinition
if TYPE_CHECKING:
from ...registry import UnitRegistry
class Transformation(Protocol):
def __call__(
self, ureg: UnitRegistry, value: PlainQuantity, **kwargs: Any
) -> PlainQuantity: ...
from ..._typing import UnitLike
ToBaseFunc = Callable[[UnitsContainer], UnitsContainer]
SrcDst = tuple[UnitsContainer, UnitsContainer]
class ContextQuantity(Generic[MagnitudeT], PlainQuantity[MagnitudeT]):
pass
class ContextUnit(PlainUnit):
pass
class Context:
"""A specialized container that defines transformation functions from one
dimension to another. Each dimension is specified using a UnitsContainer.
Simple transformations are given with a function taking a single parameter.
Conversion functions may take optional keyword arguments and the context
can have default values for these arguments.
Additionally, a context may host redefinitions.
A redefinition must be performed among units that already exist in the registry. It
cannot change the dimensionality of a unit. The symbol and aliases are automatically
inherited from the registry.
See ContextDefinition for the definition file syntax.
Parameters
----------
name : str or None, optional
Name of the context (must be unique within the registry).
Use None for anonymous Context. (Default value = None).
aliases : iterable of str
Other names for the context.
defaults : None or dict
Maps variable names to values.
Example
-------
>>> from datasette.vendored.pint.util import UnitsContainer
>>> from datasette.vendored.pint import Context, UnitRegistry
>>> ureg = UnitRegistry()
>>> timedim = UnitsContainer({'[time]': 1})
>>> spacedim = UnitsContainer({'[length]': 1})
>>> def time_to_len(ureg, time):
... 'Time to length converter'
... return 3. * time
>>> c = Context()
>>> c.add_transformation(timedim, spacedim, time_to_len)
>>> c.transform(timedim, spacedim, ureg, 2)
6.0
>>> def time_to_len_indexed(ureg, time, n=1):
... 'Time to length converter, n is the index of refraction of the material'
... return 3. * time / n
>>> c = Context(defaults={'n':3})
>>> c.add_transformation(timedim, spacedim, time_to_len_indexed)
>>> c.transform(timedim, spacedim, ureg, 2)
2.0
>>> c.redefine("pound = 0.5 kg")
"""
def __init__(
self,
name: str | None = None,
aliases: tuple[str, ...] = tuple(),
defaults: dict[str, Any] | None = None,
) -> None:
self.name: str | None = name
self.aliases: tuple[str, ...] = aliases
#: Maps (src, dst) -> transformation function
self.funcs: dict[SrcDst, Transformation] = {}
#: Maps defaults variable names to values
self.defaults: dict[str, Any] = defaults or {}
# Store Definition objects that are context-specific
# TODO: narrow type this if possible.
self.redefinitions: list[Any] = []
# Flag set to True by the Registry the first time the context is enabled
self.checked = False
#: Maps (src, dst) -> self
#: Used as a convenience dictionary to be composed by ContextChain
self.relation_to_context: weakref.WeakValueDictionary[SrcDst, Context] = (
weakref.WeakValueDictionary()
)
@classmethod
def from_context(cls, context: Context, **defaults: Any) -> Context:
"""Creates a new context that shares the funcs dictionary with the
original context. The default values are copied from the original
context and updated with the new defaults.
If defaults is empty, return the same context.
Parameters
----------
context : pint.Context
Original context.
**defaults
Returns
-------
pint.Context
"""
if defaults:
newdef = dict(context.defaults, **defaults)
c = cls(context.name, context.aliases, newdef)
c.funcs = context.funcs
c.redefinitions = context.redefinitions
for edge in context.funcs:
c.relation_to_context[edge] = c
return c
return context
@classmethod
def from_lines(
cls,
lines: Iterable[str],
to_base_func: ToBaseFunc | None = None,
non_int_type: type = float,
) -> Context:
context_definition = ContextDefinition.from_lines(lines, non_int_type)
if context_definition is None:
raise ValueError(f"Could not define Context from {lines}")
return cls.from_definition(context_definition, to_base_func)
@classmethod
def from_definition(
cls, cd: ContextDefinition, to_base_func: ToBaseFunc | None = None
) -> Context:
ctx = cls(cd.name, cd.aliases, cd.defaults)
for definition in cd.redefinitions:
ctx._redefine(definition)
for relation in cd.relations:
try:
# TODO: check to_base_func. Is it a good API idea?
if to_base_func:
src = to_base_func(relation.src)
dst = to_base_func(relation.dst)
else:
src, dst = relation.src, relation.dst
ctx.add_transformation(src, dst, relation.transformation)
if relation.bidirectional:
ctx.add_transformation(dst, src, relation.transformation)
except Exception as exc:
raise ValueError(
f"Could not add Context {cd.name} relation {relation}"
) from exc
return ctx
def add_transformation(
self, src: UnitLike, dst: UnitLike, func: Transformation
) -> None:
"""Add a transformation function to the context."""
_key = self.__keytransform__(src, dst)
self.funcs[_key] = func
self.relation_to_context[_key] = self
def remove_transformation(self, src: UnitLike, dst: UnitLike) -> None:
"""Add a transformation function to the context."""
_key = self.__keytransform__(src, dst)
del self.funcs[_key]
del self.relation_to_context[_key]
@staticmethod
def __keytransform__(src: UnitLike, dst: UnitLike) -> SrcDst:
return to_units_container(src), to_units_container(dst)
def transform(
self, src: UnitLike, dst: UnitLike, registry: Any, value: Magnitude
) -> Magnitude:
"""Transform a value."""
_key = self.__keytransform__(src, dst)
func = self.funcs[_key]
return func(registry, value, **self.defaults)
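The ``add_transformation``/``transform`` pair above is a dictionary dispatch keyed on ``(src, dst)``. A minimal standalone sketch (keys here are plain strings; the real code normalizes them to ``UnitsContainer`` via ``__keytransform__``):

```python
# (src, dst) -> transformation function
funcs = {}


def add_transformation(src, dst, func):
    funcs[(src, dst)] = func


def transform(src, dst, registry, value, **defaults):
    # Look up the function registered for this (src, dst) pair and apply it.
    return funcs[(src, dst)](registry, value, **defaults)


add_transformation("[time]", "[length]", lambda ureg, t: 3.0 * t)
assert transform("[time]", "[length]", None, 2) == 6.0
```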
def redefine(self, definition: str) -> None:
"""Override the definition of a unit in the registry.
Parameters
----------
definition : str
<unit> = <new definition>``, e.g. ``pound = 0.5 kg``
"""
from ...delegates import ParserConfig, txt_defparser
# TODO: kept for backwards compatibility.
# this is not a good idea as we have no way of known the correct non_int_type
cfg = ParserConfig(float)
parser = txt_defparser.DefParser(cfg, None)
pp = parser.parse_string(definition)
for definition in parser.iter_parsed_project(pp):
if isinstance(definition, UnitDefinition):
self._redefine(definition)
def _redefine(self, definition: UnitDefinition):
self.redefinitions.append(definition)
def hashable(
self,
) -> tuple[
str | None,
tuple[str, ...],
frozenset[tuple[SrcDst, int]],
frozenset[tuple[str, Any]],
tuple[Any, ...],
]:
"""Generate a unique hashable and comparable representation of self, which can
be used as a key in a dict. This class cannot define ``__hash__`` because it is
mutable, and the Python interpreter does cache the output of ``__hash__``.
Returns
-------
tuple
"""
return (
self.name,
tuple(self.aliases),
frozenset((k, id(v)) for k, v in self.funcs.items()),
frozenset(self.defaults.items()),
tuple(self.redefinitions),
)
class ContextChain(ChainMap[SrcDst, Context]):
"""A specialized ChainMap for contexts that simplifies finding rules
to transform from one dimension to another.
"""
def __init__(self):
super().__init__()
self.contexts: list[Context] = []
self.maps.clear() # Remove default empty map
self._graph: dict[SrcDst, set[UnitsContainer]] | None = None
def insert_contexts(self, *contexts: Context):
"""Insert one or more contexts in reversed order the chained map.
(A rule in last context will take precedence)
To facilitate the identification of the context with the matching rule,
the *relation_to_context* dictionary of the context is used.
"""
self.contexts = list(reversed(contexts)) + self.contexts
self.maps = [ctx.relation_to_context for ctx in reversed(contexts)] + self.maps
self._graph = None
def remove_contexts(self, n: int | None = None):
"""Remove the last n inserted contexts from the chain.
Parameters
----------
n: int
(Default value = None)
"""
del self.contexts[:n]
del self.maps[:n]
self._graph = None
@property
def defaults(self) -> dict[str, Any]:
for ctx in self.values():
return ctx.defaults
return {}
@property
def graph(self):
"""The graph relating"""
if self._graph is None:
self._graph = defaultdict(set)
for fr_, to_ in self:
self._graph[fr_].add(to_)
return self._graph
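The ``graph`` property above builds an adjacency mapping that is later searched with ``find_shortest_path`` (see the registry's ``_convert``). A hypothetical stand-in for that helper, not Pint's actual implementation, using breadth-first search:

```python
from collections import deque


def find_shortest_path(graph, start, end):
    """Return the shortest path from start to end as a list of nodes,
    or None if no path exists (BFS over an adjacency mapping)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None


graph = {"[time]": {"[length]"}, "[length]": {"[mass]"}}
assert find_shortest_path(graph, "[time]", "[mass]") == ["[time]", "[length]", "[mass]"]
```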
# TODO: type registry
def transform(
self, src: UnitsContainer, dst: UnitsContainer, registry: Any, value: Magnitude
):
"""Transform the value, finding the rule in the chained context.
(A rule in the last context takes precedence.)
"""
return self[(src, dst)].transform(src, dst, registry, value)
def hashable(self) -> tuple[Any, ...]:
"""Generate a unique hashable and comparable representation of self, which can
be used as a key in a dict. This class cannot define ``__hash__`` because it is
mutable, and the Python interpreter does cache the output of ``__hash__``.
"""
return tuple(ctx.hashable() for ctx in self.contexts)
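The reason ``insert_contexts`` above prepends maps: in a ``ChainMap``, earlier maps shadow later ones, so the most recently inserted context wins for a given ``(src, dst)`` rule. A minimal sketch:

```python
from collections import ChainMap

older = {("[time]", "[length]"): "older context"}
newer = {("[time]", "[length]"): "newer context"}

chain = ChainMap()
chain.maps.clear()          # start with no maps, as ContextChain does
chain.maps[:0] = [older]    # first insertion
chain.maps[:0] = [newer]    # later insertion, prepended

# Lookup walks the maps front to back, so the later insertion shadows the earlier.
assert chain[("[time]", "[length]")] == "newer context"
```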

View file

@@ -0,0 +1,428 @@
"""
pint.facets.context.registry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
from collections import ChainMap
from collections.abc import Callable, Generator
from contextlib import contextmanager
from typing import Any, Generic
from ..._typing import F, Magnitude
from ...compat import TypeAlias
from ...errors import UndefinedUnitError
from ...util import UnitsContainer, find_connected_nodes, find_shortest_path, logger
from ..plain import GenericPlainRegistry, QuantityT, UnitDefinition, UnitT
from . import objects
from .definitions import ContextDefinition
# TODO: Put back annotation when possible
# registry_cache: "RegistryCache"
class ContextCacheOverlay:
"""Layer on top of the plain UnitRegistry cache, specific to a combination of
active contexts which contain unit redefinitions.
"""
def __init__(self, registry_cache) -> None:
self.dimensional_equivalents = registry_cache.dimensional_equivalents
self.root_units = {}
self.dimensionality = registry_cache.dimensionality
self.parse_unit = registry_cache.parse_unit
self.conversion_factor = {}
class GenericContextRegistry(
Generic[QuantityT, UnitT], GenericPlainRegistry[QuantityT, UnitT]
):
"""Handle of Contexts.
Conversion between units with different dimensions according
to previously established relations (contexts).
(e.g. in the spectroscopy, conversion between frequency and energy is possible)
Capabilities:
- Register contexts.
- Enable and disable contexts.
- Parse @context directive.
"""
Context: type[objects.Context] = objects.Context
def __init__(self, **kwargs: Any) -> None:
# Map context name (string) or abbreviation to context.
self._contexts: dict[str, objects.Context] = {}
# Stores active contexts.
self._active_ctx = objects.ContextChain()
# Map context chain to cache
self._caches = {}
# Map context chain to units override
self._context_units = {}
super().__init__(**kwargs)
# Allow contexts to add override layers to the units
self._units: ChainMap[str, UnitDefinition] = ChainMap(self._units)
def _register_definition_adders(self) -> None:
super()._register_definition_adders()
self._register_adder(ContextDefinition, self.add_context)
def add_context(self, context: objects.Context | ContextDefinition) -> None:
"""Add a context object to the registry.
The context will be accessible by its name and aliases.
Notice that this method will NOT enable the context;
see :meth:`enable_contexts`.
"""
if isinstance(context, ContextDefinition):
context = objects.Context.from_definition(context, self.get_dimensionality)
if not context.name:
raise ValueError("Can't add unnamed context to registry")
if context.name in self._contexts:
logger.warning(
"The name %s was already registered for another context.", context.name
)
self._contexts[context.name] = context
for alias in context.aliases:
if alias in self._contexts:
logger.warning(
"The name %s was already registered for another context.",
alias,
)
self._contexts[alias] = context
def remove_context(self, name_or_alias: str) -> objects.Context:
"""Remove a context from the registry and return it.
Notice that this method will not disable the context;
see :meth:`disable_contexts`.
"""
context = self._contexts[name_or_alias]
del self._contexts[context.name]
for alias in context.aliases:
del self._contexts[alias]
return context
def _build_cache(self, loaded_files=None) -> None:
super()._build_cache(loaded_files)
self._caches[()] = self._cache
def _switch_context_cache_and_units(self) -> None:
"""If any of the active contexts redefine units, create variant self._cache
and self._units specific to the combination of active contexts.
The next time this method is invoked with the same combination of contexts,
reuse the same variant self._cache and self._units as in the previous time.
"""
del self._units.maps[:-1]
units_overlay = any(ctx.redefinitions for ctx in self._active_ctx.contexts)
if not units_overlay:
# Use the default _cache and _units
self._cache = self._caches[()]
return
key = self._active_ctx.hashable()
try:
self._cache = self._caches[key]
self._units.maps.insert(0, self._context_units[key])
return
except KeyError:
pass
# First time using this specific combination of contexts and it contains
# unit redefinitions
base_cache = self._caches[()]
self._caches[key] = self._cache = ContextCacheOverlay(base_cache)
self._context_units[key] = units_overlay = {}
self._units.maps.insert(0, units_overlay)
on_redefinition_backup = self._on_redefinition
self._on_redefinition = "ignore"
try:
for ctx in reversed(self._active_ctx.contexts):
for definition in ctx.redefinitions:
self._redefine(definition)
finally:
self._on_redefinition = on_redefinition_backup
def _redefine(self, definition: UnitDefinition) -> None:
"""Redefine a unit from a context"""
# Find original definition in the UnitRegistry
candidates = self.parse_unit_name(definition.name)
if not candidates:
raise UndefinedUnitError(definition.name)
candidates_no_prefix = [c for c in candidates if not c[0]]
if not candidates_no_prefix:
raise ValueError(f"Can't redefine a unit with a prefix: {definition.name}")
assert len(candidates_no_prefix) == 1
_, name, _ = candidates_no_prefix[0]
try:
basedef = self._units[name]
except KeyError:
raise UndefinedUnitError(name)
# Rebuild definition as a variant of the plain
if basedef.is_base:
raise ValueError("Can't redefine a plain unit to a derived one")
dims_old = self._get_dimensionality(basedef.reference)
dims_new = self._get_dimensionality(definition.reference)
if dims_old != dims_new:
raise ValueError(
f"Can't change dimensionality of {basedef.name} "
f"from {dims_old} to {dims_new} in a context"
)
# Do not modify in place the original definition, as (1) the context may
# be shared by other registries, and (2) it would alter the cache key
definition = UnitDefinition(
name=basedef.name,
defined_symbol=basedef.symbol,
aliases=basedef.aliases,
reference=definition.reference,
converter=definition.converter,
)
# Write into the context-specific self._units.maps[0] and self._cache.root_units
self.define(definition)
def enable_contexts(
self, *names_or_contexts: str | objects.Context, **kwargs: Any
) -> None:
"""Enable contexts provided by name or by object.
Parameters
----------
*names_or_contexts :
one or more contexts or context names/aliases
**kwargs :
keyword arguments for the context(s)
Examples
--------
See :meth:`context`
"""
# If present, copy the defaults from the containing contexts
if self._active_ctx.defaults:
kwargs = dict(self._active_ctx.defaults, **kwargs)
# For each name, we first find the corresponding context
ctxs = [
self._contexts[name] if isinstance(name, str) else name
for name in names_or_contexts
]
# Check whether the contexts have already been checked; if not, make sure
# that dimensions are expressed in terms of plain dimensions.
for ctx in ctxs:
if ctx.checked:
continue
funcs_copy = dict(ctx.funcs)
for (src, dst), func in funcs_copy.items():
src_ = self._get_dimensionality(src)
dst_ = self._get_dimensionality(dst)
if src != src_ or dst != dst_:
ctx.remove_transformation(src, dst)
ctx.add_transformation(src_, dst_, func)
ctx.checked = True
# and create a new one with the new defaults.
contexts = tuple(objects.Context.from_context(ctx, **kwargs) for ctx in ctxs)
# Finally we add them to the active context.
self._active_ctx.insert_contexts(*contexts)
self._switch_context_cache_and_units()
def disable_contexts(self, n: int | None = None) -> None:
"""Disable the last n enabled contexts.
Parameters
----------
n : int
Number of contexts to disable. Default: disable all contexts.
"""
self._active_ctx.remove_contexts(n)
self._switch_context_cache_and_units()
@contextmanager
def context(
self: GenericContextRegistry[QuantityT, UnitT], *names: str, **kwargs: Any
) -> Generator[GenericContextRegistry[QuantityT, UnitT], None, None]:
"""Used as a context manager, this function enables to activate a context
which is removed after usage.
Parameters
----------
*names : name(s) of the context(s).
**kwargs : keyword arguments for the contexts.
Examples
--------
Context can be called by their name:
>>> import pint.facets.context.objects
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> ureg.add_context(pint.facets.context.objects.Context('one'))
>>> ureg.add_context(pint.facets.context.objects.Context('two'))
>>> with ureg.context('one'):
... pass
If a context has an argument, you can specify its value as a keyword argument:
>>> with ureg.context('one', n=1):
... pass
Multiple contexts can be entered in a single call:
>>> with ureg.context('one', 'two', n=1):
... pass
Or nested, allowing you to give different values to the same keyword argument:
>>> with ureg.context('one', n=1):
... with ureg.context('two', n=2):
... pass
A nested context inherits the defaults from the containing context:
>>> with ureg.context('one', n=1):
... # Here n takes the value of the outer context
... with ureg.context('two'):
... pass
"""
# Enable the contexts.
self.enable_contexts(*names, **kwargs)
try:
# After adding the context and rebuilding the graph, the registry
# is ready to use.
yield self
finally:
# Upon leaving the with statement,
# the added contexts are removed from the active one.
self.disable_contexts(len(names))
def with_context(self, name: str, **kwargs: Any) -> Callable[[F], F]:
"""Decorator to wrap a function call in a Pint context.
Use it to ensure that a certain context is active when
calling a function.
Parameters
----------
name :
name of the context.
**kwargs :
keyword arguments for the context
Returns
-------
callable: the wrapped function.
Examples
--------
>>> @ureg.with_context('sp')
... def my_cool_fun(wavelength):
...     print('This wavelength is equivalent to: %s' % wavelength.to('terahertz'))
"""
def decorator(func):
assigned = tuple(
attr for attr in functools.WRAPPER_ASSIGNMENTS if hasattr(func, attr)
)
updated = tuple(
attr for attr in functools.WRAPPER_UPDATES if hasattr(func, attr)
)
@functools.wraps(func, assigned=assigned, updated=updated)
def wrapper(*values, **wrapper_kwargs):
with self.context(name, **kwargs):
return func(*values, **wrapper_kwargs)
return wrapper
return decorator
def _convert(
self,
value: Magnitude,
src: UnitsContainer,
dst: UnitsContainer,
inplace: bool = False,
) -> Magnitude:
"""Convert value from some source to destination units.
In addition to what is done by the PlainRegistry,
converts between units with different dimensions by following
transformation rules defined in the context.
Parameters
----------
value :
value
src : UnitsContainer
source units.
dst : UnitsContainer
destination units.
inplace :
(Default value = False)
Returns
-------
callable
converted value
"""
# If there is an active context, we look for a path connecting source and
# destination dimensionality. If it exists, we transform the source value
# by applying sequentially each transformation of the path.
if self._active_ctx:
src_dim = self._get_dimensionality(src)
dst_dim = self._get_dimensionality(dst)
path = find_shortest_path(self._active_ctx.graph, src_dim, dst_dim)
if path:
src = self.Quantity(value, src)
for a, b in zip(path[:-1], path[1:]):
src = self._active_ctx.transform(a, b, self, src)
value, src = src._magnitude, src._units
return super()._convert(value, src, dst, inplace)
def _get_compatible_units(
self, input_units: UnitsContainer, group_or_system: str | None = None
):
src_dim = self._get_dimensionality(input_units)
ret = super()._get_compatible_units(input_units, group_or_system)
if self._active_ctx:
ret = ret.copy() # Do not alter self._cache
nodes = find_connected_nodes(self._active_ctx.graph, src_dim)
if nodes:
for node in nodes:
ret |= self._cache.dimensional_equivalents[node]
return ret
class ContextRegistry(
GenericContextRegistry[objects.ContextQuantity[Any], objects.ContextUnit]
):
Quantity: TypeAlias = objects.ContextQuantity[Any]
Unit: TypeAlias = objects.ContextUnit
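The `_convert` override above finds the shortest path between the source and destination dimensionalities in the active context graph and applies each edge's transformation in turn. A stdlib-only sketch of that mechanism; the graph, transformation table, and dimension names below are illustrative stand-ins, not Pint's actual data structures:

```python
from collections import deque


def find_shortest_path(graph, start, end):
    # Breadth-first search: the first complete path found is a shortest one.
    if start == end:
        return [start]
    seen = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], ()):
            if nxt in seen:
                continue
            if nxt == end:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None


# Each edge carries a transformation; conversions chain them along the path.
transforms = {
    ("[length]", "[frequency]"): lambda v: 3e8 / v,  # e.g. wavelength -> frequency
}
graph = {"[length]": {"[frequency]"}, "[frequency]": {"[length]"}}


def convert(value, src_dim, dst_dim):
    path = find_shortest_path(graph, src_dim, dst_dim)
    if not path:
        raise ValueError(f"no path from {src_dim} to {dst_dim}")
    for a, b in zip(path[:-1], path[1:]):
        value = transforms[(a, b)](value)
    return value
```

When no context is active (or no path exists), the real registry falls back to the plain dimensional conversion via `super()._convert`.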


@ -0,0 +1,141 @@
"""
pint.facets.dask
~~~~~~~~~~~~~~~~
Adds to pint the capability to interoperate with Dask.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
from typing import Any, Generic
from ...compat import TypeAlias, compute, dask_array, persist, visualize
from ..plain import (
GenericPlainRegistry,
MagnitudeT,
PlainQuantity,
PlainUnit,
QuantityT,
UnitT,
)
def check_dask_array(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
if isinstance(self._magnitude, dask_array.Array):
return f(self, *args, **kwargs)
else:
msg = "Method {} only implemented for objects of {}, not {}".format(
f.__name__, dask_array.Array, self._magnitude.__class__
)
raise AttributeError(msg)
return wrapper
class DaskQuantity(Generic[MagnitudeT], PlainQuantity[MagnitudeT]):
# Dask.array.Array ducking
def __dask_graph__(self):
if isinstance(self._magnitude, dask_array.Array):
return self._magnitude.__dask_graph__()
return None
def __dask_keys__(self):
return self._magnitude.__dask_keys__()
def __dask_tokenize__(self):
from dask.base import tokenize
return (type(self), tokenize(self._magnitude), self.units)
@property
def __dask_optimize__(self):
return dask_array.Array.__dask_optimize__
@property
def __dask_scheduler__(self):
return dask_array.Array.__dask_scheduler__
def __dask_postcompute__(self):
func, args = self._magnitude.__dask_postcompute__()
return self._dask_finalize, (func, args, self.units)
def __dask_postpersist__(self):
func, args = self._magnitude.__dask_postpersist__()
return self._dask_finalize, (func, args, self.units)
def _dask_finalize(self, results, func, args, units):
values = func(results, *args)
return type(self)(values, units)
@check_dask_array
def compute(self, **kwargs):
"""Compute the Dask array wrapped by pint.PlainQuantity.
Parameters
----------
**kwargs : dict
Any keyword arguments to pass to ``dask.compute``.
Returns
-------
pint.PlainQuantity
A pint.PlainQuantity wrapped numpy array.
"""
(result,) = compute(self, **kwargs)
return result
@check_dask_array
def persist(self, **kwargs):
"""Persist the Dask Array wrapped by pint.PlainQuantity.
Parameters
----------
**kwargs : dict
Any keyword arguments to pass to ``dask.persist``.
Returns
-------
pint.PlainQuantity
A pint.PlainQuantity wrapped Dask array.
"""
(result,) = persist(self, **kwargs)
return result
@check_dask_array
def visualize(self, **kwargs):
"""Produce a visual representation of the Dask graph.
The graphviz library is required.
Parameters
----------
**kwargs : dict
Any keyword arguments to pass to ``dask.visualize``.
Returns
-------
"""
visualize(self, **kwargs)
class DaskUnit(PlainUnit):
pass
class GenericDaskRegistry(
Generic[QuantityT, UnitT], GenericPlainRegistry[QuantityT, UnitT]
):
pass
class DaskRegistry(GenericDaskRegistry[DaskQuantity[Any], DaskUnit]):
Quantity: TypeAlias = DaskQuantity[Any]
Unit: TypeAlias = DaskUnit
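The `check_dask_array` decorator above is an instance of a general guard-decorator pattern: a method is only valid when the wrapped magnitude has a particular type, and any other type raises immediately with a descriptive message. A stdlib-only sketch with illustrative names (`require_magnitude_type`, `Quantity` are not Pint's):

```python
import functools


def require_magnitude_type(expected):
    """Restrict a method to instances whose .magnitude is of type `expected`."""
    def deco(f):
        @functools.wraps(f)  # preserve __name__, __doc__, etc. on the wrapper
        def wrapper(self, *args, **kwargs):
            if isinstance(self.magnitude, expected):
                return f(self, *args, **kwargs)
            raise AttributeError(
                f"Method {f.__name__} only implemented for {expected}, "
                f"not {type(self.magnitude)}"
            )
        return wrapper
    return deco


class Quantity:
    def __init__(self, magnitude):
        self.magnitude = magnitude

    @require_magnitude_type(list)
    def head(self):
        return self.magnitude[0]
```

Raising `AttributeError` (rather than `TypeError`) keeps the guarded method invisible to duck-typing probes such as `hasattr`.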


@ -0,0 +1,24 @@
"""
pint.facets.group
~~~~~~~~~~~~~~~~~
Adds to pint the capability to group units.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .definitions import GroupDefinition
from .objects import Group, GroupQuantity, GroupUnit
from .registry import GenericGroupRegistry, GroupRegistry
__all__ = [
"GroupDefinition",
"Group",
"GroupRegistry",
"GenericGroupRegistry",
"GroupQuantity",
"GroupUnit",
]


@ -0,0 +1,56 @@
"""
pint.facets.group.definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import dataclass
from ... import errors
from ...compat import Self
from .. import plain
@dataclass(frozen=True)
class GroupDefinition(errors.WithDefErr):
"""Definition of a group."""
#: name of the group
name: str
#: unit groups that will be included within the group
using_group_names: tuple[str, ...]
#: definitions for the units existing within the group
definitions: tuple[plain.UnitDefinition, ...]
@classmethod
def from_lines(
cls: type[Self], lines: Iterable[str], non_int_type: type
) -> Self | None:
# TODO: this is to keep it backwards compatible
from ...delegates import ParserConfig, txt_defparser
cfg = ParserConfig(non_int_type)
parser = txt_defparser.DefParser(cfg, None)
pp = parser.parse_string("\n".join(lines) + "\n@end")
for definition in parser.iter_parsed_project(pp):
if isinstance(definition, cls):
return definition
@property
def unit_names(self) -> tuple[str, ...]:
return tuple(el.name for el in self.definitions)
def __post_init__(self) -> None:
if not errors.is_valid_group_name(self.name):
raise self.def_err(errors.MSG_INVALID_GROUP_NAME)
for k in self.using_group_names:
if not errors.is_valid_group_name(k):
raise self.def_err(
f"refers to '{k}' that " + errors.MSG_INVALID_GROUP_NAME
)


@ -0,0 +1,224 @@
"""
pint.facets.group.objects
~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from collections.abc import Callable, Generator, Iterable
from typing import TYPE_CHECKING, Any, Generic
from ...util import SharedRegistryObject, getattr_maybe_raise
from ..plain import MagnitudeT, PlainQuantity, PlainUnit
from .definitions import GroupDefinition
if TYPE_CHECKING:
from ..plain import UnitDefinition
DefineFunc = Callable[
[
Any,
],
None,
]
AddUnitFunc = Callable[
[
UnitDefinition,
],
None,
]
class GroupQuantity(Generic[MagnitudeT], PlainQuantity[MagnitudeT]):
pass
class GroupUnit(PlainUnit):
pass
class Group(SharedRegistryObject):
"""A group is a set of units.
Units can be added directly or by including other groups.
Members are computed dynamically, that is if a unit is added to a group X
all groups that include X are affected.
The group belongs to one Registry.
See GroupDefinition for the definition file syntax.
Parameters
----------
name
If not given, a root Group will be created.
"""
def __init__(self, name: str):
# The name of the group.
self.name = name
#: Names of the units in this group.
#: :type: set[str]
self._unit_names: set[str] = set()
#: Names of the groups in this group.
self._used_groups: set[str] = set()
#: Names of the groups in which this group is contained.
self._used_by: set[str] = set()
# Add this group to the group dictionary
self._REGISTRY._groups[self.name] = self
if name != "root":
# All groups are added to root group
self._REGISTRY._groups["root"].add_groups(name)
#: A cache of the included units.
#: None indicates that the cache has been invalidated.
self._computed_members: frozenset[str] | None = None
@property
def members(self) -> frozenset[str]:
"""Names of the units that are members of the group.
Calculated to include all units in all included _used_groups.
"""
if self._computed_members is None:
tmp = set(self._unit_names)
for _, group in self.iter_used_groups():
tmp |= group.members
self._computed_members = frozenset(tmp)
return self._computed_members
def invalidate_members(self) -> None:
"""Invalidate computed members in this Group and all parent nodes."""
self._computed_members = None
d = self._REGISTRY._groups
for name in self._used_by:
d[name].invalidate_members()
def iter_used_groups(self) -> Generator[tuple[str, Group], None, None]:
pending = set(self._used_groups)
d = self._REGISTRY._groups
while pending:
name = pending.pop()
group = d[name]
pending |= group._used_groups
yield name, d[name]
def is_used_group(self, group_name: str) -> bool:
for name, _ in self.iter_used_groups():
if name == group_name:
return True
return False
def add_units(self, *unit_names: str) -> None:
"""Add units to group."""
for unit_name in unit_names:
self._unit_names.add(unit_name)
self.invalidate_members()
@property
def non_inherited_unit_names(self) -> frozenset[str]:
return frozenset(self._unit_names)
def remove_units(self, *unit_names: str) -> None:
"""Remove units from group."""
for unit_name in unit_names:
self._unit_names.remove(unit_name)
self.invalidate_members()
def add_groups(self, *group_names: str) -> None:
"""Add groups to group."""
d = self._REGISTRY._groups
for group_name in group_names:
grp = d[group_name]
if grp.is_used_group(self.name):
raise ValueError(
"Cyclic relationship found between %s and %s"
% (self.name, group_name)
)
self._used_groups.add(group_name)
grp._used_by.add(self.name)
self.invalidate_members()
def remove_groups(self, *group_names: str) -> None:
"""Remove groups from group."""
d = self._REGISTRY._groups
for group_name in group_names:
grp = d[group_name]
self._used_groups.remove(group_name)
grp._used_by.remove(self.name)
self.invalidate_members()
@classmethod
def from_lines(
cls, lines: Iterable[str], define_func: DefineFunc, non_int_type: type = float
) -> Group:
"""Return a Group object parsing an iterable of lines.
Parameters
----------
lines : list[str]
iterable
define_func : callable
Function to define a unit in the registry; it must accept a single string as
a parameter.
Returns
-------
"""
group_definition = GroupDefinition.from_lines(lines, non_int_type)
if group_definition is None:
raise ValueError(f"Could not define group from {lines}")
return cls.from_definition(group_definition, define_func)
@classmethod
def from_definition(
cls,
group_definition: GroupDefinition,
add_unit_func: AddUnitFunc | None = None,
) -> Group:
grp = cls(group_definition.name)
add_unit_func = add_unit_func or grp._REGISTRY._add_unit
# We first add all units defined within the group
# to the registry.
for definition in group_definition.definitions:
add_unit_func(definition)
# Then we add all units defined within the group
# to this group (by name)
grp.add_units(*group_definition.unit_names)
# Finally, we add all groups used by this group
# to this group (by name)
if group_definition.using_group_names:
grp.add_groups(*group_definition.using_group_names)
return grp
def __getattr__(self, item: str):
getattr_maybe_raise(self, item)
return self._REGISTRY
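`Group.members` above is computed lazily and memoized, and `invalidate_members` propagates upward through `_used_by` so that adding a unit to a subgroup invalidates every group that includes it. A condensed stdlib sketch of that cache-invalidation scheme; the class-level `registry` dict stands in for `self._REGISTRY._groups`, and cycle detection is omitted:

```python
class Group:
    registry = {}  # stand-in for the per-registry _groups mapping

    def __init__(self, name):
        self.name = name
        self._unit_names = set()
        self._used_groups = set()
        self._used_by = set()
        self._computed = None  # None means the cache is invalidated
        Group.registry[name] = self

    @property
    def members(self):
        # Recompute only when the cache was invalidated.
        if self._computed is None:
            tmp = set(self._unit_names)
            for g in self._used_groups:
                tmp |= Group.registry[g].members
            self._computed = frozenset(tmp)
        return self._computed

    def add_groups(self, *names):
        for n in names:
            self._used_groups.add(n)
            Group.registry[n]._used_by.add(self.name)
        self.invalidate()

    def add_units(self, *units):
        self._unit_names.update(units)
        self.invalidate()

    def invalidate(self):
        # Invalidate this cache and, recursively, every containing group's.
        self._computed = None
        for n in self._used_by:
            Group.registry[n].invalidate()
```

This is why membership is "computed dynamically": a change anywhere in the inclusion graph is visible from every ancestor on the next `members` access.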


@ -0,0 +1,155 @@
"""
pint.facets.group.registry
~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Generic
from ... import errors
from ...compat import TypeAlias
if TYPE_CHECKING:
from ..._typing import Unit, UnitsContainer
from ...util import create_class_with_registry, to_units_container
from ..plain import (
GenericPlainRegistry,
QuantityT,
UnitDefinition,
UnitT,
)
from . import objects
from .definitions import GroupDefinition
class GenericGroupRegistry(
Generic[QuantityT, UnitT], GenericPlainRegistry[QuantityT, UnitT]
):
"""Handle of Groups.
Group units
Capabilities:
- Register groups.
- Parse @group directive.
"""
# TODO: Change this to Group: Group to specify class
# and use introspection to get system class as a way
# to enjoy typing goodies
Group = type[objects.Group]
def __init__(self, **kwargs):
super().__init__(**kwargs)
#: Map group name to group.
self._groups: dict[str, objects.Group] = {}
self._groups["root"] = self.Group("root")
def _init_dynamic_classes(self) -> None:
"""Generate subclasses on the fly and attach them to self"""
super()._init_dynamic_classes()
self.Group = create_class_with_registry(self, objects.Group)
def _after_init(self) -> None:
"""Invoked at the end of ``__init__``.
- Create default group and add all orphan units to it
- Set default system
"""
super()._after_init()
#: Copy units not defined in any group to the default group
if "group" in self._defaults:
grp = self.get_group(self._defaults["group"], True)
group_units = frozenset(
[
member
for group in self._groups.values()
if group.name != "root"
for member in group.members
]
)
all_units = self.get_group("root", False).members
grp.add_units(*(all_units - group_units))
def _register_definition_adders(self) -> None:
super()._register_definition_adders()
self._register_adder(GroupDefinition, self._add_group)
def _add_unit(self, definition: UnitDefinition):
super()._add_unit(definition)
# TODO: delta units are missing
self.get_group("root").add_units(definition.name)
def _add_group(self, gd: GroupDefinition):
if gd.name in self._groups:
raise ValueError(f"Group {gd.name} already present in registry")
try:
# As a Group is a SharedRegistryObject
# it adds itself to the registry.
self.Group.from_definition(gd)
except KeyError as e:
raise errors.DefinitionSyntaxError(f"unknown dimension {e} in context")
def get_group(self, name: str, create_if_needed: bool = True) -> objects.Group:
"""Return a Group.
Parameters
----------
name : str
Name of the group to be returned.
create_if_needed : bool
If True, create a group if not found. If False, raise an Exception.
(Default value = True)
Returns
-------
Group
Group
"""
if name in self._groups:
return self._groups[name]
if not create_if_needed:
raise ValueError("Unknown group %s" % name)
return self.Group(name)
def get_compatible_units(
self, input_units: UnitsContainer, group: str | None = None
) -> frozenset[Unit]:
""" """
if group is None:
return super().get_compatible_units(input_units)
input_units = to_units_container(input_units)
equiv = self._get_compatible_units(input_units, group)
return frozenset(self.Unit(eq) for eq in equiv)
def _get_compatible_units(
self, input_units: UnitsContainer, group: str | None = None
) -> frozenset[str]:
ret = super()._get_compatible_units(input_units)
if not group:
return ret
if group in self._groups:
members = self._groups[group].members
else:
raise ValueError("Unknown Group with name '%s'" % group)
return frozenset(ret & members)
class GroupRegistry(
GenericGroupRegistry[objects.GroupQuantity[Any], objects.GroupUnit]
):
Quantity: TypeAlias = objects.GroupQuantity[Any]
Unit: TypeAlias = objects.GroupUnit


@ -0,0 +1,21 @@
"""
pint.facets.measurement
~~~~~~~~~~~~~~~~~~~~~~~
Adds to pint the capability to handle measurements (quantities with uncertainties).
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .objects import Measurement, MeasurementQuantity
from .registry import GenericMeasurementRegistry, MeasurementRegistry
__all__ = [
"Measurement",
"MeasurementQuantity",
"MeasurementRegistry",
"GenericMeasurementRegistry",
]


@ -0,0 +1,195 @@
"""
pint.facets.measurement.objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import copy
import re
from typing import Generic
from ...compat import ufloat
from ..plain import MagnitudeT, PlainQuantity, PlainUnit
MISSING = object()
class MeasurementQuantity(Generic[MagnitudeT], PlainQuantity[MagnitudeT]):
# Measurement support
def plus_minus(self, error, relative=False):
if isinstance(error, self.__class__):
if relative:
raise ValueError(f"{error} is not a valid relative error.")
error = error.to(self._units).magnitude
else:
if relative:
error = error * abs(self.magnitude)
return self._REGISTRY.Measurement(copy.copy(self.magnitude), error, self._units)
class MeasurementUnit(PlainUnit):
pass
class Measurement(PlainQuantity):
"""Implements a class to describe a quantity with uncertainty.
Parameters
----------
value : pint.Quantity or any numeric type
The expected value of the measurement
error : pint.Quantity or any numeric type
The error or uncertainty of the measurement
Returns
-------
"""
def __new__(cls, value, error=MISSING, units=MISSING):
if units is MISSING:
try:
value, units = value.magnitude, value.units
except AttributeError:
# if called with two arguments and the first looks like a ufloat
# then assume the second argument is the units, keep value intact
if hasattr(value, "nominal_value"):
units = error
error = MISSING # used for check below
else:
units = ""
if error is MISSING:
# We've already extracted the units from the Quantity above
mag = value
else:
try:
error = error.to(units).magnitude
except AttributeError:
pass
if error < 0:
raise ValueError("The magnitude of the error cannot be negative")
else:
mag = ufloat(value, error)
inst = super().__new__(cls, mag, units)
return inst
@property
def value(self):
return self._REGISTRY.Quantity(self.magnitude.nominal_value, self.units)
@property
def error(self):
return self._REGISTRY.Quantity(self.magnitude.std_dev, self.units)
@property
def rel(self):
return abs(self.magnitude.std_dev / self.magnitude.nominal_value)
def __reduce__(self):
# See notes in Quantity.__reduce__
from datasette.vendored.pint import _unpickle_measurement
return _unpickle_measurement, (Measurement, self.magnitude, self._units)
def __repr__(self):
return "<Measurement({}, {}, {})>".format(
self.magnitude.nominal_value, self.magnitude.std_dev, self.units
)
def __str__(self):
return f"{self}"
def __format__(self, spec):
spec = spec or self._REGISTRY.default_format
return self._REGISTRY.formatter.format_measurement(self, spec)
def old_format(self, spec):
# TODO: provisional
from ...formatting import _FORMATS, extract_custom_flags, siunitx_format_unit
# special cases
if "Lx" in spec: # the LaTeX siunitx code
# the uncertainties module supports formatting
# numbers in value(unc) notation (i.e. 1.23(45) instead of 1.23 +/- 0.45),
# using type code 'S', which siunitx actually accepts as input.
# However, the implementation is incompatible with siunitx.
# Uncertainties will do 9.1(1.1), which is invalid, should be 9.1(11).
# TODO: add support for extracting options
#
# Get rid of this code, we'll deal with it here
spec = spec.replace("Lx", "")
# The most compatible format from uncertainties is the default format,
# but even this requires fixups.
# For one, siunitx does not accept some formats that unc does, like 'P',
# and 'S' is broken as stated, so...
spec = spec.replace("S", "").replace("P", "")
# get SIunitx options
# TODO: allow user to set this value, somehow
opts = _FORMATS["Lx"]["siopts"]
if opts != "":
opts = r"[" + opts + r"]"
# SI requires space between "+-" (or "\pm") and the nominal value
# and uncertainty, and doesn't accept "+/-", so this setting
# selects the desired replacement.
pm_fmt = _FORMATS["Lx"]["pm_fmt"]
mstr = format(self.magnitude, spec).replace(r"+/-", pm_fmt)
# Also, SIunitx doesn't accept parentheses, which uncs uses with
# scientific notation ('e' or 'E' and sometimes 'g' or 'G').
mstr = mstr.replace("(", "").replace(")", " ")
ustr = siunitx_format_unit(self.units._units.items(), self._REGISTRY)
return rf"\SI{opts}{{{mstr}}}{{{ustr}}}"
# standard cases
if "L" in spec:
newpm = pm = r" \pm "
pars = _FORMATS["L"]["parentheses_fmt"]
elif "P" in spec:
newpm = pm = "±"
pars = _FORMATS["P"]["parentheses_fmt"]
else:
newpm = pm = "+/-"
pars = _FORMATS[""]["parentheses_fmt"]
if "C" in spec:
sp = ""
newspec = spec.replace("C", "")
pars = _FORMATS["C"]["parentheses_fmt"]
else:
sp = " "
newspec = spec
if "H" in spec:
newpm = "&plusmn;"
newspec = spec.replace("H", "")
pars = _FORMATS["H"]["parentheses_fmt"]
mag = format(self.magnitude, newspec).replace(pm, sp + newpm + sp)
if "(" in mag:
# Exponential format has its own parentheses
pars = "{}"
if "L" in newspec and "S" in newspec:
mag = mag.replace("(", r"\left(").replace(")", r"\right)")
if "L" in newspec:
space = r"\ "
else:
space = " "
uspec = extract_custom_flags(spec)
ustr = format(self.units, uspec)
if not ("uS" in newspec or "ue" in newspec or "u%" in newspec):
mag = pars.format(mag)
if "H" in spec:
# Fix exponential format
mag = re.sub(r"\)e\+0?(\d+)", r")×10<sup>\1</sup>", mag)
mag = re.sub(r"\)e-0?(\d+)", r")×10<sup>-\1</sup>", mag)
return mag + space + ustr
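In `plus_minus` above, a relative error is turned into an absolute one by scaling it by the magnitude, and `Measurement.__new__` rejects negative error magnitudes. A small sketch condensing those two rules into one helper; `absolute_error` is an illustrative name, not a Pint API:

```python
def absolute_error(magnitude, error, relative=False):
    """Return the absolute error, scaling a relative error by |magnitude|."""
    if relative:
        error = error * abs(magnitude)
    if error < 0:
        raise ValueError("The magnitude of the error cannot be negative")
    return error
```

In the real classes, quantity-valued errors are additionally converted to the measurement's units before this check.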


@ -0,0 +1,46 @@
"""
pint.facets.measurement.registry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import Any, Generic
from ...compat import TypeAlias, ufloat
from ...util import create_class_with_registry
from ..plain import GenericPlainRegistry, QuantityT, UnitT
from . import objects
class GenericMeasurementRegistry(
Generic[QuantityT, UnitT], GenericPlainRegistry[QuantityT, UnitT]
):
Measurement = objects.Measurement
def _init_dynamic_classes(self) -> None:
"""Generate subclasses on the fly and attach them to self"""
super()._init_dynamic_classes()
if ufloat is not None:
self.Measurement = create_class_with_registry(self, self.Measurement)
else:
def no_uncertainties(*args, **kwargs):
raise RuntimeError(
"Pint requires the 'uncertainties' package to create a Measurement object."
)
self.Measurement = no_uncertainties
class MeasurementRegistry(
GenericMeasurementRegistry[
objects.MeasurementQuantity[Any], objects.MeasurementUnit
]
):
Quantity: TypeAlias = objects.MeasurementQuantity[Any]
Unit: TypeAlias = objects.MeasurementUnit
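`_init_dynamic_classes` above shows a common optional-dependency fallback: when the backing package (`uncertainties`, detected via `ufloat`) is missing, the class attribute is replaced by a callable that raises at call time rather than failing at import time. A stdlib-only sketch of the pattern; `pick_measurement_class` is an illustrative helper, not part of Pint:

```python
def pick_measurement_class(measurement_cls, ufloat):
    """Return measurement_cls if the dependency marker is present,
    otherwise a stand-in that raises a helpful error when called."""
    if ufloat is not None:
        return measurement_cls

    def no_uncertainties(*args, **kwargs):
        raise RuntimeError(
            "Pint requires the 'uncertainties' package to create a Measurement object."
        )

    return no_uncertainties
```

Deferring the failure keeps `import pint` working on installations without the optional package, while still pointing users at the missing dependency the moment they need it.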


@ -0,0 +1,20 @@
"""
pint.facets.nonmultiplicative
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adds to pint the capability to handle nonmultiplicative units:
- offset
- logarithmic
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
# This import register LogarithmicConverter and OffsetConverter to be usable
# (via subclassing)
from .definitions import LogarithmicConverter, OffsetConverter # noqa: F401
from .registry import GenericNonMultiplicativeRegistry, NonMultiplicativeRegistry
__all__ = ["NonMultiplicativeRegistry", "GenericNonMultiplicativeRegistry"]


@ -0,0 +1,117 @@
"""
pint.facets.nonmultiplicative.definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from dataclasses import dataclass
from ..._typing import Magnitude
from ...compat import HAS_NUMPY, exp, log
from ..plain import ScaleConverter
@dataclass(frozen=True)
class OffsetConverter(ScaleConverter):
"""An affine transformation."""
offset: float
@property
def is_multiplicative(self):
return self.offset == 0
def to_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
if inplace:
value *= self.scale
value += self.offset
else:
value = value * self.scale + self.offset
return value
def from_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
if inplace:
value -= self.offset
value /= self.scale
else:
value = (value - self.offset) / self.scale
return value
@classmethod
def preprocess_kwargs(cls, **kwargs):
if "offset" in kwargs and kwargs["offset"] == 0:
return {"scale": kwargs["scale"]}
return None
@dataclass(frozen=True)
class LogarithmicConverter(ScaleConverter):
"""Converts between linear units and logarithmic units, such as dB, octave, neper or pH.
Q_log = logfactor * log( Q_lin / scale ) / log(logbase)
Parameters
----------
scale : float
unit of reference at denominator for logarithmic unit conversion
logbase : float
base of the logarithm used in the logarithmic unit conversion
logfactor : float
factor multiplied to logarithm for unit conversion
inplace : bool
controls if computation is done in place
"""
# TODO: Can I use PintScalar here?
logbase: float
logfactor: float
@property
def is_multiplicative(self):
return False
@property
def is_logarithmic(self):
return True
def from_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
"""Converts value from the reference unit to the logarithmic unit
dBm <------ mW
y dBm = 10 log10( x / 1mW )
"""
if inplace:
value /= self.scale
if HAS_NUMPY:
log(value, value)
else:
value = log(value)
value *= self.logfactor / log(self.logbase)
else:
value = self.logfactor * log(value / self.scale) / log(self.logbase)
return value
def to_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
"""Converts value to the reference unit from the logarithmic unit
dBm ------> mW
y dBm = 10 log10( x / 1mW )
"""
if inplace:
value /= self.logfactor
value *= log(self.logbase)
if HAS_NUMPY:
exp(value, value)
else:
value = exp(value)
value *= self.scale
else:
value = self.scale * exp(log(self.logbase) * (value / self.logfactor))
return value
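The two converters above reduce to simple closed-form maps: `OffsetConverter` is the affine `value * scale + offset` (e.g. degC to K), and `LogarithmicConverter` is `logfactor * log(value / scale) / log(logbase)` (e.g. a power ratio to dB). A stdlib check of that math, in the non-inplace scalar form only (the function names are illustrative):

```python
import math


def offset_to_reference(value, scale, offset):
    # Affine map, e.g. degC -> K with scale=1.0, offset=273.15
    return value * scale + offset


def offset_from_reference(value, scale, offset):
    return (value - offset) / scale


def log_from_reference(value, scale, logfactor, logbase):
    # e.g. mW -> dBm with scale=1.0, logfactor=10, logbase=10
    return logfactor * math.log(value / scale) / math.log(logbase)


def log_to_reference(value, scale, logfactor, logbase):
    # Inverse of log_from_reference.
    return scale * math.exp(math.log(logbase) * (value / logfactor))
```

Note that `OffsetConverter` is multiplicative exactly when `offset == 0`, which is why `preprocess_kwargs` above collapses a zero offset into a plain `ScaleConverter`.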


@ -0,0 +1,67 @@
"""
pint.facets.nonmultiplicative.objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import Generic
from ..plain import MagnitudeT, PlainQuantity, PlainUnit
class NonMultiplicativeQuantity(Generic[MagnitudeT], PlainQuantity[MagnitudeT]):
@property
def _is_multiplicative(self) -> bool:
"""Check if the PlainQuantity object has only multiplicative units."""
return not self._get_non_multiplicative_units()
def _get_non_multiplicative_units(self) -> list[str]:
"""Return a list of the of non-multiplicative units of the PlainQuantity object."""
return [
unit
for unit in self._units
if not self._get_unit_definition(unit).is_multiplicative
]
def _get_delta_units(self) -> list[str]:
"""Return list of delta units ot the PlainQuantity object."""
return [u for u in self._units if u.startswith("delta_")]
def _has_compatible_delta(self, unit: str) -> bool:
""" "Check if PlainQuantity object has a delta_unit that is compatible with unit"""
deltas = self._get_delta_units()
if "delta_" + unit in deltas:
return True
# Look for delta units with same dimension as the offset unit
offset_unit_dim = self._get_unit_definition(unit).reference
return any(
self._get_unit_definition(d).reference == offset_unit_dim for d in deltas
)
def _ok_for_muldiv(self, no_offset_units: int | None = None) -> bool:
"""Checks if PlainQuantity object can be multiplied or divided"""
is_ok = True
if no_offset_units is None:
no_offset_units = len(self._get_non_multiplicative_units())
if no_offset_units > 1:
is_ok = False
if no_offset_units == 1:
if len(self._units) > 1:
is_ok = False
if (
len(self._units) == 1
and not self._REGISTRY.autoconvert_offset_to_baseunit
):
is_ok = False
if next(iter(self._units.values())) != 1:
is_ok = False
return is_ok
class NonMultiplicativeUnit(PlainUnit):
pass


@ -0,0 +1,304 @@
"""
pint.facets.nonmultiplicative.registry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import Any, Generic, TypeVar
from ...compat import TypeAlias
from ...errors import DimensionalityError, UndefinedUnitError
from ...util import UnitsContainer, logger
from ..plain import GenericPlainRegistry, QuantityT, UnitDefinition, UnitT
from . import objects
from .definitions import OffsetConverter, ScaleConverter
T = TypeVar("T")
class GenericNonMultiplicativeRegistry(
Generic[QuantityT, UnitT], GenericPlainRegistry[QuantityT, UnitT]
):
"""Handle of non multiplicative units (e.g. Temperature).
Capabilities:
- Register non-multiplicative units and their relations.
- Convert between non-multiplicative units.
Parameters
----------
default_as_delta : bool
If True, non-multiplicative units are interpreted as
their *delta* counterparts in multiplications.
autoconvert_offset_to_baseunit : bool
If True, non-multiplicative units are
converted to plain units in multiplications.
"""
def __init__(
self,
default_as_delta: bool = True,
autoconvert_offset_to_baseunit: bool = False,
**kwargs: Any,
) -> None:
super().__init__(**kwargs)
#: When performing a multiplication of units, interpret
#: non-multiplicative units as their *delta* counterparts.
self.default_as_delta = default_as_delta
# Determines if quantities with offset units are converted to their
# plain units on multiplication and division.
self.autoconvert_offset_to_baseunit = autoconvert_offset_to_baseunit
def parse_units_as_container(
self,
input_string: str,
as_delta: bool | None = None,
case_sensitive: bool | None = None,
) -> UnitsContainer:
""" """
if as_delta is None:
as_delta = self.default_as_delta
return super().parse_units_as_container(input_string, as_delta, case_sensitive)
def _add_unit(self, definition: UnitDefinition) -> None:
super()._add_unit(definition)
if definition.is_multiplicative:
return
if definition.is_logarithmic:
return
if not isinstance(definition.converter, OffsetConverter):
logger.debug(
"Cannot autogenerate delta version for a unit in "
"which the converter is not an OffsetConverter"
)
return
delta_name = "delta_" + definition.name
if definition.symbol:
delta_symbol = "Δ" + definition.symbol
else:
delta_symbol = None
delta_aliases = tuple("Δ" + alias for alias in definition.aliases) + tuple(
"delta_" + alias for alias in definition.aliases
)
delta_reference = self.UnitsContainer(
{ref: value for ref, value in definition.reference.items()}
)
delta_def = UnitDefinition(
delta_name,
delta_symbol,
delta_aliases,
ScaleConverter(definition.converter.scale),
delta_reference,
)
super()._add_unit(delta_def)
def _is_multiplicative(self, unit_name: str) -> bool:
"""True if the unit is multiplicative.
Parameters
----------
unit_name
Name of the unit to check.
Can be prefixed, pluralized or even an alias
Raises
------
UndefinedUnitError
If the unit is not in the registry.
"""
if unit_name in self._units:
return self._units[unit_name].is_multiplicative
# If the unit is not in the registry, it might be because it is not
# registered with its prefixed version.
# TODO: Might be better to register them.
names = self.parse_unit_name(unit_name)
assert len(names) == 1
_, base_name, _ = names[0]
try:
return self._units[base_name].is_multiplicative
except KeyError:
raise UndefinedUnitError(unit_name)
def _validate_and_extract(self, units: UnitsContainer) -> str | None:
"""Used to check if a given units is suitable for a simple
conversion.
Return None if all units are non-multiplicative
Return the unit name if a single non-multiplicative unit is found
and is raised to a power equals to 1.
Otherwise, raise an Exception.
Parameters
----------
units
Compound dictionary.
Raises
------
ValueError
If more than a single non-multiplicative unit is present,
or a single one is present but raised to a power different from 1.
"""
# TODO: document what happens if autoconvert_offset_to_baseunit
# TODO: Clarify docs
# u is for unit, e is for exponent
nonmult_units = [
(u, e) for u, e in units.items() if not self._is_multiplicative(u)
]
# Let's validate source offset units
if len(nonmult_units) > 1:
# More than one src offset unit is not allowed
raise ValueError("more than one offset unit.")
elif len(nonmult_units) == 1:
# A single src offset unit is present. Extract it
# But check that:
# - the exponent is 1
# - is not used in multiplicative context
nonmult_unit, exponent = nonmult_units.pop()
if exponent != 1:
raise ValueError("offset units in higher order.")
if len(units) > 1 and not self.autoconvert_offset_to_baseunit:
raise ValueError("offset unit used in multiplicative context.")
return nonmult_unit
return None
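The validation rules above can be exercised in isolation; a minimal sketch, with a hypothetical predicate standing in for `_is_multiplicative`:

```python
def validate_and_extract(units, is_multiplicative, autoconvert=False):
    # units: mapping of unit name -> exponent, as in a UnitsContainer
    nonmult = [(u, e) for u, e in units.items() if not is_multiplicative(u)]
    if len(nonmult) > 1:
        raise ValueError("more than one offset unit.")
    if len(nonmult) == 1:
        unit, exponent = nonmult[0]
        if exponent != 1:
            raise ValueError("offset units in higher order.")
        if len(units) > 1 and not autoconvert:
            raise ValueError("offset unit used in multiplicative context.")
        return unit
    return None


is_mult = lambda u: u != "degC"  # toy predicate: only degC is an offset unit
print(validate_and_extract({"degC": 1}, is_mult))
```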
def _add_ref_of_log_or_offset_unit(
self, offset_unit: str, all_units: UnitsContainer
) -> UnitsContainer:
slct_unit = self._units[offset_unit]
if slct_unit.is_logarithmic:
# Extract reference unit
slct_ref = slct_unit.reference
# TODO: Check that reference is None
# If reference unit is not dimensionless
if slct_ref != UnitsContainer():
# Extract reference unit
(u, e) = [(u, e) for u, e in slct_ref.items()].pop()
# Add it back to the unit list
return all_units.add(u, e)
if not slct_unit.is_multiplicative: # is offset unit
# Extract reference unit
return slct_unit.reference
# Otherwise, return the units unmodified
return all_units
def _convert(
self, value: T, src: UnitsContainer, dst: UnitsContainer, inplace: bool = False
) -> T:
"""Convert value from some source to destination units.
In addition to what is done by the PlainRegistry,
converts between non-multiplicative units.
Parameters
----------
value :
value
src : UnitsContainer
source units.
dst : UnitsContainer
destination units.
inplace :
(Default value = False)
Returns
-------
type
converted value
"""
# Conversion needs to consider if non-multiplicative (AKA offset
# units) are involved. Conversion is only possible if src and dst
# have at most one offset unit per dimension. Other rules are applied
# by validate and extract.
try:
src_offset_unit = self._validate_and_extract(src)
except ValueError as ex:
raise DimensionalityError(src, dst, extra_msg=f" - In source units, {ex}")
try:
dst_offset_unit = self._validate_and_extract(dst)
except ValueError as ex:
raise DimensionalityError(
src, dst, extra_msg=f" - In destination units, {ex}"
)
# convert if no offset units are present
if not (src_offset_unit or dst_offset_unit):
return super()._convert(value, src, dst, inplace)
src_dim = self._get_dimensionality(src)
dst_dim = self._get_dimensionality(dst)
# If the source and destination dimensionality are different,
# then the conversion cannot be performed.
if src_dim != dst_dim:
raise DimensionalityError(src, dst, src_dim, dst_dim)
# clean src from offset units by converting to reference
if src_offset_unit:
if any(u.startswith("delta_") for u in dst):
raise DimensionalityError(src, dst)
value = self._units[src_offset_unit].converter.to_reference(value, inplace)
src = src.remove([src_offset_unit])
# Add reference unit for multiplicative section
src = self._add_ref_of_log_or_offset_unit(src_offset_unit, src)
# clean dst units from offset units
if dst_offset_unit:
if any(u.startswith("delta_") for u in src):
raise DimensionalityError(src, dst)
dst = dst.remove([dst_offset_unit])
# Add reference unit for multiplicative section
dst = self._add_ref_of_log_or_offset_unit(dst_offset_unit, dst)
# Convert non multiplicative units to the dst.
value = super()._convert(value, src, dst, inplace, False)
# Finally convert to offset units specified in destination
if dst_offset_unit:
value = self._units[dst_offset_unit].converter.from_reference(
value, inplace
)
return value
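The conversion path above (strip the source offset unit via `to_reference`, convert multiplicatively, then apply the destination offset via `from_reference`) can be sketched with toy converters; the scale/offset constants below are the standard Celsius/Fahrenheit definitions, not values pulled from a registry:

```python
class ToyOffsetConverter:
    # value_in_reference = value * scale + offset (kelvin is the reference)
    def __init__(self, scale, offset):
        self.scale, self.offset = scale, offset

    def to_reference(self, value):
        return value * self.scale + self.offset

    def from_reference(self, value):
        return (value - self.offset) / self.scale


degC = ToyOffsetConverter(1.0, 273.15)
degF = ToyOffsetConverter(5.0 / 9.0, 273.15 - 32.0 * 5.0 / 9.0)

# 100 degC -> kelvin -> degF, mirroring _convert's two-step handling
boiling_F = degF.from_reference(degC.to_reference(100.0))
print(round(boiling_F, 6))  # 212.0
```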
class NonMultiplicativeRegistry(
GenericNonMultiplicativeRegistry[
objects.NonMultiplicativeQuantity[Any], objects.NonMultiplicativeUnit
]
):
Quantity: TypeAlias = objects.NonMultiplicativeQuantity[Any]
Unit: TypeAlias = objects.NonMultiplicativeUnit

@@ -0,0 +1,15 @@
"""
pint.facets.numpy
~~~~~~~~~~~~~~~~~
Adds to pint the capability to interoperate with NumPy
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .registry import GenericNumpyRegistry, NumpyRegistry
__all__ = ["NumpyRegistry", "GenericNumpyRegistry"]

File diff suppressed because it is too large

@@ -0,0 +1,306 @@
"""
pint.facets.numpy.quantity
~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
import math
import warnings
from typing import Any, Generic
from ..._typing import Shape
from ...compat import HAS_NUMPY, _to_magnitude, np
from ...errors import DimensionalityError, PintTypeError, UnitStrippedWarning
from ..plain import MagnitudeT, PlainQuantity
from .numpy_func import (
HANDLED_UFUNCS,
copy_units_output_ufuncs,
get_op_output_unit,
matching_input_copy_units_output_ufuncs,
matching_input_set_units_output_ufuncs,
numpy_wrap,
op_units_output_ufuncs,
set_units_ufuncs,
)
try:
import uncertainties.unumpy as unp
from uncertainties import UFloat, ufloat
HAS_UNCERTAINTIES = True
except ImportError:
unp = np
ufloat = UFloat = None
HAS_UNCERTAINTIES = False
def method_wraps(numpy_func):
if isinstance(numpy_func, str):
numpy_func = getattr(np, numpy_func, None)
def wrapper(func):
func.__wrapped__ = numpy_func
return func
return wrapper
class NumpyQuantity(Generic[MagnitudeT], PlainQuantity[MagnitudeT]):
""" """
# NumPy function/ufunc support
__array_priority__ = 17
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
if method != "__call__":
# Only handle ufuncs as callables
return NotImplemented
# Replicate types from __array_function__
types = {
type(arg)
for arg in list(inputs) + list(kwargs.values())
if hasattr(arg, "__array_ufunc__")
}
return numpy_wrap("ufunc", ufunc, inputs, kwargs, types)
def __array_function__(self, func, types, args, kwargs):
return numpy_wrap("function", func, args, kwargs, types)
_wrapped_numpy_methods = ["flatten", "astype", "item"]
def _numpy_method_wrap(self, func, *args, **kwargs):
"""Convenience method to wrap on the fly NumPy ndarray methods taking
care of the units.
"""
# Set input units if needed
if func.__name__ in set_units_ufuncs:
self.__ito_if_needed(set_units_ufuncs[func.__name__][0])
value = func(*args, **kwargs)
# Set output units as needed
if func.__name__ in (
matching_input_copy_units_output_ufuncs
+ copy_units_output_ufuncs
+ self._wrapped_numpy_methods
):
output_unit = self._units
elif func.__name__ in set_units_ufuncs:
output_unit = set_units_ufuncs[func.__name__][1]
elif func.__name__ in matching_input_set_units_output_ufuncs:
output_unit = matching_input_set_units_output_ufuncs[func.__name__]
elif func.__name__ in op_units_output_ufuncs:
output_unit = get_op_output_unit(
op_units_output_ufuncs[func.__name__],
self.units,
list(args) + list(kwargs.values()),
self._magnitude.size,
)
else:
output_unit = None
if output_unit is not None:
return self.__class__(value, output_unit)
return value
def __array__(self, t=None) -> np.ndarray:
if HAS_NUMPY and isinstance(self._magnitude, np.ndarray):
warnings.warn(
"The unit of the quantity is stripped when downcasting to ndarray.",
UnitStrippedWarning,
stacklevel=2,
)
return _to_magnitude(self._magnitude, force_ndarray=True)
def clip(self, min=None, max=None, out=None, **kwargs):
if min is not None:
if isinstance(min, self.__class__):
min = min.to(self).magnitude
elif self.dimensionless:
pass
else:
raise DimensionalityError("dimensionless", self._units)
if max is not None:
if isinstance(max, self.__class__):
max = max.to(self).magnitude
elif self.dimensionless:
pass
else:
raise DimensionalityError("dimensionless", self._units)
return self.__class__(self.magnitude.clip(min, max, out, **kwargs), self._units)
def fill(self: NumpyQuantity, value) -> None:
self._units = value._units
return self.magnitude.fill(value.magnitude)
def put(self: NumpyQuantity, indices, values, mode="raise") -> None:
if isinstance(values, self.__class__):
values = values.to(self).magnitude
elif self.dimensionless:
values = self.__class__(values, "").to(self)
else:
raise DimensionalityError("dimensionless", self._units)
self.magnitude.put(indices, values, mode)
@property
def real(self) -> NumpyQuantity:
return self.__class__(self._magnitude.real, self._units)
@property
def imag(self) -> NumpyQuantity:
return self.__class__(self._magnitude.imag, self._units)
@property
def T(self):
return self.__class__(self._magnitude.T, self._units)
@property
def flat(self):
for v in self._magnitude.flat:
yield self.__class__(v, self._units)
@property
def shape(self) -> Shape:
return self._magnitude.shape
@property
def dtype(self):
return self._magnitude.dtype
@shape.setter
def shape(self, value):
self._magnitude.shape = value
def searchsorted(self, v, side="left", sorter=None):
if isinstance(v, self.__class__):
v = v.to(self).magnitude
elif self.dimensionless:
v = self.__class__(v, "").to(self)
else:
raise DimensionalityError("dimensionless", self._units)
return self.magnitude.searchsorted(v, side, sorter)
def dot(self, b):
"""Dot product of two arrays.
Wraps np.dot().
"""
return np.dot(self, b)
@method_wraps("prod")
def prod(self, *args, **kwargs):
"""Return the product of quantity elements over a given axis
Wraps np.prod().
"""
return np.prod(self, *args, **kwargs)
def __ito_if_needed(self, to_units):
if self.unitless and to_units == "radian":
return
self.ito(to_units)
def __len__(self) -> int:
return len(self._magnitude)
def __getattr__(self, item) -> Any:
if item.startswith("__array_"):
# Handle array protocol attributes other than `__array__`
raise AttributeError(f"Array protocol attribute {item} not available.")
elif item in HANDLED_UFUNCS or item in self._wrapped_numpy_methods:
magnitude_as_duck_array = _to_magnitude(
self._magnitude, force_ndarray_like=True
)
try:
attr = getattr(magnitude_as_duck_array, item)
return functools.partial(self._numpy_method_wrap, attr)
except AttributeError:
raise AttributeError(
f"NumPy method {item} not available on {type(magnitude_as_duck_array)}"
)
except TypeError as exc:
if "not callable" in str(exc):
raise AttributeError(
f"NumPy method {item} not callable on {type(magnitude_as_duck_array)}"
)
else:
raise exc
elif (
HAS_UNCERTAINTIES and item == "ndim" and isinstance(self._magnitude, UFloat)
):
# Dimensionality of a single UFloat is 0, like any other scalar
return 0
try:
return getattr(self._magnitude, item)
except AttributeError:
raise AttributeError(
"Neither Quantity object nor its magnitude ({}) "
"has attribute '{}'".format(self._magnitude, item)
)
def __getitem__(self, key):
try:
return type(self)(self._magnitude[key], self._units)
except PintTypeError:
raise
except TypeError:
raise TypeError(
"Neither Quantity object nor its magnitude ({})"
"supports indexing".format(self._magnitude)
)
def __setitem__(self, key, value):
try:
# If we're dealing with a masked single value or a nan, set it
if (
isinstance(self._magnitude, np.ma.MaskedArray)
and np.ma.is_masked(value)
and getattr(value, "size", 0) == 1
) or (getattr(value, "ndim", 0) == 0 and math.isnan(value)):
self._magnitude[key] = value
return
except TypeError:
pass
try:
if isinstance(value, self.__class__):
factor = self.__class__(
value.magnitude, value._units / self._units
).to_root_units()
else:
factor = self.__class__(value, self._units ** (-1)).to_root_units()
if isinstance(factor, self.__class__):
if not factor.dimensionless:
raise DimensionalityError(
value,
self.units,
extra_msg=". Assign a quantity with the same dimensionality "
"or access the magnitude directly as "
f"`obj.magnitude[{key}] = {value}`.",
)
self._magnitude[key] = factor.magnitude
else:
self._magnitude[key] = factor
except PintTypeError:
raise
except TypeError as exc:
raise TypeError(
f"Neither Quantity object nor its magnitude ({self._magnitude}) "
"supports indexing"
) from exc

@@ -0,0 +1,27 @@
"""
pint.facets.numpy.registry
~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import Any, Generic
from ...compat import TypeAlias
from ..plain import GenericPlainRegistry, QuantityT, UnitT
from .quantity import NumpyQuantity
from .unit import NumpyUnit
class GenericNumpyRegistry(
Generic[QuantityT, UnitT], GenericPlainRegistry[QuantityT, UnitT]
):
pass
class NumpyRegistry(GenericPlainRegistry[NumpyQuantity[Any], NumpyUnit]):
Quantity: TypeAlias = NumpyQuantity[Any]
Unit: TypeAlias = NumpyUnit

@@ -0,0 +1,43 @@
"""
pint.facets.numpy.unit
~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from ...compat import is_upcast_type
from ..plain import PlainUnit
class NumpyUnit(PlainUnit):
__array_priority__ = 17
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
if method != "__call__":
# Only handle ufuncs as callables
return NotImplemented
# Check types and return NotImplemented when upcast type encountered
types = {
type(arg)
for arg in list(inputs) + list(kwargs.values())
if hasattr(arg, "__array_ufunc__")
}
if any(is_upcast_type(other) for other in types):
return NotImplemented
# Act on limited implementations by conversion to multiplicative identity
# Quantity
if ufunc.__name__ in ("true_divide", "divide", "floor_divide", "multiply"):
return ufunc(
*tuple(
self._REGISTRY.Quantity(1, self._units) if arg is self else arg
for arg in inputs
),
**kwargs,
)
return NotImplemented

@@ -0,0 +1,39 @@
"""
pint.facets.plain
~~~~~~~~~~~~~~~~~
Base implementation for registry, units and quantities.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .definitions import (
AliasDefinition,
DefaultsDefinition,
DimensionDefinition,
PrefixDefinition,
ScaleConverter,
UnitDefinition,
)
from .objects import PlainQuantity, PlainUnit
from .quantity import MagnitudeT
from .registry import GenericPlainRegistry, PlainRegistry, QuantityT, UnitT
__all__ = [
"GenericPlainRegistry",
"PlainUnit",
"PlainQuantity",
"PlainRegistry",
"AliasDefinition",
"DefaultsDefinition",
"DimensionDefinition",
"PrefixDefinition",
"ScaleConverter",
"UnitDefinition",
"QuantityT",
"UnitT",
"MagnitudeT",
]

@@ -0,0 +1,302 @@
"""
pint.facets.plain.definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import itertools
import numbers
import typing as ty
from dataclasses import dataclass
from functools import cached_property
from typing import Any
from ... import errors
from ..._typing import Magnitude
from ...converters import Converter
from ...util import UnitsContainer
class NotNumeric(Exception):
"""Internal exception. Do not expose outside Pint"""
def __init__(self, value: Any):
self.value = value
########################
# Convenience functions
########################
@dataclass(frozen=True)
class Equality:
"""An equality statement contains a left and right hand separated
by and equal (=) sign.
lhs = rhs
lhs and rhs are space stripped.
"""
lhs: str
rhs: str
@dataclass(frozen=True)
class CommentDefinition:
"""A comment"""
comment: str
@dataclass(frozen=True)
class DefaultsDefinition:
"""Directive to store default values."""
group: ty.Optional[str]
system: ty.Optional[str]
def items(self):
if self.group is not None:
yield "group", self.group
if self.system is not None:
yield "system", self.system
@dataclass(frozen=True)
class NamedDefinition:
#: name of the definition
name: str
@dataclass(frozen=True)
class PrefixDefinition(NamedDefinition, errors.WithDefErr):
"""Definition of a prefix."""
#: scaling value for this prefix
value: numbers.Number
#: canonical symbol
defined_symbol: str | None = ""
#: additional names for the same prefix
aliases: ty.Tuple[str, ...] = ()
@property
def symbol(self) -> str:
return self.defined_symbol or self.name
@property
def has_symbol(self) -> bool:
return bool(self.defined_symbol)
@cached_property
def converter(self) -> ScaleConverter:
return ScaleConverter(self.value)
def __post_init__(self):
if not errors.is_valid_prefix_name(self.name):
raise self.def_err(errors.MSG_INVALID_PREFIX_NAME)
if self.defined_symbol and not errors.is_valid_prefix_symbol(self.name):
raise self.def_err(
f"the symbol {self.defined_symbol} " + errors.MSG_INVALID_PREFIX_SYMBOL
)
for alias in self.aliases:
if not errors.is_valid_prefix_alias(alias):
raise self.def_err(
f"the alias {alias} " + errors.MSG_INVALID_PREFIX_ALIAS
)
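The `symbol` fallback above (use `defined_symbol` when set, otherwise the full name) is easy to see in a stripped-down dataclass; this is an illustrative sketch, not the real `PrefixDefinition`:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class ToyPrefix:
    name: str
    value: float
    defined_symbol: str | None = ""

    @property
    def symbol(self) -> str:
        # fall back to the full name when no symbol was defined
        return self.defined_symbol or self.name

    @property
    def has_symbol(self) -> bool:
        return bool(self.defined_symbol)


print(ToyPrefix("kilo", 1000.0, "k").symbol)  # k
print(ToyPrefix("kilo", 1000.0).symbol)       # kilo
```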
@dataclass(frozen=True)
class UnitDefinition(NamedDefinition, errors.WithDefErr):
"""Definition of a unit."""
#: canonical symbol
defined_symbol: str | None
#: additional names for the same unit
aliases: tuple[str, ...]
#: A function that converts a value in these units into the reference units
# TODO: this has changed as converter is now annotated as converter.
# Briefly, in several places converter attributes like as_multiplicative were
# accessed. So having a generic function is a no go.
# I guess this was never used as errors were not raised.
converter: Converter | None
#: Reference units.
reference: UnitsContainer | None
def __post_init__(self):
if not errors.is_valid_unit_name(self.name):
raise self.def_err(errors.MSG_INVALID_UNIT_NAME)
# TODO: check why reference: Optional[UnitsContainer]
assert isinstance(self.reference, UnitsContainer)
if not any(map(errors.is_dim, self.reference.keys())):
invalid = tuple(
itertools.filterfalse(errors.is_valid_unit_name, self.reference.keys())
)
if invalid:
raise self.def_err(
f"refers to {', '.join(invalid)} that "
+ errors.MSG_INVALID_UNIT_NAME
)
is_base = False
elif all(map(errors.is_dim, self.reference.keys())):
invalid = tuple(
itertools.filterfalse(
errors.is_valid_dimension_name, self.reference.keys()
)
)
if invalid:
raise self.def_err(
f"refers to {', '.join(invalid)} that "
+ errors.MSG_INVALID_DIMENSION_NAME
)
is_base = True
scale = getattr(self.converter, "scale", 1)
if scale != 1:
raise self.def_err(
"Base unit definitions cannot have a scale different from 1. "
f"(`{scale}` found)"
)
else:
raise self.def_err(
"Cannot mix dimensions and units in the same definition. "
"Base units must be referenced only to dimensions. "
"Derived units must be referenced only to units."
)
# frozen dataclass: set the derived attribute despite frozen=True
object.__setattr__(self, "_is_base", is_base)
if self.defined_symbol and not errors.is_valid_unit_symbol(self.name):
raise self.def_err(
f"the symbol {self.defined_symbol} " + errors.MSG_INVALID_UNIT_SYMBOL
)
for alias in self.aliases:
if not errors.is_valid_unit_alias(alias):
raise self.def_err(
f"the alias {alias} " + errors.MSG_INVALID_UNIT_ALIAS
)
@property
def is_base(self) -> bool:
"""Indicates if it is a base unit."""
# TODO: This is set in __post_init__
return self._is_base
@property
def is_multiplicative(self) -> bool:
# TODO: Check how to avoid this check
assert isinstance(self.converter, Converter)
return self.converter.is_multiplicative
@property
def is_logarithmic(self) -> bool:
# TODO: Check how to avoid this check
assert isinstance(self.converter, Converter)
return self.converter.is_logarithmic
@property
def symbol(self) -> str:
return self.defined_symbol or self.name
@property
def has_symbol(self) -> bool:
return bool(self.defined_symbol)
@dataclass(frozen=True)
class DimensionDefinition(NamedDefinition, errors.WithDefErr):
"""Definition of a root dimension"""
@property
def is_base(self) -> bool:
return True
def __post_init__(self) -> None:
if not errors.is_valid_dimension_name(self.name):
raise self.def_err(errors.MSG_INVALID_DIMENSION_NAME)
@dataclass(frozen=True)
class DerivedDimensionDefinition(DimensionDefinition):
"""Definition of a derived dimension."""
#: reference dimensions.
reference: UnitsContainer
@property
def is_base(self) -> bool:
return False
def __post_init__(self):
if not errors.is_valid_dimension_name(self.name):
raise self.def_err(errors.MSG_INVALID_DIMENSION_NAME)
if not all(map(errors.is_dim, self.reference.keys())):
raise self.def_err(
"derived dimensions must only reference other dimensions"
)
invalid = tuple(
itertools.filterfalse(errors.is_valid_dimension_name, self.reference.keys())
)
if invalid:
raise self.def_err(
f"refers to {', '.join(invalid)} that "
+ errors.MSG_INVALID_DIMENSION_NAME
)
@dataclass(frozen=True)
class AliasDefinition(errors.WithDefErr):
"""Additional alias(es) for an already existing unit."""
#: name of the already existing unit
name: str
#: additional names for the same unit
aliases: ty.Tuple[str, ...]
def __post_init__(self):
if not errors.is_valid_unit_name(self.name):
raise self.def_err(errors.MSG_INVALID_UNIT_NAME)
for alias in self.aliases:
if not errors.is_valid_unit_alias(alias):
raise self.def_err(
f"the alias {alias} " + errors.MSG_INVALID_UNIT_ALIAS
)
@dataclass(frozen=True)
class ScaleConverter(Converter):
"""A linear transformation without offset."""
scale: float
def to_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
if inplace:
value *= self.scale
else:
value = value * self.scale
return value
def from_reference(self, value: Magnitude, inplace: bool = False) -> Magnitude:
if inplace:
value /= self.scale
else:
value = value / self.scale
return value
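`ScaleConverter` above is a pure linear map with no offset; a round-trip sketch of the same to/from-reference contract (toy class, standard library only):

```python
class ToyScaleConverter:
    def __init__(self, scale):
        self.scale = scale

    def to_reference(self, value, inplace=False):
        # the inplace branch only matters for array-like magnitudes;
        # for plain floats both paths behave the same
        return value * self.scale

    def from_reference(self, value, inplace=False):
        return value / self.scale


km = ToyScaleConverter(1000.0)  # kilometre -> metre (the reference)
print(km.to_reference(2.5))      # 2500.0
print(km.from_reference(2500.0))  # 2.5
```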

@@ -0,0 +1,14 @@
"""
pint.facets.plain.objects
~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .quantity import PlainQuantity
from .unit import PlainUnit, UnitsContainer
__all__ = ["PlainUnit", "PlainQuantity", "UnitsContainer"]

@@ -0,0 +1,424 @@
from __future__ import annotations
import bisect
import math
import numbers
import warnings
from typing import TYPE_CHECKING
from ...compat import (
mip_INF,
mip_INTEGER,
mip_Model,
mip_model,
mip_OptimizationStatus,
mip_xsum,
)
from ...errors import UndefinedBehavior
from ...util import infer_base_unit
if TYPE_CHECKING:
from ..._typing import UnitLike
from ...util import UnitsContainer
from .quantity import PlainQuantity
def _get_reduced_units(
quantity: PlainQuantity, units: UnitsContainer
) -> UnitsContainer:
# loop through individual units and compare to each other unit
# can we do better than a nested loop here?
for unit1, exp in units.items():
# make sure it wasn't already reduced to zero exponent on prior pass
if unit1 not in units:
continue
for unit2 in units:
# get exponent after reduction
exp = units[unit1]
if unit1 != unit2:
power = quantity._REGISTRY._get_dimensionality_ratio(unit1, unit2)
if power:
units = units.add(unit2, exp / power).remove([unit1])
break
return units
def ito_reduced_units(quantity: PlainQuantity) -> None:
"""Return PlainQuantity scaled in place to reduced units, i.e. one unit per
dimension. This will not reduce compound units (e.g., 'J/kg' will not
be reduced to m**2/s**2), nor can it make use of contexts at this time.
"""
# shortcuts in case we're dimensionless or only a single unit
if quantity.dimensionless:
return quantity.ito({})
if len(quantity._units) == 1:
return None
units = quantity._units.copy()
new_units = _get_reduced_units(quantity, units)
return quantity.ito(new_units)
def to_reduced_units(
quantity: PlainQuantity,
) -> PlainQuantity:
"""Return PlainQuantity scaled in place to reduced units, i.e. one unit per
dimension. This will not reduce compound units (intentionally), nor
can it make use of contexts at this time.
"""
# shortcuts in case we're dimensionless or only a single unit
if quantity.dimensionless:
return quantity.to({})
if len(quantity._units) == 1:
return quantity
units = quantity._units.copy()
new_units = _get_reduced_units(quantity, units)
return quantity.to(new_units)
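The reduction loop above folds each unit into a dimensionally compatible peer; a standalone sketch with a hypothetical `dim_ratio` callback standing in for `_get_dimensionality_ratio`:

```python
def get_reduced_units(units, dim_ratio):
    # units: name -> exponent; dim_ratio(u1, u2) returns p when the
    # dimensionality of u1 equals that of u2 raised to p, else None.
    units = dict(units)
    for unit1 in list(units):
        # make sure it wasn't already reduced away on a prior pass
        if unit1 not in units:
            continue
        exp = units[unit1]
        for unit2 in units:
            if unit1 != unit2:
                power = dim_ratio(unit1, unit2)
                if power:
                    units[unit2] = units[unit2] + exp / power
                    del units[unit1]
                    break
    return units


# toy ratio: metre and kilometre share the length dimension (power 1)
ratio = lambda a, b: 1 if {a, b} <= {"m", "km"} else None
print(get_reduced_units({"m": 1, "km": 1}, ratio))  # {'km': 2.0}
```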
def to_compact(
quantity: PlainQuantity, unit: UnitsContainer | None = None
) -> PlainQuantity:
""" "Return PlainQuantity rescaled to compact, human-readable units.
To get output in terms of a different unit, use the unit parameter.
Examples
--------
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> (200e-9*ureg.s).to_compact()
<Quantity(200.0, 'nanosecond')>
>>> (1e-2*ureg('kg m/s^2')).to_compact('N')
<Quantity(10.0, 'millinewton')>
"""
if not isinstance(quantity.magnitude, numbers.Number) and not hasattr(
quantity.magnitude, "nominal_value"
):
warnings.warn(
"to_compact applied to non numerical types has an undefined behavior.",
UndefinedBehavior,
stacklevel=2,
)
return quantity
if (
quantity.unitless
or quantity.magnitude == 0
or math.isnan(quantity.magnitude)
or math.isinf(quantity.magnitude)
):
return quantity
SI_prefixes: dict[int, str] = {}
for prefix in quantity._REGISTRY._prefixes.values():
try:
scale = prefix.converter.scale
# Kludgy way to check if this is an SI prefix
log10_scale = int(math.log10(scale))
if log10_scale == math.log10(scale):
SI_prefixes[log10_scale] = prefix.name
except Exception:
SI_prefixes[0] = ""
SI_prefixes_list = sorted(SI_prefixes.items())
SI_powers = [item[0] for item in SI_prefixes_list]
SI_bases = [item[1] for item in SI_prefixes_list]
if unit is None:
unit = infer_base_unit(quantity, registry=quantity._REGISTRY)
else:
unit = infer_base_unit(quantity.__class__(1, unit), registry=quantity._REGISTRY)
q_base = quantity.to(unit)
magnitude = q_base.magnitude
# Support uncertainties
if hasattr(magnitude, "nominal_value"):
magnitude = magnitude.nominal_value
units = list(q_base._units.items())
units_numerator = [a for a in units if a[1] > 0]
if len(units_numerator) > 0:
unit_str, unit_power = units_numerator[0]
else:
unit_str, unit_power = units[0]
if unit_power > 0:
power = math.floor(math.log10(abs(magnitude)) / float(unit_power) / 3) * 3
else:
power = math.ceil(math.log10(abs(magnitude)) / float(unit_power) / 3) * 3
index = bisect.bisect_left(SI_powers, power)
if index >= len(SI_bases):
index = -1
prefix_str = SI_bases[index]
new_unit_str = prefix_str + unit_str
new_unit_container = q_base._units.rename(unit_str, new_unit_str)
return quantity.to(new_unit_container)
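The prefix selection above boils down to rounding the magnitude's order of ten down to a multiple of three (scaled by the unit's exponent) and clamping into the available prefixes; a sketch with a small hypothetical prefix table (positive unit exponents only, whereas `to_compact` also handles negative ones):

```python
import bisect
import math

# small toy subset; the real registry enumerates all SI prefixes
prefixes = {-9: "nano", -6: "micro", -3: "milli", 0: "", 3: "kilo", 6: "mega"}
powers = sorted(prefixes)


def pick_prefix(magnitude, unit_power=1):
    # nearest-below multiple-of-three power of ten, then clamp into the table
    power = math.floor(math.log10(abs(magnitude)) / unit_power / 3) * 3
    index = bisect.bisect_left(powers, power)
    if index >= len(powers):
        index = -1  # clamp to the largest prefix, as to_compact does
    return prefixes[powers[index]]


print(pick_prefix(200e-9))  # nano
print(pick_prefix(1.5e4))   # kilo
```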
def to_preferred(
quantity: PlainQuantity, preferred_units: list[UnitLike] | None = None
) -> PlainQuantity:
"""Return Quantity converted to a unit composed of the preferred units.
Examples
--------
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> (1*ureg.acre).to_preferred([ureg.meters])
<Quantity(4046.87261, 'meter ** 2')>
>>> (1*(ureg.force_pound*ureg.m)).to_preferred([ureg.W])
<Quantity(4.44822162, 'watt * second')>
"""
units = _get_preferred(quantity, preferred_units)
return quantity.to(units)
def ito_preferred(
quantity: PlainQuantity, preferred_units: list[UnitLike] | None = None
) -> PlainQuantity:
"""Return Quantity converted to a unit composed of the preferred units.
Examples
--------
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> (1*ureg.acre).to_preferred([ureg.meters])
<Quantity(4046.87261, 'meter ** 2')>
>>> (1*(ureg.force_pound*ureg.m)).to_preferred([ureg.W])
<Quantity(4.44822162, 'watt * second')>
"""
units = _get_preferred(quantity, preferred_units)
return quantity.ito(units)
def _get_preferred(
quantity: PlainQuantity, preferred_units: list[UnitLike] | None = None
) -> PlainQuantity:
if preferred_units is None:
preferred_units = quantity._REGISTRY.default_preferred_units
if not quantity.dimensionality:
return quantity._units.copy()
# The optimizer isn't perfect, and will sometimes miss obvious solutions.
# This sub-algorithm is less powerful, but always finds the very simple solutions.
def find_simple():
best_ratio = None
best_unit = None
self_dims = sorted(quantity.dimensionality)
self_exps = [quantity.dimensionality[d] for d in self_dims]
s_exps_head, *s_exps_tail = self_exps
n = len(s_exps_tail)
for preferred_unit in preferred_units:
dims = sorted(preferred_unit.dimensionality)
if dims == self_dims:
p_exps_head, *p_exps_tail = (
preferred_unit.dimensionality[d] for d in dims
)
if all(
s_exps_tail[i] * p_exps_head == p_exps_tail[i] * s_exps_head
for i in range(n)
):
ratio = p_exps_head / s_exps_head
ratio = max(ratio, 1 / ratio)
if best_ratio is None or ratio < best_ratio:
best_ratio = ratio
best_unit = preferred_unit ** (s_exps_head / p_exps_head)
return best_unit
simple = find_simple()
if simple is not None:
return simple
# For each dimension (e.g. T(ime), L(ength), M(ass)), assign a default base unit from
# the collection of base units
unit_selections = {
base_unit.dimensionality: base_unit
for base_unit in map(quantity._REGISTRY.Unit, quantity._REGISTRY._base_units)
}
# Override the default unit of each dimension with the 1D-units used in this Quantity
unit_selections.update(
{
unit.dimensionality: unit
for unit in map(quantity._REGISTRY.Unit, quantity._units.keys())
}
)
# Determine the preferred unit for each dimensionality from the preferred_units
# (A preferred unit doesn't have to be only one dimensional, e.g. Watts)
preferred_dims = {
preferred_unit.dimensionality: preferred_unit
for preferred_unit in map(quantity._REGISTRY.Unit, preferred_units)
}
# Combine the defaults and preferred, favoring the preferred
unit_selections.update(preferred_dims)
# This algorithm has poor asymptotic time complexity, so first reduce the considered
# dimensions and units to only those that are useful to the problem
# The dimensions (without powers) of this Quantity
dimension_set = set(quantity.dimensionality)
# Getting zero exponents in dimensions not in dimension_set can be facilitated
# by units that interact with that dimension and one or more dimension_set members.
# For example MT^1 * LT^-1 lets you get MLT^0 when T is not in dimension_set.
# For each candidate unit that interacts with a dimension_set member, add the
# candidate unit's other dimensions to dimension_set, and repeat until no more
# dimensions are selected.
discovery_done = False
while not discovery_done:
discovery_done = True
for d in unit_selections:
unit_dimensions = set(d)
intersection = unit_dimensions.intersection(dimension_set)
if 0 < len(intersection) < len(unit_dimensions):
# there are dimensions in this unit that are in dimension set
# and others that are not in dimension set
dimension_set = dimension_set.union(unit_dimensions)
discovery_done = False
break
# filter out dimensions and their unit selections that don't interact with any
# dimension_set members
unit_selections = {
dimensionality: unit
for dimensionality, unit in unit_selections.items()
if set(dimensionality).intersection(dimension_set)
}
# update preferred_units with the selected units that were originally preferred
preferred_units = list(
{u for d, u in unit_selections.items() if d in preferred_dims}
)
preferred_units.sort(key=str) # for determinism
# and unpreferred_units are the selected units that weren't originally preferred
unpreferred_units = list(
{u for d, u in unit_selections.items() if d not in preferred_dims}
)
unpreferred_units.sort(key=str) # for determinism
# for indexability
dimensions = list(dimension_set)
dimensions.sort() # for determinism
# the powers for each element of dimensions (the list) for this Quantity
dimensionality = [quantity.dimensionality[dimension] for dimension in dimensions]
# Now that the input data is minimized, setup the optimization problem
# use mip to select units from preferred units
model = mip_Model()
model.verbose = 0
# Make one variable for each candidate unit
vars = [
model.add_var(str(unit), lb=-mip_INF, ub=mip_INF, var_type=mip_INTEGER)
for unit in (preferred_units + unpreferred_units)
]
# where [u1 ... uN] are powers of N candidate units (vars)
# and [d1(uI) ... dK(uI)] are the K dimensional exponents of candidate unit I
# and [t1 ... tK] are the dimensional exponents of the quantity (quantity)
# create the following constraints
#
# ⎡ d1(u1) ⋯ dK(u1) ⎤
# [ u1 ⋯ uN ] * ⎢   ⋮      ⋱      ⋮   ⎥ = [ t1 ⋯ tK ]
# ⎣ d1(uN) ⋯ dK(uN) ⎦
#
# in English, the units we choose, and their exponents, when combined, must have the
# target dimensionality
matrix = [
[preferred_unit.dimensionality[dimension] for dimension in dimensions]
for preferred_unit in (preferred_units + unpreferred_units)
]
# Do the matrix multiplication with mip_model.xsum for performance and create constraints
for i in range(len(dimensions)):
dot = mip_model.xsum([var * vector[i] for var, vector in zip(vars, matrix)])
# add constraint to the model
model += dot == dimensionality[i]
# where [c1 ... cN] are costs, 1 when a preferred variable, and a large value when not
# minimize sum(abs(u1) * c1 ... abs(uN) * cN)
# linearize the optimization variable via a proxy
objective = model.add_var("objective", lb=0, ub=mip_INF, var_type=mip_INTEGER)
# Constrain the objective to be at least the sum of the absolute values of the
# unit powers, weighted by cost; minimization then drives it to equality.
# Do this by making a separate constraint for each permutation of signedness.
# The cost coefficients bias the output toward the preferred units and toward
# units that interact with fewer dimensions.
cost = [len(p.dimensionality) for p in preferred_units]
# set the cost for non preferred units to a higher number
bias = (
max(map(abs, dimensionality)) * max((1, *cost)) * 10
) # arbitrary, just needs to be larger
cost.extend([bias] * len(unpreferred_units))
for i in range(1 << len(vars)):
signed_sum = mip_xsum(
[
(-1 if i & 1 << (len(vars) - j - 1) else 1) * cost[j] * var
for j, var in enumerate(vars)
]
)
model += objective >= signed_sum
model.objective = objective
# run the mips minimizer and extract the result if successful
if model.optimize() == mip_OptimizationStatus.OPTIMAL:
optimal_units = []
min_objective = float("inf")
for i in range(model.num_solutions):
if model.objective_values[i] < min_objective:
min_objective = model.objective_values[i]
optimal_units.clear()
elif model.objective_values[i] > min_objective:
continue
temp_unit = quantity._REGISTRY.Unit("")
for var in vars:
if var.xi(i):
temp_unit *= quantity._REGISTRY.Unit(var.name) ** var.xi(i)
optimal_units.append(temp_unit)
sorting_keys = {tuple(sorted(unit._units)): unit for unit in optimal_units}
min_key = sorted(sorting_keys)[0]
result_unit = sorting_keys[min_key]
return result_unit
# for whatever reason, a solution wasn't found
# fall back to returning the original quantity's units
return quantity._units.copy()
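The `2**N` sign-permutation constraints above linearize the absolute-value objective: the maximum over all signed sums of `cost[j] * var[j]` equals the sum of their absolute values, so requiring `objective >=` every signed sum bounds the objective from below by exactly that sum. A stdlib-only sketch of that identity (no `mip` dependency; the costs and exponents are made-up illustrative values):

```python
from itertools import product


def abs_objective_lower_bounds(costs, values):
    # One lower bound per sign pattern, mirroring the `1 << len(vars)` loop above
    return [
        sum(sign * c * v for sign, c, v in zip(signs, costs, values))
        for signs in product((1, -1), repeat=len(values))
    ]


costs = [1, 1, 10]  # preferred units are cheap, unpreferred expensive
values = [2, -1, 0]  # candidate unit exponents a solver might propose
# The tightest bound equals sum(|v_j| * c_j), which the minimizer then attains.
assert max(abs_objective_lower_bounds(costs, values)) == sum(
    abs(v) * c for v, c in zip(values, costs)
)
```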

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,289 @@
"""
pint.facets.plain.unit
~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import copy
import locale
import operator
from numbers import Number
from typing import TYPE_CHECKING, Any
from ..._typing import UnitLike
from ...compat import NUMERIC_TYPES, deprecated
from ...errors import DimensionalityError
from ...util import PrettyIPython, SharedRegistryObject, UnitsContainer
from .definitions import UnitDefinition
if TYPE_CHECKING:
from ..context import Context
class PlainUnit(PrettyIPython, SharedRegistryObject):
"""Implements a class to describe a unit supporting math operations."""
def __reduce__(self):
# See notes in Quantity.__reduce__
from datasette.vendored.pint import _unpickle_unit
return _unpickle_unit, (PlainUnit, self._units)
def __init__(self, units: UnitLike) -> None:
super().__init__()
if isinstance(units, (UnitsContainer, UnitDefinition)):
self._units = units
elif isinstance(units, str):
self._units = self._REGISTRY.parse_units(units)._units
elif isinstance(units, PlainUnit):
self._units = units._units
else:
raise TypeError(
"units must be of type str, Unit or "
"UnitsContainer; not {}.".format(type(units))
)
def __copy__(self) -> PlainUnit:
ret = self.__class__(self._units)
return ret
def __deepcopy__(self, memo) -> PlainUnit:
ret = self.__class__(copy.deepcopy(self._units, memo))
return ret
@deprecated(
"This function will be removed in future versions of pint.\n"
"Use ureg.formatter.format_unit_babel"
)
def format_babel(self, spec: str = "", **kwspec: Any) -> str:
return self._REGISTRY.formatter.format_unit_babel(self, spec, **kwspec)
def __format__(self, spec: str) -> str:
return self._REGISTRY.formatter.format_unit(self, spec)
def __str__(self) -> str:
return self._REGISTRY.formatter.format_unit(self)
def __bytes__(self) -> bytes:
return str(self).encode(locale.getpreferredencoding())
def __repr__(self) -> str:
return f"<Unit('{self._units}')>"
@property
def dimensionless(self) -> bool:
"""Return True if the PlainUnit is dimensionless; False otherwise."""
return not bool(self.dimensionality)
@property
def dimensionality(self) -> UnitsContainer:
"""
Returns
-------
dict
Dimensionality of the PlainUnit, e.g. ``{length: 1, time: -1}``
"""
try:
return self._dimensionality
except AttributeError:
dim = self._REGISTRY._get_dimensionality(self._units)
self._dimensionality = dim
return self._dimensionality
def compatible_units(self, *contexts):
if contexts:
with self._REGISTRY.context(*contexts):
return self._REGISTRY.get_compatible_units(self)
return self._REGISTRY.get_compatible_units(self)
def is_compatible_with(
self, other: Any, *contexts: str | Context, **ctx_kwargs: Any
) -> bool:
"""check if the other object is compatible
Parameters
----------
other
The object to check. Treated as dimensionless if not a
Quantity, PlainUnit or str.
*contexts : str or pint.Context
Contexts to use in the transformation.
**ctx_kwargs :
Values for the Context/s
Returns
-------
bool
"""
from .quantity import PlainQuantity
if contexts or self._REGISTRY._active_ctx:
try:
(1 * self).to(other, *contexts, **ctx_kwargs)
return True
except DimensionalityError:
return False
if isinstance(other, (PlainQuantity, PlainUnit)):
return self.dimensionality == other.dimensionality
if isinstance(other, str):
return (
self.dimensionality == self._REGISTRY.parse_units(other).dimensionality
)
return self.dimensionless
def __mul__(self, other):
if self._check(other):
if isinstance(other, self.__class__):
return self.__class__(self._units * other._units)
else:
qself = self._REGISTRY.Quantity(1, self._units)
return qself * other
if isinstance(other, Number) and other == 1:
return self._REGISTRY.Quantity(other, self._units)
return self._REGISTRY.Quantity(1, self._units) * other
__rmul__ = __mul__
def __truediv__(self, other):
if self._check(other):
if isinstance(other, self.__class__):
return self.__class__(self._units / other._units)
else:
qself = 1 * self
return qself / other
return self._REGISTRY.Quantity(1 / other, self._units)
def __rtruediv__(self, other):
# As PlainUnit and Quantity both handle truediv with each other rtruediv can
# only be called for something different.
if isinstance(other, NUMERIC_TYPES):
return self._REGISTRY.Quantity(other, 1 / self._units)
elif isinstance(other, UnitsContainer):
return self.__class__(other / self._units)
return NotImplemented
__div__ = __truediv__
__rdiv__ = __rtruediv__
def __pow__(self, other) -> PlainUnit:
if isinstance(other, NUMERIC_TYPES):
return self.__class__(self._units**other)
else:
mess = f"Cannot power PlainUnit by {type(other)}"
raise TypeError(mess)
def __hash__(self) -> int:
return self._units.__hash__()
def __eq__(self, other) -> bool:
# We compare to the plain class of PlainUnit because each PlainUnit class is
# unique.
if self._check(other):
if isinstance(other, self.__class__):
return self._units == other._units
else:
return other == self._REGISTRY.Quantity(1, self._units)
elif isinstance(other, NUMERIC_TYPES):
return other == self._REGISTRY.Quantity(1, self._units)
else:
return self._units == other
def __ne__(self, other) -> bool:
return not (self == other)
def compare(self, other, op) -> bool:
self_q = self._REGISTRY.Quantity(1, self)
if isinstance(other, NUMERIC_TYPES):
return self_q.compare(other, op)
elif isinstance(other, (PlainUnit, UnitsContainer, dict)):
return self_q.compare(self._REGISTRY.Quantity(1, other), op)
return NotImplemented
__lt__ = lambda self, other: self.compare(other, op=operator.lt)
__le__ = lambda self, other: self.compare(other, op=operator.le)
__ge__ = lambda self, other: self.compare(other, op=operator.ge)
__gt__ = lambda self, other: self.compare(other, op=operator.gt)
def __int__(self) -> int:
return int(self._REGISTRY.Quantity(1, self._units))
def __float__(self) -> float:
return float(self._REGISTRY.Quantity(1, self._units))
def __complex__(self) -> complex:
return complex(self._REGISTRY.Quantity(1, self._units))
@property
def systems(self):
out = set()
for uname in self._units.keys():
for sname, sys in self._REGISTRY._systems.items():
if uname in sys.members:
out.add(sname)
return frozenset(out)
def from_(self, value, strict=True, name="value"):
"""Converts a numerical value or quantity to this unit
Parameters
----------
value :
a Quantity (or numerical value if strict=False) to convert
strict :
boolean to indicate that only quantities are accepted (Default value = True)
name :
descriptive name to use if an exception occurs (Default value = "value")
Returns
-------
type
The converted value as this unit
"""
if self._check(value):
if not isinstance(value, self._REGISTRY.Quantity):
value = self._REGISTRY.Quantity(1, value)
return value.to(self)
elif strict:
raise ValueError("%s must be a Quantity" % value)
else:
return value * self
def m_from(self, value, strict=True, name="value"):
"""Converts a numerical value or quantity to this unit, then returns
the magnitude of the converted value
Parameters
----------
value :
a Quantity (or numerical value if strict=False) to convert
strict :
boolean to indicate that only quantities are accepted (Default value = True)
name :
descriptive name to use if an exception occurs (Default value = "value")
Returns
-------
type
The magnitude of the converted value
"""
return self.from_(value, strict=strict, name=name).magnitude
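The compatibility check in `is_compatible_with` above reduces, outside of contexts, to comparing dimensionality containers. A minimal sketch with plain dicts standing in for `UnitsContainer`:

```python
def compatible(dim_a: dict, dim_b: dict) -> bool:
    # Mirrors `self.dimensionality == other.dimensionality` in is_compatible_with
    return dim_a == dim_b


speed = {"[length]": 1, "[time]": -1}
acceleration = {"[length]": 1, "[time]": -2}

assert compatible(speed, {"[length]": 1, "[time]": -1})
assert not compatible(speed, acceleration)
```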


@@ -0,0 +1,17 @@
"""
pint.facets.system
~~~~~~~~~~~~~~~~~~
Adds to pint the capability to handle systems of units.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from .definitions import SystemDefinition
from .objects import System
from .registry import GenericSystemRegistry, SystemRegistry
__all__ = ["SystemDefinition", "System", "SystemRegistry", "GenericSystemRegistry"]


@@ -0,0 +1,86 @@
"""
pint.facets.systems.definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import dataclass
from ... import errors
from ...compat import Self
@dataclass(frozen=True)
class BaseUnitRule:
"""A rule to define a base unit within a system."""
#: name of the unit to become base unit
#: (must exist in the registry)
new_unit_name: str
#: name of the unit to be kicked out to make room for the new base unit
#: If None, the current base unit with the same dimensionality will be used
old_unit_name: str | None = None
# Instead of defining __post_init__ here,
# it will be added to the container class
# so that the name and a meaningful class
# can be used.
@dataclass(frozen=True)
class SystemDefinition(errors.WithDefErr):
"""Definition of a System."""
#: name of the system
name: str
#: unit groups that will be included within the system
using_group_names: tuple[str, ...]
#: rules to define new base unit within the system.
rules: tuple[BaseUnitRule, ...]
@classmethod
def from_lines(
cls: type[Self], lines: Iterable[str], non_int_type: type
) -> Self | None:
# TODO: this is to keep it backwards compatible
# TODO: check when is None returned.
from ...delegates import ParserConfig, txt_defparser
cfg = ParserConfig(non_int_type)
parser = txt_defparser.DefParser(cfg, None)
pp = parser.parse_string("\n".join(lines) + "\n@end")
for definition in parser.iter_parsed_project(pp):
if isinstance(definition, cls):
return definition
@property
def unit_replacements(self) -> tuple[tuple[str, str | None], ...]:
# TODO: check if None can be dropped.
return tuple((el.new_unit_name, el.old_unit_name) for el in self.rules)
def __post_init__(self):
if not errors.is_valid_system_name(self.name):
raise self.def_err(errors.MSG_INVALID_SYSTEM_NAME)
for k in self.using_group_names:
if not errors.is_valid_group_name(k):
raise self.def_err(
f"refers to '{k}' that " + errors.MSG_INVALID_GROUP_NAME
)
for ndx, rule in enumerate(self.rules, 1):
if not errors.is_valid_unit_name(rule.new_unit_name):
raise self.def_err(
f"rule #{ndx} refers to '{rule.new_unit_name}' that "
+ errors.MSG_INVALID_UNIT_NAME
)
if rule.old_unit_name and not errors.is_valid_unit_name(rule.old_unit_name):
raise self.def_err(
f"rule #{ndx} refers to '{rule.old_unit_name}' that "
+ errors.MSG_INVALID_UNIT_NAME
)
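For reference, the kind of definition block that `SystemDefinition.from_lines` parses looks roughly like the following (syntax recalled from pint's documentation — treat the exact spelling as an assumption): a bare unit name produces a `BaseUnitRule` with `old_unit_name=None`, while `new_unit: old_unit` names both, and `from_lines` appends the closing `@end` itself.

```
@system MKS_hours using international
    meter
    kilogram
    hour: second
@end
```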


@@ -0,0 +1,215 @@
"""
pint.facets.systems.objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import numbers
from collections.abc import Callable, Iterable
from numbers import Number
from typing import Any, Generic
from ..._typing import UnitLike
from ...babel_names import _babel_systems
from ...compat import babel_parse
from ...util import (
SharedRegistryObject,
getattr_maybe_raise,
logger,
to_units_container,
)
from .. import group
from ..plain import MagnitudeT
from .definitions import SystemDefinition
GetRootUnits = Callable[[UnitLike, bool], tuple[Number, UnitLike]]
class SystemQuantity(Generic[MagnitudeT], group.GroupQuantity[MagnitudeT]):
pass
class SystemUnit(group.GroupUnit):
pass
class System(SharedRegistryObject):
"""A system is a Group plus a set of plain units.
Members are computed dynamically, that is if a unit is added to a group X
all groups that include X are affected.
The System belongs to one Registry.
See SystemDefinition for the definition file syntax.
Parameters
----------
name
Name of the group.
"""
def __init__(self, name: str):
#: Name of the system
#: :type: str
self.name = name
#: Maps root unit names to a dict indicating the new unit and its exponent.
self.base_units: dict[str, dict[str, numbers.Number]] = {}
#: Derived unit names.
self.derived_units: set[str] = set()
#: Names of the groups used by this system.
self._used_groups: set[str] = set()
self._computed_members: frozenset[str] | None = None
# Add this system to the system dictionary
self._REGISTRY._systems[self.name] = self
def __dir__(self):
return list(self.members)
def __getattr__(self, item: str) -> Any:
getattr_maybe_raise(self, item)
u = getattr(self._REGISTRY, self.name + "_" + item, None)
if u is not None:
return u
return getattr(self._REGISTRY, item)
@property
def members(self):
d = self._REGISTRY._groups
if self._computed_members is None:
tmp: set[str] = set()
for group_name in self._used_groups:
try:
tmp |= d[group_name].members
except KeyError:
logger.warning(
"Could not resolve {} in System {}".format(
group_name, self.name
)
)
self._computed_members = frozenset(tmp)
return self._computed_members
def invalidate_members(self):
"""Invalidate computed members in this Group and all parent nodes."""
self._computed_members = None
def add_groups(self, *group_names: str) -> None:
"""Add groups to group."""
self._used_groups |= set(group_names)
self.invalidate_members()
def remove_groups(self, *group_names: str) -> None:
"""Remove groups from group."""
self._used_groups -= set(group_names)
self.invalidate_members()
def format_babel(self, locale: str) -> str:
"""translate the name of the system."""
if locale and self.name in _babel_systems:
name = _babel_systems[self.name]
locale = babel_parse(locale)
return locale.measurement_systems[name]
return self.name
# TODO: When 3.11 is minimal version, use Self
@classmethod
def from_lines(
cls: type[System],
lines: Iterable[str],
get_root_func: GetRootUnits,
non_int_type: type = float,
) -> System:
# TODO: we changed something here it used to be
# system_definition = SystemDefinition.from_lines(lines, get_root_func)
system_definition = SystemDefinition.from_lines(lines, non_int_type)
if system_definition is None:
raise ValueError(f"Could not define System from from {lines}")
return cls.from_definition(system_definition, get_root_func)
@classmethod
def from_definition(
cls: type[System],
system_definition: SystemDefinition,
get_root_func: GetRootUnits | None = None,
) -> System:
if get_root_func is None:
# TODO: kept for backwards compatibility
get_root_func = cls._REGISTRY.get_root_units
base_unit_names = {}
derived_unit_names = []
for new_unit, old_unit in system_definition.unit_replacements:
if old_unit is None:
old_unit_dict = to_units_container(get_root_func(new_unit)[1])
if len(old_unit_dict) != 1:
raise ValueError(
"The new unit must be a root dimension if not discarded unit is specified."
)
old_unit, value = dict(old_unit_dict).popitem()
base_unit_names[old_unit] = {new_unit: 1 / value}
else:
# The old unit MUST be a root unit, if not raise an error.
if old_unit != str(get_root_func(old_unit)[1]):
raise ValueError(
f"The old unit {old_unit} must be a root unit "
f"in order to be replaced by new unit {new_unit}"
)
# Here we find new_unit expanded in terms of root_units
new_unit_expanded = to_units_container(
get_root_func(new_unit)[1], cls._REGISTRY
)
# We require that the old unit is present in the new_unit expanded
if old_unit not in new_unit_expanded:
raise ValueError("Old unit must be a component of new unit")
# Here we invert the equation, in other words
# we write the old unit in terms of the new unit and its expansion
new_unit_dict = {
new_unit: -1 / value
for new_unit, value in new_unit_expanded.items()
if new_unit != old_unit
}
new_unit_dict[new_unit] = 1 / new_unit_expanded[old_unit]
base_unit_names[old_unit] = new_unit_dict
system = cls(system_definition.name)
system.add_groups(*system_definition.using_group_names)
system.base_units.update(**base_unit_names)
system.derived_units |= set(derived_unit_names)
return system
class Lister:
def __init__(self, d: dict[str, Any]):
self.d = d
def __dir__(self) -> list[str]:
return list(self.d.keys())
def __getattr__(self, item: str) -> Any:
getattr_maybe_raise(self, item)
return self.d[item]


@@ -0,0 +1,265 @@
"""
pint.facets.systems.registry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from numbers import Number
from typing import TYPE_CHECKING, Any, Generic
from ... import errors
from ...compat import TypeAlias
from ..plain import QuantityT, UnitT
if TYPE_CHECKING:
from ..._typing import Quantity, Unit
from ..._typing import UnitLike
from ...util import UnitsContainer as UnitsContainerT
from ...util import (
create_class_with_registry,
to_units_container,
)
from ..group import GenericGroupRegistry
from . import objects
from .definitions import SystemDefinition
class GenericSystemRegistry(
Generic[QuantityT, UnitT], GenericGroupRegistry[QuantityT, UnitT]
):
"""Handle of Systems.
Conversion between units with different dimensions according
to previously established relations (contexts).
(e.g. in the spectroscopy, conversion between frequency and energy is possible)
Capabilities:
- Register systems.
- List systems
- Get or get the default system.
- Parse @group directive.
"""
# TODO: Change this to System: System to specify class
# and use introspection to get system class as a way
# to enjoy typing goodies
System: type[objects.System]
def __init__(self, system: str | None = None, **kwargs):
super().__init__(**kwargs)
#: Map system name to system.
self._systems: dict[str, objects.System] = {}
#: Maps dimensionality (UnitsContainer) to Dimensionality (UnitsContainer)
self._base_units_cache: dict[UnitsContainerT, UnitsContainerT] = {}
self._default_system_name: str | None = system
def _init_dynamic_classes(self) -> None:
"""Generate subclasses on the fly and attach them to self"""
super()._init_dynamic_classes()
self.System = create_class_with_registry(self, objects.System)
def _after_init(self) -> None:
"""Invoked at the end of ``__init__``.
- Create default group and add all orphan units to it
- Set default system
"""
super()._after_init()
#: System name to be used by default.
self._default_system_name = self._default_system_name or self._defaults.get(
"system", None
)
def _register_definition_adders(self) -> None:
super()._register_definition_adders()
self._register_adder(SystemDefinition, self._add_system)
def _add_system(self, sd: SystemDefinition) -> None:
if sd.name in self._systems:
raise ValueError(f"System {sd.name} already present in registry")
try:
# As a System is a SharedRegistryObject
# it adds itself to the registry.
self.System.from_definition(sd)
except KeyError as e:
# TODO: fix this error message
raise errors.DefinitionError(f"unknown dimension {e} in context")
@property
def sys(self):
return objects.Lister(self._systems)
@property
def default_system(self) -> str | None:
return self._default_system_name
@default_system.setter
def default_system(self, name: str) -> None:
if name:
if name not in self._systems:
raise ValueError("Unknown system %s" % name)
self._base_units_cache = {}
self._default_system_name = name
def get_system(self, name: str, create_if_needed: bool = True) -> objects.System:
"""Return a Group.
Parameters
----------
name : str
Name of the group to be.
create_if_needed : bool
If True, create a group if not found. If False, raise an Exception.
(Default value = True)
Returns
-------
type
System
"""
if name in self._systems:
return self._systems[name]
if not create_if_needed:
raise ValueError("Unknown system %s" % name)
return self.System(name)
def get_base_units(
self,
input_units: UnitLike | Quantity,
check_nonmult: bool = True,
system: str | objects.System | None = None,
) -> tuple[Number, Unit]:
"""Convert unit or dict of units to the plain units.
If any unit is non multiplicative and check_converter is True,
then None is returned as the multiplicative factor.
Unlike PlainRegistry, in this registry root_units might be different
from base_units
Parameters
----------
input_units : UnitsContainer or str
units
check_nonmult : bool
if True, None will be returned as the
multiplicative factor if a non-multiplicative
unit is found in the final Units. (Default value = True)
system :
(Default value = None)
Returns
-------
type
multiplicative factor, plain units
"""
input_units = to_units_container(input_units)
f, units = self._get_base_units(input_units, check_nonmult, system)
return f, self.Unit(units)
def _get_base_units(
self,
input_units: UnitsContainerT,
check_nonmult: bool = True,
system: str | objects.System | None = None,
):
if system is None:
system = self._default_system_name
# The cache is only done for check_nonmult=True and the current system.
if (
check_nonmult
and system == self._default_system_name
and input_units in self._base_units_cache
):
return self._base_units_cache[input_units]
factor, units = self.get_root_units(input_units, check_nonmult)
if not system:
return factor, units
# This will not be necessary after integration with the registry
# as it has a UnitsContainer intermediate
units = to_units_container(units, self)
destination_units = self.UnitsContainer()
bu = self.get_system(system, False).base_units
for unit, value in units.items():
if unit in bu:
new_unit = bu[unit]
new_unit = to_units_container(new_unit, self)
destination_units *= new_unit**value
else:
destination_units *= self.UnitsContainer({unit: value})
base_factor = self.convert(factor, units, destination_units)
if check_nonmult:
self._base_units_cache[input_units] = base_factor, destination_units
return base_factor, destination_units
def get_compatible_units(
self, input_units: UnitsContainerT, group_or_system: str | None = None
) -> frozenset[Unit]:
""" """
group_or_system = group_or_system or self._default_system_name
if group_or_system is None:
return super().get_compatible_units(input_units)
input_units = to_units_container(input_units)
equiv = self._get_compatible_units(input_units, group_or_system)
return frozenset(self.Unit(eq) for eq in equiv)
def _get_compatible_units(
self, input_units: UnitsContainerT, group_or_system: str | None = None
) -> frozenset[Unit]:
if group_or_system and group_or_system in self._systems:
members = self._systems[group_or_system].members
# group_or_system has been handled by System
return frozenset(members & super()._get_compatible_units(input_units))
try:
# This will be handled by groups
return super()._get_compatible_units(input_units, group_or_system)
except ValueError as ex:
# It might be also a system
if "Unknown Group" in str(ex):
raise ValueError(
"Unknown Group o System with name '%s'" % group_or_system
) from ex
raise ex
class SystemRegistry(
GenericSystemRegistry[objects.SystemQuantity[Any], objects.SystemUnit]
):
Quantity: TypeAlias = objects.SystemQuantity[Any]
Unit: TypeAlias = objects.SystemUnit
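The destination-units loop in `_get_base_units` above multiplies `UnitsContainer`s (which adds exponents) and raises them to powers (which scales exponents). A rough dict-based sketch of that substitution step, with `substitute_base_units` as a hypothetical stand-in name:

```python
def substitute_base_units(units, base_units):
    # units: {unit_name: exponent}; base_units: {root_unit: {new_unit: exponent}}
    destination = {}
    for unit, value in units.items():
        replacement = base_units.get(unit, {unit: 1})
        for new_unit, exp in replacement.items():
            # `destination_units *= new_unit ** value` in exponent form
            destination[new_unit] = destination.get(new_unit, 0) + exp * value
    return {u: e for u, e in destination.items() if e != 0}


# A system whose base time unit is the minute:
assert substitute_base_units(
    {"meter": 1, "second": -2}, {"second": {"minute": 1}}
) == {"meter": 1, "minute": -2}
```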


@@ -0,0 +1,169 @@
"""
pint.formatter
~~~~~~~~~~~~~~
Format units for pint.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from numbers import Number
from typing import Iterable
from .delegates.formatter._format_helpers import (
_PRETTY_EXPONENTS, # noqa: F401
)
from .delegates.formatter._format_helpers import (
join_u as _join, # noqa: F401
)
from .delegates.formatter._format_helpers import (
pretty_fmt_exponent as _pretty_fmt_exponent, # noqa: F401
)
from .delegates.formatter._spec_helpers import (
_BASIC_TYPES, # noqa: F401
FORMATTER, # noqa: F401
REGISTERED_FORMATTERS,
extract_custom_flags, # noqa: F401
remove_custom_flags, # noqa: F401
)
from .delegates.formatter._spec_helpers import (
parse_spec as _parse_spec, # noqa: F401
)
from .delegates.formatter._spec_helpers import (
split_format as split_format, # noqa: F401
)
# noqa
from .delegates.formatter._to_register import register_unit_format # noqa: F401
# Backwards compatibility stuff
from .delegates.formatter.latex import (
_EXP_PATTERN, # noqa: F401
latex_escape, # noqa: F401
matrix_to_latex, # noqa: F401
ndarray_to_latex, # noqa: F401
ndarray_to_latex_parts, # noqa: F401
siunitx_format_unit, # noqa: F401
vector_to_latex, # noqa: F401
)
def formatter(
items: Iterable[tuple[str, Number]],
as_ratio: bool = True,
single_denominator: bool = False,
product_fmt: str = " * ",
division_fmt: str = " / ",
power_fmt: str = "{} ** {}",
parentheses_fmt: str = "({0})",
exp_call: FORMATTER = "{:n}".format,
sort: bool = True,
) -> str:
"""Format a list of (name, exponent) pairs.
Parameters
----------
items : list
a list of (name, exponent) pairs.
as_ratio : bool, optional
True to display as ratio, False as negative powers. (Default value = True)
single_denominator : bool, optional
all terms with negative exponents are
collected together. (Default value = False)
product_fmt : str
the format used for multiplication. (Default value = " * ")
division_fmt : str
the format used for division. (Default value = " / ")
power_fmt : str
the format used for exponentiation. (Default value = "{} ** {}")
parentheses_fmt : str
the format used for parenthesis. (Default value = "({0})")
exp_call : callable
(Default value = lambda x: f"{x:n}")
sort : bool, optional
True to sort the formatted units alphabetically (Default value = True)
Returns
-------
str
the formula as a string.
"""
join_u = _join
if sort is False:
items = tuple(items)
else:
items = sorted(items)
if not items:
return ""
if as_ratio:
fun = lambda x: exp_call(abs(x))
else:
fun = exp_call
pos_terms, neg_terms = [], []
for key, value in items:
if value == 1:
pos_terms.append(key)
elif value > 0:
pos_terms.append(power_fmt.format(key, fun(value)))
elif value == -1 and as_ratio:
neg_terms.append(key)
else:
neg_terms.append(power_fmt.format(key, fun(value)))
if not as_ratio:
# Show as Product: positive * negative terms ** -1
return _join(product_fmt, pos_terms + neg_terms)
# Show as Ratio: positive terms / negative terms
pos_ret = _join(product_fmt, pos_terms) or "1"
if not neg_terms:
return pos_ret
if single_denominator:
neg_ret = join_u(product_fmt, neg_terms)
if len(neg_terms) > 1:
neg_ret = parentheses_fmt.format(neg_ret)
else:
neg_ret = join_u(division_fmt, neg_terms)
# TODO: first or last pos_ret should be pluralized
return _join(division_fmt, [pos_ret, neg_ret])
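As a concrete illustration of the ratio branch above, here is a trimmed, self-contained re-implementation (sorting always on, no `single_denominator`, default format strings baked in) — a sketch, not pint's actual formatter:

```python
def format_ratio(items):
    # Mirrors the pos_terms / neg_terms split in `formatter`
    pos, neg = [], []
    for name, exp in sorted(items):
        if exp == 1:
            pos.append(name)
        elif exp > 0:
            pos.append(f"{name} ** {exp}")
        elif exp == -1:
            neg.append(name)
        else:
            neg.append(f"{name} ** {abs(exp)}")
    numerator = " * ".join(pos) or "1"
    return " / ".join([numerator] + neg) if neg else numerator


assert format_ratio([("meter", 1), ("second", -2)]) == "meter / second ** 2"
assert format_ratio([("second", -1)]) == "1 / second"
```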
def format_unit(unit, spec: str, registry=None, **options):
# registry may be None to allow formatting `UnitsContainer` objects
# in that case, the spec may not be "Lx"
if not unit:
if spec.endswith("%"):
return ""
else:
return "dimensionless"
if not spec:
spec = "D"
if registry is None:
_formatter = REGISTERED_FORMATTERS.get(spec, None)
else:
try:
_formatter = registry.formatter._formatters[spec]
except Exception:
_formatter = registry.formatter._formatters.get(spec, None)
if _formatter is None:
raise ValueError(f"Unknown conversion specified: {spec}")
return _formatter.format_unit(unit)


@@ -0,0 +1,86 @@
"""
pint.matplotlib
~~~~~~~~~~~~~~~
Functions and classes related to working with Matplotlib's support
for plotting with units.
:copyright: 2017 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import matplotlib.units
from .util import iterable, sized
class PintAxisInfo(matplotlib.units.AxisInfo):
"""Support default axis and tick labeling and default limits."""
def __init__(self, units):
"""Set the default label to the pretty-print of the unit."""
formatter = units._REGISTRY.mpl_formatter
super().__init__(label=formatter.format(units))
class PintConverter(matplotlib.units.ConversionInterface):
"""Implement support for pint within matplotlib's unit conversion framework."""
def __init__(self, registry):
super().__init__()
self._reg = registry
def convert(self, value, unit, axis):
"""Convert :`Quantity` instances for matplotlib to use."""
# Short circuit for arrays
if hasattr(value, "units"):
return value.to(unit).magnitude
if iterable(value):
return [self._convert_value(v, unit, axis) for v in value]
return self._convert_value(value, unit, axis)
def _convert_value(self, value, unit, axis):
"""Handle converting using attached unit or falling back to axis units."""
if hasattr(value, "units"):
return value.to(unit).magnitude
return self._reg.Quantity(value, axis.get_units()).to(unit).magnitude
@staticmethod
def axisinfo(unit, axis):
"""Return axis information for this particular unit."""
return PintAxisInfo(unit)
@staticmethod
def default_units(x, axis):
"""Get the default unit to use for the given combination of unit and axis."""
if iterable(x) and sized(x):
return getattr(x[0], "units", None)
return getattr(x, "units", None)
def setup_matplotlib_handlers(registry, enable):
"""Set up matplotlib's unit support to handle units from a registry.
Parameters
----------
registry : pint.UnitRegistry
The registry that will be used.
enable : bool
Whether support should be enabled or disabled.
Returns
-------
"""
if matplotlib.__version__ < "2.0":
raise RuntimeError("Matplotlib >= 2.0 required to work with pint.")
if enable:
matplotlib.units.registry[registry.Quantity] = PintConverter(registry)
else:
matplotlib.units.registry.pop(registry.Quantity, None)
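`PintConverter.convert` dispatches by duck typing: values carrying a `units` attribute are converted directly, iterables are converted element-wise, and bare numbers fall back to the axis units. A minimal stdlib-only sketch of that dispatch, using a hypothetical `FakeQuantity` stand-in (not pint's `Quantity`) with a hard-coded conversion table; the sketch simplifies the bare-number fallback by assuming such values are already in the target unit:

```python
class FakeQuantity:
    """Hypothetical stand-in for a pint Quantity: magnitude plus unit name."""

    def __init__(self, magnitude, units):
        self.magnitude, self.units = magnitude, units

    def to(self, unit):
        # Assume a flat conversion table for the sketch.
        factors = {("km", "m"): 1000.0, ("m", "m"): 1.0}
        return FakeQuantity(self.magnitude * factors[(self.units, unit)], unit)


def convert(value, unit):
    if hasattr(value, "units"):            # short circuit for quantities
        return value.to(unit).magnitude
    if isinstance(value, (list, tuple)):   # element-wise for iterables
        return [convert(v, unit) for v in value]
    return value                           # bare number: assume target units


print(convert(FakeQuantity(2, "km"), "m"))       # 2000.0
print(convert([FakeQuantity(1, "km"), 5], "m"))  # [1000.0, 5]
```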


@@ -0,0 +1,213 @@
#!/usr/bin/env python3
"""
pint-convert
~~~~~~~~~~~~
:copyright: 2020 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import argparse
import contextlib
import re
from datasette.vendored.pint import UnitRegistry
parser = argparse.ArgumentParser(description="Unit converter.", usage=argparse.SUPPRESS)
parser.add_argument(
"-s",
"--system",
metavar="sys",
default="SI",
help="unit system to convert to (default: SI)",
)
parser.add_argument(
"-p",
"--prec",
metavar="n",
type=int,
default=12,
help="number of maximum significant figures (default: 12)",
)
parser.add_argument(
"-u",
"--prec-unc",
metavar="n",
type=int,
default=2,
help="number of maximum uncertainty digits (default: 2)",
)
parser.add_argument(
"-U",
"--with-unc",
dest="unc",
action="store_true",
help="consider uncertainties in constants",
)
parser.add_argument(
"-C",
"--no-corr",
dest="corr",
action="store_false",
help="ignore correlations between constants",
)
parser.add_argument(
"fr", metavar="from", type=str, help="unit or quantity to convert from"
)
parser.add_argument("to", type=str, nargs="?", help="unit to convert to")
try:
args = parser.parse_args()
except SystemExit:
parser.print_help()
raise
ureg = UnitRegistry()
ureg.auto_reduce_dimensions = True
ureg.autoconvert_offset_to_baseunit = True
ureg.enable_contexts("Gau", "ESU", "sp", "energy", "boltzmann")
ureg.default_system = args.system
def _set(key: str, value):
obj = ureg._units[key].converter
object.__setattr__(obj, "scale", value)
if args.unc:
try:
import uncertainties
except ImportError:
raise Exception(
"Failed to import uncertainties library!\n Please install uncertainties package"
)
# Measured constants subject to correlation
# R_i: Rydberg constant
# g_e: Electron g factor
# m_u: Atomic mass constant
# m_e: Electron mass
# m_p: Proton mass
# m_n: Neutron mass
R_i = (ureg._units["R_inf"].converter.scale, 0.0000000000021e7)
g_e = (ureg._units["g_e"].converter.scale, 0.00000000000035)
m_u = (ureg._units["m_u"].converter.scale, 0.00000000050e-27)
m_e = (ureg._units["m_e"].converter.scale, 0.00000000028e-30)
m_p = (ureg._units["m_p"].converter.scale, 0.00000000051e-27)
m_n = (ureg._units["m_n"].converter.scale, 0.00000000095e-27)
if args.corr:
# Correlation matrix between measured constants (to be completed below)
# R_i g_e m_u m_e m_p m_n
corr = [
[1.0, -0.00206, 0.00369, 0.00436, 0.00194, 0.00233], # R_i
[-0.00206, 1.0, 0.99029, 0.99490, 0.97560, 0.52445], # g_e
[0.00369, 0.99029, 1.0, 0.99536, 0.98516, 0.52959], # m_u
[0.00436, 0.99490, 0.99536, 1.0, 0.98058, 0.52714], # m_e
[0.00194, 0.97560, 0.98516, 0.98058, 1.0, 0.51521], # m_p
[0.00233, 0.52445, 0.52959, 0.52714, 0.51521, 1.0],
] # m_n
try:
(R_i, g_e, m_u, m_e, m_p, m_n) = uncertainties.correlated_values_norm(
[R_i, g_e, m_u, m_e, m_p, m_n], corr
)
except AttributeError:
raise Exception(
"Correlation cannot be calculated!\n Please install numpy package"
)
else:
R_i = uncertainties.ufloat(*R_i)
g_e = uncertainties.ufloat(*g_e)
m_u = uncertainties.ufloat(*m_u)
m_e = uncertainties.ufloat(*m_e)
m_p = uncertainties.ufloat(*m_p)
m_n = uncertainties.ufloat(*m_n)
_set("R_inf", R_i)
_set("g_e", g_e)
_set("m_u", m_u)
_set("m_e", m_e)
_set("m_p", m_p)
_set("m_n", m_n)
# Measured constants with zero correlation
_set(
"gravitational_constant",
uncertainties.ufloat(
ureg._units["gravitational_constant"].converter.scale, 0.00015e-11
),
)
_set(
"d_220",
uncertainties.ufloat(ureg._units["d_220"].converter.scale, 0.000000032e-10),
)
_set(
"K_alpha_Cu_d_220",
uncertainties.ufloat(
ureg._units["K_alpha_Cu_d_220"].converter.scale, 0.00000022
),
)
_set(
"K_alpha_Mo_d_220",
uncertainties.ufloat(
ureg._units["K_alpha_Mo_d_220"].converter.scale, 0.00000019
),
)
_set(
"K_alpha_W_d_220",
uncertainties.ufloat(
ureg._units["K_alpha_W_d_220"].converter.scale, 0.000000098
),
)
ureg._root_units_cache = {}
ureg._build_cache()
def convert(u_from, u_to=None, unc=None, factor=None):
prec_unc = 0
q = ureg.Quantity(u_from)
fmt = f".{args.prec}g"
if unc:
q = q.plus_minus(unc)
if u_to:
nq = q.to(u_to)
else:
nq = q.to_base_units()
if factor:
q *= ureg.Quantity(factor)
nq *= ureg.Quantity(factor).to_base_units()
if args.unc:
prec_unc = use_unc(nq.magnitude, fmt, args.prec_unc)
if prec_unc > 0:
fmt = f".{prec_unc}uS"
else:
with contextlib.suppress(Exception):
nq = nq.magnitude.n * nq.units
fmt = "{:" + fmt + "} {:~P}"
print(("{:} = " + fmt).format(q, nq.magnitude, nq.units))
def use_unc(num, fmt, prec_unc):
unc = 0
with contextlib.suppress(Exception):
if isinstance(num, uncertainties.UFloat):
full = ("{:" + fmt + "}").format(num)
unc = re.search(r"\+/-[0.]*([\d.]*)", full).group(1)
unc = len(unc.replace(".", ""))
return max(0, min(prec_unc, unc))
def main():
convert(args.fr, args.to)
if __name__ == "__main__":
main()
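The `convert()` helper above renders magnitudes with an `f".{args.prec}g"` format string, which caps the number of significant figures in the output. A small stdlib-only illustration of that formatting rule (`fmt_magnitude` is a hypothetical helper, not part of pint-convert):

```python
def fmt_magnitude(value, prec=12):
    """Render a magnitude with at most ``prec`` significant figures,
    mirroring the f".{prec}g" format used by pint-convert."""
    return format(value, f".{prec}g")


print(fmt_magnitude(1 / 3, 5))        # 0.33333
print(fmt_magnitude(299792458.0, 4))  # 2.998e+08
```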


@@ -0,0 +1,568 @@
"""
pint.pint_eval
~~~~~~~~~~~~~~
An expression evaluator to be used as a safe replacement for builtin eval.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import operator
import token as tokenlib
import tokenize
from io import BytesIO
from tokenize import TokenInfo
from typing import Any
try:
from uncertainties import ufloat
HAS_UNCERTAINTIES = True
except ImportError:
HAS_UNCERTAINTIES = False
ufloat = None
from .errors import DefinitionSyntaxError
# For controlling order of operations
_OP_PRIORITY = {
"+/-": 4,
"**": 3,
"^": 3,
"unary": 2,
"*": 1,
"": 1, # operator for implicit ops
"//": 1,
"/": 1,
"%": 1,
"+": 0,
"-": 0,
}
def _ufloat(left, right):
if HAS_UNCERTAINTIES:
return ufloat(left, right)
raise TypeError("Could not import support for uncertainties")
def _power(left: Any, right: Any) -> Any:
from . import Quantity
from .compat import is_duck_array
if (
isinstance(left, Quantity)
and is_duck_array(left.magnitude)
and left.dtype.kind not in "cf"
and right < 0
):
left = left.astype(float)
return operator.pow(left, right)
# https://stackoverflow.com/a/1517965/1291237
class tokens_with_lookahead:
def __init__(self, iter):
self.iter = iter
self.buffer = []
def __iter__(self):
return self
def __next__(self):
if self.buffer:
return self.buffer.pop(0)
else:
return self.iter.__next__()
def lookahead(self, n):
"""Return an item n entries ahead in the iteration."""
while n >= len(self.buffer):
try:
self.buffer.append(self.iter.__next__())
except StopIteration:
return None
return self.buffer[n]
def _plain_tokenizer(input_string):
for tokinfo in tokenize.tokenize(BytesIO(input_string.encode("utf-8")).readline):
if tokinfo.type != tokenlib.ENCODING:
yield tokinfo
def uncertainty_tokenizer(input_string):
def _number_or_nan(token):
if token.type == tokenlib.NUMBER or (
token.type == tokenlib.NAME and token.string == "nan"
):
return True
return False
def _get_possible_e(toklist, e_index):
possible_e_token = toklist.lookahead(e_index)
if (
possible_e_token.string[0] == "e"
and len(possible_e_token.string) > 1
and possible_e_token.string[1].isdigit()
):
end = possible_e_token.end
possible_e = tokenize.TokenInfo(
type=tokenlib.STRING,
string=possible_e_token.string,
start=possible_e_token.start,
end=end,
line=possible_e_token.line,
)
elif (
possible_e_token.string[0] in ["e", "E"]
and toklist.lookahead(e_index + 1).string in ["+", "-"]
and toklist.lookahead(e_index + 2).type == tokenlib.NUMBER
):
# Special case: Python allows a leading zero for exponents (e.g., e042) but not for numbers
if (
toklist.lookahead(e_index + 2).string == "0"
and toklist.lookahead(e_index + 3).type == tokenlib.NUMBER
):
exp_number = toklist.lookahead(e_index + 3).string
end = toklist.lookahead(e_index + 3).end
else:
exp_number = toklist.lookahead(e_index + 2).string
end = toklist.lookahead(e_index + 2).end
possible_e = tokenize.TokenInfo(
type=tokenlib.STRING,
string=f"e{toklist.lookahead(e_index+1).string}{exp_number}",
start=possible_e_token.start,
end=end,
line=possible_e_token.line,
)
else:
possible_e = None
return possible_e
def _apply_e_notation(mantissa, exponent):
if mantissa.string == "nan":
return mantissa
if float(mantissa.string) == 0.0:
return mantissa
return tokenize.TokenInfo(
type=tokenlib.NUMBER,
string=f"{mantissa.string}{exponent.string}",
start=mantissa.start,
end=exponent.end,
line=exponent.line,
)
def _finalize_e(nominal_value, std_dev, toklist, possible_e):
nominal_value = _apply_e_notation(nominal_value, possible_e)
std_dev = _apply_e_notation(std_dev, possible_e)
next(toklist) # consume 'e' and positive exponent value
if possible_e.string[1] in ["+", "-"]:
next(toklist) # consume "+" or "-" in exponent
exp_number = next(toklist) # consume exponent value
if (
exp_number.string == "0"
and toklist.lookahead(0).type == tokenlib.NUMBER
):
exp_number = next(toklist)
assert exp_number.end == end
# We've already applied the number, we're just consuming all the tokens
return nominal_value, std_dev
# when tokenize encounters whitespace followed by an unknown character,
# (such as ±) it proceeds to mark every character of the whitespace as ERRORTOKEN,
# in addition to marking the unknown character as ERRORTOKEN. Rather than
# wading through all that vomit, just eliminate the problem
# in the input by rewriting ± as +/-.
input_string = input_string.replace("±", "+/-")
toklist = tokens_with_lookahead(_plain_tokenizer(input_string))
for tokinfo in toklist:
line = tokinfo.line
start = tokinfo.start
if (
tokinfo.string == "+"
and toklist.lookahead(0).string == "/"
and toklist.lookahead(1).string == "-"
):
plus_minus_op = tokenize.TokenInfo(
type=tokenlib.OP,
string="+/-",
start=start,
end=toklist.lookahead(1).end,
line=line,
)
for i in range(-1, 1):
next(toklist)
yield plus_minus_op
elif (
tokinfo.string == "("
and ((seen_minus := 1 if toklist.lookahead(0).string == "-" else 0) or True)
and _number_or_nan(toklist.lookahead(seen_minus))
and toklist.lookahead(seen_minus + 1).string == "+"
and toklist.lookahead(seen_minus + 2).string == "/"
and toklist.lookahead(seen_minus + 3).string == "-"
and _number_or_nan(toklist.lookahead(seen_minus + 4))
and toklist.lookahead(seen_minus + 5).string == ")"
):
# ( NUM_OR_NAN +/- NUM_OR_NAN ) POSSIBLE_E_NOTATION
possible_e = _get_possible_e(toklist, seen_minus + 6)
if possible_e:
end = possible_e.end
else:
end = toklist.lookahead(seen_minus + 5).end
if seen_minus:
minus_op = next(toklist)
yield minus_op
nominal_value = next(toklist)
tokinfo = next(toklist) # consume '+'
next(toklist) # consume '/'
plus_minus_op = tokenize.TokenInfo(
type=tokenlib.OP,
string="+/-",
start=tokinfo.start,
end=next(toklist).end, # consume '-'
line=line,
)
std_dev = next(toklist)
next(toklist) # consume final ')'
if possible_e:
nominal_value, std_dev = _finalize_e(
nominal_value, std_dev, toklist, possible_e
)
yield nominal_value
yield plus_minus_op
yield std_dev
elif (
tokinfo.type == tokenlib.NUMBER
and toklist.lookahead(0).string == "("
and toklist.lookahead(1).type == tokenlib.NUMBER
and toklist.lookahead(2).string == ")"
):
# NUM_OR_NAN ( NUM_OR_NAN ) POSSIBLE_E_NOTATION
possible_e = _get_possible_e(toklist, 3)
if possible_e:
end = possible_e.end
else:
end = toklist.lookahead(2).end
nominal_value = tokinfo
tokinfo = next(toklist) # consume '('
plus_minus_op = tokenize.TokenInfo(
type=tokenlib.OP,
string="+/-",
start=tokinfo.start,
end=tokinfo.end, # this is funky because there's no "+/-" in nominal(std_dev) notation
line=line,
)
std_dev = next(toklist)
if "." not in std_dev.string:
std_dev = tokenize.TokenInfo(
type=std_dev.type,
string="0." + std_dev.string,
start=std_dev.start,
end=std_dev.end,
line=line,
)
next(toklist) # consume final ')'
if possible_e:
nominal_value, std_dev = _finalize_e(
nominal_value, std_dev, toklist, possible_e
)
yield nominal_value
yield plus_minus_op
yield std_dev
else:
yield tokinfo
if HAS_UNCERTAINTIES:
tokenizer = uncertainty_tokenizer
else:
tokenizer = _plain_tokenizer
import typing
UnaryOpT = typing.Callable[
[
Any,
],
Any,
]
BinaryOpT = typing.Callable[[Any, Any], Any]
_UNARY_OPERATOR_MAP: dict[str, UnaryOpT] = {"+": lambda x: x, "-": lambda x: x * -1}
_BINARY_OPERATOR_MAP: dict[str, BinaryOpT] = {
"+/-": _ufloat,
"**": _power,
"*": operator.mul,
"": operator.mul, # operator for implicit ops
"/": operator.truediv,
"+": operator.add,
"-": operator.sub,
"%": operator.mod,
"//": operator.floordiv,
}
class EvalTreeNode:
"""Single node within an evaluation tree
left + operator + right --> binary op
left + operator --> unary op
left + right --> implicit op
left --> single value
"""
def __init__(
self,
left: EvalTreeNode | TokenInfo,
operator: TokenInfo | None = None,
right: EvalTreeNode | None = None,
):
self.left = left
self.operator = operator
self.right = right
def to_string(self) -> str:
# For debugging purposes
if self.right:
assert isinstance(self.left, EvalTreeNode), "self.left not EvalTreeNode (1)"
comps = [self.left.to_string()]
if self.operator:
comps.append(self.operator.string)
comps.append(self.right.to_string())
elif self.operator:
assert isinstance(self.left, EvalTreeNode), "self.left not EvalTreeNode (2)"
comps = [self.operator.string, self.left.to_string()]
else:
assert isinstance(self.left, TokenInfo), "self.left not TokenInfo (1)"
return self.left.string
return "(%s)" % " ".join(comps)
def evaluate(
self,
define_op: typing.Callable[
[
Any,
],
Any,
],
bin_op: dict[str, BinaryOpT] | None = None,
un_op: dict[str, UnaryOpT] | None = None,
):
"""Evaluate node.
Parameters
----------
define_op : callable
Translates tokens into objects.
bin_op : dict or None, optional
(Default value = _BINARY_OPERATOR_MAP)
un_op : dict or None, optional
(Default value = _UNARY_OPERATOR_MAP)
Returns
-------
Any
The evaluated result.
"""
bin_op = bin_op or _BINARY_OPERATOR_MAP
un_op = un_op or _UNARY_OPERATOR_MAP
if self.right:
assert isinstance(self.left, EvalTreeNode), "self.left not EvalTreeNode (3)"
# binary or implicit operator
op_text = self.operator.string if self.operator else ""
if op_text not in bin_op:
raise DefinitionSyntaxError(f"missing binary operator '{op_text}'")
return bin_op[op_text](
self.left.evaluate(define_op, bin_op, un_op),
self.right.evaluate(define_op, bin_op, un_op),
)
elif self.operator:
assert isinstance(self.left, EvalTreeNode), "self.left not EvalTreeNode (4)"
# unary operator
op_text = self.operator.string
if op_text not in un_op:
raise DefinitionSyntaxError(f"missing unary operator '{op_text}'")
return un_op[op_text](self.left.evaluate(define_op, bin_op, un_op))
# single value
return define_op(self.left)
from collections.abc import Iterable
def _build_eval_tree(
tokens: list[TokenInfo],
op_priority: dict[str, int],
index: int = 0,
depth: int = 0,
prev_op: str = "<none>",
) -> tuple[EvalTreeNode, int]:
"""Build an evaluation tree from a set of tokens.
Params:
Index, depth, and prev_op used recursively, so don't touch.
Tokens is an iterable of tokens from an expression to be evaluated.
Transform the tokens from an expression into a recursive parse tree, following order
of operations. Operations can include binary ops (3 + 4), implicit ops (3 kg), or
unary ops (-1).
General Strategy:
1) Get left side of operator
2) If no tokens left, return final result
3) Get operator
4) Use recursion to create tree starting at token on right side of operator (start at step #1)
4.1) If recursive call encounters an operator with lower or equal priority to step #2, exit recursion
5) Combine left side, operator, and right side into a new left side
6) Go back to step #2
Raises
------
DefinitionSyntaxError
If there is a syntax error.
"""
result = None
while True:
current_token = tokens[index]
token_type = current_token.type
token_text = current_token.string
if token_type == tokenlib.OP:
if token_text == ")":
if prev_op == "<none>":
raise DefinitionSyntaxError(
f"unopened parentheses in tokens: {current_token}"
)
elif prev_op == "(":
# close parenthetical group
assert result is not None
return result, index
else:
# parenthetical group ending, but we need to close sub-operations within group
assert result is not None
return result, index - 1
elif token_text == "(":
# gather parenthetical group
right, index = _build_eval_tree(
tokens, op_priority, index + 1, 0, token_text
)
if not tokens[index][1] == ")":
raise DefinitionSyntaxError("weird exit from parentheses")
if result:
# implicit op with a parenthetical group, i.e. "3 (kg ** 2)"
result = EvalTreeNode(left=result, right=right)
else:
# get first token
result = right
elif token_text in op_priority:
if result:
# equal-priority operators are grouped in a left-to-right order,
# unless they're exponentiation, in which case they're grouped
# right-to-left; this allows us to get the expected behavior for
# multiple exponents
# (2^3^4) --> (2^(3^4))
# (2 * 3 / 4) --> ((2 * 3) / 4)
if op_priority[token_text] <= op_priority.get(
prev_op, -1
) and token_text not in ("**", "^"):
# previous operator is higher priority, so end previous binary op
return result, index - 1
# get right side of binary op
right, index = _build_eval_tree(
tokens, op_priority, index + 1, depth + 1, token_text
)
result = EvalTreeNode(
left=result, operator=current_token, right=right
)
else:
# unary operator
right, index = _build_eval_tree(
tokens, op_priority, index + 1, depth + 1, "unary"
)
result = EvalTreeNode(left=right, operator=current_token)
elif token_type in (tokenlib.NUMBER, tokenlib.NAME):
if result:
# tokens with an implicit operation i.e. "1 kg"
if op_priority[""] <= op_priority.get(prev_op, -1):
# previous operator is higher priority than implicit, so end
# previous binary op
return result, index - 1
right, index = _build_eval_tree(
tokens, op_priority, index, depth + 1, ""
)
result = EvalTreeNode(left=result, right=right)
else:
# get first token
result = EvalTreeNode(left=current_token)
if tokens[index][0] == tokenlib.ENDMARKER:
if prev_op == "(":
raise DefinitionSyntaxError("unclosed parentheses in tokens")
if depth > 0 or prev_op:
# have to close recursion
assert result is not None
return result, index
else:
# recursion all closed, so just return the final result
assert result is not None
return result, -1
if index + 1 >= len(tokens):
# should hit ENDMARKER before this ever happens
raise DefinitionSyntaxError("unexpected end to tokens")
index += 1
def build_eval_tree(
tokens: Iterable[TokenInfo],
op_priority: dict[str, int] | None = None,
) -> EvalTreeNode:
"""Build an evaluation tree from a set of tokens.
Params:
Index, depth, and prev_op used recursively, so don't touch.
Tokens is an iterable of tokens from an expression to be evaluated.
Transform the tokens from an expression into a recursive parse tree, following order
of operations. Operations can include binary ops (3 + 4), implicit ops (3 kg), or
unary ops (-1).
General Strategy:
1) Get left side of operator
2) If no tokens left, return final result
3) Get operator
4) Use recursion to create tree starting at token on right side of operator (start at step #1)
4.1) If recursive call encounters an operator with lower or equal priority to step #2, exit recursion
5) Combine left side, operator, and right side into a new left side
6) Go back to step #2
Raises
------
DefinitionSyntaxError
If there is a syntax error.
"""
if op_priority is None:
op_priority = _OP_PRIORITY
if not isinstance(tokens, list):
# ensure tokens is list so we can access by index
tokens = list(tokens)
result, _ = _build_eval_tree(tokens, op_priority, 0, 0)
return result
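The `tokens_with_lookahead` helper above is generic over any iterator, not just token streams. A self-contained sketch of its buffering behaviour (the class is re-declared here so the example runs on its own):

```python
class tokens_with_lookahead:
    """Buffer an iterator so items can be inspected before consumption."""

    def __init__(self, iter):
        self.iter = iter
        self.buffer = []

    def __iter__(self):
        return self

    def __next__(self):
        if self.buffer:
            return self.buffer.pop(0)
        return next(self.iter)

    def lookahead(self, n):
        """Return the item n entries ahead, or None past the end."""
        while n >= len(self.buffer):
            try:
                self.buffer.append(next(self.iter))
            except StopIteration:
                return None
        return self.buffer[n]


toks = tokens_with_lookahead(iter("abc"))
assert toks.lookahead(1) == "b"   # peek without consuming
assert next(toks) == "a"          # consumption order is unchanged
assert toks.lookahead(5) is None  # past the end returns None
```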


@@ -0,0 +1,272 @@
"""
pint.registry
~~~~~~~~~~~~~
Defines the UnitRegistry, a class to contain units and their relations.
This registry contains all pint capabilities, but you can build your
customized registry by picking only the features that you actually
need.
:copyright: 2022 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
from typing import Generic
from . import facets, registry_helpers
from .compat import TypeAlias
from .util import logger, pi_theorem
# To build the Quantity and Unit classes
# we follow the UnitRegistry bases
# but
class Quantity(
facets.SystemRegistry.Quantity,
facets.ContextRegistry.Quantity,
facets.DaskRegistry.Quantity,
facets.NumpyRegistry.Quantity,
facets.MeasurementRegistry.Quantity,
facets.NonMultiplicativeRegistry.Quantity,
facets.PlainRegistry.Quantity,
):
pass
class Unit(
facets.SystemRegistry.Unit,
facets.ContextRegistry.Unit,
facets.DaskRegistry.Unit,
facets.NumpyRegistry.Unit,
facets.MeasurementRegistry.Unit,
facets.NonMultiplicativeRegistry.Unit,
facets.PlainRegistry.Unit,
):
pass
class GenericUnitRegistry(
Generic[facets.QuantityT, facets.UnitT],
facets.GenericSystemRegistry[facets.QuantityT, facets.UnitT],
facets.GenericContextRegistry[facets.QuantityT, facets.UnitT],
facets.GenericDaskRegistry[facets.QuantityT, facets.UnitT],
facets.GenericNumpyRegistry[facets.QuantityT, facets.UnitT],
facets.GenericMeasurementRegistry[facets.QuantityT, facets.UnitT],
facets.GenericNonMultiplicativeRegistry[facets.QuantityT, facets.UnitT],
facets.GenericPlainRegistry[facets.QuantityT, facets.UnitT],
):
pass
class UnitRegistry(GenericUnitRegistry[Quantity, Unit]):
"""The unit registry stores the definitions and relationships between units.
Parameters
----------
filename :
path of the units definition file to load or line-iterable object.
Empty string to load the default definition file. (default)
None to leave the UnitRegistry empty.
force_ndarray : bool
convert any input, scalar or not to a numpy.ndarray.
(Default: False)
force_ndarray_like : bool
convert all inputs other than duck arrays to a numpy.ndarray.
(Default: False)
default_as_delta :
In the context of a multiplication of units, interpret
non-multiplicative units as their *delta* counterparts.
(Default: False)
autoconvert_offset_to_baseunit :
If True, offset units in quantities are converted
to their plain units in multiplicative contexts.
If False, no conversion happens. (Default: False)
on_redefinition : str
action to take in case a unit is redefined.
'warn', 'raise', 'ignore' (Default: 'raise')
auto_reduce_dimensions :
If True, reduce dimensionality on appropriate operations.
(Default: False)
autoconvert_to_preferred :
If True, converts preferred units on appropriate operations.
(Default: False)
preprocessors :
list of callables which are iteratively run on any input expression
or unit string, or None for no preprocessor.
(Default=None)
fmt_locale :
locale identifier string, used in `format_babel` or None.
(Default=None)
case_sensitive : bool, optional
Control default case sensitivity of unit parsing. (Default: True)
cache_folder : str or pathlib.Path or None, optional
Specify the folder in which cache files are saved and loaded from.
If None, the cache is disabled. (default)
"""
Quantity: TypeAlias = Quantity
Unit: TypeAlias = Unit
def __init__(
self,
filename="",
force_ndarray: bool = False,
force_ndarray_like: bool = False,
default_as_delta: bool = True,
autoconvert_offset_to_baseunit: bool = False,
on_redefinition: str = "warn",
system=None,
auto_reduce_dimensions=False,
autoconvert_to_preferred=False,
preprocessors=None,
fmt_locale=None,
non_int_type=float,
case_sensitive: bool = True,
cache_folder=None,
):
super().__init__(
filename=filename,
force_ndarray=force_ndarray,
force_ndarray_like=force_ndarray_like,
on_redefinition=on_redefinition,
default_as_delta=default_as_delta,
autoconvert_offset_to_baseunit=autoconvert_offset_to_baseunit,
system=system,
auto_reduce_dimensions=auto_reduce_dimensions,
autoconvert_to_preferred=autoconvert_to_preferred,
preprocessors=preprocessors,
fmt_locale=fmt_locale,
non_int_type=non_int_type,
case_sensitive=case_sensitive,
cache_folder=cache_folder,
)
def pi_theorem(self, quantities):
"""Builds dimensionless quantities using the Buckingham π theorem
Parameters
----------
quantities : dict
mapping between variable name and units
Returns
-------
list
a list of dimensionless quantities expressed as dicts
"""
return pi_theorem(quantities, self)
def setup_matplotlib(self, enable: bool = True) -> None:
"""Set up handlers for matplotlib's unit support.
Parameters
----------
enable : bool
whether support should be enabled or disabled (Default value = True)
"""
# Delays importing matplotlib until it's actually requested
from .matplotlib import setup_matplotlib_handlers
setup_matplotlib_handlers(self, enable)
wraps = registry_helpers.wraps
check = registry_helpers.check
class LazyRegistry(Generic[facets.QuantityT, facets.UnitT]):
def __init__(self, args=None, kwargs=None):
self.__dict__["params"] = args or (), kwargs or {}
def __init(self):
args, kwargs = self.__dict__["params"]
kwargs["on_redefinition"] = "raise"
self.__class__ = UnitRegistry
self.__init__(*args, **kwargs)
self._after_init()
def __getattr__(self, item):
if item == "_on_redefinition":
return "raise"
self.__init()
return getattr(self, item)
def __setattr__(self, key, value):
if key == "__class__":
super().__setattr__(key, value)
else:
self.__init()
setattr(self, key, value)
def __getitem__(self, item):
self.__init()
return self[item]
def __call__(self, *args, **kwargs):
self.__init()
return self(*args, **kwargs)
class ApplicationRegistry:
"""A wrapper class used to distribute changes to the application registry."""
__slots__ = ["_registry"]
def __init__(self, registry):
self._registry = registry
def get(self):
"""Get the wrapped registry"""
return self._registry
def set(self, new_registry):
"""Set the new registry
Parameters
----------
new_registry : ApplicationRegistry or LazyRegistry or UnitRegistry
The new registry.
See Also
--------
set_application_registry
"""
if isinstance(new_registry, type(self)):
new_registry = new_registry.get()
if not isinstance(new_registry, (LazyRegistry, UnitRegistry)):
raise TypeError("Expected UnitRegistry; got %s" % type(new_registry))
logger.debug(
"Changing app registry from %r to %r.", self._registry, new_registry
)
self._registry = new_registry
def __getattr__(self, name):
return getattr(self._registry, name)
def __setattr__(self, name, value):
if name in self.__slots__:
super().__setattr__(name, value)
else:
setattr(self._registry, name, value)
def __dir__(self):
return dir(self._registry)
def __getitem__(self, item):
return self._registry[item]
def __call__(self, *args, **kwargs):
return self._registry(*args, **kwargs)
def __contains__(self, item):
return self._registry.__contains__(item)
def __iter__(self):
return iter(self._registry)
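`LazyRegistry` defers the expensive registry construction until first real use by swapping its own `__class__` to `UnitRegistry` and re-running `__init__`. A stdlib-only sketch of that class-swap pattern, using hypothetical `Heavy`/`LazyHeavy` stand-ins rather than pint's classes:

```python
class Heavy:
    """Stand-in for an expensive-to-build object like UnitRegistry."""

    def __init__(self, value=0):
        self.value = value
        self.initialized = True


class LazyHeavy:
    """Records constructor arguments; builds Heavy on first attribute access."""

    def __init__(self, *args, **kwargs):
        self.__dict__["params"] = (args, kwargs)

    def _materialize(self):
        args, kwargs = self.__dict__["params"]
        self.__class__ = Heavy          # swap to the real class in place
        self.__init__(*args, **kwargs)  # run the real initializer

    def __getattr__(self, item):
        # Only called for attributes not found normally, i.e. before init.
        self._materialize()
        return getattr(self, item)


obj = LazyHeavy(42)
assert type(obj) is LazyHeavy  # nothing built yet
assert obj.value == 42         # first attribute access triggers init
assert type(obj) is Heavy      # the instance is now the real class
```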


@@ -0,0 +1,386 @@
"""
pint.registry_helpers
~~~~~~~~~~~~~~~~~~~~~
Miscellaneous methods of the registry written as separate functions.
:copyright: 2016 by Pint Authors, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from __future__ import annotations
import functools
from collections.abc import Callable, Iterable
from inspect import Parameter, signature
from itertools import zip_longest
from typing import TYPE_CHECKING, Any, TypeVar
from ._typing import F
from .errors import DimensionalityError
from .util import UnitsContainer, to_units_container
if TYPE_CHECKING:
from ._typing import Quantity, Unit
from .registry import UnitRegistry
T = TypeVar("T")
def _replace_units(original_units, values_by_name):
"""Convert a unit compatible type to a UnitsContainer.
Parameters
----------
original_units :
a UnitsContainer instance.
values_by_name :
a map between original names and the new values.
Returns
-------
UnitsContainer
"""
q = 1
for arg_name, exponent in original_units.items():
q = q * values_by_name[arg_name] ** exponent
return getattr(q, "_units", UnitsContainer({}))
def _to_units_container(a, registry=None):
"""Convert a unit compatible type to a UnitsContainer,
checking if it is string field prefixed with an equal
(which is considered a reference)
Parameters
----------
a :
registry :
(Default value = None)
Returns
-------
UnitsContainer, bool
"""
if isinstance(a, str) and "=" in a:
return to_units_container(a.split("=", 1)[1]), True
return to_units_container(a, registry), False
def _parse_wrap_args(args, registry=None):
# Arguments which contain definitions
# (i.e. names that appear alone and for the first time)
defs_args = set()
defs_args_ndx = set()
# Arguments which depend on others
dependent_args_ndx = set()
# Arguments which have units.
unit_args_ndx = set()
# _to_units_container
args_as_uc = [_to_units_container(arg, registry) for arg in args]
# Check for references in args, remove None values
for ndx, (arg, is_ref) in enumerate(args_as_uc):
if arg is None:
continue
elif is_ref:
if len(arg) == 1:
[(key, value)] = arg.items()
if value == 1 and key not in defs_args:
# This is the first time that
# a variable is used => it is a definition.
defs_args.add(key)
defs_args_ndx.add(ndx)
args_as_uc[ndx] = (key, True)
else:
# The variable was already found elsewhere,
# we consider it a dependent variable.
dependent_args_ndx.add(ndx)
else:
dependent_args_ndx.add(ndx)
else:
unit_args_ndx.add(ndx)
# Check that all dependent variables are valid
for ndx in dependent_args_ndx:
arg, is_ref = args_as_uc[ndx]
if not isinstance(arg, dict):
continue
if not set(arg.keys()) <= defs_args:
raise ValueError(
"Found a missing token while wrapping a function: "
"Not all variable referenced in %s are defined using !" % args[ndx]
)
def _converter(ureg, sig, values, kw, strict):
len_initial_values = len(values)
# pack kwargs
for i, param_name in enumerate(sig.parameters):
if i >= len_initial_values:
values.append(kw[param_name])
values_by_name = {}
# first pass: Grab named values
for ndx in defs_args_ndx:
value = values[ndx]
values_by_name[args_as_uc[ndx][0]] = value
values[ndx] = getattr(value, "_magnitude", value)
# second pass: calculate derived values based on named values
for ndx in dependent_args_ndx:
value = values[ndx]
assert _replace_units(args_as_uc[ndx][0], values_by_name) is not None
values[ndx] = ureg._convert(
getattr(value, "_magnitude", value),
getattr(value, "_units", UnitsContainer({})),
_replace_units(args_as_uc[ndx][0], values_by_name),
)
# third pass: convert other arguments
for ndx in unit_args_ndx:
if isinstance(values[ndx], ureg.Quantity):
values[ndx] = ureg._convert(
values[ndx]._magnitude, values[ndx]._units, args_as_uc[ndx][0]
)
else:
if strict:
if isinstance(values[ndx], str):
# if the value is a string, we try to parse it
tmp_value = ureg.parse_expression(values[ndx])
values[ndx] = ureg._convert(
tmp_value._magnitude, tmp_value._units, args_as_uc[ndx][0]
)
else:
raise ValueError(
"A wrapped function using strict=True requires "
"quantity or a string for all arguments with not None units. "
"(error found for {}, {})".format(
args_as_uc[ndx][0], values[ndx]
)
)
# unpack kwargs
for i, param_name in enumerate(sig.parameters):
if i >= len_initial_values:
kw[param_name] = values[i]
return values[:len_initial_values], kw, values_by_name
return _converter
def _apply_defaults(sig, args, kwargs):
"""Apply default keyword arguments.
Named keywords may have been left blank. This function applies the default
values so that every argument is defined.
"""
for i, param in enumerate(sig.parameters.values()):
if (
i >= len(args)
and param.default != Parameter.empty
and param.name not in kwargs
):
kwargs[param.name] = param.default
return list(args), kwargs
def wraps(
ureg: UnitRegistry,
ret: str | Unit | Iterable[str | Unit | None] | None,
args: str | Unit | Iterable[str | Unit | None] | None,
strict: bool = True,
) -> Callable[[Callable[..., Any]], Callable[..., Quantity]]:
"""Wraps a function to become pint-aware.
Use it when a function requires a numerical value in some specific
units. The wrapper function will take a pint quantity, convert it to the units
specified in `args` and then call the wrapped function with the resulting
magnitude.
The value returned by the wrapped function will be converted to the units
specified in `ret`.
Parameters
----------
ureg : pint.UnitRegistry
a UnitRegistry instance.
ret : str, pint.Unit, or iterable of str or pint.Unit
Units of each of the return values. Use `None` to skip return value conversion.
args : str, pint.Unit, or iterable of str or pint.Unit
Units of each of the input arguments. Use `None` to skip argument conversion.
strict : bool
Indicates that only quantities are accepted. (Default value = True)
Returns
-------
callable
the wrapper function.
Raises
------
TypeError
if the number of given arguments does not match the number of function parameters.
if any of the provided arguments is not a Unit, a string, or a Quantity.
"""
if not isinstance(args, (list, tuple)):
args = (args,)
for arg in args:
if arg is not None and not isinstance(arg, (ureg.Unit, str)):
raise TypeError(
"wraps arguments must by of type str or Unit, not %s (%s)"
% (type(arg), arg)
)
converter = _parse_wrap_args(args)
is_ret_container = isinstance(ret, (list, tuple))
if is_ret_container:
for arg in ret:
if arg is not None and not isinstance(arg, (ureg.Unit, str)):
raise TypeError(
"wraps 'ret' argument must be of type str or Unit, not %s (%s)"
% (type(arg), arg)
)
ret = ret.__class__([_to_units_container(arg, ureg) for arg in ret])
else:
if ret is not None and not isinstance(ret, (ureg.Unit, str)):
raise TypeError(
"wraps 'ret' argument must be of type str or Unit, not %s (%s)"
% (type(ret), ret)
)
ret = _to_units_container(ret, ureg)
def decorator(func: Callable[..., Any]) -> Callable[..., Quantity]:
sig = signature(func)
count_params = len(sig.parameters)
if len(args) != count_params:
raise TypeError(
"%s takes %i parameters, but %i units were passed"
% (func.__name__, count_params, len(args))
)
assigned = tuple(
attr for attr in functools.WRAPPER_ASSIGNMENTS if hasattr(func, attr)
)
updated = tuple(
attr for attr in functools.WRAPPER_UPDATES if hasattr(func, attr)
)
@functools.wraps(func, assigned=assigned, updated=updated)
def wrapper(*values, **kw) -> Quantity:
values, kw = _apply_defaults(sig, values, kw)
# In principle, the values are used as-is;
# we then extract the magnitudes when needed.
new_values, new_kw, values_by_name = converter(
ureg, sig, values, kw, strict
)
result = func(*new_values, **new_kw)
if is_ret_container:
out_units = (
_replace_units(r, values_by_name) if is_ref else r
for (r, is_ref) in ret
)
return ret.__class__(
res if unit is None else ureg.Quantity(res, unit)
for unit, res in zip_longest(out_units, result)
)
if ret[0] is None:
return result
return ureg.Quantity(
result, _replace_units(ret[0], values_by_name) if ret[1] else ret[0]
)
return wrapper
return decorator
def check(
ureg: UnitRegistry, *args: str | UnitsContainer | Unit | None
) -> Callable[[F], F]:
"""Decorator to for quantity type checking for function inputs.
Use it to ensure that the decorated function input parameters match
the expected dimension of pint quantity.
The wrapper function raises:
- `pint.DimensionalityError` if an argument doesn't match the required dimensions.
Parameters
----------
ureg : UnitRegistry
a UnitRegistry instance.
args : str or UnitsContainer or None
Dimensions of each of the input arguments.
Use `None` to skip argument conversion.
Returns
-------
callable
the wrapped function.
Raises
------
TypeError
If the number of given dimensions does not match the number of function
parameters.
ValueError
If any of the provided dimensions cannot be parsed as a dimension.
"""
dimensions = [
ureg.get_dimensionality(dim) if dim is not None else None for dim in args
]
def decorator(func):
sig = signature(func)
count_params = len(sig.parameters)
if len(dimensions) != count_params:
raise TypeError(
"%s takes %i parameters, but %i dimensions were passed"
% (func.__name__, count_params, len(dimensions))
)
assigned = tuple(
attr for attr in functools.WRAPPER_ASSIGNMENTS if hasattr(func, attr)
)
updated = tuple(
attr for attr in functools.WRAPPER_UPDATES if hasattr(func, attr)
)
@functools.wraps(func, assigned=assigned, updated=updated)
def wrapper(*args, **kwargs):
list_args, kw = _apply_defaults(sig, args, kwargs)
for i, param_name in enumerate(sig.parameters):
if i >= len(args):
list_args.append(kw[param_name])
for dim, value in zip(dimensions, list_args):
if dim is None:
continue
if not ureg.Quantity(value).check(dim):
val_dim = ureg.get_dimensionality(value)
raise DimensionalityError(value, "a quantity of", val_dim, dim)
return func(*args, **kwargs)
return wrapper
return decorator


@ -0,0 +1,32 @@
from __future__ import annotations
import tokenize
from datasette.vendored.pint.pint_eval import _plain_tokenizer, uncertainty_tokenizer
tokenizer = _plain_tokenizer
input_lines = [
"( 8.0 + / - 4.0 ) e6 m",
"( 8.0 ± 4.0 ) e6 m",
"( 8.0 + / - 4.0 ) e-6 m",
"( nan + / - 0 ) e6 m",
"( nan ± 4.0 ) m",
"8.0 + / - 4.0 m",
"8.0 ± 4.0 m",
"8.0(4)m",
"8.0(.4)m",
"8.0(-4)m", # error!
"pint == wonderfulness ^ N + - + / - * ± m J s",
]
for line in input_lines:
result = []
g = list(uncertainty_tokenizer(line)) # tokenize the string
for toknum, tokval, _, _, _ in g:
result.append((toknum, tokval))
print("====")
print(f"input line: {line}")
print(result)
print(tokenize.untokenize(result))

File diff suppressed because it is too large


@ -0,0 +1,18 @@
# a few unit definitions added to use the translations by unicode cldr
dietary_calorie = 1000 * calorie = Cal = Calorie
metric_cup = liter / 4
square_meter = meter ** 2 = sq_m
square_kilometer = kilometer ** 2 = sq_km
mile_scandinavian = 10000 * meter
cubic_mile = 1 * mile ** 3 = cu_mile = cubic_miles
cubic_meter = 1 * meter ** 3 = cu_m
cubic_kilometer = 1 * kilometer ** 3 = cu_km
[consumption] = [volume] / [length]
liter_per_kilometer = liter / kilometer
liter_per_100kilometers = liter / (100 * kilometers)
[US_consumption] = [length] / [volume]
MPG = mile / gallon


@ -1,2 +1,2 @@
__version__ = "0.63.1"
__version__ = "0.65.1"
__version_info__ = tuple(__version__.split("."))


@ -8,7 +8,7 @@ import urllib
from markupsafe import escape
import pint
from datasette.vendored import pint
from datasette import __version__
from datasette.database import QueryInterrupted
@ -308,9 +308,11 @@ class DataView(BaseView):
if cell is None:
new_row.extend(("", ""))
else:
assert isinstance(cell, dict)
new_row.append(cell["value"])
new_row.append(cell["label"])
if not isinstance(cell, dict):
new_row.extend((cell, ""))
else:
new_row.append(cell["value"])
new_row.append(cell["label"])
else:
new_row.append(cell)
await writer.writerow(new_row)
@ -340,7 +342,7 @@ class DataView(BaseView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database = db.name
_format = request.url_vars["format"]


@ -37,7 +37,7 @@ class DatabaseView(DataView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database = db.name
visible, private = await self.ds.check_visibility(
@ -226,7 +226,7 @@ class QueryView(DataView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database = db.name
params = {key: request.args.get(key) for key in request.args}
if "sql" in params:
@ -431,9 +431,11 @@ class QueryView(DataView):
display_value = markupsafe.Markup(
'<a class="blob-download" href="{}"{}>&lt;Binary:&nbsp;{:,}&nbsp;byte{}&gt;</a>'.format(
blob_url,
' title="{}"'.format(formatted)
if "bytes" not in formatted
else "",
(
' title="{}"'.format(formatted)
if "bytes" not in formatted
else ""
),
len(value),
"" if len(value) == 1 else "s",
)


@ -105,9 +105,11 @@ class IndexView(BaseView):
{
"name": name,
"hash": db.hash,
"color": db.hash[:6]
if db.hash
else hashlib.md5(name.encode("utf8")).hexdigest()[:6],
"color": (
db.hash[:6]
if db.hash
else hashlib.md5(name.encode("utf8")).hexdigest()[:6]
),
"path": self.ds.urls.database(name),
"tables_and_views_truncated": tables_and_views_truncated,
"tables_and_views_more": (len(visible_tables) + len(views))


@ -19,7 +19,7 @@ class RowView(DataView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database = db.name
# Ensure user has permission to view this row
@ -38,14 +38,14 @@ class RowView(DataView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database = db.name
sql, params, pks = await _sql_params_pks(db, table, pk_values)
results = await db.execute(sql, params, truncate=True)
columns = [r[0] for r in results.description]
rows = list(results.rows)
if not rows:
raise NotFound(f"Record not found: {pk_values}")
raise NotFound(f"Record not found")
async def template_data():
display_columns, display_rows = await display_columns_and_rows(


@ -96,7 +96,7 @@ class TableView(DataView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database_name = db.name
table_name = tilde_decode(request.url_vars["table"])
# Handle POST to a canned query
@ -169,23 +169,17 @@ class TableView(DataView):
try:
db = self.ds.get_database(route=database_route)
except KeyError:
raise NotFound("Database not found: {}".format(database_route))
raise NotFound("Database not found")
database_name = db.name
# For performance profiling purposes, ?_noparallel=1 turns off asyncio.gather
async def _gather_parallel(*args):
return await asyncio.gather(*args)
async def _gather_sequential(*args):
# We always now run queries sequentially, rather than with asyncio.gather() -
# see https://github.com/simonw/datasette/issues/2189
async def gather(*args):
results = []
for fn in args:
results.append(await fn)
return results
gather = (
_gather_sequential if request.args.get("_noparallel") else _gather_parallel
)
# If this is a canned query, not a table, then dispatch to QueryView instead
canned_query = await self.ds.get_canned_query(
database_name, table_name, request.actor
@ -210,7 +204,7 @@ class TableView(DataView):
# If table or view not found, return 404
if not is_view and not table_exists:
raise NotFound(f"Table not found: {table_name}")
raise NotFound(f"Table not found")
# Ensure user has permission to view this table
visible, private = await self.ds.check_visibility(
@ -345,9 +339,11 @@ class TableView(DataView):
from_sql = "from {table_name} {where}".format(
table_name=escape_sqlite(table_name),
where=("where {} ".format(" and ".join(where_clauses)))
if where_clauses
else "",
where=(
("where {} ".format(" and ".join(where_clauses)))
if where_clauses
else ""
),
)
# Copy of params so we can mutate them later:
from_sql_params = dict(**params)
@ -412,10 +408,12 @@ class TableView(DataView):
column=escape_sqlite(sort or sort_desc),
op=">" if sort else "<",
p=len(params),
extra_desc_only=""
if sort
else " or {column2} is null".format(
column2=escape_sqlite(sort or sort_desc)
extra_desc_only=(
""
if sort
else " or {column2} is null".format(
column2=escape_sqlite(sort or sort_desc)
)
),
next_clauses=" and ".join(next_by_pk_clauses),
)
@ -778,9 +776,9 @@ class TableView(DataView):
"metadata": metadata,
"view_definition": await db.get_view_definition(table_name),
"table_definition": await db.get_table_definition(table_name),
"datasette_allow_facet": "true"
if self.ds.setting("allow_facet")
else "false",
"datasette_allow_facet": (
"true" if self.ds.setting("allow_facet") else "false"
),
}
d.update(extra_context_from_filters)
return d
@ -939,9 +937,11 @@ async def display_columns_and_rows(
path_from_row_pks(row, pks, not pks),
column,
),
' title="{}"'.format(formatted)
if "bytes" not in formatted
else "",
(
' title="{}"'.format(formatted)
if "bytes" not in formatted
else ""
),
len(value),
"" if len(value) == 1 else "s",
)
@ -992,9 +992,9 @@ async def display_columns_and_rows(
"column": column,
"value": display_value,
"raw": value,
"value_type": "none"
if value is None
else str(type(value).__name__),
"value_type": (
"none" if value is None else str(type(value).__name__)
),
}
)
cell_rows.append(Row(cells))


@ -307,7 +307,21 @@ To limit access to the ``add_name`` canned query in your ``dogs.db`` database to
Controlling the ability to execute arbitrary SQL
------------------------------------------------
The ``"allow_sql"`` block can be used to control who is allowed to execute arbitrary SQL queries, both using the form on the database page e.g. https://latest.datasette.io/fixtures or by appending a ``?_where=`` parameter to the table page as seen on https://latest.datasette.io/fixtures/facetable?_where=city_id=1.
Datasette defaults to allowing any site visitor to execute their own custom SQL queries, for example using the form on `the database page <https://latest.datasette.io/fixtures>`__ or by appending a ``?_where=`` parameter to the table page `like this <https://latest.datasette.io/fixtures/facetable?_where=_city_id=1>`__.
Access to this ability is controlled by the :ref:`permissions_execute_sql` permission.
The easiest way to disable arbitrary SQL queries is using the :ref:`default_allow_sql setting <setting_default_allow_sql>` when you first start Datasette running.
You can alternatively use an ``"allow_sql"`` block to control who is allowed to execute arbitrary SQL queries.
To prevent any user from executing arbitrary SQL queries, use this:
.. code-block:: json
{
"allow_sql": false
}
To enable just the :ref:`root user<authentication_root>` to execute SQL for all databases in your instance, use the following:
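The block that follows in the full file is cut off by the hunk boundary; it matches the pattern of the earlier ``allow_sql`` example. A sketch of the likely shape, assuming Datasette's standard ``allow`` block syntax where ``"id": "root"`` targets the built-in root actor:

.. code-block:: json

    {
        "allow_sql": {
            "id": "root"
        }
    }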
@ -515,7 +529,7 @@ Actor is allowed to run arbitrary SQL queries against a specific database, e.g.
``resource`` - string
The name of the database
Default *allow*.
Default *allow*. See also :ref:`the default_allow_sql setting <setting_default_allow_sql>`.
.. _permissions_permissions_debug:


@ -4,6 +4,106 @@
Changelog
=========
.. _v0_65_1:
0.65.1 (2024-11-28)
-------------------
- Fixed bug with upgraded HTTPX 0.28.0 dependency. (:issue:`2443`)
.. _v0_65:
0.65 (2024-10-07)
-----------------
- Upgrade for compatibility with Python 3.13 (by vendoring Pint dependency). (:issue:`2434`)
- Dropped support for Python 3.8.
.. _v0_64_8:
0.64.8 (2024-06-21)
-------------------
- Security improvement: 404 pages used to reflect content from the URL path, which could be used to display misleading information to Datasette users. 404 errors no longer display additional information from the URL. (:issue:`2359`)
- Backported a better fix for correctly extracting named parameters from canned query SQL against SQLite 3.46.0. (:issue:`2353`)
.. _v0_64_7:
0.64.7 (2024-06-12)
-------------------
- Fixed a bug where canned queries with named parameters threw an error when run against SQLite 3.46.0. (:issue:`2353`)
.. _v0_64_6:
0.64.6 (2023-12-22)
-------------------
- Fixed a bug where CSV export with expanded labels could fail if a foreign key reference did not correctly resolve. (:issue:`2214`)
.. _v0_64_5:
0.64.5 (2023-10-08)
-------------------
- Dropped dependency on ``click-default-group-wheel``, which could cause a dependency conflict. (:issue:`2197`)
.. _v0_64_4:
0.64.4 (2023-09-21)
-------------------
- Fix for a crashing bug caused by viewing the table page for a named in-memory database. (:issue:`2189`)
.. _v0_64_3:
0.64.3 (2023-04-27)
-------------------
- Added ``pip`` and ``setuptools`` as explicit dependencies. This fixes a bug where Datasette could not be installed using `Rye <https://github.com/mitsuhiko/rye>`__. (:issue:`2065`)
.. _v0_64_2:
0.64.2 (2023-03-08)
-------------------
- Fixed a bug with ``datasette publish cloudrun`` where deploys all used the same Docker image tag. This was mostly inconsequential as the service is deployed as soon as the image has been pushed to the registry, but could result in the incorrect image being deployed if two different deploys for two separate services ran at exactly the same time. (:issue:`2036`)
.. _v0_64_1:
0.64.1 (2023-01-11)
-------------------
- Documentation now links to a current source of information for installing Python 3. (:issue:`1987`)
- Incorrectly calling the Datasette constructor using ``Datasette("path/to/data.db")`` instead of ``Datasette(["path/to/data.db"])`` now returns a useful error message. (:issue:`1985`)
.. _v0_64:
0.64 (2023-01-09)
-----------------
- Datasette now **strongly recommends against allowing arbitrary SQL queries if you are using SpatiaLite**. SpatiaLite includes SQL functions that could cause the Datasette server to crash. See :ref:`spatialite` for more details.
- New :ref:`setting_default_allow_sql` setting, providing an easier way to disable all arbitrary SQL execution by end users: ``datasette --setting default_allow_sql off``. See also :ref:`authentication_permissions_execute_sql`. (:issue:`1409`)
- `Building a location to time zone API with SpatiaLite <https://datasette.io/tutorials/spatialite>`__ is a new Datasette tutorial showing how to safely use SpatiaLite to create a location to time zone API.
- New documentation about :ref:`how to debug problems loading SQLite extensions <installation_extensions>`. The error message shown when an extension cannot be loaded has also been improved. (:issue:`1979`)
- Fixed an accessibility issue: the ``<select>`` elements in the table filter form now show an outline when they are currently focused. (:issue:`1771`)
.. _v0_63_3:
0.63.3 (2022-12-17)
-------------------
- Fixed a bug where ``datasette --root``, when running in Docker, would only output the URL to sign in as root when the server shut down, not when it started up. (:issue:`1958`)
- You no longer need to ensure ``await datasette.invoke_startup()`` has been called in order for Datasette to start correctly serving requests - this is now handled automatically the first time the server receives a request. This fixes a bug experienced when Datasette is served directly by an ASGI application server such as Uvicorn or Gunicorn. It also fixes a bug with the `datasette-gunicorn <https://datasette.io/plugins/datasette-gunicorn>`__ plugin. (:issue:`1955`)
.. _v0_63_2:
0.63.2 (2022-11-18)
-------------------
- Fixed a bug in ``datasette publish heroku`` where deployments failed due to an older version of Python being requested. (:issue:`1905`)
- New ``datasette publish heroku --generate-dir <dir>`` option for generating a Heroku deployment directory without deploying it.
.. _v0_63_1:
0.63.1 (2022-11-10)


@ -224,6 +224,8 @@ These can be passed to ``datasette serve`` using ``datasette serve --setting nam
(default=50)
allow_facet Allow users to specify columns to facet using
?_facet= parameter (default=True)
default_allow_sql Allow anyone to run arbitrary SQL queries
(default=True)
allow_download Allow users to download the original SQLite
database files (default=True)
suggest_facets Calculate and display suggested facets
@ -501,6 +503,8 @@ See :ref:`publish_heroku`.
-n, --name TEXT Application name to use when deploying
--tar TEXT --tar option to pass to Heroku, e.g.
--tar=/usr/local/bin/gtar
--generate-dir DIRECTORY Output generated application files and stop
without deploying
--help Show this message and exit.


@ -1 +1,6 @@
alls
fo
ro
te
ths
notin


@ -34,7 +34,7 @@
extensions = ["sphinx.ext.extlinks", "sphinx.ext.autodoc", "sphinx_copybutton"]
extlinks = {
"issue": ("https://github.com/simonw/datasette/issues/%s", "#"),
"issue": ("https://github.com/simonw/datasette/issues/%s", "#%s"),
}
# Add any paths that contain templates here, relative to this directory.


@ -57,7 +57,7 @@ If the latest packaged release of Datasette has not yet been made available thro
Using pip
---------
Datasette requires Python 3.7 or higher. Visit `InstallPython3.com <https://installpython3.com/>`__ for step-by-step installation guides for your operating system.
Datasette requires Python 3.7 or higher. The `Python.org Python For Beginners <https://www.python.org/about/gettingstarted/>`__ page has instructions for getting started.
You can install Datasette and its dependencies using ``pip``::
@ -230,3 +230,60 @@ Some plugins such as `datasette-ripgrep <https://datasette.io/plugins/datasette-
pip install datasette-ripgrep'
docker commit $(docker ps -lq) datasette-with-ripgrep
.. _installation_extensions:
A note about extensions
=======================
SQLite supports extensions, such as :ref:`spatialite` for geospatial operations.
These can be loaded using the ``--load-extension`` argument, like so::
datasette --load-extension=/usr/local/lib/mod_spatialite.dylib
Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension::
Your Python installation does not have the ability to load SQLite extensions.
In some cases you may see the following error message instead::
AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension'
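A quick way to check which situation you are in is to test for the method directly, using only the standard library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# CPython builds compiled without extension support omit this method.
supported = hasattr(conn, "enable_load_extension")
print("SQLite extension loading available:", supported)
conn.close()
```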
On macOS the easiest fix for this is to install Datasette using Homebrew::
brew install datasette
Use ``which datasette`` to confirm that ``datasette`` will run that version. The output should look something like this::
/usr/local/opt/datasette/bin/datasette
If you get a different location here such as ``/Library/Frameworks/Python.framework/Versions/3.10/bin/datasette`` you can run the following command to cause ``datasette`` to execute the Homebrew version instead::
alias datasette=$(echo $(brew --prefix datasette)/bin/datasette)
You can undo this operation using::
unalias datasette
If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew::
brew install python
Then executing Python using::
/usr/local/opt/python@3/libexec/bin/python
A more convenient way to work with this version of Python may be to use it to create a virtual environment::
/usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv
Then activate it like this::
source datasette-venv/bin/activate
Now running ``python`` and ``pip`` will work against a version of Python 3 that includes support for SQLite extensions::
pip install datasette
which datasette
datasette --version

Some files were not shown because too many files have changed in this diff.