Todo: integrate multi-platform support; fix the resource contention caused by SaiNiu thread preemption; commit local test-environment packaging plus the release packaging script and the release-environment packaging .bat; commit the Python32 environment package; improve the handling of multiple generated log files and refine packaging-log details
117  Utils/PythonNew32/Lib/site-packages/pip/_vendor/__init__.py  Normal file
@@ -0,0 +1,117 @@
"""
pip._vendor is for vendoring dependencies of pip to prevent needing pip to
depend on something external.

Files inside of pip._vendor should be considered immutable and should only be
updated to versions from upstream.
"""
from __future__ import absolute_import

import glob
import os.path
import sys

# Downstream redistributors which have debundled our dependencies should also
# patch this value to be true. This will trigger the additional patching
# to cause things like "six" to be available as pip.
DEBUNDLED = False

# By default, look in this directory for a bunch of .whl files which we will
# add to the beginning of sys.path before attempting to import anything. This
# is done to support downstream re-distributors like Debian and Fedora who
# wish to create their own Wheels for our dependencies to aid in debundling.
WHEEL_DIR = os.path.abspath(os.path.dirname(__file__))


# Define a small helper function to alias our vendored modules to the real ones
# if the vendored ones do not exist. The idea for this was taken from
# https://github.com/kennethreitz/requests/pull/2567.
def vendored(modulename):
    vendored_name = "{0}.{1}".format(__name__, modulename)

    try:
        __import__(modulename, globals(), locals(), level=0)
    except ImportError:
        # We can just silently allow import failures to pass here. If we
        # got to this point it means that ``import pip._vendor.whatever``
        # failed and so did ``import whatever``. Since we're importing this
        # upfront in an attempt to alias imports, not erroring here will
        # just mean we get a regular import error whenever pip *actually*
        # tries to import one of these modules to use it, which actually
        # gives us a better error message than we would have otherwise
        # gotten.
        pass
    else:
        sys.modules[vendored_name] = sys.modules[modulename]
        base, head = vendored_name.rsplit(".", 1)
        setattr(sys.modules[base], head, sys.modules[modulename])


# If we're operating in a debundled setup, then we want to go ahead and trigger
# the aliasing of our vendored libraries as well as looking for wheels to add
# to our sys.path. This will cause all of this code to be a no-op typically
# however downstream redistributors can enable it in a consistent way across
# all platforms.
if DEBUNDLED:
    # Actually look inside of WHEEL_DIR to find .whl files and add them to the
    # front of our sys.path.
    sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path

    # Actually alias all of our vendored dependencies.
    vendored("cachecontrol")
    vendored("certifi")
    vendored("dependency_groups")
    vendored("distlib")
    vendored("distro")
    vendored("packaging")
    vendored("packaging.version")
    vendored("packaging.specifiers")
    vendored("pkg_resources")
    vendored("platformdirs")
    vendored("progress")
    vendored("pyproject_hooks")
    vendored("requests")
    vendored("requests.exceptions")
    vendored("requests.packages")
    vendored("requests.packages.urllib3")
    vendored("requests.packages.urllib3._collections")
    vendored("requests.packages.urllib3.connection")
    vendored("requests.packages.urllib3.connectionpool")
    vendored("requests.packages.urllib3.contrib")
    vendored("requests.packages.urllib3.contrib.ntlmpool")
    vendored("requests.packages.urllib3.contrib.pyopenssl")
    vendored("requests.packages.urllib3.exceptions")
    vendored("requests.packages.urllib3.fields")
    vendored("requests.packages.urllib3.filepost")
    vendored("requests.packages.urllib3.packages")
    vendored("requests.packages.urllib3.packages.ordered_dict")
    vendored("requests.packages.urllib3.packages.six")
    vendored("requests.packages.urllib3.packages.ssl_match_hostname")
    vendored("requests.packages.urllib3.packages.ssl_match_hostname."
             "_implementation")
    vendored("requests.packages.urllib3.poolmanager")
    vendored("requests.packages.urllib3.request")
    vendored("requests.packages.urllib3.response")
    vendored("requests.packages.urllib3.util")
    vendored("requests.packages.urllib3.util.connection")
    vendored("requests.packages.urllib3.util.request")
    vendored("requests.packages.urllib3.util.response")
    vendored("requests.packages.urllib3.util.retry")
    vendored("requests.packages.urllib3.util.ssl_")
    vendored("requests.packages.urllib3.util.timeout")
    vendored("requests.packages.urllib3.util.url")
    vendored("resolvelib")
    vendored("rich")
    vendored("rich.console")
    vendored("rich.highlighter")
    vendored("rich.logging")
    vendored("rich.markup")
    vendored("rich.progress")
    vendored("rich.segment")
    vendored("rich.style")
    vendored("rich.text")
    vendored("rich.traceback")
    if sys.version_info < (3, 11):
        vendored("tomli")
    vendored("truststore")
    vendored("urllib3")
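Reviewer note: the aliasing trick above can be hard to follow from the diff alone. A minimal standalone sketch of the same idea, using a made-up "myapp._vendor" namespace and the stdlib json module purely for illustration (neither is something pip actually aliases):

import sys

def alias(namespace: str, modulename: str) -> None:
    # Import the real module, then register it under the vendored dotted
    # name so both import paths resolve to the same module object.
    __import__(modulename, globals(), locals(), level=0)
    sys.modules[f"{namespace}.{modulename}"] = sys.modules[modulename]

alias("myapp._vendor", "json")

import json
assert sys.modules["myapp._vendor.json"] is json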
@@ -0,0 +1,29 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

"""CacheControl import Interface.

Make it easy to import from cachecontrol without long namespaces.
"""

__author__ = "Eric Larson"
__email__ = "eric@ionrock.org"
__version__ = "0.14.3"

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.controller import CacheController
from pip._vendor.cachecontrol.wrapper import CacheControl

__all__ = [
    "__author__",
    "__email__",
    "__version__",
    "CacheControlAdapter",
    "CacheController",
    "CacheControl",
]

import logging

logging.getLogger(__name__).addHandler(logging.NullHandler())
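Reviewer note: a minimal usage sketch of the interface re-exported above. It assumes the upstream cachecontrol and requests packages are installed; the vendored copies under pip._vendor are pip internals and shouldn't be imported by application code.

import requests
from cachecontrol import CacheControl

sess = CacheControl(requests.Session())

# The first GET goes to the network; a repeat of the same cacheable GET may
# be served from the default in-memory DictCache, per the response headers.
resp = sess.get("https://example.com/")
resp = sess.get("https://example.com/")
print(resp.from_cache)  # attribute set by CacheControlAdapter.build_response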
@@ -0,0 +1,70 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import logging
from argparse import ArgumentParser
from typing import TYPE_CHECKING

from pip._vendor import requests

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.cache import DictCache
from pip._vendor.cachecontrol.controller import logger

if TYPE_CHECKING:
    from argparse import Namespace

    from pip._vendor.cachecontrol.controller import CacheController


def setup_logging() -> None:
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    logger.addHandler(handler)


def get_session() -> requests.Session:
    adapter = CacheControlAdapter(
        DictCache(), cache_etags=True, serializer=None, heuristic=None
    )
    sess = requests.Session()
    sess.mount("http://", adapter)
    sess.mount("https://", adapter)

    sess.cache_controller = adapter.controller  # type: ignore[attr-defined]
    return sess


def get_args() -> Namespace:
    parser = ArgumentParser()
    parser.add_argument("url", help="The URL to try and cache")
    return parser.parse_args()


def main() -> None:
    args = get_args()
    sess = get_session()

    # Make a request to get a response
    resp = sess.get(args.url)

    # Turn on logging
    setup_logging()

    # try setting the cache
    cache_controller: CacheController = (
        sess.cache_controller  # type: ignore[attr-defined]
    )
    cache_controller.cache_response(resp.request, resp.raw)

    # Now try to get it
    if cache_controller.cached_request(resp.request):
        print("Cached!")
    else:
        print("Not cached :(")


if __name__ == "__main__":
    main()
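Reviewer note: _cmd.py is a small smoke-test CLI. Against the upstream package it should be runnable as a module (the vendored module path is a pip internal), along the lines of:

python -m cachecontrol._cmd https://example.com/

It fetches the URL once, stores the response through CacheController.cache_response(), and prints "Cached!" or "Not cached :(" depending on whether cached_request() can serve it back.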
@@ -0,0 +1,168 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import functools
import types
import weakref
import zlib
from typing import TYPE_CHECKING, Any, Collection, Mapping

from pip._vendor.requests.adapters import HTTPAdapter

from pip._vendor.cachecontrol.cache import DictCache
from pip._vendor.cachecontrol.controller import PERMANENT_REDIRECT_STATUSES, CacheController
from pip._vendor.cachecontrol.filewrapper import CallbackFileWrapper

if TYPE_CHECKING:
    from pip._vendor.requests import PreparedRequest, Response
    from pip._vendor.urllib3 import HTTPResponse

    from pip._vendor.cachecontrol.cache import BaseCache
    from pip._vendor.cachecontrol.heuristics import BaseHeuristic
    from pip._vendor.cachecontrol.serialize import Serializer


class CacheControlAdapter(HTTPAdapter):
    invalidating_methods = {"PUT", "PATCH", "DELETE"}

    def __init__(
        self,
        cache: BaseCache | None = None,
        cache_etags: bool = True,
        controller_class: type[CacheController] | None = None,
        serializer: Serializer | None = None,
        heuristic: BaseHeuristic | None = None,
        cacheable_methods: Collection[str] | None = None,
        *args: Any,
        **kw: Any,
    ) -> None:
        super().__init__(*args, **kw)
        self.cache = DictCache() if cache is None else cache
        self.heuristic = heuristic
        self.cacheable_methods = cacheable_methods or ("GET",)

        controller_factory = controller_class or CacheController
        self.controller = controller_factory(
            self.cache, cache_etags=cache_etags, serializer=serializer
        )

    def send(
        self,
        request: PreparedRequest,
        stream: bool = False,
        timeout: None | float | tuple[float, float] | tuple[float, None] = None,
        verify: bool | str = True,
        cert: (None | bytes | str | tuple[bytes | str, bytes | str]) = None,
        proxies: Mapping[str, str] | None = None,
        cacheable_methods: Collection[str] | None = None,
    ) -> Response:
        """
        Send a request. Use the request information to see if it
        exists in the cache and cache the response if we need to and can.
        """
        cacheable = cacheable_methods or self.cacheable_methods
        if request.method in cacheable:
            try:
                cached_response = self.controller.cached_request(request)
            except zlib.error:
                cached_response = None
            if cached_response:
                return self.build_response(request, cached_response, from_cache=True)

            # check for etags and add headers if appropriate
            request.headers.update(self.controller.conditional_headers(request))

        resp = super().send(request, stream, timeout, verify, cert, proxies)

        return resp

    def build_response(  # type: ignore[override]
        self,
        request: PreparedRequest,
        response: HTTPResponse,
        from_cache: bool = False,
        cacheable_methods: Collection[str] | None = None,
    ) -> Response:
        """
        Build a response by making a request or using the cache.

        This will end up calling send and returning a potentially
        cached response
        """
        cacheable = cacheable_methods or self.cacheable_methods
        if not from_cache and request.method in cacheable:
            # Check for any heuristics that might update headers
            # before trying to cache.
            if self.heuristic:
                response = self.heuristic.apply(response)

            # apply any expiration heuristics
            if response.status == 304:
                # We must have sent an ETag request. This could mean
                # that we've been expired already or that we simply
                # have an etag. In either case, we want to try and
                # update the cache if that is the case.
                cached_response = self.controller.update_cached_response(
                    request, response
                )

                if cached_response is not response:
                    from_cache = True

                # We are done with the server response, read a
                # possible response body (compliant servers will
                # not return one, but we cannot be 100% sure) and
                # release the connection back to the pool.
                response.read(decode_content=False)
                response.release_conn()

                response = cached_response

            # We always cache the 301 responses
            elif int(response.status) in PERMANENT_REDIRECT_STATUSES:
                self.controller.cache_response(request, response)
            else:
                # Wrap the response file with a wrapper that will cache the
                # response when the stream has been consumed.
                response._fp = CallbackFileWrapper(  # type: ignore[assignment]
                    response._fp,  # type: ignore[arg-type]
                    functools.partial(
                        self.controller.cache_response, request, weakref.ref(response)
                    ),
                )
                if response.chunked:
                    super_update_chunk_length = response.__class__._update_chunk_length

                    def _update_chunk_length(
                        weak_self: weakref.ReferenceType[HTTPResponse],
                    ) -> None:
                        self = weak_self()
                        if self is None:
                            return

                        super_update_chunk_length(self)
                        if self.chunk_left == 0:
                            self._fp._close()  # type: ignore[union-attr]

                    response._update_chunk_length = functools.partial(  # type: ignore[method-assign]
                        _update_chunk_length, weakref.ref(response)
                    )

        resp: Response = super().build_response(request, response)

        # See if we should invalidate the cache.
        if request.method in self.invalidating_methods and resp.ok:
            assert request.url is not None
            cache_url = self.controller.cache_url(request.url)
            self.cache.delete(cache_url)

        # Give the request a from_cache attr to let people use it
        resp.from_cache = from_cache  # type: ignore[attr-defined]

        return resp

    def close(self) -> None:
        self.cache.close()
        super().close()  # type: ignore[no-untyped-call]
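Reviewer note: the adapter can also be mounted selectively rather than through the CacheControl() wrapper, so only some hosts are cached. A sketch assuming the upstream packages and a hypothetical API host:

import requests
from cachecontrol import CacheControlAdapter
from cachecontrol.heuristics import OneDayCache

sess = requests.Session()
# Only traffic under this URL prefix goes through the caching adapter.
sess.mount("https://api.example.com/", CacheControlAdapter(heuristic=OneDayCache()))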
@@ -0,0 +1,75 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

"""
The cache object API for implementing caches. The default is a thread
safe in-memory dictionary.
"""

from __future__ import annotations

from threading import Lock
from typing import IO, TYPE_CHECKING, MutableMapping

if TYPE_CHECKING:
    from datetime import datetime


class BaseCache:
    def get(self, key: str) -> bytes | None:
        raise NotImplementedError()

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        raise NotImplementedError()

    def delete(self, key: str) -> None:
        raise NotImplementedError()

    def close(self) -> None:
        pass


class DictCache(BaseCache):
    def __init__(self, init_dict: MutableMapping[str, bytes] | None = None) -> None:
        self.lock = Lock()
        self.data = init_dict or {}

    def get(self, key: str) -> bytes | None:
        return self.data.get(key, None)

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        with self.lock:
            self.data.update({key: value})

    def delete(self, key: str) -> None:
        with self.lock:
            if key in self.data:
                self.data.pop(key)


class SeparateBodyBaseCache(BaseCache):
    """
    In this variant, the body is not stored mixed in with the metadata, but is
    passed in (as a bytes-like object) in a separate call to ``set_body()``.

    That is, the expected interaction pattern is::

        cache.set(key, serialized_metadata)
        cache.set_body(key)

    Similarly, the body should be loaded separately via ``get_body()``.
    """

    def set_body(self, key: str, body: bytes) -> None:
        raise NotImplementedError()

    def get_body(self, key: str) -> IO[bytes] | None:
        """
        Return the body as file-like object.
        """
        raise NotImplementedError()
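Reviewer note: BaseCache is the extension point for alternative backends. A sketch of a size-bounded LRU variant built on the same interface (illustrative only, not part of this commit; imports the upstream path of the class defined above):

from __future__ import annotations

from collections import OrderedDict
from threading import Lock

from cachecontrol.cache import BaseCache  # upstream path of the class above

class LRUCache(BaseCache):
    def __init__(self, maxsize: int = 128) -> None:
        self.lock = Lock()
        self.maxsize = maxsize
        self.data: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str) -> bytes | None:
        with self.lock:
            if key not in self.data:
                return None
            self.data.move_to_end(key)  # mark as most recently used
            return self.data[key]

    def set(self, key: str, value: bytes, expires=None) -> None:
        with self.lock:
            self.data[key] = value
            self.data.move_to_end(key)
            while len(self.data) > self.maxsize:
                self.data.popitem(last=False)  # evict least recently used

    def delete(self, key: str) -> None:
        with self.lock:
            self.data.pop(key, None)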
@@ -0,0 +1,8 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

from pip._vendor.cachecontrol.caches.file_cache import FileCache, SeparateBodyFileCache
from pip._vendor.cachecontrol.caches.redis_cache import RedisCache

__all__ = ["FileCache", "SeparateBodyFileCache", "RedisCache"]
@@ -0,0 +1,145 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import hashlib
import os
import tempfile
from textwrap import dedent
from typing import IO, TYPE_CHECKING
from pathlib import Path

from pip._vendor.cachecontrol.cache import BaseCache, SeparateBodyBaseCache
from pip._vendor.cachecontrol.controller import CacheController

if TYPE_CHECKING:
    from datetime import datetime

    from filelock import BaseFileLock


class _FileCacheMixin:
    """Shared implementation for both FileCache variants."""

    def __init__(
        self,
        directory: str | Path,
        forever: bool = False,
        filemode: int = 0o0600,
        dirmode: int = 0o0700,
        lock_class: type[BaseFileLock] | None = None,
    ) -> None:
        try:
            if lock_class is None:
                from filelock import FileLock

                lock_class = FileLock
        except ImportError:
            notice = dedent(
                """
            NOTE: In order to use the FileCache you must have
            filelock installed. You can install it via pip:
              pip install cachecontrol[filecache]
            """
            )
            raise ImportError(notice)

        self.directory = directory
        self.forever = forever
        self.filemode = filemode
        self.dirmode = dirmode
        self.lock_class = lock_class

    @staticmethod
    def encode(x: str) -> str:
        return hashlib.sha224(x.encode()).hexdigest()

    def _fn(self, name: str) -> str:
        # NOTE: This method should not change as some may depend on it.
        #       See: https://github.com/ionrock/cachecontrol/issues/63
        hashed = self.encode(name)
        parts = list(hashed[:5]) + [hashed]
        return os.path.join(self.directory, *parts)

    def get(self, key: str) -> bytes | None:
        name = self._fn(key)
        try:
            with open(name, "rb") as fh:
                return fh.read()

        except FileNotFoundError:
            return None

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        name = self._fn(key)
        self._write(name, value)

    def _write(self, path: str, data: bytes) -> None:
        """
        Safely write the data to the given path.
        """
        # Make sure the directory exists
        dirname = os.path.dirname(path)
        os.makedirs(dirname, self.dirmode, exist_ok=True)

        with self.lock_class(path + ".lock"):
            # Write our actual file
            (fd, name) = tempfile.mkstemp(dir=dirname)
            try:
                os.write(fd, data)
            finally:
                os.close(fd)
                os.chmod(name, self.filemode)
                os.replace(name, path)

    def _delete(self, key: str, suffix: str) -> None:
        name = self._fn(key) + suffix
        if not self.forever:
            try:
                os.remove(name)
            except FileNotFoundError:
                pass


class FileCache(_FileCacheMixin, BaseCache):
    """
    Traditional FileCache: body is stored in memory, so not suitable for large
    downloads.
    """

    def delete(self, key: str) -> None:
        self._delete(key, "")


class SeparateBodyFileCache(_FileCacheMixin, SeparateBodyBaseCache):
    """
    Memory-efficient FileCache: body is stored in a separate file, reducing
    peak memory usage.
    """

    def get_body(self, key: str) -> IO[bytes] | None:
        name = self._fn(key) + ".body"
        try:
            return open(name, "rb")
        except FileNotFoundError:
            return None

    def set_body(self, key: str, body: bytes) -> None:
        name = self._fn(key) + ".body"
        self._write(name, body)

    def delete(self, key: str) -> None:
        self._delete(key, "")
        self._delete(key, ".body")


def url_to_file_path(url: str, filecache: FileCache) -> str:
    """Return the file cache path based on the URL.

    This does not ensure the file exists!
    """
    key = CacheController.cache_url(url)
    return filecache._fn(key)
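Reviewer note: typical wiring for the on-disk cache above, assuming the upstream packages and the optional filelock extra (pip install cachecontrol[filecache]):

import requests
from cachecontrol import CacheControl
from cachecontrol.caches.file_cache import FileCache

# Responses are persisted under ./.web_cache, keyed by the sha224 of the URL.
sess = CacheControl(requests.Session(), cache=FileCache(".web_cache"))
resp = sess.get("https://example.com/")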
@@ -0,0 +1,48 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations


from datetime import datetime, timezone
from typing import TYPE_CHECKING

from pip._vendor.cachecontrol.cache import BaseCache

if TYPE_CHECKING:
    from redis import Redis


class RedisCache(BaseCache):
    def __init__(self, conn: Redis[bytes]) -> None:
        self.conn = conn

    def get(self, key: str) -> bytes | None:
        return self.conn.get(key)

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        if not expires:
            self.conn.set(key, value)
        elif isinstance(expires, datetime):
            now_utc = datetime.now(timezone.utc)
            if expires.tzinfo is None:
                now_utc = now_utc.replace(tzinfo=None)
            delta = expires - now_utc
            self.conn.setex(key, int(delta.total_seconds()), value)
        else:
            self.conn.setex(key, expires, value)

    def delete(self, key: str) -> None:
        self.conn.delete(key)

    def clear(self) -> None:
        """Helper for clearing all the keys in a database. Use with
        caution!"""
        for key in self.conn.keys():
            self.conn.delete(key)

    def close(self) -> None:
        """Redis uses connection pooling, no need to close the connection."""
        pass
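Reviewer note: wiring for the Redis backend above, assuming the upstream packages plus redis-py and a Redis server on localhost (both assumptions, not part of this commit):

import redis
import requests
from cachecontrol import CacheControl
from cachecontrol.caches.redis_cache import RedisCache

conn = redis.Redis(host="localhost", port=6379)
sess = CacheControl(requests.Session(), cache=RedisCache(conn))

Entries with an int/datetime expiry are written with SETEX, so Redis itself evicts them; close() is a deliberate no-op because redis-py pools connections.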
@@ -0,0 +1,511 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

"""
The httplib2 algorithms ported for use with requests.
"""

from __future__ import annotations

import calendar
import logging
import re
import time
import weakref
from email.utils import parsedate_tz
from typing import TYPE_CHECKING, Collection, Mapping

from pip._vendor.requests.structures import CaseInsensitiveDict

from pip._vendor.cachecontrol.cache import DictCache, SeparateBodyBaseCache
from pip._vendor.cachecontrol.serialize import Serializer

if TYPE_CHECKING:
    from typing import Literal

    from pip._vendor.requests import PreparedRequest
    from pip._vendor.urllib3 import HTTPResponse

    from pip._vendor.cachecontrol.cache import BaseCache

logger = logging.getLogger(__name__)

URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")

PERMANENT_REDIRECT_STATUSES = (301, 308)


def parse_uri(uri: str) -> tuple[str, str, str, str, str]:
    """Parses a URI using the regex given in Appendix B of RFC 3986.

        (scheme, authority, path, query, fragment) = parse_uri(uri)
    """
    match = URI.match(uri)
    assert match is not None
    groups = match.groups()
    return (groups[1], groups[3], groups[4], groups[6], groups[8])


class CacheController:
    """An interface to see if a request should be cached or not."""

    def __init__(
        self,
        cache: BaseCache | None = None,
        cache_etags: bool = True,
        serializer: Serializer | None = None,
        status_codes: Collection[int] | None = None,
    ):
        self.cache = DictCache() if cache is None else cache
        self.cache_etags = cache_etags
        self.serializer = serializer or Serializer()
        self.cacheable_status_codes = status_codes or (200, 203, 300, 301, 308)

    @classmethod
    def _urlnorm(cls, uri: str) -> str:
        """Normalize the URL to create a safe key for the cache"""
        (scheme, authority, path, query, fragment) = parse_uri(uri)
        if not scheme or not authority:
            raise Exception("Only absolute URIs are allowed. uri = %s" % uri)

        scheme = scheme.lower()
        authority = authority.lower()

        if not path:
            path = "/"

        # Could do syntax based normalization of the URI before
        # computing the digest. See Section 6.2.2 of Std 66.
        request_uri = query and "?".join([path, query]) or path
        defrag_uri = scheme + "://" + authority + request_uri

        return defrag_uri

    @classmethod
    def cache_url(cls, uri: str) -> str:
        return cls._urlnorm(uri)

    def parse_cache_control(self, headers: Mapping[str, str]) -> dict[str, int | None]:
        known_directives = {
            # https://tools.ietf.org/html/rfc7234#section-5.2
            "max-age": (int, True),
            "max-stale": (int, False),
            "min-fresh": (int, True),
            "no-cache": (None, False),
            "no-store": (None, False),
            "no-transform": (None, False),
            "only-if-cached": (None, False),
            "must-revalidate": (None, False),
            "public": (None, False),
            "private": (None, False),
            "proxy-revalidate": (None, False),
            "s-maxage": (int, True),
        }

        cc_headers = headers.get("cache-control", headers.get("Cache-Control", ""))

        retval: dict[str, int | None] = {}

        for cc_directive in cc_headers.split(","):
            if not cc_directive.strip():
                continue

            parts = cc_directive.split("=", 1)
            directive = parts[0].strip()

            try:
                typ, required = known_directives[directive]
            except KeyError:
                logger.debug("Ignoring unknown cache-control directive: %s", directive)
                continue

            if not typ or not required:
                retval[directive] = None
            if typ:
                try:
                    retval[directive] = typ(parts[1].strip())
                except IndexError:
                    if required:
                        logger.debug(
                            "Missing value for cache-control directive: %s",
                            directive,
                        )
                except ValueError:
                    logger.debug(
                        "Invalid value for cache-control directive %s, must be %s",
                        directive,
                        typ.__name__,
                    )

        return retval

    def _load_from_cache(self, request: PreparedRequest) -> HTTPResponse | None:
        """
        Load a cached response, or return None if it's not available.
        """
        # We do not support caching of partial content: so if the request contains a
        # Range header then we don't want to load anything from the cache.
        if "Range" in request.headers:
            return None

        cache_url = request.url
        assert cache_url is not None
        cache_data = self.cache.get(cache_url)
        if cache_data is None:
            logger.debug("No cache entry available")
            return None

        if isinstance(self.cache, SeparateBodyBaseCache):
            body_file = self.cache.get_body(cache_url)
        else:
            body_file = None

        result = self.serializer.loads(request, cache_data, body_file)
        if result is None:
            logger.warning("Cache entry deserialization failed, entry ignored")
        return result

    def cached_request(self, request: PreparedRequest) -> HTTPResponse | Literal[False]:
        """
        Return a cached response if it exists in the cache, otherwise
        return False.
        """
        assert request.url is not None
        cache_url = self.cache_url(request.url)
        logger.debug('Looking up "%s" in the cache', cache_url)
        cc = self.parse_cache_control(request.headers)

        # Bail out if the request insists on fresh data
        if "no-cache" in cc:
            logger.debug('Request header has "no-cache", cache bypassed')
            return False

        if "max-age" in cc and cc["max-age"] == 0:
            logger.debug('Request header has "max_age" as 0, cache bypassed')
            return False

        # Check whether we can load the response from the cache:
        resp = self._load_from_cache(request)
        if not resp:
            return False

        # If we have a cached permanent redirect, return it immediately. We
        # don't need to test our response for other headers b/c it is
        # intrinsically "cacheable" as it is Permanent.
        #
        # See:
        #   https://tools.ietf.org/html/rfc7231#section-6.4.2
        #
        # Client can try to refresh the value by repeating the request
        # with cache busting headers as usual (ie no-cache).
        if int(resp.status) in PERMANENT_REDIRECT_STATUSES:
            msg = (
                "Returning cached permanent redirect response "
                "(ignoring date and etag information)"
            )
            logger.debug(msg)
            return resp

        headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(resp.headers)
        if not headers or "date" not in headers:
            if "etag" not in headers:
                # Without date or etag, the cached response can never be used
                # and should be deleted.
                logger.debug("Purging cached response: no date or etag")
                self.cache.delete(cache_url)
            logger.debug("Ignoring cached response: no date")
            return False

        now = time.time()
        time_tuple = parsedate_tz(headers["date"])
        assert time_tuple is not None
        date = calendar.timegm(time_tuple[:6])
        current_age = max(0, now - date)
        logger.debug("Current age based on date: %i", current_age)

        # TODO: There is an assumption that the result will be a
        #       urllib3 response object. This may not be best since we
        #       could probably avoid instantiating or constructing the
        #       response until we know we need it.
        resp_cc = self.parse_cache_control(headers)

        # determine freshness
        freshness_lifetime = 0

        # Check the max-age pragma in the cache control header
        max_age = resp_cc.get("max-age")
        if max_age is not None:
            freshness_lifetime = max_age
            logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime)

        # If there isn't a max-age, check for an expires header
        elif "expires" in headers:
            expires = parsedate_tz(headers["expires"])
            if expires is not None:
                expire_time = calendar.timegm(expires[:6]) - date
                freshness_lifetime = max(0, expire_time)
                logger.debug("Freshness lifetime from expires: %i", freshness_lifetime)

        # Determine if we are setting freshness limit in the
        # request. Note, this overrides what was in the response.
        max_age = cc.get("max-age")
        if max_age is not None:
            freshness_lifetime = max_age
            logger.debug(
                "Freshness lifetime from request max-age: %i", freshness_lifetime
            )

        min_fresh = cc.get("min-fresh")
        if min_fresh is not None:
            # adjust our current age by our min fresh
            current_age += min_fresh
            logger.debug("Adjusted current age from min-fresh: %i", current_age)

        # Return entry if it is fresh enough
        if freshness_lifetime > current_age:
            logger.debug('The response is "fresh", returning cached response')
            logger.debug("%i > %i", freshness_lifetime, current_age)
            return resp

        # we're not fresh. If we don't have an Etag, clear it out
        if "etag" not in headers:
            logger.debug('The cached response is "stale" with no etag, purging')
            self.cache.delete(cache_url)

        # return the original handler
        return False

    def conditional_headers(self, request: PreparedRequest) -> dict[str, str]:
        resp = self._load_from_cache(request)
        new_headers = {}

        if resp:
            headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(resp.headers)

            if "etag" in headers:
                new_headers["If-None-Match"] = headers["ETag"]

            if "last-modified" in headers:
                new_headers["If-Modified-Since"] = headers["Last-Modified"]

        return new_headers

    def _cache_set(
        self,
        cache_url: str,
        request: PreparedRequest,
        response: HTTPResponse,
        body: bytes | None = None,
        expires_time: int | None = None,
    ) -> None:
        """
        Store the data in the cache.
        """
        if isinstance(self.cache, SeparateBodyBaseCache):
            # We pass in the body separately; just put a placeholder empty
            # string in the metadata.
            self.cache.set(
                cache_url,
                self.serializer.dumps(request, response, b""),
                expires=expires_time,
            )
            # body is None can happen when, for example, we're only updating
            # headers, as is the case in update_cached_response().
            if body is not None:
                self.cache.set_body(cache_url, body)
        else:
            self.cache.set(
                cache_url,
                self.serializer.dumps(request, response, body),
                expires=expires_time,
            )

    def cache_response(
        self,
        request: PreparedRequest,
        response_or_ref: HTTPResponse | weakref.ReferenceType[HTTPResponse],
        body: bytes | None = None,
        status_codes: Collection[int] | None = None,
    ) -> None:
        """
        Algorithm for caching requests.

        This assumes a requests Response object.
        """
        if isinstance(response_or_ref, weakref.ReferenceType):
            response = response_or_ref()
            if response is None:
                # The weakref can be None only in case the user used streamed request
                # and did not consume or close it, and holds no reference to requests.Response.
                # In such case, we don't want to cache the response.
                return
        else:
            response = response_or_ref

        # From httplib2: Don't cache 206's since we aren't going to
        #                handle byte range requests
        cacheable_status_codes = status_codes or self.cacheable_status_codes
        if response.status not in cacheable_status_codes:
            logger.debug(
                "Status code %s not in %s", response.status, cacheable_status_codes
            )
            return

        response_headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(
            response.headers
        )

        if "date" in response_headers:
            time_tuple = parsedate_tz(response_headers["date"])
            assert time_tuple is not None
            date = calendar.timegm(time_tuple[:6])
        else:
            date = 0

        # If we've been given a body, our response has a Content-Length, that
        # Content-Length is valid then we can check to see if the body we've
        # been given matches the expected size, and if it doesn't we'll just
        # skip trying to cache it.
        if (
            body is not None
            and "content-length" in response_headers
            and response_headers["content-length"].isdigit()
            and int(response_headers["content-length"]) != len(body)
        ):
            return

        cc_req = self.parse_cache_control(request.headers)
        cc = self.parse_cache_control(response_headers)

        assert request.url is not None
        cache_url = self.cache_url(request.url)
        logger.debug('Updating cache with response from "%s"', cache_url)

        # Delete it from the cache if we happen to have it stored there
        no_store = False
        if "no-store" in cc:
            no_store = True
            logger.debug('Response header has "no-store"')
        if "no-store" in cc_req:
            no_store = True
            logger.debug('Request header has "no-store"')
        if no_store and self.cache.get(cache_url):
            logger.debug('Purging existing cache entry to honor "no-store"')
            self.cache.delete(cache_url)
        if no_store:
            return

        # https://tools.ietf.org/html/rfc7234#section-4.1:
        # A Vary header field-value of "*" always fails to match.
        # Storing such a response leads to a deserialization warning
        # during cache lookup and is not allowed to ever be served,
        # so storing it can be avoided.
        if "*" in response_headers.get("vary", ""):
            logger.debug('Response header has "Vary: *"')
            return

        # If we've been given an etag, then keep the response
        if self.cache_etags and "etag" in response_headers:
            expires_time = 0
            if response_headers.get("expires"):
                expires = parsedate_tz(response_headers["expires"])
                if expires is not None:
                    expires_time = calendar.timegm(expires[:6]) - date

            expires_time = max(expires_time, 14 * 86400)

            logger.debug(f"etag object cached for {expires_time} seconds")
            logger.debug("Caching due to etag")
            self._cache_set(cache_url, request, response, body, expires_time)

        # Add to the cache any permanent redirects. We do this before looking
        # at the Date headers.
        elif int(response.status) in PERMANENT_REDIRECT_STATUSES:
            logger.debug("Caching permanent redirect")
            self._cache_set(cache_url, request, response, b"")

        # Add to the cache if the response headers demand it. If there
        # is no date header then we can't do anything about expiring
        # the cache.
        elif "date" in response_headers:
            time_tuple = parsedate_tz(response_headers["date"])
            assert time_tuple is not None
            date = calendar.timegm(time_tuple[:6])
            # cache when there is a max-age > 0
            max_age = cc.get("max-age")
            if max_age is not None and max_age > 0:
                logger.debug("Caching b/c date exists and max-age > 0")
                expires_time = max_age
                self._cache_set(
                    cache_url,
                    request,
                    response,
                    body,
                    expires_time,
                )

            # If the request can expire, it means we should cache it
            # in the meantime.
            elif "expires" in response_headers:
                if response_headers["expires"]:
                    expires = parsedate_tz(response_headers["expires"])
                    if expires is not None:
                        expires_time = calendar.timegm(expires[:6]) - date
                    else:
                        expires_time = None

                    logger.debug(
                        "Caching b/c of expires header. expires in {} seconds".format(
                            expires_time
                        )
                    )
                    self._cache_set(
                        cache_url,
                        request,
                        response,
                        body,
                        expires_time,
                    )

    def update_cached_response(
        self, request: PreparedRequest, response: HTTPResponse
    ) -> HTTPResponse:
        """On a 304 we will get a new set of headers that we want to
        update our cached value with, assuming we have one.

        This should only ever be called when we've sent an ETag and
        gotten a 304 as the response.
        """
        assert request.url is not None
        cache_url = self.cache_url(request.url)
        cached_response = self._load_from_cache(request)

        if not cached_response:
            # we didn't have a cached response
            return response

        # Let's update our headers with the headers from the new request:
        # http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1
        #
        # The server isn't supposed to send headers that would make
        # the cached body invalid. But... just in case, we'll be sure
        # to strip out ones we know that might be problematic due to
        # typical assumptions.
        excluded_headers = ["content-length"]

        cached_response.headers.update(
            {
                k: v
                for k, v in response.headers.items()
                if k.lower() not in excluded_headers
            }
        )

        # we want a 200 b/c we have content via the cache
        cached_response.status = 200

        # update our cache
        self._cache_set(cache_url, request, cached_response)

        return cached_response
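Reviewer note: the core freshness test in cached_request() reduces to freshness_lifetime > current_age, with current_age derived from the Date header and freshness_lifetime from max-age (or Expires minus Date). A toy recomputation of that arithmetic, outside the class, with a made-up header set:

import calendar
import time
from email.utils import parsedate_tz

headers = {"date": "Mon, 01 Jan 2024 00:00:00 GMT", "cache-control": "max-age=3600"}

time_tuple = parsedate_tz(headers["date"])
date = calendar.timegm(time_tuple[:6])
current_age = max(0, time.time() - date)  # seconds since the origin response
freshness_lifetime = 3600                 # value parsed from max-age
print("fresh" if freshness_lifetime > current_age else "stale")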
@@ -0,0 +1,119 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import mmap
from tempfile import NamedTemporaryFile
from typing import TYPE_CHECKING, Any, Callable

if TYPE_CHECKING:
    from http.client import HTTPResponse


class CallbackFileWrapper:
    """
    Small wrapper around a fp object which will tee everything read into a
    buffer, and when that file is closed it will execute a callback with the
    contents of that buffer.

    All attributes are proxied to the underlying file object.

    This class uses members with a double underscore (__) leading prefix so as
    not to accidentally shadow an attribute.

    The data is stored in a temporary file until it is all available. As long
    as the temporary files directory is disk-based (sometimes it's a
    memory-backed ``tmpfs`` on Linux), data will be unloaded to disk if memory
    pressure is high. For small files the disk usually won't be used at all,
    it'll all be in the filesystem memory cache, so there should be no
    performance impact.
    """

    def __init__(
        self, fp: HTTPResponse, callback: Callable[[bytes], None] | None
    ) -> None:
        self.__buf = NamedTemporaryFile("rb+", delete=True)
        self.__fp = fp
        self.__callback = callback

    def __getattr__(self, name: str) -> Any:
        # The vagaries of garbage collection mean that self.__fp is
        # not always set. Using __getattribute__ and the private
        # name[0] allows looking up the attribute value and raising an
        # AttributeError when it doesn't exist. This stops things from
        # infinitely recursing calls to getattr in the case where
        # self.__fp hasn't been set.
        #
        # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers
        fp = self.__getattribute__("_CallbackFileWrapper__fp")
        return getattr(fp, name)

    def __is_fp_closed(self) -> bool:
        try:
            return self.__fp.fp is None

        except AttributeError:
            pass

        try:
            closed: bool = self.__fp.closed
            return closed

        except AttributeError:
            pass

        # We just don't cache it then.
        # TODO: Add some logging here...
        return False

    def _close(self) -> None:
        if self.__callback:
            if self.__buf.tell() == 0:
                # Empty file:
                result = b""
            else:
                # Return the data without actually loading it into memory,
                # relying on Python's buffer API and mmap(). mmap() just gives
                # a view directly into the filesystem's memory cache, so it
                # doesn't result in duplicate memory use.
                self.__buf.seek(0, 0)
                result = memoryview(
                    mmap.mmap(self.__buf.fileno(), 0, access=mmap.ACCESS_READ)
                )
            self.__callback(result)

        # We assign this to None here, because otherwise we can get into
        # really tricky problems where the CPython interpreter dead locks
        # because the callback is holding a reference to something which
        # has a __del__ method. Setting this to None breaks the cycle
        # and allows the garbage collector to do its thing normally.
        self.__callback = None

        # Closing the temporary file releases memory and frees disk space.
        # Important when caching big files.
        self.__buf.close()

    def read(self, amt: int | None = None) -> bytes:
        data: bytes = self.__fp.read(amt)
        if data:
            # We may be dealing with b'', a sign that things are over:
            # it's passed e.g. after we've already closed self.__buf.
            self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data

    def _safe_read(self, amt: int) -> bytes:
        data: bytes = self.__fp._safe_read(amt)  # type: ignore[attr-defined]
        if amt == 2 and data == b"\r\n":
            # urllib executes this read to toss the CRLF at the end
            # of the chunk.
            return data

        self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data
@@ -0,0 +1,157 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import calendar
import time
from datetime import datetime, timedelta, timezone
from email.utils import formatdate, parsedate, parsedate_tz
from typing import TYPE_CHECKING, Any, Mapping

if TYPE_CHECKING:
    from pip._vendor.urllib3 import HTTPResponse

TIME_FMT = "%a, %d %b %Y %H:%M:%S GMT"


def expire_after(delta: timedelta, date: datetime | None = None) -> datetime:
    date = date or datetime.now(timezone.utc)
    return date + delta


def datetime_to_header(dt: datetime) -> str:
    return formatdate(calendar.timegm(dt.timetuple()))


class BaseHeuristic:
    def warning(self, response: HTTPResponse) -> str | None:
        """
        Return a valid 1xx warning header value describing the cache
        adjustments.

        The response is provided to allow warnings like 113
        http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need
        to explicitly say response is over 24 hours old.
        """
        return '110 - "Response is Stale"'

    def update_headers(self, response: HTTPResponse) -> dict[str, str]:
        """Update the response headers with any new headers.

        NOTE: This SHOULD always include some Warning header to
              signify that the response was cached by the client, not
              by way of the provided headers.
        """
        return {}

    def apply(self, response: HTTPResponse) -> HTTPResponse:
        updated_headers = self.update_headers(response)

        if updated_headers:
            response.headers.update(updated_headers)
            warning_header_value = self.warning(response)
            if warning_header_value is not None:
                response.headers.update({"Warning": warning_header_value})

        return response


class OneDayCache(BaseHeuristic):
    """
    Cache the response by providing an expires 1 day in the
    future.
    """

    def update_headers(self, response: HTTPResponse) -> dict[str, str]:
        headers = {}

        if "expires" not in response.headers:
            date = parsedate(response.headers["date"])
            expires = expire_after(
                timedelta(days=1),
                date=datetime(*date[:6], tzinfo=timezone.utc),  # type: ignore[index,misc]
            )
            headers["expires"] = datetime_to_header(expires)
            headers["cache-control"] = "public"
        return headers


class ExpiresAfter(BaseHeuristic):
    """
    Cache **all** requests for a defined time period.
    """

    def __init__(self, **kw: Any) -> None:
        self.delta = timedelta(**kw)

    def update_headers(self, response: HTTPResponse) -> dict[str, str]:
        expires = expire_after(self.delta)
        return {"expires": datetime_to_header(expires), "cache-control": "public"}

    def warning(self, response: HTTPResponse) -> str | None:
        tmpl = "110 - Automatically cached for %s. Response might be stale"
        return tmpl % self.delta


class LastModified(BaseHeuristic):
    """
    If there is no Expires header already, fall back on Last-Modified
    using the heuristic from
    http://tools.ietf.org/html/rfc7234#section-4.2.2
    to calculate a reasonable value.

    Firefox also does something like this per
    https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ
    http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397
    Unlike mozilla we limit this to 24-hr.
    """

    cacheable_by_default_statuses = {
        200,
        203,
        204,
        206,
        300,
        301,
        404,
        405,
        410,
        414,
        501,
    }

    def update_headers(self, resp: HTTPResponse) -> dict[str, str]:
        headers: Mapping[str, str] = resp.headers

        if "expires" in headers:
            return {}

        if "cache-control" in headers and headers["cache-control"] != "public":
            return {}

        if resp.status not in self.cacheable_by_default_statuses:
            return {}

        if "date" not in headers or "last-modified" not in headers:
            return {}

        time_tuple = parsedate_tz(headers["date"])
        assert time_tuple is not None
        date = calendar.timegm(time_tuple[:6])
        last_modified = parsedate(headers["last-modified"])
        if last_modified is None:
            return {}

        now = time.time()
        current_age = max(0, now - date)
        delta = date - calendar.timegm(last_modified)
        freshness_lifetime = max(0, min(delta / 10, 24 * 3600))
        if freshness_lifetime <= current_age:
            return {}

        expires = date + freshness_lifetime
        return {"expires": time.strftime(TIME_FMT, time.gmtime(expires))}

    def warning(self, resp: HTTPResponse) -> str | None:
        return None
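Reviewer note: heuristics plug into the adapter and rewrite response headers before caching. ExpiresAfter above force-caches everything for a fixed window; a usage sketch assuming the upstream packages:

import requests
from cachecontrol import CacheControl
from cachecontrol.heuristics import ExpiresAfter

# Every response gets expires = now + 1 hour and cache-control: public,
# overriding whatever the server sent.
sess = CacheControl(requests.Session(), heuristic=ExpiresAfter(hours=1))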
@@ -0,0 +1,146 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import io
from typing import IO, TYPE_CHECKING, Any, Mapping, cast

from pip._vendor import msgpack
from pip._vendor.requests.structures import CaseInsensitiveDict
from pip._vendor.urllib3 import HTTPResponse

if TYPE_CHECKING:
    from pip._vendor.requests import PreparedRequest


class Serializer:
    serde_version = "4"

    def dumps(
        self,
        request: PreparedRequest,
        response: HTTPResponse,
        body: bytes | None = None,
    ) -> bytes:
        response_headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(
            response.headers
        )

        if body is None:
            # When a body isn't passed in, we'll read the response. We
            # also update the response with a new file handler to be
            # sure it acts as though it was never read.
            body = response.read(decode_content=False)
            response._fp = io.BytesIO(body)  # type: ignore[assignment]
            response.length_remaining = len(body)

        data = {
            "response": {
                "body": body,  # Empty bytestring if body is stored separately
                "headers": {str(k): str(v) for k, v in response.headers.items()},
                "status": response.status,
                "version": response.version,
                "reason": str(response.reason),
                "decode_content": response.decode_content,
            }
        }

        # Construct our vary headers
        data["vary"] = {}
        if "vary" in response_headers:
            varied_headers = response_headers["vary"].split(",")
            for header in varied_headers:
                header = str(header).strip()
                header_value = request.headers.get(header, None)
                if header_value is not None:
                    header_value = str(header_value)
                data["vary"][header] = header_value

        return b",".join([f"cc={self.serde_version}".encode(), self.serialize(data)])

    def serialize(self, data: dict[str, Any]) -> bytes:
        return cast(bytes, msgpack.dumps(data, use_bin_type=True))

    def loads(
        self,
        request: PreparedRequest,
        data: bytes,
        body_file: IO[bytes] | None = None,
    ) -> HTTPResponse | None:
        # Short circuit if we've been given an empty set of data
        if not data:
            return None

        # Previous versions of this library supported other serialization
        # formats, but these have all been removed.
        if not data.startswith(f"cc={self.serde_version},".encode()):
            return None

        data = data[5:]
        return self._loads_v4(request, data, body_file)

    def prepare_response(
        self,
        request: PreparedRequest,
        cached: Mapping[str, Any],
        body_file: IO[bytes] | None = None,
    ) -> HTTPResponse | None:
        """Verify our vary headers match and construct a real urllib3
        HTTPResponse object.
        """
        # Special case the '*' Vary value as it means we cannot actually
        # determine if the cached response is suitable for this request.
        # This case is also handled in the controller code when creating
        # a cache entry, but is left here for backwards compatibility.
        if "*" in cached.get("vary", {}):
            return None

        # Ensure that the Vary headers for the cached response match our
        # request
        for header, value in cached.get("vary", {}).items():
            if request.headers.get(header, None) != value:
                return None

        body_raw = cached["response"].pop("body")

        headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(
            data=cached["response"]["headers"]
        )
        if headers.get("transfer-encoding", "") == "chunked":
            headers.pop("transfer-encoding")

        cached["response"]["headers"] = headers

        try:
            body: IO[bytes]
            if body_file is None:
                body = io.BytesIO(body_raw)
            else:
                body = body_file
        except TypeError:
            # This can happen if cachecontrol serialized to v1 format (pickle)
            # using Python 2. A Python 2 str(byte string) will be unpickled as
            # a Python 3 str (unicode string), which will cause the above to
            # fail with:
            #
            #     TypeError: 'str' does not support the buffer interface
            body = io.BytesIO(body_raw.encode("utf8"))

        # Discard any `strict` parameter serialized by older version of cachecontrol.
        cached["response"].pop("strict", None)

        return HTTPResponse(body=body, preload_content=False, **cached["response"])

    def _loads_v4(
        self,
        request: PreparedRequest,
        data: bytes,
        body_file: IO[bytes] | None = None,
    ) -> HTTPResponse | None:
        try:
            cached = msgpack.loads(data, raw=False)
        except ValueError:
            return None

        return self.prepare_response(request, cached, body_file)
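Reviewer note: the serialized cache entry is simply b"cc=4," followed by a msgpack map; the prefix check in loads() then strips those first five bytes. A toy round-trip of just the envelope, assuming the standalone msgpack package is installed:

import msgpack

payload = {"response": {"status": 200}}
blob = b",".join([b"cc=4", msgpack.dumps(payload, use_bin_type=True)])

assert blob.startswith(b"cc=4,")
assert msgpack.loads(blob[5:], raw=False) == payload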
@@ -0,0 +1,43 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

from typing import TYPE_CHECKING, Collection

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.cache import DictCache

if TYPE_CHECKING:
    from pip._vendor import requests

    from pip._vendor.cachecontrol.cache import BaseCache
    from pip._vendor.cachecontrol.controller import CacheController
    from pip._vendor.cachecontrol.heuristics import BaseHeuristic
    from pip._vendor.cachecontrol.serialize import Serializer


def CacheControl(
    sess: requests.Session,
    cache: BaseCache | None = None,
    cache_etags: bool = True,
    serializer: Serializer | None = None,
    heuristic: BaseHeuristic | None = None,
    controller_class: type[CacheController] | None = None,
    adapter_class: type[CacheControlAdapter] | None = None,
    cacheable_methods: Collection[str] | None = None,
) -> requests.Session:
    cache = DictCache() if cache is None else cache
    adapter_class = adapter_class or CacheControlAdapter
    adapter = adapter_class(
        cache,
        cache_etags=cache_etags,
        serializer=serializer,
        heuristic=heuristic,
        controller_class=controller_class,
        cacheable_methods=cacheable_methods,
    )
    sess.mount("http://", adapter)
    sess.mount("https://", adapter)

    return sess
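A minimal usage sketch for the wrapper above (not part of the diff; the URL is illustrative, and what actually gets cached depends on the response's cache headers):

# Wrap a requests session so responses are cached (DictCache by default).
from pip._vendor import requests
from pip._vendor.cachecontrol.wrapper import CacheControl

sess = CacheControl(requests.Session())
resp = sess.get("https://example.com/")  # illustrative URL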
@@ -0,0 +1,4 @@
from .core import contents, where

__all__ = ["contents", "where"]
__version__ = "2025.07.14"
@@ -0,0 +1,12 @@
import argparse

from pip._vendor.certifi import contents, where

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--contents", action="store_true")
args = parser.parse_args()

if args.contents:
    print(contents())
else:
    print(where())
4778
Utils/PythonNew32/Lib/site-packages/pip/_vendor/certifi/cacert.pem
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,83 @@
"""
certifi.py
~~~~~~~~~~

This module returns the installation location of cacert.pem or its contents.
"""
import sys
import atexit

def exit_cacert_ctx() -> None:
    _CACERT_CTX.__exit__(None, None, None)  # type: ignore[union-attr]


if sys.version_info >= (3, 11):

    from importlib.resources import as_file, files

    _CACERT_CTX = None
    _CACERT_PATH = None

    def where() -> str:
        # This is slightly terrible, but we want to delay extracting the file
        # in cases where we're inside of a zipimport situation until someone
        # actually calls where(), but we don't want to re-extract the file
        # on every call of where(), so we'll do it once then store it in a
        # global variable.
        global _CACERT_CTX
        global _CACERT_PATH
        if _CACERT_PATH is None:
            # This is slightly janky, the importlib.resources API wants you to
            # manage the cleanup of this file, so it doesn't actually return a
            # path, it returns a context manager that will give you the path
            # when you enter it and will do any cleanup when you leave it. In
            # the common case of not needing a temporary file, it will just
            # return the file system location and the __exit__() is a no-op.
            #
            # We also have to hold onto the actual context manager, because
            # it will do the cleanup whenever it gets garbage collected, so
            # we will also store that at the global level as well.
            _CACERT_CTX = as_file(files("pip._vendor.certifi").joinpath("cacert.pem"))
            _CACERT_PATH = str(_CACERT_CTX.__enter__())
            atexit.register(exit_cacert_ctx)

        return _CACERT_PATH

    def contents() -> str:
        return files("pip._vendor.certifi").joinpath("cacert.pem").read_text(encoding="ascii")

else:

    from importlib.resources import path as get_path, read_text

    _CACERT_CTX = None
    _CACERT_PATH = None

    def where() -> str:
        # This is slightly terrible, but we want to delay extracting the
        # file in cases where we're inside of a zipimport situation until
        # someone actually calls where(), but we don't want to re-extract
        # the file on every call of where(), so we'll do it once then store
        # it in a global variable.
        global _CACERT_CTX
        global _CACERT_PATH
        if _CACERT_PATH is None:
            # This is slightly janky, the importlib.resources API wants you
            # to manage the cleanup of this file, so it doesn't actually
            # return a path, it returns a context manager that will give
            # you the path when you enter it and will do any cleanup when
            # you leave it. In the common case of not needing a temporary
            # file, it will just return the file system location and the
            # __exit__() is a no-op.
            #
            # We also have to hold onto the actual context manager, because
            # it will do the cleanup whenever it gets garbage collected, so
            # we will also store that at the global level as well.
            _CACERT_CTX = get_path("pip._vendor.certifi", "cacert.pem")
            _CACERT_PATH = str(_CACERT_CTX.__enter__())
            atexit.register(exit_cacert_ctx)

        return _CACERT_PATH

    def contents() -> str:
        return read_text("pip._vendor.certifi", "cacert.pem", encoding="ascii")
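A usage sketch for the module above (not part of the diff): both helpers are safe to call repeatedly, since where() caches the extracted path.

from pip._vendor import certifi

ca_path = certifi.where()      # filesystem path to the bundled cacert.pem
pem_text = certifi.contents()  # the PEM data itself, as text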
@@ -0,0 +1,13 @@
from ._implementation import (
    CyclicDependencyError,
    DependencyGroupInclude,
    DependencyGroupResolver,
    resolve,
)

__all__ = (
    "CyclicDependencyError",
    "DependencyGroupInclude",
    "DependencyGroupResolver",
    "resolve",
)
@@ -0,0 +1,65 @@
import argparse
import sys

from ._implementation import resolve
from ._toml_compat import tomllib


def main() -> None:
    if tomllib is None:
        print(
            "Usage error: dependency-groups CLI requires tomli or Python 3.11+",
            file=sys.stderr,
        )
        raise SystemExit(2)

    parser = argparse.ArgumentParser(
        description=(
            "A dependency-groups CLI. Prints out a resolved group, newline-delimited."
        )
    )
    parser.add_argument(
        "GROUP_NAME", nargs="*", help="The dependency group(s) to resolve."
    )
    parser.add_argument(
        "-f",
        "--pyproject-file",
        default="pyproject.toml",
        help="The pyproject.toml file. Defaults to trying in the current directory.",
    )
    parser.add_argument(
        "-o",
        "--output",
        help="An output file. Defaults to stdout.",
    )
    parser.add_argument(
        "-l",
        "--list",
        action="store_true",
        help="List the available dependency groups",
    )
    args = parser.parse_args()

    with open(args.pyproject_file, "rb") as fp:
        pyproject = tomllib.load(fp)

    dependency_groups_raw = pyproject.get("dependency-groups", {})

    if args.list:
        print(*dependency_groups_raw.keys())
        return
    if not args.GROUP_NAME:
        print("A GROUP_NAME is required", file=sys.stderr)
        raise SystemExit(3)

    content = "\n".join(resolve(dependency_groups_raw, *args.GROUP_NAME))

    if args.output is None or args.output == "-":
        print(content)
    else:
        with open(args.output, "w", encoding="utf-8") as fp:
            print(content, file=fp)


if __name__ == "__main__":
    main()
@@ -0,0 +1,209 @@
from __future__ import annotations

import dataclasses
import re
from collections.abc import Mapping

from pip._vendor.packaging.requirements import Requirement


def _normalize_name(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()


def _normalize_group_names(
    dependency_groups: Mapping[str, str | Mapping[str, str]],
) -> Mapping[str, str | Mapping[str, str]]:
    original_names: dict[str, list[str]] = {}
    normalized_groups = {}

    for group_name, value in dependency_groups.items():
        normed_group_name = _normalize_name(group_name)
        original_names.setdefault(normed_group_name, []).append(group_name)
        normalized_groups[normed_group_name] = value

    errors = []
    for normed_name, names in original_names.items():
        if len(names) > 1:
            errors.append(f"{normed_name} ({', '.join(names)})")
    if errors:
        raise ValueError(f"Duplicate dependency group names: {', '.join(errors)}")

    return normalized_groups


@dataclasses.dataclass
class DependencyGroupInclude:
    include_group: str


class CyclicDependencyError(ValueError):
    """
    An error representing the detection of a cycle.
    """

    def __init__(self, requested_group: str, group: str, include_group: str) -> None:
        self.requested_group = requested_group
        self.group = group
        self.include_group = include_group

        if include_group == group:
            reason = f"{group} includes itself"
        else:
            reason = f"{include_group} -> {group}, {group} -> {include_group}"
        super().__init__(
            "Cyclic dependency group include while resolving "
            f"{requested_group}: {reason}"
        )


class DependencyGroupResolver:
    """
    A resolver for Dependency Group data.

    This class handles caching, name normalization, cycle detection, and other
    parsing requirements. There are only two public methods for exploring the data:
    ``lookup()`` and ``resolve()``.

    :param dependency_groups: A mapping, as provided via pyproject
        ``[dependency-groups]``.
    """

    def __init__(
        self,
        dependency_groups: Mapping[str, str | Mapping[str, str]],
    ) -> None:
        if not isinstance(dependency_groups, Mapping):
            raise TypeError("Dependency Groups table is not a mapping")
        self.dependency_groups = _normalize_group_names(dependency_groups)
        # a map of group names to parsed data
        self._parsed_groups: dict[
            str, tuple[Requirement | DependencyGroupInclude, ...]
        ] = {}
        # a map of group names to their ancestors, used for cycle detection
        self._include_graph_ancestors: dict[str, tuple[str, ...]] = {}
        # a cache of completed resolutions to Requirement lists
        self._resolve_cache: dict[str, tuple[Requirement, ...]] = {}

    def lookup(self, group: str) -> tuple[Requirement | DependencyGroupInclude, ...]:
        """
        Lookup a group name, returning the parsed dependency data for that group.
        This will not resolve includes.

        :param group: the name of the group to lookup

        :raises ValueError: if the data does not appear to be valid dependency group
            data
        :raises TypeError: if the data is not a string
        :raises LookupError: if group name is absent
        :raises packaging.requirements.InvalidRequirement: if a specifier is not valid
        """
        if not isinstance(group, str):
            raise TypeError("Dependency group name is not a str")
        group = _normalize_name(group)
        return self._parse_group(group)

    def resolve(self, group: str) -> tuple[Requirement, ...]:
        """
        Resolve a dependency group to a list of requirements.

        :param group: the name of the group to resolve

        :raises TypeError: if the inputs appear to be the wrong types
        :raises ValueError: if the data does not appear to be valid dependency group
            data
        :raises LookupError: if group name is absent
        :raises packaging.requirements.InvalidRequirement: if a specifier is not valid
        """
        if not isinstance(group, str):
            raise TypeError("Dependency group name is not a str")
        group = _normalize_name(group)
        return self._resolve(group, group)

    def _parse_group(
        self, group: str
    ) -> tuple[Requirement | DependencyGroupInclude, ...]:
        # short circuit -- never do the work twice
        if group in self._parsed_groups:
            return self._parsed_groups[group]

        if group not in self.dependency_groups:
            raise LookupError(f"Dependency group '{group}' not found")

        raw_group = self.dependency_groups[group]
        if not isinstance(raw_group, list):
            raise TypeError(f"Dependency group '{group}' is not a list")

        elements: list[Requirement | DependencyGroupInclude] = []
        for item in raw_group:
            if isinstance(item, str):
                # packaging.requirements.Requirement parsing ensures that this is a
                # valid PEP 508 Dependency Specifier
                # raises InvalidRequirement on failure
                elements.append(Requirement(item))
            elif isinstance(item, dict):
                if tuple(item.keys()) != ("include-group",):
                    raise ValueError(f"Invalid dependency group item: {item}")

                include_group = next(iter(item.values()))
                elements.append(DependencyGroupInclude(include_group=include_group))
            else:
                raise ValueError(f"Invalid dependency group item: {item}")

        self._parsed_groups[group] = tuple(elements)
        return self._parsed_groups[group]

    def _resolve(self, group: str, requested_group: str) -> tuple[Requirement, ...]:
        """
        This is a helper for cached resolution to strings.

        :param group: The name of the group to resolve.
        :param requested_group: The group which was used in the original, user-facing
            request.
        """
        if group in self._resolve_cache:
            return self._resolve_cache[group]

        parsed = self._parse_group(group)

        resolved_group = []
        for item in parsed:
            if isinstance(item, Requirement):
                resolved_group.append(item)
            elif isinstance(item, DependencyGroupInclude):
                include_group = _normalize_name(item.include_group)
                if include_group in self._include_graph_ancestors.get(group, ()):
                    raise CyclicDependencyError(
                        requested_group, group, item.include_group
                    )
                self._include_graph_ancestors[include_group] = (
                    *self._include_graph_ancestors.get(group, ()),
                    group,
                )
                resolved_group.extend(self._resolve(include_group, requested_group))
            else:  # unreachable
                raise NotImplementedError(
                    f"Invalid dependency group item after parse: {item}"
                )

        self._resolve_cache[group] = tuple(resolved_group)
        return self._resolve_cache[group]


def resolve(
    dependency_groups: Mapping[str, str | Mapping[str, str]], /, *groups: str
) -> tuple[str, ...]:
    """
    Resolve a dependency group to a tuple of requirements, as strings.

    :param dependency_groups: the parsed contents of the ``[dependency-groups]`` table
        from ``pyproject.toml``
    :param groups: the name of the group(s) to resolve

    :raises TypeError: if the inputs appear to be the wrong types
    :raises ValueError: if the data does not appear to be valid dependency group data
    :raises LookupError: if group name is absent
    :raises packaging.requirements.InvalidRequirement: if a specifier is not valid
    """
    resolver = DependencyGroupResolver(dependency_groups)
    return tuple(str(r) for group in groups for r in resolver.resolve(group))
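A usage sketch for resolve() above (not part of the diff; the table mirrors a hypothetical [dependency-groups] section of pyproject.toml):

from pip._vendor.dependency_groups import resolve

groups = {
    "test": ["pytest", {"include-group": "runtime"}],
    "runtime": ["requests>=2.0"],
}
print(resolve(groups, "test"))  # ('pytest', 'requests>=2.0')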
@@ -0,0 +1,59 @@
from __future__ import annotations

import argparse
import sys

from ._implementation import DependencyGroupResolver
from ._toml_compat import tomllib


def main(*, argv: list[str] | None = None) -> None:
    if tomllib is None:
        print(
            "Usage error: dependency-groups CLI requires tomli or Python 3.11+",
            file=sys.stderr,
        )
        raise SystemExit(2)

    parser = argparse.ArgumentParser(
        description=(
            "Lint Dependency Groups for validity. "
            "This will eagerly load and check all of your Dependency Groups."
        )
    )
    parser.add_argument(
        "-f",
        "--pyproject-file",
        default="pyproject.toml",
        help="The pyproject.toml file. Defaults to trying in the current directory.",
    )
    args = parser.parse_args(argv if argv is not None else sys.argv[1:])

    with open(args.pyproject_file, "rb") as fp:
        pyproject = tomllib.load(fp)
    dependency_groups_raw = pyproject.get("dependency-groups", {})

    errors: list[str] = []
    try:
        resolver = DependencyGroupResolver(dependency_groups_raw)
    except (ValueError, TypeError) as e:
        errors.append(f"{type(e).__name__}: {e}")
    else:
        for groupname in resolver.dependency_groups:
            try:
                resolver.resolve(groupname)
            except (LookupError, ValueError, TypeError) as e:
                errors.append(f"{type(e).__name__}: {e}")

    if errors:
        print("errors encountered while examining dependency groups:")
        for msg in errors:
            print(f"  {msg}")
        sys.exit(1)
    else:
        print("ok")
        sys.exit(0)


if __name__ == "__main__":
    main()
@@ -0,0 +1,62 @@
from __future__ import annotations

import argparse
import subprocess
import sys

from ._implementation import DependencyGroupResolver
from ._toml_compat import tomllib


def _invoke_pip(deps: list[str]) -> None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", *deps])


def main(*, argv: list[str] | None = None) -> None:
    if tomllib is None:
        print(
            "Usage error: dependency-groups CLI requires tomli or Python 3.11+",
            file=sys.stderr,
        )
        raise SystemExit(2)

    parser = argparse.ArgumentParser(description="Install Dependency Groups.")
    parser.add_argument(
        "DEPENDENCY_GROUP", nargs="+", help="The dependency groups to install."
    )
    parser.add_argument(
        "-f",
        "--pyproject-file",
        default="pyproject.toml",
        help="The pyproject.toml file. Defaults to trying in the current directory.",
    )
    args = parser.parse_args(argv if argv is not None else sys.argv[1:])

    with open(args.pyproject_file, "rb") as fp:
        pyproject = tomllib.load(fp)
    dependency_groups_raw = pyproject.get("dependency-groups", {})

    errors: list[str] = []
    resolved: list[str] = []
    try:
        resolver = DependencyGroupResolver(dependency_groups_raw)
    except (ValueError, TypeError) as e:
        errors.append(f"{type(e).__name__}: {e}")
    else:
        for groupname in args.DEPENDENCY_GROUP:
            try:
                resolved.extend(str(r) for r in resolver.resolve(groupname))
            except (LookupError, ValueError, TypeError) as e:
                errors.append(f"{type(e).__name__}: {e}")

    if errors:
        print("errors encountered while examining dependency groups:")
        for msg in errors:
            print(f"  {msg}")
        sys.exit(1)

    _invoke_pip(resolved)


if __name__ == "__main__":
    main()
@@ -0,0 +1,9 @@
try:
    import tomllib
except ImportError:
    try:
        from pip._vendor import tomli as tomllib  # type: ignore[no-redef, unused-ignore]
    except ModuleNotFoundError:  # pragma: no cover
        tomllib = None  # type: ignore[assignment, unused-ignore]

__all__ = ("tomllib",)
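A usage sketch for the fallback above (not part of the diff): callers guard on tomllib being None, since neither stdlib tomllib (3.11+) nor the vendored tomli is guaranteed to be importable.

from pip._vendor.dependency_groups._toml_compat import tomllib

if tomllib is not None:
    with open("pyproject.toml", "rb") as fp:  # illustrative path
        data = tomllib.load(fp)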
@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2024 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
import logging

__version__ = '0.4.0'


class DistlibException(Exception):
    pass


try:
    from logging import NullHandler
except ImportError:  # pragma: no cover

    class NullHandler(logging.Handler):

        def handle(self, record):
            pass

        def emit(self, record):
            pass

        def createLock(self):
            self.lock = None


logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
1137
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distlib/compat.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,358 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2017 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from __future__ import unicode_literals

import bisect
import io
import logging
import os
import pkgutil
import sys
import types
import zipimport

from . import DistlibException
from .util import cached_property, get_cache_base, Cache

logger = logging.getLogger(__name__)


cache = None  # created when needed


class ResourceCache(Cache):

    def __init__(self, base=None):
        if base is None:
            # Use native string to avoid issues on 2.x: see Python #20140.
            base = os.path.join(get_cache_base(), str('resource-cache'))
        super(ResourceCache, self).__init__(base)

    def is_stale(self, resource, path):
        """
        Is the cache stale for the given resource?

        :param resource: The :class:`Resource` being cached.
        :param path: The path of the resource in the cache.
        :return: True if the cache is stale.
        """
        # Cache invalidation is a hard problem :-)
        return True

    def get(self, resource):
        """
        Get a resource into the cache,

        :param resource: A :class:`Resource` instance.
        :return: The pathname of the resource in the cache.
        """
        prefix, path = resource.finder.get_cache_info(resource)
        if prefix is None:
            result = path
        else:
            result = os.path.join(self.base, self.prefix_to_dir(prefix), path)
            dirname = os.path.dirname(result)
            if not os.path.isdir(dirname):
                os.makedirs(dirname)
            if not os.path.exists(result):
                stale = True
            else:
                stale = self.is_stale(resource, path)
            if stale:
                # write the bytes of the resource to the cache location
                with open(result, 'wb') as f:
                    f.write(resource.bytes)
        return result


class ResourceBase(object):

    def __init__(self, finder, name):
        self.finder = finder
        self.name = name


class Resource(ResourceBase):
    """
    A class representing an in-package resource, such as a data file. This is
    not normally instantiated by user code, but rather by a
    :class:`ResourceFinder` which manages the resource.
    """
    is_container = False  # Backwards compatibility

    def as_stream(self):
        """
        Get the resource as a stream.

        This is not a property to make it obvious that it returns a new stream
        each time.
        """
        return self.finder.get_stream(self)

    @cached_property
    def file_path(self):
        global cache
        if cache is None:
            cache = ResourceCache()
        return cache.get(self)

    @cached_property
    def bytes(self):
        return self.finder.get_bytes(self)

    @cached_property
    def size(self):
        return self.finder.get_size(self)


class ResourceContainer(ResourceBase):
    is_container = True  # Backwards compatibility

    @cached_property
    def resources(self):
        return self.finder.get_resources(self)


class ResourceFinder(object):
    """
    Resource finder for file system resources.
    """

    if sys.platform.startswith('java'):
        skipped_extensions = ('.pyc', '.pyo', '.class')
    else:
        skipped_extensions = ('.pyc', '.pyo')

    def __init__(self, module):
        self.module = module
        self.loader = getattr(module, '__loader__', None)
        self.base = os.path.dirname(getattr(module, '__file__', ''))

    def _adjust_path(self, path):
        return os.path.realpath(path)

    def _make_path(self, resource_name):
        # Issue #50: need to preserve type of path on Python 2.x
        # like os.path._get_sep
        if isinstance(resource_name, bytes):  # should only happen on 2.x
            sep = b'/'
        else:
            sep = '/'
        parts = resource_name.split(sep)
        parts.insert(0, self.base)
        result = os.path.join(*parts)
        return self._adjust_path(result)

    def _find(self, path):
        return os.path.exists(path)

    def get_cache_info(self, resource):
        return None, resource.path

    def find(self, resource_name):
        path = self._make_path(resource_name)
        if not self._find(path):
            result = None
        else:
            if self._is_directory(path):
                result = ResourceContainer(self, resource_name)
            else:
                result = Resource(self, resource_name)
            result.path = path
        return result

    def get_stream(self, resource):
        return open(resource.path, 'rb')

    def get_bytes(self, resource):
        with open(resource.path, 'rb') as f:
            return f.read()

    def get_size(self, resource):
        return os.path.getsize(resource.path)

    def get_resources(self, resource):
        def allowed(f):
            return (f != '__pycache__' and not
                    f.endswith(self.skipped_extensions))
        return set([f for f in os.listdir(resource.path) if allowed(f)])

    def is_container(self, resource):
        return self._is_directory(resource.path)

    _is_directory = staticmethod(os.path.isdir)

    def iterator(self, resource_name):
        resource = self.find(resource_name)
        if resource is not None:
            todo = [resource]
            while todo:
                resource = todo.pop(0)
                yield resource
                if resource.is_container:
                    rname = resource.name
                    for name in resource.resources:
                        if not rname:
                            new_name = name
                        else:
                            new_name = '/'.join([rname, name])
                        child = self.find(new_name)
                        if child.is_container:
                            todo.append(child)
                        else:
                            yield child


class ZipResourceFinder(ResourceFinder):
    """
    Resource finder for resources in .zip files.
    """
    def __init__(self, module):
        super(ZipResourceFinder, self).__init__(module)
        archive = self.loader.archive
        self.prefix_len = 1 + len(archive)
        # PyPy doesn't have a _files attr on zipimporter, and you can't set one
        if hasattr(self.loader, '_files'):
            self._files = self.loader._files
        else:
            self._files = zipimport._zip_directory_cache[archive]
        self.index = sorted(self._files)

    def _adjust_path(self, path):
        return path

    def _find(self, path):
        path = path[self.prefix_len:]
        if path in self._files:
            result = True
        else:
            if path and path[-1] != os.sep:
                path = path + os.sep
            i = bisect.bisect(self.index, path)
            try:
                result = self.index[i].startswith(path)
            except IndexError:
                result = False
        if not result:
            logger.debug('_find failed: %r %r', path, self.loader.prefix)
        else:
            logger.debug('_find worked: %r %r', path, self.loader.prefix)
        return result

    def get_cache_info(self, resource):
        prefix = self.loader.archive
        path = resource.path[1 + len(prefix):]
        return prefix, path

    def get_bytes(self, resource):
        return self.loader.get_data(resource.path)

    def get_stream(self, resource):
        return io.BytesIO(self.get_bytes(resource))

    def get_size(self, resource):
        path = resource.path[self.prefix_len:]
        return self._files[path][3]

    def get_resources(self, resource):
        path = resource.path[self.prefix_len:]
        if path and path[-1] != os.sep:
            path += os.sep
        plen = len(path)
        result = set()
        i = bisect.bisect(self.index, path)
        while i < len(self.index):
            if not self.index[i].startswith(path):
                break
            s = self.index[i][plen:]
            result.add(s.split(os.sep, 1)[0])  # only immediate children
            i += 1
        return result

    def _is_directory(self, path):
        path = path[self.prefix_len:]
        if path and path[-1] != os.sep:
            path += os.sep
        i = bisect.bisect(self.index, path)
        try:
            result = self.index[i].startswith(path)
        except IndexError:
            result = False
        return result


_finder_registry = {
    type(None): ResourceFinder,
    zipimport.zipimporter: ZipResourceFinder
}

try:
    # In Python 3.6, _frozen_importlib -> _frozen_importlib_external
    try:
        import _frozen_importlib_external as _fi
    except ImportError:
        import _frozen_importlib as _fi
    _finder_registry[_fi.SourceFileLoader] = ResourceFinder
    _finder_registry[_fi.FileFinder] = ResourceFinder
    # See issue #146
    _finder_registry[_fi.SourcelessFileLoader] = ResourceFinder
    del _fi
except (ImportError, AttributeError):
    pass


def register_finder(loader, finder_maker):
    _finder_registry[type(loader)] = finder_maker


_finder_cache = {}


def finder(package):
    """
    Return a resource finder for a package.
    :param package: The name of the package.
    :return: A :class:`ResourceFinder` instance for the package.
    """
    if package in _finder_cache:
        result = _finder_cache[package]
    else:
        if package not in sys.modules:
            __import__(package)
        module = sys.modules[package]
        path = getattr(module, '__path__', None)
        if path is None:
            raise DistlibException('You cannot get a finder for a module, '
                                   'only for a package')
        loader = getattr(module, '__loader__', None)
        finder_maker = _finder_registry.get(type(loader))
        if finder_maker is None:
            raise DistlibException('Unable to locate finder for %r' % package)
        result = finder_maker(module)
        _finder_cache[package] = result
    return result


_dummy_module = types.ModuleType(str('__dummy__'))


def finder_for_path(path):
    """
    Return a resource finder for a path, which should represent a container.

    :param path: The path.
    :return: A :class:`ResourceFinder` instance for the path.
    """
    result = None
    # calls any path hooks, gets importer into cache
    pkgutil.get_importer(path)
    loader = sys.path_importer_cache.get(path)
    finder = _finder_registry.get(type(loader))
    if finder:
        module = _dummy_module
        module.__file__ = os.path.join(path, '')
        module.__loader__ = loader
        result = finder(module)
    return result
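A usage sketch for finder() above (not part of the diff): it reads package data uniformly whether the package lives on the file system or inside a zip archive.

from pip._vendor.distlib.resources import finder

f = finder("pip._vendor.certifi")  # any importable package works
res = f.find("cacert.pem")         # Resource, ResourceContainer, or None
if res is not None:
    data = res.bytes  # raw contents of the data file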
@@ -0,0 +1,447 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2023 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from io import BytesIO
import logging
import os
import re
import struct
import sys
import time
from zipfile import ZipInfo

from .compat import sysconfig, detect_encoding, ZipFile
from .resources import finder
from .util import (FileOperator, get_export_entry, convert_path, get_executable, get_platform, in_venv)

logger = logging.getLogger(__name__)

_DEFAULT_MANIFEST = '''
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
 <assemblyIdentity version="1.0.0.0"
 processorArchitecture="X86"
 name="%s"
 type="win32"/>

 <!-- Identify the application security requirements. -->
 <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
  <security>
   <requestedPrivileges>
    <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
   </requestedPrivileges>
  </security>
 </trustInfo>
</assembly>'''.strip()

# check if Python is called on the first line with this expression
FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$')
SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*-
import re
import sys
if __name__ == '__main__':
    from %(module)s import %(import_name)s
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(%(func)s())
'''

# Pre-fetch the contents of all executable wrapper stubs.
# This is to address https://github.com/pypa/pip/issues/12666.
# When updating pip, we rename the old pip in place before installing the
# new version. If we try to fetch a wrapper *after* that rename, the finder
# machinery will be confused as the package is no longer available at the
# location where it was imported from. So we load everything into memory in
# advance.

if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'):
    # Issue 31: don't hardcode an absolute package name, but
    # determine it relative to the current package
    DISTLIB_PACKAGE = __name__.rsplit('.', 1)[0]

    WRAPPERS = {
        r.name: r.bytes
        for r in finder(DISTLIB_PACKAGE).iterator("")
        if r.name.endswith(".exe")
    }


def enquote_executable(executable):
    if ' ' in executable:
        # make sure we quote only the executable in case of env
        # for example /usr/bin/env "/dir with spaces/bin/jython"
        # instead of "/usr/bin/env /dir with spaces/bin/jython"
        # otherwise whole
        if executable.startswith('/usr/bin/env '):
            env, _executable = executable.split(' ', 1)
            if ' ' in _executable and not _executable.startswith('"'):
                executable = '%s "%s"' % (env, _executable)
        else:
            if not executable.startswith('"'):
                executable = '"%s"' % executable
    return executable


# Keep the old name around (for now), as there is at least one project using it!
_enquote_executable = enquote_executable


class ScriptMaker(object):
    """
    A class to copy or create scripts from source scripts or callable
    specifications.
    """
    script_template = SCRIPT_TEMPLATE

    executable = None  # for shebangs

    def __init__(self, source_dir, target_dir, add_launchers=True, dry_run=False, fileop=None):
        self.source_dir = source_dir
        self.target_dir = target_dir
        self.add_launchers = add_launchers
        self.force = False
        self.clobber = False
        # It only makes sense to set mode bits on POSIX.
        self.set_mode = (os.name == 'posix') or (os.name == 'java' and os._name == 'posix')
        self.variants = set(('', 'X.Y'))
        self._fileop = fileop or FileOperator(dry_run)

        self._is_nt = os.name == 'nt' or (os.name == 'java' and os._name == 'nt')
        self.version_info = sys.version_info

    def _get_alternate_executable(self, executable, options):
        if options.get('gui', False) and self._is_nt:  # pragma: no cover
            dn, fn = os.path.split(executable)
            fn = fn.replace('python', 'pythonw')
            executable = os.path.join(dn, fn)
        return executable

    if sys.platform.startswith('java'):  # pragma: no cover

        def _is_shell(self, executable):
            """
            Determine if the specified executable is a script
            (contains a #! line)
            """
            try:
                with open(executable) as fp:
                    return fp.read(2) == '#!'
            except (OSError, IOError):
                logger.warning('Failed to open %s', executable)
                return False

        def _fix_jython_executable(self, executable):
            if self._is_shell(executable):
                # Workaround for Jython is not needed on Linux systems.
                import java

                if java.lang.System.getProperty('os.name') == 'Linux':
                    return executable
            elif executable.lower().endswith('jython.exe'):
                # Use wrapper exe for Jython on Windows
                return executable
            return '/usr/bin/env %s' % executable

    def _build_shebang(self, executable, post_interp):
        """
        Build a shebang line. In the simple case (on Windows, or a shebang line
        which is not too long or contains spaces) use a simple formulation for
        the shebang. Otherwise, use /bin/sh as the executable, with a contrived
        shebang which allows the script to run either under Python or sh, using
        suitable quoting. Thanks to Harald Nordgren for his input.

        See also: http://www.in-ulm.de/~mascheck/various/shebang/#length
                  https://hg.mozilla.org/mozilla-central/file/tip/mach
        """
        if os.name != 'posix':
            simple_shebang = True
        elif getattr(sys, "cross_compiling", False):
            # In a cross-compiling environment, the shebang will likely be a
            # script; this *must* be invoked with the "safe" version of the
            # shebang, or else using os.exec() to run the entry script will
            # fail, raising "OSError 8 [Errno 8] Exec format error".
            simple_shebang = False
        else:
            # Add 3 for '#!' prefix and newline suffix.
            shebang_length = len(executable) + len(post_interp) + 3
            if sys.platform == 'darwin':
                max_shebang_length = 512
            else:
                max_shebang_length = 127
            simple_shebang = ((b' ' not in executable) and (shebang_length <= max_shebang_length))

        if simple_shebang:
            result = b'#!' + executable + post_interp + b'\n'
        else:
            result = b'#!/bin/sh\n'
            result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n'
            result += b"' '''\n"
        return result

    def _get_shebang(self, encoding, post_interp=b'', options=None):
        enquote = True
        if self.executable:
            executable = self.executable
            enquote = False  # assume this will be taken care of
        elif not sysconfig.is_python_build():
            executable = get_executable()
        elif in_venv():  # pragma: no cover
            executable = os.path.join(sysconfig.get_path('scripts'), 'python%s' % sysconfig.get_config_var('EXE'))
        else:  # pragma: no cover
            if os.name == 'nt':
                # for Python builds from source on Windows, no Python executables with
                # a version suffix are created, so we use python.exe
                executable = os.path.join(sysconfig.get_config_var('BINDIR'),
                                          'python%s' % (sysconfig.get_config_var('EXE')))
            else:
                executable = os.path.join(
                    sysconfig.get_config_var('BINDIR'),
                    'python%s%s' % (sysconfig.get_config_var('VERSION'), sysconfig.get_config_var('EXE')))
        if options:
            executable = self._get_alternate_executable(executable, options)

        if sys.platform.startswith('java'):  # pragma: no cover
            executable = self._fix_jython_executable(executable)

        # Normalise case for Windows - COMMENTED OUT
        # executable = os.path.normcase(executable)
        # N.B. The normalising operation above has been commented out: See
        # issue #124. Although paths in Windows are generally case-insensitive,
        # they aren't always. For example, a path containing a ẞ (which is a
        # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a
        # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by
        # Windows as equivalent in path names.

        # If the user didn't specify an executable, it may be necessary to
        # cater for executable paths with spaces (not uncommon on Windows)
        if enquote:
            executable = enquote_executable(executable)
        # Issue #51: don't use fsencode, since we later try to
        # check that the shebang is decodable using utf-8.
        executable = executable.encode('utf-8')
        # in case of IronPython, play safe and enable frames support
        if (sys.platform == 'cli' and '-X:Frames' not in post_interp and
                '-X:FullFrames' not in post_interp):  # pragma: no cover
            post_interp += b' -X:Frames'
        shebang = self._build_shebang(executable, post_interp)
        # Python parser starts to read a script using UTF-8 until
        # it gets a #coding:xxx cookie. The shebang has to be the
        # first line of a file, the #coding:xxx cookie cannot be
        # written before. So the shebang has to be decodable from
        # UTF-8.
        try:
            shebang.decode('utf-8')
        except UnicodeDecodeError:  # pragma: no cover
            raise ValueError('The shebang (%r) is not decodable from utf-8' % shebang)
        # If the script is encoded to a custom encoding (use a
        # #coding:xxx cookie), the shebang has to be decodable from
        # the script encoding too.
        if encoding != 'utf-8':
            try:
                shebang.decode(encoding)
            except UnicodeDecodeError:  # pragma: no cover
                raise ValueError('The shebang (%r) is not decodable '
                                 'from the script encoding (%r)' % (shebang, encoding))
        return shebang

    def _get_script_text(self, entry):
        return self.script_template % dict(
            module=entry.prefix, import_name=entry.suffix.split('.')[0], func=entry.suffix)

    manifest = _DEFAULT_MANIFEST

    def get_manifest(self, exename):
        base = os.path.basename(exename)
        return self.manifest % base

    def _write_script(self, names, shebang, script_bytes, filenames, ext):
        use_launcher = self.add_launchers and self._is_nt
        if not use_launcher:
            script_bytes = shebang + script_bytes
        else:  # pragma: no cover
            if ext == 'py':
                launcher = self._get_launcher('t')
            else:
                launcher = self._get_launcher('w')
            stream = BytesIO()
            with ZipFile(stream, 'w') as zf:
                source_date_epoch = os.environ.get('SOURCE_DATE_EPOCH')
                if source_date_epoch:
                    date_time = time.gmtime(int(source_date_epoch))[:6]
                    zinfo = ZipInfo(filename='__main__.py', date_time=date_time)
                    zf.writestr(zinfo, script_bytes)
                else:
                    zf.writestr('__main__.py', script_bytes)
            zip_data = stream.getvalue()
            script_bytes = launcher + shebang + zip_data
        for name in names:
            outname = os.path.join(self.target_dir, name)
            if use_launcher:  # pragma: no cover
                n, e = os.path.splitext(outname)
                if e.startswith('.py'):
                    outname = n
                outname = '%s.exe' % outname
                try:
                    self._fileop.write_binary_file(outname, script_bytes)
                except Exception:
                    # Failed writing an executable - it might be in use.
                    logger.warning('Failed to write executable - trying to '
                                   'use .deleteme logic')
                    dfname = '%s.deleteme' % outname
                    if os.path.exists(dfname):
                        os.remove(dfname)  # Not allowed to fail here
                    os.rename(outname, dfname)  # nor here
                    self._fileop.write_binary_file(outname, script_bytes)
                    logger.debug('Able to replace executable using '
                                 '.deleteme logic')
                    try:
                        os.remove(dfname)
                    except Exception:
                        pass  # still in use - ignore error
            else:
                if self._is_nt and not outname.endswith('.' + ext):  # pragma: no cover
                    outname = '%s.%s' % (outname, ext)
                if os.path.exists(outname) and not self.clobber:
                    logger.warning('Skipping existing file %s', outname)
                    continue
                self._fileop.write_binary_file(outname, script_bytes)
                if self.set_mode:
                    self._fileop.set_executable_mode([outname])
            filenames.append(outname)

    variant_separator = '-'

    def get_script_filenames(self, name):
        result = set()
        if '' in self.variants:
            result.add(name)
        if 'X' in self.variants:
            result.add('%s%s' % (name, self.version_info[0]))
        if 'X.Y' in self.variants:
            result.add('%s%s%s.%s' % (name, self.variant_separator, self.version_info[0], self.version_info[1]))
        return result

    def _make_script(self, entry, filenames, options=None):
        post_interp = b''
        if options:
            args = options.get('interpreter_args', [])
            if args:
                args = ' %s' % ' '.join(args)
                post_interp = args.encode('utf-8')
        shebang = self._get_shebang('utf-8', post_interp, options=options)
        script = self._get_script_text(entry).encode('utf-8')
        scriptnames = self.get_script_filenames(entry.name)
        if options and options.get('gui', False):
            ext = 'pyw'
        else:
            ext = 'py'
        self._write_script(scriptnames, shebang, script, filenames, ext)

    def _copy_script(self, script, filenames):
        adjust = False
        script = os.path.join(self.source_dir, convert_path(script))
        outname = os.path.join(self.target_dir, os.path.basename(script))
        if not self.force and not self._fileop.newer(script, outname):
            logger.debug('not copying %s (up-to-date)', script)
            return

        # Always open the file, but ignore failures in dry-run mode --
        # that way, we'll get accurate feedback if we can read the
        # script.
        try:
            f = open(script, 'rb')
        except IOError:  # pragma: no cover
            if not self.dry_run:
                raise
            f = None
        else:
            first_line = f.readline()
            if not first_line:  # pragma: no cover
                logger.warning('%s is an empty file (skipping)', script)
                return

            match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n'))
            if match:
                adjust = True
                post_interp = match.group(1) or b''

        if not adjust:
            if f:
                f.close()
            self._fileop.copy_file(script, outname)
            if self.set_mode:
                self._fileop.set_executable_mode([outname])
            filenames.append(outname)
        else:
            logger.info('copying and adjusting %s -> %s', script, self.target_dir)
            if not self._fileop.dry_run:
                encoding, lines = detect_encoding(f.readline)
                f.seek(0)
                shebang = self._get_shebang(encoding, post_interp)
                if b'pythonw' in first_line:  # pragma: no cover
                    ext = 'pyw'
                else:
                    ext = 'py'
                n = os.path.basename(outname)
                self._write_script([n], shebang, f.read(), filenames, ext)
            if f:
                f.close()

    @property
    def dry_run(self):
        return self._fileop.dry_run

    @dry_run.setter
    def dry_run(self, value):
        self._fileop.dry_run = value

    if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'):  # pragma: no cover
        # Executable launcher support.
        # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/

        def _get_launcher(self, kind):
            if struct.calcsize('P') == 8:  # 64-bit
                bits = '64'
            else:
                bits = '32'
            platform_suffix = '-arm' if get_platform() == 'win-arm64' else ''
            name = '%s%s%s.exe' % (kind, bits, platform_suffix)
            if name not in WRAPPERS:
                msg = ('Unable to find resource %s in package %s' %
                       (name, DISTLIB_PACKAGE))
                raise ValueError(msg)
            return WRAPPERS[name]

    # Public API follows

    def make(self, specification, options=None):
        """
        Make a script.

        :param specification: The specification, which is either a valid export
                              entry specification (to make a script from a
                              callable) or a filename (to make a script by
                              copying from a source location).
        :param options: A dictionary of options controlling script generation.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        entry = get_export_entry(specification)
        if entry is None:
            self._copy_script(specification, filenames)
        else:
            self._make_script(entry, filenames, options=options)
        return filenames

    def make_multiple(self, specifications, options=None):
        """
        Take a list of specifications and make scripts from them,
        :param specifications: A list of specifications.
        :return: A list of all absolute pathnames written to,
        """
        filenames = []
        for specification in specifications:
            filenames.extend(self.make(specification, options))
        return filenames
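A usage sketch for ScriptMaker above (not part of the diff; the directories and the entry-point specification are hypothetical):

from pip._vendor.distlib.scripts import ScriptMaker

maker = ScriptMaker(source_dir="src", target_dir="bin")
maker.variants = {""}  # write only "mycmd", no "-X.Y" versioned variant
written = maker.make("mycmd = mypkg.cli:main")  # hypothetical entry point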
BIN
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distlib/t32.exe
Normal file
Binary file not shown.
BIN
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distlib/t64.exe
Normal file
Binary file not shown.
1984
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distlib/util.py
Normal file
File diff suppressed because it is too large
BIN
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distlib/w32.exe
Normal file
Binary file not shown.
BIN
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distlib/w64.exe
Normal file
Binary file not shown.
@@ -0,0 +1,54 @@
from .distro import (
    NORMALIZED_DISTRO_ID,
    NORMALIZED_LSB_ID,
    NORMALIZED_OS_ID,
    LinuxDistribution,
    __version__,
    build_number,
    codename,
    distro_release_attr,
    distro_release_info,
    id,
    info,
    like,
    linux_distribution,
    lsb_release_attr,
    lsb_release_info,
    major_version,
    minor_version,
    name,
    os_release_attr,
    os_release_info,
    uname_attr,
    uname_info,
    version,
    version_parts,
)

__all__ = [
    "NORMALIZED_DISTRO_ID",
    "NORMALIZED_LSB_ID",
    "NORMALIZED_OS_ID",
    "LinuxDistribution",
    "build_number",
    "codename",
    "distro_release_attr",
    "distro_release_info",
    "id",
    "info",
    "like",
    "linux_distribution",
    "lsb_release_attr",
    "lsb_release_info",
    "major_version",
    "minor_version",
    "name",
    "os_release_attr",
    "os_release_info",
    "uname_attr",
    "uname_info",
    "version",
    "version_parts",
]

__version__ = __version__
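A usage sketch for the re-exported helpers above (not part of the diff; the printed values depend on the host OS and are derived from os-release, lsb_release, or uname):

from pip._vendor import distro

print(distro.id())       # e.g. "ubuntu"
print(distro.version())  # e.g. "22.04"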
@@ -0,0 +1,4 @@
from .distro import main

if __name__ == "__main__":
    main()
1403
Utils/PythonNew32/Lib/site-packages/pip/_vendor/distro/distro.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,45 @@
from .core import (
    IDNABidiError,
    IDNAError,
    InvalidCodepoint,
    InvalidCodepointContext,
    alabel,
    check_bidi,
    check_hyphen_ok,
    check_initial_combiner,
    check_label,
    check_nfc,
    decode,
    encode,
    ulabel,
    uts46_remap,
    valid_contextj,
    valid_contexto,
    valid_label_length,
    valid_string_length,
)
from .intranges import intranges_contain
from .package_data import __version__

__all__ = [
    "__version__",
    "IDNABidiError",
    "IDNAError",
    "InvalidCodepoint",
    "InvalidCodepointContext",
    "alabel",
    "check_bidi",
    "check_hyphen_ok",
    "check_initial_combiner",
    "check_label",
    "check_nfc",
    "decode",
    "encode",
    "intranges_contain",
    "ulabel",
    "uts46_remap",
    "valid_contextj",
    "valid_contexto",
    "valid_label_length",
    "valid_string_length",
]
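A usage sketch for the package above (not part of the diff): an IDNA 2008 encode/decode round-trip.

from pip._vendor import idna

alabel = idna.encode("ドメイン.テスト")  # b'xn--eckwd4c7c.xn--zckzah'
ulabel = idna.decode(alabel)              # 'ドメイン.テスト'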
122
Utils/PythonNew32/Lib/site-packages/pip/_vendor/idna/codec.py
Normal file
@@ -0,0 +1,122 @@
import codecs
import re
from typing import Any, Optional, Tuple

from .core import IDNAError, alabel, decode, encode, ulabel

_unicode_dots_re = re.compile("[\u002e\u3002\uff0e\uff61]")


class Codec(codecs.Codec):
    def encode(self, data: str, errors: str = "strict") -> Tuple[bytes, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return b"", 0

        return encode(data), len(data)

    def decode(self, data: bytes, errors: str = "strict") -> Tuple[str, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return "", 0

        return decode(data), len(data)


class IncrementalEncoder(codecs.BufferedIncrementalEncoder):
    def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[bytes, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return b"", 0

        labels = _unicode_dots_re.split(data)
        trailing_dot = b""
        if labels:
            if not labels[-1]:
                trailing_dot = b"."
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = b"."

        result = []
        size = 0
        for label in labels:
            result.append(alabel(label))
            if size:
                size += 1
            size += len(label)

        # Join with U+002E
        result_bytes = b".".join(result) + trailing_dot
        size += len(trailing_dot)
        return result_bytes, size


class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
    def _buffer_decode(self, data: Any, errors: str, final: bool) -> Tuple[str, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return ("", 0)

        if not isinstance(data, str):
            data = str(data, "ascii")

        labels = _unicode_dots_re.split(data)
        trailing_dot = ""
        if labels:
            if not labels[-1]:
                trailing_dot = "."
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = "."

        result = []
        size = 0
        for label in labels:
            result.append(ulabel(label))
            if size:
                size += 1
            size += len(label)

        result_str = ".".join(result) + trailing_dot
        size += len(trailing_dot)
        return (result_str, size)


class StreamWriter(Codec, codecs.StreamWriter):
    pass


class StreamReader(Codec, codecs.StreamReader):
    pass


def search_function(name: str) -> Optional[codecs.CodecInfo]:
    if name != "idna2008":
        return None
    return codecs.CodecInfo(
        name=name,
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
    )


codecs.register(search_function)
@@ -0,0 +1,15 @@
from typing import Any, Union

from .core import decode, encode


def ToASCII(label: str) -> bytes:
    return encode(label)


def ToUnicode(label: Union[bytes, bytearray]) -> str:
    return decode(label)


def nameprep(s: Any) -> None:
    raise NotImplementedError("IDNA 2008 does not utilise nameprep protocol")
437
Utils/PythonNew32/Lib/site-packages/pip/_vendor/idna/core.py
Normal file
@@ -0,0 +1,437 @@
import bisect
import re
import unicodedata
from typing import Optional, Union

from . import idnadata
from .intranges import intranges_contain

_virama_combining_class = 9
_alabel_prefix = b"xn--"
_unicode_dots_re = re.compile("[\u002e\u3002\uff0e\uff61]")


class IDNAError(UnicodeError):
    """Base exception for all IDNA-encoding related problems"""

    pass


class IDNABidiError(IDNAError):
    """Exception when bidirectional requirements are not satisfied"""

    pass


class InvalidCodepoint(IDNAError):
    """Exception when a disallowed or unallocated codepoint is used"""

    pass


class InvalidCodepointContext(IDNAError):
    """Exception when the codepoint is not valid in the context it is used"""

    pass


def _combining_class(cp: int) -> int:
    v = unicodedata.combining(chr(cp))
    if v == 0:
        if not unicodedata.name(chr(cp)):
            raise ValueError("Unknown character in unicodedata")
    return v


def _is_script(cp: str, script: str) -> bool:
    return intranges_contain(ord(cp), idnadata.scripts[script])


def _punycode(s: str) -> bytes:
    return s.encode("punycode")


def _unot(s: int) -> str:
    return "U+{:04X}".format(s)


def valid_label_length(label: Union[bytes, str]) -> bool:
    if len(label) > 63:
        return False
    return True


def valid_string_length(label: Union[bytes, str], trailing_dot: bool) -> bool:
    if len(label) > (254 if trailing_dot else 253):
        return False
    return True


def check_bidi(label: str, check_ltr: bool = False) -> bool:
    # Bidi rules should only be applied if string contains RTL characters
    bidi_label = False
    for idx, cp in enumerate(label, 1):
        direction = unicodedata.bidirectional(cp)
        if direction == "":
            # String likely comes from a newer version of Unicode
            raise IDNABidiError("Unknown directionality in label {} at position {}".format(repr(label), idx))
        if direction in ["R", "AL", "AN"]:
            bidi_label = True
    if not bidi_label and not check_ltr:
        return True

    # Bidi rule 1
    direction = unicodedata.bidirectional(label[0])
    if direction in ["R", "AL"]:
        rtl = True
    elif direction == "L":
        rtl = False
    else:
        raise IDNABidiError("First codepoint in label {} must be directionality L, R or AL".format(repr(label)))

    valid_ending = False
    number_type: Optional[str] = None
    for idx, cp in enumerate(label, 1):
        direction = unicodedata.bidirectional(cp)

        if rtl:
            # Bidi rule 2
            if direction not in [
                "R",
                "AL",
                "AN",
                "EN",
                "ES",
                "CS",
                "ET",
                "ON",
                "BN",
                "NSM",
            ]:
                raise IDNABidiError("Invalid direction for codepoint at position {} in a right-to-left label".format(idx))
            # Bidi rule 3
            if direction in ["R", "AL", "EN", "AN"]:
                valid_ending = True
            elif direction != "NSM":
                valid_ending = False
            # Bidi rule 4
            if direction in ["AN", "EN"]:
                if not number_type:
                    number_type = direction
                else:
                    if number_type != direction:
                        raise IDNABidiError("Can not mix numeral types in a right-to-left label")
        else:
            # Bidi rule 5
            if direction not in ["L", "EN", "ES", "CS", "ET", "ON", "BN", "NSM"]:
                raise IDNABidiError("Invalid direction for codepoint at position {} in a left-to-right label".format(idx))
            # Bidi rule 6
            if direction in ["L", "EN"]:
                valid_ending = True
            elif direction != "NSM":
                valid_ending = False

    if not valid_ending:
        raise IDNABidiError("Label ends with illegal codepoint directionality")

    return True


def check_initial_combiner(label: str) -> bool:
    if unicodedata.category(label[0])[0] == "M":
        raise IDNAError("Label begins with an illegal combining character")
    return True


def check_hyphen_ok(label: str) -> bool:
    if label[2:4] == "--":
        raise IDNAError("Label has disallowed hyphens in 3rd and 4th position")
    if label[0] == "-" or label[-1] == "-":
        raise IDNAError("Label must not start or end with a hyphen")
    return True


def check_nfc(label: str) -> None:
    if unicodedata.normalize("NFC", label) != label:
        raise IDNAError("Label must be in Normalization Form C")


def valid_contextj(label: str, pos: int) -> bool:
    cp_value = ord(label[pos])

    if cp_value == 0x200C:
        if pos > 0:
            if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
                return True

        ok = False
        for i in range(pos - 1, -1, -1):
            joining_type = idnadata.joining_types.get(ord(label[i]))
            if joining_type == ord("T"):
                continue
            elif joining_type in [ord("L"), ord("D")]:
                ok = True
                break
            else:
                break

        if not ok:
            return False

        ok = False
        for i in range(pos + 1, len(label)):
            joining_type = idnadata.joining_types.get(ord(label[i]))
            if joining_type == ord("T"):
                continue
            elif joining_type in [ord("R"), ord("D")]:
                ok = True
                break
            else:
                break
        return ok

    if cp_value == 0x200D:
        if pos > 0:
            if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
                return True
        return False

    else:
        return False


def valid_contexto(label: str, pos: int, exception: bool = False) -> bool:
    cp_value = ord(label[pos])

    if cp_value == 0x00B7:
        if 0 < pos < len(label) - 1:
            if ord(label[pos - 1]) == 0x006C and ord(label[pos + 1]) == 0x006C:
                return True
        return False

    elif cp_value == 0x0375:
        if pos < len(label) - 1 and len(label) > 1:
            return _is_script(label[pos + 1], "Greek")
        return False

    elif cp_value == 0x05F3 or cp_value == 0x05F4:
        if pos > 0:
            return _is_script(label[pos - 1], "Hebrew")
        return False

    elif cp_value == 0x30FB:
        for cp in label:
            if cp == "\u30fb":
                continue
            if _is_script(cp, "Hiragana") or _is_script(cp, "Katakana") or _is_script(cp, "Han"):
                return True
        return False

    elif 0x660 <= cp_value <= 0x669:
        for cp in label:
            if 0x6F0 <= ord(cp) <= 0x06F9:
                return False
        return True

    elif 0x6F0 <= cp_value <= 0x6F9:
        for cp in label:
            if 0x660 <= ord(cp) <= 0x0669:
                return False
        return True

    return False


def check_label(label: Union[str, bytes, bytearray]) -> None:
    if isinstance(label, (bytes, bytearray)):
        label = label.decode("utf-8")
    if len(label) == 0:
        raise IDNAError("Empty Label")

    check_nfc(label)
    check_hyphen_ok(label)
    check_initial_combiner(label)

    for pos, cp in enumerate(label):
        cp_value = ord(cp)
        if intranges_contain(cp_value, idnadata.codepoint_classes["PVALID"]):
            continue
        elif intranges_contain(cp_value, idnadata.codepoint_classes["CONTEXTJ"]):
            try:
                if not valid_contextj(label, pos):
                    raise InvalidCodepointContext(
                        "Joiner {} not allowed at position {} in {}".format(_unot(cp_value), pos + 1, repr(label))
                    )
            except ValueError:
                raise IDNAError(
                    "Unknown codepoint adjacent to joiner {} at position {} in {}".format(
                        _unot(cp_value), pos + 1, repr(label)
                    )
                )
        elif intranges_contain(cp_value, idnadata.codepoint_classes["CONTEXTO"]):
            if not valid_contexto(label, pos):
                raise InvalidCodepointContext(
                    "Codepoint {} not allowed at position {} in {}".format(_unot(cp_value), pos + 1, repr(label))
                )
        else:
            raise InvalidCodepoint(
                "Codepoint {} at position {} of {} not allowed".format(_unot(cp_value), pos + 1, repr(label))
            )

    check_bidi(label)


def alabel(label: str) -> bytes:
    try:
        label_bytes = label.encode("ascii")
        ulabel(label_bytes)
        if not valid_label_length(label_bytes):
            raise IDNAError("Label too long")
        return label_bytes
    except UnicodeEncodeError:
        pass

    check_label(label)
    label_bytes = _alabel_prefix + _punycode(label)

    if not valid_label_length(label_bytes):
        raise IDNAError("Label too long")

    return label_bytes


def ulabel(label: Union[str, bytes, bytearray]) -> str:
    if not isinstance(label, (bytes, bytearray)):
        try:
            label_bytes = label.encode("ascii")
        except UnicodeEncodeError:
            check_label(label)
            return label
    else:
        label_bytes = label

    label_bytes = label_bytes.lower()
    if label_bytes.startswith(_alabel_prefix):
        label_bytes = label_bytes[len(_alabel_prefix) :]
        if not label_bytes:
            raise IDNAError("Malformed A-label, no Punycode eligible content found")
        if label_bytes.decode("ascii")[-1] == "-":
            raise IDNAError("A-label must not end with a hyphen")
    else:
        check_label(label_bytes)
        return label_bytes.decode("ascii")

    try:
        label = label_bytes.decode("punycode")
    except UnicodeError:
        raise IDNAError("Invalid A-label")
    check_label(label)
    return label


def uts46_remap(domain: str, std3_rules: bool = True, transitional: bool = False) -> str:
    """Re-map the characters in the string according to UTS46 processing."""
    from .uts46data import uts46data

    output = ""

    for pos, char in enumerate(domain):
        code_point = ord(char)
        try:
            uts46row = uts46data[code_point if code_point < 256 else bisect.bisect_left(uts46data, (code_point, "Z")) - 1]
            status = uts46row[1]
            replacement: Optional[str] = None
            if len(uts46row) == 3:
                replacement = uts46row[2]
            if (
                status == "V"
                or (status == "D" and not transitional)
                or (status == "3" and not std3_rules and replacement is None)
            ):
                output += char
            elif replacement is not None and (
                status == "M" or (status == "3" and not std3_rules) or (status == "D" and transitional)
            ):
                output += replacement
            elif status != "I":
                raise IndexError()
        except IndexError:
            raise InvalidCodepoint(
                "Codepoint {} not allowed at position {} in {}".format(_unot(code_point), pos + 1, repr(domain))
            )

    return unicodedata.normalize("NFC", output)


def encode(
    s: Union[str, bytes, bytearray],
    strict: bool = False,
    uts46: bool = False,
    std3_rules: bool = False,
    transitional: bool = False,
) -> bytes:
    if not isinstance(s, str):
        try:
            s = str(s, "ascii")
        except UnicodeDecodeError:
            raise IDNAError("should pass a unicode string to the function rather than a byte string.")
    if uts46:
        s = uts46_remap(s, std3_rules, transitional)
    trailing_dot = False
    result = []
    if strict:
        labels = s.split(".")
    else:
        labels = _unicode_dots_re.split(s)
    if not labels or labels == [""]:
        raise IDNAError("Empty domain")
    if labels[-1] == "":
        del labels[-1]
        trailing_dot = True
    for label in labels:
        s = alabel(label)
        if s:
            result.append(s)
        else:
            raise IDNAError("Empty label")
    if trailing_dot:
        result.append(b"")
    s = b".".join(result)
    if not valid_string_length(s, trailing_dot):
        raise IDNAError("Domain too long")
    return s


def decode(
    s: Union[str, bytes, bytearray],
    strict: bool = False,
    uts46: bool = False,
    std3_rules: bool = False,
) -> str:
    try:
        if not isinstance(s, str):
            s = str(s, "ascii")
    except UnicodeDecodeError:
        raise IDNAError("Invalid ASCII in A-label")
    if uts46:
        s = uts46_remap(s, std3_rules, False)
    trailing_dot = False
    result = []
    if not strict:
        labels = _unicode_dots_re.split(s)
    else:
        labels = s.split(".")
    if not labels or labels == [""]:
        raise IDNAError("Empty domain")
    if not labels[-1]:
        del labels[-1]
        trailing_dot = True
    for label in labels:
        s = ulabel(label)
        if s:
            result.append(s)
        else:
            raise IDNAError("Empty label")
    if trailing_dot:
        result.append("")
    return ".".join(result)
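Hedged usage sketch for the encode()/decode() entry points defined above; the values are the round-trip examples from the idna project's own documentation, and the import path is assumed from this repository's vendored layout:

from pip._vendor import idna

assert idna.encode("ドメイン.テスト") == b"xn--eckwd4c7c.xn--zckzah"
assert idna.decode(b"xn--eckwd4c7c.xn--zckzah") == "ドメイン.テスト"
# uts46=True runs uts46_remap() first, so mixed case and compatibility
# characters are mapped before Punycode conversion:
assert idna.encode("Königsgäßchen", uts46=True) == b"xn--knigsgchen-b4a3dun"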
4243
Utils/PythonNew32/Lib/site-packages/pip/_vendor/idna/idnadata.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,57 @@
"""
Given a list of integers, made up of (hopefully) a small number of long runs
of consecutive integers, compute a representation of the form
((start1, end1), (start2, end2) ...). Then answer the question "was x present
in the original list?" in time O(log(# runs)).
"""

import bisect
from typing import List, Tuple


def intranges_from_list(list_: List[int]) -> Tuple[int, ...]:
    """Represent a list of integers as a sequence of ranges:
    ((start_0, end_0), (start_1, end_1), ...), such that the original
    integers are exactly those x such that start_i <= x < end_i for some i.

    Ranges are encoded as single integers (start << 32 | end), not as tuples.
    """

    sorted_list = sorted(list_)
    ranges = []
    last_write = -1
    for i in range(len(sorted_list)):
        if i + 1 < len(sorted_list):
            if sorted_list[i] == sorted_list[i + 1] - 1:
                continue
        current_range = sorted_list[last_write + 1 : i + 1]
        ranges.append(_encode_range(current_range[0], current_range[-1] + 1))
        last_write = i

    return tuple(ranges)


def _encode_range(start: int, end: int) -> int:
    return (start << 32) | end


def _decode_range(r: int) -> Tuple[int, int]:
    return (r >> 32), (r & ((1 << 32) - 1))


def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool:
    """Determine if `int_` falls into one of the ranges in `ranges`."""
    tuple_ = _encode_range(int_, 0)
    pos = bisect.bisect_left(ranges, tuple_)
    # we could be immediately ahead of a tuple (start, end)
    # with start < int_ <= end
    if pos > 0:
        left, right = _decode_range(ranges[pos - 1])
        if left <= int_ < right:
            return True
    # or we could be immediately behind a tuple (int_, end)
    if pos < len(ranges):
        left, _ = _decode_range(ranges[pos])
        if left == int_:
            return True
    return False
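Sketch of the packed-range representation above: each run is stored as a single int (start << 32 | end), so a membership test is one bisect over a sorted tuple (import path assumed from the vendored layout):

from pip._vendor.idna.intranges import intranges_contain, intranges_from_list

ranges = intranges_from_list([1, 2, 3, 10, 11, 99])  # runs: [1,4), [10,12), [99,100)
assert intranges_contain(2, ranges)
assert intranges_contain(99, ranges)
assert not intranges_contain(4, ranges)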
@@ -0,0 +1 @@
__version__ = "3.10"
8681
Utils/PythonNew32/Lib/site-packages/pip/_vendor/idna/uts46data.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,55 @@
# ruff: noqa: F401
import os

from .exceptions import *  # noqa: F403
from .ext import ExtType, Timestamp

version = (1, 1, 1)
__version__ = "1.1.1"


if os.environ.get("MSGPACK_PUREPYTHON"):
    from .fallback import Packer, Unpacker, unpackb
else:
    try:
        from ._cmsgpack import Packer, Unpacker, unpackb
    except ImportError:
        from .fallback import Packer, Unpacker, unpackb


def pack(o, stream, **kwargs):
    """
    Pack object `o` and write it to `stream`

    See :class:`Packer` for options.
    """
    packer = Packer(**kwargs)
    stream.write(packer.pack(o))


def packb(o, **kwargs):
    """
    Pack object `o` and return packed bytes

    See :class:`Packer` for options.
    """
    return Packer(**kwargs).pack(o)


def unpack(stream, **kwargs):
    """
    Unpack an object from `stream`.

    Raises `ExtraData` when `stream` contains extra bytes.
    See :class:`Unpacker` for options.
    """
    data = stream.read()
    return unpackb(data, **kwargs)


# alias for compatibility to simplejson/marshal/pickle.
load = unpack
loads = unpackb

dump = pack
dumps = packb
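Round-trip sketch for the module-level helpers above; packb/unpackb resolve to the C extension when available, otherwise the pure-Python fallback (import path assumed from the vendored layout):

from pip._vendor import msgpack

payload = msgpack.packb({"compact": True, "schema": 0})
assert msgpack.unpackb(payload) == {"compact": True, "schema": 0}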
@@ -0,0 +1,48 @@
class UnpackException(Exception):
    """Base class for some exceptions raised while unpacking.

    NOTE: unpack may raise exception other than subclass of
    UnpackException. If you want to catch all error, catch
    Exception instead.
    """


class BufferFull(UnpackException):
    pass


class OutOfData(UnpackException):
    pass


class FormatError(ValueError, UnpackException):
    """Invalid msgpack format"""


class StackError(ValueError, UnpackException):
    """Too nested"""


# Deprecated. Use ValueError instead
UnpackValueError = ValueError


class ExtraData(UnpackValueError):
    """ExtraData is raised when there is trailing data.

    This exception is raised while only one-shot (not streaming)
    unpack.
    """

    def __init__(self, unpacked, extra):
        self.unpacked = unpacked
        self.extra = extra

    def __str__(self):
        return "unpack(b) received extra data."


# Deprecated. Use Exception instead to catch all exception during packing.
PackException = Exception
PackValueError = ValueError
PackOverflowError = OverflowError
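Sketch showing how ExtraData carries both the decoded value and the trailing bytes during one-shot unpacking (import path assumed from the vendored layout):

from pip._vendor.msgpack import exceptions, packb, unpackb

try:
    unpackb(packb(1) + b"junk")
except exceptions.ExtraData as e:
    assert e.unpacked == 1 and bytes(e.extra) == b"junk"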
170
Utils/PythonNew32/Lib/site-packages/pip/_vendor/msgpack/ext.py
Normal file
@@ -0,0 +1,170 @@
import datetime
import struct
from collections import namedtuple


class ExtType(namedtuple("ExtType", "code data")):
    """ExtType represents ext type in msgpack."""

    def __new__(cls, code, data):
        if not isinstance(code, int):
            raise TypeError("code must be int")
        if not isinstance(data, bytes):
            raise TypeError("data must be bytes")
        if not 0 <= code <= 127:
            raise ValueError("code must be 0~127")
        return super().__new__(cls, code, data)


class Timestamp:
    """Timestamp represents the Timestamp extension type in msgpack.

    When built with Cython, msgpack uses C methods to pack and unpack `Timestamp`.
    When using pure-Python msgpack, :func:`to_bytes` and :func:`from_bytes` are used to pack and
    unpack `Timestamp`.

    This class is immutable: Do not override seconds and nanoseconds.
    """

    __slots__ = ["seconds", "nanoseconds"]

    def __init__(self, seconds, nanoseconds=0):
        """Initialize a Timestamp object.

        :param int seconds:
            Number of seconds since the UNIX epoch (00:00:00 UTC Jan 1 1970, minus leap seconds).
            May be negative.

        :param int nanoseconds:
            Number of nanoseconds to add to `seconds` to get fractional time.
            Maximum is 999_999_999. Default is 0.

        Note: Negative times (before the UNIX epoch) are represented as neg. seconds + pos. ns.
        """
        if not isinstance(seconds, int):
            raise TypeError("seconds must be an integer")
        if not isinstance(nanoseconds, int):
            raise TypeError("nanoseconds must be an integer")
        if not (0 <= nanoseconds < 10**9):
            raise ValueError("nanoseconds must be a non-negative integer less than 999999999.")
        self.seconds = seconds
        self.nanoseconds = nanoseconds

    def __repr__(self):
        """String representation of Timestamp."""
        return f"Timestamp(seconds={self.seconds}, nanoseconds={self.nanoseconds})"

    def __eq__(self, other):
        """Check for equality with another Timestamp object"""
        if type(other) is self.__class__:
            return self.seconds == other.seconds and self.nanoseconds == other.nanoseconds
        return False

    def __ne__(self, other):
        """not-equals method (see :func:`__eq__()`)"""
        return not self.__eq__(other)

    def __hash__(self):
        return hash((self.seconds, self.nanoseconds))

    @staticmethod
    def from_bytes(b):
        """Unpack bytes into a `Timestamp` object.

        Used for pure-Python msgpack unpacking.

        :param b: Payload from msgpack ext message with code -1
        :type b: bytes

        :returns: Timestamp object unpacked from msgpack ext payload
        :rtype: Timestamp
        """
        if len(b) == 4:
            seconds = struct.unpack("!L", b)[0]
            nanoseconds = 0
        elif len(b) == 8:
            data64 = struct.unpack("!Q", b)[0]
            seconds = data64 & 0x00000003FFFFFFFF
            nanoseconds = data64 >> 34
        elif len(b) == 12:
            nanoseconds, seconds = struct.unpack("!Iq", b)
        else:
            raise ValueError(
                "Timestamp type can only be created from 32, 64, or 96-bit byte objects"
            )
        return Timestamp(seconds, nanoseconds)

    def to_bytes(self):
        """Pack this Timestamp object into bytes.

        Used for pure-Python msgpack packing.

        :returns data: Payload for EXT message with code -1 (timestamp type)
        :rtype: bytes
        """
        if (self.seconds >> 34) == 0:  # seconds is non-negative and fits in 34 bits
            data64 = self.nanoseconds << 34 | self.seconds
            if data64 & 0xFFFFFFFF00000000 == 0:
                # nanoseconds is zero and seconds < 2**32, so timestamp 32
                data = struct.pack("!L", data64)
            else:
                # timestamp 64
                data = struct.pack("!Q", data64)
        else:
            # timestamp 96
            data = struct.pack("!Iq", self.nanoseconds, self.seconds)
        return data

    @staticmethod
    def from_unix(unix_sec):
        """Create a Timestamp from posix timestamp in seconds.

        :param unix_float: Posix timestamp in seconds.
        :type unix_float: int or float
        """
        seconds = int(unix_sec // 1)
        nanoseconds = int((unix_sec % 1) * 10**9)
        return Timestamp(seconds, nanoseconds)

    def to_unix(self):
        """Get the timestamp as a floating-point value.

        :returns: posix timestamp
        :rtype: float
        """
        return self.seconds + self.nanoseconds / 1e9

    @staticmethod
    def from_unix_nano(unix_ns):
        """Create a Timestamp from posix timestamp in nanoseconds.

        :param int unix_ns: Posix timestamp in nanoseconds.
        :rtype: Timestamp
        """
        return Timestamp(*divmod(unix_ns, 10**9))

    def to_unix_nano(self):
        """Get the timestamp as a unixtime in nanoseconds.

        :returns: posix timestamp in nanoseconds
        :rtype: int
        """
        return self.seconds * 10**9 + self.nanoseconds

    def to_datetime(self):
        """Get the timestamp as a UTC datetime.

        :rtype: `datetime.datetime`
        """
        utc = datetime.timezone.utc
        return datetime.datetime.fromtimestamp(0, utc) + datetime.timedelta(
            seconds=self.seconds, microseconds=self.nanoseconds // 1000
        )

    @staticmethod
    def from_datetime(dt):
        """Create a Timestamp from datetime with tzinfo.

        :rtype: Timestamp
        """
        return Timestamp(seconds=int(dt.timestamp()), nanoseconds=dt.microsecond * 1000)
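Conversion sketch for the Timestamp ext type above; divmod() splits a nanosecond posix time into the (seconds, nanoseconds) pair stored on the object (import path assumed from the vendored layout):

from pip._vendor.msgpack.ext import Timestamp

ts = Timestamp.from_unix_nano(1_500_000_000_123_456_789)
assert ts.seconds == 1_500_000_000 and ts.nanoseconds == 123_456_789
assert ts.to_unix_nano() == 1_500_000_000_123_456_789
# to_bytes()/from_bytes() round-trip the wire payload for ext code -1:
assert Timestamp.from_bytes(ts.to_bytes()) == ts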
@@ -0,0 +1,929 @@
"""Fallback pure Python implementation of msgpack"""

import struct
import sys
from datetime import datetime as _DateTime

if hasattr(sys, "pypy_version_info"):
    from __pypy__ import newlist_hint
    from __pypy__.builders import BytesBuilder

    _USING_STRINGBUILDER = True

    class BytesIO:
        def __init__(self, s=b""):
            if s:
                self.builder = BytesBuilder(len(s))
                self.builder.append(s)
            else:
                self.builder = BytesBuilder()

        def write(self, s):
            if isinstance(s, memoryview):
                s = s.tobytes()
            elif isinstance(s, bytearray):
                s = bytes(s)
            self.builder.append(s)

        def getvalue(self):
            return self.builder.build()

else:
    from io import BytesIO

    _USING_STRINGBUILDER = False

    def newlist_hint(size):
        return []


from .exceptions import BufferFull, ExtraData, FormatError, OutOfData, StackError
from .ext import ExtType, Timestamp

EX_SKIP = 0
EX_CONSTRUCT = 1
EX_READ_ARRAY_HEADER = 2
EX_READ_MAP_HEADER = 3

TYPE_IMMEDIATE = 0
TYPE_ARRAY = 1
TYPE_MAP = 2
TYPE_RAW = 3
TYPE_BIN = 4
TYPE_EXT = 5

DEFAULT_RECURSE_LIMIT = 511


def _check_type_strict(obj, t, type=type, tuple=tuple):
    if type(t) is tuple:
        return type(obj) in t
    else:
        return type(obj) is t


def _get_data_from_buffer(obj):
    view = memoryview(obj)
    if view.itemsize != 1:
        raise ValueError("cannot unpack from multi-byte object")
    return view


def unpackb(packed, **kwargs):
    """
    Unpack an object from `packed`.

    Raises ``ExtraData`` when *packed* contains extra bytes.
    Raises ``ValueError`` when *packed* is incomplete.
    Raises ``FormatError`` when *packed* is not valid msgpack.
    Raises ``StackError`` when *packed* contains too nested.
    Other exceptions can be raised during unpacking.

    See :class:`Unpacker` for options.
    """
    unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
    unpacker.feed(packed)
    try:
        ret = unpacker._unpack()
    except OutOfData:
        raise ValueError("Unpack failed: incomplete input")
    except RecursionError:
        raise StackError
    if unpacker._got_extradata():
        raise ExtraData(ret, unpacker._get_extradata())
    return ret


_NO_FORMAT_USED = ""
_MSGPACK_HEADERS = {
    0xC4: (1, _NO_FORMAT_USED, TYPE_BIN),
    0xC5: (2, ">H", TYPE_BIN),
    0xC6: (4, ">I", TYPE_BIN),
    0xC7: (2, "Bb", TYPE_EXT),
    0xC8: (3, ">Hb", TYPE_EXT),
    0xC9: (5, ">Ib", TYPE_EXT),
    0xCA: (4, ">f"),
    0xCB: (8, ">d"),
    0xCC: (1, _NO_FORMAT_USED),
    0xCD: (2, ">H"),
    0xCE: (4, ">I"),
    0xCF: (8, ">Q"),
    0xD0: (1, "b"),
    0xD1: (2, ">h"),
    0xD2: (4, ">i"),
    0xD3: (8, ">q"),
    0xD4: (1, "b1s", TYPE_EXT),
    0xD5: (2, "b2s", TYPE_EXT),
    0xD6: (4, "b4s", TYPE_EXT),
    0xD7: (8, "b8s", TYPE_EXT),
    0xD8: (16, "b16s", TYPE_EXT),
    0xD9: (1, _NO_FORMAT_USED, TYPE_RAW),
    0xDA: (2, ">H", TYPE_RAW),
    0xDB: (4, ">I", TYPE_RAW),
    0xDC: (2, ">H", TYPE_ARRAY),
    0xDD: (4, ">I", TYPE_ARRAY),
    0xDE: (2, ">H", TYPE_MAP),
    0xDF: (4, ">I", TYPE_MAP),
}


class Unpacker:
    """Streaming unpacker.

    Arguments:

    :param file_like:
        File-like object having `.read(n)` method.
        If specified, unpacker reads serialized data from it and `.feed()` is not usable.

    :param int read_size:
        Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)

    :param bool use_list:
        If true, unpack msgpack array to Python list.
        Otherwise, unpack to Python tuple. (default: True)

    :param bool raw:
        If true, unpack msgpack raw to Python bytes.
        Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).

    :param int timestamp:
        Control how timestamp type is unpacked:

            0 - Timestamp
            1 - float (Seconds from the EPOCH)
            2 - int (Nanoseconds from the EPOCH)
            3 - datetime.datetime (UTC).

    :param bool strict_map_key:
        If true (default), only str or bytes are accepted for map (dict) keys.

    :param object_hook:
        When specified, it should be callable.
        Unpacker calls it with a dict argument after unpacking msgpack map.
        (See also simplejson)

    :param object_pairs_hook:
        When specified, it should be callable.
        Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
        (See also simplejson)

    :param str unicode_errors:
        The error handler for decoding unicode. (default: 'strict')
        This option should be used only when you have msgpack data which
        contains invalid UTF-8 string.

    :param int max_buffer_size:
        Limits size of data waiting unpacked. 0 means 2**32-1.
        The default value is 100*1024*1024 (100MiB).
        Raises `BufferFull` exception when it is insufficient.
        You should set this parameter when unpacking data from untrusted source.

    :param int max_str_len:
        Deprecated, use *max_buffer_size* instead.
        Limits max length of str. (default: max_buffer_size)

    :param int max_bin_len:
        Deprecated, use *max_buffer_size* instead.
        Limits max length of bin. (default: max_buffer_size)

    :param int max_array_len:
        Limits max length of array.
        (default: max_buffer_size)

    :param int max_map_len:
        Limits max length of map.
        (default: max_buffer_size//2)

    :param int max_ext_len:
        Deprecated, use *max_buffer_size* instead.
        Limits max size of ext type. (default: max_buffer_size)

    Example of streaming deserialize from file-like object::

        unpacker = Unpacker(file_like)
        for o in unpacker:
            process(o)

    Example of streaming deserialize from socket::

        unpacker = Unpacker()
        while True:
            buf = sock.recv(1024**2)
            if not buf:
                break
            unpacker.feed(buf)
            for o in unpacker:
                process(o)

    Raises ``ExtraData`` when *packed* contains extra bytes.
    Raises ``OutOfData`` when *packed* is incomplete.
    Raises ``FormatError`` when *packed* is not valid msgpack.
    Raises ``StackError`` when *packed* contains too nested.
    Other exceptions can be raised during unpacking.
    """

    def __init__(
        self,
        file_like=None,
        *,
        read_size=0,
        use_list=True,
        raw=False,
        timestamp=0,
        strict_map_key=True,
        object_hook=None,
        object_pairs_hook=None,
        list_hook=None,
        unicode_errors=None,
        max_buffer_size=100 * 1024 * 1024,
        ext_hook=ExtType,
        max_str_len=-1,
        max_bin_len=-1,
        max_array_len=-1,
        max_map_len=-1,
        max_ext_len=-1,
    ):
        if unicode_errors is None:
            unicode_errors = "strict"

        if file_like is None:
            self._feeding = True
        else:
            if not callable(file_like.read):
                raise TypeError("`file_like.read` must be callable")
            self.file_like = file_like
            self._feeding = False

        #: array of bytes fed.
        self._buffer = bytearray()
        #: Which position we currently reads
        self._buff_i = 0

        # When Unpacker is used as an iterable, between the calls to next(),
        # the buffer is not "consumed" completely, for efficiency sake.
        # Instead, it is done sloppily. To make sure we raise BufferFull at
        # the correct moments, we have to keep track of how sloppy we were.
        # Furthermore, when the buffer is incomplete (that is: in the case
        # we raise an OutOfData) we need to rollback the buffer to the correct
        # state, which _buf_checkpoint records.
        self._buf_checkpoint = 0

        if not max_buffer_size:
            max_buffer_size = 2**31 - 1
        if max_str_len == -1:
            max_str_len = max_buffer_size
        if max_bin_len == -1:
            max_bin_len = max_buffer_size
        if max_array_len == -1:
            max_array_len = max_buffer_size
        if max_map_len == -1:
            max_map_len = max_buffer_size // 2
        if max_ext_len == -1:
            max_ext_len = max_buffer_size

        self._max_buffer_size = max_buffer_size
        if read_size > self._max_buffer_size:
            raise ValueError("read_size must be smaller than max_buffer_size")
        self._read_size = read_size or min(self._max_buffer_size, 16 * 1024)
        self._raw = bool(raw)
        self._strict_map_key = bool(strict_map_key)
        self._unicode_errors = unicode_errors
        self._use_list = use_list
        if not (0 <= timestamp <= 3):
            raise ValueError("timestamp must be 0..3")
        self._timestamp = timestamp
        self._list_hook = list_hook
        self._object_hook = object_hook
        self._object_pairs_hook = object_pairs_hook
        self._ext_hook = ext_hook
        self._max_str_len = max_str_len
        self._max_bin_len = max_bin_len
        self._max_array_len = max_array_len
        self._max_map_len = max_map_len
        self._max_ext_len = max_ext_len
        self._stream_offset = 0

        if list_hook is not None and not callable(list_hook):
            raise TypeError("`list_hook` is not callable")
        if object_hook is not None and not callable(object_hook):
            raise TypeError("`object_hook` is not callable")
        if object_pairs_hook is not None and not callable(object_pairs_hook):
            raise TypeError("`object_pairs_hook` is not callable")
        if object_hook is not None and object_pairs_hook is not None:
            raise TypeError("object_pairs_hook and object_hook are mutually exclusive")
        if not callable(ext_hook):
            raise TypeError("`ext_hook` is not callable")

    def feed(self, next_bytes):
        assert self._feeding
        view = _get_data_from_buffer(next_bytes)
        if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size:
            raise BufferFull

        # Strip buffer before checkpoint before reading file.
        if self._buf_checkpoint > 0:
            del self._buffer[: self._buf_checkpoint]
            self._buff_i -= self._buf_checkpoint
            self._buf_checkpoint = 0

        # Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
        self._buffer.extend(view)
        view.release()

    def _consume(self):
        """Gets rid of the used parts of the buffer."""
        self._stream_offset += self._buff_i - self._buf_checkpoint
        self._buf_checkpoint = self._buff_i

    def _got_extradata(self):
        return self._buff_i < len(self._buffer)

    def _get_extradata(self):
        return self._buffer[self._buff_i :]

    def read_bytes(self, n):
        ret = self._read(n, raise_outofdata=False)
        self._consume()
        return ret

    def _read(self, n, raise_outofdata=True):
        # (int) -> bytearray
        self._reserve(n, raise_outofdata=raise_outofdata)
        i = self._buff_i
        ret = self._buffer[i : i + n]
        self._buff_i = i + len(ret)
        return ret

    def _reserve(self, n, raise_outofdata=True):
        remain_bytes = len(self._buffer) - self._buff_i - n

        # Fast path: buffer has n bytes already
        if remain_bytes >= 0:
            return

        if self._feeding:
            self._buff_i = self._buf_checkpoint
            raise OutOfData

        # Strip buffer before checkpoint before reading file.
        if self._buf_checkpoint > 0:
            del self._buffer[: self._buf_checkpoint]
            self._buff_i -= self._buf_checkpoint
            self._buf_checkpoint = 0

        # Read from file
        remain_bytes = -remain_bytes
        if remain_bytes + len(self._buffer) > self._max_buffer_size:
            raise BufferFull
        while remain_bytes > 0:
            to_read_bytes = max(self._read_size, remain_bytes)
            read_data = self.file_like.read(to_read_bytes)
            if not read_data:
                break
            assert isinstance(read_data, bytes)
            self._buffer += read_data
            remain_bytes -= len(read_data)

        if len(self._buffer) < n + self._buff_i and raise_outofdata:
            self._buff_i = 0  # rollback
            raise OutOfData

    def _read_header(self):
        typ = TYPE_IMMEDIATE
        n = 0
        obj = None
        self._reserve(1)
        b = self._buffer[self._buff_i]
        self._buff_i += 1
        if b & 0b10000000 == 0:
            obj = b
        elif b & 0b11100000 == 0b11100000:
            obj = -1 - (b ^ 0xFF)
        elif b & 0b11100000 == 0b10100000:
            n = b & 0b00011111
            typ = TYPE_RAW
            if n > self._max_str_len:
                raise ValueError(f"{n} exceeds max_str_len({self._max_str_len})")
            obj = self._read(n)
        elif b & 0b11110000 == 0b10010000:
            n = b & 0b00001111
            typ = TYPE_ARRAY
            if n > self._max_array_len:
                raise ValueError(f"{n} exceeds max_array_len({self._max_array_len})")
        elif b & 0b11110000 == 0b10000000:
            n = b & 0b00001111
            typ = TYPE_MAP
            if n > self._max_map_len:
                raise ValueError(f"{n} exceeds max_map_len({self._max_map_len})")
        elif b == 0xC0:
            obj = None
        elif b == 0xC2:
            obj = False
        elif b == 0xC3:
            obj = True
        elif 0xC4 <= b <= 0xC6:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            if len(fmt) > 0:
                n = struct.unpack_from(fmt, self._buffer, self._buff_i)[0]
            else:
                n = self._buffer[self._buff_i]
            self._buff_i += size
            if n > self._max_bin_len:
                raise ValueError(f"{n} exceeds max_bin_len({self._max_bin_len})")
            obj = self._read(n)
        elif 0xC7 <= b <= 0xC9:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            L, n = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size
            if L > self._max_ext_len:
                raise ValueError(f"{L} exceeds max_ext_len({self._max_ext_len})")
            obj = self._read(L)
        elif 0xCA <= b <= 0xD3:
            size, fmt = _MSGPACK_HEADERS[b]
            self._reserve(size)
            if len(fmt) > 0:
                obj = struct.unpack_from(fmt, self._buffer, self._buff_i)[0]
            else:
                obj = self._buffer[self._buff_i]
            self._buff_i += size
        elif 0xD4 <= b <= 0xD8:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            if self._max_ext_len < size:
                raise ValueError(f"{size} exceeds max_ext_len({self._max_ext_len})")
            self._reserve(size + 1)
            n, obj = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size + 1
        elif 0xD9 <= b <= 0xDB:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            if len(fmt) > 0:
                (n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
            else:
                n = self._buffer[self._buff_i]
            self._buff_i += size
            if n > self._max_str_len:
                raise ValueError(f"{n} exceeds max_str_len({self._max_str_len})")
            obj = self._read(n)
        elif 0xDC <= b <= 0xDD:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            (n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size
            if n > self._max_array_len:
                raise ValueError(f"{n} exceeds max_array_len({self._max_array_len})")
        elif 0xDE <= b <= 0xDF:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            (n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size
            if n > self._max_map_len:
                raise ValueError(f"{n} exceeds max_map_len({self._max_map_len})")
        else:
            raise FormatError("Unknown header: 0x%x" % b)
        return typ, n, obj

    def _unpack(self, execute=EX_CONSTRUCT):
        typ, n, obj = self._read_header()

        if execute == EX_READ_ARRAY_HEADER:
            if typ != TYPE_ARRAY:
                raise ValueError("Expected array")
            return n
        if execute == EX_READ_MAP_HEADER:
            if typ != TYPE_MAP:
                raise ValueError("Expected map")
            return n
        # TODO should we eliminate the recursion?
        if typ == TYPE_ARRAY:
            if execute == EX_SKIP:
                for i in range(n):
                    # TODO check whether we need to call `list_hook`
                    self._unpack(EX_SKIP)
                return
            ret = newlist_hint(n)
            for i in range(n):
                ret.append(self._unpack(EX_CONSTRUCT))
            if self._list_hook is not None:
                ret = self._list_hook(ret)
            # TODO is the interaction between `list_hook` and `use_list` ok?
            return ret if self._use_list else tuple(ret)
        if typ == TYPE_MAP:
            if execute == EX_SKIP:
                for i in range(n):
                    # TODO check whether we need to call hooks
                    self._unpack(EX_SKIP)
                    self._unpack(EX_SKIP)
                return
            if self._object_pairs_hook is not None:
                ret = self._object_pairs_hook(
                    (self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT)) for _ in range(n)
                )
            else:
                ret = {}
                for _ in range(n):
                    key = self._unpack(EX_CONSTRUCT)
                    if self._strict_map_key and type(key) not in (str, bytes):
                        raise ValueError("%s is not allowed for map key" % str(type(key)))
                    if isinstance(key, str):
                        key = sys.intern(key)
                    ret[key] = self._unpack(EX_CONSTRUCT)
                if self._object_hook is not None:
                    ret = self._object_hook(ret)
            return ret
        if execute == EX_SKIP:
            return
        if typ == TYPE_RAW:
            if self._raw:
                obj = bytes(obj)
            else:
                obj = obj.decode("utf_8", self._unicode_errors)
            return obj
        if typ == TYPE_BIN:
            return bytes(obj)
        if typ == TYPE_EXT:
            if n == -1:  # timestamp
                ts = Timestamp.from_bytes(bytes(obj))
                if self._timestamp == 1:
                    return ts.to_unix()
                elif self._timestamp == 2:
                    return ts.to_unix_nano()
                elif self._timestamp == 3:
                    return ts.to_datetime()
                else:
                    return ts
            else:
                return self._ext_hook(n, bytes(obj))
        assert typ == TYPE_IMMEDIATE
        return obj

    def __iter__(self):
        return self

    def __next__(self):
        try:
            ret = self._unpack(EX_CONSTRUCT)
            self._consume()
            return ret
        except OutOfData:
            self._consume()
            raise StopIteration
        except RecursionError:
            raise StackError

    next = __next__

    def skip(self):
        self._unpack(EX_SKIP)
        self._consume()

    def unpack(self):
        try:
            ret = self._unpack(EX_CONSTRUCT)
        except RecursionError:
            raise StackError
        self._consume()
        return ret

    def read_array_header(self):
        ret = self._unpack(EX_READ_ARRAY_HEADER)
        self._consume()
        return ret

    def read_map_header(self):
        ret = self._unpack(EX_READ_MAP_HEADER)
        self._consume()
        return ret

    def tell(self):
        return self._stream_offset


class Packer:
    """
    MessagePack Packer

    Usage::

        packer = Packer()
        astream.write(packer.pack(a))
        astream.write(packer.pack(b))

    Packer's constructor has some keyword arguments:

    :param default:
        When specified, it should be callable.
        Convert user type to builtin type that Packer supports.
        See also simplejson's document.

    :param bool use_single_float:
        Use single precision float type for float. (default: False)

    :param bool autoreset:
        Reset buffer after each pack and return its content as `bytes`. (default: True).
        If set this to false, use `bytes()` to get content and `.reset()` to clear buffer.

    :param bool use_bin_type:
        Use bin type introduced in msgpack spec 2.0 for bytes.
        It also enables str8 type for unicode. (default: True)

    :param bool strict_types:
        If set to true, types will be checked to be exact. Derived classes
        from serializable types will not be serialized and will be
        treated as unsupported type and forwarded to default.
        Additionally tuples will not be serialized as lists.
        This is useful when trying to implement accurate serialization
        for python types.

    :param bool datetime:
        If set to true, datetime with tzinfo is packed into Timestamp type.
        Note that the tzinfo is stripped in the timestamp.
        You can get UTC datetime with `timestamp=3` option of the Unpacker.

    :param str unicode_errors:
        The error handler for encoding unicode. (default: 'strict')
        DO NOT USE THIS!! This option is kept for very specific usage.

    :param int buf_size:
        Internal buffer size. This option is used only for C implementation.
    """

    def __init__(
        self,
        *,
        default=None,
        use_single_float=False,
        autoreset=True,
        use_bin_type=True,
        strict_types=False,
        datetime=False,
        unicode_errors=None,
        buf_size=None,
    ):
        self._strict_types = strict_types
        self._use_float = use_single_float
        self._autoreset = autoreset
        self._use_bin_type = use_bin_type
        self._buffer = BytesIO()
        self._datetime = bool(datetime)
        self._unicode_errors = unicode_errors or "strict"
        if default is not None and not callable(default):
            raise TypeError("default must be callable")
        self._default = default

    def _pack(
        self,
        obj,
        nest_limit=DEFAULT_RECURSE_LIMIT,
        check=isinstance,
        check_type_strict=_check_type_strict,
    ):
        default_used = False
        if self._strict_types:
            check = check_type_strict
            list_types = list
        else:
            list_types = (list, tuple)
        while True:
            if nest_limit < 0:
                raise ValueError("recursion limit exceeded")
            if obj is None:
                return self._buffer.write(b"\xc0")
            if check(obj, bool):
                if obj:
                    return self._buffer.write(b"\xc3")
                return self._buffer.write(b"\xc2")
            if check(obj, int):
                if 0 <= obj < 0x80:
                    return self._buffer.write(struct.pack("B", obj))
                if -0x20 <= obj < 0:
                    return self._buffer.write(struct.pack("b", obj))
                if 0x80 <= obj <= 0xFF:
                    return self._buffer.write(struct.pack("BB", 0xCC, obj))
                if -0x80 <= obj < 0:
                    return self._buffer.write(struct.pack(">Bb", 0xD0, obj))
                if 0xFF < obj <= 0xFFFF:
                    return self._buffer.write(struct.pack(">BH", 0xCD, obj))
                if -0x8000 <= obj < -0x80:
                    return self._buffer.write(struct.pack(">Bh", 0xD1, obj))
                if 0xFFFF < obj <= 0xFFFFFFFF:
                    return self._buffer.write(struct.pack(">BI", 0xCE, obj))
                if -0x80000000 <= obj < -0x8000:
                    return self._buffer.write(struct.pack(">Bi", 0xD2, obj))
                if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF:
                    return self._buffer.write(struct.pack(">BQ", 0xCF, obj))
                if -0x8000000000000000 <= obj < -0x80000000:
                    return self._buffer.write(struct.pack(">Bq", 0xD3, obj))
                if not default_used and self._default is not None:
                    obj = self._default(obj)
                    default_used = True
                    continue
                raise OverflowError("Integer value out of range")
            if check(obj, (bytes, bytearray)):
                n = len(obj)
                if n >= 2**32:
                    raise ValueError("%s is too large" % type(obj).__name__)
                self._pack_bin_header(n)
                return self._buffer.write(obj)
            if check(obj, str):
                obj = obj.encode("utf-8", self._unicode_errors)
                n = len(obj)
                if n >= 2**32:
                    raise ValueError("String is too large")
                self._pack_raw_header(n)
                return self._buffer.write(obj)
            if check(obj, memoryview):
                n = obj.nbytes
                if n >= 2**32:
                    raise ValueError("Memoryview is too large")
                self._pack_bin_header(n)
                return self._buffer.write(obj)
            if check(obj, float):
                if self._use_float:
                    return self._buffer.write(struct.pack(">Bf", 0xCA, obj))
                return self._buffer.write(struct.pack(">Bd", 0xCB, obj))
            if check(obj, (ExtType, Timestamp)):
                if check(obj, Timestamp):
                    code = -1
                    data = obj.to_bytes()
                else:
                    code = obj.code
                    data = obj.data
                assert isinstance(code, int)
                assert isinstance(data, bytes)
                L = len(data)
                if L == 1:
                    self._buffer.write(b"\xd4")
                elif L == 2:
                    self._buffer.write(b"\xd5")
                elif L == 4:
                    self._buffer.write(b"\xd6")
                elif L == 8:
                    self._buffer.write(b"\xd7")
                elif L == 16:
                    self._buffer.write(b"\xd8")
                elif L <= 0xFF:
                    self._buffer.write(struct.pack(">BB", 0xC7, L))
                elif L <= 0xFFFF:
                    self._buffer.write(struct.pack(">BH", 0xC8, L))
                else:
                    self._buffer.write(struct.pack(">BI", 0xC9, L))
                self._buffer.write(struct.pack("b", code))
                self._buffer.write(data)
                return
            if check(obj, list_types):
                n = len(obj)
                self._pack_array_header(n)
                for i in range(n):
                    self._pack(obj[i], nest_limit - 1)
                return
            if check(obj, dict):
                return self._pack_map_pairs(len(obj), obj.items(), nest_limit - 1)

            if self._datetime and check(obj, _DateTime) and obj.tzinfo is not None:
                obj = Timestamp.from_datetime(obj)
                default_used = 1
                continue

            if not default_used and self._default is not None:
                obj = self._default(obj)
                default_used = 1
                continue

            if self._datetime and check(obj, _DateTime):
                raise ValueError(f"Cannot serialize {obj!r} where tzinfo=None")

            raise TypeError(f"Cannot serialize {obj!r}")

    def pack(self, obj):
        try:
            self._pack(obj)
        except:
            self._buffer = BytesIO()  # force reset
            raise
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_map_pairs(self, pairs):
        self._pack_map_pairs(len(pairs), pairs)
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_array_header(self, n):
        if n >= 2**32:
            raise ValueError
        self._pack_array_header(n)
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_map_header(self, n):
        if n >= 2**32:
            raise ValueError
        self._pack_map_header(n)
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_ext_type(self, typecode, data):
        if not isinstance(typecode, int):
            raise TypeError("typecode must have int type.")
        if not 0 <= typecode <= 127:
            raise ValueError("typecode should be 0-127")
        if not isinstance(data, bytes):
            raise TypeError("data must have bytes type")
        L = len(data)
        if L > 0xFFFFFFFF:
            raise ValueError("Too large data")
        if L == 1:
            self._buffer.write(b"\xd4")
        elif L == 2:
            self._buffer.write(b"\xd5")
        elif L == 4:
            self._buffer.write(b"\xd6")
        elif L == 8:
            self._buffer.write(b"\xd7")
        elif L == 16:
            self._buffer.write(b"\xd8")
        elif L <= 0xFF:
            self._buffer.write(b"\xc7" + struct.pack("B", L))
        elif L <= 0xFFFF:
            self._buffer.write(b"\xc8" + struct.pack(">H", L))
        else:
            self._buffer.write(b"\xc9" + struct.pack(">I", L))
        self._buffer.write(struct.pack("B", typecode))
        self._buffer.write(data)

    def _pack_array_header(self, n):
        if n <= 0x0F:
            return self._buffer.write(struct.pack("B", 0x90 + n))
        if n <= 0xFFFF:
            return self._buffer.write(struct.pack(">BH", 0xDC, n))
        if n <= 0xFFFFFFFF:
            return self._buffer.write(struct.pack(">BI", 0xDD, n))
        raise ValueError("Array is too large")

    def _pack_map_header(self, n):
        if n <= 0x0F:
            return self._buffer.write(struct.pack("B", 0x80 + n))
        if n <= 0xFFFF:
            return self._buffer.write(struct.pack(">BH", 0xDE, n))
        if n <= 0xFFFFFFFF:
            return self._buffer.write(struct.pack(">BI", 0xDF, n))
        raise ValueError("Dict is too large")

    def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT):
        self._pack_map_header(n)
        for k, v in pairs:
            self._pack(k, nest_limit - 1)
            self._pack(v, nest_limit - 1)

    def _pack_raw_header(self, n):
        if n <= 0x1F:
            self._buffer.write(struct.pack("B", 0xA0 + n))
        elif self._use_bin_type and n <= 0xFF:
            self._buffer.write(struct.pack(">BB", 0xD9, n))
        elif n <= 0xFFFF:
            self._buffer.write(struct.pack(">BH", 0xDA, n))
        elif n <= 0xFFFFFFFF:
            self._buffer.write(struct.pack(">BI", 0xDB, n))
        else:
            raise ValueError("Raw is too large")

    def _pack_bin_header(self, n):
        if not self._use_bin_type:
            return self._pack_raw_header(n)
        elif n <= 0xFF:
            return self._buffer.write(struct.pack(">BB", 0xC4, n))
        elif n <= 0xFFFF:
            return self._buffer.write(struct.pack(">BH", 0xC5, n))
        elif n <= 0xFFFFFFFF:
            return self._buffer.write(struct.pack(">BI", 0xC6, n))
        else:
            raise ValueError("Bin is too large")

    def bytes(self):
        """Return internal buffer contents as bytes object"""
        return self._buffer.getvalue()

    def reset(self):
        """Reset internal buffer.

        This method is useful only when autoreset=False.
        """
        self._buffer = BytesIO()

    def getbuffer(self):
        """Return view of internal buffer."""
        if _USING_STRINGBUILDER:
            return memoryview(self.bytes())
        else:
            return self._buffer.getbuffer()
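Streaming sketch matching the Unpacker docstring above, with io.BytesIO standing in for a socket; the explicit fallback import is an assumption (normally the package selects the implementation at import time):

import io

from pip._vendor.msgpack.fallback import Packer, Unpacker

packer = Packer()
stream = io.BytesIO(packer.pack([1, 2]) + packer.pack({"k": "v"}))
unpacker = Unpacker()
while True:
    buf = stream.read(4)  # deliberately small reads to exercise incremental feeding
    if not buf:
        break
    unpacker.feed(buf)
    for obj in unpacker:
        print(obj)  # [1, 2] then {'k': 'v'}; partial objects stay buffered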
@@ -0,0 +1,15 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

__title__ = "packaging"
__summary__ = "Core utilities for Python packages"
__uri__ = "https://github.com/pypa/packaging"

__version__ = "25.0"

__author__ = "Donald Stufft and individual contributors"
__email__ = "donald@stufft.io"

__license__ = "BSD-2-Clause or Apache-2.0"
__copyright__ = f"2014 {__author__}"
@@ -0,0 +1,109 @@
"""
ELF file parser.

This provides a class ``ELFFile`` that parses an ELF executable in a similar
interface to ``ZipFile``. Only the read interface is implemented.

Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca
ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html
"""

from __future__ import annotations

import enum
import os
import struct
from typing import IO


class ELFInvalid(ValueError):
    pass


class EIClass(enum.IntEnum):
    C32 = 1
    C64 = 2


class EIData(enum.IntEnum):
    Lsb = 1
    Msb = 2


class EMachine(enum.IntEnum):
    I386 = 3
    S390 = 22
    Arm = 40
    X8664 = 62
    AArc64 = 183


class ELFFile:
    """
    Representation of an ELF executable.
    """

    def __init__(self, f: IO[bytes]) -> None:
        self._f = f

        try:
            ident = self._read("16B")
        except struct.error as e:
            raise ELFInvalid("unable to parse identification") from e
        magic = bytes(ident[:4])
        if magic != b"\x7fELF":
            raise ELFInvalid(f"invalid magic: {magic!r}")

        self.capacity = ident[4]  # Format for program header (bitness).
        self.encoding = ident[5]  # Data structure encoding (endianness).

        try:
            # e_fmt: Format for program header.
            # p_fmt: Format for section header.
            # p_idx: Indexes to find p_type, p_offset, and p_filesz.
            e_fmt, self._p_fmt, self._p_idx = {
                (1, 1): ("<HHIIIIIHHH", "<IIIIIIII", (0, 1, 4)),  # 32-bit LSB.
                (1, 2): (">HHIIIIIHHH", ">IIIIIIII", (0, 1, 4)),  # 32-bit MSB.
                (2, 1): ("<HHIQQQIHHH", "<IIQQQQQQ", (0, 2, 5)),  # 64-bit LSB.
                (2, 2): (">HHIQQQIHHH", ">IIQQQQQQ", (0, 2, 5)),  # 64-bit MSB.
            }[(self.capacity, self.encoding)]
        except KeyError as e:
            raise ELFInvalid(
                f"unrecognized capacity ({self.capacity}) or encoding ({self.encoding})"
            ) from e

        try:
            (
                _,
                self.machine,  # Architecture type.
                _,
                _,
                self._e_phoff,  # Offset of program header.
                _,
                self.flags,  # Processor-specific flags.
                _,
                self._e_phentsize,  # Size of section.
                self._e_phnum,  # Number of sections.
            ) = self._read(e_fmt)
        except struct.error as e:
            raise ELFInvalid("unable to parse machine and section information") from e

    def _read(self, fmt: str) -> tuple[int, ...]:
        return struct.unpack(fmt, self._f.read(struct.calcsize(fmt)))

    @property
    def interpreter(self) -> str | None:
        """
        The path recorded in the ``PT_INTERP`` section header.
        """
        for index in range(self._e_phnum):
            self._f.seek(self._e_phoff + self._e_phentsize * index)
            try:
                data = self._read(self._p_fmt)
            except struct.error:
                continue
            if data[self._p_idx[0]] != 3:  # Not PT_INTERP.
                continue
            self._f.seek(data[self._p_idx[1]])
            return os.fsdecode(self._f.read(data[self._p_idx[2]])).strip("\0")
        return None
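A hedged usage sketch, not part of the vendored file: ELFFile only needs a readable binary stream, and the interpreter property walks the program headers parsed above. The printed path depends on the host.

import sys

# On a dynamically linked Linux Python this prints the loader path,
# e.g. "/lib64/ld-linux-x86-64.so.2" on glibc or a musl loader path.
try:
    with open(sys.executable, "rb") as f:
        print(ELFFile(f).interpreter)
except ELFInvalid:
    print("not a valid ELF file")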
@@ -0,0 +1,262 @@
from __future__ import annotations

import collections
import contextlib
import functools
import os
import re
import sys
import warnings
from typing import Generator, Iterator, NamedTuple, Sequence

from ._elffile import EIClass, EIData, ELFFile, EMachine

EF_ARM_ABIMASK = 0xFF000000
EF_ARM_ABI_VER5 = 0x05000000
EF_ARM_ABI_FLOAT_HARD = 0x00000400


# `os.PathLike` not a generic type until Python 3.9, so sticking with `str`
# as the type for `path` until then.
@contextlib.contextmanager
def _parse_elf(path: str) -> Generator[ELFFile | None, None, None]:
    try:
        with open(path, "rb") as f:
            yield ELFFile(f)
    except (OSError, TypeError, ValueError):
        yield None


def _is_linux_armhf(executable: str) -> bool:
    # hard-float ABI can be detected from the ELF header of the running
    # process
    # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf
    with _parse_elf(executable) as f:
        return (
            f is not None
            and f.capacity == EIClass.C32
            and f.encoding == EIData.Lsb
            and f.machine == EMachine.Arm
            and f.flags & EF_ARM_ABIMASK == EF_ARM_ABI_VER5
            and f.flags & EF_ARM_ABI_FLOAT_HARD == EF_ARM_ABI_FLOAT_HARD
        )


def _is_linux_i686(executable: str) -> bool:
    with _parse_elf(executable) as f:
        return (
            f is not None
            and f.capacity == EIClass.C32
            and f.encoding == EIData.Lsb
            and f.machine == EMachine.I386
        )


def _have_compatible_abi(executable: str, archs: Sequence[str]) -> bool:
    if "armv7l" in archs:
        return _is_linux_armhf(executable)
    if "i686" in archs:
        return _is_linux_i686(executable)
    allowed_archs = {
        "x86_64",
        "aarch64",
        "ppc64",
        "ppc64le",
        "s390x",
        "loongarch64",
        "riscv64",
    }
    return any(arch in allowed_archs for arch in archs)


# If glibc ever changes its major version, we need to know what the last
# minor version was, so we can build the complete list of all versions.
# For now, guess what the highest minor version might be, assume it will
# be 50 for testing. Once this actually happens, update the dictionary
# with the actual value.
_LAST_GLIBC_MINOR: dict[int, int] = collections.defaultdict(lambda: 50)


class _GLibCVersion(NamedTuple):
    major: int
    minor: int


def _glibc_version_string_confstr() -> str | None:
    """
    Primary implementation of glibc_version_string using os.confstr.
    """
    # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely
    # to be broken or missing. This strategy is used in the standard library
    # platform module.
    # https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183
    try:
        # Should be a string like "glibc 2.17".
        version_string: str | None = os.confstr("CS_GNU_LIBC_VERSION")
        assert version_string is not None
        _, version = version_string.rsplit()
    except (AssertionError, AttributeError, OSError, ValueError):
        # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)...
        return None
    return version


def _glibc_version_string_ctypes() -> str | None:
    """
    Fallback implementation of glibc_version_string using ctypes.
    """
    try:
        import ctypes
    except ImportError:
        return None

    # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen
    # manpage says, "If filename is NULL, then the returned handle is for the
    # main program". This way we can let the linker do the work to figure out
    # which libc our process is actually using.
    #
    # We must also handle the special case where the executable is not a
    # dynamically linked executable. This can occur when using musl libc,
    # for example. In this situation, dlopen() will error, leading to an
    # OSError. Interestingly, at least in the case of musl, there is no
    # errno set on the OSError. The single string argument used to construct
    # OSError comes from libc itself and is therefore not portable to
    # hard code here. In any case, failure to call dlopen() means we
    # can proceed, so we bail on our attempt.
    try:
        process_namespace = ctypes.CDLL(None)
    except OSError:
        return None

    try:
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except AttributeError:
        # Symbol doesn't exist -> therefore, we are not linked to
        # glibc.
        return None

    # Call gnu_get_libc_version, which returns a string like "2.5"
    gnu_get_libc_version.restype = ctypes.c_char_p
    version_str: str = gnu_get_libc_version()
    # py2 / py3 compatibility:
    if not isinstance(version_str, str):
        version_str = version_str.decode("ascii")

    return version_str


def _glibc_version_string() -> str | None:
    """Returns glibc version string, or None if not using glibc."""
    return _glibc_version_string_confstr() or _glibc_version_string_ctypes()


def _parse_glibc_version(version_str: str) -> tuple[int, int]:
    """Parse glibc version.

    We use a regexp instead of str.split because we want to discard any
    random junk that might come after the minor version -- this might happen
    in patched/forked versions of glibc (e.g. Linaro's version of glibc
    uses version strings like "2.20-2014.11"). See gh-3588.
    """
    m = re.match(r"(?P<major>[0-9]+)\.(?P<minor>[0-9]+)", version_str)
    if not m:
        warnings.warn(
            f"Expected glibc version with 2 components major.minor, got: {version_str}",
            RuntimeWarning,
            stacklevel=2,
        )
        return -1, -1
    return int(m.group("major")), int(m.group("minor"))


@functools.lru_cache
def _get_glibc_version() -> tuple[int, int]:
    version_str = _glibc_version_string()
    if version_str is None:
        return (-1, -1)
    return _parse_glibc_version(version_str)


# From PEP 513, PEP 600
def _is_compatible(arch: str, version: _GLibCVersion) -> bool:
    sys_glibc = _get_glibc_version()
    if sys_glibc < version:
        return False
    # Check for presence of _manylinux module.
    try:
        import _manylinux
    except ImportError:
        return True
    if hasattr(_manylinux, "manylinux_compatible"):
        result = _manylinux.manylinux_compatible(version[0], version[1], arch)
        if result is not None:
            return bool(result)
        return True
    if version == _GLibCVersion(2, 5):
        if hasattr(_manylinux, "manylinux1_compatible"):
            return bool(_manylinux.manylinux1_compatible)
    if version == _GLibCVersion(2, 12):
        if hasattr(_manylinux, "manylinux2010_compatible"):
            return bool(_manylinux.manylinux2010_compatible)
    if version == _GLibCVersion(2, 17):
        if hasattr(_manylinux, "manylinux2014_compatible"):
            return bool(_manylinux.manylinux2014_compatible)
    return True


_LEGACY_MANYLINUX_MAP = {
    # CentOS 7 w/ glibc 2.17 (PEP 599)
    (2, 17): "manylinux2014",
    # CentOS 6 w/ glibc 2.12 (PEP 571)
    (2, 12): "manylinux2010",
    # CentOS 5 w/ glibc 2.5 (PEP 513)
    (2, 5): "manylinux1",
}


def platform_tags(archs: Sequence[str]) -> Iterator[str]:
    """Generate manylinux tags compatible to the current platform.

    :param archs: Sequence of compatible architectures.
        The first one shall be the closest to the actual architecture and be the part of
        platform tag after the ``linux_`` prefix, e.g. ``x86_64``.
        The ``linux_`` prefix is assumed as a prerequisite for the current platform to
        be manylinux-compatible.

    :returns: An iterator of compatible manylinux tags.
    """
    if not _have_compatible_abi(sys.executable, archs):
        return
    # Oldest glibc to be supported regardless of architecture is (2, 17).
    too_old_glibc2 = _GLibCVersion(2, 16)
    if set(archs) & {"x86_64", "i686"}:
        # On x86/i686 also oldest glibc to be supported is (2, 5).
        too_old_glibc2 = _GLibCVersion(2, 4)
    current_glibc = _GLibCVersion(*_get_glibc_version())
    glibc_max_list = [current_glibc]
    # We can assume compatibility across glibc major versions.
    # https://sourceware.org/bugzilla/show_bug.cgi?id=24636
    #
    # Build a list of maximum glibc versions so that we can
    # output the canonical list of all glibc from current_glibc
    # down to too_old_glibc2, including all intermediary versions.
    for glibc_major in range(current_glibc.major - 1, 1, -1):
        glibc_minor = _LAST_GLIBC_MINOR[glibc_major]
        glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor))
    for arch in archs:
        for glibc_max in glibc_max_list:
            if glibc_max.major == too_old_glibc2.major:
                min_minor = too_old_glibc2.minor
            else:
                # For other glibc major versions oldest supported is (x, 0).
                min_minor = -1
            for glibc_minor in range(glibc_max.minor, min_minor, -1):
                glibc_version = _GLibCVersion(glibc_max.major, glibc_minor)
                tag = "manylinux_{}_{}".format(*glibc_version)
                if _is_compatible(arch, glibc_version):
                    yield f"{tag}_{arch}"
                # Handle the legacy manylinux1, manylinux2010, manylinux2014 tags.
                if glibc_version in _LEGACY_MANYLINUX_MAP:
                    legacy_tag = _LEGACY_MANYLINUX_MAP[glibc_version]
                    if _is_compatible(arch, glibc_version):
                        yield f"{legacy_tag}_{arch}"
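A hedged usage sketch, not part of the vendored file; the output depends on the glibc of the running interpreter and is empty on non-glibc systems.

# On a glibc 2.31 x86_64 host this would print manylinux_2_31_x86_64 down to
# manylinux_2_5_x86_64 (x86_64 support reaches back to glibc 2.5), with the
# legacy aliases manylinux2014/manylinux2010/manylinux1 emitted alongside
# glibc 2.17, 2.12, and 2.5 respectively.
for tag in platform_tags(["x86_64"]):
    print(tag)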
@@ -0,0 +1,85 @@
"""PEP 656 support.

This module implements logic to detect if the currently running Python is
linked against musl, and what musl version is used.
"""

from __future__ import annotations

import functools
import re
import subprocess
import sys
from typing import Iterator, NamedTuple, Sequence

from ._elffile import ELFFile


class _MuslVersion(NamedTuple):
    major: int
    minor: int


def _parse_musl_version(output: str) -> _MuslVersion | None:
    lines = [n for n in (n.strip() for n in output.splitlines()) if n]
    if len(lines) < 2 or lines[0][:4] != "musl":
        return None
    m = re.match(r"Version (\d+)\.(\d+)", lines[1])
    if not m:
        return None
    return _MuslVersion(major=int(m.group(1)), minor=int(m.group(2)))


@functools.lru_cache
def _get_musl_version(executable: str) -> _MuslVersion | None:
    """Detect currently-running musl runtime version.

    This is done by checking the specified executable's dynamic linking
    information, and invoking the loader to parse its output for a version
    string. If the loader is musl, the output would be something like::

        musl libc (x86_64)
        Version 1.2.2
        Dynamic Program Loader
    """
    try:
        with open(executable, "rb") as f:
            ld = ELFFile(f).interpreter
    except (OSError, TypeError, ValueError):
        return None
    if ld is None or "musl" not in ld:
        return None
    proc = subprocess.run([ld], stderr=subprocess.PIPE, text=True)
    return _parse_musl_version(proc.stderr)


def platform_tags(archs: Sequence[str]) -> Iterator[str]:
    """Generate musllinux tags compatible to the current platform.

    :param archs: Sequence of compatible architectures.
        The first one shall be the closest to the actual architecture and be the part of
        platform tag after the ``linux_`` prefix, e.g. ``x86_64``.
        The ``linux_`` prefix is assumed as a prerequisite for the current platform to
        be musllinux-compatible.

    :returns: An iterator of compatible musllinux tags.
    """
    sys_musl = _get_musl_version(sys.executable)
    if sys_musl is None:  # Python not dynamically linked against musl.
        return
    for arch in archs:
        for minor in range(sys_musl.minor, -1, -1):
            yield f"musllinux_{sys_musl.major}_{minor}_{arch}"


if __name__ == "__main__":  # pragma: no cover
    import sysconfig

    plat = sysconfig.get_platform()
    assert plat.startswith("linux-"), "not linux"

    print("plat:", plat)
    print("musl:", _get_musl_version(sys.executable))
    print("tags:", end=" ")
    for t in platform_tags(re.sub(r"[.-]", "_", plat.split("-", 1)[-1])):
        print(t, end="\n      ")
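The docstring of _get_musl_version above gives the exact loader banner format, so the parser can be exercised directly. A short sketch, not part of the vendored file:

sample = "musl libc (x86_64)\nVersion 1.2.2\nDynamic Program Loader"
assert _parse_musl_version(sample) == _MuslVersion(major=1, minor=2)
assert _parse_musl_version("glibc 2.17") is None  # not a musl banner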
@@ -0,0 +1,353 @@
"""Handwritten parser of dependency specifiers.

The docstring for each __parse_* function contains EBNF-inspired grammar representing
the implementation.
"""

from __future__ import annotations

import ast
from typing import NamedTuple, Sequence, Tuple, Union

from ._tokenizer import DEFAULT_RULES, Tokenizer


class Node:
    def __init__(self, value: str) -> None:
        self.value = value

    def __str__(self) -> str:
        return self.value

    def __repr__(self) -> str:
        return f"<{self.__class__.__name__}('{self}')>"

    def serialize(self) -> str:
        raise NotImplementedError


class Variable(Node):
    def serialize(self) -> str:
        return str(self)


class Value(Node):
    def serialize(self) -> str:
        return f'"{self}"'


class Op(Node):
    def serialize(self) -> str:
        return str(self)


MarkerVar = Union[Variable, Value]
MarkerItem = Tuple[MarkerVar, Op, MarkerVar]
MarkerAtom = Union[MarkerItem, Sequence["MarkerAtom"]]
MarkerList = Sequence[Union["MarkerList", MarkerAtom, str]]


class ParsedRequirement(NamedTuple):
    name: str
    url: str
    extras: list[str]
    specifier: str
    marker: MarkerList | None


# --------------------------------------------------------------------------------------
# Recursive descent parser for dependency specifier
# --------------------------------------------------------------------------------------
def parse_requirement(source: str) -> ParsedRequirement:
    return _parse_requirement(Tokenizer(source, rules=DEFAULT_RULES))


def _parse_requirement(tokenizer: Tokenizer) -> ParsedRequirement:
    """
    requirement = WS? IDENTIFIER WS? extras WS? requirement_details
    """
    tokenizer.consume("WS")

    name_token = tokenizer.expect(
        "IDENTIFIER", expected="package name at the start of dependency specifier"
    )
    name = name_token.text
    tokenizer.consume("WS")

    extras = _parse_extras(tokenizer)
    tokenizer.consume("WS")

    url, specifier, marker = _parse_requirement_details(tokenizer)
    tokenizer.expect("END", expected="end of dependency specifier")

    return ParsedRequirement(name, url, extras, specifier, marker)


def _parse_requirement_details(
    tokenizer: Tokenizer,
) -> tuple[str, str, MarkerList | None]:
    """
    requirement_details = AT URL (WS requirement_marker?)?
                        | specifier WS? (requirement_marker)?
    """

    specifier = ""
    url = ""
    marker = None

    if tokenizer.check("AT"):
        tokenizer.read()
        tokenizer.consume("WS")

        url_start = tokenizer.position
        url = tokenizer.expect("URL", expected="URL after @").text
        if tokenizer.check("END", peek=True):
            return (url, specifier, marker)

        tokenizer.expect("WS", expected="whitespace after URL")

        # The input might end after whitespace.
        if tokenizer.check("END", peek=True):
            return (url, specifier, marker)

        marker = _parse_requirement_marker(
            tokenizer, span_start=url_start, after="URL and whitespace"
        )
    else:
        specifier_start = tokenizer.position
        specifier = _parse_specifier(tokenizer)
        tokenizer.consume("WS")

        if tokenizer.check("END", peek=True):
            return (url, specifier, marker)

        marker = _parse_requirement_marker(
            tokenizer,
            span_start=specifier_start,
            after=(
                "version specifier"
                if specifier
                else "name and no valid version specifier"
            ),
        )

    return (url, specifier, marker)


def _parse_requirement_marker(
    tokenizer: Tokenizer, *, span_start: int, after: str
) -> MarkerList:
    """
    requirement_marker = SEMICOLON marker WS?
    """

    if not tokenizer.check("SEMICOLON"):
        tokenizer.raise_syntax_error(
            f"Expected end or semicolon (after {after})",
            span_start=span_start,
        )
    tokenizer.read()

    marker = _parse_marker(tokenizer)
    tokenizer.consume("WS")

    return marker


def _parse_extras(tokenizer: Tokenizer) -> list[str]:
    """
    extras = (LEFT_BRACKET wsp* extras_list? wsp* RIGHT_BRACKET)?
    """
    if not tokenizer.check("LEFT_BRACKET", peek=True):
        return []

    with tokenizer.enclosing_tokens(
        "LEFT_BRACKET",
        "RIGHT_BRACKET",
        around="extras",
    ):
        tokenizer.consume("WS")
        extras = _parse_extras_list(tokenizer)
        tokenizer.consume("WS")

    return extras


def _parse_extras_list(tokenizer: Tokenizer) -> list[str]:
    """
    extras_list = identifier (wsp* ',' wsp* identifier)*
    """
    extras: list[str] = []

    if not tokenizer.check("IDENTIFIER"):
        return extras

    extras.append(tokenizer.read().text)

    while True:
        tokenizer.consume("WS")
        if tokenizer.check("IDENTIFIER", peek=True):
            tokenizer.raise_syntax_error("Expected comma between extra names")
        elif not tokenizer.check("COMMA"):
            break

        tokenizer.read()
        tokenizer.consume("WS")

        extra_token = tokenizer.expect("IDENTIFIER", expected="extra name after comma")
        extras.append(extra_token.text)

    return extras


def _parse_specifier(tokenizer: Tokenizer) -> str:
    """
    specifier = LEFT_PARENTHESIS WS? version_many WS? RIGHT_PARENTHESIS
              | WS? version_many WS?
    """
    with tokenizer.enclosing_tokens(
        "LEFT_PARENTHESIS",
        "RIGHT_PARENTHESIS",
        around="version specifier",
    ):
        tokenizer.consume("WS")
        parsed_specifiers = _parse_version_many(tokenizer)
        tokenizer.consume("WS")

    return parsed_specifiers


def _parse_version_many(tokenizer: Tokenizer) -> str:
    """
    version_many = (SPECIFIER (WS? COMMA WS? SPECIFIER)*)?
    """
    parsed_specifiers = ""
    while tokenizer.check("SPECIFIER"):
        span_start = tokenizer.position
        parsed_specifiers += tokenizer.read().text
        if tokenizer.check("VERSION_PREFIX_TRAIL", peek=True):
            tokenizer.raise_syntax_error(
                ".* suffix can only be used with `==` or `!=` operators",
                span_start=span_start,
                span_end=tokenizer.position + 1,
            )
        if tokenizer.check("VERSION_LOCAL_LABEL_TRAIL", peek=True):
            tokenizer.raise_syntax_error(
                "Local version label can only be used with `==` or `!=` operators",
                span_start=span_start,
                span_end=tokenizer.position,
            )
        tokenizer.consume("WS")
        if not tokenizer.check("COMMA"):
            break
        parsed_specifiers += tokenizer.read().text
        tokenizer.consume("WS")

    return parsed_specifiers


# --------------------------------------------------------------------------------------
# Recursive descent parser for marker expression
# --------------------------------------------------------------------------------------
def parse_marker(source: str) -> MarkerList:
    return _parse_full_marker(Tokenizer(source, rules=DEFAULT_RULES))


def _parse_full_marker(tokenizer: Tokenizer) -> MarkerList:
    retval = _parse_marker(tokenizer)
    tokenizer.expect("END", expected="end of marker expression")
    return retval


def _parse_marker(tokenizer: Tokenizer) -> MarkerList:
    """
    marker = marker_atom (BOOLOP marker_atom)+
    """
    expression = [_parse_marker_atom(tokenizer)]
    while tokenizer.check("BOOLOP"):
        token = tokenizer.read()
        expr_right = _parse_marker_atom(tokenizer)
        expression.extend((token.text, expr_right))
    return expression


def _parse_marker_atom(tokenizer: Tokenizer) -> MarkerAtom:
    """
    marker_atom = WS? LEFT_PARENTHESIS WS? marker WS? RIGHT_PARENTHESIS WS?
                | WS? marker_item WS?
    """

    tokenizer.consume("WS")
    if tokenizer.check("LEFT_PARENTHESIS", peek=True):
        with tokenizer.enclosing_tokens(
            "LEFT_PARENTHESIS",
            "RIGHT_PARENTHESIS",
            around="marker expression",
        ):
            tokenizer.consume("WS")
            marker: MarkerAtom = _parse_marker(tokenizer)
            tokenizer.consume("WS")
    else:
        marker = _parse_marker_item(tokenizer)
    tokenizer.consume("WS")
    return marker


def _parse_marker_item(tokenizer: Tokenizer) -> MarkerItem:
    """
    marker_item = WS? marker_var WS? marker_op WS? marker_var WS?
    """
    tokenizer.consume("WS")
    marker_var_left = _parse_marker_var(tokenizer)
    tokenizer.consume("WS")
    marker_op = _parse_marker_op(tokenizer)
    tokenizer.consume("WS")
    marker_var_right = _parse_marker_var(tokenizer)
    tokenizer.consume("WS")
    return (marker_var_left, marker_op, marker_var_right)


def _parse_marker_var(tokenizer: Tokenizer) -> MarkerVar:
    """
    marker_var = VARIABLE | QUOTED_STRING
    """
    if tokenizer.check("VARIABLE"):
        return process_env_var(tokenizer.read().text.replace(".", "_"))
    elif tokenizer.check("QUOTED_STRING"):
        return process_python_str(tokenizer.read().text)
    else:
        tokenizer.raise_syntax_error(
            message="Expected a marker variable or quoted string"
        )


def process_env_var(env_var: str) -> Variable:
    if env_var in ("platform_python_implementation", "python_implementation"):
        return Variable("platform_python_implementation")
    else:
        return Variable(env_var)


def process_python_str(python_str: str) -> Value:
    value = ast.literal_eval(python_str)
    return Value(str(value))


def _parse_marker_op(tokenizer: Tokenizer) -> Op:
    """
    marker_op = IN | NOT IN | OP
    """
    if tokenizer.check("IN"):
        tokenizer.read()
        return Op("in")
    elif tokenizer.check("NOT"):
        tokenizer.read()
        tokenizer.expect("WS", expected="whitespace after 'not'")
        tokenizer.expect("IN", expected="'in' after 'not'")
        return Op("not in")
    elif tokenizer.check("OP"):
        return Op(tokenizer.read().text)
    else:
        return tokenizer.raise_syntax_error(
            "Expected marker operator, one of <=, <, !=, ==, >=, >, ~=, ===, in, not in"
        )
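A hedged usage sketch, not part of the vendored file; the requirement string (the "pip" name and "speedups" extra) is purely illustrative.

req = parse_requirement("pip[speedups]>=25.0; python_version >= '3.9'")
assert req.name == "pip"
assert req.extras == ["speedups"]
assert req.specifier == ">=25.0"
assert req.url == ""
# req.marker is a MarkerList, here a one-element list holding the triple
# (Variable('python_version'), Op('>='), Value('3.9')).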
@@ -0,0 +1,61 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.


class InfinityType:
    def __repr__(self) -> str:
        return "Infinity"

    def __hash__(self) -> int:
        return hash(repr(self))

    def __lt__(self, other: object) -> bool:
        return False

    def __le__(self, other: object) -> bool:
        return False

    def __eq__(self, other: object) -> bool:
        return isinstance(other, self.__class__)

    def __gt__(self, other: object) -> bool:
        return True

    def __ge__(self, other: object) -> bool:
        return True

    def __neg__(self: object) -> "NegativeInfinityType":
        return NegativeInfinity


Infinity = InfinityType()


class NegativeInfinityType:
    def __repr__(self) -> str:
        return "-Infinity"

    def __hash__(self) -> int:
        return hash(repr(self))

    def __lt__(self, other: object) -> bool:
        return True

    def __le__(self, other: object) -> bool:
        return True

    def __eq__(self, other: object) -> bool:
        return isinstance(other, self.__class__)

    def __gt__(self, other: object) -> bool:
        return False

    def __ge__(self, other: object) -> bool:
        return False

    def __neg__(self: object) -> InfinityType:
        return Infinity


NegativeInfinity = NegativeInfinityType()
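A hedged illustration, not part of the vendored file: these singletons compare above and below every other value of any type, which is what makes them usable as extreme sort-key sentinels (packaging uses them when ordering version segments).

assert Infinity > 10**9 and Infinity >= "anything"
assert NegativeInfinity < -10**9
assert -Infinity is NegativeInfinity and -NegativeInfinity is Infinity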
@@ -0,0 +1,195 @@
from __future__ import annotations

import contextlib
import re
from dataclasses import dataclass
from typing import Iterator, NoReturn

from .specifiers import Specifier


@dataclass
class Token:
    name: str
    text: str
    position: int


class ParserSyntaxError(Exception):
    """The provided source text could not be parsed correctly."""

    def __init__(
        self,
        message: str,
        *,
        source: str,
        span: tuple[int, int],
    ) -> None:
        self.span = span
        self.message = message
        self.source = source

        super().__init__()

    def __str__(self) -> str:
        marker = " " * self.span[0] + "~" * (self.span[1] - self.span[0]) + "^"
        return "\n    ".join([self.message, self.source, marker])


DEFAULT_RULES: dict[str, str | re.Pattern[str]] = {
    "LEFT_PARENTHESIS": r"\(",
    "RIGHT_PARENTHESIS": r"\)",
    "LEFT_BRACKET": r"\[",
    "RIGHT_BRACKET": r"\]",
    "SEMICOLON": r";",
    "COMMA": r",",
    "QUOTED_STRING": re.compile(
        r"""
            (
                ('[^']*')
                |
                ("[^"]*")
            )
        """,
        re.VERBOSE,
    ),
    "OP": r"(===|==|~=|!=|<=|>=|<|>)",
    "BOOLOP": r"\b(or|and)\b",
    "IN": r"\bin\b",
    "NOT": r"\bnot\b",
    "VARIABLE": re.compile(
        r"""
            \b(
                python_version
                |python_full_version
                |os[._]name
                |sys[._]platform
                |platform_(release|system)
                |platform[._](version|machine|python_implementation)
                |python_implementation
                |implementation_(name|version)
                |extras?
                |dependency_groups
            )\b
        """,
        re.VERBOSE,
    ),
    "SPECIFIER": re.compile(
        Specifier._operator_regex_str + Specifier._version_regex_str,
        re.VERBOSE | re.IGNORECASE,
    ),
    "AT": r"\@",
    "URL": r"[^ \t]+",
    "IDENTIFIER": r"\b[a-zA-Z0-9][a-zA-Z0-9._-]*\b",
    "VERSION_PREFIX_TRAIL": r"\.\*",
    "VERSION_LOCAL_LABEL_TRAIL": r"\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*",
    "WS": r"[ \t]+",
    "END": r"$",
}


class Tokenizer:
    """Context-sensitive token parsing.

    Provides methods to examine the input stream to check whether the next token
    matches.
    """

    def __init__(
        self,
        source: str,
        *,
        rules: dict[str, str | re.Pattern[str]],
    ) -> None:
        self.source = source
        self.rules: dict[str, re.Pattern[str]] = {
            name: re.compile(pattern) for name, pattern in rules.items()
        }
        self.next_token: Token | None = None
        self.position = 0

    def consume(self, name: str) -> None:
        """Move beyond provided token name, if at current position."""
        if self.check(name):
            self.read()

    def check(self, name: str, *, peek: bool = False) -> bool:
        """Check whether the next token has the provided name.

        By default, if the check succeeds, the token *must* be read before
        another check. If `peek` is set to `True`, the token is not loaded and
        would need to be checked again.
        """
        assert self.next_token is None, (
            f"Cannot check for {name!r}, already have {self.next_token!r}"
        )
        assert name in self.rules, f"Unknown token name: {name!r}"

        expression = self.rules[name]

        match = expression.match(self.source, self.position)
        if match is None:
            return False
        if not peek:
            self.next_token = Token(name, match[0], self.position)
        return True

    def expect(self, name: str, *, expected: str) -> Token:
        """Expect a certain token name next, failing with a syntax error otherwise.

        The token is *not* read.
        """
        if not self.check(name):
            raise self.raise_syntax_error(f"Expected {expected}")
        return self.read()

    def read(self) -> Token:
        """Consume the next token and return it."""
        token = self.next_token
        assert token is not None

        self.position += len(token.text)
        self.next_token = None

        return token

    def raise_syntax_error(
        self,
        message: str,
        *,
        span_start: int | None = None,
        span_end: int | None = None,
    ) -> NoReturn:
        """Raise ParserSyntaxError at the given position."""
        span = (
            self.position if span_start is None else span_start,
            self.position if span_end is None else span_end,
        )
        raise ParserSyntaxError(
            message,
            source=self.source,
            span=span,
        )

    @contextlib.contextmanager
    def enclosing_tokens(
        self, open_token: str, close_token: str, *, around: str
    ) -> Iterator[None]:
        if self.check(open_token):
            open_position = self.position
            self.read()
        else:
            open_position = None

        yield

        if open_position is None:
            return

        if not self.check(close_token):
            self.raise_syntax_error(
                f"Expected matching {close_token} for {open_token}, after {around}",
                span_start=open_position,
            )

        self.read()
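A hedged usage sketch, not part of the vendored file, showing the check/read discipline the docstrings above describe: a successful non-peek check loads a token that must be read before the next check.

t = Tokenizer("os_name == 'nt'", rules=DEFAULT_RULES)
assert t.check("VARIABLE")          # loads the token; it must now be read
assert t.read().text == "os_name"
t.consume("WS")
assert t.expect("OP", expected="comparison operator").text == "=="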
@@ -0,0 +1,145 @@
#######################################################################################
#
# Adapted from:
# https://github.com/pypa/hatch/blob/5352e44/backend/src/hatchling/licenses/parse.py
#
# MIT License
#
# Copyright (c) 2017-present Ofek Lev <oss@ofek.dev>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission notice shall be included in all copies
# or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
# CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
# OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
#
# With additional allowance of arbitrary `LicenseRef-` identifiers, not just
# `LicenseRef-Public-Domain` and `LicenseRef-Proprietary`.
#
#######################################################################################
from __future__ import annotations

import re
from typing import NewType, cast

from pip._vendor.packaging.licenses._spdx import EXCEPTIONS, LICENSES

__all__ = [
    "InvalidLicenseExpression",
    "NormalizedLicenseExpression",
    "canonicalize_license_expression",
]

license_ref_allowed = re.compile("^[A-Za-z0-9.-]*$")

NormalizedLicenseExpression = NewType("NormalizedLicenseExpression", str)


class InvalidLicenseExpression(ValueError):
    """Raised when a license-expression string is invalid

    >>> canonicalize_license_expression("invalid")
    Traceback (most recent call last):
        ...
    packaging.licenses.InvalidLicenseExpression: Invalid license expression: 'invalid'
    """


def canonicalize_license_expression(
    raw_license_expression: str,
) -> NormalizedLicenseExpression:
    if not raw_license_expression:
        message = f"Invalid license expression: {raw_license_expression!r}"
        raise InvalidLicenseExpression(message)

    # Pad any parentheses so tokenization can be achieved by merely splitting on
    # whitespace.
    license_expression = raw_license_expression.replace("(", " ( ").replace(")", " ) ")
    licenseref_prefix = "LicenseRef-"
    license_refs = {
        ref.lower(): "LicenseRef-" + ref[len(licenseref_prefix) :]
        for ref in license_expression.split()
        if ref.lower().startswith(licenseref_prefix.lower())
    }

    # Normalize to lower case so we can look up licenses/exceptions
    # and so boolean operators are Python-compatible.
    license_expression = license_expression.lower()

    tokens = license_expression.split()

    # Rather than implementing boolean logic, we create an expression that Python can
    # parse. Everything that is not involved with the grammar itself is treated as
    # `False` and the expression should evaluate as such.
    python_tokens = []
    for token in tokens:
        if token not in {"or", "and", "with", "(", ")"}:
            python_tokens.append("False")
        elif token == "with":
            python_tokens.append("or")
        elif token == "(" and python_tokens and python_tokens[-1] not in {"or", "and"}:
            message = f"Invalid license expression: {raw_license_expression!r}"
            raise InvalidLicenseExpression(message)
        else:
            python_tokens.append(token)

    python_expression = " ".join(python_tokens)
    try:
        invalid = eval(python_expression, globals(), locals())
    except Exception:
        invalid = True

    if invalid is not False:
        message = f"Invalid license expression: {raw_license_expression!r}"
        raise InvalidLicenseExpression(message) from None

    # Take a final pass to check for unknown licenses/exceptions.
    normalized_tokens = []
    for token in tokens:
        if token in {"or", "and", "with", "(", ")"}:
            normalized_tokens.append(token.upper())
            continue

        if normalized_tokens and normalized_tokens[-1] == "WITH":
            if token not in EXCEPTIONS:
                message = f"Unknown license exception: {token!r}"
                raise InvalidLicenseExpression(message)

            normalized_tokens.append(EXCEPTIONS[token]["id"])
        else:
            if token.endswith("+"):
                final_token = token[:-1]
                suffix = "+"
            else:
                final_token = token
                suffix = ""

            if final_token.startswith("licenseref-"):
                if not license_ref_allowed.match(final_token):
                    message = f"Invalid licenseref: {final_token!r}"
                    raise InvalidLicenseExpression(message)
                normalized_tokens.append(license_refs[final_token] + suffix)
            else:
                if final_token not in LICENSES:
                    message = f"Unknown license: {final_token!r}"
                    raise InvalidLicenseExpression(message)
                normalized_tokens.append(LICENSES[final_token]["id"] + suffix)

    normalized_expression = " ".join(normalized_tokens)

    return cast(
        NormalizedLicenseExpression,
        normalized_expression.replace("( ", "(").replace(" )", ")"),
    )
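A hedged usage sketch, not part of the vendored file, using identifiers from the LICENSES table in the following file; "licenseref-custom" is an illustrative arbitrary LicenseRef.

expr = canonicalize_license_expression("apache-2.0 or (bsd-2-clause and licenseref-custom)")
assert expr == "Apache-2.0 OR (BSD-2-Clause AND LicenseRef-custom)"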
@@ -0,0 +1,759 @@

from __future__ import annotations

from typing import TypedDict

class SPDXLicense(TypedDict):
    id: str
    deprecated: bool

class SPDXException(TypedDict):
    id: str
    deprecated: bool


VERSION = '3.25.0'

LICENSES: dict[str, SPDXLicense] = {
    '0bsd': {'id': '0BSD', 'deprecated': False},
    '3d-slicer-1.0': {'id': '3D-Slicer-1.0', 'deprecated': False},
    'aal': {'id': 'AAL', 'deprecated': False},
    'abstyles': {'id': 'Abstyles', 'deprecated': False},
    'adacore-doc': {'id': 'AdaCore-doc', 'deprecated': False},
    'adobe-2006': {'id': 'Adobe-2006', 'deprecated': False},
    'adobe-display-postscript': {'id': 'Adobe-Display-PostScript', 'deprecated': False},
    'adobe-glyph': {'id': 'Adobe-Glyph', 'deprecated': False},
    'adobe-utopia': {'id': 'Adobe-Utopia', 'deprecated': False},
    'adsl': {'id': 'ADSL', 'deprecated': False},
    'afl-1.1': {'id': 'AFL-1.1', 'deprecated': False},
    'afl-1.2': {'id': 'AFL-1.2', 'deprecated': False},
    'afl-2.0': {'id': 'AFL-2.0', 'deprecated': False},
    'afl-2.1': {'id': 'AFL-2.1', 'deprecated': False},
    'afl-3.0': {'id': 'AFL-3.0', 'deprecated': False},
    'afmparse': {'id': 'Afmparse', 'deprecated': False},
    'agpl-1.0': {'id': 'AGPL-1.0', 'deprecated': True},
    'agpl-1.0-only': {'id': 'AGPL-1.0-only', 'deprecated': False},
    'agpl-1.0-or-later': {'id': 'AGPL-1.0-or-later', 'deprecated': False},
    'agpl-3.0': {'id': 'AGPL-3.0', 'deprecated': True},
    'agpl-3.0-only': {'id': 'AGPL-3.0-only', 'deprecated': False},
    'agpl-3.0-or-later': {'id': 'AGPL-3.0-or-later', 'deprecated': False},
    'aladdin': {'id': 'Aladdin', 'deprecated': False},
    'amd-newlib': {'id': 'AMD-newlib', 'deprecated': False},
    'amdplpa': {'id': 'AMDPLPA', 'deprecated': False},
    'aml': {'id': 'AML', 'deprecated': False},
    'aml-glslang': {'id': 'AML-glslang', 'deprecated': False},
    'ampas': {'id': 'AMPAS', 'deprecated': False},
    'antlr-pd': {'id': 'ANTLR-PD', 'deprecated': False},
    'antlr-pd-fallback': {'id': 'ANTLR-PD-fallback', 'deprecated': False},
    'any-osi': {'id': 'any-OSI', 'deprecated': False},
    'apache-1.0': {'id': 'Apache-1.0', 'deprecated': False},
    'apache-1.1': {'id': 'Apache-1.1', 'deprecated': False},
    'apache-2.0': {'id': 'Apache-2.0', 'deprecated': False},
    'apafml': {'id': 'APAFML', 'deprecated': False},
    'apl-1.0': {'id': 'APL-1.0', 'deprecated': False},
    'app-s2p': {'id': 'App-s2p', 'deprecated': False},
    'apsl-1.0': {'id': 'APSL-1.0', 'deprecated': False},
    'apsl-1.1': {'id': 'APSL-1.1', 'deprecated': False},
    'apsl-1.2': {'id': 'APSL-1.2', 'deprecated': False},
    'apsl-2.0': {'id': 'APSL-2.0', 'deprecated': False},
    'arphic-1999': {'id': 'Arphic-1999', 'deprecated': False},
    'artistic-1.0': {'id': 'Artistic-1.0', 'deprecated': False},
    'artistic-1.0-cl8': {'id': 'Artistic-1.0-cl8', 'deprecated': False},
    'artistic-1.0-perl': {'id': 'Artistic-1.0-Perl', 'deprecated': False},
    'artistic-2.0': {'id': 'Artistic-2.0', 'deprecated': False},
    'aswf-digital-assets-1.0': {'id': 'ASWF-Digital-Assets-1.0', 'deprecated': False},
    'aswf-digital-assets-1.1': {'id': 'ASWF-Digital-Assets-1.1', 'deprecated': False},
    'baekmuk': {'id': 'Baekmuk', 'deprecated': False},
    'bahyph': {'id': 'Bahyph', 'deprecated': False},
    'barr': {'id': 'Barr', 'deprecated': False},
    'bcrypt-solar-designer': {'id': 'bcrypt-Solar-Designer', 'deprecated': False},
    'beerware': {'id': 'Beerware', 'deprecated': False},
    'bitstream-charter': {'id': 'Bitstream-Charter', 'deprecated': False},
    'bitstream-vera': {'id': 'Bitstream-Vera', 'deprecated': False},
    'bittorrent-1.0': {'id': 'BitTorrent-1.0', 'deprecated': False},
    'bittorrent-1.1': {'id': 'BitTorrent-1.1', 'deprecated': False},
    'blessing': {'id': 'blessing', 'deprecated': False},
    'blueoak-1.0.0': {'id': 'BlueOak-1.0.0', 'deprecated': False},
    'boehm-gc': {'id': 'Boehm-GC', 'deprecated': False},
    'borceux': {'id': 'Borceux', 'deprecated': False},
    'brian-gladman-2-clause': {'id': 'Brian-Gladman-2-Clause', 'deprecated': False},
    'brian-gladman-3-clause': {'id': 'Brian-Gladman-3-Clause', 'deprecated': False},
    'bsd-1-clause': {'id': 'BSD-1-Clause', 'deprecated': False},
    'bsd-2-clause': {'id': 'BSD-2-Clause', 'deprecated': False},
    'bsd-2-clause-darwin': {'id': 'BSD-2-Clause-Darwin', 'deprecated': False},
    'bsd-2-clause-first-lines': {'id': 'BSD-2-Clause-first-lines', 'deprecated': False},
    'bsd-2-clause-freebsd': {'id': 'BSD-2-Clause-FreeBSD', 'deprecated': True},
    'bsd-2-clause-netbsd': {'id': 'BSD-2-Clause-NetBSD', 'deprecated': True},
    'bsd-2-clause-patent': {'id': 'BSD-2-Clause-Patent', 'deprecated': False},
    'bsd-2-clause-views': {'id': 'BSD-2-Clause-Views', 'deprecated': False},
    'bsd-3-clause': {'id': 'BSD-3-Clause', 'deprecated': False},
    'bsd-3-clause-acpica': {'id': 'BSD-3-Clause-acpica', 'deprecated': False},
    'bsd-3-clause-attribution': {'id': 'BSD-3-Clause-Attribution', 'deprecated': False},
    'bsd-3-clause-clear': {'id': 'BSD-3-Clause-Clear', 'deprecated': False},
    'bsd-3-clause-flex': {'id': 'BSD-3-Clause-flex', 'deprecated': False},
    'bsd-3-clause-hp': {'id': 'BSD-3-Clause-HP', 'deprecated': False},
    'bsd-3-clause-lbnl': {'id': 'BSD-3-Clause-LBNL', 'deprecated': False},
    'bsd-3-clause-modification': {'id': 'BSD-3-Clause-Modification', 'deprecated': False},
    'bsd-3-clause-no-military-license': {'id': 'BSD-3-Clause-No-Military-License', 'deprecated': False},
    'bsd-3-clause-no-nuclear-license': {'id': 'BSD-3-Clause-No-Nuclear-License', 'deprecated': False},
    'bsd-3-clause-no-nuclear-license-2014': {'id': 'BSD-3-Clause-No-Nuclear-License-2014', 'deprecated': False},
    'bsd-3-clause-no-nuclear-warranty': {'id': 'BSD-3-Clause-No-Nuclear-Warranty', 'deprecated': False},
    'bsd-3-clause-open-mpi': {'id': 'BSD-3-Clause-Open-MPI', 'deprecated': False},
    'bsd-3-clause-sun': {'id': 'BSD-3-Clause-Sun', 'deprecated': False},
    'bsd-4-clause': {'id': 'BSD-4-Clause', 'deprecated': False},
    'bsd-4-clause-shortened': {'id': 'BSD-4-Clause-Shortened', 'deprecated': False},
    'bsd-4-clause-uc': {'id': 'BSD-4-Clause-UC', 'deprecated': False},
    'bsd-4.3reno': {'id': 'BSD-4.3RENO', 'deprecated': False},
    'bsd-4.3tahoe': {'id': 'BSD-4.3TAHOE', 'deprecated': False},
    'bsd-advertising-acknowledgement': {'id': 'BSD-Advertising-Acknowledgement', 'deprecated': False},
    'bsd-attribution-hpnd-disclaimer': {'id': 'BSD-Attribution-HPND-disclaimer', 'deprecated': False},
    'bsd-inferno-nettverk': {'id': 'BSD-Inferno-Nettverk', 'deprecated': False},
    'bsd-protection': {'id': 'BSD-Protection', 'deprecated': False},
    'bsd-source-beginning-file': {'id': 'BSD-Source-beginning-file', 'deprecated': False},
    'bsd-source-code': {'id': 'BSD-Source-Code', 'deprecated': False},
    'bsd-systemics': {'id': 'BSD-Systemics', 'deprecated': False},
    'bsd-systemics-w3works': {'id': 'BSD-Systemics-W3Works', 'deprecated': False},
    'bsl-1.0': {'id': 'BSL-1.0', 'deprecated': False},
    'busl-1.1': {'id': 'BUSL-1.1', 'deprecated': False},
    'bzip2-1.0.5': {'id': 'bzip2-1.0.5', 'deprecated': True},
    'bzip2-1.0.6': {'id': 'bzip2-1.0.6', 'deprecated': False},
    'c-uda-1.0': {'id': 'C-UDA-1.0', 'deprecated': False},
    'cal-1.0': {'id': 'CAL-1.0', 'deprecated': False},
    'cal-1.0-combined-work-exception': {'id': 'CAL-1.0-Combined-Work-Exception', 'deprecated': False},
    'caldera': {'id': 'Caldera', 'deprecated': False},
    'caldera-no-preamble': {'id': 'Caldera-no-preamble', 'deprecated': False},
    'catharon': {'id': 'Catharon', 'deprecated': False},
    'catosl-1.1': {'id': 'CATOSL-1.1', 'deprecated': False},
    'cc-by-1.0': {'id': 'CC-BY-1.0', 'deprecated': False},
    'cc-by-2.0': {'id': 'CC-BY-2.0', 'deprecated': False},
    'cc-by-2.5': {'id': 'CC-BY-2.5', 'deprecated': False},
    'cc-by-2.5-au': {'id': 'CC-BY-2.5-AU', 'deprecated': False},
    'cc-by-3.0': {'id': 'CC-BY-3.0', 'deprecated': False},
    'cc-by-3.0-at': {'id': 'CC-BY-3.0-AT', 'deprecated': False},
    'cc-by-3.0-au': {'id': 'CC-BY-3.0-AU', 'deprecated': False},
    'cc-by-3.0-de': {'id': 'CC-BY-3.0-DE', 'deprecated': False},
    'cc-by-3.0-igo': {'id': 'CC-BY-3.0-IGO', 'deprecated': False},
    'cc-by-3.0-nl': {'id': 'CC-BY-3.0-NL', 'deprecated': False},
    'cc-by-3.0-us': {'id': 'CC-BY-3.0-US', 'deprecated': False},
    'cc-by-4.0': {'id': 'CC-BY-4.0', 'deprecated': False},
    'cc-by-nc-1.0': {'id': 'CC-BY-NC-1.0', 'deprecated': False},
    'cc-by-nc-2.0': {'id': 'CC-BY-NC-2.0', 'deprecated': False},
    'cc-by-nc-2.5': {'id': 'CC-BY-NC-2.5', 'deprecated': False},
    'cc-by-nc-3.0': {'id': 'CC-BY-NC-3.0', 'deprecated': False},
    'cc-by-nc-3.0-de': {'id': 'CC-BY-NC-3.0-DE', 'deprecated': False},
    'cc-by-nc-4.0': {'id': 'CC-BY-NC-4.0', 'deprecated': False},
    'cc-by-nc-nd-1.0': {'id': 'CC-BY-NC-ND-1.0', 'deprecated': False},
    'cc-by-nc-nd-2.0': {'id': 'CC-BY-NC-ND-2.0', 'deprecated': False},
    'cc-by-nc-nd-2.5': {'id': 'CC-BY-NC-ND-2.5', 'deprecated': False},
    'cc-by-nc-nd-3.0': {'id': 'CC-BY-NC-ND-3.0', 'deprecated': False},
    'cc-by-nc-nd-3.0-de': {'id': 'CC-BY-NC-ND-3.0-DE', 'deprecated': False},
    'cc-by-nc-nd-3.0-igo': {'id': 'CC-BY-NC-ND-3.0-IGO', 'deprecated': False},
    'cc-by-nc-nd-4.0': {'id': 'CC-BY-NC-ND-4.0', 'deprecated': False},
    'cc-by-nc-sa-1.0': {'id': 'CC-BY-NC-SA-1.0', 'deprecated': False},
    'cc-by-nc-sa-2.0': {'id': 'CC-BY-NC-SA-2.0', 'deprecated': False},
    'cc-by-nc-sa-2.0-de': {'id': 'CC-BY-NC-SA-2.0-DE', 'deprecated': False},
    'cc-by-nc-sa-2.0-fr': {'id': 'CC-BY-NC-SA-2.0-FR', 'deprecated': False},
    'cc-by-nc-sa-2.0-uk': {'id': 'CC-BY-NC-SA-2.0-UK', 'deprecated': False},
    'cc-by-nc-sa-2.5': {'id': 'CC-BY-NC-SA-2.5', 'deprecated': False},
    'cc-by-nc-sa-3.0': {'id': 'CC-BY-NC-SA-3.0', 'deprecated': False},
    'cc-by-nc-sa-3.0-de': {'id': 'CC-BY-NC-SA-3.0-DE', 'deprecated': False},
    'cc-by-nc-sa-3.0-igo': {'id': 'CC-BY-NC-SA-3.0-IGO', 'deprecated': False},
    'cc-by-nc-sa-4.0': {'id': 'CC-BY-NC-SA-4.0', 'deprecated': False},
    'cc-by-nd-1.0': {'id': 'CC-BY-ND-1.0', 'deprecated': False},
    'cc-by-nd-2.0': {'id': 'CC-BY-ND-2.0', 'deprecated': False},
    'cc-by-nd-2.5': {'id': 'CC-BY-ND-2.5', 'deprecated': False},
    'cc-by-nd-3.0': {'id': 'CC-BY-ND-3.0', 'deprecated': False},
    'cc-by-nd-3.0-de': {'id': 'CC-BY-ND-3.0-DE', 'deprecated': False},
    'cc-by-nd-4.0': {'id': 'CC-BY-ND-4.0', 'deprecated': False},
    'cc-by-sa-1.0': {'id': 'CC-BY-SA-1.0', 'deprecated': False},
    'cc-by-sa-2.0': {'id': 'CC-BY-SA-2.0', 'deprecated': False},
    'cc-by-sa-2.0-uk': {'id': 'CC-BY-SA-2.0-UK', 'deprecated': False},
    'cc-by-sa-2.1-jp': {'id': 'CC-BY-SA-2.1-JP', 'deprecated': False},
    'cc-by-sa-2.5': {'id': 'CC-BY-SA-2.5', 'deprecated': False},
    'cc-by-sa-3.0': {'id': 'CC-BY-SA-3.0', 'deprecated': False},
    'cc-by-sa-3.0-at': {'id': 'CC-BY-SA-3.0-AT', 'deprecated': False},
    'cc-by-sa-3.0-de': {'id': 'CC-BY-SA-3.0-DE', 'deprecated': False},
    'cc-by-sa-3.0-igo': {'id': 'CC-BY-SA-3.0-IGO', 'deprecated': False},
    'cc-by-sa-4.0': {'id': 'CC-BY-SA-4.0', 'deprecated': False},
    'cc-pddc': {'id': 'CC-PDDC', 'deprecated': False},
    'cc0-1.0': {'id': 'CC0-1.0', 'deprecated': False},
    'cddl-1.0': {'id': 'CDDL-1.0', 'deprecated': False},
    'cddl-1.1': {'id': 'CDDL-1.1', 'deprecated': False},
    'cdl-1.0': {'id': 'CDL-1.0', 'deprecated': False},
    'cdla-permissive-1.0': {'id': 'CDLA-Permissive-1.0', 'deprecated': False},
    'cdla-permissive-2.0': {'id': 'CDLA-Permissive-2.0', 'deprecated': False},
    'cdla-sharing-1.0': {'id': 'CDLA-Sharing-1.0', 'deprecated': False},
    'cecill-1.0': {'id': 'CECILL-1.0', 'deprecated': False},
    'cecill-1.1': {'id': 'CECILL-1.1', 'deprecated': False},
    'cecill-2.0': {'id': 'CECILL-2.0', 'deprecated': False},
    'cecill-2.1': {'id': 'CECILL-2.1', 'deprecated': False},
    'cecill-b': {'id': 'CECILL-B', 'deprecated': False},
    'cecill-c': {'id': 'CECILL-C', 'deprecated': False},
    'cern-ohl-1.1': {'id': 'CERN-OHL-1.1', 'deprecated': False},
    'cern-ohl-1.2': {'id': 'CERN-OHL-1.2', 'deprecated': False},
    'cern-ohl-p-2.0': {'id': 'CERN-OHL-P-2.0', 'deprecated': False},
    'cern-ohl-s-2.0': {'id': 'CERN-OHL-S-2.0', 'deprecated': False},
    'cern-ohl-w-2.0': {'id': 'CERN-OHL-W-2.0', 'deprecated': False},
    'cfitsio': {'id': 'CFITSIO', 'deprecated': False},
    'check-cvs': {'id': 'check-cvs', 'deprecated': False},
    'checkmk': {'id': 'checkmk', 'deprecated': False},
    'clartistic': {'id': 'ClArtistic', 'deprecated': False},
    'clips': {'id': 'Clips', 'deprecated': False},
    'cmu-mach': {'id': 'CMU-Mach', 'deprecated': False},
    'cmu-mach-nodoc': {'id': 'CMU-Mach-nodoc', 'deprecated': False},
    'cnri-jython': {'id': 'CNRI-Jython', 'deprecated': False},
    'cnri-python': {'id': 'CNRI-Python', 'deprecated': False},
    'cnri-python-gpl-compatible': {'id': 'CNRI-Python-GPL-Compatible', 'deprecated': False},
    'coil-1.0': {'id': 'COIL-1.0', 'deprecated': False},
    'community-spec-1.0': {'id': 'Community-Spec-1.0', 'deprecated': False},
    'condor-1.1': {'id': 'Condor-1.1', 'deprecated': False},
    'copyleft-next-0.3.0': {'id': 'copyleft-next-0.3.0', 'deprecated': False},
    'copyleft-next-0.3.1': {'id': 'copyleft-next-0.3.1', 'deprecated': False},
    'cornell-lossless-jpeg': {'id': 'Cornell-Lossless-JPEG', 'deprecated': False},
    'cpal-1.0': {'id': 'CPAL-1.0', 'deprecated': False},
    'cpl-1.0': {'id': 'CPL-1.0', 'deprecated': False},
    'cpol-1.02': {'id': 'CPOL-1.02', 'deprecated': False},
    'cronyx': {'id': 'Cronyx', 'deprecated': False},
    'crossword': {'id': 'Crossword', 'deprecated': False},
    'crystalstacker': {'id': 'CrystalStacker', 'deprecated': False},
    'cua-opl-1.0': {'id': 'CUA-OPL-1.0', 'deprecated': False},
    'cube': {'id': 'Cube', 'deprecated': False},
    'curl': {'id': 'curl', 'deprecated': False},
    'cve-tou': {'id': 'cve-tou', 'deprecated': False},
    'd-fsl-1.0': {'id': 'D-FSL-1.0', 'deprecated': False},
    'dec-3-clause': {'id': 'DEC-3-Clause', 'deprecated': False},
    'diffmark': {'id': 'diffmark', 'deprecated': False},
    'dl-de-by-2.0': {'id': 'DL-DE-BY-2.0', 'deprecated': False},
    'dl-de-zero-2.0': {'id': 'DL-DE-ZERO-2.0', 'deprecated': False},
    'doc': {'id': 'DOC', 'deprecated': False},
    'docbook-schema': {'id': 'DocBook-Schema', 'deprecated': False},
    'docbook-xml': {'id': 'DocBook-XML', 'deprecated': False},
    'dotseqn': {'id': 'Dotseqn', 'deprecated': False},
    'drl-1.0': {'id': 'DRL-1.0', 'deprecated': False},
    'drl-1.1': {'id': 'DRL-1.1', 'deprecated': False},
    'dsdp': {'id': 'DSDP', 'deprecated': False},
    'dtoa': {'id': 'dtoa', 'deprecated': False},
    'dvipdfm': {'id': 'dvipdfm', 'deprecated': False},
    'ecl-1.0': {'id': 'ECL-1.0', 'deprecated': False},
    'ecl-2.0': {'id': 'ECL-2.0', 'deprecated': False},
    'ecos-2.0': {'id': 'eCos-2.0', 'deprecated': True},
    'efl-1.0': {'id': 'EFL-1.0', 'deprecated': False},
    'efl-2.0': {'id': 'EFL-2.0', 'deprecated': False},
    'egenix': {'id': 'eGenix', 'deprecated': False},
    'elastic-2.0': {'id': 'Elastic-2.0', 'deprecated': False},
    'entessa': {'id': 'Entessa', 'deprecated': False},
    'epics': {'id': 'EPICS', 'deprecated': False},
    'epl-1.0': {'id': 'EPL-1.0', 'deprecated': False},
    'epl-2.0': {'id': 'EPL-2.0', 'deprecated': False},
    'erlpl-1.1': {'id': 'ErlPL-1.1', 'deprecated': False},
    'etalab-2.0': {'id': 'etalab-2.0', 'deprecated': False},
    'eudatagrid': {'id': 'EUDatagrid', 'deprecated': False},
    'eupl-1.0': {'id': 'EUPL-1.0', 'deprecated': False},
    'eupl-1.1': {'id': 'EUPL-1.1', 'deprecated': False},
    'eupl-1.2': {'id': 'EUPL-1.2', 'deprecated': False},
    'eurosym': {'id': 'Eurosym', 'deprecated': False},
    'fair': {'id': 'Fair', 'deprecated': False},
    'fbm': {'id': 'FBM', 'deprecated': False},
    'fdk-aac': {'id': 'FDK-AAC', 'deprecated': False},
    'ferguson-twofish': {'id': 'Ferguson-Twofish', 'deprecated': False},
    'frameworx-1.0': {'id': 'Frameworx-1.0', 'deprecated': False},
    'freebsd-doc': {'id': 'FreeBSD-DOC', 'deprecated': False},
    'freeimage': {'id': 'FreeImage', 'deprecated': False},
    'fsfap': {'id': 'FSFAP', 'deprecated': False},
    'fsfap-no-warranty-disclaimer': {'id': 'FSFAP-no-warranty-disclaimer', 'deprecated': False},
    'fsful': {'id': 'FSFUL', 'deprecated': False},
    'fsfullr': {'id': 'FSFULLR', 'deprecated': False},
    'fsfullrwd': {'id': 'FSFULLRWD', 'deprecated': False},
    'ftl': {'id': 'FTL', 'deprecated': False},
    'furuseth': {'id': 'Furuseth', 'deprecated': False},
    'fwlw': {'id': 'fwlw', 'deprecated': False},
    'gcr-docs': {'id': 'GCR-docs', 'deprecated': False},
    'gd': {'id': 'GD', 'deprecated': False},
    'gfdl-1.1': {'id': 'GFDL-1.1', 'deprecated': True},
    'gfdl-1.1-invariants-only': {'id': 'GFDL-1.1-invariants-only', 'deprecated': False},
    'gfdl-1.1-invariants-or-later': {'id': 'GFDL-1.1-invariants-or-later', 'deprecated': False},
    'gfdl-1.1-no-invariants-only': {'id': 'GFDL-1.1-no-invariants-only', 'deprecated': False},
    'gfdl-1.1-no-invariants-or-later': {'id': 'GFDL-1.1-no-invariants-or-later', 'deprecated': False},
    'gfdl-1.1-only': {'id': 'GFDL-1.1-only', 'deprecated': False},
    'gfdl-1.1-or-later': {'id': 'GFDL-1.1-or-later', 'deprecated': False},
    'gfdl-1.2': {'id': 'GFDL-1.2', 'deprecated': True},
    'gfdl-1.2-invariants-only': {'id': 'GFDL-1.2-invariants-only', 'deprecated': False},
    'gfdl-1.2-invariants-or-later': {'id': 'GFDL-1.2-invariants-or-later', 'deprecated': False},
    'gfdl-1.2-no-invariants-only': {'id': 'GFDL-1.2-no-invariants-only', 'deprecated': False},
    'gfdl-1.2-no-invariants-or-later': {'id': 'GFDL-1.2-no-invariants-or-later', 'deprecated': False},
    'gfdl-1.2-only': {'id': 'GFDL-1.2-only', 'deprecated': False},
    'gfdl-1.2-or-later': {'id': 'GFDL-1.2-or-later', 'deprecated': False},
    'gfdl-1.3': {'id': 'GFDL-1.3', 'deprecated': True},
    'gfdl-1.3-invariants-only': {'id': 'GFDL-1.3-invariants-only', 'deprecated': False},
    'gfdl-1.3-invariants-or-later': {'id': 'GFDL-1.3-invariants-or-later', 'deprecated': False},
    'gfdl-1.3-no-invariants-only': {'id': 'GFDL-1.3-no-invariants-only', 'deprecated': False},
    'gfdl-1.3-no-invariants-or-later': {'id': 'GFDL-1.3-no-invariants-or-later', 'deprecated': False},
    'gfdl-1.3-only': {'id': 'GFDL-1.3-only', 'deprecated': False},
    'gfdl-1.3-or-later': {'id': 'GFDL-1.3-or-later', 'deprecated': False},
    'giftware': {'id': 'Giftware', 'deprecated': False},
    'gl2ps': {'id': 'GL2PS', 'deprecated': False},
    'glide': {'id': 'Glide', 'deprecated': False},
    'glulxe': {'id': 'Glulxe', 'deprecated': False},
    'glwtpl': {'id': 'GLWTPL', 'deprecated': False},
    'gnuplot': {'id': 'gnuplot', 'deprecated': False},
    'gpl-1.0': {'id': 'GPL-1.0', 'deprecated': True},
    'gpl-1.0+': {'id': 'GPL-1.0+', 'deprecated': True},
    'gpl-1.0-only': {'id': 'GPL-1.0-only', 'deprecated': False},
    'gpl-1.0-or-later': {'id': 'GPL-1.0-or-later', 'deprecated': False},
    'gpl-2.0': {'id': 'GPL-2.0', 'deprecated': True},
    'gpl-2.0+': {'id': 'GPL-2.0+', 'deprecated': True},
    'gpl-2.0-only': {'id': 'GPL-2.0-only', 'deprecated': False},
    'gpl-2.0-or-later': {'id': 'GPL-2.0-or-later', 'deprecated': False},
    'gpl-2.0-with-autoconf-exception': {'id': 'GPL-2.0-with-autoconf-exception', 'deprecated': True},
    'gpl-2.0-with-bison-exception': {'id': 'GPL-2.0-with-bison-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-classpath-exception': {'id': 'GPL-2.0-with-classpath-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-font-exception': {'id': 'GPL-2.0-with-font-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-gcc-exception': {'id': 'GPL-2.0-with-GCC-exception', 'deprecated': True},
|
||||
'gpl-3.0': {'id': 'GPL-3.0', 'deprecated': True},
|
||||
'gpl-3.0+': {'id': 'GPL-3.0+', 'deprecated': True},
|
||||
'gpl-3.0-only': {'id': 'GPL-3.0-only', 'deprecated': False},
|
||||
'gpl-3.0-or-later': {'id': 'GPL-3.0-or-later', 'deprecated': False},
|
||||
'gpl-3.0-with-autoconf-exception': {'id': 'GPL-3.0-with-autoconf-exception', 'deprecated': True},
|
||||
'gpl-3.0-with-gcc-exception': {'id': 'GPL-3.0-with-GCC-exception', 'deprecated': True},
|
||||
'graphics-gems': {'id': 'Graphics-Gems', 'deprecated': False},
|
||||
'gsoap-1.3b': {'id': 'gSOAP-1.3b', 'deprecated': False},
|
||||
'gtkbook': {'id': 'gtkbook', 'deprecated': False},
|
||||
'gutmann': {'id': 'Gutmann', 'deprecated': False},
|
||||
'haskellreport': {'id': 'HaskellReport', 'deprecated': False},
|
||||
'hdparm': {'id': 'hdparm', 'deprecated': False},
|
||||
'hidapi': {'id': 'HIDAPI', 'deprecated': False},
|
||||
'hippocratic-2.1': {'id': 'Hippocratic-2.1', 'deprecated': False},
|
||||
'hp-1986': {'id': 'HP-1986', 'deprecated': False},
|
||||
'hp-1989': {'id': 'HP-1989', 'deprecated': False},
|
||||
'hpnd': {'id': 'HPND', 'deprecated': False},
|
||||
'hpnd-dec': {'id': 'HPND-DEC', 'deprecated': False},
|
||||
'hpnd-doc': {'id': 'HPND-doc', 'deprecated': False},
|
||||
'hpnd-doc-sell': {'id': 'HPND-doc-sell', 'deprecated': False},
|
||||
'hpnd-export-us': {'id': 'HPND-export-US', 'deprecated': False},
|
||||
'hpnd-export-us-acknowledgement': {'id': 'HPND-export-US-acknowledgement', 'deprecated': False},
|
||||
'hpnd-export-us-modify': {'id': 'HPND-export-US-modify', 'deprecated': False},
|
||||
'hpnd-export2-us': {'id': 'HPND-export2-US', 'deprecated': False},
|
||||
'hpnd-fenneberg-livingston': {'id': 'HPND-Fenneberg-Livingston', 'deprecated': False},
|
||||
'hpnd-inria-imag': {'id': 'HPND-INRIA-IMAG', 'deprecated': False},
|
||||
'hpnd-intel': {'id': 'HPND-Intel', 'deprecated': False},
|
||||
'hpnd-kevlin-henney': {'id': 'HPND-Kevlin-Henney', 'deprecated': False},
|
||||
'hpnd-markus-kuhn': {'id': 'HPND-Markus-Kuhn', 'deprecated': False},
|
||||
'hpnd-merchantability-variant': {'id': 'HPND-merchantability-variant', 'deprecated': False},
|
||||
'hpnd-mit-disclaimer': {'id': 'HPND-MIT-disclaimer', 'deprecated': False},
|
||||
'hpnd-netrek': {'id': 'HPND-Netrek', 'deprecated': False},
|
||||
'hpnd-pbmplus': {'id': 'HPND-Pbmplus', 'deprecated': False},
|
||||
'hpnd-sell-mit-disclaimer-xserver': {'id': 'HPND-sell-MIT-disclaimer-xserver', 'deprecated': False},
|
||||
'hpnd-sell-regexpr': {'id': 'HPND-sell-regexpr', 'deprecated': False},
|
||||
'hpnd-sell-variant': {'id': 'HPND-sell-variant', 'deprecated': False},
|
||||
'hpnd-sell-variant-mit-disclaimer': {'id': 'HPND-sell-variant-MIT-disclaimer', 'deprecated': False},
|
||||
'hpnd-sell-variant-mit-disclaimer-rev': {'id': 'HPND-sell-variant-MIT-disclaimer-rev', 'deprecated': False},
|
||||
'hpnd-uc': {'id': 'HPND-UC', 'deprecated': False},
|
||||
'hpnd-uc-export-us': {'id': 'HPND-UC-export-US', 'deprecated': False},
|
||||
'htmltidy': {'id': 'HTMLTIDY', 'deprecated': False},
|
||||
'ibm-pibs': {'id': 'IBM-pibs', 'deprecated': False},
|
||||
'icu': {'id': 'ICU', 'deprecated': False},
|
||||
'iec-code-components-eula': {'id': 'IEC-Code-Components-EULA', 'deprecated': False},
|
||||
'ijg': {'id': 'IJG', 'deprecated': False},
|
||||
'ijg-short': {'id': 'IJG-short', 'deprecated': False},
|
||||
'imagemagick': {'id': 'ImageMagick', 'deprecated': False},
|
||||
'imatix': {'id': 'iMatix', 'deprecated': False},
|
||||
'imlib2': {'id': 'Imlib2', 'deprecated': False},
|
||||
'info-zip': {'id': 'Info-ZIP', 'deprecated': False},
|
||||
'inner-net-2.0': {'id': 'Inner-Net-2.0', 'deprecated': False},
|
||||
'intel': {'id': 'Intel', 'deprecated': False},
|
||||
'intel-acpi': {'id': 'Intel-ACPI', 'deprecated': False},
|
||||
'interbase-1.0': {'id': 'Interbase-1.0', 'deprecated': False},
|
||||
'ipa': {'id': 'IPA', 'deprecated': False},
|
||||
'ipl-1.0': {'id': 'IPL-1.0', 'deprecated': False},
|
||||
'isc': {'id': 'ISC', 'deprecated': False},
|
||||
'isc-veillard': {'id': 'ISC-Veillard', 'deprecated': False},
|
||||
'jam': {'id': 'Jam', 'deprecated': False},
|
||||
'jasper-2.0': {'id': 'JasPer-2.0', 'deprecated': False},
|
||||
'jpl-image': {'id': 'JPL-image', 'deprecated': False},
|
||||
'jpnic': {'id': 'JPNIC', 'deprecated': False},
|
||||
'json': {'id': 'JSON', 'deprecated': False},
|
||||
'kastrup': {'id': 'Kastrup', 'deprecated': False},
|
||||
'kazlib': {'id': 'Kazlib', 'deprecated': False},
|
||||
'knuth-ctan': {'id': 'Knuth-CTAN', 'deprecated': False},
|
||||
'lal-1.2': {'id': 'LAL-1.2', 'deprecated': False},
|
||||
'lal-1.3': {'id': 'LAL-1.3', 'deprecated': False},
|
||||
'latex2e': {'id': 'Latex2e', 'deprecated': False},
|
||||
'latex2e-translated-notice': {'id': 'Latex2e-translated-notice', 'deprecated': False},
|
||||
'leptonica': {'id': 'Leptonica', 'deprecated': False},
|
||||
'lgpl-2.0': {'id': 'LGPL-2.0', 'deprecated': True},
|
||||
'lgpl-2.0+': {'id': 'LGPL-2.0+', 'deprecated': True},
|
||||
'lgpl-2.0-only': {'id': 'LGPL-2.0-only', 'deprecated': False},
|
||||
'lgpl-2.0-or-later': {'id': 'LGPL-2.0-or-later', 'deprecated': False},
|
||||
'lgpl-2.1': {'id': 'LGPL-2.1', 'deprecated': True},
|
||||
'lgpl-2.1+': {'id': 'LGPL-2.1+', 'deprecated': True},
|
||||
'lgpl-2.1-only': {'id': 'LGPL-2.1-only', 'deprecated': False},
|
||||
'lgpl-2.1-or-later': {'id': 'LGPL-2.1-or-later', 'deprecated': False},
|
||||
'lgpl-3.0': {'id': 'LGPL-3.0', 'deprecated': True},
|
||||
'lgpl-3.0+': {'id': 'LGPL-3.0+', 'deprecated': True},
|
||||
'lgpl-3.0-only': {'id': 'LGPL-3.0-only', 'deprecated': False},
|
||||
'lgpl-3.0-or-later': {'id': 'LGPL-3.0-or-later', 'deprecated': False},
|
||||
'lgpllr': {'id': 'LGPLLR', 'deprecated': False},
|
||||
'libpng': {'id': 'Libpng', 'deprecated': False},
|
||||
'libpng-2.0': {'id': 'libpng-2.0', 'deprecated': False},
|
||||
'libselinux-1.0': {'id': 'libselinux-1.0', 'deprecated': False},
|
||||
'libtiff': {'id': 'libtiff', 'deprecated': False},
|
||||
'libutil-david-nugent': {'id': 'libutil-David-Nugent', 'deprecated': False},
|
||||
'liliq-p-1.1': {'id': 'LiLiQ-P-1.1', 'deprecated': False},
|
||||
'liliq-r-1.1': {'id': 'LiLiQ-R-1.1', 'deprecated': False},
|
||||
'liliq-rplus-1.1': {'id': 'LiLiQ-Rplus-1.1', 'deprecated': False},
|
||||
'linux-man-pages-1-para': {'id': 'Linux-man-pages-1-para', 'deprecated': False},
|
||||
'linux-man-pages-copyleft': {'id': 'Linux-man-pages-copyleft', 'deprecated': False},
|
||||
'linux-man-pages-copyleft-2-para': {'id': 'Linux-man-pages-copyleft-2-para', 'deprecated': False},
|
||||
'linux-man-pages-copyleft-var': {'id': 'Linux-man-pages-copyleft-var', 'deprecated': False},
|
||||
'linux-openib': {'id': 'Linux-OpenIB', 'deprecated': False},
|
||||
'loop': {'id': 'LOOP', 'deprecated': False},
|
||||
'lpd-document': {'id': 'LPD-document', 'deprecated': False},
|
||||
'lpl-1.0': {'id': 'LPL-1.0', 'deprecated': False},
|
||||
'lpl-1.02': {'id': 'LPL-1.02', 'deprecated': False},
|
||||
'lppl-1.0': {'id': 'LPPL-1.0', 'deprecated': False},
|
||||
'lppl-1.1': {'id': 'LPPL-1.1', 'deprecated': False},
|
||||
'lppl-1.2': {'id': 'LPPL-1.2', 'deprecated': False},
|
||||
'lppl-1.3a': {'id': 'LPPL-1.3a', 'deprecated': False},
|
||||
'lppl-1.3c': {'id': 'LPPL-1.3c', 'deprecated': False},
|
||||
'lsof': {'id': 'lsof', 'deprecated': False},
|
||||
'lucida-bitmap-fonts': {'id': 'Lucida-Bitmap-Fonts', 'deprecated': False},
|
||||
'lzma-sdk-9.11-to-9.20': {'id': 'LZMA-SDK-9.11-to-9.20', 'deprecated': False},
|
||||
'lzma-sdk-9.22': {'id': 'LZMA-SDK-9.22', 'deprecated': False},
|
||||
'mackerras-3-clause': {'id': 'Mackerras-3-Clause', 'deprecated': False},
|
||||
'mackerras-3-clause-acknowledgment': {'id': 'Mackerras-3-Clause-acknowledgment', 'deprecated': False},
|
||||
'magaz': {'id': 'magaz', 'deprecated': False},
|
||||
'mailprio': {'id': 'mailprio', 'deprecated': False},
|
||||
'makeindex': {'id': 'MakeIndex', 'deprecated': False},
|
||||
'martin-birgmeier': {'id': 'Martin-Birgmeier', 'deprecated': False},
|
||||
'mcphee-slideshow': {'id': 'McPhee-slideshow', 'deprecated': False},
|
||||
'metamail': {'id': 'metamail', 'deprecated': False},
|
||||
'minpack': {'id': 'Minpack', 'deprecated': False},
|
||||
'miros': {'id': 'MirOS', 'deprecated': False},
|
||||
'mit': {'id': 'MIT', 'deprecated': False},
|
||||
'mit-0': {'id': 'MIT-0', 'deprecated': False},
|
||||
'mit-advertising': {'id': 'MIT-advertising', 'deprecated': False},
|
||||
'mit-cmu': {'id': 'MIT-CMU', 'deprecated': False},
|
||||
'mit-enna': {'id': 'MIT-enna', 'deprecated': False},
|
||||
'mit-feh': {'id': 'MIT-feh', 'deprecated': False},
|
||||
'mit-festival': {'id': 'MIT-Festival', 'deprecated': False},
|
||||
'mit-khronos-old': {'id': 'MIT-Khronos-old', 'deprecated': False},
|
||||
'mit-modern-variant': {'id': 'MIT-Modern-Variant', 'deprecated': False},
|
||||
'mit-open-group': {'id': 'MIT-open-group', 'deprecated': False},
|
||||
'mit-testregex': {'id': 'MIT-testregex', 'deprecated': False},
|
||||
'mit-wu': {'id': 'MIT-Wu', 'deprecated': False},
|
||||
'mitnfa': {'id': 'MITNFA', 'deprecated': False},
|
||||
'mmixware': {'id': 'MMIXware', 'deprecated': False},
|
||||
'motosoto': {'id': 'Motosoto', 'deprecated': False},
|
||||
'mpeg-ssg': {'id': 'MPEG-SSG', 'deprecated': False},
|
||||
'mpi-permissive': {'id': 'mpi-permissive', 'deprecated': False},
|
||||
'mpich2': {'id': 'mpich2', 'deprecated': False},
|
||||
'mpl-1.0': {'id': 'MPL-1.0', 'deprecated': False},
|
||||
'mpl-1.1': {'id': 'MPL-1.1', 'deprecated': False},
|
||||
'mpl-2.0': {'id': 'MPL-2.0', 'deprecated': False},
|
||||
'mpl-2.0-no-copyleft-exception': {'id': 'MPL-2.0-no-copyleft-exception', 'deprecated': False},
|
||||
'mplus': {'id': 'mplus', 'deprecated': False},
|
||||
'ms-lpl': {'id': 'MS-LPL', 'deprecated': False},
|
||||
'ms-pl': {'id': 'MS-PL', 'deprecated': False},
|
||||
'ms-rl': {'id': 'MS-RL', 'deprecated': False},
|
||||
'mtll': {'id': 'MTLL', 'deprecated': False},
|
||||
'mulanpsl-1.0': {'id': 'MulanPSL-1.0', 'deprecated': False},
|
||||
'mulanpsl-2.0': {'id': 'MulanPSL-2.0', 'deprecated': False},
|
||||
'multics': {'id': 'Multics', 'deprecated': False},
|
||||
'mup': {'id': 'Mup', 'deprecated': False},
|
||||
'naist-2003': {'id': 'NAIST-2003', 'deprecated': False},
|
||||
'nasa-1.3': {'id': 'NASA-1.3', 'deprecated': False},
|
||||
'naumen': {'id': 'Naumen', 'deprecated': False},
|
||||
'nbpl-1.0': {'id': 'NBPL-1.0', 'deprecated': False},
|
||||
'ncbi-pd': {'id': 'NCBI-PD', 'deprecated': False},
|
||||
'ncgl-uk-2.0': {'id': 'NCGL-UK-2.0', 'deprecated': False},
|
||||
'ncl': {'id': 'NCL', 'deprecated': False},
|
||||
'ncsa': {'id': 'NCSA', 'deprecated': False},
|
||||
'net-snmp': {'id': 'Net-SNMP', 'deprecated': True},
|
||||
'netcdf': {'id': 'NetCDF', 'deprecated': False},
|
||||
'newsletr': {'id': 'Newsletr', 'deprecated': False},
|
||||
'ngpl': {'id': 'NGPL', 'deprecated': False},
|
||||
'nicta-1.0': {'id': 'NICTA-1.0', 'deprecated': False},
|
||||
'nist-pd': {'id': 'NIST-PD', 'deprecated': False},
|
||||
'nist-pd-fallback': {'id': 'NIST-PD-fallback', 'deprecated': False},
|
||||
'nist-software': {'id': 'NIST-Software', 'deprecated': False},
|
||||
'nlod-1.0': {'id': 'NLOD-1.0', 'deprecated': False},
|
||||
'nlod-2.0': {'id': 'NLOD-2.0', 'deprecated': False},
|
||||
'nlpl': {'id': 'NLPL', 'deprecated': False},
|
||||
'nokia': {'id': 'Nokia', 'deprecated': False},
|
||||
'nosl': {'id': 'NOSL', 'deprecated': False},
|
||||
'noweb': {'id': 'Noweb', 'deprecated': False},
|
||||
'npl-1.0': {'id': 'NPL-1.0', 'deprecated': False},
|
||||
'npl-1.1': {'id': 'NPL-1.1', 'deprecated': False},
|
||||
'nposl-3.0': {'id': 'NPOSL-3.0', 'deprecated': False},
|
||||
'nrl': {'id': 'NRL', 'deprecated': False},
|
||||
'ntp': {'id': 'NTP', 'deprecated': False},
|
||||
'ntp-0': {'id': 'NTP-0', 'deprecated': False},
|
||||
'nunit': {'id': 'Nunit', 'deprecated': True},
|
||||
'o-uda-1.0': {'id': 'O-UDA-1.0', 'deprecated': False},
|
||||
'oar': {'id': 'OAR', 'deprecated': False},
|
||||
'occt-pl': {'id': 'OCCT-PL', 'deprecated': False},
|
||||
'oclc-2.0': {'id': 'OCLC-2.0', 'deprecated': False},
|
||||
'odbl-1.0': {'id': 'ODbL-1.0', 'deprecated': False},
|
||||
'odc-by-1.0': {'id': 'ODC-By-1.0', 'deprecated': False},
|
||||
'offis': {'id': 'OFFIS', 'deprecated': False},
|
||||
'ofl-1.0': {'id': 'OFL-1.0', 'deprecated': False},
|
||||
'ofl-1.0-no-rfn': {'id': 'OFL-1.0-no-RFN', 'deprecated': False},
|
||||
'ofl-1.0-rfn': {'id': 'OFL-1.0-RFN', 'deprecated': False},
|
||||
'ofl-1.1': {'id': 'OFL-1.1', 'deprecated': False},
|
||||
'ofl-1.1-no-rfn': {'id': 'OFL-1.1-no-RFN', 'deprecated': False},
|
||||
'ofl-1.1-rfn': {'id': 'OFL-1.1-RFN', 'deprecated': False},
|
||||
'ogc-1.0': {'id': 'OGC-1.0', 'deprecated': False},
|
||||
'ogdl-taiwan-1.0': {'id': 'OGDL-Taiwan-1.0', 'deprecated': False},
|
||||
'ogl-canada-2.0': {'id': 'OGL-Canada-2.0', 'deprecated': False},
|
||||
'ogl-uk-1.0': {'id': 'OGL-UK-1.0', 'deprecated': False},
|
||||
'ogl-uk-2.0': {'id': 'OGL-UK-2.0', 'deprecated': False},
|
||||
'ogl-uk-3.0': {'id': 'OGL-UK-3.0', 'deprecated': False},
|
||||
'ogtsl': {'id': 'OGTSL', 'deprecated': False},
|
||||
'oldap-1.1': {'id': 'OLDAP-1.1', 'deprecated': False},
|
||||
'oldap-1.2': {'id': 'OLDAP-1.2', 'deprecated': False},
|
||||
'oldap-1.3': {'id': 'OLDAP-1.3', 'deprecated': False},
|
||||
'oldap-1.4': {'id': 'OLDAP-1.4', 'deprecated': False},
|
||||
'oldap-2.0': {'id': 'OLDAP-2.0', 'deprecated': False},
|
||||
'oldap-2.0.1': {'id': 'OLDAP-2.0.1', 'deprecated': False},
|
||||
'oldap-2.1': {'id': 'OLDAP-2.1', 'deprecated': False},
|
||||
'oldap-2.2': {'id': 'OLDAP-2.2', 'deprecated': False},
|
||||
'oldap-2.2.1': {'id': 'OLDAP-2.2.1', 'deprecated': False},
|
||||
'oldap-2.2.2': {'id': 'OLDAP-2.2.2', 'deprecated': False},
|
||||
'oldap-2.3': {'id': 'OLDAP-2.3', 'deprecated': False},
|
||||
'oldap-2.4': {'id': 'OLDAP-2.4', 'deprecated': False},
|
||||
'oldap-2.5': {'id': 'OLDAP-2.5', 'deprecated': False},
|
||||
'oldap-2.6': {'id': 'OLDAP-2.6', 'deprecated': False},
|
||||
'oldap-2.7': {'id': 'OLDAP-2.7', 'deprecated': False},
|
||||
'oldap-2.8': {'id': 'OLDAP-2.8', 'deprecated': False},
|
||||
'olfl-1.3': {'id': 'OLFL-1.3', 'deprecated': False},
|
||||
'oml': {'id': 'OML', 'deprecated': False},
|
||||
'openpbs-2.3': {'id': 'OpenPBS-2.3', 'deprecated': False},
|
||||
'openssl': {'id': 'OpenSSL', 'deprecated': False},
|
||||
'openssl-standalone': {'id': 'OpenSSL-standalone', 'deprecated': False},
|
||||
'openvision': {'id': 'OpenVision', 'deprecated': False},
|
||||
'opl-1.0': {'id': 'OPL-1.0', 'deprecated': False},
|
||||
'opl-uk-3.0': {'id': 'OPL-UK-3.0', 'deprecated': False},
|
||||
'opubl-1.0': {'id': 'OPUBL-1.0', 'deprecated': False},
|
||||
'oset-pl-2.1': {'id': 'OSET-PL-2.1', 'deprecated': False},
|
||||
'osl-1.0': {'id': 'OSL-1.0', 'deprecated': False},
|
||||
'osl-1.1': {'id': 'OSL-1.1', 'deprecated': False},
|
||||
'osl-2.0': {'id': 'OSL-2.0', 'deprecated': False},
|
||||
'osl-2.1': {'id': 'OSL-2.1', 'deprecated': False},
|
||||
'osl-3.0': {'id': 'OSL-3.0', 'deprecated': False},
|
||||
'padl': {'id': 'PADL', 'deprecated': False},
|
||||
'parity-6.0.0': {'id': 'Parity-6.0.0', 'deprecated': False},
|
||||
'parity-7.0.0': {'id': 'Parity-7.0.0', 'deprecated': False},
|
||||
'pddl-1.0': {'id': 'PDDL-1.0', 'deprecated': False},
|
||||
'php-3.0': {'id': 'PHP-3.0', 'deprecated': False},
|
||||
'php-3.01': {'id': 'PHP-3.01', 'deprecated': False},
|
||||
'pixar': {'id': 'Pixar', 'deprecated': False},
|
||||
'pkgconf': {'id': 'pkgconf', 'deprecated': False},
|
||||
'plexus': {'id': 'Plexus', 'deprecated': False},
|
||||
'pnmstitch': {'id': 'pnmstitch', 'deprecated': False},
|
||||
'polyform-noncommercial-1.0.0': {'id': 'PolyForm-Noncommercial-1.0.0', 'deprecated': False},
|
||||
'polyform-small-business-1.0.0': {'id': 'PolyForm-Small-Business-1.0.0', 'deprecated': False},
|
||||
'postgresql': {'id': 'PostgreSQL', 'deprecated': False},
|
||||
'ppl': {'id': 'PPL', 'deprecated': False},
|
||||
'psf-2.0': {'id': 'PSF-2.0', 'deprecated': False},
|
||||
'psfrag': {'id': 'psfrag', 'deprecated': False},
|
||||
'psutils': {'id': 'psutils', 'deprecated': False},
|
||||
'python-2.0': {'id': 'Python-2.0', 'deprecated': False},
|
||||
'python-2.0.1': {'id': 'Python-2.0.1', 'deprecated': False},
|
||||
'python-ldap': {'id': 'python-ldap', 'deprecated': False},
|
||||
'qhull': {'id': 'Qhull', 'deprecated': False},
|
||||
'qpl-1.0': {'id': 'QPL-1.0', 'deprecated': False},
|
||||
'qpl-1.0-inria-2004': {'id': 'QPL-1.0-INRIA-2004', 'deprecated': False},
|
||||
'radvd': {'id': 'radvd', 'deprecated': False},
|
||||
'rdisc': {'id': 'Rdisc', 'deprecated': False},
|
||||
'rhecos-1.1': {'id': 'RHeCos-1.1', 'deprecated': False},
|
||||
'rpl-1.1': {'id': 'RPL-1.1', 'deprecated': False},
|
||||
'rpl-1.5': {'id': 'RPL-1.5', 'deprecated': False},
|
||||
'rpsl-1.0': {'id': 'RPSL-1.0', 'deprecated': False},
|
||||
'rsa-md': {'id': 'RSA-MD', 'deprecated': False},
|
||||
'rscpl': {'id': 'RSCPL', 'deprecated': False},
|
||||
'ruby': {'id': 'Ruby', 'deprecated': False},
|
||||
'ruby-pty': {'id': 'Ruby-pty', 'deprecated': False},
|
||||
'sax-pd': {'id': 'SAX-PD', 'deprecated': False},
|
||||
'sax-pd-2.0': {'id': 'SAX-PD-2.0', 'deprecated': False},
|
||||
'saxpath': {'id': 'Saxpath', 'deprecated': False},
|
||||
'scea': {'id': 'SCEA', 'deprecated': False},
|
||||
'schemereport': {'id': 'SchemeReport', 'deprecated': False},
|
||||
'sendmail': {'id': 'Sendmail', 'deprecated': False},
|
||||
'sendmail-8.23': {'id': 'Sendmail-8.23', 'deprecated': False},
|
||||
'sgi-b-1.0': {'id': 'SGI-B-1.0', 'deprecated': False},
|
||||
'sgi-b-1.1': {'id': 'SGI-B-1.1', 'deprecated': False},
|
||||
'sgi-b-2.0': {'id': 'SGI-B-2.0', 'deprecated': False},
|
||||
'sgi-opengl': {'id': 'SGI-OpenGL', 'deprecated': False},
|
||||
'sgp4': {'id': 'SGP4', 'deprecated': False},
|
||||
'shl-0.5': {'id': 'SHL-0.5', 'deprecated': False},
|
||||
'shl-0.51': {'id': 'SHL-0.51', 'deprecated': False},
|
||||
'simpl-2.0': {'id': 'SimPL-2.0', 'deprecated': False},
|
||||
'sissl': {'id': 'SISSL', 'deprecated': False},
|
||||
'sissl-1.2': {'id': 'SISSL-1.2', 'deprecated': False},
|
||||
'sl': {'id': 'SL', 'deprecated': False},
|
||||
'sleepycat': {'id': 'Sleepycat', 'deprecated': False},
|
||||
'smlnj': {'id': 'SMLNJ', 'deprecated': False},
|
||||
'smppl': {'id': 'SMPPL', 'deprecated': False},
|
||||
'snia': {'id': 'SNIA', 'deprecated': False},
|
||||
'snprintf': {'id': 'snprintf', 'deprecated': False},
|
||||
'softsurfer': {'id': 'softSurfer', 'deprecated': False},
|
||||
'soundex': {'id': 'Soundex', 'deprecated': False},
|
||||
'spencer-86': {'id': 'Spencer-86', 'deprecated': False},
|
||||
'spencer-94': {'id': 'Spencer-94', 'deprecated': False},
|
||||
'spencer-99': {'id': 'Spencer-99', 'deprecated': False},
|
||||
'spl-1.0': {'id': 'SPL-1.0', 'deprecated': False},
|
||||
'ssh-keyscan': {'id': 'ssh-keyscan', 'deprecated': False},
|
||||
'ssh-openssh': {'id': 'SSH-OpenSSH', 'deprecated': False},
|
||||
'ssh-short': {'id': 'SSH-short', 'deprecated': False},
|
||||
'ssleay-standalone': {'id': 'SSLeay-standalone', 'deprecated': False},
|
||||
'sspl-1.0': {'id': 'SSPL-1.0', 'deprecated': False},
|
||||
'standardml-nj': {'id': 'StandardML-NJ', 'deprecated': True},
|
||||
'sugarcrm-1.1.3': {'id': 'SugarCRM-1.1.3', 'deprecated': False},
|
||||
'sun-ppp': {'id': 'Sun-PPP', 'deprecated': False},
|
||||
'sun-ppp-2000': {'id': 'Sun-PPP-2000', 'deprecated': False},
|
||||
'sunpro': {'id': 'SunPro', 'deprecated': False},
|
||||
'swl': {'id': 'SWL', 'deprecated': False},
|
||||
'swrule': {'id': 'swrule', 'deprecated': False},
|
||||
'symlinks': {'id': 'Symlinks', 'deprecated': False},
|
||||
'tapr-ohl-1.0': {'id': 'TAPR-OHL-1.0', 'deprecated': False},
|
||||
'tcl': {'id': 'TCL', 'deprecated': False},
|
||||
'tcp-wrappers': {'id': 'TCP-wrappers', 'deprecated': False},
|
||||
'termreadkey': {'id': 'TermReadKey', 'deprecated': False},
|
||||
'tgppl-1.0': {'id': 'TGPPL-1.0', 'deprecated': False},
|
||||
'threeparttable': {'id': 'threeparttable', 'deprecated': False},
|
||||
'tmate': {'id': 'TMate', 'deprecated': False},
|
||||
'torque-1.1': {'id': 'TORQUE-1.1', 'deprecated': False},
|
||||
'tosl': {'id': 'TOSL', 'deprecated': False},
|
||||
'tpdl': {'id': 'TPDL', 'deprecated': False},
|
||||
'tpl-1.0': {'id': 'TPL-1.0', 'deprecated': False},
|
||||
'ttwl': {'id': 'TTWL', 'deprecated': False},
|
||||
'ttyp0': {'id': 'TTYP0', 'deprecated': False},
|
||||
'tu-berlin-1.0': {'id': 'TU-Berlin-1.0', 'deprecated': False},
|
||||
'tu-berlin-2.0': {'id': 'TU-Berlin-2.0', 'deprecated': False},
|
||||
'ubuntu-font-1.0': {'id': 'Ubuntu-font-1.0', 'deprecated': False},
|
||||
'ucar': {'id': 'UCAR', 'deprecated': False},
|
||||
'ucl-1.0': {'id': 'UCL-1.0', 'deprecated': False},
|
||||
'ulem': {'id': 'ulem', 'deprecated': False},
|
||||
'umich-merit': {'id': 'UMich-Merit', 'deprecated': False},
|
||||
'unicode-3.0': {'id': 'Unicode-3.0', 'deprecated': False},
|
||||
'unicode-dfs-2015': {'id': 'Unicode-DFS-2015', 'deprecated': False},
|
||||
'unicode-dfs-2016': {'id': 'Unicode-DFS-2016', 'deprecated': False},
|
||||
'unicode-tou': {'id': 'Unicode-TOU', 'deprecated': False},
|
||||
'unixcrypt': {'id': 'UnixCrypt', 'deprecated': False},
|
||||
'unlicense': {'id': 'Unlicense', 'deprecated': False},
|
||||
'upl-1.0': {'id': 'UPL-1.0', 'deprecated': False},
|
||||
'urt-rle': {'id': 'URT-RLE', 'deprecated': False},
|
||||
'vim': {'id': 'Vim', 'deprecated': False},
|
||||
'vostrom': {'id': 'VOSTROM', 'deprecated': False},
|
||||
'vsl-1.0': {'id': 'VSL-1.0', 'deprecated': False},
|
||||
'w3c': {'id': 'W3C', 'deprecated': False},
|
||||
'w3c-19980720': {'id': 'W3C-19980720', 'deprecated': False},
|
||||
'w3c-20150513': {'id': 'W3C-20150513', 'deprecated': False},
|
||||
'w3m': {'id': 'w3m', 'deprecated': False},
|
||||
'watcom-1.0': {'id': 'Watcom-1.0', 'deprecated': False},
|
||||
'widget-workshop': {'id': 'Widget-Workshop', 'deprecated': False},
|
||||
'wsuipa': {'id': 'Wsuipa', 'deprecated': False},
|
||||
'wtfpl': {'id': 'WTFPL', 'deprecated': False},
|
||||
'wxwindows': {'id': 'wxWindows', 'deprecated': True},
|
||||
'x11': {'id': 'X11', 'deprecated': False},
|
||||
'x11-distribute-modifications-variant': {'id': 'X11-distribute-modifications-variant', 'deprecated': False},
|
||||
'x11-swapped': {'id': 'X11-swapped', 'deprecated': False},
|
||||
'xdebug-1.03': {'id': 'Xdebug-1.03', 'deprecated': False},
|
||||
'xerox': {'id': 'Xerox', 'deprecated': False},
|
||||
'xfig': {'id': 'Xfig', 'deprecated': False},
|
||||
'xfree86-1.1': {'id': 'XFree86-1.1', 'deprecated': False},
|
||||
'xinetd': {'id': 'xinetd', 'deprecated': False},
|
||||
'xkeyboard-config-zinoviev': {'id': 'xkeyboard-config-Zinoviev', 'deprecated': False},
|
||||
'xlock': {'id': 'xlock', 'deprecated': False},
|
||||
'xnet': {'id': 'Xnet', 'deprecated': False},
|
||||
'xpp': {'id': 'xpp', 'deprecated': False},
|
||||
'xskat': {'id': 'XSkat', 'deprecated': False},
|
||||
'xzoom': {'id': 'xzoom', 'deprecated': False},
|
||||
'ypl-1.0': {'id': 'YPL-1.0', 'deprecated': False},
|
||||
'ypl-1.1': {'id': 'YPL-1.1', 'deprecated': False},
|
||||
'zed': {'id': 'Zed', 'deprecated': False},
|
||||
'zeeff': {'id': 'Zeeff', 'deprecated': False},
|
||||
'zend-2.0': {'id': 'Zend-2.0', 'deprecated': False},
|
||||
'zimbra-1.3': {'id': 'Zimbra-1.3', 'deprecated': False},
|
||||
'zimbra-1.4': {'id': 'Zimbra-1.4', 'deprecated': False},
|
||||
'zlib': {'id': 'Zlib', 'deprecated': False},
|
||||
'zlib-acknowledgement': {'id': 'zlib-acknowledgement', 'deprecated': False},
|
||||
'zpl-1.1': {'id': 'ZPL-1.1', 'deprecated': False},
|
||||
'zpl-2.0': {'id': 'ZPL-2.0', 'deprecated': False},
|
||||
'zpl-2.1': {'id': 'ZPL-2.1', 'deprecated': False},
|
||||
}
|
||||
|
||||
EXCEPTIONS: dict[str, SPDXException] = {
|
||||
'389-exception': {'id': '389-exception', 'deprecated': False},
|
||||
'asterisk-exception': {'id': 'Asterisk-exception', 'deprecated': False},
|
||||
'asterisk-linking-protocols-exception': {'id': 'Asterisk-linking-protocols-exception', 'deprecated': False},
|
||||
'autoconf-exception-2.0': {'id': 'Autoconf-exception-2.0', 'deprecated': False},
|
||||
'autoconf-exception-3.0': {'id': 'Autoconf-exception-3.0', 'deprecated': False},
|
||||
'autoconf-exception-generic': {'id': 'Autoconf-exception-generic', 'deprecated': False},
|
||||
'autoconf-exception-generic-3.0': {'id': 'Autoconf-exception-generic-3.0', 'deprecated': False},
|
||||
'autoconf-exception-macro': {'id': 'Autoconf-exception-macro', 'deprecated': False},
|
||||
'bison-exception-1.24': {'id': 'Bison-exception-1.24', 'deprecated': False},
|
||||
'bison-exception-2.2': {'id': 'Bison-exception-2.2', 'deprecated': False},
|
||||
'bootloader-exception': {'id': 'Bootloader-exception', 'deprecated': False},
|
||||
'classpath-exception-2.0': {'id': 'Classpath-exception-2.0', 'deprecated': False},
|
||||
'clisp-exception-2.0': {'id': 'CLISP-exception-2.0', 'deprecated': False},
|
||||
'cryptsetup-openssl-exception': {'id': 'cryptsetup-OpenSSL-exception', 'deprecated': False},
|
||||
'digirule-foss-exception': {'id': 'DigiRule-FOSS-exception', 'deprecated': False},
|
||||
'ecos-exception-2.0': {'id': 'eCos-exception-2.0', 'deprecated': False},
|
||||
'erlang-otp-linking-exception': {'id': 'erlang-otp-linking-exception', 'deprecated': False},
|
||||
'fawkes-runtime-exception': {'id': 'Fawkes-Runtime-exception', 'deprecated': False},
|
||||
'fltk-exception': {'id': 'FLTK-exception', 'deprecated': False},
|
||||
'fmt-exception': {'id': 'fmt-exception', 'deprecated': False},
|
||||
'font-exception-2.0': {'id': 'Font-exception-2.0', 'deprecated': False},
|
||||
'freertos-exception-2.0': {'id': 'freertos-exception-2.0', 'deprecated': False},
|
||||
'gcc-exception-2.0': {'id': 'GCC-exception-2.0', 'deprecated': False},
|
||||
'gcc-exception-2.0-note': {'id': 'GCC-exception-2.0-note', 'deprecated': False},
|
||||
'gcc-exception-3.1': {'id': 'GCC-exception-3.1', 'deprecated': False},
|
||||
'gmsh-exception': {'id': 'Gmsh-exception', 'deprecated': False},
|
||||
'gnat-exception': {'id': 'GNAT-exception', 'deprecated': False},
|
||||
'gnome-examples-exception': {'id': 'GNOME-examples-exception', 'deprecated': False},
|
||||
'gnu-compiler-exception': {'id': 'GNU-compiler-exception', 'deprecated': False},
|
||||
'gnu-javamail-exception': {'id': 'gnu-javamail-exception', 'deprecated': False},
|
||||
'gpl-3.0-interface-exception': {'id': 'GPL-3.0-interface-exception', 'deprecated': False},
|
||||
'gpl-3.0-linking-exception': {'id': 'GPL-3.0-linking-exception', 'deprecated': False},
|
||||
'gpl-3.0-linking-source-exception': {'id': 'GPL-3.0-linking-source-exception', 'deprecated': False},
|
||||
'gpl-cc-1.0': {'id': 'GPL-CC-1.0', 'deprecated': False},
|
||||
'gstreamer-exception-2005': {'id': 'GStreamer-exception-2005', 'deprecated': False},
|
||||
'gstreamer-exception-2008': {'id': 'GStreamer-exception-2008', 'deprecated': False},
|
||||
'i2p-gpl-java-exception': {'id': 'i2p-gpl-java-exception', 'deprecated': False},
|
||||
'kicad-libraries-exception': {'id': 'KiCad-libraries-exception', 'deprecated': False},
|
||||
'lgpl-3.0-linking-exception': {'id': 'LGPL-3.0-linking-exception', 'deprecated': False},
|
||||
'libpri-openh323-exception': {'id': 'libpri-OpenH323-exception', 'deprecated': False},
|
||||
'libtool-exception': {'id': 'Libtool-exception', 'deprecated': False},
|
||||
'linux-syscall-note': {'id': 'Linux-syscall-note', 'deprecated': False},
|
||||
'llgpl': {'id': 'LLGPL', 'deprecated': False},
|
||||
'llvm-exception': {'id': 'LLVM-exception', 'deprecated': False},
|
||||
'lzma-exception': {'id': 'LZMA-exception', 'deprecated': False},
|
||||
'mif-exception': {'id': 'mif-exception', 'deprecated': False},
|
||||
'nokia-qt-exception-1.1': {'id': 'Nokia-Qt-exception-1.1', 'deprecated': True},
|
||||
'ocaml-lgpl-linking-exception': {'id': 'OCaml-LGPL-linking-exception', 'deprecated': False},
|
||||
'occt-exception-1.0': {'id': 'OCCT-exception-1.0', 'deprecated': False},
|
||||
'openjdk-assembly-exception-1.0': {'id': 'OpenJDK-assembly-exception-1.0', 'deprecated': False},
|
||||
'openvpn-openssl-exception': {'id': 'openvpn-openssl-exception', 'deprecated': False},
|
||||
'pcre2-exception': {'id': 'PCRE2-exception', 'deprecated': False},
|
||||
'ps-or-pdf-font-exception-20170817': {'id': 'PS-or-PDF-font-exception-20170817', 'deprecated': False},
|
||||
'qpl-1.0-inria-2004-exception': {'id': 'QPL-1.0-INRIA-2004-exception', 'deprecated': False},
|
||||
'qt-gpl-exception-1.0': {'id': 'Qt-GPL-exception-1.0', 'deprecated': False},
|
||||
'qt-lgpl-exception-1.1': {'id': 'Qt-LGPL-exception-1.1', 'deprecated': False},
|
||||
'qwt-exception-1.0': {'id': 'Qwt-exception-1.0', 'deprecated': False},
|
||||
'romic-exception': {'id': 'romic-exception', 'deprecated': False},
|
||||
'rrdtool-floss-exception-2.0': {'id': 'RRDtool-FLOSS-exception-2.0', 'deprecated': False},
|
||||
'sane-exception': {'id': 'SANE-exception', 'deprecated': False},
|
||||
'shl-2.0': {'id': 'SHL-2.0', 'deprecated': False},
|
||||
'shl-2.1': {'id': 'SHL-2.1', 'deprecated': False},
|
||||
'stunnel-exception': {'id': 'stunnel-exception', 'deprecated': False},
|
||||
'swi-exception': {'id': 'SWI-exception', 'deprecated': False},
|
||||
'swift-exception': {'id': 'Swift-exception', 'deprecated': False},
|
||||
'texinfo-exception': {'id': 'Texinfo-exception', 'deprecated': False},
|
||||
'u-boot-exception-2.0': {'id': 'u-boot-exception-2.0', 'deprecated': False},
|
||||
'ubdl-exception': {'id': 'UBDL-exception', 'deprecated': False},
|
||||
'universal-foss-exception-1.0': {'id': 'Universal-FOSS-exception-1.0', 'deprecated': False},
|
||||
'vsftpd-openssl-exception': {'id': 'vsftpd-openssl-exception', 'deprecated': False},
|
||||
'wxwindows-exception-3.1': {'id': 'WxWindows-exception-3.1', 'deprecated': False},
|
||||
'x11vnc-openssl-exception': {'id': 'x11vnc-openssl-exception', 'deprecated': False},
|
||||
}
|
||||
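Both tables key their entries by the lowercased SPDX identifier and keep the canonical casing under 'id', so a case-insensitive lookup only needs to lowercase the query. A minimal sketch of that lookup, assuming the license table above is bound to a name such as LICENSES (as in upstream packaging's licenses module; the helper name is hypothetical):

    def canonical_license_id(query: str) -> str | None:
        # Keys are lowercased SPDX ids; 'id' holds the canonical casing.
        entry = LICENSES.get(query.lower())
        return entry['id'] if entry else None

    canonical_license_id('gpl-3.0-ONLY')  # -> 'GPL-3.0-only'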
@@ -0,0 +1,362 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

import operator
import os
import platform
import sys
from typing import AbstractSet, Any, Callable, Literal, TypedDict, Union, cast

from ._parser import MarkerAtom, MarkerList, Op, Value, Variable
from ._parser import parse_marker as _parse_marker
from ._tokenizer import ParserSyntaxError
from .specifiers import InvalidSpecifier, Specifier
from .utils import canonicalize_name

__all__ = [
    "EvaluateContext",
    "InvalidMarker",
    "Marker",
    "UndefinedComparison",
    "UndefinedEnvironmentName",
    "default_environment",
]

Operator = Callable[[str, Union[str, AbstractSet[str]]], bool]
EvaluateContext = Literal["metadata", "lock_file", "requirement"]
MARKERS_ALLOWING_SET = {"extras", "dependency_groups"}


class InvalidMarker(ValueError):
    """
    An invalid marker was found, users should refer to PEP 508.
    """


class UndefinedComparison(ValueError):
    """
    An invalid operation was attempted on a value that doesn't support it.
    """


class UndefinedEnvironmentName(ValueError):
    """
    A name was attempted to be used that does not exist inside of the
    environment.
    """


class Environment(TypedDict):
    implementation_name: str
    """The implementation's identifier, e.g. ``'cpython'``."""

    implementation_version: str
    """
    The implementation's version, e.g. ``'3.13.0a2'`` for CPython 3.13.0a2, or
    ``'7.3.13'`` for PyPy3.10 v7.3.13.
    """

    os_name: str
    """
    The value of :py:data:`os.name`. The name of the operating system dependent module
    imported, e.g. ``'posix'``.
    """

    platform_machine: str
    """
    Returns the machine type, e.g. ``'i386'``.

    An empty string if the value cannot be determined.
    """

    platform_release: str
    """
    The system's release, e.g. ``'2.2.0'`` or ``'NT'``.

    An empty string if the value cannot be determined.
    """

    platform_system: str
    """
    The system/OS name, e.g. ``'Linux'``, ``'Windows'`` or ``'Java'``.

    An empty string if the value cannot be determined.
    """

    platform_version: str
    """
    The system's release version, e.g. ``'#3 on degas'``.

    An empty string if the value cannot be determined.
    """

    python_full_version: str
    """
    The Python version as string ``'major.minor.patchlevel'``.

    Note that unlike the Python :py:data:`sys.version`, this value will always include
    the patchlevel (it defaults to 0).
    """

    platform_python_implementation: str
    """
    A string identifying the Python implementation, e.g. ``'CPython'``.
    """

    python_version: str
    """The Python version as string ``'major.minor'``."""

    sys_platform: str
    """
    This string contains a platform identifier that can be used to append
    platform-specific components to :py:data:`sys.path`, for instance.

    For Unix systems, except on Linux and AIX, this is the lowercased OS name as
    returned by ``uname -s`` with the first part of the version as returned by
    ``uname -r`` appended, e.g. ``'sunos5'`` or ``'freebsd8'``, at the time when Python
    was built.
    """


def _normalize_extra_values(results: Any) -> Any:
    """
    Normalize extra values.
    """
    if isinstance(results[0], tuple):
        lhs, op, rhs = results[0]
        if isinstance(lhs, Variable) and lhs.value == "extra":
            normalized_extra = canonicalize_name(rhs.value)
            rhs = Value(normalized_extra)
        elif isinstance(rhs, Variable) and rhs.value == "extra":
            normalized_extra = canonicalize_name(lhs.value)
            lhs = Value(normalized_extra)
        results[0] = lhs, op, rhs
    return results


def _format_marker(
    marker: list[str] | MarkerAtom | str, first: bool | None = True
) -> str:
    assert isinstance(marker, (list, tuple, str))

    # Sometimes we have a structure like [[...]] which is a single item list
    # where the single item is itself its own list. In that case we want to
    # skip the rest of this function so that we don't get extraneous () on the
    # outside.
    if (
        isinstance(marker, list)
        and len(marker) == 1
        and isinstance(marker[0], (list, tuple))
    ):
        return _format_marker(marker[0])

    if isinstance(marker, list):
        inner = (_format_marker(m, first=False) for m in marker)
        if first:
            return " ".join(inner)
        else:
            return "(" + " ".join(inner) + ")"
    elif isinstance(marker, tuple):
        return " ".join([m.serialize() for m in marker])
    else:
        return marker


_operators: dict[str, Operator] = {
    "in": lambda lhs, rhs: lhs in rhs,
    "not in": lambda lhs, rhs: lhs not in rhs,
    "<": operator.lt,
    "<=": operator.le,
    "==": operator.eq,
    "!=": operator.ne,
    ">=": operator.ge,
    ">": operator.gt,
}


def _eval_op(lhs: str, op: Op, rhs: str | AbstractSet[str]) -> bool:
    if isinstance(rhs, str):
        try:
            spec = Specifier("".join([op.serialize(), rhs]))
        except InvalidSpecifier:
            pass
        else:
            return spec.contains(lhs, prereleases=True)

    oper: Operator | None = _operators.get(op.serialize())
    if oper is None:
        raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")

    return oper(lhs, rhs)
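The Specifier fast path above is what gives markers PEP 440 version semantics: when the operator plus a string right-hand side form a valid specifier, the comparison uses version ordering, and the _operators table is only the fallback. A short illustration with the same Specifier class imported at the top of this file:

    from pip._vendor.packaging.specifiers import Specifier

    Specifier('<3.10').contains('3.9.1', prereleases=True)  # True: version ordering
    '3.9.1' < '3.10'                                        # False: plain string ordering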
def _normalize(
    lhs: str, rhs: str | AbstractSet[str], key: str
) -> tuple[str, str | AbstractSet[str]]:
    # PEP 685 – Comparison of extra names for optional distribution dependencies
    # https://peps.python.org/pep-0685/
    # > When comparing extra names, tools MUST normalize the names being
    # > compared using the semantics outlined in PEP 503 for names
    if key == "extra":
        assert isinstance(rhs, str), "extra value must be a string"
        return (canonicalize_name(lhs), canonicalize_name(rhs))
    if key in MARKERS_ALLOWING_SET:
        if isinstance(rhs, str):  # pragma: no cover
            return (canonicalize_name(lhs), canonicalize_name(rhs))
        else:
            return (canonicalize_name(lhs), {canonicalize_name(v) for v in rhs})

    # other environment markers don't have such standards
    return lhs, rhs


def _evaluate_markers(
    markers: MarkerList, environment: dict[str, str | AbstractSet[str]]
) -> bool:
    groups: list[list[bool]] = [[]]

    for marker in markers:
        assert isinstance(marker, (list, tuple, str))

        if isinstance(marker, list):
            groups[-1].append(_evaluate_markers(marker, environment))
        elif isinstance(marker, tuple):
            lhs, op, rhs = marker

            if isinstance(lhs, Variable):
                environment_key = lhs.value
                lhs_value = environment[environment_key]
                rhs_value = rhs.value
            else:
                lhs_value = lhs.value
                environment_key = rhs.value
                rhs_value = environment[environment_key]
            assert isinstance(lhs_value, str), "lhs must be a string"
            lhs_value, rhs_value = _normalize(lhs_value, rhs_value, key=environment_key)
            groups[-1].append(_eval_op(lhs_value, op, rhs_value))
        else:
            assert marker in ["and", "or"]
            if marker == "or":
                groups.append([])

    return any(all(item) for item in groups)


def format_full_version(info: sys._version_info) -> str:
    version = f"{info.major}.{info.minor}.{info.micro}"
    kind = info.releaselevel
    if kind != "final":
        version += kind[0] + str(info.serial)
    return version


def default_environment() -> Environment:
    iver = format_full_version(sys.implementation.version)
    implementation_name = sys.implementation.name
    return {
        "implementation_name": implementation_name,
        "implementation_version": iver,
        "os_name": os.name,
        "platform_machine": platform.machine(),
        "platform_release": platform.release(),
        "platform_system": platform.system(),
        "platform_version": platform.version(),
        "python_full_version": platform.python_version(),
        "platform_python_implementation": platform.python_implementation(),
        "python_version": ".".join(platform.python_version_tuple()[:2]),
        "sys_platform": sys.platform,
    }
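The returned mapping supplies every PEP 508 marker variable from the running interpreter. For instance (values are illustrative and vary by platform and build):

    from pip._vendor.packaging.markers import default_environment

    env = default_environment()
    env['python_version']  # e.g. '3.11' on a CPython 3.11 interpreter
    env['sys_platform']    # e.g. 'linux' or 'win32'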
class Marker:
    def __init__(self, marker: str) -> None:
        # Note: We create a Marker object without calling this constructor in
        #       packaging.requirements.Requirement. If any additional logic is
        #       added here, make sure to mirror/adapt Requirement.
        try:
            self._markers = _normalize_extra_values(_parse_marker(marker))
            # The attribute `_markers` can be described in terms of a recursive type:
            # MarkerList = List[Union[Tuple[Node, ...], str, MarkerList]]
            #
            # For example, the following expression:
            # python_version > "3.6" or (python_version == "3.6" and os_name == "unix")
            #
            # is parsed into:
            # [
            #     (<Variable('python_version')>, <Op('>')>, <Value('3.6')>),
            #     'and',
            #     [
            #         (<Variable('python_version')>, <Op('==')>, <Value('3.6')>),
            #         'or',
            #         (<Variable('os_name')>, <Op('==')>, <Value('unix')>)
            #     ]
            # ]
        except ParserSyntaxError as e:
            raise InvalidMarker(str(e)) from e

    def __str__(self) -> str:
        return _format_marker(self._markers)

    def __repr__(self) -> str:
        return f"<Marker('{self}')>"

    def __hash__(self) -> int:
        return hash((self.__class__.__name__, str(self)))

    def __eq__(self, other: Any) -> bool:
        if not isinstance(other, Marker):
            return NotImplemented

        return str(self) == str(other)

    def evaluate(
        self,
        environment: dict[str, str] | None = None,
        context: EvaluateContext = "metadata",
    ) -> bool:
        """Evaluate a marker.

        Return the boolean from evaluating the given marker against the
        environment. environment is an optional argument to override all or
        part of the determined environment. The *context* parameter specifies what
        context the markers are being evaluated for, which influences what markers
        are considered valid. Acceptable values are "metadata" (for core metadata;
        default), "lock_file", and "requirement" (i.e. all other situations).

        The environment is determined from the current Python process.
        """
        current_environment = cast(
            "dict[str, str | AbstractSet[str]]", default_environment()
        )
        if context == "lock_file":
            current_environment.update(
                extras=frozenset(), dependency_groups=frozenset()
            )
        elif context == "metadata":
            current_environment["extra"] = ""
        if environment is not None:
            current_environment.update(environment)
            # The API used to allow setting extra to None. We need to handle this
            # case for backwards compatibility.
            if "extra" in current_environment and current_environment["extra"] is None:
                current_environment["extra"] = ""

        return _evaluate_markers(
            self._markers, _repair_python_full_version(current_environment)
        )


def _repair_python_full_version(
    env: dict[str, str | AbstractSet[str]],
) -> dict[str, str | AbstractSet[str]]:
    """
    Work around platform.python_version() returning something that is not PEP 440
    compliant for non-tagged Python builds.
    """
    python_full_version = cast(str, env["python_full_version"])
    if python_full_version.endswith("+"):
        env["python_full_version"] = f"{python_full_version}local"
    return env
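Taken together, this file gives pip a small evaluator for PEP 508 environment markers. A brief usage sketch against the running interpreter:

    from pip._vendor.packaging.markers import Marker

    m = Marker('python_version >= "3.8" and os_name == "posix"')
    m.evaluate()                   # True on a POSIX CPython >= 3.8
    m.evaluate({'os_name': 'nt'})  # override part of the detected environment
    Marker('extra == "testing"').evaluate()  # False: the default "metadata" context sets extra to ''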
@@ -0,0 +1,862 @@
from __future__ import annotations

import email.feedparser
import email.header
import email.message
import email.parser
import email.policy
import pathlib
import sys
import typing
from typing import (
    Any,
    Callable,
    Generic,
    Literal,
    TypedDict,
    cast,
)

from . import licenses, requirements, specifiers, utils
from . import version as version_module
from .licenses import NormalizedLicenseExpression

T = typing.TypeVar("T")


if sys.version_info >= (3, 11):  # pragma: no cover
    ExceptionGroup = ExceptionGroup
else:  # pragma: no cover

    class ExceptionGroup(Exception):
        """A minimal implementation of :external:exc:`ExceptionGroup` from Python 3.11.

        If :external:exc:`ExceptionGroup` is already defined by Python itself,
        that version is used instead.
        """

        message: str
        exceptions: list[Exception]

        def __init__(self, message: str, exceptions: list[Exception]) -> None:
            self.message = message
            self.exceptions = exceptions

        def __repr__(self) -> str:
            return f"{self.__class__.__name__}({self.message!r}, {self.exceptions!r})"


class InvalidMetadata(ValueError):
    """A metadata field contains invalid data."""

    field: str
    """The name of the field that contains invalid data."""

    def __init__(self, field: str, message: str) -> None:
        self.field = field
        super().__init__(message)


# The RawMetadata class attempts to make as few assumptions about the underlying
# serialization formats as possible. The idea is that as long as a serialization
# format offers some very basic primitives in *some* way then we can support
# serializing to and from that format.
class RawMetadata(TypedDict, total=False):
    """A dictionary of raw core metadata.

    Each field in core metadata maps to a key of this dictionary (when data is
    provided). The key is lower-case and underscores are used instead of dashes
    compared to the equivalent core metadata field. Any core metadata field that
    can be specified multiple times or can hold multiple values in a single
    field has a key with a plural name. See :class:`Metadata` whose attributes
    match the keys of this dictionary.

    Core metadata fields that can be specified multiple times are stored as a
    list or dict depending on which is appropriate for the field. Any fields
    which hold multiple values in a single field are stored as a list.

    """

    # Metadata 1.0 - PEP 241
    metadata_version: str
    name: str
    version: str
    platforms: list[str]
    summary: str
    description: str
    keywords: list[str]
    home_page: str
    author: str
    author_email: str
    license: str

    # Metadata 1.1 - PEP 314
    supported_platforms: list[str]
    download_url: str
    classifiers: list[str]
    requires: list[str]
    provides: list[str]
    obsoletes: list[str]

    # Metadata 1.2 - PEP 345
    maintainer: str
    maintainer_email: str
    requires_dist: list[str]
    provides_dist: list[str]
    obsoletes_dist: list[str]
    requires_python: str
    requires_external: list[str]
    project_urls: dict[str, str]

    # Metadata 2.0
    # PEP 426 attempted to completely revamp the metadata format
    # but got stuck without ever being able to build consensus on
    # it and ultimately ended up withdrawn.
    #
    # However, a number of tools had started emitting METADATA with
    # `2.0` Metadata-Version, so for historical reasons, this version
    # was skipped.

    # Metadata 2.1 - PEP 566
    description_content_type: str
    provides_extra: list[str]

    # Metadata 2.2 - PEP 643
    dynamic: list[str]

    # Metadata 2.3 - PEP 685
    # No new fields were added in PEP 685, just some edge cases were
    # tightened up to provide better interoperability.

    # Metadata 2.4 - PEP 639
    license_expression: str
    license_files: list[str]


_STRING_FIELDS = {
    "author",
    "author_email",
    "description",
    "description_content_type",
    "download_url",
    "home_page",
    "license",
    "license_expression",
    "maintainer",
    "maintainer_email",
    "metadata_version",
    "name",
    "requires_python",
    "summary",
    "version",
}

_LIST_FIELDS = {
    "classifiers",
    "dynamic",
    "license_files",
    "obsoletes",
    "obsoletes_dist",
    "platforms",
    "provides",
    "provides_dist",
    "provides_extra",
    "requires",
    "requires_dist",
    "requires_external",
    "supported_platforms",
}

_DICT_FIELDS = {
    "project_urls",
}
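Since RawMetadata is declared with total=False, any subset of keys is a valid value; the three field sets above encode which keys hold strings, lists, or a dict. For example (the project shown is hypothetical, for illustration only):

    raw: RawMetadata = {
        'metadata_version': '2.4',
        'name': 'example-project',
        'version': '1.0.0',
        'classifiers': ['Programming Language :: Python :: 3'],  # a _LIST_FIELDS key
        'project_urls': {'Homepage': 'https://example.org'},     # the _DICT_FIELDS key
    }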
def _parse_keywords(data: str) -> list[str]:
    """Split a string of comma-separated keywords into a list of keywords."""
    return [k.strip() for k in data.split(",")]


def _parse_project_urls(data: list[str]) -> dict[str, str]:
    """Parse a list of label/URL string pairings separated by a comma."""
    urls = {}
    for pair in data:
        # Our logic is slightly tricky here as we want to try and do
        # *something* reasonable with malformed data.
        #
        # The main thing that we have to worry about, is data that does
        # not have a ',' at all to split the label from the value. There
        # isn't a singular right answer here, and we will fail validation
        # later on (if the caller is validating) so it doesn't *really*
        # matter, but since the missing value has to be an empty str
        # and our return value is dict[str, str], if we let the key
        # be the missing value, then they'd have multiple '' values that
        # overwrite each other in an accumulating dict.
        #
        # The other potential issue is that it's possible to have the
        # same label multiple times in the metadata, with no solid "right"
        # answer with what to do in that case. As such, we'll do the only
        # thing we can, which is treat the field as unparseable and add it
        # to our list of unparsed fields.
        parts = [p.strip() for p in pair.split(",", 1)]
        parts.extend([""] * (max(0, 2 - len(parts))))  # Ensure 2 items

        # TODO: The spec doesn't say anything about if the keys should be
        #       considered case sensitive or not... logically they should
        #       be case-preserving and case-insensitive, but doing that
        #       would open up more cases where we might have duplicate
        #       entries.
        label, url = parts
        if label in urls:
            # The label already exists in our set of urls, so this field
            # is unparseable, and we can just add the whole thing to our
            # unparseable data and stop processing it.
            raise KeyError("duplicate labels in project urls")
        urls[label] = url

    return urls
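The malformed-data handling above works out to the following behavior (inputs are illustrative):

    _parse_project_urls(['Homepage, https://example.org'])
    # {'Homepage': 'https://example.org'}

    _parse_project_urls(['NoCommaHere'])
    # {'NoCommaHere': ''}  (the missing URL becomes an empty string)

    _parse_project_urls(['Docs, https://a.example', 'Docs, https://b.example'])
    # KeyError: duplicate labels in project urls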
def _get_payload(msg: email.message.Message, source: bytes | str) -> str:
    """Get the body of the message."""
    # If our source is a str, then our caller has managed encodings for us,
    # and we don't need to deal with it.
    if isinstance(source, str):
        payload = msg.get_payload()
        assert isinstance(payload, str)
        return payload
    # If our source is a bytes, then we're managing the encoding and we need
    # to deal with it.
    else:
        bpayload = msg.get_payload(decode=True)
        assert isinstance(bpayload, bytes)
        try:
            return bpayload.decode("utf8", "strict")
        except UnicodeDecodeError as exc:
            raise ValueError("payload in an invalid encoding") from exc


# The various parse_FORMAT functions here are intended to be as lenient as
# possible in their parsing, while still returning a correctly typed
# RawMetadata.
#
# To aid in this, we also generally want to do as little touching of the
# data as possible, except where there are possibly some historic holdovers
# that make valid data awkward to work with.
#
# While this is a lower level, intermediate format than our ``Metadata``
# class, some light touch ups can make a massive difference in usability.

# Map METADATA fields to RawMetadata.
_EMAIL_TO_RAW_MAPPING = {
    "author": "author",
    "author-email": "author_email",
    "classifier": "classifiers",
    "description": "description",
    "description-content-type": "description_content_type",
    "download-url": "download_url",
    "dynamic": "dynamic",
    "home-page": "home_page",
    "keywords": "keywords",
    "license": "license",
    "license-expression": "license_expression",
    "license-file": "license_files",
    "maintainer": "maintainer",
    "maintainer-email": "maintainer_email",
    "metadata-version": "metadata_version",
    "name": "name",
    "obsoletes": "obsoletes",
    "obsoletes-dist": "obsoletes_dist",
    "platform": "platforms",
    "project-url": "project_urls",
    "provides": "provides",
    "provides-dist": "provides_dist",
    "provides-extra": "provides_extra",
    "requires": "requires",
    "requires-dist": "requires_dist",
    "requires-external": "requires_external",
    "requires-python": "requires_python",
    "summary": "summary",
    "supported-platform": "supported_platforms",
    "version": "version",
}
_RAW_TO_EMAIL_MAPPING = {raw: email for email, raw in _EMAIL_TO_RAW_MAPPING.items()}
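These two tables drive the header-name translation in both directions: dashed METADATA header names (lowercased first) map to underscored RawMetadata keys, and back again. For instance:

    _EMAIL_TO_RAW_MAPPING['home-page']        # 'home_page'
    _EMAIL_TO_RAW_MAPPING['classifier']       # 'classifiers' (multi-use field, plural key)
    _RAW_TO_EMAIL_MAPPING['requires_python']  # 'requires-python'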
|
||||
|
||||
def parse_email(data: bytes | str) -> tuple[RawMetadata, dict[str, list[str]]]:
|
||||
"""Parse a distribution's metadata stored as email headers (e.g. from ``METADATA``).
|
||||
|
||||
This function returns a two-item tuple of dicts. The first dict is of
|
||||
recognized fields from the core metadata specification. Fields that can be
|
||||
parsed and translated into Python's built-in types are converted
|
||||
appropriately. All other fields are left as-is. Fields that are allowed to
|
||||
appear multiple times are stored as lists.
|
||||
|
||||
The second dict contains all other fields from the metadata. This includes
|
||||
any unrecognized fields. It also includes any fields which are expected to
|
||||
be parsed into a built-in type but were not formatted appropriately. Finally,
|
||||
any fields that are expected to appear only once but are repeated are
|
||||
included in this dict.
|
||||
|
||||
"""
    raw: dict[str, str | list[str] | dict[str, str]] = {}
    unparsed: dict[str, list[str]] = {}

    if isinstance(data, str):
        parsed = email.parser.Parser(policy=email.policy.compat32).parsestr(data)
    else:
        parsed = email.parser.BytesParser(policy=email.policy.compat32).parsebytes(data)

    # We have to wrap parsed.keys() in a set, because in the case of multiple
    # values for a key (a list), the key will appear multiple times in the
    # list of keys, but we're avoiding that by using get_all().
    for name in frozenset(parsed.keys()):
        # Header names in RFC are case insensitive, so we'll normalize to all
        # lower case to make comparisons easier.
        name = name.lower()

        # We use get_all() here, even for fields that aren't multiple use,
        # because otherwise someone could have e.g. two Name fields, and we
        # would just silently ignore it rather than doing something about it.
        headers = parsed.get_all(name) or []

        # The way the email module works when parsing bytes is that it
        # unconditionally decodes the bytes as ascii using the surrogateescape
        # handler. When you pull that data back out (such as with get_all()),
        # it looks to see if the str has any surrogate escapes, and if it does
        # it wraps it in a Header object instead of returning the string.
        #
        # As such, we'll look for those Header objects, and fix up the encoding.
        value = []
        # Flag if we have run into any issues processing the headers, thus
        # signalling that the data belongs in 'unparsed'.
        valid_encoding = True
        for h in headers:
            # It's unclear if this can return more types than just a Header or
            # a str, so we'll just assert here to make sure.
            assert isinstance(h, (email.header.Header, str))

            # If it's a header object, we need to do our little dance to get
            # the real data out of it. In cases where there is invalid data
            # we're going to end up with mojibake, but there's no obvious, good
            # way around that without reimplementing parts of the Header object
            # ourselves.
            #
            # That should be fine since, if mojibake happens, this key is
            # going into the unparsed dict anyways.
            if isinstance(h, email.header.Header):
                # The Header object stores its data as chunks, and each chunk
                # can be independently encoded, so we'll need to check each
                # of them.
                chunks: list[tuple[bytes, str | None]] = []
                for bin, encoding in email.header.decode_header(h):
                    try:
                        bin.decode("utf8", "strict")
                    except UnicodeDecodeError:
                        # Enable mojibake.
                        encoding = "latin1"
                        valid_encoding = False
                    else:
                        encoding = "utf8"
                    chunks.append((bin, encoding))

                # Turn our chunks back into a Header object, then let that
                # Header object do the right thing to turn them into a
                # string for us.
                value.append(str(email.header.make_header(chunks)))
            # This is already a string, so just add it.
            else:
                value.append(h)

        # We've processed all of our values to get them into a list of str,
        # but we may have mojibake data, in which case this is an unparsed
        # field.
        if not valid_encoding:
            unparsed[name] = value
            continue

        raw_name = _EMAIL_TO_RAW_MAPPING.get(name)
        if raw_name is None:
            # This is a bit of a weird situation, we've encountered a key that
            # we don't know what it means, so we don't know whether it's meant
            # to be a list or not.
            #
            # Since we can't really tell one way or another, we'll just leave it
            # as a list, even though it may be a single item list, because that's
            # what makes the most sense for email headers.
            unparsed[name] = value
            continue

        # If this is one of our string fields, then we'll check to see if our
        # value is a list of a single item. If it is then we'll assume that
        # it was emitted as a single string, and unwrap the str from inside
        # the list.
        #
        # If it's any other kind of data, then we haven't the faintest clue
        # what we should parse it as, and we have to just add it to our list
        # of unparsed stuff.
        if raw_name in _STRING_FIELDS and len(value) == 1:
            raw[raw_name] = value[0]
        # If this is one of our list of string fields, then we can just assign
        # the value, since email *only* has strings, and our get_all() call
        # above ensures that this is a list.
        elif raw_name in _LIST_FIELDS:
            raw[raw_name] = value
        # Special Case: Keywords
        # The keywords field is implemented in the metadata spec as a str,
        # but it conceptually is a list of strings, and is serialized using
        # ", ".join(keywords), so we'll do some light data massaging to turn
        # this into what it logically is.
        elif raw_name == "keywords" and len(value) == 1:
            raw[raw_name] = _parse_keywords(value[0])
        # Special Case: Project-URL
        # The project urls is implemented in the metadata spec as a list of
        # specially-formatted strings that represent a key and a value, which
        # is fundamentally a mapping, however the email format doesn't support
        # mappings in a sane way, so it was crammed into a list of strings
        # instead.
        #
        # We will do a little light data massaging to turn this into a map as
        # it logically should be.
        elif raw_name == "project_urls":
            try:
                raw[raw_name] = _parse_project_urls(value)
            except KeyError:
                unparsed[name] = value
        # Nothing that we've done has managed to parse this, so we'll just
        # throw it in our unparseable data and move on.
        else:
            unparsed[name] = value

    # We need to support getting the Description from the message payload in
    # addition to getting it from the headers. This does mean, though, there
    # is the possibility of it being set both ways, in which case we put both
    # in 'unparsed' since we don't know which is right.
    try:
        payload = _get_payload(parsed, data)
    except ValueError:
        unparsed.setdefault("description", []).append(
            parsed.get_payload(decode=isinstance(data, bytes))  # type: ignore[call-overload]
        )
    else:
        if payload:
            # Check to see if we've already got a description; if so then both
            # it and this body move to unparseable.
            if "description" in raw:
                description_header = cast(str, raw.pop("description"))
                unparsed.setdefault("description", []).extend(
                    [description_header, payload]
                )
            elif "description" in unparsed:
                unparsed["description"].append(payload)
            else:
                raw["description"] = payload

    # We need to cast our `raw` to a metadata, because a TypedDict only supports
    # literal key names, but we're computing our key names on purpose; however,
    # the way this function is implemented means our `TypedDict` can only end up
    # with valid key names.
    return cast(RawMetadata, raw), unparsed
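
# Illustrative example of the two-dict result (field values assumed):
#
#     >>> raw, unparsed = parse_email("Metadata-Version: 2.3\nName: sampleproject\n")
#     >>> raw["name"], raw["metadata_version"]
#     ('sampleproject', '2.3')
#     >>> unparsed
#     {}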


_NOT_FOUND = object()


# Keep the two values in sync.
_VALID_METADATA_VERSIONS = ["1.0", "1.1", "1.2", "2.1", "2.2", "2.3", "2.4"]
_MetadataVersion = Literal["1.0", "1.1", "1.2", "2.1", "2.2", "2.3", "2.4"]

_REQUIRED_ATTRS = frozenset(["metadata_version", "name", "version"])


class _Validator(Generic[T]):
    """Validate a metadata field.

    All _process_*() methods correspond to a core metadata field. The method is
    called with the field's raw value. If the raw value is valid it is returned
    in its "enriched" form (e.g. ``version.Version`` for the ``Version`` field).
    If the raw value is invalid, :exc:`InvalidMetadata` is raised (with a cause
    as appropriate).
    """

    name: str
    raw_name: str
    added: _MetadataVersion

    def __init__(
        self,
        *,
        added: _MetadataVersion = "1.0",
    ) -> None:
        self.added = added

    def __set_name__(self, _owner: Metadata, name: str) -> None:
        self.name = name
        self.raw_name = _RAW_TO_EMAIL_MAPPING[name]

    def __get__(self, instance: Metadata, _owner: type[Metadata]) -> T:
        # With Python 3.8, the caching can be replaced with functools.cached_property().
        # No need to check the cache as attribute lookup will resolve into the
        # instance's __dict__ before __get__ is called.
        cache = instance.__dict__
        value = instance._raw.get(self.name)

        # To make the _process_* methods easier, we'll check if the value is None
        # and if this field is NOT a required attribute, and if both of those
        # things are true, we'll skip the converter. This will mean that the
        # converters never have to deal with the None union.
        if self.name in _REQUIRED_ATTRS or value is not None:
            try:
                converter: Callable[[Any], T] = getattr(self, f"_process_{self.name}")
            except AttributeError:
                pass
            else:
                value = converter(value)

        cache[self.name] = value
        try:
            del instance._raw[self.name]  # type: ignore[misc]
        except KeyError:
            pass

        return cast(T, value)

    def _invalid_metadata(
        self, msg: str, cause: Exception | None = None
    ) -> InvalidMetadata:
        exc = InvalidMetadata(
            self.raw_name, msg.format_map({"field": repr(self.raw_name)})
        )
        exc.__cause__ = cause
        return exc

    def _process_metadata_version(self, value: str) -> _MetadataVersion:
        # Implicitly makes Metadata-Version required.
        if value not in _VALID_METADATA_VERSIONS:
            raise self._invalid_metadata(f"{value!r} is not a valid metadata version")
        return cast(_MetadataVersion, value)

    def _process_name(self, value: str) -> str:
        if not value:
            raise self._invalid_metadata("{field} is a required field")
        # Validate the name as a side-effect.
        try:
            utils.canonicalize_name(value, validate=True)
        except utils.InvalidName as exc:
            raise self._invalid_metadata(
                f"{value!r} is invalid for {{field}}", cause=exc
            ) from exc
        else:
            return value

    def _process_version(self, value: str) -> version_module.Version:
        if not value:
            raise self._invalid_metadata("{field} is a required field")
        try:
            return version_module.parse(value)
        except version_module.InvalidVersion as exc:
            raise self._invalid_metadata(
                f"{value!r} is invalid for {{field}}", cause=exc
            ) from exc

    def _process_summary(self, value: str) -> str:
        """Check the field contains no newlines."""
        if "\n" in value:
            raise self._invalid_metadata("{field} must be a single line")
        return value

    def _process_description_content_type(self, value: str) -> str:
        content_types = {"text/plain", "text/x-rst", "text/markdown"}
        message = email.message.EmailMessage()
        message["content-type"] = value

        content_type, parameters = (
            # Defaults to `text/plain` if parsing failed.
            message.get_content_type().lower(),
            message["content-type"].params,
        )
        # Check if content-type is valid or defaulted to `text/plain` and thus was
        # not parseable.
        if content_type not in content_types or content_type not in value.lower():
            raise self._invalid_metadata(
                f"{{field}} must be one of {list(content_types)}, not {value!r}"
            )

        charset = parameters.get("charset", "UTF-8")
        if charset != "UTF-8":
            raise self._invalid_metadata(
                f"{{field}} can only specify the UTF-8 charset, not {charset!r}"
            )

        markdown_variants = {"GFM", "CommonMark"}
        variant = parameters.get("variant", "GFM")  # Use an acceptable default.
        if content_type == "text/markdown" and variant not in markdown_variants:
            raise self._invalid_metadata(
                f"valid Markdown variants for {{field}} are {list(markdown_variants)}, "
                f"not {variant!r}",
            )
        return value

    def _process_dynamic(self, value: list[str]) -> list[str]:
        for dynamic_field in map(str.lower, value):
            if dynamic_field in {"name", "version", "metadata-version"}:
                raise self._invalid_metadata(
                    f"{dynamic_field!r} is not allowed as a dynamic field"
                )
            elif dynamic_field not in _EMAIL_TO_RAW_MAPPING:
                raise self._invalid_metadata(
                    f"{dynamic_field!r} is not a valid dynamic field"
                )
        return list(map(str.lower, value))

    def _process_provides_extra(
        self,
        value: list[str],
    ) -> list[utils.NormalizedName]:
        normalized_names = []
        try:
            for name in value:
                normalized_names.append(utils.canonicalize_name(name, validate=True))
        except utils.InvalidName as exc:
            raise self._invalid_metadata(
                f"{name!r} is invalid for {{field}}", cause=exc
            ) from exc
        else:
            return normalized_names

    def _process_requires_python(self, value: str) -> specifiers.SpecifierSet:
        try:
            return specifiers.SpecifierSet(value)
        except specifiers.InvalidSpecifier as exc:
            raise self._invalid_metadata(
                f"{value!r} is invalid for {{field}}", cause=exc
            ) from exc

    def _process_requires_dist(
        self,
        value: list[str],
    ) -> list[requirements.Requirement]:
        reqs = []
        try:
            for req in value:
                reqs.append(requirements.Requirement(req))
        except requirements.InvalidRequirement as exc:
            raise self._invalid_metadata(
                f"{req!r} is invalid for {{field}}", cause=exc
            ) from exc
        else:
            return reqs

    def _process_license_expression(
        self, value: str
    ) -> NormalizedLicenseExpression | None:
        try:
            return licenses.canonicalize_license_expression(value)
        except ValueError as exc:
            raise self._invalid_metadata(
                f"{value!r} is invalid for {{field}}", cause=exc
            ) from exc

    def _process_license_files(self, value: list[str]) -> list[str]:
        paths = []
        for path in value:
            if ".." in path:
                raise self._invalid_metadata(
                    f"{path!r} is invalid for {{field}}, "
                    "parent directory indicators are not allowed"
                )
            if "*" in path:
                raise self._invalid_metadata(
                    f"{path!r} is invalid for {{field}}, paths must be resolved"
                )
            if (
                pathlib.PurePosixPath(path).is_absolute()
                or pathlib.PureWindowsPath(path).is_absolute()
            ):
                raise self._invalid_metadata(
                    f"{path!r} is invalid for {{field}}, paths must be relative"
                )
            if pathlib.PureWindowsPath(path).as_posix() != path:
                raise self._invalid_metadata(
                    f"{path!r} is invalid for {{field}}, paths must use '/' delimiter"
                )
            paths.append(path)
        return paths


class Metadata:
    """Representation of distribution metadata.

    Compared to :class:`RawMetadata`, this class provides objects representing
    metadata fields instead of only using built-in types. Any invalid metadata
    will cause :exc:`InvalidMetadata` to be raised (with a
    :py:attr:`~BaseException.__cause__` attribute as appropriate).
    """

    _raw: RawMetadata

    @classmethod
    def from_raw(cls, data: RawMetadata, *, validate: bool = True) -> Metadata:
        """Create an instance from :class:`RawMetadata`.

        If *validate* is true, all metadata will be validated. All exceptions
        related to validation will be gathered and raised as an :class:`ExceptionGroup`.
        """
        ins = cls()
        ins._raw = data.copy()  # Mutations occur due to caching enriched values.

        if validate:
            exceptions: list[Exception] = []
            try:
                metadata_version = ins.metadata_version
                metadata_age = _VALID_METADATA_VERSIONS.index(metadata_version)
            except InvalidMetadata as metadata_version_exc:
                exceptions.append(metadata_version_exc)
                metadata_version = None

            # Make sure to check the fields that are present, plus the required
            # fields (so their absence can be reported).
            fields_to_check = frozenset(ins._raw) | _REQUIRED_ATTRS
            # Remove fields that have already been checked.
            fields_to_check -= {"metadata_version"}

            for key in fields_to_check:
                try:
                    if metadata_version:
                        # Can't use getattr() as that triggers descriptor protocol which
                        # will fail due to no value for the instance argument.
                        try:
                            field_metadata_version = cls.__dict__[key].added
                        except KeyError:
                            exc = InvalidMetadata(key, f"unrecognized field: {key!r}")
                            exceptions.append(exc)
                            continue
                        field_age = _VALID_METADATA_VERSIONS.index(
                            field_metadata_version
                        )
                        if field_age > metadata_age:
                            field = _RAW_TO_EMAIL_MAPPING[key]
                            exc = InvalidMetadata(
                                field,
                                f"{field} introduced in metadata version "
                                f"{field_metadata_version}, not {metadata_version}",
                            )
                            exceptions.append(exc)
                            continue
                    getattr(ins, key)
                except InvalidMetadata as exc:
                    exceptions.append(exc)

            if exceptions:
                raise ExceptionGroup("invalid metadata", exceptions)

        return ins
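
    # Illustrative example (field values assumed):
    #
    #     >>> meta = Metadata.from_raw(
    #     ...     {"metadata_version": "2.3", "name": "sampleproject", "version": "1.0"}
    #     ... )
    #     >>> meta.version
    #     <Version('1.0')>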

    @classmethod
    def from_email(cls, data: bytes | str, *, validate: bool = True) -> Metadata:
        """Parse metadata from email headers.

        If *validate* is true, the metadata will be validated. All exceptions
        related to validation will be gathered and raised as an :class:`ExceptionGroup`.
        """
        raw, unparsed = parse_email(data)

        if validate:
            exceptions: list[Exception] = []
            for unparsed_key in unparsed:
                if unparsed_key in _EMAIL_TO_RAW_MAPPING:
                    message = f"{unparsed_key!r} has invalid data"
                else:
                    message = f"unrecognized field: {unparsed_key!r}"
                exceptions.append(InvalidMetadata(unparsed_key, message))

            if exceptions:
                raise ExceptionGroup("unparsed", exceptions)

        try:
            return cls.from_raw(raw, validate=validate)
        except ExceptionGroup as exc_group:
            raise ExceptionGroup(
                "invalid or unparsed metadata", exc_group.exceptions
            ) from None
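
    # Illustrative example (header values assumed):
    #
    #     >>> meta = Metadata.from_email(
    ...     #     "Metadata-Version: 2.3\nName: sampleproject\nVersion: 1.0\n"
    #     ... )
    #     >>> meta.name, str(meta.version)
    #     ('sampleproject', '1.0')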

    metadata_version: _Validator[_MetadataVersion] = _Validator()
    """:external:ref:`core-metadata-metadata-version`
    (required; validated to be a valid metadata version)"""
    # `name` is not normalized/typed to NormalizedName so as to provide access to
    # the original/raw name.
    name: _Validator[str] = _Validator()
    """:external:ref:`core-metadata-name`
    (required; validated using :func:`~packaging.utils.canonicalize_name` and its
    *validate* parameter)"""
    version: _Validator[version_module.Version] = _Validator()
    """:external:ref:`core-metadata-version` (required)"""
    dynamic: _Validator[list[str] | None] = _Validator(
        added="2.2",
    )
    """:external:ref:`core-metadata-dynamic`
    (validated against core metadata field names and lowercased)"""
    platforms: _Validator[list[str] | None] = _Validator()
    """:external:ref:`core-metadata-platform`"""
    supported_platforms: _Validator[list[str] | None] = _Validator(added="1.1")
    """:external:ref:`core-metadata-supported-platform`"""
    summary: _Validator[str | None] = _Validator()
    """:external:ref:`core-metadata-summary` (validated to contain no newlines)"""
    description: _Validator[str | None] = _Validator()  # TODO 2.1: can be in body
    """:external:ref:`core-metadata-description`"""
    description_content_type: _Validator[str | None] = _Validator(added="2.1")
    """:external:ref:`core-metadata-description-content-type` (validated)"""
    keywords: _Validator[list[str] | None] = _Validator()
    """:external:ref:`core-metadata-keywords`"""
    home_page: _Validator[str | None] = _Validator()
    """:external:ref:`core-metadata-home-page`"""
    download_url: _Validator[str | None] = _Validator(added="1.1")
    """:external:ref:`core-metadata-download-url`"""
    author: _Validator[str | None] = _Validator()
    """:external:ref:`core-metadata-author`"""
    author_email: _Validator[str | None] = _Validator()
    """:external:ref:`core-metadata-author-email`"""
    maintainer: _Validator[str | None] = _Validator(added="1.2")
    """:external:ref:`core-metadata-maintainer`"""
    maintainer_email: _Validator[str | None] = _Validator(added="1.2")
    """:external:ref:`core-metadata-maintainer-email`"""
    license: _Validator[str | None] = _Validator()
    """:external:ref:`core-metadata-license`"""
    license_expression: _Validator[NormalizedLicenseExpression | None] = _Validator(
        added="2.4"
    )
    """:external:ref:`core-metadata-license-expression`"""
    license_files: _Validator[list[str] | None] = _Validator(added="2.4")
    """:external:ref:`core-metadata-license-file`"""
    classifiers: _Validator[list[str] | None] = _Validator(added="1.1")
    """:external:ref:`core-metadata-classifier`"""
    requires_dist: _Validator[list[requirements.Requirement] | None] = _Validator(
        added="1.2"
    )
    """:external:ref:`core-metadata-requires-dist`"""
    requires_python: _Validator[specifiers.SpecifierSet | None] = _Validator(
        added="1.2"
    )
    """:external:ref:`core-metadata-requires-python`"""
    # Because `Requires-External` allows for non-PEP 440 version specifiers, we
    # don't do any processing on the values.
    requires_external: _Validator[list[str] | None] = _Validator(added="1.2")
    """:external:ref:`core-metadata-requires-external`"""
    project_urls: _Validator[dict[str, str] | None] = _Validator(added="1.2")
    """:external:ref:`core-metadata-project-url`"""
    # PEP 685 lets us raise an error if an extra doesn't pass `Name` validation
    # regardless of metadata version.
    provides_extra: _Validator[list[utils.NormalizedName] | None] = _Validator(
        added="2.1",
    )
    """:external:ref:`core-metadata-provides-extra`"""
    provides_dist: _Validator[list[str] | None] = _Validator(added="1.2")
    """:external:ref:`core-metadata-provides-dist`"""
    obsoletes_dist: _Validator[list[str] | None] = _Validator(added="1.2")
    """:external:ref:`core-metadata-obsoletes-dist`"""
    requires: _Validator[list[str] | None] = _Validator(added="1.1")
    """``Requires`` (deprecated)"""
    provides: _Validator[list[str] | None] = _Validator(added="1.1")
    """``Provides`` (deprecated)"""
    obsoletes: _Validator[list[str] | None] = _Validator(added="1.1")
    """``Obsoletes`` (deprecated)"""
@@ -0,0 +1,91 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

from typing import Any, Iterator

from ._parser import parse_requirement as _parse_requirement
from ._tokenizer import ParserSyntaxError
from .markers import Marker, _normalize_extra_values
from .specifiers import SpecifierSet
from .utils import canonicalize_name


class InvalidRequirement(ValueError):
    """
    An invalid requirement was found; users should refer to PEP 508.
    """


class Requirement:
    """Parse a requirement.

    Parse a given requirement string into its parts, such as name, specifier,
    URL, and extras. Raises InvalidRequirement on a badly-formed requirement
    string.
    """

    # TODO: Can we test whether something is contained within a requirement?
    #       If so how do we do that? Do we need to test against the _name_ of
    #       the thing as well as the version? What about the markers?
    # TODO: Can we normalize the name and extra name?

    def __init__(self, requirement_string: str) -> None:
        try:
            parsed = _parse_requirement(requirement_string)
        except ParserSyntaxError as e:
            raise InvalidRequirement(str(e)) from e

        self.name: str = parsed.name
        self.url: str | None = parsed.url or None
        self.extras: set[str] = set(parsed.extras or [])
        self.specifier: SpecifierSet = SpecifierSet(parsed.specifier)
        self.marker: Marker | None = None
        if parsed.marker is not None:
            self.marker = Marker.__new__(Marker)
            self.marker._markers = _normalize_extra_values(parsed.marker)

    def _iter_parts(self, name: str) -> Iterator[str]:
        yield name

        if self.extras:
            formatted_extras = ",".join(sorted(self.extras))
            yield f"[{formatted_extras}]"

        if self.specifier:
            yield str(self.specifier)

        if self.url:
            yield f"@ {self.url}"
            if self.marker:
                yield " "

        if self.marker:
            yield f"; {self.marker}"

    def __str__(self) -> str:
        return "".join(self._iter_parts(self.name))

    def __repr__(self) -> str:
        return f"<Requirement('{self}')>"

    def __hash__(self) -> int:
        return hash(
            (
                self.__class__.__name__,
                *self._iter_parts(canonicalize_name(self.name)),
            )
        )

    def __eq__(self, other: Any) -> bool:
        if not isinstance(other, Requirement):
            return NotImplemented

        return (
            canonicalize_name(self.name) == canonicalize_name(other.name)
            and self.extras == other.extras
            and self.specifier == other.specifier
            and self.url == other.url
            and self.marker == other.marker
        )
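
    # Illustrative example:
    #
    #     >>> req = Requirement("requests[security]>=2.8.1; python_version < '3.9'")
    #     >>> req.name, sorted(req.extras), str(req.specifier)
    #     ('requests', ['security'], '>=2.8.1')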

File diff suppressed because it is too large
@@ -0,0 +1,656 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

import logging
import platform
import re
import struct
import subprocess
import sys
import sysconfig
from importlib.machinery import EXTENSION_SUFFIXES
from typing import (
    Iterable,
    Iterator,
    Sequence,
    Tuple,
    cast,
)

from . import _manylinux, _musllinux

logger = logging.getLogger(__name__)

PythonVersion = Sequence[int]
AppleVersion = Tuple[int, int]

INTERPRETER_SHORT_NAMES: dict[str, str] = {
    "python": "py",  # Generic.
    "cpython": "cp",
    "pypy": "pp",
    "ironpython": "ip",
    "jython": "jy",
}


_32_BIT_INTERPRETER = struct.calcsize("P") == 4


class Tag:
    """
    A representation of the tag triple for a wheel.

    Instances are considered immutable and thus are hashable. Equality checking
    is also supported.
    """

    __slots__ = ["_abi", "_hash", "_interpreter", "_platform"]

    def __init__(self, interpreter: str, abi: str, platform: str) -> None:
        self._interpreter = interpreter.lower()
        self._abi = abi.lower()
        self._platform = platform.lower()
        # The __hash__ of every single element in a Set[Tag] will be evaluated each time
        # that a set calls its `.disjoint()` method, which may be called hundreds of
        # times when scanning a page of links for packages with tags matching that
        # Set[Tag]. Pre-computing the value here produces significant speedups for
        # downstream consumers.
        self._hash = hash((self._interpreter, self._abi, self._platform))

    @property
    def interpreter(self) -> str:
        return self._interpreter

    @property
    def abi(self) -> str:
        return self._abi

    @property
    def platform(self) -> str:
        return self._platform

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, Tag):
            return NotImplemented

        return (
            (self._hash == other._hash)  # Short-circuit ASAP for perf reasons.
            and (self._platform == other._platform)
            and (self._abi == other._abi)
            and (self._interpreter == other._interpreter)
        )

    def __hash__(self) -> int:
        return self._hash

    def __str__(self) -> str:
        return f"{self._interpreter}-{self._abi}-{self._platform}"

    def __repr__(self) -> str:
        return f"<{self} @ {id(self)}>"


def parse_tag(tag: str) -> frozenset[Tag]:
    """
    Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.

    Returning a set is required due to the possibility that the tag is a
    compressed tag set.
    """
    tags = set()
    interpreters, abis, platforms = tag.split("-")
    for interpreter in interpreters.split("."):
        for abi in abis.split("."):
            for platform_ in platforms.split("."):
                tags.add(Tag(interpreter, abi, platform_))
    return frozenset(tags)
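
# Illustrative example of expanding a compressed tag set:
#
#     >>> sorted(str(t) for t in parse_tag("py2.py3-none-any"))
#     ['py2-none-any', 'py3-none-any']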


def _get_config_var(name: str, warn: bool = False) -> int | str | None:
    value: int | str | None = sysconfig.get_config_var(name)
    if value is None and warn:
        logger.debug(
            "Config variable '%s' is unset, Python ABI tag may be incorrect", name
        )
    return value


def _normalize_string(string: str) -> str:
    return string.replace(".", "_").replace("-", "_").replace(" ", "_")


def _is_threaded_cpython(abis: list[str]) -> bool:
    """
    Determine if the ABI corresponds to a threaded (`--disable-gil`) build.

    The threaded builds are indicated by a "t" in the abiflags.
    """
    if len(abis) == 0:
        return False
    # expect e.g., cp313
    m = re.match(r"cp\d+(.*)", abis[0])
    if not m:
        return False
    abiflags = m.group(1)
    return "t" in abiflags


def _abi3_applies(python_version: PythonVersion, threading: bool) -> bool:
    """
    Determine if the Python version supports abi3.

    PEP 384 was first implemented in Python 3.2. The threaded (`--disable-gil`)
    builds do not support abi3.
    """
    return len(python_version) > 1 and tuple(python_version) >= (3, 2) and not threading


def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> list[str]:
    py_version = tuple(py_version)  # To allow for version comparison.
    abis = []
    version = _version_nodot(py_version[:2])
    threading = debug = pymalloc = ucs4 = ""
    with_debug = _get_config_var("Py_DEBUG", warn)
    has_refcount = hasattr(sys, "gettotalrefcount")
    # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
    # extension modules is the best option.
    # https://github.com/pypa/pip/issues/3383#issuecomment-173267692
    has_ext = "_d.pyd" in EXTENSION_SUFFIXES
    if with_debug or (with_debug is None and (has_refcount or has_ext)):
        debug = "d"
    if py_version >= (3, 13) and _get_config_var("Py_GIL_DISABLED", warn):
        threading = "t"
    if py_version < (3, 8):
        with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
        if with_pymalloc or with_pymalloc is None:
            pymalloc = "m"
        if py_version < (3, 3):
            unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
            if unicode_size == 4 or (
                unicode_size is None and sys.maxunicode == 0x10FFFF
            ):
                ucs4 = "u"
    elif debug:
        # Debug builds can also load "normal" extension modules.
        # We can also assume no UCS-4 or pymalloc requirement.
        abis.append(f"cp{version}{threading}")
    abis.insert(0, f"cp{version}{threading}{debug}{pymalloc}{ucs4}")
    return abis


def cpython_tags(
    python_version: PythonVersion | None = None,
    abis: Iterable[str] | None = None,
    platforms: Iterable[str] | None = None,
    *,
    warn: bool = False,
) -> Iterator[Tag]:
    """
    Yields the tags for a CPython interpreter.

    The tags consist of:
    - cp<python_version>-<abi>-<platform>
    - cp<python_version>-abi3-<platform>
    - cp<python_version>-none-<platform>
    - cp<less than python_version>-abi3-<platform>  # Older Python versions down to 3.2.

    If python_version only specifies a major version then user-provided ABIs and
    the 'none' ABI tag will be used.

    If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
    their normal position and not at the beginning.
    """
    if not python_version:
        python_version = sys.version_info[:2]

    interpreter = f"cp{_version_nodot(python_version[:2])}"

    if abis is None:
        if len(python_version) > 1:
            abis = _cpython_abis(python_version, warn)
        else:
            abis = []
    abis = list(abis)
    # 'abi3' and 'none' are explicitly handled later.
    for explicit_abi in ("abi3", "none"):
        try:
            abis.remove(explicit_abi)
        except ValueError:
            pass

    platforms = list(platforms or platform_tags())
    for abi in abis:
        for platform_ in platforms:
            yield Tag(interpreter, abi, platform_)

    threading = _is_threaded_cpython(abis)
    use_abi3 = _abi3_applies(python_version, threading)
    if use_abi3:
        yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
    yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)

    if use_abi3:
        for minor_version in range(python_version[1] - 1, 1, -1):
            for platform_ in platforms:
                version = _version_nodot((python_version[0], minor_version))
                interpreter = f"cp{version}"
                yield Tag(interpreter, "abi3", platform_)


def _generic_abi() -> list[str]:
    """
    Return the ABI tag based on EXT_SUFFIX.
    """
    # The following are examples of `EXT_SUFFIX`.
    # We want to keep the parts which are related to the ABI and remove the
    # parts which are related to the platform:
    # - linux:   '.cpython-310-x86_64-linux-gnu.so' => cp310
    # - mac:     '.cpython-310-darwin.so'           => cp310
    # - win:     '.cp310-win_amd64.pyd'             => cp310
    # - win:     '.pyd'                             => cp37 (uses _cpython_abis())
    # - pypy:    '.pypy38-pp73-x86_64-linux-gnu.so' => pypy38_pp73
    # - graalpy: '.graalpy-38-native-x86_64-darwin.dylib'
    #            => graalpy_38_native

    ext_suffix = _get_config_var("EXT_SUFFIX", warn=True)
    if not isinstance(ext_suffix, str) or ext_suffix[0] != ".":
        raise SystemError("invalid sysconfig.get_config_var('EXT_SUFFIX')")
    parts = ext_suffix.split(".")
    if len(parts) < 3:
        # CPython3.7 and earlier uses ".pyd" on Windows.
        return _cpython_abis(sys.version_info[:2])
    soabi = parts[1]
    if soabi.startswith("cpython"):
        # non-windows
        abi = "cp" + soabi.split("-")[1]
    elif soabi.startswith("cp"):
        # windows
        abi = soabi.split("-")[0]
    elif soabi.startswith("pypy"):
        abi = "-".join(soabi.split("-")[:2])
    elif soabi.startswith("graalpy"):
        abi = "-".join(soabi.split("-")[:3])
    elif soabi:
        # pyston, ironpython, others?
        abi = soabi
    else:
        return []
    return [_normalize_string(abi)]


def generic_tags(
    interpreter: str | None = None,
    abis: Iterable[str] | None = None,
    platforms: Iterable[str] | None = None,
    *,
    warn: bool = False,
) -> Iterator[Tag]:
    """
    Yields the tags for a generic interpreter.

    The tags consist of:
    - <interpreter>-<abi>-<platform>

    The "none" ABI will be added if it was not explicitly provided.
    """
    if not interpreter:
        interp_name = interpreter_name()
        interp_version = interpreter_version(warn=warn)
        interpreter = "".join([interp_name, interp_version])
    if abis is None:
        abis = _generic_abi()
    else:
        abis = list(abis)
    platforms = list(platforms or platform_tags())
    if "none" not in abis:
        abis.append("none")
    for abi in abis:
        for platform_ in platforms:
            yield Tag(interpreter, abi, platform_)


def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
    """
    Yields Python versions in descending order.

    After the latest version, the major-only version will be yielded, and then
    all previous versions of that major version.
    """
    if len(py_version) > 1:
        yield f"py{_version_nodot(py_version[:2])}"
    yield f"py{py_version[0]}"
    if len(py_version) > 1:
        for minor in range(py_version[1] - 1, -1, -1):
            yield f"py{_version_nodot((py_version[0], minor))}"


def compatible_tags(
    python_version: PythonVersion | None = None,
    interpreter: str | None = None,
    platforms: Iterable[str] | None = None,
) -> Iterator[Tag]:
    """
    Yields the sequence of tags that are compatible with a specific version of Python.

    The tags consist of:
    - py*-none-<platform>
    - <interpreter>-none-any  # ... if `interpreter` is provided.
    - py*-none-any
    """
    if not python_version:
        python_version = sys.version_info[:2]
    platforms = list(platforms or platform_tags())
    for version in _py_interpreter_range(python_version):
        for platform_ in platforms:
            yield Tag(version, "none", platform_)
    if interpreter:
        yield Tag(interpreter, "none", "any")
    for version in _py_interpreter_range(python_version):
        yield Tag(version, "none", "any")
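
# Illustrative example (arguments chosen so the result is deterministic):
#
#     >>> tags = list(compatible_tags((3, 11), interpreter="cp311", platforms=["any"]))
#     >>> str(tags[0]), str(tags[-1])
#     ('py311-none-any', 'py30-none-any')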


def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
    if not is_32bit:
        return arch

    if arch.startswith("ppc"):
        return "ppc"

    return "i386"


def _mac_binary_formats(version: AppleVersion, cpu_arch: str) -> list[str]:
    formats = [cpu_arch]
    if cpu_arch == "x86_64":
        if version < (10, 4):
            return []
        formats.extend(["intel", "fat64", "fat32"])

    elif cpu_arch == "i386":
        if version < (10, 4):
            return []
        formats.extend(["intel", "fat32", "fat"])

    elif cpu_arch == "ppc64":
        # TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
        if version > (10, 5) or version < (10, 4):
            return []
        formats.append("fat64")

    elif cpu_arch == "ppc":
        if version > (10, 6):
            return []
        formats.extend(["fat32", "fat"])

    if cpu_arch in {"arm64", "x86_64"}:
        formats.append("universal2")

    if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
        formats.append("universal")

    return formats


def mac_platforms(
    version: AppleVersion | None = None, arch: str | None = None
) -> Iterator[str]:
    """
    Yields the platform tags for a macOS system.

    The `version` parameter is a two-item tuple specifying the macOS version to
    generate platform tags for. The `arch` parameter is the CPU architecture to
    generate platform tags for. Both parameters default to the appropriate value
    for the current system.
    """
    version_str, _, cpu_arch = platform.mac_ver()
    if version is None:
        version = cast("AppleVersion", tuple(map(int, version_str.split(".")[:2])))
        if version == (10, 16):
            # When built against an older macOS SDK, Python will report macOS 10.16
            # instead of the real version.
            version_str = subprocess.run(
                [
                    sys.executable,
                    "-sS",
                    "-c",
                    "import platform; print(platform.mac_ver()[0])",
                ],
                check=True,
                env={"SYSTEM_VERSION_COMPAT": "0"},
                stdout=subprocess.PIPE,
                text=True,
            ).stdout
            version = cast("AppleVersion", tuple(map(int, version_str.split(".")[:2])))
    else:
        version = version
    if arch is None:
        arch = _mac_arch(cpu_arch)
    else:
        arch = arch

    if (10, 0) <= version and version < (11, 0):
        # Prior to Mac OS 11, each yearly release of Mac OS bumped the
        # "minor" version number. The major version was always 10.
        major_version = 10
        for minor_version in range(version[1], -1, -1):
            compat_version = major_version, minor_version
            binary_formats = _mac_binary_formats(compat_version, arch)
            for binary_format in binary_formats:
                yield f"macosx_{major_version}_{minor_version}_{binary_format}"

    if version >= (11, 0):
        # Starting with Mac OS 11, each yearly release bumps the major version
        # number. The minor versions are now the midyear updates.
        minor_version = 0
        for major_version in range(version[0], 10, -1):
            compat_version = major_version, minor_version
            binary_formats = _mac_binary_formats(compat_version, arch)
            for binary_format in binary_formats:
                yield f"macosx_{major_version}_{minor_version}_{binary_format}"

    if version >= (11, 0):
        # Mac OS 11 on x86_64 is compatible with binaries from previous releases.
        # Arm64 support was introduced in 11.0, so no Arm binaries from previous
        # releases exist.
        #
        # However, the "universal2" binary format can have a
        # macOS version earlier than 11.0 when the x86_64 part of the binary supports
        # that version of macOS.
        major_version = 10
        if arch == "x86_64":
            for minor_version in range(16, 3, -1):
                compat_version = major_version, minor_version
                binary_formats = _mac_binary_formats(compat_version, arch)
                for binary_format in binary_formats:
                    yield f"macosx_{major_version}_{minor_version}_{binary_format}"
        else:
            for minor_version in range(16, 3, -1):
                compat_version = major_version, minor_version
                binary_format = "universal2"
                yield f"macosx_{major_version}_{minor_version}_{binary_format}"


def ios_platforms(
    version: AppleVersion | None = None, multiarch: str | None = None
) -> Iterator[str]:
    """
    Yields the platform tags for an iOS system.

    :param version: A two-item tuple specifying the iOS version to generate
        platform tags for. Defaults to the current iOS version.
    :param multiarch: The CPU architecture+ABI to generate platform tags for -
        (the value used by `sys.implementation._multiarch` e.g.,
        `arm64_iphoneos` or `x86_64_iphonesimulator`). Defaults to the current
        multiarch value.
    """
    if version is None:
        # if iOS is the current platform, ios_ver *must* be defined. However,
        # it won't exist for CPython versions before 3.13, which causes a mypy
        # error.
        _, release, _, _ = platform.ios_ver()  # type: ignore[attr-defined, unused-ignore]
        version = cast("AppleVersion", tuple(map(int, release.split(".")[:2])))

    if multiarch is None:
        multiarch = sys.implementation._multiarch
    multiarch = multiarch.replace("-", "_")

    ios_platform_template = "ios_{major}_{minor}_{multiarch}"

    # Consider any iOS major.minor version from the version requested, down to
    # 12.0. 12.0 is the first iOS version that is known to have enough features
    # to support CPython. Consider every possible minor release up to X.9. The
    # highest the minor has ever gone is 8 (14.8 and 15.8), but having some extra
    # candidates that won't ever match doesn't really hurt, and it saves us from
    # having to keep an explicit list of known iOS versions in the code. Return
    # the results in descending order of version number.

    # If the requested major version is less than 12, there won't be any matches.
    if version[0] < 12:
        return

    # Consider the actual X.Y version that was requested.
    yield ios_platform_template.format(
        major=version[0], minor=version[1], multiarch=multiarch
    )

    # Consider every minor version from X.0 to the minor version prior to the
    # version requested by the platform.
    for minor in range(version[1] - 1, -1, -1):
        yield ios_platform_template.format(
            major=version[0], minor=minor, multiarch=multiarch
        )

    for major in range(version[0] - 1, 11, -1):
        for minor in range(9, -1, -1):
            yield ios_platform_template.format(
                major=major, minor=minor, multiarch=multiarch
            )


def android_platforms(
    api_level: int | None = None, abi: str | None = None
) -> Iterator[str]:
    """
    Yields the :attr:`~Tag.platform` tags for Android. If this function is invoked on
    non-Android platforms, the ``api_level`` and ``abi`` arguments are required.

    :param int api_level: The maximum `API level
        <https://developer.android.com/tools/releases/platforms>`__ to return. Defaults
        to the current system's version, as returned by ``platform.android_ver``.
    :param str abi: The `Android ABI <https://developer.android.com/ndk/guides/abis>`__,
        e.g. ``arm64_v8a``. Defaults to the current system's ABI, as returned by
        ``sysconfig.get_platform``. Hyphens and periods will be replaced with
        underscores.
    """
    if platform.system() != "Android" and (api_level is None or abi is None):
        raise TypeError(
            "on non-Android platforms, the api_level and abi arguments are required"
        )

    if api_level is None:
        # Python 3.13 was the first version to return platform.system() == "Android",
        # and also the first version to define platform.android_ver().
        api_level = platform.android_ver().api_level  # type: ignore[attr-defined]

    if abi is None:
        abi = sysconfig.get_platform().split("-")[-1]
    abi = _normalize_string(abi)

    # 16 is the minimum API level known to have enough features to support CPython
    # without major patching. Yield every API level from the maximum down to the
    # minimum, inclusive.
    min_api_level = 16
    for ver in range(api_level, min_api_level - 1, -1):
        yield f"android_{ver}_{abi}"


def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
    linux = _normalize_string(sysconfig.get_platform())
    if not linux.startswith("linux_"):
        # we should never be here, just yield the sysconfig one and return
        yield linux
        return
    if is_32bit:
        if linux == "linux_x86_64":
            linux = "linux_i686"
        elif linux == "linux_aarch64":
            linux = "linux_armv8l"
    _, arch = linux.split("_", 1)
    archs = {"armv8l": ["armv8l", "armv7l"]}.get(arch, [arch])
    yield from _manylinux.platform_tags(archs)
    yield from _musllinux.platform_tags(archs)
    for arch in archs:
        yield f"linux_{arch}"


def _generic_platforms() -> Iterator[str]:
    yield _normalize_string(sysconfig.get_platform())


def platform_tags() -> Iterator[str]:
    """
    Provides the platform tags for this installation.
    """
    if platform.system() == "Darwin":
        return mac_platforms()
    elif platform.system() == "iOS":
        return ios_platforms()
    elif platform.system() == "Android":
        return android_platforms()
    elif platform.system() == "Linux":
        return _linux_platforms()
    else:
        return _generic_platforms()


def interpreter_name() -> str:
    """
    Returns the name of the running interpreter.

    Some implementations have a reserved, two-letter abbreviation which will
    be returned when appropriate.
    """
    name = sys.implementation.name
    return INTERPRETER_SHORT_NAMES.get(name) or name


def interpreter_version(*, warn: bool = False) -> str:
    """
    Returns the version of the running interpreter.
    """
    version = _get_config_var("py_version_nodot", warn=warn)
    if version:
        version = str(version)
    else:
        version = _version_nodot(sys.version_info[:2])
    return version


def _version_nodot(version: PythonVersion) -> str:
    return "".join(map(str, version))


def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
    """
    Returns the sequence of tag triples for the running interpreter.

    The order of the sequence corresponds to priority order for the
    interpreter, from most to least important.
    """

    interp_name = interpreter_name()
    if interp_name == "cp":
        yield from cpython_tags(warn=warn)
    else:
        yield from generic_tags()

    if interp_name == "pp":
        interp = "pp3"
    elif interp_name == "cp":
        interp = "cp" + interpreter_version(warn=warn)
    else:
        interp = None
    yield from compatible_tags(interpreter=interp)
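
# Illustrative example (output is system-dependent; the first tag is the most
# specific one for the running interpreter, e.g. on CPython 3.11 on Linux):
#
#     >>> str(next(iter(sys_tags())))  # doctest: +SKIP
#     'cp311-cp311-manylinux_2_35_x86_64'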
@@ -0,0 +1,163 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

import functools
import re
from typing import NewType, Tuple, Union, cast

from .tags import Tag, parse_tag
from .version import InvalidVersion, Version, _TrimmedRelease

BuildTag = Union[Tuple[()], Tuple[int, str]]
NormalizedName = NewType("NormalizedName", str)


class InvalidName(ValueError):
    """
    An invalid distribution name; users should refer to the packaging user guide.
    """


class InvalidWheelFilename(ValueError):
    """
    An invalid wheel filename was found; users should refer to PEP 427.
    """


class InvalidSdistFilename(ValueError):
    """
    An invalid sdist filename was found; users should refer to the packaging user guide.
    """


# Core metadata spec for `Name`
_validate_regex = re.compile(
    r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE
)
_canonicalize_regex = re.compile(r"[-_.]+")
_normalized_regex = re.compile(r"^([a-z0-9]|[a-z0-9]([a-z0-9-](?!--))*[a-z0-9])$")
# PEP 427: The build number must start with a digit.
_build_tag_regex = re.compile(r"(\d+)(.*)")


def canonicalize_name(name: str, *, validate: bool = False) -> NormalizedName:
    if validate and not _validate_regex.match(name):
        raise InvalidName(f"name is invalid: {name!r}")
    # This is taken from PEP 503.
    value = _canonicalize_regex.sub("-", name).lower()
    return cast(NormalizedName, value)
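
# Illustrative example of PEP 503 normalization (runs of `-`, `_`, and `.`
# collapse to a single `-`, and the result is lowercased):
#
#     >>> canonicalize_name("Django-REST_framework.extras")
#     'django-rest-framework-extras'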


def is_normalized_name(name: str) -> bool:
    return _normalized_regex.match(name) is not None


@functools.singledispatch
def canonicalize_version(
    version: Version | str, *, strip_trailing_zero: bool = True
) -> str:
    """
    Return a canonical form of a version as a string.

    >>> canonicalize_version('1.0.1')
    '1.0.1'

    Per PEP 625, versions may have multiple canonical forms, differing
    only by trailing zeros.

    >>> canonicalize_version('1.0.0')
    '1'
    >>> canonicalize_version('1.0.0', strip_trailing_zero=False)
    '1.0.0'

    Invalid versions are returned unaltered.

    >>> canonicalize_version('foo bar baz')
    'foo bar baz'
    """
    return str(_TrimmedRelease(str(version)) if strip_trailing_zero else version)


@canonicalize_version.register
def _(version: str, *, strip_trailing_zero: bool = True) -> str:
    try:
        parsed = Version(version)
    except InvalidVersion:
        # Legacy versions cannot be normalized
        return version
    return canonicalize_version(parsed, strip_trailing_zero=strip_trailing_zero)


def parse_wheel_filename(
    filename: str,
) -> tuple[NormalizedName, Version, BuildTag, frozenset[Tag]]:
    if not filename.endswith(".whl"):
        raise InvalidWheelFilename(
            f"Invalid wheel filename (extension must be '.whl'): {filename!r}"
        )

    filename = filename[:-4]
    dashes = filename.count("-")
    if dashes not in (4, 5):
        raise InvalidWheelFilename(
            f"Invalid wheel filename (wrong number of parts): {filename!r}"
        )

    parts = filename.split("-", dashes - 2)
    name_part = parts[0]
    # See PEP 427 for the rules on escaping the project name.
    if "__" in name_part or re.match(r"^[\w\d._]*$", name_part, re.UNICODE) is None:
        raise InvalidWheelFilename(f"Invalid project name: {filename!r}")
    name = canonicalize_name(name_part)

    try:
        version = Version(parts[1])
    except InvalidVersion as e:
        raise InvalidWheelFilename(
            f"Invalid wheel filename (invalid version): {filename!r}"
        ) from e

    if dashes == 5:
        build_part = parts[2]
        build_match = _build_tag_regex.match(build_part)
        if build_match is None:
            raise InvalidWheelFilename(
                f"Invalid build number: {build_part} in {filename!r}"
            )
        build = cast(BuildTag, (int(build_match.group(1)), build_match.group(2)))
    else:
        build = ()
    tags = parse_tag(parts[-1])
    return (name, version, build, tags)
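
# Illustrative example (no build tag, so `build` is the empty tuple):
#
#     >>> name, version, build, tags = parse_wheel_filename("pip-24.0-py3-none-any.whl")
#     >>> (name, str(version), build)
#     ('pip', '24.0', ())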


def parse_sdist_filename(filename: str) -> tuple[NormalizedName, Version]:
    if filename.endswith(".tar.gz"):
        file_stem = filename[: -len(".tar.gz")]
    elif filename.endswith(".zip"):
        file_stem = filename[: -len(".zip")]
    else:
        raise InvalidSdistFilename(
            f"Invalid sdist filename (extension must be '.tar.gz' or '.zip'):"
            f" {filename!r}"
        )

    # We are requiring a PEP 440 version, which cannot contain dashes,
    # so we split on the last dash.
    name_part, sep, version_part = file_stem.rpartition("-")
    if not sep:
        raise InvalidSdistFilename(f"Invalid sdist filename: {filename!r}")

    name = canonicalize_name(name_part)

    try:
        version = Version(version_part)
    except InvalidVersion as e:
        raise InvalidSdistFilename(
            f"Invalid sdist filename (invalid version): {filename!r}"
        ) from e

    return (name, version)
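
# Illustrative example:
#
#     >>> parse_sdist_filename("pip-24.0.tar.gz")
#     ('pip', <Version('24.0')>)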
@@ -0,0 +1,582 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
"""
.. testsetup::

    from pip._vendor.packaging.version import parse, Version
"""

from __future__ import annotations

import itertools
import re
from typing import Any, Callable, NamedTuple, SupportsInt, Tuple, Union

from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType

__all__ = ["VERSION_PATTERN", "InvalidVersion", "Version", "parse"]

LocalType = Tuple[Union[int, str], ...]

CmpPrePostDevType = Union[InfinityType, NegativeInfinityType, Tuple[str, int]]
CmpLocalType = Union[
    NegativeInfinityType,
    Tuple[Union[Tuple[int, str], Tuple[NegativeInfinityType, Union[int, str]]], ...],
]
CmpKey = Tuple[
    int,
    Tuple[int, ...],
    CmpPrePostDevType,
    CmpPrePostDevType,
    CmpPrePostDevType,
    CmpLocalType,
]
VersionComparisonMethod = Callable[[CmpKey, CmpKey], bool]
|
||||
|
||||
|
||||
class _Version(NamedTuple):
|
||||
epoch: int
|
||||
release: tuple[int, ...]
|
||||
dev: tuple[str, int] | None
|
||||
pre: tuple[str, int] | None
|
||||
post: tuple[str, int] | None
|
||||
local: LocalType | None
|
||||
|
||||
|
||||
def parse(version: str) -> Version:
|
||||
"""Parse the given version string.
|
||||
|
||||
>>> parse('1.0.dev1')
|
||||
<Version('1.0.dev1')>
|
||||
|
||||
:param version: The version string to parse.
|
||||
:raises InvalidVersion: When the version string is not a valid version.
|
||||
"""
|
||||
return Version(version)
|
||||
|
||||
|
||||
class InvalidVersion(ValueError):
|
||||
"""Raised when a version string is not a valid version.
|
||||
|
||||
>>> Version("invalid")
|
||||
Traceback (most recent call last):
|
||||
...
|
||||
packaging.version.InvalidVersion: Invalid version: 'invalid'
|
||||
"""
|
||||
|
||||
|
||||
class _BaseVersion:
|
||||
_key: tuple[Any, ...]
|
||||
|
||||
def __hash__(self) -> int:
|
||||
return hash(self._key)
|
||||
|
||||
# Please keep the duplicated `isinstance` check
|
||||
# in the six comparisons hereunder
|
||||
# unless you find a way to avoid adding overhead function calls.
|
||||
def __lt__(self, other: _BaseVersion) -> bool:
|
||||
if not isinstance(other, _BaseVersion):
|
||||
return NotImplemented
|
||||
|
||||
return self._key < other._key
|
||||
|
||||
def __le__(self, other: _BaseVersion) -> bool:
|
||||
if not isinstance(other, _BaseVersion):
|
||||
return NotImplemented
|
||||
|
||||
return self._key <= other._key
|
||||
|
||||
def __eq__(self, other: object) -> bool:
|
||||
if not isinstance(other, _BaseVersion):
|
||||
return NotImplemented
|
||||
|
||||
return self._key == other._key
|
||||
|
||||
def __ge__(self, other: _BaseVersion) -> bool:
|
||||
if not isinstance(other, _BaseVersion):
|
||||
return NotImplemented
|
||||
|
||||
return self._key >= other._key
|
||||
|
||||
def __gt__(self, other: _BaseVersion) -> bool:
|
||||
if not isinstance(other, _BaseVersion):
|
||||
return NotImplemented
|
||||
|
||||
return self._key > other._key
|
||||
|
||||
def __ne__(self, other: object) -> bool:
|
||||
if not isinstance(other, _BaseVersion):
|
||||
return NotImplemented
|
||||
|
||||
return self._key != other._key
|
||||
|
||||
|
||||
# Deliberately not anchored to the start and end of the string, to make it
|
||||
# easier for 3rd party code to reuse
|
||||
_VERSION_PATTERN = r"""
|
||||
v?
|
||||
(?:
|
||||
(?:(?P<epoch>[0-9]+)!)? # epoch
|
||||
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
|
||||
(?P<pre> # pre-release
|
||||
[-_\.]?
|
||||
(?P<pre_l>alpha|a|beta|b|preview|pre|c|rc)
|
||||
[-_\.]?
|
||||
(?P<pre_n>[0-9]+)?
|
||||
)?
|
||||
(?P<post> # post release
|
||||
(?:-(?P<post_n1>[0-9]+))
|
||||
|
|
||||
(?:
|
||||
[-_\.]?
|
||||
(?P<post_l>post|rev|r)
|
||||
[-_\.]?
|
||||
(?P<post_n2>[0-9]+)?
|
||||
)
|
||||
)?
|
||||
(?P<dev> # dev release
|
||||
[-_\.]?
|
||||
(?P<dev_l>dev)
|
||||
[-_\.]?
|
||||
(?P<dev_n>[0-9]+)?
|
||||
)?
|
||||
)
|
||||
(?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
|
||||
"""
|
||||
|
||||
VERSION_PATTERN = _VERSION_PATTERN
|
||||
"""
|
||||
A string containing the regular expression used to match a valid version.
|
||||
|
||||
The pattern is not anchored at either end, and is intended for embedding in larger
|
||||
expressions (for example, matching a version number as part of a file name). The
|
||||
regular expression should be compiled with the ``re.VERBOSE`` and ``re.IGNORECASE``
|
||||
flags set.
|
||||
|
||||
:meta hide-value:
|
||||
"""
|
||||
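

# Hedged usage sketch (editor's addition, not upstream code): because the
# pattern is unanchored, embedding it needs explicit anchors and the flags
# noted above, e.g.
#   _anchored = re.compile(rf"^\s*{VERSION_PATTERN}\s*$", re.VERBOSE | re.IGNORECASE)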


class Version(_BaseVersion):
    """This class abstracts handling of a project's versions.

    A :class:`Version` instance is comparison aware and can be compared and
    sorted using the standard Python interfaces.

    >>> v1 = Version("1.0a5")
    >>> v2 = Version("1.0")
    >>> v1
    <Version('1.0a5')>
    >>> v2
    <Version('1.0')>
    >>> v1 < v2
    True
    >>> v1 == v2
    False
    >>> v1 > v2
    False
    >>> v1 >= v2
    False
    >>> v1 <= v2
    True
    """

    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
    _key: CmpKey

    def __init__(self, version: str) -> None:
        """Initialize a Version object.

        :param version:
            The string representation of a version which will be parsed and normalized
            before use.
        :raises InvalidVersion:
            If the ``version`` does not conform to PEP 440 in any way then this
            exception will be raised.
        """

        # Validate the version and parse it into pieces
        match = self._regex.search(version)
        if not match:
            raise InvalidVersion(f"Invalid version: {version!r}")

        # Store the parsed out pieces of the version
        self._version = _Version(
            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
            release=tuple(int(i) for i in match.group("release").split(".")),
            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
            post=_parse_letter_version(
                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
            ),
            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
            local=_parse_local_version(match.group("local")),
        )

        # Generate a key which will be used for sorting
        self._key = _cmpkey(
            self._version.epoch,
            self._version.release,
            self._version.pre,
            self._version.post,
            self._version.dev,
            self._version.local,
        )

    def __repr__(self) -> str:
        """A representation of the Version that shows all internal state.

        >>> Version('1.0.0')
        <Version('1.0.0')>
        """
        return f"<Version('{self}')>"

    def __str__(self) -> str:
        """A string representation of the version that can be round-tripped.

        >>> str(Version("1.0a5"))
        '1.0a5'
        """
        parts = []

        # Epoch
        if self.epoch != 0:
            parts.append(f"{self.epoch}!")

        # Release segment
        parts.append(".".join(str(x) for x in self.release))

        # Pre-release
        if self.pre is not None:
            parts.append("".join(str(x) for x in self.pre))

        # Post-release
        if self.post is not None:
            parts.append(f".post{self.post}")

        # Development release
        if self.dev is not None:
            parts.append(f".dev{self.dev}")

        # Local version segment
        if self.local is not None:
            parts.append(f"+{self.local}")

        return "".join(parts)

    @property
    def epoch(self) -> int:
        """The epoch of the version.

        >>> Version("2.0.0").epoch
        0
        >>> Version("1!2.0.0").epoch
        1
        """
        return self._version.epoch

    @property
    def release(self) -> tuple[int, ...]:
        """The components of the "release" segment of the version.

        >>> Version("1.2.3").release
        (1, 2, 3)
        >>> Version("2.0.0").release
        (2, 0, 0)
        >>> Version("1!2.0.0.post0").release
        (2, 0, 0)

        Includes trailing zeroes but not the epoch or any pre-release / development /
        post-release suffixes.
        """
        return self._version.release

    @property
    def pre(self) -> tuple[str, int] | None:
        """The pre-release segment of the version.

        >>> print(Version("1.2.3").pre)
        None
        >>> Version("1.2.3a1").pre
        ('a', 1)
        >>> Version("1.2.3b1").pre
        ('b', 1)
        >>> Version("1.2.3rc1").pre
        ('rc', 1)
        """
        return self._version.pre

    @property
    def post(self) -> int | None:
        """The post-release number of the version.

        >>> print(Version("1.2.3").post)
        None
        >>> Version("1.2.3.post1").post
        1
        """
        return self._version.post[1] if self._version.post else None

    @property
    def dev(self) -> int | None:
        """The development number of the version.

        >>> print(Version("1.2.3").dev)
        None
        >>> Version("1.2.3.dev1").dev
        1
        """
        return self._version.dev[1] if self._version.dev else None

    @property
    def local(self) -> str | None:
        """The local version segment of the version.

        >>> print(Version("1.2.3").local)
        None
        >>> Version("1.2.3+abc").local
        'abc'
        """
        if self._version.local:
            return ".".join(str(x) for x in self._version.local)
        else:
            return None

    @property
    def public(self) -> str:
        """The public portion of the version.

        >>> Version("1.2.3").public
        '1.2.3'
        >>> Version("1.2.3+abc").public
        '1.2.3'
        >>> Version("1!1.2.3dev1+abc").public
        '1!1.2.3.dev1'
        """
        return str(self).split("+", 1)[0]

    @property
    def base_version(self) -> str:
        """The "base version" of the version.

        >>> Version("1.2.3").base_version
        '1.2.3'
        >>> Version("1.2.3+abc").base_version
        '1.2.3'
        >>> Version("1!1.2.3dev1+abc").base_version
        '1!1.2.3'

        The "base version" is the public version of the project without any pre or post
        release markers.
        """
        parts = []

        # Epoch
        if self.epoch != 0:
            parts.append(f"{self.epoch}!")

        # Release segment
        parts.append(".".join(str(x) for x in self.release))

        return "".join(parts)

    @property
    def is_prerelease(self) -> bool:
        """Whether this version is a pre-release.

        >>> Version("1.2.3").is_prerelease
        False
        >>> Version("1.2.3a1").is_prerelease
        True
        >>> Version("1.2.3b1").is_prerelease
        True
        >>> Version("1.2.3rc1").is_prerelease
        True
        >>> Version("1.2.3dev1").is_prerelease
        True
        """
        return self.dev is not None or self.pre is not None

    @property
    def is_postrelease(self) -> bool:
        """Whether this version is a post-release.

        >>> Version("1.2.3").is_postrelease
        False
        >>> Version("1.2.3.post1").is_postrelease
        True
        """
        return self.post is not None

    @property
    def is_devrelease(self) -> bool:
        """Whether this version is a development release.

        >>> Version("1.2.3").is_devrelease
        False
        >>> Version("1.2.3.dev1").is_devrelease
        True
        """
        return self.dev is not None

    @property
    def major(self) -> int:
        """The first item of :attr:`release` or ``0`` if unavailable.

        >>> Version("1.2.3").major
        1
        """
        return self.release[0] if len(self.release) >= 1 else 0

    @property
    def minor(self) -> int:
        """The second item of :attr:`release` or ``0`` if unavailable.

        >>> Version("1.2.3").minor
        2
        >>> Version("1").minor
        0
        """
        return self.release[1] if len(self.release) >= 2 else 0

    @property
    def micro(self) -> int:
        """The third item of :attr:`release` or ``0`` if unavailable.

        >>> Version("1.2.3").micro
        3
        >>> Version("1").micro
        0
        """
        return self.release[2] if len(self.release) >= 3 else 0


class _TrimmedRelease(Version):
    @property
    def release(self) -> tuple[int, ...]:
        """
        Release segment without any trailing zeros.

        >>> _TrimmedRelease('1.0.0').release
        (1,)
        >>> _TrimmedRelease('0.0').release
        (0,)
        """
        rel = super().release
        nonzeros = (index for index, val in enumerate(rel) if val)
        last_nonzero = max(nonzeros, default=0)
        return rel[: last_nonzero + 1]


def _parse_letter_version(
    letter: str | None, number: str | bytes | SupportsInt | None
) -> tuple[str, int] | None:
    if letter:
        # We consider there to be an implicit 0 in a pre-release if there is
        # not a numeral associated with it.
        if number is None:
            number = 0

        # We normalize any letters to their lower case form
        letter = letter.lower()

        # We consider some words to be alternate spellings of other words and
        # in those cases we want to normalize the spellings to our preferred
        # spelling.
        if letter == "alpha":
            letter = "a"
        elif letter == "beta":
            letter = "b"
        elif letter in ["c", "pre", "preview"]:
            letter = "rc"
        elif letter in ["rev", "r"]:
            letter = "post"

        return letter, int(number)

    assert not letter
    if number:
        # We assume if we are given a number, but we are not given a letter
        # then this is using the implicit post release syntax (e.g. 1.0-1)
        letter = "post"

        return letter, int(number)

    return None
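

# Worked examples (editor's comment): _parse_letter_version("alpha", None)
# normalizes to ("a", 0), and _parse_letter_version(None, "1") is treated as
# an implicit post release, yielding ("post", 1).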


_local_version_separators = re.compile(r"[\._-]")


def _parse_local_version(local: str | None) -> LocalType | None:
    """
    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
    """
    if local is not None:
        return tuple(
            part.lower() if not part.isdigit() else int(part)
            for part in _local_version_separators.split(local)
        )
    return None


def _cmpkey(
    epoch: int,
    release: tuple[int, ...],
    pre: tuple[str, int] | None,
    post: tuple[str, int] | None,
    dev: tuple[str, int] | None,
    local: LocalType | None,
) -> CmpKey:
    # When we compare a release version, we want to compare it with all of the
    # trailing zeros removed. So we reverse the list, drop all the now-leading
    # zeros until we come to something non-zero, then re-reverse the rest back
    # into the correct order and make it a tuple to use as our sorting key.
    _release = tuple(
        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
    )
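    # e.g. (editor's illustration) release (1, 0, 0) collapses to (1,), which is
    # why Version("1.0.0"), Version("1.0"), and Version("1") all compare equal.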

    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
    # We'll do this by abusing the pre segment, but we _only_ want to do this
    # if there is not a pre or a post segment. If we have one of those then
    # the normal sorting rules will handle this case correctly.
    if pre is None and post is None and dev is not None:
        _pre: CmpPrePostDevType = NegativeInfinity
    # Versions without a pre-release (except as noted above) should sort after
    # those with one.
    elif pre is None:
        _pre = Infinity
    else:
        _pre = pre

    # Versions without a post segment should sort before those with one.
    if post is None:
        _post: CmpPrePostDevType = NegativeInfinity

    else:
        _post = post

    # Versions without a development segment should sort after those with one.
    if dev is None:
        _dev: CmpPrePostDevType = Infinity

    else:
        _dev = dev

    if local is None:
        # Versions without a local segment should sort before those with one.
        _local: CmpLocalType = NegativeInfinity
    else:
        # Versions with a local segment need that segment parsed to implement
        # the sorting rules in PEP 440.
        # - Alphanumeric segments sort before numeric segments
        # - Alphanumeric segments sort lexicographically
        # - Numeric segments sort numerically
        # - Shorter versions sort before longer versions when the prefixes
        #   match exactly
        _local = tuple(
            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
        )

    return epoch, _release, _pre, _post, _dev, _local
File diff suppressed because it is too large
@@ -0,0 +1,631 @@
"""
Utilities for determining application-specific dirs.

See <https://github.com/platformdirs/platformdirs> for details and usage.

"""

from __future__ import annotations

import os
import sys
from typing import TYPE_CHECKING

from .api import PlatformDirsABC
from .version import __version__
from .version import __version_tuple__ as __version_info__

if TYPE_CHECKING:
    from pathlib import Path
    from typing import Literal

if sys.platform == "win32":
    from pip._vendor.platformdirs.windows import Windows as _Result
elif sys.platform == "darwin":
    from pip._vendor.platformdirs.macos import MacOS as _Result
else:
    from pip._vendor.platformdirs.unix import Unix as _Result


def _set_platform_dir_class() -> type[PlatformDirsABC]:
    if os.getenv("ANDROID_DATA") == "/data" and os.getenv("ANDROID_ROOT") == "/system":
        if os.getenv("SHELL") or os.getenv("PREFIX"):
            return _Result

        from pip._vendor.platformdirs.android import _android_folder  # noqa: PLC0415

        if _android_folder() is not None:
            from pip._vendor.platformdirs.android import Android  # noqa: PLC0415

            return Android  # return to avoid redefinition of a result

    return _Result


if TYPE_CHECKING:
    # Work around mypy issue: https://github.com/python/mypy/issues/10962
    PlatformDirs = _Result
else:
    PlatformDirs = _set_platform_dir_class()  #: Currently active platform
AppDirs = PlatformDirs  #: Backwards compatibility with appdirs
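

# Editor's note (illustrative, not upstream code): every module-level helper
# below is a thin wrapper around the active PlatformDirs class, so these two
# expressions resolve to the same path:
#   user_data_dir("MyApp", "MyCompany")
#   PlatformDirs(appname="MyApp", appauthor="MyCompany").user_data_dir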


def user_data_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_data_dir


def site_data_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data directory shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_data_dir


def user_config_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_config_dir


def site_config_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config directory shared by the users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_config_dir


def user_cache_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_cache_dir


def site_cache_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache directory shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_cache_dir


def user_state_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: state directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_state_dir


def user_log_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: log directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_log_dir


def user_documents_dir() -> str:
    """:returns: documents directory tied to the user"""
    return PlatformDirs().user_documents_dir


def user_downloads_dir() -> str:
    """:returns: downloads directory tied to the user"""
    return PlatformDirs().user_downloads_dir


def user_pictures_dir() -> str:
    """:returns: pictures directory tied to the user"""
    return PlatformDirs().user_pictures_dir


def user_videos_dir() -> str:
    """:returns: videos directory tied to the user"""
    return PlatformDirs().user_videos_dir


def user_music_dir() -> str:
    """:returns: music directory tied to the user"""
    return PlatformDirs().user_music_dir


def user_desktop_dir() -> str:
    """:returns: desktop directory tied to the user"""
    return PlatformDirs().user_desktop_dir


def user_runtime_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_runtime_dir


def site_runtime_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime directory shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_runtime_dir


def user_data_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_data_path


def site_data_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data path shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_data_path


def user_config_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_config_path


def site_config_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config path shared by the users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_config_path


def site_cache_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache path shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_cache_path


def user_cache_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_cache_path


def user_state_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: state path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_state_path


def user_log_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: log path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_log_path


def user_documents_path() -> Path:
    """:returns: documents path tied to the user"""
    return PlatformDirs().user_documents_path


def user_downloads_path() -> Path:
    """:returns: downloads path tied to the user"""
    return PlatformDirs().user_downloads_path


def user_pictures_path() -> Path:
    """:returns: pictures path tied to the user"""
    return PlatformDirs().user_pictures_path


def user_videos_path() -> Path:
    """:returns: videos path tied to the user"""
    return PlatformDirs().user_videos_path


def user_music_path() -> Path:
    """:returns: music path tied to the user"""
    return PlatformDirs().user_music_path


def user_desktop_path() -> Path:
    """:returns: desktop path tied to the user"""
    return PlatformDirs().user_desktop_path


def user_runtime_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_runtime_path


def site_runtime_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime path shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_runtime_path


__all__ = [
    "AppDirs",
    "PlatformDirs",
    "PlatformDirsABC",
    "__version__",
    "__version_info__",
    "site_cache_dir",
    "site_cache_path",
    "site_config_dir",
    "site_config_path",
    "site_data_dir",
    "site_data_path",
    "site_runtime_dir",
    "site_runtime_path",
    "user_cache_dir",
    "user_cache_path",
    "user_config_dir",
    "user_config_path",
    "user_data_dir",
    "user_data_path",
    "user_desktop_dir",
    "user_desktop_path",
    "user_documents_dir",
    "user_documents_path",
    "user_downloads_dir",
    "user_downloads_path",
    "user_log_dir",
    "user_log_path",
    "user_music_dir",
    "user_music_path",
    "user_pictures_dir",
    "user_pictures_path",
    "user_runtime_dir",
    "user_runtime_path",
    "user_state_dir",
    "user_state_path",
    "user_videos_dir",
    "user_videos_path",
]
@@ -0,0 +1,55 @@
"""Main entry point."""

from __future__ import annotations

from pip._vendor.platformdirs import PlatformDirs, __version__

PROPS = (
    "user_data_dir",
    "user_config_dir",
    "user_cache_dir",
    "user_state_dir",
    "user_log_dir",
    "user_documents_dir",
    "user_downloads_dir",
    "user_pictures_dir",
    "user_videos_dir",
    "user_music_dir",
    "user_runtime_dir",
    "site_data_dir",
    "site_config_dir",
    "site_cache_dir",
    "site_runtime_dir",
)


def main() -> None:
    """Run the main entry point."""
    app_name = "MyApp"
    app_author = "MyCompany"

    print(f"-- platformdirs {__version__} --")  # noqa: T201

    print("-- app dirs (with optional 'version')")  # noqa: T201
    dirs = PlatformDirs(app_name, app_author, version="1.0")
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201

    print("\n-- app dirs (without optional 'version')")  # noqa: T201
    dirs = PlatformDirs(app_name, app_author)
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201

    print("\n-- app dirs (without optional 'appauthor')")  # noqa: T201
    dirs = PlatformDirs(app_name)
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201

    print("\n-- app dirs (with disabled 'appauthor')")  # noqa: T201
    dirs = PlatformDirs(app_name, appauthor=False)
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201


if __name__ == "__main__":
    main()
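
# Editor's note: the vendored module can be run directly to inspect the
# resolved directories on the current platform, e.g. (invocation illustrative):
#   python -m pip._vendor.platformdirs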
@@ -0,0 +1,249 @@
"""Android."""

from __future__ import annotations

import os
import re
import sys
from functools import lru_cache
from typing import TYPE_CHECKING, cast

from .api import PlatformDirsABC


class Android(PlatformDirsABC):
    """
    Follows the guidance `from here <https://android.stackexchange.com/a/216132>`_.

    Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`, `version
    <platformdirs.api.PlatformDirsABC.version>`, and `ensure_exists
    <platformdirs.api.PlatformDirsABC.ensure_exists>` parameters.

    """

    @property
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/files/<AppName>``"""
        return self._append_app_name_and_version(cast("str", _android_folder()), "files")

    @property
    def site_data_dir(self) -> str:
        """:return: data directory shared by users, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_config_dir(self) -> str:
        """
        :return: config directory tied to the user, e.g. \
        ``/data/user/<userid>/<packagename>/shared_prefs/<AppName>``
        """
        return self._append_app_name_and_version(cast("str", _android_folder()), "shared_prefs")

    @property
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users, same as `user_config_dir`"""
        return self.user_config_dir

    @property
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>``"""
        return self._append_app_name_and_version(cast("str", _android_folder()), "cache")

    @property
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users, same as `user_cache_dir`"""
        return self.user_cache_dir

    @property
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_log_dir(self) -> str:
        """
        :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it,
          e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/log``
        """
        path = self.user_cache_dir
        if self.opinion:
            path = os.path.join(path, "log")  # noqa: PTH118
        return path

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user, e.g. ``/storage/emulated/0/Documents``"""
        return _android_documents_folder()

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user, e.g. ``/storage/emulated/0/Downloads``"""
        return _android_downloads_folder()

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user, e.g. ``/storage/emulated/0/Pictures``"""
        return _android_pictures_folder()

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user, e.g. ``/storage/emulated/0/DCIM/Camera``"""
        return _android_videos_folder()

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user, e.g. ``/storage/emulated/0/Music``"""
        return _android_music_folder()

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user, e.g. ``/storage/emulated/0/Desktop``"""
        return "/storage/emulated/0/Desktop"

    @property
    def user_runtime_dir(self) -> str:
        """
        :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it,
          e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/tmp``
        """
        path = self.user_cache_dir
        if self.opinion:
            path = os.path.join(path, "tmp")  # noqa: PTH118
        return path

    @property
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users, same as `user_runtime_dir`"""
        return self.user_runtime_dir


@lru_cache(maxsize=1)
def _android_folder() -> str | None:  # noqa: C901
    """:return: base folder for the Android OS or None if it cannot be found"""
    result: str | None = None
    # The type checker isn't happy with our "import android", so skip this when type checking; see
    # https://stackoverflow.com/a/61394121
    if not TYPE_CHECKING:
        try:
            # First try to get a path to the android app using python4android (if available)...
            from android import mActivity  # noqa: PLC0415

            context = cast("android.content.Context", mActivity.getApplicationContext())  # noqa: F821
            result = context.getFilesDir().getParentFile().getAbsolutePath()
        except Exception:  # noqa: BLE001
            result = None
    if result is None:
        try:
            # ...and fall back to using plain pyjnius, if python4android isn't available or doesn't deliver any useful
            # result...
            from jnius import autoclass  # noqa: PLC0415

            context = autoclass("android.content.Context")
            result = context.getFilesDir().getParentFile().getAbsolutePath()
        except Exception:  # noqa: BLE001
            result = None
    if result is None:
        # and if that fails too, find an android folder by looking at the paths on sys.path
        # warning: only works for apps installed under /data, not adopted storage etc.
        pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files")
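        # e.g. (editor's illustration; the package name is hypothetical):
        # "/data/user/0/com.example.app/files" matches and yields
        # "/data/user/0/com.example.app"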
        for path in sys.path:
            if pattern.match(path):
                result = path.split("/files")[0]
                break
        else:
            result = None
    if result is None:
        # one last try: find an android folder by looking at the paths on sys.path, taking adopted storage paths
        # into account
        pattern = re.compile(r"/mnt/expand/[a-fA-F0-9-]{36}/(data|user/\d+)/(.+)/files")
        for path in sys.path:
            if pattern.match(path):
                result = path.split("/files")[0]
                break
        else:
            result = None
    return result


@lru_cache(maxsize=1)
def _android_documents_folder() -> str:
    """:return: documents folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        documents_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOCUMENTS).getAbsolutePath()
    except Exception:  # noqa: BLE001
        documents_dir = "/storage/emulated/0/Documents"

    return documents_dir


@lru_cache(maxsize=1)
def _android_downloads_folder() -> str:
    """:return: downloads folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        downloads_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOWNLOADS).getAbsolutePath()
    except Exception:  # noqa: BLE001
        downloads_dir = "/storage/emulated/0/Downloads"

    return downloads_dir


@lru_cache(maxsize=1)
def _android_pictures_folder() -> str:
    """:return: pictures folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        pictures_dir: str = context.getExternalFilesDir(environment.DIRECTORY_PICTURES).getAbsolutePath()
    except Exception:  # noqa: BLE001
        pictures_dir = "/storage/emulated/0/Pictures"

    return pictures_dir


@lru_cache(maxsize=1)
def _android_videos_folder() -> str:
    """:return: videos folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        videos_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DCIM).getAbsolutePath()
    except Exception:  # noqa: BLE001
        videos_dir = "/storage/emulated/0/DCIM/Camera"

    return videos_dir


@lru_cache(maxsize=1)
def _android_music_folder() -> str:
    """:return: music folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        music_dir: str = context.getExternalFilesDir(environment.DIRECTORY_MUSIC).getAbsolutePath()
    except Exception:  # noqa: BLE001
        music_dir = "/storage/emulated/0/Music"

    return music_dir


__all__ = [
    "Android",
]
@@ -0,0 +1,299 @@
"""Base API."""

from __future__ import annotations

import os
from abc import ABC, abstractmethod
from pathlib import Path
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from collections.abc import Iterator
    from typing import Literal


class PlatformDirsABC(ABC):  # noqa: PLR0904
    """Abstract base class for platform directories."""

    def __init__(  # noqa: PLR0913, PLR0917
        self,
        appname: str | None = None,
        appauthor: str | Literal[False] | None = None,
        version: str | None = None,
        roaming: bool = False,  # noqa: FBT001, FBT002
        multipath: bool = False,  # noqa: FBT001, FBT002
        opinion: bool = True,  # noqa: FBT001, FBT002
        ensure_exists: bool = False,  # noqa: FBT001, FBT002
    ) -> None:
        """
        Create a new platform directory.

        :param appname: See `appname`.
        :param appauthor: See `appauthor`.
        :param version: See `version`.
        :param roaming: See `roaming`.
        :param multipath: See `multipath`.
        :param opinion: See `opinion`.
        :param ensure_exists: See `ensure_exists`.

        """
        self.appname = appname  #: The name of the application.
        self.appauthor = appauthor
        """
        The name of the app author or distributing body for this application.

        Typically, it is the owning company name. Defaults to `appname`. You may pass ``False`` to disable it.

        """
        self.version = version
        """
        An optional version path element to append to the path.

        You might want to use this if you want multiple versions of your app to be able to run independently. If used,
        this would typically be ``<major>.<minor>``.

        """
        self.roaming = roaming
        """
        Whether to use the roaming appdata directory on Windows.

        That means that for users on a Windows network setup for roaming profiles, this user data will be synced on
        login (see
        `here <https://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>`_).

        """
        self.multipath = multipath
        """
        An optional parameter which indicates that the entire list of data dirs should be returned.

        By default, only the first item is returned.

        """
        self.opinion = opinion  #: A flag indicating whether to use opinionated values.
        self.ensure_exists = ensure_exists
        """
        Optionally create the directory (and any missing parents) upon access if it does not exist.

        By default, no directories are created.

        """

    def _append_app_name_and_version(self, *base: str) -> str:
        params = list(base[1:])
        if self.appname:
            params.append(self.appname)
            if self.version:
                params.append(self.version)
        path = os.path.join(base[0], *params)  # noqa: PTH118
        self._optionally_create_directory(path)
        return path
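
    # Illustration (editor's comment, hypothetical values): with appname="MyApp"
    # and version="1.0", _append_app_name_and_version("/base") returns
    # "/base/MyApp/1.0" (and creates it when ensure_exists is true).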

    def _optionally_create_directory(self, path: str) -> None:
        if self.ensure_exists:
            Path(path).mkdir(parents=True, exist_ok=True)

    def _first_item_as_path_if_multipath(self, directory: str) -> Path:
        if self.multipath:
            # If multipath is True, the first path is returned.
            directory = directory.split(os.pathsep)[0]
        return Path(directory)
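
    # e.g. (editor's illustration; POSIX os.pathsep is ":"): with multipath
    # enabled, "/etc/xdg/app:/etc/app" becomes Path("/etc/xdg/app").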

    @property
    @abstractmethod
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user"""

    @property
    @abstractmethod
    def site_data_dir(self) -> str:
        """:return: data directory shared by users"""

    @property
    @abstractmethod
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user"""

    @property
    @abstractmethod
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users"""

    @property
    @abstractmethod
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user"""

    @property
    @abstractmethod
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users"""

    @property
    @abstractmethod
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user"""

    @property
    @abstractmethod
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user"""

    @property
    @abstractmethod
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user"""

    @property
    @abstractmethod
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user"""

    @property
    @abstractmethod
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user"""

    @property
    @abstractmethod
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user"""

    @property
    @abstractmethod
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user"""

    @property
    @abstractmethod
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user"""

    @property
    @abstractmethod
    def user_runtime_dir(self) -> str:
        """:return: runtime directory tied to the user"""

    @property
    @abstractmethod
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users"""

    @property
    def user_data_path(self) -> Path:
        """:return: data path tied to the user"""
        return Path(self.user_data_dir)

    @property
    def site_data_path(self) -> Path:
        """:return: data path shared by users"""
        return Path(self.site_data_dir)

    @property
    def user_config_path(self) -> Path:
        """:return: config path tied to the user"""
        return Path(self.user_config_dir)

    @property
    def site_config_path(self) -> Path:
        """:return: config path shared by the users"""
        return Path(self.site_config_dir)

    @property
    def user_cache_path(self) -> Path:
        """:return: cache path tied to the user"""
        return Path(self.user_cache_dir)

    @property
    def site_cache_path(self) -> Path:
        """:return: cache path shared by users"""
        return Path(self.site_cache_dir)

    @property
    def user_state_path(self) -> Path:
        """:return: state path tied to the user"""
        return Path(self.user_state_dir)

    @property
    def user_log_path(self) -> Path:
        """:return: log path tied to the user"""
        return Path(self.user_log_dir)

    @property
    def user_documents_path(self) -> Path:
        """:return: documents path tied to the user"""
        return Path(self.user_documents_dir)

    @property
    def user_downloads_path(self) -> Path:
        """:return: downloads path tied to the user"""
        return Path(self.user_downloads_dir)

    @property
    def user_pictures_path(self) -> Path:
        """:return: pictures path tied to the user"""
        return Path(self.user_pictures_dir)

    @property
|
||||
def user_videos_path(self) -> Path:
|
||||
""":return: videos path tied to the user"""
|
||||
return Path(self.user_videos_dir)
|
||||
|
||||
@property
|
||||
def user_music_path(self) -> Path:
|
||||
""":return: music path tied to the user"""
|
||||
return Path(self.user_music_dir)
|
||||
|
||||
@property
|
||||
def user_desktop_path(self) -> Path:
|
||||
""":return: desktop path tied to the user"""
|
||||
return Path(self.user_desktop_dir)
|
||||
|
||||
@property
|
||||
def user_runtime_path(self) -> Path:
|
||||
""":return: runtime path tied to the user"""
|
||||
return Path(self.user_runtime_dir)
|
||||
|
||||
@property
|
||||
def site_runtime_path(self) -> Path:
|
||||
""":return: runtime path shared by users"""
|
||||
return Path(self.site_runtime_dir)
|
||||
|
||||
def iter_config_dirs(self) -> Iterator[str]:
|
||||
""":yield: all user and site configuration directories."""
|
||||
yield self.user_config_dir
|
||||
yield self.site_config_dir
|
||||
|
||||
def iter_data_dirs(self) -> Iterator[str]:
|
||||
""":yield: all user and site data directories."""
|
||||
yield self.user_data_dir
|
||||
yield self.site_data_dir
|
||||
|
||||
def iter_cache_dirs(self) -> Iterator[str]:
|
||||
""":yield: all user and site cache directories."""
|
||||
yield self.user_cache_dir
|
||||
yield self.site_cache_dir
|
||||
|
||||
def iter_runtime_dirs(self) -> Iterator[str]:
|
||||
""":yield: all user and site runtime directories."""
|
||||
yield self.user_runtime_dir
|
||||
yield self.site_runtime_dir
|
||||
|
||||
def iter_config_paths(self) -> Iterator[Path]:
|
||||
""":yield: all user and site configuration paths."""
|
||||
for path in self.iter_config_dirs():
|
||||
yield Path(path)
|
||||
|
||||
def iter_data_paths(self) -> Iterator[Path]:
|
||||
""":yield: all user and site data paths."""
|
||||
for path in self.iter_data_dirs():
|
||||
yield Path(path)
|
||||
|
||||
def iter_cache_paths(self) -> Iterator[Path]:
|
||||
""":yield: all user and site cache paths."""
|
||||
for path in self.iter_cache_dirs():
|
||||
yield Path(path)
|
||||
|
||||
def iter_runtime_paths(self) -> Iterator[Path]:
|
||||
""":yield: all user and site runtime paths."""
|
||||
for path in self.iter_runtime_dirs():
|
||||
yield Path(path)
|
||||
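A rough sketch of how the knobs above compose into a concrete path ("MyApp", "1.2", and the base directory are hypothetical example values; this mirrors `_append_app_name_and_version`, which behaves the same way in every subclass):

# Sketch only: version is appended only when appname is set, as in the method above.
import os.path

base = "/home/user/.local/share"  # stand-in for a platform-specific base directory
appname, version = "MyApp", "1.2"
params = []
if appname:
    params.append(appname)
    if version:
        params.append(version)
print(os.path.join(base, *params))  # /home/user/.local/share/MyApp/1.2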
@@ -0,0 +1,144 @@
"""macOS."""

from __future__ import annotations

import os.path
import sys
from typing import TYPE_CHECKING

from .api import PlatformDirsABC

if TYPE_CHECKING:
    from pathlib import Path


class MacOS(PlatformDirsABC):
    """
    Platform directories for the macOS operating system.

    Follows the guidance from
    `Apple documentation <https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/MacOSXDirectories/MacOSXDirectories.html>`_.
    Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`,
    `version <platformdirs.api.PlatformDirsABC.version>`, and
    `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.

    """

    @property
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user, e.g. ``~/Library/Application Support/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Application Support"))  # noqa: PTH111

    @property
    def site_data_dir(self) -> str:
        """
        :return: data directory shared by users, e.g. ``/Library/Application Support/$appname/$version``.
          If we're using a Python binary managed by `Homebrew <https://brew.sh>`_, the directory
          will be under the Homebrew prefix, e.g. ``/opt/homebrew/share/$appname/$version``.
          If `multipath <platformdirs.api.PlatformDirsABC.multipath>` is enabled, and we're in Homebrew,
          the response is a multi-path string separated by ":", e.g.
          ``/opt/homebrew/share/$appname/$version:/Library/Application Support/$appname/$version``
        """
        is_homebrew = sys.prefix.startswith("/opt/homebrew")
        path_list = [self._append_app_name_and_version("/opt/homebrew/share")] if is_homebrew else []
        path_list.append(self._append_app_name_and_version("/Library/Application Support"))
        if self.multipath:
            return os.pathsep.join(path_list)
        return path_list[0]

    @property
    def site_data_path(self) -> Path:
        """:return: data path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_data_dir)

    @property
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users, same as `site_data_dir`"""
        return self.site_data_dir

    @property
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user, e.g. ``~/Library/Caches/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches"))  # noqa: PTH111

    @property
    def site_cache_dir(self) -> str:
        """
        :return: cache directory shared by users, e.g. ``/Library/Caches/$appname/$version``.
          If we're using a Python binary managed by `Homebrew <https://brew.sh>`_, the directory
          will be under the Homebrew prefix, e.g. ``/opt/homebrew/var/cache/$appname/$version``.
          If `multipath <platformdirs.api.PlatformDirsABC.multipath>` is enabled, and we're in Homebrew,
          the response is a multi-path string separated by ":", e.g.
          ``/opt/homebrew/var/cache/$appname/$version:/Library/Caches/$appname/$version``
        """
        is_homebrew = sys.prefix.startswith("/opt/homebrew")
        path_list = [self._append_app_name_and_version("/opt/homebrew/var/cache")] if is_homebrew else []
        path_list.append(self._append_app_name_and_version("/Library/Caches"))
        if self.multipath:
            return os.pathsep.join(path_list)
        return path_list[0]

    @property
    def site_cache_path(self) -> Path:
        """:return: cache path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_cache_dir)

    @property
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user, e.g. ``~/Library/Logs/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Logs"))  # noqa: PTH111

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user, e.g. ``~/Documents``"""
        return os.path.expanduser("~/Documents")  # noqa: PTH111

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user, e.g. ``~/Downloads``"""
        return os.path.expanduser("~/Downloads")  # noqa: PTH111

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user, e.g. ``~/Pictures``"""
        return os.path.expanduser("~/Pictures")  # noqa: PTH111

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user, e.g. ``~/Movies``"""
        return os.path.expanduser("~/Movies")  # noqa: PTH111

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user, e.g. ``~/Music``"""
        return os.path.expanduser("~/Music")  # noqa: PTH111

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user, e.g. ``~/Desktop``"""
        return os.path.expanduser("~/Desktop")  # noqa: PTH111

    @property
    def user_runtime_dir(self) -> str:
        """:return: runtime directory tied to the user, e.g. ``~/Library/Caches/TemporaryItems/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches/TemporaryItems"))  # noqa: PTH111

    @property
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users, same as `user_runtime_dir`"""
        return self.user_runtime_dir


__all__ = [
    "MacOS",
]
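A rough usage sketch of the Homebrew-aware multipath behavior above (assumes a Homebrew-managed interpreter, i.e. sys.prefix under /opt/homebrew; "MyApp" and "1.2" are hypothetical values):

# Sketch only: actual output depends on the running interpreter's sys.prefix.
from pip._vendor.platformdirs.macos import MacOS

dirs = MacOS(appname="MyApp", version="1.2", multipath=True)
print(dirs.site_data_dir)
# Under Homebrew:
#   /opt/homebrew/share/MyApp/1.2:/Library/Application Support/MyApp/1.2
# Otherwise:
#   /Library/Application Support/MyApp/1.2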
@@ -0,0 +1,272 @@
"""Unix."""

from __future__ import annotations

import os
import sys
from configparser import ConfigParser
from pathlib import Path
from typing import TYPE_CHECKING, NoReturn

from .api import PlatformDirsABC

if TYPE_CHECKING:
    from collections.abc import Iterator

if sys.platform == "win32":

    def getuid() -> NoReturn:
        msg = "should only be used on Unix"
        raise RuntimeError(msg)

else:
    from os import getuid


class Unix(PlatformDirsABC):  # noqa: PLR0904
    """
    On Unix/Linux, we follow the
    `XDG Basedir Spec <https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_.

    The spec allows overriding directories with environment variables. The examples shown are the default values,
    alongside the name of the environment variable that overrides them. Makes use of the `appname
    <platformdirs.api.PlatformDirsABC.appname>`, `version <platformdirs.api.PlatformDirsABC.version>`, `multipath
    <platformdirs.api.PlatformDirsABC.multipath>`, `opinion <platformdirs.api.PlatformDirsABC.opinion>`, `ensure_exists
    <platformdirs.api.PlatformDirsABC.ensure_exists>`.

    """

    @property
    def user_data_dir(self) -> str:
        """
        :return: data directory tied to the user, e.g. ``~/.local/share/$appname/$version`` or
         ``$XDG_DATA_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_DATA_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.local/share")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def _site_data_dirs(self) -> list[str]:
        path = os.environ.get("XDG_DATA_DIRS", "")
        if not path.strip():
            path = f"/usr/local/share{os.pathsep}/usr/share"
        return [self._append_app_name_and_version(p) for p in path.split(os.pathsep)]

    @property
    def site_data_dir(self) -> str:
        """
        :return: data directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>` is
         enabled and ``XDG_DATA_DIRS`` is set to a multi-path value, the response is also a multi-path string
         separated by the OS path separator), e.g. ``/usr/local/share/$appname/$version`` or
         ``/usr/share/$appname/$version``
        """
        # XDG default for $XDG_DATA_DIRS; only first, if multipath is False
        dirs = self._site_data_dirs
        if not self.multipath:
            return dirs[0]
        return os.pathsep.join(dirs)

    @property
    def user_config_dir(self) -> str:
        """
        :return: config directory tied to the user, e.g. ``~/.config/$appname/$version`` or
         ``$XDG_CONFIG_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_CONFIG_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.config")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def _site_config_dirs(self) -> list[str]:
        path = os.environ.get("XDG_CONFIG_DIRS", "")
        if not path.strip():
            path = "/etc/xdg"
        return [self._append_app_name_and_version(p) for p in path.split(os.pathsep)]

    @property
    def site_config_dir(self) -> str:
        """
        :return: config directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>`
         is enabled and ``XDG_CONFIG_DIRS`` is set to a multi-path value, the response is also a multi-path string
         separated by the OS path separator), e.g. ``/etc/xdg/$appname/$version``
        """
        # XDG default for $XDG_CONFIG_DIRS; only first, if multipath is False
        dirs = self._site_config_dirs
        if not self.multipath:
            return dirs[0]
        return os.pathsep.join(dirs)

    @property
    def user_cache_dir(self) -> str:
        """
        :return: cache directory tied to the user, e.g. ``~/.cache/$appname/$version`` or
         ``$XDG_CACHE_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_CACHE_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.cache")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users, e.g. ``/var/cache/$appname/$version``"""
        return self._append_app_name_and_version("/var/cache")

    @property
    def user_state_dir(self) -> str:
        """
        :return: state directory tied to the user, e.g. ``~/.local/state/$appname/$version`` or
         ``$XDG_STATE_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_STATE_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.local/state")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user, same as `user_state_dir` if not opinionated else ``log`` in it"""
        path = self.user_state_dir
        if self.opinion:
            path = os.path.join(path, "log")  # noqa: PTH118
            self._optionally_create_directory(path)
        return path

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user, e.g. ``~/Documents``"""
        return _get_user_media_dir("XDG_DOCUMENTS_DIR", "~/Documents")

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user, e.g. ``~/Downloads``"""
        return _get_user_media_dir("XDG_DOWNLOAD_DIR", "~/Downloads")

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user, e.g. ``~/Pictures``"""
        return _get_user_media_dir("XDG_PICTURES_DIR", "~/Pictures")

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user, e.g. ``~/Videos``"""
        return _get_user_media_dir("XDG_VIDEOS_DIR", "~/Videos")

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user, e.g. ``~/Music``"""
        return _get_user_media_dir("XDG_MUSIC_DIR", "~/Music")

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user, e.g. ``~/Desktop``"""
        return _get_user_media_dir("XDG_DESKTOP_DIR", "~/Desktop")

    @property
    def user_runtime_dir(self) -> str:
        """
        :return: runtime directory tied to the user, e.g. ``/run/user/$(id -u)/$appname/$version`` or
         ``$XDG_RUNTIME_DIR/$appname/$version``.

         For FreeBSD/OpenBSD/NetBSD, if ``$XDG_RUNTIME_DIR`` is not set, this returns
         ``/var/run/user/$(id -u)/$appname/$version`` when that directory exists, otherwise
         ``/tmp/runtime-$(id -u)/$appname/$version``.
        """
        path = os.environ.get("XDG_RUNTIME_DIR", "")
        if not path.strip():
            if sys.platform.startswith(("freebsd", "openbsd", "netbsd")):
                path = f"/var/run/user/{getuid()}"
                if not Path(path).exists():
                    path = f"/tmp/runtime-{getuid()}"  # noqa: S108
            else:
                path = f"/run/user/{getuid()}"
        return self._append_app_name_and_version(path)

    @property
    def site_runtime_dir(self) -> str:
        """
        :return: runtime directory shared by users, e.g. ``/run/$appname/$version`` or \
         ``$XDG_RUNTIME_DIR/$appname/$version``.

        Note that this behaves almost exactly like `user_runtime_dir` if ``$XDG_RUNTIME_DIR`` is set, but will
        fall back to paths associated with the root user instead of a regular logged-in user if it's not set.

        If you wish to ensure that a logged-in root user path is returned e.g. ``/run/user/0``, use `user_runtime_dir`
        instead.

        For FreeBSD/OpenBSD/NetBSD, it would return ``/var/run/$appname/$version`` if ``$XDG_RUNTIME_DIR`` is not set.
        """
        path = os.environ.get("XDG_RUNTIME_DIR", "")
        if not path.strip():
            if sys.platform.startswith(("freebsd", "openbsd", "netbsd")):
                path = "/var/run"
            else:
                path = "/run"
        return self._append_app_name_and_version(path)

    @property
    def site_data_path(self) -> Path:
        """:return: data path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_data_dir)

    @property
    def site_config_path(self) -> Path:
        """:return: config path shared by the users, returns the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_config_dir)

    @property
    def site_cache_path(self) -> Path:
        """:return: cache path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_cache_dir)

    def iter_config_dirs(self) -> Iterator[str]:
        """:yield: all user and site configuration directories."""
        yield self.user_config_dir
        yield from self._site_config_dirs

    def iter_data_dirs(self) -> Iterator[str]:
        """:yield: all user and site data directories."""
        yield self.user_data_dir
        yield from self._site_data_dirs


def _get_user_media_dir(env_var: str, fallback_tilde_path: str) -> str:
    media_dir = _get_user_dirs_folder(env_var)
    if media_dir is None:
        media_dir = os.environ.get(env_var, "").strip()
        if not media_dir:
            media_dir = os.path.expanduser(fallback_tilde_path)  # noqa: PTH111

    return media_dir


def _get_user_dirs_folder(key: str) -> str | None:
    """
    Return directory from user-dirs.dirs config file.

    See https://freedesktop.org/wiki/Software/xdg-user-dirs/.

    """
    user_dirs_config_path = Path(Unix().user_config_dir) / "user-dirs.dirs"
    if user_dirs_config_path.exists():
        parser = ConfigParser()

        with user_dirs_config_path.open() as stream:
            # Add fake section header, so ConfigParser doesn't complain
            parser.read_string(f"[top]\n{stream.read()}")

        if key not in parser["top"]:
            return None

        path = parser["top"][key].strip('"')
        # Handle relative home paths
        return path.replace("$HOME", os.path.expanduser("~"))  # noqa: PTH111

    return None


__all__ = [
    "Unix",
]
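A minimal sketch of the XDG environment-variable override described above ("MyApp" and /tmp/conf are hypothetical values; the properties re-read the environment on every access, so the override takes effect immediately):

# Sketch only.
import os
from pip._vendor.platformdirs.unix import Unix

dirs = Unix(appname="MyApp")
os.environ.pop("XDG_CONFIG_HOME", None)
print(dirs.user_config_dir)               # e.g. /home/user/.config/MyApp

os.environ["XDG_CONFIG_HOME"] = "/tmp/conf"
print(dirs.user_config_dir)               # /tmp/conf/MyApp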
@@ -0,0 +1,21 @@
# file generated by setuptools-scm
# don't change, don't track in version control

__all__ = ["__version__", "__version_tuple__", "version", "version_tuple"]

TYPE_CHECKING = False
if TYPE_CHECKING:
    from typing import Tuple
    from typing import Union

    VERSION_TUPLE = Tuple[Union[int, str], ...]
else:
    VERSION_TUPLE = object

version: str
__version__: str
__version_tuple__: VERSION_TUPLE
version_tuple: VERSION_TUPLE

__version__ = version = '4.3.8'
__version_tuple__ = version_tuple = (4, 3, 8)
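A small sketch of how the generated tuple can be used for structured version checks (assuming this file is importable as pip._vendor.platformdirs.version in the vendored layout):

# Sketch only.
from pip._vendor.platformdirs.version import __version__, __version_tuple__

assert __version__ == '4.3.8'
if __version_tuple__ >= (4, 3):
    print("recent platformdirs")  # tuple comparison avoids fragile string parsing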
@@ -0,0 +1,272 @@
"""Windows."""

from __future__ import annotations

import os
import sys
from functools import lru_cache
from typing import TYPE_CHECKING

from .api import PlatformDirsABC

if TYPE_CHECKING:
    from collections.abc import Callable


class Windows(PlatformDirsABC):
    """
    `MSDN on where to store app data files <https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid>`_.

    Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`, `appauthor
    <platformdirs.api.PlatformDirsABC.appauthor>`, `version <platformdirs.api.PlatformDirsABC.version>`, `roaming
    <platformdirs.api.PlatformDirsABC.roaming>`, `opinion <platformdirs.api.PlatformDirsABC.opinion>`, `ensure_exists
    <platformdirs.api.PlatformDirsABC.ensure_exists>`.

    """

    @property
    def user_data_dir(self) -> str:
        """
        :return: data directory tied to the user, e.g.
         ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname`` (not roaming) or
         ``%USERPROFILE%\\AppData\\Roaming\\$appauthor\\$appname`` (roaming)
        """
        const = "CSIDL_APPDATA" if self.roaming else "CSIDL_LOCAL_APPDATA"
        path = os.path.normpath(get_win_folder(const))
        return self._append_parts(path)

    def _append_parts(self, path: str, *, opinion_value: str | None = None) -> str:
        params = []
        if self.appname:
            if self.appauthor is not False:
                author = self.appauthor or self.appname
                params.append(author)
            params.append(self.appname)
            if opinion_value is not None and self.opinion:
                params.append(opinion_value)
            if self.version:
                params.append(self.version)
        path = os.path.join(path, *params)  # noqa: PTH118
        self._optionally_create_directory(path)
        return path

    @property
    def site_data_dir(self) -> str:
        """:return: data directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname``"""
        path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
        return self._append_parts(path)

    @property
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users, same as `site_data_dir`"""
        return self.site_data_dir

    @property
    def user_cache_dir(self) -> str:
        """
        :return: cache directory tied to the user (if opinionated with ``Cache`` folder within ``$appname``), e.g.
         ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname\\Cache\\$version``
        """
        path = os.path.normpath(get_win_folder("CSIDL_LOCAL_APPDATA"))
        return self._append_parts(path, opinion_value="Cache")

    @property
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname\\Cache\\$version``"""
        path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
        return self._append_parts(path, opinion_value="Cache")

    @property
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``Logs`` in it"""
        path = self.user_data_dir
        if self.opinion:
            path = os.path.join(path, "Logs")  # noqa: PTH118
            self._optionally_create_directory(path)
        return path

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user, e.g. ``%USERPROFILE%\\Documents``"""
        return os.path.normpath(get_win_folder("CSIDL_PERSONAL"))

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user, e.g. ``%USERPROFILE%\\Downloads``"""
        return os.path.normpath(get_win_folder("CSIDL_DOWNLOADS"))

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user, e.g. ``%USERPROFILE%\\Pictures``"""
        return os.path.normpath(get_win_folder("CSIDL_MYPICTURES"))

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user, e.g. ``%USERPROFILE%\\Videos``"""
        return os.path.normpath(get_win_folder("CSIDL_MYVIDEO"))

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user, e.g. ``%USERPROFILE%\\Music``"""
        return os.path.normpath(get_win_folder("CSIDL_MYMUSIC"))

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user, e.g. ``%USERPROFILE%\\Desktop``"""
        return os.path.normpath(get_win_folder("CSIDL_DESKTOPDIRECTORY"))

    @property
    def user_runtime_dir(self) -> str:
        """
        :return: runtime directory tied to the user, e.g.
         ``%USERPROFILE%\\AppData\\Local\\Temp\\$appauthor\\$appname``
        """
        path = os.path.normpath(os.path.join(get_win_folder("CSIDL_LOCAL_APPDATA"), "Temp"))  # noqa: PTH118
        return self._append_parts(path)

    @property
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users, same as `user_runtime_dir`"""
        return self.user_runtime_dir


def get_win_folder_from_env_vars(csidl_name: str) -> str:
    """Get folder from environment variables."""
    result = get_win_folder_if_csidl_name_not_env_var(csidl_name)
    if result is not None:
        return result

    env_var_name = {
        "CSIDL_APPDATA": "APPDATA",
        "CSIDL_COMMON_APPDATA": "ALLUSERSPROFILE",
        "CSIDL_LOCAL_APPDATA": "LOCALAPPDATA",
    }.get(csidl_name)
    if env_var_name is None:
        msg = f"Unknown CSIDL name: {csidl_name}"
        raise ValueError(msg)
    result = os.environ.get(env_var_name)
    if result is None:
        msg = f"Unset environment variable: {env_var_name}"
        raise ValueError(msg)
    return result


def get_win_folder_if_csidl_name_not_env_var(csidl_name: str) -> str | None:
    """Get a folder for a CSIDL name that does not exist as an environment variable."""
    if csidl_name == "CSIDL_PERSONAL":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Documents")  # noqa: PTH118

    if csidl_name == "CSIDL_DOWNLOADS":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Downloads")  # noqa: PTH118

    if csidl_name == "CSIDL_MYPICTURES":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Pictures")  # noqa: PTH118

    if csidl_name == "CSIDL_MYVIDEO":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Videos")  # noqa: PTH118

    if csidl_name == "CSIDL_MYMUSIC":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Music")  # noqa: PTH118
    return None


def get_win_folder_from_registry(csidl_name: str) -> str:
    """
    Get folder from the registry.

    This is a fallback technique at best. I'm not sure if using the registry for these guarantees us the correct answer
    for all CSIDL_* names.

    """
    shell_folder_name = {
        "CSIDL_APPDATA": "AppData",
        "CSIDL_COMMON_APPDATA": "Common AppData",
        "CSIDL_LOCAL_APPDATA": "Local AppData",
        "CSIDL_PERSONAL": "Personal",
        "CSIDL_DOWNLOADS": "{374DE290-123F-4565-9164-39C4925E467B}",
        "CSIDL_MYPICTURES": "My Pictures",
        "CSIDL_MYVIDEO": "My Video",
        "CSIDL_MYMUSIC": "My Music",
    }.get(csidl_name)
    if shell_folder_name is None:
        msg = f"Unknown CSIDL name: {csidl_name}"
        raise ValueError(msg)
    if sys.platform != "win32":  # only needed for mypy type checker to know that this code runs only on Windows
        raise NotImplementedError
    import winreg  # noqa: PLC0415

    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders")
    directory, _ = winreg.QueryValueEx(key, shell_folder_name)
    return str(directory)


def get_win_folder_via_ctypes(csidl_name: str) -> str:
    """Get folder with ctypes."""
    # There is no 'CSIDL_DOWNLOADS'.
    # Use 'CSIDL_PROFILE' (40) and append the default folder 'Downloads' instead.
    # https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid

    import ctypes  # noqa: PLC0415

    csidl_const = {
        "CSIDL_APPDATA": 26,
        "CSIDL_COMMON_APPDATA": 35,
        "CSIDL_LOCAL_APPDATA": 28,
        "CSIDL_PERSONAL": 5,
        "CSIDL_MYPICTURES": 39,
        "CSIDL_MYVIDEO": 14,
        "CSIDL_MYMUSIC": 13,
        "CSIDL_DOWNLOADS": 40,
        "CSIDL_DESKTOPDIRECTORY": 16,
    }.get(csidl_name)
    if csidl_const is None:
        msg = f"Unknown CSIDL name: {csidl_name}"
        raise ValueError(msg)

    buf = ctypes.create_unicode_buffer(1024)
    windll = getattr(ctypes, "windll")  # noqa: B009 # using getattr to avoid false positive with mypy type checker
    windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)

    # Downgrade to short path name if it has high-bit chars.
    if any(ord(c) > 255 for c in buf):  # noqa: PLR2004
        buf2 = ctypes.create_unicode_buffer(1024)
        if windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
            buf = buf2

    if csidl_name == "CSIDL_DOWNLOADS":
        return os.path.join(buf.value, "Downloads")  # noqa: PTH118

    return buf.value


def _pick_get_win_folder() -> Callable[[str], str]:
    try:
        import ctypes  # noqa: PLC0415
    except ImportError:
        pass
    else:
        if hasattr(ctypes, "windll"):
            return get_win_folder_via_ctypes
    try:
        import winreg  # noqa: PLC0415, F401
    except ImportError:
        return get_win_folder_from_env_vars
    else:
        return get_win_folder_from_registry


get_win_folder = lru_cache(maxsize=None)(_pick_get_win_folder())

__all__ = [
    "Windows",
]
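The backend-selection logic above can be summarized by this standalone sketch (same fallback order as _pick_get_win_folder: ctypes shell API, then the registry, then environment variables):

# Sketch only: returns a label instead of the actual function.
def pick_backend() -> str:
    try:
        import ctypes
    except ImportError:
        pass
    else:
        if hasattr(ctypes, "windll"):  # real Windows runtime: use SHGetFolderPathW
            return "ctypes"
    try:
        import winreg  # noqa: F401
    except ImportError:
        return "env-vars"  # last resort: %APPDATA% and friends
    return "registry"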
@@ -0,0 +1,82 @@
"""
    Pygments
    ~~~~~~~~

    Pygments is a syntax highlighting package written in Python.

    It is a generic syntax highlighter for general use in all kinds of software
    such as forum systems, wikis or other applications that need to prettify
    source code. Highlights are:

    * a wide range of common languages and markup formats is supported
    * special attention is paid to details, increasing quality by a fair amount
    * support for new languages and formats can be added easily
    * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
      formats that PIL supports, and ANSI sequences
    * it is usable as a command-line tool and as a library
    * ... and it highlights even Brainfuck!

    The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``.

    .. _Pygments master branch:
       https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
from io import StringIO, BytesIO

__version__ = '2.19.2'
__docformat__ = 'restructuredtext'

__all__ = ['lex', 'format', 'highlight']


def lex(code, lexer):
    """
    Lex `code` with the `lexer` (must be a `Lexer` instance)
    and return an iterable of tokens. Currently, this only calls
    `lexer.get_tokens()`.
    """
    try:
        return lexer.get_tokens(code)
    except TypeError:
        # Heuristic to catch a common mistake.
        from pip._vendor.pygments.lexer import RegexLexer
        if isinstance(lexer, type) and issubclass(lexer, RegexLexer):
            raise TypeError('lex() argument must be a lexer instance, '
                            'not a class')
        raise


def format(tokens, formatter, outfile=None):  # pylint: disable=redefined-builtin
    """
    Format ``tokens`` (an iterable of tokens) with the formatter ``formatter``
    (a `Formatter` instance).

    If ``outfile`` is given and a valid file object (an object with a
    ``write`` method), the result will be written to it, otherwise it
    is returned as a string.
    """
    try:
        if not outfile:
            realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO()
            formatter.format(tokens, realoutfile)
            return realoutfile.getvalue()
        else:
            formatter.format(tokens, outfile)
    except TypeError:
        # Heuristic to catch a common mistake.
        from pip._vendor.pygments.formatter import Formatter
        if isinstance(formatter, type) and issubclass(formatter, Formatter):
            raise TypeError('format() argument must be a formatter instance, '
                            'not a class')
        raise


def highlight(code, lexer, formatter, outfile=None):
    """
    This is the most high-level highlighting function. It combines `lex` and
    `format` in one function.
    """
    return format(lex(code, lexer), formatter, outfile)
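A minimal usage sketch of the pipeline above (highlight() is literally format(lex(code, lexer), formatter, outfile); assumes the vendored copy keeps the standard PythonLexer and TerminalFormatter):

# Sketch only: note that lexer and formatter must be instances, not classes,
# or the TypeError heuristics above will fire.
from pip._vendor.pygments import highlight
from pip._vendor.pygments.lexers import PythonLexer
from pip._vendor.pygments.formatters import TerminalFormatter

print(highlight('print("hi")', PythonLexer(), TerminalFormatter()))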
@@ -0,0 +1,17 @@
"""
    pygments.__main__
    ~~~~~~~~~~~~~~~~~

    Main entry point for ``python -m pygments``.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import sys
from pip._vendor.pygments.cmdline import main

try:
    sys.exit(main(sys.argv))
except KeyboardInterrupt:
    sys.exit(1)
@@ -0,0 +1,70 @@
"""
    pygments.console
    ~~~~~~~~~~~~~~~~

    Format colored console output.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

esc = "\x1b["

codes = {}
codes[""] = ""
codes["reset"] = esc + "39;49;00m"

codes["bold"] = esc + "01m"
codes["faint"] = esc + "02m"
codes["standout"] = esc + "03m"
codes["underline"] = esc + "04m"
codes["blink"] = esc + "05m"
codes["overline"] = esc + "06m"

dark_colors = ["black", "red", "green", "yellow", "blue",
               "magenta", "cyan", "gray"]
light_colors = ["brightblack", "brightred", "brightgreen", "brightyellow", "brightblue",
                "brightmagenta", "brightcyan", "white"]

x = 30
for dark, light in zip(dark_colors, light_colors):
    codes[dark] = esc + "%im" % x
    codes[light] = esc + "%im" % (60 + x)
    x += 1

del dark, light, x

codes["white"] = codes["bold"]


def reset_color():
    return codes["reset"]


def colorize(color_key, text):
    return codes[color_key] + text + codes["reset"]


def ansiformat(attr, text):
    """
    Format ``text`` with a color and/or some attributes::

        color       normal color
        *color*     bold color
        _color_     underlined color
        +color+     blinking color
    """
    result = []
    if attr[:1] == attr[-1:] == '+':
        result.append(codes['blink'])
        attr = attr[1:-1]
    if attr[:1] == attr[-1:] == '*':
        result.append(codes['bold'])
        attr = attr[1:-1]
    if attr[:1] == attr[-1:] == '_':
        result.append(codes['underline'])
        attr = attr[1:-1]
    result.append(codes[attr])
    result.append(text)
    result.append(codes['reset'])
    return ''.join(result)
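A short sketch of the marker syntax ansiformat() accepts (the markers map onto the codes table above; output is raw ANSI escape sequences):

# Sketch only.
from pip._vendor.pygments.console import ansiformat, colorize

print(colorize("red", "plain red"))
print(ansiformat("*red*", "bold red"))
print(ansiformat("_red_", "underlined red"))
print(ansiformat("+red+", "blinking red"))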
@@ -0,0 +1,70 @@
"""
    pygments.filter
    ~~~~~~~~~~~~~~~

    Module that implements the default filter.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""


def apply_filters(stream, filters, lexer=None):
    """
    Use this method to apply an iterable of filters to
    a stream. If lexer is given it's forwarded to the
    filter, otherwise the filter receives `None`.
    """
    def _apply(filter_, stream):
        yield from filter_.filter(lexer, stream)
    for filter_ in filters:
        stream = _apply(filter_, stream)
    return stream


def simplefilter(f):
    """
    Decorator that converts a function into a filter::

        @simplefilter
        def lowercase(self, lexer, stream, options):
            for ttype, value in stream:
                yield ttype, value.lower()
    """
    return type(f.__name__, (FunctionFilter,), {
        '__module__': getattr(f, '__module__'),
        '__doc__': f.__doc__,
        'function': f,
    })


class Filter:
    """
    Default filter. Subclass this class or use the `simplefilter`
    decorator to create your own filters.
    """

    def __init__(self, **options):
        self.options = options

    def filter(self, lexer, stream):
        raise NotImplementedError()


class FunctionFilter(Filter):
    """
    Abstract class used by `simplefilter` to create simple
    function filters on the fly. The `simplefilter` decorator
    automatically creates subclasses of this class for
    functions passed to it.
    """
    function = None

    def __init__(self, **options):
        if not hasattr(self, 'function'):
            raise TypeError(f'{self.__class__.__name__!r} used without bound function')
        Filter.__init__(self, **options)

    def filter(self, lexer, stream):
        # pylint: disable=not-callable
        yield from self.function(lexer, stream, self.options)
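A minimal sketch of the simplefilter decorator in use (the "uppercase" filter is a hypothetical example; the decorated function becomes a FunctionFilter subclass, so it must be instantiated before use):

# Sketch only.
from pip._vendor.pygments.filter import simplefilter

@simplefilter
def uppercase(self, lexer, stream, options):
    for ttype, value in stream:
        yield ttype, value.upper()

flt = uppercase()  # uppercase is now a Filter class, not a function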
@@ -0,0 +1,940 @@
"""
    pygments.filters
    ~~~~~~~~~~~~~~~~

    Module containing filter lookup functions and default
    filters.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

from pip._vendor.pygments.token import String, Comment, Keyword, Name, Error, Whitespace, \
    string_to_tokentype
from pip._vendor.pygments.filter import Filter
from pip._vendor.pygments.util import get_list_opt, get_int_opt, get_bool_opt, \
    get_choice_opt, ClassNotFound, OptionError
from pip._vendor.pygments.plugin import find_plugin_filters


def find_filter_class(filtername):
    """Look up a filter by name. Return None if not found."""
    if filtername in FILTERS:
        return FILTERS[filtername]
    for name, cls in find_plugin_filters():
        if name == filtername:
            return cls
    return None


def get_filter_by_name(filtername, **options):
    """Return an instantiated filter.

    Options are passed to the filter initializer if wanted.
    Raise a ClassNotFound if not found.
    """
    cls = find_filter_class(filtername)
    if cls:
        return cls(**options)
    else:
        raise ClassNotFound(f'filter {filtername!r} not found')


def get_all_filters():
    """Return a generator of all filter names."""
    yield from FILTERS
    for name, _ in find_plugin_filters():
        yield name


def _replace_special(ttype, value, regex, specialttype,
                     replacefunc=lambda x: x):
    last = 0
    for match in regex.finditer(value):
        start, end = match.start(), match.end()
        if start != last:
            yield ttype, value[last:start]
        yield specialttype, replacefunc(value[start:end])
        last = end
    if last != len(value):
        yield ttype, value[last:]


class CodeTagFilter(Filter):
    """Highlight special code tags in comments and docstrings.

    Options accepted:

    `codetags` : list of strings
       A list of strings that are flagged as code tags. The default is to
       highlight ``XXX``, ``TODO``, ``FIXME``, ``BUG`` and ``NOTE``.

    .. versionchanged:: 2.13
       Now recognizes ``FIXME`` by default.
    """

    def __init__(self, **options):
        Filter.__init__(self, **options)
        tags = get_list_opt(options, 'codetags',
                            ['XXX', 'TODO', 'FIXME', 'BUG', 'NOTE'])
        self.tag_re = re.compile(r'\b({})\b'.format('|'.join([
            re.escape(tag) for tag in tags if tag
        ])))

    def filter(self, lexer, stream):
        regex = self.tag_re
        for ttype, value in stream:
            if ttype in String.Doc or \
               ttype in Comment and \
               ttype not in Comment.Preproc:
                yield from _replace_special(ttype, value, regex, Comment.Special)
            else:
                yield ttype, value


class SymbolFilter(Filter):
    """Convert mathematical symbols such as \\<longrightarrow> in Isabelle
    or \\longrightarrow in LaTeX into Unicode characters.

    This is mostly useful for HTML or console output when you want to
    approximate the source rendering you'd see in an IDE.

    Options accepted:

    `lang` : string
       The symbol language. Must be one of ``'isabelle'`` or
       ``'latex'``. The default is ``'isabelle'``.
    """

    latex_symbols = {
        '\\alpha' : '\U000003b1',
        '\\beta' : '\U000003b2',
        '\\gamma' : '\U000003b3',
        '\\delta' : '\U000003b4',
        '\\varepsilon' : '\U000003b5',
        '\\zeta' : '\U000003b6',
        '\\eta' : '\U000003b7',
        '\\vartheta' : '\U000003b8',
        '\\iota' : '\U000003b9',
        '\\kappa' : '\U000003ba',
        '\\lambda' : '\U000003bb',
        '\\mu' : '\U000003bc',
        '\\nu' : '\U000003bd',
        '\\xi' : '\U000003be',
        '\\pi' : '\U000003c0',
        '\\varrho' : '\U000003c1',
        '\\sigma' : '\U000003c3',
        '\\tau' : '\U000003c4',
        '\\upsilon' : '\U000003c5',
        '\\varphi' : '\U000003c6',
        '\\chi' : '\U000003c7',
        '\\psi' : '\U000003c8',
        '\\omega' : '\U000003c9',
        '\\Gamma' : '\U00000393',
        '\\Delta' : '\U00000394',
        '\\Theta' : '\U00000398',
        '\\Lambda' : '\U0000039b',
        '\\Xi' : '\U0000039e',
        '\\Pi' : '\U000003a0',
        '\\Sigma' : '\U000003a3',
        '\\Upsilon' : '\U000003a5',
        '\\Phi' : '\U000003a6',
        '\\Psi' : '\U000003a8',
        '\\Omega' : '\U000003a9',
        '\\leftarrow' : '\U00002190',
        '\\longleftarrow' : '\U000027f5',
        '\\rightarrow' : '\U00002192',
        '\\longrightarrow' : '\U000027f6',
        '\\Leftarrow' : '\U000021d0',
        '\\Longleftarrow' : '\U000027f8',
        '\\Rightarrow' : '\U000021d2',
        '\\Longrightarrow' : '\U000027f9',
        '\\leftrightarrow' : '\U00002194',
        '\\longleftrightarrow' : '\U000027f7',
        '\\Leftrightarrow' : '\U000021d4',
        '\\Longleftrightarrow' : '\U000027fa',
        '\\mapsto' : '\U000021a6',
        '\\longmapsto' : '\U000027fc',
        '\\relbar' : '\U00002500',
        '\\Relbar' : '\U00002550',
        '\\hookleftarrow' : '\U000021a9',
        '\\hookrightarrow' : '\U000021aa',
        '\\leftharpoondown' : '\U000021bd',
        '\\rightharpoondown' : '\U000021c1',
        '\\leftharpoonup' : '\U000021bc',
        '\\rightharpoonup' : '\U000021c0',
        '\\rightleftharpoons' : '\U000021cc',
        '\\leadsto' : '\U0000219d',
        '\\downharpoonleft' : '\U000021c3',
        '\\downharpoonright' : '\U000021c2',
        '\\upharpoonleft' : '\U000021bf',
        '\\upharpoonright' : '\U000021be',
        '\\restriction' : '\U000021be',
        '\\uparrow' : '\U00002191',
        '\\Uparrow' : '\U000021d1',
        '\\downarrow' : '\U00002193',
        '\\Downarrow' : '\U000021d3',
        '\\updownarrow' : '\U00002195',
        '\\Updownarrow' : '\U000021d5',
        '\\langle' : '\U000027e8',
        '\\rangle' : '\U000027e9',
        '\\lceil' : '\U00002308',
        '\\rceil' : '\U00002309',
        '\\lfloor' : '\U0000230a',
        '\\rfloor' : '\U0000230b',
        '\\flqq' : '\U000000ab',
        '\\frqq' : '\U000000bb',
        '\\bot' : '\U000022a5',
        '\\top' : '\U000022a4',
        '\\wedge' : '\U00002227',
        '\\bigwedge' : '\U000022c0',
        '\\vee' : '\U00002228',
        '\\bigvee' : '\U000022c1',
        '\\forall' : '\U00002200',
        '\\exists' : '\U00002203',
        '\\nexists' : '\U00002204',
        '\\neg' : '\U000000ac',
        '\\Box' : '\U000025a1',
        '\\Diamond' : '\U000025c7',
        '\\vdash' : '\U000022a2',
        '\\models' : '\U000022a8',
        '\\dashv' : '\U000022a3',
        '\\surd' : '\U0000221a',
        '\\le' : '\U00002264',
        '\\ge' : '\U00002265',
        '\\ll' : '\U0000226a',
        '\\gg' : '\U0000226b',
        '\\lesssim' : '\U00002272',
        '\\gtrsim' : '\U00002273',
        '\\lessapprox' : '\U00002a85',
        '\\gtrapprox' : '\U00002a86',
        '\\in' : '\U00002208',
        '\\notin' : '\U00002209',
        '\\subset' : '\U00002282',
        '\\supset' : '\U00002283',
        '\\subseteq' : '\U00002286',
        '\\supseteq' : '\U00002287',
        '\\sqsubset' : '\U0000228f',
        '\\sqsupset' : '\U00002290',
        '\\sqsubseteq' : '\U00002291',
        '\\sqsupseteq' : '\U00002292',
        '\\cap' : '\U00002229',
        '\\bigcap' : '\U000022c2',
        '\\cup' : '\U0000222a',
        '\\bigcup' : '\U000022c3',
        '\\sqcup' : '\U00002294',
        '\\bigsqcup' : '\U00002a06',
        '\\sqcap' : '\U00002293',
        '\\Bigsqcap' : '\U00002a05',
        '\\setminus' : '\U00002216',
        '\\propto' : '\U0000221d',
        '\\uplus' : '\U0000228e',
        '\\bigplus' : '\U00002a04',
        '\\sim' : '\U0000223c',
        '\\doteq' : '\U00002250',
        '\\simeq' : '\U00002243',
        '\\approx' : '\U00002248',
        '\\asymp' : '\U0000224d',
        '\\cong' : '\U00002245',
        '\\equiv' : '\U00002261',
        '\\Join' : '\U000022c8',
        '\\bowtie' : '\U00002a1d',
        '\\prec' : '\U0000227a',
        '\\succ' : '\U0000227b',
        '\\preceq' : '\U0000227c',
        '\\succeq' : '\U0000227d',
        '\\parallel' : '\U00002225',
        '\\mid' : '\U000000a6',
        '\\pm' : '\U000000b1',
        '\\mp' : '\U00002213',
        '\\times' : '\U000000d7',
        '\\div' : '\U000000f7',
        '\\cdot' : '\U000022c5',
        '\\star' : '\U000022c6',
        '\\circ' : '\U00002218',
        '\\dagger' : '\U00002020',
        '\\ddagger' : '\U00002021',
        '\\lhd' : '\U000022b2',
        '\\rhd' : '\U000022b3',
        '\\unlhd' : '\U000022b4',
        '\\unrhd' : '\U000022b5',
        '\\triangleleft' : '\U000025c3',
        '\\triangleright' : '\U000025b9',
        '\\triangle' : '\U000025b3',
        '\\triangleq' : '\U0000225c',
        '\\oplus' : '\U00002295',
        '\\bigoplus' : '\U00002a01',
        '\\otimes' : '\U00002297',
        '\\bigotimes' : '\U00002a02',
        '\\odot' : '\U00002299',
        '\\bigodot' : '\U00002a00',
        '\\ominus' : '\U00002296',
        '\\oslash' : '\U00002298',
        '\\dots' : '\U00002026',
        '\\cdots' : '\U000022ef',
        '\\sum' : '\U00002211',
        '\\prod' : '\U0000220f',
        '\\coprod' : '\U00002210',
        '\\infty' : '\U0000221e',
        '\\int' : '\U0000222b',
        '\\oint' : '\U0000222e',
        '\\clubsuit' : '\U00002663',
        '\\diamondsuit' : '\U00002662',
        '\\heartsuit' : '\U00002661',
        '\\spadesuit' : '\U00002660',
        '\\aleph' : '\U00002135',
        '\\emptyset' : '\U00002205',
        '\\nabla' : '\U00002207',
        '\\partial' : '\U00002202',
        '\\flat' : '\U0000266d',
        '\\natural' : '\U0000266e',
        '\\sharp' : '\U0000266f',
        '\\angle' : '\U00002220',
        '\\copyright' : '\U000000a9',
        '\\textregistered' : '\U000000ae',
        '\\textonequarter' : '\U000000bc',
        '\\textonehalf' : '\U000000bd',
        '\\textthreequarters' : '\U000000be',
        '\\textordfeminine' : '\U000000aa',
        '\\textordmasculine' : '\U000000ba',
        '\\euro' : '\U000020ac',
        '\\pounds' : '\U000000a3',
        '\\yen' : '\U000000a5',
        '\\textcent' : '\U000000a2',
        '\\textcurrency' : '\U000000a4',
        '\\textdegree' : '\U000000b0',
    }

    isabelle_symbols = {
        '\\<zero>' : '\U0001d7ec',
        '\\<one>' : '\U0001d7ed',
        '\\<two>' : '\U0001d7ee',
        '\\<three>' : '\U0001d7ef',
        '\\<four>' : '\U0001d7f0',
        '\\<five>' : '\U0001d7f1',
        '\\<six>' : '\U0001d7f2',
        '\\<seven>' : '\U0001d7f3',
        '\\<eight>' : '\U0001d7f4',
        '\\<nine>' : '\U0001d7f5',
        '\\<A>' : '\U0001d49c',
        '\\<B>' : '\U0000212c',
        '\\<C>' : '\U0001d49e',
        '\\<D>' : '\U0001d49f',
        '\\<E>' : '\U00002130',
        '\\<F>' : '\U00002131',
        '\\<G>' : '\U0001d4a2',
        '\\<H>' : '\U0000210b',
        '\\<I>' : '\U00002110',
        '\\<J>' : '\U0001d4a5',
        '\\<K>' : '\U0001d4a6',
        '\\<L>' : '\U00002112',
        '\\<M>' : '\U00002133',
        '\\<N>' : '\U0001d4a9',
        '\\<O>' : '\U0001d4aa',
        '\\<P>' : '\U0001d4ab',
        '\\<Q>' : '\U0001d4ac',
        '\\<R>' : '\U0000211b',
        '\\<S>' : '\U0001d4ae',
        '\\<T>' : '\U0001d4af',
        '\\<U>' : '\U0001d4b0',
        '\\<V>' : '\U0001d4b1',
        '\\<W>' : '\U0001d4b2',
        '\\<X>' : '\U0001d4b3',
        '\\<Y>' : '\U0001d4b4',
        '\\<Z>' : '\U0001d4b5',
        '\\<a>' : '\U0001d5ba',
        '\\<b>' : '\U0001d5bb',
        '\\<c>' : '\U0001d5bc',
        '\\<d>' : '\U0001d5bd',
        '\\<e>' : '\U0001d5be',
        '\\<f>' : '\U0001d5bf',
        '\\<g>' : '\U0001d5c0',
        '\\<h>' : '\U0001d5c1',
        '\\<i>' : '\U0001d5c2',
        '\\<j>' : '\U0001d5c3',
        '\\<k>' : '\U0001d5c4',
        '\\<l>' : '\U0001d5c5',
        '\\<m>' : '\U0001d5c6',
        '\\<n>' : '\U0001d5c7',
        '\\<o>' : '\U0001d5c8',
        '\\<p>' : '\U0001d5c9',
        '\\<q>' : '\U0001d5ca',
        '\\<r>' : '\U0001d5cb',
        '\\<s>' : '\U0001d5cc',
        '\\<t>' : '\U0001d5cd',
        '\\<u>' : '\U0001d5ce',
        '\\<v>' : '\U0001d5cf',
        '\\<w>' : '\U0001d5d0',
        '\\<x>' : '\U0001d5d1',
        '\\<y>' : '\U0001d5d2',
        '\\<z>' : '\U0001d5d3',
        '\\<AA>' : '\U0001d504',
        '\\<BB>' : '\U0001d505',
        '\\<CC>' : '\U0000212d',
        '\\<DD>' : '\U0001d507',
        '\\<EE>' : '\U0001d508',
        '\\<FF>' : '\U0001d509',
        '\\<GG>' : '\U0001d50a',
        '\\<HH>' : '\U0000210c',
        '\\<II>' : '\U00002111',
        '\\<JJ>' : '\U0001d50d',
        '\\<KK>' : '\U0001d50e',
        '\\<LL>' : '\U0001d50f',
        '\\<MM>' : '\U0001d510',
        '\\<NN>' : '\U0001d511',
        '\\<OO>' : '\U0001d512',
        '\\<PP>' : '\U0001d513',
        '\\<QQ>' : '\U0001d514',
        '\\<RR>' : '\U0000211c',
        '\\<SS>' : '\U0001d516',
        '\\<TT>' : '\U0001d517',
        '\\<UU>' : '\U0001d518',
        '\\<VV>' : '\U0001d519',
        '\\<WW>' : '\U0001d51a',
        '\\<XX>' : '\U0001d51b',
        '\\<YY>' : '\U0001d51c',
        '\\<ZZ>' : '\U00002128',
        '\\<aa>' : '\U0001d51e',
        '\\<bb>' : '\U0001d51f',
        '\\<cc>' : '\U0001d520',
        '\\<dd>' : '\U0001d521',
        '\\<ee>' : '\U0001d522',
        '\\<ff>' : '\U0001d523',
        '\\<gg>' : '\U0001d524',
        '\\<hh>' : '\U0001d525',
        '\\<ii>' : '\U0001d526',
        '\\<jj>' : '\U0001d527',
        '\\<kk>' : '\U0001d528',
        '\\<ll>' : '\U0001d529',
        '\\<mm>' : '\U0001d52a',
        '\\<nn>' : '\U0001d52b',
        '\\<oo>' : '\U0001d52c',
        '\\<pp>' : '\U0001d52d',
        '\\<qq>' : '\U0001d52e',
        '\\<rr>' : '\U0001d52f',
        '\\<ss>' : '\U0001d530',
        '\\<tt>' : '\U0001d531',
        '\\<uu>' : '\U0001d532',
        '\\<vv>' : '\U0001d533',
        '\\<ww>' : '\U0001d534',
        '\\<xx>' : '\U0001d535',
        '\\<yy>' : '\U0001d536',
        '\\<zz>' : '\U0001d537',
        '\\<alpha>' : '\U000003b1',
        '\\<beta>' : '\U000003b2',
        '\\<gamma>' : '\U000003b3',
        '\\<delta>' : '\U000003b4',
        '\\<epsilon>' : '\U000003b5',
        '\\<zeta>' : '\U000003b6',
        '\\<eta>' : '\U000003b7',
        '\\<theta>' : '\U000003b8',
        '\\<iota>' : '\U000003b9',
        '\\<kappa>' : '\U000003ba',
        '\\<lambda>' : '\U000003bb',
        '\\<mu>' : '\U000003bc',
        '\\<nu>' : '\U000003bd',
        '\\<xi>' : '\U000003be',
        '\\<pi>' : '\U000003c0',
        '\\<rho>' : '\U000003c1',
        '\\<sigma>' : '\U000003c3',
        '\\<tau>' : '\U000003c4',
        '\\<upsilon>' : '\U000003c5',
        '\\<phi>' : '\U000003c6',
        '\\<chi>' : '\U000003c7',
        '\\<psi>' : '\U000003c8',
        '\\<omega>' : '\U000003c9',
        '\\<Gamma>' : '\U00000393',
        '\\<Delta>' : '\U00000394',
        '\\<Theta>' : '\U00000398',
        '\\<Lambda>' : '\U0000039b',
        '\\<Xi>' : '\U0000039e',
        '\\<Pi>' : '\U000003a0',
        '\\<Sigma>' : '\U000003a3',
        '\\<Upsilon>' : '\U000003a5',
        '\\<Phi>' : '\U000003a6',
        '\\<Psi>' : '\U000003a8',
        '\\<Omega>' : '\U000003a9',
        '\\<bool>' : '\U0001d539',
        '\\<complex>' : '\U00002102',
        '\\<nat>' : '\U00002115',
        '\\<rat>' : '\U0000211a',
        '\\<real>' : '\U0000211d',
        '\\<int>' : '\U00002124',
        '\\<leftarrow>' : '\U00002190',
        '\\<longleftarrow>' : '\U000027f5',
        '\\<rightarrow>' : '\U00002192',
        '\\<longrightarrow>' : '\U000027f6',
        '\\<Leftarrow>' : '\U000021d0',
        '\\<Longleftarrow>' : '\U000027f8',
        '\\<Rightarrow>' : '\U000021d2',
        '\\<Longrightarrow>' : '\U000027f9',
        '\\<leftrightarrow>' : '\U00002194',
        '\\<longleftrightarrow>' : '\U000027f7',
        '\\<Leftrightarrow>' : '\U000021d4',
        '\\<Longleftrightarrow>' : '\U000027fa',
        '\\<mapsto>' : '\U000021a6',
        '\\<longmapsto>' : '\U000027fc',
        '\\<midarrow>' : '\U00002500',
        '\\<Midarrow>' : '\U00002550',
        '\\<hookleftarrow>' : '\U000021a9',
        '\\<hookrightarrow>' : '\U000021aa',
        '\\<leftharpoondown>' : '\U000021bd',
        '\\<rightharpoondown>' : '\U000021c1',
        '\\<leftharpoonup>' : '\U000021bc',
        '\\<rightharpoonup>' : '\U000021c0',
        '\\<rightleftharpoons>' : '\U000021cc',
        '\\<leadsto>' : '\U0000219d',
        '\\<downharpoonleft>' : '\U000021c3',
        '\\<downharpoonright>' : '\U000021c2',
        '\\<upharpoonleft>' : '\U000021bf',
        '\\<upharpoonright>' : '\U000021be',
        '\\<restriction>' : '\U000021be',
        '\\<Colon>' : '\U00002237',
        '\\<up>' : '\U00002191',
        '\\<Up>' : '\U000021d1',
        '\\<down>' : '\U00002193',
        '\\<Down>' : '\U000021d3',
        '\\<updown>' : '\U00002195',
        '\\<Updown>' : '\U000021d5',
        '\\<langle>' : '\U000027e8',
        '\\<rangle>' : '\U000027e9',
        '\\<lceil>' : '\U00002308',
        '\\<rceil>' : '\U00002309',
        '\\<lfloor>' : '\U0000230a',
        '\\<rfloor>' : '\U0000230b',
        '\\<lparr>' : '\U00002987',
        '\\<rparr>' : '\U00002988',
        '\\<lbrakk>' : '\U000027e6',
        '\\<rbrakk>' : '\U000027e7',
        '\\<lbrace>' : '\U00002983',
        '\\<rbrace>' : '\U00002984',
        '\\<guillemotleft>' : '\U000000ab',
        '\\<guillemotright>' : '\U000000bb',
        '\\<bottom>' : '\U000022a5',
        '\\<top>' : '\U000022a4',
        '\\<and>' : '\U00002227',
        '\\<And>' : '\U000022c0',
        '\\<or>' : '\U00002228',
        '\\<Or>' : '\U000022c1',
        '\\<forall>' : '\U00002200',
        '\\<exists>' : '\U00002203',
        '\\<nexists>' : '\U00002204',
        '\\<not>' : '\U000000ac',
        '\\<box>' : '\U000025a1',
        '\\<diamond>' : '\U000025c7',
        '\\<turnstile>' : '\U000022a2',
        '\\<Turnstile>' : '\U000022a8',
        '\\<tturnstile>' : '\U000022a9',
        '\\<TTurnstile>' : '\U000022ab',
        '\\<stileturn>' : '\U000022a3',
        '\\<surd>' : '\U0000221a',
        '\\<le>' : '\U00002264',
        '\\<ge>' : '\U00002265',
        '\\<lless>' : '\U0000226a',
        '\\<ggreater>' : '\U0000226b',
|
||||
'\\<lesssim>' : '\U00002272',
|
||||
'\\<greatersim>' : '\U00002273',
|
||||
'\\<lessapprox>' : '\U00002a85',
|
||||
'\\<greaterapprox>' : '\U00002a86',
|
||||
'\\<in>' : '\U00002208',
|
||||
'\\<notin>' : '\U00002209',
|
||||
'\\<subset>' : '\U00002282',
|
||||
'\\<supset>' : '\U00002283',
|
||||
'\\<subseteq>' : '\U00002286',
|
||||
'\\<supseteq>' : '\U00002287',
|
||||
'\\<sqsubset>' : '\U0000228f',
|
||||
'\\<sqsupset>' : '\U00002290',
|
||||
'\\<sqsubseteq>' : '\U00002291',
|
||||
'\\<sqsupseteq>' : '\U00002292',
|
||||
'\\<inter>' : '\U00002229',
|
||||
'\\<Inter>' : '\U000022c2',
|
||||
'\\<union>' : '\U0000222a',
|
||||
'\\<Union>' : '\U000022c3',
|
||||
'\\<squnion>' : '\U00002294',
|
||||
'\\<Squnion>' : '\U00002a06',
|
||||
'\\<sqinter>' : '\U00002293',
|
||||
'\\<Sqinter>' : '\U00002a05',
|
||||
'\\<setminus>' : '\U00002216',
|
||||
'\\<propto>' : '\U0000221d',
|
||||
'\\<uplus>' : '\U0000228e',
|
||||
'\\<Uplus>' : '\U00002a04',
|
||||
'\\<noteq>' : '\U00002260',
|
||||
'\\<sim>' : '\U0000223c',
|
||||
'\\<doteq>' : '\U00002250',
|
||||
'\\<simeq>' : '\U00002243',
|
||||
'\\<approx>' : '\U00002248',
|
||||
'\\<asymp>' : '\U0000224d',
|
||||
'\\<cong>' : '\U00002245',
|
||||
'\\<smile>' : '\U00002323',
|
||||
'\\<equiv>' : '\U00002261',
|
||||
'\\<frown>' : '\U00002322',
|
||||
'\\<Join>' : '\U000022c8',
|
||||
'\\<bowtie>' : '\U00002a1d',
|
||||
'\\<prec>' : '\U0000227a',
|
||||
'\\<succ>' : '\U0000227b',
|
||||
'\\<preceq>' : '\U0000227c',
|
||||
'\\<succeq>' : '\U0000227d',
|
||||
'\\<parallel>' : '\U00002225',
|
||||
'\\<bar>' : '\U000000a6',
|
||||
'\\<plusminus>' : '\U000000b1',
|
||||
'\\<minusplus>' : '\U00002213',
|
||||
'\\<times>' : '\U000000d7',
|
||||
'\\<div>' : '\U000000f7',
|
||||
'\\<cdot>' : '\U000022c5',
|
||||
'\\<star>' : '\U000022c6',
|
||||
'\\<bullet>' : '\U00002219',
|
||||
'\\<circ>' : '\U00002218',
|
||||
'\\<dagger>' : '\U00002020',
|
||||
'\\<ddagger>' : '\U00002021',
|
||||
'\\<lhd>' : '\U000022b2',
|
||||
'\\<rhd>' : '\U000022b3',
|
||||
'\\<unlhd>' : '\U000022b4',
|
||||
'\\<unrhd>' : '\U000022b5',
|
||||
'\\<triangleleft>' : '\U000025c3',
|
||||
'\\<triangleright>' : '\U000025b9',
|
||||
'\\<triangle>' : '\U000025b3',
|
||||
'\\<triangleq>' : '\U0000225c',
|
||||
'\\<oplus>' : '\U00002295',
|
||||
'\\<Oplus>' : '\U00002a01',
|
||||
'\\<otimes>' : '\U00002297',
|
||||
'\\<Otimes>' : '\U00002a02',
|
||||
'\\<odot>' : '\U00002299',
|
||||
'\\<Odot>' : '\U00002a00',
|
||||
'\\<ominus>' : '\U00002296',
|
||||
'\\<oslash>' : '\U00002298',
|
||||
'\\<dots>' : '\U00002026',
|
||||
'\\<cdots>' : '\U000022ef',
|
||||
'\\<Sum>' : '\U00002211',
|
||||
'\\<Prod>' : '\U0000220f',
|
||||
'\\<Coprod>' : '\U00002210',
|
||||
'\\<infinity>' : '\U0000221e',
|
||||
'\\<integral>' : '\U0000222b',
|
||||
'\\<ointegral>' : '\U0000222e',
|
||||
'\\<clubsuit>' : '\U00002663',
|
||||
'\\<diamondsuit>' : '\U00002662',
|
||||
'\\<heartsuit>' : '\U00002661',
|
||||
'\\<spadesuit>' : '\U00002660',
|
||||
'\\<aleph>' : '\U00002135',
|
||||
'\\<emptyset>' : '\U00002205',
|
||||
'\\<nabla>' : '\U00002207',
|
||||
'\\<partial>' : '\U00002202',
|
||||
'\\<flat>' : '\U0000266d',
|
||||
'\\<natural>' : '\U0000266e',
|
||||
'\\<sharp>' : '\U0000266f',
|
||||
'\\<angle>' : '\U00002220',
|
||||
'\\<copyright>' : '\U000000a9',
|
||||
'\\<registered>' : '\U000000ae',
|
||||
'\\<hyphen>' : '\U000000ad',
|
||||
'\\<inverse>' : '\U000000af',
|
||||
'\\<onequarter>' : '\U000000bc',
|
||||
'\\<onehalf>' : '\U000000bd',
|
||||
'\\<threequarters>' : '\U000000be',
|
||||
'\\<ordfeminine>' : '\U000000aa',
|
||||
'\\<ordmasculine>' : '\U000000ba',
|
||||
'\\<section>' : '\U000000a7',
|
||||
'\\<paragraph>' : '\U000000b6',
|
||||
'\\<exclamdown>' : '\U000000a1',
|
||||
'\\<questiondown>' : '\U000000bf',
|
||||
'\\<euro>' : '\U000020ac',
|
||||
'\\<pounds>' : '\U000000a3',
|
||||
'\\<yen>' : '\U000000a5',
|
||||
'\\<cent>' : '\U000000a2',
|
||||
'\\<currency>' : '\U000000a4',
|
||||
'\\<degree>' : '\U000000b0',
|
||||
'\\<amalg>' : '\U00002a3f',
|
||||
'\\<mho>' : '\U00002127',
|
||||
'\\<lozenge>' : '\U000025ca',
|
||||
'\\<wp>' : '\U00002118',
|
||||
'\\<wrong>' : '\U00002240',
|
||||
'\\<struct>' : '\U000022c4',
|
||||
'\\<acute>' : '\U000000b4',
|
||||
'\\<index>' : '\U00000131',
|
||||
'\\<dieresis>' : '\U000000a8',
|
||||
'\\<cedilla>' : '\U000000b8',
|
||||
'\\<hungarumlaut>' : '\U000002dd',
|
||||
'\\<some>' : '\U000003f5',
|
||||
'\\<newline>' : '\U000023ce',
|
||||
'\\<open>' : '\U00002039',
|
||||
'\\<close>' : '\U0000203a',
|
||||
'\\<here>' : '\U00002302',
|
||||
'\\<^sub>' : '\U000021e9',
|
||||
'\\<^sup>' : '\U000021e7',
|
||||
'\\<^bold>' : '\U00002759',
|
||||
'\\<^bsub>' : '\U000021d8',
|
||||
'\\<^esub>' : '\U000021d9',
|
||||
'\\<^bsup>' : '\U000021d7',
|
||||
'\\<^esup>' : '\U000021d6',
|
||||
}
|
||||
|
||||
    lang_map = {'isabelle' : isabelle_symbols, 'latex' : latex_symbols}

    def __init__(self, **options):
        Filter.__init__(self, **options)
        lang = get_choice_opt(options, 'lang',
                              ['isabelle', 'latex'], 'isabelle')
        self.symbols = self.lang_map[lang]

    def filter(self, lexer, stream):
        for ttype, value in stream:
            if value in self.symbols:
                yield ttype, self.symbols[value]
            else:
                yield ttype, value


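# Illustrative usage sketch, not part of the vendored file: SymbolFilter
# rewrites token *values*, so it can be exercised with a hand-made token
# stream; the `lexer` argument is never touched by filter(), hence None.
def _symbol_filter_demo():
    from pip._vendor.pygments.token import Name
    f = SymbolFilter(lang='isabelle')
    # '\<forall>' is mapped to '\u2200' by the table above.
    return list(f.filter(None, [(Name, '\\<forall>'), (Name, 'x')]))

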
class KeywordCaseFilter(Filter):
    """Convert keywords to lowercase or uppercase or capitalize them, which
    means first letter uppercase, rest lowercase.

    This can be useful e.g. if you highlight Pascal code and want to adapt the
    code to your styleguide.

    Options accepted:

    `case` : string
        The casing to convert keywords to. Must be one of ``'lower'``,
        ``'upper'`` or ``'capitalize'``. The default is ``'lower'``.
    """

    def __init__(self, **options):
        Filter.__init__(self, **options)
        case = get_choice_opt(options, 'case',
                              ['lower', 'upper', 'capitalize'], 'lower')
        self.convert = getattr(str, case)

    def filter(self, lexer, stream):
        for ttype, value in stream:
            if ttype in Keyword:
                yield ttype, self.convert(value)
            else:
                yield ttype, value


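# Illustrative usage sketch, not part of the vendored file: only values of
# Keyword (and Keyword.*) tokens are recased, everything else passes through.
def _keywordcase_demo():
    from pip._vendor.pygments.token import Keyword, Text
    f = KeywordCaseFilter(case='upper')
    stream = [(Keyword, 'begin'), (Text, ' '), (Keyword, 'end')]
    return list(f.filter(None, stream))  # -> 'BEGIN' / ' ' / 'END'

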
class NameHighlightFilter(Filter):
    """Highlight a normal Name (and Name.*) token with a different token type.

    Example::

        filter = NameHighlightFilter(
            names=['foo', 'bar', 'baz'],
            tokentype=Name.Function,
        )

    This would highlight the names "foo", "bar" and "baz"
    as functions. `Name.Function` is the default token type.

    Options accepted:

    `names` : list of strings
        A list of names that should be given the different token type.
        There is no default.
    `tokentype` : TokenType or string
        A token type or a string containing a token type name that is
        used for highlighting the strings in `names`. The default is
        `Name.Function`.
    """

    def __init__(self, **options):
        Filter.__init__(self, **options)
        self.names = set(get_list_opt(options, 'names', []))
        tokentype = options.get('tokentype')
        if tokentype:
            self.tokentype = string_to_tokentype(tokentype)
        else:
            self.tokentype = Name.Function

    def filter(self, lexer, stream):
        for ttype, value in stream:
            if ttype in Name and value in self.names:
                yield self.tokentype, value
            else:
                yield ttype, value


class ErrorToken(Exception):
    pass


class RaiseOnErrorTokenFilter(Filter):
    """Raise an exception when the lexer generates an error token.

    Options accepted:

    `excclass` : Exception class
        The exception class to raise.
        The default is `pygments.filters.ErrorToken`.

    .. versionadded:: 0.8
    """

    def __init__(self, **options):
        Filter.__init__(self, **options)
        self.exception = options.get('excclass', ErrorToken)
        try:
            # issubclass() will raise TypeError if first argument is not a class
            if not issubclass(self.exception, Exception):
                raise TypeError
        except TypeError:
            raise OptionError('excclass option is not an exception class')

    def filter(self, lexer, stream):
        for ttype, value in stream:
            if ttype is Error:
                raise self.exception(value)
            yield ttype, value


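# Illustrative usage sketch, not part of the vendored file: the filter is
# lazy, so the exception surfaces only when the stream is actually consumed.
def _raiseonerror_demo():
    from pip._vendor.pygments.token import Error, Text
    f = RaiseOnErrorTokenFilter()  # default excclass is ErrorToken
    try:
        list(f.filter(None, [(Text, 'ok'), (Error, '?')]))
    except ErrorToken as exc:
        return exc  # the exception carries the offending token value

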
class VisibleWhitespaceFilter(Filter):
    """Convert tabs, newlines and/or spaces to visible characters.

    Options accepted:

    `spaces` : string or bool
        If this is a one-character string, spaces will be replaced by this string.
        If it is another true value, spaces will be replaced by ``·`` (unicode
        MIDDLE DOT). If it is a false value, spaces will not be replaced. The
        default is ``False``.
    `tabs` : string or bool
        The same as for `spaces`, but the default replacement character is ``»``
        (unicode RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK). The default value
        is ``False``. Note: this will not work if the `tabsize` option for the
        lexer is nonzero, as tabs will already have been expanded then.
    `tabsize` : int
        If tabs are to be replaced by this filter (see the `tabs` option), this
        is the total number of characters that a tab should be expanded to.
        The default is ``8``.
    `newlines` : string or bool
        The same as for `spaces`, but the default replacement character is ``¶``
        (unicode PILCROW SIGN). The default value is ``False``.
    `wstokentype` : bool
        If true, give whitespace the special `Whitespace` token type. This allows
        styling the visible whitespace differently (e.g. greyed out), but it can
        disrupt background colors. The default is ``True``.

    .. versionadded:: 0.8
    """

    def __init__(self, **options):
        Filter.__init__(self, **options)
        for name, default in [('spaces', '·'),
                              ('tabs', '»'),
                              ('newlines', '¶')]:
            opt = options.get(name, False)
            if isinstance(opt, str) and len(opt) == 1:
                setattr(self, name, opt)
            else:
                setattr(self, name, (opt and default or ''))
        tabsize = get_int_opt(options, 'tabsize', 8)
        if self.tabs:
            self.tabs += ' ' * (tabsize - 1)
        if self.newlines:
            self.newlines += '\n'
        self.wstt = get_bool_opt(options, 'wstokentype', True)

    def filter(self, lexer, stream):
        if self.wstt:
            spaces = self.spaces or ' '
            tabs = self.tabs or '\t'
            newlines = self.newlines or '\n'
            regex = re.compile(r'\s')

            def replacefunc(wschar):
                if wschar == ' ':
                    return spaces
                elif wschar == '\t':
                    return tabs
                elif wschar == '\n':
                    return newlines
                return wschar

            for ttype, value in stream:
                yield from _replace_special(ttype, value, regex, Whitespace,
                                            replacefunc)
        else:
            spaces, tabs, newlines = self.spaces, self.tabs, self.newlines
            # simpler processing
            for ttype, value in stream:
                if spaces:
                    value = value.replace(' ', spaces)
                if tabs:
                    value = value.replace('\t', tabs)
                if newlines:
                    value = value.replace('\n', newlines)
                yield ttype, value


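# Illustrative usage sketch, not part of the vendored file: with the default
# wstokentype=True the filter routes values through _replace_special()
# (defined earlier in this module), retyping the whitespace as Whitespace
# tokens; spaces become '·', a tab becomes '»' padded to `tabsize`, and a
# newline becomes '¶' followed by the real '\n'.
def _visible_whitespace_demo():
    from pip._vendor.pygments.token import Text
    f = VisibleWhitespaceFilter(spaces=True, tabs=True, newlines=True)
    return list(f.filter(None, [(Text, 'a b\tc\n')]))

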
class GobbleFilter(Filter):
    """Gobbles source code lines (eats initial characters).

    This filter drops the first ``n`` characters off every line of code. This
    may be useful when the source code fed to the lexer is indented by a fixed
    amount of space that isn't desired in the output.

    Options accepted:

    `n` : int
        The number of characters to gobble.

    .. versionadded:: 1.2
    """
    def __init__(self, **options):
        Filter.__init__(self, **options)
        self.n = get_int_opt(options, 'n', 0)

    def gobble(self, value, left):
        if left < len(value):
            return value[left:], 0
        else:
            return '', left - len(value)

    def filter(self, lexer, stream):
        n = self.n
        left = n  # How many characters left to gobble.
        for ttype, value in stream:
            # Remove ``left`` tokens from first line, ``n`` from all others.
            parts = value.split('\n')
            (parts[0], left) = self.gobble(parts[0], left)
            for i in range(1, len(parts)):
                (parts[i], left) = self.gobble(parts[i], n)
            value = '\n'.join(parts)

            if value != '':
                yield ttype, value


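# Illustrative usage sketch, not part of the vendored file: gobbling n=4
# strips a fixed four-character indent from every line of every token value.
def _gobble_demo():
    from pip._vendor.pygments.token import Text
    f = GobbleFilter(n=4)
    return list(f.filter(None, [(Text, '    indented\n    code\n')]))
    # -> [(Text, 'indented\ncode\n')]

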
class TokenMergeFilter(Filter):
    """Merges consecutive tokens with the same token type in the output
    stream of a lexer.

    .. versionadded:: 1.2
    """
    def __init__(self, **options):
        Filter.__init__(self, **options)

    def filter(self, lexer, stream):
        current_type = None
        current_value = None
        for ttype, value in stream:
            if ttype is current_type:
                current_value += value
            else:
                if current_type is not None:
                    yield current_type, current_value
                current_type = ttype
                current_value = value
        if current_type is not None:
            yield current_type, current_value


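# Illustrative usage sketch, not part of the vendored file: adjacent tokens
# of the identical type collapse into a single pair.
def _tokenmerge_demo():
    from pip._vendor.pygments.token import Text
    f = TokenMergeFilter()
    return list(f.filter(None, [(Text, 'a'), (Text, 'b'), (Text, 'c')]))
    # -> [(Text, 'abc')]

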
FILTERS = {
    'codetagify': CodeTagFilter,
    'keywordcase': KeywordCaseFilter,
    'highlight': NameHighlightFilter,
    'raiseonerror': RaiseOnErrorTokenFilter,
    'whitespace': VisibleWhitespaceFilter,
    'gobble': GobbleFilter,
    'tokenmerge': TokenMergeFilter,
    'symbols': SymbolFilter,
}
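
# Illustrative usage sketch, not part of the vendored file: the FILTERS
# table backs name-based lookup, e.g. Lexer.add_filter('tokenmerge') or
# `pygmentize -F keywordcase:case=upper`; get_filter_by_name() is defined
# earlier in this same module.
def _filter_registry_demo():
    return get_filter_by_name('keywordcase', case='capitalize')
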
@@ -0,0 +1,129 @@
"""
pygments.formatter
~~~~~~~~~~~~~~~~~~

Base formatter class.

:copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

import codecs

from pip._vendor.pygments.util import get_bool_opt
from pip._vendor.pygments.styles import get_style_by_name

__all__ = ['Formatter']


def _lookup_style(style):
    if isinstance(style, str):
        return get_style_by_name(style)
    return style


class Formatter:
    """
    Converts a token stream to text.

    Formatters should have attributes to help selecting them. These
    are similar to the corresponding :class:`~pygments.lexer.Lexer`
    attributes.

    .. autoattribute:: name
       :no-value:

    .. autoattribute:: aliases
       :no-value:

    .. autoattribute:: filenames
       :no-value:

    You can pass options as keyword arguments to the constructor.
    All formatters accept these basic options:

    ``style``
        The style to use, can be a string or a Style subclass
        (default: "default"). Not used by e.g. the
        TerminalFormatter.
    ``full``
        Tells the formatter to output a "full" document, i.e.
        a complete self-contained document. This doesn't have
        any effect for some formatters (default: false).
    ``title``
        If ``full`` is true, the title that should be used to
        caption the document (default: '').
    ``encoding``
        If given, must be an encoding name. This will be used to
        convert the Unicode token strings to byte strings in the
        output. If it is "" or None, Unicode strings will be written
        to the output file, which most file-like objects do not
        support (default: None).
    ``outencoding``
        Overrides ``encoding`` if given.
    """

    #: Full name for the formatter, in human-readable form.
    name = None

    #: A list of short, unique identifiers that can be used to look up
    #: the formatter from a list, e.g. using :func:`.get_formatter_by_name()`.
    aliases = []

    #: A list of fnmatch patterns that match filenames for which this
    #: formatter can produce output. The patterns in this list should be unique
    #: among all formatters.
    filenames = []

    #: If True, this formatter outputs Unicode strings when no encoding
    #: option is given.
    unicodeoutput = True

    def __init__(self, **options):
        """
        As with lexers, this constructor takes arbitrary optional arguments,
        and if you override it, you should first process your own options, then
        call the base class implementation.
        """
        self.style = _lookup_style(options.get('style', 'default'))
        self.full = get_bool_opt(options, 'full', False)
        self.title = options.get('title', '')
        self.encoding = options.get('encoding', None) or None
        if self.encoding in ('guess', 'chardet'):
            # can happen for e.g. pygmentize -O encoding=guess
            self.encoding = 'utf-8'
        self.encoding = options.get('outencoding') or self.encoding
        self.options = options

    def get_style_defs(self, arg=''):
        """
        This method must return statements or declarations suitable to define
        the current style for subsequent highlighted text (e.g. CSS classes
        in the `HTMLFormatter`).

        The optional argument `arg` can be used to modify the generation and
        is formatter dependent (it is standardized because it can be given on
        the command line).

        This method is called by the ``-S`` :doc:`command-line option <cmdline>`,
        the `arg` is then given by the ``-a`` option.
        """
        return ''

    def format(self, tokensource, outfile):
        """
        This method must format the tokens from the `tokensource` iterable and
        write the formatted version to the file object `outfile`.

        Formatter options can control how exactly the tokens are converted.
        """
        if self.encoding:
            # wrap the outfile in a StreamWriter
            outfile = codecs.lookup(self.encoding)[3](outfile)
        return self.format_unencoded(tokensource, outfile)

    # Allow writing Formatter[str] or Formatter[bytes]. That's equivalent to
    # Formatter. This helps when using third-party type stubs from typeshed.
    def __class_getitem__(cls, name):
        return cls
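

# Illustrative usage sketch, not part of the vendored file: the smallest
# useful Formatter implied by the docstrings above. Subclasses implement
# format_unencoded(); the base format() adds the optional encoding wrapper.
# The name and alias below are illustrative values only.
class _PlainDemoFormatter(Formatter):
    name = 'Plain demo'
    aliases = ['plain-demo']

    def format_unencoded(self, tokensource, outfile):
        # Write the raw token values; a real formatter would consult
        # self.style for colors here.
        for ttype, value in tokensource:
            outfile.write(value)
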
@@ -0,0 +1,157 @@
"""
pygments.formatters
~~~~~~~~~~~~~~~~~~~

Pygments formatters.

:copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

import re
import sys
import types
import fnmatch
from os.path import basename

from pip._vendor.pygments.formatters._mapping import FORMATTERS
from pip._vendor.pygments.plugin import find_plugin_formatters
from pip._vendor.pygments.util import ClassNotFound

__all__ = ['get_formatter_by_name', 'get_formatter_for_filename',
           'get_all_formatters', 'load_formatter_from_file'] + list(FORMATTERS)

_formatter_cache = {}  # classes by name
_pattern_cache = {}


def _fn_matches(fn, glob):
    """Return whether the supplied file name fn matches pattern filename."""
    if glob not in _pattern_cache:
        pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob))
        return pattern.match(fn)
    return _pattern_cache[glob].match(fn)


def _load_formatters(module_name):
    """Load a formatter (and all others in the module too)."""
    mod = __import__(module_name, None, None, ['__all__'])
    for formatter_name in mod.__all__:
        cls = getattr(mod, formatter_name)
        _formatter_cache[cls.name] = cls


def get_all_formatters():
    """Return a generator for all formatter classes."""
    # NB: this returns formatter classes, not info like get_all_lexers().
    for info in FORMATTERS.values():
        if info[1] not in _formatter_cache:
            _load_formatters(info[0])
        yield _formatter_cache[info[1]]
    for _, formatter in find_plugin_formatters():
        yield formatter


def find_formatter_class(alias):
    """Lookup a formatter by alias.

    Returns None if not found.
    """
    for module_name, name, aliases, _, _ in FORMATTERS.values():
        if alias in aliases:
            if name not in _formatter_cache:
                _load_formatters(module_name)
            return _formatter_cache[name]
    for _, cls in find_plugin_formatters():
        if alias in cls.aliases:
            return cls


def get_formatter_by_name(_alias, **options):
    """
    Return an instance of a :class:`.Formatter` subclass that has `alias` in its
    aliases list. The formatter is given the `options` at its instantiation.

    Will raise :exc:`pygments.util.ClassNotFound` if no formatter with that
    alias is found.
    """
    cls = find_formatter_class(_alias)
    if cls is None:
        raise ClassNotFound(f"no formatter found for name {_alias!r}")
    return cls(**options)


def load_formatter_from_file(filename, formattername="CustomFormatter", **options):
    """
    Return a `Formatter` subclass instance loaded from the provided file, relative
    to the current directory.

    The file is expected to contain a Formatter class named ``formattername``
    (by default, CustomFormatter). Users should be very careful with the input, because
    this method is equivalent to running ``eval()`` on the input file. The formatter is
    given the `options` at its instantiation.

    :exc:`pygments.util.ClassNotFound` is raised if there are any errors loading
    the formatter.

    .. versionadded:: 2.2
    """
    try:
        # This empty dict will contain the namespace for the exec'd file
        custom_namespace = {}
        with open(filename, 'rb') as f:
            exec(f.read(), custom_namespace)
        # Retrieve the class `formattername` from that namespace
        if formattername not in custom_namespace:
            raise ClassNotFound(f'no valid {formattername} class found in {filename}')
        formatter_class = custom_namespace[formattername]
        # And finally instantiate it with the options
        return formatter_class(**options)
    except OSError as err:
        raise ClassNotFound(f'cannot read {filename}: {err}')
    except ClassNotFound:
        raise
    except Exception as err:
        raise ClassNotFound(f'error when loading custom formatter: {err}')


def get_formatter_for_filename(fn, **options):
    """
    Return a :class:`.Formatter` subclass instance that has a filename pattern
    matching `fn`. The formatter is given the `options` at its instantiation.

    Will raise :exc:`pygments.util.ClassNotFound` if no formatter for that filename
    is found.
    """
    fn = basename(fn)
    for modname, name, _, filenames, _ in FORMATTERS.values():
        for filename in filenames:
            if _fn_matches(fn, filename):
                if name not in _formatter_cache:
                    _load_formatters(modname)
                return _formatter_cache[name](**options)
    for _name, cls in find_plugin_formatters():
        for filename in cls.filenames:
            if _fn_matches(fn, filename):
                return cls(**options)
    raise ClassNotFound(f"no formatter found for file name {fn!r}")


class _automodule(types.ModuleType):
    """Automatically import formatters."""

    def __getattr__(self, name):
        info = FORMATTERS.get(name)
        if info:
            _load_formatters(info[0])
            cls = _formatter_cache[info[1]]
            setattr(self, name, cls)
            return cls
        raise AttributeError(name)


oldmod = sys.modules[__name__]
newmod = _automodule(__name__)
newmod.__dict__.update(oldmod.__dict__)
sys.modules[__name__] = newmod
del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
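
# Illustrative usage sketch, not part of the vendored file: both lookup
# helpers raise pygments.util.ClassNotFound on a miss.
def _formatter_lookup_demo():
    html = get_formatter_by_name('html', full=True, title='demo')
    latex = get_formatter_for_filename('out.tex')  # matches the '*.tex' pattern
    return html, latex
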
@@ -0,0 +1,23 @@
# Automatically generated by scripts/gen_mapfiles.py.
# DO NOT EDIT BY HAND; run `tox -e mapfiles` instead.

FORMATTERS = {
    'BBCodeFormatter': ('pygments.formatters.bbcode', 'BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'),
    'BmpImageFormatter': ('pygments.formatters.img', 'img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
    'GifImageFormatter': ('pygments.formatters.img', 'img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
    'GroffFormatter': ('pygments.formatters.groff', 'groff', ('groff', 'troff', 'roff'), (), 'Format tokens with groff escapes to change their color and font style.'),
    'HtmlFormatter': ('pygments.formatters.html', 'HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ``<span>`` tags. By default, the content is enclosed in a ``<pre>`` tag, itself wrapped in a ``<div>`` tag (but see the `nowrap` option). The ``<div>``'s CSS class can be set by the `cssclass` option."),
    'IRCFormatter': ('pygments.formatters.irc', 'IRC', ('irc', 'IRC'), (), 'Format tokens with IRC color sequences'),
    'ImageFormatter': ('pygments.formatters.img', 'img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
    'JpgImageFormatter': ('pygments.formatters.img', 'img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
    'LatexFormatter': ('pygments.formatters.latex', 'LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
    'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
    'PangoMarkupFormatter': ('pygments.formatters.pangomarkup', 'Pango Markup', ('pango', 'pangomarkup'), (), 'Format tokens as Pango Markup code. It can then be rendered to an SVG.'),
    'RawTokenFormatter': ('pygments.formatters.other', 'Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
    'RtfFormatter': ('pygments.formatters.rtf', 'RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft(R) Word(R) documents.'),
    'SvgFormatter': ('pygments.formatters.svg', 'SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file. This formatter is still experimental. Each line of code is a ``<text>`` element with explicit ``x`` and ``y`` coordinates containing ``<tspan>`` elements with the individual token styles.'),
    'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
    'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.'),
    'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
    'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.'),
}
@@ -0,0 +1,963 @@
"""
pygments.lexer
~~~~~~~~~~~~~~

Base lexer classes.

:copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

import re
import sys
import time

from pip._vendor.pygments.filter import apply_filters, Filter
from pip._vendor.pygments.filters import get_filter_by_name
from pip._vendor.pygments.token import Error, Text, Other, Whitespace, _TokenType
from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
    make_analysator, Future, guess_decode
from pip._vendor.pygments.regexopt import regex_opt

__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
           'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this',
           'default', 'words', 'line_re']

line_re = re.compile('.*?\n')

_encoding_map = [(b'\xef\xbb\xbf', 'utf-8'),
                 (b'\xff\xfe\0\0', 'utf-32'),
                 (b'\0\0\xfe\xff', 'utf-32be'),
                 (b'\xff\xfe', 'utf-16'),
                 (b'\xfe\xff', 'utf-16be')]

_default_analyse = staticmethod(lambda x: 0.0)


class LexerMeta(type):
    """
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.
    """

    def __new__(mcs, name, bases, d):
        if 'analyse_text' in d:
            d['analyse_text'] = make_analysator(d['analyse_text'])
        return type.__new__(mcs, name, bases, d)


class Lexer(metaclass=LexerMeta):
    """
    Lexer for a specific language.

    See also :doc:`lexerdevelopment`, a high-level guide to writing
    lexers.

    Lexer classes have attributes used for choosing the most appropriate
    lexer based on various criteria.

    .. autoattribute:: name
       :no-value:
    .. autoattribute:: aliases
       :no-value:
    .. autoattribute:: filenames
       :no-value:
    .. autoattribute:: alias_filenames
    .. autoattribute:: mimetypes
       :no-value:
    .. autoattribute:: priority

    Lexers included in Pygments should have two additional attributes:

    .. autoattribute:: url
       :no-value:
    .. autoattribute:: version_added
       :no-value:

    Lexers included in Pygments may have additional attributes:

    .. autoattribute:: _example
       :no-value:

    You can pass options to the constructor. The basic options recognized
    by all lexers and processed by the base `Lexer` class are:

    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True). This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name. This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection). Can also be ``'chardet'`` to use the chardet
        library, if it is installed.
    ``inencoding``
        Overrides the ``encoding`` if given.
    """

    #: Full name of the lexer, in human-readable form
    name = None

    #: A list of short, unique identifiers that can be used to look
    #: up the lexer from a list, e.g., using `get_lexer_by_name()`.
    aliases = []

    #: A list of `fnmatch` patterns that match filenames which contain
    #: content for this lexer. The patterns in this list should be unique among
    #: all lexers.
    filenames = []

    #: A list of `fnmatch` patterns that match filenames which may or may not
    #: contain content for this lexer. This list is used by the
    #: :func:`.guess_lexer_for_filename()` function, to determine which lexers
    #: are then included in guessing the correct one. That means that
    #: e.g. every lexer for HTML and a template language should include
    #: ``\*.html`` in this list.
    alias_filenames = []

    #: A list of MIME types for content that can be lexed with this lexer.
    mimetypes = []

    #: Priority, should multiple lexers match and no content is provided
    priority = 0

    #: URL of the language specification/definition. Used in the Pygments
    #: documentation. Set to an empty string to disable.
    url = None

    #: Version of Pygments in which the lexer was added.
    version_added = None

    #: Example file name. Relative to the ``tests/examplefiles`` directory.
    #: This is used by the documentation generator to show an example.
    _example = None

    def __init__(self, **options):
        """
        This constructor takes arbitrary options as keyword arguments.
        Every subclass must first process its own options and then call
        the `Lexer` constructor, since it processes the basic
        options like `stripnl`.

        An example looks like this:

        .. sourcecode:: python

           def __init__(self, **options):
               self.compress = options.get('compress', '')
               Lexer.__init__(self, **options)

        As these options must all be specifiable as strings (due to the
        command line usage), there are various utility functions
        available to help with that, see `Utilities`_.
        """
        self.options = options
        self.stripnl = get_bool_opt(options, 'stripnl', True)
        self.stripall = get_bool_opt(options, 'stripall', False)
        self.ensurenl = get_bool_opt(options, 'ensurenl', True)
        self.tabsize = get_int_opt(options, 'tabsize', 0)
        self.encoding = options.get('encoding', 'guess')
        self.encoding = options.get('inencoding') or self.encoding
        self.filters = []
        for filter_ in get_list_opt(options, 'filters', ()):
            self.add_filter(filter_)

    def __repr__(self):
        if self.options:
            return f'<pygments.lexers.{self.__class__.__name__} with {self.options!r}>'
        else:
            return f'<pygments.lexers.{self.__class__.__name__}>'

    def add_filter(self, filter_, **options):
        """
        Add a new stream filter to this lexer.
        """
        if not isinstance(filter_, Filter):
            filter_ = get_filter_by_name(filter_, **options)
        self.filters.append(filter_)

    def analyse_text(text):
        """
        A static method which is called for lexer guessing.

        It should analyse the text and return a float in the range
        from ``0.0`` to ``1.0``. If it returns ``0.0``, the lexer
        will not be selected as the most probable one, if it returns
        ``1.0``, it will be selected immediately. This is used by
        `guess_lexer`.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return value was ``0.0``.
        """

    def _preprocess_lexer_input(self, text):
        """Apply preprocessing such as decoding the input, removing BOM and normalizing newlines."""

        if not isinstance(text, str):
            if self.encoding == 'guess':
                text, _ = guess_decode(text)
            elif self.encoding == 'chardet':
                try:
                    # pip vendoring note: this code is not reachable by pip,
                    # removed import of chardet to make it clear.
                    raise ImportError('chardet is not vendored by pip')
                except ImportError as e:
                    raise ImportError('To enable chardet encoding guessing, '
                                      'please install the chardet library '
                                      'from http://chardet.feedparser.org/') from e
                # check for BOM first
                decoded = None
                for bom, encoding in _encoding_map:
                    if text.startswith(bom):
                        decoded = text[len(bom):].decode(encoding, 'replace')
                        break
                # no BOM found, so use chardet
                if decoded is None:
                    enc = chardet.detect(text[:1024])  # Guess using first 1KB
                    decoded = text.decode(enc.get('encoding') or 'utf-8',
                                          'replace')
                text = decoded
            else:
                text = text.decode(self.encoding)
                if text.startswith('\ufeff'):
                    text = text[len('\ufeff'):]
        else:
            if text.startswith('\ufeff'):
                text = text[len('\ufeff'):]

        # text now *is* a unicode string
        text = text.replace('\r\n', '\n')
        text = text.replace('\r', '\n')
        if self.stripall:
            text = text.strip()
        elif self.stripnl:
            text = text.strip('\n')
        if self.tabsize > 0:
            text = text.expandtabs(self.tabsize)
        if self.ensurenl and not text.endswith('\n'):
            text += '\n'

        return text

    def get_tokens(self, text, unfiltered=False):
        """
        This method is the basic interface of a lexer. It is called by
        the `highlight()` function. It must process the text and return an
        iterable of ``(tokentype, value)`` pairs from `text`.

        Normally, you don't need to override this method. The default
        implementation processes the options recognized by all lexers
        (`stripnl`, `stripall` and so on), and then yields all tokens
        from `get_tokens_unprocessed()`, with the ``index`` dropped.

        If `unfiltered` is set to `True`, the filtering mechanism is
        bypassed even if filters are defined.
        """
        text = self._preprocess_lexer_input(text)

        def streamer():
            for _, t, v in self.get_tokens_unprocessed(text):
                yield t, v
        stream = streamer()
        if not unfiltered:
            stream = apply_filters(stream, self.filters, self)
        return stream

    def get_tokens_unprocessed(self, text):
        """
        This method should process the text and return an iterable of
        ``(index, tokentype, value)`` tuples where ``index`` is the starting
        position of the token within the input text.

        It must be overridden by subclasses. It is recommended to
        implement it as a generator to maximize effectiveness.
        """
        raise NotImplementedError


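# Illustrative usage sketch, not part of the vendored file: the base-class
# options in action. A concrete lexer is assumed here; pip's vendored
# pygments ships the standard lexers under pip._vendor.pygments.lexers.
def _lexer_options_demo():
    from pip._vendor.pygments.lexers import PythonLexer
    # stripall/tabsize are consumed by Lexer.__init__ before lexing starts.
    lexer = PythonLexer(stripall=True, tabsize=4)
    return list(lexer.get_tokens('\tprint("hi")\n'))

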
class DelegatingLexer(Lexer):
    """
    This lexer takes two lexers as arguments. A root lexer and
    a language lexer. First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
    """

    def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options):
        self.root_lexer = _root_lexer(**options)
        self.language_lexer = _language_lexer(**options)
        self.needle = _needle
        Lexer.__init__(self, **options)

    def get_tokens_unprocessed(self, text):
        buffered = ''
        insertions = []
        lng_buffer = []
        for i, t, v in self.language_lexer.get_tokens_unprocessed(text):
            if t is self.needle:
                if lng_buffer:
                    insertions.append((len(buffered), lng_buffer))
                    lng_buffer = []
                buffered += v
            else:
                lng_buffer.append((i, t, v))
        if lng_buffer:
            insertions.append((len(buffered), lng_buffer))
        return do_insertions(insertions,
                             self.root_lexer.get_tokens_unprocessed(buffered))


# ------------------------------------------------------------------------------
# RegexLexer and ExtendedRegexLexer
#


class include(str):  # pylint: disable=invalid-name
    """
    Indicates that a state should include rules from another state.
    """
    pass


class _inherit:
    """
    Indicates that a state should inherit from its superclass.
    """
    def __repr__(self):
        return 'inherit'

inherit = _inherit()  # pylint: disable=invalid-name


class combined(tuple):  # pylint: disable=invalid-name
    """
    Indicates a state combined from multiple states.
    """

    def __new__(cls, *args):
        return tuple.__new__(cls, args)

    def __init__(self, *args):
        # tuple.__init__ doesn't do anything
        pass


class _PseudoMatch:
    """
    A pseudo match object constructed from a string.
    """

    def __init__(self, start, text):
        self._text = text
        self._start = start

    def start(self, arg=None):
        return self._start

    def end(self, arg=None):
        return self._start + len(self._text)

    def group(self, arg=None):
        if arg:
            raise IndexError('No such group')
        return self._text

    def groups(self):
        return (self._text,)

    def groupdict(self):
        return {}


def bygroups(*args):
    """
    Callback that yields multiple actions for each group in the match.
    """
    def callback(lexer, match, ctx=None):
        for i, action in enumerate(args):
            if action is None:
                continue
            elif type(action) is _TokenType:
                data = match.group(i + 1)
                if data:
                    yield match.start(i + 1), action, data
            else:
                data = match.group(i + 1)
                if data is not None:
                    if ctx:
                        ctx.pos = match.start(i + 1)
                    for item in action(lexer,
                                       _PseudoMatch(match.start(i + 1), data), ctx):
                        if item:
                            yield item
        if ctx:
            ctx.pos = match.end()
    return callback


class _This:
    """
    Special singleton used for indicating the caller class.
    Used by ``using``.
    """

this = _This()


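# Illustrative usage sketch, not part of the vendored file: a typical
# RegexLexer rule built with bygroups() -- three capture groups mapped to
# three token types, emitted at the correct offsets.
def _bygroups_demo_rule():
    from pip._vendor.pygments.token import Keyword, Name, Whitespace
    return (r'(def)(\s+)([a-zA-Z_]\w*)',
            bygroups(Keyword, Whitespace, Name.Function))

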
def using(_other, **kwargs):
    """
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    """
    gt_kwargs = {}
    if 'state' in kwargs:
        s = kwargs.pop('state')
        if isinstance(s, (list, tuple)):
            gt_kwargs['stack'] = s
        else:
            gt_kwargs['stack'] = ('root', s)

    if _other is this:
        def callback(lexer, match, ctx=None):
            # if keyword arguments are given the callback
            # function has to create a new lexer instance
            if kwargs:
                # XXX: cache that somehow
                kwargs.update(lexer.options)
                lx = lexer.__class__(**kwargs)
            else:
                lx = lexer
            s = match.start()
            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
                yield i + s, t, v
            if ctx:
                ctx.pos = match.end()
    else:
        def callback(lexer, match, ctx=None):
            # XXX: cache that somehow
            kwargs.update(lexer.options)
            lx = _other(**kwargs)

            s = match.start()
            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
                yield i + s, t, v
            if ctx:
                ctx.pos = match.end()
    return callback


class default:
    """
    Indicates a state or state action (e.g. #pop) to apply.
    For example default('#pop') is equivalent to ('', Token, '#pop')
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    """
    def __init__(self, state):
        self.state = state


class words(Future):
    """
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    """
    def __init__(self, words, prefix='', suffix=''):
        self.words = words
        self.prefix = prefix
        self.suffix = suffix

    def get(self):
        return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix)


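# Illustrative usage sketch, not part of the vendored file: words() defers
# building the optimized alternation; the metaclass calls get() while
# compiling the rules.
def _words_demo():
    w = words(('if', 'elif', 'else'), prefix=r'\b', suffix=r'\b')
    return w.get()  # an optimized regex source string

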
class RegexLexerMeta(LexerMeta):
    """
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
    """

    def _process_regex(cls, regex, rflags, state):
        """Preprocess the regular expression component of a token definition."""
        if isinstance(regex, Future):
            regex = regex.get()
        return re.compile(regex, rflags).match

    def _process_token(cls, token):
        """Preprocess the token component of a token definition."""
        assert type(token) is _TokenType or callable(token), \
            f'token type must be simple type or callable, not {token!r}'
        return token

    def _process_new_state(cls, new_state, unprocessed, processed):
        """Preprocess the state transition action of a token definition."""
        if isinstance(new_state, str):
            # an existing state
            if new_state == '#pop':
                return -1
            elif new_state in unprocessed:
                return (new_state,)
            elif new_state == '#push':
                return new_state
            elif new_state[:5] == '#pop:':
                return -int(new_state[5:])
            else:
                assert False, f'unknown new state {new_state!r}'
        elif isinstance(new_state, combined):
            # combine a new state from existing ones
            tmp_state = '_tmp_%d' % cls._tmpname
            cls._tmpname += 1
            itokens = []
            for istate in new_state:
                assert istate != new_state, f'circular state ref {istate!r}'
                itokens.extend(cls._process_state(unprocessed,
                                                  processed, istate))
            processed[tmp_state] = itokens
            return (tmp_state,)
        elif isinstance(new_state, tuple):
            # push more than one state
            for istate in new_state:
                assert (istate in unprocessed or
                        istate in ('#pop', '#push')), \
                    'unknown new state ' + istate
            return new_state
        else:
            assert False, f'unknown new state def {new_state!r}'

    def _process_state(cls, unprocessed, processed, state):
        """Preprocess a single state definition."""
        assert isinstance(state, str), f"wrong state name {state!r}"
        assert state[0] != '#', f"invalid state name {state!r}"
        if state in processed:
            return processed[state]
        tokens = processed[state] = []
        rflags = cls.flags
        for tdef in unprocessed[state]:
            if isinstance(tdef, include):
                # it's a state reference
                assert tdef != state, f"circular state reference {state!r}"
                tokens.extend(cls._process_state(unprocessed, processed,
                                                 str(tdef)))
                continue
            if isinstance(tdef, _inherit):
                # should be processed already, but may not in the case of:
                # 1. the state has no counterpart in any parent
                # 2. the state includes more than one 'inherit'
                continue
            if isinstance(tdef, default):
                new_state = cls._process_new_state(tdef.state, unprocessed, processed)
                tokens.append((re.compile('').match, None, new_state))
                continue

            assert type(tdef) is tuple, f"wrong rule def {tdef!r}"

            try:
                rex = cls._process_regex(tdef[0], rflags, state)
            except Exception as err:
                raise ValueError(f"uncompilable regex {tdef[0]!r} in state {state!r} of {cls!r}: {err}") from err

            token = cls._process_token(tdef[1])

            if len(tdef) == 2:
                new_state = None
            else:
                new_state = cls._process_new_state(tdef[2],
                                                   unprocessed, processed)

            tokens.append((rex, token, new_state))
        return tokens

    def process_tokendef(cls, name, tokendefs=None):
        """Preprocess a dictionary of token definitions."""
        processed = cls._all_tokens[name] = {}
        tokendefs = tokendefs or cls.tokens[name]
        for state in list(tokendefs):
            cls._process_state(tokendefs, processed, state)
        return processed

    def get_tokendefs(cls):
        """
        Merge tokens from superclasses in MRO order, returning a single tokendef
        dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically. States that *are* defined by subclasses will, by
        default, override that state in the superclass. If a subclass wishes to
        inherit definitions from a superclass, it can use the special value
        "inherit", which will cause the superclass' state definition to be
        included at that point in the state.
        """
        tokens = {}
        inheritable = {}
        for c in cls.__mro__:
            toks = c.__dict__.get('tokens', {})

            for state, items in toks.items():
                curitems = tokens.get(state)
                if curitems is None:
                    # N.b. because this is assigned by reference, sufficiently
                    # deep hierarchies are processed incrementally (e.g. for
                    # A(B), B(C), C(RegexLexer), B will be premodified so X(B)
                    # will not see any inherits in B).
                    tokens[state] = items
                    try:
                        inherit_ndx = items.index(inherit)
                    except ValueError:
                        continue
                    inheritable[state] = inherit_ndx
                    continue

                inherit_ndx = inheritable.pop(state, None)
                if inherit_ndx is None:
                    continue

                # Replace the "inherit" value with the items
                curitems[inherit_ndx:inherit_ndx+1] = items
                try:
                    # N.b. this is the index in items (that is, the superclass
                    # copy), so offset required when storing below.
                    new_inh_ndx = items.index(inherit)
                except ValueError:
                    pass
                else:
                    inheritable[state] = inherit_ndx + new_inh_ndx

        return tokens

    def __call__(cls, *args, **kwds):
        """Instantiate cls after preprocessing its token definitions."""
        if '_tokens' not in cls.__dict__:
            cls._all_tokens = {}
            cls._tmpname = 0
            if hasattr(cls, 'token_variants') and cls.token_variants:
                # don't process yet
                pass
            else:
                cls._tokens = cls.process_tokendef('', cls.get_tokendefs())

        return type.__call__(cls, *args, **kwds)


class RegexLexer(Lexer, metaclass=RegexLexerMeta):
|
||||
"""
|
||||
Base for simple stateful regular expression-based lexers.
|
||||
Simplifies the lexing process so that you need only
|
||||
provide a list of states and regular expressions.
|
||||
"""
|
||||
|
||||
#: Flags for compiling the regular expressions.
|
||||
#: Defaults to MULTILINE.
|
||||
flags = re.MULTILINE
|
||||
|
||||
#: At all time there is a stack of states. Initially, the stack contains
|
||||
#: a single state 'root'. The top of the stack is called "the current state".
|
||||
#:
|
||||
#: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}``
|
||||
#:
|
||||
#: ``new_state`` can be omitted to signify no state transition.
|
||||
#: If ``new_state`` is a string, it is pushed on the stack. This ensure
|
||||
#: the new current state is ``new_state``.
|
||||
#: If ``new_state`` is a tuple of strings, all of those strings are pushed
|
||||
#: on the stack and the current state will be the last element of the list.
|
||||
#: ``new_state`` can also be ``combined('state1', 'state2', ...)``
|
||||
#: to signify a new, anonymous state combined from the rules of two
|
||||
#: or more existing ones.
|
||||
#: Furthermore, it can be '#pop' to signify going back one step in
|
||||
#: the state stack, or '#push' to push the current state on the stack
|
||||
#: again. Note that if you push while in a combined state, the combined
|
||||
#: state itself is pushed, and not only the state in which the rule is
|
||||
#: defined.
|
||||
#:
|
||||
#: The tuple can also be replaced with ``include('state')``, in which
|
||||
#: case the rules from the state named by the string are included in the
|
||||
#: current one.
|
||||
tokens = {}
|
||||
|
||||
def get_tokens_unprocessed(self, text, stack=('root',)):
|
||||
"""
|
||||
Split ``text`` into (tokentype, text) pairs.
|
||||
|
||||
``stack`` is the initial stack (default: ``['root']``)
|
||||
"""
|
||||
pos = 0
|
||||
tokendefs = self._tokens
|
||||
statestack = list(stack)
|
||||
statetokens = tokendefs[statestack[-1]]
|
||||
while 1:
|
||||
for rexmatch, action, new_state in statetokens:
|
||||
m = rexmatch(text, pos)
|
||||
if m:
|
||||
if action is not None:
|
||||
if type(action) is _TokenType:
|
||||
yield pos, action, m.group()
|
||||
else:
|
||||
yield from action(self, m)
|
||||
pos = m.end()
|
||||
if new_state is not None:
|
||||
# state transition
|
||||
if isinstance(new_state, tuple):
|
||||
for state in new_state:
|
||||
if state == '#pop':
|
||||
if len(statestack) > 1:
|
||||
statestack.pop()
|
||||
elif state == '#push':
|
||||
statestack.append(statestack[-1])
|
||||
else:
|
||||
statestack.append(state)
|
||||
elif isinstance(new_state, int):
|
||||
# pop, but keep at least one state on the stack
|
||||
# (random code leading to unexpected pops should
|
||||
# not allow exceptions)
|
||||
if abs(new_state) >= len(statestack):
|
||||
del statestack[1:]
|
||||
else:
|
||||
del statestack[new_state:]
|
||||
elif new_state == '#push':
|
||||
statestack.append(statestack[-1])
|
||||
else:
|
||||
assert False, f"wrong state def: {new_state!r}"
|
||||
statetokens = tokendefs[statestack[-1]]
|
||||
break
|
||||
else:
|
||||
# We are here only if all state tokens have been considered
|
||||
# and there was not a match on any of them.
|
||||
try:
|
||||
if text[pos] == '\n':
|
||||
# at EOL, reset state to "root"
|
||||
statestack = ['root']
|
||||
statetokens = tokendefs['root']
|
||||
yield pos, Whitespace, '\n'
|
||||
pos += 1
|
||||
continue
|
||||
yield pos, Error, text[pos]
|
||||
pos += 1
|
||||
except IndexError:
|
||||
break
|
||||
|
||||
|
||||
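# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the vendored upstream file): a
# minimal RegexLexer subclass showing the ``tokens`` structure documented
# above -- a plain state push, '#push' for nesting, and '#pop' to leave a
# state.  The lexer, its name, and its regexes are hypothetical.
from pip._vendor.pygments.token import Comment

class _ParenCommentLexer(RegexLexer):
    """Hypothetical example: treats (...) as a nestable comment."""
    name = 'ParenComment (example)'
    tokens = {
        'root': [
            (r'\(', Comment, 'comment'),   # push the 'comment' state
            (r'[^(]+', Text),
        ],
        'comment': [
            (r'\(', Comment, '#push'),     # nested comment: push again
            (r'\)', Comment, '#pop'),      # leave one nesting level
            (r'[^()]+', Comment),
        ],
    }

# get_tokens_unprocessed() then yields (index, tokentype, value) triples:
#     list(_ParenCommentLexer().get_tokens_unprocessed('a(b)c'))
# ---------------------------------------------------------------------------
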
class LexerContext:
    """
    A helper object that holds lexer position data.
    """

    def __init__(self, text, pos, stack=None, end=None):
        self.text = text
        self.pos = pos
        self.end = end or len(text)  # end=0 not supported ;-)
        self.stack = stack or ['root']

    def __repr__(self):
        return f'LexerContext({self.text!r}, {self.pos!r}, {self.stack!r})'


class ExtendedRegexLexer(RegexLexer):
    """
    A RegexLexer that uses a context object to store its state.
    """

    def get_tokens_unprocessed(self, text=None, context=None):
        """
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        """
        tokendefs = self._tokens
        if not context:
            ctx = LexerContext(text, 0)
            statetokens = tokendefs['root']
        else:
            ctx = context
            statetokens = tokendefs[ctx.stack[-1]]
            text = ctx.text
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, ctx.pos, ctx.end)
                if m:
                    if action is not None:
                        if type(action) is _TokenType:
                            yield ctx.pos, action, m.group()
                            ctx.pos = m.end()
                        else:
                            yield from action(self, m, ctx)
                            if not new_state:
                                # altered the state stack?
                                statetokens = tokendefs[ctx.stack[-1]]
                    # CAUTION: callback must set ctx.pos!
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            for state in new_state:
                                if state == '#pop':
                                    if len(ctx.stack) > 1:
                                        ctx.stack.pop()
                                elif state == '#push':
                                    ctx.stack.append(ctx.stack[-1])
                                else:
                                    ctx.stack.append(state)
                        elif isinstance(new_state, int):
                            # see RegexLexer for why this check is made
                            if abs(new_state) >= len(ctx.stack):
                                del ctx.stack[1:]
                            else:
                                del ctx.stack[new_state:]
                        elif new_state == '#push':
                            ctx.stack.append(ctx.stack[-1])
                        else:
                            assert False, f"wrong state def: {new_state!r}"
                        statetokens = tokendefs[ctx.stack[-1]]
                    break
            else:
                try:
                    if ctx.pos >= ctx.end:
                        break
                    if text[ctx.pos] == '\n':
                        # at EOL, reset state to "root"
                        ctx.stack = ['root']
                        statetokens = tokendefs['root']
                        yield ctx.pos, Text, '\n'
                        ctx.pos += 1
                        continue
                    yield ctx.pos, Error, text[ctx.pos]
                    ctx.pos += 1
                except IndexError:
                    break


def do_insertions(insertions, tokens):
    """
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
    """
    insertions = iter(insertions)
    try:
        index, itokens = next(insertions)
    except StopIteration:
        # no insertions
        yield from tokens
        return

    realpos = None
    insleft = True

    # iterate over the token stream where we want to insert
    # the tokens from the insertion list.
    for i, t, v in tokens:
        # first iteration. store the position of first item
        if realpos is None:
            realpos = i
        oldi = 0
        while insleft and i + len(v) >= index:
            tmpval = v[oldi:index - i]
            if tmpval:
                yield realpos, t, tmpval
                realpos += len(tmpval)
            for it_index, it_token, it_value in itokens:
                yield realpos, it_token, it_value
                realpos += len(it_value)
            oldi = index - i
            try:
                index, itokens = next(insertions)
            except StopIteration:
                insleft = False
                break  # not strictly necessary
        if oldi < len(v):
            yield realpos, t, v[oldi:]
            realpos += len(v) - oldi

    # leftover tokens
    while insleft:
        # no normal tokens, set realpos to zero
        realpos = realpos or 0
        for p, t, v in itokens:
            yield realpos, t, v
            realpos += len(v)
        try:
            index, itokens = next(insertions)
        except StopIteration:
            insleft = False
            break  # not strictly necessary

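# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the vendored upstream file):
# do_insertions() splices an "inserted" token stream into a base stream at a
# given text offset.  The two streams below are hypothetical.
from pip._vendor.pygments.token import Generic

def _do_insertions_example():
    """Hypothetical usage: splice a prompt token in front of a command."""
    base = [(0, Text, 'echo hi\n')]         # tokens from the inner lexer
    prompt = [(0, Generic.Prompt, '$ ')]    # tokens to insert at offset 0
    # yields (0, Generic.Prompt, '$ ') followed by (2, Text, 'echo hi\n')
    return list(do_insertions([(0, prompt)], iter(base)))
# ---------------------------------------------------------------------------
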
class ProfilingRegexLexerMeta(RegexLexerMeta):
    """Metaclass for ProfilingRegexLexer, collects regex timing info."""

    def _process_regex(cls, regex, rflags, state):
        if isinstance(regex, words):
            rex = regex_opt(regex.words, prefix=regex.prefix,
                            suffix=regex.suffix)
        else:
            rex = regex
        compiled = re.compile(rex, rflags)

        def match_func(text, pos, endpos=sys.maxsize):
            info = cls._prof_data[-1].setdefault((state, rex), [0, 0.0])
            t0 = time.time()
            res = compiled.match(text, pos, endpos)
            t1 = time.time()
            info[0] += 1
            info[1] += t1 - t0
            return res
        return match_func


class ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta):
    """Drop-in replacement for RegexLexer that does profiling of its regexes."""

    _prof_data = []
    _prof_sort_index = 4  # defaults to time per call

    def get_tokens_unprocessed(self, text, stack=('root',)):
        # this needs to be a stack, since using(this) will produce nested calls
        self.__class__._prof_data.append({})
        yield from RegexLexer.get_tokens_unprocessed(self, text, stack)
        rawdata = self.__class__._prof_data.pop()
        data = sorted(((s, repr(r).strip('u\'').replace('\\\\', '\\')[:65],
                        n, 1000 * t, 1000 * t / n)
                       for ((s, r), (n, t)) in rawdata.items()),
                      key=lambda x: x[self._prof_sort_index],
                      reverse=True)
        sum_total = sum(x[3] for x in data)

        print()
        print('Profiling result for %s lexing %d chars in %.3f ms' %
              (self.__class__.__name__, len(text), sum_total))
        print('=' * 110)
        print('%-20s %-64s ncalls tottime percall' % ('state', 'regex'))
        print('-' * 110)
        for d in data:
            print('%-20s %-65s %5d %8.4f %8.4f' % d)
        print('=' * 110)

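# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the vendored upstream file): a
# lexer is profiled by re-basing it onto ProfilingRegexLexer; the timing
# table is printed once lexing finishes.  The subclass name is hypothetical.
def _profiling_example():
    """Hypothetical usage: profile PythonLexer's regexes."""
    from pip._vendor.pygments.lexers import PythonLexer

    class _ProfilingPythonLexer(ProfilingRegexLexer, PythonLexer):
        pass

    # exhausting the generator triggers the profiling report above
    return list(_ProfilingPythonLexer().get_tokens('print("hi")\n'))
# ---------------------------------------------------------------------------
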
@@ -0,0 +1,362 @@
"""
    pygments.lexers
    ~~~~~~~~~~~~~~~

    Pygments lexers.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import sys
import types
import fnmatch
from os.path import basename

from pip._vendor.pygments.lexers._mapping import LEXERS
from pip._vendor.pygments.modeline import get_filetype_from_buffer
from pip._vendor.pygments.plugin import find_plugin_lexers
from pip._vendor.pygments.util import ClassNotFound, guess_decode

COMPAT = {
    'Python3Lexer': 'PythonLexer',
    'Python3TracebackLexer': 'PythonTracebackLexer',
    'LeanLexer': 'Lean3Lexer',
}

__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class',
           'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT)

_lexer_cache = {}
_pattern_cache = {}


def _fn_matches(fn, glob):
    """Return whether the supplied file name fn matches the glob pattern."""
    if glob not in _pattern_cache:
        pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob))
        return pattern.match(fn)
    return _pattern_cache[glob].match(fn)


def _load_lexers(module_name):
    """Load a lexer (and all others in the module too)."""
    mod = __import__(module_name, None, None, ['__all__'])
    for lexer_name in mod.__all__:
        cls = getattr(mod, lexer_name)
        _lexer_cache[cls.name] = cls


def get_all_lexers(plugins=True):
    """Return a generator of tuples in the form ``(name, aliases,
    filenames, mimetypes)`` of all known lexers.

    If *plugins* is true (the default), plugin lexers supplied by entrypoints
    are also returned. Otherwise, only builtin ones are considered.
    """
    for item in LEXERS.values():
        yield item[1:]
    if plugins:
        for lexer in find_plugin_lexers():
            yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes


def find_lexer_class(name):
    """
    Return the `Lexer` subclass whose *name* attribute matches the given
    *name* argument.
    """
    if name in _lexer_cache:
        return _lexer_cache[name]
    # lookup builtin lexers
    for module_name, lname, aliases, _, _ in LEXERS.values():
        if name == lname:
            _load_lexers(module_name)
            return _lexer_cache[name]
    # continue with lexers from setuptools entrypoints
    for cls in find_plugin_lexers():
        if cls.name == name:
            return cls


def find_lexer_class_by_name(_alias):
    """
    Return the `Lexer` subclass that has `alias` in its aliases list, without
    instantiating it.

    Like `get_lexer_by_name`, but does not instantiate the class.

    Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is
    found.

    .. versionadded:: 2.2
    """
    if not _alias:
        raise ClassNotFound(f'no lexer for alias {_alias!r} found')
    # lookup builtin lexers
    for module_name, name, aliases, _, _ in LEXERS.values():
        if _alias.lower() in aliases:
            if name not in _lexer_cache:
                _load_lexers(module_name)
            return _lexer_cache[name]
    # continue with lexers from setuptools entrypoints
    for cls in find_plugin_lexers():
        if _alias.lower() in cls.aliases:
            return cls
    raise ClassNotFound(f'no lexer for alias {_alias!r} found')


def get_lexer_by_name(_alias, **options):
    """
    Return an instance of a `Lexer` subclass that has `alias` in its
    aliases list. The lexer is given the `options` at its
    instantiation.

    Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is
    found.
    """
    if not _alias:
        raise ClassNotFound(f'no lexer for alias {_alias!r} found')

    # lookup builtin lexers
    for module_name, name, aliases, _, _ in LEXERS.values():
        if _alias.lower() in aliases:
            if name not in _lexer_cache:
                _load_lexers(module_name)
            return _lexer_cache[name](**options)
    # continue with lexers from setuptools entrypoints
    for cls in find_plugin_lexers():
        if _alias.lower() in cls.aliases:
            return cls(**options)
    raise ClassNotFound(f'no lexer for alias {_alias!r} found')

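# Editor's illustrative sketch (not part of the vendored upstream file):
# a typical alias lookup; keyword arguments such as ``stripall`` are passed
# through to the Lexer constructor.
#
#     lexer = get_lexer_by_name('python', stripall=True)
#     lexer.name   # -> 'Python'
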
def load_lexer_from_file(filename, lexername="CustomLexer", **options):
    """Load a lexer from a file.

    This method expects a file located relative to the current working
    directory, which contains a Lexer class. By default, it expects the
    Lexer to be named CustomLexer; you can specify your own class name
    as the second argument to this function.

    Users should be very careful with the input, because this method
    is equivalent to running eval on the input file.

    Raises ClassNotFound if there are any problems importing the Lexer.

    .. versionadded:: 2.2
    """
    try:
        # This empty dict will contain the namespace for the exec'd file
        custom_namespace = {}
        with open(filename, 'rb') as f:
            exec(f.read(), custom_namespace)
        # Retrieve the class `lexername` from that namespace
        if lexername not in custom_namespace:
            raise ClassNotFound(f'no valid {lexername} class found in {filename}')
        lexer_class = custom_namespace[lexername]
        # And finally instantiate it with the options
        return lexer_class(**options)
    except OSError as err:
        raise ClassNotFound(f'cannot read {filename}: {err}')
    except ClassNotFound:
        raise
    except Exception as err:
        raise ClassNotFound(f'error when loading custom lexer: {err}')

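# Editor's illustrative sketch (not part of the vendored upstream file):
# 'mylexer.py' and 'MyLexer' are hypothetical.  The file is exec()'d, so
# only load code you trust.
#
#     lexer = load_lexer_from_file('mylexer.py', lexername='MyLexer')
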
def find_lexer_class_for_filename(_fn, code=None):
    """Get a lexer for a filename.

    If multiple lexers match the filename pattern, use ``analyse_text()`` to
    figure out which one is more appropriate.

    Returns None if not found.
    """
    matches = []
    fn = basename(_fn)
    for modname, name, _, filenames, _ in LEXERS.values():
        for filename in filenames:
            if _fn_matches(fn, filename):
                if name not in _lexer_cache:
                    _load_lexers(modname)
                matches.append((_lexer_cache[name], filename))
    for cls in find_plugin_lexers():
        for filename in cls.filenames:
            if _fn_matches(fn, filename):
                matches.append((cls, filename))

    if isinstance(code, bytes):
        # decode it, since all analyse_text functions expect unicode
        code = guess_decode(code)

    def get_rating(info):
        cls, filename = info
        # explicit patterns get a bonus
        bonus = '*' not in filename and 0.5 or 0
        # The class _always_ defines analyse_text because it's included in
        # the Lexer class. The default implementation returns None which
        # gets turned into 0.0. Run scripts/detect_missing_analyse_text.py
        # to find lexers which need it overridden.
        if code:
            return cls.analyse_text(code) + bonus, cls.__name__
        return cls.priority + bonus, cls.__name__

    if matches:
        matches.sort(key=get_rating)
        # print "Possible lexers, after sort:", matches
        return matches[-1][0]


def get_lexer_for_filename(_fn, code=None, **options):
    """Get a lexer for a filename.

    Return a `Lexer` subclass instance that has a filename pattern
    matching `fn`. The lexer is given the `options` at its
    instantiation.

    Raise :exc:`pygments.util.ClassNotFound` if no lexer for that filename
    is found.

    If multiple lexers match the filename pattern, use their ``analyse_text()``
    methods to figure out which one is more appropriate.
    """
    res = find_lexer_class_for_filename(_fn, code)
    if not res:
        raise ClassNotFound(f'no lexer for filename {_fn!r} found')
    return res(**options)

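# Editor's illustrative sketch (not part of the vendored upstream file):
# filename-based lookup; passing ``code`` (a hypothetical string of file
# contents, ``src`` below) lets analyse_text() break ties when several
# lexers claim the same filename pattern.
#
#     lexer = get_lexer_for_filename('setup.py')         # e.g. PythonLexer
#     lexer = get_lexer_for_filename('x.pro', code=src)  # content decides
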
def get_lexer_for_mimetype(_mime, **options):
    """
    Return a `Lexer` subclass instance that has `mime` in its mimetype
    list. The lexer is given the `options` at its instantiation.

    Will raise :exc:`pygments.util.ClassNotFound` if no lexer for that mimetype
    is found.
    """
    for modname, name, _, _, mimetypes in LEXERS.values():
        if _mime in mimetypes:
            if name not in _lexer_cache:
                _load_lexers(modname)
            return _lexer_cache[name](**options)
    for cls in find_plugin_lexers():
        if _mime in cls.mimetypes:
            return cls(**options)
    raise ClassNotFound(f'no lexer for mimetype {_mime!r} found')


def _iter_lexerclasses(plugins=True):
    """Return an iterator over all lexer classes."""
    for key in sorted(LEXERS):
        module_name, name = LEXERS[key][:2]
        if name not in _lexer_cache:
            _load_lexers(module_name)
        yield _lexer_cache[name]
    if plugins:
        yield from find_plugin_lexers()


def guess_lexer_for_filename(_fn, _text, **options):
    """
    As :func:`guess_lexer()`, but only lexers which have a pattern in `filenames`
    or `alias_filenames` that matches `filename` are taken into consideration.

    :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can
    handle the content.
    """
    fn = basename(_fn)
    primary = {}
    matching_lexers = set()
    for lexer in _iter_lexerclasses():
        for filename in lexer.filenames:
            if _fn_matches(fn, filename):
                matching_lexers.add(lexer)
                primary[lexer] = True
        for filename in lexer.alias_filenames:
            if _fn_matches(fn, filename):
                matching_lexers.add(lexer)
                primary[lexer] = False
    if not matching_lexers:
        raise ClassNotFound(f'no lexer for filename {fn!r} found')
    if len(matching_lexers) == 1:
        return matching_lexers.pop()(**options)
    result = []
    for lexer in matching_lexers:
        rv = lexer.analyse_text(_text)
        if rv == 1.0:
            return lexer(**options)
        result.append((rv, lexer))

    def type_sort(t):
        # sort by:
        # - analyse score
        # - is primary filename pattern?
        # - priority
        # - last resort: class name
        return (t[0], primary[t[1]], t[1].priority, t[1].__name__)
    result.sort(key=type_sort)

    return result[-1][1](**options)

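# Editor's illustrative sketch (not part of the vendored upstream file):
# only lexers whose filenames/alias_filenames match are considered, then
# analyse_text() ranks them.  ``src`` is a hypothetical string of contents.
#
#     lexer = guess_lexer_for_filename('index.html', src)
#     # plain markup keeps a plain HTML lexer; template syntax in ``src``
#     # may let a template lexer win the tie-break
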
def guess_lexer(_text, **options):
    """
    Return a `Lexer` subclass instance that's guessed from the text in
    `text`. For that, the :meth:`.analyse_text()` method of every known lexer
    class is called with the text as argument, and the lexer which returned the
    highest value will be instantiated and returned.

    :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can
    handle the content.
    """

    if not isinstance(_text, str):
        inencoding = options.get('inencoding', options.get('encoding'))
        if inencoding:
            _text = _text.decode(inencoding or 'utf8')
        else:
            _text, _ = guess_decode(_text)

    # try to get a vim modeline first
    ft = get_filetype_from_buffer(_text)

    if ft is not None:
        try:
            return get_lexer_by_name(ft, **options)
        except ClassNotFound:
            pass

    best_lexer = [0.0, None]
    for lexer in _iter_lexerclasses():
        rv = lexer.analyse_text(_text)
        if rv == 1.0:
            return lexer(**options)
        if rv > best_lexer[0]:
            best_lexer[:] = (rv, lexer)
    if not best_lexer[0] or best_lexer[1] is None:
        raise ClassNotFound('no lexer matching the text found')
    return best_lexer[1](**options)

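# Editor's illustrative sketch (not part of the vendored upstream file):
# pure content-based guessing; a Python shebang scores 1.0 immediately.
#
#     guess_lexer('#!/usr/bin/env python\nprint(1)\n').name   # e.g. 'Python'
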
class _automodule(types.ModuleType):
    """Automatically import lexers."""

    def __getattr__(self, name):
        info = LEXERS.get(name)
        if info:
            _load_lexers(info[0])
            cls = _lexer_cache[info[1]]
            setattr(self, name, cls)
            return cls
        if name in COMPAT:
            return getattr(self, COMPAT[name])
        raise AttributeError(name)


oldmod = sys.modules[__name__]
newmod = _automodule(__name__)
newmod.__dict__.update(oldmod.__dict__)
sys.modules[__name__] = newmod
del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
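# Editor's illustrative sketch (not part of the vendored upstream file):
# because of the _automodule swap above, lexer classes are imported lazily
# on first attribute access rather than at package import time:
#
#     from pip._vendor.pygments import lexers
#     cls = lexers.PythonLexer   # triggers _load_lexers(...) on demand
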
@@ -0,0 +1,602 @@
# Automatically generated by scripts/gen_mapfiles.py.
# DO NOT EDIT BY HAND; run `tox -e mapfiles` instead.

LEXERS = {
'ABAPLexer': ('pip._vendor.pygments.lexers.business', 'ABAP', ('abap',), ('*.abap', '*.ABAP'), ('text/x-abap',)),
'AMDGPULexer': ('pip._vendor.pygments.lexers.amdgpu', 'AMDGPU', ('amdgpu',), ('*.isa',), ()),
'APLLexer': ('pip._vendor.pygments.lexers.apl', 'APL', ('apl',), ('*.apl', '*.aplf', '*.aplo', '*.apln', '*.aplc', '*.apli', '*.dyalog'), ()),
'AbnfLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'ABNF', ('abnf',), ('*.abnf',), ('text/x-abnf',)),
'ActionScript3Lexer': ('pip._vendor.pygments.lexers.actionscript', 'ActionScript 3', ('actionscript3', 'as3'), ('*.as',), ('application/x-actionscript3', 'text/x-actionscript3', 'text/actionscript3')),
'ActionScriptLexer': ('pip._vendor.pygments.lexers.actionscript', 'ActionScript', ('actionscript', 'as'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')),
'AdaLexer': ('pip._vendor.pygments.lexers.ada', 'Ada', ('ada', 'ada95', 'ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)),
'AdlLexer': ('pip._vendor.pygments.lexers.archetype', 'ADL', ('adl',), ('*.adl', '*.adls', '*.adlf', '*.adlx'), ()),
'AgdaLexer': ('pip._vendor.pygments.lexers.haskell', 'Agda', ('agda',), ('*.agda',), ('text/x-agda',)),
'AheuiLexer': ('pip._vendor.pygments.lexers.esoteric', 'Aheui', ('aheui',), ('*.aheui',), ()),
'AlloyLexer': ('pip._vendor.pygments.lexers.dsls', 'Alloy', ('alloy',), ('*.als',), ('text/x-alloy',)),
'AmbientTalkLexer': ('pip._vendor.pygments.lexers.ambient', 'AmbientTalk', ('ambienttalk', 'ambienttalk/2', 'at'), ('*.at',), ('text/x-ambienttalk',)),
'AmplLexer': ('pip._vendor.pygments.lexers.ampl', 'Ampl', ('ampl',), ('*.run',), ()),
'Angular2HtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML + Angular2', ('html+ng2',), ('*.ng2',), ()),
'Angular2Lexer': ('pip._vendor.pygments.lexers.templates', 'Angular2', ('ng2',), (), ()),
'AntlrActionScriptLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-actionscript', 'antlr-as'), ('*.G', '*.g'), ()),
'AntlrCSharpLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()),
'AntlrCppLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()),
'AntlrJavaLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Java Target', ('antlr-java',), ('*.G', '*.g'), ()),
'AntlrLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR', ('antlr',), (), ()),
'AntlrObjectiveCLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With ObjectiveC Target', ('antlr-objc',), ('*.G', '*.g'), ()),
'AntlrPerlLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Perl Target', ('antlr-perl',), ('*.G', '*.g'), ()),
'AntlrPythonLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Python Target', ('antlr-python',), ('*.G', '*.g'), ()),
'AntlrRubyLexer': ('pip._vendor.pygments.lexers.parsers', 'ANTLR With Ruby Target', ('antlr-ruby', 'antlr-rb'), ('*.G', '*.g'), ()),
'ApacheConfLexer': ('pip._vendor.pygments.lexers.configs', 'ApacheConf', ('apacheconf', 'aconf', 'apache'), ('.htaccess', 'apache.conf', 'apache2.conf'), ('text/x-apacheconf',)),
'AppleScriptLexer': ('pip._vendor.pygments.lexers.scripting', 'AppleScript', ('applescript',), ('*.applescript',), ()),
'ArduinoLexer': ('pip._vendor.pygments.lexers.c_like', 'Arduino', ('arduino',), ('*.ino',), ('text/x-arduino',)),
'ArrowLexer': ('pip._vendor.pygments.lexers.arrow', 'Arrow', ('arrow',), ('*.arw',), ()),
'ArturoLexer': ('pip._vendor.pygments.lexers.arturo', 'Arturo', ('arturo', 'art'), ('*.art',), ()),
'AscLexer': ('pip._vendor.pygments.lexers.asc', 'ASCII armored', ('asc', 'pem'), ('*.asc', '*.pem', 'id_dsa', 'id_ecdsa', 'id_ecdsa_sk', 'id_ed25519', 'id_ed25519_sk', 'id_rsa'), ('application/pgp-keys', 'application/pgp-encrypted', 'application/pgp-signature', 'application/pem-certificate-chain')),
'Asn1Lexer': ('pip._vendor.pygments.lexers.asn1', 'ASN.1', ('asn1',), ('*.asn1',), ()),
'AspectJLexer': ('pip._vendor.pygments.lexers.jvm', 'AspectJ', ('aspectj',), ('*.aj',), ('text/x-aspectj',)),
'AsymptoteLexer': ('pip._vendor.pygments.lexers.graphics', 'Asymptote', ('asymptote', 'asy'), ('*.asy',), ('text/x-asymptote',)),
'AugeasLexer': ('pip._vendor.pygments.lexers.configs', 'Augeas', ('augeas',), ('*.aug',), ()),
'AutoItLexer': ('pip._vendor.pygments.lexers.automation', 'AutoIt', ('autoit',), ('*.au3',), ('text/x-autoit',)),
'AutohotkeyLexer': ('pip._vendor.pygments.lexers.automation', 'autohotkey', ('autohotkey', 'ahk'), ('*.ahk', '*.ahkl'), ('text/x-autohotkey',)),
'AwkLexer': ('pip._vendor.pygments.lexers.textedit', 'Awk', ('awk', 'gawk', 'mawk', 'nawk'), ('*.awk',), ('application/x-awk',)),
'BBCBasicLexer': ('pip._vendor.pygments.lexers.basic', 'BBC Basic', ('bbcbasic',), ('*.bbc',), ()),
'BBCodeLexer': ('pip._vendor.pygments.lexers.markup', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)),
'BCLexer': ('pip._vendor.pygments.lexers.algebra', 'BC', ('bc',), ('*.bc',), ()),
'BQNLexer': ('pip._vendor.pygments.lexers.bqn', 'BQN', ('bqn',), ('*.bqn',), ()),
'BSTLexer': ('pip._vendor.pygments.lexers.bibtex', 'BST', ('bst', 'bst-pybtex'), ('*.bst',), ()),
'BareLexer': ('pip._vendor.pygments.lexers.bare', 'BARE', ('bare',), ('*.bare',), ()),
'BaseMakefileLexer': ('pip._vendor.pygments.lexers.make', 'Base Makefile', ('basemake',), (), ()),
'BashLexer': ('pip._vendor.pygments.lexers.shell', 'Bash', ('bash', 'sh', 'ksh', 'zsh', 'shell', 'openrc'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '*.zsh', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc', '.kshrc', 'kshrc', 'PKGBUILD'), ('application/x-sh', 'application/x-shellscript', 'text/x-shellscript')),
'BashSessionLexer': ('pip._vendor.pygments.lexers.shell', 'Bash Session', ('console', 'shell-session'), ('*.sh-session', '*.shell-session'), ('application/x-shell-session', 'application/x-sh-session')),
'BatchLexer': ('pip._vendor.pygments.lexers.shell', 'Batchfile', ('batch', 'bat', 'dosbatch', 'winbatch'), ('*.bat', '*.cmd'), ('application/x-dos-batch',)),
'BddLexer': ('pip._vendor.pygments.lexers.bdd', 'Bdd', ('bdd',), ('*.feature',), ('text/x-bdd',)),
'BefungeLexer': ('pip._vendor.pygments.lexers.esoteric', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)),
'BerryLexer': ('pip._vendor.pygments.lexers.berry', 'Berry', ('berry', 'be'), ('*.be',), ('text/x-berry', 'application/x-berry')),
'BibTeXLexer': ('pip._vendor.pygments.lexers.bibtex', 'BibTeX', ('bibtex', 'bib'), ('*.bib',), ('text/x-bibtex',)),
'BlitzBasicLexer': ('pip._vendor.pygments.lexers.basic', 'BlitzBasic', ('blitzbasic', 'b3d', 'bplus'), ('*.bb', '*.decls'), ('text/x-bb',)),
'BlitzMaxLexer': ('pip._vendor.pygments.lexers.basic', 'BlitzMax', ('blitzmax', 'bmax'), ('*.bmx',), ('text/x-bmx',)),
'BlueprintLexer': ('pip._vendor.pygments.lexers.blueprint', 'Blueprint', ('blueprint',), ('*.blp',), ('text/x-blueprint',)),
'BnfLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'BNF', ('bnf',), ('*.bnf',), ('text/x-bnf',)),
'BoaLexer': ('pip._vendor.pygments.lexers.boa', 'Boa', ('boa',), ('*.boa',), ()),
'BooLexer': ('pip._vendor.pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)),
'BoogieLexer': ('pip._vendor.pygments.lexers.verification', 'Boogie', ('boogie',), ('*.bpl',), ()),
'BrainfuckLexer': ('pip._vendor.pygments.lexers.esoteric', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)),
'BugsLexer': ('pip._vendor.pygments.lexers.modeling', 'BUGS', ('bugs', 'winbugs', 'openbugs'), ('*.bug',), ()),
'CAmkESLexer': ('pip._vendor.pygments.lexers.esoteric', 'CAmkES', ('camkes', 'idl4'), ('*.camkes', '*.idl4'), ()),
'CLexer': ('pip._vendor.pygments.lexers.c_cpp', 'C', ('c',), ('*.c', '*.h', '*.idc', '*.x[bp]m'), ('text/x-chdr', 'text/x-csrc', 'image/x-xbitmap', 'image/x-xpixmap')),
'CMakeLexer': ('pip._vendor.pygments.lexers.make', 'CMake', ('cmake',), ('*.cmake', 'CMakeLists.txt'), ('text/x-cmake',)),
'CObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'c-objdump', ('c-objdump',), ('*.c-objdump',), ('text/x-c-objdump',)),
'CPSALexer': ('pip._vendor.pygments.lexers.lisp', 'CPSA', ('cpsa',), ('*.cpsa',), ()),
'CSSUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'CSS+UL4', ('css+ul4',), ('*.cssul4',), ()),
'CSharpAspxLexer': ('pip._vendor.pygments.lexers.dotnet', 'aspx-cs', ('aspx-cs',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
'CSharpLexer': ('pip._vendor.pygments.lexers.dotnet', 'C#', ('csharp', 'c#', 'cs'), ('*.cs',), ('text/x-csharp',)),
'Ca65Lexer': ('pip._vendor.pygments.lexers.asm', 'ca65 assembler', ('ca65',), ('*.s',), ()),
'CadlLexer': ('pip._vendor.pygments.lexers.archetype', 'cADL', ('cadl',), ('*.cadl',), ()),
'CapDLLexer': ('pip._vendor.pygments.lexers.esoteric', 'CapDL', ('capdl',), ('*.cdl',), ()),
'CapnProtoLexer': ('pip._vendor.pygments.lexers.capnproto', "Cap'n Proto", ('capnp',), ('*.capnp',), ()),
'CarbonLexer': ('pip._vendor.pygments.lexers.carbon', 'Carbon', ('carbon',), ('*.carbon',), ('text/x-carbon',)),
'CbmBasicV2Lexer': ('pip._vendor.pygments.lexers.basic', 'CBM BASIC V2', ('cbmbas',), ('*.bas',), ()),
'CddlLexer': ('pip._vendor.pygments.lexers.cddl', 'CDDL', ('cddl',), ('*.cddl',), ('text/x-cddl',)),
'CeylonLexer': ('pip._vendor.pygments.lexers.jvm', 'Ceylon', ('ceylon',), ('*.ceylon',), ('text/x-ceylon',)),
'Cfengine3Lexer': ('pip._vendor.pygments.lexers.configs', 'CFEngine3', ('cfengine3', 'cf3'), ('*.cf',), ()),
'ChaiscriptLexer': ('pip._vendor.pygments.lexers.scripting', 'ChaiScript', ('chaiscript', 'chai'), ('*.chai',), ('text/x-chaiscript', 'application/x-chaiscript')),
'ChapelLexer': ('pip._vendor.pygments.lexers.chapel', 'Chapel', ('chapel', 'chpl'), ('*.chpl',), ()),
'CharmciLexer': ('pip._vendor.pygments.lexers.c_like', 'Charmci', ('charmci',), ('*.ci',), ()),
'CheetahHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Cheetah', ('html+cheetah', 'html+spitfire', 'htmlcheetah'), (), ('text/html+cheetah', 'text/html+spitfire')),
'CheetahJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Cheetah', ('javascript+cheetah', 'js+cheetah', 'javascript+spitfire', 'js+spitfire'), (), ('application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire')),
'CheetahLexer': ('pip._vendor.pygments.lexers.templates', 'Cheetah', ('cheetah', 'spitfire'), ('*.tmpl', '*.spt'), ('application/x-cheetah', 'application/x-spitfire')),
'CheetahXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')),
'CirruLexer': ('pip._vendor.pygments.lexers.webmisc', 'Cirru', ('cirru',), ('*.cirru',), ('text/x-cirru',)),
'ClayLexer': ('pip._vendor.pygments.lexers.c_like', 'Clay', ('clay',), ('*.clay',), ('text/x-clay',)),
'CleanLexer': ('pip._vendor.pygments.lexers.clean', 'Clean', ('clean',), ('*.icl', '*.dcl'), ()),
'ClojureLexer': ('pip._vendor.pygments.lexers.jvm', 'Clojure', ('clojure', 'clj'), ('*.clj', '*.cljc'), ('text/x-clojure', 'application/x-clojure')),
'ClojureScriptLexer': ('pip._vendor.pygments.lexers.jvm', 'ClojureScript', ('clojurescript', 'cljs'), ('*.cljs',), ('text/x-clojurescript', 'application/x-clojurescript')),
'CobolFreeformatLexer': ('pip._vendor.pygments.lexers.business', 'COBOLFree', ('cobolfree',), ('*.cbl', '*.CBL'), ()),
'CobolLexer': ('pip._vendor.pygments.lexers.business', 'COBOL', ('cobol',), ('*.cob', '*.COB', '*.cpy', '*.CPY'), ('text/x-cobol',)),
'CodeQLLexer': ('pip._vendor.pygments.lexers.codeql', 'CodeQL', ('codeql', 'ql'), ('*.ql', '*.qll'), ()),
'CoffeeScriptLexer': ('pip._vendor.pygments.lexers.javascript', 'CoffeeScript', ('coffeescript', 'coffee-script', 'coffee'), ('*.coffee',), ('text/coffeescript',)),
'ColdfusionCFCLexer': ('pip._vendor.pygments.lexers.templates', 'Coldfusion CFC', ('cfc',), ('*.cfc',), ()),
'ColdfusionHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'Coldfusion HTML', ('cfm',), ('*.cfm', '*.cfml'), ('application/x-coldfusion',)),
'ColdfusionLexer': ('pip._vendor.pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()),
'Comal80Lexer': ('pip._vendor.pygments.lexers.comal', 'COMAL-80', ('comal', 'comal80'), ('*.cml', '*.comal'), ()),
'CommonLispLexer': ('pip._vendor.pygments.lexers.lisp', 'Common Lisp', ('common-lisp', 'cl', 'lisp'), ('*.cl', '*.lisp'), ('text/x-common-lisp',)),
'ComponentPascalLexer': ('pip._vendor.pygments.lexers.oberon', 'Component Pascal', ('componentpascal', 'cp'), ('*.cp', '*.cps'), ('text/x-component-pascal',)),
'CoqLexer': ('pip._vendor.pygments.lexers.theorem', 'Coq', ('coq',), ('*.v',), ('text/x-coq',)),
'CplintLexer': ('pip._vendor.pygments.lexers.cplint', 'cplint', ('cplint',), ('*.ecl', '*.prolog', '*.pro', '*.pl', '*.P', '*.lpad', '*.cpl'), ('text/x-cplint',)),
'CppLexer': ('pip._vendor.pygments.lexers.c_cpp', 'C++', ('cpp', 'c++'), ('*.cpp', '*.hpp', '*.c++', '*.h++', '*.cc', '*.hh', '*.cxx', '*.hxx', '*.C', '*.H', '*.cp', '*.CPP', '*.tpp'), ('text/x-c++hdr', 'text/x-c++src')),
'CppObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'cpp-objdump', ('cpp-objdump', 'c++-objdumb', 'cxx-objdump'), ('*.cpp-objdump', '*.c++-objdump', '*.cxx-objdump'), ('text/x-cpp-objdump',)),
'CrmshLexer': ('pip._vendor.pygments.lexers.dsls', 'Crmsh', ('crmsh', 'pcmk'), ('*.crmsh', '*.pcmk'), ()),
'CrocLexer': ('pip._vendor.pygments.lexers.d', 'Croc', ('croc',), ('*.croc',), ('text/x-crocsrc',)),
'CryptolLexer': ('pip._vendor.pygments.lexers.haskell', 'Cryptol', ('cryptol', 'cry'), ('*.cry',), ('text/x-cryptol',)),
'CrystalLexer': ('pip._vendor.pygments.lexers.crystal', 'Crystal', ('cr', 'crystal'), ('*.cr',), ('text/x-crystal',)),
'CsoundDocumentLexer': ('pip._vendor.pygments.lexers.csound', 'Csound Document', ('csound-document', 'csound-csd'), ('*.csd',), ()),
'CsoundOrchestraLexer': ('pip._vendor.pygments.lexers.csound', 'Csound Orchestra', ('csound', 'csound-orc'), ('*.orc', '*.udo'), ()),
'CsoundScoreLexer': ('pip._vendor.pygments.lexers.csound', 'Csound Score', ('csound-score', 'csound-sco'), ('*.sco',), ()),
'CssDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Django/Jinja', ('css+django', 'css+jinja'), ('*.css.j2', '*.css.jinja2'), ('text/css+django', 'text/css+jinja')),
'CssErbLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Ruby', ('css+ruby', 'css+erb'), (), ('text/css+ruby',)),
'CssGenshiLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Genshi Text', ('css+genshitext', 'css+genshi'), (), ('text/css+genshi',)),
'CssLexer': ('pip._vendor.pygments.lexers.css', 'CSS', ('css',), ('*.css',), ('text/css',)),
'CssPhpLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+PHP', ('css+php',), (), ('text/css+php',)),
'CssSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Smarty', ('css+smarty',), (), ('text/css+smarty',)),
'CudaLexer': ('pip._vendor.pygments.lexers.c_like', 'CUDA', ('cuda', 'cu'), ('*.cu', '*.cuh'), ('text/x-cuda',)),
'CypherLexer': ('pip._vendor.pygments.lexers.graph', 'Cypher', ('cypher',), ('*.cyp', '*.cypher'), ()),
'CythonLexer': ('pip._vendor.pygments.lexers.python', 'Cython', ('cython', 'pyx', 'pyrex'), ('*.pyx', '*.pxd', '*.pxi'), ('text/x-cython', 'application/x-cython')),
'DLexer': ('pip._vendor.pygments.lexers.d', 'D', ('d',), ('*.d', '*.di'), ('text/x-dsrc',)),
'DObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'd-objdump', ('d-objdump',), ('*.d-objdump',), ('text/x-d-objdump',)),
'DarcsPatchLexer': ('pip._vendor.pygments.lexers.diff', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()),
'DartLexer': ('pip._vendor.pygments.lexers.javascript', 'Dart', ('dart',), ('*.dart',), ('text/x-dart',)),
'Dasm16Lexer': ('pip._vendor.pygments.lexers.asm', 'DASM16', ('dasm16',), ('*.dasm16', '*.dasm'), ('text/x-dasm16',)),
'DaxLexer': ('pip._vendor.pygments.lexers.dax', 'Dax', ('dax',), ('*.dax',), ()),
'DebianControlLexer': ('pip._vendor.pygments.lexers.installers', 'Debian Control file', ('debcontrol', 'control'), ('control',), ()),
'DebianSourcesLexer': ('pip._vendor.pygments.lexers.installers', 'Debian Sources file', ('debian.sources',), ('*.sources',), ()),
'DelphiLexer': ('pip._vendor.pygments.lexers.pascal', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas', '*.dpr'), ('text/x-pascal',)),
'DesktopLexer': ('pip._vendor.pygments.lexers.configs', 'Desktop file', ('desktop',), ('*.desktop',), ('application/x-desktop',)),
'DevicetreeLexer': ('pip._vendor.pygments.lexers.devicetree', 'Devicetree', ('devicetree', 'dts'), ('*.dts', '*.dtsi'), ('text/x-c',)),
'DgLexer': ('pip._vendor.pygments.lexers.python', 'dg', ('dg',), ('*.dg',), ('text/x-dg',)),
'DiffLexer': ('pip._vendor.pygments.lexers.diff', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')),
'DjangoLexer': ('pip._vendor.pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')),
'DnsZoneLexer': ('pip._vendor.pygments.lexers.dns', 'Zone', ('zone',), ('*.zone',), ('text/dns',)),
'DockerLexer': ('pip._vendor.pygments.lexers.configs', 'Docker', ('docker', 'dockerfile'), ('Dockerfile', '*.docker'), ('text/x-dockerfile-config',)),
'DtdLexer': ('pip._vendor.pygments.lexers.html', 'DTD', ('dtd',), ('*.dtd',), ('application/xml-dtd',)),
'DuelLexer': ('pip._vendor.pygments.lexers.webmisc', 'Duel', ('duel', 'jbst', 'jsonml+bst'), ('*.duel', '*.jbst'), ('text/x-duel', 'text/x-jbst')),
'DylanConsoleLexer': ('pip._vendor.pygments.lexers.dylan', 'Dylan session', ('dylan-console', 'dylan-repl'), ('*.dylan-console',), ('text/x-dylan-console',)),
'DylanLexer': ('pip._vendor.pygments.lexers.dylan', 'Dylan', ('dylan',), ('*.dylan', '*.dyl', '*.intr'), ('text/x-dylan',)),
'DylanLidLexer': ('pip._vendor.pygments.lexers.dylan', 'DylanLID', ('dylan-lid', 'lid'), ('*.lid', '*.hdp'), ('text/x-dylan-lid',)),
'ECLLexer': ('pip._vendor.pygments.lexers.ecl', 'ECL', ('ecl',), ('*.ecl',), ('application/x-ecl',)),
'ECLexer': ('pip._vendor.pygments.lexers.c_like', 'eC', ('ec',), ('*.ec', '*.eh'), ('text/x-echdr', 'text/x-ecsrc')),
'EarlGreyLexer': ('pip._vendor.pygments.lexers.javascript', 'Earl Grey', ('earl-grey', 'earlgrey', 'eg'), ('*.eg',), ('text/x-earl-grey',)),
'EasytrieveLexer': ('pip._vendor.pygments.lexers.scripting', 'Easytrieve', ('easytrieve',), ('*.ezt', '*.mac'), ('text/x-easytrieve',)),
'EbnfLexer': ('pip._vendor.pygments.lexers.parsers', 'EBNF', ('ebnf',), ('*.ebnf',), ('text/x-ebnf',)),
'EiffelLexer': ('pip._vendor.pygments.lexers.eiffel', 'Eiffel', ('eiffel',), ('*.e',), ('text/x-eiffel',)),
'ElixirConsoleLexer': ('pip._vendor.pygments.lexers.erlang', 'Elixir iex session', ('iex',), (), ('text/x-elixir-shellsession',)),
'ElixirLexer': ('pip._vendor.pygments.lexers.erlang', 'Elixir', ('elixir', 'ex', 'exs'), ('*.ex', '*.eex', '*.exs', '*.leex'), ('text/x-elixir',)),
'ElmLexer': ('pip._vendor.pygments.lexers.elm', 'Elm', ('elm',), ('*.elm',), ('text/x-elm',)),
'ElpiLexer': ('pip._vendor.pygments.lexers.elpi', 'Elpi', ('elpi',), ('*.elpi',), ('text/x-elpi',)),
'EmacsLispLexer': ('pip._vendor.pygments.lexers.lisp', 'EmacsLisp', ('emacs-lisp', 'elisp', 'emacs'), ('*.el',), ('text/x-elisp', 'application/x-elisp')),
'EmailLexer': ('pip._vendor.pygments.lexers.email', 'E-mail', ('email', 'eml'), ('*.eml',), ('message/rfc822',)),
'ErbLexer': ('pip._vendor.pygments.lexers.templates', 'ERB', ('erb',), (), ('application/x-ruby-templating',)),
'ErlangLexer': ('pip._vendor.pygments.lexers.erlang', 'Erlang', ('erlang',), ('*.erl', '*.hrl', '*.es', '*.escript'), ('text/x-erlang',)),
'ErlangShellLexer': ('pip._vendor.pygments.lexers.erlang', 'Erlang erl session', ('erl',), ('*.erl-sh',), ('text/x-erl-shellsession',)),
'EvoqueHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Evoque', ('html+evoque',), (), ('text/html+evoque',)),
'EvoqueLexer': ('pip._vendor.pygments.lexers.templates', 'Evoque', ('evoque',), ('*.evoque',), ('application/x-evoque',)),
'EvoqueXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Evoque', ('xml+evoque',), (), ('application/xml+evoque',)),
'ExeclineLexer': ('pip._vendor.pygments.lexers.shell', 'execline', ('execline',), ('*.exec',), ()),
'EzhilLexer': ('pip._vendor.pygments.lexers.ezhil', 'Ezhil', ('ezhil',), ('*.n',), ('text/x-ezhil',)),
'FSharpLexer': ('pip._vendor.pygments.lexers.dotnet', 'F#', ('fsharp', 'f#'), ('*.fs', '*.fsi', '*.fsx'), ('text/x-fsharp',)),
'FStarLexer': ('pip._vendor.pygments.lexers.ml', 'FStar', ('fstar',), ('*.fst', '*.fsti'), ('text/x-fstar',)),
'FactorLexer': ('pip._vendor.pygments.lexers.factor', 'Factor', ('factor',), ('*.factor',), ('text/x-factor',)),
'FancyLexer': ('pip._vendor.pygments.lexers.ruby', 'Fancy', ('fancy', 'fy'), ('*.fy', '*.fancypack'), ('text/x-fancysrc',)),
'FantomLexer': ('pip._vendor.pygments.lexers.fantom', 'Fantom', ('fan',), ('*.fan',), ('application/x-fantom',)),
'FelixLexer': ('pip._vendor.pygments.lexers.felix', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)),
'FennelLexer': ('pip._vendor.pygments.lexers.lisp', 'Fennel', ('fennel', 'fnl'), ('*.fnl',), ()),
'FiftLexer': ('pip._vendor.pygments.lexers.fift', 'Fift', ('fift', 'fif'), ('*.fif',), ()),
'FishShellLexer': ('pip._vendor.pygments.lexers.shell', 'Fish', ('fish', 'fishshell'), ('*.fish', '*.load'), ('application/x-fish',)),
'FlatlineLexer': ('pip._vendor.pygments.lexers.dsls', 'Flatline', ('flatline',), (), ('text/x-flatline',)),
'FloScriptLexer': ('pip._vendor.pygments.lexers.floscript', 'FloScript', ('floscript', 'flo'), ('*.flo',), ()),
'ForthLexer': ('pip._vendor.pygments.lexers.forth', 'Forth', ('forth',), ('*.frt', '*.fs'), ('application/x-forth',)),
'FortranFixedLexer': ('pip._vendor.pygments.lexers.fortran', 'FortranFixed', ('fortranfixed',), ('*.f', '*.F'), ()),
'FortranLexer': ('pip._vendor.pygments.lexers.fortran', 'Fortran', ('fortran', 'f90'), ('*.f03', '*.f90', '*.F03', '*.F90'), ('text/x-fortran',)),
'FoxProLexer': ('pip._vendor.pygments.lexers.foxpro', 'FoxPro', ('foxpro', 'vfp', 'clipper', 'xbase'), ('*.PRG', '*.prg'), ()),
'FreeFemLexer': ('pip._vendor.pygments.lexers.freefem', 'Freefem', ('freefem',), ('*.edp',), ('text/x-freefem',)),
'FuncLexer': ('pip._vendor.pygments.lexers.func', 'FunC', ('func', 'fc'), ('*.fc', '*.func'), ()),
'FutharkLexer': ('pip._vendor.pygments.lexers.futhark', 'Futhark', ('futhark',), ('*.fut',), ('text/x-futhark',)),
'GAPConsoleLexer': ('pip._vendor.pygments.lexers.algebra', 'GAP session', ('gap-console', 'gap-repl'), ('*.tst',), ()),
'GAPLexer': ('pip._vendor.pygments.lexers.algebra', 'GAP', ('gap',), ('*.g', '*.gd', '*.gi', '*.gap'), ()),
'GDScriptLexer': ('pip._vendor.pygments.lexers.gdscript', 'GDScript', ('gdscript', 'gd'), ('*.gd',), ('text/x-gdscript', 'application/x-gdscript')),
'GLShaderLexer': ('pip._vendor.pygments.lexers.graphics', 'GLSL', ('glsl',), ('*.vert', '*.frag', '*.geo'), ('text/x-glslsrc',)),
'GSQLLexer': ('pip._vendor.pygments.lexers.gsql', 'GSQL', ('gsql',), ('*.gsql',), ()),
'GasLexer': ('pip._vendor.pygments.lexers.asm', 'GAS', ('gas', 'asm'), ('*.s', '*.S'), ('text/x-gas',)),
'GcodeLexer': ('pip._vendor.pygments.lexers.gcodelexer', 'g-code', ('gcode',), ('*.gcode',), ()),
'GenshiLexer': ('pip._vendor.pygments.lexers.templates', 'Genshi', ('genshi', 'kid', 'xml+genshi', 'xml+kid'), ('*.kid',), ('application/x-genshi', 'application/x-kid')),
'GenshiTextLexer': ('pip._vendor.pygments.lexers.templates', 'Genshi Text', ('genshitext',), (), ('application/x-genshi-text', 'text/x-genshi')),
'GettextLexer': ('pip._vendor.pygments.lexers.textfmts', 'Gettext Catalog', ('pot', 'po'), ('*.pot', '*.po'), ('application/x-gettext', 'text/x-gettext', 'text/gettext')),
'GherkinLexer': ('pip._vendor.pygments.lexers.testing', 'Gherkin', ('gherkin', 'cucumber'), ('*.feature',), ('text/x-gherkin',)),
'GleamLexer': ('pip._vendor.pygments.lexers.gleam', 'Gleam', ('gleam',), ('*.gleam',), ('text/x-gleam',)),
'GnuplotLexer': ('pip._vendor.pygments.lexers.graphics', 'Gnuplot', ('gnuplot',), ('*.plot', '*.plt'), ('text/x-gnuplot',)),
'GoLexer': ('pip._vendor.pygments.lexers.go', 'Go', ('go', 'golang'), ('*.go',), ('text/x-gosrc',)),
'GoloLexer': ('pip._vendor.pygments.lexers.jvm', 'Golo', ('golo',), ('*.golo',), ()),
'GoodDataCLLexer': ('pip._vendor.pygments.lexers.business', 'GoodData-CL', ('gooddata-cl',), ('*.gdc',), ('text/x-gooddata-cl',)),
'GoogleSqlLexer': ('pip._vendor.pygments.lexers.sql', 'GoogleSQL', ('googlesql', 'zetasql'), ('*.googlesql', '*.googlesql.sql'), ('text/x-google-sql', 'text/x-google-sql-aux')),
'GosuLexer': ('pip._vendor.pygments.lexers.jvm', 'Gosu', ('gosu',), ('*.gs', '*.gsx', '*.gsp', '*.vark'), ('text/x-gosu',)),
'GosuTemplateLexer': ('pip._vendor.pygments.lexers.jvm', 'Gosu Template', ('gst',), ('*.gst',), ('text/x-gosu-template',)),
'GraphQLLexer': ('pip._vendor.pygments.lexers.graphql', 'GraphQL', ('graphql',), ('*.graphql',), ()),
'GraphvizLexer': ('pip._vendor.pygments.lexers.graphviz', 'Graphviz', ('graphviz', 'dot'), ('*.gv', '*.dot'), ('text/x-graphviz', 'text/vnd.graphviz')),
'GroffLexer': ('pip._vendor.pygments.lexers.markup', 'Groff', ('groff', 'nroff', 'man'), ('*.[1-9]', '*.man', '*.1p', '*.3pm'), ('application/x-troff', 'text/troff')),
'GroovyLexer': ('pip._vendor.pygments.lexers.jvm', 'Groovy', ('groovy',), ('*.groovy', '*.gradle'), ('text/x-groovy',)),
'HLSLShaderLexer': ('pip._vendor.pygments.lexers.graphics', 'HLSL', ('hlsl',), ('*.hlsl', '*.hlsli'), ('text/x-hlsl',)),
'HTMLUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'HTML+UL4', ('html+ul4',), ('*.htmlul4',), ()),
'HamlLexer': ('pip._vendor.pygments.lexers.html', 'Haml', ('haml',), ('*.haml',), ('text/x-haml',)),
'HandlebarsHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Handlebars', ('html+handlebars',), ('*.handlebars', '*.hbs'), ('text/html+handlebars', 'text/x-handlebars-template')),
'HandlebarsLexer': ('pip._vendor.pygments.lexers.templates', 'Handlebars', ('handlebars',), (), ()),
'HareLexer': ('pip._vendor.pygments.lexers.hare', 'Hare', ('hare',), ('*.ha',), ('text/x-hare',)),
'HaskellLexer': ('pip._vendor.pygments.lexers.haskell', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)),
'HaxeLexer': ('pip._vendor.pygments.lexers.haxe', 'Haxe', ('haxe', 'hxsl', 'hx'), ('*.hx', '*.hxsl'), ('text/haxe', 'text/x-haxe', 'text/x-hx')),
'HexdumpLexer': ('pip._vendor.pygments.lexers.hexdump', 'Hexdump', ('hexdump',), (), ()),
'HsailLexer': ('pip._vendor.pygments.lexers.asm', 'HSAIL', ('hsail', 'hsa'), ('*.hsail',), ('text/x-hsail',)),
'HspecLexer': ('pip._vendor.pygments.lexers.haskell', 'Hspec', ('hspec',), ('*Spec.hs',), ()),
'HtmlDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja', 'htmldjango'), ('*.html.j2', '*.htm.j2', '*.xhtml.j2', '*.html.jinja2', '*.htm.jinja2', '*.xhtml.jinja2'), ('text/html+django', 'text/html+jinja')),
'HtmlGenshiLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)),
'HtmlLexer': ('pip._vendor.pygments.lexers.html', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')),
'HtmlPhpLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+PHP', ('html+php',), ('*.phtml',), ('application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5')),
'HtmlSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Smarty', ('html+smarty',), (), ('text/html+smarty',)),
'HttpLexer': ('pip._vendor.pygments.lexers.textfmts', 'HTTP', ('http',), (), ()),
'HxmlLexer': ('pip._vendor.pygments.lexers.haxe', 'Hxml', ('haxeml', 'hxml'), ('*.hxml',), ()),
'HyLexer': ('pip._vendor.pygments.lexers.lisp', 'Hy', ('hylang', 'hy'), ('*.hy',), ('text/x-hy', 'application/x-hy')),
'HybrisLexer': ('pip._vendor.pygments.lexers.scripting', 'Hybris', ('hybris',), ('*.hyb',), ('text/x-hybris', 'application/x-hybris')),
'IDLLexer': ('pip._vendor.pygments.lexers.idl', 'IDL', ('idl',), ('*.pro',), ('text/idl',)),
'IconLexer': ('pip._vendor.pygments.lexers.unicon', 'Icon', ('icon',), ('*.icon', '*.ICON'), ()),
'IdrisLexer': ('pip._vendor.pygments.lexers.haskell', 'Idris', ('idris', 'idr'), ('*.idr',), ('text/x-idris',)),
'IgorLexer': ('pip._vendor.pygments.lexers.igor', 'Igor', ('igor', 'igorpro'), ('*.ipf',), ('text/ipf',)),
'Inform6Lexer': ('pip._vendor.pygments.lexers.int_fiction', 'Inform 6', ('inform6', 'i6'), ('*.inf',), ()),
'Inform6TemplateLexer': ('pip._vendor.pygments.lexers.int_fiction', 'Inform 6 template', ('i6t',), ('*.i6t',), ()),
'Inform7Lexer': ('pip._vendor.pygments.lexers.int_fiction', 'Inform 7', ('inform7', 'i7'), ('*.ni', '*.i7x'), ()),
'IniLexer': ('pip._vendor.pygments.lexers.configs', 'INI', ('ini', 'cfg', 'dosini'), ('*.ini', '*.cfg', '*.inf', '.editorconfig'), ('text/x-ini', 'text/inf')),
'IoLexer': ('pip._vendor.pygments.lexers.iolang', 'Io', ('io',), ('*.io',), ('text/x-iosrc',)),
'IokeLexer': ('pip._vendor.pygments.lexers.jvm', 'Ioke', ('ioke', 'ik'), ('*.ik',), ('text/x-iokesrc',)),
'IrcLogsLexer': ('pip._vendor.pygments.lexers.textfmts', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)),
'IsabelleLexer': ('pip._vendor.pygments.lexers.theorem', 'Isabelle', ('isabelle',), ('*.thy',), ('text/x-isabelle',)),
'JLexer': ('pip._vendor.pygments.lexers.j', 'J', ('j',), ('*.ijs',), ('text/x-j',)),
'JMESPathLexer': ('pip._vendor.pygments.lexers.jmespath', 'JMESPath', ('jmespath', 'jp'), ('*.jp',), ()),
'JSLTLexer': ('pip._vendor.pygments.lexers.jslt', 'JSLT', ('jslt',), ('*.jslt',), ('text/x-jslt',)),
'JagsLexer': ('pip._vendor.pygments.lexers.modeling', 'JAGS', ('jags',), ('*.jag', '*.bug'), ()),
'JanetLexer': ('pip._vendor.pygments.lexers.lisp', 'Janet', ('janet',), ('*.janet', '*.jdn'), ('text/x-janet', 'application/x-janet')),
'JasminLexer': ('pip._vendor.pygments.lexers.jvm', 'Jasmin', ('jasmin', 'jasminxt'), ('*.j',), ()),
'JavaLexer': ('pip._vendor.pygments.lexers.jvm', 'Java', ('java',), ('*.java',), ('text/x-java',)),
'JavascriptDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Django/Jinja', ('javascript+django', 'js+django', 'javascript+jinja', 'js+jinja'), ('*.js.j2', '*.js.jinja2'), ('application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja')),
'JavascriptErbLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Ruby', ('javascript+ruby', 'js+ruby', 'javascript+erb', 'js+erb'), (), ('application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby')),
'JavascriptGenshiLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Genshi Text', ('js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'), (), ('application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi')),
'JavascriptLexer': ('pip._vendor.pygments.lexers.javascript', 'JavaScript', ('javascript', 'js'), ('*.js', '*.jsm', '*.mjs', '*.cjs'), ('application/javascript', 'application/x-javascript', 'text/x-javascript', 'text/javascript')),
'JavascriptPhpLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+PHP', ('javascript+php', 'js+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')),
'JavascriptSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Smarty', ('javascript+smarty', 'js+smarty'), (), ('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')),
'JavascriptUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'Javascript+UL4', ('js+ul4',), ('*.jsul4',), ()),
'JclLexer': ('pip._vendor.pygments.lexers.scripting', 'JCL', ('jcl',), ('*.jcl',), ('text/x-jcl',)),
'JsgfLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'JSGF', ('jsgf',), ('*.jsgf',), ('application/jsgf', 'application/x-jsgf', 'text/jsgf')),
'Json5Lexer': ('pip._vendor.pygments.lexers.json5', 'JSON5', ('json5',), ('*.json5',), ()),
'JsonBareObjectLexer': ('pip._vendor.pygments.lexers.data', 'JSONBareObject', (), (), ()),
'JsonLdLexer': ('pip._vendor.pygments.lexers.data', 'JSON-LD', ('jsonld', 'json-ld'), ('*.jsonld',), ('application/ld+json',)),
'JsonLexer': ('pip._vendor.pygments.lexers.data', 'JSON', ('json', 'json-object'), ('*.json', '*.jsonl', '*.ndjson', 'Pipfile.lock'), ('application/json', 'application/json-object', 'application/x-ndjson', 'application/jsonl', 'application/json-seq')),
'JsonnetLexer': ('pip._vendor.pygments.lexers.jsonnet', 'Jsonnet', ('jsonnet',), ('*.jsonnet', '*.libsonnet'), ()),
'JspLexer': ('pip._vendor.pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)),
'JsxLexer': ('pip._vendor.pygments.lexers.jsx', 'JSX', ('jsx', 'react'), ('*.jsx', '*.react'), ('text/jsx', 'text/typescript-jsx')),
'JuliaConsoleLexer': ('pip._vendor.pygments.lexers.julia', 'Julia console', ('jlcon', 'julia-repl'), (), ()),
'JuliaLexer': ('pip._vendor.pygments.lexers.julia', 'Julia', ('julia', 'jl'), ('*.jl',), ('text/x-julia', 'application/x-julia')),
'JuttleLexer': ('pip._vendor.pygments.lexers.javascript', 'Juttle', ('juttle',), ('*.juttle',), ('application/juttle', 'application/x-juttle', 'text/x-juttle', 'text/juttle')),
'KLexer': ('pip._vendor.pygments.lexers.q', 'K', ('k',), ('*.k',), ()),
'KalLexer': ('pip._vendor.pygments.lexers.javascript', 'Kal', ('kal',), ('*.kal',), ('text/kal', 'application/kal')),
'KconfigLexer': ('pip._vendor.pygments.lexers.configs', 'Kconfig', ('kconfig', 'menuconfig', 'linux-config', 'kernel-config'), ('Kconfig*', '*Config.in*', 'external.in*', 'standard-modules.in'), ('text/x-kconfig',)),
'KernelLogLexer': ('pip._vendor.pygments.lexers.textfmts', 'Kernel log', ('kmsg', 'dmesg'), ('*.kmsg', '*.dmesg'), ()),
'KokaLexer': ('pip._vendor.pygments.lexers.haskell', 'Koka', ('koka',), ('*.kk', '*.kki'), ('text/x-koka',)),
'KotlinLexer': ('pip._vendor.pygments.lexers.jvm', 'Kotlin', ('kotlin',), ('*.kt', '*.kts'), ('text/x-kotlin',)),
'KuinLexer': ('pip._vendor.pygments.lexers.kuin', 'Kuin', ('kuin',), ('*.kn',), ()),
'KustoLexer': ('pip._vendor.pygments.lexers.kusto', 'Kusto', ('kql', 'kusto'), ('*.kql', '*.kusto', '.csl'), ()),
'LSLLexer': ('pip._vendor.pygments.lexers.scripting', 'LSL', ('lsl',), ('*.lsl',), ('text/x-lsl',)),
'LassoCssLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Lasso', ('css+lasso',), (), ('text/css+lasso',)),
'LassoHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Lasso', ('html+lasso',), (), ('text/html+lasso', 'application/x-httpd-lasso', 'application/x-httpd-lasso[89]')),
'LassoJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Lasso', ('javascript+lasso', 'js+lasso'), (), ('application/x-javascript+lasso', 'text/x-javascript+lasso', 'text/javascript+lasso')),
'LassoLexer': ('pip._vendor.pygments.lexers.javascript', 'Lasso', ('lasso', 'lassoscript'), ('*.lasso', '*.lasso[89]'), ('text/x-lasso',)),
'LassoXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Lasso', ('xml+lasso',), (), ('application/xml+lasso',)),
'LdaprcLexer': ('pip._vendor.pygments.lexers.ldap', 'LDAP configuration file', ('ldapconf', 'ldaprc'), ('.ldaprc', 'ldaprc', 'ldap.conf'), ('text/x-ldapconf',)),
'LdifLexer': ('pip._vendor.pygments.lexers.ldap', 'LDIF', ('ldif',), ('*.ldif',), ('text/x-ldif',)),
'Lean3Lexer': ('pip._vendor.pygments.lexers.lean', 'Lean', ('lean', 'lean3'), ('*.lean',), ('text/x-lean', 'text/x-lean3')),
'Lean4Lexer': ('pip._vendor.pygments.lexers.lean', 'Lean4', ('lean4',), ('*.lean',), ('text/x-lean4',)),
'LessCssLexer': ('pip._vendor.pygments.lexers.css', 'LessCss', ('less',), ('*.less',), ('text/x-less-css',)),
'LighttpdConfLexer': ('pip._vendor.pygments.lexers.configs', 'Lighttpd configuration file', ('lighttpd', 'lighty'), ('lighttpd.conf',), ('text/x-lighttpd-conf',)),
'LilyPondLexer': ('pip._vendor.pygments.lexers.lilypond', 'LilyPond', ('lilypond',), ('*.ly',), ()),
'LimboLexer': ('pip._vendor.pygments.lexers.inferno', 'Limbo', ('limbo',), ('*.b',), ('text/limbo',)),
'LiquidLexer': ('pip._vendor.pygments.lexers.templates', 'liquid', ('liquid',), ('*.liquid',), ()),
'LiterateAgdaLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Agda', ('literate-agda', 'lagda'), ('*.lagda',), ('text/x-literate-agda',)),
'LiterateCryptolLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Cryptol', ('literate-cryptol', 'lcryptol', 'lcry'), ('*.lcry',), ('text/x-literate-cryptol',)),
'LiterateHaskellLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Haskell', ('literate-haskell', 'lhaskell', 'lhs'), ('*.lhs',), ('text/x-literate-haskell',)),
'LiterateIdrisLexer': ('pip._vendor.pygments.lexers.haskell', 'Literate Idris', ('literate-idris', 'lidris', 'lidr'), ('*.lidr',), ('text/x-literate-idris',)),
'LiveScriptLexer': ('pip._vendor.pygments.lexers.javascript', 'LiveScript', ('livescript', 'live-script'), ('*.ls',), ('text/livescript',)),
'LlvmLexer': ('pip._vendor.pygments.lexers.asm', 'LLVM', ('llvm',), ('*.ll',), ('text/x-llvm',)),
'LlvmMirBodyLexer': ('pip._vendor.pygments.lexers.asm', 'LLVM-MIR Body', ('llvm-mir-body',), (), ()),
'LlvmMirLexer': ('pip._vendor.pygments.lexers.asm', 'LLVM-MIR', ('llvm-mir',), ('*.mir',), ()),
'LogosLexer': ('pip._vendor.pygments.lexers.objective', 'Logos', ('logos',), ('*.x', '*.xi', '*.xm', '*.xmi'), ('text/x-logos',)),
'LogtalkLexer': ('pip._vendor.pygments.lexers.prolog', 'Logtalk', ('logtalk',), ('*.lgt', '*.logtalk'), ('text/x-logtalk',)),
'LuaLexer': ('pip._vendor.pygments.lexers.scripting', 'Lua', ('lua',), ('*.lua', '*.wlua'), ('text/x-lua', 'application/x-lua')),
'LuauLexer': ('pip._vendor.pygments.lexers.scripting', 'Luau', ('luau',), ('*.luau',), ()),
'MCFunctionLexer': ('pip._vendor.pygments.lexers.minecraft', 'MCFunction', ('mcfunction', 'mcf'), ('*.mcfunction',), ('text/mcfunction',)),
'MCSchemaLexer': ('pip._vendor.pygments.lexers.minecraft', 'MCSchema', ('mcschema',), ('*.mcschema',), ('text/mcschema',)),
'MIMELexer': ('pip._vendor.pygments.lexers.mime', 'MIME', ('mime',), (), ('multipart/mixed', 'multipart/related', 'multipart/alternative')),
'MIPSLexer': ('pip._vendor.pygments.lexers.mips', 'MIPS', ('mips',), ('*.mips', '*.MIPS'), ()),
'MOOCodeLexer': ('pip._vendor.pygments.lexers.scripting', 'MOOCode', ('moocode', 'moo'), ('*.moo',), ('text/x-moocode',)),
'MSDOSSessionLexer': ('pip._vendor.pygments.lexers.shell', 'MSDOS Session', ('doscon',), (), ()),
'Macaulay2Lexer': ('pip._vendor.pygments.lexers.macaulay2', 'Macaulay2', ('macaulay2',), ('*.m2',), ()),
'MakefileLexer': ('pip._vendor.pygments.lexers.make', 'Makefile', ('make', 'makefile', 'mf', 'bsdmake'), ('*.mak', '*.mk', 'Makefile', 'makefile', 'Makefile.*', 'GNUmakefile'), ('text/x-makefile',)),
'MakoCssLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Mako', ('css+mako',), (), ('text/css+mako',)),
'MakoHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Mako', ('html+mako',), (), ('text/html+mako',)),
'MakoJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Mako', ('javascript+mako', 'js+mako'), (), ('application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako')),
'MakoLexer': ('pip._vendor.pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)),
'MakoXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)),
'MapleLexer': ('pip._vendor.pygments.lexers.maple', 'Maple', ('maple',), ('*.mpl', '*.mi', '*.mm'), ('text/x-maple',)),
'MaqlLexer': ('pip._vendor.pygments.lexers.business', 'MAQL', ('maql',), ('*.maql',), ('text/x-gooddata-maql', 'application/x-gooddata-maql')),
'MarkdownLexer': ('pip._vendor.pygments.lexers.markup', 'Markdown', ('markdown', 'md'), ('*.md', '*.markdown'), ('text/x-markdown',)),
'MaskLexer': ('pip._vendor.pygments.lexers.javascript', 'Mask', ('mask',), ('*.mask',), ('text/x-mask',)),
'MasonLexer': ('pip._vendor.pygments.lexers.templates', 'Mason', ('mason',), ('*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'), ('application/x-mason',)),
'MathematicaLexer': ('pip._vendor.pygments.lexers.algebra', 'Mathematica', ('mathematica', 'mma', 'nb'), ('*.nb', '*.cdf', '*.nbp', '*.ma'), ('application/mathematica', 'application/vnd.wolfram.mathematica', 'application/vnd.wolfram.mathematica.package', 'application/vnd.wolfram.cdf')),
'MatlabLexer': ('pip._vendor.pygments.lexers.matlab', 'Matlab', ('matlab',), ('*.m',), ('text/matlab',)),
'MatlabSessionLexer': ('pip._vendor.pygments.lexers.matlab', 'Matlab session', ('matlabsession',), (), ()),
'MaximaLexer': ('pip._vendor.pygments.lexers.maxima', 'Maxima', ('maxima', 'macsyma'), ('*.mac', '*.max'), ()),
'MesonLexer': ('pip._vendor.pygments.lexers.meson', 'Meson', ('meson', 'meson.build'), ('meson.build', 'meson_options.txt'), ('text/x-meson',)),
'MiniDLexer': ('pip._vendor.pygments.lexers.d', 'MiniD', ('minid',), (), ('text/x-minidsrc',)),
'MiniScriptLexer': ('pip._vendor.pygments.lexers.scripting', 'MiniScript', ('miniscript', 'ms'), ('*.ms',), ('text/x-minicript', 'application/x-miniscript')),
'ModelicaLexer': ('pip._vendor.pygments.lexers.modeling', 'Modelica', ('modelica',), ('*.mo',), ('text/x-modelica',)),
'Modula2Lexer': ('pip._vendor.pygments.lexers.modula2', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)),
'MoinWikiLexer': ('pip._vendor.pygments.lexers.markup', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)),
'MojoLexer': ('pip._vendor.pygments.lexers.mojo', 'Mojo', ('mojo', '🔥'), ('*.mojo', '*.🔥'), ('text/x-mojo', 'application/x-mojo')),
'MonkeyLexer': ('pip._vendor.pygments.lexers.basic', 'Monkey', ('monkey',), ('*.monkey',), ('text/x-monkey',)),
'MonteLexer': ('pip._vendor.pygments.lexers.monte', 'Monte', ('monte',), ('*.mt',), ()),
'MoonScriptLexer': ('pip._vendor.pygments.lexers.scripting', 'MoonScript', ('moonscript', 'moon'), ('*.moon',), ('text/x-moonscript', 'application/x-moonscript')),
'MoselLexer': ('pip._vendor.pygments.lexers.mosel', 'Mosel', ('mosel',), ('*.mos',), ()),
'MozPreprocCssLexer': ('pip._vendor.pygments.lexers.markup', 'CSS+mozpreproc', ('css+mozpreproc',), ('*.css.in',), ()),
'MozPreprocHashLexer': ('pip._vendor.pygments.lexers.markup', 'mozhashpreproc', ('mozhashpreproc',), (), ()),
'MozPreprocJavascriptLexer': ('pip._vendor.pygments.lexers.markup', 'Javascript+mozpreproc', ('javascript+mozpreproc',), ('*.js.in',), ()),
'MozPreprocPercentLexer': ('pip._vendor.pygments.lexers.markup', 'mozpercentpreproc', ('mozpercentpreproc',), (), ()),
'MozPreprocXulLexer': ('pip._vendor.pygments.lexers.markup', 'XUL+mozpreproc', ('xul+mozpreproc',), ('*.xul.in',), ()),
'MqlLexer': ('pip._vendor.pygments.lexers.c_like', 'MQL', ('mql', 'mq4', 'mq5', 'mql4', 'mql5'), ('*.mq4', '*.mq5', '*.mqh'), ('text/x-mql',)),
'MscgenLexer': ('pip._vendor.pygments.lexers.dsls', 'Mscgen', ('mscgen', 'msc'), ('*.msc',), ()),
'MuPADLexer': ('pip._vendor.pygments.lexers.algebra', 'MuPAD', ('mupad',), ('*.mu',), ()),
'MxmlLexer': ('pip._vendor.pygments.lexers.actionscript', 'MXML', ('mxml',), ('*.mxml',), ()),
'MySqlLexer': ('pip._vendor.pygments.lexers.sql', 'MySQL', ('mysql',), (), ('text/x-mysql',)),
'MyghtyCssLexer': ('pip._vendor.pygments.lexers.templates', 'CSS+Myghty', ('css+myghty',), (), ('text/css+myghty',)),
'MyghtyHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Myghty', ('html+myghty',), (), ('text/html+myghty',)),
'MyghtyJavascriptLexer': ('pip._vendor.pygments.lexers.templates', 'JavaScript+Myghty', ('javascript+myghty', 'js+myghty'), (), ('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+mygthy')),
'MyghtyLexer': ('pip._vendor.pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)),
'MyghtyXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)),
'NCLLexer': ('pip._vendor.pygments.lexers.ncl', 'NCL', ('ncl',), ('*.ncl',), ('text/ncl',)),
'NSISLexer': ('pip._vendor.pygments.lexers.installers', 'NSIS', ('nsis', 'nsi', 'nsh'), ('*.nsi', '*.nsh'), ('text/x-nsis',)),
'NasmLexer': ('pip._vendor.pygments.lexers.asm', 'NASM', ('nasm',), ('*.asm', '*.ASM', '*.nasm'), ('text/x-nasm',)),
'NasmObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'objdump-nasm', ('objdump-nasm',), ('*.objdump-intel',), ('text/x-nasm-objdump',)),
'NemerleLexer': ('pip._vendor.pygments.lexers.dotnet', 'Nemerle', ('nemerle',), ('*.n',), ('text/x-nemerle',)),
'NesCLexer': ('pip._vendor.pygments.lexers.c_like', 'nesC', ('nesc',), ('*.nc',), ('text/x-nescsrc',)),
'NestedTextLexer': ('pip._vendor.pygments.lexers.configs', 'NestedText', ('nestedtext', 'nt'), ('*.nt',), ()),
'NewLispLexer': ('pip._vendor.pygments.lexers.lisp', 'NewLisp', ('newlisp',), ('*.lsp', '*.nl', '*.kif'), ('text/x-newlisp', 'application/x-newlisp')),
'NewspeakLexer': ('pip._vendor.pygments.lexers.smalltalk', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)),
'NginxConfLexer': ('pip._vendor.pygments.lexers.configs', 'Nginx configuration file', ('nginx',), ('nginx.conf',), ('text/x-nginx-conf',)),
'NimrodLexer': ('pip._vendor.pygments.lexers.nimrod', 'Nimrod', ('nimrod', 'nim'), ('*.nim', '*.nimrod'), ('text/x-nim',)),
'NitLexer': ('pip._vendor.pygments.lexers.nit', 'Nit', ('nit',), ('*.nit',), ()),
'NixLexer': ('pip._vendor.pygments.lexers.nix', 'Nix', ('nixos', 'nix'), ('*.nix',), ('text/x-nix',)),
'NodeConsoleLexer': ('pip._vendor.pygments.lexers.javascript', 'Node.js REPL console session', ('nodejsrepl',), (), ('text/x-nodejsrepl',)),
'NotmuchLexer': ('pip._vendor.pygments.lexers.textfmts', 'Notmuch', ('notmuch',), (), ()),
'NuSMVLexer': ('pip._vendor.pygments.lexers.smv', 'NuSMV', ('nusmv',), ('*.smv',), ()),
'NumPyLexer': ('pip._vendor.pygments.lexers.python', 'NumPy', ('numpy',), (), ()),
'NumbaIRLexer': ('pip._vendor.pygments.lexers.numbair', 'Numba_IR', ('numba_ir', 'numbair'), ('*.numba_ir',), ('text/x-numba_ir', 'text/x-numbair')),
'ObjdumpLexer': ('pip._vendor.pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)),
'ObjectiveCLexer': ('pip._vendor.pygments.lexers.objective', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m', '*.h'), ('text/x-objective-c',)),
'ObjectiveCppLexer': ('pip._vendor.pygments.lexers.objective', 'Objective-C++', ('objective-c++', 'objectivec++', 'obj-c++', 'objc++'), ('*.mm', '*.hh'), ('text/x-objective-c++',)),
'ObjectiveJLexer': ('pip._vendor.pygments.lexers.javascript', 'Objective-J', ('objective-j', 'objectivej', 'obj-j', 'objj'), ('*.j',), ('text/x-objective-j',)),
'OcamlLexer': ('pip._vendor.pygments.lexers.ml', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)),
'OctaveLexer': ('pip._vendor.pygments.lexers.matlab', 'Octave', ('octave',), ('*.m',), ('text/octave',)),
'OdinLexer': ('pip._vendor.pygments.lexers.archetype', 'ODIN', ('odin',), ('*.odin',), ('text/odin',)),
'OmgIdlLexer': ('pip._vendor.pygments.lexers.c_like', 'OMG Interface Definition Language', ('omg-idl',), ('*.idl', '*.pidl'), ()),
'OocLexer': ('pip._vendor.pygments.lexers.ooc', 'Ooc', ('ooc',), ('*.ooc',), ('text/x-ooc',)),
'OpaLexer': ('pip._vendor.pygments.lexers.ml', 'Opa', ('opa',), ('*.opa',), ('text/x-opa',)),
'OpenEdgeLexer': ('pip._vendor.pygments.lexers.business', 'OpenEdge ABL', ('openedge', 'abl', 'progress'), ('*.p', '*.cls'), ('text/x-openedge', 'application/x-openedge')),
'OpenScadLexer': ('pip._vendor.pygments.lexers.openscad', 'OpenSCAD', ('openscad',), ('*.scad',), ('application/x-openscad',)),
'OrgLexer': ('pip._vendor.pygments.lexers.markup', 'Org Mode', ('org', 'orgmode', 'org-mode'), ('*.org',), ('text/org',)),
'OutputLexer': ('pip._vendor.pygments.lexers.special', 'Text output', ('output',), (), ()),
'PacmanConfLexer': ('pip._vendor.pygments.lexers.configs', 'PacmanConf', ('pacmanconf',), ('pacman.conf',), ()),
'PanLexer': ('pip._vendor.pygments.lexers.dsls', 'Pan', ('pan',), ('*.pan',), ()),
'ParaSailLexer': ('pip._vendor.pygments.lexers.parasail', 'ParaSail', ('parasail',), ('*.psi', '*.psl'), ('text/x-parasail',)),
'PawnLexer': ('pip._vendor.pygments.lexers.pawn', 'Pawn', ('pawn',), ('*.p', '*.pwn', '*.inc'), ('text/x-pawn',)),
'PddlLexer': ('pip._vendor.pygments.lexers.pddl', 'PDDL', ('pddl',), ('*.pddl',), ()),
'PegLexer': ('pip._vendor.pygments.lexers.grammar_notation', 'PEG', ('peg',), ('*.peg',), ('text/x-peg',)),
'Perl6Lexer': ('pip._vendor.pygments.lexers.perl', 'Perl6', ('perl6', 'pl6', 'raku'), ('*.pl', '*.pm', '*.nqp', '*.p6', '*.6pl', '*.p6l', '*.pl6', '*.6pm', '*.p6m', '*.pm6', '*.t', '*.raku', '*.rakumod', '*.rakutest', '*.rakudoc'), ('text/x-perl6', 'application/x-perl6')),
'PerlLexer': ('pip._vendor.pygments.lexers.perl', 'Perl', ('perl', 'pl'), ('*.pl', '*.pm', '*.t', '*.perl'), ('text/x-perl', 'application/x-perl')),
'PhixLexer': ('pip._vendor.pygments.lexers.phix', 'Phix', ('phix',), ('*.exw',), ('text/x-phix',)),
'PhpLexer': ('pip._vendor.pygments.lexers.php', 'PHP', ('php', 'php3', 'php4', 'php5'), ('*.php', '*.php[345]', '*.inc'), ('text/x-php',)),
'PigLexer': ('pip._vendor.pygments.lexers.jvm', 'Pig', ('pig',), ('*.pig',), ('text/x-pig',)),
'PikeLexer': ('pip._vendor.pygments.lexers.c_like', 'Pike', ('pike',), ('*.pike', '*.pmod'), ('text/x-pike',)),
'PkgConfigLexer': ('pip._vendor.pygments.lexers.configs', 'PkgConfig', ('pkgconfig',), ('*.pc',), ()),
'PlPgsqlLexer': ('pip._vendor.pygments.lexers.sql', 'PL/pgSQL', ('plpgsql',), (), ('text/x-plpgsql',)),
'PointlessLexer': ('pip._vendor.pygments.lexers.pointless', 'Pointless', ('pointless',), ('*.ptls',), ()),
'PonyLexer': ('pip._vendor.pygments.lexers.pony', 'Pony', ('pony',), ('*.pony',), ()),
'PortugolLexer': ('pip._vendor.pygments.lexers.pascal', 'Portugol', ('portugol',), ('*.alg', '*.portugol'), ()),
'PostScriptLexer': ('pip._vendor.pygments.lexers.graphics', 'PostScript', ('postscript', 'postscr'), ('*.ps', '*.eps'), ('application/postscript',)),
'PostgresConsoleLexer': ('pip._vendor.pygments.lexers.sql', 'PostgreSQL console (psql)', ('psql', 'postgresql-console', 'postgres-console'), (), ('text/x-postgresql-psql',)),
'PostgresExplainLexer': ('pip._vendor.pygments.lexers.sql', 'PostgreSQL EXPLAIN dialect', ('postgres-explain',), ('*.explain',), ('text/x-postgresql-explain',)),
'PostgresLexer': ('pip._vendor.pygments.lexers.sql', 'PostgreSQL SQL dialect', ('postgresql', 'postgres'), (), ('text/x-postgresql',)),
'PovrayLexer': ('pip._vendor.pygments.lexers.graphics', 'POVRay', ('pov',), ('*.pov', '*.inc'), ('text/x-povray',)),
'PowerShellLexer': ('pip._vendor.pygments.lexers.shell', 'PowerShell', ('powershell', 'pwsh', 'posh', 'ps1', 'psm1'), ('*.ps1', '*.psm1'), ('text/x-powershell',)),
'PowerShellSessionLexer': ('pip._vendor.pygments.lexers.shell', 'PowerShell Session', ('pwsh-session', 'ps1con'), (), ()),
'PraatLexer': ('pip._vendor.pygments.lexers.praat', 'Praat', ('praat',), ('*.praat', '*.proc', '*.psc'), ()),
'ProcfileLexer': ('pip._vendor.pygments.lexers.procfile', 'Procfile', ('procfile',), ('Procfile',), ()),
'PrologLexer': ('pip._vendor.pygments.lexers.prolog', 'Prolog', ('prolog',), ('*.ecl', '*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)),
'PromQLLexer': ('pip._vendor.pygments.lexers.promql', 'PromQL', ('promql',), ('*.promql',), ()),
'PromelaLexer': ('pip._vendor.pygments.lexers.c_like', 'Promela', ('promela',), ('*.pml', '*.prom', '*.prm', '*.promela', '*.pr', '*.pm'), ('text/x-promela',)),
'PropertiesLexer': ('pip._vendor.pygments.lexers.configs', 'Properties', ('properties', 'jproperties'), ('*.properties',), ('text/x-java-properties',)),
'ProtoBufLexer': ('pip._vendor.pygments.lexers.dsls', 'Protocol Buffer', ('protobuf', 'proto'), ('*.proto',), ()),
'PrqlLexer': ('pip._vendor.pygments.lexers.prql', 'PRQL', ('prql',), ('*.prql',), ('application/prql', 'application/x-prql')),
'PsyshConsoleLexer': ('pip._vendor.pygments.lexers.php', 'PsySH console session for PHP', ('psysh',), (), ()),
'PtxLexer': ('pip._vendor.pygments.lexers.ptx', 'PTX', ('ptx',), ('*.ptx',), ('text/x-ptx',)),
'PugLexer': ('pip._vendor.pygments.lexers.html', 'Pug', ('pug', 'jade'), ('*.pug', '*.jade'), ('text/x-pug', 'text/x-jade')),
'PuppetLexer': ('pip._vendor.pygments.lexers.dsls', 'Puppet', ('puppet',), ('*.pp',), ()),
'PyPyLogLexer': ('pip._vendor.pygments.lexers.console', 'PyPy Log', ('pypylog', 'pypy'), ('*.pypylog',), ('application/x-pypylog',)),
'Python2Lexer': ('pip._vendor.pygments.lexers.python', 'Python 2.x', ('python2', 'py2'), (), ('text/x-python2', 'application/x-python2')),
'Python2TracebackLexer': ('pip._vendor.pygments.lexers.python', 'Python 2.x Traceback', ('py2tb',), ('*.py2tb',), ('text/x-python2-traceback',)),
'PythonConsoleLexer': ('pip._vendor.pygments.lexers.python', 'Python console session', ('pycon', 'python-console'), (), ('text/x-python-doctest',)),
'PythonLexer': ('pip._vendor.pygments.lexers.python', 'Python', ('python', 'py', 'sage', 'python3', 'py3', 'bazel', 'starlark', 'pyi'), ('*.py', '*.pyw', '*.pyi', '*.jy', '*.sage', '*.sc', 'SConstruct', 'SConscript', '*.bzl', 'BUCK', 'BUILD', 'BUILD.bazel', 'WORKSPACE', '*.tac'), ('text/x-python', 'application/x-python', 'text/x-python3', 'application/x-python3')),
'PythonTracebackLexer': ('pip._vendor.pygments.lexers.python', 'Python Traceback', ('pytb', 'py3tb'), ('*.pytb', '*.py3tb'), ('text/x-python-traceback', 'text/x-python3-traceback')),
'PythonUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'Python+UL4', ('py+ul4',), ('*.pyul4',), ()),
'QBasicLexer': ('pip._vendor.pygments.lexers.basic', 'QBasic', ('qbasic', 'basic'), ('*.BAS', '*.bas'), ('text/basic',)),
'QLexer': ('pip._vendor.pygments.lexers.q', 'Q', ('q',), ('*.q',), ()),
'QVToLexer': ('pip._vendor.pygments.lexers.qvt', 'QVTO', ('qvto', 'qvt'), ('*.qvto',), ()),
'QlikLexer': ('pip._vendor.pygments.lexers.qlik', 'Qlik', ('qlik', 'qlikview', 'qliksense', 'qlikscript'), ('*.qvs', '*.qvw'), ()),
'QmlLexer': ('pip._vendor.pygments.lexers.webmisc', 'QML', ('qml', 'qbs'), ('*.qml', '*.qbs'), ('application/x-qml', 'application/x-qt.qbs+qml')),
'RConsoleLexer': ('pip._vendor.pygments.lexers.r', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()),
'RNCCompactLexer': ('pip._vendor.pygments.lexers.rnc', 'Relax-NG Compact', ('rng-compact', 'rnc'), ('*.rnc',), ()),
'RPMSpecLexer': ('pip._vendor.pygments.lexers.installers', 'RPMSpec', ('spec',), ('*.spec',), ('text/x-rpm-spec',)),
'RacketLexer': ('pip._vendor.pygments.lexers.lisp', 'Racket', ('racket', 'rkt'), ('*.rkt', '*.rktd', '*.rktl'), ('text/x-racket', 'application/x-racket')),
'RagelCLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()),
'RagelCppLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in CPP Host', ('ragel-cpp',), ('*.rl',), ()),
'RagelDLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in D Host', ('ragel-d',), ('*.rl',), ()),
'RagelEmbeddedLexer': ('pip._vendor.pygments.lexers.parsers', 'Embedded Ragel', ('ragel-em',), ('*.rl',), ()),
'RagelJavaLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in Java Host', ('ragel-java',), ('*.rl',), ()),
'RagelLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel', ('ragel',), (), ()),
'RagelObjectiveCLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in Objective C Host', ('ragel-objc',), ('*.rl',), ()),
'RagelRubyLexer': ('pip._vendor.pygments.lexers.parsers', 'Ragel in Ruby Host', ('ragel-ruby', 'ragel-rb'), ('*.rl',), ()),
'RawTokenLexer': ('pip._vendor.pygments.lexers.special', 'Raw token data', (), (), ('application/x-pygments-tokens',)),
'RdLexer': ('pip._vendor.pygments.lexers.r', 'Rd', ('rd',), ('*.Rd',), ('text/x-r-doc',)),
'ReasonLexer': ('pip._vendor.pygments.lexers.ml', 'ReasonML', ('reasonml', 'reason'), ('*.re', '*.rei'), ('text/x-reasonml',)),
'RebolLexer': ('pip._vendor.pygments.lexers.rebol', 'REBOL', ('rebol',), ('*.r', '*.r3', '*.reb'), ('text/x-rebol',)),
'RedLexer': ('pip._vendor.pygments.lexers.rebol', 'Red', ('red', 'red/system'), ('*.red', '*.reds'), ('text/x-red', 'text/x-red-system')),
'RedcodeLexer': ('pip._vendor.pygments.lexers.esoteric', 'Redcode', ('redcode',), ('*.cw',), ()),
'RegeditLexer': ('pip._vendor.pygments.lexers.configs', 'reg', ('registry',), ('*.reg',), ('text/x-windows-registry',)),
'RegoLexer': ('pip._vendor.pygments.lexers.rego', 'Rego', ('rego',), ('*.rego',), ('text/x-rego',)),
'ResourceLexer': ('pip._vendor.pygments.lexers.resource', 'ResourceBundle', ('resourcebundle', 'resource'), (), ()),
'RexxLexer': ('pip._vendor.pygments.lexers.scripting', 'Rexx', ('rexx', 'arexx'), ('*.rexx', '*.rex', '*.rx', '*.arexx'), ('text/x-rexx',)),
'RhtmlLexer': ('pip._vendor.pygments.lexers.templates', 'RHTML', ('rhtml', 'html+erb', 'html+ruby'), ('*.rhtml',), ('text/html+ruby',)),
'RideLexer': ('pip._vendor.pygments.lexers.ride', 'Ride', ('ride',), ('*.ride',), ('text/x-ride',)),
'RitaLexer': ('pip._vendor.pygments.lexers.rita', 'Rita', ('rita',), ('*.rita',), ('text/rita',)),
'RoboconfGraphLexer': ('pip._vendor.pygments.lexers.roboconf', 'Roboconf Graph', ('roboconf-graph',), ('*.graph',), ()),
'RoboconfInstancesLexer': ('pip._vendor.pygments.lexers.roboconf', 'Roboconf Instances', ('roboconf-instances',), ('*.instances',), ()),
'RobotFrameworkLexer': ('pip._vendor.pygments.lexers.robotframework', 'RobotFramework', ('robotframework',), ('*.robot', '*.resource'), ('text/x-robotframework',)),
'RqlLexer': ('pip._vendor.pygments.lexers.sql', 'RQL', ('rql',), ('*.rql',), ('text/x-rql',)),
'RslLexer': ('pip._vendor.pygments.lexers.dsls', 'RSL', ('rsl',), ('*.rsl',), ('text/rsl',)),
'RstLexer': ('pip._vendor.pygments.lexers.markup', 'reStructuredText', ('restructuredtext', 'rst', 'rest'), ('*.rst', '*.rest'), ('text/x-rst', 'text/prs.fallenstein.rst')),
'RtsLexer': ('pip._vendor.pygments.lexers.trafficscript', 'TrafficScript', ('trafficscript', 'rts'), ('*.rts',), ()),
'RubyConsoleLexer': ('pip._vendor.pygments.lexers.ruby', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)),
'RubyLexer': ('pip._vendor.pygments.lexers.ruby', 'Ruby', ('ruby', 'rb', 'duby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx', '*.duby', 'Gemfile', 'Vagrantfile'), ('text/x-ruby', 'application/x-ruby')),
'RustLexer': ('pip._vendor.pygments.lexers.rust', 'Rust', ('rust', 'rs'), ('*.rs', '*.rs.in'), ('text/rust', 'text/x-rust')),
'SASLexer': ('pip._vendor.pygments.lexers.sas', 'SAS', ('sas',), ('*.SAS', '*.sas'), ('text/x-sas', 'text/sas', 'application/x-sas')),
'SLexer': ('pip._vendor.pygments.lexers.r', 'S', ('splus', 's', 'r'), ('*.S', '*.R', '.Rhistory', '.Rprofile', '.Renviron'), ('text/S-plus', 'text/S', 'text/x-r-source', 'text/x-r', 'text/x-R', 'text/x-r-history', 'text/x-r-profile')),
'SMLLexer': ('pip._vendor.pygments.lexers.ml', 'Standard ML', ('sml',), ('*.sml', '*.sig', '*.fun'), ('text/x-standardml', 'application/x-standardml')),
'SNBTLexer': ('pip._vendor.pygments.lexers.minecraft', 'SNBT', ('snbt',), ('*.snbt',), ('text/snbt',)),
'SarlLexer': ('pip._vendor.pygments.lexers.jvm', 'SARL', ('sarl',), ('*.sarl',), ('text/x-sarl',)),
'SassLexer': ('pip._vendor.pygments.lexers.css', 'Sass', ('sass',), ('*.sass',), ('text/x-sass',)),
'SaviLexer': ('pip._vendor.pygments.lexers.savi', 'Savi', ('savi',), ('*.savi',), ()),
'ScalaLexer': ('pip._vendor.pygments.lexers.jvm', 'Scala', ('scala',), ('*.scala',), ('text/x-scala',)),
'ScamlLexer': ('pip._vendor.pygments.lexers.html', 'Scaml', ('scaml',), ('*.scaml',), ('text/x-scaml',)),
'ScdocLexer': ('pip._vendor.pygments.lexers.scdoc', 'scdoc', ('scdoc', 'scd'), ('*.scd', '*.scdoc'), ()),
'SchemeLexer': ('pip._vendor.pygments.lexers.lisp', 'Scheme', ('scheme', 'scm'), ('*.scm', '*.ss'), ('text/x-scheme', 'application/x-scheme')),
'ScilabLexer': ('pip._vendor.pygments.lexers.matlab', 'Scilab', ('scilab',), ('*.sci', '*.sce', '*.tst'), ('text/scilab',)),
'ScssLexer': ('pip._vendor.pygments.lexers.css', 'SCSS', ('scss',), ('*.scss',), ('text/x-scss',)),
'SedLexer': ('pip._vendor.pygments.lexers.textedit', 'Sed', ('sed', 'gsed', 'ssed'), ('*.sed', '*.[gs]sed'), ('text/x-sed',)),
'ShExCLexer': ('pip._vendor.pygments.lexers.rdf', 'ShExC', ('shexc', 'shex'), ('*.shex',), ('text/shex',)),
'ShenLexer': ('pip._vendor.pygments.lexers.lisp', 'Shen', ('shen',), ('*.shen',), ('text/x-shen', 'application/x-shen')),
'SieveLexer': ('pip._vendor.pygments.lexers.sieve', 'Sieve', ('sieve',), ('*.siv', '*.sieve'), ()),
'SilverLexer': ('pip._vendor.pygments.lexers.verification', 'Silver', ('silver',), ('*.sil', '*.vpr'), ()),
'SingularityLexer': ('pip._vendor.pygments.lexers.configs', 'Singularity', ('singularity',), ('*.def', 'Singularity'), ()),
'SlashLexer': ('pip._vendor.pygments.lexers.slash', 'Slash', ('slash',), ('*.sla',), ()),
'SlimLexer': ('pip._vendor.pygments.lexers.webmisc', 'Slim', ('slim',), ('*.slim',), ('text/x-slim',)),
'SlurmBashLexer': ('pip._vendor.pygments.lexers.shell', 'Slurm', ('slurm', 'sbatch'), ('*.sl',), ()),
'SmaliLexer': ('pip._vendor.pygments.lexers.dalvik', 'Smali', ('smali',), ('*.smali',), ('text/smali',)),
'SmalltalkLexer': ('pip._vendor.pygments.lexers.smalltalk', 'Smalltalk', ('smalltalk', 'squeak', 'st'), ('*.st',), ('text/x-smalltalk',)),
'SmartGameFormatLexer': ('pip._vendor.pygments.lexers.sgf', 'SmartGameFormat', ('sgf',), ('*.sgf',), ()),
'SmartyLexer': ('pip._vendor.pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)),
'SmithyLexer': ('pip._vendor.pygments.lexers.smithy', 'Smithy', ('smithy',), ('*.smithy',), ()),
'SnobolLexer': ('pip._vendor.pygments.lexers.snobol', 'Snobol', ('snobol',), ('*.snobol',), ('text/x-snobol',)),
'SnowballLexer': ('pip._vendor.pygments.lexers.dsls', 'Snowball', ('snowball',), ('*.sbl',), ()),
'SolidityLexer': ('pip._vendor.pygments.lexers.solidity', 'Solidity', ('solidity',), ('*.sol',), ()),
'SoongLexer': ('pip._vendor.pygments.lexers.soong', 'Soong', ('androidbp', 'bp', 'soong'), ('Android.bp',), ()),
'SophiaLexer': ('pip._vendor.pygments.lexers.sophia', 'Sophia', ('sophia',), ('*.aes',), ()),
'SourcePawnLexer': ('pip._vendor.pygments.lexers.pawn', 'SourcePawn', ('sp',), ('*.sp',), ('text/x-sourcepawn',)),
'SourcesListLexer': ('pip._vendor.pygments.lexers.installers', 'Debian Sourcelist', ('debsources', 'sourceslist', 'sources.list'), ('sources.list',), ()),
'SparqlLexer': ('pip._vendor.pygments.lexers.rdf', 'SPARQL', ('sparql',), ('*.rq', '*.sparql'), ('application/sparql-query',)),
'SpiceLexer': ('pip._vendor.pygments.lexers.spice', 'Spice', ('spice', 'spicelang'), ('*.spice',), ('text/x-spice',)),
'SqlJinjaLexer': ('pip._vendor.pygments.lexers.templates', 'SQL+Jinja', ('sql+jinja',), ('*.sql', '*.sql.j2', '*.sql.jinja2'), ()),
'SqlLexer': ('pip._vendor.pygments.lexers.sql', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)),
'SqliteConsoleLexer': ('pip._vendor.pygments.lexers.sql', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)),
'SquidConfLexer': ('pip._vendor.pygments.lexers.configs', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)),
'SrcinfoLexer': ('pip._vendor.pygments.lexers.srcinfo', 'Srcinfo', ('srcinfo',), ('.SRCINFO',), ()),
'SspLexer': ('pip._vendor.pygments.lexers.templates', 'Scalate Server Page', ('ssp',), ('*.ssp',), ('application/x-ssp',)),
'StanLexer': ('pip._vendor.pygments.lexers.modeling', 'Stan', ('stan',), ('*.stan',), ()),
'StataLexer': ('pip._vendor.pygments.lexers.stata', 'Stata', ('stata', 'do'), ('*.do', '*.ado'), ('text/x-stata', 'text/stata', 'application/x-stata')),
'SuperColliderLexer': ('pip._vendor.pygments.lexers.supercollider', 'SuperCollider', ('supercollider', 'sc'), ('*.sc', '*.scd'), ('application/supercollider', 'text/supercollider')),
'SwiftLexer': ('pip._vendor.pygments.lexers.objective', 'Swift', ('swift',), ('*.swift',), ('text/x-swift',)),
'SwigLexer': ('pip._vendor.pygments.lexers.c_like', 'SWIG', ('swig',), ('*.swg', '*.i'), ('text/swig',)),
'SystemVerilogLexer': ('pip._vendor.pygments.lexers.hdl', 'systemverilog', ('systemverilog', 'sv'), ('*.sv', '*.svh'), ('text/x-systemverilog',)),
'SystemdLexer': ('pip._vendor.pygments.lexers.configs', 'Systemd', ('systemd',), ('*.service', '*.socket', '*.device', '*.mount', '*.automount', '*.swap', '*.target', '*.path', '*.timer', '*.slice', '*.scope'), ()),
'TAPLexer': ('pip._vendor.pygments.lexers.testing', 'TAP', ('tap',), ('*.tap',), ()),
'TNTLexer': ('pip._vendor.pygments.lexers.tnt', 'Typographic Number Theory', ('tnt',), ('*.tnt',), ()),
'TOMLLexer': ('pip._vendor.pygments.lexers.configs', 'TOML', ('toml',), ('*.toml', 'Pipfile', 'poetry.lock'), ('application/toml',)),
'TableGenLexer': ('pip._vendor.pygments.lexers.tablegen', 'TableGen', ('tablegen', 'td'), ('*.td',), ()),
'TactLexer': ('pip._vendor.pygments.lexers.tact', 'Tact', ('tact',), ('*.tact',), ()),
'Tads3Lexer': ('pip._vendor.pygments.lexers.int_fiction', 'TADS 3', ('tads3',), ('*.t',), ()),
'TalLexer': ('pip._vendor.pygments.lexers.tal', 'Tal', ('tal', 'uxntal'), ('*.tal',), ('text/x-uxntal',)),
'TasmLexer': ('pip._vendor.pygments.lexers.asm', 'TASM', ('tasm',), ('*.asm', '*.ASM', '*.tasm'), ('text/x-tasm',)),
'TclLexer': ('pip._vendor.pygments.lexers.tcl', 'Tcl', ('tcl',), ('*.tcl', '*.rvt'), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')),
'TcshLexer': ('pip._vendor.pygments.lexers.shell', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)),
'TcshSessionLexer': ('pip._vendor.pygments.lexers.shell', 'Tcsh Session', ('tcshcon',), (), ()),
'TeaTemplateLexer': ('pip._vendor.pygments.lexers.templates', 'Tea', ('tea',), ('*.tea',), ('text/x-tea',)),
'TealLexer': ('pip._vendor.pygments.lexers.teal', 'teal', ('teal',), ('*.teal',), ()),
'TeraTermLexer': ('pip._vendor.pygments.lexers.teraterm', 'Tera Term macro', ('teratermmacro', 'teraterm', 'ttl'), ('*.ttl',), ('text/x-teratermmacro',)),
'TermcapLexer': ('pip._vendor.pygments.lexers.configs', 'Termcap', ('termcap',), ('termcap', 'termcap.src'), ()),
'TerminfoLexer': ('pip._vendor.pygments.lexers.configs', 'Terminfo', ('terminfo',), ('terminfo', 'terminfo.src'), ()),
'TerraformLexer': ('pip._vendor.pygments.lexers.configs', 'Terraform', ('terraform', 'tf', 'hcl'), ('*.tf', '*.hcl'), ('application/x-tf', 'application/x-terraform')),
'TexLexer': ('pip._vendor.pygments.lexers.markup', 'TeX', ('tex', 'latex'), ('*.tex', '*.aux', '*.toc'), ('text/x-tex', 'text/x-latex')),
'TextLexer': ('pip._vendor.pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)),
'ThingsDBLexer': ('pip._vendor.pygments.lexers.thingsdb', 'ThingsDB', ('ti', 'thingsdb'), ('*.ti',), ()),
'ThriftLexer': ('pip._vendor.pygments.lexers.dsls', 'Thrift', ('thrift',), ('*.thrift',), ('application/x-thrift',)),
'TiddlyWiki5Lexer': ('pip._vendor.pygments.lexers.markup', 'tiddler', ('tid',), ('*.tid',), ('text/vnd.tiddlywiki',)),
'TlbLexer': ('pip._vendor.pygments.lexers.tlb', 'Tl-b', ('tlb',), ('*.tlb',), ()),
'TlsLexer': ('pip._vendor.pygments.lexers.tls', 'TLS Presentation Language', ('tls',), (), ()),
'TodotxtLexer': ('pip._vendor.pygments.lexers.textfmts', 'Todotxt', ('todotxt',), ('todo.txt', '*.todotxt'), ('text/x-todo',)),
'TransactSqlLexer': ('pip._vendor.pygments.lexers.sql', 'Transact-SQL', ('tsql', 't-sql'), ('*.sql',), ('text/x-tsql',)),
'TreetopLexer': ('pip._vendor.pygments.lexers.parsers', 'Treetop', ('treetop',), ('*.treetop', '*.tt'), ()),
'TsxLexer': ('pip._vendor.pygments.lexers.jsx', 'TSX', ('tsx',), ('*.tsx',), ('text/typescript-tsx',)),
'TurtleLexer': ('pip._vendor.pygments.lexers.rdf', 'Turtle', ('turtle',), ('*.ttl',), ('text/turtle', 'application/x-turtle')),
'TwigHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Twig', ('html+twig',), ('*.twig',), ('text/html+twig',)),
'TwigLexer': ('pip._vendor.pygments.lexers.templates', 'Twig', ('twig',), (), ('application/x-twig',)),
'TypeScriptLexer': ('pip._vendor.pygments.lexers.javascript', 'TypeScript', ('typescript', 'ts'), ('*.ts',), ('application/x-typescript', 'text/x-typescript')),
'TypoScriptCssDataLexer': ('pip._vendor.pygments.lexers.typoscript', 'TypoScriptCssData', ('typoscriptcssdata',), (), ()),
'TypoScriptHtmlDataLexer': ('pip._vendor.pygments.lexers.typoscript', 'TypoScriptHtmlData', ('typoscripthtmldata',), (), ()),
'TypoScriptLexer': ('pip._vendor.pygments.lexers.typoscript', 'TypoScript', ('typoscript',), ('*.typoscript',), ('text/x-typoscript',)),
'TypstLexer': ('pip._vendor.pygments.lexers.typst', 'Typst', ('typst',), ('*.typ',), ('text/x-typst',)),
'UL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'UL4', ('ul4',), ('*.ul4',), ()),
'UcodeLexer': ('pip._vendor.pygments.lexers.unicon', 'ucode', ('ucode',), ('*.u', '*.u1', '*.u2'), ()),
'UniconLexer': ('pip._vendor.pygments.lexers.unicon', 'Unicon', ('unicon',), ('*.icn',), ('text/unicon',)),
'UnixConfigLexer': ('pip._vendor.pygments.lexers.configs', 'Unix/Linux config files', ('unixconfig', 'linuxconfig'), (), ()),
'UrbiscriptLexer': ('pip._vendor.pygments.lexers.urbi', 'UrbiScript', ('urbiscript',), ('*.u',), ('application/x-urbiscript',)),
'UrlEncodedLexer': ('pip._vendor.pygments.lexers.html', 'urlencoded', ('urlencoded',), (), ('application/x-www-form-urlencoded',)),
'UsdLexer': ('pip._vendor.pygments.lexers.usd', 'USD', ('usd', 'usda'), ('*.usd', '*.usda'), ()),
'VBScriptLexer': ('pip._vendor.pygments.lexers.basic', 'VBScript', ('vbscript',), ('*.vbs', '*.VBS'), ()),
'VCLLexer': ('pip._vendor.pygments.lexers.varnish', 'VCL', ('vcl',), ('*.vcl',), ('text/x-vclsrc',)),
'VCLSnippetLexer': ('pip._vendor.pygments.lexers.varnish', 'VCLSnippets', ('vclsnippets', 'vclsnippet'), (), ('text/x-vclsnippet',)),
'VCTreeStatusLexer': ('pip._vendor.pygments.lexers.console', 'VCTreeStatus', ('vctreestatus',), (), ()),
'VGLLexer': ('pip._vendor.pygments.lexers.dsls', 'VGL', ('vgl',), ('*.rpf',), ()),
'ValaLexer': ('pip._vendor.pygments.lexers.c_like', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)),
'VbNetAspxLexer': ('pip._vendor.pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
'VbNetLexer': ('pip._vendor.pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet', 'lobas', 'oobas', 'sobas', 'visual-basic', 'visualbasic'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')),
'VelocityHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Velocity', ('html+velocity',), (), ('text/html+velocity',)),
'VelocityLexer': ('pip._vendor.pygments.lexers.templates', 'Velocity', ('velocity',), ('*.vm', '*.fhtml'), ()),
'VelocityXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Velocity', ('xml+velocity',), (), ('application/xml+velocity',)),
'VerifpalLexer': ('pip._vendor.pygments.lexers.verifpal', 'Verifpal', ('verifpal',), ('*.vp',), ('text/x-verifpal',)),
'VerilogLexer': ('pip._vendor.pygments.lexers.hdl', 'verilog', ('verilog', 'v'), ('*.v',), ('text/x-verilog',)),
'VhdlLexer': ('pip._vendor.pygments.lexers.hdl', 'vhdl', ('vhdl',), ('*.vhdl', '*.vhd'), ('text/x-vhdl',)),
'VimLexer': ('pip._vendor.pygments.lexers.textedit', 'VimL', ('vim',), ('*.vim', '.vimrc', '.exrc', '.gvimrc', '_vimrc', '_exrc', '_gvimrc', 'vimrc', 'gvimrc'), ('text/x-vim',)),
'VisualPrologGrammarLexer': ('pip._vendor.pygments.lexers.vip', 'Visual Prolog Grammar', ('visualprologgrammar',), ('*.vipgrm',), ()),
'VisualPrologLexer': ('pip._vendor.pygments.lexers.vip', 'Visual Prolog', ('visualprolog',), ('*.pro', '*.cl', '*.i', '*.pack', '*.ph'), ()),
'VueLexer': ('pip._vendor.pygments.lexers.html', 'Vue', ('vue',), ('*.vue',), ()),
'VyperLexer': ('pip._vendor.pygments.lexers.vyper', 'Vyper', ('vyper',), ('*.vy',), ()),
'WDiffLexer': ('pip._vendor.pygments.lexers.diff', 'WDiff', ('wdiff',), ('*.wdiff',), ()),
'WatLexer': ('pip._vendor.pygments.lexers.webassembly', 'WebAssembly', ('wast', 'wat'), ('*.wat', '*.wast'), ()),
'WebIDLLexer': ('pip._vendor.pygments.lexers.webidl', 'Web IDL', ('webidl',), ('*.webidl',), ()),
'WgslLexer': ('pip._vendor.pygments.lexers.wgsl', 'WebGPU Shading Language', ('wgsl',), ('*.wgsl',), ('text/wgsl',)),
'WhileyLexer': ('pip._vendor.pygments.lexers.whiley', 'Whiley', ('whiley',), ('*.whiley',), ('text/x-whiley',)),
'WikitextLexer': ('pip._vendor.pygments.lexers.markup', 'Wikitext', ('wikitext', 'mediawiki'), (), ('text/x-wiki',)),
'WoWTocLexer': ('pip._vendor.pygments.lexers.wowtoc', 'World of Warcraft TOC', ('wowtoc',), ('*.toc',), ()),
'WrenLexer': ('pip._vendor.pygments.lexers.wren', 'Wren', ('wren',), ('*.wren',), ()),
'X10Lexer': ('pip._vendor.pygments.lexers.x10', 'X10', ('x10', 'xten'), ('*.x10',), ('text/x-x10',)),
'XMLUL4Lexer': ('pip._vendor.pygments.lexers.ul4', 'XML+UL4', ('xml+ul4',), ('*.xmlul4',), ()),
'XQueryLexer': ('pip._vendor.pygments.lexers.webmisc', 'XQuery', ('xquery', 'xqy', 'xq', 'xql', 'xqm'), ('*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'), ('text/xquery', 'application/xquery')),
'XmlDjangoLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), ('*.xml.j2', '*.xml.jinja2'), ('application/xml+django', 'application/xml+jinja')),
'XmlErbLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Ruby', ('xml+ruby', 'xml+erb'), (), ('application/xml+ruby',)),
'XmlLexer': ('pip._vendor.pygments.lexers.html', 'XML', ('xml',), ('*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd', '*.wsdl', '*.wsf'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml')),
'XmlPhpLexer': ('pip._vendor.pygments.lexers.templates', 'XML+PHP', ('xml+php',), (), ('application/xml+php',)),
'XmlSmartyLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)),
'XorgLexer': ('pip._vendor.pygments.lexers.xorg', 'Xorg', ('xorg.conf',), ('xorg.conf',), ()),
'XppLexer': ('pip._vendor.pygments.lexers.dotnet', 'X++', ('xpp', 'x++'), ('*.xpp',), ()),
'XsltLexer': ('pip._vendor.pygments.lexers.html', 'XSLT', ('xslt',), ('*.xsl', '*.xslt', '*.xpl'), ('application/xsl+xml', 'application/xslt+xml')),
'XtendLexer': ('pip._vendor.pygments.lexers.jvm', 'Xtend', ('xtend',), ('*.xtend',), ('text/x-xtend',)),
'XtlangLexer': ('pip._vendor.pygments.lexers.lisp', 'xtlang', ('extempore',), ('*.xtm',), ()),
'YamlJinjaLexer': ('pip._vendor.pygments.lexers.templates', 'YAML+Jinja', ('yaml+jinja', 'salt', 'sls'), ('*.sls', '*.yaml.j2', '*.yml.j2', '*.yaml.jinja2', '*.yml.jinja2'), ('text/x-yaml+jinja', 'text/x-sls')),
'YamlLexer': ('pip._vendor.pygments.lexers.data', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',)),
'YangLexer': ('pip._vendor.pygments.lexers.yang', 'YANG', ('yang',), ('*.yang',), ('application/yang',)),
'YaraLexer': ('pip._vendor.pygments.lexers.yara', 'YARA', ('yara', 'yar'), ('*.yar',), ('text/x-yara',)),
'ZeekLexer': ('pip._vendor.pygments.lexers.dsls', 'Zeek', ('zeek', 'bro'), ('*.zeek', '*.bro'), ()),
'ZephirLexer': ('pip._vendor.pygments.lexers.php', 'Zephir', ('zephir',), ('*.zep',), ()),
'ZigLexer': ('pip._vendor.pygments.lexers.zig', 'Zig', ('zig',), ('*.zig',), ('text/zig',)),
'apdlexer': ('pip._vendor.pygments.lexers.apdlexer', 'ANSYS parametric design language', ('ansys', 'apdl'), ('*.ans',), ()),
}
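
Each entry in the LEXERS mapping above is a five-element tuple of (module path, display name, aliases, filename globs, MIME types); the lexer class itself is only imported when it is needed. A minimal lookup sketch follows — illustrative only, not pip's own code; it reuses the lazy-import idiom that pygments' own sphinxext (further below) applies to this mapping:

from pip._vendor.pygments.lexers._mapping import LEXERS

def find_lexer_by_alias(alias):
    # Walk the registry and import the matching lexer class on demand.
    for classname, (module, name, aliases, globs, mimetypes) in LEXERS.items():
        if alias in aliases:
            mod = __import__(module, None, None, [classname])
            return getattr(mod, classname)
    return None

print(find_lexer_by_alias('python'))  # -> the PythonLexer class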
File diff suppressed because it is too large
@@ -0,0 +1,43 @@
"""
    pygments.modeline
    ~~~~~~~~~~~~~~~~~

    A simple modeline parser (based on pymodeline).

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

__all__ = ['get_filetype_from_buffer']


modeline_re = re.compile(r'''
    (?: vi | vim | ex ) (?: [<=>]? \d* )? :
    .* (?: ft | filetype | syn | syntax ) = ( [^:\s]+ )
''', re.VERBOSE)


def get_filetype_from_line(l):  # noqa: E741
    m = modeline_re.search(l)
    if m:
        return m.group(1)


def get_filetype_from_buffer(buf, max_lines=5):
    """
    Scan the buffer for modelines and return filetype if one is found.
    """
    lines = buf.splitlines()
    for line in lines[-1:-max_lines-1:-1]:
        ret = get_filetype_from_line(line)
        if ret:
            return ret
    for i in range(max_lines, -1, -1):
        if i < len(lines):
            ret = get_filetype_from_line(lines[i])
            if ret:
                return ret

    return None
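
A short usage sketch for the parser above; the sample buffer is made up. Only the last (and then first) few lines are scanned, mirroring how editors look for modelines:

from pip._vendor.pygments.modeline import get_filetype_from_buffer

source = "#!/usr/bin/env python\nprint('hi')\n# vim: set ft=python:\n"
print(get_filetype_from_buffer(source))  # -> 'python'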
@@ -0,0 +1,72 @@
"""
    pygments.plugin
    ~~~~~~~~~~~~~~~

    Pygments plugin interface.

    lexer plugins::

        [pygments.lexers]
        yourlexer = yourmodule:YourLexer

    formatter plugins::

        [pygments.formatters]
        yourformatter = yourformatter:YourFormatter
        /.ext = yourformatter:YourFormatter

    As you can see, you can define extensions for the formatter
    with a leading slash.

    syntax plugins::

        [pygments.styles]
        yourstyle = yourstyle:YourStyle

    filter plugin::

        [pygments.filter]
        yourfilter = yourfilter:YourFilter


    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
from importlib.metadata import entry_points

LEXER_ENTRY_POINT = 'pygments.lexers'
FORMATTER_ENTRY_POINT = 'pygments.formatters'
STYLE_ENTRY_POINT = 'pygments.styles'
FILTER_ENTRY_POINT = 'pygments.filters'


def iter_entry_points(group_name):
    groups = entry_points()
    if hasattr(groups, 'select'):
        # New interface in Python 3.10 and newer versions of the
        # importlib_metadata backport.
        return groups.select(group=group_name)
    else:
        # Older interface, deprecated in Python 3.10 and recent
        # importlib_metadata, but we need it in Python 3.8 and 3.9.
        return groups.get(group_name, [])


def find_plugin_lexers():
    for entrypoint in iter_entry_points(LEXER_ENTRY_POINT):
        yield entrypoint.load()


def find_plugin_formatters():
    for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT):
        yield entrypoint.name, entrypoint.load()


def find_plugin_styles():
    for entrypoint in iter_entry_points(STYLE_ENTRY_POINT):
        yield entrypoint.name, entrypoint.load()


def find_plugin_filters():
    for entrypoint in iter_entry_points(FILTER_ENTRY_POINT):
        yield entrypoint.name, entrypoint.load()
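
A discovery sketch for the helpers above. It assumes some third-party distribution has registered a lexer under the 'pygments.lexers' entry-point group (pip itself ships none), in which case the loop yields each plugin lexer class:

from pip._vendor.pygments.plugin import find_plugin_lexers

for lexer_cls in find_plugin_lexers():
    print(lexer_cls.__name__, getattr(lexer_cls, 'aliases', ()))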
@@ -0,0 +1,91 @@
"""
    pygments.regexopt
    ~~~~~~~~~~~~~~~~~

    An algorithm that generates optimized regexes for matching long lists of
    literal strings.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
from re import escape
from os.path import commonprefix
from itertools import groupby
from operator import itemgetter

CS_ESCAPE = re.compile(r'[\[\^\\\-\]]')
FIRST_ELEMENT = itemgetter(0)


def make_charset(letters):
    return '[' + CS_ESCAPE.sub(lambda m: '\\' + m.group(), ''.join(letters)) + ']'


def regex_opt_inner(strings, open_paren):
    """Return a regex that matches any string in the sorted list of strings."""
    close_paren = open_paren and ')' or ''
    # print strings, repr(open_paren)
    if not strings:
        # print '-> nothing left'
        return ''
    first = strings[0]
    if len(strings) == 1:
        # print '-> only 1 string'
        return open_paren + escape(first) + close_paren
    if not first:
        # print '-> first string empty'
        return open_paren + regex_opt_inner(strings[1:], '(?:') \
            + '?' + close_paren
    if len(first) == 1:
        # multiple one-char strings? make a charset
        oneletter = []
        rest = []
        for s in strings:
            if len(s) == 1:
                oneletter.append(s)
            else:
                rest.append(s)
        if len(oneletter) > 1:  # do we have more than one oneletter string?
            if rest:
                # print '-> 1-character + rest'
                return open_paren + regex_opt_inner(rest, '') + '|' \
                    + make_charset(oneletter) + close_paren
            # print '-> only 1-character'
            return open_paren + make_charset(oneletter) + close_paren
    prefix = commonprefix(strings)
    if prefix:
        plen = len(prefix)
        # we have a prefix for all strings
        # print '-> prefix:', prefix
        return open_paren + escape(prefix) \
            + regex_opt_inner([s[plen:] for s in strings], '(?:') \
            + close_paren
    # is there a suffix?
    strings_rev = [s[::-1] for s in strings]
    suffix = commonprefix(strings_rev)
    if suffix:
        slen = len(suffix)
        # print '-> suffix:', suffix[::-1]
        return open_paren \
            + regex_opt_inner(sorted(s[:-slen] for s in strings), '(?:') \
            + escape(suffix[::-1]) + close_paren
    # recurse on common 1-string prefixes
    # print '-> last resort'
    return open_paren + \
        '|'.join(regex_opt_inner(list(group[1]), '')
                 for group in groupby(strings, lambda s: s[0] == first[0])) \
        + close_paren


def regex_opt(strings, prefix='', suffix=''):
    """Return a regex (as a string) that matches any string in the given list.

    The strings to match must be literal strings, not regexes. They will be
    regex-escaped.

    *prefix* and *suffix* are pre- and appended to the final regex.
    """
    strings = sorted(strings)
    return prefix + regex_opt_inner(strings, '(') + suffix
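
A small demonstration of regex_opt above; the keyword list is arbitrary, and the exact factoring the optimizer emits is an implementation detail, so the sketch only verifies that the result still matches every input word:

import re
from pip._vendor.pygments.regexopt import regex_opt

words = ['if', 'elif', 'else', 'end']
pattern = regex_opt(words, prefix=r'\b', suffix=r'\b')
assert all(re.match(pattern, w) for w in words)
print(pattern)  # common prefixes get factored, e.g. r'\b(e(?:l(?:if|se)|nd)|if)\b'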
@@ -0,0 +1,104 @@
"""
    pygments.scanner
    ~~~~~~~~~~~~~~~~

    This library implements a regex based scanner. Some languages
    like Pascal are easy to parse but have some keywords that
    depend on the context. Because of this it's impossible to lex
    that just by using a regular expression lexer like the
    `RegexLexer`.

    Have a look at the `DelphiLexer` to get an idea of how to use
    this scanner.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
import re


class EndOfText(RuntimeError):
    """
    Raise if end of text is reached and the user
    tried to call a match function.
    """


class Scanner:
    """
    Simple scanner

    All method patterns are regular expression strings (not
    compiled expressions!)
    """

    def __init__(self, text, flags=0):
        """
        :param text: The text which should be scanned
        :param flags: default regular expression flags
        """
        self.data = text
        self.data_length = len(text)
        self.start_pos = 0
        self.pos = 0
        self.flags = flags
        self.last = None
        self.match = None
        self._re_cache = {}

    @property
    def eos(self):
        """`True` if the scanner reached the end of text."""
        return self.pos >= self.data_length

    def check(self, pattern):
        """
        Apply `pattern` on the current position and return
        the match object. (Doesn't touch pos). Use this for
        lookahead.
        """
        if self.eos:
            raise EndOfText()
        if pattern not in self._re_cache:
            self._re_cache[pattern] = re.compile(pattern, self.flags)
        return self._re_cache[pattern].match(self.data, self.pos)

    def test(self, pattern):
        """Apply a pattern on the current position and check
        if it matches. Doesn't touch pos.
        """
        return self.check(pattern) is not None

    def scan(self, pattern):
        """
        Scan the text for the given pattern and update pos/match
        and related fields. The return value is a boolean that
        indicates if the pattern matched. The matched value is
        stored on the instance as ``match``, the last value is
        stored as ``last``. ``start_pos`` is the position of the
        pointer before the pattern was matched, ``pos`` is the
        end position.
        """
        if self.eos:
            raise EndOfText()
        if pattern not in self._re_cache:
            self._re_cache[pattern] = re.compile(pattern, self.flags)
        self.last = self.match
        m = self._re_cache[pattern].match(self.data, self.pos)
        if m is None:
            return False
        self.start_pos = m.start()
        self.pos = m.end()
        self.match = m.group()
        return True

    def get_char(self):
        """Scan exactly one char."""
        self.scan('.')

    def __repr__(self):
        return '<%s %d/%d>' % (
            self.__class__.__name__,
            self.pos,
            self.data_length
        )
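
A usage sketch for the Scanner class above, tokenizing a toy Pascal-like string; the input text and patterns are arbitrary examples:

from pip._vendor.pygments.scanner import Scanner

s = Scanner('begin x := 1 end')
while not s.eos:
    if s.scan(r'\s+'):
        continue               # skip whitespace
    if s.scan(r':=|\w+'):
        print(repr(s.match))   # 'begin', 'x', ':=', '1', 'end'
    else:
        s.get_char()           # always make progress on unknown input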
@@ -0,0 +1,247 @@
"""
    pygments.sphinxext
    ~~~~~~~~~~~~~~~~~~

    Sphinx extension to generate automatic documentation of lexers,
    formatters and filters.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import sys

from docutils import nodes
from docutils.statemachine import ViewList
from docutils.parsers.rst import Directive
from sphinx.util.nodes import nested_parse_with_titles


MODULEDOC = '''
.. module:: %s

%s
%s
'''

LEXERDOC = '''
.. class:: %s

    :Short names: %s
    :Filenames:   %s
    :MIME types:  %s

    %s

    %s

'''

FMTERDOC = '''
.. class:: %s

    :Short names: %s
    :Filenames: %s

    %s

'''

FILTERDOC = '''
.. class:: %s

    :Name: %s

    %s

'''


class PygmentsDoc(Directive):
    """
    A directive to collect all lexers/formatters/filters and generate
    autoclass directives for them.
    """
    has_content = False
    required_arguments = 1
    optional_arguments = 0
    final_argument_whitespace = False
    option_spec = {}

    def run(self):
        self.filenames = set()
        if self.arguments[0] == 'lexers':
            out = self.document_lexers()
        elif self.arguments[0] == 'formatters':
            out = self.document_formatters()
        elif self.arguments[0] == 'filters':
            out = self.document_filters()
        elif self.arguments[0] == 'lexers_overview':
            out = self.document_lexers_overview()
        else:
            raise Exception('invalid argument for "pygmentsdoc" directive')
        node = nodes.compound()
        vl = ViewList(out.split('\n'), source='')
        nested_parse_with_titles(self.state, vl, node)
        for fn in self.filenames:
            self.state.document.settings.record_dependencies.add(fn)
        return node.children

    def document_lexers_overview(self):
        """Generate a tabular overview of all lexers.

        The columns are the lexer name, the extensions handled by this lexer
        (or "None"), the aliases and a link to the lexer class."""
        from pip._vendor.pygments.lexers._mapping import LEXERS
        from pip._vendor.pygments.lexers import find_lexer_class
        out = []

        table = []

        def format_link(name, url):
            if url:
                return f'`{name} <{url}>`_'
            return name

        for classname, data in sorted(LEXERS.items(), key=lambda x: x[1][1].lower()):
            lexer_cls = find_lexer_class(data[1])
            extensions = lexer_cls.filenames + lexer_cls.alias_filenames

            table.append({
                'name': format_link(data[1], lexer_cls.url),
                'extensions': ', '.join(extensions).replace('*', '\\*').replace('_', '\\') or 'None',
                'aliases': ', '.join(data[2]),
                'class': f'{data[0]}.{classname}'
            })

        column_names = ['name', 'extensions', 'aliases', 'class']
        column_lengths = [max([len(row[column]) for row in table if row[column]])
                          for column in column_names]

        def write_row(*columns):
            """Format a table row"""
            out = []
            for length, col in zip(column_lengths, columns):
                if col:
                    out.append(col.ljust(length))
                else:
                    out.append(' '*length)

            return ' '.join(out)

        def write_seperator():
            """Write a table separator row"""
            sep = ['='*c for c in column_lengths]
            return write_row(*sep)

        out.append(write_seperator())
        out.append(write_row('Name', 'Extension(s)', 'Short name(s)', 'Lexer class'))
        out.append(write_seperator())
        for row in table:
            out.append(write_row(
                row['name'],
                row['extensions'],
                row['aliases'],
                f':class:`~{row["class"]}`'))
        out.append(write_seperator())

        return '\n'.join(out)

    def document_lexers(self):
        from pip._vendor.pygments.lexers._mapping import LEXERS
        from pip._vendor import pygments
        import inspect
        import pathlib

        out = []
        modules = {}
        moduledocstrings = {}
        for classname, data in sorted(LEXERS.items(), key=lambda x: x[0]):
            module = data[0]
            mod = __import__(module, None, None, [classname])
            self.filenames.add(mod.__file__)
            cls = getattr(mod, classname)
            if not cls.__doc__:
                print(f"Warning: {classname} does not have a docstring.")
            docstring = cls.__doc__
            if isinstance(docstring, bytes):
                docstring = docstring.decode('utf8')

            example_file = getattr(cls, '_example', None)
            if example_file:
                p = pathlib.Path(inspect.getabsfile(pygments)).parent.parent /\
                    'tests' / 'examplefiles' / example_file
                content = p.read_text(encoding='utf-8')
                if not content:
                    raise Exception(
                        f"Empty example file '{example_file}' for lexer "
                        f"{classname}")

                if data[2]:
                    lexer_name = data[2][0]
                    docstring += '\n\n    .. admonition:: Example\n'
                    docstring += f'\n      .. code-block:: {lexer_name}\n\n'
                    for line in content.splitlines():
                        docstring += f'          {line}\n'

            if cls.version_added:
                version_line = f'.. versionadded:: {cls.version_added}'
            else:
                version_line = ''

            modules.setdefault(module, []).append((
                classname,
                ', '.join(data[2]) or 'None',
                ', '.join(data[3]).replace('*', '\\*').replace('_', '\\') or 'None',
                ', '.join(data[4]) or 'None',
                docstring,
                version_line))
            if module not in moduledocstrings:
                moddoc = mod.__doc__
                if isinstance(moddoc, bytes):
                    moddoc = moddoc.decode('utf8')
                moduledocstrings[module] = moddoc

        for module, lexers in sorted(modules.items(), key=lambda x: x[0]):
            if moduledocstrings[module] is None:
                raise Exception(f"Missing docstring for {module}")
            heading = moduledocstrings[module].splitlines()[4].strip().rstrip('.')
            out.append(MODULEDOC % (module, heading, '-'*len(heading)))
            for data in lexers:
                out.append(LEXERDOC % data)

        return ''.join(out)

    def document_formatters(self):
        from pip._vendor.pygments.formatters import FORMATTERS

        out = []
        for classname, data in sorted(FORMATTERS.items(), key=lambda x: x[0]):
            module = data[0]
            mod = __import__(module, None, None, [classname])
            self.filenames.add(mod.__file__)
            cls = getattr(mod, classname)
            docstring = cls.__doc__
            if isinstance(docstring, bytes):
                docstring = docstring.decode('utf8')
            heading = cls.__name__
            out.append(FMTERDOC % (heading, ', '.join(data[2]) or 'None',
                                   ', '.join(data[3]).replace('*', '\\*') or 'None',
                                   docstring))
        return ''.join(out)

    def document_filters(self):
        from pip._vendor.pygments.filters import FILTERS

        out = []
        for name, cls in FILTERS.items():
            self.filenames.add(sys.modules[cls.__module__].__file__)
            docstring = cls.__doc__
            if isinstance(docstring, bytes):
                docstring = docstring.decode('utf8')
            out.append(FILTERDOC % (cls.__name__, name, docstring))
        return ''.join(out)


def setup(app):
    app.add_directive('pygmentsdoc', PygmentsDoc)
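The setup() hook above registers the pygmentsdoc directive, and PygmentsDoc.run() dispatches on its single required argument (lexers, formatters, filters, or lexers_overview). As an illustrative sketch only -- the conf.py below and the upstream module name are assumptions, not part of this commit -- this is roughly how such an extension is wired into a Sphinx build:

# conf.py -- illustrative sketch (assumed file, not part of this commit).
# Loading the extension module calls setup(app), which registers the directive.
extensions = [
    'pygments.sphinxext',  # upstream name; this commit vendors it under pip._vendor
]

# A documentation page would then invoke the directive with one of the
# four arguments handled in PygmentsDoc.run(), e.g.:
#
#   .. pygmentsdoc:: lexers_overview
#
# which emits the reST table produced by document_lexers_overview().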
@@ -0,0 +1,203 @@
"""
    pygments.style
    ~~~~~~~~~~~~~~

    Basic style object.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pip._vendor.pygments.token import Token, STANDARD_TYPES

# Default mapping of ansixxx to RGB colors.
_ansimap = {
    # dark
    'ansiblack': '000000',
    'ansired': '7f0000',
    'ansigreen': '007f00',
    'ansiyellow': '7f7fe0',
    'ansiblue': '00007f',
    'ansimagenta': '7f007f',
    'ansicyan': '007f7f',
    'ansigray': 'e5e5e5',
    # normal
    'ansibrightblack': '555555',
    'ansibrightred': 'ff0000',
    'ansibrightgreen': '00ff00',
    'ansibrightyellow': 'ffff00',
    'ansibrightblue': '0000ff',
    'ansibrightmagenta': 'ff00ff',
    'ansibrightcyan': '00ffff',
    'ansiwhite': 'ffffff',
}
# mapping of deprecated #ansixxx colors to new color names
_deprecated_ansicolors = {
    # dark
    '#ansiblack': 'ansiblack',
    '#ansidarkred': 'ansired',
    '#ansidarkgreen': 'ansigreen',
    '#ansibrown': 'ansiyellow',
    '#ansidarkblue': 'ansiblue',
    '#ansipurple': 'ansimagenta',
    '#ansiteal': 'ansicyan',
    '#ansilightgray': 'ansigray',
    # normal
    '#ansidarkgray': 'ansibrightblack',
    '#ansired': 'ansibrightred',
    '#ansigreen': 'ansibrightgreen',
    '#ansiyellow': 'ansibrightyellow',
    '#ansiblue': 'ansibrightblue',
    '#ansifuchsia': 'ansibrightmagenta',
    '#ansiturquoise': 'ansibrightcyan',
    '#ansiwhite': 'ansiwhite',
}
ansicolors = set(_ansimap)


class StyleMeta(type):

    def __new__(mcs, name, bases, dct):
        obj = type.__new__(mcs, name, bases, dct)
        for token in STANDARD_TYPES:
            if token not in obj.styles:
                obj.styles[token] = ''

        def colorformat(text):
            if text in ansicolors:
                return text
            if text[0:1] == '#':
                col = text[1:]
                if len(col) == 6:
                    return col
                elif len(col) == 3:
                    return col[0] * 2 + col[1] * 2 + col[2] * 2
            elif text == '':
                return ''
            elif text.startswith('var') or text.startswith('calc'):
                return text
            assert False, f"wrong color format {text!r}"

        _styles = obj._styles = {}

        for ttype in obj.styles:
            for token in ttype.split():
                if token in _styles:
                    continue
                ndef = _styles.get(token.parent, None)
                styledefs = obj.styles.get(token, '').split()
                if not ndef or token is None:
                    ndef = ['', 0, 0, 0, '', '', 0, 0, 0]
                elif 'noinherit' in styledefs and token is not Token:
                    ndef = _styles[Token][:]
                else:
                    ndef = ndef[:]
                _styles[token] = ndef
                for styledef in obj.styles.get(token, '').split():
                    if styledef == 'noinherit':
                        pass
                    elif styledef == 'bold':
                        ndef[1] = 1
                    elif styledef == 'nobold':
                        ndef[1] = 0
                    elif styledef == 'italic':
                        ndef[2] = 1
                    elif styledef == 'noitalic':
                        ndef[2] = 0
                    elif styledef == 'underline':
                        ndef[3] = 1
                    elif styledef == 'nounderline':
                        ndef[3] = 0
                    elif styledef[:3] == 'bg:':
                        ndef[4] = colorformat(styledef[3:])
                    elif styledef[:7] == 'border:':
                        ndef[5] = colorformat(styledef[7:])
                    elif styledef == 'roman':
                        ndef[6] = 1
                    elif styledef == 'sans':
                        ndef[7] = 1
                    elif styledef == 'mono':
                        ndef[8] = 1
                    else:
                        ndef[0] = colorformat(styledef)

        return obj

    def style_for_token(cls, token):
        t = cls._styles[token]
        ansicolor = bgansicolor = None
        color = t[0]
        if color in _deprecated_ansicolors:
            color = _deprecated_ansicolors[color]
        if color in ansicolors:
            ansicolor = color
            color = _ansimap[color]
        bgcolor = t[4]
        if bgcolor in _deprecated_ansicolors:
            bgcolor = _deprecated_ansicolors[bgcolor]
        if bgcolor in ansicolors:
            bgansicolor = bgcolor
            bgcolor = _ansimap[bgcolor]

        return {
            'color': color or None,
            'bold': bool(t[1]),
            'italic': bool(t[2]),
            'underline': bool(t[3]),
            'bgcolor': bgcolor or None,
            'border': t[5] or None,
            'roman': bool(t[6]) or None,
            'sans': bool(t[7]) or None,
            'mono': bool(t[8]) or None,
            'ansicolor': ansicolor,
            'bgansicolor': bgansicolor,
        }

    def list_styles(cls):
        return list(cls)

    def styles_token(cls, ttype):
        return ttype in cls._styles

    def __iter__(cls):
        for token in cls._styles:
            yield token, cls.style_for_token(token)

    def __len__(cls):
        return len(cls._styles)


class Style(metaclass=StyleMeta):

    #: overall background color (``None`` means transparent)
    background_color = '#ffffff'

    #: highlight background color
    highlight_color = '#ffffcc'

    #: line number font color
    line_number_color = 'inherit'

    #: line number background color
    line_number_background_color = 'transparent'

    #: special line number font color
    line_number_special_color = '#000000'

    #: special line number background color
    line_number_special_background_color = '#ffffc0'

    #: Style definitions for individual token types.
    styles = {}

    #: user-friendly style name (used when selecting the style, so this
    # should be all-lowercase, no spaces, hyphens)
    name = 'unnamed'

    aliases = []

    # Attribute for lexers defined within Pygments. If set
    # to True, the style is not shown in the style gallery
    # on the website. This is intended for language-specific
    # styles.
    web_style_gallery_exclude = False
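StyleMeta.__new__ above flattens every entry of a subclass's styles dict into a nine-slot list (color, bold, italic, underline, bgcolor, border, roman, sans, mono), inheriting from each token's parent unless noinherit is given. A minimal sketch of how that plays out, assuming the vendored import paths from this commit (DemoStyle is a hypothetical name, not part of the commit):

from pip._vendor.pygments.style import Style
from pip._vendor.pygments.token import Token, Comment, Keyword

class DemoStyle(Style):
    # Values use the grammar parsed by StyleMeta above: flags such as
    # 'bold'/'italic', 'bg:<color>', 'border:<color>', or a bare color
    # ('#rrggbb', '#rgb', or an 'ansi...' name from _ansimap).
    styles = {
        Token:   '',                 # root token: everything else inherits from it
        Comment: 'italic #888888',
        Keyword: 'bold ansiblue',
    }

# The metaclass has already resolved inheritance, so lookups are dict reads:
print(DemoStyle.style_for_token(Keyword))
# expected, per style_for_token above: bold=True, ansicolor='ansiblue',
# and the color folded through _ansimap to '00007f'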
@@ -0,0 +1,61 @@
"""
    pygments.styles
    ~~~~~~~~~~~~~~~

    Contains built-in styles.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pip._vendor.pygments.plugin import find_plugin_styles
from pip._vendor.pygments.util import ClassNotFound
from pip._vendor.pygments.styles._mapping import STYLES

#: A dictionary of built-in styles, mapping style names to
#: ``'submodule::classname'`` strings.
#: This list is deprecated. Use `pygments.styles.STYLES` instead
STYLE_MAP = {v[1]: v[0].split('.')[-1] + '::' + k for k, v in STYLES.items()}

#: Internal reverse mapping to make `get_style_by_name` more efficient
_STYLE_NAME_TO_MODULE_MAP = {v[1]: (v[0], k) for k, v in STYLES.items()}


def get_style_by_name(name):
    """
    Return a style class by its short name. The names of the builtin styles
    are listed in :data:`pygments.styles.STYLE_MAP`.

    Will raise :exc:`pygments.util.ClassNotFound` if no style of that name is
    found.
    """
    if name in _STYLE_NAME_TO_MODULE_MAP:
        mod, cls = _STYLE_NAME_TO_MODULE_MAP[name]
        builtin = "yes"
    else:
        for found_name, style in find_plugin_styles():
            if name == found_name:
                return style
        # perhaps it got dropped into our styles package
        builtin = ""
        mod = 'pygments.styles.' + name
        cls = name.title() + "Style"

    try:
        mod = __import__(mod, None, None, [cls])
    except ImportError:
        raise ClassNotFound(f"Could not find style module {mod!r}" +
                            (builtin and ", though it should be builtin")
                            + ".")
    try:
        return getattr(mod, cls)
    except AttributeError:
        raise ClassNotFound(f"Could not find style class {cls!r} in style module.")


def get_all_styles():
    """Return a generator for all styles by name, both builtin and plugin."""
    for v in STYLES.values():
        yield v[1]
    for name, _ in find_plugin_styles():
        yield name
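Both lookup helpers above are straightforward to exercise; a short usage sketch under the vendored import path introduced by this commit (the unknown style name is hypothetical, for illustration):

from pip._vendor.pygments.styles import get_style_by_name, get_all_styles
from pip._vendor.pygments.util import ClassNotFound

style_cls = get_style_by_name('default')   # hits the _STYLE_NAME_TO_MODULE_MAP fast path
print(style_cls.background_color)          # e.g. '#f8f8f8' for the default style

print(sorted(get_all_styles())[:5])        # first few style names, builtin and plugin alike

try:
    get_style_by_name('no-such-style')     # assumed-unknown name
except ClassNotFound as exc:
    print(exc)                             # unknown names fall through to the import-based lookup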
Some files were not shown because too many files have changed in this diff.