Todo: integrate multiple platforms; fix the resource-contention problem caused by the SaiNiu thread; commit the local test-environment packaging, the release packaging script, and the production-environment packaging .bat; commit the 32-bit Python environment package; improve handling of multiple generated log files and refine packaging-log details
147
Utils/PythonNew32/Lib/__future__.py
Normal file
@@ -0,0 +1,147 @@
"""Record of phased-in incompatible language changes.

Each line is of the form:

    FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease ","
                              CompilerFlag ")"

where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples
of the same form as sys.version_info:

    (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int
     PY_MINOR_VERSION, # the 1; an int
     PY_MICRO_VERSION, # the 0; an int
     PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string
     PY_RELEASE_SERIAL # the 3; an int
    )

OptionalRelease records the first release in which

    from __future__ import FeatureName

was accepted.

In the case of MandatoryReleases that have not yet occurred,
MandatoryRelease predicts the release in which the feature will become part
of the language.

Else MandatoryRelease records when the feature became part of the language;
in releases at or after that, modules no longer need

    from __future__ import FeatureName

to use the feature in question, but may continue to use such imports.

MandatoryRelease may also be None, meaning that a planned feature got
dropped or that the release version is undetermined.

Instances of class _Feature have two corresponding methods,
.getOptionalRelease() and .getMandatoryRelease().

CompilerFlag is the (bitfield) flag that should be passed in the fourth
argument to the builtin function compile() to enable the feature in
dynamically compiled code. This flag is stored in the .compiler_flag
attribute on _Future instances. These values must match the appropriate
#defines of CO_xxx flags in Include/cpython/compile.h.

No feature line is ever to be deleted from this file.
"""

all_feature_names = [
    "nested_scopes",
    "generators",
    "division",
    "absolute_import",
    "with_statement",
    "print_function",
    "unicode_literals",
    "barry_as_FLUFL",
    "generator_stop",
    "annotations",
]

__all__ = ["all_feature_names"] + all_feature_names

# The CO_xxx symbols are defined here under the same names defined in
# code.h and used by compile.h, so that an editor search will find them here.
# However, they're not exported in __all__, because they don't really belong to
# this module.
CO_NESTED = 0x0010                      # nested_scopes
CO_GENERATOR_ALLOWED = 0                # generators (obsolete, was 0x1000)
CO_FUTURE_DIVISION = 0x20000            # division
CO_FUTURE_ABSOLUTE_IMPORT = 0x40000     # perform absolute imports by default
CO_FUTURE_WITH_STATEMENT = 0x80000      # with statement
CO_FUTURE_PRINT_FUNCTION = 0x100000     # print function
CO_FUTURE_UNICODE_LITERALS = 0x200000   # unicode string literals
CO_FUTURE_BARRY_AS_BDFL = 0x400000
CO_FUTURE_GENERATOR_STOP = 0x800000     # StopIteration becomes RuntimeError in generators
CO_FUTURE_ANNOTATIONS = 0x1000000       # annotations become strings at runtime


class _Feature:

    def __init__(self, optionalRelease, mandatoryRelease, compiler_flag):
        self.optional = optionalRelease
        self.mandatory = mandatoryRelease
        self.compiler_flag = compiler_flag

    def getOptionalRelease(self):
        """Return first release in which this feature was recognized.

        This is a 5-tuple, of the same form as sys.version_info.
        """
        return self.optional

    def getMandatoryRelease(self):
        """Return release in which this feature will become mandatory.

        This is a 5-tuple, of the same form as sys.version_info, or, if
        the feature was dropped, or the release date is undetermined, is None.
        """
        return self.mandatory

    def __repr__(self):
        return "_Feature" + repr((self.optional,
                                  self.mandatory,
                                  self.compiler_flag))


nested_scopes = _Feature((2, 1, 0, "beta", 1),
                         (2, 2, 0, "alpha", 0),
                         CO_NESTED)

generators = _Feature((2, 2, 0, "alpha", 1),
                      (2, 3, 0, "final", 0),
                      CO_GENERATOR_ALLOWED)

division = _Feature((2, 2, 0, "alpha", 2),
                    (3, 0, 0, "alpha", 0),
                    CO_FUTURE_DIVISION)

absolute_import = _Feature((2, 5, 0, "alpha", 1),
                           (3, 0, 0, "alpha", 0),
                           CO_FUTURE_ABSOLUTE_IMPORT)

with_statement = _Feature((2, 5, 0, "alpha", 1),
                          (2, 6, 0, "alpha", 0),
                          CO_FUTURE_WITH_STATEMENT)

print_function = _Feature((2, 6, 0, "alpha", 2),
                          (3, 0, 0, "alpha", 0),
                          CO_FUTURE_PRINT_FUNCTION)

unicode_literals = _Feature((2, 6, 0, "alpha", 2),
                            (3, 0, 0, "alpha", 0),
                            CO_FUTURE_UNICODE_LITERALS)

barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2),
                          (4, 0, 0, "alpha", 0),
                          CO_FUTURE_BARRY_AS_BDFL)

generator_stop = _Feature((3, 5, 0, "beta", 1),
                          (3, 7, 0, "alpha", 0),
                          CO_FUTURE_GENERATOR_STOP)

annotations = _Feature((3, 7, 0, "beta", 1),
                       None,
                       CO_FUTURE_ANNOTATIONS)
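The `_Feature` records above can be queried at runtime through the standard `__future__` module; a minimal sketch of how the release tuples and compiler flags are used:

```python
import __future__

# Each feature exposes the release that first accepted the import and the
# compiler flag that enables it in dynamically compiled code.
feat = __future__.division
print(feat.getOptionalRelease())        # (2, 2, 0, 'alpha', 2)

# The flag can be passed to compile(); for long-mandatory features like
# true division in Python 3 it is effectively a no-op.
code = compile("x = 1 / 2", "<demo>", "exec", flags=feat.compiler_flag)
ns = {}
exec(code, ns)
print(ns["x"])                          # 0.5
```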
16
Utils/PythonNew32/Lib/__hello__.py
Normal file
@@ -0,0 +1,16 @@
initialized = True

class TestFrozenUtf8_1:
    """\u00b6"""

class TestFrozenUtf8_2:
    """\u03c0"""

class TestFrozenUtf8_4:
    """\U0001f600"""

def main():
    print("Hello world!")

if __name__ == '__main__':
    main()
7
Utils/PythonNew32/Lib/__phello__/__init__.py
Normal file
@@ -0,0 +1,7 @@
initialized = True

def main():
    print("Hello world!")

if __name__ == '__main__':
    main()
0
Utils/PythonNew32/Lib/__phello__/ham/__init__.py
Normal file
0
Utils/PythonNew32/Lib/__phello__/ham/eggs.py
Normal file
7
Utils/PythonNew32/Lib/__phello__/spam.py
Normal file
@@ -0,0 +1,7 @@
initialized = True

def main():
    print("Hello world!")

if __name__ == '__main__':
    main()
108
Utils/PythonNew32/Lib/_aix_support.py
Normal file
@@ -0,0 +1,108 @@
"""Shared AIX support functions."""

import sys
import sysconfig


# Taken from _osx_support _read_output function
def _read_cmd_output(commandstring, capture_stderr=False):
    """Output from successful command execution or None"""
    # Similar to os.popen(commandstring, "r").read(),
    # but without actually using os.popen because that
    # function is not usable during python bootstrap.
    import os
    import contextlib
    fp = open("/tmp/_aix_support.%s"%(
        os.getpid(),), "w+b")

    with contextlib.closing(fp) as fp:
        if capture_stderr:
            cmd = "%s >'%s' 2>&1" % (commandstring, fp.name)
        else:
            cmd = "%s 2>/dev/null >'%s'" % (commandstring, fp.name)
        return fp.read() if not os.system(cmd) else None


def _aix_tag(vrtl, bd):
    # type: (List[int], int) -> str
    # Infer the ABI bitwidth from maxsize (assuming 64 bit as the default)
    _sz = 32 if sys.maxsize == (2**31-1) else 64
    _bd = bd if bd != 0 else 9988
    # vrtl[version, release, technology_level]
    return "aix-{:1x}{:1d}{:02d}-{:04d}-{}".format(vrtl[0], vrtl[1], vrtl[2], _bd, _sz)


# extract version, release and technology level from a VRMF string
def _aix_vrtl(vrmf):
    # type: (str) -> List[int]
    v, r, tl = vrmf.split(".")[:3]
    return [int(v[-1]), int(r), int(tl)]


def _aix_bos_rte():
    # type: () -> Tuple[str, int]
    """
    Return a Tuple[str, int] e.g., ['7.1.4.34', 1806]
    The fileset bos.rte represents the current AIX run-time level. It's VRMF and
    builddate reflect the current ABI levels of the runtime environment.
    If no builddate is found give a value that will satisfy pep425 related queries
    """
    # All AIX systems to have lslpp installed in this location
    # subprocess may not be available during python bootstrap
    try:
        import subprocess
        out = subprocess.check_output(["/usr/bin/lslpp", "-Lqc", "bos.rte"])
    except ImportError:
        out = _read_cmd_output("/usr/bin/lslpp -Lqc bos.rte")
    out = out.decode("utf-8")
    out = out.strip().split(":")  # type: ignore
    _bd = int(out[-1]) if out[-1] != '' else 9988
    return (str(out[2]), _bd)


def aix_platform():
    # type: () -> str
    """
    AIX filesets are identified by four decimal values: V.R.M.F.
    V (version) and R (release) can be retrieved using ``uname``
    Since 2007, starting with AIX 5.3 TL7, the M value has been
    included with the fileset bos.rte and represents the Technology
    Level (TL) of AIX. The F (Fix) value also increases, but is not
    relevant for comparing releases and binary compatibility.
    For binary compatibility the so-called builddate is needed.
    Again, the builddate of an AIX release is associated with bos.rte.
    AIX ABI compatibility is described as guaranteed at: https://www.ibm.com/\
support/knowledgecenter/en/ssw_aix_72/install/binary_compatability.html

    For pep425 purposes the AIX platform tag becomes:
    "aix-{:1x}{:1d}{:02d}-{:04d}-{}".format(v, r, tl, builddate, bitsize)
    e.g., "aix-6107-1415-32" for AIX 6.1 TL7 bd 1415, 32-bit
    and, "aix-6107-1415-64" for AIX 6.1 TL7 bd 1415, 64-bit
    """
    vrmf, bd = _aix_bos_rte()
    return _aix_tag(_aix_vrtl(vrmf), bd)


# extract vrtl from the BUILD_GNU_TYPE as an int
def _aix_bgt():
    # type: () -> List[int]
    gnu_type = sysconfig.get_config_var("BUILD_GNU_TYPE")
    if not gnu_type:
        raise ValueError("BUILD_GNU_TYPE is not defined")
    return _aix_vrtl(vrmf=gnu_type)


def aix_buildtag():
    # type: () -> str
    """
    Return the platform_tag of the system Python was built on.
    """
    # AIX_BUILDDATE is defined by configure with:
    # lslpp -Lcq bos.rte | awk -F: '{ print $NF }'
    build_date = sysconfig.get_config_var("AIX_BUILDDATE")
    try:
        build_date = int(build_date)
    except (ValueError, TypeError):
        raise ValueError(f"AIX_BUILDDATE is not defined or invalid: "
                         f"{build_date!r}")
    return _aix_tag(_aix_bgt(), build_date)
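The VRMF-to-platform-tag logic above can be illustrated off-AIX by replaying `_aix_vrtl` and `_aix_tag` on a sample VRMF string (hypothetical standalone helper names; the real module reads the VRMF from `lslpp` on an AIX host):

```python
def vrtl_from_vrmf(vrmf):
    # "7.1.4.34" -> [7, 1, 4]: last digit of V, then R and TL as ints,
    # mirroring _aix_vrtl above.
    v, r, tl = vrmf.split(".")[:3]
    return [int(v[-1]), int(r), int(tl)]

def tag(vrtl, bd, bits=64):
    # Same format string as _aix_tag: V as one hex digit, R as one digit,
    # TL zero-padded to two, builddate to four, then the ABI bitwidth.
    return "aix-{:1x}{:1d}{:02d}-{:04d}-{}".format(vrtl[0], vrtl[1], vrtl[2], bd, bits)

print(tag(vrtl_from_vrmf("7.1.4.34"), 1806))  # aix-7104-1806-64
```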
181
Utils/PythonNew32/Lib/_android_support.py
Normal file
@@ -0,0 +1,181 @@
import io
import sys
from threading import RLock
from time import sleep, time

# The maximum length of a log message in bytes, including the level marker and
# tag, is defined as LOGGER_ENTRY_MAX_PAYLOAD at
# https://cs.android.com/android/platform/superproject/+/android-14.0.0_r1:system/logging/liblog/include/log/log.h;l=71.
# Messages longer than this will be truncated by logcat. This limit has already
# been reduced at least once in the history of Android (from 4076 to 4068 between
# API level 23 and 26), so leave some headroom.
MAX_BYTES_PER_WRITE = 4000

# UTF-8 uses a maximum of 4 bytes per character, so limiting text writes to this
# size ensures that we can always avoid exceeding MAX_BYTES_PER_WRITE.
# However, if the actual number of bytes per character is smaller than that,
# then we may still join multiple consecutive text writes into binary
# writes containing a larger number of characters.
MAX_CHARS_PER_WRITE = MAX_BYTES_PER_WRITE // 4


# When embedded in an app on current versions of Android, there's no easy way to
# monitor the C-level stdout and stderr. The testbed comes with a .c file to
# redirect them to the system log using a pipe, but that wouldn't be convenient
# or appropriate for all apps. So we redirect at the Python level instead.
def init_streams(android_log_write, stdout_prio, stderr_prio):
    if sys.executable:
        return  # Not embedded in an app.

    global logcat
    logcat = Logcat(android_log_write)

    sys.stdout = TextLogStream(
        stdout_prio, "python.stdout", sys.stdout.fileno())
    sys.stderr = TextLogStream(
        stderr_prio, "python.stderr", sys.stderr.fileno())


class TextLogStream(io.TextIOWrapper):
    def __init__(self, prio, tag, fileno=None, **kwargs):
        # The default is surrogateescape for stdout and backslashreplace for
        # stderr, but in the context of an Android log, readability is more
        # important than reversibility.
        kwargs.setdefault("encoding", "UTF-8")
        kwargs.setdefault("errors", "backslashreplace")

        super().__init__(BinaryLogStream(prio, tag, fileno), **kwargs)
        self._lock = RLock()
        self._pending_bytes = []
        self._pending_bytes_count = 0

    def __repr__(self):
        return f"<TextLogStream {self.buffer.tag!r}>"

    def write(self, s):
        if not isinstance(s, str):
            raise TypeError(
                f"write() argument must be str, not {type(s).__name__}")

        # In case `s` is a str subclass that writes itself to stdout or stderr
        # when we call its methods, convert it to an actual str.
        s = str.__str__(s)

        # We want to emit one log message per line wherever possible, so split
        # the string into lines first. Note that "".splitlines() == [], so
        # nothing will be logged for an empty string.
        with self._lock:
            for line in s.splitlines(keepends=True):
                while line:
                    chunk = line[:MAX_CHARS_PER_WRITE]
                    line = line[MAX_CHARS_PER_WRITE:]
                    self._write_chunk(chunk)

        return len(s)

    # The size and behavior of TextIOWrapper's buffer is not part of its public
    # API, so we handle buffering ourselves to avoid truncation.
    def _write_chunk(self, s):
        b = s.encode(self.encoding, self.errors)
        if self._pending_bytes_count + len(b) > MAX_BYTES_PER_WRITE:
            self.flush()

        self._pending_bytes.append(b)
        self._pending_bytes_count += len(b)
        if (
            self.write_through
            or b.endswith(b"\n")
            or self._pending_bytes_count > MAX_BYTES_PER_WRITE
        ):
            self.flush()

    def flush(self):
        with self._lock:
            self.buffer.write(b"".join(self._pending_bytes))
            self._pending_bytes.clear()
            self._pending_bytes_count = 0

    # Since this is a line-based logging system, line buffering cannot be turned
    # off, i.e. a newline always causes a flush.
    @property
    def line_buffering(self):
        return True


class BinaryLogStream(io.RawIOBase):
    def __init__(self, prio, tag, fileno=None):
        self.prio = prio
        self.tag = tag
        self._fileno = fileno

    def __repr__(self):
        return f"<BinaryLogStream {self.tag!r}>"

    def writable(self):
        return True

    def write(self, b):
        if type(b) is not bytes:
            try:
                b = bytes(memoryview(b))
            except TypeError:
                raise TypeError(
                    f"write() argument must be bytes-like, not {type(b).__name__}"
                ) from None

        # Writing an empty string to the stream should have no effect.
        if b:
            logcat.write(self.prio, self.tag, b)
        return len(b)

    # This is needed by the test suite --timeout option, which uses faulthandler.
    def fileno(self):
        if self._fileno is None:
            raise io.UnsupportedOperation("fileno")
        return self._fileno


# When a large volume of data is written to logcat at once, e.g. when a test
# module fails in --verbose3 mode, there's a risk of overflowing logcat's own
# buffer and losing messages. We avoid this by imposing a rate limit using the
# token bucket algorithm, based on a conservative estimate of how fast `adb
# logcat` can consume data.
MAX_BYTES_PER_SECOND = 1024 * 1024

# The logcat buffer size of a device can be determined by running `logcat -g`.
# We set the token bucket size to half of the buffer size of our current minimum
# API level, because other things on the system will be producing messages as
# well.
BUCKET_SIZE = 128 * 1024

# https://cs.android.com/android/platform/superproject/+/android-14.0.0_r1:system/logging/liblog/include/log/log_read.h;l=39
PER_MESSAGE_OVERHEAD = 28


class Logcat:
    def __init__(self, android_log_write):
        self.android_log_write = android_log_write
        self._lock = RLock()
        self._bucket_level = 0
        self._prev_write_time = time()

    def write(self, prio, tag, message):
        # Encode null bytes using "modified UTF-8" to avoid them truncating the
        # message.
        message = message.replace(b"\x00", b"\xc0\x80")

        with self._lock:
            now = time()
            self._bucket_level += (
                (now - self._prev_write_time) * MAX_BYTES_PER_SECOND)

            # If the bucket level is still below zero, the clock must have gone
            # backwards, so reset it to zero and continue.
            self._bucket_level = max(0, min(self._bucket_level, BUCKET_SIZE))
            self._prev_write_time = now

            self._bucket_level -= PER_MESSAGE_OVERHEAD + len(tag) + len(message)
            if self._bucket_level < 0:
                sleep(-self._bucket_level / MAX_BYTES_PER_SECOND)

            self.android_log_write(prio, tag, message)
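The token-bucket rate limit used by `Logcat.write()` above can be sketched in isolation with an injected clock so its behaviour is deterministic (hypothetical class and parameter names; the real code calls `time()`/`sleep()` directly):

```python
class TokenBucket:
    def __init__(self, rate, size, now):
        self.rate = rate    # tokens (bytes) refilled per second
        self.size = size    # maximum bucket level
        self.level = 0      # like Logcat, the bucket starts empty
        self.prev = now

    def cost(self, nbytes, now):
        """Return the seconds a writer should sleep before sending nbytes."""
        # Refill proportionally to elapsed time, clamp to [0, size]
        # (the lower clamp handles a clock that went backwards).
        self.level += (now - self.prev) * self.rate
        self.level = max(0, min(self.level, self.size))
        self.prev = now
        # Spend tokens; a negative level translates into a sleep.
        self.level -= nbytes
        return -self.level / self.rate if self.level < 0 else 0.0

bucket = TokenBucket(rate=1000, size=500, now=0.0)
print(bucket.cost(400, now=1.0))   # bucket refilled to its 500 cap: 0.0
print(bucket.cost(400, now=1.0))   # 300 tokens short: 0.3
```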
66
Utils/PythonNew32/Lib/_apple_support.py
Normal file
@@ -0,0 +1,66 @@
import io
import sys


def init_streams(log_write, stdout_level, stderr_level):
    # Redirect stdout and stderr to the Apple system log. This method is
    # invoked by init_apple_streams() (initconfig.c) if config->use_system_logger
    # is enabled.
    sys.stdout = SystemLog(log_write, stdout_level, errors=sys.stderr.errors)
    sys.stderr = SystemLog(log_write, stderr_level, errors=sys.stderr.errors)


class SystemLog(io.TextIOWrapper):
    def __init__(self, log_write, level, **kwargs):
        kwargs.setdefault("encoding", "UTF-8")
        kwargs.setdefault("line_buffering", True)
        super().__init__(LogStream(log_write, level), **kwargs)

    def __repr__(self):
        return f"<SystemLog (level {self.buffer.level})>"

    def write(self, s):
        if not isinstance(s, str):
            raise TypeError(
                f"write() argument must be str, not {type(s).__name__}")

        # In case `s` is a str subclass that writes itself to stdout or stderr
        # when we call its methods, convert it to an actual str.
        s = str.__str__(s)

        # We want to emit one log message per line, so split
        # the string before sending it to the superclass.
        for line in s.splitlines(keepends=True):
            super().write(line)

        return len(s)


class LogStream(io.RawIOBase):
    def __init__(self, log_write, level):
        self.log_write = log_write
        self.level = level

    def __repr__(self):
        return f"<LogStream (level {self.level!r})>"

    def writable(self):
        return True

    def write(self, b):
        if type(b) is not bytes:
            try:
                b = bytes(memoryview(b))
            except TypeError:
                raise TypeError(
                    f"write() argument must be bytes-like, not {type(b).__name__}"
                ) from None

        # Writing an empty string to the stream should have no effect.
        if b:
            # Encode null bytes using "modified UTF-8" to avoid truncating the
            # message. This should not affect the return value, as the caller
            # may be expecting it to match the length of the input.
            self.log_write(self.level, b.replace(b"\x00", b"\xc0\x80"))

        return len(b)
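The "modified UTF-8" trick used by both `LogStream.write()` here and `Logcat.write()` in `_android_support.py` is a one-line byte substitution: NUL bytes become the two-byte sequence `C0 80`, so C log APIs that treat the message as a NUL-terminated string cannot truncate it:

```python
# NUL bytes would terminate the message early in a C string API, so they are
# rewritten as the overlong two-byte encoding C0 80 ("modified UTF-8").
msg = b"before\x00after"
encoded = msg.replace(b"\x00", b"\xc0\x80")
print(encoded)             # b'before\xc0\x80after'
print(b"\x00" in encoded)  # False
```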
1178
Utils/PythonNew32/Lib/_collections_abc.py
Normal file
File diff suppressed because it is too large
112
Utils/PythonNew32/Lib/_colorize.py
Normal file
@@ -0,0 +1,112 @@
from __future__ import annotations
import io
import os
import sys

COLORIZE = True

# types
if False:
    from typing import IO


class ANSIColors:
    RESET = "\x1b[0m"

    BLACK = "\x1b[30m"
    BLUE = "\x1b[34m"
    CYAN = "\x1b[36m"
    GREEN = "\x1b[32m"
    MAGENTA = "\x1b[35m"
    RED = "\x1b[31m"
    WHITE = "\x1b[37m"  # more like LIGHT GRAY
    YELLOW = "\x1b[33m"

    BOLD_BLACK = "\x1b[1;30m"  # DARK GRAY
    BOLD_BLUE = "\x1b[1;34m"
    BOLD_CYAN = "\x1b[1;36m"
    BOLD_GREEN = "\x1b[1;32m"
    BOLD_MAGENTA = "\x1b[1;35m"
    BOLD_RED = "\x1b[1;31m"
    BOLD_WHITE = "\x1b[1;37m"  # actual WHITE
    BOLD_YELLOW = "\x1b[1;33m"

    # intense = like bold but without being bold
    INTENSE_BLACK = "\x1b[90m"
    INTENSE_BLUE = "\x1b[94m"
    INTENSE_CYAN = "\x1b[96m"
    INTENSE_GREEN = "\x1b[92m"
    INTENSE_MAGENTA = "\x1b[95m"
    INTENSE_RED = "\x1b[91m"
    INTENSE_WHITE = "\x1b[97m"
    INTENSE_YELLOW = "\x1b[93m"

    BACKGROUND_BLACK = "\x1b[40m"
    BACKGROUND_BLUE = "\x1b[44m"
    BACKGROUND_CYAN = "\x1b[46m"
    BACKGROUND_GREEN = "\x1b[42m"
    BACKGROUND_MAGENTA = "\x1b[45m"
    BACKGROUND_RED = "\x1b[41m"
    BACKGROUND_WHITE = "\x1b[47m"
    BACKGROUND_YELLOW = "\x1b[43m"

    INTENSE_BACKGROUND_BLACK = "\x1b[100m"
    INTENSE_BACKGROUND_BLUE = "\x1b[104m"
    INTENSE_BACKGROUND_CYAN = "\x1b[106m"
    INTENSE_BACKGROUND_GREEN = "\x1b[102m"
    INTENSE_BACKGROUND_MAGENTA = "\x1b[105m"
    INTENSE_BACKGROUND_RED = "\x1b[101m"
    INTENSE_BACKGROUND_WHITE = "\x1b[107m"
    INTENSE_BACKGROUND_YELLOW = "\x1b[103m"


NoColors = ANSIColors()

for attr in dir(NoColors):
    if not attr.startswith("__"):
        setattr(NoColors, attr, "")


def get_colors(
    colorize: bool = False, *, file: IO[str] | IO[bytes] | None = None
) -> ANSIColors:
    if colorize or can_colorize(file=file):
        return ANSIColors()
    else:
        return NoColors


def can_colorize(*, file: IO[str] | IO[bytes] | None = None) -> bool:
    if file is None:
        file = sys.stdout

    if not sys.flags.ignore_environment:
        if os.environ.get("PYTHON_COLORS") == "0":
            return False
        if os.environ.get("PYTHON_COLORS") == "1":
            return True
    if os.environ.get("NO_COLOR"):
        return False
    if not COLORIZE:
        return False
    if os.environ.get("FORCE_COLOR"):
        return True
    if os.environ.get("TERM") == "dumb":
        return False

    if not hasattr(file, "fileno"):
        return False

    if sys.platform == "win32":
        try:
            import nt

            if not nt._supports_virtual_terminal():
                return False
        except (ImportError, AttributeError):
            return False

    try:
        return os.isatty(file.fileno())
    except io.UnsupportedOperation:
        return hasattr(file, "isatty") and file.isatty()
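The environment-variable precedence implemented by `can_colorize()` above can be distilled into a small pure function for illustration (hypothetical helper; the env dict and tty flag are passed explicitly for testability, whereas the real function reads `os.environ` and checks `sys.flags` and the file's `fileno()`):

```python
def color_env_decision(env, isatty):
    # PYTHON_COLORS wins outright, then NO_COLOR, then FORCE_COLOR,
    # then TERM=dumb; only then does the tty check matter.
    if env.get("PYTHON_COLORS") == "0":
        return False
    if env.get("PYTHON_COLORS") == "1":
        return True
    if env.get("NO_COLOR"):
        return False
    if env.get("FORCE_COLOR"):
        return True
    if env.get("TERM") == "dumb":
        return False
    return isatty

print(color_env_decision({"NO_COLOR": "1", "FORCE_COLOR": "1"}, True))    # NO_COLOR wins: False
print(color_env_decision({"PYTHON_COLORS": "1", "NO_COLOR": "1"}, False)) # True
```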
251
Utils/PythonNew32/Lib/_compat_pickle.py
Normal file
@@ -0,0 +1,251 @@
# This module is used to map the old Python 2 names to the new names used in
# Python 3 for the pickle module. This needed to make pickle streams
# generated with Python 2 loadable by Python 3.

# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import
# lib2to3 and use the mapping defined there, because lib2to3 uses pickle.
# Thus, this could cause the module to be imported recursively.
IMPORT_MAPPING = {
    '__builtin__' : 'builtins',
    'copy_reg': 'copyreg',
    'Queue': 'queue',
    'SocketServer': 'socketserver',
    'ConfigParser': 'configparser',
    'repr': 'reprlib',
    'tkFileDialog': 'tkinter.filedialog',
    'tkSimpleDialog': 'tkinter.simpledialog',
    'tkColorChooser': 'tkinter.colorchooser',
    'tkCommonDialog': 'tkinter.commondialog',
    'Dialog': 'tkinter.dialog',
    'Tkdnd': 'tkinter.dnd',
    'tkFont': 'tkinter.font',
    'tkMessageBox': 'tkinter.messagebox',
    'ScrolledText': 'tkinter.scrolledtext',
    'Tkconstants': 'tkinter.constants',
    'ttk': 'tkinter.ttk',
    'Tkinter': 'tkinter',
    'markupbase': '_markupbase',
    '_winreg': 'winreg',
    'thread': '_thread',
    'dummy_thread': '_dummy_thread',
    'dbhash': 'dbm.bsd',
    'dumbdbm': 'dbm.dumb',
    'dbm': 'dbm.ndbm',
    'gdbm': 'dbm.gnu',
    'xmlrpclib': 'xmlrpc.client',
    'SimpleXMLRPCServer': 'xmlrpc.server',
    'httplib': 'http.client',
    'htmlentitydefs' : 'html.entities',
    'HTMLParser' : 'html.parser',
    'Cookie': 'http.cookies',
    'cookielib': 'http.cookiejar',
    'BaseHTTPServer': 'http.server',
    'test.test_support': 'test.support',
    'commands': 'subprocess',
    'urlparse' : 'urllib.parse',
    'robotparser' : 'urllib.robotparser',
    'urllib2': 'urllib.request',
    'anydbm': 'dbm',
    '_abcoll' : 'collections.abc',
}


# This contains rename rules that are easy to handle. We ignore the more
# complex stuff (e.g. mapping the names in the urllib and types modules).
# These rules should be run before import names are fixed.
NAME_MAPPING = {
    ('__builtin__', 'xrange'): ('builtins', 'range'),
    ('__builtin__', 'reduce'): ('functools', 'reduce'),
    ('__builtin__', 'intern'): ('sys', 'intern'),
    ('__builtin__', 'unichr'): ('builtins', 'chr'),
    ('__builtin__', 'unicode'): ('builtins', 'str'),
    ('__builtin__', 'long'): ('builtins', 'int'),
    ('itertools', 'izip'): ('builtins', 'zip'),
    ('itertools', 'imap'): ('builtins', 'map'),
    ('itertools', 'ifilter'): ('builtins', 'filter'),
    ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'),
    ('itertools', 'izip_longest'): ('itertools', 'zip_longest'),
    ('UserDict', 'IterableUserDict'): ('collections', 'UserDict'),
    ('UserList', 'UserList'): ('collections', 'UserList'),
    ('UserString', 'UserString'): ('collections', 'UserString'),
    ('whichdb', 'whichdb'): ('dbm', 'whichdb'),
    ('_socket', 'fromfd'): ('socket', 'fromfd'),
    ('_multiprocessing', 'Connection'): ('multiprocessing.connection', 'Connection'),
    ('multiprocessing.process', 'Process'): ('multiprocessing.context', 'Process'),
    ('multiprocessing.forking', 'Popen'): ('multiprocessing.popen_fork', 'Popen'),
    ('urllib', 'ContentTooShortError'): ('urllib.error', 'ContentTooShortError'),
    ('urllib', 'getproxies'): ('urllib.request', 'getproxies'),
    ('urllib', 'pathname2url'): ('urllib.request', 'pathname2url'),
    ('urllib', 'quote_plus'): ('urllib.parse', 'quote_plus'),
    ('urllib', 'quote'): ('urllib.parse', 'quote'),
    ('urllib', 'unquote_plus'): ('urllib.parse', 'unquote_plus'),
    ('urllib', 'unquote'): ('urllib.parse', 'unquote'),
    ('urllib', 'url2pathname'): ('urllib.request', 'url2pathname'),
    ('urllib', 'urlcleanup'): ('urllib.request', 'urlcleanup'),
    ('urllib', 'urlencode'): ('urllib.parse', 'urlencode'),
    ('urllib', 'urlopen'): ('urllib.request', 'urlopen'),
    ('urllib', 'urlretrieve'): ('urllib.request', 'urlretrieve'),
    ('urllib2', 'HTTPError'): ('urllib.error', 'HTTPError'),
    ('urllib2', 'URLError'): ('urllib.error', 'URLError'),
}

PYTHON2_EXCEPTIONS = (
    "ArithmeticError",
    "AssertionError",
    "AttributeError",
    "BaseException",
    "BufferError",
    "BytesWarning",
    "DeprecationWarning",
    "EOFError",
    "EnvironmentError",
    "Exception",
    "FloatingPointError",
    "FutureWarning",
    "GeneratorExit",
    "IOError",
    "ImportError",
    "ImportWarning",
    "IndentationError",
    "IndexError",
    "KeyError",
    "KeyboardInterrupt",
    "LookupError",
    "MemoryError",
    "NameError",
    "NotImplementedError",
    "OSError",
    "OverflowError",
    "PendingDeprecationWarning",
    "ReferenceError",
    "RuntimeError",
    "RuntimeWarning",
    # StandardError is gone in Python 3, so we map it to Exception
    "StopIteration",
    "SyntaxError",
    "SyntaxWarning",
    "SystemError",
    "SystemExit",
    "TabError",
    "TypeError",
    "UnboundLocalError",
    "UnicodeDecodeError",
    "UnicodeEncodeError",
    "UnicodeError",
    "UnicodeTranslateError",
    "UnicodeWarning",
    "UserWarning",
    "ValueError",
    "Warning",
    "ZeroDivisionError",
)

try:
    WindowsError
except NameError:
    pass
else:
    PYTHON2_EXCEPTIONS += ("WindowsError",)

for excname in PYTHON2_EXCEPTIONS:
    NAME_MAPPING[("exceptions", excname)] = ("builtins", excname)

MULTIPROCESSING_EXCEPTIONS = (
    'AuthenticationError',
    'BufferTooShort',
    'ProcessError',
    'TimeoutError',
)

for excname in MULTIPROCESSING_EXCEPTIONS:
    NAME_MAPPING[("multiprocessing", excname)] = ("multiprocessing.context", excname)

# Same, but for 3.x to 2.x
REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items())
assert len(REVERSE_IMPORT_MAPPING) == len(IMPORT_MAPPING)
REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items())
assert len(REVERSE_NAME_MAPPING) == len(NAME_MAPPING)

# Non-mutual mappings.

IMPORT_MAPPING.update({
    'cPickle': 'pickle',
    '_elementtree': 'xml.etree.ElementTree',
    'FileDialog': 'tkinter.filedialog',
    'SimpleDialog': 'tkinter.simpledialog',
    'DocXMLRPCServer': 'xmlrpc.server',
    'SimpleHTTPServer': 'http.server',
    'CGIHTTPServer': 'http.server',
    # For compatibility with broken pickles saved in old Python 3 versions
    'UserDict': 'collections',
    'UserList': 'collections',
    'UserString': 'collections',
    'whichdb': 'dbm',
    'StringIO': 'io',
    'cStringIO': 'io',
})

REVERSE_IMPORT_MAPPING.update({
    '_bz2': 'bz2',
    '_dbm': 'dbm',
    '_functools': 'functools',
    '_gdbm': 'gdbm',
    '_pickle': 'pickle',
})

NAME_MAPPING.update({
    ('__builtin__', 'basestring'): ('builtins', 'str'),
    ('exceptions', 'StandardError'): ('builtins', 'Exception'),
    ('UserDict', 'UserDict'): ('collections', 'UserDict'),
    ('socket', '_socketobject'): ('socket', 'SocketType'),
})

REVERSE_NAME_MAPPING.update({
    ('_functools', 'reduce'): ('__builtin__', 'reduce'),
    ('tkinter.filedialog', 'FileDialog'): ('FileDialog', 'FileDialog'),
    ('tkinter.filedialog', 'LoadFileDialog'): ('FileDialog', 'LoadFileDialog'),
    ('tkinter.filedialog', 'SaveFileDialog'): ('FileDialog', 'SaveFileDialog'),
    ('tkinter.simpledialog', 'SimpleDialog'): ('SimpleDialog', 'SimpleDialog'),
    ('xmlrpc.server', 'ServerHTMLDoc'): ('DocXMLRPCServer', 'ServerHTMLDoc'),
    ('xmlrpc.server', 'XMLRPCDocGenerator'):
        ('DocXMLRPCServer', 'XMLRPCDocGenerator'),
    ('xmlrpc.server', 'DocXMLRPCRequestHandler'):
        ('DocXMLRPCServer', 'DocXMLRPCRequestHandler'),
    ('xmlrpc.server', 'DocXMLRPCServer'):
        ('DocXMLRPCServer', 'DocXMLRPCServer'),
    ('xmlrpc.server', 'DocCGIXMLRPCRequestHandler'):
        ('DocXMLRPCServer', 'DocCGIXMLRPCRequestHandler'),
    ('http.server', 'SimpleHTTPRequestHandler'):
        ('SimpleHTTPServer', 'SimpleHTTPRequestHandler'),
    ('http.server', 'CGIHTTPRequestHandler'):
        ('CGIHTTPServer', 'CGIHTTPRequestHandler'),
    ('_socket', 'socket'): ('socket', '_socketobject'),
})

PYTHON3_OSERROR_EXCEPTIONS = (
    'BrokenPipeError',
    'ChildProcessError',
    'ConnectionAbortedError',
    'ConnectionError',
    'ConnectionRefusedError',
    'ConnectionResetError',
    'FileExistsError',
    'FileNotFoundError',
    'InterruptedError',
    'IsADirectoryError',
    'NotADirectoryError',
    'PermissionError',
    'ProcessLookupError',
    'TimeoutError',
)

for excname in PYTHON3_OSERROR_EXCEPTIONS:
|
||||
REVERSE_NAME_MAPPING[('builtins', excname)] = ('exceptions', 'OSError')
|
||||
|
||||
PYTHON3_IMPORTERROR_EXCEPTIONS = (
|
||||
'ModuleNotFoundError',
|
||||
)
|
||||
|
||||
for excname in PYTHON3_IMPORTERROR_EXCEPTIONS:
|
||||
REVERSE_NAME_MAPPING[('builtins', excname)] = ('exceptions', 'ImportError')
|
||||
del excname
|
||||
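A minimal sketch (not part of `_compat_pickle.py`) of how pickle's `Unpickler.find_class()` consults these tables when loading a Python 2 pickle under Python 3 with `fix_imports=True`; the table contents here are reduced to two illustrative entries:

```python
# Tiny stand-ins for the full mapping tables defined above.
NAME_MAPPING = {("exceptions", "StandardError"): ("builtins", "Exception")}
IMPORT_MAPPING = {"cPickle": "pickle"}

def find_class(module, name):
    # Simplified mirror of Unpickler.find_class(): remap the exact
    # (module, name) pair first, then fall back to module-level renames.
    if (module, name) in NAME_MAPPING:
        module, name = NAME_MAPPING[(module, name)]
    elif module in IMPORT_MAPPING:
        module = IMPORT_MAPPING[module]
    return module, name

print(find_class("exceptions", "StandardError"))  # ('builtins', 'Exception')
print(find_class("cPickle", "dumps"))             # ('pickle', 'dumps')
```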
162  Utils/PythonNew32/Lib/_compression.py  Normal file
@@ -0,0 +1,162 @@
"""Internal classes used by the gzip, lzma and bz2 modules"""

import io
import sys

BUFFER_SIZE = io.DEFAULT_BUFFER_SIZE  # Compressed data read chunk size


class BaseStream(io.BufferedIOBase):
    """Mode-checking helper functions."""

    def _check_not_closed(self):
        if self.closed:
            raise ValueError("I/O operation on closed file")

    def _check_can_read(self):
        if not self.readable():
            raise io.UnsupportedOperation("File not open for reading")

    def _check_can_write(self):
        if not self.writable():
            raise io.UnsupportedOperation("File not open for writing")

    def _check_can_seek(self):
        if not self.readable():
            raise io.UnsupportedOperation("Seeking is only supported "
                                          "on files open for reading")
        if not self.seekable():
            raise io.UnsupportedOperation("The underlying file object "
                                          "does not support seeking")


class DecompressReader(io.RawIOBase):
    """Adapts the decompressor API to a RawIOBase reader API"""

    def readable(self):
        return True

    def __init__(self, fp, decomp_factory, trailing_error=(), **decomp_args):
        self._fp = fp
        self._eof = False
        self._pos = 0  # Current offset in decompressed stream

        # Set to size of decompressed stream once it is known, for SEEK_END
        self._size = -1

        # Save the decompressor factory and arguments.
        # If the file contains multiple compressed streams, each
        # stream will need a separate decompressor object. A new decompressor
        # object is also needed when implementing a backwards seek().
        self._decomp_factory = decomp_factory
        self._decomp_args = decomp_args
        self._decompressor = self._decomp_factory(**self._decomp_args)

        # Exception class to catch from decompressor signifying invalid
        # trailing data to ignore
        self._trailing_error = trailing_error

    def close(self):
        self._decompressor = None
        return super().close()

    def seekable(self):
        return self._fp.seekable()

    def readinto(self, b):
        with memoryview(b) as view, view.cast("B") as byte_view:
            data = self.read(len(byte_view))
            byte_view[:len(data)] = data
        return len(data)

    def read(self, size=-1):
        if size < 0:
            return self.readall()

        if not size or self._eof:
            return b""
        data = None  # Default if EOF is encountered
        # Depending on the input data, our call to the decompressor may not
        # return any data. In this case, try again after reading another block.
        while True:
            if self._decompressor.eof:
                rawblock = (self._decompressor.unused_data or
                            self._fp.read(BUFFER_SIZE))
                if not rawblock:
                    break
                # Continue to next stream.
                self._decompressor = self._decomp_factory(
                    **self._decomp_args)
                try:
                    data = self._decompressor.decompress(rawblock, size)
                except self._trailing_error:
                    # Trailing data isn't a valid compressed stream; ignore it.
                    break
            else:
                if self._decompressor.needs_input:
                    rawblock = self._fp.read(BUFFER_SIZE)
                    if not rawblock:
                        raise EOFError("Compressed file ended before the "
                                       "end-of-stream marker was reached")
                else:
                    rawblock = b""
                data = self._decompressor.decompress(rawblock, size)
            if data:
                break
        if not data:
            self._eof = True
            self._size = self._pos
            return b""
        self._pos += len(data)
        return data

    def readall(self):
        chunks = []
        # sys.maxsize means the max length of output buffer is unlimited,
        # so that the whole input buffer can be decompressed within one
        # .decompress() call.
        while data := self.read(sys.maxsize):
            chunks.append(data)

        return b"".join(chunks)

    # Rewind the file to the beginning of the data stream.
    def _rewind(self):
        self._fp.seek(0)
        self._eof = False
        self._pos = 0
        self._decompressor = self._decomp_factory(**self._decomp_args)

    def seek(self, offset, whence=io.SEEK_SET):
        # Recalculate offset as an absolute file position.
        if whence == io.SEEK_SET:
            pass
        elif whence == io.SEEK_CUR:
            offset = self._pos + offset
        elif whence == io.SEEK_END:
            # Seeking relative to EOF - we need to know the file's size.
            if self._size < 0:
                while self.read(io.DEFAULT_BUFFER_SIZE):
                    pass
            offset = self._size + offset
        else:
            raise ValueError("Invalid value for whence: {}".format(whence))

        # Make it so that offset is the number of bytes to skip forward.
        if offset < self._pos:
            self._rewind()
        else:
            offset -= self._pos

        # Read and discard data until we reach the desired position.
        while offset > 0:
            data = self.read(min(io.DEFAULT_BUFFER_SIZE, offset))
            if not data:
                break
            offset -= len(data)

        return self._pos

    def tell(self):
        """Return the current file position."""
        return self._pos
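A small usage sketch of the class above: `DecompressReader` is the machinery that lets `bz2.BZ2File` and friends read files containing several concatenated compressed streams. `_compression` is a private stdlib module, so this is illustrative rather than a supported API:

```python
import bz2
import io
from _compression import DecompressReader  # private stdlib module

# Two independent bz2 streams back to back, as produced by e.g. `bzip2`
# runs appended to the same file.
raw = bz2.compress(b"hello ") + bz2.compress(b"world")

# DecompressReader restarts a fresh BZ2Decompressor at each stream
# boundary (see the `unused_data` handling in read() above).
reader = DecompressReader(io.BytesIO(raw), bz2.BZ2Decompressor)
buffered = io.BufferedReader(reader)
data = buffered.read()
print(data)  # b'hello world'
```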
71  Utils/PythonNew32/Lib/_ios_support.py  Normal file
@@ -0,0 +1,71 @@
|
||||
import sys
|
||||
try:
|
||||
from ctypes import cdll, c_void_p, c_char_p, util
|
||||
except ImportError:
|
||||
# ctypes is an optional module. If it's not present, we're limited in what
|
||||
# we can tell about the system, but we don't want to prevent the module
|
||||
# from working.
|
||||
print("ctypes isn't available; iOS system calls will not be available", file=sys.stderr)
|
||||
objc = None
|
||||
else:
|
||||
# ctypes is available. Load the ObjC library, and wrap the objc_getClass,
|
||||
# sel_registerName methods
|
||||
lib = util.find_library("objc")
|
||||
if lib is None:
|
||||
# Failed to load the objc library
|
||||
raise ImportError("ObjC runtime library couldn't be loaded")
|
||||
|
||||
objc = cdll.LoadLibrary(lib)
|
||||
objc.objc_getClass.restype = c_void_p
|
||||
objc.objc_getClass.argtypes = [c_char_p]
|
||||
objc.sel_registerName.restype = c_void_p
|
||||
objc.sel_registerName.argtypes = [c_char_p]
|
||||
|
||||
|
||||
def get_platform_ios():
|
||||
# Determine if this is a simulator using the multiarch value
|
||||
is_simulator = sys.implementation._multiarch.endswith("simulator")
|
||||
|
||||
# We can't use ctypes; abort
|
||||
if not objc:
|
||||
return None
|
||||
|
||||
# Most of the methods return ObjC objects
|
||||
objc.objc_msgSend.restype = c_void_p
|
||||
# All the methods used have no arguments.
|
||||
objc.objc_msgSend.argtypes = [c_void_p, c_void_p]
|
||||
|
||||
# Equivalent of:
|
||||
# device = [UIDevice currentDevice]
|
||||
UIDevice = objc.objc_getClass(b"UIDevice")
|
||||
SEL_currentDevice = objc.sel_registerName(b"currentDevice")
|
||||
device = objc.objc_msgSend(UIDevice, SEL_currentDevice)
|
||||
|
||||
# Equivalent of:
|
||||
# device_systemVersion = [device systemVersion]
|
||||
SEL_systemVersion = objc.sel_registerName(b"systemVersion")
|
||||
device_systemVersion = objc.objc_msgSend(device, SEL_systemVersion)
|
||||
|
||||
# Equivalent of:
|
||||
# device_systemName = [device systemName]
|
||||
SEL_systemName = objc.sel_registerName(b"systemName")
|
||||
device_systemName = objc.objc_msgSend(device, SEL_systemName)
|
||||
|
||||
# Equivalent of:
|
||||
# device_model = [device model]
|
||||
SEL_model = objc.sel_registerName(b"model")
|
||||
device_model = objc.objc_msgSend(device, SEL_model)
|
||||
|
||||
# UTF8String returns a const char*;
|
||||
SEL_UTF8String = objc.sel_registerName(b"UTF8String")
|
||||
objc.objc_msgSend.restype = c_char_p
|
||||
|
||||
# Equivalent of:
|
||||
# system = [device_systemName UTF8String]
|
||||
# release = [device_systemVersion UTF8String]
|
||||
# model = [device_model UTF8String]
|
||||
system = objc.objc_msgSend(device_systemName, SEL_UTF8String).decode()
|
||||
release = objc.objc_msgSend(device_systemVersion, SEL_UTF8String).decode()
|
||||
model = objc.objc_msgSend(device_model, SEL_UTF8String).decode()
|
||||
|
||||
return system, release, model, is_simulator
|
||||
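A sketch of how a caller would consume the tuple returned by `get_platform_ios()` above. The `describe()` helper and the sample values are hypothetical, used only because the real function can run only on an iOS device:

```python
def describe(info):
    # get_platform_ios() returns None when ctypes (and thus the ObjC
    # runtime bridge) is unavailable.
    if info is None:
        return "ctypes unavailable"
    system, release, model, is_simulator = info
    suffix = " (simulator)" if is_simulator else ""
    return f"{system} {release} on {model}{suffix}"

# Illustrative values, not real device output.
print(describe(("iOS", "17.2", "iPhone", True)))  # iOS 17.2 on iPhone (simulator)
print(describe(None))                             # ctypes unavailable
```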
396  Utils/PythonNew32/Lib/_markupbase.py  Normal file
@@ -0,0 +1,396 @@
"""Shared support for scanning document type declarations in HTML and XHTML.

This module is used as a foundation for the html.parser module.  It has no
documented public API and should not be used directly.

"""

import re

_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match
_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match
_commentclose = re.compile(r'--\s*>')
_markedsectionclose = re.compile(r']\s*]\s*>')

# An analysis of the MS-Word extensions is available at
# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf

_msmarkedsectionclose = re.compile(r']\s*>')

del re


class ParserBase:
    """Parser base class which provides some common support methods used
    by the SGML/HTML and XHTML parsers."""

    def __init__(self):
        if self.__class__ is ParserBase:
            raise RuntimeError(
                "_markupbase.ParserBase must be subclassed")

    def reset(self):
        self.lineno = 1
        self.offset = 0

    def getpos(self):
        """Return current line number and offset."""
        return self.lineno, self.offset

    # Internal -- update line number and offset.  This should be
    # called for each piece of data exactly once, in order -- in other
    # words the concatenation of all the input strings to this
    # function should be exactly the entire input.
    def updatepos(self, i, j):
        if i >= j:
            return j
        rawdata = self.rawdata
        nlines = rawdata.count("\n", i, j)
        if nlines:
            self.lineno = self.lineno + nlines
            pos = rawdata.rindex("\n", i, j) # Should not fail
            self.offset = j-(pos+1)
        else:
            self.offset = self.offset + j-i
        return j

    _decl_otherchars = ''

    # Internal -- parse declaration (for use by subclasses).
    def parse_declaration(self, i):
        # This is some sort of declaration; in "HTML as
        # deployed," this should only be the document type
        # declaration ("<!DOCTYPE html...>").
        # ISO 8879:1986, however, has more complex
        # declaration syntax for elements in <!...>, including:
        # --comment--
        # [marked section]
        # name in the following list: ENTITY, DOCTYPE, ELEMENT,
        # ATTLIST, NOTATION, SHORTREF, USEMAP,
        # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM
        rawdata = self.rawdata
        j = i + 2
        assert rawdata[i:j] == "<!", "unexpected call to parse_declaration"
        if rawdata[j:j+1] == ">":
            # the empty comment <!>
            return j + 1
        if rawdata[j:j+1] in ("-", ""):
            # Start of comment followed by buffer boundary,
            # or just a buffer boundary.
            return -1
        # A simple, practical version could look like: ((name|stringlit) S*) + '>'
        n = len(rawdata)
        if rawdata[j:j+2] == '--': #comment
            # Locate --.*-- as the body of the comment
            return self.parse_comment(i)
        elif rawdata[j] == '[': #marked section
            # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section
            # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA
            # Note that this is extended by Microsoft Office "Save as Web" function
            # to include [if...] and [endif].
            return self.parse_marked_section(i)
        else: #all other declaration elements
            decltype, j = self._scan_name(j, i)
        if j < 0:
            return j
        if decltype == "doctype":
            self._decl_otherchars = ''
        while j < n:
            c = rawdata[j]
            if c == ">":
                # end of declaration syntax
                data = rawdata[i+2:j]
                if decltype == "doctype":
                    self.handle_decl(data)
                else:
                    # According to the HTML5 specs sections "8.2.4.44 Bogus
                    # comment state" and "8.2.4.45 Markup declaration open
                    # state", a comment token should be emitted.
                    # Calling unknown_decl provides more flexibility though.
                    self.unknown_decl(data)
                return j + 1
            if c in "\"'":
                m = _declstringlit_match(rawdata, j)
                if not m:
                    return -1 # incomplete
                j = m.end()
            elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ":
                name, j = self._scan_name(j, i)
            elif c in self._decl_otherchars:
                j = j + 1
            elif c == "[":
                # this could be handled in a separate doctype parser
                if decltype == "doctype":
                    j = self._parse_doctype_subset(j + 1, i)
                elif decltype in {"attlist", "linktype", "link", "element"}:
                    # must tolerate []'d groups in a content model in an element declaration
                    # also in data attribute specifications of attlist declaration
                    # also link type declaration subsets in linktype declarations
                    # also link attribute specification lists in link declarations
                    raise AssertionError("unsupported '[' char in %s declaration" % decltype)
                else:
                    raise AssertionError("unexpected '[' char in declaration")
            else:
                raise AssertionError("unexpected %r char in declaration" % rawdata[j])
            if j < 0:
                return j
        return -1 # incomplete

    # Internal -- parse a marked section
    # Override this to handle MS-word extension syntax <![if word]>content<![endif]>
    def parse_marked_section(self, i, report=1):
        rawdata = self.rawdata
        assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()"
        sectName, j = self._scan_name(i+3, i)
        if j < 0:
            return j
        if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}:
            # look for standard ]]> ending
            match = _markedsectionclose.search(rawdata, i+3)
        elif sectName in {"if", "else", "endif"}:
            # look for MS Office ]> ending
            match = _msmarkedsectionclose.search(rawdata, i+3)
        else:
            raise AssertionError(
                'unknown status keyword %r in marked section' % rawdata[i+3:j]
            )
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.unknown_decl(rawdata[i+3: j])
        return match.end(0)

    # Internal -- parse comment, return length or -1 if not terminated
    def parse_comment(self, i, report=1):
        rawdata = self.rawdata
        if rawdata[i:i+4] != '<!--':
            raise AssertionError('unexpected call to parse_comment()')
        match = _commentclose.search(rawdata, i+4)
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.handle_comment(rawdata[i+4: j])
        return match.end(0)

    # Internal -- scan past the internal subset in a <!DOCTYPE declaration,
    # returning the index just past any whitespace following the trailing ']'.
    def _parse_doctype_subset(self, i, declstartpos):
        rawdata = self.rawdata
        n = len(rawdata)
        j = i
        while j < n:
            c = rawdata[j]
            if c == "<":
                s = rawdata[j:j+2]
                if s == "<":
                    # end of buffer; incomplete
                    return -1
                if s != "<!":
                    self.updatepos(declstartpos, j + 1)
                    raise AssertionError(
                        "unexpected char in internal subset (in %r)" % s
                    )
                if (j + 2) == n:
                    # end of buffer; incomplete
                    return -1
                if (j + 4) > n:
                    # end of buffer; incomplete
                    return -1
                if rawdata[j:j+4] == "<!--":
                    j = self.parse_comment(j, report=0)
                    if j < 0:
                        return j
                    continue
                name, j = self._scan_name(j + 2, declstartpos)
                if j == -1:
                    return -1
                if name not in {"attlist", "element", "entity", "notation"}:
                    self.updatepos(declstartpos, j + 2)
                    raise AssertionError(
                        "unknown declaration %r in internal subset" % name
                    )
                # handle the individual names
                meth = getattr(self, "_parse_doctype_" + name)
                j = meth(j, declstartpos)
                if j < 0:
                    return j
            elif c == "%":
                # parameter entity reference
                if (j + 1) == n:
                    # end of buffer; incomplete
                    return -1
                s, j = self._scan_name(j + 1, declstartpos)
                if j < 0:
                    return j
                if rawdata[j] == ";":
                    j = j + 1
            elif c == "]":
                j = j + 1
                while j < n and rawdata[j].isspace():
                    j = j + 1
                if j < n:
                    if rawdata[j] == ">":
                        return j
                    self.updatepos(declstartpos, j)
                    raise AssertionError("unexpected char after internal subset")
                else:
                    return -1
            elif c.isspace():
                j = j + 1
            else:
                self.updatepos(declstartpos, j)
                raise AssertionError("unexpected char %r in internal subset" % c)
        # end of buffer reached
        return -1

    # Internal -- scan past <!ELEMENT declarations
    def _parse_doctype_element(self, i, declstartpos):
        name, j = self._scan_name(i, declstartpos)
        if j == -1:
            return -1
        # style content model; just skip until '>'
        rawdata = self.rawdata
        if '>' in rawdata[j:]:
            return rawdata.find(">", j) + 1
        return -1

    # Internal -- scan past <!ATTLIST declarations
    def _parse_doctype_attlist(self, i, declstartpos):
        rawdata = self.rawdata
        name, j = self._scan_name(i, declstartpos)
        c = rawdata[j:j+1]
        if c == "":
            return -1
        if c == ">":
            return j + 1
        while 1:
            # scan a series of attribute descriptions; simplified:
            #   name type [value] [#constraint]
            name, j = self._scan_name(j, declstartpos)
            if j < 0:
                return j
            c = rawdata[j:j+1]
            if c == "":
                return -1
            if c == "(":
                # an enumerated type; look for ')'
                if ")" in rawdata[j:]:
                    j = rawdata.find(")", j) + 1
                else:
                    return -1
                while rawdata[j:j+1].isspace():
                    j = j + 1
                if not rawdata[j:]:
                    # end of buffer, incomplete
                    return -1
            else:
                name, j = self._scan_name(j, declstartpos)
            c = rawdata[j:j+1]
            if not c:
                return -1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if m:
                    j = m.end()
                else:
                    return -1
                c = rawdata[j:j+1]
                if not c:
                    return -1
            if c == "#":
                if rawdata[j:] == "#":
                    # end of buffer
                    return -1
                name, j = self._scan_name(j + 1, declstartpos)
                if j < 0:
                    return j
                c = rawdata[j:j+1]
                if not c:
                    return -1
            if c == '>':
                # all done
                return j + 1

    # Internal -- scan past <!NOTATION declarations
    def _parse_doctype_notation(self, i, declstartpos):
        name, j = self._scan_name(i, declstartpos)
        if j < 0:
            return j
        rawdata = self.rawdata
        while 1:
            c = rawdata[j:j+1]
            if not c:
                # end of buffer; incomplete
                return -1
            if c == '>':
                return j + 1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if not m:
                    return -1
                j = m.end()
            else:
                name, j = self._scan_name(j, declstartpos)
                if j < 0:
                    return j

    # Internal -- scan past <!ENTITY declarations
    def _parse_doctype_entity(self, i, declstartpos):
        rawdata = self.rawdata
        if rawdata[i:i+1] == "%":
            j = i + 1
            while 1:
                c = rawdata[j:j+1]
                if not c:
                    return -1
                if c.isspace():
                    j = j + 1
                else:
                    break
        else:
            j = i
        name, j = self._scan_name(j, declstartpos)
        if j < 0:
            return j
        while 1:
            c = self.rawdata[j:j+1]
            if not c:
                return -1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if m:
                    j = m.end()
                else:
                    return -1 # incomplete
            elif c == ">":
                return j + 1
            else:
                name, j = self._scan_name(j, declstartpos)
                if j < 0:
                    return j

    # Internal -- scan a name token and the new position and the token, or
    # return -1 if we've reached the end of the buffer.
    def _scan_name(self, i, declstartpos):
        rawdata = self.rawdata
        n = len(rawdata)
        if i == n:
            return None, -1
        m = _declname_match(rawdata, i)
        if m:
            s = m.group()
            name = s.strip()
            if (i + len(s)) == n:
                return None, -1  # end of buffer
            return name.lower(), m.end()
        else:
            self.updatepos(declstartpos, i)
            raise AssertionError(
                "expected name token at %r" % rawdata[declstartpos:declstartpos+20]
            )

    # To be overridden -- handlers for unknown objects
    def unknown_decl(self, data):
        pass
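A usage sketch for the module above: `ParserBase` must be subclassed, and in the stdlib its concrete subclass is `html.parser.HTMLParser`. The declaration and comment scanners defined here surface as the `handle_decl()` / `handle_comment()` callbacks:

```python
from html.parser import HTMLParser  # subclasses _markupbase.ParserBase

class DeclCollector(HTMLParser):
    """Collects DOCTYPE declarations and comments found while parsing."""
    def __init__(self):
        super().__init__()
        self.decls = []
        self.comments = []

    def handle_decl(self, data):
        # Invoked from ParserBase.parse_declaration() for <!DOCTYPE ...>.
        self.decls.append(data)

    def handle_comment(self, data):
        # Invoked from ParserBase.parse_comment() for <!-- ... -->.
        self.comments.append(data)

p = DeclCollector()
p.feed("<!DOCTYPE html><!-- generated -->")
print(p.decls)     # ['DOCTYPE html']
print(p.comments)  # [' generated ']
```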
343  Utils/PythonNew32/Lib/_opcode_metadata.py  Normal file
@@ -0,0 +1,343 @@
# This file is generated by Tools/cases_generator/py_metadata_generator.py
# from:
#   Python/bytecodes.c
# Do not edit!
_specializations = {
    "RESUME": [
        "RESUME_CHECK",
    ],
    "TO_BOOL": [
        "TO_BOOL_ALWAYS_TRUE",
        "TO_BOOL_BOOL",
        "TO_BOOL_INT",
        "TO_BOOL_LIST",
        "TO_BOOL_NONE",
        "TO_BOOL_STR",
    ],
    "BINARY_OP": [
        "BINARY_OP_MULTIPLY_INT",
        "BINARY_OP_ADD_INT",
        "BINARY_OP_SUBTRACT_INT",
        "BINARY_OP_MULTIPLY_FLOAT",
        "BINARY_OP_ADD_FLOAT",
        "BINARY_OP_SUBTRACT_FLOAT",
        "BINARY_OP_ADD_UNICODE",
        "BINARY_OP_INPLACE_ADD_UNICODE",
    ],
    "BINARY_SUBSCR": [
        "BINARY_SUBSCR_DICT",
        "BINARY_SUBSCR_GETITEM",
        "BINARY_SUBSCR_LIST_INT",
        "BINARY_SUBSCR_STR_INT",
        "BINARY_SUBSCR_TUPLE_INT",
    ],
    "STORE_SUBSCR": [
        "STORE_SUBSCR_DICT",
        "STORE_SUBSCR_LIST_INT",
    ],
    "SEND": [
        "SEND_GEN",
    ],
    "UNPACK_SEQUENCE": [
        "UNPACK_SEQUENCE_TWO_TUPLE",
        "UNPACK_SEQUENCE_TUPLE",
        "UNPACK_SEQUENCE_LIST",
    ],
    "STORE_ATTR": [
        "STORE_ATTR_INSTANCE_VALUE",
        "STORE_ATTR_SLOT",
        "STORE_ATTR_WITH_HINT",
    ],
    "LOAD_GLOBAL": [
        "LOAD_GLOBAL_MODULE",
        "LOAD_GLOBAL_BUILTIN",
    ],
    "LOAD_SUPER_ATTR": [
        "LOAD_SUPER_ATTR_ATTR",
        "LOAD_SUPER_ATTR_METHOD",
    ],
    "LOAD_ATTR": [
        "LOAD_ATTR_INSTANCE_VALUE",
        "LOAD_ATTR_MODULE",
        "LOAD_ATTR_WITH_HINT",
        "LOAD_ATTR_SLOT",
        "LOAD_ATTR_CLASS",
        "LOAD_ATTR_PROPERTY",
        "LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN",
        "LOAD_ATTR_METHOD_WITH_VALUES",
        "LOAD_ATTR_METHOD_NO_DICT",
        "LOAD_ATTR_METHOD_LAZY_DICT",
        "LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES",
        "LOAD_ATTR_NONDESCRIPTOR_NO_DICT",
    ],
    "COMPARE_OP": [
        "COMPARE_OP_FLOAT",
        "COMPARE_OP_INT",
        "COMPARE_OP_STR",
    ],
    "CONTAINS_OP": [
        "CONTAINS_OP_SET",
        "CONTAINS_OP_DICT",
    ],
    "FOR_ITER": [
        "FOR_ITER_LIST",
        "FOR_ITER_TUPLE",
        "FOR_ITER_RANGE",
        "FOR_ITER_GEN",
    ],
    "CALL": [
        "CALL_BOUND_METHOD_EXACT_ARGS",
        "CALL_PY_EXACT_ARGS",
        "CALL_TYPE_1",
        "CALL_STR_1",
        "CALL_TUPLE_1",
        "CALL_BUILTIN_CLASS",
        "CALL_BUILTIN_O",
        "CALL_BUILTIN_FAST",
        "CALL_BUILTIN_FAST_WITH_KEYWORDS",
        "CALL_LEN",
        "CALL_ISINSTANCE",
        "CALL_LIST_APPEND",
        "CALL_METHOD_DESCRIPTOR_O",
        "CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS",
        "CALL_METHOD_DESCRIPTOR_NOARGS",
        "CALL_METHOD_DESCRIPTOR_FAST",
        "CALL_ALLOC_AND_ENTER_INIT",
        "CALL_PY_GENERAL",
        "CALL_BOUND_METHOD_GENERAL",
        "CALL_NON_PY_GENERAL",
    ],
}

_specialized_opmap = {
    'BINARY_OP_ADD_FLOAT': 150,
    'BINARY_OP_ADD_INT': 151,
    'BINARY_OP_ADD_UNICODE': 152,
    'BINARY_OP_INPLACE_ADD_UNICODE': 3,
    'BINARY_OP_MULTIPLY_FLOAT': 153,
    'BINARY_OP_MULTIPLY_INT': 154,
    'BINARY_OP_SUBTRACT_FLOAT': 155,
    'BINARY_OP_SUBTRACT_INT': 156,
    'BINARY_SUBSCR_DICT': 157,
    'BINARY_SUBSCR_GETITEM': 158,
    'BINARY_SUBSCR_LIST_INT': 159,
    'BINARY_SUBSCR_STR_INT': 160,
    'BINARY_SUBSCR_TUPLE_INT': 161,
    'CALL_ALLOC_AND_ENTER_INIT': 162,
    'CALL_BOUND_METHOD_EXACT_ARGS': 163,
    'CALL_BOUND_METHOD_GENERAL': 164,
    'CALL_BUILTIN_CLASS': 165,
    'CALL_BUILTIN_FAST': 166,
    'CALL_BUILTIN_FAST_WITH_KEYWORDS': 167,
    'CALL_BUILTIN_O': 168,
    'CALL_ISINSTANCE': 169,
    'CALL_LEN': 170,
    'CALL_LIST_APPEND': 171,
    'CALL_METHOD_DESCRIPTOR_FAST': 172,
    'CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS': 173,
    'CALL_METHOD_DESCRIPTOR_NOARGS': 174,
    'CALL_METHOD_DESCRIPTOR_O': 175,
    'CALL_NON_PY_GENERAL': 176,
    'CALL_PY_EXACT_ARGS': 177,
    'CALL_PY_GENERAL': 178,
    'CALL_STR_1': 179,
    'CALL_TUPLE_1': 180,
    'CALL_TYPE_1': 181,
    'COMPARE_OP_FLOAT': 182,
    'COMPARE_OP_INT': 183,
    'COMPARE_OP_STR': 184,
    'CONTAINS_OP_DICT': 185,
    'CONTAINS_OP_SET': 186,
    'FOR_ITER_GEN': 187,
    'FOR_ITER_LIST': 188,
    'FOR_ITER_RANGE': 189,
    'FOR_ITER_TUPLE': 190,
    'LOAD_ATTR_CLASS': 191,
    'LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN': 192,
    'LOAD_ATTR_INSTANCE_VALUE': 193,
    'LOAD_ATTR_METHOD_LAZY_DICT': 194,
    'LOAD_ATTR_METHOD_NO_DICT': 195,
    'LOAD_ATTR_METHOD_WITH_VALUES': 196,
    'LOAD_ATTR_MODULE': 197,
    'LOAD_ATTR_NONDESCRIPTOR_NO_DICT': 198,
    'LOAD_ATTR_NONDESCRIPTOR_WITH_VALUES': 199,
    'LOAD_ATTR_PROPERTY': 200,
    'LOAD_ATTR_SLOT': 201,
    'LOAD_ATTR_WITH_HINT': 202,
    'LOAD_GLOBAL_BUILTIN': 203,
    'LOAD_GLOBAL_MODULE': 204,
    'LOAD_SUPER_ATTR_ATTR': 205,
    'LOAD_SUPER_ATTR_METHOD': 206,
    'RESUME_CHECK': 207,
    'SEND_GEN': 208,
    'STORE_ATTR_INSTANCE_VALUE': 209,
    'STORE_ATTR_SLOT': 210,
    'STORE_ATTR_WITH_HINT': 211,
    'STORE_SUBSCR_DICT': 212,
    'STORE_SUBSCR_LIST_INT': 213,
    'TO_BOOL_ALWAYS_TRUE': 214,
    'TO_BOOL_BOOL': 215,
    'TO_BOOL_INT': 216,
    'TO_BOOL_LIST': 217,
    'TO_BOOL_NONE': 218,
    'TO_BOOL_STR': 219,
    'UNPACK_SEQUENCE_LIST': 220,
    'UNPACK_SEQUENCE_TUPLE': 221,
    'UNPACK_SEQUENCE_TWO_TUPLE': 222,
}

opmap = {
    'CACHE': 0,
    'RESERVED': 17,
    'RESUME': 149,
    'INSTRUMENTED_LINE': 254,
    'BEFORE_ASYNC_WITH': 1,
    'BEFORE_WITH': 2,
    'BINARY_SLICE': 4,
    'BINARY_SUBSCR': 5,
    'CHECK_EG_MATCH': 6,
    'CHECK_EXC_MATCH': 7,
    'CLEANUP_THROW': 8,
    'DELETE_SUBSCR': 9,
    'END_ASYNC_FOR': 10,
    'END_FOR': 11,
    'END_SEND': 12,
    'EXIT_INIT_CHECK': 13,
    'FORMAT_SIMPLE': 14,
    'FORMAT_WITH_SPEC': 15,
    'GET_AITER': 16,
    'GET_ANEXT': 18,
    'GET_ITER': 19,
    'GET_LEN': 20,
    'GET_YIELD_FROM_ITER': 21,
    'INTERPRETER_EXIT': 22,
    'LOAD_ASSERTION_ERROR': 23,
    'LOAD_BUILD_CLASS': 24,
    'LOAD_LOCALS': 25,
    'MAKE_FUNCTION': 26,
    'MATCH_KEYS': 27,
    'MATCH_MAPPING': 28,
    'MATCH_SEQUENCE': 29,
    'NOP': 30,
    'POP_EXCEPT': 31,
    'POP_TOP': 32,
    'PUSH_EXC_INFO': 33,
    'PUSH_NULL': 34,
    'RETURN_GENERATOR': 35,
    'RETURN_VALUE': 36,
    'SETUP_ANNOTATIONS': 37,
    'STORE_SLICE': 38,
    'STORE_SUBSCR': 39,
    'TO_BOOL': 40,
    'UNARY_INVERT': 41,
    'UNARY_NEGATIVE': 42,
    'UNARY_NOT': 43,
    'WITH_EXCEPT_START': 44,
    'BINARY_OP': 45,
    'BUILD_CONST_KEY_MAP': 46,
    'BUILD_LIST': 47,
    'BUILD_MAP': 48,
    'BUILD_SET': 49,
    'BUILD_SLICE': 50,
    'BUILD_STRING': 51,
    'BUILD_TUPLE': 52,
    'CALL': 53,
    'CALL_FUNCTION_EX': 54,
    'CALL_INTRINSIC_1': 55,
    'CALL_INTRINSIC_2': 56,
    'CALL_KW': 57,
    'COMPARE_OP': 58,
    'CONTAINS_OP': 59,
    'CONVERT_VALUE': 60,
    'COPY': 61,
    'COPY_FREE_VARS': 62,
    'DELETE_ATTR': 63,
    'DELETE_DEREF': 64,
    'DELETE_FAST': 65,
    'DELETE_GLOBAL': 66,
    'DELETE_NAME': 67,
    'DICT_MERGE': 68,
    'DICT_UPDATE': 69,
    'ENTER_EXECUTOR': 70,
    'EXTENDED_ARG': 71,
    'FOR_ITER': 72,
    'GET_AWAITABLE': 73,
    'IMPORT_FROM': 74,
    'IMPORT_NAME': 75,
    'IS_OP': 76,
    'JUMP_BACKWARD': 77,
    'JUMP_BACKWARD_NO_INTERRUPT': 78,
    'JUMP_FORWARD': 79,
    'LIST_APPEND': 80,
    'LIST_EXTEND': 81,
    'LOAD_ATTR': 82,
    'LOAD_CONST': 83,
    'LOAD_DEREF': 84,
    'LOAD_FAST': 85,
    'LOAD_FAST_AND_CLEAR': 86,
    'LOAD_FAST_CHECK': 87,
    'LOAD_FAST_LOAD_FAST': 88,
    'LOAD_FROM_DICT_OR_DEREF': 89,
    'LOAD_FROM_DICT_OR_GLOBALS': 90,
    'LOAD_GLOBAL': 91,
    'LOAD_NAME': 92,
    'LOAD_SUPER_ATTR': 93,
    'MAKE_CELL': 94,
    'MAP_ADD': 95,
    'MATCH_CLASS': 96,
    'POP_JUMP_IF_FALSE': 97,
    'POP_JUMP_IF_NONE': 98,
    'POP_JUMP_IF_NOT_NONE': 99,
    'POP_JUMP_IF_TRUE': 100,
    'RAISE_VARARGS': 101,
    'RERAISE': 102,
    'RETURN_CONST': 103,
    'SEND': 104,
    'SET_ADD': 105,
    'SET_FUNCTION_ATTRIBUTE': 106,
    'SET_UPDATE': 107,
    'STORE_ATTR': 108,
    'STORE_DEREF': 109,
    'STORE_FAST': 110,
    'STORE_FAST_LOAD_FAST': 111,
    'STORE_FAST_STORE_FAST': 112,
    'STORE_GLOBAL': 113,
    'STORE_NAME': 114,
    'SWAP': 115,
    'UNPACK_EX': 116,
    'UNPACK_SEQUENCE': 117,
    'YIELD_VALUE': 118,
    'INSTRUMENTED_RESUME': 236,
    'INSTRUMENTED_END_FOR': 237,
    'INSTRUMENTED_END_SEND': 238,
    'INSTRUMENTED_RETURN_VALUE': 239,
    'INSTRUMENTED_RETURN_CONST': 240,
    'INSTRUMENTED_YIELD_VALUE': 241,
    'INSTRUMENTED_LOAD_SUPER_ATTR': 242,
    'INSTRUMENTED_FOR_ITER': 243,
    'INSTRUMENTED_CALL': 244,
    'INSTRUMENTED_CALL_KW': 245,
    'INSTRUMENTED_CALL_FUNCTION_EX': 246,
    'INSTRUMENTED_INSTRUCTION': 247,
    'INSTRUMENTED_JUMP_FORWARD': 248,
    'INSTRUMENTED_JUMP_BACKWARD': 249,
    'INSTRUMENTED_POP_JUMP_IF_TRUE': 250,
    'INSTRUMENTED_POP_JUMP_IF_FALSE': 251,
    'INSTRUMENTED_POP_JUMP_IF_NONE': 252,
    'INSTRUMENTED_POP_JUMP_IF_NOT_NONE': 253,
    'JUMP': 256,
    'JUMP_NO_INTERRUPT': 257,
    'LOAD_CLOSURE': 258,
    'LOAD_METHOD': 259,
    'LOAD_SUPER_METHOD': 260,
    'LOAD_ZERO_SUPER_ATTR': 261,
    'LOAD_ZERO_SUPER_METHOD': 262,
    'POP_BLOCK': 263,
    'SETUP_CLEANUP': 264,
    'SETUP_FINALLY': 265,
    'SETUP_WITH': 266,
    'STORE_FAST_MAYBE_NULL': 267,
}
||||
HAVE_ARGUMENT = 44
|
||||
MIN_INSTRUMENTED_OPCODE = 236
|
||||
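The map above pairs symbolic instruction names with their numeric opcodes; `MIN_INSTRUMENTED_OPCODE` marks where the instrumented variants begin, and values of 256 and above are used here for pseudo-instructions that never reach compiled bytecode. A minimal sketch of how such a table can be inverted and partitioned, using a small excerpt of the map rather than the full table:

```python
# Excerpt of the opcode map above; values follow the same layout:
# regular opcodes < 236, instrumented ones >= 236, pseudo ops >= 256.
opmap = {
    'NOP': 30,
    'BINARY_OP': 45,
    'INSTRUMENTED_CALL': 244,
    'JUMP': 256,
    'SETUP_FINALLY': 265,
}
MIN_INSTRUMENTED_OPCODE = 236

# Invert the map to look names up by number, the way opcode.opname does.
opname = {num: name for name, num in opmap.items()}

instrumented = sorted(n for n, v in opmap.items()
                      if MIN_INSTRUMENTED_OPCODE <= v < 256)
pseudo = sorted(n for n, v in opmap.items() if v >= 256)

print(opname[45])        # BINARY_OP
print(instrumented)      # ['INSTRUMENTED_CALL']
print(pseudo)            # ['JUMP', 'SETUP_FINALLY']
```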
579
Utils/PythonNew32/Lib/_osx_support.py
Normal file
@@ -0,0 +1,579 @@
"""Shared OS X support functions."""

import os
import re
import sys

__all__ = [
    'compiler_fixup',
    'customize_config_vars',
    'customize_compiler',
    'get_platform_osx',
]

# configuration variables that may contain universal build flags,
# like "-arch" or "-isysroot", that may need customization for
# the user environment
_UNIVERSAL_CONFIG_VARS = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS', 'BASECFLAGS',
                          'BLDSHARED', 'LDSHARED', 'CC', 'CXX',
                          'PY_CFLAGS', 'PY_LDFLAGS', 'PY_CPPFLAGS',
                          'PY_CORE_CFLAGS', 'PY_CORE_LDFLAGS')

# configuration variables that may contain compiler calls
_COMPILER_CONFIG_VARS = ('BLDSHARED', 'LDSHARED', 'CC', 'CXX')

# prefix added to original configuration variable names
_INITPRE = '_OSX_SUPPORT_INITIAL_'
def _find_executable(executable, path=None):
    """Tries to find 'executable' in the directories listed in 'path'.

    'path' is a string listing directories separated by 'os.pathsep';
    it defaults to os.environ['PATH'].  Returns the complete filename
    or None if not found.
    """
    if path is None:
        path = os.environ['PATH']

    paths = path.split(os.pathsep)
    base, ext = os.path.splitext(executable)

    if (sys.platform == 'win32') and (ext != '.exe'):
        executable = executable + '.exe'

    if not os.path.isfile(executable):
        for p in paths:
            f = os.path.join(p, executable)
            if os.path.isfile(f):
                # the file exists, we have a shot at spawn working
                return f
        return None
    else:
        return executable
def _read_output(commandstring, capture_stderr=False):
    """Output from successful command execution or None"""
    # Similar to os.popen(commandstring, "r").read(),
    # but without actually using os.popen because that
    # function is not usable during python bootstrap.
    # tempfile is also not available then.
    import contextlib
    try:
        import tempfile
        fp = tempfile.NamedTemporaryFile()
    except ImportError:
        fp = open("/tmp/_osx_support.%s"%(
            os.getpid(),), "w+b")

    with contextlib.closing(fp) as fp:
        if capture_stderr:
            cmd = "%s >'%s' 2>&1" % (commandstring, fp.name)
        else:
            cmd = "%s 2>/dev/null >'%s'" % (commandstring, fp.name)
        return fp.read().decode('utf-8').strip() if not os.system(cmd) else None


def _find_build_tool(toolname):
    """Find a build tool on current path or using xcrun"""
    return (_find_executable(toolname)
                or _read_output("/usr/bin/xcrun -find %s" % (toolname,))
                or ''
            )

_SYSTEM_VERSION = None
def _get_system_version():
    """Return the OS X system version as a string"""
    # Reading this plist is a documented way to get the system
    # version (see the documentation for the Gestalt Manager)
    # We avoid using platform.mac_ver to avoid possible bootstrap issues during
    # the build of Python itself (distutils is used to build standard library
    # extensions).

    global _SYSTEM_VERSION

    if _SYSTEM_VERSION is None:
        _SYSTEM_VERSION = ''
        try:
            f = open('/System/Library/CoreServices/SystemVersion.plist', encoding="utf-8")
        except OSError:
            # We're on a plain darwin box, fall back to the default
            # behaviour.
            pass
        else:
            try:
                m = re.search(r'<key>ProductUserVisibleVersion</key>\s*'
                              r'<string>(.*?)</string>', f.read())
            finally:
                f.close()
            if m is not None:
                _SYSTEM_VERSION = '.'.join(m.group(1).split('.')[:2])
            # else: fall back to the default behaviour

    return _SYSTEM_VERSION

_SYSTEM_VERSION_TUPLE = None
def _get_system_version_tuple():
    """
    Return the macOS system version as a tuple

    The return value is safe to use to compare
    two version numbers.
    """
    global _SYSTEM_VERSION_TUPLE
    if _SYSTEM_VERSION_TUPLE is None:
        osx_version = _get_system_version()
        if osx_version:
            try:
                _SYSTEM_VERSION_TUPLE = tuple(int(i) for i in osx_version.split('.'))
            except ValueError:
                _SYSTEM_VERSION_TUPLE = ()

    return _SYSTEM_VERSION_TUPLE
def _remove_original_values(_config_vars):
    """Remove original unmodified values for testing"""
    # This is needed for higher-level cross-platform tests of get_platform.
    for k in list(_config_vars):
        if k.startswith(_INITPRE):
            del _config_vars[k]

def _save_modified_value(_config_vars, cv, newvalue):
    """Save modified and original unmodified value of configuration var"""

    oldvalue = _config_vars.get(cv, '')
    if (oldvalue != newvalue) and (_INITPRE + cv not in _config_vars):
        _config_vars[_INITPRE + cv] = oldvalue
    _config_vars[cv] = newvalue


_cache_default_sysroot = None
def _default_sysroot(cc):
    """ Returns the root of the default SDK for this system, or '/' """
    global _cache_default_sysroot

    if _cache_default_sysroot is not None:
        return _cache_default_sysroot

    contents = _read_output('%s -c -E -v - </dev/null' % (cc,), True)
    in_incdirs = False
    for line in contents.splitlines():
        if line.startswith("#include <...>"):
            in_incdirs = True
        elif line.startswith("End of search list"):
            in_incdirs = False
        elif in_incdirs:
            line = line.strip()
            if line == '/usr/include':
                _cache_default_sysroot = '/'
            elif line.endswith(".sdk/usr/include"):
                _cache_default_sysroot = line[:-12]
    if _cache_default_sysroot is None:
        _cache_default_sysroot = '/'

    return _cache_default_sysroot
def _supports_universal_builds():
    """Returns True if universal builds are supported on this system"""
    # As an approximation, we assume that if we are running on 10.4 or above,
    # then we are running with an Xcode environment that supports universal
    # builds, in particular -isysroot and -arch arguments to the compiler. This
    # is in support of allowing 10.4 universal builds to run on 10.3.x systems.

    osx_version = _get_system_version_tuple()
    return bool(osx_version >= (10, 4)) if osx_version else False

def _supports_arm64_builds():
    """Returns True if arm64 builds are supported on this system"""
    # There are two sets of systems supporting macOS/arm64 builds:
    # 1. macOS 11 and later, unconditionally
    # 2. macOS 10.15 with Xcode 12.2 or later
    # For now the second category is ignored.
    osx_version = _get_system_version_tuple()
    return osx_version >= (11, 0) if osx_version else False
def _find_appropriate_compiler(_config_vars):
    """Find appropriate C compiler for extension module builds"""

    # Issue #13590:
    #    The OSX location for the compiler varies between OSX
    #    (or rather Xcode) releases. With older releases (up-to 10.5)
    #    the compiler is in /usr/bin, with newer releases the compiler
    #    can only be found inside Xcode.app if the "Command Line Tools"
    #    are not installed.
    #
    #    Furthermore, the compiler that can be used varies between
    #    Xcode releases. Up to Xcode 4 it was possible to use 'gcc-4.2'
    #    as the compiler, after that 'clang' should be used because
    #    gcc-4.2 is either not present, or a copy of 'llvm-gcc' that
    #    miscompiles Python.

    # skip checks if the compiler was overridden with a CC env variable
    if 'CC' in os.environ:
        return _config_vars

    # The CC config var might contain additional arguments.
    # Ignore them while searching.
    cc = oldcc = _config_vars['CC'].split()[0]
    if not _find_executable(cc):
        # Compiler is not found on the shell search PATH.
        # Now search for clang, first on PATH (if the Command Line
        # Tools have been installed in / or if the user has provided
        # another location via CC).  If not found, try using xcrun
        # to find an uninstalled clang (within a selected Xcode).

        # NOTE: Cannot use subprocess here because of bootstrap
        # issues when building Python itself (and os.popen is
        # implemented on top of subprocess and is therefore not
        # usable as well)

        cc = _find_build_tool('clang')

    elif os.path.basename(cc).startswith('gcc'):
        # Compiler is GCC, check if it is LLVM-GCC
        data = _read_output("'%s' --version"
                             % (cc.replace("'", "'\"'\"'"),))
        if data and 'llvm-gcc' in data:
            # Found LLVM-GCC, fall back to clang
            cc = _find_build_tool('clang')

    if not cc:
        raise SystemError(
               "Cannot locate working compiler")

    if cc != oldcc:
        # Found a replacement compiler.
        # Modify config vars using new compiler, if not already explicitly
        # overridden by an env variable, preserving additional arguments.
        for cv in _COMPILER_CONFIG_VARS:
            if cv in _config_vars and cv not in os.environ:
                cv_split = _config_vars[cv].split()
                cv_split[0] = cc if cv != 'CXX' else cc + '++'
                _save_modified_value(_config_vars, cv, ' '.join(cv_split))

    return _config_vars
def _remove_universal_flags(_config_vars):
    """Remove all universal build arguments from config vars"""

    for cv in _UNIVERSAL_CONFIG_VARS:
        # Do not alter a config var explicitly overridden by env var
        if cv in _config_vars and cv not in os.environ:
            flags = _config_vars[cv]
            flags = re.sub(r'-arch\s+\w+\s', ' ', flags, flags=re.ASCII)
            flags = re.sub(r'-isysroot\s*\S+', ' ', flags)
            _save_modified_value(_config_vars, cv, flags)

    return _config_vars
def _remove_unsupported_archs(_config_vars):
    """Remove any unsupported archs from config vars"""
    # Different Xcode releases support different sets for '-arch'
    # flags. In particular, Xcode 4.x no longer supports the
    # PPC architectures.
    #
    # This code automatically removes '-arch ppc' and '-arch ppc64'
    # when these are not supported. That makes it possible to
    # build extensions on OSX 10.7 and later with the prebuilt
    # 32-bit installer on the python.org website.

    # skip checks if the compiler was overridden with a CC env variable
    if 'CC' in os.environ:
        return _config_vars

    if re.search(r'-arch\s+ppc', _config_vars['CFLAGS']) is not None:
        # NOTE: Cannot use subprocess here because of bootstrap
        # issues when building Python itself
        status = os.system(
            """echo 'int main{};' | """
            """'%s' -c -arch ppc -x c -o /dev/null /dev/null 2>/dev/null"""
            %(_config_vars['CC'].replace("'", "'\"'\"'"),))
        if status:
            # The compile failed for some reason.  Because of differences
            # across Xcode and compiler versions, there is no reliable way
            # to be sure why it failed.  Assume here it was due to lack of
            # PPC support and remove the related '-arch' flags from each
            # config variable not explicitly overridden by an environment
            # variable.  If the error was for some other reason, we hope the
            # failure will show up again when trying to compile an extension
            # module.
            for cv in _UNIVERSAL_CONFIG_VARS:
                if cv in _config_vars and cv not in os.environ:
                    flags = _config_vars[cv]
                    flags = re.sub(r'-arch\s+ppc\w*\s', ' ', flags)
                    _save_modified_value(_config_vars, cv, flags)

    return _config_vars
def _override_all_archs(_config_vars):
    """Allow override of all archs with ARCHFLAGS env var"""
    # NOTE: This name was introduced by Apple in OSX 10.5 and
    # is used by several scripting languages distributed with
    # that OS release.
    if 'ARCHFLAGS' in os.environ:
        arch = os.environ['ARCHFLAGS']
        for cv in _UNIVERSAL_CONFIG_VARS:
            if cv in _config_vars and '-arch' in _config_vars[cv]:
                flags = _config_vars[cv]
                flags = re.sub(r'-arch\s+\w+\s', ' ', flags)
                flags = flags + ' ' + arch
                _save_modified_value(_config_vars, cv, flags)

    return _config_vars
def _check_for_unavailable_sdk(_config_vars):
    """Remove references to any SDKs not available"""
    # If we're on OSX 10.5 or later and the user tries to
    # compile an extension using an SDK that is not present
    # on the current machine it is better to not use an SDK
    # than to fail.  This is particularly important with
    # the standalone Command Line Tools alternative to a
    # full-blown Xcode install since the CLT packages do not
    # provide SDKs.  If the SDK is not present, it is assumed
    # that the header files and dev libs have been installed
    # to /usr and /System/Library by either a standalone CLT
    # package or the CLT component within Xcode.
    cflags = _config_vars.get('CFLAGS', '')
    m = re.search(r'-isysroot\s*(\S+)', cflags)
    if m is not None:
        sdk = m.group(1)
        if not os.path.exists(sdk):
            for cv in _UNIVERSAL_CONFIG_VARS:
                # Do not alter a config var explicitly overridden by env var
                if cv in _config_vars and cv not in os.environ:
                    flags = _config_vars[cv]
                    flags = re.sub(r'-isysroot\s*\S+(?:\s|$)', ' ', flags)
                    _save_modified_value(_config_vars, cv, flags)

    return _config_vars
def compiler_fixup(compiler_so, cc_args):
    """
    This function will strip '-isysroot PATH' and '-arch ARCH' from the
    compile flags if the user has specified one of them in extra_compile_flags.

    This is needed because '-arch ARCH' adds another architecture to the
    build, without a way to remove an architecture. Furthermore GCC will
    barf if multiple '-isysroot' arguments are present.
    """
    stripArch = stripSysroot = False

    compiler_so = list(compiler_so)

    if not _supports_universal_builds():
        # OSX before 10.4.0, these don't support -arch and -isysroot at
        # all.
        stripArch = stripSysroot = True
    else:
        stripArch = '-arch' in cc_args
        stripSysroot = any(arg for arg in cc_args if arg.startswith('-isysroot'))

    if stripArch or 'ARCHFLAGS' in os.environ:
        while True:
            try:
                index = compiler_so.index('-arch')
                # Strip this argument and the next one:
                del compiler_so[index:index+2]
            except ValueError:
                break

    elif not _supports_arm64_builds():
        # Look for "-arch arm64" and drop that
        for idx in reversed(range(len(compiler_so))):
            if compiler_so[idx] == '-arch' and compiler_so[idx+1] == "arm64":
                del compiler_so[idx:idx+2]

    if 'ARCHFLAGS' in os.environ and not stripArch:
        # User specified different -arch flags in the environ,
        # see also distutils.sysconfig
        compiler_so = compiler_so + os.environ['ARCHFLAGS'].split()

    if stripSysroot:
        while True:
            indices = [i for i,x in enumerate(compiler_so) if x.startswith('-isysroot')]
            if not indices:
                break
            index = indices[0]
            if compiler_so[index] == '-isysroot':
                # Strip this argument and the next one:
                del compiler_so[index:index+2]
            else:
                # It's '-isysroot/some/path' in one arg
                del compiler_so[index:index+1]

    # Check if the SDK that is used during compilation actually exists,
    # the universal build requires the usage of a universal SDK and not all
    # users have that installed by default.
    sysroot = None
    argvar = cc_args
    indices = [i for i,x in enumerate(cc_args) if x.startswith('-isysroot')]
    if not indices:
        argvar = compiler_so
        indices = [i for i,x in enumerate(compiler_so) if x.startswith('-isysroot')]

    for idx in indices:
        if argvar[idx] == '-isysroot':
            sysroot = argvar[idx+1]
            break
        else:
            sysroot = argvar[idx][len('-isysroot'):]
            break

    if sysroot and not os.path.isdir(sysroot):
        sys.stderr.write(f"Compiling with an SDK that doesn't seem to exist: {sysroot}\n")
        sys.stderr.write("Please check your Xcode installation\n")
        sys.stderr.flush()

    return compiler_so
def customize_config_vars(_config_vars):
    """Customize Python build configuration variables.

    Called internally from sysconfig with a mutable mapping
    containing name/value pairs parsed from the configured
    makefile used to build this interpreter.  Returns
    the mapping updated as needed to reflect the environment
    in which the interpreter is running; in the case of
    a Python from a binary installer, the installed
    environment may be very different from the build
    environment, i.e. different OS levels, different
    build tools, different available CPU architectures.

    This customization is performed whenever
    distutils.sysconfig.get_config_vars() is first
    called.  It may be used in environments where no
    compilers are present, i.e. when installing pure
    Python dists.  Customization of compiler paths
    and detection of unavailable archs is deferred
    until the first extension module build is
    requested (in distutils.sysconfig.customize_compiler).

    Currently called from distutils.sysconfig
    """

    if not _supports_universal_builds():
        # On Mac OS X before 10.4, check if -arch and -isysroot
        # are in CFLAGS or LDFLAGS and remove them if they are.
        # This is needed when building extensions on a 10.3 system
        # using a universal build of python.
        _remove_universal_flags(_config_vars)

    # Allow user to override all archs with ARCHFLAGS env var
    _override_all_archs(_config_vars)

    # Remove references to sdks that are not found
    _check_for_unavailable_sdk(_config_vars)

    return _config_vars
def customize_compiler(_config_vars):
    """Customize compiler path and configuration variables.

    This customization is performed when the first
    extension module build is requested
    in distutils.sysconfig.customize_compiler.
    """

    # Find a compiler to use for extension module builds
    _find_appropriate_compiler(_config_vars)

    # Remove ppc arch flags if not supported here
    _remove_unsupported_archs(_config_vars)

    # Allow user to override all archs with ARCHFLAGS env var
    _override_all_archs(_config_vars)

    return _config_vars
def get_platform_osx(_config_vars, osname, release, machine):
    """Filter values for get_platform()"""
    # called from get_platform() in sysconfig and distutils.util
    #
    # For our purposes, we'll assume that the system version from
    # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set
    # to.  This makes the compatibility story a bit more sane because the
    # machine is going to compile and link as if it were
    # MACOSX_DEPLOYMENT_TARGET.
    macver = _config_vars.get('MACOSX_DEPLOYMENT_TARGET', '')
    if macver and '.' not in macver:
        # Ensure that the version includes at least a major
        # and minor version, even if MACOSX_DEPLOYMENT_TARGET
        # is set to a single-label version like "14".
        macver += '.0'
    macrelease = _get_system_version() or macver
    macver = macver or macrelease

    if macver:
        release = macver
        osname = "macosx"

        # Use the original CFLAGS value, if available, so that we
        # return the same machine type for the platform string.
        # Otherwise, distutils may consider this a cross-compiling
        # case and disallow installs.
        cflags = _config_vars.get(_INITPRE+'CFLAGS',
                                  _config_vars.get('CFLAGS', ''))
        if macrelease:
            try:
                macrelease = tuple(int(i) for i in macrelease.split('.')[0:2])
            except ValueError:
                macrelease = (10, 3)
        else:
            # assume no universal support
            macrelease = (10, 3)

        if (macrelease >= (10, 4)) and '-arch' in cflags.strip():
            # The universal build will build fat binaries, but not on
            # systems before 10.4

            machine = 'fat'

            archs = re.findall(r'-arch\s+(\S+)', cflags)
            archs = tuple(sorted(set(archs)))

            if len(archs) == 1:
                machine = archs[0]
            elif archs == ('arm64', 'x86_64'):
                machine = 'universal2'
            elif archs == ('i386', 'ppc'):
                machine = 'fat'
            elif archs == ('i386', 'x86_64'):
                machine = 'intel'
            elif archs == ('i386', 'ppc', 'x86_64'):
                machine = 'fat3'
            elif archs == ('ppc64', 'x86_64'):
                machine = 'fat64'
            elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'):
                machine = 'universal'
            else:
                raise ValueError(
                   "Don't know machine value for archs=%r" % (archs,))

        elif machine == 'i386':
            # On OSX the machine type returned by uname is always the
            # 32-bit variant, even if the executable architecture is
            # the 64-bit variant
            if sys.maxsize >= 2**32:
                machine = 'x86_64'

        elif machine in ('PowerPC', 'Power_Macintosh'):
            # Pick a sane name for the PPC architecture.
            # See 'i386' case
            if sys.maxsize >= 2**32:
                machine = 'ppc64'
            else:
                machine = 'ppc'

    return (osname, release, machine)
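The flag-stripping loops in `compiler_fixup` above are the heart of the function. The sketch below reimplements just the `-arch ARCH` / `-isysroot` removal logic on a plain argument list (it deliberately omits the module's platform checks and the `compiler_fixup` name is not reused), so the behaviour can be seen without macOS or an Xcode install:

```python
def strip_arch_and_sysroot(args):
    """Remove '-arch ARCH' pairs and '-isysroot' flags, mirroring
    the deletion loops in compiler_fixup above (hypothetical helper)."""
    args = list(args)
    # Strip each '-arch X' two-element pair.
    while True:
        try:
            index = args.index('-arch')
            del args[index:index + 2]
        except ValueError:
            break
    # Strip '-isysroot PATH' (two args) or '-isysroot/path' (one arg).
    while True:
        indices = [i for i, x in enumerate(args) if x.startswith('-isysroot')]
        if not indices:
            break
        index = indices[0]
        if args[index] == '-isysroot':
            del args[index:index + 2]
        else:
            del args[index:index + 1]
    return args

print(strip_arch_and_sysroot(
    ['cc', '-arch', 'x86_64', '-arch', 'arm64',
     '-isysroot', '/SDKs/MacOSX.sdk', '-O2']))
# ['cc', '-O2']
```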
147
Utils/PythonNew32/Lib/_py_abc.py
Normal file
@@ -0,0 +1,147 @@
from _weakrefset import WeakSet


def get_cache_token():
    """Returns the current ABC cache token.

    The token is an opaque object (supporting equality testing) identifying the
    current version of the ABC cache for virtual subclasses. The token changes
    with every call to ``register()`` on any ABC.
    """
    return ABCMeta._abc_invalidation_counter
class ABCMeta(type):
    """Metaclass for defining Abstract Base Classes (ABCs).

    Use this metaclass to create an ABC.  An ABC can be subclassed
    directly, and then acts as a mix-in class.  You can also register
    unrelated concrete classes (even built-in classes) and unrelated
    ABCs as 'virtual subclasses' -- these and their descendants will
    be considered subclasses of the registering ABC by the built-in
    issubclass() function, but the registering ABC won't show up in
    their MRO (Method Resolution Order) nor will method
    implementations defined by the registering ABC be callable (not
    even via super()).
    """

    # A global counter that is incremented each time a class is
    # registered as a virtual subclass of anything.  It forces the
    # negative cache to be cleared before its next use.
    # Note: this counter is private.  Use `abc.get_cache_token()` for
    #       external code.
    _abc_invalidation_counter = 0

    def __new__(mcls, name, bases, namespace, /, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        # Compute set of abstract method names
        abstracts = {name
                     for name, value in namespace.items()
                     if getattr(value, "__isabstractmethod__", False)}
        for base in bases:
            for name in getattr(base, "__abstractmethods__", set()):
                value = getattr(cls, name, None)
                if getattr(value, "__isabstractmethod__", False):
                    abstracts.add(name)
        cls.__abstractmethods__ = frozenset(abstracts)
        # Set up inheritance registry
        cls._abc_registry = WeakSet()
        cls._abc_cache = WeakSet()
        cls._abc_negative_cache = WeakSet()
        cls._abc_negative_cache_version = ABCMeta._abc_invalidation_counter
        return cls
    def register(cls, subclass):
        """Register a virtual subclass of an ABC.

        Returns the subclass, to allow usage as a class decorator.
        """
        if not isinstance(subclass, type):
            raise TypeError("Can only register classes")
        if issubclass(subclass, cls):
            return subclass  # Already a subclass
        # Subtle: test for cycles *after* testing for "already a subclass";
        # this means we allow X.register(X) and interpret it as a no-op.
        if issubclass(cls, subclass):
            # This would create a cycle, which is bad for the algorithm below
            raise RuntimeError("Refusing to create an inheritance cycle")
        cls._abc_registry.add(subclass)
        ABCMeta._abc_invalidation_counter += 1  # Invalidate negative cache
        return subclass
    def _dump_registry(cls, file=None):
        """Debug helper to print the ABC registry."""
        print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
        print(f"Inv. counter: {get_cache_token()}", file=file)
        for name in cls.__dict__:
            if name.startswith("_abc_"):
                value = getattr(cls, name)
                if isinstance(value, WeakSet):
                    value = set(value)
                print(f"{name}: {value!r}", file=file)

    def _abc_registry_clear(cls):
        """Clear the registry (for debugging or testing)."""
        cls._abc_registry.clear()

    def _abc_caches_clear(cls):
        """Clear the caches (for debugging or testing)."""
        cls._abc_cache.clear()
        cls._abc_negative_cache.clear()
    def __instancecheck__(cls, instance):
        """Override for isinstance(instance, cls)."""
        # Inline the cache checking
        subclass = instance.__class__
        if subclass in cls._abc_cache:
            return True
        subtype = type(instance)
        if subtype is subclass:
            if (cls._abc_negative_cache_version ==
                ABCMeta._abc_invalidation_counter and
                subclass in cls._abc_negative_cache):
                return False
            # Fall back to the subclass check.
            return cls.__subclasscheck__(subclass)
        return any(cls.__subclasscheck__(c) for c in (subclass, subtype))
    def __subclasscheck__(cls, subclass):
        """Override for issubclass(subclass, cls)."""
        if not isinstance(subclass, type):
            raise TypeError('issubclass() arg 1 must be a class')
        # Check cache
        if subclass in cls._abc_cache:
            return True
        # Check negative cache; may have to invalidate
        if cls._abc_negative_cache_version < ABCMeta._abc_invalidation_counter:
            # Invalidate the negative cache
            cls._abc_negative_cache = WeakSet()
            cls._abc_negative_cache_version = ABCMeta._abc_invalidation_counter
        elif subclass in cls._abc_negative_cache:
            return False
        # Check the subclass hook
        ok = cls.__subclasshook__(subclass)
        if ok is not NotImplemented:
            assert isinstance(ok, bool)
            if ok:
                cls._abc_cache.add(subclass)
            else:
                cls._abc_negative_cache.add(subclass)
            return ok
        # Check if it's a direct subclass
        if cls in getattr(subclass, '__mro__', ()):
            cls._abc_cache.add(subclass)
            return True
        # Check if it's a subclass of a registered class (recursive)
        for rcls in cls._abc_registry:
            if issubclass(subclass, rcls):
                cls._abc_cache.add(subclass)
                return True
        # Check if it's a subclass of a subclass (recursive)
        for scls in cls.__subclasses__():
            if issubclass(subclass, scls):
                cls._abc_cache.add(subclass)
                return True
        # No dice; update negative cache
        cls._abc_negative_cache.add(subclass)
        return False
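The registration and caching machinery above can be exercised directly; `_py_abc` is the pure-Python fallback for the C `_abc` module, so the following sketch works against either implementation through the public `abc` module (the `Sized`/`MyMapping` names are invented for the example):

```python
import abc

class Sized(metaclass=abc.ABCMeta):
    """Toy ABC; dict will be registered as a virtual subclass."""

class MyMapping(dict):
    pass

Sized.register(dict)  # returns dict, so it is usable as a class decorator

# Virtual subclasses (and their descendants) pass issubclass/isinstance,
# but the registering ABC does not appear in their MRO.
print(issubclass(MyMapping, Sized))   # True
print(isinstance({}, Sized))          # True
print(Sized in MyMapping.__mro__)     # False
```

Note how the result for `MyMapping` comes from the recursive registry walk in `__subclasscheck__`, after which it is remembered in `_abc_cache` so later checks are a set lookup.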
2639
Utils/PythonNew32/Lib/_pydatetime.py
Normal file
File diff suppressed because it is too large
6339
Utils/PythonNew32/Lib/_pydecimal.py
Normal file
File diff suppressed because it is too large
2700
Utils/PythonNew32/Lib/_pyio.py
Normal file
File diff suppressed because it is too large
363
Utils/PythonNew32/Lib/_pylong.py
Normal file
@@ -0,0 +1,363 @@
"""Python implementations of some algorithms for use by longobject.c.
The goal is to provide asymptotically faster algorithms that can be
used for operations on integers with many digits. In those cases, the
performance overhead of the Python implementation is not significant
since the asymptotic behavior is what dominates runtime. Functions
provided by this module should be considered private and not part of any
public API.

Note: for ease of maintainability, please prefer clear code and avoid
"micro-optimizations".  This module will only be imported and used for
integers with a huge number of digits.  Saving a few microseconds with
tricky or non-obvious code is not worth it.  For people looking for
maximum performance, they should use something like gmpy2."""

import re
import decimal
try:
    import _decimal
except ImportError:
    _decimal = None
# A number of functions have this form, where `w` is a desired number of
|
||||
# digits in base `base`:
|
||||
#
|
||||
# def inner(...w...):
|
||||
# if w <= LIMIT:
|
||||
# return something
|
||||
# lo = w >> 1
|
||||
# hi = w - lo
|
||||
# something involving base**lo, inner(...lo...), j, and inner(...hi...)
|
||||
# figure out largest w needed
|
||||
# result = inner(w)
|
||||
#
|
||||
# They all had some on-the-fly scheme to cache `base**lo` results for reuse.
|
||||
# Power is costly.
|
||||
#
|
||||
# This routine aims to compute all amd only the needed powers in advance, as
|
||||
# efficiently as reasonably possible. This isn't trivial, and all the
|
||||
# on-the-fly methods did needless work in many cases. The driving code above
|
||||
# changes to:
|
||||
#
|
||||
# figure out largest w needed
|
||||
# mycache = compute_powers(w, base, LIMIT)
|
||||
# result = inner(w)
|
||||
#
|
||||
# and `mycache[lo]` replaces `base**lo` in the inner function.
|
||||
#
|
||||
# While this does give minor speedups (a few percent at best), the primary
|
||||
# intent is to simplify the functions using this, by eliminating the need for
|
||||
# them to craft their own ad-hoc caching schemes.
|
||||
def compute_powers(w, base, more_than, show=False):
|
||||
seen = set()
|
||||
need = set()
|
||||
ws = {w}
|
||||
while ws:
|
||||
w = ws.pop() # any element is fine to use next
|
||||
if w in seen or w <= more_than:
|
||||
continue
|
||||
seen.add(w)
|
||||
lo = w >> 1
|
||||
# only _need_ lo here; some other path may, or may not, need hi
|
||||
need.add(lo)
|
||||
ws.add(lo)
|
||||
if w & 1:
|
||||
ws.add(lo + 1)
|
||||
|
||||
d = {}
|
||||
if not need:
|
||||
return d
|
||||
it = iter(sorted(need))
|
||||
first = next(it)
|
||||
if show:
|
||||
print("pow at", first)
|
||||
d[first] = base ** first
|
||||
for this in it:
|
||||
if this - 1 in d:
|
||||
if show:
|
||||
print("* base at", this)
|
||||
d[this] = d[this - 1] * base # cheap
|
||||
else:
|
||||
lo = this >> 1
|
||||
hi = this - lo
|
||||
assert lo in d
|
||||
if show:
|
||||
print("square at", this)
|
||||
# Multiplying a bigint by itself (same object!) is about twice
|
||||
# as fast in CPython.
|
||||
sq = d[lo] * d[lo]
|
||||
if hi != lo:
|
||||
assert hi == lo + 1
|
||||
if show:
|
||||
print(" and * base")
|
||||
sq *= base
|
||||
d[this] = sq
|
||||
return d
|
||||
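As a sanity check of the precomputation strategy described in the comment above, here is a self-contained run of `compute_powers` (copied from this file, with the optional `show` printing dropped) on small illustrative parameters: every cached value is exactly `base**k`, and the cache covers each `lo` split the recursive callers will ask for.

```python
# compute_powers, condensed from the module above so this sketch is
# self-contained; parameters (w=100, base=10, more_than=5) are illustrative.
def compute_powers(w, base, more_than):
    seen = set()
    need = set()
    ws = {w}
    while ws:
        w = ws.pop()
        if w in seen or w <= more_than:
            continue
        seen.add(w)
        lo = w >> 1
        need.add(lo)        # recursion will need base**lo at this split
        ws.add(lo)
        if w & 1:
            ws.add(lo + 1)  # odd widths also recurse on the hi half

    d = {}
    if not need:
        return d
    it = iter(sorted(need))
    first = next(it)
    d[first] = base ** first
    for this in it:
        if this - 1 in d:
            d[this] = d[this - 1] * base        # one cheap multiply
        else:
            lo = this >> 1
            sq = d[lo] * d[lo]                  # square a smaller power
            if this - lo != lo:
                sq *= base                      # odd: one extra multiply
            d[this] = sq
    return d

cache = compute_powers(100, 10, 5)
```

Only the splits actually reachable from `w=100` are cached (here 3, 6, 12, 25 and 50), each built from a smaller entry by one squaring or one multiply rather than a fresh exponentiation.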

_unbounded_dec_context = decimal.getcontext().copy()
_unbounded_dec_context.prec = decimal.MAX_PREC
_unbounded_dec_context.Emax = decimal.MAX_EMAX
_unbounded_dec_context.Emin = decimal.MIN_EMIN
_unbounded_dec_context.traps[decimal.Inexact] = 1  # sanity check


def int_to_decimal(n):
    """Asymptotically fast conversion of an 'int' to Decimal."""

    # Function due to Tim Peters. See GH issue #90716 for details.
    # https://github.com/python/cpython/issues/90716
    #
    # The implementation in longobject.c of base conversion algorithms
    # between power-of-2 and non-power-of-2 bases is quadratic time.
    # This function implements a divide-and-conquer algorithm that is
    # faster for large numbers. Builds an equal decimal.Decimal in a
    # "clever" recursive way. If we want a string representation, we
    # apply str to _that_.

    from decimal import Decimal as D
    BITLIM = 200

    # Don't bother caching the "lo" mask in this; the time to compute it is
    # tiny compared to the multiply.
    def inner(n, w):
        if w <= BITLIM:
            return D(n)
        w2 = w >> 1
        hi = n >> w2
        lo = n & ((1 << w2) - 1)
        return inner(lo, w2) + inner(hi, w - w2) * w2pow[w2]

    with decimal.localcontext(_unbounded_dec_context):
        nbits = n.bit_length()
        w2pow = compute_powers(nbits, D(2), BITLIM)
        if n < 0:
            negate = True
            n = -n
        else:
            negate = False
        result = inner(n, nbits)
        if negate:
            result = -result
    return result


def int_to_decimal_string(n):
    """Asymptotically fast conversion of an 'int' to a decimal string."""
    w = n.bit_length()
    if w > 450_000 and _decimal is not None:
        # It is only usable with the C decimal implementation.
        # _pydecimal.py calls str() on very large integers, which in its
        # turn calls int_to_decimal_string(), causing very deep recursion.
        return str(int_to_decimal(n))

    # Fallback algorithm for the case when the C decimal module isn't
    # available. This algorithm is asymptotically worse than the algorithm
    # using the decimal module, but better than the quadratic time
    # implementation in longobject.c.

    DIGLIM = 1000

    def inner(n, w):
        if w <= DIGLIM:
            return str(n)
        w2 = w >> 1
        hi, lo = divmod(n, pow10[w2])
        return inner(hi, w - w2) + inner(lo, w2).zfill(w2)

    # The estimation of the number of decimal digits.
    # There is no harm in small error. If we guess too large, there may
    # be leading 0's that need to be stripped. If we guess too small, we
    # may need to call str() recursively for the remaining highest digits,
    # which can still potentially be a large integer. This is manifested
    # only if the number has way more than 10**15 digits, which exceeds
    # the 52-bit physical address limit in both Intel64 and AMD64.
    w = int(w * 0.3010299956639812 + 1)  # log10(2)
    pow10 = compute_powers(w, 5, DIGLIM)
    for k, v in pow10.items():
        pow10[k] = v << k  # 5**k << k == 5**k * 2**k == 10**k
    if n < 0:
        n = -n
        sign = '-'
    else:
        sign = ''
    s = inner(n, w)
    if s[0] == '0' and n:
        # If our guess of w is too large, there may be leading 0's that
        # need to be stripped.
        s = s.lstrip('0')
    return sign + s
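The divide-and-conquer rendering used by the fallback path above can be seen in miniature below. This is a sketch, not the module's function: `fast_str` and `DIGLIM = 8` are illustrative names/values, and a plain dict stands in for the `compute_powers` cache, but the split (`divmod` by `10**w2`, then `zfill` on the low half) mirrors the `inner` above.

```python
def fast_str(n):
    """Divide-and-conquer decimal rendering of a non-negative int,
    mirroring int_to_decimal_string's fallback path (illustrative sketch)."""
    DIGLIM = 8  # small cutoff so the recursion is exercised on modest inputs
    # estimate the digit count from the bit length (log10(2) ~= 0.30103)
    w = int(n.bit_length() * 0.3010299956639812 + 1) if n else 1

    pow10 = {}  # stand-in for the compute_powers cache

    def p10(k):
        if k not in pow10:
            pow10[k] = 10 ** k
        return pow10[k]

    def inner(n, w):
        if w <= DIGLIM:
            return str(n)
        w2 = w >> 1
        hi, lo = divmod(n, p10(w2))
        # the low half must keep its leading zeros, hence zfill
        return inner(hi, w - w2) + inner(lo, w2).zfill(w2)

    # an over-estimated w only produces leading zeros; strip them
    return inner(n, w).lstrip('0') or '0'
```

The output agrees with built-in `str` for any non-negative input; the win is asymptotic, since each level halves the digit count instead of peeling one digit at a time.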
def _str_to_int_inner(s):
    """Asymptotically fast conversion of a 'str' to an 'int'."""

    # Function due to Bjorn Martinsson. See GH issue #90716 for details.
    # https://github.com/python/cpython/issues/90716
    #
    # The implementation in longobject.c of base conversion algorithms
    # between power-of-2 and non-power-of-2 bases is quadratic time.
    # This function implements a divide-and-conquer algorithm making use
    # of Python's built in big int multiplication. Since Python uses the
    # Karatsuba algorithm for multiplication, the time complexity
    # of this function is O(len(s)**1.58).

    DIGLIM = 2048

    def inner(a, b):
        if b - a <= DIGLIM:
            return int(s[a:b])
        mid = (a + b + 1) >> 1
        return (inner(mid, b)
                + ((inner(a, mid) * w5pow[b - mid])
                   << (b - mid)))

    w5pow = compute_powers(len(s), 5, DIGLIM)
    return inner(0, len(s))


def int_from_string(s):
    """Asymptotically fast version of PyLong_FromString(), conversion
    of a string of decimal digits into an 'int'."""
    # PyLong_FromString() has already removed leading +/-, checked for invalid
    # use of underscore characters, checked that string consists of only digits
    # and underscores, and stripped leading whitespace. The input can still
    # contain underscores and have trailing whitespace.
    s = s.rstrip().replace('_', '')
    return _str_to_int_inner(s)


def str_to_int(s):
    """Asymptotically fast version of decimal string to 'int' conversion."""
    # FIXME: this doesn't support the full syntax that int() supports.
    m = re.match(r'\s*([+-]?)([0-9_]+)\s*', s)
    if not m:
        raise ValueError('invalid literal for int() with base 10')
    v = int_from_string(m.group(2))
    if m.group(1) == '-':
        v = -v
    return v
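The string-to-int direction above uses the same split. A self-contained miniature (illustrative name `fast_int`, small cutoff, and a direct `10 ** k` in place of the `5**k << k` cache trick) shows the recurrence: the high half of the digits is scaled by `10**(number of low digits)` and added to the low half.

```python
def fast_int(s, DIGLIM=4):
    """Divide-and-conquer decimal-string parsing, mirroring
    _str_to_int_inner (illustrative sketch on digit-only input)."""
    def inner(a, b):
        # parse s[a:b]
        if b - a <= DIGLIM:
            return int(s[a:b])
        mid = (a + b + 1) >> 1
        # value(s[a:b]) = value(s[a:mid]) * 10**(b - mid) + value(s[mid:b])
        return inner(a, mid) * 10 ** (b - mid) + inner(mid, b)

    return inner(0, len(s))
```

Because each level does one big-int multiply on halves, the cost follows the multiplication algorithm (Karatsuba in CPython), giving the O(len(s)**1.58) bound cited in the comment.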
# Fast integer division, based on code from Mark Dickinson, fast_div.py
# GH-47701. Additional refinements and optimizations by Bjorn Martinsson. The
# algorithm is due to Burnikel and Ziegler, in their paper "Fast Recursive
# Division".

_DIV_LIMIT = 4000


def _div2n1n(a, b, n):
    """Divide a 2n-bit nonnegative integer a by an n-bit positive integer
    b, using a recursive divide-and-conquer algorithm.

    Inputs:
      n is a positive integer
      b is a positive integer with exactly n bits
      a is a nonnegative integer such that a < 2**n * b

    Output:
      (q, r) such that a = b*q+r and 0 <= r < b.

    """
    if a.bit_length() - n <= _DIV_LIMIT:
        return divmod(a, b)
    pad = n & 1
    if pad:
        a <<= 1
        b <<= 1
        n += 1
    half_n = n >> 1
    mask = (1 << half_n) - 1
    b1, b2 = b >> half_n, b & mask
    q1, r = _div3n2n(a >> n, (a >> half_n) & mask, b, b1, b2, half_n)
    q2, r = _div3n2n(r, a & mask, b, b1, b2, half_n)
    if pad:
        r >>= 1
    return q1 << half_n | q2, r


def _div3n2n(a12, a3, b, b1, b2, n):
    """Helper function for _div2n1n; not intended to be called directly."""
    if a12 >> n == b1:
        q, r = (1 << n) - 1, a12 - (b1 << n) + b1
    else:
        q, r = _div2n1n(a12, b1, n)
    r = (r << n | a3) - q * b2
    while r < 0:
        q -= 1
        r += b
    return q, r
def _int2digits(a, n):
    """Decompose non-negative int a into base 2**n

    Input:
      a is a non-negative integer

    Output:
      List of the digits of a in base 2**n in little-endian order,
      meaning the most significant digit is last. The most
      significant digit is guaranteed to be non-zero.
      If a is 0 then the output is an empty list.

    """
    a_digits = [0] * ((a.bit_length() + n - 1) // n)

    def inner(x, L, R):
        if L + 1 == R:
            a_digits[L] = x
            return
        mid = (L + R) >> 1
        shift = (mid - L) * n
        upper = x >> shift
        lower = x ^ (upper << shift)
        inner(lower, L, mid)
        inner(upper, mid, R)

    if a:
        inner(a, 0, len(a_digits))
    return a_digits


def _digits2int(digits, n):
    """Combine base-2**n digits into an int. This function is the
    inverse of `_int2digits`. For more details, see _int2digits.
    """

    def inner(L, R):
        if L + 1 == R:
            return digits[L]
        mid = (L + R) >> 1
        shift = (mid - L) * n
        return (inner(mid, R) << shift) + inner(L, mid)

    return inner(0, len(digits)) if digits else 0


def _divmod_pos(a, b):
    """Divide a non-negative integer a by a positive integer b, giving
    quotient and remainder."""
    # Use grade-school algorithm in base 2**n, n = nbits(b)
    n = b.bit_length()
    a_digits = _int2digits(a, n)

    r = 0
    q_digits = []
    for a_digit in reversed(a_digits):
        q_digit, r = _div2n1n((r << n) + a_digit, b, n)
        q_digits.append(q_digit)
    q_digits.reverse()
    q = _digits2int(q_digits, n)
    return q, r


def int_divmod(a, b):
    """Asymptotically fast replacement for divmod, for 'int'.
    Its time complexity is O(n**1.58), where n = #bits(a) + #bits(b).
    """
    if b == 0:
        raise ZeroDivisionError
    elif b < 0:
        q, r = int_divmod(-a, -b)
        return q, -r
    elif a < 0:
        q, r = int_divmod(~a, b)
        return ~q, b + ~r
    else:
        return _divmod_pos(a, b)
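The sign handling in `int_divmod` above relies on two identities that reduce every case to non-negative operands while preserving Python's floor-division convention: `divmod(a, -b) == (q, -r)` where `(q, r) = divmod(-a, b)`, and for `a < 0, b > 0`, `divmod(a, b) == (~q, b + ~r)` where `(q, r) = divmod(~a, b)`. The sketch below reproduces just that wrapper, with the built-in `divmod` standing in for the recursive `_divmod_pos` core, so the identities can be checked directly.

```python
def int_divmod(a, b):
    """Sign handling as in _pylong's int_divmod; built-in divmod stands in
    for the non-negative recursive core (_divmod_pos)."""
    if b == 0:
        raise ZeroDivisionError
    elif b < 0:
        # flip both signs; only the remainder's sign needs restoring
        q, r = int_divmod(-a, -b)
        return q, -r
    elif a < 0:
        # ~a == -a - 1 is non-negative; undo with ~q and b + ~r
        q, r = int_divmod(~a, b)
        return ~q, b + ~r
    else:
        return divmod(a, b)
```

For every mixed-sign pair the result matches the built-in, e.g. `int_divmod(-7, 2)` gives `(-4, 1)` just like `divmod(-7, 2)`.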
19 Utils/PythonNew32/Lib/_pyrepl/__init__.py Normal file
@@ -0,0 +1,19 @@
# Copyright 2000-2008 Michael Hudson-Doyle <micahel@gmail.com>
#                     Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
13 Utils/PythonNew32/Lib/_pyrepl/__main__.py Normal file
@@ -0,0 +1,13 @@
# Important: don't add things to this module, as they will end up in the REPL's
# default globals. Use _pyrepl.main instead.

# Avoid having linecache cache this file and incorrectly report tracebacks.
# See https://github.com/python/cpython/issues/129098.
__spec__ = __loader__ = None

if __name__ == "__main__":
    from .main import interactive_console as __pyrepl_interactive_console
    __pyrepl_interactive_console()
68 Utils/PythonNew32/Lib/_pyrepl/_minimal_curses.py Normal file
@@ -0,0 +1,68 @@
"""Minimal '_curses' module, the low-level interface for the curses module
which is not meant to be used directly.

Based on ctypes. It's too incomplete to be really called '_curses', so
to use it, you have to import it and stick it in sys.modules['_curses']
manually.

Note that there is also a built-in module _minimal_curses which will
hide this one if compiled in.
"""

import ctypes
import ctypes.util


class error(Exception):
    pass


def _find_clib() -> str:
    trylibs = ["ncursesw", "ncurses", "curses"]

    for lib in trylibs:
        path = ctypes.util.find_library(lib)
        if path:
            return path
    raise ModuleNotFoundError("curses library not found", name="_pyrepl._minimal_curses")


_clibpath = _find_clib()
clib = ctypes.cdll.LoadLibrary(_clibpath)

clib.setupterm.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.POINTER(ctypes.c_int)]
clib.setupterm.restype = ctypes.c_int

clib.tigetstr.argtypes = [ctypes.c_char_p]
clib.tigetstr.restype = ctypes.c_ssize_t

clib.tparm.argtypes = [ctypes.c_char_p] + 9 * [ctypes.c_int]  # type: ignore[operator]
clib.tparm.restype = ctypes.c_char_p

OK = 0
ERR = -1

# ____________________________________________________________


def setupterm(termstr, fd):
    err = ctypes.c_int(0)
    result = clib.setupterm(termstr, fd, ctypes.byref(err))
    if result == ERR:
        raise error("setupterm() failed (err=%d)" % err.value)


def tigetstr(cap):
    if not isinstance(cap, bytes):
        cap = cap.encode("ascii")
    result = clib.tigetstr(cap)
    if result == ERR:
        return None
    return ctypes.cast(result, ctypes.c_char_p).value


def tparm(str, i1=0, i2=0, i3=0, i4=0, i5=0, i6=0, i7=0, i8=0, i9=0):
    result = clib.tparm(str, i1, i2, i3, i4, i5, i6, i7, i8, i9)
    if result is None:
        raise error("tparm() returned NULL")
    return result
74 Utils/PythonNew32/Lib/_pyrepl/_threading_handler.py Normal file
@@ -0,0 +1,74 @@
from __future__ import annotations

from dataclasses import dataclass, field
import traceback


TYPE_CHECKING = False
if TYPE_CHECKING:
    from threading import Thread
    from types import TracebackType
    from typing import Protocol

    class ExceptHookArgs(Protocol):
        @property
        def exc_type(self) -> type[BaseException]: ...
        @property
        def exc_value(self) -> BaseException | None: ...
        @property
        def exc_traceback(self) -> TracebackType | None: ...
        @property
        def thread(self) -> Thread | None: ...

    class ShowExceptions(Protocol):
        def __call__(self) -> int: ...
        def add(self, s: str) -> None: ...

    from .reader import Reader


def install_threading_hook(reader: Reader) -> None:
    import threading

    @dataclass
    class ExceptHookHandler:
        lock: threading.Lock = field(default_factory=threading.Lock)
        messages: list[str] = field(default_factory=list)

        def show(self) -> int:
            count = 0
            with self.lock:
                if not self.messages:
                    return 0
                reader.restore()
                for tb in self.messages:
                    count += 1
                    if tb:
                        print(tb)
                self.messages.clear()
                reader.scheduled_commands.append("ctrl-c")
                reader.prepare()
            return count

        def add(self, s: str) -> None:
            with self.lock:
                self.messages.append(s)

        def exception(self, args: ExceptHookArgs) -> None:
            lines = traceback.format_exception(
                args.exc_type,
                args.exc_value,
                args.exc_traceback,
                colorize=reader.can_colorize,
            )  # type: ignore[call-overload]
            pre = f"\nException in {args.thread.name}:\n" if args.thread else "\n"
            tb = pre + "".join(lines)
            self.add(tb)

        def __call__(self) -> int:
            return self.show()

    handler = ExceptHookHandler()
    reader.threading_hook = handler
    threading.excepthook = handler.exception
110 Utils/PythonNew32/Lib/_pyrepl/base_eventqueue.py Normal file
@@ -0,0 +1,110 @@
# Copyright 2000-2008 Michael Hudson-Doyle <micahel@gmail.com>
#                     Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

"""
OS-independent base for an event and VT sequence scanner

See unix_eventqueue and windows_eventqueue for subclasses.
"""

from collections import deque

from . import keymap
from .console import Event
from .trace import trace


class BaseEventQueue:
    def __init__(self, encoding: str, keymap_dict: dict[bytes, str]) -> None:
        self.compiled_keymap = keymap.compile_keymap(keymap_dict)
        self.keymap = self.compiled_keymap
        trace("keymap {k!r}", k=self.keymap)
        self.encoding = encoding
        self.events: deque[Event] = deque()
        self.buf = bytearray()

    def get(self) -> Event | None:
        """
        Retrieves the next event from the queue.
        """
        if self.events:
            return self.events.popleft()
        else:
            return None

    def empty(self) -> bool:
        """
        Checks if the queue is empty.
        """
        return not self.events

    def flush_buf(self) -> bytearray:
        """
        Flushes the buffer and returns its contents.
        """
        old = self.buf
        self.buf = bytearray()
        return old

    def insert(self, event: Event) -> None:
        """
        Inserts an event into the queue.
        """
        trace('added event {event}', event=event)
        self.events.append(event)

    def push(self, char: int | bytes) -> None:
        """
        Processes a character by updating the buffer and handling special key mappings.
        """
        assert isinstance(char, (int, bytes))
        ord_char = char if isinstance(char, int) else ord(char)
        char = ord_char.to_bytes()
        self.buf.append(ord_char)

        if char in self.keymap:
            if self.keymap is self.compiled_keymap:
                # sanity check: the buffer is empty when a special key comes
                assert len(self.buf) == 1
            k = self.keymap[char]
            trace('found map {k!r}', k=k)
            if isinstance(k, dict):
                self.keymap = k
            else:
                self.insert(Event('key', k, self.flush_buf()))
                self.keymap = self.compiled_keymap

        elif self.buf and self.buf[0] == 27:  # escape
            # escape sequence not recognized by our keymap: propagate it
            # outside so that it can be recognized as an M-... key (see also
            # the docstring in keymap.py).
            trace('unrecognized escape sequence, propagating...')
            self.keymap = self.compiled_keymap
            self.insert(Event('key', '\033', bytearray(b'\033')))
            for _c in self.flush_buf()[1:]:
                self.push(_c)

        else:
            try:
                decoded = bytes(self.buf).decode(self.encoding)
            except UnicodeError:
                return
            else:
                self.insert(Event('key', decoded, self.flush_buf()))
            self.keymap = self.compiled_keymap
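The `push` method above is a byte-at-a-time walk of a compiled keymap trie: a dict hit descends a level, a leaf hit emits a key event and resets, and anything else falls through to ordinary decoding. This standalone sketch shows the same walk on a hypothetical two-entry keymap (the sequences and event names are illustrative, not the module's real keymap):

```python
# Hypothetical compiled keymap: the "up" CSI sequence and carriage return.
compiled = {b'\x1b': {b'[': {b'A': 'up'}}, b'\r': 'accept'}

def scan(data: bytes):
    """Map a byte stream to (event, raw-bytes) pairs, trie-walk style."""
    events = []
    keymap = compiled
    buf = bytearray()
    for byte in data:
        ch = bytes([byte])
        buf.append(byte)
        if ch in keymap:
            k = keymap[ch]
            if isinstance(k, dict):
                keymap = k          # partial match: descend one level
            else:
                events.append((k, bytes(buf)))  # leaf: emit special key
                buf.clear()
                keymap = compiled
        else:
            events.append(('key', bytes(buf)))  # ordinary input
            buf.clear()
            keymap = compiled
    return events
```

Scanning `b'\x1b[A\r'` yields one `'up'` event carrying the three raw bytes and one `'accept'` event, which is exactly the shape `push` hands to `insert` via `Event`.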
489 Utils/PythonNew32/Lib/_pyrepl/commands.py Normal file
@@ -0,0 +1,489 @@
# Copyright 2000-2010 Michael Hudson-Doyle <micahel@gmail.com>
#                     Antonio Cuni
#                     Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations
import os

# Categories of actions:
#  killing
#  yanking
#  motion
#  editing
#  history
#  finishing
# [completion]


# types
if False:
    from .historical_reader import HistoricalReader


class Command:
    finish: bool = False
    kills_digit_arg: bool = True

    def __init__(
        self, reader: HistoricalReader, event_name: str, event: list[str]
    ) -> None:
        # Reader should really be "any reader" but there's too much usage of
        # HistoricalReader methods and fields in the code below for us to
        # refactor at the moment.

        self.reader = reader
        self.event = event
        self.event_name = event_name

    def do(self) -> None:
        pass


class KillCommand(Command):
    def kill_range(self, start: int, end: int) -> None:
        if start == end:
            return
        r = self.reader
        b = r.buffer
        text = b[start:end]
        del b[start:end]
        if is_kill(r.last_command):
            if start < r.pos:
                r.kill_ring[-1] = text + r.kill_ring[-1]
            else:
                r.kill_ring[-1] = r.kill_ring[-1] + text
        else:
            r.kill_ring.append(text)
        r.pos = start
        r.dirty = True


class YankCommand(Command):
    pass


class MotionCommand(Command):
    pass


class EditCommand(Command):
    pass


class FinishCommand(Command):
    finish = True
    pass


def is_kill(command: type[Command] | None) -> bool:
    return command is not None and issubclass(command, KillCommand)


def is_yank(command: type[Command] | None) -> bool:
    return command is not None and issubclass(command, YankCommand)
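The classes above follow a command pattern: the reader instantiates one `Command` subclass per key event, calls its `do()`, and checks class attributes like `finish` to decide control flow. A self-contained miniature (all names here, `MiniReader`, `insert_char`, etc., are illustrative, not the module's API) shows the dispatch shape:

```python
# Miniature of the command-class pattern: a name -> class table, one
# instance per event, behavior via do() and class attributes.
class Command:
    finish = False

    def __init__(self, reader, event_name, event):
        self.reader = reader
        self.event_name = event_name
        self.event = event

    def do(self):
        pass


class insert_char(Command):
    def do(self):
        # append the event's characters to the reader's edit buffer
        self.reader.buffer.extend(self.event)


class accept(Command):
    finish = True  # no action needed; the flag alone ends the edit loop


class MiniReader:
    commands = {"insert": insert_char, "accept": accept}

    def __init__(self):
        self.buffer = []
        self.finished = False

    def handle(self, name, event):
        cmd = self.commands[name](self, name, event)
        cmd.do()
        if cmd.finish:
            self.finished = True


r = MiniReader()
r.handle("insert", ["h", "i"])
r.handle("accept", ["\r"])
```

Keeping behavior in subclasses means keymaps only need to store command names, and properties like "is this a kill command" reduce to `issubclass` checks, as in `is_kill`/`is_yank` above.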
# etc


class digit_arg(Command):
    kills_digit_arg = False

    def do(self) -> None:
        r = self.reader
        c = self.event[-1]
        if c == "-":
            if r.arg is not None:
                r.arg = -r.arg
            else:
                r.arg = -1
        else:
            d = int(c)
            if r.arg is None:
                r.arg = d
            else:
                if r.arg < 0:
                    r.arg = 10 * r.arg - d
                else:
                    r.arg = 10 * r.arg + d
        r.dirty = True


class clear_screen(Command):
    def do(self) -> None:
        r = self.reader
        r.console.clear()
        r.dirty = True


class refresh(Command):
    def do(self) -> None:
        self.reader.dirty = True


class repaint(Command):
    def do(self) -> None:
        self.reader.dirty = True
        self.reader.console.repaint()


class kill_line(KillCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        eol = r.eol()
        for c in b[r.pos : eol]:
            if not c.isspace():
                self.kill_range(r.pos, eol)
                return
        else:
            self.kill_range(r.pos, eol + 1)


class unix_line_discard(KillCommand):
    def do(self) -> None:
        r = self.reader
        self.kill_range(r.bol(), r.pos)


class unix_word_rubout(KillCommand):
    def do(self) -> None:
        r = self.reader
        for i in range(r.get_arg()):
            self.kill_range(r.bow(), r.pos)


class kill_word(KillCommand):
    def do(self) -> None:
        r = self.reader
        for i in range(r.get_arg()):
            self.kill_range(r.pos, r.eow())


class backward_kill_word(KillCommand):
    def do(self) -> None:
        r = self.reader
        for i in range(r.get_arg()):
            self.kill_range(r.bow(), r.pos)


class yank(YankCommand):
    def do(self) -> None:
        r = self.reader
        if not r.kill_ring:
            r.error("nothing to yank")
            return
        r.insert(r.kill_ring[-1])


class yank_pop(YankCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        if not r.kill_ring:
            r.error("nothing to yank")
            return
        if not is_yank(r.last_command):
            r.error("previous command was not a yank")
            return
        repl = len(r.kill_ring[-1])
        r.kill_ring.insert(0, r.kill_ring.pop())
        t = r.kill_ring[-1]
        b[r.pos - repl : r.pos] = t
        r.pos = r.pos - repl + len(t)
        r.dirty = True


class interrupt(FinishCommand):
    def do(self) -> None:
        import signal

        self.reader.console.finish()
        self.reader.finish()
        os.kill(os.getpid(), signal.SIGINT)


class ctrl_c(Command):
    def do(self) -> None:
        self.reader.console.finish()
        self.reader.finish()
        raise KeyboardInterrupt


class suspend(Command):
    def do(self) -> None:
        import signal

        r = self.reader
        p = r.pos
        r.console.finish()
        os.kill(os.getpid(), signal.SIGSTOP)
        ## this should probably be done
        ## in a handler for SIGCONT?
        r.console.prepare()
        r.pos = p
        # r.posxy = 0, 0  # XXX this is invalid
        r.dirty = True
        r.console.screen = []


class up(MotionCommand):
    def do(self) -> None:
        r = self.reader
        for _ in range(r.get_arg()):
            x, y = r.pos2xy()
            new_y = y - 1

            if r.bol() == 0:
                if r.historyi > 0:
                    r.select_item(r.historyi - 1)
                    return
                r.pos = 0
                r.error("start of buffer")
                return

            if (
                x
                > (
                    new_x := r.max_column(new_y)
                )  # we're past the end of the previous line
                or x == r.max_column(y)
                and any(
                    not i.isspace() for i in r.buffer[r.bol() :]
                )  # move between eols
            ):
                x = new_x

            r.setpos_from_xy(x, new_y)


class down(MotionCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        for _ in range(r.get_arg()):
            x, y = r.pos2xy()
            new_y = y + 1

            if r.eol() == len(b):
                if r.historyi < len(r.history):
                    r.select_item(r.historyi + 1)
                    r.pos = r.eol(0)
                    return
                r.pos = len(b)
                r.error("end of buffer")
                return

            if (
                x
                > (
                    new_x := r.max_column(new_y)
                )  # we're past the end of the previous line
                or x == r.max_column(y)
                and any(
                    not i.isspace() for i in r.buffer[r.bol() :]
                )  # move between eols
            ):
                x = new_x

            r.setpos_from_xy(x, new_y)


class left(MotionCommand):
    def do(self) -> None:
        r = self.reader
        for _ in range(r.get_arg()):
            p = r.pos - 1
            if p >= 0:
                r.pos = p
            else:
                self.reader.error("start of buffer")


class right(MotionCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        for _ in range(r.get_arg()):
            p = r.pos + 1
            if p <= len(b):
                r.pos = p
            else:
                self.reader.error("end of buffer")


class beginning_of_line(MotionCommand):
    def do(self) -> None:
        self.reader.pos = self.reader.bol()


class end_of_line(MotionCommand):
    def do(self) -> None:
        self.reader.pos = self.reader.eol()


class home(MotionCommand):
    def do(self) -> None:
        self.reader.pos = 0


class end(MotionCommand):
    def do(self) -> None:
        self.reader.pos = len(self.reader.buffer)


class forward_word(MotionCommand):
    def do(self) -> None:
        r = self.reader
        for i in range(r.get_arg()):
            r.pos = r.eow()


class backward_word(MotionCommand):
    def do(self) -> None:
        r = self.reader
        for i in range(r.get_arg()):
            r.pos = r.bow()


class self_insert(EditCommand):
    def do(self) -> None:
        r = self.reader
        text = self.event * r.get_arg()
        r.insert(text)


class insert_nl(EditCommand):
    def do(self) -> None:
        r = self.reader
        r.insert("\n" * r.get_arg())


class transpose_characters(EditCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        s = r.pos - 1
        if s < 0:
            r.error("cannot transpose at start of buffer")
        else:
            if s == len(b):
                s -= 1
            t = min(s + r.get_arg(), len(b) - 1)
            c = b[s]
            del b[s]
            b.insert(t, c)
            r.pos = t
            r.dirty = True


class backspace(EditCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        for i in range(r.get_arg()):
            if r.pos > 0:
                r.pos -= 1
                del b[r.pos]
                r.dirty = True
            else:
                self.reader.error("can't backspace at start")


class delete(EditCommand):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        if (
            r.pos == 0
            and len(b) == 0  # this is something of a hack
            and self.event[-1] == "\004"
        ):
            r.update_screen()
            r.console.finish()
            raise EOFError
        for i in range(r.get_arg()):
            if r.pos != len(b):
                del b[r.pos]
                r.dirty = True
            else:
                self.reader.error("end of buffer")


class accept(FinishCommand):
    def do(self) -> None:
        pass


class help(Command):
    def do(self) -> None:
        import _sitebuiltins

        with self.reader.suspend():
            self.reader.msg = _sitebuiltins._Helper()()  # type: ignore[assignment]


class invalid_key(Command):
    def do(self) -> None:
        pending = self.reader.console.getpending()
        s = "".join(self.event) + pending.data
        self.reader.error("`%r' not bound" % s)


class invalid_command(Command):
    def do(self) -> None:
        s = self.event_name
        self.reader.error("command `%s' not known" % s)


class show_history(Command):
    def do(self) -> None:
        from .pager import get_pager
        from site import gethistoryfile
history = os.linesep.join(self.reader.history[:])
|
||||
self.reader.console.restore()
|
||||
pager = get_pager()
|
||||
pager(history, gethistoryfile())
|
||||
self.reader.console.prepare()
|
||||
|
||||
# We need to copy over the state so that it's consistent between
|
||||
# console and reader, and console does not overwrite/append stuff
|
||||
self.reader.console.screen = self.reader.screen.copy()
|
||||
self.reader.console.posxy = self.reader.cxy
|
||||
|
||||
|
||||
class paste_mode(Command):
|
||||
|
||||
def do(self) -> None:
|
||||
self.reader.paste_mode = not self.reader.paste_mode
|
||||
self.reader.dirty = True
|
||||
|
||||
|
||||
class enable_bracketed_paste(Command):
|
||||
def do(self) -> None:
|
||||
self.reader.paste_mode = True
|
||||
self.reader.in_bracketed_paste = True
|
||||
|
||||
class disable_bracketed_paste(Command):
|
||||
def do(self) -> None:
|
||||
self.reader.paste_mode = False
|
||||
self.reader.in_bracketed_paste = False
|
||||
self.reader.dirty = True
|
||||
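All of the editing commands above operate on the same minimal reader state: `buffer`, a plain list of single characters, and `pos`, an integer cursor index. A standalone sketch of the `backspace`/`delete` logic using a toy `Buf` class (hypothetical, not part of pyrepl, and omitting the `get_arg()` repeat count and error reporting):

```python
class Buf:
    """Toy model of the reader state used by the editing commands:
    a list of single characters plus an integer cursor position."""

    def __init__(self, text: str) -> None:
        self.buffer = list(text)
        self.pos = len(self.buffer)  # cursor starts at end of line

    def backspace(self) -> None:
        # Mirror of commands.backspace: remove the character before the cursor.
        if self.pos > 0:
            self.pos -= 1
            del self.buffer[self.pos]

    def delete(self) -> None:
        # Mirror of commands.delete: remove the character under the cursor.
        if self.pos != len(self.buffer):
            del self.buffer[self.pos]


b = Buf("hello")
b.backspace()                       # removes the trailing "o"
assert "".join(b.buffer) == "hell"
b.pos = 0
b.delete()                          # removes the leading "h"
assert "".join(b.buffer) == "ell"
```

The real commands differ only in bookkeeping: they loop `get_arg()` times, set `dirty = True` so the screen is repainted, and call `error()` at the buffer boundaries instead of silently doing nothing.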
295
Utils/PythonNew32/Lib/_pyrepl/completing_reader.py
Normal file
@@ -0,0 +1,295 @@
# Copyright 2000-2010   Michael Hudson-Doyle <micahel@gmail.com>
#                       Antonio Cuni
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations

from dataclasses import dataclass, field

import re
from . import commands, console, reader
from .reader import Reader


# types
Command = commands.Command
if False:
    from .types import KeySpec, CommandName


def prefix(wordlist: list[str], j: int = 0) -> str:
    d = {}
    i = j
    try:
        while 1:
            for word in wordlist:
                d[word[i]] = 1
            if len(d) > 1:
                return wordlist[0][j:i]
            i += 1
            d = {}
    except IndexError:
        return wordlist[0][j:i]
    return ""


STRIPCOLOR_REGEX = re.compile(r"\x1B\[([0-9]{1,3}(;[0-9]{1,2})?)?[m|K]")

def stripcolor(s: str) -> str:
    return STRIPCOLOR_REGEX.sub('', s)


def real_len(s: str) -> int:
    return len(stripcolor(s))


def left_align(s: str, maxlen: int) -> str:
    stripped = stripcolor(s)
    if len(stripped) > maxlen:
        # too bad, we remove the color
        return stripped[:maxlen]
    padding = maxlen - len(stripped)
    return s + ' '*padding


def build_menu(
    cons: console.Console,
    wordlist: list[str],
    start: int,
    use_brackets: bool,
    sort_in_column: bool,
) -> tuple[list[str], int]:
    if use_brackets:
        item = "[ %s ]"
        padding = 4
    else:
        item = "%s  "
        padding = 2
    maxlen = min(max(map(real_len, wordlist)), cons.width - padding)
    cols = int(cons.width / (maxlen + padding))
    rows = int((len(wordlist) - 1)/cols + 1)

    if sort_in_column:
        # sort_in_column=False (default)     sort_in_column=True
        #          A B C                       A D G
        #          D E F                       B E
        #          G                           C F
        #
        # "fill" the table with empty words, so we always have the same amount
        # of rows for each column
        missing = cols*rows - len(wordlist)
        wordlist = wordlist + ['']*missing
        indexes = [(i % cols) * rows + i // cols for i in range(len(wordlist))]
        wordlist = [wordlist[i] for i in indexes]
    menu = []
    i = start
    for r in range(rows):
        row = []
        for col in range(cols):
            row.append(item % left_align(wordlist[i], maxlen))
            i += 1
            if i >= len(wordlist):
                break
        menu.append(''.join(row))
        if i >= len(wordlist):
            i = 0
            break
        if r + 5 > cons.height:
            menu.append("   %d more... " % (len(wordlist) - i))
            break
    return menu, i

# this gets somewhat user interface-y, and as a result the logic gets
# very convoluted.
#
#  To summarise the summary of the summary:- people are a problem.
#                  -- The Hitch-Hikers Guide to the Galaxy, Episode 12

#### Desired behaviour of the completions commands.
# the considerations are:
# (1) how many completions are possible
# (2) whether the last command was a completion
# (3) if we can assume that the completer is going to return the same set of
#     completions: this is controlled by the ``assume_immutable_completions``
#     variable on the reader, which is True by default to match the historical
#     behaviour of pyrepl, but e.g. False in the ReadlineAlikeReader to match
#     more closely readline's semantics (this is needed e.g. by
#     fancycompleter)
#
# if there's no possible completion, beep at the user and point this out.
# this is easy.
#
# if there's only one possible completion, stick it in.  if the last thing
# the user did was a completion, point out that they aren't getting anywhere,
# but only if ``assume_immutable_completions`` is True.
#
# now it gets complicated.
#
# for the first press of a completion key:
#  if there's a common prefix, stick it in.

#  irrespective of whether anything got stuck in, if the word is now
#  complete, show the "complete but not unique" message

#  if there's no common prefix and if the word is not now complete,
#  beep.

#        common prefix ->    yes          no
#  word complete \/
#      yes                 "cbnu"      "cbnu"
#      no                    -          beep

# for the second bang on the completion key
#  there will necessarily be no common prefix
#  show a menu of the choices.

# for subsequent bangs, rotate the menu around (if there are sufficient
# choices).


class complete(commands.Command):
    def do(self) -> None:
        r: CompletingReader
        r = self.reader  # type: ignore[assignment]
        last_is_completer = r.last_command_is(self.__class__)
        immutable_completions = r.assume_immutable_completions
        completions_unchangable = last_is_completer and immutable_completions
        stem = r.get_stem()
        if not completions_unchangable:
            r.cmpltn_menu_choices = r.get_completions(stem)

        completions = r.cmpltn_menu_choices
        if not completions:
            r.error("no matches")
        elif len(completions) == 1:
            if completions_unchangable and len(completions[0]) == len(stem):
                r.msg = "[ sole completion ]"
                r.dirty = True
            r.insert(completions[0][len(stem):])
        else:
            p = prefix(completions, len(stem))
            if p:
                r.insert(p)
            if last_is_completer:
                r.cmpltn_menu_visible = True
                r.cmpltn_message_visible = False
                r.cmpltn_menu, r.cmpltn_menu_end = build_menu(
                    r.console, completions, r.cmpltn_menu_end,
                    r.use_brackets, r.sort_in_column)
                r.dirty = True
            elif not r.cmpltn_menu_visible:
                r.cmpltn_message_visible = True
                if stem + p in completions:
                    r.msg = "[ complete but not unique ]"
                    r.dirty = True
                else:
                    r.msg = "[ not unique ]"
                    r.dirty = True


class self_insert(commands.self_insert):
    def do(self) -> None:
        r: CompletingReader
        r = self.reader  # type: ignore[assignment]

        commands.self_insert.do(self)
        if r.cmpltn_menu_visible:
            stem = r.get_stem()
            if len(stem) < 1:
                r.cmpltn_reset()
            else:
                completions = [w for w in r.cmpltn_menu_choices
                               if w.startswith(stem)]
                if completions:
                    r.cmpltn_menu, r.cmpltn_menu_end = build_menu(
                        r.console, completions, 0,
                        r.use_brackets, r.sort_in_column)
                else:
                    r.cmpltn_reset()


@dataclass
class CompletingReader(Reader):
    """Adds completion support"""

    ### Class variables
    # see the comment for the complete command
    assume_immutable_completions = True
    use_brackets = True  # display completions inside []
    sort_in_column = False

    ### Instance variables
    cmpltn_menu: list[str] = field(init=False)
    cmpltn_menu_visible: bool = field(init=False)
    cmpltn_message_visible: bool = field(init=False)
    cmpltn_menu_end: int = field(init=False)
    cmpltn_menu_choices: list[str] = field(init=False)

    def __post_init__(self) -> None:
        super().__post_init__()
        self.cmpltn_reset()
        for c in (complete, self_insert):
            self.commands[c.__name__] = c
            self.commands[c.__name__.replace('_', '-')] = c

    def collect_keymap(self) -> tuple[tuple[KeySpec, CommandName], ...]:
        return super().collect_keymap() + (
            (r'\t', 'complete'),)

    def after_command(self, cmd: Command) -> None:
        super().after_command(cmd)
        if not isinstance(cmd, (complete, self_insert)):
            self.cmpltn_reset()

    def calc_screen(self) -> list[str]:
        screen = super().calc_screen()
        if self.cmpltn_menu_visible:
            # We display the completions menu below the current prompt
            ly = self.lxy[1] + 1
            screen[ly:ly] = self.cmpltn_menu
            # If we're not in the middle of multiline edit, don't append to screeninfo
            # since that screws up the position calculation in pos2xy function.
            # This is a hack to prevent the cursor jumping
            # into the completions menu when pressing left or down arrow.
            if self.pos != len(self.buffer):
                self.screeninfo[ly:ly] = [(0, [])]*len(self.cmpltn_menu)
        return screen

    def finish(self) -> None:
        super().finish()
        self.cmpltn_reset()

    def cmpltn_reset(self) -> None:
        self.cmpltn_menu = []
        self.cmpltn_menu_visible = False
        self.cmpltn_message_visible = False
        self.cmpltn_menu_end = 0
        self.cmpltn_menu_choices = []

    def get_stem(self) -> str:
        st = self.syntax_table
        SW = reader.SYNTAX_WORD
        b = self.buffer
        p = self.pos - 1
        while p >= 0 and st.get(b[p], SW) == SW:
            p -= 1
        return ''.join(b[p+1:self.pos])

    def get_completions(self, stem: str) -> list[str]:
        return []
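The `prefix()` helper above drives the "stick in the common prefix" step of tab completion: starting at offset `j` (the length of the already-typed stem), it advances a column index until two candidate words disagree. A standalone re-implementation of the same algorithm, with a hypothetical name to avoid clashing with the module's own function:

```python
def common_prefix(wordlist: list[str], j: int = 0) -> str:
    """Longest run of characters, starting at index j, shared by all words.

    Same approach as completing_reader.prefix(): walk one column at a
    time and stop when two words disagree, or when the shortest word
    runs out of characters (the IndexError case).
    """
    i = j
    try:
        while True:
            chars = {word[i] for word in wordlist}
            if len(chars) > 1:
                return wordlist[0][j:i]
            i += 1
    except IndexError:
        return wordlist[0][j:i]


# With stem "foo" already typed (j=3), only "ba" gets inserted:
assert common_prefix(["foobar", "foobaz"]) == "fooba"
assert common_prefix(["foobar", "foobaz"], j=3) == "ba"
```

Like the original, this assumes a non-empty `wordlist`; the caller (`complete.do`) only reaches it when there are at least two candidate completions.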
229
Utils/PythonNew32/Lib/_pyrepl/console.py
Normal file
@@ -0,0 +1,229 @@
# Copyright 2000-2004   Michael Hudson-Doyle <micahel@gmail.com>
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations

import _colorize

from abc import ABC, abstractmethod
import ast
import code
import linecache
from dataclasses import dataclass, field
import os.path
import sys


TYPE_CHECKING = False

if TYPE_CHECKING:
    from typing import IO
    from typing import Callable


@dataclass
class Event:
    evt: str
    data: str
    raw: bytes = b""


@dataclass
class Console(ABC):
    posxy: tuple[int, int]
    screen: list[str] = field(default_factory=list)
    height: int = 25
    width: int = 80

    def __init__(
        self,
        f_in: IO[bytes] | int = 0,
        f_out: IO[bytes] | int = 1,
        term: str = "",
        encoding: str = "",
    ):
        self.encoding = encoding or sys.getdefaultencoding()

        if isinstance(f_in, int):
            self.input_fd = f_in
        else:
            self.input_fd = f_in.fileno()

        if isinstance(f_out, int):
            self.output_fd = f_out
        else:
            self.output_fd = f_out.fileno()

    @abstractmethod
    def refresh(self, screen: list[str], xy: tuple[int, int]) -> None: ...

    @abstractmethod
    def prepare(self) -> None: ...

    @abstractmethod
    def restore(self) -> None: ...

    @abstractmethod
    def move_cursor(self, x: int, y: int) -> None: ...

    @abstractmethod
    def set_cursor_vis(self, visible: bool) -> None: ...

    @abstractmethod
    def getheightwidth(self) -> tuple[int, int]:
        """Return (height, width) where height and width are the height
        and width of the terminal window in characters."""
        ...

    @abstractmethod
    def get_event(self, block: bool = True) -> Event | None:
        """Return an Event instance.  Returns None if |block| is false
        and there is no event pending, otherwise waits for the
        completion of an event."""
        ...

    @abstractmethod
    def push_char(self, char: int | bytes) -> None:
        """
        Push a character to the console event queue.
        """
        ...

    @abstractmethod
    def beep(self) -> None: ...

    @abstractmethod
    def clear(self) -> None:
        """Wipe the screen"""
        ...

    @abstractmethod
    def finish(self) -> None:
        """Move the cursor to the end of the display and otherwise get
        ready for end.  XXX could be merged with restore?  Hmm."""
        ...

    @abstractmethod
    def flushoutput(self) -> None:
        """Flush all output to the screen (assuming there's some
        buffering going on somewhere)."""
        ...

    @abstractmethod
    def forgetinput(self) -> None:
        """Forget all pending, but not yet processed input."""
        ...

    @abstractmethod
    def getpending(self) -> Event:
        """Return the characters that have been typed but not yet
        processed."""
        ...

    @abstractmethod
    def wait(self, timeout: float | None) -> bool:
        """Wait for an event. The return value is True if an event is
        available, False if the timeout has been reached. If timeout is
        None, wait forever. The timeout is in milliseconds."""
        ...

    @property
    def input_hook(self) -> Callable[[], int] | None:
        """Returns the current input hook."""
        ...

    @abstractmethod
    def repaint(self) -> None: ...


class InteractiveColoredConsole(code.InteractiveConsole):
    STATEMENT_FAILED = object()

    def __init__(
        self,
        locals: dict[str, object] | None = None,
        filename: str = "<console>",
        *,
        local_exit: bool = False,
    ) -> None:
        super().__init__(locals=locals, filename=filename, local_exit=local_exit)
        self.can_colorize = _colorize.can_colorize()

    def showsyntaxerror(self, filename=None, **kwargs):
        super().showsyntaxerror(filename=filename, **kwargs)

    def _excepthook(self, typ, value, tb):
        import traceback
        lines = traceback.format_exception(
                typ, value, tb,
                colorize=self.can_colorize,
                limit=traceback.BUILTIN_EXCEPTION_LIMIT)
        self.write(''.join(lines))

    def runcode(self, code):
        try:
            exec(code, self.locals)
        except SystemExit:
            raise
        except BaseException:
            self.showtraceback()
            return self.STATEMENT_FAILED
        return None

    def runsource(self, source, filename="<input>", symbol="single"):
        try:
            tree = self.compile.compiler(
                source,
                filename,
                "exec",
                ast.PyCF_ONLY_AST,
                incomplete_input=False,
            )
        except (SyntaxError, OverflowError, ValueError):
            self.showsyntaxerror(filename, source=source)
            return False
        if tree.body:
            *_, last_stmt = tree.body
        for stmt in tree.body:
            wrapper = ast.Interactive if stmt is last_stmt else ast.Module
            the_symbol = symbol if stmt is last_stmt else "exec"
            item = wrapper([stmt])
            try:
                code = self.compile.compiler(item, filename, the_symbol)
                linecache._register_code(code, source, filename)
            except SyntaxError as e:
                if e.args[0] == "'await' outside function":
                    python = os.path.basename(sys.executable)
                    e.add_note(
                        f"Try the asyncio REPL ({python} -m asyncio) to use"
                        f" top-level 'await' and run background asyncio tasks."
                    )
                self.showsyntaxerror(filename, source=source)
                return False
            except (OverflowError, ValueError):
                self.showsyntaxerror(filename, source=source)
                return False

            if code is None:
                return True

            result = self.runcode(code)
            if result is self.STATEMENT_FAILED:
                break
        return False
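The `runsource` method above parses the whole input once to an AST, then compiles and runs each top-level statement separately, wrapping only the last one in `ast.Interactive` so that its value is echoed the way the REPL echoes expressions. A minimal standalone sketch of that per-statement strategy, outside of any console class (variable names here are illustrative):

```python
import ast

source = "x = 2\nx * 3"
tree = ast.parse(source, mode="exec")
ns: dict[str, object] = {}

*_, last_stmt = tree.body
for stmt in tree.body:
    if stmt is last_stmt:
        # "single" mode routes a bare expression's value through
        # sys.displayhook, which is what prints "6" in an interactive session.
        code_obj = compile(ast.Interactive(body=[stmt]), "<input>", "single")
    else:
        # Earlier statements run silently, exactly like in a script.
        code_obj = compile(ast.Module(body=[stmt], type_ignores=[]), "<input>", "exec")
    exec(code_obj, ns)

assert ns["x"] == 2
```

Running statements one at a time is what lets the real implementation stop at the first failing statement (`STATEMENT_FAILED`) instead of aborting the whole block up front.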
33
Utils/PythonNew32/Lib/_pyrepl/curses.py
Normal file
@@ -0,0 +1,33 @@
# Copyright 2000-2010   Michael Hudson-Doyle <micahel@gmail.com>
#                       Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.


try:
    import _curses
except ImportError:
    try:
        import curses as _curses  # type: ignore[no-redef]
    except ImportError:
        from . import _minimal_curses as _curses  # type: ignore[no-redef]

setupterm = _curses.setupterm
tigetstr = _curses.tigetstr
tparm = _curses.tparm
error = _curses.error
76
Utils/PythonNew32/Lib/_pyrepl/fancy_termios.py
Normal file
@@ -0,0 +1,76 @@
# Copyright 2000-2004   Michael Hudson-Doyle <micahel@gmail.com>
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

import termios


class TermState:
    def __init__(self, tuples):
        (
            self.iflag,
            self.oflag,
            self.cflag,
            self.lflag,
            self.ispeed,
            self.ospeed,
            self.cc,
        ) = tuples

    def as_list(self):
        return [
            self.iflag,
            self.oflag,
            self.cflag,
            self.lflag,
            self.ispeed,
            self.ospeed,
            # Always return a copy of the control characters list to ensure
            # there are not any additional references to self.cc
            self.cc[:],
        ]

    def copy(self):
        return self.__class__(self.as_list())


def tcgetattr(fd):
    return TermState(termios.tcgetattr(fd))


def tcsetattr(fd, when, attrs):
    termios.tcsetattr(fd, when, attrs.as_list())


class Term(TermState):
    TS__init__ = TermState.__init__

    def __init__(self, fd=0):
        self.TS__init__(termios.tcgetattr(fd))
        self.fd = fd
        self.stack = []

    def save(self):
        self.stack.append(self.as_list())

    def set(self, when=termios.TCSANOW):
        termios.tcsetattr(self.fd, when, self.as_list())

    def restore(self):
        self.TS__init__(self.stack.pop())
        self.set()
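The detail worth noticing in `TermState.as_list()` above is that it copies the `cc` (control characters) list while returning the other six flat attributes as-is: `cc` is the only mutable element, so copying it is what makes `copy()` produce a fully independent state. A portable sketch of that behaviour, using a hypothetical flat list in place of a real `termios.tcgetattr()` result so it runs off-Unix too:

```python
class FakeTermState:
    """Same shape as fancy_termios.TermState, minus the termios dependency."""

    def __init__(self, tuples):
        (self.iflag, self.oflag, self.cflag, self.lflag,
         self.ispeed, self.ospeed, self.cc) = tuples

    def as_list(self):
        # cc is the one mutable element; copy it so no caller holds a
        # reference into this state's control-character list.
        return [self.iflag, self.oflag, self.cflag, self.lflag,
                self.ispeed, self.ospeed, self.cc[:]]

    def copy(self):
        return self.__class__(self.as_list())


# Hypothetical attribute values standing in for termios.tcgetattr() output.
state = [0, 1, 2, 3, 9600, 9600, ["a", "b"]]
orig = FakeTermState(state)
dup = orig.copy()
dup.cc.append("c")
assert orig.cc == ["a", "b"]  # the copy's cc list is independent
```

The `Term.save()`/`restore()` pair relies on exactly this: each `save()` pushes an independent snapshot, so a later `restore()` cannot be corrupted by edits made in the meantime.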
419
Utils/PythonNew32/Lib/_pyrepl/historical_reader.py
Normal file
@@ -0,0 +1,419 @@
# Copyright 2000-2004   Michael Hudson-Doyle <micahel@gmail.com>
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations

from contextlib import contextmanager
from dataclasses import dataclass, field

from . import commands, input
from .reader import Reader


if False:
    from .types import SimpleContextManager, KeySpec, CommandName


isearch_keymap: tuple[tuple[KeySpec, CommandName], ...] = tuple(
    [("\\%03o" % c, "isearch-end") for c in range(256) if chr(c) != "\\"]
    + [(c, "isearch-add-character") for c in map(chr, range(32, 127)) if c != "\\"]
    + [
        ("\\%03o" % c, "isearch-add-character")
        for c in range(256)
        if chr(c).isalpha() and chr(c) != "\\"
    ]
    + [
        ("\\\\", "self-insert"),
        (r"\C-r", "isearch-backwards"),
        (r"\C-s", "isearch-forwards"),
        (r"\C-c", "isearch-cancel"),
        (r"\C-g", "isearch-cancel"),
        (r"\<backspace>", "isearch-backspace"),
    ]
)

ISEARCH_DIRECTION_NONE = ""
ISEARCH_DIRECTION_BACKWARDS = "r"
ISEARCH_DIRECTION_FORWARDS = "f"


class next_history(commands.Command):
    def do(self) -> None:
        r = self.reader
        if r.historyi == len(r.history):
            r.error("end of history list")
            return
        r.select_item(r.historyi + 1)


class previous_history(commands.Command):
    def do(self) -> None:
        r = self.reader
        if r.historyi == 0:
            r.error("start of history list")
            return
        r.select_item(r.historyi - 1)


class history_search_backward(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.search_next(forwards=False)


class history_search_forward(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.search_next(forwards=True)


class restore_history(commands.Command):
    def do(self) -> None:
        r = self.reader
        if r.historyi != len(r.history):
            if r.get_unicode() != r.history[r.historyi]:
                r.buffer = list(r.history[r.historyi])
                r.pos = len(r.buffer)
                r.dirty = True


class first_history(commands.Command):
    def do(self) -> None:
        self.reader.select_item(0)


class last_history(commands.Command):
    def do(self) -> None:
        self.reader.select_item(len(self.reader.history))


class operate_and_get_next(commands.FinishCommand):
    def do(self) -> None:
        self.reader.next_history = self.reader.historyi + 1


class yank_arg(commands.Command):
    def do(self) -> None:
        r = self.reader
        if r.last_command is self.__class__:
            r.yank_arg_i += 1
        else:
            r.yank_arg_i = 0
        if r.historyi < r.yank_arg_i:
            r.error("beginning of history list")
            return
        a = r.get_arg(-1)
        # XXX how to split?
        words = r.get_item(r.historyi - r.yank_arg_i - 1).split()
        if a < -len(words) or a >= len(words):
            r.error("no such arg")
            return
        w = words[a]
        b = r.buffer
        if r.yank_arg_i > 0:
            o = len(r.yank_arg_yanked)
        else:
            o = 0
        b[r.pos - o : r.pos] = list(w)
        r.yank_arg_yanked = w
        r.pos += len(w) - o
        r.dirty = True


class forward_history_isearch(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.isearch_direction = ISEARCH_DIRECTION_FORWARDS
        r.isearch_start = r.historyi, r.pos
        r.isearch_term = ""
        r.dirty = True
        r.push_input_trans(r.isearch_trans)


class reverse_history_isearch(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.isearch_direction = ISEARCH_DIRECTION_BACKWARDS
        r.dirty = True
        r.isearch_term = ""
        r.push_input_trans(r.isearch_trans)
        r.isearch_start = r.historyi, r.pos


class isearch_cancel(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.isearch_direction = ISEARCH_DIRECTION_NONE
        r.pop_input_trans()
        r.select_item(r.isearch_start[0])
        r.pos = r.isearch_start[1]
        r.dirty = True


class isearch_add_character(commands.Command):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        r.isearch_term += self.event[-1]
        r.dirty = True
        p = r.pos + len(r.isearch_term) - 1
        if b[p : p + 1] != [r.isearch_term[-1]]:
            r.isearch_next()


class isearch_backspace(commands.Command):
    def do(self) -> None:
        r = self.reader
        if len(r.isearch_term) > 0:
            r.isearch_term = r.isearch_term[:-1]
            r.dirty = True
        else:
            r.error("nothing to rubout")


class isearch_forwards(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.isearch_direction = ISEARCH_DIRECTION_FORWARDS
        r.isearch_next()


class isearch_backwards(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.isearch_direction = ISEARCH_DIRECTION_BACKWARDS
        r.isearch_next()


class isearch_end(commands.Command):
    def do(self) -> None:
        r = self.reader
        r.isearch_direction = ISEARCH_DIRECTION_NONE
        r.console.forgetinput()
        r.pop_input_trans()
        r.dirty = True


@dataclass
class HistoricalReader(Reader):
    """Adds history support (with incremental history searching) to the
    Reader class.
    """

    history: list[str] = field(default_factory=list)
    historyi: int = 0
    next_history: int | None = None
    transient_history: dict[int, str] = field(default_factory=dict)
    isearch_term: str = ""
    isearch_direction: str = ISEARCH_DIRECTION_NONE
    isearch_start: tuple[int, int] = field(init=False)
    isearch_trans: input.KeymapTranslator = field(init=False)
    yank_arg_i: int = 0
    yank_arg_yanked: str = ""

    def __post_init__(self) -> None:
        super().__post_init__()
        for c in [
            next_history,
            previous_history,
            restore_history,
            first_history,
            last_history,
            yank_arg,
            forward_history_isearch,
            reverse_history_isearch,
            isearch_end,
            isearch_add_character,
            isearch_cancel,
            isearch_backspace,
            isearch_forwards,
            isearch_backwards,
            operate_and_get_next,
            history_search_backward,
            history_search_forward,
        ]:
            self.commands[c.__name__] = c
            self.commands[c.__name__.replace("_", "-")] = c
        self.isearch_start = self.historyi, self.pos
        self.isearch_trans = input.KeymapTranslator(
            isearch_keymap, invalid_cls=isearch_end, character_cls=isearch_add_character
        )

    def collect_keymap(self) -> tuple[tuple[KeySpec, CommandName], ...]:
        return super().collect_keymap() + (
            (r"\C-n", "next-history"),
            (r"\C-p", "previous-history"),
            (r"\C-o", "operate-and-get-next"),
            (r"\C-r", "reverse-history-isearch"),
            (r"\C-s", "forward-history-isearch"),
            (r"\M-r", "restore-history"),
            (r"\M-.", "yank-arg"),
            (r"\<page down>", "history-search-forward"),
            (r"\x1b[6~", "history-search-forward"),
            (r"\<page up>", "history-search-backward"),
            (r"\x1b[5~", "history-search-backward"),
        )

    def select_item(self, i: int) -> None:
        self.transient_history[self.historyi] = self.get_unicode()
        buf = self.transient_history.get(i)
        if buf is None:
            buf = self.history[i].rstrip()
        self.buffer = list(buf)
        self.historyi = i
        self.pos = len(self.buffer)
        self.dirty = True
        self.last_refresh_cache.invalidated = True

    def get_item(self, i: int) -> str:
        if i != len(self.history):
            return self.transient_history.get(i, self.history[i])
        else:
            return self.transient_history.get(i, self.get_unicode())

    @contextmanager
    def suspend(self) -> SimpleContextManager:
        with super().suspend(), self.suspend_history():
|
||||
yield
|
||||
|
||||
@contextmanager
|
||||
def suspend_history(self) -> SimpleContextManager:
|
||||
try:
|
||||
old_history = self.history[:]
|
||||
del self.history[:]
|
||||
yield
|
||||
finally:
|
||||
self.history[:] = old_history
|
||||
|
||||
def prepare(self) -> None:
|
||||
super().prepare()
|
||||
try:
|
||||
self.transient_history = {}
|
||||
if self.next_history is not None and self.next_history < len(self.history):
|
||||
self.historyi = self.next_history
|
||||
self.buffer[:] = list(self.history[self.next_history])
|
||||
self.pos = len(self.buffer)
|
||||
self.transient_history[len(self.history)] = ""
|
||||
else:
|
||||
self.historyi = len(self.history)
|
||||
self.next_history = None
|
||||
except:
|
||||
self.restore()
|
||||
raise
|
||||
|
||||
def get_prompt(self, lineno: int, cursor_on_line: bool) -> str:
|
||||
if cursor_on_line and self.isearch_direction != ISEARCH_DIRECTION_NONE:
|
||||
d = "rf"[self.isearch_direction == ISEARCH_DIRECTION_FORWARDS]
|
||||
return "(%s-search `%s') " % (d, self.isearch_term)
|
||||
else:
|
||||
return super().get_prompt(lineno, cursor_on_line)
|
||||
|
||||
def search_next(self, *, forwards: bool) -> None:
|
||||
"""Search history for the current line contents up to the cursor.
|
||||
|
||||
Selects the first item found. If nothing is under the cursor, any next
|
||||
item in history is selected.
|
||||
"""
|
||||
pos = self.pos
|
||||
s = self.get_unicode()
|
||||
history_index = self.historyi
|
||||
|
||||
# In multiline contexts, we're only interested in the current line.
|
||||
nl_index = s.rfind('\n', 0, pos)
|
||||
prefix = s[nl_index + 1:pos]
|
||||
pos = len(prefix)
|
||||
|
||||
match_prefix = len(prefix)
|
||||
len_item = 0
|
||||
if history_index < len(self.history):
|
||||
len_item = len(self.get_item(history_index))
|
||||
if len_item and pos == len_item:
|
||||
match_prefix = False
|
||||
elif not pos:
|
||||
match_prefix = False
|
||||
|
||||
while 1:
|
||||
if forwards:
|
||||
out_of_bounds = history_index >= len(self.history) - 1
|
||||
else:
|
||||
out_of_bounds = history_index == 0
|
||||
if out_of_bounds:
|
||||
if forwards and not match_prefix:
|
||||
self.pos = 0
|
||||
self.buffer = []
|
||||
self.dirty = True
|
||||
else:
|
||||
self.error("not found")
|
||||
return
|
||||
|
||||
history_index += 1 if forwards else -1
|
||||
s = self.get_item(history_index)
|
||||
|
||||
if not match_prefix:
|
||||
self.select_item(history_index)
|
||||
return
|
||||
|
||||
len_acc = 0
|
||||
for i, line in enumerate(s.splitlines(keepends=True)):
|
||||
if line.startswith(prefix):
|
||||
self.select_item(history_index)
|
||||
self.pos = pos + len_acc
|
||||
return
|
||||
len_acc += len(line)
|
||||
|
||||
def isearch_next(self) -> None:
|
||||
st = self.isearch_term
|
||||
p = self.pos
|
||||
i = self.historyi
|
||||
s = self.get_unicode()
|
||||
forwards = self.isearch_direction == ISEARCH_DIRECTION_FORWARDS
|
||||
while 1:
|
||||
if forwards:
|
||||
p = s.find(st, p + 1)
|
||||
else:
|
||||
p = s.rfind(st, 0, p + len(st) - 1)
|
||||
if p != -1:
|
||||
self.select_item(i)
|
||||
self.pos = p
|
||||
return
|
||||
elif (forwards and i >= len(self.history) - 1) or (not forwards and i == 0):
|
||||
self.error("not found")
|
||||
return
|
||||
else:
|
||||
if forwards:
|
||||
i += 1
|
||||
s = self.get_item(i)
|
||||
p = -1
|
||||
else:
|
||||
i -= 1
|
||||
s = self.get_item(i)
|
||||
p = len(s)
|
||||
|
||||
def finish(self) -> None:
|
||||
super().finish()
|
||||
ret = self.get_unicode()
|
||||
for i, t in self.transient_history.items():
|
||||
if i < len(self.history) and i != self.historyi:
|
||||
self.history[i] = t
|
||||
if ret and should_auto_add_history:
|
||||
self.history.append(ret)
|
||||
|
||||
|
||||
should_auto_add_history = True
|
||||
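The backwards branch of `isearch_next` above is, at its core, `str.rfind` over the current item, falling back to earlier history strings until a match or index 0. A minimal standalone sketch of that loop (hypothetical `history` data, no Reader state involved):

```python
# Sample history, newest last; hypothetical data for illustration only.
history = ["print(1)", "import os", "print(2)"]

def isearch_backwards(term: str, i: int, p: int):
    """Return (history_index, position) of the previous match, or None.

    Mirrors the not-forwards branch of isearch_next: rfind within the
    current string, then step to the previous history item.
    """
    s = history[i]
    while True:
        p = s.rfind(term, 0, p + len(term) - 1)
        if p != -1:
            return i, p
        if i == 0:
            return None          # ran out of history: "not found"
        i -= 1
        s = history[i]
        p = len(s)               # restart search at the end of the item

print(isearch_backwards("print", 2, len(history[2])))  # (2, 0)
print(isearch_backwards("print", 1, len(history[1])))  # (0, 0)
```

The `p + len(term) - 1` upper bound is what makes repeated Ctrl-R presses step to strictly earlier matches rather than re-finding the current one.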
114
Utils/PythonNew32/Lib/_pyrepl/input.py
Normal file
@@ -0,0 +1,114 @@
# Copyright 2000-2004 Michael Hudson-Doyle <micahel@gmail.com>
#
#                        All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

# (naming modules after builtin functions is not such a hot idea...)

# an KeyTrans instance translates Event objects into Command objects

# hmm, at what level do we want [C-i] and [tab] to be equivalent?
# [meta-a] and [esc a]? obviously, these are going to be equivalent
# for the UnixConsole, but should they be for PygameConsole?

# it would in any situation seem to be a bad idea to bind, say, [tab]
# and [C-i] to *different* things... but should binding one bind the
# other?

# executive, temporary decision: [tab] and [C-i] are distinct, but
# [meta-key] is identified with [esc key].  We demand that any console
# class does quite a lot towards emulating a unix terminal.

from __future__ import annotations

from abc import ABC, abstractmethod
import unicodedata
from collections import deque


# types
if False:
    from .types import EventTuple


class InputTranslator(ABC):
    @abstractmethod
    def push(self, evt: EventTuple) -> None:
        pass

    @abstractmethod
    def get(self) -> EventTuple | None:
        return None

    @abstractmethod
    def empty(self) -> bool:
        return True


class KeymapTranslator(InputTranslator):
    def __init__(self, keymap, verbose=False, invalid_cls=None, character_cls=None):
        self.verbose = verbose
        from .keymap import compile_keymap, parse_keys

        self.keymap = keymap
        self.invalid_cls = invalid_cls
        self.character_cls = character_cls
        d = {}
        for keyspec, command in keymap:
            keyseq = tuple(parse_keys(keyspec))
            d[keyseq] = command
        if self.verbose:
            print(d)
        self.k = self.ck = compile_keymap(d, ())
        self.results = deque()
        self.stack = []

    def push(self, evt):
        if self.verbose:
            print("pushed", evt.data, end="")
        key = evt.data
        d = self.k.get(key)
        if isinstance(d, dict):
            if self.verbose:
                print("transition")
            self.stack.append(key)
            self.k = d
        else:
            if d is None:
                if self.verbose:
                    print("invalid")
                if self.stack or len(key) > 1 or unicodedata.category(key) == "C":
                    self.results.append((self.invalid_cls, self.stack + [key]))
                else:
                    # small optimization:
                    self.k[key] = self.character_cls
                    self.results.append((self.character_cls, [key]))
            else:
                if self.verbose:
                    print("matched", d)
                self.results.append((d, self.stack + [key]))
            self.stack = []
            self.k = self.ck

    def get(self):
        if self.results:
            return self.results.popleft()
        else:
            return None

    def empty(self) -> bool:
        return not self.results
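`KeymapTranslator.push` is a small state machine over the compiled-keymap trie: a dict value means "partial match, descend", a non-dict value is the matched command, and `None` is an unbound sequence. A standalone sketch of that walk, using a hand-built trie in place of `compile_keymap` output (the escape sequence and command names here are illustrative):

```python
# Hand-compiled keymap trie: "\x1b" "[" "A" -> "up"; "a" -> "self-insert".
compiled = {"\x1b": {"[": {"A": "up"}}, "a": "self-insert"}

def feed(keys):
    """Walk the trie the way KeymapTranslator.push does (simplified)."""
    results = []
    k, stack = compiled, []
    for key in keys:
        d = k.get(key)
        if isinstance(d, dict):        # partial match: descend, remember key
            stack.append(key)
            k = d
        else:
            if d is None:              # no binding for this sequence
                results.append(("invalid", stack + [key]))
            else:                      # complete match: emit the command
                results.append((d, stack + [key]))
            stack, k = [], compiled    # reset to the trie root
    return results

print(feed(["\x1b", "[", "A", "a"]))
# [('up', ['\x1b', '[', 'A']), ('self-insert', ['a'])]
```

The real class additionally queues results in a `deque` (consumed via `get`/`empty`) and caches plain printable characters as `character_cls` bindings on first sight.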
213
Utils/PythonNew32/Lib/_pyrepl/keymap.py
Normal file
@@ -0,0 +1,213 @@
# Copyright 2000-2008 Michael Hudson-Doyle <micahel@gmail.com>
#                       Armin Rigo
#
#                        All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

"""
Keymap contains functions for parsing keyspecs and turning keyspecs into
appropriate sequences.

A keyspec is a string representing a sequence of key presses that can
be bound to a command. All characters other than the backslash represent
themselves. In the traditional manner, a backslash introduces an escape
sequence.

pyrepl uses its own keyspec format that is meant to be a strict superset of
readline's KEYSEQ format. This means that if a spec is found that readline
accepts that this doesn't, it should be logged as a bug. Note that this means
we're using the `\\C-o' style of readline's keyspec, not the `Control-o' sort.

The extension to readline is that the sequence \\<KEY> denotes the
sequence of characters produced by hitting KEY.

Examples:
`a' - what you get when you hit the `a' key
`\\EOA' - Escape - O - A (up, on my terminal)
`\\<UP>' - the up arrow key
`\\<up>' - ditto (keynames are case-insensitive)
`\\C-o', `\\c-o' - control-o
`\\M-.' - meta-period
`\\E.' - ditto (that's how meta works for pyrepl)
`\\<tab>', `\\<TAB>', `\\t', `\\011', '\\x09', '\\X09', '\\C-i', '\\C-I'
   - all of these are the tab character.
"""

_escapes = {
    "\\": "\\",
    "'": "'",
    '"': '"',
    "a": "\a",
    "b": "\b",
    "e": "\033",
    "f": "\f",
    "n": "\n",
    "r": "\r",
    "t": "\t",
    "v": "\v",
}

_keynames = {
    "backspace": "backspace",
    "delete": "delete",
    "down": "down",
    "end": "end",
    "enter": "\r",
    "escape": "\033",
    "f1": "f1",
    "f2": "f2",
    "f3": "f3",
    "f4": "f4",
    "f5": "f5",
    "f6": "f6",
    "f7": "f7",
    "f8": "f8",
    "f9": "f9",
    "f10": "f10",
    "f11": "f11",
    "f12": "f12",
    "f13": "f13",
    "f14": "f14",
    "f15": "f15",
    "f16": "f16",
    "f17": "f17",
    "f18": "f18",
    "f19": "f19",
    "f20": "f20",
    "home": "home",
    "insert": "insert",
    "left": "left",
    "page down": "page down",
    "page up": "page up",
    "return": "\r",
    "right": "right",
    "space": " ",
    "tab": "\t",
    "up": "up",
}


class KeySpecError(Exception):
    pass


def parse_keys(keys: str) -> list[str]:
    """Parse keys in keyspec format to a sequence of keys."""
    s = 0
    r: list[str] = []
    while s < len(keys):
        k, s = _parse_single_key_sequence(keys, s)
        r.extend(k)
    return r


def _parse_single_key_sequence(key: str, s: int) -> tuple[list[str], int]:
    ctrl = 0
    meta = 0
    ret = ""
    while not ret and s < len(key):
        if key[s] == "\\":
            c = key[s + 1].lower()
            if c in _escapes:
                ret = _escapes[c]
                s += 2
            elif c == "c":
                if key[s + 2] != "-":
                    raise KeySpecError(
                        "\\C must be followed by `-' (char %d of %s)"
                        % (s + 2, repr(key))
                    )
                if ctrl:
                    raise KeySpecError(
                        "doubled \\C- (char %d of %s)" % (s + 1, repr(key))
                    )
                ctrl = 1
                s += 3
            elif c == "m":
                if key[s + 2] != "-":
                    raise KeySpecError(
                        "\\M must be followed by `-' (char %d of %s)"
                        % (s + 2, repr(key))
                    )
                if meta:
                    raise KeySpecError(
                        "doubled \\M- (char %d of %s)" % (s + 1, repr(key))
                    )
                meta = 1
                s += 3
            elif c.isdigit():
                n = key[s + 1 : s + 4]
                ret = chr(int(n, 8))
                s += 4
            elif c == "x":
                n = key[s + 2 : s + 4]
                ret = chr(int(n, 16))
                s += 4
            elif c == "<":
                t = key.find(">", s)
                if t == -1:
                    raise KeySpecError(
                        "unterminated \\< starting at char %d of %s"
                        % (s + 1, repr(key))
                    )
                ret = key[s + 2 : t].lower()
                if ret not in _keynames:
                    raise KeySpecError(
                        "unrecognised keyname `%s' at char %d of %s"
                        % (ret, s + 2, repr(key))
                    )
                ret = _keynames[ret]
                s = t + 1
            else:
                raise KeySpecError(
                    "unknown backslash escape %s at char %d of %s"
                    % (repr(c), s + 2, repr(key))
                )
        else:
            ret = key[s]
            s += 1
    if ctrl:
        if len(ret) == 1:
            ret = chr(ord(ret) & 0x1F)  # curses.ascii.ctrl()
        elif ret in {"left", "right"}:
            ret = f"ctrl {ret}"
        else:
            raise KeySpecError("\\C- followed by invalid key")

    result = [ret], s
    if meta:
        result[0].insert(0, "\033")
    return result


def compile_keymap(keymap, empty=b""):
    r = {}
    for key, value in keymap.items():
        if isinstance(key, bytes):
            first = key[:1]
        else:
            first = key[0]
        r.setdefault(first, {})[key[1:]] = value
    for key, value in r.items():
        if empty in value:
            if len(value) != 1:
                raise KeySpecError("key definitions for %s clash" % (value.values(),))
            else:
                r[key] = value[empty]
        else:
            r[key] = compile_keymap(value, empty)
    return r
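`compile_keymap` turns a flat mapping of key sequences into a nested trie, one dict level per key, with the empty suffix marking a complete binding. A standalone sketch (it inlines a copy of the function above, with a plain `Exception` standing in for `KeySpecError`) showing what it builds from tuple keys, as `KeymapTranslator` uses them:

```python
# Standalone copy of compile_keymap from above, for illustration.
def compile_keymap(keymap, empty=b""):
    r = {}
    for key, value in keymap.items():
        first = key[:1] if isinstance(key, bytes) else key[0]
        r.setdefault(first, {})[key[1:]] = value
    for key, value in r.items():
        if empty in value:
            if len(value) != 1:
                raise Exception("key definitions for %s clash" % (value.values(),))
            r[key] = value[empty]
        else:
            r[key] = compile_keymap(value, empty)
    return r

# Keys are tuples of single keys (as parse_keys would produce);
# the empty suffix () marks a complete binding.
flat = {
    ("\x1b", "[", "A"): "up",
    ("\x1b", "[", "B"): "down",
    ("a",): "self-insert",
}
trie = compile_keymap(flat, empty=())
print(trie)
# {'\x1b': {'[': {'A': 'up', 'B': 'down'}}, 'a': 'self-insert'}
```

The "clash" error fires when one binding is a strict prefix of another (e.g. both `("a",)` and `("a", "b")` bound), since the translator could never decide when to stop.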
59
Utils/PythonNew32/Lib/_pyrepl/main.py
Normal file
@@ -0,0 +1,59 @@
import errno
import os
import sys


CAN_USE_PYREPL: bool
FAIL_REASON: str
try:
    if sys.platform == "win32" and sys.getwindowsversion().build < 10586:
        raise RuntimeError("Windows 10 TH2 or later required")
    if not os.isatty(sys.stdin.fileno()):
        raise OSError(errno.ENOTTY, "tty required", "stdin")
    from .simple_interact import check
    if err := check():
        raise RuntimeError(err)
except Exception as e:
    CAN_USE_PYREPL = False
    FAIL_REASON = f"warning: can't use pyrepl: {e}"
else:
    CAN_USE_PYREPL = True
    FAIL_REASON = ""


def interactive_console(mainmodule=None, quiet=False, pythonstartup=False):
    if not CAN_USE_PYREPL:
        if not os.getenv('PYTHON_BASIC_REPL') and FAIL_REASON:
            from .trace import trace
            trace(FAIL_REASON)
            print(FAIL_REASON, file=sys.stderr)
        return sys._baserepl()

    if mainmodule:
        namespace = mainmodule.__dict__
    else:
        import __main__
        namespace = __main__.__dict__
        namespace.pop("__pyrepl_interactive_console", None)

    # sys._baserepl() above does this internally, we do it here
    startup_path = os.getenv("PYTHONSTARTUP")
    if pythonstartup and startup_path:
        sys.audit("cpython.run_startup", startup_path)

        import tokenize
        with tokenize.open(startup_path) as f:
            startup_code = compile(f.read(), startup_path, "exec")
            exec(startup_code, namespace)

    # set sys.{ps1,ps2} just before invoking the interactive interpreter. This
    # mimics what CPython does in pythonrun.c
    if not hasattr(sys, "ps1"):
        sys.ps1 = ">>> "
    if not hasattr(sys, "ps2"):
        sys.ps2 = "... "

    from .console import InteractiveColoredConsole
    from .simple_interact import run_multiline_interactive_console
    console = InteractiveColoredConsole(namespace, filename="<stdin>")
    run_multiline_interactive_console(console)
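The PYTHONSTARTUP handling in `interactive_console` boils down to: read the startup source, `compile` it against its filename, and `exec` it in the REPL's target namespace, so names it defines are visible at the prompt. A minimal standalone sketch of that pattern (hypothetical inline source instead of a real startup file):

```python
# Target namespace, standing in for __main__.__dict__.
namespace = {}

# Hypothetical startup source; the real code reads it via tokenize.open()
# so the file's encoding declaration is honored.
source = "greeting = 'hi from startup'"
startup_code = compile(source, "<startup>", "exec")
exec(startup_code, namespace)

print(namespace["greeting"])  # hi from startup
```

Compiling with the file's path as the second argument is what makes tracebacks from a broken startup file point at that file rather than at an anonymous string.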
29
Utils/PythonNew32/Lib/_pyrepl/mypy.ini
Normal file
@@ -0,0 +1,29 @@
# Config file for running mypy on _pyrepl.
# Run mypy by invoking `mypy --config-file Lib/_pyrepl/mypy.ini`
# on the command-line from the repo root

[mypy]
files = Lib/_pyrepl
mypy_path = $MYPY_CONFIG_FILE_DIR/../../Misc/mypy
explicit_package_bases = True
python_version = 3.13
platform = linux
pretty = True

# Enable most stricter settings
enable_error_code = ignore-without-code,redundant-expr
strict = True

# Various stricter settings that we can't yet enable
# Try to enable these in the following order:
disallow_untyped_calls = False
disallow_untyped_defs = False
check_untyped_defs = False

# Various internal modules that typeshed deliberately doesn't have stubs for:
[mypy-_abc.*,_opcode.*,_overlapped.*,_testcapi.*,_testinternalcapi.*,test.*]
ignore_missing_imports = True

# Other untyped parts of the stdlib
[mypy-idlelib.*]
ignore_missing_imports = True
175
Utils/PythonNew32/Lib/_pyrepl/pager.py
Normal file
@@ -0,0 +1,175 @@
from __future__ import annotations

import io
import os
import re
import sys


# types
if False:
    from typing import Protocol

    class Pager(Protocol):
        def __call__(self, text: str, title: str = "") -> None:
            ...


def get_pager() -> Pager:
    """Decide what method to use for paging through text."""
    if not hasattr(sys.stdin, "isatty"):
        return plain_pager
    if not hasattr(sys.stdout, "isatty"):
        return plain_pager
    if not sys.stdin.isatty() or not sys.stdout.isatty():
        return plain_pager
    if sys.platform == "emscripten":
        return plain_pager
    use_pager = os.environ.get('MANPAGER') or os.environ.get('PAGER')
    if use_pager:
        if sys.platform == 'win32':  # pipes completely broken in Windows
            return lambda text, title='': tempfile_pager(plain(text), use_pager)
        elif os.environ.get('TERM') in ('dumb', 'emacs'):
            return lambda text, title='': pipe_pager(plain(text), use_pager, title)
        else:
            return lambda text, title='': pipe_pager(text, use_pager, title)
    if os.environ.get('TERM') in ('dumb', 'emacs'):
        return plain_pager
    if sys.platform == 'win32':
        return lambda text, title='': tempfile_pager(plain(text), 'more <')
    if hasattr(os, 'system') and os.system('(pager) 2>/dev/null') == 0:
        return lambda text, title='': pipe_pager(text, 'pager', title)
    if hasattr(os, 'system') and os.system('(less) 2>/dev/null') == 0:
        return lambda text, title='': pipe_pager(text, 'less', title)

    import tempfile
    (fd, filename) = tempfile.mkstemp()
    os.close(fd)
    try:
        if hasattr(os, 'system') and os.system('more "%s"' % filename) == 0:
            return lambda text, title='': pipe_pager(text, 'more', title)
        else:
            return tty_pager
    finally:
        os.unlink(filename)


def escape_stdout(text: str) -> str:
    # Escape non-encodable characters to avoid encoding errors later
    encoding = getattr(sys.stdout, 'encoding', None) or 'utf-8'
    return text.encode(encoding, 'backslashreplace').decode(encoding)


def escape_less(s: str) -> str:
    return re.sub(r'([?:.%\\])', r'\\\1', s)


def plain(text: str) -> str:
    """Remove boldface formatting from text."""
    return re.sub('.\b', '', text)


def tty_pager(text: str, title: str = '') -> None:
    """Page through text on a text terminal."""
    lines = plain(escape_stdout(text)).split('\n')
    has_tty = False
    try:
        import tty
        import termios
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        tty.setcbreak(fd)
        has_tty = True

        def getchar() -> str:
            return sys.stdin.read(1)

    except (ImportError, AttributeError, io.UnsupportedOperation):
        def getchar() -> str:
            return sys.stdin.readline()[:-1][:1]

    try:
        try:
            h = int(os.environ.get('LINES', 0))
        except ValueError:
            h = 0
        if h <= 1:
            h = 25
        r = inc = h - 1
        sys.stdout.write('\n'.join(lines[:inc]) + '\n')
        while lines[r:]:
            sys.stdout.write('-- more --')
            sys.stdout.flush()
            c = getchar()

            if c in ('q', 'Q'):
                sys.stdout.write('\r          \r')
                break
            elif c in ('\r', '\n'):
                sys.stdout.write('\r          \r' + lines[r] + '\n')
                r = r + 1
                continue
            if c in ('b', 'B', '\x1b'):
                r = r - inc - inc
                if r < 0: r = 0
            sys.stdout.write('\n' + '\n'.join(lines[r:r+inc]) + '\n')
            r = r + inc

    finally:
        if has_tty:
            termios.tcsetattr(fd, termios.TCSAFLUSH, old)


def plain_pager(text: str, title: str = '') -> None:
    """Simply print unformatted text.  This is the ultimate fallback."""
    sys.stdout.write(plain(escape_stdout(text)))


def pipe_pager(text: str, cmd: str, title: str = '') -> None:
    """Page through text by feeding it to another program."""
    import subprocess
    env = os.environ.copy()
    if title:
        title += ' '
    esc_title = escape_less(title)
    prompt_string = (
        f' {esc_title}' +
        '?ltline %lt?L/%L.'
        ':byte %bB?s/%s.'
        '.'
        '?e (END):?pB %pB\\%..'
        ' (press h for help or q to quit)')
    env['LESS'] = '-RmPm{0}$PM{0}$'.format(prompt_string)
    proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                            errors='backslashreplace', env=env)
    assert proc.stdin is not None
    try:
        with proc.stdin as pipe:
            try:
                pipe.write(text)
            except KeyboardInterrupt:
                # We've hereby abandoned whatever text hasn't been written,
                # but the pager is still in control of the terminal.
                pass
    except OSError:
        pass  # Ignore broken pipes caused by quitting the pager program.
    while True:
        try:
            proc.wait()
            break
        except KeyboardInterrupt:
            # Ignore ctl-c like the pager itself does.  Otherwise the pager is
            # left running and the terminal is in raw mode and unusable.
            pass


def tempfile_pager(text: str, cmd: str, title: str = '') -> None:
    """Page through text by invoking a program on a temporary file."""
    import tempfile
    with tempfile.TemporaryDirectory() as tempdir:
        filename = os.path.join(tempdir, 'pydoc.out')
        with open(filename, 'w', errors='backslashreplace',
                  encoding=os.device_encoding(0) if
                  sys.platform == 'win32' else None
                  ) as file:
            file.write(text)
        os.system(cmd + ' "' + filename + '"')
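`plain()` above strips classic terminal overstrike formatting, where bold is emitted as "X, backspace, X" and underline as "_, backspace, X" (the convention pydoc and man-page output use). The regex simply deletes every character that is followed by a backspace. A tiny self-contained sketch:

```python
import re

def plain(text: str) -> str:
    """Remove boldface formatting (same regex as plain() above)."""
    return re.sub('.\b', '', text)

# Overstrike bold: each character emitted as "X\bX";
# overstrike underline: "_\bX". Both collapse to the bare text.
bold = "N\bNA\bAM\bME\bE"
underlined = "_\bh_\bi"
print(plain(bold))        # NAME
print(plain(underlined))  # hi
```

Note `.` does not match a bare newline, so line breaks survive; only the character immediately preceding each `\b` is dropped.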
764
Utils/PythonNew32/Lib/_pyrepl/reader.py
Normal file
@@ -0,0 +1,764 @@
# Copyright 2000-2010 Michael Hudson-Doyle <micahel@gmail.com>
#                       Antonio Cuni
#                       Armin Rigo
#
#                        All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations

import sys

from contextlib import contextmanager
from dataclasses import dataclass, field, fields
from _colorize import can_colorize, ANSIColors


from . import commands, console, input
from .utils import wlen, unbracket, disp_str
from .trace import trace


# types
Command = commands.Command
from .types import Callback, SimpleContextManager, KeySpec, CommandName


# syntax classes:

SYNTAX_WHITESPACE, SYNTAX_WORD, SYNTAX_SYMBOL = range(3)


def make_default_syntax_table() -> dict[str, int]:
    # XXX perhaps should use some unicodedata here?
    st: dict[str, int] = {}
    for c in map(chr, range(256)):
        st[c] = SYNTAX_SYMBOL
    for c in [a for a in map(chr, range(256)) if a.isalnum()]:
        st[c] = SYNTAX_WORD
    st["\n"] = st[" "] = SYNTAX_WHITESPACE
    return st


def make_default_commands() -> dict[CommandName, type[Command]]:
    result: dict[CommandName, type[Command]] = {}
    for v in vars(commands).values():
        if isinstance(v, type) and issubclass(v, Command) and v.__name__[0].islower():
            result[v.__name__] = v
            result[v.__name__.replace("_", "-")] = v
    return result


default_keymap: tuple[tuple[KeySpec, CommandName], ...] = tuple(
    [
        (r"\C-a", "beginning-of-line"),
        (r"\C-b", "left"),
        (r"\C-c", "interrupt"),
        (r"\C-d", "delete"),
        (r"\C-e", "end-of-line"),
        (r"\C-f", "right"),
        (r"\C-g", "cancel"),
        (r"\C-h", "backspace"),
        (r"\C-j", "accept"),
        (r"\<return>", "accept"),
        (r"\C-k", "kill-line"),
        (r"\C-l", "clear-screen"),
        (r"\C-m", "accept"),
        (r"\C-t", "transpose-characters"),
        (r"\C-u", "unix-line-discard"),
        (r"\C-w", "unix-word-rubout"),
        (r"\C-x\C-u", "upcase-region"),
        (r"\C-y", "yank"),
        *(() if sys.platform == "win32" else ((r"\C-z", "suspend"), )),
        (r"\M-b", "backward-word"),
        (r"\M-c", "capitalize-word"),
        (r"\M-d", "kill-word"),
        (r"\M-f", "forward-word"),
        (r"\M-l", "downcase-word"),
        (r"\M-t", "transpose-words"),
        (r"\M-u", "upcase-word"),
        (r"\M-y", "yank-pop"),
        (r"\M--", "digit-arg"),
        (r"\M-0", "digit-arg"),
        (r"\M-1", "digit-arg"),
        (r"\M-2", "digit-arg"),
        (r"\M-3", "digit-arg"),
        (r"\M-4", "digit-arg"),
        (r"\M-5", "digit-arg"),
        (r"\M-6", "digit-arg"),
        (r"\M-7", "digit-arg"),
        (r"\M-8", "digit-arg"),
        (r"\M-9", "digit-arg"),
        (r"\M-\n", "accept"),
        ("\\\\", "self-insert"),
        (r"\x1b[200~", "enable_bracketed_paste"),
        (r"\x1b[201~", "disable_bracketed_paste"),
        (r"\x03", "ctrl-c"),
    ]
    + [(c, "self-insert") for c in map(chr, range(32, 127)) if c != "\\"]
    + [(c, "self-insert") for c in map(chr, range(128, 256)) if c.isalpha()]
    + [
        (r"\<up>", "up"),
        (r"\<down>", "down"),
        (r"\<left>", "left"),
        (r"\C-\<left>", "backward-word"),
        (r"\<right>", "right"),
        (r"\C-\<right>", "forward-word"),
        (r"\<delete>", "delete"),
        (r"\x1b[3~", "delete"),
        (r"\<backspace>", "backspace"),
        (r"\M-\<backspace>", "backward-kill-word"),
        (r"\<end>", "end-of-line"),  # was 'end'
        (r"\<home>", "beginning-of-line"),  # was 'home'
        (r"\<f1>", "help"),
        (r"\<f2>", "show-history"),
        (r"\<f3>", "paste-mode"),
        (r"\EOF", "end"),  # the entries in the terminfo database for xterms
        (r"\EOH", "home"),  # seem to be wrong.  this is a less than ideal
        # workaround
    ]
)


@dataclass(slots=True)
class Reader:
    """The Reader class implements the bare bones of a command reader,
    handling such details as editing and cursor motion.  What it does
    not support are such things as completion or history support -
    these are implemented elsewhere.

    Instance variables of note include:

      * buffer:
        A *list* (*not* a string at the moment :-) containing all the
        characters that have been entered.
      * console:
        Hopefully encapsulates the OS dependent stuff.
      * pos:
        A 0-based index into `buffer' for where the insertion point
        is.
      * screeninfo:
        Ahem.  This list contains some info needed to move the
        insertion point around reasonably efficiently.
      * cxy, lxy:
        the position of the insertion point in screen ...
      * syntax_table:
        Dictionary mapping characters to `syntax class'; read the
        emacs docs to see what this means :-)
      * commands:
        Dictionary mapping command names to command classes.
      * arg:
        The emacs-style prefix argument.  It will be None if no such
        argument has been provided.
      * dirty:
        True if we need to refresh the display.
      * kill_ring:
        The emacs-style kill-ring; manipulated with yank & yank-pop
      * ps1, ps2, ps3, ps4:
        prompts.  ps1 is the prompt for a one-line input; for a
        multiline input it looks like:
            ps2> first line of input goes here
            ps3> second and further
            ps3> lines get ps3
            ...
            ps4> and the last one gets ps4
        As with the usual top-level, you can set these to instances if
        you like; str() will be called on them (once) at the beginning
        of each command.  Don't put really long or newline containing
        strings here, please!
        This is just the default policy; you can change it freely by
        overriding get_prompt() (and indeed some standard subclasses
        do).
      * finished:
        handle1 will set this to a true value if a command signals
        that we're done.
    """

    console: console.Console

    ## state
    buffer: list[str] = field(default_factory=list)
    pos: int = 0
    ps1: str = "->> "
    ps2: str = "/>> "
    ps3: str = "|.. "
    ps4: str = R"\__ "
    kill_ring: list[list[str]] = field(default_factory=list)
    msg: str = ""
    arg: int | None = None
    dirty: bool = False
    finished: bool = False
    paste_mode: bool = False
    in_bracketed_paste: bool = False
    commands: dict[str, type[Command]] = field(default_factory=make_default_commands)
    last_command: type[Command] | None = None
    syntax_table: dict[str, int] = field(default_factory=make_default_syntax_table)
    keymap: tuple[tuple[str, str], ...] = ()
    input_trans: input.KeymapTranslator = field(init=False)
    input_trans_stack: list[input.KeymapTranslator] = field(default_factory=list)
    screen: list[str] = field(default_factory=list)
    screeninfo: list[tuple[int, list[int]]] = field(init=False)
    cxy: tuple[int, int] = field(init=False)
    lxy: tuple[int, int] = field(init=False)
    scheduled_commands: list[str] = field(default_factory=list)
|
||||
can_colorize: bool = False
|
||||
threading_hook: Callback | None = None
|
||||
|
||||
## cached metadata to speed up screen refreshes
|
||||
@dataclass
|
||||
class RefreshCache:
|
||||
in_bracketed_paste: bool = False
|
||||
screen: list[str] = field(default_factory=list)
|
||||
screeninfo: list[tuple[int, list[int]]] = field(init=False)
|
||||
line_end_offsets: list[int] = field(default_factory=list)
|
||||
pos: int = field(init=False)
|
||||
cxy: tuple[int, int] = field(init=False)
|
||||
dimensions: tuple[int, int] = field(init=False)
|
||||
invalidated: bool = False
|
||||
|
||||
def update_cache(self,
|
||||
reader: Reader,
|
||||
screen: list[str],
|
||||
screeninfo: list[tuple[int, list[int]]],
|
||||
) -> None:
|
||||
self.in_bracketed_paste = reader.in_bracketed_paste
|
||||
self.screen = screen.copy()
|
||||
self.screeninfo = screeninfo.copy()
|
||||
self.pos = reader.pos
|
||||
self.cxy = reader.cxy
|
||||
self.dimensions = reader.console.width, reader.console.height
|
||||
self.invalidated = False
|
||||
|
||||
def valid(self, reader: Reader) -> bool:
|
||||
if self.invalidated:
|
||||
return False
|
||||
dimensions = reader.console.width, reader.console.height
|
||||
dimensions_changed = dimensions != self.dimensions
|
||||
paste_changed = reader.in_bracketed_paste != self.in_bracketed_paste
|
||||
return not (dimensions_changed or paste_changed)
|
||||
|
||||
def get_cached_location(self, reader: Reader) -> tuple[int, int]:
|
||||
if self.invalidated:
|
||||
raise ValueError("Cache is invalidated")
|
||||
offset = 0
|
||||
earliest_common_pos = min(reader.pos, self.pos)
|
||||
num_common_lines = len(self.line_end_offsets)
|
||||
while num_common_lines > 0:
|
||||
offset = self.line_end_offsets[num_common_lines - 1]
|
||||
if earliest_common_pos > offset:
|
||||
break
|
||||
num_common_lines -= 1
|
||||
else:
|
||||
offset = 0
|
||||
return offset, num_common_lines
|
||||
|
||||
last_refresh_cache: RefreshCache = field(default_factory=RefreshCache)
|
||||
|
||||
def __post_init__(self) -> None:
|
||||
# Enable the use of `insert` without a `prepare` call - necessary to
|
||||
# facilitate the tab completion hack implemented for
|
||||
# <https://bugs.python.org/issue25660>.
|
||||
self.keymap = self.collect_keymap()
|
||||
self.input_trans = input.KeymapTranslator(
|
||||
self.keymap, invalid_cls="invalid-key", character_cls="self-insert"
|
||||
)
|
||||
self.screeninfo = [(0, [])]
|
||||
self.cxy = self.pos2xy()
|
||||
self.lxy = (self.pos, 0)
|
||||
self.can_colorize = can_colorize()
|
||||
|
||||
self.last_refresh_cache.screeninfo = self.screeninfo
|
||||
self.last_refresh_cache.pos = self.pos
|
||||
self.last_refresh_cache.cxy = self.cxy
|
||||
self.last_refresh_cache.dimensions = (0, 0)
|
||||
|
||||
def collect_keymap(self) -> tuple[tuple[KeySpec, CommandName], ...]:
|
||||
return default_keymap
|
||||
|
||||
def calc_screen(self) -> list[str]:
|
||||
"""Translate changes in self.buffer into changes in self.console.screen."""
|
||||
# Since the last call to calc_screen:
|
||||
# screen and screeninfo may differ due to a completion menu being shown
|
||||
# pos and cxy may differ due to edits, cursor movements, or completion menus
|
||||
|
||||
# Lines that are above both the old and new cursor position can't have changed,
|
||||
# unless the terminal has been resized (which might cause reflowing) or we've
|
||||
# entered or left paste mode (which changes prompts, causing reflowing).
|
||||
num_common_lines = 0
|
||||
offset = 0
|
||||
if self.last_refresh_cache.valid(self):
|
||||
offset, num_common_lines = self.last_refresh_cache.get_cached_location(self)
|
||||
|
||||
screen = self.last_refresh_cache.screen
|
||||
del screen[num_common_lines:]
|
||||
|
||||
screeninfo = self.last_refresh_cache.screeninfo
|
||||
del screeninfo[num_common_lines:]
|
||||
|
||||
last_refresh_line_end_offsets = self.last_refresh_cache.line_end_offsets
|
||||
del last_refresh_line_end_offsets[num_common_lines:]
|
||||
|
||||
pos = self.pos
|
||||
pos -= offset
|
||||
|
||||
prompt_from_cache = (offset and self.buffer[offset - 1] != "\n")
|
||||
lines = "".join(self.buffer[offset:]).split("\n")
|
||||
cursor_found = False
|
||||
lines_beyond_cursor = 0
|
||||
for ln, line in enumerate(lines, num_common_lines):
|
||||
line_len = len(line)
|
||||
if 0 <= pos <= line_len:
|
||||
self.lxy = pos, ln
|
||||
cursor_found = True
|
||||
elif cursor_found:
|
||||
lines_beyond_cursor += 1
|
||||
if lines_beyond_cursor > self.console.height:
|
||||
# No need to keep formatting lines.
|
||||
# The console can't show them.
|
||||
break
|
||||
if prompt_from_cache:
|
||||
# Only the first line's prompt can come from the cache
|
||||
prompt_from_cache = False
|
||||
prompt = ""
|
||||
else:
|
||||
prompt = self.get_prompt(ln, line_len >= pos >= 0)
|
||||
while "\n" in prompt:
|
||||
pre_prompt, _, prompt = prompt.partition("\n")
|
||||
last_refresh_line_end_offsets.append(offset)
|
||||
screen.append(pre_prompt)
|
||||
screeninfo.append((0, []))
|
||||
pos -= line_len + 1
|
||||
prompt, prompt_len = self.process_prompt(prompt)
|
||||
chars, char_widths = disp_str(line)
|
||||
wrapcount = (sum(char_widths) + prompt_len) // self.console.width
|
||||
trace("wrapcount = {wrapcount}", wrapcount=wrapcount)
|
||||
if wrapcount == 0 or not char_widths:
|
||||
offset += line_len + 1 # Takes all of the line plus the newline
|
||||
last_refresh_line_end_offsets.append(offset)
|
||||
screen.append(prompt + "".join(chars))
|
||||
screeninfo.append((prompt_len, char_widths))
|
||||
else:
|
||||
pre = prompt
|
||||
prelen = prompt_len
|
||||
for wrap in range(wrapcount + 1):
|
||||
index_to_wrap_before = 0
|
||||
column = 0
|
||||
for char_width in char_widths:
|
||||
if column + char_width + prelen >= self.console.width:
|
||||
break
|
||||
index_to_wrap_before += 1
|
||||
column += char_width
|
||||
if len(chars) > index_to_wrap_before:
|
||||
offset += index_to_wrap_before
|
||||
post = "\\"
|
||||
after = [1]
|
||||
else:
|
||||
offset += index_to_wrap_before + 1 # Takes the newline
|
||||
post = ""
|
||||
after = []
|
||||
last_refresh_line_end_offsets.append(offset)
|
||||
render = pre + "".join(chars[:index_to_wrap_before]) + post
|
||||
render_widths = char_widths[:index_to_wrap_before] + after
|
||||
screen.append(render)
|
||||
screeninfo.append((prelen, render_widths))
|
||||
chars = chars[index_to_wrap_before:]
|
||||
char_widths = char_widths[index_to_wrap_before:]
|
||||
pre = ""
|
||||
prelen = 0
|
||||
self.screeninfo = screeninfo
|
||||
self.cxy = self.pos2xy()
|
||||
if self.msg:
|
||||
for mline in self.msg.split("\n"):
|
||||
screen.append(mline)
|
||||
screeninfo.append((0, []))
|
||||
|
||||
self.last_refresh_cache.update_cache(self, screen, screeninfo)
|
||||
return screen
|
||||
|
||||
@staticmethod
|
||||
def process_prompt(prompt: str) -> tuple[str, int]:
|
||||
r"""Return a tuple with the prompt string and its visible length.
|
||||
|
||||
The prompt string has the zero-width brackets recognized by shells
|
||||
(\x01 and \x02) removed. The length ignores anything between those
|
||||
brackets as well as any ANSI escape sequences.
|
||||
"""
|
||||
out_prompt = unbracket(prompt, including_content=False)
|
||||
visible_prompt = unbracket(prompt, including_content=True)
|
||||
return out_prompt, wlen(visible_prompt)
|
||||
|
||||
def bow(self, p: int | None = None) -> int:
|
||||
"""Return the 0-based index of the word break preceding p most
|
||||
immediately.
|
||||
|
||||
p defaults to self.pos; word boundaries are determined using
|
||||
self.syntax_table."""
|
||||
if p is None:
|
||||
p = self.pos
|
||||
st = self.syntax_table
|
||||
b = self.buffer
|
||||
p -= 1
|
||||
while p >= 0 and st.get(b[p], SYNTAX_WORD) != SYNTAX_WORD:
|
||||
p -= 1
|
||||
while p >= 0 and st.get(b[p], SYNTAX_WORD) == SYNTAX_WORD:
|
||||
p -= 1
|
||||
return p + 1
|
||||
|
||||
def eow(self, p: int | None = None) -> int:
|
||||
"""Return the 0-based index of the word break following p most
|
||||
immediately.
|
||||
|
||||
p defaults to self.pos; word boundaries are determined using
|
||||
self.syntax_table."""
|
||||
if p is None:
|
||||
p = self.pos
|
||||
st = self.syntax_table
|
||||
b = self.buffer
|
||||
while p < len(b) and st.get(b[p], SYNTAX_WORD) != SYNTAX_WORD:
|
||||
p += 1
|
||||
while p < len(b) and st.get(b[p], SYNTAX_WORD) == SYNTAX_WORD:
|
||||
p += 1
|
||||
return p
|
||||
|
||||
def bol(self, p: int | None = None) -> int:
|
||||
"""Return the 0-based index of the line break preceding p most
|
||||
immediately.
|
||||
|
||||
p defaults to self.pos."""
|
||||
if p is None:
|
||||
p = self.pos
|
||||
b = self.buffer
|
||||
p -= 1
|
||||
while p >= 0 and b[p] != "\n":
|
||||
p -= 1
|
||||
return p + 1
|
||||
|
||||
def eol(self, p: int | None = None) -> int:
|
||||
"""Return the 0-based index of the line break following p most
|
||||
immediately.
|
||||
|
||||
p defaults to self.pos."""
|
||||
if p is None:
|
||||
p = self.pos
|
||||
b = self.buffer
|
||||
while p < len(b) and b[p] != "\n":
|
||||
p += 1
|
||||
return p
|
||||
|
||||
def max_column(self, y: int) -> int:
|
||||
"""Return the last x-offset for line y"""
|
||||
return self.screeninfo[y][0] + sum(self.screeninfo[y][1])
|
||||
|
||||
def max_row(self) -> int:
|
||||
return len(self.screeninfo) - 1
|
||||
|
||||
def get_arg(self, default: int = 1) -> int:
|
||||
"""Return any prefix argument that the user has supplied,
|
||||
returning `default' if there is None. Defaults to 1.
|
||||
"""
|
||||
if self.arg is None:
|
||||
return default
|
||||
return self.arg
|
||||
|
||||
def get_prompt(self, lineno: int, cursor_on_line: bool) -> str:
|
||||
"""Return what should be in the left-hand margin for line
|
||||
`lineno'."""
|
||||
if self.arg is not None and cursor_on_line:
|
||||
prompt = f"(arg: {self.arg}) "
|
||||
elif self.paste_mode and not self.in_bracketed_paste:
|
||||
prompt = "(paste) "
|
||||
elif "\n" in self.buffer:
|
||||
if lineno == 0:
|
||||
prompt = self.ps2
|
||||
elif self.ps4 and lineno == self.buffer.count("\n"):
|
||||
prompt = self.ps4
|
||||
else:
|
||||
prompt = self.ps3
|
||||
else:
|
||||
prompt = self.ps1
|
||||
|
||||
if self.can_colorize:
|
||||
prompt = f"{ANSIColors.BOLD_MAGENTA}{prompt}{ANSIColors.RESET}"
|
||||
return prompt
|
||||
|
||||
def push_input_trans(self, itrans: input.KeymapTranslator) -> None:
|
||||
self.input_trans_stack.append(self.input_trans)
|
||||
self.input_trans = itrans
|
||||
|
||||
def pop_input_trans(self) -> None:
|
||||
self.input_trans = self.input_trans_stack.pop()
|
||||
|
||||
def setpos_from_xy(self, x: int, y: int) -> None:
|
||||
"""Set pos according to coordinates x, y"""
|
||||
pos = 0
|
||||
i = 0
|
||||
while i < y:
|
||||
prompt_len, char_widths = self.screeninfo[i]
|
||||
offset = len(char_widths)
|
||||
in_wrapped_line = prompt_len + sum(char_widths) >= self.console.width
|
||||
if in_wrapped_line:
|
||||
pos += offset - 1 # -1 cause backslash is not in buffer
|
||||
else:
|
||||
pos += offset + 1 # +1 cause newline is in buffer
|
||||
i += 1
|
||||
|
||||
j = 0
|
||||
cur_x = self.screeninfo[i][0]
|
||||
while cur_x < x:
|
||||
if self.screeninfo[i][1][j] == 0:
|
||||
j += 1 # prevent potential future infinite loop
|
||||
continue
|
||||
cur_x += self.screeninfo[i][1][j]
|
||||
j += 1
|
||||
pos += 1
|
||||
|
||||
self.pos = pos
|
||||
|
||||
def pos2xy(self) -> tuple[int, int]:
|
||||
"""Return the x, y coordinates of position 'pos'."""
|
||||
|
||||
prompt_len, y = 0, 0
|
||||
char_widths: list[int] = []
|
||||
pos = self.pos
|
||||
assert 0 <= pos <= len(self.buffer)
|
||||
|
||||
# optimize for the common case: typing at the end of the buffer
|
||||
if pos == len(self.buffer) and len(self.screeninfo) > 0:
|
||||
y = len(self.screeninfo) - 1
|
||||
prompt_len, char_widths = self.screeninfo[y]
|
||||
return prompt_len + sum(char_widths), y
|
||||
|
||||
for prompt_len, char_widths in self.screeninfo:
|
||||
offset = len(char_widths)
|
||||
in_wrapped_line = prompt_len + sum(char_widths) >= self.console.width
|
||||
if in_wrapped_line:
|
||||
offset -= 1 # need to remove line-wrapping backslash
|
||||
|
||||
if offset >= pos:
|
||||
break
|
||||
|
||||
if not in_wrapped_line:
|
||||
offset += 1 # there's a newline in buffer
|
||||
|
||||
pos -= offset
|
||||
y += 1
|
||||
return prompt_len + sum(char_widths[:pos]), y
|
||||
|
||||
def insert(self, text: str | list[str]) -> None:
|
||||
"""Insert 'text' at the insertion point."""
|
||||
self.buffer[self.pos : self.pos] = list(text)
|
||||
self.pos += len(text)
|
||||
self.dirty = True
|
||||
|
||||
def update_cursor(self) -> None:
|
||||
"""Move the cursor to reflect changes in self.pos"""
|
||||
self.cxy = self.pos2xy()
|
||||
self.console.move_cursor(*self.cxy)
|
||||
|
||||
def after_command(self, cmd: Command) -> None:
|
||||
"""This function is called to allow post command cleanup."""
|
||||
if getattr(cmd, "kills_digit_arg", True):
|
||||
if self.arg is not None:
|
||||
self.dirty = True
|
||||
self.arg = None
|
||||
|
||||
def prepare(self) -> None:
|
||||
"""Get ready to run. Call restore when finished. You must not
|
||||
write to the console in between the calls to prepare and
|
||||
restore."""
|
||||
try:
|
||||
self.console.prepare()
|
||||
self.arg = None
|
||||
self.finished = False
|
||||
del self.buffer[:]
|
||||
self.pos = 0
|
||||
self.dirty = True
|
||||
self.last_command = None
|
||||
self.calc_screen()
|
||||
except BaseException:
|
||||
self.restore()
|
||||
raise
|
||||
|
||||
while self.scheduled_commands:
|
||||
cmd = self.scheduled_commands.pop()
|
||||
self.do_cmd((cmd, []))
|
||||
|
||||
def last_command_is(self, cls: type) -> bool:
|
||||
if not self.last_command:
|
||||
return False
|
||||
return issubclass(cls, self.last_command)
|
||||
|
||||
def restore(self) -> None:
|
||||
"""Clean up after a run."""
|
||||
self.console.restore()
|
||||
|
||||
@contextmanager
|
||||
def suspend(self) -> SimpleContextManager:
|
||||
"""A context manager to delegate to another reader."""
|
||||
prev_state = {f.name: getattr(self, f.name) for f in fields(self)}
|
||||
try:
|
||||
self.restore()
|
||||
yield
|
||||
finally:
|
||||
for arg in ("msg", "ps1", "ps2", "ps3", "ps4", "paste_mode"):
|
||||
setattr(self, arg, prev_state[arg])
|
||||
self.prepare()
|
||||
|
||||
def finish(self) -> None:
|
||||
"""Called when a command signals that we're finished."""
|
||||
pass
|
||||
|
||||
def error(self, msg: str = "none") -> None:
|
||||
self.msg = "! " + msg + " "
|
||||
self.dirty = True
|
||||
self.console.beep()
|
||||
|
||||
def update_screen(self) -> None:
|
||||
if self.dirty:
|
||||
self.refresh()
|
||||
|
||||
def refresh(self) -> None:
|
||||
"""Recalculate and refresh the screen."""
|
||||
if self.in_bracketed_paste and self.buffer and not self.buffer[-1] == "\n":
|
||||
return
|
||||
|
||||
# this call sets up self.cxy, so call it first.
|
||||
self.screen = self.calc_screen()
|
||||
self.console.refresh(self.screen, self.cxy)
|
||||
self.dirty = False
|
||||
|
||||
def do_cmd(self, cmd: tuple[str, list[str]]) -> None:
|
||||
"""`cmd` is a tuple of "event_name" and "event", which in the current
|
||||
implementation is always just the "buffer" which happens to be a list
|
||||
of single-character strings."""
|
||||
|
||||
trace("received command {cmd}", cmd=cmd)
|
||||
if isinstance(cmd[0], str):
|
||||
command_type = self.commands.get(cmd[0], commands.invalid_command)
|
||||
elif isinstance(cmd[0], type):
|
||||
command_type = cmd[0]
|
||||
else:
|
||||
return # nothing to do
|
||||
|
||||
command = command_type(self, *cmd) # type: ignore[arg-type]
|
||||
command.do()
|
||||
|
||||
self.after_command(command)
|
||||
|
||||
if self.dirty:
|
||||
self.refresh()
|
||||
else:
|
||||
self.update_cursor()
|
||||
|
||||
if not isinstance(cmd, commands.digit_arg):
|
||||
self.last_command = command_type
|
||||
|
||||
self.finished = bool(command.finish)
|
||||
if self.finished:
|
||||
self.console.finish()
|
||||
self.finish()
|
||||
|
||||
def run_hooks(self) -> None:
|
||||
threading_hook = self.threading_hook
|
||||
if threading_hook is None and 'threading' in sys.modules:
|
||||
from ._threading_handler import install_threading_hook
|
||||
install_threading_hook(self)
|
||||
if threading_hook is not None:
|
||||
try:
|
||||
threading_hook()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
input_hook = self.console.input_hook
|
||||
if input_hook:
|
||||
try:
|
||||
input_hook()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
def handle1(self, block: bool = True) -> bool:
|
||||
"""Handle a single event. Wait as long as it takes if block
|
||||
is true (the default), otherwise return False if no event is
|
||||
pending."""
|
||||
|
||||
if self.msg:
|
||||
self.msg = ""
|
||||
self.dirty = True
|
||||
|
||||
while True:
|
||||
# We use the same timeout as in readline.c: 100ms
|
||||
self.run_hooks()
|
||||
self.console.wait(100)
|
||||
event = self.console.get_event(block=False)
|
||||
if not event:
|
||||
if block:
|
||||
continue
|
||||
return False
|
||||
|
||||
translate = True
|
||||
|
||||
if event.evt == "key":
|
||||
self.input_trans.push(event)
|
||||
elif event.evt == "scroll":
|
||||
self.refresh()
|
||||
elif event.evt == "resize":
|
||||
self.refresh()
|
||||
else:
|
||||
translate = False
|
||||
|
||||
if translate:
|
||||
cmd = self.input_trans.get()
|
||||
else:
|
||||
cmd = [event.evt, event.data]
|
||||
|
||||
if cmd is None:
|
||||
if block:
|
||||
continue
|
||||
return False
|
||||
|
||||
self.do_cmd(cmd)
|
||||
return True
|
||||
|
||||
def push_char(self, char: int | bytes) -> None:
|
||||
self.console.push_char(char)
|
||||
self.handle1(block=False)
|
||||
|
||||
def readline(self, startup_hook: Callback | None = None) -> str:
|
||||
"""Read a line. The implementation of this method also shows
|
||||
how to drive Reader if you want more control over the event
|
||||
loop."""
|
||||
self.prepare()
|
||||
try:
|
||||
if startup_hook is not None:
|
||||
startup_hook()
|
||||
self.refresh()
|
||||
while not self.finished:
|
||||
self.handle1()
|
||||
return self.get_unicode()
|
||||
|
||||
finally:
|
||||
self.restore()
|
||||
|
||||
def bind(self, spec: KeySpec, command: CommandName) -> None:
|
||||
self.keymap = self.keymap + ((spec, command),)
|
||||
self.input_trans = input.KeymapTranslator(
|
||||
self.keymap, invalid_cls="invalid-key", character_cls="self-insert"
|
||||
)
|
||||
|
||||
def get_unicode(self) -> str:
|
||||
"""Return the current buffer as a unicode string."""
|
||||
return "".join(self.buffer)
|
||||
598
Utils/PythonNew32/Lib/_pyrepl/readline.py
Normal file
@@ -0,0 +1,598 @@
# Copyright 2000-2010 Michael Hudson-Doyle <micahel@gmail.com>
#                     Alex Gaynor
#                     Antonio Cuni
#                     Armin Rigo
#                     Holger Krekel
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

"""A compatibility wrapper reimplementing the 'readline' standard module
on top of pyrepl. Not all functionalities are supported. Contains
extensions for multiline input.
"""

from __future__ import annotations

import warnings
from dataclasses import dataclass, field

import os
from site import gethistoryfile
import sys
from rlcompleter import Completer as RLCompleter

from . import commands, historical_reader
from .completing_reader import CompletingReader
from .console import Console as ConsoleType

Console: type[ConsoleType]
_error: tuple[type[Exception], ...] | type[Exception]
try:
    from .unix_console import UnixConsole as Console, _error
except ImportError:
    from .windows_console import WindowsConsole as Console, _error

ENCODING = sys.getdefaultencoding() or "latin1"


# types
Command = commands.Command
from collections.abc import Callable, Collection
from .types import Callback, Completer, KeySpec, CommandName

TYPE_CHECKING = False

if TYPE_CHECKING:
    from typing import Any, Mapping


MoreLinesCallable = Callable[[str], bool]


__all__ = [
    "add_history",
    "clear_history",
    "get_begidx",
    "get_completer",
    "get_completer_delims",
    "get_current_history_length",
    "get_endidx",
    "get_history_item",
    "get_history_length",
    "get_line_buffer",
    "insert_text",
    "parse_and_bind",
    "read_history_file",
    # "read_init_file",
    # "redisplay",
    "remove_history_item",
    "replace_history_item",
    "set_auto_history",
    "set_completer",
    "set_completer_delims",
    "set_history_length",
    # "set_pre_input_hook",
    "set_startup_hook",
    "write_history_file",
    # ---- multiline extensions ----
    "multiline_input",
]

# ____________________________________________________________

@dataclass
class ReadlineConfig:
    readline_completer: Completer | None = None
    completer_delims: frozenset[str] = frozenset(" \t\n`~!@#$%^&*()-=+[{]}\\|;:'\",<>/?")


@dataclass(kw_only=True)
class ReadlineAlikeReader(historical_reader.HistoricalReader, CompletingReader):
    # Class fields
    assume_immutable_completions = False
    use_brackets = False
    sort_in_column = True

    # Instance fields
    config: ReadlineConfig
    more_lines: MoreLinesCallable | None = None
    last_used_indentation: str | None = None

    def __post_init__(self) -> None:
        super().__post_init__()
        self.commands["maybe_accept"] = maybe_accept
        self.commands["maybe-accept"] = maybe_accept
        self.commands["backspace_dedent"] = backspace_dedent
        self.commands["backspace-dedent"] = backspace_dedent

    def error(self, msg: str = "none") -> None:
        pass  # don't show error messages by default

    def get_stem(self) -> str:
        b = self.buffer
        p = self.pos - 1
        completer_delims = self.config.completer_delims
        while p >= 0 and b[p] not in completer_delims:
            p -= 1
        return "".join(b[p + 1 : self.pos])

    def get_completions(self, stem: str) -> list[str]:
        if len(stem) == 0 and self.more_lines is not None:
            b = self.buffer
            p = self.pos
            while p > 0 and b[p - 1] != "\n":
                p -= 1
            num_spaces = 4 - ((self.pos - p) % 4)
            return [" " * num_spaces]
        result = []
        function = self.config.readline_completer
        if function is not None:
            try:
                stem = str(stem)  # rlcompleter.py seems to not like unicode
            except UnicodeEncodeError:
                pass  # but feed unicode anyway if we have no choice
            state = 0
            while True:
                try:
                    next = function(stem, state)
                except Exception:
                    break
                if not isinstance(next, str):
                    break
                result.append(next)
                state += 1
            # emulate the behavior of the standard readline that sorts
            # the completions before displaying them.
            result.sort()
        return result

    def get_trimmed_history(self, maxlength: int) -> list[str]:
        if maxlength >= 0:
            cut = len(self.history) - maxlength
            if cut < 0:
                cut = 0
        else:
            cut = 0
        return self.history[cut:]

    def update_last_used_indentation(self) -> None:
        indentation = _get_first_indentation(self.buffer)
        if indentation is not None:
            self.last_used_indentation = indentation

    # --- simplified support for reading multiline Python statements ---

    def collect_keymap(self) -> tuple[tuple[KeySpec, CommandName], ...]:
        return super().collect_keymap() + (
            (r"\n", "maybe-accept"),
            (r"\<backspace>", "backspace-dedent"),
        )

    def after_command(self, cmd: Command) -> None:
        super().after_command(cmd)
        if self.more_lines is None:
            # Force single-line input if we are in raw_input() mode.
            # Although there is no direct way to add a \n in this mode,
            # multiline buffers can still show up using various
            # commands, e.g. navigating the history.
            try:
                index = self.buffer.index("\n")
            except ValueError:
                pass
            else:
                self.buffer = self.buffer[:index]
                if self.pos > len(self.buffer):
                    self.pos = len(self.buffer)


def set_auto_history(_should_auto_add_history: bool) -> None:
    """Enable or disable automatic history"""
    historical_reader.should_auto_add_history = bool(_should_auto_add_history)


def _get_this_line_indent(buffer: list[str], pos: int) -> int:
    indent = 0
    while pos > 0 and buffer[pos - 1] in " \t":
        indent += 1
        pos -= 1
    if pos > 0 and buffer[pos - 1] == "\n":
        return indent
    return 0


def _get_previous_line_indent(buffer: list[str], pos: int) -> tuple[int, int | None]:
    prevlinestart = pos
    while prevlinestart > 0 and buffer[prevlinestart - 1] != "\n":
        prevlinestart -= 1
    prevlinetext = prevlinestart
    while prevlinetext < pos and buffer[prevlinetext] in " \t":
        prevlinetext += 1
    if prevlinetext == pos:
        indent = None
    else:
        indent = prevlinetext - prevlinestart
    return prevlinestart, indent


def _get_first_indentation(buffer: list[str]) -> str | None:
    indented_line_start = None
    for i in range(len(buffer)):
        if (i < len(buffer) - 1
            and buffer[i] == "\n"
            and buffer[i + 1] in " \t"
        ):
            indented_line_start = i + 1
        elif indented_line_start is not None and buffer[i] not in " \t\n":
            return ''.join(buffer[indented_line_start : i])
    return None


def _should_auto_indent(buffer: list[str], pos: int) -> bool:
    # check if last character before "pos" is a colon, ignoring
    # whitespaces and comments.
    last_char = None
    while pos > 0:
        pos -= 1
        if last_char is None:
            if buffer[pos] not in " \t\n#":  # ignore whitespaces and comments
                last_char = buffer[pos]
        else:
            # even if we found a non-whitespace character before
            # original pos, we keep going back until newline is reached
            # to make sure we ignore comments
            if buffer[pos] == "\n":
                break
            if buffer[pos] == "#":
                last_char = None
    return last_char == ":"


class maybe_accept(commands.Command):
    def do(self) -> None:
        r: ReadlineAlikeReader
        r = self.reader  # type: ignore[assignment]
        r.dirty = True  # this is needed to hide the completion menu, if visible

        if self.reader.in_bracketed_paste:
            r.insert("\n")
            return

        # if there are already several lines and the cursor
        # is not on the last one, always insert a new \n.
        text = r.get_unicode()

        if "\n" in r.buffer[r.pos :] or (
            r.more_lines is not None and r.more_lines(text)
        ):
            def _newline_before_pos():
                before_idx = r.pos - 1
                while before_idx > 0 and text[before_idx].isspace():
                    before_idx -= 1
                return text[before_idx : r.pos].count("\n") > 0

            # if there's already a new line before the cursor then
            # even if the cursor is followed by whitespace, we assume
            # the user is trying to terminate the block
            if _newline_before_pos() and text[r.pos:].isspace():
                self.finish = True
                return

            # auto-indent the next line like the previous line
            prevlinestart, indent = _get_previous_line_indent(r.buffer, r.pos)
            r.insert("\n")
            if not self.reader.paste_mode:
                if indent:
                    for i in range(prevlinestart, prevlinestart + indent):
                        r.insert(r.buffer[i])
                r.update_last_used_indentation()
                if _should_auto_indent(r.buffer, r.pos):
                    if r.last_used_indentation is not None:
                        indentation = r.last_used_indentation
                    else:
                        # default
                        indentation = " " * 4
                    r.insert(indentation)
        elif not self.reader.paste_mode:
            self.finish = True
        else:
            r.insert("\n")


class backspace_dedent(commands.Command):
    def do(self) -> None:
        r = self.reader
        b = r.buffer
        if r.pos > 0:
            repeat = 1
            if b[r.pos - 1] != "\n":
                indent = _get_this_line_indent(b, r.pos)
                if indent > 0:
                    ls = r.pos - indent
                    while ls > 0:
                        ls, pi = _get_previous_line_indent(b, ls - 1)
                        if pi is not None and pi < indent:
                            repeat = indent - pi
                            break
            r.pos -= repeat
            del b[r.pos : r.pos + repeat]
            r.dirty = True
        else:
            self.reader.error("can't backspace at start")


# ____________________________________________________________


@dataclass(slots=True)
class _ReadlineWrapper:
    f_in: int = -1
    f_out: int = -1
    reader: ReadlineAlikeReader | None = field(default=None, repr=False)
    saved_history_length: int = -1
    startup_hook: Callback | None = None
    config: ReadlineConfig = field(default_factory=ReadlineConfig, repr=False)

    def __post_init__(self) -> None:
        if self.f_in == -1:
            self.f_in = os.dup(0)
        if self.f_out == -1:
            self.f_out = os.dup(1)

    def get_reader(self) -> ReadlineAlikeReader:
        if self.reader is None:
            console = Console(self.f_in, self.f_out, encoding=ENCODING)
            self.reader = ReadlineAlikeReader(console=console, config=self.config)
        return self.reader

    def input(self, prompt: object = "") -> str:
        try:
            reader = self.get_reader()
        except _error:
            assert raw_input is not None
            return raw_input(prompt)
        prompt_str = str(prompt)
        reader.ps1 = prompt_str
        sys.audit("builtins.input", prompt_str)
        result = reader.readline(startup_hook=self.startup_hook)
        sys.audit("builtins.input/result", result)
        return result

    def multiline_input(self, more_lines: MoreLinesCallable, ps1: str, ps2: str) -> str:
        """Read an input on possibly multiple lines, asking for more
        lines as long as 'more_lines(unicodetext)' returns an object whose
        boolean value is true.
        """
        reader = self.get_reader()
        saved = reader.more_lines
        try:
            reader.more_lines = more_lines
            reader.ps1 = ps1
            reader.ps2 = ps1
            reader.ps3 = ps2
            reader.ps4 = ""
            with warnings.catch_warnings(action="ignore"):
                return reader.readline()
        finally:
            reader.more_lines = saved
|
||||
reader.paste_mode = False
|
||||
|
||||
def parse_and_bind(self, string: str) -> None:
|
||||
pass # XXX we don't support parsing GNU-readline-style init files
|
||||
|
||||
def set_completer(self, function: Completer | None = None) -> None:
|
||||
self.config.readline_completer = function
|
||||
|
||||
def get_completer(self) -> Completer | None:
|
||||
return self.config.readline_completer
|
||||
|
||||
def set_completer_delims(self, delimiters: Collection[str]) -> None:
|
||||
self.config.completer_delims = frozenset(delimiters)
|
||||
|
||||
def get_completer_delims(self) -> str:
|
||||
return "".join(sorted(self.config.completer_delims))
|
||||
|
||||
def _histline(self, line: str) -> str:
|
||||
line = line.rstrip("\n")
|
||||
return line
|
||||
|
||||
def get_history_length(self) -> int:
|
||||
return self.saved_history_length
|
||||
|
||||
def set_history_length(self, length: int) -> None:
|
||||
self.saved_history_length = length
|
||||
|
||||
def get_current_history_length(self) -> int:
|
||||
return len(self.get_reader().history)
|
||||
|
||||
def read_history_file(self, filename: str = gethistoryfile()) -> None:
|
||||
# multiline extension (really a hack) for the end of lines that
|
||||
# are actually continuations inside a single multiline_input()
|
||||
# history item: we use \r\n instead of just \n. If the history
|
||||
# file is passed to GNU readline, the extra \r are just ignored.
|
||||
history = self.get_reader().history
|
||||
|
||||
with open(os.path.expanduser(filename), 'rb') as f:
|
||||
is_editline = f.readline().startswith(b"_HiStOrY_V2_")
|
||||
if is_editline:
|
||||
encoding = "unicode-escape"
|
||||
else:
|
||||
f.seek(0)
|
||||
encoding = "utf-8"
|
||||
|
||||
lines = [line.decode(encoding, errors='replace') for line in f.read().split(b'\n')]
|
||||
buffer = []
|
||||
for line in lines:
|
||||
if line.endswith("\r"):
|
||||
buffer.append(line+'\n')
|
||||
else:
|
||||
line = self._histline(line)
|
||||
if buffer:
|
||||
line = self._histline("".join(buffer).replace("\r", "") + line)
|
||||
del buffer[:]
|
||||
if line:
|
||||
history.append(line)
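The `\r\n` continuation convention used by `read_history_file` above can be illustrated with a small standalone sketch (the helper names `encode_entry` and `decode_lines` are hypothetical, not part of this module):

```python
def encode_entry(entry: str) -> str:
    # continuation lines inside one history item end with \r\n on disk
    return entry.replace("\n", "\r\n") + "\n"

def decode_lines(text: str) -> list[str]:
    # mirror read_history_file: a trailing \r marks a continuation line
    items, buffer = [], []
    for line in text.split("\n")[:-1]:
        if line.endswith("\r"):
            buffer.append(line + "\n")
        else:
            if buffer:
                line = "".join(buffer).replace("\r", "") + line
                buffer.clear()
            if line:
                items.append(line)
    return items

entry = "for i in range(2):\n    print(i)"
assert decode_lines(encode_entry(entry)) == [entry]
```

A plain GNU readline that reads the same file simply sees the `\r` as part of the line, so the format degrades gracefully.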

    def write_history_file(self, filename: str = gethistoryfile()) -> None:
        maxlength = self.saved_history_length
        history = self.get_reader().get_trimmed_history(maxlength)
        f = open(os.path.expanduser(filename), "w",
                 encoding="utf-8", newline="\n")
        with f:
            for entry in history:
                entry = entry.replace("\n", "\r\n")  # multiline history support
                f.write(entry + "\n")

    def clear_history(self) -> None:
        del self.get_reader().history[:]

    def get_history_item(self, index: int) -> str | None:
        history = self.get_reader().history
        if 1 <= index <= len(history):
            return history[index - 1]
        else:
            return None  # like readline.c

    def remove_history_item(self, index: int) -> None:
        history = self.get_reader().history
        if 0 <= index < len(history):
            del history[index]
        else:
            raise ValueError("No history item at position %d" % index)
            # like readline.c

    def replace_history_item(self, index: int, line: str) -> None:
        history = self.get_reader().history
        if 0 <= index < len(history):
            history[index] = self._histline(line)
        else:
            raise ValueError("No history item at position %d" % index)
            # like readline.c

    def add_history(self, line: str) -> None:
        self.get_reader().history.append(self._histline(line))

    def set_startup_hook(self, function: Callback | None = None) -> None:
        self.startup_hook = function

    def get_line_buffer(self) -> str:
        return self.get_reader().get_unicode()

    def _get_idxs(self) -> tuple[int, int]:
        start = cursor = self.get_reader().pos
        buf = self.get_line_buffer()
        for i in range(cursor - 1, -1, -1):
            if buf[i] in self.get_completer_delims():
                break
            start = i
        return start, cursor
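`_get_idxs` scans left from the cursor to the nearest completer delimiter to find the word being completed; `get_begidx`/`get_endidx` below expose the two ends. A standalone sketch of that scan (the name `word_span` and the delimiter set are illustrative assumptions):

```python
def word_span(buf: str, cursor: int, delims: frozenset[str]) -> tuple[int, int]:
    # walk left from the cursor; stop at the first delimiter character
    start = cursor
    for i in range(cursor - 1, -1, -1):
        if buf[i] in delims:
            break
        start = i
    return start, cursor

delims = frozenset(" \t\n()[]{}'\",:;")
# in "print(na" with the cursor at the end, the word to complete is "na"
assert word_span("print(na", 8, delims) == (6, 8)
```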

    def get_begidx(self) -> int:
        return self._get_idxs()[0]

    def get_endidx(self) -> int:
        return self._get_idxs()[1]

    def insert_text(self, text: str) -> None:
        self.get_reader().insert(text)


_wrapper = _ReadlineWrapper()

# ____________________________________________________________
# Public API

parse_and_bind = _wrapper.parse_and_bind
set_completer = _wrapper.set_completer
get_completer = _wrapper.get_completer
set_completer_delims = _wrapper.set_completer_delims
get_completer_delims = _wrapper.get_completer_delims
get_history_length = _wrapper.get_history_length
set_history_length = _wrapper.set_history_length
get_current_history_length = _wrapper.get_current_history_length
read_history_file = _wrapper.read_history_file
write_history_file = _wrapper.write_history_file
clear_history = _wrapper.clear_history
get_history_item = _wrapper.get_history_item
remove_history_item = _wrapper.remove_history_item
replace_history_item = _wrapper.replace_history_item
add_history = _wrapper.add_history
set_startup_hook = _wrapper.set_startup_hook
get_line_buffer = _wrapper.get_line_buffer
get_begidx = _wrapper.get_begidx
get_endidx = _wrapper.get_endidx
insert_text = _wrapper.insert_text

# Extension
multiline_input = _wrapper.multiline_input

# Internal hook
_get_reader = _wrapper.get_reader

# ____________________________________________________________
# Stubs


def _make_stub(_name: str, _ret: object) -> None:
    def stub(*args: object, **kwds: object) -> None:
        import warnings

        warnings.warn("readline.%s() not implemented" % _name, stacklevel=2)

    stub.__name__ = _name
    globals()[_name] = stub


for _name, _ret in [
    ("read_init_file", None),
    ("redisplay", None),
    ("set_pre_input_hook", None),
]:
    assert _name not in globals(), _name
    _make_stub(_name, _ret)

# ____________________________________________________________


def _setup(namespace: Mapping[str, Any]) -> None:
    global raw_input
    if raw_input is not None:
        return  # don't run _setup twice

    try:
        f_in = sys.stdin.fileno()
        f_out = sys.stdout.fileno()
    except (AttributeError, ValueError):
        return
    if not os.isatty(f_in) or not os.isatty(f_out):
        return

    _wrapper.f_in = f_in
    _wrapper.f_out = f_out

    # set up namespace in rlcompleter, which requires it to be a bona fide dict
    if not isinstance(namespace, dict):
        namespace = dict(namespace)
    _wrapper.config.readline_completer = RLCompleter(namespace).complete

    # this is not really what readline.c does.  Better than nothing I guess
    import builtins
    raw_input = builtins.input
    builtins.input = _wrapper.input


raw_input: Callable[[object], str] | None = None
179
Utils/PythonNew32/Lib/_pyrepl/simple_interact.py
Normal file
@@ -0,0 +1,179 @@
# Copyright 2000-2010 Michael Hudson-Doyle <micahel@gmail.com>
#                     Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

"""This is an alternative to python_reader which tries to emulate
the CPython prompt as closely as possible, with the exception of
allowing multiline input and multiline history entries.
"""

from __future__ import annotations

import _sitebuiltins
import functools
import os
import sys
import code

from .readline import _get_reader, multiline_input

TYPE_CHECKING = False

if TYPE_CHECKING:
    from typing import Any


_error: tuple[type[Exception], ...] | type[Exception]
try:
    from .unix_console import _error
except ModuleNotFoundError:
    from .windows_console import _error


def check() -> str:
    """Returns the error message if there is a problem initializing the state."""
    try:
        _get_reader()
    except _error as e:
        if term := os.environ.get("TERM", ""):
            term = f"; TERM={term}"
        return str(str(e) or repr(e) or "unknown error") + term
    return ""


def _strip_final_indent(text: str) -> str:
    # kill spaces and tabs at the end, but only if they follow '\n'.
    # meant to remove the auto-indentation only (although it would of
    # course also remove explicitly-added indentation).
    short = text.rstrip(" \t")
    n = len(short)
    if n > 0 and text[n - 1] == "\n":
        return short
    return text


def _clear_screen():
    reader = _get_reader()
    reader.scheduled_commands.append("clear_screen")


REPL_COMMANDS = {
    "exit": _sitebuiltins.Quitter('exit', ''),
    "quit": _sitebuiltins.Quitter('quit', ''),
    "copyright": _sitebuiltins._Printer('copyright', sys.copyright),
    "help": _sitebuiltins._Helper(),
    "clear": _clear_screen,
    "\x1a": _sitebuiltins.Quitter('\x1a', ''),
}


def _more_lines(console: code.InteractiveConsole, unicodetext: str) -> bool:
    # ooh, look at the hack:
    src = _strip_final_indent(unicodetext)
    try:
        code = console.compile(src, "<stdin>", "single")
    except (OverflowError, SyntaxError, ValueError):
        lines = src.splitlines(keepends=True)
        if len(lines) == 1:
            return False

        last_line = lines[-1]
        was_indented = last_line.startswith((" ", "\t"))
        not_empty = last_line.strip() != ""
        incomplete = not last_line.endswith("\n")
        return (was_indented or not_empty) and incomplete
    else:
        return code is None
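The key trick in `_more_lines` is that `console.compile` (an `InteractiveConsole`'s `codeop.CommandCompiler`) returns `None` when the source is syntactically valid but incomplete, which is exactly the "ask for another line" signal. This can be demonstrated with the stdlib `codeop` module directly:

```python
import codeop

compiler = codeop.CommandCompiler()

# a complete statement compiles to a code object
assert compiler("x = 1", "<stdin>", "single") is not None

# syntactically valid but incomplete -> None, so the REPL prompts for more
assert compiler("if x:", "<stdin>", "single") is None
```

Genuinely invalid input raises `SyntaxError` instead, which is why `_more_lines` falls back to the indentation heuristic in that branch.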


def run_multiline_interactive_console(
    console: code.InteractiveConsole,
    *,
    future_flags: int = 0,
) -> None:
    from .readline import _setup
    _setup(console.locals)
    if future_flags:
        console.compile.compiler.flags |= future_flags

    more_lines = functools.partial(_more_lines, console)
    input_n = 0

    _is_x_showrefcount_set = sys._xoptions.get("showrefcount")
    _is_pydebug_build = hasattr(sys, "gettotalrefcount")
    show_ref_count = _is_x_showrefcount_set and _is_pydebug_build

    def maybe_run_command(statement: str) -> bool:
        statement = statement.strip()
        if statement in console.locals or statement not in REPL_COMMANDS:
            return False

        reader = _get_reader()
        reader.history.pop()  # skip internal commands in history
        command = REPL_COMMANDS[statement]
        if callable(command):
            # Make sure that history does not change because of commands
            with reader.suspend_history():
                command()
            return True
        return False

    while 1:
        try:
            try:
                sys.stdout.flush()
            except Exception:
                pass

            ps1 = getattr(sys, "ps1", ">>> ")
            ps2 = getattr(sys, "ps2", "... ")
            try:
                statement = multiline_input(more_lines, ps1, ps2)
            except EOFError:
                break

            if maybe_run_command(statement):
                continue

            input_name = f"<python-input-{input_n}>"
            more = console.push(_strip_final_indent(statement), filename=input_name, _symbol="single")  # type: ignore[call-arg]
            assert not more
            input_n += 1
        except KeyboardInterrupt:
            r = _get_reader()
            if r.input_trans is r.isearch_trans:
                r.do_cmd(("isearch-end", [""]))
            r.pos = len(r.get_unicode())
            r.dirty = True
            r.refresh()
            r.in_bracketed_paste = False
            console.write("\nKeyboardInterrupt\n")
            console.resetbuffer()
        except MemoryError:
            console.write("\nMemoryError\n")
            console.resetbuffer()
        except SystemExit:
            raise
        except:
            console.showtraceback()
            console.resetbuffer()
        if show_ref_count:
            console.write(
                f"[{sys.gettotalrefcount()} refs,"
                f" {sys.getallocatedblocks()} blocks]\n"
            )
21
Utils/PythonNew32/Lib/_pyrepl/trace.py
Normal file
@@ -0,0 +1,21 @@
from __future__ import annotations

import os

# types
if False:
    from typing import IO


trace_file: IO[str] | None = None
if trace_filename := os.environ.get("PYREPL_TRACE"):
    trace_file = open(trace_filename, "a")


def trace(line: str, *k: object, **kw: object) -> None:
    if trace_file is None:
        return
    if k or kw:
        line = line.format(*k, **kw)
    trace_file.write(line + "\n")
    trace_file.flush()
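Note that `trace()` only runs `str.format` when arguments are supplied, so literal lines containing braces pass through untouched and tracing stays cheap. A minimal sketch of the same pattern writing to an in-memory buffer instead of the `PYREPL_TRACE` file:

```python
import io

trace_file = io.StringIO()  # stands in for the file opened from PYREPL_TRACE

def trace(line: str, *k: object, **kw: object) -> None:
    # formatting is skipped entirely when no arguments are given
    if k or kw:
        line = line.format(*k, **kw)
    trace_file.write(line + "\n")

trace("event: {name} x={x}", name="key", x=3)
trace("raw line with {braces}")  # no args: braces are not interpreted
assert trace_file.getvalue() == "event: key x=3\nraw line with {braces}\n"
```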
10
Utils/PythonNew32/Lib/_pyrepl/types.py
Normal file
@@ -0,0 +1,10 @@
from collections.abc import Callable, Iterator

type Callback = Callable[[], object]
type SimpleContextManager = Iterator[None]
type KeySpec = str  # like r"\C-c"
type CommandName = str  # like "interrupt"
type EventTuple = tuple[CommandName, str]
type Completer = Callable[[str, int], str | None]
type CharBuffer = list[str]
type CharWidths = list[int]
816
Utils/PythonNew32/Lib/_pyrepl/unix_console.py
Normal file
@@ -0,0 +1,816 @@
# Copyright 2000-2010 Michael Hudson-Doyle <micahel@gmail.com>
#                     Antonio Cuni
#                     Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations

import errno
import os
import re
import select
import signal
import struct
import termios
import time
import platform
from fcntl import ioctl

from . import curses
from .console import Console, Event
from .fancy_termios import tcgetattr, tcsetattr
from .trace import trace
from .unix_eventqueue import EventQueue
from .utils import wlen


TYPE_CHECKING = False

# types
if TYPE_CHECKING:
    from typing import IO, Literal, overload
else:
    overload = lambda func: None


class InvalidTerminal(RuntimeError):
    pass


_error = (termios.error, curses.error, InvalidTerminal)

SIGWINCH_EVENT = "repaint"

FIONREAD = getattr(termios, "FIONREAD", None)
TIOCGWINSZ = getattr(termios, "TIOCGWINSZ", None)

# ------------ start of baudrate definitions ------------

# Add (possibly) missing baudrates (check termios man page) to termios


def add_baudrate_if_supported(dictionary: dict[int, int], rate: int) -> None:
    baudrate_name = "B%d" % rate
    if hasattr(termios, baudrate_name):
        dictionary[getattr(termios, baudrate_name)] = rate


# Check the termios man page (Line speed) to know where these
# values come from.
potential_baudrates = [
    0,
    110,
    115200,
    1200,
    134,
    150,
    1800,
    19200,
    200,
    230400,
    2400,
    300,
    38400,
    460800,
    4800,
    50,
    57600,
    600,
    75,
    9600,
]

ratedict: dict[int, int] = {}
for rate in potential_baudrates:
    add_baudrate_if_supported(ratedict, rate)

# Clean up variables to avoid unintended usage
del rate, add_baudrate_if_supported

# ------------ end of baudrate definitions ------------

delayprog = re.compile(b"\\$<([0-9]+)((?:/|\\*){0,2})>")

try:
    poll: type[select.poll] = select.poll
except AttributeError:
    # this is exactly the minimum necessary to support what we
    # do with poll objects
    class MinimalPoll:
        def __init__(self):
            pass

        def register(self, fd, flag):
            self.fd = fd

        # note: The 'timeout' argument is received as *milliseconds*
        def poll(self, timeout: float | None = None) -> list[int]:
            if timeout is None:
                r, w, e = select.select([self.fd], [], [])
            else:
                r, w, e = select.select([self.fd], [], [], timeout / 1000)
            return r

    poll = MinimalPoll  # type: ignore[assignment]
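The `MinimalPoll` fallback adapts select's second-based timeout to poll's millisecond convention. A standalone sketch exercising that contract with an `os.pipe` (POSIX assumed; this is a simplified stand-in, not the class above):

```python
import os
import select

class MinimalPoll:
    # minimal select-based stand-in for select.poll, for one registered fd
    def register(self, fd, flag=None):
        self.fd = fd

    def poll(self, timeout=None):
        # poll() takes milliseconds; select() takes seconds
        if timeout is None:
            r, _, _ = select.select([self.fd], [], [])
        else:
            r, _, _ = select.select([self.fd], [], [], timeout / 1000)
        return r

r_fd, w_fd = os.pipe()
p = MinimalPoll()
p.register(r_fd)
assert p.poll(timeout=10) == []      # nothing readable yet
os.write(w_fd, b"x")
assert p.poll(timeout=10) == [r_fd]  # data now pending
os.close(r_fd)
os.close(w_fd)
```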


class UnixConsole(Console):
    def __init__(
        self,
        f_in: IO[bytes] | int = 0,
        f_out: IO[bytes] | int = 1,
        term: str = "",
        encoding: str = "",
    ):
        """
        Initialize the UnixConsole.

        Parameters:
        - f_in (int or file-like object): Input file descriptor or object.
        - f_out (int or file-like object): Output file descriptor or object.
        - term (str): Terminal name.
        - encoding (str): Encoding to use for I/O operations.
        """
        super().__init__(f_in, f_out, term, encoding)

        self.pollob = poll()
        self.pollob.register(self.input_fd, select.POLLIN)
        self.input_buffer = b""
        self.input_buffer_pos = 0
        curses.setupterm(term or None, self.output_fd)
        self.term = term

        @overload
        def _my_getstr(cap: str, optional: Literal[False] = False) -> bytes: ...

        @overload
        def _my_getstr(cap: str, optional: bool) -> bytes | None: ...

        def _my_getstr(cap: str, optional: bool = False) -> bytes | None:
            r = curses.tigetstr(cap)
            if not optional and r is None:
                raise InvalidTerminal(
                    f"terminal doesn't have the required {cap} capability"
                )
            return r

        self._bel = _my_getstr("bel")
        self._civis = _my_getstr("civis", optional=True)
        self._clear = _my_getstr("clear")
        self._cnorm = _my_getstr("cnorm", optional=True)
        self._cub = _my_getstr("cub", optional=True)
        self._cub1 = _my_getstr("cub1", optional=True)
        self._cud = _my_getstr("cud", optional=True)
        self._cud1 = _my_getstr("cud1", optional=True)
        self._cuf = _my_getstr("cuf", optional=True)
        self._cuf1 = _my_getstr("cuf1", optional=True)
        self._cup = _my_getstr("cup")
        self._cuu = _my_getstr("cuu", optional=True)
        self._cuu1 = _my_getstr("cuu1", optional=True)
        self._dch1 = _my_getstr("dch1", optional=True)
        self._dch = _my_getstr("dch", optional=True)
        self._el = _my_getstr("el")
        self._hpa = _my_getstr("hpa", optional=True)
        self._ich = _my_getstr("ich", optional=True)
        self._ich1 = _my_getstr("ich1", optional=True)
        self._ind = _my_getstr("ind", optional=True)
        self._pad = _my_getstr("pad", optional=True)
        self._ri = _my_getstr("ri", optional=True)
        self._rmkx = _my_getstr("rmkx", optional=True)
        self._smkx = _my_getstr("smkx", optional=True)

        self.__setup_movement()

        self.event_queue = EventQueue(self.input_fd, self.encoding)
        self.cursor_visible = 1

        signal.signal(signal.SIGCONT, self._sigcont_handler)

    def _sigcont_handler(self, signum, frame):
        self.restore()
        self.prepare()

    def more_in_buffer(self) -> bool:
        return bool(
            self.input_buffer
            and self.input_buffer_pos < len(self.input_buffer)
        )

    def __read(self, n: int) -> bytes:
        if not self.more_in_buffer():
            self.input_buffer = os.read(self.input_fd, 10000)

        ret = self.input_buffer[self.input_buffer_pos : self.input_buffer_pos + n]
        self.input_buffer_pos += len(ret)
        if self.input_buffer_pos >= len(self.input_buffer):
            self.input_buffer = b""
            self.input_buffer_pos = 0
        return ret

    def change_encoding(self, encoding: str) -> None:
        """
        Change the encoding used for I/O operations.

        Parameters:
        - encoding (str): New encoding to use.
        """
        self.encoding = encoding

    def refresh(self, screen, c_xy):
        """
        Refresh the console screen.

        Parameters:
        - screen (list): List of strings representing the screen contents.
        - c_xy (tuple): Cursor position (x, y) on the screen.
        """
        cx, cy = c_xy
        if not self.__gone_tall:
            while len(self.screen) < min(len(screen), self.height):
                self.__hide_cursor()
                self.__move(0, len(self.screen) - 1)
                self.__write("\n")
                self.posxy = 0, len(self.screen)
                self.screen.append("")
        else:
            while len(self.screen) < len(screen):
                self.screen.append("")

        if len(screen) > self.height:
            self.__gone_tall = 1
            self.__move = self.__move_tall

        px, py = self.posxy
        old_offset = offset = self.__offset
        height = self.height

        # we make sure the cursor is on the screen, and that we're
        # using all of the screen if we can
        if cy < offset:
            offset = cy
        elif cy >= offset + height:
            offset = cy - height + 1
        elif offset > 0 and len(screen) < offset + height:
            offset = max(len(screen) - height, 0)
            screen.append("")

        oldscr = self.screen[old_offset : old_offset + height]
        newscr = screen[offset : offset + height]

        # use hardware scrolling if we have it.
        if old_offset > offset and self._ri:
            self.__hide_cursor()
            self.__write_code(self._cup, 0, 0)
            self.posxy = 0, old_offset
            for i in range(old_offset - offset):
                self.__write_code(self._ri)
                oldscr.pop(-1)
                oldscr.insert(0, "")
        elif old_offset < offset and self._ind:
            self.__hide_cursor()
            self.__write_code(self._cup, self.height - 1, 0)
            self.posxy = 0, old_offset + self.height - 1
            for i in range(offset - old_offset):
                self.__write_code(self._ind)
                oldscr.pop(0)
                oldscr.append("")

        self.__offset = offset

        for (
            y,
            oldline,
            newline,
        ) in zip(range(offset, offset + height), oldscr, newscr):
            if oldline != newline:
                self.__write_changed_line(y, oldline, newline, px)

        y = len(newscr)
        while y < len(oldscr):
            self.__hide_cursor()
            self.__move(0, y)
            self.posxy = 0, y
            self.__write_code(self._el)
            y += 1

        self.__show_cursor()

        self.screen = screen.copy()
        self.move_cursor(cx, cy)
        self.flushoutput()

    def move_cursor(self, x, y):
        """
        Move the cursor to the specified position on the screen.

        Parameters:
        - x (int): X coordinate.
        - y (int): Y coordinate.
        """
        if y < self.__offset or y >= self.__offset + self.height:
            self.event_queue.insert(Event("scroll", None))
        else:
            self.__move(x, y)
            self.posxy = x, y
            self.flushoutput()

    def prepare(self):
        """
        Prepare the console for input/output operations.
        """
        self.__svtermstate = tcgetattr(self.input_fd)
        raw = self.__svtermstate.copy()
        raw.iflag &= ~(termios.INPCK | termios.ISTRIP | termios.IXON)
        raw.oflag &= ~(termios.OPOST)
        raw.cflag &= ~(termios.CSIZE | termios.PARENB)
        raw.cflag |= termios.CS8
        raw.iflag |= termios.BRKINT
        raw.lflag &= ~(termios.ICANON | termios.ECHO | termios.IEXTEN)
        raw.lflag |= termios.ISIG
        raw.cc[termios.VMIN] = 1
        raw.cc[termios.VTIME] = 0
        tcsetattr(self.input_fd, termios.TCSADRAIN, raw)

        # In macOS terminal we need to deactivate line wrap via ANSI escape code
        if platform.system() == "Darwin" and os.getenv("TERM_PROGRAM") == "Apple_Terminal":
            os.write(self.output_fd, b"\033[?7l")

        self.screen = []
        self.height, self.width = self.getheightwidth()

        self.__buffer = []

        self.posxy = 0, 0
        self.__gone_tall = 0
        self.__move = self.__move_short
        self.__offset = 0

        self.__maybe_write_code(self._smkx)

        try:
            self.old_sigwinch = signal.signal(signal.SIGWINCH, self.__sigwinch)
        except ValueError:
            pass

        self.__enable_bracketed_paste()

    def restore(self):
        """
        Restore the console to the default state
        """
        self.__disable_bracketed_paste()
        self.__maybe_write_code(self._rmkx)
        self.flushoutput()
        tcsetattr(self.input_fd, termios.TCSADRAIN, self.__svtermstate)

        if platform.system() == "Darwin" and os.getenv("TERM_PROGRAM") == "Apple_Terminal":
            os.write(self.output_fd, b"\033[?7h")

        if hasattr(self, "old_sigwinch"):
            signal.signal(signal.SIGWINCH, self.old_sigwinch)
            del self.old_sigwinch

    def push_char(self, char: int | bytes) -> None:
        """
        Push a character to the console event queue.
        """
        trace("push char {char!r}", char=char)
        self.event_queue.push(char)

    def get_event(self, block: bool = True) -> Event | None:
        """
        Get an event from the console event queue.

        Parameters:
        - block (bool): Whether to block until an event is available.

        Returns:
        - Event: Event object from the event queue.
        """
        if not block and not self.wait(timeout=0):
            return None

        while self.event_queue.empty():
            while True:
                try:
                    self.push_char(self.__read(1))
                except OSError as err:
                    if err.errno == errno.EINTR:
                        if not self.event_queue.empty():
                            return self.event_queue.get()
                        else:
                            continue
                    else:
                        raise
                else:
                    break
        return self.event_queue.get()

    def wait(self, timeout: float | None = None) -> bool:
        """
        Wait for events on the console.
        """
        return (
            not self.event_queue.empty()
            or self.more_in_buffer()
            or bool(self.pollob.poll(timeout))
        )

    def set_cursor_vis(self, visible):
        """
        Set the visibility of the cursor.

        Parameters:
        - visible (bool): Visibility flag.
        """
        if visible:
            self.__show_cursor()
        else:
            self.__hide_cursor()

    if TIOCGWINSZ:

        def getheightwidth(self):
            """
            Get the height and width of the console.

            Returns:
            - tuple: Height and width of the console.
            """
            try:
                return int(os.environ["LINES"]), int(os.environ["COLUMNS"])
            except (KeyError, TypeError, ValueError):
                try:
                    size = ioctl(self.input_fd, TIOCGWINSZ, b"\000" * 8)
                except OSError:
                    return 25, 80
                height, width = struct.unpack("hhhh", size)[0:2]
                if not height:
                    return 25, 80
                return height, width

    else:

        def getheightwidth(self):
            """
            Get the height and width of the console.

            Returns:
            - tuple: Height and width of the console.
            """
            try:
                return int(os.environ["LINES"]), int(os.environ["COLUMNS"])
            except (KeyError, TypeError, ValueError):
                return 25, 80
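The `TIOCGWINSZ` ioctl used by `getheightwidth` fills a `struct winsize` of four C shorts (`ws_row`, `ws_col`, `ws_xpixel`, `ws_ypixel`); the code keeps only the first two. A round-trip sketch of that unpacking, with the packed bytes standing in for what the ioctl would return:

```python
import struct

# struct winsize: ws_row, ws_col, ws_xpixel, ws_ypixel (four C shorts)
packed = struct.pack("hhhh", 24, 80, 0, 0)  # simulated ioctl result
height, width = struct.unpack("hhhh", packed)[0:2]
assert (height, width) == (24, 80)
```

The `if not height` guard above matters because some pseudo-terminals report a zero-sized window, in which case the 25x80 fallback is used.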
|
||||
|
||||
    def forgetinput(self):
        """
        Discard any pending input on the console.
        """
        termios.tcflush(self.input_fd, termios.TCIFLUSH)

    def flushoutput(self):
        """
        Flush the output buffer.
        """
        for text, iscode in self.__buffer:
            if iscode:
                self.__tputs(text)
            else:
                os.write(self.output_fd, text.encode(self.encoding, "replace"))
        del self.__buffer[:]

    def finish(self):
        """
        Finish console operations and flush the output buffer.
        """
        y = len(self.screen) - 1
        while y >= 0 and not self.screen[y]:
            y -= 1
        self.__move(0, min(y, self.height + self.__offset - 1))
        self.__write("\n\r")
        self.flushoutput()

    def beep(self):
        """
        Emit a beep sound.
        """
        self.__maybe_write_code(self._bel)
        self.flushoutput()

    if FIONREAD:

        def getpending(self):
            """
            Get pending events from the console event queue.

            Returns:
            - Event: Pending event from the event queue.
            """
            e = Event("key", "", b"")

            while not self.event_queue.empty():
                e2 = self.event_queue.get()
                e.data += e2.data
                e.raw += e2.raw

            amount = struct.unpack("i", ioctl(self.input_fd, FIONREAD, b"\0\0\0\0"))[0]
            raw = self.__read(amount)
            data = str(raw, self.encoding, "replace")
            e.data += data
            e.raw += raw
            return e

    else:

        def getpending(self):
            """
            Get pending events from the console event queue.

            Returns:
            - Event: Pending event from the event queue.
            """
            e = Event("key", "", b"")

            while not self.event_queue.empty():
                e2 = self.event_queue.get()
                e.data += e2.data
                e.raw += e2.raw

            amount = 10000
            raw = self.__read(amount)
            data = str(raw, self.encoding, "replace")
            e.data += data
            e.raw += raw
            return e

    def clear(self):
        """
        Clear the console screen.
        """
        self.__write_code(self._clear)
        self.__gone_tall = 1
        self.__move = self.__move_tall
        self.posxy = 0, 0
        self.screen = []

    @property
    def input_hook(self):
        try:
            import posix
        except ImportError:
            return None
        if posix._is_inputhook_installed():
            return posix._inputhook

    def __enable_bracketed_paste(self) -> None:
        os.write(self.output_fd, b"\x1b[?2004h")

    def __disable_bracketed_paste(self) -> None:
        os.write(self.output_fd, b"\x1b[?2004l")

    def __setup_movement(self):
        """
        Set up the movement functions based on the terminal capabilities.
        """
        if 0 and self._hpa:  # hpa don't work in windows telnet :-(
            self.__move_x = self.__move_x_hpa
        elif self._cub and self._cuf:
            self.__move_x = self.__move_x_cub_cuf
        elif self._cub1 and self._cuf1:
            self.__move_x = self.__move_x_cub1_cuf1
        else:
            raise RuntimeError("insufficient terminal (horizontal)")

        if self._cuu and self._cud:
            self.__move_y = self.__move_y_cuu_cud
        elif self._cuu1 and self._cud1:
            self.__move_y = self.__move_y_cuu1_cud1
        else:
            raise RuntimeError("insufficient terminal (vertical)")

        if self._dch1:
            self.dch1 = self._dch1
        elif self._dch:
            self.dch1 = curses.tparm(self._dch, 1)
        else:
            self.dch1 = None

        if self._ich1:
            self.ich1 = self._ich1
        elif self._ich:
            self.ich1 = curses.tparm(self._ich, 1)
        else:
            self.ich1 = None

        self.__move = self.__move_short

    def __write_changed_line(self, y, oldline, newline, px_coord):
        # this is frustrating; there's no reason to test (say)
        # self.dch1 inside the loop -- but alternative ways of
        # structuring this function are equally painful (I'm trying to
        # avoid writing code generators these days...)
        minlen = min(wlen(oldline), wlen(newline))
        x_pos = 0
        x_coord = 0

        px_pos = 0
        j = 0
        for c in oldline:
            if j >= px_coord:
                break
            j += wlen(c)
            px_pos += 1

        # reuse the oldline as much as possible, but stop as soon as we
        # encounter an ESCAPE, because it might be the start of an escape
        # sequence
        while (
            x_coord < minlen
            and oldline[x_pos] == newline[x_pos]
            and newline[x_pos] != "\x1b"
        ):
            x_coord += wlen(newline[x_pos])
            x_pos += 1

        # if we need to insert a single character right after the first detected change
        if oldline[x_pos:] == newline[x_pos + 1 :] and self.ich1:
            if (
                y == self.posxy[1]
                and x_coord > self.posxy[0]
                and oldline[px_pos:x_pos] == newline[px_pos + 1 : x_pos + 1]
            ):
                x_pos = px_pos
                x_coord = px_coord
            character_width = wlen(newline[x_pos])
            self.__move(x_coord, y)
            self.__write_code(self.ich1)
            self.__write(newline[x_pos])
            self.posxy = x_coord + character_width, y

        # if it's a single character change in the middle of the line
        elif (
            x_coord < minlen
            and oldline[x_pos + 1 :] == newline[x_pos + 1 :]
            and wlen(oldline[x_pos]) == wlen(newline[x_pos])
        ):
            character_width = wlen(newline[x_pos])
            self.__move(x_coord, y)
            self.__write(newline[x_pos])
            self.posxy = x_coord + character_width, y

        # if this is the last character to fit in the line and we edit in the middle of the line
        elif (
            self.dch1
            and self.ich1
            and wlen(newline) == self.width
            and x_coord < wlen(newline) - 2
            and newline[x_pos + 1 : -1] == oldline[x_pos:-2]
        ):
            self.__hide_cursor()
            self.__move(self.width - 2, y)
            self.posxy = self.width - 2, y
            self.__write_code(self.dch1)

            character_width = wlen(newline[x_pos])
            self.__move(x_coord, y)
            self.__write_code(self.ich1)
            self.__write(newline[x_pos])
            self.posxy = character_width + 1, y

        else:
            self.__hide_cursor()
            self.__move(x_coord, y)
            if wlen(oldline) > wlen(newline):
                self.__write_code(self._el)
            self.__write(newline[x_pos:])
            self.posxy = wlen(newline), y

        if "\x1b" in newline:
            # ANSI escape characters are present, so we can't assume
            # anything about the position of the cursor. Moving the cursor
            # to the left margin should work to get to a known position.
            self.move_cursor(0, y)

    def __write(self, text):
        self.__buffer.append((text, 0))

    def __write_code(self, fmt, *args):
        self.__buffer.append((curses.tparm(fmt, *args), 1))

    def __maybe_write_code(self, fmt, *args):
        if fmt:
            self.__write_code(fmt, *args)

    def __move_y_cuu1_cud1(self, y):
        assert self._cud1 is not None
        assert self._cuu1 is not None
        dy = y - self.posxy[1]
        if dy > 0:
            self.__write_code(dy * self._cud1)
        elif dy < 0:
            self.__write_code((-dy) * self._cuu1)

    def __move_y_cuu_cud(self, y):
        dy = y - self.posxy[1]
        if dy > 0:
            self.__write_code(self._cud, dy)
        elif dy < 0:
            self.__write_code(self._cuu, -dy)

    def __move_x_hpa(self, x: int) -> None:
        if x != self.posxy[0]:
            self.__write_code(self._hpa, x)

    def __move_x_cub1_cuf1(self, x: int) -> None:
        assert self._cuf1 is not None
        assert self._cub1 is not None
        dx = x - self.posxy[0]
        if dx > 0:
            self.__write_code(self._cuf1 * dx)
        elif dx < 0:
            self.__write_code(self._cub1 * (-dx))

    def __move_x_cub_cuf(self, x: int) -> None:
        dx = x - self.posxy[0]
        if dx > 0:
            self.__write_code(self._cuf, dx)
        elif dx < 0:
            self.__write_code(self._cub, -dx)

    def __move_short(self, x, y):
        self.__move_x(x)
        self.__move_y(y)

    def __move_tall(self, x, y):
        assert 0 <= y - self.__offset < self.height, y - self.__offset
        self.__write_code(self._cup, y - self.__offset, x)

    def __sigwinch(self, signum, frame):
        self.height, self.width = self.getheightwidth()
        self.event_queue.insert(Event("resize", None))

    def __hide_cursor(self):
        if self.cursor_visible:
            self.__maybe_write_code(self._civis)
            self.cursor_visible = 0

    def __show_cursor(self):
        if not self.cursor_visible:
            self.__maybe_write_code(self._cnorm)
            self.cursor_visible = 1

    def repaint(self):
        if not self.__gone_tall:
            self.posxy = 0, self.posxy[1]
            self.__write("\r")
            ns = len(self.screen) * ["\000" * self.width]
            self.screen = ns
        else:
            self.posxy = 0, self.__offset
            self.__move(0, self.__offset)
            ns = self.height * ["\000" * self.width]
            self.screen = ns

    def __tputs(self, fmt, prog=delayprog):
        """A Python implementation of the curses tputs function; the
        curses one can't really be wrapped in a sane manner.

        I have the strong suspicion that this is complexity that
        will never do anyone any good."""
        # using .get() means that things will blow up
        # only if the bps is actually needed (which I'm
        # betting is pretty unlikely)
        bps = ratedict.get(self.__svtermstate.ospeed)
        while 1:
            m = prog.search(fmt)
            if not m:
                os.write(self.output_fd, fmt)
                break
            x, y = m.span()
            os.write(self.output_fd, fmt[:x])
            fmt = fmt[y:]
            delay = int(m.group(1))
            if b"*" in m.group(2):
                delay *= self.height
            if self._pad and bps is not None:
                # integer division: a bytes pad can't be repeated a float
                # number of times
                nchars = (bps * delay) // 1000
                os.write(self.output_fd, self._pad * nchars)
            else:
                time.sleep(float(delay) / 1000.0)
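`__tputs` splits a terminfo capability on embedded curses padding specs of the form `$<delay>`, writing the text around each spec and honouring the delay in between. A standalone sketch of the splitting step; the regex here is an approximation of the module's `delayprog` (defined earlier in the file), not a verbatim copy:

```python
import re

# Approximation of `delayprog`: a curses padding spec is "$<N>" with
# optional "/" and "*" modifiers, e.g. b"$<50>" or b"$<5*>".
delay_spec = re.compile(rb"\$<([0-9]+)((?:/|\*){0,2})>")

fmt = b"\x1b[2J$<50>\x1b[H"
m = delay_spec.search(fmt)
before, after = fmt[: m.start()], fmt[m.end():]
delay_ms = int(m.group(1))
print(before, after, delay_ms)  # text before the spec, text after, 50
```

The `*` modifier marks a per-line delay, which is why the method multiplies `delay` by `self.height` when it is present.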
76	Utils/PythonNew32/Lib/_pyrepl/unix_eventqueue.py	Normal file
@@ -0,0 +1,76 @@
# Copyright 2000-2008 Michael Hudson-Doyle <micahel@gmail.com>
#                     Armin Rigo
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from . import curses
from .trace import trace
from .base_eventqueue import BaseEventQueue
from termios import tcgetattr, VERASE
import os


# Mapping of human-readable key names to their terminal-specific codes
TERMINAL_KEYNAMES = {
    "delete": "kdch1",
    "down": "kcud1",
    "end": "kend",
    "enter": "kent",
    "home": "khome",
    "insert": "kich1",
    "left": "kcub1",
    "page down": "knp",
    "page up": "kpp",
    "right": "kcuf1",
    "up": "kcuu1",
}


# Function keys F1-F20 mapping
TERMINAL_KEYNAMES.update(("f%d" % i, "kf%d" % i) for i in range(1, 21))

# Known CTRL-arrow keycodes
CTRL_ARROW_KEYCODES = {
    # for xterm, gnome-terminal, xfce terminal, etc.
    b'\033[1;5D': 'ctrl left',
    b'\033[1;5C': 'ctrl right',
    # for rxvt
    b'\033Od': 'ctrl left',
    b'\033Oc': 'ctrl right',
}

def get_terminal_keycodes() -> dict[bytes, str]:
    """
    Generates a dictionary mapping terminal keycodes to human-readable names.
    """
    keycodes = {}
    for key, terminal_code in TERMINAL_KEYNAMES.items():
        keycode = curses.tigetstr(terminal_code)
        trace('key {key} tiname {terminal_code} keycode {keycode!r}', **locals())
        if keycode:
            keycodes[keycode] = key
    keycodes.update(CTRL_ARROW_KEYCODES)
    return keycodes

class EventQueue(BaseEventQueue):
    def __init__(self, fd: int, encoding: str) -> None:
        keycodes = get_terminal_keycodes()
        if os.isatty(fd):
            backspace = tcgetattr(fd)[6][VERASE]
            keycodes[backspace] = "backspace"
        BaseEventQueue.__init__(self, encoding, keycodes)
77	Utils/PythonNew32/Lib/_pyrepl/utils.py	Normal file
@@ -0,0 +1,77 @@
import re
import unicodedata
import functools

from .types import CharBuffer, CharWidths
from .trace import trace

ANSI_ESCAPE_SEQUENCE = re.compile(r"\x1b\[[ -@]*[A-~]")
ZERO_WIDTH_BRACKET = re.compile(r"\x01.*?\x02")
ZERO_WIDTH_TRANS = str.maketrans({"\x01": "", "\x02": ""})


@functools.cache
def str_width(c: str) -> int:
    if ord(c) < 128:
        return 1
    w = unicodedata.east_asian_width(c)
    if w in ("N", "Na", "H", "A"):
        return 1
    return 2


def wlen(s: str) -> int:
    if len(s) == 1 and s != "\x1a":
        return str_width(s)
    length = sum(str_width(i) for i in s)
    # remove lengths of any escape sequences
    sequence = ANSI_ESCAPE_SEQUENCE.findall(s)
    ctrl_z_cnt = s.count("\x1a")
    return length - sum(len(i) for i in sequence) + ctrl_z_cnt

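`wlen` computes a string's on-screen width by taking the naive per-character width and subtracting the characters that belong to CSI escape sequences. A self-contained sketch of the same computation (the regex is copied from the module; the per-character width is simplified to 1 for this ASCII-only demo):

```python
import re

# Same pattern the module uses to recognize CSI escape sequences.
ANSI_ESCAPE_SEQUENCE = re.compile(r"\x1b\[[ -@]*[A-~]")

def visible_width(s: str) -> int:
    # Naive width (1 column per ASCII char here), minus every character
    # that is part of an escape sequence such as "\x1b[31m".
    length = len(s)
    sequences = ANSI_ESCAPE_SEQUENCE.findall(s)
    return length - sum(len(seq) for seq in sequences)

print(visible_width("\x1b[31mred\x1b[0m"))  # 3: only "red" occupies columns
```

This is why color codes in output don't throw off the cursor bookkeeping: the color-on and color-off sequences contribute zero columns.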

def unbracket(s: str, including_content: bool = False) -> str:
    r"""Return `s` with \001 and \002 characters removed.

    If `including_content` is True, content between \001 and \002 is also
    stripped.
    """
    if including_content:
        return ZERO_WIDTH_BRACKET.sub("", s)
    return s.translate(ZERO_WIDTH_TRANS)
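The `\x01`/`\x02` markers bracket zero-width runs (typically color escapes in a prompt) so width calculations can skip them. A sketch using the same two module-level helpers defined above, on a hypothetical colored prompt:

```python
import re

# Same helpers as in the module.
ZERO_WIDTH_BRACKET = re.compile(r"\x01.*?\x02")
ZERO_WIDTH_TRANS = str.maketrans({"\x01": "", "\x02": ""})

# A prompt whose color codes are wrapped in \x01...\x02 markers.
prompt = "\x01\x1b[31m\x02>>> \x01\x1b[0m\x02"

# Drop only the markers: this is what actually gets written to the tty.
printable = prompt.translate(ZERO_WIDTH_TRANS)

# Drop the markers *and* their content: this is what cursor math sees.
visible = ZERO_WIDTH_BRACKET.sub("", prompt)
print(repr(printable), repr(visible))
```

So the printable form keeps the escapes, while the visible form reduces to `">>> "`, four columns wide.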

def disp_str(buffer: str) -> tuple[CharBuffer, CharWidths]:
    r"""Decompose the input buffer into a printable variant.

    Returns a tuple of two lists:
    - the first list is the input buffer, character by character;
    - the second list is the visible width of each character in the input
      buffer.

    Examples:
        >>> utils.disp_str("a = 9")
        (['a', ' ', '=', ' ', '9'], [1, 1, 1, 1, 1])
    """
    chars: CharBuffer = []
    char_widths: CharWidths = []

    if not buffer:
        return chars, char_widths

    for c in buffer:
        if c == "\x1a":  # CTRL-Z on Windows
            chars.append(c)
            char_widths.append(2)
        elif ord(c) < 128:
            chars.append(c)
            char_widths.append(1)
        elif unicodedata.category(c).startswith("C"):
            c = r"\u%04x" % ord(c)
            chars.append(c)
            char_widths.append(len(c))
        else:
            chars.append(c)
            char_widths.append(str_width(c))
    trace("disp_str({buffer}) = {s}, {b}", buffer=repr(buffer), s=chars, b=char_widths)
    return chars, char_widths
682	Utils/PythonNew32/Lib/_pyrepl/windows_console.py	Normal file
@@ -0,0 +1,682 @@
# Copyright 2000-2004 Michael Hudson-Doyle <micahel@gmail.com>
#
# All Rights Reserved
#
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
#
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

from __future__ import annotations

import io
import os
import sys
import time
import msvcrt

from collections import deque
import ctypes
from ctypes.wintypes import (
    _COORD,
    WORD,
    SMALL_RECT,
    BOOL,
    HANDLE,
    CHAR,
    DWORD,
    WCHAR,
    SHORT,
)
from ctypes import Structure, POINTER, Union
from .console import Event, Console
from .trace import trace
from .utils import wlen
from .windows_eventqueue import EventQueue

try:
    from ctypes import GetLastError, WinDLL, windll, WinError  # type: ignore[attr-defined]
except ImportError:
    # Keep MyPy happy off Windows
    from ctypes import CDLL as WinDLL, cdll as windll

    def GetLastError() -> int:
        return 42

    class WinError(OSError):  # type: ignore[no-redef]
        def __init__(self, err: int | None, descr: str | None = None) -> None:
            self.err = err
            self.descr = descr


TYPE_CHECKING = False

if TYPE_CHECKING:
    from typing import IO

# Virtual-Key Codes: https://learn.microsoft.com/en-us/windows/win32/inputdev/virtual-key-codes
VK_MAP: dict[int, str] = {
    0x23: "end",  # VK_END
    0x24: "home",  # VK_HOME
    0x25: "left",  # VK_LEFT
    0x26: "up",  # VK_UP
    0x27: "right",  # VK_RIGHT
    0x28: "down",  # VK_DOWN
    0x2E: "delete",  # VK_DELETE
    0x70: "f1",  # VK_F1
    0x71: "f2",  # VK_F2
    0x72: "f3",  # VK_F3
    0x73: "f4",  # VK_F4
    0x74: "f5",  # VK_F5
    0x75: "f6",  # VK_F6
    0x76: "f7",  # VK_F7
    0x77: "f8",  # VK_F8
    0x78: "f9",  # VK_F9
    0x79: "f10",  # VK_F10
    0x7A: "f11",  # VK_F11
    0x7B: "f12",  # VK_F12
    0x7C: "f13",  # VK_F13
    0x7D: "f14",  # VK_F14
    0x7E: "f15",  # VK_F15
    0x7F: "f16",  # VK_F16
    0x80: "f17",  # VK_F17
    0x81: "f18",  # VK_F18
    0x82: "f19",  # VK_F19
    0x83: "f20",  # VK_F20
}

# Virtual terminal output sequences
# Reference: https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#output-sequences
# Check `windows_eventqueue.py` for input sequences
ERASE_IN_LINE = "\x1b[K"
MOVE_LEFT = "\x1b[{}D"
MOVE_RIGHT = "\x1b[{}C"
MOVE_UP = "\x1b[{}A"
MOVE_DOWN = "\x1b[{}B"
CLEAR = "\x1b[H\x1b[J"

# State of control keys: https://learn.microsoft.com/en-us/windows/console/key-event-record-str
ALT_ACTIVE = 0x01 | 0x02
CTRL_ACTIVE = 0x04 | 0x08
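`dwControlKeyState` is a bitmask in which the left and right variants of each modifier have separate bits (per the KEY_EVENT_RECORD docs linked above), so each mask ORs both sides together. A small illustrative sketch of how a key can be classified against these masks; the `classify` helper is hypothetical, not part of the module, but it mirrors the CTRL-before-ALT precedence used later in `get_event`:

```python
ALT_ACTIVE = 0x01 | 0x02   # RIGHT_ALT_PRESSED | LEFT_ALT_PRESSED
CTRL_ACTIVE = 0x04 | 0x08  # RIGHT_CTRL_PRESSED | LEFT_CTRL_PRESSED

def classify(key: str, control_state: int) -> str:
    # CTRL is checked first, matching the elif chain in get_event.
    if control_state & CTRL_ACTIVE:
        return f"ctrl {key}"
    if control_state & ALT_ACTIVE:
        return f"meta {key}"
    return key

print(classify("left", 0x08))  # left ctrl held
print(classify("left", 0x02))  # left alt held
print(classify("left", 0x00))  # no modifier
```

Either side's bit is enough to trigger the branch, which is exactly why AltGr (reported by Windows as CTRL+ALT together) needs the special-casing seen further down.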


class _error(Exception):
    pass

def _supports_vt():
    try:
        import nt
        return nt._supports_virtual_terminal()
    except (ImportError, AttributeError):
        return False

class WindowsConsole(Console):
    def __init__(
        self,
        f_in: IO[bytes] | int = 0,
        f_out: IO[bytes] | int = 1,
        term: str = "",
        encoding: str = "",
    ):
        super().__init__(f_in, f_out, term, encoding)

        self.__vt_support = _supports_vt()

        if self.__vt_support:
            trace('console supports virtual terminal')

            # Save original console modes so we can recover on cleanup.
            original_input_mode = DWORD()
            GetConsoleMode(InHandle, original_input_mode)
            trace(f'saved original input mode 0x{original_input_mode.value:x}')
            self.__original_input_mode = original_input_mode.value

            SetConsoleMode(
                OutHandle,
                ENABLE_WRAP_AT_EOL_OUTPUT
                | ENABLE_PROCESSED_OUTPUT
                | ENABLE_VIRTUAL_TERMINAL_PROCESSING,
            )

        self.screen: list[str] = []
        self.width = 80
        self.height = 25
        self.__offset = 0
        self.event_queue = EventQueue(encoding)
        try:
            self.out = io._WindowsConsoleIO(self.output_fd, "w")  # type: ignore[attr-defined]
        except ValueError:
            # Console I/O is redirected, fallback...
            self.out = None

    def refresh(self, screen: list[str], c_xy: tuple[int, int]) -> None:
        """
        Refresh the console screen.

        Parameters:
        - screen (list): List of strings representing the screen contents.
        - c_xy (tuple): Cursor position (x, y) on the screen.
        """
        cx, cy = c_xy

        while len(self.screen) < min(len(screen), self.height):
            self._hide_cursor()
            self._move_relative(0, len(self.screen) - 1)
            self.__write("\n")
            self.posxy = 0, len(self.screen)
            self.screen.append("")

        px, py = self.posxy
        old_offset = offset = self.__offset
        height = self.height

        # we make sure the cursor is on the screen, and that we're
        # using all of the screen if we can
        if cy < offset:
            offset = cy
        elif cy >= offset + height:
            offset = cy - height + 1
            scroll_lines = offset - old_offset

            # Scrolling the buffer as the current input is greater than the visible
            # portion of the window. We need to scroll the visible portion and the
            # entire history
            self._scroll(scroll_lines, self._getscrollbacksize())
            self.posxy = self.posxy[0], self.posxy[1] + scroll_lines
            self.__offset += scroll_lines

            for i in range(scroll_lines):
                self.screen.append("")
        elif offset > 0 and len(screen) < offset + height:
            offset = max(len(screen) - height, 0)
            screen.append("")

        oldscr = self.screen[old_offset : old_offset + height]
        newscr = screen[offset : offset + height]

        self.__offset = offset

        self._hide_cursor()
        for (
            y,
            oldline,
            newline,
        ) in zip(range(offset, offset + height), oldscr, newscr):
            if oldline != newline:
                self.__write_changed_line(y, oldline, newline, px)

        y = len(newscr)
        while y < len(oldscr):
            self._move_relative(0, y)
            self.posxy = 0, y
            self._erase_to_end()
            y += 1

        self._show_cursor()

        self.screen = screen
        self.move_cursor(cx, cy)

    @property
    def input_hook(self):
        try:
            import nt
        except ImportError:
            return None
        if nt._is_inputhook_installed():
            return nt._inputhook

    def __write_changed_line(
        self, y: int, oldline: str, newline: str, px_coord: int
    ) -> None:
        # this is frustrating; there's no reason to test (say)
        # self.dch1 inside the loop -- but alternative ways of
        # structuring this function are equally painful (I'm trying to
        # avoid writing code generators these days...)
        minlen = min(wlen(oldline), wlen(newline))
        x_pos = 0
        x_coord = 0

        px_pos = 0
        j = 0
        for c in oldline:
            if j >= px_coord:
                break
            j += wlen(c)
            px_pos += 1

        # reuse the oldline as much as possible, but stop as soon as we
        # encounter an ESCAPE, because it might be the start of an escape
        # sequence
        while (
            x_coord < minlen
            and oldline[x_pos] == newline[x_pos]
            and newline[x_pos] != "\x1b"
        ):
            x_coord += wlen(newline[x_pos])
            x_pos += 1

        self._hide_cursor()
        self._move_relative(x_coord, y)
        if wlen(oldline) > wlen(newline):
            self._erase_to_end()

        self.__write(newline[x_pos:])
        if wlen(newline) == self.width:
            # If we wrapped we want to start at the next line
            self._move_relative(0, y + 1)
            self.posxy = 0, y + 1
        else:
            self.posxy = wlen(newline), y

        if "\x1b" in newline or y != self.posxy[1] or '\x1a' in newline:
            # ANSI escape characters are present, so we can't assume
            # anything about the position of the cursor. Moving the cursor
            # to the left margin should work to get to a known position.
            self.move_cursor(0, y)

    def _scroll(
        self, top: int, bottom: int, left: int | None = None, right: int | None = None
    ) -> None:
        scroll_rect = SMALL_RECT()
        scroll_rect.Top = SHORT(top)
        scroll_rect.Bottom = SHORT(bottom)
        scroll_rect.Left = SHORT(0 if left is None else left)
        scroll_rect.Right = SHORT(
            self.getheightwidth()[1] - 1 if right is None else right
        )
        destination_origin = _COORD()
        fill_info = CHAR_INFO()
        fill_info.UnicodeChar = " "

        if not ScrollConsoleScreenBuffer(
            OutHandle, scroll_rect, None, destination_origin, fill_info
        ):
            raise WinError(GetLastError())

    def _hide_cursor(self):
        self.__write("\x1b[?25l")

    def _show_cursor(self):
        self.__write("\x1b[?25h")

    def _enable_blinking(self):
        self.__write("\x1b[?12h")

    def _disable_blinking(self):
        self.__write("\x1b[?12l")

    def _enable_bracketed_paste(self) -> None:
        self.__write("\x1b[?2004h")

    def _disable_bracketed_paste(self) -> None:
        self.__write("\x1b[?2004l")

    def __write(self, text: str) -> None:
        if "\x1a" in text:
            text = ''.join(["^Z" if x == '\x1a' else x for x in text])

        if self.out is not None:
            self.out.write(text.encode(self.encoding, "replace"))
            self.out.flush()
        else:
            os.write(self.output_fd, text.encode(self.encoding, "replace"))

    @property
    def screen_xy(self) -> tuple[int, int]:
        info = CONSOLE_SCREEN_BUFFER_INFO()
        if not GetConsoleScreenBufferInfo(OutHandle, info):
            raise WinError(GetLastError())
        return info.dwCursorPosition.X, info.dwCursorPosition.Y

    def _erase_to_end(self) -> None:
        self.__write(ERASE_IN_LINE)

    def prepare(self) -> None:
        trace("prepare")
        self.screen = []
        self.height, self.width = self.getheightwidth()

        self.posxy = 0, 0
        self.__gone_tall = 0
        self.__offset = 0

        if self.__vt_support:
            SetConsoleMode(InHandle, self.__original_input_mode | ENABLE_VIRTUAL_TERMINAL_INPUT)
            self._enable_bracketed_paste()

    def restore(self) -> None:
        if self.__vt_support:
            # Recover to original mode before running REPL
            self._disable_bracketed_paste()
            SetConsoleMode(InHandle, self.__original_input_mode)

    def _move_relative(self, x: int, y: int) -> None:
        """Moves relative to the current posxy"""
        dx = x - self.posxy[0]
        dy = y - self.posxy[1]
        if dx < 0:
            self.__write(MOVE_LEFT.format(-dx))
        elif dx > 0:
            self.__write(MOVE_RIGHT.format(dx))

        if dy < 0:
            self.__write(MOVE_UP.format(-dy))
        elif dy > 0:
            self.__write(MOVE_DOWN.format(dy))

    def move_cursor(self, x: int, y: int) -> None:
        if x < 0 or y < 0:
            raise ValueError(f"Bad cursor position {x}, {y}")

        if y < self.__offset or y >= self.__offset + self.height:
            self.event_queue.insert(Event("scroll", ""))
        else:
            self._move_relative(x, y)
            self.posxy = x, y

    def set_cursor_vis(self, visible: bool) -> None:
        if visible:
            self._show_cursor()
        else:
            self._hide_cursor()

    def getheightwidth(self) -> tuple[int, int]:
        """Return (height, width) where height and width are the height
        and width of the terminal window in characters."""
        info = CONSOLE_SCREEN_BUFFER_INFO()
        if not GetConsoleScreenBufferInfo(OutHandle, info):
            raise WinError(GetLastError())
        return (
            info.srWindow.Bottom - info.srWindow.Top + 1,
            info.srWindow.Right - info.srWindow.Left + 1,
        )

    def _getscrollbacksize(self) -> int:
        info = CONSOLE_SCREEN_BUFFER_INFO()
        if not GetConsoleScreenBufferInfo(OutHandle, info):
            raise WinError(GetLastError())

        return info.srWindow.Bottom  # type: ignore[no-any-return]

    def _read_input(self, block: bool = True) -> INPUT_RECORD | None:
        if not block:
            events = DWORD()
            if not GetNumberOfConsoleInputEvents(InHandle, events):
                raise WinError(GetLastError())
            if not events.value:
                return None

        rec = INPUT_RECORD()
        read = DWORD()
        if not ReadConsoleInput(InHandle, rec, 1, read):
            raise WinError(GetLastError())

        return rec

    def get_event(self, block: bool = True) -> Event | None:
        """Return an Event instance. Returns None if |block| is false
        and there is no event pending, otherwise waits for the
        completion of an event."""

        while self.event_queue.empty():
            rec = self._read_input(block)
            if rec is None:
                return None

            if rec.EventType == WINDOW_BUFFER_SIZE_EVENT:
                return Event("resize", "")

            if rec.EventType != KEY_EVENT or not rec.Event.KeyEvent.bKeyDown:
                # Only process keys and keydown events
                if block:
                    continue
                return None

            key_event = rec.Event.KeyEvent
            raw_key = key = key_event.uChar.UnicodeChar

            if key == "\r":
                # Make enter unix-like
                return Event(evt="key", data="\n")
            elif key_event.wVirtualKeyCode == 8:
                # Turn backspace directly into the command
                key = "backspace"
            elif key == "\x00":
                # Handle special keys like arrow keys and translate them into the appropriate command
                key = VK_MAP.get(key_event.wVirtualKeyCode)
                if key:
                    if key_event.dwControlKeyState & CTRL_ACTIVE:
                        key = f"ctrl {key}"
                    elif key_event.dwControlKeyState & ALT_ACTIVE:
                        # queue the key, return the meta command
                        self.event_queue.insert(Event(evt="key", data=key))
                        return Event(evt="key", data="\033")  # keymap.py uses this for meta
                    return Event(evt="key", data=key)
                if block:
                    continue

                return None
            elif self.__vt_support:
                # If virtual terminal is enabled, scanning VT sequences
                for char in raw_key.encode(self.event_queue.encoding, "replace"):
                    self.event_queue.push(char)
                continue

            if key_event.dwControlKeyState & ALT_ACTIVE:
                # Do not swallow characters that have been entered via AltGr:
                # Windows internally converts AltGr to CTRL+ALT, see
                # https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-vkkeyscanw
                if not key_event.dwControlKeyState & CTRL_ACTIVE:
                    # queue the key, return the meta command
                    self.event_queue.insert(Event(evt="key", data=key))
                    return Event(evt="key", data="\033")  # keymap.py uses this for meta

            return Event(evt="key", data=key)
        return self.event_queue.get()

    def push_char(self, char: int | bytes) -> None:
        """
        Push a character to the console event queue.
        """
        raise NotImplementedError("push_char not supported on Windows")

    def beep(self) -> None:
        self.__write("\x07")

    def clear(self) -> None:
        """Wipe the screen"""
        self.__write(CLEAR)
        self.posxy = 0, 0
        self.screen = [""]

    def finish(self) -> None:
        """Move the cursor to the end of the display and otherwise get
        ready for end. XXX could be merged with restore? Hmm."""
        y = len(self.screen) - 1
        while y >= 0 and not self.screen[y]:
            y -= 1
        self._move_relative(0, min(y, self.height + self.__offset - 1))
        self.__write("\r\n")

    def flushoutput(self) -> None:
        """Flush all output to the screen (assuming there's some
        buffering going on somewhere).

        All output on Windows is unbuffered so this is a nop"""
        pass

    def forgetinput(self) -> None:
        """Forget all pending, but not yet processed input."""
        if not FlushConsoleInputBuffer(InHandle):
            raise WinError(GetLastError())

    def getpending(self) -> Event:
        """Return the characters that have been typed but not yet
        processed."""
        return Event("key", "", b"")

    def wait(self, timeout: float | None) -> bool:
        """Wait for an event. `timeout` is given in milliseconds."""
        # Poor man's Windows select loop
        start_time = time.time()
        while True:
            if msvcrt.kbhit():  # type: ignore[attr-defined]
                return True
            if timeout and time.time() - start_time > timeout / 1000:
                return False
            time.sleep(0.01)

    def repaint(self) -> None:
        raise NotImplementedError("No repaint support")

# Windows interop
class CONSOLE_SCREEN_BUFFER_INFO(Structure):
    _fields_ = [
        ("dwSize", _COORD),
        ("dwCursorPosition", _COORD),
        ("wAttributes", WORD),
        ("srWindow", SMALL_RECT),
        ("dwMaximumWindowSize", _COORD),
    ]


class CONSOLE_CURSOR_INFO(Structure):
    _fields_ = [
        ("dwSize", DWORD),
        ("bVisible", BOOL),
    ]


class CHAR_INFO(Structure):
    _fields_ = [
        ("UnicodeChar", WCHAR),
        ("Attributes", WORD),
    ]


class Char(Union):
    _fields_ = [
        ("UnicodeChar", WCHAR),
        ("Char", CHAR),
    ]


class KeyEvent(ctypes.Structure):
    _fields_ = [
        ("bKeyDown", BOOL),
        ("wRepeatCount", WORD),
        ("wVirtualKeyCode", WORD),
        ("wVirtualScanCode", WORD),
        ("uChar", Char),
        ("dwControlKeyState", DWORD),
    ]


class WindowsBufferSizeEvent(ctypes.Structure):
    _fields_ = [("dwSize", _COORD)]


class ConsoleEvent(ctypes.Union):
    _fields_ = [
        ("KeyEvent", KeyEvent),
        ("WindowsBufferSizeEvent", WindowsBufferSizeEvent),
    ]


class INPUT_RECORD(Structure):
    _fields_ = [("EventType", WORD), ("Event", ConsoleEvent)]


KEY_EVENT = 0x01
FOCUS_EVENT = 0x10
MENU_EVENT = 0x08
MOUSE_EVENT = 0x02
WINDOW_BUFFER_SIZE_EVENT = 0x04

ENABLE_PROCESSED_INPUT = 0x0001
ENABLE_LINE_INPUT = 0x0002
ENABLE_ECHO_INPUT = 0x0004
ENABLE_MOUSE_INPUT = 0x0010
ENABLE_INSERT_MODE = 0x0020
ENABLE_VIRTUAL_TERMINAL_INPUT = 0x0200

ENABLE_PROCESSED_OUTPUT = 0x01
ENABLE_WRAP_AT_EOL_OUTPUT = 0x02
ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x04

STD_INPUT_HANDLE = -10
STD_OUTPUT_HANDLE = -11

if sys.platform == "win32":
    _KERNEL32 = WinDLL("kernel32", use_last_error=True)

    GetStdHandle = windll.kernel32.GetStdHandle
    GetStdHandle.argtypes = [DWORD]
    GetStdHandle.restype = HANDLE

    GetConsoleScreenBufferInfo = _KERNEL32.GetConsoleScreenBufferInfo
    GetConsoleScreenBufferInfo.argtypes = [
        HANDLE,
        ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO),
    ]
    GetConsoleScreenBufferInfo.restype = BOOL

    ScrollConsoleScreenBuffer = _KERNEL32.ScrollConsoleScreenBufferW
    ScrollConsoleScreenBuffer.argtypes = [
        HANDLE,
        POINTER(SMALL_RECT),
        POINTER(SMALL_RECT),
        _COORD,
        POINTER(CHAR_INFO),
    ]
    ScrollConsoleScreenBuffer.restype = BOOL

    GetConsoleMode = _KERNEL32.GetConsoleMode
    GetConsoleMode.argtypes = [HANDLE, POINTER(DWORD)]
    GetConsoleMode.restype = BOOL

    SetConsoleMode = _KERNEL32.SetConsoleMode
    SetConsoleMode.argtypes = [HANDLE, DWORD]
    SetConsoleMode.restype = BOOL

    ReadConsoleInput = _KERNEL32.ReadConsoleInputW
    ReadConsoleInput.argtypes = [HANDLE, POINTER(INPUT_RECORD), DWORD, POINTER(DWORD)]
    ReadConsoleInput.restype = BOOL

    GetNumberOfConsoleInputEvents = _KERNEL32.GetNumberOfConsoleInputEvents
    GetNumberOfConsoleInputEvents.argtypes = [HANDLE, POINTER(DWORD)]
    GetNumberOfConsoleInputEvents.restype = BOOL

    FlushConsoleInputBuffer = _KERNEL32.FlushConsoleInputBuffer
    FlushConsoleInputBuffer.argtypes = [HANDLE]
    FlushConsoleInputBuffer.restype = BOOL

    OutHandle = GetStdHandle(STD_OUTPUT_HANDLE)
    InHandle = GetStdHandle(STD_INPUT_HANDLE)
else:

    def _win_only(*args, **kwargs):
        raise NotImplementedError("Windows only")

    GetStdHandle = _win_only
    GetConsoleScreenBufferInfo = _win_only
    ScrollConsoleScreenBuffer = _win_only
    GetConsoleMode = _win_only
    SetConsoleMode = _win_only
    ReadConsoleInput = _win_only
    GetNumberOfConsoleInputEvents = _win_only
    FlushConsoleInputBuffer = _win_only
    OutHandle = 0
    InHandle = 0
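The key translation in get_event above turns on bit tests against dwControlKeyState. A minimal, platform-independent sketch of that decision logic — the CTRL_ACTIVE/ALT_ACTIVE values below are assumptions mirroring the Win32 LEFT/RIGHT *_PRESSED flags, not values taken from this file:

```python
# Hypothetical flag values for illustration: LEFT|RIGHT ALT pressed and
# LEFT|RIGHT CTRL pressed bits of a Win32 KEY_EVENT_RECORD control state.
ALT_ACTIVE = 0x01 | 0x02
CTRL_ACTIVE = 0x04 | 0x08

def classify(key: str, control_state: int) -> str:
    """Mirror the ctrl/alt branching used when translating a key event."""
    if control_state & CTRL_ACTIVE:
        return f"ctrl {key}"
    if control_state & ALT_ACTIVE:
        # ALT alone is reported as a meta prefix (ESC) followed by the key
        return f"meta {key}"
    return key

print(classify("left", 0x08))  # → ctrl left
```

In the real method the meta case is expressed by queuing the key and returning an ESC event rather than returning a combined name; the sketch only shows the flag arithmetic.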
42
Utils/PythonNew32/Lib/_pyrepl/windows_eventqueue.py
Normal file
@@ -0,0 +1,42 @@
"""
Windows event and VT sequence scanner
"""

from .base_eventqueue import BaseEventQueue


# Reference: https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#input-sequences
VT_MAP: dict[bytes, str] = {
    b'\x1b[A': 'up',
    b'\x1b[B': 'down',
    b'\x1b[C': 'right',
    b'\x1b[D': 'left',
    b'\x1b[1;5D': 'ctrl left',
    b'\x1b[1;5C': 'ctrl right',

    b'\x1b[H': 'home',
    b'\x1b[F': 'end',

    b'\x7f': 'backspace',
    b'\x1b[2~': 'insert',
    b'\x1b[3~': 'delete',
    b'\x1b[5~': 'page up',
    b'\x1b[6~': 'page down',

    b'\x1bOP': 'f1',
    b'\x1bOQ': 'f2',
    b'\x1bOR': 'f3',
    b'\x1bOS': 'f4',
    b'\x1b[15~': 'f5',
    b'\x1b[17~': 'f6',
    b'\x1b[18~': 'f7',
    b'\x1b[19~': 'f8',
    b'\x1b[20~': 'f9',
    b'\x1b[21~': 'f10',
    b'\x1b[23~': 'f11',
    b'\x1b[24~': 'f12',
}

class EventQueue(BaseEventQueue):
    def __init__(self, encoding: str) -> None:
        BaseEventQueue.__init__(self, encoding, VT_MAP)
103
Utils/PythonNew32/Lib/_sitebuiltins.py
Normal file
@@ -0,0 +1,103 @@
"""
The objects used by the site module to add custom builtins.
"""

# Those objects are almost immortal and they keep a reference to their module
# globals.  Defining them in the site module would keep too many references
# alive.
# Note this means this module should also avoid keep things alive in its
# globals.

import sys

class Quitter(object):
    def __init__(self, name, eof):
        self.name = name
        self.eof = eof
    def __repr__(self):
        return 'Use %s() or %s to exit' % (self.name, self.eof)
    def __call__(self, code=None):
        # Shells like IDLE catch the SystemExit, but listen when their
        # stdin wrapper is closed.
        try:
            sys.stdin.close()
        except:
            pass
        raise SystemExit(code)


class _Printer(object):
    """interactive prompt objects for printing the license text, a list of
    contributors and the copyright notice."""

    MAXLINES = 23

    def __init__(self, name, data, files=(), dirs=()):
        import os
        self.__name = name
        self.__data = data
        self.__lines = None
        self.__filenames = [os.path.join(dir, filename)
                            for dir in dirs
                            for filename in files]

    def __setup(self):
        if self.__lines:
            return
        data = None
        for filename in self.__filenames:
            try:
                with open(filename, encoding='utf-8') as fp:
                    data = fp.read()
                break
            except OSError:
                pass
        if not data:
            data = self.__data
        self.__lines = data.split('\n')
        self.__linecnt = len(self.__lines)

    def __repr__(self):
        self.__setup()
        if len(self.__lines) <= self.MAXLINES:
            return "\n".join(self.__lines)
        else:
            return "Type %s() to see the full %s text" % ((self.__name,)*2)

    def __call__(self):
        self.__setup()
        prompt = 'Hit Return for more, or q (and Return) to quit: '
        lineno = 0
        while 1:
            try:
                for i in range(lineno, lineno + self.MAXLINES):
                    print(self.__lines[i])
            except IndexError:
                break
            else:
                lineno += self.MAXLINES
                key = None
                while key is None:
                    key = input(prompt)
                    if key not in ('', 'q'):
                        key = None
                if key == 'q':
                    break


class _Helper(object):
    """Define the builtin 'help'.

    This is a wrapper around pydoc.help that provides a helpful message
    when 'help' is typed at the Python interactive prompt.

    Calling help() at the Python prompt starts an interactive help session.
    Calling help(thing) prints help for the python object 'thing'.
    """

    def __repr__(self):
        return "Type help() for interactive help, " \
               "or help(object) for help about object."
    def __call__(self, *args, **kwds):
        import pydoc
        return pydoc.help(*args, **kwds)
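These classes only become the familiar interactive builtins once site.py instantiates them and binds them on the builtins module. A sketch of that wiring, with Quitter re-stated so the snippet runs on its own — the eof strings are assumptions about the platform-dependent values, not taken from this commit:

```python
import builtins
import sys

class Quitter:
    # Minimal re-statement of the Quitter class above for a self-contained demo.
    def __init__(self, name, eof):
        self.name, self.eof = name, eof
    def __repr__(self):
        return 'Use %s() or %s to exit' % (self.name, self.eof)
    def __call__(self, code=None):
        try:
            sys.stdin.close()
        except Exception:
            pass
        raise SystemExit(code)

# Assumed platform-dependent hint strings, mirroring what site.py chooses.
eof = 'Ctrl-Z plus Return' if sys.platform == 'win32' else 'Ctrl-D (i.e. EOF)'
builtins.quit = Quitter('quit', eof)
builtins.exit = Quitter('exit', eof)

print(repr(builtins.quit))
```

Keeping the classes here instead of in site.py is deliberate: these objects live for the whole session, and defining them in site would pin site's module globals alive with them.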
684
Utils/PythonNew32/Lib/_strptime.py
Normal file
@@ -0,0 +1,684 @@
"""Strptime-related classes and functions.

CLASSES:
    LocaleTime -- Discovers and stores locale-specific time information
    TimeRE -- Creates regexes for pattern matching a string of text containing
                time information

FUNCTIONS:
    _getlang -- Figure out what language is being used for the locale
    strptime -- Calculates the time struct represented by the passed-in string

"""
import os
import time
import locale
import calendar
from re import compile as re_compile
from re import sub as re_sub
from re import IGNORECASE
from re import escape as re_escape
from datetime import (date as datetime_date,
                      timedelta as datetime_timedelta,
                      timezone as datetime_timezone)
from _thread import allocate_lock as _thread_allocate_lock

__all__ = []

def _getlang():
    # Figure out what the current language is set to.
    return locale.getlocale(locale.LC_TIME)

def _findall(haystack, needle):
    # Find all positions of needle in haystack.
    if not needle:
        return
    i = 0
    while True:
        i = haystack.find(needle, i)
        if i < 0:
            break
        yield i
        i += len(needle)

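_findall yields the start index of each non-overlapping occurrence, advancing past the whole needle after a hit. A re-statement with a worked call (re-declared so the snippet runs on its own):

```python
def findall(haystack: str, needle: str):
    """Yield start indices of non-overlapping occurrences of needle."""
    if not needle:
        return
    i = 0
    while True:
        i = haystack.find(needle, i)
        if i < 0:
            break
        yield i
        i += len(needle)

print(list(findall("aaaa", "aa")))  # → [0, 2]
```

Note the skip by len(needle): "aaaa" yields [0, 2], not the overlapping [0, 1, 2], which is what the locale name-matching below relies on.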
class LocaleTime(object):
    """Stores and handles locale-specific information related to time.

    ATTRIBUTES:
        f_weekday -- full weekday names (7-item list)
        a_weekday -- abbreviated weekday names (7-item list)
        f_month -- full month names (13-item list; dummy value in [0], which
                    is added by code)
        a_month -- abbreviated month names (13-item list, dummy value in
                    [0], which is added by code)
        am_pm -- AM/PM representation (2-item list)
        LC_date_time -- format string for date/time representation (string)
        LC_date -- format string for date representation (string)
        LC_time -- format string for time representation (string)
        timezone -- daylight- and non-daylight-savings timezone representation
                    (2-item list of sets)
        lang -- Language used by instance (2-item tuple)
    """

    def __init__(self):
        """Set all attributes.

        Order of methods called matters for dependency reasons.

        The locale language is set at the offset and then checked again before
        exiting.  This is to make sure that the attributes were not set with a
        mix of information from more than one locale.  This would most likely
        happen when using threads where one thread calls a locale-dependent
        function while another thread changes the locale while the function in
        the other thread is still running.  Proper coding would call for
        locks to prevent changing the locale while locale-dependent code is
        running.  The check here is done in case someone does not think about
        doing this.

        Only other possible issue is if someone changed the timezone and did
        not call tz.tzset .  That is an issue for the programmer, though,
        since changing the timezone is worthless without that call.

        """
        self.lang = _getlang()
        self.__calc_weekday()
        self.__calc_month()
        self.__calc_am_pm()
        self.__calc_timezone()
        self.__calc_date_time()
        if _getlang() != self.lang:
            raise ValueError("locale changed during initialization")
        if time.tzname != self.tzname or time.daylight != self.daylight:
            raise ValueError("timezone changed during initialization")

    def __calc_weekday(self):
        # Set self.a_weekday and self.f_weekday using the calendar
        # module.
        a_weekday = [calendar.day_abbr[i].lower() for i in range(7)]
        f_weekday = [calendar.day_name[i].lower() for i in range(7)]
        self.a_weekday = a_weekday
        self.f_weekday = f_weekday

    def __calc_month(self):
        # Set self.f_month and self.a_month using the calendar module.
        a_month = [calendar.month_abbr[i].lower() for i in range(13)]
        f_month = [calendar.month_name[i].lower() for i in range(13)]
        self.a_month = a_month
        self.f_month = f_month

    def __calc_am_pm(self):
        # Set self.am_pm by using time.strftime().

        # The magic date (1999,3,17,hour,44,55,2,76,0) is not really that
        # magical; just happened to have used it everywhere else where a
        # static date was needed.
        am_pm = []
        for hour in (1, 22):
            time_tuple = time.struct_time((1999,3,17,hour,44,55,2,76,0))
            # br_FR has AM/PM info (' ',' ').
            am_pm.append(time.strftime("%p", time_tuple).lower().strip())
        self.am_pm = am_pm

    def __calc_date_time(self):
        # Set self.date_time, self.date, & self.time by using
        # time.strftime().

        # Use (1999,3,17,22,44,55,2,76,0) for magic date because the amount of
        # overloaded numbers is minimized.  The order in which searches for
        # values within the format string is very important; it eliminates
        # possible ambiguity for what something represents.
        time_tuple = time.struct_time((1999,3,17,22,44,55,2,76,0))
        time_tuple2 = time.struct_time((1999,1,3,1,1,1,6,3,0))
        replacement_pairs = [
            ('1999', '%Y'), ('99', '%y'), ('22', '%H'),
            ('44', '%M'), ('55', '%S'), ('76', '%j'),
            ('17', '%d'), ('03', '%m'), ('3', '%m'),
            # '3' needed for when no leading zero.
            ('2', '%w'), ('10', '%I'),
            # Non-ASCII digits
            ('\u0661\u0669\u0669\u0669', '%Y'),
            ('\u0669\u0669', '%Oy'),
            ('\u0662\u0662', '%OH'),
            ('\u0664\u0664', '%OM'),
            ('\u0665\u0665', '%OS'),
            ('\u0661\u0667', '%Od'),
            ('\u0660\u0663', '%Om'),
            ('\u0663', '%Om'),
            ('\u0662', '%Ow'),
            ('\u0661\u0660', '%OI'),
        ]
        date_time = []
        for directive in ('%c', '%x', '%X'):
            current_format = time.strftime(directive, time_tuple).lower()
            current_format = current_format.replace('%', '%%')
            # The month and the day of the week formats are treated specially
            # because of a possible ambiguity in some locales where the full
            # and abbreviated names are equal or names of different types
            # are equal.  See doc of __find_month_format for more details.
            lst, fmt = self.__find_weekday_format(directive)
            if lst:
                current_format = current_format.replace(lst[2], fmt, 1)
            lst, fmt = self.__find_month_format(directive)
            if lst:
                current_format = current_format.replace(lst[3], fmt, 1)
            if self.am_pm[1]:
                # Must deal with possible lack of locale info
                # manifesting itself as the empty string (e.g., Swedish's
                # lack of AM/PM info) or a platform returning a tuple of empty
                # strings (e.g., MacOS 9 having timezone as ('','')).
                current_format = current_format.replace(self.am_pm[1], '%p')
            for tz_values in self.timezone:
                for tz in tz_values:
                    if tz:
                        current_format = current_format.replace(tz, "%Z")
            # Transform all non-ASCII digits to digits in range U+0660 to U+0669.
            current_format = re_sub(r'\d(?<![0-9])',
                                    lambda m: chr(0x0660 + int(m[0])),
                                    current_format)
            for old, new in replacement_pairs:
                current_format = current_format.replace(old, new)
            # If %W is used, then Sunday, 2005-01-03 will fall on week 0 since
            # 2005-01-03 occurs before the first Monday of the year.  Otherwise
            # %U is used.
            if '00' in time.strftime(directive, time_tuple2):
                U_W = '%W'
            else:
                U_W = '%U'
            current_format = current_format.replace('11', U_W)
            date_time.append(current_format)
        self.LC_date_time = date_time[0]
        self.LC_date = date_time[1]
        self.LC_time = date_time[2]

    def __find_month_format(self, directive):
        """Find the month format appropriate for the current locale.

        In some locales (for example French and Hebrew), the default month
        used in __calc_date_time has the same name in full and abbreviated
        form.  Also, the month name can by accident match other part of the
        representation: the day of the week name (for example in Morisyen)
        or the month number (for example in Japanese).  Thus, cycle months
        of the year and find all positions that match the month name for
        each month.  If no common positions are found, the representation
        does not use the month name.
        """
        full_indices = abbr_indices = None
        for m in range(1, 13):
            time_tuple = time.struct_time((1999, m, 17, 22, 44, 55, 2, 76, 0))
            datetime = time.strftime(directive, time_tuple).lower()
            indices = set(_findall(datetime, self.f_month[m]))
            if full_indices is None:
                full_indices = indices
            else:
                full_indices &= indices
            indices = set(_findall(datetime, self.a_month[m]))
            if abbr_indices is None:
                abbr_indices = indices
            else:
                abbr_indices &= indices
            if not full_indices and not abbr_indices:
                return None, None
        if full_indices:
            return self.f_month, '%B'
        if abbr_indices:
            return self.a_month, '%b'
        return None, None

    def __find_weekday_format(self, directive):
        """Find the day of the week format appropriate for the current locale.

        Similar to __find_month_format().
        """
        full_indices = abbr_indices = None
        for wd in range(7):
            time_tuple = time.struct_time((1999, 3, 17, 22, 44, 55, wd, 76, 0))
            datetime = time.strftime(directive, time_tuple).lower()
            indices = set(_findall(datetime, self.f_weekday[wd]))
            if full_indices is None:
                full_indices = indices
            else:
                full_indices &= indices
            if self.f_weekday[wd] != self.a_weekday[wd]:
                indices = set(_findall(datetime, self.a_weekday[wd]))
                if abbr_indices is None:
                    abbr_indices = indices
                else:
                    abbr_indices &= indices
            if not full_indices and not abbr_indices:
                return None, None
        if full_indices:
            return self.f_weekday, '%A'
        if abbr_indices:
            return self.a_weekday, '%a'
        return None, None

    def __calc_timezone(self):
        # Set self.timezone by using time.tzname.
        # Do not worry about possibility of time.tzname[0] == time.tzname[1]
        # and time.daylight; handle that in strptime.
        try:
            time.tzset()
        except AttributeError:
            pass
        self.tzname = time.tzname
        self.daylight = time.daylight
        no_saving = frozenset({"utc", "gmt", self.tzname[0].lower()})
        if self.daylight:
            has_saving = frozenset({self.tzname[1].lower()})
        else:
            has_saving = frozenset()
        self.timezone = (no_saving, has_saving)


class TimeRE(dict):
    """Handle conversion from format directives to regexes."""

    def __init__(self, locale_time=None):
        """Create keys/values.

        Order of execution is important for dependency reasons.

        """
        if locale_time:
            self.locale_time = locale_time
        else:
            self.locale_time = LocaleTime()
        base = super()
        mapping = {
            # The " [1-9]" part of the regex is to make %c from ANSI C work
            'd': r"(?P<d>3[0-1]|[1-2]\d|0[1-9]|[1-9]| [1-9])",
            'f': r"(?P<f>[0-9]{1,6})",
            'H': r"(?P<H>2[0-3]|[0-1]\d|\d)",
            'I': r"(?P<I>1[0-2]|0[1-9]|[1-9]| [1-9])",
            'G': r"(?P<G>\d\d\d\d)",
            'j': r"(?P<j>36[0-6]|3[0-5]\d|[1-2]\d\d|0[1-9]\d|00[1-9]|[1-9]\d|0[1-9]|[1-9])",
            'm': r"(?P<m>1[0-2]|0[1-9]|[1-9])",
            'M': r"(?P<M>[0-5]\d|\d)",
            'S': r"(?P<S>6[0-1]|[0-5]\d|\d)",
            'U': r"(?P<U>5[0-3]|[0-4]\d|\d)",
            'w': r"(?P<w>[0-6])",
            'u': r"(?P<u>[1-7])",
            'V': r"(?P<V>5[0-3]|0[1-9]|[1-4]\d|\d)",
            # W is set below by using 'U'
            'y': r"(?P<y>\d\d)",
            'Y': r"(?P<Y>\d\d\d\d)",
            'z': r"(?P<z>[+-]\d\d:?[0-5]\d(:?[0-5]\d(\.\d{1,6})?)?|(?-i:Z))",
            'A': self.__seqToRE(self.locale_time.f_weekday, 'A'),
            'a': self.__seqToRE(self.locale_time.a_weekday, 'a'),
            'B': self.__seqToRE(self.locale_time.f_month[1:], 'B'),
            'b': self.__seqToRE(self.locale_time.a_month[1:], 'b'),
            'p': self.__seqToRE(self.locale_time.am_pm, 'p'),
            'Z': self.__seqToRE((tz for tz_names in self.locale_time.timezone
                                    for tz in tz_names),
                                'Z'),
            '%': '%'}
        for d in 'dmyHIMS':
            mapping['O' + d] = r'(?P<%s>\d\d|\d| \d)' % d
        mapping['Ow'] = r'(?P<w>\d)'
        mapping['W'] = mapping['U'].replace('U', 'W')
        base.__init__(mapping)
        base.__setitem__('X', self.pattern(self.locale_time.LC_time))
        base.__setitem__('x', self.pattern(self.locale_time.LC_date))
        base.__setitem__('c', self.pattern(self.locale_time.LC_date_time))

    def __seqToRE(self, to_convert, directive):
        """Convert a list to a regex string for matching a directive.

        Want possible matching values to be from longest to shortest.  This
        prevents the possibility of a match occurring for a value that is
        also a substring of a larger value that should have matched (e.g.,
        'abc' matching when 'abcdef' should have been the match).

        """
        to_convert = sorted(to_convert, key=len, reverse=True)
        for value in to_convert:
            if value != '':
                break
        else:
            return ''
        regex = '|'.join(re_escape(stuff) for stuff in to_convert)
        regex = '(?P<%s>%s' % (directive, regex)
        return '%s)' % regex

    def pattern(self, format):
        """Return regex pattern for the format string.

        Need to make sure that any characters that might be interpreted as
        regex syntax are escaped.

        """
        # The sub() call escapes all characters that might be misconstrued
        # as regex syntax.  Cannot use re.escape since we have to deal with
        # format directives (%m, etc.).
        format = re_sub(r"([\\.^$*+?\(\){}\[\]|])", r"\\\1", format)
        format = re_sub(r'\s+', r'\\s+', format)
        format = re_sub(r"'", "['\u02bc]", format)  # needed for br_FR
        year_in_format = False
        day_of_month_in_format = False
        def repl(m):
            format_char = m[1]
            match format_char:
                case 'Y' | 'y' | 'G':
                    nonlocal year_in_format
                    year_in_format = True
                case 'd':
                    nonlocal day_of_month_in_format
                    day_of_month_in_format = True
            return self[format_char]
        format = re_sub(r'%([OE]?\\?.?)', repl, format)
        if day_of_month_in_format and not year_in_format:
            import warnings
            warnings.warn("""\
Parsing dates involving a day of month without a year specified is ambiguous
and fails to parse leap day.  The default behavior will change in Python 3.15
to either always raise an exception or to use a different default year (TBD).
To avoid trouble, add a specific year to the input & format.
See https://github.com/python/cpython/issues/70647.""",
                          DeprecationWarning,
                          skip_file_prefixes=(os.path.dirname(__file__),))
        return format

    def compile(self, format):
        """Return a compiled re object for the format string."""
        return re_compile(self.pattern(format), IGNORECASE)

_cache_lock = _thread_allocate_lock()
# DO NOT modify _TimeRE_cache or _regex_cache without acquiring the cache lock
# first!
_TimeRE_cache = TimeRE()
_CACHE_MAX_SIZE = 5 # Max number of regexes stored in _regex_cache
_regex_cache = {}

def _calc_julian_from_U_or_W(year, week_of_year, day_of_week, week_starts_Mon):
    """Calculate the Julian day based on the year, week of the year, and day of
    the week, with week_start_day representing whether the week of the year
    assumes the week starts on Sunday or Monday (6 or 0)."""
    first_weekday = datetime_date(year, 1, 1).weekday()
    # If we are dealing with the %U directive (week starts on Sunday), it's
    # easier to just shift the view to Sunday being the first day of the
    # week.
    if not week_starts_Mon:
        first_weekday = (first_weekday + 1) % 7
        day_of_week = (day_of_week + 1) % 7
    # Need to watch out for a week 0 (when the first day of the year is not
    # the same as that specified by %U or %W).
    week_0_length = (7 - first_weekday) % 7
    if week_of_year == 0:
        return 1 + day_of_week - first_weekday
    else:
        days_to_week = week_0_length + (7 * (week_of_year - 1))
        return 1 + days_to_week + day_of_week


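As a worked check of the week-number arithmetic in _calc_julian_from_U_or_W: 1999-01-01 fell on a Friday (weekday() == 4), so for %W (weeks start Monday) week 0 holds (7 - 4) % 7 = 3 days, and Wednesday (day_of_week 2) of week 11 lands on day 1 + 3 + 70 + 2 = 76 of the year, which is the module's magic date 1999-03-17. A self-contained re-statement of the function for this check:

```python
from datetime import date

def julian_from_week(year: int, week_of_year: int, day_of_week: int,
                     week_starts_Mon: bool) -> int:
    """Re-statement of the %U/%W day-of-year arithmetic for a worked check."""
    first_weekday = date(year, 1, 1).weekday()
    if not week_starts_Mon:
        # For %U, shift the view so Sunday is day 0 of the week
        first_weekday = (first_weekday + 1) % 7
        day_of_week = (day_of_week + 1) % 7
    week_0_length = (7 - first_weekday) % 7
    if week_of_year == 0:
        return 1 + day_of_week - first_weekday
    return 1 + week_0_length + 7 * (week_of_year - 1) + day_of_week

print(julian_from_week(1999, 11, 2, True))  # → 76
assert date(1999, 3, 17).timetuple().tm_yday == 76
```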
def _strptime(data_string, format="%a %b %d %H:%M:%S %Y"):
    """Return a 2-tuple consisting of a time struct and an int containing
    the number of microseconds based on the input string and the
    format string."""

    for index, arg in enumerate([data_string, format]):
        if not isinstance(arg, str):
            msg = "strptime() argument {} must be str, not {}"
            raise TypeError(msg.format(index, type(arg)))

    global _TimeRE_cache, _regex_cache
    with _cache_lock:
        locale_time = _TimeRE_cache.locale_time
        if (_getlang() != locale_time.lang or
            time.tzname != locale_time.tzname or
            time.daylight != locale_time.daylight):
            _TimeRE_cache = TimeRE()
            _regex_cache.clear()
            locale_time = _TimeRE_cache.locale_time
        if len(_regex_cache) > _CACHE_MAX_SIZE:
            _regex_cache.clear()
        format_regex = _regex_cache.get(format)
        if not format_regex:
            try:
                format_regex = _TimeRE_cache.compile(format)
            # KeyError raised when a bad format is found; can be specified as
            # \\, in which case it was a stray % but with a space after it
            except KeyError as err:
                bad_directive = err.args[0]
                del err
                bad_directive = bad_directive.replace('\\s', '')
                if not bad_directive:
                    raise ValueError("stray %% in format '%s'" % format) from None
                bad_directive = bad_directive.replace('\\', '', 1)
                raise ValueError("'%s' is a bad directive in format '%s'" %
                                 (bad_directive, format)) from None
            _regex_cache[format] = format_regex
    found = format_regex.match(data_string)
    if not found:
        raise ValueError("time data %r does not match format %r" %
                         (data_string, format))
    if len(data_string) != found.end():
        raise ValueError("unconverted data remains: %s" %
                         data_string[found.end():])

    iso_year = year = None
    month = day = 1
    hour = minute = second = fraction = 0
    tz = -1
    gmtoff = None
    gmtoff_fraction = 0
    iso_week = week_of_year = None
    week_of_year_start = None
    # weekday and julian defaulted to None so as to signal need to calculate
    # values
    weekday = julian = None
    found_dict = found.groupdict()
    for group_key in found_dict.keys():
        # Directives not explicitly handled below:
        #   c, x, X
        #      handled by making out of other directives
        #   U, W
        #      worthless without day of the week
        if group_key == 'y':
            year = int(found_dict['y'])
            # Open Group specification for strptime() states that a %y
            # value in the range of [00, 68] is in the century 2000, while
            # [69,99] is in the century 1900
            if year <= 68:
                year += 2000
            else:
                year += 1900
        elif group_key == 'Y':
            year = int(found_dict['Y'])
        elif group_key == 'G':
            iso_year = int(found_dict['G'])
        elif group_key == 'm':
            month = int(found_dict['m'])
        elif group_key == 'B':
            month = locale_time.f_month.index(found_dict['B'].lower())
        elif group_key == 'b':
            month = locale_time.a_month.index(found_dict['b'].lower())
        elif group_key == 'd':
            day = int(found_dict['d'])
        elif group_key == 'H':
            hour = int(found_dict['H'])
        elif group_key == 'I':
            hour = int(found_dict['I'])
            ampm = found_dict.get('p', '').lower()
            # If there was no AM/PM indicator, we'll treat this like AM
            if ampm in ('', locale_time.am_pm[0]):
                # We're in AM so the hour is correct unless we're
                # looking at 12 midnight.
                # 12 midnight == 12 AM == hour 0
                if hour == 12:
                    hour = 0
            elif ampm == locale_time.am_pm[1]:
                # We're in PM so we need to add 12 to the hour unless
                # we're looking at 12 noon.
                # 12 noon == 12 PM == hour 12
                if hour != 12:
                    hour += 12
        elif group_key == 'M':
            minute = int(found_dict['M'])
        elif group_key == 'S':
            second = int(found_dict['S'])
        elif group_key == 'f':
            s = found_dict['f']
            # Pad to always return microseconds.
            s += "0" * (6 - len(s))
            fraction = int(s)
        elif group_key == 'A':
            weekday = locale_time.f_weekday.index(found_dict['A'].lower())
        elif group_key == 'a':
            weekday = locale_time.a_weekday.index(found_dict['a'].lower())
        elif group_key == 'w':
            weekday = int(found_dict['w'])
            if weekday == 0:
                weekday = 6
            else:
                weekday -= 1
        elif group_key == 'u':
            weekday = int(found_dict['u'])
            weekday -= 1
        elif group_key == 'j':
            julian = int(found_dict['j'])
        elif group_key in ('U', 'W'):
            week_of_year = int(found_dict[group_key])
            if group_key == 'U':
                # U starts week on Sunday.
                week_of_year_start = 6
            else:
                # W starts week on Monday.
                week_of_year_start = 0
        elif group_key == 'V':
            iso_week = int(found_dict['V'])
        elif group_key == 'z':
            z = found_dict['z']
            if z == 'Z':
                gmtoff = 0
            else:
                if z[3] == ':':
                    z = z[:3] + z[4:]
                    if len(z) > 5:
                        if z[5] != ':':
                            msg = f"Inconsistent use of : in {found_dict['z']}"
                            raise ValueError(msg)
                        z = z[:5] + z[6:]
                hours = int(z[1:3])
                minutes = int(z[3:5])
                seconds = int(z[5:7] or 0)
                gmtoff = (hours * 60 * 60) + (minutes * 60) + seconds
                gmtoff_remainder = z[8:]
                # Pad to always return microseconds.
                gmtoff_remainder_padding = "0" * (6 - len(gmtoff_remainder))
                gmtoff_fraction = int(gmtoff_remainder + gmtoff_remainder_padding)
                if z.startswith("-"):
                    gmtoff = -gmtoff
                    gmtoff_fraction = -gmtoff_fraction
        elif group_key == 'Z':
            # Since -1 is default value only need to worry about setting tz if
            # it can be something other than -1.
            found_zone = found_dict['Z'].lower()
            for value, tz_values in enumerate(locale_time.timezone):
                if found_zone in tz_values:
                    # Deal with bad locale setup where timezone names are the
                    # same and yet time.daylight is true; too ambiguous to
                    # be able to tell what timezone has daylight savings
                    if (time.tzname[0] == time.tzname[1] and
                        time.daylight and found_zone not in ("utc", "gmt")):
                        break
                    else:
                        tz = value
                        break

    # Deal with the cases where ambiguities arise
    # don't assume default values for ISO week/year
    if iso_year is not None:
        if julian is not None:
            raise ValueError("Day of the year directive '%j' is not "
|
||||
"compatible with ISO year directive '%G'. "
|
||||
"Use '%Y' instead.")
|
||||
elif iso_week is None or weekday is None:
|
||||
raise ValueError("ISO year directive '%G' must be used with "
|
||||
"the ISO week directive '%V' and a weekday "
|
||||
"directive ('%A', '%a', '%w', or '%u').")
|
||||
elif iso_week is not None:
|
||||
if year is None or weekday is None:
|
||||
raise ValueError("ISO week directive '%V' must be used with "
|
||||
"the ISO year directive '%G' and a weekday "
|
||||
"directive ('%A', '%a', '%w', or '%u').")
|
||||
else:
|
||||
raise ValueError("ISO week directive '%V' is incompatible with "
|
||||
"the year directive '%Y'. Use the ISO year '%G' "
|
||||
"instead.")
|
||||
|
||||
leap_year_fix = False
|
||||
if year is None:
|
||||
if month == 2 and day == 29:
|
||||
year = 1904 # 1904 is first leap year of 20th century
|
||||
leap_year_fix = True
|
||||
else:
|
||||
year = 1900
|
||||
|
||||
# If we know the week of the year and what day of that week, we can figure
|
||||
# out the Julian day of the year.
|
||||
if julian is None and weekday is not None:
|
||||
if week_of_year is not None:
|
||||
week_starts_Mon = True if week_of_year_start == 0 else False
|
||||
julian = _calc_julian_from_U_or_W(year, week_of_year, weekday,
|
||||
week_starts_Mon)
|
||||
elif iso_year is not None and iso_week is not None:
|
||||
datetime_result = datetime_date.fromisocalendar(iso_year, iso_week, weekday + 1)
|
||||
year = datetime_result.year
|
||||
month = datetime_result.month
|
||||
day = datetime_result.day
|
||||
if julian is not None and julian <= 0:
|
||||
year -= 1
|
||||
yday = 366 if calendar.isleap(year) else 365
|
||||
julian += yday
|
||||
|
||||
if julian is None:
|
||||
# Cannot pre-calculate datetime_date() since can change in Julian
|
||||
# calculation and thus could have different value for the day of
|
||||
# the week calculation.
|
||||
# Need to add 1 to result since first day of the year is 1, not 0.
|
||||
julian = datetime_date(year, month, day).toordinal() - \
|
||||
datetime_date(year, 1, 1).toordinal() + 1
|
||||
else: # Assume that if they bothered to include Julian day (or if it was
|
||||
# calculated above with year/week/weekday) it will be accurate.
|
||||
datetime_result = datetime_date.fromordinal(
|
||||
(julian - 1) +
|
||||
datetime_date(year, 1, 1).toordinal())
|
||||
year = datetime_result.year
|
||||
month = datetime_result.month
|
||||
day = datetime_result.day
|
||||
if weekday is None:
|
||||
weekday = datetime_date(year, month, day).weekday()
|
||||
# Add timezone info
|
||||
tzname = found_dict.get("Z")
|
||||
|
||||
if leap_year_fix:
|
||||
# the caller didn't supply a year but asked for Feb 29th. We couldn't
|
||||
# use the default of 1900 for computations. We set it back to ensure
|
||||
# that February 29th is smaller than March 1st.
|
||||
year = 1900
|
||||
|
||||
return (year, month, day,
|
||||
hour, minute, second,
|
||||
weekday, julian, tz, tzname, gmtoff), fraction, gmtoff_fraction
|
||||
|
||||
def _strptime_time(data_string, format="%a %b %d %H:%M:%S %Y"):
|
||||
"""Return a time struct based on the input string and the
|
||||
format string."""
|
||||
tt = _strptime(data_string, format)[0]
|
||||
return time.struct_time(tt[:time._STRUCT_TM_ITEMS])
|
||||
|
||||
def _strptime_datetime(cls, data_string, format="%a %b %d %H:%M:%S %Y"):
|
||||
"""Return a class cls instance based on the input string and the
|
||||
format string."""
|
||||
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
|
||||
tzname, gmtoff = tt[-2:]
|
||||
args = tt[:6] + (fraction,)
|
||||
if gmtoff is not None:
|
||||
tzdelta = datetime_timedelta(seconds=gmtoff, microseconds=gmtoff_fraction)
|
||||
if tzname:
|
||||
tz = datetime_timezone(tzdelta, tzname)
|
||||
else:
|
||||
tz = datetime_timezone(tzdelta)
|
||||
args += (tz,)
|
||||
|
||||
return cls(*args)
|
||||
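As a quick sanity check of the `%y` century rule and the Feb-29 `leap_year_fix` handled in the code above, the behavior is observable through the public `time.strptime` wrapper (a standalone sketch, not part of the diff):

```python
import time

# POSIX rule implemented above: %y values 00-68 map to 20xx, 69-99 to 19xx
assert time.strptime("68", "%y").tm_year == 2068
assert time.strptime("69", "%y").tm_year == 1969

# Feb 29 with no year parses: the default year is temporarily swapped to a
# leap year (1904) so the day is valid, then reset to 1900 for the result
parsed = time.strptime("02-29", "%m-%d")
assert (parsed.tm_mon, parsed.tm_mday) == (2, 29)
```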
120
Utils/PythonNew32/Lib/_threading_local.py
Normal file
@@ -0,0 +1,120 @@
"""Thread-local objects.

(Note that this module provides a Python version of the threading.local
class.  Depending on the version of Python you're using, there may be a
faster one available.  You should always import the `local` class from
`threading`.)
"""

from weakref import ref
from contextlib import contextmanager

__all__ = ["local"]

# We need to use objects from the threading module, but the threading
# module may also want to use our `local` class, if support for locals
# isn't compiled in to the `thread` module.  This creates potential problems
# with circular imports.  For that reason, we don't import `threading`
# until the bottom of this file (a hack sufficient to worm around the
# potential problems).  Note that all platforms on CPython do have support
# for locals in the `thread` module, and there is no circular import problem
# then, so problems introduced by fiddling the order of imports here won't
# manifest.

class _localimpl:
    """A class managing thread-local dicts"""
    __slots__ = 'key', 'dicts', 'localargs', 'locallock', '__weakref__'

    def __init__(self):
        # The key used in the Thread objects' attribute dicts.
        # We keep it a string for speed but make it unlikely to clash with
        # a "real" attribute.
        self.key = '_threading_local._localimpl.' + str(id(self))
        # { id(Thread) -> (ref(Thread), thread-local dict) }
        self.dicts = {}

    def get_dict(self):
        """Return the dict for the current thread. Raises KeyError if none
        defined."""
        thread = current_thread()
        return self.dicts[id(thread)][1]

    def create_dict(self):
        """Create a new dict for the current thread, and return it."""
        localdict = {}
        key = self.key
        thread = current_thread()
        idt = id(thread)
        def local_deleted(_, key=key):
            # When the localimpl is deleted, remove the thread attribute.
            thread = wrthread()
            if thread is not None:
                del thread.__dict__[key]
        def thread_deleted(_, idt=idt):
            # When the thread is deleted, remove the local dict.
            # Note that this is suboptimal if the thread object gets
            # caught in a reference loop. We would like to be called
            # as soon as the OS-level thread ends instead.
            local = wrlocal()
            if local is not None:
                dct = local.dicts.pop(idt)
        wrlocal = ref(self, local_deleted)
        wrthread = ref(thread, thread_deleted)
        thread.__dict__[key] = wrlocal
        self.dicts[idt] = wrthread, localdict
        return localdict


@contextmanager
def _patch(self):
    impl = object.__getattribute__(self, '_local__impl')
    try:
        dct = impl.get_dict()
    except KeyError:
        dct = impl.create_dict()
        args, kw = impl.localargs
        self.__init__(*args, **kw)
    with impl.locallock:
        object.__setattr__(self, '__dict__', dct)
        yield


class local:
    __slots__ = '_local__impl', '__dict__'

    def __new__(cls, /, *args, **kw):
        if (args or kw) and (cls.__init__ is object.__init__):
            raise TypeError("Initialization arguments are not supported")
        self = object.__new__(cls)
        impl = _localimpl()
        impl.localargs = (args, kw)
        impl.locallock = RLock()
        object.__setattr__(self, '_local__impl', impl)
        # We need to create the thread dict in anticipation of
        # __init__ being called, to make sure we don't call it
        # again ourselves.
        impl.create_dict()
        return self

    def __getattribute__(self, name):
        with _patch(self):
            return object.__getattribute__(self, name)

    def __setattr__(self, name, value):
        if name == '__dict__':
            raise AttributeError(
                "%r object attribute '__dict__' is read-only"
                % self.__class__.__name__)
        with _patch(self):
            return object.__setattr__(self, name, value)

    def __delattr__(self, name):
        if name == '__dict__':
            raise AttributeError(
                "%r object attribute '__dict__' is read-only"
                % self.__class__.__name__)
        with _patch(self):
            return object.__delattr__(self, name)


from threading import current_thread, RLock
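The per-thread isolation that `_threading_local.local` implements (each thread's attribute writes go to its own dict, swapped into `__dict__` via `_patch`) can be demonstrated with the public `threading.local` (a standalone sketch, not part of the diff):

```python
import threading

store = threading.local()
store.value = "main"
results = {}

def worker(name):
    # A new thread starts with an empty namespace on the local object
    results[name + "_had_value"] = hasattr(store, "value")
    store.value = name
    results[name + "_value"] = store.value

t = threading.Thread(target=worker, args=("t1",))
t.start()
t.join()

assert results["t1_had_value"] is False   # main thread's attribute not visible
assert results["t1_value"] == "t1"        # the worker saw only its own write
assert store.value == "main"              # main thread's value untouched
```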
205
Utils/PythonNew32/Lib/_weakrefset.py
Normal file
@@ -0,0 +1,205 @@
# Access WeakSet through the weakref module.
# This code is separated-out because it is needed
# by abc.py to load everything else at startup.

from _weakref import ref
from types import GenericAlias

__all__ = ['WeakSet']


class _IterationGuard:
    # This context manager registers itself in the current iterators of the
    # weak container, such as to delay all removals until the context manager
    # exits.
    # This technique should be relatively thread-safe (since sets are).

    def __init__(self, weakcontainer):
        # Don't create cycles
        self.weakcontainer = ref(weakcontainer)

    def __enter__(self):
        w = self.weakcontainer()
        if w is not None:
            w._iterating.add(self)
        return self

    def __exit__(self, e, t, b):
        w = self.weakcontainer()
        if w is not None:
            s = w._iterating
            s.remove(self)
            if not s:
                w._commit_removals()


class WeakSet:
    def __init__(self, data=None):
        self.data = set()
        def _remove(item, selfref=ref(self)):
            self = selfref()
            if self is not None:
                if self._iterating:
                    self._pending_removals.append(item)
                else:
                    self.data.discard(item)
        self._remove = _remove
        # A list of keys to be removed
        self._pending_removals = []
        self._iterating = set()
        if data is not None:
            self.update(data)

    def _commit_removals(self):
        pop = self._pending_removals.pop
        discard = self.data.discard
        while True:
            try:
                item = pop()
            except IndexError:
                return
            discard(item)

    def __iter__(self):
        with _IterationGuard(self):
            for itemref in self.data:
                item = itemref()
                if item is not None:
                    # Caveat: the iterator will keep a strong reference to
                    # `item` until it is resumed or closed.
                    yield item

    def __len__(self):
        return len(self.data) - len(self._pending_removals)

    def __contains__(self, item):
        try:
            wr = ref(item)
        except TypeError:
            return False
        return wr in self.data

    def __reduce__(self):
        return self.__class__, (list(self),), self.__getstate__()

    def add(self, item):
        if self._pending_removals:
            self._commit_removals()
        self.data.add(ref(item, self._remove))

    def clear(self):
        if self._pending_removals:
            self._commit_removals()
        self.data.clear()

    def copy(self):
        return self.__class__(self)

    def pop(self):
        if self._pending_removals:
            self._commit_removals()
        while True:
            try:
                itemref = self.data.pop()
            except KeyError:
                raise KeyError('pop from empty WeakSet') from None
            item = itemref()
            if item is not None:
                return item

    def remove(self, item):
        if self._pending_removals:
            self._commit_removals()
        self.data.remove(ref(item))

    def discard(self, item):
        if self._pending_removals:
            self._commit_removals()
        self.data.discard(ref(item))

    def update(self, other):
        if self._pending_removals:
            self._commit_removals()
        for element in other:
            self.add(element)

    def __ior__(self, other):
        self.update(other)
        return self

    def difference(self, other):
        newset = self.copy()
        newset.difference_update(other)
        return newset
    __sub__ = difference

    def difference_update(self, other):
        self.__isub__(other)
    def __isub__(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.difference_update(ref(item) for item in other)
        return self

    def intersection(self, other):
        return self.__class__(item for item in other if item in self)
    __and__ = intersection

    def intersection_update(self, other):
        self.__iand__(other)
    def __iand__(self, other):
        if self._pending_removals:
            self._commit_removals()
        self.data.intersection_update(ref(item) for item in other)
        return self

    def issubset(self, other):
        return self.data.issubset(ref(item) for item in other)
    __le__ = issubset

    def __lt__(self, other):
        return self.data < set(map(ref, other))

    def issuperset(self, other):
        return self.data.issuperset(ref(item) for item in other)
    __ge__ = issuperset

    def __gt__(self, other):
        return self.data > set(map(ref, other))

    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return self.data == set(map(ref, other))

    def symmetric_difference(self, other):
        newset = self.copy()
        newset.symmetric_difference_update(other)
        return newset
    __xor__ = symmetric_difference

    def symmetric_difference_update(self, other):
        self.__ixor__(other)
    def __ixor__(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.symmetric_difference_update(ref(item, self._remove) for item in other)
        return self

    def union(self, other):
        return self.__class__(e for s in (self, other) for e in s)
    __or__ = union

    def isdisjoint(self, other):
        return len(self.intersection(other)) == 0

    def __repr__(self):
        return repr(self.data)

    __class_getitem__ = classmethod(GenericAlias)
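The key property of the `WeakSet` above — entries vanish once their referent is garbage-collected, via the `ref(item, self._remove)` callback — can be seen through the public `weakref.WeakSet` (a standalone sketch, not part of the diff):

```python
import gc
import weakref

class Token:
    pass

a, b = Token(), Token()
pool = weakref.WeakSet([a, b])
assert len(pool) == 2

del a          # drop the last strong reference...
gc.collect()   # ...and the dead weak ref's callback removes its entry
assert len(pool) == 1
assert b in pool
```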
188
Utils/PythonNew32/Lib/abc.py
Normal file
@@ -0,0 +1,188 @@
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Abstract Base Classes (ABCs) according to PEP 3119."""


def abstractmethod(funcobj):
    """A decorator indicating abstract methods.

    Requires that the metaclass is ABCMeta or derived from it.  A
    class that has a metaclass derived from ABCMeta cannot be
    instantiated unless all of its abstract methods are overridden.
    The abstract methods can be called using any of the normal
    'super' call mechanisms.  abstractmethod() may be used to declare
    abstract methods for properties and descriptors.

    Usage:

        class C(metaclass=ABCMeta):
            @abstractmethod
            def my_abstract_method(self, arg1, arg2, argN):
                ...
    """
    funcobj.__isabstractmethod__ = True
    return funcobj


class abstractclassmethod(classmethod):
    """A decorator indicating abstract classmethods.

    Deprecated, use 'classmethod' with 'abstractmethod' instead:

        class C(ABC):
            @classmethod
            @abstractmethod
            def my_abstract_classmethod(cls, ...):
                ...

    """

    __isabstractmethod__ = True

    def __init__(self, callable):
        callable.__isabstractmethod__ = True
        super().__init__(callable)


class abstractstaticmethod(staticmethod):
    """A decorator indicating abstract staticmethods.

    Deprecated, use 'staticmethod' with 'abstractmethod' instead:

        class C(ABC):
            @staticmethod
            @abstractmethod
            def my_abstract_staticmethod(...):
                ...

    """

    __isabstractmethod__ = True

    def __init__(self, callable):
        callable.__isabstractmethod__ = True
        super().__init__(callable)


class abstractproperty(property):
    """A decorator indicating abstract properties.

    Deprecated, use 'property' with 'abstractmethod' instead:

        class C(ABC):
            @property
            @abstractmethod
            def my_abstract_property(self):
                ...

    """

    __isabstractmethod__ = True


try:
    from _abc import (get_cache_token, _abc_init, _abc_register,
                      _abc_instancecheck, _abc_subclasscheck, _get_dump,
                      _reset_registry, _reset_caches)
except ImportError:
    from _py_abc import ABCMeta, get_cache_token
    ABCMeta.__module__ = 'abc'
else:
    class ABCMeta(type):
        """Metaclass for defining Abstract Base Classes (ABCs).

        Use this metaclass to create an ABC.  An ABC can be subclassed
        directly, and then acts as a mix-in class.  You can also register
        unrelated concrete classes (even built-in classes) and unrelated
        ABCs as 'virtual subclasses' -- these and their descendants will
        be considered subclasses of the registering ABC by the built-in
        issubclass() function, but the registering ABC won't show up in
        their MRO (Method Resolution Order) nor will method
        implementations defined by the registering ABC be callable (not
        even via super()).
        """
        def __new__(mcls, name, bases, namespace, /, **kwargs):
            cls = super().__new__(mcls, name, bases, namespace, **kwargs)
            _abc_init(cls)
            return cls

        def register(cls, subclass):
            """Register a virtual subclass of an ABC.

            Returns the subclass, to allow usage as a class decorator.
            """
            return _abc_register(cls, subclass)

        def __instancecheck__(cls, instance):
            """Override for isinstance(instance, cls)."""
            return _abc_instancecheck(cls, instance)

        def __subclasscheck__(cls, subclass):
            """Override for issubclass(subclass, cls)."""
            return _abc_subclasscheck(cls, subclass)

        def _dump_registry(cls, file=None):
            """Debug helper to print the ABC registry."""
            print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
            print(f"Inv. counter: {get_cache_token()}", file=file)
            (_abc_registry, _abc_cache, _abc_negative_cache,
             _abc_negative_cache_version) = _get_dump(cls)
            print(f"_abc_registry: {_abc_registry!r}", file=file)
            print(f"_abc_cache: {_abc_cache!r}", file=file)
            print(f"_abc_negative_cache: {_abc_negative_cache!r}", file=file)
            print(f"_abc_negative_cache_version: {_abc_negative_cache_version!r}",
                  file=file)

        def _abc_registry_clear(cls):
            """Clear the registry (for debugging or testing)."""
            _reset_registry(cls)

        def _abc_caches_clear(cls):
            """Clear the caches (for debugging or testing)."""
            _reset_caches(cls)


def update_abstractmethods(cls):
    """Recalculate the set of abstract methods of an abstract class.

    If a class has had one of its abstract methods implemented after the
    class was created, the method will not be considered implemented until
    this function is called. Alternatively, if a new abstract method has been
    added to the class, it will only be considered an abstract method of the
    class after this function is called.

    This function should be called before any use is made of the class,
    usually in class decorators that add methods to the subject class.

    Returns cls, to allow usage as a class decorator.

    If cls is not an instance of ABCMeta, does nothing.
    """
    if not hasattr(cls, '__abstractmethods__'):
        # We check for __abstractmethods__ here because cls might by a C
        # implementation or a python implementation (especially during
        # testing), and we want to handle both cases.
        return cls

    abstracts = set()
    # Check the existing abstract methods of the parents, keep only the ones
    # that are not implemented.
    for scls in cls.__bases__:
        for name in getattr(scls, '__abstractmethods__', ()):
            value = getattr(cls, name, None)
            if getattr(value, "__isabstractmethod__", False):
                abstracts.add(name)
    # Also add any other newly added abstract methods.
    for name, value in cls.__dict__.items():
        if getattr(value, "__isabstractmethod__", False):
            abstracts.add(name)
    cls.__abstractmethods__ = frozenset(abstracts)
    return cls


class ABC(metaclass=ABCMeta):
    """Helper class that provides a standard way to create an ABC using
    inheritance.
    """
    __slots__ = ()
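The two central behaviors defined above — instantiation is blocked while abstract methods remain, and `register()` creates virtual subclasses outside the MRO — look like this in use (a standalone sketch, not part of the diff):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

# Abstract class: cannot instantiate until area() is overridden
try:
    Shape()
except TypeError:
    pass

assert Square(3).area() == 9

class Circle:             # unrelated class, not derived from Shape
    pass
Shape.register(Circle)    # virtual subclass via ABCMeta.__subclasscheck__
assert issubclass(Circle, Shape)
assert Shape not in Circle.__mro__   # but Shape never appears in its MRO
```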
17
Utils/PythonNew32/Lib/antigravity.py
Normal file
@@ -0,0 +1,17 @@
import webbrowser
import hashlib

webbrowser.open("https://xkcd.com/353/")

def geohash(latitude, longitude, datedow):
    '''Compute geohash() using the Munroe algorithm.

    >>> geohash(37.421542, -122.085589, b'2005-05-26-10458.68')
    37.857713 -122.544543

    '''
    # https://xkcd.com/426/
    h = hashlib.md5(datedow, usedforsecurity=False).hexdigest()
    p, q = [('%f' % float.fromhex('0.' + x)) for x in (h[:16], h[16:32])]
    print('%d%s %d%s' % (latitude, p[1:], longitude, q[1:]))
2662
Utils/PythonNew32/Lib/argparse.py
Normal file
File diff suppressed because it is too large
1865
Utils/PythonNew32/Lib/ast.py
Normal file
File diff suppressed because it is too large
47
Utils/PythonNew32/Lib/asyncio/__init__.py
Normal file
@@ -0,0 +1,47 @@
"""The asyncio package, tracking PEP 3156."""

# flake8: noqa

import sys

# This relies on each of the submodules having an __all__ variable.
from .base_events import *
from .coroutines import *
from .events import *
from .exceptions import *
from .futures import *
from .locks import *
from .protocols import *
from .runners import *
from .queues import *
from .streams import *
from .subprocess import *
from .tasks import *
from .taskgroups import *
from .timeouts import *
from .threads import *
from .transports import *

__all__ = (base_events.__all__ +
           coroutines.__all__ +
           events.__all__ +
           exceptions.__all__ +
           futures.__all__ +
           locks.__all__ +
           protocols.__all__ +
           runners.__all__ +
           queues.__all__ +
           streams.__all__ +
           subprocess.__all__ +
           tasks.__all__ +
           taskgroups.__all__ +
           threads.__all__ +
           timeouts.__all__ +
           transports.__all__)

if sys.platform == 'win32':  # pragma: no cover
    from .windows_events import *
    __all__ += windows_events.__all__
else:
    from .unix_events import *  # pragma: no cover
    __all__ += unix_events.__all__
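The package `__init__` above re-exports each submodule's `__all__`, so names like `run`, `gather`, and `sleep` are reachable directly on `asyncio` (a standalone usage sketch, not part of the diff):

```python
import asyncio

async def fetch(tag, delay):
    # asyncio.sleep comes from tasks.__all__ via the star-imports above
    await asyncio.sleep(delay)
    return tag

async def main():
    # gather preserves the order of its arguments, not completion order
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0))

result = asyncio.run(main())
assert result == ["a", "b"]
```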
204
Utils/PythonNew32/Lib/asyncio/__main__.py
Normal file
@@ -0,0 +1,204 @@
import ast
import asyncio
import concurrent.futures
import contextvars
import inspect
import os
import site
import sys
import threading
import types
import warnings

from _colorize import can_colorize, ANSIColors  # type: ignore[import-not-found]
from _pyrepl.console import InteractiveColoredConsole

from . import futures


class AsyncIOInteractiveConsole(InteractiveColoredConsole):

    def __init__(self, locals, loop):
        super().__init__(locals, filename="<stdin>")
        self.compile.compiler.flags |= ast.PyCF_ALLOW_TOP_LEVEL_AWAIT

        self.loop = loop
        self.context = contextvars.copy_context()

    def runcode(self, code):
        global return_code
        future = concurrent.futures.Future()

        def callback():
            global return_code
            global repl_future
            global keyboard_interrupted

            repl_future = None
            keyboard_interrupted = False

            func = types.FunctionType(code, self.locals)
            try:
                coro = func()
            except SystemExit as se:
                return_code = se.code
                self.loop.stop()
                return
            except KeyboardInterrupt as ex:
                keyboard_interrupted = True
                future.set_exception(ex)
                return
            except BaseException as ex:
                future.set_exception(ex)
                return

            if not inspect.iscoroutine(coro):
                future.set_result(coro)
                return

            try:
                repl_future = self.loop.create_task(coro, context=self.context)
                futures._chain_future(repl_future, future)
            except BaseException as exc:
                future.set_exception(exc)

        loop.call_soon_threadsafe(callback, context=self.context)

        try:
            return future.result()
        except SystemExit as se:
            return_code = se.code
            self.loop.stop()
            return
        except BaseException:
            if keyboard_interrupted:
                self.write("\nKeyboardInterrupt\n")
            else:
                self.showtraceback()
            return self.STATEMENT_FAILED

class REPLThread(threading.Thread):

    def run(self):
        global return_code

        try:
            banner = (
                f'asyncio REPL {sys.version} on {sys.platform}\n'
                f'Use "await" directly instead of "asyncio.run()".\n'
                f'Type "help", "copyright", "credits" or "license" '
                f'for more information.\n'
            )

            console.write(banner)

            if startup_path := os.getenv("PYTHONSTARTUP"):
                sys.audit("cpython.run_startup", startup_path)

                import tokenize
                with tokenize.open(startup_path) as f:
                    startup_code = compile(f.read(), startup_path, "exec")
                    exec(startup_code, console.locals)

            ps1 = getattr(sys, "ps1", ">>> ")
            if can_colorize() and CAN_USE_PYREPL:
                ps1 = f"{ANSIColors.BOLD_MAGENTA}{ps1}{ANSIColors.RESET}"
            console.write(f"{ps1}import asyncio\n")

            if CAN_USE_PYREPL:
                from _pyrepl.simple_interact import (
                    run_multiline_interactive_console,
                )
                try:
                    run_multiline_interactive_console(console)
                except SystemExit:
                    # expected via the `exit` and `quit` commands
                    pass
                except BaseException:
                    # unexpected issue
                    console.showtraceback()
                    console.write("Internal error, ")
                    return_code = 1
            else:
                console.interact(banner="", exitmsg="")
        finally:
            warnings.filterwarnings(
                'ignore',
                message=r'^coroutine .* was never awaited$',
                category=RuntimeWarning)

            loop.call_soon_threadsafe(loop.stop)

    def interrupt(self) -> None:
        if not CAN_USE_PYREPL:
            return

        from _pyrepl.simple_interact import _get_reader
        r = _get_reader()
        if r.threading_hook is not None:
            r.threading_hook.add("")  # type: ignore


if __name__ == '__main__':
    sys.audit("cpython.run_stdin")

    if os.getenv('PYTHON_BASIC_REPL'):
        CAN_USE_PYREPL = False
    else:
        from _pyrepl.main import CAN_USE_PYREPL

    return_code = 0
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    repl_locals = {'asyncio': asyncio}
    for key in {'__name__', '__package__',
                '__loader__', '__spec__',
                '__builtins__', '__file__'}:
        repl_locals[key] = locals()[key]

    console = AsyncIOInteractiveConsole(repl_locals, loop)

    repl_future = None
    keyboard_interrupted = False

    try:
        import readline  # NoQA
    except ImportError:
        readline = None

    interactive_hook = getattr(sys, "__interactivehook__", None)

    if interactive_hook is not None:
        sys.audit("cpython.run_interactivehook", interactive_hook)
        interactive_hook()

    if interactive_hook is site.register_readline:
        # Fix the completer function to use the interactive console locals
        try:
            import rlcompleter
        except:
            pass
        else:
            if readline is not None:
                completer = rlcompleter.Completer(console.locals)
                readline.set_completer(completer.complete)

    repl_thread = REPLThread(name="Interactive thread")
    repl_thread.daemon = True
    repl_thread.start()

    while True:
        try:
            loop.run_forever()
        except KeyboardInterrupt:
            keyboard_interrupted = True
            if repl_future and not repl_future.done():
                repl_future.cancel()
                repl_thread.interrupt()
            continue
        else:
            break

    console.write('exiting asyncio REPL...\n')
    sys.exit(return_code)
2067
Utils/PythonNew32/Lib/asyncio/base_events.py
Normal file
File diff suppressed because it is too large
Load Diff
67
Utils/PythonNew32/Lib/asyncio/base_futures.py
Normal file
@@ -0,0 +1,67 @@
__all__ = ()

import reprlib

from . import format_helpers

# States for Future.
_PENDING = 'PENDING'
_CANCELLED = 'CANCELLED'
_FINISHED = 'FINISHED'


def isfuture(obj):
    """Check for a Future.

    This returns True when obj is a Future instance or is advertising
    itself as duck-type compatible by setting _asyncio_future_blocking.
    See comment in Future for more details.
    """
    return (hasattr(obj.__class__, '_asyncio_future_blocking') and
            obj._asyncio_future_blocking is not None)


def _format_callbacks(cb):
    """helper function for Future.__repr__"""
    size = len(cb)
    if not size:
        cb = ''

    def format_cb(callback):
        return format_helpers._format_callback_source(callback, ())

    if size == 1:
        cb = format_cb(cb[0][0])
    elif size == 2:
        cb = '{}, {}'.format(format_cb(cb[0][0]), format_cb(cb[1][0]))
    elif size > 2:
        cb = '{}, <{} more>, {}'.format(format_cb(cb[0][0]),
                                        size - 2,
                                        format_cb(cb[-1][0]))
    return f'cb=[{cb}]'


def _future_repr_info(future):
    # (Future) -> str
    """helper function for Future.__repr__"""
    info = [future._state.lower()]
    if future._state == _FINISHED:
        if future._exception is not None:
            info.append(f'exception={future._exception!r}')
        else:
            # use reprlib to limit the length of the output, especially
            # for very long strings
            result = reprlib.repr(future._result)
            info.append(f'result={result}')
    if future._callbacks:
        info.append(_format_callbacks(future._callbacks))
    if future._source_traceback:
        frame = future._source_traceback[-1]
        info.append(f'created at {frame[0]}:{frame[1]}')
    return info


@reprlib.recursive_repr()
def _future_repr(future):
    info = ' '.join(_future_repr_info(future))
    return f'<{future.__class__.__name__} {info}>'
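The duck-typed check in `isfuture()` above also backs the public `asyncio.isfuture()` helper. A minimal standalone sketch (not part of this commit) of the behavior:

```python
import asyncio

# isfuture() accepts real Futures/Tasks and any object that exposes a
# non-None _asyncio_future_blocking attribute, per the check above.
loop = asyncio.new_event_loop()
fut = loop.create_future()
print(asyncio.isfuture(fut))   # True
print(asyncio.isfuture(42))    # False: plain objects lack the marker
loop.close()
```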
308
Utils/PythonNew32/Lib/asyncio/base_subprocess.py
Normal file
@@ -0,0 +1,308 @@
import collections
import subprocess
import warnings
import os
import signal
import sys

from . import protocols
from . import transports
from .log import logger


class BaseSubprocessTransport(transports.SubprocessTransport):

    def __init__(self, loop, protocol, args, shell,
                 stdin, stdout, stderr, bufsize,
                 waiter=None, extra=None, **kwargs):
        super().__init__(extra)
        self._closed = False
        self._protocol = protocol
        self._loop = loop
        self._proc = None
        self._pid = None
        self._returncode = None
        self._exit_waiters = []
        self._pending_calls = collections.deque()
        self._pipes = {}
        self._finished = False

        if stdin == subprocess.PIPE:
            self._pipes[0] = None
        if stdout == subprocess.PIPE:
            self._pipes[1] = None
        if stderr == subprocess.PIPE:
            self._pipes[2] = None

        # Create the child process: set the _proc attribute
        try:
            self._start(args=args, shell=shell, stdin=stdin, stdout=stdout,
                        stderr=stderr, bufsize=bufsize, **kwargs)
        except:
            self.close()
            raise

        self._pid = self._proc.pid
        self._extra['subprocess'] = self._proc

        if self._loop.get_debug():
            if isinstance(args, (bytes, str)):
                program = args
            else:
                program = args[0]
            logger.debug('process %r created: pid %s',
                         program, self._pid)

        self._loop.create_task(self._connect_pipes(waiter))

    def __repr__(self):
        info = [self.__class__.__name__]
        if self._closed:
            info.append('closed')
        if self._pid is not None:
            info.append(f'pid={self._pid}')
        if self._returncode is not None:
            info.append(f'returncode={self._returncode}')
        elif self._pid is not None:
            info.append('running')
        else:
            info.append('not started')

        stdin = self._pipes.get(0)
        if stdin is not None:
            info.append(f'stdin={stdin.pipe}')

        stdout = self._pipes.get(1)
        stderr = self._pipes.get(2)
        if stdout is not None and stderr is stdout:
            info.append(f'stdout=stderr={stdout.pipe}')
        else:
            if stdout is not None:
                info.append(f'stdout={stdout.pipe}')
            if stderr is not None:
                info.append(f'stderr={stderr.pipe}')

        return '<{}>'.format(' '.join(info))

    def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs):
        raise NotImplementedError

    def set_protocol(self, protocol):
        self._protocol = protocol

    def get_protocol(self):
        return self._protocol

    def is_closing(self):
        return self._closed

    def close(self):
        if self._closed:
            return
        self._closed = True

        for proto in self._pipes.values():
            if proto is None:
                continue
            # See gh-114177
            # skip closing the pipe if loop is already closed
            # this can happen e.g. when loop is closed immediately after
            # process is killed
            if self._loop and not self._loop.is_closed():
                proto.pipe.close()

        if (self._proc is not None and
                # has the child process finished?
                self._returncode is None and
                # the child process has finished, but the
                # transport hasn't been notified yet?
                self._proc.poll() is None):

            if self._loop.get_debug():
                logger.warning('Close running child process: kill %r', self)

            try:
                self._proc.kill()
            except (ProcessLookupError, PermissionError):
                # the process may have already exited or may be running setuid
                pass

        # Don't clear the _proc reference yet: _post_init() may still run

    def __del__(self, _warn=warnings.warn):
        if not self._closed:
            _warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
            self.close()

    def get_pid(self):
        return self._pid

    def get_returncode(self):
        return self._returncode

    def get_pipe_transport(self, fd):
        if fd in self._pipes:
            return self._pipes[fd].pipe
        else:
            return None

    def _check_proc(self):
        if self._proc is None:
            raise ProcessLookupError()

    if sys.platform == 'win32':
        def send_signal(self, signal):
            self._check_proc()
            self._proc.send_signal(signal)

        def terminate(self):
            self._check_proc()
            self._proc.terminate()

        def kill(self):
            self._check_proc()
            self._proc.kill()
    else:
        def send_signal(self, signal):
            self._check_proc()
            try:
                os.kill(self._proc.pid, signal)
            except ProcessLookupError:
                pass

        def terminate(self):
            self.send_signal(signal.SIGTERM)

        def kill(self):
            self.send_signal(signal.SIGKILL)

    async def _connect_pipes(self, waiter):
        try:
            proc = self._proc
            loop = self._loop

            if proc.stdin is not None:
                _, pipe = await loop.connect_write_pipe(
                    lambda: WriteSubprocessPipeProto(self, 0),
                    proc.stdin)
                self._pipes[0] = pipe

            if proc.stdout is not None:
                _, pipe = await loop.connect_read_pipe(
                    lambda: ReadSubprocessPipeProto(self, 1),
                    proc.stdout)
                self._pipes[1] = pipe

            if proc.stderr is not None:
                _, pipe = await loop.connect_read_pipe(
                    lambda: ReadSubprocessPipeProto(self, 2),
                    proc.stderr)
                self._pipes[2] = pipe

            assert self._pending_calls is not None

            loop.call_soon(self._protocol.connection_made, self)
            for callback, data in self._pending_calls:
                loop.call_soon(callback, *data)
            self._pending_calls = None
        except (SystemExit, KeyboardInterrupt):
            raise
        except BaseException as exc:
            if waiter is not None and not waiter.cancelled():
                waiter.set_exception(exc)
        else:
            if waiter is not None and not waiter.cancelled():
                waiter.set_result(None)

    def _call(self, cb, *data):
        if self._pending_calls is not None:
            self._pending_calls.append((cb, data))
        else:
            self._loop.call_soon(cb, *data)

    def _pipe_connection_lost(self, fd, exc):
        self._call(self._protocol.pipe_connection_lost, fd, exc)
        self._try_finish()

    def _pipe_data_received(self, fd, data):
        self._call(self._protocol.pipe_data_received, fd, data)

    def _process_exited(self, returncode):
        assert returncode is not None, returncode
        assert self._returncode is None, self._returncode
        if self._loop.get_debug():
            logger.info('%r exited with return code %r', self, returncode)
        self._returncode = returncode
        if self._proc.returncode is None:
            # asyncio uses a child watcher: copy the status into the Popen
            # object. On Python 3.6, it is required to avoid a ResourceWarning.
            self._proc.returncode = returncode
        self._call(self._protocol.process_exited)

        self._try_finish()

    async def _wait(self):
        """Wait until the process exit and return the process return code.

        This method is a coroutine."""
        if self._returncode is not None:
            return self._returncode

        waiter = self._loop.create_future()
        self._exit_waiters.append(waiter)
        return await waiter

    def _try_finish(self):
        assert not self._finished
        if self._returncode is None:
            return
        if all(p is not None and p.disconnected
               for p in self._pipes.values()):
            self._finished = True
            self._call(self._call_connection_lost, None)

    def _call_connection_lost(self, exc):
        try:
            self._protocol.connection_lost(exc)
        finally:
            # wake up futures waiting for wait()
            for waiter in self._exit_waiters:
                if not waiter.cancelled():
                    waiter.set_result(self._returncode)
            self._exit_waiters = None
            self._loop = None
            self._proc = None
            self._protocol = None


class WriteSubprocessPipeProto(protocols.BaseProtocol):

    def __init__(self, proc, fd):
        self.proc = proc
        self.fd = fd
        self.pipe = None
        self.disconnected = False

    def connection_made(self, transport):
        self.pipe = transport

    def __repr__(self):
        return f'<{self.__class__.__name__} fd={self.fd} pipe={self.pipe!r}>'

    def connection_lost(self, exc):
        self.disconnected = True
        self.proc._pipe_connection_lost(self.fd, exc)
        self.proc = None

    def pause_writing(self):
        self.proc._protocol.pause_writing()

    def resume_writing(self):
        self.proc._protocol.resume_writing()


class ReadSubprocessPipeProto(WriteSubprocessPipeProto,
                              protocols.Protocol):

    def data_received(self, data):
        self.proc._pipe_data_received(self.fd, data)
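This transport is normally driven through asyncio's high-level subprocess API rather than used directly. A minimal standalone sketch (not part of this commit) that exercises the stdout pipe wiring set up in `_connect_pipes()`:

```python
import asyncio
import sys

async def main():
    # create_subprocess_exec builds a BaseSubprocessTransport subclass
    # under the hood; stdout=PIPE makes it attach a read pipe via
    # loop.connect_read_pipe() as shown above.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('hello')",
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    return proc.returncode, out.decode().strip()

rc, text = asyncio.run(main())
print(rc, text)   # 0 hello
```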
94
Utils/PythonNew32/Lib/asyncio/base_tasks.py
Normal file
@@ -0,0 +1,94 @@
import linecache
import reprlib
import traceback

from . import base_futures
from . import coroutines


def _task_repr_info(task):
    info = base_futures._future_repr_info(task)

    if task.cancelling() and not task.done():
        # replace status
        info[0] = 'cancelling'

    info.insert(1, 'name=%r' % task.get_name())

    if task._fut_waiter is not None:
        info.insert(2, f'wait_for={task._fut_waiter!r}')

    if task._coro:
        coro = coroutines._format_coroutine(task._coro)
        info.insert(2, f'coro=<{coro}>')

    return info


@reprlib.recursive_repr()
def _task_repr(task):
    info = ' '.join(_task_repr_info(task))
    return f'<{task.__class__.__name__} {info}>'


def _task_get_stack(task, limit):
    frames = []
    if hasattr(task._coro, 'cr_frame'):
        # case 1: 'async def' coroutines
        f = task._coro.cr_frame
    elif hasattr(task._coro, 'gi_frame'):
        # case 2: legacy coroutines
        f = task._coro.gi_frame
    elif hasattr(task._coro, 'ag_frame'):
        # case 3: async generators
        f = task._coro.ag_frame
    else:
        # case 4: unknown objects
        f = None
    if f is not None:
        while f is not None:
            if limit is not None:
                if limit <= 0:
                    break
                limit -= 1
            frames.append(f)
            f = f.f_back
        frames.reverse()
    elif task._exception is not None:
        tb = task._exception.__traceback__
        while tb is not None:
            if limit is not None:
                if limit <= 0:
                    break
                limit -= 1
            frames.append(tb.tb_frame)
            tb = tb.tb_next
    return frames


def _task_print_stack(task, limit, file):
    extracted_list = []
    checked = set()
    for f in task.get_stack(limit=limit):
        lineno = f.f_lineno
        co = f.f_code
        filename = co.co_filename
        name = co.co_name
        if filename not in checked:
            checked.add(filename)
            linecache.checkcache(filename)
        line = linecache.getline(filename, lineno, f.f_globals)
        extracted_list.append((filename, lineno, name, line))

    exc = task._exception
    if not extracted_list:
        print(f'No stack for {task!r}', file=file)
    elif exc is not None:
        print(f'Traceback for {task!r} (most recent call last):', file=file)
    else:
        print(f'Stack for {task!r} (most recent call last):', file=file)

    traceback.print_list(extracted_list, file=file)
    if exc is not None:
        for line in traceback.format_exception_only(exc.__class__, exc):
            print(line, file=file, end='')
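`_task_print_stack()` is reached through the public `Task.print_stack()` method. A standalone sketch (not part of this commit) showing the captured frame of a suspended task:

```python
import asyncio
import io

async def wait_on(ev):
    await ev.wait()

async def main():
    ev = asyncio.Event()
    t = asyncio.create_task(wait_on(ev))
    await asyncio.sleep(0)       # let the task start and suspend on ev.wait()
    buf = io.StringIO()
    t.print_stack(file=buf)      # delegates to base_tasks._task_print_stack
    ev.set()
    await t
    return buf.getvalue()

out = asyncio.run(main())
print('Stack for' in out)        # the suspended coroutine frame was captured
```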
41
Utils/PythonNew32/Lib/asyncio/constants.py
Normal file
@@ -0,0 +1,41 @@
# Contains code from https://github.com/MagicStack/uvloop/tree/v0.16.0
# SPDX-License-Identifier: PSF-2.0 AND (MIT OR Apache-2.0)
# SPDX-FileCopyrightText: Copyright (c) 2015-2021 MagicStack Inc.  http://magic.io

import enum

# After the connection is lost, log warnings after this many write()s.
LOG_THRESHOLD_FOR_CONNLOST_WRITES = 5

# Seconds to wait before retrying accept().
ACCEPT_RETRY_DELAY = 1

# Number of stack entries to capture in debug mode.
# The larger the number, the slower the operation in debug mode
# (see extract_stack() in format_helpers.py).
DEBUG_STACK_DEPTH = 10

# Number of seconds to wait for SSL handshake to complete
# The default timeout matches that of Nginx.
SSL_HANDSHAKE_TIMEOUT = 60.0

# Number of seconds to wait for SSL shutdown to complete
# The default timeout mimics lingering_time
SSL_SHUTDOWN_TIMEOUT = 30.0

# Used in sendfile fallback code.  We use fallback for platforms
# that don't support sendfile, or for TLS connections.
SENDFILE_FALLBACK_READBUFFER_SIZE = 1024 * 256

FLOW_CONTROL_HIGH_WATER_SSL_READ = 256  # KiB
FLOW_CONTROL_HIGH_WATER_SSL_WRITE = 512  # KiB

# Default timeout for joining the threads in the threadpool
THREAD_JOIN_TIMEOUT = 300

# The enum should be here to break circular dependencies between
# base_events and sslproto
class _SendfileMode(enum.Enum):
    UNSUPPORTED = enum.auto()
    TRY_NATIVE = enum.auto()
    FALLBACK = enum.auto()
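These module-level constants are consumed across the asyncio package; a quick standalone sketch (not part of this commit) of reading them, including the `_SendfileMode` enum placed here to break the base_events/sslproto import cycle:

```python
from asyncio import constants

# SSL handshake timeout defaults to Nginx's 60 seconds, per the comment above.
print(constants.SSL_HANDSHAKE_TIMEOUT)            # 60.0
# The sendfile-mode enum lives in this leaf module so both base_events
# and sslproto can import it without importing each other.
print(constants._SendfileMode.FALLBACK.name)      # FALLBACK
```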
109
Utils/PythonNew32/Lib/asyncio/coroutines.py
Normal file
@@ -0,0 +1,109 @@
__all__ = 'iscoroutinefunction', 'iscoroutine'

import collections.abc
import inspect
import os
import sys
import types


def _is_debug_mode():
    # See: https://docs.python.org/3/library/asyncio-dev.html#asyncio-debug-mode.
    return sys.flags.dev_mode or (not sys.flags.ignore_environment and
                                  bool(os.environ.get('PYTHONASYNCIODEBUG')))


# A marker for iscoroutinefunction.
_is_coroutine = object()


def iscoroutinefunction(func):
    """Return True if func is a decorated coroutine function."""
    return (inspect.iscoroutinefunction(func) or
            getattr(func, '_is_coroutine', None) is _is_coroutine)


# Prioritize native coroutine check to speed-up
# asyncio.iscoroutine.
_COROUTINE_TYPES = (types.CoroutineType, collections.abc.Coroutine)
_iscoroutine_typecache = set()


def iscoroutine(obj):
    """Return True if obj is a coroutine object."""
    if type(obj) in _iscoroutine_typecache:
        return True

    if isinstance(obj, _COROUTINE_TYPES):
        # Just in case we don't want to cache more than 100
        # positive types.  That shouldn't ever happen, unless
        # someone stressing the system on purpose.
        if len(_iscoroutine_typecache) < 100:
            _iscoroutine_typecache.add(type(obj))
        return True
    else:
        return False


def _format_coroutine(coro):
    assert iscoroutine(coro)

    def get_name(coro):
        # Coroutines compiled with Cython sometimes don't have
        # proper __qualname__ or __name__.  While that is a bug
        # in Cython, asyncio shouldn't crash with an AttributeError
        # in its __repr__ functions.
        if hasattr(coro, '__qualname__') and coro.__qualname__:
            coro_name = coro.__qualname__
        elif hasattr(coro, '__name__') and coro.__name__:
            coro_name = coro.__name__
        else:
            # Stop masking Cython bugs, expose them in a friendly way.
            coro_name = f'<{type(coro).__name__} without __name__>'
        return f'{coro_name}()'

    def is_running(coro):
        try:
            return coro.cr_running
        except AttributeError:
            try:
                return coro.gi_running
            except AttributeError:
                return False

    coro_code = None
    if hasattr(coro, 'cr_code') and coro.cr_code:
        coro_code = coro.cr_code
    elif hasattr(coro, 'gi_code') and coro.gi_code:
        coro_code = coro.gi_code

    coro_name = get_name(coro)

    if not coro_code:
        # Built-in types might not have __qualname__ or __name__.
        if is_running(coro):
            return f'{coro_name} running'
        else:
            return coro_name

    coro_frame = None
    if hasattr(coro, 'gi_frame') and coro.gi_frame:
        coro_frame = coro.gi_frame
    elif hasattr(coro, 'cr_frame') and coro.cr_frame:
        coro_frame = coro.cr_frame

    # If Cython's coroutine has a fake code object without proper
    # co_filename -- expose that.
    filename = coro_code.co_filename or '<empty co_filename>'

    lineno = 0

    if coro_frame is not None:
        lineno = coro_frame.f_lineno
        coro_repr = f'{coro_name} running at {filename}:{lineno}'

    else:
        lineno = coro_code.co_firstlineno
        coro_repr = f'{coro_name} done, defined at {filename}:{lineno}'

    return coro_repr
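`iscoroutine()` above distinguishes coroutine objects from the functions that create them, caching positive types for speed. A standalone sketch (not part of this commit) via the public `asyncio.iscoroutine`:

```python
import asyncio

async def work():
    return 1

coro = work()
# A coroutine *object* passes; the coroutine *function* does not.
print(asyncio.iscoroutine(coro))   # True (its type is now in the type cache)
print(asyncio.iscoroutine(work))   # False
coro.close()                       # suppress the "never awaited" warning
```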
882
Utils/PythonNew32/Lib/asyncio/events.py
Normal file
@@ -0,0 +1,882 @@
"""Event loop and event loop policy."""

# Contains code from https://github.com/MagicStack/uvloop/tree/v0.16.0
# SPDX-License-Identifier: PSF-2.0 AND (MIT OR Apache-2.0)
# SPDX-FileCopyrightText: Copyright (c) 2015-2021 MagicStack Inc.  http://magic.io

__all__ = (
    'AbstractEventLoopPolicy',
    'AbstractEventLoop', 'AbstractServer',
    'Handle', 'TimerHandle',
    'get_event_loop_policy', 'set_event_loop_policy',
    'get_event_loop', 'set_event_loop', 'new_event_loop',
    'get_child_watcher', 'set_child_watcher',
    '_set_running_loop', 'get_running_loop',
    '_get_running_loop',
)

import contextvars
import os
import signal
import socket
import subprocess
import sys
import threading

from . import format_helpers


class Handle:
    """Object returned by callback registration methods."""

    __slots__ = ('_callback', '_args', '_cancelled', '_loop',
                 '_source_traceback', '_repr', '__weakref__',
                 '_context')

    def __init__(self, callback, args, loop, context=None):
        if context is None:
            context = contextvars.copy_context()
        self._context = context
        self._loop = loop
        self._callback = callback
        self._args = args
        self._cancelled = False
        self._repr = None
        if self._loop.get_debug():
            self._source_traceback = format_helpers.extract_stack(
                sys._getframe(1))
        else:
            self._source_traceback = None

    def _repr_info(self):
        info = [self.__class__.__name__]
        if self._cancelled:
            info.append('cancelled')
        if self._callback is not None:
            info.append(format_helpers._format_callback_source(
                self._callback, self._args,
                debug=self._loop.get_debug()))
        if self._source_traceback:
            frame = self._source_traceback[-1]
            info.append(f'created at {frame[0]}:{frame[1]}')
        return info

    def __repr__(self):
        if self._repr is not None:
            return self._repr
        info = self._repr_info()
        return '<{}>'.format(' '.join(info))

    def get_context(self):
        return self._context

    def cancel(self):
        if not self._cancelled:
            self._cancelled = True
            if self._loop.get_debug():
                # Keep a representation in debug mode to keep callback and
                # parameters. For example, to log the warning
                # "Executing <Handle...> took 2.5 second"
                self._repr = repr(self)
            self._callback = None
            self._args = None

    def cancelled(self):
        return self._cancelled

    def _run(self):
        try:
            self._context.run(self._callback, *self._args)
        except (SystemExit, KeyboardInterrupt):
            raise
        except BaseException as exc:
            cb = format_helpers._format_callback_source(
                self._callback, self._args,
                debug=self._loop.get_debug())
            msg = f'Exception in callback {cb}'
            context = {
                'message': msg,
                'exception': exc,
                'handle': self,
            }
            if self._source_traceback:
                context['source_traceback'] = self._source_traceback
            self._loop.call_exception_handler(context)
        self = None  # Needed to break cycles when an exception occurs.


class TimerHandle(Handle):
    """Object returned by timed callback registration methods."""

    __slots__ = ['_scheduled', '_when']

    def __init__(self, when, callback, args, loop, context=None):
        super().__init__(callback, args, loop, context)
        if self._source_traceback:
            del self._source_traceback[-1]
        self._when = when
        self._scheduled = False

    def _repr_info(self):
        info = super()._repr_info()
        pos = 2 if self._cancelled else 1
        info.insert(pos, f'when={self._when}')
        return info

    def __hash__(self):
        return hash(self._when)

    def __lt__(self, other):
        if isinstance(other, TimerHandle):
            return self._when < other._when
        return NotImplemented

    def __le__(self, other):
        if isinstance(other, TimerHandle):
            return self._when < other._when or self.__eq__(other)
        return NotImplemented

    def __gt__(self, other):
        if isinstance(other, TimerHandle):
            return self._when > other._when
        return NotImplemented

    def __ge__(self, other):
        if isinstance(other, TimerHandle):
            return self._when > other._when or self.__eq__(other)
        return NotImplemented

    def __eq__(self, other):
        if isinstance(other, TimerHandle):
            return (self._when == other._when and
                    self._callback == other._callback and
                    self._args == other._args and
                    self._cancelled == other._cancelled)
        return NotImplemented

    def cancel(self):
        if not self._cancelled:
            self._loop._timer_handle_cancelled(self)
        super().cancel()

    def when(self):
        """Return a scheduled callback time.

        The time is an absolute timestamp, using the same time
        reference as loop.time().
        """
        return self._when


class AbstractServer:
    """Abstract server returned by create_server()."""

    def close(self):
        """Stop serving.  This leaves existing connections open."""
        raise NotImplementedError

    def close_clients(self):
        """Close all active connections."""
        raise NotImplementedError

    def abort_clients(self):
        """Close all active connections immediately."""
        raise NotImplementedError

    def get_loop(self):
        """Get the event loop the Server object is attached to."""
        raise NotImplementedError

    def is_serving(self):
        """Return True if the server is accepting connections."""
        raise NotImplementedError

    async def start_serving(self):
        """Start accepting connections.

        This method is idempotent, so it can be called when
        the server is already being serving.
        """
        raise NotImplementedError

    async def serve_forever(self):
        """Start accepting connections until the coroutine is cancelled.

        The server is closed when the coroutine is cancelled.
        """
        raise NotImplementedError

    async def wait_closed(self):
        """Coroutine to wait until service is closed."""
        raise NotImplementedError

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self.close()
        await self.wait_closed()


class AbstractEventLoop:
    """Abstract event loop."""

    # Running and stopping the event loop.

    def run_forever(self):
        """Run the event loop until stop() is called."""
        raise NotImplementedError

    def run_until_complete(self, future):
        """Run the event loop until a Future is done.

        Return the Future's result, or raise its exception.
        """
        raise NotImplementedError

    def stop(self):
        """Stop the event loop as soon as reasonable.

        Exactly how soon that is may depend on the implementation, but
        no more I/O callbacks should be scheduled.
        """
        raise NotImplementedError

    def is_running(self):
        """Return whether the event loop is currently running."""
        raise NotImplementedError

    def is_closed(self):
        """Returns True if the event loop was closed."""
        raise NotImplementedError

    def close(self):
        """Close the loop.

        The loop should not be running.

        This is idempotent and irreversible.

        No other methods should be called after this one.
        """
        raise NotImplementedError

    async def shutdown_asyncgens(self):
        """Shutdown all active asynchronous generators."""
        raise NotImplementedError

    async def shutdown_default_executor(self):
        """Schedule the shutdown of the default executor."""
        raise NotImplementedError

    # Methods scheduling callbacks.  All these return Handles.

    def _timer_handle_cancelled(self, handle):
        """Notification that a TimerHandle has been cancelled."""
        raise NotImplementedError

    def call_soon(self, callback, *args, context=None):
        return self.call_later(0, callback, *args, context=context)

    def call_later(self, delay, callback, *args, context=None):
        raise NotImplementedError

    def call_at(self, when, callback, *args, context=None):
        raise NotImplementedError

    def time(self):
        raise NotImplementedError

    def create_future(self):
        raise NotImplementedError

    # Method scheduling a coroutine object: create a task.

    def create_task(self, coro, **kwargs):
        raise NotImplementedError

    # Methods for interacting with threads.

    def call_soon_threadsafe(self, callback, *args, context=None):
        raise NotImplementedError

    def run_in_executor(self, executor, func, *args):
        raise NotImplementedError

    def set_default_executor(self, executor):
        raise NotImplementedError

    # Network I/O methods returning Futures.

    async def getaddrinfo(self, host, port, *,
                          family=0, type=0, proto=0, flags=0):
        raise NotImplementedError

    async def getnameinfo(self, sockaddr, flags=0):
        raise NotImplementedError

    async def create_connection(
            self, protocol_factory, host=None, port=None,
            *, ssl=None, family=0, proto=0,
            flags=0, sock=None, local_addr=None,
            server_hostname=None,
            ssl_handshake_timeout=None,
            ssl_shutdown_timeout=None,
            happy_eyeballs_delay=None, interleave=None):
        raise NotImplementedError

    async def create_server(
            self, protocol_factory, host=None, port=None,
            *, family=socket.AF_UNSPEC,
            flags=socket.AI_PASSIVE, sock=None, backlog=100,
            ssl=None, reuse_address=None, reuse_port=None,
            keep_alive=None,
            ssl_handshake_timeout=None,
            ssl_shutdown_timeout=None,
            start_serving=True):
        """A coroutine which creates a TCP server bound to host and port.

        The return value is a Server object which can be used to stop
        the service.

        If host is an empty string or None all interfaces are assumed
        and a list of multiple sockets will be returned (most likely
        one for IPv4 and another one for IPv6). The host parameter can also be
        a sequence (e.g. list) of hosts to bind to.

        family can be set to either AF_INET or AF_INET6 to force the
        socket to use IPv4 or IPv6.  If not set it will be determined
        from host (defaults to AF_UNSPEC).

        flags is a bitmask for getaddrinfo().

        sock can optionally be specified in order to use a preexisting
        socket object.

        backlog is the maximum number of queued connections passed to
        listen() (defaults to 100).

        ssl can be set to an SSLContext to enable SSL over the
        accepted connections.

        reuse_address tells the kernel to reuse a local socket in
        TIME_WAIT state, without waiting for its natural timeout to
        expire. If not specified will automatically be set to True on
        UNIX.

        reuse_port tells the kernel to allow this endpoint to be bound to
        the same port as other existing endpoints are bound to, so long as
        they all set this flag when being created. This option is not
        supported on Windows.

        keep_alive set to True keeps connections active by enabling the
        periodic transmission of messages.

        ssl_handshake_timeout is the time in seconds that an SSL server
        will wait for completion of the SSL handshake before aborting the
        connection. Default is 60s.

        ssl_shutdown_timeout is the time in seconds that an SSL server
        will wait for completion of the SSL shutdown procedure
        before aborting the connection. Default is 30s.

        start_serving set to True (default) causes the created server
        to start accepting connections immediately.  When set to False,
        the user should await Server.start_serving() or Server.serve_forever()
        to make the server to start accepting connections.
        """
        raise NotImplementedError

    async def sendfile(self, transport, file, offset=0, count=None,
                       *, fallback=True):
        """Send a file through a transport.

        Return an amount of sent bytes.
        """
        raise NotImplementedError

    async def start_tls(self, transport, protocol, sslcontext, *,
                        server_side=False,
                        server_hostname=None,
                        ssl_handshake_timeout=None,
                        ssl_shutdown_timeout=None):
        """Upgrade a transport to TLS.

        Return a new transport that *protocol* should start using
        immediately.
        """
        raise NotImplementedError

    async def create_unix_connection(
            self, protocol_factory, path=None, *,
|
||||
ssl=None, sock=None,
|
||||
server_hostname=None,
|
||||
ssl_handshake_timeout=None,
|
||||
ssl_shutdown_timeout=None):
|
||||
raise NotImplementedError
|
||||
|
||||
async def create_unix_server(
|
||||
self, protocol_factory, path=None, *,
|
||||
sock=None, backlog=100, ssl=None,
|
||||
ssl_handshake_timeout=None,
|
||||
ssl_shutdown_timeout=None,
|
||||
start_serving=True):
|
||||
"""A coroutine which creates a UNIX Domain Socket server.
|
||||
|
||||
The return value is a Server object, which can be used to stop
|
||||
the service.
|
||||
|
||||
path is a str, representing a file system path to bind the
|
||||
server socket to.
|
||||
|
||||
sock can optionally be specified in order to use a preexisting
|
||||
socket object.
|
||||
|
||||
backlog is the maximum number of queued connections passed to
|
||||
listen() (defaults to 100).
|
||||
|
||||
ssl can be set to an SSLContext to enable SSL over the
|
||||
accepted connections.
|
||||
|
||||
ssl_handshake_timeout is the time in seconds that an SSL server
|
||||
will wait for the SSL handshake to complete (defaults to 60s).
|
||||
|
||||
ssl_shutdown_timeout is the time in seconds that an SSL server
|
||||
will wait for the SSL shutdown to finish (defaults to 30s).
|
||||
|
||||
start_serving set to True (default) causes the created server
|
||||
to start accepting connections immediately. When set to False,
|
||||
the user should await Server.start_serving() or Server.serve_forever()
|
||||
to make the server to start accepting connections.
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
async def connect_accepted_socket(
|
||||
self, protocol_factory, sock,
|
||||
*, ssl=None,
|
||||
ssl_handshake_timeout=None,
|
||||
ssl_shutdown_timeout=None):
|
||||
"""Handle an accepted connection.
|
||||
|
||||
This is used by servers that accept connections outside of
|
||||
asyncio, but use asyncio to handle connections.
|
||||
|
||||
This method is a coroutine. When completed, the coroutine
|
||||
returns a (transport, protocol) pair.
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
async def create_datagram_endpoint(self, protocol_factory,
|
||||
local_addr=None, remote_addr=None, *,
|
||||
family=0, proto=0, flags=0,
|
||||
reuse_address=None, reuse_port=None,
|
||||
allow_broadcast=None, sock=None):
|
||||
"""A coroutine which creates a datagram endpoint.
|
||||
|
||||
This method will try to establish the endpoint in the background.
|
||||
When successful, the coroutine returns a (transport, protocol) pair.
|
||||
|
||||
protocol_factory must be a callable returning a protocol instance.
|
||||
|
||||
socket family AF_INET, socket.AF_INET6 or socket.AF_UNIX depending on
|
||||
host (or family if specified), socket type SOCK_DGRAM.
|
||||
|
||||
reuse_address tells the kernel to reuse a local socket in
|
||||
TIME_WAIT state, without waiting for its natural timeout to
|
||||
expire. If not specified it will automatically be set to True on
|
||||
UNIX.
|
||||
|
||||
reuse_port tells the kernel to allow this endpoint to be bound to
|
||||
the same port as other existing endpoints are bound to, so long as
|
||||
they all set this flag when being created. This option is not
|
||||
supported on Windows and some UNIX's. If the
|
||||
:py:data:`~socket.SO_REUSEPORT` constant is not defined then this
|
||||
capability is unsupported.
|
||||
|
||||
allow_broadcast tells the kernel to allow this endpoint to send
|
||||
messages to the broadcast address.
|
||||
|
||||
sock can optionally be specified in order to use a preexisting
|
||||
socket object.
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
    # Pipes and subprocesses.

    async def connect_read_pipe(self, protocol_factory, pipe):
        """Register read pipe in event loop. Set the pipe to non-blocking mode.

        protocol_factory should instantiate object with Protocol interface.
        pipe is a file-like object.
        Return pair (transport, protocol), where transport supports the
        ReadTransport interface."""
        # The reason to accept a file-like object instead of just a file
        # descriptor is: we need to own the pipe and close it when the
        # transport finishes.  We can get complicated errors if we pass
        # f.fileno(), close the fd in the pipe transport and then close f,
        # or vice versa.
        raise NotImplementedError

    async def connect_write_pipe(self, protocol_factory, pipe):
        """Register write pipe in event loop.

        protocol_factory should instantiate object with BaseProtocol interface.
        pipe is a file-like object already switched to non-blocking mode.
        Return pair (transport, protocol), where transport supports the
        WriteTransport interface."""
        # The reason to accept a file-like object instead of just a file
        # descriptor is: we need to own the pipe and close it when the
        # transport finishes.  We can get complicated errors if we pass
        # f.fileno(), close the fd in the pipe transport and then close f,
        # or vice versa.
        raise NotImplementedError

    async def subprocess_shell(self, protocol_factory, cmd, *,
                               stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               **kwargs):
        raise NotImplementedError

    async def subprocess_exec(self, protocol_factory, *args,
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE,
                              **kwargs):
        raise NotImplementedError

    # Ready-based callback registration methods.
    # The add_*() methods return None.
    # The remove_*() methods return True if something was removed,
    # False if there was nothing to delete.

    def add_reader(self, fd, callback, *args):
        raise NotImplementedError

    def remove_reader(self, fd):
        raise NotImplementedError

    def add_writer(self, fd, callback, *args):
        raise NotImplementedError

    def remove_writer(self, fd):
        raise NotImplementedError

    # Completion based I/O methods returning Futures.

    async def sock_recv(self, sock, nbytes):
        raise NotImplementedError

    async def sock_recv_into(self, sock, buf):
        raise NotImplementedError

    async def sock_recvfrom(self, sock, bufsize):
        raise NotImplementedError

    async def sock_recvfrom_into(self, sock, buf, nbytes=0):
        raise NotImplementedError

    async def sock_sendall(self, sock, data):
        raise NotImplementedError

    async def sock_sendto(self, sock, data, address):
        raise NotImplementedError

    async def sock_connect(self, sock, address):
        raise NotImplementedError

    async def sock_accept(self, sock):
        raise NotImplementedError

    async def sock_sendfile(self, sock, file, offset=0, count=None,
                            *, fallback=None):
        raise NotImplementedError

    # Signal handling.

    def add_signal_handler(self, sig, callback, *args):
        raise NotImplementedError

    def remove_signal_handler(self, sig):
        raise NotImplementedError

    # Task factory.

    def set_task_factory(self, factory):
        raise NotImplementedError

    def get_task_factory(self):
        raise NotImplementedError

    # Error handlers.

    def get_exception_handler(self):
        raise NotImplementedError

    def set_exception_handler(self, handler):
        raise NotImplementedError

    def default_exception_handler(self, context):
        raise NotImplementedError

    def call_exception_handler(self, context):
        raise NotImplementedError

    # Debug flag management.

    def get_debug(self):
        raise NotImplementedError

    def set_debug(self, enabled):
        raise NotImplementedError


class AbstractEventLoopPolicy:
    """Abstract policy for accessing the event loop."""

    def get_event_loop(self):
        """Get the event loop for the current context.

        Returns an event loop object implementing the AbstractEventLoop
        interface, or raises an exception in case no event loop has been
        set for the current context and the current policy does not
        specify to create one.

        It should never return None."""
        raise NotImplementedError

    def set_event_loop(self, loop):
        """Set the event loop for the current context to loop."""
        raise NotImplementedError

    def new_event_loop(self):
        """Create and return a new event loop object according to this
        policy's rules. If there's need to set this loop as the event loop for
        the current context, set_event_loop must be called explicitly."""
        raise NotImplementedError

    # Child processes handling (Unix only).

    def get_child_watcher(self):
        """Get the watcher for child processes."""
        raise NotImplementedError

    def set_child_watcher(self, watcher):
        """Set the watcher for child processes."""
        raise NotImplementedError


class BaseDefaultEventLoopPolicy(AbstractEventLoopPolicy):
    """Default policy implementation for accessing the event loop.

    In this policy, each thread has its own event loop.  However, we
    only automatically create an event loop by default for the main
    thread; other threads by default have no event loop.

    Other policies may have different rules (e.g. a single global
    event loop, or automatically creating an event loop per thread, or
    using some other notion of context to which an event loop is
    associated).
    """

    _loop_factory = None

    class _Local(threading.local):
        _loop = None
        _set_called = False

    def __init__(self):
        self._local = self._Local()

    def get_event_loop(self):
        """Get the event loop for the current context.

        Returns an instance of EventLoop or raises an exception.
        """
        if (self._local._loop is None and
                not self._local._set_called and
                threading.current_thread() is threading.main_thread()):
            stacklevel = 2
            try:
                f = sys._getframe(1)
            except AttributeError:
                pass
            else:
                # Move up the call stack so that the warning is attached
                # to the line outside asyncio itself.
                while f:
                    module = f.f_globals.get('__name__')
                    if not (module == 'asyncio' or module.startswith('asyncio.')):
                        break
                    f = f.f_back
                    stacklevel += 1
            import warnings
            warnings.warn('There is no current event loop',
                          DeprecationWarning, stacklevel=stacklevel)
            self.set_event_loop(self.new_event_loop())

        if self._local._loop is None:
            raise RuntimeError('There is no current event loop in thread %r.'
                               % threading.current_thread().name)

        return self._local._loop

    def set_event_loop(self, loop):
        """Set the event loop."""
        self._local._set_called = True
        if loop is not None and not isinstance(loop, AbstractEventLoop):
            raise TypeError(f"loop must be an instance of AbstractEventLoop or None, not '{type(loop).__name__}'")
        self._local._loop = loop

    def new_event_loop(self):
        """Create a new event loop.

        You must call set_event_loop() to make this the current event
        loop.
        """
        return self._loop_factory()


# Event loop policy.  The policy itself is always global, even if the
# policy's rules say that there is an event loop per thread (or other
# notion of context).  The default policy is installed by the first
# call to get_event_loop_policy().
_event_loop_policy = None

# Lock for protecting the on-the-fly creation of the event loop policy.
_lock = threading.Lock()


# A TLS for the running event loop, used by _get_running_loop.
class _RunningLoop(threading.local):
    loop_pid = (None, None)


_running_loop = _RunningLoop()


def get_running_loop():
    """Return the running event loop.  Raise a RuntimeError if there is none.

    This function is thread-specific.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    loop = _get_running_loop()
    if loop is None:
        raise RuntimeError('no running event loop')
    return loop


def _get_running_loop():
    """Return the running event loop or None.

    This is a low-level function intended to be used by event loops.
    This function is thread-specific.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    running_loop, pid = _running_loop.loop_pid
    if running_loop is not None and pid == os.getpid():
        return running_loop


def _set_running_loop(loop):
    """Set the running event loop.

    This is a low-level function intended to be used by event loops.
    This function is thread-specific.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    _running_loop.loop_pid = (loop, os.getpid())


def _init_event_loop_policy():
    global _event_loop_policy
    with _lock:
        if _event_loop_policy is None:  # pragma: no branch
            from . import DefaultEventLoopPolicy
            _event_loop_policy = DefaultEventLoopPolicy()


def get_event_loop_policy():
    """Get the current event loop policy."""
    if _event_loop_policy is None:
        _init_event_loop_policy()
    return _event_loop_policy


def set_event_loop_policy(policy):
    """Set the current event loop policy.

    If policy is None, the default policy is restored."""
    global _event_loop_policy
    if policy is not None and not isinstance(policy, AbstractEventLoopPolicy):
        raise TypeError(f"policy must be an instance of AbstractEventLoopPolicy or None, not '{type(policy).__name__}'")
    _event_loop_policy = policy


def get_event_loop():
    """Return an asyncio event loop.

    When called from a coroutine or a callback (e.g. scheduled with call_soon
    or similar API), this function will always return the running event loop.

    If there is no running event loop set, the function will return
    the result of the `get_event_loop_policy().get_event_loop()` call.
    """
    # NOTE: this function is implemented in C (see _asynciomodule.c)
    current_loop = _get_running_loop()
    if current_loop is not None:
        return current_loop
    return get_event_loop_policy().get_event_loop()


def set_event_loop(loop):
    """Equivalent to calling get_event_loop_policy().set_event_loop(loop)."""
    get_event_loop_policy().set_event_loop(loop)


def new_event_loop():
    """Equivalent to calling get_event_loop_policy().new_event_loop()."""
    return get_event_loop_policy().new_event_loop()


def get_child_watcher():
    """Equivalent to calling get_event_loop_policy().get_child_watcher()."""
    return get_event_loop_policy().get_child_watcher()


def set_child_watcher(watcher):
    """Equivalent to calling
    get_event_loop_policy().set_child_watcher(watcher)."""
    return get_event_loop_policy().set_child_watcher(watcher)


# Alias pure-Python implementations for testing purposes.
_py__get_running_loop = _get_running_loop
_py__set_running_loop = _set_running_loop
_py_get_running_loop = get_running_loop
_py_get_event_loop = get_event_loop


try:
    # get_event_loop() is one of the most frequently called
    # functions in asyncio.  Pure Python implementation is
    # about 4 times slower than C-accelerated.
    from _asyncio import (_get_running_loop, _set_running_loop,
                          get_running_loop, get_event_loop)
except ImportError:
    pass
else:
    # Alias C implementations for testing purposes.
    _c__get_running_loop = _get_running_loop
    _c__set_running_loop = _set_running_loop
    _c_get_running_loop = get_running_loop
    _c_get_event_loop = get_event_loop


if hasattr(os, 'fork'):
    def on_fork():
        # Reset the loop and wakeupfd in the forked child process.
        if _event_loop_policy is not None:
            _event_loop_policy._local = BaseDefaultEventLoopPolicy._Local()
        _set_running_loop(None)
        signal.set_wakeup_fd(-1)

    os.register_at_fork(after_in_child=on_fork)
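The loop-access functions defined above (`get_running_loop()` raising outside a loop, and returning the driving loop inside one) can be exercised with a minimal sketch run outside this module, using only the public `asyncio` API:

```python
import asyncio

# get_running_loop() only works while a loop is actually running,
# so it is called from inside a coroutine driven by asyncio.run().
async def main():
    loop = asyncio.get_running_loop()
    return loop.is_running()

# Outside any coroutine there is no running loop, so the call raises
# RuntimeError('no running event loop'), as implemented above.
try:
    asyncio.get_running_loop()
except RuntimeError as exc:
    print(exc)

print(asyncio.run(main()))  # True
```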
62
Utils/PythonNew32/Lib/asyncio/exceptions.py
Normal file
@@ -0,0 +1,62 @@
"""asyncio exceptions."""
|
||||
|
||||
|
||||
__all__ = ('BrokenBarrierError',
|
||||
'CancelledError', 'InvalidStateError', 'TimeoutError',
|
||||
'IncompleteReadError', 'LimitOverrunError',
|
||||
'SendfileNotAvailableError')
|
||||
|
||||
|
||||
class CancelledError(BaseException):
|
||||
"""The Future or Task was cancelled."""
|
||||
|
||||
|
||||
TimeoutError = TimeoutError # make local alias for the standard exception
|
||||
|
||||
|
||||
class InvalidStateError(Exception):
|
||||
"""The operation is not allowed in this state."""
|
||||
|
||||
|
||||
class SendfileNotAvailableError(RuntimeError):
|
||||
"""Sendfile syscall is not available.
|
||||
|
||||
Raised if OS does not support sendfile syscall for given socket or
|
||||
file type.
|
||||
"""
|
||||
|
||||
|
||||
class IncompleteReadError(EOFError):
|
||||
"""
|
||||
Incomplete read error. Attributes:
|
||||
|
||||
- partial: read bytes string before the end of stream was reached
|
||||
- expected: total number of expected bytes (or None if unknown)
|
||||
"""
|
||||
def __init__(self, partial, expected):
|
||||
r_expected = 'undefined' if expected is None else repr(expected)
|
||||
super().__init__(f'{len(partial)} bytes read on a total of '
|
||||
f'{r_expected} expected bytes')
|
||||
self.partial = partial
|
||||
self.expected = expected
|
||||
|
||||
def __reduce__(self):
|
||||
return type(self), (self.partial, self.expected)
|
||||
|
||||
|
||||
class LimitOverrunError(Exception):
|
||||
"""Reached the buffer limit while looking for a separator.
|
||||
|
||||
Attributes:
|
||||
- consumed: total number of to be consumed bytes.
|
||||
"""
|
||||
def __init__(self, message, consumed):
|
||||
super().__init__(message)
|
||||
self.consumed = consumed
|
||||
|
||||
def __reduce__(self):
|
||||
return type(self), (self.args[0], self.consumed)
|
||||
|
||||
|
||||
class BrokenBarrierError(RuntimeError):
|
||||
"""Barrier is broken by barrier.abort() call."""
|
||||
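The `__reduce__` methods above exist so these exceptions round-trip through pickle with their extra attributes intact (plain `Exception.__reduce__` would only preserve `args`). A minimal sketch against the file's public classes:

```python
import pickle
from asyncio.exceptions import IncompleteReadError

err = IncompleteReadError(partial=b'abc', expected=10)
# Message is built in __init__ from len(partial) and expected:
print(err)  # 3 bytes read on a total of 10 expected bytes
assert err.partial == b'abc' and err.expected == 10

# __reduce__ lets the exception survive pickling with its attributes.
clone = pickle.loads(pickle.dumps(err))
assert (clone.partial, clone.expected) == (b'abc', 10)
```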
84
Utils/PythonNew32/Lib/asyncio/format_helpers.py
Normal file
@@ -0,0 +1,84 @@
import functools
import inspect
import reprlib
import sys
import traceback

from . import constants


def _get_function_source(func):
    func = inspect.unwrap(func)
    if inspect.isfunction(func):
        code = func.__code__
        return (code.co_filename, code.co_firstlineno)
    if isinstance(func, functools.partial):
        return _get_function_source(func.func)
    if isinstance(func, functools.partialmethod):
        return _get_function_source(func.func)
    return None


def _format_callback_source(func, args, *, debug=False):
    func_repr = _format_callback(func, args, None, debug=debug)
    source = _get_function_source(func)
    if source:
        func_repr += f' at {source[0]}:{source[1]}'
    return func_repr


def _format_args_and_kwargs(args, kwargs, *, debug=False):
    """Format function arguments and keyword arguments.

    Special case for a single parameter: ('hello',) is formatted as ('hello').

    Note that this function only returns argument details when
    debug=True is specified, as arguments may contain sensitive
    information.
    """
    if not debug:
        return '()'

    # use reprlib to limit the length of the output
    items = []
    if args:
        items.extend(reprlib.repr(arg) for arg in args)
    if kwargs:
        items.extend(f'{k}={reprlib.repr(v)}' for k, v in kwargs.items())
    return '({})'.format(', '.join(items))


def _format_callback(func, args, kwargs, *, debug=False, suffix=''):
    if isinstance(func, functools.partial):
        suffix = _format_args_and_kwargs(args, kwargs, debug=debug) + suffix
        return _format_callback(func.func, func.args, func.keywords,
                                debug=debug, suffix=suffix)

    if hasattr(func, '__qualname__') and func.__qualname__:
        func_repr = func.__qualname__
    elif hasattr(func, '__name__') and func.__name__:
        func_repr = func.__name__
    else:
        func_repr = repr(func)

    func_repr += _format_args_and_kwargs(args, kwargs, debug=debug)
    if suffix:
        func_repr += suffix
    return func_repr


def extract_stack(f=None, limit=None):
    """Replacement for traceback.extract_stack() that only does the
    necessary work for asyncio debug mode.
    """
    if f is None:
        f = sys._getframe().f_back
    if limit is None:
        # Limit the amount of work to a reasonable amount, as extract_stack()
        # can be called for each coroutine and future in debug mode.
        limit = constants.DEBUG_STACK_DEPTH
    stack = traceback.StackSummary.extract(traceback.walk_stack(f),
                                           limit=limit,
                                           lookup_lines=False)
    stack.reverse()
    return stack
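`extract_stack()` above walks frames from the caller outward, then reverses the summary so the outermost frame comes first. A small sketch from outside the module (importing it as `asyncio.format_helpers`, a private module, so this is illustrative only):

```python
from asyncio import format_helpers

def sample():
    # With f=None, extract_stack() starts at the caller's frame,
    # i.e. sample() itself.
    return format_helpers.extract_stack()

stack = sample()
# stack.reverse() puts the outermost frame first, so the innermost
# frame (sample itself) is last.
assert stack[-1].name == 'sample'
```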
425
Utils/PythonNew32/Lib/asyncio/futures.py
Normal file
@@ -0,0 +1,425 @@
"""A Future class similar to the one in PEP 3148."""
|
||||
|
||||
__all__ = (
|
||||
'Future', 'wrap_future', 'isfuture',
|
||||
)
|
||||
|
||||
import concurrent.futures
|
||||
import contextvars
|
||||
import logging
|
||||
import sys
|
||||
from types import GenericAlias
|
||||
|
||||
from . import base_futures
|
||||
from . import events
|
||||
from . import exceptions
|
||||
from . import format_helpers
|
||||
|
||||
|
||||
isfuture = base_futures.isfuture
|
||||
|
||||
|
||||
_PENDING = base_futures._PENDING
|
||||
_CANCELLED = base_futures._CANCELLED
|
||||
_FINISHED = base_futures._FINISHED
|
||||
|
||||
|
||||
STACK_DEBUG = logging.DEBUG - 1 # heavy-duty debugging
|
||||
|
||||
|
||||
class Future:
|
||||
"""This class is *almost* compatible with concurrent.futures.Future.
|
||||
|
||||
Differences:
|
||||
|
||||
- This class is not thread-safe.
|
||||
|
||||
- result() and exception() do not take a timeout argument and
|
||||
raise an exception when the future isn't done yet.
|
||||
|
||||
- Callbacks registered with add_done_callback() are always called
|
||||
via the event loop's call_soon().
|
||||
|
||||
- This class is not compatible with the wait() and as_completed()
|
||||
methods in the concurrent.futures package.
|
||||
|
||||
(In Python 3.4 or later we may be able to unify the implementations.)
|
||||
"""
|
||||
|
||||
# Class variables serving as defaults for instance variables.
|
||||
_state = _PENDING
|
||||
_result = None
|
||||
_exception = None
|
||||
_loop = None
|
||||
_source_traceback = None
|
||||
_cancel_message = None
|
||||
# A saved CancelledError for later chaining as an exception context.
|
||||
_cancelled_exc = None
|
||||
|
||||
# This field is used for a dual purpose:
|
||||
# - Its presence is a marker to declare that a class implements
|
||||
# the Future protocol (i.e. is intended to be duck-type compatible).
|
||||
# The value must also be not-None, to enable a subclass to declare
|
||||
# that it is not compatible by setting this to None.
|
||||
# - It is set by __iter__() below so that Task._step() can tell
|
||||
# the difference between
|
||||
# `await Future()` or`yield from Future()` (correct) vs.
|
||||
# `yield Future()` (incorrect).
|
||||
_asyncio_future_blocking = False
|
||||
|
||||
__log_traceback = False
|
||||
|
||||
def __init__(self, *, loop=None):
|
||||
"""Initialize the future.
|
||||
|
||||
The optional event_loop argument allows explicitly setting the event
|
||||
loop object used by the future. If it's not provided, the future uses
|
||||
the default event loop.
|
||||
"""
|
||||
if loop is None:
|
||||
self._loop = events.get_event_loop()
|
||||
else:
|
||||
self._loop = loop
|
||||
self._callbacks = []
|
||||
if self._loop.get_debug():
|
||||
self._source_traceback = format_helpers.extract_stack(
|
||||
sys._getframe(1))
|
||||
|
||||
def __repr__(self):
|
||||
return base_futures._future_repr(self)
|
||||
|
||||
def __del__(self):
|
||||
if not self.__log_traceback:
|
||||
# set_exception() was not called, or result() or exception()
|
||||
# has consumed the exception
|
||||
return
|
||||
exc = self._exception
|
||||
context = {
|
||||
'message':
|
||||
f'{self.__class__.__name__} exception was never retrieved',
|
||||
'exception': exc,
|
||||
'future': self,
|
||||
}
|
||||
if self._source_traceback:
|
||||
context['source_traceback'] = self._source_traceback
|
||||
self._loop.call_exception_handler(context)
|
||||
|
||||
__class_getitem__ = classmethod(GenericAlias)
|
||||
|
||||
@property
|
||||
def _log_traceback(self):
|
||||
return self.__log_traceback
|
||||
|
||||
@_log_traceback.setter
|
||||
def _log_traceback(self, val):
|
||||
if val:
|
||||
raise ValueError('_log_traceback can only be set to False')
|
||||
self.__log_traceback = False
|
||||
|
||||
def get_loop(self):
|
||||
"""Return the event loop the Future is bound to."""
|
||||
loop = self._loop
|
||||
if loop is None:
|
||||
raise RuntimeError("Future object is not initialized.")
|
||||
return loop
|
||||
|
||||
def _make_cancelled_error(self):
|
||||
"""Create the CancelledError to raise if the Future is cancelled.
|
||||
|
||||
This should only be called once when handling a cancellation since
|
||||
it erases the saved context exception value.
|
||||
"""
|
||||
if self._cancelled_exc is not None:
|
||||
exc = self._cancelled_exc
|
||||
self._cancelled_exc = None
|
||||
return exc
|
||||
|
||||
if self._cancel_message is None:
|
||||
exc = exceptions.CancelledError()
|
||||
else:
|
||||
exc = exceptions.CancelledError(self._cancel_message)
|
||||
return exc
|
||||
|
||||
def cancel(self, msg=None):
|
||||
"""Cancel the future and schedule callbacks.
|
||||
|
||||
If the future is already done or cancelled, return False. Otherwise,
|
||||
change the future's state to cancelled, schedule the callbacks and
|
||||
return True.
|
||||
"""
|
||||
self.__log_traceback = False
|
||||
if self._state != _PENDING:
|
||||
return False
|
||||
self._state = _CANCELLED
|
||||
self._cancel_message = msg
|
||||
self.__schedule_callbacks()
|
||||
return True
|
||||
|
||||
def __schedule_callbacks(self):
|
||||
"""Internal: Ask the event loop to call all callbacks.
|
||||
|
||||
The callbacks are scheduled to be called as soon as possible. Also
|
||||
clears the callback list.
|
||||
"""
|
||||
callbacks = self._callbacks[:]
|
||||
if not callbacks:
|
||||
return
|
||||
|
||||
self._callbacks[:] = []
|
||||
for callback, ctx in callbacks:
|
||||
self._loop.call_soon(callback, self, context=ctx)
|
||||
|
||||
def cancelled(self):
|
||||
"""Return True if the future was cancelled."""
|
||||
return self._state == _CANCELLED
|
||||
|
||||
# Don't implement running(); see http://bugs.python.org/issue18699
|
||||
    def done(self):
        """Return True if the future is done.

        Done means either that a result / exception are available, or that the
        future was cancelled.
        """
        return self._state != _PENDING

    def result(self):
        """Return the result this future represents.

        If the future has been cancelled, raises CancelledError.  If the
        future's result isn't yet available, raises InvalidStateError.  If
        the future is done and has an exception set, this exception is raised.
        """
        if self._state == _CANCELLED:
            raise self._make_cancelled_error()
        if self._state != _FINISHED:
            raise exceptions.InvalidStateError('Result is not ready.')
        self.__log_traceback = False
        if self._exception is not None:
            raise self._exception.with_traceback(self._exception_tb)
        return self._result

    def exception(self):
        """Return the exception that was set on this future.

        The exception (or None if no exception was set) is returned only if
        the future is done.  If the future has been cancelled, raises
        CancelledError.  If the future isn't done yet, raises
        InvalidStateError.
        """
        if self._state == _CANCELLED:
            raise self._make_cancelled_error()
        if self._state != _FINISHED:
            raise exceptions.InvalidStateError('Exception is not set.')
        self.__log_traceback = False
        return self._exception

    def add_done_callback(self, fn, *, context=None):
        """Add a callback to be run when the future becomes done.

        The callback is called with a single argument - the future object.
        If the future is already done when this is called, the callback is
        scheduled with call_soon.
        """
        if self._state != _PENDING:
            self._loop.call_soon(fn, self, context=context)
        else:
            if context is None:
                context = contextvars.copy_context()
            self._callbacks.append((fn, context))

    # New method not in PEP 3148.

    def remove_done_callback(self, fn):
        """Remove all instances of a callback from the "call when done" list.

        Returns the number of callbacks removed.
        """
        filtered_callbacks = [(f, ctx)
                              for (f, ctx) in self._callbacks
                              if f != fn]
        removed_count = len(self._callbacks) - len(filtered_callbacks)
        if removed_count:
            self._callbacks[:] = filtered_callbacks
        return removed_count

    # So-called internal methods (note: no set_running_or_notify_cancel()).
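As a usage sketch of the callback methods above: attach a done callback to a plain Future and observe that it fires via call_soon once the future completes. The `collected` list name is our own choice, not part of asyncio.

```python
import asyncio

collected = []

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    fut.add_done_callback(lambda f: collected.append(f.result()))
    loop.call_soon(fut.set_result, 42)
    await fut               # resumes once set_result has run
    await asyncio.sleep(0)  # let the scheduled done callback run

asyncio.run(main())
print(collected)  # [42]
```

Note the extra `sleep(0)`: the callback is not invoked synchronously by `set_result` but scheduled onto the loop, exactly as `add_done_callback` documents.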
    def set_result(self, result):
        """Mark the future done and set its result.

        If the future is already done when this method is called, raises
        InvalidStateError.
        """
        if self._state != _PENDING:
            raise exceptions.InvalidStateError(f'{self._state}: {self!r}')
        self._result = result
        self._state = _FINISHED
        self.__schedule_callbacks()

    def set_exception(self, exception):
        """Mark the future done and set an exception.

        If the future is already done when this method is called, raises
        InvalidStateError.
        """
        if self._state != _PENDING:
            raise exceptions.InvalidStateError(f'{self._state}: {self!r}')
        if isinstance(exception, type):
            exception = exception()
        if isinstance(exception, StopIteration):
            new_exc = RuntimeError("StopIteration interacts badly with "
                                   "generators and cannot be raised into a "
                                   "Future")
            new_exc.__cause__ = exception
            new_exc.__context__ = exception
            exception = new_exc
        self._exception = exception
        self._exception_tb = exception.__traceback__
        self._state = _FINISHED
        self.__schedule_callbacks()
        self.__log_traceback = True
    def __await__(self):
        if not self.done():
            self._asyncio_future_blocking = True
            yield self  # This tells Task to wait for completion.
        if not self.done():
            raise RuntimeError("await wasn't used with future")
        return self.result()  # May raise too.

    __iter__ = __await__  # make compatible with 'yield from'.


# Needed for testing purposes.
_PyFuture = Future


def _get_loop(fut):
    # Tries to call Future.get_loop() if it's available.
    # Otherwise falls back to using the old '_loop' property.
    try:
        get_loop = fut.get_loop
    except AttributeError:
        pass
    else:
        return get_loop()
    return fut._loop


def _set_result_unless_cancelled(fut, result):
    """Helper setting the result only if the future was not cancelled."""
    if fut.cancelled():
        return
    fut.set_result(result)
def _convert_future_exc(exc):
    exc_class = type(exc)
    if exc_class is concurrent.futures.CancelledError:
        return exceptions.CancelledError(*exc.args).with_traceback(exc.__traceback__)
    elif exc_class is concurrent.futures.InvalidStateError:
        return exceptions.InvalidStateError(*exc.args).with_traceback(exc.__traceback__)
    else:
        return exc


def _set_concurrent_future_state(concurrent, source):
    """Copy state from a future to a concurrent.futures.Future."""
    assert source.done()
    if source.cancelled():
        concurrent.cancel()
    if not concurrent.set_running_or_notify_cancel():
        return
    exception = source.exception()
    if exception is not None:
        concurrent.set_exception(_convert_future_exc(exception))
    else:
        result = source.result()
        concurrent.set_result(result)


def _copy_future_state(source, dest):
    """Internal helper to copy state from another Future.

    The other Future may be a concurrent.futures.Future.
    """
    assert source.done()
    if dest.cancelled():
        return
    assert not dest.done()
    if source.cancelled():
        dest.cancel()
    else:
        exception = source.exception()
        if exception is not None:
            dest.set_exception(_convert_future_exc(exception))
        else:
            result = source.result()
            dest.set_result(result)
def _chain_future(source, destination):
    """Chain two futures so that when one completes, so does the other.

    The result (or exception) of source will be copied to destination.
    If destination is cancelled, source gets cancelled too.
    Compatible with both asyncio.Future and concurrent.futures.Future.
    """
    if not isfuture(source) and not isinstance(source,
                                               concurrent.futures.Future):
        raise TypeError('A future is required for source argument')
    if not isfuture(destination) and not isinstance(destination,
                                                    concurrent.futures.Future):
        raise TypeError('A future is required for destination argument')
    source_loop = _get_loop(source) if isfuture(source) else None
    dest_loop = _get_loop(destination) if isfuture(destination) else None

    def _set_state(future, other):
        if isfuture(future):
            _copy_future_state(other, future)
        else:
            _set_concurrent_future_state(future, other)

    def _call_check_cancel(destination):
        if destination.cancelled():
            if source_loop is None or source_loop is dest_loop:
                source.cancel()
            else:
                source_loop.call_soon_threadsafe(source.cancel)

    def _call_set_state(source):
        if (destination.cancelled() and
                dest_loop is not None and dest_loop.is_closed()):
            return
        if dest_loop is None or dest_loop is source_loop:
            _set_state(destination, source)
        else:
            if dest_loop.is_closed():
                return
            dest_loop.call_soon_threadsafe(_set_state, destination, source)

    destination.add_done_callback(_call_check_cancel)
    source.add_done_callback(_call_set_state)
def wrap_future(future, *, loop=None):
    """Wrap concurrent.futures.Future object."""
    if isfuture(future):
        return future
    assert isinstance(future, concurrent.futures.Future), \
        f'concurrent.futures.Future is expected, got {future!r}'
    if loop is None:
        loop = events.get_event_loop()
    new_future = loop.create_future()
    _chain_future(future, new_future)
    return new_future


try:
    import _asyncio
except ImportError:
    pass
else:
    # _CFuture is needed for tests.
    Future = _CFuture = _asyncio.Future
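A hedged sketch of what `wrap_future` (backed by `_chain_future`) is for: bridging a `concurrent.futures.Future` produced by a thread pool into the running asyncio loop so it can be awaited like a native Future. The `blocking_square` helper is our own illustration, not part of the module.

```python
import asyncio
import concurrent.futures

def blocking_square(x):
    # Stands in for any blocking work done on a worker thread.
    return x * x

async def main():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        cf = pool.submit(blocking_square, 7)          # concurrent future
        result = await asyncio.wrap_future(cf)        # awaitable bridge
    return result

value = asyncio.run(main())
print(value)  # 49
```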
617
Utils/PythonNew32/Lib/asyncio/locks.py
Normal file
@@ -0,0 +1,617 @@
"""Synchronization primitives."""

__all__ = ('Lock', 'Event', 'Condition', 'Semaphore',
           'BoundedSemaphore', 'Barrier')

import collections
import enum

from . import exceptions
from . import mixins


class _ContextManagerMixin:
    async def __aenter__(self):
        await self.acquire()
        # We have no use for the "as ..." clause in the with
        # statement for locks.
        return None

    async def __aexit__(self, exc_type, exc, tb):
        self.release()
class Lock(_ContextManagerMixin, mixins._LoopBoundMixin):
    """Primitive lock objects.

    A primitive lock is a synchronization primitive that is not owned
    by a particular task when locked.  A primitive lock is in one
    of two states, 'locked' or 'unlocked'.

    It is created in the unlocked state.  It has two basic methods,
    acquire() and release().  When the state is unlocked, acquire()
    changes the state to locked and returns immediately.  When the
    state is locked, acquire() blocks until a call to release() in
    another task changes it to unlocked, then the acquire() call
    resets it to locked and returns.  The release() method should only
    be called in the locked state; it changes the state to unlocked
    and returns immediately.  If an attempt is made to release an
    unlocked lock, a RuntimeError will be raised.

    When more than one task is blocked in acquire() waiting for
    the state to turn to unlocked, only one task proceeds when a
    release() call resets the state to unlocked; successive release()
    calls will unblock tasks in FIFO order.

    Locks also support the asynchronous context management protocol;
    the 'async with lock' statement should be used.

    Usage:

        lock = Lock()
        ...
        await lock.acquire()
        try:
            ...
        finally:
            lock.release()

    Context manager usage:

        lock = Lock()
        ...
        async with lock:
            ...

    Lock objects can be tested for locking state:

        if not lock.locked():
            await lock.acquire()
        else:
            # lock is acquired
            ...

    """
    def __init__(self):
        self._waiters = None
        self._locked = False

    def __repr__(self):
        res = super().__repr__()
        extra = 'locked' if self._locked else 'unlocked'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'

    def locked(self):
        """Return True if lock is acquired."""
        return self._locked
    async def acquire(self):
        """Acquire a lock.

        This method blocks until the lock is unlocked, then sets it to
        locked and returns True.
        """
        # Implement fair scheduling, where a task always waits its turn.
        # Jumping the queue if all waiters are cancelled is an optimization.
        if (not self._locked and (self._waiters is None or
                all(w.cancelled() for w in self._waiters))):
            self._locked = True
            return True

        if self._waiters is None:
            self._waiters = collections.deque()
        fut = self._get_loop().create_future()
        self._waiters.append(fut)

        try:
            try:
                await fut
            finally:
                self._waiters.remove(fut)
        except exceptions.CancelledError:
            # Currently the only exception designed to be able to occur here.

            # Ensure the lock invariant: if the lock is not claimed (or about
            # to be claimed by us) and there is a Task in waiters,
            # ensure that the Task at the head will run.
            if not self._locked:
                self._wake_up_first()
            raise

        # assert self._locked is False
        self._locked = True
        return True
    def release(self):
        """Release a lock.

        When the lock is locked, reset it to unlocked, and return.
        If any other tasks are blocked waiting for the lock to become
        unlocked, allow exactly one of them to proceed.

        When invoked on an unlocked lock, a RuntimeError is raised.

        There is no return value.
        """
        if self._locked:
            self._locked = False
            self._wake_up_first()
        else:
            raise RuntimeError('Lock is not acquired.')

    def _wake_up_first(self):
        """Ensure that the first waiter will wake up."""
        if not self._waiters:
            return
        try:
            fut = next(iter(self._waiters))
        except StopIteration:
            return

        # .done() means that the waiter is already set to wake up.
        if not fut.done():
            fut.set_result(True)
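A minimal sketch of the fairness described in the Lock docstring: two tasks contend for one Lock, `async with` acquires and releases it, and waiters are woken in FIFO order. The `order` list and `worker` names are our own.

```python
import asyncio

order = []

async def worker(lock, name):
    async with lock:
        order.append(name)
        await asyncio.sleep(0)  # hold the lock across one loop iteration

async def main():
    lock = asyncio.Lock()
    # gather() schedules the tasks in argument order, so 'a' wins the lock
    # first and 'b' queues behind it.
    await asyncio.gather(worker(lock, 'a'), worker(lock, 'b'))

asyncio.run(main())
print(order)  # ['a', 'b']
```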
class Event(mixins._LoopBoundMixin):
    """Asynchronous equivalent to threading.Event.

    Class implementing event objects.  An event manages a flag that can be
    set to true with the set() method and reset to false with the clear()
    method.  The wait() method blocks until the flag is true.  The flag is
    initially false.
    """

    def __init__(self):
        self._waiters = collections.deque()
        self._value = False

    def __repr__(self):
        res = super().__repr__()
        extra = 'set' if self._value else 'unset'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'

    def is_set(self):
        """Return True if and only if the internal flag is true."""
        return self._value

    def set(self):
        """Set the internal flag to true.  All tasks waiting for it to
        become true are awakened.  Tasks that call wait() once the flag is
        true will not block at all.
        """
        if not self._value:
            self._value = True

            for fut in self._waiters:
                if not fut.done():
                    fut.set_result(True)

    def clear(self):
        """Reset the internal flag to false.  Subsequently, tasks calling
        wait() will block until set() is called to set the internal flag
        to true again."""
        self._value = False

    async def wait(self):
        """Block until the internal flag is true.

        If the internal flag is true on entry, return True
        immediately.  Otherwise, block until another task calls
        set() to set the flag to true, then return True.
        """
        if self._value:
            return True

        fut = self._get_loop().create_future()
        self._waiters.append(fut)
        try:
            await fut
            return True
        finally:
            self._waiters.remove(fut)
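A short sketch of the Event flag described above: one task blocks in `wait()` until another task calls `set()`. The `log` list is our own instrumentation.

```python
import asyncio

log = []

async def waiter(ev):
    await ev.wait()          # blocks until the flag becomes true
    log.append('woken')

async def main():
    ev = asyncio.Event()
    task = asyncio.create_task(waiter(ev))
    await asyncio.sleep(0)   # let the waiter start and block on the event
    log.append('setting')
    ev.set()                 # wakes every waiter
    await task

asyncio.run(main())
print(log)  # ['setting', 'woken']
```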
class Condition(_ContextManagerMixin, mixins._LoopBoundMixin):
    """Asynchronous equivalent to threading.Condition.

    This class implements condition variable objects.  A condition variable
    allows one or more tasks to wait until they are notified by another
    task.

    A new Lock object is created and used as the underlying lock.
    """

    def __init__(self, lock=None):
        if lock is None:
            lock = Lock()

        self._lock = lock
        # Export the lock's locked(), acquire() and release() methods.
        self.locked = lock.locked
        self.acquire = lock.acquire
        self.release = lock.release

        self._waiters = collections.deque()

    def __repr__(self):
        res = super().__repr__()
        extra = 'locked' if self.locked() else 'unlocked'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'
    async def wait(self):
        """Wait until notified.

        If the calling task has not acquired the lock when this
        method is called, a RuntimeError is raised.

        This method releases the underlying lock, and then blocks
        until it is awakened by a notify() or notify_all() call for
        the same condition variable in another task.  Once
        awakened, it re-acquires the lock and returns True.

        This method may return spuriously, which is why the caller
        should always re-check the state and be prepared to wait()
        again.
        """
        if not self.locked():
            raise RuntimeError('cannot wait on un-acquired lock')

        fut = self._get_loop().create_future()
        self.release()
        try:
            try:
                self._waiters.append(fut)
                try:
                    await fut
                    return True
                finally:
                    self._waiters.remove(fut)

            finally:
                # Must re-acquire lock even if wait is cancelled.
                # We only catch CancelledError here, since we don't want any
                # other (fatal) errors with the future to cause us to spin.
                err = None
                while True:
                    try:
                        await self.acquire()
                        break
                    except exceptions.CancelledError as e:
                        err = e

                if err is not None:
                    try:
                        raise err  # Re-raise most recent exception instance.
                    finally:
                        err = None  # Break reference cycles.
        except BaseException:
            # Any error raised out of here _may_ have occurred after this
            # Task believed it had been successfully notified.
            # Make sure to notify another Task instead.  This may result
            # in a "spurious wakeup", which is allowed as part of the
            # Condition Variable protocol.
            self._notify(1)
            raise

    async def wait_for(self, predicate):
        """Wait until a predicate becomes true.

        The predicate should be a callable whose result will be
        interpreted as a boolean value.  The method will repeatedly
        wait() until it evaluates to true.  The final predicate value is
        the return value.
        """
        result = predicate()
        while not result:
            await self.wait()
            result = predicate()
        return result
    def notify(self, n=1):
        """By default, wake up one task waiting on this condition, if any.
        If the calling task has not acquired the lock when this method
        is called, a RuntimeError is raised.

        This method wakes up n of the tasks waiting for the condition
        variable; if fewer than n are waiting, they are all awoken.

        Note: an awakened task does not actually return from its
        wait() call until it can reacquire the lock.  Since notify() does
        not release the lock, its caller should.
        """
        if not self.locked():
            raise RuntimeError('cannot notify on un-acquired lock')
        self._notify(n)

    def _notify(self, n):
        idx = 0
        for fut in self._waiters:
            if idx >= n:
                break

            if not fut.done():
                idx += 1
                fut.set_result(False)

    def notify_all(self):
        """Wake up all tasks waiting on this condition.  This method acts
        like notify(), but wakes up all waiting tasks instead of one.  If
        the calling task has not acquired the lock when this method is
        called, a RuntimeError is raised.
        """
        self.notify(len(self._waiters))
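A producer/consumer sketch of `wait_for()` with a Condition: the consumer re-checks the predicate on each wakeup, so the spurious wakeups allowed by the protocol above are harmless. The `items` queue and task names are our own illustration.

```python
import asyncio

items = []

async def consumer(cond):
    async with cond:
        # wait_for() loops: wait(), then re-check the predicate.
        await cond.wait_for(lambda: len(items) > 0)
        return items.pop(0)

async def producer(cond):
    async with cond:
        items.append('job')
        cond.notify()        # wake one waiter while holding the lock

async def main():
    cond = asyncio.Condition()
    consume = asyncio.create_task(consumer(cond))
    await asyncio.sleep(0)   # consumer acquires the lock and starts waiting
    await producer(cond)
    return await consume

got = asyncio.run(main())
print(got)  # 'job'
```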
class Semaphore(_ContextManagerMixin, mixins._LoopBoundMixin):
    """A Semaphore implementation.

    A semaphore manages an internal counter which is decremented by each
    acquire() call and incremented by each release() call.  The counter
    can never go below zero; when acquire() finds that it is zero, it
    blocks, waiting until some other task calls release().

    Semaphores also support the context management protocol.

    The optional argument gives the initial value for the internal
    counter; it defaults to 1.  If the value given is less than 0,
    ValueError is raised.
    """

    def __init__(self, value=1):
        if value < 0:
            raise ValueError("Semaphore initial value must be >= 0")
        self._waiters = None
        self._value = value

    def __repr__(self):
        res = super().__repr__()
        extra = 'locked' if self.locked() else f'unlocked, value:{self._value}'
        if self._waiters:
            extra = f'{extra}, waiters:{len(self._waiters)}'
        return f'<{res[1:-1]} [{extra}]>'

    def locked(self):
        """Return True if the semaphore cannot be acquired immediately."""
        # Due to state, or FIFO rules (must allow others to run first).
        return self._value == 0 or (
            any(not w.cancelled() for w in (self._waiters or ())))
    async def acquire(self):
        """Acquire a semaphore.

        If the internal counter is larger than zero on entry,
        decrement it by one and return True immediately.  If it is
        zero on entry, block, waiting until some other task has
        called release() to make it larger than 0, and then return
        True.
        """
        if not self.locked():
            # Maintain FIFO, wait for others to start even if _value > 0.
            self._value -= 1
            return True

        if self._waiters is None:
            self._waiters = collections.deque()
        fut = self._get_loop().create_future()
        self._waiters.append(fut)

        try:
            try:
                await fut
            finally:
                self._waiters.remove(fut)
        except exceptions.CancelledError:
            # Currently the only exception designed to be able to occur here.
            if fut.done() and not fut.cancelled():
                # Our Future was successfully set to True via _wake_up_next(),
                # but we are not about to successfully acquire().  Therefore
                # we must undo the bookkeeping already done and attempt to
                # wake up someone else.
                self._value += 1
            raise

        finally:
            # New waiters may have arrived but had to wait due to FIFO.
            # Wake up as many as are allowed.
            while self._value > 0:
                if not self._wake_up_next():
                    break  # There was no-one to wake up.
        return True
    def release(self):
        """Release a semaphore, incrementing the internal counter by one.

        When it was zero on entry and another task is waiting for it to
        become larger than zero again, wake up that task.
        """
        self._value += 1
        self._wake_up_next()

    def _wake_up_next(self):
        """Wake up the first waiter that isn't done."""
        if not self._waiters:
            return False

        for fut in self._waiters:
            if not fut.done():
                self._value -= 1
                fut.set_result(True)
                # `fut` is now `done()` and not `cancelled()`.
                return True
        return False
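A sketch of the counter semantics above: a `Semaphore(2)` caps concurrency at two tasks at a time. The `peak`/`running` counters are our own instrumentation.

```python
import asyncio

peak = 0     # highest number of jobs seen running at once
running = 0

async def job(sem):
    global peak, running
    async with sem:
        running += 1
        peak = max(peak, running)
        await asyncio.sleep(0.01)  # simulate work while holding a slot
        running -= 1

async def main():
    sem = asyncio.Semaphore(2)
    await asyncio.gather(*(job(sem) for _ in range(5)))

asyncio.run(main())
print(peak)  # 2
```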
class BoundedSemaphore(Semaphore):
    """A bounded semaphore implementation.

    This raises ValueError in release() if it would increase the value
    above the initial value.
    """

    def __init__(self, value=1):
        self._bound_value = value
        super().__init__(value)

    def release(self):
        if self._value >= self._bound_value:
            raise ValueError('BoundedSemaphore released too many times')
        super().release()
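A quick sketch of the bound check above: releasing a `BoundedSemaphore` past its initial value raises `ValueError`, unlike a plain `Semaphore`. The `outcome` variable is our own.

```python
import asyncio

async def main():
    sem = asyncio.BoundedSemaphore(1)
    async with sem:
        pass                 # counter is back at 1 after this block
    try:
        sem.release()        # would push the counter to 2 -> error
        return 'no error'
    except ValueError:
        return 'ValueError'

outcome = asyncio.run(main())
print(outcome)  # 'ValueError'
```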
class _BarrierState(enum.Enum):
    FILLING = 'filling'
    DRAINING = 'draining'
    RESETTING = 'resetting'
    BROKEN = 'broken'
class Barrier(mixins._LoopBoundMixin):
    """Asyncio equivalent to threading.Barrier.

    Implements a Barrier primitive.
    Useful for synchronizing a fixed number of tasks at known
    synchronization points.  Tasks block on 'wait()' and are
    simultaneously awoken once they have all made their call.
    """

    def __init__(self, parties):
        """Create a barrier, initialised to 'parties' tasks."""
        if parties < 1:
            raise ValueError('parties must be >= 1')

        self._cond = Condition()  # notify all tasks when state changes

        self._parties = parties
        self._state = _BarrierState.FILLING
        self._count = 0  # count tasks in Barrier

    def __repr__(self):
        res = super().__repr__()
        extra = f'{self._state.value}'
        if not self.broken:
            extra += f', waiters:{self.n_waiting}/{self.parties}'
        return f'<{res[1:-1]} [{extra}]>'

    async def __aenter__(self):
        # Wait until the barrier reaches the number of parties; once
        # draining starts, release and return the index of the waiting task.
        return await self.wait()

    async def __aexit__(self, *args):
        pass
    async def wait(self):
        """Wait for the barrier.

        When the specified number of tasks have started waiting, they are
        all simultaneously awoken.
        Returns a unique and individual index number from 0 to 'parties-1'.
        """
        async with self._cond:
            await self._block()  # Block while the barrier drains or resets.
            try:
                index = self._count
                self._count += 1
                if index + 1 == self._parties:
                    # We release the barrier
                    await self._release()
                else:
                    await self._wait()
                return index
            finally:
                self._count -= 1
                # Wake up any tasks waiting for barrier to drain.
                self._exit()
    async def _block(self):
        # Block until the barrier is ready for us,
        # or raise an exception if it is broken.
        #
        # If it is draining or resetting, wait until done,
        # unless a CancelledError occurs.
        await self._cond.wait_for(
            lambda: self._state not in (
                _BarrierState.DRAINING, _BarrierState.RESETTING
            )
        )

        # See if the barrier is in a broken state.
        if self._state is _BarrierState.BROKEN:
            raise exceptions.BrokenBarrierError("Barrier aborted")

    async def _release(self):
        # Release the tasks waiting in the barrier.

        # Enter draining state.
        # Next waiting tasks will be blocked until the end of draining.
        self._state = _BarrierState.DRAINING
        self._cond.notify_all()

    async def _wait(self):
        # Wait in the barrier until we are released.  Raise an exception
        # if the barrier is reset or broken.

        # Wait for the end of filling,
        # unless a CancelledError occurs.
        await self._cond.wait_for(lambda: self._state is not _BarrierState.FILLING)

        if self._state in (_BarrierState.BROKEN, _BarrierState.RESETTING):
            raise exceptions.BrokenBarrierError("Abort or reset of barrier")

    def _exit(self):
        # If we are the last task to exit the barrier, signal any tasks
        # waiting for the barrier to drain.
        if self._count == 0:
            if self._state in (_BarrierState.RESETTING, _BarrierState.DRAINING):
                self._state = _BarrierState.FILLING
            self._cond.notify_all()

    async def reset(self):
        """Reset the barrier to the initial state.

        Any tasks currently waiting will get the BrokenBarrier exception
        raised.
        """
        async with self._cond:
            if self._count > 0:
                if self._state is not _BarrierState.RESETTING:
                    # Reset the barrier, waking up tasks.
                    self._state = _BarrierState.RESETTING
            else:
                self._state = _BarrierState.FILLING
            self._cond.notify_all()

    async def abort(self):
        """Place the barrier into a 'broken' state.

        Useful in case of error.  Any currently waiting tasks and tasks
        attempting to 'wait()' will have BrokenBarrierError raised.
        """
        async with self._cond:
            self._state = _BarrierState.BROKEN
            self._cond.notify_all()
    @property
    def parties(self):
        """Return the number of tasks required to trip the barrier."""
        return self._parties

    @property
    def n_waiting(self):
        """Return the number of tasks currently waiting at the barrier."""
        if self._state is _BarrierState.FILLING:
            return self._count
        return 0

    @property
    def broken(self):
        """Return True if the barrier is in a broken state."""
        return self._state is _BarrierState.BROKEN
7
Utils/PythonNew32/Lib/asyncio/log.py
Normal file
@@ -0,0 +1,7 @@
"""Logging configuration."""

import logging


# Name the logger after the package.
logger = logging.getLogger(__package__)
21
Utils/PythonNew32/Lib/asyncio/mixins.py
Normal file
@@ -0,0 +1,21 @@
"""Event loop mixins."""

import threading

from . import events

_global_lock = threading.Lock()


class _LoopBoundMixin:
    _loop = None

    def _get_loop(self):
        loop = events._get_running_loop()

        if self._loop is None:
            with _global_lock:
                if self._loop is None:
                    self._loop = loop
        if loop is not self._loop:
            raise RuntimeError(f'{self!r} is bound to a different event loop')
        return loop
894
Utils/PythonNew32/Lib/asyncio/proactor_events.py
Normal file
@@ -0,0 +1,894 @@
"""Event loop using a proactor and related classes.

A proactor is a "notify-on-completion" multiplexer.  Currently a
proactor is only implemented on Windows with IOCP.
"""

__all__ = 'BaseProactorEventLoop',

import io
import os
import socket
import warnings
import signal
import threading
import collections

from . import base_events
from . import constants
from . import futures
from . import exceptions
from . import protocols
from . import sslproto
from . import transports
from . import trsock
from .log import logger
def _set_socket_extra(transport, sock):
    transport._extra['socket'] = trsock.TransportSocket(sock)

    try:
        transport._extra['sockname'] = sock.getsockname()
    except socket.error:
        if transport._loop.get_debug():
            logger.warning(
                "getsockname() failed on %r", sock, exc_info=True)

    if 'peername' not in transport._extra:
        try:
            transport._extra['peername'] = sock.getpeername()
        except socket.error:
            # UDP sockets may not have a peer name.
            transport._extra['peername'] = None
class _ProactorBasePipeTransport(transports._FlowControlMixin,
                                 transports.BaseTransport):
    """Base class for pipe and socket transports."""

    def __init__(self, loop, sock, protocol, waiter=None,
                 extra=None, server=None):
        super().__init__(extra, loop)
        self._set_extra(sock)
        self._sock = sock
        self.set_protocol(protocol)
        self._server = server
        self._buffer = None  # None or bytearray.
        self._read_fut = None
        self._write_fut = None
        self._pending_write = 0
        self._conn_lost = 0
        self._closing = False  # Set when close() called.
        self._called_connection_lost = False
        self._eof_written = False
        if self._server is not None:
            self._server._attach(self)
        self._loop.call_soon(self._protocol.connection_made, self)
        if waiter is not None:
            # Only wake up the waiter when connection_made() has been called.
            self._loop.call_soon(futures._set_result_unless_cancelled,
                                 waiter, None)

    def __repr__(self):
        info = [self.__class__.__name__]
        if self._sock is None:
            info.append('closed')
        elif self._closing:
            info.append('closing')
        if self._sock is not None:
            info.append(f'fd={self._sock.fileno()}')
        if self._read_fut is not None:
            info.append(f'read={self._read_fut!r}')
        if self._write_fut is not None:
            info.append(f'write={self._write_fut!r}')
        if self._buffer:
            info.append(f'write_bufsize={len(self._buffer)}')
        if self._eof_written:
            info.append('EOF written')
        return '<{}>'.format(' '.join(info))
    def _set_extra(self, sock):
        self._extra['pipe'] = sock

    def set_protocol(self, protocol):
        self._protocol = protocol

    def get_protocol(self):
        return self._protocol

    def is_closing(self):
        return self._closing

    def close(self):
        if self._closing:
            return
        self._closing = True
        self._conn_lost += 1
        if not self._buffer and self._write_fut is None:
            self._loop.call_soon(self._call_connection_lost, None)
        if self._read_fut is not None:
            self._read_fut.cancel()
            self._read_fut = None
def __del__(self, _warn=warnings.warn):
|
||||
if self._sock is not None:
|
||||
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
|
||||
self._sock.close()
|
||||
|
||||
def _fatal_error(self, exc, message='Fatal error on pipe transport'):
|
||||
try:
|
||||
if isinstance(exc, OSError):
|
||||
if self._loop.get_debug():
|
||||
logger.debug("%r: %s", self, message, exc_info=True)
|
||||
else:
|
||||
self._loop.call_exception_handler({
|
||||
'message': message,
|
||||
'exception': exc,
|
||||
'transport': self,
|
||||
'protocol': self._protocol,
|
||||
})
|
||||
finally:
|
||||
self._force_close(exc)
|
||||
|
||||
def _force_close(self, exc):
|
||||
if self._empty_waiter is not None and not self._empty_waiter.done():
|
||||
if exc is None:
|
||||
self._empty_waiter.set_result(None)
|
||||
else:
|
||||
self._empty_waiter.set_exception(exc)
|
||||
if self._closing and self._called_connection_lost:
|
||||
return
|
||||
self._closing = True
|
||||
self._conn_lost += 1
|
||||
if self._write_fut:
|
||||
self._write_fut.cancel()
|
||||
self._write_fut = None
|
||||
if self._read_fut:
|
||||
self._read_fut.cancel()
|
||||
self._read_fut = None
|
||||
self._pending_write = 0
|
||||
self._buffer = None
|
||||
self._loop.call_soon(self._call_connection_lost, exc)
|
||||
|
||||
def _call_connection_lost(self, exc):
|
||||
if self._called_connection_lost:
|
||||
return
|
||||
try:
|
||||
self._protocol.connection_lost(exc)
|
||||
finally:
|
||||
# XXX If there is a pending overlapped read on the other
|
||||
# end then it may fail with ERROR_NETNAME_DELETED if we
|
||||
# just close our end. First calling shutdown() seems to
|
||||
# cure it, but maybe using DisconnectEx() would be better.
|
||||
if hasattr(self._sock, 'shutdown') and self._sock.fileno() != -1:
|
||||
self._sock.shutdown(socket.SHUT_RDWR)
|
||||
self._sock.close()
|
||||
self._sock = None
|
||||
server = self._server
|
||||
if server is not None:
|
||||
server._detach(self)
|
||||
self._server = None
|
||||
self._called_connection_lost = True
|
||||
|
||||
def get_write_buffer_size(self):
|
||||
size = self._pending_write
|
||||
if self._buffer is not None:
|
||||
size += len(self._buffer)
|
||||
return size
|
||||
|
||||
|
||||
class _ProactorReadPipeTransport(_ProactorBasePipeTransport,
                                 transports.ReadTransport):
    """Transport for read pipes."""

    def __init__(self, loop, sock, protocol, waiter=None,
                 extra=None, server=None, buffer_size=65536):
        self._pending_data_length = -1
        self._paused = True
        super().__init__(loop, sock, protocol, waiter, extra, server)

        self._data = bytearray(buffer_size)
        self._loop.call_soon(self._loop_reading)
        self._paused = False

    def is_reading(self):
        return not self._paused and not self._closing

    def pause_reading(self):
        if self._closing or self._paused:
            return
        self._paused = True

        # bpo-33694: Don't cancel self._read_fut because cancelling an
        # overlapped WSASend() silently loses data with the current proactor
        # implementation.
        #
        # If CancelIoEx() fails with ERROR_NOT_FOUND, it means that WSASend()
        # completed (even if HasOverlappedIoCompleted() returns 0), but
        # Overlapped.cancel() currently silently ignores the ERROR_NOT_FOUND
        # error.  Once the overlapped is ignored, the IOCP loop will ignore
        # the completion I/O event and so not read the result of the
        # overlapped WSARecv().

        if self._loop.get_debug():
            logger.debug("%r pauses reading", self)

    def resume_reading(self):
        if self._closing or not self._paused:
            return

        self._paused = False
        if self._read_fut is None:
            self._loop.call_soon(self._loop_reading, None)

        length = self._pending_data_length
        self._pending_data_length = -1
        if length > -1:
            # Call the protocol method after calling _loop_reading(),
            # since the protocol can decide to pause reading again.
            self._loop.call_soon(self._data_received,
                                 self._data[:length], length)

        if self._loop.get_debug():
            logger.debug("%r resumes reading", self)

    def _eof_received(self):
        if self._loop.get_debug():
            logger.debug("%r received EOF", self)

        try:
            keep_open = self._protocol.eof_received()
        except (SystemExit, KeyboardInterrupt):
            raise
        except BaseException as exc:
            self._fatal_error(
                exc, 'Fatal error: protocol.eof_received() call failed.')
            return

        if not keep_open:
            self.close()

    def _data_received(self, data, length):
        if self._paused:
            # Don't call any protocol method while reading is paused.
            # The protocol will be called on resume_reading().
            assert self._pending_data_length == -1
            self._pending_data_length = length
            return

        if length == 0:
            self._eof_received()
            return

        if isinstance(self._protocol, protocols.BufferedProtocol):
            try:
                protocols._feed_data_to_buffered_proto(self._protocol, data)
            except (SystemExit, KeyboardInterrupt):
                raise
            except BaseException as exc:
                self._fatal_error(exc,
                                  'Fatal error: protocol.buffer_updated() '
                                  'call failed.')
                return
        else:
            self._protocol.data_received(data)

    def _loop_reading(self, fut=None):
        length = -1
        data = None
        try:
            if fut is not None:
                assert self._read_fut is fut or (self._read_fut is None and
                                                 self._closing)
                self._read_fut = None
                if fut.done():
                    # deliver data later in "finally" clause
                    length = fut.result()
                    if length == 0:
                        # we got end-of-file so no need to reschedule a new
                        # read
                        return

                    # It's a new slice, so make it immutable so protocols
                    # upstream don't have problems with it.
                    data = bytes(memoryview(self._data)[:length])
                else:
                    # the future will be replaced by next proactor.recv call
                    fut.cancel()

            if self._closing:
                # since close() has been called we ignore any read data
                return

            # bpo-33694: buffer_updated() has currently no fast path because
            # of a data loss issue caused by overlapped WSASend()
            # cancellation.

            if not self._paused:
                # reschedule a new read
                self._read_fut = self._loop._proactor.recv_into(self._sock,
                                                                self._data)
        except ConnectionAbortedError as exc:
            if not self._closing:
                self._fatal_error(exc, 'Fatal read error on pipe transport')
            elif self._loop.get_debug():
                logger.debug("Read error on pipe transport while closing",
                             exc_info=True)
        except ConnectionResetError as exc:
            self._force_close(exc)
        except OSError as exc:
            self._fatal_error(exc, 'Fatal read error on pipe transport')
        except exceptions.CancelledError:
            if not self._closing:
                raise
        else:
            if not self._paused:
                self._read_fut.add_done_callback(self._loop_reading)
        finally:
            if length > -1:
                self._data_received(data, length)

class _ProactorBaseWritePipeTransport(_ProactorBasePipeTransport,
                                      transports.WriteTransport):
    """Transport for write pipes."""

    _start_tls_compatible = True

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self._empty_waiter = None

    def write(self, data):
        if not isinstance(data, (bytes, bytearray, memoryview)):
            raise TypeError(
                f"data argument must be a bytes-like object, "
                f"not {type(data).__name__}")
        if self._eof_written:
            raise RuntimeError('write_eof() already called')
        if self._empty_waiter is not None:
            raise RuntimeError('unable to write; sendfile is in progress')

        if not data:
            return

        if self._conn_lost:
            if self._conn_lost >= constants.LOG_THRESHOLD_FOR_CONNLOST_WRITES:
                logger.warning('socket.send() raised exception.')
            self._conn_lost += 1
            return

        # Observable states:
        # 1. IDLE: _write_fut and _buffer both None
        # 2. WRITING: _write_fut set; _buffer None
        # 3. BACKED UP: _write_fut set; _buffer a bytearray
        # We always copy the data, so the caller can't modify it
        # while we're still waiting for the I/O to happen.
        if self._write_fut is None:  # IDLE -> WRITING
            assert self._buffer is None
            # Pass a copy, except if it's already immutable.
            self._loop_writing(data=bytes(data))
        elif not self._buffer:  # WRITING -> BACKED UP
            # Make a mutable copy which we can extend.
            self._buffer = bytearray(data)
            self._maybe_pause_protocol()
        else:  # BACKED UP
            # Append to buffer (also copies).
            self._buffer.extend(data)
            self._maybe_pause_protocol()

    def _loop_writing(self, f=None, data=None):
        try:
            if f is not None and self._write_fut is None and self._closing:
                # XXX most likely self._force_close() has been called, and
                # it has set self._write_fut to None.
                return
            assert f is self._write_fut
            self._write_fut = None
            self._pending_write = 0
            if f:
                f.result()
            if data is None:
                data = self._buffer
                self._buffer = None
            if not data:
                if self._closing:
                    self._loop.call_soon(self._call_connection_lost, None)
                if self._eof_written:
                    self._sock.shutdown(socket.SHUT_WR)
                # Now that we've reduced the buffer size, tell the
                # protocol to resume writing if it was paused.  Note that
                # we do this last since the callback is called immediately
                # and it may add more data to the buffer (even causing the
                # protocol to be paused again).
                self._maybe_resume_protocol()
            else:
                self._write_fut = self._loop._proactor.send(self._sock, data)
                if not self._write_fut.done():
                    assert self._pending_write == 0
                    self._pending_write = len(data)
                    self._write_fut.add_done_callback(self._loop_writing)
                    self._maybe_pause_protocol()
                else:
                    self._write_fut.add_done_callback(self._loop_writing)
            if self._empty_waiter is not None and self._write_fut is None:
                self._empty_waiter.set_result(None)
        except ConnectionResetError as exc:
            self._force_close(exc)
        except OSError as exc:
            self._fatal_error(exc, 'Fatal write error on pipe transport')

    def can_write_eof(self):
        return True

    def write_eof(self):
        self.close()

    def abort(self):
        self._force_close(None)

    def _make_empty_waiter(self):
        if self._empty_waiter is not None:
            raise RuntimeError("Empty waiter is already set")
        self._empty_waiter = self._loop.create_future()
        if self._write_fut is None:
            self._empty_waiter.set_result(None)
        return self._empty_waiter

    def _reset_empty_waiter(self):
        self._empty_waiter = None

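The IDLE / WRITING / BACKED UP states documented in `write()` above can be illustrated with a small standalone model (a sketch for explanation only; `WriteStateModel` is not part of the module, and a plain `bytes` object stands in for the real write future):

```python
class WriteStateModel:
    """Minimal model of _ProactorBaseWritePipeTransport's write states."""

    def __init__(self):
        self.write_fut = None  # stands in for self._write_fut
        self.buffer = None     # stands in for self._buffer

    def state(self):
        if self.write_fut is None:
            return 'IDLE'
        return 'WRITING' if not self.buffer else 'BACKED UP'

    def write(self, data):
        if self.write_fut is None:        # IDLE -> WRITING
            self.write_fut = bytes(data)  # pretend the I/O was kicked off
        elif not self.buffer:             # WRITING -> BACKED UP
            self.buffer = bytearray(data)
        else:                             # BACKED UP: append (also copies)
            self.buffer.extend(data)


m = WriteStateModel()
assert m.state() == 'IDLE'
m.write(b'a')
assert m.state() == 'WRITING'      # one write in flight, no backlog
m.write(b'b')
assert m.state() == 'BACKED UP'    # backlog starts accumulating
m.write(b'c')
assert bytes(m.buffer) == b'bc'    # later writes extend the buffer
```

In the real transport, `_loop_writing()` drains the buffer when the in-flight future completes, moving back toward IDLE.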
class _ProactorWritePipeTransport(_ProactorBaseWritePipeTransport):
    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self._read_fut = self._loop._proactor.recv(self._sock, 16)
        self._read_fut.add_done_callback(self._pipe_closed)

    def _pipe_closed(self, fut):
        if fut.cancelled():
            # the transport has been closed
            return
        assert fut.result() == b''
        if self._closing:
            assert self._read_fut is None
            return
        assert fut is self._read_fut, (fut, self._read_fut)
        self._read_fut = None
        if self._write_fut is not None:
            self._force_close(BrokenPipeError())
        else:
            self.close()

class _ProactorDatagramTransport(_ProactorBasePipeTransport,
                                 transports.DatagramTransport):
    max_size = 256 * 1024

    def __init__(self, loop, sock, protocol, address=None,
                 waiter=None, extra=None):
        self._address = address
        self._empty_waiter = None
        self._buffer_size = 0
        # We don't need to call _protocol.connection_made() since our base
        # constructor does it for us.
        super().__init__(loop, sock, protocol, waiter=waiter, extra=extra)

        # The base constructor sets _buffer = None, so we set it here
        self._buffer = collections.deque()
        self._loop.call_soon(self._loop_reading)

    def _set_extra(self, sock):
        _set_socket_extra(self, sock)

    def get_write_buffer_size(self):
        return self._buffer_size

    def abort(self):
        self._force_close(None)

    def sendto(self, data, addr=None):
        if not isinstance(data, (bytes, bytearray, memoryview)):
            raise TypeError('data argument must be bytes-like object (%r)',
                            type(data))

        if self._address is not None and addr not in (None, self._address):
            raise ValueError(
                f'Invalid address: must be None or {self._address}')

        if self._conn_lost and self._address:
            if self._conn_lost >= constants.LOG_THRESHOLD_FOR_CONNLOST_WRITES:
                logger.warning('socket.sendto() raised exception.')
            self._conn_lost += 1
            return

        # Ensure that what we buffer is immutable.
        self._buffer.append((bytes(data), addr))
        self._buffer_size += len(data) + 8  # include header bytes

        if self._write_fut is None:
            # No current write operations are active, kick one off
            self._loop_writing()
        # else: A write operation is already kicked off

        self._maybe_pause_protocol()

    def _loop_writing(self, fut=None):
        try:
            if self._conn_lost:
                return

            assert fut is self._write_fut
            self._write_fut = None
            if fut:
                # We are in a _loop_writing() done callback, get the result
                fut.result()

            if not self._buffer or (self._conn_lost and self._address):
                # The connection has been closed
                if self._closing:
                    self._loop.call_soon(self._call_connection_lost, None)
                return

            data, addr = self._buffer.popleft()
            self._buffer_size -= len(data)
            if self._address is not None:
                self._write_fut = self._loop._proactor.send(self._sock,
                                                            data)
            else:
                self._write_fut = self._loop._proactor.sendto(self._sock,
                                                              data,
                                                              addr=addr)
        except OSError as exc:
            self._protocol.error_received(exc)
        except Exception as exc:
            self._fatal_error(exc, 'Fatal write error on datagram transport')
        else:
            self._write_fut.add_done_callback(self._loop_writing)
            self._maybe_resume_protocol()

    def _loop_reading(self, fut=None):
        data = None
        try:
            if self._conn_lost:
                return

            assert self._read_fut is fut or (self._read_fut is None and
                                             self._closing)

            self._read_fut = None
            if fut is not None:
                res = fut.result()

                if self._closing:
                    # since close() has been called we ignore any read data
                    data = None
                    return

                if self._address is not None:
                    data, addr = res, self._address
                else:
                    data, addr = res

            if self._conn_lost:
                return
            if self._address is not None:
                self._read_fut = self._loop._proactor.recv(self._sock,
                                                           self.max_size)
            else:
                self._read_fut = self._loop._proactor.recvfrom(self._sock,
                                                               self.max_size)
        except OSError as exc:
            self._protocol.error_received(exc)
        except exceptions.CancelledError:
            if not self._closing:
                raise
        else:
            if self._read_fut is not None:
                self._read_fut.add_done_callback(self._loop_reading)
        finally:
            if data:
                self._protocol.datagram_received(data, addr)

class _ProactorDuplexPipeTransport(_ProactorReadPipeTransport,
                                   _ProactorBaseWritePipeTransport,
                                   transports.Transport):
    """Transport for duplex pipes."""

    def can_write_eof(self):
        return False

    def write_eof(self):
        raise NotImplementedError

class _ProactorSocketTransport(_ProactorReadPipeTransport,
                               _ProactorBaseWritePipeTransport,
                               transports.Transport):
    """Transport for connected sockets."""

    _sendfile_compatible = constants._SendfileMode.TRY_NATIVE

    def __init__(self, loop, sock, protocol, waiter=None,
                 extra=None, server=None):
        super().__init__(loop, sock, protocol, waiter, extra, server)
        base_events._set_nodelay(sock)

    def _set_extra(self, sock):
        _set_socket_extra(self, sock)

    def can_write_eof(self):
        return True

    def write_eof(self):
        if self._closing or self._eof_written:
            return
        self._eof_written = True
        if self._write_fut is None:
            self._sock.shutdown(socket.SHUT_WR)

class BaseProactorEventLoop(base_events.BaseEventLoop):

    def __init__(self, proactor):
        super().__init__()
        logger.debug('Using proactor: %s', proactor.__class__.__name__)
        self._proactor = proactor
        self._selector = proactor  # convenient alias
        self._self_reading_future = None
        self._accept_futures = {}  # socket file descriptor => Future
        proactor.set_loop(self)
        self._make_self_pipe()
        if threading.current_thread() is threading.main_thread():
            # wakeup fd can only be installed to a file descriptor from the
            # main thread
            signal.set_wakeup_fd(self._csock.fileno())

    def _make_socket_transport(self, sock, protocol, waiter=None,
                               extra=None, server=None):
        return _ProactorSocketTransport(self, sock, protocol, waiter,
                                        extra, server)

    def _make_ssl_transport(
            self, rawsock, protocol, sslcontext, waiter=None,
            *, server_side=False, server_hostname=None,
            extra=None, server=None,
            ssl_handshake_timeout=None,
            ssl_shutdown_timeout=None):
        ssl_protocol = sslproto.SSLProtocol(
            self, protocol, sslcontext, waiter,
            server_side, server_hostname,
            ssl_handshake_timeout=ssl_handshake_timeout,
            ssl_shutdown_timeout=ssl_shutdown_timeout)
        _ProactorSocketTransport(self, rawsock, ssl_protocol,
                                 extra=extra, server=server)
        return ssl_protocol._app_transport

    def _make_datagram_transport(self, sock, protocol,
                                 address=None, waiter=None, extra=None):
        return _ProactorDatagramTransport(self, sock, protocol, address,
                                          waiter, extra)

    def _make_duplex_pipe_transport(self, sock, protocol, waiter=None,
                                    extra=None):
        return _ProactorDuplexPipeTransport(self,
                                            sock, protocol, waiter, extra)

    def _make_read_pipe_transport(self, sock, protocol, waiter=None,
                                  extra=None):
        return _ProactorReadPipeTransport(self, sock, protocol, waiter, extra)

    def _make_write_pipe_transport(self, sock, protocol, waiter=None,
                                   extra=None):
        # We want connection_lost() to be called when other end closes
        return _ProactorWritePipeTransport(self,
                                           sock, protocol, waiter, extra)

    def close(self):
        if self.is_running():
            raise RuntimeError("Cannot close a running event loop")
        if self.is_closed():
            return

        if threading.current_thread() is threading.main_thread():
            signal.set_wakeup_fd(-1)
        # Call these methods before closing the event loop (before calling
        # BaseEventLoop.close), because they can schedule callbacks with
        # call_soon(), which is forbidden when the event loop is closed.
        self._stop_accept_futures()
        self._close_self_pipe()
        self._proactor.close()
        self._proactor = None
        self._selector = None

        # Close the event loop
        super().close()

    async def sock_recv(self, sock, n):
        return await self._proactor.recv(sock, n)

    async def sock_recv_into(self, sock, buf):
        return await self._proactor.recv_into(sock, buf)

    async def sock_recvfrom(self, sock, bufsize):
        return await self._proactor.recvfrom(sock, bufsize)

    async def sock_recvfrom_into(self, sock, buf, nbytes=0):
        if not nbytes:
            nbytes = len(buf)

        return await self._proactor.recvfrom_into(sock, buf, nbytes)

    async def sock_sendall(self, sock, data):
        return await self._proactor.send(sock, data)

    async def sock_sendto(self, sock, data, address):
        return await self._proactor.sendto(sock, data, 0, address)

    async def sock_connect(self, sock, address):
        if self._debug and sock.gettimeout() != 0:
            raise ValueError("the socket must be non-blocking")
        return await self._proactor.connect(sock, address)

    async def sock_accept(self, sock):
        return await self._proactor.accept(sock)

    async def _sock_sendfile_native(self, sock, file, offset, count):
        try:
            fileno = file.fileno()
        except (AttributeError, io.UnsupportedOperation) as err:
            raise exceptions.SendfileNotAvailableError("not a regular file")
        try:
            fsize = os.fstat(fileno).st_size
        except OSError:
            raise exceptions.SendfileNotAvailableError("not a regular file")
        blocksize = count if count else fsize
        if not blocksize:
            return 0  # empty file

        blocksize = min(blocksize, 0xffff_ffff)
        end_pos = min(offset + count, fsize) if count else fsize
        offset = min(offset, fsize)
        total_sent = 0
        try:
            while True:
                blocksize = min(end_pos - offset, blocksize)
                if blocksize <= 0:
                    return total_sent
                await self._proactor.sendfile(sock, file, offset, blocksize)
                offset += blocksize
                total_sent += blocksize
        finally:
            if total_sent > 0:
                file.seek(offset)

    async def _sendfile_native(self, transp, file, offset, count):
        resume_reading = transp.is_reading()
        transp.pause_reading()
        await transp._make_empty_waiter()
        try:
            return await self.sock_sendfile(transp._sock, file, offset, count,
                                            fallback=False)
        finally:
            transp._reset_empty_waiter()
            if resume_reading:
                transp.resume_reading()

    def _close_self_pipe(self):
        if self._self_reading_future is not None:
            self._self_reading_future.cancel()
            self._self_reading_future = None
        self._ssock.close()
        self._ssock = None
        self._csock.close()
        self._csock = None
        self._internal_fds -= 1

    def _make_self_pipe(self):
        # A self-socket, really. :-)
        self._ssock, self._csock = socket.socketpair()
        self._ssock.setblocking(False)
        self._csock.setblocking(False)
        self._internal_fds += 1

    def _loop_self_reading(self, f=None):
        try:
            if f is not None:
                f.result()  # may raise
            if self._self_reading_future is not f:
                # When we scheduled this Future, we assigned it to
                # _self_reading_future. If it's not there now, something has
                # tried to cancel the loop while this callback was still in
                # the queue (see windows_events.ProactorEventLoop.run_forever).
                # In that case stop here instead of continuing to schedule a
                # new iteration.
                return
            f = self._proactor.recv(self._ssock, 4096)
        except exceptions.CancelledError:
            # _close_self_pipe() has been called, stop waiting for data
            return
        except (SystemExit, KeyboardInterrupt):
            raise
        except BaseException as exc:
            self.call_exception_handler({
                'message': 'Error on reading from the event loop self pipe',
                'exception': exc,
                'loop': self,
            })
        else:
            self._self_reading_future = f
            f.add_done_callback(self._loop_self_reading)

    def _write_to_self(self):
        # This may be called from a different thread, possibly after
        # _close_self_pipe() has been called or even while it is
        # running.  Guard for self._csock being None or closed.  When
        # a socket is closed, send() raises OSError (with errno set to
        # EBADF, but let's not rely on the exact error code).
        csock = self._csock
        if csock is None:
            return

        try:
            csock.send(b'\0')
        except OSError:
            if self._debug:
                logger.debug("Fail to write a null byte into the "
                             "self-pipe socket",
                             exc_info=True)

    def _start_serving(self, protocol_factory, sock,
                       sslcontext=None, server=None, backlog=100,
                       ssl_handshake_timeout=None,
                       ssl_shutdown_timeout=None):

        def loop(f=None):
            try:
                if f is not None:
                    conn, addr = f.result()
                    if self._debug:
                        logger.debug("%r got a new connection from %r: %r",
                                     server, addr, conn)
                    protocol = protocol_factory()
                    if sslcontext is not None:
                        self._make_ssl_transport(
                            conn, protocol, sslcontext, server_side=True,
                            extra={'peername': addr}, server=server,
                            ssl_handshake_timeout=ssl_handshake_timeout,
                            ssl_shutdown_timeout=ssl_shutdown_timeout)
                    else:
                        self._make_socket_transport(
                            conn, protocol,
                            extra={'peername': addr}, server=server)
                if self.is_closed():
                    return
                f = self._proactor.accept(sock)
            except OSError as exc:
                if sock.fileno() != -1:
                    self.call_exception_handler({
                        'message': 'Accept failed on a socket',
                        'exception': exc,
                        'socket': trsock.TransportSocket(sock),
                    })
                    sock.close()
                elif self._debug:
                    logger.debug("Accept failed on socket %r",
                                 sock, exc_info=True)
            except exceptions.CancelledError:
                sock.close()
            else:
                self._accept_futures[sock.fileno()] = f
                f.add_done_callback(loop)

        self.call_soon(loop)

    def _process_events(self, event_list):
        # Events are processed in the IocpProactor._poll() method
        pass

    def _stop_accept_futures(self):
        for future in self._accept_futures.values():
            future.cancel()
        self._accept_futures.clear()

    def _stop_serving(self, sock):
        future = self._accept_futures.pop(sock.fileno(), None)
        if future:
            future.cancel()
        self._proactor._stop_serving(sock)
        sock.close()
216
Utils/PythonNew32/Lib/asyncio/protocols.py
Normal file
@@ -0,0 +1,216 @@
"""Abstract Protocol base classes."""

__all__ = (
    'BaseProtocol', 'Protocol', 'DatagramProtocol',
    'SubprocessProtocol', 'BufferedProtocol',
)


class BaseProtocol:
    """Common base class for protocol interfaces.

    Usually a user implements protocols derived from BaseProtocol,
    like Protocol or ProcessProtocol.

    The only case when BaseProtocol should be implemented directly is
    a write-only transport, like a write pipe.
    """

    __slots__ = ()

    def connection_made(self, transport):
        """Called when a connection is made.

        The argument is the transport representing the pipe connection.
        To receive data, wait for data_received() calls.
        When the connection is closed, connection_lost() is called.
        """

    def connection_lost(self, exc):
        """Called when the connection is lost or closed.

        The argument is an exception object or None (the latter
        meaning a regular EOF is received or the connection was
        aborted or closed).
        """

    def pause_writing(self):
        """Called when the transport's buffer goes over the high-water mark.

        Pause and resume calls are paired -- pause_writing() is called
        once when the buffer goes strictly over the high-water mark
        (even if subsequent writes increase the buffer size even
        more), and eventually resume_writing() is called once when the
        buffer size reaches the low-water mark.

        Note that if the buffer size equals the high-water mark,
        pause_writing() is not called -- it must go strictly over.
        Conversely, resume_writing() is called when the buffer size is
        equal or lower than the low-water mark.  These end conditions
        are important to ensure that things go as expected when either
        mark is zero.

        NOTE: This is the only Protocol callback that is not called
        through EventLoop.call_soon() -- if it were, it would have no
        effect when it's most needed (when the app keeps writing
        without yielding until pause_writing() is called).
        """

    def resume_writing(self):
        """Called when the transport's buffer drains below the low-water mark.

        See pause_writing() for details.
        """


class Protocol(BaseProtocol):
    """Interface for stream protocol.

    The user should implement this interface.  They can inherit from
    this class but don't need to.  The implementations here do
    nothing (they don't raise exceptions).

    When the user wants to request a transport, they pass a protocol
    factory to a utility function (e.g., EventLoop.create_connection()).

    When the connection is made successfully, connection_made() is
    called with a suitable transport object.  Then data_received()
    will be called 0 or more times with data (bytes) received from the
    transport; finally, connection_lost() will be called exactly once
    with either an exception object or None as an argument.

    State machine of calls:

      start -> CM [-> DR*] [-> ER?] -> CL -> end

    * CM: connection_made()
    * DR: data_received()
    * ER: eof_received()
    * CL: connection_lost()
    """

    __slots__ = ()

    def data_received(self, data):
        """Called when some data is received.

        The argument is a bytes object.
        """

    def eof_received(self):
        """Called when the other end calls write_eof() or equivalent.

        If this returns a false value (including None), the transport
        will close itself.  If it returns a true value, closing the
        transport is up to the protocol.
        """


class BufferedProtocol(BaseProtocol):
    """Interface for stream protocol with manual buffer control.

    Event methods, such as `create_server` and `create_connection`,
    accept factories that return protocols that implement this interface.

    The idea of BufferedProtocol is that it allows the protocol to
    manually allocate and control the receive buffer.  Event loops can
    then use the buffer provided by the protocol to avoid unnecessary
    data copies.  This can result in noticeable performance improvements
    for protocols that receive big amounts of data.  Sophisticated
    protocols can allocate the buffer only once at creation time.

    State machine of calls:

      start -> CM [-> GB [-> BU?]]* [-> ER?] -> CL -> end

    * CM: connection_made()
    * GB: get_buffer()
    * BU: buffer_updated()
    * ER: eof_received()
    * CL: connection_lost()
    """

    __slots__ = ()

    def get_buffer(self, sizehint):
        """Called to allocate a new receive buffer.

        *sizehint* is a recommended minimal size for the returned
        buffer.  When set to -1, the buffer size can be arbitrary.

        Must return an object that implements the
        :ref:`buffer protocol <bufferobjects>`.
        It is an error to return a zero-sized buffer.
        """

    def buffer_updated(self, nbytes):
        """Called when the buffer was updated with the received data.

        *nbytes* is the total number of bytes that were written to
        the buffer.
        """

    def eof_received(self):
        """Called when the other end calls write_eof() or equivalent.

        If this returns a false value (including None), the transport
        will close itself.  If it returns a true value, closing the
        transport is up to the protocol.
        """
|
||||
class DatagramProtocol(BaseProtocol):
|
||||
"""Interface for datagram protocol."""
|
||||
|
||||
__slots__ = ()
|
||||
|
||||
def datagram_received(self, data, addr):
|
||||
"""Called when some datagram is received."""
|
||||
|
||||
def error_received(self, exc):
|
||||
"""Called when a send or receive operation raises an OSError.
|
||||
|
||||
(Other than BlockingIOError or InterruptedError.)
|
||||
"""
|
||||
|
||||
|
||||
class SubprocessProtocol(BaseProtocol):
|
||||
"""Interface for protocol for subprocess calls."""
|
||||
|
||||
__slots__ = ()
|
||||
|
||||
def pipe_data_received(self, fd, data):
|
||||
"""Called when the subprocess writes data into stdout/stderr pipe.
|
||||
|
||||
fd is int file descriptor.
|
||||
data is bytes object.
|
||||
"""
|
||||
|
||||
def pipe_connection_lost(self, fd, exc):
|
||||
"""Called when a file descriptor associated with the child process is
|
||||
closed.
|
||||
|
||||
fd is the int file descriptor that was closed.
|
||||
"""
|
||||
|
||||
def process_exited(self):
|
||||
"""Called when subprocess has exited."""
|
||||
|
||||
|
||||
def _feed_data_to_buffered_proto(proto, data):
|
||||
data_len = len(data)
|
||||
while data_len:
|
||||
buf = proto.get_buffer(data_len)
|
||||
buf_len = len(buf)
|
||||
if not buf_len:
|
||||
raise RuntimeError('get_buffer() returned an empty buffer')
|
||||
|
||||
if buf_len >= data_len:
|
||||
buf[:data_len] = data
|
||||
proto.buffer_updated(data_len)
|
||||
return
|
||||
else:
|
||||
buf[:buf_len] = data[:buf_len]
|
||||
proto.buffer_updated(buf_len)
|
||||
data = data[buf_len:]
|
||||
data_len = len(data)
|
||||
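The feeding loop above can be seen in action with a minimal buffered protocol. The sketch below is illustrative only: `FixedBuffered` is a toy class (not part of asyncio), and `feed()` is a standalone copy of the same slicing loop as the private `_feed_data_to_buffered_proto()` helper, so it runs without an event loop.

```python
# Sketch of how an event loop feeds a BufferedProtocol:
# get_buffer() hands out a reusable buffer, buffer_updated() consumes it.

class FixedBuffered:
    """Toy protocol with a fixed 4-byte receive buffer (illustrative only)."""

    def __init__(self):
        self._buf = bytearray(4)
        self.received = bytearray()

    def get_buffer(self, sizehint):
        return self._buf  # same buffer every call: no per-chunk allocation

    def buffer_updated(self, nbytes):
        self.received += self._buf[:nbytes]


def feed(proto, data):
    # Standalone copy of the slicing loop in _feed_data_to_buffered_proto().
    data_len = len(data)
    while data_len:
        buf = proto.get_buffer(data_len)
        buf_len = len(buf)
        if not buf_len:
            raise RuntimeError('get_buffer() returned an empty buffer')
        if buf_len >= data_len:
            buf[:data_len] = data
            proto.buffer_updated(data_len)
            return
        buf[:buf_len] = data[:buf_len]
        proto.buffer_updated(buf_len)
        data = data[buf_len:]
        data_len = len(data)


p = FixedBuffered()
feed(p, b'hello world')  # delivered in 4-byte slices; nothing is dropped
print(bytes(p.received))  # → b'hello world'
```

Because `get_buffer()` may return a buffer smaller than the remaining data, the loop fills it repeatedly; a protocol that returns one large buffer at creation time avoids both the repeated slicing and the copies.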
308
Utils/PythonNew32/Lib/asyncio/queues.py
Normal file
@@ -0,0 +1,308 @@
__all__ = (
    'Queue',
    'PriorityQueue',
    'LifoQueue',
    'QueueFull',
    'QueueEmpty',
    'QueueShutDown',
)

import collections
import heapq
from types import GenericAlias

from . import locks
from . import mixins


class QueueEmpty(Exception):
    """Raised when Queue.get_nowait() is called on an empty Queue."""
    pass


class QueueFull(Exception):
    """Raised when the Queue.put_nowait() method is called on a full Queue."""
    pass


class QueueShutDown(Exception):
    """Raised when putting on to or getting from a shut-down Queue."""
    pass


class Queue(mixins._LoopBoundMixin):
    """A queue, useful for coordinating producer and consumer coroutines.

    If maxsize is less than or equal to zero, the queue size is infinite. If it
    is an integer greater than 0, then "await put()" will block when the
    queue reaches maxsize, until an item is removed by get().

    Unlike the standard library Queue, you can reliably know this Queue's size
    with qsize(), since your single-threaded asyncio application won't be
    interrupted between calling qsize() and doing an operation on the Queue.
    """

    def __init__(self, maxsize=0):
        self._maxsize = maxsize

        # Futures.
        self._getters = collections.deque()
        # Futures.
        self._putters = collections.deque()
        self._unfinished_tasks = 0
        self._finished = locks.Event()
        self._finished.set()
        self._init(maxsize)
        self._is_shutdown = False

    # These three are overridable in subclasses.

    def _init(self, maxsize):
        self._queue = collections.deque()

    def _get(self):
        return self._queue.popleft()

    def _put(self, item):
        self._queue.append(item)

    # End of the overridable methods.

    def _wakeup_next(self, waiters):
        # Wake up the next waiter (if any) that isn't cancelled.
        while waiters:
            waiter = waiters.popleft()
            if not waiter.done():
                waiter.set_result(None)
                break

    def __repr__(self):
        return f'<{type(self).__name__} at {id(self):#x} {self._format()}>'

    def __str__(self):
        return f'<{type(self).__name__} {self._format()}>'

    __class_getitem__ = classmethod(GenericAlias)

    def _format(self):
        result = f'maxsize={self._maxsize!r}'
        if getattr(self, '_queue', None):
            result += f' _queue={list(self._queue)!r}'
        if self._getters:
            result += f' _getters[{len(self._getters)}]'
        if self._putters:
            result += f' _putters[{len(self._putters)}]'
        if self._unfinished_tasks:
            result += f' tasks={self._unfinished_tasks}'
        if self._is_shutdown:
            result += ' shutdown'
        return result

    def qsize(self):
        """Number of items in the queue."""
        return len(self._queue)

    @property
    def maxsize(self):
        """Number of items allowed in the queue."""
        return self._maxsize

    def empty(self):
        """Return True if the queue is empty, False otherwise."""
        return not self._queue

    def full(self):
        """Return True if there are maxsize items in the queue.

        Note: if the Queue was initialized with maxsize=0 (the default),
        then full() is never True.
        """
        if self._maxsize <= 0:
            return False
        else:
            return self.qsize() >= self._maxsize

    async def put(self, item):
        """Put an item into the queue.

        Put an item into the queue. If the queue is full, wait until a free
        slot is available before adding item.

        Raises QueueShutDown if the queue has been shut down.
        """
        while self.full():
            if self._is_shutdown:
                raise QueueShutDown
            putter = self._get_loop().create_future()
            self._putters.append(putter)
            try:
                await putter
            except:
                putter.cancel()  # Just in case putter is not done yet.
                try:
                    # Clean self._putters from canceled putters.
                    self._putters.remove(putter)
                except ValueError:
                    # The putter could be removed from self._putters by a
                    # previous get_nowait call or a shutdown call.
                    pass
                if not self.full() and not putter.cancelled():
                    # We were woken up by get_nowait(), but can't take
                    # the call. Wake up the next in line.
                    self._wakeup_next(self._putters)
                raise
        return self.put_nowait(item)

    def put_nowait(self, item):
        """Put an item into the queue without blocking.

        If no free slot is immediately available, raise QueueFull.

        Raises QueueShutDown if the queue has been shut down.
        """
        if self._is_shutdown:
            raise QueueShutDown
        if self.full():
            raise QueueFull
        self._put(item)
        self._unfinished_tasks += 1
        self._finished.clear()
        self._wakeup_next(self._getters)

    async def get(self):
        """Remove and return an item from the queue.

        If queue is empty, wait until an item is available.

        Raises QueueShutDown if the queue has been shut down and is empty, or
        if the queue has been shut down immediately.
        """
        while self.empty():
            if self._is_shutdown and self.empty():
                raise QueueShutDown
            getter = self._get_loop().create_future()
            self._getters.append(getter)
            try:
                await getter
            except:
                getter.cancel()  # Just in case getter is not done yet.
                try:
                    # Clean self._getters from canceled getters.
                    self._getters.remove(getter)
                except ValueError:
                    # The getter could be removed from self._getters by a
                    # previous put_nowait call, or a shutdown call.
                    pass
                if not self.empty() and not getter.cancelled():
                    # We were woken up by put_nowait(), but can't take
                    # the call. Wake up the next in line.
                    self._wakeup_next(self._getters)
                raise
        return self.get_nowait()

    def get_nowait(self):
        """Remove and return an item from the queue.

        Return an item if one is immediately available, else raise QueueEmpty.

        Raises QueueShutDown if the queue has been shut down and is empty, or
        if the queue has been shut down immediately.
        """
        if self.empty():
            if self._is_shutdown:
                raise QueueShutDown
            raise QueueEmpty
        item = self._get()
        self._wakeup_next(self._putters)
        return item

    def task_done(self):
        """Indicate that a formerly enqueued task is complete.

        Used by queue consumers. For each get() used to fetch a task,
        a subsequent call to task_done() tells the queue that the processing
        on the task is complete.

        If a join() is currently blocking, it will resume when all items have
        been processed (meaning that a task_done() call was received for every
        item that had been put() into the queue).

        shutdown(immediate=True) calls task_done() for each remaining item in
        the queue.

        Raises ValueError if called more times than there were items placed in
        the queue.
        """
        if self._unfinished_tasks <= 0:
            raise ValueError('task_done() called too many times')
        self._unfinished_tasks -= 1
        if self._unfinished_tasks == 0:
            self._finished.set()

    async def join(self):
        """Block until all items in the queue have been gotten and processed.

        The count of unfinished tasks goes up whenever an item is added to the
        queue. The count goes down whenever a consumer calls task_done() to
        indicate that the item was retrieved and all work on it is complete.
        When the count of unfinished tasks drops to zero, join() unblocks.
        """
        if self._unfinished_tasks > 0:
            await self._finished.wait()

    def shutdown(self, immediate=False):
        """Shut-down the queue, making queue gets and puts raise QueueShutDown.

        By default, gets will only raise once the queue is empty. Set
        'immediate' to True to make gets raise immediately instead.

        All blocked callers of put() and get() will be unblocked. If
        'immediate', a task is marked as done for each item remaining in
        the queue, which may unblock callers of join().
        """
        self._is_shutdown = True
        if immediate:
            while not self.empty():
                self._get()
                if self._unfinished_tasks > 0:
                    self._unfinished_tasks -= 1
            if self._unfinished_tasks == 0:
                self._finished.set()
        # All getters need to re-check queue-empty to raise ShutDown
        while self._getters:
            getter = self._getters.popleft()
            if not getter.done():
                getter.set_result(None)
        while self._putters:
            putter = self._putters.popleft()
            if not putter.done():
                putter.set_result(None)


class PriorityQueue(Queue):
    """A subclass of Queue; retrieves entries in priority order (lowest first).

    Entries are typically tuples of the form: (priority number, data).
    """

    def _init(self, maxsize):
        self._queue = []

    def _put(self, item, heappush=heapq.heappush):
        heappush(self._queue, item)

    def _get(self, heappop=heapq.heappop):
        return heappop(self._queue)


class LifoQueue(Queue):
    """A subclass of Queue that retrieves most recently added entries first."""

    def _init(self, maxsize):
        self._queue = []

    def _put(self, item):
        self._queue.append(item)

    def _get(self):
        return self._queue.pop()
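The task_done()/join() accounting in this Queue can be demonstrated with a short worker sketch (illustrative, not from the diff; it uses only the public asyncio.Queue API, so it runs on any supported Python):

```python
import asyncio

async def main():
    q = asyncio.Queue()
    for i in range(5):
        q.put_nowait(i)          # unbounded queue: put_nowait never raises QueueFull

    results = []

    async def worker():
        while True:
            item = await q.get()
            results.append(item * 10)
            q.task_done()        # decrements the unfinished-task count join() watches

    task = asyncio.create_task(worker())
    await q.join()               # unblocks once every enqueued item is task_done()
    task.cancel()                # the worker is now idle waiting on an empty queue
    return results

out = asyncio.run(main())
print(out)  # → [0, 10, 20, 30, 40]
```

On Python 3.13+, `q.shutdown()` offers an alternative to the cancel-the-worker pattern: the blocked `await q.get()` would raise `QueueShutDown` once the queue drains, letting the worker exit its loop cleanly.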
216
Utils/PythonNew32/Lib/asyncio/runners.py
Normal file
@@ -0,0 +1,216 @@
__all__ = ('Runner', 'run')

import contextvars
import enum
import functools
import threading
import signal
from . import coroutines
from . import events
from . import exceptions
from . import tasks
from . import constants

class _State(enum.Enum):
    CREATED = "created"
    INITIALIZED = "initialized"
    CLOSED = "closed"


class Runner:
    """A context manager that controls event loop life cycle.

    The context manager always creates a new event loop,
    allows to run async functions inside it,
    and properly finalizes the loop at the context manager exit.

    If debug is True, the event loop will be run in debug mode.
    If loop_factory is passed, it is used for new event loop creation.

    asyncio.run(main(), debug=True)

    is a shortcut for

    with asyncio.Runner(debug=True) as runner:
        runner.run(main())

    The run() method can be called multiple times within the runner's context.

    This can be useful for interactive console (e.g. IPython),
    unittest runners, console tools, -- everywhere when async code
    is called from existing sync framework and where the preferred single
    asyncio.run() call doesn't work.

    """

    # Note: the class is final, it is not intended for inheritance.

    def __init__(self, *, debug=None, loop_factory=None):
        self._state = _State.CREATED
        self._debug = debug
        self._loop_factory = loop_factory
        self._loop = None
        self._context = None
        self._interrupt_count = 0
        self._set_event_loop = False

    def __enter__(self):
        self._lazy_init()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def close(self):
        """Shutdown and close event loop."""
        if self._state is not _State.INITIALIZED:
            return
        try:
            loop = self._loop
            _cancel_all_tasks(loop)
            loop.run_until_complete(loop.shutdown_asyncgens())
            loop.run_until_complete(
                loop.shutdown_default_executor(constants.THREAD_JOIN_TIMEOUT))
        finally:
            if self._set_event_loop:
                events.set_event_loop(None)
            loop.close()
            self._loop = None
            self._state = _State.CLOSED

    def get_loop(self):
        """Return embedded event loop."""
        self._lazy_init()
        return self._loop

    def run(self, coro, *, context=None):
        """Run a coroutine inside the embedded event loop."""
        if not coroutines.iscoroutine(coro):
            raise ValueError("a coroutine was expected, got {!r}".format(coro))

        if events._get_running_loop() is not None:
            # fail fast with short traceback
            raise RuntimeError(
                "Runner.run() cannot be called from a running event loop")

        self._lazy_init()

        if context is None:
            context = self._context
        task = self._loop.create_task(coro, context=context)

        if (threading.current_thread() is threading.main_thread()
                and signal.getsignal(signal.SIGINT) is signal.default_int_handler
        ):
            sigint_handler = functools.partial(self._on_sigint, main_task=task)
            try:
                signal.signal(signal.SIGINT, sigint_handler)
            except ValueError:
                # `signal.signal` may throw if `threading.main_thread` does
                # not support signals (e.g. embedded interpreter with signals
                # not registered - see gh-91880)
                sigint_handler = None
        else:
            sigint_handler = None

        self._interrupt_count = 0
        try:
            return self._loop.run_until_complete(task)
        except exceptions.CancelledError:
            if self._interrupt_count > 0:
                uncancel = getattr(task, "uncancel", None)
                if uncancel is not None and uncancel() == 0:
                    raise KeyboardInterrupt()
            raise  # CancelledError
        finally:
            if (sigint_handler is not None
                    and signal.getsignal(signal.SIGINT) is sigint_handler
            ):
                signal.signal(signal.SIGINT, signal.default_int_handler)

    def _lazy_init(self):
        if self._state is _State.CLOSED:
            raise RuntimeError("Runner is closed")
        if self._state is _State.INITIALIZED:
            return
        if self._loop_factory is None:
            self._loop = events.new_event_loop()
            if not self._set_event_loop:
                # Call set_event_loop only once to avoid calling
                # attach_loop multiple times on child watchers
                events.set_event_loop(self._loop)
                self._set_event_loop = True
        else:
            self._loop = self._loop_factory()
        if self._debug is not None:
            self._loop.set_debug(self._debug)
        self._context = contextvars.copy_context()
        self._state = _State.INITIALIZED

    def _on_sigint(self, signum, frame, main_task):
        self._interrupt_count += 1
        if self._interrupt_count == 1 and not main_task.done():
            main_task.cancel()
            # wakeup loop if it is blocked by select() with long timeout
            self._loop.call_soon_threadsafe(lambda: None)
            return
        raise KeyboardInterrupt()


def run(main, *, debug=None, loop_factory=None):
    """Execute the coroutine and return the result.

    This function runs the passed coroutine, taking care of
    managing the asyncio event loop, finalizing asynchronous
    generators and closing the default executor.

    This function cannot be called when another asyncio event loop is
    running in the same thread.

    If debug is True, the event loop will be run in debug mode.
    If loop_factory is passed, it is used for new event loop creation.

    This function always creates a new event loop and closes it at the end.
    It should be used as a main entry point for asyncio programs, and should
    ideally only be called once.

    The executor is given a timeout duration of 5 minutes to shutdown.
    If the executor hasn't finished within that duration, a warning is
    emitted and the executor is closed.

    Example:

        async def main():
            await asyncio.sleep(1)
            print('hello')

        asyncio.run(main())
    """
    if events._get_running_loop() is not None:
        # fail fast with short traceback
        raise RuntimeError(
            "asyncio.run() cannot be called from a running event loop")

    with Runner(debug=debug, loop_factory=loop_factory) as runner:
        return runner.run(main)


def _cancel_all_tasks(loop):
    to_cancel = tasks.all_tasks(loop)
    if not to_cancel:
        return

    for task in to_cancel:
        task.cancel()

    loop.run_until_complete(tasks.gather(*to_cancel, return_exceptions=True))

    for task in to_cancel:
        if task.cancelled():
            continue
        if task.exception() is not None:
            loop.call_exception_handler({
                'message': 'unhandled exception during asyncio.run() shutdown',
                'exception': task.exception(),
                'task': task,
            })
1314
Utils/PythonNew32/Lib/asyncio/selector_events.py
Normal file
File diff suppressed because it is too large
Load Diff
929
Utils/PythonNew32/Lib/asyncio/sslproto.py
Normal file
@@ -0,0 +1,929 @@
# Contains code from https://github.com/MagicStack/uvloop/tree/v0.16.0
# SPDX-License-Identifier: PSF-2.0 AND (MIT OR Apache-2.0)
# SPDX-FileCopyrightText: Copyright (c) 2015-2021 MagicStack Inc.  http://magic.io

import collections
import enum
import warnings
try:
    import ssl
except ImportError:  # pragma: no cover
    ssl = None

from . import constants
from . import exceptions
from . import protocols
from . import transports
from .log import logger

if ssl is not None:
    SSLAgainErrors = (ssl.SSLWantReadError, ssl.SSLSyscallError)


class SSLProtocolState(enum.Enum):
    UNWRAPPED = "UNWRAPPED"
    DO_HANDSHAKE = "DO_HANDSHAKE"
    WRAPPED = "WRAPPED"
    FLUSHING = "FLUSHING"
    SHUTDOWN = "SHUTDOWN"


class AppProtocolState(enum.Enum):
    # This tracks the state of app protocol (https://git.io/fj59P):
    #
    #     INIT -cm-> CON_MADE [-dr*->] [-er-> EOF?] -cl-> CON_LOST
    #
    # * cm: connection_made()
    # * dr: data_received()
    # * er: eof_received()
    # * cl: connection_lost()

    STATE_INIT = "STATE_INIT"
    STATE_CON_MADE = "STATE_CON_MADE"
    STATE_EOF = "STATE_EOF"
    STATE_CON_LOST = "STATE_CON_LOST"


def _create_transport_context(server_side, server_hostname):
    if server_side:
        raise ValueError('Server side SSL needs a valid SSLContext')

    # Client side may pass ssl=True to use a default
    # context; in that case the sslcontext passed is None.
    # The default is secure for client connections.
    # Python 3.4+: use up-to-date strong settings.
    sslcontext = ssl.create_default_context()
    if not server_hostname:
        sslcontext.check_hostname = False
    return sslcontext


def add_flowcontrol_defaults(high, low, kb):
    if high is None:
        if low is None:
            hi = kb * 1024
        else:
            lo = low
            hi = 4 * lo
    else:
        hi = high
    if low is None:
        lo = hi // 4
    else:
        lo = low

    if not hi >= lo >= 0:
        raise ValueError('high (%r) must be >= low (%r) must be >= 0' %
                         (hi, lo))

    return hi, lo


class _SSLProtocolTransport(transports._FlowControlMixin,
                            transports.Transport):

    _start_tls_compatible = True
    _sendfile_compatible = constants._SendfileMode.FALLBACK

    def __init__(self, loop, ssl_protocol):
        self._loop = loop
        self._ssl_protocol = ssl_protocol
        self._closed = False

    def get_extra_info(self, name, default=None):
        """Get optional transport information."""
        return self._ssl_protocol._get_extra_info(name, default)

    def set_protocol(self, protocol):
        self._ssl_protocol._set_app_protocol(protocol)

    def get_protocol(self):
        return self._ssl_protocol._app_protocol

    def is_closing(self):
        return self._closed or self._ssl_protocol._is_transport_closing()

    def close(self):
        """Close the transport.

        Buffered data will be flushed asynchronously. No more data
        will be received. After all buffered data is flushed, the
        protocol's connection_lost() method will (eventually) be called
        with None as its argument.
        """
        if not self._closed:
            self._closed = True
            self._ssl_protocol._start_shutdown()
        else:
            self._ssl_protocol = None

    def __del__(self, _warnings=warnings):
        if not self._closed:
            self._closed = True
            _warnings.warn(
                "unclosed transport <asyncio._SSLProtocolTransport "
                "object>", ResourceWarning)

    def is_reading(self):
        return not self._ssl_protocol._app_reading_paused

    def pause_reading(self):
        """Pause the receiving end.

        No data will be passed to the protocol's data_received()
        method until resume_reading() is called.
        """
        self._ssl_protocol._pause_reading()

    def resume_reading(self):
        """Resume the receiving end.

        Data received will once again be passed to the protocol's
        data_received() method.
        """
        self._ssl_protocol._resume_reading()

    def set_write_buffer_limits(self, high=None, low=None):
        """Set the high- and low-water limits for write flow control.

        These two values control when to call the protocol's
        pause_writing() and resume_writing() methods. If specified,
        the low-water limit must be less than or equal to the
        high-water limit. Neither value can be negative.

        The defaults are implementation-specific. If only the
        high-water limit is given, the low-water limit defaults to an
        implementation-specific value less than or equal to the
        high-water limit. Setting high to zero forces low to zero as
        well, and causes pause_writing() to be called whenever the
        buffer becomes non-empty. Setting low to zero causes
        resume_writing() to be called only once the buffer is empty.
        Use of zero for either limit is generally sub-optimal as it
        reduces opportunities for doing I/O and computation
        concurrently.
        """
        self._ssl_protocol._set_write_buffer_limits(high, low)
        self._ssl_protocol._control_app_writing()

    def get_write_buffer_limits(self):
        return (self._ssl_protocol._outgoing_low_water,
                self._ssl_protocol._outgoing_high_water)

    def get_write_buffer_size(self):
        """Return the current size of the write buffers."""
        return self._ssl_protocol._get_write_buffer_size()

    def set_read_buffer_limits(self, high=None, low=None):
        """Set the high- and low-water limits for read flow control.

        These two values control when to call the upstream transport's
        pause_reading() and resume_reading() methods. If specified,
        the low-water limit must be less than or equal to the
        high-water limit. Neither value can be negative.

        The defaults are implementation-specific. If only the
        high-water limit is given, the low-water limit defaults to an
        implementation-specific value less than or equal to the
        high-water limit. Setting high to zero forces low to zero as
        well, and causes pause_reading() to be called whenever the
        buffer becomes non-empty. Setting low to zero causes
        resume_reading() to be called only once the buffer is empty.
        Use of zero for either limit is generally sub-optimal as it
        reduces opportunities for doing I/O and computation
        concurrently.
        """
        self._ssl_protocol._set_read_buffer_limits(high, low)
        self._ssl_protocol._control_ssl_reading()

    def get_read_buffer_limits(self):
        return (self._ssl_protocol._incoming_low_water,
                self._ssl_protocol._incoming_high_water)

    def get_read_buffer_size(self):
        """Return the current size of the read buffer."""
        return self._ssl_protocol._get_read_buffer_size()

    @property
    def _protocol_paused(self):
        # Required for sendfile fallback pause_writing/resume_writing logic
        return self._ssl_protocol._app_writing_paused

    def write(self, data):
        """Write some data bytes to the transport.

        This does not block; it buffers the data and arranges for it
        to be sent out asynchronously.
        """
        if not isinstance(data, (bytes, bytearray, memoryview)):
            raise TypeError(f"data: expecting a bytes-like instance, "
                            f"got {type(data).__name__}")
        if not data:
            return
        self._ssl_protocol._write_appdata((data,))

    def writelines(self, list_of_data):
        """Write a list (or any iterable) of data bytes to the transport.

        The default implementation concatenates the arguments and
        calls write() on the result.
        """
        self._ssl_protocol._write_appdata(list_of_data)

    def write_eof(self):
        """Close the write end after flushing buffered data.

        This raises :exc:`NotImplementedError` right now.
        """
        raise NotImplementedError

    def can_write_eof(self):
        """Return True if this transport supports write_eof(), False if not."""
        return False

    def abort(self):
        """Close the transport immediately.

        Buffered data will be lost. No more data will be received.
        The protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        self._force_close(None)

    def _force_close(self, exc):
        self._closed = True
        if self._ssl_protocol is not None:
            self._ssl_protocol._abort(exc)

    def _test__append_write_backlog(self, data):
        # for test only
        self._ssl_protocol._write_backlog.append(data)
        self._ssl_protocol._write_buffer_size += len(data)


class SSLProtocol(protocols.BufferedProtocol):
    max_size = 256 * 1024  # Buffer size passed to read()

    _handshake_start_time = None
    _handshake_timeout_handle = None
    _shutdown_timeout_handle = None

    def __init__(self, loop, app_protocol, sslcontext, waiter,
                 server_side=False, server_hostname=None,
                 call_connection_made=True,
                 ssl_handshake_timeout=None,
                 ssl_shutdown_timeout=None):
        if ssl is None:
            raise RuntimeError("stdlib ssl module not available")

        self._ssl_buffer = bytearray(self.max_size)
        self._ssl_buffer_view = memoryview(self._ssl_buffer)

        if ssl_handshake_timeout is None:
            ssl_handshake_timeout = constants.SSL_HANDSHAKE_TIMEOUT
        elif ssl_handshake_timeout <= 0:
            raise ValueError(
                f"ssl_handshake_timeout should be a positive number, "
                f"got {ssl_handshake_timeout}")
        if ssl_shutdown_timeout is None:
            ssl_shutdown_timeout = constants.SSL_SHUTDOWN_TIMEOUT
        elif ssl_shutdown_timeout <= 0:
            raise ValueError(
                f"ssl_shutdown_timeout should be a positive number, "
                f"got {ssl_shutdown_timeout}")

        if not sslcontext:
            sslcontext = _create_transport_context(
                server_side, server_hostname)

        self._server_side = server_side
        if server_hostname and not server_side:
            self._server_hostname = server_hostname
        else:
            self._server_hostname = None
        self._sslcontext = sslcontext
        # SSL-specific extra info. More info are set when the handshake
        # completes.
        self._extra = dict(sslcontext=sslcontext)

        # App data write buffering
        self._write_backlog = collections.deque()
        self._write_buffer_size = 0

        self._waiter = waiter
        self._loop = loop
        self._set_app_protocol(app_protocol)
        self._app_transport = None
        self._app_transport_created = False
        # transport, ex: SelectorSocketTransport
        self._transport = None
        self._ssl_handshake_timeout = ssl_handshake_timeout
        self._ssl_shutdown_timeout = ssl_shutdown_timeout
        # SSL and state machine
        self._incoming = ssl.MemoryBIO()
        self._outgoing = ssl.MemoryBIO()
        self._state = SSLProtocolState.UNWRAPPED
        self._conn_lost = 0  # Set when connection_lost called
        if call_connection_made:
            self._app_state = AppProtocolState.STATE_INIT
        else:
            self._app_state = AppProtocolState.STATE_CON_MADE
        self._sslobj = self._sslcontext.wrap_bio(
            self._incoming, self._outgoing,
            server_side=self._server_side,
            server_hostname=self._server_hostname)

        # Flow Control

        self._ssl_writing_paused = False

        self._app_reading_paused = False

        self._ssl_reading_paused = False
        self._incoming_high_water = 0
        self._incoming_low_water = 0
        self._set_read_buffer_limits()
        self._eof_received = False

        self._app_writing_paused = False
        self._outgoing_high_water = 0
        self._outgoing_low_water = 0
        self._set_write_buffer_limits()
        self._get_app_transport()

    def _set_app_protocol(self, app_protocol):
        self._app_protocol = app_protocol
        # Make fast hasattr check first
        if (hasattr(app_protocol, 'get_buffer') and
                isinstance(app_protocol, protocols.BufferedProtocol)):
            self._app_protocol_get_buffer = app_protocol.get_buffer
            self._app_protocol_buffer_updated = app_protocol.buffer_updated
            self._app_protocol_is_buffer = True
        else:
            self._app_protocol_is_buffer = False

    def _wakeup_waiter(self, exc=None):
        if self._waiter is None:
            return
        if not self._waiter.cancelled():
            if exc is not None:
                self._waiter.set_exception(exc)
            else:
                self._waiter.set_result(None)
        self._waiter = None

    def _get_app_transport(self):
        if self._app_transport is None:
            if self._app_transport_created:
raise RuntimeError('Creating _SSLProtocolTransport twice')
|
||||
self._app_transport = _SSLProtocolTransport(self._loop, self)
|
||||
self._app_transport_created = True
|
||||
return self._app_transport
|
||||
|
||||
def _is_transport_closing(self):
|
||||
return self._transport is not None and self._transport.is_closing()
|
||||
|
||||
def connection_made(self, transport):
|
||||
"""Called when the low-level connection is made.
|
||||
|
||||
Start the SSL handshake.
|
||||
"""
|
||||
self._transport = transport
|
||||
self._start_handshake()
|
||||
|
||||
def connection_lost(self, exc):
|
||||
"""Called when the low-level connection is lost or closed.
|
||||
|
||||
The argument is an exception object or None (the latter
|
||||
meaning a regular EOF is received or the connection was
|
||||
aborted or closed).
|
||||
"""
|
||||
self._write_backlog.clear()
|
||||
self._outgoing.read()
|
||||
self._conn_lost += 1
|
||||
|
||||
# Just mark the app transport as closed so that its __dealloc__
|
||||
# doesn't complain.
|
||||
if self._app_transport is not None:
|
||||
self._app_transport._closed = True
|
||||
|
||||
if self._state != SSLProtocolState.DO_HANDSHAKE:
|
||||
if (
|
||||
self._app_state == AppProtocolState.STATE_CON_MADE or
|
||||
self._app_state == AppProtocolState.STATE_EOF
|
||||
):
|
||||
self._app_state = AppProtocolState.STATE_CON_LOST
|
||||
self._loop.call_soon(self._app_protocol.connection_lost, exc)
|
||||
self._set_state(SSLProtocolState.UNWRAPPED)
|
||||
self._transport = None
|
||||
self._app_transport = None
|
||||
self._app_protocol = None
|
||||
self._wakeup_waiter(exc)
|
||||
|
||||
if self._shutdown_timeout_handle:
|
||||
self._shutdown_timeout_handle.cancel()
|
||||
self._shutdown_timeout_handle = None
|
||||
if self._handshake_timeout_handle:
|
||||
self._handshake_timeout_handle.cancel()
|
||||
self._handshake_timeout_handle = None
|
||||
|
||||
def get_buffer(self, n):
|
||||
want = n
|
||||
if want <= 0 or want > self.max_size:
|
||||
want = self.max_size
|
||||
if len(self._ssl_buffer) < want:
|
||||
self._ssl_buffer = bytearray(want)
|
||||
self._ssl_buffer_view = memoryview(self._ssl_buffer)
|
||||
return self._ssl_buffer_view
|
||||
|
||||
def buffer_updated(self, nbytes):
|
||||
self._incoming.write(self._ssl_buffer_view[:nbytes])
|
||||
|
||||
if self._state == SSLProtocolState.DO_HANDSHAKE:
|
||||
self._do_handshake()
|
||||
|
||||
elif self._state == SSLProtocolState.WRAPPED:
|
||||
self._do_read()
|
||||
|
||||
elif self._state == SSLProtocolState.FLUSHING:
|
||||
self._do_flush()
|
||||
|
||||
elif self._state == SSLProtocolState.SHUTDOWN:
|
||||
self._do_shutdown()
|
||||
|
||||
def eof_received(self):
|
||||
"""Called when the other end of the low-level stream
|
||||
is half-closed.
|
||||
|
||||
If this returns a false value (including None), the transport
|
||||
will close itself. If it returns a true value, closing the
|
||||
transport is up to the protocol.
|
||||
"""
|
||||
self._eof_received = True
|
||||
try:
|
||||
if self._loop.get_debug():
|
||||
logger.debug("%r received EOF", self)
|
||||
|
||||
if self._state == SSLProtocolState.DO_HANDSHAKE:
|
||||
self._on_handshake_complete(ConnectionResetError)
|
||||
|
||||
elif self._state == SSLProtocolState.WRAPPED:
|
||||
self._set_state(SSLProtocolState.FLUSHING)
|
||||
if self._app_reading_paused:
|
||||
return True
|
||||
else:
|
||||
self._do_flush()
|
||||
|
||||
elif self._state == SSLProtocolState.FLUSHING:
|
||||
self._do_write()
|
||||
self._set_state(SSLProtocolState.SHUTDOWN)
|
||||
self._do_shutdown()
|
||||
|
||||
elif self._state == SSLProtocolState.SHUTDOWN:
|
||||
self._do_shutdown()
|
||||
|
||||
except Exception:
|
||||
self._transport.close()
|
||||
raise
|
||||
|
||||
def _get_extra_info(self, name, default=None):
|
||||
if name in self._extra:
|
||||
return self._extra[name]
|
||||
elif self._transport is not None:
|
||||
return self._transport.get_extra_info(name, default)
|
||||
else:
|
||||
return default
|
||||
|
||||
def _set_state(self, new_state):
|
||||
allowed = False
|
||||
|
||||
if new_state == SSLProtocolState.UNWRAPPED:
|
||||
allowed = True
|
||||
|
||||
elif (
|
||||
self._state == SSLProtocolState.UNWRAPPED and
|
||||
new_state == SSLProtocolState.DO_HANDSHAKE
|
||||
):
|
||||
allowed = True
|
||||
|
||||
elif (
|
||||
self._state == SSLProtocolState.DO_HANDSHAKE and
|
||||
new_state == SSLProtocolState.WRAPPED
|
||||
):
|
||||
allowed = True
|
||||
|
||||
elif (
|
||||
self._state == SSLProtocolState.WRAPPED and
|
||||
new_state == SSLProtocolState.FLUSHING
|
||||
):
|
||||
allowed = True
|
||||
|
||||
elif (
|
||||
self._state == SSLProtocolState.FLUSHING and
|
||||
new_state == SSLProtocolState.SHUTDOWN
|
||||
):
|
||||
allowed = True
|
||||
|
||||
if allowed:
|
||||
self._state = new_state
|
||||
|
||||
else:
|
||||
raise RuntimeError(
|
||||
'cannot switch state from {} to {}'.format(
|
||||
self._state, new_state))
|
||||
|
||||
# Handshake flow
|
||||
|
||||
def _start_handshake(self):
|
||||
if self._loop.get_debug():
|
||||
logger.debug("%r starts SSL handshake", self)
|
||||
self._handshake_start_time = self._loop.time()
|
||||
else:
|
||||
self._handshake_start_time = None
|
||||
|
||||
self._set_state(SSLProtocolState.DO_HANDSHAKE)
|
||||
|
||||
# start handshake timeout count down
|
||||
self._handshake_timeout_handle = \
|
||||
self._loop.call_later(self._ssl_handshake_timeout,
|
||||
self._check_handshake_timeout)
|
||||
|
||||
self._do_handshake()
|
||||
|
||||
def _check_handshake_timeout(self):
|
||||
if self._state == SSLProtocolState.DO_HANDSHAKE:
|
||||
msg = (
|
||||
f"SSL handshake is taking longer than "
|
||||
f"{self._ssl_handshake_timeout} seconds: "
|
||||
f"aborting the connection"
|
||||
)
|
||||
self._fatal_error(ConnectionAbortedError(msg))
|
||||
|
||||
def _do_handshake(self):
|
||||
try:
|
||||
self._sslobj.do_handshake()
|
||||
except SSLAgainErrors:
|
||||
self._process_outgoing()
|
||||
except ssl.SSLError as exc:
|
||||
self._on_handshake_complete(exc)
|
||||
else:
|
||||
self._on_handshake_complete(None)
|
||||
|
||||
def _on_handshake_complete(self, handshake_exc):
|
||||
if self._handshake_timeout_handle is not None:
|
||||
self._handshake_timeout_handle.cancel()
|
||||
self._handshake_timeout_handle = None
|
||||
|
||||
sslobj = self._sslobj
|
||||
try:
|
||||
if handshake_exc is None:
|
||||
self._set_state(SSLProtocolState.WRAPPED)
|
||||
else:
|
||||
raise handshake_exc
|
||||
|
||||
peercert = sslobj.getpeercert()
|
||||
except Exception as exc:
|
||||
handshake_exc = None
|
||||
self._set_state(SSLProtocolState.UNWRAPPED)
|
||||
if isinstance(exc, ssl.CertificateError):
|
||||
msg = 'SSL handshake failed on verifying the certificate'
|
||||
else:
|
||||
msg = 'SSL handshake failed'
|
||||
self._fatal_error(exc, msg)
|
||||
self._wakeup_waiter(exc)
|
||||
return
|
||||
|
||||
if self._loop.get_debug():
|
||||
dt = self._loop.time() - self._handshake_start_time
|
||||
logger.debug("%r: SSL handshake took %.1f ms", self, dt * 1e3)
|
||||
|
||||
# Add extra info that becomes available after handshake.
|
||||
self._extra.update(peercert=peercert,
|
||||
cipher=sslobj.cipher(),
|
||||
compression=sslobj.compression(),
|
||||
ssl_object=sslobj)
|
||||
if self._app_state == AppProtocolState.STATE_INIT:
|
||||
self._app_state = AppProtocolState.STATE_CON_MADE
|
||||
self._app_protocol.connection_made(self._get_app_transport())
|
||||
self._wakeup_waiter()
|
||||
self._do_read()
|
||||
|
||||
# Shutdown flow
|
||||
|
||||
def _start_shutdown(self):
|
||||
if (
|
||||
self._state in (
|
||||
SSLProtocolState.FLUSHING,
|
||||
SSLProtocolState.SHUTDOWN,
|
||||
SSLProtocolState.UNWRAPPED
|
||||
)
|
||||
):
|
||||
return
|
||||
if self._app_transport is not None:
|
||||
self._app_transport._closed = True
|
||||
if self._state == SSLProtocolState.DO_HANDSHAKE:
|
||||
self._abort(None)
|
||||
else:
|
||||
self._set_state(SSLProtocolState.FLUSHING)
|
||||
self._shutdown_timeout_handle = self._loop.call_later(
|
||||
self._ssl_shutdown_timeout,
|
||||
self._check_shutdown_timeout
|
||||
)
|
||||
self._do_flush()
|
||||
|
||||
def _check_shutdown_timeout(self):
|
||||
if (
|
||||
self._state in (
|
||||
SSLProtocolState.FLUSHING,
|
||||
SSLProtocolState.SHUTDOWN
|
||||
)
|
||||
):
|
||||
self._transport._force_close(
|
||||
exceptions.TimeoutError('SSL shutdown timed out'))
|
||||
|
||||
def _do_flush(self):
|
||||
self._do_read()
|
||||
self._set_state(SSLProtocolState.SHUTDOWN)
|
||||
self._do_shutdown()
|
||||
|
||||
def _do_shutdown(self):
|
||||
try:
|
||||
if not self._eof_received:
|
||||
self._sslobj.unwrap()
|
||||
except SSLAgainErrors:
|
||||
self._process_outgoing()
|
||||
except ssl.SSLError as exc:
|
||||
self._on_shutdown_complete(exc)
|
||||
else:
|
||||
self._process_outgoing()
|
||||
self._call_eof_received()
|
||||
self._on_shutdown_complete(None)
|
||||
|
||||
def _on_shutdown_complete(self, shutdown_exc):
|
||||
if self._shutdown_timeout_handle is not None:
|
||||
self._shutdown_timeout_handle.cancel()
|
||||
self._shutdown_timeout_handle = None
|
||||
|
||||
if shutdown_exc:
|
||||
self._fatal_error(shutdown_exc)
|
||||
else:
|
||||
self._loop.call_soon(self._transport.close)
|
||||
|
||||
def _abort(self, exc):
|
||||
self._set_state(SSLProtocolState.UNWRAPPED)
|
||||
if self._transport is not None:
|
||||
self._transport._force_close(exc)
|
||||
|
||||
# Outgoing flow
|
||||
|
||||
def _write_appdata(self, list_of_data):
|
||||
if (
|
||||
self._state in (
|
||||
SSLProtocolState.FLUSHING,
|
||||
SSLProtocolState.SHUTDOWN,
|
||||
SSLProtocolState.UNWRAPPED
|
||||
)
|
||||
):
|
||||
if self._conn_lost >= constants.LOG_THRESHOLD_FOR_CONNLOST_WRITES:
|
||||
logger.warning('SSL connection is closed')
|
||||
self._conn_lost += 1
|
||||
return
|
||||
|
||||
for data in list_of_data:
|
||||
self._write_backlog.append(data)
|
||||
self._write_buffer_size += len(data)
|
||||
|
||||
try:
|
||||
if self._state == SSLProtocolState.WRAPPED:
|
||||
self._do_write()
|
||||
|
||||
except Exception as ex:
|
||||
self._fatal_error(ex, 'Fatal error on SSL protocol')
|
||||
|
||||
def _do_write(self):
|
||||
try:
|
||||
while self._write_backlog:
|
||||
data = self._write_backlog[0]
|
||||
count = self._sslobj.write(data)
|
||||
data_len = len(data)
|
||||
if count < data_len:
|
||||
self._write_backlog[0] = data[count:]
|
||||
self._write_buffer_size -= count
|
||||
else:
|
||||
del self._write_backlog[0]
|
||||
self._write_buffer_size -= data_len
|
||||
except SSLAgainErrors:
|
||||
pass
|
||||
self._process_outgoing()
|
||||
|
||||
def _process_outgoing(self):
|
||||
if not self._ssl_writing_paused:
|
||||
data = self._outgoing.read()
|
||||
if len(data):
|
||||
self._transport.write(data)
|
||||
self._control_app_writing()
|
||||
|
||||
# Incoming flow
|
||||
|
||||
def _do_read(self):
|
||||
if (
|
||||
self._state not in (
|
||||
SSLProtocolState.WRAPPED,
|
||||
SSLProtocolState.FLUSHING,
|
||||
)
|
||||
):
|
||||
return
|
||||
try:
|
||||
if not self._app_reading_paused:
|
||||
if self._app_protocol_is_buffer:
|
||||
self._do_read__buffered()
|
||||
else:
|
||||
self._do_read__copied()
|
||||
if self._write_backlog:
|
||||
self._do_write()
|
||||
else:
|
||||
self._process_outgoing()
|
||||
self._control_ssl_reading()
|
||||
except Exception as ex:
|
||||
self._fatal_error(ex, 'Fatal error on SSL protocol')
|
||||
|
||||
def _do_read__buffered(self):
|
||||
offset = 0
|
||||
count = 1
|
||||
|
||||
buf = self._app_protocol_get_buffer(self._get_read_buffer_size())
|
||||
wants = len(buf)
|
||||
|
||||
try:
|
||||
count = self._sslobj.read(wants, buf)
|
||||
|
||||
if count > 0:
|
||||
offset = count
|
||||
while offset < wants:
|
||||
count = self._sslobj.read(wants - offset, buf[offset:])
|
||||
if count > 0:
|
||||
offset += count
|
||||
else:
|
||||
break
|
||||
else:
|
||||
self._loop.call_soon(self._do_read)
|
||||
except SSLAgainErrors:
|
||||
pass
|
||||
if offset > 0:
|
||||
self._app_protocol_buffer_updated(offset)
|
||||
if not count:
|
||||
# close_notify
|
||||
self._call_eof_received()
|
||||
self._start_shutdown()
|
||||
|
||||
def _do_read__copied(self):
|
||||
chunk = b'1'
|
||||
zero = True
|
||||
one = False
|
||||
|
||||
try:
|
||||
while True:
|
||||
chunk = self._sslobj.read(self.max_size)
|
||||
if not chunk:
|
||||
break
|
||||
if zero:
|
||||
zero = False
|
||||
one = True
|
||||
first = chunk
|
||||
elif one:
|
||||
one = False
|
||||
data = [first, chunk]
|
||||
else:
|
||||
data.append(chunk)
|
||||
except SSLAgainErrors:
|
||||
pass
|
||||
if one:
|
||||
self._app_protocol.data_received(first)
|
||||
elif not zero:
|
||||
self._app_protocol.data_received(b''.join(data))
|
||||
if not chunk:
|
||||
# close_notify
|
||||
self._call_eof_received()
|
||||
self._start_shutdown()
|
||||
|
||||
def _call_eof_received(self):
|
||||
try:
|
||||
if self._app_state == AppProtocolState.STATE_CON_MADE:
|
||||
self._app_state = AppProtocolState.STATE_EOF
|
||||
keep_open = self._app_protocol.eof_received()
|
||||
if keep_open:
|
||||
logger.warning('returning true from eof_received() '
|
||||
'has no effect when using ssl')
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
raise
|
||||
except BaseException as ex:
|
||||
self._fatal_error(ex, 'Error calling eof_received()')
|
||||
|
||||
# Flow control for writes from APP socket
|
||||
|
||||
def _control_app_writing(self):
|
||||
size = self._get_write_buffer_size()
|
||||
if size >= self._outgoing_high_water and not self._app_writing_paused:
|
||||
self._app_writing_paused = True
|
||||
try:
|
||||
self._app_protocol.pause_writing()
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
raise
|
||||
except BaseException as exc:
|
||||
self._loop.call_exception_handler({
|
||||
'message': 'protocol.pause_writing() failed',
|
||||
'exception': exc,
|
||||
'transport': self._app_transport,
|
||||
'protocol': self,
|
||||
})
|
||||
elif size <= self._outgoing_low_water and self._app_writing_paused:
|
||||
self._app_writing_paused = False
|
||||
try:
|
||||
self._app_protocol.resume_writing()
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
raise
|
||||
except BaseException as exc:
|
||||
self._loop.call_exception_handler({
|
||||
'message': 'protocol.resume_writing() failed',
|
||||
'exception': exc,
|
||||
'transport': self._app_transport,
|
||||
'protocol': self,
|
||||
})
|
||||
|
||||
def _get_write_buffer_size(self):
|
||||
return self._outgoing.pending + self._write_buffer_size
|
||||
|
||||
def _set_write_buffer_limits(self, high=None, low=None):
|
||||
high, low = add_flowcontrol_defaults(
|
||||
high, low, constants.FLOW_CONTROL_HIGH_WATER_SSL_WRITE)
|
||||
self._outgoing_high_water = high
|
||||
self._outgoing_low_water = low
|
||||
|
||||
# Flow control for reads to APP socket
|
||||
|
||||
def _pause_reading(self):
|
||||
self._app_reading_paused = True
|
||||
|
||||
def _resume_reading(self):
|
||||
if self._app_reading_paused:
|
||||
self._app_reading_paused = False
|
||||
|
||||
def resume():
|
||||
if self._state == SSLProtocolState.WRAPPED:
|
||||
self._do_read()
|
||||
elif self._state == SSLProtocolState.FLUSHING:
|
||||
self._do_flush()
|
||||
elif self._state == SSLProtocolState.SHUTDOWN:
|
||||
self._do_shutdown()
|
||||
self._loop.call_soon(resume)
|
||||
|
||||
# Flow control for reads from SSL socket
|
||||
|
||||
def _control_ssl_reading(self):
|
||||
size = self._get_read_buffer_size()
|
||||
if size >= self._incoming_high_water and not self._ssl_reading_paused:
|
||||
self._ssl_reading_paused = True
|
||||
self._transport.pause_reading()
|
||||
elif size <= self._incoming_low_water and self._ssl_reading_paused:
|
||||
self._ssl_reading_paused = False
|
||||
self._transport.resume_reading()
|
||||
|
||||
def _set_read_buffer_limits(self, high=None, low=None):
|
||||
high, low = add_flowcontrol_defaults(
|
||||
high, low, constants.FLOW_CONTROL_HIGH_WATER_SSL_READ)
|
||||
self._incoming_high_water = high
|
||||
self._incoming_low_water = low
|
||||
|
||||
def _get_read_buffer_size(self):
|
||||
return self._incoming.pending
|
||||
|
||||
# Flow control for writes to SSL socket
|
||||
|
||||
def pause_writing(self):
|
||||
"""Called when the low-level transport's buffer goes over
|
||||
the high-water mark.
|
||||
"""
|
||||
assert not self._ssl_writing_paused
|
||||
self._ssl_writing_paused = True
|
||||
|
||||
def resume_writing(self):
|
||||
"""Called when the low-level transport's buffer drains below
|
||||
the low-water mark.
|
||||
"""
|
||||
assert self._ssl_writing_paused
|
||||
self._ssl_writing_paused = False
|
||||
self._process_outgoing()
|
||||
|
||||
def _fatal_error(self, exc, message='Fatal error on transport'):
|
||||
if self._transport:
|
||||
self._transport._force_close(exc)
|
||||
|
||||
if isinstance(exc, OSError):
|
||||
if self._loop.get_debug():
|
||||
logger.debug("%r: %s", self, message, exc_info=True)
|
||||
elif not isinstance(exc, exceptions.CancelledError):
|
||||
self._loop.call_exception_handler({
|
||||
'message': message,
|
||||
'exception': exc,
|
||||
'transport': self._transport,
|
||||
'protocol': self,
|
||||
})
|
||||
174
Utils/PythonNew32/Lib/asyncio/staggered.py
Normal file
@@ -0,0 +1,174 @@
"""Support for running coroutines in parallel with staggered start times."""

__all__ = 'staggered_race',

import contextlib

from . import events
from . import exceptions as exceptions_mod
from . import locks
from . import tasks


async def staggered_race(coro_fns, delay, *, loop=None):
    """Run coroutines with staggered start times and take the first to finish.

    This method takes an iterable of coroutine functions. The first one is
    started immediately. From then on, whenever the immediately preceding one
    fails (raises an exception), or when *delay* seconds has passed, the next
    coroutine is started. This continues until one of the coroutines complete
    successfully, in which case all others are cancelled, or until all
    coroutines fail.

    The coroutines provided should be well-behaved in the following way:

    * They should only ``return`` if completed successfully.

    * They should always raise an exception if they did not complete
      successfully. In particular, if they handle cancellation, they should
      probably reraise, like this::

        try:
            # do work
        except asyncio.CancelledError:
            # undo partially completed work
            raise

    Args:
        coro_fns: an iterable of coroutine functions, i.e. callables that
            return a coroutine object when called. Use ``functools.partial`` or
            lambdas to pass arguments.

        delay: amount of time, in seconds, between starting coroutines. If
            ``None``, the coroutines will run sequentially.

        loop: the event loop to use.

    Returns:
        tuple *(winner_result, winner_index, exceptions)* where

        - *winner_result*: the result of the winning coroutine, or ``None``
          if no coroutines won.

        - *winner_index*: the index of the winning coroutine in
          ``coro_fns``, or ``None`` if no coroutines won. If the winning
          coroutine may return None on success, *winner_index* can be used
          to definitively determine whether any coroutine won.

        - *exceptions*: list of exceptions returned by the coroutines.
          ``len(exceptions)`` is equal to the number of coroutines actually
          started, and the order is the same as in ``coro_fns``. The winning
          coroutine's entry is ``None``.

    """
    # TODO: when we have aiter() and anext(), allow async iterables in coro_fns.
    loop = loop or events.get_running_loop()
    enum_coro_fns = enumerate(coro_fns)
    winner_result = None
    winner_index = None
    unhandled_exceptions = []
    exceptions = []
    running_tasks = set()
    on_completed_fut = None

    def task_done(task):
        running_tasks.discard(task)
        if (
            on_completed_fut is not None
            and not on_completed_fut.done()
            and not running_tasks
        ):
            on_completed_fut.set_result(None)

        if task.cancelled():
            return

        exc = task.exception()
        if exc is None:
            return
        unhandled_exceptions.append(exc)

    async def run_one_coro(ok_to_start, previous_failed) -> None:
        # in eager tasks this waits for the calling task to append this task
        # to running_tasks, in regular tasks this wait is a no-op that does
        # not yield a future. See gh-124309.
        await ok_to_start.wait()
        # Wait for the previous task to finish, or for delay seconds
        if previous_failed is not None:
            with contextlib.suppress(exceptions_mod.TimeoutError):
                # Use asyncio.wait_for() instead of asyncio.wait() here, so
                # that if we get cancelled at this point, Event.wait() is also
                # cancelled, otherwise there will be a "Task destroyed but it is
                # pending" later.
                await tasks.wait_for(previous_failed.wait(), delay)
        # Get the next coroutine to run
        try:
            this_index, coro_fn = next(enum_coro_fns)
        except StopIteration:
            return
        # Start task that will run the next coroutine
        this_failed = locks.Event()
        next_ok_to_start = locks.Event()
        next_task = loop.create_task(run_one_coro(next_ok_to_start, this_failed))
        running_tasks.add(next_task)
        next_task.add_done_callback(task_done)
        # next_task has been appended to running_tasks so next_task is ok to
        # start.
        next_ok_to_start.set()
        # Prepare place to put this coroutine's exceptions if not won
        exceptions.append(None)
        assert len(exceptions) == this_index + 1

        try:
            result = await coro_fn()
        except (SystemExit, KeyboardInterrupt):
            raise
        except BaseException as e:
            exceptions[this_index] = e
            this_failed.set()  # Kickstart the next coroutine
        else:
            # Store winner's results
            nonlocal winner_index, winner_result
            assert winner_index is None
            winner_index = this_index
            winner_result = result
            # Cancel all other tasks. We take care to not cancel the current
            # task as well. If we do so, then since there is no `await` after
            # here and CancelledError are usually thrown at one, we will
            # encounter a curious corner case where the current task will end
            # up as done() == True, cancelled() == False, exception() ==
            # asyncio.CancelledError. This behavior is specified in
            # https://bugs.python.org/issue30048
            current_task = tasks.current_task(loop)
            for t in running_tasks:
                if t is not current_task:
                    t.cancel()

    propagate_cancellation_error = None
    try:
        ok_to_start = locks.Event()
        first_task = loop.create_task(run_one_coro(ok_to_start, None))
        running_tasks.add(first_task)
        first_task.add_done_callback(task_done)
        # first_task has been appended to running_tasks so first_task is ok to start.
        ok_to_start.set()
        propagate_cancellation_error = None
        # Make sure no tasks are left running if we leave this function
        while running_tasks:
            on_completed_fut = loop.create_future()
            try:
                await on_completed_fut
            except exceptions_mod.CancelledError as ex:
                propagate_cancellation_error = ex
                for task in running_tasks:
                    task.cancel(*ex.args)
            on_completed_fut = None
        if __debug__ and unhandled_exceptions:
            # If run_one_coro raises an unhandled exception, it's probably a
            # programming error, and I want to see it.
            raise ExceptionGroup("staggered race failed", unhandled_exceptions)
        if propagate_cancellation_error is not None:
            raise propagate_cancellation_error
        return winner_result, winner_index, exceptions
    finally:
        del exceptions, propagate_cancellation_error, unhandled_exceptions
787
Utils/PythonNew32/Lib/asyncio/streams.py
Normal file
@@ -0,0 +1,787 @@
|
||||
__all__ = (
|
||||
'StreamReader', 'StreamWriter', 'StreamReaderProtocol',
|
||||
'open_connection', 'start_server')
|
||||
|
||||
import collections
|
||||
import socket
|
||||
import sys
|
||||
import warnings
|
||||
import weakref
|
||||
|
||||
if hasattr(socket, 'AF_UNIX'):
|
||||
__all__ += ('open_unix_connection', 'start_unix_server')
|
||||
|
||||
from . import coroutines
|
||||
from . import events
|
||||
from . import exceptions
|
||||
from . import format_helpers
|
||||
from . import protocols
|
||||
from .log import logger
|
||||
from .tasks import sleep
|
||||
|
||||
|
||||
_DEFAULT_LIMIT = 2 ** 16 # 64 KiB
|
||||
|
||||
|
||||
async def open_connection(host=None, port=None, *,
|
||||
limit=_DEFAULT_LIMIT, **kwds):
|
||||
"""A wrapper for create_connection() returning a (reader, writer) pair.
|
||||
|
||||
The reader returned is a StreamReader instance; the writer is a
|
||||
StreamWriter instance.
|
||||
|
||||
The arguments are all the usual arguments to create_connection()
|
||||
except protocol_factory; most common are positional host and port,
|
||||
with various optional keyword arguments following.
|
||||
|
||||
Additional optional keyword arguments are loop (to set the event loop
|
||||
instance to use) and limit (to set the buffer limit passed to the
|
||||
StreamReader).
|
||||
|
||||
(If you want to customize the StreamReader and/or
|
||||
StreamReaderProtocol classes, just copy the code -- there's
|
||||
really nothing special here except some convenience.)
|
||||
"""
|
||||
loop = events.get_running_loop()
|
||||
reader = StreamReader(limit=limit, loop=loop)
|
||||
protocol = StreamReaderProtocol(reader, loop=loop)
|
||||
transport, _ = await loop.create_connection(
|
||||
lambda: protocol, host, port, **kwds)
|
||||
writer = StreamWriter(transport, protocol, reader, loop)
|
||||
return reader, writer
|
||||
|
||||
|
||||
async def start_server(client_connected_cb, host=None, port=None, *,
|
||||
limit=_DEFAULT_LIMIT, **kwds):
|
||||
"""Start a socket server, call back for each client connected.
|
||||
|
||||
The first parameter, `client_connected_cb`, takes two parameters:
|
||||
client_reader, client_writer. client_reader is a StreamReader
|
||||
object, while client_writer is a StreamWriter object. This
|
||||
parameter can either be a plain callback function or a coroutine;
|
||||
if it is a coroutine, it will be automatically converted into a
|
||||
Task.
|
||||
|
||||
The rest of the arguments are all the usual arguments to
|
||||
loop.create_server() except protocol_factory; most common are
|
||||
positional host and port, with various optional keyword arguments
|
||||
following. The return value is the same as loop.create_server().
|
||||
|
||||
Additional optional keyword argument is limit (to set the buffer
|
||||
limit passed to the StreamReader).
|
||||
|
||||
The return value is the same as loop.create_server(), i.e. a
|
||||
Server object which can be used to stop the service.
|
||||
"""
|
||||
loop = events.get_running_loop()
|
||||
|
||||
def factory():
|
||||
reader = StreamReader(limit=limit, loop=loop)
|
||||
protocol = StreamReaderProtocol(reader, client_connected_cb,
|
||||
loop=loop)
|
||||
return protocol
|
||||
|
||||
return await loop.create_server(factory, host, port, **kwds)
|
||||
|
||||
|
||||
if hasattr(socket, 'AF_UNIX'):
|
||||
# UNIX Domain Sockets are supported on this platform
|
||||
|
||||
async def open_unix_connection(path=None, *,
|
||||
limit=_DEFAULT_LIMIT, **kwds):
|
||||
"""Similar to `open_connection` but works with UNIX Domain Sockets."""
|
||||
loop = events.get_running_loop()
|
||||
|
||||
reader = StreamReader(limit=limit, loop=loop)
|
||||
protocol = StreamReaderProtocol(reader, loop=loop)
|
||||
transport, _ = await loop.create_unix_connection(
|
||||
lambda: protocol, path, **kwds)
|
||||
writer = StreamWriter(transport, protocol, reader, loop)
|
||||
return reader, writer
|
||||
|
||||
async def start_unix_server(client_connected_cb, path=None, *,
|
||||
limit=_DEFAULT_LIMIT, **kwds):
|
||||
"""Similar to `start_server` but works with UNIX Domain Sockets."""
|
||||
loop = events.get_running_loop()
|
||||
|
||||
def factory():
|
||||
reader = StreamReader(limit=limit, loop=loop)
|
||||
protocol = StreamReaderProtocol(reader, client_connected_cb,
|
||||
loop=loop)
|
||||
return protocol
|
||||
|
||||
return await loop.create_unix_server(factory, path, **kwds)
|
||||
|
||||
|
||||
class FlowControlMixin(protocols.Protocol):
    """Reusable flow control logic for StreamWriter.drain().

    This implements the protocol methods pause_writing(),
    resume_writing() and connection_lost().  If the subclass overrides
    these it must call the super methods.

    StreamWriter.drain() must wait for the _drain_helper() coroutine.
    """

    def __init__(self, loop=None):
        if loop is None:
            self._loop = events.get_event_loop()
        else:
            self._loop = loop
        self._paused = False
        self._drain_waiters = collections.deque()
        self._connection_lost = False

    def pause_writing(self):
        assert not self._paused
        self._paused = True
        if self._loop.get_debug():
            logger.debug("%r pauses writing", self)

    def resume_writing(self):
        assert self._paused
        self._paused = False
        if self._loop.get_debug():
            logger.debug("%r resumes writing", self)

        for waiter in self._drain_waiters:
            if not waiter.done():
                waiter.set_result(None)

    def connection_lost(self, exc):
        self._connection_lost = True
        # Wake up the writer(s) if currently paused.
        if not self._paused:
            return

        for waiter in self._drain_waiters:
            if not waiter.done():
                if exc is None:
                    waiter.set_result(None)
                else:
                    waiter.set_exception(exc)

    async def _drain_helper(self):
        if self._connection_lost:
            raise ConnectionResetError('Connection lost')
        if not self._paused:
            return
        waiter = self._loop.create_future()
        self._drain_waiters.append(waiter)
        try:
            await waiter
        finally:
            self._drain_waiters.remove(waiter)

    def _get_close_waiter(self, stream):
        raise NotImplementedError

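The pause/resume handshake above can be exercised directly. A minimal sketch, assuming CPython's private `asyncio.streams.FlowControlMixin` is importable (it is not public API, so this is for illustration only):

```python
import asyncio

async def main():
    proto = asyncio.streams.FlowControlMixin(loop=asyncio.get_running_loop())
    proto.pause_writing()                      # transport reports a full buffer
    task = asyncio.create_task(proto._drain_helper())
    await asyncio.sleep(0)                     # let the drain coroutine block
    assert not task.done()                     # drain is parked on a waiter
    proto.resume_writing()                     # buffer drained; waiters released
    await task                                 # completes immediately now
    return True

asyncio.run(main())
```

This is exactly the path `StreamWriter.drain()` takes when the transport's write buffer crosses its high-water mark.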
class StreamReaderProtocol(FlowControlMixin, protocols.Protocol):
    """Helper class to adapt between Protocol and StreamReader.

    (This is a helper class instead of making StreamReader itself a
    Protocol subclass, because the StreamReader has other potential
    uses, and to prevent the user of the StreamReader from accidentally
    calling inappropriate methods of the protocol.)
    """

    _source_traceback = None

    def __init__(self, stream_reader, client_connected_cb=None, loop=None):
        super().__init__(loop=loop)
        if stream_reader is not None:
            self._stream_reader_wr = weakref.ref(stream_reader)
            self._source_traceback = stream_reader._source_traceback
        else:
            self._stream_reader_wr = None
        if client_connected_cb is not None:
            # This is a stream created by the `create_server()` function.
            # Keep a strong reference to the reader until a connection
            # is established.
            self._strong_reader = stream_reader
        self._reject_connection = False
        self._task = None
        self._transport = None
        self._client_connected_cb = client_connected_cb
        self._over_ssl = False
        self._closed = self._loop.create_future()

    @property
    def _stream_reader(self):
        if self._stream_reader_wr is None:
            return None
        return self._stream_reader_wr()

    def _replace_transport(self, transport):
        loop = self._loop
        self._transport = transport
        self._over_ssl = transport.get_extra_info('sslcontext') is not None

    def connection_made(self, transport):
        if self._reject_connection:
            context = {
                'message': ('An open stream was garbage collected prior to '
                            'establishing network connection; '
                            'call "stream.close()" explicitly.')
            }
            if self._source_traceback:
                context['source_traceback'] = self._source_traceback
            self._loop.call_exception_handler(context)
            transport.abort()
            return
        self._transport = transport
        reader = self._stream_reader
        if reader is not None:
            reader.set_transport(transport)
        self._over_ssl = transport.get_extra_info('sslcontext') is not None
        if self._client_connected_cb is not None:
            writer = StreamWriter(transport, self, reader, self._loop)
            res = self._client_connected_cb(reader, writer)
            if coroutines.iscoroutine(res):
                def callback(task):
                    if task.cancelled():
                        transport.close()
                        return
                    exc = task.exception()
                    if exc is not None:
                        self._loop.call_exception_handler({
                            'message': 'Unhandled exception in client_connected_cb',
                            'exception': exc,
                            'transport': transport,
                        })
                        transport.close()

                self._task = self._loop.create_task(res)
                self._task.add_done_callback(callback)

            self._strong_reader = None

    def connection_lost(self, exc):
        reader = self._stream_reader
        if reader is not None:
            if exc is None:
                reader.feed_eof()
            else:
                reader.set_exception(exc)
        if not self._closed.done():
            if exc is None:
                self._closed.set_result(None)
            else:
                self._closed.set_exception(exc)
        super().connection_lost(exc)
        self._stream_reader_wr = None
        self._stream_writer = None
        self._task = None
        self._transport = None

    def data_received(self, data):
        reader = self._stream_reader
        if reader is not None:
            reader.feed_data(data)

    def eof_received(self):
        reader = self._stream_reader
        if reader is not None:
            reader.feed_eof()
        if self._over_ssl:
            # Prevent a warning in SSLProtocol.eof_received:
            # "returning true from eof_received()
            # has no effect when using ssl"
            return False
        return True

    def _get_close_waiter(self, stream):
        return self._closed

    def __del__(self):
        # Prevent reports about unhandled exceptions.
        # Better than the self._closed._log_traceback = False hack.
        try:
            closed = self._closed
        except AttributeError:
            pass  # failed constructor
        else:
            if closed.done() and not closed.cancelled():
                closed.exception()

class StreamWriter:
    """Wraps a Transport.

    This exposes write(), writelines(), [can_]write_eof(),
    get_extra_info() and close().  It adds drain() which returns an
    optional Future on which you can wait for flow control.  It also
    adds a transport property which references the Transport
    directly.
    """

    def __init__(self, transport, protocol, reader, loop):
        self._transport = transport
        self._protocol = protocol
        # drain() expects that the reader has an exception() method
        assert reader is None or isinstance(reader, StreamReader)
        self._reader = reader
        self._loop = loop
        self._complete_fut = self._loop.create_future()
        self._complete_fut.set_result(None)

    def __repr__(self):
        info = [self.__class__.__name__, f'transport={self._transport!r}']
        if self._reader is not None:
            info.append(f'reader={self._reader!r}')
        return '<{}>'.format(' '.join(info))

    @property
    def transport(self):
        return self._transport

    def write(self, data):
        self._transport.write(data)

    def writelines(self, data):
        self._transport.writelines(data)

    def write_eof(self):
        return self._transport.write_eof()

    def can_write_eof(self):
        return self._transport.can_write_eof()

    def close(self):
        return self._transport.close()

    def is_closing(self):
        return self._transport.is_closing()

    async def wait_closed(self):
        await self._protocol._get_close_waiter(self)

    def get_extra_info(self, name, default=None):
        return self._transport.get_extra_info(name, default)

    async def drain(self):
        """Flush the write buffer.

        The intended use is to write

          w.write(data)
          await w.drain()
        """
        if self._reader is not None:
            exc = self._reader.exception()
            if exc is not None:
                raise exc
        if self._transport.is_closing():
            # Wait for protocol.connection_lost() call
            # Raise connection closing error if any,
            # ConnectionResetError otherwise
            # Yield to the event loop so connection_lost() may be
            # called.  Without this, _drain_helper() would return
            # immediately, and code that calls
            #     write(...); await drain()
            # in a loop would never call connection_lost(), so it
            # would not see an error when the socket is closed.
            await sleep(0)
        await self._protocol._drain_helper()

    async def start_tls(self, sslcontext, *,
                        server_hostname=None,
                        ssl_handshake_timeout=None,
                        ssl_shutdown_timeout=None):
        """Upgrade an existing stream-based connection to TLS."""
        server_side = self._protocol._client_connected_cb is not None
        protocol = self._protocol
        await self.drain()
        new_transport = await self._loop.start_tls(  # type: ignore
            self._transport, protocol, sslcontext,
            server_side=server_side, server_hostname=server_hostname,
            ssl_handshake_timeout=ssl_handshake_timeout,
            ssl_shutdown_timeout=ssl_shutdown_timeout)
        self._transport = new_transport
        protocol._replace_transport(new_transport)

    def __del__(self, warnings=warnings):
        if not self._transport.is_closing():
            if self._loop.is_closed():
                warnings.warn("loop is closed", ResourceWarning)
            else:
                self.close()
                warnings.warn(f"unclosed {self!r}", ResourceWarning)

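The `write()` then `await drain()` pattern described in the docstring can be exercised end-to-end over a loopback connection; a short sketch (the handler name and payload are illustrative):

```python
import asyncio

async def handle(reader, writer):
    data = await reader.readline()
    writer.write(data.upper())
    await writer.drain()          # respect flow control before closing
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'ping\n')
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b'PING\n'
```

Note how both sides close the writer and then `await wait_closed()`, which resolves via the `_get_close_waiter()` future shown above.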
class StreamReader:

    _source_traceback = None

    def __init__(self, limit=_DEFAULT_LIMIT, loop=None):
        # The line length limit is a security feature;
        # it also doubles as half the buffer limit.

        if limit <= 0:
            raise ValueError('Limit cannot be <= 0')

        self._limit = limit
        if loop is None:
            self._loop = events.get_event_loop()
        else:
            self._loop = loop
        self._buffer = bytearray()
        self._eof = False    # Whether we're done.
        self._waiter = None  # A future used by _wait_for_data()
        self._exception = None
        self._transport = None
        self._paused = False
        if self._loop.get_debug():
            self._source_traceback = format_helpers.extract_stack(
                sys._getframe(1))

    def __repr__(self):
        info = ['StreamReader']
        if self._buffer:
            info.append(f'{len(self._buffer)} bytes')
        if self._eof:
            info.append('eof')
        if self._limit != _DEFAULT_LIMIT:
            info.append(f'limit={self._limit}')
        if self._waiter:
            info.append(f'waiter={self._waiter!r}')
        if self._exception:
            info.append(f'exception={self._exception!r}')
        if self._transport:
            info.append(f'transport={self._transport!r}')
        if self._paused:
            info.append('paused')
        return '<{}>'.format(' '.join(info))

    def exception(self):
        return self._exception

    def set_exception(self, exc):
        self._exception = exc

        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_exception(exc)

    def _wakeup_waiter(self):
        """Wake up read*() functions waiting for data or EOF."""
        waiter = self._waiter
        if waiter is not None:
            self._waiter = None
            if not waiter.cancelled():
                waiter.set_result(None)

    def set_transport(self, transport):
        assert self._transport is None, 'Transport already set'
        self._transport = transport

    def _maybe_resume_transport(self):
        if self._paused and len(self._buffer) <= self._limit:
            self._paused = False
            self._transport.resume_reading()

    def feed_eof(self):
        self._eof = True
        self._wakeup_waiter()

    def at_eof(self):
        """Return True if the buffer is empty and 'feed_eof' was called."""
        return self._eof and not self._buffer

    def feed_data(self, data):
        assert not self._eof, 'feed_data after feed_eof'

        if not data:
            return

        self._buffer.extend(data)
        self._wakeup_waiter()

        if (self._transport is not None and
                not self._paused and
                len(self._buffer) > 2 * self._limit):
            try:
                self._transport.pause_reading()
            except NotImplementedError:
                # The transport can't be paused.
                # We'll just have to buffer all data.
                # Forget the transport so we don't keep trying.
                self._transport = None
            else:
                self._paused = True

    async def _wait_for_data(self, func_name):
        """Wait until feed_data() or feed_eof() is called.

        If the stream was paused, automatically resume it.
        """
        # StreamReader uses a future to link the protocol feed_data() method
        # to a read coroutine.  Running two read coroutines at the same time
        # would have an unexpected behaviour: it would not be possible to
        # know which coroutine would get the next data.
        if self._waiter is not None:
            raise RuntimeError(
                f'{func_name}() called while another coroutine is '
                f'already waiting for incoming data')

        assert not self._eof, '_wait_for_data after EOF'

        # Waiting for data while paused would deadlock, so prevent it.
        # This is essential for readexactly(n) in the case when n > self._limit.
        if self._paused:
            self._paused = False
            self._transport.resume_reading()

        self._waiter = self._loop.create_future()
        try:
            await self._waiter
        finally:
            self._waiter = None

    async def readline(self):
        """Read a chunk of data from the stream until a newline (b'\n') is found.

        On success, return the chunk that ends with the newline.  If only a
        partial line can be read due to EOF, return the incomplete line
        without the terminating newline.  When EOF is reached while no bytes
        have been read, an empty bytes object is returned.

        If the limit is reached, ValueError will be raised.  In that case, if
        a newline was found, the complete line including the newline will be
        removed from the internal buffer.  Otherwise, the internal buffer
        will be cleared.  The limit is compared against the part of the line
        without the newline.

        If the stream was paused, this function will automatically resume it
        if needed.
        """
        sep = b'\n'
        seplen = len(sep)
        try:
            line = await self.readuntil(sep)
        except exceptions.IncompleteReadError as e:
            return e.partial
        except exceptions.LimitOverrunError as e:
            if self._buffer.startswith(sep, e.consumed):
                del self._buffer[:e.consumed + seplen]
            else:
                self._buffer.clear()
            self._maybe_resume_transport()
            raise ValueError(e.args[0])
        return line

    async def readuntil(self, separator=b'\n'):
        """Read data from the stream until ``separator`` is found.

        On success, the data and separator will be removed from the
        internal buffer (consumed).  Returned data will include the
        separator at the end.

        The configured stream limit is used to check the result.  The limit
        sets the maximal length of data that can be returned, not counting
        the separator.

        If an EOF occurs and the complete separator is still not found,
        an IncompleteReadError exception will be raised, and the internal
        buffer will be reset.  The IncompleteReadError.partial attribute
        may contain the separator partially.

        If the data cannot be read because it is over the limit, a
        LimitOverrunError exception will be raised, and the data
        will be left in the internal buffer, so it can be read again.

        The ``separator`` may also be a tuple of separators.  In this
        case the return value will be the shortest possible that has any
        separator as the suffix.  For the purposes of LimitOverrunError,
        the shortest possible separator is considered to be the one that
        matched.
        """
        if isinstance(separator, tuple):
            # Make sure the shortest match wins.
            separator = sorted(separator, key=len)
        else:
            separator = [separator]
        if not separator:
            raise ValueError('Separator should contain at least one element')
        min_seplen = len(separator[0])
        max_seplen = len(separator[-1])
        if min_seplen == 0:
            raise ValueError('Separator should be at least one-byte string')

        if self._exception is not None:
            raise self._exception

        # Consume the whole buffer except the last bytes, whose length is
        # one less than max_seplen.  Let's check the corner cases with
        # separator[-1]='SEPARATOR':
        # * we have received an almost complete separator (without the last
        #   byte), i.e. buffer='some textSEPARATO'.  In this case we
        #   can safely consume max_seplen - 1 bytes.
        # * the last byte of the buffer is the first byte of the separator,
        #   i.e. buffer='abcdefghijklmnopqrS'.  We may safely consume
        #   everything except that last byte, but this requires analyzing
        #   the bytes of the buffer that match a partial separator.
        #   This is slow and/or requires an FSM.  For this case our
        #   implementation is not optimal, since it requires rescanning
        #   data that is known to not belong to the separator.  In the
        #   real world, the separator will not be long enough to notice
        #   performance problems.  Even when reading MIME-encoded
        #   messages :)

        # `offset` is the number of bytes from the beginning of the buffer
        # where there is no occurrence of any `separator`.
        offset = 0

        # Loop until we find a `separator` in the buffer, exceed the buffer
        # size, or an EOF has happened.
        while True:
            buflen = len(self._buffer)

            # Check if we now have enough data in the buffer for the
            # shortest separator to fit.
            if buflen - offset >= min_seplen:
                match_start = None
                match_end = None
                for sep in separator:
                    isep = self._buffer.find(sep, offset)

                    if isep != -1:
                        # `separator` is in the buffer.  `match_start` and
                        # `match_end` will be used later to retrieve the
                        # data.
                        end = isep + len(sep)
                        if match_end is None or end < match_end:
                            match_end = end
                            match_start = isep
                if match_end is not None:
                    break

                # see the comment above for an explanation.
                offset = max(0, buflen + 1 - max_seplen)
                if offset > self._limit:
                    raise exceptions.LimitOverrunError(
                        'Separator is not found, and chunk exceed the limit',
                        offset)

            # A complete message (with a full separator) may be present in
            # the buffer even when the EOF flag is set.  This may happen
            # when the last chunk adds data which makes the separator be
            # found.  That's why we check for EOF *after* inspecting the
            # buffer.
            if self._eof:
                chunk = bytes(self._buffer)
                self._buffer.clear()
                raise exceptions.IncompleteReadError(chunk, None)

            # _wait_for_data() will resume reading if the stream was paused.
            await self._wait_for_data('readuntil')

        if match_start > self._limit:
            raise exceptions.LimitOverrunError(
                'Separator is found, but chunk is longer than limit',
                match_start)

        chunk = self._buffer[:match_end]
        del self._buffer[:match_end]
        self._maybe_resume_transport()
        return bytes(chunk)

    async def read(self, n=-1):
        """Read up to `n` bytes from the stream.

        If `n` is not provided or set to -1,
        read until EOF, then return all read bytes.
        If EOF was received and the internal buffer is empty,
        return an empty bytes object.

        If `n` is 0, return an empty bytes object immediately.

        If `n` is positive, return at most `n` available bytes
        as soon as at least 1 byte is available in the internal buffer.
        If EOF is received before any byte is read, return an empty
        bytes object.

        The returned value is not limited by the `limit` configured at
        stream creation.

        If the stream was paused, this function will automatically resume
        it if needed.
        """

        if self._exception is not None:
            raise self._exception

        if n == 0:
            return b''

        if n < 0:
            # This used to just loop creating a new waiter hoping to
            # collect everything in self._buffer, but that would
            # deadlock if the subprocess sends more than self.limit
            # bytes.  So just call self.read(self._limit) until EOF.
            blocks = []
            while True:
                block = await self.read(self._limit)
                if not block:
                    break
                blocks.append(block)
            return b''.join(blocks)

        if not self._buffer and not self._eof:
            await self._wait_for_data('read')

        # This will work right even if the buffer is less than n bytes.
        data = bytes(memoryview(self._buffer)[:n])
        del self._buffer[:n]

        self._maybe_resume_transport()
        return data

    async def readexactly(self, n):
        """Read exactly `n` bytes.

        Raise an IncompleteReadError if EOF is reached before `n` bytes can
        be read.  The IncompleteReadError.partial attribute of the exception
        will contain the partially read bytes.

        If n is zero, return an empty bytes object.

        The returned value is not limited by the `limit` configured at
        stream creation.

        If the stream was paused, this function will automatically resume
        it if needed.
        """
        if n < 0:
            raise ValueError('readexactly size can not be less than zero')

        if self._exception is not None:
            raise self._exception

        if n == 0:
            return b''

        while len(self._buffer) < n:
            if self._eof:
                incomplete = bytes(self._buffer)
                self._buffer.clear()
                raise exceptions.IncompleteReadError(incomplete, n)

            await self._wait_for_data('readexactly')

        if len(self._buffer) == n:
            data = bytes(self._buffer)
            self._buffer.clear()
        else:
            data = bytes(memoryview(self._buffer)[:n])
            del self._buffer[:n]
        self._maybe_resume_transport()
        return data

    def __aiter__(self):
        return self

    async def __anext__(self):
        val = await self.readline()
        if val == b'':
            raise StopAsyncIteration
        return val
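StreamReader's buffering can be observed without any transport, since `feed_data()`/`feed_eof()` are the same entry points `StreamReaderProtocol.data_received()` and `eof_received()` call above; a small sketch:

```python
import asyncio

async def main():
    reader = asyncio.StreamReader()
    reader.feed_data(b'first line\nsecond')
    reader.feed_eof()
    line = await reader.readline()   # complete line, newline kept
    rest = await reader.read()       # drains the rest of the buffer
    return line, rest, reader.at_eof()

line, rest, eof = asyncio.run(main())
# line == b'first line\n', rest == b'second', eof is True
```

This also shows why `read(-1)` loops in `self._limit`-sized blocks: it keeps calling itself until `feed_eof()` has been seen and the buffer is empty.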
229
Utils/PythonNew32/Lib/asyncio/subprocess.py
Normal file
@@ -0,0 +1,229 @@
__all__ = 'create_subprocess_exec', 'create_subprocess_shell'

import subprocess

from . import events
from . import protocols
from . import streams
from . import tasks
from .log import logger


PIPE = subprocess.PIPE
STDOUT = subprocess.STDOUT
DEVNULL = subprocess.DEVNULL


class SubprocessStreamProtocol(streams.FlowControlMixin,
                               protocols.SubprocessProtocol):
    """Like StreamReaderProtocol, but for a subprocess."""

    def __init__(self, limit, loop):
        super().__init__(loop=loop)
        self._limit = limit
        self.stdin = self.stdout = self.stderr = None
        self._transport = None
        self._process_exited = False
        self._pipe_fds = []
        self._stdin_closed = self._loop.create_future()

    def __repr__(self):
        info = [self.__class__.__name__]
        if self.stdin is not None:
            info.append(f'stdin={self.stdin!r}')
        if self.stdout is not None:
            info.append(f'stdout={self.stdout!r}')
        if self.stderr is not None:
            info.append(f'stderr={self.stderr!r}')
        return '<{}>'.format(' '.join(info))

    def connection_made(self, transport):
        self._transport = transport

        stdout_transport = transport.get_pipe_transport(1)
        if stdout_transport is not None:
            self.stdout = streams.StreamReader(limit=self._limit,
                                               loop=self._loop)
            self.stdout.set_transport(stdout_transport)
            self._pipe_fds.append(1)

        stderr_transport = transport.get_pipe_transport(2)
        if stderr_transport is not None:
            self.stderr = streams.StreamReader(limit=self._limit,
                                               loop=self._loop)
            self.stderr.set_transport(stderr_transport)
            self._pipe_fds.append(2)

        stdin_transport = transport.get_pipe_transport(0)
        if stdin_transport is not None:
            self.stdin = streams.StreamWriter(stdin_transport,
                                              protocol=self,
                                              reader=None,
                                              loop=self._loop)

    def pipe_data_received(self, fd, data):
        if fd == 1:
            reader = self.stdout
        elif fd == 2:
            reader = self.stderr
        else:
            reader = None
        if reader is not None:
            reader.feed_data(data)

    def pipe_connection_lost(self, fd, exc):
        if fd == 0:
            pipe = self.stdin
            if pipe is not None:
                pipe.close()
            self.connection_lost(exc)
            if exc is None:
                self._stdin_closed.set_result(None)
            else:
                self._stdin_closed.set_exception(exc)
                # Since calling `wait_closed()` is not mandatory,
                # we shouldn't log the traceback if this is not awaited.
                self._stdin_closed._log_traceback = False
            return
        if fd == 1:
            reader = self.stdout
        elif fd == 2:
            reader = self.stderr
        else:
            reader = None
        if reader is not None:
            if exc is None:
                reader.feed_eof()
            else:
                reader.set_exception(exc)

        if fd in self._pipe_fds:
            self._pipe_fds.remove(fd)
        self._maybe_close_transport()

    def process_exited(self):
        self._process_exited = True
        self._maybe_close_transport()

    def _maybe_close_transport(self):
        if len(self._pipe_fds) == 0 and self._process_exited:
            self._transport.close()
            self._transport = None

    def _get_close_waiter(self, stream):
        if stream is self.stdin:
            return self._stdin_closed


class Process:
    def __init__(self, transport, protocol, loop):
        self._transport = transport
        self._protocol = protocol
        self._loop = loop
        self.stdin = protocol.stdin
        self.stdout = protocol.stdout
        self.stderr = protocol.stderr
        self.pid = transport.get_pid()

    def __repr__(self):
        return f'<{self.__class__.__name__} {self.pid}>'

    @property
    def returncode(self):
        return self._transport.get_returncode()

    async def wait(self):
        """Wait until the process exits and return its return code."""
        return await self._transport._wait()

    def send_signal(self, signal):
        self._transport.send_signal(signal)

    def terminate(self):
        self._transport.terminate()

    def kill(self):
        self._transport.kill()

    async def _feed_stdin(self, input):
        debug = self._loop.get_debug()
        try:
            if input is not None:
                self.stdin.write(input)
                if debug:
                    logger.debug(
                        '%r communicate: feed stdin (%s bytes)', self, len(input))

            await self.stdin.drain()
        except (BrokenPipeError, ConnectionResetError) as exc:
            # communicate() ignores BrokenPipeError and ConnectionResetError.
            # write() and drain() can raise these exceptions.
            if debug:
                logger.debug('%r communicate: stdin got %r', self, exc)

        if debug:
            logger.debug('%r communicate: close stdin', self)
        self.stdin.close()

    async def _noop(self):
        return None

    async def _read_stream(self, fd):
        transport = self._transport.get_pipe_transport(fd)
        if fd == 2:
            stream = self.stderr
        else:
            assert fd == 1
            stream = self.stdout
        if self._loop.get_debug():
            name = 'stdout' if fd == 1 else 'stderr'
            logger.debug('%r communicate: read %s', self, name)
        output = await stream.read()
        if self._loop.get_debug():
            name = 'stdout' if fd == 1 else 'stderr'
            logger.debug('%r communicate: close %s', self, name)
        transport.close()
        return output

    async def communicate(self, input=None):
        if self.stdin is not None:
            stdin = self._feed_stdin(input)
        else:
            stdin = self._noop()
        if self.stdout is not None:
            stdout = self._read_stream(1)
        else:
            stdout = self._noop()
        if self.stderr is not None:
            stderr = self._read_stream(2)
        else:
            stderr = self._noop()
        stdin, stdout, stderr = await tasks.gather(stdin, stdout, stderr)
        await self.wait()
        return (stdout, stderr)


async def create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None,
                                  limit=streams._DEFAULT_LIMIT, **kwds):
    loop = events.get_running_loop()
    protocol_factory = lambda: SubprocessStreamProtocol(limit=limit,
                                                        loop=loop)
    transport, protocol = await loop.subprocess_shell(
        protocol_factory,
        cmd, stdin=stdin, stdout=stdout,
        stderr=stderr, **kwds)
    return Process(transport, protocol, loop)


async def create_subprocess_exec(program, *args, stdin=None, stdout=None,
                                 stderr=None, limit=streams._DEFAULT_LIMIT,
                                 **kwds):
    loop = events.get_running_loop()
    protocol_factory = lambda: SubprocessStreamProtocol(limit=limit,
                                                        loop=loop)
    transport, protocol = await loop.subprocess_exec(
        protocol_factory,
        program, *args,
        stdin=stdin, stdout=stdout,
        stderr=stderr, **kwds)
    return Process(transport, protocol, loop)
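A short sketch of the machinery above: `Process.communicate()` feeds stdin through `_feed_stdin()` and gathers stdout via `_read_stream()`. The child command here is illustrative (it just echoes one stdin line back):

```python
import asyncio
import sys

async def run():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-c',
        'import sys; print(sys.stdin.readline().strip())',
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate(input=b'echoed\n')
    return out.decode().strip(), proc.returncode

out, rc = asyncio.run(run())
# out == 'echoed', rc == 0
```

Because `communicate()` awaits `self.wait()` before returning, `proc.returncode` is guaranteed to be set afterwards.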
278
Utils/PythonNew32/Lib/asyncio/taskgroups.py
Normal file
@@ -0,0 +1,278 @@
# Adapted with permission from the EdgeDB project;
# license: PSFL.


__all__ = ("TaskGroup",)

from . import events
from . import exceptions
from . import tasks


class TaskGroup:
    """Asynchronous context manager for managing groups of tasks.

    Example use:

        async with asyncio.TaskGroup() as group:
            task1 = group.create_task(some_coroutine(...))
            task2 = group.create_task(other_coroutine(...))
        print("Both tasks have completed now.")

    All tasks are awaited when the context manager exits.

    Any exceptions other than `asyncio.CancelledError` raised within
    a task will cancel all remaining tasks and wait for them to exit.
    The exceptions are then combined and raised as an `ExceptionGroup`.
    """
    def __init__(self):
        self._entered = False
        self._exiting = False
        self._aborting = False
        self._loop = None
        self._parent_task = None
        self._parent_cancel_requested = False
        self._tasks = set()
        self._errors = []
        self._base_error = None
        self._on_completed_fut = None

    def __repr__(self):
        info = ['']
        if self._tasks:
            info.append(f'tasks={len(self._tasks)}')
        if self._errors:
            info.append(f'errors={len(self._errors)}')
        if self._aborting:
            info.append('cancelling')
        elif self._entered:
            info.append('entered')

        info_str = ' '.join(info)
        return f'<TaskGroup{info_str}>'

    async def __aenter__(self):
        if self._entered:
            raise RuntimeError(
                f"TaskGroup {self!r} has already been entered")
        if self._loop is None:
            self._loop = events.get_running_loop()
        self._parent_task = tasks.current_task(self._loop)
        if self._parent_task is None:
            raise RuntimeError(
                f'TaskGroup {self!r} cannot determine the parent task')
        self._entered = True

        return self

    async def __aexit__(self, et, exc, tb):
        tb = None
        try:
            return await self._aexit(et, exc)
        finally:
            # Exceptions are heavy objects that can have object
            # cycles (bad for GC); let's not keep a reference to
            # a bunch of them. It would be nicer to use a try/finally
            # in __aexit__ directly but that introduced some diff noise
            self._parent_task = None
            self._errors = None
            self._base_error = None
            exc = None

    async def _aexit(self, et, exc):
        self._exiting = True

        if (exc is not None and
                self._is_base_error(exc) and
                self._base_error is None):
            self._base_error = exc

        if et is not None and issubclass(et, exceptions.CancelledError):
            propagate_cancellation_error = exc
        else:
            propagate_cancellation_error = None

        if et is not None:
            if not self._aborting:
                # Our parent task is being cancelled:
                #
                #     async with TaskGroup() as g:
                #         g.create_task(...)
                #         await ...  # <- CancelledError
                #
                # or there's an exception in "async with":
                #
                #     async with TaskGroup() as g:
                #         g.create_task(...)
                #         1 / 0
                #
                self._abort()

        # We use a while-loop here because "self._on_completed_fut"
        # can be cancelled multiple times if our parent task
        # is being cancelled repeatedly (or even once, when
        # our own cancellation is already in progress)
        while self._tasks:
            if self._on_completed_fut is None:
                self._on_completed_fut = self._loop.create_future()

            try:
                await self._on_completed_fut
            except exceptions.CancelledError as ex:
                if not self._aborting:
                    # Our parent task is being cancelled:
                    #
                    #    async def wrapper():
                    #        async with TaskGroup() as g:
                    #            g.create_task(foo)
                    #
                    # "wrapper" is being cancelled while "foo" is
                    # still running.
                    propagate_cancellation_error = ex
                    self._abort()

            self._on_completed_fut = None

        assert not self._tasks

        if self._base_error is not None:
            try:
                raise self._base_error
            finally:
                exc = None

        if self._parent_cancel_requested:
|
||||
# If this flag is set we *must* call uncancel().
|
||||
if self._parent_task.uncancel() == 0:
|
||||
# If there are no pending cancellations left,
|
||||
# don't propagate CancelledError.
|
||||
propagate_cancellation_error = None
|
||||
|
||||
# Propagate CancelledError if there is one, except if there
|
||||
# are other errors -- those have priority.
|
||||
try:
|
||||
if propagate_cancellation_error is not None and not self._errors:
|
||||
try:
|
||||
raise propagate_cancellation_error
|
||||
finally:
|
||||
exc = None
|
||||
finally:
|
||||
propagate_cancellation_error = None
|
||||
|
||||
if et is not None and not issubclass(et, exceptions.CancelledError):
|
||||
self._errors.append(exc)
|
||||
|
||||
if self._errors:
|
||||
# If the parent task is being cancelled from the outside
|
||||
# of the taskgroup, un-cancel and re-cancel the parent task,
|
||||
# which will keep the cancel count stable.
|
||||
if self._parent_task.cancelling():
|
||||
self._parent_task.uncancel()
|
||||
self._parent_task.cancel()
|
||||
try:
|
||||
raise BaseExceptionGroup(
|
||||
'unhandled errors in a TaskGroup',
|
||||
self._errors,
|
||||
) from None
|
||||
finally:
|
||||
exc = None
|
||||
|
||||
|
||||
def create_task(self, coro, *, name=None, context=None):
|
||||
"""Create a new task in this group and return it.
|
||||
|
||||
Similar to `asyncio.create_task`.
|
||||
"""
|
||||
if not self._entered:
|
||||
coro.close()
|
||||
raise RuntimeError(f"TaskGroup {self!r} has not been entered")
|
||||
if self._exiting and not self._tasks:
|
||||
coro.close()
|
||||
raise RuntimeError(f"TaskGroup {self!r} is finished")
|
||||
if self._aborting:
|
||||
coro.close()
|
||||
raise RuntimeError(f"TaskGroup {self!r} is shutting down")
|
||||
if context is None:
|
||||
task = self._loop.create_task(coro, name=name)
|
||||
else:
|
||||
task = self._loop.create_task(coro, name=name, context=context)
|
||||
|
||||
# Always schedule the done callback even if the task is
|
||||
# already done (e.g. if the coro was able to complete eagerly),
|
||||
# otherwise if the task completes with an exception then it will cancel
|
||||
# the current task too early. gh-128550, gh-128588
|
||||
self._tasks.add(task)
|
||||
task.add_done_callback(self._on_task_done)
|
||||
try:
|
||||
return task
|
||||
finally:
|
||||
# gh-128552: prevent a refcycle of
|
||||
# task.exception().__traceback__->TaskGroup.create_task->task
|
||||
del task
|
||||
|
||||
# Since Python 3.8 Tasks propagate all exceptions correctly,
|
||||
# except for KeyboardInterrupt and SystemExit which are
|
||||
# still considered special.
|
||||
|
||||
def _is_base_error(self, exc: BaseException) -> bool:
|
||||
assert isinstance(exc, BaseException)
|
||||
return isinstance(exc, (SystemExit, KeyboardInterrupt))
|
||||
|
||||
def _abort(self):
|
||||
self._aborting = True
|
||||
|
||||
for t in self._tasks:
|
||||
if not t.done():
|
||||
t.cancel()
|
||||
|
||||
def _on_task_done(self, task):
|
||||
self._tasks.discard(task)
|
||||
|
||||
if self._on_completed_fut is not None and not self._tasks:
|
||||
if not self._on_completed_fut.done():
|
||||
self._on_completed_fut.set_result(True)
|
||||
|
||||
if task.cancelled():
|
||||
return
|
||||
|
||||
exc = task.exception()
|
||||
if exc is None:
|
||||
return
|
||||
|
||||
self._errors.append(exc)
|
||||
if self._is_base_error(exc) and self._base_error is None:
|
||||
self._base_error = exc
|
||||
|
||||
if self._parent_task.done():
|
||||
# Not sure if this case is possible, but we want to handle
|
||||
# it anyways.
|
||||
self._loop.call_exception_handler({
|
||||
'message': f'Task {task!r} has errored out but its parent '
|
||||
f'task {self._parent_task} is already completed',
|
||||
'exception': exc,
|
||||
'task': task,
|
||||
})
|
||||
return
|
||||
|
||||
if not self._aborting and not self._parent_cancel_requested:
|
||||
# If parent task *is not* being cancelled, it means that we want
|
||||
# to manually cancel it to abort whatever is being run right now
|
||||
# in the TaskGroup. But we want to mark parent task as
|
||||
# "not cancelled" later in __aexit__. Example situation that
|
||||
# we need to handle:
|
||||
#
|
||||
# async def foo():
|
||||
# try:
|
||||
# async with TaskGroup() as g:
|
||||
# g.create_task(crash_soon())
|
||||
# await something # <- this needs to be canceled
|
||||
# # by the TaskGroup, e.g.
|
||||
# # foo() needs to be cancelled
|
||||
# except Exception:
|
||||
# # Ignore any exceptions raised in the TaskGroup
|
||||
# pass
|
||||
# await something_else # this line has to be called
|
||||
# # after TaskGroup is finished.
|
||||
self._abort()
|
||||
self._parent_cancel_requested = True
|
||||
self._parent_task.cancel()
|
||||
1118
Utils/PythonNew32/Lib/asyncio/tasks.py
Normal file
File diff suppressed because it is too large
25
Utils/PythonNew32/Lib/asyncio/threads.py
Normal file
@@ -0,0 +1,25 @@
"""High-level support for working with threads in asyncio"""

import functools
import contextvars

from . import events


__all__ = "to_thread",


async def to_thread(func, /, *args, **kwargs):
    """Asynchronously run function *func* in a separate thread.

    Any *args and **kwargs supplied for this function are directly passed
    to *func*. Also, the current :class:`contextvars.Context` is propagated,
    allowing context variables from the main thread to be accessed in the
    separate thread.

    Return a coroutine that can be awaited to get the eventual result of *func*.
    """
    loop = events.get_running_loop()
    ctx = contextvars.copy_context()
    func_call = functools.partial(ctx.run, func, *args, **kwargs)
    return await loop.run_in_executor(None, func_call)
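As a quick check of the behaviour documented above, the sketch below (the helper names are illustrative, not part of the module) runs a blocking function off the event-loop thread via the public `asyncio.to_thread` wrapper:

```python
import asyncio
import threading

def blocking_work(n):
    # Executed in a default-executor worker thread;
    # the event loop stays responsive meanwhile.
    return threading.current_thread() is not threading.main_thread(), n * 2

async def main():
    in_worker, doubled = await asyncio.to_thread(blocking_work, 21)
    return in_worker, doubled

print(asyncio.run(main()))  # → (True, 42)
```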
184
Utils/PythonNew32/Lib/asyncio/timeouts.py
Normal file
@@ -0,0 +1,184 @@
import enum

from types import TracebackType
from typing import final, Optional, Type

from . import events
from . import exceptions
from . import tasks


__all__ = (
    "Timeout",
    "timeout",
    "timeout_at",
)


class _State(enum.Enum):
    CREATED = "created"
    ENTERED = "active"
    EXPIRING = "expiring"
    EXPIRED = "expired"
    EXITED = "finished"


@final
class Timeout:
    """Asynchronous context manager for cancelling overdue coroutines.

    Use `timeout()` or `timeout_at()` rather than instantiating this class directly.
    """

    def __init__(self, when: Optional[float]) -> None:
        """Schedule a timeout that will trigger at a given loop time.

        - If `when` is `None`, the timeout will never trigger.
        - If `when < loop.time()`, the timeout will trigger on the next
          iteration of the event loop.
        """
        self._state = _State.CREATED

        self._timeout_handler: Optional[events.TimerHandle] = None
        self._task: Optional[tasks.Task] = None
        self._when = when

    def when(self) -> Optional[float]:
        """Return the current deadline."""
        return self._when

    def reschedule(self, when: Optional[float]) -> None:
        """Reschedule the timeout."""
        if self._state is not _State.ENTERED:
            if self._state is _State.CREATED:
                raise RuntimeError("Timeout has not been entered")
            raise RuntimeError(
                f"Cannot change state of {self._state.value} Timeout",
            )

        self._when = when

        if self._timeout_handler is not None:
            self._timeout_handler.cancel()

        if when is None:
            self._timeout_handler = None
        else:
            loop = events.get_running_loop()
            if when <= loop.time():
                self._timeout_handler = loop.call_soon(self._on_timeout)
            else:
                self._timeout_handler = loop.call_at(when, self._on_timeout)

    def expired(self) -> bool:
        """Is timeout expired during execution?"""
        return self._state in (_State.EXPIRING, _State.EXPIRED)

    def __repr__(self) -> str:
        info = ['']
        if self._state is _State.ENTERED:
            when = round(self._when, 3) if self._when is not None else None
            info.append(f"when={when}")
        info_str = ' '.join(info)
        return f"<Timeout [{self._state.value}]{info_str}>"

    async def __aenter__(self) -> "Timeout":
        if self._state is not _State.CREATED:
            raise RuntimeError("Timeout has already been entered")
        task = tasks.current_task()
        if task is None:
            raise RuntimeError("Timeout should be used inside a task")
        self._state = _State.ENTERED
        self._task = task
        self._cancelling = self._task.cancelling()
        self.reschedule(self._when)
        return self

    async def __aexit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc_val: Optional[BaseException],
        exc_tb: Optional[TracebackType],
    ) -> Optional[bool]:
        assert self._state in (_State.ENTERED, _State.EXPIRING)

        if self._timeout_handler is not None:
            self._timeout_handler.cancel()
            self._timeout_handler = None

        if self._state is _State.EXPIRING:
            self._state = _State.EXPIRED

            if self._task.uncancel() <= self._cancelling and exc_type is not None:
                # Since there are no new cancel requests, we're
                # handling this.
                if issubclass(exc_type, exceptions.CancelledError):
                    raise TimeoutError from exc_val
                elif exc_val is not None:
                    self._insert_timeout_error(exc_val)
                    if isinstance(exc_val, ExceptionGroup):
                        for exc in exc_val.exceptions:
                            self._insert_timeout_error(exc)
        elif self._state is _State.ENTERED:
            self._state = _State.EXITED

        return None

    def _on_timeout(self) -> None:
        assert self._state is _State.ENTERED
        self._task.cancel()
        self._state = _State.EXPIRING
        # drop the reference early
        self._timeout_handler = None

    @staticmethod
    def _insert_timeout_error(exc_val: BaseException) -> None:
        while exc_val.__context__ is not None:
            if isinstance(exc_val.__context__, exceptions.CancelledError):
                te = TimeoutError()
                te.__context__ = te.__cause__ = exc_val.__context__
                exc_val.__context__ = te
                break
            exc_val = exc_val.__context__


def timeout(delay: Optional[float]) -> Timeout:
    """Timeout async context manager.

    Useful in cases when you want to apply timeout logic around block
    of code or in cases when asyncio.wait_for is not suitable. For example:

    >>> async with asyncio.timeout(10):  # 10 seconds timeout
    ...     await long_running_task()


    delay - value in seconds or None to disable timeout logic

    long_running_task() is interrupted by raising asyncio.CancelledError,
    the top-most affected timeout() context manager converts CancelledError
    into TimeoutError.
    """
    loop = events.get_running_loop()
    return Timeout(loop.time() + delay if delay is not None else None)


def timeout_at(when: Optional[float]) -> Timeout:
    """Schedule the timeout at absolute time.

    Like timeout() but argument gives absolute time in the same clock system
    as loop.time().

    Please note: it is not POSIX time but a time with
    undefined starting base, e.g. the time of the system power on.

    >>> async with asyncio.timeout_at(loop.time() + 10):
    ...     await long_running_task()


    when - a deadline when timeout occurs or None to disable timeout logic

    long_running_task() is interrupted by raising asyncio.CancelledError,
    the top-most affected timeout() context manager converts CancelledError
    into TimeoutError.
    """
    return Timeout(when)
337
Utils/PythonNew32/Lib/asyncio/transports.py
Normal file
@@ -0,0 +1,337 @@
"""Abstract Transport class."""

__all__ = (
    'BaseTransport', 'ReadTransport', 'WriteTransport',
    'Transport', 'DatagramTransport', 'SubprocessTransport',
)


class BaseTransport:
    """Base class for transports."""

    __slots__ = ('_extra',)

    def __init__(self, extra=None):
        if extra is None:
            extra = {}
        self._extra = extra

    def get_extra_info(self, name, default=None):
        """Get optional transport information."""
        return self._extra.get(name, default)

    def is_closing(self):
        """Return True if the transport is closing or closed."""
        raise NotImplementedError

    def close(self):
        """Close the transport.

        Buffered data will be flushed asynchronously.  No more data
        will be received.  After all buffered data is flushed, the
        protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        raise NotImplementedError

    def set_protocol(self, protocol):
        """Set a new protocol."""
        raise NotImplementedError

    def get_protocol(self):
        """Return the current protocol."""
        raise NotImplementedError


class ReadTransport(BaseTransport):
    """Interface for read-only transports."""

    __slots__ = ()

    def is_reading(self):
        """Return True if the transport is receiving."""
        raise NotImplementedError

    def pause_reading(self):
        """Pause the receiving end.

        No data will be passed to the protocol's data_received()
        method until resume_reading() is called.
        """
        raise NotImplementedError

    def resume_reading(self):
        """Resume the receiving end.

        Data received will once again be passed to the protocol's
        data_received() method.
        """
        raise NotImplementedError


class WriteTransport(BaseTransport):
    """Interface for write-only transports."""

    __slots__ = ()

    def set_write_buffer_limits(self, high=None, low=None):
        """Set the high- and low-water limits for write flow control.

        These two values control when to call the protocol's
        pause_writing() and resume_writing() methods.  If specified,
        the low-water limit must be less than or equal to the
        high-water limit.  Neither value can be negative.

        The defaults are implementation-specific.  If only the
        high-water limit is given, the low-water limit defaults to an
        implementation-specific value less than or equal to the
        high-water limit.  Setting high to zero forces low to zero as
        well, and causes pause_writing() to be called whenever the
        buffer becomes non-empty.  Setting low to zero causes
        resume_writing() to be called only once the buffer is empty.
        Use of zero for either limit is generally sub-optimal as it
        reduces opportunities for doing I/O and computation
        concurrently.
        """
        raise NotImplementedError

    def get_write_buffer_size(self):
        """Return the current size of the write buffer."""
        raise NotImplementedError

    def get_write_buffer_limits(self):
        """Get the high and low watermarks for write flow control.
        Return a tuple (low, high) where low and high are
        positive number of bytes."""
        raise NotImplementedError

    def write(self, data):
        """Write some data bytes to the transport.

        This does not block; it buffers the data and arranges for it
        to be sent out asynchronously.
        """
        raise NotImplementedError

    def writelines(self, list_of_data):
        """Write a list (or any iterable) of data bytes to the transport.

        The default implementation concatenates the arguments and
        calls write() on the result.
        """
        data = b''.join(list_of_data)
        self.write(data)

    def write_eof(self):
        """Close the write end after flushing buffered data.

        (This is like typing ^D into a UNIX program reading from stdin.)

        Data may still be received.
        """
        raise NotImplementedError

    def can_write_eof(self):
        """Return True if this transport supports write_eof(), False if not."""
        raise NotImplementedError

    def abort(self):
        """Close the transport immediately.

        Buffered data will be lost.  No more data will be received.
        The protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        raise NotImplementedError


class Transport(ReadTransport, WriteTransport):
    """Interface representing a bidirectional transport.

    There may be several implementations, but typically, the user does
    not implement new transports; rather, the platform provides some
    useful transports that are implemented using the platform's best
    practices.

    The user never instantiates a transport directly; they call a
    utility function, passing it a protocol factory and other
    information necessary to create the transport and protocol.  (E.g.
    EventLoop.create_connection() or EventLoop.create_server().)

    The utility function will asynchronously create a transport and a
    protocol and hook them up by calling the protocol's
    connection_made() method, passing it the transport.

    The implementation here raises NotImplemented for every method
    except writelines(), which calls write() in a loop.
    """

    __slots__ = ()


class DatagramTransport(BaseTransport):
    """Interface for datagram (UDP) transports."""

    __slots__ = ()

    def sendto(self, data, addr=None):
        """Send data to the transport.

        This does not block; it buffers the data and arranges for it
        to be sent out asynchronously.
        addr is target socket address.
        If addr is None use target address pointed on transport creation.
        If data is an empty bytes object a zero-length datagram will be
        sent.
        """
        raise NotImplementedError

    def abort(self):
        """Close the transport immediately.

        Buffered data will be lost.  No more data will be received.
        The protocol's connection_lost() method will (eventually) be
        called with None as its argument.
        """
        raise NotImplementedError


class SubprocessTransport(BaseTransport):

    __slots__ = ()

    def get_pid(self):
        """Get subprocess id."""
        raise NotImplementedError

    def get_returncode(self):
        """Get subprocess returncode.

        See also
        http://docs.python.org/3/library/subprocess#subprocess.Popen.returncode
        """
        raise NotImplementedError

    def get_pipe_transport(self, fd):
        """Get transport for pipe with number fd."""
        raise NotImplementedError

    def send_signal(self, signal):
        """Send signal to subprocess.

        See also:
        docs.python.org/3/library/subprocess#subprocess.Popen.send_signal
        """
        raise NotImplementedError

    def terminate(self):
        """Stop the subprocess.

        Alias for close() method.

        On Posix OSs the method sends SIGTERM to the subprocess.
        On Windows the Win32 API function TerminateProcess()
        is called to stop the subprocess.

        See also:
        http://docs.python.org/3/library/subprocess#subprocess.Popen.terminate
        """
        raise NotImplementedError

    def kill(self):
        """Kill the subprocess.

        On Posix OSs the function sends SIGKILL to the subprocess.
        On Windows kill() is an alias for terminate().

        See also:
        http://docs.python.org/3/library/subprocess#subprocess.Popen.kill
        """
        raise NotImplementedError


class _FlowControlMixin(Transport):
    """All the logic for (write) flow control in a mix-in base class.

    The subclass must implement get_write_buffer_size().  It must call
    _maybe_pause_protocol() whenever the write buffer size increases,
    and _maybe_resume_protocol() whenever it decreases.  It may also
    override set_write_buffer_limits() (e.g. to specify different
    defaults).

    The subclass constructor must call super().__init__(extra).  This
    will call set_write_buffer_limits().

    The user may call set_write_buffer_limits() and
    get_write_buffer_size(), and their protocol's pause_writing() and
    resume_writing() may be called.
    """

    __slots__ = ('_loop', '_protocol_paused', '_high_water', '_low_water')

    def __init__(self, extra=None, loop=None):
        super().__init__(extra)
        assert loop is not None
        self._loop = loop
        self._protocol_paused = False
        self._set_write_buffer_limits()

    def _maybe_pause_protocol(self):
        size = self.get_write_buffer_size()
        if size <= self._high_water:
            return
        if not self._protocol_paused:
            self._protocol_paused = True
            try:
                self._protocol.pause_writing()
            except (SystemExit, KeyboardInterrupt):
                raise
            except BaseException as exc:
                self._loop.call_exception_handler({
                    'message': 'protocol.pause_writing() failed',
                    'exception': exc,
                    'transport': self,
                    'protocol': self._protocol,
                })

    def _maybe_resume_protocol(self):
        if (self._protocol_paused and
                self.get_write_buffer_size() <= self._low_water):
            self._protocol_paused = False
            try:
                self._protocol.resume_writing()
            except (SystemExit, KeyboardInterrupt):
                raise
            except BaseException as exc:
                self._loop.call_exception_handler({
                    'message': 'protocol.resume_writing() failed',
                    'exception': exc,
                    'transport': self,
                    'protocol': self._protocol,
                })

    def get_write_buffer_limits(self):
        return (self._low_water, self._high_water)

    def _set_write_buffer_limits(self, high=None, low=None):
        if high is None:
            if low is None:
                high = 64 * 1024
            else:
                high = 4 * low
        if low is None:
            low = high // 4

        if not high >= low >= 0:
            raise ValueError(
                f'high ({high!r}) must be >= low ({low!r}) must be >= 0')

        self._high_water = high
        self._low_water = low

    def set_write_buffer_limits(self, high=None, low=None):
        self._set_write_buffer_limits(high=high, low=low)
        self._maybe_pause_protocol()

    def get_write_buffer_size(self):
        raise NotImplementedError
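The default watermark arithmetic in `_set_write_buffer_limits()` can be illustrated standalone. `water_limits` below is a hypothetical helper that mirrors that logic for demonstration; it is not part of asyncio:

```python
def water_limits(high=None, low=None):
    # Mirrors _FlowControlMixin._set_write_buffer_limits():
    # high defaults to 64 KiB (or 4*low if only low is given),
    # and low defaults to a quarter of high.
    if high is None:
        high = 64 * 1024 if low is None else 4 * low
    if low is None:
        low = high // 4
    if not high >= low >= 0:
        raise ValueError(
            f'high ({high!r}) must be >= low ({low!r}) must be >= 0')
    return low, high

print(water_limits())          # → (16384, 65536)
print(water_limits(low=1024))  # → (1024, 4096)
```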
98
Utils/PythonNew32/Lib/asyncio/trsock.py
Normal file
@@ -0,0 +1,98 @@
import socket


class TransportSocket:

    """A socket-like wrapper for exposing real transport sockets.

    These objects can be safely returned by APIs like
    `transport.get_extra_info('socket')`.  All potentially disruptive
    operations (like "socket.close()") are banned.
    """

    __slots__ = ('_sock',)

    def __init__(self, sock: socket.socket):
        self._sock = sock

    @property
    def family(self):
        return self._sock.family

    @property
    def type(self):
        return self._sock.type

    @property
    def proto(self):
        return self._sock.proto

    def __repr__(self):
        s = (
            f"<asyncio.TransportSocket fd={self.fileno()}, "
            f"family={self.family!s}, type={self.type!s}, "
            f"proto={self.proto}"
        )

        if self.fileno() != -1:
            try:
                laddr = self.getsockname()
                if laddr:
                    s = f"{s}, laddr={laddr}"
            except socket.error:
                pass
            try:
                raddr = self.getpeername()
                if raddr:
                    s = f"{s}, raddr={raddr}"
            except socket.error:
                pass

        return f"{s}>"

    def __getstate__(self):
        raise TypeError("Cannot serialize asyncio.TransportSocket object")

    def fileno(self):
        return self._sock.fileno()

    def dup(self):
        return self._sock.dup()

    def get_inheritable(self):
        return self._sock.get_inheritable()

    def shutdown(self, how):
        # asyncio doesn't currently provide a high-level transport API
        # to shutdown the connection.
        self._sock.shutdown(how)

    def getsockopt(self, *args, **kwargs):
        return self._sock.getsockopt(*args, **kwargs)

    def setsockopt(self, *args, **kwargs):
        self._sock.setsockopt(*args, **kwargs)

    def getpeername(self):
        return self._sock.getpeername()

    def getsockname(self):
        return self._sock.getsockname()

    def getsockbyname(self):
        return self._sock.getsockbyname()

    def settimeout(self, value):
        if value == 0:
            return
        raise ValueError(
            'settimeout(): only 0 timeout is allowed on transport sockets')

    def gettimeout(self):
        return 0

    def setblocking(self, flag):
        if not flag:
            return
        raise ValueError(
            'setblocking(): transport sockets cannot be blocking')
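The wrapper's read-mostly behaviour can be observed directly. A small sketch (it imports from the private `asyncio.trsock` module, so details may vary between Python versions):

```python
import socket
from asyncio.trsock import TransportSocket

raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wrapped = TransportSocket(raw)

# Metadata properties pass through to the underlying socket ...
assert wrapped.family == socket.AF_INET

# ... but timeout and blocking-mode changes are rejected.
print(wrapped.gettimeout())  # → 0
try:
    wrapped.settimeout(5)
except ValueError as e:
    print("rejected:", e)

raw.close()
```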
1536
Utils/PythonNew32/Lib/asyncio/unix_events.py
Normal file
File diff suppressed because it is too large
903
Utils/PythonNew32/Lib/asyncio/windows_events.py
Normal file
@@ -0,0 +1,903 @@
|
||||
"""Selector and proactor event loops for Windows."""
|
||||
|
||||
import sys
|
||||
|
||||
if sys.platform != 'win32': # pragma: no cover
|
||||
raise ImportError('win32 only')
|
||||
|
||||
import _overlapped
|
||||
import _winapi
|
||||
import errno
|
||||
from functools import partial
|
||||
import math
|
||||
import msvcrt
|
||||
import socket
|
||||
import struct
|
||||
import time
|
||||
import weakref
|
||||
|
||||
from . import events
|
||||
from . import base_subprocess
|
||||
from . import futures
|
||||
from . import exceptions
|
||||
from . import proactor_events
|
||||
from . import selector_events
|
||||
from . import tasks
|
||||
from . import windows_utils
|
||||
from .log import logger
|
||||
|
||||
|
||||
__all__ = (
|
||||
'SelectorEventLoop', 'ProactorEventLoop', 'IocpProactor',
|
||||
'DefaultEventLoopPolicy', 'WindowsSelectorEventLoopPolicy',
|
||||
'WindowsProactorEventLoopPolicy', 'EventLoop',
|
||||
)
|
||||
|
||||
|
||||
NULL = _winapi.NULL
|
||||
INFINITE = _winapi.INFINITE
|
||||
ERROR_CONNECTION_REFUSED = 1225
|
||||
ERROR_CONNECTION_ABORTED = 1236
|
||||
|
||||
# Initial delay in seconds for connect_pipe() before retrying to connect
|
||||
CONNECT_PIPE_INIT_DELAY = 0.001
|
||||
|
||||
# Maximum delay in seconds for connect_pipe() before retrying to connect
|
||||
CONNECT_PIPE_MAX_DELAY = 0.100
|
||||
|
||||
|
||||
class _OverlappedFuture(futures.Future):
|
||||
"""Subclass of Future which represents an overlapped operation.
|
||||
|
||||
Cancelling it will immediately cancel the overlapped operation.
|
||||
"""
|
||||
|
||||
def __init__(self, ov, *, loop=None):
|
||||
super().__init__(loop=loop)
|
||||
if self._source_traceback:
|
||||
del self._source_traceback[-1]
|
||||
self._ov = ov
|
||||
|
||||
def _repr_info(self):
|
||||
info = super()._repr_info()
|
||||
if self._ov is not None:
|
||||
state = 'pending' if self._ov.pending else 'completed'
|
||||
info.insert(1, f'overlapped=<{state}, {self._ov.address:#x}>')
|
||||
return info
|
||||
|
||||
def _cancel_overlapped(self):
|
||||
if self._ov is None:
|
||||
return
|
||||
try:
|
||||
self._ov.cancel()
|
||||
except OSError as exc:
|
||||
context = {
|
||||
'message': 'Cancelling an overlapped future failed',
|
||||
'exception': exc,
|
||||
'future': self,
|
||||
}
|
||||
if self._source_traceback:
|
||||
context['source_traceback'] = self._source_traceback
|
||||
self._loop.call_exception_handler(context)
|
||||
self._ov = None
|
||||
|
||||
def cancel(self, msg=None):
|
||||
self._cancel_overlapped()
|
||||
return super().cancel(msg=msg)
|
||||
|
||||
def set_exception(self, exception):
|
||||
super().set_exception(exception)
|
||||
self._cancel_overlapped()
|
||||
|
||||
def set_result(self, result):
|
||||
super().set_result(result)
|
||||
self._ov = None
|
||||
|
||||
|
||||
class _BaseWaitHandleFuture(futures.Future):
    """Subclass of Future which represents a wait handle."""

    def __init__(self, ov, handle, wait_handle, *, loop=None):
        super().__init__(loop=loop)
        if self._source_traceback:
            del self._source_traceback[-1]
        # Keep a reference to the Overlapped object to keep it alive until the
        # wait is unregistered
        self._ov = ov
        self._handle = handle
        self._wait_handle = wait_handle

        # Should we call UnregisterWaitEx() if the wait completes
        # or is cancelled?
        self._registered = True

    def _poll(self):
        # non-blocking wait: use a timeout of 0 millisecond
        return (_winapi.WaitForSingleObject(self._handle, 0) ==
                _winapi.WAIT_OBJECT_0)

    def _repr_info(self):
        info = super()._repr_info()
        info.append(f'handle={self._handle:#x}')
        if self._handle is not None:
            state = 'signaled' if self._poll() else 'waiting'
            info.append(state)
        if self._wait_handle is not None:
            info.append(f'wait_handle={self._wait_handle:#x}')
        return info

    def _unregister_wait_cb(self, fut):
        # The wait was unregistered: it's not safe to destroy the Overlapped
        # object
        self._ov = None

    def _unregister_wait(self):
        if not self._registered:
            return
        self._registered = False

        wait_handle = self._wait_handle
        self._wait_handle = None
        try:
            _overlapped.UnregisterWait(wait_handle)
        except OSError as exc:
            if exc.winerror != _overlapped.ERROR_IO_PENDING:
                context = {
                    'message': 'Failed to unregister the wait handle',
                    'exception': exc,
                    'future': self,
                }
                if self._source_traceback:
                    context['source_traceback'] = self._source_traceback
                self._loop.call_exception_handler(context)
                return
            # ERROR_IO_PENDING means that the unregister is pending

        self._unregister_wait_cb(None)

    def cancel(self, msg=None):
        self._unregister_wait()
        return super().cancel(msg=msg)

    def set_exception(self, exception):
        self._unregister_wait()
        super().set_exception(exception)

    def set_result(self, result):
        self._unregister_wait()
        super().set_result(result)

class _WaitCancelFuture(_BaseWaitHandleFuture):
    """Subclass of Future which represents a wait for the cancellation of a
    _WaitHandleFuture using an event.
    """

    def __init__(self, ov, event, wait_handle, *, loop=None):
        super().__init__(ov, event, wait_handle, loop=loop)

        self._done_callback = None

    def cancel(self):
        raise RuntimeError("_WaitCancelFuture must not be cancelled")

    def set_result(self, result):
        super().set_result(result)
        if self._done_callback is not None:
            self._done_callback(self)

    def set_exception(self, exception):
        super().set_exception(exception)
        if self._done_callback is not None:
            self._done_callback(self)

class _WaitHandleFuture(_BaseWaitHandleFuture):
    def __init__(self, ov, handle, wait_handle, proactor, *, loop=None):
        super().__init__(ov, handle, wait_handle, loop=loop)
        self._proactor = proactor
        self._unregister_proactor = True
        self._event = _overlapped.CreateEvent(None, True, False, None)
        self._event_fut = None

    def _unregister_wait_cb(self, fut):
        if self._event is not None:
            _winapi.CloseHandle(self._event)
            self._event = None
            self._event_fut = None

        # If the wait was cancelled, the wait may never be signalled, so
        # it's required to unregister it. Otherwise, IocpProactor.close() will
        # wait forever for an event which will never come.
        #
        # If the IocpProactor already received the event, it's safe to call
        # _unregister() because we kept a reference to the Overlapped object
        # which is used as a unique key.
        self._proactor._unregister(self._ov)
        self._proactor = None

        super()._unregister_wait_cb(fut)

    def _unregister_wait(self):
        if not self._registered:
            return
        self._registered = False

        wait_handle = self._wait_handle
        self._wait_handle = None
        try:
            _overlapped.UnregisterWaitEx(wait_handle, self._event)
        except OSError as exc:
            if exc.winerror != _overlapped.ERROR_IO_PENDING:
                context = {
                    'message': 'Failed to unregister the wait handle',
                    'exception': exc,
                    'future': self,
                }
                if self._source_traceback:
                    context['source_traceback'] = self._source_traceback
                self._loop.call_exception_handler(context)
                return
            # ERROR_IO_PENDING is not an error, the wait was unregistered

        self._event_fut = self._proactor._wait_cancel(self._event,
                                                      self._unregister_wait_cb)

class PipeServer(object):
    """Class representing a pipe server.

    This is much like a bound, listening socket.
    """
    def __init__(self, address):
        self._address = address
        self._free_instances = weakref.WeakSet()
        # initialize the pipe attribute before calling _server_pipe_handle()
        # because this function can raise an exception and the destructor calls
        # the close() method
        self._pipe = None
        self._accept_pipe_future = None
        self._pipe = self._server_pipe_handle(True)

    def _get_unconnected_pipe(self):
        # Create new instance and return previous one.  This ensures
        # that (until the server is closed) there is always at least
        # one pipe handle for address.  Therefore if a client attempts
        # to connect it will not fail with FileNotFoundError.
        tmp, self._pipe = self._pipe, self._server_pipe_handle(False)
        return tmp

    def _server_pipe_handle(self, first):
        # Return a wrapper for a new pipe handle.
        if self.closed():
            return None
        flags = _winapi.PIPE_ACCESS_DUPLEX | _winapi.FILE_FLAG_OVERLAPPED
        if first:
            flags |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE
        h = _winapi.CreateNamedPipe(
            self._address, flags,
            _winapi.PIPE_TYPE_MESSAGE | _winapi.PIPE_READMODE_MESSAGE |
            _winapi.PIPE_WAIT,
            _winapi.PIPE_UNLIMITED_INSTANCES,
            windows_utils.BUFSIZE, windows_utils.BUFSIZE,
            _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL)
        pipe = windows_utils.PipeHandle(h)
        self._free_instances.add(pipe)
        return pipe

    def closed(self):
        return (self._address is None)

    def close(self):
        if self._accept_pipe_future is not None:
            self._accept_pipe_future.cancel()
            self._accept_pipe_future = None
        # Close all instances which have not been connected to by a client.
        if self._address is not None:
            for pipe in self._free_instances:
                pipe.close()
            self._pipe = None
            self._address = None
            self._free_instances.clear()

    __del__ = close

class _WindowsSelectorEventLoop(selector_events.BaseSelectorEventLoop):
    """Windows version of selector event loop."""


class ProactorEventLoop(proactor_events.BaseProactorEventLoop):
    """Windows version of proactor event loop using IOCP."""

    def __init__(self, proactor=None):
        if proactor is None:
            proactor = IocpProactor()
        super().__init__(proactor)

    def _run_forever_setup(self):
        assert self._self_reading_future is None
        self.call_soon(self._loop_self_reading)
        super()._run_forever_setup()

    def _run_forever_cleanup(self):
        super()._run_forever_cleanup()
        if self._self_reading_future is not None:
            ov = self._self_reading_future._ov
            self._self_reading_future.cancel()
            # self_reading_future always uses IOCP, so even though it's
            # been cancelled, we need to make sure that the IOCP message
            # is received so that the kernel is not holding on to the
            # memory, possibly causing memory corruption later. Only
            # unregister it if IO is complete in all respects. Otherwise
            # we need another _poll() later to complete the IO.
            if ov is not None and not ov.pending:
                self._proactor._unregister(ov)
            self._self_reading_future = None

    async def create_pipe_connection(self, protocol_factory, address):
        f = self._proactor.connect_pipe(address)
        pipe = await f
        protocol = protocol_factory()
        trans = self._make_duplex_pipe_transport(pipe, protocol,
                                                 extra={'addr': address})
        return trans, protocol

    async def start_serving_pipe(self, protocol_factory, address):
        server = PipeServer(address)

        def loop_accept_pipe(f=None):
            pipe = None
            try:
                if f:
                    pipe = f.result()
                    server._free_instances.discard(pipe)

                    if server.closed():
                        # A client connected before the server was closed:
                        # drop the client (close the pipe) and exit
                        pipe.close()
                        return

                    protocol = protocol_factory()
                    self._make_duplex_pipe_transport(
                        pipe, protocol, extra={'addr': address})

                pipe = server._get_unconnected_pipe()
                if pipe is None:
                    return

                f = self._proactor.accept_pipe(pipe)
            except BrokenPipeError:
                if pipe and pipe.fileno() != -1:
                    pipe.close()
                self.call_soon(loop_accept_pipe)
            except OSError as exc:
                if pipe and pipe.fileno() != -1:
                    self.call_exception_handler({
                        'message': 'Pipe accept failed',
                        'exception': exc,
                        'pipe': pipe,
                    })
                    pipe.close()
                elif self._debug:
                    logger.warning("Accept pipe failed on pipe %r",
                                   pipe, exc_info=True)
                self.call_soon(loop_accept_pipe)
            except exceptions.CancelledError:
                if pipe:
                    pipe.close()
            else:
                server._accept_pipe_future = f
                f.add_done_callback(loop_accept_pipe)

        self.call_soon(loop_accept_pipe)
        return [server]

    async def _make_subprocess_transport(self, protocol, args, shell,
                                         stdin, stdout, stderr, bufsize,
                                         extra=None, **kwargs):
        waiter = self.create_future()
        transp = _WindowsSubprocessTransport(self, protocol, args, shell,
                                             stdin, stdout, stderr, bufsize,
                                             waiter=waiter, extra=extra,
                                             **kwargs)
        try:
            await waiter
        except (SystemExit, KeyboardInterrupt):
            raise
        except BaseException:
            transp.close()
            await transp._wait()
            raise

        return transp

class IocpProactor:
    """Proactor implementation using IOCP."""

    def __init__(self, concurrency=INFINITE):
        self._loop = None
        self._results = []
        self._iocp = _overlapped.CreateIoCompletionPort(
            _overlapped.INVALID_HANDLE_VALUE, NULL, 0, concurrency)
        self._cache = {}
        self._registered = weakref.WeakSet()
        self._unregistered = []
        self._stopped_serving = weakref.WeakSet()

    def _check_closed(self):
        if self._iocp is None:
            raise RuntimeError('IocpProactor is closed')

    def __repr__(self):
        info = ['overlapped#=%s' % len(self._cache),
                'result#=%s' % len(self._results)]
        if self._iocp is None:
            info.append('closed')
        return '<%s %s>' % (self.__class__.__name__, " ".join(info))

    def set_loop(self, loop):
        self._loop = loop

    def select(self, timeout=None):
        if not self._results:
            self._poll(timeout)
        tmp = self._results
        self._results = []
        try:
            return tmp
        finally:
            # Needed to break cycles when an exception occurs.
            tmp = None

    def _result(self, value):
        fut = self._loop.create_future()
        fut.set_result(value)
        return fut

    @staticmethod
    def finish_socket_func(trans, key, ov):
        try:
            return ov.getresult()
        except OSError as exc:
            if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
                                _overlapped.ERROR_OPERATION_ABORTED):
                raise ConnectionResetError(*exc.args)
            else:
                raise

    @classmethod
    def _finish_recvfrom(cls, trans, key, ov, *, empty_result):
        try:
            return cls.finish_socket_func(trans, key, ov)
        except OSError as exc:
            # WSARecvFrom will report ERROR_PORT_UNREACHABLE when the same
            # socket is used to send to an address that is not listening.
            if exc.winerror == _overlapped.ERROR_PORT_UNREACHABLE:
                return empty_result, None
            else:
                raise

    def recv(self, conn, nbytes, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        try:
            if isinstance(conn, socket.socket):
                ov.WSARecv(conn.fileno(), nbytes, flags)
            else:
                ov.ReadFile(conn.fileno(), nbytes)
        except BrokenPipeError:
            return self._result(b'')

        return self._register(ov, conn, self.finish_socket_func)

    def recv_into(self, conn, buf, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        try:
            if isinstance(conn, socket.socket):
                ov.WSARecvInto(conn.fileno(), buf, flags)
            else:
                ov.ReadFileInto(conn.fileno(), buf)
        except BrokenPipeError:
            return self._result(0)

        return self._register(ov, conn, self.finish_socket_func)

    def recvfrom(self, conn, nbytes, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        try:
            ov.WSARecvFrom(conn.fileno(), nbytes, flags)
        except BrokenPipeError:
            return self._result((b'', None))

        return self._register(ov, conn, partial(self._finish_recvfrom,
                                                empty_result=b''))

    def recvfrom_into(self, conn, buf, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        try:
            ov.WSARecvFromInto(conn.fileno(), buf, flags)
        except BrokenPipeError:
            return self._result((0, None))

        return self._register(ov, conn, partial(self._finish_recvfrom,
                                                empty_result=0))

    def sendto(self, conn, buf, flags=0, addr=None):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)

        ov.WSASendTo(conn.fileno(), buf, flags, addr)

        return self._register(ov, conn, self.finish_socket_func)

    def send(self, conn, buf, flags=0):
        self._register_with_iocp(conn)
        ov = _overlapped.Overlapped(NULL)
        if isinstance(conn, socket.socket):
            ov.WSASend(conn.fileno(), buf, flags)
        else:
            ov.WriteFile(conn.fileno(), buf)

        return self._register(ov, conn, self.finish_socket_func)

    def accept(self, listener):
        self._register_with_iocp(listener)
        conn = self._get_accept_socket(listener.family)
        ov = _overlapped.Overlapped(NULL)
        ov.AcceptEx(listener.fileno(), conn.fileno())

        def finish_accept(trans, key, ov):
            ov.getresult()
            # Use SO_UPDATE_ACCEPT_CONTEXT so getsockname() etc work.
            buf = struct.pack('@P', listener.fileno())
            conn.setsockopt(socket.SOL_SOCKET,
                            _overlapped.SO_UPDATE_ACCEPT_CONTEXT, buf)
            conn.settimeout(listener.gettimeout())
            return conn, conn.getpeername()

        async def accept_coro(future, conn):
            # Coroutine closing the accept socket if the future is cancelled
            try:
                await future
            except exceptions.CancelledError:
                conn.close()
                raise

        future = self._register(ov, listener, finish_accept)
        coro = accept_coro(future, conn)
        tasks.ensure_future(coro, loop=self._loop)
        return future

    def connect(self, conn, address):
        if conn.type == socket.SOCK_DGRAM:
            # WSAConnect will complete immediately for UDP sockets so we don't
            # need to register any IOCP operation
            _overlapped.WSAConnect(conn.fileno(), address)
            fut = self._loop.create_future()
            fut.set_result(None)
            return fut

        self._register_with_iocp(conn)
        # The socket needs to be locally bound before we call ConnectEx().
        try:
            _overlapped.BindLocal(conn.fileno(), conn.family)
        except OSError as e:
            if e.winerror != errno.WSAEINVAL:
                raise
            # Probably already locally bound; check using getsockname().
            if conn.getsockname()[1] == 0:
                raise
        ov = _overlapped.Overlapped(NULL)
        ov.ConnectEx(conn.fileno(), address)

        def finish_connect(trans, key, ov):
            ov.getresult()
            # Use SO_UPDATE_CONNECT_CONTEXT so getsockname() etc work.
            conn.setsockopt(socket.SOL_SOCKET,
                            _overlapped.SO_UPDATE_CONNECT_CONTEXT, 0)
            return conn

        return self._register(ov, conn, finish_connect)

    def sendfile(self, sock, file, offset, count):
        self._register_with_iocp(sock)
        ov = _overlapped.Overlapped(NULL)
        offset_low = offset & 0xffff_ffff
        offset_high = (offset >> 32) & 0xffff_ffff
        ov.TransmitFile(sock.fileno(),
                        msvcrt.get_osfhandle(file.fileno()),
                        offset_low, offset_high,
                        count, 0, 0)

        return self._register(ov, sock, self.finish_socket_func)

    def accept_pipe(self, pipe):
        self._register_with_iocp(pipe)
        ov = _overlapped.Overlapped(NULL)
        connected = ov.ConnectNamedPipe(pipe.fileno())

        if connected:
            # ConnectNamedPipe() failed with ERROR_PIPE_CONNECTED which means
            # that the pipe is connected. There is no need to wait for the
            # completion of the connection.
            return self._result(pipe)

        def finish_accept_pipe(trans, key, ov):
            ov.getresult()
            return pipe

        return self._register(ov, pipe, finish_accept_pipe)

    async def connect_pipe(self, address):
        delay = CONNECT_PIPE_INIT_DELAY
        while True:
            # Unfortunately there is no way to do an overlapped connect to
            # a pipe.  Call CreateFile() in a loop until it doesn't fail with
            # ERROR_PIPE_BUSY.
            try:
                handle = _overlapped.ConnectPipe(address)
                break
            except OSError as exc:
                if exc.winerror != _overlapped.ERROR_PIPE_BUSY:
                    raise

            # ConnectPipe() failed with ERROR_PIPE_BUSY: retry later
            delay = min(delay * 2, CONNECT_PIPE_MAX_DELAY)
            await tasks.sleep(delay)

        return windows_utils.PipeHandle(handle)

    def wait_for_handle(self, handle, timeout=None):
        """Wait for a handle.

        Return a Future object. The result of the future is True if the wait
        completed, or False if the wait did not complete (on timeout).
        """
        return self._wait_for_handle(handle, timeout, False)

    def _wait_cancel(self, event, done_callback):
        fut = self._wait_for_handle(event, None, True)
        # add_done_callback() cannot be used because the wait may only complete
        # in IocpProactor.close(), while the event loop is not running.
        fut._done_callback = done_callback
        return fut

    def _wait_for_handle(self, handle, timeout, _is_cancel):
        self._check_closed()

        if timeout is None:
            ms = _winapi.INFINITE
        else:
            # RegisterWaitForSingleObject() has a resolution of 1 millisecond,
            # round away from zero to wait *at least* timeout seconds.
            ms = math.ceil(timeout * 1e3)

        # We only create ov so we can use ov.address as a key for the cache.
        ov = _overlapped.Overlapped(NULL)
        wait_handle = _overlapped.RegisterWaitWithQueue(
            handle, self._iocp, ov.address, ms)
        if _is_cancel:
            f = _WaitCancelFuture(ov, handle, wait_handle, loop=self._loop)
        else:
            f = _WaitHandleFuture(ov, handle, wait_handle, self,
                                  loop=self._loop)
        if f._source_traceback:
            del f._source_traceback[-1]

        def finish_wait_for_handle(trans, key, ov):
            # Note that this second wait means that we should only use
            # this with handle types where a successful wait has no
            # effect.  So events or processes are all right, but locks
            # or semaphores are not.  Also note if the handle is
            # signalled and then quickly reset, then we may return
            # False even though we have not timed out.
            return f._poll()

        self._cache[ov.address] = (f, ov, 0, finish_wait_for_handle)
        return f

    def _register_with_iocp(self, obj):
        # To get notifications of finished ops on this object sent to the
        # completion port, we must register the handle.
        if obj not in self._registered:
            self._registered.add(obj)
            _overlapped.CreateIoCompletionPort(obj.fileno(), self._iocp, 0, 0)
            # XXX We could also use SetFileCompletionNotificationModes()
            # to avoid sending notifications to completion port of ops
            # that succeed immediately.

    def _register(self, ov, obj, callback):
        self._check_closed()

        # Return a future which will be set with the result of the
        # operation when it completes.  The future's value is actually
        # the value returned by callback().
        f = _OverlappedFuture(ov, loop=self._loop)
        if f._source_traceback:
            del f._source_traceback[-1]
        if not ov.pending:
            # The operation has completed, so no need to postpone the
            # work.  We cannot take this short cut if we need the
            # NumberOfBytes, CompletionKey values returned by
            # PostQueuedCompletionStatus().
            try:
                value = callback(None, None, ov)
            except OSError as e:
                f.set_exception(e)
            else:
                f.set_result(value)
            # Even if GetOverlappedResult() was called, we have to wait for the
            # notification of the completion in GetQueuedCompletionStatus().
            # Register the overlapped operation to keep a reference to the
            # OVERLAPPED object, otherwise the memory is freed and Windows may
            # read uninitialized memory.

        # Register the overlapped operation for later.  Note that
        # we only store obj to prevent it from being garbage
        # collected too early.
        self._cache[ov.address] = (f, ov, obj, callback)
        return f

    def _unregister(self, ov):
        """Unregister an overlapped object.

        Call this method when its future has been cancelled. The event can
        already be signalled (pending in the proactor event queue). It is also
        safe if the event is never signalled (because it was cancelled).
        """
        self._check_closed()
        self._unregistered.append(ov)

    def _get_accept_socket(self, family):
        s = socket.socket(family)
        s.settimeout(0)
        return s

    def _poll(self, timeout=None):
        if timeout is None:
            ms = INFINITE
        elif timeout < 0:
            raise ValueError("negative timeout")
        else:
            # GetQueuedCompletionStatus() has a resolution of 1 millisecond,
            # round away from zero to wait *at least* timeout seconds.
            ms = math.ceil(timeout * 1e3)
            if ms >= INFINITE:
                raise ValueError("timeout too big")

        while True:
            status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
            if status is None:
                break
            ms = 0

            err, transferred, key, address = status
            try:
                f, ov, obj, callback = self._cache.pop(address)
            except KeyError:
                if self._loop.get_debug():
                    self._loop.call_exception_handler({
                        'message': ('GetQueuedCompletionStatus() returned an '
                                    'unexpected event'),
                        'status': ('err=%s transferred=%s key=%#x address=%#x'
                                   % (err, transferred, key, address)),
                    })

                # key is either zero, or it is used to return a pipe
                # handle which should be closed to avoid a leak.
                if key not in (0, _overlapped.INVALID_HANDLE_VALUE):
                    _winapi.CloseHandle(key)
                continue

            if obj in self._stopped_serving:
                f.cancel()
            # Don't call the callback if _register() already read the result or
            # if the overlapped has been cancelled
            elif not f.done():
                try:
                    value = callback(transferred, key, ov)
                except OSError as e:
                    f.set_exception(e)
                    self._results.append(f)
                else:
                    f.set_result(value)
                    self._results.append(f)
                finally:
                    f = None

        # Remove unregistered futures
        for ov in self._unregistered:
            self._cache.pop(ov.address, None)
        self._unregistered.clear()

    def _stop_serving(self, obj):
        # obj is a socket or pipe handle.  It will be closed in
        # BaseProactorEventLoop._stop_serving() which will make any
        # pending operations fail quickly.
        self._stopped_serving.add(obj)

    def close(self):
        if self._iocp is None:
            # already closed
            return

        # Cancel remaining registered operations.
        for fut, ov, obj, callback in list(self._cache.values()):
            if fut.cancelled():
                # Nothing to do with cancelled futures
                pass
            elif isinstance(fut, _WaitCancelFuture):
                # _WaitCancelFuture must not be cancelled
                pass
            else:
                try:
                    fut.cancel()
                except OSError as exc:
                    if self._loop is not None:
                        context = {
                            'message': 'Cancelling a future failed',
                            'exception': exc,
                            'future': fut,
                        }
                        if fut._source_traceback:
                            context['source_traceback'] = fut._source_traceback
                        self._loop.call_exception_handler(context)

        # Wait until all cancelled overlapped complete: don't exit with running
        # overlapped to prevent a crash. Display progress every second if the
        # loop is still running.
        msg_update = 1.0
        start_time = time.monotonic()
        next_msg = start_time + msg_update
        while self._cache:
            if next_msg <= time.monotonic():
                logger.debug('%r is running after closing for %.1f seconds',
                             self, time.monotonic() - start_time)
                next_msg = time.monotonic() + msg_update

            # handle a few events, or timeout
            self._poll(msg_update)

        self._results = []

        _winapi.CloseHandle(self._iocp)
        self._iocp = None

    def __del__(self):
        self.close()

class _WindowsSubprocessTransport(base_subprocess.BaseSubprocessTransport):

    def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs):
        self._proc = windows_utils.Popen(
            args, shell=shell, stdin=stdin, stdout=stdout, stderr=stderr,
            bufsize=bufsize, **kwargs)

        def callback(f):
            returncode = self._proc.poll()
            self._process_exited(returncode)

        f = self._loop._proactor.wait_for_handle(int(self._proc._handle))
        f.add_done_callback(callback)


SelectorEventLoop = _WindowsSelectorEventLoop


class WindowsSelectorEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
    _loop_factory = SelectorEventLoop


class WindowsProactorEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
    _loop_factory = ProactorEventLoop


DefaultEventLoopPolicy = WindowsProactorEventLoopPolicy
EventLoop = ProactorEventLoop
173
Utils/PythonNew32/Lib/asyncio/windows_utils.py
Normal file
@@ -0,0 +1,173 @@
"""Various Windows specific bits and pieces."""
|
||||
|
||||
import sys
|
||||
|
||||
if sys.platform != 'win32': # pragma: no cover
|
||||
raise ImportError('win32 only')
|
||||
|
||||
import _winapi
|
||||
import itertools
|
||||
import msvcrt
|
||||
import os
|
||||
import subprocess
|
||||
import tempfile
|
||||
import warnings
|
||||
|
||||
|
||||
__all__ = 'pipe', 'Popen', 'PIPE', 'PipeHandle'
|
||||
|
||||
|
||||
# Constants/globals
|
||||
|
||||
|
||||
BUFSIZE = 8192
|
||||
PIPE = subprocess.PIPE
|
||||
STDOUT = subprocess.STDOUT
|
||||
_mmap_counter = itertools.count()
|
||||
|
||||
|
||||
# Replacement for os.pipe() using handles instead of fds
|
||||
|
||||
|
||||
def pipe(*, duplex=False, overlapped=(True, True), bufsize=BUFSIZE):
|
||||
"""Like os.pipe() but with overlapped support and using handles not fds."""
|
||||
address = tempfile.mktemp(
|
||||
prefix=r'\\.\pipe\python-pipe-{:d}-{:d}-'.format(
|
||||
os.getpid(), next(_mmap_counter)))
|
||||
|
||||
if duplex:
|
||||
openmode = _winapi.PIPE_ACCESS_DUPLEX
|
||||
access = _winapi.GENERIC_READ | _winapi.GENERIC_WRITE
|
||||
obsize, ibsize = bufsize, bufsize
|
||||
else:
|
||||
openmode = _winapi.PIPE_ACCESS_INBOUND
|
||||
access = _winapi.GENERIC_WRITE
|
||||
obsize, ibsize = 0, bufsize
|
||||
|
||||
openmode |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE
|
||||
|
||||
if overlapped[0]:
|
||||
openmode |= _winapi.FILE_FLAG_OVERLAPPED
|
||||
|
||||
if overlapped[1]:
|
||||
flags_and_attribs = _winapi.FILE_FLAG_OVERLAPPED
|
||||
else:
|
||||
flags_and_attribs = 0
|
||||
|
||||
h1 = h2 = None
|
||||
try:
|
||||
h1 = _winapi.CreateNamedPipe(
|
||||
address, openmode, _winapi.PIPE_WAIT,
|
||||
1, obsize, ibsize, _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL)
|
||||
|
||||
h2 = _winapi.CreateFile(
|
||||
address, access, 0, _winapi.NULL, _winapi.OPEN_EXISTING,
|
||||
flags_and_attribs, _winapi.NULL)
|
||||
|
||||
ov = _winapi.ConnectNamedPipe(h1, overlapped=True)
|
||||
ov.GetOverlappedResult(True)
|
||||
return h1, h2
|
||||
except:
|
||||
if h1 is not None:
|
||||
_winapi.CloseHandle(h1)
|
||||
if h2 is not None:
|
||||
_winapi.CloseHandle(h2)
|
||||
raise
|
||||
|
||||
|
||||
# Wrapper for a pipe handle


class PipeHandle:
    """Wrapper for an overlapped pipe handle which is vaguely file-object like.

    The IOCP event loop can use these instead of socket objects.
    """
    def __init__(self, handle):
        self._handle = handle

    def __repr__(self):
        if self._handle is not None:
            handle = f'handle={self._handle!r}'
        else:
            handle = 'closed'
        return f'<{self.__class__.__name__} {handle}>'

    @property
    def handle(self):
        return self._handle

    def fileno(self):
        if self._handle is None:
            raise ValueError("I/O operation on closed pipe")
        return self._handle

    def close(self, *, CloseHandle=_winapi.CloseHandle):
        if self._handle is not None:
            CloseHandle(self._handle)
            self._handle = None

    def __del__(self, _warn=warnings.warn):
        if self._handle is not None:
            _warn(f"unclosed {self!r}", ResourceWarning, source=self)
            self.close()

    def __enter__(self):
        return self

    def __exit__(self, t, v, tb):
        self.close()


# Replacement for subprocess.Popen using overlapped pipe handles

class Popen(subprocess.Popen):
    """Replacement for subprocess.Popen using overlapped pipe handles.

    The stdin, stdout, stderr are None or instances of PipeHandle.
    """
    def __init__(self, args, stdin=None, stdout=None, stderr=None, **kwds):
        assert not kwds.get('universal_newlines')
        assert kwds.get('bufsize', 0) == 0
        stdin_rfd = stdout_wfd = stderr_wfd = None
        stdin_wh = stdout_rh = stderr_rh = None
        if stdin == PIPE:
            stdin_rh, stdin_wh = pipe(overlapped=(False, True), duplex=True)
            stdin_rfd = msvcrt.open_osfhandle(stdin_rh, os.O_RDONLY)
        else:
            stdin_rfd = stdin
        if stdout == PIPE:
            stdout_rh, stdout_wh = pipe(overlapped=(True, False))
            stdout_wfd = msvcrt.open_osfhandle(stdout_wh, 0)
        else:
            stdout_wfd = stdout
        if stderr == PIPE:
            stderr_rh, stderr_wh = pipe(overlapped=(True, False))
            stderr_wfd = msvcrt.open_osfhandle(stderr_wh, 0)
        elif stderr == STDOUT:
            stderr_wfd = stdout_wfd
        else:
            stderr_wfd = stderr
        try:
            super().__init__(args, stdin=stdin_rfd, stdout=stdout_wfd,
                             stderr=stderr_wfd, **kwds)
        except:
            for h in (stdin_wh, stdout_rh, stderr_rh):
                if h is not None:
                    _winapi.CloseHandle(h)
            raise
        else:
            if stdin_wh is not None:
                self.stdin = PipeHandle(stdin_wh)
            if stdout_rh is not None:
                self.stdout = PipeHandle(stdout_rh)
            if stderr_rh is not None:
                self.stderr = PipeHandle(stderr_rh)
        finally:
            if stdin == PIPE:
                os.close(stdin_rfd)
            if stdout == PIPE:
                os.close(stdout_wfd)
            if stderr == PIPE:
                os.close(stderr_wfd)
611
Utils/PythonNew32/Lib/base64.py
Normal file
@@ -0,0 +1,611 @@
#! /usr/bin/env python3

"""Base16, Base32, Base64 (RFC 3548), Base85 and Ascii85 data encodings"""

# Modified 04-Oct-1995 by Jack Jansen to use binascii module
# Modified 30-Dec-2003 by Barry Warsaw to add full RFC 3548 support
# Modified 22-May-2007 by Guido van Rossum to use bytes everywhere

import re
import struct
import binascii


__all__ = [
    # Legacy interface exports traditional RFC 2045 Base64 encodings
    'encode', 'decode', 'encodebytes', 'decodebytes',
    # Generalized interface for other encodings
    'b64encode', 'b64decode', 'b32encode', 'b32decode',
    'b32hexencode', 'b32hexdecode', 'b16encode', 'b16decode',
    # Base85 and Ascii85 encodings
    'b85encode', 'b85decode', 'a85encode', 'a85decode', 'z85encode', 'z85decode',
    # Standard Base64 encoding
    'standard_b64encode', 'standard_b64decode',
    # Some common Base64 alternatives.  As referenced by RFC 3458, see thread
    # starting at:
    #
    # http://zgp.org/pipermail/p2p-hackers/2001-September/000316.html
    'urlsafe_b64encode', 'urlsafe_b64decode',
    ]


bytes_types = (bytes, bytearray)  # Types acceptable as binary data

def _bytes_from_decode_data(s):
    if isinstance(s, str):
        try:
            return s.encode('ascii')
        except UnicodeEncodeError:
            raise ValueError('string argument should contain only ASCII characters')
    if isinstance(s, bytes_types):
        return s
    try:
        return memoryview(s).tobytes()
    except TypeError:
        raise TypeError("argument should be a bytes-like object or ASCII "
                        "string, not %r" % s.__class__.__name__) from None


# Base64 encoding/decoding uses binascii
def b64encode(s, altchars=None):
    """Encode the bytes-like object s using Base64 and return a bytes object.

    Optional altchars should be a byte string of length 2 which specifies an
    alternative alphabet for the '+' and '/' characters.  This allows an
    application to e.g. generate url or filesystem safe Base64 strings.
    """
    encoded = binascii.b2a_base64(s, newline=False)
    if altchars is not None:
        assert len(altchars) == 2, repr(altchars)
        return encoded.translate(bytes.maketrans(b'+/', altchars))
    return encoded


def b64decode(s, altchars=None, validate=False):
    """Decode the Base64 encoded bytes-like object or ASCII string s.

    Optional altchars must be a bytes-like object or ASCII string of length 2
    which specifies the alternative alphabet used instead of the '+' and '/'
    characters.

    The result is returned as a bytes object.  A binascii.Error is raised if
    s is incorrectly padded.

    If validate is False (the default), characters that are neither in the
    normal base-64 alphabet nor the alternative alphabet are discarded prior
    to the padding check.  If validate is True, these non-alphabet characters
    in the input result in a binascii.Error.

    For more information about the strict base64 check, see:

    https://docs.python.org/3.11/library/binascii.html#binascii.a2b_base64
    """
    s = _bytes_from_decode_data(s)
    if altchars is not None:
        altchars = _bytes_from_decode_data(altchars)
        assert len(altchars) == 2, repr(altchars)
        s = s.translate(bytes.maketrans(altchars, b'+/'))
    return binascii.a2b_base64(s, strict_mode=validate)
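A quick round-trip sketch of the two functions above; the input values are illustrative choices, not from the source:

```python
import base64

# Encode then decode through the standard alphabet.
encoded = base64.b64encode(b"data to encode")
assert base64.b64decode(encoded) == b"data to encode"

# altchars substitutes a custom pair for '+' and '/'; here we pick '-' and '_'.
# b"\xfb\xff" normally encodes to b'+/8=', so both substitutions are exercised.
assert base64.b64encode(b"\xfb\xff", altchars=b"-_") == b"-_8="
```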
def standard_b64encode(s):
    """Encode bytes-like object s using the standard Base64 alphabet.

    The result is returned as a bytes object.
    """
    return b64encode(s)

def standard_b64decode(s):
    """Decode bytes encoded with the standard Base64 alphabet.

    Argument s is a bytes-like object or ASCII string to decode.  The result
    is returned as a bytes object.  A binascii.Error is raised if the input
    is incorrectly padded.  Characters that are not in the standard alphabet
    are discarded prior to the padding check.
    """
    return b64decode(s)


_urlsafe_encode_translation = bytes.maketrans(b'+/', b'-_')
_urlsafe_decode_translation = bytes.maketrans(b'-_', b'+/')

def urlsafe_b64encode(s):
    """Encode bytes using the URL- and filesystem-safe Base64 alphabet.

    Argument s is a bytes-like object to encode.  The result is returned as a
    bytes object.  The alphabet uses '-' instead of '+' and '_' instead of
    '/'.
    """
    return b64encode(s).translate(_urlsafe_encode_translation)

def urlsafe_b64decode(s):
    """Decode bytes using the URL- and filesystem-safe Base64 alphabet.

    Argument s is a bytes-like object or ASCII string to decode.  The result
    is returned as a bytes object.  A binascii.Error is raised if the input
    is incorrectly padded.  Characters that are not in the URL-safe base-64
    alphabet, and are not a plus '+' or slash '/', are discarded prior to the
    padding check.

    The alphabet uses '-' instead of '+' and '_' instead of '/'.
    """
    s = _bytes_from_decode_data(s)
    s = s.translate(_urlsafe_decode_translation)
    return b64decode(s)
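The URL-safe variants above are just the standard encoding piped through the two translation tables; a small check with an illustrative input:

```python
import base64

# b"\xfb\xff\xfe" encodes to b'+//+' under the standard alphabet, so this
# input exercises both substitutions.
token = base64.urlsafe_b64encode(b"\xfb\xff\xfe")
assert b"+" not in token and b"/" not in token   # safe in URLs and filenames
assert base64.urlsafe_b64decode(token) == b"\xfb\xff\xfe"
```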
# Base32 encoding/decoding must be done in Python
_B32_ENCODE_DOCSTRING = '''
Encode the bytes-like objects using {encoding} and return a bytes object.
'''
_B32_DECODE_DOCSTRING = '''
Decode the {encoding} encoded bytes-like object or ASCII string s.

Optional casefold is a flag specifying whether a lowercase alphabet is
acceptable as input.  For security purposes, the default is False.
{extra_args}
The result is returned as a bytes object.  A binascii.Error is raised if
the input is incorrectly padded or if there are non-alphabet
characters present in the input.
'''
_B32_DECODE_MAP01_DOCSTRING = '''
RFC 3548 allows for optional mapping of the digit 0 (zero) to the
letter O (oh), and for optional mapping of the digit 1 (one) to
either the letter I (eye) or letter L (el).  The optional argument
map01 when not None, specifies which letter the digit 1 should be
mapped to (when map01 is not None, the digit 0 is always mapped to
the letter O).  For security purposes the default is None, so that
0 and 1 are not allowed in the input.
'''
_b32alphabet = b'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
_b32hexalphabet = b'0123456789ABCDEFGHIJKLMNOPQRSTUV'
_b32tab2 = {}
_b32rev = {}

def _b32encode(alphabet, s):
    # Delay the initialization of the table to not waste memory
    # if the function is never called
    if alphabet not in _b32tab2:
        b32tab = [bytes((i,)) for i in alphabet]
        _b32tab2[alphabet] = [a + b for a in b32tab for b in b32tab]
        b32tab = None

    if not isinstance(s, bytes_types):
        s = memoryview(s).tobytes()
    leftover = len(s) % 5
    # Pad the last quantum with zero bits if necessary
    if leftover:
        s = s + b'\0' * (5 - leftover)  # Don't use += !
    encoded = bytearray()
    from_bytes = int.from_bytes
    b32tab2 = _b32tab2[alphabet]
    for i in range(0, len(s), 5):
        c = from_bytes(s[i: i + 5])  # big endian
        encoded += (b32tab2[c >> 30] +            # bits 1 - 10
                    b32tab2[(c >> 20) & 0x3ff] +  # bits 11 - 20
                    b32tab2[(c >> 10) & 0x3ff] +  # bits 21 - 30
                    b32tab2[c & 0x3ff]            # bits 31 - 40
                   )
    # Adjust for any leftover partial quanta
    if leftover == 1:
        encoded[-6:] = b'======'
    elif leftover == 2:
        encoded[-4:] = b'===='
    elif leftover == 3:
        encoded[-3:] = b'==='
    elif leftover == 4:
        encoded[-1:] = b'='
    return bytes(encoded)

def _b32decode(alphabet, s, casefold=False, map01=None):
    # Delay the initialization of the table to not waste memory
    # if the function is never called
    if alphabet not in _b32rev:
        _b32rev[alphabet] = {v: k for k, v in enumerate(alphabet)}
    s = _bytes_from_decode_data(s)
    if len(s) % 8:
        raise binascii.Error('Incorrect padding')
    # Handle section 2.4 zero and one mapping.  The flag map01 will be either
    # False, or the character to map the digit 1 (one) to.  It should be
    # either L (el) or I (eye).
    if map01 is not None:
        map01 = _bytes_from_decode_data(map01)
        assert len(map01) == 1, repr(map01)
        s = s.translate(bytes.maketrans(b'01', b'O' + map01))
    if casefold:
        s = s.upper()
    # Strip off pad characters from the right.  We need to count the pad
    # characters because this will tell us how many null bytes to remove from
    # the end of the decoded string.
    l = len(s)
    s = s.rstrip(b'=')
    padchars = l - len(s)
    # Now decode the full quanta
    decoded = bytearray()
    b32rev = _b32rev[alphabet]
    for i in range(0, len(s), 8):
        quanta = s[i: i + 8]
        acc = 0
        try:
            for c in quanta:
                acc = (acc << 5) + b32rev[c]
        except KeyError:
            raise binascii.Error('Non-base32 digit found') from None
        decoded += acc.to_bytes(5)  # big endian
    # Process the last, partial quanta
    if l % 8 or padchars not in {0, 1, 3, 4, 6}:
        raise binascii.Error('Incorrect padding')
    if padchars and decoded:
        acc <<= 5 * padchars
        last = acc.to_bytes(5)  # big endian
        leftover = (43 - 5 * padchars) // 8  # 1: 4, 3: 3, 4: 2, 6: 1
        decoded[-5:] = last[:leftover]
    return bytes(decoded)


def b32encode(s):
    return _b32encode(_b32alphabet, s)
b32encode.__doc__ = _B32_ENCODE_DOCSTRING.format(encoding='base32')

def b32decode(s, casefold=False, map01=None):
    return _b32decode(_b32alphabet, s, casefold, map01)
b32decode.__doc__ = _B32_DECODE_DOCSTRING.format(encoding='base32',
                                        extra_args=_B32_DECODE_MAP01_DOCSTRING)

def b32hexencode(s):
    return _b32encode(_b32hexalphabet, s)
b32hexencode.__doc__ = _B32_ENCODE_DOCSTRING.format(encoding='base32hex')

def b32hexdecode(s, casefold=False):
    # base32hex does not have the 01 mapping
    return _b32decode(_b32hexalphabet, s, casefold)
b32hexdecode.__doc__ = _B32_DECODE_DOCSTRING.format(encoding='base32hex',
                                                    extra_args='')
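A short check of the Base32 pair above, showing the 8-character padding for a partial quantum and the casefold flag; the inputs are illustrative:

```python
import base64

enc = base64.b32encode(b"hi")       # 2 input bytes -> 4 chars + 4 pad chars
assert enc == b"NBUQ===="
assert base64.b32decode(enc) == b"hi"
# Lowercase input is rejected by default and accepted with casefold=True.
assert base64.b32decode(enc.lower(), casefold=True) == b"hi"
```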
# RFC 3548, Base 16 Alphabet specifies uppercase, but hexlify() returns
# lowercase.  The RFC also recommends against accepting input case
# insensitively.
def b16encode(s):
    """Encode the bytes-like object s using Base16 and return a bytes object.
    """
    return binascii.hexlify(s).upper()


def b16decode(s, casefold=False):
    """Decode the Base16 encoded bytes-like object or ASCII string s.

    Optional casefold is a flag specifying whether a lowercase alphabet is
    acceptable as input.  For security purposes, the default is False.

    The result is returned as a bytes object.  A binascii.Error is raised if
    s is incorrectly padded or if there are non-alphabet characters present
    in the input.
    """
    s = _bytes_from_decode_data(s)
    if casefold:
        s = s.upper()
    if re.search(b'[^0-9A-F]', s):
        raise binascii.Error('Non-base16 digit found')
    return binascii.unhexlify(s)
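Base16 is just uppercase hex, as the comment above notes; a minimal check with an illustrative value:

```python
import base64

assert base64.b16encode(b"\x01\xab") == b"01AB"   # always uppercase output
# Lowercase digits are only accepted when casefold=True is passed.
assert base64.b16decode(b"01ab", casefold=True) == b"\x01\xab"
```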
#
# Ascii85 encoding/decoding
#

_a85chars = None
_a85chars2 = None
_A85START = b"<~"
_A85END = b"~>"

def _85encode(b, chars, chars2, pad=False, foldnuls=False, foldspaces=False):
    # Helper function for a85encode and b85encode
    if not isinstance(b, bytes_types):
        b = memoryview(b).tobytes()

    padding = (-len(b)) % 4
    if padding:
        b = b + b'\0' * padding
    words = struct.Struct('!%dI' % (len(b) // 4)).unpack(b)

    chunks = [b'z' if foldnuls and not word else
              b'y' if foldspaces and word == 0x20202020 else
              (chars2[word // 614125] +
               chars2[word // 85 % 7225] +
               chars[word % 85])
              for word in words]

    if padding and not pad:
        if chunks[-1] == b'z':
            chunks[-1] = chars[0] * 5
        chunks[-1] = chunks[-1][:-padding]

    return b''.join(chunks)

def a85encode(b, *, foldspaces=False, wrapcol=0, pad=False, adobe=False):
    """Encode bytes-like object b using Ascii85 and return a bytes object.

    foldspaces is an optional flag that uses the special short sequence 'y'
    instead of 4 consecutive spaces (ASCII 0x20) as supported by 'btoa'. This
    feature is not supported by the "standard" Adobe encoding.

    wrapcol controls whether the output should have newline (b'\\n') characters
    added to it. If this is non-zero, each output line will be at most this
    many characters long, excluding the trailing newline.

    pad controls whether the input is padded to a multiple of 4 before
    encoding. Note that the btoa implementation always pads.

    adobe controls whether the encoded byte sequence is framed with <~ and ~>,
    which is used by the Adobe implementation.
    """
    global _a85chars, _a85chars2
    # Delay the initialization of tables to not waste memory
    # if the function is never called
    if _a85chars2 is None:
        _a85chars = [bytes((i,)) for i in range(33, 118)]
        _a85chars2 = [(a + b) for a in _a85chars for b in _a85chars]

    result = _85encode(b, _a85chars, _a85chars2, pad, True, foldspaces)

    if adobe:
        result = _A85START + result
    if wrapcol:
        wrapcol = max(2 if adobe else 1, wrapcol)
        chunks = [result[i: i + wrapcol]
                  for i in range(0, len(result), wrapcol)]
        if adobe:
            if len(chunks[-1]) + 2 > wrapcol:
                chunks.append(b'')
        result = b'\n'.join(chunks)
    if adobe:
        result += _A85END

    return result

def a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \t\n\r\v'):
    """Decode the Ascii85 encoded bytes-like object or ASCII string b.

    foldspaces is a flag that specifies whether the 'y' short sequence should be
    accepted as shorthand for 4 consecutive spaces (ASCII 0x20). This feature is
    not supported by the "standard" Adobe encoding.

    adobe controls whether the input sequence is in Adobe Ascii85 format (i.e.
    is framed with <~ and ~>).

    ignorechars should be a byte string containing characters to ignore from the
    input. This should only contain whitespace characters, and by default
    contains all whitespace characters in ASCII.

    The result is returned as a bytes object.
    """
    b = _bytes_from_decode_data(b)
    if adobe:
        if not b.endswith(_A85END):
            raise ValueError(
                "Ascii85 encoded byte sequences must end "
                "with {!r}".format(_A85END)
                )
        if b.startswith(_A85START):
            b = b[2:-2]  # Strip off start/end markers
        else:
            b = b[:-2]
    #
    # We have to go through this stepwise, so as to ignore spaces and handle
    # special short sequences
    #
    packI = struct.Struct('!I').pack
    decoded = []
    decoded_append = decoded.append
    curr = []
    curr_append = curr.append
    curr_clear = curr.clear
    for x in b + b'u' * 4:
        if b'!'[0] <= x <= b'u'[0]:
            curr_append(x)
            if len(curr) == 5:
                acc = 0
                for x in curr:
                    acc = 85 * acc + (x - 33)
                try:
                    decoded_append(packI(acc))
                except struct.error:
                    raise ValueError('Ascii85 overflow') from None
                curr_clear()
        elif x == b'z'[0]:
            if curr:
                raise ValueError('z inside Ascii85 5-tuple')
            decoded_append(b'\0\0\0\0')
        elif foldspaces and x == b'y'[0]:
            if curr:
                raise ValueError('y inside Ascii85 5-tuple')
            decoded_append(b'\x20\x20\x20\x20')
        elif x in ignorechars:
            # Skip whitespace
            continue
        else:
            raise ValueError('Non-Ascii85 digit found: %c' % x)

    result = b''.join(decoded)
    padding = 4 - len(curr)
    if padding:
        # Throw away the extra padding
        result = result[:-padding]
    return result
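Two features of the Ascii85 pair above are easy to observe: the `z` fold for an all-zero word (foldnuls is always on for a85encode) and the Adobe `<~ ~>` framing. The inputs are illustrative:

```python
import base64

# Four zero bytes fold to the single short sequence b'z'.
assert base64.a85encode(b"\x00\x00\x00\x00") == b"z"

# adobe=True frames the output; a85decode must be told about the framing.
framed = base64.a85encode(b"hello", adobe=True)
assert framed.startswith(b"<~") and framed.endswith(b"~>")
assert base64.a85decode(framed, adobe=True) == b"hello"
```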
# The following code is originally taken (with permission) from Mercurial

_b85alphabet = (b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                b"abcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~")
_b85chars = None
_b85chars2 = None
_b85dec = None

def b85encode(b, pad=False):
    """Encode bytes-like object b in base85 format and return a bytes object.

    If pad is true, the input is padded with b'\\0' so its length is a multiple of
    4 bytes before encoding.
    """
    global _b85chars, _b85chars2
    # Delay the initialization of tables to not waste memory
    # if the function is never called
    if _b85chars2 is None:
        _b85chars = [bytes((i,)) for i in _b85alphabet]
        _b85chars2 = [(a + b) for a in _b85chars for b in _b85chars]
    return _85encode(b, _b85chars, _b85chars2, pad)

def b85decode(b):
    """Decode the base85-encoded bytes-like object or ASCII string b

    The result is returned as a bytes object.
    """
    global _b85dec
    # Delay the initialization of tables to not waste memory
    # if the function is never called
    if _b85dec is None:
        _b85dec = [None] * 256
        for i, c in enumerate(_b85alphabet):
            _b85dec[c] = i

    b = _bytes_from_decode_data(b)
    padding = (-len(b)) % 5
    b = b + b'~' * padding
    out = []
    packI = struct.Struct('!I').pack
    for i in range(0, len(b), 5):
        chunk = b[i:i + 5]
        acc = 0
        try:
            for c in chunk:
                acc = acc * 85 + _b85dec[c]
        except TypeError:
            for j, c in enumerate(chunk):
                if _b85dec[c] is None:
                    raise ValueError('bad base85 character at position %d'
                                     % (i + j)) from None
            raise
        try:
            out.append(packI(acc))
        except struct.error:
            raise ValueError('base85 overflow in hunk starting at byte %d'
                             % i) from None

    result = b''.join(out)
    if padding:
        result = result[:-padding]
    return result
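A round-trip check of the b85 pair above, plus the position-reporting error path that the `_b85dec[c] is None` branch implements. The inputs are illustrative:

```python
import base64

data = b"arbitrary \x00 bytes"
assert base64.b85decode(base64.b85encode(data)) == data

# A character outside the alphabet (here '"') is reported with its offset.
try:
    base64.b85decode(b'"bad"')
except ValueError as exc:
    assert "position 0" in str(exc)
```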
_z85alphabet = (b'0123456789abcdefghijklmnopqrstuvwxyz'
                b'ABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#')
# Translating b85 valid but z85 invalid chars to b'\x00' is required
# to prevent them from being decoded as b85 valid chars.
_z85_b85_decode_diff = b';_`|~'
_z85_decode_translation = bytes.maketrans(
    _z85alphabet + _z85_b85_decode_diff,
    _b85alphabet + b'\x00' * len(_z85_b85_decode_diff)
)
_z85_encode_translation = bytes.maketrans(_b85alphabet, _z85alphabet)

def z85encode(s):
    """Encode bytes-like object s in z85 format and return a bytes object."""
    return b85encode(s).translate(_z85_encode_translation)

def z85decode(s):
    """Decode the z85-encoded bytes-like object or ASCII string s.

    The result is returned as a bytes object.
    """
    s = _bytes_from_decode_data(s)
    s = s.translate(_z85_decode_translation)
    try:
        return b85decode(s)
    except ValueError as e:
        raise ValueError(e.args[0].replace('base85', 'z85')) from None
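As the translation tables above show, z85 is b85 with a re-ordered alphabet. A sketch that rebuilds the same translation and checks it against the ZeroMQ Z85 specification's reference frame (the 8-byte test vector and its "HelloWorld" encoding are taken from that spec, which is an assumption about external context, not from this file):

```python
import base64

_b85 = (b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        b"abcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~")
_z85 = (b"0123456789abcdefghijklmnopqrstuvwxyz"
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#")
table = bytes.maketrans(_b85, _z85)

frame = b"\x86\x4f\xd2\x6f\xb5\x59\xf7\x5b"   # Z85 spec reference frame
assert base64.b85encode(frame).translate(table) == b"HelloWorld"
```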
# Legacy interface.  This code could be cleaned up since I don't believe
# binascii has any line length limitations.  It just doesn't seem worth it
# though.  The files should be opened in binary mode.

MAXLINESIZE = 76  # Excluding the CRLF
MAXBINSIZE = (MAXLINESIZE//4)*3

def encode(input, output):
    """Encode a file; input and output are binary files."""
    while s := input.read(MAXBINSIZE):
        while len(s) < MAXBINSIZE and (ns := input.read(MAXBINSIZE-len(s))):
            s += ns
        line = binascii.b2a_base64(s)
        output.write(line)


def decode(input, output):
    """Decode a file; input and output are binary files."""
    while line := input.readline():
        s = binascii.a2b_base64(line)
        output.write(s)

def _input_type_check(s):
    try:
        m = memoryview(s)
    except TypeError as err:
        msg = "expected bytes-like object, not %s" % s.__class__.__name__
        raise TypeError(msg) from err
    if m.format not in ('c', 'b', 'B'):
        msg = ("expected single byte elements, not %r from %s" %
               (m.format, s.__class__.__name__))
        raise TypeError(msg)
    if m.ndim != 1:
        msg = ("expected 1-D data, not %d-D data from %s" %
               (m.ndim, s.__class__.__name__))
        raise TypeError(msg)


def encodebytes(s):
    """Encode a bytestring into a bytes object containing multiple lines
    of base-64 data."""
    _input_type_check(s)
    pieces = []
    for i in range(0, len(s), MAXBINSIZE):
        chunk = s[i : i + MAXBINSIZE]
        pieces.append(binascii.b2a_base64(chunk))
    return b"".join(pieces)


def decodebytes(s):
    """Decode a bytestring of base-64 data into a bytes object."""
    _input_type_check(s)
    return binascii.a2b_base64(s)
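The legacy interface chunks input at MAXBINSIZE (57 bytes), which is exactly what produces the familiar 76-character MIME lines; a quick check with illustrative data:

```python
import base64

blob = bytes(range(256))
wrapped = base64.encodebytes(blob)
# Every line is at most MAXLINESIZE (76) characters, excluding the newline.
assert all(len(line) <= 76 for line in wrapped.splitlines())
assert base64.decodebytes(wrapped) == blob
```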
# Usable as a script...
def main():
    """Small main program"""
    import sys, getopt
    usage = f"""usage: {sys.argv[0]} [-h|-d|-e|-u] [file|-]
        -h: print this help message and exit
        -d, -u: decode
        -e: encode (default)"""
    try:
        opts, args = getopt.getopt(sys.argv[1:], 'hdeu')
    except getopt.error as msg:
        sys.stdout = sys.stderr
        print(msg)
        print(usage)
        sys.exit(2)
    func = encode
    for o, a in opts:
        if o == '-e': func = encode
        if o == '-d': func = decode
        if o == '-u': func = decode
        if o == '-h': print(usage); return
    if args and args[0] != '-':
        with open(args[0], 'rb') as f:
            func(f, sys.stdout.buffer)
    else:
        func(sys.stdin.buffer, sys.stdout.buffer)


if __name__ == '__main__':
    main()
959
Utils/PythonNew32/Lib/bdb.py
Normal file
@@ -0,0 +1,959 @@
|
||||
"""Debugger basics"""
|
||||
|
||||
import fnmatch
|
||||
import sys
|
||||
import os
|
||||
from contextlib import contextmanager
|
||||
from inspect import CO_GENERATOR, CO_COROUTINE, CO_ASYNC_GENERATOR
|
||||
|
||||
__all__ = ["BdbQuit", "Bdb", "Breakpoint"]
|
||||
|
||||
GENERATOR_AND_COROUTINE_FLAGS = CO_GENERATOR | CO_COROUTINE | CO_ASYNC_GENERATOR
|
||||
|
||||
|
||||
class BdbQuit(Exception):
|
||||
"""Exception to give up completely."""
|
||||
|
||||
|
||||
class Bdb:
|
||||
"""Generic Python debugger base class.
|
||||
|
||||
This class takes care of details of the trace facility;
|
||||
a derived class should implement user interaction.
|
||||
The standard debugger class (pdb.Pdb) is an example.
|
||||
|
||||
The optional skip argument must be an iterable of glob-style
|
||||
module name patterns. The debugger will not step into frames
|
||||
that originate in a module that matches one of these patterns.
|
||||
Whether a frame is considered to originate in a certain module
|
||||
is determined by the __name__ in the frame globals.
|
||||
"""
|
||||
|
||||
def __init__(self, skip=None):
|
||||
self.skip = set(skip) if skip else None
|
||||
self.breaks = {}
|
||||
self.fncache = {}
|
||||
self.frame_trace_lines_opcodes = {}
|
||||
self.frame_returning = None
|
||||
self.trace_opcodes = False
|
||||
self.enterframe = None
|
||||
|
||||
self._load_breaks()
|
||||
|
||||
def canonic(self, filename):
|
||||
"""Return canonical form of filename.
|
||||
|
||||
For real filenames, the canonical form is a case-normalized (on
|
||||
case insensitive filesystems) absolute path. 'Filenames' with
|
||||
angle brackets, such as "<stdin>", generated in interactive
|
||||
mode, are returned unchanged.
|
||||
"""
|
||||
if filename == "<" + filename[1:-1] + ">":
|
||||
return filename
|
||||
canonic = self.fncache.get(filename)
|
||||
if not canonic:
|
||||
canonic = os.path.abspath(filename)
|
||||
canonic = os.path.normcase(canonic)
|
||||
self.fncache[filename] = canonic
|
||||
return canonic
|
||||
|
||||
def reset(self):
|
||||
"""Set values of attributes as ready to start debugging."""
|
||||
import linecache
|
||||
linecache.checkcache()
|
||||
self.botframe = None
|
||||
self._set_stopinfo(None, None)
|
||||
|
||||
@contextmanager
|
||||
def set_enterframe(self, frame):
|
||||
self.enterframe = frame
|
||||
yield
|
||||
self.enterframe = None
|
||||
|
||||
def trace_dispatch(self, frame, event, arg):
|
||||
"""Dispatch a trace function for debugged frames based on the event.
|
||||
|
||||
This function is installed as the trace function for debugged
|
||||
frames. Its return value is the new trace function, which is
|
||||
usually itself. The default implementation decides how to
|
||||
dispatch a frame, depending on the type of event (passed in as a
|
||||
string) that is about to be executed.
|
||||
|
||||
The event can be one of the following:
|
||||
line: A new line of code is going to be executed.
|
||||
call: A function is about to be called or another code block
|
||||
is entered.
|
||||
return: A function or other code block is about to return.
|
||||
exception: An exception has occurred.
|
||||
c_call: A C function is about to be called.
|
||||
c_return: A C function has returned.
|
||||
c_exception: A C function has raised an exception.
|
||||
|
||||
For the Python events, specialized functions (see the dispatch_*()
|
||||
methods) are called. For the C events, no action is taken.
|
||||
|
||||
The arg parameter depends on the previous event.
|
||||
"""
|
||||
|
||||
with self.set_enterframe(frame):
|
||||
if self.quitting:
|
||||
return # None
|
||||
if event == 'line':
|
||||
return self.dispatch_line(frame)
|
||||
if event == 'call':
|
||||
return self.dispatch_call(frame, arg)
|
||||
if event == 'return':
|
||||
return self.dispatch_return(frame, arg)
|
||||
if event == 'exception':
|
||||
return self.dispatch_exception(frame, arg)
|
||||
if event == 'c_call':
|
||||
return self.trace_dispatch
|
||||
if event == 'c_exception':
|
||||
return self.trace_dispatch
|
||||
if event == 'c_return':
|
||||
return self.trace_dispatch
|
||||
if event == 'opcode':
|
||||
return self.dispatch_opcode(frame, arg)
|
||||
print('bdb.Bdb.dispatch: unknown debugging event:', repr(event))
|
||||
return self.trace_dispatch
|
||||
|
||||
def dispatch_line(self, frame):
|
||||
"""Invoke user function and return trace function for line event.
|
||||
|
||||
If the debugger stops on the current line, invoke
|
||||
self.user_line(). Raise BdbQuit if self.quitting is set.
|
||||
Return self.trace_dispatch to continue tracing in this scope.
|
||||
"""
|
||||
if self.stop_here(frame) or self.break_here(frame):
|
||||
self.user_line(frame)
|
||||
if self.quitting: raise BdbQuit
|
||||
return self.trace_dispatch
|
||||
|
||||
def dispatch_call(self, frame, arg):
|
||||
"""Invoke user function and return trace function for call event.
|
||||
|
||||
If the debugger stops on this function call, invoke
|
||||
self.user_call(). Raise BdbQuit if self.quitting is set.
|
||||
Return self.trace_dispatch to continue tracing in this scope.
|
||||
"""
|
||||
# XXX 'arg' is no longer used
|
||||
if self.botframe is None:
|
||||
# First call of dispatch since reset()
|
||||
self.botframe = frame.f_back # (CT) Note that this may also be None!
|
||||
return self.trace_dispatch
|
||||
if not (self.stop_here(frame) or self.break_anywhere(frame)):
|
||||
# No need to trace this function
|
||||
return # None
|
||||
# Ignore call events in generator except when stepping.
|
||||
if self.stopframe and frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS:
|
||||
return self.trace_dispatch
|
||||
self.user_call(frame, arg)
|
||||
if self.quitting: raise BdbQuit
|
||||
return self.trace_dispatch
|
||||
|
||||
def dispatch_return(self, frame, arg):
|
||||
"""Invoke user function and return trace function for return event.
|
||||
|
||||
If the debugger stops on this function return, invoke
|
||||
self.user_return(). Raise BdbQuit if self.quitting is set.
|
||||
        Return self.trace_dispatch to continue tracing in this scope.
        """
        if self.stop_here(frame) or frame == self.returnframe:
            # Ignore return events in generator except when stepping.
            if self.stopframe and frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS:
                return self.trace_dispatch
            try:
                self.frame_returning = frame
                self.user_return(frame, arg)
            finally:
                self.frame_returning = None
            if self.quitting: raise BdbQuit
            # The user issued a 'next' or 'until' command.
            if self.stopframe is frame and self.stoplineno != -1:
                self._set_stopinfo(None, None)
                # The previous frame might not have f_trace set, unless we are
                # issuing a command that does not expect to stop, we should set
                # f_trace
                if self.stoplineno != -1:
                    self._set_caller_tracefunc(frame)
        return self.trace_dispatch

    def dispatch_exception(self, frame, arg):
        """Invoke user function and return trace function for exception event.

        If the debugger stops on this exception, invoke
        self.user_exception(). Raise BdbQuit if self.quitting is set.
        Return self.trace_dispatch to continue tracing in this scope.
        """
        if self.stop_here(frame):
            # When stepping with next/until/return in a generator frame, skip
            # the internal StopIteration exception (with no traceback)
            # triggered by a subiterator run with the 'yield from' statement.
            if not (frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS
                    and arg[0] is StopIteration and arg[2] is None):
                self.user_exception(frame, arg)
                if self.quitting: raise BdbQuit
        # Stop at the StopIteration or GeneratorExit exception when the user
        # has set stopframe in a generator by issuing a return command, or a
        # next/until command at the last statement in the generator before the
        # exception.
        elif (self.stopframe and frame is not self.stopframe
                and self.stopframe.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS
                and arg[0] in (StopIteration, GeneratorExit)):
            self.user_exception(frame, arg)
            if self.quitting: raise BdbQuit

        return self.trace_dispatch

    def dispatch_opcode(self, frame, arg):
        """Invoke user function and return trace function for opcode event.

        If the debugger stops on the current opcode, invoke
        self.user_opcode(). Raise BdbQuit if self.quitting is set.
        Return self.trace_dispatch to continue tracing in this scope.
        """
        if self.stop_here(frame) or self.break_here(frame):
            self.user_opcode(frame)
            if self.quitting: raise BdbQuit
        return self.trace_dispatch

    # Normally derived classes don't override the following
    # methods, but they may if they want to redefine the
    # definition of stopping and breakpoints.

    def is_skipped_module(self, module_name):
        "Return True if module_name matches any skip pattern."
        if module_name is None:  # some modules do not have names
            return False
        for pattern in self.skip:
            if fnmatch.fnmatch(module_name, pattern):
                return True
        return False

    def stop_here(self, frame):
        "Return True if frame is below the starting frame in the stack."
        # (CT) stopframe may now also be None, see dispatch_call.
        # (CT) the former test for None is therefore removed from here.
        if self.skip and \
               self.is_skipped_module(frame.f_globals.get('__name__')):
            return False
        if frame is self.stopframe:
            if self.stoplineno == -1:
                return False
            return frame.f_lineno >= self.stoplineno
        if not self.stopframe:
            return True
        return False

    def break_here(self, frame):
        """Return True if there is an effective breakpoint for this line.

        Check for line or function breakpoint and if in effect.
        Delete temporary breakpoints if effective() says to.
        """
        filename = self.canonic(frame.f_code.co_filename)
        if filename not in self.breaks:
            return False
        lineno = frame.f_lineno
        if lineno not in self.breaks[filename]:
            # The line itself has no breakpoint, but maybe the line is the
            # first line of a function with breakpoint set by function name.
            lineno = frame.f_code.co_firstlineno
            if lineno not in self.breaks[filename]:
                return False

        # flag says ok to delete temp. bp
        (bp, flag) = effective(filename, lineno, frame)
        if bp:
            self.currentbp = bp.number
            if (flag and bp.temporary):
                self.do_clear(str(bp.number))
            return True
        else:
            return False

    def do_clear(self, arg):
        """Remove temporary breakpoint.

        Must implement in derived classes or get NotImplementedError.
        """
        raise NotImplementedError("subclass of bdb must implement do_clear()")

    def break_anywhere(self, frame):
        """Return True if there is any breakpoint for frame's filename.
        """
        return self.canonic(frame.f_code.co_filename) in self.breaks

    # Derived classes should override the user_* methods
    # to gain control.
    def user_call(self, frame, argument_list):
        """Called if we might stop in a function."""
        pass

    def user_line(self, frame):
        """Called when we stop or break at a line."""
        pass

    def user_return(self, frame, return_value):
        """Called when a return trap is set here."""
        pass

    def user_exception(self, frame, exc_info):
        """Called when we stop on an exception."""
        pass

    def user_opcode(self, frame):
        """Called when we are about to execute an opcode."""
        pass

    def _set_trace_opcodes(self, trace_opcodes):
        if trace_opcodes != self.trace_opcodes:
            self.trace_opcodes = trace_opcodes
            frame = self.enterframe
            while frame is not None:
                frame.f_trace_opcodes = trace_opcodes
                if frame is self.botframe:
                    break
                frame = frame.f_back

    def _set_stopinfo(self, stopframe, returnframe, stoplineno=0, opcode=False):
        """Set the attributes for stopping.

        If stoplineno is greater than or equal to 0, then stop at line
        greater than or equal to the stopline.  If stoplineno is -1, then
        don't stop at all.
        """
        self.stopframe = stopframe
        self.returnframe = returnframe
        self.quitting = False
        # stoplineno >= 0 means: stop at line >= the stoplineno
        # stoplineno -1 means: don't stop at all
        self.stoplineno = stoplineno
        self._set_trace_opcodes(opcode)

    def _set_caller_tracefunc(self, current_frame):
        # Issue #13183: pdb skips frames after hitting a breakpoint and running
        # step commands.
        # Restore the trace function in the caller (that may not have been set
        # for performance reasons) when returning from the current frame, unless
        # the caller is the botframe.
        caller_frame = current_frame.f_back
        if caller_frame and not caller_frame.f_trace and caller_frame is not self.botframe:
            caller_frame.f_trace = self.trace_dispatch

    # Derived classes and clients can call the following methods
    # to affect the stepping state.
    def set_until(self, frame, lineno=None):
        """Stop when the line with the lineno greater than the current one is
        reached or when returning from current frame."""
        # the name "until" is borrowed from gdb
        if lineno is None:
            lineno = frame.f_lineno + 1
        self._set_stopinfo(frame, frame, lineno)

    def set_step(self):
        """Stop after one line of code."""
        self._set_stopinfo(None, None)

    def set_stepinstr(self):
        """Stop before the next instruction."""
        self._set_stopinfo(None, None, opcode=True)

    def set_next(self, frame):
        """Stop on the next line in or below the given frame."""
        self._set_stopinfo(frame, None)

    def set_return(self, frame):
        """Stop when returning from the given frame."""
        if frame.f_code.co_flags & GENERATOR_AND_COROUTINE_FLAGS:
            self._set_stopinfo(frame, None, -1)
        else:
            self._set_stopinfo(frame.f_back, frame)

    def set_trace(self, frame=None):
        """Start debugging from frame.

        If frame is not specified, debugging starts from caller's frame.
        """
        if frame is None:
            frame = sys._getframe().f_back
        self.reset()
        with self.set_enterframe(frame):
            while frame:
                frame.f_trace = self.trace_dispatch
                self.botframe = frame
                self.frame_trace_lines_opcodes[frame] = (frame.f_trace_lines, frame.f_trace_opcodes)
                # We need f_trace_lines == True for the debugger to work
                frame.f_trace_lines = True
                frame = frame.f_back
            self.set_stepinstr()
            sys.settrace(self.trace_dispatch)

    def set_continue(self):
        """Stop only at breakpoints or when finished.

        If there are no breakpoints, set the system trace function to None.
        """
        # Don't stop except at breakpoints or when finished
        self._set_stopinfo(self.botframe, None, -1)
        if not self.breaks:
            # no breakpoints; run without debugger overhead
            sys.settrace(None)
            frame = sys._getframe().f_back
            while frame and frame is not self.botframe:
                del frame.f_trace
                frame = frame.f_back
            for frame, (trace_lines, trace_opcodes) in self.frame_trace_lines_opcodes.items():
                frame.f_trace_lines, frame.f_trace_opcodes = trace_lines, trace_opcodes
            self.frame_trace_lines_opcodes = {}

    def set_quit(self):
        """Set quitting attribute to True.

        Raises BdbQuit exception in the next call to a dispatch_*() method.
        """
        self.stopframe = self.botframe
        self.returnframe = None
        self.quitting = True
        sys.settrace(None)

    # Derived classes and clients can call the following methods
    # to manipulate breakpoints.  These methods return an
    # error message if something went wrong, None if all is well.
    # Set_break prints out the breakpoint line and file:lineno.
    # Call self.get_*break*() to see the breakpoints or better
    # for bp in Breakpoint.bpbynumber: if bp: bp.bpprint().
    def _add_to_breaks(self, filename, lineno):
        """Add breakpoint to breaks, if not already there."""
        bp_linenos = self.breaks.setdefault(filename, [])
        if lineno not in bp_linenos:
            bp_linenos.append(lineno)

    def set_break(self, filename, lineno, temporary=False, cond=None,
                  funcname=None):
        """Set a new breakpoint for filename:lineno.

        If lineno doesn't exist for the filename, return an error message.
        The filename should be in canonical form.
        """
        filename = self.canonic(filename)
        import linecache  # Import as late as possible
        line = linecache.getline(filename, lineno)
        if not line:
            return 'Line %s:%d does not exist' % (filename, lineno)
        self._add_to_breaks(filename, lineno)
        bp = Breakpoint(filename, lineno, temporary, cond, funcname)
        # After we set a new breakpoint, we need to search through all frames
        # and set f_trace to trace_dispatch if there could be a breakpoint in
        # that frame.
        frame = self.enterframe
        while frame:
            if self.break_anywhere(frame):
                frame.f_trace = self.trace_dispatch
            frame = frame.f_back
        return None

    def _load_breaks(self):
        """Apply all breakpoints (set in other instances) to this one.

        Populates this instance's breaks list from the Breakpoint class's
        list, which can have breakpoints set by another Bdb instance. This
        is necessary for interactive sessions to keep the breakpoints
        active across multiple calls to run().
        """
        for (filename, lineno) in Breakpoint.bplist.keys():
            self._add_to_breaks(filename, lineno)

    def _prune_breaks(self, filename, lineno):
        """Prune breakpoints for filename:lineno.

        A list of breakpoints is maintained in the Bdb instance and in
        the Breakpoint class.  If a breakpoint in the Bdb instance no
        longer exists in the Breakpoint class, then it's removed from the
        Bdb instance.
        """
        if (filename, lineno) not in Breakpoint.bplist:
            self.breaks[filename].remove(lineno)
        if not self.breaks[filename]:
            del self.breaks[filename]

    def clear_break(self, filename, lineno):
        """Delete breakpoints for filename:lineno.

        If no breakpoints were set, return an error message.
        """
        filename = self.canonic(filename)
        if filename not in self.breaks:
            return 'There are no breakpoints in %s' % filename
        if lineno not in self.breaks[filename]:
            return 'There is no breakpoint at %s:%d' % (filename, lineno)
        # If there's only one bp in the list for that file,line
        # pair, then remove the breaks entry
        for bp in Breakpoint.bplist[filename, lineno][:]:
            bp.deleteMe()
        self._prune_breaks(filename, lineno)
        return None

    def clear_bpbynumber(self, arg):
        """Delete a breakpoint by its index in Breakpoint.bpbynumber.

        If arg is invalid, return an error message.
        """
        try:
            bp = self.get_bpbynumber(arg)
        except ValueError as err:
            return str(err)
        bp.deleteMe()
        self._prune_breaks(bp.file, bp.line)
        return None

    def clear_all_file_breaks(self, filename):
        """Delete all breakpoints in filename.

        If none were set, return an error message.
        """
        filename = self.canonic(filename)
        if filename not in self.breaks:
            return 'There are no breakpoints in %s' % filename
        for line in self.breaks[filename]:
            blist = Breakpoint.bplist[filename, line]
            for bp in blist:
                bp.deleteMe()
        del self.breaks[filename]
        return None

    def clear_all_breaks(self):
        """Delete all existing breakpoints.

        If none were set, return an error message.
        """
        if not self.breaks:
            return 'There are no breakpoints'
        for bp in Breakpoint.bpbynumber:
            if bp:
                bp.deleteMe()
        self.breaks = {}
        return None

    def get_bpbynumber(self, arg):
        """Return a breakpoint by its index in Breakpoint.bpbynumber.

        For invalid arg values or if the breakpoint doesn't exist,
        raise a ValueError.
        """
        if not arg:
            raise ValueError('Breakpoint number expected')
        try:
            number = int(arg)
        except ValueError:
            raise ValueError('Non-numeric breakpoint number %s' % arg) from None
        try:
            bp = Breakpoint.bpbynumber[number]
        except IndexError:
            raise ValueError('Breakpoint number %d out of range' % number) from None
        if bp is None:
            raise ValueError('Breakpoint %d already deleted' % number)
        return bp

    def get_break(self, filename, lineno):
        """Return True if there is a breakpoint for filename:lineno."""
        filename = self.canonic(filename)
        return filename in self.breaks and \
            lineno in self.breaks[filename]

    def get_breaks(self, filename, lineno):
        """Return all breakpoints for filename:lineno.

        If no breakpoints are set, return an empty list.
        """
        filename = self.canonic(filename)
        return filename in self.breaks and \
            lineno in self.breaks[filename] and \
            Breakpoint.bplist[filename, lineno] or []

    def get_file_breaks(self, filename):
        """Return all lines with breakpoints for filename.

        If no breakpoints are set, return an empty list.
        """
        filename = self.canonic(filename)
        if filename in self.breaks:
            return self.breaks[filename]
        else:
            return []

    def get_all_breaks(self):
        """Return all breakpoints that are set."""
        return self.breaks

    # Derived classes and clients can call the following method
    # to get a data structure representing a stack trace.
    def get_stack(self, f, t):
        """Return a list of (frame, lineno) in a stack trace and a size.

        List starts with original calling frame, if there is one.
        Size may be number of frames above or below f.
        """
        stack = []
        if t and t.tb_frame is f:
            t = t.tb_next
        while f is not None:
            stack.append((f, f.f_lineno))
            if f is self.botframe:
                break
            f = f.f_back
        stack.reverse()
        i = max(0, len(stack) - 1)
        while t is not None:
            stack.append((t.tb_frame, t.tb_lineno))
            t = t.tb_next
        if f is None:
            i = max(0, len(stack) - 1)
        return stack, i

    def format_stack_entry(self, frame_lineno, lprefix=': '):
        """Return a string with information about a stack entry.

        The stack entry frame_lineno is a (frame, lineno) tuple.  The
        return string contains the canonical filename, the function name
        or '<lambda>', the input arguments, the return value, and the
        line of code (if it exists).

        """
        import linecache, reprlib
        frame, lineno = frame_lineno
        filename = self.canonic(frame.f_code.co_filename)
        s = '%s(%r)' % (filename, lineno)
        if frame.f_code.co_name:
            s += frame.f_code.co_name
        else:
            s += "<lambda>"
        s += '()'
        if '__return__' in frame.f_locals:
            rv = frame.f_locals['__return__']
            s += '->'
            s += reprlib.repr(rv)
        if lineno is not None:
            line = linecache.getline(filename, lineno, frame.f_globals)
            if line:
                s += lprefix + line.strip()
        else:
            s += f'{lprefix}Warning: lineno is None'
        return s

    # The following methods can be called by clients to use
    # a debugger to debug a statement or an expression.
    # Both can be given as a string, or a code object.

    def run(self, cmd, globals=None, locals=None):
        """Debug a statement executed via the exec() function.

        globals defaults to __main__.dict; locals defaults to globals.
        """
        if globals is None:
            import __main__
            globals = __main__.__dict__
        if locals is None:
            locals = globals
        self.reset()
        if isinstance(cmd, str):
            cmd = compile(cmd, "<string>", "exec")
        sys.settrace(self.trace_dispatch)
        try:
            exec(cmd, globals, locals)
        except BdbQuit:
            pass
        finally:
            self.quitting = True
            sys.settrace(None)

    def runeval(self, expr, globals=None, locals=None):
        """Debug an expression executed via the eval() function.

        globals defaults to __main__.dict; locals defaults to globals.
        """
        if globals is None:
            import __main__
            globals = __main__.__dict__
        if locals is None:
            locals = globals
        self.reset()
        sys.settrace(self.trace_dispatch)
        try:
            return eval(expr, globals, locals)
        except BdbQuit:
            pass
        finally:
            self.quitting = True
            sys.settrace(None)

    def runctx(self, cmd, globals, locals):
        """For backwards-compatibility.  Defers to run()."""
        # B/W compatibility
        self.run(cmd, globals, locals)

    # This method is more useful to debug a single function call.

    def runcall(self, func, /, *args, **kwds):
        """Debug a single function call.

        Return the result of the function call.
        """
        self.reset()
        sys.settrace(self.trace_dispatch)
        res = None
        try:
            res = func(*args, **kwds)
        except BdbQuit:
            pass
        finally:
            self.quitting = True
            sys.settrace(None)
        return res

def set_trace():
    """Start debugging with a Bdb instance from the caller's frame."""
    Bdb().set_trace()


class Breakpoint:
    """Breakpoint class.

    Implements temporary breakpoints, ignore counts, disabling and
    (re)-enabling, and conditionals.

    Breakpoints are indexed by number through bpbynumber and by
    the (file, line) tuple using bplist.  The former points to a
    single instance of class Breakpoint.  The latter points to a
    list of such instances since there may be more than one
    breakpoint per line.

    When creating a breakpoint, its associated filename should be
    in canonical form.  If funcname is defined, a breakpoint hit will be
    counted when the first line of that function is executed.  A
    conditional breakpoint always counts a hit.
    """

    # XXX Keeping state in the class is a mistake -- this means
    # you cannot have more than one active Bdb instance.

    next = 1        # Next bp to be assigned
    bplist = {}     # indexed by (file, lineno) tuple
    bpbynumber = [None]  # Each entry is None or an instance of Bpt
                         # index 0 is unused, except for marking an
                         # effective break .... see effective()

    def __init__(self, file, line, temporary=False, cond=None, funcname=None):
        self.funcname = funcname
        # Needed if funcname is not None.
        self.func_first_executable_line = None
        self.file = file    # This better be in canonical form!
        self.line = line
        self.temporary = temporary
        self.cond = cond
        self.enabled = True
        self.ignore = 0
        self.hits = 0
        self.number = Breakpoint.next
        Breakpoint.next += 1
        # Build the two lists
        self.bpbynumber.append(self)
        if (file, line) in self.bplist:
            self.bplist[file, line].append(self)
        else:
            self.bplist[file, line] = [self]

    @staticmethod
    def clearBreakpoints():
        Breakpoint.next = 1
        Breakpoint.bplist = {}
        Breakpoint.bpbynumber = [None]

    def deleteMe(self):
        """Delete the breakpoint from the list associated to a file:line.

        If it is the last breakpoint in that position, it also deletes
        the entry for the file:line.
        """

        index = (self.file, self.line)
        self.bpbynumber[self.number] = None   # No longer in list
        self.bplist[index].remove(self)
        if not self.bplist[index]:
            # No more bp for this f:l combo
            del self.bplist[index]

    def enable(self):
        """Mark the breakpoint as enabled."""
        self.enabled = True

    def disable(self):
        """Mark the breakpoint as disabled."""
        self.enabled = False

    def bpprint(self, out=None):
        """Print the output of bpformat().

        The optional out argument directs where the output is sent
        and defaults to standard output.
        """
        if out is None:
            out = sys.stdout
        print(self.bpformat(), file=out)

    def bpformat(self):
        """Return a string with information about the breakpoint.

        The information includes the breakpoint number, temporary
        status, file:line position, break condition, number of times to
        ignore, and number of times hit.

        """
        if self.temporary:
            disp = 'del  '
        else:
            disp = 'keep '
        if self.enabled:
            disp = disp + 'yes  '
        else:
            disp = disp + 'no   '
        ret = '%-4dbreakpoint   %s at %s:%d' % (self.number, disp,
                                                self.file, self.line)
        if self.cond:
            ret += '\n\tstop only if %s' % (self.cond,)
        if self.ignore:
            ret += '\n\tignore next %d hits' % (self.ignore,)
        if self.hits:
            if self.hits > 1:
                ss = 's'
            else:
                ss = ''
            ret += '\n\tbreakpoint already hit %d time%s' % (self.hits, ss)
        return ret

    def __str__(self):
        "Return a condensed description of the breakpoint."
        return 'breakpoint %s at %s:%s' % (self.number, self.file, self.line)

# -----------end of Breakpoint class----------

def checkfuncname(b, frame):
    """Return True if break should happen here.

    Whether a break should happen depends on the way that b (the breakpoint)
    was set.  If it was set via line number, check if b.line is the same as
    the one in the frame.  If it was set via function name, check if this is
    the right function and if it is on the first executable line.
    """
    if not b.funcname:
        # Breakpoint was set via line number.
        if b.line != frame.f_lineno:
            # Breakpoint was set at a line with a def statement and the function
            # defined is called: don't break.
            return False
        return True

    # Breakpoint set via function name.
    if frame.f_code.co_name != b.funcname:
        # It's not a function call, but rather execution of def statement.
        return False

    # We are in the right frame.
    if not b.func_first_executable_line:
        # The function is entered for the 1st time.
        b.func_first_executable_line = frame.f_lineno

    if b.func_first_executable_line != frame.f_lineno:
        # But we are not at the first line number: don't break.
        return False
    return True


def effective(file, line, frame):
    """Return (active breakpoint, delete temporary flag) or (None, None) as
    breakpoint to act upon.

    The "active breakpoint" is the first entry in bplist[line, file] (which
    must exist) that is enabled, for which checkfuncname is True, and that
    has neither a False condition nor a positive ignore count.  The flag,
    meaning that a temporary breakpoint should be deleted, is False only
    when the condition cannot be evaluated (in which case, ignore count is
    ignored).

    If no such entry exists, then (None, None) is returned.
    """
    possibles = Breakpoint.bplist[file, line]
    for b in possibles:
        if not b.enabled:
            continue
        if not checkfuncname(b, frame):
            continue
        # Count every hit when bp is enabled
        b.hits += 1
        if not b.cond:
            # If unconditional, and ignoring go on to next, else break
            if b.ignore > 0:
                b.ignore -= 1
                continue
            else:
                # breakpoint and marker that it's ok to delete if temporary
                return (b, True)
        else:
            # Conditional bp.
            # Ignore count applies only to those bpt hits where the
            # condition evaluates to true.
            try:
                val = eval(b.cond, frame.f_globals, frame.f_locals)
                if val:
                    if b.ignore > 0:
                        b.ignore -= 1
                        # continue
                    else:
                        return (b, True)
                # else:
                #   continue
            except:
                # if eval fails, most conservative thing is to stop on
                # breakpoint regardless of ignore count.  Don't delete
                # temporary, as another hint to user.
                return (b, False)
    return (None, None)


# -------------------- testing --------------------

class Tdb(Bdb):
    def user_call(self, frame, args):
        name = frame.f_code.co_name
        if not name: name = '???'
        print('+++ call', name, args)
    def user_line(self, frame):
        import linecache
        name = frame.f_code.co_name
        if not name: name = '???'
        fn = self.canonic(frame.f_code.co_filename)
        line = linecache.getline(fn, frame.f_lineno, frame.f_globals)
        print('+++', fn, frame.f_lineno, name, ':', line.strip())
    def user_return(self, frame, retval):
        print('+++ return', retval)
    def user_exception(self, frame, exc_stuff):
        print('+++ exception', exc_stuff)
        self.set_continue()

def foo(n):
    print('foo(', n, ')')
    x = bar(n*10)
    print('bar returned', x)

def bar(a):
    print('bar(', a, ')')
    return a/2

def test():
    t = Tdb()
    t.run('import bdb; bdb.foo(10)')
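The `Tdb` test class above illustrates the intended subclassing pattern: override the `user_*` hooks and drive execution through `runcall()` or `run()`. A minimal sketch of the same pattern against the standard-library `bdb` module (the `Tracer` class and `demo` function here are illustrative, not part of the vendored file):

```python
import bdb

class Tracer(bdb.Bdb):
    """Minimal Bdb subclass: print every line event while stepping."""
    def user_line(self, frame):
        print('line', frame.f_code.co_name, frame.f_lineno)
        self.set_step()  # keep single-stepping

def demo():
    x = 1
    y = x + 1
    return y

# runcall() traces the call and returns the function's result.
result = Tracer().runcall(demo)
print('demo returned', result)
```

`runcall()` installs `trace_dispatch` via `sys.settrace()`, so the hooks fire for every event in `demo` and tracing is removed again in the `finally` block.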
118
Utils/PythonNew32/Lib/bisect.py
Normal file
@@ -0,0 +1,118 @@
"""Bisection algorithms."""


def insort_right(a, x, lo=0, hi=None, *, key=None):
    """Insert item x in list a, and keep it sorted assuming a is sorted.

    If x is already in a, insert it to the right of the rightmost x.

    Optional args lo (default 0) and hi (default len(a)) bound the
    slice of a to be searched.

    A custom key function can be supplied to customize the sort order.
    """
    if key is None:
        lo = bisect_right(a, x, lo, hi)
    else:
        lo = bisect_right(a, key(x), lo, hi, key=key)
    a.insert(lo, x)


def bisect_right(a, x, lo=0, hi=None, *, key=None):
    """Return the index where to insert item x in list a, assuming a is sorted.

    The return value i is such that all e in a[:i] have e <= x, and all e in
    a[i:] have e > x.  So if x already appears in the list, a.insert(i, x) will
    insert just after the rightmost x already there.

    Optional args lo (default 0) and hi (default len(a)) bound the
    slice of a to be searched.

    A custom key function can be supplied to customize the sort order.
    """

    if lo < 0:
        raise ValueError('lo must be non-negative')
    if hi is None:
        hi = len(a)
    # Note, the comparison uses "<" to match the
    # __lt__() logic in list.sort() and in heapq.
    if key is None:
        while lo < hi:
            mid = (lo + hi) // 2
            if x < a[mid]:
                hi = mid
            else:
                lo = mid + 1
    else:
        while lo < hi:
            mid = (lo + hi) // 2
            if x < key(a[mid]):
                hi = mid
            else:
                lo = mid + 1
    return lo


def insort_left(a, x, lo=0, hi=None, *, key=None):
    """Insert item x in list a, and keep it sorted assuming a is sorted.

    If x is already in a, insert it to the left of the leftmost x.

    Optional args lo (default 0) and hi (default len(a)) bound the
    slice of a to be searched.

    A custom key function can be supplied to customize the sort order.
    """

    if key is None:
        lo = bisect_left(a, x, lo, hi)
    else:
        lo = bisect_left(a, key(x), lo, hi, key=key)
    a.insert(lo, x)


def bisect_left(a, x, lo=0, hi=None, *, key=None):
    """Return the index where to insert item x in list a, assuming a is sorted.

    The return value i is such that all e in a[:i] have e < x, and all e in
    a[i:] have e >= x.  So if x already appears in the list, a.insert(i, x) will
    insert just before the leftmost x already there.

    Optional args lo (default 0) and hi (default len(a)) bound the
    slice of a to be searched.

    A custom key function can be supplied to customize the sort order.
    """

    if lo < 0:
        raise ValueError('lo must be non-negative')
    if hi is None:
        hi = len(a)
    # Note, the comparison uses "<" to match the
    # __lt__() logic in list.sort() and in heapq.
    if key is None:
        while lo < hi:
            mid = (lo + hi) // 2
            if a[mid] < x:
                lo = mid + 1
            else:
                hi = mid
    else:
        while lo < hi:
            mid = (lo + hi) // 2
            if key(a[mid]) < x:
                lo = mid + 1
            else:
                hi = mid
    return lo


# Overwrite above definitions with a fast C implementation
try:
    from _bisect import *
except ImportError:
    pass

# Create aliases
bisect = bisect_right
insort = insort_right
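The invariant stated in `bisect_right`'s docstring (insert to the right of equal elements) is what makes it suitable for mapping a value into sorted buckets. A small sketch, adapted from the grading example in the CPython documentation:

```python
import bisect

# Sorted bucket boundaries: <60 F, 60-69 D, 70-79 C, 80-89 B, >=90 A.
breakpoints = [60, 70, 80, 90]

def grade(score):
    # bisect_right returns how many boundaries are <= score,
    # which indexes directly into the grade string.
    return 'FDCBA'[bisect.bisect_right(breakpoints, score)]

print([grade(s) for s in [33, 99, 77, 70, 89, 90, 100]])
# → ['F', 'A', 'C', 'C', 'B', 'A', 'A']
```

Note that a score of exactly 70 lands in the 'C' bucket because `bisect_right` places equal values after the boundary; `bisect_left` would put it in 'D'.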
352
Utils/PythonNew32/Lib/bz2.py
Normal file
@@ -0,0 +1,352 @@
"""Interface to the libbzip2 compression library.

This module provides a file interface, classes for incremental
(de)compression, and functions for one-shot (de)compression.
"""

__all__ = ["BZ2File", "BZ2Compressor", "BZ2Decompressor",
           "open", "compress", "decompress"]

__author__ = "Nadeem Vawda <nadeem.vawda@gmail.com>"

from builtins import open as _builtin_open
import io
import os
import _compression

from _bz2 import BZ2Compressor, BZ2Decompressor


# Value 0 no longer used
_MODE_READ = 1
# Value 2 no longer used
_MODE_WRITE = 3


class BZ2File(_compression.BaseStream):

    """A file object providing transparent bzip2 (de)compression.

    A BZ2File can act as a wrapper for an existing file object, or refer
    directly to a named file on disk.

    Note that BZ2File provides a *binary* file interface - data read is
    returned as bytes, and data to be written should be given as bytes.
    """

    def __init__(self, filename, mode="r", *, compresslevel=9):
        """Open a bzip2-compressed file.

        If filename is a str, bytes, or PathLike object, it gives the
        name of the file to be opened. Otherwise, it should be a file
        object, which will be used to read or write the compressed data.

        mode can be 'r' for reading (default), 'w' for (over)writing,
        'x' for creating exclusively, or 'a' for appending. These can
        equivalently be given as 'rb', 'wb', 'xb', and 'ab'.

        If mode is 'w', 'x' or 'a', compresslevel can be a number between 1
        and 9 specifying the level of compression: 1 produces the least
        compression, and 9 (default) produces the most compression.

        If mode is 'r', the input file may be the concatenation of
        multiple compressed streams.
        """
        self._fp = None
        self._closefp = False
        self._mode = None

        if not (1 <= compresslevel <= 9):
            raise ValueError("compresslevel must be between 1 and 9")

        if mode in ("", "r", "rb"):
            mode = "rb"
            mode_code = _MODE_READ
        elif mode in ("w", "wb"):
            mode = "wb"
            mode_code = _MODE_WRITE
            self._compressor = BZ2Compressor(compresslevel)
        elif mode in ("x", "xb"):
            mode = "xb"
            mode_code = _MODE_WRITE
            self._compressor = BZ2Compressor(compresslevel)
        elif mode in ("a", "ab"):
            mode = "ab"
            mode_code = _MODE_WRITE
            self._compressor = BZ2Compressor(compresslevel)
        else:
            raise ValueError("Invalid mode: %r" % (mode,))

        if isinstance(filename, (str, bytes, os.PathLike)):
            self._fp = _builtin_open(filename, mode)
            self._closefp = True
            self._mode = mode_code
        elif hasattr(filename, "read") or hasattr(filename, "write"):
            self._fp = filename
            self._mode = mode_code
        else:
            raise TypeError("filename must be a str, bytes, file or PathLike object")

        if self._mode == _MODE_READ:
            raw = _compression.DecompressReader(self._fp,
                BZ2Decompressor, trailing_error=OSError)
            self._buffer = io.BufferedReader(raw)
        else:
            self._pos = 0

    def close(self):
        """Flush and close the file.
May be called more than once without error. Once the file is
|
||||
closed, any other operation on it will raise a ValueError.
|
||||
"""
|
||||
if self.closed:
|
||||
return
|
||||
try:
|
||||
if self._mode == _MODE_READ:
|
||||
self._buffer.close()
|
||||
elif self._mode == _MODE_WRITE:
|
||||
self._fp.write(self._compressor.flush())
|
||||
self._compressor = None
|
||||
finally:
|
||||
try:
|
||||
if self._closefp:
|
||||
self._fp.close()
|
||||
finally:
|
||||
self._fp = None
|
||||
self._closefp = False
|
||||
self._buffer = None
|
||||
|
||||
@property
|
||||
def closed(self):
|
||||
"""True if this file is closed."""
|
||||
return self._fp is None
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
self._check_not_closed()
|
||||
return self._fp.name
|
||||
|
||||
@property
|
||||
def mode(self):
|
||||
return 'wb' if self._mode == _MODE_WRITE else 'rb'
|
||||
|
||||
def fileno(self):
|
||||
"""Return the file descriptor for the underlying file."""
|
||||
self._check_not_closed()
|
||||
return self._fp.fileno()
|
||||
|
||||
def seekable(self):
|
||||
"""Return whether the file supports seeking."""
|
||||
return self.readable() and self._buffer.seekable()
|
||||
|
||||
def readable(self):
|
||||
"""Return whether the file was opened for reading."""
|
||||
self._check_not_closed()
|
||||
return self._mode == _MODE_READ
|
||||
|
||||
def writable(self):
|
||||
"""Return whether the file was opened for writing."""
|
||||
self._check_not_closed()
|
||||
return self._mode == _MODE_WRITE
|
||||
|
||||
def peek(self, n=0):
|
||||
"""Return buffered data without advancing the file position.
|
||||
|
||||
Always returns at least one byte of data, unless at EOF.
|
||||
The exact number of bytes returned is unspecified.
|
||||
"""
|
||||
self._check_can_read()
|
||||
# Relies on the undocumented fact that BufferedReader.peek()
|
||||
# always returns at least one byte (except at EOF), independent
|
||||
# of the value of n
|
||||
return self._buffer.peek(n)
|
||||
|
||||
def read(self, size=-1):
|
||||
"""Read up to size uncompressed bytes from the file.
|
||||
|
||||
If size is negative or omitted, read until EOF is reached.
|
||||
Returns b'' if the file is already at EOF.
|
||||
"""
|
||||
self._check_can_read()
|
||||
return self._buffer.read(size)
|
||||
|
||||
def read1(self, size=-1):
|
||||
"""Read up to size uncompressed bytes, while trying to avoid
|
||||
making multiple reads from the underlying stream. Reads up to a
|
||||
buffer's worth of data if size is negative.
|
||||
|
||||
Returns b'' if the file is at EOF.
|
||||
"""
|
||||
self._check_can_read()
|
||||
if size < 0:
|
||||
size = io.DEFAULT_BUFFER_SIZE
|
||||
return self._buffer.read1(size)
|
||||
|
||||
def readinto(self, b):
|
||||
"""Read bytes into b.
|
||||
|
||||
Returns the number of bytes read (0 for EOF).
|
||||
"""
|
||||
self._check_can_read()
|
||||
return self._buffer.readinto(b)
|
||||
|
||||
def readline(self, size=-1):
|
||||
"""Read a line of uncompressed bytes from the file.
|
||||
|
||||
The terminating newline (if present) is retained. If size is
|
||||
non-negative, no more than size bytes will be read (in which
|
||||
case the line may be incomplete). Returns b'' if already at EOF.
|
||||
"""
|
||||
if not isinstance(size, int):
|
||||
if not hasattr(size, "__index__"):
|
||||
raise TypeError("Integer argument expected")
|
||||
size = size.__index__()
|
||||
self._check_can_read()
|
||||
return self._buffer.readline(size)
|
||||
|
||||
def readlines(self, size=-1):
|
||||
"""Read a list of lines of uncompressed bytes from the file.
|
||||
|
||||
size can be specified to control the number of lines read: no
|
||||
further lines will be read once the total size of the lines read
|
||||
so far equals or exceeds size.
|
||||
"""
|
||||
if not isinstance(size, int):
|
||||
if not hasattr(size, "__index__"):
|
||||
raise TypeError("Integer argument expected")
|
||||
size = size.__index__()
|
||||
self._check_can_read()
|
||||
return self._buffer.readlines(size)
|
||||
|
||||
def write(self, data):
|
||||
"""Write a byte string to the file.
|
||||
|
||||
Returns the number of uncompressed bytes written, which is
|
||||
always the length of data in bytes. Note that due to buffering,
|
||||
the file on disk may not reflect the data written until close()
|
||||
is called.
|
||||
"""
|
||||
self._check_can_write()
|
||||
if isinstance(data, (bytes, bytearray)):
|
||||
length = len(data)
|
||||
else:
|
||||
# accept any data that supports the buffer protocol
|
||||
data = memoryview(data)
|
||||
length = data.nbytes
|
||||
|
||||
compressed = self._compressor.compress(data)
|
||||
self._fp.write(compressed)
|
||||
self._pos += length
|
||||
return length
|
||||
|
||||
def writelines(self, seq):
|
||||
"""Write a sequence of byte strings to the file.
|
||||
|
||||
Returns the number of uncompressed bytes written.
|
||||
seq can be any iterable yielding byte strings.
|
||||
|
||||
Line separators are not added between the written byte strings.
|
||||
"""
|
||||
return _compression.BaseStream.writelines(self, seq)
|
||||
|
||||
def seek(self, offset, whence=io.SEEK_SET):
|
||||
"""Change the file position.
|
||||
|
||||
The new position is specified by offset, relative to the
|
||||
position indicated by whence. Values for whence are:
|
||||
|
||||
0: start of stream (default); offset must not be negative
|
||||
1: current stream position
|
||||
2: end of stream; offset must not be positive
|
||||
|
||||
Returns the new file position.
|
||||
|
||||
Note that seeking is emulated, so depending on the parameters,
|
||||
this operation may be extremely slow.
|
||||
"""
|
||||
self._check_can_seek()
|
||||
return self._buffer.seek(offset, whence)
|
||||
|
||||
def tell(self):
|
||||
"""Return the current file position."""
|
||||
self._check_not_closed()
|
||||
if self._mode == _MODE_READ:
|
||||
return self._buffer.tell()
|
||||
return self._pos
|
||||
|
||||
|
||||
def open(filename, mode="rb", compresslevel=9,
|
||||
encoding=None, errors=None, newline=None):
|
||||
"""Open a bzip2-compressed file in binary or text mode.
|
||||
|
||||
The filename argument can be an actual filename (a str, bytes, or
|
||||
PathLike object), or an existing file object to read from or write
|
||||
to.
|
||||
|
||||
The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or
|
||||
"ab" for binary mode, or "rt", "wt", "xt" or "at" for text mode.
|
||||
The default mode is "rb", and the default compresslevel is 9.
|
||||
|
||||
For binary mode, this function is equivalent to the BZ2File
|
||||
constructor: BZ2File(filename, mode, compresslevel). In this case,
|
||||
the encoding, errors and newline arguments must not be provided.
|
||||
|
||||
For text mode, a BZ2File object is created, and wrapped in an
|
||||
io.TextIOWrapper instance with the specified encoding, error
|
||||
handling behavior, and line ending(s).
|
||||
|
||||
"""
|
||||
if "t" in mode:
|
||||
if "b" in mode:
|
||||
raise ValueError("Invalid mode: %r" % (mode,))
|
||||
else:
|
||||
if encoding is not None:
|
||||
raise ValueError("Argument 'encoding' not supported in binary mode")
|
||||
if errors is not None:
|
||||
raise ValueError("Argument 'errors' not supported in binary mode")
|
||||
if newline is not None:
|
||||
raise ValueError("Argument 'newline' not supported in binary mode")
|
||||
|
||||
bz_mode = mode.replace("t", "")
|
||||
binary_file = BZ2File(filename, bz_mode, compresslevel=compresslevel)
|
||||
|
||||
if "t" in mode:
|
||||
encoding = io.text_encoding(encoding)
|
||||
return io.TextIOWrapper(binary_file, encoding, errors, newline)
|
||||
else:
|
||||
return binary_file
|
||||
|
||||
|
||||
def compress(data, compresslevel=9):
|
||||
"""Compress a block of data.
|
||||
|
||||
compresslevel, if given, must be a number between 1 and 9.
|
||||
|
||||
For incremental compression, use a BZ2Compressor object instead.
|
||||
"""
|
||||
comp = BZ2Compressor(compresslevel)
|
||||
return comp.compress(data) + comp.flush()
|
||||
|
||||
|
||||
def decompress(data):
|
||||
"""Decompress a block of data.
|
||||
|
||||
For incremental decompression, use a BZ2Decompressor object instead.
|
||||
"""
|
||||
results = []
|
||||
while data:
|
||||
decomp = BZ2Decompressor()
|
||||
try:
|
||||
res = decomp.decompress(data)
|
||||
except OSError:
|
||||
if results:
|
||||
break # Leftover data is not a valid bzip2 stream; ignore it.
|
||||
else:
|
||||
raise # Error on the first iteration; bail out.
|
||||
results.append(res)
|
||||
if not decomp.eof:
|
||||
raise ValueError("Compressed data ended before the "
|
||||
"end-of-stream marker was reached")
|
||||
data = decomp.unused_data
|
||||
return b"".join(results)
|
||||
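As a quick sanity check (not part of the vendored file), the one-shot `compress()`/`decompress()` functions defined above round-trip data, and `decompress()` handles concatenated streams as its docstring promises:

```python
import bz2

payload = b"hello " * 100

# One-shot round trip; highly repetitive input compresses well
blob = bz2.compress(payload, compresslevel=9)
assert bz2.decompress(blob) == payload
assert len(blob) < len(payload)

# decompress() accepts the concatenation of multiple streams
double = bz2.compress(b"a") + bz2.compress(b"b")
assert bz2.decompress(double) == b"ab"
```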
197
Utils/PythonNew32/Lib/cProfile.py
Normal file
@@ -0,0 +1,197 @@
#! /usr/bin/env python3

"""Python interface for the 'lsprof' profiler.
   Compatible with the 'profile' module.
"""

__all__ = ["run", "runctx", "Profile"]

import _lsprof
import importlib.machinery
import io
import profile as _pyprofile

# ____________________________________________________________
# Simple interface

def run(statement, filename=None, sort=-1):
    return _pyprofile._Utils(Profile).run(statement, filename, sort)

def runctx(statement, globals, locals, filename=None, sort=-1):
    return _pyprofile._Utils(Profile).runctx(statement, globals, locals,
                                             filename, sort)

run.__doc__ = _pyprofile.run.__doc__
runctx.__doc__ = _pyprofile.runctx.__doc__

# ____________________________________________________________

class Profile(_lsprof.Profiler):
    """Profile(timer=None, timeunit=None, subcalls=True, builtins=True)

    Builds a profiler object using the specified timer function.
    The default timer is a fast built-in one based on real time.
    For custom timer functions returning integers, timeunit can
    be a float specifying a scale (i.e. how long each integer unit
    is, in seconds).
    """

    # Most of the functionality is in the base class.
    # This subclass only adds convenient and backward-compatible methods.

    def print_stats(self, sort=-1):
        import pstats
        if not isinstance(sort, tuple):
            sort = (sort,)
        pstats.Stats(self).strip_dirs().sort_stats(*sort).print_stats()

    def dump_stats(self, file):
        import marshal
        with open(file, 'wb') as f:
            self.create_stats()
            marshal.dump(self.stats, f)

    def create_stats(self):
        self.disable()
        self.snapshot_stats()

    def snapshot_stats(self):
        entries = self.getstats()
        self.stats = {}
        callersdicts = {}
        # call information
        for entry in entries:
            func = label(entry.code)
            nc = entry.callcount          # ncalls column of pstats (before '/')
            cc = nc - entry.reccallcount  # ncalls column of pstats (after '/')
            tt = entry.inlinetime         # tottime column of pstats
            ct = entry.totaltime          # cumtime column of pstats
            callers = {}
            callersdicts[id(entry.code)] = callers
            self.stats[func] = cc, nc, tt, ct, callers
        # subcall information
        for entry in entries:
            if entry.calls:
                func = label(entry.code)
                for subentry in entry.calls:
                    try:
                        callers = callersdicts[id(subentry.code)]
                    except KeyError:
                        continue
                    nc = subentry.callcount
                    cc = nc - subentry.reccallcount
                    tt = subentry.inlinetime
                    ct = subentry.totaltime
                    if func in callers:
                        prev = callers[func]
                        nc += prev[0]
                        cc += prev[1]
                        tt += prev[2]
                        ct += prev[3]
                    callers[func] = nc, cc, tt, ct

    # The following two methods can be called by clients to use
    # a profiler to profile a statement, given as a string.

    def run(self, cmd):
        import __main__
        dict = __main__.__dict__
        return self.runctx(cmd, dict, dict)

    def runctx(self, cmd, globals, locals):
        self.enable()
        try:
            exec(cmd, globals, locals)
        finally:
            self.disable()
        return self

    # This method is more useful to profile a single function call.
    def runcall(self, func, /, *args, **kw):
        self.enable()
        try:
            return func(*args, **kw)
        finally:
            self.disable()

    def __enter__(self):
        self.enable()
        return self

    def __exit__(self, *exc_info):
        self.disable()

# ____________________________________________________________

def label(code):
    if isinstance(code, str):
        return ('~', 0, code)    # built-in functions ('~' sorts at the end)
    else:
        return (code.co_filename, code.co_firstlineno, code.co_name)

# ____________________________________________________________

def main():
    import os
    import sys
    import runpy
    import pstats
    from optparse import OptionParser
    usage = "cProfile.py [-o output_file_path] [-s sort] [-m module | scriptfile] [arg] ..."
    parser = OptionParser(usage=usage)
    parser.allow_interspersed_args = False
    parser.add_option('-o', '--outfile', dest="outfile",
        help="Save stats to <outfile>", default=None)
    parser.add_option('-s', '--sort', dest="sort",
        help="Sort order when printing to stdout, based on pstats.Stats class",
        default=2,
        choices=sorted(pstats.Stats.sort_arg_dict_default))
    parser.add_option('-m', dest="module", action="store_true",
        help="Profile a library module", default=False)

    if not sys.argv[1:]:
        parser.print_usage()
        sys.exit(2)

    (options, args) = parser.parse_args()
    sys.argv[:] = args

    # The script that we're profiling may chdir, so capture the absolute path
    # to the output file at startup.
    if options.outfile is not None:
        options.outfile = os.path.abspath(options.outfile)

    if len(args) > 0:
        if options.module:
            code = "run_module(modname, run_name='__main__')"
            globs = {
                'run_module': runpy.run_module,
                'modname': args[0]
            }
        else:
            progname = args[0]
            sys.path.insert(0, os.path.dirname(progname))
            with io.open_code(progname) as fp:
                code = compile(fp.read(), progname, 'exec')
            spec = importlib.machinery.ModuleSpec(name='__main__', loader=None,
                                                  origin=progname)
            globs = {
                '__spec__': spec,
                '__file__': spec.origin,
                '__name__': spec.name,
                '__package__': None,
                '__cached__': None,
            }
        try:
            runctx(code, globs, None, options.outfile, options.sort)
        except BrokenPipeError as exc:
            # Prevent "Exception ignored" during interpreter shutdown.
            sys.stdout = None
            sys.exit(exc.errno)
    else:
        parser.print_usage()
    return parser

# When invoked as main program, invoke the profiler on a script
if __name__ == '__main__':
    main()
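As a quick sanity check (not part of the vendored file), the context-manager protocol (`__enter__`/`__exit__`) defined on `Profile` above can be combined with `pstats` to profile a single call; the output stream is captured here only so the result can be inspected programmatically:

```python
import cProfile
import io
import pstats

def fib(n):
    # deliberately slow recursive function to generate profile entries
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# enable() on entry, disable() on exit, per the methods defined above
with cProfile.Profile() as pr:
    fib(15)

stream = io.StringIO()
pstats.Stats(pr, stream=stream).sort_stats("cumulative").print_stats(5)
assert "fib" in stream.getvalue()
```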
813
Utils/PythonNew32/Lib/calendar.py
Normal file
@@ -0,0 +1,813 @@
"""Calendar printing functions

Note when comparing these calendars to the ones printed by cal(1): By
default, these calendars have Monday as the first day of the week, and
Sunday as the last (the European convention). Use setfirstweekday() to
set the first day of the week (0=Monday, 6=Sunday)."""

import sys
import datetime
from enum import IntEnum, global_enum
import locale as _locale
from itertools import repeat

__all__ = ["IllegalMonthError", "IllegalWeekdayError", "setfirstweekday",
           "firstweekday", "isleap", "leapdays", "weekday", "monthrange",
           "monthcalendar", "prmonth", "month", "prcal", "calendar",
           "timegm", "month_name", "month_abbr", "day_name", "day_abbr",
           "Calendar", "TextCalendar", "HTMLCalendar", "LocaleTextCalendar",
           "LocaleHTMLCalendar", "weekheader",
           "Day", "Month", "JANUARY", "FEBRUARY", "MARCH",
           "APRIL", "MAY", "JUNE", "JULY",
           "AUGUST", "SEPTEMBER", "OCTOBER", "NOVEMBER", "DECEMBER",
           "MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY",
           "SATURDAY", "SUNDAY"]

# Exception raised for bad input (with string parameter for details)
error = ValueError

# Exceptions raised for bad input
# This is trick for backward compatibility. Since 3.13, we will raise IllegalMonthError instead of
# IndexError for bad month number(out of 1-12). But we can't remove IndexError for backward compatibility.
class IllegalMonthError(ValueError, IndexError):
    def __init__(self, month):
        self.month = month
    def __str__(self):
        return "bad month number %r; must be 1-12" % self.month


class IllegalWeekdayError(ValueError):
    def __init__(self, weekday):
        self.weekday = weekday
    def __str__(self):
        return "bad weekday number %r; must be 0 (Monday) to 6 (Sunday)" % self.weekday


def __getattr__(name):
    if name in ('January', 'February'):
        import warnings
        warnings.warn(f"The '{name}' attribute is deprecated, use '{name.upper()}' instead",
                      DeprecationWarning, stacklevel=2)
        if name == 'January':
            return 1
        else:
            return 2

    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")


# Constants for months
@global_enum
class Month(IntEnum):
    JANUARY = 1
    FEBRUARY = 2
    MARCH = 3
    APRIL = 4
    MAY = 5
    JUNE = 6
    JULY = 7
    AUGUST = 8
    SEPTEMBER = 9
    OCTOBER = 10
    NOVEMBER = 11
    DECEMBER = 12


# Constants for days
@global_enum
class Day(IntEnum):
    MONDAY = 0
    TUESDAY = 1
    WEDNESDAY = 2
    THURSDAY = 3
    FRIDAY = 4
    SATURDAY = 5
    SUNDAY = 6


# Number of days per month (except for February in leap years)
mdays = [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# This module used to have hard-coded lists of day and month names, as
# English strings.  The classes following emulate a read-only version of
# that, but supply localized names.  Note that the values are computed
# fresh on each call, in case the user changes locale between calls.

class _localized_month:

    _months = [datetime.date(2001, i+1, 1).strftime for i in range(12)]
    _months.insert(0, lambda x: "")

    def __init__(self, format):
        self.format = format

    def __getitem__(self, i):
        funcs = self._months[i]
        if isinstance(i, slice):
            return [f(self.format) for f in funcs]
        else:
            return funcs(self.format)

    def __len__(self):
        return 13


class _localized_day:

    # January 1, 2001, was a Monday.
    _days = [datetime.date(2001, 1, i+1).strftime for i in range(7)]

    def __init__(self, format):
        self.format = format

    def __getitem__(self, i):
        funcs = self._days[i]
        if isinstance(i, slice):
            return [f(self.format) for f in funcs]
        else:
            return funcs(self.format)

    def __len__(self):
        return 7


# Full and abbreviated names of weekdays
day_name = _localized_day('%A')
day_abbr = _localized_day('%a')

# Full and abbreviated names of months (1-based arrays!!!)
month_name = _localized_month('%B')
month_abbr = _localized_month('%b')


def isleap(year):
    """Return True for leap years, False for non-leap years."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def leapdays(y1, y2):
    """Return number of leap years in range [y1, y2).
       Assume y1 <= y2."""
    y1 -= 1
    y2 -= 1
    return (y2//4 - y1//4) - (y2//100 - y1//100) + (y2//400 - y1//400)


def weekday(year, month, day):
    """Return weekday (0-6 ~ Mon-Sun) for year, month (1-12), day (1-31)."""
    if not datetime.MINYEAR <= year <= datetime.MAXYEAR:
        year = 2000 + year % 400
    return Day(datetime.date(year, month, day).weekday())


def _validate_month(month):
    if not 1 <= month <= 12:
        raise IllegalMonthError(month)

def monthrange(year, month):
    """Return weekday of first day of month (0-6 ~ Mon-Sun)
       and number of days (28-31) for year, month."""
    _validate_month(month)
    day1 = weekday(year, month, 1)
    ndays = mdays[month] + (month == FEBRUARY and isleap(year))
    return day1, ndays


def _monthlen(year, month):
    return mdays[month] + (month == FEBRUARY and isleap(year))


def _prevmonth(year, month):
    if month == 1:
        return year-1, 12
    else:
        return year, month-1


def _nextmonth(year, month):
    if month == 12:
        return year+1, 1
    else:
        return year, month+1


class Calendar(object):
    """
    Base calendar class. This class doesn't do any formatting. It simply
    provides data to subclasses.
    """

    def __init__(self, firstweekday=0):
        self.firstweekday = firstweekday # 0 = Monday, 6 = Sunday

    def getfirstweekday(self):
        return self._firstweekday % 7

    def setfirstweekday(self, firstweekday):
        self._firstweekday = firstweekday

    firstweekday = property(getfirstweekday, setfirstweekday)

    def iterweekdays(self):
        """
        Return an iterator for one week of weekday numbers starting with the
        configured first one.
        """
        for i in range(self.firstweekday, self.firstweekday + 7):
            yield i%7

    def itermonthdates(self, year, month):
        """
        Return an iterator for one month. The iterator will yield datetime.date
        values and will always iterate through complete weeks, so it will yield
        dates outside the specified month.
        """
        for y, m, d in self.itermonthdays3(year, month):
            yield datetime.date(y, m, d)

    def itermonthdays(self, year, month):
        """
        Like itermonthdates(), but will yield day numbers. For days outside
        the specified month the day number is 0.
        """
        day1, ndays = monthrange(year, month)
        days_before = (day1 - self.firstweekday) % 7
        yield from repeat(0, days_before)
        yield from range(1, ndays + 1)
        days_after = (self.firstweekday - day1 - ndays) % 7
        yield from repeat(0, days_after)

    def itermonthdays2(self, year, month):
        """
        Like itermonthdates(), but will yield (day number, weekday number)
        tuples. For days outside the specified month the day number is 0.
        """
        for i, d in enumerate(self.itermonthdays(year, month), self.firstweekday):
            yield d, i % 7

    def itermonthdays3(self, year, month):
        """
        Like itermonthdates(), but will yield (year, month, day) tuples.  Can be
        used for dates outside of datetime.date range.
        """
        day1, ndays = monthrange(year, month)
        days_before = (day1 - self.firstweekday) % 7
        days_after = (self.firstweekday - day1 - ndays) % 7
        y, m = _prevmonth(year, month)
        end = _monthlen(y, m) + 1
        for d in range(end-days_before, end):
            yield y, m, d
        for d in range(1, ndays + 1):
            yield year, month, d
        y, m = _nextmonth(year, month)
        for d in range(1, days_after + 1):
            yield y, m, d

    def itermonthdays4(self, year, month):
        """
        Like itermonthdates(), but will yield (year, month, day, day_of_week) tuples.
        Can be used for dates outside of datetime.date range.
        """
        for i, (y, m, d) in enumerate(self.itermonthdays3(year, month)):
            yield y, m, d, (self.firstweekday + i) % 7

    def monthdatescalendar(self, year, month):
        """
        Return a matrix (list of lists) representing a month's calendar.
        Each row represents a week; week entries are datetime.date values.
        """
        dates = list(self.itermonthdates(year, month))
        return [ dates[i:i+7] for i in range(0, len(dates), 7) ]

    def monthdays2calendar(self, year, month):
        """
        Return a matrix representing a month's calendar.
        Each row represents a week; week entries are
        (day number, weekday number) tuples. Day numbers outside this month
        are zero.
        """
        days = list(self.itermonthdays2(year, month))
        return [ days[i:i+7] for i in range(0, len(days), 7) ]

    def monthdayscalendar(self, year, month):
        """
        Return a matrix representing a month's calendar.
        Each row represents a week; days outside this month are zero.
        """
        days = list(self.itermonthdays(year, month))
        return [ days[i:i+7] for i in range(0, len(days), 7) ]

    def yeardatescalendar(self, year, width=3):
        """
        Return the data for the specified year ready for formatting. The return
        value is a list of month rows. Each month row contains up to width months.
        Each month contains between 4 and 6 weeks and each week contains 1-7
        days. Days are datetime.date objects.
        """
        months = [self.monthdatescalendar(year, m) for m in Month]
        return [months[i:i+width] for i in range(0, len(months), width) ]

    def yeardays2calendar(self, year, width=3):
        """
        Return the data for the specified year ready for formatting (similar to
        yeardatescalendar()). Entries in the week lists are
        (day number, weekday number) tuples. Day numbers outside this month are
        zero.
        """
        months = [self.monthdays2calendar(year, m) for m in Month]
        return [months[i:i+width] for i in range(0, len(months), width) ]

    def yeardayscalendar(self, year, width=3):
        """
        Return the data for the specified year ready for formatting (similar to
        yeardatescalendar()). Entries in the week lists are day numbers.
        Day numbers outside this month are zero.
        """
        months = [self.monthdayscalendar(year, m) for m in Month]
        return [months[i:i+width] for i in range(0, len(months), width) ]


class TextCalendar(Calendar):
    """
    Subclass of Calendar that outputs a calendar as a simple plain text
    similar to the UNIX program cal.
    """

    def prweek(self, theweek, width):
        """
        Print a single week (no newline).
        """
        print(self.formatweek(theweek, width), end='')

    def formatday(self, day, weekday, width):
        """
        Returns a formatted day.
        """
        if day == 0:
            s = ''
        else:
            s = '%2i' % day             # right-align single-digit days
        return s.center(width)

    def formatweek(self, theweek, width):
        """
        Returns a single week in a string (no newline).
        """
        return ' '.join(self.formatday(d, wd, width) for (d, wd) in theweek)

    def formatweekday(self, day, width):
        """
        Returns a formatted week day name.
        """
        if width >= 9:
            names = day_name
        else:
            names = day_abbr
        return names[day][:width].center(width)

    def formatweekheader(self, width):
        """
        Return a header for a week.
        """
        return ' '.join(self.formatweekday(i, width) for i in self.iterweekdays())

    def formatmonthname(self, theyear, themonth, width, withyear=True):
        """
        Return a formatted month name.
        """
        _validate_month(themonth)

        s = month_name[themonth]
        if withyear:
            s = "%s %r" % (s, theyear)
        return s.center(width)

    def prmonth(self, theyear, themonth, w=0, l=0):
        """
        Print a month's calendar.
        """
        print(self.formatmonth(theyear, themonth, w, l), end='')

    def formatmonth(self, theyear, themonth, w=0, l=0):
        """
        Return a month's calendar string (multi-line).
        """
        w = max(2, w)
        l = max(1, l)
        s = self.formatmonthname(theyear, themonth, 7 * (w + 1) - 1)
        s = s.rstrip()
        s += '\n' * l
        s += self.formatweekheader(w).rstrip()
        s += '\n' * l
        for week in self.monthdays2calendar(theyear, themonth):
            s += self.formatweek(week, w).rstrip()
            s += '\n' * l
        return s

    def formatyear(self, theyear, w=2, l=1, c=6, m=3):
        """
        Returns a year's calendar as a multi-line string.
        """
        w = max(2, w)
        l = max(1, l)
        c = max(2, c)
        colwidth = (w + 1) * 7 - 1
        v = []
        a = v.append
        a(repr(theyear).center(colwidth*m+c*(m-1)).rstrip())
        a('\n'*l)
        header = self.formatweekheader(w)
        for (i, row) in enumerate(self.yeardays2calendar(theyear, m)):
            # months in this row
            months = range(m*i+1, min(m*(i+1)+1, 13))
            a('\n'*l)
            names = (self.formatmonthname(theyear, k, colwidth, False)
                     for k in months)
            a(formatstring(names, colwidth, c).rstrip())
            a('\n'*l)
            headers = (header for k in months)
            a(formatstring(headers, colwidth, c).rstrip())
            a('\n'*l)
            # max number of weeks for this row
            height = max(len(cal) for cal in row)
            for j in range(height):
                weeks = []
                for cal in row:
                    if j >= len(cal):
                        weeks.append('')
                    else:
                        weeks.append(self.formatweek(cal[j], w))
                a(formatstring(weeks, colwidth, c).rstrip())
                a('\n' * l)
        return ''.join(v)

    def pryear(self, theyear, w=0, l=0, c=6, m=3):
        """Print a year's calendar."""
        print(self.formatyear(theyear, w, l, c, m), end='')


class HTMLCalendar(Calendar):
    """
    This calendar returns complete HTML pages.
    """

    # CSS classes for the day <td>s
    cssclasses = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

    # CSS classes for the day <th>s
    cssclasses_weekday_head = cssclasses

    # CSS class for the days before and after current month
    cssclass_noday = "noday"

    # CSS class for the month's head
    cssclass_month_head = "month"

    # CSS class for the month
    cssclass_month = "month"

    # CSS class for the year's table head
    cssclass_year_head = "year"

    # CSS class for the whole year table
    cssclass_year = "year"

    def formatday(self, day, weekday):
        """
        Return a day as a table cell.
        """
        if day == 0:
            # day outside month
            return '<td class="%s">&nbsp;</td>' % self.cssclass_noday
        else:
            return '<td class="%s">%d</td>' % (self.cssclasses[weekday], day)

    def formatweek(self, theweek):
        """
        Return a complete week as a table row.
        """
        s = ''.join(self.formatday(d, wd) for (d, wd) in theweek)
        return '<tr>%s</tr>' % s

    def formatweekday(self, day):
        """
        Return a weekday name as a table header.
        """
        return '<th class="%s">%s</th>' % (
            self.cssclasses_weekday_head[day], day_abbr[day])
|
||||
|
||||
def formatweekheader(self):
|
||||
"""
|
||||
Return a header for a week as a table row.
|
||||
"""
|
||||
s = ''.join(self.formatweekday(i) for i in self.iterweekdays())
|
||||
return '<tr>%s</tr>' % s
|
||||
|
||||
def formatmonthname(self, theyear, themonth, withyear=True):
|
||||
"""
|
||||
Return a month name as a table row.
|
||||
"""
|
||||
_validate_month(themonth)
|
||||
if withyear:
|
||||
s = '%s %s' % (month_name[themonth], theyear)
|
||||
else:
|
||||
s = '%s' % month_name[themonth]
|
||||
return '<tr><th colspan="7" class="%s">%s</th></tr>' % (
|
||||
self.cssclass_month_head, s)
|
||||
|
||||
def formatmonth(self, theyear, themonth, withyear=True):
|
||||
"""
|
||||
Return a formatted month as a table.
|
||||
"""
|
||||
v = []
|
||||
a = v.append
|
||||
a('<table border="0" cellpadding="0" cellspacing="0" class="%s">' % (
|
||||
self.cssclass_month))
|
||||
a('\n')
|
||||
a(self.formatmonthname(theyear, themonth, withyear=withyear))
|
||||
a('\n')
|
||||
a(self.formatweekheader())
|
||||
a('\n')
|
||||
for week in self.monthdays2calendar(theyear, themonth):
|
||||
a(self.formatweek(week))
|
||||
a('\n')
|
||||
a('</table>')
|
||||
a('\n')
|
||||
return ''.join(v)
|
||||
|
||||
def formatyear(self, theyear, width=3):
|
||||
"""
|
||||
Return a formatted year as a table of tables.
|
||||
"""
|
||||
v = []
|
||||
a = v.append
|
||||
width = max(width, 1)
|
||||
a('<table border="0" cellpadding="0" cellspacing="0" class="%s">' %
|
||||
self.cssclass_year)
|
||||
a('\n')
|
||||
a('<tr><th colspan="%d" class="%s">%s</th></tr>' % (
|
||||
width, self.cssclass_year_head, theyear))
|
||||
for i in range(JANUARY, JANUARY+12, width):
|
||||
# months in this row
|
||||
months = range(i, min(i+width, 13))
|
||||
a('<tr>')
|
||||
for m in months:
|
||||
a('<td>')
|
||||
a(self.formatmonth(theyear, m, withyear=False))
|
||||
a('</td>')
|
||||
a('</tr>')
|
||||
a('</table>')
|
||||
return ''.join(v)
|
||||
|
||||
def formatyearpage(self, theyear, width=3, css='calendar.css', encoding=None):
|
||||
"""
|
||||
Return a formatted year as a complete HTML page.
|
||||
"""
|
||||
if encoding is None:
|
||||
encoding = sys.getdefaultencoding()
|
||||
v = []
|
||||
a = v.append
|
||||
a('<?xml version="1.0" encoding="%s"?>\n' % encoding)
|
||||
a('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">\n')
|
||||
a('<html>\n')
|
||||
a('<head>\n')
|
||||
a('<meta http-equiv="Content-Type" content="text/html; charset=%s" />\n' % encoding)
|
||||
if css is not None:
|
||||
a('<link rel="stylesheet" type="text/css" href="%s" />\n' % css)
|
||||
a('<title>Calendar for %d</title>\n' % theyear)
|
||||
a('</head>\n')
|
||||
a('<body>\n')
|
||||
a(self.formatyear(theyear, width))
|
||||
a('</body>\n')
|
||||
a('</html>\n')
|
||||
return ''.join(v).encode(encoding, "xmlcharrefreplace")
|
||||
|
||||
|
||||
class different_locale:
|
||||
def __init__(self, locale):
|
||||
self.locale = locale
|
||||
self.oldlocale = None
|
||||
|
||||
def __enter__(self):
|
||||
self.oldlocale = _locale.setlocale(_locale.LC_TIME, None)
|
||||
_locale.setlocale(_locale.LC_TIME, self.locale)
|
||||
|
||||
def __exit__(self, *args):
|
||||
_locale.setlocale(_locale.LC_TIME, self.oldlocale)
|
||||
|
||||
|
||||
def _get_default_locale():
|
||||
locale = _locale.setlocale(_locale.LC_TIME, None)
|
||||
if locale == "C":
|
||||
with different_locale(""):
|
||||
# The LC_TIME locale does not seem to be configured:
|
||||
# get the user preferred locale.
|
||||
locale = _locale.setlocale(_locale.LC_TIME, None)
|
||||
return locale
|
||||
|
||||
|
||||
class LocaleTextCalendar(TextCalendar):
|
||||
"""
|
||||
This class can be passed a locale name in the constructor and will return
|
||||
month and weekday names in the specified locale.
|
||||
"""
|
||||
|
||||
def __init__(self, firstweekday=0, locale=None):
|
||||
TextCalendar.__init__(self, firstweekday)
|
||||
if locale is None:
|
||||
locale = _get_default_locale()
|
||||
self.locale = locale
|
||||
|
||||
def formatweekday(self, day, width):
|
||||
with different_locale(self.locale):
|
||||
return super().formatweekday(day, width)
|
||||
|
||||
def formatmonthname(self, theyear, themonth, width, withyear=True):
|
||||
with different_locale(self.locale):
|
||||
return super().formatmonthname(theyear, themonth, width, withyear)
|
||||
|
||||
|
||||
class LocaleHTMLCalendar(HTMLCalendar):
|
||||
"""
|
||||
This class can be passed a locale name in the constructor and will return
|
||||
month and weekday names in the specified locale.
|
||||
"""
|
||||
def __init__(self, firstweekday=0, locale=None):
|
||||
HTMLCalendar.__init__(self, firstweekday)
|
||||
if locale is None:
|
||||
locale = _get_default_locale()
|
||||
self.locale = locale
|
||||
|
||||
def formatweekday(self, day):
|
||||
with different_locale(self.locale):
|
||||
return super().formatweekday(day)
|
||||
|
||||
def formatmonthname(self, theyear, themonth, withyear=True):
|
||||
with different_locale(self.locale):
|
||||
return super().formatmonthname(theyear, themonth, withyear)
|
||||
|
||||
# Support for old module level interface
|
||||
c = TextCalendar()
|
||||
|
||||
firstweekday = c.getfirstweekday
|
||||
|
||||
def setfirstweekday(firstweekday):
|
||||
if not MONDAY <= firstweekday <= SUNDAY:
|
||||
raise IllegalWeekdayError(firstweekday)
|
||||
c.firstweekday = firstweekday
|
||||
|
||||
monthcalendar = c.monthdayscalendar
|
||||
prweek = c.prweek
|
||||
week = c.formatweek
|
||||
weekheader = c.formatweekheader
|
||||
prmonth = c.prmonth
|
||||
month = c.formatmonth
|
||||
calendar = c.formatyear
|
||||
prcal = c.pryear
|
||||
|
||||
|
||||
# Spacing of month columns for multi-column year calendar
|
||||
_colwidth = 7*3 - 1 # Amount printed by prweek()
|
||||
_spacing = 6 # Number of spaces between columns
|
||||
|
||||
|
||||
def format(cols, colwidth=_colwidth, spacing=_spacing):
|
||||
"""Prints multi-column formatting for year calendars"""
|
||||
print(formatstring(cols, colwidth, spacing))
|
||||
|
||||
|
||||
def formatstring(cols, colwidth=_colwidth, spacing=_spacing):
|
||||
"""Returns a string formatted from n strings, centered within n columns."""
|
||||
spacing *= ' '
|
||||
return spacing.join(c.center(colwidth) for c in cols)
|
||||
|
||||
|
||||
EPOCH = 1970
|
||||
_EPOCH_ORD = datetime.date(EPOCH, 1, 1).toordinal()
|
||||
|
||||
|
||||
def timegm(tuple):
|
||||
"""Unrelated but handy function to calculate Unix timestamp from GMT."""
|
||||
year, month, day, hour, minute, second = tuple[:6]
|
||||
days = datetime.date(year, month, 1).toordinal() - _EPOCH_ORD + day - 1
|
||||
hours = days*24 + hour
|
||||
minutes = hours*60 + minute
|
||||
seconds = minutes*60 + second
|
||||
return seconds
|
||||
|
||||
|
||||
def main(args=None):
|
||||
import argparse
|
||||
parser = argparse.ArgumentParser()
|
||||
textgroup = parser.add_argument_group('text only arguments')
|
||||
htmlgroup = parser.add_argument_group('html only arguments')
|
||||
textgroup.add_argument(
|
||||
"-w", "--width",
|
||||
type=int, default=2,
|
||||
help="width of date column (default 2)"
|
||||
)
|
||||
textgroup.add_argument(
|
||||
"-l", "--lines",
|
||||
type=int, default=1,
|
||||
help="number of lines for each week (default 1)"
|
||||
)
|
||||
textgroup.add_argument(
|
||||
"-s", "--spacing",
|
||||
type=int, default=6,
|
||||
help="spacing between months (default 6)"
|
||||
)
|
||||
textgroup.add_argument(
|
||||
"-m", "--months",
|
||||
type=int, default=3,
|
||||
help="months per row (default 3)"
|
||||
)
|
||||
htmlgroup.add_argument(
|
||||
"-c", "--css",
|
||||
default="calendar.css",
|
||||
help="CSS to use for page"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-L", "--locale",
|
||||
default=None,
|
||||
help="locale to use for month and weekday names"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-e", "--encoding",
|
||||
default=None,
|
||||
help="encoding to use for output"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-t", "--type",
|
||||
default="text",
|
||||
choices=("text", "html"),
|
||||
help="output type (text or html)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-f", "--first-weekday",
|
||||
type=int, default=0,
|
||||
help="weekday (0 is Monday, 6 is Sunday) to start each week (default 0)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"year",
|
||||
nargs='?', type=int,
|
||||
help="year number"
|
||||
)
|
||||
parser.add_argument(
|
||||
"month",
|
||||
nargs='?', type=int,
|
||||
help="month number (1-12, text only)"
|
||||
)
|
||||
|
||||
options = parser.parse_args(args)
|
||||
|
||||
if options.locale and not options.encoding:
|
||||
parser.error("if --locale is specified --encoding is required")
|
||||
sys.exit(1)
|
||||
|
||||
locale = options.locale, options.encoding
|
||||
|
||||
if options.type == "html":
|
||||
if options.month:
|
||||
parser.error("incorrect number of arguments")
|
||||
sys.exit(1)
|
||||
if options.locale:
|
||||
cal = LocaleHTMLCalendar(locale=locale)
|
||||
else:
|
||||
cal = HTMLCalendar()
|
||||
cal.setfirstweekday(options.first_weekday)
|
||||
encoding = options.encoding
|
||||
if encoding is None:
|
||||
encoding = sys.getdefaultencoding()
|
||||
optdict = dict(encoding=encoding, css=options.css)
|
||||
write = sys.stdout.buffer.write
|
||||
if options.year is None:
|
||||
write(cal.formatyearpage(datetime.date.today().year, **optdict))
|
||||
else:
|
||||
write(cal.formatyearpage(options.year, **optdict))
|
||||
else:
|
||||
if options.locale:
|
||||
cal = LocaleTextCalendar(locale=locale)
|
||||
else:
|
||||
cal = TextCalendar()
|
||||
cal.setfirstweekday(options.first_weekday)
|
||||
optdict = dict(w=options.width, l=options.lines)
|
||||
if options.month is None:
|
||||
optdict["c"] = options.spacing
|
||||
optdict["m"] = options.months
|
||||
if options.month is not None:
|
||||
_validate_month(options.month)
|
||||
if options.year is None:
|
||||
result = cal.formatyear(datetime.date.today().year, **optdict)
|
||||
elif options.month is None:
|
||||
result = cal.formatyear(options.year, **optdict)
|
||||
else:
|
||||
result = cal.formatmonth(options.year, options.month, **optdict)
|
||||
write = sys.stdout.write
|
||||
if options.encoding:
|
||||
result = result.encode(options.encoding)
|
||||
write = sys.stdout.buffer.write
|
||||
write(result)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
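The vendored module above exposes both the class-based interface (`TextCalendar`) and helpers such as `timegm`. A minimal sketch of how they behave, assuming the module is importable as the standard `calendar`:

```python
import calendar

# TextCalendar.formatmonth returns a plain-text month grid whose first
# line is the centered month name produced by formatmonthname.
cal = calendar.TextCalendar(firstweekday=0)
text = cal.formatmonth(2024, 1)

# timegm converts a UTC time tuple to a Unix timestamp: one full day
# after the epoch is exactly 86400 seconds.
ts = calendar.timegm((1970, 1, 2, 0, 0, 0))
```

Because `timegm` only reads the first six fields of its argument, any `time.struct_time` (e.g. from `time.gmtime()`) works as input too.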
409	Utils/PythonNew32/Lib/cmd.py	Normal file
@@ -0,0 +1,409 @@
"""A generic class to build line-oriented command interpreters.

Interpreters constructed with this class obey the following conventions:

1. End of file on input is processed as the command 'EOF'.
2. A command is parsed out of each line by collecting the prefix composed
   of characters in the identchars member.
3. A command `foo' is dispatched to a method 'do_foo()'; the do_ method
   is passed a single argument consisting of the remainder of the line.
4. Typing an empty line repeats the last command.  (Actually, it calls the
   method `emptyline', which may be overridden in a subclass.)
5. There is a predefined `help' method.  Given an argument `topic', it
   calls the command `help_topic'.  With no arguments, it lists all topics
   with defined help_ functions, broken into up to three topics; documented
   commands, miscellaneous help topics, and undocumented commands.
6. The command '?' is a synonym for `help'.  The command '!' is a synonym
   for `shell', if a do_shell method exists.
7. If completion is enabled, completing commands will be done automatically,
   and completing of commands args is done by calling complete_foo() with
   arguments text, line, begidx, endidx.  text is string we are matching
   against, all returned matches must begin with it.  line is the current
   input line (lstripped), begidx and endidx are the beginning and end
   indexes of the text being matched, which could be used to provide
   different completion depending upon which position the argument is in.

The `default' method may be overridden to intercept commands for which there
is no do_ method.

The `completedefault' method may be overridden to intercept completions for
commands that have no complete_ method.

The data member `self.ruler' sets the character used to draw separator lines
in the help messages.  If empty, no ruler line is drawn.  It defaults to "=".

If the value of `self.intro' is nonempty when the cmdloop method is called,
it is printed out on interpreter startup.  This value may be overridden
via an optional argument to the cmdloop() method.

The data members `self.doc_header', `self.misc_header', and
`self.undoc_header' set the headers used for the help function's
listings of documented functions, miscellaneous topics, and undocumented
functions respectively.
"""

import inspect, string, sys

__all__ = ["Cmd"]

PROMPT = '(Cmd) '
IDENTCHARS = string.ascii_letters + string.digits + '_'

class Cmd:
    """A simple framework for writing line-oriented command interpreters.

    These are often useful for test harnesses, administrative tools, and
    prototypes that will later be wrapped in a more sophisticated interface.

    A Cmd instance or subclass instance is a line-oriented interpreter
    framework.  There is no good reason to instantiate Cmd itself; rather,
    it's useful as a superclass of an interpreter class you define yourself
    in order to inherit Cmd's methods and encapsulate action methods.

    """
    prompt = PROMPT
    identchars = IDENTCHARS
    ruler = '='
    lastcmd = ''
    intro = None
    doc_leader = ""
    doc_header = "Documented commands (type help <topic>):"
    misc_header = "Miscellaneous help topics:"
    undoc_header = "Undocumented commands:"
    nohelp = "*** No help on %s"
    use_rawinput = 1

    def __init__(self, completekey='tab', stdin=None, stdout=None):
        """Instantiate a line-oriented interpreter framework.

        The optional argument 'completekey' is the readline name of a
        completion key; it defaults to the Tab key. If completekey is
        not None and the readline module is available, command completion
        is done automatically. The optional arguments stdin and stdout
        specify alternate input and output file objects; if not specified,
        sys.stdin and sys.stdout are used.

        """
        if stdin is not None:
            self.stdin = stdin
        else:
            self.stdin = sys.stdin
        if stdout is not None:
            self.stdout = stdout
        else:
            self.stdout = sys.stdout
        self.cmdqueue = []
        self.completekey = completekey

    def cmdloop(self, intro=None):
        """Repeatedly issue a prompt, accept input, parse an initial prefix
        off the received input, and dispatch to action methods, passing them
        the remainder of the line as argument.

        """

        self.preloop()
        if self.use_rawinput and self.completekey:
            try:
                import readline
                self.old_completer = readline.get_completer()
                readline.set_completer(self.complete)
                if readline.backend == "editline":
                    if self.completekey == 'tab':
                        # libedit uses "^I" instead of "tab"
                        command_string = "bind ^I rl_complete"
                    else:
                        command_string = f"bind {self.completekey} rl_complete"
                else:
                    command_string = f"{self.completekey}: complete"
                readline.parse_and_bind(command_string)
            except ImportError:
                pass
        try:
            if intro is not None:
                self.intro = intro
            if self.intro:
                self.stdout.write(str(self.intro)+"\n")
            stop = None
            while not stop:
                if self.cmdqueue:
                    line = self.cmdqueue.pop(0)
                else:
                    if self.use_rawinput:
                        try:
                            line = input(self.prompt)
                        except EOFError:
                            line = 'EOF'
                    else:
                        self.stdout.write(self.prompt)
                        self.stdout.flush()
                        line = self.stdin.readline()
                        if not len(line):
                            line = 'EOF'
                        else:
                            line = line.rstrip('\r\n')
                line = self.precmd(line)
                stop = self.onecmd(line)
                stop = self.postcmd(stop, line)
            self.postloop()
        finally:
            if self.use_rawinput and self.completekey:
                try:
                    import readline
                    readline.set_completer(self.old_completer)
                except ImportError:
                    pass

    def precmd(self, line):
        """Hook method executed just before the command line is
        interpreted, but after the input prompt is generated and issued.

        """
        return line

    def postcmd(self, stop, line):
        """Hook method executed just after a command dispatch is finished."""
        return stop

    def preloop(self):
        """Hook method executed once when the cmdloop() method is called."""
        pass

    def postloop(self):
        """Hook method executed once when the cmdloop() method is about to
        return.

        """
        pass

    def parseline(self, line):
        """Parse the line into a command name and a string containing
        the arguments.  Returns a tuple containing (command, args, line).
        'command' and 'args' may be None if the line couldn't be parsed.
        """
        line = line.strip()
        if not line:
            return None, None, line
        elif line[0] == '?':
            line = 'help ' + line[1:]
        elif line[0] == '!':
            if hasattr(self, 'do_shell'):
                line = 'shell ' + line[1:]
            else:
                return None, None, line
        i, n = 0, len(line)
        while i < n and line[i] in self.identchars: i = i+1
        cmd, arg = line[:i], line[i:].strip()
        return cmd, arg, line

    def onecmd(self, line):
        """Interpret the argument as though it had been typed in response
        to the prompt.

        This may be overridden, but should not normally need to be;
        see the precmd() and postcmd() methods for useful execution hooks.
        The return value is a flag indicating whether interpretation of
        commands by the interpreter should stop.

        """
        cmd, arg, line = self.parseline(line)
        if not line:
            return self.emptyline()
        if cmd is None:
            return self.default(line)
        self.lastcmd = line
        if line == 'EOF' :
            self.lastcmd = ''
        if cmd == '':
            return self.default(line)
        else:
            func = getattr(self, 'do_' + cmd, None)
            if func is None:
                return self.default(line)
            return func(arg)

    def emptyline(self):
        """Called when an empty line is entered in response to the prompt.

        If this method is not overridden, it repeats the last nonempty
        command entered.

        """
        if self.lastcmd:
            return self.onecmd(self.lastcmd)

    def default(self, line):
        """Called on an input line when the command prefix is not recognized.

        If this method is not overridden, it prints an error message and
        returns.

        """
        self.stdout.write('*** Unknown syntax: %s\n'%line)

    def completedefault(self, *ignored):
        """Method called to complete an input line when no command-specific
        complete_*() method is available.

        By default, it returns an empty list.

        """
        return []

    def completenames(self, text, *ignored):
        dotext = 'do_'+text
        return [a[3:] for a in self.get_names() if a.startswith(dotext)]

    def complete(self, text, state):
        """Return the next possible completion for 'text'.

        If a command has not been entered, then complete against command list.
        Otherwise try to call complete_<command> to get list of completions.
        """
        if state == 0:
            import readline
            origline = readline.get_line_buffer()
            line = origline.lstrip()
            stripped = len(origline) - len(line)
            begidx = readline.get_begidx() - stripped
            endidx = readline.get_endidx() - stripped
            if begidx>0:
                cmd, args, foo = self.parseline(line)
                if cmd == '':
                    compfunc = self.completedefault
                else:
                    try:
                        compfunc = getattr(self, 'complete_' + cmd)
                    except AttributeError:
                        compfunc = self.completedefault
            else:
                compfunc = self.completenames
            self.completion_matches = compfunc(text, line, begidx, endidx)
        try:
            return self.completion_matches[state]
        except IndexError:
            return None

    def get_names(self):
        # This method used to pull in base class attributes
        # at a time dir() didn't do it yet.
        return dir(self.__class__)

    def complete_help(self, *args):
        commands = set(self.completenames(*args))
        topics = set(a[5:] for a in self.get_names()
                     if a.startswith('help_' + args[0]))
        return list(commands | topics)

    def do_help(self, arg):
        'List available commands with "help" or detailed help with "help cmd".'
        if arg:
            # XXX check arg syntax
            try:
                func = getattr(self, 'help_' + arg)
            except AttributeError:
                try:
                    doc=getattr(self, 'do_' + arg).__doc__
                    doc = inspect.cleandoc(doc)
                    if doc:
                        self.stdout.write("%s\n"%str(doc))
                        return
                except AttributeError:
                    pass
                self.stdout.write("%s\n"%str(self.nohelp % (arg,)))
                return
            func()
        else:
            names = self.get_names()
            cmds_doc = []
            cmds_undoc = []
            topics = set()
            for name in names:
                if name[:5] == 'help_':
                    topics.add(name[5:])
            names.sort()
            # There can be duplicates if routines overridden
            prevname = ''
            for name in names:
                if name[:3] == 'do_':
                    if name == prevname:
                        continue
                    prevname = name
                    cmd=name[3:]
                    if cmd in topics:
                        cmds_doc.append(cmd)
                        topics.remove(cmd)
                    elif getattr(self, name).__doc__:
                        cmds_doc.append(cmd)
                    else:
                        cmds_undoc.append(cmd)
            self.stdout.write("%s\n"%str(self.doc_leader))
            self.print_topics(self.doc_header, cmds_doc, 15,80)
            self.print_topics(self.misc_header, sorted(topics),15,80)
            self.print_topics(self.undoc_header, cmds_undoc, 15,80)

    def print_topics(self, header, cmds, cmdlen, maxcol):
        if cmds:
            self.stdout.write("%s\n"%str(header))
            if self.ruler:
                self.stdout.write("%s\n"%str(self.ruler * len(header)))
            self.columnize(cmds, maxcol-1)
            self.stdout.write("\n")

    def columnize(self, list, displaywidth=80):
        """Display a list of strings as a compact set of columns.

        Each column is only as wide as necessary.
        Columns are separated by two spaces (one was not legible enough).
        """
        if not list:
            self.stdout.write("<empty>\n")
            return

        nonstrings = [i for i in range(len(list))
                        if not isinstance(list[i], str)]
        if nonstrings:
            raise TypeError("list[i] not a string for i in %s"
                            % ", ".join(map(str, nonstrings)))
        size = len(list)
        if size == 1:
            self.stdout.write('%s\n'%str(list[0]))
            return
        # Try every row count from 1 upwards
        for nrows in range(1, len(list)):
            ncols = (size+nrows-1) // nrows
            colwidths = []
            totwidth = -2
            for col in range(ncols):
                colwidth = 0
                for row in range(nrows):
                    i = row + nrows*col
                    if i >= size:
                        break
                    x = list[i]
                    colwidth = max(colwidth, len(x))
                colwidths.append(colwidth)
                totwidth += colwidth + 2
                if totwidth > displaywidth:
                    break
            if totwidth <= displaywidth:
                break
        else:
            nrows = len(list)
            ncols = 1
            colwidths = [0]
        for row in range(nrows):
            texts = []
            for col in range(ncols):
                i = row + nrows*col
                if i >= size:
                    x = ""
                else:
                    x = list[i]
                texts.append(x)
            while texts and not texts[-1]:
                del texts[-1]
            for col in range(len(texts)):
                texts[col] = texts[col].ljust(colwidths[col])
            self.stdout.write("%s\n"%str("  ".join(texts)))
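The dispatch conventions documented in the module docstring above (a line `greet Alice` calls `do_greet("Alice")`, and a truthy return value from a `do_*` method stops the loop) can be exercised without an interactive terminal by calling `onecmd` directly. The `Greeter` class and its commands below are hypothetical, invented for this sketch:

```python
import cmd
import io

class Greeter(cmd.Cmd):
    """Hypothetical subclass demonstrating do_* dispatch."""

    def do_greet(self, arg):
        # arg is the remainder of the line after the command word
        self.stdout.write("hello %s\n" % (arg or "world"))

    def do_exit(self, arg):
        # A truthy return value tells cmdloop()/onecmd() callers to stop
        return True

out = io.StringIO()
g = Greeter(stdout=out)          # redirect output so nothing hits the real stdout
g.onecmd("greet Alice")          # dispatched to do_greet("Alice")
stop = g.onecmd("exit")          # returns True, the stop flag
```

Using `onecmd` this way is also how the `cmdqueue` mechanism in `cmdloop` feeds pre-queued commands through the same dispatch path.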
386	Utils/PythonNew32/Lib/code.py	Normal file
@@ -0,0 +1,386 @@
|
||||
"""Utilities needed to emulate Python's interactive interpreter.
|
||||
|
||||
"""
|
||||
|
||||
# Inspired by similar code by Jeff Epler and Fredrik Lundh.
|
||||
|
||||
|
||||
import builtins
|
||||
import sys
|
||||
import traceback
|
||||
from codeop import CommandCompiler, compile_command
|
||||
|
||||
__all__ = ["InteractiveInterpreter", "InteractiveConsole", "interact",
|
||||
"compile_command"]
|
||||
|
||||
|
||||
class InteractiveInterpreter:
|
||||
"""Base class for InteractiveConsole.
|
||||
|
||||
This class deals with parsing and interpreter state (the user's
|
||||
namespace); it doesn't deal with input buffering or prompting or
|
||||
input file naming (the filename is always passed in explicitly).
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, locals=None):
|
||||
"""Constructor.
|
||||
|
||||
The optional 'locals' argument specifies a mapping to use as the
|
||||
namespace in which code will be executed; it defaults to a newly
|
||||
created dictionary with key "__name__" set to "__console__" and
|
||||
key "__doc__" set to None.
|
||||
|
||||
"""
|
||||
if locals is None:
|
||||
locals = {"__name__": "__console__", "__doc__": None}
|
||||
self.locals = locals
|
||||
self.compile = CommandCompiler()
|
||||
|
||||
def runsource(self, source, filename="<input>", symbol="single"):
|
||||
"""Compile and run some source in the interpreter.
|
||||
|
||||
Arguments are as for compile_command().
|
||||
|
||||
One of several things can happen:
|
||||
|
||||
1) The input is incorrect; compile_command() raised an
|
||||
exception (SyntaxError or OverflowError). A syntax traceback
|
||||
will be printed by calling the showsyntaxerror() method.
|
||||
|
||||
2) The input is incomplete, and more input is required;
|
||||
compile_command() returned None. Nothing happens.
|
||||
|
||||
3) The input is complete; compile_command() returned a code
|
||||
object. The code is executed by calling self.runcode() (which
|
||||
also handles run-time exceptions, except for SystemExit).
|
||||
|
||||
The return value is True in case 2, False in the other cases (unless
|
||||
an exception is raised). The return value can be used to
|
||||
decide whether to use sys.ps1 or sys.ps2 to prompt the next
|
||||
line.
|
||||
|
||||
"""
|
||||
try:
|
||||
code = self.compile(source, filename, symbol)
|
||||
except (OverflowError, SyntaxError, ValueError):
|
||||
# Case 1
|
||||
self.showsyntaxerror(filename, source=source)
|
||||
return False
|
||||
|
||||
if code is None:
|
||||
# Case 2
|
||||
return True
|
||||
|
||||
# Case 3
|
||||
self.runcode(code)
|
||||
return False
|
||||
|
||||
def runcode(self, code):
|
||||
"""Execute a code object.
|
||||
|
||||
When an exception occurs, self.showtraceback() is called to
|
||||
display a traceback. All exceptions are caught except
|
||||
SystemExit, which is reraised.
|
||||
|
||||
A note about KeyboardInterrupt: this exception may occur
|
||||
elsewhere in this code, and may not always be caught. The
|
||||
caller should be prepared to deal with it.
|
||||
|
||||
"""
|
||||
try:
|
||||
exec(code, self.locals)
|
||||
        except SystemExit:
            raise
        except:
            self.showtraceback()

    def showsyntaxerror(self, filename=None, **kwargs):
        """Display the syntax error that just occurred.

        This doesn't display a stack trace because there isn't one.

        If a filename is given, it is stuffed in the exception instead
        of what was there before (because Python's parser always uses
        "<string>" when reading from a string).

        The output is written by self.write(), below.

        """
        try:
            typ, value, tb = sys.exc_info()
            if filename and issubclass(typ, SyntaxError):
                value.filename = filename
            source = kwargs.pop('source', "")
            self._showtraceback(typ, value, None, source)
        finally:
            typ = value = tb = None

    def showtraceback(self):
        """Display the exception that just occurred.

        We remove the first stack item because it is our own code.

        The output is written by self.write(), below.

        """
        try:
            typ, value, tb = sys.exc_info()
            self._showtraceback(typ, value, tb.tb_next, '')
        finally:
            typ = value = tb = None

    def _showtraceback(self, typ, value, tb, source):
        sys.last_type = typ
        sys.last_traceback = tb
        value = value.with_traceback(tb)
        # Set the line of text that the exception refers to
        lines = source.splitlines()
        if (source and typ is SyntaxError
                and not value.text and value.lineno is not None
                and len(lines) >= value.lineno):
            value.text = lines[value.lineno - 1]
        sys.last_exc = sys.last_value = value = value.with_traceback(tb)
        if sys.excepthook is sys.__excepthook__:
            self._excepthook(typ, value, tb)
        else:
            # If someone has set sys.excepthook, we let that take precedence
            # over self.write
            try:
                sys.excepthook(typ, value, tb)
            except SystemExit:
                raise
            except BaseException as e:
                e.__context__ = None
                e = e.with_traceback(e.__traceback__.tb_next)
                print('Error in sys.excepthook:', file=sys.stderr)
                sys.__excepthook__(type(e), e, e.__traceback__)
                print(file=sys.stderr)
                print('Original exception was:', file=sys.stderr)
                sys.__excepthook__(typ, value, tb)

    def _excepthook(self, typ, value, tb):
        # This method is being overwritten in
        # _pyrepl.console.InteractiveColoredConsole
        lines = traceback.format_exception(typ, value, tb)
        self.write(''.join(lines))

    def write(self, data):
        """Write a string.

        The base implementation writes to sys.stderr; a subclass may
        replace this with a different implementation.

        """
        sys.stderr.write(data)


class InteractiveConsole(InteractiveInterpreter):
    """Closely emulate the behavior of the interactive Python interpreter.

    This class builds on InteractiveInterpreter and adds prompting
    using the familiar sys.ps1 and sys.ps2, and input buffering.

    """

    def __init__(self, locals=None, filename="<console>", *, local_exit=False):
        """Constructor.

        The optional locals argument will be passed to the
        InteractiveInterpreter base class.

        The optional filename argument should specify the (file)name
        of the input stream; it will show up in tracebacks.

        """
        InteractiveInterpreter.__init__(self, locals)
        self.filename = filename
        self.local_exit = local_exit
        self.resetbuffer()

    def resetbuffer(self):
        """Reset the input buffer."""
        self.buffer = []

    def interact(self, banner=None, exitmsg=None):
        """Closely emulate the interactive Python console.

        The optional banner argument specifies the banner to print
        before the first interaction; by default it prints a banner
        similar to the one printed by the real Python interpreter,
        followed by the current class name in parentheses (so as not
        to confuse this with the real interpreter -- since it's so
        close!).

        The optional exitmsg argument specifies the exit message
        printed when exiting. Pass the empty string to suppress
        printing an exit message. If exitmsg is not given or None,
        a default message is printed.

        """
        try:
            sys.ps1
        except AttributeError:
            sys.ps1 = ">>> "
        try:
            sys.ps2
        except AttributeError:
            sys.ps2 = "... "
        cprt = 'Type "help", "copyright", "credits" or "license" for more information.'
        if banner is None:
            self.write("Python %s on %s\n%s\n(%s)\n" %
                       (sys.version, sys.platform, cprt,
                        self.__class__.__name__))
        elif banner:
            self.write("%s\n" % str(banner))
        more = 0

        # When the user uses exit() or quit() in their interactive shell
        # they probably just want to exit the created shell, not the whole
        # process. exit and quit in builtins closes sys.stdin which makes
        # it super difficult to restore
        #
        # When self.local_exit is True, we overwrite the builtins so
        # exit() and quit() only raises SystemExit and we can catch that
        # to only exit the interactive shell

        _exit = None
        _quit = None

        if self.local_exit:
            if hasattr(builtins, "exit"):
                _exit = builtins.exit
                builtins.exit = Quitter("exit")

            if hasattr(builtins, "quit"):
                _quit = builtins.quit
                builtins.quit = Quitter("quit")

        try:
            while True:
                try:
                    if more:
                        prompt = sys.ps2
                    else:
                        prompt = sys.ps1
                    try:
                        line = self.raw_input(prompt)
                    except EOFError:
                        self.write("\n")
                        break
                    else:
                        more = self.push(line)
                except KeyboardInterrupt:
                    self.write("\nKeyboardInterrupt\n")
                    self.resetbuffer()
                    more = 0
                except SystemExit as e:
                    if self.local_exit:
                        self.write("\n")
                        break
                    else:
                        raise e
        finally:
            # restore exit and quit in builtins if they were modified
            if _exit is not None:
                builtins.exit = _exit

            if _quit is not None:
                builtins.quit = _quit

            if exitmsg is None:
                self.write('now exiting %s...\n' % self.__class__.__name__)
            elif exitmsg != '':
                self.write('%s\n' % exitmsg)

    def push(self, line, filename=None, _symbol="single"):
        """Push a line to the interpreter.

        The line should not have a trailing newline; it may have
        internal newlines.  The line is appended to a buffer and the
        interpreter's runsource() method is called with the
        concatenated contents of the buffer as source.  If this
        indicates that the command was executed or invalid, the buffer
        is reset; otherwise, the command is incomplete, and the buffer
        is left as it was after the line was appended.  The return
        value is 1 if more input is required, 0 if the line was dealt
        with in some way (this is the same as runsource()).

        """
        self.buffer.append(line)
        source = "\n".join(self.buffer)
        if filename is None:
            filename = self.filename
        more = self.runsource(source, filename, symbol=_symbol)
        if not more:
            self.resetbuffer()
        return more

    def raw_input(self, prompt=""):
        """Write a prompt and read a line.

        The returned line does not include the trailing newline.
        When the user enters the EOF key sequence, EOFError is raised.

        The base implementation uses the built-in function
        input(); a subclass may replace this with a different
        implementation.

        """
        return input(prompt)


class Quitter:
    def __init__(self, name):
        self.name = name
        if sys.platform == "win32":
            self.eof = 'Ctrl-Z plus Return'
        else:
            self.eof = 'Ctrl-D (i.e. EOF)'

    def __repr__(self):
        return f'Use {self.name} or {self.eof} to exit'

    def __call__(self, code=None):
        raise SystemExit(code)


def interact(banner=None, readfunc=None, local=None, exitmsg=None, local_exit=False):
    """Closely emulate the interactive Python interpreter.

    This is a backwards compatible interface to the InteractiveConsole
    class.  When readfunc is not specified, it attempts to import the
    readline module to enable GNU readline if it is available.

    Arguments (all optional, all default to None):

    banner -- passed to InteractiveConsole.interact()
    readfunc -- if not None, replaces InteractiveConsole.raw_input()
    local -- passed to InteractiveInterpreter.__init__()
    exitmsg -- passed to InteractiveConsole.interact()
    local_exit -- passed to InteractiveConsole.__init__()

    """
    console = InteractiveConsole(local, local_exit=local_exit)
    if readfunc is not None:
        console.raw_input = readfunc
    else:
        try:
            import readline
        except ImportError:
            pass
    console.interact(banner, exitmsg)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-q', action='store_true',
                        help="don't print version and copyright messages")
    args = parser.parse_args()
    if args.q or sys.flags.quiet:
        banner = ''
    else:
        banner = None
    interact(banner)
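As context for reviewers: the `InteractiveConsole` committed above can also be driven programmatically instead of from stdin. This is a minimal sketch (not part of the committed file) showing how `push()` signals incomplete input with 1 and a completed statement with 0:

```python
import code

# Create a console with its own fresh namespace.
console = code.InteractiveConsole()

# push() returns 1 while the statement needs more lines, 0 once it runs.
assert console.push("def add(a, b):") == 1      # block opened, more input needed
assert console.push("    return a + b") == 1    # still inside the block
assert console.push("") == 0                    # blank line completes and executes it

# The function defined above now lives in the console's namespace.
assert console.locals["add"](2, 3) == 5
```

This mirrors what `interact()` does in its read loop: it keeps prompting with `sys.ps2` as long as `push()` returns a true value.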
1132  Utils/PythonNew32/Lib/codecs.py  Normal file
File diff suppressed because it is too large
155  Utils/PythonNew32/Lib/codeop.py  Normal file
@@ -0,0 +1,155 @@
r"""Utilities to compile possibly incomplete Python source code.

This module provides two interfaces, broadly similar to the builtin
function compile(), which take program text, a filename and a 'mode'
and:

- Return code object if the command is complete and valid
- Return None if the command is incomplete
- Raise SyntaxError, ValueError or OverflowError if the command is a
  syntax error (OverflowError and ValueError can be produced by
  malformed literals).

The two interfaces are:

compile_command(source, filename, symbol):

    Compiles a single command in the manner described above.

CommandCompiler():

    Instances of this class have __call__ methods identical in
    signature to compile_command; the difference is that if the
    instance compiles program text containing a __future__ statement,
    the instance 'remembers' and compiles all subsequent program texts
    with the statement in force.

The module also provides another class:

Compile():

    Instances of this class act like the built-in function compile,
    but with 'memory' in the sense described above.
"""

import __future__
import warnings

_features = [getattr(__future__, fname)
             for fname in __future__.all_feature_names]

__all__ = ["compile_command", "Compile", "CommandCompiler"]

# The following flags match the values from Include/cpython/compile.h
# Caveat emptor: These flags are undocumented on purpose and depending
# on their effect outside the standard library is **unsupported**.
PyCF_DONT_IMPLY_DEDENT = 0x200
PyCF_ONLY_AST = 0x400
PyCF_ALLOW_INCOMPLETE_INPUT = 0x4000

def _maybe_compile(compiler, source, filename, symbol):
    # Check for source consisting of only blank lines and comments.
    for line in source.split("\n"):
        line = line.strip()
        if line and line[0] != '#':
            break   # Leave it alone.
    else:
        if symbol != "eval":
            source = "pass"     # Replace it with a 'pass' statement

    # Disable compiler warnings when checking for incomplete input.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", (SyntaxWarning, DeprecationWarning))
        try:
            compiler(source, filename, symbol)
        except SyntaxError:  # Let other compile() errors propagate.
            try:
                compiler(source + "\n", filename, symbol)
                return None
            except _IncompleteInputError as e:
                return None
            except SyntaxError as e:
                pass
            # fallthrough

    return compiler(source, filename, symbol, incomplete_input=False)


def _compile(source, filename, symbol, incomplete_input=True):
    flags = 0
    if incomplete_input:
        flags |= PyCF_ALLOW_INCOMPLETE_INPUT
        flags |= PyCF_DONT_IMPLY_DEDENT
    return compile(source, filename, symbol, flags)

def compile_command(source, filename="<input>", symbol="single"):
    r"""Compile a command and determine whether it is incomplete.

    Arguments:

    source -- the source string; may contain \n characters
    filename -- optional filename from which source was read; default
                "<input>"
    symbol -- optional grammar start symbol; "single" (default), "exec"
              or "eval"

    Return value / exceptions raised:

    - Return a code object if the command is complete and valid
    - Return None if the command is incomplete
    - Raise SyntaxError, ValueError or OverflowError if the command is a
      syntax error (OverflowError and ValueError can be produced by
      malformed literals).
    """
    return _maybe_compile(_compile, source, filename, symbol)

class Compile:
    """Instances of this class behave much like the built-in compile
    function, but if one is used to compile text containing a future
    statement, it "remembers" and compiles all subsequent program texts
    with the statement in force."""

    def __init__(self):
        self.flags = PyCF_DONT_IMPLY_DEDENT | PyCF_ALLOW_INCOMPLETE_INPUT

    def __call__(self, source, filename, symbol, flags=0, **kwargs):
        flags |= self.flags
        if kwargs.get('incomplete_input', True) is False:
            flags &= ~PyCF_DONT_IMPLY_DEDENT
            flags &= ~PyCF_ALLOW_INCOMPLETE_INPUT
        codeob = compile(source, filename, symbol, flags, True)
        if flags & PyCF_ONLY_AST:
            return codeob   # this is an ast.Module in this case
        for feature in _features:
            if codeob.co_flags & feature.compiler_flag:
                self.flags |= feature.compiler_flag
        return codeob

class CommandCompiler:
    """Instances of this class have __call__ methods identical in
    signature to compile_command; the difference is that if the
    instance compiles program text containing a __future__ statement,
    the instance 'remembers' and compiles all subsequent program texts
    with the statement in force."""

    def __init__(self,):
        self.compiler = Compile()

    def __call__(self, source, filename="<input>", symbol="single"):
        r"""Compile a command and determine whether it is incomplete.

        Arguments:

        source -- the source string; may contain \n characters
        filename -- optional filename from which source was read;
                    default "<input>"
        symbol -- optional grammar start symbol; "single" (default) or
                  "eval"

        Return value / exceptions raised:

        - Return a code object if the command is complete and valid
        - Return None if the command is incomplete
        - Raise SyntaxError, ValueError or OverflowError if the command is a
          syntax error (OverflowError and ValueError can be produced by
          malformed literals).
        """
        return _maybe_compile(self.compiler, source, filename, symbol)
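As context for reviewers: `compile_command` above has exactly three outcomes, which is what lets `InteractiveConsole.push()` distinguish "run it", "keep prompting", and "report an error". A minimal sketch (not part of the committed file) of all three:

```python
import codeop

# 1. Complete, valid statement -> a code object.
co = codeop.compile_command("x = 1 + 1")
assert co is not None
ns = {}
exec(co, ns)
assert ns["x"] == 2

# 2. Syntactically incomplete statement -> None (caller should ask for more input).
assert codeop.compile_command("if True:") is None

# 3. Outright syntax error -> SyntaxError is raised.
try:
    codeop.compile_command("a b c")
    raised = False
except SyntaxError:
    raised = True
assert raised
```

The default `symbol="single"` matches the interactive grammar, so a trailing expression compiled this way would also print its value when executed, just as in the REPL.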
1596  Utils/PythonNew32/Lib/collections/__init__.py  Normal file
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff.