svn+ssh://pythondev@svn.python.org/python/trunk
........
r60286 | christian.heimes | 2008-01-25 15:54:23 +0100 (Fri, 25 Jan 2008) | 1 line
setup.py doesn't pick up changes to a header file
........
r60287 | christian.heimes | 2008-01-25 16:52:11 +0100 (Fri, 25 Jan 2008) | 2 lines
Added the Python core headers Include/*.h and pyconfig.h as dependencies for the extensions in Modules/
It forces a rebuild of all extensions when a header file has been modified.
........
r60291 | raymond.hettinger | 2008-01-25 20:24:46 +0100 (Fri, 25 Jan 2008) | 4 lines
Changes 54857 and 54840 broke code and were reverted in Py2.5 just before
it was released, but that reversion never made it to the Py2.6 head.
........
r60296 | guido.van.rossum | 2008-01-25 20:50:26 +0100 (Fri, 25 Jan 2008) | 2 lines
Rewrite the list_inline_repeat overflow check slightly differently.
........
r60301 | thomas.wouters | 2008-01-25 22:09:34 +0100 (Fri, 25 Jan 2008) | 4 lines
Use the right (portable) definition of the max of a Py_ssize_t.
........
r60303 | thomas.wouters | 2008-01-26 02:47:05 +0100 (Sat, 26 Jan 2008) | 5 lines
Make 'testall' work again when building in a separate directory.
test_distutils still fails when doing that.
........
r60305 | neal.norwitz | 2008-01-26 06:54:48 +0100 (Sat, 26 Jan 2008) | 3 lines
Prevent this test from failing if there are transient network problems
by retrying the host up to 3 times.
........
r60306 | neal.norwitz | 2008-01-26 08:26:12 +0100 (Sat, 26 Jan 2008) | 12 lines
Use a condition variable (threading.Event) rather than sleeps and checking a
global to determine when the server is ready to be used. This slows the test
down, but should make it correct. There was a race condition before where the
server could have assigned a port, yet it wasn't ready to serve requests. If
the client sent a request before the server was completely ready, it would get
an exception. There was machinery to try to handle this condition. All of
that should be unnecessary and removed if this change works. A NOTE was
added as a comment about what needs to be fixed.
The buildbots will tell us if there are more errors or
if this test is now stable.
........
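
For illustration, a minimal sketch of the event-based readiness pattern described
above (the server and client here are hypothetical, not the test's actual code):

    import socket
    import threading

    ready = threading.Event()
    port = []                      # filled in by the server thread once bound

    def serve():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        port.append(srv.getsockname()[1])
        ready.set()                # the socket is bound and listening
        conn, addr = srv.accept()
        conn.close()
        srv.close()

    threading.Thread(target=serve).start()
    ready.wait()                   # block until ready; no sleep() guessing
    client = socket.create_connection(("127.0.0.1", port[0]))
    client.close()
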
r60307 | neal.norwitz | 2008-01-26 08:38:03 +0100 (Sat, 26 Jan 2008) | 3 lines
Fix exception in tearDown on ppc buildbot. If there's no directory,
that shouldn't cause the test to fail, just like in setUp.
........
r60308 | raymond.hettinger | 2008-01-26 09:19:06 +0100 (Sat, 26 Jan 2008) | 3 lines
Make PySet_Add() work with frozensets. Works like PyTuple_SetItem() to build up values in a brand new frozenset.
........
r60309 | neal.norwitz | 2008-01-26 09:26:00 +0100 (Sat, 26 Jan 2008) | 1 line
The OS X buildbot had errors with the unavailable exceptions disabled. Restore it.
........
r60310 | raymond.hettinger | 2008-01-26 09:37:28 +0100 (Sat, 26 Jan 2008) | 4 lines
Let marshal build up sets and frozensets one element at a time.
This saves the unnecessary creation of a tuple as an intermediate container.
........
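
The change is internal to marshal's C implementation; from Python the observable
behavior is just an ordinary round-trip, e.g.:

    import marshal

    data = frozenset(['red', 'green', 'blue'])
    restored = marshal.loads(marshal.dumps(data))
    assert restored == data and type(restored) is frozenset
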
r60311 | raymond.hettinger | 2008-01-26 09:41:13 +0100 (Sat, 26 Jan 2008) | 1 line
Update test code for change to PySet_Add().
........
r60312 | raymond.hettinger | 2008-01-26 10:31:11 +0100 (Sat, 26 Jan 2008) | 1 line
Revert PySet_Add() changes.
........
r60314 | georg.brandl | 2008-01-26 10:43:35 +0100 (Sat, 26 Jan 2008) | 2 lines
#1934: fix os.path.isabs docs.
........
r60316 | georg.brandl | 2008-01-26 12:00:18 +0100 (Sat, 26 Jan 2008) | 2 lines
Add missing things in re docstring.
........
r60317 | georg.brandl | 2008-01-26 12:02:22 +0100 (Sat, 26 Jan 2008) | 2 lines
Slashes allowed on Windows.
........
r60319 | georg.brandl | 2008-01-26 14:41:21 +0100 (Sat, 26 Jan 2008) | 2 lines
Fix markup again.
........
r60320 | andrew.kuchling | 2008-01-26 14:50:51 +0100 (Sat, 26 Jan 2008) | 1 line
Add some items
........
r60321 | georg.brandl | 2008-01-26 15:02:38 +0100 (Sat, 26 Jan 2008) | 2 lines
Clarify "b" mode under Unix.
........
r60322 | georg.brandl | 2008-01-26 15:03:47 +0100 (Sat, 26 Jan 2008) | 3 lines
#1940: make it possible to use curses.filter() before curses.initscr()
as the documentation says.
........
r60324 | georg.brandl | 2008-01-26 15:14:20 +0100 (Sat, 26 Jan 2008) | 3 lines
#1473257: add generator.gi_code attribute that refers to
the original code object backing the generator. Patch by Collin Winter.
........
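
The attribute is easy to observe from Python (a trivial sketch; see also the
doctest added to test_generators further down):

    def gen():
        yield 1

    g = gen()
    assert g.gi_code is gen.__code__    # the generator keeps a reference to its code object
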
r60325 | georg.brandl | 2008-01-26 15:19:22 +0100 (Sat, 26 Jan 2008) | 2 lines
Move C API entries to the corresponding section.
........
r60326 | christian.heimes | 2008-01-26 17:43:35 +0100 (Sat, 26 Jan 2008) | 1 line
Unit test fix from Giampaolo Rodola, #1938
........
r60327 | gregory.p.smith | 2008-01-26 19:51:05 +0100 (Sat, 26 Jan 2008) | 2 lines
Update docs for new callback params added in r60188
........
r60329 | neal.norwitz | 2008-01-26 21:24:36 +0100 (Sat, 26 Jan 2008) | 3 lines
Clean up the code a bit. test_rfind is failing on the PPC and PPC64 buildbots;
this might fix the problem.
........
r60330 | neal.norwitz | 2008-01-26 22:02:45 +0100 (Sat, 26 Jan 2008) | 1 line
Always try to remove the test file even if close raises an exception
........
r60331 | neal.norwitz | 2008-01-26 22:21:59 +0100 (Sat, 26 Jan 2008) | 3 lines
Reduce the race condition by signalling when the server is ready
and not trying to connect before.
........
r60334 | neal.norwitz | 2008-01-27 00:13:46 +0100 (Sun, 27 Jan 2008) | 5 lines
On some systems (e.g., Ubuntu on hppa) the flush()
doesn't cause the exception, but the close() does.
Will backport.
........
r60335 | neal.norwitz | 2008-01-27 00:14:17 +0100 (Sun, 27 Jan 2008) | 2 lines
Consistently use tempfile.tempdir for the db_home directory.
........
r60338 | neal.norwitz | 2008-01-27 02:44:05 +0100 (Sun, 27 Jan 2008) | 4 lines
Eliminate the sleeps that assume the server will start in .5 seconds.
This should make the test less flaky. It also speeds up the test
by about 75% on my box (20+ seconds -> ~4 seconds).
........
r60342 | neal.norwitz | 2008-01-27 06:02:34 +0100 (Sun, 27 Jan 2008) | 6 lines
Try to prevent this test from being flaky. We might need a sleep in here
which isn't as bad as it sounds. The close() *should* raise an exception,
so if it didn't we should give more time to sync and really raise it.
Will backport.
........
r60344 | jeffrey.yasskin | 2008-01-27 06:40:35 +0100 (Sun, 27 Jan 2008) | 3 lines
Make rational.gcd() public and allow Rational to take decimal strings, per
Raymond's advice.
........
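
Roughly, the new behavior (module and class names as they stand on the trunk in
this merge; the expected values come from the test changes further down):

    import rational

    # gcd() is now public; the result takes the sign of the second argument.
    assert rational.gcd(120, 84) == 12
    assert rational.gcd(84, -120) == -12

    # Rational() now also accepts plain decimal strings.
    assert rational.Rational('1.5') == rational.Rational(3, 2)
    assert rational.Rational('-3.2') == rational.Rational(-16, 5)
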
r60345 | neal.norwitz | 2008-01-27 08:36:03 +0100 (Sun, 27 Jan 2008) | 3 lines
Mostly reformat. Also set an error and return NULL if neither MS_WINDOWS
nor UNIX is defined. This may have caused problems on cygwin.
........
r60346 | neal.norwitz | 2008-01-27 08:37:38 +0100 (Sun, 27 Jan 2008) | 3 lines
Use int for the sign rather than a char. char can be signed or unsigned.
It's system dependent. This might fix the problem with test_rfind failing.
........
r60347 | neal.norwitz | 2008-01-27 08:41:33 +0100 (Sun, 27 Jan 2008) | 1 line
Add stdarg include for va_list to get this to compile on cygwin
........
r60348 | raymond.hettinger | 2008-01-27 11:13:57 +0100 (Sun, 27 Jan 2008) | 1 line
Docstring nit
........
r60349 | raymond.hettinger | 2008-01-27 11:47:55 +0100 (Sun, 27 Jan 2008) | 1 line
Removed an unnecessary and confusing paragraph from the namedtuple docs.
........
is exposed to Python programs as ``types.ModuleType``.
-.. cmacro:: int PyModule_Check(PyObject *p)
+.. cfunction:: int PyModule_Check(PyObject *p)
Return true if *p* is a module object, or a subtype of a module object.
-.. cmacro:: int PyModule_CheckExact(PyObject *p)
+.. cfunction:: int PyModule_CheckExact(PyObject *p)
Return true if *p* is a module object, but not a subtype of
:cdata:`PyModule_Type`.
null-terminated. Return ``-1`` on error, ``0`` on success.
-.. cmacro:: int PyModule_AddIntMacro(PyObject *module, macro)
+.. cfunction:: int PyModule_AddIntMacro(PyObject *module, macro)
Add an int constant to *module*. The name and the value are taken from
*macro*. For example ``PyModule_AddConstant(module, AF_INET)`` adds the int
Return ``-1`` on error, ``0`` on success.
-.. cmacro:: int PyModule_AddStringMacro(PyObject *module, macro)
+.. cfunction:: int PyModule_AddStringMacro(PyObject *module, macro)
Add a string constant to *module*.
Point: x= 3.000 y= 4.000 hypot= 5.000
Point: x=14.000 y= 0.714 hypot=14.018
-Another use for subclassing is to replace performance critcal methods with
-faster versions that bypass error-checking::
-
- class Point(namedtuple('Point', 'x y')):
- __slots__ = ()
- _make = classmethod(tuple.__new__)
- def _replace(self, _map=map, **kwds):
- return self._make(_map(kwds.get, ('x', 'y'), self))
-
-The subclasses shown above set ``__slots__`` to an empty tuple. This keeps
+The subclass shown above sets ``__slots__`` to an empty tuple. This helps
keep memory requirements low by preventing the creation of instance dictionaries.
.. method:: FTP.retrlines(command[, callback])
- Retrieve a file or directory listing in ASCII transfer mode. *command* should be
- an appropriate ``RETR`` command (see :meth:`retrbinary`) or a ``LIST`` command
- (usually just the string ``'LIST'``). The *callback* function is called for
- each line, with the trailing CRLF stripped. The default *callback* prints the
- line to ``sys.stdout``.
+ Retrieve a file or directory listing in ASCII transfer mode. *command*
+ should be an appropriate ``RETR`` command (see :meth:`retrbinary`) or a
+ command such as ``LIST``, ``NLST`` or ``MLSD`` (usually just the string
+ ``'LIST'``). The *callback* function is called for each line, with the
+ trailing CRLF stripped. The default *callback* prints the line to
+ ``sys.stdout``.
.. method:: FTP.set_pasv(boolean)
it is on by default.)
-.. method:: FTP.storbinary(command, file[, blocksize])
+.. method:: FTP.storbinary(command, file[, blocksize, callback])
Store a file in binary transfer mode. *command* should be an appropriate
``STOR`` command: ``"STOR filename"``. *file* is an open file object which is
read until EOF using its :meth:`read` method in blocks of size *blocksize* to
provide the data to be stored. The *blocksize* argument defaults to 8192.
+ *callback* is an optional single parameter callable that is called
+ on each block of data after it is sent.
-.. method:: FTP.storlines(command, file)
+.. method:: FTP.storlines(command, file[, callback])
Store a file in ASCII transfer mode. *command* should be an appropriate
``STOR`` command (see :meth:`storbinary`). Lines are read until EOF from the
open file object *file* using its :meth:`readline` method to provide the data to
- be stored.
+ be stored. *callback* is an optional single parameter callable
+ that is called on each line after it is sent.
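
A sketch of how the new *callback* parameter might be used (host, credentials and
file names below are placeholders):

    import ftplib

    def note_block(block):
        # called with each block of data right after it has been sent
        print('sent %d bytes' % len(block))

    ftp = ftplib.FTP('ftp.example.com')        # placeholder host
    ftp.login('user', 'password')              # placeholder credentials
    f = open('local.bin', 'rb')
    ftp.storbinary('STOR remote.bin', f, 8192, note_block)
    f.close()
    ftp.quit()
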
.. method:: FTP.transfercmd(cmd[, rest])
.. function:: isabs(path)
- Return ``True`` if *path* is an absolute pathname (begins with a slash).
+ Return ``True`` if *path* is an absolute pathname. On Unix, that means it
+ begins with a slash, on Windows that it begins with a (back)slash after chopping
+ off a potential drive letter.
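
A quick illustration, using posixpath and ntpath directly so it behaves the same
on any platform:

    import ntpath
    import posixpath

    assert posixpath.isabs('/usr/bin')         # begins with a slash
    assert not posixpath.isabs('usr/bin')
    assert ntpath.isabs(r'C:\Windows')         # drive letter is chopped off first
    assert not ntpath.isabs(r'spam\eggs')
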
.. function:: isfile(path)
writing. The *mode* argument is optional; ``'r'`` will be assumed if it's
omitted.
-``'b'`` appended to the mode opens the file in binary mode, so there are
-also modes like ``'rb'``, ``'wb'``, and ``'r+b'``. Python distinguishes
-between text and binary files. Binary files are read and written without
-any data transformation. In text mode, platform-specific newline
-representations are automatically converted to newlines when read and
-newline characters are automatically converted to the proper
-platform-specific representation when written. This makes writing portable
-code which reads or writes text files easier. In addition, when reading
-from or writing to text files, the data are automatically decoded or
-encoding, respectively, using the encoding associated with the file.
+On Windows and the Macintosh, ``'b'`` appended to the mode opens the file in
+binary mode, so there are also modes like ``'rb'``, ``'wb'``, and ``'r+b'``.
+Windows makes a distinction between text and binary files; the end-of-line
+characters in text files are automatically altered slightly when data is read or
+written. This behind-the-scenes modification to file data is fine for ASCII
+text files, but it'll corrupt binary data like that in :file:`JPEG` or
+:file:`EXE` files. Be very careful to use binary mode when reading and writing
+such files. On Unix, it doesn't hurt to append a ``'b'`` to the mode, so
+you can use it platform-independently for all binary files.
This behind-the-scenes modification to file data is fine for text files, but
will corrupt binary data like that in :file:`JPEG` or :file:`EXE` files. Be
****************************
.. XXX mention switch to Roundup for bug tracking
+.. XXX add trademark info for Apple, Microsoft.
:Author: A.M. Kuchling
:Release: |release|
* An optional ``timeout`` parameter was added to the
:class:`ftplib.FTP` class constructor as well as the :meth:`connect`
method, specifying a timeout measured in seconds. (Added by Facundo
- Batista.)
+ Batista.) Also, the :class:`FTP` class's
+ :meth:`storbinary` and :meth:`storlines`
+ now take an optional *callback* parameter that will be called with
+ each block of data after the data has been sent.
+ (Contributed by Phil Schwartz.)
+
+ .. Patch 1221598
* The :func:`reduce` built-in function is also available in the
:mod:`functools` module. In Python 3.0, the built-in is dropped and it's
.. Patch 1137
+* The :mod:`Queue` module now provides queue classes that retrieve entries
+ in different orders. The :class:`PriorityQueue` class stores
+ queued items in a heap and retrieves them in priority order,
+ and :class:`LifoQueue` retrieves the most recently added entries first,
+ meaning that it behaves like a stack.
+ (Contributed by Raymond Hettinger.)
+
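
The new classes in action (on the 2.x trunk the module is named Queue):

    import Queue

    pq = Queue.PriorityQueue()
    for item in [(3, 'medium'), (1, 'high'), (5, 'low')]:
        pq.put(item)
    assert pq.get() == (1, 'high')      # lowest value, i.e. highest priority, first

    lq = Queue.LifoQueue()
    for item in 'a', 'b', 'c':
        lq.put(item)
    assert lq.get() == 'c'              # most recently added entry first
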
* The :mod:`random` module's :class:`Random` objects can
now be pickled on a 32-bit system and unpickled on a 64-bit
system, and vice versa. Unfortunately, this change also means
SSL module documentation.
+
+.. ======================================================================
+
+plistlib: A Property-List Parser
+--------------------------------------------------
+
+A commonly-used format on MacOS X is the ``.plist`` format,
+which stores basic data types (numbers, strings, lists,
+and dictionaries) and serializes them into an XML-based format.
+(It's a lot like the XML-RPC serialization of data types.)
+
+Despite being primarily used on MacOS X, the format
+has nothing Mac-specific about it and the Python implementation works
+on any platform that Python supports, so the :mod:`plistlib` module
+has been promoted to the standard library.
+
+Using the module is simple::
+
+ import sys
+ import plistlib
+ import datetime
+
+ # Create data structure
+ data_struct = dict(lastAccessed=datetime.datetime.now(),
+ version=1,
+ categories=('Personal', 'Shared', 'Private'))
+
+ # Create string containing XML.
+ plist_str = plistlib.writePlistToString(data_struct)
+ new_struct = plistlib.readPlistFromString(plist_str)
+ print data_struct
+ print new_struct
+
+ # Write data structure to a file and read it back.
+ plistlib.writePlist(data_struct, '/tmp/customizations.plist')
+ new_struct = plistlib.readPlist('/tmp/customizations.plist')
+
+ # read/writePlist accepts file-like objects as well as paths.
+ plistlib.writePlist(data_struct, sys.stdout)
+
+
.. ======================================================================
.. Issue 1629
+* Distutils now places C extensions it builds in a
+ different directory when running on a debug version of Python.
+ (Contributed by Collin Winter.)
+
+ .. Patch 1530959
+
+
.. ======================================================================
/* True if generator is being executed. */
int gi_running;
+
+ /* The code object backing the generator */
+ PyObject *gi_code;
/* List of weak reference. */
PyObject *gi_weakreflist;
#ifndef Py_UNICODEOBJECT_H
#define Py_UNICODEOBJECT_H
+#include <stdarg.h>
+
/*
Unicode implementation based on original code by Fredrik Lundh,
def setUp(self):
self.filename = self.__class__.__name__ + '.db'
- self.homeDir = tempfile.mkdtemp()
+ homeDir = os.path.join(tempfile.gettempdir(), 'db_home')
+ self.homeDir = homeDir
+ try:
+ os.mkdir(homeDir)
+ except os.error:
+ import glob
+ files = glob.glob(os.path.join(self.homeDir, '*'))
+ for file in files:
+ os.remove(file)
self.env = db.DBEnv()
self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL |
db.DB_INIT_LOCK | db.DB_THREAD | self.envFlags)
import sys, os, re
from io import StringIO
import tempfile
+import test_all
import unittest
try:
def setUp (self):
self.filename = self.__class__.__name__ + '.db'
- self.homeDir = tempfile.mkdtemp()
+ homeDir = os.path.join (tempfile.gettempdir(), 'db_home')
+ self.homeDir = homeDir
+ try:
+ os.mkdir (homeDir)
+ except os.error:
+ pass
env = db.DBEnv ()
env.open (self.homeDir,
import unittest
+import tempfile
import sys, os, glob
import shutil
import tempfile
db_name = 'test-cursor_pget.db'
def setUp(self):
- self.homeDir = tempfile.mkdtemp()
+ self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home')
+ try:
+ os.mkdir(self.homeDir)
+ except os.error:
+ pass
self.env = db.DBEnv()
self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
self.primary_db = db.DB(self.env)
import pickle
import tempfile
import unittest
+import tempfile
import glob
try:
db_name = 'test-dbobj.db'
def setUp(self):
- self.homeDir = tempfile.mkdtemp()
+ homeDir = os.path.join(tempfile.gettempdir(), 'db_home')
+ self.homeDir = homeDir
+ try: os.mkdir(homeDir)
+ except os.error: pass
def tearDown(self):
if hasattr(self, 'db'):
class DBSequenceTest(unittest.TestCase):
def setUp(self):
self.int_32_max = 0x100000000
- self.homeDir = tempfile.mkdtemp()
- old_tempfile_tempdir = tempfile.tempdir
+ self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home')
+ try:
+ os.mkdir(self.homeDir)
+ except os.error:
+ pass
+ old_tempfile_tempdir = tempfile.tempdir
tempfile.tempdir = self.homeDir
self.filename = os.path.split(tempfile.mktemp())[1]
tempfile.tempdir = old_tempfile_tempdir
if verbose:
dbutils._deadlock_VerboseFile = sys.stdout
- self.homeDir = tempfile.mkdtemp()
+ homeDir = os.path.join(tempfile.gettempdir(), 'db_home')
+ self.homeDir = homeDir
+ try:
+ os.mkdir(homeDir)
+ except OSError, e:
+ if e.errno != errno.EEXIST: raise
self.env = db.DBEnv()
self.setEnvOpts()
self.env.open(self.homeDir, self.envflags | db.DB_CREATE)
def tearDown(self):
self.d.close()
self.env.close()
- shutil.rmtree(self.homeDir)
+ try:
+ shutil.rmtree(self.homeDir)
+ except OSError, e:
+ if e.errno != errno.EEXIST: raise
def setEnvOpts(self):
pass
RationalAbc = numbers.Rational
-def _gcd(a, b): # XXX This is a useful function. Consider making it public.
- """Calculate the Greatest Common Divisor.
+def gcd(a, b):
+ """Calculate the Greatest Common Divisor of a and b.
Unless b==0, the result will have the same sign as b (so that when
b is divided by it, the result comes out positive).
>>> _binary_float_to_ratio(-.25)
(-1, 4)
"""
- # XXX Consider moving this to to floatobject.c
- # with a name like float.as_intger_ratio()
+ # XXX Move this to floatobject.c with a name like
+ # float.as_integer_ratio()
if x == 0:
return 0, 1
_RATIONAL_FORMAT = re.compile(
- r'^\s*(?P<sign>[-+]?)(?P<num>\d+)(?:/(?P<denom>\d+))?\s*$')
+ r'^\s*(?P<sign>[-+]?)(?P<num>\d+)'
+ r'(?:/(?P<denom>\d+)|\.(?P<decimal>\d+))?\s*$')
-# XXX Consider accepting decimal strings as input since they are exact.
-# Rational("2.01") --> s="2.01" ; Rational.from_decimal(Decimal(s)) --> Rational(201, 100)"
-# If you want to avoid going through the decimal module, just parse the string directly:
-# s.partition('.') --> ('2', '.', '01') --> Rational(int('2'+'01'), 10**len('01')) --> Rational(201, 100)
class Rational(RationalAbc):
"""This class implements rational numbers.
Rational() == 0.
Rationals can also be constructed from strings of the form
- '[-+]?[0-9]+(/[0-9]+)?', optionally surrounded by spaces.
+ '[-+]?[0-9]+((/|.)[0-9]+)?', optionally surrounded by spaces.
"""
def __new__(cls, numerator=0, denominator=1):
"""Constructs a Rational.
- Takes a string, another Rational, or a numerator/denominator pair.
+ Takes a string like '3/2' or '1.5', another Rational, or a
+ numerator/denominator pair.
"""
self = super(Rational, cls).__new__(cls)
m = _RATIONAL_FORMAT.match(input)
if m is None:
raise ValueError('Invalid literal for Rational: ' + input)
- numerator = int(m.group('num'))
- # Default denominator to 1. That's the only optional group.
- denominator = int(m.group('denom') or 1)
+ numerator = m.group('num')
+ decimal = m.group('decimal')
+ if decimal:
+ # The literal is a decimal number.
+ numerator = int(numerator + decimal)
+ denominator = 10**len(decimal)
+ else:
+ # The literal is an integer or fraction.
+ numerator = int(numerator)
+ # Default denominator to 1.
+ denominator = int(m.group('denom') or 1)
+
if m.group('sign') == '-':
numerator = -numerator
if denominator == 0:
raise ZeroDivisionError('Rational(%s, 0)' % numerator)
- g = _gcd(numerator, denominator)
+ g = gcd(numerator, denominator)
self.numerator = int(numerator // g)
self.denominator = int(denominator // g)
return self
*?,+?,?? Non-greedy versions of the previous three special characters.
{m,n} Matches from m to n repetitions of the preceding RE.
{m,n}? Non-greedy version of the above.
- "\\" Either escapes special characters or signals a special sequence.
+ "\\" Either escapes special characters or signals a special sequence.
[] Indicates a set of characters.
A "^" as the first character indicates a complementing set.
"|" A|B, creates an RE that will match either A or B.
(?#...) A comment; ignored.
(?=...) Matches if ... matches next, but doesn't consume the string.
(?!...) Matches if ... doesn't match next.
+ (?<=...) Matches if preceded by ... (must be fixed length).
+ (?<!...) Matches if not preceded by ... (must be fixed length).
+ (?(id/name)yes|no) Matches yes pattern if the group with id/name matched,
+ the (optional) no pattern otherwise.
The special sequences consist of "\\" and a character from the list
below. If the ordinary character is not on the list, then the
subn Same as sub, but also return the number of substitutions made.
split Split a string by the occurrences of a pattern.
findall Find all occurrences of a pattern in a string.
+ finditer Return an iterator yielding a match object for each match.
compile Compile a pattern into a RegexObject.
purge Clear the regular expression cache.
escape Backslash all non-alphanumerics in a string.
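
The entries added above correspond to standard re behavior, for example:

    import re

    # Lookbehind: match only when preceded by a fixed-length pattern.
    assert re.search(r'(?<=\$)\d+', 'price: $42').group() == '42'

    # finditer: yields match objects instead of returning strings.
    assert [m.group() for m in re.finditer(r'\d+', 'a1 b22 c333')] == ['1', '22', '333']
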
# client each send
chunk_size = 1
+ def __init__(self, event):
+ threading.Thread.__init__(self)
+ self.event = event
+
def run(self):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
global PORT
PORT = test_support.bind_port(sock, HOST, PORT)
sock.listen(1)
+ self.event.set()
conn, client = sock.accept()
self.buffer = b""
# collect data until quit message is seen
self.buffer = b""
+def start_echo_server():
+ event = threading.Event()
+ s = echo_server(event)
+ s.start()
+ event.wait()
+ event.clear()
+ time.sleep(0.01) # Give server time to start accepting.
+ return s, event
+
+
class TestAsynchat(unittest.TestCase):
usepoll = False
pass
def line_terminator_check(self, term, server_chunk):
- s = echo_server()
+ event = threading.Event()
+ s = echo_server(event)
s.chunk_size = server_chunk
s.start()
- time.sleep(0.5) # Give server time to initialize
+ event.wait()
+ event.clear()
+ time.sleep(0.01) # Give server time to start accepting.
c = echo_client(term)
c.push(b"hello ")
c.push(bytes("world%s" % term, "ascii"))
def numeric_terminator_check(self, termlen):
# Try reading a fixed number of bytes
- s = echo_server()
- s.start()
- time.sleep(0.5) # Give server time to initialize
+ s, event = start_echo_server()
c = echo_client(termlen)
data = b"hello world, I'm not dead yet!\n"
c.push(data)
def test_none_terminator(self):
# Try reading a fixed number of bytes
- s = echo_server()
- s.start()
- time.sleep(0.5) # Give server time to initialize
+ s, event = start_echo_server()
c = echo_client(None)
data = b"hello world, I'm not dead yet!\n"
c.push(data)
self.assertEqual(c.buffer, data)
def test_simple_producer(self):
- s = echo_server()
- s.start()
- time.sleep(0.5) # Give server time to initialize
+ s, event = start_echo_server()
c = echo_client(b'\n')
data = b"hello world\nI'm not dead yet!\n"
p = asynchat.simple_producer(data+SERVER_QUIT, buffer_size=8)
self.assertEqual(c.contents, [b"hello world", b"I'm not dead yet!"])
def test_string_producer(self):
- s = echo_server()
- s.start()
- time.sleep(0.5) # Give server time to initialize
+ s, event = start_echo_server()
c = echo_client(b'\n')
data = b"hello world\nI'm not dead yet!\n"
c.push_with_producer(data+SERVER_QUIT)
def test_empty_line(self):
# checks that empty lines are handled correctly
- s = echo_server()
- s.start()
- time.sleep(0.5) # Give server time to initialize
+ s, event = start_echo_server()
c = echo_client(b'\n')
c.push(b"hello world\n\nI'm not dead yet!\n")
c.push(SERVER_QUIT)
[b"hello world", b"", b"I'm not dead yet!"])
def test_close_when_done(self):
- s = echo_server()
- s.start()
- time.sleep(0.5) # Give server time to initialize
+ s, event = start_echo_server()
c = echo_client(b'\n')
c.push(b"hello world\nI'm not dead yet!\n")
c.push(SERVER_QUIT)
self.assertRaises(ValueError, {}.update, [(1, 2, 3)])
- # SF #1615701: make d.update(m) honor __getitem__() and keys() in dict subclasses
- class KeyUpperDict(dict):
- def __getitem__(self, key):
- return key.upper()
- d.clear()
- d.update(KeyUpperDict.fromkeys('abc'))
- self.assertEqual(d, {'a':'A', 'b':'B', 'c':'C'})
-
def test_fromkeys(self):
self.assertEqual(dict.fromkeys('abc'), {'a':None, 'b':None, 'c':None})
d = {}
>>> type(i)
<type 'generator'>
>>> [s for s in dir(i) if not s.startswith('_')]
-['close', 'gi_frame', 'gi_running', 'send', 'throw']
+['close', 'gi_code', 'gi_frame', 'gi_running', 'send', 'throw']
>>> print(i.__next__.__doc__)
x.__next__() <==> next(x)
>>> iter(i) is i
>>> print(next(g))
Traceback (most recent call last):
StopIteration
+
+
+Test the gi_code attribute
+
+>>> def f():
+... yield 5
+...
+>>> g = f()
+>>> g.gi_code is f.__code__
+True
+>>> next(g)
+5
+>>> next(g)
+Traceback (most recent call last):
+StopIteration
+>>> g.gi_code is f.__code__
+True
+
"""
# conjoin is a simple backtracking generator, named in honor of Icon's
from copy import copy, deepcopy
from pickle import dumps, loads
R = rational.Rational
+gcd = rational.gcd
+
+
+class GcdTest(unittest.TestCase):
+
+ def testMisc(self):
+ self.assertEquals(0, gcd(0, 0))
+ self.assertEquals(1, gcd(1, 0))
+ self.assertEquals(-1, gcd(-1, 0))
+ self.assertEquals(1, gcd(0, 1))
+ self.assertEquals(-1, gcd(0, -1))
+ self.assertEquals(1, gcd(7, 1))
+ self.assertEquals(-1, gcd(7, -1))
+ self.assertEquals(1, gcd(-23, 15))
+ self.assertEquals(12, gcd(120, 84))
+ self.assertEquals(-12, gcd(84, -120))
+
def _components(r):
return (r.numerator, r.denominator)
+
class RationalTest(unittest.TestCase):
def assertTypedEquals(self, expected, actual):
self.assertEquals((-3, 2), _components(R("-3/2 ")))
self.assertEquals((3, 2), _components(R(" 03/02 \n ")))
self.assertEquals((3, 2), _components(R(" 03/02 \n ")))
+ self.assertEquals((16, 5), _components(R(" 3.2 ")))
+ self.assertEquals((-16, 5), _components(R(" -3.2 ")))
self.assertRaisesMessage(
ZeroDivisionError, "Rational(3, 0)",
ValueError, "Invalid literal for Rational: + 3/2",
R, "+ 3/2")
self.assertRaisesMessage(
- # Only parse fractions, not decimals.
- ValueError, "Invalid literal for Rational: 3.2",
- R, "3.2")
+ # Avoid treating '.' as a regex special character.
+ ValueError, "Invalid literal for Rational: 3a2",
+ R, "3a2")
+ self.assertRaisesMessage(
+ # Only parse ordinary decimals, not scientific form.
+ ValueError, "Invalid literal for Rational: 3.2e4",
+ R, "3.2e4")
+ self.assertRaisesMessage(
+ # Don't accept combinations of decimals and rationals.
+ ValueError, "Invalid literal for Rational: 3/7.2",
+ R, "3/7.2")
+ self.assertRaisesMessage(
+ # Don't accept combinations of decimals and rationals.
+ ValueError, "Invalid literal for Rational: 3.2/7",
+ R, "3.2/7")
def testImmutable(self):
r = R(7, 3)
self.assertEqual(id(r), id(deepcopy(r)))
def test_main():
- run_unittest(RationalTest)
+ run_unittest(RationalTest, GcdTest)
if __name__ == '__main__':
test_main()
self.assertRaises(TypeError, resource.setrlimit, 42, 42, 42)
def test_fsize_ismax(self):
-
try:
(cur, max) = resource.getrlimit(resource.RLIMIT_FSIZE)
except AttributeError:
try:
f.write(b"Y")
f.flush()
+ # On some systems (e.g., Ubuntu on hppa) the flush()
+ # doesn't always cause the exception, but the close()
+ # does eventually. Try closing several times in
+ # an attempt to ensure the file is really synced and
+ # the exception raised.
+ for i in range(5):
+ f.close()
except IOError:
if not limit_set:
raise
resource.setrlimit(resource.RLIMIT_FSIZE, (cur, max))
finally:
f.close()
- os.unlink(test_support.TESTFN)
finally:
if limit_set:
resource.setrlimit(resource.RLIMIT_FSIZE, (cur, max))
+ test_support.unlink(test_support.TESTFN)
def test_fsize_toobig(self):
# Be sure that setrlimit is checking for really large values
from test import test_support
-def server(evt, ready):
+def server(evt):
serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serv.settimeout(3)
serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serv.bind(("", 9091))
serv.listen(5)
- ready.set()
+ evt.set()
try:
conn, addr = serv.accept()
except socket.timeout:
def setUp(self):
self.evt = threading.Event()
- ready = threading.Event()
- threading.Thread(target=server, args=(self.evt, ready)).start()
- ready.wait()
+ threading.Thread(target=server, args=(self.evt,)).start()
+ self.evt.wait()
+ self.evt.clear()
+ time.sleep(.1)
def tearDown(self):
self.evt.wait()
import os
import mimetools
+
+def _open_with_retry(func, host, *args, **kwargs):
+ # Connecting to remote hosts is flaky. Make it more robust
+ # by retrying the connection several times.
+ for i in range(3):
+ try:
+ return func(host, *args, **kwargs)
+ except IOError, last_exc:
+ continue
+ except:
+ raise
+ raise last_exc
+
+
class URLTimeoutTest(unittest.TestCase):
TIMEOUT = 10.0
socket.setdefaulttimeout(None)
def testURLread(self):
- f = urllib.urlopen("http://www.python.org/")
+ f = _open_with_retry(urllib.urlopen, "http://www.python.org/")
x = f.read()
class urlopenNetworkTests(unittest.TestCase):
"""
+ def urlopen(self, *args):
+ return _open_with_retry(urllib.urlopen, *args)
+
def test_basic(self):
# Simple test expected to pass.
- open_url = urllib.urlopen("http://www.python.org/")
+ open_url = self.urlopen("http://www.python.org/")
for attr in ("read", "readline", "readlines", "fileno", "close",
"info", "geturl"):
self.assert_(hasattr(open_url, attr), "object returned from "
def test_readlines(self):
# Test both readline and readlines.
- open_url = urllib.urlopen("http://www.python.org/")
+ open_url = self.urlopen("http://www.python.org/")
try:
self.assert_(isinstance(open_url.readline(), bytes),
"readline did not return bytes")
def test_info(self):
# Test 'info'.
- open_url = urllib.urlopen("http://www.python.org/")
+ open_url = self.urlopen("http://www.python.org/")
try:
info_obj = open_url.info()
finally:
def test_geturl(self):
# Make sure same URL as opened is returned by geturl.
URL = "http://www.python.org/"
- open_url = urllib.urlopen(URL)
+ open_url = self.urlopen(URL)
try:
gotten_url = open_url.geturl()
finally:
# test can't pass on Windows.
return
# Make sure fd returned by fileno is valid.
- open_url = urllib.urlopen("http://www.python.org/")
+ open_url = self.urlopen("http://www.python.org/")
fd = open_url.fileno()
FILE = os.fdopen(fd)
try:
class urlretrieveNetworkTests(unittest.TestCase):
"""Tests urllib.urlretrieve using the network."""
+ def urlretrieve(self, *args):
+ return _open_with_retry(urllib.urlretrieve, *args)
+
def test_basic(self):
# Test basic functionality.
- file_location,info = urllib.urlretrieve("http://www.python.org/")
+ file_location,info = self.urlretrieve("http://www.python.org/")
self.assert_(os.path.exists(file_location), "file location returned by"
" urlretrieve is not a valid path")
FILE = open(file_location)
def test_specified_path(self):
# Make sure that specifying the location of the file to write to works.
- file_location,info = urllib.urlretrieve("http://www.python.org/",
- test_support.TESTFN)
+ file_location,info = self.urlretrieve("http://www.python.org/",
+ test_support.TESTFN)
self.assertEqual(file_location, test_support.TESTFN)
self.assert_(os.path.exists(file_location))
FILE = open(file_location)
def test_header(self):
# Make sure header returned as 2nd value from urlretrieve is good.
- file_location, header = urllib.urlretrieve("http://www.python.org/")
+ file_location, header = self.urlretrieve("http://www.python.org/")
os.unlink(file_location)
self.assert_(isinstance(header, mimetools.Message),
"header is not an instance of mimetools.Message")
PORT = None
+# The evt is set twice. First when the server is ready to serve.
+# Second when the server has been shutdown. The user must clear
+# the event after it has been set the first time to catch the second set.
def http_server(evt, numrequests):
class TestInstanceClass:
def div(self, x, y):
serv.register_function(lambda x,y: x+y, 'add')
serv.register_function(my_function)
serv.register_instance(TestInstanceClass())
+ evt.set()
# handle up to 'numrequests' requests
while numrequests > 0:
PORT = None
evt.set()
-def stop_serving():
- global PORT
- if PORT is None:
- return
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- sock.connect(('localhost', int(PORT)))
- sock.send(b"")
- sock.close()
+# This function prevents errors like:
+# <ProtocolError for localhost:57527/RPC2: 500 Internal Server Error>
+def is_unavailable_exception(e):
+ '''Returns True if the given ProtocolError is the product of a server-side
+ exception caused by the 'temporarily unavailable' response sometimes
+ given by operations on non-blocking sockets.'''
+
+ # sometimes we get a -1 error code and/or empty headers
+ if e.errcode == -1 or e.headers is None:
+ return True
class SimpleServerTestCase(unittest.TestCase):
serv_args = (self.evt, 1)
threading.Thread(target=http_server, args=serv_args).start()
- # wait for port to be assigned to server
- n = 1000
- while n > 0 and PORT is None:
- time.sleep(0.001)
- n -= 1
-
- time.sleep(0.5)
+ # wait for the server to be ready
+ self.evt.wait()
+ self.evt.clear()
def tearDown(self):
# wait on the server thread to terminate
serv_args = (self.evt, 1)
threading.Thread(target=http_server, args=serv_args).start()
- # wait for port to be assigned to server
- n = 1000
- while n > 0 and PORT is None:
- time.sleep(0.001)
- n -= 1
-
- time.sleep(0.5)
+ # wait for the server to be ready
+ self.evt.wait()
+ self.evt.clear()
def tearDown(self):
# wait on the server thread to terminate
correctfile = os.path.join(os.getcwd(), fpath[1:])
else:
correctfile = os.path.join(os.getcwd(), fpath)
+ correctfile = os.path.normpath(correctfile)
self.assertEqual(writtenfile, correctfile)
NoArgTrueFalseFunction(has_ic)
NoArgTrueFalseFunction(has_il)
NoArgTrueFalseFunction(isendwin)
-NoArgNoReturnVoidFunction(filter)
NoArgNoReturnVoidFunction(flushinp)
NoArgNoReturnVoidFunction(noqiflush)
+static PyObject *
+PyCurses_filter(PyObject *self)
+{
+ /* not checking for PyCursesInitialised here since filter() must
+ be called before initscr() */
+ filter();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
static PyObject *
PyCurses_Color_Content(PyObject *self, PyObject *args)
{
"math domain error");
return NULL;
}
- /* Value is ~= x * 2**(e*SHIFT), so the log ~=
- log(x) + log(2) * e * SHIFT.
- CAUTION: e*SHIFT may overflow using int arithmetic,
+ /* Value is ~= x * 2**(e*PyLong_SHIFT), so the log ~=
+ log(x) + log(2) * e * PyLong_SHIFT.
+ CAUTION: e*PyLong_SHIFT may overflow using int arithmetic,
so force use of double. */
x = func(x) + (e * (double)PyLong_SHIFT) * func(2.0);
return PyFloat_FromDouble(x);
int reverse)
{
Py_ssize_t start = self->pos;
- Py_ssize_t end = self->size;
- char *needle;
+ Py_ssize_t end = self->size;
+ const char *needle;
Py_ssize_t len;
CHECK_VALID(NULL);
&needle, &len, &start, &end)) {
return NULL;
} else {
- char *p;
- char sign = reverse ? -1 : 1;
+ const char *p, *start_p, *end_p;
+ int sign = reverse ? -1 : 1;
if (start < 0)
start += self->size;
else if ((size_t)end > self->size)
end = self->size;
- start += (Py_ssize_t)self->data;
- end += (Py_ssize_t)self->data;
+ start_p = self->data + start;
+ end_p = self->data + end;
- for (p = (char *)(reverse ? end - len : start);
- p >= (char *)start && p + len <= (char *)end; p+=sign) {
+ for (p = (reverse ? end_p - len : start_p);
+ (p >= start_p) && (p + len <= end_p); p += sign) {
Py_ssize_t i;
for (i = 0; i < len && needle[i] == p[i]; ++i)
/* nothing */;
if ((size_t)(offset + size) > self->size) {
PyErr_SetString(PyExc_ValueError, "flush values out of range");
return NULL;
- } else {
+ }
#ifdef MS_WINDOWS
- return PyLong_FromLong((long)
- FlushViewOfFile(self->data+offset, size));
-#endif /* MS_WINDOWS */
-#ifdef UNIX
- /* XXX semantics of return value? */
- /* XXX flags for msync? */
- if (-1 == msync(self->data + offset, size,
- MS_SYNC))
- {
- PyErr_SetFromErrno(mmap_module_error);
- return NULL;
- }
- return PyLong_FromLong(0);
-#endif /* UNIX */
+ return PyLong_FromLong((long) FlushViewOfFile(self->data+offset, size));
+#elif defined(UNIX)
+ /* XXX semantics of return value? */
+ /* XXX flags for msync? */
+ if (-1 == msync(self->data + offset, size, MS_SYNC)) {
+ PyErr_SetFromErrno(mmap_module_error);
+ return NULL;
}
+ return PyLong_FromLong(0);
+#else
+ PyErr_SetString(PyExc_ValueError, "flush not supported on this system");
+ return NULL;
+#endif
}
static PyObject *
return -1;
}
mp = (PyDictObject*)a;
- if (PyDict_CheckExact(b)) {
+ if (PyDict_Check(b)) {
other = (PyDictObject*)b;
if (other == mp || other->ma_used == 0)
/* a.update(a) or a.update({}); nothing to do */
gen_traverse(PyGenObject *gen, visitproc visit, void *arg)
{
Py_VISIT((PyObject *)gen->gi_frame);
+ Py_VISIT(gen->gi_code);
return 0;
}
_PyObject_GC_UNTRACK(self);
Py_CLEAR(gen->gi_frame);
+ Py_CLEAR(gen->gi_code);
PyObject_GC_Del(gen);
}
static PyMemberDef gen_memberlist[] = {
{"gi_frame", T_OBJECT, offsetof(PyGenObject, gi_frame), READONLY},
{"gi_running", T_INT, offsetof(PyGenObject, gi_running), READONLY},
+ {"gi_code", T_OBJECT, offsetof(PyGenObject, gi_code), READONLY},
{NULL} /* Sentinel */
};
return NULL;
}
gen->gi_frame = f;
+ Py_INCREF(f->f_code);
+ gen->gi_code = (PyObject *)(f->f_code);
gen->gi_running = 0;
gen->gi_weakreflist = NULL;
_PyObject_GC_TRACK(gen);
if (n && size/n != Py_SIZE(a))
return PyErr_NoMemory();
if (size == 0)
- return PyList_New(0);
+ return PyList_New(0);
np = (PyListObject *) PyList_New(size);
if (np == NULL)
return NULL;
list_inplace_repeat(PyListObject *self, Py_ssize_t n)
{
PyObject **items;
- Py_ssize_t size, i, j, p, newsize;
+ Py_ssize_t size, i, j, p;
size = PyList_GET_SIZE(self);
- if (size == 0) {
+ if (size == 0 || n == 1) {
Py_INCREF(self);
return (PyObject *)self;
}
return (PyObject *)self;
}
- newsize = size * n;
- if (newsize/n != size)
+ if (size > PY_SSIZE_T_MAX / n) {
return PyErr_NoMemory();
- if (list_resize(self, newsize) == -1)
+ }
+
+ if (list_resize(self, size*n) == -1)
return NULL;
p = size;
{
PyObject *key, *it;
- if (PyAnySet_CheckExact(other))
+ if (PyAnySet_Check(other))
return set_merge(so, other);
if (PyDict_CheckExact(other)) {
if (result == NULL)
return NULL;
- if (PyAnySet_CheckExact(other)) {
+ if (PyAnySet_Check(other)) {
Py_ssize_t pos = 0;
setentry *entry;
if ((PyObject *)so == other)
return set_clear_internal(so);
- if (PyAnySet_CheckExact(other)) {
+ if (PyAnySet_Check(other)) {
setentry *entry;
Py_ssize_t pos = 0;
setentry *entry;
Py_ssize_t pos = 0;
- if (!PyAnySet_CheckExact(other) && !PyDict_CheckExact(other)) {
+ if (!PyAnySet_Check(other) && !PyDict_CheckExact(other)) {
result = set_copy(so);
if (result == NULL)
return NULL;
Py_RETURN_NONE;
}
- if (PyAnySet_CheckExact(other)) {
+ if (PyAnySet_Check(other)) {
Py_INCREF(other);
otherset = (PySetObject *)other;
} else {
setentry *entry;
Py_ssize_t pos = 0;
- if (!PyAnySet_CheckExact(other)) {
+ if (!PyAnySet_Check(other)) {
PyObject *tmp, *result;
tmp = make_new_set(&PySet_Type, other);
if (tmp == NULL)
{
PyObject *tmp, *result;
- if (!PyAnySet_CheckExact(other)) {
+ if (!PyAnySet_Check(other)) {
tmp = make_new_set(&PySet_Type, other);
if (tmp == NULL)
return NULL;
__version__ = "$Revision$"
import sys, os, imp, re, optparse
+from glob import glob
from distutils import log
from distutils import sysconfig
self.distribution.scripts = [os.path.join(srcdir, filename)
for filename in self.distribution.scripts]
+ # Python header files
+ headers = glob("Include/*.h") + ["pyconfig.h"]
+
for ext in self.extensions[:]:
ext.sources = [ find_module_file(filename, moddirlist)
for filename in ext.sources ]
if ext.depends is not None:
ext.depends = [find_module_file(filename, alldirlist)
for filename in ext.depends]
+ else:
+ ext.depends = []
+ # re-compile extensions if a header file has been changed
+ ext.depends.extend(headers)
+
ext.include_dirs.append( '.' ) # to get config.h
for incdir in incdirlist:
ext.include_dirs.append( os.path.join(srcdir, incdir) )