granicus.if.org Git - python/commitdiff
remove bsddb
author Benjamin Peterson <benjamin@python.org>
Wed, 3 Sep 2008 22:30:12 +0000 (22:30 +0000)
committer Benjamin Peterson <benjamin@python.org>
Wed, 3 Sep 2008 22:30:12 +0000 (22:30 +0000)
37 files changed:
Doc/library/bsddb.rst [deleted file]
Lib/bsddb/__init__.py [deleted file]
Lib/bsddb/db.py [deleted file]
Lib/bsddb/dbobj.py [deleted file]
Lib/bsddb/dbrecio.py [deleted file]
Lib/bsddb/dbshelve.py [deleted file]
Lib/bsddb/dbtables.py [deleted file]
Lib/bsddb/dbutils.py [deleted file]
Lib/bsddb/test/__init__.py [deleted file]
Lib/bsddb/test/test_all.py [deleted file]
Lib/bsddb/test/test_associate.py [deleted file]
Lib/bsddb/test/test_basics.py [deleted file]
Lib/bsddb/test/test_compare.py [deleted file]
Lib/bsddb/test/test_compat.py [deleted file]
Lib/bsddb/test/test_cursor_pget_bug.py [deleted file]
Lib/bsddb/test/test_dbobj.py [deleted file]
Lib/bsddb/test/test_dbshelve.py [deleted file]
Lib/bsddb/test/test_dbtables.py [deleted file]
Lib/bsddb/test/test_distributed_transactions.py [deleted file]
Lib/bsddb/test/test_early_close.py [deleted file]
Lib/bsddb/test/test_env_close.py [deleted file]
Lib/bsddb/test/test_get_none.py [deleted file]
Lib/bsddb/test/test_join.py [deleted file]
Lib/bsddb/test/test_lock.py [deleted file]
Lib/bsddb/test/test_misc.py [deleted file]
Lib/bsddb/test/test_pickle.py [deleted file]
Lib/bsddb/test/test_queue.py [deleted file]
Lib/bsddb/test/test_recno.py [deleted file]
Lib/bsddb/test/test_replication.py [deleted file]
Lib/bsddb/test/test_sequence.py [deleted file]
Lib/bsddb/test/test_thread.py [deleted file]
Lib/test/test_bsddb.py [deleted file]
Lib/test/test_bsddb3.py [deleted file]
Misc/NEWS
Modules/_bsddb.c [deleted file]
Modules/bsddb.h [deleted file]
setup.py

diff --git a/Doc/library/bsddb.rst b/Doc/library/bsddb.rst
deleted file mode 100644 (file)
index 9fde725..0000000
+++ /dev/null
@@ -1,205 +0,0 @@
-
-:mod:`bsddb` --- Interface to Berkeley DB library
-=================================================
-
-.. module:: bsddb
-   :synopsis: Interface to Berkeley DB database library
-.. sectionauthor:: Skip Montanaro <skip@pobox.com>
-
-
-The :mod:`bsddb` module provides an interface to the Berkeley DB library.  Users
-can create hash, btree or record based library files using the appropriate open
-call. Bsddb objects behave generally like dictionaries.  Keys and values must be
-strings, however, so to use other objects as keys or to store other kinds of
-objects the user must serialize them somehow, typically using
-:func:`marshal.dumps` or  :func:`pickle.dumps`.
-
-The :mod:`bsddb` module requires a Berkeley DB library version from 3.3 thru
-4.5.
-
-
-.. seealso::
-
-   http://pybsddb.sourceforge.net/
-      The website with documentation for the :mod:`bsddb.db` Python Berkeley DB
-      interface that closely mirrors the object oriented interface provided in
-      Berkeley DB 3 and 4.
-
-   http://www.oracle.com/database/berkeley-db/
-      The Berkeley DB library.
-
-A more modern DB, DBEnv and DBSequence object interface is available in the
-:mod:`bsddb.db` module which closely matches the Berkeley DB C API documented at
-the above URLs.  Additional features provided by the :mod:`bsddb.db` API include
-fine tuning, transactions, logging, and multiprocess concurrent database access.
-
-The following is a description of the legacy :mod:`bsddb` interface compatible
-with the old Python bsddb module.  Starting in Python 2.5 this interface should
-be safe for multithreaded access.  The :mod:`bsddb.db` API is recommended for
-threading users as it provides better control.
-
-The :mod:`bsddb` module defines the following functions that create objects that
-access the appropriate type of Berkeley DB file.  The first two arguments of
-each function are the same.  For ease of portability, only the first two
-arguments should be used in most instances.
-
-
-.. function:: hashopen(filename[, flag[, mode[, pgsize[, ffactor[, nelem[, cachesize[, lorder[, hflags]]]]]]]])
-
-   Open the hash format file named *filename*.  Files never intended to be
-   preserved on disk may be created by passing ``None`` as the  *filename*.  The
-   optional *flag* identifies the mode used to open the file.  It may be ``'r'``
-   (read only), ``'w'`` (read-write), ``'c'`` (read-write - create if necessary;
-   the default) or ``'n'`` (read-write - truncate to zero length).  The other
-   arguments are rarely used and are just passed to the low-level :cfunc:`dbopen`
-   function.  Consult the Berkeley DB documentation for their use and
-   interpretation.
-
-
-.. function:: btopen(filename[, flag[, mode[, btflags[, cachesize[, maxkeypage[, minkeypage[, pgsize[, lorder]]]]]]]])
-
-   Open the btree format file named *filename*.  Files never intended  to be
-   preserved on disk may be created by passing ``None`` as the  *filename*.  The
-   optional *flag* identifies the mode used to open the file.  It may be ``'r'``
-   (read only), ``'w'`` (read-write), ``'c'`` (read-write - create if necessary;
-   the default) or ``'n'`` (read-write - truncate to zero length).  The other
-   arguments are rarely used and are just passed to the low-level dbopen function.
-   Consult the Berkeley DB documentation for their use and interpretation.
-
-
-.. function:: rnopen(filename[, flag[, mode[, rnflags[, cachesize[, pgsize[, lorder[, rlen[, delim[, source[, pad]]]]]]]]]])
-
-   Open a DB record format file named *filename*.  Files never intended  to be
-   preserved on disk may be created by passing ``None`` as the  *filename*.  The
-   optional *flag* identifies the mode used to open the file.  It may be ``'r'``
-   (read only), ``'w'`` (read-write), ``'c'`` (read-write - create if necessary;
-   the default) or ``'n'`` (read-write - truncate to zero length).  The other
-   arguments are rarely used and are just passed to the low-level dbopen function.
-   Consult the Berkeley DB documentation for their use and interpretation.
-
-
-.. class:: StringKeys(db)
-
-   Wrapper class around a DB object that supports string keys (rather than bytes).
-   All keys are encoded as UTF-8, then passed to the underlying object.
-
-
-.. class:: StringValues(db)
-
-   Wrapper class around a DB object that supports string values (rather than bytes).
-   All values are encoded as UTF-8, then passed to the underlying object.
-
-
-.. seealso::
-
-   Module :mod:`dbm.bsd`
-      DBM-style interface to the :mod:`bsddb` module.
-
-
-.. _bsddb-objects:
-
-Hash, BTree and Record Objects
-------------------------------
-
-Once instantiated, hash, btree and record objects support the same methods as
-dictionaries.  In addition, they support the methods listed below.
-
-
-.. describe:: key in bsddbobject
-
-   Return ``True`` if the DB file contains the argument as a key.
-
-
-.. method:: bsddbobject.close()
-
-   Close the underlying file.  The object can no longer be accessed.  Since
-   there is no :meth:`open` method for these objects, to open the file again a
-   new :mod:`bsddb` module open function must be called.
-
-
-.. method:: bsddbobject.keys()
-
-   Return the list of keys contained in the DB file.  The order of the list is
-   unspecified and should not be relied on.  In particular, the order of the list
-   returned is different for different file formats.
-
-
-.. method:: bsddbobject.set_location(key)
-
-   Set the cursor to the item indicated by *key* and return a tuple containing the
-   key and its value.  For binary tree databases (opened using :func:`btopen`), if
-   *key* does not actually exist in the database, the cursor will point to the next
-   item in sorted order and return that key and value.  For other databases,
-   :exc:`KeyError` will be raised if *key* is not found in the database.
-
-
-.. method:: bsddbobject.first()
-
-   Set the cursor to the first item in the DB file and return it.  The order of
-   keys in the file is unspecified, except in the case of B-Tree databases. This
-   method raises :exc:`bsddb.error` if the database is empty.
-
-
-.. method:: bsddbobject.next()
-
-   Set the cursor to the next item in the DB file and return it.  The order of
-   keys in the file is unspecified, except in the case of B-Tree databases.
-
-
-.. method:: bsddbobject.previous()
-
-   Set the cursor to the previous item in the DB file and return it.  The order of
-   keys in the file is unspecified, except in the case of B-Tree databases.  This
-   is not supported on hashtable databases (those opened with :func:`hashopen`).
-
-
-.. method:: bsddbobject.last()
-
-   Set the cursor to the last item in the DB file and return it.  The order of keys
-   in the file is unspecified.  This is not supported on hashtable databases (those
-   opened with :func:`hashopen`). This method raises :exc:`bsddb.error` if the
-   database is empty.
-
-
-.. method:: bsddbobject.sync()
-
-   Synchronize the database on disk.
-
-Example::
-
-   >>> import bsddb
-   >>> db = bsddb.btopen('/tmp/spam.db', 'c')
-   >>> for i in range(10):
-   ...     db[str(i)] = '%d' % (i*i)
-   ... 
-   >>> db['3']
-   '9'
-   >>> db.keys()
-   ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
-   >>> db.first()
-   ('0', '0')
-   >>> db.next()
-   ('1', '1')
-   >>> db.last()
-   ('9', '81')
-   >>> db.set_location('2')
-   ('2', '4')
-   >>> db.previous() 
-   ('1', '1')
-   >>> for k, v in db.iteritems():
-   ...     print(k, v)
-   0 0
-   1 1
-   2 4
-   3 9
-   4 16
-   5 25
-   6 36
-   7 49
-   8 64
-   9 81
-   >>> '8' in db
-   True
-   >>> db.sync()
-   0
-
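The documentation above stresses that legacy bsddb objects only take string keys and values, so anything else has to be serialized first, typically with pickle.dumps or marshal.dumps. A minimal sketch of that pattern, assuming the removed bsddb module (or the standalone bsddb3 package that replaces it) is importable; the file name and key are invented for illustration:

    import pickle
    try:
        import bsddb                   # the package this commit removes
    except ImportError:
        import bsddb3 as bsddb         # standalone pybsddb distribution

    d = bsddb.btopen('/tmp/prefs.db', 'c')
    d['window-size'] = pickle.dumps((1024, 768))   # non-string values are pickled first
    width, height = pickle.loads(d['window-size'])
    d.close()
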
diff --git a/Lib/bsddb/__init__.py b/Lib/bsddb/__init__.py
deleted file mode 100644 (file)
index 5dea0f8..0000000
+++ /dev/null
@@ -1,444 +0,0 @@
-#----------------------------------------------------------------------
-#  Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA
-#  and Andrew Kuchling. All rights reserved.
-#
-#  Redistribution and use in source and binary forms, with or without
-#  modification, are permitted provided that the following conditions are
-#  met:
-#
-#    o Redistributions of source code must retain the above copyright
-#      notice, this list of conditions, and the disclaimer that follows.
-#
-#    o Redistributions in binary form must reproduce the above copyright
-#      notice, this list of conditions, and the following disclaimer in
-#      the documentation and/or other materials provided with the
-#      distribution.
-#
-#    o Neither the name of Digital Creations nor the names of its
-#      contributors may be used to endorse or promote products derived
-#      from this software without specific prior written permission.
-#
-#  THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS
-#  IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
-#  TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
-#  PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL DIGITAL
-#  CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-#  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-#  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
-#  OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-#  ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
-#  TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
-#  USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
-#  DAMAGE.
-#----------------------------------------------------------------------
-
-
-"""Support for Berkeley DB 4.0 through 4.7 with a simple interface.
-
-For the full featured object oriented interface use the bsddb.db module
-instead.  It mirrors the Oracle Berkeley DB C API.
-"""
-
-import sys
-absolute_import = (sys.version_info[0] >= 3)
-
-try:
-    if __name__ == 'bsddb3':
-        # import the _pybsddb binary, as it should be a more recent version
-        # from the standalone pybsddb add-on package than the version included
-        # with Python as bsddb._bsddb.
-        if absolute_import :
-            # Because this syntax is not valid before Python 2.5
-            exec("from . import _pybsddb")
-        else :
-            import _pybsddb
-        _bsddb = _pybsddb
-        from bsddb3.dbutils import DeadlockWrap as _DeadlockWrap
-    else:
-        import _bsddb
-        from bsddb.dbutils import DeadlockWrap as _DeadlockWrap
-except ImportError:
-    # Remove ourselves from sys.modules
-    import sys
-    del sys.modules[__name__]
-    raise
-
-# bsddb3 calls it db, but provide _db for backwards compatibility
-db = _db = _bsddb
-__version__ = db.__version__
-
-error = db.DBError  # So bsddb.error will mean something...
-
-#----------------------------------------------------------------------
-
-import sys, os
-
-from weakref import ref
-
-if sys.version_info[0:2] <= (2, 5) :
-    import UserDict
-    MutableMapping = UserDict.DictMixin
-else :
-    import collections
-    MutableMapping = collections.MutableMapping
-
-class _iter_mixin(MutableMapping):
-    def _make_iter_cursor(self):
-        cur = _DeadlockWrap(self.db.cursor)
-        key = id(cur)
-        self._cursor_refs[key] = ref(cur, self._gen_cref_cleaner(key))
-        return cur
-
-    def _gen_cref_cleaner(self, key):
-        # generate the function for the weakref callback here
-        # to ensure that we do not hold a strong reference to cur
-        # in the callback.
-        return lambda ref: self._cursor_refs.pop(key, None)
-
-    def __iter__(self):
-        self._kill_iteration = False
-        self._in_iter += 1
-        try:
-            try:
-                cur = self._make_iter_cursor()
-
-                # FIXME-20031102-greg: race condition.  cursor could
-                # be closed by another thread before this call.
-
-                # since we're only returning keys, we call the cursor
-                # methods with flags=0, dlen=0, dofs=0
-                key = _DeadlockWrap(cur.first, 0,0,0)[0]
-                yield key
-
-                next = getattr(cur, "next")
-                while 1:
-                    try:
-                        key = _DeadlockWrap(next, 0,0,0)[0]
-                        yield key
-                    except _bsddb.DBCursorClosedError:
-                        if self._kill_iteration:
-                            raise RuntimeError('Database changed size '
-                                               'during iteration.')
-                        cur = self._make_iter_cursor()
-                        # FIXME-20031101-greg: race condition.  cursor could
-                        # be closed by another thread before this call.
-                        _DeadlockWrap(cur.set, key,0,0,0)
-                        next = getattr(cur, "next")
-            except _bsddb.DBNotFoundError:
-                pass
-            except _bsddb.DBCursorClosedError:
-                # the database was modified during iteration.  abort.
-                pass
-# When Python 2.3 is no longer supported by bsddb3, we can change this to "finally"
-        except :
-            self._in_iter -= 1
-            raise
-
-        self._in_iter -= 1
-
-    def iteritems(self):
-        if not self.db:
-            return
-        self._kill_iteration = False
-        self._in_iter += 1
-        try:
-            try:
-                cur = self._make_iter_cursor()
-
-                # FIXME-20031102-greg: race condition.  cursor could
-                # be closed by another thread before this call.
-
-                kv = _DeadlockWrap(cur.first)
-                key = kv[0]
-                yield kv
-
-                next = getattr(cur, "next")
-                while 1:
-                    try:
-                        kv = _DeadlockWrap(next)
-                        key = kv[0]
-                        yield kv
-                    except _bsddb.DBCursorClosedError:
-                        if self._kill_iteration:
-                            raise RuntimeError('Database changed size '
-                                               'during iteration.')
-                        cur = self._make_iter_cursor()
-                        # FIXME-20031101-greg: race condition.  cursor could
-                        # be closed by another thread before this call.
-                        _DeadlockWrap(cur.set, key,0,0,0)
-                        next = getattr(cur, "next")
-            except _bsddb.DBNotFoundError:
-                pass
-            except _bsddb.DBCursorClosedError:
-                # the database was modified during iteration.  abort.
-                pass
-# When Python 2.3 is no longer supported by bsddb3, we can change this to "finally"
-        except :
-            self._in_iter -= 1
-            raise
-
-        self._in_iter -= 1
-
-
-class _DBWithCursor(_iter_mixin):
-    """
-    A simple wrapper around DB that makes it look like the bsddbobject in
-    the old module.  It uses a cursor as needed to provide DB traversal.
-    """
-    def __init__(self, db):
-        self.db = db
-        self.db.set_get_returns_none(0)
-
-        # FIXME-20031101-greg: I believe there is still the potential
-        # for deadlocks in a multithreaded environment if someone
-        # attempts to use any of the cursor interfaces in one
-        # thread while doing a put or delete in another thread.  The
-        # reason is that _checkCursor and _closeCursors are not atomic
-        # operations.  Doing our own locking around self.dbc,
-        # self.saved_dbc_key and self._cursor_refs could prevent this.
-        # TODO: A test case demonstrating the problem needs to be written.
-
-        # self.dbc is a DBCursor object used to implement the
-        # first/next/previous/last/set_location methods.
-        self.dbc = None
-        self.saved_dbc_key = None
-
-        # a collection of all DBCursor objects currently allocated
-        # by the _iter_mixin interface.
-        self._cursor_refs = {}
-        self._in_iter = 0
-        self._kill_iteration = False
-
-    def __del__(self):
-        self.close()
-
-    def _checkCursor(self):
-        if self.dbc is None:
-            self.dbc = _DeadlockWrap(self.db.cursor)
-            if self.saved_dbc_key is not None:
-                _DeadlockWrap(self.dbc.set, self.saved_dbc_key)
-                self.saved_dbc_key = None
-
-    # This method is needed for all non-cursor DB calls to avoid
-    # Berkeley DB deadlocks (due to being opened with DB_INIT_LOCK
-    # and DB_THREAD to be thread safe) when intermixing database
-    # operations that use the cursor internally with those that don't.
-    def _closeCursors(self, save=1):
-        if self.dbc:
-            c = self.dbc
-            self.dbc = None
-            if save:
-                try:
-                    self.saved_dbc_key = _DeadlockWrap(c.current, 0,0,0)[0]
-                except db.DBError:
-                    pass
-            _DeadlockWrap(c.close)
-            del c
-        for cref in list(self._cursor_refs.values()):
-            c = cref()
-            if c is not None:
-                _DeadlockWrap(c.close)
-
-    def _checkOpen(self):
-        if self.db is None:
-            raise error("BSDDB object has already been closed")
-
-    def isOpen(self):
-        return self.db is not None
-
-    def __len__(self):
-        self._checkOpen()
-        return _DeadlockWrap(lambda: len(self.db))  # len(self.db)
-
-    if sys.version_info[0:2] >= (2, 6) :
-        def __repr__(self) :
-            if self.isOpen() :
-                return repr(dict(_DeadlockWrap(self.db.items)))
-            return repr(dict())
-
-    def __getitem__(self, key):
-        self._checkOpen()
-        return _DeadlockWrap(lambda: self.db[key])  # self.db[key]
-
-    def __setitem__(self, key, value):
-        self._checkOpen()
-        self._closeCursors()
-        if self._in_iter and key not in self:
-            self._kill_iteration = True
-        def wrapF():
-            self.db[key] = value
-        _DeadlockWrap(wrapF)  # self.db[key] = value
-
-    def __delitem__(self, key):
-        self._checkOpen()
-        self._closeCursors()
-        if self._in_iter and key in self:
-            self._kill_iteration = True
-        def wrapF():
-            del self.db[key]
-        _DeadlockWrap(wrapF)  # del self.db[key]
-
-    def close(self):
-        self._closeCursors(save=0)
-        if self.dbc is not None:
-            _DeadlockWrap(self.dbc.close)
-        v = 0
-        if self.db is not None:
-            v = _DeadlockWrap(self.db.close)
-        self.dbc = None
-        self.db = None
-        return v
-
-    def keys(self):
-        self._checkOpen()
-        return _DeadlockWrap(self.db.keys)
-
-    def has_key(self, key):
-        self._checkOpen()
-        return _DeadlockWrap(self.db.has_key, key)
-
-    def set_location(self, key):
-        self._checkOpen()
-        self._checkCursor()
-        return _DeadlockWrap(self.dbc.set_range, key)
-
-    def __next__(self):  # Renamed by "2to3"
-        self._checkOpen()
-        self._checkCursor()
-        rv = _DeadlockWrap(getattr(self.dbc, "next"))
-        return rv
-
-    if sys.version_info[0] >= 3 :  # For "2to3" conversion
-        next = __next__
-
-    def previous(self):
-        self._checkOpen()
-        self._checkCursor()
-        rv = _DeadlockWrap(self.dbc.prev)
-        return rv
-
-    def first(self):
-        self._checkOpen()
-        # fix 1725856: don't needlessly try to restore our cursor position
-        self.saved_dbc_key = None
-        self._checkCursor()
-        rv = _DeadlockWrap(self.dbc.first)
-        return rv
-
-    def last(self):
-        self._checkOpen()
-        # fix 1725856: don't needlessly try to restore our cursor position
-        self.saved_dbc_key = None
-        self._checkCursor()
-        rv = _DeadlockWrap(self.dbc.last)
-        return rv
-
-    def sync(self):
-        self._checkOpen()
-        return _DeadlockWrap(self.db.sync)
-
-
-#----------------------------------------------------------------------
-# Compatibility object factory functions
-
-def hashopen(file, flag='c', mode=0o666, pgsize=None, ffactor=None, nelem=None,
-            cachesize=None, lorder=None, hflags=0):
-
-    flags = _checkflag(flag, file)
-    e = _openDBEnv(cachesize)
-    d = db.DB(e)
-    d.set_flags(hflags)
-    if pgsize is not None:    d.set_pagesize(pgsize)
-    if lorder is not None:    d.set_lorder(lorder)
-    if ffactor is not None:   d.set_h_ffactor(ffactor)
-    if nelem is not None:     d.set_h_nelem(nelem)
-    d.open(file, db.DB_HASH, flags, mode)
-    return _DBWithCursor(d)
-
-#----------------------------------------------------------------------
-
-def btopen(file, flag='c', mode=0o666,
-            btflags=0, cachesize=None, maxkeypage=None, minkeypage=None,
-            pgsize=None, lorder=None):
-
-    flags = _checkflag(flag, file)
-    e = _openDBEnv(cachesize)
-    d = db.DB(e)
-    if pgsize is not None: d.set_pagesize(pgsize)
-    if lorder is not None: d.set_lorder(lorder)
-    d.set_flags(btflags)
-    if minkeypage is not None: d.set_bt_minkey(minkeypage)
-    if maxkeypage is not None: d.set_bt_maxkey(maxkeypage)
-    d.open(file, db.DB_BTREE, flags, mode)
-    return _DBWithCursor(d)
-
-#----------------------------------------------------------------------
-
-
-def rnopen(file, flag='c', mode=0o666,
-            rnflags=0, cachesize=None, pgsize=None, lorder=None,
-            rlen=None, delim=None, source=None, pad=None):
-
-    flags = _checkflag(flag, file)
-    e = _openDBEnv(cachesize)
-    d = db.DB(e)
-    if pgsize is not None: d.set_pagesize(pgsize)
-    if lorder is not None: d.set_lorder(lorder)
-    d.set_flags(rnflags)
-    if delim is not None: d.set_re_delim(delim)
-    if rlen is not None: d.set_re_len(rlen)
-    if source is not None: d.set_re_source(source)
-    if pad is not None: d.set_re_pad(pad)
-    d.open(file, db.DB_RECNO, flags, mode)
-    return _DBWithCursor(d)
-
-#----------------------------------------------------------------------
-
-def _openDBEnv(cachesize):
-    e = db.DBEnv()
-    if cachesize is not None:
-        if cachesize >= 20480:
-            e.set_cachesize(0, cachesize)
-        else:
-            raise error("cachesize must be >= 20480")
-    e.set_lk_detect(db.DB_LOCK_DEFAULT)
-    e.open('.', db.DB_PRIVATE | db.DB_CREATE | db.DB_THREAD | db.DB_INIT_LOCK | db.DB_INIT_MPOOL)
-    return e
-
-def _checkflag(flag, file):
-    if flag == 'r':
-        flags = db.DB_RDONLY
-    elif flag == 'rw':
-        flags = 0
-    elif flag == 'w':
-        flags =  db.DB_CREATE
-    elif flag == 'c':
-        flags =  db.DB_CREATE
-    elif flag == 'n':
-        flags = db.DB_CREATE
-        #flags = db.DB_CREATE | db.DB_TRUNCATE
-        # we used the db.DB_TRUNCATE flag for this before, but Berkeley DB
-        # 4.2.52 changed to disallow truncate within txn environments.
-        if file is not None and os.path.isfile(file):
-            os.unlink(file)
-    else:
-        raise error("flags should be one of 'r', 'w', 'c' or 'n'")
-    return flags | db.DB_THREAD
-
-#----------------------------------------------------------------------
-
-
-# This is a silly little hack that allows apps to continue to use the
-# DB_THREAD flag even on systems without threads without freaking out
-# Berkeley DB.
-#
-# This assumes that if Python was built with thread support then
-# Berkeley DB was too.
-
-try:
-    import _thread
-    del _thread
-except ImportError:
-    db.DB_THREAD = 0
-
-#----------------------------------------------------------------------
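All three compatibility factories above (hashopen, btopen, rnopen) funnel their flag argument through _checkflag, which maps the dbm-style single-character flags onto Berkeley DB open flags and, for 'n', unlinks the existing file instead of relying on DB_TRUNCATE. A standalone sketch of that mapping, with the flag constants stubbed out since the real values come from bsddb.db:

    import os

    # Stand-in values; the real constants come from bsddb.db.
    DB_RDONLY, DB_CREATE, DB_THREAD = 0x400, 0x1, 0x10

    def check_flag(flag, filename):
        """Sketch of _checkflag: map 'r'/'rw'/'w'/'c'/'n' to open flags."""
        if flag == 'r':
            flags = DB_RDONLY
        elif flag == 'rw':
            flags = 0
        elif flag in ('w', 'c', 'n'):
            flags = DB_CREATE
            if flag == 'n' and filename is not None and os.path.isfile(filename):
                os.unlink(filename)    # 'n' truncates by removing the old file
        else:
            raise ValueError("flags should be one of 'r', 'w', 'c' or 'n'")
        return flags | DB_THREAD
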
diff --git a/Lib/bsddb/db.py b/Lib/bsddb/db.py
deleted file mode 100644 (file)
index c3aee30..0000000
+++ /dev/null
@@ -1,60 +0,0 @@
-#----------------------------------------------------------------------
-#  Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA
-#  and Andrew Kuchling. All rights reserved.
-#
-#  Redistribution and use in source and binary forms, with or without
-#  modification, are permitted provided that the following conditions are
-#  met:
-#
-#    o Redistributions of source code must retain the above copyright
-#      notice, this list of conditions, and the disclaimer that follows.
-#
-#    o Redistributions in binary form must reproduce the above copyright
-#      notice, this list of conditions, and the following disclaimer in
-#      the documentation and/or other materials provided with the
-#      distribution.
-#
-#    o Neither the name of Digital Creations nor the names of its
-#      contributors may be used to endorse or promote products derived
-#      from this software without specific prior written permission.
-#
-#  THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS
-#  IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
-#  TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
-#  PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL DIGITAL
-#  CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-#  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-#  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
-#  OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-#  ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
-#  TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
-#  USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
-#  DAMAGE.
-#----------------------------------------------------------------------
-
-
-# This module is just a placeholder for possible future expansion, in
-# case we ever want to augment the stuff in _db in any way.  For now
-# it simply imports everything from _db.
-
-import sys
-absolute_import = (sys.version_info[0] >= 3)
-
-if not absolute_import :
-    if __name__.startswith('bsddb3.') :
-        # import the _pybsddb binary, as it should be a more recent version
-        # from the standalone pybsddb add-on package than the version included
-        # with Python as bsddb._bsddb.
-        from _pybsddb import *
-        from _pybsddb import __version__
-    else:
-        from _bsddb import *
-        from _bsddb import __version__
-else :
-    # Because this syntax is not valid before Python 2.5
-    if __name__.startswith('bsddb3.') :
-        exec("from ._pybsddb import *")
-        exec("from ._pybsddb import __version__")
-    else :
-        exec("from ._bsddb import *")
-        exec("from ._bsddb import __version__")
diff --git a/Lib/bsddb/dbobj.py b/Lib/bsddb/dbobj.py
deleted file mode 100644 (file)
index 299e9db..0000000
+++ /dev/null
@@ -1,268 +0,0 @@
-#-------------------------------------------------------------------------
-#  This file contains real Python object wrappers for DB and DBEnv
-#  C "objects" that can be usefully subclassed.  The previous SWIG
-#  based interface allowed this thanks to SWIG's shadow classes.
-#   --  Gregory P. Smith
-#-------------------------------------------------------------------------
-#
-# (C) Copyright 2001  Autonomous Zone Industries
-#
-# License:  This is free software.  You may use this software for any
-#           purpose including modification/redistribution, so long as
-#           this header remains intact and that you do not claim any
-#           rights of ownership or authorship of this software.  This
-#           software has been tested, but no warranty is expressed or
-#           implied.
-#
-
-#
-# TODO it would be *really nice* to have an automatic shadow class populator
-# so that new methods don't need to be added  here manually after being
-# added to _bsddb.c.
-#
-
-import sys
-absolute_import = (sys.version_info[0] >= 3)
-if absolute_import :
-    # Because this syntax is not valid before Python 2.5
-    exec("from . import db")
-else :
-    from . import db
-
-if sys.version_info[0:2] <= (2, 5) :
-    try:
-        from UserDict import DictMixin
-    except ImportError:
-        # DictMixin is new in Python 2.3
-        class DictMixin: pass
-    MutableMapping = DictMixin
-else :
-    import collections
-    MutableMapping = collections.MutableMapping
-
-class DBEnv:
-    def __init__(self, *args, **kwargs):
-        self._cobj = db.DBEnv(*args, **kwargs)
-
-    def close(self, *args, **kwargs):
-        return self._cobj.close(*args, **kwargs)
-    def open(self, *args, **kwargs):
-        return self._cobj.open(*args, **kwargs)
-    def remove(self, *args, **kwargs):
-        return self._cobj.remove(*args, **kwargs)
-    def set_shm_key(self, *args, **kwargs):
-        return self._cobj.set_shm_key(*args, **kwargs)
-    def set_cachesize(self, *args, **kwargs):
-        return self._cobj.set_cachesize(*args, **kwargs)
-    def set_data_dir(self, *args, **kwargs):
-        return self._cobj.set_data_dir(*args, **kwargs)
-    def set_flags(self, *args, **kwargs):
-        return self._cobj.set_flags(*args, **kwargs)
-    def set_lg_bsize(self, *args, **kwargs):
-        return self._cobj.set_lg_bsize(*args, **kwargs)
-    def set_lg_dir(self, *args, **kwargs):
-        return self._cobj.set_lg_dir(*args, **kwargs)
-    def set_lg_max(self, *args, **kwargs):
-        return self._cobj.set_lg_max(*args, **kwargs)
-    def set_lk_detect(self, *args, **kwargs):
-        return self._cobj.set_lk_detect(*args, **kwargs)
-    if db.version() < (4,5):
-        def set_lk_max(self, *args, **kwargs):
-            return self._cobj.set_lk_max(*args, **kwargs)
-    def set_lk_max_locks(self, *args, **kwargs):
-        return self._cobj.set_lk_max_locks(*args, **kwargs)
-    def set_lk_max_lockers(self, *args, **kwargs):
-        return self._cobj.set_lk_max_lockers(*args, **kwargs)
-    def set_lk_max_objects(self, *args, **kwargs):
-        return self._cobj.set_lk_max_objects(*args, **kwargs)
-    def set_mp_mmapsize(self, *args, **kwargs):
-        return self._cobj.set_mp_mmapsize(*args, **kwargs)
-    def set_timeout(self, *args, **kwargs):
-        return self._cobj.set_timeout(*args, **kwargs)
-    def set_tmp_dir(self, *args, **kwargs):
-        return self._cobj.set_tmp_dir(*args, **kwargs)
-    def txn_begin(self, *args, **kwargs):
-        return self._cobj.txn_begin(*args, **kwargs)
-    def txn_checkpoint(self, *args, **kwargs):
-        return self._cobj.txn_checkpoint(*args, **kwargs)
-    def txn_stat(self, *args, **kwargs):
-        return self._cobj.txn_stat(*args, **kwargs)
-    def set_tx_max(self, *args, **kwargs):
-        return self._cobj.set_tx_max(*args, **kwargs)
-    def set_tx_timestamp(self, *args, **kwargs):
-        return self._cobj.set_tx_timestamp(*args, **kwargs)
-    def lock_detect(self, *args, **kwargs):
-        return self._cobj.lock_detect(*args, **kwargs)
-    def lock_get(self, *args, **kwargs):
-        return self._cobj.lock_get(*args, **kwargs)
-    def lock_id(self, *args, **kwargs):
-        return self._cobj.lock_id(*args, **kwargs)
-    def lock_put(self, *args, **kwargs):
-        return self._cobj.lock_put(*args, **kwargs)
-    def lock_stat(self, *args, **kwargs):
-        return self._cobj.lock_stat(*args, **kwargs)
-    def log_archive(self, *args, **kwargs):
-        return self._cobj.log_archive(*args, **kwargs)
-
-    def set_get_returns_none(self, *args, **kwargs):
-        return self._cobj.set_get_returns_none(*args, **kwargs)
-
-    def log_stat(self, *args, **kwargs):
-        return self._cobj.log_stat(*args, **kwargs)
-
-    if db.version() >= (4,1):
-        def dbremove(self, *args, **kwargs):
-            return self._cobj.dbremove(*args, **kwargs)
-        def dbrename(self, *args, **kwargs):
-            return self._cobj.dbrename(*args, **kwargs)
-        def set_encrypt(self, *args, **kwargs):
-            return self._cobj.set_encrypt(*args, **kwargs)
-
-    if db.version() >= (4,4):
-        def lsn_reset(self, *args, **kwargs):
-            return self._cobj.lsn_reset(*args, **kwargs)
-
-
-class DB(MutableMapping):
-    def __init__(self, dbenv, *args, **kwargs):
-        # give it the proper DBEnv C object that it's expecting
-        self._cobj = db.DB(*(dbenv._cobj,) + args, **kwargs)
-
-    # TODO are there other dict methods that need to be overridden?
-    def __len__(self):
-        return len(self._cobj)
-    def __getitem__(self, arg):
-        return self._cobj[arg]
-    def __setitem__(self, key, value):
-        self._cobj[key] = value
-    def __delitem__(self, arg):
-        del self._cobj[arg]
-
-    if sys.version_info[0:2] >= (2, 6) :
-        def __iter__(self) :
-            return self._cobj.__iter__()
-
-    def append(self, *args, **kwargs):
-        return self._cobj.append(*args, **kwargs)
-    def associate(self, *args, **kwargs):
-        return self._cobj.associate(*args, **kwargs)
-    def close(self, *args, **kwargs):
-        return self._cobj.close(*args, **kwargs)
-    def consume(self, *args, **kwargs):
-        return self._cobj.consume(*args, **kwargs)
-    def consume_wait(self, *args, **kwargs):
-        return self._cobj.consume_wait(*args, **kwargs)
-    def cursor(self, *args, **kwargs):
-        return self._cobj.cursor(*args, **kwargs)
-    def delete(self, *args, **kwargs):
-        return self._cobj.delete(*args, **kwargs)
-    def fd(self, *args, **kwargs):
-        return self._cobj.fd(*args, **kwargs)
-    def get(self, *args, **kwargs):
-        return self._cobj.get(*args, **kwargs)
-    def pget(self, *args, **kwargs):
-        return self._cobj.pget(*args, **kwargs)
-    def get_both(self, *args, **kwargs):
-        return self._cobj.get_both(*args, **kwargs)
-    def get_byteswapped(self, *args, **kwargs):
-        return self._cobj.get_byteswapped(*args, **kwargs)
-    def get_size(self, *args, **kwargs):
-        return self._cobj.get_size(*args, **kwargs)
-    def get_type(self, *args, **kwargs):
-        return self._cobj.get_type(*args, **kwargs)
-    def join(self, *args, **kwargs):
-        return self._cobj.join(*args, **kwargs)
-    def key_range(self, *args, **kwargs):
-        return self._cobj.key_range(*args, **kwargs)
-    def has_key(self, *args, **kwargs):
-        return self._cobj.has_key(*args, **kwargs)
-    def items(self, *args, **kwargs):
-        return self._cobj.items(*args, **kwargs)
-    def keys(self, *args, **kwargs):
-        return self._cobj.keys(*args, **kwargs)
-    def open(self, *args, **kwargs):
-        return self._cobj.open(*args, **kwargs)
-    def put(self, *args, **kwargs):
-        return self._cobj.put(*args, **kwargs)
-    def remove(self, *args, **kwargs):
-        return self._cobj.remove(*args, **kwargs)
-    def rename(self, *args, **kwargs):
-        return self._cobj.rename(*args, **kwargs)
-    def set_bt_minkey(self, *args, **kwargs):
-        return self._cobj.set_bt_minkey(*args, **kwargs)
-    def set_bt_compare(self, *args, **kwargs):
-        return self._cobj.set_bt_compare(*args, **kwargs)
-    def set_cachesize(self, *args, **kwargs):
-        return self._cobj.set_cachesize(*args, **kwargs)
-    def set_flags(self, *args, **kwargs):
-        return self._cobj.set_flags(*args, **kwargs)
-    def set_h_ffactor(self, *args, **kwargs):
-        return self._cobj.set_h_ffactor(*args, **kwargs)
-    def set_h_nelem(self, *args, **kwargs):
-        return self._cobj.set_h_nelem(*args, **kwargs)
-    def set_lorder(self, *args, **kwargs):
-        return self._cobj.set_lorder(*args, **kwargs)
-    def set_pagesize(self, *args, **kwargs):
-        return self._cobj.set_pagesize(*args, **kwargs)
-    def set_re_delim(self, *args, **kwargs):
-        return self._cobj.set_re_delim(*args, **kwargs)
-    def set_re_len(self, *args, **kwargs):
-        return self._cobj.set_re_len(*args, **kwargs)
-    def set_re_pad(self, *args, **kwargs):
-        return self._cobj.set_re_pad(*args, **kwargs)
-    def set_re_source(self, *args, **kwargs):
-        return self._cobj.set_re_source(*args, **kwargs)
-    def set_q_extentsize(self, *args, **kwargs):
-        return self._cobj.set_q_extentsize(*args, **kwargs)
-    def stat(self, *args, **kwargs):
-        return self._cobj.stat(*args, **kwargs)
-    def sync(self, *args, **kwargs):
-        return self._cobj.sync(*args, **kwargs)
-    def type(self, *args, **kwargs):
-        return self._cobj.type(*args, **kwargs)
-    def upgrade(self, *args, **kwargs):
-        return self._cobj.upgrade(*args, **kwargs)
-    def values(self, *args, **kwargs):
-        return self._cobj.values(*args, **kwargs)
-    def verify(self, *args, **kwargs):
-        return self._cobj.verify(*args, **kwargs)
-    def set_get_returns_none(self, *args, **kwargs):
-        return self._cobj.set_get_returns_none(*args, **kwargs)
-
-    if db.version() >= (4,1):
-        def set_encrypt(self, *args, **kwargs):
-            return self._cobj.set_encrypt(*args, **kwargs)
-
-
-class DBSequence:
-    def __init__(self, *args, **kwargs):
-        self._cobj = db.DBSequence(*args, **kwargs)
-
-    def close(self, *args, **kwargs):
-        return self._cobj.close(*args, **kwargs)
-    def get(self, *args, **kwargs):
-        return self._cobj.get(*args, **kwargs)
-    def get_dbp(self, *args, **kwargs):
-        return self._cobj.get_dbp(*args, **kwargs)
-    def get_key(self, *args, **kwargs):
-        return self._cobj.get_key(*args, **kwargs)
-    def init_value(self, *args, **kwargs):
-        return self._cobj.init_value(*args, **kwargs)
-    def open(self, *args, **kwargs):
-        return self._cobj.open(*args, **kwargs)
-    def remove(self, *args, **kwargs):
-        return self._cobj.remove(*args, **kwargs)
-    def stat(self, *args, **kwargs):
-        return self._cobj.stat(*args, **kwargs)
-    def set_cachesize(self, *args, **kwargs):
-        return self._cobj.set_cachesize(*args, **kwargs)
-    def set_flags(self, *args, **kwargs):
-        return self._cobj.set_flags(*args, **kwargs)
-    def set_range(self, *args, **kwargs):
-        return self._cobj.set_range(*args, **kwargs)
-    def get_cachesize(self, *args, **kwargs):
-        return self._cobj.get_cachesize(*args, **kwargs)
-    def get_flags(self, *args, **kwargs):
-        return self._cobj.get_flags(*args, **kwargs)
-    def get_range(self, *args, **kwargs):
-        return self._cobj.get_range(*args, **kwargs)
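As its header comment says, dbobj.py wraps the C-level DB and DBEnv objects in plain Python classes precisely so they can be usefully subclassed. A hypothetical sketch of that use, assuming bsddb.dbobj is importable; LoggingDB, the environment path, and the file name are invented for illustration:

    from bsddb import db, dbobj

    class LoggingDB(dbobj.DB):
        """A DB subclass that remembers every key written to it."""
        def __init__(self, dbenv, *args, **kwargs):
            dbobj.DB.__init__(self, dbenv, *args, **kwargs)
            self.written = []

        def put(self, key, value, *args, **kwargs):
            self.written.append(key)                       # extra behaviour added in Python
            return dbobj.DB.put(self, key, value, *args, **kwargs)

    env = dbobj.DBEnv()
    env.open('/tmp/dbobj-example', db.DB_CREATE | db.DB_INIT_MPOOL)
    table = LoggingDB(env)
    table.open('example.db', None, db.DB_HASH, db.DB_CREATE)
    table.put('alpha', '1')
    table.close()
    env.close()
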
diff --git a/Lib/bsddb/dbrecio.py b/Lib/bsddb/dbrecio.py
deleted file mode 100644 (file)
index 932ce2e..0000000
+++ /dev/null
@@ -1,190 +0,0 @@
-
-"""
-File-like objects that read from or write to a bsddb record.
-
-This implements (nearly) all stdio methods.
-
-f = DBRecIO(db, key, txn=None)
-f.close()           # explicitly release resources held
-flag = f.isatty()   # always false
-pos = f.tell()      # get current position
-f.seek(pos)         # set current position
-f.seek(pos, mode)   # mode 0: absolute; 1: relative; 2: relative to EOF
-buf = f.read()      # read until EOF
-buf = f.read(n)     # read up to n bytes
-f.truncate([size])  # truncate file to at most size (default: current pos)
-f.write(buf)        # write at current position
-f.writelines(list)  # for line in list: f.write(line)
-
-Notes:
-- fileno() is left unimplemented so that code which uses it triggers
-  an exception early.
-- There's a simple test set (see end of this file) - not yet updated
-  for DBRecIO.
-- readline() is not implemented yet.
-
-
-From:
-    Itamar Shtull-Trauring <itamar@maxnm.com>
-"""
-
-import errno
-import string
-
-class DBRecIO:
-    def __init__(self, db, key, txn=None):
-        self.db = db
-        self.key = key
-        self.txn = txn
-        self.len = None
-        self.pos = 0
-        self.closed = 0
-        self.softspace = 0
-
-    def close(self):
-        if not self.closed:
-            self.closed = 1
-            del self.db, self.txn
-
-    def isatty(self):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-        return 0
-
-    def seek(self, pos, mode = 0):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-        if mode == 1:
-            pos = pos + self.pos
-        elif mode == 2:
-            pos = pos + self.len
-        self.pos = max(0, pos)
-
-    def tell(self):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-        return self.pos
-
-    def read(self, n = -1):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-        if n < 0:
-            newpos = self.len
-        else:
-            newpos = min(self.pos+n, self.len)
-
-        dlen = newpos - self.pos
-
-        r = self.db.get(self.key, txn=self.txn, dlen=dlen, doff=self.pos)
-        self.pos = newpos
-        return r
-
-    __fixme = """
-    def readline(self, length=None):
-        if self.closed:
-            raise ValueError, "I/O operation on closed file"
-        if self.buflist:
-            self.buf = self.buf + string.joinfields(self.buflist, '')
-            self.buflist = []
-        i = string.find(self.buf, '\n', self.pos)
-        if i < 0:
-            newpos = self.len
-        else:
-            newpos = i+1
-        if length is not None:
-            if self.pos + length < newpos:
-                newpos = self.pos + length
-        r = self.buf[self.pos:newpos]
-        self.pos = newpos
-        return r
-
-    def readlines(self, sizehint = 0):
-        total = 0
-        lines = []
-        line = self.readline()
-        while line:
-            lines.append(line)
-            total += len(line)
-            if 0 < sizehint <= total:
-                break
-            line = self.readline()
-        return lines
-    """
-
-    def truncate(self, size=None):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-        if size is None:
-            size = self.pos
-        elif size < 0:
-            raise IOError(errno.EINVAL,
-                                      "Negative size not allowed")
-        elif size < self.pos:
-            self.pos = size
-        self.db.put(self.key, "", txn=self.txn, dlen=self.len-size, doff=size)
-
-    def write(self, s):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-        if not s: return
-        if self.pos > self.len:
-            self.buflist.append('\0'*(self.pos - self.len))
-            self.len = self.pos
-        newpos = self.pos + len(s)
-        self.db.put(self.key, s, txn=self.txn, dlen=len(s), doff=self.pos)
-        self.pos = newpos
-
-    def writelines(self, list):
-        self.write(string.joinfields(list, ''))
-
-    def flush(self):
-        if self.closed:
-            raise ValueError("I/O operation on closed file")
-
-
-"""
-# A little test suite
-
-def _test():
-    import sys
-    if sys.argv[1:]:
-        file = sys.argv[1]
-    else:
-        file = '/etc/passwd'
-    lines = open(file, 'r').readlines()
-    text = open(file, 'r').read()
-    f = StringIO()
-    for line in lines[:-2]:
-        f.write(line)
-    f.writelines(lines[-2:])
-    if f.getvalue() != text:
-        raise RuntimeError, 'write failed'
-    length = f.tell()
-    print 'File length =', length
-    f.seek(len(lines[0]))
-    f.write(lines[1])
-    f.seek(0)
-    print 'First line =', repr(f.readline())
-    here = f.tell()
-    line = f.readline()
-    print 'Second line =', repr(line)
-    f.seek(-len(line), 1)
-    line2 = f.read(len(line))
-    if line != line2:
-        raise RuntimeError, 'bad result after seek back'
-    f.seek(len(line2), 1)
-    list = f.readlines()
-    line = list[-1]
-    f.seek(f.tell() - len(line))
-    line2 = f.read()
-    if line != line2:
-        raise RuntimeError, 'bad result after seek back from EOF'
-    print 'Read', len(list), 'more lines'
-    print 'File length =', f.tell()
-    if f.tell() != length:
-        raise RuntimeError, 'bad length'
-    f.close()
-
-if __name__ == '__main__':
-    _test()
-"""
diff --git a/Lib/bsddb/dbshelve.py b/Lib/bsddb/dbshelve.py
deleted file mode 100644 (file)
index eaddaf9..0000000
+++ /dev/null
@@ -1,370 +0,0 @@
-#!/bin/env python
-#------------------------------------------------------------------------
-#           Copyright (c) 1997-2001 by Total Control Software
-#                         All Rights Reserved
-#------------------------------------------------------------------------
-#
-# Module Name:  dbShelve.py
-#
-# Description:  A reimplementation of the standard shelve.py that
-#               forces the use of cPickle, and DB.
-#
-# Creation Date:    11/3/97 3:39:04PM
-#
-# License:      This is free software.  You may use this software for any
-#               purpose including modification/redistribution, so long as
-#               this header remains intact and that you do not claim any
-#               rights of ownership or authorship of this software.  This
-#               software has been tested, but no warranty is expressed or
-#               implied.
-#
-# 13-Dec-2000:  Updated to be used with the new bsddb3 package.
-#               Added DBShelfCursor class.
-#
-#------------------------------------------------------------------------
-
-"""Manage shelves of pickled objects using bsddb database files for the
-storage.
-"""
-
-#------------------------------------------------------------------------
-
-import pickle
-import sys
-
-import sys
-absolute_import = (sys.version_info[0] >= 3)
-if absolute_import :
-    # Because this syntax is not valid before Python 2.5
-    exec("from . import db")
-else :
-    from . import db
-
-#At version 2.3 cPickle switched to using protocol instead of bin
-if sys.version_info[:3] >= (2, 3, 0):
-    HIGHEST_PROTOCOL = pickle.HIGHEST_PROTOCOL
-# In python 2.3.*, "cPickle.dumps" accepts no
-# named parameters. "pickle.dumps" accepts them,
-# so this seems a bug.
-    if sys.version_info[:3] < (2, 4, 0):
-        def _dumps(object, protocol):
-            return pickle.dumps(object, protocol)
-    else :
-        def _dumps(object, protocol):
-            return pickle.dumps(object, protocol=protocol)
-
-else:
-    HIGHEST_PROTOCOL = None
-    def _dumps(object, protocol):
-        return pickle.dumps(object, bin=protocol)
-
-
-if sys.version_info[0:2] <= (2, 5) :
-    try:
-        from UserDict import DictMixin
-    except ImportError:
-        # DictMixin is new in Python 2.3
-        class DictMixin: pass
-    MutableMapping = DictMixin
-else :
-    import collections
-    MutableMapping = collections.MutableMapping
-
-#------------------------------------------------------------------------
-
-
-def open(filename, flags=db.DB_CREATE, mode=0o660, filetype=db.DB_HASH,
-         dbenv=None, dbname=None):
-    """
-    A simple factory function for compatibility with the standard
-    shelve.py module.  It can be used like this, where key is a string
-    and data is a pickleable object:
-
-        from bsddb import dbshelve
-        db = dbshelve.open(filename)
-
-        db[key] = data
-
-        db.close()
-    """
-    if type(flags) == type(''):
-        sflag = flags
-        if sflag == 'r':
-            flags = db.DB_RDONLY
-        elif sflag == 'rw':
-            flags = 0
-        elif sflag == 'w':
-            flags =  db.DB_CREATE
-        elif sflag == 'c':
-            flags =  db.DB_CREATE
-        elif sflag == 'n':
-            flags = db.DB_TRUNCATE | db.DB_CREATE
-        else:
-            raise db.DBError("flags should be one of 'r', 'w', 'c' or 'n' or use the bsddb.db.DB_* flags")
-
-    d = DBShelf(dbenv)
-    d.open(filename, dbname, filetype, flags, mode)
-    return d
-
-#---------------------------------------------------------------------------
-
-class DBShelveError(db.DBError): pass
-
-
-class DBShelf(MutableMapping):
-    """A shelf to hold pickled objects, built upon a bsddb DB object.  It
-    automatically pickles/unpickles data objects going to/from the DB.
-    """
-    def __init__(self, dbenv=None):
-        self.db = db.DB(dbenv)
-        self._closed = True
-        if HIGHEST_PROTOCOL:
-            self.protocol = HIGHEST_PROTOCOL
-        else:
-            self.protocol = 1
-
-
-    def __del__(self):
-        self.close()
-
-
-    def __getattr__(self, name):
-        """Many methods we can just pass through to the DB object.
-        (See below)
-        """
-        return getattr(self.db, name)
-
-
-    #-----------------------------------
-    # Dictionary access methods
-
-    def __len__(self):
-        return len(self.db)
-
-
-    def __getitem__(self, key):
-        data = self.db[key]
-        return pickle.loads(data)
-
-
-    def __setitem__(self, key, value):
-        data = _dumps(value, self.protocol)
-        self.db[key] = data
-
-
-    def __delitem__(self, key):
-        del self.db[key]
-
-
-    def keys(self, txn=None):
-        if txn != None:
-            return self.db.keys(txn)
-        else:
-            return list(self.db.keys())
-
-    if sys.version_info[0:2] >= (2, 6) :
-        def __iter__(self) :
-            return self.db.__iter__()
-
-
-    def open(self, *args, **kwargs):
-        self.db.open(*args, **kwargs)
-        self._closed = False
-
-
-    def close(self, *args, **kwargs):
-        self.db.close(*args, **kwargs)
-        self._closed = True
-
-
-    def __repr__(self):
-        if self._closed:
-            return '<DBShelf @ 0x%x - closed>' % (id(self))
-        else:
-            return repr(dict(iter(self.items())))
-
-
-    def items(self, txn=None):
-        if txn != None:
-            items = self.db.items(txn)
-        else:
-            items = list(self.db.items())
-        newitems = []
-
-        for k, v in items:
-            newitems.append( (k, pickle.loads(v)) )
-        return newitems
-
-    def values(self, txn=None):
-        if txn != None:
-            values = self.db.values(txn)
-        else:
-            values = list(self.db.values())
-
-        return list(map(pickle.loads, values))
-
-    #-----------------------------------
-    # Other methods
-
-    def __append(self, value, txn=None):
-        data = _dumps(value, self.protocol)
-        return self.db.append(data, txn)
-
-    def append(self, value, txn=None):
-        if self.get_type() == db.DB_RECNO:
-            return self.__append(value, txn=txn)
-        raise DBShelveError("append() only supported when dbshelve opened with filetype=dbshelve.db.DB_RECNO")
-
-
-    def associate(self, secondaryDB, callback, flags=0):
-        def _shelf_callback(priKey, priData, realCallback=callback):
-            # Safe in Python 2.x because the expression short-circuits
-            if sys.version_info[0] < 3 or isinstance(priData, bytes) :
-                data = pickle.loads(priData)
-            else :
-                data = pickle.loads(bytes(priData, "iso8859-1"))  # 8 bits
-            return realCallback(priKey, data)
-
-        return self.db.associate(secondaryDB, _shelf_callback, flags)
-
-
-    #def get(self, key, default=None, txn=None, flags=0):
-    def get(self, *args, **kw):
-        # We do it with *args and **kw so if the default value wasn't
-        # given nothing is passed to the extension module.  That way
-        # an exception can be raised if set_get_returns_none is turned
-        # off.
-        data = self.db.get(*args, **kw)
-        try:
-            return pickle.loads(data)
-        except (EOFError, TypeError, pickle.UnpicklingError):
-            return data  # we may be getting the default value, or None,
-                         # so it doesn't need to be unpickled.
-
-    def get_both(self, key, value, txn=None, flags=0):
-        data = _dumps(value, self.protocol)
-        data = self.db.get(key, data, txn, flags)
-        return pickle.loads(data)
-
-
-    def cursor(self, txn=None, flags=0):
-        c = DBShelfCursor(self.db.cursor(txn, flags))
-        c.protocol = self.protocol
-        return c
-
-
-    def put(self, key, value, txn=None, flags=0):
-        data = _dumps(value, self.protocol)
-        return self.db.put(key, data, txn, flags)
-
-
-    def join(self, cursorList, flags=0):
-        raise NotImplementedError
-
-
-    #----------------------------------------------
-    # Methods allowed to pass-through to self.db
-    #
-    #    close,  delete, fd, get_byteswapped, get_type, has_key,
-    #    key_range, open, remove, rename, stat, sync,
-    #    upgrade, verify, and all set_* methods.
-
-
-#---------------------------------------------------------------------------
-
-class DBShelfCursor:
-    """
-    """
-    def __init__(self, cursor):
-        self.dbc = cursor
-
-    def __del__(self):
-        self.close()
-
-
-    def __getattr__(self, name):
-        """Some methods we can just pass through to the cursor object.  (See below)"""
-        return getattr(self.dbc, name)
-
-
-    #----------------------------------------------
-
-    def dup(self, flags=0):
-        c = DBShelfCursor(self.dbc.dup(flags))
-        c.protocol = self.protocol
-        return c
-
-
-    def put(self, key, value, flags=0):
-        data = _dumps(value, self.protocol)
-        return self.dbc.put(key, data, flags)
-
-
-    def get(self, *args):
-        count = len(args)  # a method overloading hack
-        method = getattr(self, 'get_%d' % count)
-        method(*args)
-
-    def get_1(self, flags):
-        rec = self.dbc.get(flags)
-        return self._extract(rec)
-
-    def get_2(self, key, flags):
-        rec = self.dbc.get(key, flags)
-        return self._extract(rec)
-
-    def get_3(self, key, value, flags):
-        data = _dumps(value, self.protocol)
-        rec = self.dbc.get(key, flags)
-        return self._extract(rec)
-
-
-    def current(self, flags=0): return self.get_1(flags|db.DB_CURRENT)
-    def first(self, flags=0): return self.get_1(flags|db.DB_FIRST)
-    def last(self, flags=0): return self.get_1(flags|db.DB_LAST)
-    def next(self, flags=0): return self.get_1(flags|db.DB_NEXT)
-    def prev(self, flags=0): return self.get_1(flags|db.DB_PREV)
-    def consume(self, flags=0): return self.get_1(flags|db.DB_CONSUME)
-    def next_dup(self, flags=0): return self.get_1(flags|db.DB_NEXT_DUP)
-    def next_nodup(self, flags=0): return self.get_1(flags|db.DB_NEXT_NODUP)
-    def prev_nodup(self, flags=0): return self.get_1(flags|db.DB_PREV_NODUP)
-
-
-    def get_both(self, key, value, flags=0):
-        data = _dumps(value, self.protocol)
-        rec = self.dbc.get_both(key, flags)
-        return self._extract(rec)
-
-
-    def set(self, key, flags=0):
-        rec = self.dbc.set(key, flags)
-        return self._extract(rec)
-
-    def set_range(self, key, flags=0):
-        rec = self.dbc.set_range(key, flags)
-        return self._extract(rec)
-
-    def set_recno(self, recno, flags=0):
-        rec = self.dbc.set_recno(recno, flags)
-        return self._extract(rec)
-
-    set_both = get_both
-
-    def _extract(self, rec):
-        if rec is None:
-            return None
-        else:
-            key, data = rec
-            # Safe in Python 2.x because the expression short-circuits
-            if sys.version_info[0] < 3 or isinstance(data, bytes) :
-                return key, pickle.loads(data)
-            else :
-                return key, pickle.loads(bytes(data, "iso8859-1"))  # 8 bits
-
-    #----------------------------------------------
-    # Methods allowed to pass-through to self.dbc
-    #
-    # close, count, delete, get_recno, join_item
-
-
-#---------------------------------------------------------------------------
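For reference, a minimal sketch of how the DBShelfCursor API shown above was typically used under Python 2.x, starting from a shelf opened with dbshelve.open(); the file name "shelf.db" is a hypothetical example, and keys must be strings:

from bsddb import db, dbshelve

shelf = dbshelve.open("shelf.db")        # values are pickled transparently
shelf['song-01'] = ("Spyro Gyra", "Morning Dance", "Jazz")

cursor = shelf.cursor()
try:
    rec = cursor.first()                 # (key, unpickled value); see get_1()
    while rec:
        key, value = rec
        print(key, value)
        rec = cursor.next()              # wraps DB_NEXT
except db.DBNotFoundError:
    pass                                 # cursor ran past the last record
cursor.close()
shelf.close()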
diff --git a/Lib/bsddb/dbtables.py b/Lib/bsddb/dbtables.py
deleted file mode 100644 (file)
index 18b2269..0000000
+++ /dev/null
@@ -1,827 +0,0 @@
-#-----------------------------------------------------------------------
-#
-# Copyright (C) 2000, 2001 by Autonomous Zone Industries
-# Copyright (C) 2002 Gregory P. Smith
-#
-# License:      This is free software.  You may use this software for any
-#               purpose including modification/redistribution, so long as
-#               this header remains intact and that you do not claim any
-#               rights of ownership or authorship of this software.  This
-#               software has been tested, but no warranty is expressed or
-#               implied.
-#
-#   --  Gregory P. Smith <greg@krypto.org>
-
-# This provides a simple database table interface built on top of
-# the Python Berkeley DB 3 interface.
-#
-_cvsid = '$Id$'
-
-import re
-import sys
-import copy
-import random
-import struct
-import pickle as pickle
-
-try:
-    # For Pythons w/distutils pybsddb
-    from bsddb3 import db
-except ImportError:
-    # For Python 2.3
-    from bsddb import db
-
-# XXX(nnorwitz): is this correct? DBIncompleteError is conditional in _bsddb.c
-if not hasattr(db,"DBIncompleteError") :
-    class DBIncompleteError(Exception):
-        pass
-    db.DBIncompleteError = DBIncompleteError
-
-class TableDBError(Exception):
-    pass
-class TableAlreadyExists(TableDBError):
-    pass
-
-
-class Cond:
-    """This condition matches everything"""
-    def __call__(self, s):
-        return 1
-
-class ExactCond(Cond):
-    """Acts as an exact match condition function"""
-    def __init__(self, strtomatch):
-        self.strtomatch = strtomatch
-    def __call__(self, s):
-        return s == self.strtomatch
-
-class PrefixCond(Cond):
-    """Acts as a condition function for matching a string prefix"""
-    def __init__(self, prefix):
-        self.prefix = prefix
-    def __call__(self, s):
-        return s[:len(self.prefix)] == self.prefix
-
-class PostfixCond(Cond):
-    """Acts as a condition function for matching a string postfix"""
-    def __init__(self, postfix):
-        self.postfix = postfix
-    def __call__(self, s):
-        return s[-len(self.postfix):] == self.postfix
-
-class LikeCond(Cond):
-    """
-    Acts as a condition function that matches using an SQL 'LIKE' style
-    string.  Case-insensitive; % signs are wildcards.
-    This isn't perfect, but it should work for the simple common cases.
-    """
-    def __init__(self, likestr, re_flags=re.IGNORECASE):
-        # escape python re characters
-        chars_to_escape = '.*+()[]?'
-        for char in chars_to_escape :
-            likestr = likestr.replace(char, '\\'+char)
-        # convert %s to wildcards
-        self.likestr = likestr.replace('%', '.*')
-        self.re = re.compile('^'+self.likestr+'$', re_flags)
-    def __call__(self, s):
-        return self.re.match(s)
-
-#
-# keys used to store database metadata
-#
-_table_names_key = '__TABLE_NAMES__'  # list of the tables in this db
-_columns = '._COLUMNS__'  # table_name+this key contains a list of columns
-
-def _columns_key(table):
-    return table + _columns
-
-#
-# these keys are found within table sub databases
-#
-_data =  '._DATA_.'  # this+column+this+rowid key contains table data
-_rowid = '._ROWID_.' # this+rowid+this key contains a unique entry for each
-                     # row in the table.  (no data is stored)
-_rowid_str_len = 8   # length in bytes of the unique rowid strings
-
-
-def _data_key(table, col, rowid):
-    return table + _data + col + _data + rowid
-
-def _search_col_data_key(table, col):
-    return table + _data + col + _data
-
-def _search_all_data_key(table):
-    return table + _data
-
-def _rowid_key(table, rowid):
-    return table + _rowid + rowid + _rowid
-
-def _search_rowid_key(table):
-    return table + _rowid
-
-def contains_metastrings(s) :
-    """Verify that the given string does not contain any
-    metadata strings that might interfere with dbtables database operation.
-    """
-    if (s.find(_table_names_key) >= 0 or
-        s.find(_columns) >= 0 or
-        s.find(_data) >= 0 or
-        s.find(_rowid) >= 0):
-        # Then
-        return 1
-    else:
-        return 0
-
-
-class bsdTableDB :
-    def __init__(self, filename, dbhome, create=0, truncate=0, mode=0o600,
-                 recover=0, dbflags=0):
-        """bsdTableDB(filename, dbhome, create=0, truncate=0, mode=0600)
-
-        Open database name in the dbhome Berkeley DB directory.
-        Use keyword arguments when calling this constructor.
-        """
-        self.db = None
-        myflags = db.DB_THREAD
-        if create:
-            myflags |= db.DB_CREATE
-        flagsforenv = (db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_LOG |
-                       db.DB_INIT_TXN | dbflags)
-        # DB_AUTO_COMMIT isn't a valid flag for env.open()
-        try:
-            dbflags |= db.DB_AUTO_COMMIT
-        except AttributeError:
-            pass
-        if recover:
-            flagsforenv = flagsforenv | db.DB_RECOVER
-        self.env = db.DBEnv()
-        # enable auto deadlock avoidance
-        self.env.set_lk_detect(db.DB_LOCK_DEFAULT)
-        self.env.open(dbhome, myflags | flagsforenv)
-        if truncate:
-            myflags |= db.DB_TRUNCATE
-        self.db = db.DB(self.env)
-        # this code relies on DBCursor.set* methods to raise exceptions
-        # rather than returning None
-        self.db.set_get_returns_none(1)
-        # allow duplicate entries [warning: be careful w/ metadata]
-        self.db.set_flags(db.DB_DUP)
-        self.db.open(filename, db.DB_BTREE, dbflags | myflags, mode)
-        self.dbfilename = filename
-
-        if sys.version_info[0] >= 3 :
-            class cursor_py3k(object) :
-                def __init__(self, dbcursor) :
-                    self._dbcursor = dbcursor
-
-                def close(self) :
-                    return self._dbcursor.close()
-
-                def set_range(self, search) :
-                    v = self._dbcursor.set_range(bytes(search, "iso8859-1"))
-                    if v != None :
-                        v = (v[0].decode("iso8859-1"),
-                                v[1].decode("iso8859-1"))
-                    return v
-
-                def __next__(self) :
-                    v = getattr(self._dbcursor, "next")()
-                    if v != None :
-                        v = (v[0].decode("iso8859-1"),
-                                v[1].decode("iso8859-1"))
-                    return v
-
-            class db_py3k(object) :
-                def __init__(self, db) :
-                    self._db = db
-
-                def cursor(self, txn=None) :
-                    return cursor_py3k(self._db.cursor(txn=txn))
-
-                def has_key(self, key, txn=None) :
-                    return getattr(self._db,"has_key")(bytes(key, "iso8859-1"),
-                            txn=txn)
-
-                def put(self, key, value, flags=0, txn=None) :
-                    key = bytes(key, "iso8859-1")
-                    if value != None :
-                        value = bytes(value, "iso8859-1")
-                    return self._db.put(key, value, flags=flags, txn=txn)
-
-                def put_bytes(self, key, value, txn=None) :
-                    key = bytes(key, "iso8859-1")
-                    return self._db.put(key, value, txn=txn)
-
-                def get(self, key, txn=None, flags=0) :
-                    key = bytes(key, "iso8859-1")
-                    v = self._db.get(key, txn=txn, flags=flags)
-                    if v != None :
-                        v = v.decode("iso8859-1")
-                    return v
-
-                def get_bytes(self, key, txn=None, flags=0) :
-                    key = bytes(key, "iso8859-1")
-                    return self._db.get(key, txn=txn, flags=flags)
-
-                def delete(self, key, txn=None) :
-                    key = bytes(key, "iso8859-1")
-                    return self._db.delete(key, txn=txn)
-
-                def close (self) :
-                    return self._db.close()
-
-            self.db = db_py3k(self.db)
-        else :  # Python 2.x
-            pass
-
-        # Initialize the table names list if this is a new database
-        txn = self.env.txn_begin()
-        try:
-            if not getattr(self.db, "has_key")(_table_names_key, txn):
-                getattr(self.db, "put_bytes", self.db.put) \
-                        (_table_names_key, pickle.dumps([], 1), txn=txn)
-        # Yes, bare except
-        except:
-            txn.abort()
-            raise
-        else:
-            txn.commit()
-        # TODO verify more of the database's metadata?
-        self.__tablecolumns = {}
-
-    def __del__(self):
-        self.close()
-
-    def close(self):
-        if self.db is not None:
-            self.db.close()
-            self.db = None
-        if self.env is not None:
-            self.env.close()
-            self.env = None
-
-    def checkpoint(self, mins=0):
-        try:
-            self.env.txn_checkpoint(mins)
-        except db.DBIncompleteError:
-            pass
-
-    def sync(self):
-        try:
-            self.db.sync()
-        except db.DBIncompleteError:
-            pass
-
-    def _db_print(self) :
-        """Print the database to stdout for debugging"""
-        print("******** Printing raw database for debugging ********")
-        cur = self.db.cursor()
-        try:
-            key, data = cur.first()
-            while 1:
-                print(repr({key: data}))
-                rec = next(cur)
-                if rec:
-                    key, data = rec
-                else:
-                    cur.close()
-                    return
-        except db.DBNotFoundError:
-            cur.close()
-
-
-    def CreateTable(self, table, columns):
-        """CreateTable(table, columns) - Create a new table in the database.
-
-        raises TableDBError if it already exists or for other DB errors.
-        """
-        assert isinstance(columns, list)
-
-        txn = None
-        try:
-            # checking sanity of the table and column names here on
-            # table creation will prevent problems elsewhere.
-            if contains_metastrings(table):
-                raise ValueError(
-                    "bad table name: contains reserved metastrings")
-            for column in columns :
-                if contains_metastrings(column):
-                    raise ValueError(
-                        "bad column name: contains reserved metastrings")
-
-            columnlist_key = _columns_key(table)
-            if getattr(self.db, "has_key")(columnlist_key):
-                raise TableAlreadyExists("table already exists")
-
-            txn = self.env.txn_begin()
-            # store the table's column info
-            getattr(self.db, "put_bytes", self.db.put)(columnlist_key,
-                    pickle.dumps(columns, 1), txn=txn)
-
-            # add the table name to the tablelist
-            tablelist = pickle.loads(getattr(self.db, "get_bytes",
-                self.db.get) (_table_names_key, txn=txn, flags=db.DB_RMW))
-            tablelist.append(table)
-            # delete 1st, in case we opened with DB_DUP
-            self.db.delete(_table_names_key, txn=txn)
-            getattr(self.db, "put_bytes", self.db.put)(_table_names_key,
-                    pickle.dumps(tablelist, 1), txn=txn)
-
-            txn.commit()
-            txn = None
-        except db.DBError as dberror:
-            if txn:
-                txn.abort()
-            if sys.version_info[0] < 3 :
-                raise TableDBError(dberror[1])
-            else :
-                raise TableDBError(dberror.args[1])
-
-
-    def ListTableColumns(self, table):
-        """Return a list of columns in the given table.
-        [] if the table doesn't exist.
-        """
-        assert isinstance(table, str)
-        if contains_metastrings(table):
-            raise ValueError("bad table name: contains reserved metastrings")
-
-        columnlist_key = _columns_key(table)
-        if not getattr(self.db, "has_key")(columnlist_key):
-            return []
-        pickledcolumnlist = getattr(self.db, "get_bytes",
-                self.db.get)(columnlist_key)
-        if pickledcolumnlist:
-            return pickle.loads(pickledcolumnlist)
-        else:
-            return []
-
-    def ListTables(self):
-        """Return a list of tables in this database."""
-        pickledtablelist = getattr(self.db, "get_bytes",
-                self.db.get)(_table_names_key)
-        if pickledtablelist:
-            return pickle.loads(pickledtablelist)
-        else:
-            return []
-
-    def CreateOrExtendTable(self, table, columns):
-        """CreateOrExtendTable(table, columns)
-
-        Create a new table in the database.
-
-        If a table of this name already exists, extend it to have any
-        additional columns present in the given list as well as
-        all of its current columns.
-        """
-        assert isinstance(columns, list)
-
-        try:
-            self.CreateTable(table, columns)
-        except TableAlreadyExists:
-            # the table already existed, add any new columns
-            txn = None
-            try:
-                columnlist_key = _columns_key(table)
-                txn = self.env.txn_begin()
-
-                # load the current column list
-                oldcolumnlist = pickle.loads(
-                    getattr(self.db, "get_bytes",
-                        self.db.get)(columnlist_key, txn=txn, flags=db.DB_RMW))
-                # create a hash table for fast lookups of column names in the
-                # loop below
-                oldcolumnhash = {}
-                for c in oldcolumnlist:
-                    oldcolumnhash[c] = c
-
-                # create a new column list containing both the old and new
-                # column names
-                newcolumnlist = copy.copy(oldcolumnlist)
-                for c in columns:
-                    if c not in oldcolumnhash:
-                        newcolumnlist.append(c)
-
-                # store the table's new extended column list
-                if newcolumnlist != oldcolumnlist :
-                    # delete the old one first since we opened with DB_DUP
-                    self.db.delete(columnlist_key, txn=txn)
-                    getattr(self.db, "put_bytes", self.db.put)(columnlist_key,
-                                pickle.dumps(newcolumnlist, 1),
-                                txn=txn)
-
-                txn.commit()
-                txn = None
-
-                self.__load_column_info(table)
-            except db.DBError as dberror:
-                if txn:
-                    txn.abort()
-                if sys.version_info[0] < 3 :
-                    raise TableDBError(dberror[1])
-                else :
-                    raise TableDBError(dberror.args[1])
-
-
-    def __load_column_info(self, table) :
-        """initialize the self.__tablecolumns dict"""
-        # check the column names
-        try:
-            tcolpickles = getattr(self.db, "get_bytes",
-                    self.db.get)(_columns_key(table))
-        except db.DBNotFoundError:
-            raise TableDBError("unknown table: %r" % (table,))
-        if not tcolpickles:
-            raise TableDBError("unknown table: %r" % (table,))
-        self.__tablecolumns[table] = pickle.loads(tcolpickles)
-
-    def __new_rowid(self, table, txn) :
-        """Create a new unique row identifier"""
-        unique = 0
-        while not unique:
-            # Generate a random 64-bit row ID string
-            # (note: might have <64 bits of true randomness
-            # but it's plenty for our database id needs!)
-            blist = []
-            for x in range(_rowid_str_len):
-                blist.append(random.randint(0,255))
-            newid = struct.pack('B'*_rowid_str_len, *blist)
-
-            if sys.version_info[0] >= 3 :
-                newid = newid.decode("iso8859-1")  # 8 bits
-
-            # Guarantee uniqueness by adding this key to the database
-            try:
-                self.db.put(_rowid_key(table, newid), None, txn=txn,
-                            flags=db.DB_NOOVERWRITE)
-            except db.DBKeyExistError:
-                pass
-            else:
-                unique = 1
-
-        return newid
-
-
-    def Insert(self, table, rowdict) :
-        """Insert(table, datadict) - Insert a new row into the table
-        using the keys+values from rowdict as the column values.
-        """
-
-        txn = None
-        try:
-            if not getattr(self.db, "has_key")(_columns_key(table)):
-                raise TableDBError("unknown table")
-
-            # check the validity of each column name
-            if table not in self.__tablecolumns:
-                self.__load_column_info(table)
-            for column in list(rowdict.keys()) :
-                if not self.__tablecolumns[table].count(column):
-                    raise TableDBError("unknown column: %r" % (column,))
-
-            # get a unique row identifier for this row
-            txn = self.env.txn_begin()
-            rowid = self.__new_rowid(table, txn=txn)
-
-            # insert the row values into the table database
-            for column, dataitem in list(rowdict.items()):
-                # store the value
-                self.db.put(_data_key(table, column, rowid), dataitem, txn=txn)
-
-            txn.commit()
-            txn = None
-
-        except db.DBError as dberror:
-            # Wouldn't it be nice if we could just abort the txn and re-raise
-            # the exception?  But no, because TableDBError is not related to
-            # DBError via inheritance, so that would be backwards incompatible.
-            # Do the next best thing.
-            info = sys.exc_info()
-            if txn:
-                txn.abort()
-                self.db.delete(_rowid_key(table, rowid))
-            if sys.version_info[0] < 3 :
-                raise TableDBError(dberror[1]).with_traceback(info[2])
-            else :
-                raise TableDBError(dberror.args[1]).with_traceback(info[2])
-
-
-    def Modify(self, table, conditions={}, mappings={}):
-        """Modify(table, conditions={}, mappings={}) - Modify items in rows matching 'conditions' using mapping functions in 'mappings'
-
-        * table - the table name
-        * conditions - a dictionary keyed on column names containing
-          a condition callable expecting the data string as an
-          argument and returning a boolean.
-        * mappings - a dictionary keyed on column names containing a
-          condition callable expecting the data string as an argument and
-          returning the new string for that column.
-        """
-
-        try:
-            matching_rowids = self.__Select(table, [], conditions)
-
-            # modify only requested columns
-            columns = list(mappings.keys())
-            for rowid in list(matching_rowids.keys()):
-                txn = None
-                try:
-                    for column in columns:
-                        txn = self.env.txn_begin()
-                        # modify the requested column
-                        try:
-                            dataitem = self.db.get(
-                                _data_key(table, column, rowid),
-                                txn=txn)
-                            self.db.delete(
-                                _data_key(table, column, rowid),
-                                txn=txn)
-                        except db.DBNotFoundError:
-                             # XXXXXXX row key somehow didn't exist, assume no
-                             # error
-                            dataitem = None
-                        dataitem = mappings[column](dataitem)
-                        if dataitem != None:
-                            self.db.put(
-                                _data_key(table, column, rowid),
-                                dataitem, txn=txn)
-                        txn.commit()
-                        txn = None
-
-                # catch all exceptions here since we call unknown callables
-                except:
-                    if txn:
-                        txn.abort()
-                    raise
-
-        except db.DBError as dberror:
-            if sys.version_info[0] < 3 :
-                raise TableDBError(dberror[1])
-            else :
-                raise TableDBError(dberror.args[1])
-
-    def Delete(self, table, conditions={}):
-        """Delete(table, conditions) - Delete items matching the given
-        conditions from the table.
-
-        * conditions - a dictionary keyed on column names containing
-          condition functions expecting the data string as an
-          argument and returning a boolean.
-        """
-
-        try:
-            matching_rowids = self.__Select(table, [], conditions)
-
-            # delete row data from all columns
-            columns = self.__tablecolumns[table]
-            for rowid in list(matching_rowids.keys()):
-                txn = None
-                try:
-                    txn = self.env.txn_begin()
-                    for column in columns:
-                        # delete the data key
-                        try:
-                            self.db.delete(_data_key(table, column, rowid),
-                                           txn=txn)
-                        except db.DBNotFoundError:
-                            # XXXXXXX column may not exist, assume no error
-                            pass
-
-                    try:
-                        self.db.delete(_rowid_key(table, rowid), txn=txn)
-                    except db.DBNotFoundError:
-                        # XXXXXXX row key somehow didn't exist, assume no error
-                        pass
-                    txn.commit()
-                    txn = None
-                except db.DBError as dberror:
-                    if txn:
-                        txn.abort()
-                    raise
-        except db.DBError as dberror:
-            if sys.version_info[0] < 3 :
-                raise TableDBError(dberror[1])
-            else :
-                raise TableDBError(dberror.args[1])
-
-
-    def Select(self, table, columns, conditions={}):
-        """Select(table, columns, conditions) - retrieve specific row data
-        Returns a list of row column->value mapping dictionaries.
-
-        * columns - a list of which column data to return.  If
-          columns is None, all columns will be returned.
-        * conditions - a dictionary keyed on column names
-          containing callable conditions expecting the data string as an
-          argument and returning a boolean.
-        """
-        try:
-            if table not in self.__tablecolumns:
-                self.__load_column_info(table)
-            if columns is None:
-                columns = self.__tablecolumns[table]
-            matching_rowids = self.__Select(table, columns, conditions)
-        except db.DBError as dberror:
-            if sys.version_info[0] < 3 :
-                raise TableDBError(dberror[1])
-            else :
-                raise TableDBError(dberror.args[1])
-        # return the matches as a list of dictionaries
-        return list(matching_rowids.values())
-
-
-    def __Select(self, table, columns, conditions):
-        """__Select() - Used to implement Select and Delete (above)
-        Returns a dictionary keyed on rowids containing dicts
-        holding the row data for columns listed in the columns param
-        that match the given conditions.
-        * conditions is a dictionary keyed on column names
-        containing callable conditions expecting the data string as an
-        argument and returning a boolean.
-        """
-        # check the validity of each column name
-        if table not in self.__tablecolumns:
-            self.__load_column_info(table)
-        if columns is None:
-            columns = self.__tablecolumns[table]
-        for column in (columns + list(conditions.keys())):
-            if not self.__tablecolumns[table].count(column):
-                raise TableDBError("unknown column: %r" % (column,))
-
-        # keyed on rowids that match so far, containing dicts keyed on
-        # column names with the data for that row and column.
-        matching_rowids = {}
-        # keys are rowids that do not match
-        rejected_rowids = {}
-
-        # attempt to sort the conditions in such a way as to minimize full
-        # column lookups
-        def cmp_conditions(atuple, btuple):
-            a = atuple[1]
-            b = btuple[1]
-            if type(a) is type(b):
-                if isinstance(a, PrefixCond) and isinstance(b, PrefixCond):
-                    # longest prefix first
-                    return cmp(len(b.prefix), len(a.prefix))
-                if isinstance(a, LikeCond) and isinstance(b, LikeCond):
-                    # longest likestr first
-                    return cmp(len(b.likestr), len(a.likestr))
-                return 0
-            if isinstance(a, ExactCond):
-                return -1
-            if isinstance(b, ExactCond):
-                return 1
-            if isinstance(a, PrefixCond):
-                return -1
-            if isinstance(b, PrefixCond):
-                return 1
-            # leave all unknown condition callables alone as equals
-            return 0
-
-        if sys.version_info[0] < 3 :
-            conditionlist = list(conditions.items())
-            conditionlist.sort(cmp_conditions)
-        else :  # Insertion sort; please improve.
-            conditionlist = []
-            for i in list(conditions.items()) :
-                for j, k in enumerate(conditionlist) :
-                    r = cmp_conditions(k, i)
-                    if r == 1 :
-                        conditionlist.insert(j, i)
-                        break
-                else :
-                    conditionlist.append(i)
-
-        # Apply conditions to column data to find what we want
-        cur = self.db.cursor()
-        column_num = -1
-        for column, condition in conditionlist:
-            column_num = column_num + 1
-            searchkey = _search_col_data_key(table, column)
-            # speedup: don't linearly search columns within the loop
-            if column in columns:
-                savethiscolumndata = 1  # save the data for return
-            else:
-                savethiscolumndata = 0  # data only used for selection
-
-            try:
-                key, data = cur.set_range(searchkey)
-                while key[:len(searchkey)] == searchkey:
-                    # extract the rowid from the key
-                    rowid = key[-_rowid_str_len:]
-
-                    if rowid not in rejected_rowids:
-                        # if no condition was specified or the condition
-                        # succeeds, add row to our match list.
-                        if not condition or condition(data):
-                            if rowid not in matching_rowids:
-                                matching_rowids[rowid] = {}
-                            if savethiscolumndata:
-                                matching_rowids[rowid][column] = data
-                        else:
-                            if rowid in matching_rowids:
-                                del matching_rowids[rowid]
-                            rejected_rowids[rowid] = rowid
-
-                    key, data = next(cur)
-
-            except db.DBError as dberror:
-                if sys.version_info[0] < 3 :
-                    if dberror[0] != db.DB_NOTFOUND:
-                        raise
-                else :
-                    if dberror.args[0] != db.DB_NOTFOUND:
-                        raise
-                continue
-
-        cur.close()
-
-        # we're done selecting rows, garbage collect the reject list
-        del rejected_rowids
-
-        # extract any remaining desired column data from the
-        # database for the matching rows.
-        if len(columns) > 0:
-            for rowid, rowdata in list(matching_rowids.items()):
-                for column in columns:
-                    if column in rowdata:
-                        continue
-                    try:
-                        rowdata[column] = self.db.get(
-                            _data_key(table, column, rowid))
-                    except db.DBError as dberror:
-                        if sys.version_info[0] < 3 :
-                            if dberror[0] != db.DB_NOTFOUND:
-                                raise
-                        else :
-                            if dberror.args[0] != db.DB_NOTFOUND:
-                                raise
-                        rowdata[column] = None
-
-        # return the matches
-        return matching_rowids
-
-
-    def Drop(self, table):
-        """Remove an entire table from the database"""
-        txn = None
-        try:
-            txn = self.env.txn_begin()
-
-            # delete the column list
-            self.db.delete(_columns_key(table), txn=txn)
-
-            cur = self.db.cursor(txn)
-
-            # delete all keys containing this table's column and row info
-            table_key = _search_all_data_key(table)
-            while 1:
-                try:
-                    key, data = cur.set_range(table_key)
-                except db.DBNotFoundError:
-                    break
-                # only delete items in this table
-                if key[:len(table_key)] != table_key:
-                    break
-                cur.delete()
-
-            # delete all rowids used by this table
-            table_key = _search_rowid_key(table)
-            while 1:
-                try:
-                    key, data = cur.set_range(table_key)
-                except db.DBNotFoundError:
-                    break
-                # only delete items in this table
-                if key[:len(table_key)] != table_key:
-                    break
-                cur.delete()
-
-            cur.close()
-
-            # delete the tablename from the table name list
-            tablelist = pickle.loads(
-                getattr(self.db, "get_bytes", self.db.get)(_table_names_key,
-                    txn=txn, flags=db.DB_RMW))
-            try:
-                tablelist.remove(table)
-            except ValueError:
-                # hmm, it wasn't there, oh well, that's what we want.
-                pass
-            # delete 1st, in case we opened with DB_DUP
-            self.db.delete(_table_names_key, txn=txn)
-            getattr(self.db, "put_bytes", self.db.put)(_table_names_key,
-                    pickle.dumps(tablelist, 1), txn=txn)
-
-            txn.commit()
-            txn = None
-
-            if table in self.__tablecolumns:
-                del self.__tablecolumns[table]
-
-        except db.DBError as dberror:
-            if txn:
-                txn.abort()
-            if sys.version_info[0] < 3 :
-                raise TableDBError(dberror[1])
-            else :
-                raise TableDBError(dberror.args[1])
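A minimal sketch of the bsdTableDB workflow provided by the module above; the file name "music.db" and the environment directory "dbhome" are hypothetical examples, and the directory is assumed to already exist:

from bsddb import dbtables

tdb = dbtables.bsdTableDB(filename="music.db", dbhome="dbhome", create=1)
tdb.CreateTable("songs", ["artist", "title", "genre"])
tdb.Insert("songs", {"artist": "Spyro Gyra",
                     "title": "Morning Dance",
                     "genre": "Jazz"})

# conditions map column names to callables such as ExactCond/LikeCond above
rows = tdb.Select("songs", ["artist", "title"],
                  conditions={"genre": dbtables.ExactCond("Jazz")})
print(rows)        # e.g. [{'artist': 'Spyro Gyra', 'title': 'Morning Dance'}]
tdb.close()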
diff --git a/Lib/bsddb/dbutils.py b/Lib/bsddb/dbutils.py
deleted file mode 100644 (file)
index f401153..0000000
+++ /dev/null
@@ -1,83 +0,0 @@
-#------------------------------------------------------------------------
-#
-# Copyright (C) 2000 Autonomous Zone Industries
-#
-# License:      This is free software.  You may use this software for any
-#               purpose including modification/redistribution, so long as
-#               this header remains intact and that you do not claim any
-#               rights of ownership or authorship of this software.  This
-#               software has been tested, but no warranty is expressed or
-#               implied.
-#
-# Author: Gregory P. Smith <greg@krypto.org>
-#
-# Note: I don't know how useful this is in reality, since when a
-#       DBLockDeadlockError happens the current transaction is supposed to be
-#       aborted.  If it isn't, then when the operation is attempted again
-#       the deadlock will still be happening...
-#       --Robin
-#
-#------------------------------------------------------------------------
-
-
-#
-# import the time.sleep function in a namespace safe way to allow
-# "from bsddb.dbutils import *"
-#
-from time import sleep as _sleep
-
-import sys
-absolute_import = (sys.version_info[0] >= 3)
-if absolute_import :
-    # Because this syntax is not valid before Python 2.5
-    exec("from . import db")
-else :
-    from . import db
-
-# always sleep at least N seconds between retries
-_deadlock_MinSleepTime = 1.0/128
-# never sleep more than N seconds between retries
-_deadlock_MaxSleepTime = 3.14159
-
-# Assign a file object to this variable to have a "sleeping" message written
-# to it on each retry.
-_deadlock_VerboseFile = None
-
-
-def DeadlockWrap(function, *_args, **_kwargs):
-    """DeadlockWrap(function, *_args, **_kwargs) - automatically retries
-    function in case of a database deadlock.
-
-    This function is intended to be used to wrap database calls such
-    that they perform retries, with exponentially backing-off sleeps in
-    between, when a DBLockDeadlockError exception is raised.
-
-    A 'max_retries' parameter may optionally be passed to prevent it
-    from retrying forever (in which case the exception will be reraised).
-
-        d = DB(...)
-        d.open(...)
-        DeadlockWrap(d.put, "foo", data="bar")  # set key "foo" to "bar"
-    """
-    sleeptime = _deadlock_MinSleepTime
-    max_retries = _kwargs.get('max_retries', -1)
-    if 'max_retries' in _kwargs:
-        del _kwargs['max_retries']
-    while True:
-        try:
-            return function(*_args, **_kwargs)
-        except db.DBLockDeadlockError:
-            if _deadlock_VerboseFile:
-                _deadlock_VerboseFile.write(
-                    'dbutils.DeadlockWrap: sleeping %1.3f\n' % sleeptime)
-            _sleep(sleeptime)
-            # exponential backoff in the sleep time
-            sleeptime *= 2
-            if sleeptime > _deadlock_MaxSleepTime:
-                sleeptime = _deadlock_MaxSleepTime
-            max_retries -= 1
-            if max_retries == -1:
-                raise
-
-
-#------------------------------------------------------------------------
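A minimal sketch of DeadlockWrap in use, following the docstring above; the database path is a hypothetical example, and the optional max_retries keyword bounds the number of retries:

from bsddb import db
from bsddb.dbutils import DeadlockWrap

d = db.DB()
d.open("example.db", dbtype=db.DB_HASH, flags=db.DB_CREATE)

# Retry d.put on DBLockDeadlockError, sleeping with exponential backoff
# between attempts; after 5 failed retries the exception is re-raised.
DeadlockWrap(d.put, "foo", data="bar", max_retries=5)
d.close()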
diff --git a/Lib/bsddb/test/__init__.py b/Lib/bsddb/test/__init__.py
deleted file mode 100644 (file)
index e69de29..0000000
diff --git a/Lib/bsddb/test/test_all.py b/Lib/bsddb/test/test_all.py
deleted file mode 100644 (file)
index 559d41b..0000000
+++ /dev/null
@@ -1,525 +0,0 @@
-"""Run all test cases.
-"""
-
-import sys
-import os
-import unittest
-try:
-    # For Pythons w/distutils pybsddb
-    import bsddb3 as bsddb
-except ImportError:
-    # For Python 2.3
-    import bsddb
-
-
-if sys.version_info[0] >= 3 :
-    charset = "iso8859-1"  # Full 8 bit
-
-    class cursor_py3k(object) :
-        def __init__(self, db, *args, **kwargs) :
-            self._dbcursor = db.cursor(*args, **kwargs)
-
-        def __getattr__(self, v) :
-            return getattr(self._dbcursor, v)
-
-        def _fix(self, v) :
-            if v == None : return None
-            key, value = v
-            if isinstance(key, bytes) :
-                key = key.decode(charset)
-            return (key, value.decode(charset))
-
-        def __next__(self) :
-            v = getattr(self._dbcursor, "next")()
-            return self._fix(v)
-
-        next = __next__
-
-        def previous(self) :
-            v = self._dbcursor.previous()
-            return self._fix(v)
-
-        def last(self) :
-            v = self._dbcursor.last()
-            return self._fix(v)
-
-        def set(self, k) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            v = self._dbcursor.set(k)
-            return self._fix(v)
-
-        def set_recno(self, num) :
-            v = self._dbcursor.set_recno(num)
-            return self._fix(v)
-
-        def set_range(self, k, dlen=-1, doff=-1) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            v = self._dbcursor.set_range(k, dlen=dlen, doff=doff)
-            return self._fix(v)
-
-        def dup(self, flags=0) :
-            cursor = self._dbcursor.dup(flags)
-            return dup_cursor_py3k(cursor)
-
-        def next_dup(self) :
-            v = self._dbcursor.next_dup()
-            return self._fix(v)
-
-        def next_nodup(self) :
-            v = self._dbcursor.next_nodup()
-            return self._fix(v)
-
-        def put(self, key, value, flags=0, dlen=-1, doff=-1) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            if isinstance(value, str) :
-                value = bytes(value, charset)
-            return self._dbcursor.put(key, value, flags=flags, dlen=dlen,
-                    doff=doff)
-
-        def current(self, flags=0, dlen=-1, doff=-1) :
-            v = self._dbcursor.current(flags=flags, dlen=dlen, doff=doff)
-            return self._fix(v)
-
-        def first(self) :
-            v = self._dbcursor.first()
-            return self._fix(v)
-
-        def pget(self, key=None, data=None, flags=0) :
-            # Incorrect because key can be a bare number,
-            # but enough to pass testsuite
-            if isinstance(key, int) and (data==None) and (flags==0) :
-                flags = key
-                key = None
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            if isinstance(data, int) and (flags==0) :
-                flags = data
-                data = None
-            if isinstance(data, str) :
-                data = bytes(data, charset)
-            v=self._dbcursor.pget(key=key, data=data, flags=flags)
-            if v != None :
-                v1, v2, v3 = v
-                if isinstance(v1, bytes) :
-                    v1 = v1.decode(charset)
-                if isinstance(v2, bytes) :
-                    v2 = v2.decode(charset)
-
-                v = (v1, v2, v3.decode(charset))
-
-            return v
-
-        def join_item(self) :
-            v = self._dbcursor.join_item()
-            if v != None :
-                v = v.decode(charset)
-            return v
-
-        def get(self, *args, **kwargs) :
-            l = len(args)
-            if l == 2 :
-                k, f = args
-                if isinstance(k, str) :
-                    k = bytes(k, "iso8859-1")
-                args = (k, f)
-            elif l == 3 :
-                k, d, f = args
-                if isinstance(k, str) :
-                    k = bytes(k, charset)
-                if isinstance(d, str) :
-                    d = bytes(d, charset)
-                args =(k, d, f)
-
-            v = self._dbcursor.get(*args, **kwargs)
-            if v != None :
-                k, v = v
-                if isinstance(k, bytes) :
-                    k = k.decode(charset)
-                v = (k, v.decode(charset))
-            return v
-
-        def get_both(self, key, value) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            if isinstance(value, str) :
-                value = bytes(value, charset)
-            v=self._dbcursor.get_both(key, value)
-            return self._fix(v)
-
-    class dup_cursor_py3k(cursor_py3k) :
-        def __init__(self, dbcursor) :
-            self._dbcursor = dbcursor
-
-    class DB_py3k(object) :
-        def __init__(self, *args, **kwargs) :
-            args2=[]
-            for i in args :
-                if isinstance(i, DBEnv_py3k) :
-                    i = i._dbenv
-                args2.append(i)
-            args = tuple(args2)
-            for k, v in list(kwargs.items()) :
-                if isinstance(v, DBEnv_py3k) :
-                    kwargs[k] = v._dbenv
-
-            self._db = bsddb._db.DB_orig(*args, **kwargs)
-
-        def __contains__(self, k) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            return getattr(self._db, "has_key")(k)
-
-        def __getitem__(self, k) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            v = self._db[k]
-            if v != None :
-                v = v.decode(charset)
-            return v
-
-        def __setitem__(self, k, v) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            if isinstance(v, str) :
-                v = bytes(v, charset)
-            self._db[k] = v
-
-        def __delitem__(self, k) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            del self._db[k]
-
-        def __getattr__(self, v) :
-            return getattr(self._db, v)
-
-        def __len__(self) :
-            return len(self._db)
-
-        def has_key(self, k, txn=None) :
-            if isinstance(k, str) :
-                k = bytes(k, charset)
-            return self._db.has_key(k, txn=txn)
-
-        def put(self, key, value, txn=None, flags=0, dlen=-1, doff=-1) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            if isinstance(value, str) :
-                value = bytes(value, charset)
-            return self._db.put(key, value, flags=flags, txn=txn, dlen=dlen,
-                    doff=doff)
-
-        def append(self, value, txn=None) :
-            if isinstance(value, str) :
-                value = bytes(value, charset)
-            return self._db.append(value, txn=txn)
-
-        def get_size(self, key) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            return self._db.get_size(key)
-
-        def get(self, key, default="MagicCookie", txn=None, flags=0, dlen=-1, doff=-1) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            if default != "MagicCookie" :  # Magic for 'test_get_none.py'
-                v=self._db.get(key, default=default, txn=txn, flags=flags,
-                        dlen=dlen, doff=doff)
-            else :
-                v=self._db.get(key, txn=txn, flags=flags,
-                        dlen=dlen, doff=doff)
-            if (v != None) and isinstance(v, bytes) :
-                v = v.decode(charset)
-            return v
-
-        def pget(self, key, txn=None) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            v=self._db.pget(key, txn=txn)
-            if v != None :
-                v1, v2 = v
-                if isinstance(v1, bytes) :
-                    v1 = v1.decode(charset)
-
-                v = (v1, v2.decode(charset))
-            return v
-
-        def get_both(self, key, value, txn=None, flags=0) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            if isinstance(value, str) :
-                value = bytes(value, charset)
-            v=self._db.get_both(key, value, txn=txn, flags=flags)
-            if v != None :
-                v = v.decode(charset)
-            return v
-
-        def delete(self, key, txn=None) :
-            if isinstance(key, str) :
-                key = bytes(key, charset)
-            return self._db.delete(key, txn=txn)
-
-        def keys(self) :
-            k = list(self._db.keys())
-            if len(k) and isinstance(k[0], bytes) :
-                return [i.decode(charset) for i in list(self._db.keys())]
-            else :
-                return k
-
-        def items(self) :
-            data = list(self._db.items())
-            if not len(data) : return data
-            data2 = []
-            for k, v in data :
-                if isinstance(k, bytes) :
-                    k = k.decode(charset)
-                data2.append((k, v.decode(charset)))
-            return data2
-
-        def associate(self, secondarydb, callback, flags=0, txn=None) :
-            class associate_callback(object) :
-                def __init__(self, callback) :
-                    self._callback = callback
-
-                def callback(self, key, data) :
-                    if isinstance(key, str) :
-                        key = key.decode(charset)
-                    data = data.decode(charset)
-                    key = self._callback(key, data)
-                    if (key != bsddb._db.DB_DONOTINDEX) and isinstance(key,
-                            str) :
-                        key = bytes(key, charset)
-                    return key
-
-            return self._db.associate(secondarydb._db,
-                    associate_callback(callback).callback, flags=flags, txn=txn)
-
-        def cursor(self, txn=None, flags=0) :
-            return cursor_py3k(self._db, txn=txn, flags=flags)
-
-        def join(self, cursor_list) :
-            cursor_list = [i._dbcursor for i in cursor_list]
-            return dup_cursor_py3k(self._db.join(cursor_list))
-
-    class DBEnv_py3k(object) :
-        def __init__(self, *args, **kwargs) :
-            self._dbenv = bsddb._db.DBEnv_orig(*args, **kwargs)
-
-        def __getattr__(self, v) :
-            return getattr(self._dbenv, v)
-
-    class DBSequence_py3k(object) :
-        def __init__(self, db, *args, **kwargs) :
-            self._db=db
-            self._dbsequence = bsddb._db.DBSequence_orig(db._db, *args, **kwargs)
-
-        def __getattr__(self, v) :
-            return getattr(self._dbsequence, v)
-
-        def open(self, key, *args, **kwargs) :
-            return self._dbsequence.open(bytes(key, charset), *args, **kwargs)
-
-        def get_key(self) :
-            return  self._dbsequence.get_key().decode(charset)
-
-        def get_dbp(self) :
-            return self._db
-
-    import string
-    string.letters=[chr(i) for i in range(65,91)]
-
-    bsddb._db.DBEnv_orig = bsddb._db.DBEnv
-    bsddb._db.DB_orig = bsddb._db.DB
-    bsddb._db.DBSequence_orig = bsddb._db.DBSequence
-
-    def do_proxy_db_py3k(flag) :
-        flag2 = do_proxy_db_py3k.flag
-        do_proxy_db_py3k.flag = flag
-        if flag :
-            bsddb.DBEnv = bsddb.db.DBEnv = bsddb._db.DBEnv = DBEnv_py3k
-            bsddb.DB = bsddb.db.DB = bsddb._db.DB = DB_py3k
-            bsddb._db.DBSequence = DBSequence_py3k
-        else :
-            bsddb.DBEnv = bsddb.db.DBEnv = bsddb._db.DBEnv = bsddb._db.DBEnv_orig
-            bsddb.DB = bsddb.db.DB = bsddb._db.DB = bsddb._db.DB_orig
-            bsddb._db.DBSequence = bsddb._db.DBSequence_orig
-        return flag2
-
-    do_proxy_db_py3k.flag = False
-    do_proxy_db_py3k(True)
-
-try:
-    # For Pythons w/distutils pybsddb
-    from bsddb3 import db, dbtables, dbutils, dbshelve, \
-            hashopen, btopen, rnopen, dbobj
-except ImportError:
-    # For Python 2.3
-    from bsddb import db, dbtables, dbutils, dbshelve, \
-            hashopen, btopen, rnopen, dbobj
-
-try:
-    from bsddb3 import test_support
-except ImportError:
-    if sys.version_info[0] < 3 :
-        from test import test_support
-    else :
-        from test import support as test_support
-
-
-try:
-    if sys.version_info[0] < 3 :
-        from threading import Thread, currentThread
-        del Thread, currentThread
-    else :
-        from threading import Thread, current_thread
-        del Thread, current_thread
-    have_threads = True
-except ImportError:
-    have_threads = False
-
-verbose = 0
-if 'verbose' in sys.argv:
-    verbose = 1
-    sys.argv.remove('verbose')
-
-if 'silent' in sys.argv:  # take care of old flag, just in case
-    verbose = 0
-    sys.argv.remove('silent')
-
-
-def print_versions():
-    print()
-    print('-=' * 38)
-    print(db.DB_VERSION_STRING)
-    print('bsddb.db.version():   %s' % (db.version(), ))
-    print('bsddb.db.__version__: %s' % db.__version__)
-    print('bsddb.db.cvsid:       %s' % db.cvsid)
-    print('py module:            %s' % bsddb.__file__)
-    print('extension module:     %s' % bsddb._bsddb.__file__)
-    print('python version:       %s' % sys.version)
-    print('My pid:               %s' % os.getpid())
-    print('-=' * 38)
-
-
-def get_new_path(name) :
-    get_new_path.mutex.acquire()
-    try :
-        import os
-        path=os.path.join(get_new_path.prefix,
-                name+"_"+str(os.getpid())+"_"+str(get_new_path.num))
-        get_new_path.num+=1
-    finally :
-        get_new_path.mutex.release()
-    return path
-
-def get_new_environment_path() :
-    path=get_new_path("environment")
-    import os
-    try:
-        os.makedirs(path,mode=0o700)
-    except os.error:
-        test_support.rmtree(path)
-        os.makedirs(path)
-    return path
-
-def get_new_database_path() :
-    path=get_new_path("database")
-    import os
-    if os.path.exists(path) :
-        os.remove(path)
-    return path
-
-
-# This path can be overridden via "set_test_path_prefix()".
-import os, os.path
-get_new_path.prefix=os.path.join(os.sep,"tmp","z-Berkeley_DB")
-get_new_path.num=0
-
-def get_test_path_prefix() :
-    return get_new_path.prefix
-
-def set_test_path_prefix(path) :
-    get_new_path.prefix=path
-
-def remove_test_path_directory() :
-    test_support.rmtree(get_new_path.prefix)
-
-if have_threads :
-    import threading
-    get_new_path.mutex=threading.Lock()
-    del threading
-else :
-    class Lock(object) :
-        def acquire(self) :
-            pass
-        def release(self) :
-            pass
-    get_new_path.mutex=Lock()
-    del Lock
-
-
-
-class PrintInfoFakeTest(unittest.TestCase):
-    def testPrintVersions(self):
-        print_versions()
-
-
-# This little hack is for when this module is run as main and all the
-# other modules import it so they will still be able to get the right
-# verbose setting.  It's confusing but it works.
-if sys.version_info[0] < 3 :
-    from . import test_all
-    test_all.verbose = verbose
-else :
-    import sys
-    print("Work to do!", file=sys.stderr)
-
-
-def suite(module_prefix='', timing_check=None):
-    test_modules = [
-        'test_associate',
-        'test_basics',
-        'test_compare',
-        'test_compat',
-        'test_cursor_pget_bug',
-        'test_dbobj',
-        'test_dbshelve',
-        'test_dbtables',
-        'test_distributed_transactions',
-        'test_early_close',
-        'test_get_none',
-        'test_join',
-        'test_lock',
-        'test_misc',
-        'test_pickle',
-        'test_queue',
-        'test_recno',
-        'test_replication',
-        'test_sequence',
-        'test_thread',
-        ]
-
-    alltests = unittest.TestSuite()
-    for name in test_modules:
-        #module = __import__(name)
-        # Do it this way so that suite may be called externally via
-        # python's Lib/test/test_bsddb3.
-        module = __import__(module_prefix+name, globals(), locals(), name)
-
-        alltests.addTest(module.test_suite())
-        if timing_check:
-            alltests.addTest(unittest.makeSuite(timing_check))
-    return alltests
-
-
-def test_suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(PrintInfoFakeTest))
-    return suite
-
-
-if __name__ == '__main__':
-    print_versions()
-    unittest.main(defaultTest='suite')
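For reference, a sketch of how the suite above was driven, either directly or through Lib/test/test_bsddb3.py as the comment in suite() notes; the module_prefix value shown is one plausible way to run it from outside the test package:

import unittest
from bsddb.test import test_all

test_all.print_versions()
# module_prefix lets an external runner import the test modules by their
# full package name, e.g. 'bsddb.test.test_associate'.
alltests = test_all.suite(module_prefix='bsddb.test.')
unittest.TextTestRunner(verbosity=2).run(alltests)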
diff --git a/Lib/bsddb/test/test_associate.py b/Lib/bsddb/test/test_associate.py
deleted file mode 100644 (file)
index c5d0c92..0000000
+++ /dev/null
@@ -1,445 +0,0 @@
-"""
-TestCases for DB.associate.
-"""
-
-import sys, os, string
-import time
-from pprint import pprint
-
-import unittest
-from .test_all import db, dbshelve, test_support, verbose, have_threads, \
-        get_new_environment_path
-
-
-#----------------------------------------------------------------------
-
-
-musicdata = {
-1 : ("Bad English", "The Price Of Love", "Rock"),
-2 : ("DNA featuring Suzanne Vega", "Tom's Diner", "Rock"),
-3 : ("George Michael", "Praying For Time", "Rock"),
-4 : ("Gloria Estefan", "Here We Are", "Rock"),
-5 : ("Linda Ronstadt", "Don't Know Much", "Rock"),
-6 : ("Michael Bolton", "How Am I Supposed To Live Without You", "Blues"),
-7 : ("Paul Young", "Oh Girl", "Rock"),
-8 : ("Paula Abdul", "Opposites Attract", "Rock"),
-9 : ("Richard Marx", "Should've Known Better", "Rock"),
-10: ("Rod Stewart", "Forever Young", "Rock"),
-11: ("Roxette", "Dangerous", "Rock"),
-12: ("Sheena Easton", "The Lover In Me", "Rock"),
-13: ("Sinead O'Connor", "Nothing Compares 2 U", "Rock"),
-14: ("Stevie B.", "Because I Love You", "Rock"),
-15: ("Taylor Dayne", "Love Will Lead You Back", "Rock"),
-16: ("The Bangles", "Eternal Flame", "Rock"),
-17: ("Wilson Phillips", "Release Me", "Rock"),
-18: ("Billy Joel", "Blonde Over Blue", "Rock"),
-19: ("Billy Joel", "Famous Last Words", "Rock"),
-20: ("Billy Joel", "Lullabye (Goodnight, My Angel)", "Rock"),
-21: ("Billy Joel", "The River Of Dreams", "Rock"),
-22: ("Billy Joel", "Two Thousand Years", "Rock"),
-23: ("Janet Jackson", "Alright", "Rock"),
-24: ("Janet Jackson", "Black Cat", "Rock"),
-25: ("Janet Jackson", "Come Back To Me", "Rock"),
-26: ("Janet Jackson", "Escapade", "Rock"),
-27: ("Janet Jackson", "Love Will Never Do (Without You)", "Rock"),
-28: ("Janet Jackson", "Miss You Much", "Rock"),
-29: ("Janet Jackson", "Rhythm Nation", "Rock"),
-30: ("Janet Jackson", "State Of The World", "Rock"),
-31: ("Janet Jackson", "The Knowledge", "Rock"),
-32: ("Spyro Gyra", "End of Romanticism", "Jazz"),
-33: ("Spyro Gyra", "Heliopolis", "Jazz"),
-34: ("Spyro Gyra", "Jubilee", "Jazz"),
-35: ("Spyro Gyra", "Little Linda", "Jazz"),
-36: ("Spyro Gyra", "Morning Dance", "Jazz"),
-37: ("Spyro Gyra", "Song for Lorraine", "Jazz"),
-38: ("Yes", "Owner Of A Lonely Heart", "Rock"),
-39: ("Yes", "Rhythm Of Love", "Rock"),
-40: ("Cusco", "Dream Catcher", "New Age"),
-41: ("Cusco", "Geronimos Laughter", "New Age"),
-42: ("Cusco", "Ghost Dance", "New Age"),
-43: ("Blue Man Group", "Drumbone", "New Age"),
-44: ("Blue Man Group", "Endless Column", "New Age"),
-45: ("Blue Man Group", "Klein Mandelbrot", "New Age"),
-46: ("Kenny G", "Silhouette", "Jazz"),
-47: ("Sade", "Smooth Operator", "Jazz"),
-48: ("David Arkenstone", "Papillon (On The Wings Of The Butterfly)",
-     "New Age"),
-49: ("David Arkenstone", "Stepping Stars", "New Age"),
-50: ("David Arkenstone", "Carnation Lily Lily Rose", "New Age"),
-51: ("David Lanz", "Behind The Waterfall", "New Age"),
-52: ("David Lanz", "Cristofori's Dream", "New Age"),
-53: ("David Lanz", "Heartsounds", "New Age"),
-54: ("David Lanz", "Leaves on the Seine", "New Age"),
-99: ("unknown artist", "Unnamed song", "Unknown"),
-}
-
-#----------------------------------------------------------------------
-
-class AssociateErrorTestCase(unittest.TestCase):
-    def setUp(self):
-        self.filename = self.__class__.__name__ + '.db'
-        self.homeDir = get_new_environment_path()
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
-
-    def tearDown(self):
-        self.env.close()
-        self.env = None
-        test_support.rmtree(self.homeDir)
-
-    def test00_associateDBError(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test00_associateDBError..." % \
-                  self.__class__.__name__)
-
-        dupDB = db.DB(self.env)
-        dupDB.set_flags(db.DB_DUP)
-        dupDB.open(self.filename, "primary", db.DB_BTREE, db.DB_CREATE)
-
-        secDB = db.DB(self.env)
-        secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE)
-
-        # dupDB has been configured to allow duplicates, so it can't
-        # associate with a secondary.  Berkeley DB will return an error.
-        try:
-            def f(a,b): return a+b
-            dupDB.associate(secDB, f)
-        except db.DBError:
-            # good
-            secDB.close()
-            dupDB.close()
-        else:
-            secDB.close()
-            dupDB.close()
-            self.fail("DBError exception was expected")
-
-
-
-#----------------------------------------------------------------------
-
-
-class AssociateTestCase(unittest.TestCase):
-    keytype = ''
-    envFlags = 0
-    dbFlags = 0
-
-    def setUp(self):
-        self.filename = self.__class__.__name__ + '.db'
-        self.homeDir = get_new_environment_path()
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL |
-                               db.DB_INIT_LOCK | db.DB_THREAD | self.envFlags)
-
-    def tearDown(self):
-        self.closeDB()
-        self.env.close()
-        self.env = None
-        test_support.rmtree(self.homeDir)
-
-    def addDataToDB(self, d, txn=None):
-        for key, value in list(musicdata.items()):
-            if type(self.keytype) == type(''):
-                key = "%02d" % key
-            d.put(key, '|'.join(value), txn=txn)
-
-    def createDB(self, txn=None):
-        self.cur = None
-        self.secDB = None
-        self.primary = db.DB(self.env)
-        self.primary.set_get_returns_none(2)
-        if db.version() >= (4, 1):
-            self.primary.open(self.filename, "primary", self.dbtype,
-                          db.DB_CREATE | db.DB_THREAD | self.dbFlags, txn=txn)
-        else:
-            self.primary.open(self.filename, "primary", self.dbtype,
-                          db.DB_CREATE | db.DB_THREAD | self.dbFlags)
-
-    def closeDB(self):
-        if self.cur:
-            self.cur.close()
-            self.cur = None
-        if self.secDB:
-            self.secDB.close()
-            self.secDB = None
-        self.primary.close()
-        self.primary = None
-
-    def getDB(self):
-        return self.primary
-
-
-    def test01_associateWithDB(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_associateWithDB..." % \
-                  self.__class__.__name__)
-
-        self.createDB()
-
-        self.secDB = db.DB(self.env)
-        self.secDB.set_flags(db.DB_DUP)
-        self.secDB.set_get_returns_none(2)
-        self.secDB.open(self.filename, "secondary", db.DB_BTREE,
-                   db.DB_CREATE | db.DB_THREAD | self.dbFlags)
-        self.getDB().associate(self.secDB, self.getGenre)
-
-        self.addDataToDB(self.getDB())
-
-        self.finish_test(self.secDB)
-
-
-    def test02_associateAfterDB(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test02_associateAfterDB..." % \
-                  self.__class__.__name__)
-
-        self.createDB()
-        self.addDataToDB(self.getDB())
-
-        self.secDB = db.DB(self.env)
-        self.secDB.set_flags(db.DB_DUP)
-        self.secDB.open(self.filename, "secondary", db.DB_BTREE,
-                   db.DB_CREATE | db.DB_THREAD | self.dbFlags)
-
-        # adding the DB_CREATE flag will cause it to index existing records
-        self.getDB().associate(self.secDB, self.getGenre, db.DB_CREATE)
-
-        self.finish_test(self.secDB)
-
-
-    def finish_test(self, secDB, txn=None):
-        # 'Blues' should not be in the secondary database
-        vals = secDB.pget('Blues', txn=txn)
-        self.assertEqual(vals, None, vals)
-
-        vals = secDB.pget('Unknown', txn=txn)
-        self.assert_(vals[0] == 99 or vals[0] == '99', vals)
-        vals[1].index('Unknown')
-        vals[1].index('Unnamed')
-        vals[1].index('unknown')
-
-        if verbose:
-            print("Primary key traversal:")
-        self.cur = self.getDB().cursor(txn)
-        count = 0
-        rec = self.cur.first()
-        while rec is not None:
-            if type(self.keytype) == type(''):
-                self.assert_(int(rec[0]))  # for primary db, key is a number
-            else:
-                self.assert_(rec[0] and type(rec[0]) == type(0))
-            count = count + 1
-            if verbose:
-                print(rec)
-            rec = getattr(self.cur, "next")()
-        self.assertEqual(count, len(musicdata))  # all items accounted for
-
-
-        if verbose:
-            print("Secondary key traversal:")
-        self.cur = secDB.cursor(txn)
-        count = 0
-
-        # test cursor pget
-        vals = self.cur.pget('Unknown', flags=db.DB_LAST)
-        self.assert_(vals[1] == 99 or vals[1] == '99', vals)
-        self.assertEqual(vals[0], 'Unknown')
-        vals[2].index('Unknown')
-        vals[2].index('Unnamed')
-        vals[2].index('unknown')
-
-        vals = self.cur.pget('Unknown', data='wrong value', flags=db.DB_GET_BOTH)
-        self.assertEqual(vals, None, vals)
-
-        rec = self.cur.first()
-        self.assertEqual(rec[0], "Jazz")
-        while rec is not None:
-            count = count + 1
-            if verbose:
-                print(rec)
-            rec = getattr(self.cur, "next")()
-        # all items accounted for EXCEPT for 1 with "Blues" genre
-        self.assertEqual(count, len(musicdata)-1)
-
-        self.cur = None
-
-    def getGenre(self, priKey, priData):
-        self.assertEqual(type(priData), type(""))
-        genre = priData.split('|')[2]
-
-        if verbose:
-            print('getGenre key: %r data: %r' % (priKey, priData))
-
-        if genre == 'Blues':
-            return db.DB_DONOTINDEX
-        else:
-            return genre
-
-
-#----------------------------------------------------------------------
-
-
-class AssociateHashTestCase(AssociateTestCase):
-    dbtype = db.DB_HASH
-
-class AssociateBTreeTestCase(AssociateTestCase):
-    dbtype = db.DB_BTREE
-
-class AssociateRecnoTestCase(AssociateTestCase):
-    dbtype = db.DB_RECNO
-    keytype = 0
-
-#----------------------------------------------------------------------
-
-class AssociateBTreeTxnTestCase(AssociateBTreeTestCase):
-    envFlags = db.DB_INIT_TXN
-    dbFlags = 0
-
-    def txn_finish_test(self, sDB, txn):
-        try:
-            self.finish_test(sDB, txn=txn)
-        finally:
-            if self.cur:
-                self.cur.close()
-                self.cur = None
-            if txn:
-                txn.commit()
-
-    def test13_associate_in_transaction(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test13_associateAutoCommit..." % \
-                  self.__class__.__name__)
-
-        txn = self.env.txn_begin()
-        try:
-            self.createDB(txn=txn)
-
-            self.secDB = db.DB(self.env)
-            self.secDB.set_flags(db.DB_DUP)
-            self.secDB.set_get_returns_none(2)
-            self.secDB.open(self.filename, "secondary", db.DB_BTREE,
-                       db.DB_CREATE | db.DB_THREAD, txn=txn)
-            if db.version() >= (4,1):
-                self.getDB().associate(self.secDB, self.getGenre, txn=txn)
-            else:
-                self.getDB().associate(self.secDB, self.getGenre)
-
-            self.addDataToDB(self.getDB(), txn=txn)
-        except:
-            txn.abort()
-            raise
-
-        self.txn_finish_test(self.secDB, txn=txn)
-
-
-#----------------------------------------------------------------------
-
-class ShelveAssociateTestCase(AssociateTestCase):
-
-    def createDB(self):
-        self.primary = dbshelve.open(self.filename,
-                                     dbname="primary",
-                                     dbenv=self.env,
-                                     filetype=self.dbtype)
-
-    def addDataToDB(self, d):
-        for key, value in list(musicdata.items()):
-            if type(self.keytype) == type(''):
-                key = "%02d" % key
-            d.put(key, value)    # save the value as is this time
-
-
-    def getGenre(self, priKey, priData):
-        self.assertEqual(type(priData), type(()))
-        if verbose:
-            print('getGenre key: %r data: %r' % (priKey, priData))
-        genre = priData[2]
-        if genre == 'Blues':
-            return db.DB_DONOTINDEX
-        else:
-            return genre
-
-
-class ShelveAssociateHashTestCase(ShelveAssociateTestCase):
-    dbtype = db.DB_HASH
-
-class ShelveAssociateBTreeTestCase(ShelveAssociateTestCase):
-    dbtype = db.DB_BTREE
-
-class ShelveAssociateRecnoTestCase(ShelveAssociateTestCase):
-    dbtype = db.DB_RECNO
-    keytype = 0
-
-
-#----------------------------------------------------------------------
-
-class ThreadedAssociateTestCase(AssociateTestCase):
-
-    def addDataToDB(self, d):
-        t1 = Thread(target = self.writer1,
-                    args = (d, ))
-        t2 = Thread(target = self.writer2,
-                    args = (d, ))
-
-        t1.setDaemon(True)
-        t2.setDaemon(True)
-        t1.start()
-        t2.start()
-        t1.join()
-        t2.join()
-
-    def writer1(self, d):
-        for key, value in list(musicdata.items()):
-            if type(self.keytype) == type(''):
-                key = "%02d" % key
-            d.put(key, '|'.join(value))
-
-    def writer2(self, d):
-        for x in range(100, 600):
-            key = 'z%2d' % x
-            value = [key] * 4
-            d.put(key, '|'.join(value))
-
-
-class ThreadedAssociateHashTestCase(ShelveAssociateTestCase):
-    dbtype = db.DB_HASH
-
-class ThreadedAssociateBTreeTestCase(ShelveAssociateTestCase):
-    dbtype = db.DB_BTREE
-
-class ThreadedAssociateRecnoTestCase(ShelveAssociateTestCase):
-    dbtype = db.DB_RECNO
-    keytype = 0
-
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-
-    suite.addTest(unittest.makeSuite(AssociateErrorTestCase))
-
-    suite.addTest(unittest.makeSuite(AssociateHashTestCase))
-    suite.addTest(unittest.makeSuite(AssociateBTreeTestCase))
-    suite.addTest(unittest.makeSuite(AssociateRecnoTestCase))
-
-    if db.version() >= (4, 1):
-        suite.addTest(unittest.makeSuite(AssociateBTreeTxnTestCase))
-
-    suite.addTest(unittest.makeSuite(ShelveAssociateHashTestCase))
-    suite.addTest(unittest.makeSuite(ShelveAssociateBTreeTestCase))
-    suite.addTest(unittest.makeSuite(ShelveAssociateRecnoTestCase))
-
-    if have_threads:
-        suite.addTest(unittest.makeSuite(ThreadedAssociateHashTestCase))
-        suite.addTest(unittest.makeSuite(ThreadedAssociateBTreeTestCase))
-        suite.addTest(unittest.makeSuite(ThreadedAssociateRecnoTestCase))
-
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
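For context, the association pattern these tests exercise can be condensed into a short sketch. This is a hedged illustration built only from calls used above (DBEnv.open, DB.open, DB.set_flags, DB.associate, DB.put and DB.pget from the legacy bsddb.db API); the environment path and record layout are illustrative, not part of the original suite.

    from bsddb import db

    env = db.DBEnv()
    env.open('/tmp/assoc-example', db.DB_CREATE | db.DB_INIT_MPOOL)  # illustrative, pre-existing directory

    primary = db.DB(env)
    primary.open('example.db', 'primary', db.DB_BTREE, db.DB_CREATE)

    secondary = db.DB(env)
    secondary.set_flags(db.DB_DUP)    # many primary records may map to one secondary key
    secondary.open('example.db', 'secondary', db.DB_BTREE, db.DB_CREATE)

    def genre_of(pkey, pdata):
        # the callback derives the secondary key from the primary record,
        # mirroring getGenre() above
        return pdata.split('|')[2]

    primary.associate(secondary, genre_of)   # subsequent puts are indexed automatically
    primary.put('01', 'Artist|Title|Rock')

    print(secondary.pget('Rock'))            # ('01', 'Artist|Title|Rock'): primary key and data

Calling pget on the secondary database returns the primary key/data pair, which is exactly what finish_test() checks via secDB.pget('Unknown') above.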
diff --git a/Lib/bsddb/test/test_basics.py b/Lib/bsddb/test/test_basics.py
deleted file mode 100644 (file)
index 704bc41..0000000
+++ /dev/null
@@ -1,1053 +0,0 @@
-"""
-Basic TestCases for BTree and hash DBs, with and without a DBEnv, with
-various DB flags, etc.
-"""
-
-import os
-import errno
-import string
-from pprint import pprint
-import unittest
-import time
-
-from .test_all import db, test_support, verbose, get_new_environment_path, \
-        get_new_database_path
-
-DASH = '-'
-
-
-#----------------------------------------------------------------------
-
-class VersionTestCase(unittest.TestCase):
-    def test00_version(self):
-        info = db.version()
-        if verbose:
-            print('\n', '-=' * 20)
-            print('bsddb.db.version(): %s' % (info, ))
-            print(db.DB_VERSION_STRING)
-            print('-=' * 20)
-        self.assertEqual(info, (db.DB_VERSION_MAJOR, db.DB_VERSION_MINOR,
-                        db.DB_VERSION_PATCH))
-
-#----------------------------------------------------------------------
-
-class BasicTestCase(unittest.TestCase):
-    dbtype       = db.DB_UNKNOWN  # must be set in derived class
-    dbopenflags  = 0
-    dbsetflags   = 0
-    dbmode       = 0o660
-    dbname       = None
-    useEnv       = 0
-    envflags     = 0
-    envsetflags  = 0
-
-    _numKeys      = 1002    # PRIVATE.  NOTE: must be an even value
-
-    def setUp(self):
-        if self.useEnv:
-            self.homeDir=get_new_environment_path()
-            try:
-                self.env = db.DBEnv()
-                self.env.set_lg_max(1024*1024)
-                self.env.set_tx_max(30)
-                self.env.set_tx_timestamp(int(time.time()))
-                self.env.set_flags(self.envsetflags, 1)
-                self.env.open(self.homeDir, self.envflags | db.DB_CREATE)
-                self.filename = "test"
-            # Yes, a bare except is intended, since we're re-raising the exc.
-            except:
-                test_support.rmtree(self.homeDir)
-                raise
-        else:
-            self.env = None
-            self.filename = get_new_database_path()
-
-        # create and open the DB
-        self.d = db.DB(self.env)
-        self.d.set_flags(self.dbsetflags)
-        if self.dbname:
-            self.d.open(self.filename, self.dbname, self.dbtype,
-                        self.dbopenflags|db.DB_CREATE, self.dbmode)
-        else:
-            self.d.open(self.filename,   # try out keyword args
-                        mode = self.dbmode,
-                        dbtype = self.dbtype,
-                        flags = self.dbopenflags|db.DB_CREATE)
-
-        self.populateDB()
-
-
-    def tearDown(self):
-        self.d.close()
-        if self.env is not None:
-            self.env.close()
-            test_support.rmtree(self.homeDir)
-        else:
-            os.remove(self.filename)
-
-
-
-    def populateDB(self, _txn=None):
-        d = self.d
-
-        for x in range(self._numKeys//2):
-            key = '%04d' % (self._numKeys - x)  # insert keys in reverse order
-            data = self.makeData(key)
-            d.put(key, data, _txn)
-
-        d.put('empty value', '', _txn)
-
-        for x in range(self._numKeys//2-1):
-            key = '%04d' % x  # and now some in forward order
-            data = self.makeData(key)
-            d.put(key, data, _txn)
-
-        if _txn:
-            _txn.commit()
-
-        num = len(d)
-        if verbose:
-            print("created %d records" % num)
-
-
-    def makeData(self, key):
-        return DASH.join([key] * 5)
-
-
-
-    #----------------------------------------
-
-    def test01_GetsAndPuts(self):
-        d = self.d
-
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_GetsAndPuts..." % self.__class__.__name__)
-
-        for key in ['0001', '0100', '0400', '0700', '0999']:
-            data = d.get(key)
-            if verbose:
-                print(data)
-
-        self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321')
-
-        # By default non-existent keys return None...
-        self.assertEqual(d.get('abcd'), None)
-
-        # ...but they raise exceptions in other situations.  Call
-        # set_get_returns_none() to change it.
-        try:
-            d.delete('abcd')
-        except db.DBNotFoundError as val:
-            import sys
-            if sys.version_info[0] < 3 :
-                self.assertEqual(val[0], db.DB_NOTFOUND)
-            else :
-                self.assertEqual(val.args[0], db.DB_NOTFOUND)
-            if verbose: print(val)
-        else:
-            self.fail("expected exception")
-
-
-        d.put('abcd', 'a new record')
-        self.assertEqual(d.get('abcd'), 'a new record')
-
-        d.put('abcd', 'same key')
-        if self.dbsetflags & db.DB_DUP:
-            self.assertEqual(d.get('abcd'), 'a new record')
-        else:
-            self.assertEqual(d.get('abcd'), 'same key')
-
-
-        try:
-            d.put('abcd', 'this should fail', flags=db.DB_NOOVERWRITE)
-        except db.DBKeyExistError as val:
-            import sys
-            if sys.version_info[0] < 3 :
-                self.assertEqual(val[0], db.DB_KEYEXIST)
-            else :
-                self.assertEqual(val.args[0], db.DB_KEYEXIST)
-            if verbose: print(val)
-        else:
-            self.fail("expected exception")
-
-        if self.dbsetflags & db.DB_DUP:
-            self.assertEqual(d.get('abcd'), 'a new record')
-        else:
-            self.assertEqual(d.get('abcd'), 'same key')
-
-
-        d.sync()
-        d.close()
-        del d
-
-        self.d = db.DB(self.env)
-        if self.dbname:
-            self.d.open(self.filename, self.dbname)
-        else:
-            self.d.open(self.filename)
-        d = self.d
-
-        self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321')
-        if self.dbsetflags & db.DB_DUP:
-            self.assertEqual(d.get('abcd'), 'a new record')
-        else:
-            self.assertEqual(d.get('abcd'), 'same key')
-
-        rec = d.get_both('0555', '0555-0555-0555-0555-0555')
-        if verbose:
-            print(rec)
-
-        self.assertEqual(d.get_both('0555', 'bad data'), None)
-
-        # test default value
-        data = d.get('bad key', 'bad data')
-        self.assertEqual(data, 'bad data')
-
-        # any object can pass through
-        data = d.get('bad key', self)
-        self.assertEqual(data, self)
-
-        s = d.stat()
-        self.assertEqual(type(s), type({}))
-        if verbose:
-            print('d.stat() returned this dictionary:')
-            pprint(s)
-
-
-    #----------------------------------------
-
-    def test02_DictionaryMethods(self):
-        d = self.d
-
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test02_DictionaryMethods..." % \
-                  self.__class__.__name__)
-
-        for key in ['0002', '0101', '0401', '0701', '0998']:
-            data = d[key]
-            self.assertEqual(data, self.makeData(key))
-            if verbose:
-                print(data)
-
-        self.assertEqual(len(d), self._numKeys)
-        keys = list(d.keys())
-        self.assertEqual(len(keys), self._numKeys)
-        self.assertEqual(type(keys), type([]))
-
-        d['new record'] = 'a new record'
-        self.assertEqual(len(d), self._numKeys+1)
-        keys = list(d.keys())
-        self.assertEqual(len(keys), self._numKeys+1)
-
-        d['new record'] = 'a replacement record'
-        self.assertEqual(len(d), self._numKeys+1)
-        keys = list(d.keys())
-        self.assertEqual(len(keys), self._numKeys+1)
-
-        if verbose:
-            print("the first 10 keys are:")
-            pprint(keys[:10])
-
-        self.assertEqual(d['new record'], 'a replacement record')
-
-# We check also the positional parameter
-        self.assertEqual(d.has_key('0001', None), 1)
-# We check also the keyword parameter
-        self.assertEqual(d.has_key('spam', txn=None), 0)
-
-        items = list(d.items())
-        self.assertEqual(len(items), self._numKeys+1)
-        self.assertEqual(type(items), type([]))
-        self.assertEqual(type(items[0]), type(()))
-        self.assertEqual(len(items[0]), 2)
-
-        if verbose:
-            print("the first 10 items are:")
-            pprint(items[:10])
-
-        values = list(d.values())
-        self.assertEqual(len(values), self._numKeys+1)
-        self.assertEqual(type(values), type([]))
-
-        if verbose:
-            print("the first 10 values are:")
-            pprint(values[:10])
-
-
-
-    #----------------------------------------
-
-    def test03_SimpleCursorStuff(self, get_raises_error=0, set_raises_error=0):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03_SimpleCursorStuff (get_error %s, set_error %s)..." % \
-                  (self.__class__.__name__, get_raises_error, set_raises_error))
-
-        if self.env and self.dbopenflags & db.DB_AUTO_COMMIT:
-            txn = self.env.txn_begin()
-        else:
-            txn = None
-        c = self.d.cursor(txn=txn)
-
-        rec = c.first()
-        count = 0
-        while rec is not None:
-            count = count + 1
-            if verbose and count % 100 == 0:
-                print(rec)
-            try:
-                rec = next(c)
-            except db.DBNotFoundError as val:
-                if get_raises_error:
-                    import sys
-                    if sys.version_info[0] < 3 :
-                        self.assertEqual(val[0], db.DB_NOTFOUND)
-                    else :
-                        self.assertEqual(val.args[0], db.DB_NOTFOUND)
-                    if verbose: print(val)
-                    rec = None
-                else:
-                    self.fail("unexpected DBNotFoundError")
-            self.assertEqual(c.get_current_size(), len(c.current()[1]),
-                    "%s != len(%r)" % (c.get_current_size(), c.current()[1]))
-
-        self.assertEqual(count, self._numKeys)
-
-
-        rec = c.last()
-        count = 0
-        while rec is not None:
-            count = count + 1
-            if verbose and count % 100 == 0:
-                print(rec)
-            try:
-                rec = c.prev()
-            except db.DBNotFoundError as val:
-                if get_raises_error:
-                    import sys
-                    if sys.version_info[0] < 3 :
-                        self.assertEqual(val[0], db.DB_NOTFOUND)
-                    else :
-                        self.assertEqual(val.args[0], db.DB_NOTFOUND)
-                    if verbose: print(val)
-                    rec = None
-                else:
-                    self.fail("unexpected DBNotFoundError")
-
-        self.assertEqual(count, self._numKeys)
-
-        rec = c.set('0505')
-        rec2 = c.current()
-        self.assertEqual(rec, rec2)
-        self.assertEqual(rec[0], '0505')
-        self.assertEqual(rec[1], self.makeData('0505'))
-        self.assertEqual(c.get_current_size(), len(rec[1]))
-
-        # make sure we get empty values properly
-        rec = c.set('empty value')
-        self.assertEqual(rec[1], '')
-        self.assertEqual(c.get_current_size(), 0)
-
-        try:
-            n = c.set('bad key')
-        except db.DBNotFoundError as val:
-            import sys
-            if sys.version_info[0] < 3 :
-                self.assertEqual(val[0], db.DB_NOTFOUND)
-            else :
-                self.assertEqual(val.args[0], db.DB_NOTFOUND)
-            if verbose: print(val)
-        else:
-            if set_raises_error:
-                self.fail("expected exception")
-            if n != None:
-                self.fail("expected None: %r" % (n,))
-
-        rec = c.get_both('0404', self.makeData('0404'))
-        self.assertEqual(rec, ('0404', self.makeData('0404')))
-
-        try:
-            n = c.get_both('0404', 'bad data')
-        except db.DBNotFoundError as val:
-            import sys
-            if sys.version_info[0] < 3 :
-                self.assertEqual(val[0], db.DB_NOTFOUND)
-            else :
-                self.assertEqual(val.args[0], db.DB_NOTFOUND)
-            if verbose: print(val)
-        else:
-            if get_raises_error:
-                self.fail("expected exception")
-            if n != None:
-                self.fail("expected None: %r" % (n,))
-
-        if self.d.get_type() == db.DB_BTREE:
-            rec = c.set_range('011')
-            if verbose:
-                print("searched for '011', found: ", rec)
-
-            rec = c.set_range('011',dlen=0,doff=0)
-            if verbose:
-                print("searched (partial) for '011', found: ", rec)
-            if rec[1] != '': self.fail('expected empty data portion')
-
-            ev = c.set_range('empty value')
-            if verbose:
-                print("search for 'empty value' returned", ev)
-            if ev[1] != '': self.fail('empty value lookup failed')
-
-        c.set('0499')
-        c.delete()
-        try:
-            rec = c.current()
-        except db.DBKeyEmptyError as val:
-            if get_raises_error:
-                import sys
-                if sys.version_info[0] < 3 :
-                    self.assertEqual(val[0], db.DB_KEYEMPTY)
-                else :
-                    self.assertEqual(val.args[0], db.DB_KEYEMPTY)
-                if verbose: print(val)
-            else:
-                self.fail("unexpected DBKeyEmptyError")
-        else:
-            if get_raises_error:
-                self.fail('DBKeyEmptyError exception expected')
-
-        next(c)
-        c2 = c.dup(db.DB_POSITION)
-        self.assertEqual(c.current(), c2.current())
-
-        c2.put('', 'a new value', db.DB_CURRENT)
-        self.assertEqual(c.current(), c2.current())
-        self.assertEqual(c.current()[1], 'a new value')
-
-        c2.put('', 'er', db.DB_CURRENT, dlen=0, doff=5)
-        self.assertEqual(c2.current()[1], 'a newer value')
-
-        c.close()
-        c2.close()
-        if txn:
-            txn.commit()
-
-        # time to abuse the closed cursors and hope we don't crash
-        methods_to_test = {
-            'current': (),
-            'delete': (),
-            'dup': (db.DB_POSITION,),
-            'first': (),
-            'get': (0,),
-            'next': (),
-            'prev': (),
-            'last': (),
-            'put':('', 'spam', db.DB_CURRENT),
-            'set': ("0505",),
-        }
-        for method, args in list(methods_to_test.items()):
-            try:
-                if verbose:
-                    print("attempting to use a closed cursor's %s method" % \
-                          method)
-                # a bug may cause a NULL pointer dereference...
-                getattr(c, method)(*args)
-            except db.DBError as val:
-                import sys
-                if sys.version_info[0] < 3 :
-                    self.assertEqual(val[0], 0)
-                else :
-                    self.assertEqual(val.args[0], 0)
-                if verbose: print(val)
-            else:
-                self.fail("no exception raised when using a buggy cursor's"
-                          "%s method" % method)
-
-        #
-        # free cursor referencing a closed database, it should not barf:
-        #
-        oldcursor = self.d.cursor(txn=txn)
-        self.d.close()
-
-        # this would originally cause a segfault when the cursor for a
-        # closed database was cleaned up.  it should not anymore.
-        # SF pybsddb bug id 667343
-        del oldcursor
-
-    def test03b_SimpleCursorWithoutGetReturnsNone0(self):
-        # same test but raise exceptions instead of returning None
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \
-                  self.__class__.__name__)
-
-        old = self.d.set_get_returns_none(0)
-        self.assertEqual(old, 2)
-        self.test03_SimpleCursorStuff(get_raises_error=1, set_raises_error=1)
-
-    def test03b_SimpleCursorWithGetReturnsNone1(self):
-        # same test but raise exceptions instead of returning None
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \
-                  self.__class__.__name__)
-
-        old = self.d.set_get_returns_none(1)
-        self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=1)
-
-
-    def test03c_SimpleCursorGetReturnsNone2(self):
-        # same test but raise exceptions instead of returning None
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03c_SimpleCursorStuffWithoutSetReturnsNone..." % \
-                  self.__class__.__name__)
-
-        old = self.d.set_get_returns_none(1)
-        self.assertEqual(old, 2)
-        old = self.d.set_get_returns_none(2)
-        self.assertEqual(old, 1)
-        self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=0)
-
-    #----------------------------------------
-
-    def test04_PartialGetAndPut(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test04_PartialGetAndPut..." % \
-                  self.__class__.__name__)
-
-        key = "partialTest"
-        data = "1" * 1000 + "2" * 1000
-        d.put(key, data)
-        self.assertEqual(d.get(key), data)
-        self.assertEqual(d.get(key, dlen=20, doff=990),
-                ("1" * 10) + ("2" * 10))
-
-        d.put("partialtest2", ("1" * 30000) + "robin" )
-        self.assertEqual(d.get("partialtest2", dlen=5, doff=30000), "robin")
-
-        # There seems to be a bug in DB here...  Commented out the test for
-        # now.
-        ##self.assertEqual(d.get("partialtest2", dlen=5, doff=30010), "")
-
-        if self.dbsetflags != db.DB_DUP:
-            # Partial put with duplicate records requires a cursor
-            d.put(key, "0000", dlen=2000, doff=0)
-            self.assertEqual(d.get(key), "0000")
-
-            d.put(key, "1111", dlen=1, doff=2)
-            self.assertEqual(d.get(key), "0011110")
-
-    #----------------------------------------
-
-    def test05_GetSize(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test05_GetSize..." % self.__class__.__name__)
-
-        for i in range(1, 50000, 500):
-            key = "size%s" % i
-            #print "before ", i,
-            d.put(key, "1" * i)
-            #print "after",
-            self.assertEqual(d.get_size(key), i)
-            #print "done"
-
-    #----------------------------------------
-
-    def test06_Truncate(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test99_Truncate..." % self.__class__.__name__)
-
-        d.put("abcde", "ABCDE");
-        num = d.truncate()
-        self.assert_(num >= 1, "truncate returned <= 0 on non-empty database")
-        num = d.truncate()
-        self.assertEqual(num, 0,
-                "truncate on empty DB returned nonzero (%r)" % (num,))
-
-    #----------------------------------------
-
-
-#----------------------------------------------------------------------
-
-
-class BasicBTreeTestCase(BasicTestCase):
-    dbtype = db.DB_BTREE
-
-
-class BasicHashTestCase(BasicTestCase):
-    dbtype = db.DB_HASH
-
-
-class BasicBTreeWithThreadFlagTestCase(BasicTestCase):
-    dbtype = db.DB_BTREE
-    dbopenflags = db.DB_THREAD
-
-
-class BasicHashWithThreadFlagTestCase(BasicTestCase):
-    dbtype = db.DB_HASH
-    dbopenflags = db.DB_THREAD
-
-
-class BasicWithEnvTestCase(BasicTestCase):
-    dbopenflags = db.DB_THREAD
-    useEnv = 1
-    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
-
-    #----------------------------------------
-
-    def test07_EnvRemoveAndRename(self):
-        if not self.env:
-            return
-
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test07_EnvRemoveAndRename..." % self.__class__.__name__)
-
-        # can't rename or remove an open DB
-        self.d.close()
-
-        newname = self.filename + '.renamed'
-        self.env.dbrename(self.filename, None, newname)
-        self.env.dbremove(newname)
-
-    # dbremove and dbrename are in 4.1 and later
-    if db.version() < (4,1):
-        del test07_EnvRemoveAndRename
-
-    #----------------------------------------
-
-class BasicBTreeWithEnvTestCase(BasicWithEnvTestCase):
-    dbtype = db.DB_BTREE
-
-
-class BasicHashWithEnvTestCase(BasicWithEnvTestCase):
-    dbtype = db.DB_HASH
-
-
-#----------------------------------------------------------------------
-
-class BasicTransactionTestCase(BasicTestCase):
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertTrue(self, expr, msg=None):
-            self.failUnless(expr,msg=msg)
-
-    dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT
-    useEnv = 1
-    envflags = (db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
-                db.DB_INIT_TXN)
-    envsetflags = db.DB_AUTO_COMMIT
-
-
-    def tearDown(self):
-        self.txn.commit()
-        BasicTestCase.tearDown(self)
-
-
-    def populateDB(self):
-        txn = self.env.txn_begin()
-        BasicTestCase.populateDB(self, _txn=txn)
-
-        self.txn = self.env.txn_begin()
-
-
-    def test06_Transactions(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test06_Transactions..." % self.__class__.__name__)
-
-        self.assertEqual(d.get('new rec', txn=self.txn), None)
-        d.put('new rec', 'this is a new record', self.txn)
-        self.assertEqual(d.get('new rec', txn=self.txn),
-                'this is a new record')
-        self.txn.abort()
-        self.assertEqual(d.get('new rec'), None)
-
-        self.txn = self.env.txn_begin()
-
-        self.assertEqual(d.get('new rec', txn=self.txn), None)
-        d.put('new rec', 'this is a new record', self.txn)
-        self.assertEqual(d.get('new rec', txn=self.txn),
-                'this is a new record')
-        self.txn.commit()
-        self.assertEqual(d.get('new rec'), 'this is a new record')
-
-        self.txn = self.env.txn_begin()
-        c = d.cursor(self.txn)
-        rec = c.first()
-        count = 0
-        while rec is not None:
-            count = count + 1
-            if verbose and count % 100 == 0:
-                print(rec)
-            rec = next(c)
-        self.assertEqual(count, self._numKeys+1)
-
-        c.close()                # Cursors *MUST* be closed before commit!
-        self.txn.commit()
-
-        # flush pending updates
-        try:
-            self.env.txn_checkpoint (0, 0, 0)
-        except db.DBIncompleteError:
-            pass
-
-        statDict = self.env.log_stat(0);
-        self.assert_('magic' in statDict)
-        self.assert_('version' in statDict)
-        self.assert_('cur_file' in statDict)
-        self.assert_('region_nowait' in statDict)
-
-        # must have at least one log file present:
-        logs = self.env.log_archive(db.DB_ARCH_ABS | db.DB_ARCH_LOG)
-        self.assertNotEqual(logs, None)
-        for log in logs:
-            if verbose:
-                print('log file: ' + log)
-        if db.version() >= (4,2):
-            logs = self.env.log_archive(db.DB_ARCH_REMOVE)
-            self.assertTrue(not logs)
-
-        self.txn = self.env.txn_begin()
-
-    #----------------------------------------
-
-    def test07_TxnTruncate(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test07_TxnTruncate..." % self.__class__.__name__)
-
-        d.put("abcde", "ABCDE");
-        txn = self.env.txn_begin()
-        num = d.truncate(txn)
-        self.assert_(num >= 1, "truncate returned <= 0 on non-empty database")
-        num = d.truncate(txn)
-        self.assertEqual(num, 0,
-                "truncate on empty DB returned nonzero (%r)" % (num,))
-        txn.commit()
-
-    #----------------------------------------
-
-    def test08_TxnLateUse(self):
-        txn = self.env.txn_begin()
-        txn.abort()
-        try:
-            txn.abort()
-        except db.DBError as e:
-            pass
-        else:
-            raise RuntimeError("DBTxn.abort() called after DB_TXN no longer valid w/o an exception")
-
-        txn = self.env.txn_begin()
-        txn.commit()
-        try:
-            txn.commit()
-        except db.DBError as e:
-            pass
-        else:
-            raise RuntimeError("DBTxn.commit() called after DB_TXN no longer valid w/o an exception")
-
-
-class BTreeTransactionTestCase(BasicTransactionTestCase):
-    dbtype = db.DB_BTREE
-
-class HashTransactionTestCase(BasicTransactionTestCase):
-    dbtype = db.DB_HASH
-
-
-
-#----------------------------------------------------------------------
-
-class BTreeRecnoTestCase(BasicTestCase):
-    dbtype     = db.DB_BTREE
-    dbsetflags = db.DB_RECNUM
-
-    def test07_RecnoInBTree(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test07_RecnoInBTree..." % self.__class__.__name__)
-
-        rec = d.get(200)
-        self.assertEqual(type(rec), type(()))
-        self.assertEqual(len(rec), 2)
-        if verbose:
-            print("Record #200 is ", rec)
-
-        c = d.cursor()
-        c.set('0200')
-        num = c.get_recno()
-        self.assertEqual(type(num), type(1))
-        if verbose:
-            print("recno of d['0200'] is ", num)
-
-        rec = c.current()
-        self.assertEqual(c.set_recno(num), rec)
-
-        c.close()
-
-
-
-class BTreeRecnoWithThreadFlagTestCase(BTreeRecnoTestCase):
-    dbopenflags = db.DB_THREAD
-
-#----------------------------------------------------------------------
-
-class BasicDUPTestCase(BasicTestCase):
-    dbsetflags = db.DB_DUP
-
-    def test08_DuplicateKeys(self):
-        d = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test08_DuplicateKeys..." % \
-                  self.__class__.__name__)
-
-        d.put("dup0", "before")
-        for x in "The quick brown fox jumped over the lazy dog.".split():
-            d.put("dup1", x)
-        d.put("dup2", "after")
-
-        data = d.get("dup1")
-        self.assertEqual(data, "The")
-        if verbose:
-            print(data)
-
-        c = d.cursor()
-        rec = c.set("dup1")
-        self.assertEqual(rec, ('dup1', 'The'))
-
-        next_reg = next(c)
-        self.assertEqual(next_reg, ('dup1', 'quick'))
-
-        rec = c.set("dup1")
-        count = c.count()
-        self.assertEqual(count, 9)
-
-        next_dup = c.next_dup()
-        self.assertEqual(next_dup, ('dup1', 'quick'))
-
-        rec = c.set('dup1')
-        while rec is not None:
-            if verbose:
-                print(rec)
-            rec = c.next_dup()
-
-        c.set('dup1')
-        rec = c.next_nodup()
-        self.assertNotEqual(rec[0], 'dup1')
-        if verbose:
-            print(rec)
-
-        c.close()
-
-
-
-class BTreeDUPTestCase(BasicDUPTestCase):
-    dbtype = db.DB_BTREE
-
-class HashDUPTestCase(BasicDUPTestCase):
-    dbtype = db.DB_HASH
-
-class BTreeDUPWithThreadTestCase(BasicDUPTestCase):
-    dbtype = db.DB_BTREE
-    dbopenflags = db.DB_THREAD
-
-class HashDUPWithThreadTestCase(BasicDUPTestCase):
-    dbtype = db.DB_HASH
-    dbopenflags = db.DB_THREAD
-
-
-#----------------------------------------------------------------------
-
-class BasicMultiDBTestCase(BasicTestCase):
-    dbname = 'first'
-
-    def otherType(self):
-        if self.dbtype == db.DB_BTREE:
-            return db.DB_HASH
-        else:
-            return db.DB_BTREE
-
-    def test09_MultiDB(self):
-        d1 = self.d
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test09_MultiDB..." % self.__class__.__name__)
-
-        d2 = db.DB(self.env)
-        d2.open(self.filename, "second", self.dbtype,
-                self.dbopenflags|db.DB_CREATE)
-        d3 = db.DB(self.env)
-        d3.open(self.filename, "third", self.otherType(),
-                self.dbopenflags|db.DB_CREATE)
-
-        for x in "The quick brown fox jumped over the lazy dog".split():
-            d2.put(x, self.makeData(x))
-
-        for x in string.letters:
-            d3.put(x, x*70)
-
-        d1.sync()
-        d2.sync()
-        d3.sync()
-        d1.close()
-        d2.close()
-        d3.close()
-
-        self.d = d1 = d2 = d3 = None
-
-        self.d = d1 = db.DB(self.env)
-        d1.open(self.filename, self.dbname, flags = self.dbopenflags)
-        d2 = db.DB(self.env)
-        d2.open(self.filename, "second",  flags = self.dbopenflags)
-        d3 = db.DB(self.env)
-        d3.open(self.filename, "third", flags = self.dbopenflags)
-
-        c1 = d1.cursor()
-        c2 = d2.cursor()
-        c3 = d3.cursor()
-
-        count = 0
-        rec = c1.first()
-        while rec is not None:
-            count = count + 1
-            if verbose and (count % 50) == 0:
-                print(rec)
-            rec = next(c1)
-        self.assertEqual(count, self._numKeys)
-
-        count = 0
-        rec = c2.first()
-        while rec is not None:
-            count = count + 1
-            if verbose:
-                print(rec)
-            rec = next(c2)
-        self.assertEqual(count, 9)
-
-        count = 0
-        rec = c3.first()
-        while rec is not None:
-            count = count + 1
-            if verbose:
-                print(rec)
-            rec = next(c3)
-        self.assertEqual(count, len(string.letters))
-
-
-        c1.close()
-        c2.close()
-        c3.close()
-
-        d2.close()
-        d3.close()
-
-
-
-# Strange things happen if you try to use Multiple DBs per file without a
-# DBEnv with MPOOL and LOCKing...
-
-class BTreeMultiDBTestCase(BasicMultiDBTestCase):
-    dbtype = db.DB_BTREE
-    dbopenflags = db.DB_THREAD
-    useEnv = 1
-    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
-
-class HashMultiDBTestCase(BasicMultiDBTestCase):
-    dbtype = db.DB_HASH
-    dbopenflags = db.DB_THREAD
-    useEnv = 1
-    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
-
-
-class PrivateObject(unittest.TestCase) :
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertTrue(self, expr, msg=None):
-            self.failUnless(expr,msg=msg)
-
-    def tearDown(self) :
-        del self.obj
-
-    def test01_DefaultIsNone(self) :
-        self.assertEqual(self.obj.get_private(), None)
-
-    def test02_assignment(self) :
-        a = "example of private object"
-        self.obj.set_private(a)
-        b = self.obj.get_private()
-        self.assertTrue(a is b)  # Object identity
-
-    def test03_leak_assignment(self) :
-        import sys
-        a = "example of private object"
-        refcount = sys.getrefcount(a)
-        self.obj.set_private(a)
-        self.assertEqual(refcount+1, sys.getrefcount(a))
-        self.obj.set_private(None)
-        self.assertEqual(refcount, sys.getrefcount(a))
-
-    def test04_leak_GC(self) :
-        import sys
-        a = "example of private object"
-        refcount = sys.getrefcount(a)
-        self.obj.set_private(a)
-        self.obj = None
-        self.assertEqual(refcount, sys.getrefcount(a))
-
-class DBEnvPrivateObject(PrivateObject) :
-    def setUp(self) :
-        self.obj = db.DBEnv()
-
-class DBPrivateObject(PrivateObject) :
-    def setUp(self) :
-        self.obj = db.DB()
-
-class CrashAndBurn(unittest.TestCase) :
-    def test01_OpenCrash(self) :
-        # See http://bugs.python.org/issue3307
-        self.assertRaises(db.DBInvalidArgError, db.DB, None, 65535)
-
-
-#----------------------------------------------------------------------
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-
-    suite.addTest(unittest.makeSuite(VersionTestCase))
-    suite.addTest(unittest.makeSuite(BasicBTreeTestCase))
-    suite.addTest(unittest.makeSuite(BasicHashTestCase))
-    suite.addTest(unittest.makeSuite(BasicBTreeWithThreadFlagTestCase))
-    suite.addTest(unittest.makeSuite(BasicHashWithThreadFlagTestCase))
-    suite.addTest(unittest.makeSuite(BasicBTreeWithEnvTestCase))
-    suite.addTest(unittest.makeSuite(BasicHashWithEnvTestCase))
-    suite.addTest(unittest.makeSuite(BTreeTransactionTestCase))
-    suite.addTest(unittest.makeSuite(HashTransactionTestCase))
-    suite.addTest(unittest.makeSuite(BTreeRecnoTestCase))
-    suite.addTest(unittest.makeSuite(BTreeRecnoWithThreadFlagTestCase))
-    suite.addTest(unittest.makeSuite(BTreeDUPTestCase))
-    suite.addTest(unittest.makeSuite(HashDUPTestCase))
-    suite.addTest(unittest.makeSuite(BTreeDUPWithThreadTestCase))
-    suite.addTest(unittest.makeSuite(HashDUPWithThreadTestCase))
-    suite.addTest(unittest.makeSuite(BTreeMultiDBTestCase))
-    suite.addTest(unittest.makeSuite(HashMultiDBTestCase))
-    suite.addTest(unittest.makeSuite(DBEnvPrivateObject))
-    suite.addTest(unittest.makeSuite(DBPrivateObject))
-    #suite.addTest(unittest.makeSuite(CrashAndBurn))
-
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
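A condensed sketch of the open/put/get/cursor cycle these basic tests run through, using only calls that appear above (db.DB, DB.open with keyword arguments, DB.put, DB.get, DB.cursor); the file name is illustrative.

    from bsddb import db

    d = db.DB()                               # no DBEnv: a standalone database file
    d.open('/tmp/basic-example.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    d.put('0001', '0001-0001-0001-0001-0001')
    print(d.get('0001'))                      # '0001-0001-0001-0001-0001'
    print(d.get('missing'))                   # None by default; see set_get_returns_none()

    c = d.cursor()
    rec = c.first()
    while rec is not None:                    # iterate key/data pairs in order
        print(rec)
        rec = next(c)
    c.close()
    d.close()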
diff --git a/Lib/bsddb/test/test_compare.py b/Lib/bsddb/test/test_compare.py
deleted file mode 100644 (file)
index f73dd3e..0000000
+++ /dev/null
@@ -1,257 +0,0 @@
-"""
-TestCases for python DB Btree key comparison function.
-"""
-
-import sys, os, re
-from . import test_all
-from io import StringIO
-
-import unittest
-
-from .test_all import db, dbshelve, test_support, \
-        get_new_environment_path, get_new_database_path
-
-
-lexical_cmp = cmp
-
-def lowercase_cmp(left, right):
-    return cmp (left.lower(), right.lower())
-
-def make_reverse_comparator (cmp):
-    def reverse (left, right, delegate=cmp):
-        return - delegate (left, right)
-    return reverse
-
-_expected_lexical_test_data = ['', 'CCCP', 'a', 'aaa', 'b', 'c', 'cccce', 'ccccf']
-_expected_lowercase_test_data = ['', 'a', 'aaa', 'b', 'c', 'CC', 'cccce', 'ccccf', 'CCCP']
-
-class ComparatorTests (unittest.TestCase):
-    def comparator_test_helper (self, comparator, expected_data):
-        data = expected_data[:]
-
-        import sys
-        if sys.version_info[0] < 3 :
-            if sys.version_info[:3] < (2, 4, 0):
-                data.sort(comparator)
-            else :
-                data.sort(cmp=comparator)
-        else :  # Insertion Sort. Please, improve
-            data2 = []
-            for i in data :
-                for j, k in enumerate(data2) :
-                    r = comparator(k, i)
-                    if r == 1 :
-                        data2.insert(j, i)
-                        break
-                else :
-                    data2.append(i)
-            data = data2
-
-        self.failUnless (data == expected_data,
-                         "comparator `%s' is not right: %s vs. %s"
-                         % (comparator, expected_data, data))
-    def test_lexical_comparator (self):
-        self.comparator_test_helper (lexical_cmp, _expected_lexical_test_data)
-    def test_reverse_lexical_comparator (self):
-        rev = _expected_lexical_test_data[:]
-        rev.reverse ()
-        self.comparator_test_helper (make_reverse_comparator (lexical_cmp),
-                                     rev)
-    def test_lowercase_comparator (self):
-        self.comparator_test_helper (lowercase_cmp,
-                                     _expected_lowercase_test_data)
-
-class AbstractBtreeKeyCompareTestCase (unittest.TestCase):
-    env = None
-    db = None
-
-    def setUp (self):
-        self.filename = self.__class__.__name__ + '.db'
-        self.homeDir = get_new_environment_path()
-        env = db.DBEnv()
-        env.open (self.homeDir,
-                  db.DB_CREATE | db.DB_INIT_MPOOL
-                  | db.DB_INIT_LOCK | db.DB_THREAD)
-        self.env = env
-
-    def tearDown (self):
-        self.closeDB()
-        if self.env is not None:
-            self.env.close()
-            self.env = None
-        test_support.rmtree(self.homeDir)
-
-    def addDataToDB (self, data):
-        i = 0
-        for item in data:
-            self.db.put (item, str (i))
-            i = i + 1
-
-    def createDB (self, key_comparator):
-        self.db = db.DB (self.env)
-        self.setupDB (key_comparator)
-        self.db.open (self.filename, "test", db.DB_BTREE, db.DB_CREATE)
-
-    def setupDB (self, key_comparator):
-        self.db.set_bt_compare (key_comparator)
-
-    def closeDB (self):
-        if self.db is not None:
-            self.db.close ()
-            self.db = None
-
-    def startTest (self):
-        pass
-
-    def finishTest (self, expected = None):
-        if expected is not None:
-            self.check_results (expected)
-        self.closeDB ()
-
-    def check_results (self, expected):
-        curs = self.db.cursor ()
-        try:
-            index = 0
-            rec = curs.first ()
-            while rec:
-                key, ignore = rec
-                self.failUnless (index < len (expected),
-                                 "too many values returned from cursor")
-                self.failUnless (expected[index] == key,
-                                 "expected value `%s' at %d but got `%s'"
-                                 % (expected[index], index, key))
-                index = index + 1
-                rec = next(curs)
-            self.failUnless (index == len (expected),
-                             "not enough values returned from cursor")
-        finally:
-            curs.close ()
-
-class BtreeKeyCompareTestCase (AbstractBtreeKeyCompareTestCase):
-    def runCompareTest (self, comparator, data):
-        self.startTest ()
-        self.createDB (comparator)
-        self.addDataToDB (data)
-        self.finishTest (data)
-
-    def test_lexical_ordering (self):
-        self.runCompareTest (lexical_cmp, _expected_lexical_test_data)
-
-    def test_reverse_lexical_ordering (self):
-        expected_rev_data = _expected_lexical_test_data[:]
-        expected_rev_data.reverse ()
-        self.runCompareTest (make_reverse_comparator (lexical_cmp),
-                             expected_rev_data)
-
-    def test_compare_function_useless (self):
-        self.startTest ()
-        def socialist_comparator (l, r):
-            return 0
-        self.createDB (socialist_comparator)
-        self.addDataToDB (['b', 'a', 'd'])
-        # all things being equal the first key will be the only key
-        # in the database...  (with the last key's value fwiw)
-        self.finishTest (['b'])
-
-
-class BtreeExceptionsTestCase (AbstractBtreeKeyCompareTestCase):
-    def test_raises_non_callable (self):
-        self.startTest ()
-        self.assertRaises (TypeError, self.createDB, 'abc')
-        self.assertRaises (TypeError, self.createDB, None)
-        self.finishTest ()
-
-    def test_set_bt_compare_with_function (self):
-        self.startTest ()
-        self.createDB (lexical_cmp)
-        self.finishTest ()
-
-    def check_results (self, results):
-        pass
-
-    def test_compare_function_incorrect (self):
-        self.startTest ()
-        def bad_comparator (l, r):
-            return 1
-        # verify that set_bt_compare checks that comparator('', '') == 0
-        self.assertRaises (TypeError, self.createDB, bad_comparator)
-        self.finishTest ()
-
-    def verifyStderr(self, method, successRe):
-        """
-        Call method() while capturing sys.stderr output internally and
-        call self.fail() if successRe.search() does not match the stderr
-        output.  This is used to test for uncatchable exceptions.
-        """
-        stdErr = sys.stderr
-        sys.stderr = StringIO()
-        try:
-            method()
-        finally:
-            temp = sys.stderr
-            sys.stderr = stdErr
-            errorOut = temp.getvalue()
-            if not successRe.search(errorOut):
-                self.fail("unexpected stderr output:\n"+errorOut)
-
-    def _test_compare_function_exception (self):
-        self.startTest ()
-        def bad_comparator (l, r):
-            if l == r:
-                # pass the set_bt_compare test
-                return 0
-            raise RuntimeError("i'm a naughty comparison function")
-        self.createDB (bad_comparator)
-        #print "\n*** test should print 2 uncatchable tracebacks ***"
-        self.addDataToDB (['a', 'b', 'c'])  # this should raise, but...
-        self.finishTest ()
-
-    def test_compare_function_exception(self):
-        self.verifyStderr(
-                self._test_compare_function_exception,
-                re.compile('(^RuntimeError:.* naughty.*){2}', re.M|re.S)
-        )
-
-    def _test_compare_function_bad_return (self):
-        self.startTest ()
-        def bad_comparator (l, r):
-            if l == r:
-                # pass the set_bt_compare test
-                return 0
-            return l
-        self.createDB (bad_comparator)
-        #print "\n*** test should print 2 errors about returning an int ***"
-        self.addDataToDB (['a', 'b', 'c'])  # this should raise, but...
-        self.finishTest ()
-
-    def test_compare_function_bad_return(self):
-        self.verifyStderr(
-                self._test_compare_function_bad_return,
-                re.compile('(^TypeError:.* return an int.*){2}', re.M|re.S)
-        )
-
-
-    def test_cannot_assign_twice (self):
-
-        def my_compare (a, b):
-            return 0
-
-        self.startTest ()
-        self.createDB (my_compare)
-        try:
-            self.db.set_bt_compare (my_compare)
-            self.assert_(0, "this set should fail")
-
-        except RuntimeError as msg:
-            pass
-
-def test_suite ():
-    res = unittest.TestSuite ()
-
-    res.addTest (unittest.makeSuite (ComparatorTests))
-    res.addTest (unittest.makeSuite (BtreeExceptionsTestCase))
-    res.addTest (unittest.makeSuite (BtreeKeyCompareTestCase))
-    return res
-
-if __name__ == '__main__':
-    unittest.main (defaultTest = 'test_suite')
diff --git a/Lib/bsddb/test/test_compat.py b/Lib/bsddb/test/test_compat.py
deleted file mode 100644 (file)
index 95a29ef..0000000
+++ /dev/null
@@ -1,184 +0,0 @@
-"""
-Test cases adapted from the test_bsddb.py module in Python's
-regression test suite.
-"""
-
-import os, string
-import unittest
-
-from .test_all import db, hashopen, btopen, rnopen, verbose, \
-        get_new_database_path
-
-
-class CompatibilityTestCase(unittest.TestCase):
-    def setUp(self):
-        self.filename = get_new_database_path()
-
-    def tearDown(self):
-        try:
-            os.remove(self.filename)
-        except os.error:
-            pass
-
-
-    def test01_btopen(self):
-        self.do_bthash_test(btopen, 'btopen')
-
-    def test02_hashopen(self):
-        self.do_bthash_test(hashopen, 'hashopen')
-
-    def test03_rnopen(self):
-        data = "The quick brown fox jumped over the lazy dog.".split()
-        if verbose:
-            print("\nTesting: rnopen")
-
-        f = rnopen(self.filename, 'c')
-        for x in range(len(data)):
-            f[x+1] = data[x]
-
-        getTest = (f[1], f[2], f[3])
-        if verbose:
-            print('%s %s %s' % getTest)
-
-        self.assertEqual(getTest[1], 'quick', 'data mismatch!')
-
-        rv = f.set_location(3)
-        if rv != (3, 'brown'):
-            self.fail('recno database set_location failed: '+repr(rv))
-
-        f[25] = 'twenty-five'
-        f.close()
-        del f
-
-        f = rnopen(self.filename, 'w')
-        f[20] = 'twenty'
-
-        def noRec(f):
-            rec = f[15]
-        self.assertRaises(KeyError, noRec, f)
-
-        def badKey(f):
-            rec = f['a string']
-        self.assertRaises(TypeError, badKey, f)
-
-        del f[3]
-
-        rec = f.first()
-        while rec:
-            if verbose:
-                print(rec)
-            try:
-                rec = next(f)
-            except KeyError:
-                break
-
-        f.close()
-
-
-    def test04_n_flag(self):
-        f = hashopen(self.filename, 'n')
-        f.close()
-
-
-    def do_bthash_test(self, factory, what):
-        if verbose:
-            print('\nTesting: ', what)
-
-        f = factory(self.filename, 'c')
-        if verbose:
-            print('creation...')
-
-        # truth test
-        if f:
-            if verbose: print("truth test: true")
-        else:
-            if verbose: print("truth test: false")
-
-        f['0'] = ''
-        f['a'] = 'Guido'
-        f['b'] = 'van'
-        f['c'] = 'Rossum'
-        f['d'] = 'invented'
-        # 'e' intentionally left out
-        f['f'] = 'Python'
-        if verbose:
-            print('%s %s %s' % (f['a'], f['b'], f['c']))
-
-        if verbose:
-            print('key ordering...')
-        start = f.set_location(f.first()[0])
-        if start != ('0', ''):
-            self.fail("incorrect first() result: "+repr(start))
-        while 1:
-            try:
-                rec = next(f)
-            except KeyError:
-                self.assertEqual(rec, f.last(), 'Error, last <> last!')
-                f.previous()
-                break
-            if verbose:
-                print(rec)
-
-        self.assert_('f' in f, 'Error, missing key!')
-
-        # test that set_location() returns the next nearest key, value
-        # on btree databases and raises KeyError on others.
-        if factory == btopen:
-            e = f.set_location('e')
-            if e != ('f', 'Python'):
-                self.fail('wrong key,value returned: '+repr(e))
-        else:
-            try:
-                e = f.set_location('e')
-            except KeyError:
-                pass
-            else:
-                self.fail("set_location on non-existant key did not raise KeyError")
-
-        f.sync()
-        f.close()
-        # truth test
-        try:
-            if f:
-                if verbose: print("truth test: true")
-            else:
-                if verbose: print("truth test: false")
-        except db.DBError:
-            pass
-        else:
-            self.fail("Exception expected")
-
-        del f
-
-        if verbose:
-            print('modification...')
-        f = factory(self.filename, 'w')
-        f['d'] = 'discovered'
-
-        if verbose:
-            print('access...')
-        for key in list(f.keys()):
-            word = f[key]
-            if verbose:
-                print(word)
-
-        def noRec(f):
-            rec = f['no such key']
-        self.assertRaises(KeyError, noRec, f)
-
-        def badKey(f):
-            rec = f[15]
-        self.assertRaises(TypeError, badKey, f)
-
-        f.close()
-
-
-#----------------------------------------------------------------------
-
-
-def test_suite():
-    return unittest.makeSuite(CompatibilityTestCase)
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_cursor_pget_bug.py b/Lib/bsddb/test/test_cursor_pget_bug.py
deleted file mode 100644 (file)
index 37edd16..0000000
+++ /dev/null
@@ -1,54 +0,0 @@
-import unittest
-import os, glob
-
-from .test_all import db, test_support, get_new_environment_path, \
-        get_new_database_path
-
-#----------------------------------------------------------------------
-
-class pget_bugTestCase(unittest.TestCase):
-    """Verify that cursor.pget works properly"""
-    db_name = 'test-cursor_pget.db'
-
-    def setUp(self):
-        self.homeDir = get_new_environment_path()
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
-        self.primary_db = db.DB(self.env)
-        self.primary_db.open(self.db_name, 'primary', db.DB_BTREE, db.DB_CREATE)
-        self.secondary_db = db.DB(self.env)
-        self.secondary_db.set_flags(db.DB_DUP)
-        self.secondary_db.open(self.db_name, 'secondary', db.DB_BTREE, db.DB_CREATE)
-        self.primary_db.associate(self.secondary_db, lambda key, data: data)
-        self.primary_db.put('salad', 'eggs')
-        self.primary_db.put('spam', 'ham')
-        self.primary_db.put('omelet', 'eggs')
-
-
-    def tearDown(self):
-        self.secondary_db.close()
-        self.primary_db.close()
-        self.env.close()
-        del self.secondary_db
-        del self.primary_db
-        del self.env
-        test_support.rmtree(self.homeDir)
-
-    def test_pget(self):
-        cursor = self.secondary_db.cursor()
-
-        self.assertEquals(('eggs', 'salad', 'eggs'), cursor.pget(key='eggs', flags=db.DB_SET))
-        self.assertEquals(('eggs', 'omelet', 'eggs'), cursor.pget(db.DB_NEXT_DUP))
-        self.assertEquals(None, cursor.pget(db.DB_NEXT_DUP))
-
-        self.assertEquals(('ham', 'spam', 'ham'), cursor.pget('ham', 'spam', flags=db.DB_SET))
-        self.assertEquals(None, cursor.pget(db.DB_NEXT_DUP))
-
-        cursor.close()
-
-
-def test_suite():
-    return unittest.makeSuite(pget_bugTestCase)
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_dbobj.py b/Lib/bsddb/test/test_dbobj.py
deleted file mode 100644 (file)
index 629d144..0000000
+++ /dev/null
@@ -1,70 +0,0 @@
-
-import os, string
-import unittest
-
-from .test_all import db, dbobj, test_support, get_new_environment_path, \
-        get_new_database_path
-
-#----------------------------------------------------------------------
-
-class dbobjTestCase(unittest.TestCase):
-    """Verify that dbobj.DB and dbobj.DBEnv work properly"""
-    db_name = 'test-dbobj.db'
-
-    def setUp(self):
-        self.homeDir = get_new_environment_path()
-
-    def tearDown(self):
-        if hasattr(self, 'db'):
-            del self.db
-        if hasattr(self, 'env'):
-            del self.env
-        test_support.rmtree(self.homeDir)
-
-    def test01_both(self):
-        class TestDBEnv(dbobj.DBEnv): pass
-        class TestDB(dbobj.DB):
-            def put(self, key, *args, **kwargs):
-                key = key.upper()
-                # call our parent classes put method with an upper case key
-                return dbobj.DB.put(*(self, key) + args, **kwargs)
-        self.env = TestDBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
-        self.db = TestDB(self.env)
-        self.db.open(self.db_name, db.DB_HASH, db.DB_CREATE)
-        self.db.put('spam', 'eggs')
-        self.assertEqual(self.db.get('spam'), None,
-               "overridden dbobj.DB.put() method failed [1]")
-        self.assertEqual(self.db.get('SPAM'), 'eggs',
-               "overridden dbobj.DB.put() method failed [2]")
-        self.db.close()
-        self.env.close()
-
-    def test02_dbobj_dict_interface(self):
-        self.env = dbobj.DBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
-        self.db = dbobj.DB(self.env)
-        self.db.open(self.db_name+'02', db.DB_HASH, db.DB_CREATE)
-        # __setitem__
-        self.db['spam'] = 'eggs'
-        # __len__
-        self.assertEqual(len(self.db), 1)
-        # __getitem__
-        self.assertEqual(self.db['spam'], 'eggs')
-        # __del__
-        del self.db['spam']
-        self.assertEqual(self.db.get('spam'), None, "dbobj __del__ failed")
-        self.db.close()
-        self.env.close()
-
-    def test03_dbobj_type_before_open(self):
-        # Ensure this doesn't cause a segfault.
-        self.assertRaises(db.DBInvalidArgError, db.DB().type)
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    return unittest.makeSuite(dbobjTestCase)
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_dbshelve.py b/Lib/bsddb/test/test_dbshelve.py
deleted file mode 100644 (file)
index 5149ac8..0000000
+++ /dev/null
@@ -1,385 +0,0 @@
-"""
-TestCases for checking dbShelve objects.
-"""
-
-import os, string
-import random
-import unittest
-
-
-from .test_all import db, dbshelve, test_support, verbose, \
-        get_new_environment_path, get_new_database_path
-
-
-
-#----------------------------------------------------------------------
-
-# We want the objects to be comparable so we can test dbshelve.values
-# later on.
-class DataClass:
-    def __init__(self):
-        self.value = random.random()
-
-    def __repr__(self) :  # For Python 3.0 comparison
-        return "DataClass %f" %self.value
-
-    def __cmp__(self, other):  # For Python 2.x comparison
-        return cmp(self.value, other)
-
-
-class DBShelveTestCase(unittest.TestCase):
-    def setUp(self):
-        import sys
-        if sys.version_info[0] >= 3 :
-            from .test_all import do_proxy_db_py3k
-            self._flag_proxy_db_py3k = do_proxy_db_py3k(False)
-        self.filename = get_new_database_path()
-        self.do_open()
-
-    def tearDown(self):
-        import sys
-        if sys.version_info[0] >= 3 :
-            from .test_all import do_proxy_db_py3k
-            do_proxy_db_py3k(self._flag_proxy_db_py3k)
-        self.do_close()
-        test_support.unlink(self.filename)
-
-    def mk(self, key):
-        """Turn key into an appropriate key type for this db"""
-        # override in child class for RECNO
-        import sys
-        if sys.version_info[0] < 3 :
-            return key
-        else :
-            return bytes(key, "iso8859-1")  # 8 bits
-
-    def populateDB(self, d):
-        for x in string.letters:
-            d[self.mk('S' + x)] = 10 * x           # add a string
-            d[self.mk('I' + x)] = ord(x)           # add an integer
-            d[self.mk('L' + x)] = [x] * 10         # add a list
-
-            inst = DataClass()            # add an instance
-            inst.S = 10 * x
-            inst.I = ord(x)
-            inst.L = [x] * 10
-            d[self.mk('O' + x)] = inst
-
-
-    # overridable in derived classes to affect how the shelf is created/opened
-    def do_open(self):
-        self.d = dbshelve.open(self.filename)
-
-    # and closed...
-    def do_close(self):
-        self.d.close()
-
-
-
-    def test01_basics(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_basics..." % self.__class__.__name__)
-
-        self.populateDB(self.d)
-        self.d.sync()
-        self.do_close()
-        self.do_open()
-        d = self.d
-
-        l = len(d)
-        k = list(d.keys())
-        s = d.stat()
-        f = d.fd()
-
-        if verbose:
-            print("length:", l)
-            print("keys:", k)
-            print("stats:", s)
-
-        self.assertEqual(0, self.mk('bad key') in d)
-        self.assertEqual(1, self.mk('IA') in d)
-        self.assertEqual(1, self.mk('OA') in d)
-
-        d.delete(self.mk('IA'))
-        del d[self.mk('OA')]
-        self.assertEqual(0, self.mk('IA') in d)
-        self.assertEqual(0, self.mk('OA') in d)
-        self.assertEqual(len(d), l-2)
-
-        values = []
-        for key in list(d.keys()):
-            value = d[key]
-            values.append(value)
-            if verbose:
-                print("%s: %s" % (key, value))
-            self.checkrec(key, value)
-
-        dbvalues = list(d.values())
-        self.assertEqual(len(dbvalues), len(list(d.keys())))
-        import sys
-        if sys.version_info[0] < 3 :
-            values.sort()
-            dbvalues.sort()
-            self.assertEqual(values, dbvalues)
-        else :  # XXX: Convert all to strings. Please, improve
-            values.sort(key=lambda x : str(x))
-            dbvalues.sort(key=lambda x : str(x))
-            self.assertEqual(repr(values), repr(dbvalues))
-
-        items = list(d.items())
-        self.assertEqual(len(items), len(values))
-
-        for key, value in items:
-            self.checkrec(key, value)
-
-        self.assertEqual(d.get(self.mk('bad key')), None)
-        self.assertEqual(d.get(self.mk('bad key'), None), None)
-        self.assertEqual(d.get(self.mk('bad key'), 'a string'), 'a string')
-        self.assertEqual(d.get(self.mk('bad key'), [1, 2, 3]), [1, 2, 3])
-
-        d.set_get_returns_none(0)
-        self.assertRaises(db.DBNotFoundError, d.get, self.mk('bad key'))
-        d.set_get_returns_none(1)
-
-        d.put(self.mk('new key'), 'new data')
-        self.assertEqual(d.get(self.mk('new key')), 'new data')
-        self.assertEqual(d[self.mk('new key')], 'new data')
-
-
-
-    def test02_cursors(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test02_cursors..." % self.__class__.__name__)
-
-        self.populateDB(self.d)
-        d = self.d
-
-        count = 0
-        c = d.cursor()
-        rec = c.first()
-        while rec is not None:
-            count = count + 1
-            if verbose:
-                print(rec)
-            key, value = rec
-            self.checkrec(key, value)
-            # Hack to avoid conversion by 2to3 tool
-            rec = getattr(c, "next")()
-        del c
-
-        self.assertEqual(count, len(d))
-
-        count = 0
-        c = d.cursor()
-        rec = c.last()
-        while rec is not None:
-            count = count + 1
-            if verbose:
-                print(rec)
-            key, value = rec
-            self.checkrec(key, value)
-            rec = c.prev()
-
-        self.assertEqual(count, len(d))
-
-        c.set(self.mk('SS'))
-        key, value = c.current()
-        self.checkrec(key, value)
-        del c
-
-
-    def test03_append(self):
-        # NOTE: this is overridden in RECNO subclass, don't change its name.
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03_append..." % self.__class__.__name__)
-
-        self.assertRaises(dbshelve.DBShelveError,
-                          self.d.append, 'unit test was here')
-
-
-    def checkrec(self, key, value):
-        # override this in a subclass if the key type is different
-
-        import sys
-        if sys.version_info[0] >= 3 :
-            if isinstance(key, bytes) :
-                key = key.decode("iso8859-1")  # 8 bits
-
-        x = key[1]
-        if key[0] == 'S':
-            self.assertEqual(type(value), str)
-            self.assertEqual(value, 10 * x)
-
-        elif key[0] == 'I':
-            self.assertEqual(type(value), int)
-            self.assertEqual(value, ord(x))
-
-        elif key[0] == 'L':
-            self.assertEqual(type(value), list)
-            self.assertEqual(value, [x] * 10)
-
-        elif key[0] == 'O':
-            import sys
-            if sys.version_info[0] < 3 :
-                from types import InstanceType
-                self.assertEqual(type(value), InstanceType)
-            else :
-                self.assertEqual(type(value), DataClass)
-
-            self.assertEqual(value.S, 10 * x)
-            self.assertEqual(value.I, ord(x))
-            self.assertEqual(value.L, [x] * 10)
-
-        else:
-            self.assert_(0, 'Unknown key type, fix the test')
-
-#----------------------------------------------------------------------
-
-class BasicShelveTestCase(DBShelveTestCase):
-    def do_open(self):
-        self.d = dbshelve.DBShelf()
-        self.d.open(self.filename, self.dbtype, self.dbflags)
-
-    def do_close(self):
-        self.d.close()
-
-
-class BTreeShelveTestCase(BasicShelveTestCase):
-    dbtype = db.DB_BTREE
-    dbflags = db.DB_CREATE
-
-
-class HashShelveTestCase(BasicShelveTestCase):
-    dbtype = db.DB_HASH
-    dbflags = db.DB_CREATE
-
-
-class ThreadBTreeShelveTestCase(BasicShelveTestCase):
-    dbtype = db.DB_BTREE
-    dbflags = db.DB_CREATE | db.DB_THREAD
-
-
-class ThreadHashShelveTestCase(BasicShelveTestCase):
-    dbtype = db.DB_HASH
-    dbflags = db.DB_CREATE | db.DB_THREAD
-
-
-#----------------------------------------------------------------------
-
-class BasicEnvShelveTestCase(DBShelveTestCase):
-    def do_open(self):
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir,
-                self.envflags | db.DB_INIT_MPOOL | db.DB_CREATE)
-
-        self.filename = os.path.split(self.filename)[1]
-        self.d = dbshelve.DBShelf(self.env)
-        self.d.open(self.filename, self.dbtype, self.dbflags)
-
-
-    def do_close(self):
-        self.d.close()
-        self.env.close()
-
-
-    def setUp(self) :
-        self.homeDir = get_new_environment_path()
-        DBShelveTestCase.setUp(self)
-
-    def tearDown(self):
-        import sys
-        if sys.version_info[0] >= 3 :
-            from .test_all import do_proxy_db_py3k
-            do_proxy_db_py3k(self._flag_proxy_db_py3k)
-        self.do_close()
-        test_support.rmtree(self.homeDir)
-
-
-class EnvBTreeShelveTestCase(BasicEnvShelveTestCase):
-    envflags = 0
-    dbtype = db.DB_BTREE
-    dbflags = db.DB_CREATE
-
-
-class EnvHashShelveTestCase(BasicEnvShelveTestCase):
-    envflags = 0
-    dbtype = db.DB_HASH
-    dbflags = db.DB_CREATE
-
-
-class EnvThreadBTreeShelveTestCase(BasicEnvShelveTestCase):
-    envflags = db.DB_THREAD
-    dbtype = db.DB_BTREE
-    dbflags = db.DB_CREATE | db.DB_THREAD
-
-
-class EnvThreadHashShelveTestCase(BasicEnvShelveTestCase):
-    envflags = db.DB_THREAD
-    dbtype = db.DB_HASH
-    dbflags = db.DB_CREATE | db.DB_THREAD
-
-
-#----------------------------------------------------------------------
-# test cases for a DBShelf in a RECNO DB.
-
-class RecNoShelveTestCase(BasicShelveTestCase):
-    dbtype = db.DB_RECNO
-    dbflags = db.DB_CREATE
-
-    def setUp(self):
-        BasicShelveTestCase.setUp(self)
-
-        # pool to assign integer key values out of
-        self.key_pool = list(range(1, 5000))
-        self.key_map = {}     # map string keys to the number we gave them
-        self.intkey_map = {}  # reverse map of above
-
-    def mk(self, key):
-        if key not in self.key_map:
-            self.key_map[key] = self.key_pool.pop(0)
-            self.intkey_map[self.key_map[key]] = key
-        return self.key_map[key]
-
-    def checkrec(self, intkey, value):
-        key = self.intkey_map[intkey]
-        BasicShelveTestCase.checkrec(self, key, value)
-
-    def test03_append(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03_append..." % self.__class__.__name__)
-
-        self.d[1] = 'spam'
-        self.d[5] = 'eggs'
-        self.assertEqual(6, self.d.append('spam'))
-        self.assertEqual(7, self.d.append('baked beans'))
-        self.assertEqual('spam', self.d.get(6))
-        self.assertEqual('spam', self.d.get(1))
-        self.assertEqual('baked beans', self.d.get(7))
-        self.assertEqual('eggs', self.d.get(5))
-
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-
-    suite.addTest(unittest.makeSuite(DBShelveTestCase))
-    suite.addTest(unittest.makeSuite(BTreeShelveTestCase))
-    suite.addTest(unittest.makeSuite(HashShelveTestCase))
-    suite.addTest(unittest.makeSuite(ThreadBTreeShelveTestCase))
-    suite.addTest(unittest.makeSuite(ThreadHashShelveTestCase))
-    suite.addTest(unittest.makeSuite(EnvBTreeShelveTestCase))
-    suite.addTest(unittest.makeSuite(EnvHashShelveTestCase))
-    suite.addTest(unittest.makeSuite(EnvThreadBTreeShelveTestCase))
-    suite.addTest(unittest.makeSuite(EnvThreadHashShelveTestCase))
-    suite.addTest(unittest.makeSuite(RecNoShelveTestCase))
-
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_dbtables.py b/Lib/bsddb/test/test_dbtables.py
deleted file mode 100644 (file)
index 31b7187..0000000
+++ /dev/null
@@ -1,405 +0,0 @@
-#!/usr/bin/env python
-#
-#-----------------------------------------------------------------------
-# A test suite for the table interface built on bsddb.db
-#-----------------------------------------------------------------------
-#
-# Copyright (C) 2000, 2001 by Autonomous Zone Industries
-# Copyright (C) 2002 Gregory P. Smith
-#
-# March 20, 2000
-#
-# License:      This is free software.  You may use this software for any
-#               purpose including modification/redistribution, so long as
-#               this header remains intact and that you do not claim any
-#               rights of ownership or authorship of this software.  This
-#               software has been tested, but no warranty is expressed or
-#               implied.
-#
-#   --  Gregory P. Smith <greg@krypto.org>
-#
-# $Id$
-
-import os, re
-try:
-    import pickle
-    pickle = pickle
-except ImportError:
-    import pickle
-
-import unittest
-from .test_all import db, dbtables, test_support, verbose, \
-        get_new_environment_path, get_new_database_path
-
-#----------------------------------------------------------------------
-
-class TableDBTestCase(unittest.TestCase):
-    db_name = 'test-table.db'
-
-    def setUp(self):
-        import sys
-        if sys.version_info[0] >= 3 :
-            from .test_all import do_proxy_db_py3k
-            self._flag_proxy_db_py3k = do_proxy_db_py3k(False)
-
-        self.testHomeDir = get_new_environment_path()
-        self.tdb = dbtables.bsdTableDB(
-            filename='tabletest.db', dbhome=self.testHomeDir, create=1)
-
-    def tearDown(self):
-        self.tdb.close()
-        import sys
-        if sys.version_info[0] >= 3 :
-            from .test_all import do_proxy_db_py3k
-            do_proxy_db_py3k(self._flag_proxy_db_py3k)
-        test_support.rmtree(self.testHomeDir)
-
-    def test01(self):
-        tabname = "test01"
-        colname = 'cool numbers'
-        try:
-            self.tdb.Drop(tabname)
-        except dbtables.TableDBError:
-            pass
-        self.tdb.CreateTable(tabname, [colname])
-        import sys
-        if sys.version_info[0] < 3 :
-            self.tdb.Insert(tabname, {colname: pickle.dumps(3.14159, 1)})
-        else :
-            self.tdb.Insert(tabname, {colname: pickle.dumps(3.14159,
-                1).decode("iso8859-1")})  # 8 bits
-
-        if verbose:
-            self.tdb._db_print()
-
-        values = self.tdb.Select(
-            tabname, [colname], conditions={colname: None})
-
-        import sys
-        if sys.version_info[0] < 3 :
-            colval = pickle.loads(values[0][colname])
-        else :
-            colval = pickle.loads(bytes(values[0][colname], "iso8859-1"))
-        self.assert_(colval > 3.141)
-        self.assert_(colval < 3.142)
-
-
-    def test02(self):
-        tabname = "test02"
-        col0 = 'coolness factor'
-        col1 = 'but can it fly?'
-        col2 = 'Species'
-
-        import sys
-        if sys.version_info[0] < 3 :
-            testinfo = [
-                {col0: pickle.dumps(8, 1), col1: 'no', col2: 'Penguin'},
-                {col0: pickle.dumps(-1, 1), col1: 'no', col2: 'Turkey'},
-                {col0: pickle.dumps(9, 1), col1: 'yes', col2: 'SR-71A Blackbird'}
-            ]
-        else :
-            testinfo = [
-                {col0: pickle.dumps(8, 1).decode("iso8859-1"),
-                    col1: 'no', col2: 'Penguin'},
-                {col0: pickle.dumps(-1, 1).decode("iso8859-1"),
-                    col1: 'no', col2: 'Turkey'},
-                {col0: pickle.dumps(9, 1).decode("iso8859-1"),
-                    col1: 'yes', col2: 'SR-71A Blackbird'}
-            ]
-
-        try:
-            self.tdb.Drop(tabname)
-        except dbtables.TableDBError:
-            pass
-        self.tdb.CreateTable(tabname, [col0, col1, col2])
-        for row in testinfo :
-            self.tdb.Insert(tabname, row)
-
-        import sys
-        if sys.version_info[0] < 3 :
-            values = self.tdb.Select(tabname, [col2],
-                conditions={col0: lambda x: pickle.loads(x) >= 8})
-        else :
-            values = self.tdb.Select(tabname, [col2],
-                conditions={col0: lambda x:
-                    pickle.loads(bytes(x, "iso8859-1")) >= 8})
-
-        self.assertEqual(len(values), 2)
-        if values[0]['Species'] == 'Penguin' :
-            self.assertEqual(values[1]['Species'], 'SR-71A Blackbird')
-        elif values[0]['Species'] == 'SR-71A Blackbird' :
-            self.assertEqual(values[1]['Species'], 'Penguin')
-        else :
-            if verbose:
-                print("values= %r" % (values,))
-            raise RuntimeError("Wrong values returned!")
-
-    def test03(self):
-        tabname = "test03"
-        try:
-            self.tdb.Drop(tabname)
-        except dbtables.TableDBError:
-            pass
-        if verbose:
-            print('...before CreateTable...')
-            self.tdb._db_print()
-        self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e'])
-        if verbose:
-            print('...after CreateTable...')
-            self.tdb._db_print()
-        self.tdb.Drop(tabname)
-        if verbose:
-            print('...after Drop...')
-            self.tdb._db_print()
-        self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e'])
-
-        try:
-            self.tdb.Insert(tabname,
-                            {'a': "",
-                             'e': pickle.dumps([{4:5, 6:7}, 'foo'], 1),
-                             'f': "Zero"})
-            self.fail('Expected an exception')
-        except dbtables.TableDBError:
-            pass
-
-        try:
-            self.tdb.Select(tabname, [], conditions={'foo': '123'})
-            self.fail('Expected an exception')
-        except dbtables.TableDBError:
-            pass
-
-        self.tdb.Insert(tabname,
-                        {'a': '42',
-                         'b': "bad",
-                         'c': "meep",
-                         'e': 'Fuzzy wuzzy was a bear'})
-        self.tdb.Insert(tabname,
-                        {'a': '581750',
-                         'b': "good",
-                         'd': "bla",
-                         'c': "black",
-                         'e': 'fuzzy was here'})
-        self.tdb.Insert(tabname,
-                        {'a': '800000',
-                         'b': "good",
-                         'd': "bla",
-                         'c': "black",
-                         'e': 'Fuzzy wuzzy is a bear'})
-
-        if verbose:
-            self.tdb._db_print()
-
-        # this should return two rows
-        values = self.tdb.Select(tabname, ['b', 'a', 'd'],
-            conditions={'e': re.compile('wuzzy').search,
-                        'a': re.compile('^[0-9]+$').match})
-        self.assertEqual(len(values), 2)
-
-        # now lets delete one of them and try again
-        self.tdb.Delete(tabname, conditions={'b': dbtables.ExactCond('good')})
-        values = self.tdb.Select(
-            tabname, ['a', 'd', 'b'],
-            conditions={'e': dbtables.PrefixCond('Fuzzy')})
-        self.assertEqual(len(values), 1)
-        self.assertEqual(values[0]['d'], None)
-
-        values = self.tdb.Select(tabname, ['b'],
-            conditions={'c': lambda c: c == 'meep'})
-        self.assertEqual(len(values), 1)
-        self.assertEqual(values[0]['b'], "bad")
-
-
-    def test04_MultiCondSelect(self):
-        tabname = "test04_MultiCondSelect"
-        try:
-            self.tdb.Drop(tabname)
-        except dbtables.TableDBError:
-            pass
-        self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e'])
-
-        try:
-            self.tdb.Insert(tabname,
-                            {'a': "",
-                             'e': pickle.dumps([{4:5, 6:7}, 'foo'], 1),
-                             'f': "Zero"})
-            self.fail('Expected an exception')
-        except dbtables.TableDBError:
-            pass
-
-        self.tdb.Insert(tabname, {'a': "A", 'b': "B", 'c': "C", 'd': "D",
-                                  'e': "E"})
-        self.tdb.Insert(tabname, {'a': "-A", 'b': "-B", 'c': "-C", 'd': "-D",
-                                  'e': "-E"})
-        self.tdb.Insert(tabname, {'a': "A-", 'b': "B-", 'c': "C-", 'd': "D-",
-                                  'e': "E-"})
-
-        if verbose:
-            self.tdb._db_print()
-
-        # This select should return 0 rows.  it is designed to test
-        # the bug identified and fixed in sourceforge bug # 590449
-        # (Big Thanks to "Rob Tillotson (n9mtb)" for tracking this down
-        # and supplying a fix!!  This one caused many headaches to say
-        # the least...)
-        values = self.tdb.Select(tabname, ['b', 'a', 'd'],
-            conditions={'e': dbtables.ExactCond('E'),
-                        'a': dbtables.ExactCond('A'),
-                        'd': dbtables.PrefixCond('-')
-                       } )
-        self.assertEqual(len(values), 0, values)
-
-
-    def test_CreateOrExtend(self):
-        tabname = "test_CreateOrExtend"
-
-        self.tdb.CreateOrExtendTable(
-            tabname, ['name', 'taste', 'filling', 'alcohol content', 'price'])
-        try:
-            self.tdb.Insert(tabname,
-                            {'taste': 'crap',
-                             'filling': 'no',
-                             'is it Guinness?': 'no'})
-            self.fail("Insert should've failed due to bad column name")
-        except:
-            pass
-        self.tdb.CreateOrExtendTable(tabname,
-                                     ['name', 'taste', 'is it Guinness?'])
-
-        # these should both succeed as the table should contain the union of both sets of columns.
-        self.tdb.Insert(tabname, {'taste': 'crap', 'filling': 'no',
-                                  'is it Guinness?': 'no'})
-        self.tdb.Insert(tabname, {'taste': 'great', 'filling': 'yes',
-                                  'is it Guinness?': 'yes',
-                                  'name': 'Guinness'})
-
-
-    def test_CondObjs(self):
-        tabname = "test_CondObjs"
-
-        self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e', 'p'])
-
-        self.tdb.Insert(tabname, {'a': "the letter A",
-                                  'b': "the letter B",
-                                  'c': "is for cookie"})
-        self.tdb.Insert(tabname, {'a': "is for aardvark",
-                                  'e': "the letter E",
-                                  'c': "is for cookie",
-                                  'd': "is for dog"})
-        self.tdb.Insert(tabname, {'a': "the letter A",
-                                  'e': "the letter E",
-                                  'c': "is for cookie",
-                                  'p': "is for Python"})
-
-        values = self.tdb.Select(
-            tabname, ['p', 'e'],
-            conditions={'e': dbtables.PrefixCond('the l')})
-        self.assertEqual(len(values), 2, values)
-        self.assertEqual(values[0]['e'], values[1]['e'], values)
-        self.assertNotEqual(values[0]['p'], values[1]['p'], values)
-
-        values = self.tdb.Select(
-            tabname, ['d', 'a'],
-            conditions={'a': dbtables.LikeCond('%aardvark%')})
-        self.assertEqual(len(values), 1, values)
-        self.assertEqual(values[0]['d'], "is for dog", values)
-        self.assertEqual(values[0]['a'], "is for aardvark", values)
-
-        values = self.tdb.Select(tabname, None,
-                                 {'b': dbtables.Cond(),
-                                  'e':dbtables.LikeCond('%letter%'),
-                                  'a':dbtables.PrefixCond('is'),
-                                  'd':dbtables.ExactCond('is for dog'),
-                                  'c':dbtables.PrefixCond('is for'),
-                                  'p':lambda s: not s})
-        self.assertEqual(len(values), 1, values)
-        self.assertEqual(values[0]['d'], "is for dog", values)
-        self.assertEqual(values[0]['a'], "is for aardvark", values)
-
-    def test_Delete(self):
-        tabname = "test_Delete"
-        self.tdb.CreateTable(tabname, ['x', 'y', 'z'])
-
-        # prior to 2001-05-09 there was a bug where Delete() would
-        # fail if it encountered any rows that did not have values in
-        # every column.
-        # Hunted and Squashed by <Donwulff> (Jukka Santala - donwulff@nic.fi)
-        self.tdb.Insert(tabname, {'x': 'X1', 'y':'Y1'})
-        self.tdb.Insert(tabname, {'x': 'X2', 'y':'Y2', 'z': 'Z2'})
-
-        self.tdb.Delete(tabname, conditions={'x': dbtables.PrefixCond('X')})
-        values = self.tdb.Select(tabname, ['y'],
-                                 conditions={'x': dbtables.PrefixCond('X')})
-        self.assertEqual(len(values), 0)
-
-    def test_Modify(self):
-        tabname = "test_Modify"
-        self.tdb.CreateTable(tabname, ['Name', 'Type', 'Access'])
-
-        self.tdb.Insert(tabname, {'Name': 'Index to MP3 files.doc',
-                                  'Type': 'Word', 'Access': '8'})
-        self.tdb.Insert(tabname, {'Name': 'Nifty.MP3', 'Access': '1'})
-        self.tdb.Insert(tabname, {'Type': 'Unknown', 'Access': '0'})
-
-        def set_type(type):
-            if type == None:
-                return 'MP3'
-            return type
-
-        def increment_access(count):
-            return str(int(count)+1)
-
-        def remove_value(value):
-            return None
-
-        self.tdb.Modify(tabname,
-                        conditions={'Access': dbtables.ExactCond('0')},
-                        mappings={'Access': remove_value})
-        self.tdb.Modify(tabname,
-                        conditions={'Name': dbtables.LikeCond('%MP3%')},
-                        mappings={'Type': set_type})
-        self.tdb.Modify(tabname,
-                        conditions={'Name': dbtables.LikeCond('%')},
-                        mappings={'Access': increment_access})
-
-        try:
-            self.tdb.Modify(tabname,
-                            conditions={'Name': dbtables.LikeCond('%')},
-                            mappings={'Access': 'What is your quest?'})
-        except TypeError:
-            # success, the string value in mappings isn't callable
-            pass
-        else:
-            raise RuntimeError("why was TypeError not raised for bad callable?")
-
-        # Delete key in select conditions
-        values = self.tdb.Select(
-            tabname, None,
-            conditions={'Type': dbtables.ExactCond('Unknown')})
-        self.assertEqual(len(values), 1, values)
-        self.assertEqual(values[0]['Name'], None, values)
-        self.assertEqual(values[0]['Access'], None, values)
-
-        # Modify value by select conditions
-        values = self.tdb.Select(
-            tabname, None,
-            conditions={'Name': dbtables.ExactCond('Nifty.MP3')})
-        self.assertEqual(len(values), 1, values)
-        self.assertEqual(values[0]['Type'], "MP3", values)
-        self.assertEqual(values[0]['Access'], "2", values)
-
-        # Make sure change applied only to select conditions
-        values = self.tdb.Select(
-            tabname, None, conditions={'Name': dbtables.LikeCond('%doc%')})
-        self.assertEqual(len(values), 1, values)
-        self.assertEqual(values[0]['Type'], "Word", values)
-        self.assertEqual(values[0]['Access'], "9", values)
-
-
-def test_suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(TableDBTestCase))
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_distributed_transactions.py b/Lib/bsddb/test/test_distributed_transactions.py
deleted file mode 100644 (file)
index acf8acd..0000000
+++ /dev/null
@@ -1,163 +0,0 @@
-"""TestCases for distributed transactions.
-"""
-
-import os
-import unittest
-
-from .test_all import db, test_support, get_new_environment_path, \
-        get_new_database_path
-
-try :
-    a=set()
-except : # Python 2.3
-    from sets import Set as set
-else :
-    del a
-
-from .test_all import verbose
-
-#----------------------------------------------------------------------
-
-class DBTxn_distributed(unittest.TestCase):
-    num_txns=1234
-    nosync=True
-    must_open_db=False
-    def _create_env(self, must_open_db) :
-        self.dbenv = db.DBEnv()
-        self.dbenv.set_tx_max(self.num_txns)
-        self.dbenv.set_lk_max_lockers(self.num_txns*2)
-        self.dbenv.set_lk_max_locks(self.num_txns*2)
-        self.dbenv.set_lk_max_objects(self.num_txns*2)
-        if self.nosync :
-            self.dbenv.set_flags(db.DB_TXN_NOSYNC,True)
-        self.dbenv.open(self.homeDir, db.DB_CREATE | db.DB_THREAD |
-                db.DB_RECOVER |
-                db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL |
-                db.DB_INIT_LOCK, 0o666)
-        self.db = db.DB(self.dbenv)
-        self.db.set_re_len(db.DB_XIDDATASIZE)
-        if must_open_db :
-            if db.version() > (4,1) :
-                txn=self.dbenv.txn_begin()
-                self.db.open(self.filename,
-                        db.DB_QUEUE, db.DB_CREATE | db.DB_THREAD, 0o666,
-                        txn=txn)
-                txn.commit()
-            else :
-                self.db.open(self.filename,
-                        db.DB_QUEUE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-    def setUp(self) :
-        self.homeDir = get_new_environment_path()
-        self.filename = "test"
-        return self._create_env(must_open_db=True)
-
-    def _destroy_env(self):
-        if self.nosync or (db.version()[:2] == (4,6)):  # Known bug
-            self.dbenv.log_flush()
-        self.db.close()
-        self.dbenv.close()
-
-    def tearDown(self):
-        self._destroy_env()
-        test_support.rmtree(self.homeDir)
-
-    def _recreate_env(self,must_open_db) :
-        self._destroy_env()
-        self._create_env(must_open_db)
-
-    def test01_distributed_transactions(self) :
-        txns=set()
-        adapt = lambda x : x
-        import sys
-        if sys.version_info[0] >= 3 :
-            adapt = lambda x : bytes(x, "ascii")
-    # Create transactions, "prepare" them, and
-    # let them be garbage collected.
-        for i in range(self.num_txns) :
-            txn = self.dbenv.txn_begin()
-            gid = "%%%dd" %db.DB_XIDDATASIZE
-            gid = adapt(gid %i)
-            self.db.put(i, gid, txn=txn, flags=db.DB_APPEND)
-            txns.add(gid)
-            txn.prepare(gid)
-        del txn
-
-        self._recreate_env(self.must_open_db)
-
-    # Get "to be recovered" transactions but
-    # let them be garbage collected.
-        recovered_txns=self.dbenv.txn_recover()
-        self.assertEquals(self.num_txns,len(recovered_txns))
-        for gid,txn in recovered_txns :
-            self.assert_(gid in txns)
-        del txn
-        del recovered_txns
-
-        self._recreate_env(self.must_open_db)
-
-    # Get "to be recovered" transactions. Commit, abort and
-    # discard them.
-        recovered_txns=self.dbenv.txn_recover()
-        self.assertEquals(self.num_txns,len(recovered_txns))
-        discard_txns=set()
-        committed_txns=set()
-        state=0
-        for gid,txn in recovered_txns :
-            if state==0 or state==1:
-                committed_txns.add(gid)
-                txn.commit()
-            elif state==2 :
-                txn.abort()
-            elif state==3 :
-                txn.discard()
-                discard_txns.add(gid)
-                state=-1
-            state+=1
-        del txn
-        del recovered_txns
-
-        self._recreate_env(self.must_open_db)
-
-    # Verify the discarded transactions are still
-    # around, and dispose them.
-        recovered_txns=self.dbenv.txn_recover()
-        self.assertEquals(len(discard_txns),len(recovered_txns))
-        for gid,txn in recovered_txns :
-            txn.abort()
-        del txn
-        del recovered_txns
-
-        self._recreate_env(must_open_db=True)
-
-    # Be sure there are no pending transactions.
-    # Check also database size.
-        recovered_txns=self.dbenv.txn_recover()
-        self.assert_(len(recovered_txns)==0)
-        self.assertEquals(len(committed_txns),self.db.stat()["nkeys"])
-
-class DBTxn_distributedSYNC(DBTxn_distributed):
-    nosync=False
-
-class DBTxn_distributed_must_open_db(DBTxn_distributed):
-    must_open_db=True
-
-class DBTxn_distributedSYNC_must_open_db(DBTxn_distributed):
-    nosync=False
-    must_open_db=True
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-    if db.version() >= (4,5) :
-        suite.addTest(unittest.makeSuite(DBTxn_distributed))
-        suite.addTest(unittest.makeSuite(DBTxn_distributedSYNC))
-    if db.version() >= (4,6) :
-        suite.addTest(unittest.makeSuite(DBTxn_distributed_must_open_db))
-        suite.addTest(unittest.makeSuite(DBTxn_distributedSYNC_must_open_db))
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_early_close.py b/Lib/bsddb/test/test_early_close.py
deleted file mode 100644 (file)
index e944cc6..0000000
+++ /dev/null
@@ -1,195 +0,0 @@
-"""TestCases for checking that it does not segfault when a DBEnv object
-is closed before its DB objects.
-"""
-
-import os
-import unittest
-
-from .test_all import db, test_support, verbose, get_new_environment_path, get_new_database_path
-
-# We're going to get warnings in this module about trying to close the db when
-# its env is already closed.  Let's just ignore those.
-try:
-    import warnings
-except ImportError:
-    pass
-else:
-    warnings.filterwarnings('ignore',
-                            message='DB could not be closed in',
-                            category=RuntimeWarning)
-
-
-#----------------------------------------------------------------------
-
-class DBEnvClosedEarlyCrash(unittest.TestCase):
-    def setUp(self):
-        self.homeDir = get_new_environment_path()
-        self.filename = "test"
-
-    def tearDown(self):
-        test_support.rmtree(self.homeDir)
-
-    def test01_close_dbenv_before_db(self):
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,
-                   db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL,
-                   0o666)
-
-        d = db.DB(dbenv)
-        d2 = db.DB(dbenv)
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        self.assertRaises(db.DBNoSuchFileError, d2.open,
-                self.filename+"2", db.DB_BTREE, db.DB_THREAD, 0o666)
-
-        d.put("test","this is a test")
-        self.assertEqual(d.get("test"), "this is a test", "put!=get")
-        dbenv.close()  # This "close" should close the child db handle also
-        self.assertRaises(db.DBError, d.get, "test")
-
-    def test02_close_dbenv_before_dbcursor(self):
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,
-                   db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL,
-                   0o666)
-
-        d = db.DB(dbenv)
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        d.put("test","this is a test")
-        d.put("test2","another test")
-        d.put("test3","another one")
-        self.assertEqual(d.get("test"), "this is a test", "put!=get")
-        c=d.cursor()
-        c.first()
-        next(c)
-        d.close()  # This "close" should close the child db handle also
-     # db.close should close the child cursor
-        self.assertRaises(db.DBError,c.__next__)
-
-        d = db.DB(dbenv)
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-        c=d.cursor()
-        c.first()
-        next(c)
-        dbenv.close()
-    # The "close" should close the child db handle also, with cursors
-        self.assertRaises(db.DBError, c.__next__)
-
-    def test03_close_db_before_dbcursor_without_env(self):
-        import os.path
-        path=os.path.join(self.homeDir,self.filename)
-        d = db.DB()
-        d.open(path, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        d.put("test","this is a test")
-        d.put("test2","another test")
-        d.put("test3","another one")
-        self.assertEqual(d.get("test"), "this is a test", "put!=get")
-        c=d.cursor()
-        c.first()
-        next(c)
-        d.close()
-    # The "close" should close the child db handle also
-        self.assertRaises(db.DBError, c.__next__)
-
-    def test04_close_massive(self):
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,
-                   db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL,
-                   0o666)
-
-        dbs=[db.DB(dbenv) for i in range(16)]
-        cursors=[]
-        for i in dbs :
-            i.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        dbs[10].put("test","this is a test")
-        dbs[10].put("test2","another test")
-        dbs[10].put("test3","another one")
-        self.assertEqual(dbs[4].get("test"), "this is a test", "put!=get")
-
-        for i in dbs :
-            cursors.extend([i.cursor() for j in range(32)])
-
-        for i in dbs[::3] :
-            i.close()
-        for i in cursors[::3] :
-            i.close()
-
-    # Check for missing exception in DB! (after DB close)
-        self.assertRaises(db.DBError, dbs[9].get, "test")
-
-    # Check for missing exception in DBCursor! (after DB close)
-        self.assertRaises(db.DBError, cursors[101].first)
-
-        cursors[80].first()
-        next(cursors[80])
-        dbenv.close()  # This "close" should close the child db handle also
-    # Check for missing exception! (after DBEnv close)
-        self.assertRaises(db.DBError, cursors[80].__next__)
-
-    def test05_close_dbenv_delete_db_success(self):
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,
-                   db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL,
-                   0o666)
-
-        d = db.DB(dbenv)
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        dbenv.close()  # This "close" should close the child db handle also
-
-        del d
-        try:
-            import gc
-        except ImportError:
-            gc = None
-        if gc:
-            # force d.__del__ [DB_dealloc] to be called
-            gc.collect()
-
-    def test06_close_txn_before_dup_cursor(self) :
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,db.DB_INIT_TXN | db.DB_INIT_MPOOL |
-                db.DB_INIT_LOG | db.DB_CREATE)
-        d = db.DB(dbenv)
-        txn = dbenv.txn_begin()
-        if db.version() < (4,1) :
-            d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE)
-        else :
-            d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE,
-                    txn=txn)
-        d.put("XXX", "yyy", txn=txn)
-        txn.commit()
-        txn = dbenv.txn_begin()
-        c1 = d.cursor(txn)
-        c2 = c1.dup()
-        self.assertEquals(("XXX", "yyy"), c1.first())
-        import warnings
-        # Not interested in warnings about implicit close.
-        warnings.simplefilter("ignore")
-        txn.commit()
-        warnings.resetwarnings()
-        self.assertRaises(db.DBCursorClosedError, c2.first)
-
-    if db.version() > (4,3,0) :
-        def test07_close_db_before_sequence(self):
-            import os.path
-            path=os.path.join(self.homeDir,self.filename)
-            d = db.DB()
-            d.open(path, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-            dbs=db.DBSequence(d)
-            d.close()  # This "close" should close the child DBSequence also
-            dbs.close()  # If not closed, core dump (in Berkeley DB 4.6.*)
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(DBEnvClosedEarlyCrash))
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_env_close.py b/Lib/bsddb/test/test_env_close.py
deleted file mode 100644 (file)
index 4809685..0000000
+++ /dev/null
@@ -1,109 +0,0 @@
-"""TestCases for checking that it does not segfault when a DBEnv object
-is closed before its DB objects.
-"""
-
-import os
-import shutil
-import sys
-import tempfile
-import unittest
-
-try:
-    # For Pythons w/distutils pybsddb
-    from bsddb3 import db
-except ImportError:
-    # For Python 2.3
-    from bsddb import db
-
-try:
-    from bsddb3 import test_support
-except ImportError:
-    from test import support as test_support
-
-from bsddb.test.test_all import verbose
-
-# We're going to get warnings in this module about trying to close the db when
-# its env is already closed.  Let's just ignore those.
-try:
-    import warnings
-except ImportError:
-    pass
-else:
-    warnings.filterwarnings('ignore',
-                            message='DB could not be closed in',
-                            category=RuntimeWarning)
-
-
-#----------------------------------------------------------------------
-
-class DBEnvClosedEarlyCrash(unittest.TestCase):
-    def setUp(self):
-        self.homeDir = os.path.join(tempfile.gettempdir(), 'db_home%d'%os.getpid())
-        try: os.mkdir(self.homeDir)
-        except os.error: pass
-        tempfile.tempdir = self.homeDir
-        self.filename = os.path.split(tempfile.mktemp())[1]
-        tempfile.tempdir = None
-
-    def tearDown(self):
-        test_support.rmtree(self.homeDir)
-
-    def test01_close_dbenv_before_db(self):
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,
-                   db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL,
-                   0o666)
-
-        d = db.DB(dbenv)
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        try:
-            dbenv.close()
-        except db.DBError:
-            try:
-                d.close()
-            except db.DBError:
-                return
-            assert 0, \
-                   "DB close did not raise an exception about its "\
-                   "DBEnv being trashed"
-
-        # XXX This may fail when using older versions of BerkeleyDB.
-        # E.g. 3.2.9 never raised the exception.
-        assert 0, "dbenv did not raise an exception about its DB being open"
-
-
-    def test02_close_dbenv_delete_db_success(self):
-        dbenv = db.DBEnv()
-        dbenv.open(self.homeDir,
-                   db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL,
-                   0o666)
-
-        d = db.DB(dbenv)
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666)
-
-        try:
-            dbenv.close()
-        except db.DBError:
-            pass  # good, it should raise an exception
-
-        del d
-        try:
-            import gc
-        except ImportError:
-            gc = None
-        if gc:
-            # force d.__del__ [DB_dealloc] to be called
-            gc.collect()
-
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(DBEnvClosedEarlyCrash))
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_get_none.py b/Lib/bsddb/test/test_get_none.py
deleted file mode 100644 (file)
index abf2421..0000000
+++ /dev/null
@@ -1,92 +0,0 @@
-"""
-TestCases for checking set_get_returns_none.
-"""
-
-import os, string
-import unittest
-
-from .test_all import db, verbose, get_new_database_path
-
-
-#----------------------------------------------------------------------
-
-class GetReturnsNoneTestCase(unittest.TestCase):
-    def setUp(self):
-        self.filename = get_new_database_path()
-
-    def tearDown(self):
-        try:
-            os.remove(self.filename)
-        except os.error:
-            pass
-
-
-    def test01_get_returns_none(self):
-        d = db.DB()
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE)
-        d.set_get_returns_none(1)
-
-        for x in string.letters:
-            d.put(x, x * 40)
-
-        data = d.get('bad key')
-        self.assertEqual(data, None)
-
-        data = d.get(string.letters[0])
-        self.assertEqual(data, string.letters[0]*40)
-
-        count = 0
-        c = d.cursor()
-        rec = c.first()
-        while rec:
-            count = count + 1
-            rec = next(c)
-
-        self.assertEqual(rec, None)
-        self.assertEqual(count, len(string.letters))
-
-        c.close()
-        d.close()
-
-
-    def test02_get_raises_exception(self):
-        d = db.DB()
-        d.open(self.filename, db.DB_BTREE, db.DB_CREATE)
-        d.set_get_returns_none(0)
-
-        for x in string.letters:
-            d.put(x, x * 40)
-
-        self.assertRaises(db.DBNotFoundError, d.get, 'bad key')
-        self.assertRaises(KeyError, d.get, 'bad key')
-
-        data = d.get(string.letters[0])
-        self.assertEqual(data, string.letters[0]*40)
-
-        count = 0
-        exceptionHappened = 0
-        c = d.cursor()
-        rec = c.first()
-        while rec:
-            count = count + 1
-            try:
-                rec = next(c)
-            except db.DBNotFoundError:  # end of the records
-                exceptionHappened = 1
-                break
-
-        self.assertNotEqual(rec, None)
-        self.assert_(exceptionHappened)
-        self.assertEqual(count, len(string.letters))
-
-        c.close()
-        d.close()
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    return unittest.makeSuite(GetReturnsNoneTestCase)
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_join.py b/Lib/bsddb/test/test_join.py
deleted file mode 100644 (file)
index e92a33c..0000000
+++ /dev/null
@@ -1,99 +0,0 @@
-"""TestCases for using the DB.join and DBCursor.join_item methods.
-"""
-
-import os
-
-import unittest
-
-from .test_all import db, dbshelve, test_support, verbose, \
-        get_new_environment_path, get_new_database_path
-
-#----------------------------------------------------------------------
-
-ProductIndex = [
-    ('apple', "Convenience Store"),
-    ('blueberry', "Farmer's Market"),
-    ('shotgun', "S-Mart"),              # Aisle 12
-    ('pear', "Farmer's Market"),
-    ('chainsaw', "S-Mart"),             # "Shop smart.  Shop S-Mart!"
-    ('strawberry', "Farmer's Market"),
-]
-
-ColorIndex = [
-    ('blue', "blueberry"),
-    ('red', "apple"),
-    ('red', "chainsaw"),
-    ('red', "strawberry"),
-    ('yellow', "peach"),
-    ('yellow', "pear"),
-    ('black', "shotgun"),
-]
-
-class JoinTestCase(unittest.TestCase):
-    keytype = ''
-
-    def setUp(self):
-        self.filename = self.__class__.__name__ + '.db'
-        self.homeDir = get_new_environment_path()
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK )
-
-    def tearDown(self):
-        self.env.close()
-        test_support.rmtree(self.homeDir)
-
-    def test01_join(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_join..." % \
-                  self.__class__.__name__)
-
-        # create and populate primary index
-        priDB = db.DB(self.env)
-        priDB.open(self.filename, "primary", db.DB_BTREE, db.DB_CREATE)
-        list(map(lambda t, priDB=priDB: priDB.put(*t), ProductIndex))
-
-        # create and populate secondary index
-        secDB = db.DB(self.env)
-        secDB.set_flags(db.DB_DUP | db.DB_DUPSORT)
-        secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE)
-        list(map(lambda t, secDB=secDB: secDB.put(*t), ColorIndex))
-
-        sCursor = None
-        jCursor = None
-        try:
-            # lets look up all of the red Products
-            sCursor = secDB.cursor()
-            # Don't do the .set() in an assert, or you can get a bogus failure
-            # when running python -O
-            tmp = sCursor.set('red')
-            self.assert_(tmp)
-
-            # FIXME: jCursor doesn't properly hold a reference to its
-            # cursors, if they are closed before jcursor is used it
-            # can cause a crash.
-            jCursor = priDB.join([sCursor])
-
-            if jCursor.get(0) != ('apple', "Convenience Store"):
-                self.fail("join cursor positioned wrong")
-            if jCursor.join_item() != 'chainsaw':
-                self.fail("DBCursor.join_item returned wrong item")
-            if jCursor.get(0)[0] != 'strawberry':
-                self.fail("join cursor returned wrong thing")
-            if jCursor.get(0):  # there were only three red items to return
-                self.fail("join cursor returned too many items")
-        finally:
-            if jCursor:
-                jCursor.close()
-            if sCursor:
-                sCursor.close()
-            priDB.close()
-            secDB.close()
-
-
-def test_suite():
-    suite = unittest.TestSuite()
-
-    suite.addTest(unittest.makeSuite(JoinTestCase))
-
-    return suite
diff --git a/Lib/bsddb/test/test_lock.py b/Lib/bsddb/test/test_lock.py
deleted file mode 100644 (file)
index 75c1022..0000000
+++ /dev/null
@@ -1,179 +0,0 @@
-"""
-TestCases for testing the locking sub-system.
-"""
-
-import time
-
-import unittest
-from .test_all import db, test_support, verbose, have_threads, \
-        get_new_environment_path, get_new_database_path
-
-if have_threads :
-    from threading import Thread
-    import sys
-    if sys.version_info[0] < 3 :
-        from threading import currentThread
-    else :
-        from threading import current_thread as currentThread
-
-#----------------------------------------------------------------------
-
-class LockingTestCase(unittest.TestCase):
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertTrue(self, expr, msg=None):
-            self.failUnless(expr,msg=msg)
-
-
-    def setUp(self):
-        self.homeDir = get_new_environment_path()
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir, db.DB_THREAD | db.DB_INIT_MPOOL |
-                                    db.DB_INIT_LOCK | db.DB_CREATE)
-
-
-    def tearDown(self):
-        self.env.close()
-        test_support.rmtree(self.homeDir)
-
-
-    def test01_simple(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_simple..." % self.__class__.__name__)
-
-        anID = self.env.lock_id()
-        if verbose:
-            print("locker ID: %s" % anID)
-        lock = self.env.lock_get(anID, "some locked thing", db.DB_LOCK_WRITE)
-        if verbose:
-            print("Acquired lock: %s" % lock)
-        self.env.lock_put(lock)
-        if verbose:
-            print("Released lock: %s" % lock)
-        self.env.lock_id_free(anID)
-
-
-    def test02_threaded(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test02_threaded..." % self.__class__.__name__)
-
-        threads = []
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_WRITE,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_READ,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_READ,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_WRITE,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_READ,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_READ,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_WRITE,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_WRITE,)))
-        threads.append(Thread(target = self.theThread,
-                              args=(db.DB_LOCK_WRITE,)))
-
-        for t in threads:
-            import sys
-            if sys.version_info[0] < 3 :
-                t.setDaemon(True)
-            else :
-                t.daemon = True
-            t.start()
-        for t in threads:
-            t.join()
-
-    def test03_lock_timeout(self):
-        self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT)
-        self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT)
-        self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT)
-        self.env.set_timeout(7890123, db.DB_SET_TXN_TIMEOUT)
-
-        def deadlock_detection() :
-            while not deadlock_detection.end :
-                deadlock_detection.count = \
-                    self.env.lock_detect(db.DB_LOCK_EXPIRE)
-                if deadlock_detection.count :
-                    while not deadlock_detection.end :
-                        pass
-                    break
-                time.sleep(0.01)
-
-        deadlock_detection.end=False
-        deadlock_detection.count=0
-        t=Thread(target=deadlock_detection)
-        import sys
-        if sys.version_info[0] < 3 :
-            t.setDaemon(True)
-        else :
-            t.daemon = True
-        t.start()
-        self.env.set_timeout(100000, db.DB_SET_LOCK_TIMEOUT)
-        anID = self.env.lock_id()
-        anID2 = self.env.lock_id()
-        self.assertNotEqual(anID, anID2)
-        lock = self.env.lock_get(anID, "shared lock", db.DB_LOCK_WRITE)
-        start_time=time.time()
-        self.assertRaises(db.DBLockNotGrantedError,
-                self.env.lock_get,anID2, "shared lock", db.DB_LOCK_READ)
-        end_time=time.time()
-        deadlock_detection.end=True
-        self.assertTrue((end_time-start_time) >= 0.1)
-        self.env.lock_put(lock)
-        t.join()
-
-        self.env.lock_id_free(anID)
-        self.env.lock_id_free(anID2)
-
-        if db.version() >= (4,6):
-            self.assertTrue(deadlock_detection.count>0)
-
-    def theThread(self, lockType):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-
-        if lockType ==  db.DB_LOCK_WRITE:
-            lt = "write"
-        else:
-            lt = "read"
-
-        anID = self.env.lock_id()
-        if verbose:
-            print("%s: locker ID: %s" % (name, anID))
-
-        for i in range(1000) :
-            lock = self.env.lock_get(anID, "some locked thing", lockType)
-            if verbose:
-                print("%s: Aquired %s lock: %s" % (name, lt, lock))
-
-            self.env.lock_put(lock)
-            if verbose:
-                print("%s: Released %s lock: %s" % (name, lt, lock))
-
-        self.env.lock_id_free(anID)
-
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-
-    if have_threads:
-        suite.addTest(unittest.makeSuite(LockingTestCase))
-    else:
-        suite.addTest(unittest.makeSuite(LockingTestCase, 'test01'))
-
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_misc.py b/Lib/bsddb/test/test_misc.py
deleted file mode 100644 (file)
index 498b4d3..0000000
+++ /dev/null
@@ -1,130 +0,0 @@
-"""Miscellaneous bsddb module test cases
-"""
-
-import os
-import unittest
-
-from .test_all import db, dbshelve, hashopen, test_support, get_new_environment_path, get_new_database_path
-
-#----------------------------------------------------------------------
-
-class MiscTestCase(unittest.TestCase):
-    def setUp(self):
-        self.filename = self.__class__.__name__ + '.db'
-        self.homeDir = get_new_environment_path()
-
-    def tearDown(self):
-        test_support.unlink(self.filename)
-        test_support.rmtree(self.homeDir)
-
-    def test01_badpointer(self):
-        dbs = dbshelve.open(self.filename)
-        dbs.close()
-        self.assertRaises(db.DBError, dbs.get, "foo")
-
-    def test02_db_home(self):
-        env = db.DBEnv()
-        # check for crash fixed when db_home is used before open()
-        self.assert_(env.db_home is None)
-        env.open(self.homeDir, db.DB_CREATE)
-        import sys
-        if sys.version_info[0] < 3 :
-            self.assertEqual(self.homeDir, env.db_home)
-        else :
-            self.assertEqual(bytes(self.homeDir, "ascii"), env.db_home)
-
-    def test03_repr_closed_db(self):
-        db = hashopen(self.filename)
-        db.close()
-        rp = repr(db)
-        self.assertEquals(rp, "{}")
-
-    def test04_repr_db(self) :
-        db = hashopen(self.filename)
-        d = {}
-        for i in range(100) :
-            db[repr(i)] = repr(100*i)
-            d[repr(i)] = repr(100*i)
-        db.close()
-        db = hashopen(self.filename)
-        rp = repr(db)
-        self.assertEquals(rp, repr(d))
-        db.close()
-
-    # http://sourceforge.net/tracker/index.php?func=detail&aid=1708868&group_id=13900&atid=313900
-    #
-    # See the bug report for details.
-    #
-    # The problem was that make_key_dbt() was not allocating a copy of
-    # string keys but FREE_DBT() was always being told to free it when the
-    # database was opened with DB_THREAD.
-    def test05_double_free_make_key_dbt(self):
-        try:
-            db1 = db.DB()
-            db1.open(self.filename, None, db.DB_BTREE,
-                     db.DB_CREATE | db.DB_THREAD)
-
-            curs = db1.cursor()
-            t = curs.get("/foo", db.DB_SET)
-            # double free happened during exit from DBC_get
-        finally:
-            db1.close()
-            test_support.unlink(self.filename)
-
-    def test06_key_with_null_bytes(self):
-        try:
-            db1 = db.DB()
-            db1.open(self.filename, None, db.DB_HASH, db.DB_CREATE)
-            db1['a'] = 'eh?'
-            db1['a\x00'] = 'eh zed.'
-            db1['a\x00a'] = 'eh zed eh?'
-            db1['aaa'] = 'eh eh eh!'
-            keys = list(db1.keys())
-            keys.sort()
-            self.assertEqual(['a', 'a\x00', 'a\x00a', 'aaa'], keys)
-            self.assertEqual(db1['a'], 'eh?')
-            self.assertEqual(db1['a\x00'], 'eh zed.')
-            self.assertEqual(db1['a\x00a'], 'eh zed eh?')
-            self.assertEqual(db1['aaa'], 'eh eh eh!')
-        finally:
-            db1.close()
-            test_support.unlink(self.filename)
-
-    def test07_DB_set_flags_persists(self):
-        if db.version() < (4,2):
-            # The get_flags API required for this to work is only available
-            # in Berkeley DB >= 4.2
-            return
-        try:
-            db1 = db.DB()
-            db1.set_flags(db.DB_DUPSORT)
-            db1.open(self.filename, db.DB_HASH, db.DB_CREATE)
-            db1['a'] = 'eh'
-            db1['a'] = 'A'
-            self.assertEqual([('a', 'A')], list(db1.items()))
-            db1.put('a', 'Aa')
-            self.assertEqual([('a', 'A'), ('a', 'Aa')], list(db1.items()))
-            db1.close()
-            db1 = db.DB()
-            # no set_flags call, we're testing that it reads and obeys
-            # the flags on open.
-            db1.open(self.filename, db.DB_HASH)
-            self.assertEqual([('a', 'A'), ('a', 'Aa')], list(db1.items()))
-            # if it read the flags right this will replace all values
-            # for key 'a' instead of adding a new one.  (as a dict should)
-            db1['a'] = 'new A'
-            self.assertEqual([('a', 'new A')], list(db1.items()))
-        finally:
-            db1.close()
-            test_support.unlink(self.filename)
-
-
-#----------------------------------------------------------------------
-
-
-def test_suite():
-    return unittest.makeSuite(MiscTestCase)
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_pickle.py b/Lib/bsddb/test/test_pickle.py
deleted file mode 100644 (file)
index a8d9199..0000000
+++ /dev/null
@@ -1,62 +0,0 @@
-
-import os
-import pickle
-try:
-    import pickle
-except ImportError:
-    pickle = None
-import unittest
-
-from .test_all import db, test_support, get_new_environment_path, get_new_database_path
-
-#----------------------------------------------------------------------
-
-class pickleTestCase(unittest.TestCase):
-    """Verify that DBError can be pickled and unpickled"""
-    db_name = 'test-dbobj.db'
-
-    def setUp(self):
-        self.homeDir = get_new_environment_path()
-
-    def tearDown(self):
-        if hasattr(self, 'db'):
-            del self.db
-        if hasattr(self, 'env'):
-            del self.env
-        test_support.rmtree(self.homeDir)
-
-    def _base_test_pickle_DBError(self, pickle):
-        self.env = db.DBEnv()
-        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
-        self.db = db.DB(self.env)
-        self.db.open(self.db_name, db.DB_HASH, db.DB_CREATE)
-        self.db.put('spam', 'eggs')
-        self.assertEqual(self.db['spam'], 'eggs')
-        try:
-            self.db.put('spam', 'ham', flags=db.DB_NOOVERWRITE)
-        except db.DBError as egg:
-            pickledEgg = pickle.dumps(egg)
-            #print repr(pickledEgg)
-            rottenEgg = pickle.loads(pickledEgg)
-            if rottenEgg.args != egg.args or type(rottenEgg) != type(egg):
-                raise Exception(rottenEgg, '!=', egg)
-        else:
-            raise Exception("where's my DBError exception?!?")
-
-        self.db.close()
-        self.env.close()
-
-    def test01_pickle_DBError(self):
-        self._base_test_pickle_DBError(pickle=pickle)
-
-    if pickle:
-        def test02_cPickle_DBError(self):
-            self._base_test_pickle_DBError(pickle=pickle)
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    return unittest.makeSuite(pickleTestCase)
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_queue.py b/Lib/bsddb/test/test_queue.py
deleted file mode 100644 (file)
index c29295c..0000000
+++ /dev/null
@@ -1,168 +0,0 @@
-"""
-TestCases for exercising a Queue DB.
-"""
-
-import os, string
-from pprint import pprint
-import unittest
-
-from .test_all import db, verbose, get_new_database_path
-
-#----------------------------------------------------------------------
-
-class SimpleQueueTestCase(unittest.TestCase):
-    def setUp(self):
-        self.filename = get_new_database_path()
-
-    def tearDown(self):
-        try:
-            os.remove(self.filename)
-        except os.error:
-            pass
-
-
-    def test01_basic(self):
-        # Basic Queue tests using the deprecated DBCursor.consume method.
-
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_basic..." % self.__class__.__name__)
-
-        d = db.DB()
-        d.set_re_len(40)  # Queues must be fixed length
-        d.open(self.filename, db.DB_QUEUE, db.DB_CREATE)
-
-        if verbose:
-            print("before appends" + '-' * 30)
-            pprint(d.stat())
-
-        for x in string.ascii_letters:
-            d.append(x * 40)
-
-        self.assertEqual(len(d), len(string.ascii_letters))
-
-        d.put(100, "some more data")
-        d.put(101, "and some more ")
-        d.put(75,  "out of order")
-        d.put(1,   "replacement data")
-
-        self.assertEqual(len(d), len(string.ascii_letters)+3)
-
-        if verbose:
-            print("before close" + '-' * 30)
-            pprint(d.stat())
-
-        d.close()
-        del d
-        d = db.DB()
-        d.open(self.filename)
-
-        if verbose:
-            print("after open" + '-' * 30)
-            pprint(d.stat())
-
-        # Test "txn" as a positional parameter
-        d.append("one more", None)
-        # Test "txn" as a keyword parameter
-        d.append("another one", txn=None)
-
-        c = d.cursor()
-
-        if verbose:
-            print("after append" + '-' * 30)
-            pprint(d.stat())
-
-        rec = c.consume()
-        while rec:
-            if verbose:
-                print(rec)
-            rec = c.consume()
-        c.close()
-
-        if verbose:
-            print("after consume loop" + '-' * 30)
-            pprint(d.stat())
-
-        self.assertEqual(len(d), 0, \
-               "if you see this message then you need to rebuild " \
-               "Berkeley DB 3.1.17 with the patch in patches/qam_stat.diff")
-
-        d.close()
-
-
-
-    def test02_basicPost32(self):
-        # Basic Queue tests using the new DB.consume method in DB 3.2+
-        # (No cursor needed)
-
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test02_basicPost32..." % self.__class__.__name__)
-
-        if db.version() < (3, 2, 0):
-            if verbose:
-                print("Test not run, DB not new enough...")
-            return
-
-        d = db.DB()
-        d.set_re_len(40)  # Queues must be fixed length
-        d.open(self.filename, db.DB_QUEUE, db.DB_CREATE)
-
-        if verbose:
-            print("before appends" + '-' * 30)
-            pprint(d.stat())
-
-        for x in string.ascii_letters:
-            d.append(x * 40)
-
-        self.assertEqual(len(d), len(string.ascii_letters))
-
-        d.put(100, "some more data")
-        d.put(101, "and some more ")
-        d.put(75,  "out of order")
-        d.put(1,   "replacement data")
-
-        self.assertEqual(len(d), len(string.ascii_letters)+3)
-
-        if verbose:
-            print("before close" + '-' * 30)
-            pprint(d.stat())
-
-        d.close()
-        del d
-        d = db.DB()
-        d.open(self.filename)
-        #d.set_get_returns_none(true)
-
-        if verbose:
-            print("after open" + '-' * 30)
-            pprint(d.stat())
-
-        d.append("one more")
-
-        if verbose:
-            print("after append" + '-' * 30)
-            pprint(d.stat())
-
-        rec = d.consume()
-        while rec:
-            if verbose:
-                print(rec)
-            rec = d.consume()
-
-        if verbose:
-            print("after consume loop" + '-' * 30)
-            pprint(d.stat())
-
-        d.close()
-
-
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    return unittest.makeSuite(SimpleQueueTestCase)
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_recno.py b/Lib/bsddb/test/test_recno.py
deleted file mode 100644 (file)
index 300f833..0000000
+++ /dev/null
@@ -1,300 +0,0 @@
-"""TestCases for exercising a Recno DB.
-"""
-
-import os
-import errno
-from pprint import pprint
-import unittest
-
-from .test_all import db, test_support, verbose, get_new_environment_path, get_new_database_path
-
-letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
-
-
-#----------------------------------------------------------------------
-
-class SimpleRecnoTestCase(unittest.TestCase):
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertFalse(self, expr, msg=None):
-            self.failIf(expr,msg=msg)
-
-    def setUp(self):
-        self.filename = get_new_database_path()
-        self.homeDir = None
-
-    def tearDown(self):
-        test_support.unlink(self.filename)
-        if self.homeDir:
-            test_support.rmtree(self.homeDir)
-
-    def test01_basic(self):
-        d = db.DB()
-
-        get_returns_none = d.set_get_returns_none(2)
-        d.set_get_returns_none(get_returns_none)
-
-        d.open(self.filename, db.DB_RECNO, db.DB_CREATE)
-
-        for x in letters:
-            recno = d.append(x * 60)
-            self.assertEqual(type(recno), type(0))
-            self.assert_(recno >= 1)
-            if verbose:
-                print(recno, end=' ')
-
-        if verbose: print()
-
-        stat = d.stat()
-        if verbose:
-            pprint(stat)
-
-        for recno in range(1, len(d)+1):
-            data = d[recno]
-            if verbose:
-                print(data)
-
-            self.assertEqual(type(data), type(""))
-            self.assertEqual(data, d.get(recno))
-
-        try:
-            data = d[0]  # This should raise a KeyError!?!?!
-        except db.DBInvalidArgError as val:
-            import sys
-            if sys.version_info[0] < 3 :
-                self.assertEqual(val[0], db.EINVAL)
-            else :
-                self.assertEqual(val.args[0], db.EINVAL)
-            if verbose: print(val)
-        else:
-            self.fail("expected exception")
-
-        # test that has_key raises DB exceptions (fixed in pybsddb 4.3.2)
-        try:
-            0 in d
-        except db.DBError as val:
-            pass
-        else:
-            self.fail("has_key did not raise a proper exception")
-
-        try:
-            data = d[100]
-        except KeyError:
-            pass
-        else:
-            self.fail("expected exception")
-
-        try:
-            data = d.get(100)
-        except db.DBNotFoundError as val:
-            if get_returns_none:
-                self.fail("unexpected exception")
-        else:
-            self.assertEqual(data, None)
-
-        keys = list(d.keys())
-        if verbose:
-            print(keys)
-        self.assertEqual(type(keys), type([]))
-        self.assertEqual(type(keys[0]), type(123))
-        self.assertEqual(len(keys), len(d))
-
-        items = list(d.items())
-        if verbose:
-            pprint(items)
-        self.assertEqual(type(items), type([]))
-        self.assertEqual(type(items[0]), type(()))
-        self.assertEqual(len(items[0]), 2)
-        self.assertEqual(type(items[0][0]), type(123))
-        self.assertEqual(type(items[0][1]), type(""))
-        self.assertEqual(len(items), len(d))
-
-        self.assert_(25 in d)
-
-        del d[25]
-        self.assertFalse(25 in d)
-
-        d.delete(13)
-        self.assertFalse(13 in d)
-
-        data = d.get_both(26, "z" * 60)
-        self.assertEqual(data, "z" * 60, 'was %r' % data)
-        if verbose:
-            print(data)
-
-        fd = d.fd()
-        if verbose:
-            print(fd)
-
-        c = d.cursor()
-        rec = c.first()
-        while rec:
-            if verbose:
-                print(rec)
-            rec = next(c)
-
-        c.set(50)
-        rec = c.current()
-        if verbose:
-            print(rec)
-
-        c.put(-1, "a replacement record", db.DB_CURRENT)
-
-        c.set(50)
-        rec = c.current()
-        self.assertEqual(rec, (50, "a replacement record"))
-        if verbose:
-            print(rec)
-
-        rec = c.set_range(30)
-        if verbose:
-            print(rec)
-
-        # test that non-existent key lookups work (and that
-        # DBC_set_range doesn't have a memleak under valgrind)
-        rec = c.set_range(999999)
-        self.assertEqual(rec, None)
-        if verbose:
-            print(rec)
-
-        c.close()
-        d.close()
-
-        d = db.DB()
-        d.open(self.filename)
-        c = d.cursor()
-
-        # put a record beyond the consecutive end of the recno's
-        d[100] = "way out there"
-        self.assertEqual(d[100], "way out there")
-
-        try:
-            data = d[99]
-        except KeyError:
-            pass
-        else:
-            self.fail("expected exception")
-
-        try:
-            d.get(99)
-        except db.DBKeyEmptyError as val:
-            if get_returns_none:
-                self.fail("unexpected DBKeyEmptyError exception")
-            else:
-                self.assertEqual(val.args[0], db.DB_KEYEMPTY)
-                if verbose: print(val)
-        else:
-            if not get_returns_none:
-                self.fail("expected exception")
-
-        rec = c.set(40)
-        while rec:
-            if verbose:
-                print(rec)
-            rec = next(c)
-
-        c.close()
-        d.close()
-
-    def test02_WithSource(self):
-        """
-        A Recno file that is given a "backing source file" is essentially a
-        simple ASCII file.  Normally each record is delimited by \n and so is
-        just a line in the file, but you can set a different record delimiter
-        if needed.
-        """
-        homeDir = get_new_environment_path()
-        self.homeDir = homeDir
-        source = os.path.join(homeDir, 'test_recno.txt')
-        if not os.path.isdir(homeDir):
-            os.mkdir(homeDir)
-        f = open(source, 'w') # create the file
-        f.close()
-
-        d = db.DB()
-        # This is the default value, just checking if both int
-        d.set_re_delim(0x0A)
-        d.set_re_delim('\n')  # and char can be used...
-        d.set_re_source(source)
-        d.open(self.filename, db.DB_RECNO, db.DB_CREATE)
-
-        data = "The quick brown fox jumped over the lazy dog".split()
-        for datum in data:
-            d.append(datum)
-        d.sync()
-        d.close()
-
-        # get the text from the backing source
-        text = open(source, 'r').read()
-        text = text.strip()
-        if verbose:
-            print(text)
-            print(data)
-            print(text.split('\n'))
-
-        self.assertEqual(text.split('\n'), data)
-
-        # open as a DB again
-        d = db.DB()
-        d.set_re_source(source)
-        d.open(self.filename, db.DB_RECNO)
-
-        d[3] = 'reddish-brown'
-        d[8] = 'comatose'
-
-        d.sync()
-        d.close()
-
-        text = open(source, 'r').read()
-        text = text.strip()
-        if verbose:
-            print(text)
-            print(text.split('\n'))
-
-        self.assertEqual(text.split('\n'),
-           "The quick reddish-brown fox jumped over the comatose dog".split())
-
-    def test03_FixedLength(self):
-        d = db.DB()
-        d.set_re_len(40)  # fixed length records, 40 bytes long
-        d.set_re_pad('-') # sets the pad character...
-        d.set_re_pad(45)  # ...test both int and char
-        d.open(self.filename, db.DB_RECNO, db.DB_CREATE)
-
-        for x in letters:
-            d.append(x * 35)    # These will be padded
-
-        d.append('.' * 40)      # this one will be exact
-
-        try:                    # this one will fail
-            d.append('bad' * 20)
-        except db.DBInvalidArgError as val:
-            import sys
-            if sys.version_info[0] < 3 :
-                self.assertEqual(val[0], db.EINVAL)
-            else :
-                self.assertEqual(val.args[0], db.EINVAL)
-            if verbose: print(val)
-        else:
-            self.fail("expected exception")
-
-        c = d.cursor()
-        rec = c.first()
-        while rec:
-            if verbose:
-                print(rec)
-            rec = next(c)
-
-        c.close()
-        d.close()
-
-
-#----------------------------------------------------------------------
-
-
-def test_suite():
-    return unittest.makeSuite(SimpleRecnoTestCase)
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_replication.py b/Lib/bsddb/test/test_replication.py
deleted file mode 100644 (file)
index 4668194..0000000
+++ /dev/null
@@ -1,444 +0,0 @@
-"""TestCases for distributed transactions.
-"""
-
-import os
-import time
-import unittest
-
-from .test_all import db, test_support, have_threads, verbose, \
-        get_new_environment_path, get_new_database_path
-
-
-#----------------------------------------------------------------------
-
-class DBReplicationManager(unittest.TestCase):
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertTrue(self, expr, msg=None):
-            self.failUnless(expr,msg=msg)
-
-    def setUp(self) :
-        self.homeDirMaster = get_new_environment_path()
-        self.homeDirClient = get_new_environment_path()
-
-        self.dbenvMaster = db.DBEnv()
-        self.dbenvClient = db.DBEnv()
-
-        # Must use "DB_THREAD" because the Replication Manager will
-        # be executed in other threads but will use the same environment.
-        # http://forums.oracle.com/forums/thread.jspa?threadID=645788&tstart=0
-        self.dbenvMaster.open(self.homeDirMaster, db.DB_CREATE | db.DB_INIT_TXN
-                | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
-                db.DB_INIT_REP | db.DB_RECOVER | db.DB_THREAD, 0o666)
-        self.dbenvClient.open(self.homeDirClient, db.DB_CREATE | db.DB_INIT_TXN
-                | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
-                db.DB_INIT_REP | db.DB_RECOVER | db.DB_THREAD, 0o666)
-
-        self.confirmed_master=self.client_startupdone=False
-        def confirmed_master(a,b,c) :
-            if b==db.DB_EVENT_REP_MASTER :
-                self.confirmed_master=True
-
-        def client_startupdone(a,b,c) :
-            if b==db.DB_EVENT_REP_STARTUPDONE :
-                self.client_startupdone=True
-
-        self.dbenvMaster.set_event_notify(confirmed_master)
-        self.dbenvClient.set_event_notify(client_startupdone)
-
-        #self.dbenvMaster.set_verbose(db.DB_VERB_REPLICATION, True)
-        #self.dbenvMaster.set_verbose(db.DB_VERB_FILEOPS_ALL, True)
-        #self.dbenvClient.set_verbose(db.DB_VERB_REPLICATION, True)
-        #self.dbenvClient.set_verbose(db.DB_VERB_FILEOPS_ALL, True)
-
-        self.dbMaster = self.dbClient = None
-
-
-    def tearDown(self):
-        if self.dbClient :
-            self.dbClient.close()
-        if self.dbMaster :
-            self.dbMaster.close()
-        self.dbenvClient.close()
-        self.dbenvMaster.close()
-        test_support.rmtree(self.homeDirClient)
-        test_support.rmtree(self.homeDirMaster)
-
-    def test01_basic_replication(self) :
-        master_port = test_support.find_unused_port()
-        self.dbenvMaster.repmgr_set_local_site("127.0.0.1", master_port)
-        client_port = test_support.find_unused_port()
-        self.dbenvClient.repmgr_set_local_site("127.0.0.1", client_port)
-        self.dbenvMaster.repmgr_add_remote_site("127.0.0.1", client_port)
-        self.dbenvClient.repmgr_add_remote_site("127.0.0.1", master_port)
-        self.dbenvMaster.rep_set_nsites(2)
-        self.dbenvClient.rep_set_nsites(2)
-        self.dbenvMaster.rep_set_priority(10)
-        self.dbenvClient.rep_set_priority(0)
-
-        self.dbenvMaster.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100123)
-        self.dbenvClient.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100321)
-        self.assertEquals(self.dbenvMaster.rep_get_timeout(
-            db.DB_REP_CONNECTION_RETRY), 100123)
-        self.assertEquals(self.dbenvClient.rep_get_timeout(
-            db.DB_REP_CONNECTION_RETRY), 100321)
-
-        self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100234)
-        self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100432)
-        self.assertEquals(self.dbenvMaster.rep_get_timeout(
-            db.DB_REP_ELECTION_TIMEOUT), 100234)
-        self.assertEquals(self.dbenvClient.rep_get_timeout(
-            db.DB_REP_ELECTION_TIMEOUT), 100432)
-
-        self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100345)
-        self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100543)
-        self.assertEquals(self.dbenvMaster.rep_get_timeout(
-            db.DB_REP_ELECTION_RETRY), 100345)
-        self.assertEquals(self.dbenvClient.rep_get_timeout(
-            db.DB_REP_ELECTION_RETRY), 100543)
-
-        self.dbenvMaster.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL)
-        self.dbenvClient.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL)
-
-        self.dbenvMaster.repmgr_start(1, db.DB_REP_MASTER)
-        self.dbenvClient.repmgr_start(1, db.DB_REP_CLIENT)
-
-        self.assertEquals(self.dbenvMaster.rep_get_nsites(),2)
-        self.assertEquals(self.dbenvClient.rep_get_nsites(),2)
-        self.assertEquals(self.dbenvMaster.rep_get_priority(),10)
-        self.assertEquals(self.dbenvClient.rep_get_priority(),0)
-        self.assertEquals(self.dbenvMaster.repmgr_get_ack_policy(),
-                db.DB_REPMGR_ACKS_ALL)
-        self.assertEquals(self.dbenvClient.repmgr_get_ack_policy(),
-                db.DB_REPMGR_ACKS_ALL)
-
-        # The timeout is necessary in BDB 4.5, since DB_EVENT_REP_STARTUPDONE
-        # is not generated if the master has no new transactions.
-        # This is solved in BDB 4.6 (#15542).
-        import time
-        timeout = time.time()+10
-        while (time.time()<timeout) and not (self.confirmed_master and self.client_startupdone) :
-            time.sleep(0.02)
-        self.assertTrue(time.time()<timeout)
-
-        d = self.dbenvMaster.repmgr_site_list()
-        self.assertEquals(len(d), 1)
-        self.assertEquals(d[0][0], "127.0.0.1")
-        self.assertEquals(d[0][1], client_port)
-        self.assertTrue((d[0][2]==db.DB_REPMGR_CONNECTED) or \
-                (d[0][2]==db.DB_REPMGR_DISCONNECTED))
-
-        d = self.dbenvClient.repmgr_site_list()
-        self.assertEquals(len(d), 1)
-        self.assertEquals(d[0][0], "127.0.0.1")
-        self.assertEquals(d[0][1], master_port)
-        self.assertTrue((d[0][2]==db.DB_REPMGR_CONNECTED) or \
-                (d[0][2]==db.DB_REPMGR_DISCONNECTED))
-
-        if db.version() >= (4,6) :
-            d = self.dbenvMaster.repmgr_stat(flags=db.DB_STAT_CLEAR)
-            self.assertTrue("msgs_queued" in d)
-
-        self.dbMaster=db.DB(self.dbenvMaster)
-        txn=self.dbenvMaster.txn_begin()
-        self.dbMaster.open("test", db.DB_HASH, db.DB_CREATE, 0o666, txn=txn)
-        txn.commit()
-
-        import time,os.path
-        timeout=time.time()+10
-        while (time.time()<timeout) and \
-          not (os.path.exists(os.path.join(self.homeDirClient,"test"))) :
-            time.sleep(0.01)
-
-        self.dbClient=db.DB(self.dbenvClient)
-        while True :
-            txn=self.dbenvClient.txn_begin()
-            try :
-                self.dbClient.open("test", db.DB_HASH, flags=db.DB_RDONLY,
-                        mode=0o666, txn=txn)
-            except db.DBRepHandleDeadError :
-                txn.abort()
-                self.dbClient.close()
-                self.dbClient=db.DB(self.dbenvClient)
-                continue
-
-            txn.commit()
-            break
-
-        txn=self.dbenvMaster.txn_begin()
-        self.dbMaster.put("ABC", "123", txn=txn)
-        txn.commit()
-        import time
-        timeout=time.time()+10
-        v=None
-        while (time.time()<timeout) and (v==None) :
-            txn=self.dbenvClient.txn_begin()
-            v=self.dbClient.get("ABC", txn=txn)
-            txn.commit()
-            if v==None :
-                time.sleep(0.02)
-        self.assertTrue(time.time()<timeout)
-        self.assertEquals("123", v)
-
-        txn=self.dbenvMaster.txn_begin()
-        self.dbMaster.delete("ABC", txn=txn)
-        txn.commit()
-        timeout=time.time()+10
-        while (time.time()<timeout) and (v!=None) :
-            txn=self.dbenvClient.txn_begin()
-            v=self.dbClient.get("ABC", txn=txn)
-            txn.commit()
-            if v==None :
-                time.sleep(0.02)
-        self.assertTrue(time.time()<timeout)
-        self.assertEquals(None, v)
-
-class DBBaseReplication(DBReplicationManager):
-    def setUp(self) :
-        DBReplicationManager.setUp(self)
-        def confirmed_master(a,b,c) :
-            if (b == db.DB_EVENT_REP_MASTER) or (b == db.DB_EVENT_REP_ELECTED) :
-                self.confirmed_master = True
-
-        def client_startupdone(a,b,c) :
-            if b == db.DB_EVENT_REP_STARTUPDONE :
-                self.client_startupdone = True
-
-        self.dbenvMaster.set_event_notify(confirmed_master)
-        self.dbenvClient.set_event_notify(client_startupdone)
-
-        import queue
-        self.m2c = queue.Queue()
-        self.c2m = queue.Queue()
-
-        # There are only two nodes, so we don't need to
-        # make any routing decisions
-        def m2c(dbenv, control, rec, lsnp, envid, flags) :
-            self.m2c.put((control, rec))
-
-        def c2m(dbenv, control, rec, lsnp, envid, flags) :
-            self.c2m.put((control, rec))
-
-        self.dbenvMaster.rep_set_transport(13,m2c)
-        self.dbenvMaster.rep_set_priority(10)
-        self.dbenvClient.rep_set_transport(3,c2m)
-        self.dbenvClient.rep_set_priority(0)
-
-        self.assertEquals(self.dbenvMaster.rep_get_priority(),10)
-        self.assertEquals(self.dbenvClient.rep_get_priority(),0)
-
-        #self.dbenvMaster.set_verbose(db.DB_VERB_REPLICATION, True)
-        #self.dbenvMaster.set_verbose(db.DB_VERB_FILEOPS_ALL, True)
-        #self.dbenvClient.set_verbose(db.DB_VERB_REPLICATION, True)
-        #self.dbenvClient.set_verbose(db.DB_VERB_FILEOPS_ALL, True)
-
-        def thread_master() :
-            return self.thread_do(self.dbenvMaster, self.c2m, 3,
-                    self.master_doing_election, True)
-
-        def thread_client() :
-            return self.thread_do(self.dbenvClient, self.m2c, 13,
-                    self.client_doing_election, False)
-
-        from threading import Thread
-        t_m=Thread(target=thread_master)
-        t_c=Thread(target=thread_client)
-        import sys
-        if sys.version_info[0] < 3 :
-            t_m.setDaemon(True)
-            t_c.setDaemon(True)
-        else :
-            t_m.daemon = True
-            t_c.daemon = True
-
-        self.t_m = t_m
-        self.t_c = t_c
-
-        self.dbMaster = self.dbClient = None
-
-        self.master_doing_election=[False]
-        self.client_doing_election=[False]
-
-
-    def tearDown(self):
-        if self.dbClient :
-            self.dbClient.close()
-        if self.dbMaster :
-            self.dbMaster.close()
-        self.m2c.put(None)
-        self.c2m.put(None)
-        self.t_m.join()
-        self.t_c.join()
-        self.dbenvClient.close()
-        self.dbenvMaster.close()
-        test_support.rmtree(self.homeDirClient)
-        test_support.rmtree(self.homeDirMaster)
-
-    def basic_rep_threading(self) :
-        self.dbenvMaster.rep_start(flags=db.DB_REP_MASTER)
-        self.dbenvClient.rep_start(flags=db.DB_REP_CLIENT)
-
-        def thread_do(env, q, envid, election_status, must_be_master) :
-            while True :
-                v=q.get()
-                if v == None : return
-                env.rep_process_message(v[0], v[1], envid)
-
-        self.thread_do = thread_do
-
-        self.t_m.start()
-        self.t_c.start()
-
-    def test01_basic_replication(self) :
-        self.basic_rep_threading()
-
-        # The timeout is necessary in BDB 4.5, since DB_EVENT_REP_STARTUPDONE
-        # is not generated if the master has no new transactions.
-        # This is solved in BDB 4.6 (#15542).
-        import time
-        timeout = time.time()+10
-        while (time.time()<timeout) and not (self.confirmed_master and
-                self.client_startupdone) :
-            time.sleep(0.02)
-        self.assertTrue(time.time()<timeout)
-
-        self.dbMaster=db.DB(self.dbenvMaster)
-        txn=self.dbenvMaster.txn_begin()
-        self.dbMaster.open("test", db.DB_HASH, db.DB_CREATE, 0o666, txn=txn)
-        txn.commit()
-
-        import time,os.path
-        timeout=time.time()+10
-        while (time.time()<timeout) and \
-          not (os.path.exists(os.path.join(self.homeDirClient,"test"))) :
-            time.sleep(0.01)
-
-        self.dbClient=db.DB(self.dbenvClient)
-        while True :
-            txn=self.dbenvClient.txn_begin()
-            try :
-                self.dbClient.open("test", db.DB_HASH, flags=db.DB_RDONLY,
-                        mode=0o666, txn=txn)
-            except db.DBRepHandleDeadError :
-                txn.abort()
-                self.dbClient.close()
-                self.dbClient=db.DB(self.dbenvClient)
-                continue
-
-            txn.commit()
-            break
-
-        txn=self.dbenvMaster.txn_begin()
-        self.dbMaster.put("ABC", "123", txn=txn)
-        txn.commit()
-        import time
-        timeout=time.time()+10
-        v=None
-        while (time.time()<timeout) and (v==None) :
-            txn=self.dbenvClient.txn_begin()
-            v=self.dbClient.get("ABC", txn=txn)
-            txn.commit()
-            if v==None :
-                time.sleep(0.02)
-        self.assertTrue(time.time()<timeout)
-        self.assertEquals("123", v)
-
-        txn=self.dbenvMaster.txn_begin()
-        self.dbMaster.delete("ABC", txn=txn)
-        txn.commit()
-        timeout=time.time()+10
-        while (time.time()<timeout) and (v!=None) :
-            txn=self.dbenvClient.txn_begin()
-            v=self.dbClient.get("ABC", txn=txn)
-            txn.commit()
-            if v==None :
-                time.sleep(0.02)
-        self.assertTrue(time.time()<timeout)
-        self.assertEquals(None, v)
-
-    if db.version() >= (4,7) :
-        def test02_test_request(self) :
-            self.basic_rep_threading()
-            (minimum, maximum) = self.dbenvClient.rep_get_request()
-            self.dbenvClient.rep_set_request(minimum-1, maximum+1)
-            self.assertEqual(self.dbenvClient.rep_get_request(),
-                    (minimum-1, maximum+1))
-
-    if db.version() >= (4,6) :
-        def test03_master_election(self) :
-            # Get ready to hold an election
-            #self.dbenvMaster.rep_start(flags=db.DB_REP_MASTER)
-            self.dbenvMaster.rep_start(flags=db.DB_REP_CLIENT)
-            self.dbenvClient.rep_start(flags=db.DB_REP_CLIENT)
-
-            def thread_do(env, q, envid, election_status, must_be_master) :
-                while True :
-                    v=q.get()
-                    if v == None : return
-                    r = env.rep_process_message(v[0],v[1],envid)
-                    if must_be_master and self.confirmed_master :
-                        self.dbenvMaster.rep_start(flags = db.DB_REP_MASTER)
-                        must_be_master = False
-
-                    if r[0] == db.DB_REP_HOLDELECTION :
-                        def elect() :
-                            while True :
-                                try :
-                                    env.rep_elect(2, 1)
-                                    election_status[0] = False
-                                    break
-                                except db.DBRepUnavailError :
-                                    pass
-                        if not election_status[0] and not self.confirmed_master :
-                            from threading import Thread
-                            election_status[0] = True
-                            t=Thread(target=elect)
-                            import sys
-                            if sys.version_info[0] < 3 :
-                                t.setDaemon(True)
-                            else :
-                                t.daemon = True
-                            t.start()
-
-            self.thread_do = thread_do
-
-            self.t_m.start()
-            self.t_c.start()
-
-            self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 50000)
-            self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 50000)
-            self.client_doing_election[0] = True
-            while True :
-                try :
-                    self.dbenvClient.rep_elect(2, 1)
-                    self.client_doing_election[0] = False
-                    break
-                except db.DBRepUnavailError :
-                    pass
-
-            self.assertTrue(self.confirmed_master)
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-    if db.version() >= (4, 6) :
-        dbenv = db.DBEnv()
-        try :
-            dbenv.repmgr_get_ack_policy()
-            ReplicationManager_available=True
-        except :
-            ReplicationManager_available=False
-        dbenv.close()
-        del dbenv
-        if ReplicationManager_available :
-            suite.addTest(unittest.makeSuite(DBReplicationManager))
-
-        if have_threads :
-            suite.addTest(unittest.makeSuite(DBBaseReplication))
-
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_sequence.py b/Lib/bsddb/test/test_sequence.py
deleted file mode 100644 (file)
index e2acbef..0000000
+++ /dev/null
@@ -1,142 +0,0 @@
-import unittest
-import os
-
-from .test_all import db, test_support, get_new_environment_path, get_new_database_path
-
-
-class DBSequenceTest(unittest.TestCase):
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertTrue(self, expr, msg=None):
-            self.failUnless(expr,msg=msg)
-
-    def setUp(self):
-        self.int_32_max = 0x100000000
-        self.homeDir = get_new_environment_path()
-        self.filename = "test"
-
-        self.dbenv = db.DBEnv()
-        self.dbenv.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL, 0o666)
-        self.d = db.DB(self.dbenv)
-        self.d.open(self.filename, db.DB_BTREE, db.DB_CREATE, 0o666)
-
-    def tearDown(self):
-        if hasattr(self, 'seq'):
-            self.seq.close()
-            del self.seq
-        if hasattr(self, 'd'):
-            self.d.close()
-            del self.d
-        if hasattr(self, 'dbenv'):
-            self.dbenv.close()
-            del self.dbenv
-
-        test_support.rmtree(self.homeDir)
-
-    def test_get(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        start_value = 10 * self.int_32_max
-        self.assertEqual(0xA00000000, start_value)
-        self.assertEquals(None, self.seq.init_value(start_value))
-        self.assertEquals(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE))
-        self.assertEquals(start_value, self.seq.get(5))
-        self.assertEquals(start_value + 5, self.seq.get())
-
-    def test_remove(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE))
-        self.assertEquals(None, self.seq.remove(txn=None, flags=0))
-        del self.seq
-
-    def test_get_key(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        key = 'foo'
-        self.assertEquals(None, self.seq.open(key=key, txn=None, flags=db.DB_CREATE))
-        self.assertEquals(key, self.seq.get_key())
-
-    def test_get_dbp(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE))
-        self.assertEquals(self.d, self.seq.get_dbp())
-
-    def test_cachesize(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        cache_size = 10
-        self.assertEquals(None, self.seq.set_cachesize(cache_size))
-        self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE))
-        self.assertEquals(cache_size, self.seq.get_cachesize())
-
-    def test_flags(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        flag = db.DB_SEQ_WRAP
-        self.assertEquals(None, self.seq.set_flags(flag))
-        self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE))
-        self.assertEquals(flag, self.seq.get_flags() & flag)
-
-    def test_range(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        seq_range = (10 * self.int_32_max, 11 * self.int_32_max - 1)
-        self.assertEquals(None, self.seq.set_range(seq_range))
-        self.seq.init_value(seq_range[0])
-        self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE))
-        self.assertEquals(seq_range, self.seq.get_range())
-
-    def test_stat(self):
-        self.seq = db.DBSequence(self.d, flags=0)
-        self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE))
-        stat = self.seq.stat()
-        for param in ('nowait', 'min', 'max', 'value', 'current',
-                      'flags', 'cache_size', 'last_value', 'wait'):
-            self.assertTrue(param in stat, "parameter %s isn't in stat info" % param)
-
-    if db.version() >= (4,7) :
-        # This code checks for a crash that was fixed in Berkeley DB 4.7
-        def test_stat_crash(self) :
-            d=db.DB()
-            d.open(None,dbtype=db.DB_HASH,flags=db.DB_CREATE)  # In RAM
-            seq = db.DBSequence(d, flags=0)
-
-            self.assertRaises(db.DBNotFoundError, seq.open,
-                    key='id', txn=None, flags=0)
-
-            self.assertRaises(db.DBInvalidArgError, seq.stat)
-
-            d.close()
-
-    def test_64bits(self) :
-        # We don't use both extremes because they are problematic
-        value_plus=(1<<63)-2
-        self.assertEquals(9223372036854775806,value_plus)
-        value_minus=(-1<<63)+1  # Two complement
-        self.assertEquals(-9223372036854775807,value_minus)
-        self.seq = db.DBSequence(self.d, flags=0)
-        self.assertEquals(None, self.seq.init_value(value_plus-1))
-        self.assertEquals(None, self.seq.open(key='id', txn=None,
-            flags=db.DB_CREATE))
-        self.assertEquals(value_plus-1, self.seq.get(1))
-        self.assertEquals(value_plus, self.seq.get(1))
-
-        self.seq.remove(txn=None, flags=0)
-
-        self.seq = db.DBSequence(self.d, flags=0)
-        self.assertEquals(None, self.seq.init_value(value_minus))
-        self.assertEquals(None, self.seq.open(key='id', txn=None,
-            flags=db.DB_CREATE))
-        self.assertEquals(value_minus, self.seq.get(1))
-        self.assertEquals(value_minus+1, self.seq.get(1))
-
-    def test_multiple_close(self):
-        self.seq = db.DBSequence(self.d)
-        self.seq.close()  # You can close a Sequence multiple times
-        self.seq.close()
-        self.seq.close()
-
-def test_suite():
-    suite = unittest.TestSuite()
-    if db.version() >= (4,3):
-        suite.addTest(unittest.makeSuite(DBSequenceTest))
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/bsddb/test/test_thread.py b/Lib/bsddb/test/test_thread.py
deleted file mode 100644 (file)
index bb43034..0000000
+++ /dev/null
@@ -1,528 +0,0 @@
-"""TestCases for multi-threaded access to a DB.
-"""
-
-import os
-import sys
-import time
-import errno
-from random import random
-
-DASH = '-'
-
-try:
-    WindowsError
-except NameError:
-    class WindowsError(Exception):
-        pass
-
-import unittest
-from .test_all import db, dbutils, test_support, verbose, have_threads, \
-        get_new_environment_path, get_new_database_path
-
-if have_threads :
-    from threading import Thread
-    import sys
-    if sys.version_info[0] < 3 :
-        from threading import currentThread
-    else :
-        from threading import current_thread as currentThread
-
-
-#----------------------------------------------------------------------
-
-class BaseThreadedTestCase(unittest.TestCase):
-    dbtype       = db.DB_UNKNOWN  # must be set in derived class
-    dbopenflags  = 0
-    dbsetflags   = 0
-    envflags     = 0
-
-    import sys
-    if sys.version_info[:3] < (2, 4, 0):
-        def assertTrue(self, expr, msg=None):
-            self.failUnless(expr,msg=msg)
-
-    def setUp(self):
-        if verbose:
-            dbutils._deadlock_VerboseFile = sys.stdout
-
-        self.homeDir = get_new_environment_path()
-        self.env = db.DBEnv()
-        self.setEnvOpts()
-        self.env.open(self.homeDir, self.envflags | db.DB_CREATE)
-
-        self.filename = self.__class__.__name__ + '.db'
-        self.d = db.DB(self.env)
-        if self.dbsetflags:
-            self.d.set_flags(self.dbsetflags)
-        self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE)
-
-    def tearDown(self):
-        self.d.close()
-        self.env.close()
-        test_support.rmtree(self.homeDir)
-
-    def setEnvOpts(self):
-        pass
-
-    def makeData(self, key):
-        return DASH.join([key] * 5)
-
-
-#----------------------------------------------------------------------
-
-
-class ConcurrentDataStoreBase(BaseThreadedTestCase):
-    dbopenflags = db.DB_THREAD
-    envflags    = db.DB_THREAD | db.DB_INIT_CDB | db.DB_INIT_MPOOL
-    readers     = 0 # derived class should set
-    writers     = 0
-    records     = 1000
-
-    def test01_1WriterMultiReaders(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test01_1WriterMultiReaders..." % \
-                  self.__class__.__name__)
-
-        keys=list(range(self.records))
-        import random
-        random.shuffle(keys)
-        records_per_writer=self.records//self.writers
-        readers_per_writer=self.readers//self.writers
-        self.assertEqual(self.records,self.writers*records_per_writer)
-        self.assertEqual(self.readers,self.writers*readers_per_writer)
-        self.assertTrue((records_per_writer%readers_per_writer)==0)
-        readers = []
-
-        for x in range(self.readers):
-            rt = Thread(target = self.readerThread,
-                        args = (self.d, x),
-                        name = 'reader %d' % x,
-                        )#verbose = verbose)
-            import sys
-            if sys.version_info[0] < 3 :
-                rt.setDaemon(True)
-            else :
-                rt.daemon = True
-            readers.append(rt)
-
-        writers=[]
-        for x in range(self.writers):
-            a=keys[records_per_writer*x:records_per_writer*(x+1)]
-            a.sort()  # Generate conflicts
-            b=readers[readers_per_writer*x:readers_per_writer*(x+1)]
-            wt = Thread(target = self.writerThread,
-                        args = (self.d, a, b),
-                        name = 'writer %d' % x,
-                        )#verbose = verbose)
-            writers.append(wt)
-
-        for t in writers:
-            import sys
-            if sys.version_info[0] < 3 :
-                t.setDaemon(True)
-            else :
-                t.daemon = True
-            t.start()
-
-        for t in writers:
-            t.join()
-        for t in readers:
-            t.join()
-
-    def writerThread(self, d, keys, readers):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-
-        if verbose:
-            print("%s: creating records %d - %d" % (name, start, stop))
-
-        count=len(keys)//len(readers)
-        count2=count
-        for x in keys :
-            key = '%04d' % x
-            dbutils.DeadlockWrap(d.put, key, self.makeData(key),
-                                 max_retries=12)
-            if verbose and x % 100 == 0:
-                print("%s: records %d - %d finished" % (name, start, x))
-
-            count2-=1
-            if not count2 :
-                readers.pop().start()
-                count2=count
-
-        if verbose:
-            print("%s: finished creating records" % name)
-
-        if verbose:
-            print("%s: thread finished" % name)
-
-    def readerThread(self, d, readerNum):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-
-        for i in range(5) :
-            c = d.cursor()
-            count = 0
-            rec = c.first()
-            while rec:
-                count += 1
-                key, data = rec
-                self.assertEqual(self.makeData(key), data)
-                rec = next(c)
-            if verbose:
-                print("%s: found %d records" % (name, count))
-            c.close()
-
-        if verbose:
-            print("%s: thread finished" % name)
-
-
-class BTreeConcurrentDataStore(ConcurrentDataStoreBase):
-    dbtype  = db.DB_BTREE
-    writers = 2
-    readers = 10
-    records = 1000
-
-
-class HashConcurrentDataStore(ConcurrentDataStoreBase):
-    dbtype  = db.DB_HASH
-    writers = 2
-    readers = 10
-    records = 1000
-
-
-#----------------------------------------------------------------------
-
-class SimpleThreadedBase(BaseThreadedTestCase):
-    dbopenflags = db.DB_THREAD
-    envflags    = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
-    readers = 10
-    writers = 2
-    records = 1000
-
-    def setEnvOpts(self):
-        self.env.set_lk_detect(db.DB_LOCK_DEFAULT)
-
-    def test02_SimpleLocks(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test02_SimpleLocks..." % self.__class__.__name__)
-
-
-        keys=list(range(self.records))
-        import random
-        random.shuffle(keys)
-        records_per_writer=self.records//self.writers
-        readers_per_writer=self.readers//self.writers
-        self.assertEqual(self.records,self.writers*records_per_writer)
-        self.assertEqual(self.readers,self.writers*readers_per_writer)
-        self.assertTrue((records_per_writer%readers_per_writer)==0)
-
-        readers = []
-        for x in range(self.readers):
-            rt = Thread(target = self.readerThread,
-                        args = (self.d, x),
-                        name = 'reader %d' % x,
-                        )#verbose = verbose)
-            import sys
-            if sys.version_info[0] < 3 :
-                rt.setDaemon(True)
-            else :
-                rt.daemon = True
-            readers.append(rt)
-
-        writers = []
-        for x in range(self.writers):
-            a=keys[records_per_writer*x:records_per_writer*(x+1)]
-            a.sort()  # Generate conflicts
-            b=readers[readers_per_writer*x:readers_per_writer*(x+1)]
-            wt = Thread(target = self.writerThread,
-                        args = (self.d, a, b),
-                        name = 'writer %d' % x,
-                        )#verbose = verbose)
-            writers.append(wt)
-
-        for t in writers:
-            import sys
-            if sys.version_info[0] < 3 :
-                t.setDaemon(True)
-            else :
-                t.daemon = True
-            t.start()
-
-        for t in writers:
-            t.join()
-        for t in readers:
-            t.join()
-
-    def writerThread(self, d, keys, readers):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-        if verbose:
-            print("%s: creating records %d - %d" % (name, start, stop))
-
-        count=len(keys)//len(readers)
-        count2=count
-        for x in keys :
-            key = '%04d' % x
-            dbutils.DeadlockWrap(d.put, key, self.makeData(key),
-                                 max_retries=12)
-
-            if verbose and x % 100 == 0:
-                print("%s: records %d - %d finished" % (name, start, x))
-
-            count2-=1
-            if not count2 :
-                readers.pop().start()
-                count2=count
-
-        if verbose:
-            print("%s: thread finished" % name)
-
-    def readerThread(self, d, readerNum):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-
-        c = d.cursor()
-        count = 0
-        rec = dbutils.DeadlockWrap(c.first, max_retries=10)
-        while rec:
-            count += 1
-            key, data = rec
-            self.assertEqual(self.makeData(key), data)
-            rec = dbutils.DeadlockWrap(c.__next__, max_retries=10)
-        if verbose:
-            print("%s: found %d records" % (name, count))
-        c.close()
-
-        if verbose:
-            print("%s: thread finished" % name)
-
-
-class BTreeSimpleThreaded(SimpleThreadedBase):
-    dbtype = db.DB_BTREE
-
-
-class HashSimpleThreaded(SimpleThreadedBase):
-    dbtype = db.DB_HASH
-
-
-#----------------------------------------------------------------------
-
-
-class ThreadedTransactionsBase(BaseThreadedTestCase):
-    dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT
-    envflags    = (db.DB_THREAD |
-                   db.DB_INIT_MPOOL |
-                   db.DB_INIT_LOCK |
-                   db.DB_INIT_LOG |
-                   db.DB_INIT_TXN
-                   )
-    readers = 0
-    writers = 0
-    records = 2000
-    txnFlag = 0
-
-    def setEnvOpts(self):
-        #self.env.set_lk_detect(db.DB_LOCK_DEFAULT)
-        pass
-
-    def test03_ThreadedTransactions(self):
-        if verbose:
-            print('\n', '-=' * 30)
-            print("Running %s.test03_ThreadedTransactions..." % \
-                  self.__class__.__name__)
-
-        keys=list(range(self.records))
-        import random
-        random.shuffle(keys)
-        records_per_writer=self.records//self.writers
-        readers_per_writer=self.readers//self.writers
-        self.assertEqual(self.records,self.writers*records_per_writer)
-        self.assertEqual(self.readers,self.writers*readers_per_writer)
-        self.assertTrue((records_per_writer%readers_per_writer)==0)
-
-        readers=[]
-        for x in range(self.readers):
-            rt = Thread(target = self.readerThread,
-                        args = (self.d, x),
-                        name = 'reader %d' % x,
-                        )#verbose = verbose)
-            import sys
-            if sys.version_info[0] < 3 :
-                rt.setDaemon(True)
-            else :
-                rt.daemon = True
-            readers.append(rt)
-
-        writers = []
-        for x in range(self.writers):
-            a=keys[records_per_writer*x:records_per_writer*(x+1)]
-            b=readers[readers_per_writer*x:readers_per_writer*(x+1)]
-            wt = Thread(target = self.writerThread,
-                        args = (self.d, a, b),
-                        name = 'writer %d' % x,
-                        )#verbose = verbose)
-            writers.append(wt)
-
-        dt = Thread(target = self.deadlockThread)
-        import sys
-        if sys.version_info[0] < 3 :
-            dt.setDaemon(True)
-        else :
-            dt.daemon = True
-        dt.start()
-
-        for t in writers:
-            import sys
-            if sys.version_info[0] < 3 :
-                t.setDaemon(True)
-            else :
-                t.daemon = True
-            t.start()
-
-        for t in writers:
-            t.join()
-        for t in readers:
-            t.join()
-
-        self.doLockDetect = False
-        dt.join()
-
-    def writerThread(self, d, keys, readers):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-
-        count=len(keys)//len(readers)
-        while len(keys):
-            try:
-                txn = self.env.txn_begin(None, self.txnFlag)
-                keys2=keys[:count]
-                for x in keys2 :
-                    key = '%04d' % x
-                    d.put(key, self.makeData(key), txn)
-                    if verbose and x % 100 == 0:
-                        print("%s: records %d - %d finished" % (name, start, x))
-                txn.commit()
-                keys=keys[count:]
-                readers.pop().start()
-            except (db.DBLockDeadlockError, db.DBLockNotGrantedError) as val:
-                if verbose:
-                    print("%s: Aborting transaction (%s)" % (name, val[1]))
-                txn.abort()
-
-        if verbose:
-            print("%s: thread finished" % name)
-
-    def readerThread(self, d, readerNum):
-        import sys
-        if sys.version_info[0] < 3 :
-            name = currentThread().getName()
-        else :
-            name = currentThread().name
-
-        finished = False
-        while not finished:
-            try:
-                txn = self.env.txn_begin(None, self.txnFlag)
-                c = d.cursor(txn)
-                count = 0
-                rec = c.first()
-                while rec:
-                    count += 1
-                    key, data = rec
-                    self.assertEqual(self.makeData(key), data)
-                    rec = next(c)
-                if verbose: print("%s: found %d records" % (name, count))
-                c.close()
-                txn.commit()
-                finished = True
-            except (db.DBLockDeadlockError, db.DBLockNotGrantedError) as val:
-                if verbose:
-                    print("%s: Aborting transaction (%s)" % (name, val[1]))
-                c.close()
-                txn.abort()
-
-        if verbose:
-            print("%s: thread finished" % name)
-
-    def deadlockThread(self):
-        self.doLockDetect = True
-        while self.doLockDetect:
-            time.sleep(0.05)
-            try:
-                aborted = self.env.lock_detect(
-                    db.DB_LOCK_RANDOM, db.DB_LOCK_CONFLICT)
-                if verbose and aborted:
-                    print("deadlock: Aborted %d deadlocked transaction(s)" \
-                          % aborted)
-            except db.DBError:
-                pass
-
-
-class BTreeThreadedTransactions(ThreadedTransactionsBase):
-    dbtype = db.DB_BTREE
-    writers = 2
-    readers = 10
-    records = 1000
-
-class HashThreadedTransactions(ThreadedTransactionsBase):
-    dbtype = db.DB_HASH
-    writers = 2
-    readers = 10
-    records = 1000
-
-class BTreeThreadedNoWaitTransactions(ThreadedTransactionsBase):
-    dbtype = db.DB_BTREE
-    writers = 2
-    readers = 10
-    records = 1000
-    txnFlag = db.DB_TXN_NOWAIT
-
-class HashThreadedNoWaitTransactions(ThreadedTransactionsBase):
-    dbtype = db.DB_HASH
-    writers = 2
-    readers = 10
-    records = 1000
-    txnFlag = db.DB_TXN_NOWAIT
-
-
-#----------------------------------------------------------------------
-
-def test_suite():
-    suite = unittest.TestSuite()
-
-    if have_threads:
-        suite.addTest(unittest.makeSuite(BTreeConcurrentDataStore))
-        suite.addTest(unittest.makeSuite(HashConcurrentDataStore))
-        suite.addTest(unittest.makeSuite(BTreeSimpleThreaded))
-        suite.addTest(unittest.makeSuite(HashSimpleThreaded))
-        suite.addTest(unittest.makeSuite(BTreeThreadedTransactions))
-        suite.addTest(unittest.makeSuite(HashThreadedTransactions))
-        suite.addTest(unittest.makeSuite(BTreeThreadedNoWaitTransactions))
-        suite.addTest(unittest.makeSuite(HashThreadedNoWaitTransactions))
-
-    else:
-        print("Threads not available, skipping thread tests.")
-
-    return suite
-
-
-if __name__ == '__main__':
-    unittest.main(defaultTest='test_suite')
diff --git a/Lib/test/test_bsddb.py b/Lib/test/test_bsddb.py
deleted file mode 100755 (executable)
index 666f92c..0000000
+++ /dev/null
@@ -1,454 +0,0 @@
-#! /usr/bin/env python
-"""Test script for the bsddb C module by Roger E. Masse
-   Adapted to unittest format and expanded scope by Raymond Hettinger
-"""
-import os, sys
-import copy
-import bsddb
-import dbm.bsd # Just so we know it's imported
-import unittest
-from test import support
-
-class TestBSDDB(unittest.TestCase):
-    openflag = 'c'
-
-    def do_open(self, *args, **kw):
-        # This code will be vastly improved in the future bsddb 4.7.4 release. Meanwhile,
-        # let's live with this ugliness. XXX - jcea@jcea.es - 20080902
-        class _ExposedProperties:
-            @property
-            def _cursor_refs(self):
-                return self.db._cursor_refs
-
-        import collections
-        class StringKeys(collections.MutableMapping, _ExposedProperties):
-            """Wrapper around DB object that automatically encodes
-            all keys as UTF-8; the keys must be strings."""
-
-            def __init__(self, db):
-                self.db = db
-
-            def __len__(self):
-                return len(self.db)
-
-            def __getitem__(self, key):
-                return self.db[key.encode("utf-8")]
-
-            def __setitem__(self, key, value):
-                self.db[key.encode("utf-8")] = value
-
-            def __delitem__(self, key):
-                del self.db[key.encode("utf-8")]
-
-            def __iter__(self):
-                for k in self.db:
-                    yield k.decode("utf-8")
-
-            def close(self):
-                self.db.close()
-
-            def keys(self):
-                for k in self.db.keys():
-                    yield k.decode("utf-8")
-
-            def has_key(self, key):
-                return self.db.has_key(key.encode("utf-8"))
-
-            __contains__ = has_key
-
-            def values(self):
-                return self.db.values()
-
-            def items(self):
-                for k,v in self.db.items():
-                    yield k.decode("utf-8"), v
-
-            def next(self):
-                key, value = self.db.next()
-                return key.decode("utf-8"), value
-
-            def previous(self):
-                key, value = self.db.previous()
-                return key.decode("utf-8"), value
-
-            def first(self):
-                key, value = self.db.first()
-                return key.decode("utf-8"), value
-
-            def last(self):
-                key, value = self.db.last()
-                return key.decode("utf-8"), value
-
-            def set_location(self, key):
-                key, value = self.db.set_location(key.encode("utf-8"))
-                return key.decode("utf-8"), value
-
-            def sync(self):
-                return self.db.sync()
-
-        class StringValues(collections.MutableMapping, _ExposedProperties):
-            """Wrapper around DB object that automatically encodes
-            and decodes all values as UTF-8; input values must be strings."""
-
-            def __init__(self, db):
-                self.db = db
-
-            def __len__(self):
-                return len(self.db)
-
-            def __getitem__(self, key):
-                return self.db[key].decode("utf-8")
-
-            def __setitem__(self, key, value):
-                self.db[key] = value.encode("utf-8")
-
-            def __delitem__(self, key):
-                del self.db[key]
-
-            def __iter__(self):
-                return iter(self.db)
-
-            def close(self):
-                self.db.close()
-
-            def keys(self):
-                return self.db.keys()
-
-            def has_key(self, key):
-                return self.db.has_key(key)
-
-            __contains__ = has_key
-
-            def values(self):
-                for v in self.db.values():
-                    yield v.decode("utf-8")
-
-            def items(self):
-                for k,v in self.db.items():
-                    yield k, v.decode("utf-8")
-
-            def next(self):
-                key, value = self.db.next()
-                return key, value.decode("utf-8")
-
-            def previous(self):
-                key, value = self.db.previous()
-                return key, value.decode("utf-8")
-
-            def first(self):
-                key, value = self.db.first()
-                return key, value.decode("utf-8")
-
-            def last(self):
-                key, value = self.db.last()
-                return key, value.decode("utf-8")
-
-            def set_location(self, key):
-                key, value = self.db.set_location(key)
-                return key, value.decode("utf-8")
-
-            def sync(self):
-                return self.db.sync()
-
-        # openmethod is a list so that it's not mistaken for an instance method
-        return StringValues(StringKeys(self.openmethod[0](*args, **kw)))
-
-    def setUp(self):
-        self.f = self.do_open(self.fname, self.openflag, cachesize=32768)
-        self.d = dict(q='Guido', w='van', e='Rossum', r='invented', t='Python', y='')
-        for k, v in self.d.items():
-            self.f[k] = v
-
-    def tearDown(self):
-        self.f.sync()
-        self.f.close()
-        if self.fname is None:
-            return
-        try:
-            os.remove(self.fname)
-        except os.error:
-            pass
-
-    def test_getitem(self):
-        for k, v in self.d.items():
-            self.assertEqual(self.f[k], v)
-
-    def test_len(self):
-        self.assertEqual(len(self.f), len(self.d))
-
-    def test_change(self):
-        self.f['r'] = 'discovered'
-        self.assertEqual(self.f['r'], 'discovered')
-        self.assert_('r' in self.f.keys())
-        self.assert_('discovered' in self.f.values())
-
-    def test_close_and_reopen(self):
-        if self.fname is None:
-            # if we're using an in-memory only db, we can't reopen it
-            # so finish here.
-            return
-        self.f.close()
-        self.f = self.do_open(self.fname, 'w')
-        for k, v in self.d.items():
-            self.assertEqual(self.f[k], v)
-
-    def assertSetEquals(self, seqn1, seqn2):
-        self.assertEqual(set(seqn1), set(seqn2))
-
-    def test_mapping_iteration_methods(self):
-        f = self.f
-        d = self.d
-        self.assertSetEquals(d, f)
-        self.assertSetEquals(d.keys(), f.keys())
-        self.assertSetEquals(d.values(), f.values())
-        self.assertSetEquals(d.items(), f.items())
-        self.assertSetEquals(d.keys(), f.keys())
-        self.assertSetEquals(d.values(), f.values())
-        self.assertSetEquals(d.items(), f.items())
-
-    def test_iter_while_modifying_values(self):
-        if not hasattr(self.f, '__iter__'):
-            return
-
-        di = iter(self.d)
-        while 1:
-            try:
-                key = next(di)
-                self.d[key] = 'modified '+key
-            except StopIteration:
-                break
-
-        # it should behave the same as a dict.  modifying values
-        # of existing keys should not break iteration.  (adding
-        # or removing keys should)
-        fi = iter(self.f)
-        while 1:
-            try:
-                key = next(fi)
-                self.f[key] = 'modified '+key
-            except StopIteration:
-                break
-
-        self.test_mapping_iteration_methods()
-
-    def test_iteritems_while_modifying_values(self):
-        if not hasattr(self.f, 'iteritems'):
-            return
-
-        di = iter(self.d.items())
-        while 1:
-            try:
-                k, v = next(di)
-                self.d[k] = 'modified '+v
-            except StopIteration:
-                break
-
-        # it should behave the same as a dict.  modifying values
-        # of existing keys should not break iteration.  (adding
-        # or removing keys should)
-        fi = iter(self.f.items())
-        while 1:
-            try:
-                k, v = next(fi)
-                self.f[k] = 'modified '+v
-            except StopIteration:
-                break
-
-        self.test_mapping_iteration_methods()
-
-    def test_first_next_looping(self):
-        items = [self.f.first()]
-        for i in range(1, len(self.f)):
-            items.append(self.f.next())
-        self.assertSetEquals(items, self.d.items())
-
-    def test_previous_last_looping(self):
-        items = [self.f.last()]
-        for i in range(1, len(self.f)):
-            items.append(self.f.previous())
-        self.assertSetEquals(items, self.d.items())
-
-    def test_first_while_deleting(self):
-        # Test for bug 1725856
-        self.assert_(len(self.d) >= 2, "test requires >=2 items")
-        for _ in self.d:
-            key = self.f.first()[0]
-            del self.f[key]
-        self.assertEqual(0, len(self.f), "expected empty db after test")
-
-    def test_last_while_deleting(self):
-        # Test for bug 1725856's evil twin
-        self.assert_(len(self.d) >= 2, "test requires >=2 items")
-        for _ in self.d:
-            key = self.f.last()[0]
-            del self.f[key]
-        self.assertEqual(0, len(self.f), "expected empty db after test")
-
-    def test_set_location(self):
-        self.assertEqual(self.f.set_location('e'), ('e', self.d['e']))
-
-    def test_contains(self):
-        for k in self.d:
-            self.assert_(k in self.f)
-        self.assert_('not here' not in self.f)
-
-    def test_clear(self):
-        self.f.clear()
-        self.assertEqual(len(self.f), 0)
-
-    def test__no_deadlock_first(self, debug=0):
-        # flush stdout so that, in verbose mode, testers can see which
-        # function we're in when we deadlock.
-        sys.stdout.flush()
-
-        # in pybsddb's _DBWithCursor this causes an internal DBCursor
-        # object to be created.  Other test_ methods in this class could
-        # inadvertently cause the deadlock but an explicit test is needed.
-        if debug: print("A")
-        k,v = self.f.first()
-        if debug: print("B", k)
-        self.f[k] = "deadlock.  do not pass go.  do not collect $200."
-        if debug: print("C")
-        # if the bsddb implementation leaves the DBCursor open during
-        # the database write and locking+threading support is enabled,
-        # the cursor's read lock will deadlock the write lock request.
-
-        # test the iterator interface (if present)
-        if hasattr(self.f, 'iteritems'):
-            if debug: print("D")
-            i = iter(self.f.items())
-            k,v = next(i)
-            if debug: print("E")
-            self.f[k] = "please don't deadlock"
-            if debug: print("F")
-            while 1:
-                try:
-                    k,v = next(i)
-                except StopIteration:
-                    break
-            if debug: print("F2")
-
-            i = iter(self.f)
-            if debug: print("G")
-            while i:
-                try:
-                    if debug: print("H")
-                    k = next(i)
-                    if debug: print("I")
-                    self.f[k] = "deadlocks-r-us"
-                    if debug: print("J")
-                except StopIteration:
-                    i = None
-            if debug: print("K")
-
-        # test the legacy cursor interface mixed with writes
-        self.assert_(self.f.first()[0] in self.d)
-        k = self.f.next()[0]
-        self.assert_(k in self.d)
-        self.f[k] = "be gone with ye deadlocks"
-        self.assertEqual(self.f[k], "be gone with ye deadlocks")
-
-    def test_for_cursor_memleak(self):
-        if not hasattr(self.f, 'iteritems'):
-            return
-
-        # do the bsddb._DBWithCursor _iter_mixin internals leak cursors?
-        nc1 = len(self.f._cursor_refs)
-        # create iterator
-        i = iter(self.f.iteritems())
-        nc2 = len(self.f._cursor_refs)
-        # use the iterator (should run to the first yield, creating the cursor)
-        k, v = next(i)
-        nc3 = len(self.f._cursor_refs)
-        # destroy the iterator; this should cause the weakref callback
-        # to remove the cursor object from self.f._cursor_refs
-        del i
-        nc4 = len(self.f._cursor_refs)
-
-        self.assertEqual(nc1, nc2)
-        self.assertEqual(nc1, nc4)
-        self.assertEqual(nc3, nc1+1)
-
-    def test_popitem(self):
-        k, v = self.f.popitem()
-        self.assert_(k in self.d)
-        self.assert_(v in self.d.values())
-        self.assert_(k not in self.f)
-        self.assertEqual(len(self.d)-1, len(self.f))
-
-    def test_pop(self):
-        k = 'w'
-        v = self.f.pop(k)
-        self.assertEqual(v, self.d[k])
-        self.assert_(k not in self.f)
-        self.assert_(v not in self.f.values())
-        self.assertEqual(len(self.d)-1, len(self.f))
-
-    def test_get(self):
-        self.assertEqual(self.f.get('NotHere'), None)
-        self.assertEqual(self.f.get('NotHere', 'Default'), 'Default')
-        self.assertEqual(self.f.get('q', 'Default'), self.d['q'])
-
-    def test_setdefault(self):
-        self.assertEqual(self.f.setdefault('new', 'dog'), 'dog')
-        self.assertEqual(self.f.setdefault('r', 'cat'), self.d['r'])
-
-    def test_update(self):
-        new = dict(y='life', u='of', i='brian')
-        self.f.update(new)
-        self.d.update(new)
-        for k, v in self.d.items():
-            self.assertEqual(self.f[k], v)
-
-    def test_keyordering(self):
-        if self.openmethod[0] is not bsddb.btopen:
-            return
-        keys = sorted(self.d.keys())
-        self.assertEqual(self.f.first()[0], keys[0])
-        self.assertEqual(self.f.next()[0], keys[1])
-        self.assertEqual(self.f.last()[0], keys[-1])
-        self.assertEqual(self.f.previous()[0], keys[-2])
-        self.assertEqual(list(self.f), keys)
-
-class TestBTree(TestBSDDB):
-    fname = support.TESTFN
-    openmethod = [bsddb.btopen]
-
-class TestBTree_InMemory(TestBSDDB):
-    fname = None
-    openmethod = [bsddb.btopen]
-
-class TestBTree_InMemory_Truncate(TestBSDDB):
-    fname = None
-    openflag = 'n'
-    openmethod = [bsddb.btopen]
-
-class TestHashTable(TestBSDDB):
-    fname = support.TESTFN
-    openmethod = [bsddb.hashopen]
-
-class TestHashTable_InMemory(TestBSDDB):
-    fname = None
-    openmethod = [bsddb.hashopen]
-
-##         # (bsddb.rnopen,'Record Numbers'), 'put' for RECNO for bsddb 1.85
-##         #                                   appears broken... at least on
-##         #                                   Solaris Intel - rmasse 1/97
-
-def test_main(verbose=None):
-    support.run_unittest(
-        TestBTree,
-        TestHashTable,
-        TestBTree_InMemory,
-        TestHashTable_InMemory,
-        TestBTree_InMemory_Truncate,
-    )
-
-if __name__ == "__main__":
-    test_main(verbose=True)
diff --git a/Lib/test/test_bsddb3.py b/Lib/test/test_bsddb3.py
deleted file mode 100644 (file)
index 9079b0e..0000000
+++ /dev/null
@@ -1,82 +0,0 @@
-# Test driver for bsddb package.
-"""
-Run all test cases.
-"""
-import os
-import sys
-import tempfile
-import time
-import unittest
-from test.support import requires, verbose, run_unittest, unlink, rmtree
-
-# When running as a script instead of within the regrtest framework, skip the
-# 'requires' check, since it's obvious we want to run the tests.
-verbose = False
-if 'verbose' in sys.argv:
-    verbose = True
-    sys.argv.remove('verbose')
-
-if 'silent' in sys.argv:  # take care of old flag, just in case
-    verbose = False
-    sys.argv.remove('silent')
-
-
-class TimingCheck(unittest.TestCase):
-
-    """This class is not a real test.  Its purpose is to print a message
-    periodically when the test runs slowly.  This will prevent the buildbots
-    from timing out on slow machines."""
-
-    # How much time in seconds before printing a 'Still working' message.
-    # Since this is run at most once between each test module, use a smaller
-    # interval than other tests.
-    _PRINT_WORKING_MSG_INTERVAL = 4 * 60
-
-    # next_time is stored as a class attribute so that it survives across instances.
-    # This is necessary since a new instance will be created for each test.
-    next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL
-
-    def testCheckElapsedTime(self):
-        # Print still working message since these tests can be really slow.
-        now = time.time()
-        if self.next_time <= now:
-            TimingCheck.next_time = now + self._PRINT_WORKING_MSG_INTERVAL
-            sys.__stdout__.write('  test_bsddb3 still working, be patient...\n')
-            sys.__stdout__.flush()
-
-
-# For invocation through regrtest
-def test_main():
-    from bsddb import db
-    from bsddb.test import test_all
-
-    # This must be improved...
-    test_all.do_proxy_db_py3k(True)
-
-    test_all.set_test_path_prefix(os.path.join(tempfile.gettempdir(),
-                                 'z-test_bsddb3-%s' %
-                                 os.getpid()))
-    # Please leave this print in, having this show up in the buildbots
-    # makes diagnosing problems a lot easier.
-    # The decode is used to work around this:
-    # http://mail.python.org/pipermail/python-3000/2008-September/014709.html
-    print(db.DB_VERSION_STRING.decode("iso8859-1"), file=sys.stderr)
-    print('Test path prefix: ', test_all.get_test_path_prefix(), file=sys.stderr)
-    try:
-        run_unittest(test_all.suite(module_prefix='bsddb.test.',
-                                    timing_check=TimingCheck))
-    finally:
-        # The only reason to remove db_home is in case there is an old
-        # one lying around.  It might belong to a different user, so just
-        # ignore errors.  We should always make a unique name now.
-        try:
-            test_all.remove_test_path_directory()
-        except:
-            pass
-
-        # This must be improved...
-        test_all.do_proxy_db_py3k(False)
-
-
-if __name__ == '__main__':
-    test_main()
diff --git a/Misc/NEWS b/Misc/NEWS
index 111450d288bb585a09a0a17dd75e85859b3a509e..eca0a1706eb28aa080775ae2cb94366104629499 100644 (file)
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -73,6 +73,8 @@ C API
 Library
 -------
 
+- The bsddb module has been removed.
+
 - Issue #3719: platform.architecture() fails if there are spaces in the
   path to the Python binary.
 
diff --git a/Modules/_bsddb.c b/Modules/_bsddb.c
deleted file mode 100644 (file)
index 9324d76..0000000
+++ /dev/null
@@ -1,7522 +0,0 @@
-/*----------------------------------------------------------------------
-  Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA
-  and Andrew Kuchling. All rights reserved.
-
-  Redistribution and use in source and binary forms, with or without
-  modification, are permitted provided that the following conditions are
-  met:
-
-    o Redistributions of source code must retain the above copyright
-      notice, this list of conditions, and the disclaimer that follows.
-
-    o Redistributions in binary form must reproduce the above copyright
-      notice, this list of conditions, and the following disclaimer in
-      the documentation and/or other materials provided with the
-      distribution.
-
-    o Neither the name of Digital Creations nor the names of its
-      contributors may be used to endorse or promote products derived
-      from this software without specific prior written permission.
-
-  THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS
-  IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
-  TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
-  PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL DIGITAL
-  CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
-  OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-  ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
-  TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
-  USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
-  DAMAGE.
-------------------------------------------------------------------------*/
-
-
-/*
- * Handwritten code to wrap version 3.x of the Berkeley DB library,
- * written to replace a SWIG-generated file.  It has since been updated
- * to compile with Berkeley DB versions 3.2 through 4.2.
- *
- * This module was started by Andrew Kuchling to remove the dependency
- * on SWIG in a package by Gregory P. Smith who based his work on a
- * similar package by Robin Dunn <robin@alldunn.com> which wrapped
- * Berkeley DB 2.7.x.
- *
- * Development of this module then returned full circle back to Robin Dunn
- * who worked on behalf of Digital Creations to complete the wrapping of
- * the DB 3.x API and to build a solid unit test suite.  Robin has
- * since gone onto other projects (wxPython).
- *
- * Gregory P. Smith <greg@krypto.org> was once again the maintainer.
- *
- * Since January 2008, new maintainer is Jesus Cea <jcea@jcea.es>.
- * Jesus Cea licenses this code to PSF under a Contributor Agreement.
- *
- * Use the pybsddb-users@lists.sf.net mailing list for all questions.
- * Things can change faster than the header of this file is updated.  This
- * file is shared with the PyBSDDB project at SourceForge:
- *
- * http://pybsddb.sf.net
- *
- * This file should remain backward compatible with Python 2.1, but see PEP
- * 291 for the most current backward compatibility requirements:
- *
- * http://www.python.org/peps/pep-0291.html
- *
- * This module contains 6 types:
- *
- * DB           (Database)
- * DBCursor     (Database Cursor)
- * DBEnv        (database environment)
- * DBTxn        (An explicit database transaction)
- * DBLock       (A lock handle)
- * DBSequence   (Sequence)
- *
- */
-
-/* --------------------------------------------------------------------- */
-
-/*
- * Portions of this module, associated unit tests and build scripts are the
- * result of a contract with The Written Word (http://thewrittenword.com/)
- * Many thanks go out to them for causing me to raise the bar on quality and
- * functionality, resulting in a better bsddb3 package for all of us to use.
- *
- * --Robin
- */
-
-/* --------------------------------------------------------------------- */
-
-#include <stddef.h>   /* for offsetof() */
-#include <Python.h>
-
-#define COMPILING_BSDDB_C
-#include "bsddb.h"
-#undef COMPILING_BSDDB_C
-
-static char *rcs_id = "$Id$";
-
-/* --------------------------------------------------------------------- */
-/* Various macro definitions */
-
-#if (PY_VERSION_HEX < 0x02050000)
-typedef int Py_ssize_t;
-#endif
-
-#if (PY_VERSION_HEX < 0x02060000)  /* really: before python trunk r63675 */
-/* This code now uses PyBytes* API function names instead of PyString*.
- * These #defines map to their equivalent on earlier python versions.    */
-#define PyBytes_FromStringAndSize PyString_FromStringAndSize
-#define PyBytes_FromString PyString_FromString
-#define PyBytes_AsStringAndSize PyString_AsStringAndSize
-#define PyBytes_Check PyString_Check
-#define PyBytes_GET_SIZE PyString_GET_SIZE
-#define PyBytes_AS_STRING PyString_AS_STRING
-#endif
-
-#if (PY_VERSION_HEX >= 0x03000000)
-#define NUMBER_Check    PyLong_Check
-#define NUMBER_AsLong   PyLong_AsLong
-#define NUMBER_FromLong PyLong_FromLong
-#else
-#define NUMBER_Check    PyInt_Check
-#define NUMBER_AsLong   PyInt_AsLong
-#define NUMBER_FromLong PyInt_FromLong
-#endif
-
-#ifdef WITH_THREAD
-
-/* These are for when calling Python --> C */
-#define MYDB_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS;
-#define MYDB_END_ALLOW_THREADS Py_END_ALLOW_THREADS;
-
-/* For 2.3, use the PyGILState_ calls */
-#if (PY_VERSION_HEX >= 0x02030000)
-#define MYDB_USE_GILSTATE
-#endif
-
-/* and these are for calling C --> Python */
-#if defined(MYDB_USE_GILSTATE)
-#define MYDB_BEGIN_BLOCK_THREADS \
-               PyGILState_STATE __savestate = PyGILState_Ensure();
-#define MYDB_END_BLOCK_THREADS \
-               PyGILState_Release(__savestate);
-#else /* MYDB_USE_GILSTATE */
-/* Pre GILState API - do it the long old way */
-static PyInterpreterState* _db_interpreterState = NULL;
-#define MYDB_BEGIN_BLOCK_THREADS {                              \
-        PyThreadState* prevState;                               \
-        PyThreadState* newState;                                \
-        PyEval_AcquireLock();                                   \
-        newState  = PyThreadState_New(_db_interpreterState);    \
-        prevState = PyThreadState_Swap(newState);
-
-#define MYDB_END_BLOCK_THREADS                                  \
-        newState = PyThreadState_Swap(prevState);               \
-        PyThreadState_Clear(newState);                          \
-        PyEval_ReleaseLock();                                   \
-        PyThreadState_Delete(newState);                         \
-        }
-#endif /* MYDB_USE_GILSTATE */
-
-#else
-/* Compiled without threads - avoid all this cruft */
-#define MYDB_BEGIN_ALLOW_THREADS
-#define MYDB_END_ALLOW_THREADS
-#define MYDB_BEGIN_BLOCK_THREADS
-#define MYDB_END_BLOCK_THREADS
-
-#endif
-
-/* Should DB_INCOMPLETE be turned into a warning or an exception? */
-#define INCOMPLETE_IS_WARNING 1
-
-/* --------------------------------------------------------------------- */
-/* Exceptions */
-
-static PyObject* DBError;               /* Base class, all others derive from this */
-static PyObject* DBCursorClosedError;   /* raised when trying to use a closed cursor object */
-static PyObject* DBKeyEmptyError;       /* DB_KEYEMPTY: also derives from KeyError */
-static PyObject* DBKeyExistError;       /* DB_KEYEXIST */
-static PyObject* DBLockDeadlockError;   /* DB_LOCK_DEADLOCK */
-static PyObject* DBLockNotGrantedError; /* DB_LOCK_NOTGRANTED */
-static PyObject* DBNotFoundError;       /* DB_NOTFOUND: also derives from KeyError */
-static PyObject* DBOldVersionError;     /* DB_OLD_VERSION */
-static PyObject* DBRunRecoveryError;    /* DB_RUNRECOVERY */
-static PyObject* DBVerifyBadError;      /* DB_VERIFY_BAD */
-static PyObject* DBNoServerError;       /* DB_NOSERVER */
-static PyObject* DBNoServerHomeError;   /* DB_NOSERVER_HOME */
-static PyObject* DBNoServerIDError;     /* DB_NOSERVER_ID */
-static PyObject* DBPageNotFoundError;   /* DB_PAGE_NOTFOUND */
-static PyObject* DBSecondaryBadError;   /* DB_SECONDARY_BAD */
-
-#if !INCOMPLETE_IS_WARNING
-static PyObject* DBIncompleteError;     /* DB_INCOMPLETE */
-#endif
-
-static PyObject* DBInvalidArgError;     /* EINVAL */
-static PyObject* DBAccessError;         /* EACCES */
-static PyObject* DBNoSpaceError;        /* ENOSPC */
-static PyObject* DBNoMemoryError;       /* DB_BUFFER_SMALL (ENOMEM when < 4.3) */
-static PyObject* DBAgainError;          /* EAGAIN */
-static PyObject* DBBusyError;           /* EBUSY  */
-static PyObject* DBFileExistsError;     /* EEXIST */
-static PyObject* DBNoSuchFileError;     /* ENOENT */
-static PyObject* DBPermissionsError;    /* EPERM  */
-
-#if (DBVER >= 42)
-static PyObject* DBRepHandleDeadError;  /* DB_REP_HANDLE_DEAD */
-#endif
-
-static PyObject* DBRepUnavailError;     /* DB_REP_UNAVAIL */
-
-#if (DBVER < 43)
-#define        DB_BUFFER_SMALL         ENOMEM
-#endif
-
-
-/* --------------------------------------------------------------------- */
-/* Structure definitions */
-
-#if PYTHON_API_VERSION < 1010
-#error "Python 2.1 or later required"
-#endif
-
-
-/* Defaults for moduleFlags in DBEnvObject and DBObject. */
-#define DEFAULT_GET_RETURNS_NONE                1
-#define DEFAULT_CURSOR_SET_RETURNS_NONE         1   /* 0 in pybsddb < 4.2, python < 2.4 */
-
-
-/* See comment in Python 2.6 "object.h" */
-#ifndef staticforward
-#define staticforward static
-#endif
-#ifndef statichere
-#define statichere static
-#endif
-
-staticforward PyTypeObject DB_Type, DBCursor_Type, DBEnv_Type, DBTxn_Type,
-              DBLock_Type;
-#if (DBVER >= 43)
-staticforward PyTypeObject DBSequence_Type;
-#endif
-
-#ifndef Py_TYPE
-/* for compatibility with Python 2.5 and earlier */
-#define Py_TYPE(ob)              (((PyObject*)(ob))->ob_type)
-#endif
-
-#define DBObject_Check(v)           (Py_TYPE(v) == &DB_Type)
-#define DBCursorObject_Check(v)     (Py_TYPE(v) == &DBCursor_Type)
-#define DBEnvObject_Check(v)        (Py_TYPE(v) == &DBEnv_Type)
-#define DBTxnObject_Check(v)        (Py_TYPE(v) == &DBTxn_Type)
-#define DBLockObject_Check(v)       (Py_TYPE(v) == &DBLock_Type)
-#if (DBVER >= 43)
-#define DBSequenceObject_Check(v)   (Py_TYPE(v) == &DBSequence_Type)
-#endif
-
-#if (DBVER < 46)
-  #define _DBC_close(dbc)           dbc->c_close(dbc)
-  #define _DBC_count(dbc,a,b)       dbc->c_count(dbc,a,b)
-  #define _DBC_del(dbc,a)           dbc->c_del(dbc,a)
-  #define _DBC_dup(dbc,a,b)         dbc->c_dup(dbc,a,b)
-  #define _DBC_get(dbc,a,b,c)       dbc->c_get(dbc,a,b,c)
-  #define _DBC_pget(dbc,a,b,c,d)    dbc->c_pget(dbc,a,b,c,d)
-  #define _DBC_put(dbc,a,b,c)       dbc->c_put(dbc,a,b,c)
-#else
-  #define _DBC_close(dbc)           dbc->close(dbc)
-  #define _DBC_count(dbc,a,b)       dbc->count(dbc,a,b)
-  #define _DBC_del(dbc,a)           dbc->del(dbc,a)
-  #define _DBC_dup(dbc,a,b)         dbc->dup(dbc,a,b)
-  #define _DBC_get(dbc,a,b,c)       dbc->get(dbc,a,b,c)
-  #define _DBC_pget(dbc,a,b,c,d)    dbc->pget(dbc,a,b,c,d)
-  #define _DBC_put(dbc,a,b,c)       dbc->put(dbc,a,b,c)
-#endif
-
-
-/* --------------------------------------------------------------------- */
-/* Utility macros and functions */
-
-#define INSERT_IN_DOUBLE_LINKED_LIST(backlink,object)                   \
-    {                                                                   \
-        object->sibling_next=backlink;                                  \
-        object->sibling_prev_p=&(backlink);                             \
-        backlink=object;                                                \
-        if (object->sibling_next) {                                     \
-          object->sibling_next->sibling_prev_p=&(object->sibling_next); \
-        }                                                               \
-    }
-
-#define EXTRACT_FROM_DOUBLE_LINKED_LIST(object)                          \
-    {                                                                    \
-        if (object->sibling_next) {                                      \
-            object->sibling_next->sibling_prev_p=object->sibling_prev_p; \
-        }                                                                \
-        *(object->sibling_prev_p)=object->sibling_next;                  \
-    }
-
-#define EXTRACT_FROM_DOUBLE_LINKED_LIST_MAYBE_NULL(object)               \
-    {                                                                    \
-        if (object->sibling_next) {                                      \
-            object->sibling_next->sibling_prev_p=object->sibling_prev_p; \
-        }                                                                \
-        if (object->sibling_prev_p) {                                    \
-            *(object->sibling_prev_p)=object->sibling_next;              \
-        }                                                                \
-    }
-
-#define INSERT_IN_DOUBLE_LINKED_LIST_TXN(backlink,object)  \
-    {                                                      \
-        object->sibling_next_txn=backlink;                 \
-        object->sibling_prev_p_txn=&(backlink);            \
-        backlink=object;                                   \
-        if (object->sibling_next_txn) {                    \
-            object->sibling_next_txn->sibling_prev_p_txn=  \
-                &(object->sibling_next_txn);               \
-        }                                                  \
-    }
-
-#define EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(object)             \
-    {                                                           \
-        if (object->sibling_next_txn) {                         \
-            object->sibling_next_txn->sibling_prev_p_txn=       \
-                object->sibling_prev_p_txn;                     \
-        }                                                       \
-        *(object->sibling_prev_p_txn)=object->sibling_next_txn; \
-    }
-
-
-#define RETURN_IF_ERR()          \
-    if (makeDBError(err)) {      \
-        return NULL;             \
-    }
-
-#define RETURN_NONE()  Py_INCREF(Py_None); return Py_None;
-
-#define _CHECK_OBJECT_NOT_CLOSED(nonNull, pyErrObj, name) \
-    if ((nonNull) == NULL) {          \
-        PyObject *errTuple = NULL;    \
-        errTuple = Py_BuildValue("(is)", 0, #name " object has been closed"); \
-        if (errTuple) { \
-            PyErr_SetObject((pyErrObj), errTuple);  \
-            Py_DECREF(errTuple);          \
-        } \
-        return NULL;                  \
-    }
-
-#define CHECK_DB_NOT_CLOSED(dbobj) \
-        _CHECK_OBJECT_NOT_CLOSED(dbobj->db, DBError, DB)
-
-#define CHECK_ENV_NOT_CLOSED(env) \
-        _CHECK_OBJECT_NOT_CLOSED(env->db_env, DBError, DBEnv)
-
-#define CHECK_CURSOR_NOT_CLOSED(curs) \
-        _CHECK_OBJECT_NOT_CLOSED(curs->dbc, DBCursorClosedError, DBCursor)
-
-#if (DBVER >= 43)
-#define CHECK_SEQUENCE_NOT_CLOSED(curs) \
-        _CHECK_OBJECT_NOT_CLOSED(curs->sequence, DBError, DBSequence)
-#endif
-
-#define CHECK_DBFLAG(mydb, flag)    (((mydb)->flags & (flag)) || \
-                                     (((mydb)->myenvobj != NULL) && ((mydb)->myenvobj->flags & (flag))))
-
-#define CLEAR_DBT(dbt)              (memset(&(dbt), 0, sizeof(dbt)))
-
-#define FREE_DBT(dbt)               if ((dbt.flags & (DB_DBT_MALLOC|DB_DBT_REALLOC)) && \
-                                         dbt.data != NULL) { free(dbt.data); dbt.data = NULL; }
-
-
-static int makeDBError(int err);
-
-
-/* Return the access method type of the DBObject */
-static int _DB_get_type(DBObject* self)
-{
-    DBTYPE type;
-    int err;
-
-    err = self->db->get_type(self->db, &type);
-    if (makeDBError(err)) {
-        return -1;
-    }
-    return type;
-}
-
-
-/* Create a DBT structure (containing key and data values) from Python
-   strings.  Returns 1 on success, 0 on an error. */
-static int make_dbt(PyObject* obj, DBT* dbt)
-{
-    CLEAR_DBT(*dbt);
-    if (obj == Py_None) {
-        /* no need to do anything, the structure has already been zeroed */
-    }
-    else if (!PyArg_Parse(obj, "s#", &dbt->data, &dbt->size)) {
-        PyErr_SetString(PyExc_TypeError,
-#if (PY_VERSION_HEX < 0x03000000)
-                        "Data values must be of type string or None.");
-#else
-                        "Data values must be of type bytes or None.");
-#endif
-        return 0;
-    }
-    return 1;
-}
-
-
-/* Recno and Queue DBs can have integer keys.  This function figures out
-   what's been given, verifies that it's allowed, and then makes the DBT.
-
-   Caller MUST call FREE_DBT(key) when done. */
-static int
-make_key_dbt(DBObject* self, PyObject* keyobj, DBT* key, int* pflags)
-{
-    db_recno_t recno;
-    int type;
-
-    CLEAR_DBT(*key);
-    if (keyobj == Py_None) {
-        type = _DB_get_type(self);
-        if (type == -1)
-            return 0;
-        if (type == DB_RECNO || type == DB_QUEUE) {
-            PyErr_SetString(
-                PyExc_TypeError,
-                "None keys not allowed for Recno and Queue DB's");
-            return 0;
-        }
-        /* no need to do anything, the structure has already been zeroed */
-    }
-
-    else if (PyBytes_Check(keyobj)) {
-        /* verify access method type */
-        type = _DB_get_type(self);
-        if (type == -1)
-            return 0;
-        if (type == DB_RECNO || type == DB_QUEUE) {
-            PyErr_SetString(
-                PyExc_TypeError,
-#if (PY_VERSION_HEX < 0x03000000)
-                "String keys not allowed for Recno and Queue DB's");
-#else
-                "Bytes keys not allowed for Recno and Queue DB's");
-#endif
-            return 0;
-        }
-
-        /*
-         * NOTE(gps): I don't like doing a data copy here, it seems
-         * wasteful.  But without a clean way to tell FREE_DBT if it
-         * should free key->data or not we have to.  Other places in
-         * the code check for DB_THREAD and forcibly set DBT_MALLOC
-         * when we otherwise would leave flags 0 to indicate that.
-         */
-        key->data = malloc(PyBytes_GET_SIZE(keyobj));
-        if (key->data == NULL) {
-            PyErr_SetString(PyExc_MemoryError, "Key memory allocation failed");
-            return 0;
-        }
-        memcpy(key->data, PyBytes_AS_STRING(keyobj),
-               PyBytes_GET_SIZE(keyobj));
-        key->flags = DB_DBT_REALLOC;
-        key->size = PyBytes_GET_SIZE(keyobj);
-    }
-
-    else if (NUMBER_Check(keyobj)) {
-        /* verify access method type */
-        type = _DB_get_type(self);
-        if (type == -1)
-            return 0;
-        if (type == DB_BTREE && pflags != NULL) {
-            /* if BTREE then an Integer key is allowed with the
-             * DB_SET_RECNO flag */
-            *pflags |= DB_SET_RECNO;
-        }
-        else if (type != DB_RECNO && type != DB_QUEUE) {
-            PyErr_SetString(
-                PyExc_TypeError,
-                "Integer keys only allowed for Recno and Queue DB's");
-            return 0;
-        }
-
-        /* Make a key out of the requested recno, use allocated space so DB
-         * will be able to realloc room for the real key if needed. */
-        recno = NUMBER_AsLong(keyobj);
-        key->data = malloc(sizeof(db_recno_t));
-        if (key->data == NULL) {
-            PyErr_SetString(PyExc_MemoryError, "Key memory allocation failed");
-            return 0;
-        }
-        key->ulen = key->size = sizeof(db_recno_t);
-        memcpy(key->data, &recno, sizeof(db_recno_t));
-        key->flags = DB_DBT_REALLOC;
-    }
-    else {
-        PyErr_Format(PyExc_TypeError,
-#if (PY_VERSION_HEX < 0x03000000)
-                     "String or Integer object expected for key, %s found",
-#else
-                     "Bytes or Integer object expected for key, %s found",
-#endif
-                     Py_TYPE(keyobj)->tp_name);
-        return 0;
-    }
-
-    return 1;
-}
-
-
-/* Add partial record access to an existing DBT data struct.
-   If dlen and doff are set, then the DB_DBT_PARTIAL flag will be set
-   and the data storage/retrieval will be done using dlen and doff. */
-static int add_partial_dbt(DBT* d, int dlen, int doff) {
-    /* if neither were set we do nothing (-1 is the default value) */
-    if ((dlen == -1) && (doff == -1)) {
-        return 1;
-    }
-
-    if ((dlen < 0) || (doff < 0)) {
-        PyErr_SetString(PyExc_TypeError, "dlen and doff must both be >= 0");
-        return 0;
-    }
-
-    d->flags = d->flags | DB_DBT_PARTIAL;
-    d->dlen = (unsigned int) dlen;
-    d->doff = (unsigned int) doff;
-    return 1;
-}
-
-/* a safe strcpy() without the zeroing behaviour and semantics of strncpy. */
-/* TODO: make this use the native libc strlcpy() when available (BSD)      */
-unsigned int our_strlcpy(char* dest, const char* src, unsigned int n)
-{
-    unsigned int srclen, copylen;
-
-    srclen = strlen(src);
-    if (n <= 0)
-       return srclen;
-    copylen = (srclen > n-1) ? n-1 : srclen;
-    /* populate dest[0] thru dest[copylen-1] */
-    memcpy(dest, src, copylen);
-    /* guarantee null termination */
-    dest[copylen] = 0;
-
-    return srclen;
-}
-
-/* Callback used to save away more information about errors from the DB
- * library. */
-static char _db_errmsg[1024];
-#if (DBVER <= 42)
-static void _db_errorCallback(const char* prefix, char* msg)
-#else
-static void _db_errorCallback(const DB_ENV *db_env,
-       const char* prefix, const char* msg)
-#endif
-{
-    our_strlcpy(_db_errmsg, msg, sizeof(_db_errmsg));
-}
-
-
-/*
-** We need these functions because some results
-** are undefined if the pointer is NULL. Some others
-** give None instead of "".
-**
-** These functions are static and will be
-** -I hope- inlined.
-*/
-static const char *DummyString = "This string is a simple placeholder";
-static PyObject *Build_PyString(const char *p,int s)
-{
-  if (!p) {
-    p=DummyString;
-    assert(s==0);
-  }
-  return PyBytes_FromStringAndSize(p,s);
-}
-
-static PyObject *BuildValue_S(const void *p,int s)
-{
-  if (!p) {
-    p=DummyString;
-    assert(s==0);
-  }
-  return PyBytes_FromStringAndSize(p, s);
-}
-
-static PyObject *BuildValue_SS(const void *p1,int s1,const void *p2,int s2)
-{
-PyObject *a, *b, *r;
-
-  if (!p1) {
-    p1=DummyString;
-    assert(s1==0);
-  }
-  if (!p2) {
-    p2=DummyString;
-    assert(s2==0);
-  }
-
-  if (!(a = PyBytes_FromStringAndSize(p1, s1))) {
-      return NULL;
-  }
-  if (!(b = PyBytes_FromStringAndSize(p2, s2))) {
-      Py_DECREF(a);
-      return NULL;
-  }
-
-#if (PY_VERSION_HEX >= 0x02040000)
-  r = PyTuple_Pack(2, a, b) ;
-#else
-  r = Py_BuildValue("OO", a, b);
-#endif
-  Py_DECREF(a);
-  Py_DECREF(b);
-  return r;
-}
-
-static PyObject *BuildValue_IS(int i,const void *p,int s)
-{
-  PyObject *a, *r;
-
-  if (!p) {
-    p=DummyString;
-    assert(s==0);
-  }
-
-  if (!(a = PyBytes_FromStringAndSize(p, s))) {
-      return NULL;
-  }
-
-  r = Py_BuildValue("iO", i, a);
-  Py_DECREF(a);
-  return r;
-}
-
-static PyObject *BuildValue_LS(long l,const void *p,int s)
-{
-  PyObject *a, *r;
-
-  if (!p) {
-    p=DummyString;
-    assert(s==0);
-  }
-
-  if (!(a = PyBytes_FromStringAndSize(p, s))) {
-      return NULL;
-  }
-
-  r = Py_BuildValue("lO", l, a);
-  Py_DECREF(a);
-  return r;
-}
-
-
-
-/* make a nice exception object to raise for errors. */
-static int makeDBError(int err)
-{
-    char errTxt[2048];  /* really big, just in case... */
-    PyObject *errObj = NULL;
-    PyObject *errTuple = NULL;
-    int exceptionRaised = 0;
-    unsigned int bytes_left;
-
-    switch (err) {
-        case 0:                     /* successful, no error */      break;
-
-#if (DBVER < 41)
-        case DB_INCOMPLETE:
-#if INCOMPLETE_IS_WARNING
-            bytes_left = our_strlcpy(errTxt, db_strerror(err), sizeof(errTxt));
-            /* Ensure that bytes_left never goes negative */
-            if (_db_errmsg[0] && bytes_left < (sizeof(errTxt) - 4)) {
-                bytes_left = sizeof(errTxt) - bytes_left - 4 - 1;
-               assert(bytes_left >= 0);
-                strcat(errTxt, " -- ");
-                strncat(errTxt, _db_errmsg, bytes_left);
-            }
-            _db_errmsg[0] = 0;
-            exceptionRaised = PyErr_Warn(PyExc_RuntimeWarning, errTxt);
-
-#else  /* do an exception instead */
-        errObj = DBIncompleteError;
-#endif
-        break;
-#endif /* DBVER < 41 */
-
-        case DB_KEYEMPTY:           errObj = DBKeyEmptyError;       break;
-        case DB_KEYEXIST:           errObj = DBKeyExistError;       break;
-        case DB_LOCK_DEADLOCK:      errObj = DBLockDeadlockError;   break;
-        case DB_LOCK_NOTGRANTED:    errObj = DBLockNotGrantedError; break;
-        case DB_NOTFOUND:           errObj = DBNotFoundError;       break;
-        case DB_OLD_VERSION:        errObj = DBOldVersionError;     break;
-        case DB_RUNRECOVERY:        errObj = DBRunRecoveryError;    break;
-        case DB_VERIFY_BAD:         errObj = DBVerifyBadError;      break;
-        case DB_NOSERVER:           errObj = DBNoServerError;       break;
-        case DB_NOSERVER_HOME:      errObj = DBNoServerHomeError;   break;
-        case DB_NOSERVER_ID:        errObj = DBNoServerIDError;     break;
-        case DB_PAGE_NOTFOUND:      errObj = DBPageNotFoundError;   break;
-        case DB_SECONDARY_BAD:      errObj = DBSecondaryBadError;   break;
-        case DB_BUFFER_SMALL:       errObj = DBNoMemoryError;       break;
-
-#if (DBVER >= 43)
-       /* ENOMEM and DB_BUFFER_SMALL were one and the same until 4.3 */
-       case ENOMEM:  errObj = PyExc_MemoryError;   break;
-#endif
-        case EINVAL:  errObj = DBInvalidArgError;   break;
-        case EACCES:  errObj = DBAccessError;       break;
-        case ENOSPC:  errObj = DBNoSpaceError;      break;
-        case EAGAIN:  errObj = DBAgainError;        break;
-        case EBUSY :  errObj = DBBusyError;         break;
-        case EEXIST:  errObj = DBFileExistsError;   break;
-        case ENOENT:  errObj = DBNoSuchFileError;   break;
-        case EPERM :  errObj = DBPermissionsError;  break;
-
-#if (DBVER >= 42)
-        case DB_REP_HANDLE_DEAD : errObj = DBRepHandleDeadError; break;
-#endif
-
-        case DB_REP_UNAVAIL : errObj = DBRepUnavailError; break;
-
-        default:      errObj = DBError;             break;
-    }
-
-    if (errObj != NULL) {
-        bytes_left = our_strlcpy(errTxt, db_strerror(err), sizeof(errTxt));
-        /* Ensure that bytes_left never goes negative */
-        if (_db_errmsg[0] && bytes_left < (sizeof(errTxt) - 4)) {
-            bytes_left = sizeof(errTxt) - bytes_left - 4 - 1;
-            assert(bytes_left >= 0);
-            strcat(errTxt, " -- ");
-            strncat(errTxt, _db_errmsg, bytes_left);
-        }
-        _db_errmsg[0] = 0;
-
-        errTuple = Py_BuildValue("(is)", err, errTxt);
-        if (errTuple == NULL) {
-            Py_DECREF(errObj);
-            return !0;
-        }
-        PyErr_SetObject(errObj, errTuple);
-        Py_DECREF(errTuple);
-    }
-
-    return ((errObj != NULL) || exceptionRaised);
-}
-
-
-
-/* set a type exception */
-static void makeTypeError(char* expected, PyObject* found)
-{
-    PyErr_Format(PyExc_TypeError, "Expected %s argument, %s found.",
-                 expected, Py_TYPE(found)->tp_name);
-}
-
-
-/* verify that an obj is either None or a DBTxn, and set the txn pointer */
-static int checkTxnObj(PyObject* txnobj, DB_TXN** txn)
-{
-    if (txnobj == Py_None || txnobj == NULL) {
-        *txn = NULL;
-        return 1;
-    }
-    if (DBTxnObject_Check(txnobj)) {
-        *txn = ((DBTxnObject*)txnobj)->txn;
-        return 1;
-    }
-    else
-        makeTypeError("DBTxn", txnobj);
-    return 0;
-}
-
-
-/* Delete a key from a database
-  Returns 0 on success, -1 on an error.  */
-static int _DB_delete(DBObject* self, DB_TXN *txn, DBT *key, int flags)
-{
-    int err;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->del(self->db, txn, key, 0);
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        return -1;
-    }
-    self->haveStat = 0;
-    return 0;
-}
-
-
-/* Store a key into a database
-   Returns 0 on success, -1 on an error.  */
-static int _DB_put(DBObject* self, DB_TXN *txn, DBT *key, DBT *data, int flags)
-{
-    int err;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->put(self->db, txn, key, data, flags);
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        return -1;
-    }
-    self->haveStat = 0;
-    return 0;
-}
-
-/* Get a key/data pair from a cursor */
-static PyObject* _DBCursor_get(DBCursorObject* self, int extra_flags,
-                              PyObject *args, PyObject *kwargs, char *format)
-{
-    int err;
-    PyObject* retval = NULL;
-    DBT key, data;
-    int dlen = -1;
-    int doff = -1;
-    int flags = 0;
-    static char* kwnames[] = { "flags", "dlen", "doff", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, format, kwnames,
-                                    &flags, &dlen, &doff)) 
-      return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    flags |= extra_flags;
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-    if (!add_partial_dbt(&data, dlen, doff))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.getReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {  /* otherwise, success! */
-
-        /* if Recno or Queue, return the key as an Int */
-        switch (_DB_get_type(self->mydb)) {
-        case -1:
-            retval = NULL;
-            break;
-
-        case DB_RECNO:
-        case DB_QUEUE:
-            retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size);
-            break;
-        case DB_HASH:
-        case DB_BTREE:
-        default:
-            retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-            break;
-        }
-    }
-    return retval;
-}
-
-
-/* add an integer to a dictionary using the given name as a key */
-static void _addIntToDict(PyObject* dict, char *name, int value)
-{
-    PyObject* v = NUMBER_FromLong((long) value);
-    if (!v || PyDict_SetItemString(dict, name, v))
-        PyErr_Clear();
-
-    Py_XDECREF(v);
-}
-
-/* The same, when the value is a time_t */
-static void _addTimeTToDict(PyObject* dict, char *name, time_t value)
-{
-    PyObject* v;
-       /* if the value fits in regular int, use that. */
-#ifdef PY_LONG_LONG
-       if (sizeof(time_t) > sizeof(long))
-               v = PyLong_FromLongLong((PY_LONG_LONG) value);
-       else
-#endif
-               v = NUMBER_FromLong((long) value);
-    if (!v || PyDict_SetItemString(dict, name, v))
-        PyErr_Clear();
-
-    Py_XDECREF(v);
-}
-
-#if (DBVER >= 43)
-/* add an db_seq_t to a dictionary using the given name as a key */
-static void _addDb_seq_tToDict(PyObject* dict, char *name, db_seq_t value)
-{
-    PyObject* v = PyLong_FromLongLong(value);
-    if (!v || PyDict_SetItemString(dict, name, v))
-        PyErr_Clear();
-
-    Py_XDECREF(v);
-}
-#endif
-
-static void _addDB_lsnToDict(PyObject* dict, char *name, DB_LSN value)
-{
-    PyObject *v = Py_BuildValue("(ll)",value.file,value.offset);
-    if (!v || PyDict_SetItemString(dict, name, v))
-        PyErr_Clear();
-
-    Py_XDECREF(v);
-}
-
-/* --------------------------------------------------------------------- */
-/* Allocators and deallocators */
-
-static DBObject*
-newDBObject(DBEnvObject* arg, int flags)
-{
-    DBObject* self;
-    DB_ENV* db_env = NULL;
-    int err;
-
-    self = PyObject_New(DBObject, &DB_Type);
-    if (self == NULL)
-        return NULL;
-
-    self->haveStat = 0;
-    self->flags = 0;
-    self->setflags = 0;
-    self->myenvobj = NULL;
-    self->db = NULL;
-    self->children_cursors = NULL;
-#if (DBVER >=43)
-    self->children_sequences = NULL;
-#endif
-    self->associateCallback = NULL;
-    self->btCompareCallback = NULL;
-    self->primaryDBType = 0;
-    Py_INCREF(Py_None);
-    self->private_obj = Py_None;
-    self->in_weakreflist = NULL;
-
-    /* keep a reference to our python DBEnv object */
-    if (arg) {
-        Py_INCREF(arg);
-        self->myenvobj = arg;
-        db_env = arg->db_env;
-        INSERT_IN_DOUBLE_LINKED_LIST(self->myenvobj->children_dbs,self);
-    } else {
-      self->sibling_prev_p=NULL;
-      self->sibling_next=NULL;
-    }
-    self->txn=NULL;
-    self->sibling_prev_p_txn=NULL;
-    self->sibling_next_txn=NULL;
-
-    if (self->myenvobj)
-        self->moduleFlags = self->myenvobj->moduleFlags;
-    else {
-        self->moduleFlags.getReturnsNone = DEFAULT_GET_RETURNS_NONE;
-        self->moduleFlags.cursorSetReturnsNone = DEFAULT_CURSOR_SET_RETURNS_NONE;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = db_create(&self->db, db_env, flags);
-    if (self->db != NULL) {
-        self->db->set_errcall(self->db, _db_errorCallback);
-        self->db->app_private = (void*)self;
-    }
-    MYDB_END_ALLOW_THREADS;
-    /* TODO add a weakref(self) to the self->myenvobj->open_child_weakrefs
-     * list so that a DBEnv can refuse to close without aborting any open
-     * DBTxns and closing any open DBs first. */
-    if (makeDBError(err)) {
-        if (self->myenvobj) {
-            Py_DECREF(self->myenvobj);
-            self->myenvobj = NULL;
-        }
-        Py_DECREF(self);
-        self = NULL;
-    }
-    return self;
-}
-
-
-/* Forward declaration */
-static PyObject *DB_close_internal(DBObject* self, int flags);
-
-static void
-DB_dealloc(DBObject* self)
-{
-  PyObject *dummy;
-
-    if (self->db != NULL) {
-      dummy=DB_close_internal(self,0);
-      Py_XDECREF(dummy);
-    }
-    if (self->in_weakreflist != NULL) {
-        PyObject_ClearWeakRefs((PyObject *) self);
-    }
-    if (self->myenvobj) {
-        Py_DECREF(self->myenvobj);
-        self->myenvobj = NULL;
-    }
-    if (self->associateCallback != NULL) {
-        Py_DECREF(self->associateCallback);
-        self->associateCallback = NULL;
-    }
-    if (self->btCompareCallback != NULL) {
-        Py_DECREF(self->btCompareCallback);
-        self->btCompareCallback = NULL;
-    }
-    Py_DECREF(self->private_obj);
-    PyObject_Del(self);
-}
-
-static DBCursorObject*
-newDBCursorObject(DBC* dbc, DBTxnObject *txn, DBObject* db)
-{
-    DBCursorObject* self = PyObject_New(DBCursorObject, &DBCursor_Type);
-    if (self == NULL)
-        return NULL;
-
-    self->dbc = dbc;
-    self->mydb = db;
-
-    INSERT_IN_DOUBLE_LINKED_LIST(self->mydb->children_cursors,self);
-    if (txn && ((PyObject *)txn!=Py_None)) {
-           INSERT_IN_DOUBLE_LINKED_LIST_TXN(txn->children_cursors,self);
-           self->txn=txn;
-    } else {
-           self->txn=NULL;
-    }
-
-    self->in_weakreflist = NULL;
-    Py_INCREF(self->mydb);
-    return self;
-}
-
-
-/* Forward declaration */
-static PyObject *DBC_close_internal(DBCursorObject* self);
-
-static void
-DBCursor_dealloc(DBCursorObject* self)
-{
-    PyObject *dummy;
-
-    if (self->dbc != NULL) {
-      dummy=DBC_close_internal(self);
-      Py_XDECREF(dummy);
-    }
-    if (self->in_weakreflist != NULL) {
-        PyObject_ClearWeakRefs((PyObject *) self);
-    }
-    Py_DECREF(self->mydb);
-    PyObject_Del(self);
-}
-
-
-static DBEnvObject*
-newDBEnvObject(int flags)
-{
-    int err;
-    DBEnvObject* self = PyObject_New(DBEnvObject, &DBEnv_Type);
-    if (self == NULL)
-        return NULL;
-
-    self->closed = 1;
-    self->flags = flags;
-    self->moduleFlags.getReturnsNone = DEFAULT_GET_RETURNS_NONE;
-    self->moduleFlags.cursorSetReturnsNone = DEFAULT_CURSOR_SET_RETURNS_NONE;
-    self->children_dbs = NULL;
-    self->children_txns = NULL;
-    Py_INCREF(Py_None);
-    self->private_obj = Py_None;
-    Py_INCREF(Py_None);
-    self->rep_transport = Py_None;
-    self->in_weakreflist = NULL;
-    self->event_notifyCallback = NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = db_env_create(&self->db_env, flags);
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        Py_DECREF(self);
-        self = NULL;
-    }
-    else {
-        self->db_env->set_errcall(self->db_env, _db_errorCallback);
-        self->db_env->app_private = self;
-    }
-    return self;
-}
-
-/* Forward declaration */
-static PyObject *DBEnv_close_internal(DBEnvObject* self, int flags);
-
-static void
-DBEnv_dealloc(DBEnvObject* self)
-{
-  PyObject *dummy;
-
-    if (self->db_env) {
-      dummy=DBEnv_close_internal(self,0);
-      Py_XDECREF(dummy);
-    }
-
-    Py_XDECREF(self->event_notifyCallback);
-    self->event_notifyCallback = NULL;
-
-    if (self->in_weakreflist != NULL) {
-        PyObject_ClearWeakRefs((PyObject *) self);
-    }
-    Py_DECREF(self->private_obj);
-    Py_DECREF(self->rep_transport);
-    PyObject_Del(self);
-}
-
-
-static DBTxnObject*
-newDBTxnObject(DBEnvObject* myenv, DBTxnObject *parent, DB_TXN *txn, int flags)
-{
-    int err;
-    DB_TXN *parent_txn = NULL;
-
-    DBTxnObject* self = PyObject_New(DBTxnObject, &DBTxn_Type);
-    if (self == NULL)
-        return NULL;
-
-    self->in_weakreflist = NULL;
-    self->children_txns = NULL;
-    self->children_dbs = NULL;
-    self->children_cursors = NULL;
-    self->children_sequences = NULL;
-    self->flag_prepare = 0;
-    self->parent_txn = NULL;
-    self->env = NULL;
-
-    if (parent && ((PyObject *)parent!=Py_None)) {
-        parent_txn = parent->txn;
-    }
-
-    if (txn) {
-        self->txn = txn;
-    } else {
-        MYDB_BEGIN_ALLOW_THREADS;
-        err = myenv->db_env->txn_begin(myenv->db_env, parent_txn, &(self->txn), flags);
-        MYDB_END_ALLOW_THREADS;
-
-        if (makeDBError(err)) {
-            Py_DECREF(self);
-            return NULL;
-        }
-    }
-
-    /* Can't use 'parent' because could be 'parent==Py_None' */
-    if (parent_txn) {
-        self->parent_txn = parent;
-        Py_INCREF(parent);
-        self->env = NULL;
-        INSERT_IN_DOUBLE_LINKED_LIST(parent->children_txns, self);
-    } else {
-        self->parent_txn = NULL;
-        Py_INCREF(myenv);
-        self->env = myenv;
-        INSERT_IN_DOUBLE_LINKED_LIST(myenv->children_txns, self);
-    }
-
-    return self;
-}
-
-/* Forward declaration */
-static PyObject *
-DBTxn_abort_discard_internal(DBTxnObject* self, int discard);
-
-static void
-DBTxn_dealloc(DBTxnObject* self)
-{
-  PyObject *dummy;
-
-    if (self->txn) {
-        int flag_prepare = self->flag_prepare;
-        dummy=DBTxn_abort_discard_internal(self,0);
-        Py_XDECREF(dummy);
-        if (!flag_prepare) {
-            PyErr_Warn(PyExc_RuntimeWarning,
-              "DBTxn aborted in destructor.  No prior commit() or abort().");
-        }
-    }
-
-    if (self->in_weakreflist != NULL) {
-        PyObject_ClearWeakRefs((PyObject *) self);
-    }
-
-    if (self->env) {
-        Py_DECREF(self->env);
-    } else {
-        Py_DECREF(self->parent_txn);
-    }
-    PyObject_Del(self);
-}
-
-
-static DBLockObject*
-newDBLockObject(DBEnvObject* myenv, u_int32_t locker, DBT* obj,
-                db_lockmode_t lock_mode, int flags)
-{
-    int err;
-    DBLockObject* self = PyObject_New(DBLockObject, &DBLock_Type);
-    if (self == NULL)
-        return NULL;
-    self->in_weakreflist = NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = myenv->db_env->lock_get(myenv->db_env, locker, flags, obj, lock_mode,
-                                  &self->lock);
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        Py_DECREF(self);
-        self = NULL;
-    }
-
-    return self;
-}
-
-
-static void
-DBLock_dealloc(DBLockObject* self)
-{
-    if (self->in_weakreflist != NULL) {
-        PyObject_ClearWeakRefs((PyObject *) self);
-    }
-    /* TODO: is this lock held? should we release it? */
-
-    PyObject_Del(self);
-}
-
-
-#if (DBVER >= 43)
-static DBSequenceObject*
-newDBSequenceObject(DBObject* mydb,  int flags)
-{
-    int err;
-    DBSequenceObject* self = PyObject_New(DBSequenceObject, &DBSequence_Type);
-    if (self == NULL)
-        return NULL;
-    Py_INCREF(mydb);
-    self->mydb = mydb;
-
-    INSERT_IN_DOUBLE_LINKED_LIST(self->mydb->children_sequences,self);
-    self->txn = NULL;
-
-    self->in_weakreflist = NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = db_sequence_create(&self->sequence, self->mydb->db, flags);
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        Py_DECREF(self);
-        self = NULL;
-    }
-
-    return self;
-}
-
-/* Forward declaration */
-static PyObject *
-DBSequence_close_internal(DBSequenceObject* self, int flags, int do_not_close);
-
-static void
-DBSequence_dealloc(DBSequenceObject* self)
-{
-    PyObject *dummy;
-
-    if (self->sequence != NULL) {
-        dummy=DBSequence_close_internal(self,0,0);
-        Py_XDECREF(dummy);
-    }
-
-    if (self->in_weakreflist != NULL) {
-        PyObject_ClearWeakRefs((PyObject *) self);
-    }
-
-    Py_DECREF(self->mydb);
-    PyObject_Del(self);
-}
-#endif
-
-/* --------------------------------------------------------------------- */
-/* DB methods */
-
-static PyObject*
-DB_append(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    PyObject* txnobj = NULL;
-    PyObject* dataobj;
-    db_recno_t recno;
-    DBT key, data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "data", "txn", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:append", kwnames,
-                                     &dataobj, &txnobj))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    /* make a dummy key out of a recno */
-    recno = 0;
-    CLEAR_DBT(key);
-    key.data = &recno;
-    key.size = sizeof(recno);
-    key.ulen = key.size;
-    key.flags = DB_DBT_USERMEM;
-
-    if (!make_dbt(dataobj, &data)) return NULL;
-    if (!checkTxnObj(txnobj, &txn)) return NULL;
-
-    if (-1 == _DB_put(self, txn, &key, &data, DB_APPEND))
-        return NULL;
-
-    return NUMBER_FromLong(recno);
-}
-
-
-static int
-_db_associateCallback(DB* db, const DBT* priKey, const DBT* priData,
-                      DBT* secKey)
-{
-    int       retval = DB_DONOTINDEX;
-    DBObject* secondaryDB = (DBObject*)db->app_private;
-    PyObject* callback = secondaryDB->associateCallback;
-    int       type = secondaryDB->primaryDBType;
-    PyObject* args;
-    PyObject* result = NULL;
-
-
-    if (callback != NULL) {
-        MYDB_BEGIN_BLOCK_THREADS;
-
-        if (type == DB_RECNO || type == DB_QUEUE)
-            args = BuildValue_LS(*((db_recno_t*)priKey->data), priData->data, priData->size);
-        else
-            args = BuildValue_SS(priKey->data, priKey->size, priData->data, priData->size);
-        if (args != NULL) {
-            result = PyEval_CallObject(callback, args);
-        }
-        if (args == NULL || result == NULL) {
-            PyErr_Print();
-        }
-        else if (result == Py_None) {
-            retval = DB_DONOTINDEX;
-        }
-        else if (NUMBER_Check(result)) {
-            retval = NUMBER_AsLong(result);
-        }
-        else if (PyBytes_Check(result)) {
-            char* data;
-            Py_ssize_t size;
-
-            CLEAR_DBT(*secKey);
-            PyBytes_AsStringAndSize(result, &data, &size);
-            secKey->flags = DB_DBT_APPMALLOC;   /* DB will free */
-            secKey->data = malloc(size);        /* TODO, check this */
-            if (secKey->data) {
-                memcpy(secKey->data, data, size);
-                secKey->size = size;
-                retval = 0;
-            }
-            else {
-                PyErr_SetString(PyExc_MemoryError,
-                                "malloc failed in _db_associateCallback");
-                PyErr_Print();
-            }
-        }
-        else {
-            PyErr_SetString(
-               PyExc_TypeError,
-               "DB associate callback should return DB_DONOTINDEX or string.");
-            PyErr_Print();
-        }
-
-        Py_XDECREF(args);
-        Py_XDECREF(result);
-
-        MYDB_END_BLOCK_THREADS;
-    }
-    return retval;
-}
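/* A minimal sketch of the plain Berkeley DB associate-callback contract that
 * _db_associateCallback above adapts Python callables to: return DB_DONOTINDEX
 * to skip a record, or fill in the secondary key DBT and return 0, setting
 * DB_DBT_APPMALLOC when the key memory was malloc'd by the application.  The
 * record layout assumed here (secondary key is everything before a '|') and
 * the function name are illustrative only, not part of the module.
 */
static int
example_first_field_callback(DB *secondary, const DBT *pri_key,
                             const DBT *pri_data, DBT *sec_key)
{
    const char *rec = (const char *)pri_data->data;
    const char *sep = memchr(rec, '|', pri_data->size);

    if (sep == NULL || sep == rec)
        return DB_DONOTINDEX;              /* nothing to index for this record */

    memset(sec_key, 0, sizeof(DBT));
    sec_key->data = malloc((size_t)(sep - rec));
    if (sec_key->data == NULL)
        return ENOMEM;                     /* any other error aborts the put */
    memcpy(sec_key->data, rec, (size_t)(sep - rec));
    sec_key->size = (u_int32_t)(sep - rec);
    sec_key->flags = DB_DBT_APPMALLOC;     /* the library frees it when done */
    return 0;
}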
-
-
-static PyObject*
-DB_associate(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    DBObject* secondaryDB;
-    PyObject* callback;
-#if (DBVER >= 41)
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = {"secondaryDB", "callback", "flags", "txn",
-                                    NULL};
-#else
-    static char* kwnames[] = {"secondaryDB", "callback", "flags", NULL};
-#endif
-
-#if (DBVER >= 41)
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iO:associate", kwnames,
-                                     &secondaryDB, &callback, &flags,
-                                     &txnobj)) {
-#else
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|i:associate", kwnames,
-                                     &secondaryDB, &callback, &flags)) {
-#endif
-        return NULL;
-    }
-
-#if (DBVER >= 41)
-    if (!checkTxnObj(txnobj, &txn)) return NULL;
-#endif
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!DBObject_Check(secondaryDB)) {
-        makeTypeError("DB", (PyObject*)secondaryDB);
-        return NULL;
-    }
-    CHECK_DB_NOT_CLOSED(secondaryDB);
-    if (callback == Py_None) {
-        callback = NULL;
-    }
-    else if (!PyCallable_Check(callback)) {
-        makeTypeError("Callable", callback);
-        return NULL;
-    }
-
-    /* Save a reference to the callback in the secondary DB. */
-    Py_XDECREF(secondaryDB->associateCallback);
-    Py_XINCREF(callback);
-    secondaryDB->associateCallback = callback;
-    secondaryDB->primaryDBType = _DB_get_type(self);
-
-    /* PyEval_InitThreads is called here due to a quirk in python 1.5
-     * - 2.2.1 (at least) according to Russell Williamson <merel@wt.net>:
-     * The global interpreter lock is not initialized until the first
-     * thread is created using thread.start_new_thread() or fork() is
-     * called.  That would cause the ALLOW_THREADS here to segfault due
-     * to a null pointer reference if no threads or child processes
-     * have been created.  This works around that and is a no-op if
-     * threads have already been initialized.
-     *  (see pybsddb-users mailing list post on 2002-08-07)
-     */
-#ifdef WITH_THREAD
-    PyEval_InitThreads();
-#endif
-    MYDB_BEGIN_ALLOW_THREADS;
-#if (DBVER >= 41)
-    err = self->db->associate(self->db,
-                             txn,
-                              secondaryDB->db,
-                              _db_associateCallback,
-                              flags);
-#else
-    err = self->db->associate(self->db,
-                              secondaryDB->db,
-                              _db_associateCallback,
-                              flags);
-#endif
-    MYDB_END_ALLOW_THREADS;
-
-    if (err) {
-        Py_XDECREF(secondaryDB->associateCallback);
-        secondaryDB->associateCallback = NULL;
-        secondaryDB->primaryDBType = 0;
-    }
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_close_internal(DBObject* self, int flags)
-{
-    PyObject *dummy;
-    int err;
-
-    if (self->db != NULL) {
-        /* Can be NULL if db is not in an environment */
-        EXTRACT_FROM_DOUBLE_LINKED_LIST_MAYBE_NULL(self);
-
-        if (self->txn) {
-            EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(self);
-            self->txn=NULL;
-        }
-
-        while(self->children_cursors) {
-          dummy=DBC_close_internal(self->children_cursors);
-          Py_XDECREF(dummy);
-        }
-
-#if (DBVER >= 43)
-        while(self->children_sequences) {
-            dummy=DBSequence_close_internal(self->children_sequences,0,0);
-            Py_XDECREF(dummy);
-        }
-#endif
-
-        MYDB_BEGIN_ALLOW_THREADS;
-        err = self->db->close(self->db, flags);
-        MYDB_END_ALLOW_THREADS;
-        self->db = NULL;
-        RETURN_IF_ERR();
-    }
-    RETURN_NONE();
-}
-
-static PyObject*
-DB_close(DBObject* self, PyObject* args)
-{
-    int flags=0;
-    if (!PyArg_ParseTuple(args,"|i:close", &flags))
-        return NULL;
-    return DB_close_internal(self,flags);
-}
-
-
-static PyObject*
-_DB_consume(DBObject* self, PyObject* args, PyObject* kwargs, int consume_flag)
-{
-    int err, flags=0, type;
-    PyObject* txnobj = NULL;
-    PyObject* retval = NULL;
-    DBT key, data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "txn", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:consume", kwnames,
-                                     &txnobj, &flags))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    type = _DB_get_type(self);
-    if (type == -1)
-        return NULL;
-    if (type != DB_QUEUE) {
-        PyErr_SetString(PyExc_TypeError,
-                        "Consume methods only allowed for Queue DB's");
-        return NULL;
-    }
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-    if (CHECK_DBFLAG(self, DB_THREAD)) {
-        /* Tell Berkeley DB to malloc the return value (thread safe) */
-        data.flags = DB_DBT_MALLOC;
-        key.flags = DB_DBT_MALLOC;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get(self->db, txn, &key, &data, flags|consume_flag);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->moduleFlags.getReturnsNone) {
-        err = 0;
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (!err) {
-        retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-        FREE_DBT(key);
-        FREE_DBT(data);
-    }
-
-    RETURN_IF_ERR();
-    return retval;
-}
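/* A minimal sketch of the queue-consume idiom wrapped by _DB_consume above:
 * DB->get with DB_CONSUME atomically reads and deletes the head record of a
 * DB_QUEUE database, and DB_CONSUME_WAIT additionally blocks until a record
 * is available.  'queue_db' is an assumed, already-open DB_QUEUE handle and
 * the caller-supplied buffer is illustrative; records longer than the buffer
 * would come back as DB_BUFFER_SMALL.
 */
static int
example_consume_head(DB *queue_db, char *buf, u_int32_t buflen,
                     db_recno_t *recno_out)
{
    DBT key, data;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = recno_out;                  /* DB fills in the record number */
    key.ulen = sizeof(*recno_out);
    key.flags = DB_DBT_USERMEM;
    data.data = buf;                       /* record bytes land here */
    data.ulen = buflen;
    data.flags = DB_DBT_USERMEM;

    return queue_db->get(queue_db, NULL, &key, &data, DB_CONSUME);
}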
-
-static PyObject*
-DB_consume(DBObject* self, PyObject* args, PyObject* kwargs, int consume_flag)
-{
-    return _DB_consume(self, args, kwargs, DB_CONSUME);
-}
-
-static PyObject*
-DB_consume_wait(DBObject* self, PyObject* args, PyObject* kwargs,
-                int consume_flag)
-{
-    return _DB_consume(self, args, kwargs, DB_CONSUME_WAIT);
-}
-
-
-static PyObject*
-DB_cursor(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    DBC* dbc;
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "txn", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:cursor", kwnames,
-                                     &txnobj, &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->cursor(self->db, txn, &dbc, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return (PyObject*) newDBCursorObject(dbc, (DBTxnObject *)txnobj, self);
-}
-
-
-static PyObject*
-DB_delete(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    PyObject* txnobj = NULL;
-    int flags = 0;
-    PyObject* keyobj;
-    DBT key;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "key", "txn", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:delete", kwnames,
-                                     &keyobj, &txnobj, &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, NULL))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    if (-1 == _DB_delete(self, txn, &key, 0)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    FREE_DBT(key);
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_fd(DBObject* self)
-{
-    int err, the_fd;
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->fd(self->db, &the_fd);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(the_fd);
-}
-
-
-static PyObject*
-DB_get(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    PyObject* txnobj = NULL;
-    PyObject* keyobj;
-    PyObject* dfltobj = NULL;
-    PyObject* retval = NULL;
-    int dlen = -1;
-    int doff = -1;
-    DBT key, data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = {"key", "default", "txn", "flags", "dlen",
-                                    "doff", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OOiii:get", kwnames,
-                                     &keyobj, &dfltobj, &txnobj, &flags, &dlen,
-                                     &doff))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, &flags))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    CLEAR_DBT(data);
-    if (CHECK_DBFLAG(self, DB_THREAD)) {
-        /* Tell Berkeley DB to malloc the return value (thread safe) */
-        data.flags = DB_DBT_MALLOC;
-    }
-    if (!add_partial_dbt(&data, dlen, doff)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get(self->db, txn, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && (dfltobj != NULL)) {
-        err = 0;
-        Py_INCREF(dfltobj);
-        retval = dfltobj;
-    }
-    else if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-            && self->moduleFlags.getReturnsNone) {
-        err = 0;
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (!err) {
-        if (flags & DB_SET_RECNO) /* return both key and data */
-            retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-        else /* return just the data */
-            retval = Build_PyString(data.data, data.size);
-        FREE_DBT(data);
-    }
-    FREE_DBT(key);
-
-    RETURN_IF_ERR();
-    return retval;
-}
-
-static PyObject*
-DB_pget(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    PyObject* txnobj = NULL;
-    PyObject* keyobj;
-    PyObject* dfltobj = NULL;
-    PyObject* retval = NULL;
-    int dlen = -1;
-    int doff = -1;
-    DBT key, pkey, data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = {"key", "default", "txn", "flags", "dlen",
-                                    "doff", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OOiii:pget", kwnames,
-                                     &keyobj, &dfltobj, &txnobj, &flags, &dlen,
-                                     &doff))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, &flags))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    CLEAR_DBT(data);
-    if (CHECK_DBFLAG(self, DB_THREAD)) {
-        /* Tell Berkeley DB to malloc the return value (thread safe) */
-        data.flags = DB_DBT_MALLOC;
-    }
-    if (!add_partial_dbt(&data, dlen, doff)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    CLEAR_DBT(pkey);
-    pkey.flags = DB_DBT_MALLOC;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->pget(self->db, txn, &key, &pkey, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && (dfltobj != NULL)) {
-        err = 0;
-        Py_INCREF(dfltobj);
-        retval = dfltobj;
-    }
-    else if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-            && self->moduleFlags.getReturnsNone) {
-        err = 0;
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (!err) {
-        PyObject *pkeyObj;
-        PyObject *dataObj;
-        dataObj = Build_PyString(data.data, data.size);
-
-        if (self->primaryDBType == DB_RECNO ||
-            self->primaryDBType == DB_QUEUE)
-            pkeyObj = NUMBER_FromLong(*(int *)pkey.data);
-        else
-            pkeyObj = Build_PyString(pkey.data, pkey.size);
-
-        if (flags & DB_SET_RECNO) /* return key , pkey and data */
-        {
-            PyObject *keyObj;
-            int type = _DB_get_type(self);
-            if (type == DB_RECNO || type == DB_QUEUE)
-                keyObj = NUMBER_FromLong(*(int *)key.data);
-            else
-                keyObj = Build_PyString(key.data, key.size);
-#if (PY_VERSION_HEX >= 0x02040000)
-            retval = PyTuple_Pack(3, keyObj, pkeyObj, dataObj);
-#else
-            retval = Py_BuildValue("OOO", keyObj, pkeyObj, dataObj);
-#endif
-            Py_DECREF(keyObj);
-        }
-        else /* return just the pkey and data */
-        {
-#if (PY_VERSION_HEX >= 0x02040000)
-            retval = PyTuple_Pack(2, pkeyObj, dataObj);
-#else
-            retval = Py_BuildValue("OO", pkeyObj, dataObj);
-#endif
-        }
-        Py_DECREF(dataObj);
-        Py_DECREF(pkeyObj);
-        FREE_DBT(pkey);
-        FREE_DBT(data);
-    }
-    FREE_DBT(key);
-
-    RETURN_IF_ERR();
-    return retval;
-}
-
-
-/* Return size of entry */
-static PyObject*
-DB_get_size(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    PyObject* txnobj = NULL;
-    PyObject* keyobj;
-    PyObject* retval = NULL;
-    DBT key, data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "key", "txn", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:get_size", kwnames,
-                                     &keyobj, &txnobj))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, &flags))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-    CLEAR_DBT(data);
-
-    /* We don't allocate any memory, forcing a DB_BUFFER_SMALL error and
-       thus getting the record size. */
-    data.flags = DB_DBT_USERMEM;
-    data.ulen = 0;
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get(self->db, txn, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-    if (err == DB_BUFFER_SMALL) {
-        retval = NUMBER_FromLong((long)data.size);
-        err = 0;
-    }
-
-    FREE_DBT(key);
-    FREE_DBT(data);
-    RETURN_IF_ERR();
-    return retval;
-}
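/* A minimal sketch of the DB_BUFFER_SMALL idiom that DB_get_size relies on:
 * handing DB->get a zero-length DB_DBT_USERMEM buffer makes the call fail
 * fast while still reporting the stored record's length in data.size
 * (older releases report ENOMEM instead of DB_BUFFER_SMALL).  'db' and the
 * already-initialized key DBT are assumed to come from elsewhere.
 */
static int
example_record_size(DB *db, DBT *key, u_int32_t *size_out)
{
    DBT data;
    int err;

    memset(&data, 0, sizeof(data));
    data.flags = DB_DBT_USERMEM;           /* we provide the buffer...        */
    data.ulen = 0;                         /* ...and it is deliberately empty */

    err = db->get(db, NULL, key, &data, 0);
    if (err == DB_BUFFER_SMALL || err == 0) {
        *size_out = data.size;             /* size is filled in regardless    */
        return 0;
    }
    return err;                            /* DB_NOTFOUND or a real error     */
}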
-
-
-static PyObject*
-DB_get_both(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    PyObject* txnobj = NULL;
-    PyObject* keyobj;
-    PyObject* dataobj;
-    PyObject* retval = NULL;
-    DBT key, data;
-    void *orig_data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "key", "data", "txn", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Oi:get_both", kwnames,
-                                     &keyobj, &dataobj, &txnobj, &flags))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, NULL))
-        return NULL;
-    if ( !make_dbt(dataobj, &data) ||
-         !checkTxnObj(txnobj, &txn) )
-    {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    flags |= DB_GET_BOTH;
-    orig_data = data.data;
-
-    if (CHECK_DBFLAG(self, DB_THREAD)) {
-        /* Tell Berkeley DB to malloc the return value (thread safe) */
-        /* XXX(nnorwitz): At least 4.4.20 and 4.5.20 require this flag. */
-        data.flags = DB_DBT_MALLOC;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get(self->db, txn, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->moduleFlags.getReturnsNone) {
-        err = 0;
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (!err) {
-        /* XXX(nnorwitz): can we do: retval = dataobj; Py_INCREF(retval); */
-        retval = Build_PyString(data.data, data.size);
-
-        /* Even though the flags require DB_DBT_MALLOC, data is not always
-           allocated.  4.4: allocated, 4.5: *not* allocated. :-( */
-        if (data.data != orig_data)
-            FREE_DBT(data);
-    }
-
-    FREE_DBT(key);
-    RETURN_IF_ERR();
-    return retval;
-}
-
-
-static PyObject*
-DB_get_byteswapped(DBObject* self)
-{
-    int err = 0;
-    int retval = -1;
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get_byteswapped(self->db, &retval);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(retval);
-}
-
-
-static PyObject*
-DB_get_type(DBObject* self)
-{
-    int type;
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    type = _DB_get_type(self);
-    if (type == -1)
-        return NULL;
-    return NUMBER_FromLong(type);
-}
-
-
-static PyObject*
-DB_join(DBObject* self, PyObject* args)
-{
-    int err, flags=0;
-    int length, x;
-    PyObject* cursorsObj;
-    DBC** cursors;
-    DBC*  dbc;
-
-    if (!PyArg_ParseTuple(args,"O|i:join", &cursorsObj, &flags))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    if (!PySequence_Check(cursorsObj)) {
-        PyErr_SetString(PyExc_TypeError,
-                        "Sequence of DBCursor objects expected");
-        return NULL;
-    }
-
-    length = PyObject_Length(cursorsObj);
-    cursors = malloc((length+1) * sizeof(DBC*));
-    if (!cursors) {
-       PyErr_NoMemory();
-       return NULL;
-    }
-
-    cursors[length] = NULL;
-    for (x=0; x<length; x++) {
-        PyObject* item = PySequence_GetItem(cursorsObj, x);
-        if (item == NULL) {
-            free(cursors);
-            return NULL;
-        }
-        if (!DBCursorObject_Check(item)) {
-            PyErr_SetString(PyExc_TypeError,
-                            "Sequence of DBCursor objects expected");
-            free(cursors);
-            return NULL;
-        }
-        cursors[x] = ((DBCursorObject*)item)->dbc;
-        Py_DECREF(item);
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->join(self->db, cursors, &dbc, flags);
-    MYDB_END_ALLOW_THREADS;
-    free(cursors);
-    RETURN_IF_ERR();
-
-    /* FIXME: this is a buggy interface.  The returned cursor
-       contains internal references to the passed in cursors
-       but does not hold python references to them or prevent
-       them from being closed prematurely.  This can cause
-       python to crash when things are done in the wrong order. */
-    return (PyObject*) newDBCursorObject(dbc, NULL, self);
-}
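/* A minimal sketch of the raw DB->join call that DB_join wraps, showing why
 * the wrapper allocates length+1 cursor slots: Berkeley DB expects the
 * cursor array to be NULL-terminated.  'primary', 'c1' and 'c2' are assumed,
 * already-open handles/cursors positioned on secondary indices.
 */
static int
example_two_way_join(DB *primary, DBC *c1, DBC *c2, DBC **join_cursor)
{
    DBC *curslist[3];

    curslist[0] = c1;
    curslist[1] = c2;
    curslist[2] = NULL;                    /* terminator required by DB->join */

    return primary->join(primary, curslist, join_cursor, 0);
}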
-
-
-static PyObject*
-DB_key_range(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    PyObject* txnobj = NULL;
-    PyObject* keyobj;
-    DBT key;
-    DB_TXN *txn = NULL;
-    DB_KEY_RANGE range;
-    static char* kwnames[] = { "key", "txn", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:key_range", kwnames,
-                                     &keyobj, &txnobj, &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_dbt(keyobj, &key))
-        /* BTree only, don't need to allow for an int key */
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->key_range(self->db, txn, &key, &range, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    RETURN_IF_ERR();
-    return Py_BuildValue("ddd", range.less, range.equal, range.greater);
-}
-
-
-static PyObject*
-DB_open(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, type = DB_UNKNOWN, flags=0, mode=0660;
-    char* filename = NULL;
-    char* dbname = NULL;
-#if (DBVER >= 41)
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-    /* with dbname */
-    static char* kwnames[] = {
-        "filename", "dbname", "dbtype", "flags", "mode", "txn", NULL};
-    /* without dbname */
-    static char* kwnames_basic[] = {
-        "filename", "dbtype", "flags", "mode", "txn", NULL};
-#else
-    /* with dbname */
-    static char* kwnames[] = {
-        "filename", "dbname", "dbtype", "flags", "mode", NULL};
-    /* without dbname */
-    static char* kwnames_basic[] = {
-        "filename", "dbtype", "flags", "mode", NULL};
-#endif
-
-#if (DBVER >= 41)
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|ziiiO:open", kwnames,
-                                    &filename, &dbname, &type, &flags, &mode,
-                                     &txnobj))
-#else
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|ziii:open", kwnames,
-                                    &filename, &dbname, &type, &flags,
-                                     &mode))
-#endif
-    {
-       PyErr_Clear();
-       type = DB_UNKNOWN; flags = 0; mode = 0660;
-       filename = NULL; dbname = NULL;
-#if (DBVER >= 41)
-       if (!PyArg_ParseTupleAndKeywords(args, kwargs,"z|iiiO:open",
-                                         kwnames_basic,
-                                        &filename, &type, &flags, &mode,
-                                         &txnobj))
-           return NULL;
-#else
-       if (!PyArg_ParseTupleAndKeywords(args, kwargs,"z|iii:open",
-                                         kwnames_basic,
-                                        &filename, &type, &flags, &mode))
-           return NULL;
-#endif
-    }
-
-#if (DBVER >= 41)
-    if (!checkTxnObj(txnobj, &txn)) return NULL;
-#endif
-
-    if (NULL == self->db) {
-        PyObject *t = Py_BuildValue("(is)", 0,
-                                "Cannot call open() twice for DB object");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return NULL;
-    }
-
-#if (DBVER >= 41)
-    if (txn) {  /* Can't use 'txnobj' because could be 'txnobj==Py_None' */
-        INSERT_IN_DOUBLE_LINKED_LIST_TXN(((DBTxnObject *)txnobj)->children_dbs,self);
-        self->txn=(DBTxnObject *)txnobj;
-    } else {
-        self->txn=NULL;
-    }
-#else
-    self->txn=NULL;
-#endif
-
-    MYDB_BEGIN_ALLOW_THREADS;
-#if (DBVER >= 41)
-    err = self->db->open(self->db, txn, filename, dbname, type, flags, mode);
-#else
-    err = self->db->open(self->db, filename, dbname, type, flags, mode);
-#endif
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        PyObject *dummy;
-
-        dummy=DB_close_internal(self,0);
-        Py_XDECREF(dummy);
-        return NULL;
-    }
-
-#if (DBVER >= 42)
-    self->db->get_flags(self->db, &self->setflags);
-#endif
-
-    self->flags = flags;
-
-    RETURN_NONE();
-}
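/* A minimal sketch of the two DB->open signatures that the #if (DBVER >= 41)
 * blocks above switch between: Berkeley DB 4.1 added the enclosing
 * transaction argument.  The handle, file name, and flags here are
 * illustrative only.
 */
static int
example_open_btree(DB *db)
{
#if (DBVER >= 41)
    return db->open(db, NULL /*txn*/, "example.db", NULL, DB_BTREE,
                    DB_CREATE, 0660);
#else
    return db->open(db, "example.db", NULL, DB_BTREE, DB_CREATE, 0660);
#endif
}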
-
-
-static PyObject*
-DB_put(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int flags=0;
-    PyObject* txnobj = NULL;
-    int dlen = -1;
-    int doff = -1;
-    PyObject* keyobj, *dataobj, *retval;
-    DBT key, data;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "key", "data", "txn", "flags", "dlen",
-                                     "doff", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Oiii:put", kwnames,
-                         &keyobj, &dataobj, &txnobj, &flags, &dlen, &doff))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, NULL))
-        return NULL;
-    if ( !make_dbt(dataobj, &data) ||
-         !add_partial_dbt(&data, dlen, doff) ||
-         !checkTxnObj(txnobj, &txn) )
-    {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    if (-1 == _DB_put(self, txn, &key, &data, flags)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    if (flags & DB_APPEND)
-        retval = NUMBER_FromLong(*((db_recno_t*)key.data));
-    else {
-        retval = Py_None;
-        Py_INCREF(retval);
-    }
-    FREE_DBT(key);
-    return retval;
-}
-
-
-
-static PyObject*
-DB_remove(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    char* filename;
-    char* database = NULL;
-    int err, flags=0;
-    static char* kwnames[] = { "filename", "dbname", "flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|zi:remove", kwnames,
-                                     &filename, &database, &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    EXTRACT_FROM_DOUBLE_LINKED_LIST_MAYBE_NULL(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->remove(self->db, filename, database, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    self->db = NULL;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-
-static PyObject*
-DB_rename(DBObject* self, PyObject* args)
-{
-    char* filename;
-    char* database;
-    char* newname;
-    int err, flags=0;
-
-    if (!PyArg_ParseTuple(args, "sss|i:rename", &filename, &database, &newname,
-                          &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->rename(self->db, filename, database, newname, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_get_private(DBObject* self)
-{
-    /* We can give out the private field even if db is closed */
-    Py_INCREF(self->private_obj);
-    return self->private_obj;
-}
-
-static PyObject*
-DB_set_private(DBObject* self, PyObject* private_obj)
-{
-    /* We can set the private field even if db is closed */
-    Py_DECREF(self->private_obj);
-    Py_INCREF(private_obj);
-    self->private_obj = private_obj;
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_bt_minkey(DBObject* self, PyObject* args)
-{
-    int err, minkey;
-
-    if (!PyArg_ParseTuple(args,"i:set_bt_minkey", &minkey ))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_bt_minkey(self->db, minkey);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static int
-_default_cmp(const DBT *leftKey,
-            const DBT *rightKey)
-{
-  int res;
-  int lsize = leftKey->size, rsize = rightKey->size;
-
-  res = memcmp(leftKey->data, rightKey->data,
-              lsize < rsize ? lsize : rsize);
-
-  if (res == 0) {
-      if (lsize < rsize) {
-         res = -1;
-      }
-      else if (lsize > rsize) {
-         res = 1;
-      }
-  }
-  return res;
-}
-
-static int
-_db_compareCallback(DB* db,
-                   const DBT *leftKey,
-                   const DBT *rightKey)
-{
-    int res = 0;
-    PyObject *args;
-    PyObject *result = NULL;
-    DBObject *self = (DBObject *)db->app_private;
-
-    if (self == NULL || self->btCompareCallback == NULL) {
-       MYDB_BEGIN_BLOCK_THREADS;
-       PyErr_SetString(PyExc_TypeError,
-                       (self == 0
-                        ? "DB_bt_compare db is NULL."
-                        : "DB_bt_compare callback is NULL."));
-       /* we're in a callback within the DB code, we can't raise */
-       PyErr_Print();
-       res = _default_cmp(leftKey, rightKey);
-       MYDB_END_BLOCK_THREADS;
-    } else {
-       MYDB_BEGIN_BLOCK_THREADS;
-
-       args = BuildValue_SS(leftKey->data, leftKey->size, rightKey->data, rightKey->size);
-       if (args != NULL) {
-               /* XXX(twouters) I highly doubt this INCREF is correct */
-               Py_INCREF(self);
-               result = PyEval_CallObject(self->btCompareCallback, args);
-       }
-       if (args == NULL || result == NULL) {
-           /* we're in a callback within the DB code, we can't raise */
-           PyErr_Print();
-           res = _default_cmp(leftKey, rightKey);
-       } else if (NUMBER_Check(result)) {
-           res = NUMBER_AsLong(result);
-       } else {
-           PyErr_SetString(PyExc_TypeError,
-                           "DB_bt_compare callback MUST return an int.");
-           /* we're in a callback within the DB code, we can't raise */
-           PyErr_Print();
-           res = _default_cmp(leftKey, rightKey);
-       }
-
-       Py_XDECREF(args);
-       Py_XDECREF(result);
-
-       MYDB_END_BLOCK_THREADS;
-    }
-    return res;
-}
-
-static PyObject*
-DB_set_bt_compare(DBObject* self, PyObject* comparator)
-{
-    int err;
-    PyObject *tuple, *result;
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    if (!PyCallable_Check(comparator)) {
-       makeTypeError("Callable", comparator);
-       return NULL;
-    }
-
-    /*
-     * Perform a test call of the comparator function with two empty
-     * string objects here.  Verify that it returns an int (0) and
-     * raise an error if not.
-     */
-    tuple = Py_BuildValue("(ss)", "", "");
-    result = PyEval_CallObject(comparator, tuple);
-    Py_DECREF(tuple);
-    if (result == NULL)
-        return NULL;
-    if (!NUMBER_Check(result)) {
-        Py_DECREF(result);
-        PyErr_SetString(PyExc_TypeError,
-                        "callback MUST return an int");
-        return NULL;
-    } else if (NUMBER_AsLong(result) != 0) {
-        Py_DECREF(result);
-        PyErr_SetString(PyExc_TypeError,
-                        "callback failed to return 0 on two empty strings");
-        return NULL;
-    }
-    Py_DECREF(result);
-
-    /* We don't accept multiple set_bt_compare operations, in order to
-     * simplify the code. This would have no real use, as one cannot
-     * change the function once the db is opened anyway */
-    if (self->btCompareCallback != NULL) {
-       PyErr_SetString(PyExc_RuntimeError, "set_bt_compare() cannot be called more than once");
-       return NULL;
-    }
-
-    Py_INCREF(comparator);
-    self->btCompareCallback = comparator;
-
-    /* This is to work around a problem with un-initialized threads (see
-       the comment in DB_associate) */
-#ifdef WITH_THREAD
-    PyEval_InitThreads();
-#endif
-
-    err = self->db->set_bt_compare(self->db, _db_compareCallback);
-
-    if (err) {
-       /* restore the old state in case of error */
-       Py_DECREF(comparator);
-       self->btCompareCallback = NULL;
-    }
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
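/* A minimal sketch of a native bt_compare function with the same signature
 * that _db_compareCallback is registered under via DB->set_bt_compare
 * (the three-argument form used by the Berkeley DB 3.x/4.x releases this
 * module targets).  It must impose a total order and return <0, 0 or >0;
 * the fixed-width unsigned-int key layout assumed here is illustrative only.
 */
static int
example_uint_key_compare(DB *db, const DBT *a, const DBT *b)
{
    u_int32_t ka, kb;

    (void)db;                              /* unused in this simple example */
    memcpy(&ka, a->data, sizeof(ka));      /* keys may be unaligned: copy out */
    memcpy(&kb, b->data, sizeof(kb));
    return (ka < kb) ? -1 : (ka > kb) ? 1 : 0;
}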
-
-
-static PyObject*
-DB_set_cachesize(DBObject* self, PyObject* args)
-{
-    int err;
-    int gbytes = 0, bytes = 0, ncache = 0;
-
-    if (!PyArg_ParseTuple(args,"ii|i:set_cachesize",
-                          &gbytes,&bytes,&ncache))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_cachesize(self->db, gbytes, bytes, ncache);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_flags(DBObject* self, PyObject* args)
-{
-    int err, flags;
-
-    if (!PyArg_ParseTuple(args,"i:set_flags", &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_flags(self->db, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    self->setflags |= flags;
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_h_ffactor(DBObject* self, PyObject* args)
-{
-    int err, ffactor;
-
-    if (!PyArg_ParseTuple(args,"i:set_h_ffactor", &ffactor))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_h_ffactor(self->db, ffactor);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_h_nelem(DBObject* self, PyObject* args)
-{
-    int err, nelem;
-
-    if (!PyArg_ParseTuple(args,"i:set_h_nelem", &nelem))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_h_nelem(self->db, nelem);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_lorder(DBObject* self, PyObject* args)
-{
-    int err, lorder;
-
-    if (!PyArg_ParseTuple(args,"i:set_lorder", &lorder))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_lorder(self->db, lorder);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_pagesize(DBObject* self, PyObject* args)
-{
-    int err, pagesize;
-
-    if (!PyArg_ParseTuple(args,"i:set_pagesize", &pagesize))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_pagesize(self->db, pagesize);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_re_delim(DBObject* self, PyObject* args)
-{
-    int err;
-    char delim;
-
-    if (!PyArg_ParseTuple(args,"b:set_re_delim", &delim)) {
-        PyErr_Clear();
-        if (!PyArg_ParseTuple(args,"c:set_re_delim", &delim))
-            return NULL;
-    }
-
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_re_delim(self->db, delim);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DB_set_re_len(DBObject* self, PyObject* args)
-{
-    int err, len;
-
-    if (!PyArg_ParseTuple(args,"i:set_re_len", &len))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_re_len(self->db, len);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_re_pad(DBObject* self, PyObject* args)
-{
-    int err;
-    char pad;
-
-    if (!PyArg_ParseTuple(args,"b:set_re_pad", &pad)) {
-        PyErr_Clear();
-        if (!PyArg_ParseTuple(args,"c:set_re_pad", &pad))
-            return NULL;
-    }
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_re_pad(self->db, pad);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_re_source(DBObject* self, PyObject* args)
-{
-    int err;
-    char *re_source;
-
-    if (!PyArg_ParseTuple(args,"s:set_re_source", &re_source))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_re_source(self->db, re_source);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_q_extentsize(DBObject* self, PyObject* args)
-{
-    int err;
-    int extentsize;
-
-    if (!PyArg_ParseTuple(args,"i:set_q_extentsize", &extentsize))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_q_extentsize(self->db, extentsize);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DB_stat(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags = 0, type;
-    void* sp;
-    PyObject* d;
-#if (DBVER >= 43)
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "flags", "txn", NULL };
-#else
-    static char* kwnames[] = { "flags", NULL };
-#endif
-
-#if (DBVER >= 43)
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|iO:stat", kwnames,
-                                     &flags, &txnobj))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-#else
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat", kwnames, &flags))
-        return NULL;
-#endif
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-#if (DBVER >= 43)
-    err = self->db->stat(self->db, txn, &sp, flags);
-#else
-    err = self->db->stat(self->db, &sp, flags);
-#endif
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    self->haveStat = 1;
-
-    /* Turn the stat structure into a dictionary */
-    type = _DB_get_type(self);
-    if ((type == -1) || ((d = PyDict_New()) == NULL)) {
-        free(sp);
-        return NULL;
-    }
-
-#define MAKE_HASH_ENTRY(name)  _addIntToDict(d, #name, ((DB_HASH_STAT*)sp)->hash_##name)
-#define MAKE_BT_ENTRY(name)    _addIntToDict(d, #name, ((DB_BTREE_STAT*)sp)->bt_##name)
-#define MAKE_QUEUE_ENTRY(name) _addIntToDict(d, #name, ((DB_QUEUE_STAT*)sp)->qs_##name)
-
-    switch (type) {
-    case DB_HASH:
-        MAKE_HASH_ENTRY(magic);
-        MAKE_HASH_ENTRY(version);
-        MAKE_HASH_ENTRY(nkeys);
-        MAKE_HASH_ENTRY(ndata);
-#if (DBVER >= 46)
-        MAKE_HASH_ENTRY(pagecnt);
-#endif
-        MAKE_HASH_ENTRY(pagesize);
-#if (DBVER < 41)
-        MAKE_HASH_ENTRY(nelem);
-#endif
-        MAKE_HASH_ENTRY(ffactor);
-        MAKE_HASH_ENTRY(buckets);
-        MAKE_HASH_ENTRY(free);
-        MAKE_HASH_ENTRY(bfree);
-        MAKE_HASH_ENTRY(bigpages);
-        MAKE_HASH_ENTRY(big_bfree);
-        MAKE_HASH_ENTRY(overflows);
-        MAKE_HASH_ENTRY(ovfl_free);
-        MAKE_HASH_ENTRY(dup);
-        MAKE_HASH_ENTRY(dup_free);
-        break;
-
-    case DB_BTREE:
-    case DB_RECNO:
-        MAKE_BT_ENTRY(magic);
-        MAKE_BT_ENTRY(version);
-        MAKE_BT_ENTRY(nkeys);
-        MAKE_BT_ENTRY(ndata);
-#if (DBVER >= 46)
-        MAKE_BT_ENTRY(pagecnt);
-#endif
-        MAKE_BT_ENTRY(pagesize);
-        MAKE_BT_ENTRY(minkey);
-        MAKE_BT_ENTRY(re_len);
-        MAKE_BT_ENTRY(re_pad);
-        MAKE_BT_ENTRY(levels);
-        MAKE_BT_ENTRY(int_pg);
-        MAKE_BT_ENTRY(leaf_pg);
-        MAKE_BT_ENTRY(dup_pg);
-        MAKE_BT_ENTRY(over_pg);
-#if (DBVER >= 43)
-        MAKE_BT_ENTRY(empty_pg);
-#endif
-        MAKE_BT_ENTRY(free);
-        MAKE_BT_ENTRY(int_pgfree);
-        MAKE_BT_ENTRY(leaf_pgfree);
-        MAKE_BT_ENTRY(dup_pgfree);
-        MAKE_BT_ENTRY(over_pgfree);
-        break;
-
-    case DB_QUEUE:
-        MAKE_QUEUE_ENTRY(magic);
-        MAKE_QUEUE_ENTRY(version);
-        MAKE_QUEUE_ENTRY(nkeys);
-        MAKE_QUEUE_ENTRY(ndata);
-        MAKE_QUEUE_ENTRY(pagesize);
-#if (DBVER >= 41)
-        MAKE_QUEUE_ENTRY(extentsize);
-#endif
-        MAKE_QUEUE_ENTRY(pages);
-        MAKE_QUEUE_ENTRY(re_len);
-        MAKE_QUEUE_ENTRY(re_pad);
-        MAKE_QUEUE_ENTRY(pgfree);
-#if (DBVER == 31)
-        MAKE_QUEUE_ENTRY(start);
-#endif
-        MAKE_QUEUE_ENTRY(first_recno);
-        MAKE_QUEUE_ENTRY(cur_recno);
-        break;
-
-    default:
-        PyErr_SetString(PyExc_TypeError, "Unknown DB type, unable to stat");
-        Py_DECREF(d);
-        d = NULL;
-    }
-
-#undef MAKE_HASH_ENTRY
-#undef MAKE_BT_ENTRY
-#undef MAKE_QUEUE_ENTRY
-
-    free(sp);
-    return d;
-}
-
-static PyObject*
-DB_sync(DBObject* self, PyObject* args)
-{
-    int err;
-    int flags = 0;
-
-    if (!PyArg_ParseTuple(args,"|i:sync", &flags ))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->sync(self->db, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_truncate(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    u_int32_t count=0;
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "txn", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:truncate", kwnames,
-                                     &txnobj, &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->truncate(self->db, txn, &count, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(count);
-}
-
-
-static PyObject*
-DB_upgrade(DBObject* self, PyObject* args)
-{
-    int err, flags=0;
-    char *filename;
-
-    if (!PyArg_ParseTuple(args,"s|i:upgrade", &filename, &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->upgrade(self->db, filename, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_verify(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags=0;
-    char* fileName;
-    char* dbName=NULL;
-    char* outFileName=NULL;
-    FILE* outFile=NULL;
-    static char* kwnames[] = { "filename", "dbname", "outfile", "flags",
-                                     NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|zzi:verify", kwnames,
-                                     &fileName, &dbName, &outFileName, &flags))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (outFileName)
-        outFile = fopen(outFileName, "w");
-       /* XXX(nnorwitz): it should probably be an exception if outFile
-          can't be opened. */
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->verify(self->db, fileName, dbName, outFile, flags);
-    MYDB_END_ALLOW_THREADS;
-    if (outFile)
-        fclose(outFile);
-
-    {  /* DB.verify acts as a DB handle destructor (like close) */
-        PyObject *error;
-
-        error=DB_close_internal(self,0);
-        if (error) {
-          return error;
-        }
-     }
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DB_set_get_returns_none(DBObject* self, PyObject* args)
-{
-    int flags=0;
-    int oldValue=0;
-
-    if (!PyArg_ParseTuple(args,"i:set_get_returns_none", &flags))
-        return NULL;
-    CHECK_DB_NOT_CLOSED(self);
-
-    if (self->moduleFlags.getReturnsNone)
-        ++oldValue;
-    if (self->moduleFlags.cursorSetReturnsNone)
-        ++oldValue;
-    self->moduleFlags.getReturnsNone = (flags >= 1);
-    self->moduleFlags.cursorSetReturnsNone = (flags >= 2);
-    return NUMBER_FromLong(oldValue);
-}
-
-#if (DBVER >= 41)
-static PyObject*
-DB_set_encrypt(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    u_int32_t flags=0;
-    char *passwd = NULL;
-    static char* kwnames[] = { "passwd", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|i:set_encrypt", kwnames,
-               &passwd, &flags)) {
-       return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->set_encrypt(self->db, passwd, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif /* DBVER >= 41 */
-
-
-/*-------------------------------------------------------------- */
-/* Mapping and Dictionary-like access routines */
-
-Py_ssize_t DB_length(PyObject* _self)
-{
-    int err;
-    Py_ssize_t size = 0;
-    int flags = 0;
-    void* sp;
-    DBObject* self = (DBObject*)_self;
-
-    if (self->db == NULL) {
-        PyObject *t = Py_BuildValue("(is)", 0, "DB object has been closed");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return -1;
-    }
-
-    if (self->haveStat) {  /* Has the stat function been called recently?  If
-                              so, we can use the cached value. */
-        flags = DB_FAST_STAT;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-redo_stat_for_length:
-#if (DBVER >= 43)
-    err = self->db->stat(self->db, /*txnid*/ NULL, &sp, flags);
-#else
-    err = self->db->stat(self->db, &sp, flags);
-#endif
-
-    /* All the stat structures have matching fields up to the ndata field,
-       so we can use any of them for the type cast.  Only look at the result
-       if the stat call actually succeeded. */
-    size = err ? 0 : ((DB_BTREE_STAT*)sp)->bt_ndata;
-
-    /* A size of 0 could mean that Berkeley DB no longer had the stat values cached.
-     * redo a full stat to make sure.
-     *   Fixes SF python bug 1493322, pybsddb bug 1184012
-     */
-    if (size == 0 && (flags & DB_FAST_STAT)) {
-        flags = 0;
-        if (!err)
-            free(sp);
-        goto redo_stat_for_length;
-    }
-
-    MYDB_END_ALLOW_THREADS;
-
-    if (err)
-        return -1;
-
-    self->haveStat = 1;
-
-    free(sp);
-    return size;
-}
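/* A minimal sketch of the fast-stat-with-fallback idiom used by DB_length
 * above: DB_FAST_STAT returns cached counts cheaply but may report zero
 * records when Berkeley DB no longer has the numbers cached, so a full stat
 * is redone in that case.  'db' is an assumed, already-open btree handle and
 * only the DBVER >= 43 stat signature is shown.
 */
static long
example_record_count(DB *db)
{
    DB_BTREE_STAT *sp;
    long count;
    int err;

    err = db->stat(db, NULL, &sp, DB_FAST_STAT);   /* cheap, possibly stale */
    if (err)
        return -1;
    count = sp->bt_ndata;
    free(sp);

    if (count == 0) {                              /* cache may just be cold */
        err = db->stat(db, NULL, &sp, 0);          /* full, exact count */
        if (err)
            return -1;
        count = sp->bt_ndata;
        free(sp);
    }
    return count;
}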
-
-
-PyObject* DB_subscript(DBObject* self, PyObject* keyobj)
-{
-    int err;
-    PyObject* retval;
-    DBT key;
-    DBT data;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, NULL))
-        return NULL;
-
-    CLEAR_DBT(data);
-    if (CHECK_DBFLAG(self, DB_THREAD)) {
-        /* Tell Berkeley DB to malloc the return value (thread safe) */
-        data.flags = DB_DBT_MALLOC;
-    }
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get(self->db, NULL, &key, &data, 0);
-    MYDB_END_ALLOW_THREADS;
-    if (err == DB_NOTFOUND || err == DB_KEYEMPTY) {
-        PyErr_SetObject(PyExc_KeyError, keyobj);
-        retval = NULL;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        retval = Build_PyString(data.data, data.size);
-        FREE_DBT(data);
-    }
-
-    FREE_DBT(key);
-    return retval;
-}
-
-
-static int
-DB_ass_sub(DBObject* self, PyObject* keyobj, PyObject* dataobj)
-{
-    DBT key, data;
-    int retval;
-    int flags = 0;
-
-    if (self->db == NULL) {
-        PyObject *t = Py_BuildValue("(is)", 0, "DB object has been closed");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return -1;
-    }
-
-    if (!make_key_dbt(self, keyobj, &key, NULL))
-        return -1;
-
-    if (dataobj != NULL) {
-        if (!make_dbt(dataobj, &data))
-            retval =  -1;
-        else {
-            if (self->setflags & (DB_DUP|DB_DUPSORT))
-                /* dictionaries shouldn't have duplicate keys */
-                flags = DB_NOOVERWRITE;
-            retval = _DB_put(self, NULL, &key, &data, flags);
-
-            if ((retval == -1) &&  (self->setflags & (DB_DUP|DB_DUPSORT))) {
-                /* try deleting any old record that matches and then PUT it
-                 * again... */
-                _DB_delete(self, NULL, &key, 0);
-                PyErr_Clear();
-                retval = _DB_put(self, NULL, &key, &data, flags);
-            }
-        }
-    }
-    else {
-        /* dataobj == NULL, so delete the key */
-        retval = _DB_delete(self, NULL, &key, 0);
-    }
-    FREE_DBT(key);
-    return retval;
-}
-
-
-static PyObject*
-DB_has_key(DBObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    PyObject* keyobj;
-    DBT key, data;
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = {"key","txn", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:has_key", kwnames,
-                &keyobj, &txnobj))
-        return NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    if (!make_key_dbt(self, keyobj, &key, NULL))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    /* Using DB_DBT_USERMEM with no buffer causes DB_BUFFER_SMALL to be returned
-       when the db has the key: there is a record, but nowhere to copy its data.
-       This saves having to deal with data we won't be using.
-     */
-    CLEAR_DBT(data);
-    data.flags = DB_DBT_USERMEM;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->get(self->db, txn, &key, &data, 0);
-    MYDB_END_ALLOW_THREADS;
-    FREE_DBT(key);
-
-    if (err == DB_BUFFER_SMALL || err == 0) {
-        return NUMBER_FromLong(1);
-    } else if (err == DB_NOTFOUND || err == DB_KEYEMPTY) {
-        return NUMBER_FromLong(0);
-    }
-
-    makeDBError(err);
-    return NULL;
-}
-
-
-#define _KEYS_LIST      1
-#define _VALUES_LIST    2
-#define _ITEMS_LIST     3
-
-static PyObject*
-_DB_make_list(DBObject* self, DB_TXN* txn, int type)
-{
-    int err, dbtype;
-    DBT key;
-    DBT data;
-    DBC *cursor;
-    PyObject* list;
-    PyObject* item = NULL;
-
-    CHECK_DB_NOT_CLOSED(self);
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-
-    dbtype = _DB_get_type(self);
-    if (dbtype == -1)
-        return NULL;
-
-    list = PyList_New(0);
-    if (list == NULL)
-        return NULL;
-
-    /* get a cursor */
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db->cursor(self->db, txn, &cursor, 0);
-    MYDB_END_ALLOW_THREADS;
-    if (makeDBError(err)) {
-        Py_DECREF(list);
-        return NULL;
-    }
-
-    while (1) { /* use the cursor to traverse the DB, collecting items */
-        MYDB_BEGIN_ALLOW_THREADS;
-        err = _DBC_get(cursor, &key, &data, DB_NEXT);
-        MYDB_END_ALLOW_THREADS;
-
-        if (err) {
-            /* for any error, break out of the loop */
-            break;
-        }
-
-        switch (type) {
-        case _KEYS_LIST:
-            switch(dbtype) {
-            case DB_BTREE:
-            case DB_HASH:
-            default:
-                item = Build_PyString(key.data, key.size);
-                break;
-            case DB_RECNO:
-            case DB_QUEUE:
-                item = NUMBER_FromLong(*((db_recno_t*)key.data));
-                break;
-            }
-            break;
-
-        case _VALUES_LIST:
-            item = Build_PyString(data.data, data.size);
-            break;
-
-        case _ITEMS_LIST:
-            switch(dbtype) {
-            case DB_BTREE:
-            case DB_HASH:
-            default:
-                item = BuildValue_SS(key.data, key.size, data.data, data.size);
-                break;
-            case DB_RECNO:
-            case DB_QUEUE:
-                item = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size);
-                break;
-            }
-            break;
-        default:
-            PyErr_Format(PyExc_ValueError, "Unknown key type 0x%x", type);
-            item = NULL;
-            break;
-        }
-        if (item == NULL) {
-            Py_DECREF(list);
-            list = NULL;
-            goto done;
-        }
-        if (PyList_Append(list, item)) {
-            Py_DECREF(list);
-            Py_DECREF(item);
-            list = NULL;
-            goto done;
-        }
-        Py_DECREF(item);
-    }
-
-    /* DB_NOTFOUND || DB_KEYEMPTY is okay; it means we got to the end */
-    if (err != DB_NOTFOUND && err != DB_KEYEMPTY && makeDBError(err)) {
-        Py_DECREF(list);
-        list = NULL;
-    }
-
- done:
-    MYDB_BEGIN_ALLOW_THREADS;
-    _DBC_close(cursor);
-    MYDB_END_ALLOW_THREADS;
-    return list;
-}
-
-
-static PyObject*
-DB_keys(DBObject* self, PyObject* args)
-{
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-
-    if (!PyArg_UnpackTuple(args, "keys", 0, 1, &txnobj))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-    return _DB_make_list(self, txn, _KEYS_LIST);
-}
-
-
-static PyObject*
-DB_items(DBObject* self, PyObject* args)
-{
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-
-    if (!PyArg_UnpackTuple(args, "items", 0, 1, &txnobj))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-    return _DB_make_list(self, txn, _ITEMS_LIST);
-}
-
-
-static PyObject*
-DB_values(DBObject* self, PyObject* args)
-{
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-
-    if (!PyArg_UnpackTuple(args, "values", 0, 1, &txnobj))
-        return NULL;
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-    return _DB_make_list(self, txn, _VALUES_LIST);
-}
-
-/* --------------------------------------------------------------------- */
-/* DBCursor methods */
-
-
-static PyObject*
-DBC_close_internal(DBCursorObject* self)
-{
-    int err = 0;
-
-    if (self->dbc != NULL) {
-        EXTRACT_FROM_DOUBLE_LINKED_LIST(self);
-        if (self->txn) {
-            EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(self);
-            self->txn=NULL;
-        }
-
-        MYDB_BEGIN_ALLOW_THREADS;
-        err = _DBC_close(self->dbc);
-        MYDB_END_ALLOW_THREADS;
-        self->dbc = NULL;
-    }
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBC_close(DBCursorObject* self)
-{
-    return DBC_close_internal(self);
-}
-
-
-static PyObject*
-DBC_count(DBCursorObject* self, PyObject* args)
-{
-    int err = 0;
-    db_recno_t count;
-    int flags = 0;
-
-    if (!PyArg_ParseTuple(args, "|i:count", &flags))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_count(self->dbc, &count, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    return NUMBER_FromLong(count);
-}
-
-
-static PyObject*
-DBC_current(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_CURRENT,args,kwargs,"|iii:current");
-}
-
-
-static PyObject*
-DBC_delete(DBCursorObject* self, PyObject* args)
-{
-    int err, flags=0;
-
-    if (!PyArg_ParseTuple(args, "|i:delete", &flags))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_del(self->dbc, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    self->mydb->haveStat = 0;
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBC_dup(DBCursorObject* self, PyObject* args)
-{
-    int err, flags =0;
-    DBC* dbc = NULL;
-
-    if (!PyArg_ParseTuple(args, "|i:dup", &flags))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_dup(self->dbc, &dbc, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    return (PyObject*) newDBCursorObject(dbc, self->txn, self->mydb);
-}
-
-static PyObject*
-DBC_first(DBCursorObject* self, PyObject* args, PyObject* kwargs)
-{
-    return _DBCursor_get(self,DB_FIRST,args,kwargs,"|iii:first");
-}
-
-
-static PyObject*
-DBC_get(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    int err, flags=0;
-    PyObject* keyobj = NULL;
-    PyObject* dataobj = NULL;
-    PyObject* retval = NULL;
-    int dlen = -1;
-    int doff = -1;
-    DBT key, data;
-    static char* kwnames[] = { "key","data", "flags", "dlen", "doff",
-                                     NULL };
-
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|ii:get", &kwnames[2],
-                                    &flags, &dlen, &doff))
-    {
-        PyErr_Clear();
-        if (!PyArg_ParseTupleAndKeywords(args, kwargs, "Oi|ii:get",
-                                         &kwnames[1],
-                                        &keyobj, &flags, &dlen, &doff))
-        {
-            PyErr_Clear();
-            if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OOi|ii:get",
-                                             kwnames, &keyobj, &dataobj,
-                                             &flags, &dlen, &doff))
-            {
-                return NULL;
-           }
-       }
-    }
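-    /* The three parse attempts above accept the get(flags), get(key, flags)
-       and get(key, data, flags) calling styles, in that order. */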
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    if (keyobj && !make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-    if ( (dataobj && !make_dbt(dataobj, &data)) ||
-         (!add_partial_dbt(&data, dlen, doff)) )
-    {
-        FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.getReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        switch (_DB_get_type(self->mydb)) {
-        case -1:
-            retval = NULL;
-            break;
-        case DB_BTREE:
-        case DB_HASH:
-        default:
-            retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-            break;
-        case DB_RECNO:
-        case DB_QUEUE:
-            retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size);
-            break;
-        }
-    }
-    FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    return retval;
-}
-
-static PyObject*
-DBC_pget(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    int err, flags=0;
-    PyObject* keyobj = NULL;
-    PyObject* dataobj = NULL;
-    PyObject* retval = NULL;
-    int dlen = -1;
-    int doff = -1;
-    DBT key, pkey, data;
-    static char* kwnames_keyOnly[] = { "key", "flags", "dlen", "doff", NULL };
-    static char* kwnames[] = { "key", "data", "flags", "dlen", "doff", NULL };
-
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|ii:pget", &kwnames[2],
-                                    &flags, &dlen, &doff))
-    {
-        PyErr_Clear();
-        if (!PyArg_ParseTupleAndKeywords(args, kwargs, "Oi|ii:pget",
-                                         kwnames_keyOnly, 
-                                        &keyobj, &flags, &dlen, &doff))
-        {
-            PyErr_Clear();
-            if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OOi|ii:pget",
-                                             kwnames, &keyobj, &dataobj,
-                                             &flags, &dlen, &doff))
-            {
-                return NULL;
-           }
-       }
-    }
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    if (keyobj && !make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-    if ( (dataobj && !make_dbt(dataobj, &data)) ||
-         (!add_partial_dbt(&data, dlen, doff)) ) {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-        return NULL;
-    }
-
-    CLEAR_DBT(pkey);
-    pkey.flags = DB_DBT_MALLOC;
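-    /* Let Berkeley DB malloc the primary key buffer; FREE_DBT(pkey) below
-       releases it once the Python result objects have been built. */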
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_pget(self->dbc, &key, &pkey, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.getReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        PyObject *pkeyObj;
-        PyObject *dataObj;
-        dataObj = Build_PyString(data.data, data.size);
-
-        if (self->mydb->primaryDBType == DB_RECNO ||
-            self->mydb->primaryDBType == DB_QUEUE)
-            pkeyObj = NUMBER_FromLong(*(int *)pkey.data);
-        else
-            pkeyObj = Build_PyString(pkey.data, pkey.size);
-
-        if (key.data && key.size) /* return key, pkey and data */
-        {
-            PyObject *keyObj;
-            int type = _DB_get_type(self->mydb);
-            if (type == DB_RECNO || type == DB_QUEUE)
-                keyObj = NUMBER_FromLong(*(int *)key.data);
-            else
-                keyObj = Build_PyString(key.data, key.size);
-#if (PY_VERSION_HEX >= 0x02040000)
-            retval = PyTuple_Pack(3, keyObj, pkeyObj, dataObj);
-#else
-            retval = Py_BuildValue("OOO", keyObj, pkeyObj, dataObj);
-#endif
-            Py_DECREF(keyObj);
-            FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-        }
-        else /* return just the pkey and data */
-        {
-#if (PY_VERSION_HEX >= 0x02040000)
-            retval = PyTuple_Pack(2, pkeyObj, dataObj);
-#else
-            retval = Py_BuildValue("OO", pkeyObj, dataObj);
-#endif
-        }
-        Py_DECREF(dataObj);
-        Py_DECREF(pkeyObj);
-        FREE_DBT(pkey);
-    }
-    /* the only time REALLOC should be set is if we used an integer
-     * key that make_key_dbt malloc'd for us.  always free these. */
-    if (key.flags & DB_DBT_REALLOC) {  /* 'make_key_dbt' could do a 'malloc' */
-        FREE_DBT(key);
-    }
-    return retval;
-}
-
-
-static PyObject*
-DBC_get_recno(DBCursorObject* self)
-{
-    int err;
-    db_recno_t recno;
-    DBT key;
-    DBT data;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, DB_GET_RECNO);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    recno = *((db_recno_t*)data.data);
-    return NUMBER_FromLong(recno);
-}
-
-
-static PyObject*
-DBC_last(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_LAST,args,kwargs,"|iii:last");
-}
-
-
-static PyObject*
-DBC_next(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_NEXT,args,kwargs,"|iii:next");
-}
-
-
-static PyObject*
-DBC_prev(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_PREV,args,kwargs,"|iii:prev");
-}
-
-
-static PyObject*
-DBC_put(DBCursorObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags = 0;
-    PyObject* keyobj, *dataobj;
-    DBT key, data;
-    static char* kwnames[] = { "key", "data", "flags", "dlen", "doff",
-                                     NULL };
-    int dlen = -1;
-    int doff = -1;
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iii:put", kwnames,
-                                    &keyobj, &dataobj, &flags, &dlen, &doff))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    if (!make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-    if (!make_dbt(dataobj, &data) ||
-        !add_partial_dbt(&data, dlen, doff) )
-    {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_put(self->dbc, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-    FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    RETURN_IF_ERR();
-    self->mydb->haveStat = 0;
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBC_set(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    int err, flags = 0;
-    DBT key, data;
-    PyObject* retval, *keyobj;
-    static char* kwnames[] = { "key", "flags", "dlen", "doff", NULL };
-    int dlen = -1;
-    int doff = -1;
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|iii:set", kwnames,
-                                    &keyobj, &flags, &dlen, &doff))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    if (!make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-
-    CLEAR_DBT(data);
-    if (!add_partial_dbt(&data, dlen, doff)) {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags|DB_SET);
-    MYDB_END_ALLOW_THREADS;
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.cursorSetReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        switch (_DB_get_type(self->mydb)) {
-        case -1:
-            retval = NULL;
-            break;
-        case DB_BTREE:
-        case DB_HASH:
-        default:
-            retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-            break;
-        case DB_RECNO:
-        case DB_QUEUE:
-            retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size);
-            break;
-        }
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    }
-    /* the only time REALLOC should be set is if we used an integer
-     * key that make_key_dbt malloc'd for us.  always free these. */
-    if (key.flags & DB_DBT_REALLOC) {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    }
-
-    return retval;
-}
-
-
-static PyObject*
-DBC_set_range(DBCursorObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags = 0;
-    DBT key, data;
-    PyObject* retval, *keyobj;
-    static char* kwnames[] = { "key", "flags", "dlen", "doff", NULL };
-    int dlen = -1;
-    int doff = -1;
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|iii:set_range", kwnames,
-                                    &keyobj, &flags, &dlen, &doff))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    if (!make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-
-    CLEAR_DBT(data);
-    if (!add_partial_dbt(&data, dlen, doff)) {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-        return NULL;
-    }
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags|DB_SET_RANGE);
-    MYDB_END_ALLOW_THREADS;
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.cursorSetReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        switch (_DB_get_type(self->mydb)) {
-        case -1:
-            retval = NULL;
-            break;
-        case DB_BTREE:
-        case DB_HASH:
-        default:
-            retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-            break;
-        case DB_RECNO:
-        case DB_QUEUE:
-            retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size);
-            break;
-        }
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    }
-    /* the only time REALLOC should be set is if we used an integer
-     * key that make_key_dbt malloc'd for us.  always free these. */
-    if (key.flags & DB_DBT_REALLOC) {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    }
-
-    return retval;
-}
-
-static PyObject*
-_DBC_get_set_both(DBCursorObject* self, PyObject* keyobj, PyObject* dataobj,
-                  int flags, unsigned int returnsNone)
-{
-    int err;
-    DBT key, data;
-    PyObject* retval;
-
-    /* the caller did this:  CHECK_CURSOR_NOT_CLOSED(self); */
-    if (!make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-    if (!make_dbt(dataobj, &data)) {
-        FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags|DB_GET_BOTH);
-    MYDB_END_ALLOW_THREADS;
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && returnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        switch (_DB_get_type(self->mydb)) {
-        case -1:
-            retval = NULL;
-            break;
-        case DB_BTREE:
-        case DB_HASH:
-        default:
-            retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-            break;
-        case DB_RECNO:
-        case DB_QUEUE:
-            retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size);
-            break;
-        }
-    }
-
-    FREE_DBT(key);  /* 'make_key_dbt' could do a 'malloc' */
-    return retval;
-}
-
-static PyObject*
-DBC_get_both(DBCursorObject* self, PyObject* args)
-{
-    int flags=0;
-    PyObject *keyobj, *dataobj;
-
-    if (!PyArg_ParseTuple(args, "OO|i:get_both", &keyobj, &dataobj, &flags))
-        return NULL;
-
-    /* if the cursor is closed, self->mydb may be invalid */
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    return _DBC_get_set_both(self, keyobj, dataobj, flags,
-                self->mydb->moduleFlags.getReturnsNone);
-}
-
-/* Return size of entry */
-static PyObject*
-DBC_get_current_size(DBCursorObject* self)
-{
-    int err, flags=DB_CURRENT;
-    PyObject* retval = NULL;
-    DBT key, data;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-
-    /* We don't allocate any memory, forcing a DB_BUFFER_SMALL error and thus
-       getting the record size. */
-    data.flags = DB_DBT_USERMEM;
-    data.ulen = 0;
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags);
-    MYDB_END_ALLOW_THREADS;
-    if (err == DB_BUFFER_SMALL || !err) {
-        /* DB_BUFFER_SMALL means positive size, !err means zero length value */
-        retval = NUMBER_FromLong((long)data.size);
-        err = 0;
-    }
-
-    RETURN_IF_ERR();
-    return retval;
-}
-
-static PyObject*
-DBC_set_both(DBCursorObject* self, PyObject* args)
-{
-    int flags=0;
-    PyObject *keyobj, *dataobj;
-
-    if (!PyArg_ParseTuple(args, "OO|i:set_both", &keyobj, &dataobj, &flags))
-        return NULL;
-
-    /* if the cursor is closed, self->mydb may be invalid */
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    return _DBC_get_set_both(self, keyobj, dataobj, flags,
-                self->mydb->moduleFlags.cursorSetReturnsNone);
-}
-
-
-static PyObject*
-DBC_set_recno(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    int err, irecno, flags=0;
-    db_recno_t recno;
-    DBT key, data;
-    PyObject* retval;
-    int dlen = -1;
-    int doff = -1;
-    static char* kwnames[] = { "recno","flags", "dlen", "doff", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|iii:set_recno", kwnames,
-                                    &irecno, &flags, &dlen, &doff))
-      return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    CLEAR_DBT(key);
-    recno = (db_recno_t) irecno;
-    /* use allocated space so DB will be able to realloc room for the real
-     * key */
-    key.data = malloc(sizeof(db_recno_t));
-    if (key.data == NULL) {
-        PyErr_SetString(PyExc_MemoryError, "Key memory allocation failed");
-        return NULL;
-    }
-    key.size = sizeof(db_recno_t);
-    key.ulen = key.size;
-    memcpy(key.data, &recno, sizeof(db_recno_t));
-    key.flags = DB_DBT_REALLOC;
-
-    CLEAR_DBT(data);
-    if (!add_partial_dbt(&data, dlen, doff)) {
-        FREE_DBT(key);
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags|DB_SET_RECNO);
-    MYDB_END_ALLOW_THREADS;
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.cursorSetReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {  /* Can only be used for BTrees, so no need to return int key */
-        retval = BuildValue_SS(key.data, key.size, data.data, data.size);
-    }
-    FREE_DBT(key);
-
-    return retval;
-}
-
-
-static PyObject*
-DBC_consume(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_CONSUME,args,kwargs,"|iii:consume");
-}
-
-
-static PyObject*
-DBC_next_dup(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_NEXT_DUP,args,kwargs,"|iii:next_dup");
-}
-
-
-static PyObject*
-DBC_next_nodup(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_NEXT_NODUP,args,kwargs,"|iii:next_nodup");
-}
-
-
-static PyObject*
-DBC_prev_nodup(DBCursorObject* self, PyObject* args, PyObject *kwargs)
-{
-    return _DBCursor_get(self,DB_PREV_NODUP,args,kwargs,"|iii:prev_nodup");
-}
-
-
-static PyObject*
-DBC_join_item(DBCursorObject* self, PyObject* args)
-{
-    int err, flags=0;
-    DBT key, data;
-    PyObject* retval;
-
-    if (!PyArg_ParseTuple(args, "|i:join_item", &flags))
-        return NULL;
-
-    CHECK_CURSOR_NOT_CLOSED(self);
-
-    CLEAR_DBT(key);
-    CLEAR_DBT(data);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = _DBC_get(self->dbc, &key, &data, flags | DB_JOIN_ITEM);
-    MYDB_END_ALLOW_THREADS;
-    if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)
-           && self->mydb->moduleFlags.getReturnsNone) {
-        Py_INCREF(Py_None);
-        retval = Py_None;
-    }
-    else if (makeDBError(err)) {
-        retval = NULL;
-    }
-    else {
-        retval = BuildValue_S(key.data, key.size);
-    }
-
-    return retval;
-}
-
-
-
-/* --------------------------------------------------------------------- */
-/* DBEnv methods */
-
-
-static PyObject*
-DBEnv_close_internal(DBEnvObject* self, int flags)
-{
-    PyObject *dummy;
-    int err;
-
-    if (!self->closed) {      /* Don't close more than once */
-        while(self->children_txns) {
-          dummy=DBTxn_abort_discard_internal(self->children_txns,0);
-          Py_XDECREF(dummy);
-        }
-        while(self->children_dbs) {
-          dummy=DB_close_internal(self->children_dbs,0);
-          Py_XDECREF(dummy);
-        }
-    }
-
-    self->closed = 1;
-    if (self->db_env) {
-        MYDB_BEGIN_ALLOW_THREADS;
-        err = self->db_env->close(self->db_env, flags);
-        MYDB_END_ALLOW_THREADS;
-        /* after calling DBEnv->close, regardless of error, this DBEnv
-         * may not be accessed again (Berkeley DB docs). */
-        self->db_env = NULL;
-        RETURN_IF_ERR();
-    }
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_close(DBEnvObject* self, PyObject* args)
-{
-    int flags = 0;
-
-    if (!PyArg_ParseTuple(args, "|i:close", &flags))
-        return NULL;
-    return DBEnv_close_internal(self,flags);
-}
-
-
-static PyObject*
-DBEnv_open(DBEnvObject* self, PyObject* args)
-{
-    int err, flags=0, mode=0660;
-    char *db_home;
-
-    if (!PyArg_ParseTuple(args, "z|ii:open", &db_home, &flags, &mode))
-        return NULL;
-
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->open(self->db_env, db_home, flags, mode);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    self->closed = 0;
-    self->flags = flags;
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_remove(DBEnvObject* self, PyObject* args)
-{
-    int err, flags=0;
-    char *db_home;
-
-    if (!PyArg_ParseTuple(args, "s|i:remove", &db_home, &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->remove(self->db_env, db_home, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-#if (DBVER >= 41)
-static PyObject*
-DBEnv_dbremove(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    u_int32_t flags=0;
-    char *file = NULL;
-    char *database = NULL;
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "file", "database", "txn", "flags",
-                                     NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|zOi:dbremove", kwnames,
-               &file, &database, &txnobj, &flags)) {
-       return NULL;
-    }
-    if (!checkTxnObj(txnobj, &txn)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->dbremove(self->db_env, txn, file, database, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_dbrename(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    u_int32_t flags=0;
-    char *file = NULL;
-    char *database = NULL;
-    char *newname = NULL;
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "file", "database", "newname", "txn",
-                                     "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "szs|Oi:dbrename", kwnames,
-               &file, &database, &newname, &txnobj, &flags)) {
-       return NULL;
-    }
-    if (!checkTxnObj(txnobj, &txn)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->dbrename(self->db_env, txn, file, database, newname,
-                                 flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_set_encrypt(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    u_int32_t flags=0;
-    char *passwd = NULL;
-    static char* kwnames[] = { "passwd", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|i:set_encrypt", kwnames,
-               &passwd, &flags)) {
-       return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_encrypt(self->db_env, passwd, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif /* DBVER >= 41 */
-
-static PyObject*
-DBEnv_set_timeout(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    u_int32_t flags=0;
-    u_int32_t timeout = 0;
-    static char* kwnames[] = { "timeout", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:set_timeout", kwnames,
-               &timeout, &flags)) {
-       return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_timeout(self->db_env, (db_timeout_t)timeout, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_set_shm_key(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    long shm_key = 0;
-
-    if (!PyArg_ParseTuple(args, "l:set_shm_key", &shm_key))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    err = self->db_env->set_shm_key(self->db_env, shm_key);
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_set_cachesize(DBEnvObject* self, PyObject* args)
-{
-    int err, gbytes=0, bytes=0, ncache=0;
-
-    if (!PyArg_ParseTuple(args, "ii|i:set_cachesize",
-                          &gbytes, &bytes, &ncache))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_cachesize(self->db_env, gbytes, bytes, ncache);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_flags(DBEnvObject* self, PyObject* args)
-{
-    int err, flags=0, onoff=0;
-
-    if (!PyArg_ParseTuple(args, "ii:set_flags",
-                          &flags, &onoff))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_flags(self->db_env, flags, onoff);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-#if (DBVER >= 47)
-static PyObject*
-DBEnv_log_set_config(DBEnvObject* self, PyObject* args)
-{
-    int err, flags, onoff;
-
-    if (!PyArg_ParseTuple(args, "ii:log_set_config",
-                          &flags, &onoff))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->log_set_config(self->db_env, flags, onoff);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif /* DBVER >= 47 */
-
-
-static PyObject*
-DBEnv_set_data_dir(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    char *dir;
-
-    if (!PyArg_ParseTuple(args, "s:set_data_dir", &dir))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_data_dir(self->db_env, dir);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_lg_bsize(DBEnvObject* self, PyObject* args)
-{
-    int err, lg_bsize;
-
-    if (!PyArg_ParseTuple(args, "i:set_lg_bsize", &lg_bsize))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lg_bsize(self->db_env, lg_bsize);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_lg_dir(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    char *dir;
-
-    if (!PyArg_ParseTuple(args, "s:set_lg_dir", &dir))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lg_dir(self->db_env, dir);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_set_lg_max(DBEnvObject* self, PyObject* args)
-{
-    int err, lg_max;
-
-    if (!PyArg_ParseTuple(args, "i:set_lg_max", &lg_max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lg_max(self->db_env, lg_max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-#if (DBVER >= 42)
-static PyObject*
-DBEnv_get_lg_max(DBEnvObject* self)
-{
-    int err;
-    u_int32_t lg_max;
-
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->get_lg_max(self->db_env, &lg_max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(lg_max);
-}
-#endif
-
-
-static PyObject*
-DBEnv_set_lg_regionmax(DBEnvObject* self, PyObject* args)
-{
-    int err, lg_max;
-
-    if (!PyArg_ParseTuple(args, "i:set_lg_regionmax", &lg_max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lg_regionmax(self->db_env, lg_max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_lk_detect(DBEnvObject* self, PyObject* args)
-{
-    int err, lk_detect;
-
-    if (!PyArg_ParseTuple(args, "i:set_lk_detect", &lk_detect))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lk_detect(self->db_env, lk_detect);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-#if (DBVER < 45)
-static PyObject*
-DBEnv_set_lk_max(DBEnvObject* self, PyObject* args)
-{
-    int err, max;
-
-    if (!PyArg_ParseTuple(args, "i:set_lk_max", &max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lk_max(self->db_env, max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif
-
-
-
-static PyObject*
-DBEnv_set_lk_max_locks(DBEnvObject* self, PyObject* args)
-{
-    int err, max;
-
-    if (!PyArg_ParseTuple(args, "i:set_lk_max_locks", &max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lk_max_locks(self->db_env, max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_lk_max_lockers(DBEnvObject* self, PyObject* args)
-{
-    int err, max;
-
-    if (!PyArg_ParseTuple(args, "i:set_lk_max_lockers", &max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lk_max_lockers(self->db_env, max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_lk_max_objects(DBEnvObject* self, PyObject* args)
-{
-    int err, max;
-
-    if (!PyArg_ParseTuple(args, "i:set_lk_max_objects", &max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_lk_max_objects(self->db_env, max);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_mp_mmapsize(DBEnvObject* self, PyObject* args)
-{
-    int err, mp_mmapsize;
-
-    if (!PyArg_ParseTuple(args, "i:set_mp_mmapsize", &mp_mmapsize))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_mp_mmapsize(self->db_env, mp_mmapsize);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_tmp_dir(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    char *dir;
-
-    if (!PyArg_ParseTuple(args, "s:set_tmp_dir", &dir))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_tmp_dir(self->db_env, dir);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_txn_recover(DBEnvObject* self)
-{
-    int flags = DB_FIRST;
-    int err, i;
-    PyObject *list, *tuple, *gid;
-    DBTxnObject *txn;
-#define PREPLIST_LEN 16
-    DB_PREPLIST preplist[PREPLIST_LEN];
-    long retp;
-
-    CHECK_ENV_NOT_CLOSED(self);
-
-    list=PyList_New(0);
-    if (!list)
-        return NULL;
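-    /* txn_recover() hands back at most PREPLIST_LEN prepared transactions per
-       call, so keep asking (DB_FIRST, then DB_NEXT) until none are returned. */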
-    while (!0) {
-        MYDB_BEGIN_ALLOW_THREADS
-        err=self->db_env->txn_recover(self->db_env,
-                        preplist, PREPLIST_LEN, &retp, flags);
-#undef PREPLIST_LEN
-        MYDB_END_ALLOW_THREADS
-        if (err) {
-            Py_DECREF(list);
-            RETURN_IF_ERR();
-        }
-        if (!retp) break;
-        flags=DB_NEXT;  /* Prepare for next loop pass */
-        for (i=0; i<retp; i++) {
-            gid=PyBytes_FromStringAndSize((char *)(preplist[i].gid),
-                                DB_XIDDATASIZE);
-            if (!gid) {
-                Py_DECREF(list);
-                return NULL;
-            }
-            txn=newDBTxnObject(self, NULL, preplist[i].txn, flags);
-            if (!txn) {
-                Py_DECREF(list);
-                Py_DECREF(gid);
-                return NULL;
-            }
-            txn->flag_prepare=1;  /* Recover state */
-            tuple=PyTuple_New(2);
-            if (!tuple) {
-                Py_DECREF(list);
-                Py_DECREF(gid);
-                Py_DECREF(txn);
-                return NULL;
-            }
-            if (PyTuple_SetItem(tuple, 0, gid)) {
-                Py_DECREF(list);
-                Py_DECREF(gid);
-                Py_DECREF(txn);
-                Py_DECREF(tuple);
-                return NULL;
-            }
-            if (PyTuple_SetItem(tuple, 1, (PyObject *)txn)) {
-                Py_DECREF(list);
-                Py_DECREF(txn);
-                Py_DECREF(tuple); /* This also deletes "gid" */
-                return NULL;
-            }
-            if (PyList_Append(list, tuple)) {
-                Py_DECREF(list);
-                Py_DECREF(tuple); /* This also deletes "gid" and "txn" */
-                return NULL;
-            }
-            Py_DECREF(tuple);
-        }
-    }
-    return list;
-}
-
-static PyObject*
-DBEnv_txn_begin(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int flags = 0;
-    PyObject* txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = { "parent", "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:txn_begin", kwnames,
-                                     &txnobj, &flags))
-        return NULL;
-
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    return (PyObject*)newDBTxnObject(self, (DBTxnObject *)txnobj, NULL, flags);
-}
-
-
-static PyObject*
-DBEnv_txn_checkpoint(DBEnvObject* self, PyObject* args)
-{
-    int err, kbyte=0, min=0, flags=0;
-
-    if (!PyArg_ParseTuple(args, "|iii:txn_checkpoint", &kbyte, &min, &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->txn_checkpoint(self->db_env, kbyte, min, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_tx_max(DBEnvObject* self, PyObject* args)
-{
-    int err, max;
-
-    if (!PyArg_ParseTuple(args, "i:set_tx_max", &max))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    err = self->db_env->set_tx_max(self->db_env, max);
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_tx_timestamp(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    long stamp;
-    time_t timestamp;
-
-    if (!PyArg_ParseTuple(args, "l:set_tx_timestamp", &stamp))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-    timestamp = (time_t)stamp;
-    err = self->db_env->set_tx_timestamp(self->db_env, &timestamp);
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_lock_detect(DBEnvObject* self, PyObject* args)
-{
-    int err, atype, flags=0;
-    int aborted = 0;
-
-    if (!PyArg_ParseTuple(args, "i|i:lock_detect", &atype, &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->lock_detect(self->db_env, flags, atype, &aborted);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(aborted);
-}
-
-
-static PyObject*
-DBEnv_lock_get(DBEnvObject* self, PyObject* args)
-{
-    int flags=0;
-    int locker, lock_mode;
-    DBT obj;
-    PyObject* objobj;
-
-    if (!PyArg_ParseTuple(args, "iOi|i:lock_get", &locker, &objobj, &lock_mode, &flags))
-        return NULL;
-
-
-    if (!make_dbt(objobj, &obj))
-        return NULL;
-
-    return (PyObject*)newDBLockObject(self, locker, &obj, lock_mode, flags);
-}
-
-
-static PyObject*
-DBEnv_lock_id(DBEnvObject* self)
-{
-    int err;
-    u_int32_t theID;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->lock_id(self->db_env, &theID);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    return NUMBER_FromLong((long)theID);
-}
-
-static PyObject*
-DBEnv_lock_id_free(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    u_int32_t theID;
-
-    if (!PyArg_ParseTuple(args, "I:lock_id_free", &theID))
-        return NULL;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->lock_id_free(self->db_env, theID);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_lock_put(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    DBLockObject* dblockobj;
-
-    if (!PyArg_ParseTuple(args, "O!:lock_put", &DBLock_Type, &dblockobj))
-        return NULL;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->lock_put(self->db_env, &dblockobj->lock);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-#if (DBVER >= 44)
-static PyObject*
-DBEnv_lsn_reset(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    char *file;
-    u_int32_t flags = 0;
-    static char* kwnames[] = { "file", "flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|i:lsn_reset", kwnames,
-                                     &file, &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->lsn_reset(self->db_env, file, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif /* DBVER >= 44 */
-
-static PyObject*
-DBEnv_log_stat(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    DB_LOG_STAT* statp = NULL;
-    PyObject* d = NULL;
-    u_int32_t flags = 0;
-
-    if (!PyArg_ParseTuple(args, "|i:log_stat", &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->log_stat(self->db_env, &statp, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    /* Turn the stat structure into a dictionary */
-    d = PyDict_New();
-    if (d == NULL) {
-        if (statp)
-            free(statp);
-        return NULL;
-    }
-
-#define MAKE_ENTRY(name)  _addIntToDict(d, #name, statp->st_##name)
-
-    MAKE_ENTRY(magic);
-    MAKE_ENTRY(version);
-    MAKE_ENTRY(mode);
-    MAKE_ENTRY(lg_bsize);
-#if (DBVER >= 44)
-    MAKE_ENTRY(lg_size);
-    MAKE_ENTRY(record);
-#endif
-#if (DBVER < 41)
-    MAKE_ENTRY(lg_max);
-#endif
-    MAKE_ENTRY(w_mbytes);
-    MAKE_ENTRY(w_bytes);
-    MAKE_ENTRY(wc_mbytes);
-    MAKE_ENTRY(wc_bytes);
-    MAKE_ENTRY(wcount);
-    MAKE_ENTRY(wcount_fill);
-#if (DBVER >= 44)
-    MAKE_ENTRY(rcount);
-#endif
-    MAKE_ENTRY(scount);
-    MAKE_ENTRY(cur_file);
-    MAKE_ENTRY(cur_offset);
-    MAKE_ENTRY(disk_file);
-    MAKE_ENTRY(disk_offset);
-    MAKE_ENTRY(maxcommitperflush);
-    MAKE_ENTRY(mincommitperflush);
-    MAKE_ENTRY(regsize);
-    MAKE_ENTRY(region_wait);
-    MAKE_ENTRY(region_nowait);
-
-#undef MAKE_ENTRY
-    free(statp);
-    return d;
-} /* DBEnv_log_stat */
-
-
-static PyObject*
-DBEnv_lock_stat(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    DB_LOCK_STAT* sp;
-    PyObject* d = NULL;
-    u_int32_t flags = 0;
-
-    if (!PyArg_ParseTuple(args, "|i:lock_stat", &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->lock_stat(self->db_env, &sp, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    /* Turn the stat structure into a dictionary */
-    d = PyDict_New();
-    if (d == NULL) {
-        free(sp);
-        return NULL;
-    }
-
-#define MAKE_ENTRY(name)  _addIntToDict(d, #name, sp->st_##name)
-
-#if (DBVER < 41)
-    MAKE_ENTRY(lastid);
-#endif
-#if (DBVER >=41)
-    MAKE_ENTRY(id);
-    MAKE_ENTRY(cur_maxid);
-#endif
-    MAKE_ENTRY(nmodes);
-    MAKE_ENTRY(maxlocks);
-    MAKE_ENTRY(maxlockers);
-    MAKE_ENTRY(maxobjects);
-    MAKE_ENTRY(nlocks);
-    MAKE_ENTRY(maxnlocks);
-    MAKE_ENTRY(nlockers);
-    MAKE_ENTRY(maxnlockers);
-    MAKE_ENTRY(nobjects);
-    MAKE_ENTRY(maxnobjects);
-    MAKE_ENTRY(nrequests);
-    MAKE_ENTRY(nreleases);
-#if (DBVER >= 44)
-    MAKE_ENTRY(nupgrade);
-    MAKE_ENTRY(ndowngrade);
-#endif
-#if (DBVER < 44)
-    MAKE_ENTRY(nnowaits);       /* these were renamed in 4.4 */
-    MAKE_ENTRY(nconflicts);
-#else
-    MAKE_ENTRY(lock_nowait);
-    MAKE_ENTRY(lock_wait);
-#endif
-    MAKE_ENTRY(ndeadlocks);
-#if (DBVER >= 41)
-    MAKE_ENTRY(locktimeout);
-    MAKE_ENTRY(txntimeout);
-#endif
-    MAKE_ENTRY(nlocktimeouts);
-    MAKE_ENTRY(ntxntimeouts);
-#if (DBVER >= 46)
-    MAKE_ENTRY(objs_wait);
-    MAKE_ENTRY(objs_nowait);
-    MAKE_ENTRY(lockers_wait);
-    MAKE_ENTRY(lockers_nowait);
-#if (DBVER >= 47)
-    MAKE_ENTRY(lock_wait);
-    MAKE_ENTRY(lock_nowait);
-#else
-    MAKE_ENTRY(locks_wait);
-    MAKE_ENTRY(locks_nowait);
-#endif
-    MAKE_ENTRY(hash_len);
-#endif
-    MAKE_ENTRY(regsize);
-    MAKE_ENTRY(region_wait);
-    MAKE_ENTRY(region_nowait);
-
-#undef MAKE_ENTRY
-    free(sp);
-    return d;
-}
-
-static PyObject*
-DBEnv_log_flush(DBEnvObject* self)
-{
-    int err;
-
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->db_env->log_flush(self->db_env, NULL);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_log_archive(DBEnvObject* self, PyObject* args)
-{
-    int flags=0;
-    int err;
-    char **log_list = NULL;
-    PyObject* list;
-    PyObject* item = NULL;
-
-    if (!PyArg_ParseTuple(args, "|i:log_archive", &flags))
-        return NULL;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->log_archive(self->db_env, &log_list, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    list = PyList_New(0);
-    if (list == NULL) {
-        if (log_list)
-            free(log_list);
-        return NULL;
-    }
-
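-    /* log_archive() returns a NULL-terminated array of log file names; copy
-       each name into the Python list, then free the array with a single free(). */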
-    if (log_list) {
-        char **log_list_start;
-        for (log_list_start = log_list; *log_list != NULL; ++log_list) {
-            item = PyBytes_FromString (*log_list);
-            if (item == NULL) {
-                Py_DECREF(list);
-                list = NULL;
-                break;
-            }
-            if (PyList_Append(list, item)) {
-                Py_DECREF(list);
-                list = NULL;
-                Py_DECREF(item);
-                break;
-            }
-            Py_DECREF(item);
-        }
-        free(log_list_start);
-    }
-    return list;
-}
-
-
-static PyObject*
-DBEnv_txn_stat(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    DB_TXN_STAT* sp;
-    PyObject* d = NULL;
-    u_int32_t flags=0;
-
-    if (!PyArg_ParseTuple(args, "|i:txn_stat", &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->txn_stat(self->db_env, &sp, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    /* Turn the stat structure into a dictionary */
-    d = PyDict_New();
-    if (d == NULL) {
-        free(sp);
-        return NULL;
-    }
-
-#define MAKE_ENTRY(name)        _addIntToDict(d, #name, sp->st_##name)
-#define MAKE_TIME_T_ENTRY(name) _addTimeTToDict(d, #name, sp->st_##name)
-#define MAKE_DB_LSN_ENTRY(name) _addDB_lsnToDict(d, #name, sp->st_##name)
-
-    MAKE_DB_LSN_ENTRY(last_ckp);
-    MAKE_TIME_T_ENTRY(time_ckp);
-    MAKE_ENTRY(last_txnid);
-    MAKE_ENTRY(maxtxns);
-    MAKE_ENTRY(nactive);
-    MAKE_ENTRY(maxnactive);
-#if (DBVER >= 45)
-    MAKE_ENTRY(nsnapshot);
-    MAKE_ENTRY(maxnsnapshot);
-#endif
-    MAKE_ENTRY(nbegins);
-    MAKE_ENTRY(naborts);
-    MAKE_ENTRY(ncommits);
-    MAKE_ENTRY(nrestores);
-    MAKE_ENTRY(regsize);
-    MAKE_ENTRY(region_wait);
-    MAKE_ENTRY(region_nowait);
-
-#undef MAKE_DB_LSN_ENTRY
-#undef MAKE_ENTRY
-#undef MAKE_TIME_T_ENTRY
-    free(sp);
-    return d;
-}
-
-
-static PyObject*
-DBEnv_set_get_returns_none(DBEnvObject* self, PyObject* args)
-{
-    int flags=0;
-    int oldValue=0;
-
-    if (!PyArg_ParseTuple(args,"i:set_get_returns_none", &flags))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    if (self->moduleFlags.getReturnsNone)
-        ++oldValue;
-    if (self->moduleFlags.cursorSetReturnsNone)
-        ++oldValue;
-    self->moduleFlags.getReturnsNone = (flags >= 1);
-    self->moduleFlags.cursorSetReturnsNone = (flags >= 2);
-    return NUMBER_FromLong(oldValue);
-}
-
-static PyObject*
-DBEnv_get_private(DBEnvObject* self)
-{
-    /* We can give out the private field even if dbenv is closed */
-    Py_INCREF(self->private_obj);
-    return self->private_obj;
-}
-
-static PyObject*
-DBEnv_set_private(DBEnvObject* self, PyObject* private_obj)
-{
-    /* We can set the private field even if dbenv is closed */
-    Py_DECREF(self->private_obj);
-    Py_INCREF(private_obj);
-    self->private_obj = private_obj;
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBEnv_set_rpc_server(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    char *host;
-    long cl_timeout=0, sv_timeout=0;
-
-    static char* kwnames[] = { "host", "cl_timeout", "sv_timeout", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|ll:set_rpc_server", kwnames,
-                                     &host, &cl_timeout, &sv_timeout))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_rpc_server(self->db_env, NULL, host, cl_timeout,
-            sv_timeout, 0);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_set_verbose(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int which, onoff;
-
-    if (!PyArg_ParseTuple(args, "ii:set_verbose", &which, &onoff)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_verbose(self->db_env, which, onoff);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-#if (DBVER >= 42)
-static PyObject*
-DBEnv_get_verbose(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int which;
-    int verbose;
-
-    if (!PyArg_ParseTuple(args, "i:get_verbose", &which)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->get_verbose(self->db_env, which, &verbose);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return PyBool_FromLong(verbose);
-}
-#endif
-
-#if (DBVER >= 45)
-static void
-_dbenv_event_notifyCallback(DB_ENV* db_env, u_int32_t event, void *event_info)
-{
-    DBEnvObject *dbenv;
-    PyObject* callback;
-    PyObject* args;
-    PyObject* result = NULL;
-
-    MYDB_BEGIN_BLOCK_THREADS;
-    dbenv = (DBEnvObject *)db_env->app_private;
-    callback = dbenv->event_notifyCallback;
-    if (callback) {
-        if (event == DB_EVENT_REP_NEWMASTER) {
-            args = Py_BuildValue("(Oii)", dbenv, event, *((int *)event_info));
-        } else {
-            args = Py_BuildValue("(OiO)", dbenv, event, Py_None);
-        }
-        if (args) {
-            result = PyEval_CallObject(callback, args);
-        }
-        if ((!args) || (!result)) {
-            PyErr_Print();
-        }
-        Py_XDECREF(args);
-        Py_XDECREF(result);
-    }
-    MYDB_END_BLOCK_THREADS;
-}
-#endif
-
-#if (DBVER >= 45)
-static PyObject*
-DBEnv_set_event_notify(DBEnvObject* self, PyObject* notifyFunc)
-{
-    int err;
-
-    CHECK_ENV_NOT_CLOSED(self);
-
-    if (!PyCallable_Check(notifyFunc)) {
-           makeTypeError("Callable", notifyFunc);
-           return NULL;
-    }
-
-    Py_XDECREF(self->event_notifyCallback);
-    Py_INCREF(notifyFunc);
-    self->event_notifyCallback = notifyFunc;
-
-    /* This is to work around a problem with uninitialized threads (see
-       the comment in DB_associate) */
-#ifdef WITH_THREAD
-    PyEval_InitThreads();
-#endif
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->set_event_notify(self->db_env, _dbenv_event_notifyCallback);
-    MYDB_END_ALLOW_THREADS;
-
-    if (err) {
-           Py_DECREF(notifyFunc);
-           self->event_notifyCallback = NULL;
-    }
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif
-
-
-/* --------------------------------------------------------------------- */
-/* REPLICATION METHODS: Base Replication */
-
-
-static PyObject*
-DBEnv_rep_process_message(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    PyObject *control_py, *rec_py;
-    DBT control, rec;
-    int envid;
-#if (DBVER >= 42)
-    DB_LSN lsn;
-#endif
-
-    if (!PyArg_ParseTuple(args, "OOi:rep_process_message", &control_py,
-                &rec_py, &envid))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    if (!make_dbt(control_py, &control))
-        return NULL;
-    if (!make_dbt(rec_py, &rec))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-#if (DBVER >= 46)
-    err = self->db_env->rep_process_message(self->db_env, &control, &rec,
-            envid, &lsn);
-#else
-#if (DBVER >= 42)
-    err = self->db_env->rep_process_message(self->db_env, &control, &rec,
-            &envid, &lsn);
-#else
-    err = self->db_env->rep_process_message(self->db_env, &control, &rec,
-            &envid);
-#endif
-#endif
-    MYDB_END_ALLOW_THREADS;
-    switch (err) {
-        case DB_REP_NEWMASTER :
-          return Py_BuildValue("(iO)", envid, Py_None);
-          break;
-
-        case DB_REP_DUPMASTER :
-        case DB_REP_HOLDELECTION :
-#if (DBVER >= 44)
-        case DB_REP_IGNORE :
-        case DB_REP_JOIN_FAILURE :
-#endif
-            return Py_BuildValue("(iO)", err, Py_None);
-            break;
-        case DB_REP_NEWSITE :
-            {
-                PyObject *tmp, *r;
-
-                if (!(tmp = PyBytes_FromStringAndSize(rec.data, rec.size))) {
-                    return NULL;
-                }
-
-                r = Py_BuildValue("(iO)", err, tmp);
-                Py_DECREF(tmp);
-                return r;
-                break;
-            }
-#if (DBVER >= 42)
-        case DB_REP_NOTPERM :
-        case DB_REP_ISPERM :
-            return Py_BuildValue("(i(ll))", err, lsn.file, lsn.offset);
-            break;
-#endif
-    }
-    RETURN_IF_ERR();
-    return Py_BuildValue("(OO)", Py_None, Py_None);
-}
-
-static int
-_DBEnv_rep_transportCallback(DB_ENV* db_env, const DBT* control, const DBT* rec,
-        const DB_LSN *lsn, int envid, u_int32_t flags)
-{
-    DBEnvObject *dbenv;
-    PyObject* rep_transport;
-    PyObject* args;
-    PyObject *a, *b;
-    PyObject* result = NULL;
-    int ret=0;
-
-    MYDB_BEGIN_BLOCK_THREADS;
-    dbenv = (DBEnvObject *)db_env->app_private;
-    rep_transport = dbenv->rep_transport;
-
-    /*
-    ** Errors building 'a' or 'b' (NULL values) are detected below by "Py_BuildValue".
-    */
-    a = PyBytes_FromStringAndSize(control->data, control->size);
-    b = PyBytes_FromStringAndSize(rec->data, rec->size);
-
-    args = Py_BuildValue(
-#if (PY_VERSION_HEX >= 0x02040000)
-            "(OOO(ll)iI)",
-#else
-            "(OOO(ll)ii)",
-#endif
-            dbenv,
-            a, b,
-            lsn->file, lsn->offset, envid, flags);
-    if (args) {
-        result = PyEval_CallObject(rep_transport, args);
-    }
-
-    if ((!args) || (!result)) {
-        PyErr_Print();
-        ret = -1;
-    }
-    Py_XDECREF(a);
-    Py_XDECREF(b);
-    Py_XDECREF(args);
-    Py_XDECREF(result);
-    MYDB_END_BLOCK_THREADS;
-    return ret;
-}
-
-#if (DBVER <= 41)
-static int
-_DBEnv_rep_transportCallbackOLD(DB_ENV* db_env, const DBT* control, const DBT* rec,
-        int envid, u_int32_t flags)
-{
-    DB_LSN lsn;
-
-    lsn.file = -1;  /* Dummy values */
-    lsn.offset = -1;
-    return _DBEnv_rep_transportCallback(db_env, control, rec, &lsn, envid,
-            flags);
-}
-#endif
-
-static PyObject*
-DBEnv_rep_set_transport(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int envid;
-    PyObject *rep_transport;
-
-    if (!PyArg_ParseTuple(args, "iO:rep_set_transport", &envid, &rep_transport))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-    if (!PyCallable_Check(rep_transport)) {
-        makeTypeError("Callable", rep_transport);
-        return NULL;
-    }
-
-    MYDB_BEGIN_ALLOW_THREADS;
-#if (DBVER >=45)
-    err = self->db_env->rep_set_transport(self->db_env, envid,
-            &_DBEnv_rep_transportCallback);
-#else
-#if (DBVER >= 42)
-    err = self->db_env->set_rep_transport(self->db_env, envid,
-            &_DBEnv_rep_transportCallback);
-#else
-    err = self->db_env->set_rep_transport(self->db_env, envid,
-            &_DBEnv_rep_transportCallbackOLD);
-#endif
-#endif
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    Py_DECREF(self->rep_transport);
-    Py_INCREF(rep_transport);
-    self->rep_transport = rep_transport;
-    RETURN_NONE();
-}
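
The transport callback installed here receives (dbenv, control, rec, (lsn_file, lsn_offset), envid, flags), exactly as assembled in _DBEnv_rep_transportCallback, and messages received from peers are fed back through rep_process_message(control, rec, envid), which returns a two-element tuple per the switch above. A hedged Python sketch; the networking helper and envid values are illustrative, not part of the API:

    from bsddb import db

    LOCAL_ENVID = 1                      # application-chosen site id

    def send_to_peer(dbenv, control, rec, lsn, envid, flags):
        # Deliver the two byte strings to the site identified by envid;
        # my_network.send() is a hypothetical application helper.
        my_network.send(envid, control, rec)

    env = db.DBEnv()
    env.rep_set_transport(LOCAL_ENVID, send_to_peer)

    def on_message_from_peer(env, control, rec, sender_envid):
        result, extra = env.rep_process_message(control, rec, sender_envid)
        # `extra` is None, the raw DB_REP_NEWSITE payload, or an
        # (lsn_file, lsn_offset) pair for DB_REP_ISPERM / DB_REP_NOTPERM.
        return result, extra
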
-
-#if (DBVER >= 47)
-static PyObject*
-DBEnv_rep_set_request(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    unsigned int minimum, maximum;
-
-    if (!PyArg_ParseTuple(args,"II:rep_set_request", &minimum, &maximum))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_set_request(self->db_env, minimum, maximum);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_rep_get_request(DBEnvObject* self)
-{
-    int err;
-    u_int32_t minimum, maximum;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_get_request(self->db_env, &minimum, &maximum);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-#if (PY_VERSION_HEX >= 0x02040000)
-    return Py_BuildValue("II", minimum, maximum);
-#else
-    return Py_BuildValue("ii", minimum, maximum);
-#endif
-}
-#endif
-
-#if (DBVER >= 45)
-static PyObject*
-DBEnv_rep_set_limit(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int limit;
-
-    if (!PyArg_ParseTuple(args,"i:rep_set_limit", &limit))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_set_limit(self->db_env, 0, limit);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_rep_get_limit(DBEnvObject* self)
-{
-    int err;
-    u_int32_t gbytes, bytes;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_get_limit(self->db_env, &gbytes, &bytes);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(bytes);
-}
-#endif
-
-#if (DBVER >= 44)
-static PyObject*
-DBEnv_rep_set_config(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int which;
-    int onoff;
-
-    if (!PyArg_ParseTuple(args,"ii:rep_set_config", &which, &onoff))
-        return NULL;
-    CHECK_ENV_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_set_config(self->db_env, which, onoff);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_rep_get_config(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int which;
-    int onoff;
-
-    if (!PyArg_ParseTuple(args, "i:rep_get_config", &which)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_get_config(self->db_env, which, &onoff);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return PyBool_FromLong(onoff);
-}
-#endif
-
-#if (DBVER >= 46)
-static PyObject*
-DBEnv_rep_elect(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    u_int32_t nsites, nvotes;
-
-    if (!PyArg_ParseTuple(args, "II:rep_elect", &nsites, &nvotes)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_elect(self->db_env, nsites, nvotes, 0);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif
-
-static PyObject*
-DBEnv_rep_start(DBEnvObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err;
-    PyObject *cdata_py = Py_None;
-    DBT cdata;
-    int flags;
-    static char* kwnames[] = {"flags","cdata", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs,
-                "i|O:rep_start", kwnames, &flags, &cdata_py))
-    {
-           return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-
-    if (!make_dbt(cdata_py, &cdata))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_start(self->db_env, cdata.size ? &cdata : NULL,
-            flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-#if (DBVER >= 44)
-static PyObject*
-DBEnv_rep_sync(DBEnvObject* self)
-{
-    int err;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_sync(self->db_env, 0);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-#endif
-
-
-#if (DBVER >= 45)
-static PyObject*
-DBEnv_rep_set_nsites(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int nsites;
-
-    if (!PyArg_ParseTuple(args, "i:rep_set_nsites", &nsites)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_set_nsites(self->db_env, nsites);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_rep_get_nsites(DBEnvObject* self)
-{
-    int err;
-#if (DBVER >= 47)
-    u_int32_t nsites;
-#else
-    int nsites;
-#endif
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_get_nsites(self->db_env, &nsites);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(nsites);
-}
-
-static PyObject*
-DBEnv_rep_set_priority(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int priority;
-
-    if (!PyArg_ParseTuple(args, "i:rep_set_priority", &priority)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_set_priority(self->db_env, priority);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_rep_get_priority(DBEnvObject* self)
-{
-    int err;
-#if (DBVER >= 47)
-    u_int32_t priority;
-#else
-    int priority;
-#endif
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_get_priority(self->db_env, &priority);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(priority);
-}
-
-static PyObject*
-DBEnv_rep_set_timeout(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int which, timeout;
-
-    if (!PyArg_ParseTuple(args, "ii:rep_set_timeout", &which, &timeout)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_set_timeout(self->db_env, which, timeout);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_rep_get_timeout(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int which;
-    u_int32_t timeout;
-
-    if (!PyArg_ParseTuple(args, "i:rep_get_timeout", &which)) {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->rep_get_timeout(self->db_env, which, &timeout);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(timeout);
-}
-#endif
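
Taken together, the setters above amount to a short configuration sequence on the Python side. A sketch under stated assumptions: the environment is already opened with replication and transactions enabled, and constants such as DB_REP_CLIENT, DB_REP_MASTER and DB_REP_ELECTION_TIMEOUT are assumed to be exported alongside the other DB_* flags; version requirements follow the DBVER guards above.

    from bsddb import db

    def configure_replication(env):
        # env: an open db.DBEnv (DB_INIT_REP | DB_INIT_TXN | ... assumed)
        env.rep_set_nsites(3)                # DB >= 4.5
        env.rep_set_priority(100)
        env.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 2 * 1000 * 1000)
        env.rep_set_limit(10 * 1024 * 1024)  # bulk transfer limit, in bytes
        env.rep_start(db.DB_REP_CLIENT)      # or db.DB_REP_MASTER; cdata optional
        env.rep_elect(3, 2)                  # nsites, nvotes (DB >= 4.6)
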
-
-/* --------------------------------------------------------------------- */
-/* REPLICATION METHODS: Replication Manager */
-
-#if (DBVER >= 45)
-static PyObject*
-DBEnv_repmgr_start(DBEnvObject* self, PyObject* args, PyObject*
-        kwargs)
-{
-    int err;
-    int nthreads, flags;
-    static char* kwnames[] = {"nthreads","flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs,
-                "ii:repmgr_start", kwnames, &nthreads, &flags))
-    {
-           return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_start(self->db_env, nthreads, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_repmgr_set_local_site(DBEnvObject* self, PyObject* args, PyObject*
-        kwargs)
-{
-    int err;
-    char *host;
-    int port;
-    int flags = 0;
-    static char* kwnames[] = {"host", "port", "flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs,
-                "si|i:repmgr_set_local_site", kwnames, &host, &port, &flags))
-    {
-           return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_set_local_site(self->db_env, host, port, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_repmgr_add_remote_site(DBEnvObject* self, PyObject* args, PyObject*
-        kwargs)
-{
-    int err;
-    char *host;
-    int port;
-    int flags = 0;
-    int eidp;
-    static char* kwnames[] = {"host", "port", "flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs,
-                "si|i:repmgr_add_remote_site", kwnames, &host, &port, &flags))
-    {
-           return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_add_remote_site(self->db_env, host, port, &eidp, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(eidp);
-}
-
-static PyObject*
-DBEnv_repmgr_set_ack_policy(DBEnvObject* self, PyObject* args)
-{
-    int err;
-    int ack_policy;
-
-    if (!PyArg_ParseTuple(args, "i:repmgr_set_ack_policy", &ack_policy))
-    {
-           return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_set_ack_policy(self->db_env, ack_policy);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_repmgr_get_ack_policy(DBEnvObject* self)
-{
-    int err;
-    int ack_policy;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_get_ack_policy(self->db_env, &ack_policy);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(ack_policy);
-}
-
-static PyObject*
-DBEnv_repmgr_site_list(DBEnvObject* self)
-{
-    int err;
-    unsigned int countp;
-    DB_REPMGR_SITE *listp;
-    PyObject *stats, *key, *tuple;
-
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_site_list(self->db_env, &countp, &listp);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    stats=PyDict_New();
-    if (stats == NULL) {
-        free(listp);
-        return NULL;
-    }
-
-    for(;countp--;) {
-        key=NUMBER_FromLong(listp[countp].eid);
-        if(!key) {
-            Py_DECREF(stats);
-            free(listp);
-            return NULL;
-        }
-#if (PY_VERSION_HEX >= 0x02040000)
-        tuple=Py_BuildValue("(sII)", listp[countp].host,
-                listp[countp].port, listp[countp].status);
-#else
-        tuple=Py_BuildValue("(sii)", listp[countp].host,
-                listp[countp].port, listp[countp].status);
-#endif
-        if(!tuple) {
-            Py_DECREF(key);
-            Py_DECREF(stats);
-            free(listp);
-            return NULL;
-        }
-        if(PyDict_SetItem(stats, key, tuple)) {
-            Py_DECREF(key);
-            Py_DECREF(tuple);
-            Py_DECREF(stats);
-            free(listp);
-            return NULL;
-        }
-    }
-    free(listp);
-    return stats;
-}
-#endif
-
-#if (DBVER >= 46)
-static PyObject*
-DBEnv_repmgr_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs)
-{
-    int err;
-    int flags=0;
-    static char* kwnames[] = { "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:repmgr_stat_print",
-                kwnames, &flags))
-    {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_stat_print(self->db_env, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBEnv_repmgr_stat(DBEnvObject* self, PyObject* args, PyObject *kwargs)
-{
-    int err;
-    int flags=0;
-    DB_REPMGR_STAT *statp;
-    PyObject *stats;
-    static char* kwnames[] = { "flags", NULL };
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:repmgr_stat",
-                kwnames, &flags))
-    {
-        return NULL;
-    }
-    CHECK_ENV_NOT_CLOSED(self);
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->db_env->repmgr_stat(self->db_env, &statp, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    stats=PyDict_New();
-    if (stats == NULL) {
-        free(statp);
-        return NULL;
-    }
-
-#define MAKE_ENTRY(name)  _addIntToDict(stats, #name, statp->st_##name)
-
-    MAKE_ENTRY(perm_failed);
-    MAKE_ENTRY(msgs_queued);
-    MAKE_ENTRY(msgs_dropped);
-    MAKE_ENTRY(connection_drop);
-    MAKE_ENTRY(connect_fail);
-
-#undef MAKE_ENTRY
-
-    free(statp);
-    return stats;
-}
-#endif
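
The replication manager wrappers above reduce to a handful of calls from Python. A rough sketch of wiring up one site; host names, ports and the ack-policy/start-flag constants are illustrative assumptions, and repmgr_stat requires Berkeley DB >= 4.6 per the guard above:

    from bsddb import db

    def start_repmgr(env):
        # env: an open, replication-enabled db.DBEnv (assumed)
        env.repmgr_set_local_site("node1.example.net", 6000)
        eid = env.repmgr_add_remote_site("node2.example.net", 6000)
        env.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ONE)   # constant assumed exported
        env.repmgr_start(3, db.DB_REP_MASTER)              # 3 worker threads

        # repmgr_site_list() returns {eid: (host, port, status), ...}
        sites = env.repmgr_site_list()
        counters = env.repmgr_stat()                       # dict of st_* counters
        return eid, sites, counters
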
-
-
-/* --------------------------------------------------------------------- */
-/* DBTxn methods */
-
-
-static void _close_transaction_cursors(DBTxnObject* txn)
-{
-    PyObject *dummy;
-
-    while(txn->children_cursors) {
-        PyErr_Warn(PyExc_RuntimeWarning,
-            "Must close cursors before resolving a transaction.");
-        dummy=DBC_close_internal(txn->children_cursors);
-        Py_XDECREF(dummy);
-    }
-}
-
-static void _promote_transaction_dbs_and_sequences(DBTxnObject *txn)
-{
-    DBObject *db;
-#if (DBVER >= 43)
-    DBSequenceObject *dbs;
-#endif
-
-    while (txn->children_dbs) {
-        db=txn->children_dbs;
-        EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(db);
-        if (txn->parent_txn) {
-            INSERT_IN_DOUBLE_LINKED_LIST_TXN(txn->parent_txn->children_dbs,db);
-            db->txn=txn->parent_txn;
-        } else {
-            /* The db is already linked to its environment,
-            ** so nothing to do.
-            */
-            db->txn=NULL; 
-        }
-    }
-
-#if (DBVER >= 43)
-    while (txn->children_sequences) {
-        dbs=txn->children_sequences;
-        EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(dbs);
-        if (txn->parent_txn) {
-            INSERT_IN_DOUBLE_LINKED_LIST_TXN(txn->parent_txn->children_sequences,dbs);
-            dbs->txn=txn->parent_txn;
-        } else {
-            /* The sequence is already linked to its
-            ** parent db. Nothing to do.
-            */
-            dbs->txn=NULL;
-        }
-    }
-#endif
-}
-
-
-static PyObject*
-DBTxn_commit(DBTxnObject* self, PyObject* args)
-{
-    int flags=0, err;
-    DB_TXN *txn;
-
-    if (!PyArg_ParseTuple(args, "|i:commit", &flags))
-        return NULL;
-
-    _close_transaction_cursors(self);
-
-    if (!self->txn) {
-        PyObject *t =  Py_BuildValue("(is)", 0, "DBTxn must not be used "
-                                     "after txn_commit, txn_abort "
-                                     "or txn_discard");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return NULL;
-    }
-    self->flag_prepare=0;
-    txn = self->txn;
-    self->txn = NULL;   /* this DB_TXN is no longer valid after this call */
-
-    EXTRACT_FROM_DOUBLE_LINKED_LIST(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = txn->commit(txn, flags);
-    MYDB_END_ALLOW_THREADS;
-
-    _promote_transaction_dbs_and_sequences(self);
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBTxn_prepare(DBTxnObject* self, PyObject* args)
-{
-    int err;
-    char* gid=NULL;
-    int   gid_size=0;
-
-    if (!PyArg_ParseTuple(args, "s#:prepare", &gid, &gid_size))
-        return NULL;
-
-    if (gid_size != DB_XIDDATASIZE) {
-        PyErr_SetString(PyExc_TypeError,
-                        "gid must be DB_XIDDATASIZE bytes long");
-        return NULL;
-    }
-
-    if (!self->txn) {
-        PyObject *t = Py_BuildValue("(is)", 0,"DBTxn must not be used "
-                                    "after txn_commit, txn_abort "
-                                    "or txn_discard");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return NULL;
-    }
-    self->flag_prepare=1;  /* Prepare state */
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->txn->prepare(self->txn, (u_int8_t*)gid);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-
-static PyObject*
-DBTxn_abort_discard_internal(DBTxnObject* self, int discard)
-{
-    PyObject *dummy;
-    int err=0;
-    DB_TXN *txn;
-
-    if (!self->txn) {
-        PyObject *t = Py_BuildValue("(is)", 0, "DBTxn must not be used "
-                                    "after txn_commit, txn_abort "
-                                    "or txn_discard");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return NULL;
-    }
-    txn = self->txn;
-    self->txn = NULL;   /* this DB_TXN is no longer valid after this call */
-
-    _close_transaction_cursors(self);
-#if (DBVER >= 43)
-    while (self->children_sequences) {
-        dummy=DBSequence_close_internal(self->children_sequences,0,0);
-        Py_XDECREF(dummy);
-    }
-#endif
-    while (self->children_dbs) {
-        dummy=DB_close_internal(self->children_dbs,0);
-        Py_XDECREF(dummy);
-    }
-
-    EXTRACT_FROM_DOUBLE_LINKED_LIST(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    if (discard) {
-        assert(!self->flag_prepare);
-        err = txn->discard(txn,0);
-    } else {
-        /*
-        ** If the transaction is in the "prepare" or "recover" state,
-        ** we must not abort it implicitly.
-        */
-        if (!self->flag_prepare) {
-            err = txn->abort(txn);
-        }
-    }
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBTxn_abort(DBTxnObject* self)
-{
-    self->flag_prepare=0;
-    _close_transaction_cursors(self);
-
-    return DBTxn_abort_discard_internal(self,0);
-}
-
-static PyObject*
-DBTxn_discard(DBTxnObject* self)
-{
-    self->flag_prepare=0;
-    _close_transaction_cursors(self);
-
-    return DBTxn_abort_discard_internal(self,1);
-}
-
-
-static PyObject*
-DBTxn_id(DBTxnObject* self)
-{
-    int id;
-
-    if (!self->txn) {
-        PyObject *t = Py_BuildValue("(is)", 0, "DBTxn must not be used "
-                                    "after txn_commit, txn_abort "
-                                    "or txn_discard");
-        if (t) {
-            PyErr_SetObject(DBError, t);
-            Py_DECREF(t);
-        }
-        return NULL;
-    }
-    MYDB_BEGIN_ALLOW_THREADS;
-    id = self->txn->id(self->txn);
-    MYDB_END_ALLOW_THREADS;
-    return NUMBER_FromLong(id);
-}
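
The DBTxn machinery above is what backs the usual explicit-transaction pattern in Python. A minimal, self-contained sketch; the environment path is illustrative and the DB.open/DB.put keyword names follow the legacy bsddb.db conventions:

    from bsddb import db

    env = db.DBEnv()
    env.open("/var/tmp/dbenv", db.DB_CREATE | db.DB_INIT_MPOOL |
             db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN)
    d = db.DB(env)
    d.open("demo.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    txn = env.txn_begin()
    try:
        d.put("key", "value", txn=txn)   # cursors must be closed before resolving
    except db.DBError:
        txn.abort()
        raise
    else:
        txn.commit()
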
-
-#if (DBVER >= 43)
-/* --------------------------------------------------------------------- */
-/* DBSequence methods */
-
-
-static PyObject*
-DBSequence_close_internal(DBSequenceObject* self, int flags, int do_not_close)
-{
-    int err=0;
-
-    if (self->sequence!=NULL) {
-        EXTRACT_FROM_DOUBLE_LINKED_LIST(self);
-        if (self->txn) {
-            EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(self);
-            self->txn=NULL;
-        }
-
-        if (!do_not_close) {
-            MYDB_BEGIN_ALLOW_THREADS
-            err = self->sequence->close(self->sequence, flags);
-            MYDB_END_ALLOW_THREADS
-        }
-        self->sequence = NULL;
-
-        RETURN_IF_ERR();
-    }
-
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_close(DBSequenceObject* self, PyObject* args)
-{
-    int flags=0;
-    if (!PyArg_ParseTuple(args,"|i:close", &flags))
-        return NULL;
-
-    return DBSequence_close_internal(self,flags,0);
-}
-
-static PyObject*
-DBSequence_get(DBSequenceObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags = 0;
-    int delta = 1;
-    db_seq_t value;
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-    static char* kwnames[] = {"delta", "txn", "flags", NULL };
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|iOi:get", kwnames, &delta, &txnobj, &flags))
-        return NULL;
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->get(self->sequence, txn, delta, &value, flags);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    return PyLong_FromLongLong(value);
-}
-
-static PyObject*
-DBSequence_get_dbp(DBSequenceObject* self)
-{
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-    Py_INCREF(self->mydb);
-    return (PyObject* )self->mydb;
-}
-
-static PyObject*
-DBSequence_get_key(DBSequenceObject* self)
-{
-    int err;
-    DBT key;
-    PyObject *retval = NULL;
-
-    key.flags = DB_DBT_MALLOC;
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->get_key(self->sequence, &key);
-    MYDB_END_ALLOW_THREADS
-
-    if (!err)
-        retval = Build_PyString(key.data, key.size);
-
-    FREE_DBT(key);
-    RETURN_IF_ERR();
-
-    return retval;
-}
-
-static PyObject*
-DBSequence_init_value(DBSequenceObject* self, PyObject* args)
-{
-    int err;
-    PY_LONG_LONG value;
-    db_seq_t value2;
-    if (!PyArg_ParseTuple(args,"L:init_value", &value))
-        return NULL;
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    value2=value; /* If truncation, compiler should show a warning */
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->initial_value(self->sequence, value2);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_open(DBSequenceObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags = 0;
-    PyObject* keyobj;
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-    DBT key;
-
-    static char* kwnames[] = {"key", "txn", "flags", NULL };
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:open", kwnames, &keyobj, &txnobj, &flags))
-        return NULL;
-
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    if (!make_key_dbt(self->mydb, keyobj, &key, NULL))
-        return NULL;
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->open(self->sequence, txn, &key, flags);
-    MYDB_END_ALLOW_THREADS
-
-    FREE_DBT(key);
-    RETURN_IF_ERR();
-
-    if (txn) {
-        INSERT_IN_DOUBLE_LINKED_LIST_TXN(((DBTxnObject *)txnobj)->children_sequences,self);
-        self->txn=(DBTxnObject *)txnobj;
-    }
-
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_remove(DBSequenceObject* self, PyObject* args, PyObject* kwargs)
-{
-    PyObject *dummy;
-    int err, flags = 0;
-    PyObject *txnobj = NULL;
-    DB_TXN *txn = NULL;
-
-    static char* kwnames[] = {"txn", "flags", NULL };
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:remove", kwnames, &txnobj, &flags))
-        return NULL;
-
-    if (!checkTxnObj(txnobj, &txn))
-        return NULL;
-
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->remove(self->sequence, txn, flags);
-    MYDB_END_ALLOW_THREADS
-
-    dummy=DBSequence_close_internal(self,flags,1);
-    Py_XDECREF(dummy);
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_set_cachesize(DBSequenceObject* self, PyObject* args)
-{
-    int err, size;
-    if (!PyArg_ParseTuple(args,"i:set_cachesize", &size))
-        return NULL;
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->set_cachesize(self->sequence, size);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_get_cachesize(DBSequenceObject* self)
-{
-    int err, size;
-
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->get_cachesize(self->sequence, &size);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    return NUMBER_FromLong(size);
-}
-
-static PyObject*
-DBSequence_set_flags(DBSequenceObject* self, PyObject* args)
-{
-    int err, flags = 0;
-    if (!PyArg_ParseTuple(args,"i:set_flags", &flags))
-        return NULL;
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->set_flags(self->sequence, flags);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_get_flags(DBSequenceObject* self)
-{
-    unsigned int flags;
-    int err;
-
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->get_flags(self->sequence, &flags);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    return NUMBER_FromLong((int)flags);
-}
-
-static PyObject*
-DBSequence_set_range(DBSequenceObject* self, PyObject* args)
-{
-    int err;
-    PY_LONG_LONG min, max;
-    db_seq_t min2, max2;
-    if (!PyArg_ParseTuple(args,"(LL):set_range", &min, &max))
-        return NULL;
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    min2=min;  /* If truncation, compiler should show a warning */
-    max2=max;
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->set_range(self->sequence, min2, max2);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    RETURN_NONE();
-}
-
-static PyObject*
-DBSequence_get_range(DBSequenceObject* self)
-{
-    int err;
-    PY_LONG_LONG min, max;
-    db_seq_t min2, max2;
-
-    CHECK_SEQUENCE_NOT_CLOSED(self)
-
-    MYDB_BEGIN_ALLOW_THREADS
-    err = self->sequence->get_range(self->sequence, &min2, &max2);
-    MYDB_END_ALLOW_THREADS
-
-    RETURN_IF_ERR();
-    min=min2;  /* If truncation, compiler should show a warning */
-    max=max2;
-    return Py_BuildValue("(LL)", min, max);
-}
-
-static PyObject*
-DBSequence_stat(DBSequenceObject* self, PyObject* args, PyObject* kwargs)
-{
-    int err, flags = 0;
-    DB_SEQUENCE_STAT* sp = NULL;
-    PyObject* dict_stat;
-    static char* kwnames[] = {"flags", NULL };
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat", kwnames, &flags))
-        return NULL;
-    CHECK_SEQUENCE_NOT_CLOSED(self);
-
-    MYDB_BEGIN_ALLOW_THREADS;
-    err = self->sequence->stat(self->sequence, &sp, flags);
-    MYDB_END_ALLOW_THREADS;
-    RETURN_IF_ERR();
-
-    if ((dict_stat = PyDict_New()) == NULL) {
-        free(sp);
-        return NULL;
-    }
-
-
-#define MAKE_INT_ENTRY(name)  _addIntToDict(dict_stat, #name, sp->st_##name)
-#define MAKE_LONG_LONG_ENTRY(name)  _addDb_seq_tToDict(dict_stat, #name, sp->st_##name)
-
-    MAKE_INT_ENTRY(wait);
-    MAKE_INT_ENTRY(nowait);
-    MAKE_LONG_LONG_ENTRY(current);
-    MAKE_LONG_LONG_ENTRY(value);
-    MAKE_LONG_LONG_ENTRY(last_value);
-    MAKE_LONG_LONG_ENTRY(min);
-    MAKE_LONG_LONG_ENTRY(max);
-    MAKE_INT_ENTRY(cache_size);
-    MAKE_INT_ENTRY(flags);
-
-#undef MAKE_INT_ENTRY
-#undef MAKE_LONG_LONG_ENTRY
-
-    free(sp);
-    return dict_stat;
-}
-#endif
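
A short Python sketch of the sequence API wrapped above (Berkeley DB >= 4.3 per the DBVER guard); the key name is illustrative and the DB handle is assumed to be already open:

    from bsddb import db

    def next_invoice_id(d):
        # d: an open db.DB handle
        seq = db.DBSequence(d)
        seq.init_value(1)                  # first value to hand out
        seq.set_cachesize(32)
        seq.open("invoice-id", txn=None, flags=db.DB_CREATE)
        value = seq.get(1)                 # atomically allocate one value
        lo, hi = seq.get_range()
        seq.close()
        return value
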
-
-
-/* --------------------------------------------------------------------- */
-/* Method definition tables and type objects */
-
-static PyMethodDef DB_methods[] = {
-    {"append",          (PyCFunction)DB_append,         METH_VARARGS|METH_KEYWORDS},
-    {"associate",       (PyCFunction)DB_associate,      METH_VARARGS|METH_KEYWORDS},
-    {"close",           (PyCFunction)DB_close,          METH_VARARGS},
-    {"consume",         (PyCFunction)DB_consume,        METH_VARARGS|METH_KEYWORDS},
-    {"consume_wait",    (PyCFunction)DB_consume_wait,   METH_VARARGS|METH_KEYWORDS},
-    {"cursor",          (PyCFunction)DB_cursor,         METH_VARARGS|METH_KEYWORDS},
-    {"delete",          (PyCFunction)DB_delete,         METH_VARARGS|METH_KEYWORDS},
-    {"fd",              (PyCFunction)DB_fd,             METH_NOARGS},
-    {"get",             (PyCFunction)DB_get,            METH_VARARGS|METH_KEYWORDS},
-    {"pget",            (PyCFunction)DB_pget,           METH_VARARGS|METH_KEYWORDS},
-    {"get_both",        (PyCFunction)DB_get_both,       METH_VARARGS|METH_KEYWORDS},
-    {"get_byteswapped", (PyCFunction)DB_get_byteswapped,METH_NOARGS},
-    {"get_size",        (PyCFunction)DB_get_size,       METH_VARARGS|METH_KEYWORDS},
-    {"get_type",        (PyCFunction)DB_get_type,       METH_NOARGS},
-    {"join",            (PyCFunction)DB_join,           METH_VARARGS},
-    {"key_range",       (PyCFunction)DB_key_range,      METH_VARARGS|METH_KEYWORDS},
-    {"has_key",         (PyCFunction)DB_has_key,        METH_VARARGS|METH_KEYWORDS},
-    {"items",           (PyCFunction)DB_items,          METH_VARARGS},
-    {"keys",            (PyCFunction)DB_keys,           METH_VARARGS},
-    {"open",            (PyCFunction)DB_open,           METH_VARARGS|METH_KEYWORDS},
-    {"put",             (PyCFunction)DB_put,            METH_VARARGS|METH_KEYWORDS},
-    {"remove",          (PyCFunction)DB_remove,         METH_VARARGS|METH_KEYWORDS},
-    {"rename",          (PyCFunction)DB_rename,         METH_VARARGS},
-    {"set_bt_minkey",   (PyCFunction)DB_set_bt_minkey,  METH_VARARGS},
-    {"set_bt_compare",  (PyCFunction)DB_set_bt_compare, METH_O},
-    {"set_cachesize",   (PyCFunction)DB_set_cachesize,  METH_VARARGS},
-#if (DBVER >= 41)
-    {"set_encrypt",     (PyCFunction)DB_set_encrypt,    METH_VARARGS|METH_KEYWORDS},
-#endif
-    {"set_flags",       (PyCFunction)DB_set_flags,      METH_VARARGS},
-    {"set_h_ffactor",   (PyCFunction)DB_set_h_ffactor,  METH_VARARGS},
-    {"set_h_nelem",     (PyCFunction)DB_set_h_nelem,    METH_VARARGS},
-    {"set_lorder",      (PyCFunction)DB_set_lorder,     METH_VARARGS},
-    {"set_pagesize",    (PyCFunction)DB_set_pagesize,   METH_VARARGS},
-    {"set_re_delim",    (PyCFunction)DB_set_re_delim,   METH_VARARGS},
-    {"set_re_len",      (PyCFunction)DB_set_re_len,     METH_VARARGS},
-    {"set_re_pad",      (PyCFunction)DB_set_re_pad,     METH_VARARGS},
-    {"set_re_source",   (PyCFunction)DB_set_re_source,  METH_VARARGS},
-    {"set_q_extentsize",(PyCFunction)DB_set_q_extentsize, METH_VARARGS},
-    {"set_private",     (PyCFunction)DB_set_private,    METH_O},
-    {"get_private",     (PyCFunction)DB_get_private,    METH_NOARGS},
-    {"stat",            (PyCFunction)DB_stat,           METH_VARARGS|METH_KEYWORDS},
-    {"sync",            (PyCFunction)DB_sync,           METH_VARARGS},
-    {"truncate",        (PyCFunction)DB_truncate,       METH_VARARGS|METH_KEYWORDS},
-    {"type",            (PyCFunction)DB_get_type,       METH_NOARGS},
-    {"upgrade",         (PyCFunction)DB_upgrade,        METH_VARARGS},
-    {"values",          (PyCFunction)DB_values,         METH_VARARGS},
-    {"verify",          (PyCFunction)DB_verify,         METH_VARARGS|METH_KEYWORDS},
-    {"set_get_returns_none",(PyCFunction)DB_set_get_returns_none,      METH_VARARGS},
-    {NULL,      NULL}       /* sentinel */
-};
-
-
-static PyMappingMethods DB_mapping = {
-        DB_length,                   /*mp_length*/
-        (binaryfunc)DB_subscript,    /*mp_subscript*/
-        (objobjargproc)DB_ass_sub,   /*mp_ass_subscript*/
-};
-
-
-static PyMethodDef DBCursor_methods[] = {
-    {"close",           (PyCFunction)DBC_close,         METH_NOARGS},
-    {"count",           (PyCFunction)DBC_count,         METH_VARARGS},
-    {"current",         (PyCFunction)DBC_current,       METH_VARARGS|METH_KEYWORDS},
-    {"delete",          (PyCFunction)DBC_delete,        METH_VARARGS},
-    {"dup",             (PyCFunction)DBC_dup,           METH_VARARGS},
-    {"first",           (PyCFunction)DBC_first,         METH_VARARGS|METH_KEYWORDS},
-    {"get",             (PyCFunction)DBC_get,           METH_VARARGS|METH_KEYWORDS},
-    {"pget",            (PyCFunction)DBC_pget,          METH_VARARGS|METH_KEYWORDS},
-    {"get_recno",       (PyCFunction)DBC_get_recno,     METH_NOARGS},
-    {"last",            (PyCFunction)DBC_last,          METH_VARARGS|METH_KEYWORDS},
-    {"next",            (PyCFunction)DBC_next,          METH_VARARGS|METH_KEYWORDS},
-    {"prev",            (PyCFunction)DBC_prev,          METH_VARARGS|METH_KEYWORDS},
-    {"put",             (PyCFunction)DBC_put,           METH_VARARGS|METH_KEYWORDS},
-    {"set",             (PyCFunction)DBC_set,           METH_VARARGS|METH_KEYWORDS},
-    {"set_range",       (PyCFunction)DBC_set_range,     METH_VARARGS|METH_KEYWORDS},
-    {"get_both",        (PyCFunction)DBC_get_both,      METH_VARARGS},
-    {"get_current_size",(PyCFunction)DBC_get_current_size, METH_NOARGS},
-    {"set_both",        (PyCFunction)DBC_set_both,      METH_VARARGS},
-    {"set_recno",       (PyCFunction)DBC_set_recno,     METH_VARARGS|METH_KEYWORDS},
-    {"consume",         (PyCFunction)DBC_consume,       METH_VARARGS|METH_KEYWORDS},
-    {"next_dup",        (PyCFunction)DBC_next_dup,      METH_VARARGS|METH_KEYWORDS},
-    {"next_nodup",      (PyCFunction)DBC_next_nodup,    METH_VARARGS|METH_KEYWORDS},
-    {"prev_nodup",      (PyCFunction)DBC_prev_nodup,    METH_VARARGS|METH_KEYWORDS},
-    {"join_item",       (PyCFunction)DBC_join_item,     METH_VARARGS},
-    {NULL,      NULL}       /* sentinel */
-};
-
-
-static PyMethodDef DBEnv_methods[] = {
-    {"close",           (PyCFunction)DBEnv_close,            METH_VARARGS},
-    {"open",            (PyCFunction)DBEnv_open,             METH_VARARGS},
-    {"remove",          (PyCFunction)DBEnv_remove,           METH_VARARGS},
-#if (DBVER >= 41)
-    {"dbremove",        (PyCFunction)DBEnv_dbremove,         METH_VARARGS|METH_KEYWORDS},
-    {"dbrename",        (PyCFunction)DBEnv_dbrename,         METH_VARARGS|METH_KEYWORDS},
-    {"set_encrypt",     (PyCFunction)DBEnv_set_encrypt,      METH_VARARGS|METH_KEYWORDS},
-#endif
-    {"set_timeout",     (PyCFunction)DBEnv_set_timeout,      METH_VARARGS|METH_KEYWORDS},
-    {"set_shm_key",     (PyCFunction)DBEnv_set_shm_key,      METH_VARARGS},
-    {"set_cachesize",   (PyCFunction)DBEnv_set_cachesize,    METH_VARARGS},
-    {"set_data_dir",    (PyCFunction)DBEnv_set_data_dir,     METH_VARARGS},
-    {"set_flags",       (PyCFunction)DBEnv_set_flags,        METH_VARARGS},
-#if (DBVER >= 47)
-    {"log_set_config",  (PyCFunction)DBEnv_log_set_config,   METH_VARARGS},
-#endif
-    {"set_lg_bsize",    (PyCFunction)DBEnv_set_lg_bsize,     METH_VARARGS},
-    {"set_lg_dir",      (PyCFunction)DBEnv_set_lg_dir,       METH_VARARGS},
-    {"set_lg_max",      (PyCFunction)DBEnv_set_lg_max,       METH_VARARGS},
-#if (DBVER >= 42)
-    {"get_lg_max",      (PyCFunction)DBEnv_get_lg_max,       METH_NOARGS},
-#endif
-    {"set_lg_regionmax",(PyCFunction)DBEnv_set_lg_regionmax, METH_VARARGS},
-    {"set_lk_detect",   (PyCFunction)DBEnv_set_lk_detect,    METH_VARARGS},
-#if (DBVER < 45)
-    {"set_lk_max",      (PyCFunction)DBEnv_set_lk_max,       METH_VARARGS},
-#endif
-    {"set_lk_max_locks", (PyCFunction)DBEnv_set_lk_max_locks, METH_VARARGS},
-    {"set_lk_max_lockers", (PyCFunction)DBEnv_set_lk_max_lockers, METH_VARARGS},
-    {"set_lk_max_objects", (PyCFunction)DBEnv_set_lk_max_objects, METH_VARARGS},
-    {"set_mp_mmapsize", (PyCFunction)DBEnv_set_mp_mmapsize,  METH_VARARGS},
-    {"set_tmp_dir",     (PyCFunction)DBEnv_set_tmp_dir,      METH_VARARGS},
-    {"txn_begin",       (PyCFunction)DBEnv_txn_begin,        METH_VARARGS|METH_KEYWORDS},
-    {"txn_checkpoint",  (PyCFunction)DBEnv_txn_checkpoint,   METH_VARARGS},
-    {"txn_stat",        (PyCFunction)DBEnv_txn_stat,         METH_VARARGS},
-    {"set_tx_max",      (PyCFunction)DBEnv_set_tx_max,       METH_VARARGS},
-    {"set_tx_timestamp", (PyCFunction)DBEnv_set_tx_timestamp, METH_VARARGS},
-    {"lock_detect",     (PyCFunction)DBEnv_lock_detect,      METH_VARARGS},
-    {"lock_get",        (PyCFunction)DBEnv_lock_get,         METH_VARARGS},
-    {"lock_id",         (PyCFunction)DBEnv_lock_id,          METH_NOARGS},
-    {"lock_id_free",    (PyCFunction)DBEnv_lock_id_free,     METH_VARARGS},
-    {"lock_put",        (PyCFunction)DBEnv_lock_put,         METH_VARARGS},
-    {"lock_stat",       (PyCFunction)DBEnv_lock_stat,        METH_VARARGS},
-    {"log_archive",     (PyCFunction)DBEnv_log_archive,      METH_VARARGS},
-    {"log_flush",       (PyCFunction)DBEnv_log_flush,        METH_NOARGS},
-    {"log_stat",        (PyCFunction)DBEnv_log_stat,         METH_VARARGS},
-#if (DBVER >= 44)
-    {"lsn_reset",       (PyCFunction)DBEnv_lsn_reset,        METH_VARARGS|METH_KEYWORDS},
-#endif
-    {"set_get_returns_none",(PyCFunction)DBEnv_set_get_returns_none, METH_VARARGS},
-    {"txn_recover",     (PyCFunction)DBEnv_txn_recover,       METH_NOARGS},
-    {"set_rpc_server",  (PyCFunction)DBEnv_set_rpc_server,
-        METH_VARARGS|METH_KEYWORDS},
-    {"set_verbose",     (PyCFunction)DBEnv_set_verbose,       METH_VARARGS},
-#if (DBVER >= 42)
-    {"get_verbose",     (PyCFunction)DBEnv_get_verbose,       METH_VARARGS},
-#endif
-    {"set_private",     (PyCFunction)DBEnv_set_private,       METH_O},
-    {"get_private",     (PyCFunction)DBEnv_get_private,       METH_NOARGS},
-    {"rep_start",       (PyCFunction)DBEnv_rep_start,
-        METH_VARARGS|METH_KEYWORDS},
-    {"rep_set_transport", (PyCFunction)DBEnv_rep_set_transport, METH_VARARGS},
-    {"rep_process_message", (PyCFunction)DBEnv_rep_process_message,
-        METH_VARARGS},
-#if (DBVER >= 46)
-    {"rep_elect",       (PyCFunction)DBEnv_rep_elect,         METH_VARARGS},
-#endif
-#if (DBVER >= 44)
-    {"rep_set_config",  (PyCFunction)DBEnv_rep_set_config,    METH_VARARGS},
-    {"rep_get_config",  (PyCFunction)DBEnv_rep_get_config,    METH_VARARGS},
-    {"rep_sync",        (PyCFunction)DBEnv_rep_sync,          METH_NOARGS},
-#endif
-#if (DBVER >= 45)
-    {"rep_set_limit",   (PyCFunction)DBEnv_rep_set_limit,     METH_VARARGS},
-    {"rep_get_limit",   (PyCFunction)DBEnv_rep_get_limit,     METH_NOARGS},
-#endif
-#if (DBVER >= 47)
-    {"rep_set_request", (PyCFunction)DBEnv_rep_set_request,   METH_VARARGS},
-    {"rep_get_request", (PyCFunction)DBEnv_rep_get_request,   METH_NOARGS},
-#endif
-#if (DBVER >= 45)
-    {"set_event_notify", (PyCFunction)DBEnv_set_event_notify, METH_O},
-#endif
-#if (DBVER >= 45)
-    {"rep_set_nsites", (PyCFunction)DBEnv_rep_set_nsites, METH_VARARGS},
-    {"rep_get_nsites", (PyCFunction)DBEnv_rep_get_nsites, METH_NOARGS},
-    {"rep_set_priority", (PyCFunction)DBEnv_rep_set_priority, METH_VARARGS},
-    {"rep_get_priority", (PyCFunction)DBEnv_rep_get_priority, METH_NOARGS},
-    {"rep_set_timeout", (PyCFunction)DBEnv_rep_set_timeout, METH_VARARGS},
-    {"rep_get_timeout", (PyCFunction)DBEnv_rep_get_timeout, METH_VARARGS},
-#endif
-#if (DBVER >= 45)
-    {"repmgr_start", (PyCFunction)DBEnv_repmgr_start,
-        METH_VARARGS|METH_KEYWORDS},
-    {"repmgr_set_local_site", (PyCFunction)DBEnv_repmgr_set_local_site,
-        METH_VARARGS|METH_KEYWORDS},
-    {"repmgr_add_remote_site", (PyCFunction)DBEnv_repmgr_add_remote_site,
-        METH_VARARGS|METH_KEYWORDS},
-    {"repmgr_set_ack_policy", (PyCFunction)DBEnv_repmgr_set_ack_policy,
-        METH_VARARGS},
-    {"repmgr_get_ack_policy", (PyCFunction)DBEnv_repmgr_get_ack_policy,
-        METH_NOARGS},
-    {"repmgr_site_list", (PyCFunction)DBEnv_repmgr_site_list,
-        METH_NOARGS},
-#endif
-#if (DBVER >= 46)
-    {"repmgr_stat", (PyCFunction)DBEnv_repmgr_stat,
-        METH_VARARGS|METH_KEYWORDS},
-    {"repmgr_stat_print", (PyCFunction)DBEnv_repmgr_stat_print,
-        METH_VARARGS|METH_KEYWORDS},
-#endif
-    {NULL,      NULL}       /* sentinel */
-};
-
-
-static PyMethodDef DBTxn_methods[] = {
-    {"commit",          (PyCFunction)DBTxn_commit,      METH_VARARGS},
-    {"prepare",         (PyCFunction)DBTxn_prepare,     METH_VARARGS},
-    {"discard",         (PyCFunction)DBTxn_discard,     METH_NOARGS},
-    {"abort",           (PyCFunction)DBTxn_abort,       METH_NOARGS},
-    {"id",              (PyCFunction)DBTxn_id,          METH_NOARGS},
-    {NULL,      NULL}       /* sentinel */
-};
-
-
-#if (DBVER >= 43)
-static PyMethodDef DBSequence_methods[] = {
-    {"close",           (PyCFunction)DBSequence_close,          METH_VARARGS},
-    {"get",             (PyCFunction)DBSequence_get,            METH_VARARGS|METH_KEYWORDS},
-    {"get_dbp",         (PyCFunction)DBSequence_get_dbp,        METH_NOARGS},
-    {"get_key",         (PyCFunction)DBSequence_get_key,        METH_NOARGS},
-    {"init_value",      (PyCFunction)DBSequence_init_value,     METH_VARARGS},
-    {"open",            (PyCFunction)DBSequence_open,           METH_VARARGS|METH_KEYWORDS},
-    {"remove",          (PyCFunction)DBSequence_remove,         METH_VARARGS|METH_KEYWORDS},
-    {"set_cachesize",   (PyCFunction)DBSequence_set_cachesize,  METH_VARARGS},
-    {"get_cachesize",   (PyCFunction)DBSequence_get_cachesize,  METH_NOARGS},
-    {"set_flags",       (PyCFunction)DBSequence_set_flags,      METH_VARARGS},
-    {"get_flags",       (PyCFunction)DBSequence_get_flags,      METH_NOARGS},
-    {"set_range",       (PyCFunction)DBSequence_set_range,      METH_VARARGS},
-    {"get_range",       (PyCFunction)DBSequence_get_range,      METH_NOARGS},
-    {"stat",            (PyCFunction)DBSequence_stat,           METH_VARARGS|METH_KEYWORDS},
-    {NULL,      NULL}       /* sentinel */
-};
-#endif
-
-
-static PyObject*
-DBEnv_db_home_get(DBEnvObject* self)
-{
-    const char *home = NULL;
-
-    CHECK_ENV_NOT_CLOSED(self);
-
-#if (DBVER >= 42)
-    self->db_env->get_home(self->db_env, &home);
-#else
-    home=self->db_env->db_home;
-#endif
-
-    if (home == NULL) {
-        RETURN_NONE();
-    }
-    return PyBytes_FromString(home);
-}
-
-static PyGetSetDef DBEnv_getsets[] = {
-    {"db_home", (getter)DBEnv_db_home_get, NULL,},
-    {NULL}
-};
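
The getset above surfaces as a read-only attribute on DBEnv instances; a one-line sketch (env is an open db.DBEnv):

    home = env.db_home    # environment home path, or None if no home is set
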
-
-
-statichere PyTypeObject DB_Type = {
-#if (PY_VERSION_HEX < 0x03000000)
-    PyObject_HEAD_INIT(NULL)
-    0,                  /*ob_size*/
-#else
-    PyVarObject_HEAD_INIT(NULL, 0)
-#endif
-    "DB",               /*tp_name*/
-    sizeof(DBObject),   /*tp_basicsize*/
-    0,                  /*tp_itemsize*/
-    /* methods */
-    (destructor)DB_dealloc, /*tp_dealloc*/
-    0,          /*tp_print*/
-    0,          /*tp_getattr*/
-    0,          /*tp_setattr*/
-    0,          /*tp_compare*/
-    0,          /*tp_repr*/
-    0,          /*tp_as_number*/
-    0,          /*tp_as_sequence*/
-    &DB_mapping,/*tp_as_mapping*/
-    0,          /*tp_hash*/
-    0,                 /* tp_call */
-    0,                 /* tp_str */
-    0,                 /* tp_getattro */
-    0,          /* tp_setattro */
-    0,                 /* tp_as_buffer */
-#if (PY_VERSION_HEX < 0x03000000)
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS,      /* tp_flags */
-#else
-    Py_TPFLAGS_DEFAULT,      /* tp_flags */
-#endif
-    0,          /* tp_doc */
-    0,             /* tp_traverse */
-    0,                 /* tp_clear */
-    0,                 /* tp_richcompare */
-    offsetof(DBObject, in_weakreflist),   /* tp_weaklistoffset */
-    0,          /*tp_iter*/
-    0,          /*tp_iternext*/
-    DB_methods, /*tp_methods*/
-    0, /*tp_members*/
-};
-
-
-statichere PyTypeObject DBCursor_Type = {
-#if (PY_VERSION_HEX < 0x03000000)
-    PyObject_HEAD_INIT(NULL)
-    0,                  /*ob_size*/
-#else
-    PyVarObject_HEAD_INIT(NULL, 0)
-#endif
-    "DBCursor",         /*tp_name*/
-    sizeof(DBCursorObject),  /*tp_basicsize*/
-    0,          /*tp_itemsize*/
-    /* methods */
-    (destructor)DBCursor_dealloc,/*tp_dealloc*/
-    0,          /*tp_print*/
-    0,          /*tp_getattr*/
-    0,          /*tp_setattr*/
-    0,          /*tp_compare*/
-    0,          /*tp_repr*/
-    0,          /*tp_as_number*/
-    0,          /*tp_as_sequence*/
-    0,          /*tp_as_mapping*/
-    0,          /*tp_hash*/
-    0,          /*tp_call*/
-    0,          /*tp_str*/
-    0,          /*tp_getattro*/
-    0,          /*tp_setattro*/
-    0,          /*tp_as_buffer*/
-#if (PY_VERSION_HEX < 0x03000000)
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS,      /* tp_flags */
-#else
-    Py_TPFLAGS_DEFAULT,      /* tp_flags */
-#endif
-    0,          /* tp_doc */
-    0,          /* tp_traverse */
-    0,          /* tp_clear */
-    0,          /* tp_richcompare */
-    offsetof(DBCursorObject, in_weakreflist),   /* tp_weaklistoffset */
-    0,          /*tp_iter*/
-    0,          /*tp_iternext*/
-    DBCursor_methods, /*tp_methods*/
-    0,          /*tp_members*/
-};
-
-
-statichere PyTypeObject DBEnv_Type = {
-#if (PY_VERSION_HEX < 0x03000000)
-    PyObject_HEAD_INIT(NULL)
-    0,                  /*ob_size*/
-#else
-    PyVarObject_HEAD_INIT(NULL, 0)
-#endif
-    "DBEnv",            /*tp_name*/
-    sizeof(DBEnvObject),    /*tp_basicsize*/
-    0,          /*tp_itemsize*/
-    /* methods */
-    (destructor)DBEnv_dealloc, /*tp_dealloc*/
-    0,          /*tp_print*/
-    0,          /*tp_getattr*/
-    0,          /*tp_setattr*/
-    0,          /*tp_compare*/
-    0,          /*tp_repr*/
-    0,          /*tp_as_number*/
-    0,          /*tp_as_sequence*/
-    0,          /*tp_as_mapping*/
-    0,          /*tp_hash*/
-    0,                 /* tp_call */
-    0,                 /* tp_str */
-    0,                 /* tp_getattro */
-    0,          /* tp_setattro */
-    0,                 /* tp_as_buffer */
-#if (PY_VERSION_HEX < 0x03000000)
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS,      /* tp_flags */
-#else
-    Py_TPFLAGS_DEFAULT,      /* tp_flags */
-#endif
-    0,          /* tp_doc */
-    0,             /* tp_traverse */
-    0,                 /* tp_clear */
-    0,                 /* tp_richcompare */
-    offsetof(DBEnvObject, in_weakreflist),   /* tp_weaklistoffset */
-    0,          /* tp_iter */
-    0,          /* tp_iternext */
-    DBEnv_methods,      /* tp_methods */
-    0,          /* tp_members */
-    DBEnv_getsets,      /* tp_getsets */
-};
-
-statichere PyTypeObject DBTxn_Type = {
-#if (PY_VERSION_HEX < 0x03000000)
-    PyObject_HEAD_INIT(NULL)
-    0,                  /*ob_size*/
-#else
-    PyVarObject_HEAD_INIT(NULL, 0)
-#endif
-    "DBTxn",    /*tp_name*/
-    sizeof(DBTxnObject),  /*tp_basicsize*/
-    0,          /*tp_itemsize*/
-    /* methods */
-    (destructor)DBTxn_dealloc, /*tp_dealloc*/
-    0,          /*tp_print*/
-    0,          /*tp_getattr*/
-    0,          /*tp_setattr*/
-    0,          /*tp_compare*/
-    0,          /*tp_repr*/
-    0,          /*tp_as_number*/
-    0,          /*tp_as_sequence*/
-    0,          /*tp_as_mapping*/
-    0,          /*tp_hash*/
-    0,                 /* tp_call */
-    0,                 /* tp_str */
-    0,                 /* tp_getattro */
-    0,          /* tp_setattro */
-    0,                 /* tp_as_buffer */
-#if (PY_VERSION_HEX < 0x03000000)
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS,      /* tp_flags */
-#else
-    Py_TPFLAGS_DEFAULT,      /* tp_flags */
-#endif
-    0,          /* tp_doc */
-    0,         /* tp_traverse */
-    0,                 /* tp_clear */
-    0,                 /* tp_richcompare */
-    offsetof(DBTxnObject, in_weakreflist),   /* tp_weaklistoffset */
-    0,          /*tp_iter*/
-    0,          /*tp_iternext*/
-    DBTxn_methods, /*tp_methods*/
-    0,          /*tp_members*/
-};
-
-
-statichere PyTypeObject DBLock_Type = {
-#if (PY_VERSION_HEX < 0x03000000)
-    PyObject_HEAD_INIT(NULL)
-    0,                  /*ob_size*/
-#else
-    PyVarObject_HEAD_INIT(NULL, 0)
-#endif
-    "DBLock",   /*tp_name*/
-    sizeof(DBLockObject),  /*tp_basicsize*/
-    0,          /*tp_itemsize*/
-    /* methods */
-    (destructor)DBLock_dealloc, /*tp_dealloc*/
-    0,          /*tp_print*/
-    0,          /*tp_getattr*/
-    0,          /*tp_setattr*/
-    0,          /*tp_compare*/
-    0,          /*tp_repr*/
-    0,          /*tp_as_number*/
-    0,          /*tp_as_sequence*/
-    0,          /*tp_as_mapping*/
-    0,          /*tp_hash*/
-    0,                 /* tp_call */
-    0,                 /* tp_str */
-    0,                 /* tp_getattro */
-    0,          /* tp_setattro */
-    0,                 /* tp_as_buffer */
-#if (PY_VERSION_HEX < 0x03000000)
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS,      /* tp_flags */
-#else
-    Py_TPFLAGS_DEFAULT,      /* tp_flags */
-#endif
-    0,          /* tp_doc */
-    0,             /* tp_traverse */
-    0,                 /* tp_clear */
-    0,                 /* tp_richcompare */
-    offsetof(DBLockObject, in_weakreflist),   /* tp_weaklistoffset */
-};
-
-#if (DBVER >= 43)
-statichere PyTypeObject DBSequence_Type = {
-#if (PY_VERSION_HEX < 0x03000000)
-    PyObject_HEAD_INIT(NULL)
-    0,                  /*ob_size*/
-#else
-    PyVarObject_HEAD_INIT(NULL, 0)
-#endif
-    "DBSequence",                   /*tp_name*/
-    sizeof(DBSequenceObject),       /*tp_basicsize*/
-    0,          /*tp_itemsize*/
-    /* methods */
-    (destructor)DBSequence_dealloc, /*tp_dealloc*/
-    0,          /*tp_print*/
-    0,          /*tp_getattr*/
-    0,          /*tp_setattr*/
-    0,          /*tp_compare*/
-    0,          /*tp_repr*/
-    0,          /*tp_as_number*/
-    0,          /*tp_as_sequence*/
-    0,          /*tp_as_mapping*/
-    0,          /*tp_hash*/
-    0,                 /* tp_call */
-    0,                 /* tp_str */
-    0,                 /* tp_getattro */
-    0,          /* tp_setattro */
-    0,                 /* tp_as_buffer */
-#if (PY_VERSION_HEX < 0x03000000)
-    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS,      /* tp_flags */
-#else
-    Py_TPFLAGS_DEFAULT,      /* tp_flags */
-#endif
-    0,          /* tp_doc */
-    0,             /* tp_traverse */
-    0,                 /* tp_clear */
-    0,                 /* tp_richcompare */
-    offsetof(DBSequenceObject, in_weakreflist),   /* tp_weaklistoffset */
-    0,          /*tp_iter*/
-    0,          /*tp_iternext*/
-    DBSequence_methods, /*tp_methods*/
-    0,          /*tp_members*/
-};
-#endif
-
-/* --------------------------------------------------------------------- */
-/* Module-level functions */
-
-static PyObject*
-DB_construct(PyObject* self, PyObject* args, PyObject* kwargs)
-{
-    PyObject* dbenvobj = NULL;
-    int flags = 0;
-    static char* kwnames[] = { "dbEnv", "flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:DB", kwnames,
-                                     &dbenvobj, &flags))
-        return NULL;
-    if (dbenvobj == Py_None)
-        dbenvobj = NULL;
-    else if (dbenvobj && !DBEnvObject_Check(dbenvobj)) {
-        makeTypeError("DBEnv", dbenvobj);
-        return NULL;
-    }
-
-    return (PyObject* )newDBObject((DBEnvObject*)dbenvobj, flags);
-}
-
-
-static PyObject*
-DBEnv_construct(PyObject* self, PyObject* args)
-{
-    int flags = 0;
-    if (!PyArg_ParseTuple(args, "|i:DbEnv", &flags)) return NULL;
-    return (PyObject* )newDBEnvObject(flags);
-}
-
-#if (DBVER >= 43)
-static PyObject*
-DBSequence_construct(PyObject* self, PyObject* args, PyObject* kwargs)
-{
-    PyObject* dbobj;
-    int flags = 0;
-    static char* kwnames[] = { "db", "flags", NULL};
-
-    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i:DBSequence", kwnames, &dbobj, &flags))
-        return NULL;
-    if (!DBObject_Check(dbobj)) {
-        makeTypeError("DB", dbobj);
-        return NULL;
-    }
-    return (PyObject* )newDBSequenceObject((DBObject*)dbobj, flags);
-}
-#endif
-
-static char bsddb_version_doc[] =
-"Returns a tuple of major, minor, and patch release numbers of the\n\
-underlying DB library.";
-
-static PyObject*
-bsddb_version(PyObject* self)
-{
-    int major, minor, patch;
-
-    db_version(&major, &minor, &patch);
-    return Py_BuildValue("(iii)", major, minor, patch);
-}
-
-
-/* List of functions defined in the module */
-static PyMethodDef bsddb_methods[] = {
-    {"DB",          (PyCFunction)DB_construct,          METH_VARARGS | METH_KEYWORDS },
-    {"DBEnv",       (PyCFunction)DBEnv_construct,       METH_VARARGS},
-#if (DBVER >= 43)
-    {"DBSequence",  (PyCFunction)DBSequence_construct,  METH_VARARGS | METH_KEYWORDS },
-#endif
-    {"version",     (PyCFunction)bsddb_version,         METH_NOARGS, bsddb_version_doc},
-    {NULL,      NULL}       /* sentinel */
-};
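
These module-level factories are what the Python-visible names resolve to; a quick sketch of each (the module is reached as bsddb.db in the package layout):

    from bsddb import db

    major, minor, patch = db.version()    # release numbers of the DB library

    env = db.DBEnv()                      # optional flags argument
    d = db.DB(env)                        # dbEnv optional; standalone handle if omitted
    if (major, minor) >= (4, 3):
        seq = db.DBSequence(d)            # only compiled in for DB >= 4.3
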
-
-
-/* API structure */
-static BSDDB_api bsddb_api;
-
-
-/* --------------------------------------------------------------------- */
-/* Module initialization */
-
-
-/* Convenience routine to export an integer value.
- * Errors are silently ignored, for better or for worse...
- */
-#define ADD_INT(dict, NAME)         _addIntToDict(dict, #NAME, NAME)
-
-#define MODULE_NAME_MAX_LEN     11
-static char _bsddbModuleName[MODULE_NAME_MAX_LEN+1] = "_bsddb";
-
-#if (PY_VERSION_HEX >= 0x03000000)
-static struct PyModuleDef bsddbmodule = {
-    PyModuleDef_HEAD_INIT,
-    _bsddbModuleName,   /* Name of module */
-    NULL,               /* module documentation, may be NULL */
-    -1,                 /* size of per-interpreter state of the module,
-                            or -1 if the module keeps state in global variables. */
-    bsddb_methods,
-    NULL,   /* Reload */
-    NULL,   /* Traverse */
-    NULL,   /* Clear */
-    NULL    /* Free */
-};
-#endif
-
-
-#if (PY_VERSION_HEX < 0x03000000)
-DL_EXPORT(void) init_bsddb(void)
-#else
-PyMODINIT_FUNC  PyInit__bsddb(void)    /* Note the two underscores */
-#endif
-{
-    PyObject* m;
-    PyObject* d;
-    PyObject* pybsddb_version_s = PyBytes_FromString( PY_BSDDB_VERSION );
-    PyObject* db_version_s = PyBytes_FromString( DB_VERSION_STRING );
-    PyObject* cvsid_s = PyBytes_FromString( rcs_id );
-    PyObject* py_api;
-
-    /* Initialize object types */
-    if ((PyType_Ready(&DB_Type) < 0)
-        || (PyType_Ready(&DBCursor_Type) < 0)
-        || (PyType_Ready(&DBEnv_Type) < 0)
-        || (PyType_Ready(&DBTxn_Type) < 0)
-        || (PyType_Ready(&DBLock_Type) < 0)
-#if (DBVER >= 43)
-        || (PyType_Ready(&DBSequence_Type) < 0)
-#endif
-        ) {
-#if (PY_VERSION_HEX < 0x03000000)
-        return;
-#else
-        return NULL;
-#endif
-    }
-
-#if defined(WITH_THREAD) && !defined(MYDB_USE_GILSTATE)
-    /* Save the current interpreter, so callbacks can do the right thing. */
-    _db_interpreterState = PyThreadState_GET()->interp;
-#endif
-
-    /* Create the module and add the functions */
-#if (PY_VERSION_HEX < 0x03000000)
-    m = Py_InitModule(_bsddbModuleName, bsddb_methods);
-#else
-    m=PyModule_Create(&bsddbmodule);
-#endif
-    if (m == NULL) {
-#if (PY_VERSION_HEX < 0x03000000)
-        return;
-#else
-       return NULL;
-#endif
-    }
-
-    /* Add some symbolic constants to the module */
-    d = PyModule_GetDict(m);
-    PyDict_SetItemString(d, "__version__", pybsddb_version_s);
-    PyDict_SetItemString(d, "cvsid", cvsid_s);
-    PyDict_SetItemString(d, "DB_VERSION_STRING", db_version_s);
-    Py_DECREF(pybsddb_version_s);
-    pybsddb_version_s = NULL;
-    Py_DECREF(cvsid_s);
-    cvsid_s = NULL;
-    Py_DECREF(db_version_s);
-    db_version_s = NULL;
-
-    ADD_INT(d, DB_VERSION_MAJOR);
-    ADD_INT(d, DB_VERSION_MINOR);
-    ADD_INT(d, DB_VERSION_PATCH);
-
-    ADD_INT(d, DB_MAX_PAGES);
-    ADD_INT(d, DB_MAX_RECORDS);
-
-#if (DBVER >= 42)
-    ADD_INT(d, DB_RPCCLIENT);
-#else
-    ADD_INT(d, DB_CLIENT);
-    /* allow apps to be written using DB_RPCCLIENT on older Berkeley DB */
-    _addIntToDict(d, "DB_RPCCLIENT", DB_CLIENT);
-#endif
-    ADD_INT(d, DB_XA_CREATE);
-
-    ADD_INT(d, DB_CREATE);
-    ADD_INT(d, DB_NOMMAP);
-    ADD_INT(d, DB_THREAD);
-#if (DBVER >= 45)
-    ADD_INT(d, DB_MULTIVERSION);
-#endif
-
-    ADD_INT(d, DB_FORCE);
-    ADD_INT(d, DB_INIT_CDB);
-    ADD_INT(d, DB_INIT_LOCK);
-    ADD_INT(d, DB_INIT_LOG);
-    ADD_INT(d, DB_INIT_MPOOL);
-    ADD_INT(d, DB_INIT_TXN);
-    ADD_INT(d, DB_JOINENV);
-
-    ADD_INT(d, DB_XIDDATASIZE);
-
-    ADD_INT(d, DB_RECOVER);
-    ADD_INT(d, DB_RECOVER_FATAL);
-    ADD_INT(d, DB_TXN_NOSYNC);
-    ADD_INT(d, DB_USE_ENVIRON);
-    ADD_INT(d, DB_USE_ENVIRON_ROOT);
-
-    ADD_INT(d, DB_LOCKDOWN);
-    ADD_INT(d, DB_PRIVATE);
-    ADD_INT(d, DB_SYSTEM_MEM);
-
-    ADD_INT(d, DB_TXN_SYNC);
-    ADD_INT(d, DB_TXN_NOWAIT);
-
-    ADD_INT(d, DB_EXCL);
-    ADD_INT(d, DB_FCNTL_LOCKING);
-    ADD_INT(d, DB_ODDFILESIZE);
-    ADD_INT(d, DB_RDWRMASTER);
-    ADD_INT(d, DB_RDONLY);
-    ADD_INT(d, DB_TRUNCATE);
-    ADD_INT(d, DB_EXTENT);
-    ADD_INT(d, DB_CDB_ALLDB);
-    ADD_INT(d, DB_VERIFY);
-    ADD_INT(d, DB_UPGRADE);
-
-    ADD_INT(d, DB_AGGRESSIVE);
-    ADD_INT(d, DB_NOORDERCHK);
-    ADD_INT(d, DB_ORDERCHKONLY);
-    ADD_INT(d, DB_PR_PAGE);
-
-    ADD_INT(d, DB_PR_RECOVERYTEST);
-    ADD_INT(d, DB_SALVAGE);
-
-    ADD_INT(d, DB_LOCK_NORUN);
-    ADD_INT(d, DB_LOCK_DEFAULT);
-    ADD_INT(d, DB_LOCK_OLDEST);
-    ADD_INT(d, DB_LOCK_RANDOM);
-    ADD_INT(d, DB_LOCK_YOUNGEST);
-    ADD_INT(d, DB_LOCK_MAXLOCKS);
-    ADD_INT(d, DB_LOCK_MINLOCKS);
-    ADD_INT(d, DB_LOCK_MINWRITE);
-
-    ADD_INT(d, DB_LOCK_EXPIRE);
-#if (DBVER >= 43)
-    ADD_INT(d, DB_LOCK_MAXWRITE);
-#endif
-
-    _addIntToDict(d, "DB_LOCK_CONFLICT", 0);
-
-    ADD_INT(d, DB_LOCK_DUMP);
-    ADD_INT(d, DB_LOCK_GET);
-    ADD_INT(d, DB_LOCK_INHERIT);
-    ADD_INT(d, DB_LOCK_PUT);
-    ADD_INT(d, DB_LOCK_PUT_ALL);
-    ADD_INT(d, DB_LOCK_PUT_OBJ);
-
-    ADD_INT(d, DB_LOCK_NG);
-    ADD_INT(d, DB_LOCK_READ);
-    ADD_INT(d, DB_LOCK_WRITE);
-    ADD_INT(d, DB_LOCK_NOWAIT);
-    ADD_INT(d, DB_LOCK_WAIT);
-    ADD_INT(d, DB_LOCK_IWRITE);
-    ADD_INT(d, DB_LOCK_IREAD);
-    ADD_INT(d, DB_LOCK_IWR);
-#if (DBVER < 44)
-    ADD_INT(d, DB_LOCK_DIRTY);
-#else
-    ADD_INT(d, DB_LOCK_READ_UNCOMMITTED);  /* renamed in 4.4 */
-#endif
-    ADD_INT(d, DB_LOCK_WWRITE);
-
-    ADD_INT(d, DB_LOCK_RECORD);
-    ADD_INT(d, DB_LOCK_UPGRADE);
-    ADD_INT(d, DB_LOCK_SWITCH);
-    ADD_INT(d, DB_LOCK_UPGRADE_WRITE);
-
-
-    ADD_INT(d, DB_LSTAT_ABORTED);
-#if (DBVER < 43)
-    ADD_INT(d, DB_LSTAT_ERR);
-#endif
-    ADD_INT(d, DB_LSTAT_FREE);
-    ADD_INT(d, DB_LSTAT_HELD);
-
-    ADD_INT(d, DB_LSTAT_PENDING);
-    ADD_INT(d, DB_LSTAT_WAITING);
-
-    ADD_INT(d, DB_ARCH_ABS);
-    ADD_INT(d, DB_ARCH_DATA);
-    ADD_INT(d, DB_ARCH_LOG);
-#if (DBVER >= 42)
-    ADD_INT(d, DB_ARCH_REMOVE);
-#endif
-
-    ADD_INT(d, DB_BTREE);
-    ADD_INT(d, DB_HASH);
-    ADD_INT(d, DB_RECNO);
-    ADD_INT(d, DB_QUEUE);
-    ADD_INT(d, DB_UNKNOWN);
-
-    ADD_INT(d, DB_DUP);
-    ADD_INT(d, DB_DUPSORT);
-    ADD_INT(d, DB_RECNUM);
-    ADD_INT(d, DB_RENUMBER);
-    ADD_INT(d, DB_REVSPLITOFF);
-    ADD_INT(d, DB_SNAPSHOT);
-
-    ADD_INT(d, DB_JOIN_NOSORT);
-
-    ADD_INT(d, DB_AFTER);
-    ADD_INT(d, DB_APPEND);
-    ADD_INT(d, DB_BEFORE);
-#if (DBVER < 45)
-    ADD_INT(d, DB_CACHED_COUNTS);
-#endif
-
-#if (DBVER >= 41)
-    _addIntToDict(d, "DB_CHECKPOINT", 0);
-#else
-    ADD_INT(d, DB_CHECKPOINT);
-    ADD_INT(d, DB_CURLSN);
-#endif
-#if (DBVER <= 41)
-    ADD_INT(d, DB_COMMIT);
-#endif
-    ADD_INT(d, DB_CONSUME);
-    ADD_INT(d, DB_CONSUME_WAIT);
-    ADD_INT(d, DB_CURRENT);
-    ADD_INT(d, DB_FAST_STAT);
-    ADD_INT(d, DB_FIRST);
-    ADD_INT(d, DB_FLUSH);
-    ADD_INT(d, DB_GET_BOTH);
-    ADD_INT(d, DB_GET_RECNO);
-    ADD_INT(d, DB_JOIN_ITEM);
-    ADD_INT(d, DB_KEYFIRST);
-    ADD_INT(d, DB_KEYLAST);
-    ADD_INT(d, DB_LAST);
-    ADD_INT(d, DB_NEXT);
-    ADD_INT(d, DB_NEXT_DUP);
-    ADD_INT(d, DB_NEXT_NODUP);
-    ADD_INT(d, DB_NODUPDATA);
-    ADD_INT(d, DB_NOOVERWRITE);
-    ADD_INT(d, DB_NOSYNC);
-    ADD_INT(d, DB_POSITION);
-    ADD_INT(d, DB_PREV);
-    ADD_INT(d, DB_PREV_NODUP);
-#if (DBVER < 45)
-    ADD_INT(d, DB_RECORDCOUNT);
-#endif
-    ADD_INT(d, DB_SET);
-    ADD_INT(d, DB_SET_RANGE);
-    ADD_INT(d, DB_SET_RECNO);
-    ADD_INT(d, DB_WRITECURSOR);
-
-    ADD_INT(d, DB_OPFLAGS_MASK);
-    ADD_INT(d, DB_RMW);
-    ADD_INT(d, DB_DIRTY_READ);
-    ADD_INT(d, DB_MULTIPLE);
-    ADD_INT(d, DB_MULTIPLE_KEY);
-
-#if (DBVER >= 44)
-    ADD_INT(d, DB_READ_UNCOMMITTED);    /* replaces DB_DIRTY_READ in 4.4 */
-    ADD_INT(d, DB_READ_COMMITTED);
-#endif
-
-    ADD_INT(d, DB_DONOTINDEX);
-
-#if (DBVER >= 41)
-    _addIntToDict(d, "DB_INCOMPLETE", 0);
-#else
-    ADD_INT(d, DB_INCOMPLETE);
-#endif
-    ADD_INT(d, DB_KEYEMPTY);
-    ADD_INT(d, DB_KEYEXIST);
-    ADD_INT(d, DB_LOCK_DEADLOCK);
-    ADD_INT(d, DB_LOCK_NOTGRANTED);
-    ADD_INT(d, DB_NOSERVER);
-    ADD_INT(d, DB_NOSERVER_HOME);
-    ADD_INT(d, DB_NOSERVER_ID);
-    ADD_INT(d, DB_NOTFOUND);
-    ADD_INT(d, DB_OLD_VERSION);
-    ADD_INT(d, DB_RUNRECOVERY);
-    ADD_INT(d, DB_VERIFY_BAD);
-    ADD_INT(d, DB_PAGE_NOTFOUND);
-    ADD_INT(d, DB_SECONDARY_BAD);
-    ADD_INT(d, DB_STAT_CLEAR);
-    ADD_INT(d, DB_REGION_INIT);
-    ADD_INT(d, DB_NOLOCKING);
-    ADD_INT(d, DB_YIELDCPU);
-    ADD_INT(d, DB_PANIC_ENVIRONMENT);
-    ADD_INT(d, DB_NOPANIC);
-
-#if (DBVER >= 41)
-    ADD_INT(d, DB_OVERWRITE);
-#endif
-
-#ifdef DB_REGISTER
-    ADD_INT(d, DB_REGISTER);
-#endif
-
-#if (DBVER >= 42)
-    ADD_INT(d, DB_TIME_NOTGRANTED);
-    ADD_INT(d, DB_TXN_NOT_DURABLE);
-    ADD_INT(d, DB_TXN_WRITE_NOSYNC);
-    ADD_INT(d, DB_DIRECT_DB);
-    ADD_INT(d, DB_INIT_REP);
-    ADD_INT(d, DB_ENCRYPT);
-    ADD_INT(d, DB_CHKSUM);
-#endif
-
-#if (DBVER >= 42) && (DBVER < 47)
-    ADD_INT(d, DB_LOG_AUTOREMOVE);
-    ADD_INT(d, DB_DIRECT_LOG);
-#endif
-
-#if (DBVER >= 47)
-    ADD_INT(d, DB_LOG_DIRECT);
-    ADD_INT(d, DB_LOG_DSYNC);
-    ADD_INT(d, DB_LOG_IN_MEMORY);
-    ADD_INT(d, DB_LOG_AUTO_REMOVE);
-    ADD_INT(d, DB_LOG_ZERO);
-#endif
-
-#if (DBVER >= 44)
-    ADD_INT(d, DB_DSYNC_DB);
-#endif
-
-#if (DBVER >= 45)
-    ADD_INT(d, DB_TXN_SNAPSHOT);
-#endif
-
-    ADD_INT(d, DB_VERB_DEADLOCK);
-#if (DBVER >= 46)
-    ADD_INT(d, DB_VERB_FILEOPS);
-    ADD_INT(d, DB_VERB_FILEOPS_ALL);
-#endif
-    ADD_INT(d, DB_VERB_RECOVERY);
-#if (DBVER >= 44)
-    ADD_INT(d, DB_VERB_REGISTER);
-#endif
-    ADD_INT(d, DB_VERB_REPLICATION);
-    ADD_INT(d, DB_VERB_WAITSFOR);
-
-#if (DBVER >= 45)
-    ADD_INT(d, DB_EVENT_PANIC);
-    ADD_INT(d, DB_EVENT_REP_CLIENT);
-#if (DBVER >= 46)
-    ADD_INT(d, DB_EVENT_REP_ELECTED);
-#endif
-    ADD_INT(d, DB_EVENT_REP_MASTER);
-    ADD_INT(d, DB_EVENT_REP_NEWMASTER);
-#if (DBVER >= 46)
-    ADD_INT(d, DB_EVENT_REP_PERM_FAILED);
-#endif
-    ADD_INT(d, DB_EVENT_REP_STARTUPDONE);
-    ADD_INT(d, DB_EVENT_WRITE_FAILED);
-#endif
-
-    ADD_INT(d, DB_REP_DUPMASTER);
-    ADD_INT(d, DB_REP_HOLDELECTION);
-#if (DBVER >= 44)
-    ADD_INT(d, DB_REP_IGNORE);
-    ADD_INT(d, DB_REP_JOIN_FAILURE);
-#endif
-#if (DBVER >= 42)
-    ADD_INT(d, DB_REP_ISPERM);
-    ADD_INT(d, DB_REP_NOTPERM);
-#endif
-    ADD_INT(d, DB_REP_NEWSITE);
-
-    ADD_INT(d, DB_REP_MASTER);
-    ADD_INT(d, DB_REP_CLIENT);
-#if (DBVER >= 45)
-    ADD_INT(d, DB_REP_ELECTION);
-
-    ADD_INT(d, DB_REP_ACK_TIMEOUT);
-    ADD_INT(d, DB_REP_CONNECTION_RETRY);
-    ADD_INT(d, DB_REP_ELECTION_TIMEOUT);
-    ADD_INT(d, DB_REP_ELECTION_RETRY);
-#endif
-#if (DBVER >= 46)
-    ADD_INT(d, DB_REP_CHECKPOINT_DELAY);
-    ADD_INT(d, DB_REP_FULL_ELECTION_TIMEOUT);
-#endif
-
-#if (DBVER >= 45)
-    ADD_INT(d, DB_REPMGR_PEER);
-    ADD_INT(d, DB_REPMGR_ACKS_ALL);
-    ADD_INT(d, DB_REPMGR_ACKS_ALL_PEERS);
-    ADD_INT(d, DB_REPMGR_ACKS_NONE);
-    ADD_INT(d, DB_REPMGR_ACKS_ONE);
-    ADD_INT(d, DB_REPMGR_ACKS_ONE_PEER);
-    ADD_INT(d, DB_REPMGR_ACKS_QUORUM);
-    ADD_INT(d, DB_REPMGR_CONNECTED);
-    ADD_INT(d, DB_REPMGR_DISCONNECTED);
-    ADD_INT(d, DB_STAT_CLEAR);
-    ADD_INT(d, DB_STAT_ALL);
-#endif
-
-#if (DBVER >= 43)
-    ADD_INT(d, DB_BUFFER_SMALL);
-    ADD_INT(d, DB_SEQ_DEC);
-    ADD_INT(d, DB_SEQ_INC);
-    ADD_INT(d, DB_SEQ_WRAP);
-#endif
-
-#if (DBVER >= 43) && (DBVER < 47)
-    ADD_INT(d, DB_LOG_INMEMORY);
-    ADD_INT(d, DB_DSYNC_LOG);
-#endif
-
-#if (DBVER >= 41)
-    ADD_INT(d, DB_ENCRYPT_AES);
-    ADD_INT(d, DB_AUTO_COMMIT);
-#else
-    /* allow Berkeley DB 4.1 aware apps to run on older versions */
-    _addIntToDict(d, "DB_AUTO_COMMIT", 0);
-#endif
-
-    ADD_INT(d, EINVAL);
-    ADD_INT(d, EACCES);
-    ADD_INT(d, ENOSPC);
-    ADD_INT(d, ENOMEM);
-    ADD_INT(d, EAGAIN);
-    ADD_INT(d, EBUSY);
-    ADD_INT(d, EEXIST);
-    ADD_INT(d, ENOENT);
-    ADD_INT(d, EPERM);
-
-    ADD_INT(d, DB_SET_LOCK_TIMEOUT);
-    ADD_INT(d, DB_SET_TXN_TIMEOUT);
-
-    /* The exception name must be correct for pickled exception
-     * objects to unpickle properly. */
-#ifdef PYBSDDB_STANDALONE  /* different value needed for standalone pybsddb */
-#define PYBSDDB_EXCEPTION_BASE  "bsddb3.db."
-#else
-#define PYBSDDB_EXCEPTION_BASE  "bsddb.db."
-#endif
-
-    /* All the rest of the exceptions derive only from DBError */
-#define MAKE_EX(name)   name = PyErr_NewException(PYBSDDB_EXCEPTION_BASE #name, DBError, NULL); \
-                        PyDict_SetItemString(d, #name, name)
-
-    /* The base exception class is DBError */
-    DBError = NULL;     /* used in MAKE_EX so that it derives from nothing */
-    MAKE_EX(DBError);
-
-#if (PY_VERSION_HEX < 0x03000000)
-    /* Some magic to make DBNotFoundError and DBKeyEmptyError derive
-     * from both DBError and KeyError, since the API only supports
-     * using one base class. */
-    PyDict_SetItemString(d, "KeyError", PyExc_KeyError);
-    PyRun_String("class DBNotFoundError(DBError, KeyError): pass\n"
-                "class DBKeyEmptyError(DBError, KeyError): pass",
-                 Py_file_input, d, d);
-    DBNotFoundError = PyDict_GetItemString(d, "DBNotFoundError");
-    DBKeyEmptyError = PyDict_GetItemString(d, "DBKeyEmptyError");
-    PyDict_DelItemString(d, "KeyError");
-#else
-    /* Since Python 2.5, PyErr_NewException() can accept a tuple of base
-    ** classes, so that an exception can derive from several of them.  We only
-    ** use this newer API for Python 3.0, though.
-    */
-    {
-        PyObject* bases;
-
-        bases = PyTuple_Pack(2, DBError, PyExc_KeyError);
-
-#define MAKE_EX2(name)   name = PyErr_NewException(PYBSDDB_EXCEPTION_BASE #name, bases, NULL); \
-                         PyDict_SetItemString(d, #name, name)
-        MAKE_EX2(DBNotFoundError);
-        MAKE_EX2(DBKeyEmptyError);
-
-#undef MAKE_EX2
-
-        Py_XDECREF(bases);
-    }
-#endif
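Because DBNotFoundError and DBKeyEmptyError derive from KeyError as well as DBError, a missing key can be handled like an ordinary dictionary miss. A minimal sketch; the DB.open and DB.delete methods are assumed from the pybsddb API and are not defined in this file:

    from bsddb import db

    d = db.DB()
    d.open('example.db', dbtype=db.DB_HASH, flags=db.DB_CREATE)
    try:
        d.delete('no-such-key')   # raises DBNotFoundError when the key is absent
    except KeyError:              # caught here thanks to the dual inheritance
        print('key not found')
    d.close()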
-
-
-#if !INCOMPLETE_IS_WARNING
-    MAKE_EX(DBIncompleteError);
-#endif
-    MAKE_EX(DBCursorClosedError);
-    MAKE_EX(DBKeyEmptyError);
-    MAKE_EX(DBKeyExistError);
-    MAKE_EX(DBLockDeadlockError);
-    MAKE_EX(DBLockNotGrantedError);
-    MAKE_EX(DBOldVersionError);
-    MAKE_EX(DBRunRecoveryError);
-    MAKE_EX(DBVerifyBadError);
-    MAKE_EX(DBNoServerError);
-    MAKE_EX(DBNoServerHomeError);
-    MAKE_EX(DBNoServerIDError);
-    MAKE_EX(DBPageNotFoundError);
-    MAKE_EX(DBSecondaryBadError);
-
-    MAKE_EX(DBInvalidArgError);
-    MAKE_EX(DBAccessError);
-    MAKE_EX(DBNoSpaceError);
-    MAKE_EX(DBNoMemoryError);
-    MAKE_EX(DBAgainError);
-    MAKE_EX(DBBusyError);
-    MAKE_EX(DBFileExistsError);
-    MAKE_EX(DBNoSuchFileError);
-    MAKE_EX(DBPermissionsError);
-
-#if (DBVER >= 42)
-    MAKE_EX(DBRepHandleDeadError);
-#endif
-
-    MAKE_EX(DBRepUnavailError);
-
-#undef MAKE_EX
-
-    /* Initialize the C API structure and add it to the module */
-    bsddb_api.db_type         = &DB_Type;
-    bsddb_api.dbcursor_type   = &DBCursor_Type;
-    bsddb_api.dbenv_type      = &DBEnv_Type;
-    bsddb_api.dbtxn_type      = &DBTxn_Type;
-    bsddb_api.dblock_type     = &DBLock_Type;
-#if (DBVER >= 43)
-    bsddb_api.dbsequence_type = &DBSequence_Type;
-#endif
-    bsddb_api.makeDBError     = makeDBError;
-
-    py_api = PyCObject_FromVoidPtr((void*)&bsddb_api, NULL);
-    PyDict_SetItemString(d, "api", py_api);
-    Py_DECREF(py_api);
-
-    /* Check for errors */
-    if (PyErr_Occurred()) {
-        PyErr_Print();
-        Py_FatalError("can't initialize module _bsddb/_pybsddb");
-        Py_DECREF(m);
-        m = NULL;
-    }
-#if (PY_VERSION_HEX < 0x03000000)
-    return;
-#else
-    return m;
-#endif
-}
-
-/* Allow this module to be named _pybsddb so that it can be installed
- * and imported on top of a Python >= 2.3 that ships its own, older
- * copy of the library named _bsddb, without importing that old copy. */
-#if (PY_VERSION_HEX < 0x03000000)
-DL_EXPORT(void) init_pybsddb(void)
-#else
-PyMODINIT_FUNC PyInit__pybsddb(void)  /* Note the two underscores */
-#endif
-{
-    strncpy(_bsddbModuleName, "_pybsddb", MODULE_NAME_MAX_LEN);
-#if (PY_VERSION_HEX < 0x03000000)
-    init_bsddb();
-#else
-    return PyInit__bsddb();   /* Note the two underscores */
-#endif
-}
-
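The init_pybsddb entry point above lets the same source build either as _bsddb (the copy bundled with Python) or as _pybsddb (the standalone pybsddb distribution, imported as bsddb3). Application code commonly prefers the standalone package and falls back to the bundled one; a sketch, assuming the conventional package names:

    try:
        from bsddb3 import db     # standalone pybsddb, backed by _pybsddb
    except ImportError:
        from bsddb import db      # copy bundled with Python, backed by _bsddb

    print(db.version())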
diff --git a/Modules/bsddb.h b/Modules/bsddb.h
deleted file mode 100644 (file)
index f796681..0000000
+++ /dev/null
@@ -1,273 +0,0 @@
-/*----------------------------------------------------------------------
-  Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA
-  and Andrew Kuchling. All rights reserved.
-
-  Redistribution and use in source and binary forms, with or without
-  modification, are permitted provided that the following conditions are
-  met:
-
-    o Redistributions of source code must retain the above copyright
-      notice, this list of conditions, and the disclaimer that follows.
-
-    o Redistributions in binary form must reproduce the above copyright
-      notice, this list of conditions, and the following disclaimer in
-      the documentation and/or other materials provided with the
-      distribution.
-
-    o Neither the name of Digital Creations nor the names of its
-      contributors may be used to endorse or promote products derived
-      from this software without specific prior written permission.
-
-  THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS
-  IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
-  TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
-  PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL DIGITAL
-  CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
-  OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-  ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
-  TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
-  USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
-  DAMAGE.
-------------------------------------------------------------------------*/
-
-
-/*
- * Handwritten code to wrap version 3.x of the Berkeley DB library,
- * written to replace a SWIG-generated file.  It has since been updated
- * to compile with Berkeley DB versions 3.2 through 4.2.
- *
- * This module was started by Andrew Kuchling to remove the dependency
- * on SWIG in a package by Gregory P. Smith who based his work on a
- * similar package by Robin Dunn <robin@alldunn.com> which wrapped
- * Berkeley DB 2.7.x.
- *
- * Development of this module then returned full circle back to Robin Dunn
- * who worked on behalf of Digital Creations to complete the wrapping of
- * the DB 3.x API and to build a solid unit test suite.  Robin has
- * since gone onto other projects (wxPython).
- *
- * Gregory P. Smith <greg@krypto.org> is once again the maintainer.
- *
- * Use the pybsddb-users@lists.sf.net mailing list for all questions.
- * Things can change faster than the header of this file is updated.  This
- * file is shared with the PyBSDDB project at SourceForge:
- *
- * http://pybsddb.sf.net
- *
- * This file should remain backward compatible with Python 2.1, but see PEP
- * 291 for the most current backward compatibility requirements:
- *
- * http://www.python.org/peps/pep-0291.html
- *
- * This module contains 6 types:
- *
- * DB           (Database)
- * DBCursor     (Database Cursor)
- * DBEnv        (database environment)
- * DBTxn        (An explicit database transaction)
- * DBLock       (A lock handle)
- * DBSequence   (Sequence)
- *
- */
-
-/* --------------------------------------------------------------------- */
-
-/*
- * Portions of this module, associated unit tests and build scripts are the
- * result of a contract with The Written Word (http://thewrittenword.com/)
- * Many thanks go out to them for causing me to raise the bar on quality and
- * functionality, resulting in a better bsddb3 package for all of us to use.
- *
- * --Robin
- */
-
-/* --------------------------------------------------------------------- */
-
-/*
- * Work to split it up into a separate header and to add a C API was
- * contributed by Duncan Grisby <duncan@tideway.com>.   See here:
- *  http://sourceforge.net/tracker/index.php?func=detail&aid=1551895&group_id=13900&atid=313900
- */
-
-/* --------------------------------------------------------------------- */
-
-#ifndef _BSDDB_H_
-#define _BSDDB_H_
-
-#include <db.h>
-
-
-/* 40 = 4.0, 33 = 3.3; this will break if the minor revision is > 9 */
-#define DBVER (DB_VERSION_MAJOR * 10 + DB_VERSION_MINOR)
-#if DB_VERSION_MINOR > 9
-#error "eek! DBVER can't handle minor versions > 9"
-#endif
-
-#define PY_BSDDB_VERSION "4.7.3pre5"
-
-/* Python object definitions */
-
-struct behaviourFlags {
-    /* What is the default behaviour when DB->get or DBCursor->get returns a
-       DB_NOTFOUND || DB_KEYEMPTY error?  Return None or raise an exception? */
-    unsigned int getReturnsNone : 1;
-    /* What is the default behaviour for DBCursor.set* methods when DBCursor->get
-     * returns a DB_NOTFOUND || DB_KEYEMPTY  error?  Return None or raise? */
-    unsigned int cursorSetReturnsNone : 1;
-};
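These two bits decide whether a failed lookup comes back as None or as an exception. On the Python side they are normally toggled per DB or DBEnv with set_get_returns_none(); that method is part of the pybsddb API rather than this header, so the sketch below is an assumption:

    from bsddb import db

    d = db.DB()
    d.open('flags.db', dbtype=db.DB_HASH, flags=db.DB_CREATE)

    d.set_get_returns_none(0)     # 0: DB.get raises DBNotFoundError on a missing key
    try:
        d.get('absent')
    except db.DBNotFoundError:
        print('raised')

    d.set_get_returns_none(1)     # 1: DB.get returns None for a missing key instead
    print(d.get('absent'))        # None
    d.close()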
-
-
-
-struct DBObject;          /* Forward declaration */
-struct DBCursorObject;    /* Forward declaration */
-struct DBTxnObject;       /* Forward declaration */
-struct DBSequenceObject;  /* Forward declaration */
-
-typedef struct {
-    PyObject_HEAD
-    DB_ENV*     db_env;
-    u_int32_t   flags;             /* saved flags from open() */
-    int         closed;
-    struct behaviourFlags moduleFlags;
-    PyObject*       event_notifyCallback;
-    struct DBObject *children_dbs;
-    struct DBTxnObject *children_txns;
-    PyObject        *private_obj;
-    PyObject        *rep_transport;
-    PyObject        *in_weakreflist; /* List of weak references */
-} DBEnvObject;
-
-typedef struct DBObject {
-    PyObject_HEAD
-    DB*             db;
-    DBEnvObject*    myenvobj;  /* PyObject containing the DB_ENV */
-    u_int32_t       flags;     /* saved flags from open() */
-    u_int32_t       setflags;  /* saved flags from set_flags() */
-    int             haveStat;
-    struct behaviourFlags moduleFlags;
-    struct DBTxnObject *txn;
-    struct DBCursorObject *children_cursors;
-#if (DBVER >= 43)
-    struct DBSequenceObject *children_sequences;
-#endif
-    struct DBObject **sibling_prev_p;
-    struct DBObject *sibling_next;
-    struct DBObject **sibling_prev_p_txn;
-    struct DBObject *sibling_next_txn;
-    PyObject*       associateCallback;
-    PyObject*       btCompareCallback;
-    int             primaryDBType;
-    PyObject        *private_obj;
-    PyObject        *in_weakreflist; /* List of weak references */
-} DBObject;
-
-
-typedef struct DBCursorObject {
-    PyObject_HEAD
-    DBC*            dbc;
-    struct DBCursorObject **sibling_prev_p;
-    struct DBCursorObject *sibling_next;
-    struct DBCursorObject **sibling_prev_p_txn;
-    struct DBCursorObject *sibling_next_txn;
-    DBObject*       mydb;
-    struct DBTxnObject *txn;
-    PyObject        *in_weakreflist; /* List of weak references */
-} DBCursorObject;
-
-
-typedef struct DBTxnObject {
-    PyObject_HEAD
-    DB_TXN*         txn;
-    DBEnvObject*    env;
-    int             flag_prepare;
-    struct DBTxnObject *parent_txn;
-    struct DBTxnObject **sibling_prev_p;
-    struct DBTxnObject *sibling_next;
-    struct DBTxnObject *children_txns;
-    struct DBObject *children_dbs;
-    struct DBSequenceObject *children_sequences;
-    struct DBCursorObject *children_cursors;
-    PyObject        *in_weakreflist; /* List of weak references */
-} DBTxnObject;
-
-
-typedef struct {
-    PyObject_HEAD
-    DB_LOCK         lock;
-    PyObject        *in_weakreflist; /* List of weak references */
-} DBLockObject;
-
-
-#if (DBVER >= 43)
-typedef struct DBSequenceObject {
-    PyObject_HEAD
-    DB_SEQUENCE*     sequence;
-    DBObject*        mydb;
-    struct DBTxnObject *txn;
-    struct DBSequenceObject **sibling_prev_p;
-    struct DBSequenceObject *sibling_next;
-    struct DBSequenceObject **sibling_prev_p_txn;
-    struct DBSequenceObject *sibling_next_txn;
-    PyObject        *in_weakreflist; /* List of weak references */
-} DBSequenceObject;
-#endif
-
-
-/* API structure for use by C code */
-
-/* To access the structure from an external module, use code like the
-   following (error checking missed out for clarity):
-
-     BSDDB_api* bsddb_api;
-     PyObject*  mod;
-     PyObject*  cobj;
-
-     mod  = PyImport_ImportModule("bsddb._bsddb");
-     // Use "bsddb3._pybsddb" if you're using the standalone pybsddb add-on.
-     cobj = PyObject_GetAttrString(mod, "api");
-     api  = (BSDDB_api*)PyCObject_AsVoidPtr(cobj);
-     Py_DECREF(cobj);
-     Py_DECREF(mod);
-
-   The structure's members must not be changed.
-*/
-
-typedef struct {
-    /* Type objects */
-    PyTypeObject* db_type;
-    PyTypeObject* dbcursor_type;
-    PyTypeObject* dbenv_type;
-    PyTypeObject* dbtxn_type;
-    PyTypeObject* dblock_type;
-#if (DBVER >= 43)
-    PyTypeObject* dbsequence_type;
-#endif
-
-    /* Functions */
-    int (*makeDBError)(int err);
-
-} BSDDB_api;
-
-
-#ifndef COMPILING_BSDDB_C
-
-/* If not inside _bsddb.c, define type check macros that use the api
-   structure.  The calling code must have a value named bsddb_api
-   pointing to the api structure.
-*/
-
-#define DBObject_Check(v)       ((v)->ob_type == bsddb_api->db_type)
-#define DBCursorObject_Check(v) ((v)->ob_type == bsddb_api->dbcursor_type)
-#define DBEnvObject_Check(v)    ((v)->ob_type == bsddb_api->dbenv_type)
-#define DBTxnObject_Check(v)    ((v)->ob_type == bsddb_api->dbtxn_type)
-#define DBLockObject_Check(v)   ((v)->ob_type == bsddb_api->dblock_type)
-#if (DBVER >= 43)
-#define DBSequenceObject_Check(v)  ((v)->ob_type == bsddb_api->dbsequence_type)
-#endif
-
-#endif /* COMPILING_BSDDB_C */
-
-
-#endif /* _BSDDB_H_ */
diff --git a/setup.py b/setup.py
index 69f76064fa7867caa7ba3d3b5a679b6c26572757..f411c5f0bfc9b48ee73b678f140f9d3c9488d2ae 100644 (file)
--- a/setup.py
+++ b/setup.py
@@ -665,162 +665,6 @@ class PyBuildExt(build_ext):
         # implementation independent wrapper for these; dbm/dumb.py provides
         # similar functionality (but slower of course) implemented in Python.
 
-        # Sleepycat^WOracle Berkeley DB interface.
-        #  http://www.oracle.com/database/berkeley-db/db/index.html
-        #
-        # This requires the Sleepycat^WOracle DB code. The supported versions
-        # are set below.  Visit the URL above to download
-        # a release.  Most open source OSes come with one or more
-        # versions of BerkeleyDB already installed.
-
-        max_db_ver = (4, 7)
-        min_db_ver = (4, 0)
-        db_setup_debug = False   # verbose debug prints from this script?
-
-        # construct a list of paths to look for the header file in on
-        # top of the normal inc_dirs.
-        db_inc_paths = [
-            '/usr/include/db4',
-            '/usr/local/include/db4',
-            '/opt/sfw/include/db4',
-            '/usr/include/db3',
-            '/usr/local/include/db3',
-            '/opt/sfw/include/db3',
-            # Fink defaults (http://fink.sourceforge.net/)
-            '/sw/include/db4',
-            '/sw/include/db3',
-        ]
-        # 4.x minor number specific paths
-        for x in range(max_db_ver[1]+1):
-            db_inc_paths.append('/usr/include/db4%d' % x)
-            db_inc_paths.append('/usr/include/db4.%d' % x)
-            db_inc_paths.append('/usr/local/BerkeleyDB.4.%d/include' % x)
-            db_inc_paths.append('/usr/local/include/db4%d' % x)
-            db_inc_paths.append('/pkg/db-4.%d/include' % x)
-            db_inc_paths.append('/opt/db-4.%d/include' % x)
-            # MacPorts default (http://www.macports.org/)
-            db_inc_paths.append('/opt/local/include/db4%d' % x)
-        # 3.x minor number specific paths
-        for x in (3,):
-            db_inc_paths.append('/usr/include/db3%d' % x)
-            db_inc_paths.append('/usr/local/BerkeleyDB.3.%d/include' % x)
-            db_inc_paths.append('/usr/local/include/db3%d' % x)
-            db_inc_paths.append('/pkg/db-3.%d/include' % x)
-            db_inc_paths.append('/opt/db-3.%d/include' % x)
-
-        # Add some common subdirectories for Sleepycat DB to the list,
-        # based on the standard include directories. This way DB3/4 gets
-        # picked up when it is installed in a non-standard prefix and
-        # the user has added that prefix into inc_dirs.
-        std_variants = []
-        for dn in inc_dirs:
-            std_variants.append(os.path.join(dn, 'db3'))
-            std_variants.append(os.path.join(dn, 'db4'))
-            for x in range(max_db_ver[1]+1):
-                std_variants.append(os.path.join(dn, "db4%d"%x))
-                std_variants.append(os.path.join(dn, "db4.%d"%x))
-            for x in (3,):
-                std_variants.append(os.path.join(dn, "db3%d"%x))
-                std_variants.append(os.path.join(dn, "db3.%d"%x))
-
-        db_inc_paths = std_variants + db_inc_paths
-        db_inc_paths = [p for p in db_inc_paths if os.path.exists(p)]
-
-        db_ver_inc_map = {}
-
-        class db_found(Exception): pass
-        try:
-            # See whether there is a Sleepycat header in the standard
-            # search path.
-            for d in inc_dirs + db_inc_paths:
-                f = os.path.join(d, "db.h")
-                if db_setup_debug: print("db: looking for db.h in", f)
-                if os.path.exists(f):
-                    f = open(f).read()
-                    m = re.search(r"#define\WDB_VERSION_MAJOR\W(\d+)", f)
-                    if m:
-                        db_major = int(m.group(1))
-                        m = re.search(r"#define\WDB_VERSION_MINOR\W(\d+)", f)
-                        db_minor = int(m.group(1))
-                        db_ver = (db_major, db_minor)
-
-                        # Avoid 4.6 prior to 4.6.21 due to a BerkeleyDB bug
-                        if db_ver == (4, 6):
-                            m = re.search(r"#define\WDB_VERSION_PATCH\W(\d+)", f)
-                            db_patch = int(m.group(1))
-                            if db_patch < 21:
-                                print("db.h:", db_ver, "patch", db_patch,
-                                      "being ignored (4.6.x must be >= 4.6.21)")
-                                continue
-
-                        if ( (db_ver not in db_ver_inc_map) and
-                           (db_ver <= max_db_ver and db_ver >= min_db_ver) ):
-                            # save the include directory with the db.h version
-                            # (first occurrence only)
-                            db_ver_inc_map[db_ver] = d
-                            if db_setup_debug:
-                                print("db.h: found", db_ver, "in", d)
-                        else:
-                            # we already found a header for this library version
-                            if db_setup_debug: print("db.h: ignoring", d)
-                    else:
-                        # ignore this header, it didn't contain a version number
-                        if db_setup_debug:
-                            print("db.h: no version number found in", d)
-
-            db_found_vers = sorted(db_ver_inc_map.keys())
-
-            while db_found_vers:
-                db_ver = db_found_vers.pop()
-                db_incdir = db_ver_inc_map[db_ver]
-
-                # check lib directories parallel to the location of the header
-                db_dirs_to_check = [
-                    db_incdir.replace("include", 'lib64'),
-                    db_incdir.replace("include", 'lib'),
-                ]
-                db_dirs_to_check = [x for x in db_dirs_to_check if os.path.isdir(x)]
-
-                # Look for a version-specific db-X.Y before an ambiguous dbX
-                # XXX should we -ever- look for a dbX name?  Do any
-                # systems really not name their library by version and
-                # symlink to more general names?
-                for dblib in (('db-%d.%d' % db_ver),
-                              ('db%d%d' % db_ver),
-                              ('db%d' % db_ver[0])):
-                    dblib_file = self.compiler.find_library_file(
-                                    db_dirs_to_check + lib_dirs, dblib )
-                    if dblib_file:
-                        dblib_dir = [ os.path.abspath(os.path.dirname(dblib_file)) ]
-                        raise db_found
-                    else:
-                        if db_setup_debug: print("db lib: ", dblib, "not found")
-
-        except db_found:
-            if db_setup_debug:
-                print("db lib: using", db_ver, dblib)
-                print("db: lib dir", dblib_dir, "inc dir", db_incdir)
-            db_incs = [db_incdir]
-            dblibs = [dblib]
-            # We add the runtime_library_dirs argument because the
-            # BerkeleyDB lib we're linking against often isn't in the
-            # system dynamic library search path.  This is usually
-            # correct and most trouble free, but may cause problems in
-            # some unusual system configurations (e.g. the directory
-            # is on an NFS server that goes away).
-            exts.append(Extension('_bsddb', ['_bsddb.c'],
-                                  depends = ['bsddb.h'],
-                                  library_dirs=dblib_dir,
-                                  runtime_library_dirs=dblib_dir,
-                                  include_dirs=db_incs,
-                                  libraries=dblibs))
-        else:
-            if db_setup_debug: print("db: no appropriate library found")
-            db_incs = None
-            dblibs = []
-            dblib_dir = None
-            missing.append('_bsddb')
-
         # The sqlite interface
         sqlite_setup_debug = False   # verbose debug prints from this script?
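The removed detection code above boils down to: build a list of likely include directories, scan them for db.h, pull the version out of the DB_VERSION_* defines with regular expressions, skip 4.6 releases older than 4.6.21, then look for a matching db-X.Y library next to the chosen header. A small standalone sketch of just the version-parsing step, using the same regex style; the path in the final comment is purely illustrative:

    import re

    def berkeleydb_version(db_h_path):
        """Return (major, minor, patch) parsed from a Berkeley DB db.h header."""
        text = open(db_h_path).read()
        parts = []
        for name in ('MAJOR', 'MINOR', 'PATCH'):
            m = re.search(r"#define\WDB_VERSION_%s\W(\d+)" % name, text)
            parts.append(int(m.group(1)) if m else None)
        return tuple(parts)

    # e.g. berkeleydb_version('/usr/include/db4/db.h') might give (4, 7, 25)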