+++ /dev/null
-/*
- This is a version (aka dlmalloc) of malloc/free/realloc written by
- Doug Lea and released to the public domain, as explained at
- http://creativecommons.org/licenses/publicdomain. Send questions,
- comments, complaints, performance data, etc to dl@cs.oswego.edu
-
-* Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
-
- Note: There may be an updated version of this malloc obtainable at
- ftp://gee.cs.oswego.edu/pub/misc/malloc.c
- Check before installing!
-
-* Quickstart
-
- This library is all in one file to simplify the most common usage:
- ftp it, compile it (-O3), and link it into another program. All of
- the compile-time options default to reasonable values for use on
- most platforms. You might later want to step through various
- compile-time and dynamic tuning options.
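-
-  For example, an illustrative build line (the -D flags are optional
-  and described under "Compile-time options" below):
-    cc -O3 -DUSE_LOCKS=1 -c malloc.c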
-
- For convenience, an include file for code using this malloc is at:
- ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
- You don't really need this .h file unless you call functions not
- defined in your system include files. The .h file contains only the
- excerpts from this file needed for using this malloc on ANSI C/C++
- systems, so long as you haven't changed compile-time options about
- naming and tuning parameters. If you do, then you can create your
- own malloc.h that does include all settings by cutting at the point
- indicated below. Note that you may already by default be using a C
- library containing a malloc that is based on some version of this
- malloc (for example in linux). You might still want to use the one
- in this file to customize settings or to avoid overheads associated
- with library versions.
-
-* Vital statistics:
-
- Supported pointer/size_t representation: 4 or 8 bytes
- size_t MUST be an unsigned type of the same width as
- pointers. (If you are using an ancient system that declares
- size_t as a signed type, or need it to be a different width
- than pointers, you can use a previous release of this malloc
- (e.g. 2.7.2) supporting these.)
-
- Alignment: 8 bytes (default)
- This suffices for nearly all current machines and C compilers.
- However, you can define MALLOC_ALIGNMENT to be wider than this
- if necessary (up to 128 bytes), at the expense of using more space.
-
- Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
-                                       8 or 16 bytes (if 8-byte sizes)
- Each malloced chunk has a hidden word of overhead holding size
- and status information, and additional cross-check word
- if FOOTERS is defined.
-
- Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
- 8-byte ptrs: 32 bytes (including overhead)
-
- Even a request for zero bytes (i.e., malloc(0)) returns a
- pointer to something of the minimum allocatable size.
- The maximum overhead wastage (i.e., the number of extra bytes
- allocated beyond what was requested in malloc) is less than or equal
- to the minimum size, except for requests >= mmap_threshold that
- are serviced via mmap(), where the worst case wastage is about
- 32 bytes plus the remainder from a system page (the minimal
- mmap unit); typically 4096 or 8192 bytes.
-
- Security: static-safe; optionally more or less
- The "security" of malloc refers to the ability of malicious
- code to accentuate the effects of errors (for example, freeing
- space that is not currently malloc'ed or overwriting past the
- ends of chunks) in code that calls malloc. This malloc
- guarantees not to modify any memory locations below the base of
- heap, i.e., static variables, even in the presence of usage
- errors. The routines additionally detect most improper frees
- and reallocs. All this holds as long as the static bookkeeping
- for malloc itself is not corrupted by some other means. This
- is only one aspect of security -- these checks do not, and
- cannot, detect all possible programming errors.
-
- If FOOTERS is defined nonzero, then each allocated chunk
- carries an additional check word to verify that it was malloced
- from its space. These check words are the same within each
- execution of a program using malloc, but differ across
- executions, so externally crafted fake chunks cannot be
- freed. This improves security by rejecting frees/reallocs that
- could corrupt heap memory, in addition to the checks preventing
- writes to statics that are always on. This may further improve
- security at the expense of time and space overhead. (Note that
- FOOTERS may also be worth using with MSPACES.)
-
- By default detected errors cause the program to abort (calling
- "abort()"). You can override this to instead proceed past
- errors by defining PROCEED_ON_ERROR. In this case, a bad free
- has no effect, and a malloc that encounters a bad address
- caused by user overwrites will ignore the bad address by
- dropping pointers and indices to all known memory. This may
- be appropriate for programs that should continue if at all
- possible in the face of programming errors, although they may
- run out of memory because dropped memory is never reclaimed.
-
- If you don't like either of these options, you can define
- CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
- else. And if you are sure that your program using malloc has
- no errors or vulnerabilities, you can define INSECURE to 1,
- which might (or might not) provide a small performance improvement.
-
- Thread-safety: NOT thread-safe unless USE_LOCKS defined
- When USE_LOCKS is defined, each public call to malloc, free,
- etc is surrounded with either a pthread mutex or a win32
- spinlock (depending on WIN32). This is not especially fast, and
- can be a major bottleneck. It is designed only to provide
- minimal protection in concurrent environments, and to provide a
- basis for extensions. If you are using malloc in a concurrent
- program, consider instead using ptmalloc, which is derived from
- a version of this malloc. (See http://www.malloc.de).
-
- System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
- This malloc can use unix sbrk or any emulation (invoked using
- the CALL_MORECORE macro) and/or mmap/munmap or any emulation
- (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
- memory. On most unix systems, it tends to work best if both
- MORECORE and MMAP are enabled. On Win32, it uses emulations
- based on VirtualAlloc. It also uses common C library functions
- like memset.
-
- Compliance: I believe it is compliant with the Single Unix Specification
- (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
- others as well.
-
-* Overview of algorithms
-
- This is not the fastest, most space-conserving, most portable, or
- most tunable malloc ever written. However it is among the fastest
- while also being among the most space-conserving, portable and
- tunable. Consistent balance across these factors results in a good
- general-purpose allocator for malloc-intensive programs.
-
- In most ways, this malloc is a best-fit allocator. Generally, it
- chooses the best-fitting existing chunk for a request, with ties
- broken in approximately least-recently-used order. (This strategy
- normally maintains low fragmentation.) However, for requests less
- than 256 bytes, it deviates from best-fit when there is not an
- exactly fitting available chunk by preferring to use space adjacent
- to that used for the previous small request, as well as by breaking
- ties in approximately most-recently-used order. (These enhance
- locality of series of small allocations.) And for very large requests
- (>= 256KB by default), it relies on system memory mapping
- facilities, if supported. (This helps avoid carrying around and
- possibly fragmenting memory used only for large chunks.)
-
- All operations (except malloc_stats and mallinfo) have execution
- times that are bounded by a constant factor of the number of bits in
- a size_t, not counting any clearing in calloc or copying in realloc,
- or actions surrounding MORECORE and MMAP that have times
- proportional to the number of non-contiguous regions returned by
- system allocation routines, which is often just 1.
-
- The implementation is not very modular and seriously overuses
- macros. Perhaps someday all C compilers will do as good a job
- inlining modular code as can now be done by brute-force expansion,
- but for now, enough of them seem not to.
-
- Some compilers issue a lot of warnings about code that is
- dead/unreachable only on some platforms, and also about intentional
- uses of negation on unsigned types. All known cases of each can be
- ignored.
-
- For a longer but out of date high-level description, see
- http://gee.cs.oswego.edu/dl/html/malloc.html
-
-* MSPACES
- If MSPACES is defined, then in addition to malloc, free, etc.,
- this file also defines mspace_malloc, mspace_free, etc. These
- are versions of malloc routines that take an "mspace" argument
- obtained using create_mspace, to control all internal bookkeeping.
- If ONLY_MSPACES is defined, only these versions are compiled.
- So if you would like to use this allocator for only some allocations,
- and your system malloc for others, you can compile with
- ONLY_MSPACES and then do something like...
- static mspace mymspace = create_mspace(0,0); // for example
- #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
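-  and, to keep frees within the same space, a matching wrapper (a sketch):
-    #define myfree(mem) mspace_free(mymspace, mem)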
-
- (Note: If you only need one instance of an mspace, you can instead
- use "USE_DL_PREFIX" to relabel the global malloc.)
-
- You can similarly create thread-local allocators by storing
- mspaces as thread-locals. For example:
- static __thread mspace tlms = 0;
- void* tlmalloc(size_t bytes) {
- if (tlms == 0) tlms = create_mspace(0, 0);
- return mspace_malloc(tlms, bytes);
- }
- void tlfree(void* mem) { mspace_free(tlms, mem); }
-
- Unless FOOTERS is defined, each mspace is completely independent.
- You cannot allocate from one and free to another (although
- conformance is only weakly checked, so usage errors are not always
- caught). If FOOTERS is defined, then each chunk carries around a tag
- indicating its originating mspace, and frees are directed to their
- originating spaces.
-
- ------------------------- Compile-time options ---------------------------
-
-Be careful in setting #define values for numerical constants of type
-size_t. On some systems, literal values are not automatically extended
-to size_t precision unless they are explicitly cast.
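-For example, on an LP64 system write ((size_t)1 << 40) rather than
-(1 << 40), which shifts a 32-bit int past its width (undefined behavior).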
-
-WIN32 default: defined if _WIN32 defined
- Defining WIN32 sets up defaults for MS environment and compilers.
- Otherwise defaults are for unix.
-
-MALLOC_ALIGNMENT default: (size_t)8
- Controls the minimum alignment for malloc'ed chunks. It must be a
- power of two and at least 8, even on machines for which smaller
- alignments would suffice. It may be defined as larger than this
- though. Note however that code and data structures are optimized for
- the case of 8-byte alignment.
-
-MSPACES default: 0 (false)
- If true, compile in support for independent allocation spaces.
- This is only supported if HAVE_MMAP is true.
-
-ONLY_MSPACES default: 0 (false)
- If true, only compile in mspace versions, not regular versions.
-
-USE_LOCKS default: 0 (false)
- Causes each call to each public routine to be surrounded with
- pthread or WIN32 mutex lock/unlock. (If set true, this can be
- overridden on a per-mspace basis for mspace versions.)
-
-FOOTERS default: 0
- If true, provide extra checking and dispatching by placing
- information in the footers of allocated chunks. This adds
- space and time overhead.
-
-INSECURE default: 0
- If true, omit checks for usage errors and heap space overwrites.
-
-USE_DL_PREFIX default: NOT defined
- Causes compiler to prefix all public routines with the string 'dl'.
- This can be useful when you only want to use this malloc in one part
- of a program, using your regular system malloc elsewhere.
-
-ABORT default: defined as abort()
- Defines how to abort on failed checks. On most systems, a failed
- check cannot die with an "assert" or even print an informative
- message, because the underlying print routines in turn call malloc,
- which will fail again. Generally, the best policy is to simply call
- abort(). It's not very useful to do more than this because many
- errors due to overwriting will show up as address faults (null, odd
- addresses etc) rather than malloc-triggered checks, so will also
- abort. Also, most compilers know that abort() does not return, so
- can better optimize code conditionally calling it.
-
-PROCEED_ON_ERROR default: defined as 0 (false)
- Controls whether detected bad addresses are bypassed rather than
- causing an abort. If set, detected bad arguments to free and
- realloc are ignored. And all bookkeeping information is zeroed out
- upon a detected overwrite of freed heap space, thus losing the
- ability to ever return it from malloc again, but enabling the
- application to proceed. If PROCEED_ON_ERROR is defined, the
- static variable malloc_corruption_error_count is compiled in
- and can be examined to see if errors have occurred. This option
- generates slower code than the default abort policy.
-
-DEBUG default: NOT defined
- The DEBUG setting is mainly intended for people trying to modify
- this code or diagnose problems when porting to new platforms.
- However, it may also be able to better isolate user errors than just
- using runtime checks. The assertions in the check routines spell
- out in more detail the assumptions and invariants underlying the
- algorithms. The checking is fairly extensive, and will slow down
- execution noticeably. Calling malloc_stats or mallinfo with DEBUG
- set will attempt to check every non-mmapped allocated and free chunk
- in the course of computing the summaries.
-
-ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
- Debugging assertion failures can be nearly impossible if your
- version of the assert macro causes malloc to be called, which will
- lead to a cascade of further failures, blowing the runtime stack.
- ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
- which will usually make debugging easier.
-
-MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
- The action to take before "return 0" when malloc fails to be able to
- return memory because there is none available.
-
-HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
- True if this system supports sbrk or an emulation of it.
-
-MORECORE default: sbrk
- The name of the sbrk-style system routine to call to obtain more
- memory. See below for guidance on writing custom MORECORE
- functions. The type of the argument to sbrk/MORECORE varies across
- systems. It cannot be size_t, because it supports negative
- arguments, so it is normally the signed type of the same width as
- size_t (sometimes declared as "intptr_t"). It doesn't much matter
- though. Internally, we only call it with arguments less than half
- the max value of a size_t, which should work across all reasonable
- possibilities, although sometimes generating compiler warnings. See
- near the end of this file for guidelines for creating a custom
- version of MORECORE.
-
-MORECORE_CONTIGUOUS default: 1 (true)
- If true, take advantage of fact that consecutive calls to MORECORE
- with positive arguments always return contiguous increasing
- addresses. This is true of unix sbrk. It does not hurt too much to
- set it true anyway, since malloc copes with non-contiguities.
- Setting it false when definitely non-contiguous saves time
- and possibly wasted space it would take to discover this though.
-
-MORECORE_CANNOT_TRIM default: NOT defined
- True if MORECORE cannot release space back to the system when given
- negative arguments. This is generally necessary only if you are
- using a hand-crafted MORECORE function that cannot handle negative
- arguments.
-
-HAVE_MMAP default: 1 (true)
- True if this system supports mmap or an emulation of it. If so, and
- HAVE_MORECORE is not true, MMAP is used for all system
- allocation. If set and HAVE_MORECORE is true as well, MMAP is
- primarily used to directly allocate very large blocks. It is also
- used as a backup strategy in cases where MORECORE fails to provide
- space from system. Note: A single call to MUNMAP is assumed to be
- able to unmap memory that may have been allocated using multiple calls
- to MMAP, so long as they are adjacent.
-
-HAVE_MREMAP default: 1 on linux, else 0
- If true realloc() uses mremap() to re-allocate large blocks and
- extend or shrink allocation spaces.
-
-MMAP_CLEARS default: 1 on unix
- True if mmap clears memory so calloc doesn't need to. This is true
- for standard unix mmap using /dev/zero.
-
-USE_BUILTIN_FFS default: 0 (i.e., not used)
- Causes malloc to use the builtin ffs() function to compute indices.
- Some compilers may recognize and intrinsify ffs to be faster than the
- supplied C version. Also, the case of x86 using gcc is special-cased
- to an asm instruction, so is already as fast as it can be, and so
- this setting has no effect. (On most x86s, the asm version is only
- slightly faster than the C version.)
-
-malloc_getpagesize default: derive from system includes, or 4096.
- The system page size. To the extent possible, this malloc manages
- memory from the system in page-size units. This may be (and
- usually is) a function rather than a constant. This is ignored
- if WIN32, where page size is determined using GetSystemInfo during
- initialization.
-
-USE_DEV_RANDOM default: 0 (i.e., not used)
- Causes malloc to use /dev/random to initialize secure magic seed for
- stamping footers. Otherwise, the current time is used.
-
-NO_MALLINFO default: 0
- If true, don't compile "mallinfo". This can be a simple way
- of dealing with mismatches between system declarations and
- those in this file.
-
-MALLINFO_FIELD_TYPE default: size_t
- The type of the fields in the mallinfo struct. This was originally
- defined as "int" in SVID etc, but is more usefully defined as
- size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
-
-REALLOC_ZERO_BYTES_FREES default: not defined
- This should be set if a call to realloc with zero bytes should
- be the same as a call to free. Some people think it should. Otherwise,
- because this malloc returns a unique pointer for malloc(0),
- realloc(p, 0) does too.
-
-LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
-LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
-LACKS_STDLIB_H default: NOT defined unless on WIN32
- Define these if your system does not have these header files.
- You might need to manually insert some of the declarations they provide.
-
-DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
- system_info.dwAllocationGranularity in WIN32,
- otherwise 64K.
- Also settable using mallopt(M_GRANULARITY, x)
- The unit for allocating and deallocating memory from the system. On
- most systems with contiguous MORECORE, there is no reason to
- make this more than a page. However, systems with MMAP tend to
- either require or encourage larger granularities. You can increase
- this value to keep system allocation functions from being called so
- often, especially if they are slow. The value must be at least one
- page and must be a power of two. Setting to 0 causes initialization
- to either page size or win32 region size. (Note: In previous
- versions of malloc, the equivalent of this option was called
- "TOP_PAD")
-
-DEFAULT_TRIM_THRESHOLD default: 2MB
- Also settable using mallopt(M_TRIM_THRESHOLD, x)
- The maximum amount of unused top-most memory to keep before
- releasing via malloc_trim in free(). Automatic trimming is mainly
- useful in long-lived programs using contiguous MORECORE. Because
- trimming via sbrk can be slow on some systems, and can sometimes be
- wasteful (in cases where programs immediately afterward allocate
- more large chunks) the value should be high enough so that your
- overall system performance would improve by releasing this much
- memory. As a rough guide, you might set to a value close to the
- average size of a process (program) running on your system.
- Releasing this much memory would allow such a process to run in
- memory. Generally, it is worth tuning trim thresholds when a
- program undergoes phases where several large chunks are allocated
- and released in ways that can reuse each other's storage, perhaps
- mixed with phases where there are no such chunks at all. The trim
- value must be greater than page size to have any useful effect. To
- disable trimming completely, you can set it to MAX_SIZE_T. Note that the trick
- some people use of mallocing a huge space and then freeing it at
- program startup, in an attempt to reserve system memory, doesn't
- have the intended effect under automatic trimming, since that memory
- will immediately be returned to the system.
-
-DEFAULT_MMAP_THRESHOLD default: 256K
- Also settable using mallopt(M_MMAP_THRESHOLD, x)
- The request size threshold for using MMAP to directly service a
- request. Requests of at least this size that cannot be allocated
- using already-existing space will be serviced via mmap. (If enough
- normal freed space already exists it is used instead.) Using mmap
- segregates relatively large chunks of memory so that they can be
- individually obtained and released from the host system. A request
- serviced through mmap is never reused by any other request (at least
- not directly; the system may just so happen to remap successive
- requests to the same locations). Segregating space in this way has
- the benefits that: Mmapped space can always be individually released
- back to the system, which helps keep the system level memory demands
- of a long-lived program low. Also, mapped memory doesn't become
- `locked' between other chunks, as can happen with normally allocated
- chunks, which means that even trimming via malloc_trim would not
- release them. However, it has the disadvantage that the space
- cannot be reclaimed, consolidated, and then used to service later
- requests, as happens with normal chunks. The advantages of mmap
- nearly always outweigh disadvantages for "large" chunks, but the
- value of "large" may vary across systems. The default is an
- empirically derived value that works well in most systems. You can
- disable mmap by setting it to MAX_SIZE_T.
-
-*/
-
-#ifndef WIN32
-#ifdef _WIN32
-#define WIN32 1
-#endif /* _WIN32 */
-#endif /* WIN32 */
-#ifdef WIN32
-#define WIN32_LEAN_AND_MEAN
-#include <windows.h>
-#define HAVE_MMAP 1
-#define HAVE_MORECORE 0
-#define LACKS_UNISTD_H
-#define LACKS_SYS_PARAM_H
-#define LACKS_SYS_MMAN_H
-#define LACKS_STRING_H
-#define LACKS_STRINGS_H
-#define LACKS_SYS_TYPES_H
-#define LACKS_ERRNO_H
-#define MALLOC_FAILURE_ACTION
-#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
-#endif /* WIN32 */
-
-#if defined(DARWIN) || defined(_DARWIN)
-/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
-#ifndef HAVE_MORECORE
-#define HAVE_MORECORE 0
-#define HAVE_MMAP 1
-#endif /* HAVE_MORECORE */
-#endif /* DARWIN */
-
-#ifndef LACKS_SYS_TYPES_H
-#include <sys/types.h> /* For size_t */
-#endif /* LACKS_SYS_TYPES_H */
-
-/* The maximum possible size_t value has all bits set */
-#define MAX_SIZE_T (~(size_t)0)
-
-#ifndef ONLY_MSPACES
-#define ONLY_MSPACES 0
-#endif /* ONLY_MSPACES */
-#ifndef MSPACES
-#if ONLY_MSPACES
-#define MSPACES 1
-#else /* ONLY_MSPACES */
-#define MSPACES 0
-#endif /* ONLY_MSPACES */
-#endif /* MSPACES */
-#ifndef MALLOC_ALIGNMENT
-#define MALLOC_ALIGNMENT ((size_t)8U)
-#endif /* MALLOC_ALIGNMENT */
-#ifndef FOOTERS
-#define FOOTERS 0
-#endif /* FOOTERS */
-#ifndef ABORT
-#define ABORT abort()
-#endif /* ABORT */
-#ifndef ABORT_ON_ASSERT_FAILURE
-#define ABORT_ON_ASSERT_FAILURE 1
-#endif /* ABORT_ON_ASSERT_FAILURE */
-#ifndef PROCEED_ON_ERROR
-#define PROCEED_ON_ERROR 0
-#endif /* PROCEED_ON_ERROR */
-#ifndef USE_LOCKS
-#define USE_LOCKS 0
-#endif /* USE_LOCKS */
-#ifndef INSECURE
-#define INSECURE 0
-#endif /* INSECURE */
-#ifndef HAVE_MMAP
-#define HAVE_MMAP 1
-#endif /* HAVE_MMAP */
-#ifndef MMAP_CLEARS
-#define MMAP_CLEARS 1
-#endif /* MMAP_CLEARS */
-#ifndef HAVE_MREMAP
-#ifdef linux
-#define HAVE_MREMAP 1
-#else /* linux */
-#define HAVE_MREMAP 0
-#endif /* linux */
-#endif /* HAVE_MREMAP */
-#ifndef MALLOC_FAILURE_ACTION
-#define MALLOC_FAILURE_ACTION errno = ENOMEM;
-#endif /* MALLOC_FAILURE_ACTION */
-#ifndef HAVE_MORECORE
-#if ONLY_MSPACES
-#define HAVE_MORECORE 0
-#else /* ONLY_MSPACES */
-#define HAVE_MORECORE 1
-#endif /* ONLY_MSPACES */
-#endif /* HAVE_MORECORE */
-#if !HAVE_MORECORE
-#define MORECORE_CONTIGUOUS 0
-#else /* !HAVE_MORECORE */
-#ifndef MORECORE
-#define MORECORE sbrk
-#endif /* MORECORE */
-#ifndef MORECORE_CONTIGUOUS
-#define MORECORE_CONTIGUOUS 1
-#endif /* MORECORE_CONTIGUOUS */
-#endif /* HAVE_MORECORE */
-#ifndef DEFAULT_GRANULARITY
-#if MORECORE_CONTIGUOUS
-#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
-#else /* MORECORE_CONTIGUOUS */
-#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
-#endif /* MORECORE_CONTIGUOUS */
-#endif /* DEFAULT_GRANULARITY */
-#ifndef DEFAULT_TRIM_THRESHOLD
-#ifndef MORECORE_CANNOT_TRIM
-#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
-#else /* MORECORE_CANNOT_TRIM */
-#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
-#endif /* MORECORE_CANNOT_TRIM */
-#endif /* DEFAULT_TRIM_THRESHOLD */
-#ifndef DEFAULT_MMAP_THRESHOLD
-#if HAVE_MMAP
-#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
-#else /* HAVE_MMAP */
-#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
-#endif /* HAVE_MMAP */
-#endif /* DEFAULT_MMAP_THRESHOLD */
-#ifndef USE_BUILTIN_FFS
-#define USE_BUILTIN_FFS 0
-#endif /* USE_BUILTIN_FFS */
-#ifndef USE_DEV_RANDOM
-#define USE_DEV_RANDOM 0
-#endif /* USE_DEV_RANDOM */
-#ifndef NO_MALLINFO
-#define NO_MALLINFO 0
-#endif /* NO_MALLINFO */
-#ifndef MALLINFO_FIELD_TYPE
-#define MALLINFO_FIELD_TYPE size_t
-#endif /* MALLINFO_FIELD_TYPE */
-
-/*
- mallopt tuning options. SVID/XPG defines four standard parameter
- numbers for mallopt, normally defined in malloc.h. None of these
- are used in this malloc, so setting them has no effect. But this
- malloc does support the following options.
-*/
-
-#define M_TRIM_THRESHOLD (-1)
-#define M_GRANULARITY (-2)
-#define M_MMAP_THRESHOLD (-3)
-
-/* ------------------------ Mallinfo declarations ------------------------ */
-
-#if !NO_MALLINFO
-/*
- This version of malloc supports the standard SVID/XPG mallinfo
- routine that returns a struct containing usage properties and
- statistics. It should work on any system that has a
- /usr/include/malloc.h defining struct mallinfo. The main
- declaration needed is the mallinfo struct that is returned (by-copy)
- by mallinfo(). The mallinfo struct contains a bunch of fields that
- are not even meaningful in this version of malloc. These fields are
- instead filled by mallinfo() with other numbers that might be of
- interest.
-
- HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
- /usr/include/malloc.h file that includes a declaration of struct
- mallinfo. If so, it is included; else a compliant version is
- declared below. These must be precisely the same for mallinfo() to
- work. The original SVID version of this struct, defined on most
- systems with mallinfo, declares all fields as ints. But some others
- define them as unsigned long. If your system defines the fields using a
- type of different width than listed here, you MUST #include your
- system version and #define HAVE_USR_INCLUDE_MALLOC_H.
-*/
-
-/* #define HAVE_USR_INCLUDE_MALLOC_H */
-
-#ifdef HAVE_USR_INCLUDE_MALLOC_H
-#include "/usr/include/malloc.h"
-#else /* HAVE_USR_INCLUDE_MALLOC_H */
-
-struct mallinfo {
- MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
- MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
- MALLINFO_FIELD_TYPE smblks; /* always 0 */
- MALLINFO_FIELD_TYPE hblks; /* always 0 */
- MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
- MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
- MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
- MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
- MALLINFO_FIELD_TYPE fordblks; /* total free space */
- MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
-};
-
-#endif /* HAVE_USR_INCLUDE_MALLOC_H */
-#endif /* NO_MALLINFO */
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-
-#if !ONLY_MSPACES
-
-/* ------------------- Declarations of public routines ------------------- */
-
-#ifndef USE_DL_PREFIX
-#define dlcalloc calloc
-#define dlfree free
-#define dlmalloc malloc
-#define dlmemalign memalign
-#define dlrealloc realloc
-#define dlvalloc valloc
-#define dlpvalloc pvalloc
-#define dlmallinfo mallinfo
-#define dlmallopt mallopt
-#define dlmalloc_trim malloc_trim
-#define dlmalloc_stats malloc_stats
-#define dlmalloc_usable_size malloc_usable_size
-#define dlmalloc_footprint malloc_footprint
-#define dlmalloc_max_footprint malloc_max_footprint
-#define dlindependent_calloc independent_calloc
-#define dlindependent_comalloc independent_comalloc
-#endif /* USE_DL_PREFIX */
-
-
-/*
- malloc(size_t n)
- Returns a pointer to a newly allocated chunk of at least n bytes, or
- null if no space is available, in which case errno is set to ENOMEM
- on ANSI C systems.
-
- If n is zero, malloc returns a minimum-sized chunk. (The minimum
- size is 16 bytes on most 32-bit systems, and 32 bytes on 64-bit
- systems.) Note that size_t is an unsigned type, so calls with
- arguments that would be negative if signed are interpreted as
- requests for huge amounts of space, which will often fail. The
- maximum supported value of n differs across systems, but is in all
- cases less than the maximum representable value of a size_t.
-*/
-void* dlmalloc(size_t);
-
-/*
- free(void* p)
- Releases the chunk of memory pointed to by p, that had been previously
- allocated using malloc or a related routine such as realloc.
- It has no effect if p is null. If p was not malloced or already
- freed, free(p) will by default cause the current program to abort.
-*/
-void dlfree(void*);
-
-/*
- calloc(size_t n_elements, size_t element_size);
- Returns a pointer to n_elements * element_size bytes, with all locations
- set to zero.
-*/
-void* dlcalloc(size_t, size_t);
-
-/*
- realloc(void* p, size_t n)
- Returns a pointer to a chunk of size n that contains the same data
- as does chunk p up to the minimum of (n, p's size) bytes, or null
- if no space is available.
-
- The returned pointer may or may not be the same as p. The algorithm
- prefers extending p in most cases when possible, otherwise it
- employs the equivalent of a malloc-copy-free sequence.
-
- If p is null, realloc is equivalent to malloc.
-
- If space is not available, realloc returns null, errno is set (if on
- ANSI) and p is NOT freed.
-
- If n is for fewer bytes than already held by p, the newly unused
- space is lopped off and freed if possible. realloc with a size
- argument of zero (re)allocates a minimum-sized chunk.
-
- The old unix realloc convention of allowing the last-free'd chunk
- to be used as an argument to realloc is not supported.
-*/
-
-void* dlrealloc(void*, size_t);
-
-/*
- memalign(size_t alignment, size_t n);
- Returns a pointer to a newly allocated chunk of n bytes, aligned
- in accord with the alignment argument.
-
- The alignment argument should be a power of two. If the argument is
- not a power of two, the nearest greater power is used.
- 8-byte alignment is guaranteed by normal malloc calls, so don't
- bother calling memalign with an argument of 8 or less.
-
- Overreliance on memalign is a sure way to fragment space.
-*/
-void* dlmemalign(size_t, size_t);
-
-/*
- valloc(size_t n);
- Equivalent to memalign(pagesize, n), where pagesize is the page
- size of the system. If the pagesize is unknown, 4096 is used.
-*/
-void* dlvalloc(size_t);
-
-/*
- mallopt(int parameter_number, int parameter_value)
- Sets tunable parameters. The format is to provide a
- (parameter-number, parameter-value) pair. mallopt then sets the
- corresponding parameter to the argument value if it can (i.e., so
- long as the value is meaningful), and returns 1 if successful else
- 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
- normally defined in malloc.h. None of these are used in this malloc,
- so setting them has no effect. But this malloc also supports other
- options in mallopt. See below for details. Briefly, supported
- parameters are as follows (listed defaults are for "typical"
- configurations).
-
- Symbol param # default allowed param values
- M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
- M_GRANULARITY -2 page size any power of 2 >= page size
- M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
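-
-  For example (values are illustrative, not recommendations):
-    mallopt(M_TRIM_THRESHOLD, 1024*1024); // trim when over 1MB is unused
-    mallopt(M_MMAP_THRESHOLD, 128*1024);  // mmap requests of 128K and up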
-*/
-int dlmallopt(int, int);
-
-/*
- malloc_footprint();
- Returns the number of bytes obtained from the system. The total
- number of bytes allocated by malloc, realloc etc., is less than this
- value. Unlike mallinfo, this function returns only a precomputed
- result, so can be called frequently to monitor memory consumption.
- Even if locks are otherwise defined, this function does not use them,
- so results might not be up to date.
-*/
-size_t dlmalloc_footprint(void);
-
-/*
- malloc_max_footprint();
- Returns the maximum number of bytes obtained from the system. This
- value will be greater than current footprint if deallocated space
- has been reclaimed by the system. The peak number of bytes allocated
- by malloc, realloc etc., is less than this value. Unlike mallinfo,
- this function returns only a precomputed result, so can be called
- frequently to monitor memory consumption. Even if locks are
- otherwise defined, this function does not use them, so results might
- not be up to date.
-*/
-size_t dlmalloc_max_footprint(void);
-
-#if !NO_MALLINFO
-/*
- mallinfo()
- Returns (by copy) a struct containing various summary statistics:
-
- arena: current total non-mmapped bytes allocated from system
- ordblks: the number of free chunks
- smblks: always zero.
- hblks: current number of mmapped regions
- hblkhd: total bytes held in mmapped regions
- usmblks: the maximum total allocated space. This will be greater
- than current total if trimming has occurred.
- fsmblks: always zero
- uordblks: current total allocated space (normal or mmapped)
- fordblks: total free space
- keepcost: the maximum number of bytes that could ideally be released
- back to system via malloc_trim. ("ideally" means that
- it ignores page restrictions etc.)
-
- Because these fields are ints, but internal bookkeeping may
- be kept as longs, the reported values may wrap around zero and
- thus be inaccurate.
-*/
-struct mallinfo dlmallinfo(void);
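-
-/*
-  For example, a minimal monitoring sketch (assuming default naming,
-  i.e., no USE_DL_PREFIX):
-
-    struct mallinfo mi = mallinfo();
-    fprintf(stderr, "allocated: %lu free: %lu\n",
-            (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);
-*/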
-#endif /* NO_MALLINFO */
-
-/*
- independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
-
- independent_calloc is similar to calloc, but instead of returning a
- single cleared space, it returns an array of pointers to n_elements
- independent elements that can hold contents of size elem_size, each
- of which starts out cleared, and can be independently freed,
- realloc'ed etc. The elements are guaranteed to be adjacently
- allocated (this is not guaranteed to occur with multiple callocs or
- mallocs), which may also improve cache locality in some
- applications.
-
- The "chunks" argument is optional (i.e., may be null, which is
- probably the most typical usage). If it is null, the returned array
- is itself dynamically allocated and should also be freed when it is
- no longer needed. Otherwise, the chunks array must be of at least
- n_elements in length. It is filled in with the pointers to the
- chunks.
-
- In either case, independent_calloc returns this pointer array, or
- null if the allocation failed. If n_elements is zero and "chunks"
- is null, it returns a chunk representing an array with zero elements
- (which should be freed if not wanted).
-
- Each element must be individually freed when it is no longer
- needed. If you'd like to instead be able to free all at once, you
- should instead use regular calloc and assign pointers into this
- space to represent elements. (In this case though, you cannot
- independently free elements.)
-
- independent_calloc simplifies and speeds up implementations of many
- kinds of pools. It may also be useful when constructing large data
- structures that initially have a fixed number of fixed-sized nodes,
- but the number is not known at compile time, and some of the nodes
- may later need to be freed. For example:
-
- struct Node { int item; struct Node* next; };
-
- struct Node* build_list() {
- struct Node** pool;
- int n = read_number_of_nodes_needed();
- if (n <= 0) return 0;
- pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
- if (pool == 0) die();
- // organize into a linked list...
- struct Node* first = pool[0];
- for (int i = 0; i < n-1; ++i)
- pool[i]->next = pool[i+1];
- free(pool); // Can now free the array (or not, if it is needed later)
- return first;
- }
-*/
-void** dlindependent_calloc(size_t, size_t, void**);
-
-/*
- independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
-
- independent_comalloc allocates, all at once, a set of n_elements
- chunks with sizes indicated in the "sizes" array. It returns
- an array of pointers to these elements, each of which can be
- independently freed, realloc'ed etc. The elements are guaranteed to
- be adjacently allocated (this is not guaranteed to occur with
- multiple callocs or mallocs), which may also improve cache locality
- in some applications.
-
- The "chunks" argument is optional (i.e., may be null). If it is null
- the returned array is itself dynamically allocated and should also
- be freed when it is no longer needed. Otherwise, the chunks array
- must be of at least n_elements in length. It is filled in with the
- pointers to the chunks.
-
- In either case, independent_comalloc returns this pointer array, or
- null if the allocation failed. If n_elements is zero and chunks is
- null, it returns a chunk representing an array with zero elements
- (which should be freed if not wanted).
-
- Each element must be individually freed when it is no longer
- needed. If you'd like to instead be able to free all at once, you
- should instead use a single regular malloc, and assign pointers at
- particular offsets in the aggregate space. (In this case though, you
- cannot independently free elements.)
-
- independent_comalloc differs from independent_calloc in that each
- element may have a different size, and also that it does not
- automatically clear elements.
-
- independent_comalloc can be used to speed up allocation in cases
- where several structs or objects must always be allocated at the
- same time. For example:
-
- struct Head { ... };
- struct Foot { ... };
-
- void send_message(char* msg) {
- int msglen = strlen(msg);
- size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
- void* chunks[3];
- if (independent_comalloc(3, sizes, chunks) == 0)
- die();
- struct Head* head = (struct Head*)(chunks[0]);
- char* body = (char*)(chunks[1]);
- struct Foot* foot = (struct Foot*)(chunks[2]);
- // ...
- }
-
- In general though, independent_comalloc is worth using only for
- larger values of n_elements. For small values, you probably won't
- detect enough difference from a series of malloc calls to bother.
-
- Overuse of independent_comalloc can increase overall memory usage,
- since it cannot reuse existing noncontiguous small chunks that
- might be available for some of the elements.
-*/
-void** dlindependent_comalloc(size_t, size_t*, void**);
-
-
-/*
- pvalloc(size_t n);
- Equivalent to valloc(minimum-page-that-holds(n)), that is,
- n rounded up to the nearest pagesize.
- */
-void* dlpvalloc(size_t);
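-
-/*
-  For example, with 4096-byte pages pvalloc(1) behaves as valloc(4096):
-  a page-aligned allocation of one full page.
-*/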
-
-/*
- malloc_trim(size_t pad);
-
- If possible, gives memory back to the system (via negative arguments
- to sbrk) if there is unused memory at the `high' end of the malloc
- pool or in unused MMAP segments. You can call this after freeing
- large blocks of memory to potentially reduce the system-level memory
- requirements of a program. However, it cannot guarantee to reduce
- memory. Under some allocation patterns, some large free blocks of
- memory will be locked between two used chunks, so they cannot be
- given back to the system.
-
- The `pad' argument to malloc_trim represents the amount of free
- trailing space to leave untrimmed. If this argument is zero, only
- the minimum amount of memory to maintain internal data structures
- will be left. Non-zero arguments can be supplied to maintain enough
- trailing space to service future expected allocations without having
- to re-obtain memory from the system.
-
- Malloc_trim returns 1 if it actually released any memory, else 0.
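-
-  For example (names and sizes are illustrative):
-    free(big_cache);               // release a large block
-    malloc_trim((size_t)64*1024);  // trim, keeping 64K of slack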
-*/
-int dlmalloc_trim(size_t);
-
-/*
- malloc_usable_size(void* p);
-
- Returns the number of bytes you can actually use in
- an allocated chunk, which may be more than you requested (although
- often not) due to alignment and minimum size constraints.
- You can use this many bytes without worrying about
- overwriting other allocated objects. This is not a particularly great
- programming practice. malloc_usable_size can be more useful in
- debugging and assertions, for example:
-
- p = malloc(n);
- assert(malloc_usable_size(p) >= 256);
-*/
-size_t dlmalloc_usable_size(void*);
-
-/*
- malloc_stats();
- Prints on stderr the amount of space obtained from the system (both
- via sbrk and mmap), the maximum amount (which may be more than
- current if malloc_trim and/or munmap got called), and the current
- number of bytes allocated via malloc (or realloc, etc) but not yet
- freed. Note that this is the number of bytes allocated, not the
- number requested. It will be larger than the number requested
- because of alignment and bookkeeping overhead. Because it includes
- alignment wastage as being in use, this figure may be greater than
- zero even when no user-level chunks are allocated.
-
- The reported current and maximum system memory can be inaccurate if
- a program makes other calls to system memory allocation functions
- (normally sbrk) outside of malloc.
-
- malloc_stats prints only the most commonly interesting statistics.
- More information can be obtained by calling mallinfo.
-*/
-void dlmalloc_stats(void);
-
-#endif /* ONLY_MSPACES */
-
-#if MSPACES
-
-/*
- mspace is an opaque type representing an independent
- region of space that supports mspace_malloc, etc.
-*/
-typedef void* mspace;
-
-/*
- create_mspace creates and returns a new independent space with the
- given initial capacity, or, if 0, the default granularity size. It
- returns null if there is no system memory available to create the
- space. If argument locked is non-zero, the space uses a separate
- lock to control access. The capacity of the space will grow
- dynamically as needed to service mspace_malloc requests. You can
- control the sizes of incremental increases of this space by
- compiling with a different DEFAULT_GRANULARITY or dynamically
- setting with mallopt(M_GRANULARITY, value).
-*/
-mspace create_mspace(size_t capacity, int locked);
-
-/*
- destroy_mspace destroys the given space, and attempts to return all
- of its memory back to the system, returning the total number of
- bytes freed. After destruction, the results of access to all memory
- used by the space become undefined.
-*/
-size_t destroy_mspace(mspace msp);
-
-/*
- create_mspace_with_base uses the memory supplied as the initial base
- of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
- space is used for bookkeeping, so the capacity must be at least this
- large. (Otherwise 0 is returned.) When this initial space is
- exhausted, additional memory will be obtained from the system.
- Destroying this space will deallocate all additionally allocated
- space (if possible) but not the initial base.
-*/
-mspace create_mspace_with_base(void* base, size_t capacity, int locked);
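-
-/*
-  For example, carving allocations out of a static buffer (names and
-  sizes are illustrative):
-
-    static char arena_buf[1 << 20]; // 1MB region
-    mspace arena = create_mspace_with_base(arena_buf, sizeof(arena_buf), 0);
-    void* p = mspace_malloc(arena, 128);
-    mspace_free(arena, p);
-*/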
-
-/*
- mspace_malloc behaves as malloc, but operates within
- the given space.
-*/
-void* mspace_malloc(mspace msp, size_t bytes);
-
-/*
- mspace_free behaves as free, but operates within
- the given space.
-
- If compiled with FOOTERS==1, mspace_free is not actually needed.
- free may be called instead of mspace_free because freed chunks from
- any space are handled by their originating spaces.
-*/
-void mspace_free(mspace msp, void* mem);
-
-/*
- mspace_realloc behaves as realloc, but operates within
- the given space.
-
- If compiled with FOOTERS==1, mspace_realloc is not actually
- needed. realloc may be called instead of mspace_realloc because
- realloced chunks from any space are handled by their originating
- spaces.
-*/
-void* mspace_realloc(mspace msp, void* mem, size_t newsize);
-
-/*
- mspace_calloc behaves as calloc, but operates within
- the given space.
-*/
-void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
-
-/*
- mspace_memalign behaves as memalign, but operates within
- the given space.
-*/
-void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
-
-/*
- mspace_independent_calloc behaves as independent_calloc, but
- operates within the given space.
-*/
-void** mspace_independent_calloc(mspace msp, size_t n_elements,
- size_t elem_size, void* chunks[]);
-
-/*
- mspace_independent_comalloc behaves as independent_comalloc, but
- operates within the given space.
-*/
-void** mspace_independent_comalloc(mspace msp, size_t n_elements,
- size_t sizes[], void* chunks[]);
-
-/*
- mspace_footprint() returns the number of bytes obtained from the
- system for this space.
-*/
-size_t mspace_footprint(mspace msp);
-
-/*
- mspace_max_footprint() returns the peak number of bytes obtained from the
- system for this space.
-*/
-size_t mspace_max_footprint(mspace msp);
-
-
-#if !NO_MALLINFO
-/*
- mspace_mallinfo behaves as mallinfo, but reports properties of
- the given space.
-*/
-struct mallinfo mspace_mallinfo(mspace msp);
-#endif /* NO_MALLINFO */
-
-/*
- mspace_malloc_stats behaves as malloc_stats, but reports
- properties of the given space.
-*/
-void mspace_malloc_stats(mspace msp);
-
-/*
- mspace_trim behaves as malloc_trim, but
- operates within the given space.
-*/
-int mspace_trim(mspace msp, size_t pad);
-
-/*
- An alias for mallopt.
-*/
-int mspace_mallopt(int, int);
-
-#endif /* MSPACES */
-
-#ifdef __cplusplus
-}; /* end of extern "C" */
-#endif /* __cplusplus */
-
-/*
- ========================================================================
- To make a fully customizable malloc.h header file, cut everything
- above this line, put into file malloc.h, edit to suit, and #include it
- on the next line, as well as in programs that use this malloc.
- ========================================================================
-*/
-
-/* #include "malloc.h" */
-
-/*------------------------------ internal #includes ---------------------- */
-
-#ifdef WIN32
-#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
-#endif /* WIN32 */
-
-#include <stdio.h> /* for printing in malloc_stats */
-
-#ifndef LACKS_ERRNO_H
-#include <errno.h> /* for MALLOC_FAILURE_ACTION */
-#endif /* LACKS_ERRNO_H */
-#if FOOTERS
-#include <time.h> /* for magic initialization */
-#endif /* FOOTERS */
-#ifndef LACKS_STDLIB_H
-#include <stdlib.h> /* for abort() */
-#endif /* LACKS_STDLIB_H */
-#ifdef DEBUG
-#if ABORT_ON_ASSERT_FAILURE
-#define assert(x) if(!(x)) ABORT
-#else /* ABORT_ON_ASSERT_FAILURE */
-#include <assert.h>
-#endif /* ABORT_ON_ASSERT_FAILURE */
-#else /* DEBUG */
-#define assert(x)
-#endif /* DEBUG */
-#ifndef LACKS_STRING_H
-#include <string.h> /* for memset etc */
-#endif /* LACKS_STRING_H */
-#if USE_BUILTIN_FFS
-#ifndef LACKS_STRINGS_H
-#include <strings.h> /* for ffs */
-#endif /* LACKS_STRINGS_H */
-#endif /* USE_BUILTIN_FFS */
-#if HAVE_MMAP
-#ifndef LACKS_SYS_MMAN_H
-#include <sys/mman.h> /* for mmap */
-#endif /* LACKS_SYS_MMAN_H */
-#ifndef LACKS_FCNTL_H
-#include <fcntl.h>
-#endif /* LACKS_FCNTL_H */
-#endif /* HAVE_MMAP */
-#if HAVE_MORECORE
-#ifndef LACKS_UNISTD_H
-#include <unistd.h> /* for sbrk */
-#else /* LACKS_UNISTD_H */
-#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
-extern void* sbrk(ptrdiff_t);
-#endif /* FreeBSD etc */
-#endif /* LACKS_UNISTD_H */
-#endif /* HAVE_MORECORE */
-
-#ifndef WIN32
-#ifndef malloc_getpagesize
-# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
-# ifndef _SC_PAGE_SIZE
-# define _SC_PAGE_SIZE _SC_PAGESIZE
-# endif
-# endif
-# ifdef _SC_PAGE_SIZE
-# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
-# else
-# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
- extern size_t getpagesize();
-# define malloc_getpagesize getpagesize()
-# else
-# ifdef WIN32 /* use supplied emulation of getpagesize */
-# define malloc_getpagesize getpagesize()
-# else
-# ifndef LACKS_SYS_PARAM_H
-# include <sys/param.h>
-# endif
-# ifdef EXEC_PAGESIZE
-# define malloc_getpagesize EXEC_PAGESIZE
-# else
-# ifdef NBPG
-# ifndef CLSIZE
-# define malloc_getpagesize NBPG
-# else
-# define malloc_getpagesize (NBPG * CLSIZE)
-# endif
-# else
-# ifdef NBPC
-# define malloc_getpagesize NBPC
-# else
-# ifdef PAGESIZE
-# define malloc_getpagesize PAGESIZE
-# else /* just guess */
-# define malloc_getpagesize ((size_t)4096U)
-# endif
-# endif
-# endif
-# endif
-# endif
-# endif
-# endif
-#endif
-#endif
-
-/* ------------------- size_t and alignment properties -------------------- */
-
-/* The byte and bit size of a size_t */
-#define SIZE_T_SIZE (sizeof(size_t))
-#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
-
-/* Some constants coerced to size_t */
-/* Annoying but necessary to avoid errors on some platforms */
-#define SIZE_T_ZERO ((size_t)0)
-#define SIZE_T_ONE ((size_t)1)
-#define SIZE_T_TWO ((size_t)2)
-#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
-#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
-#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
-#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
-
-/* The bit mask value corresponding to MALLOC_ALIGNMENT */
-#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
-
-/* True if address a has acceptable alignment */
-#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
-
-/* the number of bytes to offset an address to align it */
-#define align_offset(A)\
- ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
- ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
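-
-/*
-  For example, with the default 8-byte alignment, an address ending in
-  0x4 gives align_offset == (8 - 4) & 7 == 4, while an already-aligned
-  address gives 0.
-*/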
-
-/* -------------------------- MMAP preliminaries ------------------------- */
-
-/*
- If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
- checks to fail so the compiler optimizer can delete code rather than
- using so many "#if"s.
-*/
-
-
-/* MORECORE and MMAP must return MFAIL on failure */
-#define MFAIL ((void*)(MAX_SIZE_T))
-#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
-
-#if !HAVE_MMAP
-#define IS_MMAPPED_BIT (SIZE_T_ZERO)
-#define USE_MMAP_BIT (SIZE_T_ZERO)
-#define CALL_MMAP(s) MFAIL
-#define CALL_MUNMAP(a, s) (-1)
-#define DIRECT_MMAP(s) MFAIL
-
-#else /* HAVE_MMAP */
-#define IS_MMAPPED_BIT (SIZE_T_ONE)
-#define USE_MMAP_BIT (SIZE_T_ONE)
-
-#ifndef WIN32
-#define CALL_MUNMAP(a, s) munmap((a), (s))
-#define MMAP_PROT (PROT_READ|PROT_WRITE)
-#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
-#define MAP_ANONYMOUS MAP_ANON
-#endif /* MAP_ANON */
-#ifdef MAP_ANONYMOUS
-#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
-#define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
-#else /* MAP_ANONYMOUS */
-/*
- Nearly all versions of mmap support MAP_ANONYMOUS, so the following
- is unlikely to be needed, but is supplied just in case.
-*/
-#define MMAP_FLAGS (MAP_PRIVATE)
-static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
-#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
- (dev_zero_fd = open("/dev/zero", O_RDWR), \
- mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
- mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
-#endif /* MAP_ANONYMOUS */
-
-#define DIRECT_MMAP(s) CALL_MMAP(s)
-#else /* WIN32 */
-
-/* Win32 MMAP via VirtualAlloc */
-static void* win32mmap(size_t size) {
- void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
- return (ptr != 0)? ptr: MFAIL;
-}
-
-/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
-static void* win32direct_mmap(size_t size) {
- void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
- PAGE_READWRITE);
- return (ptr != 0)? ptr: MFAIL;
-}
-
-/* This function supports releasing coalesced segments */
-static int win32munmap(void* ptr, size_t size) {
- MEMORY_BASIC_INFORMATION minfo;
- char* cptr = ptr;
- while (size) {
- if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
- return -1;
- if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
- minfo.State != MEM_COMMIT || minfo.RegionSize > size)
- return -1;
- if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
- return -1;
- cptr += minfo.RegionSize;
- size -= minfo.RegionSize;
- }
- return 0;
-}
-
-#define CALL_MMAP(s) win32mmap(s)
-#define CALL_MUNMAP(a, s) win32munmap((a), (s))
-#define DIRECT_MMAP(s) win32direct_mmap(s)
-#endif /* WIN32 */
-#endif /* HAVE_MMAP */
-
-#if HAVE_MMAP && HAVE_MREMAP
-#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
-#else /* HAVE_MMAP && HAVE_MREMAP */
-#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
-#endif /* HAVE_MMAP && HAVE_MREMAP */
-
-#if HAVE_MORECORE
-#define CALL_MORECORE(S) MORECORE(S)
-#else /* HAVE_MORECORE */
-#define CALL_MORECORE(S) MFAIL
-#endif /* HAVE_MORECORE */
-
-/* mstate bit set if contiguous morecore disabled or failed */
-#define USE_NONCONTIGUOUS_BIT (4U)
-
-/* segment bit set in create_mspace_with_base */
-#define EXTERN_BIT (8U)
-
-
-/* --------------------------- Lock preliminaries ------------------------ */
-
-#if USE_LOCKS
-
-/*
- When locks are defined, there are up to two global locks:
-
- * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
- MORECORE. In many cases sys_alloc requires two calls, that should
- not be interleaved with calls by other threads. This does not
- protect against direct calls to MORECORE by other threads not
- using this lock, so there is still code to cope as best we can with
- interference.
-
- * magic_init_mutex ensures that mparams.magic and other
- unique mparams values are initialized only once.
-*/
-
-#ifndef WIN32
-/* By default use posix locks */
-#include <pthread.h>
-#define MLOCK_T pthread_mutex_t
-#define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
-#define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
-#define RELEASE_LOCK(l) pthread_mutex_unlock(l)
-
-#if HAVE_MORECORE
-static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
-#endif /* HAVE_MORECORE */
-
-static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
-
-#else /* WIN32 */
-/*
- Because lock-protected regions have bounded times, and there
- are no recursive lock calls, we can use simple spinlocks.
-*/
-
-#define MLOCK_T long
-static int win32_acquire_lock (MLOCK_T *sl) {
- for (;;) {
-#ifdef InterlockedCompareExchangePointer
- if (!InterlockedCompareExchange(sl, 1, 0))
- return 0;
-#else /* Use older void* version */
- if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
- return 0;
-#endif /* InterlockedCompareExchangePointer */
- Sleep (0);
- }
-}
-
-static void win32_release_lock (MLOCK_T *sl) {
- InterlockedExchange (sl, 0);
-}
-
-#define INITIAL_LOCK(l) *(l)=0
-#define ACQUIRE_LOCK(l) win32_acquire_lock(l)
-#define RELEASE_LOCK(l) win32_release_lock(l)
-#if HAVE_MORECORE
-static MLOCK_T morecore_mutex;
-#endif /* HAVE_MORECORE */
-static MLOCK_T magic_init_mutex;
-#endif /* WIN32 */
-
-#define USE_LOCK_BIT (2U)
-#else /* USE_LOCKS */
-#define USE_LOCK_BIT (0U)
-#define INITIAL_LOCK(l)
-#endif /* USE_LOCKS */
-
-#if USE_LOCKS && HAVE_MORECORE
-#define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
-#define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
-#else /* USE_LOCKS && HAVE_MORECORE */
-#define ACQUIRE_MORECORE_LOCK()
-#define RELEASE_MORECORE_LOCK()
-#endif /* USE_LOCKS && HAVE_MORECORE */
-
-#if USE_LOCKS
-#define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
-#define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
-#else /* USE_LOCKS */
-#define ACQUIRE_MAGIC_INIT_LOCK()
-#define RELEASE_MAGIC_INIT_LOCK()
-#endif /* USE_LOCKS */
-
-
-/* ----------------------- Chunk representations ------------------------ */
-
-/*
- (The following includes lightly edited explanations by Colin Plumb.)
-
- The malloc_chunk declaration below is misleading (but accurate and
- necessary). It declares a "view" into memory allowing access to
- necessary fields at known offsets from a given base.
-
- Chunks of memory are maintained using a `boundary tag' method as
- originally described by Knuth. (See the paper by Paul Wilson
- ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
- techniques.) Sizes of free chunks are stored both in the front of
- each chunk and at the end. This makes consolidating fragmented
- chunks into bigger chunks fast. The head fields also hold bits
- representing whether chunks are free or in use.
-
- Here are some pictures to make it clearer. They are "exploded" to
- show that the state of a chunk can be thought of as extending from
- the high 31 bits of the head field of its header through the
-  prev_foot and PINUSE_BIT of the following chunk header.
-
- A chunk that's in use looks like:
-
-   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           | Size of previous chunk (if P = 0)                             |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
-         | Size of this chunk                                         1| +-+
-     mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           |                                                               |
-           +-                                                             -+
-           |                                                               |
-           +-                                                             -+
-           |                                                               :
-           +- size - sizeof(size_t) available payload bytes               -+
-           :                                                               |
-   chunk-> +-                                                             -+
-           |                                                               |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
-         | Size of next chunk (may or may not be in use)               | +-+
-     mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-
- And if it's free, it looks like this:
-
-   chunk-> +-                                                             -+
-           | User payload (must be in use, or we would have merged!)      |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
-         | Size of this chunk                                         0| +-+
-     mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           | Next pointer                                                  |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           | Prev pointer                                                  |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           |                                                               :
-           +- size - sizeof(struct chunk) unused bytes                    -+
-           :                                                               |
-   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           | Size of this chunk                                            |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
-         | Size of next chunk (must be in use, or we would have merged)| +-+
-     mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-           |                                                               :
-           +- User payload                                                -+
-           :                                                               |
-           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-                                                                         |0|
-                                                                         +-+
- Note that since we always merge adjacent free chunks, the chunks
- adjacent to a free chunk must be in use.
-
- Given a pointer to a chunk (which can be derived trivially from the
- payload pointer) we can, in O(1) time, find out whether the adjacent
- chunks are free, and if so, unlink them from the lists that they
- are on and merge them with the current chunk.
-
- Chunks always begin on even word boundaries, so the mem portion
- (which is returned to the user) is also on an even word boundary, and
- thus at least double-word aligned.
-
- The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
- chunk size (which is always a multiple of two words), is an in-use
- bit for the *previous* chunk. If that bit is *clear*, then the
- word before the current chunk size contains the previous chunk
- size, and can be used to find the front of the previous chunk.
- The very first chunk allocated always has this bit set, preventing
- access to non-existent (or non-owned) memory. If pinuse is set for
- any given chunk, then you CANNOT determine the size of the
- previous chunk, and might even get a memory addressing fault when
- trying to do so.
-
- The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
- the chunk size redundantly records whether the current chunk is
- inuse. This redundancy enables usage checks within free and realloc,
- and reduces indirection when freeing and consolidating chunks.
-
- Each freshly allocated chunk must have both cinuse and pinuse set.
- That is, each allocated chunk borders either a previously allocated
- and still in-use chunk, or the base of its memory arena. This is
-  ensured by making all allocations from the `lowest' part of any
- found chunk. Further, no free chunk physically borders another one,
- so each free chunk is known to be preceded and followed by either
- inuse chunks or the ends of memory.
-
- Note that the `foot' of the current chunk is actually represented
- as the prev_foot of the NEXT chunk. This makes it easier to
- deal with alignments etc but can be very confusing when trying
- to extend or adapt this code.
-
- The exceptions to all this are
-
- 1. The special chunk `top' is the top-most available chunk (i.e.,
- the one bordering the end of available memory). It is treated
- specially. Top is never included in any bin, is used only if
- no other chunk is available, and is released back to the
- system if it is very large (see M_TRIM_THRESHOLD). In effect,
- the top chunk is treated as larger (and thus less well
- fitting) than any other available chunk. The top chunk
- doesn't update its trailing size field since there is no next
- contiguous chunk that would have to index off it. However,
- space is still allocated for it (TOP_FOOT_SIZE) to enable
- separation or merging when space is extended.
-
-  2. Chunks allocated via mmap, which have the lowest-order bit
-  (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
-  PINUSE_BIT in their head fields. Because they are allocated
-  one-by-one, each must carry its own prev_foot field, which is
-  also used to hold the offset this chunk has within its mmapped
-  region, which is needed to preserve alignment. Each mmapped
-  chunk is trailed by the first two fields of a fake next-chunk
-  for the sake of usage checks.
-
-*/
-
-struct malloc_chunk {
- size_t prev_foot; /* Size of previous chunk (if free). */
- size_t head; /* Size and inuse bits. */
- struct malloc_chunk* fd; /* double links -- used only if free. */
- struct malloc_chunk* bk;
-};
-
-typedef struct malloc_chunk mchunk;
-typedef struct malloc_chunk* mchunkptr;
-typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
-typedef unsigned int bindex_t; /* Described below */
-typedef unsigned int binmap_t; /* Described below */
-typedef unsigned int flag_t; /* The type of various bit flag sets */
-
-/* ------------------- Chunks sizes and alignments ----------------------- */
-
-#define MCHUNK_SIZE (sizeof(mchunk))
-
-#if FOOTERS
-#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
-#else /* FOOTERS */
-#define CHUNK_OVERHEAD (SIZE_T_SIZE)
-#endif /* FOOTERS */
-
-/* MMapped chunks need a second word of overhead ... */
-#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
-/* ... and additional padding for fake next-chunk at foot */
-#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
-
-/* The smallest size we can malloc is an aligned minimal chunk */
-#define MIN_CHUNK_SIZE\
- ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
-
-/* conversion from malloc headers to user pointers, and back */
-#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
-#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
-/* chunk associated with aligned address A */
-#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
-
-/* Bounds on request (not chunk) sizes. */
-#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
-#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
-
-/* pad request bytes into a usable size */
-#define pad_request(req) \
- (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
-
-/* pad request, checking for minimum (but not maximum) */
-#define request2size(req) \
- (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
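-
-/*
-  A worked example (illustrative; assuming 4-byte size_t, 8-byte
-  alignment, and !FOOTERS, so that CHUNK_OVERHEAD == 4,
-  CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE == 16, MIN_REQUEST == 11):
-
-    request2size(8)  == MIN_CHUNK_SIZE    == 16   (8 < MIN_REQUEST)
-    request2size(12) == (12 + 4 + 7) & ~7 == 16
-    request2size(13) == (13 + 4 + 7) & ~7 == 24
-*/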
-
-
-/* ------------------ Operations on head and foot fields ----------------- */
-
-/*
-  The head field of a chunk is or'ed with PINUSE_BIT when the previous
-  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
-  in use. If the chunk was obtained with mmap, the prev_foot field has
-  IS_MMAPPED_BIT set, and also holds the offset of the chunk's base
-  from the base of its mmapped region.
-*/
-
-#define PINUSE_BIT (SIZE_T_ONE)
-#define CINUSE_BIT (SIZE_T_TWO)
-#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
-
-/* Head value for fenceposts */
-#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
-
-/* extraction of fields from head words */
-#define cinuse(p) ((p)->head & CINUSE_BIT)
-#define pinuse(p) ((p)->head & PINUSE_BIT)
-#define chunksize(p) ((p)->head & ~(INUSE_BITS))
-
-#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
-#define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
-
-/* Treat space at ptr +/- offset as a chunk */
-#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
-#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
-
-/* Ptr to next or previous physical malloc_chunk. */
-#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
-#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
-
-/* extract next chunk's pinuse bit */
-#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
-
-/* Get/set size at footer */
-#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
-#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
-
-/* Set size, pinuse bit, and foot */
-#define set_size_and_pinuse_of_free_chunk(p, s)\
- ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
-
-/* Set size, pinuse bit, foot, and clear next pinuse */
-#define set_free_with_pinuse(p, s, n)\
- (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
-
-#define is_mmapped(p)\
- (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
-
-/* Get the internal overhead associated with chunk p */
-#define overhead_for(p)\
- (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
-
-/* Return true if malloced space is not necessarily cleared */
-#if MMAP_CLEARS
-#define calloc_must_clear(p) (!is_mmapped(p))
-#else /* MMAP_CLEARS */
-#define calloc_must_clear(p) (1)
-#endif /* MMAP_CLEARS */
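-
-/*
-  Illustrative sketch (hypothetical helper, not used by this
-  allocator) of how the boundary-tag macros above combine when
-  deciding how far a chunk being freed can coalesce: the previous
-  neighbor is reachable only when pinuse(p) is clear (its size is
-  then in p->prev_foot), while the next neighbor is always reachable
-  from the head size. Bin unlinking and the mmapped case are omitted.
-*/
-#if 0 /* example only; never compiled */
-static size_t coalesce_extent(mchunkptr p) {
-  size_t psize = chunksize(p);
-  mchunkptr next = chunk_plus_offset(p, psize);
-  if (!pinuse(p)) {                   /* previous chunk is free */
-    size_t prevsize = p->prev_foot;   /* valid because pinuse is clear */
-    p = chunk_minus_offset(p, prevsize);
-    psize += prevsize;
-  }
-  if (!cinuse(next))                  /* next chunk is free too */
-    psize += chunksize(next);
-  return psize;
-}
-#endif /* 0 */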
-
-/* ---------------------- Overlaid data structures ----------------------- */
-
-/*
- When chunks are not in use, they are treated as nodes of either
- lists or trees.
-
- "Small" chunks are stored in circular doubly-linked lists, and look
- like this:
-
-    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Size of previous chunk                            |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-    `head:' |             Size of chunk, in bytes                         |P|
-      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Forward pointer to next chunk in list             |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Back pointer to previous chunk in list            |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Unused space (may be 0 bytes long)                .
-            .                                                               .
-            .                                                               |
-nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-    `foot:' |             Size of chunk, in bytes                           |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-
- Larger chunks are kept in a form of bitwise digital trees (aka
- tries) keyed on chunksizes. Because malloc_tree_chunks are only for
- free chunks greater than 256 bytes, their size doesn't impose any
- constraints on user chunk sizes. Each node looks like:
-
-    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Size of previous chunk                            |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-    `head:' |             Size of chunk, in bytes                         |P|
-      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Forward pointer to next chunk of same size        |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Back pointer to previous chunk of same size       |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Pointer to left child (child[0])                  |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Pointer to right child (child[1])                 |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Pointer to parent                                 |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             bin index of this chunk                           |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-            |             Unused space                                      .
-            .                                                               |
-nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-    `foot:' |             Size of chunk, in bytes                           |
-            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-
- Each tree holding treenodes is a tree of unique chunk sizes. Chunks
- of the same size are arranged in a circularly-linked list, with only
- the oldest chunk (the next to be used, in our FIFO ordering)
- actually in the tree. (Tree members are distinguished by a non-null
-  parent pointer.) If a chunk with the same size as an existing node
- is inserted, it is linked off the existing node using pointers that
- work in the same way as fd/bk pointers of small chunks.
-
- Each tree contains a power of 2 sized range of chunk sizes (the
-  smallest is 0x100 <= x < 0x180), which is divided in half at each
-  tree level, with the chunks in the smaller half of the range (0x100
-  <= x < 0x140 for the top node) in the left subtree and the larger
- half (0x140 <= x < 0x180) in the right subtree. This is, of course,
- done by inspecting individual bits.
-
- Using these rules, each node's left subtree contains all smaller
- sizes than its right subtree. However, the node at the root of each
- subtree has no particular ordering relationship to either. (The
- dividing line between the subtree sizes is based on trie relation.)
- If we remove the last chunk of a given size from the interior of the
- tree, we need to replace it with a leaf node. The tree ordering
- rules permit a node to be replaced by any leaf below it.
-
- The smallest chunk in a tree (a common operation in a best-fit
- allocator) can be found by walking a path to the leftmost leaf in
- the tree. Unlike a usual binary tree, where we follow left child
- pointers until we reach a null, here we follow the right child
- pointer any time the left one is null, until we reach a leaf with
- both child pointers null. The smallest chunk in the tree will be
- somewhere along that path.
-
- The worst case number of steps to add, find, or remove a node is
- bounded by the number of bits differentiating chunks within
- bins. Under current bin calculations, this ranges from 6 up to 21
- (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
- is of course much better.
-*/
-
-struct malloc_tree_chunk {
- /* The first four fields must be compatible with malloc_chunk */
- size_t prev_foot;
- size_t head;
- struct malloc_tree_chunk* fd;
- struct malloc_tree_chunk* bk;
-
- struct malloc_tree_chunk* child[2];
- struct malloc_tree_chunk* parent;
- bindex_t index;
-};
-
-typedef struct malloc_tree_chunk tchunk;
-typedef struct malloc_tree_chunk* tchunkptr;
-typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
-
-/* A little helper macro for trees */
-#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
-
-/* ----------------------------- Segments -------------------------------- */
-
-/*
- Each malloc space may include non-contiguous segments, held in a
- list headed by an embedded malloc_segment record representing the
- top-most space. Segments also include flags holding properties of
- the space. Large chunks that are directly allocated by mmap are not
- included in this list. They are instead independently created and
- destroyed without otherwise keeping track of them.
-
- Segment management mainly comes into play for spaces allocated by
- MMAP. Any call to MMAP might or might not return memory that is
- adjacent to an existing segment. MORECORE normally contiguously
- extends the current space, so this space is almost always adjacent,
- which is simpler and faster to deal with. (This is why MORECORE is
- used preferentially to MMAP when both are available -- see
- sys_alloc.) When allocating using MMAP, we don't use any of the
- hinting mechanisms (inconsistently) supported in various
- implementations of unix mmap, or distinguish reserving from
- committing memory. Instead, we just ask for space, and exploit
- contiguity when we get it. It is probably possible to do
- better than this on some systems, but no general scheme seems
- to be significantly better.
-
- Management entails a simpler variant of the consolidation scheme
- used for chunks to reduce fragmentation -- new adjacent memory is
- normally prepended or appended to an existing segment. However,
- there are limitations compared to chunk consolidation that mostly
- reflect the fact that segment processing is relatively infrequent
-  (occurring only when getting memory from the system) and that we
- don't expect to have huge numbers of segments:
-
- * Segments are not indexed, so traversal requires linear scans. (It
-  would be possible to index these, but it is not worth the extra
- overhead and complexity for most programs on most platforms.)
- * New segments are only appended to old ones when holding top-most
- memory; if they cannot be prepended to others, they are held in
- different segments.
-
- Except for the top-most segment of an mstate, each segment record
- is kept at the tail of its segment. Segments are added by pushing
- segment records onto the list headed by &mstate.seg for the
- containing mstate.
-
- Segment flags control allocation/merge/deallocation policies:
- * If EXTERN_BIT set, then we did not allocate this segment,
- and so should not try to deallocate or merge with others.
- (This currently holds only for the initial segment passed
- into create_mspace_with_base.)
- * If IS_MMAPPED_BIT set, the segment may be merged with
- other surrounding mmapped segments and trimmed/de-allocated
- using munmap.
- * If neither bit is set, then the segment was obtained using
- MORECORE so can be merged with surrounding MORECORE'd segments
- and deallocated/trimmed using MORECORE with negative arguments.
-*/
-
-struct malloc_segment {
- char* base; /* base address */
- size_t size; /* allocated size */
- struct malloc_segment* next; /* ptr to next segment */
- flag_t sflags; /* mmap and extern flag */
-};
-
-#define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
-#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
-
-typedef struct malloc_segment msegment;
-typedef struct malloc_segment* msegmentptr;
-
-/* ---------------------------- malloc_state ----------------------------- */
-
-/*
- A malloc_state holds all of the bookkeeping for a space.
- The main fields are:
-
- Top
- The topmost chunk of the currently active segment. Its size is
- cached in topsize. The actual size of topmost space is
- topsize+TOP_FOOT_SIZE, which includes space reserved for adding
- fenceposts and segment records if necessary when getting more
- space from the system. The size at which to autotrim top is
- cached from mparams in trim_check, except that it is disabled if
- an autotrim fails.
-
- Designated victim (dv)
- This is the preferred chunk for servicing small requests that
- don't have exact fits. It is normally the chunk split off most
- recently to service another small request. Its size is cached in
- dvsize. The link fields of this chunk are not maintained since it
- is not kept in a bin.
-
- SmallBins
- An array of bin headers for free chunks. These bins hold chunks
- with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
- chunks of all the same size, spaced 8 bytes apart. To simplify
-  use in doubly-linked lists, each bin header acts as a malloc_chunk
- pointing to the real first node, if it exists (else pointing to
- itself). This avoids special-casing for headers. But to avoid
- waste, we allocate only the fd/bk pointers of bins, and then use
- repositioning tricks to treat these as the fields of a chunk.
-
- TreeBins
- Treebins are pointers to the roots of trees holding a range of
- sizes. There are 2 equally spaced treebins for each power of two
-  from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
- larger.
-
- Bin maps
- There is one bit map for small bins ("smallmap") and one for
- treebins ("treemap). Each bin sets its bit when non-empty, and
- clears the bit when empty. Bit operations are then used to avoid
- bin-by-bin searching -- nearly all "search" is done without ever
- looking at bins that won't be selected. The bit maps
-  conservatively use 32 bits per map word, even on a 64-bit system.
- For a good description of some of the bit-based techniques used
- here, see Henry S. Warren Jr's book "Hacker's Delight" (and
- supplement at http://hackersdelight.org/). Many of these are
- intended to reduce the branchiness of paths through malloc etc, as
- well as to reduce the number of memory locations read or written.
-
- Segments
- A list of segments headed by an embedded malloc_segment record
- representing the initial space.
-
- Address check support
- The least_addr field is the least address ever obtained from
- MORECORE or MMAP. Attempted frees and reallocs of any address less
- than this are trapped (unless INSECURE is defined).
-
- Magic tag
- A cross-check field that should always hold same value as mparams.magic.
-
- Flags
- Bits recording whether to use MMAP, locks, or contiguous MORECORE
-
- Statistics
- Each space keeps track of current and maximum system memory
- obtained via MORECORE or MMAP.
-
- Locking
- If USE_LOCKS is defined, the "mutex" lock is acquired and released
- around every public call using this mspace.
-*/
-
-/* Bin types, widths and sizes */
-#define NSMALLBINS (32U)
-#define NTREEBINS (32U)
-#define SMALLBIN_SHIFT (3U)
-#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
-#define TREEBIN_SHIFT (8U)
-#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
-#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
-#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
-
-struct malloc_state {
- binmap_t smallmap;
- binmap_t treemap;
- size_t dvsize;
- size_t topsize;
- char* least_addr;
- mchunkptr dv;
- mchunkptr top;
- size_t trim_check;
- size_t magic;
- mchunkptr smallbins[(NSMALLBINS+1)*2];
- tbinptr treebins[NTREEBINS];
- size_t footprint;
- size_t max_footprint;
- flag_t mflags;
-#if USE_LOCKS
- MLOCK_T mutex; /* locate lock among fields that rarely change */
-#endif /* USE_LOCKS */
- msegment seg;
-};
-
-typedef struct malloc_state* mstate;
-
-/* ------------- Global malloc_state and malloc_params ------------------- */
-
-/*
- malloc_params holds global properties, including those that can be
- dynamically set using mallopt. There is a single instance, mparams,
- initialized in init_mparams.
-*/
-
-struct malloc_params {
- size_t magic;
- size_t page_size;
- size_t granularity;
- size_t mmap_threshold;
- size_t trim_threshold;
- flag_t default_mflags;
-};
-
-static struct malloc_params mparams;
-
-/* The global malloc_state used for all non-"mspace" calls */
-static struct malloc_state _gm_;
-#define gm (&_gm_)
-#define is_global(M) ((M) == &_gm_)
-#define is_initialized(M) ((M)->top != 0)
-
-/* -------------------------- system alloc setup ------------------------- */
-
-/* Operations on mflags */
-
-#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
-#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
-#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
-
-#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
-#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
-#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
-
-#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
-#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
-
-#define set_lock(M,L)\
- ((M)->mflags = (L)?\
- ((M)->mflags | USE_LOCK_BIT) :\
- ((M)->mflags & ~USE_LOCK_BIT))
-
-/* page-align a size */
-#define page_align(S)\
- (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
-
-/* granularity-align a size */
-#define granularity_align(S)\
- (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
-
-#define is_page_aligned(S)\
- (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
-#define is_granularity_aligned(S)\
- (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
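-
-/*
-  Worked example (assuming a 4096-byte page): page_align(1) ==
-  (1 + 4096) & ~4095 == 4096, while page_align(4096) == 8192 -- as
-  written, an already-aligned size advances to the next boundary,
-  costing at most one extra page of slack.
-*/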
-
-/* True if segment S holds address A */
-#define segment_holds(S, A)\
- ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
-
-/* Return segment holding given address */
-static msegmentptr segment_holding(mstate m, char* addr) {
- msegmentptr sp = &m->seg;
- for (;;) {
- if (addr >= sp->base && addr < sp->base + sp->size)
- return sp;
- if ((sp = sp->next) == 0)
- return 0;
- }
-}
-
-/* Return true if segment contains a segment link */
-static int has_segment_link(mstate m, msegmentptr ss) {
- msegmentptr sp = &m->seg;
- for (;;) {
- if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
- return 1;
- if ((sp = sp->next) == 0)
- return 0;
- }
-}
-
-#ifndef MORECORE_CANNOT_TRIM
-#define should_trim(M,s) ((s) > (M)->trim_check)
-#else /* MORECORE_CANNOT_TRIM */
-#define should_trim(M,s) (0)
-#endif /* MORECORE_CANNOT_TRIM */
-
-/*
- TOP_FOOT_SIZE is padding at the end of a segment, including space
- that may be needed to place segment records and fenceposts when new
- noncontiguous segments are added.
-*/
-#define TOP_FOOT_SIZE\
- (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
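-
-/*
-  For scale (illustrative; 4-byte pointers and size_t, !FOOTERS):
-  align_offset(chunk2mem(0)) == 0, pad_request(sizeof(struct
-  malloc_segment)) == 24, and MIN_CHUNK_SIZE == 16, giving
-  TOP_FOOT_SIZE == 40 bytes reserved above top.
-*/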
-
-
-/* ------------------------------- Hooks -------------------------------- */
-
-/*
- PREACTION should be defined to return 0 on success, and nonzero on
- failure. If you are not using locking, you can redefine these to do
- anything you like.
-*/
-
-#if USE_LOCKS
-
-/* Ensure locks are initialized */
-#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
-
-#define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
-#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
-#else /* USE_LOCKS */
-
-#ifndef PREACTION
-#define PREACTION(M) (0)
-#endif /* PREACTION */
-
-#ifndef POSTACTION
-#define POSTACTION(M)
-#endif /* POSTACTION */
-
-#endif /* USE_LOCKS */
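-
-/*
-  Usage sketch (hypothetical entry point, mirroring how the public
-  routines below use these hooks): PREACTION lazily initializes
-  mparams, acquires m's lock when locking is enabled, and evaluates
-  nonzero only on failure; POSTACTION releases the lock if held.
-*/
-#if 0 /* example only; never compiled */
-static void* locked_entry(mstate m) {
-  void* mem = 0;
-  if (!PREACTION(m)) {
-    /* ... operate on m while its lock (if any) is held ... */
-    POSTACTION(m);
-  }
-  return mem;
-}
-#endif /* 0 */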
-
-/*
- CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
- USAGE_ERROR_ACTION is triggered on detected bad frees and
- reallocs. The argument p is an address that might have triggered the
- fault. It is ignored by the two predefined actions, but might be
- useful in custom actions that try to help diagnose errors.
-*/
-
-#if PROCEED_ON_ERROR
-
-/* A count of the number of corruption errors causing resets */
-int malloc_corruption_error_count;
-
-/* default corruption action */
-static void reset_on_error(mstate m);
-
-#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
-#define USAGE_ERROR_ACTION(m, p)
-
-#else /* PROCEED_ON_ERROR */
-
-#ifndef CORRUPTION_ERROR_ACTION
-#define CORRUPTION_ERROR_ACTION(m) ABORT
-#endif /* CORRUPTION_ERROR_ACTION */
-
-#ifndef USAGE_ERROR_ACTION
-#define USAGE_ERROR_ACTION(m,p) ABORT
-#endif /* USAGE_ERROR_ACTION */
-
-#endif /* PROCEED_ON_ERROR */
-
-/* -------------------------- Debugging setup ---------------------------- */
-
-#if ! DEBUG
-
-#define check_free_chunk(M,P)
-#define check_inuse_chunk(M,P)
-#define check_malloced_chunk(M,P,N)
-#define check_mmapped_chunk(M,P)
-#define check_malloc_state(M)
-#define check_top_chunk(M,P)
-
-#else /* DEBUG */
-#define check_free_chunk(M,P) do_check_free_chunk(M,P)
-#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
-#define check_top_chunk(M,P) do_check_top_chunk(M,P)
-#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
-#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
-#define check_malloc_state(M) do_check_malloc_state(M)
-
-static void do_check_any_chunk(mstate m, mchunkptr p);
-static void do_check_top_chunk(mstate m, mchunkptr p);
-static void do_check_mmapped_chunk(mstate m, mchunkptr p);
-static void do_check_inuse_chunk(mstate m, mchunkptr p);
-static void do_check_free_chunk(mstate m, mchunkptr p);
-static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
-static void do_check_tree(mstate m, tchunkptr t);
-static void do_check_treebin(mstate m, bindex_t i);
-static void do_check_smallbin(mstate m, bindex_t i);
-static void do_check_malloc_state(mstate m);
-static int bin_find(mstate m, mchunkptr x);
-static size_t traverse_and_check(mstate m);
-#endif /* DEBUG */
-
-/* ---------------------------- Indexing Bins ---------------------------- */
-
-#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
-#define small_index(s) ((s) >> SMALLBIN_SHIFT)
-#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
-#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
-
-/* addressing by index. See above about smallbin repositioning */
-#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
-#define treebin_at(M,i) (&((M)->treebins[i]))
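-
-/*
-  Worked example (SMALLBIN_SHIFT == 3): a 40-byte free chunk has
-  small_index(40) == 5 and small_index2size(5) == 40, so smallbin 5
-  holds only 40-byte chunks; is_small(s) holds exactly for s < 256.
-*/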
-
-/* assign tree index for size S to variable I */
-#if defined(__GNUC__) && defined(i386)
-#define compute_tree_index(S, I)\
-{\
- size_t X = S >> TREEBIN_SHIFT;\
- if (X == 0)\
- I = 0;\
- else if (X > 0xFFFF)\
- I = NTREEBINS-1;\
- else {\
- unsigned int K;\
- __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
- I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
- }\
-}
-#else /* GNUC */
-#define compute_tree_index(S, I)\
-{\
- size_t X = S >> TREEBIN_SHIFT;\
- if (X == 0)\
- I = 0;\
- else if (X > 0xFFFF)\
- I = NTREEBINS-1;\
- else {\
- unsigned int Y = (unsigned int)X;\
- unsigned int N = ((Y - 0x100) >> 16) & 8;\
- unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
- N += K;\
- N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
- K = 14 - N + ((Y <<= K) >> 15);\
- I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
- }\
-}
-#endif /* GNUC */
-
-/* Bit representing maximum resolved size in a treebin at i */
-#define bit_for_tree_index(i) \
-   (((i) == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2))
-
-/* Shift placing maximum resolved bit in a treebin at i as sign bit */
-#define leftshift_for_tree_index(i) \
- ((i == NTREEBINS-1)? 0 : \
- ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
-
-/* The size of the smallest chunk held in bin with index i */
-#define minsize_for_tree_index(i) \
- ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
- (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
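-
-/*
-  Worked example (TREEBIN_SHIFT == 8): for S == 0x300, X == S >> 8
-  == 3, whose highest set bit is K == 1, so
-  I == (K << 1) + ((S >> (K + 7)) & 1) == 2 + 1 == 3. Consistently,
-  minsize_for_tree_index(3) == 0x300: treebin 2 holds sizes in
-  [0x200, 0x300) and treebin 3 holds [0x300, 0x400).
-*/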
-
-
-/* ------------------------ Operations on bin maps ----------------------- */
-
-/* bit corresponding to given index */
-#define idx2bit(i) ((binmap_t)(1) << (i))
-
-/* Mark/Clear bits with given index */
-#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
-#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
-#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
-
-#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
-#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
-#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
-
-/* index corresponding to given bit */
-
-#if defined(__GNUC__) && defined(i386)
-#define compute_bit2idx(X, I)\
-{\
- unsigned int J;\
- __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
- I = (bindex_t)J;\
-}
-
-#else /* GNUC */
-#if USE_BUILTIN_FFS
-#define compute_bit2idx(X, I) I = ffs(X)-1
-
-#else /* USE_BUILTIN_FFS */
-#define compute_bit2idx(X, I)\
-{\
- unsigned int Y = X - 1;\
- unsigned int K = Y >> (16-4) & 16;\
- unsigned int N = K; Y >>= K;\
- N += K = Y >> (8-3) & 8; Y >>= K;\
- N += K = Y >> (4-2) & 4; Y >>= K;\
- N += K = Y >> (2-1) & 2; Y >>= K;\
- N += K = Y >> (1-0) & 1; Y >>= K;\
- I = (bindex_t)(N + Y);\
-}
-#endif /* USE_BUILTIN_FFS */
-#endif /* GNUC */
-
-/* isolate the least set bit of a bitmap */
-#define least_bit(x) ((x) & -(x))
-
-/* mask with all bits to left of least bit of x on */
-#define left_bits(x) ((x<<1) | -(x<<1))
-
-/* mask with all bits to left of or equal to least bit of x on */
-#define same_or_left_bits(x) ((x) | -(x))
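-
-/*
-  Worked example (x == 0x14, binary 10100; least set bit is bit 2):
-    least_bit(0x14)         == 0x04
-    left_bits(0x04)         == 0xFFFFFFF8  (bits strictly above bit 2)
-    same_or_left_bits(0x04) == 0xFFFFFFFC  (bit 2 and above)
-  and compute_bit2idx(0x04, I) yields I == 2. These let bin scans
-  jump directly to the next non-empty bin at or above a target index.
-*/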
-
-
-/* ----------------------- Runtime Check Support ------------------------- */
-
-/*
- For security, the main invariant is that malloc/free/etc never
- writes to a static address other than malloc_state, unless static
- malloc_state itself has been corrupted, which cannot occur via
- malloc (because of these checks). In essence this means that we
- believe all pointers, sizes, maps etc held in malloc_state, but
-  check all of those linked or offset from other embedded data
- structures. These checks are interspersed with main code in a way
- that tends to minimize their run-time cost.
-
- When FOOTERS is defined, in addition to range checking, we also
-  verify footer fields of inuse chunks, which can be used to guarantee
-  that the mstate controlling malloc/free is intact. This is a
-  streamlined version of the approach described by William Robertson
-  et al in "Run-time Detection of Heap-based Overflows" LISA'03
-  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
-  of an inuse chunk holds the xor of its mstate and a random seed,
-  which is checked upon calls to free() and realloc(). This is
-  (probabilistically) unguessable from outside the program, but can be
- computed by any code successfully malloc'ing any chunk, so does not
- itself provide protection against code that has already broken
- security through some other means. Unlike Robertson et al, we
- always dynamically check addresses of all offset chunks (previous,
- next, etc). This turns out to be cheaper than relying on hashes.
-*/
-
-#if !INSECURE
-/* Check if address a is at least as high as any from MORECORE or MMAP */
-#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
-/* Check if address of next chunk n is higher than base chunk p */
-#define ok_next(p, n) ((char*)(p) < (char*)(n))
-/* Check if p has its cinuse bit on */
-#define ok_cinuse(p) cinuse(p)
-/* Check if p has its pinuse bit on */
-#define ok_pinuse(p) pinuse(p)
-
-#else /* !INSECURE */
-#define ok_address(M, a) (1)
-#define ok_next(b, n) (1)
-#define ok_cinuse(p) (1)
-#define ok_pinuse(p) (1)
-#endif /* !INSECURE */
-
-#if (FOOTERS && !INSECURE)
-/* Check if (alleged) mstate m has expected magic field */
-#define ok_magic(M) ((M)->magic == mparams.magic)
-#else /* (FOOTERS && !INSECURE) */
-#define ok_magic(M) (1)
-#endif /* (FOOTERS && !INSECURE) */
-
-
-/* In gcc, use __builtin_expect to minimize impact of checks */
-#if !INSECURE
-#if defined(__GNUC__) && __GNUC__ >= 3
-#define RTCHECK(e) __builtin_expect(e, 1)
-#else /* GNUC */
-#define RTCHECK(e) (e)
-#endif /* GNUC */
-#else /* !INSECURE */
-#define RTCHECK(e) (1)
-#endif /* !INSECURE */
-
-/* macros to set up inuse chunks with or without footers */
-
-#if !FOOTERS
-
-#define mark_inuse_foot(M,p,s)
-
-/* Set cinuse bit and pinuse bit of next chunk */
-#define set_inuse(M,p,s)\
- ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
- ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
-
-/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
-#define set_inuse_and_pinuse(M,p,s)\
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
- ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
-
-/* Set size, cinuse and pinuse bit of this chunk */
-#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
-
-#else /* FOOTERS */
-
-/* Set foot of inuse chunk to be xor of mstate and seed */
-#define mark_inuse_foot(M,p,s)\
- (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
-
-#define get_mstate_for(p)\
- ((mstate)(((mchunkptr)((char*)(p) +\
- (chunksize(p))))->prev_foot ^ mparams.magic))
-
-#define set_inuse(M,p,s)\
- ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
- (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
- mark_inuse_foot(M,p,s))
-
-#define set_inuse_and_pinuse(M,p,s)\
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
- (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
- mark_inuse_foot(M,p,s))
-
-#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
- mark_inuse_foot(M, p, s))
-
-#endif /* !FOOTERS */
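-
-/*
-  Sketch of the FOOTERS round trip (assuming inuse chunk p of mstate
-  m was set up with mark_inuse_foot(m, p, chunksize(p))): the footer
-  stores (size_t)m ^ mparams.magic in the prev_foot of the following
-  chunk, so get_mstate_for(p) xors with mparams.magic again and
-  recovers m; a corrupted or forged footer instead yields a pointer
-  that will, with high probability, fail the ok_magic check.
-*/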
-
-/* ---------------------------- setting mparams -------------------------- */
-
-/* Initialize mparams */
-static int init_mparams(void) {
- if (mparams.page_size == 0) {
- size_t s;
-
- mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
- mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
-#if MORECORE_CONTIGUOUS
- mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
-#else /* MORECORE_CONTIGUOUS */
- mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
-#endif /* MORECORE_CONTIGUOUS */
-
-#if (FOOTERS && !INSECURE)
- {
-#if USE_DEV_RANDOM
- int fd;
- unsigned char buf[sizeof(size_t)];
- /* Try to use /dev/urandom, else fall back on using time */
- if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
- read(fd, buf, sizeof(buf)) == sizeof(buf)) {
- s = *((size_t *) buf);
- close(fd);
- }
- else
-#endif /* USE_DEV_RANDOM */
- s = (size_t)(time(0) ^ (size_t)0x55555555U);
-
- s |= (size_t)8U; /* ensure nonzero */
- s &= ~(size_t)7U; /* improve chances of fault for bad values */
-
- }
-#else /* (FOOTERS && !INSECURE) */
- s = (size_t)0x58585858U;
-#endif /* (FOOTERS && !INSECURE) */
- ACQUIRE_MAGIC_INIT_LOCK();
- if (mparams.magic == 0) {
- mparams.magic = s;
- /* Set up lock for main malloc area */
- INITIAL_LOCK(&gm->mutex);
- gm->mflags = mparams.default_mflags;
- }
- RELEASE_MAGIC_INIT_LOCK();
-
-#ifndef WIN32
- mparams.page_size = malloc_getpagesize;
- mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
- DEFAULT_GRANULARITY : mparams.page_size);
-#else /* WIN32 */
- {
- SYSTEM_INFO system_info;
- GetSystemInfo(&system_info);
- mparams.page_size = system_info.dwPageSize;
- mparams.granularity = system_info.dwAllocationGranularity;
- }
-#endif /* WIN32 */
-
- /* Sanity-check configuration:
- size_t must be unsigned and as wide as pointer type.
- ints must be at least 4 bytes.
- alignment must be at least 8.
- Alignment, min chunk size, and page size must all be powers of 2.
- */
- if ((sizeof(size_t) != sizeof(char*)) ||
- (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
- (sizeof(int) < 4) ||
- (MALLOC_ALIGNMENT < (size_t)8U) ||
- ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
- ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
- ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
- ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
- ABORT;
- }
- return 0;
-}
-
-/* support for mallopt */
-static int change_mparam(int param_number, int value) {
- size_t val = (size_t)value;
- init_mparams();
- switch(param_number) {
- case M_TRIM_THRESHOLD:
- mparams.trim_threshold = val;
- return 1;
- case M_GRANULARITY:
- if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
- mparams.granularity = val;
- return 1;
- }
- else
- return 0;
- case M_MMAP_THRESHOLD:
- mparams.mmap_threshold = val;
- return 1;
- default:
- return 0;
- }
-}
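-
-/*
-  Usage sketch (illustrative values, 4096-byte pages): mallopt
-  relays here, so change_mparam(M_GRANULARITY, 65536) succeeds
-  because 65536 is a power of two no smaller than the page size,
-  while change_mparam(M_GRANULARITY, 3000) returns 0 and changes
-  nothing.
-*/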
-
-#if DEBUG
-/* ------------------------- Debugging Support --------------------------- */
-
-/* Check properties of any chunk, whether free, inuse, mmapped etc */
-static void do_check_any_chunk(mstate m, mchunkptr p) {
- assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
- assert(ok_address(m, p));
-}
-
-/* Check properties of top chunk */
-static void do_check_top_chunk(mstate m, mchunkptr p) {
- msegmentptr sp = segment_holding(m, (char*)p);
- size_t sz = chunksize(p);
- assert(sp != 0);
- assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
- assert(ok_address(m, p));
- assert(sz == m->topsize);
- assert(sz > 0);
- assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
- assert(pinuse(p));
- assert(!next_pinuse(p));
-}
-
-/* Check properties of (inuse) mmapped chunks */
-static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
- size_t sz = chunksize(p);
- size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
- assert(is_mmapped(p));
- assert(use_mmap(m));
- assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
- assert(ok_address(m, p));
- assert(!is_small(sz));
- assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
- assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
- assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
-}
-
-/* Check properties of inuse chunks */
-static void do_check_inuse_chunk(mstate m, mchunkptr p) {
- do_check_any_chunk(m, p);
- assert(cinuse(p));
- assert(next_pinuse(p));
- /* If not pinuse and not mmapped, previous chunk has OK offset */
- assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
- if (is_mmapped(p))
- do_check_mmapped_chunk(m, p);
-}
-
-/* Check properties of free chunks */
-static void do_check_free_chunk(mstate m, mchunkptr p) {
- size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
- mchunkptr next = chunk_plus_offset(p, sz);
- do_check_any_chunk(m, p);
- assert(!cinuse(p));
- assert(!next_pinuse(p));
- assert (!is_mmapped(p));
- if (p != m->dv && p != m->top) {
- if (sz >= MIN_CHUNK_SIZE) {
- assert((sz & CHUNK_ALIGN_MASK) == 0);
- assert(is_aligned(chunk2mem(p)));
- assert(next->prev_foot == sz);
- assert(pinuse(p));
- assert (next == m->top || cinuse(next));
- assert(p->fd->bk == p);
- assert(p->bk->fd == p);
- }
- else /* markers are always of size SIZE_T_SIZE */
- assert(sz == SIZE_T_SIZE);
- }
-}
-
-/* Check properties of malloced chunks at the point they are malloced */
-static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
- if (mem != 0) {
- mchunkptr p = mem2chunk(mem);
- size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
- do_check_inuse_chunk(m, p);
- assert((sz & CHUNK_ALIGN_MASK) == 0);
- assert(sz >= MIN_CHUNK_SIZE);
- assert(sz >= s);
- /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
- assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
- }
-}
-
-/* Check a tree and its subtrees. */
-static void do_check_tree(mstate m, tchunkptr t) {
- tchunkptr head = 0;
- tchunkptr u = t;
- bindex_t tindex = t->index;
- size_t tsize = chunksize(t);
- bindex_t idx;
- compute_tree_index(tsize, idx);
- assert(tindex == idx);
- assert(tsize >= MIN_LARGE_SIZE);
- assert(tsize >= minsize_for_tree_index(idx));
- assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
-
- do { /* traverse through chain of same-sized nodes */
- do_check_any_chunk(m, ((mchunkptr)u));
- assert(u->index == tindex);
- assert(chunksize(u) == tsize);
- assert(!cinuse(u));
- assert(!next_pinuse(u));
- assert(u->fd->bk == u);
- assert(u->bk->fd == u);
- if (u->parent == 0) {
- assert(u->child[0] == 0);
- assert(u->child[1] == 0);
- }
- else {
- assert(head == 0); /* only one node on chain has parent */
- head = u;
- assert(u->parent != u);
- assert (u->parent->child[0] == u ||
- u->parent->child[1] == u ||
- *((tbinptr*)(u->parent)) == u);
- if (u->child[0] != 0) {
- assert(u->child[0]->parent == u);
- assert(u->child[0] != u);
- do_check_tree(m, u->child[0]);
- }
- if (u->child[1] != 0) {
- assert(u->child[1]->parent == u);
- assert(u->child[1] != u);
- do_check_tree(m, u->child[1]);
- }
- if (u->child[0] != 0 && u->child[1] != 0) {
- assert(chunksize(u->child[0]) < chunksize(u->child[1]));
- }
- }
- u = u->fd;
- } while (u != t);
- assert(head != 0);
-}
-
-/* Check all the chunks in a treebin. */
-static void do_check_treebin(mstate m, bindex_t i) {
- tbinptr* tb = treebin_at(m, i);
- tchunkptr t = *tb;
- int empty = (m->treemap & (1U << i)) == 0;
- if (t == 0)
- assert(empty);
- if (!empty)
- do_check_tree(m, t);
-}
-
-/* Check all the chunks in a smallbin. */
-static void do_check_smallbin(mstate m, bindex_t i) {
- sbinptr b = smallbin_at(m, i);
- mchunkptr p = b->bk;
- unsigned int empty = (m->smallmap & (1U << i)) == 0;
- if (p == b)
- assert(empty);
- if (!empty) {
- for (; p != b; p = p->bk) {
- size_t size = chunksize(p);
- mchunkptr q;
- /* each chunk claims to be free */
- do_check_free_chunk(m, p);
- /* chunk belongs in bin */
- assert(small_index(size) == i);
- assert(p->bk == b || chunksize(p->bk) == chunksize(p));
- /* chunk is followed by an inuse chunk */
- q = next_chunk(p);
- if (q->head != FENCEPOST_HEAD)
- do_check_inuse_chunk(m, q);
- }
- }
-}
-
-/* Find x in a bin. Used in other check functions. */
-static int bin_find(mstate m, mchunkptr x) {
- size_t size = chunksize(x);
- if (is_small(size)) {
- bindex_t sidx = small_index(size);
- sbinptr b = smallbin_at(m, sidx);
- if (smallmap_is_marked(m, sidx)) {
- mchunkptr p = b;
- do {
- if (p == x)
- return 1;
- } while ((p = p->fd) != b);
- }
- }
- else {
- bindex_t tidx;
- compute_tree_index(size, tidx);
- if (treemap_is_marked(m, tidx)) {
- tchunkptr t = *treebin_at(m, tidx);
- size_t sizebits = size << leftshift_for_tree_index(tidx);
- while (t != 0 && chunksize(t) != size) {
- t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
- sizebits <<= 1;
- }
- if (t != 0) {
- tchunkptr u = t;
- do {
- if (u == (tchunkptr)x)
- return 1;
- } while ((u = u->fd) != t);
- }
- }
- }
- return 0;
-}
-
-/* Traverse each chunk and check it; return total */
-static size_t traverse_and_check(mstate m) {
- size_t sum = 0;
- if (is_initialized(m)) {
- msegmentptr s = &m->seg;
- sum += m->topsize + TOP_FOOT_SIZE;
- while (s != 0) {
- mchunkptr q = align_as_chunk(s->base);
- mchunkptr lastq = 0;
- assert(pinuse(q));
- while (segment_holds(s, q) &&
- q != m->top && q->head != FENCEPOST_HEAD) {
- sum += chunksize(q);
- if (cinuse(q)) {
- assert(!bin_find(m, q));
- do_check_inuse_chunk(m, q);
- }
- else {
- assert(q == m->dv || bin_find(m, q));
- assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
- do_check_free_chunk(m, q);
- }
- lastq = q;
- q = next_chunk(q);
- }
- s = s->next;
- }
- }
- return sum;
-}
-
-/* Check all properties of malloc_state. */
-static void do_check_malloc_state(mstate m) {
- bindex_t i;
- size_t total;
- /* check bins */
- for (i = 0; i < NSMALLBINS; ++i)
- do_check_smallbin(m, i);
- for (i = 0; i < NTREEBINS; ++i)
- do_check_treebin(m, i);
-
- if (m->dvsize != 0) { /* check dv chunk */
- do_check_any_chunk(m, m->dv);
- assert(m->dvsize == chunksize(m->dv));
- assert(m->dvsize >= MIN_CHUNK_SIZE);
- assert(bin_find(m, m->dv) == 0);
- }
-
- if (m->top != 0) { /* check top chunk */
- do_check_top_chunk(m, m->top);
- assert(m->topsize == chunksize(m->top));
- assert(m->topsize > 0);
- assert(bin_find(m, m->top) == 0);
- }
-
- total = traverse_and_check(m);
- assert(total <= m->footprint);
- assert(m->footprint <= m->max_footprint);
-}
-#endif /* DEBUG */
-
-/* ----------------------------- statistics ------------------------------ */
-
-#if !NO_MALLINFO
-static struct mallinfo internal_mallinfo(mstate m) {
- struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
- if (!PREACTION(m)) {
- check_malloc_state(m);
- if (is_initialized(m)) {
- size_t nfree = SIZE_T_ONE; /* top always free */
- size_t mfree = m->topsize + TOP_FOOT_SIZE;
- size_t sum = mfree;
- msegmentptr s = &m->seg;
- while (s != 0) {
- mchunkptr q = align_as_chunk(s->base);
- while (segment_holds(s, q) &&
- q != m->top && q->head != FENCEPOST_HEAD) {
- size_t sz = chunksize(q);
- sum += sz;
- if (!cinuse(q)) {
- mfree += sz;
- ++nfree;
- }
- q = next_chunk(q);
- }
- s = s->next;
- }
-
- nm.arena = sum;
- nm.ordblks = nfree;
- nm.hblkhd = m->footprint - sum;
- nm.usmblks = m->max_footprint;
- nm.uordblks = m->footprint - mfree;
- nm.fordblks = mfree;
- nm.keepcost = m->topsize;
- }
-
- POSTACTION(m);
- }
- return nm;
-}
-#endif /* !NO_MALLINFO */
-
-static void internal_malloc_stats(mstate m) {
- if (!PREACTION(m)) {
- size_t maxfp = 0;
- size_t fp = 0;
- size_t used = 0;
- check_malloc_state(m);
- if (is_initialized(m)) {
- msegmentptr s = &m->seg;
- maxfp = m->max_footprint;
- fp = m->footprint;
- used = fp - (m->topsize + TOP_FOOT_SIZE);
-
- while (s != 0) {
- mchunkptr q = align_as_chunk(s->base);
- while (segment_holds(s, q) &&
- q != m->top && q->head != FENCEPOST_HEAD) {
- if (!cinuse(q))
- used -= chunksize(q);
- q = next_chunk(q);
- }
- s = s->next;
- }
- }
-
- fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
- fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
- fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
-
- POSTACTION(m);
- }
-}
-
-/* ----------------------- Operations on smallbins ----------------------- */
-
-/*
-  Various forms of linking and unlinking are defined as macros --
-  even the ones for trees, which are very long but have very short
-  typical paths. This is ugly but reduces reliance on compilers'
-  inlining support.
-*/
-
-/* Link a free chunk into a smallbin */
-#define insert_small_chunk(M, P, S) {\
- bindex_t I = small_index(S);\
- mchunkptr B = smallbin_at(M, I);\
- mchunkptr F = B;\
- assert(S >= MIN_CHUNK_SIZE);\
- if (!smallmap_is_marked(M, I))\
- mark_smallmap(M, I);\
- else if (RTCHECK(ok_address(M, B->fd)))\
- F = B->fd;\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- }\
- B->fd = P;\
- F->bk = P;\
- P->fd = F;\
- P->bk = B;\
-}
-
-/* Unlink a chunk from a smallbin */
-#define unlink_small_chunk(M, P, S) {\
- mchunkptr F = P->fd;\
- mchunkptr B = P->bk;\
- bindex_t I = small_index(S);\
- assert(P != B);\
- assert(P != F);\
- assert(chunksize(P) == small_index2size(I));\
- if (F == B)\
- clear_smallmap(M, I);\
- else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
- (B == smallbin_at(M,I) || ok_address(M, B)))) {\
- F->bk = B;\
- B->fd = F;\
- }\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- }\
-}
-
-/* Unlink the first chunk from a smallbin */
-#define unlink_first_small_chunk(M, B, P, I) {\
- mchunkptr F = P->fd;\
- assert(P != B);\
- assert(P != F);\
- assert(chunksize(P) == small_index2size(I));\
- if (B == F)\
- clear_smallmap(M, I);\
- else if (RTCHECK(ok_address(M, F))) {\
- B->fd = F;\
- F->bk = B;\
- }\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- }\
-}
-
-/* Replace dv node, binning the old one */
-/* Used only when dvsize known to be small */
-#define replace_dv(M, P, S) {\
- size_t DVS = M->dvsize;\
- if (DVS != 0) {\
- mchunkptr DV = M->dv;\
- assert(is_small(DVS));\
- insert_small_chunk(M, DV, DVS);\
- }\
- M->dvsize = S;\
- M->dv = P;\
-}
-
-/* ------------------------- Operations on trees ------------------------- */
-
-/* Insert chunk into tree */
-#define insert_large_chunk(M, X, S) {\
- tbinptr* H;\
- bindex_t I;\
- compute_tree_index(S, I);\
- H = treebin_at(M, I);\
- X->index = I;\
- X->child[0] = X->child[1] = 0;\
- if (!treemap_is_marked(M, I)) {\
- mark_treemap(M, I);\
- *H = X;\
- X->parent = (tchunkptr)H;\
- X->fd = X->bk = X;\
- }\
- else {\
- tchunkptr T = *H;\
- size_t K = S << leftshift_for_tree_index(I);\
- for (;;) {\
- if (chunksize(T) != S) {\
- tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
- K <<= 1;\
- if (*C != 0)\
- T = *C;\
- else if (RTCHECK(ok_address(M, C))) {\
- *C = X;\
- X->parent = T;\
- X->fd = X->bk = X;\
- break;\
- }\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- break;\
- }\
- }\
- else {\
- tchunkptr F = T->fd;\
- if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
- T->fd = F->bk = X;\
- X->fd = F;\
- X->bk = T;\
- X->parent = 0;\
- break;\
- }\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- break;\
- }\
- }\
- }\
- }\
-}
-
-/*
- Unlink steps:
-
- 1. If x is a chained node, unlink it from its same-sized fd/bk links
- and choose its bk node as its replacement.
- 2. If x was the last node of its size, but not a leaf node, it must
- be replaced with a leaf node (not merely one with an open left or
-  right), to make sure that lefts and rights of descendants
- correspond properly to bit masks. We use the rightmost descendent
- of x. We could use any other leaf, but this is easy to locate and
- tends to counteract removal of leftmosts elsewhere, and so keeps
- paths shorter than minimally guaranteed. This doesn't loop much
- because on average a node in a tree is near the bottom.
- 3. If x is the base of a chain (i.e., has parent links) relink
- x's parent and children to x's replacement (or null if none).
-*/
-
-#define unlink_large_chunk(M, X) {\
- tchunkptr XP = X->parent;\
- tchunkptr R;\
- if (X->bk != X) {\
- tchunkptr F = X->fd;\
- R = X->bk;\
- if (RTCHECK(ok_address(M, F))) {\
- F->bk = R;\
- R->fd = F;\
- }\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- }\
- }\
- else {\
- tchunkptr* RP;\
- if (((R = *(RP = &(X->child[1]))) != 0) ||\
- ((R = *(RP = &(X->child[0]))) != 0)) {\
- tchunkptr* CP;\
- while ((*(CP = &(R->child[1])) != 0) ||\
- (*(CP = &(R->child[0])) != 0)) {\
- R = *(RP = CP);\
- }\
- if (RTCHECK(ok_address(M, RP)))\
- *RP = 0;\
- else {\
- CORRUPTION_ERROR_ACTION(M);\
- }\
- }\
- }\
- if (XP != 0) {\
- tbinptr* H = treebin_at(M, X->index);\
- if (X == *H) {\
- if ((*H = R) == 0) \
- clear_treemap(M, X->index);\
- }\
- else if (RTCHECK(ok_address(M, XP))) {\
- if (XP->child[0] == X) \
- XP->child[0] = R;\
- else \
- XP->child[1] = R;\
- }\
- else\
- CORRUPTION_ERROR_ACTION(M);\
- if (R != 0) {\
- if (RTCHECK(ok_address(M, R))) {\
- tchunkptr C0, C1;\
- R->parent = XP;\
- if ((C0 = X->child[0]) != 0) {\
- if (RTCHECK(ok_address(M, C0))) {\
- R->child[0] = C0;\
- C0->parent = R;\
- }\
- else\
- CORRUPTION_ERROR_ACTION(M);\
- }\
- if ((C1 = X->child[1]) != 0) {\
- if (RTCHECK(ok_address(M, C1))) {\
- R->child[1] = C1;\
- C1->parent = R;\
- }\
- else\
- CORRUPTION_ERROR_ACTION(M);\
- }\
- }\
- else\
- CORRUPTION_ERROR_ACTION(M);\
- }\
- }\
-}
-
-/* Relays to large vs small bin operations */
-
-#define insert_chunk(M, P, S)\
- if (is_small(S)) insert_small_chunk(M, P, S)\
- else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
-
-#define unlink_chunk(M, P, S)\
- if (is_small(S)) unlink_small_chunk(M, P, S)\
- else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
-
-
-/* Relays to internal calls to malloc/free from realloc, memalign etc */
-
-#if ONLY_MSPACES
-#define internal_malloc(m, b) mspace_malloc(m, b)
-#define internal_free(m, mem) mspace_free(m,mem);
-#else /* ONLY_MSPACES */
-#if MSPACES
-#define internal_malloc(m, b)\
- (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
-#define internal_free(m, mem)\
- if (m == gm) dlfree(mem); else mspace_free(m,mem);
-#else /* MSPACES */
-#define internal_malloc(m, b) dlmalloc(b)
-#define internal_free(m, mem) dlfree(mem)
-#endif /* MSPACES */
-#endif /* ONLY_MSPACES */
-
-/* ----------------------- Direct-mmapping chunks ----------------------- */
-
-/*
- Directly mmapped chunks are set up with an offset to the start of
- the mmapped region stored in the prev_foot field of the chunk. This
- allows reconstruction of the required argument to MUNMAP when freed,
- and also allows adjustment of the returned chunk to meet alignment
- requirements (especially in memalign). There is also enough space
- allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
- the PINUSE bit so frees can be checked.
-*/
-
-/* Malloc using mmap */
-static void* mmap_alloc(mstate m, size_t nb) {
- size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
- if (mmsize > nb) { /* Check for wrap around 0 */
- char* mm = (char*)(DIRECT_MMAP(mmsize));
- if (mm != CMFAIL) {
- size_t offset = align_offset(chunk2mem(mm));
- size_t psize = mmsize - offset - MMAP_FOOT_PAD;
- mchunkptr p = (mchunkptr)(mm + offset);
- p->prev_foot = offset | IS_MMAPPED_BIT;
- (p)->head = (psize|CINUSE_BIT);
- mark_inuse_foot(m, p, psize);
- chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
- chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
-
- if (mm < m->least_addr)
- m->least_addr = mm;
- if ((m->footprint += mmsize) > m->max_footprint)
- m->max_footprint = m->footprint;
- assert(is_aligned(chunk2mem(p)));
- check_mmapped_chunk(m, p);
- return chunk2mem(p);
- }
- }
- return 0;
-}
-
-/* Realloc using mmap */
-static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
- size_t oldsize = chunksize(oldp);
- if (is_small(nb)) /* Can't shrink mmap regions below small size */
- return 0;
- /* Keep old chunk if big enough but not too big */
- if (oldsize >= nb + SIZE_T_SIZE &&
- (oldsize - nb) <= (mparams.granularity << 1))
- return oldp;
- else {
- size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
- size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
- size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
- CHUNK_ALIGN_MASK);
- char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
- oldmmsize, newmmsize, 1);
- if (cp != CMFAIL) {
- mchunkptr newp = (mchunkptr)(cp + offset);
- size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
- newp->head = (psize|CINUSE_BIT);
- mark_inuse_foot(m, newp, psize);
- chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
- chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
-
- if (cp < m->least_addr)
- m->least_addr = cp;
- if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
- m->max_footprint = m->footprint;
- check_mmapped_chunk(m, newp);
- return newp;
- }
- }
- return 0;
-}
-
-/* -------------------------- mspace management -------------------------- */
-
-/* Initialize top chunk and its size */
-static void init_top(mstate m, mchunkptr p, size_t psize) {
- /* Ensure alignment */
- size_t offset = align_offset(chunk2mem(p));
- p = (mchunkptr)((char*)p + offset);
- psize -= offset;
-
- m->top = p;
- m->topsize = psize;
- p->head = psize | PINUSE_BIT;
- /* set size of fake trailing chunk holding overhead space only once */
- chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
- m->trim_check = mparams.trim_threshold; /* reset on each update */
-}
-
-/* Initialize bins for a new mstate that is otherwise zeroed out */
-static void init_bins(mstate m) {
- /* Establish circular links for smallbins */
- bindex_t i;
- for (i = 0; i < NSMALLBINS; ++i) {
- sbinptr bin = smallbin_at(m,i);
- bin->fd = bin->bk = bin;
- }
-}
-
-#if PROCEED_ON_ERROR
-
-/* default corruption action */
-static void reset_on_error(mstate m) {
- int i;
- ++malloc_corruption_error_count;
- /* Reinitialize fields to forget about all memory */
- m->smallbins = m->treebins = 0;
- m->dvsize = m->topsize = 0;
- m->seg.base = 0;
- m->seg.size = 0;
- m->seg.next = 0;
- m->top = m->dv = 0;
- for (i = 0; i < NTREEBINS; ++i)
- *treebin_at(m, i) = 0;
- init_bins(m);
-}
-#endif /* PROCEED_ON_ERROR */
-
-/* Allocate a chunk from a new base, consolidating the remainder
-   with the first chunk at the successor (old) base. */
-static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
- size_t nb) {
- mchunkptr p = align_as_chunk(newbase);
- mchunkptr oldfirst = align_as_chunk(oldbase);
- size_t psize = (char*)oldfirst - (char*)p;
- mchunkptr q = chunk_plus_offset(p, nb);
- size_t qsize = psize - nb;
- set_size_and_pinuse_of_inuse_chunk(m, p, nb);
-
- assert((char*)oldfirst > (char*)q);
- assert(pinuse(oldfirst));
- assert(qsize >= MIN_CHUNK_SIZE);
-
- /* consolidate remainder with first chunk of old base */
- if (oldfirst == m->top) {
- size_t tsize = m->topsize += qsize;
- m->top = q;
- q->head = tsize | PINUSE_BIT;
- check_top_chunk(m, q);
- }
- else if (oldfirst == m->dv) {
- size_t dsize = m->dvsize += qsize;
- m->dv = q;
- set_size_and_pinuse_of_free_chunk(q, dsize);
- }
- else {
- if (!cinuse(oldfirst)) {
- size_t nsize = chunksize(oldfirst);
- unlink_chunk(m, oldfirst, nsize);
- oldfirst = chunk_plus_offset(oldfirst, nsize);
- qsize += nsize;
- }
- set_free_with_pinuse(q, qsize, oldfirst);
- insert_chunk(m, q, qsize);
- check_free_chunk(m, q);
- }
-
- check_malloced_chunk(m, chunk2mem(p), nb);
- return chunk2mem(p);
-}
-
-
-/* Add a segment to hold a new noncontiguous region */
-static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
- /* Determine locations and sizes of segment, fenceposts, old top */
- char* old_top = (char*)m->top;
- msegmentptr oldsp = segment_holding(m, old_top);
- char* old_end = oldsp->base + oldsp->size;
- size_t ssize = pad_request(sizeof(struct malloc_segment));
- char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
- size_t offset = align_offset(chunk2mem(rawsp));
- char* asp = rawsp + offset;
- char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
- mchunkptr sp = (mchunkptr)csp;
- msegmentptr ss = (msegmentptr)(chunk2mem(sp));
- mchunkptr tnext = chunk_plus_offset(sp, ssize);
- mchunkptr p = tnext;
- int nfences = 0;
-
- /* reset top to new space */
- init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
-
- /* Set up segment record */
- assert(is_aligned(ss));
- set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
- *ss = m->seg; /* Push current record */
- m->seg.base = tbase;
- m->seg.size = tsize;
- m->seg.sflags = mmapped;
- m->seg.next = ss;
-
- /* Insert trailing fenceposts */
- for (;;) {
- mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
- p->head = FENCEPOST_HEAD;
- ++nfences;
- if ((char*)(&(nextp->head)) < old_end)
- p = nextp;
- else
- break;
- }
- assert(nfences >= 2);
-
- /* Insert the rest of old top into a bin as an ordinary free chunk */
- if (csp != old_top) {
- mchunkptr q = (mchunkptr)old_top;
- size_t psize = csp - old_top;
- mchunkptr tn = chunk_plus_offset(q, psize);
- set_free_with_pinuse(q, psize, tn);
- insert_chunk(m, q, psize);
- }
-
- check_top_chunk(m, m->top);
-}
-
-/* -------------------------- System allocation -------------------------- */
-
-/* Get memory from system using MORECORE or MMAP */
-static void* sys_alloc(mstate m, size_t nb) {
- char* tbase = CMFAIL;
- size_t tsize = 0;
- flag_t mmap_flag = 0;
-
- init_mparams();
-
- /* Directly map large chunks */
- if (use_mmap(m) && nb >= mparams.mmap_threshold) {
- void* mem = mmap_alloc(m, nb);
- if (mem != 0)
- return mem;
- }
-
- /*
- Try getting memory in any of three ways (in most-preferred to
- least-preferred order):
- 1. A call to MORECORE that can normally contiguously extend memory.
-     (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
-     main space is mmapped or a previous contiguous call failed)
- 2. A call to MMAP new space (disabled if not HAVE_MMAP).
- Note that under the default settings, if MORECORE is unable to
- fulfill a request, and HAVE_MMAP is true, then mmap is
- used as a noncontiguous system allocator. This is a useful backup
- strategy for systems with holes in address spaces -- in this case
- sbrk cannot contiguously expand the heap, but mmap may be able to
- find space.
- 3. A call to MORECORE that cannot usually contiguously extend memory.
- (disabled if not HAVE_MORECORE)
- */
-
- if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
- char* br = CMFAIL;
- msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
- size_t asize = 0;
- ACQUIRE_MORECORE_LOCK();
-
- if (ss == 0) { /* First time through or recovery */
- char* base = (char*)CALL_MORECORE(0);
- if (base != CMFAIL) {
- asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
- /* Adjust to end on a page boundary */
- if (!is_page_aligned(base))
- asize += (page_align((size_t)base) - (size_t)base);
- /* Can't call MORECORE if size is negative when treated as signed */
- if (asize < HALF_MAX_SIZE_T &&
- (br = (char*)(CALL_MORECORE(asize))) == base) {
- tbase = base;
- tsize = asize;
- }
- }
- }
- else {
- /* Subtract out existing available top space from MORECORE request. */
- asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
-      /* Use mem here only if it did contiguously extend old space */
- if (asize < HALF_MAX_SIZE_T &&
- (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
- tbase = br;
- tsize = asize;
- }
- }
-
- if (tbase == CMFAIL) { /* Cope with partial failure */
- if (br != CMFAIL) { /* Try to use/extend the space we did get */
- if (asize < HALF_MAX_SIZE_T &&
- asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
- size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
- if (esize < HALF_MAX_SIZE_T) {
- char* end = (char*)CALL_MORECORE(esize);
- if (end != CMFAIL)
- asize += esize;
- else { /* Can't use; try to release */
- CALL_MORECORE(-asize);
- br = CMFAIL;
- }
- }
- }
- }
- if (br != CMFAIL) { /* Use the space we did get */
- tbase = br;
- tsize = asize;
- }
- else
- disable_contiguous(m); /* Don't try contiguous path in the future */
- }
-
- RELEASE_MORECORE_LOCK();
- }
-
- if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
- size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
- size_t rsize = granularity_align(req);
- if (rsize > nb) { /* Fail if wraps around zero */
- char* mp = (char*)(CALL_MMAP(rsize));
- if (mp != CMFAIL) {
- tbase = mp;
- tsize = rsize;
- mmap_flag = IS_MMAPPED_BIT;
- }
- }
- }
-
- if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
- size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
- if (asize < HALF_MAX_SIZE_T) {
- char* br = CMFAIL;
- char* end = CMFAIL;
- ACQUIRE_MORECORE_LOCK();
- br = (char*)(CALL_MORECORE(asize));
- end = (char*)(CALL_MORECORE(0));
- RELEASE_MORECORE_LOCK();
- if (br != CMFAIL && end != CMFAIL && br < end) {
- size_t ssize = end - br;
- if (ssize > nb + TOP_FOOT_SIZE) {
- tbase = br;
- tsize = ssize;
- }
- }
- }
- }
-
- if (tbase != CMFAIL) {
-
- if ((m->footprint += tsize) > m->max_footprint)
- m->max_footprint = m->footprint;
-
- if (!is_initialized(m)) { /* first-time initialization */
- m->seg.base = m->least_addr = tbase;
- m->seg.size = tsize;
- m->seg.sflags = mmap_flag;
- m->magic = mparams.magic;
- init_bins(m);
- if (is_global(m))
- init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
- else {
- /* Offset top by embedded malloc_state */
- mchunkptr mn = next_chunk(mem2chunk(m));
- init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
- }
- }
-
- else {
- /* Try to merge with an existing segment */
- msegmentptr sp = &m->seg;
- while (sp != 0 && tbase != sp->base + sp->size)
- sp = sp->next;
- if (sp != 0 &&
- !is_extern_segment(sp) &&
- (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
- segment_holds(sp, m->top)) { /* append */
- sp->size += tsize;
- init_top(m, m->top, m->topsize + tsize);
- }
- else {
- if (tbase < m->least_addr)
- m->least_addr = tbase;
- sp = &m->seg;
- while (sp != 0 && sp->base != tbase + tsize)
- sp = sp->next;
- if (sp != 0 &&
- !is_extern_segment(sp) &&
- (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
- char* oldbase = sp->base;
- sp->base = tbase;
- sp->size += tsize;
- return prepend_alloc(m, tbase, oldbase, nb);
- }
- else
- add_segment(m, tbase, tsize, mmap_flag);
- }
- }
-
- if (nb < m->topsize) { /* Allocate from new or extended top space */
- size_t rsize = m->topsize -= nb;
- mchunkptr p = m->top;
- mchunkptr r = m->top = chunk_plus_offset(p, nb);
- r->head = rsize | PINUSE_BIT;
- set_size_and_pinuse_of_inuse_chunk(m, p, nb);
- check_top_chunk(m, m->top);
- check_malloced_chunk(m, chunk2mem(p), nb);
- return chunk2mem(p);
- }
- }
-
- MALLOC_FAILURE_ACTION;
- return 0;
-}
-
-/* ----------------------- system deallocation -------------------------- */
-
-/* Unmap and unlink any mmapped segments that don't contain used chunks */
-static size_t release_unused_segments(mstate m) {
- size_t released = 0;
- msegmentptr pred = &m->seg;
- msegmentptr sp = pred->next;
- while (sp != 0) {
- char* base = sp->base;
- size_t size = sp->size;
- msegmentptr next = sp->next;
- if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
- mchunkptr p = align_as_chunk(base);
- size_t psize = chunksize(p);
- /* Can unmap if first chunk holds entire segment and not pinned */
- if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
- tchunkptr tp = (tchunkptr)p;
- assert(segment_holds(sp, (char*)sp));
- if (p == m->dv) {
- m->dv = 0;
- m->dvsize = 0;
- }
- else {
- unlink_large_chunk(m, tp);
- }
- if (CALL_MUNMAP(base, size) == 0) {
- released += size;
- m->footprint -= size;
- /* unlink obsoleted record */
- sp = pred;
- sp->next = next;
- }
- else { /* back out if cannot unmap */
- insert_large_chunk(m, tp, psize);
- }
- }
- }
- pred = sp;
- sp = next;
- }
- return released;
-}
-
-static int sys_trim(mstate m, size_t pad) {
- size_t released = 0;
- if (pad < MAX_REQUEST && is_initialized(m)) {
- pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
-
- if (m->topsize > pad) {
- /* Shrink top space in granularity-size units, keeping at least one */
- size_t unit = mparams.granularity;
- size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
- SIZE_T_ONE) * unit;
- msegmentptr sp = segment_holding(m, (char*)m->top);
-
- if (!is_extern_segment(sp)) {
- if (is_mmapped_segment(sp)) {
- if (HAVE_MMAP &&
- sp->size >= extra &&
- !has_segment_link(m, sp)) { /* can't shrink if pinned */
- size_t newsize = sp->size - extra;
- /* Prefer mremap, fall back to munmap */
- if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
- (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
- released = extra;
- }
- }
- }
- else if (HAVE_MORECORE) {
- if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
- extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
- ACQUIRE_MORECORE_LOCK();
- {
- /* Make sure end of memory is where we last set it. */
- char* old_br = (char*)(CALL_MORECORE(0));
- if (old_br == sp->base + sp->size) {
- char* rel_br = (char*)(CALL_MORECORE(-extra));
- char* new_br = (char*)(CALL_MORECORE(0));
- if (rel_br != CMFAIL && new_br < old_br)
- released = old_br - new_br;
- }
- }
- RELEASE_MORECORE_LOCK();
- }
- }
-
- if (released != 0) {
- sp->size -= released;
- m->footprint -= released;
- init_top(m, m->top, m->topsize - released);
- check_top_chunk(m, m->top);
- }
- }
-
- /* Unmap any unused mmapped segments */
- if (HAVE_MMAP)
- released += release_unused_segments(m);
-
- /* On failure, disable autotrim to avoid repeated failed future calls */
- if (released == 0)
- m->trim_check = MAX_SIZE_T;
- }
-
- return (released != 0)? 1 : 0;
-}
-
-/* ---------------------------- malloc support --------------------------- */
-
-/* allocate a large request from the best fitting chunk in a treebin */
-static void* tmalloc_large(mstate m, size_t nb) {
- tchunkptr v = 0;
- size_t rsize = -nb; /* Unsigned negation */
- tchunkptr t;
- bindex_t idx;
- compute_tree_index(nb, idx);
-
- if ((t = *treebin_at(m, idx)) != 0) {
- /* Traverse tree for this bin looking for node with size == nb */
- size_t sizebits = nb << leftshift_for_tree_index(idx);
- tchunkptr rst = 0; /* The deepest untaken right subtree */
- for (;;) {
- tchunkptr rt;
- size_t trem = chunksize(t) - nb;
- if (trem < rsize) {
- v = t;
- if ((rsize = trem) == 0)
- break;
- }
- rt = t->child[1];
-      /* descend: the current high bit of nb selects left/right child */
-      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
- if (rt != 0 && rt != t)
- rst = rt;
- if (t == 0) {
- t = rst; /* set t to least subtree holding sizes > nb */
- break;
- }
- sizebits <<= 1;
- }
- }
-
- if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
- binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
- if (leftbits != 0) {
- bindex_t i;
- binmap_t leastbit = least_bit(leftbits);
- compute_bit2idx(leastbit, i);
- t = *treebin_at(m, i);
- }
- }
-
- while (t != 0) { /* find smallest of tree or subtree */
- size_t trem = chunksize(t) - nb;
- if (trem < rsize) {
- rsize = trem;
- v = t;
- }
- t = leftmost_child(t);
- }
-
- /* If dv is a better fit, return 0 so malloc will use it */
- if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
- if (RTCHECK(ok_address(m, v))) { /* split */
- mchunkptr r = chunk_plus_offset(v, nb);
- assert(chunksize(v) == rsize + nb);
- if (RTCHECK(ok_next(v, r))) {
- unlink_large_chunk(m, v);
- if (rsize < MIN_CHUNK_SIZE)
- set_inuse_and_pinuse(m, v, (rsize + nb));
- else {
- set_size_and_pinuse_of_inuse_chunk(m, v, nb);
- set_size_and_pinuse_of_free_chunk(r, rsize);
- insert_chunk(m, r, rsize);
- }
- return chunk2mem(v);
- }
- }
- CORRUPTION_ERROR_ACTION(m);
- }
- return 0;
-}
-
-/* allocate a small request from the best fitting chunk in a treebin */
-static void* tmalloc_small(mstate m, size_t nb) {
- tchunkptr t, v;
- size_t rsize;
- bindex_t i;
- binmap_t leastbit = least_bit(m->treemap);
- compute_bit2idx(leastbit, i);
-
- v = t = *treebin_at(m, i);
- rsize = chunksize(t) - nb;
-
- while ((t = leftmost_child(t)) != 0) {
- size_t trem = chunksize(t) - nb;
- if (trem < rsize) {
- rsize = trem;
- v = t;
- }
- }
-
- if (RTCHECK(ok_address(m, v))) {
- mchunkptr r = chunk_plus_offset(v, nb);
- assert(chunksize(v) == rsize + nb);
- if (RTCHECK(ok_next(v, r))) {
- unlink_large_chunk(m, v);
- if (rsize < MIN_CHUNK_SIZE)
- set_inuse_and_pinuse(m, v, (rsize + nb));
- else {
- set_size_and_pinuse_of_inuse_chunk(m, v, nb);
- set_size_and_pinuse_of_free_chunk(r, rsize);
- replace_dv(m, r, rsize);
- }
- return chunk2mem(v);
- }
- }
-
- CORRUPTION_ERROR_ACTION(m);
- return 0;
-}
-
-/* --------------------------- realloc support --------------------------- */
-
-static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
- if (bytes >= MAX_REQUEST) {
- MALLOC_FAILURE_ACTION;
- return 0;
- }
- if (!PREACTION(m)) {
- mchunkptr oldp = mem2chunk(oldmem);
- size_t oldsize = chunksize(oldp);
- mchunkptr next = chunk_plus_offset(oldp, oldsize);
- mchunkptr newp = 0;
- void* extra = 0;
-
- /* Try to either shrink or extend into top. Else malloc-copy-free */
-
- if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
- ok_next(oldp, next) && ok_pinuse(next))) {
- size_t nb = request2size(bytes);
- if (is_mmapped(oldp))
- newp = mmap_resize(m, oldp, nb);
- else if (oldsize >= nb) { /* already big enough */
- size_t rsize = oldsize - nb;
- newp = oldp;
- if (rsize >= MIN_CHUNK_SIZE) {
- mchunkptr remainder = chunk_plus_offset(newp, nb);
- set_inuse(m, newp, nb);
- set_inuse(m, remainder, rsize);
- extra = chunk2mem(remainder);
- }
- }
- else if (next == m->top && oldsize + m->topsize > nb) {
- /* Expand into top */
- size_t newsize = oldsize + m->topsize;
- size_t newtopsize = newsize - nb;
- mchunkptr newtop = chunk_plus_offset(oldp, nb);
- set_inuse(m, oldp, nb);
- newtop->head = newtopsize |PINUSE_BIT;
- m->top = newtop;
- m->topsize = newtopsize;
- newp = oldp;
- }
- }
- else {
- USAGE_ERROR_ACTION(m, oldmem);
- POSTACTION(m);
- return 0;
- }
-
- POSTACTION(m);
-
- if (newp != 0) {
- if (extra != 0) {
- internal_free(m, extra);
- }
- check_inuse_chunk(m, newp);
- return chunk2mem(newp);
- }
- else {
- void* newmem = internal_malloc(m, bytes);
- if (newmem != 0) {
- size_t oc = oldsize - overhead_for(oldp);
- memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
- internal_free(m, oldmem);
- }
- return newmem;
- }
- }
- return 0;
-}
-
-/* --------------------------- memalign support -------------------------- */
-
-static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
- if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
- return internal_malloc(m, bytes);
- if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
- alignment = MIN_CHUNK_SIZE;
- if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
- size_t a = MALLOC_ALIGNMENT << 1;
- while (a < alignment) a <<= 1;
- alignment = a;
- }
-
- if (bytes >= MAX_REQUEST - alignment) {
- if (m != 0) { /* Test isn't needed but avoids compiler warning */
- MALLOC_FAILURE_ACTION;
- }
- }
- else {
- size_t nb = request2size(bytes);
- size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
- char* mem = (char*)internal_malloc(m, req);
- if (mem != 0) {
- void* leader = 0;
- void* trailer = 0;
- mchunkptr p = mem2chunk(mem);
-
- if (PREACTION(m)) return 0;
- if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
- /*
- Find an aligned spot inside chunk. Since we need to give
- back leading space in a chunk of at least MIN_CHUNK_SIZE, if
- the first calculation places us at a spot with less than
- MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
- We've allocated enough total room so that this is always
- possible.
- */
- char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
- alignment -
- SIZE_T_ONE)) &
- -alignment));
- char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
- br : br+alignment;
- mchunkptr newp = (mchunkptr)pos;
- size_t leadsize = pos - (char*)(p);
- size_t newsize = chunksize(p) - leadsize;
-
- if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
- newp->prev_foot = p->prev_foot + leadsize;
- newp->head = (newsize|CINUSE_BIT);
- }
- else { /* Otherwise, give back leader, use the rest */
- set_inuse(m, newp, newsize);
- set_inuse(m, p, leadsize);
- leader = chunk2mem(p);
- }
- p = newp;
- }
-
- /* Give back spare room at the end */
- if (!is_mmapped(p)) {
- size_t size = chunksize(p);
- if (size > nb + MIN_CHUNK_SIZE) {
- size_t remainder_size = size - nb;
- mchunkptr remainder = chunk_plus_offset(p, nb);
- set_inuse(m, p, nb);
- set_inuse(m, remainder, remainder_size);
- trailer = chunk2mem(remainder);
- }
- }
-
- assert (chunksize(p) >= nb);
- assert((((size_t)(chunk2mem(p))) % alignment) == 0);
- check_inuse_chunk(m, p);
- POSTACTION(m);
- if (leader != 0) {
- internal_free(m, leader);
- }
- if (trailer != 0) {
- internal_free(m, trailer);
- }
- return chunk2mem(p);
- }
- }
- return 0;
-}
-
-/* ------------------------ comalloc/coalloc support --------------------- */
-
-static void** ialloc(mstate m,
- size_t n_elements,
- size_t* sizes,
- int opts,
- void* chunks[]) {
- /*
- This provides common support for independent_X routines, handling
- all of the combinations that can result.
-
- The opts arg has:
- bit 0 set if all elements are same size (using sizes[0])
- bit 1 set if elements should be zeroed
- */
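-
-  /*
-     For example, the public wrappers below invoke this with opts == 3
-     for independent_calloc (same-size, zeroed elements) and with
-     opts == 0 for independent_comalloc.
-  */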
-
- size_t element_size; /* chunksize of each element, if all same */
- size_t contents_size; /* total size of elements */
- size_t array_size; /* request size of pointer array */
- void* mem; /* malloced aggregate space */
- mchunkptr p; /* corresponding chunk */
- size_t remainder_size; /* remaining bytes while splitting */
- void** marray; /* either "chunks" or malloced ptr array */
- mchunkptr array_chunk; /* chunk for malloced ptr array */
- flag_t was_enabled; /* to disable mmap */
- size_t size;
- size_t i;
-
- /* compute array length, if needed */
- if (chunks != 0) {
- if (n_elements == 0)
- return chunks; /* nothing to do */
- marray = chunks;
- array_size = 0;
- }
- else {
- /* if empty req, must still return chunk representing empty array */
- if (n_elements == 0)
- return (void**)internal_malloc(m, 0);
- marray = 0;
- array_size = request2size(n_elements * (sizeof(void*)));
- }
-
- /* compute total element size */
- if (opts & 0x1) { /* all-same-size */
- element_size = request2size(*sizes);
- contents_size = n_elements * element_size;
- }
- else { /* add up all the sizes */
- element_size = 0;
- contents_size = 0;
- for (i = 0; i != n_elements; ++i)
- contents_size += request2size(sizes[i]);
- }
-
- size = contents_size + array_size;
-
- /*
- Allocate the aggregate chunk. First disable direct-mmapping so
- malloc won't use it, since we would not be able to later
- free/realloc space internal to a segregated mmap region.
- */
- was_enabled = use_mmap(m);
- disable_mmap(m);
- mem = internal_malloc(m, size - CHUNK_OVERHEAD);
- if (was_enabled)
- enable_mmap(m);
- if (mem == 0)
- return 0;
-
- if (PREACTION(m)) return 0;
- p = mem2chunk(mem);
- remainder_size = chunksize(p);
-
- assert(!is_mmapped(p));
-
- if (opts & 0x2) { /* optionally clear the elements */
- memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
- }
-
- /* If not provided, allocate the pointer array as final part of chunk */
- if (marray == 0) {
- size_t array_chunk_size;
- array_chunk = chunk_plus_offset(p, contents_size);
- array_chunk_size = remainder_size - contents_size;
- marray = (void**) (chunk2mem(array_chunk));
- set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
- remainder_size = contents_size;
- }
-
- /* split out elements */
- for (i = 0; ; ++i) {
- marray[i] = chunk2mem(p);
- if (i != n_elements-1) {
- if (element_size != 0)
- size = element_size;
- else
- size = request2size(sizes[i]);
- remainder_size -= size;
- set_size_and_pinuse_of_inuse_chunk(m, p, size);
- p = chunk_plus_offset(p, size);
- }
- else { /* the final element absorbs any overallocation slop */
- set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
- break;
- }
- }
-
-#if DEBUG
- if (marray != chunks) {
- /* final element must have exactly exhausted chunk */
- if (element_size != 0) {
- assert(remainder_size == element_size);
- }
- else {
- assert(remainder_size == request2size(sizes[i]));
- }
- check_inuse_chunk(m, mem2chunk(marray));
- }
- for (i = 0; i != n_elements; ++i)
- check_inuse_chunk(m, mem2chunk(marray[i]));
-
-#endif /* DEBUG */
-
- POSTACTION(m);
- return marray;
-}
-
-
-/* -------------------------- public routines ---------------------------- */
-
-#if !ONLY_MSPACES
-
-void* dlmalloc(size_t bytes) {
- /*
- Basic algorithm:
- If a small request (< 256 bytes minus per-chunk overhead):
- 1. If one exists, use a remainderless chunk in associated smallbin.
- (Remainderless means that there are too few excess bytes to
- represent as a chunk.)
- 2. If it is big enough, use the dv chunk, which is normally the
- chunk adjacent to the one used for the most recent small request.
- 3. If one exists, split the smallest available chunk in a bin,
- saving remainder in dv.
- 4. If it is big enough, use the top chunk.
- 5. If available, get memory from system and use it
- Otherwise, for a large request:
- 1. Find the smallest available binned chunk that fits, and use it
- if it is better fitting than dv chunk, splitting if necessary.
- 2. If better fitting than any binned chunk, use the dv chunk.
- 3. If it is big enough, use the top chunk.
- 4. If request size >= mmap threshold, try to directly mmap this chunk.
- 5. If available, get memory from system and use it
-
- The ugly goto's here ensure that postaction occurs along all paths.
- */
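-
-  /*
-     A worked small-request sketch (illustrative; assumes 4-byte
-     size_t and no FOOTERS): for bytes == 20, pad_request yields
-     nb == 24 and small_index(24) == 3, so step 1 checks smallbins
-     3 and 4 for a remainderless fit before falling through to the
-     dv, treebin, and top paths below.
-  */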
-
- if (!PREACTION(gm)) {
- void* mem;
- size_t nb;
- if (bytes <= MAX_SMALL_REQUEST) {
- bindex_t idx;
- binmap_t smallbits;
- nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
- idx = small_index(nb);
- smallbits = gm->smallmap >> idx;
-
- if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
- mchunkptr b, p;
- idx += ~smallbits & 1; /* Uses next bin if idx empty */
- b = smallbin_at(gm, idx);
- p = b->fd;
- assert(chunksize(p) == small_index2size(idx));
- unlink_first_small_chunk(gm, b, p, idx);
- set_inuse_and_pinuse(gm, p, small_index2size(idx));
- mem = chunk2mem(p);
- check_malloced_chunk(gm, mem, nb);
- goto postaction;
- }
-
- else if (nb > gm->dvsize) {
- if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
- mchunkptr b, p, r;
- size_t rsize;
- bindex_t i;
- binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
- binmap_t leastbit = least_bit(leftbits);
- compute_bit2idx(leastbit, i);
- b = smallbin_at(gm, i);
- p = b->fd;
- assert(chunksize(p) == small_index2size(i));
- unlink_first_small_chunk(gm, b, p, i);
- rsize = small_index2size(i) - nb;
- /* Fit here cannot be remainderless if 4byte sizes */
- if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
- set_inuse_and_pinuse(gm, p, small_index2size(i));
- else {
- set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
- r = chunk_plus_offset(p, nb);
- set_size_and_pinuse_of_free_chunk(r, rsize);
- replace_dv(gm, r, rsize);
- }
- mem = chunk2mem(p);
- check_malloced_chunk(gm, mem, nb);
- goto postaction;
- }
-
- else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
- check_malloced_chunk(gm, mem, nb);
- goto postaction;
- }
- }
- }
- else if (bytes >= MAX_REQUEST)
- nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
- else {
- nb = pad_request(bytes);
- if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
- check_malloced_chunk(gm, mem, nb);
- goto postaction;
- }
- }
-
- if (nb <= gm->dvsize) {
- size_t rsize = gm->dvsize - nb;
- mchunkptr p = gm->dv;
- if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
- mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
- gm->dvsize = rsize;
- set_size_and_pinuse_of_free_chunk(r, rsize);
- set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
- }
- else { /* exhaust dv */
- size_t dvs = gm->dvsize;
- gm->dvsize = 0;
- gm->dv = 0;
- set_inuse_and_pinuse(gm, p, dvs);
- }
- mem = chunk2mem(p);
- check_malloced_chunk(gm, mem, nb);
- goto postaction;
- }
-
- else if (nb < gm->topsize) { /* Split top */
- size_t rsize = gm->topsize -= nb;
- mchunkptr p = gm->top;
- mchunkptr r = gm->top = chunk_plus_offset(p, nb);
- r->head = rsize | PINUSE_BIT;
- set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
- mem = chunk2mem(p);
- check_top_chunk(gm, gm->top);
- check_malloced_chunk(gm, mem, nb);
- goto postaction;
- }
-
- mem = sys_alloc(gm, nb);
-
- postaction:
- POSTACTION(gm);
- return mem;
- }
-
- return 0;
-}
-
-void dlfree(void* mem) {
- /*
-    Consolidate freed chunks with preceding or succeeding bordering
- free chunks, if they exist, and then place in a bin. Intermixed
- with special cases for top, dv, mmapped chunks, and usage errors.
- */
-
- if (mem != 0) {
- mchunkptr p = mem2chunk(mem);
-#if FOOTERS
- mstate fm = get_mstate_for(p);
- if (!ok_magic(fm)) {
- USAGE_ERROR_ACTION(fm, p);
- return;
- }
-#else /* FOOTERS */
-#define fm gm
-#endif /* FOOTERS */
- if (!PREACTION(fm)) {
- check_inuse_chunk(fm, p);
- if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
- size_t psize = chunksize(p);
- mchunkptr next = chunk_plus_offset(p, psize);
- if (!pinuse(p)) {
- size_t prevsize = p->prev_foot;
- if ((prevsize & IS_MMAPPED_BIT) != 0) {
- prevsize &= ~IS_MMAPPED_BIT;
- psize += prevsize + MMAP_FOOT_PAD;
- if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
- fm->footprint -= psize;
- goto postaction;
- }
- else {
- mchunkptr prev = chunk_minus_offset(p, prevsize);
- psize += prevsize;
- p = prev;
- if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
- if (p != fm->dv) {
- unlink_chunk(fm, p, prevsize);
- }
- else if ((next->head & INUSE_BITS) == INUSE_BITS) {
- fm->dvsize = psize;
- set_free_with_pinuse(p, psize, next);
- goto postaction;
- }
- }
- else
- goto erroraction;
- }
- }
-
- if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
- if (!cinuse(next)) { /* consolidate forward */
- if (next == fm->top) {
- size_t tsize = fm->topsize += psize;
- fm->top = p;
- p->head = tsize | PINUSE_BIT;
- if (p == fm->dv) {
- fm->dv = 0;
- fm->dvsize = 0;
- }
- if (should_trim(fm, tsize))
- sys_trim(fm, 0);
- goto postaction;
- }
- else if (next == fm->dv) {
- size_t dsize = fm->dvsize += psize;
- fm->dv = p;
- set_size_and_pinuse_of_free_chunk(p, dsize);
- goto postaction;
- }
- else {
- size_t nsize = chunksize(next);
- psize += nsize;
- unlink_chunk(fm, next, nsize);
- set_size_and_pinuse_of_free_chunk(p, psize);
- if (p == fm->dv) {
- fm->dvsize = psize;
- goto postaction;
- }
- }
- }
- else
- set_free_with_pinuse(p, psize, next);
- insert_chunk(fm, p, psize);
- check_free_chunk(fm, p);
- goto postaction;
- }
- }
- erroraction:
- USAGE_ERROR_ACTION(fm, p);
- postaction:
- POSTACTION(fm);
- }
- }
-#if !FOOTERS
-#undef fm
-#endif /* FOOTERS */
-}
-
-void* dlcalloc(size_t n_elements, size_t elem_size) {
- void* mem;
- size_t req = 0;
- if (n_elements != 0) {
- req = n_elements * elem_size;
- if (((n_elements | elem_size) & ~(size_t)0xffff) &&
- (req / n_elements != elem_size))
- req = MAX_SIZE_T; /* force downstream failure on overflow */
- }
- mem = dlmalloc(req);
- if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
- memset(mem, 0, req);
- return mem;
-}
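-
-/*
-  A note on the overflow guard above (explanatory sketch, not original
-  text): the cheap mask test skips the division whenever both operands
-  fit in 16 bits, since such products cannot overflow even a 32-bit
-  size_t. For example, with a 32-bit size_t, n_elements == 0x10000 and
-  elem_size == 0x10000 give req == 0; then req / n_elements !=
-  elem_size triggers, req becomes MAX_SIZE_T, and dlmalloc fails.
-*/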
-
-void* dlrealloc(void* oldmem, size_t bytes) {
- if (oldmem == 0)
- return dlmalloc(bytes);
-#ifdef REALLOC_ZERO_BYTES_FREES
- if (bytes == 0) {
- dlfree(oldmem);
- return 0;
- }
-#endif /* REALLOC_ZERO_BYTES_FREES */
- else {
-#if ! FOOTERS
- mstate m = gm;
-#else /* FOOTERS */
- mstate m = get_mstate_for(mem2chunk(oldmem));
- if (!ok_magic(m)) {
- USAGE_ERROR_ACTION(m, oldmem);
- return 0;
- }
-#endif /* FOOTERS */
- return internal_realloc(m, oldmem, bytes);
- }
-}
-
-void* dlmemalign(size_t alignment, size_t bytes) {
- return internal_memalign(gm, alignment, bytes);
-}
-
-void** dlindependent_calloc(size_t n_elements, size_t elem_size,
- void* chunks[]) {
- size_t sz = elem_size; /* serves as 1-element array */
- return ialloc(gm, n_elements, &sz, 3, chunks);
-}
-
-void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
- void* chunks[]) {
- return ialloc(gm, n_elements, sizes, 0, chunks);
-}
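-
-/*
-  A hedged usage sketch for the two routines above (condensed along
-  the lines of the classic documentation example; all names here are
-  illustrative):
-
-    struct Node { int item; struct Node* next; };
-
-    struct Node* build_list(size_t n) {
-      size_t i;
-      struct Node** pool;
-      struct Node* first;
-      if (n == 0) return 0;
-      pool = (struct Node**)dlindependent_calloc(n, sizeof(struct Node), 0);
-      if (pool == 0) return 0;
-      for (i = 0; i + 1 < n; ++i)   // chain the zeroed nodes
-        pool[i]->next = pool[i+1];
-      first = pool[0];
-      dlfree(pool);                 // the pointer array itself is malloced
-      return first;
-    }
-*/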
-
-void* dlvalloc(size_t bytes) {
- size_t pagesz;
- init_mparams();
- pagesz = mparams.page_size;
- return dlmemalign(pagesz, bytes);
-}
-
-void* dlpvalloc(size_t bytes) {
- size_t pagesz;
- init_mparams();
- pagesz = mparams.page_size;
- return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
-}
-
-int dlmalloc_trim(size_t pad) {
- int result = 0;
- if (!PREACTION(gm)) {
- result = sys_trim(gm, pad);
- POSTACTION(gm);
- }
- return result;
-}
-
-size_t dlmalloc_footprint(void) {
- return gm->footprint;
-}
-
-size_t dlmalloc_max_footprint(void) {
- return gm->max_footprint;
-}
-
-#if !NO_MALLINFO
-struct mallinfo dlmallinfo(void) {
- return internal_mallinfo(gm);
-}
-#endif /* NO_MALLINFO */
-
-void dlmalloc_stats(void) {
- internal_malloc_stats(gm);
-}
-
-size_t dlmalloc_usable_size(void* mem) {
- if (mem != 0) {
- mchunkptr p = mem2chunk(mem);
- if (cinuse(p))
- return chunksize(p) - overhead_for(p);
- }
- return 0;
-}
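-
-/*
-  For example (illustrative; the exact value depends on build
-  settings):
-
-    void* p = dlmalloc(5);
-    size_t u = dlmalloc_usable_size(p);  // >= 5; 12 with 4-byte
-                                         // size_t and no FOOTERS
-
-  since small requests are padded up to the minimum chunk size.
-*/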
-
-int dlmallopt(int param_number, int value) {
- return change_mparam(param_number, value);
-}
-
-#endif /* !ONLY_MSPACES */
-
-/* ----------------------------- user mspaces ---------------------------- */
-
-#if MSPACES
-
-static mstate init_user_mstate(char* tbase, size_t tsize) {
- size_t msize = pad_request(sizeof(struct malloc_state));
- mchunkptr mn;
- mchunkptr msp = align_as_chunk(tbase);
- mstate m = (mstate)(chunk2mem(msp));
- memset(m, 0, msize);
- INITIAL_LOCK(&m->mutex);
- msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
- m->seg.base = m->least_addr = tbase;
- m->seg.size = m->footprint = m->max_footprint = tsize;
- m->magic = mparams.magic;
- m->mflags = mparams.default_mflags;
- disable_contiguous(m);
- init_bins(m);
- mn = next_chunk(mem2chunk(m));
- init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
- check_top_chunk(m, m->top);
- return m;
-}
-
-mspace create_mspace(size_t capacity, int locked) {
- mstate m = 0;
- size_t msize = pad_request(sizeof(struct malloc_state));
- init_mparams(); /* Ensure pagesize etc initialized */
-
- if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
- size_t rs = ((capacity == 0)? mparams.granularity :
- (capacity + TOP_FOOT_SIZE + msize));
- size_t tsize = granularity_align(rs);
- char* tbase = (char*)(CALL_MMAP(tsize));
- if (tbase != CMFAIL) {
- m = init_user_mstate(tbase, tsize);
- m->seg.sflags = IS_MMAPPED_BIT;
- set_lock(m, locked);
- }
- }
- return (mspace)m;
-}
-
-mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
- mstate m = 0;
- size_t msize = pad_request(sizeof(struct malloc_state));
- init_mparams(); /* Ensure pagesize etc initialized */
-
- if (capacity > msize + TOP_FOOT_SIZE &&
- capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
- m = init_user_mstate((char*)base, capacity);
- m->seg.sflags = EXTERN_BIT;
- set_lock(m, locked);
- }
- return (mspace)m;
-}
-
-size_t destroy_mspace(mspace msp) {
- size_t freed = 0;
- mstate ms = (mstate)msp;
- if (ok_magic(ms)) {
- msegmentptr sp = &ms->seg;
- while (sp != 0) {
- char* base = sp->base;
- size_t size = sp->size;
- flag_t flag = sp->sflags;
- sp = sp->next;
- if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
- CALL_MUNMAP(base, size) == 0)
- freed += size;
- }
- }
- else {
- USAGE_ERROR_ACTION(ms,ms);
- }
- return freed;
-}
-
-/*
- mspace versions of routines are near-clones of the global
- versions. This is not so nice but better than the alternatives.
-*/
-
-
-void* mspace_malloc(mspace msp, size_t bytes) {
- mstate ms = (mstate)msp;
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- return 0;
- }
- if (!PREACTION(ms)) {
- void* mem;
- size_t nb;
- if (bytes <= MAX_SMALL_REQUEST) {
- bindex_t idx;
- binmap_t smallbits;
- nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
- idx = small_index(nb);
- smallbits = ms->smallmap >> idx;
-
- if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
- mchunkptr b, p;
- idx += ~smallbits & 1; /* Uses next bin if idx empty */
- b = smallbin_at(ms, idx);
- p = b->fd;
- assert(chunksize(p) == small_index2size(idx));
- unlink_first_small_chunk(ms, b, p, idx);
- set_inuse_and_pinuse(ms, p, small_index2size(idx));
- mem = chunk2mem(p);
- check_malloced_chunk(ms, mem, nb);
- goto postaction;
- }
-
- else if (nb > ms->dvsize) {
- if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
- mchunkptr b, p, r;
- size_t rsize;
- bindex_t i;
- binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
- binmap_t leastbit = least_bit(leftbits);
- compute_bit2idx(leastbit, i);
- b = smallbin_at(ms, i);
- p = b->fd;
- assert(chunksize(p) == small_index2size(i));
- unlink_first_small_chunk(ms, b, p, i);
- rsize = small_index2size(i) - nb;
- /* Fit here cannot be remainderless if 4byte sizes */
- if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
- set_inuse_and_pinuse(ms, p, small_index2size(i));
- else {
- set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
- r = chunk_plus_offset(p, nb);
- set_size_and_pinuse_of_free_chunk(r, rsize);
- replace_dv(ms, r, rsize);
- }
- mem = chunk2mem(p);
- check_malloced_chunk(ms, mem, nb);
- goto postaction;
- }
-
- else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
- check_malloced_chunk(ms, mem, nb);
- goto postaction;
- }
- }
- }
- else if (bytes >= MAX_REQUEST)
- nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
- else {
- nb = pad_request(bytes);
- if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
- check_malloced_chunk(ms, mem, nb);
- goto postaction;
- }
- }
-
- if (nb <= ms->dvsize) {
- size_t rsize = ms->dvsize - nb;
- mchunkptr p = ms->dv;
- if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
- mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
- ms->dvsize = rsize;
- set_size_and_pinuse_of_free_chunk(r, rsize);
- set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
- }
- else { /* exhaust dv */
- size_t dvs = ms->dvsize;
- ms->dvsize = 0;
- ms->dv = 0;
- set_inuse_and_pinuse(ms, p, dvs);
- }
- mem = chunk2mem(p);
- check_malloced_chunk(ms, mem, nb);
- goto postaction;
- }
-
- else if (nb < ms->topsize) { /* Split top */
- size_t rsize = ms->topsize -= nb;
- mchunkptr p = ms->top;
- mchunkptr r = ms->top = chunk_plus_offset(p, nb);
- r->head = rsize | PINUSE_BIT;
- set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
- mem = chunk2mem(p);
- check_top_chunk(ms, ms->top);
- check_malloced_chunk(ms, mem, nb);
- goto postaction;
- }
-
- mem = sys_alloc(ms, nb);
-
- postaction:
- POSTACTION(ms);
- return mem;
- }
-
- return 0;
-}
-
-void mspace_free(mspace msp, void* mem) {
- if (mem != 0) {
- mchunkptr p = mem2chunk(mem);
-#if FOOTERS
- mstate fm = get_mstate_for(p);
-#else /* FOOTERS */
- mstate fm = (mstate)msp;
-#endif /* FOOTERS */
- if (!ok_magic(fm)) {
- USAGE_ERROR_ACTION(fm, p);
- return;
- }
- if (!PREACTION(fm)) {
- check_inuse_chunk(fm, p);
- if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
- size_t psize = chunksize(p);
- mchunkptr next = chunk_plus_offset(p, psize);
- if (!pinuse(p)) {
- size_t prevsize = p->prev_foot;
- if ((prevsize & IS_MMAPPED_BIT) != 0) {
- prevsize &= ~IS_MMAPPED_BIT;
- psize += prevsize + MMAP_FOOT_PAD;
- if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
- fm->footprint -= psize;
- goto postaction;
- }
- else {
- mchunkptr prev = chunk_minus_offset(p, prevsize);
- psize += prevsize;
- p = prev;
- if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
- if (p != fm->dv) {
- unlink_chunk(fm, p, prevsize);
- }
- else if ((next->head & INUSE_BITS) == INUSE_BITS) {
- fm->dvsize = psize;
- set_free_with_pinuse(p, psize, next);
- goto postaction;
- }
- }
- else
- goto erroraction;
- }
- }
-
- if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
- if (!cinuse(next)) { /* consolidate forward */
- if (next == fm->top) {
- size_t tsize = fm->topsize += psize;
- fm->top = p;
- p->head = tsize | PINUSE_BIT;
- if (p == fm->dv) {
- fm->dv = 0;
- fm->dvsize = 0;
- }
- if (should_trim(fm, tsize))
- sys_trim(fm, 0);
- goto postaction;
- }
- else if (next == fm->dv) {
- size_t dsize = fm->dvsize += psize;
- fm->dv = p;
- set_size_and_pinuse_of_free_chunk(p, dsize);
- goto postaction;
- }
- else {
- size_t nsize = chunksize(next);
- psize += nsize;
- unlink_chunk(fm, next, nsize);
- set_size_and_pinuse_of_free_chunk(p, psize);
- if (p == fm->dv) {
- fm->dvsize = psize;
- goto postaction;
- }
- }
- }
- else
- set_free_with_pinuse(p, psize, next);
- insert_chunk(fm, p, psize);
- check_free_chunk(fm, p);
- goto postaction;
- }
- }
- erroraction:
- USAGE_ERROR_ACTION(fm, p);
- postaction:
- POSTACTION(fm);
- }
- }
-}
-
-void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
- void* mem;
- size_t req = 0;
- mstate ms = (mstate)msp;
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- return 0;
- }
- if (n_elements != 0) {
- req = n_elements * elem_size;
- if (((n_elements | elem_size) & ~(size_t)0xffff) &&
- (req / n_elements != elem_size))
- req = MAX_SIZE_T; /* force downstream failure on overflow */
- }
- mem = internal_malloc(ms, req);
- if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
- memset(mem, 0, req);
- return mem;
-}
-
-void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
- if (oldmem == 0)
- return mspace_malloc(msp, bytes);
-#ifdef REALLOC_ZERO_BYTES_FREES
- if (bytes == 0) {
- mspace_free(msp, oldmem);
- return 0;
- }
-#endif /* REALLOC_ZERO_BYTES_FREES */
- else {
-#if FOOTERS
- mchunkptr p = mem2chunk(oldmem);
- mstate ms = get_mstate_for(p);
-#else /* FOOTERS */
- mstate ms = (mstate)msp;
-#endif /* FOOTERS */
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- return 0;
- }
- return internal_realloc(ms, oldmem, bytes);
- }
-}
-
-void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
- mstate ms = (mstate)msp;
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- return 0;
- }
- return internal_memalign(ms, alignment, bytes);
-}
-
-void** mspace_independent_calloc(mspace msp, size_t n_elements,
- size_t elem_size, void* chunks[]) {
- size_t sz = elem_size; /* serves as 1-element array */
- mstate ms = (mstate)msp;
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- return 0;
- }
- return ialloc(ms, n_elements, &sz, 3, chunks);
-}
-
-void** mspace_independent_comalloc(mspace msp, size_t n_elements,
- size_t sizes[], void* chunks[]) {
- mstate ms = (mstate)msp;
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- return 0;
- }
- return ialloc(ms, n_elements, sizes, 0, chunks);
-}
-
-int mspace_trim(mspace msp, size_t pad) {
- int result = 0;
- mstate ms = (mstate)msp;
- if (ok_magic(ms)) {
- if (!PREACTION(ms)) {
- result = sys_trim(ms, pad);
- POSTACTION(ms);
- }
- }
- else {
- USAGE_ERROR_ACTION(ms,ms);
- }
- return result;
-}
-
-void mspace_malloc_stats(mspace msp) {
- mstate ms = (mstate)msp;
- if (ok_magic(ms)) {
- internal_malloc_stats(ms);
- }
- else {
- USAGE_ERROR_ACTION(ms,ms);
- }
-}
-
-size_t mspace_footprint(mspace msp) {
-  size_t result = 0;
-  mstate ms = (mstate)msp;
-  if (ok_magic(ms)) {
-    result = ms->footprint;
-  }
-  else {
-    USAGE_ERROR_ACTION(ms,ms);
-  }
-  return result;
-}
-
-
-size_t mspace_max_footprint(mspace msp) {
-  size_t result = 0;
-  mstate ms = (mstate)msp;
-  if (ok_magic(ms)) {
-    result = ms->max_footprint;
-  }
-  else {
-    USAGE_ERROR_ACTION(ms,ms);
-  }
-  return result;
-}
-
-
-#if !NO_MALLINFO
-struct mallinfo mspace_mallinfo(mspace msp) {
- mstate ms = (mstate)msp;
- if (!ok_magic(ms)) {
- USAGE_ERROR_ACTION(ms,ms);
- }
- return internal_mallinfo(ms);
-}
-#endif /* NO_MALLINFO */
-
-int mspace_mallopt(int param_number, int value) {
- return change_mparam(param_number, value);
-}
-
-#endif /* MSPACES */
-
-/* -------------------- Alternative MORECORE functions ------------------- */
-
-/*
- Guidelines for creating a custom version of MORECORE:
-
- * For best performance, MORECORE should allocate in multiples of pagesize.
- * MORECORE may allocate more memory than requested. (Or even less,
- but this will usually result in a malloc failure.)
- * MORECORE must not allocate memory when given argument zero, but
- instead return one past the end address of memory from previous
- nonzero call.
- * For best performance, consecutive calls to MORECORE with positive
- arguments should return increasing addresses, indicating that
- space has been contiguously extended.
- * Even though consecutive calls to MORECORE need not return contiguous
- addresses, it must be OK for malloc'ed chunks to span multiple
- regions in those cases where they do happen to be contiguous.
- * MORECORE need not handle negative arguments -- it may instead
- just return MFAIL when given negative arguments.
- Negative arguments are always multiples of pagesize. MORECORE
- must not misinterpret negative args as large positive unsigned
- args. You can suppress all such calls from even occurring by defining
-  MORECORE_CANNOT_TRIM.
-
- As an example alternative MORECORE, here is a custom allocator
- kindly contributed for pre-OSX macOS. It uses virtually but not
- necessarily physically contiguous non-paged memory (locked in,
- present and won't get swapped out). You can use it by uncommenting
- this section, adding some #includes, and setting up the appropriate
- defines above:
-
- #define MORECORE osMoreCore
-
- There is also a shutdown routine that should somehow be called for
- cleanup upon program exit.
-
- #define MAX_POOL_ENTRIES 100
- #define MINIMUM_MORECORE_SIZE (64 * 1024U)
- static int next_os_pool;
- void *our_os_pools[MAX_POOL_ENTRIES];
-
- void *osMoreCore(int size)
- {
- void *ptr = 0;
- static void *sbrk_top = 0;
-
- if (size > 0)
- {
- if (size < MINIMUM_MORECORE_SIZE)
- size = MINIMUM_MORECORE_SIZE;
- if (CurrentExecutionLevel() == kTaskLevel)
- ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
- if (ptr == 0)
- {
- return (void *) MFAIL;
- }
- // save ptrs so they can be freed during cleanup
- our_os_pools[next_os_pool] = ptr;
- next_os_pool++;
- ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
- sbrk_top = (char *) ptr + size;
- return ptr;
- }
- else if (size < 0)
- {
- // we don't currently support shrink behavior
- return (void *) MFAIL;
- }
- else
- {
- return sbrk_top;
- }
- }
-
- // cleanup any allocated memory pools
- // called as last thing before shutting down driver
-
- void osCleanupMem(void)
- {
- void **ptr;
-
- for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
- if (*ptr)
- {
- PoolDeallocate(*ptr);
- *ptr = 0;
- }
- }
-
-*/
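-
-/*
-  A second, minimal sketch (an illustration only, not a drop-in
-  implementation): serving MORECORE from a fixed static arena.
-  Assumes single-threaded use; trimming is simply refused, per the
-  guidelines above.
-
-  #define MORECORE fixed_morecore
-  #define MORECORE_CANNOT_TRIM
-
-  static char arena[1 << 20];    // arbitrary 1MB backing store
-  static size_t arena_used = 0;
-
-  void* fixed_morecore(int size)
-  {
-    if (size == 0)
-      return arena + arena_used;            // current break
-    if (size < 0 || (size_t)size > sizeof(arena) - arena_used)
-      return (void*)MFAIL;                  // cannot shrink or extend
-    arena_used += (size_t)size;
-    return arena + arena_used - (size_t)size;  // start of new space
-  }
-*/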
-
-
-/* -----------------------------------------------------------------------
-History:
- V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
- * Add max_footprint functions
- * Ensure all appropriate literals are size_t
- * Fix conditional compilation problem for some #define settings
- * Avoid concatenating segments with the one provided
- in create_mspace_with_base
- * Rename some variables to avoid compiler shadowing warnings
- * Use explicit lock initialization.
- * Better handling of sbrk interference.
- * Simplify and fix segment insertion, trimming and mspace_destroy
- * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
- * Thanks especially to Dennis Flanagan for help on these.
-
- V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
- * Fix memalign brace error.
-
- V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
- * Fix improper #endif nesting in C++
- * Add explicit casts needed for C++
-
- V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
- * Use trees for large bins
- * Support mspaces
- * Use segments to unify sbrk-based and mmap-based system allocation,
- removing need for emulation on most platforms without sbrk.
- * Default safety checks
- * Optional footer checks. Thanks to William Robertson for the idea.
- * Internal code refactoring
- * Incorporate suggestions and platform-specific changes.
- Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
- Aaron Bachmann, Emery Berger, and others.
- * Speed up non-fastbin processing enough to remove fastbins.
- * Remove useless cfree() to avoid conflicts with other apps.
- * Remove internal memcpy, memset. Compilers handle builtins better.
- * Remove some options that no one ever used and rename others.
-
- V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
- * Fix malloc_state bitmap array misdeclaration
-
- V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
- * Allow tuning of FIRST_SORTED_BIN_SIZE
- * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
- * Better detection and support for non-contiguousness of MORECORE.
- Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
- * Bypass most of malloc if no frees. Thanks To Emery Berger.
-    * Fix freeing of old top non-contiguous chunk in sysmalloc.
- * Raised default trim and map thresholds to 256K.
- * Fix mmap-related #defines. Thanks to Lubos Lunak.
- * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
- * Branch-free bin calculation
- * Default trim and mmap thresholds now 256K.
-
- V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
- * Introduce independent_comalloc and independent_calloc.
- Thanks to Michael Pachos for motivation and help.
- * Make optional .h file available
- * Allow > 2GB requests on 32bit systems.
- * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
- Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
- and Anonymous.
- * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
- helping test this.)
- * memalign: check alignment arg
- * realloc: don't try to shift chunks backwards, since this
- leads to more fragmentation in some programs and doesn't
- seem to help in any others.
- * Collect all cases in malloc requiring system memory into sysmalloc
- * Use mmap as backup to sbrk
- * Place all internal state in malloc_state
- * Introduce fastbins (although similar to 2.5.1)
- * Many minor tunings and cosmetic improvements
- * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
- * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
- Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
- * Include errno.h to support default failure action.
-
- V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
- * return null for negative arguments
- * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
- * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
- (e.g. WIN32 platforms)
- * Cleanup header file inclusion for WIN32 platforms
- * Cleanup code to avoid Microsoft Visual C++ compiler complaints
- * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
- memory allocation routines
- * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
- * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
- usage of 'assert' in non-WIN32 code
- * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
- avoid infinite loop
- * Always call 'fREe()' rather than 'free()'
-
- V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
- * Fixed ordering problem with boundary-stamping
-
- V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
- * Added pvalloc, as recommended by H.J. Liu
- * Added 64bit pointer support mainly from Wolfram Gloger
- * Added anonymously donated WIN32 sbrk emulation
- * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
- * malloc_extend_top: fix mask error that caused wastage after
- foreign sbrks
- * Add linux mremap support code from HJ Liu
-
- V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
- * Integrated most documentation with the code.
- * Add support for mmap, with help from
- Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
- * Use last_remainder in more cases.
- * Pack bins using idea from colin@nyx10.cs.du.edu
-    * Use ordered bins instead of best-fit threshold
- * Eliminate block-local decls to simplify tracing and debugging.
- * Support another case of realloc via move into top
-    * Fix error occurring when initial sbrk_base not word-aligned.
- * Rely on page size for units instead of SBRK_UNIT to
- avoid surprises about sbrk alignment conventions.
- * Add mallinfo, mallopt. Thanks to Raymond Nijssen
- (raymond@es.ele.tue.nl) for the suggestion.
- * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
- * More precautions for cases where other routines call sbrk,
- courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
- * Added macros etc., allowing use in linux libc from
- H.J. Lu (hjl@gnu.ai.mit.edu)
- * Inverted this history list
-
- V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
- * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
- * Removed all preallocation code since under current scheme
- the work required to undo bad preallocations exceeds
- the work saved in good cases for most test programs.
- * No longer use return list or unconsolidated bins since
- no scheme using them consistently outperforms those that don't
- given above changes.
- * Use best fit for very large chunks to prevent some worst-cases.
- * Added some support for debugging
-
- V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
- * Removed footers when chunks are in use. Thanks to
- Paul Wilson (wilson@cs.texas.edu) for the suggestion.
-
- V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
- * Added malloc_trim, with help from Wolfram Gloger
- (wmglo@Dent.MED.Uni-Muenchen.DE).
-
- V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
-
- V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
- * realloc: try to expand in both directions
- * malloc: swap order of clean-bin strategy;
- * realloc: only conditionally expand backwards
- * Try not to scavenge used bins
- * Use bin counts as a guide to preallocation
- * Occasionally bin return list chunks in first scan
- * Add a few optimizations from colin@nyx10.cs.du.edu
-
- V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
- * faster bin computation & slightly different binning
- * merged all consolidations to one part of malloc proper
- (eliminating old malloc_find_space & malloc_clean_bin)
- * Scan 2 returns chunks (not just 1)
- * Propagate failure in realloc if malloc returns 0
- * Add stuff to allow compilation on non-ANSI compilers
- from kpv@research.att.com
-
- V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
- * removed potential for odd address access in prev_chunk
- * removed dependency on getpagesize.h
- * misc cosmetics and a bit more internal documentation
- * anticosmetics: mangled names in macros to evade debugger strangeness
- * tested on sparc, hp-700, dec-mips, rs6000
- with gcc & native cc (hp, dec only) allowing
- Detlefs & Zorn comparison study (in SIGPLAN Notices.)
-
- Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
- * Based loosely on libg++-1.2X malloc. (It retains some of the overall
- structure of old version, but most details differ.)
-
-*/
+++ /dev/null
-/*
- Default header file for malloc-2.8.x, written by Doug Lea
- and released to the public domain, as explained at
- http://creativecommons.org/licenses/publicdomain.
-
- last update: Mon Aug 15 08:55:52 2005 Doug Lea (dl at gee)
-
- This header is for ANSI C/C++ only. You can set any of
- the following #defines before including:
-
- * If USE_DL_PREFIX is defined, it is assumed that malloc.c
- was also compiled with this option, so all routines
- have names starting with "dl".
-
- * If HAVE_USR_INCLUDE_MALLOC_H is defined, it is assumed that this
- file will be #included AFTER <malloc.h>. This is needed only if
- your system defines a struct mallinfo that is incompatible with the
- standard one declared here. Otherwise, you can include this file
-  INSTEAD of your system <malloc.h>. At least on ANSI, all
-  declarations should be compatible with system versions.
-
- * If MSPACES is defined, declarations for mspace versions are included.
-*/
-
-#ifndef MALLOC_280_H
-#define MALLOC_280_H
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <stddef.h> /* for size_t */
-
-#if !ONLY_MSPACES
-
-#ifndef USE_DL_PREFIX
-#define dlcalloc calloc
-#define dlfree free
-#define dlmalloc malloc
-#define dlmemalign memalign
-#define dlrealloc realloc
-#define dlvalloc valloc
-#define dlpvalloc pvalloc
-#define dlmallinfo mallinfo
-#define dlmallopt mallopt
-#define dlmalloc_trim malloc_trim
-#define dlmalloc_stats malloc_stats
-#define dlmalloc_usable_size malloc_usable_size
-#define dlmalloc_footprint malloc_footprint
-#define dlindependent_calloc independent_calloc
-#define dlindependent_comalloc independent_comalloc
-#endif /* USE_DL_PREFIX */
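-
-/*
-  For example (an illustrative note): when USE_DL_PREFIX is not
-  defined, the #defines above collapse the declarations below to the
-  standard names, so a plain
-
-    void* p = malloc(100);
-
-  resolves to this allocator; with USE_DL_PREFIX, both malloc.c and
-  its clients use the dl-prefixed names (dlmalloc, dlfree, ...).
-*/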
-
-
-/*
- malloc(size_t n)
- Returns a pointer to a newly allocated chunk of at least n bytes, or
- null if no space is available, in which case errno is set to ENOMEM
- on ANSI C systems.
-
- If n is zero, malloc returns a minimum-sized chunk. (The minimum
- size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
- systems.) Note that size_t is an unsigned type, so calls with
- arguments that would be negative if signed are interpreted as
- requests for huge amounts of space, which will often fail. The
- maximum supported value of n differs across systems, but is in all
- cases less than the maximum representable value of a size_t.
-*/
-void* dlmalloc(size_t);
-
-/*
- free(void* p)
- Releases the chunk of memory pointed to by p, that had been previously
- allocated using malloc or a related routine such as realloc.
- It has no effect if p is null. If p was not malloced or already
-  freed, free(p) will by default cause the current program to abort.
-*/
-void dlfree(void*);
-
-/*
- calloc(size_t n_elements, size_t element_size);
- Returns a pointer to n_elements * element_size bytes, with all locations
- set to zero.
-*/
-void* dlcalloc(size_t, size_t);
-
-/*
- realloc(void* p, size_t n)
- Returns a pointer to a chunk of size n that contains the same data
- as does chunk p up to the minimum of (n, p's size) bytes, or null
- if no space is available.
-
- The returned pointer may or may not be the same as p. The algorithm
- prefers extending p in most cases when possible, otherwise it
- employs the equivalent of a malloc-copy-free sequence.
-
- If p is null, realloc is equivalent to malloc.
-
- If space is not available, realloc returns null, errno is set (if on
- ANSI) and p is NOT freed.
-
- If n is for fewer bytes than already held by p, the newly unused
- space is lopped off and freed if possible. realloc with a size
- argument of zero (re)allocates a minimum-sized chunk.
-
- The old unix realloc convention of allowing the last-free'd chunk
- to be used as an argument to realloc is not supported.
-*/
-
-void* dlrealloc(void*, size_t);
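-
-/*
-  Because p is not freed when realloc fails, the usual safe resizing
-  idiom keeps the old pointer until success (a sketch; "buf",
-  "newsize", and "handle_oom" are illustrative names):
-
-    void* bigger = realloc(buf, newsize);
-    if (bigger != 0)
-      buf = bigger;    // old block extended in place, or moved and freed
-    else
-      handle_oom();    // buf is still valid here and must still be freed
-*/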
-
-/*
- memalign(size_t alignment, size_t n);
- Returns a pointer to a newly allocated chunk of n bytes, aligned
- in accord with the alignment argument.
-
- The alignment argument should be a power of two. If the argument is
- not a power of two, the nearest greater power is used.
- 8-byte alignment is guaranteed by normal malloc calls, so don't
- bother calling memalign with an argument of 8 or less.
-
- Overreliance on memalign is a sure way to fragment space.
-*/
-void* dlmemalign(size_t, size_t);
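-
-/*
-  For example, to obtain a 64-byte-aligned block (a sketch; any power
-  of two >= 8 behaves similarly):
-
-    void* p = memalign(64, 1000);
-    // if p != 0, then ((size_t)p % 64) == 0
-*/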
-
-/*
- valloc(size_t n);
- Equivalent to memalign(pagesize, n), where pagesize is the page
- size of the system. If the pagesize is unknown, 4096 is used.
-*/
-void* dlvalloc(size_t);
-
-/*
- mallopt(int parameter_number, int parameter_value)
- Sets tunable parameters. The format is to provide a
- (parameter-number, parameter-value) pair. mallopt then sets the
- corresponding parameter to the argument value if it can (i.e., so
- long as the value is meaningful), and returns 1 if successful else
- 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
- normally defined in malloc.h. None of these are used in this malloc,
- so setting them has no effect. But this malloc also supports other
- options in mallopt:
-
- Symbol param # default allowed param values
- M_TRIM_THRESHOLD -1 2*1024*1024 any (-1U disables trimming)
- M_GRANULARITY -2 page size any power of 2 >= page size
- M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
-*/
-int dlmallopt(int, int);
-
-#define M_TRIM_THRESHOLD (-1)
-#define M_GRANULARITY (-2)
-#define M_MMAP_THRESHOLD (-3)
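-
-/*
-  For example (a sketch; the values are only illustrative, subject to
-  the constraints listed above):
-
-    mallopt(M_TRIM_THRESHOLD, 128*1024);   // trim when 128K is unused
-    mallopt(M_GRANULARITY,    64*1024);    // a power of 2 >= page size
-    mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap requests of 1M or more
-*/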
-
-
-/*
- malloc_footprint();
- Returns the number of bytes obtained from the system. The total
- number of bytes allocated by malloc, realloc etc., is less than this
- value. Unlike mallinfo, this function returns only a precomputed
- result, so can be called frequently to monitor memory consumption.
- Even if locks are otherwise defined, this function does not use them,
- so results might not be up to date.
-*/
-size_t dlmalloc_footprint(void);
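-
-/*
-  Because the result is precomputed, it is cheap enough for frequent
-  checks (a sketch; "limit" and the response are application-defined):
-
-    if (malloc_footprint() > limit)
-      malloc_trim(0);   // one possible response: return unused memory
-*/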
-
-#if !NO_MALLINFO
-/*
- mallinfo()
- Returns (by copy) a struct containing various summary statistics:
-
- arena: current total non-mmapped bytes allocated from system
- ordblks: the number of free chunks
- smblks: always zero.
- hblks: current number of mmapped regions
- hblkhd: total bytes held in mmapped regions
- usmblks: the maximum total allocated space. This will be greater
- than current total if trimming has occurred.
- fsmblks: always zero
- uordblks: current total allocated space (normal or mmapped)
- fordblks: total free space
- keepcost: the maximum number of bytes that could ideally be released
- back to system via malloc_trim. ("ideally" means that
- it ignores page restrictions etc.)
-
- Because these fields are ints, but internal bookkeeping may
- be kept as longs, the reported values may wrap around zero and
- thus be inaccurate.
-*/
-#ifndef HAVE_USR_INCLUDE_MALLOC_H
-#ifndef _MALLOC_H
-#ifndef MALLINFO_FIELD_TYPE
-#define MALLINFO_FIELD_TYPE size_t
-#endif /* MALLINFO_FIELD_TYPE */
-struct mallinfo {
- MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
- MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
- MALLINFO_FIELD_TYPE smblks; /* always 0 */
- MALLINFO_FIELD_TYPE hblks; /* always 0 */
- MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
- MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
- MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
- MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
- MALLINFO_FIELD_TYPE fordblks; /* total free space */
- MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
-};
-#endif /* _MALLOC_H */
-#endif /* HAVE_USR_INCLUDE_MALLOC_H */
-
-struct mallinfo dlmallinfo(void);
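-
-/*
-  A sketch of logging the summary statistics (the casts are needed
-  because MALLINFO_FIELD_TYPE is configurable):
-
-    struct mallinfo mi = mallinfo();
-    fprintf(stderr, "allocated: %lu free: %lu\n",
-            (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);
-*/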
-#endif /* NO_MALLINFO */
-
-/*
- independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
-
- independent_calloc is similar to calloc, but instead of returning a
- single cleared space, it returns an array of pointers to n_elements
- independent elements that can hold contents of size elem_size, each
- of which starts out cleared, and can be independently freed,
- realloc'ed etc. The elements are guaranteed to be adjacently
- allocated (this is not guaranteed to occur with multiple callocs or
- mallocs), which may also improve cache locality in some
- applications.
-
- The "chunks" argument is optional (i.e., may be null, which is
- probably the most typical usage). If it is null, the returned array
- is itself dynamically allocated and should also be freed when it is
- no longer needed. Otherwise, the chunks array must be of at least
- n_elements in length. It is filled in with the pointers to the
- chunks.
-
- In either case, independent_calloc returns this pointer array, or
- null if the allocation failed. If n_elements is zero and "chunks"
- is null, it returns a chunk representing an array with zero elements
- (which should be freed if not wanted).
-
- Each element must be individually freed when it is no longer
- needed. If you'd like to instead be able to free all at once, you
- should instead use regular calloc and assign pointers into this
- space to represent elements. (In this case though, you cannot
- independently free elements.)
-
- independent_calloc simplifies and speeds up implementations of many
- kinds of pools. It may also be useful when constructing large data
- structures that initially have a fixed number of fixed-sized nodes,
- but the number is not known at compile time, and some of the nodes
- may later need to be freed. For example:
-
- struct Node { int item; struct Node* next; };
-
- struct Node* build_list() {
-   struct Node** pool;
-   int n = read_number_of_nodes_needed();
-   int i;
-   if (n <= 0) return 0;
-   pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
-   if (pool == 0) die();
-   // organize into a linked list...
-   struct Node* first = pool[0];
-   for (i = 0; i < n-1; ++i)
-     pool[i]->next = pool[i+1];
-   free(pool); // Can now free the array (or not, if it is needed later)
-   return first;
- }
-*/
-void** dlindependent_calloc(size_t, size_t, void**);
-
-/*
- independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
-
- independent_comalloc allocates, all at once, a set of n_elements
- chunks with sizes indicated in the "sizes" array. It returns
- an array of pointers to these elements, each of which can be
- independently freed, realloc'ed etc. The elements are guaranteed to
- be adjacently allocated (this is not guaranteed to occur with
- multiple callocs or mallocs), which may also improve cache locality
- in some applications.
-
- The "chunks" argument is optional (i.e., may be null). If it is null
- the returned array is itself dynamically allocated and should also
- be freed when it is no longer needed. Otherwise, the chunks array
- must be of at least n_elements in length. It is filled in with the
- pointers to the chunks.
-
- In either case, independent_comalloc returns this pointer array, or
- null if the allocation failed. If n_elements is zero and chunks is
- null, it returns a chunk representing an array with zero elements
- (which should be freed if not wanted).
-
- Each element must be individually freed when it is no longer
- needed. If you'd like to instead be able to free all at once, you
- should instead use a single regular malloc, and assign pointers at
- particular offsets in the aggregate space. (In this case though, you
- cannot independently free elements.)
-
- independent_comalloc differs from independent_calloc in that each
- element may have a different size, and also that it does not
- automatically clear elements.
-
- independent_comalloc can be used to speed up allocation in cases
- where several structs or objects must always be allocated at the
- same time. For example:
-
- struct Head { ... };
- struct Foot { ... };
-
- void send_message(char* msg) {
- int msglen = strlen(msg);
- size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
- void* chunks[3];
- if (independent_comalloc(3, sizes, chunks) == 0)
- die();
- struct Head* head = (struct Head*)(chunks[0]);
- char* body = (char*)(chunks[1]);
- struct Foot* foot = (struct Foot*)(chunks[2]);
- // ...
- }
-
- In general though, independent_comalloc is worth using only for
- larger values of n_elements. For small values, you probably won't
- detect enough difference from series of malloc calls to bother.
-
- Overuse of independent_comalloc can increase overall memory usage,
- since it cannot reuse existing noncontiguous small chunks that
- might be available for some of the elements.
-*/
-void** dlindependent_comalloc(size_t, size_t*, void**);
-
-
-/*
- pvalloc(size_t n);
- Equivalent to valloc(minimum-page-that-holds(n)), that is,
- round up n to nearest pagesize.
- */
-void* dlpvalloc(size_t);
-
-/*
- malloc_trim(size_t pad);
-
- If possible, gives memory back to the system (via negative arguments
- to sbrk) if there is unused memory at the `high' end of the malloc
- pool or in unused MMAP segments. You can call this after freeing
- large blocks of memory to potentially reduce the system-level memory
- requirements of a program. However, it cannot guarantee to reduce
- memory. Under some allocation patterns, some large free blocks of
- memory will be locked between two used chunks, so they cannot be
- given back to the system.
-
- The `pad' argument to malloc_trim represents the amount of free
- trailing space to leave untrimmed. If this argument is zero, only
- the minimum amount of memory to maintain internal data structures
- will be left. Non-zero arguments can be supplied to maintain enough
- trailing space to service future expected allocations without having
- to re-obtain memory from the system.
-
- Malloc_trim returns 1 if it actually released any memory, else 0.
-*/
-int dlmalloc_trim(size_t);
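-
-/*
-  For example, after freeing a large working set (a sketch; the pad
-  value is illustrative):
-
-    int released = malloc_trim(64*1024); // keep 64K of slack untrimmed
-    // released == 1 if any memory went back to the system
-*/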
-
-/*
- malloc_usable_size(void* p);
-
- Returns the number of bytes you can actually use in
- an allocated chunk, which may be more than you requested (although
- often not) due to alignment and minimum size constraints.
- You can use this many bytes without worrying about
- overwriting other allocated objects. This is not a particularly great
- programming practice. malloc_usable_size can be more useful in
- debugging and assertions, for example:
-
- p = malloc(n);
- assert(malloc_usable_size(p) >= 256);
-*/
-size_t dlmalloc_usable_size(void*);
-
-/*
- malloc_stats();
- Prints on stderr the amount of space obtained from the system (both
- via sbrk and mmap), the maximum amount (which may be more than
- current if malloc_trim and/or munmap got called), and the current
- number of bytes allocated via malloc (or realloc, etc) but not yet
- freed. Note that this is the number of bytes allocated, not the
- number requested. It will be larger than the number requested
- because of alignment and bookkeeping overhead. Because it includes
- alignment wastage as being in use, this figure may be greater than
- zero even when no user-level chunks are allocated.
-
- The reported current and maximum system memory can be inaccurate if
- a program makes other calls to system memory allocation functions
- (normally sbrk) outside of malloc.
-
- malloc_stats prints only the most commonly interesting statistics.
- More information can be obtained by calling mallinfo.
-*/
-void dlmalloc_stats(void);
-
-#endif /* !ONLY_MSPACES */
-
-#if MSPACES
-
-/*
- mspace is an opaque type representing an independent
- region of space that supports mspace_malloc, etc.
-*/
-typedef void* mspace;
-
-/*
- create_mspace creates and returns a new independent space with the
- given initial capacity, or, if 0, the default granularity size. It
- returns null if there is no system memory available to create the
- space. If argument locked is non-zero, the space uses a separate
- lock to control access. The capacity of the space will grow
- dynamically as needed to service mspace_malloc requests. You can
- control the sizes of incremental increases of this space by
- compiling with a different DEFAULT_GRANULARITY or dynamically
- setting with mallopt(M_GRANULARITY, value).
-*/
-mspace create_mspace(size_t capacity, int locked);
-
-/*
- destroy_mspace destroys the given space, and attempts to return all
- of its memory back to the system, returning the total number of
- bytes freed. After destruction, the results of access to all memory
- used by the space become undefined.
-*/
-size_t destroy_mspace(mspace msp);
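-
-/*
-  A sketch of a short-lived private space (names are illustrative):
-
-    mspace ms = create_mspace(0, 0);  // default capacity, no locking
-    if (ms != 0) {
-      void* p = mspace_malloc(ms, 128);
-      // ... use p ...
-      destroy_mspace(ms);  // releases p along with all other chunks
-    }
-*/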
-
-/*
- create_mspace_with_base uses the memory supplied as the initial base
- of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
- space is used for bookkeeping, so the capacity must be at least this
- large. (Otherwise 0 is returned.) When this initial space is
- exhausted, additional memory will be obtained from the system.
- Destroying this space will deallocate all additionally allocated
- space (if possible) but not the initial base.
-*/
-mspace create_mspace_with_base(void* base, size_t capacity, int locked);
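-
-/*
-  For example, carving a space out of a static buffer (a sketch; the
-  size is illustrative, but must exceed the bookkeeping overhead):
-
-    static char arena[64*1024];
-    mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
-    // ms == 0 if the buffer is too small for the bookkeeping
-*/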
-
-/*
- mspace_malloc behaves as malloc, but operates within
- the given space.
-*/
-void* mspace_malloc(mspace msp, size_t bytes);
-
-/*
- mspace_free behaves as free, but operates within
- the given space.
-
- If compiled with FOOTERS==1, mspace_free is not actually needed.
- free may be called instead of mspace_free because freed chunks from
- any space are handled by their originating spaces.
-*/
-void mspace_free(mspace msp, void* mem);
-
-/*
- mspace_realloc behaves as realloc, but operates within
- the given space.
-
- If compiled with FOOTERS==1, mspace_realloc is not actually
- needed. realloc may be called instead of mspace_realloc because
- realloced chunks from any space are handled by their originating
- spaces.
-*/
-void* mspace_realloc(mspace msp, void* mem, size_t newsize);
-
-/*
- mspace_calloc behaves as calloc, but operates within
- the given space.
-*/
-void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
-
-/*
- mspace_memalign behaves as memalign, but operates within
- the given space.
-*/
-void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
-
-/*
- mspace_independent_calloc behaves as independent_calloc, but
- operates within the given space.
-*/
-void** mspace_independent_calloc(mspace msp, size_t n_elements,
- size_t elem_size, void* chunks[]);
-
-/*
- mspace_independent_comalloc behaves as independent_comalloc, but
- operates within the given space.
-*/
-void** mspace_independent_comalloc(mspace msp, size_t n_elements,
- size_t sizes[], void* chunks[]);
-
-/*
- mspace_footprint() returns the number of bytes obtained from the
- system for this space.
-*/
-size_t mspace_footprint(mspace msp);
-
-
-#if !NO_MALLINFO
-/*
- mspace_mallinfo behaves as mallinfo, but reports properties of
- the given space.
-*/
-struct mallinfo mspace_mallinfo(mspace msp);
-#endif /* NO_MALLINFO */
-
-/*
- mspace_malloc_stats behaves as malloc_stats, but reports
- properties of the given space.
-*/
-void mspace_malloc_stats(mspace msp);
-
-/*
- mspace_trim behaves as malloc_trim, but
- operates within the given space.
-*/
-int mspace_trim(mspace msp, size_t pad);
-
-/*
- An alias for mallopt.
-*/
-int mspace_mallopt(int, int);
-
-#endif /* MSPACES */
-
-#ifdef __cplusplus
-}; /* end of extern "C" */
-#endif
-
-#endif /* MALLOC_280_H */
+++ /dev/null
-Boost Software License - Version 1.0 - August 17th, 2003\r
-\r
-Permission is hereby granted, free of charge, to any person or organization\r
-obtaining a copy of the software and accompanying documentation covered by\r
-this license (the "Software") to use, reproduce, display, distribute,\r
-execute, and transmit the Software, and to prepare derivative works of the\r
-Software, and to permit third-parties to whom the Software is furnished to\r
-do so, all subject to the following:\r
-\r
-The copyright notices in the Software and this entire statement, including\r
-the above license grant, this restriction and the following disclaimer,\r
-must be included in all copies of the Software, in whole or in part, and\r
-all derivative works of the Software, unless such copies or derivative\r
-works are solely in the form of machine-executable object code generated by\r
-a source language processor.\r
-\r
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r
-FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT\r
-SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\r
-FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\r
-ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\r
-DEALINGS IN THE SOFTWARE.\r
+++ /dev/null
-/*\r
- This is a version (aka dlmalloc) of malloc/free/realloc written by\r
- Doug Lea and released to the public domain, as explained at\r
- http://creativecommons.org/licenses/publicdomain. Send questions,\r
- comments, complaints, performance data, etc to dl@cs.oswego.edu\r
-\r
-* Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)\r
-\r
- Note: There may be an updated version of this malloc obtainable at\r
- ftp://gee.cs.oswego.edu/pub/misc/malloc.c\r
- Check before installing!\r
-\r
-* Quickstart\r
-\r
- This library is all in one file to simplify the most common usage:\r
- ftp it, compile it (-O3), and link it into another program. All of\r
- the compile-time options default to reasonable values for use on\r
- most platforms. You might later want to step through various\r
- compile-time and dynamic tuning options.\r
-\r
- For convenience, an include file for code using this malloc is at:\r
- ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h\r
- You don't really need this .h file unless you call functions not\r
- defined in your system include files. The .h file contains only the\r
- excerpts from this file needed for using this malloc on ANSI C/C++\r
- systems, so long as you haven't changed compile-time options about\r
- naming and tuning parameters. If you do, then you can create your\r
- own malloc.h that does include all settings by cutting at the point\r
- indicated below. Note that you may already by default be using a C\r
- library containing a malloc that is based on some version of this\r
- malloc (for example in linux). You might still want to use the one\r
- in this file to customize settings or to avoid overheads associated\r
- with library versions.\r
-\r
-* Vital statistics:\r
-\r
- Supported pointer/size_t representation: 4 or 8 bytes\r
- size_t MUST be an unsigned type of the same width as\r
- pointers. (If you are using an ancient system that declares\r
- size_t as a signed type, or need it to be a different width\r
- than pointers, you can use a previous release of this malloc\r
- (e.g. 2.7.2) supporting these.)\r
-\r
- Alignment: 8 bytes (default)\r
- This suffices for nearly all current machines and C compilers.\r
- However, you can define MALLOC_ALIGNMENT to be wider than this\r
- if necessary (up to 128bytes), at the expense of using more space.\r
-\r
- Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)\r
- 8 or 16 bytes (if 8byte sizes)\r
- Each malloced chunk has a hidden word of overhead holding size\r
- and status information, and additional cross-check word\r
- if FOOTERS is defined.\r
-\r
- Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)\r
- 8-byte ptrs: 32 bytes (including overhead)\r
-\r
- Even a request for zero bytes (i.e., malloc(0)) returns a\r
- pointer to something of the minimum allocatable size.\r
- The maximum overhead wastage (i.e., number of extra bytes\r
- allocated than were requested in malloc) is less than or equal\r
- to the minimum size, except for requests >= mmap_threshold that\r
- are serviced via mmap(), where the worst case wastage is about\r
- 32 bytes plus the remainder from a system page (the minimal\r
- mmap unit); typically 4096 or 8192 bytes.\r
-\r
- Security: static-safe; optionally more or less\r
- The "security" of malloc refers to the ability of malicious\r
- code to accentuate the effects of errors (for example, freeing\r
- space that is not currently malloc'ed or overwriting past the\r
- ends of chunks) in code that calls malloc. This malloc\r
- guarantees not to modify any memory locations below the base of\r
- heap, i.e., static variables, even in the presence of usage\r
- errors. The routines additionally detect most improper frees\r
- and reallocs. All this holds as long as the static bookkeeping\r
- for malloc itself is not corrupted by some other means. This\r
- is only one aspect of security -- these checks do not, and\r
- cannot, detect all possible programming errors.\r
-\r
- If FOOTERS is defined nonzero, then each allocated chunk\r
- carries an additional check word to verify that it was malloced\r
- from its space. These check words are the same within each\r
- execution of a program using malloc, but differ across\r
- executions, so externally crafted fake chunks cannot be\r
- freed. This improves security by rejecting frees/reallocs that\r
- could corrupt heap memory, in addition to the checks preventing\r
- writes to statics that are always on. This may further improve\r
- security at the expense of time and space overhead. (Note that\r
- FOOTERS may also be worth using with MSPACES.)\r
-\r
- By default detected errors cause the program to abort (calling\r
- "abort()"). You can override this to instead proceed past\r
- errors by defining PROCEED_ON_ERROR. In this case, a bad free\r
- has no effect, and a malloc that encounters a bad address\r
- caused by user overwrites will ignore the bad address by\r
- dropping pointers and indices to all known memory. This may\r
- be appropriate for programs that should continue if at all\r
- possible in the face of programming errors, although they may\r
- run out of memory because dropped memory is never reclaimed.\r
-\r
- If you don't like either of these options, you can define\r
- CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything\r
- else. And if you are sure that your program using malloc has\r
- no errors or vulnerabilities, you can define INSECURE to 1,\r
- which might (or might not) provide a small performance improvement.\r
-\r
- Thread-safety: NOT thread-safe unless USE_LOCKS defined\r
- When USE_LOCKS is defined, each public call to malloc, free,\r
- etc is surrounded with either a pthread mutex or a win32\r
- spinlock (depending on WIN32). This is not especially fast, and\r
- can be a major bottleneck. It is designed only to provide\r
- minimal protection in concurrent environments, and to provide a\r
- basis for extensions. If you are using malloc in a concurrent\r
- program, consider instead using ptmalloc, which is derived from\r
- a version of this malloc. (See http://www.malloc.de).\r
-\r
- System requirements: Any combination of MORECORE and/or MMAP/MUNMAP\r
- This malloc can use unix sbrk or any emulation (invoked using\r
- the CALL_MORECORE macro) and/or mmap/munmap or any emulation\r
- (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system\r
- memory. On most unix systems, it tends to work best if both\r
- MORECORE and MMAP are enabled. On Win32, it uses emulations\r
- based on VirtualAlloc. It also uses common C library functions\r
- like memset.\r
-\r
- Compliance: I believe it is compliant with the Single Unix Specification\r
- (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably\r
- others as well.\r
-\r
-* Overview of algorithms\r
-\r
- This is not the fastest, most space-conserving, most portable, or\r
- most tunable malloc ever written. However it is among the fastest\r
- while also being among the most space-conserving, portable and\r
- tunable. Consistent balance across these factors results in a good\r
- general-purpose allocator for malloc-intensive programs.\r
-\r
- In most ways, this malloc is a best-fit allocator. Generally, it\r
- chooses the best-fitting existing chunk for a request, with ties\r
- broken in approximately least-recently-used order. (This strategy\r
- normally maintains low fragmentation.) However, for requests less\r
- than 256bytes, it deviates from best-fit when there is not an\r
- exactly fitting available chunk by preferring to use space adjacent\r
- to that used for the previous small request, as well as by breaking\r
- ties in approximately most-recently-used order. (These enhance\r
- locality of series of small allocations.) And for very large requests\r
- (>= 256Kb by default), it relies on system memory mapping\r
- facilities, if supported. (This helps avoid carrying around and\r
- possibly fragmenting memory used only for large chunks.)\r
-\r
- All operations (except malloc_stats and mallinfo) have execution\r
- times that are bounded by a constant factor of the number of bits in\r
- a size_t, not counting any clearing in calloc or copying in realloc,\r
- or actions surrounding MORECORE and MMAP that have times\r
- proportional to the number of non-contiguous regions returned by\r
- system allocation routines, which is often just 1.\r
-\r
- The implementation is not very modular and seriously overuses\r
- macros. Perhaps someday all C compilers will do as good a job\r
- inlining modular code as can now be done by brute-force expansion,\r
- but now, enough of them seem not to.\r
-\r
- Some compilers issue a lot of warnings about code that is\r
- dead/unreachable only on some platforms, and also about intentional\r
- uses of negation on unsigned types. All known cases of each can be\r
- ignored.\r
-\r
- For a longer but out of date high-level description, see\r
- http://gee.cs.oswego.edu/dl/html/malloc.html\r
-\r
-* MSPACES\r
- If MSPACES is defined, then in addition to malloc, free, etc.,\r
- this file also defines mspace_malloc, mspace_free, etc. These\r
- are versions of malloc routines that take an "mspace" argument\r
- obtained using create_mspace, to control all internal bookkeeping.\r
- If ONLY_MSPACES is defined, only these versions are compiled.\r
- So if you would like to use this allocator for only some allocations,\r
- and your system malloc for others, you can compile with\r
- ONLY_MSPACES and then do something like...\r
- static mspace mymspace = create_mspace(0,0); // for example\r
- #define mymalloc(bytes) mspace_malloc(mymspace, bytes)\r
-\r
- (Note: If you only need one instance of an mspace, you can instead\r
- use "USE_DL_PREFIX" to relabel the global malloc.)\r
-\r
- You can similarly create thread-local allocators by storing\r
- mspaces as thread-locals. For example:\r
- static __thread mspace tlms = 0;\r
- void* tlmalloc(size_t bytes) {\r
- if (tlms == 0) tlms = create_mspace(0, 0);\r
- return mspace_malloc(tlms, bytes);\r
- }\r
- void tlfree(void* mem) { mspace_free(tlms, mem); }\r
-\r
- Unless FOOTERS is defined, each mspace is completely independent.\r
- You cannot allocate from one and free to another (although\r
- conformance is only weakly checked, so usage errors are not always\r
- caught). If FOOTERS is defined, then each chunk carries around a tag\r
- indicating its originating mspace, and frees are directed to their\r
- originating spaces.\r
-\r
- ------------------------- Compile-time options ---------------------------\r
-\r
-Be careful in setting #define values for numerical constants of type\r
-size_t. On some systems, literal values are not automatically extended\r
-to size_t precision unless they are explicitly cast.\r
-\r
-WIN32 default: defined if _WIN32 defined\r
- Defining WIN32 sets up defaults for MS environment and compilers.\r
- Otherwise defaults are for unix.\r
-\r
-MALLOC_ALIGNMENT default: (size_t)8\r
- Controls the minimum alignment for malloc'ed chunks. It must be a\r
- power of two and at least 8, even on machines for which smaller\r
- alignments would suffice. It may be defined as larger than this\r
- though. Note however that code and data structures are optimized for\r
- the case of 8-byte alignment.\r
-\r
-MSPACES default: 0 (false)\r
- If true, compile in support for independent allocation spaces.\r
- This is only supported if HAVE_MMAP is true.\r
-\r
-ONLY_MSPACES default: 0 (false)\r
- If true, only compile in mspace versions, not regular versions.\r
-\r
-USE_LOCKS default: 0 (false)\r
- Causes each call to each public routine to be surrounded with\r
- pthread or WIN32 mutex lock/unlock. (If set true, this can be\r
- overridden on a per-mspace basis for mspace versions.)\r
-\r
-FOOTERS default: 0\r
- If true, provide extra checking and dispatching by placing\r
- information in the footers of allocated chunks. This adds\r
- space and time overhead.\r
-\r
-INSECURE default: 0\r
- If true, omit checks for usage errors and heap space overwrites.\r
-\r
-USE_DL_PREFIX default: NOT defined\r
- Causes compiler to prefix all public routines with the string 'dl'.\r
- This can be useful when you only want to use this malloc in one part\r
- of a program, using your regular system malloc elsewhere.\r
-\r
-ABORT default: defined as abort()\r
- Defines how to abort on failed checks. On most systems, a failed\r
- check cannot die with an "assert" or even print an informative\r
- message, because the underlying print routines in turn call malloc,\r
- which will fail again. Generally, the best policy is to simply call\r
- abort(). It's not very useful to do more than this because many\r
- errors due to overwriting will show up as address faults (null, odd\r
- addresses etc) rather than malloc-triggered checks, so will also\r
- abort. Also, most compilers know that abort() does not return, so\r
- can better optimize code conditionally calling it.\r
-\r
-PROCEED_ON_ERROR default: defined as 0 (false)\r
- Controls whether detected bad addresses cause them to be bypassed\r
- rather than aborting. If set, detected bad arguments to free and\r
- realloc are ignored. And all bookkeeping information is zeroed out\r
- upon a detected overwrite of freed heap space, thus losing the\r
- ability to ever return it from malloc again, but enabling the\r
- application to proceed. If PROCEED_ON_ERROR is defined, the\r
- static variable malloc_corruption_error_count is compiled in\r
- and can be examined to see if errors have occurred. This option\r
- generates slower code than the default abort policy.\r
-\r
-DEBUG default: NOT defined\r
- The DEBUG setting is mainly intended for people trying to modify\r
- this code or diagnose problems when porting to new platforms.\r
- However, it may also be able to better isolate user errors than just\r
- using runtime checks. The assertions in the check routines spell\r
- out in more detail the assumptions and invariants underlying the\r
- algorithms. The checking is fairly extensive, and will slow down\r
- execution noticeably. Calling malloc_stats or mallinfo with DEBUG\r
- set will attempt to check every non-mmapped allocated and free chunk\r
- in the course of computing the summaries.\r
-\r
-ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)\r
- Debugging assertion failures can be nearly impossible if your\r
- version of the assert macro causes malloc to be called, which will\r
- lead to a cascade of further failures, blowing the runtime stack.\r
- ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),\r
- which will usually make debugging easier.\r
-\r
-MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32\r
- The action to take before "return 0" when malloc fails because\r
- no memory is available.\r
-\r
-HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES\r
- True if this system supports sbrk or an emulation of it.\r
-\r
-MORECORE default: sbrk\r
- The name of the sbrk-style system routine to call to obtain more\r
- memory. See below for guidance on writing custom MORECORE\r
- functions. The type of the argument to sbrk/MORECORE varies across\r
- systems. It cannot be size_t, because it supports negative\r
- arguments, so it is normally the signed type of the same width as\r
- size_t (sometimes declared as "intptr_t"). It doesn't much matter\r
- though. Internally, we only call it with arguments less than half\r
- the max value of a size_t, which should work across all reasonable\r
- possibilities, although sometimes generating compiler warnings. See\r
- near the end of this file for guidelines for creating a custom\r
- version of MORECORE.\r
-\r
-MORECORE_CONTIGUOUS default: 1 (true)\r
- If true, take advantage of fact that consecutive calls to MORECORE\r
- with positive arguments always return contiguous increasing\r
- addresses. This is true of unix sbrk. It does not hurt too much to\r
- set it true anyway, since malloc copes with non-contiguities.\r
- Setting it false when MORECORE is definitely non-contiguous, though,\r
- saves the time and possibly wasted space it would take to discover this.\r
-\r
-MORECORE_CANNOT_TRIM default: NOT defined\r
- True if MORECORE cannot release space back to the system when given\r
- negative arguments. This is generally necessary only if you are\r
- using a hand-crafted MORECORE function that cannot handle negative\r
- arguments.\r
-\r
-DONT_MERGE_SEGMENTS default: NOT defined\r
- Defined if no attempt should be made to merge memory segments\r
- returned by either MORECORE or CALL_MMAP. This is mostly an\r
- optimization step to avoid having to walk through all the memory\r
- segments when it is known in advance that a merge will not be possible.\r
-\r
-HAVE_MMAP default: 1 (true)\r
- True if this system supports mmap or an emulation of it. If so, and\r
- HAVE_MORECORE is not true, MMAP is used for all system\r
- allocation. If set and HAVE_MORECORE is true as well, MMAP is\r
- primarily used to directly allocate very large blocks. It is also\r
- used as a backup strategy in cases where MORECORE fails to provide\r
- space from system. Note: A single call to MUNMAP is assumed to be\r
- able to unmap memory that may have been allocated using multiple calls\r
- to MMAP, so long as they are adjacent.\r
-\r
-HAVE_MREMAP default: 1 on linux, else 0\r
- If true realloc() uses mremap() to re-allocate large blocks and\r
- extend or shrink allocation spaces.\r
-\r
-MMAP_CLEARS default: 1 on unix\r
- True if mmap clears memory so calloc doesn't need to. This is true\r
- for standard unix mmap using /dev/zero.\r
-\r
-USE_BUILTIN_FFS default: 0 (i.e., not used)\r
- Causes malloc to use the builtin ffs() function to compute indices.\r
- Some compilers may recognize and intrinsify ffs to be faster than the\r
- supplied C version. Also, the case of x86 using gcc is special-cased\r
- to an asm instruction, so is already as fast as it can be, and so\r
- this setting has no effect. (On most x86s, the asm version is only\r
- slightly faster than the C version.)\r
-\r
-malloc_getpagesize default: derive from system includes, or 4096.\r
- The system page size. To the extent possible, this malloc manages\r
- memory from the system in page-size units. This may be (and\r
- usually is) a function rather than a constant. This is ignored\r
- if WIN32, where page size is determined using GetSystemInfo during\r
- initialization.\r
-\r
-USE_DEV_RANDOM default: 0 (i.e., not used)\r
- Causes malloc to use /dev/random to initialize secure magic seed for\r
- stamping footers. Otherwise, the current time is used.\r
-\r
-NO_MALLINFO default: 0\r
- If defined, don't compile "mallinfo". This can be a simple way\r
- of dealing with mismatches between system declarations and\r
- those in this file.\r
-\r
-MALLINFO_FIELD_TYPE default: size_t\r
- The type of the fields in the mallinfo struct. This was originally\r
- defined as "int" in SVID etc, but is more usefully defined as\r
- size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.\r
-\r
-REALLOC_ZERO_BYTES_FREES default: not defined\r
- This should be set if a call to realloc with zero bytes should\r
- be the same as a call to free. Some people think it should. Otherwise,\r
- because this malloc returns a unique pointer for malloc(0),\r
- realloc(p, 0) does too.\r
-\r
-LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H\r
-LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H\r
-LACKS_STDLIB_H default: NOT defined unless on WIN32\r
- Define these if your system does not have these header files.\r
- You might need to manually insert some of the declarations they provide.\r
-\r
-DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,\r
- system_info.dwAllocationGranularity in WIN32,\r
- otherwise 64K.\r
- Also settable using mallopt(M_GRANULARITY, x)\r
- The unit for allocating and deallocating memory from the system. On\r
- most systems with contiguous MORECORE, there is no reason to\r
- make this more than a page. However, systems with MMAP tend to\r
- either require or encourage larger granularities. You can increase\r
- this value to prevent system allocation functions from being called so\r
- often, especially if they are slow. The value must be at least one\r
- page and must be a power of two. Setting to 0 causes initialization\r
- to either page size or win32 region size. (Note: In previous\r
- versions of malloc, the equivalent of this option was called\r
- "TOP_PAD")\r
-\r
-DEFAULT_TRIM_THRESHOLD default: 2MB\r
- Also settable using mallopt(M_TRIM_THRESHOLD, x)\r
- The maximum amount of unused top-most memory to keep before\r
- releasing via malloc_trim in free(). Automatic trimming is mainly\r
- useful in long-lived programs using contiguous MORECORE. Because\r
- trimming via sbrk can be slow on some systems, and can sometimes be\r
- wasteful (in cases where programs immediately afterward allocate\r
- more large chunks) the value should be high enough so that your\r
- overall system performance would improve by releasing this much\r
- memory. As a rough guide, you might set to a value close to the\r
- average size of a process (program) running on your system.\r
- Releasing this much memory would allow such a process to run in\r
- memory. Generally, it is worth tuning trim thresholds when a\r
- program undergoes phases where several large chunks are allocated\r
- and released in ways that can reuse each other's storage, perhaps\r
- mixed with phases where there are no such chunks at all. The trim\r
- value must be greater than page size to have any useful effect. To\r
- disable trimming completely, you can set to MAX_SIZE_T. Note that the trick\r
- some people use of mallocing a huge space and then freeing it at\r
- program startup, in an attempt to reserve system memory, doesn't\r
- have the intended effect under automatic trimming, since that memory\r
- will immediately be returned to the system.\r
-\r
-DEFAULT_MMAP_THRESHOLD default: 256K\r
- Also settable using mallopt(M_MMAP_THRESHOLD, x)\r
- The request size threshold for using MMAP to directly service a\r
- request. Requests of at least this size that cannot be allocated\r
- using already-existing space will be serviced via mmap. (If enough\r
- normal freed space already exists it is used instead.) Using mmap\r
- segregates relatively large chunks of memory so that they can be\r
- individually obtained and released from the host system. A request\r
- serviced through mmap is never reused by any other request (at least\r
- not directly; the system may just so happen to remap successive\r
- requests to the same locations). Segregating space in this way has\r
- the benefits that: Mmapped space can always be individually released\r
- back to the system, which helps keep the system level memory demands\r
- of a long-lived program low. Also, mapped memory doesn't become\r
- `locked' between other chunks, as can happen with normally allocated\r
- chunks, which means that even trimming via malloc_trim would not\r
- release them. However, it has the disadvantage that the space\r
- cannot be reclaimed, consolidated, and then used to service later\r
- requests, as happens with normal chunks. The advantages of mmap\r
- nearly always outweigh disadvantages for "large" chunks, but the\r
- value of "large" may vary across systems. The default is an\r
- empirically derived value that works well in most systems. You can\r
- disable mmap by setting to MAX_SIZE_T.\r
-\r
-*/\r
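-\r
-/*\r
-  For example, a typical embedding might compile this file with\r
-  (illustrative flags, using options documented above):\r
-\r
-    cc -O3 -DUSE_DL_PREFIX -DUSE_LOCKS=1 -c malloc.c\r
-*/\r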
-\r
-#ifndef WIN32\r
-#ifdef _WIN32\r
-#define WIN32 1\r
-#endif /* _WIN32 */\r
-#endif /* WIN32 */\r
-#ifdef WIN32\r
-#define WIN32_LEAN_AND_MEAN\r
-#include <windows.h>\r
-#define HAVE_MMAP 1\r
-#define HAVE_MORECORE 0\r
-#define LACKS_UNISTD_H\r
-#define LACKS_SYS_PARAM_H\r
-#define LACKS_SYS_MMAN_H\r
-#define LACKS_STRING_H\r
-#define LACKS_STRINGS_H\r
-#define LACKS_SYS_TYPES_H\r
-#define LACKS_ERRNO_H\r
-#define MALLOC_FAILURE_ACTION\r
-#define MMAP_CLEARS 1 /* WINCE and some others apparently don't clear */\r
-#endif /* WIN32 */\r
-\r
-#if defined(DARWIN) || defined(_DARWIN)\r
-/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */\r
-#ifndef HAVE_MORECORE\r
-#define HAVE_MORECORE 0\r
-#define HAVE_MMAP 1\r
-#endif /* HAVE_MORECORE */\r
-#endif /* DARWIN */\r
-\r
-#ifndef LACKS_SYS_TYPES_H\r
-#include <sys/types.h> /* For size_t */\r
-#endif /* LACKS_SYS_TYPES_H */\r
-\r
-/* The maximum possible size_t value has all bits set */\r
-#define MAX_SIZE_T (~(size_t)0)\r
-\r
-#ifndef ONLY_MSPACES\r
-#define ONLY_MSPACES 0\r
-#endif /* ONLY_MSPACES */\r
-#ifndef MSPACES\r
-#if ONLY_MSPACES\r
-#define MSPACES 1\r
-#else /* ONLY_MSPACES */\r
-#define MSPACES 0\r
-#endif /* ONLY_MSPACES */\r
-#endif /* MSPACES */\r
-#ifndef MALLOC_ALIGNMENT\r
-#define MALLOC_ALIGNMENT ((size_t)8U)\r
-#endif /* MALLOC_ALIGNMENT */\r
-#ifndef FOOTERS\r
-#define FOOTERS 0\r
-#endif /* FOOTERS */\r
-#ifndef ABORT\r
-#define ABORT abort()\r
-#endif /* ABORT */\r
-#ifndef ABORT_ON_ASSERT_FAILURE\r
-#define ABORT_ON_ASSERT_FAILURE 1\r
-#endif /* ABORT_ON_ASSERT_FAILURE */\r
-#ifndef PROCEED_ON_ERROR\r
-#define PROCEED_ON_ERROR 0\r
-#endif /* PROCEED_ON_ERROR */\r
-#ifndef USE_LOCKS\r
-#define USE_LOCKS 0\r
-#endif /* USE_LOCKS */\r
-#ifndef INSECURE\r
-#define INSECURE 0\r
-#endif /* INSECURE */\r
-#ifndef HAVE_MMAP\r
-#define HAVE_MMAP 1\r
-#endif /* HAVE_MMAP */\r
-#ifndef MMAP_CLEARS\r
-#define MMAP_CLEARS 1\r
-#endif /* MMAP_CLEARS */\r
-#ifndef HAVE_MREMAP\r
-#ifdef linux\r
-#define HAVE_MREMAP 1\r
-#else /* linux */\r
-#define HAVE_MREMAP 0\r
-#endif /* linux */\r
-#endif /* HAVE_MREMAP */\r
-#ifndef MALLOC_FAILURE_ACTION\r
-#define MALLOC_FAILURE_ACTION errno = ENOMEM;\r
-#endif /* MALLOC_FAILURE_ACTION */\r
-#ifndef HAVE_MORECORE\r
-#if ONLY_MSPACES\r
-#define HAVE_MORECORE 0\r
-#else /* ONLY_MSPACES */\r
-#define HAVE_MORECORE 1\r
-#endif /* ONLY_MSPACES */\r
-#endif /* HAVE_MORECORE */\r
-#if !HAVE_MORECORE\r
-#define MORECORE_CONTIGUOUS 0\r
-#else /* !HAVE_MORECORE */\r
-#ifndef MORECORE\r
-#define MORECORE sbrk\r
-#endif /* MORECORE */\r
-#ifndef MORECORE_CONTIGUOUS\r
-#define MORECORE_CONTIGUOUS 1\r
-#endif /* MORECORE_CONTIGUOUS */\r
-#endif /* HAVE_MORECORE */\r
-#ifndef DEFAULT_GRANULARITY\r
-#if MORECORE_CONTIGUOUS\r
-#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */\r
-#else /* MORECORE_CONTIGUOUS */\r
-#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)\r
-#endif /* MORECORE_CONTIGUOUS */\r
-#endif /* DEFAULT_GRANULARITY */\r
-#ifndef DEFAULT_TRIM_THRESHOLD\r
-#ifndef MORECORE_CANNOT_TRIM\r
-#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)\r
-#else /* MORECORE_CANNOT_TRIM */\r
-#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T\r
-#endif /* MORECORE_CANNOT_TRIM */\r
-#endif /* DEFAULT_TRIM_THRESHOLD */\r
-#ifndef DEFAULT_MMAP_THRESHOLD\r
-#if HAVE_MMAP\r
-#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)\r
-#else /* HAVE_MMAP */\r
-#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T\r
-#endif /* HAVE_MMAP */\r
-#endif /* DEFAULT_MMAP_THRESHOLD */\r
-#ifndef USE_BUILTIN_FFS\r
-#define USE_BUILTIN_FFS 0\r
-#endif /* USE_BUILTIN_FFS */\r
-#ifndef USE_DEV_RANDOM\r
-#define USE_DEV_RANDOM 0\r
-#endif /* USE_DEV_RANDOM */\r
-#ifndef NO_MALLINFO\r
-#define NO_MALLINFO 0\r
-#endif /* NO_MALLINFO */\r
-#ifndef MALLINFO_FIELD_TYPE\r
-#define MALLINFO_FIELD_TYPE size_t\r
-#endif /* MALLINFO_FIELD_TYPE */\r
-\r
-/*\r
- mallopt tuning options. SVID/XPG defines four standard parameter\r
- numbers for mallopt, normally defined in malloc.h. None of these\r
- are used in this malloc, so setting them has no effect. But this\r
- malloc does support the following options.\r
-*/\r
-\r
-#define M_TRIM_THRESHOLD (-1)\r
-#define M_GRANULARITY (-2)\r
-#define M_MMAP_THRESHOLD (-3)\r
-\r
-/* ------------------------ Mallinfo declarations ------------------------ */\r
-\r
-#if !NO_MALLINFO\r
-/*\r
- This version of malloc supports the standard SVID/XPG mallinfo\r
- routine that returns a struct containing usage properties and\r
- statistics. It should work on any system that has a\r
- /usr/include/malloc.h defining struct mallinfo. The main\r
- declaration needed is the mallinfo struct that is returned (by-copy)\r
- by mallinfo(). The mallinfo struct contains a bunch of fields that\r
- are not even meaningful in this version of malloc. These fields\r
- are instead filled by mallinfo() with other numbers that might be of\r
- interest.\r
-\r
- HAVE_USR_INCLUDE_MALLOC_H should be set if you have a\r
- /usr/include/malloc.h file that includes a declaration of struct\r
- mallinfo. If so, it is included; else a compliant version is\r
- declared below. These must be precisely the same for mallinfo() to\r
- work. The original SVID version of this struct, defined on most\r
- systems with mallinfo, declares all fields as ints. But some others\r
- define them as unsigned long. If your system defines the fields using a\r
- type of different width than listed here, you MUST #include your\r
- system version and #define HAVE_USR_INCLUDE_MALLOC_H.\r
-*/\r
-\r
-/* #define HAVE_USR_INCLUDE_MALLOC_H */\r
-\r
-#ifdef HAVE_USR_INCLUDE_MALLOC_H\r
-#include "/usr/include/malloc.h"\r
-#else /* HAVE_USR_INCLUDE_MALLOC_H */\r
-\r
-struct mallinfo {\r
- MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */\r
- MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */\r
- MALLINFO_FIELD_TYPE smblks; /* always 0 */\r
- MALLINFO_FIELD_TYPE hblks; /* always 0 */\r
- MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */\r
- MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */\r
- MALLINFO_FIELD_TYPE fsmblks; /* always 0 */\r
- MALLINFO_FIELD_TYPE uordblks; /* total allocated space */\r
- MALLINFO_FIELD_TYPE fordblks; /* total free space */\r
- MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */\r
-};\r
-\r
-#endif /* HAVE_USR_INCLUDE_MALLOC_H */\r
-#endif /* NO_MALLINFO */\r
-\r
-#ifndef FORCEINLINE\r
- #if defined(__GNUC__)\r
- #define FORCEINLINE __inline __attribute__ ((always_inline))\r
- #elif defined(_MSC_VER)\r
- #define FORCEINLINE __forceinline\r
- #endif\r
-#endif\r
-#ifndef NOINLINE\r
- #if defined(__GNUC__)\r
- #define NOINLINE __attribute__ ((noinline))\r
- #elif defined(_MSC_VER)\r
- #define NOINLINE __declspec(noinline)\r
- #else\r
- #define NOINLINE\r
- #endif\r
-#endif\r
-\r
-#ifdef __cplusplus\r
-extern "C" {\r
-#ifndef FORCEINLINE\r
- #define FORCEINLINE inline\r
-#endif\r
-#endif /* __cplusplus */\r
-#ifndef FORCEINLINE\r
- #define FORCEINLINE\r
-#endif\r
-\r
-#if !ONLY_MSPACES\r
-\r
-/* ------------------- Declarations of public routines ------------------- */\r
-\r
-#ifndef USE_DL_PREFIX\r
-#define dlcalloc calloc\r
-#define dlfree free\r
-#define dlmalloc malloc\r
-#define dlmemalign memalign\r
-#define dlrealloc realloc\r
-#define dlvalloc valloc\r
-#define dlpvalloc pvalloc\r
-#define dlmallinfo mallinfo\r
-#define dlmallopt mallopt\r
-#define dlmalloc_trim malloc_trim\r
-#define dlmalloc_stats malloc_stats\r
-#define dlmalloc_usable_size malloc_usable_size\r
-#define dlmalloc_footprint malloc_footprint\r
-#define dlmalloc_max_footprint malloc_max_footprint\r
-#define dlindependent_calloc independent_calloc\r
-#define dlindependent_comalloc independent_comalloc\r
-#endif /* USE_DL_PREFIX */\r
-\r
-\r
-/*\r
- malloc(size_t n)\r
- Returns a pointer to a newly allocated chunk of at least n bytes, or\r
- null if no space is available, in which case errno is set to ENOMEM\r
- on ANSI C systems.\r
-\r
- If n is zero, malloc returns a minimum-sized chunk. (The minimum\r
- size is 16 bytes on most 32bit systems, and 32 bytes on 64bit\r
- systems.) Note that size_t is an unsigned type, so calls with\r
- arguments that would be negative if signed are interpreted as\r
- requests for huge amounts of space, which will often fail. The\r
- maximum supported value of n differs across systems, but is in all\r
- cases less than the maximum representable value of a size_t.\r
-*/\r
-void* dlmalloc(size_t);\r
-\r
-/*\r
- free(void* p)\r
- Releases the chunk of memory pointed to by p, that had been previously\r
- allocated using malloc or a related routine such as realloc.\r
- It has no effect if p is null. If p was not malloced or already\r
- freed, free(p) will by default cause the current program to abort.\r
-*/\r
-void dlfree(void*);\r
-\r
-/*\r
- calloc(size_t n_elements, size_t element_size);\r
- Returns a pointer to n_elements * element_size bytes, with all locations\r
- set to zero.\r
-*/\r
-void* dlcalloc(size_t, size_t);\r
-\r
-/*\r
- realloc(void* p, size_t n)\r
- Returns a pointer to a chunk of size n that contains the same data\r
- as does chunk p up to the minimum of (n, p's size) bytes, or null\r
- if no space is available.\r
-\r
- The returned pointer may or may not be the same as p. The algorithm\r
- prefers extending p in most cases when possible, otherwise it\r
- employs the equivalent of a malloc-copy-free sequence.\r
-\r
- If p is null, realloc is equivalent to malloc.\r
-\r
- If space is not available, realloc returns null, errno is set (if on\r
- ANSI) and p is NOT freed.\r
-\r
- If n is for fewer bytes than already held by p, the newly unused\r
- space is lopped off and freed if possible. realloc with a size\r
- argument of zero (re)allocates a minimum-sized chunk.\r
-\r
- The old unix realloc convention of allowing the last-free'd chunk\r
- to be used as an argument to realloc is not supported.\r
-*/\r
-\r
-void* dlrealloc(void*, size_t);\r
-\r
-/*\r
- memalign(size_t alignment, size_t n);\r
- Returns a pointer to a newly allocated chunk of n bytes, aligned\r
- in accord with the alignment argument.\r
-\r
- The alignment argument should be a power of two. If the argument is\r
- not a power of two, the nearest greater power is used.\r
- 8-byte alignment is guaranteed by normal malloc calls, so don't\r
- bother calling memalign with an argument of 8 or less.\r
-\r
- Overreliance on memalign is a sure way to fragment space.\r
-*/\r
-void* dlmemalign(size_t, size_t);\r
-\r
-/*\r
- valloc(size_t n);\r
- Equivalent to memalign(pagesize, n), where pagesize is the page\r
- size of the system. If the pagesize is unknown, 4096 is used.\r
-*/\r
-void* dlvalloc(size_t);\r
-\r
-/*\r
- mallopt(int parameter_number, int parameter_value)\r
- Sets tunable parameters. The format is to provide a\r
- (parameter-number, parameter-value) pair. mallopt then sets the\r
- corresponding parameter to the argument value if it can (i.e., so\r
- long as the value is meaningful), and returns 1 if successful else\r
- 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,\r
- normally defined in malloc.h. None of these are used in this malloc,\r
- so setting them has no effect. But this malloc also supports other\r
- options in mallopt. See below for details. Briefly, supported\r
- parameters are as follows (listed defaults are for "typical"\r
- configurations).\r
-\r
- Symbol param # default allowed param values\r
- M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)\r
- M_GRANULARITY -2 page size any power of 2 >= page size\r
- M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)\r
-*/\r
-int dlmallopt(int, int);\r
-\r
-/*\r
- malloc_footprint();\r
- Returns the number of bytes obtained from the system. The total\r
- number of bytes allocated by malloc, realloc etc., is less than this\r
- value. Unlike mallinfo, this function returns only a precomputed\r
- result, so can be called frequently to monitor memory consumption.\r
- Even if locks are otherwise defined, this function does not use them,\r
- so results might not be up to date.\r
-*/\r
-size_t dlmalloc_footprint(void);\r
-\r
-/*\r
- malloc_max_footprint();\r
- Returns the maximum number of bytes obtained from the system. This\r
- value will be greater than current footprint if deallocated space\r
- has been reclaimed by the system. The peak number of bytes allocated\r
- by malloc, realloc etc., is less than this value. Unlike mallinfo,\r
- this function returns only a precomputed result, so can be called\r
- frequently to monitor memory consumption. Even if locks are\r
- otherwise defined, this function does not use them, so results might\r
- not be up to date.\r
-*/\r
-size_t dlmalloc_max_footprint(void);\r
-\r
-#if !NO_MALLINFO\r
-/*\r
- mallinfo()\r
- Returns (by copy) a struct containing various summary statistics:\r
-\r
- arena: current total non-mmapped bytes allocated from system\r
- ordblks: the number of free chunks\r
- smblks: always zero.\r
- hblks: current number of mmapped regions\r
- hblkhd: total bytes held in mmapped regions\r
- usmblks: the maximum total allocated space. This will be greater\r
- than current total if trimming has occurred.\r
- fsmblks: always zero\r
- uordblks: current total allocated space (normal or mmapped)\r
- fordblks: total free space\r
- keepcost: the maximum number of bytes that could ideally be released\r
- back to system via malloc_trim. ("ideally" means that\r
- it ignores page restrictions etc.)\r
-\r
- Because these fields are ints, but internal bookkeeping may\r
- be kept as longs, the reported values may wrap around zero and\r
- thus be inaccurate.\r
-*/\r
-struct mallinfo dlmallinfo(void);\r
-#endif /* NO_MALLINFO */\r
-\r
-/*\r
- independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);\r
-\r
- independent_calloc is similar to calloc, but instead of returning a\r
- single cleared space, it returns an array of pointers to n_elements\r
- independent elements that can hold contents of size elem_size, each\r
- of which starts out cleared, and can be independently freed,\r
- realloc'ed etc. The elements are guaranteed to be adjacently\r
- allocated (this is not guaranteed to occur with multiple callocs or\r
- mallocs), which may also improve cache locality in some\r
- applications.\r
-\r
- The "chunks" argument is optional (i.e., may be null, which is\r
- probably the most typical usage). If it is null, the returned array\r
- is itself dynamically allocated and should also be freed when it is\r
- no longer needed. Otherwise, the chunks array must be of at least\r
- n_elements in length. It is filled in with the pointers to the\r
- chunks.\r
-\r
- In either case, independent_calloc returns this pointer array, or\r
- null if the allocation failed. If n_elements is zero and "chunks"\r
- is null, it returns a chunk representing an array with zero elements\r
- (which should be freed if not wanted).\r
-\r
- Each element must be individually freed when it is no longer\r
- needed. If you'd like to instead be able to free all at once, you\r
- should instead use regular calloc and assign pointers into this\r
- space to represent elements. (In this case though, you cannot\r
- independently free elements.)\r
-\r
- independent_calloc simplifies and speeds up implementations of many\r
- kinds of pools. It may also be useful when constructing large data\r
- structures that initially have a fixed number of fixed-sized nodes,\r
- but the number is not known at compile time, and some of the nodes\r
- may later need to be freed. For example:\r
-\r
- struct Node { int item; struct Node* next; };\r
-\r
- struct Node* build_list() {\r
-   struct Node** pool;\r
-   int n = read_number_of_nodes_needed();\r
-   int i;\r
-   if (n <= 0) return 0;\r
-   pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);\r
-   if (pool == 0) die();\r
-   // organize into a linked list...\r
-   struct Node* first = pool[0];\r
-   for (i = 0; i < n-1; ++i)\r
-     pool[i]->next = pool[i+1];\r
-   free(pool); // Can now free the array (or not, if it is needed later)\r
-   return first;\r
- }\r
-*/\r
-void** dlindependent_calloc(size_t, size_t, void**);\r
-\r
-/*\r
- independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);\r
-\r
- independent_comalloc allocates, all at once, a set of n_elements\r
- chunks with sizes indicated in the "sizes" array. It returns\r
- an array of pointers to these elements, each of which can be\r
- independently freed, realloc'ed etc. The elements are guaranteed to\r
- be adjacently allocated (this is not guaranteed to occur with\r
- multiple callocs or mallocs), which may also improve cache locality\r
- in some applications.\r
-\r
- The "chunks" argument is optional (i.e., may be null). If it is null\r
- the returned array is itself dynamically allocated and should also\r
- be freed when it is no longer needed. Otherwise, the chunks array\r
- must be of at least n_elements in length. It is filled in with the\r
- pointers to the chunks.\r
-\r
- In either case, independent_comalloc returns this pointer array, or\r
- null if the allocation failed. If n_elements is zero and chunks is\r
- null, it returns a chunk representing an array with zero elements\r
- (which should be freed if not wanted).\r
-\r
- Each element must be individually freed when it is no longer\r
- needed. If you'd like instead to be able to free all at once, you\r
- should use a single regular malloc, and assign pointers at\r
- particular offsets in the aggregate space. (In this case though, you\r
- cannot independently free elements.)\r
-\r
- independent_comalloc differs from independent_calloc in that each\r
- element may have a different size, and also that it does not\r
- automatically clear elements.\r
-\r
- independent_comalloc can be used to speed up allocation in cases\r
- where several structs or objects must always be allocated at the\r
- same time. For example:\r
-\r
- struct Head { ... };\r
- struct Foot { ... };\r
-\r
- void send_message(char* msg) {\r
- int msglen = strlen(msg);\r
- size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };\r
- void* chunks[3];\r
- if (independent_comalloc(3, sizes, chunks) == 0)\r
- die();\r
- struct Head* head = (struct Head*)(chunks[0]);\r
- char* body = (char*)(chunks[1]);\r
- struct Foot* foot = (struct Foot*)(chunks[2]);\r
- // ...\r
- }\r
-\r
- In general though, independent_comalloc is worth using only for\r
- larger values of n_elements. For small values, you probably won't\r
- detect enough difference from a series of malloc calls to bother.\r
-\r
- Overuse of independent_comalloc can increase overall memory usage,\r
- since it cannot reuse existing noncontiguous small chunks that\r
- might be available for some of the elements.\r
-*/\r
-void** dlindependent_comalloc(size_t, size_t*, void**);\r
-\r
-\r
-/*\r
- pvalloc(size_t n);\r
- Equivalent to valloc(minimum-page-that-holds(n)), that is,\r
- round up n to nearest pagesize.\r
- */\r
-void* dlpvalloc(size_t);\r
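-\r
-/*\r
- Worked example: with 4096-byte pages, dlpvalloc(100) returns a\r
- page-aligned block whose usable size is rounded up to 4096 bytes.\r
-*/\r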
-\r
-/*\r
- malloc_trim(size_t pad);\r
-\r
- If possible, gives memory back to the system (via negative arguments\r
- to sbrk) if there is unused memory at the `high' end of the malloc\r
- pool or in unused MMAP segments. You can call this after freeing\r
- large blocks of memory to potentially reduce the system-level memory\r
- requirements of a program. However, it cannot guarantee to reduce\r
- memory. Under some allocation patterns, some large free blocks of\r
- memory will be locked between two used chunks, so they cannot be\r
- given back to the system.\r
-\r
- The `pad' argument to malloc_trim represents the amount of free\r
- trailing space to leave untrimmed. If this argument is zero, only\r
- the minimum amount of memory to maintain internal data structures\r
- will be left. Non-zero arguments can be supplied to maintain enough\r
- trailing space to service future expected allocations without having\r
- to re-obtain memory from the system.\r
-\r
- Malloc_trim returns 1 if it actually released any memory, else 0.\r
-*/\r
-int dlmalloc_trim(size_t);\r
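-\r
-/*\r
- A typical pattern (sketch; big_buffer is a hypothetical earlier\r
- allocation): after releasing a large working set, hand trailing free\r
- memory back to the system while keeping 1MB of headroom:\r
-\r
- dlfree(big_buffer);\r
- (void)dlmalloc_trim(1024*1024); // returns 1 if memory was released\r
-*/\r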
-\r
-/*\r
- malloc_usable_size(void* p);\r
-\r
- Returns the number of bytes you can actually use in\r
- an allocated chunk, which may be more than you requested (although\r
- often not) due to alignment and minimum size constraints.\r
- You can use this many bytes without worrying about\r
- overwriting other allocated objects. This is not a particularly great\r
- programming practice. malloc_usable_size can be more useful in\r
- debugging and assertions, for example:\r
-\r
- p = malloc(n);\r
- assert(malloc_usable_size(p) >= 256);\r
-*/\r
-size_t dlmalloc_usable_size(void*);\r
-\r
-/*\r
- malloc_stats();\r
- Prints on stderr the amount of space obtained from the system (both\r
- via sbrk and mmap), the maximum amount (which may be more than\r
- current if malloc_trim and/or munmap got called), and the current\r
- number of bytes allocated via malloc (or realloc, etc) but not yet\r
- freed. Note that this is the number of bytes allocated, not the\r
- number requested. It will be larger than the number requested\r
- because of alignment and bookkeeping overhead. Because it includes\r
- alignment wastage as being in use, this figure may be greater than\r
- zero even when no user-level chunks are allocated.\r
-\r
- The reported current and maximum system memory can be inaccurate if\r
- a program makes other calls to system memory allocation functions\r
- (normally sbrk) outside of malloc.\r
-\r
- malloc_stats prints only the most commonly interesting statistics.\r
- More information can be obtained by calling mallinfo.\r
-*/\r
-void dlmalloc_stats(void);\r
-\r
-#endif /* ONLY_MSPACES */\r
-\r
-#if MSPACES\r
-\r
-/*\r
- mspace is an opaque type representing an independent\r
- region of space that supports mspace_malloc, etc.\r
-*/\r
-typedef void* mspace;\r
-\r
-/*\r
- create_mspace creates and returns a new independent space with the\r
- given initial capacity, or, if 0, the default granularity size. It\r
- returns null if there is no system memory available to create the\r
- space. If argument locked is non-zero, the space uses a separate\r
- lock to control access. The capacity of the space will grow\r
- dynamically as needed to service mspace_malloc requests. You can\r
- control the sizes of incremental increases of this space by\r
- compiling with a different DEFAULT_GRANULARITY or dynamically\r
- setting with mallopt(M_GRANULARITY, value).\r
-*/\r
-mspace create_mspace(size_t capacity, int locked);\r
-\r
-/*\r
- destroy_mspace destroys the given space, and attempts to return all\r
- of its memory back to the system, returning the total number of\r
- bytes freed. After destruction, the results of access to all memory\r
- used by the space become undefined.\r
-*/\r
-size_t destroy_mspace(mspace msp);\r
-\r
-/*\r
- create_mspace_with_base uses the memory supplied as the initial base\r
- of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this\r
- space is used for bookkeeping, so the capacity must be at least this\r
- large. (Otherwise 0 is returned.) When this initial space is\r
- exhausted, additional memory will be obtained from the system.\r
- Destroying this space will deallocate all additionally allocated\r
- space (if possible) but not the initial base.\r
-*/\r
-mspace create_mspace_with_base(void* base, size_t capacity, int locked);\r
-\r
-/*\r
- mspace_malloc behaves as malloc, but operates within\r
- the given space.\r
-*/\r
-void* mspace_malloc(mspace msp, size_t bytes);\r
-\r
-/*\r
- mspace_free behaves as free, but operates within\r
- the given space.\r
-\r
- If compiled with FOOTERS==1, mspace_free is not actually needed.\r
- free may be called instead of mspace_free because freed chunks from\r
- any space are handled by their originating spaces.\r
-*/\r
-void mspace_free(mspace msp, void* mem);\r
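-\r
-/*\r
- A minimal lifecycle sketch using the calls above (assumes MSPACES):\r
-\r
- mspace arena = create_mspace(0, 0); // default capacity, no locking\r
- if (arena != 0) {\r
- void* p = mspace_malloc(arena, 128);\r
- if (p != 0)\r
- mspace_free(arena, p);\r
- destroy_mspace(arena); // releases everything still allocated\r
- }\r
-*/\r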
-\r
-/*\r
- mspace_realloc behaves as realloc, but operates within\r
- the given space.\r
-\r
- If compiled with FOOTERS==1, mspace_realloc is not actually\r
- needed. realloc may be called instead of mspace_realloc because\r
- realloced chunks from any space are handled by their originating\r
- spaces.\r
-*/\r
-void* mspace_realloc(mspace msp, void* mem, size_t newsize);\r
-\r
-/*\r
- mspace_calloc behaves as calloc, but operates within\r
- the given space.\r
-*/\r
-void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);\r
-\r
-/*\r
- mspace_memalign behaves as memalign, but operates within\r
- the given space.\r
-*/\r
-void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);\r
-\r
-/*\r
- mspace_independent_calloc behaves as independent_calloc, but\r
- operates within the given space.\r
-*/\r
-void** mspace_independent_calloc(mspace msp, size_t n_elements,\r
- size_t elem_size, void* chunks[]);\r
-\r
-/*\r
- mspace_independent_comalloc behaves as independent_comalloc, but\r
- operates within the given space.\r
-*/\r
-void** mspace_independent_comalloc(mspace msp, size_t n_elements,\r
- size_t sizes[], void* chunks[]);\r
-\r
-/*\r
- mspace_footprint() returns the number of bytes obtained from the\r
- system for this space.\r
-*/\r
-size_t mspace_footprint(mspace msp);\r
-\r
-/*\r
- mspace_max_footprint() returns the peak number of bytes obtained from the\r
- system for this space.\r
-*/\r
-size_t mspace_max_footprint(mspace msp);\r
-\r
-\r
-#if !NO_MALLINFO\r
-/*\r
- mspace_mallinfo behaves as mallinfo, but reports properties of\r
- the given space.\r
-*/\r
-struct mallinfo mspace_mallinfo(mspace msp);\r
-#endif /* NO_MALLINFO */\r
-\r
-/*\r
- mspace_malloc_stats behaves as malloc_stats, but reports\r
- properties of the given space.\r
-*/\r
-void mspace_malloc_stats(mspace msp);\r
-\r
-/*\r
- mspace_trim behaves as malloc_trim, but\r
- operates within the given space.\r
-*/\r
-int mspace_trim(mspace msp, size_t pad);\r
-\r
-/*\r
- An alias for mallopt.\r
-*/\r
-int mspace_mallopt(int, int);\r
-\r
-#endif /* MSPACES */\r
-\r
-#ifdef __cplusplus\r
-}; /* end of extern "C" */\r
-#endif /* __cplusplus */\r
-\r
-/*\r
- ========================================================================\r
- To make a fully customizable malloc.h header file, cut everything\r
- above this line, put into file malloc.h, edit to suit, and #include it\r
- on the next line, as well as in programs that use this malloc.\r
- ========================================================================\r
-*/\r
-\r
-/* #include "malloc.h" */\r
-\r
-/*------------------------------ internal #includes ---------------------- */\r
-\r
-#ifdef WIN32\r
-#pragma warning( disable : 4146 ) /* no "unsigned" warnings */\r
-#endif /* WIN32 */\r
-\r
-#include <stdio.h> /* for printing in malloc_stats */\r
-\r
-#ifndef LACKS_ERRNO_H\r
-#include <errno.h> /* for MALLOC_FAILURE_ACTION */\r
-#endif /* LACKS_ERRNO_H */\r
-#if FOOTERS\r
-#include <time.h> /* for magic initialization */\r
-#endif /* FOOTERS */\r
-#ifndef LACKS_STDLIB_H\r
-#include <stdlib.h> /* for abort() */\r
-#endif /* LACKS_STDLIB_H */\r
-#ifdef DEBUG\r
-#if ABORT_ON_ASSERT_FAILURE\r
-#define assert(x) if(!(x)) ABORT\r
-#else /* ABORT_ON_ASSERT_FAILURE */\r
-#include <assert.h>\r
-#endif /* ABORT_ON_ASSERT_FAILURE */\r
-#else /* DEBUG */\r
-#define assert(x)\r
-#endif /* DEBUG */\r
-#ifndef LACKS_STRING_H\r
-#include <string.h> /* for memset etc */\r
-#endif /* LACKS_STRING_H */\r
-#if USE_BUILTIN_FFS\r
-#ifndef LACKS_STRINGS_H\r
-#include <strings.h> /* for ffs */\r
-#endif /* LACKS_STRINGS_H */\r
-#endif /* USE_BUILTIN_FFS */\r
-#if HAVE_MMAP\r
-#ifndef LACKS_SYS_MMAN_H\r
-#include <sys/mman.h> /* for mmap */\r
-#endif /* LACKS_SYS_MMAN_H */\r
-#ifndef LACKS_FCNTL_H\r
-#include <fcntl.h>\r
-#endif /* LACKS_FCNTL_H */\r
-#endif /* HAVE_MMAP */\r
-#if HAVE_MORECORE\r
-#ifndef LACKS_UNISTD_H\r
-#include <unistd.h> /* for sbrk */\r
-#else /* LACKS_UNISTD_H */\r
-#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)\r
-extern void* sbrk(ptrdiff_t);\r
-#endif /* FreeBSD etc */\r
-#endif /* LACKS_UNISTD_H */\r
-#endif /* HAVE_MORECORE */\r
-\r
-#ifndef WIN32\r
-#ifndef malloc_getpagesize\r
-# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */\r
-# ifndef _SC_PAGE_SIZE\r
-# define _SC_PAGE_SIZE _SC_PAGESIZE\r
-# endif\r
-# endif\r
-# ifdef _SC_PAGE_SIZE\r
-# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)\r
-# else\r
-# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)\r
- extern size_t getpagesize();\r
-# define malloc_getpagesize getpagesize()\r
-# else\r
-# ifdef WIN32 /* use supplied emulation of getpagesize */\r
-# define malloc_getpagesize getpagesize()\r
-# else\r
-# ifndef LACKS_SYS_PARAM_H\r
-# include <sys/param.h>\r
-# endif\r
-# ifdef EXEC_PAGESIZE\r
-# define malloc_getpagesize EXEC_PAGESIZE\r
-# else\r
-# ifdef NBPG\r
-# ifndef CLSIZE\r
-# define malloc_getpagesize NBPG\r
-# else\r
-# define malloc_getpagesize (NBPG * CLSIZE)\r
-# endif\r
-# else\r
-# ifdef NBPC\r
-# define malloc_getpagesize NBPC\r
-# else\r
-# ifdef PAGESIZE\r
-# define malloc_getpagesize PAGESIZE\r
-# else /* just guess */\r
-# define malloc_getpagesize ((size_t)4096U)\r
-# endif\r
-# endif\r
-# endif\r
-# endif\r
-# endif\r
-# endif\r
-# endif\r
-#endif\r
-#endif\r
-\r
-/* ------------------- size_t and alignment properties -------------------- */\r
-\r
-/* The byte and bit size of a size_t */\r
-#define SIZE_T_SIZE (sizeof(size_t))\r
-#define SIZE_T_BITSIZE (sizeof(size_t) << 3)\r
-\r
-/* Some constants coerced to size_t */\r
-/* Annoying but necessary to avoid errors on some platforms */\r
-#define SIZE_T_ZERO ((size_t)0)\r
-#define SIZE_T_ONE ((size_t)1)\r
-#define SIZE_T_TWO ((size_t)2)\r
-#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)\r
-#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)\r
-#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)\r
-#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)\r
-\r
-/* The bit mask value corresponding to MALLOC_ALIGNMENT */\r
-#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)\r
-\r
-/* True if address a has acceptable alignment */\r
-#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)\r
-\r
-/* the number of bytes to offset an address to align it */\r
-#define align_offset(A)\\r
- ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\\r
- ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))\r
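-\r
-/*\r
- Worked example with the default MALLOC_ALIGNMENT of 8, so\r
- CHUNK_ALIGN_MASK == 7:\r
- align_offset(0x1000) == 0 (already 8-byte aligned)\r
- align_offset(0x1003) == 5 (0x1003 + 5 == 0x1008)\r
-*/\r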
-\r
-/* -------------------------- MMAP preliminaries ------------------------- */\r
-\r
-/*\r
- If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and\r
- checks to fail so compiler optimizer can delete code rather than\r
- using so many "#if"s.\r
-*/\r
-\r
-\r
-/* MORECORE and MMAP must return MFAIL on failure */\r
-#define MFAIL ((void*)(MAX_SIZE_T))\r
-#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */\r
-\r
-#if !HAVE_MMAP\r
-#define IS_MMAPPED_BIT (SIZE_T_ZERO)\r
-#define USE_MMAP_BIT (SIZE_T_ZERO)\r
-#define CALL_MMAP(s) MFAIL\r
-#define CALL_MUNMAP(a, s) (-1)\r
-#define DIRECT_MMAP(s) MFAIL\r
-\r
-#else /* HAVE_MMAP */\r
-#define IS_MMAPPED_BIT (SIZE_T_ONE)\r
-#define USE_MMAP_BIT (SIZE_T_ONE)\r
-\r
-#ifndef WIN32\r
-#define CALL_MUNMAP(a, s) munmap((a), (s))\r
-#define MMAP_PROT (PROT_READ|PROT_WRITE)\r
-#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)\r
-#define MAP_ANONYMOUS MAP_ANON\r
-#endif /* MAP_ANON */\r
-#ifdef MAP_ANONYMOUS\r
-#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)\r
-#define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)\r
-#else /* MAP_ANONYMOUS */\r
-/*\r
- Nearly all versions of mmap support MAP_ANONYMOUS, so the following\r
- is unlikely to be needed, but is supplied just in case.\r
-*/\r
-#define MMAP_FLAGS (MAP_PRIVATE)\r
-static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */\r
-#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \\r
- (dev_zero_fd = open("/dev/zero", O_RDWR), \\r
- mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \\r
- mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))\r
-#endif /* MAP_ANONYMOUS */\r
-\r
-#define DIRECT_MMAP(s) CALL_MMAP(s)\r
-#else /* WIN32 */\r
-\r
-/* Win32 MMAP via VirtualAlloc */\r
-static FORCEINLINE void* win32mmap(size_t size) {\r
- void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);\r
- return (ptr != 0)? ptr: MFAIL;\r
-}\r
-\r
-/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */\r
-static FORCEINLINE void* win32direct_mmap(size_t size) {\r
- void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,\r
- PAGE_READWRITE);\r
- return (ptr != 0)? ptr: MFAIL;\r
-}\r
-\r
-/* This function supports releasing coalesced segments */\r
-static FORCEINLINE int win32munmap(void* ptr, size_t size) {\r
- MEMORY_BASIC_INFORMATION minfo;\r
- char* cptr = (char *) ptr;\r
- while (size) {\r
- if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)\r
- return -1;\r
- if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||\r
- minfo.State != MEM_COMMIT || minfo.RegionSize > size)\r
- return -1;\r
- if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)\r
- return -1;\r
- cptr += minfo.RegionSize;\r
- size -= minfo.RegionSize;\r
- }\r
- return 0;\r
-}\r
-\r
-#define CALL_MMAP(s) win32mmap(s)\r
-#define CALL_MUNMAP(a, s) win32munmap((a), (s))\r
-#define DIRECT_MMAP(s) win32direct_mmap(s)\r
-#endif /* WIN32 */\r
-#endif /* HAVE_MMAP */\r
-\r
-#if HAVE_MMAP && HAVE_MREMAP\r
-#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))\r
-#else /* HAVE_MMAP && HAVE_MREMAP */\r
-#define CALL_MREMAP(addr, osz, nsz, mv) ((void)(addr),(void)(osz), \\r
- (void)(nsz), (void)(mv),MFAIL)\r
-#endif /* HAVE_MMAP && HAVE_MREMAP */\r
-\r
-#if HAVE_MORECORE\r
-#define CALL_MORECORE(S) MORECORE(S)\r
-#else /* HAVE_MORECORE */\r
-#define CALL_MORECORE(S) MFAIL\r
-#endif /* HAVE_MORECORE */\r
-\r
-/* mstate bit set if contiguous morecore disabled or failed */\r
-#define USE_NONCONTIGUOUS_BIT (4U)\r
-\r
-/* segment bit set in create_mspace_with_base */\r
-#define EXTERN_BIT (8U)\r
-\r
-\r
-/* --------------------------- Lock preliminaries ------------------------ */\r
-\r
-#if USE_LOCKS\r
-\r
-#if USE_LOCKS==1\r
-\r
-/*\r
- When locks are defined, there are up to two global locks:\r
-\r
- * If HAVE_MORECORE, morecore_mutex protects sequences of calls to\r
- MORECORE. In many cases sys_alloc requires two calls that should\r
- not be interleaved with calls by other threads. This does not\r
- protect against direct calls to MORECORE by other threads not\r
- using this lock, so there is still code to cope as best we can with\r
- interference.\r
-\r
- * magic_init_mutex ensures that mparams.magic and other\r
- unique mparams values are initialized only once.\r
-*/\r
-\r
-#ifndef WIN32\r
-/* By default use posix locks */\r
-#include <pthread.h>\r
-\r
-#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))\r
-/* Go for assembler spin locks on x86 and x64 */\r
-struct pthread_mlock_t\r
-{\r
- volatile pthread_t threadid;\r
- volatile unsigned int c;\r
- volatile unsigned int l;\r
-};\r
-#define MLOCK_T struct pthread_mlock_t\r
-#define CURRENT_THREAD pthread_self()\r
-static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {\r
- if(CURRENT_THREAD==sl->threadid)\r
- ++sl->c;\r
- else for (;;) {\r
- int ret;\r
- __asm__ __volatile__ ("pause\n\tlock cmpxchgl %2,(%1)" : "=a" (ret) : "r" (&sl->l), "r" (1), "a" (0));\r
- if(!ret) {\r
- assert(!sl->threadid);\r
- sl->threadid=CURRENT_THREAD;\r
- sl->c=1;\r
- break;\r
- }\r
- sched_yield();\r
- }\r
-\r
- return 0;\r
-}\r
-\r
-static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {\r
- int ret;\r
- assert(CURRENT_THREAD==sl->threadid);\r
- if (!--sl->c) {\r
- sl->threadid=0;\r
- __asm__ __volatile__ ("xchgl %2,(%1)" : "=r" (ret) : "r" (&sl->l), "0" (0));\r
- }\r
-}\r
-\r
-static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {\r
- int ret;\r
- __asm__ __volatile__ ("pause\n\tlock cmpxchgl %2,(%1)" : "=a" (ret) : "r" (&sl->l), "r" (1), "a" (0));\r
- if(!ret){\r
- assert(!sl->threadid);\r
- sl->threadid=CURRENT_THREAD;\r
- sl->c=1;\r
- return 1;\r
- }\r
- return 0;\r
-}\r
-\r
-#define INITIAL_LOCK(sl) (memset((sl), 0, sizeof(MLOCK_T)), 0)\r
-#define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)\r
-#define RELEASE_LOCK(sl) pthread_release_lock(sl)\r
-#define TRY_LOCK(sl) pthread_try_lock(sl)\r
-#define IS_LOCKED(sl) ((sl)->l)\r
-\r
-#if HAVE_MORECORE\r
-static MLOCK_T morecore_mutex = {0, 0 };\r
-#endif /* HAVE_MORECORE */\r
-\r
-static MLOCK_T magic_init_mutex = {0, 0 };\r
-\r
-#else\r
-\r
-/* The POSIX threads implementation */\r
-struct pthread_mlock_t\r
-{\r
- volatile unsigned int c;\r
- pthread_mutex_t l;\r
-};\r
-#define MLOCK_T struct pthread_mlock_t\r
-#define CURRENT_THREAD pthread_self()\r
-static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {\r
- if(!pthread_mutex_lock(&(sl)->l)){\r
- sl->c++;\r
- return 0;\r
- }\r
- return 1;\r
-}\r
-\r
-static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {\r
- --sl->c;\r
- pthread_mutex_unlock(&(sl)->l);\r
-}\r
-\r
-static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {\r
- if(!pthread_mutex_trylock(&(sl)->l)){\r
- sl->c++;\r
- return 1;\r
- }\r
- return 0;\r
-}\r
-\r
-static FORCEINLINE int pthread_init_lock (MLOCK_T *sl) {\r
- pthread_mutexattr_t attr;\r
- sl->c=0;\r
- if(pthread_mutexattr_init(&attr)) return 1;\r
- if(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;\r
- if(pthread_mutex_init(&sl->l, &attr)) return 1;\r
- pthread_mutexattr_destroy(&attr);\r
- return 0;\r
-}\r
-\r
-static FORCEINLINE int pthread_islocked (MLOCK_T *sl) {\r
-#if 0 /* This will be faster on SMP systems which enforce cache consistency */\r
- return sl->c!=0;\r
-#else\r
- /* Doing this correctly portably means inefficient code :( */\r
- if(pthread_try_lock(sl)){\r
- /* Acquired: it was free unless we already held it recursively (c > 1) */\r
- int ret=(sl->c!=1);\r
- pthread_release_lock(sl);\r
- return ret;\r
- }\r
- return 1; /* trylock failed: held by another thread */\r
-#endif\r
-}\r
-\r
-\r
-#define INITIAL_LOCK(sl) pthread_init_lock(sl)\r
-#define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)\r
-#define RELEASE_LOCK(sl) pthread_release_lock(sl)\r
-#define TRY_LOCK(sl) pthread_try_lock(sl)\r
-#define IS_LOCKED(sl) pthread_islocked(sl)\r
-\r
-#if HAVE_MORECORE\r
-static MLOCK_T morecore_mutex = {0, PTHREAD_MUTEX_INITIALIZER };\r
-#endif /* HAVE_MORECORE */\r
-\r
-static MLOCK_T magic_init_mutex = {0, PTHREAD_MUTEX_INITIALIZER };\r
-#endif /* __GNUC__ && i386 */\r
-\r
-#else /* WIN32 */\r
-/*\r
- Because lock-protected regions have bounded times, and there\r
- are no recursive lock calls, we can use simple spinlocks.\r
-*/\r
-\r
-#if defined(_MSC_VER) && _MSC_VER>=1310\r
-#if 1\r
- #ifndef _M_AMD64\r
- // These are already defined on AMD64 builds\r
- #ifdef __cplusplus\r
-extern "C" {\r
- #endif\r
- LONG __cdecl _InterlockedCompareExchange(LPLONG volatile Dest, LONG Exchange, LONG Comp);\r
- LONG __cdecl _InterlockedExchange(LPLONG volatile Target, LONG Value);\r
- #ifdef __cplusplus\r
-}\r
- #endif\r
- #endif\r
- // MSVC7.1 Intrinsics are far faster than win32 functions\r
- #pragma intrinsic (_InterlockedCompareExchange)\r
- #pragma intrinsic (_InterlockedExchange)\r
- #define interlockedcompareexchange _InterlockedCompareExchange\r
- #define interlockedexchange _InterlockedExchange\r
-#else\r
- #define interlockedcompareexchange InterlockedCompareExchange\r
- #define interlockedexchange InterlockedExchange\r
-#endif\r
-\r
-struct win32_mlock_t\r
-{\r
- volatile long threadid;\r
- volatile unsigned int c;\r
- long l;\r
-};\r
-#define MLOCK_T struct win32_mlock_t\r
-#define CURRENT_THREAD GetCurrentThreadId()\r
-static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {\r
- long mythreadid=CURRENT_THREAD;\r
- if(mythreadid==sl->threadid)\r
- ++sl->c;\r
- else for (;;) {\r
- if (!interlockedexchange(&sl->l, 1)) {\r
- assert(!sl->threadid);\r
- sl->threadid=mythreadid;\r
- sl->c=1;\r
- break;\r
- }\r
- /*YieldProcessor();*/\r
- }\r
-\r
- return 0;\r
-}\r
-\r
-static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {\r
- assert(CURRENT_THREAD==sl->threadid);\r
- if (!--sl->c) {\r
- sl->threadid=0;\r
- interlockedexchange (&sl->l, 0);\r
- }\r
-}\r
-\r
-static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {\r
- if (!interlockedexchange(&sl->l, 1)){\r
- assert(!sl->threadid);\r
- sl->threadid=CURRENT_THREAD;\r
- sl->c=1;\r
- return 1;\r
- }\r
- return 0;\r
-}\r
-\r
-\r
-#define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)\r
-#define ACQUIRE_LOCK(sl) win32_acquire_lock(sl)\r
-#define RELEASE_LOCK(sl) win32_release_lock(sl)\r
-#define TRY_LOCK(sl) win32_try_lock(sl)\r
-#define IS_LOCKED(sl) ((sl)->l)\r
-#else\r
-/* Critical section implementation */\r
-#define MLOCK_T CRITICAL_SECTION\r
-#define CURRENT_THREAD GetCurrentThreadId()\r
-#define INITIAL_LOCK(s) (!InitializeCriticalSectionAndSpinCount((s), 4000))\r
-#define ACQUIRE_LOCK(s) ( (!((s))->DebugInfo ? INITIAL_LOCK((s)) : 0), EnterCriticalSection((s)), 0)\r
-#define RELEASE_LOCK(s) ( LeaveCriticalSection((s)), 0 )\r
-#define TRY_LOCK(s) ( TryEnterCriticalSection((s)) )\r
-#define IS_LOCKED(s) ( (s)->LockCount >= 0 )\r
-#define NULL_LOCK_INITIALIZER\r
-\r
-#endif /* defined(_MSC_VER) && _MSC_VER>=1310 */\r
-#if HAVE_MORECORE\r
-static MLOCK_T morecore_mutex;\r
-#endif /* HAVE_MORECORE */\r
-static MLOCK_T magic_init_mutex;\r
-#endif /* WIN32 */\r
-\r
-#else\r
-/* User defines their own locks */\r
-#if HAVE_MORECORE\r
-static MLOCK_T morecore_mutex NULL_LOCK_INITIALIZER;\r
-#endif /* HAVE_MORECORE */\r
-static MLOCK_T magic_init_mutex NULL_LOCK_INITIALIZER;\r
-\r
-#endif /* USE_LOCKS==1 */\r
-\r
-#define USE_LOCK_BIT (2U)\r
-#else /* USE_LOCKS */\r
-#define USE_LOCK_BIT (0U)\r
-#define INITIAL_LOCK(l)\r
-#endif /* USE_LOCKS */\r
-\r
-#if USE_LOCKS && HAVE_MORECORE\r
-#define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);\r
-#define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);\r
-#else /* USE_LOCKS && HAVE_MORECORE */\r
-#define ACQUIRE_MORECORE_LOCK()\r
-#define RELEASE_MORECORE_LOCK()\r
-#endif /* USE_LOCKS && HAVE_MORECORE */\r
-\r
-#if USE_LOCKS\r
-#define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);\r
-#define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);\r
-#else /* USE_LOCKS */\r
-#define ACQUIRE_MAGIC_INIT_LOCK()\r
-#define RELEASE_MAGIC_INIT_LOCK()\r
-#endif /* USE_LOCKS */\r
-\r
-\r
-/* ----------------------- Chunk representations ------------------------ */\r
-\r
-/*\r
- (The following includes lightly edited explanations by Colin Plumb.)\r
-\r
- The malloc_chunk declaration below is misleading (but accurate and\r
- necessary). It declares a "view" into memory allowing access to\r
- necessary fields at known offsets from a given base.\r
-\r
- Chunks of memory are maintained using a `boundary tag' method as\r
- originally described by Knuth. (See the paper by Paul Wilson\r
- ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such\r
- techniques.) Sizes of free chunks are stored both in the front of\r
- each chunk and at the end. This makes consolidating fragmented\r
- chunks into bigger chunks fast. The head fields also hold bits\r
- representing whether chunks are free or in use.\r
-\r
- Here are some pictures to make it clearer. They are "exploded" to\r
- show that the state of a chunk can be thought of as extending from\r
- the high 31 bits of the head field of its header through the\r
- prev_foot and PINUSE_BIT bit of the following chunk header.\r
-\r
- A chunk that's in use looks like:\r
-\r
- chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Size of previous chunk (if P = 0) |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|\r
- | Size of this chunk 1| +-+\r
- mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | |\r
- +- -+\r
- | |\r
- +- -+\r
- | :\r
- +- size - sizeof(size_t) available payload bytes -+\r
- : |\r
- chunk-> +- -+\r
- | |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|\r
- | Size of next chunk (may or may not be in use) | +-+\r
- mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
-\r
- And if it's free, it looks like this:\r
-\r
- chunk-> +- -+\r
- | User payload (must be in use, or we would have merged!) |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|\r
- | Size of this chunk 0| +-+\r
- mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Next pointer |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Prev pointer |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | :\r
- +- size - sizeof(struct chunk) unused bytes -+\r
- : |\r
- chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Size of this chunk |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|\r
- | Size of next chunk (must be in use, or we would have merged)| +-+\r
- mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | :\r
- +- User payload -+\r
- : |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- |0|\r
- +-+\r
- Note that since we always merge adjacent free chunks, the chunks\r
- adjacent to a free chunk must be in use.\r
-\r
- Given a pointer to a chunk (which can be derived trivially from the\r
- payload pointer) we can, in O(1) time, find out whether the adjacent\r
- chunks are free, and if so, unlink them from the lists that they\r
- are on and merge them with the current chunk.\r
-\r
- Chunks always begin on even word boundaries, so the mem portion\r
- (which is returned to the user) is also on an even word boundary, and\r
- thus at least double-word aligned.\r
-\r
- The P (PINUSE_BIT) bit, stored in the unused low-order bit of the\r
- chunk size (which is always a multiple of two words), is an in-use\r
- bit for the *previous* chunk. If that bit is *clear*, then the\r
- word before the current chunk size contains the previous chunk\r
- size, and can be used to find the front of the previous chunk.\r
- The very first chunk allocated always has this bit set, preventing\r
- access to non-existent (or non-owned) memory. If pinuse is set for\r
- any given chunk, then you CANNOT determine the size of the\r
- previous chunk, and might even get a memory addressing fault when\r
- trying to do so.\r
-\r
- The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of\r
- the chunk size redundantly records whether the current chunk is\r
- inuse. This redundancy enables usage checks within free and realloc,\r
- and reduces indirection when freeing and consolidating chunks.\r
-\r
- Each freshly allocated chunk must have both cinuse and pinuse set.\r
- That is, each allocated chunk borders either a previously allocated\r
- and still in-use chunk, or the base of its memory arena. This is\r
- ensured by making all allocations from the `lowest' part of any\r
- found chunk. Further, no free chunk physically borders another one,\r
- so each free chunk is known to be preceded and followed by either\r
- inuse chunks or the ends of memory.\r
-\r
- Note that the `foot' of the current chunk is actually represented\r
- as the prev_foot of the NEXT chunk. This makes it easier to\r
- deal with alignments etc but can be very confusing when trying\r
- to extend or adapt this code.\r
-\r
- The exceptions to all this are\r
-\r
- 1. The special chunk `top' is the top-most available chunk (i.e.,\r
- the one bordering the end of available memory). It is treated\r
- specially. Top is never included in any bin, is used only if\r
- no other chunk is available, and is released back to the\r
- system if it is very large (see M_TRIM_THRESHOLD). In effect,\r
- the top chunk is treated as larger (and thus less well\r
- fitting) than any other available chunk. The top chunk\r
- doesn't update its trailing size field since there is no next\r
- contiguous chunk that would have to index off it. However,\r
- space is still allocated for it (TOP_FOOT_SIZE) to enable\r
- separation or merging when space is extended.\r
-\r
- 2. Chunks allocated via mmap, which have the lowest-order bit\r
- (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set\r
- PINUSE_BIT in their head fields. Because they are allocated\r
- one-by-one, each must carry its own prev_foot field, which is\r
- also used to hold the offset this chunk has within its mmapped\r
- region, which is needed to preserve alignment. Each mmapped\r
- chunk is trailed by the first two fields of a fake next-chunk\r
- for sake of usage checks.\r
-\r
-*/\r
-\r
-struct malloc_chunk {\r
- size_t prev_foot; /* Size of previous chunk (if free). */\r
- size_t head; /* Size and inuse bits. */\r
- struct malloc_chunk* fd; /* double links -- used only if free. */\r
- struct malloc_chunk* bk;\r
-};\r
-\r
-typedef struct malloc_chunk mchunk;\r
-typedef struct malloc_chunk* mchunkptr;\r
-typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */\r
-typedef unsigned int bindex_t; /* Described below */\r
-typedef unsigned int binmap_t; /* Described below */\r
-typedef unsigned int flag_t; /* The type of various bit flag sets */\r
-\r
-/* ------------------- Chunks sizes and alignments ----------------------- */\r
-\r
-#define MCHUNK_SIZE (sizeof(mchunk))\r
-\r
-#if FOOTERS\r
-#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)\r
-#else /* FOOTERS */\r
-#define CHUNK_OVERHEAD (SIZE_T_SIZE)\r
-#endif /* FOOTERS */\r
-\r
-/* MMapped chunks need a second word of overhead ... */\r
-#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)\r
-/* ... and additional padding for fake next-chunk at foot */\r
-#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)\r
-\r
-/* The smallest size we can malloc is an aligned minimal chunk */\r
-#define MIN_CHUNK_SIZE\\r
- ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)\r
-\r
-/* conversion from malloc headers to user pointers, and back */\r
-#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))\r
-#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))\r
-/* chunk associated with aligned address A */\r
-#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))\r
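-\r
-/*\r
- Note that chunk2mem and mem2chunk are exact inverses: the user\r
- pointer is simply two size_t words past the chunk base, so for any\r
- chunk pointer p, mem2chunk(chunk2mem(p)) == p.\r
-*/\r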
-\r
-/* Bounds on request (not chunk) sizes. */\r
-#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)\r
-#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)\r
-\r
-/* pad request bytes into a usable size */\r
-#define pad_request(req) \\r
- (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)\r
-\r
-/* pad request, checking for minimum (but not maximum) */\r
-#define request2size(req) \\r
- (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))\r
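-\r
-/*\r
- Worked example assuming 4-byte size_t and no FOOTERS, so that\r
- CHUNK_OVERHEAD == 4, CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE == 16,\r
- and MIN_REQUEST == 11:\r
- request2size(1) == 16 (below MIN_REQUEST, so the minimum chunk)\r
- request2size(13) == 24 ((13 + 4 + 7) & ~7)\r
- request2size(20) == 24 ((20 + 4 + 7) & ~7)\r
-*/\r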
-\r
-\r
-/* ------------------ Operations on head and foot fields ----------------- */\r
-\r
-/*\r
- The head field of a chunk is or'ed with PINUSE_BIT when the previous\r
- adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is\r
- in use. If the chunk was obtained with mmap, the prev_foot field has\r
- IS_MMAPPED_BIT set, with its remaining bits holding the offset from\r
- the base of the mmapped region to the base of the chunk.\r
-*/\r
-\r
-#define PINUSE_BIT (SIZE_T_ONE)\r
-#define CINUSE_BIT (SIZE_T_TWO)\r
-#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)\r
-\r
-/* Head value for fenceposts */\r
-#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)\r
-\r
-/* extraction of fields from head words */\r
-#define cinuse(p) ((p)->head & CINUSE_BIT)\r
-#define pinuse(p) ((p)->head & PINUSE_BIT)\r
-#define chunksize(p) ((p)->head & ~(INUSE_BITS))\r
-\r
-#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)\r
-#define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)\r
-\r
-/* Treat space at ptr +/- offset as a chunk */\r
-#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))\r
-#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))\r
-\r
-/* Ptr to next or previous physical malloc_chunk. */\r
-#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))\r
-#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))\r
-\r
-/* extract next chunk's pinuse bit */\r
-#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)\r
-\r
-/* Get/set size at footer */\r
-#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)\r
-#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))\r
-\r
-/* Set size, pinuse bit, and foot */\r
-#define set_size_and_pinuse_of_free_chunk(p, s)\\r
- ((p)->head = (s|PINUSE_BIT), set_foot(p, s))\r
-\r
-/* Set size, pinuse bit, foot, and clear next pinuse */\r
-#define set_free_with_pinuse(p, s, n)\\r
- (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))\r
-\r
-#define is_mmapped(p)\\r
- (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))\r
-\r
-/* Get the internal overhead associated with chunk p */\r
-#define overhead_for(p)\\r
- (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)\r
-\r
-/* Return true if malloced space is not necessarily cleared */\r
-#if MMAP_CLEARS\r
-#define calloc_must_clear(p) (!is_mmapped(p))\r
-#else /* MMAP_CLEARS */\r
-#define calloc_must_clear(p) (1)\r
-#endif /* MMAP_CLEARS */\r
-\r
-/* ---------------------- Overlaid data structures ----------------------- */\r
-\r
-/*\r
- When chunks are not in use, they are treated as nodes of either\r
- lists or trees.\r
-\r
- "Small" chunks are stored in circular doubly-linked lists, and look\r
- like this:\r
-\r
- chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Size of previous chunk |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- `head:' | Size of chunk, in bytes |P|\r
- mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Forward pointer to next chunk in list |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Back pointer to previous chunk in list |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Unused space (may be 0 bytes long) .\r
- . .\r
- . |\r
-nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- `foot:' | Size of chunk, in bytes |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
-\r
- Larger chunks are kept in a form of bitwise digital trees (aka\r
- tries) keyed on chunksizes. Because malloc_tree_chunks are only for\r
- free chunks greater than 256 bytes, their size doesn't impose any\r
- constraints on user chunk sizes. Each node looks like:\r
-\r
- chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Size of previous chunk |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- `head:' | Size of chunk, in bytes |P|\r
- mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Forward pointer to next chunk of same size |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Back pointer to previous chunk of same size |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Pointer to left child (child[0]) |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Pointer to right child (child[1]) |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Pointer to parent |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | bin index of this chunk |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- | Unused space .\r
- . |\r
-nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
- `foot:' | Size of chunk, in bytes |\r
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\r
-\r
- Each tree holding treenodes is a tree of unique chunk sizes. Chunks\r
- of the same size are arranged in a circularly-linked list, with only\r
- the oldest chunk (the next to be used, in our FIFO ordering)\r
- actually in the tree. (Tree members are distinguished by a non-null\r
- parent pointer.) If a chunk with the same size as an existing node\r
- is inserted, it is linked off the existing node using pointers that\r
- work in the same way as fd/bk pointers of small chunks.\r
-\r
- Each tree contains a power of 2 sized range of chunk sizes (the\r
- smallest is 0x100 <= x < 0x180), which is divided in half at each\r
- tree level, with the chunks in the smaller half of the range (0x100\r
- <= x < 0x140 for the top node) in the left subtree and the larger\r
- half (0x140 <= x < 0x180) in the right subtree. This is, of course,\r
- done by inspecting individual bits.\r
-\r
- Using these rules, each node's left subtree contains all smaller\r
- sizes than its right subtree. However, the node at the root of each\r
- subtree has no particular ordering relationship to either. (The\r
- dividing line between the subtree sizes is based on trie relation.)\r
- If we remove the last chunk of a given size from the interior of the\r
- tree, we need to replace it with a leaf node. The tree ordering\r
- rules permit a node to be replaced by any leaf below it.\r
-\r
- The smallest chunk in a tree (a common operation in a best-fit\r
- allocator) can be found by walking a path to the leftmost leaf in\r
- the tree. Unlike a usual binary tree, where we follow left child\r
- pointers until we reach a null, here we follow the right child\r
- pointer any time the left one is null, until we reach a leaf with\r
- both child pointers null. The smallest chunk in the tree will be\r
- somewhere along that path.\r
-\r
- The worst case number of steps to add, find, or remove a node is\r
- bounded by the number of bits differentiating chunks within\r
- bins. Under current bin calculations, this ranges from 6 up to 21\r
- (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case\r
- is of course much better.\r
-*/\r
-\r
-struct malloc_tree_chunk {\r
- /* The first four fields must be compatible with malloc_chunk */\r
- size_t prev_foot;\r
- size_t head;\r
- struct malloc_tree_chunk* fd;\r
- struct malloc_tree_chunk* bk;\r
-\r
- struct malloc_tree_chunk* child[2];\r
- struct malloc_tree_chunk* parent;\r
- bindex_t index;\r
-};\r
-\r
-typedef struct malloc_tree_chunk tchunk;\r
-typedef struct malloc_tree_chunk* tchunkptr;\r
-typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */\r
-\r
-/* A little helper macro for trees */\r
-#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])\r
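-\r
-/*\r
- A sketch (not part of the allocator proper) of the walk described\r
- above: follow child[0] when present, else child[1], until both are\r
- null; the smallest chunk lies along the visited path:\r
-\r
- tchunkptr t = root; // root of some treebin\r
- while (leftmost_child(t) != 0)\r
- t = leftmost_child(t);\r
- // t is the leaf ending the leftmost path\r
-*/\r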
-\r
-/* ----------------------------- Segments -------------------------------- */\r
-\r
-/*\r
- Each malloc space may include non-contiguous segments, held in a\r
- list headed by an embedded malloc_segment record representing the\r
- top-most space. Segments also include flags holding properties of\r
- the space. Large chunks that are directly allocated by mmap are not\r
- included in this list. They are instead independently created and\r
- destroyed without otherwise keeping track of them.\r
-\r
- Segment management mainly comes into play for spaces allocated by\r
- MMAP. Any call to MMAP might or might not return memory that is\r
- adjacent to an existing segment. MORECORE normally contiguously\r
- extends the current space, so this space is almost always adjacent,\r
- which is simpler and faster to deal with. (This is why MORECORE is\r
- used preferentially to MMAP when both are available -- see\r
- sys_alloc.) When allocating using MMAP, we don't use any of the\r
- hinting mechanisms (inconsistently) supported in various\r
- implementations of unix mmap, or distinguish reserving from\r
- committing memory. Instead, we just ask for space, and exploit\r
- contiguity when we get it. It is probably possible to do\r
- better than this on some systems, but no general scheme seems\r
- to be significantly better.\r
-\r
- Management entails a simpler variant of the consolidation scheme\r
- used for chunks to reduce fragmentation -- new adjacent memory is\r
- normally prepended or appended to an existing segment. However,\r
- there are limitations compared to chunk consolidation that mostly\r
- reflect the fact that segment processing is relatively infrequent\r
- (occurring only when getting memory from system) and that we\r
- don't expect to have huge numbers of segments:\r
-\r
- * Segments are not indexed, so traversal requires linear scans. (It\r
- would be possible to index these, but is not worth the extra\r
- overhead and complexity for most programs on most platforms.)\r
- * New segments are only appended to old ones when holding top-most\r
- memory; if they cannot be prepended to others, they are held in\r
- different segments.\r
-\r
- Except for the top-most segment of an mstate, each segment record\r
- is kept at the tail of its segment. Segments are added by pushing\r
- segment records onto the list headed by &mstate.seg for the\r
- containing mstate.\r
-\r
- Segment flags control allocation/merge/deallocation policies:\r
- * If EXTERN_BIT set, then we did not allocate this segment,\r
- and so should not try to deallocate or merge with others.\r
- (This currently holds only for the initial segment passed\r
- into create_mspace_with_base.)\r
- * If IS_MMAPPED_BIT set, the segment may be merged with\r
- other surrounding mmapped segments and trimmed/de-allocated\r
- using munmap.\r
- * If neither bit is set, then the segment was obtained using\r
- MORECORE so can be merged with surrounding MORECORE'd segments\r
- and deallocated/trimmed using MORECORE with negative arguments.\r
-*/\r
-\r
-struct malloc_segment {\r
- char* base; /* base address */\r
- size_t size; /* allocated size */\r
- struct malloc_segment* next; /* ptr to next segment */\r
- flag_t sflags; /* mmap and extern flag */\r
-};\r
-\r
-#define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)\r
-#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)\r
-\r
-typedef struct malloc_segment msegment;\r
-typedef struct malloc_segment* msegmentptr;\r
-\r
-/* ---------------------------- malloc_state ----------------------------- */\r
-\r
-/*\r
- A malloc_state holds all of the bookkeeping for a space.\r
- The main fields are:\r
-\r
- Top\r
- The topmost chunk of the currently active segment. Its size is\r
- cached in topsize. The actual size of topmost space is\r
- topsize+TOP_FOOT_SIZE, which includes space reserved for adding\r
- fenceposts and segment records if necessary when getting more\r
- space from the system. The size at which to autotrim top is\r
- cached from mparams in trim_check, except that it is disabled if\r
- an autotrim fails.\r
-\r
- Designated victim (dv)\r
- This is the preferred chunk for servicing small requests that\r
- don't have exact fits. It is normally the chunk split off most\r
- recently to service another small request. Its size is cached in\r
- dvsize. The link fields of this chunk are not maintained since it\r
- is not kept in a bin.\r
-\r
- SmallBins\r
- An array of bin headers for free chunks. These bins hold chunks\r
- with sizes less than MIN_LARGE_SIZE bytes. Each bin contains\r
- chunks of all the same size, spaced 8 bytes apart. To simplify\r
- use in double-linked lists, each bin header acts as a malloc_chunk\r
- pointing to the real first node, if it exists (else pointing to\r
- itself). This avoids special-casing for headers. But to avoid\r
- waste, we allocate only the fd/bk pointers of bins, and then use\r
- repositioning tricks to treat these as the fields of a chunk.\r
-\r
- TreeBins\r
- Treebins are pointers to the roots of trees holding a range of\r
- sizes. There are 2 equally spaced treebins for each power of two\r
- from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything\r
- larger.\r
-\r
- Bin maps\r
- There is one bit map for small bins ("smallmap") and one for\r
- treebins ("treemap"). Each bin sets its bit when non-empty, and\r
- clears the bit when empty. Bit operations are then used to avoid\r
- bin-by-bin searching -- nearly all "search" is done without ever\r
- looking at bins that won't be selected. The bit maps\r
- conservatively use 32 bits per map word, even on 64bit systems.\r
- For a good description of some of the bit-based techniques used\r
- here, see Henry S. Warren Jr's book "Hacker's Delight" (and\r
- supplement at http://hackersdelight.org/). Many of these are\r
- intended to reduce the branchiness of paths through malloc etc, as\r
- well as to reduce the number of memory locations read or written.\r
-\r
- Segments\r
- A list of segments headed by an embedded malloc_segment record\r
- representing the initial space.\r
-\r
- Address check support\r
- The least_addr field is the least address ever obtained from\r
- MORECORE or MMAP. Attempted frees and reallocs of any address less\r
- than this are trapped (unless INSECURE is defined).\r
-\r
- Magic tag\r
- A cross-check field that should always hold same value as mparams.magic.\r
-\r
- Flags\r
- Bits recording whether to use MMAP, locks, or contiguous MORECORE\r
-\r
- Statistics\r
- Each space keeps track of current and maximum system memory\r
- obtained via MORECORE or MMAP.\r
-\r
- Locking\r
- If USE_LOCKS is defined, the "mutex" lock is acquired and released\r
- around every public call using this mspace.\r
-*/\r
-\r
-/* Bin types, widths and sizes */\r
-#define NSMALLBINS (32U)\r
-#define NTREEBINS (32U)\r
-#define SMALLBIN_SHIFT (3U)\r
-#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)\r
-#define TREEBIN_SHIFT (8U)\r
-#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)\r
-#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)\r
-#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)\r
-\r
-struct malloc_state {\r
- binmap_t smallmap;\r
- binmap_t treemap;\r
- size_t dvsize;\r
- size_t topsize;\r
- char* least_addr;\r
- mchunkptr dv;\r
- mchunkptr top;\r
- size_t trim_check;\r
- size_t magic;\r
- mchunkptr smallbins[(NSMALLBINS+1)*2];\r
- tbinptr treebins[NTREEBINS];\r
- size_t footprint;\r
- size_t max_footprint;\r
- flag_t mflags;\r
- char cachesync[128]; /* pad: presumably keeps the mutable fields above off the cache line of the lock below */\r
-#if USE_LOCKS\r
- MLOCK_T mutex; /* locate lock among fields that rarely change */\r
-#endif /* USE_LOCKS */\r
- msegment seg;\r
- void* nedpool; /* Points back to nedpool owning this mstate */\r
-};\r
-\r
-typedef struct malloc_state* mstate;\r
-\r
-/* ------------- Global malloc_state and malloc_params ------------------- */\r
-\r
-/*\r
- malloc_params holds global properties, including those that can be\r
- dynamically set using mallopt. There is a single instance, mparams,\r
- initialized in init_mparams.\r
-*/\r
-\r
-struct malloc_params {\r
- size_t magic;\r
- size_t page_size;\r
- size_t granularity;\r
- size_t mmap_threshold;\r
- size_t trim_threshold;\r
- flag_t default_mflags;\r
-};\r
-\r
-static struct malloc_params mparams;\r
-\r
-/* The global malloc_state used for all non-"mspace" calls */\r
-static struct malloc_state _gm_;\r
-#define gm (&_gm_)\r
-#define is_global(M) ((M) == &_gm_)\r
-#define is_initialized(M) ((M)->top != 0)\r
-\r
-/* -------------------------- system alloc setup ------------------------- */\r
-\r
-/* Operations on mflags */\r
-\r
-#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)\r
-#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)\r
-#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)\r
-\r
-#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)\r
-#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)\r
-#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)\r
-\r
-#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)\r
-#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)\r
-\r
-#define set_lock(M,L)\\r
- ((M)->mflags = (L)?\\r
- ((M)->mflags | USE_LOCK_BIT) :\\r
- ((M)->mflags & ~USE_LOCK_BIT))\r
-\r
-/* page-align a size */\r
-#define page_align(S)\\r
- (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))\r
-\r
-/* granularity-align a size */\r
-#define granularity_align(S)\\r
- (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))\r
-\r
-#define is_page_aligned(S)\\r
- (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)\r
-#define is_granularity_aligned(S)\\r
- (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)\r
-\r
-/* True if segment S holds address A */\r
-#define segment_holds(S, A)\\r
- ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)\r
-\r
-/* Return segment holding given address */\r
-static msegmentptr segment_holding(mstate m, char* addr) {\r
- msegmentptr sp = &m->seg;\r
- for (;;) {\r
- if (addr >= sp->base && addr < sp->base + sp->size)\r
- return sp;\r
- if ((sp = sp->next) == 0)\r
- return 0;\r
- }\r
-}\r
-\r
-/* Return true if segment contains a segment link */\r
-static int has_segment_link(mstate m, msegmentptr ss) {\r
- msegmentptr sp = &m->seg;\r
- for (;;) {\r
- if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)\r
- return 1;\r
- if ((sp = sp->next) == 0)\r
- return 0;\r
- }\r
-}\r
-\r
-#ifndef MORECORE_CANNOT_TRIM\r
-#define should_trim(M,s) ((s) > (M)->trim_check)\r
-#else /* MORECORE_CANNOT_TRIM */\r
-#define should_trim(M,s) (0)\r
-#endif /* MORECORE_CANNOT_TRIM */\r
-\r
-/*\r
- TOP_FOOT_SIZE is padding at the end of a segment, including space\r
- that may be needed to place segment records and fenceposts when new\r
- noncontiguous segments are added.\r
-*/\r
-#define TOP_FOOT_SIZE\\r
- (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)\r
-\r
-\r
-/* ------------------------------- Hooks -------------------------------- */\r
-\r
-/*\r
- PREACTION should be defined to return 0 on success, and nonzero on\r
- failure. If you are not using locking, you can redefine these to do\r
- anything you like.\r
-*/\r
-\r
-#if USE_LOCKS\r
-\r
-/* Ensure locks are initialized */\r
-#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())\r
-\r
-#define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)\r
-#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }\r
-#else /* USE_LOCKS */\r
-\r
-#ifndef PREACTION\r
-#define PREACTION(M) (0)\r
-#endif /* PREACTION */\r
-\r
-#ifndef POSTACTION\r
-#define POSTACTION(M)\r
-#endif /* POSTACTION */\r
-\r
-#endif /* USE_LOCKS */\r
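-\r
-/*\r
- Public entry points wrap their bodies in these hooks in the standard\r
- pattern (sketch, with gm as the global malloc_state defined below):\r
-\r
- if (!PREACTION(gm)) {\r
- // ... allocate or free using state gm ...\r
- POSTACTION(gm);\r
- }\r
-*/\r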
-\r
-/*\r
- CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.\r
- USAGE_ERROR_ACTION is triggered on detected bad frees and\r
- reallocs. The argument p is an address that might have triggered the\r
- fault. It is ignored by the two predefined actions, but might be\r
- useful in custom actions that try to help diagnose errors.\r
-*/\r
-\r
-#if PROCEED_ON_ERROR\r
-\r
-/* A count of the number of corruption errors causing resets */\r
-int malloc_corruption_error_count;\r
-\r
-/* default corruption action */\r
-static void reset_on_error(mstate m);\r
-\r
-#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)\r
-#define USAGE_ERROR_ACTION(m, p)\r
-\r
-#else /* PROCEED_ON_ERROR */\r
-\r
-#ifndef CORRUPTION_ERROR_ACTION\r
-#define CORRUPTION_ERROR_ACTION(m) ABORT\r
-#endif /* CORRUPTION_ERROR_ACTION */\r
-\r
-#ifndef USAGE_ERROR_ACTION\r
-#define USAGE_ERROR_ACTION(m,p) ABORT\r
-#endif /* USAGE_ERROR_ACTION */\r
-\r
-#endif /* PROCEED_ON_ERROR */\r
-\r
-/* -------------------------- Debugging setup ---------------------------- */\r
-\r
-#if ! DEBUG\r
-\r
-#define check_free_chunk(M,P)\r
-#define check_inuse_chunk(M,P)\r
-#define check_malloced_chunk(M,P,N)\r
-#define check_mmapped_chunk(M,P)\r
-#define check_malloc_state(M)\r
-#define check_top_chunk(M,P)\r
-\r
-#else /* DEBUG */\r
-#define check_free_chunk(M,P) do_check_free_chunk(M,P)\r
-#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)\r
-#define check_top_chunk(M,P) do_check_top_chunk(M,P)\r
-#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)\r
-#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)\r
-#define check_malloc_state(M) do_check_malloc_state(M)\r
-\r
-static void do_check_any_chunk(mstate m, mchunkptr p);\r
-static void do_check_top_chunk(mstate m, mchunkptr p);\r
-static void do_check_mmapped_chunk(mstate m, mchunkptr p);\r
-static void do_check_inuse_chunk(mstate m, mchunkptr p);\r
-static void do_check_free_chunk(mstate m, mchunkptr p);\r
-static void do_check_malloced_chunk(mstate m, void* mem, size_t s);\r
-static void do_check_tree(mstate m, tchunkptr t);\r
-static void do_check_treebin(mstate m, bindex_t i);\r
-static void do_check_smallbin(mstate m, bindex_t i);\r
-static void do_check_malloc_state(mstate m);\r
-static int bin_find(mstate m, mchunkptr x);\r
-static size_t traverse_and_check(mstate m);\r
-#endif /* DEBUG */\r
-\r
-/* ---------------------------- Indexing Bins ---------------------------- */\r
-\r
-#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)\r
-#define small_index(s) ((s) >> SMALLBIN_SHIFT)\r
-#define small_index2size(i) ((i) << SMALLBIN_SHIFT)\r
-#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))\r
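-\r
-/*\r
- For example, with SMALLBIN_SHIFT == 3 each small bin holds a single\r
- chunk size, a multiple of 8 below 256: a 40-byte chunk has\r
- small_index(40) == 5, and small_index2size(5) == 40.\r
-*/\r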
-\r
-/* addressing by index. See above about smallbin repositioning */\r
-#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))\r
-#define treebin_at(M,i) (&((M)->treebins[i]))\r
-\r
-/* assign tree index for size S to variable I */\r
-#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))\r
-#define compute_tree_index(S, I)\\r
-{\\r
- unsigned int X = S >> TREEBIN_SHIFT;\\r
- if (X == 0)\\r
- I = 0;\\r
- else if (X > 0xFFFF)\\r
- I = NTREEBINS-1;\\r
- else {\\r
- unsigned int K;\\r
- __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "g" (X));\\r
- I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\\r
- }\\r
-}\r
-#elif defined(_MSC_VER) && _MSC_VER>=1300\r
- #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */\r
- #ifdef __cplusplus\r
-extern "C" {\r
- #endif\r
-unsigned char _BitScanForward(unsigned long *index, unsigned long mask);\r
-unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);\r
- #ifdef __cplusplus\r
-}\r
- #endif\r
- #define BitScanForward _BitScanForward\r
- #define BitScanReverse _BitScanReverse\r
- #pragma intrinsic(_BitScanForward)\r
- #pragma intrinsic(_BitScanReverse)\r
- #endif\r
-#define compute_tree_index(S, I)\\r
-{\\r
- size_t X = S >> TREEBIN_SHIFT;\\r
- if (X == 0)\\r
- I = 0;\\r
- else if (X > 0xFFFF)\\r
- I = NTREEBINS-1;\\r
- else {\\r
- unsigned int K;\\r
- _BitScanReverse((DWORD *) &K, X);\\r
- I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\\r
- }\\r
-}\r
-#else /* GNUC */\r
-#define compute_tree_index(S, I)\\r
-{\\r
- size_t X = S >> TREEBIN_SHIFT;\\r
- if (X == 0)\\r
- I = 0;\\r
- else if (X > 0xFFFF)\\r
- I = NTREEBINS-1;\\r
- else {\\r
- unsigned int Y = (unsigned int)X;\\r
- unsigned int N = ((Y - 0x100) >> 16) & 8;\\r
- unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\\r
- N += K;\\r
- N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\\r
- K = 14 - N + ((Y <<= K) >> 15);\\r
- I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\\r
- }\\r
-}\r
-#endif /* GNUC */\r
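-\r
-/*\r
- A worked example, assuming the default TREEBIN_SHIFT of 8: for\r
- S == 0x6000, X = S >> 8 = 96, whose highest set bit is K = 6, so\r
- I = (6 << 1) + ((S >> (6 + 7)) & 1) = 12 + 1 = 13. The portable\r
- fallback above arrives at the same K by successive range halving.\r
-*/\r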
-\r
-/* Bit representing maximum resolved size in a treebin at i */\r
-#define bit_for_tree_index(i) \\r
- (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)\r
-\r
-/* Shift placing maximum resolved bit in a treebin at i as sign bit */\r
-#define leftshift_for_tree_index(i) \\r
- ((i == NTREEBINS-1)? 0 : \\r
- ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))\r
-\r
-/* The size of the smallest chunk held in bin with index i */\r
-#define minsize_for_tree_index(i) \\r
- ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \\r
- (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))\r
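-\r
-/*\r
- Continuing the example above: minsize_for_tree_index(13) ==\r
- (1 << (6 + 8)) | (1 << (6 + 8 - 1)) == 0x4000 | 0x2000 == 0x6000,\r
- confirming that 0x6000 is the smallest size binned at index 13.\r
-*/\r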
-\r
-\r
-/* ------------------------ Operations on bin maps ----------------------- */\r
-\r
-/* bit corresponding to given index */\r
-#define idx2bit(i) ((binmap_t)(1) << (i))\r
-\r
-/* Mark/Clear bits with given index */\r
-#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))\r
-#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))\r
-#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))\r
-\r
-#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))\r
-#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))\r
-#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))\r
-\r
-/* index corresponding to given bit */\r
-\r
-#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))\r
-#define compute_bit2idx(X, I)\\r
-{\\r
- unsigned int J;\\r
- __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "g" (X));\\r
- I = (bindex_t)J;\\r
-}\r
-#elif defined(_MSC_VER) && _MSC_VER>=1300\r
-#define compute_bit2idx(X, I)\\r
-{\\r
- unsigned int J;\\r
- _BitScanForward((DWORD *) &J, X);\\r
- I = (bindex_t)J;\\r
-}\r
-\r
-#else /* GNUC */\r
-#if USE_BUILTIN_FFS\r
-#define compute_bit2idx(X, I) I = ffs(X)-1\r
-\r
-#else /* USE_BUILTIN_FFS */\r
-#define compute_bit2idx(X, I)\\r
-{\\r
- unsigned int Y = X - 1;\\r
- unsigned int K = Y >> (16-4) & 16;\\r
- unsigned int N = K; Y >>= K;\\r
- N += K = Y >> (8-3) & 8; Y >>= K;\\r
- N += K = Y >> (4-2) & 4; Y >>= K;\\r
- N += K = Y >> (2-1) & 2; Y >>= K;\\r
- N += K = Y >> (1-0) & 1; Y >>= K;\\r
- I = (bindex_t)(N + Y);\\r
-}\r
-#endif /* USE_BUILTIN_FFS */\r
-#endif /* GNUC */\r
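-\r
-/*\r
- A worked example: for X == 0x20 (only bit 5 set), the portable\r
- version computes Y = 0x1F, the shift/mask steps accumulate N = 4\r
- and reduce Y to 1, so I = N + Y = 5, matching ffs(0x20) - 1.\r
-*/\r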
-\r
-/* isolate the least set bit of a bitmap */\r
-#define least_bit(x) ((x) & -(x))\r
-\r
-/* mask with all bits to left of least bit of x on */\r
-#define left_bits(x) ((x<<1) | -(x<<1))\r
-\r
-/* mask with all bits to left of or equal to least bit of x on */\r
-#define same_or_left_bits(x) ((x) | -(x))\r
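-\r
-/*\r
- Example: for x == 0x34 (binary 110100), least_bit(x) == 0x04;\r
- left_bits(x) turns on every bit strictly above bit 2, and\r
- same_or_left_bits(x) additionally keeps bit 2 itself.\r
-*/\r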
-\r
-\r
-/* ----------------------- Runtime Check Support ------------------------- */\r
-\r
-/*\r
- For security, the main invariant is that malloc/free/etc never\r
- writes to a static address other than malloc_state, unless static\r
- malloc_state itself has been corrupted, which cannot occur via\r
- malloc (because of these checks). In essence this means that we\r
- believe all pointers, sizes, maps etc held in malloc_state, but\r
- check all of those linked or offsetted from other embedded data\r
- structures. These checks are interspersed with main code in a way\r
- that tends to minimize their run-time cost.\r
-\r
- When FOOTERS is defined, in addition to range checking, we also\r
- verify footer fields of inuse chunks, which can be used to guarantee\r
- that the mstate controlling malloc/free is intact. This is a\r
- streamlined version of the approach described by William Robertson\r
- et al in "Run-time Detection of Heap-based Overflows" LISA'03\r
- http://www.usenix.org/events/lisa03/tech/robertson.html The footer\r
- of an inuse chunk holds the xor of its mstate and a random seed,\r
- that is checked upon calls to free() and realloc(). This is\r
- (probabilistically) unguessable from outside the program, but can be\r
- computed by any code successfully malloc'ing any chunk, so does not\r
- itself provide protection against code that has already broken\r
- security through some other means. Unlike Robertson et al, we\r
- always dynamically check addresses of all offset chunks (previous,\r
- next, etc). This turns out to be cheaper than relying on hashes.\r
-*/\r
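-\r
-/*\r
- A sketch of the FOOTERS round trip, restating the macros defined\r
- below: on allocation, mark_inuse_foot() stores\r
- ((size_t)m ^ mparams.magic) in the prev_foot field of the following\r
- chunk; on free/realloc, get_mstate_for() xors that value with\r
- mparams.magic to recover the mstate, and ok_magic() rejects the\r
- result if its magic field does not match mparams.magic.\r
-*/\r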
-\r
-#if !INSECURE\r
-/* Check if address a is at least as high as any from MORECORE or MMAP */\r
-#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)\r
-/* Check if address of next chunk n is higher than base chunk p */\r
-#define ok_next(p, n) ((char*)(p) < (char*)(n))\r
-/* Check if p has its cinuse bit on */\r
-#define ok_cinuse(p) cinuse(p)\r
-/* Check if p has its pinuse bit on */\r
-#define ok_pinuse(p) pinuse(p)\r
-\r
-#else /* !INSECURE */\r
-#define ok_address(M, a) (1)\r
-#define ok_next(b, n) (1)\r
-#define ok_cinuse(p) (1)\r
-#define ok_pinuse(p) (1)\r
-#endif /* !INSECURE */\r
-\r
-#if (FOOTERS && !INSECURE)\r
-/* Check if (alleged) mstate m has expected magic field */\r
-#define ok_magic(M) ((M)->magic == mparams.magic)\r
-#else /* (FOOTERS && !INSECURE) */\r
-#define ok_magic(M) (1)\r
-#endif /* (FOOTERS && !INSECURE) */\r
-\r
-\r
-/* In gcc, use __builtin_expect to minimize impact of checks */\r
-#if !INSECURE\r
-#if defined(__GNUC__) && __GNUC__ >= 3\r
-#define RTCHECK(e) __builtin_expect(e, 1)\r
-#else /* GNUC */\r
-#define RTCHECK(e) (e)\r
-#endif /* GNUC */\r
-#else /* !INSECURE */\r
-#define RTCHECK(e) (1)\r
-#endif /* !INSECURE */\r
-\r
-/* macros to set up inuse chunks with or without footers */\r
-\r
-#if !FOOTERS\r
-\r
-#define mark_inuse_foot(M,p,s)\r
-\r
-/* Set cinuse bit and pinuse bit of next chunk */\r
-#define set_inuse(M,p,s)\\r
- ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\\r
- ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)\r
-\r
-/* Set cinuse and pinuse of this chunk and pinuse of next chunk */\r
-#define set_inuse_and_pinuse(M,p,s)\\r
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\\r
- ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)\r
-\r
-/* Set size, cinuse and pinuse bit of this chunk */\r
-#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\\r
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))\r
-\r
-#else /* FOOTERS */\r
-\r
-/* Set foot of inuse chunk to be xor of mstate and seed */\r
-#define mark_inuse_foot(M,p,s)\\r
- (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))\r
-\r
-#define get_mstate_for(p)\\r
- ((mstate)(((mchunkptr)((char*)(p) +\\r
- (chunksize(p))))->prev_foot ^ mparams.magic))\r
-\r
-#define set_inuse(M,p,s)\\r
- ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\\r
- (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \\r
- mark_inuse_foot(M,p,s))\r
-\r
-#define set_inuse_and_pinuse(M,p,s)\\r
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\\r
- (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\\r
- mark_inuse_foot(M,p,s))\r
-\r
-#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\\r
- ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\\r
- mark_inuse_foot(M, p, s))\r
-\r
-#endif /* !FOOTERS */\r
-\r
-/* ---------------------------- setting mparams -------------------------- */\r
-\r
-/* Initialize mparams */\r
-static int init_mparams(void) {\r
- if (mparams.page_size == 0) {\r
- size_t s;\r
-\r
- mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;\r
- mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;\r
-#if MORECORE_CONTIGUOUS\r
- mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;\r
-#else /* MORECORE_CONTIGUOUS */\r
- mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;\r
-#endif /* MORECORE_CONTIGUOUS */\r
-\r
-#if (FOOTERS && !INSECURE)\r
- {\r
-#if USE_DEV_RANDOM\r
- int fd;\r
- unsigned char buf[sizeof(size_t)];\r
- /* Try to use /dev/urandom, else fall back on using time */\r
- if ((fd = open("/dev/urandom", O_RDONLY)) >= 0) {\r
- if (read(fd, buf, sizeof(buf)) == sizeof(buf))\r
- s = *((size_t *) buf);\r
- else /* short read: fall back on time, but do not leak the fd */\r
- s = (size_t)(time(0) ^ (size_t)0x55555555U);\r
- close(fd);\r
- }\r
- else\r
-#endif /* USE_DEV_RANDOM */\r
- s = (size_t)(time(0) ^ (size_t)0x55555555U);\r
-\r
- s |= (size_t)8U; /* ensure nonzero */\r
- s &= ~(size_t)7U; /* improve chances of fault for bad values */\r
-\r
- }\r
-#else /* (FOOTERS && !INSECURE) */\r
- s = (size_t)0x58585858U;\r
-#endif /* (FOOTERS && !INSECURE) */\r
- ACQUIRE_MAGIC_INIT_LOCK();\r
- if (mparams.magic == 0) {\r
- mparams.magic = s;\r
- /* Set up lock for main malloc area */\r
- INITIAL_LOCK(&gm->mutex);\r
- gm->mflags = mparams.default_mflags;\r
- }\r
- RELEASE_MAGIC_INIT_LOCK();\r
-\r
-#ifndef WIN32\r
- mparams.page_size = malloc_getpagesize;\r
- mparams.granularity = ((DEFAULT_GRANULARITY != 0)?\r
- DEFAULT_GRANULARITY : mparams.page_size);\r
-#else /* WIN32 */\r
- {\r
- SYSTEM_INFO system_info;\r
- GetSystemInfo(&system_info);\r
- mparams.page_size = system_info.dwPageSize;\r
- mparams.granularity = system_info.dwAllocationGranularity;\r
- }\r
-#endif /* WIN32 */\r
-\r
- /* Sanity-check configuration:\r
- size_t must be unsigned and as wide as pointer type.\r
- ints must be at least 4 bytes.\r
- alignment must be at least 8.\r
- Alignment, min chunk size, and page size must all be powers of 2.\r
- */\r
- if ((sizeof(size_t) != sizeof(char*)) ||\r
- (MAX_SIZE_T < MIN_CHUNK_SIZE) ||\r
- (sizeof(int) < 4) ||\r
- (MALLOC_ALIGNMENT < (size_t)8U) ||\r
- ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||\r
- ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||\r
- ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||\r
- ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))\r
- ABORT;\r
- }\r
- return 0;\r
-}\r
-\r
-/* support for mallopt */\r
-static int change_mparam(int param_number, int value) {\r
- size_t val = (size_t)value;\r
- init_mparams();\r
- switch(param_number) {\r
- case M_TRIM_THRESHOLD:\r
- mparams.trim_threshold = val;\r
- return 1;\r
- case M_GRANULARITY:\r
- if (val >= mparams.page_size && ((val & (val-1)) == 0)) {\r
- mparams.granularity = val;\r
- return 1;\r
- }\r
- else\r
- return 0;\r
- case M_MMAP_THRESHOLD:\r
- mparams.mmap_threshold = val;\r
- return 1;\r
- default:\r
- return 0;\r
- }\r
-}\r
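-\r
-/*\r
- Usage sketch: change_mparam backs the public mallopt entry point\r
- (dlmallopt) defined later in this file. For example,\r
- dlmallopt(M_GRANULARITY, 64 * 1024) succeeds (returns 1) only if the\r
- value is a power of two no smaller than the page size, and\r
- dlmallopt(M_TRIM_THRESHOLD, -1) effectively disables trimming by\r
- setting the threshold to MAX_SIZE_T.\r
-*/\r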
-\r
-#if DEBUG\r
-/* ------------------------- Debugging Support --------------------------- */\r
-\r
-/* Check properties of any chunk, whether free, inuse, mmapped etc */\r
-static void do_check_any_chunk(mstate m, mchunkptr p) {\r
- assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));\r
- assert(ok_address(m, p));\r
-}\r
-\r
-/* Check properties of top chunk */\r
-static void do_check_top_chunk(mstate m, mchunkptr p) {\r
- msegmentptr sp = segment_holding(m, (char*)p);\r
- size_t sz = chunksize(p);\r
- assert(sp != 0);\r
- assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));\r
- assert(ok_address(m, p));\r
- assert(sz == m->topsize);\r
- assert(sz > 0);\r
- assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);\r
- assert(pinuse(p));\r
- assert(!next_pinuse(p));\r
-}\r
-\r
-/* Check properties of (inuse) mmapped chunks */\r
-static void do_check_mmapped_chunk(mstate m, mchunkptr p) {\r
- size_t sz = chunksize(p);\r
- size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);\r
- assert(is_mmapped(p));\r
- assert(use_mmap(m));\r
- assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));\r
- assert(ok_address(m, p));\r
- assert(!is_small(sz));\r
- assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);\r
- assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);\r
- assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);\r
-}\r
-\r
-/* Check properties of inuse chunks */\r
-static void do_check_inuse_chunk(mstate m, mchunkptr p) {\r
- do_check_any_chunk(m, p);\r
- assert(cinuse(p));\r
- assert(next_pinuse(p));\r
- /* If not pinuse and not mmapped, previous chunk has OK offset */\r
- assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);\r
- if (is_mmapped(p))\r
- do_check_mmapped_chunk(m, p);\r
-}\r
-\r
-/* Check properties of free chunks */\r
-static void do_check_free_chunk(mstate m, mchunkptr p) {\r
- size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);\r
- mchunkptr next = chunk_plus_offset(p, sz);\r
- do_check_any_chunk(m, p);\r
- assert(!cinuse(p));\r
- assert(!next_pinuse(p));\r
- assert (!is_mmapped(p));\r
- if (p != m->dv && p != m->top) {\r
- if (sz >= MIN_CHUNK_SIZE) {\r
- assert((sz & CHUNK_ALIGN_MASK) == 0);\r
- assert(is_aligned(chunk2mem(p)));\r
- assert(next->prev_foot == sz);\r
- assert(pinuse(p));\r
- assert (next == m->top || cinuse(next));\r
- assert(p->fd->bk == p);\r
- assert(p->bk->fd == p);\r
- }\r
- else /* markers are always of size SIZE_T_SIZE */\r
- assert(sz == SIZE_T_SIZE);\r
- }\r
-}\r
-\r
-/* Check properties of malloced chunks at the point they are malloced */\r
-static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {\r
- if (mem != 0) {\r
- mchunkptr p = mem2chunk(mem);\r
- size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);\r
- do_check_inuse_chunk(m, p);\r
- assert((sz & CHUNK_ALIGN_MASK) == 0);\r
- assert(sz >= MIN_CHUNK_SIZE);\r
- assert(sz >= s);\r
- /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */\r
- assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));\r
- }\r
-}\r
-\r
-/* Check a tree and its subtrees. */\r
-static void do_check_tree(mstate m, tchunkptr t) {\r
- tchunkptr head = 0;\r
- tchunkptr u = t;\r
- bindex_t tindex = t->index;\r
- size_t tsize = chunksize(t);\r
- bindex_t idx;\r
- compute_tree_index(tsize, idx);\r
- assert(tindex == idx);\r
- assert(tsize >= MIN_LARGE_SIZE);\r
- assert(tsize >= minsize_for_tree_index(idx));\r
- assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));\r
-\r
- do { /* traverse through chain of same-sized nodes */\r
- do_check_any_chunk(m, ((mchunkptr)u));\r
- assert(u->index == tindex);\r
- assert(chunksize(u) == tsize);\r
- assert(!cinuse(u));\r
- assert(!next_pinuse(u));\r
- assert(u->fd->bk == u);\r
- assert(u->bk->fd == u);\r
- if (u->parent == 0) {\r
- assert(u->child[0] == 0);\r
- assert(u->child[1] == 0);\r
- }\r
- else {\r
- assert(head == 0); /* only one node on chain has parent */\r
- head = u;\r
- assert(u->parent != u);\r
- assert (u->parent->child[0] == u ||\r
- u->parent->child[1] == u ||\r
- *((tbinptr*)(u->parent)) == u);\r
- if (u->child[0] != 0) {\r
- assert(u->child[0]->parent == u);\r
- assert(u->child[0] != u);\r
- do_check_tree(m, u->child[0]);\r
- }\r
- if (u->child[1] != 0) {\r
- assert(u->child[1]->parent == u);\r
- assert(u->child[1] != u);\r
- do_check_tree(m, u->child[1]);\r
- }\r
- if (u->child[0] != 0 && u->child[1] != 0) {\r
- assert(chunksize(u->child[0]) < chunksize(u->child[1]));\r
- }\r
- }\r
- u = u->fd;\r
- } while (u != t);\r
- assert(head != 0);\r
-}\r
-\r
-/* Check all the chunks in a treebin. */\r
-static void do_check_treebin(mstate m, bindex_t i) {\r
- tbinptr* tb = treebin_at(m, i);\r
- tchunkptr t = *tb;\r
- int empty = (m->treemap & (1U << i)) == 0;\r
- if (t == 0)\r
- assert(empty);\r
- if (!empty)\r
- do_check_tree(m, t);\r
-}\r
-\r
-/* Check all the chunks in a smallbin. */\r
-static void do_check_smallbin(mstate m, bindex_t i) {\r
- sbinptr b = smallbin_at(m, i);\r
- mchunkptr p = b->bk;\r
- unsigned int empty = (m->smallmap & (1U << i)) == 0;\r
- if (p == b)\r
- assert(empty);\r
- if (!empty) {\r
- for (; p != b; p = p->bk) {\r
- size_t size = chunksize(p);\r
- mchunkptr q;\r
- /* each chunk claims to be free */\r
- do_check_free_chunk(m, p);\r
- /* chunk belongs in bin */\r
- assert(small_index(size) == i);\r
- assert(p->bk == b || chunksize(p->bk) == chunksize(p));\r
- /* chunk is followed by an inuse chunk */\r
- q = next_chunk(p);\r
- if (q->head != FENCEPOST_HEAD)\r
- do_check_inuse_chunk(m, q);\r
- }\r
- }\r
-}\r
-\r
-/* Find x in a bin. Used in other check functions. */\r
-static int bin_find(mstate m, mchunkptr x) {\r
- size_t size = chunksize(x);\r
- if (is_small(size)) {\r
- bindex_t sidx = small_index(size);\r
- sbinptr b = smallbin_at(m, sidx);\r
- if (smallmap_is_marked(m, sidx)) {\r
- mchunkptr p = b;\r
- do {\r
- if (p == x)\r
- return 1;\r
- } while ((p = p->fd) != b);\r
- }\r
- }\r
- else {\r
- bindex_t tidx;\r
- compute_tree_index(size, tidx);\r
- if (treemap_is_marked(m, tidx)) {\r
- tchunkptr t = *treebin_at(m, tidx);\r
- size_t sizebits = size << leftshift_for_tree_index(tidx);\r
- while (t != 0 && chunksize(t) != size) {\r
- t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];\r
- sizebits <<= 1;\r
- }\r
- if (t != 0) {\r
- tchunkptr u = t;\r
- do {\r
- if (u == (tchunkptr)x)\r
- return 1;\r
- } while ((u = u->fd) != t);\r
- }\r
- }\r
- }\r
- return 0;\r
-}\r
-\r
-/* Traverse each chunk and check it; return total */\r
-static size_t traverse_and_check(mstate m) {\r
- size_t sum = 0;\r
- if (is_initialized(m)) {\r
- msegmentptr s = &m->seg;\r
- sum += m->topsize + TOP_FOOT_SIZE;\r
- while (s != 0) {\r
- mchunkptr q = align_as_chunk(s->base);\r
- mchunkptr lastq = 0;\r
- assert(pinuse(q));\r
- while (segment_holds(s, q) &&\r
- q != m->top && q->head != FENCEPOST_HEAD) {\r
- sum += chunksize(q);\r
- if (cinuse(q)) {\r
- assert(!bin_find(m, q));\r
- do_check_inuse_chunk(m, q);\r
- }\r
- else {\r
- assert(q == m->dv || bin_find(m, q));\r
- assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */\r
- do_check_free_chunk(m, q);\r
- }\r
- lastq = q;\r
- q = next_chunk(q);\r
- }\r
- s = s->next;\r
- }\r
- }\r
- return sum;\r
-}\r
-\r
-/* Check all properties of malloc_state. */\r
-static void do_check_malloc_state(mstate m) {\r
- bindex_t i;\r
- size_t total;\r
- /* check bins */\r
- for (i = 0; i < NSMALLBINS; ++i)\r
- do_check_smallbin(m, i);\r
- for (i = 0; i < NTREEBINS; ++i)\r
- do_check_treebin(m, i);\r
-\r
- if (m->dvsize != 0) { /* check dv chunk */\r
- do_check_any_chunk(m, m->dv);\r
- assert(m->dvsize == chunksize(m->dv));\r
- assert(m->dvsize >= MIN_CHUNK_SIZE);\r
- assert(bin_find(m, m->dv) == 0);\r
- }\r
-\r
- if (m->top != 0) { /* check top chunk */\r
- do_check_top_chunk(m, m->top);\r
- assert(m->topsize == chunksize(m->top));\r
- assert(m->topsize > 0);\r
- assert(bin_find(m, m->top) == 0);\r
- }\r
-\r
- total = traverse_and_check(m);\r
- assert(total <= m->footprint);\r
- assert(m->footprint <= m->max_footprint);\r
-}\r
-#endif /* DEBUG */\r
-\r
-/* ----------------------------- statistics ------------------------------ */\r
-\r
-#if !NO_MALLINFO\r
-static struct mallinfo internal_mallinfo(mstate m) {\r
- struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };\r
- if (!PREACTION(m)) {\r
- check_malloc_state(m);\r
- if (is_initialized(m)) {\r
- size_t nfree = SIZE_T_ONE; /* top always free */\r
- size_t mfree = m->topsize + TOP_FOOT_SIZE;\r
- size_t sum = mfree;\r
- msegmentptr s = &m->seg;\r
- while (s != 0) {\r
- mchunkptr q = align_as_chunk(s->base);\r
- while (segment_holds(s, q) &&\r
- q != m->top && q->head != FENCEPOST_HEAD) {\r
- size_t sz = chunksize(q);\r
- sum += sz;\r
- if (!cinuse(q)) {\r
- mfree += sz;\r
- ++nfree;\r
- }\r
- q = next_chunk(q);\r
- }\r
- s = s->next;\r
- }\r
-\r
- nm.arena = sum;\r
- nm.ordblks = nfree;\r
- nm.hblkhd = m->footprint - sum;\r
- nm.usmblks = m->max_footprint;\r
- nm.uordblks = m->footprint - mfree;\r
- nm.fordblks = mfree;\r
- nm.keepcost = m->topsize;\r
- }\r
-\r
- POSTACTION(m);\r
- }\r
- return nm;\r
-}\r
-#endif /* !NO_MALLINFO */\r
-\r
-static void internal_malloc_stats(mstate m) {\r
- if (!PREACTION(m)) {\r
- size_t maxfp = 0;\r
- size_t fp = 0;\r
- size_t used = 0;\r
- check_malloc_state(m);\r
- if (is_initialized(m)) {\r
- msegmentptr s = &m->seg;\r
- maxfp = m->max_footprint;\r
- fp = m->footprint;\r
- used = fp - (m->topsize + TOP_FOOT_SIZE);\r
-\r
- while (s != 0) {\r
- mchunkptr q = align_as_chunk(s->base);\r
- while (segment_holds(s, q) &&\r
- q != m->top && q->head != FENCEPOST_HEAD) {\r
- if (!cinuse(q))\r
- used -= chunksize(q);\r
- q = next_chunk(q);\r
- }\r
- s = s->next;\r
- }\r
- }\r
-\r
- fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));\r
- fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));\r
- fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));\r
-\r
- POSTACTION(m);\r
- }\r
-}\r
-\r
-/* ----------------------- Operations on smallbins ----------------------- */\r
-\r
-/*\r
- Various forms of linking and unlinking are defined as macros. Even\r
- the ones for trees, which are very long but have very short typical\r
- paths. This is ugly but reduces reliance on inlining support of\r
- compilers.\r
-*/\r
-\r
-/* Link a free chunk into a smallbin */\r
-#define insert_small_chunk(M, P, S) {\\r
- bindex_t I = small_index(S);\\r
- mchunkptr B = smallbin_at(M, I);\\r
- mchunkptr F = B;\\r
- assert(S >= MIN_CHUNK_SIZE);\\r
- if (!smallmap_is_marked(M, I))\\r
- mark_smallmap(M, I);\\r
- else if (RTCHECK(ok_address(M, B->fd)))\\r
- F = B->fd;\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
- B->fd = P;\\r
- F->bk = P;\\r
- P->fd = F;\\r
- P->bk = B;\\r
-}\r
-\r
-/* Unlink a chunk from a smallbin */\r
-#define unlink_small_chunk(M, P, S) {\\r
- mchunkptr F = P->fd;\\r
- mchunkptr B = P->bk;\\r
- bindex_t I = small_index(S);\\r
- assert(P != B);\\r
- assert(P != F);\\r
- assert(chunksize(P) == small_index2size(I));\\r
- if (F == B)\\r
- clear_smallmap(M, I);\\r
- else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\\r
- (B == smallbin_at(M,I) || ok_address(M, B)))) {\\r
- F->bk = B;\\r
- B->fd = F;\\r
- }\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
-}\r
-\r
-/* Unlink the first chunk from a smallbin */\r
-#define unlink_first_small_chunk(M, B, P, I) {\\r
- mchunkptr F = P->fd;\\r
- assert(P != B);\\r
- assert(P != F);\\r
- assert(chunksize(P) == small_index2size(I));\\r
- if (B == F)\\r
- clear_smallmap(M, I);\\r
- else if (RTCHECK(ok_address(M, F))) {\\r
- B->fd = F;\\r
- F->bk = B;\\r
- }\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
-}\r
-\r
-/* Replace dv node, binning the old one */\r
-/* Used only when dvsize known to be small */\r
-#define replace_dv(M, P, S) {\\r
- size_t DVS = M->dvsize;\\r
- if (DVS != 0) {\\r
- mchunkptr DV = M->dv;\\r
- assert(is_small(DVS));\\r
- insert_small_chunk(M, DV, DVS);\\r
- }\\r
- M->dvsize = S;\\r
- M->dv = P;\\r
-}\r
-\r
-/* ------------------------- Operations on trees ------------------------- */\r
-\r
-/* Insert chunk into tree */\r
-#define insert_large_chunk(M, X, S) {\\r
- tbinptr* H;\\r
- bindex_t I;\\r
- compute_tree_index(S, I);\\r
- H = treebin_at(M, I);\\r
- X->index = I;\\r
- X->child[0] = X->child[1] = 0;\\r
- if (!treemap_is_marked(M, I)) {\\r
- mark_treemap(M, I);\\r
- *H = X;\\r
- X->parent = (tchunkptr)H;\\r
- X->fd = X->bk = X;\\r
- }\\r
- else {\\r
- tchunkptr T = *H;\\r
- size_t K = S << leftshift_for_tree_index(I);\\r
- for (;;) {\\r
- if (chunksize(T) != S) {\\r
- tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\\r
- K <<= 1;\\r
- if (*C != 0)\\r
- T = *C;\\r
- else if (RTCHECK(ok_address(M, C))) {\\r
- *C = X;\\r
- X->parent = T;\\r
- X->fd = X->bk = X;\\r
- break;\\r
- }\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- break;\\r
- }\\r
- }\\r
- else {\\r
- tchunkptr F = T->fd;\\r
- if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\\r
- T->fd = F->bk = X;\\r
- X->fd = F;\\r
- X->bk = T;\\r
- X->parent = 0;\\r
- break;\\r
- }\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- break;\\r
- }\\r
- }\\r
- }\\r
- }\\r
-}\r
-\r
-/*\r
- Unlink steps:\r
-\r
- 1. If x is a chained node, unlink it from its same-sized fd/bk links\r
- and choose its bk node as its replacement.\r
- 2. If x was the last node of its size, but not a leaf node, it must\r
- be replaced with a leaf node (not merely one with an open left or\r
- right), to make sure that lefts and rights of descendants\r
- correspond properly to bit masks. We use the rightmost descendant\r
- of x. We could use any other leaf, but this is easy to locate and\r
- tends to counteract removal of leftmosts elsewhere, and so keeps\r
- paths shorter than minimally guaranteed. This doesn't loop much\r
- because on average a node in a tree is near the bottom.\r
- 3. If x is the base of a chain (i.e., has parent links) relink\r
- x's parent and children to x's replacement (or null if none).\r
-*/\r
-\r
-#define unlink_large_chunk(M, X) {\\r
- tchunkptr XP = X->parent;\\r
- tchunkptr R;\\r
- if (X->bk != X) {\\r
- tchunkptr F = X->fd;\\r
- R = X->bk;\\r
- if (RTCHECK(ok_address(M, F))) {\\r
- F->bk = R;\\r
- R->fd = F;\\r
- }\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
- }\\r
- else {\\r
- tchunkptr* RP;\\r
- if (((R = *(RP = &(X->child[1]))) != 0) ||\\r
- ((R = *(RP = &(X->child[0]))) != 0)) {\\r
- tchunkptr* CP;\\r
- while ((*(CP = &(R->child[1])) != 0) ||\\r
- (*(CP = &(R->child[0])) != 0)) {\\r
- R = *(RP = CP);\\r
- }\\r
- if (RTCHECK(ok_address(M, RP)))\\r
- *RP = 0;\\r
- else {\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
- }\\r
- }\\r
- if (XP != 0) {\\r
- tbinptr* H = treebin_at(M, X->index);\\r
- if (X == *H) {\\r
- if ((*H = R) == 0) \\r
- clear_treemap(M, X->index);\\r
- }\\r
- else if (RTCHECK(ok_address(M, XP))) {\\r
- if (XP->child[0] == X) \\r
- XP->child[0] = R;\\r
- else \\r
- XP->child[1] = R;\\r
- }\\r
- else\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- if (R != 0) {\\r
- if (RTCHECK(ok_address(M, R))) {\\r
- tchunkptr C0, C1;\\r
- R->parent = XP;\\r
- if ((C0 = X->child[0]) != 0) {\\r
- if (RTCHECK(ok_address(M, C0))) {\\r
- R->child[0] = C0;\\r
- C0->parent = R;\\r
- }\\r
- else\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
- if ((C1 = X->child[1]) != 0) {\\r
- if (RTCHECK(ok_address(M, C1))) {\\r
- R->child[1] = C1;\\r
- C1->parent = R;\\r
- }\\r
- else\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
- }\\r
- else\\r
- CORRUPTION_ERROR_ACTION(M);\\r
- }\\r
- }\\r
-}\r
-\r
-/* Relays to large vs small bin operations */\r
-\r
-#define insert_chunk(M, P, S)\\r
- if (is_small(S)) insert_small_chunk(M, P, S)\\r
- else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }\r
-\r
-#define unlink_chunk(M, P, S)\\r
- if (is_small(S)) unlink_small_chunk(M, P, S)\\r
- else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }\r
-\r
-\r
-/* Relays to internal calls to malloc/free from realloc, memalign etc */\r
-\r
-#if ONLY_MSPACES\r
-#define internal_malloc(m, b) mspace_malloc(m, b)\r
-#define internal_free(m, mem) mspace_free(m,mem);\r
-#else /* ONLY_MSPACES */\r
-#if MSPACES\r
-#define internal_malloc(m, b)\\r
- (m == gm)? dlmalloc(b) : mspace_malloc(m, b)\r
-#define internal_free(m, mem)\\r
- if (m == gm) dlfree(mem); else mspace_free(m,mem);\r
-#else /* MSPACES */\r
-#define internal_malloc(m, b) dlmalloc(b)\r
-#define internal_free(m, mem) dlfree(mem)\r
-#endif /* MSPACES */\r
-#endif /* ONLY_MSPACES */\r
-\r
-/* ----------------------- Direct-mmapping chunks ----------------------- */\r
-\r
-/*\r
- Directly mmapped chunks are set up with an offset to the start of\r
- the mmapped region stored in the prev_foot field of the chunk. This\r
- allows reconstruction of the required argument to MUNMAP when freed,\r
- and also allows adjustment of the returned chunk to meet alignment\r
- requirements (especially in memalign). There is also enough space\r
- allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain\r
- the PINUSE bit so frees can be checked.\r
-*/\r
-\r
-/* Malloc using mmap */\r
-static void* mmap_alloc(mstate m, size_t nb) {\r
- size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);\r
- if (mmsize > nb) { /* Check for wrap around 0 */\r
- char* mm = (char*)(DIRECT_MMAP(mmsize));\r
- if (mm != CMFAIL) {\r
- size_t offset = align_offset(chunk2mem(mm));\r
- size_t psize = mmsize - offset - MMAP_FOOT_PAD;\r
- mchunkptr p = (mchunkptr)(mm + offset);\r
- p->prev_foot = offset | IS_MMAPPED_BIT;\r
- (p)->head = (psize|CINUSE_BIT);\r
- mark_inuse_foot(m, p, psize);\r
- chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;\r
- chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;\r
-\r
- if (mm < m->least_addr)\r
- m->least_addr = mm;\r
- if ((m->footprint += mmsize) > m->max_footprint)\r
- m->max_footprint = m->footprint;\r
- assert(is_aligned(chunk2mem(p)));\r
- check_mmapped_chunk(m, p);\r
- return chunk2mem(p);\r
- }\r
- }\r
- return 0;\r
-}\r
-\r
-/* Realloc using mmap */\r
-static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {\r
- size_t oldsize = chunksize(oldp);\r
- if (is_small(nb)) /* Can't shrink mmap regions below small size */\r
- return 0;\r
- /* Keep old chunk if big enough but not too big */\r
- if (oldsize >= nb + SIZE_T_SIZE &&\r
- (oldsize - nb) <= (mparams.granularity << 1))\r
- return oldp;\r
- else {\r
- size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;\r
- size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;\r
- size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +\r
- CHUNK_ALIGN_MASK);\r
- char* cp = (char*)CALL_MREMAP((char*)oldp - offset,\r
- oldmmsize, newmmsize, 1);\r
- if (cp != CMFAIL) {\r
- mchunkptr newp = (mchunkptr)(cp + offset);\r
- size_t psize = newmmsize - offset - MMAP_FOOT_PAD;\r
- newp->head = (psize|CINUSE_BIT);\r
- mark_inuse_foot(m, newp, psize);\r
- chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;\r
- chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;\r
-\r
- if (cp < m->least_addr)\r
- m->least_addr = cp;\r
- if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)\r
- m->max_footprint = m->footprint;\r
- check_mmapped_chunk(m, newp);\r
- return newp;\r
- }\r
- }\r
- return 0;\r
-}\r
-\r
-/* -------------------------- mspace management -------------------------- */\r
-\r
-/* Initialize top chunk and its size */\r
-static void init_top(mstate m, mchunkptr p, size_t psize) {\r
- /* Ensure alignment */\r
- size_t offset = align_offset(chunk2mem(p));\r
- p = (mchunkptr)((char*)p + offset);\r
- psize -= offset;\r
-\r
- m->top = p;\r
- m->topsize = psize;\r
- p->head = psize | PINUSE_BIT;\r
- /* set size of fake trailing chunk holding overhead space only once */\r
- chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;\r
- m->trim_check = mparams.trim_threshold; /* reset on each update */\r
-}\r
-\r
-/* Initialize bins for a new mstate that is otherwise zeroed out */\r
-static void init_bins(mstate m) {\r
- /* Establish circular links for smallbins */\r
- bindex_t i;\r
- for (i = 0; i < NSMALLBINS; ++i) {\r
- sbinptr bin = smallbin_at(m,i);\r
- bin->fd = bin->bk = bin;\r
- }\r
-}\r
-\r
-#if PROCEED_ON_ERROR\r
-\r
-/* default corruption action */\r
-static void reset_on_error(mstate m) {\r
- int i;\r
- ++malloc_corruption_error_count;\r
- /* Reinitialize fields to forget about all memory */\r
- m->smallbins = m->treebins = 0;\r
- m->dvsize = m->topsize = 0;\r
- m->seg.base = 0;\r
- m->seg.size = 0;\r
- m->seg.next = 0;\r
- m->top = m->dv = 0;\r
- for (i = 0; i < NTREEBINS; ++i)\r
- *treebin_at(m, i) = 0;\r
- init_bins(m);\r
-}\r
-#endif /* PROCEED_ON_ERROR */\r
-\r
-/* Allocate a chunk at newbase; consolidate the remainder with the first chunk of the (adjacent, higher-addressed) old base. */\r
-static void* prepend_alloc(mstate m, char* newbase, char* oldbase,\r
- size_t nb) {\r
- mchunkptr p = align_as_chunk(newbase);\r
- mchunkptr oldfirst = align_as_chunk(oldbase);\r
- size_t psize = (char*)oldfirst - (char*)p;\r
- mchunkptr q = chunk_plus_offset(p, nb);\r
- size_t qsize = psize - nb;\r
- set_size_and_pinuse_of_inuse_chunk(m, p, nb);\r
-\r
- assert((char*)oldfirst > (char*)q);\r
- assert(pinuse(oldfirst));\r
- assert(qsize >= MIN_CHUNK_SIZE);\r
-\r
- /* consolidate remainder with first chunk of old base */\r
- if (oldfirst == m->top) {\r
- size_t tsize = m->topsize += qsize;\r
- m->top = q;\r
- q->head = tsize | PINUSE_BIT;\r
- check_top_chunk(m, q);\r
- }\r
- else if (oldfirst == m->dv) {\r
- size_t dsize = m->dvsize += qsize;\r
- m->dv = q;\r
- set_size_and_pinuse_of_free_chunk(q, dsize);\r
- }\r
- else {\r
- if (!cinuse(oldfirst)) {\r
- size_t nsize = chunksize(oldfirst);\r
- unlink_chunk(m, oldfirst, nsize);\r
- oldfirst = chunk_plus_offset(oldfirst, nsize);\r
- qsize += nsize;\r
- }\r
- set_free_with_pinuse(q, qsize, oldfirst);\r
- insert_chunk(m, q, qsize);\r
- check_free_chunk(m, q);\r
- }\r
-\r
- check_malloced_chunk(m, chunk2mem(p), nb);\r
- return chunk2mem(p);\r
-}\r
-\r
-\r
-/* Add a segment to hold a new noncontiguous region */\r
-static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {\r
- /* Determine locations and sizes of segment, fenceposts, old top */\r
- char* old_top = (char*)m->top;\r
- msegmentptr oldsp = segment_holding(m, old_top);\r
- char* old_end = oldsp->base + oldsp->size;\r
- size_t ssize = pad_request(sizeof(struct malloc_segment));\r
- char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);\r
- size_t offset = align_offset(chunk2mem(rawsp));\r
- char* asp = rawsp + offset;\r
- char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;\r
- mchunkptr sp = (mchunkptr)csp;\r
- msegmentptr ss = (msegmentptr)(chunk2mem(sp));\r
- mchunkptr tnext = chunk_plus_offset(sp, ssize);\r
- mchunkptr p = tnext;\r
- int nfences = 0;\r
-\r
- /* reset top to new space */\r
- init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);\r
-\r
- /* Set up segment record */\r
- assert(is_aligned(ss));\r
- set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);\r
- *ss = m->seg; /* Push current record */\r
- m->seg.base = tbase;\r
- m->seg.size = tsize;\r
- m->seg.sflags = mmapped;\r
- m->seg.next = ss;\r
-\r
- /* Insert trailing fenceposts */\r
- for (;;) {\r
- mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);\r
- p->head = FENCEPOST_HEAD;\r
- ++nfences;\r
- if ((char*)(&(nextp->head)) < old_end)\r
- p = nextp;\r
- else\r
- break;\r
- }\r
- assert(nfences >= 2);\r
-\r
- /* Insert the rest of old top into a bin as an ordinary free chunk */\r
- if (csp != old_top) {\r
- mchunkptr q = (mchunkptr)old_top;\r
- size_t psize = csp - old_top;\r
- mchunkptr tn = chunk_plus_offset(q, psize);\r
- set_free_with_pinuse(q, psize, tn);\r
- insert_chunk(m, q, psize);\r
- }\r
-\r
- check_top_chunk(m, m->top);\r
-}\r
-\r
-/* -------------------------- System allocation -------------------------- */\r
-\r
-/* Get memory from system using MORECORE or MMAP */\r
-static void* sys_alloc(mstate m, size_t nb) {\r
- char* tbase = CMFAIL;\r
- size_t tsize = 0;\r
- flag_t mmap_flag = 0;\r
-\r
- init_mparams();\r
-\r
- /* Directly map large chunks */\r
- if (use_mmap(m) && nb >= mparams.mmap_threshold) {\r
- void* mem = mmap_alloc(m, nb);\r
- if (mem != 0)\r
- return mem;\r
- }\r
-\r
- /*\r
- Try getting memory in any of three ways (in most-preferred to\r
- least-preferred order):\r
- 1. A call to MORECORE that can normally contiguously extend memory.\r
- (disabled if not MORECORE_CONTIGUOUS, not HAVE_MORECORE, main\r
- space is mmapped, or a previous contiguous call failed)\r
- 2. A call to MMAP new space (disabled if not HAVE_MMAP).\r
- Note that under the default settings, if MORECORE is unable to\r
- fulfill a request, and HAVE_MMAP is true, then mmap is\r
- used as a noncontiguous system allocator. This is a useful backup\r
- strategy for systems with holes in address spaces -- in this case\r
- sbrk cannot contiguously expand the heap, but mmap may be able to\r
- find space.\r
- 3. A call to MORECORE that cannot usually contiguously extend memory.\r
- (disabled if not HAVE_MORECORE)\r
- */\r
-\r
- if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {\r
- char* br = CMFAIL;\r
- msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);\r
- size_t asize = 0;\r
- ACQUIRE_MORECORE_LOCK();\r
-\r
- if (ss == 0) { /* First time through or recovery */\r
- char* base = (char*)CALL_MORECORE(0);\r
- if (base != CMFAIL) {\r
- asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);\r
- /* Adjust to end on a page boundary */\r
- if (!is_page_aligned(base))\r
- asize += (page_align((size_t)base) - (size_t)base);\r
- /* Can't call MORECORE if size is negative when treated as signed */\r
- if (asize < HALF_MAX_SIZE_T &&\r
- (br = (char*)(CALL_MORECORE(asize))) == base) {\r
- tbase = base;\r
- tsize = asize;\r
- }\r
- }\r
- }\r
- else {\r
- /* Subtract out existing available top space from MORECORE request. */\r
- asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);\r
- /* Use mem here only if it did contiguously extend old space */\r
- if (asize < HALF_MAX_SIZE_T &&\r
- (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {\r
- tbase = br;\r
- tsize = asize;\r
- }\r
- }\r
-\r
- if (tbase == CMFAIL) { /* Cope with partial failure */\r
- if (br != CMFAIL) { /* Try to use/extend the space we did get */\r
- if (asize < HALF_MAX_SIZE_T &&\r
- asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {\r
- size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);\r
- if (esize < HALF_MAX_SIZE_T) {\r
- char* end = (char*)CALL_MORECORE(esize);\r
- if (end != CMFAIL)\r
- asize += esize;\r
- else { /* Can't use; try to release */\r
- (void) CALL_MORECORE(-asize);\r
- br = CMFAIL;\r
- }\r
- }\r
- }\r
- }\r
- if (br != CMFAIL) { /* Use the space we did get */\r
- tbase = br;\r
- tsize = asize;\r
- }\r
- else\r
- disable_contiguous(m); /* Don't try contiguous path in the future */\r
- }\r
-\r
- RELEASE_MORECORE_LOCK();\r
- }\r
-\r
- if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */\r
- size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;\r
- size_t rsize = granularity_align(req);\r
- if (rsize > nb) { /* Fail if wraps around zero */\r
- char* mp = (char*)(CALL_MMAP(rsize));\r
- if (mp != CMFAIL) {\r
- tbase = mp;\r
- tsize = rsize;\r
- mmap_flag = IS_MMAPPED_BIT;\r
- }\r
- }\r
- }\r
-\r
- if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */\r
- size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);\r
- if (asize < HALF_MAX_SIZE_T) {\r
- char* br = CMFAIL;\r
- char* end = CMFAIL;\r
- ACQUIRE_MORECORE_LOCK();\r
- br = (char*)(CALL_MORECORE(asize));\r
- end = (char*)(CALL_MORECORE(0));\r
- RELEASE_MORECORE_LOCK();\r
- if (br != CMFAIL && end != CMFAIL && br < end) {\r
- size_t ssize = end - br;\r
- if (ssize > nb + TOP_FOOT_SIZE) {\r
- tbase = br;\r
- tsize = ssize;\r
- }\r
- }\r
- }\r
- }\r
-\r
- if (tbase != CMFAIL) {\r
-\r
- if ((m->footprint += tsize) > m->max_footprint)\r
- m->max_footprint = m->footprint;\r
-\r
- if (!is_initialized(m)) { /* first-time initialization */\r
- m->seg.base = m->least_addr = tbase;\r
- m->seg.size = tsize;\r
- m->seg.sflags = mmap_flag;\r
- m->magic = mparams.magic;\r
- init_bins(m);\r
- if (is_global(m)) \r
- init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);\r
- else {\r
- /* Offset top by embedded malloc_state */\r
- mchunkptr mn = next_chunk(mem2chunk(m));\r
- init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);\r
- }\r
- }\r
-\r
- else {\r
- /* Try to merge with an existing segment */\r
-#ifdef DONT_MERGE_SEGMENTS\r
- (void) prepend_alloc;\r
- add_segment(m, tbase, tsize, mmap_flag);\r
-#else\r
- msegmentptr sp = &m->seg;\r
- while (sp != 0 && tbase != sp->base + sp->size)\r
- sp = sp->next;\r
- if (sp != 0 &&\r
- !is_extern_segment(sp) &&\r
- (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&\r
- segment_holds(sp, m->top)) { /* append */\r
- sp->size += tsize;\r
- init_top(m, m->top, m->topsize + tsize);\r
- }\r
- else {\r
- if (tbase < m->least_addr)\r
- m->least_addr = tbase;\r
- sp = &m->seg;\r
- while (sp != 0 && sp->base != tbase + tsize)\r
- sp = sp->next;\r
- if (sp != 0 &&\r
- !is_extern_segment(sp) &&\r
- (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {\r
- char* oldbase = sp->base;\r
- sp->base = tbase;\r
- sp->size += tsize;\r
- return prepend_alloc(m, tbase, oldbase, nb);\r
- }\r
- else\r
- add_segment(m, tbase, tsize, mmap_flag);\r
- }\r
-#endif /* DONT_MERGE_SEGMENTS */\r
- }\r
-\r
- if (nb < m->topsize) { /* Allocate from new or extended top space */\r
- size_t rsize = m->topsize -= nb;\r
- mchunkptr p = m->top;\r
- mchunkptr r = m->top = chunk_plus_offset(p, nb);\r
- r->head = rsize | PINUSE_BIT;\r
- set_size_and_pinuse_of_inuse_chunk(m, p, nb);\r
- check_top_chunk(m, m->top);\r
- check_malloced_chunk(m, chunk2mem(p), nb);\r
- return chunk2mem(p);\r
- }\r
- }\r
-\r
- MALLOC_FAILURE_ACTION;\r
- return 0;\r
-}\r
-\r
-/* ----------------------- system deallocation -------------------------- */\r
-\r
-/* Unmap and unlink any mmapped segments that don't contain used chunks */\r
-static size_t release_unused_segments(mstate m) {\r
- size_t released = 0;\r
- msegmentptr pred = &m->seg;\r
- msegmentptr sp = pred->next;\r
- while (sp != 0) {\r
- char* base = sp->base;\r
- size_t size = sp->size;\r
- msegmentptr next = sp->next;\r
- if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {\r
- mchunkptr p = align_as_chunk(base);\r
- size_t psize = chunksize(p);\r
- /* Can unmap if first chunk holds entire segment and not pinned */\r
- if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {\r
- tchunkptr tp = (tchunkptr)p;\r
- assert(segment_holds(sp, (char*)sp));\r
- if (p == m->dv) {\r
- m->dv = 0;\r
- m->dvsize = 0;\r
- }\r
- else {\r
- unlink_large_chunk(m, tp);\r
- }\r
- if (CALL_MUNMAP(base, size) == 0) {\r
- released += size;\r
- m->footprint -= size;\r
- /* unlink obsoleted record */\r
- sp = pred;\r
- sp->next = next;\r
- }\r
- else { /* back out if cannot unmap */\r
- insert_large_chunk(m, tp, psize);\r
- }\r
- }\r
- }\r
- pred = sp;\r
- sp = next;\r
- }\r
- return released;\r
-}\r
-\r
-static int sys_trim(mstate m, size_t pad) {\r
- size_t released = 0;\r
- if (pad < MAX_REQUEST && is_initialized(m)) {\r
- pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */\r
-\r
- if (m->topsize > pad) {\r
- /* Shrink top space in granularity-size units, keeping at least one */\r
- size_t unit = mparams.granularity;\r
- size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -\r
- SIZE_T_ONE) * unit;\r
- msegmentptr sp = segment_holding(m, (char*)m->top);\r
-\r
- if (!is_extern_segment(sp)) {\r
- if (is_mmapped_segment(sp)) {\r
- if (HAVE_MMAP &&\r
- sp->size >= extra &&\r
- !has_segment_link(m, sp)) { /* can't shrink if pinned */\r
- size_t newsize = sp->size - extra;\r
- /* Prefer mremap, fall back to munmap */\r
- if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||\r
- (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {\r
- released = extra;\r
- }\r
- }\r
- }\r
- else if (HAVE_MORECORE) {\r
- if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */\r
- extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;\r
- ACQUIRE_MORECORE_LOCK();\r
- {\r
- /* Make sure end of memory is where we last set it. */\r
- char* old_br = (char*)(CALL_MORECORE(0));\r
- if (old_br == sp->base + sp->size) {\r
- char* rel_br = (char*)(CALL_MORECORE(-extra));\r
- char* new_br = (char*)(CALL_MORECORE(0));\r
- if (rel_br != CMFAIL && new_br < old_br)\r
- released = old_br - new_br;\r
- }\r
- }\r
- RELEASE_MORECORE_LOCK();\r
- }\r
- }\r
-\r
- if (released != 0) {\r
- sp->size -= released;\r
- m->footprint -= released;\r
- init_top(m, m->top, m->topsize - released);\r
- check_top_chunk(m, m->top);\r
- }\r
- }\r
-\r
- /* Unmap any unused mmapped segments */\r
- if (HAVE_MMAP) \r
- released += release_unused_segments(m);\r
-\r
- /* On failure, disable autotrim to avoid repeated failed future calls */\r
- if (released == 0)\r
- m->trim_check = MAX_SIZE_T;\r
- }\r
-\r
- return (released != 0)? 1 : 0;\r
-}\r
-\r
-/* ---------------------------- malloc support --------------------------- */\r
-\r
-/* allocate a large request from the best fitting chunk in a treebin */\r
-static void* tmalloc_large(mstate m, size_t nb) {\r
- tchunkptr v = 0;\r
- size_t rsize = -nb; /* Unsigned negation */\r
- tchunkptr t;\r
- bindex_t idx;\r
- compute_tree_index(nb, idx);\r
-\r
- if ((t = *treebin_at(m, idx)) != 0) {\r
- /* Traverse tree for this bin looking for node with size == nb */\r
- size_t sizebits = nb << leftshift_for_tree_index(idx);\r
- tchunkptr rst = 0; /* The deepest untaken right subtree */\r
- for (;;) {\r
- tchunkptr rt;\r
- size_t trem = chunksize(t) - nb;\r
- if (trem < rsize) {\r
- v = t;\r
- if ((rsize = trem) == 0)\r
- break;\r
- }\r
- rt = t->child[1];\r
- t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];\r
- if (rt != 0 && rt != t)\r
- rst = rt;\r
- if (t == 0) {\r
- t = rst; /* set t to least subtree holding sizes > nb */\r
- break;\r
- }\r
- sizebits <<= 1;\r
- }\r
- }\r
-\r
- if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */\r
- binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;\r
- if (leftbits != 0) {\r
- bindex_t i;\r
- binmap_t leastbit = least_bit(leftbits);\r
- compute_bit2idx(leastbit, i);\r
- t = *treebin_at(m, i);\r
- }\r
- }\r
-\r
- while (t != 0) { /* find smallest of tree or subtree */\r
- size_t trem = chunksize(t) - nb;\r
- if (trem < rsize) {\r
- rsize = trem;\r
- v = t;\r
- }\r
- t = leftmost_child(t);\r
- }\r
-\r
- /* If dv is a better fit, return 0 so malloc will use it */\r
- if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {\r
- if (RTCHECK(ok_address(m, v))) { /* split */\r
- mchunkptr r = chunk_plus_offset(v, nb);\r
- assert(chunksize(v) == rsize + nb);\r
- if (RTCHECK(ok_next(v, r))) {\r
- unlink_large_chunk(m, v);\r
- if (rsize < MIN_CHUNK_SIZE)\r
- set_inuse_and_pinuse(m, v, (rsize + nb));\r
- else {\r
- set_size_and_pinuse_of_inuse_chunk(m, v, nb);\r
- set_size_and_pinuse_of_free_chunk(r, rsize);\r
- insert_chunk(m, r, rsize);\r
- }\r
- return chunk2mem(v);\r
- }\r
- }\r
- CORRUPTION_ERROR_ACTION(m);\r
- }\r
- return 0;\r
-}\r
-\r
-/* allocate a small request from the best fitting chunk in a treebin */\r
-static void* tmalloc_small(mstate m, size_t nb) {\r
- tchunkptr t, v;\r
- size_t rsize;\r
- bindex_t i;\r
- binmap_t leastbit = least_bit(m->treemap);\r
- compute_bit2idx(leastbit, i);\r
-\r
- v = t = *treebin_at(m, i);\r
- rsize = chunksize(t) - nb;\r
-\r
- while ((t = leftmost_child(t)) != 0) {\r
- size_t trem = chunksize(t) - nb;\r
- if (trem < rsize) {\r
- rsize = trem;\r
- v = t;\r
- }\r
- }\r
-\r
- if (RTCHECK(ok_address(m, v))) {\r
- mchunkptr r = chunk_plus_offset(v, nb);\r
- assert(chunksize(v) == rsize + nb);\r
- if (RTCHECK(ok_next(v, r))) {\r
- unlink_large_chunk(m, v);\r
- if (rsize < MIN_CHUNK_SIZE)\r
- set_inuse_and_pinuse(m, v, (rsize + nb));\r
- else {\r
- set_size_and_pinuse_of_inuse_chunk(m, v, nb);\r
- set_size_and_pinuse_of_free_chunk(r, rsize);\r
- replace_dv(m, r, rsize);\r
- }\r
- return chunk2mem(v);\r
- }\r
- }\r
-\r
- CORRUPTION_ERROR_ACTION(m);\r
- return 0;\r
-}\r
-\r
-/* --------------------------- realloc support --------------------------- */\r
-\r
-static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {\r
- if (bytes >= MAX_REQUEST) {\r
- MALLOC_FAILURE_ACTION;\r
- return 0;\r
- }\r
- if (!PREACTION(m)) {\r
- mchunkptr oldp = mem2chunk(oldmem);\r
- size_t oldsize = chunksize(oldp);\r
- mchunkptr next = chunk_plus_offset(oldp, oldsize);\r
- mchunkptr newp = 0;\r
- void* extra = 0;\r
-\r
- /* Try to either shrink or extend into top. Else malloc-copy-free */\r
-\r
- if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&\r
- ok_next(oldp, next) && ok_pinuse(next))) {\r
- size_t nb = request2size(bytes);\r
- if (is_mmapped(oldp))\r
- newp = mmap_resize(m, oldp, nb);\r
- else if (oldsize >= nb) { /* already big enough */\r
- size_t rsize = oldsize - nb;\r
- newp = oldp;\r
- if (rsize >= MIN_CHUNK_SIZE) {\r
- mchunkptr remainder = chunk_plus_offset(newp, nb);\r
- set_inuse(m, newp, nb);\r
- set_inuse(m, remainder, rsize);\r
- extra = chunk2mem(remainder);\r
- }\r
- }\r
- else if (next == m->top && oldsize + m->topsize > nb) {\r
- /* Expand into top */\r
- size_t newsize = oldsize + m->topsize;\r
- size_t newtopsize = newsize - nb;\r
- mchunkptr newtop = chunk_plus_offset(oldp, nb);\r
- set_inuse(m, oldp, nb);\r
- newtop->head = newtopsize |PINUSE_BIT;\r
- m->top = newtop;\r
- m->topsize = newtopsize;\r
- newp = oldp;\r
- }\r
- }\r
- else {\r
- USAGE_ERROR_ACTION(m, oldmem);\r
- POSTACTION(m);\r
- return 0;\r
- }\r
-\r
- POSTACTION(m);\r
-\r
- if (newp != 0) {\r
- if (extra != 0) {\r
- internal_free(m, extra);\r
- }\r
- check_inuse_chunk(m, newp);\r
- return chunk2mem(newp);\r
- }\r
- else {\r
- void* newmem = internal_malloc(m, bytes);\r
- if (newmem != 0) {\r
- size_t oc = oldsize - overhead_for(oldp);\r
- memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);\r
- internal_free(m, oldmem);\r
- }\r
- return newmem;\r
- }\r
- }\r
- return 0;\r
-}\r
-\r
-/* --------------------------- memalign support -------------------------- */\r
-\r
-static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {\r
- if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */\r
- return internal_malloc(m, bytes);\r
- if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */\r
- alignment = MIN_CHUNK_SIZE;\r
- if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */\r
- size_t a = MALLOC_ALIGNMENT << 1;\r
- while (a < alignment) a <<= 1;\r
- alignment = a;\r
- }\r
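- /* Example: a request for alignment 24 is not a power of two, so the\r
- loop above rounds it up to 32; alignments of MALLOC_ALIGNMENT or\r
- less were already handled by plain malloc above. */\r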
- \r
- if (bytes >= MAX_REQUEST - alignment) {\r
- if (m != 0) { /* Test isn't needed but avoids compiler warning */\r
- MALLOC_FAILURE_ACTION;\r
- }\r
- }\r
- else {\r
- size_t nb = request2size(bytes);\r
- size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;\r
- char* mem = (char*)internal_malloc(m, req);\r
- if (mem != 0) {\r
- void* leader = 0;\r
- void* trailer = 0;\r
- mchunkptr p = mem2chunk(mem);\r
-\r
- if (PREACTION(m)) return 0;\r
- if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */\r
- /*\r
- Find an aligned spot inside chunk. Since we need to give\r
- back leading space in a chunk of at least MIN_CHUNK_SIZE, if\r
- the first calculation places us at a spot with less than\r
- MIN_CHUNK_SIZE leader, we can move to the next aligned spot.\r
- We've allocated enough total room so that this is always\r
- possible.\r
- */\r
- char* br = (char*)mem2chunk((size_t)(((size_t)(mem +\r
- alignment -\r
- SIZE_T_ONE)) &\r
- -alignment));\r
- char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?\r
- br : br+alignment;\r
- mchunkptr newp = (mchunkptr)pos;\r
- size_t leadsize = pos - (char*)(p);\r
- size_t newsize = chunksize(p) - leadsize;\r
-\r
- if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */\r
- newp->prev_foot = p->prev_foot + leadsize;\r
- newp->head = (newsize|CINUSE_BIT);\r
- }\r
- else { /* Otherwise, give back leader, use the rest */\r
- set_inuse(m, newp, newsize);\r
- set_inuse(m, p, leadsize);\r
- leader = chunk2mem(p);\r
- }\r
- p = newp;\r
- }\r
-\r
- /* Give back spare room at the end */\r
- if (!is_mmapped(p)) {\r
- size_t size = chunksize(p);\r
- if (size > nb + MIN_CHUNK_SIZE) {\r
- size_t remainder_size = size - nb;\r
- mchunkptr remainder = chunk_plus_offset(p, nb);\r
- set_inuse(m, p, nb);\r
- set_inuse(m, remainder, remainder_size);\r
- trailer = chunk2mem(remainder);\r
- }\r
- }\r
-\r
- assert (chunksize(p) >= nb);\r
- assert((((size_t)(chunk2mem(p))) % alignment) == 0);\r
- check_inuse_chunk(m, p);\r
- POSTACTION(m);\r
- if (leader != 0) {\r
- internal_free(m, leader);\r
- }\r
- if (trailer != 0) {\r
- internal_free(m, trailer);\r
- }\r
- return chunk2mem(p);\r
- }\r
- }\r
- return 0;\r
-}\r
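-\r
-/*\r
- Usage sketch: the public memalign entry point relays here; e.g.\r
- dlmemalign(64, 100) calls internal_memalign(gm, 64, 100) and returns\r
- a block of at least 100 usable bytes whose address is a multiple\r
- of 64.\r
-*/\r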
-\r
-/* ------------------------ comalloc/coalloc support --------------------- */\r
-\r
-static void** ialloc(mstate m,\r
- size_t n_elements,\r
- size_t* sizes,\r
- int opts,\r
- void* chunks[]) {\r
- /*\r
- This provides common support for independent_X routines, handling\r
- all of the combinations that can result.\r
-\r
- The opts arg has:\r
- bit 0 set if all elements are same size (using sizes[0])\r
- bit 1 set if elements should be zeroed\r
- */\r
-\r
- size_t element_size; /* chunksize of each element, if all same */\r
- size_t contents_size; /* total size of elements */\r
- size_t array_size; /* request size of pointer array */\r
- void* mem; /* malloced aggregate space */\r
- mchunkptr p; /* corresponding chunk */\r
- size_t remainder_size; /* remaining bytes while splitting */\r
- void** marray; /* either "chunks" or malloced ptr array */\r
- mchunkptr array_chunk; /* chunk for malloced ptr array */\r
- flag_t was_enabled; /* to disable mmap */\r
- size_t size;\r
- size_t i;\r
-\r
- /* compute array length, if needed */\r
- if (chunks != 0) {\r
- if (n_elements == 0)\r
- return chunks; /* nothing to do */\r
- marray = chunks;\r
- array_size = 0;\r
- }\r
- else {\r
- /* if empty req, must still return chunk representing empty array */\r
- if (n_elements == 0)\r
- return (void**)internal_malloc(m, 0);\r
- marray = 0;\r
- array_size = request2size(n_elements * (sizeof(void*)));\r
- }\r
-\r
- /* compute total element size */\r
- if (opts & 0x1) { /* all-same-size */\r
- element_size = request2size(*sizes);\r
- contents_size = n_elements * element_size;\r
- }\r
- else { /* add up all the sizes */\r
- element_size = 0;\r
- contents_size = 0;\r
- for (i = 0; i != n_elements; ++i)\r
- contents_size += request2size(sizes[i]);\r
- }\r
-\r
- size = contents_size + array_size;\r
-\r
- /*\r
- Allocate the aggregate chunk. First disable direct-mmapping so\r
- malloc won't use it, since we would not be able to later\r
- free/realloc space internal to a segregated mmap region.\r
- */\r
- was_enabled = use_mmap(m);\r
- disable_mmap(m);\r
- mem = internal_malloc(m, size - CHUNK_OVERHEAD);\r
- if (was_enabled)\r
- enable_mmap(m);\r
- if (mem == 0)\r
- return 0;\r
-\r
- if (PREACTION(m)) return 0;\r
- p = mem2chunk(mem);\r
- remainder_size = chunksize(p);\r
-\r
- assert(!is_mmapped(p));\r
-\r
- if (opts & 0x2) { /* optionally clear the elements */\r
- memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);\r
- }\r
-\r
- /* If not provided, allocate the pointer array as final part of chunk */\r
- if (marray == 0) {\r
- size_t array_chunk_size;\r
- array_chunk = chunk_plus_offset(p, contents_size);\r
- array_chunk_size = remainder_size - contents_size;\r
- marray = (void**) (chunk2mem(array_chunk));\r
- set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);\r
- remainder_size = contents_size;\r
- }\r
-\r
- /* split out elements */\r
- for (i = 0; ; ++i) {\r
- marray[i] = chunk2mem(p);\r
- if (i != n_elements-1) {\r
- if (element_size != 0)\r
- size = element_size;\r
- else\r
- size = request2size(sizes[i]);\r
- remainder_size -= size;\r
- set_size_and_pinuse_of_inuse_chunk(m, p, size);\r
- p = chunk_plus_offset(p, size);\r
- }\r
- else { /* the final element absorbs any overallocation slop */\r
- set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);\r
- break;\r
- }\r
- }\r
-\r
-#if DEBUG\r
- if (marray != chunks) {\r
- /* final element must have exactly exhausted chunk */\r
- if (element_size != 0) {\r
- assert(remainder_size == element_size);\r
- }\r
- else {\r
- assert(remainder_size == request2size(sizes[i]));\r
- }\r
- check_inuse_chunk(m, mem2chunk(marray));\r
- }\r
- for (i = 0; i != n_elements; ++i)\r
- check_inuse_chunk(m, mem2chunk(marray[i]));\r
-\r
-#endif /* DEBUG */\r
-\r
- POSTACTION(m);\r
- return marray;\r
-}\r
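-\r
-/*\r
- Usage sketch: the independent_X entry points defined later in this\r
- file relay here. independent_calloc(n, elem_size, chunks) becomes\r
- ialloc(gm, n, &elem_size, 3, chunks) (same size, zeroed), while\r
- independent_comalloc(n, sizes, chunks) becomes\r
- ialloc(gm, n, sizes, 0, chunks).\r
-*/\r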
-\r
-\r
-/* -------------------------- public routines ---------------------------- */\r
-\r
-#if !ONLY_MSPACES\r
-\r
-void* dlmalloc(size_t bytes) {\r
- /*\r
- Basic algorithm:\r
- If a small request (< 256 bytes minus per-chunk overhead):\r
- 1. If one exists, use a remainderless chunk in associated smallbin.\r
- (Remainderless means that there are too few excess bytes to\r
- represent as a chunk.)\r
- 2. If it is big enough, use the dv chunk, which is normally the\r
- chunk adjacent to the one used for the most recent small request.\r
- 3. If one exists, split the smallest available chunk in a bin,\r
- saving remainder in dv.\r
- 4. If it is big enough, use the top chunk.\r
- 5. If available, get memory from system and use it.\r
- Otherwise, for a large request:\r
- 1. Find the smallest available binned chunk that fits, and use it\r
- if it is better fitting than dv chunk, splitting if necessary.\r
- 2. If better fitting than any binned chunk, use the dv chunk.\r
- 3. If it is big enough, use the top chunk.\r
- 4. If request size >= mmap threshold, try to directly mmap this chunk.\r
- 5. If available, get memory from system and use it\r
-\r
- The ugly goto's here ensure that postaction occurs along all paths.\r
- */\r
-\r
- if (!PREACTION(gm)) {\r
- void* mem;\r
- size_t nb;\r
- if (bytes <= MAX_SMALL_REQUEST) {\r
- bindex_t idx;\r
- binmap_t smallbits;\r
- nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);\r
- idx = small_index(nb);\r
- smallbits = gm->smallmap >> idx;\r
-\r
- if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */\r
- mchunkptr b, p;\r
- idx += ~smallbits & 1; /* Uses next bin if idx empty */\r
- b = smallbin_at(gm, idx);\r
- p = b->fd;\r
- assert(chunksize(p) == small_index2size(idx));\r
- unlink_first_small_chunk(gm, b, p, idx);\r
- set_inuse_and_pinuse(gm, p, small_index2size(idx));\r
- mem = chunk2mem(p);\r
- check_malloced_chunk(gm, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- else if (nb > gm->dvsize) {\r
- if (smallbits != 0) { /* Use chunk in next nonempty smallbin */\r
- mchunkptr b, p, r;\r
- size_t rsize;\r
- bindex_t i;\r
- binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));\r
- binmap_t leastbit = least_bit(leftbits);\r
- compute_bit2idx(leastbit, i);\r
- b = smallbin_at(gm, i);\r
- p = b->fd;\r
- assert(chunksize(p) == small_index2size(i));\r
- unlink_first_small_chunk(gm, b, p, i);\r
- rsize = small_index2size(i) - nb;\r
- /* Fit here cannot be remainderless if 4byte sizes */\r
- if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)\r
- set_inuse_and_pinuse(gm, p, small_index2size(i));\r
- else {\r
- set_size_and_pinuse_of_inuse_chunk(gm, p, nb);\r
- r = chunk_plus_offset(p, nb);\r
- set_size_and_pinuse_of_free_chunk(r, rsize);\r
- replace_dv(gm, r, rsize);\r
- }\r
- mem = chunk2mem(p);\r
- check_malloced_chunk(gm, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {\r
- check_malloced_chunk(gm, mem, nb);\r
- goto postaction;\r
- }\r
- }\r
- }\r
- else if (bytes >= MAX_REQUEST)\r
- nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */\r
- else {\r
- nb = pad_request(bytes);\r
- if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {\r
- check_malloced_chunk(gm, mem, nb);\r
- goto postaction;\r
- }\r
- }\r
-\r
- if (nb <= gm->dvsize) {\r
- size_t rsize = gm->dvsize - nb;\r
- mchunkptr p = gm->dv;\r
- if (rsize >= MIN_CHUNK_SIZE) { /* split dv */\r
- mchunkptr r = gm->dv = chunk_plus_offset(p, nb);\r
- gm->dvsize = rsize;\r
- set_size_and_pinuse_of_free_chunk(r, rsize);\r
- set_size_and_pinuse_of_inuse_chunk(gm, p, nb);\r
- }\r
- else { /* exhaust dv */\r
- size_t dvs = gm->dvsize;\r
- gm->dvsize = 0;\r
- gm->dv = 0;\r
- set_inuse_and_pinuse(gm, p, dvs);\r
- }\r
- mem = chunk2mem(p);\r
- check_malloced_chunk(gm, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- else if (nb < gm->topsize) { /* Split top */\r
- size_t rsize = gm->topsize -= nb;\r
- mchunkptr p = gm->top;\r
- mchunkptr r = gm->top = chunk_plus_offset(p, nb);\r
- r->head = rsize | PINUSE_BIT;\r
- set_size_and_pinuse_of_inuse_chunk(gm, p, nb);\r
- mem = chunk2mem(p);\r
- check_top_chunk(gm, gm->top);\r
- check_malloced_chunk(gm, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- mem = sys_alloc(gm, nb);\r
-\r
- postaction:\r
- POSTACTION(gm);\r
- return mem;\r
- }\r
-\r
- return 0;\r
-}\r
-\r
-void dlfree(void* mem) {\r
- /*\r
- Consolidate freed chunks with preceding or succeeding bordering\r
- free chunks, if they exist, and then place in a bin. Intermixed\r
- with special cases for top, dv, mmapped chunks, and usage errors.\r
- */\r
-\r
- if (mem != 0) {\r
- mchunkptr p = mem2chunk(mem);\r
-#if FOOTERS\r
- mstate fm = get_mstate_for(p);\r
- if (!ok_magic(fm)) {\r
- USAGE_ERROR_ACTION(fm, p);\r
- return;\r
- }\r
-#else /* FOOTERS */\r
-#define fm gm\r
-#endif /* FOOTERS */\r
- if (!PREACTION(fm)) {\r
- check_inuse_chunk(fm, p);\r
- if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {\r
- size_t psize = chunksize(p);\r
- mchunkptr next = chunk_plus_offset(p, psize);\r
- if (!pinuse(p)) {\r
- size_t prevsize = p->prev_foot;\r
- if ((prevsize & IS_MMAPPED_BIT) != 0) {\r
- prevsize &= ~IS_MMAPPED_BIT;\r
- psize += prevsize + MMAP_FOOT_PAD;\r
- if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)\r
- fm->footprint -= psize;\r
- goto postaction;\r
- }\r
- else {\r
- mchunkptr prev = chunk_minus_offset(p, prevsize);\r
- psize += prevsize;\r
- p = prev;\r
- if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */\r
- if (p != fm->dv) {\r
- unlink_chunk(fm, p, prevsize);\r
- }\r
- else if ((next->head & INUSE_BITS) == INUSE_BITS) {\r
- fm->dvsize = psize;\r
- set_free_with_pinuse(p, psize, next);\r
- goto postaction;\r
- }\r
- }\r
- else\r
- goto erroraction;\r
- }\r
- }\r
-\r
- if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {\r
- if (!cinuse(next)) { /* consolidate forward */\r
- if (next == fm->top) {\r
- size_t tsize = fm->topsize += psize;\r
- fm->top = p;\r
- p->head = tsize | PINUSE_BIT;\r
- if (p == fm->dv) {\r
- fm->dv = 0;\r
- fm->dvsize = 0;\r
- }\r
- if (should_trim(fm, tsize))\r
- sys_trim(fm, 0);\r
- goto postaction;\r
- }\r
- else if (next == fm->dv) {\r
- size_t dsize = fm->dvsize += psize;\r
- fm->dv = p;\r
- set_size_and_pinuse_of_free_chunk(p, dsize);\r
- goto postaction;\r
- }\r
- else {\r
- size_t nsize = chunksize(next);\r
- psize += nsize;\r
- unlink_chunk(fm, next, nsize);\r
- set_size_and_pinuse_of_free_chunk(p, psize);\r
- if (p == fm->dv) {\r
- fm->dvsize = psize;\r
- goto postaction;\r
- }\r
- }\r
- }\r
- else\r
- set_free_with_pinuse(p, psize, next);\r
- insert_chunk(fm, p, psize);\r
- check_free_chunk(fm, p);\r
- goto postaction;\r
- }\r
- }\r
- erroraction:\r
- USAGE_ERROR_ACTION(fm, p);\r
- postaction:\r
- POSTACTION(fm);\r
- }\r
- }\r
-#if !FOOTERS\r
-#undef fm\r
-#endif /* FOOTERS */\r
-}\r
-\r
-void* dlcalloc(size_t n_elements, size_t elem_size) {\r
- void* mem;\r
- size_t req = 0;\r
- if (n_elements != 0) {\r
- req = n_elements * elem_size;\r
- if (((n_elements | elem_size) & ~(size_t)0xffff) &&\r
- (req / n_elements != elem_size))\r
- req = MAX_SIZE_T; /* force downstream failure on overflow */\r
- }\r
- mem = dlmalloc(req);\r
- if (mem != 0 && calloc_must_clear(mem2chunk(mem)))\r
- memset(mem, 0, req);\r
- return mem;\r
-}\r
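-\r
-/*\r
- A note on the overflow guard in dlcalloc above: the division is\r
- comparatively expensive, so it only runs when either operand has bits\r
- above the low 16, i.e. when the product could actually overflow. A\r
- minimal standalone sketch of the same idea (illustrative only, not\r
- part of this file's API):\r
-\r
- static int checked_mul(size_t a, size_t b, size_t* out) {\r
-   *out = a * b;\r
-   if (((a | b) & ~(size_t)0xffff) && a != 0 && *out / a != b)\r
-     return 0;               // overflowed\r
-   return 1;                 // product fits in a size_t\r
- }\r
-*/\r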
-\r
-void* dlrealloc(void* oldmem, size_t bytes) {\r
- if (oldmem == 0)\r
- return dlmalloc(bytes);\r
-#ifdef REALLOC_ZERO_BYTES_FREES\r
- if (bytes == 0) {\r
- dlfree(oldmem);\r
- return 0;\r
- }\r
-#endif /* REALLOC_ZERO_BYTES_FREES */\r
- else {\r
-#if ! FOOTERS\r
- mstate m = gm;\r
-#else /* FOOTERS */\r
- mstate m = get_mstate_for(mem2chunk(oldmem));\r
- if (!ok_magic(m)) {\r
- USAGE_ERROR_ACTION(m, oldmem);\r
- return 0;\r
- }\r
-#endif /* FOOTERS */\r
- return internal_realloc(m, oldmem, bytes);\r
- }\r
-}\r
-\r
-void* dlmemalign(size_t alignment, size_t bytes) {\r
- return internal_memalign(gm, alignment, bytes);\r
-}\r
-\r
-void** dlindependent_calloc(size_t n_elements, size_t elem_size,\r
- void* chunks[]) {\r
- size_t sz = elem_size; /* serves as 1-element array */\r
- return ialloc(gm, n_elements, &sz, 3, chunks);\r
-}\r
-\r
-void** dlindependent_comalloc(size_t n_elements, size_t sizes[],\r
- void* chunks[]) {\r
- return ialloc(gm, n_elements, sizes, 0, chunks);\r
-}\r
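-\r
-/*\r
- An illustrative use of the bulk allocators above (a sketch, not part\r
- of this file's API): independent_calloc suits allocating many\r
- identically sized, zeroed nodes in one shot. Each node remains\r
- individually freeable, and the returned pointer array may be freed\r
- as soon as it is no longer needed.\r
-\r
- struct Node { int item; struct Node* next; };\r
-\r
- struct Node* build_list(int n) {\r
-   struct Node** pool;\r
-   struct Node* first;\r
-   int i;\r
-   if (n <= 0) return 0;\r
-   pool = (struct Node**) dlindependent_calloc(n, sizeof(struct Node), 0);\r
-   if (pool == 0) return 0;\r
-   first = pool[0];\r
-   for (i = 0; i != n-1; ++i)   // chain the zeroed nodes together\r
-     pool[i]->next = pool[i+1];\r
-   dlfree(pool);                // the array can go; the nodes live on\r
-   return first;\r
- }\r
-*/\r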
-\r
-void* dlvalloc(size_t bytes) {\r
- size_t pagesz;\r
- init_mparams();\r
- pagesz = mparams.page_size;\r
- return dlmemalign(pagesz, bytes);\r
-}\r
-\r
-void* dlpvalloc(size_t bytes) {\r
- size_t pagesz;\r
- init_mparams();\r
- pagesz = mparams.page_size;\r
- return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));\r
-}\r
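-\r
-/*\r
- The rounding expression in dlpvalloc relies on the page size being a\r
- power of two: adding pagesz-1 and masking off the low bits rounds up\r
- to the next page multiple. For example, with 4096-byte pages a\r
- request for 5000 bytes becomes (5000 + 4095) & ~4095 == 8192, i.e.\r
- exactly two pages.\r
-*/\r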
-\r
-int dlmalloc_trim(size_t pad) {\r
- int result = 0;\r
- if (!PREACTION(gm)) {\r
- result = sys_trim(gm, pad);\r
- POSTACTION(gm);\r
- }\r
- return result;\r
-}\r
-\r
-size_t dlmalloc_footprint(void) {\r
- return gm->footprint;\r
-}\r
-\r
-size_t dlmalloc_max_footprint(void) {\r
- return gm->max_footprint;\r
-}\r
-\r
-#if !NO_MALLINFO\r
-struct mallinfo dlmallinfo(void) {\r
- return internal_mallinfo(gm);\r
-}\r
-#endif /* NO_MALLINFO */\r
-\r
-void dlmalloc_stats() {\r
- internal_malloc_stats(gm);\r
-}\r
-\r
-size_t dlmalloc_usable_size(void* mem) {\r
- if (mem != 0) {\r
- mchunkptr p = mem2chunk(mem);\r
- if (cinuse(p))\r
- return chunksize(p) - overhead_for(p);\r
- }\r
- return 0;\r
-}\r
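-\r
-/*\r
- Illustrative only: the usable size reported for a live block is at\r
- least the size requested, and the extra slack may be used safely.\r
-\r
- void* p = dlmalloc(100);\r
- if (p != 0)\r
-   assert(dlmalloc_usable_size(p) >= 100);\r
- dlfree(p);\r
-*/\r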
-\r
-int dlmallopt(int param_number, int value) {\r
- return change_mparam(param_number, value);\r
-}\r
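-\r
-/*\r
- A sketch of run-time tuning via dlmallopt, assuming the standard\r
- dlmalloc parameter names (M_TRIM_THRESHOLD, M_GRANULARITY,\r
- M_MMAP_THRESHOLD) defined earlier in this file; dlmallopt returns\r
- nonzero on success:\r
-\r
- dlmallopt(M_TRIM_THRESHOLD, 128*1024);  // trim sooner\r
- dlmallopt(M_MMAP_THRESHOLD, 512*1024);  // mmap only very large blocks\r
-*/\r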
-\r
-#endif /* !ONLY_MSPACES */\r
-\r
-/* ----------------------------- user mspaces ---------------------------- */\r
-\r
-#if MSPACES\r
-\r
-static mstate init_user_mstate(char* tbase, size_t tsize) {\r
- size_t msize = pad_request(sizeof(struct malloc_state));\r
- mchunkptr mn;\r
- mchunkptr msp = align_as_chunk(tbase);\r
- mstate m = (mstate)(chunk2mem(msp));\r
- memset(m, 0, msize);\r
- INITIAL_LOCK(&m->mutex);\r
- msp->head = (msize|PINUSE_BIT|CINUSE_BIT);\r
- m->seg.base = m->least_addr = tbase;\r
- m->seg.size = m->footprint = m->max_footprint = tsize;\r
- m->magic = mparams.magic;\r
- m->mflags = mparams.default_mflags;\r
- m->nedpool = 0;\r
- disable_contiguous(m);\r
- init_bins(m);\r
- mn = next_chunk(mem2chunk(m));\r
- init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);\r
- check_top_chunk(m, m->top);\r
- return m;\r
-}\r
-\r
-mspace create_mspace(size_t capacity, int locked) {\r
- mstate m = 0;\r
- size_t msize = pad_request(sizeof(struct malloc_state));\r
- init_mparams(); /* Ensure pagesize etc initialized */\r
-\r
- if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {\r
- size_t rs = ((capacity == 0)? mparams.granularity :\r
- (capacity + TOP_FOOT_SIZE + msize));\r
- size_t tsize = granularity_align(rs);\r
- char* tbase = (char*)(CALL_MMAP(tsize));\r
- if (tbase != CMFAIL) {\r
- m = init_user_mstate(tbase, tsize);\r
- m->seg.sflags = IS_MMAPPED_BIT;\r
- set_lock(m, locked);\r
- }\r
- }\r
- return (mspace)m;\r
-}\r
-\r
-mspace create_mspace_with_base(void* base, size_t capacity, int locked) {\r
- mstate m = 0;\r
- size_t msize = pad_request(sizeof(struct malloc_state));\r
- init_mparams(); /* Ensure pagesize etc initialized */\r
-\r
- if (capacity > msize + TOP_FOOT_SIZE &&\r
- capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {\r
- m = init_user_mstate((char*)base, capacity);\r
- m->seg.sflags = EXTERN_BIT;\r
- set_lock(m, locked);\r
- }\r
- return (mspace)m;\r
-}\r
-\r
-size_t destroy_mspace(mspace msp) {\r
- size_t freed = 0;\r
- mstate ms = (mstate)msp;\r
- if (ok_magic(ms)) {\r
- msegmentptr sp = &ms->seg;\r
- while (sp != 0) {\r
- char* base = sp->base;\r
- size_t size = sp->size;\r
- flag_t flag = sp->sflags;\r
- sp = sp->next;\r
- if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&\r
- CALL_MUNMAP(base, size) == 0)\r
- freed += size;\r
- }\r
- }\r
- else {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- }\r
- return freed;\r
-}\r
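-\r
-/*\r
- Typical mspace lifecycle (a sketch): each pool of allocations gets\r
- its own independent heap, and destroying the mspace releases\r
- everything still allocated in it at once.\r
-\r
- mspace ms = create_mspace(0, 0);    // default capacity, no locking\r
- void* a = mspace_malloc(ms, 128);\r
- void* b = mspace_calloc(ms, 10, 64);\r
- mspace_free(ms, a);                 // optional; destroy reclaims all\r
- destroy_mspace(ms);                 // b is released here too\r
-*/\r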
-\r
-/*\r
- mspace versions of routines are near-clones of the global\r
- versions. This is not so nice but better than the alternatives.\r
-*/\r
-\r
-\r
-void* mspace_malloc(mspace msp, size_t bytes) {\r
- mstate ms = (mstate)msp;\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- return 0;\r
- }\r
- if (!PREACTION(ms)) {\r
- void* mem;\r
- size_t nb;\r
- if (bytes <= MAX_SMALL_REQUEST) {\r
- bindex_t idx;\r
- binmap_t smallbits;\r
- nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);\r
- idx = small_index(nb);\r
- smallbits = ms->smallmap >> idx;\r
-\r
- if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */\r
- mchunkptr b, p;\r
- idx += ~smallbits & 1; /* Uses next bin if idx empty */\r
- b = smallbin_at(ms, idx);\r
- p = b->fd;\r
- assert(chunksize(p) == small_index2size(idx));\r
- unlink_first_small_chunk(ms, b, p, idx);\r
- set_inuse_and_pinuse(ms, p, small_index2size(idx));\r
- mem = chunk2mem(p);\r
- check_malloced_chunk(ms, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- else if (nb > ms->dvsize) {\r
- if (smallbits != 0) { /* Use chunk in next nonempty smallbin */\r
- mchunkptr b, p, r;\r
- size_t rsize;\r
- bindex_t i;\r
- binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));\r
- binmap_t leastbit = least_bit(leftbits);\r
- compute_bit2idx(leastbit, i);\r
- b = smallbin_at(ms, i);\r
- p = b->fd;\r
- assert(chunksize(p) == small_index2size(i));\r
- unlink_first_small_chunk(ms, b, p, i);\r
- rsize = small_index2size(i) - nb;\r
- /* Fit here cannot be remainderless if 4byte sizes */\r
- if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)\r
- set_inuse_and_pinuse(ms, p, small_index2size(i));\r
- else {\r
- set_size_and_pinuse_of_inuse_chunk(ms, p, nb);\r
- r = chunk_plus_offset(p, nb);\r
- set_size_and_pinuse_of_free_chunk(r, rsize);\r
- replace_dv(ms, r, rsize);\r
- }\r
- mem = chunk2mem(p);\r
- check_malloced_chunk(ms, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {\r
- check_malloced_chunk(ms, mem, nb);\r
- goto postaction;\r
- }\r
- }\r
- }\r
- else if (bytes >= MAX_REQUEST)\r
- nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */\r
- else {\r
- nb = pad_request(bytes);\r
- if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {\r
- check_malloced_chunk(ms, mem, nb);\r
- goto postaction;\r
- }\r
- }\r
-\r
- if (nb <= ms->dvsize) {\r
- size_t rsize = ms->dvsize - nb;\r
- mchunkptr p = ms->dv;\r
- if (rsize >= MIN_CHUNK_SIZE) { /* split dv */\r
- mchunkptr r = ms->dv = chunk_plus_offset(p, nb);\r
- ms->dvsize = rsize;\r
- set_size_and_pinuse_of_free_chunk(r, rsize);\r
- set_size_and_pinuse_of_inuse_chunk(ms, p, nb);\r
- }\r
- else { /* exhaust dv */\r
- size_t dvs = ms->dvsize;\r
- ms->dvsize = 0;\r
- ms->dv = 0;\r
- set_inuse_and_pinuse(ms, p, dvs);\r
- }\r
- mem = chunk2mem(p);\r
- check_malloced_chunk(ms, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- else if (nb < ms->topsize) { /* Split top */\r
- size_t rsize = ms->topsize -= nb;\r
- mchunkptr p = ms->top;\r
- mchunkptr r = ms->top = chunk_plus_offset(p, nb);\r
- r->head = rsize | PINUSE_BIT;\r
- set_size_and_pinuse_of_inuse_chunk(ms, p, nb);\r
- mem = chunk2mem(p);\r
- check_top_chunk(ms, ms->top);\r
- check_malloced_chunk(ms, mem, nb);\r
- goto postaction;\r
- }\r
-\r
- mem = sys_alloc(ms, nb);\r
-\r
- postaction:\r
- POSTACTION(ms);\r
- return mem;\r
- }\r
-\r
- return 0;\r
-}\r
-\r
-void mspace_free(mspace msp, void* mem) {\r
- if (mem != 0) {\r
- mchunkptr p = mem2chunk(mem);\r
-#if FOOTERS\r
- mstate fm = get_mstate_for(p);\r
-#else /* FOOTERS */\r
- mstate fm = (mstate)msp;\r
-#endif /* FOOTERS */\r
- if (!ok_magic(fm)) {\r
- USAGE_ERROR_ACTION(fm, p);\r
- return;\r
- }\r
- if (!PREACTION(fm)) {\r
- check_inuse_chunk(fm, p);\r
- if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {\r
- size_t psize = chunksize(p);\r
- mchunkptr next = chunk_plus_offset(p, psize);\r
- if (!pinuse(p)) {\r
- size_t prevsize = p->prev_foot;\r
- if ((prevsize & IS_MMAPPED_BIT) != 0) {\r
- prevsize &= ~IS_MMAPPED_BIT;\r
- psize += prevsize + MMAP_FOOT_PAD;\r
- if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)\r
- fm->footprint -= psize;\r
- goto postaction;\r
- }\r
- else {\r
- mchunkptr prev = chunk_minus_offset(p, prevsize);\r
- psize += prevsize;\r
- p = prev;\r
- if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */\r
- if (p != fm->dv) {\r
- unlink_chunk(fm, p, prevsize);\r
- }\r
- else if ((next->head & INUSE_BITS) == INUSE_BITS) {\r
- fm->dvsize = psize;\r
- set_free_with_pinuse(p, psize, next);\r
- goto postaction;\r
- }\r
- }\r
- else\r
- goto erroraction;\r
- }\r
- }\r
-\r
- if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {\r
- if (!cinuse(next)) { /* consolidate forward */\r
- if (next == fm->top) {\r
- size_t tsize = fm->topsize += psize;\r
- fm->top = p;\r
- p->head = tsize | PINUSE_BIT;\r
- if (p == fm->dv) {\r
- fm->dv = 0;\r
- fm->dvsize = 0;\r
- }\r
- if (should_trim(fm, tsize))\r
- sys_trim(fm, 0);\r
- goto postaction;\r
- }\r
- else if (next == fm->dv) {\r
- size_t dsize = fm->dvsize += psize;\r
- fm->dv = p;\r
- set_size_and_pinuse_of_free_chunk(p, dsize);\r
- goto postaction;\r
- }\r
- else {\r
- size_t nsize = chunksize(next);\r
- psize += nsize;\r
- unlink_chunk(fm, next, nsize);\r
- set_size_and_pinuse_of_free_chunk(p, psize);\r
- if (p == fm->dv) {\r
- fm->dvsize = psize;\r
- goto postaction;\r
- }\r
- }\r
- }\r
- else\r
- set_free_with_pinuse(p, psize, next);\r
- insert_chunk(fm, p, psize);\r
- check_free_chunk(fm, p);\r
- goto postaction;\r
- }\r
- }\r
- erroraction:\r
- USAGE_ERROR_ACTION(fm, p);\r
- postaction:\r
- POSTACTION(fm);\r
- }\r
- }\r
-}\r
-\r
-void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {\r
- void* mem;\r
- size_t req = 0;\r
- mstate ms = (mstate)msp;\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- return 0;\r
- }\r
- if (n_elements != 0) {\r
- req = n_elements * elem_size;\r
- if (((n_elements | elem_size) & ~(size_t)0xffff) &&\r
- (req / n_elements != elem_size))\r
- req = MAX_SIZE_T; /* force downstream failure on overflow */\r
- }\r
- mem = internal_malloc(ms, req);\r
- if (mem != 0 && calloc_must_clear(mem2chunk(mem)))\r
- memset(mem, 0, req);\r
- return mem;\r
-}\r
-\r
-void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {\r
- if (oldmem == 0)\r
- return mspace_malloc(msp, bytes);\r
-#ifdef REALLOC_ZERO_BYTES_FREES\r
- if (bytes == 0) {\r
- mspace_free(msp, oldmem);\r
- return 0;\r
- }\r
-#endif /* REALLOC_ZERO_BYTES_FREES */\r
- else {\r
-#if FOOTERS\r
- mchunkptr p = mem2chunk(oldmem);\r
- mstate ms = get_mstate_for(p);\r
-#else /* FOOTERS */\r
- mstate ms = (mstate)msp;\r
-#endif /* FOOTERS */\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- return 0;\r
- }\r
- return internal_realloc(ms, oldmem, bytes);\r
- }\r
-}\r
-\r
-void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {\r
- mstate ms = (mstate)msp;\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- return 0;\r
- }\r
- return internal_memalign(ms, alignment, bytes);\r
-}\r
-\r
-void** mspace_independent_calloc(mspace msp, size_t n_elements,\r
- size_t elem_size, void* chunks[]) {\r
- size_t sz = elem_size; /* serves as 1-element array */\r
- mstate ms = (mstate)msp;\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- return 0;\r
- }\r
- return ialloc(ms, n_elements, &sz, 3, chunks);\r
-}\r
-\r
-void** mspace_independent_comalloc(mspace msp, size_t n_elements,\r
- size_t sizes[], void* chunks[]) {\r
- mstate ms = (mstate)msp;\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- return 0;\r
- }\r
- return ialloc(ms, n_elements, sizes, 0, chunks);\r
-}\r
-\r
-int mspace_trim(mspace msp, size_t pad) {\r
- int result = 0;\r
- mstate ms = (mstate)msp;\r
- if (ok_magic(ms)) {\r
- if (!PREACTION(ms)) {\r
- result = sys_trim(ms, pad);\r
- POSTACTION(ms);\r
- }\r
- }\r
- else {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- }\r
- return result;\r
-}\r
-\r
-void mspace_malloc_stats(mspace msp) {\r
- mstate ms = (mstate)msp;\r
- if (ok_magic(ms)) {\r
- internal_malloc_stats(ms);\r
- }\r
- else {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- }\r
-}\r
-\r
-size_t mspace_footprint(mspace msp) {\r
- size_t result = 0;\r
- mstate ms = (mstate)msp;\r
- if (ok_magic(ms)) {\r
- result = ms->footprint;\r
- }\r
- else {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- }\r
- return result;\r
-}\r
-\r
-\r
-size_t mspace_max_footprint(mspace msp) {\r
- size_t result = 0;\r
- mstate ms = (mstate)msp;\r
- if (ok_magic(ms)) {\r
- result = ms->max_footprint;\r
- }\r
- else {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- }\r
- return result;\r
-}\r
-\r
-\r
-#if !NO_MALLINFO\r
-struct mallinfo mspace_mallinfo(mspace msp) {\r
- mstate ms = (mstate)msp;\r
- if (!ok_magic(ms)) {\r
- USAGE_ERROR_ACTION(ms,ms);\r
- }\r
- return internal_mallinfo(ms);\r
-}\r
-#endif /* NO_MALLINFO */\r
-\r
-int mspace_mallopt(int param_number, int value) {\r
- return change_mparam(param_number, value);\r
-}\r
-\r
-#endif /* MSPACES */\r
-\r
-/* -------------------- Alternative MORECORE functions ------------------- */\r
-\r
-/*\r
- Guidelines for creating a custom version of MORECORE:\r
-\r
- * For best performance, MORECORE should allocate in multiples of pagesize.\r
- * MORECORE may allocate more memory than requested. (Or even less,\r
- but this will usually result in a malloc failure.)\r
- * MORECORE must not allocate memory when given argument zero, but\r
- instead return one past the end address of memory from previous\r
- nonzero call.\r
- * For best performance, consecutive calls to MORECORE with positive\r
- arguments should return increasing addresses, indicating that\r
- space has been contiguously extended.\r
- * Even though consecutive calls to MORECORE need not return contiguous\r
- addresses, it must be OK for malloc'ed chunks to span multiple\r
- regions in those cases where they do happen to be contiguous.\r
- * MORECORE need not handle negative arguments -- it may instead\r
- just return MFAIL when given negative arguments.\r
- Negative arguments are always multiples of pagesize. MORECORE\r
- must not misinterpret negative args as large positive unsigned\r
- args. You can suppress all such calls from even occurring by defining\r
- MORECORE_CANNOT_TRIM.\r
-\r
- As an example alternative MORECORE, here is a custom allocator\r
- kindly contributed for pre-OSX macOS. It uses virtually but not\r
- necessarily physically contiguous non-paged memory (locked in,\r
- present and won't get swapped out). You can use it by uncommenting\r
- this section, adding some #includes, and setting up the appropriate\r
- defines above:\r
-\r
- #define MORECORE osMoreCore\r
-\r
- There is also a shutdown routine that should somehow be called for\r
- cleanup upon program exit.\r
-\r
- #define MAX_POOL_ENTRIES 100\r
- #define MINIMUM_MORECORE_SIZE (64 * 1024U)\r
- static int next_os_pool;\r
- void *our_os_pools[MAX_POOL_ENTRIES];\r
-\r
- void *osMoreCore(int size)\r
- {\r
- void *ptr = 0;\r
- static void *sbrk_top = 0;\r
-\r
- if (size > 0)\r
- {\r
- if (size < MINIMUM_MORECORE_SIZE)\r
- size = MINIMUM_MORECORE_SIZE;\r
- if (CurrentExecutionLevel() == kTaskLevel)\r
- ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);\r
- if (ptr == 0)\r
- {\r
- return (void *) MFAIL;\r
- }\r
- // save ptrs so they can be freed during cleanup\r
- our_os_pools[next_os_pool] = ptr;\r
- next_os_pool++;\r
- ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);\r
- sbrk_top = (char *) ptr + size;\r
- return ptr;\r
- }\r
- else if (size < 0)\r
- {\r
- // we don't currently support shrink behavior\r
- return (void *) MFAIL;\r
- }\r
- else\r
- {\r
- return sbrk_top;\r
- }\r
- }\r
-\r
- // cleanup any allocated memory pools\r
- // called as last thing before shutting down driver\r
-\r
- void osCleanupMem(void)\r
- {\r
- void **ptr;\r
-\r
- for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)\r
- if (*ptr)\r
- {\r
- PoolDeallocate(*ptr);\r
- *ptr = 0;\r
- }\r
- }\r
-\r
-*/\r
-\r
-\r
-/* -----------------------------------------------------------------------\r
-History:\r
- V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)\r
- * Add max_footprint functions\r
- * Ensure all appropriate literals are size_t\r
- * Fix conditional compilation problem for some #define settings\r
- * Avoid concatenating segments with the one provided\r
- in create_mspace_with_base\r
- * Rename some variables to avoid compiler shadowing warnings\r
- * Use explicit lock initialization.\r
- * Better handling of sbrk interference.\r
- * Simplify and fix segment insertion, trimming and mspace_destroy\r
- * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x\r
- * Thanks especially to Dennis Flanagan for help on these.\r
-\r
- V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)\r
- * Fix memalign brace error.\r
-\r
- V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)\r
- * Fix improper #endif nesting in C++\r
- * Add explicit casts needed for C++\r
-\r
- V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)\r
- * Use trees for large bins\r
- * Support mspaces\r
- * Use segments to unify sbrk-based and mmap-based system allocation,\r
- removing need for emulation on most platforms without sbrk.\r
- * Default safety checks\r
- * Optional footer checks. Thanks to William Robertson for the idea.\r
- * Internal code refactoring\r
- * Incorporate suggestions and platform-specific changes.\r
- Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,\r
- Aaron Bachmann, Emery Berger, and others.\r
- * Speed up non-fastbin processing enough to remove fastbins.\r
- * Remove useless cfree() to avoid conflicts with other apps.\r
- * Remove internal memcpy, memset. Compilers handle builtins better.\r
- * Remove some options that no one ever used and rename others.\r
-\r
- V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)\r
- * Fix malloc_state bitmap array misdeclaration\r
-\r
- V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)\r
- * Allow tuning of FIRST_SORTED_BIN_SIZE\r
- * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.\r
- * Better detection and support for non-contiguousness of MORECORE.\r
- Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger\r
- * Bypass most of malloc if no frees. Thanks To Emery Berger.\r
- * Fix freeing of old top non-contiguous chunk in sysmalloc.\r
- * Raised default trim and map thresholds to 256K.\r
- * Fix mmap-related #defines. Thanks to Lubos Lunak.\r
- * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.\r
- * Branch-free bin calculation\r
- * Default trim and mmap thresholds now 256K.\r
-\r
- V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)\r
- * Introduce independent_comalloc and independent_calloc.\r
- Thanks to Michael Pachos for motivation and help.\r
- * Make optional .h file available\r
- * Allow > 2GB requests on 32bit systems.\r
- * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.\r
- Thanks also to Andreas Mueller <a.mueller at paradatec.de>,\r
- and Anonymous.\r
- * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for\r
- helping test this.)\r
- * memalign: check alignment arg\r
- * realloc: don't try to shift chunks backwards, since this\r
- leads to more fragmentation in some programs and doesn't\r
- seem to help in any others.\r
- * Collect all cases in malloc requiring system memory into sysmalloc\r
- * Use mmap as backup to sbrk\r
- * Place all internal state in malloc_state\r
- * Introduce fastbins (although similar to 2.5.1)\r
- * Many minor tunings and cosmetic improvements\r
- * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK\r
- * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS\r
- Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.\r
- * Include errno.h to support default failure action.\r
-\r
- V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)\r
- * return null for negative arguments\r
- * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>\r
- * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'\r
- (e.g. WIN32 platforms)\r
- * Cleanup header file inclusion for WIN32 platforms\r
- * Cleanup code to avoid Microsoft Visual C++ compiler complaints\r
- * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing\r
- memory allocation routines\r
- * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)\r
- * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to\r
- usage of 'assert' in non-WIN32 code\r
- * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to\r
- avoid infinite loop\r
- * Always call 'fREe()' rather than 'free()'\r
-\r
- V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)\r
- * Fixed ordering problem with boundary-stamping\r
-\r
- V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)\r
- * Added pvalloc, as recommended by H.J. Liu\r
- * Added 64bit pointer support mainly from Wolfram Gloger\r
- * Added anonymously donated WIN32 sbrk emulation\r
- * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen\r
- * malloc_extend_top: fix mask error that caused wastage after\r
- foreign sbrks\r
- * Add linux mremap support code from HJ Liu\r
-\r
- V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)\r
- * Integrated most documentation with the code.\r
- * Add support for mmap, with help from\r
- Wolfram Gloger (Gloger@lrz.uni-muenchen.de).\r
- * Use last_remainder in more cases.\r
- * Pack bins using idea from colin@nyx10.cs.du.edu\r
- * Use ordered bins instead of best-fit threshold\r
- * Eliminate block-local decls to simplify tracing and debugging.\r
- * Support another case of realloc via move into top\r
- * Fix error occurring when initial sbrk_base not word-aligned.\r
- * Rely on page size for units instead of SBRK_UNIT to\r
- avoid surprises about sbrk alignment conventions.\r
- * Add mallinfo, mallopt. Thanks to Raymond Nijssen\r
- (raymond@es.ele.tue.nl) for the suggestion.\r
- * Add `pad' argument to malloc_trim and top_pad mallopt parameter.\r
- * More precautions for cases where other routines call sbrk,\r
- courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).\r
- * Added macros etc., allowing use in linux libc from\r
- H.J. Lu (hjl@gnu.ai.mit.edu)\r
- * Inverted this history list\r
-\r
- V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)\r
- * Re-tuned and fixed to behave more nicely with V2.6.0 changes.\r
- * Removed all preallocation code since under current scheme\r
- the work required to undo bad preallocations exceeds\r
- the work saved in good cases for most test programs.\r
- * No longer use return list or unconsolidated bins since\r
- no scheme using them consistently outperforms those that don't\r
- given above changes.\r
- * Use best fit for very large chunks to prevent some worst-cases.\r
- * Added some support for debugging\r
-\r
- V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)\r
- * Removed footers when chunks are in use. Thanks to\r
- Paul Wilson (wilson@cs.texas.edu) for the suggestion.\r
-\r
- V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)\r
- * Added malloc_trim, with help from Wolfram Gloger\r
- (wmglo@Dent.MED.Uni-Muenchen.DE).\r
-\r
- V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)\r
-\r
- V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)\r
- * realloc: try to expand in both directions\r
- * malloc: swap order of clean-bin strategy;\r
- * realloc: only conditionally expand backwards\r
- * Try not to scavenge used bins\r
- * Use bin counts as a guide to preallocation\r
- * Occasionally bin return list chunks in first scan\r
- * Add a few optimizations from colin@nyx10.cs.du.edu\r
-\r
- V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)\r
- * faster bin computation & slightly different binning\r
- * merged all consolidations to one part of malloc proper\r
- (eliminating old malloc_find_space & malloc_clean_bin)\r
- * Scan 2 returns chunks (not just 1)\r
- * Propagate failure in realloc if malloc returns 0\r
- * Add stuff to allow compilation on non-ANSI compilers\r
- from kpv@research.att.com\r
-\r
- V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)\r
- * removed potential for odd address access in prev_chunk\r
- * removed dependency on getpagesize.h\r
- * misc cosmetics and a bit more internal documentation\r
- * anticosmetics: mangled names in macros to evade debugger strangeness\r
- * tested on sparc, hp-700, dec-mips, rs6000\r
- with gcc & native cc (hp, dec only) allowing\r
- Detlefs & Zorn comparison study (in SIGPLAN Notices.)\r
-\r
- Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)\r
- * Based loosely on libg++-1.2X malloc. (It retains some of the overall\r
- structure of old version, but most details differ.)\r
- \r
-*/\r
+++ /dev/null
-/* Alternative malloc implementation for multiple threads without\r
-lock contention based on dlmalloc. (C) 2005-2006 Niall Douglas\r
-\r
-Boost Software License - Version 1.0 - August 17th, 2003\r
-\r
-Permission is hereby granted, free of charge, to any person or organization\r
-obtaining a copy of the software and accompanying documentation covered by\r
-this license (the "Software") to use, reproduce, display, distribute,\r
-execute, and transmit the Software, and to prepare derivative works of the\r
-Software, and to permit third-parties to whom the Software is furnished to\r
-do so, all subject to the following:\r
-\r
-The copyright notices in the Software and this entire statement, including\r
-the above license grant, this restriction and the following disclaimer,\r
-must be included in all copies of the Software, in whole or in part, and\r
-all derivative works of the Software, unless such copies or derivative\r
-works are solely in the form of machine-executable object code generated by\r
-a source language processor.\r
-\r
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r
-FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT\r
-SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\r
-FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\r
-ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\r
-DEALINGS IN THE SOFTWARE.\r
-*/\r
-\r
-#ifdef _MSC_VER\r
-/* Enable full aliasing on MSVC */\r
-/*#pragma optimize("a", on)*/\r
-#endif\r
-\r
-//#define CHECKVALIDTC\r
-\r
-#include "nedmalloc.h"\r
-#define MSPACES 1\r
-#define ONLY_MSPACES 1\r
-#ifndef USE_LOCKS\r
- #define USE_LOCKS 1\r
-#endif\r
-#define FOOTERS 1 /* Need to enable footers so frees lock the right mspace */\r
-#undef DEBUG /* dlmalloc wants DEBUG either 0 or 1 */\r
-#ifdef _DEBUG\r
- #define DEBUG 1\r
-#else\r
- #define DEBUG 0\r
-#endif\r
-#ifdef NDEBUG /* Disable assert checking on release builds */\r
- #undef DEBUG\r
-#endif\r
-/* The default granularity of 64KB means we spend too much time kernel-side */\r
-#define DEFAULT_GRANULARITY (1*1024*1024)\r
-#define MAX_RELEASE_CHECK_FREQUENCY 4095\r
-#include "malloc.c.h"\r
-\r
-/* The maximum number of threads that can use a pool concurrently */\r
-#define MAXTHREADSINPOOL 16\r
-/* The maximum size to be allocated from the thread cache */\r
-#define THREADCACHEMAX 8192\r
-#if 0\r
-/* The number of cache entries for finer grained bins. This is (topbitpos(THREADCACHEMAX)-4)*2 */\r
-#define THREADCACHEMAXBINS ((13-4)*2)\r
-#else\r
-/* The number of cache entries. This is (topbitpos(THREADCACHEMAX)-4) */\r
-#define THREADCACHEMAXBINS (13-4)\r
-#endif\r
-/* Point at which the free space in a thread cache is garbage collected */\r
-#define THREADCACHEMAXFREESPACE (512*1024)\r
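-\r
-/* Bin geometry, as a worked note: the bins hold blocks of power-of-two\r
- sizes from 1<<4 == 16 bytes up to THREADCACHEMAX == 8192 == 1<<13\r
- bytes, hence the 13-4 == 9 bins above. A request maps to the bin for\r
- the smallest power of two able to hold it, so e.g. a 100 byte request\r
- is served from the 128 byte bin (see size2binidx() below). */\r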
-\r
-\r
-#ifdef WIN32\r
- #define TLSVAR DWORD\r
- #define TLSALLOC(k) (*(k)=TlsAlloc(), TLS_OUT_OF_INDEXES==*(k))\r
- #define TLSFREE(k) (!TlsFree(k))\r
- #define TLSGET(k) TlsGetValue(k)\r
- #define TLSSET(k, a) (!TlsSetValue(k, a))\r
-#else\r
- #define TLSVAR pthread_key_t\r
- #define TLSALLOC(k) pthread_key_create(k, 0)\r
- #define TLSFREE(k) pthread_key_delete(k)\r
- #define TLSGET(k) pthread_getspecific(k)\r
- #define TLSSET(k, a) pthread_setspecific(k, a)\r
-#endif\r
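-\r
-/*\r
- The macros above unify the Win32 and pthreads thread-local storage\r
- APIs; TLSALLOC, TLSSET and TLSFREE all evaluate nonzero on failure.\r
- Illustrative use:\r
-\r
- TLSVAR key;\r
- if(TLSALLOC(&key)) abort();          // no TLS slot available\r
- TLSSET(key, (void *)(size_t) 42);\r
- assert(42==(size_t) TLSGET(key));\r
- TLSFREE(key);\r
-*/\r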
-\r
-#if defined(__cplusplus)\r
-#if !defined(NO_NED_NAMESPACE)\r
-namespace nedalloc {\r
-#else\r
-extern "C" {\r
-#endif\r
-#endif\r
-\r
-size_t nedblksize(void *mem) THROWSPEC\r
-{\r
- if(mem)\r
- {\r
- mchunkptr p=mem2chunk(mem);\r
- if(cinuse(p))\r
- return chunksize(p)-overhead_for(p);\r
- }\r
- return 0;\r
-}\r
-\r
-void nedsetvalue(void *v) THROWSPEC { nedpsetvalue(0, v); }\r
-void * nedmalloc(size_t size) THROWSPEC { return nedpmalloc(0, size); }\r
-void * nedcalloc(size_t no, size_t size) THROWSPEC { return nedpcalloc(0, no, size); }\r
-void * nedrealloc(void *mem, size_t size) THROWSPEC { return nedprealloc(0, mem, size); }\r
-void nedfree(void *mem) THROWSPEC { nedpfree(0, mem); }\r
-void * nedmemalign(size_t alignment, size_t bytes) THROWSPEC { return nedpmemalign(0, alignment, bytes); }\r
-#if !NO_MALLINFO\r
-struct mallinfo nedmallinfo(void) THROWSPEC { return nedpmallinfo(0); }\r
-#endif\r
-int nedmallopt(int parno, int value) THROWSPEC { return nedpmallopt(0, parno, value); }\r
-int nedmalloc_trim(size_t pad) THROWSPEC { return nedpmalloc_trim(0, pad); }\r
-void nedmalloc_stats() THROWSPEC { nedpmalloc_stats(0); }\r
-size_t nedmalloc_footprint() THROWSPEC { return nedpmalloc_footprint(0); }\r
-void **nedindependent_calloc(size_t elemsno, size_t elemsize, void **chunks) THROWSPEC { return nedpindependent_calloc(0, elemsno, elemsize, chunks); }\r
-void **nedindependent_comalloc(size_t elems, size_t *sizes, void **chunks) THROWSPEC { return nedpindependent_comalloc(0, elems, sizes, chunks); }\r
-\r
-struct threadcacheblk_t;\r
-typedef struct threadcacheblk_t threadcacheblk;\r
-struct threadcacheblk_t\r
-{ /* Keep this under 16 bytes on 32-bit systems and 32 bytes on 64-bit systems */\r
- unsigned int lastUsed, size;\r
- threadcacheblk *next, *prev;\r
-};\r
-typedef struct threadcache_t\r
-{\r
-#ifdef CHECKVALIDTC\r
- unsigned int magic1;\r
-#endif\r
- int mymspace; /* Last mspace entry this thread used. =-1 for not in use, =-2 for end of list */\r
- long threadid;\r
- unsigned int mallocs, frees, successes;\r
- size_t freeInCache; /* How much free space is stored in this cache */\r
- threadcacheblk *bins[(THREADCACHEMAXBINS+1)*2];\r
-#ifdef CHECKVALIDTC\r
- unsigned int magic2;\r
-#endif\r
- char cachesync[128]; /* Make sure it's nowhere near the next threadcache */\r
-} threadcache;\r
-struct nedpool_t\r
-{\r
- MLOCK_T mutex;\r
- void *uservalue;\r
- int threads; /* Max entries in m to use */\r
- threadcache *caches;\r
- TLSVAR mycache; /* Thread cache for this thread */\r
- mstate m[MAXTHREADSINPOOL+1]; /* mspace entries for this pool */\r
-};\r
-static nedpool syspool;\r
-\r
-static FORCEINLINE unsigned int size2binidx(size_t _size) THROWSPEC\r
-{ /* 8=1000 16=10000 20=10100 24=11000 32=100000 48=110000 4096=1000000000000 */\r
- unsigned int topbit, size=(unsigned int)(_size>>4);\r
- /* 16=1 20=1 24=1 32=10 48=11 64=100 96=110 128=1000 4096=100000000 */\r
-\r
-#if defined(__GNUC__)\r
- topbit = sizeof(size)*__CHAR_BIT__ - 1 - __builtin_clz(size);\r
-#elif defined(_MSC_VER) && _MSC_VER>=1300\r
- {\r
- unsigned long bsrTopBit;\r
-\r
- _BitScanReverse(&bsrTopBit, size);\r
-\r
- topbit = bsrTopBit;\r
- }\r
-#else\r
-#if 0\r
- union {\r
- unsigned asInt[2];\r
- double asDouble;\r
- };\r
- int n;\r
-\r
- asDouble = (double)size + 0.5;\r
- topbit = (asInt[!FOX_BIGENDIAN] >> 20) - 1023;\r
-#else\r
- {\r
- unsigned int x=size;\r
- x = x | (x >> 1);\r
- x = x | (x >> 2);\r
- x = x | (x >> 4);\r
- x = x | (x >> 8);\r
- x = x | (x >>16);\r
- x = ~x;\r
- x = x - ((x >> 1) & 0x55555555);\r
- x = (x & 0x33333333) + ((x >> 2) & 0x33333333);\r
- x = (x + (x >> 4)) & 0x0F0F0F0F;\r
- x = x + (x << 8);\r
- x = x + (x << 16);\r
- topbit=31 - (x >> 24);\r
- }\r
-#endif\r
-#endif\r
- return topbit;\r
-}\r
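-\r
-/* A worked trace of the portable fallback above: for _size==100,\r
- size==100>>4==6 (binary 110). The shift-or cascade smears the top bit\r
- downwards, giving x==7; after x=~x the SWAR popcount accumulates the\r
- set-bit count (here 29) into the top byte, so topbit==31-29==2, i.e.\r
- floor(log2(6)). The callers then derive the bin: bestsize==1<<(2+4)\r
- ==64, and since 100>64 the request is bumped to the 128 byte bin. */\r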
-\r
-\r
-static int InitCaches(nedpool *p, int from, int to) THROWSPEC\r
-{\r
- if(!p->caches)\r
- {\r
- p->caches=(threadcache *) mspace_malloc(p->m[0], (to+1)*sizeof(threadcache));\r
- if(!p->caches)\r
- return 1;\r
- }\r
- else if(from)\r
- {\r
- threadcache *newcaches;\r
- newcaches=(threadcache *) mspace_realloc(p->m[0], p->caches, (to+1)*sizeof(threadcache));\r
- if(!newcaches) return 1;\r
- p->caches=newcaches;\r
- }\r
- for(; from<to; from++)\r
- {\r
- threadcache *tc=&p->caches[from];\r
- tc->mymspace=-1;\r
- memset(tc->bins, 0, (THREADCACHEMAXBINS+1)*2*sizeof(threadcacheblk *));\r
- }\r
- p->caches[to].mymspace=-2;\r
- memset(p->caches[to].bins, 0, (THREADCACHEMAXBINS+1)*2*sizeof(threadcacheblk *));\r
- return 0;\r
-}\r
-static NOINLINE void RemoveCacheEntries(nedpool *p, threadcache *tc, unsigned int age) THROWSPEC\r
-{\r
- if(tc->freeInCache)\r
- {\r
- threadcacheblk **tcbptr=tc->bins;\r
- int n;\r
- for(n=0; n<=THREADCACHEMAXBINS; n++, tcbptr+=2)\r
- {\r
- threadcacheblk **tcb=tcbptr+1;\r
- for(; *tcb && tc->frees-(*tcb)->lastUsed>=age; )\r
- {\r
- threadcacheblk *f=*tcb;\r
- size_t blksize=f->size; /*nedblksize(f);*/\r
- assert(blksize<=nedblksize(f));\r
- assert(blksize);\r
- *tcb=(*tcb)->prev;\r
- if(*tcb)\r
- (*tcb)->next=0;\r
- else\r
- *tcbptr=0;\r
- tc->freeInCache-=blksize;\r
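- /* FOOTERS is on, so the 0 pool is fine: mspace_free() recovers the owning mspace from the chunk's footer */\r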
- mspace_free(0, f);\r
- }\r
- }\r
- }\r
-}\r
-static void DestroyCaches(nedpool *p) THROWSPEC\r
-{\r
- if(p->caches)\r
- {\r
- threadcache *tc;\r
- int n;\r
- for(n=0; -2!=(tc=&p->caches[n], tc->mymspace); n++)\r
- {\r
- tc->frees++;\r
- RemoveCacheEntries(p, tc, 0);\r
- assert(!tc->freeInCache);\r
- tc->mymspace=-1;\r
- tc->threadid=0;\r
- }\r
- }\r
-}\r
-\r
-static NOINLINE threadcache *AllocCache(nedpool *p) THROWSPEC\r
-{\r
- threadcache *tc;\r
- int n;\r
- ACQUIRE_LOCK(&p->mutex);\r
- for(n=0; (tc=&p->caches[n], tc->mymspace)>=0; n++);\r
- if(-2==tc->mymspace)\r
- { /* Need to extend */\r
- InitCaches(p, n, n+4);\r
- tc=&p->caches[n];\r
- }\r
- tc->mymspace=0;\r
-#ifdef CHECKVALIDTC\r
- tc->magic1=*(unsigned int *)"NEDMALC1";\r
- tc->magic2=*(unsigned int *)"NEDMALC2";\r
-#endif\r
- tc->threadid=(long)(size_t)CURRENT_THREAD;\r
- RELEASE_LOCK(&p->mutex);\r
- TLSSET(p->mycache, (void *)(size_t)(n+1));\r
- return tc;\r
-}\r
-\r
-static void *threadcache_malloc(nedpool *p, threadcache *tc, size_t *size) THROWSPEC\r
-{\r
- void *ret=0;\r
- unsigned int bestsize;\r
- unsigned int idx=size2binidx(*size);\r
- size_t blksize=0;\r
- threadcacheblk *blk, **binsptr;\r
- /* Calculate best fit bin size */\r
- bestsize=1<<(idx+4);\r
-#if 0\r
- /* Finer grained bin fit */\r
- idx<<=1;\r
- if(*size>bestsize)\r
- {\r
- idx++;\r
- bestsize+=bestsize>>1;\r
- }\r
- if(*size>bestsize)\r
- {\r
- idx++;\r
- bestsize=1<<(4+(idx>>1));\r
- }\r
-#else\r
- if(*size>bestsize)\r
- {\r
- idx++;\r
- bestsize<<=1;\r
- }\r
-#endif\r
- assert(bestsize>=*size);\r
- if(*size<bestsize) *size=bestsize;\r
- assert(*size<=THREADCACHEMAX);\r
- assert(idx<=THREADCACHEMAXBINS);\r
- binsptr=&tc->bins[idx*2];\r
- /* Try to match close, but move up a bin if necessary */\r
- blk=*binsptr;\r
- if(!blk || blk->size<*size)\r
- { /* Bump it up a bin */\r
- if(idx<THREADCACHEMAXBINS)\r
- {\r
- idx++;\r
- binsptr+=2;\r
- blk=*binsptr;\r
- }\r
- }\r
- if(blk)\r
- {\r
- blksize=blk->size; /*nedblksize(blk);*/\r
- assert(nedblksize(blk)>=blksize);\r
- assert(blksize>=*size);\r
- if(blk->next)\r
- blk->next->prev=0;\r
- *binsptr=blk->next;\r
- if(!*binsptr)\r
- binsptr[1]=0;\r
- assert(nedblksize(blk)>=sizeof(threadcacheblk) && nedblksize(blk)<=THREADCACHEMAX+CHUNK_OVERHEAD);\r
- /*printf("malloc: %p, %p, %p, %lu\n", p, tc, blk, (long) size);*/\r
- ret=(void *) blk;\r
- }\r
- ++tc->mallocs;\r
- if(ret)\r
- {\r
- assert(blksize>=*size);\r
- ++tc->successes;\r
- tc->freeInCache-=blksize;\r
- }\r
-#if defined(DEBUG) && 0\r
- if(!(tc->mallocs & 0xfff))\r
- {\r
- printf("*** threadcache=%u, mallocs=%u (%f), free=%u (%f), freeInCache=%u\n", (unsigned int) tc->threadid, tc->mallocs,\r
- (float) tc->successes/tc->mallocs, tc->frees, (float) tc->successes/tc->frees, (unsigned int) tc->freeInCache);\r
- }\r
-#endif\r
- return ret;\r
-}\r
-static NOINLINE void ReleaseFreeInCache(nedpool *p, threadcache *tc, int mymspace) THROWSPEC\r
-{\r
- unsigned int age=THREADCACHEMAXFREESPACE/8192;\r
- /*ACQUIRE_LOCK(&p->m[mymspace]->mutex);*/\r
- while(age && tc->freeInCache>=THREADCACHEMAXFREESPACE)\r
- {\r
- RemoveCacheEntries(p, tc, age);\r
- /*printf("*** Removing cache entries older than %u (%u)\n", age, (unsigned int) tc->freeInCache);*/\r
- age>>=1;\r
- }\r
- /*RELEASE_LOCK(&p->m[mymspace]->mutex);*/\r
-}\r
-static void threadcache_free(nedpool *p, threadcache *tc, int mymspace, void *mem, size_t size) THROWSPEC\r
-{\r
- unsigned int bestsize;\r
- unsigned int idx=size2binidx(size);\r
- threadcacheblk **binsptr, *tck=(threadcacheblk *) mem;\r
- assert(size>=sizeof(threadcacheblk) && size<=THREADCACHEMAX+CHUNK_OVERHEAD);\r
- /* Calculate best fit bin size */\r
- bestsize=1<<(idx+4);\r
-#if 0\r
- /* Finer grained bin fit */\r
- idx<<=1;\r
- if(size>bestsize)\r
- {\r
- unsigned int biggerbestsize=bestsize+(bestsize>>1); /* 1.5 times bestsize */\r
- if(size>=biggerbestsize)\r
- {\r
- idx++;\r
- bestsize=biggerbestsize;\r
- }\r
- }\r
-#endif\r
- if(bestsize!=size) /* dlmalloc can round up, so we round down to preserve indexing */\r
- size=bestsize;\r
- binsptr=&tc->bins[idx*2];\r
- assert(idx<=THREADCACHEMAXBINS);\r
- tck->lastUsed=++tc->frees;\r
- tck->size=(unsigned int) size;\r
- tck->next=*binsptr;\r
- tck->prev=0;\r
- if(tck->next)\r
- tck->next->prev=tck;\r
- else\r
- binsptr[1]=tck;\r
- assert(!*binsptr || (*binsptr)->size==tck->size);\r
- *binsptr=tck;\r
- /*printf("free: %p, %p, %p, %lu\n", p, tc, mem, (long) size);*/\r
- tc->freeInCache+=size;\r
-#if 1\r
- if(tc->freeInCache>=THREADCACHEMAXFREESPACE)\r
- ReleaseFreeInCache(p, tc, mymspace);\r
-#endif\r
-}\r
-\r
-\r
-\r
-\r
-static NOINLINE int InitPool(nedpool *p, size_t capacity, int threads) THROWSPEC\r
-{ /* threads is -1 for system pool */\r
- if(INITIAL_LOCK(&p->mutex)) goto err;\r
- ACQUIRE_LOCK(&p->mutex);\r
- p->threads=(threads<1 || threads>MAXTHREADSINPOOL) ? MAXTHREADSINPOOL : threads;\r
- if(TLSALLOC(&p->mycache)) goto err;\r
- if(!(p->m[0]=(mstate) create_mspace(capacity, 1))) goto err;\r
- p->m[0]->extp=p;\r
- if(InitCaches(p, 0, 4)) goto err;\r
- RELEASE_LOCK(&p->mutex);\r
- return 1;\r
-err:\r
- if(threads<0)\r
- abort(); /* If you can't allocate for system pool, we're screwed */\r
- DestroyCaches(p);\r
- if(p->m[0])\r
- {\r
- destroy_mspace(p->m[0]);\r
- p->m[0]=0;\r
- }\r
- if(p->mycache)\r
- {\r
- TLSFREE(p->mycache);\r
- p->mycache=0;\r
- }\r
- RELEASE_LOCK(&p->mutex);\r
- return 0;\r
-}\r
-static NOINLINE mstate FindMSpace(nedpool *p, threadcache *tc, int *lastUsed, size_t size) THROWSPEC\r
-{ /* Called when the thread's last-used mspace is locked. The strategy\r
- is to run through the list of all available mspaces looking for an\r
- unlocked one; if that fails, we create a new one, so long as we don't\r
- exceed p->threads */\r
- int n, end;\r
- for(n=end=*lastUsed+1; p->m[n]; end=++n)\r
- {\r
- if(TRY_LOCK(&p->m[n]->mutex)) goto found;\r
- }\r
- for(n=0; n<*lastUsed && p->m[n]; n++)\r
- {\r
- if(TRY_LOCK(&p->m[n]->mutex)) goto found;\r
- }\r
- if(end<p->threads)\r
- {\r
- mstate temp;\r
- if(!(temp=(mstate) create_mspace(size, 1)))\r
- goto badexit;\r
- /* Now we're ready to modify the lists, we lock */\r
- ACQUIRE_LOCK(&p->mutex);\r
- while(p->m[end] && end<p->threads)\r
- end++;\r
- if(p->m[end])\r
- { /* Drat, must destroy it now */\r
- RELEASE_LOCK(&p->mutex);\r
- destroy_mspace((mspace) temp);\r
- goto badexit;\r
- }\r
- /* We really want to make sure this goes into memory now but we\r
- have to be careful of breaking aliasing rules, so write it twice */\r
- *((volatile struct malloc_state **) &p->m[end])=p->m[end]=temp;\r
- ACQUIRE_LOCK(&p->m[end]->mutex);\r
- /*printf("Created mspace idx %d\n", end);*/\r
- RELEASE_LOCK(&p->mutex);\r
- n=end;\r
- goto found;\r
- }\r
- /* Let it lock on the last one it used */\r
-badexit:\r
- ACQUIRE_LOCK(&p->m[*lastUsed]->mutex);\r
- return p->m[*lastUsed];\r
-found:\r
- *lastUsed=n;\r
- if(tc)\r
- tc->mymspace=n;\r
- else\r
- {\r
- TLSSET(p->mycache, (void *)(size_t)(-(n+1)));\r
- }\r
- return p->m[n];\r
-}\r
-\r
-nedpool *nedcreatepool(size_t capacity, int threads) THROWSPEC\r
-{\r
- nedpool *ret;\r
- if(!(ret=(nedpool *) nedpcalloc(0, 1, sizeof(nedpool)))) return 0;\r
- if(!InitPool(ret, capacity, threads))\r
- {\r
- nedpfree(0, ret);\r
- return 0;\r
- }\r
- return ret;\r
-}\r
-void neddestroypool(nedpool *p) THROWSPEC\r
-{\r
- int n;\r
- ACQUIRE_LOCK(&p->mutex);\r
- DestroyCaches(p);\r
- for(n=0; p->m[n]; n++)\r
- {\r
- destroy_mspace(p->m[n]);\r
- p->m[n]=0;\r
- }\r
- RELEASE_LOCK(&p->mutex);\r
- TLSFREE(p->mycache);\r
- nedpfree(0, p);\r
-}\r
-\r
-void nedpsetvalue(nedpool *p, void *v) THROWSPEC\r
-{\r
- if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }\r
- p->uservalue=v;\r
-}\r
-void *nedgetvalue(nedpool **p, void *mem) THROWSPEC\r
-{\r
- nedpool *np=0;\r
- mchunkptr mcp=mem2chunk(mem);\r
- mstate fm;\r
- if(!(is_aligned(chunk2mem(mcp))) && mcp->head != FENCEPOST_HEAD) return 0;\r
- if(!cinuse(mcp)) return 0;\r
- if(!next_pinuse(mcp)) return 0;\r
- if(!is_mmapped(mcp) && !pinuse(mcp))\r
- {\r
- if(next_chunk(prev_chunk(mcp))!=mcp) return 0;\r
- }\r
- fm=get_mstate_for(mcp);\r
- if(!ok_magic(fm)) return 0;\r
- if(!ok_address(fm, mcp)) return 0;\r
- if(!fm->extp) return 0;\r
- np=(nedpool *) fm->extp;\r
- if(p) *p=np;\r
- return np->uservalue;\r
-}\r
-\r
-void neddisablethreadcache(nedpool *p) THROWSPEC\r
-{\r
- int mycache;\r
- if(!p)\r
- {\r
- p=&syspool;\r
- if(!syspool.threads) InitPool(&syspool, 0, -1);\r
- }\r
- mycache=(int)(size_t) TLSGET(p->mycache);\r
- if(!mycache)\r
- { /* Set to mspace 0 */\r
- TLSSET(p->mycache, (void *)-1);\r
- }\r
- else if(mycache>0)\r
- { /* Set to last used mspace */\r
- threadcache *tc=&p->caches[mycache-1];\r
-#if defined(DEBUG)\r
- printf("Threadcache utilisation: %lf%% in cache with %lf%% lost to other threads\n",\r
- 100.0*tc->successes/tc->mallocs, 100.0*((double) tc->mallocs-tc->frees)/tc->mallocs);\r
-#endif\r
- TLSSET(p->mycache, (void *)(size_t)(-tc->mymspace-1)); /* encode as -(mspace+1); GetThreadCache() decodes via -mycache-1 */\r
- tc->threadid=0;\r
- tc->mymspace=-1;\r
- }\r
-}\r
-\r
-#define GETMSPACE(m,p,tc,ms,s,action) \\r
- do \\r
- { \\r
- mstate m = GetMSpace((p),(tc),(ms),(s)); \\r
- action; \\r
- RELEASE_LOCK(&m->mutex); \\r
- } while (0)\r
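-\r
-/* GETMSPACE is a statement macro (hence the do { } while (0) wrapper,\r
- so it behaves as one statement after an unbraced if): it binds m to a\r
- locked, usable mspace, runs the caller-supplied action, then releases\r
- the lock. See nedpmalloc() below for a typical use. */\r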
-\r
-static FORCEINLINE mstate GetMSpace(nedpool *p, threadcache *tc, int mymspace, size_t size) THROWSPEC\r
-{ /* Returns a locked and ready for use mspace */\r
- mstate m=p->m[mymspace];\r
- assert(m);\r
- if(!TRY_LOCK(&p->m[mymspace]->mutex)) m=FindMSpace(p, tc, &mymspace, size);\r
- assert(IS_LOCKED(&p->m[mymspace]->mutex));\r
- return m;\r
-}\r
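-/* The per-pool TLS slot p->mycache holds a small encoded integer:\r
- 0 means no thread cache assigned yet, n>0 means use p->caches[n-1],\r
- and n<0 means the thread cache is disabled and mspace -n-1 is used\r
- directly (see neddisablethreadcache() above). */\r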
-static FORCEINLINE void GetThreadCache(nedpool **p, threadcache **tc, int *mymspace, size_t *size) THROWSPEC\r
-{\r
- int mycache;\r
- if(size && *size<sizeof(threadcacheblk)) *size=sizeof(threadcacheblk);\r
- if(!*p)\r
- {\r
- *p=&syspool;\r
- if(!syspool.threads) InitPool(&syspool, 0, -1);\r
- }\r
- mycache=(int)(size_t) TLSGET((*p)->mycache);\r
- if(mycache>0)\r
- {\r
- *tc=&(*p)->caches[mycache-1];\r
- *mymspace=(*tc)->mymspace;\r
- }\r
- else if(!mycache)\r
- {\r
- *tc=AllocCache(*p);\r
- *mymspace=(*tc)->mymspace;\r
- }\r
- else\r
- {\r
- *tc=0;\r
- *mymspace=-mycache-1;\r
- }\r
- assert(*mymspace>=0);\r
-#ifdef CHECKVALIDTC\r
- if(*tc)\r
- {\r
- if(*tc<&(*p)->caches[0] || *tc>=&(*p)->caches[5] || *(unsigned int *)"NEDMALC1"!=(*tc)->magic1 || *(unsigned int *)"NEDMALC2"!=(*tc)->magic2)\r
- {\r
- abort();\r
- }\r
- }\r
-#endif\r
-}\r
-\r
-void * nedpmalloc(nedpool *p, size_t size) THROWSPEC\r
-{\r
- void *ret=0;\r
- threadcache *tc;\r
- int mymspace;\r
- GetThreadCache(&p, &tc, &mymspace, &size);\r
-#if THREADCACHEMAX\r
- if(tc && size<=THREADCACHEMAX)\r
- { /* Use the thread cache */\r
- ret=threadcache_malloc(p, tc, &size);\r
- }\r
-#endif\r
- if(!ret)\r
- { /* Use this thread's mspace */\r
- GETMSPACE(m, p, tc, mymspace, size,\r
- ret=mspace_malloc(m, size));\r
- }\r
- return ret;\r
-}\r
-void * nedpcalloc(nedpool *p, size_t no, size_t size) THROWSPEC\r
-{\r
- size_t rsize=size*no;\r
- void *ret=0;\r
- threadcache *tc;\r
- int mymspace;\r
- GetThreadCache(&p, &tc, &mymspace, &rsize);\r
-#if THREADCACHEMAX\r
- if(tc && rsize<=THREADCACHEMAX)\r
- { /* Use the thread cache */\r
- if((ret=threadcache_malloc(p, tc, &rsize)))\r
- memset(ret, 0, rsize);\r
- }\r
-#endif\r
- if(!ret)\r
- { /* Use this thread's mspace */\r
- GETMSPACE(m, p, tc, mymspace, rsize,\r
- ret=mspace_calloc(m, 1, rsize));\r
- }\r
- return ret;\r
-}\r
-void * nedprealloc(nedpool *p, void *mem, size_t size) THROWSPEC\r
-{\r
- void *ret=0;\r
- threadcache *tc;\r
- int mymspace;\r
- GetThreadCache(&p, &tc, &mymspace, &size);\r
-#if THREADCACHEMAX\r
- if(tc && size && size<=THREADCACHEMAX)\r
- { /* Use the thread cache */\r
- size_t memsize=nedblksize(mem);\r
- assert(memsize);\r
- if((ret=threadcache_malloc(p, tc, &size)))\r
- {\r
- memcpy(ret, mem, memsize<size ? memsize : size);\r
- if(memsize<=THREADCACHEMAX)\r
- threadcache_free(p, tc, mymspace, mem, memsize);\r
- else\r
- mspace_free(0, mem);\r
- }\r
- }\r
-#endif\r
- if(!ret)\r
- { /* Use this thread's mspace */\r
- GETMSPACE(m, p, tc, mymspace, size,\r
- ret=mspace_realloc(m, mem, size));\r
- }\r
- return ret;\r
-}\r
-void nedpfree(nedpool *p, void *mem) THROWSPEC\r
-{ /* A free must happen in the mspace the allocation came from, so skip\r
- locking this thread's preferred mspace */\r
- threadcache *tc;\r
- int mymspace;\r
- size_t memsize;\r
- assert(mem);\r
- GetThreadCache(&p, &tc, &mymspace, 0);\r
-#if THREADCACHEMAX\r
- memsize=nedblksize(mem);\r
- assert(memsize);\r
- if(mem && tc && memsize<=(THREADCACHEMAX+CHUNK_OVERHEAD))\r
- threadcache_free(p, tc, mymspace, mem, memsize);\r
- else\r
-#endif\r
- mspace_free(0, mem);\r
-}\r
-void * nedpmemalign(nedpool *p, size_t alignment, size_t bytes) THROWSPEC\r
-{\r
- void *ret;\r
- threadcache *tc;\r
- int mymspace;\r
- GetThreadCache(&p, &tc, &mymspace, &bytes);\r
- { /* Use this thread's mspace */\r
- GETMSPACE(m, p, tc, mymspace, bytes,\r
- ret=mspace_memalign(m, alignment, bytes));\r
- }\r
- return ret;\r
-}\r
-#if !NO_MALLINFO\r
-struct mallinfo nedpmallinfo(nedpool *p) THROWSPEC\r
-{\r
- int n;\r
- struct mallinfo ret={0};\r
- if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }\r
- for(n=0; p->m[n]; n++)\r
- {\r
- struct mallinfo t=mspace_mallinfo(p->m[n]);\r
- ret.arena+=t.arena;\r
- ret.ordblks+=t.ordblks;\r
- ret.hblkhd+=t.hblkhd;\r
- ret.usmblks+=t.usmblks;\r
- ret.uordblks+=t.uordblks;\r
- ret.fordblks+=t.fordblks;\r
- ret.keepcost+=t.keepcost;\r
- }\r
- return ret;\r
-}\r
-#endif\r
-int nedpmallopt(nedpool *p, int parno, int value) THROWSPEC\r
-{\r
- return mspace_mallopt(parno, value);\r
-}\r
-int nedpmalloc_trim(nedpool *p, size_t pad) THROWSPEC\r
-{\r
- int n, ret=0;\r
- if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }\r
- for(n=0; p->m[n]; n++)\r
- {\r
- ret+=mspace_trim(p->m[n], pad);\r
- }\r
- return ret;\r
-}\r
-void nedpmalloc_stats(nedpool *p) THROWSPEC\r
-{\r
- int n;\r
- if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }\r
- for(n=0; p->m[n]; n++)\r
- {\r
- mspace_malloc_stats(p->m[n]);\r
- }\r
-}\r
-size_t nedpmalloc_footprint(nedpool *p) THROWSPEC\r
-{\r
- size_t ret=0;\r
- int n;\r
- if(!p) { p=&syspool; if(!syspool.threads) InitPool(&syspool, 0, -1); }\r
- for(n=0; p->m[n]; n++)\r
- {\r
- ret+=mspace_footprint(p->m[n]);\r
- }\r
- return ret;\r
-}\r
-void **nedpindependent_calloc(nedpool *p, size_t elemsno, size_t elemsize, void **chunks) THROWSPEC\r
-{\r
- void **ret;\r
- threadcache *tc;\r
- int mymspace;\r
- GetThreadCache(&p, &tc, &mymspace, &elemsize);\r
- GETMSPACE(m, p, tc, mymspace, elemsno*elemsize,\r
- ret=mspace_independent_calloc(m, elemsno, elemsize, chunks));\r
- return ret;\r
-}\r
-void **nedpindependent_comalloc(nedpool *p, size_t elems, size_t *sizes, void **chunks) THROWSPEC\r
-{\r
- void **ret;\r
- threadcache *tc;\r
- int mymspace;\r
- GetThreadCache(&p, &tc, &mymspace, 0);\r
- GETMSPACE(m, p, tc, mymspace, 0,\r
- ret=mspace_independent_comalloc(m, elems, sizes, chunks));\r
- return ret;\r
-}\r
-\r
-#if defined(__cplusplus)\r
-}\r
-#endif\r
+++ /dev/null
-/* nedalloc, an alternative malloc implementation for multiple threads without\r
-lock contention based on dlmalloc v2.8.3. (C) 2005 Niall Douglas\r
-\r
-Boost Software License - Version 1.0 - August 17th, 2003\r
-\r
-Permission is hereby granted, free of charge, to any person or organization\r
-obtaining a copy of the software and accompanying documentation covered by\r
-this license (the "Software") to use, reproduce, display, distribute,\r
-execute, and transmit the Software, and to prepare derivative works of the\r
-Software, and to permit third-parties to whom the Software is furnished to\r
-do so, all subject to the following:\r
-\r
-The copyright notices in the Software and this entire statement, including\r
-the above license grant, this restriction and the following disclaimer,\r
-must be included in all copies of the Software, in whole or in part, and\r
-all derivative works of the Software, unless such copies or derivative\r
-works are solely in the form of machine-executable object code generated by\r
-a source language processor.\r
-\r
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r
-FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT\r
-SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\r
-FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\r
-ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\r
-DEALINGS IN THE SOFTWARE.\r
-*/\r
-\r
-#ifndef NEDMALLOC_H\r
-#define NEDMALLOC_H\r
-\r
-\r
-/* See malloc.c.h for what each function does.\r
-\r
-REPLACE_SYSTEM_ALLOCATOR causes nedalloc's functions to be called malloc,\r
-free etc. instead of nedmalloc, nedfree etc. You may or may not want this.\r
-\r
-NO_NED_NAMESPACE prevents the functions from being defined in the nedalloc\r
-namespace when in C++ (uses the global namespace instead).\r
-\r
-EXTSPEC can be defined to be __declspec(dllexport) or\r
-__attribute__ ((visibility("default"))) or whatever you like. It defaults\r
-to extern.\r
-\r
-USE_LOCKS can be 2 if you want to define your own MLOCK_T, INITIAL_LOCK,\r
-ACQUIRE_LOCK, RELEASE_LOCK, TRY_LOCK, IS_LOCKED and NULL_LOCK_INITIALIZER.\r
-\r
-*/\r
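-\r
-/* For illustration, one plausible way to configure the header when building\r
-a Windows DLL (the EXTSPEC value and the choice to replace the system\r
-allocator are examples, not requirements); define the macros before the\r
-include:\r
-\r
-  #define EXTSPEC __declspec(dllexport)\r
-  #define REPLACE_SYSTEM_ALLOCATOR\r
-  #include "nedmalloc.h"\r
-*/\r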
-\r
-#include <stddef.h> /* for size_t */\r
-\r
-#ifndef EXTSPEC\r
- #define EXTSPEC extern\r
-#endif\r
-\r
-#if defined(_MSC_VER) && _MSC_VER>=1400\r
- #define MALLOCATTR __declspec(restrict)\r
-#endif\r
-#ifdef __GNUC__\r
- #define MALLOCATTR __attribute__ ((malloc))\r
-#endif\r
-#ifndef MALLOCATTR\r
- #define MALLOCATTR\r
-#endif\r
-\r
-#ifdef REPLACE_SYSTEM_ALLOCATOR\r
- #define nedmalloc malloc\r
- #define nedcalloc calloc\r
- #define nedrealloc realloc\r
- #define nedfree free\r
- #define nedmemalign memalign\r
- #define nedmallinfo mallinfo\r
- #define nedmallopt mallopt\r
- #define nedmalloc_trim malloc_trim\r
- #define nedmalloc_stats malloc_stats\r
- #define nedmalloc_footprint malloc_footprint\r
- #define nedindependent_calloc independent_calloc\r
- #define nedindependent_comalloc independent_comalloc\r
-#endif\r
-\r
-#ifndef NO_MALLINFO\r
-#define NO_MALLINFO 0\r
-#endif\r
-\r
-#if !NO_MALLINFO\r
-struct mallinfo;\r
-#endif\r
-\r
-#if defined(__cplusplus)\r
- #if !defined(NO_NED_NAMESPACE)\r
-namespace nedalloc {\r
- #else\r
-extern "C" {\r
- #endif\r
- #define THROWSPEC throw()\r
-#else\r
- #define THROWSPEC\r
-#endif\r
-\r
-/* These are the global functions */\r
-\r
-/* Returns the usable size of an allocated block. Note this will be at least\r
-as big as what was asked for, and usually bigger, due to alignment and\r
-minimum chunk size rounding.\r
-*/\r
-EXTSPEC size_t nedblksize(void *mem) THROWSPEC;\r
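-\r
-/* For example (a sketch; the exact figure returned is an implementation\r
-detail):\r
-\r
-  void *mem=nedmalloc(20);\r
-  size_t usable=nedblksize(mem);\r
-  nedfree(mem);\r
-\r
-usable will be at least 20, typically a little more. */\r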
-\r
-EXTSPEC void nedsetvalue(void *v) THROWSPEC;\r
-\r
-EXTSPEC MALLOCATTR void * nedmalloc(size_t size) THROWSPEC;\r
-EXTSPEC MALLOCATTR void * nedcalloc(size_t no, size_t size) THROWSPEC;\r
-EXTSPEC MALLOCATTR void * nedrealloc(void *mem, size_t size) THROWSPEC;\r
-EXTSPEC void nedfree(void *mem) THROWSPEC;\r
-EXTSPEC MALLOCATTR void * nedmemalign(size_t alignment, size_t bytes) THROWSPEC;\r
-#if !NO_MALLINFO\r
-EXTSPEC struct mallinfo nedmallinfo(void) THROWSPEC;\r
-#endif\r
-EXTSPEC int nedmallopt(int parno, int value) THROWSPEC;\r
-EXTSPEC int nedmalloc_trim(size_t pad) THROWSPEC;\r
-EXTSPEC void nedmalloc_stats(void) THROWSPEC;\r
-EXTSPEC size_t nedmalloc_footprint(void) THROWSPEC;\r
-EXTSPEC MALLOCATTR void **nedindependent_calloc(size_t elemsno, size_t elemsize, void **chunks) THROWSPEC;\r
-EXTSPEC MALLOCATTR void **nedindependent_comalloc(size_t elems, size_t *sizes, void **chunks) THROWSPEC;\r
-\r
-/* These are the pool functions */\r
-struct nedpool_t;\r
-typedef struct nedpool_t nedpool;\r
-\r
-/* Creates a memory pool for use with the nedp* functions below.\r
-capacity is how much to allocate immediately (useful if you know you'll be\r
-allocating a lot of memory very soon); you may leave it at zero. threads\r
-specifies how many threads will *normally* be accessing the pool concurrently.\r
-Setting threads to zero makes the pool extend on demand, but be careful with\r
-this: a burst of many concurrent threads using the pool at once can rapidly\r
-consume system resources.\r
-*/\r
-EXTSPEC MALLOCATTR nedpool *nedcreatepool(size_t capacity, int threads) THROWSPEC;\r
-\r
-/* Destroys a memory pool previously created by nedcreatepool().\r
-*/\r
-EXTSPEC void neddestroypool(nedpool *p) THROWSPEC;\r
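-\r
-/* For illustration, a minimal sketch of the pool lifecycle (error handling\r
-omitted; the capacity and thread count here are arbitrary choices):\r
-\r
-  nedpool *pool=nedcreatepool(1024*1024, 4);\r
-  char *buf=(char *) nedpmalloc(pool, 256);\r
-  buf=(char *) nedprealloc(pool, buf, 512);\r
-  nedpfree(pool, buf);\r
-  neddestroypool(pool);\r
-*/\r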
-\r
-/* Sets a value to be associated with a pool. You can retrieve this value by\r
-passing any memory block allocated from that pool to nedgetvalue().\r
-*/\r
-EXTSPEC void nedpsetvalue(nedpool *p, void *v) THROWSPEC;\r
-/* Gets the value previously set via nedpsetvalue(), or zero if the memory\r
-block is not known to this allocator. If p is non-zero, the owning pool is\r
-also returned in *p.\r
-*/\r
-EXTSPEC void *nedgetvalue(nedpool **p, void *mem) THROWSPEC;\r
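-\r
-/* For illustration, a sketch of tagging a pool with a context pointer and\r
-recovering it from one of the pool's blocks (assuming, per the signature,\r
-that a non-zero p receives the owning pool):\r
-\r
-  static int mycontext;\r
-  nedpool *pool=nedcreatepool(0, 0);\r
-  void *mem=nedpmalloc(pool, 64);\r
-  nedpsetvalue(pool, &mycontext);\r
-  nedpool *owner=0;\r
-  void *v=nedgetvalue(&owner, mem);\r
-\r
-v now equals &mycontext and owner equals pool. */\r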
-\r
-/* Disables the thread cache for the calling thread, returning any existing cache\r
-data to the central pool.\r
-*/\r
-EXTSPEC void neddisablethreadcache(nedpool *p) THROWSPEC;\r
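-\r
-/* For example (a sketch): a worker thread hands its cached blocks back to\r
-the central pool just before it exits, so they become available to other\r
-threads instead of sitting idle in the dead thread's cache:\r
-\r
-  static void *worker(void *arg)\r
-  {\r
-    nedpool *pool=(nedpool *) arg;\r
-    void *mem=nedpmalloc(pool, 128);\r
-    nedpfree(pool, mem);\r
-    neddisablethreadcache(pool);\r
-    return 0;\r
-  }\r
-*/\r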
-\r
-EXTSPEC MALLOCATTR void * nedpmalloc(nedpool *p, size_t size) THROWSPEC;\r
-EXTSPEC MALLOCATTR void * nedpcalloc(nedpool *p, size_t no, size_t size) THROWSPEC;\r
-EXTSPEC MALLOCATTR void * nedprealloc(nedpool *p, void *mem, size_t size) THROWSPEC;\r
-EXTSPEC void nedpfree(nedpool *p, void *mem) THROWSPEC;\r
-EXTSPEC MALLOCATTR void * nedpmemalign(nedpool *p, size_t alignment, size_t bytes) THROWSPEC;\r
-#if !NO_MALLINFO\r
-EXTSPEC struct mallinfo nedpmallinfo(nedpool *p) THROWSPEC;\r
-#endif\r
-EXTSPEC int nedpmallopt(nedpool *p, int parno, int value) THROWSPEC;\r
-EXTSPEC int nedpmalloc_trim(nedpool *p, size_t pad) THROWSPEC;\r
-EXTSPEC void nedpmalloc_stats(nedpool *p) THROWSPEC;\r
-EXTSPEC size_t nedpmalloc_footprint(nedpool *p) THROWSPEC;\r
-EXTSPEC MALLOCATTR void **nedpindependent_calloc(nedpool *p, size_t elemsno, size_t elemsize, void **chunks) THROWSPEC;\r
-EXTSPEC MALLOCATTR void **nedpindependent_comalloc(nedpool *p, size_t elems, size_t *sizes, void **chunks) THROWSPEC;\r
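-\r
-/* For illustration, a sketch of nedpindependent_comalloc() (pool as in the\r
-sketches above): allocate three blocks of differing sizes in one call, then\r
-free each individually. Per the dlmalloc semantics this wraps, passing a\r
-non-zero chunks array causes it to be filled in and returned; a zero return\r
-means failure.\r
-\r
-  size_t sizes[3]={24, 4096, 64};\r
-  void *chunks[3];\r
-  if(nedpindependent_comalloc(pool, 3, sizes, chunks))\r
-  {\r
-    nedpfree(pool, chunks[0]);\r
-    nedpfree(pool, chunks[1]);\r
-    nedpfree(pool, chunks[2]);\r
-  }\r
-*/\r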
-\r
-#if defined(__cplusplus)\r
-}\r
-#endif\r
-\r
-#undef MALLOCATTR\r
-#undef EXTSPEC\r
-\r
-#endif\r