From: hboehm
Date: Sat, 26 Jul 2008 00:51:33 +0000 (+0000)
Subject: 2008-07-25 Hans Boehm (Really mostly Ivan Maidanski)
X-Git-Tag: gc7_2alpha2~70
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=68b9f2740e77bfae2b94392140608d952114b199;p=gc

2008-07-25 Hans Boehm (Really mostly Ivan Maidanski)

Ivan's description of the patch follows. Note that a few pieces, like the GC_malloc(0) patch, were not applied since an alternate had previously been applied. A few differed stylistically from the rest of the code (mostly casts to void * instead of the target type), or were considered too minor to bother with. Note that all of Ivan's static declarations that did not correct outright naming bugs (as a few did) were replaced by STATIC, which is ignored by default.

- minor bug fixes (for FreeBSD, for THREAD_LOCAL_ALLOC and for GC_malloc(0));
- addition of missing getter/setter functions for public variables (may be useful if compiled as a Win32 DLL);
- addition of missing GC_API for some exported functions;
- addition of the missing "static" declarator for internal functions and variables (where possible);
- replacement of all remaining K&R-style definitions with ANSI C ones (the __STDC__ macro is no longer used);
- addition of some Win32 macro definitions (that may be missing in the standard headers supplied with a compiler) for GWW_VDB mode;
- elimination of most compiler warnings (except for the "uninitialized data" warning);
- correction of several typos;
- addition of missing parentheses in macros in some header files of the "libatomic_ops" module.

My highlights based on reading the patch:

* allchblk.c: Remove GC_freehblk_ptr decl. Make free_list_index_of() static.
* include/gc.h: Use __int64 on win64, define GC_oom_func and GC_finalizer_notifier_proc, add getters and setters: GC_get_gc_no, GC_get_parallel, GC_set_oom_fn, GC_set_finalize_on_demand, GC_set_java_finalization, GC_set_dont_expand, GC_set_no_dls, GC_set_max_retries, GC_set_dont_precollect, GC_set_finalizer_notifier. Always define GC_win32_free_heap. gc_config_macros.h: Define _REENTRANT after processing GC_THREADS.
* include/gc_cpp.h: Improve the GC_PLACEMENT_DELETE test and the handling of operator new[] for old Windows compilers.
* include/gc_inline.h (GC_MALLOC_FAST_GRANS): Add parentheses around arguments.
* dbg_mlc.c, malloc.c, misc.c: Add many GC_API specs.
* mark.c (GC_mark_and_push_stack): Fix the source argument for blacklist printing.
* misc.c: Fix log file naming based on the environment variable for Windows. Make GC_set_warn_proc and GC_set_free_space_divisor just return the current value when given a 0 argument. Add DONT_USER_USER32_DLL. Add various getters and setters as in gc.h.
* os_dep.c: Remove the no-longer-used GC_disable/enable_signals implementations. (GC_get_stack_base): Add a pthread_attr_destroy call. No longer set GC_old_bus_handler in the DARWIN workaround.
* pthread_support.c: GC_register_my_thread must also call GC_init_thread_local.

---
diff --git a/ChangeLog b/ChangeLog index 03a3dd58..6a7e9b7f 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,60 @@ +2008-07-25 Hans Boehm (Really mostly Ivan Maidanski) + Ivan's description of the patch follows. Note that a few pieces like + the GC_malloc(0) patch, were not applied since an alternate had been + previously applied. A few differed stylistically from the rest of + the code (mostly casts to void * instead of target type), + or were classified as too minor to bother.
Note that + all of Ivan's static declarations which did not correct outright + naming bugs (as a few did), where replaced by STATIC, which is + ignored by default. + + - minor bug fixing (for FreeBSD, for THREAD_LOCAL_ALLOC and for + GC_malloc(0)); + - addition of missing getter/setter functions for public variables + (may be useful if compiled as Win32 DLL); + - addition of missing GC_API for some exported functions; + - addition of missing "static" declarator for internal functions + and variables (where possible); + - replacement of all remaining K&R-style definitions with ANSI + C ones (__STDC__ macro is not used anymore); + - addition of some Win32 macro definitions (that may be missing in + the standard headers supplied with a compiler) for GWW_VDB mode; + - elimination of most compiler warnings (except for + "uninitialized data" warning); + - several typos correction; + - missing parenthesis addition in macros in some header files of + "libatomic_ops" module. + + My highlights based on reading the patch: + + * allchblk.c: Remove GC_freehblk_ptr decl. + Make free_list_index_of() static. + * include/gc.h: Use __int64 on win64, define GC_oom_func, + GC_finalizer_notifier_proc, GC_finalizer_notifier_proc, + add getter and setters: GC_get_gc_no, GC_get_parallel, + GC_set_oom_fn, GC_set_finalize_on_demand, + GC_set_java_finalization, GC_set_dont_expand, + GC_set_no_dls, GC_set_max_retries, GC_set_dont_precollect, + GC_set_finalizer_notifier. Always define GC_win32_free_heap. + gc_config_macros.h: Define _REENTRANT after processing + GC_THREADS. + * include/gc_cpp.h: Improve GC_PLACEMENT_DELETE test, + handling of operator new[] for old Windows compilers. + * include/gc_inline.h (GC_MALLOC_FAST_GRANS): Add parentheses + around arguments. + * dbg_mlc.c, malloc.c, misc.c: Add many GC_API specs. + * mark.c (GC_mark_and_push_stack): Fix source argument for + blacklist printing. + * misc.c: Fix log file naming based on environment variable + for Windows. Make GC_set_warn_proc and GC_set_free_space_divisor + just return current value with 0 argument. Add DONT_USER_USER32_DLL. + Add various getters and setters as in gc.h. + * os_dep.c: Remove no longer used GC_disable/enable_signals + implementations. (GC_get_stack_base): Add pthread_attr_destroy + call. No longer set GC_old_bus_handler in DARWIN workaround. + * pthread_support.c: GC_register_my_thread must also + call GC_init_thread_local. + 2008-07-21 Hans Boehm * Makefile.direct, mach_dep.c: Add support for NO_GETCONTEXT. * mach_dep.c: Include signal.h. diff --git a/Makefile.direct b/Makefile.direct index 32112b19..0441ecb6 100644 --- a/Makefile.direct +++ b/Makefile.direct @@ -324,6 +324,10 @@ HOSTCFLAGS=$(CFLAGS) # the getcontext() function on linux-like platforms. This currently # happens implicitly on Darwin, Hurd, or ARM or MIPS hardware. # It is explicitly needed for some old versions of FreeBSD. +# -DSTATIC=static Causes various GC_ symbols that could logically be +# declared static to be declared. Reduces the number of visible symbols, +# which is probably cleaner, but may make some kinds of debugging and +# profiling harder. # CXXFLAGS= $(CFLAGS) diff --git a/allchblk.c b/allchblk.c index 9347b675..82822ed3 100644 --- a/allchblk.c +++ b/allchblk.c @@ -48,7 +48,7 @@ struct hblk * GC_hblkfreelist[N_HBLK_FLS+1] = { 0 }; #ifndef USE_MUNMAP - word GC_free_bytes[N_HBLK_FLS+1] = { 0 }; + STATIC word GC_free_bytes[N_HBLK_FLS+1] = { 0 }; /* Number of free bytes on each list. 
*/ /* Return the largest n such that */ @@ -83,7 +83,7 @@ struct hblk * GC_hblkfreelist[N_HBLK_FLS+1] = { 0 }; #endif /* USE_MUNMAP */ /* Map a number of blocks to the appropriate large block free list index. */ -int GC_hblk_fl_from_blocks(word blocks_needed) +STATIC int GC_hblk_fl_from_blocks(word blocks_needed) { if (blocks_needed <= UNIQUE_THRESHOLD) return (int)blocks_needed; if (blocks_needed >= HUGE_THRESHOLD) return N_HBLK_FLS; @@ -97,12 +97,12 @@ int GC_hblk_fl_from_blocks(word blocks_needed) # ifdef USE_MUNMAP # define IS_MAPPED(hhdr) (((hhdr) -> hb_flags & WAS_UNMAPPED) == 0) -# else /* !USE_MMAP */ +# else /* !USE_MUNMAP */ # define IS_MAPPED(hhdr) 1 # endif /* USE_MUNMAP */ # if !defined(NO_DEBUGGING) -void GC_print_hblkfreelist() +void GC_print_hblkfreelist(void) { struct hblk * h; word total_free = 0; @@ -145,7 +145,7 @@ void GC_print_hblkfreelist() /* Return the free list index on which the block described by the header */ /* appears, or -1 if it appears nowhere. */ -int free_list_index_of(hdr *wanted) +static int free_list_index_of(hdr *wanted) { struct hblk * h; hdr * hhdr; @@ -162,7 +162,7 @@ int free_list_index_of(hdr *wanted) return -1; } -void GC_dump_regions() +void GC_dump_regions(void) { unsigned i; ptr_t start, end; @@ -226,7 +226,9 @@ static GC_bool setup_header(hdr * hhdr, struct hblk *block, size_t byte_sz, int kind, unsigned flags) { word descr; - size_t granules; +# ifndef MARK_BIT_PER_OBJ + size_t granules; +# endif /* Set size, kind and mark proc fields */ hhdr -> hb_sz = byte_sz; @@ -286,7 +288,7 @@ static GC_bool setup_header(hdr * hhdr, struct hblk *block, size_t byte_sz, * We assume it is on the nth free list, or on the size * appropriate free list if n is FL_UNKNOWN. */ -void GC_remove_from_fl(hdr *hhdr, int n) +STATIC void GC_remove_from_fl(hdr *hhdr, int n) { int index; @@ -327,7 +329,7 @@ void GC_remove_from_fl(hdr *hhdr, int n) /* * Return a pointer to the free block ending just before h, if any. */ -struct hblk * GC_free_block_ending_at(struct hblk *h) +STATIC struct hblk * GC_free_block_ending_at(struct hblk *h) { struct hblk * p = h - 1; hdr * phdr; @@ -358,7 +360,7 @@ struct hblk * GC_free_block_ending_at(struct hblk *h) * Add hhdr to the appropriate free list. * We maintain individual free lists sorted by address. */ -void GC_add_to_fl(struct hblk *h, hdr *hhdr) +STATIC void GC_add_to_fl(struct hblk *h, hdr *hhdr) { int index = GC_hblk_fl_from_blocks(divHBLKSZ(hhdr -> hb_sz)); struct hblk *second = GC_hblkfreelist[index]; @@ -487,8 +489,8 @@ void GC_merge_unmapped(void) * The header for the returned block must be set up by the caller. * If the return value is not 0, then hhdr is the header for it. */ -struct hblk * GC_get_first_part(struct hblk *h, hdr *hhdr, - size_t bytes, int index) +STATIC struct hblk * GC_get_first_part(struct hblk *h, hdr *hhdr, + size_t bytes, int index) { word total_size = hhdr -> hb_sz; struct hblk * rest; @@ -526,8 +528,8 @@ struct hblk * GC_get_first_part(struct hblk *h, hdr *hhdr, * (Hence adding it to a free list is silly. But this path is hopefully * rare enough that it doesn't matter. The code is cleaner this way.) 
*/ -void GC_split_block(struct hblk *h, hdr *hhdr, struct hblk *n, - hdr *nhdr, int index /* Index of free list */) +STATIC void GC_split_block(struct hblk *h, hdr *hhdr, struct hblk *n, + hdr *nhdr, int index /* Index of free list */) { word total_size = hhdr -> hb_sz; word h_size = (word)n - (word)h; @@ -562,7 +564,7 @@ void GC_split_block(struct hblk *h, hdr *hhdr, struct hblk *n, nhdr -> hb_flags |= FREE_BLK; } -struct hblk * +STATIC struct hblk * GC_allochblk_nth(size_t sz/* bytes */, int kind, unsigned flags, int n, GC_bool may_split); @@ -636,7 +638,7 @@ GC_allochblk(size_t sz, int kind, unsigned flags/* IGNORE_OFF_PAGE or 0 */) * Unlike the above, sz is in bytes. * The may_split flag indicates whether it's OK to split larger blocks. */ -struct hblk * +STATIC struct hblk * GC_allochblk_nth(size_t sz, int kind, unsigned flags, int n, GC_bool may_split) { struct hblk *hbp; @@ -812,8 +814,6 @@ GC_allochblk_nth(size_t sz, int kind, unsigned flags, int n, GC_bool may_split) return( hbp ); } -struct hblk * GC_freehblk_ptr = 0; /* Search position hint for GC_freehblk */ - /* * Free a heap block. * diff --git a/alloc.c b/alloc.c index cecdf82e..0ae94206 100644 --- a/alloc.c +++ b/alloc.c @@ -79,7 +79,7 @@ GC_bool GC_need_full_gc = FALSE; # define IF_THREADS(x) #endif -word GC_used_heap_size_after_full = 0; +STATIC word GC_used_heap_size_after_full = 0; char * GC_copyright[] = {"Copyright 1988,1989 Hans-J. Boehm and Alan J. Demers ", @@ -108,23 +108,25 @@ GC_bool GC_dont_expand = 0; word GC_free_space_divisor = 3; -extern GC_bool GC_collection_in_progress(); +extern GC_bool GC_collection_in_progress(void); /* Collection is in progress, or was abandoned. */ int GC_never_stop_func (void) { return(0); } unsigned long GC_time_limit = TIME_LIMIT; -CLOCK_TYPE GC_start_time; /* Time at which we stopped world. */ +#ifndef NO_CLOCK +STATIC CLOCK_TYPE GC_start_time;/* Time at which we stopped world. */ /* used only in GC_timeout_stop_func. */ +#endif -int GC_n_attempts = 0; /* Number of attempts at finishing */ +STATIC int GC_n_attempts = 0; /* Number of attempts at finishing */ /* collection within GC_time_limit. */ #if defined(SMALL_CONFIG) || defined(NO_CLOCK) # define GC_timeout_stop_func GC_never_stop_func #else - int GC_timeout_stop_func (void) + STATIC int GC_timeout_stop_func (void) { CLOCK_TYPE current_time; static unsigned count = 0; @@ -147,7 +149,7 @@ int GC_n_attempts = 0; /* Number of attempts at finishing */ /* Return the minimum number of words that must be allocated between */ /* collections to amortize the collection cost. */ -static word min_bytes_allocd() +static word min_bytes_allocd(void) { # ifdef THREADS /* We punt, for now. */ @@ -176,7 +178,7 @@ static word min_bytes_allocd() /* Return the number of bytes allocated, adjusted for explicit storage */ /* management, etc.. This number is used in deciding when to trigger */ /* collections. */ -word GC_adj_bytes_allocd(void) +STATIC word GC_adj_bytes_allocd(void) { signed_word result; signed_word expl_managed = @@ -218,7 +220,7 @@ word GC_adj_bytes_allocd(void) /* on the stack by other parts of the collector as roots. This */ /* differs from the code in misc.c, which actually tries to keep the */ /* stack clear of long-lived, client-generated garbage. 
*/ -void GC_clear_a_few_frames() +STATIC void GC_clear_a_few_frames(void) { # define NWORDS 64 word frames[NWORDS]; @@ -245,21 +247,21 @@ GC_bool GC_should_collect(void) } -void GC_notify_full_gc(void) +STATIC void GC_notify_full_gc(void) { if (GC_start_call_back != (void (*) (void))0) { (*GC_start_call_back)(); } } -GC_bool GC_is_full_gc = FALSE; +STATIC GC_bool GC_is_full_gc = FALSE; /* * Initiate a garbage collection if appropriate. * Choose judiciously * between partial, full, and stop-world collections. */ -void GC_maybe_gc(void) +STATIC void GC_maybe_gc(void) { static int n_partial_gcs = 0; @@ -319,7 +321,9 @@ void GC_maybe_gc(void) */ GC_bool GC_try_to_collect_inner(GC_stop_func stop_func) { - CLOCK_TYPE start_time, current_time; +# ifndef SMALL_CONFIG + CLOCK_TYPE start_time, current_time; +# endif if (GC_dont_gc) return FALSE; if (GC_incremental && GC_collection_in_progress()) { if (GC_print_stats) { @@ -333,12 +337,14 @@ GC_bool GC_try_to_collect_inner(GC_stop_func stop_func) } } if (stop_func == GC_never_stop_func) GC_notify_full_gc(); - if (GC_print_stats) { - GET_TIME(start_time); +# ifndef SMALL_CONFIG + if (GC_print_stats) { + GET_TIME(start_time); GC_log_printf( "Initiating full world-stop collection %lu after %ld allocd bytes\n", (unsigned long)GC_gc_no+1, (long)GC_bytes_allocd); - } + } +# endif GC_promote_black_lists(); /* Make sure all blocks have been reclaimed, so sweep routines */ /* don't see cleared mark bits. */ @@ -371,11 +377,13 @@ GC_bool GC_try_to_collect_inner(GC_stop_func stop_func) return(FALSE); } GC_finish_collection(); - if (GC_print_stats) { +# ifndef SMALL_CONFIG + if (GC_print_stats) { GET_TIME(current_time); GC_log_printf("Complete collection took %lu msecs\n", MS_TIME_DIFF(current_time,start_time)); - } + } +# endif return(TRUE); } @@ -395,8 +403,8 @@ GC_bool GC_try_to_collect_inner(GC_stop_func stop_func) /* how long it takes. Doesn't count the initial root scan */ /* for a full GC. */ -int GC_deficit = 0; /* The number of extra calls to GC_mark_some */ - /* that we have made. */ +STATIC int GC_deficit = 0;/* The number of extra calls to GC_mark_some */ + /* that we have made. */ void GC_collect_a_little_inner(int n) { @@ -415,7 +423,9 @@ void GC_collect_a_little_inner(int n) # endif if (GC_n_attempts < MAX_PRIOR_ATTEMPTS && GC_time_limit != GC_TIME_UNLIMITED) { - GET_TIME(GC_start_time); +# ifndef NO_CLOCK + GET_TIME(GC_start_time); +# endif if (!GC_stopped_mark(GC_timeout_stop_func)) { GC_n_attempts++; break; @@ -434,7 +444,7 @@ void GC_collect_a_little_inner(int n) } } -int GC_collect_a_little(void) +GC_API int GC_collect_a_little(void) { int result; DCL_LOCK_STATE; @@ -448,8 +458,11 @@ int GC_collect_a_little(void) } # if !defined(REDIRECT_MALLOC) && (defined(MSWIN32) || defined(MSWINCE)) - void GC_add_current_malloc_heap(); + void GC_add_current_malloc_heap(void); # endif +#ifdef MAKE_BACK_GRAPH + void GC_build_back_graph(void); +#endif /* * Assumes lock is held, signals are disabled. * We stop the world. 
@@ -460,10 +473,12 @@ GC_bool GC_stopped_mark(GC_stop_func stop_func) { unsigned i; int dummy; - CLOCK_TYPE start_time, current_time; +# ifndef SMALL_CONFIG + CLOCK_TYPE start_time, current_time; - if (GC_print_stats) + if (GC_print_stats) GET_TIME(start_time); +# endif # if !defined(REDIRECT_MALLOC) && (defined(MSWIN32) || defined(MSWINCE)) GC_add_current_malloc_heap(); @@ -523,11 +538,13 @@ GC_bool GC_stopped_mark(GC_stop_func stop_func) IF_THREADS(GC_world_stopped = FALSE); START_WORLD(); - if (GC_print_stats) { - GET_TIME(current_time); - GC_log_printf("World-stopped marking took %lu msecs\n", - MS_TIME_DIFF(current_time,start_time)); - } +# ifndef SMALL_CONFIG + if (GC_print_stats) { + GET_TIME(current_time); + GC_log_printf("World-stopped marking took %lu msecs\n", + MS_TIME_DIFF(current_time,start_time)); + } +# endif return(TRUE); } @@ -573,7 +590,7 @@ void GC_check_fl_marks(ptr_t q) /* Clear all mark bits for the free list whose first entry is q */ /* Decrement GC_bytes_found by number of bytes on free list. */ -void GC_clear_fl_marks(ptr_t q) +STATIC void GC_clear_fl_marks(ptr_t q) { ptr_t p; struct hblk * h, * last_h = 0; @@ -609,13 +626,19 @@ void GC_clear_fl_marks(ptr_t q) extern void GC_check_tls(void); #endif +#ifdef MAKE_BACK_GRAPH +void GC_traverse_back_graph(void); +#endif + /* Finish up a collection. Assumes lock is held, signals are disabled, */ /* but the world is otherwise running. */ -void GC_finish_collection() +void GC_finish_collection(void) { - CLOCK_TYPE start_time; - CLOCK_TYPE finalize_time; - CLOCK_TYPE done_time; +# ifndef SMALL_CONFIG + CLOCK_TYPE start_time; + CLOCK_TYPE finalize_time; + CLOCK_TYPE done_time; +# endif # if defined(GC_ASSERTIONS) && defined(THREADS) \ && defined(THREAD_LOCAL_ALLOC) && !defined(DBG_HDRS_ALL) @@ -624,8 +647,10 @@ void GC_finish_collection() GC_check_tls(); # endif - if (GC_print_stats) - GET_TIME(start_time); +# ifndef SMALL_CONFIG + if (GC_print_stats) + GET_TIME(start_time); +# endif GC_bytes_found = 0; # if defined(LINUX) && defined(__ELF__) && !defined(SMALL_CONFIG) @@ -658,8 +683,10 @@ void GC_finish_collection() GC_clean_changing_list(); # endif - if (GC_print_stats) - GET_TIME(finalize_time); +# ifndef SMALL_CONFIG + if (GC_print_stats) + GET_TIME(finalize_time); +# endif if (GC_print_back_height) { # ifdef MAKE_BACK_GRAPH @@ -737,16 +764,19 @@ void GC_finish_collection() # ifdef USE_MUNMAP GC_unmap_old(); # endif - if (GC_print_stats) { + +# ifndef SMALL_CONFIG + if (GC_print_stats) { GET_TIME(done_time); GC_log_printf("Finalize + initiate sweep took %lu + %lu msecs\n", MS_TIME_DIFF(finalize_time,start_time), MS_TIME_DIFF(done_time,finalize_time)); - } + } +# endif } /* Externally callable routine to invoke full, stop-world collection */ -int GC_try_to_collect(GC_stop_func stop_func) +GC_API int GC_try_to_collect(GC_stop_func stop_func) { int result; DCL_LOCK_STATE; @@ -769,7 +799,7 @@ int GC_try_to_collect(GC_stop_func stop_func) return(result); } -void GC_gcollect(void) +GC_API void GC_gcollect(void) { (void)GC_try_to_collect(GC_never_stop_func); if (GC_have_errors) GC_print_all_errors(); @@ -882,7 +912,7 @@ static INLINE word GC_min(word x, word y) return(x < y? x : y); } -void GC_set_max_heap_size(GC_word n) +GC_API void GC_set_max_heap_size(GC_word n) { GC_max_heapsize = n; } @@ -966,7 +996,7 @@ GC_bool GC_expand_hp_inner(word n) /* Really returns a bool, but it's externally visible, so that's clumsy. */ /* Arguments is in bytes. 
*/ -int GC_expand_hp(size_t bytes) +GC_API int GC_expand_hp(size_t bytes) { int result; DCL_LOCK_STATE; diff --git a/backgraph.c b/backgraph.c index 92d09e03..17c92930 100644 --- a/backgraph.c +++ b/backgraph.c @@ -29,9 +29,9 @@ #define MAX_IN 10 /* Maximum in-degree we handle directly */ #include "private/dbg_mlc.h" -#include +/* #include */ -#if !defined(DBG_HDRS_ALL) || (ALIGNMENT != CPP_WORDSZ/8) || !defined(UNIX_LIKE) +#if !defined(DBG_HDRS_ALL) || (ALIGNMENT != CPP_WORDSZ/8) /* || !defined(UNIX_LIKE) */ # error Configuration doesnt support MAKE_BACK_GRAPH #endif @@ -75,7 +75,8 @@ typedef struct back_edges_struct { /* if this were production code. */ #define MAX_BACK_EDGE_STRUCTS 100000 static back_edges *back_edge_space = 0; -int GC_n_back_edge_structs = 0; /* Serves as pointer to never used */ +STATIC int GC_n_back_edge_structs = 0; + /* Serves as pointer to never used */ /* back_edges space. */ static back_edges *avail_back_edges = 0; /* Pointer to free list of deallocated */ @@ -123,7 +124,7 @@ static size_t n_in_progress = 0; static void push_in_progress(ptr_t p) { - if (n_in_progress >= in_progress_size) + if (n_in_progress >= in_progress_size) { if (in_progress_size == 0) { in_progress_size = INITIAL_IN_PROGRESS; in_progress_space = (ptr_t *)GET_MEM(in_progress_size * sizeof(ptr_t)); @@ -141,6 +142,7 @@ static void push_in_progress(ptr_t p) in_progress_space = new_in_progress_space; /* FIXME: This just drops the old space. */ } + } if (in_progress_space == 0) ABORT("MAKE_BACK_GRAPH: Out of in-progress space: " "Huge linear data structure?"); @@ -281,6 +283,7 @@ void GC_apply_to_each_object(per_object_func f) GC_apply_to_all_blocks(per_object_helper, (word)f); } +/*ARGSUSED*/ static void reset_back_edge(ptr_t p, size_t n_bytes, word gc_descr) { /* Skip any free list links, or dropped blocks */ @@ -292,7 +295,6 @@ static void reset_back_edge(ptr_t p, size_t n_bytes, word gc_descr) deallocate_back_edges(be); SET_OH_BG_PTR(p, 0); } else { - word *currentp; GC_ASSERT(GC_is_marked(p)); @@ -389,8 +391,8 @@ static word backwards_height(ptr_t p) return result; } -word GC_max_height; -ptr_t GC_deepest_obj; +STATIC word GC_max_height; +STATIC ptr_t GC_deepest_obj; /* Compute the maximum height of every unreachable predecessor p of a */ /* reachable object. Arrange to save the heights of all such objects p */ @@ -398,10 +400,10 @@ ptr_t GC_deepest_obj; /* next GC. */ /* Set GC_max_height to be the maximum height we encounter, and */ /* GC_deepest_obj to be the corresponding object. */ +/*ARGSUSED*/ static void update_max_height(ptr_t p, size_t n_bytes, word gc_descr) { if (GC_is_marked(p) && GC_HAS_DEBUG_INFO(p)) { - int i; word p_height = 0; ptr_t p_deepest_obj = 0; ptr_t back_ptr; @@ -444,7 +446,7 @@ static void update_max_height(ptr_t p, size_t n_bytes, word gc_descr) } } -word GC_max_max_height = 0; +STATIC word GC_max_max_height = 0; void GC_traverse_back_graph(void) { @@ -472,4 +474,9 @@ void GC_print_back_graph_stats(void) GC_deepest_obj = 0; } -#endif /* MAKE_BACK_GRAPH */ +#else /* !MAKE_BACK_GRAPH */ + +extern int GC_quiet; + /* ANSI C doesn't allow translation units to be empty. */ + +#endif /* !MAKE_BACK_GRAPH */ diff --git a/blacklst.c b/blacklst.c index afcad9c2..122c2b91 100644 --- a/blacklst.c +++ b/blacklst.c @@ -37,14 +37,14 @@ /* Pointers to individual tables. We replace one table by another by */ /* switching these pointers. */ -word * GC_old_normal_bl; +STATIC word * GC_old_normal_bl; /* Nonstack false references seen at last full */ /* collection. 
*/ -word * GC_incomplete_normal_bl; +STATIC word * GC_incomplete_normal_bl; /* Nonstack false references seen since last */ /* full collection. */ -word * GC_old_stack_bl; -word * GC_incomplete_stack_bl; +STATIC word * GC_old_stack_bl; +STATIC word * GC_incomplete_stack_bl; word GC_total_stack_black_listed; @@ -62,7 +62,8 @@ void GC_default_print_heap_obj_proc(ptr_t p) void (*GC_print_heap_obj) (ptr_t p) = GC_default_print_heap_obj_proc; -void GC_print_source_ptr(ptr_t p) +#ifdef PRINT_BLACK_LIST +STATIC void GC_print_source_ptr(ptr_t p) { ptr_t base = GC_base(p); if (0 == base) { @@ -76,6 +77,7 @@ void GC_print_source_ptr(ptr_t p) (*GC_print_heap_obj)(base); } } +#endif void GC_bl_init(void) { @@ -189,7 +191,6 @@ void GC_unpromote_black_lists(void) /* And the same for false pointers from the stack. */ #ifdef PRINT_BLACK_LIST void GC_add_to_black_list_stack(word p, ptr_t source) - ptr_t source; #else void GC_add_to_black_list_stack(word p) #endif diff --git a/checksums.c b/checksums.c index 0942acb4..419a89ac 100644 --- a/checksums.c +++ b/checksums.c @@ -34,8 +34,7 @@ typedef struct { page_entry GC_sums [NSUMS]; -word GC_checksum(h) -struct hblk *h; +STATIC word GC_checksum(struct hblk *h) { register word *p = (word *)h; register word *lim = (word *)(h+1); @@ -50,8 +49,7 @@ struct hblk *h; # ifdef STUBBORN_ALLOC /* Check whether a stubborn object from the given block appears on */ /* the appropriate free list. */ -GC_bool GC_on_free_list(struct hblk *h) -struct hblk *h; +STATIC GC_bool GC_on_free_list(struct hblk *h) { hdr * hhdr = HDR(h); int sz = BYTES_TO_WORDS(hhdr -> hb_sz); @@ -70,7 +68,7 @@ int GC_n_changed_errors; int GC_n_clean; int GC_n_dirty; -void GC_update_check_page(struct hblk *h, int index) +STATIC void GC_update_check_page(struct hblk *h, int index) { page_entry *pe = GC_sums + index; register hdr * hhdr = HDR(h); @@ -117,19 +115,18 @@ void GC_update_check_page(struct hblk *h, int index) unsigned long GC_bytes_in_used_blocks; -void GC_add_block(h, dummy) -struct hblk *h; -word dummy; +/*ARGSUSED*/ +STATIC void GC_add_block(struct hblk *h, word dummy) { hdr * hhdr = HDR(h); - bytes = hhdr -> hb_sz; + size_t bytes = hhdr -> hb_sz; bytes += HBLKSIZE-1; bytes &= ~(HBLKSIZE-1); GC_bytes_in_used_blocks += bytes; } -void GC_check_blocks() +STATIC void GC_check_blocks(void) { unsigned long bytes_in_free_blocks = GC_large_free_bytes; @@ -144,7 +141,7 @@ void GC_check_blocks() } /* Should be called immediately after GC_read_dirty and GC_read_changed. 
*/ -void GC_check_dirty() +void GC_check_dirty(void) { register int index; register unsigned i; diff --git a/darwin_stop_world.c b/darwin_stop_world.c index 9d3d1e29..7ed9c899 100644 --- a/darwin_stop_world.c +++ b/darwin_stop_world.c @@ -73,7 +73,9 @@ unsigned long FindTopOfStack(unsigned long stack_start) } #ifdef DARWIN_DONT_PARSE_STACK -void GC_push_all_stacks() +void GC_thr_init(void); + +void GC_push_all_stacks(void) { int i; kern_return_t r; @@ -194,7 +196,7 @@ void GC_push_all_stacks() #else /* !DARWIN_DONT_PARSE_STACK; Use FindTopOfStack() */ -void GC_push_all_stacks() +void GC_push_all_stacks(void) { unsigned int i; task_t my_task; @@ -346,7 +348,7 @@ static int GC_use_mach_handler_thread = 0; static struct GC_mach_thread GC_mach_threads[THREAD_TABLE_SZ]; static int GC_mach_threads_count; -void GC_stop_init() +void GC_stop_init(void) { int i; @@ -358,8 +360,8 @@ void GC_stop_init() } /* returns true if there's a thread in act_list that wasn't in old_list */ -int GC_suspend_thread_list(thread_act_array_t act_list, int count, - thread_act_array_t old_list, int old_count) +STATIC int GC_suspend_thread_list(thread_act_array_t act_list, int count, + thread_act_array_t old_list, int old_count) { mach_port_t my_thread = mach_thread_self(); int i, j; @@ -441,7 +443,7 @@ int GC_suspend_thread_list(thread_act_array_t act_list, int count, /* Caller holds allocation lock. */ -void GC_stop_world() +void GC_stop_world(void) { unsigned int i, changes; task_t my_task = current_task(); @@ -527,7 +529,7 @@ void GC_stop_world() /* Caller holds allocation lock, and has held it continuously since */ /* the world stopped. */ -void GC_start_world() +void GC_start_world(void) { task_t my_task = current_task(); mach_port_t my_thread = mach_thread_self(); diff --git a/dbg_mlc.c b/dbg_mlc.c index 4bb0e136..7b23b525 100644 --- a/dbg_mlc.c +++ b/dbg_mlc.c @@ -211,6 +211,9 @@ GC_bool GC_has_other_debug_info(ptr_t p) GC_print_heap_obj(GC_base(base)); GC_err_printf("\n"); break; + default: + GC_err_printf("INTERNAL ERROR: UNEXPECTED SOURCE!!!!\n"); + goto out; } current = base; } @@ -271,7 +274,8 @@ ptr_t GC_store_debug_info(ptr_t p, word sz, const char *string, word integer) #ifdef DBG_HDRS_ALL /* Store debugging info into p. Return displaced pointer. */ /* This version assumes we do hold the allocation lock. */ -ptr_t GC_store_debug_info_inner(ptr_t p, word sz, char *string, word integer) +STATIC ptr_t GC_store_debug_info_inner(ptr_t p, word sz, char *string, + word integer) { register word * result = (word *)((oh *)p + 1); @@ -302,7 +306,7 @@ ptr_t GC_store_debug_info_inner(ptr_t p, word sz, char *string, word integer) /* Check the object with debugging info at ohdr */ /* return NIL if it's OK. Else return clobbered */ /* address. */ -ptr_t GC_check_annotated_obj(oh *ohdr) +STATIC ptr_t GC_check_annotated_obj(oh *ohdr) { register ptr_t body = (ptr_t)(ohdr + 1); register word gc_sz = GC_size((ptr_t)ohdr); @@ -332,7 +336,7 @@ void GC_register_describe_type_fn(int kind, GC_describe_type_fn fn) /* Print a type description for the object whose client-visible address */ /* is p. 
*/ -void GC_print_type(ptr_t p) +STATIC void GC_print_type(ptr_t p) { hdr * hhdr = GC_find_header(p); char buffer[GC_TYPE_DESCR_LEN + 1]; @@ -391,7 +395,7 @@ void GC_print_obj(ptr_t p) PRINT_CALL_CHAIN(ohdr); } -void GC_debug_print_heap_obj_proc(ptr_t p) +STATIC void GC_debug_print_heap_obj_proc(ptr_t p) { GC_ASSERT(I_DONT_HOLD_LOCK()); if (GC_HAS_DEBUG_INFO(p)) { @@ -405,7 +409,7 @@ void GC_debug_print_heap_obj_proc(ptr_t p) /* Use GC_err_printf and friends to print a description of the object */ /* whose client-visible address is p, and which was smashed at */ /* clobbered_addr. */ -void GC_print_smashed_obj(ptr_t p, ptr_t clobbered_addr) +STATIC void GC_print_smashed_obj(ptr_t p, ptr_t clobbered_addr) { register oh * ohdr = (oh *)GC_base(p); @@ -428,11 +432,12 @@ void GC_print_smashed_obj(ptr_t p, ptr_t clobbered_addr) } #endif -void GC_check_heap_proc (void); - -void GC_print_all_smashed_proc (void); - -void GC_do_nothing(void) {} +#ifndef SHORT_DBG_HDRS + STATIC void GC_check_heap_proc (void); + STATIC void GC_print_all_smashed_proc (void); +#else + STATIC void GC_do_nothing(void) {} +#endif void GC_start_debugging(void) { @@ -450,13 +455,13 @@ void GC_start_debugging(void) size_t GC_debug_header_size = sizeof(oh); -void GC_debug_register_displacement(size_t offset) +GC_API void GC_debug_register_displacement(size_t offset) { GC_register_displacement(offset); GC_register_displacement((word)sizeof(oh) + offset); } -void * GC_debug_malloc(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc(size_t lb, GC_EXTRA_PARAMS) { void * result = GC_malloc(lb + DEBUG_BYTES); @@ -474,7 +479,7 @@ void * GC_debug_malloc(size_t lb, GC_EXTRA_PARAMS) return (GC_store_debug_info(result, (word)lb, s, (word)i)); } -void * GC_debug_malloc_ignore_off_page(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc_ignore_off_page(size_t lb, GC_EXTRA_PARAMS) { void * result = GC_malloc_ignore_off_page(lb + DEBUG_BYTES); @@ -492,7 +497,7 @@ void * GC_debug_malloc_ignore_off_page(size_t lb, GC_EXTRA_PARAMS) return (GC_store_debug_info(result, (word)lb, s, (word)i)); } -void * GC_debug_malloc_atomic_ignore_off_page(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc_atomic_ignore_off_page(size_t lb, GC_EXTRA_PARAMS) { void * result = GC_malloc_atomic_ignore_off_page(lb + DEBUG_BYTES); @@ -548,7 +553,7 @@ void * GC_debug_malloc_atomic_ignore_off_page(size_t lb, GC_EXTRA_PARAMS) # endif #ifdef STUBBORN_ALLOC -void * GC_debug_malloc_stubborn(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc_stubborn(size_t lb, GC_EXTRA_PARAMS) { void * result = GC_malloc_stubborn(lb + DEBUG_BYTES); @@ -566,7 +571,7 @@ void * GC_debug_malloc_stubborn(size_t lb, GC_EXTRA_PARAMS) return (GC_store_debug_info(result, (word)lb, s, (word)i)); } -void GC_debug_change_stubborn(void *p) +GC_API void GC_debug_change_stubborn(void *p) { void * q = GC_base(p); hdr * hhdr; @@ -583,7 +588,7 @@ void GC_debug_change_stubborn(void *p) GC_change_stubborn(q); } -void GC_debug_end_stubborn_change(void *p) +GC_API void GC_debug_end_stubborn_change(void *p) { register void * q = GC_base(p); register hdr * hhdr; @@ -602,22 +607,22 @@ void GC_debug_end_stubborn_change(void *p) #else /* !STUBBORN_ALLOC */ -void * GC_debug_malloc_stubborn(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc_stubborn(size_t lb, GC_EXTRA_PARAMS) { return GC_debug_malloc(lb, OPT_RA s, i); } -void GC_debug_change_stubborn(void *p) +GC_API void GC_debug_change_stubborn(void *p) { } -void GC_debug_end_stubborn_change(void *p) +GC_API void 
GC_debug_end_stubborn_change(void *p) { } #endif /* !STUBBORN_ALLOC */ -void * GC_debug_malloc_atomic(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc_atomic(size_t lb, GC_EXTRA_PARAMS) { void * result = GC_malloc_atomic(lb + DEBUG_BYTES); @@ -635,7 +640,7 @@ void * GC_debug_malloc_atomic(size_t lb, GC_EXTRA_PARAMS) return (GC_store_debug_info(result, (word)lb, s, (word)i)); } -char *GC_debug_strdup(const char *str, GC_EXTRA_PARAMS) +GC_API char *GC_debug_strdup(const char *str, GC_EXTRA_PARAMS) { char *copy; if (str == NULL) return NULL; @@ -648,7 +653,7 @@ char *GC_debug_strdup(const char *str, GC_EXTRA_PARAMS) return copy; } -void * GC_debug_malloc_uncollectable(size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_malloc_uncollectable(size_t lb, GC_EXTRA_PARAMS) { void * result = GC_malloc_uncollectable(lb + UNCOLLECTABLE_DEBUG_BYTES); @@ -688,10 +693,12 @@ void * GC_debug_malloc_atomic_uncollectable(size_t lb, GC_EXTRA_PARAMS) } #endif /* ATOMIC_UNCOLLECTABLE */ -void GC_debug_free(void * p) +GC_API void GC_debug_free(void * p) { ptr_t base; - ptr_t clobbered; +# ifndef SHORT_DBG_HDRS + ptr_t clobbered; +# endif if (0 == p) return; base = GC_base(p); @@ -755,16 +762,19 @@ void GC_debug_free_inner(void * p) } #endif -void * GC_debug_realloc(void * p, size_t lb, GC_EXTRA_PARAMS) +GC_API void * GC_debug_realloc(void * p, size_t lb, GC_EXTRA_PARAMS) { - void * base = GC_base(p); - ptr_t clobbered; + void * base; +# ifndef SHORT_DBG_HDRS + ptr_t clobbered; +# endif void * result; size_t copy_sz = lb; size_t old_sz; hdr * hhdr; if (p == 0) return(GC_debug_malloc(lb, OPT_RA s, i)); + base = GC_base(p); if (base == 0) { GC_err_printf("Attempt to reallocate invalid pointer %p\n", p); ABORT("realloc(invalid pointer)"); @@ -826,7 +836,7 @@ void * GC_debug_realloc(void * p, size_t lb, GC_EXTRA_PARAMS) ptr_t GC_smashed[MAX_SMASHED]; unsigned GC_n_smashed = 0; -void GC_add_smashed(ptr_t smashed) +STATIC void GC_add_smashed(ptr_t smashed) { GC_ASSERT(GC_is_marked(GC_base(smashed))); GC_smashed[GC_n_smashed] = smashed; @@ -837,7 +847,7 @@ void GC_add_smashed(ptr_t smashed) } /* Print all objects on the list. Clear the list. */ -void GC_print_all_smashed_proc(void) +STATIC void GC_print_all_smashed_proc(void) { unsigned i; @@ -855,7 +865,7 @@ void GC_print_all_smashed_proc(void) /* Check all marked objects in the given block for validity */ /* Avoid GC_apply_to_each_object for performance reasons. */ /*ARGSUSED*/ -void GC_check_heap_block(struct hblk *hbp, word dummy) +STATIC void GC_check_heap_block(struct hblk *hbp, word dummy) { struct hblkhdr * hhdr = HDR(hbp); size_t sz = hhdr -> hb_sz; @@ -885,10 +895,9 @@ void GC_check_heap_block(struct hblk *hbp, word dummy) /* This assumes that all accessible objects are marked, and that */ /* I hold the allocation lock. Normally called by collector. */ -void GC_check_heap_proc(void) +STATIC void GC_check_heap_proc(void) { # ifndef SMALL_CONFIG - /* Ignore gcc no effect warning on the following. */ GC_STATIC_ASSERT((sizeof(oh) & (GRANULE_BYTES - 1)) == 0); /* FIXME: Should we check for twice that alignment? 
*/ # endif @@ -944,9 +953,9 @@ static void store_old (void *obj, GC_finalization_proc my_old_fn, } } -void GC_debug_register_finalizer(void * obj, GC_finalization_proc fn, - void * cd, GC_finalization_proc *ofn, - void * *ocd) +GC_API void GC_debug_register_finalizer(void * obj, GC_finalization_proc fn, + void * cd, GC_finalization_proc *ofn, + void * *ocd) { GC_finalization_proc my_old_fn; void * my_old_cd; @@ -966,7 +975,7 @@ void GC_debug_register_finalizer(void * obj, GC_finalization_proc fn, store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd); } -void GC_debug_register_finalizer_no_order +GC_API void GC_debug_register_finalizer_no_order (void * obj, GC_finalization_proc fn, void * cd, GC_finalization_proc *ofn, void * *ocd) @@ -991,7 +1000,7 @@ void GC_debug_register_finalizer_no_order store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd); } -void GC_debug_register_finalizer_unreachable +GC_API void GC_debug_register_finalizer_unreachable (void * obj, GC_finalization_proc fn, void * cd, GC_finalization_proc *ofn, void * *ocd) @@ -1016,7 +1025,7 @@ void GC_debug_register_finalizer_unreachable store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd); } -void GC_debug_register_finalizer_ignore_self +GC_API void GC_debug_register_finalizer_ignore_self (void * obj, GC_finalization_proc fn, void * cd, GC_finalization_proc *ofn, void * *ocd) @@ -1046,12 +1055,12 @@ void GC_debug_register_finalizer_ignore_self # define RA #endif -void * GC_debug_malloc_replacement(size_t lb) +GC_API void * GC_debug_malloc_replacement(size_t lb) { return GC_debug_malloc(lb, RA "unknown", 0); } -void * GC_debug_realloc_replacement(void *p, size_t lb) +GC_API void * GC_debug_realloc_replacement(void *p, size_t lb) { return GC_debug_realloc(p, lb, RA "unknown", 0); } diff --git a/dyn_load.c b/dyn_load.c index 53469284..a824f867 100644 --- a/dyn_load.c +++ b/dyn_load.c @@ -50,11 +50,6 @@ # undef GC_must_restore_redefined_dlopen # endif -/* A user-supplied routine that is called to determine if a DSO must - be scanned by the gc. */ -static int (*GC_has_static_roots)(const char *, void *, size_t); - - #if (defined(DYNAMIC_LOADING) || defined(MSWIN32) || defined(MSWINCE)) \ && !defined(PCR) #if !defined(SOLARISDL) && !defined(IRIX5) && \ @@ -121,7 +116,7 @@ static int (*GC_has_static_roots)(const char *, void *, size_t); #endif static struct link_map * -GC_FirstDLOpenedLinkMap() +GC_FirstDLOpenedLinkMap(void) { extern ElfW(Dyn) _DYNAMIC; ElfW(Dyn) *dp; @@ -138,7 +133,7 @@ GC_FirstDLOpenedLinkMap() if( dynStructureAddr == 0 ) { void* startupSyms = dlopen(0, RTLD_LAZY); dynStructureAddr = (ElfW(Dyn)*)dlsym(startupSyms, "_DYNAMIC"); - } + } # else dynStructureAddr = &_DYNAMIC; # endif @@ -174,7 +169,7 @@ GC_FirstDLOpenedLinkMap() # endif # ifndef USE_PROC_FOR_LIBRARIES -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { struct link_map *lm = GC_FirstDLOpenedLinkMap(); @@ -245,7 +240,7 @@ char *GC_get_maps(void); /* be used in the colector. Hence we roll our own. Should be */ /* reasonably fast if the array is already mostly sorted, as we expect */ /* it to be. 
*/ -void sort_heap_sects(struct HeapSect *base, size_t number_of_elements) +static void sort_heap_sects(struct HeapSect *base, size_t number_of_elements) { signed_word n = (signed_word)number_of_elements; signed_word nsorted = 1; @@ -269,7 +264,7 @@ void sort_heap_sects(struct HeapSect *base, size_t number_of_elements) } } -word GC_register_map_entries(char *maps) +STATIC word GC_register_map_entries(char *maps) { char *prot; char *buf_ptr = maps; @@ -355,14 +350,14 @@ word GC_register_map_entries(char *maps) return 1; } -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { if (!GC_register_map_entries(GC_get_maps())) ABORT("Failed to read /proc for library registration."); } /* We now take care of the main data segment ourselves: */ -GC_bool GC_register_main_static_data() +GC_bool GC_register_main_static_data(void) { return FALSE; } @@ -385,6 +380,10 @@ GC_bool GC_register_main_static_data() /* Thus we also treat it as a weak symbol. */ #define HAVE_DL_ITERATE_PHDR +/* A user-supplied routine that is called to determine if a DSO must + be scanned by the gc. */ +static int (*GC_has_static_roots)(const char *, void *, size_t); + static int GC_register_dynlib_callback(info, size, ptr) struct dl_phdr_info * info; size_t size; @@ -427,7 +426,7 @@ static int GC_register_dynlib_callback(info, size, ptr) #pragma weak dl_iterate_phdr -GC_bool GC_register_dynamic_libraries_dl_iterate_phdr() +GC_bool GC_register_dynamic_libraries_dl_iterate_phdr(void) { if (dl_iterate_phdr) { int did_something = 0; @@ -448,7 +447,7 @@ GC_bool GC_register_dynamic_libraries_dl_iterate_phdr() } /* Do we need to separately register the main static data segment? */ -GC_bool GC_register_main_static_data() +GC_bool GC_register_main_static_data(void) { return (dl_iterate_phdr == 0); } @@ -490,7 +489,7 @@ GC_bool GC_register_main_static_data() extern ElfW(Dyn) _DYNAMIC[]; static struct link_map * -GC_FirstDLOpenedLinkMap() +GC_FirstDLOpenedLinkMap(void) { ElfW(Dyn) *dp; static struct link_map *cachedResult = 0; @@ -513,7 +512,7 @@ GC_FirstDLOpenedLinkMap() } -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { struct link_map *lm; @@ -568,7 +567,7 @@ void GC_register_dynamic_libraries() # define IRIX6 #endif -extern void * GC_roots_present(); +extern void * GC_roots_present(ptr_t); /* The type is a lie, since the real type doesn't make sense here, */ /* and we only test for NULL. */ @@ -576,7 +575,7 @@ extern void * GC_roots_present(); /* We use /proc to track down all parts of the address space that are */ /* mapped by the process, and throw out regions we know we shouldn't */ /* worry about. This may also work under other SVR4 variants. */ -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { static int fd = -1; char buf[30]; @@ -587,7 +586,7 @@ void GC_register_dynamic_libraries() long flags; ptr_t start; ptr_t limit; - ptr_t heap_start = (ptr_t)HEAP_START; + ptr_t heap_start = HEAP_START; ptr_t heap_end = heap_start; # ifdef SOLARISDL @@ -733,14 +732,14 @@ void GC_register_dynamic_libraries() # ifdef MSWINCE /* Do we need to separately register the main static data segment? 
*/ - GC_bool GC_register_main_static_data() + GC_bool GC_register_main_static_data(void) { return FALSE; } # else /* win32 */ extern GC_bool GC_no_win32_dlls; - GC_bool GC_register_main_static_data() + GC_bool GC_register_main_static_data(void) { return GC_no_win32_dlls; } @@ -764,7 +763,7 @@ void GC_register_dynamic_libraries() extern GC_bool GC_wnt; /* Is Windows NT derivative. */ /* Defined and set in os_dep.c. */ - void GC_register_dynamic_libraries() + void GC_register_dynamic_libraries(void) { MEMORY_BASIC_INFORMATION buf; size_t result; @@ -806,7 +805,7 @@ void GC_register_dynamic_libraries() * and predecessors. Hence we now also check for * that case. */ && (buf.Type == MEM_IMAGE || - !GC_wnt && buf.Type == MEM_PRIVATE)) { + (!GC_wnt && buf.Type == MEM_PRIVATE))) { # ifdef DEBUG_VIRTUALQUERY GC_dump_meminfo(&buf); # endif @@ -829,7 +828,7 @@ void GC_register_dynamic_libraries() #include -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { int status; ldr_process_t mypid; @@ -995,7 +994,7 @@ void GC_register_dynamic_libraries() #pragma alloca #include #include -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { int len; char *ldibuf; @@ -1155,7 +1154,7 @@ void GC_init_dyld() { } #define HAVE_REGISTER_MAIN_STATIC_DATA -GC_bool GC_register_main_static_data() +GC_bool GC_register_main_static_data(void) { /* Already done through dyld callbacks */ return FALSE; @@ -1171,7 +1170,7 @@ GC_bool GC_register_main_static_data() # include "th/PCR_ThCtl.h" # include "mm/PCR_MM.h" -void GC_register_dynamic_libraries() +void GC_register_dynamic_libraries(void) { /* Add new static data areas of dynamically loaded modules. */ { @@ -1206,8 +1205,6 @@ void GC_register_dynamic_libraries() void GC_register_dynamic_libraries(){} -int GC_no_dynamic_loading; - #endif /* !PCR */ #endif /* !DYNAMIC_LOADING */ @@ -1215,16 +1212,18 @@ int GC_no_dynamic_loading; #ifndef HAVE_REGISTER_MAIN_STATIC_DATA /* Do we need to separately register the main static data segment? */ -GC_bool GC_register_main_static_data() +GC_bool GC_register_main_static_data(void) { return TRUE; } /* Register a routine to filter dynamic library registration. */ -void +GC_API void GC_register_has_static_roots_callback (int (*callback)(const char *, void *, size_t)) { - GC_has_static_roots = callback; +# ifdef HAVE_DL_ITERATE_PHDR + GC_has_static_roots = callback; +# endif } #endif /* HAVE_REGISTER_MAIN_STATIC_DATA */ diff --git a/finalize.c b/finalize.c index 8587fae8..eb0252c7 100644 --- a/finalize.c +++ b/finalize.c @@ -62,7 +62,8 @@ static signed_word log_dl_table_size = -1; /* current size of array pointed to by dl_head. */ /* -1 ==> size is 0. */ -word GC_dl_entries = 0; /* Number of entries currently in disappearing */ +STATIC word GC_dl_entries = 0; + /* Number of entries currently in disappearing */ /* link table. */ static struct finalizable_object { @@ -79,7 +80,7 @@ static struct finalizable_object { finalization_mark_proc * fo_mark_proc; /* Mark-through procedure */ } **fo_head = 0; -struct finalizable_object * GC_finalize_now = 0; +STATIC struct finalizable_object * GC_finalize_now = 0; /* LIst of objects that should be finalized now. */ static signed_word log_fo_table_size = -1; @@ -99,8 +100,8 @@ void GC_push_finalizer_structures(void) /* *table is a pointer to an array of hash headers. If we succeed, we */ /* update both *table and *log_size_ptr. */ /* Lock is held. Signals are disabled. 
*/ -void GC_grow_table(struct hash_chain_entry ***table, - signed_word *log_size_ptr) +STATIC void GC_grow_table(struct hash_chain_entry ***table, + signed_word *log_size_ptr) { register word i; register struct hash_chain_entry *p; @@ -136,7 +137,7 @@ void GC_grow_table(struct hash_chain_entry ***table, *table = new_table; } -int GC_register_disappearing_link(void * * link) +GC_API int GC_register_disappearing_link(void * * link) { ptr_t base; @@ -146,7 +147,7 @@ int GC_register_disappearing_link(void * * link) return(GC_general_register_disappearing_link(link, base)); } -int GC_general_register_disappearing_link(void * * link, void * obj) +GC_API int GC_general_register_disappearing_link(void * * link, void * obj) { struct disappearing_link *curr_dl; size_t index; @@ -206,7 +207,7 @@ int GC_general_register_disappearing_link(void * * link, void * obj) return(0); } -int GC_unregister_disappearing_link(void * * link) +GC_API int GC_unregister_disappearing_link(void * * link) { struct disappearing_link *curr_dl, *prev_dl; size_t index; @@ -421,15 +422,15 @@ GC_API void GC_register_finalizer_inner(void * obj, # endif } -void GC_register_finalizer(void * obj, - GC_finalization_proc fn, void * cd, - GC_finalization_proc *ofn, void ** ocd) +GC_API void GC_register_finalizer(void * obj, + GC_finalization_proc fn, void * cd, + GC_finalization_proc *ofn, void ** ocd) { GC_register_finalizer_inner(obj, fn, cd, ofn, ocd, GC_normal_finalize_mark_proc); } -void GC_register_finalizer_ignore_self(void * obj, +GC_API void GC_register_finalizer_ignore_self(void * obj, GC_finalization_proc fn, void * cd, GC_finalization_proc *ofn, void ** ocd) { @@ -437,7 +438,7 @@ void GC_register_finalizer_ignore_self(void * obj, ocd, GC_ignore_self_finalize_mark_proc); } -void GC_register_finalizer_no_order(void * obj, +GC_API void GC_register_finalizer_no_order(void * obj, GC_finalization_proc fn, void * cd, GC_finalization_proc *ofn, void ** ocd) { @@ -448,7 +449,7 @@ void GC_register_finalizer_no_order(void * obj, static GC_bool need_unreachable_finalization = FALSE; /* Avoid the work if this isn't used. */ -void GC_register_finalizer_unreachable(void * obj, +GC_API void GC_register_finalizer_unreachable(void * obj, GC_finalization_proc fn, void * cd, GC_finalization_proc *ofn, void ** ocd) { @@ -655,7 +656,7 @@ void GC_finalize(void) /* Enqueue all remaining finalizers to be run - Assumes lock is * held, and signals are disabled */ -void GC_enqueue_all_finalizers(void) +STATIC void GC_enqueue_all_finalizers(void) { struct finalizable_object * curr_fo, * prev_fo, * next_fo; ptr_t real_ptr; @@ -731,14 +732,14 @@ GC_API void GC_finalize_all(void) /* Returns true if it is worth calling GC_invoke_finalizers. (Useful if */ /* finalizers can only be called from some kind of `safe state' and */ /* getting into that safe state is expensive.) */ -int GC_should_invoke_finalizers(void) +GC_API int GC_should_invoke_finalizers(void) { return GC_finalize_now != 0; } /* Invoke finalizers for all objects that are ready to be finalized. */ /* Should be called without allocation lock. 
*/ -int GC_invoke_finalizers(void) +GC_API int GC_invoke_finalizers(void) { struct finalizable_object * curr_fo; int count = 0; @@ -782,7 +783,8 @@ int GC_invoke_finalizers(void) return count; } -void (* GC_finalizer_notifier)() = (void (*) (void))0; +GC_finalizer_notifier_proc GC_finalizer_notifier = + (GC_finalizer_notifier_proc)0; static GC_word last_finalizer_notification = 0; @@ -794,9 +796,8 @@ void GC_notify_or_invoke_finalizers(void) static word last_back_trace_gc_no = 1; /* Skip first one. */ if (GC_gc_no > last_back_trace_gc_no) { - word i; - # ifdef KEEP_BACK_PTRS + word i; LOCK(); /* Stops when GC_gc_no wraps; that's OK. */ last_back_trace_gc_no = (word)(-1); /* disable others. */ @@ -826,14 +827,14 @@ void GC_notify_or_invoke_finalizers(void) # endif /* Otherwise GC can run concurrently and add more */ return; } - if (GC_finalizer_notifier != (void (*) (void))0 + if (GC_finalizer_notifier != (GC_finalizer_notifier_proc)0 && last_finalizer_notification != GC_gc_no) { last_finalizer_notification = GC_gc_no; GC_finalizer_notifier(); } } -void * GC_call_with_alloc_lock(GC_fn_type fn, void * client_data) +GC_API void * GC_call_with_alloc_lock(GC_fn_type fn, void * client_data) { void * result; DCL_LOCK_STATE; diff --git a/gc_cpp.cc b/gc_cpp.cc index 47cff19e..f46388c9 100644 --- a/gc_cpp.cc +++ b/gc_cpp.cc @@ -56,11 +56,13 @@ void* operator new( size_t size, #endif } +#if _MSC_VER > 1020 // This new operator is used by VC++ 7.0 and later in Debug builds. void* operator new[](size_t size, int nBlockUse, const char* szFileName, int nLine) { return operator new(size, nBlockUse, szFileName, nLine); } +#endif #endif /* _MSC_VER */ diff --git a/gc_dlopen.c b/gc_dlopen.c index 51659d1e..d0a26e16 100644 --- a/gc_dlopen.c +++ b/gc_dlopen.c @@ -37,6 +37,8 @@ # undef dlopen # endif + GC_bool GC_collection_in_progress(void); + /* Make sure we're not in the middle of a collection, and make */ /* sure we don't start any. Returns previous value of GC_dont_gc. */ /* This is invoked prior to a dlopen call to avoid synchronization */ @@ -46,7 +48,7 @@ /* calls in either a multithreaded environment, or if the library */ /* initialization code allocates substantial amounts of GC'ed memory. */ /* But I don't know of a better solution. */ - static void disable_gc_for_dlopen() + static void disable_gc_for_dlopen(void) { LOCK(); while (GC_incremental && GC_collection_in_progress()) { diff --git a/gcj_mlc.c b/gcj_mlc.c index 7e5beb18..5647db62 100644 --- a/gcj_mlc.c +++ b/gcj_mlc.c @@ -31,9 +31,8 @@ * is to get better gcj performance. * * We assume: - * 1) We have an ANSI conforming C compiler. - * 2) Counting on explicit initialization of this interface is OK. - * 3) FASTLOCK is not a significant win. + * 1) Counting on explicit initialization of this interface is OK; + * 2) FASTLOCK is not a significant win. */ #include "private/gc_pmark.h" @@ -51,9 +50,8 @@ ptr_t * GC_gcjobjfreelist; ptr_t * GC_gcjdebugobjfreelist; /* Caller does not hold allocation lock. 
*/ -void GC_init_gcj_malloc(int mp_index, void * /* really GC_mark_proc */mp) +GC_API void GC_init_gcj_malloc(int mp_index, void * /* really GC_mark_proc */mp) { - register int i; GC_bool ignore_gcj_info; DCL_LOCK_STATE; @@ -70,7 +68,8 @@ void GC_init_gcj_malloc(int mp_index, void * /* really GC_mark_proc */mp) } GC_ASSERT(GC_mark_procs[mp_index] == (GC_mark_proc)0); /* unused */ GC_mark_procs[mp_index] = (GC_mark_proc)mp; - if (mp_index >= GC_n_mark_procs) ABORT("GC_init_gcj_malloc: bad index"); + if ((unsigned)mp_index >= GC_n_mark_procs) + ABORT("GC_init_gcj_malloc: bad index"); /* Set up object kind gcj-style indirect descriptor. */ GC_gcjobjfreelist = (ptr_t *)GC_new_free_list_inner(); if (ignore_gcj_info) { @@ -116,9 +115,9 @@ void * GC_clear_stack(void *); /* We do this even where we could just call GC_INVOKE_FINALIZERS, */ /* since it's probably cheaper and certainly more uniform. */ /* FIXME - Consider doing the same elsewhere? */ -static void maybe_finalize() +static void maybe_finalize(void) { - static int last_finalized_no = 0; + static word last_finalized_no = 0; if (GC_gc_no == last_finalized_no) return; if (!GC_is_initialized) return; @@ -134,7 +133,7 @@ static void maybe_finalize() #ifdef THREAD_LOCAL_ALLOC void * GC_core_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr) #else - void * GC_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr) + GC_API void * GC_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr) #endif { ptr_t op; @@ -175,10 +174,12 @@ static void maybe_finalize() return((void *) op); } +void GC_start_debugging(void); + /* Similar to GC_gcj_malloc, but add debug info. This is allocated */ /* with GC_gcj_debug_kind. */ -void * GC_debug_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr, - GC_EXTRA_PARAMS) +GC_API void * GC_debug_gcj_malloc(size_t lb, + void * ptr_to_struct_containing_descr, GC_EXTRA_PARAMS) { void * result; @@ -204,7 +205,7 @@ void * GC_debug_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr, return (GC_store_debug_info(result, (word)lb, s, (word)i)); } -void * GC_gcj_malloc_ignore_off_page(size_t lb, +GC_API void * GC_gcj_malloc_ignore_off_page(size_t lb, void * ptr_to_struct_containing_descr) { ptr_t op; @@ -219,7 +220,7 @@ void * GC_gcj_malloc_ignore_off_page(size_t lb, if( (op = *opp) == 0 ) { maybe_finalize(); op = (ptr_t)GENERAL_MALLOC_IOP(lb, GC_gcj_kind); - lg = GC_size_map[lb]; /* May have been uninitialized. */ + /* lg = GC_size_map[lb]; */ /* May have been uninitialized. */ } else { *opp = obj_link(op); GC_bytes_allocd += GRANULES_TO_BYTES(lg); @@ -240,6 +241,7 @@ void * GC_gcj_malloc_ignore_off_page(size_t lb, #else -char GC_no_gcj_support; +extern int GC_quiet; + /* ANSI C doesn't allow translation units to be empty. */ #endif /* GC_GCJ_SUPPORT */ diff --git a/headers.c b/headers.c index 7aef710d..9735d4b0 100644 --- a/headers.c +++ b/headers.c @@ -24,11 +24,11 @@ # include "private/gc_priv.h" -bottom_index * GC_all_bottom_indices = 0; +STATIC bottom_index * GC_all_bottom_indices = 0; /* Pointer to first (lowest addr) */ /* bottom_index. */ -bottom_index * GC_all_bottom_indices_end = 0; +STATIC bottom_index * GC_all_bottom_indices_end = 0; /* Pointer to last (highest addr) */ /* bottom_index. */ diff --git a/include/gc.h b/include/gc.h index 4d9285bf..31af7914 100644 --- a/include/gc.h +++ b/include/gc.h @@ -49,7 +49,7 @@ /* size as char * or void *. There seems to be no way to do this */ /* even semi-portably. The following is probably no better/worse */ /* than almost anything else. 
*/ -/* The ANSI standard suggests that size_t and ptr_diff_t might be */ +/* The ANSI standard suggests that size_t and ptrdiff_t might be */ /* better choices. But those had incorrect definitions on some older */ /* systems. Notably "typedef int size_t" is WRONG. */ #ifndef _WIN64 @@ -59,14 +59,22 @@ /* Win64 isn't really supported yet, but this is the first step. And */ /* it might cause error messages to show up in more plausible places. */ /* This needs basetsd.h, which is included by windows.h. */ +#ifdef __int64 + typedef unsigned __int64 GC_word; + typedef __int64 GC_signed_word; +#else typedef unsigned long long GC_word; typedef long long GC_signed_word; #endif +#endif /* Public read-only variables */ +/* Getter procedures are supplied in some cases and preferred for new */ +/* code. */ GC_API GC_word GC_gc_no;/* Counter incremented per collection. */ /* Includes empty GCs at startup. */ +GC_API GC_word GC_get_gc_no(void); GC_API int GC_parallel; /* GC is parallelized for performance on */ /* multiprocessors. Currently set only */ @@ -77,11 +85,13 @@ GC_API int GC_parallel; /* GC is parallelized for performance on */ /* If GC_parallel is set, incremental */ /* collection is only partially functional, */ /* and may not be desirable. */ +GC_API int GC_get_parallel(void); /* Public R/W variables */ -GC_API void * (*GC_oom_fn) (size_t bytes_requested); +typedef void * (* GC_oom_func)(size_t /* bytes_requested */); +GC_API GC_oom_func GC_oom_fn; /* When there is insufficient memory to satisfy */ /* an allocation request, we return */ /* (*GC_oom_fn)(). By default this just */ @@ -89,6 +99,7 @@ GC_API void * (*GC_oom_fn) (size_t bytes_requested); /* If it returns, it must return 0 or a valid */ /* pointer to a previously allocated heap */ /* object. */ +GC_API GC_oom_func GC_set_oom_fn(GC_oom_func); GC_API int GC_find_leak; /* Do not actually garbage collect, but simply */ @@ -114,6 +125,7 @@ GC_API int GC_finalize_on_demand; /* call. The default is determined by whether */ /* the FINALIZE_ON_DEMAND macro is defined */ /* when the collector is built. */ +GC_API int GC_set_finalize_on_demand(int); GC_API int GC_java_finalization; /* Mark objects reachable from finalizable */ @@ -123,8 +135,10 @@ GC_API int GC_java_finalization; /* determined by JAVA_FINALIZATION macro. */ /* Enables register_finalizer_unreachable to */ /* work correctly. */ +GC_API int GC_set_java_finalization(int); -GC_API void (* GC_finalizer_notifier)(void); +typedef void (* GC_finalizer_notifier_proc)(void); +GC_API GC_finalizer_notifier_proc GC_finalizer_notifier; /* Invoked by the collector when there are */ /* objects to be finalized. Invoked at most */ /* once per GC cycle. Never invoked unless */ @@ -132,6 +146,8 @@ GC_API void (* GC_finalizer_notifier)(void); /* Typically this will notify a finalization */ /* thread, which will call GC_invoke_finalizers */ /* in response. */ +GC_API GC_finalizer_notifier_proc GC_set_finalizer_notifier( + GC_finalizer_notifier_proc); GC_API int GC_dont_gc; /* != 0 ==> Dont collect. In versions 6.2a1+, */ /* this overrides explicit GC_gcollect() calls. */ @@ -145,13 +161,14 @@ GC_API int GC_dont_gc; /* != 0 ==> Dont collect. In versions 6.2a1+, */ GC_API int GC_dont_expand; /* Dont expand heap unless explicitly requested */ /* or forced to. */ +GC_API int GC_set_dont_expand(int); GC_API int GC_use_entire_heap; /* Causes the nonincremental collector to use the */ /* entire heap before collecting. This was the only */ /* option for GC versions < 5.0. 
This sometimes */ /* results in more large block fragmentation, since */ - /* very larg blocks will tend to get broken up */ + /* very large blocks will tend to get broken up */ /* during each GC cycle. It is likely to result in a */ /* larger working set, but lower collection */ /* frequencies, and hence fewer instructions executed */ @@ -180,6 +197,7 @@ GC_API int GC_no_dls; /* In Microsoft Windows environments, this will */ /* usually also prevent registration of the */ /* main data segment as part of the root set. */ +GC_API int GC_set_no_dls(int); GC_API GC_word GC_free_space_divisor; /* We try to make sure that we allocate at */ @@ -199,6 +217,7 @@ GC_API GC_word GC_max_retries; /* The maximum number of GCs attempted before */ /* reporting out of memory after heap */ /* expansion fails. Initially 0. */ +GC_API GC_word GC_set_max_retries(GC_word); GC_API char *GC_stackbottom; /* Cool end of user stack. */ @@ -220,6 +239,7 @@ GC_API int GC_dont_precollect; /* Don't collect as part of */ /* before the first collection. */ /* Interferes with blacklisting. */ /* Wizards only. */ +GC_API int GC_set_dont_precollect(int); GC_API unsigned long GC_time_limit; /* If incremental collection is enabled, */ @@ -319,7 +339,7 @@ GC_API size_t GC_size(void * object_addr); /* The resulting object has the same kind as the original. */ /* If the argument is stubborn, the result will have changes enabled. */ /* It is an error to have changes enabled for the original object. */ -/* Follows ANSI comventions for NULL old_object. */ +/* Follows ANSI conventions for NULL old_object. */ GC_API void * GC_realloc(void * old_object, size_t new_size_in_bytes); /* Explicitly increase the heap size. */ @@ -805,11 +825,15 @@ GC_API int GC_invoke_finalizers(void); /* p may not be a NULL pointer. */ typedef void (*GC_warn_proc) (char *msg, GC_word arg); GC_API GC_warn_proc GC_set_warn_proc(GC_warn_proc p); - /* Returns old warning procedure. */ + /* Returns old warning procedure. */ + /* With 0 argument, current warn_proc remains unchanged. */ + /* (Only true for GC7.2+) */ GC_API GC_word GC_set_free_space_divisor(GC_word value); /* Set free_space_divisor. See above for definition. */ /* Returns old value. */ + /* With zero argument, nothing is changed, but old value is */ + /* returned. (Only true for GC7.2+) */ /* The following is intended to be used by a higher level */ /* (e.g. Java-like) finalization facility. It is expected */ @@ -926,7 +950,7 @@ GC_API void * GC_is_valid_displacement (void * p); /* Explicitly dump the GC state. This is most often called from the */ /* debugger, or by setting the GC_DUMP_REGULARLY environment variable, */ /* but it may be useful to call it from client code during debugging. */ -void GC_dump(void); +GC_API void GC_dump(void); /* Safer, but slow, pointer addition. Probably useful mainly with */ /* a preprocessor. Useful only for heap pointers. */ @@ -992,7 +1016,7 @@ GC_API void (*GC_is_visible_print_proc) (void * p); /* the allocation lock can be acquired and released many fewer times. */ /* It is used internally by gc_local_alloc.h, which provides a simpler */ /* programming interface on Linux. */ -void * GC_malloc_many(size_t lb); +GC_API void * GC_malloc_many(size_t lb); #define GC_NEXT(p) (*(void * *)(p)) /* Retrieve the next element */ /* in returned list. 
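
The revised GC_set_warn_proc contract above (a 0 argument leaves the current
procedure unchanged, from GC 7.2 on) supports the usual install-and-query
pattern; my_warn is an illustrative handler:

    #include <stdio.h>
    #include "gc.h"

    static void my_warn(char *msg, GC_word arg)
    {
        /* msg is a printf-style format that consumes the single arg, */
        /* just as GC_default_warn_proc treats it.                     */
        fprintf(stderr, "[gc] ");
        fprintf(stderr, msg, arg);
    }

    void install_warn_handler(void)
    {
        GC_warn_proc prev = GC_set_warn_proc(my_warn); /* install, old one back */
        GC_warn_proc cur  = GC_set_warn_proc(0);       /* 7.2+: query only      */
        /* cur == my_warn here; prev could be restored later. */
        (void)prev; (void)cur;
    }
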
*/ @@ -1042,11 +1066,14 @@ GC_register_has_static_roots_callback DWORD dwStackSize, LPTHREAD_START_ROUTINE lpStartAddress, LPVOID lpParameter, DWORD dwCreationFlags, LPDWORD lpThreadId ); -# if defined(_MSC_VER) && _MSC_VER >= 1200 && !defined(_UINTPTR_T_DEFINED) - typedef unsigned long uintptr_t; +# if !defined(_UINTPTR_T) && !defined(_UINTPTR_T_DEFINED) \ + && !defined(UINTPTR_MAX) + typedef GC_word GC_uintptr_t; +# else + typedef uintptr_t GC_uintptr_t; # endif - GC_API uintptr_t GC_beginthreadex( + GC_API GC_uintptr_t GC_beginthreadex( void *security, unsigned stack_size, unsigned ( __stdcall *start_address )( void * ), void *arglist, unsigned initflag, unsigned *thrdaddr); @@ -1082,9 +1109,11 @@ GC_API void GC_use_DllMain(void); # ifndef GC_NO_THREAD_REDIRECTS # define CreateThread GC_CreateThread # define ExitThread GC_ExitThread +# undef _beginthreadex # define _beginthreadex GC_beginthreadex +# undef _endthreadex # define _endthreadex GC_endthreadex -# define _beginthread { > "Please use _beginthreadex instead of _beginthread" < } +/* # define _beginthread { > "Please use _beginthreadex instead of _beginthread" < } */ # endif /* !GC_NO_THREAD_REDIRECTS */ #endif /* defined(GC_WIN32_THREADS) && !cygwin */ @@ -1126,22 +1155,19 @@ GC_API void GC_use_DllMain(void); # define GC_INIT() { GC_init(); } #endif -#if !defined(_WIN32_WCE) \ - && ((defined(_MSDOS) || defined(_MSC_VER)) && (_M_IX86 >= 300) \ - || defined(_WIN32) && !defined(__CYGWIN32__) && !defined(__CYGWIN__)) /* win32S may not free all resources on process exit. */ /* This explicitly deallocates the heap. */ - GC_API void GC_win32_free_heap (); -#endif +GC_API void GC_win32_free_heap(void); #if ( defined(_AMIGA) && !defined(GC_AMIGA_MAKINGLIB) ) /* Allocation really goes through GC_amiga_allocwrapper_do */ # include "gc_amiga_redirects.h" #endif -#if defined(GC_REDIRECT_TO_LOCAL) - /* Now redundant; that's the default with THREAD_LOCAL_ALLOC */ -#endif + /* + * GC_REDIRECT_TO_LOCAL is now redundant; + * that's the default with THREAD_LOCAL_ALLOC. + */ #ifdef __cplusplus } /* end of extern "C" */ diff --git a/include/gc_config_macros.h b/include/gc_config_macros.h index 66abf0b1..67f361cc 100644 --- a/include/gc_config_macros.h +++ b/include/gc_config_macros.h @@ -40,17 +40,6 @@ # define GC_USE_LD_WRAP #endif -#if !defined(_REENTRANT) && (defined(GC_SOLARIS_THREADS) \ - || defined(GC_HPUX_THREADS) \ - || defined(GC_AIX_THREADS) \ - || defined(GC_LINUX_THREADS) \ - || defined(GC_NETBSD_THREADS) \ - || defined(GC_GNU_THREADS)) -# define _REENTRANT - /* Better late than never. This fails if system headers that */ - /* depend on this were previously included. */ -#endif - #if !defined(_PTHREADS) && defined(GC_NETBSD_THREADS) # define _PTHREADS #endif @@ -120,6 +109,17 @@ # endif #endif /* GC_THREADS */ +#if !defined(_REENTRANT) && (defined(GC_SOLARIS_THREADS) \ + || defined(GC_HPUX_THREADS) \ + || defined(GC_AIX_THREADS) \ + || defined(GC_LINUX_THREADS) \ + || defined(GC_NETBSD_THREADS) \ + || defined(GC_GNU_THREADS)) +# define _REENTRANT + /* Better late than never. This fails if system headers that */ + /* depend on this were previously included. 
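
Relocating the _REENTRANT block after the GC_THREADS processing keeps the
intended usage working: the client defines one feature macro, includes gc.h
first, and only then pulls in system thread headers, so the _REENTRANT defined
here is already in effect when those headers are seen. A minimal sketch of
that include order in client code (worker is illustrative):

    #define GC_THREADS
    #include "gc.h"        /* must come before the system thread headers */

    #include <pthread.h>

    void *worker(void *arg)
    {
        return GC_MALLOC(128);   /* collected allocation from a thread */
    }
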
*/ +#endif + #if defined(GC_THREADS) && !defined(GC_PTHREADS) && !defined(GC_WIN32_THREADS) \ && (defined(_WIN32) || defined(_MSC_VER) || defined(__CYGWIN__) \ || defined(__MINGW32__) || defined(__BORLANDC__) \ @@ -157,7 +157,8 @@ # endif #endif -#if (defined(__DMC__) || defined(_MSC_VER)) && defined(GC_DLL) +#if (defined(__DMC__) || defined(_MSC_VER) || defined(__BORLANDC__)) \ + && defined(GC_DLL) # ifdef GC_BUILD # define GC_API extern __declspec(dllexport) # else diff --git a/include/gc_cpp.h b/include/gc_cpp.h index 2a69f052..86a756bc 100644 --- a/include/gc_cpp.h +++ b/include/gc_cpp.h @@ -160,7 +160,8 @@ by UseGC. GC is an alias for UseGC, unless GC_NAME_CONFLICT is defined. #endif #if ! defined ( __BORLANDC__ ) /* Confuses the Borland compiler. */ \ - && ! defined ( __sgi ) + && ! defined ( __sgi ) && ! defined( __WATCOMC__ ) \ + && (!defined(_MSC_VER) || _MSC_VER > 1020) # define GC_PLACEMENT_DELETE #endif @@ -249,9 +250,11 @@ inline void* operator new( * There seems to be no way to redirect new in this environment without * including this everywhere. */ +#if _MSC_VER > 1020 void *operator new[]( size_t size ); void operator delete[](void* obj); +#endif void* operator new( size_t size); diff --git a/include/gc_gcj.h b/include/gc_gcj.h index 699ddf5d..f018a061 100644 --- a/include/gc_gcj.h +++ b/include/gc_gcj.h @@ -14,8 +14,7 @@ * modified is included with the above copyright notice. */ -/* This file assumes the collector has been compiled with GC_GCJ_SUPPORT */ -/* and that an ANSI C compiler is available. */ +/* This file assumes the collector has been compiled with GC_GCJ_SUPPORT. */ /* * We allocate objects whose first word contains a pointer to a struct @@ -59,28 +58,28 @@ /* detect the presence or absence of the debug header. */ /* Mp is really of type mark_proc, as defined in gc_mark.h. We don't */ /* want to include that here for namespace pollution reasons. */ -extern void GC_init_gcj_malloc(int mp_index, void * /* really mark_proc */mp); +GC_API void GC_init_gcj_malloc(int mp_index, void * /* really mark_proc */mp); /* Allocate an object, clear it, and store the pointer to the */ /* type structure (vtable in gcj). */ /* This adds a byte at the end of the object if GC_malloc would.*/ -extern void * GC_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr); +GC_API void * GC_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr); /* The debug versions allocate such that the specified mark_proc */ /* is always invoked. */ -extern void * GC_debug_gcj_malloc(size_t lb, +GC_API void * GC_debug_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr, GC_EXTRA_PARAMS); /* Similar to GC_gcj_malloc, but assumes that a pointer to near the */ /* beginning of the resulting object is always maintained. */ -extern void * GC_gcj_malloc_ignore_off_page(size_t lb, +GC_API void * GC_gcj_malloc_ignore_off_page(size_t lb, void * ptr_to_struct_containing_descr); /* The kind numbers of normal and debug gcj objects. */ /* Useful only for debug support, we hope. 
*/ -extern int GC_gcj_kind; +GC_API int GC_gcj_kind; -extern int GC_gcj_debug_kind; +GC_API int GC_gcj_debug_kind; # ifdef GC_DEBUG # define GC_GCJ_MALLOC(s,d) GC_debug_gcj_malloc(s,d,GC_EXTRAS) diff --git a/include/gc_inline.h b/include/gc_inline.h index 2aa314b3..5b5e51a3 100644 --- a/include/gc_inline.h +++ b/include/gc_inline.h @@ -56,20 +56,20 @@ # define GC_FAST_MALLOC_GRANS(result,granules,tiny_fl,num_direct,\ kind,default_expr,init) \ { \ - if (GC_EXPECT(granules >= GC_TINY_FREELISTS,0)) { \ - result = default_expr; \ + if (GC_EXPECT((granules) >= GC_TINY_FREELISTS,0)) { \ + result = (default_expr); \ } else { \ - void **my_fl = tiny_fl + granules; \ + void **my_fl = (tiny_fl) + (granules); \ void *my_entry=*my_fl; \ void *next; \ \ while (GC_EXPECT((GC_word)my_entry \ - <= num_direct + GC_TINY_FREELISTS + 1, 0)) { \ + <= (num_direct) + GC_TINY_FREELISTS + 1, 0)) { \ /* Entry contains counter or NULL */ \ - if ((GC_word)my_entry - 1 < num_direct) { \ + if ((GC_word)my_entry - 1 < (num_direct)) { \ /* Small counter value, not NULL */ \ - *my_fl = (char *)my_entry + granules + 1; \ - result = default_expr; \ + *my_fl = (char *)my_entry + (granules) + 1; \ + result = (default_expr); \ goto out; \ } else { \ /* Large counter or NULL */ \ @@ -78,7 +78,7 @@ kind, my_fl); \ my_entry = *my_fl; \ if (my_entry == 0) { \ - result = GC_oom_fn(granules*GC_GRANULE_BYTES); \ + result = GC_oom_fn((granules)*GC_GRANULE_BYTES); \ goto out; \ } \ } \ @@ -88,7 +88,7 @@ *my_fl = next; \ init; \ PREFETCH_FOR_WRITE(next); \ - GC_ASSERT(GC_size(result) >= granules*GC_GRANULE_BYTES); \ + GC_ASSERT(GC_size(result) >= (granules)*GC_GRANULE_BYTES); \ GC_ASSERT((kind) == PTRFREE || ((GC_word *)result)[1] == 0); \ out: ; \ } \ @@ -117,7 +117,7 @@ size_t grans = GC_WORDS_TO_WHOLE_GRANULES(n); \ GC_FAST_MALLOC_GRANS(result, grans, tiny_fl, 0, \ PTRFREE, GC_malloc_atomic(grans*GC_GRANULE_BYTES), \ - /* no initialization */); \ + (void)0 /* no initialization */); \ } diff --git a/include/javaxfc.h b/include/javaxfc.h index 23e01005..3146a9f8 100644 --- a/include/javaxfc.h +++ b/include/javaxfc.h @@ -2,6 +2,10 @@ # include "gc.h" # endif +# ifdef __cplusplus + extern "C" { +# endif + /* * Invoke all remaining finalizers that haven't yet been run. * This is needed for strict compliance with the Java standard, @@ -16,6 +20,8 @@ * probably unlikely. * Thus this is not recommended for general use. */ -void GC_finalize_all(); - +GC_API void GC_finalize_all(void); +# ifdef __cplusplus + } /* end of extern "C" */ +# endif diff --git a/include/private/gc_pmark.h b/include/private/gc_pmark.h index 81c260be..33249d89 100644 --- a/include/private/gc_pmark.h +++ b/include/private/gc_pmark.h @@ -196,7 +196,6 @@ exit_label: ; \ # endif -#ifdef USE_MARK_BYTES # if defined(I386) && defined(__GNUC__) # define LONG_MULT(hprod, lprod, x, y) { \ asm("mull %2" : "=a"(lprod), "=d"(hprod) : "g"(y), "0"(x)); \ @@ -210,6 +209,7 @@ exit_label: ; \ } # endif +#ifdef USE_MARK_BYTES /* There is a race here, and we may set */ /* the bit twice in the concurrent case. This can result in the */ /* object being pushed twice. But that's only a performance issue. */ @@ -312,7 +312,7 @@ exit_label: ; \ source, exit_label, hhdr, do_offset_check) \ { \ size_t displ = HBLKDISPL(current); /* Displacement in block; in bytes. 
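
The parentheses added around the GC_FAST_MALLOC_GRANS parameters above guard
against the classic precedence problem that appears when a macro argument is
an expression rather than a simple name. A toy illustration (BYTES_BAD and
BYTES_OK are not GC macros):

    #include <stdio.h>

    #define BYTES_BAD(granules)  (granules * 16)    /* argument unparenthesized */
    #define BYTES_OK(granules)   ((granules) * 16)

    int main(void)
    {
        printf("%d\n", BYTES_BAD(1 + 2));   /* expands to (1 + 2 * 16) == 33 */
        printf("%d\n", BYTES_OK(1 + 2));    /* expands to ((1 + 2) * 16) == 48 */
        return 0;
    }
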
*/\ - unsigned32 low_prod, high_prod, offset_fraction; \ + unsigned32 low_prod, high_prod; \ unsigned32 inv_sz = hhdr -> hb_inv_sz; \ ptr_t base = current; \ LONG_MULT(high_prod, low_prod, displ, inv_sz); \ diff --git a/include/private/gc_priv.h b/include/private/gc_priv.h index 88c87f6b..3e9290e3 100644 --- a/include/private/gc_priv.h +++ b/include/private/gc_priv.h @@ -785,7 +785,7 @@ struct hblk { # define HBLK_IS_FREE(hdr) (((hdr) -> hb_flags & FREE_BLK) != 0) -# define OBJ_SZ_TO_BLOCKS(sz) divHBLKSZ(sz + HBLKSIZE-1) +# define OBJ_SZ_TO_BLOCKS(sz) divHBLKSZ((sz) + HBLKSIZE-1) /* Size of block (in units of HBLKSIZE) needed to hold objects of */ /* given sz (in bytes). */ @@ -1956,7 +1956,7 @@ void GC_err_puts(const char *s); This code works correctly (ugliness is to avoid "unused var" warnings) */ # define GC_STATIC_ASSERT(expr) do { if (0) { char j[(expr)? 1 : -1]; j[0]='\0'; j[0]=j[0]; } } while(0) #else -# define GC_STATIC_ASSERT(expr) sizeof(char[(expr)? 1 : -1]) +# define GC_STATIC_ASSERT(expr) (void)sizeof(char[(expr)? 1 : -1]) #endif # if defined(PARALLEL_MARK) || defined(THREAD_LOCAL_ALLOC) diff --git a/include/private/gcconfig.h b/include/private/gcconfig.h index 012375bc..548c098e 100644 --- a/include/private/gcconfig.h +++ b/include/private/gcconfig.h @@ -691,7 +691,7 @@ /* that we'd rather not scan. */ # endif /* !GLIBC2 */ extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # else extern int etext[]; # define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff)) @@ -751,7 +751,7 @@ # define DYNAMIC_LOADING # define SEARCH_FOR_DATA_START extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # endif # ifdef DARWIN # define OS_TYPE "DARWIN" @@ -853,7 +853,7 @@ # define OS_TYPE "NOSYS" extern void __end[], __dso_handle[]; # define DATASTART (__dso_handle) /* OK, that's ugly. */ -# define DATAEND (__end) +# define DATAEND (ptr_t)(__end) /* Stack starts at 0xE0000000 for the simulator. */ # undef STACK_GRAN # define STACK_GRAN 0x10000000 @@ -895,7 +895,7 @@ extern int _end[]; extern ptr_t GC_SysVGetDataStart(size_t, ptr_t); # define DATASTART GC_SysVGetDataStart(0x10000, (ptr_t)_etext) -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # if !defined(USE_MMAP) && defined(REDIRECT_MALLOC) # define USE_MMAP /* Otherwise we now use calloc. Mmap may result in the */ @@ -917,7 +917,7 @@ # include # ifdef USERLIMIT /* This should work everywhere, but doesn't. */ -# define STACKBOTTOM USRSTACK +# define STACKBOTTOM ((ptr_t) USRSTACK) # else # define HEURISTIC2 # endif @@ -945,7 +945,7 @@ # endif extern int _end[]; extern int _etext[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # define SVR4 extern ptr_t GC_SysVGetDataStart(size_t, ptr_t); # ifdef __arch64__ @@ -1021,14 +1021,14 @@ extern int _etext[], _end[]; extern ptr_t GC_SysVGetDataStart(size_t, ptr_t); # define DATASTART GC_SysVGetDataStart(0x1000, (ptr_t)_etext) -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) /* # define STACKBOTTOM ((ptr_t)(_start)) worked through 2.7, */ /* but reportedly breaks under 2.8. It appears that the stack */ /* base is a property of the executable, so this should not break */ /* old executables. */ /* HEURISTIC2 probably works, but this appears to be preferable. */ # include -# define STACKBOTTOM USRSTACK +# define STACKBOTTOM ((ptr_t) USRSTACK) /* At least in Solaris 2.5, PROC_VDB gives wrong values for dirty bits. */ /* It appears to be fixed in 2.8 and 2.9. 
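
The (void) cast added to GC_STATIC_ASSERT above silences "expression result
unused" warnings while keeping the negative-array-size trick that makes a
false condition fail to compile. A toy version under an illustrative name:

    #define MY_STATIC_ASSERT(expr) ((void)sizeof(char[(expr) ? 1 : -1]))

    void check_sizes(void)
    {
        MY_STATIC_ASSERT(sizeof(char) == 1);       /* compiles            */
        /* MY_STATIC_ASSERT(sizeof(char) == 2); */ /* would not compile   */
    }
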
*/ # ifdef SOLARIS25_PROC_VDB_BUG_FIXED @@ -1069,7 +1069,7 @@ extern int _etext, _end; extern ptr_t GC_SysVGetDataStart(size_t, ptr_t); # define DATASTART GC_SysVGetDataStart(0x1000, (ptr_t)(&_etext)) -# define DATAEND (&_end) +# define DATAEND (ptr_t)(&_end) # define STACK_GROWS_DOWN # define HEURISTIC2 # include @@ -1130,7 +1130,7 @@ /* that we'd rather not scan. */ # endif extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # else extern int etext[]; # define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff)) @@ -1181,8 +1181,6 @@ # define MPROTECT_VDB /* We also avoided doing this in the past with GC_WIN32_THREADS */ /* Hopefully that's fixed. */ -# endif -# if _MSC_VER >= 1300 /* .NET, i.e. > VisualStudio 6 */ # define GWW_VDB # endif # define DATAEND /* not needed */ @@ -1215,7 +1213,7 @@ # define SIG_SUSPEND (32+6) # define SIG_THR_RESTART (32+5) extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # else # define SIG_SUSPEND SIGUSR1 # define SIG_THR_RESTART SIGUSR2 @@ -1319,7 +1317,7 @@ # define OS_TYPE "LINUX" # define DYNAMIC_LOADING extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) extern int __data_start[]; # define DATASTART ((ptr_t)(__data_start)) # define CPP_WORDSZ _MIPS_SZPTR @@ -1343,11 +1341,11 @@ extern int _DYNAMIC_LINKING[], _gp[]; # define DATASTART ((ptr_t)((((word)etext + 0x3ffff) & ~0x3ffff) \ + ((word)etext & 0xffff))) -# define DATAEND (edata) +# define DATAEND (ptr_t)(edata) # define DATASTART2 (_DYNAMIC_LINKING \ ? (ptr_t)(((word)_gp + 0x8000 + 0x3ffff) & ~0x3ffff) \ : (ptr_t)edata) -# define DATAEND2 (end) +# define DATAEND2 (ptr_t)(end) # define ALIGNMENT 4 # endif # define OS_TYPE "EWS4800" @@ -1471,7 +1469,7 @@ # define DYNAMIC_LOADING # define SEARCH_FOR_DATA_START extern int _end[]; -# define DATAEND (&_end) +# define DATAEND (ptr_t)(&_end) # endif /* LINUX */ # endif /* HP_PA */ @@ -1552,7 +1550,7 @@ # define DATASTART ((ptr_t) 0x140000000) # endif extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # define MPROTECT_VDB /* Has only been superficially tested. May not */ /* work on all versions. */ @@ -1623,7 +1621,7 @@ # define MPROTECT_VDB /* Requires Linux 2.3.47 or later. */ extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # ifdef __GNUC__ # ifndef __INTEL_COMPILER # define PREFETCH(x) \ @@ -1687,7 +1685,7 @@ extern int _end[]; extern ptr_t GC_SysVGetDataStart(size_t, ptr_t); # define DATASTART GC_SysVGetDataStart(0x10000, (ptr_t)_etext) -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # define HEURISTIC2 # endif # endif @@ -1711,7 +1709,7 @@ extern int __data_start[]; # define DATASTART ((ptr_t)(__data_start)) extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # define CACHE_LINE_SIZE 256 # define GETPAGESIZE() 4096 # endif @@ -1755,7 +1753,7 @@ /* that we'd rather not scan. 
*/ # endif extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # else extern int etext[]; # define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff)) @@ -1784,7 +1782,7 @@ # define LINUX_STACKBOTTOM # define SEARCH_FOR_DATA_START extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # endif # ifdef SH @@ -1800,7 +1798,7 @@ # define DYNAMIC_LOADING # define SEARCH_FOR_DATA_START extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # endif # ifdef NETBSD # define OS_TYPE "NETBSD" @@ -1829,7 +1827,7 @@ # define DYNAMIC_LOADING # define SEARCH_FOR_DATA_START extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # endif # endif @@ -1860,7 +1858,7 @@ # include # define SEARCH_FOR_DATA_START extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # else extern int etext[]; # define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff)) @@ -1899,7 +1897,7 @@ # define SIG_SUSPEND (32+6) # define SIG_THR_RESTART (32+5) extern int _end[]; -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) # else # define SIG_SUSPEND SIGUSR1 # define SIG_THR_RESTART SIGUSR2 @@ -1927,7 +1925,7 @@ extern int _etext[], _end[]; extern ptr_t GC_SysVGetDataStart(size_t, ptr_t); # define DATASTART GC_SysVGetDataStart(0x1000, (ptr_t)_etext) -# define DATAEND (_end) +# define DATAEND (ptr_t)(_end) /* # define STACKBOTTOM ((ptr_t)(_start)) worked through 2.7, */ /* but reportedly breaks under 2.8. It appears that the stack */ /* base is a property of the executable, so this should not break */ @@ -1939,7 +1937,7 @@ # include # ifdef USERLIMIT /* This should work everywhere, but doesn't. */ -# define STACKBOTTOM USRSTACK +# define STACKBOTTOM ((ptr_t) USRSTACK) # else # define HEURISTIC2 # endif @@ -2034,7 +2032,7 @@ # ifndef DATAEND extern int end[]; -# define DATAEND (end) +# define DATAEND (ptr_t)(end) # endif # if defined(SVR4) && !defined(GETPAGESIZE) @@ -2133,6 +2131,10 @@ # define CACHE_LINE_SIZE 32 /* Wild guess */ # endif +# ifndef STATIC +# define STATIC /* ignore to aid profiling and possibly debugging */ +# endif + # if defined(LINUX) || defined(HURD) || defined(__GLIBC__) # define REGISTER_LIBRARIES_EARLY /* We sometimes use dl_iterate_phdr, which may acquire an internal */ diff --git a/malloc.c b/malloc.c index 270d0f10..67e74223 100644 --- a/malloc.c +++ b/malloc.c @@ -23,7 +23,7 @@ void GC_extend_size_map(size_t); /* in misc.c. 
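
The STATIC default added to gcconfig.h above expands to nothing unless it is
overridden (for example by building with -DSTATIC=static), so helpers marked
STATIC throughout this patch remain visible to debuggers and profilers in a
normal build. A sketch of the effect (helper_count and bump_helper_count are
illustrative names):

    #ifndef STATIC
    # define STATIC                 /* default: keep the symbol visible */
    #endif

    STATIC unsigned helper_count;   /* file-local only when STATIC is   */
                                    /* defined as "static"              */

    STATIC void bump_helper_count(void)
    {
        ++helper_count;
    }
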
*/ /* Allocate reclaim list for kind: */ /* Return TRUE on success */ -GC_bool GC_alloc_reclaim_list(struct obj_kind *kind) +STATIC GC_bool GC_alloc_reclaim_list(struct obj_kind *kind) { struct hblk ** result = (struct hblk **) GC_scratch_alloc((MAXOBJGRANULES+1) * sizeof(struct hblk *)); @@ -206,7 +206,7 @@ void * GC_generic_malloc(size_t lb, int k) #ifdef THREAD_LOCAL_ALLOC void * GC_core_malloc_atomic(size_t lb) #else - void * GC_malloc_atomic(size_t lb) + GC_API void * GC_malloc_atomic(size_t lb) #endif { void *op; @@ -233,12 +233,7 @@ void * GC_generic_malloc(size_t lb, int k) /* provide a version of strdup() that uses the collector to allocate the copy of the string */ -# ifdef __STDC__ - char *GC_strdup(const char *s) -# else - char *GC_strdup(s) - char *s; -#endif +GC_API char *GC_strdup(const char *s) { char *copy; @@ -255,7 +250,7 @@ void * GC_generic_malloc(size_t lb, int k) #ifdef THREAD_LOCAL_ALLOC void * GC_core_malloc(size_t lb) #else - void * GC_malloc(size_t lb) + GC_API void * GC_malloc(size_t lb) #endif { void *op; @@ -273,10 +268,10 @@ void * GC_generic_malloc(size_t lb, int k) } /* See above comment on signals. */ GC_ASSERT(0 == obj_link(op) - || (word)obj_link(op) + || ((word)obj_link(op) <= (word)GC_greatest_plausible_heap_addr && (word)obj_link(op) - >= (word)GC_least_plausible_heap_addr); + >= (word)GC_least_plausible_heap_addr)); *opp = obj_link(op); obj_link(op) = 0; GC_bytes_allocd += GRANULES_TO_BYTES(lg); @@ -327,7 +322,7 @@ void * malloc(size_t lb) extern GC_bool GC_text_mapping(char *nm, ptr_t *startp, ptr_t *endp); /* From os_dep.c */ - void GC_init_lib_bounds(void) + STATIC void GC_init_lib_bounds(void) { if (GC_libpthread_start != 0) return; if (!GC_text_mapping("libpthread-", @@ -391,7 +386,7 @@ void * calloc(size_t n, size_t lb) # endif /* REDIRECT_MALLOC */ /* Explicitly deallocate an object p. */ -void GC_free(void * p) +GC_API void GC_free(void * p) { struct hblk *h; hdr *hhdr; diff --git a/mallocx.c b/mallocx.c index 8a397cf0..913192fd 100644 --- a/mallocx.c +++ b/mallocx.c @@ -24,9 +24,7 @@ #include #include "private/gc_priv.h" -extern ptr_t GC_clear_stack(); /* in misc.c, behaves like identity */ -void GC_extend_size_map(); /* in misc.c. */ -GC_bool GC_alloc_reclaim_list(); /* in malloc.c */ +void * GC_clear_stack(void *); /* in misc.c, behaves like identity */ /* Some externally visible but unadvertised variables to allow access to */ /* free lists from inlined allocators without including gc_priv.h */ @@ -39,7 +37,7 @@ void ** const GC_uobjfreelist_ptr = GC_uobjfreelist; # endif -void * GC_generic_or_special_malloc(size_t lb, int knd) +STATIC void * GC_generic_or_special_malloc(size_t lb, int knd) { switch(knd) { # ifdef STUBBORN_ALLOC @@ -66,7 +64,7 @@ void * GC_generic_or_special_malloc(size_t lb, int knd) /* lb bytes. The object may be (and quite likely will be) moved. */ /* The kind (e.g. atomic) is the same as that of the old. */ /* Shrinking of large blocks is not implemented well. 
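
GC_strdup, now defined in ANSI form and exported with GC_API above, hands back
a copy allocated by the collector, so the caller never frees it explicitly. A
minimal usage sketch (copy_label is illustrative, and the prototype is assumed
to come from gc.h):

    #include "gc.h"

    char *copy_label(const char *label)
    {
        char *copy = GC_strdup(label);   /* returns 0 on out of memory */
        return copy;                     /* reclaimed when unreachable */
    }
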
*/ -void * GC_realloc(void * p, size_t lb) +GC_API void * GC_realloc(void * p, size_t lb) { struct hblk * h; hdr * hhdr; @@ -211,12 +209,12 @@ void * GC_generic_malloc_ignore_off_page(size_t lb, int k) } } -void * GC_malloc_ignore_off_page(size_t lb) +GC_API void * GC_malloc_ignore_off_page(size_t lb) { return((void *)GC_generic_malloc_ignore_off_page(lb, NORMAL)); } -void * GC_malloc_atomic_ignore_off_page(size_t lb) +GC_API void * GC_malloc_atomic_ignore_off_page(size_t lb) { return((void *)GC_generic_malloc_ignore_off_page(lb, PTRFREE)); } @@ -276,7 +274,7 @@ signed_word my_bytes_allocd = 0; struct obj_kind * ok = &(GC_obj_kinds[k]); DCL_LOCK_STATE; - GC_ASSERT((lb & (GRANULE_BYTES-1)) == 0); + GC_ASSERT(lb != 0 && (lb & (GRANULE_BYTES-1)) == 0); if (!SMALL_OBJ(lb)) { op = GC_generic_malloc(lb, k); if(0 != op) obj_link(op) = 0; @@ -318,7 +316,7 @@ DCL_LOCK_STATE; /* than one thread simultaneously. */ if (my_bytes_allocd_tmp != 0) { (void)AO_fetch_and_add( - (volatile AO_t *)(&GC_bytes_allocd_tmp), + (volatile void *)(&GC_bytes_allocd_tmp), (AO_t)(-my_bytes_allocd_tmp)); GC_bytes_allocd += my_bytes_allocd_tmp; } @@ -421,7 +419,7 @@ DCL_LOCK_STATE; (void) GC_clear_stack(0); } -void * GC_malloc_many(size_t lb) +GC_API void * GC_malloc_many(size_t lb) { void *result; GC_generic_malloc_many(((lb + EXTRA_BYTES + GRANULE_BYTES-1) @@ -435,7 +433,7 @@ void * GC_malloc_many(size_t lb) # endif /* Allocate lb bytes of pointerful, traced, but not collectable data */ -void * GC_malloc_uncollectable(size_t lb) +GC_API void * GC_malloc_uncollectable(size_t lb) { void *op; void **opp; @@ -477,7 +475,6 @@ void * GC_malloc_uncollectable(size_t lb) /* We don't need the lock here, since we have an undisguised */ /* pointer. We do need to hold the lock while we adjust */ /* mark bits. */ - lb = hhdr -> hb_sz; LOCK(); set_mark_bit_from_hdr(hhdr, 0); /* Only object. */ GC_ASSERT(hhdr -> hb_n_marks == 0); @@ -524,7 +521,7 @@ void * GC_memalign(size_t align, size_t lb) /* Allocate lb bytes of pointerfree, untraced, uncollectable data */ /* This is normally roughly equivalent to the system malloc. */ /* But it may be useful if malloc is redefined. */ -void * GC_malloc_atomic_uncollectable(size_t lb) +GC_API void * GC_malloc_atomic_uncollectable(size_t lb) { void *op; void **opp; @@ -560,7 +557,6 @@ void * GC_malloc_atomic_uncollectable(size_t lb) GC_ASSERT(((word)op & (HBLKSIZE - 1)) == 0); hhdr = HDR((struct hbklk *)op); - lb = hhdr -> hb_sz; LOCK(); set_mark_bit_from_hdr(hhdr, 0); /* Only object. */ diff --git a/mark.c b/mark.c index 95011665..3fb2f9d5 100644 --- a/mark.c +++ b/mark.c @@ -32,7 +32,7 @@ #endif /* Single argument version, robust against whole program analysis. */ -void GC_noop1(word x) +GC_API void GC_noop1(word x) { static volatile word sink; @@ -62,7 +62,7 @@ struct obj_kind GC_obj_kinds[MAXOBJKINDS] = { 0 | GC_DS_LENGTH, FALSE /* add length to descr */, FALSE }, # endif # ifdef STUBBORN_ALLOC -/*STUBBORN*/ { &GC_sobjfreelist[0], 0, +/*STUBBORN*/ { (void **)&GC_sobjfreelist[0], 0, 0 | GC_DS_LENGTH, TRUE /* add length to descr */, TRUE }, # endif }; @@ -98,7 +98,7 @@ struct obj_kind GC_obj_kinds[MAXOBJKINDS] = { * need to be marked from. */ -word GC_n_rescuing_pages; /* Number of dirty pages we marked from */ +STATIC word GC_n_rescuing_pages;/* Number of dirty pages we marked from */ /* excludes ptrfree pages, etc. */ mse * GC_mark_stack; @@ -251,7 +251,6 @@ void GC_clear_marks(void) /* Initiate a garbage collection. Initiates a full collection if the */ /* mark state is invalid. 
*/ -/*ARGSUSED*/ void GC_initiate_gc(void) { if (GC_dirty_maintained) GC_read_dirty(); @@ -559,7 +558,7 @@ handle_ex: scan_ptr = 0; ret_val = FALSE; - goto rm_handler; // Back to platform-specific code. + goto rm_handler; /* Back to platform-specific code. */ } #endif /* WRAP_MARK_SOME */ @@ -858,7 +857,6 @@ mse * GC_mark_from(mse *mark_stack_top, mse *mark_stack, mse *mark_stack_limit) #ifdef PARALLEL_MARK -/* We assume we have an ANSI C Compiler. */ GC_bool GC_help_wanted = FALSE; unsigned GC_helper_count = 0; unsigned GC_active_count = 0; @@ -875,8 +873,8 @@ word GC_mark_no = 0; /* Return a pointer to the top of the local mark stack. */ /* *next is replaced by a pointer to the next unscanned mark stack */ /* entry. */ -mse * GC_steal_mark_stack(mse * low, mse * high, mse * local, - unsigned max, mse **next) +STATIC mse * GC_steal_mark_stack(mse * low, mse * high, mse * local, + unsigned max, mse **next) { mse *p; mse *top = local - 1; @@ -908,7 +906,7 @@ mse * GC_steal_mark_stack(mse * low, mse * high, mse * local, /* Copy back a local mark stack. */ /* low and high are inclusive bounds. */ -void GC_return_mark_stack(mse * low, mse * high) +STATIC void GC_return_mark_stack(mse * low, mse * high) { mse * my_top; mse * my_start; @@ -942,7 +940,7 @@ void GC_return_mark_stack(mse * low, mse * high) /* On return, the local mark stack is empty. */ /* But this may be achieved by copying the */ /* local mark stack back into the global one. */ -void GC_do_local_mark(mse *local_mark_stack, mse *local_top) +STATIC void GC_do_local_mark(mse *local_mark_stack, mse *local_top) { unsigned n; # define N_LOCAL_ITERS 1 @@ -994,7 +992,7 @@ long GC_markers = 2; /* Normally changed by thread-library- */ /* Caller does not hold mark lock. */ /* Caller has already incremented GC_helper_count. We decrement it, */ /* and maintain GC_active_count. */ -void GC_mark_local(mse *local_mark_stack, int id) +STATIC void GC_mark_local(mse *local_mark_stack, int id) { mse * my_first_nonempty; @@ -1097,7 +1095,7 @@ void GC_mark_local(mse *local_mark_stack, int id) /* We hold the GC lock, not the mark lock. */ /* Currently runs until the mark stack is */ /* empty. 
*/ -void GC_do_parallel_mark() +void GC_do_parallel_mark(void) { mse local_mark_stack[LOCAL_MARK_STACK_SIZE]; @@ -1218,7 +1216,7 @@ static void alloc_mark_stack(size_t n) GC_mark_stack_top = GC_mark_stack-1; } -void GC_mark_init() +void GC_mark_init(void) { alloc_mark_stack(INITIAL_MARK_STACK_SIZE); } @@ -1356,21 +1354,21 @@ struct GC_ms_entry *GC_mark_and_push(void *obj, if (GC_all_interior_pointers) { hhdr = GC_find_header(GC_base(obj)); if (hhdr == 0) { - GC_ADD_TO_BLACK_LIST_NORMAL(obj, src); + GC_ADD_TO_BLACK_LIST_NORMAL(obj, (ptr_t)src); return mark_stack_ptr; } } else { - GC_ADD_TO_BLACK_LIST_NORMAL(obj, src); + GC_ADD_TO_BLACK_LIST_NORMAL(obj, (ptr_t)src); return mark_stack_ptr; } } if (EXPECT(HBLK_IS_FREE(hhdr),0)) { - GC_ADD_TO_BLACK_LIST_NORMAL(obj, src); + GC_ADD_TO_BLACK_LIST_NORMAL(obj, (ptr_t)src); return mark_stack_ptr; } PUSH_CONTENTS_HDR(obj, mark_stack_ptr /* modified */, mark_stack_limit, - src, was_marked, hhdr, TRUE); + (ptr_t)src, was_marked, hhdr, TRUE); was_marked: return mark_stack_ptr; } @@ -1386,7 +1384,7 @@ struct GC_ms_entry *GC_mark_and_push(void *obj, void GC_mark_and_push_stack(ptr_t p, ptr_t source) # else void GC_mark_and_push_stack(ptr_t p) -# define source 0 +# define source ((ptr_t)0) # endif { hdr * hhdr; @@ -1405,7 +1403,7 @@ struct GC_ms_entry *GC_mark_and_push(void *obj, } } if (EXPECT(HBLK_IS_FREE(hhdr),0)) { - GC_ADD_TO_BLACK_LIST_NORMAL(p, src); + GC_ADD_TO_BLACK_LIST_NORMAL(p, source); return; } # if defined(MANUAL_VDB) && defined(THREADS) @@ -1781,7 +1779,7 @@ void GC_push_marked(struct hblk *h, hdr *hhdr) #ifndef SMALL_CONFIG /* Test whether any page in the given block is dirty */ -GC_bool GC_block_was_dirty(struct hblk *h, hdr *hhdr) +STATIC GC_bool GC_block_was_dirty(struct hblk *h, hdr *hhdr) { size_t sz = hhdr -> hb_sz; diff --git a/mark_rts.c b/mark_rts.c index 1c81f58d..695f2209 100644 --- a/mark_rts.c +++ b/mark_rts.c @@ -135,10 +135,6 @@ static void add_roots_to_index(struct roots *p) GC_root_index[h] = p; } -# else /* MSWIN32 || MSWINCE */ - -# define add_roots_to_index(p) - # endif @@ -146,7 +142,7 @@ static void add_roots_to_index(struct roots *p) word GC_root_size = 0; -void GC_add_roots(void *b, void *e) +GC_API void GC_add_roots(void *b, void *e) { DCL_LOCK_STATE; @@ -238,15 +234,15 @@ void GC_add_roots_inner(ptr_t b, ptr_t e, GC_bool tmp) GC_static_roots[n_root_sets].r_tmp = tmp; # if !defined(MSWIN32) && !defined(MSWINCE) GC_static_roots[n_root_sets].r_next = 0; + add_roots_to_index(GC_static_roots + n_root_sets); # endif - add_roots_to_index(GC_static_roots + n_root_sets); GC_root_size += e - b; n_root_sets++; } static GC_bool roots_were_cleared = FALSE; -void GC_clear_roots (void) +GC_API void GC_clear_roots (void) { DCL_LOCK_STATE; @@ -286,8 +282,10 @@ static void GC_rebuild_root_index(void) } #endif +#if defined(DYNAMIC_LOADING) || defined(MSWIN32) || defined(MSWINCE) \ + || defined(PCR) /* Internal use only; lock held. 
*/ -void GC_remove_tmp_roots(void) +STATIC void GC_remove_tmp_roots(void) { int i; @@ -298,13 +296,14 @@ void GC_remove_tmp_roots(void) i++; } } - #if !defined(MSWIN32) && !defined(MSWINCE) - GC_rebuild_root_index(); - #endif +# if !defined(MSWIN32) && !defined(MSWINCE) + GC_rebuild_root_index(); +# endif } +#endif #if !defined(MSWIN32) && !defined(MSWINCE) -void GC_remove_roots(void *b, void *e) +GC_API void GC_remove_roots(void *b, void *e) { DCL_LOCK_STATE; @@ -362,6 +361,7 @@ ptr_t GC_approx_sp(void) # ifdef _MSC_VER # pragma warning(disable:4172) # endif + /* Ignore "function returns address of local variable" warning. */ return((ptr_t)(&dummy)); # ifdef _MSC_VER # pragma warning(default:4172) @@ -382,12 +382,12 @@ struct exclusion GC_excl_table[MAX_EXCLUSIONS]; -- address order. */ -size_t GC_excl_table_entries = 0; /* Number of entries in use. */ +STATIC size_t GC_excl_table_entries = 0;/* Number of entries in use. */ /* Return the first exclusion range that includes an address >= start_addr */ /* Assumes the exclusion table contains at least one entry (namely the */ /* GC data structures). */ -struct exclusion * GC_next_exclusion(ptr_t start_addr) +STATIC struct exclusion * GC_next_exclusion(ptr_t start_addr) { size_t low = 0; size_t high = GC_excl_table_entries - 1; @@ -406,7 +406,7 @@ struct exclusion * GC_next_exclusion(ptr_t start_addr) return GC_excl_table + low; } -void GC_exclude_static_roots(void *start, void *finish) +GC_API void GC_exclude_static_roots(void *start, void *finish) { struct exclusion * next; size_t next_index, i; @@ -440,7 +440,8 @@ void GC_exclude_static_roots(void *start, void *finish) } /* Invoke push_conditional on ranges that are not excluded. */ -void GC_push_conditional_with_exclusions(ptr_t bottom, ptr_t top, GC_bool all) +STATIC void GC_push_conditional_with_exclusions(ptr_t bottom, ptr_t top, + GC_bool all) { struct exclusion * next; ptr_t excl_start; @@ -463,6 +464,7 @@ void GC_push_conditional_with_exclusions(ptr_t bottom, ptr_t top, GC_bool all) * seen. * FIXME: Merge with per-thread stuff. */ +/*ARGSUSED*/ void GC_push_current_stack(ptr_t cold_gc_frame, void * context) { # if defined(THREADS) diff --git a/misc.c b/misc.c index 5f3eef6e..cf864d7b 100644 --- a/misc.c +++ b/misc.c @@ -123,24 +123,19 @@ long GC_large_alloc_warn_suppressed = 0; /* Number of warnings suppressed so far. */ /*ARGSUSED*/ -void * GC_default_oom_fn(size_t bytes_requested) +STATIC void * GC_default_oom_fn(size_t bytes_requested) { return(0); } -void * (*GC_oom_fn) (size_t bytes_requested) = GC_default_oom_fn; - -void * GC_project2(void *arg1, void *arg2) -{ - return arg2; -} +GC_oom_func GC_oom_fn = GC_default_oom_fn; /* Set things up so that GC_size_map[i] >= granules(i), */ /* but not too much bigger */ /* and so that size_map contains relatively few distinct entries */ /* This was originally stolen from Russ Atkinson's Cedar */ /* quantization alogrithm (but we precompute it). */ -void GC_init_size_map(void) +STATIC void GC_init_size_map(void) { int i; @@ -329,7 +324,7 @@ void * GC_clear_stack(void *arg) /* Return a pointer to the base address of p, given a pointer to a */ /* an address within an object. Return 0 o.w. */ -void * GC_base(void * p) +GC_API void * GC_base(void * p) { ptr_t r; struct hblk *h; @@ -372,29 +367,29 @@ void * GC_base(void * p) /* Return the size of an object, given a pointer to its base. */ /* (For small obects this also happens to work from interior pointers, */ /* but that shouldn't be relied upon.) 
*/ -size_t GC_size(void * p) +GC_API size_t GC_size(void * p) { hdr * hhdr = HDR(p); return hhdr -> hb_sz; } -size_t GC_get_heap_size(void) +GC_API size_t GC_get_heap_size(void) { return GC_heapsize; } -size_t GC_get_free_bytes(void) +GC_API size_t GC_get_free_bytes(void) { return GC_large_free_bytes; } -size_t GC_get_bytes_since_gc(void) +GC_API size_t GC_get_bytes_since_gc(void) { return GC_bytes_allocd; } -size_t GC_get_total_bytes(void) +GC_API size_t GC_get_total_bytes(void) { return GC_bytes_allocd+GC_bytes_allocd_before_gc; } @@ -406,7 +401,7 @@ GC_bool GC_is_initialized = FALSE; # endif /* PARALLEL_MARK || THREAD_LOCAL_ALLOC */ /* FIXME: The GC_init/GC_init_inner distinction should go away. */ -void GC_init(void) +GC_API void GC_init(void) { /* LOCK(); -- no longer does anything this early. */ GC_init_inner(); @@ -421,15 +416,9 @@ void GC_init(void) extern void GC_init_win32(void); #endif -extern void GC_setpagesize(); - -#ifdef MSWIN32 -extern GC_bool GC_no_win32_dlls; -#else -# define GC_no_win32_dlls FALSE -#endif +extern void GC_setpagesize(void); -void GC_exit_check(void) +STATIC void GC_exit_check(void) { GC_gcollect(); } @@ -442,8 +431,7 @@ void GC_exit_check(void) extern void GC_set_and_save_fault_handler(void (*handler)(int)); -static void looping_handler(sig) -int sig; +static void looping_handler(int sig) { GC_err_printf("Caught signal %d: looping in handler\n", sig); for(;;); @@ -451,7 +439,7 @@ int sig; static GC_bool installed_looping_handler = FALSE; -static void maybe_install_looping_handler() +static void maybe_install_looping_handler(void) { /* Install looping handler before the write fault handler, so we */ /* handle write faults correctly. */ @@ -471,7 +459,7 @@ static void maybe_install_looping_handler() void GC_thr_init(void); #endif -void GC_init_inner() +void GC_init_inner(void) { # if !defined(THREADS) && defined(GC_ASSERTIONS) word dummy; @@ -644,7 +632,6 @@ void GC_init_inner() # endif } # endif - /* Ignore gcc -Wall warnings on the following. */ GC_STATIC_ASSERT(sizeof (ptr_t) == sizeof(word)); GC_STATIC_ASSERT(sizeof (signed_word) == sizeof(word)); GC_STATIC_ASSERT(sizeof (struct hblk) == HBLKSIZE); @@ -774,7 +761,7 @@ void GC_init_inner() # endif } -void GC_enable_incremental(void) +GC_API void GC_enable_incremental(void) { # if !defined(SMALL_CONFIG) && !defined(KEEP_BACK_PTRS) /* If we are keeping back pointers, the GC itself dirties all */ @@ -826,9 +813,9 @@ out: # define LOG_FILE _T("gc.log") # endif - HANDLE GC_stdout = 0; + STATIC HANDLE GC_stdout = 0; - void GC_deinit() + void GC_deinit(void) { if (GC_is_initialized) { DeleteCriticalSection(&GC_write_cs); @@ -861,7 +848,7 @@ out: # endif file_name = logPath; } - GC_stdout = CreateFile(logPath, GENERIC_WRITE, + GC_stdout = CreateFile(file_name, GENERIC_WRITE, FILE_SHARE_READ, NULL, CREATE_ALWAYS, FILE_FLAG_WRITE_THROUGH, NULL); @@ -882,12 +869,12 @@ out: #endif #if defined(OS2) || defined(MACOS) -FILE * GC_stdout = NULL; -FILE * GC_stderr = NULL; -FILE * GC_log = NULL; -int GC_tmp; /* Should really be local ... */ +STATIC FILE * GC_stdout = NULL; +STATIC FILE * GC_stderr = NULL; +STATIC FILE * GC_log = NULL; +STATIC int GC_tmp; /* Should really be local ... */ - void GC_set_files() + STATIC void GC_set_files(void) { if (GC_stdout == NULL) { GC_stdout = stdout; @@ -902,8 +889,8 @@ int GC_tmp; /* Should really be local ... 
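
GC_base and GC_size, both marked GC_API above, give clients the same
introspection the collector uses internally: GC_base maps a heap pointer back
to the start of its object (0 for non-heap addresses) and GC_size reports the
size actually reserved, which may exceed the request. A quick sketch:

    #include <assert.h>
    #include "gc.h"

    void introspect(void)
    {
        char *p = (char *)GC_malloc(100);

        assert(GC_base(p + 50) == p);   /* interior pointer -> object base  */
        assert(GC_size(p) >= 100);      /* allocator may round the size up  */
        assert(GC_base(&p) == 0);       /* stack address is not in the heap */
    }
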
*/ #endif #if !defined(OS2) && !defined(MACOS) && !defined(MSWIN32) && !defined(MSWINCE) - int GC_stdout = 1; - int GC_stderr = 2; + STATIC int GC_stdout = 1; + STATIC int GC_stderr = 2; int GC_log = 2; # if !defined(AMIGA) # include @@ -912,10 +899,7 @@ int GC_tmp; /* Should really be local ... */ #if !defined(MSWIN32) && !defined(MSWINCE) && !defined(OS2) \ && !defined(MACOS) && !defined(ECOS) && !defined(NOSYS) -int GC_write(fd, buf, len) -int fd; -const char *buf; -size_t len; +int GC_write(int fd, const char *buf, size_t len) { register int bytes_written = 0; register int result; @@ -935,7 +919,7 @@ size_t len; #endif /* UN*X */ #ifdef ECOS -int GC_write(fd, buf, len) +int GC_write(int fd, const char *buf, size_t len) { _Jv_diag_write (buf, len); return len; @@ -943,7 +927,7 @@ int GC_write(fd, buf, len) #endif #ifdef NOSYS -int GC_write(fd, buf, len) +int GC_write(int fd, const char *buf, size_t len) { /* No writing. */ return len; @@ -1019,22 +1003,20 @@ void GC_err_puts(const char *s) } #if defined(LINUX) && !defined(SMALL_CONFIG) -void GC_err_write(buf, len) -const char *buf; -size_t len; +void GC_err_write(const char *buf, size_t len) { if (WRITE(GC_stderr, buf, len) < 0) ABORT("write to stderr failed"); } #endif -void GC_default_warn_proc(char *msg, GC_word arg) +STATIC void GC_default_warn_proc(char *msg, GC_word arg) { GC_err_printf(msg, arg); } GC_warn_proc GC_current_warn_proc = GC_default_warn_proc; -GC_warn_proc GC_set_warn_proc(GC_warn_proc p) +GC_API GC_warn_proc GC_set_warn_proc(GC_warn_proc p) { GC_warn_proc result; @@ -1043,22 +1025,24 @@ GC_warn_proc GC_set_warn_proc(GC_warn_proc p) # endif LOCK(); result = GC_current_warn_proc; - GC_current_warn_proc = p; + if (p != (GC_warn_proc)0) + GC_current_warn_proc = p; UNLOCK(); return(result); } -GC_word GC_set_free_space_divisor (GC_word value) +GC_API GC_word GC_set_free_space_divisor (GC_word value) { GC_word old = GC_free_space_divisor; - GC_free_space_divisor = value; + if (value != ~(GC_word)0) + GC_free_space_divisor = value; return old; } #ifndef PCR void GC_abort(const char *msg) { -# if defined(MSWIN32) +# if defined(MSWIN32) && !defined(DONT_USE_USER32_DLL) (void) MessageBoxA(NULL, msg, "Fatal error in gc", MB_ICONERROR|MB_OK); # else GC_err_printf("%s\n", msg); @@ -1078,14 +1062,14 @@ void GC_abort(const char *msg) } #endif -void GC_enable() +GC_API void GC_enable(void) { LOCK(); GC_dont_gc--; UNLOCK(); } -void GC_disable() +GC_API void GC_disable(void) { LOCK(); GC_dont_gc++; @@ -1093,7 +1077,7 @@ void GC_disable() } /* Helper procedures for new kind creation. 
*/ -void ** GC_new_free_list_inner() +void ** GC_new_free_list_inner(void) { void *result = GC_INTERNAL_MALLOC((MAXOBJGRANULES+1)*sizeof(ptr_t), PTRFREE); @@ -1102,7 +1086,7 @@ void ** GC_new_free_list_inner() return result; } -void ** GC_new_free_list() +void ** GC_new_free_list(void) { void *result; LOCK(); @@ -1167,7 +1151,7 @@ GC_API void * GC_call_with_stack_base(GC_stack_base_func fn, void *arg) #if !defined(NO_DEBUGGING) -void GC_dump() +GC_API void GC_dump(void) { GC_printf("***Static roots:\n"); GC_print_static_roots(); @@ -1182,3 +1166,78 @@ void GC_dump() } #endif /* NO_DEBUGGING */ + +GC_API GC_word GC_get_gc_no(void) +{ + return GC_gc_no; +} + +GC_API int GC_get_parallel(void) +{ + return GC_parallel; +} + +GC_API GC_oom_func GC_set_oom_fn(GC_oom_func fn) +{ + GC_oom_func ofn = GC_oom_fn; + if (fn != (GC_oom_func)0) + GC_oom_fn = fn; + return ofn; +} + +GC_API GC_finalizer_notifier_proc GC_set_finalizer_notifier( + GC_finalizer_notifier_proc fn) +{ + GC_finalizer_notifier_proc ofn = GC_finalizer_notifier; + if (fn != (GC_finalizer_notifier_proc)-1L) + GC_finalizer_notifier = fn; + return ofn; +} + +GC_API int GC_set_finalize_on_demand(int value) +{ + int ovalue = GC_finalize_on_demand; + if (value != -1) + GC_finalize_on_demand = value; + return ovalue; +} + +GC_API int GC_set_java_finalization(int value) +{ + int ovalue = GC_java_finalization; + if (value != -1) + GC_java_finalization = value; + return ovalue; +} + +GC_API int GC_set_dont_expand(int value) +{ + int ovalue = GC_dont_expand; + if (value != -1) + GC_dont_expand = value; + return ovalue; +} + +GC_API int GC_set_no_dls(int value) +{ + int ovalue = GC_no_dls; + if (value != -1) + GC_no_dls = value; + return ovalue; +} + +GC_API GC_word GC_set_max_retries(GC_word value) +{ + GC_word ovalue = GC_max_retries; + if (value != ~(GC_word)0) + GC_max_retries = value; + return ovalue; +} + +GC_API int GC_set_dont_precollect(int value) +{ + int ovalue = GC_dont_precollect; + if (value != -1) + GC_dont_precollect = value; + return ovalue; +} diff --git a/new_hblk.c b/new_hblk.c index 5d5a56f2..6c5c0cae 100644 --- a/new_hblk.c +++ b/new_hblk.c @@ -28,7 +28,7 @@ * Set the last link to * be ofl. Return a pointer tpo the first free list entry. 
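
The setters added above double as getters: each one returns the previous value
and treats a sentinel argument (-1 for the int-valued setters, ~(GC_word)0 for
GC_set_max_retries) as "report the current setting without changing it". A
usage sketch (tune_collector is illustrative):

    #include "gc.h"

    void tune_collector(void)
    {
        int old_java_fin;
        int on_demand;
        GC_word retries;

        old_java_fin = GC_set_java_finalization(1);       /* enable       */
        on_demand    = GC_set_finalize_on_demand(-1);      /* query only   */
        retries      = GC_set_max_retries(~(GC_word)0);    /* query only   */

        (void)old_java_fin; (void)on_demand; (void)retries;
    }
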
*/ -ptr_t GC_build_fl_clear2(struct hblk *h, ptr_t ofl) +STATIC ptr_t GC_build_fl_clear2(struct hblk *h, ptr_t ofl) { word * p = (word *)(h -> hb_body); word * lim = (word *)(h + 1); @@ -48,7 +48,7 @@ ptr_t GC_build_fl_clear2(struct hblk *h, ptr_t ofl) } /* The same for size 4 cleared objects */ -ptr_t GC_build_fl_clear4(struct hblk *h, ptr_t ofl) +STATIC ptr_t GC_build_fl_clear4(struct hblk *h, ptr_t ofl) { word * p = (word *)(h -> hb_body); word * lim = (word *)(h + 1); @@ -68,7 +68,7 @@ ptr_t GC_build_fl_clear4(struct hblk *h, ptr_t ofl) } /* The same for size 2 uncleared objects */ -ptr_t GC_build_fl2(struct hblk *h, ptr_t ofl) +STATIC ptr_t GC_build_fl2(struct hblk *h, ptr_t ofl) { word * p = (word *)(h -> hb_body); word * lim = (word *)(h + 1); @@ -84,7 +84,7 @@ ptr_t GC_build_fl2(struct hblk *h, ptr_t ofl) } /* The same for size 4 uncleared objects */ -ptr_t GC_build_fl4(struct hblk *h, ptr_t ofl) +STATIC ptr_t GC_build_fl4(struct hblk *h, ptr_t ofl) { word * p = (word *)(h -> hb_body); word * lim = (word *)(h + 1); @@ -181,7 +181,6 @@ void GC_new_hblk(size_t gran, int kind) struct hblk *h; /* the new heap block */ GC_bool clear = GC_obj_kinds[kind].ok_init; - /* Ignore gcc "no effect" warning on the following: */ GC_STATIC_ASSERT((sizeof (struct hblk)) == HBLKSIZE); if (GC_debugging_started) clear = TRUE; diff --git a/obj_map.c b/obj_map.c index a1731955..0f66b138 100644 --- a/obj_map.c +++ b/obj_map.c @@ -24,7 +24,7 @@ /* Consider pointers that are offset bytes displaced from the beginning */ /* of an object to be valid. */ -void GC_register_displacement(size_t offset) +GC_API void GC_register_displacement(size_t offset) { DCL_LOCK_STATE; diff --git a/os_dep.c b/os_dep.c index f4033756..fd8fb7bb 100644 --- a/os_dep.c +++ b/os_dep.c @@ -98,7 +98,8 @@ #endif #if defined(LINUX) || defined(FREEBSD) || defined(SOLARIS) || defined(IRIX5) \ - || defined(USE_MMAP) || defined(USE_MUNMAP) + || ((defined(USE_MMAP) || defined(USE_MUNMAP)) \ + && !defined(MSWIN32) && !defined(MSWINCE)) # define MMAP_SUPPORTED #endif @@ -163,10 +164,11 @@ ssize_t GC_repeat_read(int fd, char *buf, size_t count) return num_read; } +#ifdef THREADS /* Determine the length of a file by incrementally reading it into a */ /* This would be sily to use on a file supporting lseek, but Linux */ /* /proc files usually do not. */ -size_t GC_get_file_len(int f) +STATIC size_t GC_get_file_len(int f) { size_t total = 0; ssize_t result; @@ -181,13 +183,14 @@ size_t GC_get_file_len(int f) return total; } -size_t GC_get_maps_len(void) +STATIC size_t GC_get_maps_len(void) { int f = open("/proc/self/maps", O_RDONLY); size_t result = GC_get_file_len(f); close(f); return result; } +#endif /* * Copy the contents of /proc/self/maps to a buffer in our address space. @@ -276,19 +279,19 @@ char * GC_get_maps(void) return maps_buf; } -// -// GC_parse_map_entry parses an entry from /proc/self/maps so we can -// locate all writable data segments that belong to shared libraries. -// The format of one of these entries and the fields we care about -// is as follows: -// XXXXXXXX-XXXXXXXX r-xp 00000000 30:05 260537 name of mapping...\n -// ^^^^^^^^ ^^^^^^^^ ^^^^ ^^ -// start end prot maj_dev -// -// Note that since about august 2003 kernels, the columns no longer have -// fixed offsets on 64-bit kernels. Hence we no longer rely on fixed offsets -// anywhere, which is safer anyway. -// +/* + * GC_parse_map_entry parses an entry from /proc/self/maps so we can + * locate all writable data segments that belong to shared libraries. 
+ * The format of one of these entries and the fields we care about + * is as follows: + * XXXXXXXX-XXXXXXXX r-xp 00000000 30:05 260537 name of mapping...\n + * ^^^^^^^^ ^^^^^^^^ ^^^^ ^^ + * start end prot maj_dev + * + * Note that since about august 2003 kernels, the columns no longer have + * fixed offsets on 64-bit kernels. Hence we no longer rely on fixed offsets + * anywhere, which is safer anyway. + */ /* * Assign various fields of the first line in buf_ptr to *start, *end, @@ -446,9 +449,10 @@ static ptr_t backing_store_base_from_proc(void) ptr_t GC_data_start; - void GC_init_linux_data_start() + ptr_t GC_find_limit(ptr_t, GC_bool); + + void GC_init_linux_data_start(void) { - extern ptr_t GC_find_limit(ptr_t, GC_bool); # if defined(LINUX) || defined(HURD) /* Try the easy approaches first: */ @@ -471,10 +475,10 @@ static ptr_t backing_store_base_from_proc(void) # define ECOS_GC_MEMORY_SIZE (448 * 1024) # endif /* ECOS_GC_MEMORY_SIZE */ -// FIXME: This is a simple way of allocating memory which is -// compatible with ECOS early releases. Later releases use a more -// sophisticated means of allocating memory than this simple static -// allocator, but this method is at least bound to work. +/* FIXME: This is a simple way of allocating memory which is */ +/* compatible with ECOS early releases. Later releases use a more */ +/* sophisticated means of allocating memory than this simple static */ +/* allocator, but this method is at least bound to work. */ static char memory[ECOS_GC_MEMORY_SIZE]; static char *brk = memory; @@ -497,11 +501,11 @@ static void *tiny_sbrk(ptrdiff_t increment) #if (defined(NETBSD) || defined(OPENBSD)) && defined(__ELF__) ptr_t GC_data_start; + ptr_t GC_find_limit(ptr_t, GC_bool); + extern char **environ; void GC_init_netbsd_elf(void) { - extern ptr_t GC_find_limit(ptr_t, GC_bool); - extern char **environ; /* This may need to be environ, without the underscore, for */ /* some versions. */ GC_data_start = GC_find_limit((ptr_t)&environ, FALSE); @@ -589,111 +593,7 @@ struct o32_obj { # define INCL_DOSMEMMGR # include - -/* Disable and enable signals during nontrivial allocations */ - -void GC_disable_signals(void) -{ - ULONG nest; - - DosEnterMustComplete(&nest); - if (nest != 1) ABORT("nested GC_disable_signals"); -} - -void GC_enable_signals(void) -{ - ULONG nest; - - DosExitMustComplete(&nest); - if (nest != 0) ABORT("GC_enable_signals"); -} - - -# else - -# if !defined(PCR) && !defined(AMIGA) && !defined(MSWIN32) \ - && !defined(MSWINCE) \ - && !defined(MACOS) && !defined(DJGPP) && !defined(DOS4GW) \ - && !defined(NOSYS) && !defined(ECOS) - -# if 0 - /* Use the traditional BSD interface */ -# define SIGSET_T int -# define SIG_DEL(set, signal) (set) &= ~(sigmask(signal)) -# define SIG_FILL(set) (set) = 0x7fffffff - /* Setting the leading bit appears to provoke a bug in some */ - /* longjmp implementations. Most systems appear not to have */ - /* a signal 32. 
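
The map-entry format documented in the GC_parse_map_entry comment above can be
exercised with a toy parser; the real code walks the fields by hand instead of
using sscanf, since column offsets are not fixed on 64-bit kernels. The values
below are illustrative:

    #include <stdio.h>

    int main(void)
    {
        const char *line =
            "08048000-0804c000 r-xp 00000000 30:05 260537 /bin/cat";
        unsigned long start, end;
        char prot[5];

        if (sscanf(line, "%lx-%lx %4s", &start, &end, prot) == 3)
            printf("start=%#lx end=%#lx prot=%s writable=%d\n",
                   start, end, prot, prot[1] == 'w');
        return 0;
    }
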
*/ -# define SIGSETMASK(old, new) (old) = sigsetmask(new) -# endif - - /* Use POSIX/SYSV interface */ -# define SIGSET_T sigset_t -# define SIG_DEL(set, signal) sigdelset(&(set), (signal)) -# define SIG_FILL(set) sigfillset(&set) -# define SIGSETMASK(old, new) sigprocmask(SIG_SETMASK, &(new), &(old)) - - -static GC_bool mask_initialized = FALSE; - -static SIGSET_T new_mask; - -static SIGSET_T old_mask; - -static SIGSET_T dummy; - -#if defined(GC_ASSERTIONS) && !defined(THREADS) -# define CHECK_SIGNALS - int GC_sig_disabled = 0; -#endif - -void GC_disable_signals(void) -{ - if (!mask_initialized) { - SIG_FILL(new_mask); - - SIG_DEL(new_mask, SIGSEGV); - SIG_DEL(new_mask, SIGILL); - SIG_DEL(new_mask, SIGQUIT); -# ifdef SIGBUS - SIG_DEL(new_mask, SIGBUS); -# endif -# ifdef SIGIOT - SIG_DEL(new_mask, SIGIOT); -# endif -# ifdef SIGEMT - SIG_DEL(new_mask, SIGEMT); -# endif -# ifdef SIGTRAP - SIG_DEL(new_mask, SIGTRAP); -# endif - mask_initialized = TRUE; - } -# ifdef CHECK_SIGNALS - if (GC_sig_disabled != 0) ABORT("Nested disables"); - GC_sig_disabled++; -# endif - SIGSETMASK(old_mask,new_mask); -} - -void GC_enable_signals(void) -{ -# ifdef CHECK_SIGNALS - if (GC_sig_disabled != 1) ABORT("Unmatched enable"); - GC_sig_disabled--; -# endif - SIGSETMASK(dummy,old_mask); -} - -# endif /* !PCR */ - -# endif /*!OS/2 */ - -/* Ivan Demakov: simplest way (to me) */ -#if defined (DOS4GW) - void GC_disable_signals() { } - void GC_enable_signals() { } -#endif +# endif /* OS/2 */ /* Find the page size */ word GC_page_size; @@ -735,7 +635,7 @@ word GC_page_size; /* The pointer p is assumed to be page aligned. */ /* If base is not 0, *base becomes the beginning of the */ /* allocation region containing p. */ -word GC_get_writable_length(ptr_t p, ptr_t *base) +STATIC word GC_get_writable_length(ptr_t p, ptr_t *base) { MEMORY_BASIC_INFORMATION buf; word result; @@ -816,7 +716,7 @@ ptr_t GC_get_main_stack_base(void) || defined(HURD) || defined(NETBSD) static struct sigaction old_segv_act; # if defined(_sigargs) /* !Irix6.x */ || defined(HPUX) \ - || defined(HURD) || defined(NETBSD) + || defined(HURD) || defined(NETBSD) || defined(FREEBSD) static struct sigaction old_bus_act; # endif # else @@ -846,7 +746,8 @@ ptr_t GC_get_main_stack_base(void) # else (void) sigaction(SIGSEGV, &act, &old_segv_act); # if defined(IRIX5) && defined(_sigargs) /* Irix 5.x, not 6.x */ \ - || defined(HPUX) || defined(HURD) || defined(NETBSD) + || defined(HPUX) || defined(HURD) || defined(NETBSD) \ + || defined(FREEBSD) /* Under Irix 5.x or HP/UX, we may get SIGBUS. */ /* Pthreads doesn't exist under Irix 5.x, so we */ /* don't have to worry in the threads case. */ @@ -868,7 +769,7 @@ ptr_t GC_get_main_stack_base(void) # define MIN_PAGE_SIZE 256 /* Smallest conceivable page size, bytes */ /*ARGSUSED*/ - void GC_fault_handler(int sig) + STATIC void GC_fault_handler(int sig) { LONGJMP(GC_jmp_buf, 1); } @@ -887,7 +788,8 @@ ptr_t GC_get_main_stack_base(void) || defined(OSF1) || defined(HURD) || defined(NETBSD) (void) sigaction(SIGSEGV, &old_segv_act, 0); # if defined(IRIX5) && defined(_sigargs) /* Irix 5.x, not 6.x */ \ - || defined(HPUX) || defined(HURD) || defined(NETBSD) + || defined(HPUX) || defined(HURD) || defined(NETBSD) \ + || defined(FREEBSD) (void) sigaction(SIGBUS, &old_bus_act, 0); # endif # else @@ -902,7 +804,7 @@ ptr_t GC_get_main_stack_base(void) /* the smallest location q s.t. [q,p) is addressable (!up). */ /* We assume that p (up) or p-1 (!up) is addressable. */ /* Requires allocation lock. 
*/ - ptr_t GC_find_limit_with_bound(ptr_t p, GC_bool up, ptr_t bound) + STATIC ptr_t GC_find_limit_with_bound(ptr_t p, GC_bool up, ptr_t bound) { static volatile ptr_t result; /* Safer if static, since otherwise it may not be */ @@ -935,11 +837,7 @@ ptr_t GC_get_main_stack_base(void) ptr_t GC_find_limit(ptr_t p, GC_bool up) { - if (up) { - return GC_find_limit_with_bound(p, up, (ptr_t)(word)(-1)); - } else { - return GC_find_limit_with_bound(p, up, 0); - } + return GC_find_limit_with_bound(p, up, up ? (ptr_t)(word)(-1) : 0); } # endif @@ -1017,7 +915,7 @@ ptr_t GC_get_main_stack_base(void) } # endif - ptr_t GC_linux_stack_base(void) + STATIC ptr_t GC_linux_stack_base(void) { /* We read the stack base value from /proc/self/stat. We do this */ /* using direct I/O system calls in order to avoid calling malloc */ @@ -1090,7 +988,7 @@ ptr_t GC_get_main_stack_base(void) #include #include - ptr_t GC_freebsd_stack_base(void) + STATIC ptr_t GC_freebsd_stack_base(void) { int nm[2] = {CTL_KERN, KERN_USRSTACK}; ptr_t base; @@ -1110,16 +1008,14 @@ ptr_t GC_get_main_stack_base(void) ptr_t GC_get_main_stack_base(void) { -# if defined(HEURISTIC1) || defined(HEURISTIC2) - word dummy; -# endif - ptr_t result; - -# define STACKBOTTOM_ALIGNMENT_M1 ((word)STACK_GRAN - 1) - # ifdef STACKBOTTOM return(STACKBOTTOM); # else +# if defined(HEURISTIC1) || defined(HEURISTIC2) + word dummy; +# endif + ptr_t result; +# define STACKBOTTOM_ALIGNMENT_M1 ((word)STACK_GRAN - 1) # ifdef HEURISTIC1 # ifdef STACK_GROWS_DOWN result = (ptr_t)((((word)(&dummy)) @@ -1168,13 +1064,14 @@ ptr_t GC_get_main_stack_base(void) #if defined(GC_LINUX_THREADS) && !defined(HAVE_GET_STACK_BASE) #include +/* extern int pthread_getattr_np(pthread_t, pthread_attr_t *); */ #ifdef IA64 ptr_t GC_greatest_stack_base_below(ptr_t bound); /* From pthread_support.c */ #endif -int GC_get_stack_base(struct GC_stack_base *b) +GC_API int GC_get_stack_base(struct GC_stack_base *b) { pthread_attr_t attr; size_t size; @@ -1186,6 +1083,7 @@ int GC_get_stack_base(struct GC_stack_base *b) if (pthread_attr_getstack(&attr, &(b -> mem_base), &size) != 0) { ABORT("pthread_attr_getstack failed"); } + pthread_attr_destroy(&attr); # ifdef STACK_GROWS_DOWN b -> mem_base = (char *)(b -> mem_base) + size; # endif @@ -1223,11 +1121,10 @@ int GC_get_stack_base(struct GC_stack_base *b) /* next. Thus this is likely to identify way too large a */ /* "stack" and thus at least result in disastrous performance. */ /* FIXME - Implement better strategies here. 
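
The pthread_attr_destroy call added above closes a small leak, since
pthread_getattr_np hands back an attribute object that each call is expected
to release. Using the now-exported GC_get_stack_base from client code might
look like this (a sketch; struct GC_stack_base and GC_SUCCESS are assumed to
come from gc.h):

    #include <stdio.h>
    #include "gc.h"

    void show_stack_base(void)
    {
        struct GC_stack_base sb;

        if (GC_get_stack_base(&sb) == GC_SUCCESS)
            printf("stack base: %p\n", sb.mem_base);
        else
            printf("stack base unavailable on this platform\n");
    }
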
*/ -int GC_get_stack_base(struct GC_stack_base *b) +GC_API int GC_get_stack_base(struct GC_stack_base *b) { - int dummy; - # ifdef NEED_FIND_LIMIT + int dummy; # ifdef STACK_GROWS_DOWN b -> mem_base = GC_find_limit((ptr_t)(&dummy), TRUE); # ifdef IA64 @@ -1337,7 +1234,8 @@ void GC_register_data_segments(void) GC_err_printf("Object with invalid pages?\n"); continue; } - GC_add_roots_inner(O32_BASE(seg), O32_BASE(seg)+O32_SIZE(seg), FALSE); + GC_add_roots_inner((ptr_t)O32_BASE(seg), + (ptr_t)(O32_BASE(seg)+O32_SIZE(seg)), FALSE); } } @@ -1360,9 +1258,24 @@ void GC_register_data_segments(void) # if defined(GWW_VDB) -# ifndef _BASETSD_H_ - typedef ULONG * PULONG_PTR; +# ifndef MEM_WRITE_WATCH +# define MEM_WRITE_WATCH 0x200000 # endif + +# ifndef WRITE_WATCH_FLAG_RESET +# define WRITE_WATCH_FLAG_RESET 1 +# endif + +# if !defined(_BASETSD_H_) && !defined(_BASETSD_H) +# ifdef _WIN64 + typedef unsigned __int64 ULONG_PTR; +# else + typedef unsigned long ULONG_PTR; +# endif + typedef ULONG_PTR SIZE_T; + typedef ULONG_PTR * PULONG_PTR; +# endif + typedef UINT (WINAPI * GetWriteWatch_type)( DWORD, PVOID, SIZE_T, PVOID*, PULONG_PTR, PULONG); static GetWriteWatch_type GetWriteWatch_func; @@ -1468,14 +1381,14 @@ void GC_register_data_segments(void) /* apparently works only for NT-based Windows. */ /* In the long run, a better data structure would also be nice ... */ - struct GC_malloc_heap_list { + STATIC struct GC_malloc_heap_list { void * allocation_base; struct GC_malloc_heap_list *next; } *GC_malloc_heap_l = 0; /* Is p the base of one of the malloc heap sections we already know */ /* about? */ - GC_bool GC_is_malloc_heap_base(ptr_t p) + STATIC GC_bool GC_is_malloc_heap_base(ptr_t p) { struct GC_malloc_heap_list *q = GC_malloc_heap_l; @@ -1486,7 +1399,7 @@ void GC_register_data_segments(void) return FALSE; } - void *GC_get_allocation_base(void *p) + STATIC void *GC_get_allocation_base(void *p) { MEMORY_BASIC_INFORMATION buf; size_t result = VirtualQuery(p, &buf, sizeof(buf)); @@ -1496,9 +1409,9 @@ void GC_register_data_segments(void) return buf.AllocationBase; } - size_t GC_max_root_size = 100000; /* Appr. largest root size. */ + STATIC size_t GC_max_root_size = 100000; /* Appr. largest root size. */ - void GC_add_current_malloc_heap() + void GC_add_current_malloc_heap(void) { struct GC_malloc_heap_list *new_l = malloc(sizeof(struct GC_malloc_heap_list)); @@ -1547,7 +1460,7 @@ void GC_register_data_segments(void) } # ifdef MSWIN32 - void GC_register_root_section(ptr_t static_root) + STATIC void GC_register_root_section(ptr_t static_root) { MEMORY_BASIC_INFORMATION buf; size_t result; @@ -1581,7 +1494,7 @@ void GC_register_data_segments(void) } #endif - void GC_register_data_segments() + void GC_register_data_segments(void) { # ifdef MSWIN32 static char dummy; @@ -1672,7 +1585,7 @@ void GC_register_data_segments(void) /* sbrk at process startup. It needs to be scanned, so that */ /* we don't lose some malloc allocated data structures */ /* hanging from it. We're on thin ice here ... 
*/ - extern caddr_t sbrk(); + extern caddr_t sbrk(int); GC_add_roots_inner(DATASTART, (ptr_t)sbrk(0), FALSE); # else @@ -1757,10 +1670,10 @@ void GC_register_data_segments(void) #endif #ifndef HEAP_START -# define HEAP_START 0 +# define HEAP_START ((ptr_t)0) #endif -ptr_t GC_unix_mmap_get_mem(word bytes) +STATIC ptr_t GC_unix_mmap_get_mem(word bytes) { void *result; static ptr_t last_addr = HEAP_START; @@ -1807,7 +1720,7 @@ ptr_t GC_unix_get_mem(word bytes) #else /* Not USE_MMAP */ -ptr_t GC_unix_sbrk_get_mem(word bytes) +STATIC ptr_t GC_unix_sbrk_get_mem(word bytes) { ptr_t result; # ifdef IRIX5 @@ -1957,7 +1870,7 @@ ptr_t GC_win32_get_mem(word bytes) return(result); } -void GC_win32_free_heap(void) +GC_API void GC_win32_free_heap(void) { if (GC_no_win32_dlls) { while (GC_n_heap_bases > 0) { @@ -2034,7 +1947,6 @@ ptr_t GC_wince_get_mem(word bytes) /* For now, this only works on Win32/WinCE and some Unix-like */ /* systems. If you have something else, don't define */ /* USE_MUNMAP. */ -/* We assume ANSI C to support this feature. */ #if !defined(MSWIN32) && !defined(MSWINCE) @@ -2048,23 +1960,20 @@ ptr_t GC_wince_get_mem(word bytes) /* Compute a page aligned starting address for the unmap */ /* operation on a block of size bytes starting at start. */ /* Return 0 if the block is too small to make this feasible. */ -ptr_t GC_unmap_start(ptr_t start, size_t bytes) +STATIC ptr_t GC_unmap_start(ptr_t start, size_t bytes) { - ptr_t result = start; + ptr_t result; /* Round start to next page boundary. */ - result += GC_page_size - 1; - result = (ptr_t)((word)result & ~(GC_page_size - 1)); + result = (ptr_t)((word)(start + GC_page_size - 1) & ~(GC_page_size - 1)); if (result + GC_page_size > start + bytes) return 0; return result; } /* Compute end address for an unmap operation on the indicated */ /* block. */ -ptr_t GC_unmap_end(ptr_t start, size_t bytes) +STATIC ptr_t GC_unmap_end(ptr_t start, size_t bytes) { - ptr_t end_addr = start + bytes; - end_addr = (ptr_t)((word)end_addr & ~(GC_page_size - 1)); - return end_addr; + return (ptr_t)((word)(start + bytes) & ~(GC_page_size - 1)); } /* Under Win32/WinCE we commit (map) and decommit (unmap) */ @@ -2166,7 +2075,6 @@ void GC_unmap_gap(ptr_t start1, size_t bytes1, ptr_t start2, size_t bytes2) ptr_t start1_addr = GC_unmap_start(start1, bytes1); ptr_t end1_addr = GC_unmap_end(start1, bytes1); ptr_t start2_addr = GC_unmap_start(start2, bytes2); - ptr_t end2_addr = GC_unmap_end(start2, bytes2); ptr_t start_addr = end1_addr; ptr_t end_addr = start2_addr; size_t len; @@ -2253,7 +2161,7 @@ void GC_default_push_other_roots(void) extern void GC_push_all_stacks(void); -void GC_default_push_other_roots(void) +STATIC void GC_default_push_other_roots(void) { GC_push_all_stacks(); } @@ -2308,7 +2216,7 @@ GC_bool GC_dirty_maintained = FALSE; #if defined(PROC_VDB) || defined(GWW_VDB) /* Add all pages in pht2 to pht1 */ -void GC_or_pages(page_hash_table pht1, page_hash_table pht2) +STATIC void GC_or_pages(page_hash_table pht1, page_hash_table pht2) { register int i; @@ -2519,7 +2427,6 @@ void GC_read_dirty(void) /* If the actual page size is different, this returns TRUE if any */ /* of the pages overlapping h are dirty. This routine may err on the */ /* side of labelling pages as dirty (and this implementation does). */ -/*ARGSUSED*/ GC_bool GC_page_was_dirty(struct hblk *h) { register word index; @@ -2598,12 +2505,12 @@ void GC_remove_protection(struct hblk *h, word nblocks, GC_bool is_ptrfree) decrease the likelihood of some of the problems described below. 
*/ #include static mach_port_t GC_task_self; - #define PROTECT(addr,len) \ +# define PROTECT(addr,len) \ if(vm_protect(GC_task_self,(vm_address_t)(addr),(vm_size_t)(len), \ FALSE,VM_PROT_READ) != KERN_SUCCESS) { \ ABORT("vm_portect failed"); \ } - #define UNPROTECT(addr,len) \ +# define UNPROTECT(addr,len) \ if(vm_protect(GC_task_self,(vm_address_t)(addr),(vm_size_t)(len), \ FALSE,VM_PROT_READ|VM_PROT_WRITE) != KERN_SUCCESS) { \ ABORT("vm_portect failed"); \ @@ -2652,11 +2559,13 @@ void GC_remove_protection(struct hblk *h, word nblocks, GC_bool is_ptrfree) #endif #ifndef DARWIN -SIG_HNDLR_PTR GC_old_bus_handler; -GC_bool GC_old_bus_handler_used_si; -SIG_HNDLR_PTR GC_old_segv_handler; +STATIC SIG_HNDLR_PTR GC_old_segv_handler; /* Also old MSWIN32 ACCESS_VIOLATION filter */ -GC_bool GC_old_segv_handler_used_si; +#if !defined(MSWIN32) && !defined(MSWINCE) +STATIC SIG_HNDLR_PTR GC_old_bus_handler; +STATIC GC_bool GC_old_bus_handler_used_si; +STATIC GC_bool GC_old_segv_handler_used_si; +#endif #endif /* !DARWIN */ #if defined(THREADS) @@ -2758,15 +2667,10 @@ GC_bool GC_old_segv_handler_used_si; # endif /* MSWIN32 || MSWINCE */ { # if !defined(MSWIN32) && !defined(MSWINCE) - int code = si -> si_code; /* Ignore gcc unused var. warning. */ - ucontext_t * scp = (ucontext_t *)raw_sc; - /* Ignore gcc unused var. warning. */ - char *addr = si -> si_addr; -# endif -# if defined(MSWIN32) || defined(MSWINCE) + char *addr = si -> si_addr; +# else char * addr = (char *) (exc_info -> ExceptionRecord -> ExceptionInformation[1]); -# define sig SIGSEGV # endif unsigned i; @@ -2793,15 +2697,21 @@ GC_bool GC_old_segv_handler_used_si; /* Heap blocks now begin and end on page boundaries */ SIG_HNDLR_PTR old_handler; - GC_bool used_si; - - if (sig == SIGSEGV) { + +# if defined(MSWIN32) || defined(MSWINCE) old_handler = GC_old_segv_handler; - used_si = GC_old_segv_handler_used_si; - } else { - old_handler = GC_old_bus_handler; - used_si = GC_old_bus_handler_used_si; - } +# else + GC_bool used_si; + + if (sig == SIGSEGV) { + old_handler = GC_old_segv_handler; + used_si = GC_old_segv_handler_used_si; + } else { + old_handler = GC_old_bus_handler; + used_si = GC_old_bus_handler_used_si; + } +# endif + if (old_handler == (SIG_HNDLR_PTR)SIG_DFL) { # if !defined(MSWIN32) && !defined(MSWINCE) GC_err_printf("Segfault at %p\n", addr); @@ -2815,7 +2725,7 @@ GC_bool GC_old_segv_handler_used_si; * old signal handler used the traditional style and * if so call it using that style. */ -# ifdef MSWIN32 +# if defined(MSWIN32) || defined(MSWINCE) return((*old_handler)(exc_info)); # else if (used_si) @@ -2872,7 +2782,6 @@ void GC_remove_protection(struct hblk *h, word nblocks, GC_bool is_ptrfree) struct hblk * h_trunc; /* Truncated to page boundary */ struct hblk * h_end; /* Page boundary following block end */ struct hblk * current; - GC_bool found_clean; # if defined(GWW_VDB) if (GC_GWW_AVAILABLE()) return; @@ -2881,7 +2790,6 @@ void GC_remove_protection(struct hblk *h, word nblocks, GC_bool is_ptrfree) h_trunc = (struct hblk *)((word)h & ~(GC_page_size-1)); h_end = (struct hblk *)(((word)(h + nblocks) + GC_page_size-1) & ~(GC_page_size-1)); - found_clean = FALSE; for (current = h_trunc; current < h_end; ++current) { size_t index = PHT_HASH(current); @@ -2940,7 +2848,6 @@ void GC_dirty_init(void) if (GC_print_stats == VERBOSE) GC_log_printf("Replaced other SIGSEGV handler\n"); } -# endif /* ! 
MS windows */ # if defined(HPUX) || defined(LINUX) || defined(HURD) \ || (defined(FREEBSD) && defined(SUNOS5SIGS)) sigaction(SIGBUS, &act, &oldact); @@ -2960,6 +2867,7 @@ void GC_dirty_init(void) GC_log_printf("Replaced other SIGBUS handler\n"); } # endif /* HPUX || LINUX || HURD || (FREEBSD && SUNOS5SIGS) */ +# endif /* ! MS windows */ # if defined(MSWIN32) # if defined(GWW_VDB) if (GC_gww_dirty_init()) @@ -2976,7 +2884,7 @@ void GC_dirty_init(void) } #endif /* !DARWIN */ -int GC_incremental_protection_needs(void) +GC_API int GC_incremental_protection_needs(void) { if (GC_page_size == HBLKSIZE) { return GC_PROTECTS_POINTER_HEAP; @@ -2990,7 +2898,7 @@ int GC_incremental_protection_needs(void) #define IS_PTRFREE(hhdr) ((hhdr)->hb_descr == 0) #define PAGE_ALIGNED(x) !((word)(x) & (GC_page_size - 1)) -void GC_protect_heap(void) +STATIC void GC_protect_heap(void) { ptr_t start; size_t len; @@ -3087,9 +2995,9 @@ GC_bool GC_page_was_dirty(struct hblk *h) * On other systems, SET_LOCK_HOLDER and friends must be suitably defined. */ +#if 0 static GC_bool syscall_acquired_lock = FALSE; /* Protected by GC lock. */ -#if 0 void GC_begin_syscall(void) { /* FIXME: Resurrecting this code would require fixing the */ @@ -3243,10 +3151,10 @@ GC_bool GC_page_was_ever_dirty(struct hblk *h) #include #define INITIAL_BUF_SZ 16384 -word GC_proc_buf_size = INITIAL_BUF_SZ; -char *GC_proc_buf; +STATIC word GC_proc_buf_size = INITIAL_BUF_SZ; +STATIC char *GC_proc_buf; -int GC_proc_fd; +STATIC int GC_proc_fd; void GC_dirty_init(void) { @@ -3280,10 +3188,7 @@ void GC_dirty_init(void) /* Ignore write hints. They don't help us here. */ /*ARGSUSED*/ -void GC_remove_protection(h, nblocks, is_ptrfree) -struct hblk *h; -word nblocks; -GC_bool is_ptrfree; +void GC_remove_protection(struct hblk *h, word nblocks, GC_bool is_ptrfree) { } @@ -3362,19 +3267,15 @@ void GC_read_dirty(void) GC_bool GC_page_was_dirty(struct hblk *h) { register word index = PHT_HASH(h); - register GC_bool result; - result = get_pht_entry_from_index(GC_grungy_pages, index); - return(result); + return get_pht_entry_from_index(GC_grungy_pages, index); } GC_bool GC_page_was_ever_dirty(struct hblk *h) { register word index = PHT_HASH(h); - register GC_bool result; - result = get_pht_entry_from_index(GC_written_pages, index); - return(result); + return get_pht_entry_from_index(GC_written_pages, index); } # endif /* PROC_VDB */ @@ -3677,8 +3578,6 @@ static void *GC_mprotect_thread(void *arg) meaningless and safe to ignore. */ #ifdef BROKEN_EXCEPTION_HANDLING -static SIG_HNDLR_PTR GC_old_bus_handler; - /* Updates to this aren't atomic, but the SIGBUSs seem pretty rare. 
Even if this doesn't get updated property, it isn't really a problem */ static int GC_sigbus_count; @@ -3772,8 +3671,7 @@ void GC_dirty_init(void) sa.sa_flags = SA_RESTART|SA_SIGINFO; if(sigaction(SIGBUS, &sa, &oldsa) < 0) ABORT("sigaction"); - GC_old_bus_handler = (SIG_HNDLR_PTR)oldsa.sa_handler; - if (GC_old_bus_handler != SIG_DFL) { + if ((SIG_HNDLR_PTR)oldsa.sa_handler != SIG_DFL) { if (GC_print_stats == VERBOSE) GC_err_printf("Replaced other SIGBUS handler\n"); } @@ -3999,7 +3897,7 @@ catch_exception_raise_state_identity(mach_port_name_t exception_port, #endif /* DARWIN && MPROTECT_VDB */ # ifndef HAVE_INCREMENTAL_PROTECTION_NEEDS - int GC_incremental_protection_needs() + GC_API int GC_incremental_protection_needs(void) { return GC_PROTECTS_NONE; } diff --git a/pthread_stop_world.c b/pthread_stop_world.c index c542aebc..54ceebf0 100644 --- a/pthread_stop_world.c +++ b/pthread_stop_world.c @@ -23,7 +23,7 @@ # endif #endif -void GC_print_sig_mask() +void GC_print_sig_mask(void) { sigset_t blocked; int i; @@ -41,7 +41,7 @@ void GC_print_sig_mask() /* Remove the signals that we want to allow in thread stopping */ /* handler from a set. */ -void GC_remove_allowed_signals(sigset_t *set) +STATIC void GC_remove_allowed_signals(sigset_t *set) { if (sigdelset(set, SIGINT) != 0 || sigdelset(set, SIGQUIT) != 0 @@ -113,13 +113,13 @@ sem_t GC_suspend_ack_sem; sem_t GC_restart_ack_sem; #endif -void GC_suspend_handler_inner(ptr_t sig_arg, void *context); +STATIC void GC_suspend_handler_inner(ptr_t sig_arg, void *context); #if defined(IA64) || defined(HP_PA) || defined(M68K) #ifdef SA_SIGINFO -void GC_suspend_handler(int sig, siginfo_t *info, void *context) +STATIC void GC_suspend_handler(int sig, siginfo_t *info, void *context) #else -void GC_suspend_handler(int sig) +STATIC void GC_suspend_handler(int sig) #endif { int old_errno = errno; @@ -130,9 +130,9 @@ void GC_suspend_handler(int sig) /* We believe that in all other cases the full context is already */ /* in the signal handler frame. */ #ifdef SA_SIGINFO -void GC_suspend_handler(int sig, siginfo_t *info, void *context) +STATIC void GC_suspend_handler(int sig, siginfo_t *info, void *context) #else -void GC_suspend_handler(int sig) +STATIC void GC_suspend_handler(int sig) #endif { int old_errno = errno; @@ -144,18 +144,13 @@ void GC_suspend_handler(int sig) } #endif -void GC_suspend_handler_inner(ptr_t sig_arg, void *context) +STATIC void GC_suspend_handler_inner(ptr_t sig_arg, void *context) { int sig = (int)(word)sig_arg; int dummy; pthread_t my_thread = pthread_self(); GC_thread me; -# ifdef PARALLEL_MARK - word my_mark_no = GC_mark_no; - /* Marker can't proceed until we acknowledge. Thus this is */ - /* guaranteed to be the mark_no correspending to our */ - /* suspension, i.e. the marker can't have incremented it yet. */ -# endif + AO_t my_stop_count = AO_load(&GC_stop_count); if (sig != SIG_SUSPEND) ABORT("Bad signal in suspend_handler"); @@ -219,11 +214,8 @@ void GC_suspend_handler_inner(ptr_t sig_arg, void *context) # endif } -void GC_restart_handler(int sig) +STATIC void GC_restart_handler(int sig) { - pthread_t my_thread = pthread_self(); - GC_thread me; - if (sig != SIG_THR_RESTART) ABORT("Bad signal in suspend_handler"); # ifdef GC_NETBSD_THREADS_WORKAROUND @@ -243,6 +235,8 @@ void GC_restart_handler(int sig) # endif } +void GC_thr_init(void); + # ifdef IA64 # define IF_IA64(x) x # else @@ -250,7 +244,7 @@ void GC_restart_handler(int sig) # endif /* We hold allocation lock. 
Should do exactly the right thing if the */ /* world is stopped. Should not fail if it isn't. */ -void GC_push_all_stacks() +void GC_push_all_stacks(void) { GC_bool found_me = FALSE; size_t nthreads = 0; @@ -324,13 +318,15 @@ void GC_push_all_stacks() /* There seems to be a very rare thread stopping problem. To help us */ /* debug that, we save the ids of the stopping thread. */ +#if DEBUG_THREADS pthread_t GC_stopping_thread; int GC_stopping_pid; +#endif /* We hold the allocation lock. Suspend all threads that might */ /* still be running. Return the number of suspend signals that */ /* were sent. */ -int GC_suspend_all() +STATIC int GC_suspend_all(void) { int n_live_threads = 0; int i; @@ -338,8 +334,10 @@ int GC_suspend_all() int result; pthread_t my_thread = pthread_self(); - GC_stopping_thread = my_thread; /* debugging only. */ - GC_stopping_pid = getpid(); /* debugging only. */ +# if DEBUG_THREADS + GC_stopping_thread = my_thread; + GC_stopping_pid = getpid(); +# endif for (i = 0; i < THREAD_TABLE_SZ; i++) { for (p = GC_threads[i]; p != 0; p = p -> next) { if (!THREAD_EQUAL(p -> id, my_thread)) { @@ -369,7 +367,7 @@ int GC_suspend_all() return n_live_threads; } -void GC_stop_world() +void GC_stop_world(void) { int i; int n_live_threads; @@ -436,15 +434,15 @@ void GC_stop_world() # ifdef PARALLEL_MARK GC_release_mark_lock(); # endif - #if DEBUG_THREADS +# if DEBUG_THREADS GC_printf("World stopped from 0x%x\n", (unsigned)pthread_self()); - #endif - GC_stopping_thread = 0; /* debugging only */ + GC_stopping_thread = 0; +# endif } /* Caller holds allocation lock, and has held it continuously since */ /* the world stopped. */ -void GC_start_world() +void GC_start_world(void) { pthread_t my_thread = pthread_self(); register int i; @@ -466,10 +464,10 @@ void GC_start_world() if (p -> flags & FINISHED) continue; if (p -> thread_blocked) continue; n_live_threads++; - #if DEBUG_THREADS +# if DEBUG_THREADS GC_printf("Sending restart signal to 0x%x\n", (unsigned)(p -> id)); - #endif +# endif result = pthread_kill(p -> id, SIG_THR_RESTART); switch(result) { @@ -499,7 +497,7 @@ void GC_start_world() # endif } -void GC_stop_init() { +void GC_stop_init(void) { struct sigaction act; if (sem_init(&GC_suspend_ack_sem, 0, 0) != 0) diff --git a/pthread_support.c b/pthread_support.c index 8ca1a3a6..b92c2312 100644 --- a/pthread_support.c +++ b/pthread_support.c @@ -172,7 +172,7 @@ unsigned long GC_lock_holder = NO_THREAD; #ifdef GC_USE_DLOPEN_WRAP static GC_bool GC_syms_initialized = FALSE; - void GC_init_real_syms(void) + STATIC void GC_init_real_syms(void) { void *dl_handle; # define LIBPTHREAD_NAME "libpthread.so.0" @@ -219,7 +219,8 @@ GC_bool GC_need_to_lock = FALSE; void GC_init_parallel(void); -long GC_nprocs = 1; /* Number of processors. We may not have */ +STATIC long GC_nprocs = 1; + /* Number of processors. We may not have */ /* access to all of them, but this is as good */ /* a guess as any ... */ @@ -241,6 +242,10 @@ void GC_mark_thread_local_free_lists(void) } #if defined(GC_ASSERTIONS) + void GC_check_tls_for(GC_tlfs p); +# if defined(USE_CUSTOM_SPECIFIC) + void GC_check_tsd_marks(tsd *key); +# endif /* Check that all thread-local free-lists are completely marked. */ /* also check that thread-specific-data structures are marked. 
*/ void GC_check_tls(void) { @@ -347,6 +352,7 @@ static void start_mark_threads(void) WARN("Marker thread creation failed, errno = %ld.\n", errno); } } + pthread_attr_destroy(&attr); } #endif /* PARALLEL_MARK */ @@ -370,7 +376,7 @@ static struct GC_Thread_Rep first_thread; /* Add a thread to GC_threads. We assume it wasn't already there. */ /* Caller holds allocation lock. */ -GC_thread GC_new_thread(pthread_t id) +STATIC GC_thread GC_new_thread(pthread_t id) { int hv = NUMERIC_THREAD_ID(id) % THREAD_TABLE_SZ; GC_thread result; @@ -395,7 +401,7 @@ GC_thread GC_new_thread(pthread_t id) /* Delete a thread from GC_threads. We assume it is there. */ /* (The code intentionally traps if it wasn't.) */ -void GC_delete_thread(pthread_t id) +STATIC void GC_delete_thread(pthread_t id) { int hv = NUMERIC_THREAD_ID(id) % THREAD_TABLE_SZ; register GC_thread p = GC_threads[hv]; @@ -421,7 +427,7 @@ void GC_delete_thread(pthread_t id) /* been notified, then there may be more than one thread */ /* in the table with the same pthread id. */ /* This is OK, but we need a way to delete a specific one. */ -void GC_delete_gc_thread(GC_thread gc_id) +STATIC void GC_delete_gc_thread(GC_thread gc_id) { pthread_t id = gc_id -> id; int hv = NUMERIC_THREAD_ID(id) % THREAD_TABLE_SZ; @@ -464,7 +470,7 @@ GC_thread GC_lookup_thread(pthread_t id) /* one for the current thread. We need to do this in the child */ /* process after a fork(), since only the current thread */ /* survives in the child. */ -void GC_remove_all_threads_but_me(void) +STATIC void GC_remove_all_threads_but_me(void) { pthread_t self = pthread_self(); int hv; @@ -551,7 +557,7 @@ ptr_t GC_greatest_stack_base_below(ptr_t bound) #ifdef GC_LINUX_THREADS /* Return the number of processors, or i<= 0 if it can't be determined. */ -int GC_get_nprocs(void) +STATIC int GC_get_nprocs(void) { /* Should be "return sysconf(_SC_NPROCESSORS_ONLN);" but that */ /* appears to be buggy in many cases. */ @@ -591,7 +597,7 @@ int GC_get_nprocs(void) /* collection in progress; otherwise we just wait for the current GC */ /* to finish. */ extern GC_bool GC_collection_in_progress(void); -void GC_wait_for_gc_completion(GC_bool wait_for_all) +STATIC void GC_wait_for_gc_completion(GC_bool wait_for_all) { GC_ASSERT(I_HOLD_LOCK()); if (GC_incremental && GC_collection_in_progress()) { @@ -623,7 +629,7 @@ void GC_wait_for_gc_completion(GC_bool wait_for_all) /* between fork() and exec(). Thus we're doing no worse than it. */ /* Called before a fork() */ -void GC_fork_prepare_proc(void) +STATIC void GC_fork_prepare_proc(void) { /* Acquire all relevant locks, so that after releasing the locks */ /* the child will see a consistent state in which monitor */ @@ -643,7 +649,7 @@ void GC_fork_prepare_proc(void) } /* Called in parent after a fork() */ -void GC_fork_parent_proc(void) +STATIC void GC_fork_parent_proc(void) { # if defined(PARALLEL_MARK) || defined(THREAD_LOCAL_ALLOC) GC_release_mark_lock(); @@ -652,7 +658,7 @@ void GC_fork_parent_proc(void) } /* Called in child after a fork() */ -void GC_fork_child_proc(void) +STATIC void GC_fork_child_proc(void) { /* Clean up the thread table, so that just our thread is left. */ # if defined(PARALLEL_MARK) || defined(THREAD_LOCAL_ALLOC) @@ -671,7 +677,7 @@ void GC_fork_child_proc(void) #if defined(GC_DGUX386_THREADS) /* Return the number of processors, or i<= 0 if it can't be determined. */ -int GC_get_nprocs(void) +STATIC int GC_get_nprocs(void) { /* */ int numCpus; @@ -907,7 +913,7 @@ struct start_info { /* parent hasn't yet noticed. 
*/ }; -int GC_unregister_my_thread(void) +GC_API int GC_unregister_my_thread(void) { GC_thread me; @@ -936,7 +942,7 @@ int GC_unregister_my_thread(void) /* results in at most a tiny one-time leak. And */ /* linuxthreads doesn't reclaim the main threads */ /* resources or id anyway. */ -void GC_thread_exit_proc(void *arg) +STATIC void GC_thread_exit_proc(void *arg) { GC_unregister_my_thread(); } @@ -998,8 +1004,8 @@ WRAP_FUNC(pthread_detach)(pthread_t thread) GC_bool GC_in_thread_creation = FALSE; /* Protected by allocation lock. */ -GC_thread GC_register_my_thread_inner(struct GC_stack_base *sb, - pthread_t my_pthread) +STATIC GC_thread GC_register_my_thread_inner(struct GC_stack_base *sb, + pthread_t my_pthread) { GC_thread me; @@ -1018,7 +1024,7 @@ GC_thread GC_register_my_thread_inner(struct GC_stack_base *sb, return me; } -int GC_register_my_thread(struct GC_stack_base *sb) +GC_API int GC_register_my_thread(struct GC_stack_base *sb) { pthread_t my_pthread = pthread_self(); GC_thread me; @@ -1030,6 +1036,9 @@ int GC_register_my_thread(struct GC_stack_base *sb) me -> flags |= DETACHED; /* Treat as detached, since we do not need to worry about */ /* pointer results. */ +# if defined(THREAD_LOCAL_ALLOC) + GC_init_thread_local(&(me->tlfs)); +# endif UNLOCK(); return GC_SUCCESS; } else { @@ -1038,7 +1047,7 @@ int GC_register_my_thread(struct GC_stack_base *sb) } } -void * GC_inner_start_routine(struct GC_stack_base *sb, void * arg) +STATIC void * GC_inner_start_routine(struct GC_stack_base *sb, void * arg) { struct start_info * si = arg; void * result; @@ -1082,7 +1091,7 @@ void * GC_inner_start_routine(struct GC_stack_base *sb, void * arg) return(result); } -void * GC_start_routine(void * arg) +STATIC void * GC_start_routine(void * arg) { # ifdef INCLUDE_LINUX_THREAD_DESCR struct GC_stack_base sb; @@ -1200,9 +1209,10 @@ WRAP_FUNC(pthread_create)(pthread_t *new_thread, return(result); } +#if defined(USE_SPIN_LOCK) || !defined(NO_PTHREAD_TRYLOCK) /* Spend a few cycles in a way that can't introduce contention with */ -/* othre threads. */ -void GC_pause(void) +/* other threads. */ +STATIC void GC_pause(void) { int i; # if !defined(__GNUC__) || defined(__INTEL_COMPILER) @@ -1218,6 +1228,7 @@ void GC_pause(void) # endif } } +#endif #define SPIN_MAX 128 /* Maximum number of calls to GC_pause before */ /* give up. */ @@ -1227,7 +1238,8 @@ volatile GC_bool GC_collecting = 0; /* holding the allocation lock for an */ /* extended period. */ -#if !defined(USE_SPIN_LOCK) || defined(PARALLEL_MARK) +#if (!defined(USE_SPIN_LOCK) && !defined(NO_PTHREAD_TRYLOCK)) \ + || defined(PARALLEL_MARK) || defined(THREAD_LOCAL_ALLOC) /* If we don't want to use the below spinlock implementation, either */ /* because we don't have a GC_test_and_set implementation, or because */ /* we don't want to risk sleeping, we can still try spinning on */ @@ -1251,7 +1263,7 @@ volatile GC_bool GC_collecting = 0; unsigned long GC_unlocked_count = 0; #endif -void GC_generic_lock(pthread_mutex_t * lock) +STATIC void GC_generic_lock(pthread_mutex_t * lock) { #ifndef NO_PTHREAD_TRYLOCK unsigned pause_length = 1; @@ -1286,7 +1298,7 @@ void GC_generic_lock(pthread_mutex_t * lock) pthread_mutex_lock(lock); } -#endif /* !USE_SPIN_LOCK || PARALLEL_MARK */ +#endif /* !USE_SPIN_LOCK || ... 
*/ #if defined(USE_SPIN_LOCK) diff --git a/ptr_chck.c b/ptr_chck.c index d04d2daf..6d5637fa 100644 --- a/ptr_chck.c +++ b/ptr_chck.c @@ -18,7 +18,7 @@ #include "private/gc_pmark.h" -void GC_default_same_obj_print_proc(void * p, void * q) +STATIC void GC_default_same_obj_print_proc(void * p, void * q) { GC_err_printf("%p and %p are not in the same object\n", p, q); ABORT("GC_same_obj test failed"); @@ -36,7 +36,7 @@ void (*GC_same_obj_print_proc) (void *, void *) /* We assume this is performance critical. (It shouldn't */ /* be called by production code, but this can easily make */ /* debugging intolerably slow.) */ -void * GC_same_obj(void *p, void *q) +GC_API void * GC_same_obj(void *p, void *q) { struct hblk *h; hdr *hhdr; @@ -99,7 +99,7 @@ fail: return(p); } -void GC_default_is_valid_displacement_print_proc (void *p) +STATIC void GC_default_is_valid_displacement_print_proc (void *p) { GC_err_printf("%p does not point to valid object displacement\n", p); ABORT("GC_is_valid_displacement test failed"); @@ -114,7 +114,7 @@ void (*GC_is_valid_displacement_print_proc)(void *) = /* Always returns its argument. */ /* Note that we don't lock, since nothing relevant about the header */ /* should change while we have a valid object pointer to the block. */ -void * GC_is_valid_displacement(void *p) +GC_API void * GC_is_valid_displacement(void *p) { hdr *hhdr; word pdispl; @@ -149,7 +149,7 @@ fail: return(p); } -void GC_default_is_visible_print_proc(void * p) +STATIC void GC_default_is_visible_print_proc(void * p) { GC_err_printf("%p is not a GC visible pointer location\n", p); ABORT("GC_is_visible test failed"); @@ -157,12 +157,10 @@ void GC_default_is_visible_print_proc(void * p) void (*GC_is_visible_print_proc)(void * p) = GC_default_is_visible_print_proc; +#ifndef THREADS /* Could p be a stack address? */ -GC_bool GC_on_stack(ptr_t p) -{ -# ifdef THREADS - return(TRUE); -# else + STATIC GC_bool GC_on_stack(ptr_t p) + { int dummy; # ifdef STACK_GROWS_DOWN if ((ptr_t)p >= (ptr_t)(&dummy) && (ptr_t)p < GC_stackbottom ) { @@ -174,8 +172,8 @@ GC_bool GC_on_stack(ptr_t p) } # endif return(FALSE); -# endif -} + } +#endif /* Check that p is visible */ /* to the collector as a possibly pointer containing location. */ @@ -185,7 +183,7 @@ GC_bool GC_on_stack(ptr_t p) /* untyped allocations. The idea is that it should be possible, though */ /* slow, to add such a call to all indirect pointer stores.) */ /* Currently useless for multithreaded worlds. 
*/ -void * GC_is_visible(void *p) +GC_API void * GC_is_visible(void *p) { hdr *hhdr; @@ -204,15 +202,13 @@ void * GC_is_visible(void *p) if (GC_on_stack(p)) return(p); hhdr = HDR((word)p); if (hhdr == 0) { - GC_bool result; - if (GC_is_static_root(p)) return(p); /* Else do it again correctly: */ # if (defined(DYNAMIC_LOADING) || defined(MSWIN32) || \ defined(MSWINCE) || defined(PCR)) GC_register_dynamic_libraries(); - result = GC_is_static_root(p); - if (result) return(p); + if (GC_is_static_root(p)) + return(p); # endif goto fail; } else { @@ -259,7 +255,7 @@ fail: } -void * GC_pre_incr (void **p, size_t how_much) +GC_API void * GC_pre_incr (void **p, size_t how_much) { void * initial = *p; void * result = GC_same_obj((void *)((word)initial + how_much), initial); @@ -270,7 +266,7 @@ void * GC_pre_incr (void **p, size_t how_much) return (*p = result); } -void * GC_post_incr (void **p, size_t how_much) +GC_API void * GC_post_incr (void **p, size_t how_much) { void * initial = *p; void * result = GC_same_obj((void *)((word)initial + how_much), initial); diff --git a/reclaim.c b/reclaim.c index 43fdf70e..0b01b1a0 100644 --- a/reclaim.c +++ b/reclaim.c @@ -34,11 +34,11 @@ signed_word GC_bytes_found = 0; /* the collector, e.g. without the allocation lock. */ #define MAX_LEAKED 40 ptr_t GC_leaked[MAX_LEAKED]; -unsigned GC_n_leaked = 0; +STATIC unsigned GC_n_leaked = 0; GC_bool GC_have_errors = FALSE; -void GC_add_leaked(ptr_t leaked) +STATIC void GC_add_leaked(ptr_t leaked) { if (GC_n_leaked < MAX_LEAKED) { GC_have_errors = TRUE; @@ -51,7 +51,7 @@ void GC_add_leaked(ptr_t leaked) static GC_bool printing_errors = FALSE; /* Print all objects on the list after printing any smashed objs. */ /* Clear both lists. */ -void GC_print_all_errors () +void GC_print_all_errors (void) { unsigned i; @@ -97,7 +97,7 @@ GC_bool GC_block_empty(hdr *hhdr) return (hhdr -> hb_n_marks == 0); } -GC_bool GC_block_nearly_full(hdr *hhdr) +STATIC GC_bool GC_block_nearly_full(hdr *hhdr) { return (hhdr -> hb_n_marks > 7 * HBLK_OBJS(hhdr -> hb_sz)/8); } @@ -110,9 +110,8 @@ GC_bool GC_block_nearly_full(hdr *hhdr) * free list. Returns the new list. * Clears unmarked objects. Sz is in bytes. */ -/*ARGSUSED*/ -ptr_t GC_reclaim_clear(struct hblk *hbp, hdr *hhdr, size_t sz, - ptr_t list, signed_word *count) +STATIC ptr_t GC_reclaim_clear(struct hblk *hbp, hdr *hhdr, size_t sz, + ptr_t list, signed_word *count) { word bit_no = 0; word *p, *q, *plim; @@ -158,9 +157,8 @@ ptr_t GC_reclaim_clear(struct hblk *hbp, hdr *hhdr, size_t sz, } /* The same thing, but don't clear objects: */ -/*ARGSUSED*/ -ptr_t GC_reclaim_uninit(struct hblk *hbp, hdr *hhdr, size_t sz, - ptr_t list, signed_word *count) +STATIC ptr_t GC_reclaim_uninit(struct hblk *hbp, hdr *hhdr, size_t sz, + ptr_t list, signed_word *count) { word bit_no = 0; word *p, *plim; @@ -186,8 +184,7 @@ ptr_t GC_reclaim_uninit(struct hblk *hbp, hdr *hhdr, size_t sz, } /* Don't really reclaim objects, just check for unmarked ones: */ -/*ARGSUSED*/ -void GC_reclaim_check(struct hblk *hbp, hdr *hhdr, word sz) +STATIC void GC_reclaim_check(struct hblk *hbp, hdr *hhdr, word sz) { word bit_no = 0; ptr_t p, plim; @@ -215,7 +212,7 @@ void GC_reclaim_check(struct hblk *hbp, hdr *hhdr, word sz) ptr_t GC_reclaim_generic(struct hblk * hbp, hdr *hhdr, size_t sz, GC_bool init, ptr_t list, signed_word *count) { - ptr_t result = list; + ptr_t result; GC_ASSERT(GC_find_header((ptr_t)hbp) == hhdr); GC_remove_protection(hbp, 1, (hhdr)->hb_descr == 0 /* Pointer-free? 
*/); @@ -235,8 +232,8 @@ ptr_t GC_reclaim_generic(struct hblk * hbp, hdr *hhdr, size_t sz, * If entirely empty blocks are to be completely deallocated, then * caller should perform that check. */ -void GC_reclaim_small_nonempty_block(struct hblk *hbp, - int report_if_found, signed_word *count) +STATIC void GC_reclaim_small_nonempty_block(struct hblk *hbp, + int report_if_found) { hdr *hhdr = HDR(hbp); size_t sz = hhdr -> hb_sz; @@ -263,7 +260,7 @@ void GC_reclaim_small_nonempty_block(struct hblk *hbp, * If report_if_found is TRUE, then process any block immediately, and * simply report free objects; do not actually reclaim them. */ -void GC_reclaim_block(struct hblk *hbp, word report_if_found) +STATIC void GC_reclaim_block(struct hblk *hbp, word report_if_found) { hdr * hhdr = HDR(hbp); size_t sz = hhdr -> hb_sz; /* size of objects in current block */ @@ -308,8 +305,7 @@ void GC_reclaim_block(struct hblk *hbp, word report_if_found) GC_atomic_in_use += sz * hhdr -> hb_n_marks; } if (report_if_found) { - GC_reclaim_small_nonempty_block(hbp, (int)report_if_found, - &GC_bytes_found); + GC_reclaim_small_nonempty_block(hbp, (int)report_if_found); } else if (empty) { GC_bytes_found += HBLKSIZE; GC_freehblk(hbp); @@ -340,7 +336,7 @@ struct Print_stats #ifdef USE_MARK_BYTES /* Return the number of set mark bits in the given header */ -int GC_n_set_marks(hdr *hhdr) +STATIC int GC_n_set_marks(hdr *hhdr) { int result = 0; int i; @@ -371,7 +367,7 @@ static int set_bits(word n) } /* Return the number of set mark bits in the given header */ -int GC_n_set_marks(hdr *hhdr) +STATIC int GC_n_set_marks(hdr *hhdr) { int result = 0; int i; @@ -398,8 +394,8 @@ int GC_n_set_marks(hdr *hhdr) #endif /* !USE_MARK_BYTES */ -/*ARGSUSED*/ -void GC_print_block_descr(struct hblk *h, word /* struct PrintStats */ raw_ps) +STATIC void GC_print_block_descr(struct hblk *h, + word /* struct PrintStats */ raw_ps) { hdr * hhdr = HDR(h); size_t bytes = hhdr -> hb_sz; @@ -422,7 +418,7 @@ void GC_print_block_descr(struct hblk *h, word /* struct PrintStats */ raw_ps) ps->number_of_blocks++; } -void GC_print_block_list() +void GC_print_block_list(void) { struct Print_stats pstats; @@ -463,7 +459,7 @@ void GC_print_free_list(int kind, size_t sz_in_granules) * since may otherwise end up with dangling "descriptor" pointers. * It may help for other pointer-containing objects. */ -void GC_clear_fl_links(void **flp) +STATIC void GC_clear_fl_links(void **flp) { void *next = *flp; @@ -551,7 +547,7 @@ void GC_continue_reclaim(size_t sz /* granules */, int kind) while ((hbp = *rlh) != 0) { hhdr = HDR(hbp); *rlh = hhdr -> hb_next; - GC_reclaim_small_nonempty_block(hbp, FALSE, &GC_bytes_found); + GC_reclaim_small_nonempty_block(hbp, FALSE); if (*flh != 0) break; } } @@ -574,11 +570,13 @@ GC_bool GC_reclaim_all(GC_stop_func stop_func, GC_bool ignore_old) struct obj_kind * ok; struct hblk ** rlp; struct hblk ** rlh; - CLOCK_TYPE start_time; - CLOCK_TYPE done_time; - - if (GC_print_stats == VERBOSE) +# ifndef SMALL_CONFIG + CLOCK_TYPE start_time; + CLOCK_TYPE done_time; + + if (GC_print_stats == VERBOSE) GET_TIME(start_time); +# endif for (kind = 0; kind < GC_n_kinds; kind++) { ok = &(GC_obj_kinds[kind]); @@ -596,15 +594,17 @@ GC_bool GC_reclaim_all(GC_stop_func stop_func, GC_bool ignore_old) /* It's likely we'll need it this time, too */ /* It's been touched recently, so this */ /* shouldn't trigger paging. 
*/ - GC_reclaim_small_nonempty_block(hbp, FALSE, &GC_bytes_found); + GC_reclaim_small_nonempty_block(hbp, FALSE); } } } } - if (GC_print_stats == VERBOSE) { +# ifndef SMALL_CONFIG + if (GC_print_stats == VERBOSE) { GET_TIME(done_time); GC_log_printf("Disposing of reclaim lists took %lu msecs\n", MS_TIME_DIFF(done_time,start_time)); - } + } +# endif return(TRUE); } diff --git a/stubborn.c b/stubborn.c index f4e09583..e3f664c7 100644 --- a/stubborn.c +++ b/stubborn.c @@ -22,36 +22,35 @@ /* MANUAL_VDB. But that imposes the additional constraint that */ /* written, but not yet GC_dirty()ed objects must be referenced */ /* by a stack. */ -void * GC_malloc_stubborn(size_t lb) +GC_API void * GC_malloc_stubborn(size_t lb) { return(GC_malloc(lb)); } -/*ARGSUSED*/ -void GC_end_stubborn_change(void *p) +GC_API void GC_end_stubborn_change(void *p) { GC_dirty(p); } /*ARGSUSED*/ -void GC_change_stubborn(void *p) +GC_API void GC_change_stubborn(void *p) { } #else /* !MANUAL_VDB */ -void * GC_malloc_stubborn(size_t lb) +GC_API void * GC_malloc_stubborn(size_t lb) { return(GC_malloc(lb)); } /*ARGSUSED*/ -void GC_end_stubborn_change(void *p) +GC_API void GC_end_stubborn_change(void *p) { } /*ARGSUSED*/ -void GC_change_stubborn(void *p) +GC_API void GC_change_stubborn(void *p) { } diff --git a/thread_local_alloc.c b/thread_local_alloc.c index d18cd88b..8372c1e6 100644 --- a/thread_local_alloc.c +++ b/thread_local_alloc.c @@ -131,10 +131,10 @@ void GC_destroy_thread_local(GC_tlfs p) #endif #if defined(GC_ASSERTIONS) && defined(GC_WIN32_THREADS) - extern char * GC_lookup_thread(int id); + void * /*GC_thread*/ GC_lookup_thread_inner(unsigned /*DWORD*/ thread_id); #endif -void * GC_malloc(size_t bytes) +GC_API void * GC_malloc(size_t bytes) { size_t granules = ROUNDED_UP_GRANULES(bytes); void *tsd; @@ -181,7 +181,7 @@ void * GC_malloc(size_t bytes) return result; } -void * GC_malloc_atomic(size_t bytes) +GC_API void * GC_malloc_atomic(size_t bytes) { size_t granules = ROUNDED_UP_GRANULES(bytes); void *tsd; @@ -206,8 +206,8 @@ void * GC_malloc_atomic(size_t bytes) # endif GC_ASSERT(GC_is_initialized); tiny_fl = ((GC_tlfs)tsd) -> ptrfree_freelists; - GC_FAST_MALLOC_GRANS(result, granules, tiny_fl, DIRECT_GRANULES, - PTRFREE, GC_core_malloc_atomic(bytes), 0/* no init */); + GC_FAST_MALLOC_GRANS(result, granules, tiny_fl, DIRECT_GRANULES, PTRFREE, + GC_core_malloc_atomic(bytes), (void)0 /* no init */); return result; } @@ -241,8 +241,8 @@ extern int GC_gcj_kind; /* incremental GC should be enabled before we fork a second thread. */ /* Unlike the other thread local allocation calls, we assume that the */ /* collector has been explicitly initialized. */ -void * GC_gcj_malloc(size_t bytes, - void * ptr_to_struct_containing_descr) +GC_API void * GC_gcj_malloc(size_t bytes, + void * ptr_to_struct_containing_descr) { if (GC_EXPECT(GC_incremental, 0)) { return GC_core_gcj_malloc(bytes, ptr_to_struct_containing_descr); @@ -325,9 +325,5 @@ void GC_mark_thread_local_fls_for(GC_tlfs p) } #endif /* GC_ASSERTIONS */ -# else /* !THREAD_LOCAL_ALLOC */ - -# define GC_destroy_thread_local(t) - -# endif /* !THREAD_LOCAL_ALLOC */ +# endif /* THREAD_LOCAL_ALLOC */ diff --git a/typd_mlc.c b/typd_mlc.c index ae529d31..fb8d8e80 100644 --- a/typd_mlc.c +++ b/typd_mlc.c @@ -106,7 +106,7 @@ static void GC_push_typed_structures_proc (void) /* starting index. */ /* Returns -1 on failure. */ /* Caller does not hold allocation lock. 
*/ -signed_word GC_add_ext_descriptor(GC_bitmap bm, word nbits) +STATIC signed_word GC_add_ext_descriptor(GC_bitmap bm, word nbits) { size_t nwords = divWORDSZ(nbits + WORDSZ-1); signed_word result; @@ -167,7 +167,7 @@ GC_descr GC_bm_table[WORDSZ/2]; /* The result is known to be short enough to fit into a bitmap */ /* descriptor. */ /* Descriptor is a GC_DS_LENGTH or GC_DS_BITMAP descriptor. */ -GC_descr GC_double_descr(GC_descr descriptor, word nwords) +STATIC GC_descr GC_double_descr(GC_descr descriptor, word nwords) { if ((descriptor & GC_DS_TAGS) == GC_DS_LENGTH) { descriptor = GC_bm_table[BYTES_TO_WORDS((word)descriptor)]; @@ -176,7 +176,9 @@ GC_descr GC_double_descr(GC_descr descriptor, word nwords) return(descriptor); } -complex_descriptor * GC_make_sequence_descriptor(); +STATIC complex_descriptor * +GC_make_sequence_descriptor(complex_descriptor *first, + complex_descriptor *second); /* Build a descriptor for an array with nelements elements, */ /* each of which can be described by a simple descriptor. */ @@ -197,10 +199,10 @@ complex_descriptor * GC_make_sequence_descriptor(); # define LEAF 1 # define SIMPLE 0 # define NO_MEM (-1) -int GC_make_array_descriptor(size_t nelements, size_t size, GC_descr descriptor, - GC_descr *simple_d, - complex_descriptor **complex_d, - struct LeafDescriptor * leaf) +STATIC int GC_make_array_descriptor(size_t nelements, size_t size, + GC_descr descriptor, GC_descr *simple_d, + complex_descriptor **complex_d, + struct LeafDescriptor * leaf) { # define OPT_THRESHOLD 50 /* For larger arrays, we try to combine descriptors of adjacent */ @@ -293,8 +295,9 @@ int GC_make_array_descriptor(size_t nelements, size_t size, GC_descr descriptor, } } -complex_descriptor * GC_make_sequence_descriptor(complex_descriptor *first, - complex_descriptor *second) +STATIC complex_descriptor * +GC_make_sequence_descriptor(complex_descriptor *first, + complex_descriptor *second) { struct SequenceDescriptor * result = (struct SequenceDescriptor *) @@ -331,20 +334,18 @@ ptr_t * GC_eobjfreelist; ptr_t * GC_arobjfreelist; -mse * GC_typed_mark_proc(word * addr, mse * mark_stack_ptr, - mse * mark_stack_limit, word env); +STATIC mse * GC_typed_mark_proc(word * addr, mse * mark_stack_ptr, + mse * mark_stack_limit, word env); -mse * GC_array_mark_proc(word * addr, mse * mark_stack_ptr, - mse * mark_stack_limit, word env); +STATIC mse * GC_array_mark_proc(word * addr, mse * mark_stack_ptr, + mse * mark_stack_limit, word env); /* Caller does not hold allocation lock. */ -void GC_init_explicit_typing(void) +STATIC void GC_init_explicit_typing(void) { register int i; DCL_LOCK_STATE; - - /* Ignore gcc "no effect" warning. 
*/ GC_STATIC_ASSERT(sizeof(struct LeafDescriptor) % sizeof(word) == 0); LOCK(); if (GC_explicit_typing_initialized) { @@ -375,8 +376,8 @@ void GC_init_explicit_typing(void) UNLOCK(); } -mse * GC_typed_mark_proc(word * addr, mse * mark_stack_ptr, - mse * mark_stack_limit, word env) +STATIC mse * GC_typed_mark_proc(word * addr, mse * mark_stack_ptr, + mse * mark_stack_limit, word env) { word bm = GC_ext_descriptors[env].ed_bitmap; word * current_p = addr; @@ -392,7 +393,7 @@ mse * GC_typed_mark_proc(word * addr, mse * mark_stack_ptr, FIXUP_POINTER(current); if ((ptr_t)current >= least_ha && (ptr_t)current <= greatest_ha) { PUSH_CONTENTS((ptr_t)current, mark_stack_ptr, - mark_stack_limit, current_p, exit1); + mark_stack_limit, (ptr_t)current_p, exit1); } } } @@ -415,7 +416,7 @@ mse * GC_typed_mark_proc(word * addr, mse * mark_stack_ptr, /* Return the size of the object described by d. It would be faster to */ /* store this directly, or to compute it as part of */ /* GC_push_complex_descriptor, but hopefully it doesn't matter. */ -word GC_descr_obj_size(complex_descriptor *d) +STATIC word GC_descr_obj_size(complex_descriptor *d) { switch(d -> TAG) { case LEAF_TAG: @@ -434,8 +435,8 @@ word GC_descr_obj_size(complex_descriptor *d) /* Push descriptors for the object at addr with complex descriptor d */ /* onto the mark stack. Return 0 if the mark stack overflowed. */ -mse * GC_push_complex_descriptor(word *addr, complex_descriptor *d, - mse *msp, mse *msl) +STATIC mse * GC_push_complex_descriptor(word *addr, complex_descriptor *d, + mse *msp, mse *msl) { register ptr_t current = (ptr_t) addr; register word nelements; @@ -490,8 +491,8 @@ mse * GC_push_complex_descriptor(word *addr, complex_descriptor *d, } /*ARGSUSED*/ -mse * GC_array_mark_proc(word * addr, mse * mark_stack_ptr, - mse * mark_stack_limit, word env) +STATIC mse * GC_array_mark_proc(word * addr, mse * mark_stack_ptr, + mse * mark_stack_limit, word env) { hdr * hhdr = HDR(addr); size_t sz = hhdr -> hb_sz; @@ -528,7 +529,7 @@ mse * GC_array_mark_proc(word * addr, mse * mark_stack_ptr, return new_mark_stack_ptr; } -GC_descr GC_make_descriptor(GC_bitmap bm, size_t len) +GC_API GC_descr GC_make_descriptor(GC_bitmap bm, size_t len) { signed_word last_set_bit = len - 1; GC_descr result; @@ -575,7 +576,7 @@ GC_descr GC_make_descriptor(GC_bitmap bm, size_t len) } } -ptr_t GC_clear_stack(); +void * GC_clear_stack(void *); #define GENERAL_MALLOC(lb,k) \ (void *)GC_clear_stack(GC_generic_malloc((word)lb, k)) @@ -583,7 +584,7 @@ ptr_t GC_clear_stack(); #define GENERAL_MALLOC_IOP(lb,k) \ (void *)GC_clear_stack(GC_generic_malloc_ignore_off_page(lb, k)) -void * GC_malloc_explicitly_typed(size_t lb, GC_descr d) +GC_API void * GC_malloc_explicitly_typed(size_t lb, GC_descr d) { ptr_t op; ptr_t * opp; @@ -616,7 +617,7 @@ void * GC_malloc_explicitly_typed(size_t lb, GC_descr d) return((void *) op); } -void * GC_malloc_explicitly_typed_ignore_off_page(size_t lb, GC_descr d) +GC_API void * GC_malloc_explicitly_typed_ignore_off_page(size_t lb, GC_descr d) { ptr_t op; ptr_t * opp; @@ -648,7 +649,7 @@ DCL_LOCK_STATE; return((void *) op); } -void * GC_calloc_explicitly_typed(size_t n, size_t lb, GC_descr d) +GC_API void * GC_calloc_explicitly_typed(size_t n, size_t lb, GC_descr d) { ptr_t op; ptr_t * opp; diff --git a/win32_threads.c b/win32_threads.c index 272af63a..f18f934e 100644 --- a/win32_threads.c +++ b/win32_threads.c @@ -53,8 +53,8 @@ # define DEBUG_WIN32_PTHREADS 0 # endif - void * GC_pthread_start(void * arg); - void GC_thread_exit_proc(void *arg); 
+ STATIC void * GC_pthread_start(void * arg); + STATIC void GC_thread_exit_proc(void *arg); # include @@ -156,7 +156,7 @@ void GC_init_parallel(void); } #endif -DWORD GC_main_thread = 0; +STATIC DWORD GC_main_thread = 0; struct GC_Thread_Rep { union { @@ -260,7 +260,7 @@ GC_bool GC_started_thread_while_stopped(void) /* Add a thread to GC_threads. We assume it wasn't already there. */ /* Caller holds allocation lock. */ /* Unlike the pthreads version, the id field is set by the caller. */ -GC_thread GC_new_thread(DWORD id) +STATIC GC_thread GC_new_thread(DWORD id) { word hv = ((word)id) % THREAD_TABLE_SZ; GC_thread result; @@ -339,7 +339,7 @@ static GC_thread GC_register_my_thread_inner(struct GC_stack_base *sb, /* might be OK. But this hasn't been tested across all win32 */ /* variants. */ /* cast away volatile qualifier */ - for (i = 0; InterlockedExchange((IE_t)&dll_thread_table[i].in_use,1) != 0; + for (i = 0; InterlockedExchange((void*)&dll_thread_table[i].in_use,1) != 0; i++) { /* Compare-and-swap would make this cleaner, but that's not */ /* supported before Windows 98 and NT 4.0. In Windows 2000, */ @@ -401,7 +401,7 @@ static GC_thread GC_register_my_thread_inner(struct GC_stack_base *sb, if (GC_win32_dll_threads) { if (GC_please_stop) { AO_store(&GC_attached_thread, TRUE); - AO_nop_full(); // Later updates must become visible after this. + AO_nop_full(); /* Later updates must become visible after this. */ } /* We'd like to wait here, but can't, since waiting in DllMain */ /* provokes deadlocks. */ @@ -421,7 +421,7 @@ static GC_thread GC_register_my_thread_inner(struct GC_stack_base *sb, #ifdef __GNUC__ __inline__ #endif -LONG GC_get_max_thread_index() +STATIC LONG GC_get_max_thread_index(void) { LONG my_max = GC_max_thread_index; @@ -486,7 +486,7 @@ static GC_thread GC_lookup_thread(DWORD thread_id) /* GC_win32_dll_threads is set. */ /* If GC_win32_dll_threads is set it should be called from the */ /* thread being deleted. */ -void GC_delete_gc_thread(GC_vthread gc_id) +STATIC void GC_delete_gc_thread(GC_vthread gc_id) { CloseHandle(gc_id->handle); if (GC_win32_dll_threads) { @@ -531,7 +531,7 @@ void GC_delete_gc_thread(GC_vthread gc_id) /* GC_win32_dll_threads is set. */ /* If GC_win32_dll_threads is set it should be called from the */ /* thread being deleted. 
*/ -void GC_delete_thread(DWORD id) +STATIC void GC_delete_thread(DWORD id) { if (GC_win32_dll_threads) { GC_thread t = GC_lookup_thread_inner(id); @@ -770,10 +770,10 @@ void GC_start_world(void) { DWORD thread_id = GetCurrentThreadId(); int i; - LONG my_max = GC_get_max_thread_index(); GC_ASSERT(I_HOLD_LOCK()); if (GC_win32_dll_threads) { + LONG my_max = GC_get_max_thread_index(); for (i = 0; i <= my_max; i++) { GC_thread t = (GC_thread)(dll_thread_table + i); if (t -> stack_base != 0 && t -> suspended @@ -822,7 +822,7 @@ void GC_start_world(void) } # endif -void GC_push_stack_for(GC_thread thread) +STATIC void GC_push_stack_for(GC_thread thread) { int dummy; ptr_t sp, stack_min; @@ -901,7 +901,9 @@ void GC_push_all_stacks(void) { DWORD me = GetCurrentThreadId(); GC_bool found_me = FALSE; - size_t nthreads = 0; +# ifndef SMALL_CONFIG + size_t nthreads = 0; +# endif if (GC_win32_dll_threads) { int i; @@ -910,7 +912,9 @@ void GC_push_all_stacks(void) for (i = 0; i <= my_max; i++) { GC_thread t = (GC_thread)(dll_thread_table + i); if (t -> in_use) { - ++nthreads; +# ifndef SMALL_CONFIG + ++nthreads; +# endif GC_push_stack_for(t); if (t -> id == me) found_me = TRUE; } @@ -921,20 +925,24 @@ void GC_push_all_stacks(void) for (i = 0; i < THREAD_TABLE_SZ; i++) { for (t = GC_threads[i]; t != 0; t = t -> next) { - ++nthreads; +# ifndef SMALL_CONFIG + ++nthreads; +# endif if (!KNOWN_FINISHED(t)) GC_push_stack_for(t); if (t -> id == me) found_me = TRUE; } } } - if (GC_print_stats == VERBOSE) { - GC_log_printf("Pushed %d thread stacks ", nthreads); - if (GC_win32_dll_threads) { +# ifndef SMALL_CONFIG + if (GC_print_stats == VERBOSE) { + GC_log_printf("Pushed %d thread stacks ", nthreads); + if (GC_win32_dll_threads) { GC_log_printf("based on DllMain thread tracking\n"); - } else { + } else { GC_log_printf("\n"); + } } - } +# endif if (!found_me && !GC_in_thread_creation) ABORT("Collecting from unknown thread."); } @@ -987,9 +995,7 @@ typedef struct { LPVOID param; } thread_args; -static DWORD WINAPI thread_start(LPVOID arg); - -void * GC_win32_start_inner(struct GC_stack_base *sb, LPVOID arg) +STATIC void * GC_win32_start_inner(struct GC_stack_base *sb, LPVOID arg) { void * ret; thread_args *args = (thread_args *)arg; @@ -1033,7 +1039,7 @@ GC_API HANDLE WINAPI GC_CreateThread( DWORD dwStackSize, LPTHREAD_START_ROUTINE lpStartAddress, LPVOID lpParameter, DWORD dwCreationFlags, LPDWORD lpThreadId ) { - HANDLE thread_h = NULL; + HANDLE thread_h; thread_args *args; @@ -1069,18 +1075,18 @@ GC_API HANDLE WINAPI GC_CreateThread( } } -void WINAPI GC_ExitThread(DWORD dwExitCode) +GC_API void WINAPI GC_ExitThread(DWORD dwExitCode) { GC_unregister_my_thread(); ExitThread(dwExitCode); } -uintptr_t GC_beginthreadex( +GC_API GC_uintptr_t GC_beginthreadex( void *security, unsigned stack_size, unsigned ( __stdcall *start_address )( void * ), void *arglist, unsigned initflag, unsigned *thrdaddr) { - uintptr_t thread_h; + GC_uintptr_t thread_h; thread_args *args; @@ -1099,7 +1105,7 @@ uintptr_t GC_beginthreadex( /* Handed off to and deallocated by child thread. */ if (0 == args) { SetLastError(ERROR_NOT_ENOUGH_MEMORY); - return (uintptr_t)(-1L); + return (GC_uintptr_t)(-1L); } /* set up thread arguments */ @@ -1115,7 +1121,7 @@ uintptr_t GC_beginthreadex( } } -void GC_endthreadex(unsigned retval) +GC_API void GC_endthreadex(unsigned retval) { GC_unregister_my_thread(); _endthreadex(retval); @@ -1178,7 +1184,9 @@ DWORD WINAPI main_thread_start(LPVOID arg) /* Called by GC_init() - we hold the allocation lock. 
*/ void GC_thr_init(void) { struct GC_stack_base sb; - int sb_result; +# ifdef GC_ASSERTIONS + int sb_result; +# endif GC_ASSERT(I_HOLD_LOCK()); if (GC_thr_initialized) return; @@ -1186,7 +1194,10 @@ void GC_thr_init(void) { GC_thr_initialized = TRUE; /* Add the initial thread, so we can stop it. */ - sb_result = GC_get_stack_base(&sb); +# ifdef GC_ASSERTIONS + sb_result = +# endif + GC_get_stack_base(&sb); GC_ASSERT(sb_result == GC_SUCCESS); GC_register_my_thread(&sb); } @@ -1219,16 +1230,16 @@ int GC_pthread_join(pthread_t pthread_id, void **retval) { /* FIXME: It would be better if this worked more like */ /* pthread_support.c. */ - #ifndef GC_WIN32_PTHREADS +# ifndef GC_WIN32_PTHREADS while ((joinee = GC_lookup_pthread(pthread_id)) == 0) Sleep(10); - #endif +# endif result = pthread_join(pthread_id, retval); - #ifdef GC_WIN32_PTHREADS +# ifdef GC_WIN32_PTHREADS /* win32_pthreads id are unique */ joinee = GC_lookup_pthread(pthread_id); - #endif +# endif if (!GC_win32_dll_threads) { LOCK(); @@ -1296,7 +1307,7 @@ GC_pthread_create(pthread_t *new_thread, return(result); } -void * GC_pthread_start_inner(struct GC_stack_base *sb, void * arg) +STATIC void * GC_pthread_start_inner(struct GC_stack_base *sb, void * arg) { struct start_info * si = arg; void * result; @@ -1353,12 +1364,12 @@ void * GC_pthread_start_inner(struct GC_stack_base *sb, void * arg) return(result); } -void * GC_pthread_start(void * arg) +STATIC void * GC_pthread_start(void * arg) { return GC_call_with_stack_base(GC_pthread_start_inner, arg); } -void GC_thread_exit_proc(void *arg) +STATIC void GC_thread_exit_proc(void *arg) { GC_thread me = (GC_thread)arg; int i; @@ -1428,11 +1439,14 @@ int GC_pthread_detach(pthread_t thread) * can do.) */ #ifdef GC_DLL -GC_API BOOL WINAPI DllMain(HINSTANCE inst, ULONG reason, LPVOID reserved) +/*ARGSUSED*/ +BOOL WINAPI DllMain(HINSTANCE inst, ULONG reason, LPVOID reserved) { struct GC_stack_base sb; DWORD thread_id; - int sb_result; +# ifdef GC_ASSERTIONS + int sb_result; +# endif static int entry_count = 0; if (parallel_initialized && !GC_win32_dll_threads) return TRUE; @@ -1446,7 +1460,10 @@ GC_API BOOL WINAPI DllMain(HINSTANCE inst, ULONG reason, LPVOID reserved) thread_id = GetCurrentThreadId(); if (parallel_initialized && GC_main_thread != thread_id) { /* Don't lock here. */ - sb_result = GC_get_stack_base(&sb); +# ifdef GC_ASSERTIONS + sb_result = +# endif + GC_get_stack_base(&sb); GC_ASSERT(sb_result == GC_SUCCESS); # ifdef THREAD_LOCAL_ALLOC ABORT("Cannot initialize thread local cache from DllMain"); @@ -1549,6 +1566,10 @@ void GC_mark_thread_local_free_lists(void) } #if defined(GC_ASSERTIONS) + void GC_check_tls_for(GC_tlfs p); +# if defined(USE_CUSTOM_SPECIFIC) + void GC_check_tsd_marks(tsd *key); +# endif /* Check that all thread-local free-lists are completely marked. */ /* also check that thread-specific-data structures are marked. */ void GC_check_tls(void) {