From: Andres Freund
Date: Wed, 23 Nov 2016 08:23:42 +0000 (-0800)
Subject: Make simplehash.h grow hashtable in additional cases.
X-Git-Tag: REL_10_BETA1~766
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=d4c62a6b623d6eef88218158e9fa3cf974c6c7e5;p=postgresql

Make simplehash.h grow hashtable in additional cases.

Increase the size when either the distance between the actual and optimal
slot grows too large, or when too many subsequent entries would have to be
moved.

This addresses reports that simplehash performed, sometimes considerably,
worse than dynahash. The reason turned out to be that insertions into the
hashtable were, due to the use of parallel query, in effect done from
another hashtable, in hash-value order. If the target hashtable, due to
mis-estimation, was sized a lot smaller than the source table(s), that led
to very imbalanced tables: a lot of entries from many close-by buckets in
the source tables were inserted into a single, wider, bucket in the target
table. As the growth factor was computed based solely on the fillfactor,
the performance of the table decreased further and further.

b81b5a96f424531b was an attempt to address this problem for hash aggregates
(but not for bitmap scans), but it turns out that the current method of
mixing hash values often leaves neighboring hash values close to each
other, just in a different value range. It might be worth revisiting that
independently of the performance issues addressed in this patch.

To address the problem, resize the table in two additional cases: firstly,
when the optimal position for an entry would be far from its actual
position; secondly, when many entries would have to be moved to make space
for the new entry (while satisfying the robin hood property).

Thanks to the additional resizing thresholds it seems possible, and testing
so far confirms, that a higher fillfactor doesn't hurt performance and
saves a bit of memory. It seems better to increase it now, before a release
containing any of this code, than to wonder about it in some later release.

The various boundaries weren't determined in a particularly scientific
manner; they might need some fine-tuning. In all my tests the new code,
even with parallelism, now performs at least as well as the old code, and
in several scenarios significantly better.

Reported-By: Dilip Kumar, Robert Haas, Kuntal Ghosh
Discussion:
    https://postgr.es/m/CAFiTN-vagvuAydKG9VnWcoK=ADAhxmOa4ZTrmNsViBBooTnriQ@mail.gmail.com
    https://postgr.es/m/CAGz5QC+=fNTYgzMLTBUNeKt6uaWZFXJbkB5+7oWm-n9DwVxcLA@mail.gmail.com
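To make the two growth triggers concrete before the patch itself, here is a
toy, self-contained sketch of a robin-hood insert loop with both limits. It
is only an illustration, not the simplehash.h code: every name in it
(SKETCH_SIZE, sketch_insert(), and so on) is invented here, returning false
stands in for SH_INSERT()'s actual reaction of setting tb->grow_threshold
to 0 and restarting against a grown table, and duplicate-key handling is
left out.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_SIZE				64	/* toy table size, power of two */
#define SKETCH_GROW_MAX_DIB		25	/* max distance from optimal bucket */
#define SKETCH_GROW_MAX_MOVE	150 /* max entries to shift per insert */

typedef struct SketchEntry
{
	bool		used;
	uint32_t	hash;
} SketchEntry;

static SketchEntry sketch_data[SKETCH_SIZE];

static inline uint32_t
sketch_next(uint32_t elem)
{
	return (elem + 1) & (SKETCH_SIZE - 1);
}

static inline uint32_t
sketch_prev(uint32_t elem)
{
	return (elem - 1) & (SKETCH_SIZE - 1);
}

/* distance of bucket curelem from the hash's optimal ("home") bucket */
static inline uint32_t
sketch_distance(uint32_t hash, uint32_t curelem)
{
	return (curelem - (hash & (SKETCH_SIZE - 1))) & (SKETCH_SIZE - 1);
}

/* insert one hash; false means "grow the table and retry" */
static bool
sketch_insert(uint32_t hash)
{
	uint32_t	curelem = hash & (SKETCH_SIZE - 1);
	uint32_t	insertdist = 0;

	for (;;)
	{
		SketchEntry *entry = &sketch_data[curelem];

		if (!entry->used)
		{
			entry->used = true;
			entry->hash = hash;
			return true;
		}

		if (sketch_distance(entry->hash, curelem) < insertdist)
		{
			/*
			 * Robin hood: the incumbent sits closer to its home bucket than
			 * we do to ours, so claim this bucket and shift the run forward.
			 * Second trigger: too many entries would have to be moved.
			 */
			uint32_t	emptyelem = curelem;
			uint32_t	emptydist = 0;
			uint32_t	moveelem;

			while (sketch_data[emptyelem].used)
			{
				emptyelem = sketch_next(emptyelem);
				if (++emptydist > SKETCH_GROW_MAX_MOVE)
					return false;
			}

			for (moveelem = emptyelem; moveelem != curelem;
				 moveelem = sketch_prev(moveelem))
				sketch_data[moveelem] = sketch_data[sketch_prev(moveelem)];
			entry->used = true;
			entry->hash = hash;
			return true;
		}

		/* first trigger: we have strayed too far from the optimal bucket */
		curelem = sketch_next(curelem);
		if (++insertdist > SKETCH_GROW_MAX_DIB)
			return false;
	}
}

int
main(void)
{
	/* every key hashes to home bucket 3, building one long run */
	for (uint32_t i = 0;; i++)
	{
		if (!sketch_insert(3 + i * SKETCH_SIZE))
		{
			printf("grow requested while inserting entry %u\n", (unsigned) i);
			return 0;
		}
	}
}

Compiled as C99 and run, this prints that growth is requested for entry 26:
at that point the probe distance would exceed SKETCH_GROW_MAX_DIB, which is
the first trigger; the second trigger fires analogously once the run that
would have to be shifted exceeds SKETCH_GROW_MAX_MOVE.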
---

diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h
index 74f768249a..6c6c3ee0d0 100644
--- a/src/include/lib/simplehash.h
+++ b/src/include/lib/simplehash.h
@@ -157,13 +157,24 @@ SH_SCOPE void SH_STAT(SH_TYPE *tb);
 
 #include "utils/memutils.h"
 
-/* conservative fillfactor for a robin hood table, might want to adjust */
-#define SH_FILLFACTOR (0.8)
-/* increase fillfactor if we otherwise would error out */
-#define SH_MAX_FILLFACTOR (0.95)
 /* max data array size,we allow up to PG_UINT32_MAX buckets, including 0 */
 #define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)
 
+/* normal fillfactor, unless already close to maximum */
+#ifndef SH_FILLFACTOR
+#define SH_FILLFACTOR (0.9)
+#endif
+/* increase fillfactor if we otherwise would error out */
+#define SH_MAX_FILLFACTOR (0.98)
+/* grow if actual and optimal location bigger than */
+#ifndef SH_GROW_MAX_DIB
+#define SH_GROW_MAX_DIB 25
+#endif
+/* grow if more than elements to move when inserting */
+#ifndef SH_GROW_MAX_MOVE
+#define SH_GROW_MAX_MOVE 150
+#endif
+
 #ifdef SH_STORE_HASH
 #define SH_COMPARE_KEYS(tb, ahash, akey, b) (ahash == SH_GET_HASH(tb, b) && SH_EQUAL(tb, b->SH_KEY, akey))
 #else
@@ -466,12 +477,18 @@ SH_INSERT(SH_TYPE *tb, SH_KEY_TYPE key, bool *found)
 	uint32		startelem;
 	uint32		curelem;
 	SH_ELEMENT_TYPE *data;
-	uint32		insertdist = 0;
+	uint32		insertdist;
+
+restart:
+	insertdist = 0;
 
 	/*
 	 * We do the grow check even if the key is actually present, to avoid
 	 * doing the check inside the loop. This also lets us avoid having to
 	 * re-find our position in the hashtable after resizing.
+	 *
+	 * Note that this also reached when resizing the table due to
+	 * SH_GROW_MAX_DIB / SH_GROW_MAX_MOVE.
 	 */
 	if (unlikely(tb->members >= tb->grow_threshold))
 	{
@@ -536,6 +553,7 @@ SH_INSERT(SH_TYPE *tb, SH_KEY_TYPE key, bool *found)
 			SH_ELEMENT_TYPE *lastentry = entry;
 			uint32		emptyelem = curelem;
 			uint32		moveelem;
+			int32		emptydist = 0;
 
 			/* find next empty bucket */
 			while (true)
@@ -550,6 +568,19 @@ SH_INSERT(SH_TYPE *tb, SH_KEY_TYPE key, bool *found)
 					lastentry = emptyentry;
 					break;
 				}
+
+				/*
+				 * To avoid negative consequences from overly imbalanced
+				 * hashtables, grow the hashtable if collisions would require
+				 * us to move a lot of entries. The most likely cause of such
+				 * imbalance is filling a (currently) small table, from a
+				 * currently big one, in hash-table order.
+				 */
+				if (++emptydist > SH_GROW_MAX_MOVE)
+				{
+					tb->grow_threshold = 0;
+					goto restart;
+				}
 			}
 
 			/* shift forward, starting at last occupied element */
@@ -585,6 +616,18 @@ SH_INSERT(SH_TYPE *tb, SH_KEY_TYPE key, bool *found)
 
 		curelem = SH_NEXT(tb, curelem, startelem);
 		insertdist++;
+
+		/*
+		 * To avoid negative consequences from overly imbalanced hashtables,
+		 * grow the hashtable if collisions lead to large runs. The most
+		 * likely cause of such imbalance is filling a (currently) small
+		 * table, from a currently big one, in hash-table order.
+		 */
+		if (insertdist > SH_GROW_MAX_DIB)
+		{
+			tb->grow_threshold = 0;
+			goto restart;
+		}
 	}
 }
 
@@ -878,6 +921,8 @@ SH_STAT(SH_TYPE *tb)
 #undef SH_MAKE_NAME_
 #undef SH_FILLFACTOR
 #undef SH_MAX_FILLFACTOR
+#undef SH_GROW_MAX_DIB
+#undef SH_GROW_MAX_MOVE
 #undef SH_MAX_SIZE
 
 /* types */
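Since SH_FILLFACTOR, SH_GROW_MAX_DIB and SH_GROW_MAX_MOVE are now wrapped in
#ifndef (and #undef'ed again at the bottom of the header), an instantiation
can define them before including simplehash.h to override the defaults. A
hypothetical instantiation might look like the following; the element type,
hash function, prefix and override values are invented for the example, and
the exact set of required SH_* parameter macros and required struct members
should be checked against the header's own usage comments.

/*
 * Hypothetical simplehash instantiation; MyEntry, my_hash_uint32 and the
 * myhash prefix are made up for this sketch.
 */
#include "postgres.h"

typedef struct MyEntry
{
	uint32		key;			/* lookup key */
	char		status;			/* simplehash bookkeeping member */
	int64		count;			/* user payload */
} MyEntry;

/* cheap 32-bit mixer, purely illustrative */
static inline uint32
my_hash_uint32(uint32 x)
{
	x ^= x >> 16;
	x *= 0x85ebca6b;
	x ^= x >> 13;
	x *= 0xc2b2ae35;
	x ^= x >> 16;
	return x;
}

/* tune the growth knobs before including the header (values shown are the defaults) */
#define SH_GROW_MAX_DIB 25
#define SH_GROW_MAX_MOVE 150

#define SH_PREFIX myhash
#define SH_ELEMENT_TYPE MyEntry
#define SH_KEY_TYPE uint32
#define SH_KEY key
#define SH_HASH_KEY(tb, key) my_hash_uint32(key)
#define SH_EQUAL(tb, a, b) ((a) == (b))
#define SH_SCOPE static inline
#define SH_DECLARE
#define SH_DEFINE
#include "lib/simplehash.h"

Leaving the three macros undefined simply uses the new defaults above (0.9
fillfactor, 25 and 150); the generated functions are named after SH_PREFIX
(myhash_create(), myhash_insert(), and so on).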