From: Thomas Munro
Date: Sun, 10 Jun 2018 08:30:25 +0000 (+1200)
Subject: Limit Parallel Hash's bucket array to MaxAllocSize.
X-Git-Tag: REL_11_BETA2~86
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=86a2218eb00eb6f97898945967c5f9c95c72b4c6;p=postgresql

Limit Parallel Hash's bucket array to MaxAllocSize.

Make sure that we don't exceed MaxAllocSize when increasing the number of
buckets.  Perhaps later we'll remove that limit and use DSA_ALLOC_HUGE, but
for now just prevent further increases like the non-parallel code.  This
change avoids the error from bug report #15225.

Author: Thomas Munro
Reviewed-By: Tom Lane
Reported-by: Frits Jalvingh
Discussion: https://postgr.es/m/152802081668.26724.16985037679312485972%40wrigleys.postgresql.org
---

diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 4f069d17fd..6ffaa751f2 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -2818,9 +2818,12 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size,
 		{
 			hashtable->batches[0].shared->ntuples += hashtable->batches[0].ntuples;
 			hashtable->batches[0].ntuples = 0;
+			/* Guard against integer overflow and alloc size overflow */
 			if (hashtable->batches[0].shared->ntuples + 1 >
 				hashtable->nbuckets * NTUP_PER_BUCKET &&
-				hashtable->nbuckets < (INT_MAX / 2))
+				hashtable->nbuckets < (INT_MAX / 2) &&
+				hashtable->nbuckets * 2 <=
+				MaxAllocSize / sizeof(dsa_pointer_atomic))
 			{
 				pstate->growth = PHJ_GROWTH_NEED_MORE_BUCKETS;
 				LWLockRelease(&pstate->lock);
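
To see the effect of the new guard in isolation, here is a minimal
standalone sketch; it is not code from the tree.  MaxAllocSize mirrors the
((Size) 0x3fffffff) definition in src/include/utils/memutils.h,
dsa_pointer_atomic is assumed to be 8 bytes (as on builds with 64-bit
atomics), and bucket_growth_allowed is a hypothetical helper condensing the
patched condition.

    /*
     * Standalone sketch of the size cap added by this commit.
     * Not part of the commit; names below are illustrative.
     */
    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define MaxAllocSize ((size_t) 0x3fffffff)      /* 1 gigabyte - 1 */
    #define SIZEOF_DSA_POINTER_ATOMIC 8             /* assumed: 64-bit atomics */

    /* Hypothetical helper: may the bucket array double from nbuckets? */
    static bool
    bucket_growth_allowed(int nbuckets)
    {
        /* Same two new checks as the patched code: no int overflow from
         * doubling, and the doubled array still fits in MaxAllocSize. */
        return nbuckets < (INT_MAX / 2) &&
            (size_t) nbuckets * 2 <= MaxAllocSize / SIZEOF_DSA_POINTER_ATOMIC;
    }

    int
    main(void)
    {
        /* 2^25 buckets: doubling needs a 512 MB array, under the cap */
        printf("%d -> %s\n", 1 << 25,
               bucket_growth_allowed(1 << 25) ? "grow" : "stop");

        /* 2^26 buckets: doubling would need a 1 GB array, so it is refused
         * rather than failing later inside the allocator */
        printf("%d -> %s\n", 1 << 26,
               bucket_growth_allowed(1 << 26) ? "grow" : "stop");

        return 0;
    }

With an 8-byte dsa_pointer_atomic the cap works out to 2^26 buckets: the
second call prints "stop", which is the growth that previously tripped the
"invalid memory alloc request size" error reported in bug #15225.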