granicus.if.org Git - postgresql/commitdiff
Limit Parallel Hash's bucket array to MaxAllocSize.
author    Thomas Munro <tmunro@postgresql.org>
          Sun, 10 Jun 2018 08:30:25 +0000 (20:30 +1200)
committer Thomas Munro <tmunro@postgresql.org>
          Sun, 10 Jun 2018 08:30:25 +0000 (20:30 +1200)
Make sure that we don't exceed MaxAllocSize when increasing the number of
buckets.  Perhaps later we'll remove that limit and use DSA_ALLOC_HUGE, but
for now just prevent further increases, as the non-parallel code does.  This
change avoids the error reported in bug #15225.

Author: Thomas Munro
Reviewed-By: Tom Lane
Reported-by: Frits Jalvingh
Discussion: https://postgr.es/m/152802081668.26724.16985037679312485972%40wrigleys.postgresql.org

src/backend/executor/nodeHash.c

index 4f069d17fd8a6c71d2fa5a5118411ded15030295..6ffaa751f23fa7aa286abde5b41d9b3649a3a06c 100644
@@ -2818,9 +2818,12 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size,
                {
                        hashtable->batches[0].shared->ntuples += hashtable->batches[0].ntuples;
                        hashtable->batches[0].ntuples = 0;
+                       /* Guard against integer overflow and alloc size overflow */
                        if (hashtable->batches[0].shared->ntuples + 1 >
                                hashtable->nbuckets * NTUP_PER_BUCKET &&
-                               hashtable->nbuckets < (INT_MAX / 2))
+                               hashtable->nbuckets < (INT_MAX / 2) &&
+                               hashtable->nbuckets * 2 <=
+                               MaxAllocSize / sizeof(dsa_pointer_atomic))
                        {
                                pstate->growth = PHJ_GROWTH_NEED_MORE_BUCKETS;
                                LWLockRelease(&pstate->lock);