Tom Lane [Thu, 21 Dec 2017 17:57:41 +0000 (12:57 -0500)]
Rearrange execution of PARAM_EXTERN Params for plpgsql's benefit.
This patch does three interrelated things:
* Create a new expression execution step type EEOP_PARAM_CALLBACK
and add the infrastructure needed for add-on modules to generate that.
As discussed, the best control mechanism for that seems to be to add
another hook function to ParamListInfo, which will be called by
ExecInitExpr if it's supplied and a PARAM_EXTERN Param is found.
For stand-alone expressions, we add a new entry point to allow the
ParamListInfo to be specified directly, since it can't be retrieved
from the parent plan node's EState.
* Redesign the API for the ParamListInfo paramFetch hook so that the
ParamExternData array can be entirely virtual. This also lets us get rid
of ParamListInfo.paramMask, instead leaving it to the paramFetch hook to
decide which param IDs should be accessible or not. plpgsql_param_fetch
was already doing the identical masking check, so having callers do it too
seemed redundant. While I was at it, I added a "speculative" flag to
paramFetch that the planner can specify as TRUE to avoid unwanted failures.
This solves an ancient problem for plpgsql that it couldn't provide values
of non-DTYPE_VAR variables to the planner for fear of triggering premature
"record not assigned yet" or "field not found" errors during planning.
* Rework plpgsql to get rid of the need for "unshared" parameter lists,
by dint of turning the single ParamListInfo per estate into a nearly
read-only data structure that doesn't instantiate any per-variable data.
Instead, the paramFetch hook controls access to per-variable data and can
make the right decisions on the fly, replacing the cases that we used to
need multiple ParamListInfos for. This might perhaps have been a
performance loss on its own, but by using a paramCompile hook we can
bypass plpgsql_param_fetch entirely during normal query execution.
(It's now only called when, eg, we copy the ParamListInfo into a cursor
portal. copyParamList() or SerializeParamList() effectively instantiate
the virtual parameter array as a simple physical array without a
paramFetch hook, which is what we want in those cases.) This allows
reverting most of commit 6c82d8d1f, though I kept the cosmetic
code-consolidation aspects of that (eg the assign_simple_var function).
Performance testing shows this to be at worst a break-even change,
and it can provide wins ranging up to 20% in test cases involving
accesses to fields of "record" variables. The fact that values of
such variables can now be exposed to the planner might produce wins
in some situations, too, but I've not pursued that angle.
In passing, remove the "parent" pointer from the arguments to
ExecInitExprRec and related functions, instead storing that pointer in a
transient field in ExprState. The ParamListInfo pointer for a stand-alone
expression is handled the same way; we'd otherwise have had to add
yet another recursively-passed-down argument in expression compilation.
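For add-on authors, the reshaped hook points look roughly like this (a
sketch of the params.h declarations from memory; verify the exact
signatures against the header before relying on them):

    /* Fetch a parameter value on demand; may return a virtual entry. */
    typedef ParamExternData *(*ParamFetchHook) (ParamListInfo params,
                                                int paramid, bool speculative,
                                                ParamExternData *workspace);

    /* Emit a custom EEOP_PARAM_CALLBACK step for a PARAM_EXTERN Param. */
    typedef void (*ParamCompileHook) (ParamListInfo params, Param *param,
                                      ExprState *state,
                                      Datum *resv, bool *resnull);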
Alvaro Herrera [Thu, 21 Dec 2017 17:21:39 +0000 (14:21 -0300)]
Get rid of copy_partition_key
That function currently exists to avoid leaking memory in
CacheMemoryContext in case of trouble while the partition key is being
built, but there's a better way: allocate everything in a memcxt that
goes away if the current (sub)transaction fails, and once the partition
key is built and no further errors can occur, make the memcxt permanent
by making it a child of CacheMemoryContext.
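The pattern, in outline (a hedged sketch; the context name and variable
names are illustrative, not the committed code):

    /* Build in a context that vanishes if the (sub)transaction aborts. */
    partkeycxt = AllocSetContextCreate(CurTransactionContext,
                                       "partition key",
                                       ALLOCSET_SMALL_SIZES);
    oldcxt = MemoryContextSwitchTo(partkeycxt);
    /* ... build the partition key; an error here just loses the memcxt ... */
    MemoryContextSwitchTo(oldcxt);
    /* No further errors possible: make the allocations permanent. */
    MemoryContextSetParent(partkeycxt, CacheMemoryContext);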
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20171027172730.eh2domlkpn4ja62m@alvherre.pgsql
Tom Lane [Thu, 21 Dec 2017 15:56:57 +0000 (10:56 -0500)]
Avoid putting build-location-dependent strings into generated files.
Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.
That looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just
the name of the script file, but a full path for it. We'd prefer
that you get identical results whether using VPATH or not, so this
is a bad thing.
Some of these places also printed their input file name(s), causing
an additional hazard of the same type.
Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though). Instead just write the script name verbatim. While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.
Not all of the changes made here actually affect any files shipped
in finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it
could matter.
Robert Haas [Thu, 21 Dec 2017 14:09:04 +0000 (09:09 -0500)]
Cancel CV sleep during subtransaction abort.
Generally, error recovery paths that need to do things like
LWLockReleaseAll and pgstat_report_wait_end also need to call
ConditionVariableCancelSleep, but AbortSubTransaction was missed.
Since subtransaction abort might destroy the DSM segment that
contains the ConditionVariable stored in cv_sleep_target, this
can result in a crash for anything using condition variables.
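The fix is essentially to add the missing cancel call alongside the
other per-subtransaction cleanup (sketch; surrounding
AbortSubTransaction code elided):

    LWLockReleaseAll();
    pgstat_report_wait_end();
    ConditionVariableCancelSleep();     /* previously missing here */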
Andres Freund [Thu, 21 Dec 2017 07:39:21 +0000 (23:39 -0800)]
Add parallel-aware hash joins.
Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
Hash Join with Parallel Hash. While hash joins could already appear in
parallel queries, they were previously always parallel-oblivious and had a
partial subplan only on the outer side, meaning that the work of the inner
subplan was duplicated in every worker.
After this commit, the planner will consider using a partial subplan on the
inner side too, using the Parallel Hash node to divide the work over the
available CPU cores and combine its results in shared memory. If the join
needs to be split into multiple batches in order to respect work_mem, then
workers process different batches as much as possible and then work together
on the remaining batches.
The advantages of a parallel-aware hash join over a parallel-oblivious hash
join used in a parallel query are that it:
* avoids wasting memory on duplicated hash tables
* avoids wasting disk space on duplicated batch files
* divides the work of building the hash table over the CPUs
One disadvantage is that there is some communication between the participating
CPUs which might outweigh the benefits of parallelism in the case of small
hash tables. This is avoided by the planner's existing reluctance to supply
partial plans for small scans, but it may be necessary to estimate
synchronization costs in future if that situation changes. Another is that
outer batch 0 must be written to disk if multiple batches are required.
A potential future advantage of parallel-aware hash joins is that right and
full outer joins could be supported, since there is a single set of matched
bits for each hashtable, but that is not yet implemented.
A new GUC enable_parallel_hash is defined to control the feature, defaulting
to on.
Author: Thomas Munro
Reviewed-By: Andres Freund, Robert Haas
Tested-By: Rafia Sabih, Prabhat Sahu
Discussion:
https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
Robert Haas [Tue, 19 Dec 2017 20:26:09 +0000 (15:26 -0500)]
Test instrumentation of Hash nodes with parallel query.
Commit 8526bcb2df76d5171b4f4d6dc7a97560a73a5eff fixed bugs related
to both Sort and Hash, but only added a test case for Sort. This
adds a test case for Hash to match.
Robert Haas [Tue, 19 Dec 2017 17:21:56 +0000 (12:21 -0500)]
Try again to fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, accumulate instrumentation data only once, at the end, instead
of after each execution, to avoid recording inflated totals.
Commit 778e78ae9fa51e58f41cbdc72b293291d02d8984, the previous attempt
at a fix, instead reset the state after every execution, which worked
for the general instrumentation data but had problems for the additional
instrumentation specific to Sort and Hash nodes.
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
following a design proposal from Thomas Munro, with a comment tweak
by me.
Robert Haas [Tue, 19 Dec 2017 15:34:35 +0000 (10:34 -0500)]
Re-fix wrong costing of Sort under Gather Merge.
Commit dc02c7bca4dccf7de278cdc6b3325a829e75b252 changed this call
to create_sort_path() to take -1 rather than limit_tuples because,
at that time, there was no way for a Sort beneath a Gather Merge
to become a top-N sort.
Later, commit 3452dc5240da43e833118484e1e9b4894d04431c provided
a way for a Sort beneath a Gather Merge to become a top-N sort,
but failed to revert the previous commit in the process. Do that.
Report and analysis by Jeff Janes; patch by Thomas Munro; review by
Amit Kapila and by me.
This parameter can be used to enforce the channel binding type used
during a SCRAM authentication. This can be useful to check code paths
where an invalid channel binding type is used by a client and will be
even more useful to allow testing other channel binding types when they
are added.
The default value is tls-unique, which is what RFC 5802 specifies.
Clients can optionally specify an empty value, which has the effect of
not using channel binding while still using SCRAM-SHA-256 as the chosen
SASL mechanism.
More tests for SCRAM and channel binding are added to the SSL test
suite.
Author: Michael Paquier <michael.paquier@gmail.com>
Andres Freund [Mon, 18 Dec 2017 22:23:19 +0000 (14:23 -0800)]
Add shared tuplestores.
SharedTuplestore allows multiple participants to write into it and
then read the tuples back from it in parallel. Each reader receives
partial results.
For now it always uses disk files, but other buffering policies and
other kinds of scans (ie each reader receives complete results) may be
useful in future.
The upcoming parallel hash join feature will use this facility.
Author: Thomas Munro
Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas
Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
Peter Eisentraut [Mon, 18 Dec 2017 21:59:10 +0000 (16:59 -0500)]
Move SCRAM-related name definitions to scram-common.h
Mechanism names for SCRAM and channel binding have been included in
scram.h by the libpq frontend code, but this header also references a
set of routines that are used only by the backend. scram-common.h, by
contrast, is usable by both the backend and libpq, so getting those
names from there seems more reasonable.
Author: Michael Paquier <michael.paquier@gmail.com>
Fujii Masao [Mon, 18 Dec 2017 18:46:14 +0000 (03:46 +0900)]
Fix bug in cancellation of non-exclusive backup to avoid assertion failure.
Previously an assertion failure occurred when pg_stop_backup() for
non-exclusive backup was aborted while it's waiting for WAL files to
be archived. This assertion failure happened in do_pg_abort_backup()
which was called when a non-exclusive backup was canceled.
do_pg_abort_backup() assumes that there is at least one non-exclusive
backup running when it's called. But pg_stop_backup() can be canceled
even after it marks the end of the non-exclusive backup (e.g.,
while waiting for WAL archiving). This broke the assumption that
do_pg_abort_backup() relies on, causing an assertion failure.
This commit changes do_pg_abort_backup() so that it does nothing
when the non-exclusive backup has already been marked as completed.
That is, the assumption is also changed, and do_pg_abort_backup()
can now handle even the case where it's called when there is
no running backup.
Backpatch to 9.6 where SQL-callable non-exclusive backup was added.
Author: Masahiko Sawada and Michael Paquier
Reviewed-By: Robert Haas and Fujii Masao
Discussion: https://www.postgresql.org/message-id/CAD21AoD2L1Fu2c==gnVASMyFAAaq3y-AQ2uEVj-zTCGFFjvmDg@mail.gmail.com
Robert Haas [Mon, 18 Dec 2017 17:17:37 +0000 (12:17 -0500)]
Fix crashes on plans with multiple Gather (Merge) nodes.
es_query_dsa turns out to be broken by design, because it supposes
that there is only one DSA for the whole query, whereas there is
actually one per Gather (Merge) node. For now, work around that
problem by setting and clearing the pointer around the sections of
code that might need it. It's probably a better idea to get rid of
es_query_dsa altogether in favor of having each node keep track
individually of which DSA is relevant, but that seems like more than
we would want to back-patch.
Thomas Munro, reviewed and tested by Andreas Seltenreich, Amit
Kapila, and by me.
Tom Lane [Sun, 17 Dec 2017 16:52:22 +0000 (11:52 -0500)]
Try harder to detect unavailability of __builtin_mul_overflow(int64).
Commit c04d35f44 didn't quite do the job here, because it still allowed
the compiler to deduce that the function call could be optimized away.
Prevent that by putting the arguments and results in global variables.
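The probe now looks something like this (a rough sketch of the
configure test program; the committed form in config/c-compiler.m4 may
differ in detail):

    /* Globals defeat both constant folding and dead-code elimination. */
    PG_INT64_TYPE a = 1;
    PG_INT64_TYPE b = 1;
    PG_INT64_TYPE result;

    int
    main(void)
    {
        return __builtin_mul_overflow(a, b, &result);
    }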
Tom Lane [Sun, 17 Dec 2017 05:41:41 +0000 (00:41 -0500)]
Suppress compiler warning about no function return value.
Compilers that don't know that ereport(ERROR) doesn't return
complained about the new coding in scanint8() introduced by
commit 101c7ee3e. Tweak coding to avoid the warning.
Per buildfarm.
Andres Freund [Sat, 16 Dec 2017 20:49:41 +0000 (12:49 -0800)]
Try to detect runtime unavailability of __builtin_mul_overflow(int64).
On some systems the results of 64-bit __builtin_mul_overflow()
operations can be computed at compile time, but not at runtime. The
known cases are ARM buildfarm animals using clang, where the runtime
operation is implemented in an unavailable function.
Try to avoid compile-time computation by using volatile arguments to
__builtin_mul_overflow(). In that case we hopefully will get a link
error when unavailable, similar to what buildfarm animals dangomushi
and gull are reporting.
Author: Andres Freund
Discussion: https://postgr.es/m/20171213213754.pydkyjs6bt2hvsdb@alap3.anarazel.de
Noah Misch [Sat, 16 Dec 2017 18:03:35 +0000 (10:03 -0800)]
Avoid and detect SIGPIPE race in TAP tests.
Don't write to stdin of a psql process that could have already exited
with an authentication failure. Buildfarm members crake and mandrill
have failed once by doing so. Ignore SIGPIPE in all TAP tests.
Back-patch to v10, where these tests were introduced.
Tom Lane [Sat, 16 Dec 2017 16:32:49 +0000 (11:32 -0500)]
Fix oversights in new plpgsql test suite infrastructure.
Fix a couple of minor oversights in commit 632b03da3: the tests
should be run in database "pl_regression" like the other PLs do,
and we should clean up the tests' output cruft during "make clean".
Andres Freund [Tue, 14 Nov 2017 02:45:47 +0000 (18:45 -0800)]
Perform a lot more sanity checks when freezing tuples.
The previous commit has shown that the sanity checks around freezing
aren't strong enough. Strengthening them seems especially important
because the existence of the bug has caused corruption that we don't
want to make even worse during future vacuum cycles.
The errors are emitted with ereport rather than elog, despite being
"should never happen" messages, so a proper error code is emitted. To
avoid superfluous translations, mark messages as internal.
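The checks follow this shape (sketch; errmsg_internal is what keeps the
message out of the translation catalogs):

    if (TransactionIdPrecedes(xid, relfrozenxid))
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg_internal("found xmin %u from before relfrozenxid %u",
                                 xid, relfrozenxid)));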
Author: Andres Freund and Alvaro Herrera
Reviewed-By: Alvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
Andres Freund [Fri, 3 Nov 2017 14:52:29 +0000 (07:52 -0700)]
Fix pruning of locked and updated tuples.
Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key share lockers and an updater) was below the cutoff horizon.
As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or
xmax directly). That causes two problems: For one the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound. The bigger problem is that that will
break HOT chains, which in turn can lead to two breakages: First,
failing index lookups, which in turn can e.g. lead to constraints being
violated. Second, future HOT prunes / vacuums can end up making
invisible tuples visible again. There are other harmful scenarios.
Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon. That's safe because newer
versions of the tuple will contain the locking xids.
A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.
Author: Andres Freund, with changes by Alvaro Herrera
Reported-By: Daniel Wood
Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter Geoghegan,
Daniel Wood, Yi Wen Wong, Michael Paquier
Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
Discussion:
https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
Tom Lane [Thu, 14 Dec 2017 22:19:27 +0000 (17:19 -0500)]
Tighten configure's test for __builtin_constant_p().
Commit 9fa6f00b1 assumed that __builtin_constant_p("string literal")
is TRUE, if the compiler has that function at all. Buildfarm results
show that Sun Studio 12, at least, breaks that assumption. Removing
that usage would leave us with no mechanical check for a very fragile
coding requirement, so instead teach configure to ignore
__builtin_constant_p() if it doesn't behave that way. We could
complicate matters by distinguishing three cases (no such function,
vs does, vs doesn't work for string literals); but for now, that seems
unnecessary because our other existing uses of this function are just
fairly minor optimizations of non-returning elog/ereport. We can live
without that on the small population of compilers that act this way.
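The configure check boils down to a compile test roughly like this
(sketch; the committed test lives in config/c-compiler.m4):

    /* Compilation fails unless __builtin_constant_p behaves as we need. */
    static int x;
    static int y[__builtin_constant_p(x) ? x : 1];
    static int z[__builtin_constant_p("string literal") ? 1 : -1];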
Andrew Dunstan [Thu, 14 Dec 2017 16:13:14 +0000 (11:13 -0500)]
Fix walsender timeouts when decoding a large transaction
The logical slots have a fast code path for sending data so as not to
impose too high a per-message overhead. The fast path skips checks for
interrupts and timeouts. However, the existing coding failed to consider
the fact that a transaction with a large number of changes may take a
very long time to be processed and sent to the client. This causes the
walsender to ignore interrupts for potentially a long time and more
importantly it will result in the walsender being killed due to
timeout at the end of such a transaction.
This commit changes the fast path to also check for interrupts and only
allows calling the fast path when the last keepalive check happened less
than half the walsender timeout ago. Otherwise the slower code path will
be taken.
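The new guard looks approximately like this (hedged sketch of the
condition in the walsender's write path):

    /* Try the fast path unless we're getting close to the timeout. */
    if (now < TimestampTzPlusMilliseconds(last_reply_timestamp,
                                          wal_sender_timeout / 2) &&
        !pq_is_send_pending())
        return;         /* skip interrupt and keepalive processing */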
Backpatched to 9.4
Petr Jelinek, reviewed by Kyotaro HORIGUCHI, Yura Sokolov, Craig
Ringer and Robert Haas.
Andres Freund [Wed, 13 Dec 2017 23:47:01 +0000 (15:47 -0800)]
Allow executor nodes to change their ExecProcNode function.
In order for executor nodes to be able to change their ExecProcNode function
after ExecInitNode() has finished, provide ExecSetExecProcNode(). This allows
any wrapper functions that only execProcnode.c knows about to be reinstalled.
The motivation for wanting to change ExecProcNode after ExecInitNode() has
finished is that it is not known until later whether parallel query is
available, so if a parallel variant is to be installed then ExecInitNode()
is too soon to decide.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com
Andres Freund [Wed, 13 Dec 2017 23:34:20 +0000 (15:34 -0800)]
Add pg_attribute_always_inline.
Sometimes it is useful to be able to insist that the compiler inline a
function that its normal cost analysis would not choose to inline.
This can be useful for instantiating different variants of a function that
remove branches of code by constant folding.
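The definition is along these lines (sketch; see c.h for the exact
compiler conditions):

    #if defined(__GNUC__)
    #define pg_attribute_always_inline __attribute__((always_inline)) inline
    #elif defined(_MSC_VER)
    #define pg_attribute_always_inline __forceinline
    #else
    #define pg_attribute_always_inline inline
    #endif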
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com
Andres Freund [Wed, 13 Dec 2017 20:51:32 +0000 (12:51 -0800)]
Add defenses against pre-crash files to BufFileOpenShared().
Crash restarts currently don't clean up temporary files, as a debugging aid.
If a left-over file happens to have the same name as a segment file we're
trying to create, we'll just truncate and reuse it, but there is a problem:
BufFileOpenShared() determines how many segment files exist by trying to open
.0, .1, .2, ... until it finds no more files. It might be confused by a junk
file that has the next segment number. To defend against that, make sure we
always create a gap after the end file by unlinking the following name if it
exists. Also make it an error to try to open a BufFile that doesn't exist
(has no segment 0), so as not to encourage the development of client code
that depends on an interface that we can't reliably provide.
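In outline (a hedged sketch; SharedSegmentName and the LOG handling are
illustrative stand-ins for whatever buffile.c actually uses):

    /*
     * After creating segment number segno, also remove any pre-crash
     * leftover that would be found as segment segno + 1, so the
     * open-time scan (.0, .1, .2, ...) stops at a genuine gap.
     */
    SharedSegmentName(path, name, segno + 1);   /* hypothetical helper */
    if (unlink(path) < 0 && errno != ENOENT)
        elog(LOG, "could not unlink file \"%s\": %m", path);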
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D2jhCbC_GFQJaaDhWxLB4EXtT3vVd5czuRNaqF5CWSTog%40mail.gmail.com
Tom Lane [Wed, 13 Dec 2017 18:55:12 +0000 (13:55 -0500)]
Rethink MemoryContext creation to improve performance.
This patch makes a number of interrelated changes to reduce the overhead
involved in creating/deleting memory contexts. The key ideas are:
* Include the AllocSetContext header of an aset.c context in its first
malloc request, rather than allocating it separately in TopMemoryContext.
This means that we now always create an initial or "keeper" block in an
aset, even if it never receives any allocation requests.
* Create freelists in which we can save and recycle recently-destroyed
asets (this idea is due to Robert Haas).
* In the common case where the name of a context is a constant string,
just store a pointer to it in the context header, rather than copying
the string.
The first change eliminates a palloc/pfree cycle per context, and
also avoids bloat in TopMemoryContext, at the price that creating
a context now involves a malloc/free cycle even if the context never
receives any allocations. That would be a loser for some common
usage patterns, but recycling short-lived contexts via the freelist
eliminates that pain.
Avoiding copying constant strings not only saves strlen() and strcpy()
overhead, but is an essential part of the freelist optimization because
it makes the context header size constant. Currently we make no
attempt to use the freelist for contexts with non-constant names.
(Perhaps someday we'll need to think harder about that, but in current
usage, most contexts with custom names are long-lived anyway.)
The freelist management in this initial commit is pretty simplistic,
and we might want to refine it later --- but in common workloads that
will never matter because the freelists will never get full anyway.
To create a context with a non-constant name, one is now required to
call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME
option. AllocSetContextCreate becomes a wrapper macro, and it includes
a test that will complain about non-string-literal context name
parameters on gcc and similar compilers.
An unfortunate side effect of making AllocSetContextCreate a macro is
that one is now *required* to use the size parameter abstraction macros
(ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of
writing out individual size parameters no longer works unless you
switch to AllocSetContextCreateExtended.
Internally to the memory-context-related modules, the context creation
APIs are simplified, removing the rather baroque original design whereby
a context-type module called mcxt.c which then called back into the
context-type module. That saved a bit of code duplication, but not much,
and it prevented context-type modules from exercising control over the
allocation of context headers.
In passing, I converted the test-and-elog validation of aset size
parameters into Asserts to save a few more cycles. The original thought
was that callers might compute size parameters on the fly, but in practice
nobody does that, so it's useless to expend cycles on checking those
numbers in production builds.
Also, mark the memory context method-pointer structs "const",
just for cleanliness.
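The wrapper macro works roughly like this (sketch; the real definition
in memutils.h may differ in detail):

    #ifdef HAVE__BUILTIN_CONSTANT_P
    #define AllocSetContextCreate(parent, name, allocparams) \
        (StaticAssertExpr(__builtin_constant_p(name), \
                          "memory context names must be constant strings"), \
         AllocSetContextCreateExtended(parent, name, 0, allocparams))
    #else
    #define AllocSetContextCreate(parent, name, allocparams) \
        AllocSetContextCreateExtended(parent, name, MEMCONTEXT_COPY_NAME, \
                                      allocparams)
    #endif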
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome. The other PLs have their own test suites split up into
smaller files by topic. It would be nice to have that for plpgsql as
well. So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there. That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.
msvc build system changes with help from Michael Paquier
After d0aa965c0a0ac2ff7906ae1b1dad50a7952efa56, one error path in
PLy_spi_execute_fetch_result() could result in the variable "result"
being dereferenced after being set to NULL. Rearrange the code a bit to
fix that.
Also add another SPI_freetuptable() call so that that is cleared in all
error paths.
discovered by John Naylor <jcnaylor@gmail.com> via scan-build
Andres Freund [Wed, 13 Dec 2017 00:32:31 +0000 (16:32 -0800)]
Use new overflow aware integer operations.
A previous commit added inline functions that provide fast(er) and
correct overflow checks for signed integer math. Use them in a
significant portion of backend code. There's more to touch in both
backend and frontend code, but these were the easily identifiable
cases.
The old overflow checks are noticeable in integer heavy workloads.
A secondary benefit is that getting rid of overflow checks that rely
on signed integer overflow wrapping around will allow us to get rid
of -fwrapv in the future, which in turn slows down other code.
Author: Andres Freund
Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
Andres Freund [Mon, 30 Oct 2017 05:13:54 +0000 (22:13 -0700)]
Provide overflow safe integer math inline functions.
It's not easy to get signed integer overflow checks correct and
fast. Therefore abstract the necessary infrastructure into a common
header providing addition, subtraction and multiplication for 16, 32,
64 bit signed integers.
The new macros aren't yet used, but a followup commit will convert
several open coded overflow checks.
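The functions all follow the same pattern; here is a sketch of the
32-bit addition case, modeled on the new common/int.h:

    /* Return true on overflow, else store a + b into *result. */
    static inline bool
    pg_add_s32_overflow(int32 a, int32 b, int32 *result)
    {
    #if defined(HAVE__BUILTIN_OP_OVERFLOW)
        return __builtin_add_overflow(a, b, result);
    #else
        int64       res = (int64) a + (int64) b;

        if (res > PG_INT32_MAX || res < PG_INT32_MIN)
            return true;
        *result = (int32) res;
        return false;
    #endif
    }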
Author: Andres Freund, with some code stolen from Greg Stark
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
Robert Haas [Wed, 13 Dec 2017 00:33:50 +0000 (19:33 -0500)]
Remove obsolete comment.
Commit 8b304b8b72b0a60f1968d39f01cf817c8df863ec removed replacement
selection, but left behind this comment text. The optimization to
which the comment refers is not relevant without replacement
selection, because if we had so few tuples as to require only one
tape, we would have just completed the sort in memory.
Tom Lane [Mon, 11 Dec 2017 21:33:20 +0000 (16:33 -0500)]
Fix corner-case coredump in _SPI_error_callback().
I noticed that _SPI_execute_plan initially sets spierrcontext.arg = NULL,
and only fills it in some time later. If an error were to happen in
between, _SPI_error_callback would try to dereference the null pointer.
This is unlikely --- there's not much between those points except
push-snapshot calls --- but it's clearly not impossible. Tweak the
callback to do nothing if the pointer isn't set yet.
It's been like this for awhile, so back-patch to all supported branches.
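The tweak is just an early exit (sketch of the callback):

    static void
    _SPI_error_callback(void *arg)
    {
        const char *query = (const char *) arg;

        if (query == NULL)      /* in case arg wasn't set yet */
            return;
        /* ... existing errposition/errcontext logic ... */
    }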
Tom Lane [Sun, 10 Dec 2017 17:44:03 +0000 (12:44 -0500)]
Stabilize output of new regression test case.
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent. But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely. We can revert this if we do something
in the backend to eliminate the inconsistency.
Tom Lane [Sat, 9 Dec 2017 17:03:00 +0000 (12:03 -0500)]
Fix plpgsql to reinitialize record variables at block re-entry.
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is). But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list. This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.
Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well. (We don't need to consider other DTYPEs because they don't
represent separately-stored values.) I failed to resist the temptation
to make some nearby cosmetic adjustments, too.
No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior. In any case this change would have no
impact on correctly-written code.
Noah Misch [Sat, 9 Dec 2017 08:58:55 +0000 (00:58 -0800)]
MSVC 2012+: Permit linking to 32-bit, MinGW-built libraries.
Notably, this permits linking to the 32-bit Perl binaries advertised on
perl.org, namely Strawberry Perl and ActivePerl. This has a side effect
of permitting linking to binaries built with obsolete MSVC versions.
By default, MSVC 2012 and later require a "safe exception handler table"
in each binary. MinGW-built, 32-bit DLLs lack the relevant exception
handler metadata, so linking to them failed with error LNK2026. Restore
the semantics of MSVC 2010, which omits the table from a given binary if
some linker input lacks metadata. This has no effect on 64-bit builds
or on MSVC 2010 and earlier. Back-patch to 9.3 (all supported
versions).
Noah Misch [Sat, 9 Dec 2017 02:06:05 +0000 (18:06 -0800)]
MSVC: Test whether 32-bit Perl needs -D_USE_32BIT_TIME_T.
Commits 5a5c2feca3fd858e70ea348822595547e6fa6c15 and b5178c5d08ca59e30f9d9428fa6fdb2741794e65 introduced support for modern
MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
distributions like Strawberry Perl and modern ActivePerl. Perl has no
robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so
test this. Back-patch to 9.3 (all supported versions).
The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
$Config{gccversion} is nonempty. That banks on every gcc-built Perl
using the same ABI. gcc could change its default ABI the way MSVC once
did, and one could build Perl with gcc and the non-default ABI.
The GNU make build system could benefit from a similar test, without
which it does not support MSVC-built Perl. For now, just add a comment.
Most users taking the special step of building Perl with MSVC probably
build PostgreSQL with MSVC.
Tom Lane [Fri, 8 Dec 2017 16:20:50 +0000 (11:20 -0500)]
In plpgsql, unify duplicate variables for record and row cases.
plpgsql's function exec_move_row() handles assignment of a composite
source value to either a PLpgSQL_rec or PLpgSQL_row target variable.
Oddly, rather than taking a single target argument which it could do
run-time type detection on, it was coded to take two separate arguments
(only one of which is allowed to be non-NULL). This choice had then
back-propagated into storing two separate target variables in various
plpgsql statement nodes, with lots of duplicative coding and awkward
interface logic to support that. Simplify matters by folding those
pairs down to single variables, distinguishing the two cases only
where we must ... which turns out to be only in exec_move_row itself.
This is purely refactoring and should not change any behavior.
In passing, remove unused field PLpgSQL_stmt_open.returntype.
A COPY into a table should apply identity sequence values just like it
does for ordinary defaults. This was previously forgotten, leading to
null values being inserted, which in turn would fail because identity
columns have not-null constraints.
Author: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Steven Winfield <steven.winfield@cantabcapital.com>
Bug: #14952
Robert Haas [Wed, 6 Dec 2017 13:49:30 +0000 (08:49 -0500)]
Report failure to start a background worker.
When a worker is flagged as BGW_NEVER_RESTART and we fail to start it,
or if it is not marked BGW_NEVER_RESTART but is terminated before
startup succeeds, what BgwHandleStatus should be reported? The
previous code really hadn't considered this possibility (as indicated
by the comments which ignore it completely) and would typically return
BGWH_NOT_YET_STARTED, but that's not a good answer, because then
there's no way for code using GetBackgroundWorkerPid() to tell the
difference between a worker that has not started but will start
later and a worker that has not started and will never be started.
So, when this case happens, return BGWH_STOPPED instead. Update the
comments to reflect this.
The preceding fix by itself is insufficient to fix the problem,
because the old code also didn't send a notification to the process
identified in bgw_notify_pid when startup failed. That might've
been technically correct under the theory that the status of the
worker was BGWH_NOT_YET_STARTED, because the status would indeed not
change when the worker failed to start, but now that we're more
usefully reporting BGWH_STOPPED, a notification is needed.
Without these fixes, code which starts background workers and then
uses the recommended APIs to wait for those background workers to
start would hang indefinitely if the postmaster failed to fork a
worker.
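With this change, a caller can reliably tell the cases apart (hedged
usage sketch; the error wording is illustrative):

    BgwHandleStatus status;
    pid_t       pid;

    status = WaitForBackgroundWorkerStartup(handle, &pid);
    if (status == BGWH_STOPPED)
        ereport(ERROR,
                (errmsg("background worker failed to start")));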
Tom Lane [Wed, 6 Dec 2017 03:40:05 +0000 (22:40 -0500)]
Adjust regression test cases added by commit ab7271677.
I suppose it is a copy-and-paste error that this test doesn't actually
test the "Parallel Append with both partial and non-partial subplans"
case (EXPLAIN alone surely doesn't qualify as a test of executor
behavior). Fix that.
Also, add cosmetic aliases to make it possible to tell apart these
otherwise-identical test cases in log_statement output.
Remove the designation that Flex is a GNU package. Even though Bison is
a GNU package, leave out the designation to not make the sentence
unnecessarily complicated.
Robert Haas [Tue, 5 Dec 2017 22:28:39 +0000 (17:28 -0500)]
Support Parallel Append plan nodes.
When we create an Append node, we can spread out the workers over the
subplans instead of piling on to each subplan one at a time, which
should typically be a bit more efficient, both because the startup
cost of any plan executed entirely by one worker is paid only once and
also because of reduced contention. We can also construct Append
plans using a mix of partial and non-partial subplans, which may allow
for parallelism in places that otherwise couldn't support it.
Unfortunately, this patch doesn't handle the important case of
parallelizing UNION ALL by running each branch in a separate worker;
the executor infrastructure is added here, but more planner work is
needed.
Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
Rajkumar Raghuwanshi.
Robert Haas [Tue, 5 Dec 2017 19:35:33 +0000 (14:35 -0500)]
Fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, the old code wouldn't reset the shared state between executions,
potentially resulting in dramatically inflated instrumentation data
for nodes beneath it. (The per-worker instrumentation ended up OK,
I think, but the overall totals were inflated.)
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
reviewed and tweaked a bit by me.
Andres Freund [Tue, 5 Dec 2017 18:55:56 +0000 (10:55 -0800)]
Fix EXPLAIN ANALYZE of hash join when the leader doesn't participate.
If a hash join appears in a parallel query, there may be no hash table
available for explain.c to inspect even though a hash table may have
been built in other processes. This could happen either because
parallel_leader_participation was set to off or because the leader
happened to hit the end of the outer relation immediately (even though
the complete relation is not empty) and decided not to build the hash
table.
Commit bf11e7ee introduced a way for workers to exchange
instrumentation via the DSM segment for Sort nodes even though they
are not parallel-aware. This commit does the same for Hash nodes, so
that explain.c has a way to find instrumentation data from an
arbitrary participant that actually built the hash table.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
Robert Haas [Tue, 5 Dec 2017 16:19:45 +0000 (11:19 -0500)]
postgres_fdw: Judge password use by run-as user, not session user.
This is a backward incompatibility which should be noted in the
release notes for PostgreSQL 11.
For security reasons, we require that a postgres_fdw foreign table use
password authentication when accessing a remote server, so that an
unprivileged user cannot usurp the server's credentials. Superusers
are exempt from this requirement, because we assume they are entitled
to usurp the server's credentials or, at least, can find some other
way to do it.
But what should happen when the foreign table is accessed by a view
owned by a user different from the session user? Is it the view owner
that must be a superuser in order to avoid the requirement of using a
password, or the session user? Historically it was the latter, but
this commit makes it the former instead. This allows superusers
to delegate to other users the right to select from a foreign table
that doesn't use password authentication by creating a view over the
foreign table and handing out rights to the view. It is also more
consistent with the idea that access to a view should use the view
owner's privileges rather than the session user's privileges.
The upshot of this change is that a superuser selecting from a view
created by a non-superuser may now get an error complaining that no
password was used, while a non-superuser selecting from a view
created by a superuser will no longer receive such an error.
No documentation changes are present in this patch because the
wording of the documentation already suggests that it works this
way. We should perhaps adjust the documentation in the back-branches,
but that's a task for another patch.
Originally proposed by Jeff Janes, but with different semantics;
adjusted to work like this by me per discussion.
Tom Lane [Tue, 5 Dec 2017 01:52:48 +0000 (20:52 -0500)]
Treat directory open failures as hard errors in ResetUnloggedRelations().
Previously, this code just reported such problems at LOG level and kept
going. The problem with this approach is that transient failures (e.g.,
ENFILE) could prevent us from resetting unlogged relations to empty,
yet allow recovery to appear to complete successfully. That seems like
a data corruption hazard large enough to treat such problems as reasons
to fail startup.
For the same reason, treat unlink failures for unlogged files as hard
errors not just LOG messages. It's a little odd that we did it like that
when file-level errors in other steps (copy_file, fsync_fname) are ERRORs.
The sole case that I left alone is that ENOENT failure on a tablespace
(not database) directory is not an error, though it will now be logged
rather than just silently ignored. This is to cover the scenario where
a previous DROP TABLESPACE removed the tablespace directory but failed
before removing the pg_tblspc symlink. I'm not sure that that's very
likely in practice, but that seems like the only real excuse for the
old behavior here, so let's allow for it. (As coded, this will also
allow ENOENT on $PGDATA/base/. But since we'll fail soon enough if
that's gone, I don't think we need to complicate this code by
distinguishing that from a true tablespace case.)
Tom Lane [Mon, 4 Dec 2017 23:37:54 +0000 (18:37 -0500)]
Simplify do_pg_start_backup's API by opening pg_tblspc internally.
do_pg_start_backup() expects its callers to pass in an open DIR pointer
for the pg_tblspc directory, but there's no apparent advantage in that.
It complicates the callers without adding any flexibility, and there's no
robustness advantage, since we surely have to be prepared for errors during
the scan of pg_tblspc anyway. In fact, by holding an extra kernel resource
during operations like the preliminary checkpoint, we might be making
things a fraction more failure-prone not less. Hence, remove that argument
and open the directory just for the duration of the actual scan.
Tom Lane [Mon, 4 Dec 2017 22:59:35 +0000 (17:59 -0500)]
Improve error handling in RemovePgTempFiles().
Modify this function and its subsidiaries so that syscall failures are
reported via ereport(LOG), rather than silently ignored as before.
We don't want to throw a hard ERROR, as that would prevent database
startup, and getting rid of leftover temporary files is not important
enough for that. On the other hand, not reporting trouble at all
seems like an odd choice not in line with current project norms,
especially since any failure here is quite unexpected.
On the same reasoning, adjust these functions' AllocateDir/ReadDir calls
so that failure to scan a directory results in LOG not ERROR. I also
removed the previous practice of silently ignoring ENOENT failures during
directory opens --- there are some corner cases where that could happen
given a previous database crash, but that seems like a bad excuse for
ignoring a condition that isn't expected in most cases. A LOG message
during postmaster start seems OK in such situations, and better than
no output at all.
In passing, make RemovePgTempRelationFiles' test for "is the file name
all digits" look more like the way it's done elsewhere.
Tom Lane [Mon, 4 Dec 2017 22:02:52 +0000 (17:02 -0500)]
Clean up assorted messiness around AllocateDir() usage.
This patch fixes a couple of low-probability bugs that could lead to
reporting an irrelevant errno value (and hence possibly a wrong SQLSTATE)
concerning directory-open or file-open failures. It also fixes places
where we took shortcuts in reporting such errors, either by using elog
instead of ereport or by using ereport but forgetting to specify an
errcode. And it eliminates a lot of just plain redundant error-handling
code.
In service of all this, export fd.c's formerly-static function
ReadDirExtended, so that external callers can make use of the coding
pattern
dir = AllocateDir(path);
while ((de = ReadDirExtended(dir, path, LOG)) != NULL)
if they'd like to treat directory-open failures as mere LOG conditions
rather than errors. Also fix FreeDir to be a no-op if we reach it
with dir == NULL, as such a coding pattern would cause.
Then, remove code at many call sites that was throwing an error or log
message for AllocateDir failure, as ReadDir or ReadDirExtended can handle
that job just fine. Aside from being a net code savings, this gets rid of
a lot of not-quite-up-to-snuff reports, as mentioned above. (In some
places these changes result in replacing a custom error message such as
"could not open tablespace directory" with more generic wording "could not
open directory", but it was agreed that the custom wording buys little as
long as we report the directory name.) In some other call sites where we
can't just remove code, change the error reports to be fully
project-style-compliant.
Also reorder code in restoreTwoPhaseData that was acquiring a lock
between AllocateDir and ReadDir; in the unlikely but surely not
impossible case that LWLockAcquire changes errno, AllocateDir failures
would be misreported. There is no great value in opening the directory
before acquiring TwoPhaseStateLock, so just do it in the other order.
Also fix CheckXLogRemoved to guarantee that it preserves errno,
as quite a number of call sites are implicitly assuming. (Again,
it's unlikely but I think not impossible that errno could change
during a SpinLockAcquire. If so, this function was broken for its
own purposes as well as breaking callers.)
And change a few places that were using not-per-project-style messages,
such as "could not read directory" when "could not open directory" is
more correct.
Back-patch the exporting of ReadDirExtended, in case we have occasion
to back-patch some fix that makes use of it; it's not needed right now
but surely making it global is pretty harmless. Also back-patch the
restoreTwoPhaseData and CheckXLogRemoved fixes. The rest of this is
essentially cosmetic and need not get back-patched.
Michael Paquier, with a bit of additional work by me
Tom Lane [Mon, 4 Dec 2017 16:51:43 +0000 (11:51 -0500)]
Support boolean columns in functional-dependency statistics.
There's no good reason that the multicolumn stats stuff shouldn't work on
booleans. But it looked only for "Var = pseudoconstant" clauses, and it
will seldom find those for boolean Vars, since earlier phases of planning
will fold "boolvar = true" or "boolvar = false" to just "boolvar" or
"NOT boolvar" respectively. Improve dependencies_clauselist_selectivity()
to recognize such clauses as equivalent to equality restrictions.
This fixes a failure of the extended stats mechanism to apply in a case
reported by Vitaliy Garnashevich. It's not a complete solution to his
problem because the bitmap-scan costing code isn't consulting extended
stats where it should, but that's surely an independent issue.
In passing, improve some comments, get rid of a NumRelids() test that's
redundant with the preceding bms_membership() test, and fix
dependencies_clauselist_selectivity() so that estimatedclauses actually
is a pure output argument as stated by its API contract.
Robert Haas [Mon, 4 Dec 2017 15:33:09 +0000 (10:33 -0500)]
Remove memory leak protection from Gather and Gather Merge nodes.
Before commit 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608, tqueue.c could
perform tuple remapping and thus leak memory, which is why commit af33039317ddc4a0e38a02e2255c2bf453115fd2 made TupleQueueReaderNext
run in a short-lived context. Now, however, tqueue.c has been reduced
to a shadow of its former self, and there shouldn't be any chance of
leaks any more. Accordingly, remove some tuple copying and memory
context manipulation to speed up processing.
Patch by me, reviewed by Amit Kapila. Some testing by Rafia Sabih.
Tom Lane [Sun, 3 Dec 2017 16:25:17 +0000 (11:25 -0500)]
Fix uninitialized-variable compiler warning induced by commit e4128ee76.
I'm a little bit astonished that anyone's compiler would have failed to
complain about this. The compiler surely does not know that is_procedure
means the function return value will be ignored.
Andres Freund [Sat, 2 Dec 2017 00:30:56 +0000 (16:30 -0800)]
Add infrastructure for sharing temporary files between backends.
SharedFileSet allows temporary files to be created by one backend and
then exported for read-only access by other backends, with clean-up
managed by reference counting associated with a DSM segment. This
includes changes to fd.c and buffile.c to support the new kind of
temporary file.
This will be used by an upcoming patch adding support for parallel
hash joins.
Author: Thomas Munro
Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas, Rushabh Lathia
Discussion:
https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
https://postgr.es/m/CAH2-WznJ_UgLux=_jTgCQ4yFz0iBntudsNKa1we3kN1BAG=88w@mail.gmail.com
Robert Haas [Fri, 1 Dec 2017 18:47:00 +0000 (13:47 -0500)]
postgres_fdw: Fix test that didn't test what it claimed.
Antonin Houska reported that the planner does consider pushing
postgres_fdw_abs() to the remote side, which happens because we make
it shippable earlier in the test case file.
Jeevan Chalke provided this patch, which changes the join
condition to use random(), which is not shippable, instead.
Antonin reviewed the patch.
pg_basebackup: Fix progress messages when writing to a file
The progress messages print out \r to keep overwriting the same line on
the screen. But this does not yield useful results when writing the
output to a file. So in that case, print out \n instead.
Author: Martín Marqués <martin@2ndquadrant.com>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Robert Haas [Thu, 30 Nov 2017 21:22:38 +0000 (16:22 -0500)]
Remove extra word from comment.
David Rowley, who also was the primary author of the patch that
added this function; the attribution in my previous commit, 84940644de931f331433b35e3a391822671f8c9c, was incorrect due to
sloppiness on my part.
Peter Eisentraut [Thu, 30 Nov 2017 13:46:13 +0000 (08:46 -0500)]
SQL procedures
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar. This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.
This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL). There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql. Support for defining
procedures is available in all the languages supplied by the core
distribution.
While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Robert Haas [Thu, 30 Nov 2017 14:50:10 +0000 (09:50 -0500)]
Make create_unique_path manage memory like mark_dummy_rel.
Put the unique path in the same context as the owning RelOptInfo, rather
than the toplevel planner context. This is how this function worked
originally, but commit f41803bb39bc2949db200116a609fd242d0ec221
changed it without explanation. mark_dummy_rel adopted the older (or
newer?) technique in commit eca75a12a27d28b972fc269c1c8813cd8eb15441,
which also featured a much better explanation of why it is correct.
So, switch back to that technique here, with the same explanation
given there.
Although this fixes a possible memory leak when GEQO is in use, the
leak is minor and probably nobody cares, so no back-patch.
Noah Misch [Thu, 30 Nov 2017 08:57:22 +0000 (00:57 -0800)]
Fix non-GNU makefiles for AIX make.
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target. Back-patch to 9.3
(all supported versions).
Tom Lane [Thu, 30 Nov 2017 03:00:29 +0000 (22:00 -0500)]
Fix neqjoinsel's behavior for semi/anti join cases.
Previously, this function estimated the selectivity as 1 minus eqjoinsel()
for the negator equality operator, regardless of join type (I think there
was an expectation that eqjoinsel would handle the join type). But
actually this is completely wrong for semijoin cases: the fraction of the
LHS that has a non-matching row is not one minus the fraction of the LHS
that has a matching row. In reality a semijoin with <> will nearly always
succeed: it can only fail when the RHS is empty, or it contains a single
distinct value that is equal to the particular LHS value, or the LHS value
is null. The only one of those things we should have much confidence in
estimating is the fraction of LHS values that are null, so let's just take
the selectivity as 1 minus outer nullfrac.
Per coding convention, antijoin should be estimated the same as semijoin.
Arguably this is a bug fix, but in view of the lack of field complaints
and the risk of destabilizing plans, no back-patch.
Andres Freund [Thu, 30 Nov 2017 01:07:16 +0000 (17:07 -0800)]
Add a barrier primitive for synchronizing backends.
Provide support for dynamic or static parties of processes to wait for
all processes to reach a point in the code before continuing.
This is similar to the mechanism of the same name in POSIX threads and
MPI, though it has explicit phasing and dynamic party support like the
Java core library's Phaser.
This will be used by an upcoming patch adding support for parallel
hash joins.
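Typical use looks like this (hedged sketch based on the new barrier.h
API; the wait-event argument is abbreviated to 0):

    Barrier     barrier;

    BarrierInit(&barrier, 0);           /* dynamic party: starts empty */

    /* In each participant: */
    BarrierAttach(&barrier);
    /* ... do this phase's work ... */
    BarrierArriveAndWait(&barrier, 0);  /* block until all have arrived */
    BarrierDetach(&barrier);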
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=2_y7oi01OjA_wLvYcWMc9_d=LaoxrY3eiROCZkB_qakA@mail.gmail.com
Andres Freund [Thu, 30 Nov 2017 00:06:50 +0000 (16:06 -0800)]
Add some regression tests that exercise hash join code.
Although hash joins are already tested by many queries, these tests
systematically cover the four different states we can reach as part of
the strategy for respecting work_mem.
Robert Haas [Wed, 29 Nov 2017 22:06:14 +0000 (17:06 -0500)]
New C function: bms_add_range
This will be used by pending patches to improve partition pruning.
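Usage is what you'd expect (sketch):

    Bitmapset  *parts = NULL;

    /* Equivalent to five bms_add_member() calls, but in one step. */
    parts = bms_add_range(parts, 3, 7);     /* adds members 3, 4, 5, 6, 7 */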
Amit Langote and Kyotaro Horiguchi, per a suggestion from David
Rowley. Review and testing of the larger patch set of which this is a
part by Ashutosh Bapat, David Rowley, Dilip Kumar, Jesper Pedersen,
Rajkumar Raghuwanshi, Beena Emerson, Amul Sul, and Kyotaro Horiguchi.
Robert Haas [Wed, 29 Nov 2017 20:19:07 +0000 (15:19 -0500)]
Add extensive tests for partition pruning.
Currently, partition pruning happens via constraint exclusion, but
there are pending places to replace that with a different and
hopefully faster mechanism. To be sure that we don't change behavior
without realizing it, add extensive test coverage.
Note that not all of these behaviors are optimal; in some cases,
partitions are not pruned even though it would be safe to do so.
These tests therefore serve to memorialize the current state rather
than the ideal state. Patches that improve things can update the test
results as appropriate.
Amit Langote, adjusted by me. Review and testing of the larger patch
set of which this is a part by Ashutosh Bapat, David Rowley, Dilip
Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul,
and Kyotaro Horiguchi.