Robert Haas [Thu, 4 Jan 2018 12:56:09 +0000 (07:56 -0500)]
Code review for Parallel Append.
- Remove unnecessary #include mistakenly added in execnodes.h.
- Fix mistake in comment in choose_next_subplan_for_leader.
- Adjust row estimates in cost_append for a possibly-different
parallel divisor.
- Clamp row estimates in cost_append after operations that may
not produce integers.
Tom Lane [Wed, 3 Jan 2018 22:53:06 +0000 (17:53 -0500)]
Clean up tupdesc.c for recent changes.
TupleDescCopy needs to have the same effects as CreateTupleDescCopy in
that, since it doesn't copy constraints, it should clear the per-attribute
fields associated with them. Oversight in commit cc5f81366.
Since TupleDescCopy has already established the presumption that it
can just flat-copy the entire attribute array in one go, propagate
that approach into CreateTupleDescCopy and CreateTupleDescCopyConstr.
(I'm suspicious that this would lead to valgrind complaints if we
had any trailing padding in the struct, but we do not, and anyway
fixing that seems like a job for a separate commit.)
Add some better comments.
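The shared approach, roughly (a condensed sketch, not the verbatim code;
variable names illustrative):

    int     i;

    /* flat-copy the whole attribute array in one go ... */
    memcpy(TupleDescAttr(dst, 0), TupleDescAttr(src, 0),
           src->natts * sizeof(FormData_pg_attribute));

    /* ... then clear per-attribute fields tied to constraints/defaults,
     * since the constraints themselves are not copied */
    for (i = 0; i < dst->natts; i++)
    {
        Form_pg_attribute att = TupleDescAttr(dst, i);

        att->attnotnull = false;
        att->atthasdef = false;
    }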
Thomas Munro, reviewed by Vik Fearing, some additional hacking by me
Alvaro Herrera [Wed, 3 Jan 2018 14:16:34 +0000 (11:16 -0300)]
Revert "Fix isolation test to be less timing-dependent"
This reverts commit 2268e6afd596. It turned out that inconsistency in
the report is still possible, so go back to the simpler formulation of
the test and instead add an alternate expected output.
Andres Freund [Wed, 3 Jan 2018 20:00:11 +0000 (12:00 -0800)]
Rename pg_rewind's copy_file_range() to avoid conflict with new linux syscall.
Upcoming versions of glibc will contain copy_file_range(2), a wrapper
around a new linux syscall for in-kernel copying of data ranges. This
conflicts with pg_rewind's function of the same name.
Therefore rename pg_rewind's version. As our version isn't a generic
copying facility, we chose a rewind-specific function name.
Per buildfarm animal caiman and subsequent discussion with Tom Lane.
Author: Andres Freund
Discussion:
https://postgr.es/m/20180103033425.w7jkljth3e26sduc@alap3.anarazel.de
https://postgr.es/m/31122.1514951044@sss.pgh.pa.us
Backpatch: 9.5-, where pg_rewind was introduced
Andrew Dunstan [Wed, 3 Jan 2018 20:26:39 +0000 (15:26 -0500)]
Fix use of config-specific libraries for Windows OpenSSL
Commit 614350a3 allowed for different builds of OpenSSL libraries on
Windows, but ignored the fact that the alternative builds don't have
config-specific libraries. This patch fixes the Solution file to ask for
the correct libraries.
Per offline discussions with Leonardo Cecchi and Marco Nenciarini.
Alvaro Herrera [Wed, 3 Jan 2018 20:26:20 +0000 (17:26 -0300)]
Make XactLockTableWait work for transactions that are not yet self-locked
XactLockTableWait assumed that its xid argument had already added itself
to the lock table. That assumption led to another assumption that if
locking the xid has succeeded but the xid is reported as still in
progress, then the input xid must have been a subtransaction.
These assumptions hold true for the original uses of this code in
locking related to on-disk tuples, but they break down in logical
replication slot snapshot building -- in particular, when a logged
standby snapshot contains an xid that's already in ProcArray but not yet
in the lock table. This leads to assertion failures that can be
reproduced all the way back to 9.4, when logical decoding was
introduced.
To fix, change SubTransGetParent to SubTransGetTopmostTransaction which
has a slightly different API: it returns the argument Xid if there is no
parent, and it goes all the way to the top instead of moving up the
levels one by one. Also, to avoid busy-waiting, add a 1ms sleep to give
the other process time to register itself in the lock table.
For consistency, change ConditionalXactLockTableWait the same way.
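Condensed sketch of the resulting wait loop (simplified from the real
function; the actual code skips the sleep on the first iteration):

    for (;;)
    {
        /* block until the xid's lock is free, then release it again */
        SET_LOCKTAG_TRANSACTION(tag, xid);
        (void) LockAcquire(&tag, ShareLock, false, false);
        LockRelease(&tag, ShareLock, false);

        if (!TransactionIdIsInProgress(xid))
            break;

        /*
         * The xid is still in progress: either it is a subtransaction, or
         * it has not yet registered itself in the lock table.  Walk up to
         * the topmost transaction, and sleep 1ms so we don't busy-wait.
         */
        xid = SubTransGetTopmostTransaction(xid);
        pg_usleep(1000L);
    }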
Author: Petr Jelínek
Reported-by: Konstantin Knizhnik
Diagnosed-by: Stas Kelvich, Petr Jelínek
Reviewed-by: Andres Freund, Robert Haas
Discussion: https://postgr.es/m/1B3E32D8-FCF4-40B4-AEF9-5C0E3AC57969@postgrespro.ru
Tom Lane [Wed, 3 Jan 2018 17:53:49 +0000 (12:53 -0500)]
Fix some minor errors in new PHJ code.
Correct ExecParallelHashTuplePrealloc's estimate of whether the
space_allowed limit is exceeded. Be more consistent about tuples that
are exactly HASH_CHUNK_THRESHOLD in size (they're "small", not "large").
Neither of these things explain the current buildfarm unhappiness, but
they're still bugs.
Tom Lane [Wed, 3 Jan 2018 17:35:09 +0000 (12:35 -0500)]
Teach eval_const_expressions() to handle some more cases.
Add some infrastructure (mostly macros) to make it easier to write
typical cases for constant-expression simplification. Add simplification
processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types,
which formerly went unsimplified even if all their inputs were constants.
Also teach it to simplify FieldSelect from a composite constant.
Make use of the new infrastructure to reduce the amount of code needed
for the existing ArrayExpr and ArrayCoerceExpr cases.
One existing test case changes output as a result of the fact that
RowExpr can now be folded to a constant. All the new code is exercised
by existing test cases according to gcov, so I feel no need to add
additional tests.
While ldaptls=1 provides an RFC 4513 conforming way to do LDAP
authentication with TLS encryption, there was an earlier de facto
standard way to do LDAP over SSL called LDAPS. Even though it's not
enshrined in a standard, it's still widely used and sometimes required
by organizations' network policies. There seems to be no reason not to
support it when available in the client library. Therefore, add support
when using OpenLDAP 2.4+ or Windows. It can be configured with
ldapscheme=ldaps or ldapurl=ldaps://...
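For example, either of these pg_hba.conf lines requests LDAPS (server
host name and base DN here are hypothetical):

    host all all 0.0.0.0/0 ldap ldapserver=ldap.example.net ldapscheme=ldaps ldapbasedn="dc=example,dc=net"
    host all all 0.0.0.0/0 ldap ldapurl="ldaps://ldap.example.net/dc=example,dc=net?uid?sub"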
Add tests for both ways of requesting LDAPS and a test for the
pre-existing ldaptls=1. Modify the 001_auth.pl test for "diagnostic
messages", which was previously relying on the server rejecting
ldaptls=1.
Author: Thomas Munro
Reviewed-By: Peter Eisentraut
Discussion: https://postgr.es/m/CAEepm=1s+pA-LZUjQ-9GQz0Z4rX_eK=DFXAF1nBQ+ROPimuOYQ@mail.gmail.com
Andres Freund [Wed, 3 Jan 2018 02:02:37 +0000 (18:02 -0800)]
Simplify representation of aggregate transition values a bit.
Previously, aggregate transition values for hash-based and other forms of
aggregation (i.e. sorted aggregation and aggregation without GROUP BY)
were represented differently. Hash-based aggregation used a
grouping-set-indexed array pointing to an array of transition values,
whereas other forms of aggregation used one flattened array, with the
index computed from the grouping set and transition offsets.
That made upcoming changes hard, so represent both as a
grouping-set-indexed array of per-group data.
As a nice side-effect this also makes aggregation slightly faster,
because computing offsets with `transno + (setno * numTrans)` turns
out not to be that cheap (too big for x86 lea for example).
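Schematically (variable names illustrative, not the exact code):

    /* before: one flattened array; every access pays a multiply-add */
    pergroupstate = &pergroup_flat[transno + (setno * numTrans)];

    /* after: an array indexed by grouping set, pointing at per-group data */
    pergroupstate = &pergroups[setno][transno];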
Author: Andres Freund
Discussion: https://postgr.es/m/20171128003121.nmxbm2ounxzb6n2t@alap3.anarazel.de
Tom Lane [Wed, 3 Jan 2018 02:23:02 +0000 (21:23 -0500)]
Ensure proper alignment of tuples in HashMemoryChunkData buffers.
The previous coding relied (without any documentation) on the data[]
member of HashMemoryChunkData being at a MAXALIGN'ed offset. If it
was not, the tuples would not be maxaligned either, leading to failures
on alignment-picky machines. While there seems to be no live bug on any
platform we support, this is clearly pretty fragile: any addition to or
rearrangement of the fields in HashMemoryChunkData could break it.
Let's remove the hazard by getting rid of the data[] member and instead
using pointer arithmetic with an explicitly maxalign'ed offset.
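The replacement idiom, roughly (a sketch of the macros this commit
introduces; exact names assumed):

    /* explicit maxaligned header size instead of a trailing data[] member */
    #define HASH_CHUNK_HEADER_SIZE  MAXALIGN(sizeof(HashMemoryChunkData))
    #define HASH_CHUNK_DATA(hc)     (((char *) (hc)) + HASH_CHUNK_HEADER_SIZE)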
Alvaro Herrera [Tue, 2 Jan 2018 22:16:16 +0000 (19:16 -0300)]
Fix deadlock hazard in CREATE INDEX CONCURRENTLY
Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
supposed to be able to work in parallel, as evidenced by fixes in commit c3d09b3bd23f specifically to support this case. In reality, one of the
sessions would be aborted by a mysterious "deadlock detected" error.
Jeff Janes diagnosed that this is because of leftover snapshots used for
system catalog scans -- this was broken by 8aa3e47510b9 keeping track of
(registering) the catalog snapshot. To fix the deadlocks, it's enough
to de-register that snapshot prior to waiting.
Backpatch to 9.4, which introduced MVCC catalog scans.
Include an isolationtester spec that 8 out of 10 times reproduces the
deadlock with the unpatched code for me (Álvaro).
Author: Jeff Janes
Diagnosed-by: Jeff Janes
Reported-by: Jeremy Finzel
Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com
Peter Eisentraut [Tue, 26 Dec 2017 18:47:18 +0000 (13:47 -0500)]
Don't cast between GinNullCategory and bool
The original idea was that we could use an isNull-style bool array
directly as a GinNullCategory array. However, the existing code already
acknowledges that that doesn't really work, because of the possibility
that bool as currently defined can have arbitrary bit patterns for true
values. So it has to loop through the nullFlags array to set each bool
value to an acceptable value. But if we are looping through the whole
array anyway, we might as well build a proper GinNullCategory array
instead and abandon the type casting. That makes the code much safer in
case bool is ever changed to something else.
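In essence (a sketch of the loop):

    /* build a real GinNullCategory array instead of casting the bool array */
    categories = (GinNullCategory *) palloc(nentries * sizeof(GinNullCategory));
    for (i = 0; i < nentries; i++)
        categories[i] = nullFlags[i] ? GIN_CAT_NULL_KEY : GIN_CAT_NORM_KEY;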
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Andres Freund [Mon, 1 Jan 2018 22:38:23 +0000 (14:38 -0800)]
Fix EXPLAIN ANALYZE output for Parallel Hash.
In a race case, EXPLAIN ANALYZE could fail to display correct nbatch
and size information. Refactor so that participants report only on
batches they worked on rather than trying to report on all of them,
and teach explain.c to consider the HashInstrumentation object from
all participants instead of picking the first one it can find. This
should fix an occasional build farm failure in the "join" regression
test.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/30219.1514428346%40sss.pgh.pa.us
Noah Misch [Mon, 1 Jan 2018 05:58:29 +0000 (21:58 -0800)]
In tests, await an LSN no later than the recovery target.
Otherwise, the test fails with "Timed out while waiting for standby to
catch up". This happened rarely, perhaps only when autovacuum wrote WAL
between our choosing the recovery target and choosing the LSN to await.
Commit b26f7fa6ae2b4e5d64525b3d5bc66a0ddccd9e24 fixed one case of this.
Fix two more. Back-patch to 9.6, which introduced the affected test.
Tom Lane [Sun, 31 Dec 2017 22:20:17 +0000 (17:20 -0500)]
Merge coding of return/exit/continue cases in plpgsql's loop statements.
plpgsql's five different loop control statements contained three distinct
implementations of the same (or what ought to be the same, at least)
logic for handling return/exit/continue result codes from their child
statements. At best, that's trouble waiting to happen, and there seems
no very good reason for the coding to be so different. Refactor so that
all the common logic is expressed in a single macro.
Tom Lane [Sun, 31 Dec 2017 22:04:11 +0000 (17:04 -0500)]
Improve regression tests' code coverage for plpgsql control structures.
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements. Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.
Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql. So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter. I then
added new test cases that exercise the bits of code complained of above.
Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests. That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation. So testing each usage of
that implementation separately seems redundant.
In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.
Andres Freund [Fri, 29 Dec 2017 20:38:15 +0000 (12:38 -0800)]
Perform slot validity checks in a separate pass over expression.
This reduces code duplication a bit, but the primary benefit is that it
makes JITing expression evaluation easier. When JITing, we can't simply
change the opcode without recompiling, as was previously done in the
interpreted case. Nor do we want to carry around unnecessary branches
to avoid re-checking over and over.
As a minor side-effect this makes ExecEvalStepOp() O(log(N)) rather
than O(N).
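A schematic sketch of the O(log(N)) reverse lookup (names hypothetical,
not the actual code): keep a table of (implementation address, opcode)
pairs sorted by address and bsearch it:

    typedef struct OpLookup
    {
        void       *impl;       /* address of a step implementation */
        ExprEvalOp  op;         /* the opcode it corresponds to */
    } OpLookup;

    static int
    op_compare(const void *a, const void *b)
    {
        const void *la = ((const OpLookup *) a)->impl;
        const void *lb = ((const OpLookup *) b)->impl;

        return (la < lb) ? -1 : ((la > lb) ? 1 : 0);
    }

    /* in ExecEvalStepOp: bsearch the step's function pointer */
    key.impl = (void *) op->opcode;
    entry = bsearch(&key, lookup_table, nentries, sizeof(OpLookup), op_compare);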
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Fri, 29 Dec 2017 20:26:29 +0000 (12:26 -0800)]
Rely on executor utils to build targetlist for DML RETURNING.
This is useful because it gets rid of the sole direct user of
ExecAssignResultType(). A future commit will likely make use of that
and combine creating the targetlist with the initialization of the
result slot. But it seems like good code hygiene anyway.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Magnus Hagander [Fri, 29 Dec 2017 15:19:51 +0000 (16:19 +0100)]
Properly set base backup backends to active in pg_stat_activity
When walsenders were included in pg_stat_activity, only the ones
actually streaming WAL were listed as active. In particular, the
connections sending base backups were listed as being idle, which means
that a regular pg_basebackup would show up with one active and one idle
connection, when both were active.
This patch sets all walsenders to active while they are doing work
(including very fast things like IDENTIFY_SYSTEM), and then back to
idle. Details about exactly what they are doing are available in
pg_stat_replication.
Patch by me, review by Michael Paquier and David Steele.
Simon Riggs [Fri, 29 Dec 2017 14:30:33 +0000 (14:30 +0000)]
Fix race condition when changing synchronous_standby_names
A momentary window exists when synchronous_standby_names
changes that allows commands issued after the change to
continue to act as async until the change becomes visible.
Remove the race by using a more appropriate test in syncrep.c.
Author: Asim Rama Praveen and Ashwin Agrawal
Reported-by: Xin Zhang, Ashwin Agrawal, and Asim Rama Praveen
Reviewed-by: Michael Paquier, Masahiko Sawada
Andres Freund [Thu, 28 Dec 2017 10:41:53 +0000 (02:41 -0800)]
Fix rare assertion failure in parallel hash join.
When a backend runs out of inner tuples to hash, it should detach from
grow_batch_barrier only after it has flushed all batches to disk and
merged counters, not before. Otherwise a concurrent backend in
ExecParallelHashIncreaseNumBatches() could stop waiting for this
backend and try to read tuples before they have been written. This
commit reorders those operations and should fix the assertion failures
seen occasionally on the build farm since commit 1804284042e659e7d16904e7bbb0ad546394b6a3.
Author: Thomas Munro
Discussion: https://postgr.es/m/E1eRwXy-0004IK-TO%40gemulon.postgresql.org
Teodor Sigaev [Wed, 27 Dec 2017 15:25:37 +0000 (18:25 +0300)]
Update relation's stats in pg_class during vacuum full.
Hash indexes depend on estimates of the numbers of tuples and pages in a
relation; an incorrect value can cause an index to grow significantly.
Vacuum full recreates the heap and reindexes all indexes before the stats
are renewed. The patch fixes that, so the indexes will see correct values.
Backpatch to v10 only, because earlier versions don't have a usable hash
index, and hash index growth is the only user-visible symptom.
Author: Amit Kapila
Reviewed-by: Ashutosh Sharma, me
Discussion: https://www.postgresql.org/message-id/flat/20171115232922.5tomkxnw3iq6jsg7@inml.weebeastie.net
Teodor Sigaev [Mon, 25 Dec 2017 15:59:38 +0000 (18:59 +0300)]
Add polygon opclass for SP-GiST
The polygon opclass uses the compress-method feature of SP-GiST added
earlier; for now it is the only operator class that uses this feature.
SP-GiST actually indexes the bounding boxes of the input polygons, so some
of the supported operations are lossy. The opclass reuses most methods of
the corresponding SP-GiST opclass over boxes, treating each bounding box
as a point in 4D space.
Bump catalog version.
Authors: Nikita Glukhov, Alexander Korotkov, with minor editorialization by me
Reviewed-By: all authors + Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
Tom Lane [Fri, 22 Dec 2017 17:08:06 +0000 (12:08 -0500)]
Fix UNION/INTERSECT/EXCEPT over no columns.
Since 9.4, we've allowed the syntax "select union select" and variants
of that. However, the planner wasn't expecting a no-column set operation
and ended up treating the set operation as if it were UNION ALL.
Turns out it's trivial to fix in v10 and later; we just need to be careful
about not generating a Sort node with no sort keys. However, since a weird
corner case like this is never going to be exercised by developers, we'd
better have thorough regression tests if we want to consider it supported.
Teodor Sigaev [Fri, 22 Dec 2017 10:33:16 +0000 (13:33 +0300)]
Add optional compression method to SP-GiST
The patch allows the column type and the value type stored in SP-GiST leaf
tuples to differ. The main applications of the feature are transforming a
complex column type into a simpler indexed type, or truncating overly long
values; the transformation can be lossy. A simple example: polygons are
converted to their bounding boxes, and an opclass doing exactly that
follows.
Authors: me, Heikki Linnakangas, Alexander Korotkov, Nikita Glukhov
Reviewed-By: all authors + Darafei Praliaskouski
Discussions:
https://www.postgresql.org/message-id/5447B3FF.2080406@sigaev.ru
https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru#54907069.1030506@sigaev.ru
Alvaro Herrera [Thu, 21 Dec 2017 22:07:32 +0000 (19:07 -0300)]
Minor edits to catalog files and scripts
This fixes a few typos and small mistakes; it also cleans up a few
minor stylistic issues. The biggest functional change is that
Gen_fmgrtab.pl no longer knows the OID of language 'internal'.
Author: John Naylor
Discussion: https://postgr.es/m/CAJVSVGXAkwbk-A9QHHHf00N905kKisyQbaYwKqaRpze_gPXGfg@mail.gmail.com
Robert Haas [Thu, 21 Dec 2017 18:10:51 +0000 (13:10 -0500)]
Adjust assertion in GetCurrentCommandId.
currentCommandIdUsed is only used to skip redundant increments of the
command counter, and CommandCounterIncrement() is categorically denied
under parallelism anyway. Therefore, it's OK for
GetCurrentCommandId() to mark the counter value used, as long as it
happens in the leader, not a worker.
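The adjusted function now reads roughly like this (a sketch, condensed
from the commit's description):

    CommandId
    GetCurrentCommandId(bool used)
    {
        if (used)
        {
            /*
             * Only the leader may mark the command id as used, since only
             * the leader performs writes; CommandCounterIncrement() is
             * forbidden under parallelism anyway.
             */
            Assert(!IsParallelWorker());
            currentCommandIdUsed = true;
        }
        return currentCommandId;
    }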
Tom Lane [Thu, 21 Dec 2017 17:57:41 +0000 (12:57 -0500)]
Rearrange execution of PARAM_EXTERN Params for plpgsql's benefit.
This patch does three interrelated things:
* Create a new expression execution step type EEOP_PARAM_CALLBACK
and add the infrastructure needed for add-on modules to generate that.
As discussed, the best control mechanism for that seems to be to add
another hook function to ParamListInfo, which will be called by
ExecInitExpr if it's supplied and a PARAM_EXTERN Param is found.
For stand-alone expressions, we add a new entry point to allow the
ParamListInfo to be specified directly, since it can't be retrieved
from the parent plan node's EState.
* Redesign the API for the ParamListInfo paramFetch hook so that the
ParamExternData array can be entirely virtual. This also lets us get rid
of ParamListInfo.paramMask, instead leaving it to the paramFetch hook to
decide which param IDs should be accessible or not. plpgsql_param_fetch
was already doing the identical masking check, so having callers do it too
seemed redundant. While I was at it, I added a "speculative" flag to
paramFetch that the planner can specify as TRUE to avoid unwanted failures.
This solves an ancient problem for plpgsql that it couldn't provide values
of non-DTYPE_VAR variables to the planner for fear of triggering premature
"record not assigned yet" or "field not found" errors during planning.
* Rework plpgsql to get rid of the need for "unshared" parameter lists,
by dint of turning the single ParamListInfo per estate into a nearly
read-only data structure that doesn't instantiate any per-variable data.
Instead, the paramFetch hook controls access to per-variable data and can
make the right decisions on the fly, replacing the cases that we used to
need multiple ParamListInfos for. This might perhaps have been a
performance loss on its own, but by using a paramCompile hook we can
bypass plpgsql_param_fetch entirely during normal query execution.
(It's now only called when, eg, we copy the ParamListInfo into a cursor
portal. copyParamList() or SerializeParamList() effectively instantiate
the virtual parameter array as a simple physical array without a
paramFetch hook, which is what we want in those cases.) This allows
reverting most of commit 6c82d8d1f, though I kept the cosmetic
code-consolidation aspects of that (eg the assign_simple_var function).
Performance testing shows this to be at worst a break-even change,
and it can provide wins ranging up to 20% in test cases involving
accesses to fields of "record" variables. The fact that values of
such variables can now be exposed to the planner might produce wins
in some situations, too, but I've not pursued that angle.
In passing, remove the "parent" pointer from the arguments to
ExecInitExprRec and related functions, instead storing that pointer in a
transient field in ExprState. The ParamListInfo pointer for a stand-alone
expression is handled the same way; we'd otherwise have had to add
yet another recursively-passed-down argument in expression compilation.
Alvaro Herrera [Thu, 21 Dec 2017 17:21:39 +0000 (14:21 -0300)]
Get rid of copy_partition_key
That function currently exists to avoid leaking memory in
CacheMemoryContext in case of trouble while the partition key is being
built, but there's a better way: allocate everything in a memcxt that
goes away if the current (sub)transaction fails, and once the partition
key is built and no further errors can occur, make the memcxt permanent
by making it a child of CacheMemoryContext.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20171027172730.eh2domlkpn4ja62m@alvherre.pgsql
Tom Lane [Thu, 21 Dec 2017 15:56:57 +0000 (10:56 -0500)]
Avoid putting build-location-dependent strings into generated files.
Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.
That looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just
the name of the script file, but a full path for it. We'd prefer
that you get identical results whether using VPATH or not, so this
is a bad thing.
Some of these places also printed their input file name(s), causing
an additional hazard of the same type.
Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though). Instead just write the script name verbatim. While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.
Not all of the changes made here actually affect any files shipped
in finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it
could matter.
Robert Haas [Thu, 21 Dec 2017 14:09:04 +0000 (09:09 -0500)]
Cancel CV sleep during subtransaction abort.
Generally, error recovery paths that need to do things like
LWLockReleaseAll and pgstat_report_wait_end also need to call
ConditionVariableCancelSleep, but AbortSubTransaction was missed.
Since subtransaction abort might destroy the DSM segment that
contains the ConditionVariable stored in cv_sleep_target, this
can result in a crash for anything using condition variables.
Andres Freund [Thu, 21 Dec 2017 07:39:21 +0000 (23:39 -0800)]
Add parallel-aware hash joins.
Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
Hash Join with Parallel Hash. While hash joins could already appear in
parallel queries, they were previously always parallel-oblivious and had a
partial subplan only on the outer side, meaning that the work of the inner
subplan was duplicated in every worker.
After this commit, the planner will consider using a partial subplan on the
inner side too, using the Parallel Hash node to divide the work over the
available CPU cores and combine its results in shared memory. If the join
needs to be split into multiple batches in order to respect work_mem, then
workers process different batches as much as possible and then work together
on the remaining batches.
The advantages of a parallel-aware hash join over a parallel-oblivious hash
join used in a parallel query are that it:
* avoids wasting memory on duplicated hash tables
* avoids wasting disk space on duplicated batch files
* divides the work of building the hash table over the CPUs
One disadvantage is that there is some communication between the participating
CPUs which might outweigh the benefits of parallelism in the case of small
hash tables. This is avoided by the planner's existing reluctance to supply
partial plans for small scans, but it may be necessary to estimate
synchronization costs in future if that situation changes. Another is that
outer batch 0 must be written to disk if multiple batches are required.
A potential future advantage of parallel-aware hash joins is that right and
full outer joins could be supported, since there is a single set of matched
bits for each hashtable, but that is not yet implemented.
A new GUC enable_parallel_hash is defined to control the feature, defaulting
to on.
Author: Thomas Munro
Reviewed-By: Andres Freund, Robert Haas
Tested-By: Rafia Sabih, Prabhat Sahu
Discussion:
https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
Robert Haas [Tue, 19 Dec 2017 20:26:09 +0000 (15:26 -0500)]
Test instrumentation of Hash nodes with parallel query.
Commit 8526bcb2df76d5171b4f4d6dc7a97560a73a5eff fixed bugs related
to both Sort and Hash, but only added a test case for Sort. This
adds a test case for Hash to match.
Robert Haas [Tue, 19 Dec 2017 17:21:56 +0000 (12:21 -0500)]
Try again to fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, accumulate instrumentation data only once, at the end, instead
of after each execution, to avoid recording inflated totals.
Commit 778e78ae9fa51e58f41cbdc72b293291d02d8984, the previous attempt
at a fix, instead reset the state after every execution, which worked
for the general instrumentation data but had problems for the additional
instrumentation specific to Sort and Hash nodes.
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
following a design proposal from Thomas Munro, with a comment tweak
by me.
Robert Haas [Tue, 19 Dec 2017 15:34:35 +0000 (10:34 -0500)]
Re-fix wrong costing of Sort under Gather Merge.
Commit dc02c7bca4dccf7de278cdc6b3325a829e75b252 changed this call
to create_sort_path() to take -1 rather than limit_tuples because,
at that time, there was no way for a Sort beneath a Gather Merge
to become a top-N sort.
Later, commit 3452dc5240da43e833118484e1e9b4894d04431c provided
a way for a Sort beneath a Gather Merge to become a top-N sort,
but failed to revert the previous commit in the process. Do that.
Report and analysis by Jeff Janes; patch by Thomas Munro; review by
Amit Kapila and by me.
This parameter can be used to enforce the channel binding type used
during a SCRAM authentication. This can be useful to check code paths
where an invalid channel binding type is used by a client and will be
even more useful to allow testing other channel binding types when they
are added.
The default value is tls-unique, which is what RFC 5802 specifies.
Clients can optionally specify an empty value, which has the effect of
not using channel binding, with SCRAM-SHA-256 as the chosen SASL mechanism.
More tests for SCRAM and channel binding are added to the SSL test
suite.
Author: Michael Paquier <michael.paquier@gmail.com>
Andres Freund [Mon, 18 Dec 2017 22:23:19 +0000 (14:23 -0800)]
Add shared tuplestores.
SharedTuplestore allows multiple participants to write into it and
then read the tuples back from it in parallel. Each reader receives
partial results.
For now it always uses disk files, but other buffering policies and
other kinds of scans (ie each reader receives complete results) may be
useful in future.
The upcoming parallel hash join feature will use this facility.
Author: Thomas Munro
Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas
Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
Peter Eisentraut [Mon, 18 Dec 2017 21:59:10 +0000 (16:59 -0500)]
Move SCRAM-related name definitions to scram-common.h
Mechanism names for SCRAM and channel binding have been included
in scram.h by the libpq frontend code, but this header references a set
of routines which are only used by the backend. scram-common.h, on
the contrary, is usable by both the backend and libpq, so getting those
names from there seems more reasonable.
Author: Michael Paquier <michael.paquier@gmail.com>
Fujii Masao [Mon, 18 Dec 2017 18:46:14 +0000 (03:46 +0900)]
Fix bug in cancellation of non-exclusive backup to avoid assertion failure.
Previously an assertion failure occurred when pg_stop_backup() for
non-exclusive backup was aborted while it's waiting for WAL files to
be archived. This assertion failure happened in do_pg_abort_backup()
which was called when a non-exclusive backup was canceled.
do_pg_abort_backup() assumes that there is at least one non-exclusive
backup running when it's called. But pg_stop_backup() can be canceled
even after it marks the end of non-exclusive backup (e.g.,
while waiting for WAL archiving). This broke the assumption that
do_pg_abort_backup() relies on, which caused an assertion failure.
This commit changes do_pg_abort_backup() so that it does nothing
when the non-exclusive backup has already been marked as completed.
That is, the assumption is also changed, and do_pg_abort_backup()
can now handle even the case where it's called when there is
no running backup.
Backpatch to 9.6 where SQL-callable non-exclusive backup was added.
Author: Masahiko Sawada and Michael Paquier
Reviewed-By: Robert Haas and Fujii Masao
Discussion: https://www.postgresql.org/message-id/CAD21AoD2L1Fu2c==gnVASMyFAAaq3y-AQ2uEVj-zTCGFFjvmDg@mail.gmail.com
Robert Haas [Mon, 18 Dec 2017 17:17:37 +0000 (12:17 -0500)]
Fix crashes on plans with multiple Gather (Merge) nodes.
es_query_dsa turns out to be broken by design, because it supposes
that there is only one DSA for the whole query, whereas there is
actually one per Gather (Merge) node. For now, work around that
problem by setting and clearing the pointer around the sections of
code that might need it. It's probably a better idea to get rid of
es_query_dsa altogether in favor of having each node keep track
individually of which DSA is relevant, but that seems like more than
we would want to back-patch.
Thomas Munro, reviewed and tested by Andreas Seltenreich, Amit
Kapila, and by me.
Tom Lane [Sun, 17 Dec 2017 16:52:22 +0000 (11:52 -0500)]
Try harder to detect unavailability of __builtin_mul_overflow(int64).
Commit c04d35f44 didn't quite do the job here, because it still allowed
the compiler to deduce that the function call could be optimized away.
Prevent that by putting the arguments and results in global variables.
Tom Lane [Sun, 17 Dec 2017 05:41:41 +0000 (00:41 -0500)]
Suppress compiler warning about no function return value.
Compilers that don't know that ereport(ERROR) doesn't return
complained about the new coding in scanint8() introduced by
commit 101c7ee3e. Tweak coding to avoid the warning.
Per buildfarm.
Andres Freund [Sat, 16 Dec 2017 20:49:41 +0000 (12:49 -0800)]
Try to detect runtime unavailability of __builtin_mul_overflow(int64).
On some systems the results of 64 bit __builtin_mul_overflow()
operations can be computed at compile time, but not at runtime. The
known cases are arm buildfarm animals using clang, where the runtime
operation is implemented in an unavailable function.
Try to avoid compile-time computation by using volatile arguments to
__builtin_mul_overflow(). In that case we will hopefully get a link
error when it is unavailable, similar to what buildfarm animals dangomushi
and gull are reporting.
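A sketch of the probe (not the verbatim configure test):

    int
    main(void)
    {
        /* volatile defeats compile-time folding, so a missing runtime
         * helper in the compiler's support library fails at link time */
        volatile long long a = 4611686018427387904LL;   /* 2^62 */
        volatile long long b = 2;
        long long result;

        return __builtin_mul_overflow(a, b, &result) ? 0 : 1;
    }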
Author: Andres Freund
Discussion: https://postgr.es/m/20171213213754.pydkyjs6bt2hvsdb@alap3.anarazel.de
Noah Misch [Sat, 16 Dec 2017 18:03:35 +0000 (10:03 -0800)]
Avoid and detect SIGPIPE race in TAP tests.
Don't write to stdin of a psql process that could have already exited
with an authentication failure. Buildfarm members crake and mandrill
have failed once by doing so. Ignore SIGPIPE in all TAP tests.
Back-patch to v10, where these tests were introduced.
Tom Lane [Sat, 16 Dec 2017 16:32:49 +0000 (11:32 -0500)]
Fix oversights in new plpgsql test suite infrastructure.
Fix a couple of minor oversights in commit 632b03da3: the tests
should be run in database "pl_regression" like the other PLs do,
and we should clean up the tests' output cruft during "make clean".
Andres Freund [Tue, 14 Nov 2017 02:45:47 +0000 (18:45 -0800)]
Perform a lot more sanity checks when freezing tuples.
The previous commit has shown that the sanity checks around freezing
aren't strong enough. Strengthening them seems especially important
because the existence of the bug has caused corruption that we don't
want to make even worse during future vacuum cycles.
The errors are emitted with ereport rather than elog, despite being
"should never happen" messages, so a proper error code is emitted. To
avoid superfluous translations, mark messages as internal.
Author: Andres Freund and Alvaro Herrera
Reviewed-By: Alvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
Andres Freund [Fri, 3 Nov 2017 14:52:29 +0000 (07:52 -0700)]
Fix pruning of locked and updated tuples.
Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key share lockers and an updater) was below the cutoff horizon.
As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or
xmax directly). That causes two problems: For one the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound. The bigger problem is that that will
break HOT chains, which in turn can lead to two breakages: first,
failing index lookups, which can e.g. lead to constraints being
violated; second, future HOT prunes / vacuums can end up making
invisible tuples visible again. There are other harmful scenarios.
Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon. That's safe because newer
versions of the tuple will contain the locking xids.
A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.
Author: Andres Freund, with changes by Alvaro Herrera
Reported-By: Daniel Wood
Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter Geoghegan, Daniel Wood, Yi Wen Wong, Michael Paquier
Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
Discussion:
https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
Tom Lane [Thu, 14 Dec 2017 22:19:27 +0000 (17:19 -0500)]
Tighten configure's test for __builtin_constant_p().
Commit 9fa6f00b1 assumed that __builtin_constant_p("string literal")
is TRUE, if the compiler has that function at all. Buildfarm results
show that Sun Studio 12, at least, breaks that assumption. Removing
that usage would leave us with no mechanical check for a very fragile
coding requirement, so instead teach configure to ignore
__builtin_constant_p() if it doesn't behave that way. We could
complicate matters by distinguishing three cases (no such function,
vs does, vs doesn't work for string literals); but for now, that seems
unnecessary because our other existing uses of this function are just
fairly minor optimizations of non-returning elog/ereport. We can live
without that on the small population of compilers that act this way.
Andrew Dunstan [Thu, 14 Dec 2017 16:13:14 +0000 (11:13 -0500)]
Fix walsender timeouts when decoding a large transaction
The logical slots have a fast code path for sending data so as not to
impose too high a per-message overhead. The fast path skips checks for
interrupts and timeouts. However, the existing coding failed to consider
the fact that a transaction with a large number of changes may take a
very long time to be processed and sent to the client. This causes the
walsender to ignore interrupts for potentially a long time and more
importantly it will result in the walsender being killed due to
timeout at the end of such a transaction.
This commit changes the fast path to also check for interrupts and only
allows calling the fast path when the last keepalive check happened less
than half the walsender timeout ago. Otherwise the slower code path will
be taken.
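Schematically (a condensed sketch; function names are taken from
walsender.c, but the control flow is simplified):

    TimestampTz now = GetCurrentTimestamp();

    /* the fast path is allowed only if the last keepalive check was recent */
    if (wal_sender_timeout > 0 &&
        TimestampDifferenceExceeds(last_reply_timestamp, now,
                                   wal_sender_timeout / 2))
    {
        /* too long since the last check: take the slow path, which also
         * processes interrupts, replies and keepalives */
        ProcessRepliesIfAny();
        WalSndKeepaliveIfNecessary(now);
    }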
Backpatched to 9.4
Petr Jelinek, reviewed by Kyotaro HORIGUCHI, Yura Sokolov, Craig
Ringer and Robert Haas.
Andres Freund [Wed, 13 Dec 2017 23:47:01 +0000 (15:47 -0800)]
Allow executor nodes to change their ExecProcNode function.
In order for executor nodes to be able to change their ExecProcNode function
after ExecInitNode() has finished, provide ExecSetExecProcNode(). This allows
any wrapper functions that only execProcnode.c knows about to be reinstalled.
The motivation for wanting to change ExecProcNode after ExecInitNode() has
finished is that it is not known until later whether parallel query is
available, so if a parallel variant is to be installed then ExecInitNode()
is too soon to decide.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com
Andres Freund [Wed, 13 Dec 2017 23:34:20 +0000 (15:34 -0800)]
Add pg_attribute_always_inline.
Sometimes it is useful to be able to insist that the compiler inline a
function that its normal cost analysis would not normally choose to inline.
This can be useful for instantiating different variants of a function that
remove branches of code by constant folding.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com
Andres Freund [Wed, 13 Dec 2017 20:51:32 +0000 (12:51 -0800)]
Add defenses against pre-crash files to BufFileOpenShared().
Crash restarts currently don't clean up temporary files, as a debugging aid.
If a left-over file happens to have the same name as a segment file we're
trying to create, we'll just truncate and reuse it, but there is a problem:
BufFileOpenShared() determines how many segment files exist by trying to open
.0, .1, .2, ... until it finds no more files. It might be confused by a junk
file that has the next segment number. To defend against that, make sure we
always create a gap after the end file by unlinking the following name if it
exists. Also make it an error to try to open a BufFile that doesn't exist
(has no segment 0), so as not to encourage the development of client code
that depends on an interface that we can't reliably provide.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D2jhCbC_GFQJaaDhWxLB4EXtT3vVd5czuRNaqF5CWSTog%40mail.gmail.com
Tom Lane [Wed, 13 Dec 2017 18:55:12 +0000 (13:55 -0500)]
Rethink MemoryContext creation to improve performance.
This patch makes a number of interrelated changes to reduce the overhead
involved in creating/deleting memory contexts. The key ideas are:
* Include the AllocSetContext header of an aset.c context in its first
malloc request, rather than allocating it separately in TopMemoryContext.
This means that we now always create an initial or "keeper" block in an
aset, even if it never receives any allocation requests.
* Create freelists in which we can save and recycle recently-destroyed
asets (this idea is due to Robert Haas).
* In the common case where the name of a context is a constant string,
just store a pointer to it in the context header, rather than copying
the string.
The first change eliminates a palloc/pfree cycle per context, and
also avoids bloat in TopMemoryContext, at the price that creating
a context now involves a malloc/free cycle even if the context never
receives any allocations. That would be a loser for some common
usage patterns, but recycling short-lived contexts via the freelist
eliminates that pain.
Avoiding copying constant strings not only saves strlen() and strcpy()
overhead, but is an essential part of the freelist optimization because
it makes the context header size constant. Currently we make no
attempt to use the freelist for contexts with non-constant names.
(Perhaps someday we'll need to think harder about that, but in current
usage, most contexts with custom names are long-lived anyway.)
The freelist management in this initial commit is pretty simplistic,
and we might want to refine it later --- but in common workloads that
will never matter because the freelists will never get full anyway.
To create a context with a non-constant name, one is now required to
call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME
option. AllocSetContextCreate becomes a wrapper macro, and it includes
a test that will complain about non-string-literal context name
parameters on gcc and similar compilers.
An unfortunate side effect of making AllocSetContextCreate a macro is
that one is now *required* to use the size parameter abstraction macros
(ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of
writing out individual size parameters no longer works unless you
switch to AllocSetContextCreateExtended.
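Usage, per the above (a sketch; the context name and name buffer are
illustrative, and the Extended signature is as described in this commit):

    /* constant name, standard sizes: the common, freelist-eligible case */
    cxt = AllocSetContextCreate(CurrentMemoryContext,
                                "MyWorkContext",
                                ALLOCSET_DEFAULT_SIZES);

    /* runtime-built name: must use the Extended form and request a copy */
    cxt = AllocSetContextCreateExtended(CurrentMemoryContext,
                                        namebuf,
                                        MEMCONTEXT_COPY_NAME,
                                        ALLOCSET_DEFAULT_MINSIZE,
                                        ALLOCSET_DEFAULT_INITSIZE,
                                        ALLOCSET_DEFAULT_MAXSIZE);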
Internally to the memory-context-related modules, the context creation
APIs are simplified, removing the rather baroque original design whereby
a context-type module called mcxt.c which then called back into the
context-type module. That saved a bit of code duplication, but not much,
and it prevented context-type modules from exercising control over the
allocation of context headers.
In passing, I converted the test-and-elog validation of aset size
parameters into Asserts to save a few more cycles. The original thought
was that callers might compute size parameters on the fly, but in practice
nobody does that, so it's useless to expend cycles on checking those
numbers in production builds.
Also, mark the memory context method-pointer structs "const",
just for cleanliness.
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome. The other PLs have their own test suites split up into
smaller files by topic. It would be nice to have that for plpgsql as
well. So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there. That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.
msvc build system changes with help from Michael Paquier
After d0aa965c0a0ac2ff7906ae1b1dad50a7952efa56, one error path in
PLy_spi_execute_fetch_result() could result in the variable "result"
being dereferenced after being set to NULL. Rearrange the code a bit to
fix that.
Also add another SPI_freetuptable() call so that the tuple table is
cleared in all error paths.
discovered by John Naylor <jcnaylor@gmail.com> via scan-build
Andres Freund [Wed, 13 Dec 2017 00:32:31 +0000 (16:32 -0800)]
Use new overflow aware integer operations.
A previous commit added inline functions that provide fast(er) and
correct overflow checks for signed integer math. Use them in a
significant portion of backend code. There's more to touch in both
backend and frontend code, but these were the easily identifiable
cases.
The old overflow checks are noticeable in integer heavy workloads.
A secondary benefit is that getting rid of overflow checks that rely
on signed integer overflow wrapping around will allow us to get rid
of -fwrapv in the future, which in turn slows down other code.
Author: Andres Freund
Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
Andres Freund [Mon, 30 Oct 2017 05:13:54 +0000 (22:13 -0700)]
Provide overflow safe integer math inline functions.
It's not easy to get signed integer overflow checks correct and
fast. Therefore abstract the necessary infrastructure into a common
header providing addition, subtraction and multiplication for 16, 32,
64 bit signed integers.
The new macros aren't yet used, but a followup commit will convert
several open coded overflow checks.
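For example (a sketch of the intended call pattern; the operands a and b
are assumed to be int32 values from the caller):

    #include "common/int.h"

    int32   sum;

    /* each function returns true on overflow */
    if (pg_add_s32_overflow(a, b, &sum))
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("integer out of range")));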
Author: Andres Freund, with some code stolen from Greg Stark
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
Robert Haas [Wed, 13 Dec 2017 00:33:50 +0000 (19:33 -0500)]
Remove obsolete comment.
Commit 8b304b8b72b0a60f1968d39f01cf817c8df863ec removed replacement
selection, but left behind this comment text. The optimization to
which the comment refers is not relevant without replacement
selection, because if we had so few tuples as to require only one
tape, we would have just completed the sort in memory.
Tom Lane [Mon, 11 Dec 2017 21:33:20 +0000 (16:33 -0500)]
Fix corner-case coredump in _SPI_error_callback().
I noticed that _SPI_execute_plan initially sets spierrcontext.arg = NULL,
and only fills it in some time later. If an error were to happen in
between, _SPI_error_callback would try to dereference the null pointer.
This is unlikely --- there's not much between those points except
push-snapshot calls --- but it's clearly not impossible. Tweak the
callback to do nothing if the pointer isn't set yet.
It's been like this for awhile, so back-patch to all supported branches.
Tom Lane [Sun, 10 Dec 2017 17:44:03 +0000 (12:44 -0500)]
Stabilize output of new regression test case.
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent. But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely. We can revert this if we do something
in the backend to eliminate the inconsistency.
Tom Lane [Sat, 9 Dec 2017 17:03:00 +0000 (12:03 -0500)]
Fix plpgsql to reinitialize record variables at block re-entry.
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is). But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list. This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.
Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well. (We don't need to consider other DTYPEs because they don't
represent separately-stored values.) I failed to resist the temptation
to make some nearby cosmetic adjustments, too.
No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior. In any case this change would have no
impact on correctly-written code.
Noah Misch [Sat, 9 Dec 2017 08:58:55 +0000 (00:58 -0800)]
MSVC 2012+: Permit linking to 32-bit, MinGW-built libraries.
Notably, this permits linking to the 32-bit Perl binaries advertised on
perl.org, namely Strawberry Perl and ActivePerl. This has a side effect
of permitting linking to binaries built with obsolete MSVC versions.
By default, MSVC 2012 and later require a "safe exception handler table"
in each binary. MinGW-built, 32-bit DLLs lack the relevant exception
handler metadata, so linking to them failed with error LNK2026. Restore
the semantics of MSVC 2010, which omits the table from a given binary if
some linker input lacks metadata. This has no effect on 64-bit builds
or on MSVC 2010 and earlier. Back-patch to 9.3 (all supported
versions).
Noah Misch [Sat, 9 Dec 2017 02:06:05 +0000 (18:06 -0800)]
MSVC: Test whether 32-bit Perl needs -D_USE_32BIT_TIME_T.
Commits 5a5c2feca3fd858e70ea348822595547e6fa6c15 and b5178c5d08ca59e30f9d9428fa6fdb2741794e65 introduced support for modern
MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
distributions like Strawberry Perl and modern ActivePerl. Perl has no
robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so
test this. Back-patch to 9.3 (all supported versions).
The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
$Config{gccversion} is nonempty. That banks on every gcc-built Perl
using the same ABI. gcc could change its default ABI the way MSVC once
did, and one could build Perl with gcc and the non-default ABI.
The GNU make build system could benefit from a similar test, without
which it does not support MSVC-built Perl. For now, just add a comment.
Most users taking the special step of building Perl with MSVC probably
build PostgreSQL with MSVC.
Tom Lane [Fri, 8 Dec 2017 16:20:50 +0000 (11:20 -0500)]
In plpgsql, unify duplicate variables for record and row cases.
plpgsql's function exec_move_row() handles assignment of a composite
source value to either a PLpgSQL_rec or PLpgSQL_row target variable.
Oddly, rather than taking a single target argument which it could do
run-time type detection on, it was coded to take two separate arguments
(only one of which is allowed to be non-NULL). This choice had then
back-propagated into storing two separate target variables in various
plpgsql statement nodes, with lots of duplicative coding and awkward
interface logic to support that. Simplify matters by folding those
pairs down to single variables, distinguishing the two cases only
where we must ... which turns out to be only in exec_move_row itself.
This is purely refactoring and should not change any behavior.
In passing, remove unused field PLpgSQL_stmt_open.returntype.