Stephen Frost [Thu, 14 Sep 2017 00:04:43 +0000 (20:04 -0400)]
Fix ordering in pg_dump of GRANTs
The order in which GRANTs are output is important: a GRANT issued by a
role that received its privileges via WITH GRANT OPTION has to come
after the GRANT which included the WITH GRANT OPTION. This happens
naturally in the backend during normal operation as we only change
existing ACLs in-place, only add new ACLs to the end, and when removing
an ACL we remove any which depend on it also.
Also, adjust the comments in acl.h to make this clear.
Unfortunately, the updates to pg_dump to handle initial privileges
involved pulling apart ACLs and then combining them back together and
could end up putting them back together in an invalid order, leading to
dumps which wouldn't restore.
Fix this by adjusting the queries used by pg_dump to ensure that the
ACLs are rebuilt in the same order in which they were originally.
Back-patch to 9.6 where the changes for initial privileges were done.
The documentation claimed that one should send
"pg_same_as_startup_message" as the user name in the SCRAM messages, but
this did not match the actual implementation, so remove it.
Bruce Momjian [Wed, 13 Sep 2017 13:11:28 +0000 (09:11 -0400)]
docs: improve pg_upgrade standby instructions
This makes it clear that pg_upgrade standby upgrade instructions should
only be used in link mode, adds examples, and explains how rsync works
with links.
Reported-by: Andreas Joseph Krogh
Discussion: https://postgr.es/m/VisenaEmail.6c.c0e592c5af4ef0a2.15e785dcb61@tc7-visena
Tom Lane [Tue, 12 Sep 2017 02:02:58 +0000 (22:02 -0400)]
Fix RecursiveCopy.pm to cope with disappearing files.
When copying from an active database tree, it's possible for files to be
deleted after we see them in a readdir() scan but before we can open them.
(Once we've got a file open, we don't expect any further errors from it
getting unlinked, though.) Tweak RecursiveCopy so it can cope with this
case, so as to avoid irreproducible test failures.
Back-patch to 9.6 where this code was added. In v10 and HEAD, also
remove unused "use RecursiveCopy" in one recovery test script.
Tom Lane [Sun, 10 Sep 2017 18:59:56 +0000 (14:59 -0400)]
Quick-hack fix for foreign key cascade vs triggers with transition tables.
AFTER triggers using transition tables crashed if they were fired due
to a foreign key ON CASCADE update. This is because ExecEndModifyTable
flushes the transition tables, on the assumption that any trigger that
could need them was already fired during ExecutorFinish. Normally
that's true, because we don't allow transition-table-using triggers
to be deferred. However, foreign key CASCADE updates force any
triggers on the referencing table to be deferred to the outer query
level, by means of the EXEC_FLAG_SKIP_TRIGGERS flag. I don't recall
all the details of why it's like that and am pretty loath to redesign
it right now. Instead, just teach ExecEndModifyTable to skip destroying
the TransitionCaptureState when that flag is set. This will allow the
transition table data to survive until end of the current subtransaction.
This isn't a terribly satisfactory solution, because (1) we might be
leaking the transition tables for much longer than really necessary,
and (2) as things stand, an AFTER STATEMENT trigger will fire once per
RI updating query, ie once per row updated or deleted in the referenced
table. I suspect that is not per SQL spec. But redesigning this is a
research project that we're certainly not going to get done for v10.
So let's go with this hackish answer for now.
In passing, tweak AfterTriggerSaveEvent to not save the transition_capture
pointer into the event record for a deferrable trigger. This is not
necessary to fix the current bug, but it avoids letting dangling pointers
to long-gone transition tables persist in the trigger event queue. That's
at least a safety feature. It might also allow merging shared trigger
states in more cases than before.
I added a regression test that demonstrates the crash on unpatched code,
and also exposes the behavior of firing the AFTER STATEMENT triggers
once per row update.
Per bug #14808 from Philippe Beaudoin. Back-patch to v10.
Tom Lane [Fri, 8 Sep 2017 23:04:32 +0000 (19:04 -0400)]
Fix uninitialized-variable bug.
map_partition_varattnos() failed to set its found_whole_row output
parameter if the given expression list was NIL. This seems to be
a pre-existing bug that chanced to be exposed by commit 6f6b99d13.
It might be unreachable in v10, but I have little faith in that
proposition, so back-patch.
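For illustration only, a minimal sketch of the defensive pattern involved
(the function name and shape here are simplified assumptions, not the
actual PostgreSQL source): an output parameter should be assigned before
any early-return path, so a NIL input still leaves it defined.

    static List *
    map_exprs(List *expr, bool *found_whole_row)
    {
        *found_whole_row = false;   /* assign before any early return */

        if (expr == NIL)
            return NIL;             /* NIL input no longer leaves the
                                     * output parameter uninitialized */

        /* ... walk the expressions, setting *found_whole_row = true
         * if a whole-row Var is seen ... */
        return expr;
    }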
Robert Haas [Thu, 7 Sep 2017 14:55:45 +0000 (10:55 -0400)]
Even if some partitions are foreign, allow tuple routing.
This doesn't allow tuples to be routed to the foreign partitions themselves,
but it permits tuples to be routed to regular partitions despite the
presence of foreign partitions in the same inheritance hierarchy.
Etsuro Fujita, reviewed by Amit Langote and by me.
Tom Lane [Wed, 6 Sep 2017 15:35:31 +0000 (11:35 -0400)]
Add psql variables showing server version and psql version.
We already had a psql variable VERSION that shows the verbose form of
psql's own version. Add VERSION_NAME to show the short form (e.g.,
"11devel") and VERSION_NUM to show the numeric form (e.g., 110000).
Also add SERVER_VERSION_NAME and SERVER_VERSION_NUM to show the short and
numeric forms of the server's version. (We'd probably add SERVER_VERSION
with the verbose string if it were readily available; but adding another
network round trip to get it seems too expensive.)
The numeric forms, in particular, are expected to be useful for scripting
purposes, now that psql can do conditional tests.
Tom Lane [Wed, 6 Sep 2017 14:41:05 +0000 (10:41 -0400)]
Clean up handling of dropped columns in NAMEDTUPLESTORE RTEs.
The NAMEDTUPLESTORE patch piggybacked on the infrastructure for
TABLEFUNC/VALUES/CTE RTEs, none of which can ever have dropped columns,
so the possibility was ignored most places. Fix that, including adding a
specification to parsenodes.h about what it's supposed to look like.
In passing, clean up assorted comments that hadn't been maintained
properly by said patch.
Per bug #14799 from Philippe Beaudoin. Back-patch to v10.
Throttling for sending a base backup in walsender is broken for the case
where there is a lot of WAL traffic, because the latch used to put the
walsender to sleep is also signalled by regular WAL traffic (and each
signal causes an additional batch of data to be sent); the net effect is
that there is no or little actual throttling. This is undesirable, so
rewrite the sleep into a loop to achieve the desired effect.
Author: Jeff Janes, small tweaks by me
Reviewed-by: Antonin Houska
Discussion: https://postgr.es/m/CAMkU=1xH6mde-yL-Eo1TKBGNd0PB1-TMxvrNvqcAkN-qr2E9mw@mail.gmail.com
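A hedged sketch of the loop shape described above (the deadline variable
and timing helper are assumptions, not the committed walsender code):
since ordinary WAL traffic also sets the latch, each wakeup recomputes the
remaining sleep time instead of trusting a single WaitLatch call.

    for (;;)
    {
        long    remaining_ms = throttle_deadline_ms - current_time_ms();

        if (remaining_ms <= 0)
            break;                  /* throttled long enough, resume sending */

        (void) WaitLatch(MyLatch,
                         WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                         remaining_ms,
                         WAIT_EVENT_BASE_BACKUP_THROTTLE);
        ResetLatch(MyLatch);
        CHECK_FOR_INTERRUPTS();     /* a latch wakeup may mean a signal */
    }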
Tom Lane [Sun, 3 Sep 2017 15:01:08 +0000 (11:01 -0400)]
Fix macro-redefinition warning on MSVC.
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,
for consistency with what happens in an autoconf'd build.
But Solution.pm injects another definition of that macro into
ecpg_config.h, leading to justifiable (though harmless) compiler whining.
Make that one consistent too. Back-patch, like the previous patch.
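For illustration (not the actual header contents), the whining comes from
redefining the macro with a different replacement list:

    #define HAVE_LONG_LONG_INT_64       /* empty definition in one header */
    #define HAVE_LONG_LONG_INT_64 1     /* "1" in the other => compiler warns
                                         * about an inconsistent redefinition */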
Tom Lane [Fri, 1 Sep 2017 21:38:54 +0000 (17:38 -0400)]
Improve division of labor between execParallel.c and nodeGather[Merge].c.
Move the responsibility for creating/destroying TupleQueueReaders into
execParallel.c, to avoid duplicative coding in nodeGather.c and
nodeGatherMerge.c. Also, instead of having DestroyTupleQueueReader do
shm_mq_detach, do it in the caller (which is now only ExecParallelFinish).
This means execParallel.c does both the attaching and detaching of the
tuple-queue-reader shm_mqs, which seems less weird than the previous
arrangement.
These changes also eliminate a vestigial memory leak (of the pei->tqueue
array). It's now demonstrable that rescans of Gather or GatherMerge don't
leak memory.
Tom Lane [Fri, 1 Sep 2017 19:14:18 +0000 (15:14 -0400)]
Make [U]INT64CONST safe for use in #if conditions.
Instead of using a cast to force the constant to be the right width,
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.
The old approach to this is very hoary, dating from before we were
willing to require compilers to have working int64 types.
This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX
constants safe to use in preprocessor conditions, where a cast
doesn't work. Other symbolic constants that might be defined using
[U]INT64CONST are likewise safer than before.
Also fix the SIZE_MAX macro to be similarly safe, if we are forced
to provide a definition for that. The test added in commit 2e70d6b5e
happens to do what we want even with the hack "(size_t) -1" definition,
but we could easily get burnt on other tests in future.
Back-patch to all supported branches, like the previous commits.
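A hedged sketch of the suffix approach (the real c.h picks the suffix based
on configure results; these particular bodies are illustrative):

    #define INT64CONST(x)  (x##LL)
    #define UINT64CONST(x) (x##ULL)

    #define PG_INT64_MAX   INT64CONST(0x7FFFFFFFFFFFFFFF)

    /* With a pasted suffix instead of a cast, the constant is legal in
     * preprocessor arithmetic: */
    #if PG_INT64_MAX > 0x7FFFFFFF
    /* ... 64-bit-only code ... */
    #endif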
Tom Lane [Fri, 1 Sep 2017 17:52:53 +0000 (13:52 -0400)]
Ensure SIZE_MAX can be used throughout our code.
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX. We have
a couple of places using the hack "(size_t) -1" as a fallback, but
it wasn't universally available, which means the code added in commit
2e70d6b5e fails to compile everywhere. Move that hack to c.h so that
we can rely on having SIZE_MAX everywhere.
Per discussion, it'd be a good idea to make the macro's value safe
for use in #if-tests, but that will take a bit more work. This is
just a quick expedient to get the buildfarm green again.
Back-patch to all supported branches, like the previous commit.
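A minimal sketch of the c.h fallback, assuming the usual configure guard
(the real header's surrounding conditions may differ):

    #ifdef HAVE_STDINT_H
    #include <stdint.h>             /* C99 platforms define SIZE_MAX here */
    #endif

    #ifndef SIZE_MAX
    #define SIZE_MAX ((size_t) -1)  /* the old per-file hack, now centralized */
    #endif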
The original code had a race condition because it never ensured the
standby was caught up before proceeding; add a wait similar to every
other place that does this.
Do for replication origins what the previous commit did for replication
slots: restore the original behavior of replication origin drop to raise
an error rather than blocking, because users might be depending on the
original behavior. Maintain the blocking behavior when invoked
internally from logical replication subscription handling.
Commit 9915de6c1cb2 changed the default behavior of
DROP_REPLICATION_SLOT so that it would wait until any session holding
the slot active would release it, instead of raising an error. But
users are already depending on the original behavior, so revert to it by
default and add a WAIT option to invoke the new behavior.
Per complaint from Simone Gotti, in
Discussion: https://postgr.es/m/CAEvsy6Wgdf90O6pUvg2wSVXL2omH5OPC-38OD4Zzgk-FXavj3Q@mail.gmail.com
Tom Lane [Thu, 31 Aug 2017 20:20:58 +0000 (16:20 -0400)]
Avoid memory leaks when a GatherMerge node is rescanned.
Rescanning a GatherMerge led to leaking some memory in the executor's
query-lifespan context, because most of the node's working data structures
were simply abandoned and rebuilt from scratch. In practice, this might
never amount to much, given the cost of relaunching worker processes ---
but it's still pretty messy, so let's fix it.
We can rearrange things so that the tuple arrays are simply cleared and
reused, and we don't need to rebuild the TupleTableSlots either, just
clear them. One small complication is that because we might get a
different number of workers on each iteration, we can't keep the old
convention that the leader's gm_slots[] entry is the last one; the leader
might clobber a TupleTableSlot that we need for a worker in a future
iteration. Hence, adjust the logic so that the leader has slot 0 always,
while the active workers have slots 1..n.
Back-patch to v10 to keep all the existing versions of nodeGatherMerge.c
in sync --- because of the renumbering of the slots, there would otherwise
be a very large risk that any future backpatches in this module would
introduce bugs.
Tom Lane [Thu, 31 Aug 2017 19:10:24 +0000 (15:10 -0400)]
Clean up shm_mq cleanup.
The logic around shm_mq_detach was a few bricks shy of a load, because
(contrary to the comments for shm_mq_attach) all it did was update the
shared shm_mq state. That left us leaking a bit of process-local
memory, but much worse, the on_dsm_detach callback for shm_mq_detach
was still armed. That means that whenever we ultimately detach from
the DSM segment, we'd run shm_mq_detach again for already-detached,
possibly long-dead queues. This accidentally fails to fail today,
because we only ever re-use a shm_mq's memory for another shm_mq, and
multiple detach attempts on the last such shm_mq are fairly harmless.
But it's gonna bite us someday, so let's clean it up.
To do that, change shm_mq_detach's API so it takes a shm_mq_handle
not the underlying shm_mq. This makes the callers simpler in most
cases anyway. Also fix a few places in parallel.c that were just
pfree'ing the handle structs rather than doing proper cleanup.
Back-patch to v10 because of the risk that the revenant shm_mq_detach
callbacks would cause a live bug sometime. Since this is an API
change, it's too late to do it in 9.6. (We could make a variant
patch that preserves API, but I'm not excited enough to do that.)
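A hedged sketch of the API change (prototype paraphrased from the commit
text rather than copied from shm_mq.h):

    /* Before: void shm_mq_detach(shm_mq *mq);
     *   -- updated only the shared queue state; the handle's local memory
     *      stayed allocated and its on_dsm_detach callback stayed armed. */
    extern void shm_mq_detach(shm_mq_handle *mqh);
                        /* After: takes the handle, so detaching can also
                         * free local state and disarm the DSM callback. */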
Tom Lane [Wed, 30 Aug 2017 21:21:08 +0000 (17:21 -0400)]
Code review for nodeGatherMerge.c.
Comment the fields of GatherMergeState, and organize them a bit more
sensibly. Comment GMReaderTupleBuffer more usefully too. Improve
assorted other comments that were obsolete or just not very good English.
Get rid of the use of a GMReaderTupleBuffer for the leader process;
that was confusing, since only the "done" field was used, and that
in a way redundant with need_to_scan_locally.
In gather_merge_init, avoid calling load_tuple_array for
already-known-exhausted workers. I'm not sure if there's a live bug there,
but the case is unlikely to be well tested due to timing considerations.
Remove some useless code, such as duplicating the tts_isempty test done by
TupIsNull.
Remove useless initialization of ps.qual, replacing that with an assertion
that we have no qual to check. (If we did, the code would fail to check
it.)
Avoid applying heap_copytuple to a null tuple. While that fails to crash,
it's confusing, and IMO it makes the code less legible, not more so.
Propagate a couple of these changes into nodeGather.c, as well.
Back-patch to v10, partly because of the possibility that the
gather_merge_init change is fixing a live bug, but mostly to keep
the branches in sync to ease future bug fixes.
Tom Lane [Wed, 30 Aug 2017 17:18:16 +0000 (13:18 -0400)]
Separate reinitialization of shared parallel-scan state from ExecReScan.
Previously, the parallel executor logic did reinitialization of shared
state within the ExecReScan code for parallel-aware scan nodes. This is
problematic, because it means that the ExecReScan call has to occur
synchronously (ie, during the parent Gather node's ReScan call). That is
swimming very much against the tide so far as the ExecReScan machinery is
concerned; the fact that it works at all today depends on a lot of fragile
assumptions, such as that no plan node between Gather and a parallel-aware
scan node is parameterized. Another objection is that because ExecReScan
might be called in workers as well as the leader, hacky extra tests are
needed in some places to prevent unwanted shared-state resets.
Hence, let's separate this code into two functions, a ReInitializeDSM
call and the ReScan call proper. ReInitializeDSM is called only in
the leader and is guaranteed to run before we start new workers.
ReScan is returned to its traditional function of resetting only local
state, which means that ExecReScan's usual habits of delaying or
eliminating child rescan calls are safe again.
As with the preceding commit 7df2c1f8d, it doesn't seem to be necessary
to make these changes in 9.6, which is a good thing because the FDW and
CustomScan APIs are impacted.
Revert the reversion commits a20aac890 and 9b644745c. In the wake of
commit 7df2c1f8d, we should get stable buildfarm results from this test;
if not, I'd like to know sooner not later.
Tom Lane [Wed, 30 Aug 2017 13:29:56 +0000 (09:29 -0400)]
Force rescanning of parallel-aware scan nodes below a Gather[Merge].
The ExecReScan machinery contains various optimizations for postponing
or skipping rescans of plan subtrees; for example a HashAgg node may
conclude that it can re-use the table it built before, instead of
re-reading its input subtree. But that is wrong if the input contains
a parallel-aware table scan node, since the portion of the table scanned
by the leader process is likely to vary from one rescan to the next.
This explains the timing-dependent buildfarm failures we saw after
commit a2b70c89c.
The established mechanism for showing that a plan node's output is
potentially variable is to mark it as depending on some runtime Param.
Hence, to fix this, invent a dummy Param (one that has a PARAM_EXEC
parameter number, but carries no actual value) associated with each Gather
or GatherMerge node, mark parallel-aware nodes below that node as dependent
on that Param, and arrange for ExecReScanGather[Merge] to flag that Param
as changed whenever the Gather[Merge] node is rescanned.
This solution breaks an undocumented assumption made by the parallel
executor logic, namely that all rescans of nodes below a Gather[Merge]
will happen synchronously during the ReScan of the top node itself.
But that's fundamentally contrary to the design of the ExecReScan code,
and so was doomed to fail someday anyway (even if you want to argue
that the bug being fixed here wasn't a failure of that assumption).
A follow-on patch will address that issue. In the meantime, the worst
that's expected to happen is that given very bad timing luck, the leader
might have to do all the work during a rescan, because workers think
they have nothing to do, if they are able to start up before the eventual
ReScan of the leader's parallel-aware table scan node has reset the
shared scan state.
Although this problem exists in 9.6, there does not seem to be any way
for it to manifest there. Without GatherMerge, it seems that a plan tree
that has a rescan-short-circuiting node below Gather will always also
have one above it that will short-circuit in the same cases, preventing
the Gather from being rescanned. Hence we won't take the risk of
back-patching this change into 9.6. But v10 needs it.
Peter Eisentraut [Tue, 29 Aug 2017 23:33:24 +0000 (19:33 -0400)]
doc: Avoid sidebar element
The formatting of the sidebar element didn't carry over to the new tool
chain. Instead of inventing a whole new way of dealing with it, just
convert the one use to a "note".
Tom Lane [Tue, 29 Aug 2017 19:38:05 +0000 (15:38 -0400)]
Doc: document libpq's restriction to INT_MAX rows in a PGresult.
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're
pretty much stuck with this limitation. The documentation formerly stated
that the result of PQntuples "might overflow on 32-bit operating systems",
which is just nonsense: that's not where the overflow would happen, and
if you did reach an overflow it would not be on a 32-bit machine, because
you'd have OOM'd long since.
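A hedged usage sketch showing where the limit comes from: libpq's row-count
APIs traffic in plain "int", so a PGresult can describe at most INT_MAX rows.

    #include <libpq-fe.h>

    static void
    walk_result(PGresult *res)
    {
        int     ntuples = PQntuples(res);   /* "int": capped at INT_MAX rows */

        for (int i = 0; i < ntuples; i++)
        {
            const char *val = PQgetvalue(res, i, 0);

            /* ... use val ... */
            (void) val;
        }
    }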
Tom Lane [Tue, 29 Aug 2017 19:18:01 +0000 (15:18 -0400)]
Teach libpq to detect integer overflow in the row count of a PGresult.
Adding more than 1 billion rows to a PGresult would overflow its ntups and
tupArrSize fields, leading to client crashes. It'd be desirable to use
wider fields on 64-bit machines, but because all of libpq's external APIs
use plain "int" for row counters, that's going to be hard to accomplish
without an ABI break. Given the lack of complaints so far, and the general
pain that would be involved in using such huge PGresults, let's settle for
just preventing the overflow and reporting a useful error message if it
does happen. Also, for a couple more lines of code we can increase the
threshold of trouble from INT_MAX/2 to INT_MAX rows.
To do that, refactor pqAddTuple() to allow returning an error message that
replaces the default assumption that it failed because of out-of-memory.
Along the way, fix PQsetvalue() so that it reports all failures via
pqInternalNotice(). It already did so in the case of bad field number,
but neglected to report anything for other error causes.
Because of the potential for crashes, this seems like a back-patchable
bug fix, despite the lack of field reports.
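A hedged sketch of the refactored shape (the function name and parameters
follow the commit text; the body is a simplification, not the libpq source):

    static bool
    pqAddTuple(PGresult *res, PGresAttValue *tup, const char **errmsgp)
    {
        *errmsgp = NULL;            /* NULL still means "out of memory" */

        if (res->ntups >= res->tupArrSize)
        {
            if (res->ntups == INT_MAX)
            {
                *errmsgp = "PGresult cannot support more than INT_MAX tuples";
                return false;       /* caller reports this message instead of
                                     * assuming an allocation failure */
            }
            /* ... grow res->tuples, returning false (with *errmsgp left
             * NULL) if the allocation fails ... */
        }
        res->tuples[res->ntups++] = tup;
        return true;
    }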
Tom Lane [Tue, 29 Aug 2017 13:34:21 +0000 (09:34 -0400)]
Improve docs about numeric formatting patterns (to_char/to_number).
The explanation about "0" versus "9" format characters was confusing
and arguably wrong; the discussion of sign handling wasn't very good
either. Notably, while it's accurate to say that "FM" strips leading
zeroes in date/time values, what it really does with numeric values
is to strip *trailing* zeroes, and then only if you wrote "9" rather
than "0". Per gripes from Erwin Brandstetter.
Tom Lane [Mon, 28 Aug 2017 14:14:20 +0000 (10:14 -0400)]
Fix over-aggressive sanity check in misc_sanity.sql.
Fix thinko in commit 8be8510cf: it's okay to have dbid == 0 in normal
(non-pin) entries in pg_shdepend, because global objects such as
databases are entered that way. The test would pass so long as it
was run in a cluster containing no databases/tablespaces owned by,
or granted to, roles other than the bootstrap superuser. That's the
expected situation for "make check", but for "make installcheck", not
so much.
Peter Eisentraut [Sat, 26 Aug 2017 13:21:46 +0000 (09:21 -0400)]
pg_test_timing: Some NLS fixes
The string "% of total" was marked by xgettext to be a c-format, but it
is actually not, so mark up the source to prevent that.
Compute the column widths of the final display dynamically based on the
translated strings, so that translations don't mess up the display
accidentally.
Robert Haas [Fri, 25 Aug 2017 19:07:44 +0000 (15:07 -0400)]
Improve low-level backup documentation.
Our documentation hasn't really caught up with the fact that
non-exclusive backups can now be taken using pg_start_backup and
pg_stop_backup even on standbys. Update, also correcting some
errors introduced by 52f8a59dd953c6820baf153e97cf07d31b8ac1d6.
Updates to the 9.6 documentation are needed as well, but that
will need a separate patch as some things are different on that
version.
David Steele, reviewed by Robert Haas and Michael Paquier
Andres Freund [Thu, 24 Aug 2017 22:07:40 +0000 (15:07 -0700)]
Fix harmless thinko in dsa.c.
Commit 16be2fd100199bdf284becfcee02c5eb20d8a11d added DSA_ALLOC_HUGE,
DSA_ALLOC_ZERO and DSA_ALLOC_NO_OOM which have the same numerical
values and meanings as the similarly named MCXT_... macros. In one
place we accidentally used MCXT_ALLOC_NO_OOM when DSA_ALLOC_NO_OOM is
wanted, so tidy that up.
Author: Thomas Munro
Discussion: http://postgr.es/m/CAEepm=2AimHxVkkxnMfQvbZMkXy0uKbVa0-D38c5-qwrCm4CMQ@mail.gmail.com
Backpatch: 10, where dsa was introduced.
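For illustration (not the actual dsa.c hunk), the DSA_ spelling is the one
DSA callers should use, even though the MCXT_ flag happens to share its value:

    dsa_pointer dp;

    dp = dsa_allocate_extended(area, size, DSA_ALLOC_NO_OOM);  /* not MCXT_ALLOC_NO_OOM */
    if (!DsaPointerIsValid(dp))
        elog(ERROR, "out of dynamic shared memory");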
Peter Eisentraut [Thu, 24 Aug 2017 19:29:35 +0000 (15:29 -0400)]
pg_upgrade: Remove dead code
Remove code meant for upgrading to a particular version of PostgreSQL
9.0. Since pg_upgrade only supports upgrading to the current major
version, this code is no longer useful.
Peter Eisentraut [Thu, 24 Aug 2017 18:04:28 +0000 (14:04 -0400)]
Increase SCRAM salt length
The original value 12 was set based on RFC 5802 for SCRAM-SHA-1, but RFC
7677 for SCRAM-SHA-256 uses 16, so use that. (This does not affect the
validity of already stored verifiers.)
Andres Freund [Tue, 22 Aug 2017 14:46:05 +0000 (07:46 -0700)]
Backpatch introduction of TupleDescAttr(tupdesc, i).
Commits 2cd70845240 / c6293249d changed the way individual attributes in a
TupleDesc are stored / accessed. To reduce the effort of making
extensions compatible with PostgreSQL 11, and to ease future
backpatching, backpatch introduction of TupleDescAttr() to all
releases. Do not backpatch change in storage, as that'd be a breaking
change for existing and working extensions.
Author: Andres Freund
Discussion: https://postgr.es/m/20170820181723.tdswdinzptbcwhrr@alap3.anarazel.de
Backpatch: 9.2-
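A hedged sketch of what the backpatched macro buys extension code: one
spelling that compiles both before and after the storage change (the
fallback definition below is an assumption for branches that predate the
backpatch, not part of this commit):

    #include "postgres.h"
    #include "access/tupdesc.h"

    #ifndef TupleDescAttr           /* branches without the backpatched macro */
    #define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)])
    #endif

    static void
    show_attnames(TupleDesc tupdesc)
    {
        for (int i = 0; i < tupdesc->natts; i++)
        {
            Form_pg_attribute att = TupleDescAttr(tupdesc, i);

            elog(DEBUG1, "attribute %d is \"%s\"", i + 1, NameStr(att->attname));
        }
    }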
Peter Eisentraut [Mon, 21 Aug 2017 13:17:06 +0000 (09:17 -0400)]
Expand set of predefined ICU locales
Install language+region combinations even if they are not distinct from
the language's base locale. This gives better long-term stability of
the set of predefined locales and makes the predefined locales less
implementation-dependent and more practical for users.
Noah Misch [Mon, 21 Aug 2017 04:22:18 +0000 (21:22 -0700)]
Inject $(ICU_LIBS) regardless of platform.
It appeared in a conditional that excludes AIX, Cygwin and MinGW. Give
ICU support a chance to work on those platforms. Back-patch to v10,
where ICU support was introduced.
Tom Lane [Sat, 19 Aug 2017 17:39:37 +0000 (13:39 -0400)]
Fix possible core dump in parallel restore when using a TOC list.
Commit 3eb9a5e7c unintentionally introduced an ordering dependency
into restore_toc_entries_prefork(). The existing coding of
reduce_dependencies() contains a check to skip moving a TOC entry
to the ready_list if it wasn't initially in the pending_list.
This used to suffice to prevent reduce_dependencies() from trying to
move anything into the ready_list during restore_toc_entries_prefork(),
because the pending_list stayed empty throughout that phase; but it no
longer does. The problem doesn't manifest unless the TOC has been
reordered by SortTocFromFile, which is how I missed it in testing.
To fix, just add a test for ready_list == NULL, converting the call
with NULL from a poor man's sanity check into an explicit command
not to touch TOC items' list membership. Clarify some of the comments
around this; in particular, note the primary purpose of the check for
pending_list membership, which is to ensure that we can't try to restore
the same item twice, in case a TOC list forces it to be restored before
its dependency count goes to zero.
Per report from Fabrízio de Royes Mello. Back-patch to 9.3, like the
previous commit.
Robert Haas [Fri, 18 Aug 2017 17:01:05 +0000 (13:01 -0400)]
Fix interaction of triggers, partitioning, and EXPLAIN ANALYZE.
Add a new EState member es_leaf_result_relations, so that the trigger
code knows about ResultRelInfos created by tuple routing. Also make
sure ExplainPrintTriggers knows about partition-related
ResultRelInfos.
That code patch was good as far as it went, but the associated test case
has exposed fundamental brain damage in the parallel scan mechanism,
which is going to take nontrivial work to correct. In the interests of
getting the buildfarm back to green so that unrelated work can proceed,
let's temporarily remove the test case.
Tom Lane [Thu, 17 Aug 2017 17:13:47 +0000 (13:13 -0400)]
Further tweaks to compiler flags for PL/Perl on Windows.
It now emerges that we can only rely on Perl to tell us we must use
-D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later. For older versions,
revert to our previous practice of assuming we need that symbol in
all 32-bit Windows builds. This is not ideal, but inquiring into
which compiler version Perl was built with seems far too fragile.
In any case, we had not previously had complaints about these old
Perl versions, so let's assume this is Good Enough. (It's still
better than the situation ante commit 5a5c2feca, in that at least
the effects are confined to PL/Perl rather than the whole PG build.)
Back-patch to all supported versions, like 5a5c2feca and predecessors.
Alvaro Herrera [Tue, 15 Aug 2017 21:14:07 +0000 (18:14 -0300)]
Simplify autovacuum work-item implementation
The initial implementation of autovacuum work-items used a dynamic
shared memory area (DSA). However, it's argued that dynamic shared
memory is not portable enough, so we cannot rely on it being supported
everywhere; at the same time, autovacuum work-items are now a critical
part of the server, so it's not acceptable that they don't work in the
cases where dynamic shared memory is disabled. Therefore, let's fall
back to a simpler implementation of work-items that just uses
autovacuum's main shared memory segment for storage.
Alvaro Herrera [Tue, 15 Aug 2017 16:35:12 +0000 (13:35 -0300)]
Fix error handling path in autovacuum launcher
The original code (since 00e6a16d01) assumed that aborting the
transaction in the autovacuum launcher was sufficient to release all
resources, but in reality the launcher runs quite a lot of code outside
of any transaction. Re-introduce individual cleanup calls to make abort
more robust.
Reported-by: Robert Haas
Discussion: https://postgr.es/m/CA+TgmobQVbz4K_+RSmiM9HeRKpy3vS5xnbkL95gSEnWijzprKQ@mail.gmail.com
Tom Lane [Tue, 15 Aug 2017 15:07:52 +0000 (11:07 -0400)]
Distinguish wait-for-connection from wait-for-write-ready on Windows.
The API for WaitLatch and friends followed the Unix convention in which
waiting for a socket connection to complete is identical to waiting for
the socket to accept a write. While Windows provides a select(2)
emulation that agrees with that, the native WaitForMultipleObjects API
treats them as quite different --- and for some bizarre reason, it will
report a not-yet-connected socket as write-ready. libpq itself has so
far escaped dealing with this because it waits with select(), but in
libpqwalreceiver.c we want to wait using WaitLatchOrSocket. The semantics
mismatch resulted in replication connection failures on Windows, but only
for remote connections (apparently, localhost connections complete
immediately, or at least too fast for anyone to have noticed the problem
in single-machine testing).
To fix, introduce an additional WL_SOCKET_CONNECTED wait flag for
WaitLatchOrSocket, which is identical to WL_SOCKET_WRITEABLE on
non-Windows, but results in waiting for FD_CONNECT events on Windows.
Ideally, we would also distinguish the two conditions in the API for
PQconnectPoll(), but changing that API at this point seems infeasible.
Instead, cheat by checking for PQstatus() == CONNECTION_STARTED to
determine that we're still waiting for the connection to complete.
(This is a cheat mainly because CONNECTION_STARTED is documented as an
internal state rather than something callers should rely on. Perhaps
we ought to change the documentation ... but this patch doesn't.)
Per reports from Jobin Augustine and Igor Neyman. Back-patch to v10
where commit 1e8a85009 exposed this longstanding shortcoming.
Andres Freund, minor fix and some code review/beautification by me
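A hedged sketch of the waiting logic described above (condensed from the
commit text, not the libpqwalreceiver.c hunk):

    int     wakeEvents = WL_LATCH_SET;

    if (PQstatus(conn) == CONNECTION_STARTED)
        wakeEvents |= WL_SOCKET_CONNECTED;  /* FD_CONNECT on Windows; same as
                                             * WL_SOCKET_WRITEABLE elsewhere */
    else
        wakeEvents |= WL_SOCKET_WRITEABLE;

    (void) WaitLatchOrSocket(MyLatch, wakeEvents, PQsocket(conn),
                             -1L, 0 /* wait_event_info */);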
Robert Haas [Tue, 15 Aug 2017 13:16:33 +0000 (09:16 -0400)]
Avoid unnecessary single-child Append nodes.
Before commit d3cc37f1d801a6b5cad9bf179274a8, an inheritance parent
whose only children were temp tables of other sessions would end up
as a simple scan of the parent; but with that commit, we end up with
an Append node, per a report from Ashutosh Bapat. Tweak the logic
so that we go back to the old way, and update the function header
comment for partitioning while we're at it.
Ashutosh Bapat, reviewed by Amit Langote and adjusted by me.
Tom Lane [Mon, 14 Aug 2017 19:43:20 +0000 (15:43 -0400)]
Handle elog(FATAL) during ROLLBACK more robustly.
Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction. In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction. This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal". The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.
We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals. It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext. That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory. This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.
The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check. If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.
This has been like this a long time, so back-patch to all supported
branches.