Andrew Dunstan [Sat, 20 Oct 2018 13:02:36 +0000 (09:02 -0400)]
Lower privilege level of programs calling regression_main
On Windows this means that the regression tests can now safely and
successfully run as Administrator, which is useful in situations like
Appveyor. Elsewhere it's a no-op.
Backpatch to 9.5 - this is harder in earlier branches and not worth the
trouble.
Tom Lane [Sat, 20 Oct 2018 02:22:57 +0000 (22:22 -0400)]
Client-side fixes for delayed NOTIFY receipt.
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
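A minimal client loop along those lines might look like this (a sketch,
not the exact documentation example; error handling elided):

    PGnotify   *notify;

    if (!PQconsumeInput(conn))
        /* handle connection failure */ ;
    while ((notify = PQnotifies(conn)) != NULL)
    {
        fprintf(stderr, "NOTIFY \"%s\" (payload \"%s\") from PID %d\n",
                notify->relname, notify->extra, notify->be_pid);
        PQfreemem(notify);
        PQconsumeInput(conn);   /* absorb new input before each PQnotifies() */
    }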
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Tom Lane [Sat, 20 Oct 2018 01:39:21 +0000 (21:39 -0400)]
Server-side fix for delayed NOTIFY and SIGTERM processing.
Commit 4f85fde8e introduced some code that was meant to ensure that we'd
process cancel, die, sinval catchup, and notify interrupts while waiting
for client input. But there was a flaw: it supposed that the process
latch would be set upon arrival at secure_read() if any such interrupt
was pending. In reality, we might well have cleared the process latch
at some earlier point while those flags remained set -- particularly
notifyInterruptPending, which can't be handled as long as we're within
a transaction.
To fix the NOTIFY case, also attempt to process signals (except
ProcDiePending) before trying to read.
Also, if we see that ProcDiePending is set before we read, forcibly set the
process latch to ensure that we will handle that signal promptly if no data
is available. I also made it set the process latch on the way out, in case
there is similar logic elsewhere. (It remains true that we won't service
ProcDiePending here unless we need to wait for input.)
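In rough pseudocode, the read-side flow described above is (a sketch
with assumed names, not the literal committed code):

    if (ProcDiePending)
        SetLatch(MyLatch);      /* ensure the wait loop notices the signal */
    else
        ProcessClientReadInterrupt(false);  /* service notify et al. first */

    n = secure_raw_read(port, ptr, len);

    SetLatch(MyLatch);          /* on the way out, for similar logic elsewhere */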
The code for handling ProcDiePending during a write needs those changes,
too.
Also be a little more careful about when to reset whereToSendOutput,
and improve related comments.
Back-patch to 9.5 where this code was added. I'm not entirely convinced
that older branches don't have similar issues, but the complaint at hand
is just about the >= 9.5 code.
Tom Lane [Fri, 19 Oct 2018 23:36:34 +0000 (19:36 -0400)]
Sync our copy of the timezone library with IANA release tzcode2018f.
About half of this is purely cosmetic changes to reduce the diff between
our code and theirs, like inserting "const" markers where they have them.
The other half is tracking actual code changes in zic.c and localtime.c.
I don't think any of these represent near-term compatibility hazards, but
it seems best to stay up to date.
I also fixed longstanding bugs in our code for producing the
known_abbrevs.txt list, which by chance hadn't been exposed before,
but which resulted in some garbage output after applying the upstream
changes in zic.c. Notably, because upstream removed their old phony
transitions at the Big Bang, it's now necessary to cope with TZif files
containing no DST transition times at all.
Tom Lane [Fri, 19 Oct 2018 21:01:34 +0000 (17:01 -0400)]
Update time zone data files to tzdata release 2018f.
DST law changes in Chile, Fiji, and Russia (Volgograd).
Historical corrections for China, Japan, Macau, and North Korea.
Note: like the previous tzdata update, this involves a depressingly
large amount of semantically-meaningless churn in tzdata.zi. That
is a consequence of upstream's data compression method assigning
unstable abbreviations to DST rulesets. I complained about that
to them last time, and this version now uses an assignment method
that pays some heed to not changing abbreviations unnecessarily.
So hopefully, that'll be better going forward.
Michael Paquier [Fri, 19 Oct 2018 13:44:12 +0000 (22:44 +0900)]
Use whitelist to choose files scanned with pg_verify_checksums
The original implementation of pg_verify_checksums used a blacklist to
decide which files should be skipped for scanning as they do not include
data checksums, like pg_internal.init or pg_control. However, this
missed two things:
- Some files are created within builds of EXEC_BACKEND and these were
not listed, causing failures on Windows.
- Extensions may create custom files in data folders, causing the tool
to fail as well.
This commit switches to a whitelist-like method instead, checking
whether the files to scan are legitimate relation files. This is close
to a reverse-engineering of the path-building rules defined in
relpath.c, and we could consider refactoring what this patch does so
that all such routines live in a single place. This is left for
later.
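As a rough illustration of the whitelist idea (names and details here
are illustrative, not the committed code), a legitimate relation file
name is a string of digits, optionally followed by a fork suffix and/or
a segment number:

    static bool
    looks_like_relfile(const char *fn)
    {
        size_t      n = strspn(fn, "0123456789");  /* relfilenode digits */

        if (n == 0)
            return false;
        fn += n;
        if (strncmp(fn, "_fsm", 4) == 0)            /* optional fork suffix */
            fn += 4;
        else if (strncmp(fn, "_vm", 3) == 0)
            fn += 3;
        else if (strncmp(fn, "_init", 5) == 0)
            fn += 5;
        if (*fn == '.')                             /* optional segment number */
        {
            n = strspn(fn + 1, "0123456789");
            if (n == 0)
                return false;
            fn += 1 + n;
        }
        return *fn == '\0';
    }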
This is based on a suggestion from Andres Freund. TAP tests are updated
so that multiple file patterns are tested. The bug has been spotted by
various buildfarm members as a result of b34e84f which has introduced
the TAP tests of pg_verify_checksums.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Michael Banck
Discussion: https://postgr.es/m/20181012005614.GC26424@paquier.xyz
Backpatch-through: 11
Tom Lane [Thu, 18 Oct 2018 18:55:23 +0000 (14:55 -0400)]
Still further rethinking of build changes for macOS Mojave.
To avoid the sorts of problems complained of by Jakob Egger, it'd be
best if configure didn't emit any references to the sysroot path at all.
In the case of PL/Tcl, we can do that just by keeping our hands off the
TCL_INCLUDE_SPEC string altogether. In the case of PL/Perl, we need to
substitute -iwithsysroot for -I in the compile commands, which is easily
handled if we change to using a configure output variable that includes
the switch, not only the directory name. Since PL/Tcl and PL/Python
already do it like that, this seems like good consistency cleanup anyway.
Hence, this replaces the advice given to Perl-related extensions in commit 5e2217131; instead of writing "-I$(perl_archlibexp)/CORE", they should
just write "$(perl_includespec)". (The old way continues to work, but not
on recent macOS.)
It's still the case that configure needs to be aware of the sysroot
path internally, but that's cleaner than what we had before.
Tom Lane [Wed, 17 Oct 2018 19:06:38 +0000 (15:06 -0400)]
Fix minor bug in isolationtester.
If the lock wait query failed, isolationtester would report the
PQerrorMessage from some other connection, meaning there would be
no message or an unrelated one. This seems like a pretty unlikely
occurrence, but if it did happen, this bug could make it really
difficult/confusing to figure out what happened. That seems to
justify patching all the way back.
In passing, clean up another place where the "wrong" conn was used
for an error report. That one's not actually buggy because it's
a different alias for the same connection, but it's still confusing
to the reader.
Tom Lane [Wed, 17 Oct 2018 16:26:48 +0000 (12:26 -0400)]
Improve tzparse's handling of TZDEFRULES ("posixrules") zone data.
In the IANA timezone code, tzparse() always tries to load the zone
file named by TZDEFRULES ("posixrules"). Previously, we'd hacked
that logic to skip the load in the "lastditch" code path, which we use
only to initialize the default "GMT" zone during GUC initialization.
That's critical for a couple of reasons: since we do not support leap
seconds, we *must not* allow "GMT" to have leap seconds, and since this
case runs before the GUC subsystem is fully alive, we'd really rather
not take the risk of pg_open_tzfile throwing any errors.
However, that still left the code reading TZDEFRULES on every other
call, something we'd noticed to the extent of having added code to cache
the result so it was only done once per process, not many times.
Andres Freund complained about the static data space used up for the
cache; but as long as the logic was like this, there was no point in
trying to get rid of that space.
We can improve matters by looking a bit more closely at what the IANA
code actually needs the TZDEFRULES data for. One thing it does is
that if "posixrules" is a leap-second-aware zone, the leap-second
behavior will be absorbed into every POSIX-style zone specification.
However, that's a behavior we'd really prefer to do without, since
for our purposes the end effect is to render every POSIX-style zone
name unsupported. Otherwise, the TZDEFRULES data is used only if
the POSIX zone name specifies DST but doesn't include a transition
date rule (e.g., "EST5EDT" rather than "EST5EDT,M3.2.0,M11.1.0").
That is a minority case for our purposes --- in particular, it
never happens when tzload() invokes tzparse() to interpret a
transition date rule string found in a tzdata zone file.
Hence, if we legislate that we're going to ignore leap-second data
from "posixrules", we can postpone the TZDEFRULES load into the path
where we actually need to substitute for a missing date rule string.
That means it will never happen at all in common scenarios, making it
reasonable to dynamically allocate the cache space when it does happen.
Even when the data is already loaded, this saves some cycles in the
common code path since we avoid a memcpy of 23KB or so. And, IMO at
least, this is a less ugly hack on the IANA logic than what we had
before, since it's not messing with the lastditch-vs-regular code paths.
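In outline, the on-demand cache looks something like this (a sketch
with assumed names; load_posixrules() is a hypothetical stand-in for
the actual file-reading logic):

    static struct state *tzdefrules = NULL;     /* loaded on demand */

    if (tzdefrules == NULL)
    {
        tzdefrules = malloc(sizeof(struct state));
        if (tzdefrules != NULL && load_posixrules(tzdefrules) != 0)
        {
            free(tzdefrules);                   /* load failed; use fallback */
            tzdefrules = NULL;
        }
    }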
Back-patch to all supported branches, not so much because this is a
critical change as that I want to keep all our copies of the IANA
timezone code in sync.
Tom Lane [Tue, 16 Oct 2018 20:27:15 +0000 (16:27 -0400)]
Back off using -isysroot on Darwin.
Rethink the solution applied in commit 5e2217131 to get PL/Tcl to
build on macOS Mojave. I feared that adding -isysroot globally might
have undesirable consequences, and sure enough Jakob Egger reported
one: it complicates building extensions with a different Xcode version
than was used for the core server. (I find that a risky proposition
in general, but apparently it works most of the time, so we shouldn't
break it if we don't have to.)
We'd already adopted the solution for PL/Perl of inserting the sysroot
path directly into the -I switches used to find Perl's headers, and we
can do the same thing for PL/Tcl by changing the -iwithsysroot switch
that Apple's tclConfig.sh reports. This restricts the risks to PL/Perl
and PL/Tcl themselves and directly-dependent extensions, which is a lot
more pleasing in general than a global -isysroot switch.
Along the way, tighten the test to see if we need to inject the sysroot
path into $perl_includedir, as I'd speculated about upthread but not
gotten round to doing.
Tom Lane [Tue, 16 Oct 2018 17:56:58 +0000 (13:56 -0400)]
Avoid rare race condition in privileges.sql regression test.
We created a temp table, then switched to a new session, leaving
the old session to clean up its temp objects in background. If that
took long enough, the eventual attempt to drop the user that owns
the temp table could fail, as exhibited today by sidewinder.
Fix by dropping the temp table explicitly when we're done with it.
It's been like this for quite some time, so back-patch to all
supported branches.
Tom Lane [Tue, 16 Oct 2018 16:27:14 +0000 (12:27 -0400)]
Make PostgresNode.pm's poll_query_until() more chatty about failures.
Reporting only the stderr is unhelpful when the problem is that the
server output we're getting doesn't match what was expected. So we
should report the query output too; and just for good measure, let's
print the query we used and the output we expected.
Back-patch to 9.5 where poll_query_until was introduced.
Tom Lane [Tue, 16 Oct 2018 16:01:19 +0000 (12:01 -0400)]
Improve stability of recently-added regression test case.
Commit b5febc1d1 added a contrib/btree_gist test case that has been
observed to fail in the buildfarm as a result of background auto-analyze
updating stats and changing the selected plan. Forestall that by
forcibly analyzing in foreground, instead. The new plan choice is
just as good for our purposes, since we really only care that an
index-only plan does not get selected.
localtime.c's "struct state" is a rather large object, ~23KB. We were
statically allocating one for gmtsub() to use to represent the GMT
timezone, even though that function is not at all heavily used and is
never reached in most backends. Let's malloc it on-demand, instead.
This does pose the question of how to handle a malloc failure, but
there's already a well-defined error report convention here, ie
set errno and return NULL.
We have but one caller of pg_gmtime in HEAD, and two in back branches,
neither of which was bothering to check for errors. Make them do so.
The possible errors are sufficiently unlikely (out-of-range timestamp,
and now malloc failure) that I think elog() is adequate.
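The caller-side check amounts to something like this (a sketch; the
message wording is assumed):

    struct pg_tm *tm = pg_gmtime(&timestamp);

    if (tm == NULL)
        elog(ERROR, "could not convert timestamp to GMT: %m");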
Back-patch to all supported branches to keep our copies of the IANA
timezone code in sync. This particular change is in a stanza that
already differs from upstream, so it's a wash for maintenance purposes
--- but only as long as we keep the branches the same.
Tom Lane [Mon, 15 Oct 2018 18:01:38 +0000 (14:01 -0400)]
Check for stack overrun in standard_ProcessUtility().
ProcessUtility can recurse, and indeed can be driven to infinite
recursion, so it ought to have a check_stack_depth() call. This
covers the reported bug (portal trying to execute itself) and a bunch
of other cases that could perhaps arise somewhere.
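The guard itself is the standard one-liner at function entry (sketch):

    check_stack_depth();    /* errors out cleanly if recursion runs too deep */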
Per bug #15428 from Malthe Borch. Back-patch to all supported branches.
Peter Eisentraut [Mon, 15 Oct 2018 07:48:49 +0000 (09:48 +0200)]
Fixes for "Glyph not available" warnings from FOP
With the PostgreSQL 11 release notes acknowledgments list, FOP reported
WARNING: Glyph "?" (0x144, nacute) not available in font "Times-Roman".
WARNING: Glyph "?" (0x15e, Scedilla) not available in font "Times-Roman".
WARNING: Glyph "?" (0x15f, scedilla) not available in font "Times-Roman".
WARNING: Glyph "?" (0x131, dotlessi) not available in font "Times-Roman".
This is because we have some new contributors whose names use letters
that we haven't used before, and apparently FOP can't handle them out
of the box.
For now, just fix this by "unaccenting" those names. In the future,
maybe this can be fixed better with a different font configuration.
There is also another warning
WARNING: Glyph "?" (0x3c0, pi) not available in font "Times-Roman".
but that existed in previous releases and is not touched here.
This commit documents rounding of the "length" parameter and the absence
of support for unique indexes and for searching for NULLs. Backpatch to
9.6 where contrib/bloom was introduced.
Discussion: https://postgr.es/m/CAF4Au4wPQQ7EHVSnzcLjsbY3oLSzVk6UemZLD1Sbmwysy3R61g%40mail.gmail.com
Author: Oleg Bartunov with minor editorialization by me
Backpatch-through: 9.6
Michael Paquier [Sun, 14 Oct 2018 13:23:29 +0000 (22:23 +0900)]
Avoid duplicate XIDs at recovery when building initial snapshot
On a primary, sets of XLOG_RUNNING_XACTS records are generated on a
periodic basis to allow recovery to build the initial state of
transactions for a hot standby. The set of transaction IDs is created
by scanning all the entries in ProcArray. However, its logic never
counted on the fact that two-phase transactions finishing their prepare
phase can put ProcArray in a state where there are two entries with the
same transaction ID: one for the initial transaction, which gets cleared
when prepare finishes, and a second, dummy, entry to track that the
transaction is still running after prepare finishes. This ensures a
continuous presence of the transaction, so that callers such as
TransactionIdIsInProgress() always see it as alive.
So, if an XLOG_RUNNING_XACTS record takes a standby snapshot while a
two-phase transaction is finishing its prepare, the record can contain
duplicated XIDs, which is a state expected by design. If this record
gets applied on a standby to initialize its recovery state, it would
simply fail, though the odds of facing this failure are very low in
practice. It would be tempting to change the generation of
XLOG_RUNNING_XACTS so that duplicates are removed at the source, but
this would require holding ProcArrayLock for longer, which would impact
all workloads, particularly those making heavy use of two-phase
transactions. Since XLOG_RUNNING_XACTS is in fact used only to
initialize the standby state at recovery, the solution taken here is
instead to discard duplicates when applying the initial snapshot.
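In sketch form, the de-duplication is just a sort-and-unique pass over
the record's XID array (assuming the existing xidComparator helper):

    int         i,
                j = 0;

    qsort(xids, nxids, sizeof(TransactionId), xidComparator);
    for (i = 0; i < nxids; i++)
    {
        if (j == 0 || !TransactionIdEquals(xids[i], xids[j - 1]))
            xids[j++] = xids[i];
    }
    nxids = j;              /* duplicates discarded */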
Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/0c96b653-4696-d4b4-6b5d-78143175d113@postgrespro.ru
Backpatch-through: 9.3
Tom Lane [Sat, 13 Oct 2018 20:31:09 +0000 (16:31 -0400)]
Make an editing pass over v11 release notes.
Set the release date. Do a bunch of copy-editing and markup improvement,
rearrange some stuff into what seemed a more sensible order, move some
things that did not seem to be in the right section.
Tom Lane [Fri, 12 Oct 2018 23:33:56 +0000 (19:33 -0400)]
Remove abstime, reltime, tinterval tables from old regression databases.
In the back branches, drop these tables after the regression tests are
done with them. This fixes failures of cross-branch pg_upgrade testing
caused by these types having been removed in v12. We do lose the ability
to test dump/restore behavior with these types in the back branches, but
the actual loss of code coverage seems to be nil given that there's nothing
very special about these types.
Tom Lane [Fri, 12 Oct 2018 18:26:56 +0000 (14:26 -0400)]
Simplify use of AllocSetContextCreate() wrapper macro.
We can allow this macro to accept either abbreviated or non-abbreviated
allocation parameters by making use of __VA_ARGS__. As noted by Andres
Freund, it's unlikely that any compiler would have __builtin_constant_p
but not __VA_ARGS__, so this gives up little or no error checking, and
it avoids a minor but annoying API break for extensions.
With this change, there is no reason for anybody to call
AllocSetContextCreateExtended directly, so in HEAD I renamed it to
AllocSetContextCreateInternal. It's probably too late for an ABI
break like that in 11, though.
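Ignoring the __builtin_constant_p checking where that is available, the
wrapper reduces to:

    #define AllocSetContextCreate(parent, name, ...) \
        AllocSetContextCreateInternal(parent, name, __VA_ARGS__)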
Alvaro Herrera [Fri, 12 Oct 2018 15:36:26 +0000 (12:36 -0300)]
Correct attach/detach logic for FKs in partitions
There was no code to handle foreign key constraints on partitioned
tables in the case of ALTER TABLE DETACH; and if you happened to ATTACH
a partition that already had an equivalent constraint, that one was
ignored and a new constraint was created. Adding this to the fact that
foreign key cloning reuses the constraint name on the partition instead
of generating a new name (as it probably should, to cater to SQL
standard rules about constraint naming within schemas), the result was a
pretty poor user experience -- the most visible failure was that just
detaching a partition and re-attaching it failed with a duplicate-constraint
error, because it would try to create an identically-named constraint in the
partition. To make matters worse, if you tried to drop the constraint
in the now-independent partition, that would fail because the constraint
was still seen as dependent on the constraint in its former parent
partitioned table:
ERROR: cannot drop inherited constraint "test_result_asset_id_fkey" of relation "test_result_cbsystem_0001_0050_monthly_2018_09"
This fix attacks the problem from two angles: first, when the partition
is detached, the constraint is also marked as independent, so the drop
now works. Second, when the partition is re-attached, we scan existing
constraints searching for one matching the FK in the parent, and if one
exists, we link that one to the parent constraint. So we don't end up
with a duplicate -- and better yet, we don't need to scan the referenced
table to verify that the constraint holds.
To implement this I made a small change to the previously planner-only
struct ForeignKeyCacheInfo to contain the constraint OID; also relcache
now maintains the list of FKs for partitioned tables too.
Backpatch to 11.
Reported-by: Michael Vitale (bug #15425)
Discussion: https://postgr.es/m/15425-2dbc9d2aa999f816@postgresql.org
Andres Freund [Wed, 10 Oct 2018 20:53:02 +0000 (13:53 -0700)]
Fix logical decoding error when system table w/ toast is repeatedly rewritten.
Repeatedly rewriting a mapped catalog table with VACUUM FULL or
CLUSTER could cause logical decoding to fail with:
ERROR, "could not map filenode \"%s\" to relation OID"
To trigger the problem the rewritten catalog had to have live tuples
with toasted columns.
The problem was triggered because, during catalog table rewrites, the
heap_insert() check that prevents logical decoding information from being
emitted for system catalogs failed to treat the new heap's toast table
as a system catalog (because the new heap is not recognized as a
catalog table via RelationIsLogicallyLogged()). The relmapper, in
contrast to the normal catalog contents, does not contain historical
information. After a single rewrite of a mapped table the new relation
is known to the relmapper, but if the table is rewritten twice before
logical decoding occurs, the relfilenode cannot be mapped to a
relation anymore, which then leads us to error out. This only
happens for toast tables, because the main table contents aren't
re-inserted with heap_insert().
The fix is simple, add a new heap_insert() flag that prevents logical
decoding information from being emitted, and accept during decoding
that there might not be tuple data for toast tables.
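In sketch form, the rewrite path's toast insertions now pass the new
flag (surrounding variables assumed):

    heap_insert(toastrel, toasttup, mycid,
                options | HEAP_INSERT_NO_LOGICAL, bistate);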
Unfortunately that does not fix pre-existing logical decoding
errors. Doing so would require not throwing an error when a filenode
cannot be mapped to a relation during decoding, and that seems too
likely to hide bugs. If it's crucial to fix decoding for an existing
slot, temporarily changing the ERROR in ReorderBufferCommit() to a
WARNING appears to be the best fix.
Author: Andres Freund
Discussion: https://postgr.es/m/20180914021046.oi7dm4ra3ot2g2kt@alap3.anarazel.de
Backpatch: 9.4-, where logical decoding was introduced
Alvaro Herrera [Mon, 8 Oct 2018 13:36:33 +0000 (10:36 -0300)]
Silence compiler warning in Assert()
gcc 6.3 does not whine about this mistake I made in 39808e8868c8 but
evidently lots of other compilers do, according to Michael Paquier,
Peter Eisentraut, Arthur Zakirov, Tomas Vondra.
Michael Paquier [Mon, 8 Oct 2018 08:56:13 +0000 (17:56 +0900)]
Improve two error messages related to foreign keys on partitioned tables
Error messages for creating a foreign key on a partitioned table using
ONLY or NOT VALID wrongly named the objects they applied to. On the way,
this commit adds some regression tests that were missing for those
cases.
Alvaro Herrera [Sun, 7 Oct 2018 01:13:19 +0000 (22:13 -0300)]
Fix catalog insertion order for ATTACH PARTITION
Commit 2fbdf1b38bc changed the order in which we inserted catalog rows
when creating partitions, so that we could remove an unsightly hack
required for untimely relcache invalidations. However, that commit only
changed the ordering for CREATE TABLE PARTITION OF, and left ALTER TABLE
ATTACH PARTITION unchanged, so the latter can be affected when catalog
invalidations occur, for instance when the partition key involves an SQL
function.
Alvaro Herrera [Sat, 6 Oct 2018 22:17:46 +0000 (19:17 -0300)]
Fix event triggers for partitioned tables
Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly. This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tries to finalize its processing. Fix the crash by
creating a stack of event trigger state objects. There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.
Backpatch to 9.5, where DDL deparsing of event triggers was introduced.
Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
Tom Lane [Sat, 6 Oct 2018 16:00:09 +0000 (12:00 -0400)]
Propagate xactStartTimestamp and stmtStartTimestamp to parallel workers.
Previously, a worker process would establish values for these based on
its own start time. In v10 and up, this can trivially be shown to cause
misbehavior of transaction_timestamp(), timestamp_in(), and related
functions which are (perhaps unwisely?) marked parallel-safe. It seems
likely that other behaviors might diverge from what happens in the parent
as well.
It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure
it's still possible, so back-patch to all branches containing parallel
worker infrastructure.
In HEAD only, mark now() and statement_timestamp() as parallel-safe
(other affected functions already were). While in theory we could
still squeeze that change into v11, it doesn't seem important enough
to force a last-minute catversion bump.
Michael Paquier [Sat, 6 Oct 2018 05:59:52 +0000 (14:59 +0900)]
Assign constraint name when cloning FK definition for partitions
This is used for example when attaching a partition to a partitioned
table which includes foreign keys, and in this case the constraint name
has been missing from the cloned data. This could lead to hard crashes,
as when validating the foreign key constraint, the constraint name is
always expected. Particularly, when using log_min_messages >= DEBUG1, a
log message would be generated with this unassigned constraint name,
leading to an assertion failure on HEAD.
While on it, rename a variable in ATExecAttachPartition which was
declared twice with the same name.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20181005042236.GG1629@paquier.xyz
Backpatch-through: 11
Tom Lane [Fri, 5 Oct 2018 20:01:29 +0000 (16:01 -0400)]
Allow btree comparison functions to return INT_MIN.
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
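That is, in sketch form:

    res = INVERT_COMPARE_RESULT(res);   /* formerly "res = -res", which is
                                         * undefined when res == INT_MIN */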
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Tom Lane [Fri, 5 Oct 2018 16:45:37 +0000 (12:45 -0400)]
Ensure that PLPGSQL_DTYPE_ROW variables have valid refname fields.
Without this, the syntax-tree-dumping functions in pl_funcs.c crash,
and there are other places that might be at risk too. Per report
from Pavel Stehule.
Looks like I broke this in commit f9263006d, so back-patch to v11.
Alvaro Herrera [Thu, 4 Oct 2018 14:37:20 +0000 (11:37 -0300)]
Fix duplicate primary keys in partitions
When using the CREATE TABLE .. PARTITION OF syntax, it's possible to
cause a partition to get two primary keys if the parent already has one.
Tighten the check to disallow that.
Reported-by: Rajkumar Raghuwanshi
Author: Amul Sul
Discussion: https://postgr.es/m/CAKcux6=OnSV3-qd8Gb6W=KPPwcCz6Fe_O_MQYjTa24__Xn8XxA@mail.gmail.com
Andres Freund [Wed, 3 Oct 2018 19:48:37 +0000 (12:48 -0700)]
Fix issues around EXPLAIN with JIT.
I (Andres) was more than a bit hasty in committing 33001fd7a7072d48327
after last minute changes, leading to a number of problems (jit output
was only shown for JIT in parallel workers, and just EXPLAIN without
ANALYZE didn't work). Lukas luckily found these issues quickly.
Instead of combining instrumentation in standard_ExecutorEnd(), do
so on demand in the new ExplainPrintJITSummary().
Also update a documentation example of the JIT output, changed in 52050ad8ebec8d831.
Author: Lukas Fittl, with minor changes by me
Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com
Backpatch: 11, where JIT compilation was introduced
Amit Kapila [Wed, 3 Oct 2018 03:44:09 +0000 (09:14 +0530)]
MAXALIGN the target address where we store flattened value.
The API (EOH_flatten_into) that flattens the expanded value representation
expects the target address to be maxaligned. All its usages adhere to that
principle except when serializing datums for parallel query. Fix that
usage.
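The fix amounts to aligning the target address before flattening
(a sketch with assumed variable names):

    char       *start = (char *) MAXALIGN(targetptr);

    EOH_flatten_into(eoh, (void *) start, EOH_get_flat_size(eoh));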
Diagnosed-by: Tom Lane
Author: Tom Lane and Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
Tom Lane [Tue, 2 Oct 2018 16:41:28 +0000 (12:41 -0400)]
Set snprintf.c's maximum number of NL arguments to be 31.
Previously, we used the platform's NL_ARGMAX if any, otherwise 16.
The trouble with this is that the platform value is hugely variable,
ranging from the POSIX-minimum 9 to as much as 64K on recent FreeBSD.
Values of more than a dozen or two have no practical use and slow down
the initialization of the argtypes array. Worse, they cause snprintf.c
to consume far more stack space than was the design intention, possibly
resulting in stack-overflow crashes.
Standardize on 31, which is comfortably more than we need (it looks like
no existing translatable message has more than about 10 parameters).
I chose that, not 32, to make the array sizes powers of 2, for some
possible small gain in speed of the memset.
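"NL arguments" are the numbered ones; for example (sketch):

    char        buf[64];

    /* argument numbers ("%n$") may now run up to 31 */
    snprintf(buf, sizeof(buf), "%2$s comes after %1$s", "one", "two");
    /* buf now contains "two comes after one" */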
The lack of reported crashes suggests that the set of platforms we
use snprintf.c on (in released branches) may have no overlap with
the set where NL_ARGMAX has unreasonably large values. But that's
not entirely clear, so back-patch to all supported branches.
Tom Lane [Tue, 2 Oct 2018 15:54:12 +0000 (11:54 -0400)]
Fix corner-case failures in has_foo_privilege() family of functions.
The variants of these functions that take numeric inputs (OIDs or
column numbers) are supposed to return NULL rather than failing
on bad input; this rule reduces problems with snapshot skew when
queries apply the functions to all rows of a catalog.
has_column_privilege() had careless handling of the case where the
table OID didn't exist. You might get something like this:
select has_column_privilege(9999,'nosuchcol','select');
ERROR: column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response
to a null string pointer.
In addition, while applying the column-number variant to a dropped
column returned NULL as desired, applying the column-name variant
did not:
select has_column_privilege('mytable','........pg.dropped.2........','select');
ERROR: column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well.
Also, the OID-accepting variants of has_foreign_data_wrapper_privilege,
has_server_privilege, and has_tablespace_privilege didn't follow the
principle of returning NULL for nonexistent OIDs. Superusers got TRUE,
everybody else got an error.
Per investigation of Jaime Casanova's report of a new crash in HEAD.
These behaviors have been like this for a long time, so back-patch to
all supported branches.
Patch by me; thanks to Stephen Frost for discussion and review
Michael Paquier [Tue, 2 Oct 2018 07:34:59 +0000 (16:34 +0900)]
Fix documentation of pgrowlocks using "lock_type" instead of "modes"
The example used in the documentation is outdated as well. This is an
oversight from 0ac5ad5, which bumped up pgrowlocks but forgot some bits
of the documentation.
Reported-by: Chris Wilson
Discussion: https://postgr.es/m/153838692816.2950.12001142346234155699@wrigleys.postgresql.org
Backpatch-through: 9.3
Change PROCEDURE to FUNCTION in CREATE EVENT TRIGGER syntax
This was claimed to have been done in 0a63f996e018ac508c858e87fa39cc254a5db49f, but that actually only
changed the documentation and not the grammar. (That commit did fully
change it for CREATE TRIGGER.)
Tom Lane [Mon, 1 Oct 2018 15:51:07 +0000 (11:51 -0400)]
Fix tuple_data_split() to not open a relation without any lock.
contrib/pageinspect's tuple_data_split() function thought it could get
away with opening the referenced relation with NoLock. In practice
there's no guarantee that the current session holds any lock on that
rel (even if we just read a page from it), so that this is unsafe.
Switch to using AccessShareLock. Also, postpone closing the relation,
so that we needn't copy its tupdesc. Also, fix unsafe use of
att_isnull() for attributes past the end of the tuple.
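The gist of the locking fix, in sketch form:

    rel = relation_open(relid, AccessShareLock);    /* formerly NoLock */
    /* ... inspect the tuple using rel's tuple descriptor ... */
    relation_close(rel, AccessShareLock);           /* close only when done */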
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Tom Lane [Mon, 1 Oct 2018 15:39:13 +0000 (11:39 -0400)]
Fix ALTER COLUMN TYPE to not open a relation without any lock.
If the column being modified is referenced by a foreign key constraint
of another table, ALTER TABLE would open the other table (to re-parse
the constraint's definition) without having first obtained a lock on it.
This was evidently intentional, but that doesn't mean it's really safe.
It's especially not safe in 9.3, which pre-dates use of MVCC scans for
catalog reads, but even in current releases it doesn't seem like a good
idea.
We know we'll need AccessExclusiveLock shortly to drop the obsoleted
constraint, so just get that a little sooner to close the hole.
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Tom Lane [Sun, 30 Sep 2018 20:24:56 +0000 (16:24 -0400)]
Fix detection of the result type of strerror_r().
The method we've traditionally used, of redeclaring strerror_r() to
see if the compiler complains of inconsistent declarations, turns out
not to work reliably because some compilers only report a warning,
not an error. Amazingly, this has gone undetected for years, even
though it certainly breaks our detection of whether strerror_r
succeeded.
Let's instead test whether the compiler will take the result of
strerror_r() as a switch() argument. It's possible this won't
work universally either, but it's the best idea I could come up with
on the spur of the moment.
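The probe compiles a program along these lines (sketch); it succeeds
only for the int-returning POSIX flavor, since a char * result is not a
legal switch() argument:

    #include <string.h>

    int
    main(void)
    {
        char        buf[256];

        switch (strerror_r(1, buf, sizeof(buf)))
        {
            case 0:
                break;
            default:
                break;
        }
        return 0;
    }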
Back-patch of commit 751f532b9. Buildfarm results indicate that only
icc-on-Linux actually has an issue here; perhaps the lack of field
reports indicates that people don't build PG for production that way.
Amit Kapila [Fri, 28 Sep 2018 06:54:09 +0000 (12:24 +0530)]
Fix assertion failure when updating full_page_writes for checkpointer.
When the checkpointer receives a SIGHUP signal to update its configuration,
it may need to update the shared memory for full_page_writes and need to
write a WAL record for it. Now, it is quite possible that the XLOG
machinery has not been initialized by that time and it will lead to
assertion failure while doing that. The fix is to allow initialization
of the XLOG machinery outside the critical section.
This bug has been introduced by the commit 2c03216d83 which added the XLOG
machinery initialization in RecoveryInProgress code path.
Reported-by: Dilip Kumar
Author: Dilip Kumar
Reviewed-by: Michael Paquier and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAFiTN-u4BA8KXcQUWDPNgaKAjDXC=C2whnzBM8TAcv=stckYUw@mail.gmail.com
Michael Paquier [Fri, 28 Sep 2018 02:55:43 +0000 (11:55 +0900)]
Fix WAL recycling on standbys depending on archive_mode
A restart point or a checkpoint recycling WAL segments treats segments
marked with neither ".done" (archiving is done) nor ".ready" (segment is
ready to be archived) in archive_status the same way for archive_mode
being "on" or "always". While for a primary this is fine, a standby
running a restart point with archive_mode = on would try to mark such a
segment as ready for archiving, which is something that will never
happen except after the standby is promoted.
Note that this problem applies only to WAL segments coming from the
local pg_wal the first time archive recovery is run. Segments part of a
self-contained base backup are the most common case where this could
happen, however even in this case normally the .done markers would be
most likely part of the backup. Segments recovered from an archive are
marked as .ready or .done by the startup process, and segments finished
streaming are marked as such by the WAL receiver, so they are handled
already.
Reported-by: Haruka Takatsuka
Author: Michael Paquier
Discussion: https://postgr.es/m/15402-a453c90ed4cf88b2@postgresql.org
Backpatch-through: 9.5, where archive_mode = always has been added.
Tom Lane [Thu, 27 Sep 2018 22:15:06 +0000 (18:15 -0400)]
Fix assorted bugs in pg_get_partition_constraintdef().
It failed if passed a nonexistent relation OID, or one that was a non-heap
relation, because of blindly applying heap_open to a user-supplied OID.
This is not OK behavior for a SQL-exposed function; we have a project
policy that we should return NULL in such cases. Moreover, since
pg_get_partition_constraintdef ought now to work on indexes, restricting
it to heaps is flat wrong anyway.
The underlying function generate_partition_qual() wasn't on board with
indexes having partition quals either, nor for that matter with rels
having relispartition set but yet null relpartbound. (One wonders
whether the person who wrote the function comment blocks claiming that
these functions allow a missing relpartbound had ever tested it.)
Fix by testing relispartition before opening the rel, and by using
relation_open not heap_open. (If any other relkinds ever grow the
ability to have relispartition set, the code will work with them
automatically.) Also, don't reject null relpartbound in
generate_partition_qual.
Back-patch to v11, and all but the null-relpartbound change to v10.
(It's not really necessary to change generate_partition_qual at all
in v10, but I thought s/heap_open/relation_open/ would be a good
idea anyway just to keep the code in sync with later branches.)
Peter Eisentraut [Fri, 15 Jun 2018 03:22:14 +0000 (23:22 -0400)]
Recurse to sequences on ownership change for all relkinds
When a table ownership is changed, we must apply that also to any owned
sequences. (Otherwise, it would result in a situation that cannot be
restored, because linked sequences must have the same owner as the
table.) But this was previously only applied to regular tables and
materialized views, whereas it should also apply to at least foreign
tables. This patch removes the relkind check altogether, because it
doesn't save very much and just introduces the possibility of similar
omissions.
Bug: #15238
Reported-by: Christoph Berg <christoph.berg@credativ.de>
Michael Paquier [Wed, 26 Sep 2018 01:29:20 +0000 (10:29 +0900)]
Rework activation of commit timestamps during recovery
The activation and deactivation of commit timestamp tracking have not
been handled consistently for a primary or standbys at recovery. The
facility can be activated at three different moments of recovery:
- The beginning, where a primary would use the GUC value for the
decision-making, and where a standby relies on the contents of the
control file.
- When replaying a XLOG_PARAMETER_CHANGE record at redo.
- The end, where both primary and standby rely on the GUC value.
Using the GUC value for a primary at the beginning of recovery causes
problems with commit timestamp access when doing crash recovery.
Particularly, when replaying transaction commits, it could be possible
that an attempt to read commit timestamps is done for a transaction
which committed at a moment when track_commit_timestamp was disabled.
A test case is added to reproduce the failure. The test works down to
v11 as it takes advantage of transaction commits within procedures.
Reported-by: Hailong Li
Author: Masahiko Sawada, Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/11224478-a782-203b-1f17-e4797b39bdf0@qunar.com
Backpatch-through: 9.5, where commit timestamps have been introduced.
Andres Freund [Tue, 25 Sep 2018 19:54:29 +0000 (12:54 -0700)]
Collect JIT instrumentation from workers.
Previously, when using parallel query, EXPLAIN (ANALYZE)'s JIT
compilation timings did not include the overhead from doing so on the
workers. Fix that.
We do so by simply aggregating the cost of doing JIT compilation on
workers and the leader together. Arguably that's not quite accurate,
because the total time spent doing so is spent in parallel - but it's
hard to do much better. For additional detail, when VERBOSE is
specified, the stats for workers are displayed separately.
Author: Amit Khandekar and Andres Freund
Discussion: https://postgr.es/m/CAJ3gD9eLrz51RK_gTkod+71iDcjpB_N8eC6vU2AW-VicsAERpQ@mail.gmail.com
Backpatch: 11-
Tom Lane [Tue, 25 Sep 2018 17:23:29 +0000 (13:23 -0400)]
Make some fixes to allow building Postgres on macOS 10.14 ("Mojave").
Apple's latest rearrangements of the system-supplied headers have broken
building of PL/Perl and PL/Tcl. The only practical way to fix PL/Tcl is to
start using the "-isysroot" compiler flag to point to SDK-supplied headers,
as Apple expects. We must also start distinguishing where to find Perl's
headers from where to find its shared library; but that seems like good
cleanup anyway.
Extensions that formerly did something like -I$(perl_archlibexp)/CORE
should now do -I$(perl_includedir)/CORE instead. perl_archlibexp
is still the place to look for libperl.so, though.
If for some reason you don't like the default -isysroot setting, you can
override that by setting PG_SYSROOT in configure's arguments. I don't
currently think people would need to do so, unless maybe for cross-version
build purposes.
In addition, teach configure where to find tclConfig.sh. Our traditional
method of searching $auto_path hasn't worked for the last couple of macOS
releases, and it now seems clear that Apple's not going to change that.
The workaround of manually specifying --with-tclconfig was annoying
already, but Mojave's made it a lot more so because the sysroot path now
has to be included as well. Let's just wire the knowledge into configure
instead. To avoid breaking builds against non-default Tcl installations
(e.g. MacPorts) wherein the $auto_path method probably still works,
arrange to try the additional case only after all else has failed.
Back-patch to all supported versions, since at least the buildfarm
cares about that. The changes are set up to not do anything on macOS
releases that are old enough to not have functional sysroot trees.
Michael Paquier [Tue, 25 Sep 2018 02:05:13 +0000 (11:05 +0900)]
Ignore publication tables when --no-publications is used
96e1cb4 has added support for --no-publications in pg_dump, pg_dumpall
and pg_restore, but forgot the fact that publication tables also need to
be ignored when this option is used.
Author: Gilles Darold
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3f48e812-b0fa-388e-2043-9a176bdee27e@dalibo.com
Backpatch-through: 10, where publications have been added.
Michael Paquier [Tue, 25 Sep 2018 00:56:41 +0000 (09:56 +0900)]
Revoke pg_stat_statements_reset() permissions
Commit 25fff40 has granted execute permission of the function
pg_stat_statements_reset() to default role "pg_read_all_stats", but this
role is meant to read statistics, and not to reset them. The
permissions on this function are revoked from "pg_read_all_stats". The
version of pg_stat_statements is bumped up in consequence.
Author: Haribabu Kommi
Reviewed-by: Michael Paquier, Amit Kapila
Discussion: https://postgr.es/m/CAJrrPGf5fCnKqXObpwGN9nMyD--tzOf-7LFCJiz59Z1wJ5qj9A@mail.gmail.com
Andres Freund [Mon, 24 Sep 2018 20:27:08 +0000 (13:27 -0700)]
Make EXPLAIN output for JIT compilation more dense.
A discussion about also reporting JIT compilation overhead on workers
brought unhappiness with the verbosity of the current explain format
to light. Make the text format more dense, and restructure the
structured output to mirror that more closely.
As we're re-jiggering the output format anyway: The denser format
allows us to report all flags for JIT compilation (now also reporting
PGJIT_EXPR and PGJIT_DEFORM), and report the total time in addition to
the individual times.
Per complaint from Tom Lane.
Author: Andres Freund
Discussion: https://postgr.es/m/27812.1537221015@sss.pgh.pa.us
Backpatch: 11-, where JIT compilation was introduced
Andrew Dunstan [Mon, 24 Sep 2018 20:11:24 +0000 (16:11 -0400)]
Fast default trigger and expand_tuple fixes
Ensure that triggers get properly filled in tuples for the OLD value.
Also fix the logic of detecting missing null values. The previous logic
failed to detect a missing null column before the first missing column
with a default. Fixing this has simplified the logic a bit.
Regression tests are added to test changes. This should ensure better
coverage of expand_tuple().
Original bug reports, and some code and test scripts from Tomas Vondra
Tom Lane [Mon, 24 Sep 2018 15:30:51 +0000 (11:30 -0400)]
Fix over-allocation of space for array_out()'s result string.
array_out overestimated the space needed for its output, possibly by
a very substantial amount if the array is multi-dimensional, because
of wrong order of operations in the loop that counts the number of
curly-brace pairs needed. While the output string is normally
short-lived, this could still cause problems in extreme cases.
An additional minor error was that it counted one more delimiter than
is actually needed.
Repair those errors, add an Assert that the space is now correctly
calculated, and make some minor improvements in the comments.
I also failed to resist the temptation to get rid of an integer
modulus operation per array element; a simple comparison is sufficient.
This bug dates clear back to Berkeley days, so back-patch to all
supported versions.
Initialize random() in bootstrap/stand-alone postgres and in initdb.
This removes a difference between the standard IsUnderPostmaster
execution environment and that of --boot and --single. In a stand-alone
backend, "SELECT random()" always started at the same seed.
On a system capable of using posix shared memory, initdb could still
conclude "selecting dynamic shared memory implementation ... sysv".
Crashed --boot or --single postgres processes orphaned shared memory
objects having names that collided with the not-actually-random names
that initdb probed. The sysv fallback appeared after ten crashes of
--boot or --single postgres. Since --boot and --single are rare in
production use, systems used for PostgreSQL development are the
principal candidate to notice this symptom.
Back-patch to 9.3 (all supported versions). PostgreSQL 9.4 introduced
dynamic shared memory, but 9.3 does share the "SELECT random()" problem.
Tom Lane [Sun, 23 Sep 2018 20:05:45 +0000 (16:05 -0400)]
Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
In a case where we have multiple relation-scan nodes in a cursor plan,
such as a scan of an inheritance tree, it's possible to fetch from a
given scan node, then rewind the cursor and fetch some row from an
earlier scan node. In such a case, execCurrent.c mistakenly thought
that the later scan node was still active, because ExecReScan hadn't
done anything to make it look not-active. We'd get some sort of
failure in the case of a SeqScan node, because the node's scan tuple
slot would be pointing at a HeapTuple whose t_self gets reset to
invalid by heapam.c. But it seems possible that for other relation
scan node types we'd actually return a valid tuple TID to the caller,
resulting in updating or deleting a tuple that shouldn't have been
considered current. To fix, forcibly clear the ScanTupleSlot in
ExecScanReScan.
Another issue here, which seems only latent at the moment but could
easily become a live bug in future, is that rewinding a cursor does
not necessarily lead to *immediately* applying ExecReScan to every
scan-level node in the plan tree. Upper-level nodes will think that
they can postpone that call if their child node is already marked
with chgParam flags. I don't see a way for that to happen today in
a plan tree that's simple enough for execCurrent.c's search_plan_tree
to understand, but that's one heck of a fragile assumption. So, add
some logic in search_plan_tree to detect chgParam flags being set on
nodes that it descended to/through, and assume that that means we
should consider lower scan nodes to be logically reset even if their
ReScan call hasn't actually happened yet.
Per bug #15395 from Matvey Arye. This has been broken for a long time,
so back-patch to all supported branches.
Tom Lane [Fri, 21 Sep 2018 19:58:37 +0000 (15:58 -0400)]
Fix bogus tab-completion rule for CREATE PUBLICATION.
You can't use "FOR TABLE" as a single Matches argument, because readline
will consider that input to be two words not one. It's necessary to make
the pattern contain two arguments.
The case accidentally worked anyway because the words_after_create
test fired ... but only for the first such table name.
Noted by Edmund Horner, though this isn't exactly his proposed fix.
Backpatch to v10 where the faulty code came in.
Thomas Munro [Fri, 21 Sep 2018 12:40:13 +0000 (00:40 +1200)]
Use size_t consistently in dsa.{ch}.
Takeshi Ideriha complained that there is a mixture of Size and size_t
in dsa.c and its corresponding header. Let's use size_t. Back-patch to 10
where dsa.c landed, to make future back-patching easy.
Andres Freund [Thu, 13 Sep 2018 21:18:43 +0000 (14:18 -0700)]
Error out for clang on x86-32 without SSE2 support, no -fexcess-precision.
As clang currently doesn't support -fexcess-precision=standard,
compiling x86-32 code with SSE2 disabled can lead to problems with
floating point overflow checks and the like.
This issue was noticed because clang, on at least some BSDs, defaults
to i386 compatibility, whereas it defaults to pentium4 on Linux. Our
forced usage of __builtin_isinf() led to some overflow checks not
triggering when compiling for i386, e.g. when the result of the
calculation didn't overflow in 80bit registers, but did so in 64bit.
While we could just fall back to a non-builtin isinf, it seems likely
that the use of 80bit registers leads to other problems (which is why
we force the flag for GCC already). Therefore error out when
detecting clang in that situation.
Reported-By: Victor Wagner
Analyzed-By: Andrew Gierth and Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20180905005130.ewk4xcs5dgyzcy45@alap3.anarazel.de
Backpatch: 9.3-, all supported versions are affected
Thomas Munro [Thu, 20 Sep 2018 03:52:39 +0000 (15:52 +1200)]
Fix segment_bins corruption in dsa.c.
If a segment has been freed by dsa.c because it is entirely empty, other
backends must make sure to unmap it before following links to new
segments that might happen to have the same index number, or they could
finish up looking at a defunct segment and then corrupt the segment_bins
lists. The correct protocol requires checking freed_segment_counter
after acquiring the area lock and before resolving any index number to a
segment. Add the missing checks and an assertion.
Back-patch to 10, where dsa.c first arrived.
Author: Thomas Munro
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/CAEepm%3D0thg%2Bja5zGVa7jBy-uqyHrTqTm8HGhEOtMmigGrAqTbw%40mail.gmail.com
Thomas Munro [Thu, 20 Sep 2018 02:02:56 +0000 (14:02 +1200)]
Defer restoration of libraries in parallel workers.
Several users of extensions complained of crashes in parallel workers
that turned out to be due to syscache access from their _PG_init()
functions. Reorder the initialization of parallel workers so that
libraries are restored after the caches are initialized, and inside a
transaction.
This was reported in bug #15350 and elsewhere. We don't consider it
to be a bug: extensions shouldn't do that, because then they can't be
used in shared_preload_libraries. However, it's a fairly obscure
hazard and these extensions worked in practice before parallel query
came along. So let's make it work. Later commits might add a warning
message and eventually an error.
Back-patch to 9.6, where parallel query landed.
Author: Thomas Munro
Reviewed-by: Amit Kapila
Reported-by: Kieran McCusker, Jimmy
Discussion: https://postgr.es/m/153512195228.1489.8545997741965926448%40wrigleys.postgresql.org
Tom Lane [Wed, 19 Sep 2018 16:43:51 +0000 (12:43 -0400)]
Don't ignore locktable-full failures in StandbyAcquireAccessExclusiveLock.
Commit 37c54863c removed the code in StandbyAcquireAccessExclusiveLock
that checked the return value of LockAcquireExtended. That created a
bug, because it's still passing reportMemoryError = false to
LockAcquireExtended, meaning that LOCKACQUIRE_NOT_AVAIL will be returned
if we're out of shared memory for the lock table.
In such a situation, the startup process would believe it had acquired an
exclusive lock even though it hadn't, with potentially dire consequences.
To fix, just drop the use of reportMemoryError = false, which allows us
to simplify the call into a plain LockAcquire(). It's unclear that the
locktable-full situation arises often enough that it's worth having a
better recovery method than crash-and-restart. (I strongly suspect that
the only reason the code path existed at all was that it was relatively
simple to do in the pre-37c54863c implementation. But now it's not.)
LockAcquireExtended's reportMemoryError parameter is now dead code and
could be removed. I refrained from doing so, however, because there
was some interest in resurrecting the behavior if we do get reports of
locktable-full failures in the field. Also, it seems unwise to remove
the parameter concurrently with shipping commit f868a8143, which added a
parameter; if there are any third-party callers of LockAcquireExtended,
we want them to get a wrong-number-of-parameters compile error rather
than a possibly-silent misinterpretation of its last parameter.
The hack this required in initdb.c should probably have clued us that it
wasn't really ready, but we didn't get the hint. Subsequent developments
have made clear that it affected text-vs-binary behavior in a lot of
places, and there's no reason to think that any of those behavioral changes
are desirable. There's no time to fix this before 11beta4, so just revert
for the moment. We can keep working on this in HEAD, and maybe reconsider
a back-patch once we're satisfied things are stable.
(I take the blame for this fiasco, having encouraged Michael to back-patch
a change at the last possible moment before beta wrap.)
Tom Lane [Tue, 18 Sep 2018 17:02:27 +0000 (13:02 -0400)]
Fix some probably-minor oversights in readfuncs.c.
The system expects TABLEFUNC RTEs to have coltypes, coltypmods, and
colcollations lists, but outfuncs doesn't dump them and readfuncs doesn't
restore them. This doesn't cause obvious failures, because the only things
that look at those fields are expandRTE() and get_rte_attribute_type(),
which are mostly used during parse analysis, before anything would've
passed the parsetree through outfuncs/readfuncs. But expandRTE() is used
in build_physical_tlist(), which means that that function will return a
wrong answer for a TABLEFUNC RTE that came from a view. Very accidentally,
this doesn't cause serious problems, because what it will return is NIL
which callers will interpret as "couldn't build a physical tlist because
of dropped columns". So you still get a plan that works, though it's
marginally less efficient than it could be. There are also some other
expandRTE() calls associated with transformation of whole-row Vars in
the planner. I have been unable to exhibit misbehavior from that, and
it may be unreachable in any case that anyone would care about ... but
I'm not entirely convinced, so this seems like something we should
back-patch a fix for. Fortunately, we can fix it without forcing a change
of stored rules and a catversion bump, because we can just copy these
lists from the subsidiary TableFunc object.
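A sketch of how that can look in readfuncs.c's range-table-entry reader (READ_NODE_FIELD and local_node follow that file's conventions; treat this as illustrative rather than the committed diff):

    case RTE_TABLEFUNC:
        READ_NODE_FIELD(tablefunc);
        /*
         * The serialized form omits coltypes/coltypmods/colcollations,
         * so rebuild them from the subsidiary TableFunc node instead
         * of forcing a change of stored rules and a catversion bump.
         */
        local_node->coltypes = local_node->tablefunc->coltypes;
        local_node->coltypmods = local_node->tablefunc->coltypmods;
        local_node->colcollations = local_node->tablefunc->colcollations;
        break;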
readfuncs.c was also missing support for NamedTuplestoreScan plan nodes.
This accidentally fails to break parallel query because a query using
a named tuplestore would never be considered parallel-safe anyway.
However, project policy since parallel query came in is that all plan
node types should have outfuncs/readfuncs support, so this is clearly
an oversight that should be repaired.
Noted while fooling around with a patch to test outfuncs/readfuncs more
thoroughly. That exposed some other issues too, but these are the only
ones that seem worth back-patching.
Back-patch to v10 where both of these features came in.
Thomas Munro [Tue, 18 Sep 2018 10:56:36 +0000 (22:56 +1200)]
Allow DSM allocation to be interrupted.
Chris Travers reported that the startup process can repeatedly try to
cancel a backend that is in a posix_fallocate()/EINTR loop and cause it
to loop forever. Teach the retry loop to give up if an interrupt is
pending. Don't actually check for interrupts in that loop though,
because a non-local exit would skip some clean-up code in the caller.
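The resulting loop looks roughly like this (fd and size come from the surrounding dsm_impl.c code; ProcDiePending and QueryCancelPending are the usual globals from miscadmin.h):

    int     rc;

    /*
     * Retry on EINTR, but give up if an interrupt has arrived.  We
     * deliberately do not CHECK_FOR_INTERRUPTS() here: a non-local
     * exit would skip clean-up code in the caller.  The pending
     * interrupt gets serviced at the caller's next opportunity.
     */
    do
    {
        rc = posix_fallocate(fd, 0, size);
    } while (rc == EINTR && !(ProcDiePending || QueryCancelPending));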
Back-patch to 9.4 where DSM was added (and posix_fallocate() was later
back-patched).
Author: Chris Travers
Reviewed-by: Ildar Musin, Murat Kabilov, Oleksii Kliukin
Tested-by: Oleksii Kliukin
Discussion: https://postgr.es/m/CAN-RpxB-oeZve_J3SM_6%3DHXPmvEG%3DHX%2B9V9pi8g2YR7YW0rBBg%40mail.gmail.com
Tom Lane [Mon, 17 Sep 2018 17:16:32 +0000 (13:16 -0400)]
Fix parsetree representation of XMLTABLE(XMLNAMESPACES(DEFAULT ...)).
The original coding for XMLTABLE thought it could represent a default
namespace by a T_String Value node with a null string pointer. That's
not okay, though; in particular outfuncs.c/readfuncs.c are not on board
with such a representation, meaning you'll get a null pointer crash
if you try to store a view or rule containing this construct.
To fix, change the parsetree representation so that we have a NULL
list element, instead of a bogus Value node.
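Sketched in list-building terms (ns_name and ns_names are illustrative names, not the committed code):

    List   *ns_names = NIL;

    /*
     * Represent a DEFAULT namespace as a NULL list element rather
     * than a String Value carrying a null pointer, which outfuncs.c
     * and readfuncs.c cannot round-trip.
     */
    if (ns_name != NULL)
        ns_names = lappend(ns_names, makeString(ns_name));
    else
        ns_names = lappend(ns_names, NULL);     /* DEFAULT namespace */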
This isn't really a functional limitation since default XML namespaces
aren't yet implemented in the executor; you'd just get "DEFAULT
namespace is not supported" anyway. But crashes are not nice, so
back-patch to v10 where this syntax was added. Ordinarily we'd consider
a parsetree representation change to be un-backpatchable; but since
existing releases would crash on the way to storing such constructs,
there can't be any existing views/rules to be incompatible with.
Tom Lane [Mon, 17 Sep 2018 16:11:43 +0000 (12:11 -0400)]
Fix pgbench lexer's "continuation" rule to cope with Windows newlines.
Our general practice in frontend code is to accept input with either
Unix-style newlines (\n) or DOS-style (\r\n). pgbench was mostly down
with that, but its rule for line continuations (backslash-newline) was
not. This had been masked on Windows buildfarm machines before commit
0ba06e0bf by use of Windows text mode to read files. We could have fixed
it by forcing text mode again, but it's better to fix the parsing code
so that Windows-style text files on Unix systems don't cause problems.
Back-patch to v10 where pgbench grew line continuations.
Michael Paquier [Mon, 17 Sep 2018 11:38:42 +0000 (20:38 +0900)]
Allow concurrent-safe open() and fopen() in frontend code for Windows
PostgreSQL uses a custom wrapper for open() and fopen() which is
concurrent-safe, allowing multiple processes to open and work on the
same file. This has a couple of advantages:
- pg_test_fsync does not handle O_DSYNC correctly otherwise, leading to
false claims that disks are unsafe.
- TAP tests can run into race conditions when a postmaster and pg_ctl
open postmaster.pid concurrently; this fixes some random failures in
the buildfarm.
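The core of such a wrapper is opening with all sharing flags set, roughly as below (a simplified sketch of the idea in src/port/open.c; concurrent_open is a hypothetical name, and flag translation and error handling are omitted):

    #include <windows.h>
    #include <fcntl.h>
    #include <io.h>

    /*
     * Open a file so that other processes can concurrently read,
     * write, and even delete it, then map the Win32 HANDLE onto a
     * C runtime descriptor usable with read()/write().
     */
    static int
    concurrent_open(const char *path, int oflag)
    {
        HANDLE      h = CreateFile(path,
                                   GENERIC_READ | GENERIC_WRITE,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE |
                                   FILE_SHARE_DELETE,   /* the key flags */
                                   NULL, OPEN_ALWAYS,
                                   FILE_ATTRIBUTE_NORMAL, NULL);

        if (h == INVALID_HANDLE_VALUE)
            return -1;
        return _open_osfhandle((intptr_t) h, oflag);
    }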
pg_upgrade is one frontend tool that uses workarounds to bypass file-locking
issues with the log files it generates; however, its interactions with
pg_ctl are proving tedious to untangle, so removing those workarounds is
left for later.
Andrew Gierth [Sun, 16 Sep 2018 17:46:45 +0000 (18:46 +0100)]
Fix out-of-tree build for transform modules.
Neither plperl nor plpython installed sufficient header files to
permit transform modules to be built out-of-tree using PGXS. Fix that
by installing all plperl and plpython header files (other than those
with special purposes such as generated data tables), and also install
plpython's special .mk file for mangling regression tests.
(This commit does not fix the Windows install, which does not
currently install _any_ plperl or plpython headers.)
Also fix the existing transform modules for hstore and ltree so that
their cross-module #include directives work as anticipated by commit
df163230b9 et seq. This allows them to serve as working examples of
how to reference other modules when doing separate out-of-tree builds.
Tom Lane [Sun, 16 Sep 2018 17:02:47 +0000 (13:02 -0400)]
Add outfuncs.c support for RawStmt nodes.
I noticed while poking at a report from Andrey Lepikhov that the
recent addition of RawStmt nodes at the top of raw parse trees
makes it impossible to print any raw parse trees whatsoever,
because outfuncs.c doesn't know RawStmt and hence fails to descend
into it.
While we generally lack outfuncs.c support for utility statements,
there is reasonably complete support for what you can find in a
raw SELECT statement. It was not my intention to make that all
dead code ... so let's add support for RawStmt.
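The addition is mechanical, along these lines (RawStmt's three fields per parsenodes.h, dumped with outfuncs.c's standard macros; a sketch, not necessarily the committed text):

    static void
    _outRawStmt(StringInfo str, const RawStmt *node)
    {
        WRITE_NODE_TYPE("RAWSTMT");

        /* the wrapped statement plus its location bookkeeping */
        WRITE_NODE_FIELD(stmt);
        WRITE_LOCATION_FIELD(stmt_location);
        WRITE_INT_FIELD(stmt_len);
    }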
Tom Lane [Sat, 15 Sep 2018 21:24:35 +0000 (17:24 -0400)]
In v11, disable JIT by default (it's still enabled by default in HEAD).
Per discussion, JIT isn't quite mature enough to ship enabled-by-default.
I failed to resist the temptation to do a bunch of copy-editing on the
related documentation. Also, clean up some inconsistencies between the
section of config.sgml in which the JIT GUCs are documented and what
guc.c and postgresql.conf.sample had.
Tom Lane [Sat, 15 Sep 2018 17:42:34 +0000 (13:42 -0400)]
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally. For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.
This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.
To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment. This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another. It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.
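In outline, the recheck setup scans the initplan output parameters and forces any unevaluated ones, roughly as below (a non-NULL prm->execPlan is the executor's not-yet-evaluated marker; params and econtext come from the surrounding code, and the exact function boundaries differ in the committed patch):

    int     paramid = -1;

    while ((paramid = bms_next_member(params, paramid)) >= 0)
    {
        ParamExecData *prm = &(econtext->ecxt_param_exec_vals[paramid]);

        if (prm->execPlan != NULL)
        {
            /* Force evaluation of the initplan before the recheck */
            ExecSetParamPlan(prm->execPlan, econtext);
            /* ExecSetParamPlan should have reset execPlan to NULL */
            Assert(prm->execPlan == NULL);
        }
    }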
It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.
Back-patch all the way. Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3. (This isn't the first time back-patches have
needed that, and it exhausted my patience.) I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.
Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.
Fix ALTER/TYPE on columns referenced by FKs in partitioned tables
When ALTER TABLE ... SET DATA TYPE affects a column referenced by
constraints and indexes, it drops those constraints and indexes and
recreates them afterwards, so that the definitions match the new data
type. The original code did this by dropping one object at a time
(commit 077db40fa1f3 of May 2004), which worked fine because the
dependencies between the objects were pretty straightforward, and
ordering the objects in a specific way was enough to make this work.
However, when there are foreign key constraints in partitioned tables,
the dependencies are no longer so straightforward, and we were getting
errors when this was attempted:
ERROR: cache lookup failed for constraint 16398
This can be fixed by doing all the drops in one pass instead, using
performMultipleDeletions (introduced by df18c51f2955 of Aug 2006). With
this change we can also remove the code to carefully order the list of
objects to be deleted.
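In terms of catalog/dependency.c's API, the one-pass form looks roughly like this (objects_to_drop is an illustrative list of ObjectAddress pointers; the flag choices are illustrative too):

    ObjectAddresses *objects = new_object_addresses();
    ListCell   *lc;

    /* Collect every doomed constraint and index first ... */
    foreach(lc, objects_to_drop)
        add_exact_object_address((ObjectAddress *) lfirst(lc), objects);

    /*
     * ... then drop them in one pass, letting the dependency
     * machinery work out a safe ordering.
     */
    performMultipleDeletions(objects, DROP_RESTRICT,
                             PERFORM_DELETION_INTERNAL);
    free_object_addresses(objects);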
Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKcux6nWS_m+s=1Udk_U9B+QY7pA-Ac58qR5BdUfOyrwnWHDew@mail.gmail.com