Tom Lane [Thu, 6 Dec 2018 20:08:44 +0000 (15:08 -0500)]
Improve our response to invalid format strings, and detect more cases.
Places that are testing for *printf failure ought to include the format
string in their error reports, since bad-format-string is one of the
more likely causes of such failure. This both makes it easier to find
and repair the mistake, and provides at least some useful info to the
user who stumbles across such a problem.
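(As a rough standalone illustration of that reporting style --- the helper
below is hypothetical, not the actual pvsnprintf code:)
    #include <errno.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: if vsnprintf() fails, include the format string
     * in the error report, since a bad format string is one of the likelier
     * causes of such a failure. */
    static void
    format_or_die(char *buf, size_t len, const char *fmt, ...)
    {
        va_list args;
        int     nprinted;

        va_start(args, fmt);
        errno = 0;
        nprinted = vsnprintf(buf, len, fmt, args);
        va_end(args);
        if (nprinted < 0)
        {
            fprintf(stderr, "vsnprintf failed: %s with format \"%s\"\n",
                    strerror(errno), fmt);
            exit(EXIT_FAILURE);
        }
    }

    int
    main(void)
    {
        char buf[64];

        format_or_die(buf, sizeof(buf), "%d widgets", 42);
        puts(buf);
        return 0;
    }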
Also, tighten snprintf.c to report EINVAL for an invalid flag or
final character in a format %-spec (including the case where the
%-spec is missing a final character altogether). This seems like
better project policy, and it also allows removing an instruction
or two from the hot code path.
Back-patch the error reporting change in pvsnprintf, since it should be
harmless and may be helpful; but not the snprintf.c change.
Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an
invalid translated format string. These changes don't fix that error,
but they should improve matters next time we make such a mistake.
Tom Lane [Thu, 29 Nov 2018 23:28:11 +0000 (18:28 -0500)]
Document handling of invalid/ambiguous timestamp input near DST boundaries.
The source code comments documented this, but the user-facing docs, not
so much. Add a section to Appendix B that discusses it.
In passing, improve a couple other things in Appendix B --- notably,
a long-obsolete claim that time zone abbreviations are looked up in
a fixed table.
Tom Lane [Thu, 29 Nov 2018 20:53:44 +0000 (15:53 -0500)]
Ensure static libraries have correct mod time even if ranlib messes it up.
In at least Apple's version of ranlib, the output file is updated to have
a mod time equal to the max of the timestamps of its components, and that
data only has seconds precision. On a filesystem with sub-second file
timestamp precision --- say, APFS --- this can result in the finished
static library appearing older than its input files, which causes useless
rebuilds and possible outright failures in parallel makes.
We've only seen this reported in the field from people using Apple's
ranlib with a non-Apple make, because Apple's make doesn't know about
sub-second timestamps either so it doesn't decide rebuilds are needed.
But Apple's ranlib presumably shares code with at least some BSDen,
so it's not that unlikely that the same problem could arise elsewhere.
To fix, just "touch" the output file after ranlib finishes.
We seem to need this in only one place. There are other calls of
ranlib in our makefiles, but they are working on intermediate files
whose timestamps are not actually important, or else on an installed
static library for which sub-second timestamp precision is unlikely
to matter either. (Also, so far as I can tell, Apple's ranlib doesn't
mess up the file timestamp in the latter usage anyhow.)
In passing, change "ranlib" to "$(RANLIB)" in one place that was
bypassing the make macro for no good reason.
Per bug #15525 from Jack Kelly (via Alyssa Ross).
Back-patch to all supported branches.
Michael Paquier [Thu, 29 Nov 2018 00:12:53 +0000 (09:12 +0900)]
Fix handling of synchronous replication for stopping WAL senders
This fixes an oversight from c6c3334 which forgot that if a subset of
WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction. However, the subset of stopping senders would not release
waiters, potentially breaking synchronous replication guarantees. This
commit makes sure that even stopping WAL senders are able to release
waiters and are tracked properly.
On 9.4, this can also trigger an assertion failure when setting, for
example, max_wal_senders to 1, where a WAL sender is not able to find
itself in a synchronous state when the instance stops.
Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4
Thomas Munro [Wed, 28 Nov 2018 01:00:57 +0000 (14:00 +1300)]
Don't set PAM_RHOST for Unix sockets.
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix
sockets. This caused Linux PAM's libaudit integration to make DNS
requests for that name. It's not exactly clear what value PAM_RHOST
should have in that case, but it seems clear that we shouldn't set it
to an unresolvable name, so don't do that.
Back-patch to 9.6. Bug #15520.
Author: Thomas Munro
Reviewed-by: Peter Eisentraut
Reported-by: Albert Schabhuetl
Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
Tomas Vondra [Wed, 28 Nov 2018 00:11:15 +0000 (01:11 +0100)]
Do not decode TOAST data for table rewrites
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.
That has severe consequences with TOAST tables of non-trivial size.
Firstly, the reorder buffer has to keep all those changes, possibly spilling
them to a file, incurring I/O costs and consuming disk space.
Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap. But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.
The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables. So simply use it for
all TOAST data in raw_heap_insert().
That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().
Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::regclass wouldn't work before that type is
known. Fix that, by replacing the regtype usage with a join to
pg_type.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6
Andres Freund [Tue, 27 Nov 2018 01:00:43 +0000 (17:00 -0800)]
Update pg_upgrade test for reg* to include regrole and regnamespace.
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.
It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.
While at it, reorder the types checked in the query to be
alphabetical. Otherwise it's annoying to compare the existing and the
tested-for types.
Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace
Tom Lane [Mon, 26 Nov 2018 22:32:51 +0000 (17:32 -0500)]
Fix translation of special characters in psql's LaTeX output modes.
latex_escaped_print() mistranslated \ and failed to provide any translation
for # ^ and ~, all of which would typically lead to LaTeX document syntax
errors. In addition it didn't translate < > and |, which would typically
render as unexpected characters.
To some extent this represents shortcomings in ancient versions of LaTeX,
which if memory serves had no easy way to render these control characters
as ASCII text. But that's been fixed for, um, decades. In any case there
is no value in emitting guaranteed-to-fail output for these characters.
Noted while fooling with test cases added by commit 9a98984f4. Back-patch
the code change to all supported versions.
Andrew Gierth [Sat, 24 Nov 2018 09:59:49 +0000 (09:59 +0000)]
Fix hstore hash function for empty hstores upgraded from 8.4.
Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the addon hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)
Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.
Fix by making hstoreUpgrade unconditionally convert all old values to
new format.
Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)
Per concerns arising from discussion of commit eb6f29141be. Original
bug is my fault.
Tom Lane [Sat, 24 Nov 2018 17:45:50 +0000 (12:45 -0500)]
Fix float-to-integer coercions to handle edge cases correctly.
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
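(A minimal standalone sketch of that comparison scheme --- hypothetical
names, not the actual ftoi4 code:)
    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Round first with rint(), then range-check against bounds that are
     * exactly representable as floats: INT32_MIN (-2^31) and -INT32_MIN
     * (+2^31).  INT32_MAX itself is not exactly representable in float,
     * which was the root of the original bug. */
    static bool
    float_to_int32(float num, int32_t *result)
    {
        num = rintf(num);
        if (isnan(num) ||
            num < (float) INT32_MIN ||
            num >= -((float) INT32_MIN))
            return false;       /* NaN or out of range */
        *result = (int32_t) num;
        return true;
    }

    int
    main(void)
    {
        int32_t v;

        /* 2147483647.0f silently becomes 2^31 as a float, so it is rejected */
        printf("%d\n", (int) float_to_int32(2147483647.0f, &v));
        printf("%d\n", (int) float_to_int32(-123.6f, &v));  /* ok, v = -124 */
        return 0;
    }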
This back-patches commits cbdb8b4c0 and 452b637d4. In the 9.4 branch,
also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and
related constants to c.h, so that these functions can rely on them.
Per bug #15519 from Victor Petrovykh.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Andrew Gierth [Fri, 23 Nov 2018 23:56:39 +0000 (23:56 +0000)]
Avoid crashes in contrib/intarray gist__int_ops (bug #15518)
1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow.
2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.
3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors pointing at gist__intbig_ops
as needed.
4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.
Backpatch all the way, since this has been wrong forever.
Per bug #15518 reported by irc user "dymk"; analysis and patch by
me.
Tom Lane [Mon, 19 Nov 2018 17:01:47 +0000 (12:01 -0500)]
Fix configure's AC_CHECK_DECLS tests to work correctly with clang.
The test case that Autoconf uses to discover whether a function has
been declared doesn't work reliably with clang, because clang reports
a warning not an error if the name is a known built-in function.
On some platforms, this results in a lot of compile-time warnings about
strlcpy and related functions not having been declared.
There is a fix for this (by Noah Misch) in the upstream Autoconf sources,
but since they've not made a release in years and show no indication of
doing so anytime soon, let's just absorb their fix directly. We can
revert this when and if we update to a newer Autoconf release.
Thomas Munro [Mon, 19 Nov 2018 00:40:57 +0000 (13:40 +1300)]
PANIC on fsync() failure.
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
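(In rough outline --- a hypothetical standalone sketch, with abort()
standing in for PANIC and a plain flag standing in for the data_sync_retry
GUC:)
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int data_sync_retry = 0;     /* stand-in for the GUC; off => panic */

    static void
    fsync_or_panic(int fd, const char *path)
    {
        if (fsync(fd) != 0)
        {
            perror(path);
            if (!data_sync_retry)
                abort();    /* "PANIC": a later fsync() success can't be trusted */
            /* otherwise report the failure and let a later checkpoint retry */
        }
    }

    int
    main(void)
    {
        int fd = open("datafile.tmp", O_RDWR | O_CREAT, 0600);

        if (fd < 0)
        {
            perror("open");
            return 1;
        }
        if (write(fd, "x", 1) != 1)
            perror("write");
        fsync_or_panic(fd, "datafile.tmp");
        close(fd);
        return 0;
    }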
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
Tomas Vondra [Sat, 17 Nov 2018 22:50:21 +0000 (23:50 +0100)]
Add valgrind suppressions for wcsrtombs optimizations
wcsrtombs (called through wchar2char from common functions like lower,
upper, etc.) uses various optimizations that may look like access to
uninitialized data, triggering valgrind reports.
For example AVX2 instructions load data in 256-bit chunks, and gconv
does something similar with 32-bit chunks. This is faster than accessing
the bytes one by one, and the uninitialized part of the buffer is not
actually used. So suppress the bogus reports.
The exact stack depends on possible optimizations - it might be AVX, SSE
(as in the report by Aleksander Alekseev) or something else. Hence the
last frame is wildcarded, to deal with this.
Backpatch all the way back to 9.4.
Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain
Tom Lane [Wed, 14 Nov 2018 21:29:57 +0000 (16:29 -0500)]
Doc: remove claim that all \pset format options are unique in 1 letter.
This hasn't been correct since 9.3 added "latex-longtable".
I left the phraseology "Unique abbreviations are allowed" alone.
It's correct as far as it goes, and we are studiously refraining
from specifying exactly what happens if you give a non-unique
abbreviation. (The answer in the back branches is "you get a
backwards-compatible choice", and the answer in HEAD will shortly
be "you get an error", but there seems no need to mention such
details here.)
Tom Lane [Wed, 14 Nov 2018 16:27:31 +0000 (11:27 -0500)]
Second try at fixing numeric data passed through an ECPG SQLDA.
In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the
grounds that we should duplicate the state of the numeric value's digit
buffer even when all the digits are zeroes. However, that still isn't
quite right, because another possible state of the digit buffer is
buf == digits == NULL (this occurs for a NaN). As the code now stands,
it'll invoke memcpy with a NULL source address and zero bytecount,
which we know a few platforms crash on. Hence, reinstate the no-copy
short-circuit, but make it test specifically for buf != NULL rather than
some other condition.
In hindsight, the ndigits test (added by commit f2ae9f9c3) was almost
certainly meant to fix the NaN case, not the all-zeroes case as the
associated thread alleged.
Michael Paquier [Wed, 14 Nov 2018 07:48:17 +0000 (16:48 +0900)]
Initialize TransactionState and user ID consistently at transaction start
If a failure happens when a transaction is starting between the moment
the transaction status is changed from TRANS_DEFAULT to TRANS_START and
the moment the current user ID and security context flags are fetched
via GetUserIdAndSecContext(), or before initializing its basic fields,
then those may get reset to incorrect values when the transaction
aborts, leaving the session in an inconsistent state.
One problem reported is that failing a starting transaction at the first
query of a session could cause several kinds of system crashes on the
follow-up queries.
In order to solve that, move the initialization of the transaction state
fields and the call of GetUserIdAndSecContext() in charge of fetching
the current user ID close to the point where the transaction status is
switched to TRANS_START, where there cannot be any error triggered
in-between, per an idea of Tom Lane. This properly ensures that the
current user ID, the security context flags, and the basic fields of
TransactionState remain consistent even if the transaction fails while
starting.
Reported-by: Richard Guo
Diagnosed-by: Richard Guo
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAN_9JTxECSb=pEPcb0a8d+6J+bDcOZ4=DgRo_B7Y5gRHJUM=Rw@mail.gmail.com
Backpatch-through: 9.4
Amit Kapila [Tue, 13 Nov 2018 05:13:35 +0000 (10:43 +0530)]
Fix the initialization of atomic variable introduced by the
group clearing mechanism.
Commit 0e141c0fbb introduced initialization of an atomic variable in
InitProcess, which means that it's not safe to look at it for backends that
aren't currently in use. Fix that by initializing it during postmaster
startup.
Reported-by: Andres Freund
Author: Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/20181027104138.qmbbelopvy7cw2qv@alap3.anarazel.de
Tom Lane [Mon, 12 Nov 2018 16:19:04 +0000 (11:19 -0500)]
Limit the number of index clauses considered in choose_bitmap_and().
classify_index_clause_usage() is O(N^2) in the number of distinct index
qual clauses it considers, because of its use of a simple search list to
store them. For nearly all queries, that's fine because only a few clauses
will be considered. But Alexander Kuzmenkov reported a machine-generated
query with 80000 (!) index qual clauses, which caused this code to take
forever. Somewhat remarkably, this is the only O(N^2) behavior we now
have for such a query, so let's fix it.
We can get rid of the O(N^2) runtime for cases like this without much
damage to the functionality of choose_bitmap_and() by separating out
paths with "too many" qual or pred clauses, and deeming them to always
be nonredundant with other paths. Then their clauses needn't go into
the search list, so it doesn't get too long, but we don't lose the
ability to consider bitmap AND plans altogether. I set the threshold
for "too many" to be 100 clauses per path, which should be plenty to
ensure no change in planning behavior for normal queries.
There are other things we could do to make this go faster, but it's not
clear that it's worth any additional effort. 80000 qual clauses require
a whole lot of work in many other places, too.
The code's been like this for a long time, so back-patch to all supported
branches. The troublesome query only works back to 9.5 (in 9.4 it fails
with stack overflow in the parser); so I'm not sure that fixing this in
9.4 has any real-world benefit, but perhaps it does.
Tom Lane [Sat, 10 Nov 2018 01:42:03 +0000 (20:42 -0500)]
Fix missing role dependencies for some schema and type ACLs.
This patch fixes several related cases in which pg_shdepend entries were
never made, or were lost, for references to roles appearing in the ACLs of
schemas and/or types. While that did no immediate harm, if a referenced
role were later dropped, the drop would be allowed and would leave a
dangling reference in the object's ACL. That still wasn't a big problem
for normal database usage, but it would cause obscure failures in
subsequent dump/reload or pg_upgrade attempts, taking the form of
attempts to grant privileges to all-numeric role names. (I think I've
seen field reports matching that symptom, but can't find any right now.)
Several cases are fixed here:
1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any
existing ACL entries for the domain. This case is ancient, dating
back as far as we've had pg_shdepend tracking at all.
2. If a default type privilege applies, CREATE TYPE recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for types in 9.2.
3. If a default schema privilege applies, CREATE SCHEMA recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for schemas in v10
(commit ab89e465c).
Another somewhat-related problem is that when creating a relation
rowtype or implicit array type, TypeCreate would apply any available
default type privileges to that type, which we don't really want
since such an object isn't supposed to have privileges of its own.
(You can't, for example, drop such privileges once they've been added
to an array type.)
ab89e465c is also to blame for a race condition in the regression tests:
privileges.sql transiently installed globally-applicable default
privileges on schemas, which sometimes got absorbed into the ACLs of
schemas created by concurrent test scripts. This should have resulted
in failures when privileges.sql tried to drop the role holding such
privileges; but thanks to the bug fixed here, it instead led to dangling
ACLs in the final state of the regression database. We'd managed not to
notice that, but it became obvious in the wake of commit da906766c, which
allowed the race condition to occur in pg_upgrade tests.
To fix, add a function recordDependencyOnNewAcl to encapsulate what
callers of get_user_default_acl need to do; while the original call
sites got that right via ad-hoc code, none of the later-added ones
have. Also change GenerateTypeDependencies to generate these
dependencies, which requires adding the typacl to its parameter list.
(That might be annoying if there are any extensions calling that
function directly; but if there are, they're most likely buggy in the
same way as the core callers were, so they need work anyway.) While
I was at it, I changed GenerateTypeDependencies to accept most of its
parameters in the form of a Form_pg_type pointer, making its parameter
list a bit less unwieldy and mistake-prone.
The test race condition is fixed just by wrapping the addition and
removal of default privileges into a single transaction, so that that
state is never visible externally. We might eventually prefer to
separate out tests of default privileges into a script that runs by
itself, but that would be a bigger change and would make the tests
run slower overall.
Back-patch relevant parts to all supported branches.
Tom Lane [Thu, 8 Nov 2018 22:33:26 +0000 (17:33 -0500)]
Disallow setting client_min_messages higher than ERROR.
Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client. Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end. While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused. Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.
In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages. (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)
In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way. I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.
In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.
Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.
Andres Freund [Mon, 5 Nov 2018 20:02:25 +0000 (12:02 -0800)]
Fix copy-paste error in errhint() introduced in 691d79a07933.
Reported-By: Petr Jelinek
Discussion: https://postgr.es/m/c95a620b-34f0-7930-aeb5-f7ab804f26cb@2ndquadrant.com
Backpatch: 9.4-, like the previous commit
Tom Lane [Sat, 3 Nov 2018 17:56:10 +0000 (13:56 -0400)]
Make ts_locale.c's character-type functions cope with UTF-16.
On Windows, in UTF8 database encoding, what char2wchar() produces is
UTF16 not UTF32, ie, characters above U+FFFF will be represented by
surrogate pairs. t_isdigit() and siblings did not account for this
and failed to provide a large enough result buffer. That in turn
led to bogus "invalid multibyte character for locale" errors, because
contrary to what you might think from char2wchar()'s documentation,
its Windows code path doesn't cope sanely with buffer overflow.
The solution for t_isdigit() and siblings is pretty clear: provide
a 3-wchar_t result buffer not 2.
char2wchar() also needs some work to provide more consistent, and more
accurately documented, buffer overrun behavior. But that's a bigger job
and it doesn't actually have any immediate payoff, so leave it for later.
Per bug #15476 from Kenji Uno, who deserves credit for identifying the
cause of the problem. Back-patch to all active branches.
Tom Lane [Fri, 2 Nov 2018 22:54:00 +0000 (18:54 -0400)]
Yet further rethinking of build changes for macOS Mojave.
The solution arrived at in commit e74dd00f5 presumes that the compiler
has a suitable default -isysroot setting ... but further experience
shows that in many combinations of macOS version, XCode version, Xcode
command line tools version, and phase of the moon, Apple's compiler
will *not* supply a default -isysroot value.
We could potentially go back to the approach used in commit 68fc227dd,
but I don't have a lot of faith in the reliability or life expectancy of
that either. Let's just revert to the approach already shipped in 11.0,
namely specifying an -isysroot switch globally. As a partial response to
the concerns raised by Jakob Egger, adjust the contents of Makefile.global
so that the -isysroot switch is supplied through an overridable PG_SYSROOT
variable.
This allows overriding the sysroot path at build time in a relatively
painless way.
Add documentation to installation.sgml about how to use the PG_SYSROOT
option. I also took the opportunity to document how to work around
macOS's "System Integrity Protection" feature.
Andres Freund [Wed, 31 Oct 2018 21:47:41 +0000 (14:47 -0700)]
Disallow starting server with insufficient wal_level for existing slot.
Previously it was possible to create a slot, change wal_level, and
restart, even if the new wal_level was insufficient for the
slot. That's a problem for both logical and physical slots, because
the necessary WAL records are not generated.
This removes a few tests in newer versions that, somewhat
inexplicably, checked whether restarting with a too low wal_level worked (a
buggy behaviour!).
Reported-By: Joshua D. Drake
Author: Andres Freund
Discussion: https://postgr.es/m/20181029191304.lbsmhshkyymhw22w@alap3.anarazel.de
Backpatch: 9.4-, where replication slots were introduced
Tom Lane [Wed, 31 Oct 2018 21:04:43 +0000 (17:04 -0400)]
Fix memory leak in repeated SPGIST index scans.
spgendscan neglected to pfree all the memory allocated by spgbeginscan.
It's possible to get away with that in most normal queries, since the
memory is allocated in the executor's per-query context which is about
to get deleted anyway; but it causes severe memory leakage during
creation or filling of large exclusion-constraint indexes.
Also, document that amendscan is supposed to free what ambeginscan
allocates. The docs' lack of clarity on that point probably caused this
bug to begin with. (There is discussion of changing that API spec going
forward, but I don't think it'd be appropriate for the back branches.)
Per report from Bruno Wolff. It's been like this since the beginning,
so back-patch to all active branches.
In HEAD, also fix an independent leak caused by commit 2a6368343
(allocating memory during spgrescan instead of spgbeginscan, which
might be all right if it got cleaned up, but it didn't). And do a bit
of code beautification on that commit, too.
Tom Lane [Wed, 31 Oct 2018 13:47:53 +0000 (09:47 -0400)]
Sync our copy of the timezone library with IANA release tzcode2018g.
This patch absorbs an upstream fix to "zic" for a recently-introduced
bug that made it output data that some 32-bit clients couldn't read.
Given the current source data, the bug only manifests in zones with
leap seconds, which we don't generate, so that there's no actual
change in our installed timezone data files from this. Still, in
case somebody uses our copy of "zic" to do something else, it seems
best to apply the fix promptly.
Also, update the README's notes about converting upstream code to
our conventions.
Andrew Dunstan [Sun, 28 Oct 2018 16:22:32 +0000 (12:22 -0400)]
Fix perl searchpath for modern perl for MSVC tools
Modern versions of perl no longer include the current directory in the
perl searchpath, as it's insecure. Instead of adding the current
directory, we get around the problem by adding the directory where the
script lives.
Andrew Dunstan [Sat, 20 Oct 2018 13:02:36 +0000 (09:02 -0400)]
Lower privilege level of programs calling regression_main
On Windows this mean that the regression tests can now safely and
successfully run as Administrator, which is useful in situations like
Appveyor. Elsewhere it's a no-op.
Backpatch to 9.5 - this is harder in earlier branches and not worth the
trouble.
Tom Lane [Sat, 20 Oct 2018 02:22:57 +0000 (22:22 -0400)]
Client-side fixes for delayed NOTIFY receipt.
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
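(A minimal client-side sketch of that pattern; the program around it is
hypothetical, while PQconsumeInput(), PQnotifies(), and PQfreemem() are the
libpq calls in question:)
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Absorb any input waiting on the socket before *each* PQnotifies()
     * call, since PQnotifies() itself only examines already-read data. */
    static int
    drain_notifications(PGconn *conn)
    {
        PGnotify *notify;

        for (;;)
        {
            if (!PQconsumeInput(conn))
            {
                fprintf(stderr, "connection trouble: %s", PQerrorMessage(conn));
                return 0;
            }
            notify = PQnotifies(conn);
            if (notify == NULL)
                break;
            printf("NOTIFY \"%s\" payload \"%s\" from PID %d\n",
                   notify->relname, notify->extra, notify->be_pid);
            PQfreemem(notify);
        }
        return 1;
    }

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("");     /* connection info from environment */

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }
        PQclear(PQexec(conn, "LISTEN my_channel"));
        /* a real client would wait on PQsocket(conn) here, then: */
        drain_notifications(conn);
        PQfinish(conn);
        return 0;
    }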
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Tom Lane [Sat, 20 Oct 2018 01:39:22 +0000 (21:39 -0400)]
Server-side fix for delayed NOTIFY and SIGTERM processing.
Commit 4f85fde8e introduced some code that was meant to ensure that we'd
process cancel, die, sinval catchup, and notify interrupts while waiting
for client input. But there was a flaw: it supposed that the process
latch would be set upon arrival at secure_read() if any such interrupt
was pending. In reality, we might well have cleared the process latch
at some earlier point while those flags remained set -- particularly
notifyInterruptPending, which can't be handled as long as we're within
a transaction.
To fix the NOTIFY case, also attempt to process signals (except
ProcDiePending) before trying to read.
Also, if we see that ProcDiePending is set before we read, forcibly set the
process latch to ensure that we will handle that signal promptly if no data
is available. I also made it set the process latch on the way out, in case
there is similar logic elsewhere. (It remains true that we won't service
ProcDiePending here unless we need to wait for input.)
The code for handling ProcDiePending during a write needs those changes,
too.
Also be a little more careful about when to reset whereToSendOutput,
and improve related comments.
Back-patch to 9.5 where this code was added. I'm not entirely convinced
that older branches don't have similar issues, but the complaint at hand
is just about the >= 9.5 code.
Tom Lane [Fri, 19 Oct 2018 23:36:34 +0000 (19:36 -0400)]
Sync our copy of the timezone library with IANA release tzcode2018f.
About half of this is purely cosmetic changes to reduce the diff between
our code and theirs, like inserting "const" markers where they have them.
The other half is tracking actual code changes in zic.c and localtime.c.
I don't think any of these represent near-term compatibility hazards, but
it seems best to stay up to date.
I also fixed longstanding bugs in our code for producing the
known_abbrevs.txt list, which by chance hadn't been exposed before,
but which resulted in some garbage output after applying the upstream
changes in zic.c. Notably, because upstream removed their old phony
transitions at the Big Bang, it's now necessary to cope with TZif files
containing no DST transition times at all.
Tom Lane [Fri, 19 Oct 2018 21:01:34 +0000 (17:01 -0400)]
Update time zone data files to tzdata release 2018f.
DST law changes in Chile, Fiji, and Russia (Volgograd).
Historical corrections for China, Japan, Macau, and North Korea.
Note: like the previous tzdata update, this involves a depressingly
large amount of semantically-meaningless churn in tzdata.zi. That
is a consequence of upstream's data compression method assigning
unstable abbreviations to DST rulesets. I complained about that
to them last time, and this version now uses an assignment method
that pays some heed to not changing abbreviations unnecessarily.
So hopefully, that'll be better going forward.
Tom Lane [Thu, 18 Oct 2018 18:55:23 +0000 (14:55 -0400)]
Still further rethinking of build changes for macOS Mojave.
To avoid the sorts of problems complained of by Jakob Egger, it'd be
best if configure didn't emit any references to the sysroot path at all.
In the case of PL/Tcl, we can do that just by keeping our hands off the
TCL_INCLUDE_SPEC string altogether. In the case of PL/Perl, we need to
substitute -iwithsysroot for -I in the compile commands, which is easily
handled if we change to using a configure output variable that includes
the switch not only the directory name. Since PL/Tcl and PL/Python
already do it like that, this seems like good consistency cleanup anyway.
Hence, this replaces the advice given to Perl-related extensions in commit 5e2217131; instead of writing "-I$(perl_archlibexp)/CORE", they should
just write "$(perl_includespec)". (The old way continues to work, but not
on recent macOS.)
It's still the case that configure needs to be aware of the sysroot
path internally, but that's cleaner than what we had before.
Tom Lane [Wed, 17 Oct 2018 19:06:38 +0000 (15:06 -0400)]
Fix minor bug in isolationtester.
If the lock wait query failed, isolationtester would report the
PQerrorMessage from some other connection, meaning there would be
no message or an unrelated one. This seems like a pretty unlikely
occurrence, but if it did happen, this bug could make it really
difficult/confusing to figure out what happened. That seems to
justify patching all the way back.
In passing, clean up another place where the "wrong" conn was used
for an error report. That one's not actually buggy because it's
a different alias for the same connection, but it's still confusing
to the reader.
Tom Lane [Wed, 17 Oct 2018 16:26:48 +0000 (12:26 -0400)]
Improve tzparse's handling of TZDEFRULES ("posixrules") zone data.
In the IANA timezone code, tzparse() always tries to load the zone
file named by TZDEFRULES ("posixrules"). Previously, we'd hacked
that logic to skip the load in the "lastditch" code path, which we use
only to initialize the default "GMT" zone during GUC initialization.
That's critical for a couple of reasons: since we do not support leap
seconds, we *must not* allow "GMT" to have leap seconds, and since this
case runs before the GUC subsystem is fully alive, we'd really rather
not take the risk of pg_open_tzfile throwing any errors.
However, that still left the code reading TZDEFRULES on every other
call, something we'd noticed to the extent of having added code to cache
the result so it was only done once per process not a lot of times.
Andres Freund complained about the static data space used up for the
cache; but as long as the logic was like this, there was no point in
trying to get rid of that space.
We can improve matters by looking a bit more closely at what the IANA
code actually needs the TZDEFRULES data for. One thing it does is
that if "posixrules" is a leap-second-aware zone, the leap-second
behavior will be absorbed into every POSIX-style zone specification.
However, that's a behavior we'd really prefer to do without, since
for our purposes the end effect is to render every POSIX-style zone
name unsupported. Otherwise, the TZDEFRULES data is used only if
the POSIX zone name specifies DST but doesn't include a transition
date rule (e.g., "EST5EDT" rather than "EST5EDT,M3.2.0,M11.1.0").
That is a minority case for our purposes --- in particular, it
never happens when tzload() invokes tzparse() to interpret a
transition date rule string found in a tzdata zone file.
Hence, if we legislate that we're going to ignore leap-second data
from "posixrules", we can postpone the TZDEFRULES load into the path
where we actually need to substitute for a missing date rule string.
That means it will never happen at all in common scenarios, making it
reasonable to dynamically allocate the cache space when it does happen.
Even when the data is already loaded, this saves some cycles in the
common code path since we avoid a memcpy of 23KB or so. And, IMO at
least, this is a less ugly hack on the IANA logic than what we had
before, since it's not messing with the lastditch-vs-regular code paths.
Back-patch to all supported branches, not so much because this is a
critical change as that I want to keep all our copies of the IANA
timezone code in sync.
Tom Lane [Tue, 16 Oct 2018 20:27:15 +0000 (16:27 -0400)]
Back off using -isysroot on Darwin.
Rethink the solution applied in commit 5e2217131 to get PL/Tcl to
build on macOS Mojave. I feared that adding -isysroot globally might
have undesirable consequences, and sure enough Jakob Egger reported
one: it complicates building extensions with a different Xcode version
than was used for the core server. (I find that a risky proposition
in general, but apparently it works most of the time, so we shouldn't
break it if we don't have to.)
We'd already adopted the solution for PL/Perl of inserting the sysroot
path directly into the -I switches used to find Perl's headers, and we
can do the same thing for PL/Tcl by changing the -iwithsysroot switch
that Apple's tclConfig.sh reports. This restricts the risks to PL/Perl
and PL/Tcl themselves and directly-dependent extensions, which is a lot
more pleasing in general than a global -isysroot switch.
Along the way, tighten the test to see if we need to inject the sysroot
path into $perl_includedir, as I'd speculated about upthread but not
gotten round to doing.
Tom Lane [Tue, 16 Oct 2018 17:56:58 +0000 (13:56 -0400)]
Avoid rare race condition in privileges.sql regression test.
We created a temp table, then switched to a new session, leaving
the old session to clean up its temp objects in background. If that
took long enough, the eventual attempt to drop the user that owns
the temp table could fail, as exhibited today by sidewinder.
Fix by dropping the temp table explicitly when we're done with it.
It's been like this for quite some time, so back-patch to all
supported branches.
Tom Lane [Tue, 16 Oct 2018 16:27:14 +0000 (12:27 -0400)]
Make PostgresNode.pm's poll_query_until() more chatty about failures.
Reporting only the stderr is unhelpful when the problem is that the
server output we're getting doesn't match what was expected. So we
should report the query output too; and just for good measure, let's
print the query we used and the output we expected.
Back-patch to 9.5 where poll_query_until was introduced.
Tom Lane [Tue, 16 Oct 2018 16:01:19 +0000 (12:01 -0400)]
Improve stability of recently-added regression test case.
Commit b5febc1d1 added a contrib/btree_gist test case that has been
observed to fail in the buildfarm as a result of background auto-analyze
updating stats and changing the selected plan. Forestall that by
forcibly analyzing in foreground, instead. The new plan choice is
just as good for our purposes, since we really only care that an
index-only plan does not get selected.
localtime.c's "struct state" is a rather large object, ~23KB. We were
statically allocating one for gmtsub() to use to represent the GMT
timezone, even though that function is not at all heavily used and is
never reached in most backends. Let's malloc it on-demand, instead.
This does pose the question of how to handle a malloc failure, but
there's already a well-defined error report convention here, ie
set errno and return NULL.
We have but one caller of pg_gmtime in HEAD, and two in back branches,
neither of which were troubling to check for error. Make them do so.
The possible errors are sufficiently unlikely (out-of-range timestamp,
and now malloc failure) that I think elog() is adequate.
Back-patch to all supported branches to keep our copies of the IANA
timezone code in sync. This particular change is in a stanza that
already differs from upstream, so it's a wash for maintenance purposes
--- but only as long as we keep the branches the same.
Tom Lane [Mon, 15 Oct 2018 18:01:38 +0000 (14:01 -0400)]
Check for stack overrun in standard_ProcessUtility().
ProcessUtility can recurse, and indeed can be driven to infinite
recursion, so it ought to have a check_stack_depth() call. This
covers the reported bug (portal trying to execute itself) and a bunch
of other cases that could perhaps arise somewhere.
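(For reference, a hypothetical standalone sketch of that sort of guard; the
real one is check_stack_depth() in the backend, and the names and limit
below are made up:)
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    static char *stack_base_ptr;
    #define MAX_STACK_DEPTH_BYTES (2 * 1024 * 1024)

    /* Error out once the stack has grown too far from the reference point. */
    static void
    my_check_stack_depth(void)
    {
        char        here;
        ptrdiff_t   depth = stack_base_ptr - &here;

        if (depth < 0)
            depth = -depth;
        if (depth > MAX_STACK_DEPTH_BYTES)
        {
            fprintf(stderr, "stack depth limit exceeded\n");
            exit(EXIT_FAILURE);
        }
    }

    /* A routine that can recurse guards itself by checking on entry. */
    static void
    process_utility_like(int n)
    {
        my_check_stack_depth();
        if (n > 0)
            process_utility_like(n - 1);
    }

    int
    main(void)
    {
        char base;

        stack_base_ptr = &base;
        process_utility_like(10);
        return 0;
    }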
Per bug #15428 from Malthe Borch. Back-patch to all supported branches.
This commit documents rounding of the "length" parameter and the absence of
support for unique indexes and for searching for NULLs. Backpatch to 9.6
where contrib/bloom was introduced.
Discussion: https://postgr.es/m/CAF4Au4wPQQ7EHVSnzcLjsbY3oLSzVk6UemZLD1Sbmwysy3R61g%40mail.gmail.com
Author: Oleg Bartunov with minor editorialization by me
Backpatch-through: 9.6
Michael Paquier [Sun, 14 Oct 2018 13:23:43 +0000 (22:23 +0900)]
Avoid duplicate XIDs at recovery when building initial snapshot
On a primary, sets of XLOG_RUNNING_XACTS records are generated on a
periodic basis to allow recovery to build the initial state of
transactions for a hot standby. The set of transaction IDs is created
by scanning all the entries in ProcArray. However, its logic never
counted on the fact that two-phase transactions that are finishing their
prepare phase can put ProcArray in a state where there are two entries with
the same transaction ID: one for the initial transaction, which gets
cleared when prepare finishes, and a second, dummy, entry to track that
the transaction is still running after prepare finishes. This ensures a
continuous presence of the transaction, so that callers of, for example,
TransactionIdIsInProgress() are always able to see it as alive.
So, if XLOG_RUNNING_XACTS takes a standby snapshot while a two-phase
transaction is finishing its prepare, the record can end up with duplicated
XIDs, which is a state expected by design. If this record gets applied
on a standby to initialize its recovery state, then it would simply fail,
so the odds of facing this failure are very low in practice. It would
be tempting to change the generation of XLOG_RUNNING_XACTS so that
duplicates are removed on the source, but this requires holding
ProcArrayLock for longer and would impact all workloads,
particularly those making heavy use of two-phase transactions.
XLOG_RUNNING_XACTS is also actually used only to initialize the standby
state at recovery, so the solution taken here is instead to discard
duplicates when applying the initial snapshot.
Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/0c96b653-4696-d4b4-6b5d-78143175d113@postgrespro.ru
Backpatch-through: 9.3
Tom Lane [Fri, 12 Oct 2018 23:33:56 +0000 (19:33 -0400)]
Remove abstime, reltime, tinterval tables from old regression databases.
In the back branches, drop these tables after the regression tests are
done with them. This fixes failures of cross-branch pg_upgrade testing
caused by these types having been removed in v12. We do lose the ability
to test dump/restore behavior with these types in the back branches, but
the actual loss of code coverage seems to be nil given that there's nothing
very special about these types.
Andres Freund [Wed, 10 Oct 2018 20:53:02 +0000 (13:53 -0700)]
Fix logical decoding error when system table w/ toast is repeatedly rewritten.
Repeatedly rewriting a mapped catalog table with VACUUM FULL or
CLUSTER could cause logical decoding to fail with:
ERROR, "could not map filenode \"%s\" to relation OID"
To trigger the problem the rewritten catalog had to have live tuples
with toasted columns.
The problem was triggered because, during catalog table rewrites, the
heap_insert() check that prevents logical decoding information from being
emitted for system catalogs failed to treat the new heap's toast table
as a system catalog (because the new heap is not recognized as a
catalog table via RelationIsLogicallyLogged()). The relmapper, in
contrast to the normal catalog contents, does not contain historical
information. After a single rewrite of a mapped table the new relation
is known to the relmapper, but if the table is rewritten twice before
logical decoding occurs, the relfilenode cannot be mapped to a
relation anymore. Which then leads us to error out. This only
happens for toast tables, because the main table contents aren't
re-inserted with heap_insert().
The fix is simple, add a new heap_insert() flag that prevents logical
decoding information from being emitted, and accept during decoding
that there might not be tuple data for toast tables.
Unfortunately that does not fix pre-existing logical decoding
errors. Doing so would require not throwing an error when a filenode
cannot be mapped to a relation during decoding, and that seems too
likely to hide bugs. If it's crucial to fix decoding for an existing
slot, temporarily changing the ERROR in ReorderBufferCommit() to a
WARNING appears to be the best fix.
Author: Andres Freund
Discussion: https://postgr.es/m/20180914021046.oi7dm4ra3ot2g2kt@alap3.anarazel.de
Backpatch: 9.4-, where logical decoding was introduced
Alvaro Herrera [Mon, 8 Oct 2018 13:37:21 +0000 (10:37 -0300)]
Silence compiler warning in Assert()
gcc 6.3 does not whine about this mistake I made in 39808e8868c8 but
evidently lots of other compilers do, according to Michael Paquier,
Peter Eisentraut, Arthur Zakirov, Tomas Vondra.
Alvaro Herrera [Sat, 6 Oct 2018 22:17:46 +0000 (19:17 -0300)]
Fix event triggers for partitioned tables
Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly. This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tries to finalize its processing. Fix the crash by
creating a stack of event trigger state objects. There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.
Backpatch to 9.5, where DDL deparsing of event triggers was introduced.
Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
Tom Lane [Sat, 6 Oct 2018 16:00:10 +0000 (12:00 -0400)]
Propagate xactStartTimestamp and stmtStartTimestamp to parallel workers.
Previously, a worker process would establish values for these based on
its own start time. In v10 and up, this can trivially be shown to cause
misbehavior of transaction_timestamp(), timestamp_in(), and related
functions which are (perhaps unwisely?) marked parallel-safe. It seems
likely that other behaviors might diverge from what happens in the parent
as well.
It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure
it's still possible, so back-patch to all branches containing parallel
worker infrastructure.
In HEAD only, mark now() and statement_timestamp() as parallel-safe
(other affected functions already were). While in theory we could
still squeeze that change into v11, it doesn't seem important enough
to force a last-minute catversion bump.
Tom Lane [Fri, 5 Oct 2018 20:01:30 +0000 (16:01 -0400)]
Allow btree comparison functions to return INT_MIN.
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
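(A sketch of the hazard and the replacement pattern; the macro below
mirrors the INVERT_COMPARE_RESULT() idea, though the exact in-tree
definition may differ:)
    #include <limits.h>
    #include <stdio.h>

    /* When a comparator passes through memcmp()/strcmp() output directly,
     * that value may be INT_MIN, and negating INT_MIN is undefined
     * behavior.  Map any negative result to +1 instead. */
    #define INVERT_COMPARE_RESULT(var) \
        ((var) = ((var) < 0) ? 1 : -(var))

    int
    main(void)
    {
        int res = INT_MIN;          /* pretend memcmp() returned this */

        /* res = -res;                 would overflow: undefined behavior */
        INVERT_COMPARE_RESULT(res); /* any negative result becomes +1 */
        printf("%d\n", res);        /* prints 1 */
        return 0;
    }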
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Amit Kapila [Wed, 3 Oct 2018 04:24:01 +0000 (09:54 +0530)]
MAXALIGN the target address where we store flattened value.
The API (EOH_flatten_into) that flattens the expanded value representation
expects the target address to be maxaligned. All its usages adhere to that
principle except when serializing datums for parallel query. Fix that
usage.
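(For illustration, a standalone sketch of what maxaligning a target address
means; 8-byte alignment is assumed here, whereas PostgreSQL derives the real
MAXIMUM_ALIGNOF at configure time and its MAXALIGN() macro may differ in
detail:)
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_ALIGNOF 8       /* assumed; the real value is platform-dependent */
    #define MAXALIGN_UP(p) \
        ((char *) (((uintptr_t) (p) + (MAX_ALIGNOF - 1)) & ~((uintptr_t) (MAX_ALIGNOF - 1))))

    int
    main(void)
    {
        char  buffer[64];
        char *misaligned = buffer + 3;          /* deliberately odd offset */
        char *target = MAXALIGN_UP(misaligned); /* safe to hand to a flatten-style API */

        printf("%s\n", ((uintptr_t) target % MAX_ALIGNOF) == 0
               ? "target is maxaligned" : "not aligned");
        return 0;
    }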
Diagnosed-by: Tom Lane
Author: Tom Lane and Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
Tom Lane [Tue, 2 Oct 2018 16:41:28 +0000 (12:41 -0400)]
Set snprintf.c's maximum number of NL arguments to be 31.
Previously, we used the platform's NL_ARGMAX if any, otherwise 16.
The trouble with this is that the platform value is hugely variable,
ranging from the POSIX-minimum 9 to as much as 64K on recent FreeBSD.
Values of more than a dozen or two have no practical use and slow down
the initialization of the argtypes array. Worse, they cause snprintf.c
to consume far more stack space than was the design intention, possibly
resulting in stack-overflow crashes.
Standardize on 31, which is comfortably more than we need (it looks like
no existing translatable message has more than about 10 parameters).
I chose that, not 32, to make the array sizes powers of 2, for some
possible small gain in speed of the memset.
The lack of reported crashes suggests that the set of platforms we
use snprintf.c on (in released branches) may have no overlap with
the set where NL_ARGMAX has unreasonably large values. But that's
not entirely clear, so back-patch to all supported branches.
Tom Lane [Tue, 2 Oct 2018 15:54:12 +0000 (11:54 -0400)]
Fix corner-case failures in has_foo_privilege() family of functions.
The variants of these functions that take numeric inputs (OIDs or
column numbers) are supposed to return NULL rather than failing
on bad input; this rule reduces problems with snapshot skew when
queries apply the functions to all rows of a catalog.
has_column_privilege() had careless handling of the case where the
table OID didn't exist. You might get something like this:
    select has_column_privilege(9999,'nosuchcol','select');
    ERROR: column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response
to a null string pointer.
In addition, while applying the column-number variant to a dropped
column returned NULL as desired, applying the column-name variant
did not:
    select has_column_privilege('mytable','........pg.dropped.2........','select');
    ERROR: column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well.
Also, the OID-accepting variants of has_foreign_data_wrapper_privilege,
has_server_privilege, and has_tablespace_privilege didn't follow the
principle of returning NULL for nonexistent OIDs. Superusers got TRUE,
everybody else got an error.
Per investigation of Jaime Casanova's report of a new crash in HEAD.
These behaviors have been like this for a long time, so back-patch to
all supported branches.
Patch by me; thanks to Stephen Frost for discussion and review
Michael Paquier [Tue, 2 Oct 2018 07:36:11 +0000 (16:36 +0900)]
Fix documentation of pgrowlocks using "lock_type" instead of "modes"
The example used in the documentation is outdated as well. This is an
oversight from 0ac5ad5, which bumped up pgrowlocks but forgot some bits
of the documentation.
Reported-by: Chris Wilson
Discussion: https://postgr.es/m/153838692816.2950.12001142346234155699@wrigleys.postgresql.org
Backpatch-through: 9.3
Tom Lane [Mon, 1 Oct 2018 15:51:07 +0000 (11:51 -0400)]
Fix tuple_data_split() to not open a relation without any lock.
contrib/pageinspect's tuple_data_split() function thought it could get
away with opening the referenced relation with NoLock. In practice
there's no guarantee that the current session holds any lock on that
rel (even if we just read a page from it), so that this is unsafe.
Switch to using AccessShareLock. Also, postpone closing the relation,
so that we needn't copy its tupdesc. Also, fix unsafe use of
att_isnull() for attributes past the end of the tuple.
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Tom Lane [Mon, 1 Oct 2018 15:39:14 +0000 (11:39 -0400)]
Fix ALTER COLUMN TYPE to not open a relation without any lock.
If the column being modified is referenced by a foreign key constraint
of another table, ALTER TABLE would open the other table (to re-parse
the constraint's definition) without having first obtained a lock on it.
This was evidently intentional, but that doesn't mean it's really safe.
It's especially not safe in 9.3, which pre-dates use of MVCC scans for
catalog reads, but even in current releases it doesn't seem like a good
idea.
We know we'll need AccessExclusiveLock shortly to drop the obsoleted
constraint, so just get that a little sooner to close the hole.
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Tom Lane [Sun, 30 Sep 2018 20:24:56 +0000 (16:24 -0400)]
Fix detection of the result type of strerror_r().
The method we've traditionally used, of redeclaring strerror_r() to
see if the compiler complains of inconsistent declarations, turns out
not to work reliably because some compilers only report a warning,
not an error. Amazingly, this has gone undetected for years, even
though it certainly breaks our detection of whether strerror_r
succeeded.
Let's instead test whether the compiler will take the result of
strerror_r() as a switch() argument. It's possible this won't
work universally either, but it's the best idea I could come up with
on the spur of the moment.
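The probe's core idea looks roughly like this (a sketch of the compile test,
not the literal autoconf fragment): switch() requires an integer controlling
expression, so this compiles only if strerror_r() returns int, and fails
outright if it returns char *.
#include <string.h>
int
main(void)
{
	char		buf[100];
	switch (strerror_r(1, buf, sizeof(buf)))
	{
		case 0:
			break;
		default:
			break;
	}
	return 0;
}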
Back-patch of commit 751f532b9. Buildfarm results indicate that only
icc-on-Linux actually has an issue here; perhaps the lack of field
reports indicates that people don't build PG for production that way.
Amit Kapila [Fri, 28 Sep 2018 07:15:00 +0000 (12:45 +0530)]
Fix assertion failure when updating full_page_writes for checkpointer.
When the checkpointer receives a SIGHUP signal to update its configuration,
it may need to update the shared memory for full_page_writes and write a
WAL record for it. However, it is quite possible that the XLOG machinery
has not been initialized by that time, in which case doing so leads to an
assertion failure. The fix is to perform the initialization of the XLOG
machinery outside the critical section.
This bug was introduced by commit 2c03216d83, which added the XLOG
machinery initialization to the RecoveryInProgress code path.
Reported-by: Dilip Kumar
Author: Dilip Kumar
Reviewed-by: Michael Paquier and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAFiTN-u4BA8KXcQUWDPNgaKAjDXC=C2whnzBM8TAcv=stckYUw@mail.gmail.com
Michael Paquier [Fri, 28 Sep 2018 02:56:04 +0000 (11:56 +0900)]
Fix WAL recycling on standbys depending on archive_mode
A restart point or a checkpoint recycling WAL segments treats segments
marked with neither ".done" (archiving is done) nor ".ready" (segment is
ready to be archived) in archive_status the same way whether archive_mode
is "on" or "always". While this is fine for a primary, a standby
running a restart point with archive_mode = on would try to mark such a
segment as ready for archiving, which is something that should never
happen until the standby is promoted.
Note that this problem applies only to WAL segments coming from the
local pg_wal the first time archive recovery is run. Segments that are
part of a self-contained base backup are the most common case where this
could happen; however, even then the .done markers would most likely be
part of the backup. Segments recovered from an archive are marked as
.ready or .done by the startup process, and segments finished streaming
are marked as such by the WAL receiver, so those cases are already
handled.
Reported-by: Haruka Takatsuka
Author: Michael Paquier
Discussion: https://postgr.es/m/15402-a453c90ed4cf88b2@postgresql.org
Backpatch-through: 9.5, where archive_mode = always was added.
Peter Eisentraut [Fri, 15 Jun 2018 03:22:14 +0000 (23:22 -0400)]
Recurse to sequences on ownership change for all relkinds
When a table's ownership is changed, we must also apply that change to any
owned sequences. (Otherwise, it would result in a situation that cannot
be restored, because linked sequences must have the same owner as the
table.) But this was previously applied only to regular tables and
materialized views, even though it should also apply to at least foreign
tables. This patch removes the relkind check altogether, because it
doesn't save very much and just introduces the possibility of similar
omissions.
Bug: #15238
Reported-by: Christoph Berg <christoph.berg@credativ.de>
Michael Paquier [Wed, 26 Sep 2018 01:29:49 +0000 (10:29 +0900)]
Rework activation of commit timestamps during recovery
The activation and deactivation of commit timestamp tracking have not
been handled consistently for a primary or standbys during recovery. The
facility can be activated at three different moments of recovery:
- The beginning, where a primary would use the GUC value for the
decision-making, and where a standby relies on the contents of the
control file.
- When replaying an XLOG_PARAMETER_CHANGE record at redo.
- The end, where both primary and standby rely on the GUC value.
Using the GUC value for a primary at the beginning of recovery causes
problems with commit timestamp access when doing crash recovery.
In particular, when replaying transaction commits, an attempt could be
made to read the commit timestamp of a transaction which committed at a
moment when track_commit_timestamp was disabled.
A test case is added to reproduce the failure. The test works down to
v11 as it takes advantage of transaction commits within procedures.
Reported-by: Hailong Li
Author: Masahiko Sawasa, Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/11224478-a782-203b-1f17-e4797b39bdf0@qunar.com
Backpatch-through: 9.5, where commit timestamps were introduced.
Tom Lane [Tue, 25 Sep 2018 17:23:29 +0000 (13:23 -0400)]
Make some fixes to allow building Postgres on macOS 10.14 ("Mojave").
Apple's latest rearrangements of the system-supplied headers have broken
building of PL/Perl and PL/Tcl. The only practical way to fix PL/Tcl is to
start using the "-isysroot" compiler flag to point to SDK-supplied headers,
as Apple expects. We must also start distinguishing where to find Perl's
headers from where to find its shared library; but that seems like good
cleanup anyway.
Extensions that formerly did something like -I$(perl_archlibexp)/CORE
should now do -I$(perl_includedir)/CORE instead. perl_archlibexp
is still the place to look for libperl.so, though.
If for some reason you don't like the default -isysroot setting, you can
override that by setting PG_SYSROOT in configure's arguments. I don't
currently think people would need to do so, unless maybe for cross-version
build purposes.
In addition, teach configure where to find tclConfig.sh. Our traditional
method of searching $auto_path hasn't worked for the last couple of macOS
releases, and it now seems clear that Apple's not going to change that.
The workaround of manually specifying --with-tclconfig was annoying
already, but Mojave's made it a lot more so because the sysroot path now
has to be included as well. Let's just wire the knowledge into configure
instead. To avoid breaking builds against non-default Tcl installations
(e.g. MacPorts) wherein the $auto_path method probably still works,
arrange to try the additional case only after all else has failed.
Back-patch to all supported versions, since at least the buildfarm
cares about that. The changes are set up to not do anything on macOS
releases that are old enough to not have functional sysroot trees.
Tom Lane [Mon, 24 Sep 2018 15:30:51 +0000 (11:30 -0400)]
Fix over-allocation of space for array_out()'s result string.
array_out overestimated the space needed for its output, possibly by
a very substantial amount if the array is multi-dimensional, because
of a wrong order of operations in the loop that counts the number of
curly-brace pairs needed. While the output string is normally
short-lived, this could still cause problems in extreme cases.
An additional minor error was that it counted one more delimiter than
is actually needed.
Repair those errors, add an Assert that the space is now correctly
calculated, and make some minor improvements in the comments.
I also failed to resist the temptation to get rid of an integer
modulus operation per array element; a simple comparison is sufficient.
This bug dates clear back to Berkeley days, so back-patch to all
supported versions.
Initialize random() in bootstrap/stand-alone postgres and in initdb.
This removes a difference between the standard IsUnderPostmaster
execution environment and that of --boot and --single. In a stand-alone
backend, "SELECT random()" always started at the same seed.
On a system capable of using posix shared memory, initdb could still
conclude "selecting dynamic shared memory implementation ... sysv".
Crashed --boot or --single postgres processes orphaned shared memory
objects having names that collided with the not-actually-random names
that initdb probed. The sysv fallback appeared after ten crashes of
--boot or --single postgres. Since --boot and --single are rare in
production use, systems used for PostgreSQL development are the
principal candidate to notice this symptom.
Back-patch to 9.3 (all supported versions). PostgreSQL 9.4 introduced
dynamic shared memory, but 9.3 does share the "SELECT random()" problem.
Tom Lane [Sun, 23 Sep 2018 20:05:45 +0000 (16:05 -0400)]
Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
In a case where we have multiple relation-scan nodes in a cursor plan,
such as a scan of an inheritance tree, it's possible to fetch from a
given scan node, then rewind the cursor and fetch some row from an
earlier scan node. In such a case, execCurrent.c mistakenly thought
that the later scan node was still active, because ExecReScan hadn't
done anything to make it look not-active. We'd get some sort of
failure in the case of a SeqScan node, because the node's scan tuple
slot would be pointing at a HeapTuple whose t_self gets reset to
invalid by heapam.c. But it seems possible that for other relation
scan node types we'd actually return a valid tuple TID to the caller,
resulting in updating or deleting a tuple that shouldn't have been
considered current. To fix, forcibly clear the ScanTupleSlot in
ExecScanReScan.
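The core of that change can be sketched like this (illustrative, not the
committed execScan.c diff):
#include "postgres.h"
#include "executor/executor.h"
static void
scan_rescan_sketch(ScanState *node)
{
	/*
	 * Wipe the scan slot so a tuple left over from before the rescan can
	 * never be taken as the node's "current" row by WHERE CURRENT OF.
	 */
	ExecClearTuple(node->ss_ScanTupleSlot);
	/* ... the node-type-specific rescan work follows as before ... */
}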
Another issue here, which seems only latent at the moment but could
easily become a live bug in future, is that rewinding a cursor does
not necessarily lead to *immediately* applying ExecReScan to every
scan-level node in the plan tree. Upper-level nodes will think that
they can postpone that call if their child node is already marked
with chgParam flags. I don't see a way for that to happen today in
a plan tree that's simple enough for execCurrent.c's search_plan_tree
to understand, but that's one heck of a fragile assumption. So, add
some logic in search_plan_tree to detect chgParam flags being set on
nodes that it descended to/through, and assume that that means we
should consider lower scan nodes to be logically reset even if their
ReScan call hasn't actually happened yet.
Per bug #15395 from Matvey Arye. This has been broken for a long time,
so back-patch to all supported branches.
Andres Freund [Fri, 21 Sep 2018 01:10:52 +0000 (18:10 -0700)]
Error out for clang on x86-32 without SSE2 support, no -fexcess-precision.
As clang currently doesn't support -fexcess-precision=standard,
compiling x86-32 code with SSE2 disabled can lead to problems with
floating point overflow checks and the like.
This issue was noticed because clang, on at least some BSDs, defaults
to i386 compatibility, whereas it defaults to pentium4 on Linux. Our
forced usage of __builtin_isinf() led to some overflow checks not
triggering when compiling for i386, e.g. when the result of the
calculation didn't overflow in 80-bit registers, but did so in 64-bit
ones. While we could just fall back to a non-builtin isinf, it seems
likely that the use of 80-bit registers leads to other problems (which
is why we already force the flag for GCC). Therefore, error out when
clang is detected in that situation.
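To illustrate the hazard rather than the configure change itself, here is a
rough, standalone example of the failure mode being described (behavior
depends on compiler flags and target; with x87 excess precision the
intermediate may never be rounded to a 64-bit double, so the check can be
missed):
#include <math.h>
#include <stdio.h>
int
main(void)
{
	double		a = 1e308;
	double		b = 10.0;
	double		r = a * b;		/* may be kept in an 80-bit x87 register */
	/*
	 * With SSE2 math this is reliably infinite; with x87 excess precision
	 * the overflow check below may not trigger.
	 */
	if (isinf(r))
		printf("overflow detected\n");
	else
		printf("overflow missed due to excess precision\n");
	return 0;
}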
Reported-By: Victor Wagner
Analyzed-By: Andrew Gierth and Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20180905005130.ewk4xcs5dgyzcy45@alap3.anarazel.de
Backpatch: 9.3-, all supported versions are affected
Thomas Munro [Thu, 20 Sep 2018 02:02:40 +0000 (14:02 +1200)]
Defer restoration of libraries in parallel workers.
Several users of extensions complained of crashes in parallel workers
that turned out to be due to syscache access from their _PG_init()
functions. Reorder the initialization of parallel workers so that
libraries are restored after the caches are initialized, and inside a
transaction.
This was reported in bug #15350 and elsewhere. We don't consider it
to be a bug: extensions shouldn't do that, because then they can't be
used in shared_preload_libraries. However, it's a fairly obscure
hazard and these extensions worked in practice before parallel query
came along. So let's make it work. Later commits might add a warning
message and eventually an error.
Back-patch to 9.6, where parallel query landed.
Author: Thomas Munro
Reviewed-by: Amit Kapila
Reported-by: Kieran McCusker, Jimmy
Discussion: https://postgr.es/m/153512195228.1489.8545997741965926448%40wrigleys.postgresql.org
Tom Lane [Wed, 19 Sep 2018 16:43:51 +0000 (12:43 -0400)]
Don't ignore locktable-full failures in StandbyAcquireAccessExclusiveLock.
Commit 37c54863c removed the code in StandbyAcquireAccessExclusiveLock
that checked the return value of LockAcquireExtended. That created a
bug, because it's still passing reportMemoryError = false to
LockAcquireExtended, meaning that LOCKACQUIRE_NOT_AVAIL will be returned
if we're out of shared memory for the lock table.
In such a situation, the startup process would believe it had acquired an
exclusive lock even though it hadn't, with potentially dire consequences.
To fix, just drop the use of reportMemoryError = false, which allows us
to simplify the call into a plain LockAcquire(). It's unclear that the
locktable-full situation arises often enough that it's worth having a
better recovery method than crash-and-restart. (I strongly suspect that
the only reason the code path existed at all was that it was relatively
simple to do in the pre-37c54863c implementation. But now it's not.)
LockAcquireExtended's reportMemoryError parameter is now dead code and
could be removed. I refrained from doing so, however, because there
was some interest in resurrecting the behavior if we do get reports of
locktable-full failures in the field. Also, it seems unwise to remove
the parameter concurrently with shipping commit f868a8143, which added a
parameter; if there are any third-party callers of LockAcquireExtended,
we want them to get a wrong-number-of-parameters compile error rather
than a possibly-silent misinterpretation of its last parameter.
Thomas Munro [Tue, 18 Sep 2018 10:56:36 +0000 (22:56 +1200)]
Allow DSM allocation to be interrupted.
Chris Travers reported that the startup process can repeatedly try to
cancel a backend that is in a posix_fallocate()/EINTR loop and cause it
to loop forever. Teach the retry loop to give up if an interrupt is
pending. Don't actually check for interrupts in that loop though,
because a non-local exit would skip some clean-up code in the caller.
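The resulting loop looks roughly like this (a sketch of the pattern in
dsm_impl.c, not a verbatim excerpt):
#include "postgres.h"
#include <errno.h>
#include <fcntl.h>
#include "miscadmin.h"			/* ProcDiePending, QueryCancelPending */
static int
reserve_dsm_space_sketch(int fd, off_t size)
{
	int			rc;
	/*
	 * Retry on EINTR, but give up once an interrupt is pending, leaving
	 * the clean-up and error reporting to the caller.
	 */
	do
	{
		rc = posix_fallocate(fd, 0, size);
	} while (rc == EINTR && !(ProcDiePending || QueryCancelPending));
	return rc;					/* nonzero errno-style value on failure */
}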
Back-patch to 9.4 where DSM was added (and posix_fallocate() was later
back-patched).
Author: Chris Travers
Reviewed-by: Ildar Musin, Murat Kabilov, Oleksii Kliukin
Tested-by: Oleksii Kliukin
Discussion: https://postgr.es/m/CAN-RpxB-oeZve_J3SM_6%3DHXPmvEG%3DHX%2B9V9pi8g2YR7YW0rBBg%40mail.gmail.com
Tom Lane [Sat, 15 Sep 2018 17:42:34 +0000 (13:42 -0400)]
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally. For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.
This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.
To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment. This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another. It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.
It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.
Back-patch all the way. Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3. (This isn't the first time back-patches have
needed that, and it exhausted my patience.) I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.
Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.
Amit Kapila [Fri, 14 Sep 2018 04:47:31 +0000 (10:17 +0530)]
Don't allow LIMIT/OFFSET clause within sub-selects to be pushed to workers.
Allowing a sub-select containing LIMIT/OFFSET to run in workers can lead
to inconsistent results at the top level, as there is no guarantee that
the row order will be fully deterministic. The fix is to prohibit pushing
LIMIT/OFFSET within sub-selects to workers.
Reported-by: Andrew Fletcher
Bug: 15324
Author: Amit Kapila
Reviewed-by: Dilip Kumar
Backpatch-through: 9.6
Discussion: https://postgr.es/m/153417684333.10284.11356259990921828616@wrigleys.postgresql.org
Amit Kapila [Fri, 14 Sep 2018 03:06:21 +0000 (08:36 +0530)]
Back-patch "Fix parallel hash join path search."
Back-patch commit 655393a022bd653e2b48dbf20b69236981e35195 to 9.6. This
synchronizes the relevant code and helps in generating parallel paths in
some cases in 9.6. It also eases back-patching of future patches, since
we can then get the same plan in all branches.
Amit Kapila [Thu, 13 Sep 2018 10:38:55 +0000 (16:08 +0530)]
Attach FPI to the first record after full_page_writes is turned on.
XLogInsert fails to attach a required FPI to the first record after
full_page_writes is turned on by the last checkpoint. This bug was
introduced in 9.5 due to code rearrangement in commits 2c03216d83 and
2076db2aea. Fix it by ensuring that XLogInsertRecord performs a
recomputation when the given record was generated with full_page_writes
off but the flag is found to have been turned on while actually
inserting the record.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi
Reviewed-by: Amit Kapila
Backpatch-through: 9.5 where this problem was introduced
Discussion: https://postgr.es/m/20180420.151043.74298611.horiguchi.kyotaro@lab.ntt.co.jp
Andrew Gierth [Wed, 12 Sep 2018 18:31:06 +0000 (19:31 +0100)]
Repair bug in regexp split performance improvements.
Commit c8ea87e4b introduced a temporary conversion buffer for
substrings extracted during regexp splits. Unfortunately the code that
sized it was failing to ignore the effects of ignored degenerate
regexp matches, so for regexp_split_* calls it could under-size the
buffer in such cases.
Fix, and add some regression test cases (though those will only catch
the bug if run in a multibyte encoding).
Backpatch to 9.3 as the faulty code was.
Thanks to the PostGIS project, Regina Obe and Paul Ramsey for the
report (via IRC) and assistance in analysis. Patch by me.