Alvaro Herrera [Tue, 3 Oct 2017 12:58:25 +0000 (14:58 +0200)]
Fix coding rules violations in walreceiver.c
1. Since commit b1a9bad9e744 we had pstrdup() inside a
spinlock-protected critical section; reported by Andreas Seltenreich.
Turn those into strlcpy() to stack-allocated variables instead.
Backpatch to 9.6.
2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a
spinlock-protected critical section. Tom Lane noticed in code review.
Move down. Backpatch to 9.6.
3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel
call) inside a spinlock-protected critical section. Tom Lane noticed in
code review. Move it up. Backpatch to 9.2.
4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock.
Tom Lane noticed in code review. Release spinlock before dying.
Backpatch to 9.2.
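As an illustration of the coding rule involved, the pattern looks roughly
like this (a schematic sketch, not the actual walreceiver.c code):

    /* BAD: pstrdup() can elog(ERROR) and must not run under a spinlock */
    SpinLockAcquire(&walrcv->mutex);
    conninfo = pstrdup((char *) walrcv->conninfo);
    SpinLockRelease(&walrcv->mutex);

    /* GOOD: copy into a stack buffer; nothing here can fail or block */
    char    conninfo[MAXCONNINFO];

    SpinLockAcquire(&walrcv->mutex);
    strlcpy(conninfo, (char *) walrcv->conninfo, MAXCONNINFO);
    SpinLockRelease(&walrcv->mutex);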
Tom Lane [Sun, 1 Oct 2017 16:43:47 +0000 (12:43 -0400)]
Use a longer connection timeout in pg_isready test.
Buildfarm members skink and sungazer have both recently failed this
test, with symptoms indicating that the default 3-second timeout
isn't quite enough for those very slow systems. There's no reason
to be miserly with this timeout, so boost it to 60 seconds.
Back-patch to all versions containing this test. That may be overkill,
because the failure has only been observed in the v10 branch, but
I don't feel like having to revisit this later.
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples. But
concurrent transaction commit/abort may turn DEAD some of the HOT tuples
that survived the prune, before HeapTupleSatisfiesVacuum tests them.
This happens to activate the code that decides to freeze the tuple ...
which resuscitates it, duplicating data.
(This is especially bad if there are any unique constraints, because those
are now internally violated due to the duplicate entries, though you
won't know until you try to REINDEX or dump/restore the table.)
One possible fix would be to simply skip doing anything to the tuple,
and hope that the next HOT prune would remove it. But there is a
problem: if the tuple is older than freeze horizon, this would leave an
unfrozen XID behind, and if no HOT prune happens to clean it up before
the containing pg_clog segment is truncated away, it'd later cause an
error when the XID is looked up.
Fix the problem by having the tuple freezing routines cope with the
situation: don't freeze the tuple (and keep it dead). In the cases that
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
so that there is no need to look up the XID in pg_clog later on.
An isolation test is included, authored by Michael Paquier, loosely
based on Daniel Wood's original reproducer. It only tests one
particular scenario, though, not all the possible ways for this problem
to surface; it would be good to have a more reliable way to test this
more fully, but it'd require more work.
In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
I outlined another test case (more closely matching Dan Wood's) that
exposed a few more ways for the problem to occur.
Backpatch all the way back to 9.3, where this problem was introduced by
multixact juggling. In branches 9.3 and 9.4, this includes a backpatch
of commit e5ff9fefcd50 (of 9.5 era), since the original is not
correctable without matching the coding pattern in 9.5 up.
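Schematically, the fix behaves like this (a simplified illustration based
on the description above, not the actual heapam.c logic):

    if (TransactionIdDidCommit(xmax))
    {
        /* Tuple is DEAD; freezing would resurrect it, so leave it dead.
         * If xmax precedes the freeze cutoff, record its commit status
         * now so nothing ever needs to consult pg_clog for it. */
        if (TransactionIdPrecedes(xmax, cutoff_xid))
            tuple->t_infomask |= HEAP_XMAX_COMMITTED;
    }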
Reported-by: Daniel Wood
Diagnosed-by: Daniel Wood
Reviewed-by: Yi Wen Wong, Michaël Paquier
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
Tom Lane [Wed, 27 Sep 2017 21:05:53 +0000 (17:05 -0400)]
Fix behavior when converting a float infinity to numeric.
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity. The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like
ERROR: invalid input syntax for type numeric: "inf"
but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.
Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.
While at it, let's use snprintf not sprintf. Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.
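The added guard boils down to something like this (a minimal sketch; the
error wording is recalled from the patch and may differ slightly):

    if (isinf(val))
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("cannot convert infinity to numeric")));

    snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, val);  /* was sprintf() */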
Problem reported by Taiki Kondo. Patch by me based on fix suggestion
from KaiGai Kohei. Back-patch to all supported branches.
Dean Rasheed [Wed, 27 Sep 2017 16:10:35 +0000 (17:10 +0100)]
Improve the CREATE POLICY documentation.
Provide a correct description of how multiple policies are combined,
clarify when SELECT permissions are required, mention SELECT FOR
UPDATE/SHARE, and do some other more minor tidying up.
It drops objects outside information_schema that depend on objects
inside information_schema. For example, it will drop a user-defined
view if the view query refers to information_schema.
posix_fallocate() is not quite a drop-in replacement for fallocate(),
because it is defined to return the error code as its function result,
not in "errno". I (tgl) missed this because RHEL6's version seems
to set errno as well. That is not the case on more modern Linuxen,
though, as per buildfarm results.
Aside from fixing the return-convention confusion, remove the test
for ENOSYS; we expect that glibc will mask that for posix_fallocate,
though it does not for fallocate. Keep the test for EINTR, because
POSIX specifies that as a possible result, and buildfarm results
suggest that it can happen in practice.
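The corrected calling convention, sketched (fd and request_size are
stand-ins; posix_fallocate() returns the error code and does not set
errno):

    int     rc;

    do
        rc = posix_fallocate(fd, 0, request_size);
    while (rc == EINTR);        /* POSIX allows EINTR; just retry */

    if (rc != 0)
    {
        errno = rc;             /* let callers keep errno-based reporting */
        return false;
    }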
Tom Lane [Mon, 25 Sep 2017 20:09:20 +0000 (16:09 -0400)]
Avoid SIGBUS on Linux when a DSM memory request overruns tmpfs.
On Linux, shared memory segments created with shm_open() are backed by
swap files created in tmpfs. If the swap file needs to be extended,
but there's no tmpfs space left, you get a very unfriendly SIGBUS trap.
To avoid this, force allocation of the full request size when we create
the segment. This adds a few cycles, but none that we wouldn't expend
later anyway, assuming the request isn't hugely bigger than the actual
need.
Make this code #ifdef __linux__, because (a) there's not currently a
reason to think the same problem exists on other platforms, and (b)
applying posix_fallocate() to an FD created by shm_open() isn't very
portable anyway.
Peter Eisentraut [Fri, 22 Sep 2017 20:50:59 +0000 (16:50 -0400)]
Fix saving and restoring umask
In two cases, we set a different umask for some piece of code and
restore it afterwards. But if the contained code errors out, the umask
is not restored. So add TRY/CATCH blocks to fix that.
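The pattern, sketched with PostgreSQL's error-handling macros:

    mode_t  oumask = umask(S_IRWXG | S_IRWXO);

    PG_TRY();
    {
        /* ... create files under the restrictive umask ... */
    }
    PG_CATCH();
    {
        umask(oumask);          /* restore even when the code errors out */
        PG_RE_THROW();
    }
    PG_END_TRY();
    umask(oumask);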
Tom Lane [Fri, 22 Sep 2017 04:04:21 +0000 (00:04 -0400)]
Sync our copy of the timezone library with IANA tzcode master.
This patch absorbs a few unreleased fixes in the IANA code.
It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a
in https://github.com/eggert/tz. Non-cosmetic changes include:
TZDEFRULESTRING is updated to match current US DST practice,
rather than what it was over ten years ago. This only matters
for interpretation of POSIX-style zone names (e.g., "EST5EDT"),
and only if the timezone database doesn't include either an exact
match for the zone name or a "posixrules" entry. The latter
should not be true in any current Postgres installation, but
this could possibly matter when using --with-system-tzdata.
Get rid of a nonportable use of "++var" on a bool var.
This is part of a larger fix that eliminates some vestigial
support for consecutive leap seconds, and adds checks to
the "zic" compiler that the data files do not specify that.
Remove a couple of ancient compatibility hacks. The IANA
crew think these are obsolete, and I tend to agree. But
perhaps our buildfarm will think different.
Back-patch to all supported branches, in line with our policy
that all branches should be using current IANA code. Before v10,
this includes application of current pgindent rules, to avoid
whitespace problems in future back-patches.
Tom Lane [Thu, 21 Sep 2017 22:13:11 +0000 (18:13 -0400)]
Give a better error for duplicate entries in VACUUM/ANALYZE column list.
Previously, the code didn't think about this case and would just try to
analyze such a column twice. That would fail at the point of inserting
the second version of the pg_statistic row, with obscure error messages
like "duplicate key value violates unique constraint" or "tuple already
updated by self", depending on context and PG version. We could allow
the case by ignoring duplicate column specifications, but it seems better
to reject it explicitly.
The bogus error messages seem like arguably a bug, so back-patch to
all supported versions.
Nathan Bossart, per a report from Michael Paquier, and whacked
around a bit by me.
Tom Lane [Wed, 20 Sep 2017 15:10:42 +0000 (11:10 -0400)]
Fix erroneous documentation about noise word GROUP.
GRANT, REVOKE, and some allied commands allow the noise word GROUP
before a role name (cf. grantee production in gram.y). This option
does not exist elsewhere, but it had nonetheless snuck into the
documentation for ALTER ROLE, ALTER USER, and CREATE SCHEMA.
Seems to be a copy-and-pasteo in commit 31eae6028, which did expand the
syntax choices here, but not in that way. Back-patch to 9.5 where that
came in.
Magnus Hagander [Wed, 20 Sep 2017 12:09:05 +0000 (14:09 +0200)]
Mention need for --no-inc-recursive in rsync command
In rsync 3.0.0 (released in 2008), the default way to enumerate
changes was changed in a way that makes it less likely that the hardlink
sync mode works. Since the whole point of the documented procedure is
for the hardlinks to work, change our docs to suggest using the
backwards compatibility switch.
Tom Lane [Sun, 17 Sep 2017 18:50:01 +0000 (14:50 -0400)]
Fix possible dangling pointer dereference in trigger.c.
AfterTriggerEndQuery correctly notes that the query_stack could get
repalloc'd during a trigger firing, but it nonetheless passes the address
of a query_stack entry to afterTriggerInvokeEvents, so that if such a
repalloc occurs, afterTriggerInvokeEvents is already working with an
obsolete dangling pointer while it scans the rest of the events. Oops.
The only code at risk is its "delete_ok" cleanup code, so we can
prevent unsafe behavior by passing delete_ok = false instead of true.
However, that could have a significant performance penalty, because the
point of passing delete_ok = true is to not have to re-scan possibly
a large number of dead trigger events on the next time through the loop.
There's more than one way to skin that cat, though. What we can do is
delete all the "chunks" in the event list except the last one, since
we know all events in them must be dead. Deleting the chunks is work
we'd have had to do later in AfterTriggerEndQuery anyway, and it ends
up saving rescanning of just about the same events we'd have gotten
rid of with delete_ok = true.
In v10 and HEAD, we also have to be careful to mop up any per-table
after_trig_events pointers that would become dangling. This is slightly
annoying, but I don't think that normal use-cases will traverse this code
path often enough for it to be a performance problem.
It's pretty hard to hit this in practice because of the unlikelihood
of the query_stack getting resized at just the wrong time. Nonetheless,
it's definitely a live bug of ancient standing, so back-patch to all
supported branches.
Bruce Momjian [Sat, 16 Sep 2017 15:58:00 +0000 (11:58 -0400)]
docs: clarify pg_upgrade docs regarding standbys and rsync
Document that rsync is an _optional_ way to upgrade standbys, suggest
rsync option --dry-run, and mention a way of upgrading one standby from
another using rsync. Also clarify some instructions by specifying if
they operate on the old or new clusters.
Reported-by: Stephen Frost, Magnus Hagander
Discussion: https://postgr.es/m/20170914191250.GB6595@momjian.us
Stephen Frost [Thu, 14 Sep 2017 00:02:27 +0000 (20:02 -0400)]
Fix ordering in pg_dump of GRANTs
The order in which GRANTs are output is important, as a GRANT issued by
a user who received the privilege via WITH GRANT OPTION has to come
after the GRANT which conferred that option. This happens
naturally in the backend during normal operation as we only change
existing ACLs in-place, only add new ACLs to the end, and when removing
an ACL we remove any which depend on it also.
Also, adjust the comments in acl.h to make this clear.
Unfortunately, the updates to pg_dump to handle initial privileges
involved pulling apart ACLs and then combining them back together and
could end up putting them back together in an invalid order, leading to
dumps which wouldn't restore.
Fix this by adjusting the queries used by pg_dump to ensure that the
ACLs are rebuilt in the same order in which they were originally.
Back-patch to 9.6 where the changes for initial privileges were done.
Bruce Momjian [Wed, 13 Sep 2017 13:11:28 +0000 (09:11 -0400)]
docs: improve pg_upgrade standby instructions
This makes it clear that pg_upgrade standby upgrade instructions should
only be used in link mode, adds examples, and explains how rsync works
with links.
Reported-by: Andreas Joseph Krogh
Discussion: https://postgr.es/m/VisenaEmail.6c.c0e592c5af4ef0a2.15e785dcb61@tc7-visena
Tom Lane [Tue, 12 Sep 2017 02:02:58 +0000 (22:02 -0400)]
Fix RecursiveCopy.pm to cope with disappearing files.
When copying from an active database tree, it's possible for files to be
deleted after we see them in a readdir() scan but before we can open them.
(Once we've got a file open, we don't expect any further errors from it
getting unlinked, though.) Tweak RecursiveCopy so it can cope with this
case, so as to avoid irreproducible test failures.
Back-patch to 9.6 where this code was added. In v10 and HEAD, also
remove unused "use RecursiveCopy" in one recovery test script.
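The race, expressed in C terms inside a hypothetical readdir() loop
(RecursiveCopy itself is Perl; this is just an illustration):

    int     fd = open(path, O_RDONLY);

    if (fd < 0)
    {
        if (errno == ENOENT)
            continue;           /* vanished since readdir(); skip it */
        fprintf(stderr, "could not open \"%s\": %s\n",
                path, strerror(errno));
        exit(1);
    }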
Tom Lane [Sun, 3 Sep 2017 15:01:08 +0000 (11:01 -0400)]
Fix macro-redefinition warning on MSVC.
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,
for consistency with what happens in an autoconf'd build.
But Solution.pm injects another definition of that macro into
ecpg_config.h, leading to justifiable (though harmless) compiler whining.
Make that one consistent too. Back-patch, like the previous patch.
Tom Lane [Fri, 1 Sep 2017 19:14:18 +0000 (15:14 -0400)]
Make [U]INT64CONST safe for use in #if conditions.
Instead of using a cast to force the constant to be the right width,
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.
The old approach to this is very hoary, dating from before we were
willing to require compilers to have working int64 types.
This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX
constants safe to use in preprocessor conditions, where a cast
doesn't work. Other symbolic constants that might be defined using
[U]INT64CONST are likewise safer than before.
Also fix the SIZE_MAX macro to be similarly safe, if we are forced
to provide a definition for that. The test added in commit 2e70d6b5e
happens to do what we want even with the hack "(size_t) -1" definition,
but we could easily get burnt on other tests in future.
Back-patch to all supported branches, like the previous commits.
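A sketch of the suffix-based approach, assuming a platform where int64
is "long" (configure picks LL/ULL variants where int64 is "long long"):

    #define INT64CONST(x)  (x##L)
    #define UINT64CONST(x) (x##UL)

    #define PG_INT64_MAX   INT64CONST(0x7FFFFFFFFFFFFFFF)

    /* now usable in preprocessor conditions, where a cast is not: */
    #if PG_INT64_MAX > 0
    /* ... */
    #endif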
Tom Lane [Fri, 1 Sep 2017 17:52:53 +0000 (13:52 -0400)]
Ensure SIZE_MAX can be used throughout our code.
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX. We have
a couple of places using the hack "(size_t) -1" as a fallback, but
it wasn't universally available, which means the code added in commit
2e70d6b5e fails to compile everywhere. Move that hack to c.h so that
we can rely on having SIZE_MAX everywhere.
Per discussion, it'd be a good idea to make the macro's value safe
for use in #if-tests, but that will take a bit more work. This is
just a quick expedient to get the buildfarm green again.
Back-patch to all supported branches, like the previous commit.
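The fallback amounts to this (a sketch; the real c.h may differ in
detail):

    #ifndef SIZE_MAX
    #define SIZE_MAX ((size_t) -1)
    #endif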
Robert Haas [Thu, 31 Aug 2017 19:56:21 +0000 (15:56 -0400)]
Improve low-level backup documentation.
Our documentation hasn't really caught up with the fact that
non-exclusive backups can now be taken using pg_start_backup and
pg_stop_backup even on standbys. Update.
David Steele, reviewed by Robert Haas and Michael Paquier
Tom Lane [Tue, 29 Aug 2017 19:38:05 +0000 (15:38 -0400)]
Doc: document libpq's restriction to INT_MAX rows in a PGresult.
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're
pretty much stuck with this limitation. The documentation formerly stated
that the result of PQntuples "might overflow on 32-bit operating systems",
which is just nonsense: that's not where the overflow would happen, and
if you did reach an overflow it would not be on a 32-bit machine, because
you'd have OOM'd long since.
Tom Lane [Tue, 29 Aug 2017 19:18:01 +0000 (15:18 -0400)]
Teach libpq to detect integer overflow in the row count of a PGresult.
Adding more than 1 billion rows to a PGresult would overflow its ntups and
tupArrSize fields, leading to client crashes. It'd be desirable to use
wider fields on 64-bit machines, but because all of libpq's external APIs
use plain "int" for row counters, that's going to be hard to accomplish
without an ABI break. Given the lack of complaints so far, and the general
pain that would be involved in using such huge PGresults, let's settle for
just preventing the overflow and reporting a useful error message if it
does happen. Also, for a couple more lines of code we can increase the
threshold of trouble from INT_MAX/2 to INT_MAX rows.
To do that, refactor pqAddTuple() to allow returning an error message that
replaces the default assumption that it failed because of out-of-memory.
Along the way, fix PQsetvalue() so that it reports all failures via
pqInternalNotice(). It already did so in the case of bad field number,
but neglected to report anything for other error causes.
Because of the potential for crashes, this seems like a back-patchable
bug fix, despite the lack of field reports.
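Schematically, the refactored pqAddTuple() looks like this (heavily
simplified; see libpq's fe-exec.c for the real code):

    static bool
    pqAddTuple(PGresult *res, PGresAttValue *tup, const char **errmsgp)
    {
        if (res->ntups >= res->tupArrSize)
        {
            int     newSize;

            if (res->tupArrSize == INT_MAX)
            {
                *errmsgp = "PGresult cannot support more than INT_MAX tuples";
                return false;
            }
            /* grow by doubling, but allow growth all the way to INT_MAX */
            if (res->tupArrSize == 0)
                newSize = 128;
            else if (res->tupArrSize > INT_MAX / 2)
                newSize = INT_MAX;
            else
                newSize = res->tupArrSize * 2;
            /* ... repalloc the tuple array to newSize entries ... */
        }
        res->tuples[res->ntups++] = tup;
        return true;
    }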
Tom Lane [Tue, 29 Aug 2017 13:34:21 +0000 (09:34 -0400)]
Improve docs about numeric formatting patterns (to_char/to_number).
The explanation about "0" versus "9" format characters was confusing
and arguably wrong; the discussion of sign handling wasn't very good
either. Notably, while it's accurate to say that "FM" strips leading
zeroes in date/time values, what it really does with numeric values
is to strip *trailing* zeroes, and then only if you wrote "9" rather
than "0". Per gripes from Erwin Brandstetter.
Andres Freund [Tue, 22 Aug 2017 14:46:05 +0000 (07:46 -0700)]
Backpatch introduction of TupleDescAttr(tupdesc, i).
2cd70845240 / c6293249d changed the way individual attributes in a
TupleDesc are stored / accessed. To reduce the effort of making
extensions compatible with PostgreSQL 11, and to ease future
backpatching, backpatch introduction of TupleDescAttr() to all
releases. Do not backpatch change in storage, as that'd be a breaking
change for existing and working extensions.
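In the back branches the macro is essentially a wrapper over the old
array access, so existing code keeps working:

    #define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)])

    /* usage */
    Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);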
Author: Andres Freund
Discussion: https://postgr.es/m/20170820181723.tdswdinzptbcwhrr@alap3.anarazel.de
Backpatch: 9.2-
Tom Lane [Sat, 19 Aug 2017 17:39:38 +0000 (13:39 -0400)]
Fix possible core dump in parallel restore when using a TOC list.
Commit 3eb9a5e7c unintentionally introduced an ordering dependency
into restore_toc_entries_prefork(). The existing coding of
reduce_dependencies() contains a check to skip moving a TOC entry
to the ready_list if it wasn't initially in the pending_list.
This used to suffice to prevent reduce_dependencies() from trying to
move anything into the ready_list during restore_toc_entries_prefork(),
because the pending_list stayed empty throughout that phase; but it no
longer does. The problem doesn't manifest unless the TOC has been
reordered by SortTocFromFile, which is how I missed it in testing.
To fix, just add a test for ready_list == NULL, converting the call
with NULL from a poor man's sanity check into an explicit command
not to touch TOC items' list membership. Clarify some of the comments
around this; in particular, note the primary purpose of the check for
pending_list membership, which is to ensure that we can't try to restore
the same item twice, in case a TOC list forces it to be restored before
its dependency count goes to zero.
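The convention, sketched (simplified; the list representation differs
across branches):

    static void
    reduce_dependencies(ArchiveHandle *AH, TocEntry *te,
                        TocEntry *ready_list)
    {
        /* ... decrement dependency counts of items depending on te ... */

        if (ready_list == NULL)
            return;     /* explicit command not to touch list membership */

        /* ... move any now-ready pending items onto ready_list ... */
    }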
Per report from Fabrízio de Royes Mello. Back-patch to 9.3, like the
previous commit.
Tom Lane [Thu, 17 Aug 2017 17:13:47 +0000 (13:13 -0400)]
Further tweaks to compiler flags for PL/Perl on Windows.
It now emerges that we can only rely on Perl to tell us we must use
-D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later. For older versions,
revert to our previous practice of assuming we need that symbol in
all 32-bit Windows builds. This is not ideal, but inquiring into
which compiler version Perl was built with seems far too fragile.
In any case, we had not previously had complaints about these old
Perl versions, so let's assume this is Good Enough. (It's still
better than the situation ante commit 5a5c2feca, in that at least
the effects are confined to PL/Perl rather than the whole PG build.)
Back-patch to all supported versions, like 5a5c2feca and predecessors.
Tom Lane [Mon, 14 Aug 2017 19:43:20 +0000 (15:43 -0400)]
Handle elog(FATAL) during ROLLBACK more robustly.
Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction. In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction. This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal". The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.
We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals. It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext. That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory. This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.
The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check. If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.
This has been like this a long time, so back-patch to all supported
branches.
Tom Lane [Mon, 14 Aug 2017 15:48:59 +0000 (11:48 -0400)]
Absorb -D_USE_32BIT_TIME_T switch from Perl, if relevant.
Commit 3c163a7fc's original choice to ignore all #define symbols whose
names begin with underscore turns out to be too simplistic. On Windows,
some Perl installations are built with -D_USE_32BIT_TIME_T, and we must
absorb that or we get the wrong result for sizeof(PerlInterpreter).
This effectively re-reverts commit ef58b87df, which injected that symbol
in a hacky way, making it apply to all of Postgres not just PL/Perl.
More significantly, it did so on *all* 32-bit Windows builds, even when
the Perl build to be used did not select this option; so that it fails
to work properly with some newer Perl builds.
By making this change, we would be introducing an ABI break in 32-bit
Windows builds; but fortunately we have not used type time_t in any
exported Postgres APIs in a long time. So it should be OK, both for
PL/Perl itself and for third-party extensions, if an extension library
is built with a different _USE_32BIT_TIME_T setting than the core code.
Patch by me, based on research by Ashutosh Sharma and Robert Haas.
Back-patch to all supported branches, as commit 3c163a7fc was.
Tom Lane [Sun, 13 Aug 2017 20:15:14 +0000 (16:15 -0400)]
Remove AtEOXact_CatCache().
The sole useful effect of this function, to check that no catcache
entries have positive refcounts at transaction end, has really been
obsolete since we introduced ResourceOwners in PG 8.1. We reduced the
checks to assertions years ago, so that the function was a complete
no-op in production builds. There have been previous discussions about
removing it entirely, but consensus up to now was that it had some small
value as a cross-check for bugs in the ResourceOwner logic.
However, it now emerges that it's possible to trigger these assertions
if you hit an assert-enabled backend with SIGTERM during a call to
SearchCatCacheList, because that function temporarily increases the
refcounts of entries it's intending to add to a catcache list construct.
In a normal ERROR scenario, the extra refcounts are cleaned up by
SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a
transaction abort and exit without ever executing PG_CATCH handlers.
There's a case to be made that this is a generic hazard and we should
consider restructuring elog(FATAL) handling so that pending PG_CATCH
handlers do get run. That's pretty scary though: it could easily create
more problems than it solves. Preliminary stress testing by Andreas
Seltenreich suggests that there are not many live problems of this ilk,
so we rejected that idea.
There are more-localized ways to fix the problem; the most principled
one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY.
But adding cycles to SearchCatCacheList isn't very appealing. We could
also weaken the assertions in AtEOXact_CatCache in some more or less
ad-hoc way, but that just makes its raison d'etre even less compelling.
In the end, the most reasonable solution seems to be to just remove
AtEOXact_CatCache altogether, on the grounds that it's not worth trying
to fix it. It hasn't found any bugs for us in many years.
Per report from Jeevan Chalke. Back-patch to all supported branches.
Tom Lane [Wed, 9 Aug 2017 21:03:09 +0000 (17:03 -0400)]
Fix handling of container types in find_composite_type_dependencies.
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc. The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.
The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain. This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.
Add test cases illustrating the problems, and back-patch to all
supported branches.
Tom Lane [Wed, 9 Aug 2017 16:05:53 +0000 (12:05 -0400)]
Prevent passing down MAKELEVEL/MAKEFLAGS from non-GNU make to GNU make.
FreeBSD's make, for one, sets the MAKELEVEL environment variable when
invoking commands. In the special Makefile we provide to hand off control
from a non-GNU make to GNU make, this causes GNU make to think it is a
child make invocation rather than top-level. That interferes with the hack
added in commit dcae5facc to cause the temp-install tree to be made only by
the top-level invocation of gmake. Unset the variable to prevent that.
Likewise unset MAKEFLAGS, which FreeBSD's make also sets, and which could
easily confuse gmake. There are no reports of actual trouble from that,
but it seems better to be proactive.
Tom Lane [Tue, 8 Aug 2017 23:18:12 +0000 (19:18 -0400)]
Fix datumSerialize infrastructure to not crash on non-varlena data.
Commit 1efc7e538 did a poor job of emulating existing logic for touching
Datums that might be expanded-object pointers. It didn't check for typlen
being -1 first, which meant it could crash on fixed-length pass-by-ref
values, and probably on cstring values as well. It also didn't use
DatumGetPointer before VARATT_IS_EXTERNAL_EXPANDED, which while currently
harmless is not according to documentation nor prevailing style.
I also think the lack of any explanation as to why datumSerialize makes
these particular nonobvious choices is pretty awful, so fix that.
Per report from Jarred Ward. Back-patch to 9.6 where this code came in.
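The corrected test, in simplified form (check typlen first, and apply
DatumGetPointer before the varlena macro):

    if (!typByVal && typLen == -1 &&
        VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(value)))
    {
        /* expanded object: serialize its flattened equivalent instead */
    }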
Tom Lane [Mon, 7 Aug 2017 14:19:01 +0000 (10:19 -0400)]
Require update permission for the large object written by lo_put().
lo_put() surely should require UPDATE permission, the same as lowrite(),
but it failed to check for that, as reported by Chapman Flack. Oversight
in commit c50b7c09d; backpatch to 9.4 where that was introduced.
Noah Misch [Mon, 7 Aug 2017 14:09:28 +0000 (07:09 -0700)]
Again match pg_user_mappings to information_schema.user_mapping_options.
Commit 3eefc51053f250837c3115c12f8119d16881a2d7 claimed to make
pg_user_mappings enforce the qualifications user_mapping_options had
been enforcing, but its removal of a longstanding restriction left them
distinct when the current user is the subject of a mapping yet has no
server privileges. user_mapping_options emits no rows for such a
mapping, but pg_user_mappings includes full umoptions. Change
pg_user_mappings to show null for umoptions. Back-patch to 9.2, like
the above commit.
Some authentication methods allowed it, others did not. On the client side,
libpq does not even try to authenticate with an empty password, which makes
using empty passwords hazardous: an administrator might think that an
account with an empty password cannot be used to log in, because psql
doesn't allow it, and not realize that a different client would in fact
allow it. To clear that confusion and to be consistent, disallow empty
passwords in all authentication methods.
All the authentication methods that used plaintext authentication over the
wire, except for BSD authentication, already checked that the password
received from the user was not empty. To avoid forgetting it in the future
again, move the check to the recv_password_packet function. That only
forbids using an empty password with plaintext authentication, however.
MD5 and SCRAM need a different fix:
* In stable branches, check that the MD5 hash stored for the user does not
correspond to an empty string. This adds some overhead to MD5
authentication, because the server needs to compute an extra MD5 hash, but
it is not noticeable in practice.
* In HEAD, modify CREATE and ALTER ROLE to clear the password if an empty
string, or a password hash that corresponds to an empty string, is
specified. The user-visible behavior is the same as in the stable branches,
the user cannot log in, but it seems better to stop the empty password from
entering the system in the first place. Secondly, it is fairly expensive to
check that a SCRAM hash doesn't correspond to an empty string, because
computing a SCRAM hash is much more expensive than an MD5 hash by design,
so better avoid doing that on every authentication.
We could clear the password on CREATE/ALTER ROLE also in stable branches,
but we would still need to check at authentication time, because even if we
prevent empty passwords from being stored in pg_authid, there might be
existing ones there already.
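The centralized check amounts to this (a sketch of the code in
recv_password_packet() in auth.c; the exact message wording is recalled
from the patch):

    /* after reading the password packet into buf */
    if (buf.len == 0)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PASSWORD),
                 errmsg("empty password returned by client")));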
Reported by Jeroen van der Ham, Ben de Graaff and Jelte Fennema.
The callers of GetOldestSafeDecodingTransactionId() all inverted the
sense of the argument introduced in 2bef06d516460. Luckily this
appears to be inconsequential for the moment, as we wait for
concurrent in-progress transactions when assembling a
snapshot. Additionally this could only make a difference when adding a
second logical slot, because only a pre-existing slot could cause an
issue by lowering the returned xid dangerously much.
Reported-By: Antonin Houska
Discussion: https://postgr.es/m/32704.1496993134@localhost
Backport: 9.4-, where 2bef06d516460 was backpatched to.
Tom Lane [Fri, 4 Aug 2017 15:07:10 +0000 (11:07 -0400)]
Disallow SSL session tickets.
We don't actually support session tickets, since we do not create an SSL
session identifier. But it seems that OpenSSL will issue a session ticket
on-demand anyway, which will then fail when used. This results in
reconnection failures when using ticket-aware client-side SSL libraries
(such as the Npgsql .NET driver), as reported by Shay Rojansky.
To fix, just tell OpenSSL not to issue tickets. At some point in the
far future, we might consider enabling tickets instead. But the security
implications of that aren't entirely clear; and besides it would have
little benefit except for very short-lived database connections, which is
Something We're Bad At anyhow. It would take a lot of other work to get
to a point where that would really be an exciting thing to do.
While at it, also tell OpenSSL not to use a session cache. This doesn't
really do anything, since a backend would never populate the cache anyway,
but it might gain some micro-efficiencies and/or reduce security
exposures.
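In be-secure-openssl.c terms, the fix boils down to the following (the
real code guards the first call with #ifdef, since very old OpenSSL
versions lack SSL_OP_NO_TICKET):

    SSL_CTX_set_options(context, SSL_OP_NO_TICKET);
    SSL_CTX_set_session_cache_mode(context, SSL_SESS_CACHE_OFF);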
Patch by me, per discussion with Heikki Linnakangas and Shay Rojansky.
Back-patch to all supported versions.
Tom Lane [Thu, 3 Aug 2017 21:36:23 +0000 (17:36 -0400)]
Fix pg_dump/pg_restore to emit REFRESH MATERIALIZED VIEW commands last.
Because we push all ACL (i.e. GRANT/REVOKE) restore steps to the end,
materialized view refreshes were occurring while the permissions on
referenced objects were still at defaults. This led to failures if,
say, an MV owned by user A reads from a table owned by user B, even
if B had granted the necessary privileges to A. We've had multiple
complaints about that type of restore failure, most recently from
Jordan Gigov.
The ideal fix for this would be to start treating ACLs as dependency-
sortable objects, rather than hard-wiring anything about their dump order
(the existing approach is a messy kluge dating to commit dc0e76ca3).
But that's going to be a rather major change, and it certainly wouldn't
lead to a back-patchable fix. As a short-term solution, convert the
existing two-pass hack (ie, normal objects then ACLs) to a three-pass hack,
ie, normal objects then ACLs then matview refreshes. Because this happens
in RestoreArchive(), it will also fix the problem when restoring from an
existing archive-format dump.
(Note this means that if a matview refresh would have failed under the
permissions prevailing at dump time, it'll fail during restore as well.
We'll define that as user error rather than something we should try
to work around.)
To avoid performance loss in parallel restore, we need the matview
refreshes to still be parallelizable. Hence, clean things up enough
so that both ACLs and matviews are handled by the parallel restore
infrastructure, instead of reverting back to serial restore for ACLs.
There is still a final serial step, but it shouldn't normally have to
do anything; it's only there to try to recover if we get stuck due to
some problem like unresolved circular dependencies.
Patch by me, but it owes something to an earlier attempt by Kevin Grittner.
Back-patch to 9.3 where materialized views were introduced.
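The three passes, schematically (enum names recalled from
pg_backup_archiver and possibly differing per branch):

    typedef enum
    {
        RESTORE_PASS_MAIN,      /* all normal objects */
        RESTORE_PASS_ACL,       /* ACL (GRANT/REVOKE) items */
        RESTORE_PASS_REFRESH    /* REFRESH MATERIALIZED VIEW items */
    } RestorePass;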
Alvaro Herrera [Thu, 3 Aug 2017 18:48:54 +0000 (14:48 -0400)]
Fix build on zlib-less environments
Commit 4d57e8381677 added support for getting I/O errors out of zlib,
but it introduced a portability problem for systems without zlib.
Repair by wrapping the zlib call inside #ifdef and restore the original
code in the other branch.
This serves to illustrate the inadequacy of the zlib abstraction in
pg_backup_archiver: there is no way to call gzerror() in that
abstraction. This means that the several places that call GZREAD and
GZWRITE are currently doing error reporting wrongly, but ENOTIME to get
it fixed before next week's release set.
Backpatch to 9.4, like the commit that introduced the problem.
Robert Haas [Thu, 3 Aug 2017 17:09:15 +0000 (13:09 -0400)]
Allow a foreign table CHECK constraint to be initially NOT VALID.
For a table, the constraint can be considered validated immediately,
because the table must be empty. But for a foreign table this is
not necessarily the case.
Alvaro Herrera [Wed, 2 Aug 2017 22:26:26 +0000 (18:26 -0400)]
Fix pg_dump's errno checking for zlib I/O
Some error reports were reporting strerror(errno), which for some error
conditions coming from zlib are wrong, resulting in confusing reports
such as
pg_restore: [compress_io] could not read from input file: Success
which makes no sense. To correctly extract the error message we need to
use gzerror(), so let's do that.
This isn't as comprehensive or as neat as I would like, but at least it
should improve things in many common cases. The zlib abstraction in
compress_io does not seem to be applied consistently enough; we could
perhaps improve that, but it seems master-only material, not a bug fix
for back-patching.
This problem goes back all the way, but I decided to apply back to 9.4
only, because older branches don't contain commit 14ea89366 which this
change depends on.
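Correct extraction of a zlib error message looks like this (gzerror()
and Z_ERRNO are standard zlib API):

    int         errnum;
    const char *errmsg = gzerror(gzfp, &errnum);

    if (errnum == Z_ERRNO)
        errmsg = strerror(errno);   /* error came from the OS, not zlib */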
Authors: Vladimir Kunschikov, Álvaro Herrera
Discussion: https://postgr.es/m/1498120508308.9826@infotecs.ru
Tom Lane [Wed, 2 Aug 2017 20:55:03 +0000 (16:55 -0400)]
Add pgtcl back to the list of externally-maintained client interfaces.
FlightAware is still maintaining this, and indeed is seemingly being
more active with it than the pgtclng fork is. List both, for the
time being anyway.
In the back branches, also back-port commit e20f679f6 and other
recent updates to the client-interfaces list. I think these are
probably of current interest to users of back branches. I did
not touch the list of externally maintained PLs in the back
branches, though. Those are much more likely to be server version
sensitive, and I don't know which of these PLs work all the way back.
Tom Lane [Wed, 2 Aug 2017 16:16:50 +0000 (12:16 -0400)]
Remove broken and useless entry-count printing in HASH_DEBUG code.
init_htab(), with #define HASH_DEBUG, prints a bunch of hashtable
parameters. It used to also print nentries, but commit 44ca4022f changed
that to "hash_get_num_entries(hctl)", which is wrong (the parameter should
be "hashp").
Rather than correct the coding, though, let's just remove that field from
the printout. The table must be empty, since we just finished building
it, so expensively calculating the number of entries is rather pointless.
Moreover hash_get_num_entries makes assumptions (about not needing locks)
which we could do without in debugging code.
Noted by Choi Doo-Won in bug #14764. Back-patch to 9.6 where the
faulty code was introduced.
XLByteToSeg and XLByteToPrevSeg calculate only a segment number. The
definition of these macros was modified by commit
dfda6ebaec6763090fb78b458a979b558c50b39b, but the comment remained
unchanged.
Patch by Yugo Nagata. Back-patched to 9.3 and beyond.
Tom Lane [Mon, 31 Jul 2017 17:42:48 +0000 (13:42 -0400)]
Doc: specify that the minimum supported version of Perl is 5.8.3.
Previously the docs just said "5.8 or later". Experimentation shows
that while you can build on Unix from a git checkout with 5.8.0,
compiling recent PL/Perl requires at least 5.8.1, and you won't be
able to run the TAP tests with less than 5.8.3 because that's when
they added "prove". (I do not have any information on just what the
MSVC build scripts require.)
Since all these versions are quite ancient, let's not split hairs
in the docs, but just say that 5.8.3 is the minimum requirement.
Tom Lane [Wed, 26 Jul 2017 23:35:35 +0000 (19:35 -0400)]
Clean up SQL emitted by psql/describe.c.
Fix assorted places that had not bothered with the convention of
prefixing catalog and function names with "pg_catalog.". That
could possibly result in query failure when running with a nondefault
search_path. Also fix two places that weren't quoting OID literals.
I think the latter hasn't mattered much since about 7.3, but it's still
a bad idea to be doing it in 99 places and not in 2 others.
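For illustration, the convention is to schema-qualify names and quote
OID literals when assembling these queries (hypothetical query text):

    appendPQExpBuffer(&buf,
                      "SELECT c.relname\n"
                      "FROM pg_catalog.pg_class c\n"
                      "WHERE c.oid = '%u';",
                      oid);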
Also remove a useless EXISTS sub-select that someone had stuck into
describeOneTableDetails' queries for child tables. We just got the OID
out of pg_class, so I hardly see how checking that it exists in pg_class
was doing anything helpful.
In passing, try to improve the emitted formatting of a couple of
these queries, though I didn't work really hard on that. And merge
unnecessarily duplicative coding in some other places.
Much of this was new in HEAD, but some was quite old; back-patch
as appropriate.
If several sessions are concurrently locking a tuple update chain with
nonconflicting lock modes using an old snapshot, and they all succeed,
it may happen that some of them fail because of restarting the loop (due
to a concurrent Xmax change) and getting an error in the subsequent pass
while trying to obtain a tuple lock that they already have in some tuple
version.
This can only happen with very high concurrency (where a row is being
both updated and FK-checked by multiple transactions concurrently), but
it's been observed in the field and can have unpleasant consequences
such as an FK check failing to see a tuple that definitely exists:
ERROR: insert or update on table "child_table" violates foreign key constraint "fk_constraint_name"
DETAIL: Key (keyid)=(123456) is not present in table "parent_table".
(where the key is observably present in the table).
This module becomes much more useful if we allow it to be used as a
base class for external projects. To achieve this, change the exported
get_new_node function into a class method instead, and use the standard
Perl idiom of accepting the class as the first argument. This method
works as expected for subclasses. The standalone function is kept for
backwards compatibility, though it could be removed in pg11.
Author: Chap Flackman, based on an earlier patch from Craig Ringer
Discussion: https://postgr.es/m/CAMsr+YF8kO+4+K-_U4PtN==2FndJ+5Bn6A19XHhMiBykEwv0wA@mail.gmail.com
Tom Lane [Mon, 24 Jul 2017 20:45:46 +0000 (16:45 -0400)]
Fix race condition in predicate-lock init code in EXEC_BACKEND builds.
Trading a little too heavily on letting the code path be the same whether
we were creating shared data structures or only attaching to them,
InitPredicateLocks() inserted the "scratch" PredicateLockTargetHash entry
unconditionally. This is just wrong if we're in a postmaster child,
which would only reach this code in EXEC_BACKEND builds. Most of the
time, the hash_search(HASH_ENTER) call would simply report that the
entry already existed, causing no visible effect since the code did not
bother to check for that possibility. However, if this happened while
some other backend had transiently removed the "scratch" entry, then
that other backend's eventual RestoreScratchTarget would suffer an
assert failure; this appears to be the explanation for a recent failure
on buildfarm member culicidae. In non-assert builds, there would be
no visible consequences there either. But nonetheless this is a pretty
bad bug for EXEC_BACKEND builds, for two reasons:
1. Each new backend would perform the hash_search(HASH_ENTER) call
without holding any lock that would prevent concurrent access to the
PredicateLockTargetHash hash table. This creates a low but certainly
nonzero risk of corruption of that hash table.
2. In the event that the race condition occurred, by reinserting the
scratch entry too soon, we were defeating the entire purpose of the
scratch entry, namely to guarantee that transaction commit could move
hash table entries around with no risk of out-of-memory failure.
The odds of an actual OOM failure are quite low, but not zero, and if
it did happen it would again result in corruption of the hash table.
The user-visible symptoms of such corruption are a little hard to predict,
but would presumably amount to misbehavior of SERIALIZABLE transactions
that'd require a crash or postmaster restart to fix.
To fix, just skip the hash insertion if IsUnderPostmaster. I also
inserted a bunch of assertions that the expected things happen
depending on whether IsUnderPostmaster is true. That might be overkill,
since most comparable code in other functions isn't quite that paranoid,
but once burnt twice shy.
In passing, also move a couple of lines to places where they seemed
to make more sense.
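The fix, sketched (simplified from predicate.c):

    bool        found;

    if (!IsUnderPostmaster)
    {
        /* We are creating the shared structures: insert the scratch entry */
        hash_search(PredicateLockTargetHash, &ScratchTargetTag,
                    HASH_ENTER, &found);
        Assert(!found);
    }
    /* else we are only attaching; the scratch entry must already exist */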
Diagnosis of problem by Thomas Munro, patch by me. Back-patch to
all supported branches.
Robert Haas [Mon, 24 Jul 2017 19:57:24 +0000 (15:57 -0400)]
When WCOs are present, disable direct foreign table modification.
If the user modifies a view that has CHECK OPTIONs and this gets
translated into a modification to an underlying relation which happens
to be a foreign table, the check options should be enforced. In the
normal code path, that was happening properly, but it was not working
properly for "direct" modification because the whole operation gets
pushed to the remote side in that case and we never have an option to
enforce the constraint against individual tuples. Fix by disabling
direct modification when there is a need to enforce CHECK OPTIONs.
Etsuro Fujita, reviewed by Kyotaro Horiguchi and by me.
Tom Lane [Mon, 24 Jul 2017 19:16:31 +0000 (15:16 -0400)]
Ensure that pg_get_ruledef()'s output matches pg_get_viewdef()'s.
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT. For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command. Third-party tools might not be happy either.
In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.
Per report from Gilles Darold. Back-patch to all supported versions.
Tom Lane [Sun, 23 Jul 2017 00:20:09 +0000 (20:20 -0400)]
Fix pg_dump's handling of event triggers.
pg_dump with the --clean option failed to emit DROP EVENT TRIGGER
commands for event triggers. In a closely related oversight,
it also did not emit ALTER OWNER commands for event triggers.
Since only superusers can create event triggers, the latter oversight
is of little practical consequence ... but if we're going to record
an owner for event triggers, then surely pg_dump should preserve it.
Per complaint from Greg Atkins. Back-patch to 9.3 where event triggers
were introduced.
Robert Haas [Fri, 21 Jul 2017 18:25:36 +0000 (14:25 -0400)]
pg_rewind: Fix some problems when copying files >2GB.
When incrementally updating a file larger than 2GB, the old code could
either fail outright (if the client asked the server for bytes beyond
the 2GB boundary) or fail to copy all the blocks that had actually
been modified (if the server reported a file size to the client in
excess of 2GB), resulting in data corruption. Generally, such files
won't occur anyway, but they might if using a non-default segment size
or if the data directory contains stray files unrelated to
PostgreSQL. Fix by a more prudent choice of data types.
Even with these improvements, this code still uses a mix of different
types (off_t, size_t, uint64, int64) to represent file sizes and
offsets, not all of which necessarily have the same width or
signedness, so further cleanup might be in order here. However, at
least now they all have the potential to be 64 bits wide on 64-bit
platforms.
Kuntal Ghosh and Michael Paquier, with a tweak by me.
Tom Lane [Fri, 21 Jul 2017 18:20:43 +0000 (14:20 -0400)]
Stabilize postgres_fdw regression tests.
The new test cases added in commit 8bf58c0d9 turn out to have output
that can vary depending on the lc_messages setting prevailing on the
test server. Hide the remote end's error messages to ensure stable
output. This isn't a terribly desirable solution; we'd rather know
that the connection failed for the expected reason and not some other
one. But there seems little choice for the moment.
Robert Haas [Fri, 21 Jul 2017 16:48:22 +0000 (12:48 -0400)]
pg_rewind: Fix busted sanity check.
As written, the code would only fail the sanity check if none of the
columns returned by the server were of the expected type, but we want
it to fail if even one column is not of the expected type.
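Schematically (illustrative column types, not the exact libpq_fetch.c
condition):

    /* before: complains only if EVERY column has the wrong type */
    if (PQftype(res, 0) != BYTEAOID && PQftype(res, 1) != INT8OID)
        pg_fatal("unexpected result set format\n");

    /* after: complains if ANY column has the wrong type */
    if (PQftype(res, 0) != BYTEAOID || PQftype(res, 1) != INT8OID)
        pg_fatal("unexpected result set format\n");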