Alvaro Herrera [Tue, 15 Aug 2017 21:14:07 +0000 (18:14 -0300)]
Simplify autovacuum work-item implementation
The initial implementation of autovacuum work-items used a dynamic
shared memory area (DSA). However, it's argued that dynamic shared
memory is not portable enough, so we cannot rely on it being supported
everywhere; at the same time, autovacuum work-items are now a critical
part of the server, so it's not acceptable that they don't work in the
cases where dynamic shared memory is disabled. Therefore, let's fall
back to a simpler implementation of work-items that just uses
autovacuum's main shared memory segment for storage.
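A minimal sketch of the fixed-array approach, with illustrative names
(NUM_WORKITEMS and the field names are placeholders, not necessarily what
was committed): the work-item queue becomes a plain array inside
autovacuum's existing shmem struct, so no DSA is needed.

    #include "postgres.h"
    #include "storage/block.h"

    #define NUM_WORKITEMS   256         /* assumed queue depth */

    typedef struct AutoVacuumWorkItem
    {
        int         avw_type;           /* kind of work requested */
        bool        avw_used;           /* slot is allocated */
        bool        avw_active;         /* a worker is processing it */
        Oid         avw_database;
        Oid         avw_relation;
        BlockNumber avw_blockNumber;
    } AutoVacuumWorkItem;

    typedef struct AutoVacuumShmemStruct
    {
        /* ... existing launcher/worker bookkeeping fields ... */
        AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
    } AutoVacuumShmemStruct;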
Alvaro Herrera [Tue, 15 Aug 2017 16:35:12 +0000 (13:35 -0300)]
Fix error handling path in autovacuum launcher
The original code (since 00e6a16d01) assumed that aborting the
transaction in the autovacuum launcher was sufficient to release all
resources, but in reality the launcher runs quite a lot of code outside
of any transaction. Re-introduce individual cleanup calls to make abort
more robust.
Reported-by: Robert Haas
Discussion: https://postgr.es/m/CA+TgmobQVbz4K_+RSmiM9HeRKpy3vS5xnbkL95gSEnWijzprKQ@mail.gmail.com
Tom Lane [Tue, 15 Aug 2017 15:07:52 +0000 (11:07 -0400)]
Distinguish wait-for-connection from wait-for-write-ready on Windows.
The API for WaitLatch and friends followed the Unix convention in which
waiting for a socket connection to complete is identical to waiting for
the socket to accept a write. While Windows provides a select(2)
emulation that agrees with that, the native WaitForMultipleObjects API
treats them as quite different --- and for some bizarre reason, it will
report a not-yet-connected socket as write-ready. libpq itself has so
far escaped dealing with this because it waits with select(), but in
libpqwalreceiver.c we want to wait using WaitLatchOrSocket. The semantics
mismatch resulted in replication connection failures on Windows, but only
for remote connections (apparently, localhost connections complete
immediately, or at least too fast for anyone to have noticed the problem
in single-machine testing).
To fix, introduce an additional WL_SOCKET_CONNECTED wait flag for
WaitLatchOrSocket, which is identical to WL_SOCKET_WRITEABLE on
non-Windows, but results in waiting for FD_CONNECT events on Windows.
Ideally, we would also distinguish the two conditions in the API for
PQconnectPoll(), but changing that API at this point seems infeasible.
Instead, cheat by checking for PQstatus() == CONNECTION_STARTED to
determine that we're still waiting for the connection to complete.
(This is a cheat mainly because CONNECTION_STARTED is documented as an
internal state rather than something callers should rely on. Perhaps
we ought to change the documentation ... but this patch doesn't.)
Per reports from Jobin Augustine and Igor Neyman. Back-patch to v10
where commit 1e8a85009 exposed this longstanding shortcoming.
Andres Freund, minor fix and some code review/beautification by me
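A hedged sketch of how a caller might use the new flag (the helper and
the wait-event name are assumptions of mine; the real logic lives in
libpqwalreceiver.c):

    #include "postgres.h"
    #include "libpq-fe.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "storage/latch.h"

    static void
    wait_until_writable(PGconn *conn)
    {
        int         sock_flag;

        /*
         * CONNECTION_STARTED is documented as an internal state; as the
         * commit message says, relying on it here is a deliberate cheat.
         */
        if (PQstatus(conn) == CONNECTION_STARTED)
            sock_flag = WL_SOCKET_CONNECTED;    /* FD_CONNECT on Windows */
        else
            sock_flag = WL_SOCKET_WRITEABLE;

        (void) WaitLatchOrSocket(MyLatch,
                                 WL_LATCH_SET | sock_flag,
                                 PQsocket(conn),
                                 -1L,   /* no timeout */
                                 WAIT_EVENT_LIBPQWALRECEIVER_CONNECT);
    }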
Robert Haas [Tue, 15 Aug 2017 13:16:33 +0000 (09:16 -0400)]
Avoid unnecessary single-child Append nodes.
Before commit d3cc37f1d801a6b5cad9bf179274a8, an inheritance parent
whose only children were temp tables of other sessions would end up
as a simple scan of the parent; but with that commit, we end up with
an Append node, per a report from Ashutosh Bapat. Tweak the logic
so that we go back to the old way, and update the function header
comment for partitioning while we're at it.
Ashutosh Bapat, reviewed by Amit Langote and adjusted by me.
Tom Lane [Mon, 14 Aug 2017 19:43:20 +0000 (15:43 -0400)]
Handle elog(FATAL) during ROLLBACK more robustly.
Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction. In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction. This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal". The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.
We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals. It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext. That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory. This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.
The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check. If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.
This has been like this a long time, so back-patch to all supported
branches.
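A simplified sketch of the minimal-cleanup path described above, as it
would sit inside xact.c (AtAbort_Memory and CleanupTransaction are
file-local helpers there; this is an outline of the idea, not the
committed code):

    /* in xact.c, for the TBLOCK_ABORT case of AbortOutOfAnyTransaction() */
    static void
    cleanup_after_failed_rollback(void)
    {
        AtAbort_Memory();       /* guarantee a safe CurrentMemoryContext */
        AtAbort_Portals();      /* make the ROLLBACK's portal droppable */
        CleanupTransaction();   /* then proceed with normal cleanup */
    }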
Tom Lane [Mon, 14 Aug 2017 15:48:59 +0000 (11:48 -0400)]
Absorb -D_USE_32BIT_TIME_T switch from Perl, if relevant.
Commit 3c163a7fc's original choice to ignore all #define symbols whose
names begin with underscore turns out to be too simplistic. On Windows,
some Perl installations are built with -D_USE_32BIT_TIME_T, and we must
absorb that or we get the wrong result for sizeof(PerlInterpreter).
This effectively re-reverts commit ef58b87df, which injected that symbol
in a hacky way, making it apply to all of Postgres not just PL/Perl.
More significantly, it did so on *all* 32-bit Windows builds, even when
the Perl build to be used did not select this option, so it failed
to work properly with some newer Perl builds.
By making this change, we would be introducing an ABI break in 32-bit
Windows builds; but fortunately we have not used type time_t in any
exported Postgres APIs in a long time. So it should be OK, both for
PL/Perl itself and for third-party extensions, if an extension library
is built with a different _USE_32BIT_TIME_T setting than the core code.
Patch by me, based on research by Ashutosh Sharma and Robert Haas.
Back-patch to all supported branches, as commit 3c163a7fc was.
Tom Lane [Sun, 13 Aug 2017 20:15:14 +0000 (16:15 -0400)]
Remove AtEOXact_CatCache().
The sole useful effect of this function, to check that no catcache
entries have positive refcounts at transaction end, has really been
obsolete since we introduced ResourceOwners in PG 8.1. We reduced the
checks to assertions years ago, so that the function was a complete
no-op in production builds. There have been previous discussions about
removing it entirely, but consensus up to now was that it had some small
value as a cross-check for bugs in the ResourceOwner logic.
However, it now emerges that it's possible to trigger these assertions
if you hit an assert-enabled backend with SIGTERM during a call to
SearchCatCacheList, because that function temporarily increases the
refcounts of entries it intends to add to a catcache list.
In a normal ERROR scenario, the extra refcounts are cleaned up by
SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a
transaction abort and exit without ever executing PG_CATCH handlers.
There's a case to be made that this is a generic hazard and we should
consider restructuring elog(FATAL) handling so that pending PG_CATCH
handlers do get run. That's pretty scary though: it could easily create
more problems than it solves. Preliminary stress testing by Andreas
Seltenreich suggests that there are not many live problems of this ilk,
so we rejected that idea.
There are more-localized ways to fix the problem; the most principled
one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY.
But adding cycles to SearchCatCacheList isn't very appealing. We could
also weaken the assertions in AtEOXact_CatCache in some more or less
ad-hoc way, but that just makes its raison d'etre even less compelling.
In the end, the most reasonable solution seems to be to just remove
AtEOXact_CatCache altogether, on the grounds that it's not worth trying
to fix it. It hasn't found any bugs for us in many years.
Per report from Jeevan Chalke. Back-patch to all supported branches.
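For reference, a hedged sketch of the PG_ENSURE_ERROR_CLEANUP approach
that was considered and rejected on cost grounds; unlike plain PG_TRY,
it registers an exit callback so the cleanup also runs on a FATAL exit.
Names here are illustrative:

    #include "postgres.h"
    #include "storage/ipc.h"

    static void
    release_temp_refcounts(int code, Datum arg)
    {
        /* drop the temporary catcache refcounts here */
    }

    void
    build_catcache_list_safely(void)
    {
        PG_ENSURE_ERROR_CLEANUP(release_temp_refcounts, (Datum) 0);
        {
            /* bump refcounts and assemble the list */
        }
        PG_END_ENSURE_ERROR_CLEANUP(release_temp_refcounts, (Datum) 0);
    }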
Tom Lane [Sat, 12 Aug 2017 16:08:54 +0000 (12:08 -0400)]
Simplify fetch-slot-xmins logic in recovery TAP tests.
Merge wait_slot_xmins() into get_slot_xmins(). At this point the only
place that wasn't doing a wait was the initial-state test, and a wait
there seems pretty harmless.
Tom Lane [Fri, 11 Aug 2017 21:39:27 +0000 (17:39 -0400)]
Be more thorough about cleaning out gcov litter.
At least on my machine, a run with code coverage enabled produces some
".gcov" files whose names begin with ".". "rm -f *.gcov" fails to match
those, so they don't get cleaned up by "make clean". Fix it.
Tom Lane [Fri, 11 Aug 2017 21:27:54 +0000 (17:27 -0400)]
Add regression tests exercising more code paths in nodeLimit.c.
Perusal of the code coverage report shows that the existing regression
test cases for LIMIT/OFFSET don't exercise the nodeLimit code paths
involving backwards scan, empty results, or null values of LIMIT/OFFSET.
Improve the coverage.
Tom Lane [Fri, 11 Aug 2017 20:52:12 +0000 (16:52 -0400)]
Add regression tests exercising the non-hashed code paths in nodeSetop.c.
Perusal of the code coverage report shows that the existing regression
test cases for INTERSECT and EXCEPT seemingly all prefer the SETOP_HASHED
implementation. Add some test cases in which we force use of the
SETOP_SORTED mode.
Tom Lane [Fri, 11 Aug 2017 19:19:40 +0000 (15:19 -0400)]
Remove pgbench's restriction on placement of -M switch.
Previously the -M switch had to appear before any switch that directly
or indirectly specified a benchmarking script. This was both confusing
and inadequately documented, as per gripe from Tatsuo Ishii. We can
remove the restriction at the cost of making an extra pass over the
lists of SQL commands, which seems like a cheap price (the string scans
themselves likely cost much more). The change is just to not extract
parameters from the SQL commands until we have finished parsing the
switches and know the final value of -M.
Per discussion, we'll treat this as a low-grade bug fix and sneak it
into v10, rather than holding it for v11.
Tom Lane, reviewed by Tatsuo Ishii and Fabien Coelho
Peter Eisentraut [Thu, 10 Aug 2017 00:34:51 +0000 (20:34 -0400)]
Reject use of ucol_strcollUTF8() before ICU 53
Various bugs can cause crashes, so don't use that function before ICU
53. It will fall back to the code path used for other encodings.
Since we now tie the function availability to an ICU version, we don't
need the configure test anymore. That also resolves the issue that the
test result was previously hardcoded for Windows.
researched by Daniel Verite <daniel@manitou-mail.org>, Peter Geoghegan
<pg@bowt.ie>, Tom Lane <tgl@sss.pgh.pa.us>
Robert Haas [Thu, 10 Aug 2017 17:44:30 +0000 (13:44 -0400)]
Improve the error message when creating an empty range partition.
The previous message didn't mention the name of the table or the
bounds. Put the table name in the primary error message and the
bounds in the detail message.
Amit Langote, changed slightly by me. Suggestions on the exact
phrasing from Tom Lane, David G. Johnston, and Dean Rasheed.
Robert Haas [Thu, 10 Aug 2017 17:22:31 +0000 (13:22 -0400)]
Make some more improvements to parallel query documentation.
Many places that mentioned only Gather should also mention Gather
Merge, or should be phrased in a more neutral way. Be more clear
about the fact that max_parallel_workers_per_gather affects the number
of workers the planner may want to use. Fix a typo. Explain how
Gather Merge works. Adjust wording around parallel scans to be a bit
more clear. Adjust wording around parallel-restricted operations for
the fact that uncorrelated subplans are no longer restricted.
Robert Haas [Thu, 10 Aug 2017 15:20:57 +0000 (11:20 -0400)]
Remove incorrect assertion in clog.c
We must advance the oldest XID that can be safely looked up in clog
*before* truncating CLOG, and the oldest XID that can't be reused
*after* truncating CLOG. This assertion, and the accompanying
comment, are confused; remove them.
Tom Lane [Wed, 9 Aug 2017 21:03:09 +0000 (17:03 -0400)]
Fix handling of container types in find_composite_type_dependencies.
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc. The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.
The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain. This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.
Add test cases illustrating the problems, and back-patch to all
supported branches.
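A hedged sketch of the generalized recursion (snapshot handling and the
actual per-column work are elided; treat this as an outline of the idea,
not the committed function):

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/htup_details.h"
    #include "access/skey.h"
    #include "access/stratnum.h"
    #include "catalog/indexing.h"
    #include "catalog/pg_depend.h"
    #include "catalog/pg_type.h"
    #include "utils/fmgroids.h"
    #include "utils/rel.h"

    /* visit every type that depends directly on typeOid, recursively */
    static void
    recurse_into_dependent_types(Relation depRel, Oid typeOid)
    {
        ScanKeyData key[2];
        SysScanDesc scan;
        HeapTuple   tup;

        ScanKeyInit(&key[0], Anum_pg_depend_refclassid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(TypeRelationId));
        ScanKeyInit(&key[1], Anum_pg_depend_refobjid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(typeOid));

        scan = systable_beginscan(depRel, DependReferenceIndexId, true,
                                  NULL, 2, key);
        while (HeapTupleIsValid(tup = systable_getnext(scan)))
        {
            Form_pg_depend dep = (Form_pg_depend) GETSTRUCT(tup);

            if (dep->classid == TypeRelationId)
            {
                /* domain, range, array, ... over typeOid: recurse */
                recurse_into_dependent_types(depRel, dep->objid);
            }
        }
        systable_endscan(scan);
    }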
Tom Lane [Wed, 9 Aug 2017 16:05:53 +0000 (12:05 -0400)]
Prevent passing down MAKELEVEL/MAKEFLAGS from non-GNU make to GNU make.
FreeBSD's make, for one, sets the MAKELEVEL environment variable when
invoking commands. In the special Makefile we provide to hand off control
from a non-GNU make to GNU make, this causes GNU make to think it is a
child make invocation rather than top-level. That interferes with the hack
added in commit dcae5facc to cause the temp-install tree to be made only by
the top-level invocation of gmake. Unset the variable to prevent that.
Likewise unset MAKEFLAGS, which FreeBSD's make also sets, and which could
easily confuse gmake. There are no reports of actual trouble from that,
but it seems better to be proactive.
Tom Lane [Tue, 8 Aug 2017 23:18:11 +0000 (19:18 -0400)]
Fix datumSerialize infrastructure to not crash on non-varlena data.
Commit 1efc7e538 did a poor job of emulating existing logic for touching
Datums that might be expanded-object pointers. It didn't check for typlen
being -1 first, which meant it could crash on fixed-length pass-by-ref
values, and probably on cstring values as well. It also didn't use
DatumGetPointer before VARATT_IS_EXTERNAL_EXPANDED, which while currently
harmless is not according to documentation nor prevailing style.
I also think the lack of any explanation as to why datumSerialize makes
these particular nonobvious choices is pretty awful, so fix that.
Per report from Jarred Ward. Back-patch to 9.6 where this code came in.
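A hedged sketch of the corrected test (the helper name is mine; the real
check sits inline in datumSerialize):

    #include "postgres.h"

    static bool
    datum_is_expanded_object(Datum value, bool typByVal, int typLen)
    {
        /*
         * Only a pass-by-reference varlena (typLen == -1) can carry an
         * expanded-object pointer; testing typLen first keeps us from
         * applying VARATT_IS_EXTERNAL_EXPANDED to fixed-length
         * pass-by-ref or cstring values, and DatumGetPointer is applied
         * before inspecting the pointer, per prevailing style.
         */
        return (!typByVal && typLen == -1 &&
                VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(value)));
    }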
Tom Lane [Tue, 8 Aug 2017 22:03:30 +0000 (18:03 -0400)]
Fix yet another race condition in recovery/t/001_stream_rep.pl.
In commit 5c77690f6, we added polling in front of most of the
get_slot_xmins calls in 001_stream_rep.pl, but today's results from
buildfarm member nightjar show that at least one more poll loop
is needed.
Proactively add a poll loop before the next-to-last get_slot_xmins call
as well. It may be that there is no race condition there because the
standby_2 server is shut down at that point, but I'm quite tired of
fighting with this test script. The empirical evidence that it's safe,
from the buildfarm, is no stronger than the evidence for the other
call that nightjar just proved unsafe.
The only remaining get_slot_xmins calls without wait_slot_xmins
protection are the first two, which should be OK since nothing has
happened at that point. It's tempting to ignore that special case
and merge get_slot_xmins and wait_slot_xmins into a single function.
I didn't go that far though.
Alvaro Herrera [Tue, 8 Aug 2017 20:07:46 +0000 (16:07 -0400)]
Fix replication origin-related race conditions
Similar to what was fixed in commit 9915de6c1cb2 for replication slots,
but this time it's related to replication origins: DROP SUBSCRIPTION
attempts to drop the replication origin, but that fails if the
replication worker process hasn't yet marked it unused. This causes
failures in the buildfarm:
ERROR: could not drop replication origin with OID 1, in use by PID 34069
Like the aforementioned commit, fix by having the process running DROP
SUBSCRIPTION sleep until the worker marks the replication origin
struct as free. This uses a condition variable on each replication
origin shmem state struct, so that the session trying to drop can sleep
and expect to be awakened by the process keeping the origin open.
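A hedged sketch of the sleep/wake protocol (the struct and field names
are illustrative; the real state lives in the replication-origin shmem
array):

    #include "postgres.h"
    #include "pgstat.h"
    #include "storage/condition_variable.h"

    typedef struct OriginSlot
    {
        int               acquired_by;  /* PID of owning worker, or 0 */
        ConditionVariable origin_cv;
    } OriginSlot;

    /* DROP SUBSCRIPTION side: wait until the worker lets go */
    static void
    wait_for_origin_release(OriginSlot *slot)
    {
        ConditionVariablePrepareToSleep(&slot->origin_cv);
        while (slot->acquired_by != 0)
            ConditionVariableSleep(&slot->origin_cv,
                                   WAIT_EVENT_REPLICATION_ORIGIN_DROP);
        ConditionVariableCancelSleep();
    }

    /* worker side: release the origin and wake any waiter */
    static void
    release_origin(OriginSlot *slot)
    {
        slot->acquired_by = 0;
        ConditionVariableBroadcast(&slot->origin_cv);
    }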
Alvaro Herrera [Tue, 8 Aug 2017 19:37:44 +0000 (15:37 -0400)]
Fix inadequacies in recently added wait events
In commit 9915de6c1cb2, we introduced a new wait point for replication
slots and incorrectly labelled it as wait event PG_WAIT_LOCK. That's
wrong, so invent an appropriate new wait event instead, and document it
properly.
While at it, fix numerous other problems in the vicinity:
- two different walreceiver wait events were being mixed up in a single
wait event (which wasn't documented either); split it out so that they
can be distinguished, and document the new events properly.
- ParallelBitmapPopulate was documented but didn't exist.
- ParallelBitmapScan was not documented (I think this should be called
"ParallelBitmapScanInit" instead.)
Tom Lane [Mon, 7 Aug 2017 20:42:18 +0000 (16:42 -0400)]
Skip test for IPC::Run if user is overriding our search for PROVE.
The check for IPC::Run we added in commit c254970ad is useful in simple
cases, but there are real use-cases where "prove" is coming from a
different Perl installation than the "perl" we want to use to build.
In such cases asking whether "perl" knows about IPC::Run is irrelevant
and can cause an unnecessary configure failure. Hence, if the user has
specified a value for PROVE, skip the IPC::Run check. Per discussion
with Andrew Dunstan.
Fix local/remote attribute mix-up in logical replication
This would lead to failures if local and remote tables have a different
column order. The tests previously didn't catch that because they only
tested the initial data copy. So add another test that exercises the
apply worker.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Tom Lane [Mon, 7 Aug 2017 14:19:01 +0000 (10:19 -0400)]
Require update permission for the large object written by lo_put().
lo_put() surely should require UPDATE permission, the same as lowrite(),
but it failed to check for that, as reported by Chapman Flack. Oversight
in commit c50b7c09d; backpatch to 9.4 where that was introduced.
Noah Misch [Mon, 7 Aug 2017 14:09:28 +0000 (07:09 -0700)]
Again match pg_user_mappings to information_schema.user_mapping_options.
Commit 3eefc51053f250837c3115c12f8119d16881a2d7 claimed to make
pg_user_mappings enforce the qualifications user_mapping_options had
been enforcing, but its removal of a longstanding restriction left them
distinct when the current user is the subject of a mapping yet has no
server privileges. user_mapping_options emits no rows for such a
mapping, but pg_user_mappings includes full umoptions. Change
pg_user_mappings to show null for umoptions. Back-patch to 9.2, like
the above commit.
Disallow empty passwords in all authentication methods.
Some authentication methods allowed empty passwords, others did not. On
the client side, libpq does not even try to authenticate with an empty
password, which makes using empty passwords hazardous: an administrator
might think that an account with an empty password cannot be used to log
in, because psql doesn't allow it, and not realize that a different
client would in fact allow it. To clear that confusion and to be
consistent, disallow empty passwords in all authentication methods.
All the authentication methods that used plaintext authentication over the
wire, except for BSD authentication, already checked that the password
received from the user was not empty. To avoid forgetting it in the future
again, move the check to the recv_password_packet function. That only
forbids using an empty password with plaintext authentication, however.
MD5 and SCRAM need a different fix:
* In stable branches, check that the MD5 hash stored for the user does
not correspond to an empty string. This adds some overhead to MD5
authentication, because the server needs to compute an extra MD5 hash, but
it is not noticeable in practice.
* In HEAD, modify CREATE and ALTER ROLE to clear the password if an empty
string, or a password hash that corresponds to an empty string, is
specified. The user-visible behavior is the same as in the stable branches,
the user cannot log in, but it seems better to stop the empty password from
entering the system in the first place. Secondly, it is fairly expensive to
check that a SCRAM hash doesn't correspond to an empty string, because
computing a SCRAM hash is much more expensive than an MD5 hash by design,
so better avoid doing that on every authentication.
We could clear the password on CREATE/ALTER ROLE also in stable branches,
but we would still need to check at authentication time, because even if we
prevent empty passwords from being stored in pg_authid, there might be
existing ones there already.
Reported by Jeroen van der Ham, Ben de Graaff and Jelte Fennema.
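A hedged sketch of the centralized plaintext-path check (the helper name
is mine; the committed check sits inside recv_password_packet):

    #include "postgres.h"

    static void
    reject_empty_password(const char *passwd)
    {
        if (passwd == NULL || passwd[0] == '\0')
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PASSWORD),
                     errmsg("empty password returned by client")));
    }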
The NOTICE messages about tables being added or removed during
subscription refresh would be incorrect and possibly confusing if the
transaction rolls back, so silence them but keep them available for
debugging.
Tom Lane [Mon, 7 Aug 2017 03:26:09 +0000 (23:26 -0400)]
Update RELEASE_CHANGES' example of branch name format.
We're planning to put an underscore before the major version number in
branch names for v10 and later. Make sure the recipe in RELEASE_CHANGES
reflects that.
In passing, add a reminder to consider doing pgindent right before
the branch.
The callers of GetOldestSafeDecodingTransactionId() all passed an
inverted value for the argument introduced in 2bef06d516460. Luckily this
appears to be inconsequential for the moment, as we wait for
concurrent in-progress transactions when assembling a
snapshot. Additionally this could only make a difference when adding a
second logical slot, because only a pre-existing slot could cause an
issue by lowering the returned xid dangerously much.
Reported-By: Antonin Houska
Discussion: https://postgr.es/m/32704.1496993134@localhost
Backport: 9.4-, where 2bef06d516460 was backpatched to.
Tom Lane [Sat, 5 Aug 2017 15:47:28 +0000 (11:47 -0400)]
Improve configure's check for ICU presence.
Without ICU's header files, "configure --with-icu" would succeed anyway,
at least when using the non-pkgconfig-based setup. Then you got a bunch of
ugly failures at build. Add an explicit header check to tighten that up.
Robert Haas [Sat, 5 Aug 2017 14:49:26 +0000 (10:49 -0400)]
Make pg_stop_backup's wait_for_archive flag work on standbys.
Previously, it had no effect. Now, if archive_mode=always, it will
work, and if not, you'll get a warning.
Masahiko Sawada, Michael Paquier, and Robert Haas. The patch as
submitted also changed the behavior so that we would write and remove
history files on standbys, but that seems like material for a separate
patch to me.
Supporting ICU 4.2 seems useful because it ships with CentOS 6.
Versions before ICU 4.6 don't support pkg-config, so document an
installation method without using pkg-config.
In ICU 4.2, ucol_getKeywordsForLocale() sometimes returns values that
will not be accepted by uloc_toLanguageTag(). Skip loading keyword
variants in that version.
Robert Haas [Fri, 4 Aug 2017 23:33:01 +0000 (19:33 -0400)]
hash: Immediately after a bucket split, try to clean the old bucket.
If it works, then we won't be storing two copies of all the tuples
that were just moved. If not, VACUUM will still take care of it
eventually. Per a report from AP and analysis from Amit Kapila, it
seems that a bulk load can cause splits fast enough that VACUUM won't
deal with the problem in time to prevent bloat.
Robert Haas [Fri, 4 Aug 2017 19:29:26 +0000 (15:29 -0400)]
hash: Increase the number of possible overflow bitmaps by 8x.
Per a report from AP, it's not that hard to exhaust the supply of
bitmap pages if you create a table with a hash index and then insert a
few billion rows - and then you start getting errors when you try to
insert additional rows. In the particular case reported by AP,
there's another fix that we can make to improve recycling of overflow
pages, which is another way to avoid the error, but there may be other
cases where this problem happens and that fix won't help. So let's
buy ourselves as much headroom as we can without rearchitecting
anything.
The comments claim that the old limit was 64GB, but it was really
only 32GB, because we didn't use all the bits in the page for bitmap
bits - only the largest power of 2 that could fit after deducting
space for the page header and so forth. Thus, we have 4kB per page
for bitmap bits, not 8kB. The new limit is thus actually 8 times the
old *real* limit but only 4 times the old *purported* limit.
Since this breaks on-disk compatibility, bump HASH_VERSION. We've
already done this earlier in this release cycle, so this doesn't cause
any incremental inconvenience for people using pg_upgrade from
releases prior to v10. However, users who use pg_upgrade to reach
10beta3 or later from 10beta2 or earlier will need to REINDEX any hash
indexes again.
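Back-of-envelope check of those numbers, assuming the limit on bitmap
pages went from 128 to 1024 (the commit message states only the 8x
ratio, so the absolute counts here are an assumption):

    #include <stdio.h>

    int
    main(void)
    {
        const long long page_size = 8192;       /* 8kB pages */
        const long long bitmap_bytes = 4096;    /* 4kB of bitmap bits */
        const long long bits = bitmap_bytes * 8;            /* 32768 */
        const long long per_bitmap_page = bits * page_size; /* 256MB */

        /* one bit tracks one 8kB overflow page */
        printf("old real limit: %lld GB\n",
               (128 * per_bitmap_page) >> 30);      /* 32 GB */
        printf("new limit:      %lld GB\n",
               (1024 * per_bitmap_page) >> 30);     /* 256 GB */
        return 0;
    }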
Tom Lane [Fri, 4 Aug 2017 15:45:18 +0000 (11:45 -0400)]
Apply ALTER ... SET NOT NULL recursively in ALTER ... ADD PRIMARY KEY.
If you do ALTER COLUMN SET NOT NULL against an inheritance parent table,
it will recurse to mark all the child columns as NOT NULL as well. This
is necessary for consistency: if the column is labeled NOT NULL then
reading it should never produce nulls.
However, that didn't happen in the case where ALTER ... ADD PRIMARY KEY
marks a target column NOT NULL that wasn't before. That was questionable
from the beginning, and now Tushar Ahuja points out that it can lead to
dump/restore failures in some cases. So let's make that case recurse too.
Although this is meant to fix a bug, it's enough of a behavioral change
that I'm pretty hesitant to back-patch, especially in view of the lack
of similar field complaints. It doesn't seem to be too late to put it
into v10 though.
Tom Lane [Fri, 4 Aug 2017 15:07:10 +0000 (11:07 -0400)]
Disallow SSL session tickets.
We don't actually support session tickets, since we do not create an SSL
session identifier. But it seems that OpenSSL will issue a session ticket
on-demand anyway, which will then fail when used. This results in
reconnection failures when using ticket-aware client-side SSL libraries
(such as the Npgsql .NET driver), as reported by Shay Rojansky.
To fix, just tell OpenSSL not to issue tickets. At some point in the
far future, we might consider enabling tickets instead. But the security
implications of that aren't entirely clear; and besides it would have
little benefit except for very short-lived database connections, which is
Something We're Bad At anyhow. It would take a lot of other work to get
to a point where that would really be an exciting thing to do.
While at it, also tell OpenSSL not to use a session cache. This doesn't
really do anything, since a backend would never populate the cache anyway,
but it might gain some micro-efficiencies and/or reduce security
exposures.
Patch by me, per discussion with Heikki Linnakangas and Shay Rojansky.
Back-patch to all supported versions.
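The OpenSSL calls involved are straightforward; a sketch of the two
settings (guarded the way portable OpenSSL code usually is; the wrapper
function is mine):

    #include <openssl/ssl.h>

    static void
    disable_ssl_session_reuse(SSL_CTX *context)
    {
    #ifdef SSL_OP_NO_TICKET
        /* never issue session tickets we can't honor later */
        SSL_CTX_set_options(context, SSL_OP_NO_TICKET);
    #endif
    #ifdef SSL_SESS_CACHE_OFF
        /* a backend never populates the cache anyway; say so explicitly */
        SSL_CTX_set_session_cache_mode(context, SSL_SESS_CACHE_OFF);
    #endif
    }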
ALTER USER ... SET did not support all the syntax variants of ALTER ROLE
... SET. Fix that, and to avoid further deviations of this kind, unify
many of the grammar rules for ROLE/USER/GROUP commands.
Tom Lane [Thu, 3 Aug 2017 21:36:23 +0000 (17:36 -0400)]
Fix pg_dump/pg_restore to emit REFRESH MATERIALIZED VIEW commands last.
Because we push all ACL (i.e. GRANT/REVOKE) restore steps to the end,
materialized view refreshes were occurring while the permissions on
referenced objects were still at defaults. This led to failures if,
say, an MV owned by user A reads from a table owned by user B, even
if B had granted the necessary privileges to A. We've had multiple
complaints about that type of restore failure, most recently from
Jordan Gigov.
The ideal fix for this would be to start treating ACLs as dependency-
sortable objects, rather than hard-wiring anything about their dump order
(the existing approach is a messy kluge dating to commit dc0e76ca3).
But that's going to be a rather major change, and it certainly wouldn't
lead to a back-patchable fix. As a short-term solution, convert the
existing two-pass hack (ie, normal objects then ACLs) to a three-pass hack,
ie, normal objects then ACLs then matview refreshes. Because this happens
in RestoreArchive(), it will also fix the problem when restoring from an
existing archive-format dump.
(Note this means that if a matview refresh would have failed under the
permissions prevailing at dump time, it'll fail during restore as well.
We'll define that as user error rather than something we should try
to work around.)
To avoid performance loss in parallel restore, we need the matview
refreshes to still be parallelizable. Hence, clean things up enough
so that both ACLs and matviews are handled by the parallel restore
infrastructure, instead of reverting back to serial restore for ACLs.
There is still a final serial step, but it shouldn't normally have to
do anything; it's only there to try to recover if we get stuck due to
some problem like unresolved circular dependencies.
Patch by me, but it owes something to an earlier attempt by Kevin Grittner.
Back-patch to 9.3 where materialized views were introduced.
Alvaro Herrera [Thu, 3 Aug 2017 18:48:54 +0000 (14:48 -0400)]
Fix build on zlib-less environments
Commit 4d57e8381677 added support for getting I/O errors out of zlib,
but it introduced a portability problem for systems without zlib.
Repair by wrapping the zlib call inside #ifdef and restoring the original
code in the other branch.
This serves to illustrate the inadequacy of the zlib abstraction in
pg_backup_archiver: there is no way to call gzerror() in that
abstraction. This means that the several places that call GZREAD and
GZWRITE are currently doing error reporting wrongly, but ENOTIME to get
it fixed before next week's release set.
Backpatch to 9.4, like the commit that introduced the problem.
Robert Haas [Thu, 3 Aug 2017 17:09:15 +0000 (13:09 -0400)]
Allow a foreign table CHECK constraint to be initially NOT VALID.
For a table, the constraint can be considered validated immediately,
because the table must be empty. But for a foreign table this is
not necessarily the case.
Robert Haas [Thu, 3 Aug 2017 16:47:00 +0000 (12:47 -0400)]
Improve ExecModifyTable comments.
Some of these comments wrongly implied that only an AFTER ROW trigger
will cause a 'wholerow' attribute to be present for a foreign table,
but a BEFORE ROW trigger can have the same effect. Others implied
that it would always be present for a foreign table, but that's not
true either.
Alvaro Herrera [Wed, 2 Aug 2017 22:26:26 +0000 (18:26 -0400)]
Fix pg_dump's errno checking for zlib I/O
Some error reports were reporting strerror(errno), which for some error
conditions coming from zlib are wrong, resulting in confusing reports
such as
pg_restore: [compress_io] could not read from input file: Success
which makes no sense. To correctly extract the error message we need to
use gzerror(), so let's do that.
This isn't as comprehensive or as neat as I would like, but at least it
should improve things in many common cases. The zlib abstraction in
compress_io does not seem to be applied consistently enough; we could
perhaps improve that, but it seems master-only material, not a bug fix
for back-patching.
This problem goes back all the way, but I decided to apply back to 9.4
only, because older branches don't contain commit 14ea89366 which this
change depends on.
Authors: Vladimir Kunschikov, Álvaro Herrera
Discussion: https://postgr.es/m/1498120508308.9826@infotecs.ru
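A hedged sketch of the gzerror-based reporting (the helper name is
mine):

    #include <errno.h>
    #include <string.h>
    #include <zlib.h>

    static const char *
    get_gz_error(gzFile gzf)
    {
        int         errnum;
        const char *errmsg = gzerror(gzf, &errnum);

        /* Z_ERRNO means the problem was a plain system error */
        if (errnum == Z_ERRNO)
            return strerror(errno);
        return errmsg;
    }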
Tom Lane [Wed, 2 Aug 2017 20:55:03 +0000 (16:55 -0400)]
Add pgtcl back to the list of externally-maintained client interfaces.
FlightAware is still maintaining this, and indeed is seemingly being
more active with it than the pgtclng fork is. List both, for the
time being anyway.
In the back branches, also back-port commit e20f679f6 and other
recent updates to the client-interfaces list. I think these are
probably of current interest to users of back branches. I did
not touch the list of externally maintained PLs in the back
branches, though. Those are much more likely to be server version
sensitive, and I don't know which of these PLs work all the way back.
Tom Lane [Wed, 2 Aug 2017 16:16:50 +0000 (12:16 -0400)]
Remove broken and useless entry-count printing in HASH_DEBUG code.
init_htab(), with #define HASH_DEBUG, prints a bunch of hashtable
parameters. It used to also print nentries, but commit 44ca4022f changed
that to "hash_get_num_entries(hctl)", which is wrong (the parameter should
be "hashp").
Rather than correct the coding, though, let's just remove that field from
the printout. The table must be empty, since we just finished building
it, so expensively calculating the number of entries is rather pointless.
Moreover hash_get_num_entries makes assumptions (about not needing locks)
which we could do without in debugging code.
Noted by Choi Doo-Won in bug #14764. Back-patch to 9.6 where the
faulty code was introduced.
Tom Lane [Wed, 2 Aug 2017 15:28:46 +0000 (11:28 -0400)]
Remove duplicate setting of SSL_OP_SINGLE_DH_USE option.
Commit c0a15e07c moved the setting of OpenSSL's SSL_OP_SINGLE_DH_USE option
into a new subroutine initialize_dh(), but forgot to remove it from where
it was. SSL_CTX_set_options() is a trivial function, amounting indeed to
just "ctx->options |= op", hence there's no reason to contort the code or
break separation of concerns to avoid calling it twice. So separating the
DH setup from disabling of old protocol versions is a good change, but we
need to finish the job.
Noted while poking into the question of SSL session tickets.
The buildfarm is still showing at least three distinct behaviors for
a bad locale name in CREATE COLLATION. Although this test was helpful
for getting the error reporting code into some usable shape, it doesn't
seem worth carrying multiple expected-files in order to support the
test in perpetuity. So pull it back out.
Tom Lane [Tue, 1 Aug 2017 21:17:20 +0000 (17:17 -0400)]
Second try at getting useful errors out of newlocale/_create_locale.
The early buildfarm returns for commit 1e165d05f are pretty awful:
not only does Windows not return a useful error, but it looks like
a lot of Unix-ish platforms don't either. Given the number of
different errnos seen so far, guess that what's really going on is
that some newlocale() implementations fail to set errno at all.
Hence, let's try zeroing errno just before newlocale() and then
if it's still zero report as though it's ENOENT. That should cover
the Windows case too.
It's clear that we'll have to drop the regression test case, unless
we want to maintain a separate expected-file for platforms without
HAVE_LOCALE_T. But I'll leave it there awhile longer to see if this
actually improves matters or not.
Tom Lane [Tue, 1 Aug 2017 20:49:23 +0000 (16:49 -0400)]
Suppress less info in regression tests using DROP CASCADE.
DROP CASCADE doesn't currently promise to visit dependent objects in
a fixed order, so when the regression tests use it, we typically need
to suppress the details of which objects get dropped in order to have
predictable test output. Traditionally we've done that by setting
client_min_messages higher than NOTICE, but there's a better way:
we can "\set VERBOSITY terse" in psql. That suppresses the DETAIL
message with the object list, but we still get the basic notice telling
how many objects were dropped. So at least the test case can verify
that the expected number of objects were dropped.
The VERBOSITY method was already in use in a few places; go through
and use it wherever it makes sense.
Tom Lane [Tue, 1 Aug 2017 20:11:51 +0000 (16:11 -0400)]
Try to deliver a sane message for _create_locale() failure on Windows.
We were just printing errno, which is certainly not gonna work on
Windows. Now, it's not entirely clear from Microsoft's documentation
whether _create_locale() adheres to standard Windows error reporting
conventions, but let's assume it does and try to map the GetLastError
result to an errno. If this turns out not to work, probably the best
thing to do will be to assume the error is always ENOENT on Windows.
This is a longstanding bug, but given the lack of previous field
complaints, I'm not excited about back-patching it.
Tom Lane [Tue, 1 Aug 2017 17:51:05 +0000 (13:51 -0400)]
Allow creation of C/POSIX collations without depending on libc behavior.
Most of our collations code has special handling for the locale names
"C" and "POSIX", allowing those collations to be used whether or not
the system libraries think those locale names are valid, or indeed
whether said libraries even have any locale support. But we missed
handling things that way in CREATE COLLATION. This meant you couldn't
clone the C/POSIX collations, nor explicitly define a new collation
using those locale names, unless the libraries allow it. That's pretty
pointless, as well as being a violation of pg_newlocale_from_collation's
API specification.
The practical effect of this change is quite limited: it allows creating
such collations even on platforms that don't HAVE_LOCALE_T, and it allows
making "POSIX" collation objects on Windows, which before this would only
let you make "C" collation objects. Hence, even though this is a bug fix
IMO, it doesn't seem worth the trouble to back-patch.
In passing, suppress the DROP CASCADE detail messages at the end of the
collation regression test. I'm surprised we've never been bit by
message ordering issues there.