Tom Lane [Wed, 28 Jun 2017 16:30:16 +0000 (12:30 -0400)]
Second try at fixing tcp_keepalives_idle option on Solaris.
Buildfarm evidence shows that TCP_KEEPALIVE_THRESHOLD doesn't exist
after all on Solaris < 11. This means we need to take positive action to
prevent the TCP_KEEPALIVE code path from being taken on that platform.
I've chosen to limit it with "&& defined(__darwin__)", since it's unclear
that anyone else would follow Apple's precedent of spelling the symbol
that way.
Also, follow a suggestion from Michael Paquier of eliminating code
duplication by defining a couple of intermediate symbols for the
socket option.
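A minimal sketch of the intermediate-symbol idea (the kind of definitions
involved; treat the exact names here as illustrative):

    /* Pick the platform's "keepalive idle" option once, remembering its
     * name for use in "setsockopt(%s) failed" messages. */
    #if defined(TCP_KEEPIDLE)
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPIDLE
    #define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPIDLE"
    #elif defined(TCP_KEEPALIVE_THRESHOLD)              /* Solaris >= 11 */
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPALIVE_THRESHOLD
    #define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPALIVE_THRESHOLD"
    #elif defined(TCP_KEEPALIVE) && defined(__darwin__) /* macOS */
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPALIVE
    #define PG_TCP_KEEPALIVE_IDLE_STR "TCP_KEEPALIVE"
    #endif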
In passing, make some effort to reduce the number of translatable messages
by replacing "setsockopt(foo) failed" with "setsockopt(%s) failed", etc,
throughout the affected files. And update relevant documentation so
that it doesn't claim to provide an exhaustive list of the possible
socket option names.
Like the previous commit (f0256c774), back-patch to all supported branches.
Stephen Frost [Wed, 28 Jun 2017 14:34:01 +0000 (10:34 -0400)]
Do not require 'public' to exist for pg_dump -c
Commit 330b84d8c4 didn't contemplate the case where the public schema
has been dropped, and introduced into pg_dump (when used with -c) a
query which fails when there is no public schema.
Adjust the query used by pg_dump to handle the case where the public
schema doesn't exist and add tests to check that such a case no longer
fails.
Back-patch the specific fix to 9.6, as the prior commit was.
Adding tests for this case involved adding support to the pg_dump
TAP tests to work with multiple databases, which, while not a large
change, is a bit much to back-patch, so that's only done in master.
Tom Lane [Tue, 27 Jun 2017 22:47:57 +0000 (18:47 -0400)]
Support tcp_keepalives_idle option on Solaris.
Turns out that the socket option for this is named TCP_KEEPALIVE_THRESHOLD,
at least according to the tcp(7P) man page for Solaris 11. (But since that
text refers to "SunOS", it's likely pretty ancient.) It appears that the
symbol TCP_KEEPALIVE does get defined on that platform, but it doesn't
seem to represent a valid protocol-level socket option. This leads to
bleats in the postmaster log, and no tcp_keepalives_idle functionality.
Per bug #14720 from Andrey Lizenko, as well as an earlier report from
Dhiraj Chawla that nobody had followed up on. The issue's been there
since we added the TCP_KEEPALIVE code path in commit 5acd417c8, so
back-patch to all supported branches.
Tom Lane [Tue, 27 Jun 2017 21:51:11 +0000 (17:51 -0400)]
Re-allow SRFs and window functions within sub-selects within aggregates.
check_agg_arguments_walker threw an error upon seeing a SRF or window
function, but that is too aggressive: if the function is within a
sub-select then it's perfectly fine. I broke the SRF case in commit
0436f6bde by copying the logic for window functions ... but that was
broken too, and had been since commit eaccfded9.
Repair both cases in HEAD, and the window function case back to 9.3.
9.2 gets this right.
Tom Lane [Mon, 26 Jun 2017 23:01:26 +0000 (19:01 -0400)]
Reduce wal_retrieve_retry_interval in applicable TAP tests.
By default, wal_retrieve_retry_interval is five seconds, which is far
more than is needed in any of our TAP tests, leaving the test cases
just twiddling their thumbs for significant stretches. Moreover,
because it's so large, we get basically no testing of the retry-before-
master-is-ready code path. Hence, make PostgresNode::init set up
wal_retrieve_retry_interval = '500ms' as part of its customization of
test clusters' postgresql.conf. This shaves quite a few seconds off
the runtime of the recovery TAP tests.
Back-patch into 9.6. We have wal_retrieve_retry_interval in 9.5,
but the test infrastructure isn't there.
Tom Lane [Mon, 26 Jun 2017 21:31:56 +0000 (17:31 -0400)]
Don't lose walreceiver start requests due to race condition in postmaster.
When a walreceiver dies, the startup process will notice that and send
a PMSIGNAL_START_WALRECEIVER signal to the postmaster, asking for a new
walreceiver to be launched. There's a race condition, which at least
in HEAD is very easy to hit, whereby the postmaster might see that
signal before it processes the SIGCHLD from the walreceiver process.
In that situation, sigusr1_handler() just dropped the start request
on the floor, reasoning that it must be redundant. Eventually, after
10 seconds (WALRCV_STARTUP_TIMEOUT), the startup process would make a
fresh request --- but that's a long time if the connection could have
been re-established almost immediately.
Fix it by setting a state flag inside the postmaster that we won't
clear until we do launch a walreceiver. In cases where that results
in an extra walreceiver launch, it's up to the walreceiver to realize
it's unwanted and go away --- but we have, and need, that logic anyway
for the opposite race case.
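In outline, the fix looks something like this (a hedged sketch; the flag
and helper names are approximations, not necessarily the committed ones):

    /* postmaster-local state: a start request we haven't honored yet */
    static bool WalReceiverRequested = false;

    static void
    sigusr1_handler(int postgres_signal_arg)
    {
        if (CheckPostmasterSignal(PMSIGNAL_START_WALRECEIVER))
        {
            /* Remember the request even if we can't act on it yet; the
             * flag is cleared only when a walreceiver actually launches,
             * so the request can't be dropped on the floor anymore. */
            WalReceiverRequested = true;
            MaybeStartWalReceiver();    /* launches if conditions allow */
        }
    }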
I came across this through investigating unexpected delays in the
src/test/recovery TAP tests: it manifests there in test cases where
a master server is stopped and restarted while leaving streaming
slaves active.
This logic has been broken all along, so back-patch to all supported
branches.
Tom Lane [Mon, 26 Jun 2017 20:17:05 +0000 (16:17 -0400)]
Ignore old stats file timestamps when starting the stats collector.
The stats collector disregards inquiry messages that bear a cutoff_time
before when it last wrote the relevant stats file. That's fine, but at
startup when it reads the "permanent" stats files, it absorbed their
timestamps as if they were the times at which the corresponding temporary
stats files had been written. In reality, of course, there's no data
out there at all. This led to disregarding inquiry messages soon after
startup if the postmaster had been shut down and restarted within less
than PGSTAT_STAT_INTERVAL; which is a pretty common scenario, both for
testing and in the field. Requesting backends would hang for 10 seconds
and then report failure to read statistics, unless they got bailed out
by some other backend coming along and making a newer request within
that interval.
I came across this through investigating unexpected delays in the
src/test/recovery TAP tests: it manifests there because the autovacuum
launcher hangs for 10 seconds when it can't get statistics at startup,
thus preventing a second shutdown from occurring promptly. We might
want to do some things in the autovac code to make it less prone to
getting stuck that way, but this change is a good bug fix regardless.
In passing, also fix pgstat_read_statsfiles() to ensure that it
re-zeroes its global stats variables if they are corrupted by a
short read from the stats file. (Other reads in that function
go into temp variables, so that the issue doesn't arise.)
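The short-read fix amounts to a pattern like the following (a hedged
sketch, not the exact committed code):

    /* If the global-stats struct can't be read in full, re-zero it so
     * a partially-overwritten copy can't survive. */
    if (fread(&globalStats, 1, sizeof(globalStats), fpin) !=
        sizeof(globalStats))
    {
        memset(&globalStats, 0, sizeof(globalStats));
        goto done;              /* treat the file as corrupt */
    }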
This has been broken since we created the separation between permanent
and temporary stats files in 8.4, so back-patch to all supported branches.
Tom Lane [Mon, 26 Jun 2017 14:31:10 +0000 (10:31 -0400)]
Minor code review for parse_phrase_operator().
Fix its header comment, which described the old behavior of the <N>
phrase distance operator; we missed updating that in commit 028350f61.
Also, reset errno before strtol() call, to defend against the possibility
that it was already ERANGE at entry. (The lack of complaints says that
it generally isn't, but this is at least a latent bug.) Very minor
stylistic improvements as well.
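The errno idiom in question, as a self-contained sketch (the function
name here is hypothetical):

    #include <errno.h>
    #include <stdlib.h>

    /* Parse the <N> in a phrase distance operator. */
    static long
    parse_distance(const char *ptr, char **endptr)
    {
        long    l;

        errno = 0;                  /* may already be ERANGE at entry */
        l = strtol(ptr, endptr, 10);
        if (errno == ERANGE)
            return -1;              /* caller reports "value out of range" */
        return l;
    }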
Victor Drobny noted the obsolete comment, I noted the errno issue.
Back-patch to 9.6 where this code was added, just in case the errno
issue is a live bug in some cases.
Andres Freund [Wed, 21 Jun 2017 21:14:38 +0000 (14:14 -0700)]
Fix possibility of creating a "phantom" segment after promotion.
When promoting a standby just after an XLOG_SWITCH record was replayed,
and the next segment(s) are already locally available (via walsender,
restore_command + trigger/recovery target), that segment could
accidentally be recycled onto the past of the new timeline. Later the
checkpointer would create a .ready file for it, assuming there was an
error during creation, and it would get archived. That causes trouble
if another standby is later brought up from a basebackup from before
the timeline creation: since XLogFileReadAnyTLI just tries all possible
timelines, it would try to read the segment, which doesn't have valid
contents, and replay would fail.
If the problem has already occurred, it can be fixed by removing the
segment and/or having restore_command filter it out.
The reason for the creation of such "phantom" segments was that after
an XLOG_SWITCH record the EndOfLog variable points to the beginning of
the next segment, and RemoveXlogFile() used XLByteToPrevSeg().
Normally RemoveXlogFile() doing so is harmless, because the last
segment will still exist preventing InstallXLogFileSegment() from
causing harm, but just after promotion there's no previous segment on
the new timeline.
Fix that by using XLByteToSeg() instead of XLByteToPrevSeg().
Author: Andres Freund
Reported-By: Greg Burek
Discussion: https://postgr.es/m/20170619073026.zcwpe6mydsaz5ygd@alap3.anarazel.de
Backpatch: 9.2-, bug older than all supported versions
Bruce Momjian [Tue, 20 Jun 2017 17:20:02 +0000 (13:20 -0400)]
pg_upgrade: start/stop new server after pg_resetwal
When commit 0f33a719fdbb5d8c43839ea0d2c90cd03e2af2d2 removed the
instructions to start/stop the new cluster before running rsync, it
became possible for pg_resetwal/pg_resetxlog to leave the final WAL
record at wal_level=minimal, preventing upgraded standby servers from
reconnecting.
This patch fixes that by having pg_upgrade unconditionally start/stop
the new cluster after pg_resetwal/pg_resetxlog has run.
Backpatch through 9.2 since, though the instructions were added in PG
9.5, they worked all the way back to 9.2.
Tom Lane [Mon, 19 Jun 2017 22:32:22 +0000 (18:32 -0400)]
Fix materialized-view documentation oversights.
When materialized views were added, psql's \d commands were made to
treat them as a separate object category ... but not everyplace in the
documentation or comments got the memo.
Noted by David Johnston. Back-patch to 9.3 where matviews came in.
Tom Lane [Mon, 19 Jun 2017 19:33:41 +0000 (15:33 -0400)]
Avoid regressions in foreign-key-based selectivity estimates.
David Rowley found that the "use the smallest per-column selectivity"
heuristic applied in some cases by get_foreign_key_join_selectivity()
was badly off if the FK columns are independent, producing estimates
much worse than we got before that code was added in 9.6.
One case where that heuristic was used was for LEFT and FULL outer joins
with the referenced rel on the outside of the join. But we should not
really need to special-case those here. eqjoinsel() never has had such a
special case; the correction is applied by calc_joinrel_size_estimate()
instead. Let's just estimate such cases like inner joins and rely on that
later adjustment. (I think there was something of a thinko here, in that
the comments seem to be thinking about the selectivity as defined for
semi/anti joins; but that shouldn't apply to left/full joins.) Add a
regression test exercising such a case to show that this is sane in
at least some cases.
The other case where we used that heuristic was for SEMI/ANTI outer joins,
either if the referenced rel was on the outside, or if it was on the inside
but was part of a join within the RHS. In either case, the FK doesn't give
us a lot of traction towards estimating the selectivity. To ensure that
we don't have regressions from what happened before 9.6, let's punt by
ignoring the FK in such cases and applying the traditional selectivity
calculation. (We might be able to improve on that later, but for now
I just want to be sure it's not worse than 9.5.)
Report and patch by David Rowley, simplified a bit by me. Back-patch
to 9.6 where this code was added.
Tom Lane [Mon, 19 Jun 2017 15:02:45 +0000 (11:02 -0400)]
On Windows, make pg_dump use binary mode for compressed plain text output.
The combination of -Z -Fp and output to stdout resulted in corrupted
output data, because we left stdout in text mode, resulting in newline
conversion being done on the compressed stream. Switch stdout to binary
mode for this case, at the same place where we do it for non-text output
formats.
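The fix is essentially the classic Windows idiom (a sketch; the
condition stands in for pg_dump's actual option struct, and <fcntl.h>
and <io.h> are assumed):

    #ifdef WIN32
        /* Compressed bytes must not be subjected to newline translation. */
        if (compression != 0)
            _setmode(fileno(stdout), O_BINARY);
    #endif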
Report and patch by Kuntal Ghosh, tested by Ashutosh Sharma and Neha
Sharma. Back-patch to all supported branches.
Andres Freund [Mon, 19 Jun 2017 01:48:22 +0000 (18:48 -0700)]
Fix leaking of small spilled subtransactions during logical decoding.
When, during logical decoding, a transaction gets too big, its
contents get spilled to disk. Not just the top-level transaction gets
spilled, but *also* all of its subtransactions, even if they're not
that large themselves. Unfortunately we didn't clean up
such small spilled subtransactions from disk.
Fix that, by keeping better track of whether a transaction has been
spilled to disk.
Author: Andres Freund
Reported-By: Dmitriy Sarafannikov, Fabrízio de Royes Mello
Discussion:
https://postgr.es/m/1457621358.355011041@f382.i.mail.ru
https://postgr.es/m/CAFcNs+qNMhNYii4nxpO6gqsndiyxNDYV0S=JNq0v_sEE+9PHXg@mail.gmail.com
Backpatch: 9.4-, where logical decoding was introduced
Fix dependency when changing a function's argument/return type.
When a new base type is created using the old-style procedure of first
creating the input/output functions with "opaque" in place of the base
type, the "opaque" argument/return type is changed to the final base type,
on CREATE TYPE. However, we did not create a pg_depend record when doing
that, so the functions were left not depending on the type.
Tom Lane [Thu, 15 Jun 2017 19:03:39 +0000 (15:03 -0400)]
Fix low-probability leaks of PGresult objects in the backend.
We had three occurrences of essentially the same coding pattern
wherein we tried to retrieve a query result from a libpq connection
without blocking. In the case where PQconsumeInput failed (typically
indicating a lost connection), all three loops simply gave up and
returned, forgetting to clear any previously-collected PGresult
object. Since those are malloc'd not palloc'd, the oversight results
in a process-lifespan memory leak.
One instance, in libpqwalreceiver, is of little significance because
the walreceiver process would just quit anyway if its connection fails.
But we might as well fix it.
The other two instances, in postgres_fdw, are somewhat more worrisome
because at least in principle the scenario could be repeated, allowing
the amount of memory leaked to build up to something worth worrying
about. Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
calls, as well as other calls that could potentially elog(ERROR),
providing another way to exit without having cleared the PGresult.
Here we need to add PG_TRY logic similar to what exists in quite a
few other places in postgres_fdw.
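The pattern being added looks roughly like this (a hedged sketch; the
real loops wait on the connection's socket rather than spinning):

    PGresult   *volatile last_res = NULL;

    PG_TRY();
    {
        for (;;)
        {
            PGresult   *res;

            CHECK_FOR_INTERRUPTS();     /* can elog(ERROR) */
            if (!PQconsumeInput(conn))
            {
                PQclear(last_res);      /* the previously-missed cleanup */
                last_res = NULL;
                break;                  /* connection probably lost */
            }
            if (PQisBusy(conn))
                continue;               /* result not complete yet */
            res = PQgetResult(conn);
            if (res == NULL)
                break;                  /* query is done */
            PQclear(last_res);          /* PQclear(NULL) is a no-op */
            last_res = res;
        }
    }
    PG_CATCH();
    {
        PQclear(last_res);      /* malloc'd, so otherwise leaked for the
                                 * life of the process */
        PG_RE_THROW();
    }
    PG_END_TRY();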
Coverity noted the libpqwalreceiver bug; I found the other two cases
by checking all calls of PQconsumeInput.
Back-patch to all supported versions as appropriate (9.2 lacks
postgres_fdw, so this is really quite unexciting for that branch).
Bruce Momjian [Thu, 15 Jun 2017 16:30:02 +0000 (12:30 -0400)]
docs: Fix pg_upgrade standby server upgrade docs
It was unsafe to instruct users to start/stop the server after
pg_upgrade was run but before the standby servers were rsync'ed. The
new instructions avoid this.
RELEASE NOTES: This fix should be mentioned in the minor release notes.
Reported-by: Dmitriy Sarafannikov and Sergey Burladyan
Discussion: https://postgr.es/m/87wp8o506b.fsf@seb.koffice.internal
Backpatch-through: 9.5, where standby server upgrade instructions first appeared
Tatsuo Ishii [Thu, 15 Jun 2017 01:01:39 +0000 (10:01 +0900)]
Fix document bug regarding read only transactions.
It was explained that read-only transactions (not in standby) allow
updating sequences. This had been wrong since commit
05d8a561ff85db1545f5768fe8d8dc9d99ad2ef7.
Tom Lane [Tue, 13 Jun 2017 00:04:33 +0000 (20:04 -0400)]
Assert that we don't invent relfilenodes or type OIDs in binary upgrade.
During pg_upgrade's restore run, all relfilenode choices should be
overridden by commands in the dump script. If we ever find ourselves
choosing a relfilenode in the ordinary way, someone blew it. Likewise for
pg_type OIDs. Since pg_upgrade might well succeed anyway, if there happens
not to be a conflict during the regression test run, we need assertions
here to keep us on the straight and narrow.
We might someday be able to remove the assertion in GetNewRelFileNode,
if pg_upgrade is rewritten to remove its assumption that old and new
relfilenodes always match. But it's hard to see how to get rid of the
pg_type OID constraint, since those OIDs are embedded in user tables
in some cases.
Back-patch as far as 9.5, because of the risk of back-patches breaking
something here even if it works in HEAD. I'd prefer to go back further,
but 9.4 fails both assertions due to get_rel_infos()'s use of a temporary
table. We can't use the later-branch solution of a CTE for compatibility
reasons (cf commit 5d16332e9), and it doesn't seem worth inventing some
other way to do the query. (I did check, by dint of changing the Asserts
to elog(WARNING), that there are no other cases of unwanted OID assignments
during 9.4's regression test run.)
Andrew Dunstan [Sat, 10 Jun 2017 14:19:06 +0000 (10:19 -0400)]
Take PROVE_FLAGS from the command line but not the environment
This reverts commit 56b6ef893fee9e9bf47d927a02f4d1ea911f4d9c and instead
makes vcregress.pl parse out PROVE_FLAGS from a command line argument
when doing a TAP test, thus making it consistent with the makefile
treatment.
Robert Haas [Wed, 7 Jun 2017 19:14:55 +0000 (15:14 -0400)]
postgres_fdw: Allow cancellation of transaction control commands.
Commit f039eaac7131ef2a4cf63a10cf98486f8bcd09d2, later back-patched
with commit 1b812afb0eafe125b820cc3b95e7ca03821aa675, allowed many of
the queries issued by postgres_fdw to fetch remote data to respond to
cancel interrupts in a timely fashion. However, it didn't do anything
about the transaction control commands, which remained
noninterruptible.
Improve the situation by changing do_sql_command() to retrieve query
results using pgfdw_get_result(), which uses the asynchronous
interface to libpq so that it can check for interrupts every time
libpq returns control. Since this might result in a situation
where we can no longer be sure that the remote transaction state
matches the local transaction state, add a facility to force all
levels of the local transaction to abort if we've lost track of
the remote state; without this, an apparently-successful commit of
the local transaction might fail to commit changes made on the
remote side. Also, add a 60-second timeout for queries issued during
transaction abort; if that expires, give up and mark the state of
the connection as unknown. Drop all such connections when we exit
the local transaction. Together, these changes mean that if we're
aborting the local toplevel transaction anyway, we can just drop the
remote connection in lieu of waiting (possibly for a very long time)
for it to complete an abort.
This still leaves quite a bit of room for improvement. PQcancel()
has no asynchronous interface, so if we get stuck sending the cancel
request we'll still hang. Also, PQsetnonblocking() is not used, which
means we could block uninterruptibly when sending a query. There
might be some other optimizations possible as well. Nonetheless,
this allows us to escape a wait for an unresponsive remote server
quickly in many more cases than previously.
Report by Suraj Kharage. Patch by me and Rafia Sabih. Review
and testing by Amit Kapila and Tushar Ahuja.
Clear auth context correctly when re-connecting after failed auth attempt.
If authentication over an SSL connection fails, with sslmode=prefer,
libpq will reconnect without SSL and retry. However, we did not clear
the variables related to GSS, SSPI, and SASL authentication state, when
reconnecting. Because of that, the second authentication attempt would
always fail with a "duplicate GSS/SASL authentication request" error.
pg_SSPI_startup did not check for duplicate authentication requests like
the corresponding GSS and SASL functions, so with SSPI, you would leak
some memory instead.
Another way this could manifest itself, on version 10, is if you list
multiple hostnames in the "host" parameter. If the first server requests
Kerberos or SCRAM authentication, but it fails, the attempts to connect to
the other servers will also fail with "duplicate authentication request"
errors.
To fix, move the clearing of authentication state from closePGconn to
pgDropConnection, so that it is cleared also when re-connecting.
Patch by Michael Paquier, with some kibitzing by me.
Backpatch down to 9.3. 9.2 has the same bug, but the code around closing
the connection is somewhat different, so that this patch doesn't apply.
To fix this in 9.2, I think we would need to back-port commit 210eb9b743
first, and then apply this patch. However, given that we only bumped
into this in our own testing, that we haven't heard any reports from
users about this, and that 9.2 will be end-of-lifed in a couple of
months anyway, it doesn't seem worth the risk and trouble.
Andres Freund [Tue, 6 Jun 2017 01:53:41 +0000 (18:53 -0700)]
Unify SIGHUP handling between normal and walsender backends.
Because walsender and normal backends share the same main loop it's
problematic to have two different flag variables, set in signal
handlers, indicating a pending configuration reload. Only certain
walsender commands reach code paths checking for the
variable (START_[LOGICAL_]REPLICATION, CREATE_REPLICATION_SLOT
... LOGICAL, notably not base backups).
This is a bug present since the introduction of walsender, but has
gotten worse in releases since then which allow walsender to do more.
A later patch, not slated for v10, will similarly unify SIGHUP
handling in other types of processes as well.
Author: Petr Jelinek, Andres Freund
Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20170423235941.qosiuoyqprq4nu7v@alap3.anarazel.de
Backpatch: 9.2-, bug is present since 9.0
Andres Freund [Tue, 6 Jun 2017 01:53:41 +0000 (18:53 -0700)]
Prevent possibility of panics during shutdown checkpoint.
When the checkpointer writes the shutdown checkpoint, it checks
afterwards whether any WAL has been written since it started and
throws a PANIC if so. At that point, only walsenders are still
active, so one might think this could not happen, but walsenders can
also generate WAL, for instance in BASE_BACKUP and logical decoding
related commands (e.g. via hint bits). So they can trigger this panic
if such a command is run while the shutdown checkpoint is being
written.
To fix this, divide the walsender shutdown into two phases. First,
checkpointer, itself triggered by postmaster, sends a
PROCSIG_WALSND_INIT_STOPPING signal to all walsenders. If the backend
is idle or runs an SQL query, this causes the backend to shut down; if
logical replication is in progress, all existing WAL records are
processed, followed by a shutdown. Otherwise this causes the walsender
to switch to the "stopping" state. In this state, the walsender will
reject any further replication commands. The checkpointer begins the
shutdown checkpoint once all walsenders are confirmed as
stopping. When the shutdown checkpoint finishes, the postmaster sends
us SIGUSR2. This instructs walsender to send any outstanding WAL,
including the shutdown checkpoint record, wait for it to be replicated
to the standby, and then exit.
Author: Andres Freund, based on an earlier patch by Michael Paquier
Reported-By: Fujii Masao, Andres Freund
Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20170602002912.tqlwn4gymzlxpvs2@alap3.anarazel.de
Backpatch: 9.4, where logical decoding was introduced
Andres Freund [Tue, 6 Jun 2017 01:53:41 +0000 (18:53 -0700)]
Have walsenders participate in procsignal infrastructure.
The non-participation in procsignal was a problem for both changes in
master, e.g. parallelism not working for normal statements run in
walsender backends, and older branches, e.g. recovery conflicts and
catchup interrupts not working for logical decoding walsenders.
This commit thus replaces the previous WalSndXLogSendHandler with
procsignal_sigusr1_handler. In branches since db0f6cad48 that can
lead to additional SetLatch calls, but that only rarely seems to make
a difference.
Author: Andres Freund
Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20170421014030.fdzvvvbrz4nckrow@alap3.anarazel.de
Backpatch: 9.4, earlier commits don't seem to benefit sufficiently
Andres Freund [Mon, 5 Jun 2017 22:56:58 +0000 (15:56 -0700)]
Fix record length computation in pg_waldump/xlogdump.
The current method of computing the record length (excluding the
length of full-page images) has been wrong since the WAL format was
revamped in 2c03216d831160bedd72d45f712601b6f7d03f1c. Only the
main record's length was counted, but that can be significantly too
little if there's data associated with further blocks.
Fix by computing the record length as total_length - fpi_length.
Andres Freund [Sat, 3 Jun 2017 02:05:57 +0000 (19:05 -0700)]
Allow parallelism in COPY (query) TO ...;
Previously this was not allowed, as copy.c didn't set the
CURSOR_OPT_PARALLEL_OK flag when planning the query. Set it.
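The change is essentially one flag at plan time (sketch):

    /* in BeginCopy(), when COPY's source is a query: let the planner
     * consider parallel plans */
    plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, NULL);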
While the lack of parallel query for COPY isn't strictly speaking a
bug, it does prevent parallelism from being used in a facility
commonly used to run long running queries. Thus backpatch to 9.6.
Author: Andres Freund
Discussion: https://postgr.es/m/20170531231958.ihanapplorptykzm@alap3.anarazel.de
Backpatch: 9.6, where parallelism was introduced.
Tom Lane [Thu, 1 Jun 2017 17:32:56 +0000 (13:32 -0400)]
Always use -fPIC, not -fpic, when building shared libraries with gcc.
On some platforms, -fpic fails for sufficiently large shared libraries.
We've mostly not hit that boundary yet, but there are some extensions
such as Citus and pglogical where it's becoming a problem. A bit of
research suggests that the penalty for -fPIC is small, in the
single-digit-percentage range --- and there's none at all on popular
platforms such as x86_64. So let's just default to -fPIC everywhere
and provide one less thing for extension developers to worry about.
Per complaint from Christoph Berg. Back-patch to all supported branches.
(I did not bother to touch the recently-removed Makefiles for sco and
unixware in the back branches, though. We'd have no way to test that
it doesn't break anything on those platforms.)
Alvaro Herrera [Tue, 30 May 2017 19:45:42 +0000 (15:45 -0400)]
Fix wording in amvalidate error messages
Remove some gratuitous message differences by making the AM name
previously embedded in each message be a %s instead. While at it, get
rid of terminology that's unclear and unnecessary in one message.
Tom Lane [Mon, 29 May 2017 21:08:16 +0000 (17:08 -0400)]
Prevent running pg_resetwal/pg_resetxlog against wrong-version data dirs.
pg_resetwal (formerly pg_resetxlog) doesn't insist on finding a matching
version number in pg_control, and that seems like an important thing to
preserve since recovering from corrupt pg_control is a prime reason to
need to run it. However, that means you can try to run it against a
data directory of a different major version, which is at best useless
and at worst disastrous. So as to provide some protection against that
type of pilot error, inspect PG_VERSION at startup and refuse to do
anything if it doesn't match. PG_VERSION is read-only after initdb,
so it's unlikely to get corrupted, and even if it were corrupted it would
be easy to fix by hand.
This hazard has been there all along, so back-patch to all supported
branches.
Tom Lane [Mon, 29 May 2017 19:19:07 +0000 (15:19 -0400)]
Allow NumericOnly to be "+ FCONST".
The NumericOnly grammar production accepted ICONST, + ICONST, - ICONST,
FCONST, and - FCONST, but for some reason not + FCONST. This led to
strange inconsistencies like
regression=# set random_page_cost = +4;
SET
regression=# set random_page_cost = 4000000000;
SET
regression=# set random_page_cost = +4000000000;
ERROR: syntax error at or near "4000000000"
(because 4000000000 is too large to be an ICONST). While there's
no actual functional reason to need to write a "+", if we allow
it for integers it seems like we should allow it for numerics too.
It's been like that forever, so back-patch to all supported branches.
Tom Lane [Fri, 26 May 2017 19:16:59 +0000 (15:16 -0400)]
Move autogenerated array types out of the way during ALTER ... RENAME.
Commit 9aa3c782c added code to allow CREATE TABLE/CREATE TYPE to not fail
when the desired type name conflicts with an autogenerated array type, by
dint of renaming the array type out of the way. But I (tgl) overlooked
that the same case arises in ALTER TABLE/TYPE RENAME. Fix that too.
Back-patch to all supported branches.
Report and patch by Vik Fearing, modified a bit by me
Tom Lane [Fri, 26 May 2017 16:51:05 +0000 (12:51 -0400)]
Fix pg_dump to not emit invalid SQL for an empty operator class.
If an operator class has no operators or functions, and doesn't need
a STORAGE clause, we emitted "CREATE OPERATOR CLASS ... AS ;" which
is syntactically invalid. Fix by forcing a STORAGE clause to be
emitted anyway in this case.
(At some point we might consider changing the grammar to allow CREATE
OPERATOR CLASS without an opclass_item_list. But probably we'd want to
omit the AS in that case, so that wouldn't fix this pg_dump issue anyway.)
It's been like this all along, so back-patch to all supported branches.
Daniel Gustafsson, tweaked by me to avoid a dangling-pointer bug
Tom Lane [Wed, 24 May 2017 19:28:34 +0000 (15:28 -0400)]
Tighten checks for whitespace in functions that parse identifiers etc.
This patch replaces isspace() calls with scanner_isspace() in functions
that are likely to be presented with non-ASCII input. isspace() has
the small advantage that it will correctly recognize no-break space
in single-byte encodings (such as LATIN1); but it cannot work successfully
for any multibyte character, and depending on platform it might return
false positive results for some fragments of multibyte characters. That's
disastrous for functions that are trying to discard whitespace between
valid strings, as noted in bug #14662 from Justin Muise. Even treating
no-break space as whitespace is pretty questionable for the usages touched
here, because the core scanner would think it is an identifier character.
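For reference, scanner_isspace() is essentially the following, which is
why it is safe for multibyte input:

    /* Must match the {space} character class in the core lexer: only
     * these five ASCII bytes, never a byte of a multibyte character. */
    bool
    scanner_isspace(char ch)
    {
        if (ch == ' ' ||
            ch == '\t' ||
            ch == '\n' ||
            ch == '\r' ||
            ch == '\f')
            return true;
        return false;
    }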
Affected functions are parse_ident(), parseNameAndArgTypes (underlying
regprocedurein() and siblings), SplitIdentifierString (used for parsing
GUCs and options that are qualified names or lists of names), and
SplitDirectoriesString (used for parsing GUCs that are lists of
directories).
All the functions adjusted here are parsing SQL identifiers and similar
constructs, so it's reasonable to insist that their definition of
whitespace match the core scanner. So we can hope that this won't cause
many backwards-compatibility problems. I've left alone isspace() calls
in places that aren't really expecting any non-ASCII input characters,
such as float8in().
Tom Lane [Sun, 21 May 2017 17:05:17 +0000 (13:05 -0400)]
Fix precision and rounding issues in money multiplication and division.
The cash_div_intX functions applied rint() to the result of the division.
That's not merely useless (because the result is already an integer) but
it causes precision loss for values larger than 2^52 or so, because of
the forced conversion to float8.
On the other hand, the cash_mul_fltX functions neglected to apply rint() to
their multiplication results, thus possibly causing off-by-one outputs.
Per C standard, arithmetic between any integral value and a float value is
performed in float format. Thus, cash_mul_flt4 and cash_div_flt4 produced
answers good to only about six digits, even when the float value is exact.
We can improve matters noticeably by widening the float inputs to double.
(It's tempting to consider using "long double" arithmetic if available,
but that's probably too much of a stretch for a back-patched fix.)
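Concretely, the multiplication fix looks about like this (a hedged
sketch of one of the affected functions):

    #include <math.h>

    Datum
    cash_mul_flt4(PG_FUNCTION_ARGS)
    {
        Cash    c = PG_GETARG_CASH(0);
        float4  f = PG_GETARG_FLOAT4(1);

        /* widen to double before multiplying, and round exactly once */
        PG_RETURN_CASH((Cash) rint((double) f * (double) c));
    }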
Also, document that cash_div_intX operators truncate rather than round.
Per bug #14663 from Richard Pistole. Back-patch to all supported branches.
Tom Lane [Wed, 17 May 2017 16:24:19 +0000 (12:24 -0400)]
Make psql handle EOF during COPY FROM STDIN properly on all platforms.
When stdin is a terminal, it's possible to end a COPY FROM STDIN with
a keyboard EOF signal (typically control-D), and then keep on issuing
SQL commands. One would expect another COPY FROM STDIN to work as well,
but on some platforms it did not. This turns out to be because we were
not resetting the stream's feof() flag, and BSD-ish versions of fread()
and fgets() won't attempt to read more data if that's set.
The misbehavior is observed on BSDen (including macOS), but not Linux,
Windows, or SysV-ish Unixen, which makes this a portability bug not
just a missing feature.
Add a clearerr() call to fix the behavior, and improve the prompt that's
issued when copying from a TTY to mention that EOF signals work.
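The fix itself is tiny (sketch):

    /* The stream is still usable after a terminal EOF; reset the sticky
     * EOF flag so BSD-ish fread()/fgets() will read from it again. */
    if (copystream == stdin)
        clearerr(copystream);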
It's been like this forever, so back-patch to all supported branches.
Tom Lane [Tue, 16 May 2017 03:27:51 +0000 (23:27 -0400)]
In SSL tests, don't scribble on permissions of a repo file.
Modifying the permissions of a persistent file isn't really much nicer
than modifying its contents, even if git doesn't currently notice it.
Adjust the test script to make a copy and set the permissions of that
instead.
Michael Paquier, per a gripe from me. Back-patch to 9.5 where these
tests were introduced.
Tom Lane [Mon, 15 May 2017 15:33:44 +0000 (11:33 -0400)]
Fix unsafe reference into relcache in constructed CommentStmt.
The CommentStmt made by RebuildConstraintComment() has to pstrdup the
relation name, else it will contain a dangling pointer after that
relcache entry is flushed. (I'm less sure that pstrdup'ing conname
is necessary, but let's be safe.) Failure to do this leads to weird
errors or crashes, as reported by Marko Elezovic.
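The fix boils down to copying the strings out of the relcache entry
(a sketch; the field layout is approximate for the branch in question):

    CommentStmt *cmd = makeNode(CommentStmt);

    cmd->objtype = OBJECT_TABCONSTRAINT;
    cmd->objname = list_make3(
        makeString(get_namespace_name(RelationGetNamespace(relation))),
        makeString(pstrdup(RelationGetRelationName(relation))),
        makeString(pstrdup(conname)));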
Bug introduced by commit e42375fc8, so back-patch to 9.5 as that was.
Fix by David Rowley, regression test by Michael Paquier
Tom Lane [Mon, 15 May 2017 01:39:10 +0000 (21:39 -0400)]
Make stats regression test more robust in the face of parallel query.
Commit 60690a6fe attempted to fix the wait_for_stats() function in this
test so that it would wait properly if the tenk2 scans were done in
parallel workers instead of the main session (typically as a consequence of
force_parallel_mode being turned on). However, we made it test for whether
the main session's actions had been reported by looking for inserts on
'trunc_stats_test'. This is the Wrong Thing, because those aren't the last
updates we expect the main session to do. As shown by recent failures on
buildfarm member frogmouth, it's entirely likely that the trunc_stats_test
updates will be reported in a separate message from later updates, which
means there can be a window in which wait_for_stats() will exit but not all
the updates we are expecting to see will have arrived. We should test for
the last updates we're expecting, namely those on 'trunc_stats_test4'.
Unfortunately, I doubt that this explains frogmouth's failures, because
there's no reason to believe that it's running the tenk2 queries in
parallel. Still, the test is wrong on its own terms, so fix and back-patch
to 9.6 where parallel query came in.
Andrew Dunstan [Sun, 14 May 2017 05:10:18 +0000 (01:10 -0400)]
Suppress indentation from Data::Dumper in regression tests
Ultra-modern versions of the perl Data::Dumper module have apparently
changed how they indent output. Instead of trying to keep up we choose
to tell it to suppress all indentation in the hstore_plperl regression
tests.
Backpatch to 9.5 where this feature was introduced.
Andres Freund [Sat, 13 May 2017 21:47:41 +0000 (14:47 -0700)]
Avoid superfluous work for commits during logical slot creation.
Before 955a684e0401 logical decoding snapshot maintenance needed to
cope with transactions it might not have seen in their entirety. For
such transactions we had to assume they modified the catalog (could have
happened before we were watching), and thus a new snapshot had to be
built, and distributed to concurrently running transactions.
That's problematic because building a new snapshot isn't that cheap,
especially as the array of committed transactions needs to be
sorted. When creating a slot on a server with a lot of transactions,
this could make logical slot creation infeasibly expensive.
After 955a684e0401 there's no need to deal with transactions that
aren't guaranteed to be fully observable. That allows us to avoid
building snapshots for transactions that haven't modified the catalog,
even before reaching consistency.
While this isn't necessarily a bugfix, slot creation being impossible
in some production workloads is severe enough to warrant
backpatching.
Author: Andres Freund, based on a quite different patch from Petr Jelinek
Analyzed-By: Petr Jelinek
Reviewed-By: Petr Jelinek
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backpatch: 9.4-, where logical decoding has been introduced
Andres Freund [Sat, 13 May 2017 21:21:00 +0000 (14:21 -0700)]
Fix race condition leading to hanging logical slot creation.
The snapshot assembly during the creation of logical slots relied on
waiting for transactions in xl_running_xacts to end, by checking for
their commit/abort records. Unfortunately, despite locking, it is
possible to see an xl_running_xacts record listing transactions as
running that have already WAL-logged a commit/abort record, as the
locking just prevents the ProcArray from being adjusted, and the commit
record has to be logged first.
That led to either delayed or hanging snapshot creation, because
snapbuild.c would wait "forever" to see commit/abort records for some
transactions. That hang resolved only if an xl_running_xacts record
without any running transactions happened to be logged, which is far
from certain on a busy server.
It's impractical to prevent that via more heavyweight locking; the
likelihood of deadlocks and significantly increased contention would
be too big.
Instead change the initial snapshot creation to be solely based on
tracking the oldest running transaction via
xl_running_xacts->oldestRunningXid - that actually ends up
significantly simplifying the code. That has two disadvantages:
1) Because we cannot fully "trust" the contents of xl_running_xacts,
we cannot use it to build the initial snapshot. Instead we have to
wait twice for all running transactions to finish.
2) Previously, unless the race occurred, a slot could be created once
all transactions perceived as running (based on commit/abort records)
had finished; now we have to wait for the next xl_running_xacts record.
To address that, trigger logging new xl_running_xacts record from
within snapbuild.c exactly when necessary.
Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the
stupider ideas of a certain Mr Freund, so we can't change it in a
minor release. As this is going to be backpatched, we have to hack
around a bit to keep on-disk compatibility. A later commit will
rejigger that on master.
Author: Andres Freund, based on a quite different patch from Petr Jelinek
Analyzed-By: Petr Jelinek
Reviewed-By: Petr Jelinek
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backpatch: 9.4-, where logical decoding has been introduced
Tom Lane [Fri, 12 May 2017 23:05:13 +0000 (19:05 -0400)]
Avoid searching for callback functions in CallSyscacheCallbacks().
We have now grown enough registerable syscache-invalidation callback
functions that the original assumption that there would be few of them
is causing performance problems. In particular, let's fix things so that
CallSyscacheCallbacks doesn't have to search the whole array to find
which callback(s) to invoke for a given cache ID. Preserve the original
behavior that callbacks are called in order of registration, just in
case there's someplace that depends on that (which I doubt).
In support of this, export the number of syscaches from syscache.h.
People could have found that out anyway from the enum, but adding a
#define makes that much safer.
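Structurally, the change is about like this (a hedged sketch; the names
are illustrative):

    /* Per-cache list heads, plus a "link" field in each registered
     * callback, let us visit only the callbacks for this cache ID,
     * still in registration order.  Entries are 1-based; 0 = none. */
    static int syscache_callback_links[SysCacheSize];

    void
    CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
    {
        int     i = syscache_callback_links[cacheid] - 1;

        while (i >= 0)
        {
            struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;

            ccitem->function(ccitem->arg, cacheid, hashvalue);
            i = ccitem->link - 1;       /* link == 0 terminates the chain */
        }
    }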
This provides a useful additional speedup in Mathieu Fenniak's
logical-decoding test case, although we're reaching the point of
diminishing returns there. I think any further improvement will have
to come from reducing the number of cache invalidations that are
triggered in the first place. Still, we can hope that this change
gives some incremental benefit for all invalidation scenarios.
Back-patch to 9.4 where logical decoding was introduced.
Tom Lane [Fri, 12 May 2017 22:30:02 +0000 (18:30 -0400)]
Reduce initial size of RelfilenodeMapHash.
A test case provided by Mathieu Fenniak shows that hash_seq_search'ing
this hashtable can consume a very significant amount of overhead during
logical decoding, which triggers frequent cache invalidation. Testing
suggests that the actual population of the hashtable is often no more
than a few dozen entries, so we can cut the overhead just by dropping
the initial number of buckets down from 1024 --- I chose to cut it to 64.
(In situations where we do have a significant number of entries, we
shouldn't get any real penalty from doing this, as the dynahash.c code
will resize the hashtable automatically.)
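For context, the hashtable's creation looks roughly like this in recent
branches (sketch):

    HASHCTL ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(RelfilenodeMapKey);
    ctl.entrysize = sizeof(RelfilenodeMapEntry);
    ctl.hcxt = CacheMemoryContext;

    /* 64 buckets to start; dynahash resizes automatically if needed */
    RelfilenodeMapHash =
        hash_create("RelfilenodeMap cache", 64, &ctl,
                    HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);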
This gives a further factor-of-two savings in Mathieu's test case.
That may be overly optimistic for real-world benefit, as real cases
may have larger average table populations, but it's hard to see it
turning into a net negative for any workload.
Back-patch to 9.4 where relfilenodemap.c was introduced.
Tom Lane [Fri, 12 May 2017 22:17:29 +0000 (18:17 -0400)]
Avoid searching for the target catcache in CatalogCacheIdInvalidate.
A test case provided by Mathieu Fenniak shows that the initial search for
the target catcache in CatalogCacheIdInvalidate consumes a very significant
amount of overhead in cases where cache invalidation is triggered but has
little useful work to do. There is no good reason for that search to exist
at all, as the index array maintained by syscache.c allows direct lookup of
the catcache from its ID. We just need a frontend function in syscache.c,
matching the division of labor for most other cache-accessing operations.
While there's more that can be done in this area, this patch alone reduces
the runtime of Mathieu's example by 2X. We can hope that it offers some
useful benefit in other cases too, although usually cache invalidation
overhead is not such a striking fraction of the total runtime.
Back-patch to 9.4 where logical decoding was introduced. It might be
worth going further back, but presently the only case we know of where
cache invalidation is really a significant burden is in logical decoding.
Also, older branches have fewer catcaches, reducing the possible benefit.
(Note: although this nominally changes catcache's API, we have always
documented CatalogCacheIdInvalidate as a private function, so I would
have little sympathy for an external module calling it directly. So
backpatching should be fine.)
Andrew Dunstan [Fri, 12 May 2017 15:21:20 +0000 (11:21 -0400)]
Honor PROVE_FLAGS environment setting
On MSVC builds and on back branches that means removing the hardcoded
--verbose setting. On master for Unix that means removing the empty
setting in the global Makefile so that the value can be acquired from
the environment as well as from the make arguments.
Andrew Dunstan [Fri, 12 May 2017 14:17:54 +0000 (10:17 -0400)]
Add libxml2 include path for MSVC builds
On Unix this path is detected via the use of xml2-config, but that's not
available on Windows. This means that users building with libxml2 will
no longer need to move things around from the standard libxml2
installation for MSVC builds.
Tom Lane [Thu, 11 May 2017 18:51:21 +0000 (14:51 -0400)]
Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions.
Increase from the historical value of 32 to 64. We are up to 31 callers
of CacheRegisterSyscacheCallback() in HEAD, so if they were all to be
exercised in one process that would leave only one slot for add-on modules.
It's probably not possible for that to happen, but still we clearly need
more daylight here. (At some point it might be worth making the array
dynamically resizable; but since we've never heard a complaint of "out of
syscache_callback_list slots" happening in the field, I doubt it's worth
it yet.)
Back-patch as far as 9.4, which is where we increased the companion limit
MAX_RELCACHE_CALLBACKS (cf commit f01d1ae3a). It's not as urgent in
released branches, which have only a couple dozen call sites in core, but
it still seems that somebody might hit the limit before these branches die.
Tom Lane [Mon, 8 May 2017 19:02:57 +0000 (15:02 -0400)]
Further patch rangetypes_selfuncs.c's statistics slot management.
Values in a STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM slot are float8,
not of the type of the column the statistics are for.
This bug is at least partly the fault of sloppy specification comments
for get_attstatsslot()/free_attstatsslot(): the type OID they want is that
of the stavalues entries, not of the underlying column. (I double-checked
other callers and they seem to get this right.) Adjust the comments to be
more correct.
Noah Misch [Mon, 8 May 2017 14:24:24 +0000 (07:24 -0700)]
Match pg_user_mappings limits to information_schema.user_mapping_options.
Both views replace the umoptions field with NULL when the user does not
meet qualifications to see it. They used different qualifications, and
pg_user_mappings documented qualifications did not match its implemented
qualifications. Make its documentation and implementation match those
of user_mapping_options. One might argue for stronger qualifications,
but these have long, documented tenure. pg_user_mappings has always
exhibited this problem, so back-patch to 9.2 (all supported versions).
Michael Paquier and Feike Steenbergen. Reviewed by Jeff Janes.
Reported by Andrew Wheelwright.
Noah Misch [Mon, 8 May 2017 14:24:24 +0000 (07:24 -0700)]
Restore PGREQUIRESSL recognition in libpq.
Commit 65c3bf19fd3e1f6a591618e92eb4c54d0b217564 moved handling of the
(by then already deprecated) requiressl parameter into
conninfo_storeval(). However, the PGREQUIRESSL environment variable was
lost in the change, resulting in a potentially silent acceptance of a
non-SSL connection even when it is set. Its documentation remained.
Restore its implementation.
Also amend the documentation to mark PGREQUIRESSL as deprecated for
those not following the link to requiressl. Back-patch to 9.3, where
commit 65c3bf1 first appeared.
Behavior has been more complex when the user provides both deprecated
and non-deprecated settings. Before commit 65c3bf1, libpq operated
according to the first of these found:
requiressl=1
PGREQUIRESSL=1
sslmode=*
PGSSLMODE=*
(Note requiressl=0 didn't override sslmode=*; it would only suppress
PGREQUIRESSL=1 or a previous requiressl=1. PGREQUIRESSL=0 had no effect
whatsoever.) Starting with commit 65c3bf1, libpq ignored PGREQUIRESSL,
and order of precedence changed to this:
last of requiressl=* or sslmode=*
PGSSLMODE=*
Starting now, adopt the following order of precedence:
last of requiressl=* or sslmode=*
PGSSLMODE=*
PGREQUIRESSL=1
This retains the 65c3bf1 behavior for connection strings that contain
both requiressl=* and sslmode=*. It retains the 65c3bf1 change that
either connection string option overrides both environment variables.
For the first time, PGSSLMODE has precedence over PGREQUIRESSL; this
avoids reducing security of "PGREQUIRESSL=1 PGSSLMODE=verify-full"
configurations originating under v9.3 and later.
Add security checks to selectivity estimation functions
Some selectivity estimation functions run user-supplied operators over
data obtained from pg_statistic without security checks, which allows
those operators to leak pg_statistic data without having privileges on
the underlying tables. Fix by checking that one of the following is
satisfied: (1) the user has table or column privileges on the table
underlying the pg_statistic data, or (2) the function implementing the
user-supplied operator is leak-proof. If neither is satisfied, planning
will proceed as if there are no statistics available.
At least one of these is satisfied in most cases in practice. The only
situations that are negatively impacted are user-defined or
not-leak-proof operators on a security-barrier view.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Peter Eisentraut <peter_e@gmx.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Tom Lane [Sun, 7 May 2017 16:33:12 +0000 (12:33 -0400)]
Guard against null t->tm_zone in strftime.c.
The upstream IANA code does not guard against null TM_ZONE pointers in this
function, but in our code there is such a check in the other pre-existing
use of t->tm_zone. We do have some places that set pg_tm.tm_zone to NULL.
I'm not entirely sure it's possible to reach strftime with such a value,
but I'm not sure it isn't either, so be safe.
Tom Lane [Sun, 7 May 2017 15:57:41 +0000 (11:57 -0400)]
Install the "posixrules" timezone link in MSVC builds.
Somehow, we'd missed ever doing this. The consequences aren't too
severe: basically, the timezone library would fall back on its hardwired
notion of the DST transition dates to use for a POSIX-style zone name,
rather than obeying US/Eastern which is the intended behavior. The net
effect would only be to obey current US DST law further back than it
ought to apply; so it's not real surprising that nobody noticed.
Tom Lane [Sun, 7 May 2017 15:34:31 +0000 (11:34 -0400)]
Restore fullname[] contents before falling through in pg_open_tzfile().
Fix oversight in commit af2c5aa88: if the shortcut open() doesn't work,
we need to reset fullname[] to be just the name of the toplevel tzdata
directory before we fall through into the pre-existing code. This failed
to be exposed in my (tgl's) testing because the fall-through path is
actually never taken under normal circumstances.
Stephen Frost [Sun, 7 May 2017 02:58:22 +0000 (22:58 -0400)]
pg_dump: Don't leak memory in buildDefaultACLCommands()
buildDefaultACLCommands() didn't destroy the string buffer created in
certain cases, leading to a memory leak. Fix by destroying the buffer
before returning from the function.
Spotted by Coverity.
Author: Michael Paquier
Back-patch to 9.6 where buildDefaultACLCommands() was added.
Stephen Frost [Sun, 7 May 2017 01:46:41 +0000 (21:46 -0400)]
RLS: Fix ALL vs. SELECT+UPDATE policy usage
When we add the SELECT-privilege based policies to the RLS with check
options (such as for an UPDATE statement, or when we have INSERT ...
RETURNING), we need to be sure and use the 'USING' case if the policy is
actually an 'ALL' policy (which could have both a USING clause and an
independent WITH CHECK clause).
This could result in policies acting differently when built using ALL
(when the ALL had both USING and WITH CHECK clauses) and when building
the policies independently as SELECT and UPDATE policies.
Fix this by adding an explicit boolean to add_with_check_options() to
indicate when the USING policy should be used, even if the policy has
both USING and WITH CHECK policies on it.
Alvaro Herrera [Fri, 5 May 2017 15:05:34 +0000 (12:05 -0300)]
Allow MSVC to build with Tcl 8.6.
Commit eaba54c20c5 added support for Tcl 8.6 for configure-supported
platforms after verifying that pltcl works without further changes, but
the MSVC tooling wasn't updated accordingly. Update MSVC to match,
restructuring the code to avoid duplicating the logic for every Tcl
version supported.
Backpatch to all live branches, like eaba54c20c5. In 9.4 and previous,
change the patch to use backslashes rather than forward, as in the rest
of the file.
Reported by Paresh More, who also tested the patch I provided.
Discussion: https://postgr.es/m/CAAgiCNGVw3ssBtSi3ZNstrz5k00ax=UV+_ZEHUeW_LMSGL2sew@mail.gmail.com
Give nicer error message when connecting to a v10 server requiring SCRAM.
This is just to give the user a hint that they need to upgrade, if they try
to connect to a v10 server that uses SCRAM authentication, with an older
client.
Tom Lane [Thu, 4 May 2017 17:59:13 +0000 (13:59 -0400)]
Fix pfree-of-already-freed-tuple when rescanning a GiST index-only scan.
GiST's getNextNearest() function attempts to pfree the previously-returned
tuple if any (that is, scan->xs_hitup in HEAD, or scan->xs_itup in older
branches). However, if we are rescanning a plan node after ending a
previous scan early, those tuple pointers could be pointing to garbage,
because they would be pointing into the scan's pageDataCxt or queueCxt
which has been reset. In a debug build this reliably results in a crash,
although I think it might sometimes accidentally fail to fail in
production builds.
To fix, clear the pointer field anyplace we reset a context it might
be pointing into. This may be overkill --- I think probably only the
queueCxt case is involved in this bug, so that resetting in gistrescan()
would be sufficient --- but dangling pointers are generally bad news,
so let's avoid them.
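That is, wherever one of those contexts is reset, also forget the
pointer (sketch):

    MemoryContextReset(so->queueCxt);
    scan->xs_hitup = NULL;      /* xs_itup in pre-9.6 branches; don't let
                                 * getNextNearest() pfree into the freshly
                                 * reset context */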
Another plausible answer might be to just not bother with the pfree in
getNextNearest(). The reconstructed tuples would go away anyway in the
context resets, and I'm far from convinced that freeing them a bit earlier
really saves anything meaningful. I'll stick with the original logic in
this patch, but if we find more problems in the same area we should
consider that approach.
Per bug #14641 from Denis Smirnov. Back-patch to 9.5 where this
logic was introduced.
Tom Lane [Wed, 3 May 2017 23:37:01 +0000 (19:37 -0400)]
Remove useless and rather expensive stanza in matview regression test.
This removes a test case added by commit b69ec7cc9, which was intended
to exercise a corner case involving the rule used at that time that
materialized views were unpopulated iff they had physical size zero.
We got rid of that rule very shortly later, in commit 1d6c72a55, but
kept the test case. However, because the case now asks what VACUUM
will do to a zero-sized physical file, it would be pretty surprising
if the answer were ever anything but "nothing" ... and if things were
indeed that broken, surely we'd find it out from other tests. Since
the test involves a table that's fairly large by regression-test
standards (100K rows), it's quite slow to run. Dropping it should
save some buildfarm cycles, so let's do that.
Tom Lane [Wed, 3 May 2017 01:50:35 +0000 (21:50 -0400)]
Improve performance of timezone loading, especially pg_timezone_names view.
tzparse() would attempt to load the "posixrules" timezone database file on
each call. That might seem like it would only be an issue when selecting a
POSIX-style zone name rather than a zone defined in the timezone database,
but it turns out that each zone definition file contains a POSIX-style zone
string and tzload() will call tzparse() to parse that. Thus, when scanning
the whole timezone file tree as we do in the pg_timezone_names view,
"posixrules" was read repetitively for each zone definition file. Fix
that by caching the file on first use within any given process. (We cache
other zone definitions for the life of the process, so there seems little
reason not to cache this one as well.) This probably won't help much in
processes that never run pg_timezone_names, but even one additional SET
of the timezone GUC would come out ahead.
An even worse problem for pg_timezone_names is that pg_open_tzfile()
has an inefficient way of identifying the canonical case of a zone name:
it basically re-descends the directory tree to the zone file. That's not
awful for an individual "SET timezone" operation, but it's pretty horrid
when we're inspecting every zone in the database. And it's pointless too
because we already know the canonical spelling, having just read it from
the filesystem. Fix by teaching pg_open_tzfile() to avoid the directory
search if it's not asked for the canonical name, and backfilling the
proper result in pg_tzenumerate_next().
In combination these changes seem to make the pg_timezone_names view
about 3x faster to read, for me. Since a scan of pg_timezone_names
has up to now been one of the slowest queries in the regression tests,
this should help some little bit for buildfarm cycle times.
Back-patch to all supported branches, not so much because it's likely
that users will care much about the view's performance as because
tracking changes in the upstream IANA timezone code is really painful
if we don't keep all the branches in sync.
Tom Lane [Tue, 2 May 2017 22:05:54 +0000 (18:05 -0400)]
Ensure commands in extension scripts see the results of preceding DDL.
Due to a missing CommandCounterIncrement() call, parsing of a non-utility
command in an extension script would not see the effects of the immediately
preceding DDL command, unless that command's execution ends with
CommandCounterIncrement() internally ... which some do but many don't.
Report by Philippe Beaudoin, diagnosis by Julien Rouhaud.
Rather remarkably, this bug has evaded detection since extensions were
invented, so back-patch to all supported branches.
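The shape of the fix can be sketched in backend C as follows; this is
illustrative rather than the exact code (the real loop lives in
execute_sql_string() in src/backend/commands/extension.c, and
execute_one_statement() is a hypothetical helper):

    #include "postgres.h"
    #include "access/xact.h"        /* CommandCounterIncrement() */
    #include "nodes/pg_list.h"

    extern void execute_one_statement(Node *stmt);      /* hypothetical */

    static void
    execute_script_statements(List *raw_parsetree_list)
    {
        ListCell   *lc;

        foreach(lc, raw_parsetree_list)
        {
            Node       *stmt = (Node *) lfirst(lc);

            /* The fix: guarantee that parsing of this command sees the
             * effects of all preceding commands, whether or not their
             * execution bumped the counter internally. */
            CommandCounterIncrement();

            execute_one_statement(stmt);
        }
    }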
Tom Lane [Mon, 1 May 2017 15:52:59 +0000 (11:52 -0400)]
Update time zone data files to tzdata release 2017b.
DST law changes in Chile, Haiti, and Mongolia. Historical corrections for
Ecuador, Kazakhstan, Liberia, and Spain.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in South America, the Pacific and Indian oceans, and some Asian and Middle
Eastern zones. I kept these abbreviations in the tznames/ data files,
however, so that we will still accept them for input. (We may want to
start trimming those files someday, but I think we should wait for the
upstream dust to settle before deciding what to do.)
In passing, add MESZ (Mitteleuropaeische Sommerzeit) to the tznames lists;
since we accept MEZ (Mitteleuropaeische Zeit) it seems rather strange not
to take the other one. And fix some incorrect, or at least obsolete,
comments that certain abbreviations are not traceable to the IANA data.
Andrew Dunstan [Mon, 1 May 2017 14:12:02 +0000 (10:12 -0400)]
Allow vcregress.pl to run an arbitrary TAP test set
Previously, only running the bin checks as a single step was provided
for. Now these tests can be run individually, as well as tests in
other locations (e.g. src/test/recovery).
Also provide for suppressing unnecessary temp installs by setting the
NO_TEMP_INSTALL environment variable just as the Makefiles do.
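Assuming the new target is spelled taptest (the commit message doesn't
show the syntax), an invocation might look like:

    REM Reuse an existing install and run a single TAP suite:
    set NO_TEMP_INSTALL=1
    perl vcregress.pl taptest src/test/recovery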
Tom Lane [Sun, 30 Apr 2017 19:13:51 +0000 (15:13 -0400)]
Sync our copy of the timezone library with IANA release tzcode2017b.
zic no longer mishandles some transitions in January 2038 when it
attempts to work around Qt bug 53071. This fixes a bug affecting
Pacific/Tongatapu that was introduced in zic 2016e. localtime.c
now contains a workaround, useful when loading a file generated by
a buggy zic.
There are assorted cosmetic changes as well, notably relocation
of a bunch of #defines.
Robert Haas [Fri, 28 Apr 2017 18:48:38 +0000 (14:48 -0400)]
Fix VALIDATE CONSTRAINT to consider NO INHERIT attribute.
Currently, trying to validate a NO INHERIT constraint on the parent will
search for the constraint in child tables (where it is not supposed to
exist), wrongly causing a "constraint does not exist" error.
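A hedged sketch of the fix's shape: connoinherit is the real
pg_constraint column, but validate_constraint_on_children() is a made-up
name for the recursion, which in reality happens inline in
ATExecValidateConstraint() in src/backend/commands/tablecmds.c:

    Form_pg_constraint con = (Form_pg_constraint) GETSTRUCT(tuple);

    if (!con->connoinherit)
    {
        /* Only inheritable constraints exist on children, so recurse and
         * validate there; a NO INHERIT constraint must be validated on
         * the parent alone. */
        validate_constraint_on_children(rel, constrName);
    }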
Andres Freund [Thu, 27 Apr 2017 22:28:24 +0000 (15:28 -0700)]
Don't use on-disk snapshots for exported logical decoding snapshot.
Logical decoding stores historical snapshots on disk, so that logical
decoding can restart without having to reconstruct a snapshot from
scratch (for which the resources are not guaranteed to be present
anymore). These serialized snapshots were also used when creating a
new slot via the walsender interface, which can export a "full"
snapshot (i.e. one that can read all tables, not just catalog ones).
The problem is that the serialized snapshots are only useful for
catalogs and not for normal user tables. Thus the use of such a
serialized snapshot could result in an inconsistent snapshot being
exported, which could lead to queries returning wrong data. This
would only happen if a new logical slot was created while another
logical slot already existed.
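Conceptually, the fix is to bypass the on-disk shortcut whenever a full
snapshot is to be exported. A hedged sketch, with names modeled on
snapbuild.c but the surrounding logic abbreviated:

    if (!builder->building_full_snapshot &&
        SnapBuildRestore(builder, lsn))
    {
        /* Restored a serialized snapshot; that is fine for catalog-only
         * decoding, but this path must never be taken when the snapshot
         * will be exported for reading user tables. */
        return false;
    }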
Author: Petr Jelinek
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
Tom Lane [Thu, 27 Apr 2017 21:13:29 +0000 (17:13 -0400)]
Cope with glibc too old to have epoll_create1().
Commit fa31b6f4e supposed that we didn't have to worry about that
anymore, but it seems that RHEL5 is like that, and that's still
a supported platform. Put back the prior coding under an #ifdef,
adding an explicit fcntl() to retain the desired CLOEXEC property.
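The fallback pattern looks roughly like this minimal sketch (not the
exact latch.c code):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    static int
    create_epoll_fd(void)
    {
        int         fd;

    #ifdef EPOLL_CLOEXEC
        fd = epoll_create1(EPOLL_CLOEXEC);
    #else
        /* Ancient glibc (e.g. RHEL5) lacks epoll_create1(), so create the
         * fd the old way and set close-on-exec explicitly.  The size
         * argument is ignored by modern kernels but must be positive. */
        fd = epoll_create(1024);
        if (fd >= 0 && fcntl(fd, F_SETFD, FD_CLOEXEC) == -1)
        {
            close(fd);
            fd = -1;
        }
    #endif
        return fd;
    }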
Andres Freund [Mon, 24 Apr 2017 03:41:29 +0000 (20:41 -0700)]
Preserve required !catalog tuples while computing initial decoding snapshot.
The logical decoding machinery already preserved all the required
catalog tuples, which is sufficient in the course of normal logical
decoding, but did not guarantee that non-catalog tuples were preserved
during computation of the initial snapshot when creating a slot over
the replication protocol.
This could cause a corrupted initial snapshot to be exported. The
time window for issues is usually not terribly large, but on a busy
server it's perfectly possible to hit it. Ongoing decoding is not
affected by this bug.
To avoid increased overhead for the SQL API, only retain additional
tuples when a logical slot is being created over the replication
protocol. To do so this commit changes the signature of
CreateInitDecodingContext(), but it seems unlikely that it's being
used in an extension, so that's probably ok.
In a drive-by fix, fix handling of
ReplicationSlotsComputeRequiredXmin's already_locked argument, which
should only apply to ProcArrayLock, not ReplicationSlotControlLock.
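A hedged sketch of a call from the replication-protocol path after the
signature change; the callback names follow walsender conventions, but
the exact argument list here is illustrative:

    ctx = CreateInitDecodingContext(cmd->plugin,
                                    NIL,
                                    true,       /* need_full_snapshot */
                                    logical_read_xlog_page,
                                    WalSndPrepareWrite,
                                    WalSndWriteData);

The SQL-level API would pass false instead, avoiding the extra tuple
retention there.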
Reported-By: Erik Rijkers
Analyzed-By: Petr Jelinek
Author: Petr Jelinek, heavily editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/9a897b86-46e1-9915-ee4c-da02e4ff6a95@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
Tom Lane [Thu, 27 Apr 2017 19:07:36 +0000 (15:07 -0400)]
Make latch.c more paranoid about child-process cases.
Although the postmaster doesn't currently create a self-pipe or any
latches, there's discussion of it doing so in future. It's also
conceivable that a shared_preload_libraries extension would try to
create such a thing in the postmaster process today. In that case
the self-pipe FDs would be inherited by forked child processes.
latch.c was entirely unprepared for such a case and could suffer an
assertion failure or, worse, try to use the inherited pipe if somebody
called WaitLatch without having called InitializeLatchSupport in that
process. Make it keep track of whether InitializeLatchSupport has been
called in the *current* process, and do the right thing if state has
been inherited from a parent.
Apply FD_CLOEXEC to file descriptors created in latch.c (the self-pipe,
as well as epoll event sets). This ensures that child processes spawned
in backends, the archiver, etc cannot accidentally or intentionally mess
with these FDs. It also ensures that we end up with the right state
for the self-pipe in EXEC_BACKEND processes, which otherwise wouldn't
know to close the postmaster's self-pipe FDs.
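A minimal sketch of the owner-tracking idea; selfpipe_owner_pid is an
illustrative name for the state the commit describes, and the
nonblocking/CLOEXEC setup and error handling of the real code are elided:

    #include <unistd.h>
    #include <sys/types.h>

    static int      selfpipe_readfd = -1;
    static int      selfpipe_writefd = -1;
    static pid_t    selfpipe_owner_pid = 0;     /* 0 = not initialized */

    void
    InitializeLatchSupport(void)
    {
        int         pipefd[2];

        if (selfpipe_owner_pid != 0 && selfpipe_owner_pid != getpid())
        {
            /* FDs were inherited from the parent process: they belong to
             * the parent's latches, so drop them and build our own pipe. */
            close(selfpipe_readfd);
            close(selfpipe_writefd);
            selfpipe_owner_pid = 0;
        }

        if (selfpipe_owner_pid == 0 && pipe(pipefd) == 0)
        {
            selfpipe_readfd = pipefd[0];
            selfpipe_writefd = pipefd[1];
            selfpipe_owner_pid = getpid();
            /* the real code also sets O_NONBLOCK and FD_CLOEXEC here */
        }
    }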
Back-patch to 9.6, mainly to keep latch.c looking similar in all branches
it exists in.
Tom Lane [Wed, 26 Apr 2017 20:17:29 +0000 (16:17 -0400)]
Allow multiple bgworkers to be launched per postmaster iteration.
Previously, maybe_start_bgworker() would launch at most one bgworker
process per call, on the grounds that the postmaster might otherwise
neglect its other duties for too long. However, that seems overly
conservative, especially since bad effects only become obvious when
many hundreds of bgworkers need to be launched at once. On the other
side of the coin is that the existing logic could result in substantial
delay of bgworker launches, because ServerLoop isn't guaranteed to
iterate immediately after a signal arrives. (My attempt to fix that
by using pselect(2) encountered too many portability question marks,
and in any case could not help on platforms without pselect().)
One could also question the wisdom of using an O(N^2) processing
method if the system is intended to support so many bgworkers.
As a compromise, allow that function to launch up to 100 bgworkers
per call (and in consequence, rename it to maybe_start_bgworkers).
This will allow any normal parallel-query request for workers
to be satisfied immediately during sigusr1_handler, avoiding the
question of whether ServerLoop will be able to launch more promptly.
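The cap itself is just a loop bound. A hedged sketch, where
MAX_BGWORKERS_TO_LAUNCH matches the commit's figure and the two helper
functions are hypothetical:

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_BGWORKERS_TO_LAUNCH 100

    typedef struct BgWorkerSlot BgWorkerSlot;           /* opaque here */
    extern BgWorkerSlot *next_runnable_worker(void);    /* hypothetical */
    extern bool start_one_bgworker(BgWorkerSlot *w);    /* hypothetical */

    static void
    maybe_start_bgworkers(void)
    {
        int         num_launched = 0;

        while (num_launched < MAX_BGWORKERS_TO_LAUNCH)
        {
            BgWorkerSlot *w = next_runnable_worker();

            if (w == NULL)
                break;          /* nothing else is due to start now */
            if (start_one_bgworker(w))
                num_launched++;
        }
        /* Any still-pending workers are picked up on a later postmaster
         * iteration, bounding the time spent away from other duties. */
    }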
There is talk of rewriting the postmaster to use a WaitEventSet to
avoid the signal-response-delay problem, but I'd argue that this change
should be kept even after that happens (if it ever does).
Backpatch to 9.6 where parallel query was added. The issue exists
before that, but previous uses of bgworkers typically aren't as
sensitive to how quickly they get launched.
Peter Eisentraut [Tue, 25 Apr 2017 14:01:59 +0000 (10:01 -0400)]
postgres_fdw: Fix join push down with extensions
Objects in an extension are shippable to a foreign server if the
extension is part of the foreign server definition's shippable
extensions list. But this was not properly considered in some cases
when checking whether a join condition can be pushed to a foreign server
and the join condition uses an object from a shippable extension. So
the join would never be pushed down in those cases.
The list of shippable extensions therefore needs to be made available
in the fpinfo of the relation being considered for pushdown before any
expressions are assessed for shippability. Fix foreign_join_ok() to
do that for join relations.
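A hedged sketch of the fix's shape inside foreign_join_ok(); fpinfo,
fpinfo_o, and shippable_extensions follow postgres_fdw's naming, with
the surrounding code abbreviated:

    /* Carry the outer relation's server and shippable-extensions info
     * into the joinrel's fpinfo before testing any clauses; otherwise
     * is_shippable() has nothing to consult. */
    fpinfo->server = fpinfo_o->server;
    fpinfo->shippable_extensions = fpinfo_o->shippable_extensions;

    foreach(lc, joinclauses)
    {
        RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);

        if (!is_foreign_expr(root, joinrel, rinfo->clause))
            return false;       /* clause can't be evaluated remotely */
    }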
David Rowley and Ashutosh Bapat, per report from David Rowley
Buildfarm results suggest that some platforms have versions of pselect(2)
that are not merely non-atomic, but flat out non-functional. Revert the
use-pselect patch to confirm this diagnosis (and exclude the no-SA_RESTART
patch as the source of trouble). If it's so, we should probably look into
blacklisting specific platforms that have broken pselect.