Tom Lane [Sat, 8 Jul 2017 16:42:25 +0000 (12:42 -0400)]
Avoid unreferenced-function warning on low-functionality platforms.
On platforms lacking both locale_t and ICU, collationcmds.c failed
to make any use of its static function is_all_ascii(), thus probably
drawing a compiler warning. Oversight in my commit ddb5fdc06.
Per buildfarm member gaur.
Dean Rasheed [Thu, 6 Jul 2017 11:46:08 +0000 (12:46 +0100)]
Clarify the contract of partition_rbound_cmp().
partition_rbound_cmp() is intended to compare range partition bounds
in a way such that if all the bound values are equal but one is an
upper bound and one is a lower bound, the upper bound is treated as
smaller than the lower bound. This particular ordering is required by
RelationBuildPartitionDesc() when building the PartitionBoundInfoData,
so that it can consistently keep only the upper bounds when upper and
lower bounds coincide.
Update the function comment to make that clearer.
Also, fix a (currently unreachable) corner-case bug -- if the bound
values coincide and they contain unbounded values, fall through to the
lower-vs-upper comparison code, rather than immediately returning
0. Currently it is not possible to define coincident upper and lower
bounds containing unbounded columns, but that may change in the
future, so code defensively.
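For illustration, here is a minimal standalone sketch of the tie-break
rule described above (names and types are simplified; this is not the
actual partition_rbound_cmp() signature):

    #include <stdbool.h>

    /*
     * Sketch of the tie-break: when all bound values compare equal, an
     * upper bound sorts before (is "smaller" than) a lower bound, so
     * RelationBuildPartitionDesc can consistently keep only the upper
     * bound when the two coincide.  "cmpval" is the result of comparing
     * the bound values themselves.
     */
    static int
    rbound_tiebreak(int cmpval, bool this_is_lower, bool other_is_lower)
    {
        if (cmpval == 0 && this_is_lower != other_is_lower)
            return this_is_lower ? 1 : -1;  /* lower sorts after upper */
        return cmpval;
    }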
Dean Rasheed [Thu, 6 Jul 2017 08:58:06 +0000 (09:58 +0100)]
Simplify the logic checking new range partition bounds.
The previous logic, whilst not actually wrong, was overly complex and
involved doing two binary searches, where only one was really
necessary. This simplifies that logic and improves the comments.
One visible change is that if the new partition overlaps multiple
existing partitions, the error message now always reports the overlap
with the first existing partition (the one with the lowest
bounds). The old code would sometimes report the clash with the first
partition and sometimes with the last one.
Original patch idea from Amit Langote, substantially rewritten by me.
Tom Lane [Thu, 6 Jul 2017 03:59:20 +0000 (23:59 -0400)]
Fix another race-condition-ish issue in recovery/t/001_stream_rep.pl.
Buildfarm members hornet and sungazer have shown multiple instances of
"Failed test 'xmin of non-cascaded slot with hs feedback has changed'".
The reason seems to be that the test is checking the current xmin of the
master server's replication slot against a past xmin of the first slave
server's replication slot. Even though the latter slot is downstream of
the former, it's possible for its reported xmin to be ahead of the former's
reported xmin, because those numbers are updated whenever the respective
downstream walreceiver feels like it (see logic in WalReceiverMain).
Instrumenting this test shows that indeed the slave slot's xmin does often
advance before the master's does, especially if an autovacuum transaction
manages to occur during the relevant window. If we happen to capture such
an advanced xmin as $xmin, then the subsequent wait_slot_xmins call can
fall through before the master's xmin has advanced at all, and then if it
advances before the get_slot_xmins call, we can get the observed failure.
Yeah, that's a bit of a long chain of deduction, but it's hard to explain
any other way how the test can get past an "xmin <> '$xmin'" check only
to have the next query find that xmin does equal $xmin.
Fix by keeping separate images of the master and slave slots' xmins
and testing their has-xmin-advanced conditions independently.
pg_ctl: Make failure to complete operation a nonzero exit
If an operation being waited for does not complete within the timeout,
then exit with a nonzero exit status. This was previously handled
inconsistently.
Peter Eisentraut [Thu, 22 Jun 2017 02:57:23 +0000 (22:57 -0400)]
Fix output of char node fields
WRITE_CHAR_FIELD() didn't do any escaping, so that for example a zero
byte would cause the whole output string to be truncated. To fix, pass
the char through outToken(), so it is escaped like a string. Adjust the
reading side to handle this.
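A hedged sketch of the approach, assuming a helper living alongside
outToken() in outfuncs.c (helper name is illustrative, not necessarily
the committed code):

    /*
     * Sketch only: route a char field through the same escaping used
     * for string tokens, so that e.g. a zero byte is emitted as an
     * escaped empty token rather than truncating the output string.
     */
    static void
    out_char_field(StringInfo str, char c)
    {
        char        in[2];

        in[0] = c;
        in[1] = '\0';
        outToken(str, in);      /* escape it like a string */
    }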
Update the documentation for \pset to mention
columns|linestyle|pager_min_lines. Add various mentions of \pset
command equivalences, which were previously noted inconsistently.

Forbid gen_random_uuid() with --disable-strong-random
Previously, gen_random_uuid() would fall back to a weak random number
generator, unlike gen_random_bytes() which would just fail. And this was
not made very clear in the docs. For consistency, also make
gen_random_uuid() fail outright, if compiled with --disable-strong-random.
Re-word the error message you get with --disable-strong-random. It is also
used by pgp functions that require random salts, and now also
gen_random_uuid().
Tom Lane [Mon, 3 Jul 2017 02:01:19 +0000 (22:01 -0400)]
Fix race condition in recovery/t/009_twophase.pl test.
Since reducing pg_ctl's reaction time in commit c61559ec3, some
slower buildfarm members have shown erratic failures in this test.
The reason turns out to be that the test assumes synchronous
replication (because it does not provide any lag time for a commit
to replicate before shutting down the servers), but it had only
enabled sync rep in one direction. The observed symptoms correspond
to failure to replicate the last committed transaction in the other
direction, which can be expected to happen if the shutdown command
is issued soon enough and we are providing no synchronous-commit
guarantees.
Fix that, and add a bit more paranoid state checking at the bottom
of the script.
Tom Lane [Sun, 2 Jul 2017 21:22:09 +0000 (17:22 -0400)]
Fix bug in PostgresNode::query_hash's split() call.
By default, Perl's split() function drops trailing empty fields,
which is not what we want here. Oversight in commit fb093e4cb.
We'd managed to miss it thus far thanks to the very limited usage
of this function.
Tom Lane [Sun, 2 Jul 2017 19:27:29 +0000 (15:27 -0400)]
Try to improve readability of recovery/t/009_twophase.pl test.
The original coding here was very confusing, because it named the
two servers it set up "master" and "slave" even though it swapped
their replication roles multiple times. At any given point in the
script it was very unobvious whether "$node_master" actually referred
to the server named "master" or the other one. Instead, pick arbitrary
names for the two servers --- I used "london" and "paris" --- and
distinguish those permanent names from the nonce references $cur_master
and $cur_slave. Add logging to help distinguish which is which at
any given point. Also, use distinct data and transaction names to
make all the prepared transactions easily distinguishable in the
postmaster logs. (There was one place where we intentionally tested
that the server could cope with re-use of a transaction name, but
it seems like one place is sufficient for that purpose.)
Also, add checks at the end to make sure that all the transactions
that were supposed to be committed did survive.
Tom Lane [Sun, 2 Jul 2017 18:03:41 +0000 (14:03 -0400)]
Improve TAP test function PostgresNode::poll_query_until().
Add an optional "expected" argument to override the default assumption
that we're waiting for the query to return "t". This allows replacing
a handwritten polling loop in recovery/t/007_sync_rep.pl with use of
poll_query_until(); AFAICS that's the only remaining ad-hoc polling
loop in our TAP tests.
Change poll_query_until() to probe ten times per second not once per
second. Like some similar changes I've been making recently, the
one-second interval seems to be rooted in ancient traditions rather
than the actual likely wait duration on modern machines. I'd consider
reducing it further if there were a convenient way to spawn just one
psql for the whole loop rather than one per probe attempt.
doc: Document that logical replication supports synchronous replication
Update the documentation a bit to note that logical replication, as
well as other and third-party replication clients, can participate in
synchronous replication.
Peter Eisentraut [Sat, 24 Jun 2017 13:39:24 +0000 (09:39 -0400)]
Refine memory allocation in ICU conversions
The simple calculations done to estimate the size of the output buffers
for ucnv_fromUChars() and ucnv_toUChars() could overflow int32_t for
large strings. To avoid that, go the long way and run the function
first without an output buffer to get the correct output buffer size
requirement.
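As an illustration of that "preflighting" pattern (a standard ICU
convention; this is a standalone sketch, not the committed code, and
error handling is abbreviated):

    #include <stdlib.h>
    #include <unicode/ucnv.h>

    /*
     * Sketch of ICU preflighting: call the conversion once with no
     * output buffer to learn the exact required length, then allocate
     * that much and convert for real.
     */
    static UChar *
    to_uchar_preflight(UConverter *conv, const char *src, int32_t srclen,
                       int32_t *ulen_out)
    {
        UErrorCode  status = U_ZERO_ERROR;
        int32_t     ulen;
        UChar      *dest;

        ulen = ucnv_toUChars(conv, NULL, 0, src, srclen, &status);
        if (status != U_BUFFER_OVERFLOW_ERROR && U_FAILURE(status))
            return NULL;

        status = U_ZERO_ERROR;
        dest = malloc((ulen + 1) * sizeof(UChar));
        if (dest == NULL)
            return NULL;
        ucnv_toUChars(conv, dest, ulen + 1, src, srclen, &status);
        if (U_FAILURE(status))
        {
            free(dest);
            return NULL;
        }
        *ulen_out = ulen;
        return dest;
    }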
Tom Lane [Sat, 1 Jul 2017 18:25:09 +0000 (14:25 -0400)]
Clean up misuse and nonuse of poll_query_until().
Several callers of PostgresNode::poll_query_until() neglected to check
for failure; I do not think that's optional. Also, rewrite one place
that had reinvented poll_query_until() for no very good reason.
Tom Lane [Sat, 1 Jul 2017 16:15:51 +0000 (12:15 -0400)]
Reduce delay for last logicalrep feedback message when master goes idle.
The regression tests contain numerous cases where we do some activity on a
master server and then wait till the slave has ack'd flushing its copy of
that transaction. Because WAL flush on the slave is asynchronous to the
logicalrep worker process, the worker cannot send such a feedback message
during the LogicalRepApplyLoop iteration where it processes the last data
from the master. In the previous coding, the feedback message would come
out only when the loop's WaitLatchOrSocket call returned WL_TIMEOUT. That
requires one full second of delay (NAPTIME_PER_CYCLE); and to add insult
to injury, it could take more than that if the WaitLatchOrSocket was
interrupted a few times by latch-setting events.
In reality we can expect the slave's walwriter process to have flushed the
WAL data after, more or less, WalWriterDelay (typically 200ms). Hence,
if there are unacked transactions pending, make the wait delay only that
long rather than the full NAPTIME_PER_CYCLE. Also, move one of the
send_feedback() calls into the loop main line, so that we'll check for the
need to send feedback even if we were woken by a latch event and not either
socket data or timeout.
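Schematically, the timeout selection can be thought of as below (a
minimal sketch; the constant value and names are illustrative, not the
actual worker.c code):

    /*
     * Illustrative only: pick the poll timeout for the apply loop.  If
     * feedback is still pending, wait only about WalWriterDelay (the
     * slave's walwriter should have flushed by then); otherwise use
     * the full one-second nap.
     */
    static long
    apply_loop_timeout_ms(bool feedback_pending, int wal_writer_delay_ms)
    {
        const long  naptime_per_cycle_ms = 1000;    /* NAPTIME_PER_CYCLE */

        return feedback_pending ? (long) wal_writer_delay_ms
                                : naptime_per_cycle_ms;
    }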
It's not clear how much this matters for production purposes, but
it's definitely helpful for testing.
Tom Lane [Sat, 1 Jul 2017 15:59:44 +0000 (11:59 -0400)]
Shorten timeouts while waiting for logicalrep worker slot attach/detach.
When waiting for a logical replication worker process to start or stop,
we have to busy-wait until we see it add or remove itself from the
LogicalRepWorker slot in shared memory. Those loops were using a
one-second delay between checks, but on any reasonably modern machine, it
doesn't take more than a couple of msec for a worker to spawn or shut down.
Reduce the loop delays to 10ms to avoid wasting quite so much time in the
related regression tests.
In principle, a better solution would be to fix things so that the waiting
process can be awakened via its latch at the right time. But that seems
considerably more invasive, which is undesirable for a post-beta fix.
Worker start/stop performance likely isn't of huge interest anyway for
production purposes, so we might not ever get around to it.
In passing, rearrange the second wait loop in logicalrep_worker_stop()
so that the lock is held at the top of the loop, thus saving one lock
acquisition/release per call, and making it look more like the other loop.
Alvaro Herrera [Fri, 30 Jun 2017 22:06:33 +0000 (18:06 -0400)]
Fix locking in WAL receiver/sender shmem state structs
In WAL receiver and WAL sender, some accesses to their corresponding
shared memory control structs were done without holding any kind of
lock, which could lead to inconsistent and possibly insecure results.
In walsender, fix by clarifying the locking rules and following them
correctly, as documented in the new comment in walsender_private.h;
namely that some members can be read in walsender itself without a lock,
because the only writes occur in the same process. The rest of the
struct requires spinlock for accesses, as usual.
In walreceiver, fix by always holding spinlock while accessing the
struct.
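For example, reads of the walreceiver's shared state now look roughly
like this (an abbreviated sketch against the PG10-era WalRcvData
layout, not the exact committed code):

    #include "postgres.h"
    #include "replication/walreceiver.h"
    #include "storage/spin.h"

    /*
     * Sketch only: copy a field out of the walreceiver's shared struct
     * while holding its spinlock, rather than reading it unlocked.
     */
    static XLogRecPtr
    get_received_upto(WalRcvData *walrcv)
    {
        XLogRecPtr  recptr;

        SpinLockAcquire(&walrcv->mutex);
        recptr = walrcv->receivedUpto;
        SpinLockRelease(&walrcv->mutex);

        return recptr;
    }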
While there is potentially a problem in all branches, it is minor in
stable ones. This only became a real problem in pg10 because of quorum
commit in synchronous replication (commit 3901fd70cc7c), and a potential
security problem in walreceiver because a superuser() check was removed
by default monitoring roles (commit 25fff40798fc). Thus, no backpatch.
In passing, clean up some leftover braces which were used to create
unconditional blocks. Once upon a time these were used for
volatile-izing accesses to those shmem structs, which is no longer
required. Many other occurrences of this pattern remain.
Author: Michaël Paquier
Reported-by: Michaël Paquier
Reviewed-by: Masahiko Sawada, Kyotaro Horiguchi, Thomas Munro,
Robert Haas
Discussion: https://postgr.es/m/CAB7nPqTWYqtzD=LN_oDaf9r-hAjUEPAy0B9yRkhcsLdRN8fzrw@mail.gmail.com
Tom Lane [Fri, 30 Jun 2017 18:57:06 +0000 (14:57 -0400)]
Fix race conditions and missed wakeups in syncrep worker signaling.
When a sync worker is waiting for the associated apply worker to notice
that it's in SYNCWAIT state, wait_for_worker_state_change() would just
patiently wait for that to happen. This generally required waiting for
the 1-second timeout in LogicalRepApplyLoop to elapse. Kicking the worker
via its latch makes things significantly snappier.
While at it, fix race conditions that could potentially result in crashes:
we can *not* call logicalrep_worker_wakeup_ptr() once we've released the
LogicalRepWorkerLock, because worker->proc might've been reset to NULL
after we do that (indeed, there's no really solid reason to believe that
the LogicalRepWorker slot even belongs to the same worker anymore).
In logicalrep_worker_wakeup(), we can just move the wakeup inside the
lock scope. In process_syncing_tables_for_apply(), a bit more code
rearrangement is needed.
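A hedged sketch of the logicalrep_worker_wakeup() rearrangement, using
the launcher.c function names but with the body abbreviated (not the
exact committed code):

    #include "postgres.h"
    #include "replication/worker_internal.h"
    #include "storage/lwlock.h"

    /*
     * Sketch: keep LogicalRepWorkerLock held across both the slot
     * lookup and the wakeup, so worker->proc cannot be reset to NULL
     * (or the slot reused) between the two steps.
     */
    static void
    wakeup_worker(Oid subid, Oid relid)
    {
        LogicalRepWorker *worker;

        LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
        worker = logicalrep_worker_find(subid, relid, true);
        if (worker)
            logicalrep_worker_wakeup_ptr(worker);
        LWLockRelease(LogicalRepWorkerLock);
    }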
Tom Lane [Fri, 30 Jun 2017 16:00:02 +0000 (12:00 -0400)]
Fix walsender to exit promptly if client requests shutdown.
It's possible for WalSndWaitForWal to be asked to wait for WAL that doesn't
exist yet. That's fine, in fact it's the normal situation if we're caught
up; but when the client requests shutdown we should not keep waiting.
The previous coding could wait indefinitely if the source server was idle.
In passing, improve the rather weak comments in this area, and slightly
rearrange some related code for better readability.
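Schematically, the wait loop simply gains an extra exit condition (a
sketch with illustrative names, not walsender.c's actual variables):

    #include <stdbool.h>

    /*
     * Sketch of the loop-exit logic: keep waiting for WAL only while
     * no shutdown has been requested and the target position hasn't
     * been reached.  On an idle source server the target may never be
     * reached, so the shutdown check must come first.
     */
    static bool
    should_keep_waiting(unsigned long flushed_upto, unsigned long target,
                        bool shutdown_requested)
    {
        if (shutdown_requested)
            return false;       /* exit promptly on client shutdown */
        return flushed_upto < target;
    }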
Peter Eisentraut [Fri, 30 Jun 2017 15:24:00 +0000 (11:24 -0400)]
Prohibit creating ICU collation with different ctype
ICU does not support "collate" and "ctype" being different, so the
collctype catalog column is ignored. But for catalog neatness, ensure
that they are the same.
Peter Eisentraut [Fri, 30 Jun 2017 12:50:39 +0000 (08:50 -0400)]
Copy collencoding in CREATE COLLATION / FROM
This command used to compute the collencoding entry the same way as
when a completely new collation is created. But for example when copying the
"C" collation, this would then result in a collation that has a
collencoding entry for the current database encoding rather than -1,
thus not making an exact copy. This has probably no practical impact,
but making this change keeps the catalog contents neat.
Tom Lane [Thu, 29 Jun 2017 02:11:12 +0000 (22:11 -0400)]
Eat XIDs more efficiently in recovery TAP test.
The point of this loop is to insert 1000 rows into the test table
and consume 1000 XIDs. I can't see any good reason why it's useful
to launch 1000 psqls and 1000 backend processes to accomplish that.
Pushing the looping into a plpgsql DO block shaves about 10 seconds
off the runtime of the src/test/recovery TAP tests on my machine;
that's over 10% of the runtime of that test suite.
It is, in fact, sufficiently more efficient that we now demonstrably
need wait_slot_xmins() afterwards, or the slaves' xmins may not have
moved yet.
Tom Lane [Wed, 28 Jun 2017 21:31:24 +0000 (17:31 -0400)]
Change pg_ctl to detect server-ready by watching status in postmaster.pid.
Traditionally, "pg_ctl start -w" has waited for the server to become
ready to accept connections by attempting a connection once per second.
That has the major problem that connection issues (for instance, a
kernel packet filter blocking traffic) can't be reliably told apart
from server startup issues, and the minor problem that if server startup
isn't quick, we accumulate "the database system is starting up" spam
in the server log. We've hacked around many of the possible connection
issues, but it resulted in ugly and complicated code in pg_ctl.c.
In commit c61559ec3, I changed the probe rate to every tenth of a second.
That prompted Jeff Janes to complain that the log-spam problem had become
much worse. In the ensuing discussion, Andres Freund pointed out that
we could dispense with connection attempts altogether if the postmaster
were changed to report its status in postmaster.pid, which "pg_ctl start"
already relies on being able to read. This patch implements that, teaching
postmaster.c to report a status string into the pidfile at the same
state-change points already identified as being of interest for systemd
status reporting (cf commit 7d17e683f). pg_ctl no longer needs to link
with libpq at all; all its functions now depend on reading server files.
In support of this, teach AddToDataDirLockFile() to allow addition of
postmaster.pid lines in not-necessarily-sequential order. This is needed
on Windows where the SHMEM_KEY line will never be written at all. We still
have the restriction that we don't want to truncate the pidfile; document
the reasons for that a bit better.
Also, fix the pg_ctl TAP tests so they'll notice if "start -w" mode
is broken --- before, they'd just wait out the sixty seconds until
the loop gives up, and then report success anyway. (Yes, I found that
out the hard way.)
While at it, arrange for pg_ctl to not need to #include miscadmin.h;
as a rather low-level backend header, requiring that to be compilable
client-side is pretty dubious. This requires moving the #define's
associated with the pidfile into a new header file, and moving
PG_BACKEND_VERSIONSTR someplace else. For lack of a clearly better
"someplace else", I put it into port.h, beside the declaration of
find_other_exec(), since most users of that macro are passing the value to
find_other_exec(). (initdb still depends on miscadmin.h, but at least
pg_ctl and pg_upgrade no longer do.)
In passing, fix main.c so that PG_BACKEND_VERSIONSTR actually defines the
output of "postgres -V", which remarkably it had never done before.
Andrew Gierth [Wed, 28 Jun 2017 18:00:55 +0000 (19:00 +0100)]
Fix transition tables for ON CONFLICT.
We now disallow having triggers with both transition tables and ON
INSERT OR UPDATE (which was a PG extension to the spec anyway),
because in this case it's not at all clear how the transition tables
should work for an INSERT ... ON CONFLICT query. Separate ON INSERT
and ON UPDATE triggers with transition tables are allowed, and the
transition tables for these reflect only the inserted and only the
updated tuples respectively.
Andrew Gierth [Wed, 28 Jun 2017 17:55:03 +0000 (18:55 +0100)]
Fix transition tables for partition/inheritance.
We disallow row-level triggers with transition tables on child tables.
Transition tables for triggers on the parent table contain only those
columns present in the parent. (We can't mix tuple formats in a
single transition table.)
Tom Lane [Wed, 28 Jun 2017 16:30:16 +0000 (12:30 -0400)]
Second try at fixing tcp_keepalives_idle option on Solaris.
Buildfarm evidence shows that TCP_KEEPALIVE_THRESHOLD doesn't exist
after all on Solaris < 11. This means we need to take positive action to
prevent the TCP_KEEPALIVE code path from being taken on that platform.
I've chosen to limit it with "&& defined(__darwin__)", since it's unclear
that anyone else would follow Apple's precedent of spelling the symbol
that way.
Also, follow a suggestion from Michael Paquier of eliminating code
duplication by defining a couple of intermediate symbols for the
socket option.
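A hedged sketch of that symbol-consolidation idea (the intermediate
macro name here is illustrative, and unit conversions some platforms
want are omitted):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /*
     * Sketch: map the various platform spellings onto one intermediate
     * symbol, so the setsockopt() code path is written only once.
     */
    #if defined(TCP_KEEPIDLE)
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPIDLE              /* Linux, *BSD */
    #elif defined(TCP_KEEPALIVE_THRESHOLD)
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPALIVE_THRESHOLD   /* Solaris >= 11 */
    #elif defined(TCP_KEEPALIVE) && defined(__darwin__)
    #define PG_TCP_KEEPALIVE_IDLE TCP_KEEPALIVE             /* macOS */
    #endif

    static int
    set_keepalives_idle(int fd, int idle)
    {
    #ifdef PG_TCP_KEEPALIVE_IDLE
        return setsockopt(fd, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
                          (char *) &idle, sizeof(idle));
    #else
        (void) fd;
        (void) idle;
        return 0;               /* not supported on this platform */
    #endif
    }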
In passing, make some effort to reduce the number of translatable messages
by replacing "setsockopt(foo) failed" with "setsockopt(%s) failed", etc,
throughout the affected files. And update relevant documentation so
that it doesn't claim to provide an exhaustive list of the possible
socket option names.
Like the previous commit (f0256c774), back-patch to all supported branches.
Stephen Frost [Wed, 28 Jun 2017 14:33:57 +0000 (10:33 -0400)]
Do not require 'public' to exist for pg_dump -c
Commit 330b84d8c4 didn't contemplate the case where the public schema
has been dropped and introduced a query which fails when there is no
public schema into pg_dump (when used with -c).
Adjust the query used by pg_dump to handle the case where the public
schema doesn't exist and add tests to check that such a case no longer
fails.
Back-patch the specific fix to 9.6, as the prior commit was.
Adding tests for this case involved adding support to the pg_dump
TAP tests to work with multiple databases, which, while not a large
change, is a bit much to back-patch, so that's only done in master.
Tom Lane [Tue, 27 Jun 2017 22:47:57 +0000 (18:47 -0400)]
Support tcp_keepalives_idle option on Solaris.
Turns out that the socket option for this is named TCP_KEEPALIVE_THRESHOLD,
at least according to the tcp(7P) man page for Solaris 11. (But since that
text refers to "SunOS", it's likely pretty ancient.) It appears that the
symbol TCP_KEEPALIVE does get defined on that platform, but it doesn't
seem to represent a valid protocol-level socket option. This leads to
bleats in the postmaster log, and no tcp_keepalives_idle functionality.
Per bug #14720 from Andrey Lizenko, as well as an earlier report from
Dhiraj Chawla that nobody had followed up on. The issue's been there
since we added the TCP_KEEPALIVE code path in commit 5acd417c8, so
back-patch to all supported branches.
Tom Lane [Tue, 27 Jun 2017 21:51:11 +0000 (17:51 -0400)]
Re-allow SRFs and window functions within sub-selects within aggregates.
check_agg_arguments_walker threw an error upon seeing a SRF or window
function, but that is too aggressive: if the function is within a
sub-select then it's perfectly fine. I broke the SRF case in commit 0436f6bde by copying the logic for window functions ... but that was
broken too, and had been since commit eaccfded9.
Repair both cases in HEAD, and the window function case back to 9.3.
9.2 gets this right.
Tom Lane [Mon, 26 Jun 2017 23:01:26 +0000 (19:01 -0400)]
Reduce wal_retrieve_retry_interval in applicable TAP tests.
By default, wal_retrieve_retry_interval is five seconds, which is far
more than is needed in any of our TAP tests, leaving the test cases
just twiddling their thumbs for significant stretches. Moreover,
because it's so large, we get basically no testing of the retry-before-
master-is-ready code path. Hence, make PostgresNode::init set up
wal_retrieve_retry_interval = '500ms' as part of its customization of
test clusters' postgresql.conf. This shaves quite a few seconds off
the runtime of the recovery TAP tests.
Back-patch into 9.6. We have wal_retrieve_retry_interval in 9.5,
but the test infrastructure isn't there.
Tom Lane [Mon, 26 Jun 2017 21:31:56 +0000 (17:31 -0400)]
Don't lose walreceiver start requests due to race condition in postmaster.
When a walreceiver dies, the startup process will notice that and send
a PMSIGNAL_START_WALRECEIVER signal to the postmaster, asking for a new
walreceiver to be launched. There's a race condition, which at least
in HEAD is very easy to hit, whereby the postmaster might see that
signal before it processes the SIGCHLD from the walreceiver process.
In that situation, sigusr1_handler() just dropped the start request
on the floor, reasoning that it must be redundant. Eventually, after
10 seconds (WALRCV_STARTUP_TIMEOUT), the startup process would make a
fresh request --- but that's a long time if the connection could have
been re-established almost immediately.
Fix it by setting a state flag inside the postmaster that we won't
clear until we do launch a walreceiver. In cases where that results
in an extra walreceiver launch, it's up to the walreceiver to realize
it's unwanted and go away --- but we have, and need, that logic anyway
for the opposite race case.
I came across this through investigating unexpected delays in the
src/test/recovery TAP tests: it manifests there in test cases where
a master server is stopped and restarted while leaving streaming
slaves active.
This logic has been broken all along, so back-patch to all supported
branches.
Tom Lane [Mon, 26 Jun 2017 20:17:05 +0000 (16:17 -0400)]
Ignore old stats file timestamps when starting the stats collector.
The stats collector disregards inquiry messages that bear a cutoff_time
before when it last wrote the relevant stats file. That's fine, but at
startup when it reads the "permanent" stats files, it absorbed their
timestamps as if they were the times at which the corresponding temporary
stats files had been written. In reality, of course, there's no data
out there at all. This led to disregarding inquiry messages soon after
startup if the postmaster had been shut down and restarted within less
than PGSTAT_STAT_INTERVAL; which is a pretty common scenario, both for
testing and in the field. Requesting backends would hang for 10 seconds
and then report failure to read statistics, unless they got bailed out
by some other backend coming along and making a newer request within
that interval.
I came across this through investigating unexpected delays in the
src/test/recovery TAP tests: it manifests there because the autovacuum
launcher hangs for 10 seconds when it can't get statistics at startup,
thus preventing a second shutdown from occurring promptly. We might
want to do some things in the autovac code to make it less prone to
getting stuck that way, but this change is a good bug fix regardless.
In passing, also fix pgstat_read_statsfiles() to ensure that it
re-zeroes its global stats variables if they are corrupted by a
short read from the stats file. (Other reads in that function
go into temp variables, so that the issue doesn't arise.)
This has been broken since we created the separation between permanent
and temporary stats files in 8.4, so back-patch to all supported branches.
Tom Lane [Mon, 26 Jun 2017 19:13:23 +0000 (15:13 -0400)]
Reduce pg_ctl's reaction time when waiting for postmaster start/stop.
pg_ctl has traditionally waited one second between probes for whether
the start or stop request has completed. That behavior was embodied
in the original shell script written in 1999 (commit 5b912b089) and
I doubt anyone's questioned it since. Nowadays, machines are a lot
faster, and the shell script is long since replaced by C code, so it's
fair to reconsider how long we ought to wait.
This patch adjusts the coding so that the wait time can be any even
divisor of 1 second, and sets the actual probe rate to 10 per second.
That's based on experimentation with the src/test/recovery TAP tests,
which include a lot of postmaster starts and stops. This patch alone
reduces the (non-parallelized) runtime of those tests from ~4m30s to
~3m5s on my machine. Increasing the probe rate further doesn't help
much, so this seems like a good number.
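Schematically, the wait loop now looks something like this (a sketch;
the constant name and helper are illustrative, not the pg_ctl.c code):

    #include <stdbool.h>
    #include <unistd.h>

    #define WAITS_PER_SEC   10      /* probe ten times per second */

    /*
     * Sketch of the revised wait loop: probe every tenth of a second
     * instead of once per second, up to wait_seconds in total.
     */
    static bool
    wait_for_postmaster(int wait_seconds, bool (*probe) (void))
    {
        for (int i = 0; i < wait_seconds * WAITS_PER_SEC; i++)
        {
            if (probe())
                return true;
            usleep(1000000 / WAITS_PER_SEC);    /* 100 ms */
        }
        return false;
    }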
In the real world this probably won't have much impact, since people
don't start/stop production postmasters often, and the shutdown checkpoint
usually takes nontrivial time too. But it makes development work and
testing noticeably snappier, and that's good enough reason for me.
Also, by reducing the dead time in postmaster restart sequences, this
change has made it easier to reproduce some bugs that have been lurking
for awhile. Patches for those will follow.
Tom Lane [Mon, 26 Jun 2017 18:39:55 +0000 (14:39 -0400)]
Improve wait logic in TAP tests for streaming replication.
Remove hard-wired sleep(2) delays in 001_stream_rep.pl in favor of using
poll_query_until to check for the desired state to appear. In addition,
add such a wait before the last test in the script, as it's possible
to demonstrate failures there after upcoming improvements in pg_ctl.
(We might end up adding polling before each of the get_slot_xmins calls in
this script, but I feel no great need to do that until shown necessary.)
In passing, clarify the description strings for some of the test cases.
Michael Paquier and Craig Ringer, pursuant to a complaint from me
Tom Lane [Mon, 26 Jun 2017 14:31:10 +0000 (10:31 -0400)]
Minor code review for parse_phrase_operator().
Fix its header comment, which described the old behavior of the <N>
phrase distance operator; we missed updating that in commit 028350f61.
Also, reset errno before strtol() call, to defend against the possibility
that it was already ERANGE at entry. (The lack of complaints says that
it generally isn't, but this is at least a latent bug.) Very minor
stylistic improvements as well.
Victor Drobny noted the obsolete comment, I noted the errno issue.
Back-patch to 9.6 where this code was added, just in case the errno
issue is a live bug in some cases.
Tom Lane [Sat, 24 Jun 2017 17:54:15 +0000 (13:54 -0400)]
Further hacking on ICU collation creation and usage.
pg_import_system_collations() refused to create any ICU collations if
the current database's encoding didn't support ICU. This is wrongheaded:
initdb must initialize pg_collation in an encoding-independent way
since it might be used in other databases with different encodings.
The reason for the restriction seems to be that get_icu_locale_comment()
used icu_from_uchar() to convert the UChar-format display name, and that
unsurprisingly doesn't know what to do in unsupported encodings.
But by the same token that the initial catalog contents must be
encoding-independent, we can't allow non-ASCII characters in the comment
strings. So we don't really need icu_from_uchar() here: just check for
Unicode codes outside the ASCII range, and if there are none, the format
conversion is trivial. If there are some, we can simply not install the
comment. (In my testing, this affects only Norwegian Bokmål, which has
given us trouble before.)
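The check itself is trivial; a sketch of the idea (not the committed
code):

    #include <stdbool.h>
    #include <unicode/utypes.h>

    /*
     * A UTF-16 display name is pure ASCII iff every code unit is
     * <= 0x7F; in that case the "conversion" to any server encoding is
     * a byte-for-byte copy and no icu_from_uchar() call is needed.
     */
    static bool
    uchar_is_all_ascii(const UChar *str, int32_t len)
    {
        for (int32_t i = 0; i < len; i++)
        {
            if (str[i] > 127)
                return false;
        }
        return true;
    }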
For paranoia's sake, also check for non-ASCII characters in ICU locale
names, and skip such locales, as we do for libc locales. I don't
currently have a reason to believe that this will ever reject anything,
but then again the libc maintainers should have known better too.
With just the import changes, ICU collations can be found in pg_collation
in databases with unsupported encodings. This resulted in more or less
clean failures at runtime, but that's not how things act for unsupported
encodings with libc collations. Make it work the same as our traditional
behavior for libc collations by having collation lookup take
is_encoding_supported_by_icu() into account.
Adjust documentation to match. Also, expand Table 23.1 to show which
encodings are supported by ICU.
catversion bump because of likely change in pg_collation/pg_description
initial contents in ICU-enabled builds.
Tom Lane [Fri, 23 Jun 2017 20:00:45 +0000 (16:00 -0400)]
Fix incorrect buffer-length argument to uloc_getDisplayName().
The maxResultSize argument of uloc_getDisplayName is the number of
UChars in the output buffer, not the number of bytes. In principle
this could result in a stack smash, although at least in my Fedora 25
install there are no ICU locales with display names long enough to
overrun the buffer. But it's easily proven to be wrong by reducing
the length of displayname to around 20, whereupon a stack smash
does happen.
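For reference, a small standalone example of the corrected call
pattern, with the capacity expressed in UChars rather than bytes:

    #include <unicode/uloc.h>

    /*
     * Example: maxResultSize for uloc_getDisplayName() counts UChars.
     * Passing sizeof(buffer) would overstate the capacity by a factor
     * of sizeof(UChar) and risk overrunning the buffer.
     */
    static int32_t
    get_display_name(const char *localeid, UChar *displayname, int32_t nchars)
    {
        UErrorCode  status = U_ZERO_ERROR;

        return uloc_getDisplayName(localeid, NULL, displayname, nchars,
                                   &status);
    }

Call it with, say, "UChar buf[128]" and pass 128 (or
sizeof(buf) / sizeof(buf[0])), never sizeof(buf).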
(This is a rather scary bug, because the same mistake could easily
have been made in other places; but in a quick code search looking
at uses of UChar I could not find any other instances.)
Peter Eisentraut [Fri, 23 Jun 2017 19:12:36 +0000 (15:12 -0400)]
Fix replication with replica identity full
The comparison with the target rows on the subscriber side was done with
datumIsEqual(), which can have false negatives. For instance, it didn't
work reliably for text columns. So use the equality operator provided
by the type cache instead.
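A hedged sketch of the replacement comparison using the type-cache
equality operator (abbreviated, not the exact committed code):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/typcache.h"

    /*
     * Sketch: compare two non-null datums of the same type with the
     * type's default equality operator from the type cache, instead of
     * datumIsEqual(), which compares raw representations and can
     * report false negatives (e.g. for differently-toasted text).
     */
    static bool
    datums_equal(Oid typid, Oid collation, Datum a, Datum b)
    {
        TypeCacheEntry *typentry;

        typentry = lookup_type_cache(typid,
                                     TYPECACHE_EQ_OPR | TYPECACHE_EQ_OPR_FINFO);
        if (!OidIsValid(typentry->eq_opr))
            return false;       /* no equality operator; caller must cope */

        return DatumGetBool(FunctionCall2Coll(&typentry->eq_opr_finfo,
                                              collation, a, b));
    }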
Also add more user documentation about replica identity requirements.
Tom Lane [Fri, 23 Jun 2017 18:19:48 +0000 (14:19 -0400)]
Rethink behavior of pg_import_system_collations().
Marco Atzeri reported that initdb would fail if "locale -a" reported
the same locale name more than once. All previous versions of Postgres
implicitly de-duplicated the results of "locale -a", but the rewrite
to move the collation import logic into C had lost that property.
It had also lost the property that locale names matching built-in
collation names were silently ignored.
The simplest way to fix this is to make initdb run the function in
if-not-exists mode, which means that there's no real use-case for
non if-not-exists mode; we might as well just drop the boolean argument
and simplify the function's definition to be "add any collations not
already known". This change also gets rid of some odd corner cases
caused by the fact that aliases were added in if-not-exists mode even
if the function argument said otherwise.
While at it, adjust the behavior so that pg_import_system_collations()
doesn't spew "collation foo already exists, skipping" messages during a
re-run; that's completely unhelpful, especially since there are often
hundreds of them. And make it return a count of the number of collations
it did add, which seems like it might be helpful.
Also, re-integrate the previous coding's property that it would make a
deterministic selection of which alias to use if there were conflicting
possibilities. This would only come into play if "locale -a" reports
multiple equivalent locale names, say "de_DE.utf8" and "de_DE.UTF-8",
but that hardly seems out of the question.
In passing, fix incorrect behavior in pg_import_system_collations()'s
ICU code path: it neglected CommandCounterIncrement, which would result
in failures if ICU returns duplicate names, and it would try to create
comments even if a new collation hadn't been created.
Also, reorder operations in initdb so that the 'ucs_basic' collation
is created before calling pg_import_system_collations() not after.
This prevents a failure if "locale -a" were to report a locale named
that. There's no reason to think that that ever happens in the wild,
but the old coding would have survived it, so let's be equally robust.
Simon Riggs [Fri, 23 Jun 2017 17:58:46 +0000 (18:58 +0100)]
Improve replication lag interpolation after idle period
After sitting idle and fully replayed for a while and then encountering
a new burst of WAL activity, we interpolate between an ancient sample and the
not-yet-reached one for the new traffic. That produced a corner case report
of lag after receiving the first new reply from the standby, which might
sometimes be a large spike.
Correct this by resetting the last_read time and handling that new case.
Tom Lane [Fri, 23 Jun 2017 16:22:06 +0000 (12:22 -0400)]
Fix memory leakage in ICU encoding conversion, and other code review.
Callers of icu_to_uchar() neglected to pfree the result string when done
with it. This results in catastrophic memory leaks in varstr_cmp(),
because of our prevailing assumption that btree comparison functions don't
leak memory. For safety, make all the call sites clean up leaks, though
I suspect that we could get away without it in formatting.c. I audited
callers of icu_from_uchar() as well, but found no places that seemed to
have a comparable issue.
Add function API specifications for icu_to_uchar() and icu_from_uchar();
the lack of any thought-through specification is perhaps not unrelated
to the existence of this bug in the first place. Fix icu_to_uchar()
to guarantee a nul-terminated result; although no existing caller appears
to care, the fact that it would have been nul-terminated except in
extreme corner cases seems ideally designed to bite someone on the rear
someday. Fix ucnv_fromUChars() destCapacity argument --- in the worst
case, that could perhaps have led to a non-nul-terminated result, too.
Fix icu_from_uchar() to have a more reasonable definition of the function
result --- no callers are actually paying attention, so this isn't a live
bug, but it's certainly sloppily designed. Const-ify icu_from_uchar()'s
input string for consistency.
That is not the end of what needs to be done to these functions, but
it's as much as I have the patience for right now.
Tom Lane [Fri, 23 Jun 2017 15:03:04 +0000 (11:03 -0400)]
Add testing to detect errors of omission in "pin" dependency creation.
It's essential that initdb.c's setup_depend() scan each system catalog
that could contain objects that need to have "p" (pin) entries in pg_depend
or pg_shdepend. Forgetting to add that, either when a catalog is first
invented or when it first acquires DATA() entries, is an obvious bug
hazard. We can detect such omissions at reasonable cost by probing every
OID-containing system catalog to see whether the lowest-numbered OID in it
is pinned. If so, the catalog must have been properly accounted for in
setup_depend(). If the lowest OID is above FirstNormalObjectId then the
catalog must have been empty at the end of initdb, so it doesn't matter.
There are a small number of catalogs whose first entry is made later in
initdb than setup_depend(), resulting in nonempty expected output of the
test, but these can be manually inspected to see that they are OK. Any
future mistake of this ilk will manifest as a new entry in the test's
output.
Since pg_conversion is already in the test's output, add it to the set of
catalogs scanned by setup_depend(). That has no effect today (hence, no
catversion bump here) but it will protect us if we ever do add pin-worthy
conversions.
This test is very much like the catalog sanity checks embodied in
opr_sanity.sql and type_sanity.sql, but testing pg_depend doesn't seem to
fit naturally into either of those scripts' charters. Hence, invent a new
test script misc_sanity.sql, which can be a home for this as well as tests
on any other catalogs we might want in future.
Alvaro Herrera [Thu, 22 Jun 2017 21:12:27 +0000 (17:12 -0400)]
Fix typos in README.dependencies
There was a logic error in a formula, reported by Atsushi Torokoshi.
Ashutosh Bapat furthermore recommended to change notation for a variable
that was re-using a letter from a previous formula, though his proposed
patch contained a small error in attributing what the new letter is for.
Also, instead of his proposed d' I ended up using e, to avoid confusing
the reader with quotes which are used differently in the explaining
prose.
Alvaro Herrera [Thu, 22 Jun 2017 17:50:26 +0000 (13:50 -0400)]
Fix autovacuum launcher attachment to its DSA
The autovacuum launcher doesn't actually do anything with its DSA other
than creating it and attaching to it, but it's been observed that after
longjmp'ing to the standard error handling block (for example after
getting SIGINT) the autovacuum enters an infinite loop reporting that it
cannot attach to its DSA anymore (which is correct, because it's already
attached to it.) Fix by only attempting to attach if not already
attached.
I introduced this bug together with BRIN autosummarization in 7526e10224f0.
Reported-by: Yugo Nagata.
Author: Thomas Munro. I added the comment to go with it.
Discussion: https://postgr.es/m/20170621211538.0c9eae73.nagata@sraoss.co.jp
Andres Freund [Wed, 21 Jun 2017 21:14:45 +0000 (14:14 -0700)]
Fix possibility of creating a "phantom" segment after promotion.
When promoting a standby just after an XLOG_SWITCH record was replayed,
and the next segment(s) are already locally available (via walsender,
restore_command + trigger/recovery target), that segment could
accidentally be recycled onto the past of the new timeline. Later
checkpointer would create a .ready file for it, assuming there was an
error during creation, and it would get archived. That causes trouble
if another standby is later brought up from a basebackup from before
the timeline creation: it would try to read the segment, because
XLogFileReadAnyTLI just tries all possible timelines, even though the
segment doesn't have valid contents. Thus replay would fail.
The problem, if already occurred, can be fixed by removing the segment
and/or having restore_command filter it out.
The reason for the creation of such "phantom" segments was that after
an XLOG_SWITCH record the EndOfLog variable points to the beginning of
the next segment, and RemoveXlogFile() used XLByteToPrevSeg().
Normally RemoveXlogFile() doing so is harmless, because the last
segment will still exist preventing InstallXLogFileSegment() from
causing harm, but just after promotion there's no previous segment on
the new timeline.
Fix that by using XLByteToSeg() instead of XLByteToPrevSeg().
Author: Andres Freund
Reported-By: Greg Burek
Discussion: https://postgr.es/m/20170619073026.zcwpe6mydsaz5ygd@alap3.anarazel.de
Backpatch: 9.2-, bug older than all supported versions
Tom Lane [Wed, 21 Jun 2017 19:35:54 +0000 (15:35 -0400)]
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Tom Lane [Wed, 21 Jun 2017 19:18:54 +0000 (15:18 -0400)]
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Tom Lane [Wed, 21 Jun 2017 18:39:04 +0000 (14:39 -0400)]
Initial pgindent run with pg_bsd_indent version 2.0.
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:
* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
than the expected column 33.
On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list. This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.
There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses. I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.
Tom Lane [Wed, 21 Jun 2017 18:26:21 +0000 (14:26 -0400)]
Adjust pgindent script to use pg_bsd_indent 2.0.
Update version-checking code and list of switches. Delete obsolete
quasi-support for using GNU indent. Remove a lot of no-longer-needed
workarounds for bugs of the old version, and improve comments for
the hacks that remain. Update run_build() subroutine to fetch the
pg_bsd_indent code from the newly established git repo for it.
In passing, fix pgindent to not overwrite files that require no changes;
this makes it a bit more friendly to run on a built tree.
Adjust relevant documentation.
Remove indent.bsd.patch; it's not relevant anymore (and was obsolete
long ago anyway). Likewise remove pgcppindent, since we're no longer
in the business of shipping C++ code.
Piotr Stefaniak is responsible for most of the algorithmic changes
to the pgindent script; I did the rest.
Tom Lane [Tue, 20 Jun 2017 21:06:43 +0000 (17:06 -0400)]
Make opr_sanity test complain about built-in functions marked prosecdef.
Currently, there are no built-in functions that are SECURITY DEFINER.
But we just found an instance where one was mistakenly marked that way,
so it seems prudent to add a test about it. If we ever grow some
functions that are intentionally SECURITY DEFINER, we can alter the
expected output of this test, or adjust the query to filter out functions
for which it's okay.
Tom Lane [Tue, 20 Jun 2017 17:39:57 +0000 (13:39 -0400)]
Upgrade documentation connected with shared_preload_libraries et al.
Noplace in the documentation actually defined what these variables
contain. Define them as lists of arguments for LOAD, and improve
that command's documentation a bit.
Bruce Momjian [Tue, 20 Jun 2017 17:20:02 +0000 (13:20 -0400)]
pg_upgrade: start/stop new server after pg_resetwal
When commit 0f33a719fdbb5d8c43839ea0d2c90cd03e2af2d2 removed the
instructions to start/stop the new cluster before running rsync, it was
now possible for pg_resetwal/pg_resetxlog to leave the final WAL record
at wal_level=minimum, preventing upgraded standby servers from
reconnecting.
This patch fixes that by having pg_upgrade unconditionally start/stop
the new cluster after pg_resetwal/pg_resetxlog has run.
Backpatch through 9.2 since, though the instructions were added in PG
9.5, they worked all the way back to 9.2.
Tom Lane [Tue, 20 Jun 2017 17:02:42 +0000 (13:02 -0400)]
Don't downcase entries within shared_preload_libraries et al.
load_libraries(), which processes the various xxx_preload_libraries GUCs,
was parsing them using SplitIdentifierString() which isn't really
appropriate for values that could be path names: it downcases unquoted
text, and it doesn't allow embedded whitespace unless quoted.
Use SplitDirectoriesString() instead. That also allows us to simplify
load_libraries() a bit, since canonicalize_path() is now done for it.
While this definitely seems like a bug fix, it has the potential to
break configuration settings that accidentally worked before because
of the downcasing behavior. Also, there's an easy workaround for the
bug, namely to double-quote troublesome text. Hence, no back-patch.
Peter Eisentraut [Tue, 20 Jun 2017 16:25:07 +0000 (12:25 -0400)]
Tweak publication fetching in psql
Viewing a table with \d in psql also shows the publications a table is
in. If a publication is concurrently dropped, this shows an error,
because the view pg_publication_tables internally uses
pg_get_publication_tables(), which uses a catalog snapshot. This can be
particularly annoying if a for-all-tables publication is concurrently
dropped.
To avoid that, write the query in psql differently. Expose the function
pg_relation_is_publishable() to SQL and write the query using that.
That still has a risk of being affected by concurrent catalog changes,
but in this case it would be a table drop that causes problems, and then
the psql \d command wouldn't be interesting anymore anyway.
Tom Lane [Mon, 19 Jun 2017 22:32:22 +0000 (18:32 -0400)]
Fix materialized-view documentation oversights.
When materialized views were added, psql's \d commands were made to
treat them as a separate object category ... but not everyplace in the
documentation or comments got the memo.
Noted by David Johnston. Back-patch to 9.3 where matviews came in.