Dean Rasheed [Fri, 21 Jul 2017 08:20:47 +0000 (09:20 +0100)]
Use MINVALUE/MAXVALUE instead of UNBOUNDED for range partition bounds.
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.
Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.
Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
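For instance (table names invented), a pair of contiguous multi-column
range partitions can now be written as:

    CREATE TABLE measurement (a int, b int) PARTITION BY RANGE (a, b);
    CREATE TABLE measurement_lo PARTITION OF measurement
        FOR VALUES FROM (MINVALUE, MINVALUE) TO (10, MINVALUE);
    CREATE TABLE measurement_hi PARTITION OF measurement
        FOR VALUES FROM (10, MINVALUE) TO (MAXVALUE, MAXVALUE);

Since measurement_lo's upper bound equals measurement_hi's lower bound,
rows with a < 10 go to the first partition and rows with a >= 10 to the
second, with no gap between them.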
Note: Forces a post-PG 10 beta2 initdb.
Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.
Tom Lane [Thu, 20 Jul 2017 19:28:42 +0000 (15:28 -0400)]
In v10 release notes, call out sequence changes as a compatibility item.
The previous description didn't make it clear that this change
potentially breaks applications, partly because the entry wasn't even
in the compatibility-hazard section. Move and clarify.
Tom Lane [Thu, 20 Jul 2017 16:41:26 +0000 (12:41 -0400)]
Doc: clarify description of degenerate NATURAL joins.
Claiming that NATURAL JOIN is equivalent to CROSS JOIN when there are
no common column names is only strictly correct if it's an inner join;
you can't say e.g. CROSS LEFT JOIN. Better to explain it as meaning
JOIN ON TRUE, instead. Per a suggestion from David Johnston.
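For example, with hypothetical tables t1 and t2 that share no column
names:

    SELECT * FROM t1 NATURAL LEFT JOIN t2;
    -- behaves the same as
    SELECT * FROM t1 LEFT JOIN t2 ON TRUE;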
Tom Lane [Thu, 20 Jul 2017 15:29:36 +0000 (11:29 -0400)]
Fix dumping of outer joins with empty qual lists.
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax. However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too. The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join. Fix by printing ON TRUE in that case.
This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings. Back-patch to 9.3 where that commit appeared.
Tom Lane [Wed, 19 Jul 2017 20:16:57 +0000 (16:16 -0400)]
Add static assertions about pg_control fitting into one disk sector.
When pg_control was first designed, sizeof(ControlFileData) was small
enough that a comment seemed like plenty to document the assumption that
it'd fit into one disk sector. Now it's nearly 300 bytes, raising the
possibility that somebody would carelessly add enough stuff to create
a problem. Let's add a StaticAssertStmt() to ensure that the situation
doesn't pass unnoticed if it ever occurs.
While at it, rename PG_CONTROL_SIZE to PG_CONTROL_FILE_SIZE to make it
clearer what that symbol means, and convert the existing runtime
comparisons of sizeof(ControlFileData) vs. PG_CONTROL_FILE_SIZE to be
static asserts --- we didn't have that technology when this code was
first written.
Tom Lane [Tue, 18 Jul 2017 17:13:47 +0000 (13:13 -0400)]
Improve make_tsvector() to handle empty input, and simplify its callers.
It seemed a bit silly that each caller of make_tsvector() was laboriously
special-casing the situation where no lexemes were found, when it would
be easy and much more bullet-proof to make make_tsvector() handle that.
Tom Lane [Tue, 18 Jul 2017 16:45:51 +0000 (12:45 -0400)]
Fix serious performance problems in json(b) to_tsvector().
In an off-list followup to bug #14745, Bob Jones complained that
to_tsvector() on a 2MB jsonb value took an unreasonable amount of
time and space --- enough to draw the wrath of the OOM killer on
his machine. On my machine, his example proved to require upwards
of 18 seconds and 4GB, which seemed pretty bogus considering that
to_tsvector() on the same data treated as text took just a couple
hundred msec and 10 or so MB.
On investigation, the problem is that the implementation scans each
string element of the json(b) and converts it to tsvector separately,
then applies tsvector_concat() to join those separate tsvectors.
The unreasonable memory usage came from leaking every single one of
the transient tsvectors --- but even without that mistake, this is an
O(N^2) or worse algorithm, because tsvector_concat() has to repeatedly
process the words coming from earlier elements.
We can fix it by accumulating all the lexeme data and applying
make_tsvector() just once. As a side benefit, that also makes the
desired adjustment of lexeme positions far cheaper, because we can
just tweak the running "pos" counter between JSON elements.
In passing, try to make the explanation of that tweak more intelligible.
(I didn't think that a barely-readable comment far removed from the
actual code was helpful.) And do some minor other code beautification.
Robert Haas [Tue, 18 Jul 2017 01:29:45 +0000 (21:29 -0400)]
Use a real RT index when setting up partition tuple routing.
Before, we always used a dummy value of 1, but that's not right when
the partitioned table being modified is inside of a WITH clause
rather than part of the main query.
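A sketch of the kind of statement affected (table name invented):

    WITH ins AS (
        INSERT INTO parted_table VALUES (1, 'one') RETURNING *
    )
    SELECT * FROM ins;

Here parted_table is a partitioned table, and the INSERT appears in the
WITH clause rather than in the main query.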
Amit Langote, reported and reviewed by Etsuro Fujita, with a comment
change by me.
Tom Lane [Mon, 17 Jul 2017 20:43:03 +0000 (16:43 -0400)]
Doc: explain dollar quoting in the intro part of the pl/pgsql chapter.
We're throwing people into the guts of the syntax with not much context;
let's back up one step and point out that this goes inside a literal in
a CREATE FUNCTION command. Per suggestion from Kurt Kartaltepe.
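For instance, the body of a (hypothetical) function is written as a
dollar-quoted string literal inside the CREATE FUNCTION command:

    CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
    BEGIN
        RETURN i + 1;
    END;
    $$ LANGUAGE plpgsql;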
Tom Lane [Mon, 17 Jul 2017 19:28:16 +0000 (15:28 -0400)]
Merge large_object.sql test into largeobject.source.
It seems pretty confusing to have tests named both largeobject and
large_object. The latter is of very recent vintage (commit ff992c074),
so get rid of it in favor of merging into the former.
Also, enable the LO comment test that was added by commit 70ad7ed4e,
since the later commit added the then-missing pg_upgrade functionality.
The large_object.sql test case is almost completely redundant with that,
but not quite: it seems like creating a user-defined LO with an OID in
the system range might be an interesting case for pg_upgrade, so let's
keep it.
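(For instance, a test can do something like

    SELECT lo_create(3001);  -- OID below FirstNormalObjectId (16384)

to create a large object whose OID falls in the system range.)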
Like the earlier patch, back-patch to all supported branches.
Robert Haas [Mon, 17 Jul 2017 16:03:35 +0000 (12:03 -0400)]
hash: Fix write-ahead logging bugs related to init forks.
One, logging for CREATE INDEX was oblivious to the fact that when
an unlogged table is created, *only* operations on the init fork
should be logged.
Two, init fork buffers need to be flushed after they are written;
otherwise, a filesystem-level copy following recovery may do the
wrong thing. (There may be a better fix for this issue than the
one used here, but this is transposed from the similar logic already
present in XLogReadBufferForRedoExtended, and a broader refactoring
after beta2 seems inadvisable.)
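The first problem is reachable with something as simple as (names
invented):

    CREATE UNLOGGED TABLE utab (x int);
    CREATE INDEX utab_hash_idx ON utab USING hash (x);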
Amit Kapila, reviewed by Ashutosh Sharma, Kyotaro Horiguchi,
and Michael Paquier
Tom Lane [Sat, 15 Jul 2017 20:57:43 +0000 (16:57 -0400)]
Improve comments for execExpr.c's handling of FieldStore subexpressions.
Given this code's general eagerness to use subexpressions' output variables
as temporary workspace, it's not exactly clear that it is safe for
FieldStore to tell a newval subexpression that it can write into the same
variable that is being supplied as a potential input. Document the chain
of assumptions needed for that to be safe.
Tom Lane [Sat, 15 Jul 2017 18:03:32 +0000 (14:03 -0400)]
Improve comments for execExpr.c's isAssignmentIndirectionExpr().
I got confused about why this function doesn't need to recursively
search the expression tree for a CaseTestExpr node. After figuring
that out, add a comment to save the next person some time.
Tom Lane [Fri, 14 Jul 2017 19:25:43 +0000 (15:25 -0400)]
Code review for NextValueExpr expression node type.
Add missing infrastructure for this node type, notably in ruleutils.c where
its lack could demonstrably cause EXPLAIN to fail. Add outfuncs/readfuncs
support. (outfuncs support is useful today for debugging purposes. The
readfuncs support may never be needed, since at present it would only
matter for parallel query and NextValueExpr should never appear in a
parallelizable query; but it seems like a bad idea to have a primnode type
that isn't fully supported here.) Teach planner infrastructure that
NextValueExpr is a volatile, parallel-unsafe, non-leaky expression node
with cost cpu_operator_cost. Given its limited scope of usage, there
*might* be no live bug today from the lack of that knowledge, but it's
certainly going to bite us on the rear someday. Teach pg_stat_statements
about the new node type, too.
While at it, also teach cost_qual_eval() that MinMaxExpr, SQLValueFunction,
XmlExpr, and CoerceToDomain should be charged as cpu_operator_cost.
Failing to do this for SQLValueFunction was an oversight in my commit
0bb51aa96. The others are longer-standing oversights, but no time like
the present to fix them. (In principle, CoerceToDomain could have cost much
higher than this, but it doesn't presently seem worth trying to examine the
domain's constraints here.)
Modify execExprInterp.c to execute NextValueExpr as an out-of-line
function; it seems quite unlikely to me that it's worth insisting that
it be inlined in all expression eval methods. Besides, providing the
out-of-line function doesn't stop anyone from inlining if they want to.
Adjust some places where NextValueExpr support had been inserted with the
aid of a dartboard rather than keeping it in the same order as elsewhere.
Many of our Makefiles that use libpgfeutils linked it with a recipe
along the lines of

    LDFLAGS += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)

That method is badly broken, as seen in bug #14742 from Chris Ruprecht.
The -L flag for src/fe_utils ends up being placed after whatever random
-L flags are in LDFLAGS already. That puts us at risk of pulling in
libpgfeutils.a from some previous installation rather than the freshly
built one in src/fe_utils. Also, the lack of an "override" is hazardous
if someone tries to specify some LDFLAGS on the make command line.

Fix by changing the recipe to be like

    override LDFLAGS := -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport) $(LDFLAGS)

so that libpgfeutils, along with libpq, libpgport, and libpgcommon, are
guaranteed to be pulled in from the build tree and not from any referenced
system directory, because their -L flags will appear first.

In some places we'd been even lazier and done it like this:

    LDFLAGS += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpq

which is subtly wrong in an additional way: on platforms where we can't
restrict the symbols exported by libpq.so, it allows libpgfeutils to
latch onto libpgport and libpgcommon symbols from libpq.so, rather than
directly from those static libraries as intended. This carries hazards
like those explained in the comments for the libpq_pgport macro.

In addition to fixing the broken libpgfeutils usages, I tried to
standardize on using $(libpq_pgport) like so:

    override LDFLAGS := $(libpq_pgport) $(LDFLAGS)
even where libpgfeutils is not in the picture. This makes no difference
right now but will hopefully discourage future mistakes of the same ilk.
And it's more like the way we handle CPPFLAGS in libpq-using Makefiles.
In passing, just for consistency, make pgbench include PTHREAD_LIBS the
same way everyplace else does, ie just after LIBS rather than in some
random place in the command line. This might have practical effect if
there are -L switches in that macro on some platform.
It looks to me like the MSVC build scripts are not affected by this
error, but someone more familiar with them than I might want to double
check.
Back-patch to 9.6 where libpgfeutils was introduced. In 9.6, the hazard
this error creates is that a reinstallation might link to the prior
installation's copy of libpgfeutils.a and thereby fail to absorb a
minor-version bug fix.
When writing a backup to stdout with pg_basebackup on Windows, put stdout
into binary mode. Any LF bytes in the output will otherwise be output
incorrectly as CR+LF.
In passing, standardize on using "_setmode" instead of "setmode", for
the sake of consistency. They both do the same thing, but according to
MSDN documentation, setmode is deprecated.
Fixes bug #14634, reported by Henry Boehlert. Patch by Haribabu Kommi.
Backpatch to all supported versions.
Tom Lane [Thu, 13 Jul 2017 23:24:44 +0000 (19:24 -0400)]
Fix dumping of FUNCTION RTEs that contain non-function-call expressions.
The grammar will only accept something syntactically similar to a function
call in a function-in-FROM expression. However, there are various ways
to input something that ruleutils.c won't deparse that way, potentially
leading to a view or rule that fails dump/reload. Fix by inserting a
dummy CAST around anything that isn't going to deparse as a function
(which is one of the ways to get something like that in there in the
first place).
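For example, a (hypothetical) view like

    CREATE VIEW v AS SELECT * FROM CAST(now() AS text) AS f(t);

puts a type coercion rather than a function call into a FUNCTION RTE.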
In HEAD, also make use of the infrastructure added by this to avoid
emitting unnecessary parentheses in CREATE INDEX deparsing. I did
not change that in back branches, thinking that people might find it
to be unexpected/unnecessary behavioral change.
In HEAD, also fix incorrect logic for when to add extra parens to
partition key expressions. Somebody apparently thought they could
get away with simpler logic than pg_get_indexdef_worker has, but
they were wrong --- a counterexample is PARTITION BY LIST ((a[1])).
Ignoring the prettyprint flag for partition expressions isn't exactly
a nice solution anyway.
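That is, a declaration such as

    CREATE TABLE parted (a int[]) PARTITION BY LIST ((a[1]));

must deparse with the inner parentheses preserved, i.e. as ((a[1]))
rather than (a[1]).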
This has been broken all along, so back-patch to all supported branches.
Fix race between GetNewTransactionId and GetOldestActiveTransactionId.
The race condition goes like this:
1. GetNewTransactionId advances nextXid e.g. from 100 to 101
2. GetOldestActiveTransactionId reads the new nextXid, 101
3. GetOldestActiveTransactionId loops through the proc array. There are no
active XIDs there, so it returns 101 as the oldest active XID.
4. GetNewTransactionId stores XID 100 to MyPgXact->xid
So, GetOldestActiveTransactionId returned XID 101, even though 100 only
just started and is surely still running.
This would be hard to hit in practice, and even harder to spot any ill
effect if it happens. GetOldestActiveTransactionId is only used when
creating a checkpoint in a master server, and the race condition can only
happen on an online checkpoint, as there are no backends running during a
shutdown checkpoint. The oldestActiveXid value of an online checkpoint is
only used when starting up a hot standby server, to determine the starting
point where pg_subtrans is initialized from. For the race condition to
happen, there must be no other XIDs in the proc array that would hold back
the oldest-active XID value, which means that the missed XID must be a top
transaction's XID. However, pg_subtrans is not used for top XIDs, so I
believe an off-by-one error is in fact inconsequential. Nevertheless, let's
fix it, as it's clearly wrong and the fix is simple.
This has been wrong ever since hot standby was introduced, so backport to
all supported versions.
Tom Lane [Wed, 12 Jul 2017 22:00:04 +0000 (18:00 -0400)]
Fix ruleutils.c for domain-over-array cases, too.
Further investigation shows that ruleutils isn't quite up to speed either
for cases where we have a domain-over-array: it needs to be prepared to
look past a CoerceToDomain at the top level of field and element
assignments, else it decompiles them incorrectly. Potentially this would
result in failure to dump/reload a rule, if it looked like the one in the
new test case. (I also added a test for EXPLAIN; that output isn't broken,
but clearly we need more test coverage here.)
Like commit b1cb32fb6, this bug is reachable in cases we already support,
so back-patch all the way.
Reduce memory usage of tsvector type analyze function.
compute_tsvector_stats() detoasted and kept in memory every tsvector value
in the sample, but that can be a lot of memory. The original bug report
described a case using over 10 gigabytes, with statistics target of 10000
(the maximum).
To fix, allocate a separate copy of just the lexemes that we keep around,
and free the detoasted tsvector values as we go. This adds some palloc/pfree
overhead, when you have a lot of distinct lexemes in the sample, but it's
better than running out of memory.
Fixes bug #14654 reported by James C. Reviewed by Tom Lane. Backport to
all supported versions.
Tom Lane [Wed, 12 Jul 2017 17:24:16 +0000 (13:24 -0400)]
Avoid integer overflow while sifting-up a heap in tuplesort.c.
If the number of tuples in the heap exceeds approximately INT_MAX/2,
this loop's calculation "2*i+1" could overflow, resulting in a crash.
Fix it by using unsigned int rather than int for the relevant local
variables; that shouldn't cost anything extra on any popular hardware.
Per bug #14722 from Sergey Koposov.
Original patch by Sergey Koposov, modified by me per a suggestion
from Heikki Linnakangas to use unsigned int not int64.
Back-patch to 9.4, where tuplesort.c grew the ability to sort as many
as INT_MAX tuples in-memory (commit 263865a48).
Fix ordering of operations in SyncRepWakeQueue to avoid assertion failure.
Commit 14e8803f1 removed the locking in SyncRepWaitForLSN, but that
introduced a race condition, where SyncRepWaitForLSN might see
syncRepState already set to SYNC_REP_WAIT_COMPLETE, but the process was
not yet removed from the queue. That tripped the assertion that the
process should no longer be in the queue. Reorder the operations in
SyncRepWakeQueue to remove the process from the queue first, and update
syncRepState only after that, and add a memory barrier in between to make
sure the operations are made visible to other processes in that order.
Fixes bug #14721 reported by Const Zhang. Analysis and fix by Thomas Munro.
Backpatch down to 9.5, where the locking was removed.
Tom Lane [Tue, 11 Jul 2017 20:48:59 +0000 (16:48 -0400)]
Fix multiple assignments to a column of a domain type.
We allow INSERT and UPDATE commands to assign to the same column more than
once, as long as the assignments are to subfields or elements rather than
the whole column. However, this failed when the target column was a domain
over array rather than plain array. Fix by teaching process_matched_tle()
to look through CoerceToDomain nodes, and add relevant test cases.
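For example (domain and table names invented), this now works:

    CREATE DOMAIN intarr AS integer[];
    CREATE TABLE dtest (arr intarr);
    INSERT INTO dtest VALUES ('{1,2,3}');
    UPDATE dtest SET arr[1] = 10, arr[2] = 20;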
Also add a group of test cases exercising domains over array of composite.
It's doubtless accidental that CREATE DOMAIN allows this case while not
allowing straight domain over composite; but it does, so we'd better make
sure we don't break it. (I could not find any documentation mentioning
either side of that, so no doc changes.)
It's been like this for a long time, so back-patch to all supported
branches.
Tom Lane [Mon, 10 Jul 2017 15:00:09 +0000 (11:00 -0400)]
On Windows, retry process creation if we fail to reserve shared memory.
We've heard occasional reports of backend launch failing because
pgwin32_ReserveSharedMemoryRegion() fails, indicating that something
has already used that address space in the child process. It's not
very clear what, given that we disable ASLR in Windows builds, but
suspicion falls on antivirus products. It'd be better if we didn't
have to disable ASLR, anyway. So let's try to ameliorate the problem
by retrying the process launch after such a failure, up to 100 times.
Patch by me, based on previous work by Amit Kapila and others.
This is a longstanding issue, so back-patch to all supported branches.
Andrew Gierth [Mon, 10 Jul 2017 10:40:08 +0000 (11:40 +0100)]
Fix COPY's handling of transition tables with indexes.
Commit c46c0e5202e8cfe750c6629db7852fdb15d528f3 failed to pass the
TransitionCaptureState object to ExecARInsertTriggers() in the case
where it's using heap_multi_insert and there are indexes. Repair.
Thomas Munro, from a report by David Fetter
Discussion: https://postgr.es/m/20170708084213.GA14720%40fetter.org
Allow multiple hostaddrs to go with multiple hostnames.
Also fix two other issues, while we're at it:
* In error message on connection failure, if multiple network addresses
were given as the host option, as in "host=127.0.0.1,127.0.0.2", the
error message printed the address twice.
* If there were many more ports than hostnames, the error message would
always claim that there was one port too many, even if there was more than
one. For example, if you gave 2 hostnames and 5 ports, the error message
claimed that you gave 2 hostnames and 3 ports.
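For example, a connection string can now pair each host with a
corresponding address, along the lines of

    host=db1.example.com,db2.example.com hostaddr=192.0.2.1,192.0.2.2 port=5432,5432

(hostnames and addresses invented).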
Tom Lane [Mon, 10 Jul 2017 04:44:05 +0000 (00:44 -0400)]
Doc: remove claim that PROVE_FLAGS defaults to '--verbose'.
Commit e9c81b601 changed this, but missed updating the documentation.
The adjacent claim that we use TAP tests only in src/bin seems pretty
obsolete as well. Minor other copy-editing.
Tom Lane [Mon, 10 Jul 2017 04:08:19 +0000 (00:08 -0400)]
Doc: clarify wording about tool requirements in sourcerepo.sgml.
Original wording had confusingly vague antecedent for "they", so replace
that with a more repetitive but clearer formulation. In passing, make the
link to the installation requirements section more specific. Per gripe
from Martin Mai, though this is not the fix he initially proposed.
The affected script generates an empty file, so libpq.dll advertises no
version information. Commit facde2a98f0b5f7689b4e30a9e7376e926e733b8
mistranslated "print O;" in this one place.
Tom Lane [Sat, 8 Jul 2017 16:42:25 +0000 (12:42 -0400)]
Avoid unreferenced-function warning on low-functionality platforms.
On platforms lacking both locale_t and ICU, collationcmds.c failed
to make any use of its static function is_all_ascii(), thus probably
drawing a compiler warning. Oversight in my commit ddb5fdc06.
Per buildfarm member gaur.
Dean Rasheed [Thu, 6 Jul 2017 11:46:08 +0000 (12:46 +0100)]
Clarify the contract of partition_rbound_cmp().
partition_rbound_cmp() is intended to compare range partition bounds
in a way such that if all the bound values are equal but one is an
upper bound and one is a lower bound, the upper bound is treated as
smaller than the lower bound. This particular ordering is required by
RelationBuildPartitionDesc() when building the PartitionBoundInfoData,
so that it can consistently keep only the upper bounds when upper and
lower bounds coincide.
Update the function comment to make that clearer.
Also, fix a (currently unreachable) corner-case bug -- if the bound
values coincide and they contain unbounded values, fall through to the
lower-vs-upper comparison code, rather than immediately returning
0. Currently it is not possible to define coincident upper and lower
bounds containing unbounded columns, but that may change in the
future, so code defensively.
Dean Rasheed [Thu, 6 Jul 2017 08:58:06 +0000 (09:58 +0100)]
Simplify the logic checking new range partition bounds.
The previous logic, whilst not actually wrong, was overly complex and
involved doing two binary searches, where only one was really
necessary. This simplifies that logic and improves the comments.
One visible change is that if the new partition overlaps multiple
existing partitions, the error message now always reports the overlap
with the first existing partition (the one with the lowest
bounds). The old code would sometimes report the clash with the first
partition and sometimes with the last one.
Original patch idea from Amit Langote, substantially rewritten by me.
Tom Lane [Thu, 6 Jul 2017 03:59:20 +0000 (23:59 -0400)]
Fix another race-condition-ish issue in recovery/t/001_stream_rep.pl.
Buildfarm members hornet and sungazer have shown multiple instances of
"Failed test 'xmin of non-cascaded slot with hs feedback has changed'".
The reason seems to be that the test is checking the current xmin of the
master server's replication slot against a past xmin of the first slave
server's replication slot. Even though the latter slot is downstream of
the former, it's possible for its reported xmin to be ahead of the former's
reported xmin, because those numbers are updated whenever the respective
downstream walreceiver feels like it (see logic in WalReceiverMain).
Instrumenting this test shows that indeed the slave slot's xmin does often
advance before the master's does, especially if an autovacuum transaction
manages to occur during the relevant window. If we happen to capture such
an advanced xmin as $xmin, then the subsequent wait_slot_xmins call can
fall through before the master's xmin has advanced at all, and then if it
advances before the get_slot_xmins call, we can get the observed failure.
Yeah, that's a bit of a long chain of deduction, but it's hard to explain
any other way how the test can get past an "xmin <> '$xmin'" check only
to have the next query find that xmin does equal $xmin.
Fix by keeping separate images of the master and slave slots' xmins
and testing their has-xmin-advanced conditions independently.
pg_ctl: Make failure to complete operation a nonzero exit
If an operation being waited for does not complete within the timeout,
then exit with a nonzero exit status. This was previously handled
inconsistently.
Peter Eisentraut [Thu, 22 Jun 2017 02:57:23 +0000 (22:57 -0400)]
Fix output of char node fields
WRITE_CHAR_FIELD() didn't do any escaping, so that for example a zero
byte would cause the whole output string to be truncated. To fix, pass
the char through outToken(), so it is escaped like a string. Adjust the
reading side to handle this.
Update the documentation for \pset to mention
columns|linestyle|pager_min_lines. Add various mentions of \pset
command equivalences that were previously inconsistent.
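For instance:

    \pset columns 100
    \pset linestyle unicode
    \pset pager_min_lines 20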
Forbid gen_random_uuid() with --disable-strong-random
Previously, gen_random_uuid() would fall back to a weak random number
generator, unlike gen_random_bytes() which would just fail. And this was
not made very clear in the docs. For consistency, also make
gen_random_uuid() fail outright, if compiled with --disable-strong-random.
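That is, in such a build, something like

    SELECT gen_random_uuid();

now fails with an error rather than silently using a weak random number
generator.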
Re-word the error message you get with --disable-strong-random. It is also
used by pgp functions that require random salts, and now also
gen_random_uuid().
Tom Lane [Mon, 3 Jul 2017 02:01:19 +0000 (22:01 -0400)]
Fix race condition in recovery/t/009_twophase.pl test.
Since reducing pg_ctl's reaction time in commit c61559ec3, some
slower buildfarm members have shown erratic failures in this test.
The reason turns out to be that the test assumes synchronous
replication (because it does not provide any lag time for a commit
to replicate before shutting down the servers), but it had only
enabled sync rep in one direction. The observed symptoms correspond
to failure to replicate the last committed transaction in the other
direction, which can be expected to happen if the shutdown command
is issued soon enough and we are providing no synchronous-commit
guarantees.
Fix that, and add a bit more paranoid state checking at the bottom
of the script.
Tom Lane [Sun, 2 Jul 2017 21:22:09 +0000 (17:22 -0400)]
Fix bug in PostgresNode::query_hash's split() call.
By default, Perl's split() function drops trailing empty fields,
which is not what we want here. Oversight in commit fb093e4cb.
We'd managed to miss it thus far thanks to the very limited usage
of this function.
Tom Lane [Sun, 2 Jul 2017 19:27:29 +0000 (15:27 -0400)]
Try to improve readability of recovery/t/009_twophase.pl test.
The original coding here was very confusing, because it named the
two servers it set up "master" and "slave" even though it swapped
their replication roles multiple times. At any given point in the
script it was very unobvious whether "$node_master" actually referred
to the server named "master" or the other one. Instead, pick arbitrary
names for the two servers --- I used "london" and "paris" --- and
distinguish those permanent names from the nonce references $cur_master
and $cur_slave. Add logging to help distinguish which is which at
any given point. Also, use distinct data and transaction names to
make all the prepared transactions easily distinguishable in the
postmaster logs. (There was one place where we intentionally tested
that the server could cope with re-use of a transaction name, but
it seems like one place is sufficient for that purpose.)
Also, add checks at the end to make sure that all the transactions
that were supposed to be committed did survive.
Tom Lane [Sun, 2 Jul 2017 18:03:41 +0000 (14:03 -0400)]
Improve TAP test function PostgresNode::poll_query_until().
Add an optional "expected" argument to override the default assumption
that we're waiting for the query to return "t". This allows replacing
a handwritten polling loop in recovery/t/007_sync_rep.pl with use of
poll_query_until(); AFAICS that's the only remaining ad-hoc polling
loop in our TAP tests.
Change poll_query_until() to probe ten times per second not once per
second. Like some similar changes I've been making recently, the
one-second interval seems to be rooted in ancient traditions rather
than the actual likely wait duration on modern machines. I'd consider
reducing it further if there were a convenient way to spawn just one
psql for the whole loop rather than one per probe attempt.
doc: Document that logical replication supports synchronous replication
Update the documentation a bit to note that logical replication, as well
as other third-party replication clients, can participate in synchronous
replication.
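For instance, a logical replication subscriber is identified by its
application_name, which defaults to the subscription name, so a
(hypothetical) setup could use

    # in postgresql.conf on the publisher
    synchronous_standby_names = 'mysub'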
Peter Eisentraut [Sat, 24 Jun 2017 13:39:24 +0000 (09:39 -0400)]
Refine memory allocation in ICU conversions
The simple calculations done to estimate the size of the output buffers
for ucnv_fromUChars() and ucnv_toUChars() could overflow int32_t for
large strings. To avoid that, go the long way and run the function
first without an output buffer to get the correct output buffer size
requirement.
Tom Lane [Sat, 1 Jul 2017 18:25:09 +0000 (14:25 -0400)]
Clean up misuse and nonuse of poll_query_until().
Several callers of PostgresNode::poll_query_until() neglected to check
for failure; I do not think that's optional. Also, rewrite one place
that had reinvented poll_query_until() for no very good reason.
Tom Lane [Sat, 1 Jul 2017 16:15:51 +0000 (12:15 -0400)]
Reduce delay for last logicalrep feedback message when master goes idle.
The regression tests contain numerous cases where we do some activity on a
master server and then wait till the slave has ack'd flushing its copy of
that transaction. Because WAL flush on the slave is asynchronous to the
logicalrep worker process, the worker cannot send such a feedback message
during the LogicalRepApplyLoop iteration where it processes the last data
from the master. In the previous coding, the feedback message would come
out only when the loop's WaitLatchOrSocket call returned WL_TIMEOUT. That
requires one full second of delay (NAPTIME_PER_CYCLE); and to add insult
to injury, it could take more than that if the WaitLatchOrSocket was
interrupted a few times by latch-setting events.
In reality we can expect the slave's walwriter process to have flushed the
WAL data after, more or less, WalWriterDelay (typically 200ms). Hence,
if there are unacked transactions pending, make the wait delay only that
long rather than the full NAPTIME_PER_CYCLE. Also, move one of the
send_feedback() calls into the loop main line, so that we'll check for the
need to send feedback even if we were woken by a latch event and not either
socket data or timeout.
It's not clear how much this matters for production purposes, but
it's definitely helpful for testing.
Tom Lane [Sat, 1 Jul 2017 15:59:44 +0000 (11:59 -0400)]
Shorten timeouts while waiting for logicalrep worker slot attach/detach.
When waiting for a logical replication worker process to start or stop,
we have to busy-wait until we see it add or remove itself from the
LogicalRepWorker slot in shared memory. Those loops were using a
one-second delay between checks, but on any reasonably modern machine, it
doesn't take more than a couple of msec for a worker to spawn or shut down.
Reduce the loop delays to 10ms to avoid wasting quite so much time in the
related regression tests.
In principle, a better solution would be to fix things so that the waiting
process can be awakened via its latch at the right time. But that seems
considerably more invasive, which is undesirable for a post-beta fix.
Worker start/stop performance likely isn't of huge interest anyway for
production purposes, so we might not ever get around to it.
In passing, rearrange the second wait loop in logicalrep_worker_stop()
so that the lock is held at the top of the loop, thus saving one lock
acquisition/release per call, and making it look more like the other loop.
Alvaro Herrera [Fri, 30 Jun 2017 22:06:33 +0000 (18:06 -0400)]
Fix locking in WAL receiver/sender shmem state structs
In WAL receiver and WAL sender, some accesses to their corresponding
shared memory control structs were done without holding any kind of
lock, which could lead to inconsistent and possibly insecure results.
In walsender, fix by clarifying the locking rules and following them
correctly, as documented in the new comment in walsender_private.h;
namely that some members can be read in walsender itself without a lock,
because the only writes occur in the same process. The rest of the
struct requires spinlock for accesses, as usual.
In walreceiver, fix by always holding spinlock while accessing the
struct.
While there is potentially a problem in all branches, it is minor in
stable ones. This only became a real problem in pg10 because of quorum
commit in synchronous replication (commit 3901fd70cc7c), and a potential
security problem in walreceiver because a superuser() check was removed
by default monitoring roles (commit 25fff40798fc). Thus, no backpatch.
In passing, clean up some leftover braces which were used to create
unconditional blocks. Once upon a time these were used for
volatile-izing accesses to those shmem structs, which is no longer
required. Many other occurrences of this pattern remain.
Author: Michaël Paquier
Reported-by: Michaël Paquier
Reviewed-by: Masahiko Sawada, Kyotaro Horiguchi, Thomas Munro, Robert Haas
Discussion: https://postgr.es/m/CAB7nPqTWYqtzD=LN_oDaf9r-hAjUEPAy0B9yRkhcsLdRN8fzrw@mail.gmail.com
Tom Lane [Fri, 30 Jun 2017 18:57:06 +0000 (14:57 -0400)]
Fix race conditions and missed wakeups in syncrep worker signaling.
When a sync worker is waiting for the associated apply worker to notice
that it's in SYNCWAIT state, wait_for_worker_state_change() would just
patiently wait for that to happen. This generally required waiting for
the 1-second timeout in LogicalRepApplyLoop to elapse. Kicking the worker
via its latch makes things significantly snappier.
While at it, fix race conditions that could potentially result in crashes:
we can *not* call logicalrep_worker_wakeup_ptr() once we've released the
LogicalRepWorkerLock, because worker->proc might've been reset to NULL
after we do that (indeed, there's no really solid reason to believe that
the LogicalRepWorker slot even belongs to the same worker anymore).
In logicalrep_worker_wakeup(), we can just move the wakeup inside the
lock scope. In process_syncing_tables_for_apply(), a bit more code
rearrangement is needed.
Tom Lane [Fri, 30 Jun 2017 16:00:02 +0000 (12:00 -0400)]
Fix walsender to exit promptly if client requests shutdown.
It's possible for WalSndWaitForWal to be asked to wait for WAL that doesn't
exist yet. That's fine, in fact it's the normal situation if we're caught
up; but when the client requests shutdown we should not keep waiting.
The previous coding could wait indefinitely if the source server was idle.
In passing, improve the rather weak comments in this area, and slightly
rearrange some related code for better readability.
Peter Eisentraut [Fri, 30 Jun 2017 15:24:00 +0000 (11:24 -0400)]
Prohibit creating ICU collation with different ctype
ICU does not support "collate" and "ctype" being different, so the
collctype catalog column is ignored. But for catalog neatness, ensure
that they are the same.
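So, for example, something like this is now rejected:

    CREATE COLLATION broken (provider = icu,
                             lc_collate = 'en-US', lc_ctype = 'de-DE');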
Peter Eisentraut [Fri, 30 Jun 2017 12:50:39 +0000 (08:50 -0400)]
Copy collencoding in CREATE COLLATION / FROM
This command used to compute the collencoding entry like when a
completely new collation is created. But for example when copying the
"C" collation, this would then result in a collation that has a
collencoding entry for the current database encoding rather than -1,
thus not making an exact copy. This has probably no practical impact,
but making this change keeps the catalog contents neat.
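For example, after

    CREATE COLLATION mycoll FROM "C";

the new collation now has collencoding = -1, exactly like the "C"
collation it was copied from, rather than the current database encoding.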
Tom Lane [Thu, 29 Jun 2017 02:11:12 +0000 (22:11 -0400)]
Eat XIDs more efficiently in recovery TAP test.
The point of this loop is to insert 1000 rows into the test table
and consume 1000 XIDs. I can't see any good reason why it's useful
to launch 1000 psqls and 1000 backend processes to accomplish that.
Pushing the looping into a plpgsql DO block shaves about 10 seconds
off the runtime of the src/test/recovery TAP tests on my machine;
that's over 10% of the runtime of that test suite.
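The replacement is roughly along these lines (a sketch; the real test
differs in details):

    DO $$
    BEGIN
        FOR i IN 1..1000 LOOP
            -- the EXCEPTION clause forces a subtransaction, so each
            -- iteration consumes an XID
            BEGIN
                INSERT INTO tab_int VALUES (i);
            EXCEPTION
                WHEN division_by_zero THEN NULL;
            END;
        END LOOP;
    END;
    $$;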
It is, in fact, sufficiently more efficient that we now demonstrably
need wait_slot_xmins() afterwards, or the slaves' xmins may not have
moved yet.
Tom Lane [Wed, 28 Jun 2017 21:31:24 +0000 (17:31 -0400)]
Change pg_ctl to detect server-ready by watching status in postmaster.pid.
Traditionally, "pg_ctl start -w" has waited for the server to become
ready to accept connections by attempting a connection once per second.
That has the major problem that connection issues (for instance, a
kernel packet filter blocking traffic) can't be reliably told apart
from server startup issues, and the minor problem that if server startup
isn't quick, we accumulate "the database system is starting up" spam
in the server log. We've hacked around many of the possible connection
issues, but it resulted in ugly and complicated code in pg_ctl.c.
In commit c61559ec3, I changed the probe rate to every tenth of a second.
That prompted Jeff Janes to complain that the log-spam problem had become
much worse. In the ensuing discussion, Andres Freund pointed out that
we could dispense with connection attempts altogether if the postmaster
were changed to report its status in postmaster.pid, which "pg_ctl start"
already relies on being able to read. This patch implements that, teaching
postmaster.c to report a status string into the pidfile at the same
state-change points already identified as being of interest for systemd
status reporting (cf commit 7d17e683f). pg_ctl no longer needs to link
with libpq at all; all its functions now depend on reading server files.
In support of this, teach AddToDataDirLockFile() to allow addition of
postmaster.pid lines in not-necessarily-sequential order. This is needed
on Windows where the SHMEM_KEY line will never be written at all. We still
have the restriction that we don't want to truncate the pidfile; document
the reasons for that a bit better.
Also, fix the pg_ctl TAP tests so they'll notice if "start -w" mode
is broken --- before, they'd just wait out the sixty seconds until
the loop gives up, and then report success anyway. (Yes, I found that
out the hard way.)
While at it, arrange for pg_ctl to not need to #include miscadmin.h;
as a rather low-level backend header, requiring that to be compilable
client-side is pretty dubious. This requires moving the #define's
associated with the pidfile into a new header file, and moving
PG_BACKEND_VERSIONSTR someplace else. For lack of a clearly better
"someplace else", I put it into port.h, beside the declaration of
find_other_exec(), since most users of that macro are passing the value to
find_other_exec(). (initdb still depends on miscadmin.h, but at least
pg_ctl and pg_upgrade no longer do.)
In passing, fix main.c so that PG_BACKEND_VERSIONSTR actually defines the
output of "postgres -V", which remarkably it had never done before.
Andrew Gierth [Wed, 28 Jun 2017 18:00:55 +0000 (19:00 +0100)]
Fix transition tables for ON CONFLICT.
We now disallow having triggers with both transition tables and ON
INSERT OR UPDATE (which was a PG extension to the spec anyway),
because in this case it's not at all clear how the transition tables
should work for an INSERT ... ON CONFLICT query. Separate ON INSERT
and ON UPDATE triggers with transition tables are allowed, and the
transition tables for these reflect only the inserted and only the
updated tuples respectively.
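That is, something like this is now rejected:

    CREATE TRIGGER bad_trig
        AFTER INSERT OR UPDATE ON tbl
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE trig_func();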
Andrew Gierth [Wed, 28 Jun 2017 17:55:03 +0000 (18:55 +0100)]
Fix transition tables for partition/inheritance.
We disallow row-level triggers with transition tables on child tables.
Transition tables for triggers on the parent table contain only those
columns present in the parent. (We can't mix tuple formats in a
single transition table.)
Tom Lane [Wed, 28 Jun 2017 16:30:16 +0000 (12:30 -0400)]
Second try at fixing tcp_keepalives_idle option on Solaris.
Buildfarm evidence shows that TCP_KEEPALIVE_THRESHOLD doesn't exist
after all on Solaris < 11. This means we need to take positive action to
prevent the TCP_KEEPALIVE code path from being taken on that platform.
I've chosen to limit it with "&& defined(__darwin__)", since it's unclear
that anyone else would follow Apple's precedent of spelling the symbol
that way.
Also, follow a suggestion from Michael Paquier of eliminating code
duplication by defining a couple of intermediate symbols for the
socket option.
In passing, make some effort to reduce the number of translatable messages
by replacing "setsockopt(foo) failed" with "setsockopt(%s) failed", etc,
throughout the affected files. And update relevant documentation so
that it doesn't claim to provide an exhaustive list of the possible
socket option names.
Like the previous commit (f0256c774), back-patch to all supported branches.
Stephen Frost [Wed, 28 Jun 2017 14:33:57 +0000 (10:33 -0400)]
Do not require 'public' to exist for pg_dump -c
Commit 330b84d8c4 didn't contemplate the case where the public schema
has been dropped and introduced a query which fails when there is no
public schema into pg_dump (when used with -c).
Adjust the query used by pg_dump to handle the case where the public
schema doesn't exist and add tests to check that such a case no longer
fails.
Back-patch the specific fix to 9.6, as the prior commit was.
Adding tests for this case involved adding support to the pg_dump
TAP tests to work with multiple databases, which, while not a large
change, is a bit much to back-patch, so that's only done in master.
Tom Lane [Tue, 27 Jun 2017 22:47:57 +0000 (18:47 -0400)]
Support tcp_keepalives_idle option on Solaris.
Turns out that the socket option for this is named TCP_KEEPALIVE_THRESHOLD,
at least according to the tcp(7P) man page for Solaris 11. (But since that
text refers to "SunOS", it's likely pretty ancient.) It appears that the
symbol TCP_KEEPALIVE does get defined on that platform, but it doesn't
seem to represent a valid protocol-level socket option. This leads to
bleats in the postmaster log, and no tcp_keepalives_idle functionality.
Per bug #14720 from Andrey Lizenko, as well as an earlier report from
Dhiraj Chawla that nobody had followed up on. The issue's been there
since we added the TCP_KEEPALIVE code path in commit 5acd417c8, so
back-patch to all supported branches.
Tom Lane [Tue, 27 Jun 2017 21:51:11 +0000 (17:51 -0400)]
Re-allow SRFs and window functions within sub-selects within aggregates.
check_agg_arguments_walker threw an error upon seeing a SRF or window
function, but that is too aggressive: if the function is within a
sub-select then it's perfectly fine. I broke the SRF case in commit
0436f6bde by copying the logic for window functions ... but that was
broken too, and had been since commit eaccfded9.
Repair both cases in HEAD, and the window function case back to 9.3.
9.2 gets this right.
Tom Lane [Mon, 26 Jun 2017 23:01:26 +0000 (19:01 -0400)]
Reduce wal_retrieve_retry_interval in applicable TAP tests.
By default, wal_retrieve_retry_interval is five seconds, which is far
more than is needed in any of our TAP tests, leaving the test cases
just twiddling their thumbs for significant stretches. Moreover,
because it's so large, we get basically no testing of the retry-before-
master-is-ready code path. Hence, make PostgresNode::init set up
wal_retrieve_retry_interval = '500ms' as part of its customization of
test clusters' postgresql.conf. This shaves quite a few seconds off
the runtime of the recovery TAP tests.
Back-patch into 9.6. We have wal_retrieve_retry_interval in 9.5,
but the test infrastructure isn't there.