Tom Lane [Fri, 28 Jul 2017 18:25:28 +0000 (14:25 -0400)]
PL/Perl portability fix: absorb relevant -D switches from Perl.
The Perl documentation is very clear that stuff calling libperl should
be built with the compiler switches shown by Perl's $Config{ccflags}.
We'd been ignoring that up to now, and mostly getting away with it,
but recent Perl versions contain ABI compatibility cross-checks that
fail on some builds because of this omission. In particular the
sizeof(PerlInterpreter) can come out different due to some fields being
added or removed, which means we have a live ABI hazard that we'd better
fix rather than continuing to sweep it under the rug.
However, it still seems like a bad idea to just absorb $Config{ccflags}
verbatim. In some environments Perl was built with a different compiler
that doesn't even use the same switch syntax. -D switch syntax is pretty
universal though, and absorbing Perl's -D switches really ought to be
enough to fix the problem.
Furthermore, Perl likes to inject stuff like -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64 into $Config{ccflags}, which affect libc ABIs on
platforms where they're relevant. Adopting those seems dangerous too.
It's unclear whether a build wherein Perl and Postgres have different ideas
of sizeof(off_t) etc would work, or whether anyone would care about making
it work. But it's dead certain that having different stdio ABIs in
core Postgres and PL/Perl will not work; we've seen that movie before.
Therefore, let's also ignore -D switches for symbols beginning with
underscore. The symbols that we actually need to import should be the ones
mentioned in perl.h's PL_bincompat_options stanza, and none of those start
with underscore, so this seems likely to work. (If it turns out not to
work everywhere, we could consider intersecting the symbols mentioned in
PL_bincompat_options with the -D switches. But that will be much more
complicated, so let's try this way first.)
This will need to be back-patched, but first let's see what the
buildfarm makes of it.
Tom Lane [Fri, 28 Jul 2017 16:25:43 +0000 (12:25 -0400)]
PL/Perl portability fix: avoid including XSUB.h in plperl.c.
In Perl builds that define PERL_IMPLICIT_SYS, XSUB.h defines macros
that replace a whole lot of basic libc functions with Perl functions.
We can't tolerate that in plperl.c; it breaks at least PG_TRY and
probably other stuff. The core idea of this patch is to include XSUB.h
only in the .xs files where it's really needed, and to move any code
broken by PERL_IMPLICIT_SYS out of the .xs files and into plperl.c.
The reason this hasn't been a problem before is that our build techniques
did not result in PERL_IMPLICIT_SYS appearing as a #define in PL/Perl,
even on some platforms where Perl thinks it is defined. That's about to
change in order to fix a nasty portability issue, so we need this work to
make the code safe for that.
Rather unaccountably, the Perl people chose XSUB.h as the place to provide
the versions of the aTHX/aTHX_ macros that are needed by code that's not
explicitly aware of the MULTIPLICITY API conventions. Hence, just removing
XSUB.h from plperl.c fails miserably. But we can work around that by
defining PERL_NO_GET_CONTEXT (which would make the relevant stanza of
XSUB.h a no-op anyway). As explained in perlguts.pod, that means we need
to add a "dTHX" macro call in every C function that calls a Perl API
function. In most of them we just add this at the top; but since the
macro fetches the current Perl interpreter pointer, more care is needed
in functions that switch the active interpreter. Lack of the macro is
easily recognized since it results in bleats about "my_perl" not being
defined.
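For the simple case, a converted function looks about like this (a sketch:
the helper is hypothetical, sv_setpv is a real Perl API call):

    #define PERL_NO_GET_CONTEXT
    #include "EXTERN.h"
    #include "perl.h"

    /* hypothetical helper that calls into the Perl API */
    static void
    set_scalar_to_text(SV *sv, const char *value)
    {
        dTHX;                   /* fetch interpreter pointer into my_perl */
        sv_setpv(sv, value);    /* expands to use my_perl implicitly */
    }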
(A nice side benefit of this is that it significantly reduces the number
of fetches of the current interpreter pointer. On my machine, plperl.so
gets more than 10% smaller, and there's probably some performance win too.
We could reduce the number of fetches still more by decorating the code
with pTHX_/aTHX_ macros to pass the interpreter pointer around, as
explained by perlguts.pod; but that's a task for another day.)
Formatting note: pgindent seems happy to treat "dTHX;" as a declaration
so long as it's the first thing after the left brace, as we'd already
observed with respect to the similar macro "dSP;". If you try to put
it later in a set of declarations, pgindent puts ugly extra space
around it.
Having removed XSUB.h from plperl.c, we need only move the support
functions for spi_return_next and util_elog (both of which use PG_TRY)
out of the .xs files and into plperl.c. This seems sufficient to
avoid the known problems caused by PERL_IMPLICIT_SYS, although we
could move more code if additional issues emerge.
This will need to be back-patched, but first let's see what the
buildfarm makes of it.
Tom Lane [Thu, 27 Jul 2017 17:30:59 +0000 (13:30 -0400)]
Standardize describe.c's behavior for no-matching-objects a bit more.
Most functions in this file are content to print an empty table if there
are no matching objects. In some, the behavior is to loop over all
matching objects and print a table for each one; therefore, without any
extra logic, nothing at all would be printed if no objects match.
We accept that outcome in QUIET mode, but in normal mode it seems better
to print a helpful message. The new \dRp+ command had not gotten that
memo; fix it.
listDbRoleSettings() is out of step on this, but I think it's better for
it to print a custom message rather than an empty table, because of the
possibility that the user is confused about what the pattern arguments mean
or which is which. The original message wording was entirely useless for
clarifying that, though, not to mention being unlike the wordings used
elsewhere. Improve the text, and also print the messages with psql_error
as is the general custom here.
listTables() is also out in left field, but since it's such a heavily
used function, I'm hesitant to change its behavior so much as to print
an empty table rather than a custom message. People are probably used
to getting a message. But we can make the wording more standardized and
helpful, and print it with psql_error rather than printing to stdout.
In both listDbRoleSettings and listTables, we play dumb and emit an
empty table, not a custom message, in QUIET mode. That was true before
and I see no need to change it.
Several of the places printing such messages risked dumping core if
no pattern string had been provided; make them more wary. (This case
is presently unreachable in describeTableDetails; but it shouldn't be
assuming that command.c will never pass it a null. The text search
functions would only reach the case if a database contained no text
search objects, which is also currently impossible since we pin the
built-in objects, but again it seems unwise to assume that here.)
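The pattern-wary message emission now has roughly this shape (a sketch, not
the exact committed wording):

    if (PQntuples(res) == 0 && !pset.quiet)
    {
        if (pattern)
            psql_error("Did not find any text search parser named \"%s\".\n",
                       pattern);
        else
            psql_error("Did not find any text search parsers.\n");
    }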
Tom Lane [Thu, 27 Jul 2017 16:12:37 +0000 (12:12 -0400)]
Avoid use of sprintf/snprintf in describe.c.
Most places were already using the PQExpBuffer library for constructing
variable-length strings; bring the two stragglers into line.
describeOneTSParser was living particularly dangerously since it wasn't
even using snprintf().
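For reference, the PQExpBuffer idiom used throughout describe.c goes like
this (a sketch):

    PQExpBufferData buf;

    initPQExpBuffer(&buf);
    appendPQExpBuffer(&buf,
                      "SELECT prsname FROM pg_catalog.pg_ts_parser\n"
                      "WHERE oid = '%u'",
                      oid);         /* the buffer grows as needed */
    /* ... execute the query, print results ... */
    termPQExpBuffer(&buf);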
Tom Lane [Thu, 27 Jul 2017 15:57:29 +0000 (11:57 -0400)]
Sync listDbRoleSettings() with the rest of the world.
listDbRoleSettings() handled its server version check randomly differently
from every other comparable function in describe.c, not only as to code
layout but also message wording. It also leaked memory, because its
PQExpBuffer management was also unlike everyplace else (and wrong).
Also fix an error-case leak in add_tablespace_footer().
In passing, standardize the format of function header comments in
describe.c --- we usually put "/*" alone on a line.
Tom Lane [Thu, 27 Jul 2017 15:10:38 +0000 (11:10 -0400)]
Fix very minor memory leaks in psql's command.c.
\drds leaked its second pattern argument if any, and \connect leaked
any empty-string or "-" arguments. These are old bugs, but it's hard
to imagine any real use-case where the leaks could amount to anything
meaningful, so not bothering with a back-patch.
Andrew Dunstan [Wed, 26 Jul 2017 21:15:59 +0000 (17:15 -0400)]
Work around Msys weakness in TestLib.pm's command_like()
When output of IPC::Run::run() is redirected to scalar references, in
certain circumstances the Msys perl does not correctly detect that the
end of file has been seen, making the test hang indefinitely. One such
circumstance is when the command is 'pg_ctl start', and such a change
was made in commit f13ea95f9e. The workaround, which applies only on
Msys, is to redirect the output to temporary files and then read them in
when the process has finished.
Tom Lane [Wed, 26 Jul 2017 23:35:35 +0000 (19:35 -0400)]
Clean up SQL emitted by psql/describe.c.
Fix assorted places that had not bothered with the convention of
prefixing catalog and function names with "pg_catalog.". That
could possibly result in query failure when running with a nondefault
search_path. Also fix two places that weren't quoting OID literals.
I think the latter hasn't mattered much since about 7.3, but it's still
a bad idea to be doing it in 99 places and not in 2 others.
Also remove a useless EXISTS sub-select that someone had stuck into
describeOneTableDetails' queries for child tables. We just got the OID
out of pg_class, so I hardly see how checking that it exists in pg_class
was doing anything helpful.
In passing, try to improve the emitted formatting of a couple of
these queries, though I didn't work really hard on that. And merge
unnecessarily duplicative coding in some other places.
Much of this was new in HEAD, but some was quite old; back-patch
as appropriate.
If several sessions concurrently lock a tuple update chain with
nonconflicting lock modes using an old snapshot, some of them may
nevertheless fail: restarting the loop (due to a concurrent Xmax change)
can produce an error in the subsequent pass, when a session tries to
obtain a tuple lock that it already holds on some version of the tuple.
This can only happen with very high concurrency (where a row is being
both updated and FK-checked by multiple transactions concurrently), but
it's been observed in the field and can have unpleasant consequences
such as an FK check failing to see a tuple that definitely exists:
ERROR: insert or update on table "child_table" violates foreign key constraint "fk_constraint_name"
DETAIL: Key (keyid)=(123456) is not present in table "parent_table".
(where the key is observably present in the table).
Remove obsolete comments about functional dependencies
Initially submitted versions of the functional dependencies patch ignored
row groups that were smaller than a configured size. That consideration
was removed in the late stages of the patch, just before commit, but some
comments referring to it remained. Remove them to avoid confusion.
This module becomes much more useful if we allow it to be used as a base
class for external projects. To achieve this, change the exported
get_new_node function into a class method instead, and use the standard
Perl idiom of accepting the class as the first argument. This method works
as expected for subclasses. The standalone function is kept for
backwards compatibility, though it could be removed in pg11.
Author: Chap Flackman, based on an earlier patch from Craig Ringer
Discussion: https://postgr.es/m/CAMsr+YF8kO+4+K-_U4PtN==2FndJ+5Bn6A19XHhMiBykEwv0wA@mail.gmail.com
Fix race conditions in replication slot operations
It is relatively easy for a replication slot to appear to be still active
while one process is in the middle of getting rid of it; when some
other process then tries to "acquire" the slot, it fails with an error
message of "replication slot XYZ is active for PID N".
The error message in itself is fine, except that when the intention is
to drop the slot, it is unhelpful: the useful behavior would be to wait
until the slot is no longer acquired, so that the drop can proceed. To
implement this, we use a condition variable so that slot acquisition can
be told to wait on that condition variable if the slot is already
acquired, and we make any change in active_pid broadcast a signal on the
condition variable. Thus, as soon as the slot is released, the drop
will proceed properly.
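The resulting wait loop is shaped roughly like this (a sketch; the
committed code differs in detail):

    ConditionVariablePrepareToSleep(&slot->active_cv);
    for (;;)
    {
        int     active_pid;

        SpinLockAcquire(&slot->mutex);
        active_pid = slot->active_pid;
        SpinLockRelease(&slot->mutex);

        if (active_pid == 0)
            break;              /* slot released; the drop can proceed */

        ConditionVariableSleep(&slot->active_cv,
                               WAIT_EVENT_REPLICATION_SLOT_DROP);
    }
    ConditionVariableCancelSleep();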
Reported by: Tom Lane
Discussion: https://postgr.es/m/11904.1499039688@sss.pgh.pa.us
Authors: Petr Jelínek, Álvaro Herrera
Robert Haas [Mon, 24 Jul 2017 22:08:08 +0000 (18:08 -0400)]
Fix partitioning crashes during error reporting.
In various places where we reverse-map a tuple before calling
ExecBuildSlotValueDescription, we neglected to ensure that the
slot descriptor matched the tuple stored in it.
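The fix amounts to making the slot agree with the reverse-mapped tuple
before building the description; roughly (a sketch with illustrative
variable names):

    /* convert the tuple back to the root table's row type ... */
    tuple = do_convert_tuple(tuple, map);
    /* ... and make the slot's descriptor match before storing it */
    ExecSetSlotDescriptor(slot, tupdesc);
    ExecStoreTuple(tuple, slot, InvalidBuffer, false);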
Amit Langote and Amit Khandekar, reviewed by Etsuro Fujita
Tom Lane [Mon, 24 Jul 2017 20:45:46 +0000 (16:45 -0400)]
Fix race condition in predicate-lock init code in EXEC_BACKEND builds.
Trading a little too heavily on letting the code path be the same whether
we were creating shared data structures or only attaching to them,
InitPredicateLocks() inserted the "scratch" PredicateLockTargetHash entry
unconditionally. This is just wrong if we're in a postmaster child,
which would only reach this code in EXEC_BACKEND builds. Most of the
time, the hash_search(HASH_ENTER) call would simply report that the
entry already existed, causing no visible effect since the code did not
bother to check for that possibility. However, if this happened while
some other backend had transiently removed the "scratch" entry, then
that other backend's eventual RestoreScratchTarget would suffer an
assert failure; this appears to be the explanation for a recent failure
on buildfarm member culicidae. In non-assert builds, there would be
no visible consequences there either. But nonetheless this is a pretty
bad bug for EXEC_BACKEND builds, for two reasons:
1. Each new backend would perform the hash_search(HASH_ENTER) call
without holding any lock that would prevent concurrent access to the
PredicateLockTargetHash hash table. This creates a low but certainly
nonzero risk of corruption of that hash table.
2. In the event that the race condition occurred, by reinserting the
scratch entry too soon, we were defeating the entire purpose of the
scratch entry, namely to guarantee that transaction commit could move
hash table entries around with no risk of out-of-memory failure.
The odds of an actual OOM failure are quite low, but not zero, and if
it did happen it would again result in corruption of the hash table.
The user-visible symptoms of such corruption are a little hard to predict,
but would presumably amount to misbehavior of SERIALIZABLE transactions
that'd require a crash or postmaster restart to fix.
To fix, just skip the hash insertion if IsUnderPostmaster. I also
inserted a bunch of assertions that the expected things happen
depending on whether IsUnderPostmaster is true. That might be overkill,
since most comparable code in other functions isn't quite that paranoid,
but once burnt twice shy.
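In outline, the fix is just (a sketch):

    if (!IsUnderPostmaster)
    {
        bool    found;

        /* postmaster/standalone startup: create the scratch entry once */
        hash_search(PredicateLockTargetHash, &ScratchTargetTag,
                    HASH_ENTER, &found);
        Assert(!found);
    }
    /* EXEC_BACKEND children must not re-insert it */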
In passing, also move a couple of lines to places where they seemed
to make more sense.
Diagnosis of problem by Thomas Munro, patch by me. Back-patch to
all supported branches.
Robert Haas [Mon, 24 Jul 2017 19:57:24 +0000 (15:57 -0400)]
When WCOs are present, disable direct foreign table modification.
If the user modifies a view that has CHECK OPTIONs and this gets
translated into a modification to an underlying relation which happens
to be a foreign table, the check options should be enforced. In the
normal code path, that was happening properly, but it was not working
properly for "direct" modification because the whole operation gets
pushed to the remote side in that case and we never have an option to
enforce the constraint against individual tuples. Fix by disabling
direct modification when there is a need to enforce CHECK OPTIONs.
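In postgres_fdw's direct-modify planning, the guard is about this simple
(a sketch):

    /* cannot push the update down if CHECK OPTIONs must be enforced */
    if (plan->withCheckOptionLists != NIL)
        return false;           /* use the tuple-at-a-time path instead */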
Etsuro Fujita, reviewed by Kyotaro Horiguchi and by me.
Tom Lane [Mon, 24 Jul 2017 19:16:31 +0000 (15:16 -0400)]
Ensure that pg_get_ruledef()'s output matches pg_get_viewdef()'s.
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT. For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command. Third-party tools might not be happy either.
In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.
Per report from Gilles Darold. Back-patch to all supported versions.
Tom Lane [Mon, 24 Jul 2017 15:23:27 +0000 (11:23 -0400)]
Be more consistent about errors for opfamily member lookup failures.
Add error checks in some places that were calling get_opfamily_member
or get_opfamily_proc and just assuming that the call could never fail.
Also, standardize the wording for such errors in some other places.
None of these errors are expected in normal use, hence they're just
elog not ereport. But they may be handy for diagnosing omissions in
custom opclasses.
Rushabh Lathia found the oversight in RelationBuildPartitionKey();
I found the others by grepping for all callers of these functions.
Tom Lane [Sun, 23 Jul 2017 00:20:09 +0000 (20:20 -0400)]
Fix pg_dump's handling of event triggers.
pg_dump with the --clean option failed to emit DROP EVENT TRIGGER
commands for event triggers. In a closely related oversight,
it also did not emit ALTER OWNER commands for event triggers.
Since only superusers can create event triggers, the latter oversight
is of little practical consequence ... but if we're going to record
an owner for event triggers, then surely pg_dump should preserve it.
Per complaint from Greg Atkins. Back-patch to 9.3 where event triggers
were introduced.
Tom Lane [Sat, 22 Jul 2017 22:02:26 +0000 (18:02 -0400)]
Improve comments about partitioned hash table freelists.
While I couldn't find any live bugs in commit 44ca4022f, the comments
seemed pretty far from adequate; in particular it was not made plain that
"borrowing" entries from other freelists is critical for correctness.
Try to improve the commentary. A couple of very minor code style
tweaks, as well.
Tom Lane [Sat, 22 Jul 2017 16:15:19 +0000 (12:15 -0400)]
Update expected results for collate.linux.utf8 regression test.
I believe this changed as a consequence of commit 54baa4813: trying to
clone the "C" collation now produces a true clone with collencoding -1,
hence the error message if it's duplicate no longer specifies an encoding.
Per buildfarm member crake, which apparently hadn't been running this
test for the last few weeks.
Robert Haas [Fri, 21 Jul 2017 18:25:36 +0000 (14:25 -0400)]
pg_rewind: Fix some problems when copying files >2GB.
When incrementally updating a file larger than 2GB, the old code could
either fail outright (if the client asked the server for bytes beyond
the 2GB boundary) or fail to copy all the blocks that had actually
been modified (if the server reported a file size to the client in
excess of 2GB), resulting in data corruption. Generally, such files
won't occur anyway, but they might if using a non-default segment size
or if the directory contains stray files unrelated to
PostgreSQL. Fix by a more prudent choice of data types.
Even with these improvements, this code still uses a mix of different
types (off_t, size_t, uint64, int64) to represent file sizes and
offsets, not all of which necessarily have the same width or
signedness, so further cleanup might be in order here. However, at
least now they all have the potential to be 64 bits wide on 64-bit
platforms.
Kuntal Ghosh and Michael Paquier, with a tweak by me.
Tom Lane [Fri, 21 Jul 2017 18:20:43 +0000 (14:20 -0400)]
Stabilize postgres_fdw regression tests.
The new test cases added in commit 8bf58c0d9 turn out to have output
that can vary depending on the lc_messages setting prevailing on the
test server. Hide the remote end's error messages to ensure stable
output. This isn't a terribly desirable solution; we'd rather know
that the connection failed for the expected reason and not some other
one. But there seems little choice for the moment.
Robert Haas [Fri, 21 Jul 2017 16:48:22 +0000 (12:48 -0400)]
pg_rewind: Fix busted sanity check.
As written, the code would only fail the sanity check if none of the
columns returned by the server were of the expected type, but we want
it to fail if even one column is not of the expected type.
Tom Lane [Fri, 21 Jul 2017 16:51:38 +0000 (12:51 -0400)]
Re-establish postgres_fdw connections after server or user mapping changes.
Previously, postgres_fdw would keep on using an existing connection even
if the user did ALTER SERVER or ALTER USER MAPPING commands that should
affect connection parameters. Teach it to watch for catcache invals
on these catalogs and re-establish connections when the relevant catalog
entries change. Per bug #14738 from Michal Lis.
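The mechanism is the usual syscache callback; schematically (the callback
name is made up):

    /* the callback just flags affected ConnCacheEntrys as stale */
    static void
    conn_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
    {
        /* mark matching connections for re-establishment */
    }

    /* at module initialization, watch both catalogs */
    CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                  conn_inval_callback, (Datum) 0);
    CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                  conn_inval_callback, (Datum) 0);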
In passing, clean up some rather crufty decisions in commit ae9bfc5d6
about where fields of ConnCacheEntry should be reset. We now reset
all the fields whenever we open a new connection.
Kyotaro Horiguchi, reviewed by Ashutosh Bapat and myself.
Back-patch to 9.3 where postgres_fdw appeared.
SLRU buffer lwlocks were allocated twice, by an oversight in commit
fe702a7b3f9f2bc5bf6d173166d7d55226af82c8, where those locks were moved to a
separate tranche. The bug has no user-visible effects beyond a small
overspending of shared memory.
Backpatch to 9.6 where it was introduced.
Alexander Korotkov, with small editorial changes by me.
Dean Rasheed [Fri, 21 Jul 2017 09:18:01 +0000 (10:18 +0100)]
Make the new partition regression tests locale-independent.
The order of partitions listed by \d+ is in general locale-dependent.
Rename the partitions in the test added by d363d42bb9 to force them to
be listed in a consistent order.
Dean Rasheed [Fri, 21 Jul 2017 08:20:47 +0000 (09:20 +0100)]
Use MINVALUE/MAXVALUE instead of UNBOUNDED for range partition bounds.
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.
Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.
Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
Note: Forces a post-PG 10 beta2 initdb.
Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.
Tom Lane [Thu, 20 Jul 2017 19:28:42 +0000 (15:28 -0400)]
In v10 release notes, call out sequence changes as a compatibility item.
The previous description didn't make it clear that this change
potentially breaks applications, partly because the entry wasn't even
in the compatibility-hazard section. Move and clarify.
Tom Lane [Thu, 20 Jul 2017 16:41:26 +0000 (12:41 -0400)]
Doc: clarify description of degenerate NATURAL joins.
Claiming that NATURAL JOIN is equivalent to CROSS JOIN when there are
no common column names is only strictly correct if it's an inner join;
you can't say e.g. CROSS LEFT JOIN. Better to explain it as meaning
JOIN ON TRUE, instead. Per a suggestion from David Johnston.
Tom Lane [Thu, 20 Jul 2017 15:29:36 +0000 (11:29 -0400)]
Fix dumping of outer joins with empty qual lists.
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax. However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too. The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join. Fix by printing ON TRUE in that case.
This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings. Back-patch to 9.3 where that commit appeared.
Tom Lane [Wed, 19 Jul 2017 20:16:57 +0000 (16:16 -0400)]
Add static assertions about pg_control fitting into one disk sector.
When pg_control was first designed, sizeof(ControlFileData) was small
enough that a comment seemed like plenty to document the assumption that
it'd fit into one disk sector. Now it's nearly 300 bytes, raising the
possibility that somebody would carelessly add enough stuff to create
a problem. Let's add a StaticAssertStmt() to ensure that the situation
doesn't pass unnoticed if it ever occurs.
While at it, rename PG_CONTROL_SIZE to PG_CONTROL_FILE_SIZE to make it
clearer what that symbol means, and convert the existing runtime
comparisons of sizeof(ControlFileData) vs. PG_CONTROL_FILE_SIZE to be
static asserts --- we didn't have that technology when this code was
first written.
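The checks are essentially these (a sketch; the message texts are made up):

    /* pg_control must comfortably fit in one 512-byte disk sector */
    StaticAssertStmt(sizeof(ControlFileData) <= 512,
                     "pg_control is too large for atomic disk writes");
    StaticAssertStmt(sizeof(ControlFileData) <= PG_CONTROL_FILE_SIZE,
                     "sizeof(ControlFileData) exceeds PG_CONTROL_FILE_SIZE");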
Tom Lane [Tue, 18 Jul 2017 17:13:47 +0000 (13:13 -0400)]
Improve make_tsvector() to handle empty input, and simplify its callers.
It seemed a bit silly that each caller of make_tsvector() was laboriously
special-casing the situation where no lexemes were found, when it would
be easy and much more bullet-proof to make make_tsvector() handle that.
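A sketch of the shape of the change inside make_tsvector() (the macro
usage follows tsvector's header conventions and should be treated as an
assumption):

    /* accept zero lexemes instead of making callers special-case it */
    if (prs->curwords == 0)
    {
        int         len = CALCDATASIZE(0, 0);
        TSVector    result = (TSVector) palloc0(len);

        SET_VARSIZE(result, len);
        return result;          /* a valid, empty tsvector */
    }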
Tom Lane [Tue, 18 Jul 2017 16:45:51 +0000 (12:45 -0400)]
Fix serious performance problems in json(b) to_tsvector().
In an off-list followup to bug #14745, Bob Jones complained that
to_tsvector() on a 2MB jsonb value took an unreasonable amount of
time and space --- enough to draw the wrath of the OOM killer on
his machine. On my machine, his example proved to require upwards
of 18 seconds and 4GB, which seemed pretty bogus considering that
to_tsvector() on the same data treated as text took just a couple
hundred msec and 10 or so MB.
On investigation, the problem is that the implementation scans each
string element of the json(b) and converts it to tsvector separately,
then applies tsvector_concat() to join those separate tsvectors.
The unreasonable memory usage came from leaking every single one of
the transient tsvectors --- but even without that mistake, this is an
O(N^2) or worse algorithm, because tsvector_concat() has to repeatedly
process the words coming from earlier elements.
We can fix it by accumulating all the lexeme data and applying
make_tsvector() just once. As a side benefit, that also makes the
desired adjustment of lexeme positions far cheaper, because we can
just tweak the running "pos" counter between JSON elements.
In passing, try to make the explanation of that tweak more intelligible.
(I didn't think that a barely-readable comment far removed from the
actual code was helpful.) And do some minor other code beautification.
Robert Haas [Tue, 18 Jul 2017 01:29:45 +0000 (21:29 -0400)]
Use a real RT index when setting up partition tuple routing.
Before, we always used a dummy value of 1, but that's not right when
the partitioned table being modified is inside of a WITH clause
rather than part of the main query.
Amit Langote, reported and reviewed by Etsuro Fujita, with a comment
change by me.
Tom Lane [Mon, 17 Jul 2017 20:43:03 +0000 (16:43 -0400)]
Doc: explain dollar quoting in the intro part of the pl/pgsql chapter.
We're throwing people into the guts of the syntax with not much context;
let's back up one step and point out that this goes inside a literal in
a CREATE FUNCTION command. Per suggestion from Kurt Kartaltepe.
Tom Lane [Mon, 17 Jul 2017 19:28:16 +0000 (15:28 -0400)]
Merge large_object.sql test into largeobject.source.
It seems pretty confusing to have tests named both largeobject and
large_object. The latter is of very recent vintage (commit ff992c074),
so get rid of it in favor of merging into the former.
Also, enable the LO comment test that was added by commit 70ad7ed4e,
since the later commit added the then-missing pg_upgrade functionality.
The large_object.sql test case is almost completely redundant with that,
but not quite: it seems like creating a user-defined LO with an OID in
the system range might be an interesting case for pg_upgrade, so let's
keep it.
Like the earlier patch, back-patch to all supported branches.
Robert Haas [Mon, 17 Jul 2017 16:03:35 +0000 (12:03 -0400)]
hash: Fix write-ahead logging bugs related to init forks.
One, logging for CREATE INDEX was oblivious to the fact that when
an unlogged table is created, *only* operations on the init fork
should be logged.
Two, init fork buffers need to be flushed after they are written;
otherwise, a filesystem-level copy following recovery may do the
wrong thing. (There may be a better fix for this issue than the
one used here, but this is transposed from the similar logic already
present in XLogReadBufferForRedoExtended, and a broader refactoring
after beta2 seems inadvisable.)
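Schematically, the init-fork path now does something like this (a sketch
built from the standard primitives; details differ in the committed code):

    if (forkNum == INIT_FORKNUM)
    {
        /* for unlogged indexes, only init-fork writes are WAL-logged */
        log_newpage(&rnode, INIT_FORKNUM, blkno, page, true);
        /* flush immediately: a post-recovery file copy reads the file
         * directly, bypassing shared buffers */
        smgrimmedsync(smgropen(rnode, InvalidBackendId), INIT_FORKNUM);
    }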
Amit Kapila, reviewed by Ashutosh Sharma, Kyotaro Horiguchi,
and Michael Paquier
Tom Lane [Sat, 15 Jul 2017 20:57:43 +0000 (16:57 -0400)]
Improve comments for execExpr.c's handling of FieldStore subexpressions.
Given this code's general eagerness to use subexpressions' output variables
as temporary workspace, it's not exactly clear that it is safe for
FieldStore to tell a newval subexpression that it can write into the same
variable that is being supplied as a potential input. Document the chain
of assumptions needed for that to be safe.
Tom Lane [Sat, 15 Jul 2017 18:03:32 +0000 (14:03 -0400)]
Improve comments for execExpr.c's isAssignmentIndirectionExpr().
I got confused about why this function doesn't need to recursively
search the expression tree for a CaseTestExpr node. After figuring
that out, add a comment to save the next person some time.
Tom Lane [Fri, 14 Jul 2017 19:25:43 +0000 (15:25 -0400)]
Code review for NextValueExpr expression node type.
Add missing infrastructure for this node type, notably in ruleutils.c where
its lack could demonstrably cause EXPLAIN to fail. Add outfuncs/readfuncs
support. (outfuncs support is useful today for debugging purposes. The
readfuncs support may never be needed, since at present it would only
matter for parallel query and NextValueExpr should never appear in a
parallelizable query; but it seems like a bad idea to have a primnode type
that isn't fully supported here.) Teach planner infrastructure that
NextValueExpr is a volatile, parallel-unsafe, non-leaky expression node
with cost cpu_operator_cost. Given its limited scope of usage, there
*might* be no live bug today from the lack of that knowledge, but it's
certainly going to bite us on the rear someday. Teach pg_stat_statements
about the new node type, too.
While at it, also teach cost_qual_eval() that MinMaxExpr, SQLValueFunction,
XmlExpr, and CoerceToDomain should be charged as cpu_operator_cost.
Failing to do this for SQLValueFunction was an oversight in my commit
0bb51aa96. The others are longer-standing oversights, but no time like the
present to fix them. (In principle, CoerceToDomain could have cost much
higher than this, but it doesn't presently seem worth trying to examine the
domain's constraints here.)
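The cost_qual_eval_walker() additions are one-liners of this shape
(a sketch):

    else if (IsA(node, NextValueExpr) ||
             IsA(node, MinMaxExpr) ||
             IsA(node, SQLValueFunction) ||
             IsA(node, XmlExpr) ||
             IsA(node, CoerceToDomain))
    {
        /* charge one operator-evaluation per tuple for each of these */
        context->total.per_tuple += cpu_operator_cost;
    }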
Modify execExprInterp.c to execute NextValueExpr as an out-of-line
function; it seems quite unlikely to me that it's worth insisting that
it be inlined in all expression eval methods. Besides, providing the
out-of-line function doesn't stop anyone from inlining if they want to.
Adjust some places where NextValueExpr support had been inserted with the
aid of a dartboard rather than keeping it in the same order as elsewhere.
The method previously used to link libpgfeutils is badly broken, as seen
in bug #14742 from Chris Ruprecht.
The -L flag for src/fe_utils ends up being placed after whatever random
-L flags are in LDFLAGS already. That puts us at risk of pulling in
libpgfeutils.a from some previous installation rather than the freshly
built one in src/fe_utils. Also, the lack of an "override" is hazardous
if someone tries to specify some LDFLAGS on the make command line.
To fix, write the link command like this:
override LDFLAGS := -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport) $(LDFLAGS)
so that libpgfeutils, along with libpq, libpgport, and libpgcommon, are
guaranteed to be pulled in from the build tree and not from any referenced
system directory, because their -L flags will appear first.
In some places we'd been even lazier and done it like this:
LDFLAGS += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpq
which is subtly wrong in an additional way: on platforms where we can't
restrict the symbols exported by libpq.so, it allows libpgfeutils to
latch onto libpgport and libpgcommon symbols from libpq.so, rather than
directly from those static libraries as intended. This carries hazards
like those explained in the comments for the libpq_pgport macro.
In addition to fixing the broken libpgfeutils usages, I tried to
standardize on using $(libpq_pgport) like so:
override LDFLAGS := $(libpq_pgport) $(LDFLAGS)
even where libpgfeutils is not in the picture. This makes no difference
right now but will hopefully discourage future mistakes of the same ilk.
And it's more like the way we handle CPPFLAGS in libpq-using Makefiles.
In passing, just for consistency, make pgbench include PTHREAD_LIBS the
same way everyplace else does, ie just after LIBS rather than in some
random place in the command line. This might have practical effect if
there are -L switches in that macro on some platform.
It looks to me like the MSVC build scripts are not affected by this
error, but someone more familiar with them than I might want to double
check.
Back-patch to 9.6 where libpgfeutils was introduced. In 9.6, the hazard
this error creates is that a reinstallation might link to the prior
installation's copy of libpgfeutils.a and thereby fail to absorb a
minor-version bug fix.
When writing a backup to stdout with pg_basebackup on Windows, set stdout
to binary mode. Any LF bytes in the output would otherwise be
translated to CR+LF, corrupting the tar output.
In passing, standardize on using "_setmode" instead of "setmode", for
the sake of consistency. They both do the same thing, but according to
MSDN documentation, setmode is deprecated.
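The fix boils down to (a sketch):

    #ifdef WIN32
        /* keep tar data byte-exact: no LF -> CR+LF translation */
        _setmode(fileno(stdout), _O_BINARY);
    #endif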
Fixes bug #14634, reported by Henry Boehlert. Patch by Haribabu Kommi.
Backpatch to all supported versions.
Tom Lane [Thu, 13 Jul 2017 23:24:44 +0000 (19:24 -0400)]
Fix dumping of FUNCTION RTEs that contain non-function-call expressions.
The grammar will only accept something syntactically similar to a function
call in a function-in-FROM expression. However, there are various ways
to input something that ruleutils.c won't deparse that way, potentially
leading to a view or rule that fails dump/reload. Fix by inserting a
dummy CAST around anything that isn't going to deparse as a function
(which is one of the ways to get something like that in there in the
first place).
In HEAD, also make use of the infrastructure added by this to avoid
emitting unnecessary parentheses in CREATE INDEX deparsing. I did
not change that in back branches, thinking that people might find it
to be unexpected/unnecessary behavioral change.
In HEAD, also fix incorrect logic for when to add extra parens to
partition key expressions. Somebody apparently thought they could
get away with simpler logic than pg_get_indexdef_worker has, but
they were wrong --- a counterexample is PARTITION BY LIST ((a[1])).
Ignoring the prettyprint flag for partition expressions isn't exactly
a nice solution anyway.
This has been broken all along, so back-patch to all supported branches.
Fix race between GetNewTransactionId and GetOldestActiveTransactionId.
The race condition goes like this:
1. GetNewTransactionId advances nextXid e.g. from 100 to 101
2. GetOldestActiveTransactionId reads the new nextXid, 101
3. GetOldestActiveTransactionId loops through the proc array. There are no
active XIDs there, so it returns 101 as the oldest active XID.
4. GetNewTransactionId stores XID 100 to MyPgXact->xid
So, GetOldestActiveTransactionId returned XID 101, even though 100 only
just started and is surely still running.
This would be hard to hit in practice, and even harder to spot any ill
effect if it happens. GetOldestActiveTransactionId is only used when
creating a checkpoint in a master server, and the race condition can only
happen on an online checkpoint, as there are no backends running during a
shutdown checkpoint. The oldestActiveXid value of an online checkpoint is
only used when starting up a hot standby server, to determine the starting
point where pg_subtrans is initialized from. For the race condition to
happen, there must be no other XIDs in the proc array that would hold back
the oldest-active XID value, which means that the missed XID must be a top
transaction's XID. However, pg_subtrans is not used for top XIDs, so I
believe an off-by-one error is in fact inconsequential. Nevertheless, let's
fix it, as it's clearly wrong and the fix is simple.
This has been wrong ever since hot standby was introduced, so backport to
all supported versions.
Tom Lane [Wed, 12 Jul 2017 22:00:04 +0000 (18:00 -0400)]
Fix ruleutils.c for domain-over-array cases, too.
Further investigation shows that ruleutils isn't quite up to speed either
for cases where we have a domain-over-array: it needs to be prepared to
look past a CoerceToDomain at the top level of field and element
assignments, else it decompiles them incorrectly. Potentially this would
result in failure to dump/reload a rule, if it looked like the one in the
new test case. (I also added a test for EXPLAIN; that output isn't broken,
but clearly we need more test coverage here.)
Like commit b1cb32fb6, this bug is reachable in cases we already support,
so back-patch all the way.
Reduce memory usage of tsvector type analyze function.
compute_tsvector_stats() detoasted and kept in memory every tsvector value
in the sample, but that can be a lot of memory. The original bug report
described a case using over 10 gigabytes, with statistics target of 10000
(the maximum).
To fix, allocate a separate copy of just the lexemes that we keep around,
and free the detoasted tsvector values as we go. This adds some palloc/pfree
overhead, when you have a lot of distinct lexemes in the sample, but it's
better than running out of memory.
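Per-sample handling now looks roughly like this (a sketch):

    TSVector    vector = DatumGetTSVector(value);   /* may detoast */

    /* ... copy just the lexemes we keep into long-lived storage ... */

    if ((Pointer) vector != DatumGetPointer(value))
        pfree(vector);          /* release the detoasted copy right away */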
Fixes bug #14654 reported by James C. Reviewed by Tom Lane. Backport to
all supported versions.
Tom Lane [Wed, 12 Jul 2017 17:24:16 +0000 (13:24 -0400)]
Avoid integer overflow while sifting-up a heap in tuplesort.c.
If the number of tuples in the heap exceeds approximately INT_MAX/2,
this loop's calculation "2*i+1" could overflow, resulting in a crash.
Fix it by using unsigned int rather than int for the relevant local
variables; that shouldn't cost anything extra on any popular hardware.
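A sketch of the loop with the fixed declarations:

    unsigned int i = 0;
    unsigned int n = memtupcount;   /* may be as large as INT_MAX */

    for (;;)
    {
        unsigned int j = 2 * i + 1; /* left child; cannot overflow here,
                                     * unlike the old signed computation */

        if (j >= n)
            break;
        /* ... pick the proper child, move it up, descend ... */
        i = j;
    }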
Per bug #14722 from Sergey Koposov.
Original patch by Sergey Koposov, modified by me per a suggestion
from Heikki Linnakangas to use unsigned int not int64.
Back-patch to 9.4, where tuplesort.c grew the ability to sort as many
as INT_MAX tuples in-memory (commit 263865a48).
Fix ordering of operations in SyncRepWakeQueue to avoid assertion failure.
Commit 14e8803f1 removed the locking in SyncRepWaitForLSN, but that
introduced a race condition, where SyncRepWaitForLSN might see
syncRepState already set to SYNC_REP_WAIT_COMPLETE, but the process was
not yet removed from the queue. That tripped the assertion that the
process should no longer be in the queue. Reorder the operations in
SyncRepWakeQueue to remove the process from the queue first, and update
syncRepState only after that, and add a memory barrier in between to make
sure the operations are made visible to other processes in that order.
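The fixed ordering is (a sketch):

    /* first, make the process invisible on the queue */
    SHMQueueDelete(&proc->syncRepLinks);

    /* ensure the removal is visible before the state change */
    pg_write_barrier();

    /* only then mark the wait as complete */
    proc->syncRepState = SYNC_REP_WAIT_COMPLETE;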
Fixes bug #14721 reported by Const Zhang. Analysis and fix by Thomas Munro.
Backpatch down to 9.5, where the locking was removed.
Tom Lane [Tue, 11 Jul 2017 20:48:59 +0000 (16:48 -0400)]
Fix multiple assignments to a column of a domain type.
We allow INSERT and UPDATE commands to assign to the same column more than
once, as long as the assignments are to subfields or elements rather than
the whole column. However, this failed when the target column was a domain
over array rather than plain array. Fix by teaching process_matched_tle()
to look through CoerceToDomain nodes, and add relevant test cases.
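The look-through in process_matched_tle() has this shape (a sketch):

    Node   *src_expr = (Node *) tle->expr;

    /* strip any implicit coercions to the domain type */
    while (src_expr && IsA(src_expr, CoerceToDomain))
        src_expr = (Node *) ((CoerceToDomain *) src_expr)->arg;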
Also add a group of test cases exercising domains over array of composite.
It's doubtless accidental that CREATE DOMAIN allows this case while not
allowing straight domain over composite; but it does, so we'd better make
sure we don't break it. (I could not find any documentation mentioning
either side of that, so no doc changes.)
It's been like this for a long time, so back-patch to all supported
branches.
Tom Lane [Mon, 10 Jul 2017 15:00:09 +0000 (11:00 -0400)]
On Windows, retry process creation if we fail to reserve shared memory.
We've heard occasional reports of backend launch failing because
pgwin32_ReserveSharedMemoryRegion() fails, indicating that something
has already used that address space in the child process. It's not
very clear what, given that we disable ASLR in Windows builds, but
suspicion falls on antivirus products. It'd be better if we didn't
have to disable ASLR, anyway. So let's try to ameliorate the problem
by retrying the process launch after such a failure, up to 100 times.
Patch by me, based on previous work by Amit Kapila and others.
This is a longstanding issue, so back-patch to all supported branches.
Andrew Gierth [Mon, 10 Jul 2017 10:40:08 +0000 (11:40 +0100)]
Fix COPY's handling of transition tables with indexes.
Commit c46c0e5202e8cfe750c6629db7852fdb15d528f3 failed to pass the
TransitionCaptureState object to ExecARInsertTriggers() in the case
where it's using heap_multi_insert and there are indexes. Repair.
Thomas Munro, from a report by David Fetter
Discussion: https://postgr.es/m/20170708084213.GA14720%40fetter.org
Allow multiple hostaddrs to go with multiple hostnames.
Also fix two other issues, while we're at it:
* In error message on connection failure, if multiple network addresses
were given as the host option, as in "host=127.0.0.1,127.0.0.2", the
error message printed the address twice.
* If there were many more ports than hostnames, the error message would
always claim that there was one port too many, even if there was more than
one. For example, if you gave 2 hostnames and 5 ports, the error message
claimed that you gave 2 hostnames and 3 ports.
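With the fix, hostaddrs can be listed in parallel with hosts, for example
(addresses are illustrative):

    PGconn *conn = PQconnectdb(
        "host=db1.example.com,db2.example.com "
        "hostaddr=192.0.2.1,192.0.2.2 "
        "port=5432,5432 dbname=postgres");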
Tom Lane [Mon, 10 Jul 2017 04:44:05 +0000 (00:44 -0400)]
Doc: remove claim that PROVE_FLAGS defaults to '--verbose'.
Commit e9c81b601 changed this, but missed updating the documentation.
The adjacent claim that we use TAP tests only in src/bin seems pretty
obsolete as well. Minor other copy-editing.
Tom Lane [Mon, 10 Jul 2017 04:08:19 +0000 (00:08 -0400)]
Doc: clarify wording about tool requirements in sourcerepo.sgml.
Original wording had confusingly vague antecedent for "they", so replace
that with a more repetitive but clearer formulation. In passing, make the
link to the installation requirements section more specific. Per gripe
from Martin Mai, though this is not the fix he initially proposed.
The affected script generates an empty file, so libpq.dll advertises no
version information. Commit facde2a98f0b5f7689b4e30a9e7376e926e733b8
mistranslated "print O;" in this one place.
Tom Lane [Sat, 8 Jul 2017 16:42:25 +0000 (12:42 -0400)]
Avoid unreferenced-function warning on low-functionality platforms.
On platforms lacking both locale_t and ICU, collationcmds.c failed
to make any use of its static function is_all_ascii(), thus probably
drawing a compiler warning. Oversight in my commit ddb5fdc06.
Per buildfarm member gaur.
Dean Rasheed [Thu, 6 Jul 2017 11:46:08 +0000 (12:46 +0100)]
Clarify the contract of partition_rbound_cmp().
partition_rbound_cmp() is intended to compare range partition bounds
in a way such that if all the bound values are equal but one is an
upper bound and one is a lower bound, the upper bound is treated as
smaller than the lower bound. This particular ordering is required by
RelationBuildPartitionDesc() when building the PartitionBoundInfoData,
so that it can consistently keep only the upper bounds when upper and
lower bounds coincide.
Update the function comment to make that clearer.
Also, fix a (currently unreachable) corner-case bug -- if the bound
values coincide and they contain unbounded values, fall through to the
lower-vs-upper comparison code, rather than immediately returning
0. Currently it is not possible to define coincident upper and lower
bounds containing unbounded columns, but that may change in the
future, so code defensively.
Dean Rasheed [Thu, 6 Jul 2017 08:58:06 +0000 (09:58 +0100)]
Simplify the logic checking new range partition bounds.
The previous logic, whilst not actually wrong, was overly complex and
involved doing two binary searches, where only one was really
necessary. This simplifies that logic and improves the comments.
One visible change is that if the new partition overlaps multiple
existing partitions, the error message now always reports the overlap
with the first existing partition (the one with the lowest
bounds). The old code would sometimes report the clash with the first
partition and sometimes with the last one.
Original patch idea from Amit Langote, substantially rewritten by me.
Tom Lane [Thu, 6 Jul 2017 03:59:20 +0000 (23:59 -0400)]
Fix another race-condition-ish issue in recovery/t/001_stream_rep.pl.
Buildfarm members hornet and sungazer have shown multiple instances of
"Failed test 'xmin of non-cascaded slot with hs feedback has changed'".
The reason seems to be that the test is checking the current xmin of the
master server's replication slot against a past xmin of the first slave
server's replication slot. Even though the latter slot is downstream of
the former, it's possible for its reported xmin to be ahead of the former's
reported xmin, because those numbers are updated whenever the respective
downstream walreceiver feels like it (see logic in WalReceiverMain).
Instrumenting this test shows that indeed the slave slot's xmin does often
advance before the master's does, especially if an autovacuum transaction
manages to occur during the relevant window. If we happen to capture such
an advanced xmin as $xmin, then the subsequent wait_slot_xmins call can
fall through before the master's xmin has advanced at all, and then if it
advances before the get_slot_xmins call, we can get the observed failure.
Yeah, that's a bit of a long chain of deduction, but it's hard to explain
any other way how the test can get past an "xmin <> '$xmin'" check only
to have the next query find that xmin does equal $xmin.
Fix by keeping separate images of the master and slave slots' xmins
and testing their has-xmin-advanced conditions independently.
pg_ctl: Make failure to complete operation a nonzero exit
If an operation being waited for does not complete within the timeout,
then exit with a nonzero exit status. This was previously handled
inconsistently.
Peter Eisentraut [Thu, 22 Jun 2017 02:57:23 +0000 (22:57 -0400)]
Fix output of char node fields
WRITE_CHAR_FIELD() didn't do any escaping, so that for example a zero
byte would cause the whole output string to be truncated. To fix, pass
the char through outToken(), so it is escaped like a string. Adjust the
reading side to handle this.
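The fix routes the char through the same escaping used for string tokens;
roughly (a sketch):

    static void
    outChar(StringInfo str, char c)
    {
        char    in[2];

        in[0] = c;
        in[1] = '\0';
        outToken(str, in);      /* a zero byte becomes an empty token
                                 * rather than truncating the output */
    }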
Update the documentation for \pset to mention
columns|linestyle|pager_min_lines. Add various mentions of \pset
command equivalences that were previously inconsistent.
Forbid gen_random_uuid() with --disable-strong-random
Previously, gen_random_uuid() would fall back to a weak random number
generator, unlike gen_random_bytes() which would just fail. And this was
not made very clear in the docs. For consistency, also make
gen_random_uuid() fail outright, if compiled with --disable-strong-random.
Re-word the error message you get with --disable-strong-random. It is also
used by pgp functions that require random salts, and now also
gen_random_uuid().
Tom Lane [Mon, 3 Jul 2017 02:01:19 +0000 (22:01 -0400)]
Fix race condition in recovery/t/009_twophase.pl test.
Since reducing pg_ctl's reaction time in commit c61559ec3, some
slower buildfarm members have shown erratic failures in this test.
The reason turns out to be that the test assumes synchronous
replication (because it does not provide any lag time for a commit
to replicate before shutting down the servers), but it had only
enabled sync rep in one direction. The observed symptoms correspond
to failure to replicate the last committed transaction in the other
direction, which can be expected to happen if the shutdown command
is issued soon enough and we are providing no synchronous-commit
guarantees.
Fix that, and add a bit more paranoid state checking at the bottom
of the script.