Tom Lane [Sat, 15 Sep 2018 17:42:33 +0000 (13:42 -0400)]
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally. For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.
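For illustration, a sketch of the failing shape (table and column names
are hypothetical):

    -- Session 1 has just updated the row:
    --   UPDATE accounts SET balance = 200 WHERE id = 1;
    -- Session 2, blocked on the same row, then performs an EPQ recheck of:
    UPDATE accounts
       SET note = CASE WHEN balance > 100
                       THEN (SELECT max(label) FROM labels)  -- uncorrelated initplan
                       ELSE 'low' END
     WHERE id = 1;
    -- If balance was <= 100 in the main scan but > 100 in the rechecked
    -- row, the initplan is reached for the first time inside the recheck.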
This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.
To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment. This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another. It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.
It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.
Back-patch all the way. Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3. (This isn't the first time back-patches have
needed that, and it exhausted my patience.) I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.
Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.
Tom Lane [Fri, 14 Sep 2018 21:31:51 +0000 (17:31 -0400)]
Improve parallel scheduling logic in pg_dump/pg_restore.
Previously, the way this worked was that a parallel pg_dump would
re-order the TABLE_DATA items in the dump's TOC into decreasing size
order, and separately re-order (some of) the INDEX items into decreasing
size order. Then pg_dump would dump the items in that order. Later,
parallel pg_restore just followed the TOC order. This method had lots
of deficiencies:
* TOC ordering randomly differed between parallel and non-parallel
dumps, and was hard to predict in the former case, causing problems
for building stable pg_dump test cases.
* Parallel restore only followed a well-chosen order if the dump had
been done in parallel; in particular, this never happened for restore
from custom-format dumps.
* The best order for restore isn't necessarily the same as for dump,
and it's not really static either because of locking considerations.
* TABLE_DATA and INDEX items aren't the only things that might take a lot
of work during restore. Scheduling was particularly stupid for the BLOBS
item, which might require lots of work during dump as well as restore,
but was left to the end in either case.
This patch removes the logic that changed the TOC order, fixing the
test instability problem. Instead, we sort the parallelizable items
just before processing them during a parallel dump. Independently
of that, parallel restore prioritizes the ready-to-execute tasks
based on the size of the underlying table. In the case of dependent
tasks such as index, constraint, or foreign key creation, the largest
relevant table is used as the metric for estimating the task length.
(This is pretty crude, but it should be enough to avoid the case we
want to avoid, which is ending the run with just a few large tasks
such that we can't make use of all N workers.)
Patch by me, responding to a complaint from Peter Eisentraut,
who also reviewed the patch.
Fix ALTER/TYPE on columns referenced by FKs in partitioned tables
When ALTER TABLE ... SET DATA TYPE affects a column referenced by
constraints and indexes, it drops those constraints and indexes and
recreates them afterwards, so that the definitions match the new data
type. The original code did this by dropping one object at a time
(commit 077db40fa1f3 of May 2004), which worked fine because the
dependencies between the objects were pretty straightforward, and
ordering the objects in a specific way was enough to make this work.
However, when there are foreign key constraints in partitioned tables,
the dependencies are no longer so straightforward, and we were getting
errors when this was attempted:
ERROR: cache lookup failed for constraint 16398
This can be fixed by doing all the drops in one pass instead, using
performMultipleDeletions (introduced by df18c51f2955 of Aug 2006). With
this change we can also remove the code to carefully order the list of
objects to be deleted.
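A minimal sketch of the previously failing case (names hypothetical):

    CREATE TABLE pk (a int PRIMARY KEY);
    CREATE TABLE fk (a int REFERENCES pk) PARTITION BY RANGE (a);
    CREATE TABLE fk1 PARTITION OF fk FOR VALUES FROM (0) TO (100);
    ALTER TABLE pk ALTER COLUMN a TYPE bigint;  -- previously failed as above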
Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKcux6nWS_m+s=1Udk_U9B+QY7pA-Ac58qR5BdUfOyrwnWHDew@mail.gmail.com
Andrew Gierth [Fri, 14 Sep 2018 16:35:42 +0000 (17:35 +0100)]
Order active window clauses for greater reuse of Sort nodes.
By sorting the active window list lexicographically by the sort clause
list but putting longer clauses before shorter prefixes, we generate
more chances to elide Sort nodes when building the path.
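For example (hypothetical query), sorting by (a, b) also satisfies
ORDER BY a, so placing the longer clause first lets the second window
reuse the first one's Sort:

    SELECT sum(x) OVER (ORDER BY a, b),
           avg(x) OVER (ORDER BY a)
      FROM tbl;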
Author: Daniel Gustafsson (with some editorialization by me)
Reviewed-by: Alexander Kuzmenkov, Masahiko Sawada, Tom Lane
Discussion: https://postgr.es/m/124A7F69-84CD-435B-BA0E-2695BE21E5C2%40yesql.se
Amit Kapila [Fri, 14 Sep 2018 04:06:30 +0000 (09:36 +0530)]
Don't allow LIMIT/OFFSET clause within sub-selects to be pushed to workers.
Allowing a sub-select containing LIMIT/OFFSET to run in workers can lead to
inconsistent results at the top-level as there is no guarantee that the
row order will be fully deterministic. The fix is to prohibit pushing
LIMIT/OFFSET within sub-selects to workers.
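A sketch of the kind of query affected (names hypothetical):

    -- If b is not unique, the rows selected by LIMIT can differ from run
    -- to run, so the sub-select must not execute inside parallel workers.
    SELECT count(*)
      FROM (SELECT a FROM tbl ORDER BY b LIMIT 100) AS s;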
Reported-by: Andrew Fletcher
Bug: 15324
Author: Amit Kapila
Reviewed-by: Dilip Kumar
Backpatch-through: 9.6
Discussion: https://postgr.es/m/153417684333.10284.11356259990921828616@wrigleys.postgresql.org
Michael Paquier [Fri, 14 Sep 2018 01:04:14 +0000 (10:04 +0900)]
Allow concurrent-safe open() and fopen() in frontend code for Windows
PostgreSQL uses a custom wrapper for open() and fopen() which is
concurrent-safe, allowing multiple processes to open and work on the
same file. This has a couple of advantages:
- pg_test_fsync does not handle O_DSYNC correctly otherwise, leading to
false claims that disks are unsafe.
- TAP tests could run into race conditions when a postmaster and pg_ctl
both open postmaster.pid; this fixes some random failures in the buildfarm.
pg_upgrade is one frontend tool using workarounds to bypass file locking
issues with the log files it generates, however the interactions with
pg_ctl are proving to be tedious to get rid of, so this is left for
later.
Michael Paquier [Thu, 13 Sep 2018 22:35:39 +0000 (07:35 +0900)]
Improve autovacuum logging for aggressive and anti-wraparound runs
A log message was being generated when log_min_duration is reached for
autovacuum on a given relation to indicate whether it was an aggressive
run, but it missed mentioning whether the run was anti-wraparound. The
log message generated is improved so that one, both, or no extra
details are added depending on the kind of run.
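For reference, this logging can be forced for a given table regardless
of duration via the corresponding reloption (table name hypothetical):

    ALTER TABLE big_table SET (log_autovacuum_min_duration = 0);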
Author: Sergei Kornilov
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/11587951532155118@sas1-19a94364928d.qloud-c.yandex.net
Andres Freund [Thu, 13 Sep 2018 17:41:39 +0000 (10:41 -0700)]
Detect LLVM 7 without specifying binaries explicitly.
Before this commit LLVM 7 was supported, but only if one explicitly
provided LLVM_CONFIG= and CLANG= paths. As LLVM 7 is the first
version that includes our upstreamed debugging and profiling features,
and as Debian is planning to default to 7 due to wider architecture
support, it seems good to support auto-detecting that version.
Author: Christoph Berg
Discussion: https://postgr.es/m/20180912124517.GD24584@msg.df7cb.de
Backpatch: 11, where LLVM was introduced
Tom Lane [Thu, 13 Sep 2018 16:36:21 +0000 (12:36 -0400)]
Attempt to identify system timezone by reading /etc/localtime symlink.
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Amit Kapila [Thu, 13 Sep 2018 10:00:26 +0000 (15:30 +0530)]
Attach FPI to the first record after full_page_writes is turned on.
XLogInsert fails to attach a required FPI to the first record after
full_page_writes is turned on by the last checkpoint. This bug got
introduced in 9.5 due to code rearrangement in commits 2c03216d83 and
2076db2aea. Fix it by ensuring that XLogInsertRecord performs a
recomputation when the given record was generated with FPW off but the
flag is found to have been turned on while actually inserting the
record.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi
Reviewed-by: Amit Kapila
Backpatch-through: 9.5 where this problem was introduced
Discussion: https://postgr.es/m/20180420.151043.74298611.horiguchi.kyotaro@lab.ntt.co.jp
Michael Paquier [Thu, 13 Sep 2018 07:56:57 +0000 (16:56 +0900)]
Simplify static function in extension.c
An extra argument for the filename defining the extension script
location was present, aimed at being used for error reporting, but it
has never been used. It has been around since extensions were added in
d9572c4.
Peter Eisentraut [Mon, 27 Aug 2018 13:50:50 +0000 (15:50 +0200)]
Simplify index tuple descriptor initialization
We have two code paths for initializing the tuple descriptor for a new
index: For a normal index, we copy the tuple descriptor from the table
and reset a number of fields that are not applicable to indexes. For an
expression index, we make a blank tuple descriptor and fill in the
needed fields based on the provided expressions. As pg_attribute has
grown over time, the number of fields that we need to reset in the first
case is now bigger than the number of fields we actually want to copy,
so it's sensible to do it the other way around: Make a blank descriptor
and copy just the fields we need. This also allows more code sharing
between the two branches, and it avoids having to touch this code for
almost every unrelated change to the pg_attribute structure.
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Andrew Gierth [Wed, 12 Sep 2018 18:31:06 +0000 (19:31 +0100)]
Repair bug in regexp split performance improvements.
Commit c8ea87e4b introduced a temporary conversion buffer for
substrings extracted during regexp splits. Unfortunately the code that
sized it was failing to ignore the effects of ignored degenerate
regexp matches, so for regexp_split_* calls it could under-size the
buffer in such cases.
Fix, and add some regression test cases (though those will only catch
the bug if run in a multibyte encoding).
Backpatch to 9.3 as the faulty code was.
Thanks to the PostGIS project, Regina Obe and Paul Ramsey for the
report (via IRC) and assistance in analysis. Patch by me.
Michael Paquier [Tue, 11 Sep 2018 21:46:01 +0000 (06:46 +0900)]
Parse more strictly integer parameters from connection strings in libpq
The following parameters have been parsed in lossy ways when specified
in a connection string processed by libpq:
- connect_timeout
- keepalives
- keepalives_count
- keepalives_idle
- keepalives_interval
- port
Overflowing values or the presence of incorrect characters were not
properly checked, leading libpq to try to use such values and fail
with unhelpful error messages. This commit hardens the parsing of those
parameters so that incorrect values are detected easily.
Author: Fabien Coelho
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808171206180.20841@lancre
Tom Lane [Tue, 11 Sep 2018 20:32:12 +0000 (16:32 -0400)]
Remove ruleutils.c's special case for BIT [VARYING] literals.
Up to now, get_const_expr() insisted on prefixing BIT and VARBIT
literals with 'B'. That's not really necessary, because we always
append explicit-cast syntax to identify the constant's type.
Moreover, it's subtly wrong for VARBIT, because the parser will
interpret B'...' as '...'::"bit"; see make_const() which explicitly
assigns type BITOID for a T_BitString literal. So what had been
a simple VARBIT literal is reconstructed as ('...'::"bit")::varbit,
which is not the same thing, at least not before constant folding.
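To illustrate the parser behavior described above:

    SELECT pg_typeof(B'101');         -- bit
    SELECT pg_typeof('101'::varbit);  -- bit varying
    -- so deparsing a varbit constant with a B prefix really produces
    -- ('101'::"bit")::varbit, an extra coercion step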
This results in odd differences after dump/restore, as complained
of by the patch submitter, and it could result in actual failures in
partitioning or inheritance DDL operations (see commit 542320c2b,
which repaired similar misbehaviors for some other data types).
Fixing it is pretty easy: just remove the special case and let the
default code path handle these types. We could have kept the special
case for BIT only, but there seems little point in that.
Like the previous patch, I judge that back-patching this into stable
branches wouldn't be a good idea. However, it seems not quite too
late for v11, so let's fix it there.
Paul Guo, reviewed by Davy Machado and John Naylor, minor adjustments
by me
Andrew Gierth [Tue, 11 Sep 2018 17:14:19 +0000 (18:14 +0100)]
Repair double-free in SP-GIST rescan (bug #15378)
spgrescan would first reset traversalCxt, and then traverse a
potentially non-empty stack containing pointers to traversalValues
which had been allocated in those contexts, freeing them a second
time. This bug originates in commit ccd6eb49a where traversalValue was
introduced.
Repair by traversing the stack before the context reset; this isn't
ideal, since it means doing retail pfree in a context that's about to
be reset, but the freeing of a stack entry is also done in other
places in the code during the scan so it's not worth trying to
refactor it further. Regression test added.
Backpatch to 9.6 where the problem was introduced.
Per bug #15378; analysis and patch by me, originally from a report on
IRC by user velix; see also PostGIS ticket #4174; review by Alexander
Korotkov.
Tom Lane [Tue, 11 Sep 2018 02:22:12 +0000 (22:22 -0400)]
Use -Bsymbolic for shared libraries on HP-UX and Solaris.
These platforms are also subject to the mis-linking problem addressed
in commit e3d77ea6b. It's not clear whether we could solve it with
a solution equivalent to GNU ld's version scripts, but -Bsymbolic
appears to fix it, so let's use that.
Like the previous commit, back-patch as far as v10.
Tom Lane [Mon, 10 Sep 2018 16:47:02 +0000 (12:47 -0400)]
Hide a static inline from FRONTEND code.
For some reason pg_waldump is including tuptable.h, and the recent
addition of a static inline function to it is causing problems on
older buildfarm members that fail to optimize such functions away
completely. I wonder if this situation doesn't mean that some header
refactoring is called for ... but as a band-aid, wrap the static
function in "#ifndef FRONTEND".
Tom Lane [Sun, 9 Sep 2018 19:16:51 +0000 (15:16 -0400)]
Prevent mis-linking of src/port and src/common functions on *BSD.
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib. This is the cause
of bug #15367 from Jeremy Evans, and is likely to lead to more problems
in future; it's accidental that we've failed to notice any bad effects
up to now.
The recommended way to fix this on ELF-based platforms is to use a
linker "version script" that makes the shlib's versions of the functions
local. (Apparently, -Bsymbolic would fix it as well, but with other
side effects that we don't want.) Doing so has the additional benefit
that we can make sure the shlib only exposes the symbols that are meant
to be part of its API, and not ones that are just for cross-file
references within the shlib. So we'd already been using a version
script for libpq on popular platforms, but it's now apparent that it's
necessary for correctness on every ELF-based platform.
Hence, add appropriate logic to the openbsd, freebsd, and netbsd stanzas
of Makefile.shlib; this is just a copy-and-paste from the linux stanza.
There may be additional work to do if commit ed0cdf0e0 reveals that the
problem exists elsewhere, but this is all that is known to be needed
right now.
Back-patch to v10 where SCRAM support came in. The problem is ancient,
but analysis suggests that there were no really severe consequences
in older branches. Hence, I won't take the risk of such a large change
in the build process for older branches.
In passing, remove a rather opaque comment about -Bsymbolic; I don't
think it's very on-point about why we don't use that, if indeed that's
what it's talking about at all.
Patch by me; thanks to Andrew Gierth for helping to diagnose the problem,
and for additional testing.
Improve behavior of to_timestamp()/to_date() functions
to_timestamp()/to_date() functions were introduced mainly for Oracle
compatibility, and became very popular among PostgreSQL users. However, some
behaviors of the to_timestamp()/to_date() functions are both incompatible with
Oracle and confusing for our users. These behaviors relate to the handling of
spaces and separators in non-FX (fixed format) mode. This commit reworks them
to be less confusing, better documented, and more compatible with Oracle (see
the examples after the list below).
Nevertheless, the following incompatibilities with Oracle remain.
1) We don't insist that every pattern in the format string be matched by the
input string.
2) In FX mode we don't insist that spaces and separators in the format string
exactly match the input string.
3) When format string patterns are divided by a mix of spaces and separators,
we don't distinguish between them, while Oracle takes into account only the
last group of spaces/separators.
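A couple of hedged examples of the reworked non-FX behavior:

    -- extra spaces before a field are now skipped gracefully:
    SELECT to_timestamp('2000    JUN', 'YYYY MON');
    -- a separator in the format string matches a separator or space
    -- in the input, even if the characters differ:
    SELECT to_date('2018-09-15', 'YYYY/MM/DD');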
Discussion: https://postgr.es/m/1873520224.1784572.1465833145330.JavaMail.yahoo%40mail.yahoo.com
Author: Artur Zakirov, Alexander Korotkov, Liudmila Mantrova
Review: Amul Sul, Robert Haas, Tom Lane, Dmitry Dolgov, David G. Johnston
ginRedoRecompress() replays actions over compressed segments of a posting list
in-place. However, this might write past pg_upper, because the intermediate
state while replaying the changes can take more space than both the original
state and the final state. This commit fixes that by abandoning in-place
modification: instead, the page tail is copied once modification starts, and
is then used as the source of the original segments. Backpatch to 9.4 where
posting list compression was introduced.
Reported-by: Sivasubramanian Ramasubramanian
Discussion: https://postgr.es/m/1536091151804.6588%40amazon.com
Author: Alexander Korotkov based on patch from and ideas by Sivasubramanian Ramasubramanian
Review: Sivasubramanian Ramasubramanian
Backpatch-through: 9.4
Tom Lane [Sun, 9 Sep 2018 16:41:27 +0000 (12:41 -0400)]
Work around stdbool problem in dfmgr.c.
Commit 842cb9fa6 refactored things so that dfmgr.c includes <dlfcn.h>,
which before that had only been directly included in platform-specific
stub files. It turns out that on macOS, <dlfcn.h> includes <stdbool.h>,
and that causes problems on platforms where _Bool is not char-sized ...
which happens to include the PPC versions of macOS. Work around it
much as we have in plperl.h, by #undef'ing bool after including the
problematic file, but only if we're not using stdbool-style booleans.
Tom Lane [Sun, 9 Sep 2018 16:23:23 +0000 (12:23 -0400)]
Install a check for mis-linking of src/port and src/common functions.
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib. This is definitely
not what we want, because the frontend versions often behave slightly
differently. Up to now it's been "slight" enough that nobody noticed;
but with the addition of SCRAM support functions in src/common, we're
observing crashes due to the difference between palloc and malloc
memory allocation rules, as reported in bug #15367 from Jeremy Evans.
The purpose of this patch is to create a direct test for this type of
mis-linking, so that we know whether any given platform requires extra
measures to prevent using the wrong functions. If the test fails, it
will lead to connection failures in the contrib/postgres_fdw regression
test. At the moment, *BSD platforms using ELF format are known to have
the problem and can be expected to fail; but we need to know whether
anything else does, and we need a reliable ongoing check for future
platforms.
Actually fixing the problem will be the subject of later commit(s).
Buildfarm member tern failed src/bin/pg_ctl/t/001_start_stop.pl when a
check_mode_recursive() call overlapped a server's startup-time deletion
of pg_stat/global.stat. Just warn. Also, include errno in the message.
Back-patch to v11, where check_mode_recursive() first appeared.
Tom Lane [Sat, 8 Sep 2018 22:20:36 +0000 (18:20 -0400)]
Minor cleanup/future-proofing for pg_saslprep().
Ensure that pg_saslprep() initializes its output argument to NULL in
all failure paths, and then remove the redundant initialization that
some (not all) of its callers did. This does not fix any live bug,
but it reduces the odds of future bugs of omission.
Also add a comment about why the existing failure-path coding is
adequate.
Back-patch so as to keep the function's API consistent across branches,
again to forestall future bug introduction.
Tom Lane [Sat, 8 Sep 2018 00:09:57 +0000 (20:09 -0400)]
Save/restore SPI's global variables in SPI_connect() and SPI_finish().
This patch removes two sources of interference between nominally
independent functions when one SPI-using function calls another,
perhaps without knowing that it does so.
Chapman Flack pointed out that xml.c's query_to_xml_internal() expects
SPI_tuptable and SPI_processed to stay valid across datatype output
function calls; but it's possible that such a call could involve
re-entrant use of SPI. It seems likely that there are similar hazards
elsewhere, if not in the core code then in third-party SPI users.
Previously SPI_finish() reset SPI's API globals to zeroes/nulls, which
would typically make for a crash in such a situation. Restoring them
to the values they had at SPI_connect() seems like a considerably more
useful behavior, and it still meets the design goal of not leaving any
dangling pointers to tuple tables of the function being exited.
Also, cause SPI_connect() to reset these variables to zeroes/nulls after
saving them. This prevents interference in the opposite direction: it's
possible that a SPI-using function that's only ever been tested standalone
contains assumptions that these variables start out as zeroes. That was
the case as long as you were the outermost SPI user, but not so much for
an inner user. Now it's consistent.
Report and fix suggestion by Chapman Flack, actual patch by me.
Back-patch to all supported branches.
Tom Lane [Fri, 7 Sep 2018 22:13:29 +0000 (18:13 -0400)]
Limit depth of forced recursion for CLOBBER_CACHE_RECURSIVELY.
It's somewhat surprising that we got away with this before. (Actually,
since nobody tests this routinely AFAIK, it might've been broken for
awhile. But it's definitely broken in the wake of commit f868a8143.)
It seems sufficient to limit the forced recursion to a small number
of levels.
Back-patch to all supported branches, like the preceding patch.
Tom Lane [Fri, 7 Sep 2018 22:04:37 +0000 (18:04 -0400)]
Fix longstanding recursion hazard in sinval message processing.
LockRelationOid and sibling routines supposed that, if our session already
holds the lock they were asked to acquire, they could skip calling
AcceptInvalidationMessages on the grounds that we must have already read
any remote sinval messages issued against the relation being locked.
This is normally true, but there's a critical special case where it's not:
processing inside AcceptInvalidationMessages might attempt to access system
relations, resulting in a recursive call to acquire a relation lock.
Hence, if the outer call had acquired that same system catalog lock, we'd
fall through, despite the possibility that there's an as-yet-unread sinval
message for that system catalog. This could, for example, result in
failure to access a system catalog or index that had just been processed
by VACUUM FULL. This is the explanation for buildfarm failures we've been
seeing intermittently for the past three months. The bug is far older
than that, but commits a54e1f158 et al added a new recursion case within
AcceptInvalidationMessages that is apparently easier to hit than any
previous case.
To fix this, we must not skip calling AcceptInvalidationMessages until
we have *finished* a call to it since acquiring a relation lock, not
merely acquired the lock. (There's already adequate logic inside
AcceptInvalidationMessages to deal with being called recursively.)
Fortunately, we can implement that at trivial cost, by adding a flag
to LOCALLOCK hashtable entries that tracks whether we know we have
completed such a call.
There is an API hazard added by this patch for external callers of
LockAcquire: if anything is testing for LOCKACQUIRE_ALREADY_HELD,
it might be fooled by the new return code LOCKACQUIRE_ALREADY_CLEAR
into thinking the lock wasn't already held. This should be a fail-soft
condition, though, unless something very bizarre is being done in
response to the test.
Also, I added an additional output argument to LockAcquireExtended,
assuming that that probably isn't called by any outside code given
the very limited usefulness of its additional functionality.
Michael Paquier [Fri, 7 Sep 2018 18:00:16 +0000 (11:00 -0700)]
Improve handling of corrupted two-phase state files at recovery
When a corrupted two-phase state file is found by WAL replay, be it for
crash recovery or archive recovery, then the file is simply skipped and
a WARNING is logged to the user, causing the transaction to be silently
lost. Facing a corrupted on-disk two-phase state file is as likely to
happen as corruption in what is stored in WAL records, but WAL records
are already able to fail hard if there is a CRC mismatch. On-disk
two-phase state files, on the contrary, are simply ignored if corrupted.
Note that when restoring the initial two-phase data state at recovery,
files newer than the horizon XID are discarded, hence no file present in
pg_twophase/ should be torn, as they have been made durable by a previous
checkpoint, so recovery should never see any corrupted two-phase state
file by design.
The situation got better since 978b2f6, which added two-phase state
information directly to WAL instead of using on-disk files, so the risk
is limited to two-phase transactions which live across at least one
checkpoint for long periods. Backups having legit two-phase state files
on disk could also silently lose transactions when restored if things
get corrupted.
This behavior has existed since two-phase commit was introduced; no
back-patch is done for now given the lack of complaints about this
problem.
Author: Michael Paquier
Discussion: https://postgr.es/m/20180709050309.GM1467@paquier.xyz
Andrew Gierth [Fri, 7 Sep 2018 12:51:30 +0000 (13:51 +0100)]
Refactor installation of extension headers.
Commit be54b3777 failed on gmake 3.80 due to a chained conditional,
which on closer examination could be removed entirely with some
refactoring elsewhere for a net simplification and more robustness
against empty expansions. Along the way, add some more comments.
Also make explicit in the documentation and comments that built
headers are not removed by 'make clean', since we don't typically want
that for headers generated by a separate ./configure step, and it's
much easier to add your own 'distclean' rule or use EXTRA_CLEAN than
to try and override a deletion rule in pgxs.mk.
Per buildfarm member prairiedog and comments by Michael Paquier, though
all the actual changes are my fault.
libpq connection options as returned by PQconndefaults() have a
"dispchar" field that determines (among other things) whether an option
is a "debug" option, which shouldn't be shown by default to clients.
postgres_fdw makes use of that to control which connection options to
accept from a foreign server configuration.
Curiously, the "options" option, which allows passing configuration
settings to the backend server, was listed as a debug option, which
prevented it from being used by postgres_fdw. Maybe it was once meant
for debugging, but it's clearly in general use nowadays.
So change the dispchar for it to be the normal non-debug case. Also
remove the "debug" reference from its label field.
Tom Lane [Thu, 6 Sep 2018 14:49:45 +0000 (10:49 -0400)]
Make contrib/unaccent's unaccent() function work when not in search path.
Since the fixes for CVE-2018-1058, we've advised people to schema-qualify
function references in order to fix failures in code that executes under
a minimal search_path setting. However, that's insufficient to make the
single-argument form of unaccent() work, because it looks up the "unaccent"
text search dictionary using the search path.
The most expedient answer seems to be to remove the search_path dependency
by making it look in the same schema that the unaccent() function itself
is declared in. This will definitely work for the normal usage of this
function with the unaccent dictionary provided by the extension.
It's barely possible that there are people who were relying on the
search-path-dependent behavior to select other dictionaries with the same
name; but if there are any such people at all, they can still get that
behavior by writing unaccent('unaccent', ...), or possibly
unaccent('unaccent'::text::regdictionary, ...) if the lookup has to be
postponed to runtime.
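For example, assuming the extension is installed in schema public, the
one-argument form now works even under a minimal search_path:

    SET search_path = pg_catalog;
    SELECT public.unaccent('Hôtel');  -- dictionary found in schema public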
Per complaint from Gunnlaugur Thor Briem. Back-patch to all supported
branches.
Nowadays, all platforms except Windows and older HP-UX have standard
dlopen() support. So having a separate implementation per platform
under src/backend/port/dynloader/ is a bit excessive. Instead, treat
dlopen() like other library functions that happen to be missing
sometimes and put a replacement implementation under src/port/.
Amit Kapila [Thu, 6 Sep 2018 03:57:19 +0000 (09:27 +0530)]
Fix the overrun in hash index metapage for smaller block sizes.
The commit 620b49a1 changed the value of HASH_MAX_BITMAPS with the intent
of allowing many non-unique values in hash indexes without worrying about
reaching the limit on the number of overflow pages. At the time, it
didn't occur to us that this can overrun the block for smaller block
sizes.
Choose the value of HASH_MAX_BITMAPS based on BLCKSZ such that it gives
the same answer as now for the cases where the overrun doesn't occur, and
some other sufficiently small value for the cases where an overrun
currently does occur. This allows us not to change the behavior in any
case that currently works, so there's really no reason for a HASH_VERSION
bump.
Author: Dilip Kumar
Reviewed-by: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA4eK1LtF4VmU4mx_+i72ff1MdNZ8XaJMGkt2HV8+uSWcn8t4A@mail.gmail.com
Andrew Gierth [Wed, 5 Sep 2018 21:01:21 +0000 (22:01 +0100)]
Allow extensions to install built as well as unbuilt headers.
Commit df163230b overlooked the case that an out-of-tree extension
might need to build its header files (e.g. via ./configure). If it is
also doing a VPATH build, the HEADERS_* rules in the original commit
would then fail to find the files, since they would be looking only
under $(srcdir) and not in the build directory.
Fix by adding HEADERS_built and HEADERS_built_$(MODULE) which behave
like DATA_built in that they look in the build dir rather than the
source dir (and also make the files dependencies of the "all" target).
No Windows support appears to be needed for this, since it is only
relevant to out-of-tree builds (no support exists in Mkvcbuild.pm to
build extension header files in any case).
Tom Lane [Wed, 5 Sep 2018 17:47:15 +0000 (13:47 -0400)]
Make argument names of pg_get_object_address consistent, and fix docs.
pg_get_object_address and pg_identify_object_as_address are supposed
to be inverses, but they disagreed as to the names of the arguments
representing the textual form of an object address. Moreover, the
documented argument names didn't agree with reality at all, either
for these functions or pg_identify_object.
In HEAD and v11, I think we can get away with renaming the input
arguments of pg_get_object_address to match the outputs of
pg_identify_object_as_address. In theory that might break queries
using named-argument notation to call pg_get_object_address, but
it seems really unlikely that anybody is doing that, or that they'd
have much trouble adjusting if they were. In older branches, we'll
just live with the lack of consistency.
Aside from fixing the documentation of these functions to match reality,
I couldn't resist the temptation to do some copy-editing.
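A round-trip sketch of the two functions (table name hypothetical):

    SELECT type, object_names, object_args
      FROM pg_identify_object_as_address('pg_class'::regclass,
                                         'mytable'::regclass, 0);
    -- the outputs feed straight back in; argument names now match:
    SELECT classid, objid, objsubid
      FROM pg_get_object_address('table', '{public,mytable}', '{}');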
Per complaint from Jean-Pierre Pelletier. Back-patch to 9.5 where these
functions were introduced. (Before v11, this is a documentation change
only.)
In the original code, we were storing the pg_inherits row for a
partitioned table too early: enough that we had a hack for relcache to
avoid falling flat on its face while reading such a partial entry. If
we finish the pg_class creation first and *then* store the pg_inherits
entry, we don't need that hack.
Also recognize that pg_class.relpartbound is not marked NOT NULL and
therefore it's entirely possible to read null values, so having only
Assert() protection isn't enough. Change those to if/elog tests
instead. This qualifies as a robustness fix, so backpatch to pg11.
In passing, remove one access that wasn't actually needed, and reword
one message to be like all the others that check for the same thing.
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20180903213916.hh6wasnrdg6xv2ud@alvherre.pgsql
Peter Eisentraut [Wed, 29 Aug 2018 09:10:17 +0000 (11:10 +0200)]
PL/Python: Remove use of simple slicing API
The simple slicing API (sq_slice, sq_ass_slice) has been deprecated
since Python 2.0 and has been removed altogether in Python 3, so remove
those functions from the PLyResult class. Instead, the non-slice
mapping functions mp_subscript and mp_ass_subscript can take slice
objects as an index. Since we just pass the index through to the
underlying list object, we already support that. Test coverage was
already in place.
Bruce Momjian [Wed, 5 Sep 2018 02:34:07 +0000 (22:34 -0400)]
docs: improve AT TIME ZONE description
The previous description was unclear. Also add a third example, change
use of time zone acronyms to more verbose descriptions, and add a
mention that using 'time' with AT TIME ZONE uses the current time zone
rules.
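For reference, the two main forms of AT TIME ZONE:

    -- timestamp -> timestamptz: interpret the value as local time in the zone
    SELECT TIMESTAMP '2018-09-05 12:00' AT TIME ZONE 'America/New_York';
    -- timestamptz -> timestamp: convert the value to local time in the zone
    SELECT TIMESTAMPTZ '2018-09-05 12:00+00' AT TIME ZONE 'America/New_York';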
Michael Paquier [Tue, 4 Sep 2018 18:06:04 +0000 (11:06 -0700)]
Improve some error message strings and errcodes
This makes a bit less work for translators, by unifying error strings a
bit more with what the rest of the code does, this time for three error
strings in autoprewarm and one in base backup code.
After some code review of slot.c, some file-access failures turned out
to be reported with an incorrect internal-error errcode, where a
corrupted-data errcode makes the most sense, similarly to the previous
work done in e41d0a1. Also, after calling rmtree(), a WARNING was
reported that duplicated what the internal call already reports, so make
the code more consistent with all other code paths calling this
function.
Author: Michael Paquier
Discussion: https://postgr.es/m/20180902200747.GC1343@paquier.xyz
Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.
To fix, create a unique index on (conrelid, contypid, conname). Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain. (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed. But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.) This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.
Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.
Tom Lane [Tue, 4 Sep 2018 14:52:01 +0000 (10:52 -0400)]
Clean up after TAP tests in oid2name and vacuumlo.
Oversights in commits 1aaf532de and bfea331a5. Unlike the case for
traditional-style REGRESS tests, pgxs.mk doesn't have any builtin support
for TAP tests, so it doesn't realize it should remove tmp_check/.
Maybe we should build some actual pgxs infrastructure for TAP tests ...
but for the moment, just remove explicitly.
Amit Kapila [Tue, 4 Sep 2018 04:36:09 +0000 (10:06 +0530)]
Prohibit pushing subqueries containing window function calculation to
workers.
Allowing window function calculation in workers leads to inconsistent
results because if the input row ordering is not fully deterministic, the
output of window functions might vary across workers. The fix is to treat
them as parallel-restricted.
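A sketch of the kind of subquery now treated as parallel-restricted
(names hypothetical):

    -- If b has duplicate values, row numbering can vary with input order,
    -- so the window computation must not run below a Gather node.
    SELECT *
      FROM (SELECT a, row_number() OVER (ORDER BY b) AS rn FROM tbl) AS s
     WHERE rn <= 10;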
In passing, improve the coding pattern in max_parallel_hazard_walker
so that it has a chain of mutually-exclusive if ... else if ... else if
... else if ... IsA tests.
Reported-by: Marko Tiikkaja
Bug: 15324
Author: Amit Kapila
Reviewed-by: Tom Lane
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CAL9smLAnfPJCDUUG4ckX2iznj53V7VSMsYefzZieN93YxTNOcw@mail.gmail.com
Amit Kapila [Tue, 4 Sep 2018 02:43:35 +0000 (08:13 +0530)]
During the split, set checksum on an empty hash index page.
On a split, we allocate a new splitpoint's worth of bucket pages wherein
we initialize the last page with zeros, which is fine, but we forgot to set
the checksum for that last page.
We decided to back-patch this fix only down to 10 because we don't have
an easy way to test it in prior versions. Another reason is that the
hash-index code was changed heavily in 10, so it is not advisable to push
the fix without testing it in prior versions.
Author: Amit Kapila
Reviewed-by: Yugo Nagata
Backpatch-through: 10
Discussion: https://postgr.es/m/5d03686d-727c-dbf8-0064-bf8b97ffe850@2ndquadrant.com
This column was added in commit 8224de4f42cc ("Indexes with INCLUDE
columns and their support in B-tree") to ease writing the ruleutils.c
supporting code for that feature, but it turns out to be unnecessary --
we can do the same thing with just one more syscache lookup.
Even the documentation for the new column being removed in this commit
is awkward.
Tomas Vondra [Mon, 3 Sep 2018 00:10:24 +0000 (02:10 +0200)]
Fix memory leak in TRUNCATE decoding
When decoding a TRUNCATE record, the relids array was being allocated in
the main ReorderBuffer memory context, but not released with the change,
resulting in a memory leak.
The array was also ignored when serializing/deserializing the change,
assuming all the information is stored in the change itself. So when
spilling the change to disk, we serialized only the pointer to the
relids array. Because the array was never released, the pointer happened
to remain valid even after loading the change back to memory, which
prevented an actual crash.
This fixes both the memory leak and (de)serialization. The relids array
is still allocated in the main ReorderBuffer memory context (none of the
existing ones seems like a good match, and adding an extra context seems
like overkill). The allocation is wrapped in new ReorderBuffer API
functions, to keep the details within reorderbuffer.c, just like the
other ReorderBufferGet* methods do.
Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/66175a41-9342-2845-652f-1bd4c3ee50aa%402ndquadrant.com
Backpatch: 11, where decoding of TRUNCATE was introduced
Michael Paquier [Sun, 2 Sep 2018 19:40:30 +0000 (12:40 -0700)]
Fix initial sync of slot parent directory when restoring status
At the beginning of recovery, information from replication slots is
recovered from disk to memory. In order to ensure the durability of the
information, the status file as well as its parent directory are
synced. It happens that the sync on the parent directory was done
directly using the status file path, which is logically incorrect, and
the current code has been doing a sync on the same object twice in a
row.
Reported-by: Konstantin Knizhnik
Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/9eb1a6d5-b66f-2640-598d-c5ea46b8f68a@postgrespro.ru
Backpatch-through: 9.4
Tom Lane [Sat, 1 Sep 2018 20:02:47 +0000 (16:02 -0400)]
Doc: fix oversights in "Client/Server Character Set Conversions" table.
This table claimed that JOHAB could be used as a server encoding, which
was true originally but hasn't been true since 8.3. It also lacked
entries for EUC_JIS_2004 and SHIFT_JIS_2004.
JOHAB problem noted by Lars Kanis, the others by me.
Tom Lane [Sat, 1 Sep 2018 19:27:12 +0000 (15:27 -0400)]
Avoid using potentially-under-aligned page buffers.
There's a project policy against using plain "char buf[BLCKSZ]" local
or static variables as page buffers; preferred style is to palloc or
malloc each buffer to ensure it is MAXALIGN'd. However, that policy's
been ignored in an increasing number of places. We've apparently got
away with it so far, probably because (a) relatively few people use
platforms on which misalignment causes core dumps and/or (b) the
variables chance to be sufficiently aligned anyway. But this is not
something to rely on. Moreover, even if we don't get a core dump,
we might be paying a lot of cycles for misaligned accesses.
To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock
that the compiler must allocate with sufficient alignment, and use
those in place of plain char arrays.
I used these types even for variables where there's no risk of a
misaligned access, since ensuring proper alignment should make
kernel data transfers faster. I also changed some places where
we had been palloc'ing short-lived buffers, for coding style
uniformity and to save palloc/pfree overhead.
Since this seems to be a live portability hazard (despite the lack
of field reports), back-patch to all supported versions.
Patch by me; thanks to Michael Paquier for review.
Thomas Munro [Sat, 1 Sep 2018 19:12:24 +0000 (07:12 +1200)]
Add Greek characters to unaccent.rules.
Author: Tasos Maschalidis
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/153495048900.1368.11566580687623014380%40wrigleys.postgresql.org
Discussion: https://postgr.es/m/VI1PR01MB38537EBD529FE5EE3FE9A5FEB5370%40VI1PR01MB3853.eurprd01.prod.exchangelabs.com
Currently there are two ways to trigger log rotation in the logging
collector process: call the pg_rotate_logfile() SQL function, or send
SIGUSR1 directly to the logging collector process. However, it's nice to
have a more suitable way for external tools to do that, one which doesn't
require a SQL connection or knowledge of the logging collector's PID.
This commit implements triggering log rotation by the "pg_ctl logrotate"
command.
Discussion: https://postgr.es/m/20180416.115435.28153375.horiguchi.kyotaro%40lab.ntt.co.jp
Author: Kyotaro Horiguchi, Alexander Kuzmenkov, Alexander Korotkov
Ignore server-side delays when enforcing wal_sender_timeout.
Healthy clients of servers having poor I/O performance, such as
buildfarm members hamster and tern, saw unexpected timeouts. That
disagreed with documentation. This fix adds one gettimeofday() call
whenever ProcessRepliesIfAny() finds no client reply messages.
Back-patch to 9.4; the bug's symptom is rare and mild, and the code all
moved between 9.3 and 9.4.
Andres Freund [Fri, 31 Aug 2018 23:56:11 +0000 (16:56 -0700)]
Fix 8a934d677 for libc++ and make more include order resistant.
The previous definition was used in C++ mode, which causes problems
when using clang with libc++ (rather than libstdc++), due to bugs
therein. So just avoid it in C++ mode.
A second problem is that, depending on include order and implicit
includes, the previous definition did not guarantee that the current
hack was effective by the time isinf was used; fix that by forcing
math.h to be included. This can cause clang-using builds, or gcc-using
ones with JIT enabled, to slow down noticeably.
It's likely that at some point we'll want a better solution for the
performance problem, but while the hack is there it had better work.
Reported-By: Steven Winfield
Bug: #15270
Discussion: https://postgr.es/m/153116283147.1401.360416241833049560@wrigleys.postgresql.org
Author: Andres Freund
Backpatch: 11, like the previous commit.
Tom Lane [Fri, 31 Aug 2018 20:45:33 +0000 (16:45 -0400)]
Fix psql's \dC command to annotate I/O conversion casts as such.
A cast declared WITH INOUT was described as '(binary coercible)',
which seems pretty inaccurate; let's print '(with inout)' instead.
Per complaint from Jean-Pierre Pelletier.
This definitely seems like a bug fix, but given that it's been wrong
since 8.4 and nobody complained before, I'm hesitant to back-patch a
behavior change into stable branches. It doesn't seem too late for
v11 though.
Michael Paquier [Fri, 31 Aug 2018 18:03:40 +0000 (11:03 -0700)]
Ensure correct minimum consistent point on standbys
The startup process improved its calculation of the minimum consistent
point in 8d68ee6, which ensures that all available WAL gets replayed
when doing crash recovery, but it introduced an incorrect calculation of
the minimum recovery point for non-startup processes, which can cause
incorrect page references on a standby when for example the background
writer flushed a couple of pages on-disk but had not updated the control
file to let a subsequent crash recovery replay to where it should have.
The only case where this has been reported to be a problem is when a
standby needs to calculate the latest removed xid when replaying a btree
deletion record, so one would need connections on a standby that happen
just after recovery has thought it reached a consistent point. Using a
background worker which is started after the consistent point is reached
would be the easiest way to get into problems if it connects to a
database. Having clients which attempt to connect periodically could
also be a problem, but the odds of seeing this problem are much lower.
The fix used is pretty simple: give non-startup processes access to the
minimum recovery point written in the control file so that they use it
as a reference, while the startup process still initializes its own
reference to the minimum consistent point, so that the original problem
of incorrect page references happening post-promotion with a crash does
not show up.
Reported-by: Alexander Kukushkin
Diagnosed-by: Alexander Kukushkin
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Alexander Kukushkin
Discussion: https://postgr.es/m/153492341830.1368.3936905691758473953@wrigleys.postgresql.org
Backpatch-through: 9.3
Tom Lane [Fri, 31 Aug 2018 17:42:18 +0000 (13:42 -0400)]
Code review for pg_verify_checksums.c.
Use postgres_fe.h, since this is frontend code. Pretend that we've heard
of project style guidelines for, eg, #include order. Use BlockNumber not
int arithmetic for block numbers, to avoid misbehavior with relations
exceeding 2^31 blocks. Avoid an unnecessary strict-aliasing warning
(per report from Michael Banck). Const-ify assorted stuff. Avoid
scribbling on the output of readdir() -- perhaps that's safe in practice,
but POSIX forbids it, and this code has so far earned exactly zero
credibility portability-wise. Editorialize on an ambiguously-worded
message.
I did not touch the problem of the "buf" local variable being possibly
insufficiently aligned; that's not specific to this code, and seems like
it should be fixed as part of a different, larger patch.
Enforce cube dimension limit in all cube construction functions
contrib/cube has a limit of 100 dimensions for the cube datatype. However,
it's not enforced everywhere, and one can actually construct a cube with more
than 100 dimensions, then have trouble with dump/restore. This commit adds
checks for the dimension limit in all functions responsible for cube
construction.
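For example, every construction path now rejects an over-limit cube
(a sketch using the extension's cube(float8[]) constructor):

    SELECT cube(array_agg(0.0::float8))       -- 101 dimensions: now an error
      FROM generate_series(1, 101);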
Backpatch to all supported versions.
Reported-by: Andrew Gierth
Discussion: https://postgr.es/m/87va7uybt4.fsf%40news-spur.riddles.org.uk
Author: Andrey Borodin with small additions by me
Review: Tom Lane
Backpatch-through: 9.3
Split contrib/cube platform-depended checks into separate test
We're currently maintaining two expected outputs for the cube regression
test. But that appears to be unsuitable, because these outputs differ only in
a few checks involving scientific notation. So, split the checks involving
scientific notation into a separate test, making contrib/cube easier to
maintain. Backpatch to all supported versions in order to make further
backpatching easier.
Discussion: https://postgr.es/m/CAPpHfdvJgWjxHsJTtT%2Bo1tz3OR8EFHcLQjhp-d3%2BUcmJLh-fQA%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 9.3
Tom Lane [Fri, 31 Aug 2018 16:26:20 +0000 (12:26 -0400)]
Make checksum_impl.h safe to compile with -fstrict-aliasing.
In general, Postgres requires -fno-strict-aliasing with compilers that
implement C99 strict aliasing rules. There's little hope of getting
rid of that overall. But it seems like it would be a good idea if
storage/checksum_impl.h in particular didn't depend on it, because
that header is explicitly intended to be included by external programs.
We don't have a lot of control over the compiler switches that an
external program might use, as shown by Michael Banck's report of
failure in a privately-modified version of pg_verify_checksums.
Hence, switch to using a union in place of willy-nilly pointer casting
inside this file. I think this makes the code a bit more readable
anyway.
checksum_impl.h hasn't changed since it was introduced in 9.3,
so back-patch all the way.
Etsuro Fujita [Fri, 31 Aug 2018 11:34:06 +0000 (20:34 +0900)]
Disable support for partitionwise joins in problematic cases.
Commit f49842d, which added support for partitionwise joins, built the
child's tlist by applying adjust_appendrel_attrs() to the parent's. So in
the case where the parent's included a whole-row Var for the parent, the
child's contained a ConvertRowtypeExpr. To cope with that, that commit
added code to the planner, such as setrefs.c, but some code paths still
assumed that the tlist for a scan (or join) rel would only include Vars
and PlaceHolderVars, which was true before that commit, causing errors:
* When creating an explicit sort node for an input path for a mergejoin
path for a child join, prepare_sort_from_pathkeys() threw the 'could not
find pathkey item to sort' error.
* When deparsing a relation participating in a pushed down child join as a
subquery in contrib/postgres_fdw, get_relation_column_alias_ids() threw
the 'unexpected expression in subquery output' error.
* When performing set_plan_references() on a local join plan generated by
contrib/postgres_fdw for EvalPlanQual support for a pushed down child
join, fix_join_expr() threw the 'variable not found in subplan target
lists' error.
To fix these, two approaches have been proposed: one by Ashutosh Bapat and
one by me. While the former keeps building the child's tlist with a
ConvertRowtypeExpr, the latter builds it with a whole-row Var for the
child so as not to violate the planner assumption, and tries to fix it up
later. But both approaches need more work, so instead refuse to generate
partitionwise join paths when whole-row Vars are involved. We don't need to
handle ConvertRowtypeExprs in the child's tlists for now, so this commit
also removes the changes to the planner.
Previously, partitionwise join computed attr_needed data for each child
separately, and built the child join's tlist using that data, which also
required an extra step for adding PlaceHolderVars to that tlist, but it
would be more efficient to build it from the parent join's tlist through
the adjust_appendrel_attrs() transformation. So this commit builds that
list that way, and simplifies build_joinrel_tlist() and placeholder.c as
well as part of set_append_rel_size() to basically what they were before
partitionwise join went in.
Back-patch to PG11 where partitionwise join was introduced.
Report by Rajkumar Raghuwanshi. Analysis by Ashutosh Bapat, who also
provided some of regression tests. Patch by me, reviewed by Robert Haas.
Amit Kapila [Fri, 31 Aug 2018 09:55:42 +0000 (15:25 +0530)]
Fix pg_verify_checksums on Windows.
To verify the checksums, we open the file in text mode which doesn't work
on Windows as WIN32 treats Control-Z as EOF in files opened in text mode.
This leads to "short read of block .." error in some cases.
Fix it by opening the files in binary mode.
Author: Amit Kapila
Reviewed-by: Magnus Hagander
Backpatch-through: 11
Discussion: https://postgr.es/m/CAA4eK1+LOnzod+h85FGmyjWzXKy-XV1FYwEyP-Tky2WpD5cxwA@mail.gmail.com
Peter Eisentraut [Wed, 29 Aug 2018 14:45:32 +0000 (16:45 +0200)]
pg_dump: Reorganize getTableAttrs()
Instead of repeating the almost same large query in each version branch,
use one query and add a few columns to the SELECT list depending on the
version. This saves a lot of duplication.
Peter Eisentraut [Wed, 22 Aug 2018 06:42:49 +0000 (08:42 +0200)]
Error position support for defaults and check constraints
Add support for error position reporting for the expressions contained
in defaults and check constraint definitions. This currently works only
for CREATE TABLE, not ALTER TABLE, because the latter is not set up to
pass around the original query string.
Michael Paquier [Thu, 30 Aug 2018 00:10:02 +0000 (17:10 -0700)]
Stop bgworkers during fast shutdown with postmaster in startup phase
When a postmaster gets into its PM_STARTUP phase, it starts background
workers using BgWorkerStart_PostmasterStart mode immediately, which
causes problems for a fast shutdown as the postmaster forgets to send
SIGTERM to already-started background workers. With smart and immediate
shutdowns this happened correctly; fast shutdown was the only mode
missing the shot.
Author: Alexander Kukushkin
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFh8B=mvnD8+DZUfzpi50DoaDfZRDfd7S=gwj5vU9GYn8UvHkA@mail.gmail.com
Backpatch-through: 9.5
Tom Lane [Tue, 28 Aug 2018 23:46:59 +0000 (19:46 -0400)]
Make pg_restore's identify_locking_dependencies() more bulletproof.
This function had a blacklist of dump object types that it believed
needed exclusive lock ... but we hadn't maintained that, so that it
was missing ROW SECURITY, POLICY, and INDEX ATTACH items, all of
which need (or should be treated as needing) exclusive lock.
Since the same oversight seems likely in future, let's reverse the
sense of the test so that the code has a whitelist of safe object
types; better to wrongly assume a command can't be run in parallel
than the opposite. Currently the only POST_DATA object type that's
safe is CREATE INDEX ... and that list hasn't changed in a long time.
Tom Lane [Tue, 28 Aug 2018 23:33:04 +0000 (19:33 -0400)]
Code review for pg_dump's handling of ALTER INDEX ATTACH PARTITION.
Ensure the TOC entry is marked with the correct schema, so that its
name is as unique as the index's is.
Fix the dependencies: we want dependencies from this TOC entry to the
two indexes it depends on, and we don't care (at least not for this
purpose) what order the indexes are created in. Also, add dependencies
on the indexes' underlying tables. Those might seem pointless given
the index dependencies, but they are helpful to cue parallel restore
to avoid running the ATTACH PARTITION in parallel with other DDL on
the same tables.
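A sketch of the dependency wiring (variable names are assumptions):
    /* Depend on both indexes, and also on their underlying tables, so
     * parallel restore won't schedule the ATTACH alongside other DDL
     * touching those tables. */
    addObjectDependency(&attachinfo->dobj, index->dobj.dumpId);
    addObjectDependency(&attachinfo->dobj, parentidx->dobj.dumpId);
    addObjectDependency(&attachinfo->dobj, index->indextable->dobj.dumpId);
    addObjectDependency(&attachinfo->dobj, parentidx->indextable->dobj.dumpId);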
Tom Lane [Tue, 28 Aug 2018 21:26:09 +0000 (17:26 -0400)]
Include contrib modules in the temp installation even without REGRESS.
Now that we have TAP tests, a contrib module may have something useful
to do in "make check" even if it has no pg_regress-style regression
scripts, and hence no REGRESS setting. But the TAP tests will fail,
or else test the wrong installed files, unless we install the contrib
module into the temp installation. So move the bit about adding to
EXTRA_INSTALL so that it applies regardless.
We might want this in back branches in future, but for the moment
I only risked adding it to v11.
Andrew Gierth [Tue, 28 Aug 2018 13:43:51 +0000 (14:43 +0100)]
postgres_fdw: don't push ORDER BY with no vars (bug #15352)
Commit aa09cd242 changed a condition in find_em_expr_for_rel from
a bms_equal comparison of relids to bms_is_subset, in order to
support ORDER BY clauses on foreign joins. But this also lets
through the degenerate case of expressions with no Vars at all (and
hence empty relids), including integer constants, which the remote
server then parses as ordinal column positions (viz. "ERROR: ORDER BY
position 0 is not in select list" as in the bug report).
Repair by adding an additional !bms_is_empty test.
Backpatch through to 9.6 where the aforementioned change was made.
Per bug #15352 from Maksym Boguk; analysis and patch by me.
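The tightened condition in find_em_expr_for_rel, in sketch form:
    if (bms_is_subset(em->em_relids, rel->relids) &&
        !bms_is_empty(em->em_relids))
        return em->em_expr;     /* expression must use some of rel's Vars */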
Michael Paquier [Tue, 28 Aug 2018 12:42:45 +0000 (21:42 +0900)]
Rework option set of vacuumlo
Like oid2name, vacuumlo's options have been lacking consistency with
other utilities:
- Connection options gain long aliases.
- Document the environment variables that can be used: PGHOST, PGPORT and
PGUSER.
Documentation and code are reordered to be more consistent. A basic set
of TAP tests is added while at it.
Author: Tatsuro Yamada
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
Michael Paquier [Tue, 28 Aug 2018 12:33:32 +0000 (21:33 +0900)]
Rework option set of oid2name
oid2name has made little effort to keep its interface consistent with
other binary utilities:
- -H was used instead of -h/--host. The old option is now marked as
deprecated, but it is still accepted for backward compatibility.
- -P had been removed from the code, but was still documented.
- All options gain long aliases, making connection options more similar
to other binaries.
- Document the environment variables that can be used: PGHOST, PGPORT and
PGUSER.
A basic set of TAP tests is added along the way, and the documentation is
cleaned up to be more consistent with other utilities.
Author: Tatsuro Yamada
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
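The long-alias pattern both this commit and the vacuumlo one adopt,
sketched with getopt_long (option table abbreviated; details assumed):
    static const struct option long_options[] = {
        {"dbname", required_argument, NULL, 'd'},
        {"host", required_argument, NULL, 'h'},
        {"port", required_argument, NULL, 'p'},
        {"username", required_argument, NULL, 'U'},
        {NULL, 0, NULL, 0}
    };
    char       *host = NULL;
    int         c;

    /* keep "H:" in the optstring so the deprecated spelling still parses */
    while ((c = getopt_long(argc, argv, "d:h:H:p:U:",
                            long_options, NULL)) != -1)
    {
        switch (c)
        {
            case 'h':
            case 'H':           /* deprecated alias for -h/--host */
                host = pg_strdup(optarg);
                break;
                /* other options elided */
        }
    }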
Andrew Gierth [Tue, 28 Aug 2018 08:52:25 +0000 (09:52 +0100)]
Avoid quadratic slowdown in regexp match/split functions.
regexp_matches, regexp_split_to_table and regexp_split_to_array all
work by compiling a list of match positions as character offsets (NOT
byte positions) in the source string.
Formerly, they then used text_substr to extract the matched text; but
in a multi-byte encoding, that counts the characters in the string,
and the characters needed to reach the starting byte position, on
every call. Accordingly, the performance degraded as the product of
the input string length and the number of match positions, such that
splitting a string of a few hundred kbytes could take many minutes.
Repair by keeping the wide-character copy of the input string
available (only in the case where encoding_max_length is not 1) after
performing the match operation, and extracting substrings from that
instead. This reduces the complexity to being linear in the number of
result bytes, discounting the actual regexp match itself (which is not
affected by this patch).
In passing, remove the retail pfree() cleanup that was obsoleted by
commit ff428cded (Feb 2008), which made cleanup of SRF multi-call
contexts automatic. Also increase the maximum number of matches (to
~134 million) and report an error when that limit is reached.
Backpatch all the way because this has been wrong forever.
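In sketch form (context-struct field names are assumptions), extracting
the i'th match from the saved wide-character copy:
    pg_wchar   *wide = matchctx->wide_str;      /* saved after matching */
    char       *buf = matchctx->conv_buf;       /* scratch byte buffer */
    int         so = matchctx->match_locs[2 * i];       /* char offsets */
    int         eo = matchctx->match_locs[2 * i + 1];
    int         len = pg_wchar2mb_with_len(wide + so, buf, eo - so);
    text       *result = cstring_to_text_with_len(buf, len);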
Peter Eisentraut [Tue, 28 Aug 2018 09:49:11 +0000 (11:49 +0200)]
pg_verify_checksums: Message style improvements and NLS support
The source code was already set up for NLS support, so just a nls.mk
file needed to be added. Also, fix the old problem of putting the int64
format specifier right into the string, which breaks NLS.
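The NLS-safe pattern for the int64 case, sketched (message text is
illustrative):
    char        buf[32];

    /* Render the value separately: INT64_FORMAT varies by platform, so
     * embedding it in a translatable string breaks message catalogs. */
    snprintf(buf, sizeof(buf), INT64_FORMAT, badblocks);
    printf(_("Bad checksums: %s\n"), buf);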
Thomas Munro [Tue, 28 Aug 2018 00:32:22 +0000 (12:32 +1200)]
Code review for simplehash.h.
Fix reference to non-existent file in comment.
Add SH_ prefix to the EMPTY and IN_USE tokens, to reduce likelihood of
collisions with unrelated macros.
Add include guards around the function definitions that are not
"parameterized", so the header can be used again in the same translation
unit.
Undefine SH_EQUAL macro where other "parameter" macros are undefined, for
the same reason.
Author: Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1LdXZ3mMTM8tHt_b%3DK1kREit%3Dp8sikesak%3DkzHHM07Nw%40mail.gmail.com
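The include-guard part, in sketch form:
    #ifndef SIMPLEHASH_H
    #define SIMPLEHASH_H
    /* declarations that do not depend on SH_PREFIX live here, so a
     * second inclusion in the same translation unit is harmless */
    #endif                      /* SIMPLEHASH_H */

    /* ... while at the bottom the "parameter" macros are undefined,
     * allowing re-inclusion with different parameters: */
    #undef SH_EQUAL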
Peter Eisentraut [Thu, 23 Aug 2018 13:13:48 +0000 (15:13 +0200)]
Fix snapshot leak warning for some procedures
The problem arises with the combination of CALL with output parameters
and doing a COMMIT inside the procedure. When a CALL has output
parameters, the portal uses the strategy PORTAL_UTIL_SELECT instead of
PORTAL_MULTI_QUERY. Using PORTAL_UTIL_SELECT causes the portal's
snapshot to be registered with the current resource
owner (portal->holdSnapshot); see 9ee1cf04ab6bcefe03a11837b53f29ca9dc24c7a for the reason.
Normally, PortalDrop() unregisters the snapshot. If not, then
ResourceOwnerRelease() will print a warning about a snapshot leak on
transaction commit. A transaction commit normally drops all
portals (PreCommit_Portals()), except the active portal. So in the case
of the active portal, we need to release the snapshot manually to avoid
the warning.
Reported-by: Prabhat Sahu <prabhat.sahu@enterprisedb.com>
Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
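A sketch of the manual release (placement and surrounding logic are
simplified; the committed code may differ):
    if (portal->holdSnapshot)
    {
        UnregisterSnapshotFromOwner(portal->holdSnapshot, portal->resowner);
        portal->holdSnapshot = NULL;
    }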
Tom Lane [Mon, 27 Aug 2018 19:11:12 +0000 (15:11 -0400)]
Fix missing dependency for pg_dump's ENABLE ROW LEVEL SECURITY items.
The archive should show a dependency on the item's table, but it failed
to include one. This could cause failures in parallel restore due to
emitting ALTER TABLE ... ENABLE ROW LEVEL SECURITY before restoring
the table's data. In practice the odds of a problem seem low, since
you would typically need to have set FORCE ROW LEVEL SECURITY as well,
and you'd also need a very high --jobs count to have any chance of this
happening. That probably explains the lack of field reports.
Still, it's a bug, so back-patch to 9.5 where RLS was introduced.
Michael Paquier [Mon, 27 Aug 2018 00:11:12 +0000 (09:11 +0900)]
Improve VACUUM and ANALYZE by avoiding early lock queue
A caller of VACUUM could perform early lock obtention, causing other
sessions to block on the lock request; this potentially allows
denial-of-service attacks, as even a non-privileged user can attempt to
vacuum a critical catalog table so as to block even all incoming
connection attempts.
Contrary to TRUNCATE, a client could attempt a system-wide VACUUM after
building the list of relations to vacuum, which can cause vacuum_rel()
or analyze_rel() to try to lock the relation, but the operation would
just block. When the client specifies a list of relations and a
relation needs to be skipped, ownership checks are done while building
the list of relations to work on, preventing a later lock attempt.
vacuum_rel() already had the sanity checks needed, except that they
were applied too late. This commit refactors the code so that relation
skips are checked beforehand, making it safer to avoid too-early locks,
for manual VACUUM both with and without a list of relations specified.
An isolation test is added to demonstrate that the early locks no
longer happen, with a WARNING message now issued earlier if the user
calling VACUUM is not the relation's owner.
When a partitioned table is listed in a manual VACUUM or ANALYZE
command, its full list of partitions is fetched, all partitions get
added to the list to work on, and then each one of them is processed one
by one, with ownership checks happening in the later phase of
vacuum_rel() or analyze_rel(). Doing early ownership checks for each
partition proved tedious, as it would create deadlock risks with lock
upgrades, and skipping all partitions when the listed partitioned table
is not owned would be a behavior change compared to how Postgres 10
implemented vacuum for partitioned tables. The original problem
reported, the early lock queue for critical relations, is fixed anyway,
so priority is given to avoiding a backward-incompatible behavior
change.
Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier
Reviewed by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180812222142.GA6097@paquier.xyz
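The shape of the early check, sketched below (classForm access and the
exact message wording are assumptions):
    /* Decide skipping from the catalog before any lock is requested. */
    if (!pg_class_ownercheck(relid, GetUserId()) &&
        !(pg_database_ownercheck(MyDatabaseId, GetUserId()) &&
          !classForm->relisshared))
    {
        ereport(WARNING,
                (errmsg("skipping \"%s\" --- only table or database owner can vacuum it",
                        NameStr(classForm->relname))));
        return false;           /* no lock was ever queued */
    }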
Tom Lane [Sun, 26 Aug 2018 18:21:55 +0000 (14:21 -0400)]
Make syslogger more robust against failures in opening CSV log files.
The previous coding figured it'd be good enough to postpone opening
the first CSV log file until we got a message we needed to write there.
This is unsafe, though, because if the open fails we end up in infinite
recursion trying to report the failure. Instead make the CSV log file
management code look as nearly as possible like the longstanding logic
for the stderr log file. In particular, open it immediately at postmaster
startup (if enabled), or when we get a SIGHUP in which we find that
log_destination has been changed to enable CSV logging.
It seems OK to fail if a postmaster-start-time open attempt fails, as
we've long done for the stderr log file. But we can't die if we fail
to open a CSV log file during SIGHUP, so we're still left with a problem.
In that case, write any output meant for the CSV log file to the stderr
log file. (This will also cover race-condition cases in which backends
send CSV log data before or after we have the CSV log file open.)
This patch also fixes an ancient oversight that, if CSV logging was
turned off during a SIGHUP, we never actually closed the last CSV
log file.
In passing, remember to reset whereToSendOutput = DestNone during syslogger
start, since (unlike all other postmaster children) it's forked before the
postmaster has done that. This made for a platform-dependent difference
in error reporting behavior between the syslogger and other children:
except on Windows, it'd report problems to the original postmaster stderr
as well as the normal error log file(s). It's barely possible that that
was intentional at some point; but it doesn't seem likely to be desirable
in production, and the platform dependency definitely isn't desirable.
Per report from Alexander Kukushkin. It's been like this for a long time,
so back-patch to all supported branches.
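In sketch form (helper names as in syslogger.c; variable naming
assumed):
    /* Open the CSV log eagerly at syslogger start, like the stderr log,
     * so a failed open can't recurse at message-output time. */
    if (Log_destination & LOG_DESTINATION_CSVLOG)
    {
        char       *filename = logfile_getname(first_syslogger_file_time,
                                               ".csv");

        csvlogFile = logfile_open(filename, "a", false);
        pfree(filename);
    }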
Bruce Momjian [Sat, 25 Aug 2018 17:35:14 +0000 (13:35 -0400)]
doc: "Latest checkpoint location" will not match in pg_upgrade
Mention that "Latest checkpoint location" will not match in pg_upgrade
if the standby server is still running during the upgrade, which is
possible. "Match" text first appeared in PG 9.5.
Reported-by: Paul Bonaud
Discussion: https://postgr.es/m/c7268794-edb4-1772-3bfd-04c54585c24e@trainline.com