Alvaro Herrera [Wed, 5 Dec 2018 16:31:55 +0000 (13:31 -0300)]
Don't mark partitioned indexes invalid unnecessarily
When an index is created on a partitioned table using ONLY (don't
recurse to partitions), it gets marked invalid until index partitions
are attached for each table partition. But there's no reason to do this
if there are no partitions ... and moreover, there's no way to get the
index to become valid afterwards, because all partitions that get
created/attached get their own index partition already attached to the
parent index, so there's no chance to do ALTER INDEX ... ATTACH PARTITION
that would make the parent index valid.
Fix by not marking the index as invalid to begin with.
This is very similar to 9139aa19423b, but the pg_dump aspect does not
appear to be relevant until we add FKs that can point to PKs on
partitioned tables. (I tried to cause the pg_upgrade test to break by
leaving some of these bogus tables around, but wasn't able to.)
Making this change means that an index that was supposed to be invalid
in the insert_conflict regression test is no longer invalid; reorder the
DDL so that the test continues to verify the behavior we want it to.
Author: Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20181203225019.2vvdef2ybnkxt364@alvherre.pgsql
Michael Paquier [Fri, 30 Nov 2018 01:34:56 +0000 (10:34 +0900)]
Fix various checksum check problems for pg_verify_checksums and base backups
Three issues are fixed in this patch:
- Base backups forgot to ignore files specific to EXEC_BACKEND, leading
to spurious warnings when checksums are enabled, per analysis from me.
- pg_verify_checksums forgot about files specific to EXEC_BACKEND,
leading to failures of the tool on any such build, particularly Windows.
This error was originally found by newly-introduced TAP tests in various
buildfarm members using EXEC_BACKEND.
- pg_verify_checksums forgot to skip temporary files and temporary
paths, which can look like valid relation files but carry no checksums,
per report from Andres Freund. More tests are added to cover this case.
A new test case which emulates corruption for a file in a different
tablespace is added, coming from Michael Banck, while I wrote the
main code and refactored the test code.
Author: Michael Banck, Michael Paquier
Reviewed-by: Stephen Frost, David Steele
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
Michael Paquier [Fri, 30 Nov 2018 01:15:06 +0000 (10:15 +0900)]
Switch pg_verify_checksums back to a blacklist
This basically reverts commit d55241af705667d4503638e3f77d3689fd6be31,
leaving around a portion of the regression tests still adapted with
empty relation files, and corrupted cases. The whitelist approach was
also proving unable to properly check relation files located in a
non-default tablespace path.
Per discussion with various folks, including Stephen Frost, David
Steele, Andres Freund, Michael Banck and myself.
Reported-by: Michael Banck
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
Backpatch-through: 11
Tom Lane [Thu, 29 Nov 2018 23:28:10 +0000 (18:28 -0500)]
Document handling of invalid/ambiguous timestamp input near DST boundaries.
The source code comments documented this, but the user-facing docs, not
so much. Add a section to Appendix B that discusses it.
In passing, improve a couple other things in Appendix B --- notably,
a long-obsolete claim that time zone abbreviations are looked up in
a fixed table.
Tom Lane [Thu, 29 Nov 2018 20:53:44 +0000 (15:53 -0500)]
Ensure static libraries have correct mod time even if ranlib messes it up.
In at least Apple's version of ranlib, the output file is updated to have
a mod time equal to the max of the timestamps of its components, and that
data only has seconds precision. On a filesystem with sub-second file
timestamp precision --- say, APFS --- this can result in the finished
static library appearing older than its input files, which causes useless
rebuilds and possible outright failures in parallel makes.
We've only seen this reported in the field from people using Apple's
ranlib with a non-Apple make, because Apple's make doesn't know about
sub-second timestamps either so it doesn't decide rebuilds are needed.
But Apple's ranlib presumably shares code with at least some BSDen,
so it's not that unlikely that the same problem could arise elsewhere.
To fix, just "touch" the output file after ranlib finishes.
We seem to need this in only one place. There are other calls of
ranlib in our makefiles, but they are working on intermediate files
whose timestamps are not actually important, or else on an installed
static library for which sub-second timestamp precision is unlikely
to matter either. (Also, so far as I can tell, Apple's ranlib doesn't
mess up the file timestamp in the latter usage anyhow.)
In passing, change "ranlib" to "$(RANLIB)" in one place that was
bypassing the make macro for no good reason.
Per bug #15525 from Jack Kelly (via Alyssa Ross).
Back-patch to all supported branches.
Michael Paquier [Thu, 29 Nov 2018 00:12:40 +0000 (09:12 +0900)]
Fix handling of synchronous replication for stopping WAL senders
This fixes an oversight from c6c3334 which forgot that if a subset of
WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction. However the subset of stopping senders would not release
waiters, potentially breaking synchronous replication guarantees. This
commit makes sure that even stopping WAL senders are able to release
waiters and are tracked properly.
On 9.4, this can also trigger an assertion failure when, for example,
max_wal_senders is set to 1 and a WAL sender cannot find itself in a
synchronous state when the instance stops.
Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4
Peter Geoghegan [Wed, 28 Nov 2018 22:42:52 +0000 (14:42 -0800)]
Have BufFileSize() ereport() on FileSize() failure.
Move the responsibility for checking for and reporting a failure from
the only current BufFileSize() caller, logtape.c, to BufFileSize()
itself. Code within buffile.c is generally responsible for interfacing
with fd.c to report irrecoverable failures. This seems like a
convention that's worth sticking to.
Reorganizing things this way makes it easy to make the error message
raised in the event of BufFileSize() failure descriptive of the
underlying problem. We're now clear on the distinction between
temporary file name and BufFile name, and can show errno, confident that
its value actually relates to the error being reported. In passing, an
existing, similar buffile.c ereport() + errcode_for_file_access() site
is changed to follow the same conventions.
The API of the function BufFileSize() is changed by this commit, despite
already being in a stable release (Postgres 11). This seems acceptable,
since the BufFileSize() ABI was changed by commit aa551830421, which
hasn't made it into a point release yet. Besides, it's difficult to
imagine a third party BufFileSize() caller not just raising an error
anyway, since BufFile state should be considered corrupt when
BufFileSize() fails.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
Backpatch: 11-, where shared BufFiles were introduced.
Thomas Munro [Wed, 28 Nov 2018 01:00:57 +0000 (14:00 +1300)]
Don't set PAM_RHOST for Unix sockets.
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix
sockets. This caused Linux PAM's libaudit integration to make DNS
requests for that name. It's not exactly clear what value PAM_RHOST
should have in that case, but it seems clear that we shouldn't set it
to an unresolvable name, so don't do that.
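For illustration, a minimal sketch of the change using the standard
PAM API (the surrounding function and variable names are hypothetical):
    #include <security/pam_appl.h>

    static void
    set_pam_rhost(pam_handle_t *pamh, const char *remote_host,
                  int is_unix_socket)
    {
        /* For TCP connections, pass the client host name/address. */
        if (!is_unix_socket)
            (void) pam_set_item(pamh, PAM_RHOST, remote_host);
        /* For Unix sockets, leave PAM_RHOST unset rather than passing
         * an unresolvable placeholder like "[local]". */
    }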
Back-patch to 9.6. Bug #15520.
Author: Thomas Munro
Reviewed-by: Peter Eisentraut
Reported-by: Albert Schabhuetl
Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
Tomas Vondra [Wed, 28 Nov 2018 00:11:15 +0000 (01:11 +0100)]
Do not decode TOAST data for table rewrites
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.
That has severe consequences with TOAST tables of non-trivial size.
Firstly, reorder buffer has to keep all those changes, possibly spilling
them to a file, incurring I/O costs and disk space.
Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap. But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.
The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables. So simply use it for
all TOAST data in raw_heap_insert().
That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().
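A rough distillation of that DecodeInsert() tweak (the flag's bit value
and the helper name here are illustrative, not the real definitions):
    #include <stdbool.h>
    #include <stdint.h>

    #define XLH_INSERT_CONTAINS_NEW_TUPLE 0x08  /* illustrative value */

    /* hypothetical distillation of the early-exit in DecodeInsert() */
    static bool
    should_decode_insert(uint8_t xl_flags)
    {
        /* TOAST inserts generated by a table rewrite no longer set
         * this flag, so they are dropped before ever reaching the
         * reorder buffer */
        return (xl_flags & XLH_INSERT_CONTAINS_NEW_TUPLE) != 0;
    }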
Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
Andres Freund [Tue, 27 Nov 2018 18:07:43 +0000 (10:07 -0800)]
Fix jit compilation bug on wide tables.
The function generated to perform JIT compiled tuple deforming failed
when HeapTupleHeader's t_hoff was bigger than a signed int8. I'd
failed to realize that LLVM's getelementptr would treat an int8 index
argument as signed, rather than unsigned. That means that a hoff
larger than 127 would result in a negative offset being applied. Fix
that by widening the index to 32bit.
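A self-contained illustration of the signedness hazard, in plain C
rather than LLVM IR (variable names are illustrative):
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint8_t t_hoff = 200;               /* header offset above 127 */
        int8_t  narrow = (int8_t) t_hoff;   /* sign-extends: becomes -56 */
        int32_t wide = (int32_t) t_hoff;    /* the fix: stays 200 */

        printf("as int8 index: %d, as int32 index: %d\n", narrow, wide);
        return 0;
    }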
Add a testcase with a wide table. Don't drop it, as it seems useful to
verify other tools deal properly with wide tables.
Thanks to Justin Pryzby for both reporting a bug and then reducing it
to a reproducible testcase!
Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
Backpatch: 11, just as jit compilation was
Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::regclass wouldn't work before that type is
known. Fix that, by replacing the regtype usage with a join to
pg_type.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6
Andres Freund [Tue, 27 Nov 2018 01:00:43 +0000 (17:00 -0800)]
Update pg_upgrade test for reg* to include regrole and regnamespace.
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.
It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.
While at it, reorder the types checked in the query to be alphabetical.
Otherwise it's annoying to compare the existing list against the types
being tested for.
Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace
Tom Lane [Mon, 26 Nov 2018 22:32:51 +0000 (17:32 -0500)]
Fix translation of special characters in psql's LaTeX output modes.
latex_escaped_print() mistranslated \ and failed to provide any translation
for # ^ and ~, all of which would typically lead to LaTeX document syntax
errors. In addition it didn't translate < > and |, which would typically
render as unexpected characters.
To some extent this represents shortcomings in ancient versions of LaTeX,
which if memory serves had no easy way to render these control characters
as ASCII text. But that's been fixed for, um, decades. In any case there
is no value in emitting guaranteed-to-fail output for these characters.
Noted while fooling with test cases added by commit 9a98984f4. Back-patch
the code change to all supported versions.
Andrew Gierth [Sat, 24 Nov 2018 09:59:49 +0000 (09:59 +0000)]
Fix hstore hash function for empty hstores upgraded from 8.4.
Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the add-on hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)
Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.
Fix by making hstoreUpgrade unconditionally convert all old values to
new format.
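A distilled sketch of the one-bit difference and the unconditional fix
(the flag's bit position and the helper name are illustrative):
    #include <stdint.h>

    #define HS_FLAG_NEWVERSION 0x80000000u  /* illustrative bit position */

    /* hypothetical distillation of the hstoreUpgrade() change: stamp
     * every old-format value, even ones whose layout already matches */
    static void
    hstore_mark_new_version(uint32_t *count_field)
    {
        /* previously skipped for "compatible" values, leaving this one
         * bit different and breaking bit-perfect hash comparisons */
        *count_field |= HS_FLAG_NEWVERSION;
    }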
Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)
Per concerns arising from discussion of commit eb6f29141be. Original
bug is my fault.
Tom Lane [Sat, 24 Nov 2018 17:45:49 +0000 (12:45 -0500)]
Fix float-to-integer coercions to handle edge cases correctly.
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
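A simplified sketch of the described approach for float4 to int32 (not
the committed code; error handling is reduced to an out-flag):
    #include <math.h>
    #include <stdint.h>

    static int32_t
    ftoi4_sketch(float num, int *error)
    {
        num = rintf(num);   /* round first, so boundary values survive */

        /*
         * -2^31 is exactly representable in float, and so is 2^31
         * (i.e. -INT32_MIN); 2^31-1 is not.  Accept [INT32_MIN, 2^31).
         * NaN and infinities fail these comparisons and are rejected.
         */
        if (!(num >= (float) INT32_MIN && num < -((float) INT32_MIN)))
        {
            *error = 1;
            return 0;
        }
        *error = 0;
        return (int32_t) num;
    }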
This back-patches commits cbdb8b4c0 and 452b637d4. In the 9.4 branch,
also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and
related constants to c.h, so that these functions can rely on them.
Per bug #15519 from Victor Petrovykh.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Andrew Gierth [Fri, 23 Nov 2018 23:56:39 +0000 (23:56 +0000)]
Avoid crashes in contrib/intarray gist__int_ops (bug #15518)
1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow.
2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.
3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors pointing at gist__intbig_ops
as needed.
4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.
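For points 1 and 2, a minimal sketch of the int64-based guard (the
function and variable names are hypothetical):
    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool
    safe_range_size(int32_t lower, int32_t upper, int *size_out)
    {
        /* widen before arithmetic; int64 cannot overflow on int32 inputs */
        int64_t size = (int64_t) upper - (int64_t) lower + 1;

        if (size <= 0 || size > INT_MAX)
            return false;       /* caller raises the error */
        *size_out = (int) size;
        return true;
    }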
Backpatch all the way, since this has been wrong forever.
Per bug #15518 from irc user "dymk"; analysis and patch by me.
Michael Paquier [Fri, 23 Nov 2018 00:11:12 +0000 (09:11 +0900)]
Clarify documentation about PASSWORD in CREATE/ALTER ROLE
The documentation of CREATE/ALTER ROLE has been missing two things
related to PASSWORD:
- The password value provided needs to be quoted; some places in the
documentation marked the field with quotes, but not others, which led to
confusion.
- PASSWORD NULL was not documented consistently, and ENCRYPTED is not
compatible with it.
Reported-by: Steven Winfield
Author: Michael Paquier
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/154282901979.1316.7418475422120496802@wrigleys.postgresql.org
Tom Lane [Thu, 22 Nov 2018 20:14:01 +0000 (15:14 -0500)]
Fix another crash in json{b}_populate_recordset and json{b}_to_recordset.
populate_recordset_worker() failed to consider the possibility that the
supplied JSON data contains no rows, so that update_cached_tupdesc never
got called. This led to a null-pointer dereference since commit 9a5e8ed28;
before that it led to a bogus "set-valued function called in context that
cannot accept a set" error. Fix by forcing the update to happen.
Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited
about the bogus error, we could perhaps go back further, but it'd take more
work to figure out how to fix it in older branches. Given the lack of
field complaints about that aspect, I'm not excited.)
Tom Lane [Thu, 22 Nov 2018 18:24:57 +0000 (13:24 -0500)]
Doc: rework introductory documentation about covering indexes.
Documenting INCLUDE in the section about unique indexes is confusing,
as complained of by Emilio Platzer. Furthermore, it entirely failed
to explain why you might want to use the feature. The section about
index-only scans is really the right place; it already talked about
making such things the hard way. Rewrite that text to describe INCLUDE
as the normal way to make a covering index.
Also, move that section up a couple of places, as it now seems more
important than some of the stuff we had before it. It still has to
be after expression and partial indexes, since otherwise some of it
would involve forward references.
Tom Lane [Mon, 19 Nov 2018 22:28:04 +0000 (17:28 -0500)]
Add needed #include.
Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>.
Apparently on Linux they're also pulled in by some other inclusions,
but BSD-ish systems are pickier. Fixes portability issue in ffa4cbd62.
Tom Lane [Mon, 19 Nov 2018 22:02:25 +0000 (17:02 -0500)]
Handle EPIPE more sanely when we close a pipe reading from a program.
Previously, any program launched by COPY TO/FROM PROGRAM inherited the
server's setting of SIGPIPE handling, i.e. SIG_IGN. Hence, if we were
doing COPY FROM PROGRAM and closed the pipe early, the child process
would see EPIPE on its output file and typically would treat that as
a fatal error, in turn causing the COPY to report error. Similarly,
one could get a failure report from a query that didn't read all of
the output from a contrib/file_fdw foreign table that uses file_fdw's
PROGRAM option.
To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing
of SIGPIPE. This seems like an all-around better situation since if
the called program wants some non-default treatment of SIGPIPE, it would
expect to have to set that up for itself. Then in COPY, if it's COPY
FROM PROGRAM and we stop reading short of detecting EOF, treat a SIGPIPE
exit from the called program as a non-error condition. This still allows
us to report an error for any case where the called program gets SIGPIPE
on some other file descriptor.
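A hedged sketch of the child-side part of the fix (the real code lives
in the server's pipe-command machinery; names here are illustrative):
    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    static pid_t
    spawn_program(const char *command)
    {
        pid_t pid = fork();

        if (pid == 0)
        {
            /* child: undo the server's SIG_IGN so the program starts
             * with default SIGPIPE handling, as it would from a shell */
            signal(SIGPIPE, SIG_DFL);
            execl("/bin/sh", "sh", "-c", command, (char *) NULL);
            _exit(127);         /* exec failed */
        }
        return pid;             /* parent; error handling elided */
    }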
As coded, we won't report a SIGPIPE if we stop reading as a result of
seeing an in-band EOF marker (e.g. COPY BINARY EOF marker). It's
somewhat debatable whether we should complain if the called program
continues to transmit data after an EOF marker. However, it seems like
we should avoid throwing error in any questionable cases, especially in a
back-patched fix, and anyway it would take additional code to make such
an error get reported consistently.
Back-patch to v10. We could go further back, since COPY FROM PROGRAM
has been around awhile, but AFAICS the only way to reach this situation
using core or contrib is via file_fdw, which has only supported PROGRAM
sources since v10. The COPY statement per se has no feature whereby
it'd stop reading without having hit EOF or an error already. Therefore,
I don't see any upside to back-patching further that'd outweigh the
risk of complaints about behavioral change.
Per bug #15449 from Eric Cyr.
Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi
Tom Lane [Mon, 19 Nov 2018 17:43:05 +0000 (12:43 -0500)]
Postpone LLVM-related uses of AC_CHECK_DECLS.
Calling AC_CHECK_DECLS before we've finished setting up the compiler's
CFLAGS seems like a pretty risky proposition, especially now that the
first use of that macro will result in a test to see whether the compiler
gives warning or error for undeclared built-in functions. That answer
could very easily get changed later than where PGAC_LLVM_SUPPORT is
called; furthermore, it's hardly unlikely that flags such as -D_GNU_SOURCE
could change visibility of declarations. Hence, be a little less cavalier
about where to do LLVM-related tests. This results in v11 and HEAD doing
the warning-or-error check at the same place in the script as older
branches are doing it, which seems like a good thing.
Tom Lane [Mon, 19 Nov 2018 17:01:47 +0000 (12:01 -0500)]
Fix configure's AC_CHECK_DECLS tests to work correctly with clang.
The test case that Autoconf uses to discover whether a function has
been declared doesn't work reliably with clang, because clang reports
a warning not an error if the name is a known built-in function.
On some platforms, this results in a lot of compile-time warnings about
strlcpy and related functions not having been declared.
There is a fix for this (by Noah Misch) in the upstream Autoconf sources,
but since they've not made a release in years and show no indication of
doing so anytime soon, let's just absorb their fix directly. We can
revert this when and if we update to a newer Autoconf release.
Alvaro Herrera [Mon, 19 Nov 2018 14:16:28 +0000 (11:16 -0300)]
Disallow COPY FREEZE on partitioned tables
This didn't actually work: COPY would fail to flush the right files, and
instead would try to flush a non-existing file, causing the whole
transaction to fail.
Cope by raising an error as soon as the command is sent instead, to
avoid a nasty later surprise. Of course, it would be much better to
make it work, but we don't have a patch for that yet, and we don't know
if we'll want to backpatch one when we do.
Reported-by: Tomas Vondra
Author: David Rowley Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra
Thomas Munro [Mon, 19 Nov 2018 00:37:59 +0000 (13:37 +1300)]
PANIC on fsync() failure.
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
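A simplified standalone sketch of the policy (PostgreSQL's real code
uses its own error machinery; abort() stands in for PANIC here):
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static bool data_sync_retry = false;    /* stand-in for the new GUC */

    static void
    fsync_or_panic(int fd, const char *path)
    {
        if (fsync(fd) != 0)
        {
            fprintf(stderr, "could not fsync file \"%s\": %s\n",
                    path, strerror(errno));
            if (!data_sync_retry)
                abort();    /* PANIC: recover from WAL instead of
                             * trusting a possibly-lying later fsync() */
            /* data_sync_retry=on: keep the request queued and retry */
        }
    }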
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
Tom Lane [Sun, 18 Nov 2018 04:16:00 +0000 (23:16 -0500)]
Fix AC_REQUIRES breakage in LLVM autoconf tests.
Any Autoconf macro that uses AC_REQUIRES -- directly or indirectly --
must not be inside a plain shell "if" test; if it is, whatever code
gets pulled in by the AC_REQUIRES will also be inside that "if".
Instead of "if" we can use AS_IF, which knows how to get this right
(cf commit 01051a987).
The only immediate problem from getting this wrong was that AC_PROG_AWK
had to be run twice, once inside the "if llvm" block and once in the
main line. However, it more thoroughly broke a different patch I'm
about to submit.
Tomas Vondra [Sat, 17 Nov 2018 22:50:21 +0000 (23:50 +0100)]
Add valgrind suppressions for wcsrtombs optimizations
wcsrtombs (called through wchar2char from common functions like lower,
upper, etc.) uses various optimizations that may look like access to
uninitialized data, triggering valgrind reports.
For example AVX2 instructions load data in 256-bit chunks, and gconv
does something similar with 32-bit chunks. This is faster than accessing
the bytes one by one, and the uninitialized part of the buffer is not
actually used. So suppress the bogus reports.
The exact stack depends on possible optimizations - it might be AVX, SSE
(as in the report by Aleksander Alekseev) or something else. Hence the
last frame is wildcarded, to deal with this.
Backpatch all the way back to 9.4.
Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain
Peter Eisentraut [Thu, 15 Nov 2018 22:04:48 +0000 (23:04 +0100)]
Correct code comments for PartitionedRelPruneInfo struct
The comments above the PartitionedRelPruneInfo struct incorrectly
document how subplan_map and subpart_map are indexed. This seems to
have snuck in on 4e232364033.
Author: David Rowley <david.rowley@2ndquadrant.com>
Andres Freund [Thu, 15 Nov 2018 19:35:07 +0000 (11:35 -0800)]
Make reformat-dat-files, reformat-dat-files VPATH safe.
The reformat_dat_file.pl script, added by 372728b0d49552641, supported
all the necessary options to make it work in a VPATH build, but the
makefile invocations didn't take VPATH into account. Fix that.
Discussion: https://postgr.es/m/20181115185303.d2z7wonx23mdfvd3@alap3.anarazel.de
Backpatch: 11-, where 372728b0d49552641 was merged
Amit Kapila [Thu, 15 Nov 2018 05:39:15 +0000 (11:09 +0530)]
Fix the omission in docs.
Commit 5373bc2a08 added a type for background workers but forgot to
update one place in the documentation.
Reported-by: John Naylor
Author: John Naylor
Reviewed-by: Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/CAJVSVGVmvgJ8Lq4WBxC3zV5wf0txdCqRSgkWVP+jaBF=HgWscA@mail.gmail.com
Thomas Munro [Wed, 14 Nov 2018 23:34:04 +0000 (12:34 +1300)]
Use 64 bit type for BufFileSize().
BufFileSize() can't use off_t, because it's only 32 bits wide on
some systems. BufFile objects can have many 1GB segments so the
total size can exceed 2^31. The only known client of the function
is parallel CREATE INDEX, which was reported to fail when building
large indexes on Windows.
Though this is technically an ABI break on platforms with a 32 bit
off_t and we might normally avoid back-patching it, the function is
brand new and thus unlikely to have been discovered by extension
authors yet, and it's fairly thoroughly broken on those platforms
anyway, so just fix it.
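A small self-contained demonstration of the overflow, with numbers
chosen to mirror 1GB BufFile segments:
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t segment_size = 1073741824;  /* 1GB per BufFile segment */
        int     nsegments = 3;              /* total exceeds 2^31 */

        int32_t off32 = (int32_t) (segment_size * nsegments); /* truncated */
        int64_t off64 = segment_size * nsegments;             /* the fix */

        printf("32-bit off_t total: %d\n", (int) off32);
        printf("64-bit total: %lld\n", (long long) off64);
        return 0;
    }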
Defect in 9da0cc35. Bug #15460. Back-patch to 11, where this
function landed.
Author: Thomas Munro
Reported-by: Paul van der Linden, Pavel Oskin
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/15460-b6db80de822fa0ad%40postgresql.org
Discussion: https://postgr.es/m/CAHDGBJP_GsESbTt4P3FZA8kMUKuYxjg57XHF7NRBoKnR%3DCAR-g%40mail.gmail.com
Tom Lane [Wed, 14 Nov 2018 21:29:57 +0000 (16:29 -0500)]
Doc: remove claim that all \pset format options are unique in 1 letter.
This hasn't been correct since 9.3 added "latex-longtable".
I left the phraseology "Unique abbreviations are allowed" alone.
It's correct as far as it goes, and we are studiously refraining
from specifying exactly what happens if you give a non-unique
abbreviation. (The answer in the back branches is "you get a
backwards-compatible choice", and the answer in HEAD will shortly
be "you get an error", but there seems no need to mention such
details here.)
Tom Lane [Wed, 14 Nov 2018 16:27:30 +0000 (11:27 -0500)]
Second try at fixing numeric data passed through an ECPG SQLDA.
In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the
grounds that we should duplicate the state of the numeric value's digit
buffer even when all the digits are zeroes. However, that still isn't
quite right, because another possible state of the digit buffer is
buf == digits == NULL (this occurs for a NaN). As the code now stands,
it'll invoke memcpy with a NULL source address and zero bytecount,
which we know a few platforms crash on. Hence, reinstate the no-copy
short-circuit, but make it test specifically for buf != NULL rather than
some other condition. In hindsight, the ndigits test (added by commit
f2ae9f9c3) was almost certainly meant to fix the NaN case, not the
all-zeroes case as the associated thread alleged.
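A minimal illustration of the hazard and the reinstated guard (names
are illustrative; memcpy() with a NULL source is undefined behavior
even when the byte count is zero):
    #include <stddef.h>
    #include <string.h>

    static void
    copy_digit_buf(char *dst, const char *buf, size_t nbytes)
    {
        if (buf != NULL)        /* NaN values have buf == digits == NULL */
            memcpy(dst, buf, nbytes);
        /* else: nothing to copy; dst was set up by the caller */
    }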
Michael Paquier [Wed, 14 Nov 2018 07:47:51 +0000 (16:47 +0900)]
Initialize TransactionState and user ID consistently at transaction start
If a failure happens when a transaction is starting between the moment
the transaction status is changed from TRANS_DEFAULT to TRANS_START and
the moment the current user ID and security context flags are fetched
via GetUserIdAndSecContext(), or before initializing its basic fields,
then those may get reset to incorrect values when the transaction
aborts, leaving the session in an inconsistent state.
One problem reported is that failing a starting transaction at the first
query of a session could cause several kinds of system crashes on the
follow-up queries.
In order to solve that, move the initialization of the transaction state
fields and the call of GetUserIdAndSecContext() in charge of fetching
the current user ID close to the point where the transaction status is
switched to TRANS_START, where there cannot be any error triggered
in-between, per an idea of Tom Lane. This properly ensures that the
current user ID, the security context flags and that the basic fields of
TransactionState remain consistent even if the transaction fails while
starting.
Reported-by: Richard Guo
Diagnosed-by: Richard Guo
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAN_9JTxECSb=pEPcb0a8d+6J+bDcOZ4=DgRo_B7Y5gRHJUM=Rw@mail.gmail.com
Backpatch-through: 9.4
A table with OIDs that was the first in the dump output would not get
dumped with OIDs enabled. Fix that.
The reason was that the currWithOids flag was declared to be bool but
actually also takes a -1 value for "don't know yet". But under
stdbool.h semantics, that is coerced to true, so the required SET
default_with_oids command is not output again. Change the variable
type to char to fix that.
Amit Kapila [Tue, 13 Nov 2018 05:39:44 +0000 (11:09 +0530)]
Fix the initialization of atomic variables introduced by the
group clearing mechanism.
Commits 0e141c0fbb and baaf272ac9 introduced initialization of atomic
variables in InitProcess which means that it's not safe to look at those
for backends that aren't currently in use. Fix that by initializing them
during postmaster startup.
Reported-by: Andres Freund
Author: Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/20181027104138.qmbbelopvy7cw2qv@alap3.anarazel.de
Thomas Munro [Tue, 13 Nov 2018 04:39:36 +0000 (17:39 +1300)]
Fix handling of HBA ldapserver with multiple hostnames.
Commit 35c0754f failed to handle space-separated lists of alternative
hostnames in ldapserver, when building a URI for ldap_initialize()
(OpenLDAP). Such lists need to be expanded to space-separated URIs.
Repair. Back-patch to 11, to fix bug report #15495.
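A toy sketch of the expansion (scheme and port handling elided; the
function and buffer names are hypothetical):
    #include <stdio.h>
    #include <string.h>

    /* "ldap1.example.com ldap2.example.com"
     *     -> "ldap://ldap1.example.com ldap://ldap2.example.com" */
    static void
    build_ldap_uris(const char *servers, char *out, size_t outlen)
    {
        char    buf[1024];
        char   *tok,
               *save;

        snprintf(buf, sizeof(buf), "%s", servers);
        out[0] = '\0';
        for (tok = strtok_r(buf, " ", &save); tok != NULL;
             tok = strtok_r(NULL, " ", &save))
        {
            if (out[0] != '\0')
                strncat(out, " ", outlen - strlen(out) - 1);
            strncat(out, "ldap://", outlen - strlen(out) - 1);
            strncat(out, tok, outlen - strlen(out) - 1);
        }
    }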
Author: Thomas Munro
Reported-by: Renaud Navarro
Discussion: https://postgr.es/m/15495-2c39fc196c95cd72%40postgresql.org
Thomas Munro [Tue, 13 Nov 2018 03:27:08 +0000 (16:27 +1300)]
Fix possible buffer overrun in hba.c.
Coverity reports a possible buffer overrun in the code that populates the
pg_hba_file_rules view. It may not be a live bug due to restrictions
on options that can be used together, but let's increase MAX_HBA_OPTIONS
and correct a nearby misleading comment.
Tom Lane [Mon, 12 Nov 2018 16:19:04 +0000 (11:19 -0500)]
Limit the number of index clauses considered in choose_bitmap_and().
classify_index_clause_usage() is O(N^2) in the number of distinct index
qual clauses it considers, because of its use of a simple search list to
store them. For nearly all queries, that's fine because only a few clauses
will be considered. But Alexander Kuzmenkov reported a machine-generated
query with 80000 (!) index qual clauses, which caused this code to take
forever. Somewhat remarkably, this is the only O(N^2) behavior we now
have for such a query, so let's fix it.
We can get rid of the O(N^2) runtime for cases like this without much
damage to the functionality of choose_bitmap_and() by separating out
paths with "too many" qual or pred clauses, and deeming them to always
be nonredundant with other paths. Then their clauses needn't go into
the search list, so it doesn't get too long, but we don't lose the
ability to consider bitmap AND plans altogether. I set the threshold
for "too many" to be 100 clauses per path, which should be plenty to
ensure no change in planning behavior for normal queries.
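The cap can be pictured like this (the threshold comes from the text
above; the structure and names are illustrative):
    #include <stdbool.h>

    #define MAX_CLAUSES_PER_PATH 100    /* "too many" threshold */

    /* paths over the cap are treated as always nonredundant, so their
     * clauses never enter the quadratic search list */
    static bool
    path_is_unclassifiable(int nqualclauses, int npredclauses)
    {
        return (nqualclauses + npredclauses) > MAX_CLAUSES_PER_PATH;
    }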
There are other things we could do to make this go faster, but it's not
clear that it's worth any additional effort. 80000 qual clauses require
a whole lot of work in many other places, too.
The code's been like this for a long time, so back-patch to all supported
branches. The troublesome query only works back to 9.5 (in 9.4 it fails
with stack overflow in the parser); so I'm not sure that fixing this in
9.4 has any real-world benefit, but perhaps it does.
Tom Lane [Sat, 10 Nov 2018 03:04:14 +0000 (22:04 -0500)]
Fix error-cleanup mistakes in exec_stmt_call().
Commit 15c729347 was a couple bricks shy of a load: we need to
ensure that expr->plan gets reset to NULL on any error exit,
if it's not supposed to be saved. Also ensure that the
stmt->target calculation gets redone if needed.
The easy way to exhibit a problem is to set up code that
violates the writable-argument restriction and then execute
it twice. But error exits out of, eg, setup_param_list()
could also break it. Make the existing PG_TRY block cover
all of that code to be sure.
Tom Lane [Sat, 10 Nov 2018 01:42:03 +0000 (20:42 -0500)]
Fix missing role dependencies for some schema and type ACLs.
This patch fixes several related cases in which pg_shdepend entries were
never made, or were lost, for references to roles appearing in the ACLs of
schemas and/or types. While that did no immediate harm, if a referenced
role were later dropped, the drop would be allowed and would leave a
dangling reference in the object's ACL. That still wasn't a big problem
for normal database usage, but it would cause obscure failures in
subsequent dump/reload or pg_upgrade attempts, taking the form of
attempts to grant privileges to all-numeric role names. (I think I've
seen field reports matching that symptom, but can't find any right now.)
Several cases are fixed here:
1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any
existing ACL entries for the domain. This case is ancient, dating
back as far as we've had pg_shdepend tracking at all.
2. If a default type privilege applies, CREATE TYPE recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for types in 9.2.
3. If a default schema privilege applies, CREATE SCHEMA recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for schemas in v10
(commit ab89e465c).
Another somewhat-related problem is that when creating a relation
rowtype or implicit array type, TypeCreate would apply any available
default type privileges to that type, which we don't really want
since such an object isn't supposed to have privileges of its own.
(You can't, for example, drop such privileges once they've been added
to an array type.)
ab89e465c is also to blame for a race condition in the regression tests:
privileges.sql transiently installed globally-applicable default
privileges on schemas, which sometimes got absorbed into the ACLs of
schemas created by concurrent test scripts. This should have resulted
in failures when privileges.sql tried to drop the role holding such
privileges; but thanks to the bug fixed here, it instead led to dangling
ACLs in the final state of the regression database. We'd managed not to
notice that, but it became obvious in the wake of commit da906766c, which
allowed the race condition to occur in pg_upgrade tests.
To fix, add a function recordDependencyOnNewAcl to encapsulate what
callers of get_user_default_acl need to do; while the original call
sites got that right via ad-hoc code, none of the later-added ones
have. Also change GenerateTypeDependencies to generate these
dependencies, which requires adding the typacl to its parameter list.
(That might be annoying if there are any extensions calling that
function directly; but if there are, they're most likely buggy in the
same way as the core callers were, so they need work anyway.) While
I was at it, I changed GenerateTypeDependencies to accept most of its
parameters in the form of a Form_pg_type pointer, making its parameter
list a bit less unwieldy and mistake-prone.
The test race condition is fixed just by wrapping the addition and
removal of default privileges into a single transaction, so that that
state is never visible externally. We might eventually prefer to
separate out tests of default privileges into a script that runs by
itself, but that would be a bigger change and would make the tests
run slower overall.
Back-patch relevant parts to all supported branches.
Michael Paquier [Fri, 9 Nov 2018 01:03:31 +0000 (10:03 +0900)]
Fix dependency handling of partitions and inheritance for ON COMMIT
This commit fixes a set of issues with ON COMMIT actions when used on
partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a
table with inheritance children caused a failure at commit time, with
complaints about the children already being dropped, as relations were
dropped one by one.
- Applying ON COMMIT DELETE ROWS on a partition relying on a partitioned
table which uses ON COMMIT DROP would cause the partition truncation to
fail as the parent is removed first.
The solution to the first problem is to handle the removal of all the
dependencies in one go instead of dropping relations one-by-one, based
on a suggestion from Álvaro Herrera. So instead all the relation OIDs
to remove are gathered and then processed in one round of multiple
deletions.
The solution to the second problem is to reorder the actions, with
truncation happening first and relation drop done after. Even if it
means that a partition could be first truncated, then immediately
dropped if its partitioned table is dropped, this has the merit to keep
the code simple as there is no need to do existence checks on the
relations to drop.
Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE
ROWS does not cascade to its partitions. The ON COMMIT action defined
on each partition takes priority.
Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10
Tom Lane [Thu, 8 Nov 2018 22:33:25 +0000 (17:33 -0500)]
Disallow setting client_min_messages higher than ERROR.
Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client. Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end. While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused. Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.
In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages. (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)
In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way. I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.
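A sketch of the back-branch clamp (the elevel constants and assignment
hook shape here are illustrative, not the real elog.h definitions):
    /* illustrative severity ordering; real values live in elog.h */
    #define ERROR_LEVEL 20
    #define FATAL_LEVEL 21
    #define PANIC_LEVEL 22

    static int client_min_messages = 17;   /* e.g. NOTICE */

    static void
    assign_client_min_messages(int newval)
    {
        /* FATAL/PANIC are still accepted from applications, but the
         * effective value never rises above ERROR */
        if (newval > ERROR_LEVEL)
            newval = ERROR_LEVEL;
        client_min_messages = newval;
    }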
In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.
Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.
Alvaro Herrera [Thu, 8 Nov 2018 19:22:09 +0000 (16:22 -0300)]
Revise attribute handling code on partition creation
The original code to propagate NOT NULL and default expressions
specified when creating a partition was mostly copy-pasted from
typed-tables creation, but as it was not a great match it contained
some duplication, inefficiency and bugs.
This commit fixes the bug that NOT NULL constraints declared in the
parent table would not be honored in the partition. One reported issue
that is not fixed is that a DEFAULT declared in the child is not used
when inserting through the parent. That would amount to a behavioral
change that's better not back-patched.
This rewrite makes the code simpler:
1. instead of checking for duplicate column names in its own block,
reuse the original one that already did that;
2. instead of concatenating the list of columns from parent and the one
declared in the partition and scanning the result to (incorrectly)
propagate defaults and not-null constraints, just scan the latter
searching the former for a match, and merging sensibly. This works
because we know the list in the parent is already correct and there can
only be one parent.
This rewrite makes ColumnDef->is_from_parent unused, so it's removed
on branch master; on released branches, it's kept as an unused field in
order not to cause ABI incompatibilities.
This commit also adds a test case for creating partitions with
collations mismatching that on the parent table, something that is
closely related to the code being patched. No code change is introduced
though, since that'd be a behavior change that could break some (broken)
working applications.
Amit Langote wrote a less invasive fix for the original
NOT NULL/defaults bug, but while I kept the tests he added, I ended up
not using his original code. Ashutosh Bapat reviewed Amit's fix. Amit
reviewed mine.
Author: Álvaro Herrera, Amit Langote
Reviewed-by: Ashutosh Bapat, Amit Langote
Reported-by: Jürgen Strobel (bug #15212)
Discussion: https://postgr.es/m/152746742177.1291.9847032632907407358@wrigleys.postgresql.org
Tom Lane [Tue, 6 Nov 2018 23:33:15 +0000 (18:33 -0500)]
Disable recheck_on_update optimization to avoid crashes.
The code added by commit c203d6cf8 causes a crash in at least one case,
where a potentially-optimizable expression index has a storage type
different from the input data type. A cursory code review turned up
numerous other problems that seem impractical to fix on short notice.
Andres argued for revert of that patch some time ago, and if additional
senior committers had been paying attention, that's likely what would
have happened, but we were not :-(
At this point we can't just revert, at least not in v11, because that would
mean an ABI break for code touching relcache entries. And we should not
remove the (also buggy) support for the recheck_on_update index reloption,
since it might already be used in some databases in the field. So this
patch just does the as-little-invasive-as-possible measure of disabling
the feature as though recheck_on_update were forced off for all indexes.
I also removed the related regression tests (which would otherwise fail)
and the user-facing documentation of the reloption.
We should undertake a more thorough code cleanup if the patch can't be
fixed, but not under the extreme time pressure of being already overdue
for 11.1 release.
Per report from Ondřej Bouda and subsequent private discussion among
pgsql-release.
Tom Lane [Tue, 6 Nov 2018 18:25:24 +0000 (13:25 -0500)]
Rename rbtree.c functions to use "rbt" prefix not "rb" prefix.
The "rb" prefix is used by Ruby, so that our existing code results
in name collisions that break plruby. We discussed ways to prevent
that by adjusting dynamic linker options, but it seems that at best
we'd move the pain to other cases. Renaming to avoid the collision
is the only portable fix anyway. Fortunately, our rbtree code is
not (yet?) widely used --- in core, there's only a single usage
in GIN --- so it seems likely that we can get away with a rename.
I chose to do this basically as s/rb/rbt/g, except for places where
there already was a "t" after "rb". The patch could have been made
smaller by only touching linker-visible symbols, but it would have
resulted in oddly inconsistent-looking code. Better to make it look
like "rbt" was the plan all along.
Back-patch to v10. The rbtree.c code exists back to 9.5, but
rb_iterate() which is the actual immediate source of pain was added
in v10, so it seems like changing the names before that would have
more risk than benefit.
Tom Lane [Mon, 5 Nov 2018 21:07:06 +0000 (16:07 -0500)]
Last-minute updates for release notes.
I removed the item about the pg_stat_statements change from
release-11.sgml, as part of a sweep to delete items already committed
in 11.0; but actually we'd best keep it to ensure that people who've
pg_upgraded their databases will take the requisite action. Also make
said action more visible by making it into its own para. Noted by
Jonathan Katz.
Andres Freund [Mon, 5 Nov 2018 20:02:25 +0000 (12:02 -0800)]
Fix copy-paste error in errhint() introduced in 691d79a07933.
Reported-By: Petr Jelinek
Discussion: https://postgr.es/m/c95a620b-34f0-7930-aeb5-f7ab804f26cb@2ndquadrant.com
Backpatch: 9.4-, like the previous commit
Michael Paquier [Mon, 5 Nov 2018 02:04:14 +0000 (11:04 +0900)]
Block creation of partitions with open references to their parent
When a partition is created as part of trigger processing, it is
possible that the partition which just gets created changes the
properties of the table the executor of the ongoing command relies on,
causing a subsequent crash. This has been found possible when for
example using a BEFORE INSERT which creates a new partition for a
partitioned table being inserted to.
Any attempt to do so is blocked when working on a partition, with
regression tests added for both CREATE TABLE PARTITION OF and ALTER
TABLE ATTACH PARTITION.
Reported-by: Dmitry Shalashov
Author: Amit Langote
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/15437-3fe01ee66bd1bae1@postgresql.org
Backpatch-through: 10
Michael Paquier [Mon, 5 Nov 2018 00:15:08 +0000 (09:15 +0900)]
Ignore partitioned tables when processing ON COMMIT DELETE ROWS
Those tables have no physical storage, making this option unusable with
partition trees as at commit time an actual truncation was attempted.
There are still issues with the way ON COMMIT actions are done when
mixing several action types, however this impacts as well inheritance
trees, so this issue will be dealt with later.
Reported-by: Rajkumar Raghuwanshi
Author: Amit Langote
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAKcux6mhgcjSiB_egqEAEFgX462QZtncU8QCAJ2HZwM-wWGVew@mail.gmail.com
Tom Lane [Sun, 4 Nov 2018 19:50:55 +0000 (14:50 -0500)]
Fix ExecuteCallStmt to not scribble on the passed-in parse tree.
Modifying the parse tree at execution time is, or at least ought to be,
verboten. It seems quite difficult to actually cause a crash this way
in v11 (although you can exhibit it pretty easily in HEAD by messing
with plan_cache_mode). Nonetheless, it's risky, so fix and back-patch.
Tom Lane [Sun, 4 Nov 2018 18:25:39 +0000 (13:25 -0500)]
Fix bugs in plpgsql's handling of CALL argument lists.
exec_stmt_call() tried to extract information out of a CALL statement's
argument list without using expand_function_arguments(), apparently in
the hope of saving a few nanoseconds by not processing defaulted
arguments. It got that quite wrong though, leading to crashes with
named arguments, as well as failure to enforce writability of the
argument for a defaulted INOUT parameter. Fix and simplify the logic
by using expand_function_arguments() before examining the list.
Also, move the argument-examination to just after producing the CALL
command's plan, before invoking the called procedure. This ensures
that we'll track possible changes in the procedure's argument list
correctly, and avoids a hazard of the plan cache being flushed while
the procedure executes.
Also fix assorted falsehoods and omissions in associated documentation.
Per bug #15477 from Alexey Stepanov.
Patch by me, with some help from Pavel Stehule. Back-patch to v11.
Andres Freund [Sat, 3 Nov 2018 22:55:23 +0000 (15:55 -0700)]
Prevent generating EEOP_AGG_STRICT_INPUT_CHECK operations when nargs == 0.
This only became a problem with 4c640f4f38, which didn't keep the
value that agg_strict_input_check.nargs is set to in sync with the
guard condition for emitting the operation.
Besides such instructions being unnecessary overhead, currently the
LLVM JIT provider doesn't support them. It seems more sensible to
avoid generating such instructions than to support them. Add assertions
to make it easier to debug a potential further occurrence.
Andres Freund [Sat, 3 Nov 2018 21:35:23 +0000 (14:35 -0700)]
Fix STRICT check for strict aggregates with NULL ORDER BY columns.
I (Andres) broke this unintentionally in 69c3936a14, by checking
strictness for all input expressions computed for an aggregate, rather
than just the input for the aggregate transition function.
Reported-by: Ondřej Bouda
Bisected-by: Tom Lane
Diagnosed-by: Andrew Gierth
Discussion: https://postgr.es/m/2a505161-2727-2473-7c46-591ed108ac52@email.cz
Backpatch: 11-, like 69c3936a14
Tom Lane [Sat, 3 Nov 2018 17:56:10 +0000 (13:56 -0400)]
Make ts_locale.c's character-type functions cope with UTF-16.
On Windows, in UTF8 database encoding, what char2wchar() produces is
UTF16 not UTF32, ie, characters above U+FFFF will be represented by
surrogate pairs. t_isdigit() and siblings did not account for this
and failed to provide a large enough result buffer. That in turn
led to bogus "invalid multibyte character for locale" errors, because
contrary to what you might think from char2wchar()'s documentation,
its Windows code path doesn't cope sanely with buffer overflow.
The solution for t_isdigit() and siblings is pretty clear: provide
a 3-wchar_t result buffer not 2.
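In outline, with a hypothetical stand-in for t_isdigit() built on
standard C conversion functions:
    #include <stdlib.h>
    #include <wchar.h>
    #include <wctype.h>

    static int
    is_digit_mb(const char *mbchar)
    {
        wchar_t wstr[3];    /* room for a surrogate pair plus terminator;
                             * the old code reserved only 2 elements */

        if (mbstowcs(wstr, mbchar, 3) == (size_t) -1)
            return 0;       /* conversion failed */
        return iswdigit((wint_t) wstr[0]) != 0;
    }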
char2wchar() also needs some work to provide more consistent, and more
accurately documented, buffer overrun behavior. But that's a bigger job
and it doesn't actually have any immediate payoff, so leave it for later.
Per bug #15476 from Kenji Uno, who deserves credit for identifying the
cause of the problem. Back-patch to all active branches.
Alvaro Herrera [Sat, 3 Nov 2018 16:23:37 +0000 (13:23 -0300)]
Fix tablespace handling for partitioned indexes
When creating partitioned indexes, the tablespace was not being saved
for the parent index. This meant that subsequently created partitions
would not use the right tablespace for their indexes.
ALTER INDEX SET TABLESPACE and ALTER INDEX ALL IN TABLESPACE raised
errors when tried; fix them too. This requires bespoke code for
ATExecCmd() that applies to the special case when the tablespace move is
just a catalog change.
Tom Lane [Fri, 2 Nov 2018 22:54:00 +0000 (18:54 -0400)]
Yet further rethinking of build changes for macOS Mojave.
The solution arrived at in commit e74dd00f5 presumes that the compiler
has a suitable default -isysroot setting ... but further experience
shows that in many combinations of macOS version, XCode version, Xcode
command line tools version, and phase of the moon, Apple's compiler
will *not* supply a default -isysroot value.
We could potentially go back to the approach used in commit 68fc227dd,
but I don't have a lot of faith in the reliability or life expectancy of
that either. Let's just revert to the approach already shipped in 11.0,
namely specifying an -isysroot switch globally. As a partial response to
the concerns raised by Jakob Egger, adjust the contents of Makefile.global
to look like

    CPPFLAGS = -isysroot $(PG_SYSROOT) ...
    PG_SYSROOT = /path/to/sysroot

This allows overriding the sysroot path at build time in a relatively
painless way.
Add documentation to installation.sgml about how to use the PG_SYSROOT
option. I also took the opportunity to document how to work around
macOS's "System Integrity Protection" feature.
Thomas Munro [Fri, 2 Nov 2018 22:08:03 +0000 (11:08 +1300)]
Fix NULL handling in multi-batch Parallel Hash Left Join.
NULL keys in left joins were skipped when building batch files.
Repair, by making the keep_nulls argument to ExecHashGetHashValue()
depend on whether this is a left outer join, as we do in other
paths.
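Conceptually (join-type names simplified to the two relevant cases):
    #include <stdbool.h>

    typedef enum { JOIN_INNER, JOIN_LEFT } JoinType;    /* simplified */

    /* NULL keys must be kept when batching a left outer join, so that
     * unmatched outer rows still come out NULL-extended */
    static bool
    keep_nulls_for_hash(JoinType jointype)
    {
        return jointype == JOIN_LEFT;
    }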
Bug #15475. Thinko in 1804284042e. Back-patch to 11.
Reported-by: Paul Schaap
Diagnosed-by: Andrew Gierth
Discussion: https://postgr.es/m/15475-11a7a783fed72a36%40postgresql.org
Peter Geoghegan [Thu, 1 Nov 2018 16:18:55 +0000 (09:18 -0700)]
Adjust trace_sort log messages.
The project message style guide dictates: "When citing the name of an
object, state what kind of object it is". The parallel CREATE INDEX
patch added a worker number to most of the trace_sort messages within
tuplesort.c without specifying the object type. Bring these messages
into compliance with the style guide.
We're still treating a leader or serial Tuplesortstate as having worker
number -1. trace_sort is a developer option, and these two cases are
highly comparable, so this seems appropriate.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/8330.1540831863@sss.pgh.pa.us
Backpatch: 11-, where parallel CREATE INDEX was introduced.
Andres Freund [Wed, 31 Oct 2018 21:47:41 +0000 (14:47 -0700)]
Disallow starting server with insufficient wal_level for existing slot.
Previously it was possible to create a slot, change wal_level, and
restart, even if the new wal_level was insufficient for the
slot. That's a problem for both logical and physical slots, because
the necessary WAL records are not generated.
This removes a few tests in newer versions that, somewhat
inexplicably, checked whether restarting with a too-low wal_level
worked (a buggy behaviour!).
Reported-By: Joshua D. Drake
Author: Andres Freund
Discussion: https://postgr.es/m/20181029191304.lbsmhshkyymhw22w@alap3.anarazel.de
Backpatch: 9.4-, where replication slots where introduced
Tom Lane [Wed, 31 Oct 2018 21:04:42 +0000 (17:04 -0400)]
Fix memory leak in repeated SPGIST index scans.
spgendscan neglected to pfree all the memory allocated by spgbeginscan.
It's possible to get away with that in most normal queries, since the
memory is allocated in the executor's per-query context which is about
to get deleted anyway; but it causes severe memory leakage during
creation or filling of large exclusion-constraint indexes.
Also, document that amendscan is supposed to free what ambeginscan
allocates. The docs' lack of clarity on that point probably caused this
bug to begin with. (There is discussion of changing that API spec going
forward, but I don't think it'd be appropriate for the back branches.)
Per report from Bruno Wolff. It's been like this since the beginning,
so back-patch to all active branches.
In HEAD, also fix an independent leak caused by commit 2a6368343
(allocating memory during spgrescan instead of spgbeginscan, which
might be all right if it got cleaned up, but it didn't). And do a bit
of code beautification on that commit, too.
Tom Lane [Wed, 31 Oct 2018 13:47:53 +0000 (09:47 -0400)]
Sync our copy of the timezone library with IANA release tzcode2018g.
This patch absorbs an upstream fix to "zic" for a recently-introduced
bug that made it output data that some 32-bit clients couldn't read.
Given the current source data, the bug only manifests in zones with
leap seconds, which we don't generate, so that there's no actual
change in our installed timezone data files from this. Still, in
case somebody uses our copy of "zic" to do something else, it seems
best to apply the fix promptly.
Also, update the README's notes about converting upstream code to
our conventions.
Tom Lane [Tue, 30 Oct 2018 19:26:11 +0000 (15:26 -0400)]
Fix interaction of CASE and ArrayCoerceExpr.
An array-type coercion appearing within a CASE that has a constant
(after const-folding) test expression was mangled by the planner, causing
all the elements of the resulting array to be equal to the coerced value
of the CASE's test expression. This is my oversight in commit c12d570fa:
that changed ArrayCoerceExpr to use a subexpression involving a
CaseTestExpr, and I didn't notice that eval_const_expressions needed an
adjustment to keep from folding such a CaseTestExpr to a constant when
it's inside a suitable CASE.
This is another in what's getting to be a depressingly long line of bugs
associated with misidentification of the referent of a CaseTestExpr.
We're overdue to redesign that mechanism; but any such fix is unlikely
to be back-patchable into v11. As a stopgap, fix eval_const_expressions
to do what it must here. Also add a bunch of comments pointing out the
restrictions and assumptions that are needed to make this work at all.
Also fix a related oversight: contain_context_dependent_node() was not
aware of the relationship of ArrayCoerceExpr to CaseTestExpr. That was
somewhat fail-soft, in that the outcome of a wrong answer would be to
prevent optimizations that could have been made, but let's fix it while
we're at it.
Per bug #15471 from Matt Williams. Back-patch to v11 where the faulty
logic came in.
Andrew Dunstan [Sun, 28 Oct 2018 16:22:32 +0000 (12:22 -0400)]
Fix perl searchpath for modern perl for MSVC tools
Modern versions of perl no longer include the current directory in the
perl searchpath, as it's insecure. Instead of adding the current
directory, we get around the problem by adding the directory where the
script lives.