Andres Freund [Tue, 20 Nov 2018 23:36:57 +0000 (15:36 -0800)]
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
was already painful for the existing code, but the upcoming work aiming
to make table storage pluggable would have required expanding and
duplicating that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
The removal includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
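For illustration, a minimal sketch of the resulting behavior (table
names are hypothetical; error texts not spelled out here):

    -- still accepted: explicitly declaring a table without oids
    CREATE TABLE t_plain (id int) WITH (oids = false);
    CREATE TABLE t_plain2 (id int) WITHOUT OIDS;

    -- now rejected: asking for oids
    CREATE TABLE t_oids (id int) WITH (oids = true);

    -- a legacy WITH OIDS table has to drop its oid column, e.g. before pg_upgrade
    ALTER TABLE legacy_tbl SET WITHOUT OIDS;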
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).
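As a hedged sketch of that SQL-level escape hatch (to my understanding
the arguments are the target catalog, the name of its oid column, and
the unique index covering that column):

    -- reserve an oid suitable for a manually constructed catalog row
    SELECT pg_nextoid('pg_class'::regclass, 'oid', 'pg_class_oid_index'::regclass);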
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get this merged
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
Michael Paquier [Tue, 20 Nov 2018 23:43:32 +0000 (08:43 +0900)]
Improve description of buffer used to store records in WAL reader
The dedicated private buffer to store records is used only for those
crossing a page boundary since 285bd0ac, but its description did not
completely match reality.
Reported-by: Andrey Lepikhov
Author: Michael Paquier
Discussion: https://postgr.es/m/49518b48-2036-5e43-1818-0f594e375e76@postgrespro.ru
Peter Eisentraut [Tue, 20 Nov 2018 21:59:36 +0000 (22:59 +0100)]
Make detection of SSL_CTX_set_min_proto_version more portable
As already explained in configure.in, using the OpenSSL version number
to detect presence of functions doesn't work, because LibreSSL reports
incompatible version numbers. Fortunately, the functions we need here
are actually macros, so we can just test for them directly.
Peter Eisentraut [Tue, 20 Nov 2018 12:30:01 +0000 (13:30 +0100)]
Make WAL description output more consistent
The output for record types XLOG_DBASE_CREATE and XLOG_DBASE_DROP used
the order dbid/tablespaceid, whereas elsewhere the order is
tablespaceid/dbid[/relfilenodeid]. Flip the order for those two types
to make it consistent.
Michael Paquier [Tue, 20 Nov 2018 01:20:52 +0000 (10:20 +0900)]
Fix issues with TAP tests of pg_verify_checksums
Two issues have been spotted and get fixed here:
- When checking for corrupted files, make sure that pg_verify_checksums
complains about the correct file. In order to make the logic more
robust, all files created are immediately removed once checks on them
are done. The error message generated by pg_verify_checksums also now
includes the file name it sees as corrupted.
- Before running corruption-related tests, empty files were generated
with names overlapping those used for the corrupted files, potentially
leading to conflicts. So use a different set of names for each.
Author: Michael Banck
Discussion: https://postgr.es/m/20181119181119.GC23740@nighthawk.caipicrew.dd-dns.de
Tom Lane [Mon, 19 Nov 2018 22:28:04 +0000 (17:28 -0500)]
Add needed #include.
Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>.
Apparently on Linux they're also pulled in by some other inclusions,
but BSD-ish systems are pickier. Fixes portability issue in ffa4cbd62.
Tom Lane [Mon, 19 Nov 2018 22:02:25 +0000 (17:02 -0500)]
Handle EPIPE more sanely when we close a pipe reading from a program.
Previously, any program launched by COPY TO/FROM PROGRAM inherited the
server's setting of SIGPIPE handling, i.e. SIG_IGN. Hence, if we were
doing COPY FROM PROGRAM and closed the pipe early, the child process
would see EPIPE on its output file and typically would treat that as
a fatal error, in turn causing the COPY to report an error. Similarly,
one could get a failure report from a query that didn't read all of
the output from a contrib/file_fdw foreign table that uses file_fdw's
PROGRAM option.
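A sketch of the file_fdw case (server, table, and program are
hypothetical):

    CREATE EXTENSION file_fdw;
    CREATE SERVER prog_server FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE seq_numbers (n int)
        SERVER prog_server
        OPTIONS (program 'seq 1 1000000', format 'csv');
    -- stops reading long before the program is done writing; the program's
    -- SIGPIPE exit is no longer reported as a query error
    SELECT * FROM seq_numbers LIMIT 10;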
To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing
of SIGPIPE. This seems like an all-around better situation since if
the called program wants some non-default treatment of SIGPIPE, it would
expect to have to set that up for itself. Then in COPY, if it's COPY
FROM PROGRAM and we stop reading short of detecting EOF, treat a SIGPIPE
exit from the called program as a non-error condition. This still allows
us to report an error for any case where the called program gets SIGPIPE
on some other file descriptor.
As coded, we won't report a SIGPIPE if we stop reading as a result of
seeing an in-band EOF marker (e.g. COPY BINARY EOF marker). It's
somewhat debatable whether we should complain if the called program
continues to transmit data after an EOF marker. However, it seems like
we should avoid throwing error in any questionable cases, especially in a
back-patched fix, and anyway it would take additional code to make such
an error get reported consistently.
Back-patch to v10. We could go further back, since COPY FROM PROGRAM
has been around awhile, but AFAICS the only way to reach this situation
using core or contrib is via file_fdw, which has only supported PROGRAM
sources since v10. The COPY statement per se has no feature whereby
it'd stop reading without having hit EOF or an error already. Therefore,
I don't see any upside to back-patching further that'd outweigh the
risk of complaints about behavioral change.
Per bug #15449 from Eric Cyr.
Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi
Alvaro Herrera [Mon, 19 Nov 2018 19:54:26 +0000 (16:54 -0300)]
psql: Describe partitioned tables/indexes as such
In \d and \z, instead of conflating partitioned tables and indexes with
plain ones, set the "type" column and table title differently to make
the distinction obvious. A simple ease-of-use improvement.
Author: Pavel Stehule, Michaël Paquier, Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAFj8pRDMWPgijpt_vPj1t702PgLG4Ls2NCf+rEcb+qGPpossmg@mail.gmail.com
Tom Lane [Mon, 19 Nov 2018 17:43:05 +0000 (12:43 -0500)]
Postpone LLVM-related uses of AC_CHECK_DECLS.
Calling AC_CHECK_DECLS before we've finished setting up the compiler's
CFLAGS seems like a pretty risky proposition, especially now that the
first use of that macro will result in a test to see whether the compiler
gives warning or error for undeclared built-in functions. That answer
could very easily get changed later than where PGAC_LLVM_SUPPORT is
called; furthermore, it's hardly unlikely that flags such as -D_GNU_SOURCE
could change visibility of declarations. Hence, be a little less cavalier
about where to do LLVM-related tests. This results in v11 and HEAD doing
the warning-or-error check at the same place in the script as older
branches are doing it, which seems like a good thing.
Alvaro Herrera [Mon, 19 Nov 2018 17:34:12 +0000 (14:34 -0300)]
psql: Show IP address in \conninfo
When hostaddr is given, the actual IP address that psql is connected to
can be totally unexpected for the given host. The more verbose output
we now generate makes things clearer. Since the "host" and "hostaddr"
parts of the conninfo could come from different sources (say, one of
them is in the service specification or a URI-style conninfo and the
other is not), this is not as silly as it may first appear. This is
also definitely useful if the hostname resolves to multiple addresses.
Author: Fabien Coelho
Reviewed-by: Pavel Stehule, Arthur Zakirov
Discussion: https://postgr.es/m/alpine.DEB.2.21.1810261532380.27686@lancre
https://postgr.es/m/alpine.DEB.2.21.1808201323020.13832@lancre
Robert Haas [Mon, 19 Nov 2018 17:10:41 +0000 (12:10 -0500)]
Reduce unnecessary list construction in RelationBuildPartitionDesc.
The 'partoids' list which was constructed by the previous version
of this code was necessarily identical to 'inhoids'. There's no
point to duplicating the list, so avoid that. Instead, construct
the array representation directly from the original 'inhoids' list.
Also, use an array rather than a list for 'boundspecs'. We know
exactly how many items we need to store, so there's really no
reason to use a list. Using an array instead reduces the number
of memory allocations we perform.
Patch by me, reviewed by Michael Paquier and Amit Langote, the
latter of whom also helped with rebasing.
Tom Lane [Mon, 19 Nov 2018 17:01:47 +0000 (12:01 -0500)]
Fix configure's AC_CHECK_DECLS tests to work correctly with clang.
The test case that Autoconf uses to discover whether a function has
been declared doesn't work reliably with clang, because clang reports
a warning not an error if the name is a known built-in function.
On some platforms, this results in a lot of compile-time warnings about
strlcpy and related functions not having been declared.
There is a fix for this (by Noah Misch) in the upstream Autoconf sources,
but since they've not made a release in years and show no indication of
doing so anytime soon, let's just absorb their fix directly. We can
revert this when and if we update to a newer Autoconf release.
Alvaro Herrera [Mon, 19 Nov 2018 14:16:28 +0000 (11:16 -0300)]
Disallow COPY FREEZE on partitioned tables
This didn't actually work: COPY would fail to flush the right files, and
instead would try to flush a non-existing file, causing the whole
transaction to fail.
Cope by raising an error as soon as the command is sent instead, to
avoid a nasty later surprise. Of course, it would be much better to
make it work, but we don't have a patch for that yet, and we don't know
if we'll want to backpatch one when we do.
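A minimal sketch of the case that now fails up front (names
hypothetical; error text not shown):

    CREATE TABLE measurements (ts timestamptz, v int) PARTITION BY RANGE (ts);
    CREATE TABLE measurements_2018 PARTITION OF measurements
        FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

    BEGIN;
    TRUNCATE measurements;
    COPY measurements FROM STDIN WITH (FREEZE);  -- rejected immediately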
Reported-by: Tomas Vondra
Author: David Rowley
Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra
Thomas Munro [Mon, 19 Nov 2018 00:31:10 +0000 (13:31 +1300)]
PANIC on fsync() failure.
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
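A hedged sketch of opting back into retries (the GUC takes effect only
after a server restart, as far as I understand):

    -- only sensible on operating systems that keep dirty buffers around
    -- after a write-back failure; the default (off) makes the server PANIC
    ALTER SYSTEM SET data_sync_retry = on;
    -- restart the server afterwards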
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
Michael Paquier [Mon, 19 Nov 2018 01:25:48 +0000 (10:25 +0900)]
Remove unnecessary memcpy when reading WAL record fitting on page
When reading a WAL record, its contents are copied into an intermediate
buffer. However, doing so is not necessary if the record fits fully
into the current page, saving one memcpy for each such record. The
allocation handling of the intermediate buffer is also now done only
when a record crosses a page boundary, shaving some extra cycles when
reading a WAL record.
Author: Andrey Lepikhov
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas
Discussion: https://postgr.es/m/c2ea54dd-a1d3-80eb-ddbf-7e6f258e615e@postgrespro.ru
Andrew Dunstan [Sun, 18 Nov 2018 17:36:31 +0000 (12:36 -0500)]
Silence MSVC warnings about redefinition of isnan
Some versions of perl.h define isnan when the compiler is MSVC. To avoid
warnings about this, undefine the symbol before including perl.h and
re-add the definition afterwards if it wasn't recreated.
Tom Lane [Sun, 18 Nov 2018 04:16:00 +0000 (23:16 -0500)]
Fix AC_REQUIRES breakage in LLVM autoconf tests.
Any Autoconf macro that uses AC_REQUIRES -- directly or indirectly --
must not be inside a plain shell "if" test; if it is, whatever code
gets pulled in by the AC_REQUIRES will also be inside that "if".
Instead of "if" we can use AS_IF, which knows how to get this right
(cf commit 01051a987).
The only immediate problem from getting this wrong was that AC_PROG_AWK
had to be run twice, once inside the "if llvm" block and once in the
main line. However, it more thoroughly broke a different patch I'm
about to submit.
Tomas Vondra [Sat, 17 Nov 2018 22:50:21 +0000 (23:50 +0100)]
Add valgrind suppressions for wcsrtombs optimizations
wcsrtombs (called through wchar2char from common functions like lower,
upper, etc.) uses various optimizations that may look like access to
uninitialized data, triggering valgrind reports.
For example AVX2 instructions load data in 256-bit chunks, and gconv
does something similar with 32-bit chunks. This is faster than accessing
the bytes one by one, and the uninitialized part of the buffer is not
actually used. So suppress the bogus reports.
The exact stack depends on possible optimizations - it might be AVX, SSE
(as in the report by Aleksander Alekseev) or something else. Hence the
last frame is wildcarded, to deal with this.
Backpatch all the way back to 9.4.
Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain
Tom Lane [Sat, 17 Nov 2018 21:31:07 +0000 (16:31 -0500)]
Avoid defining SIGTTIN/SIGTTOU on Windows.
Setting them to SIG_IGN seems unlikely to have any beneficial effect
on that platform, and given the signal numbering collision with SIGABRT,
it could easily have bad effects.
Given the lack of field complaints that can be traced to this, I don't
presently feel a need to back-patch.
Tom Lane [Sat, 17 Nov 2018 21:23:55 +0000 (16:23 -0500)]
Leave SIGTTIN/SIGTTOU signal handling alone in postmaster child processes.
For reasons lost in the mists of time, most postmaster child processes
reset SIGTTIN/SIGTTOU signal handling to SIG_DFL, with the major exception
that backend sessions do not. It seems like a pretty bad idea for any
postmaster children to do that: if stderr is connected to the terminal,
and the user has put the postmaster in background, any log output would
result in the child process freezing up. Hence, switch them all to
doing what backends do, ie, nothing. This allows them to inherit the
postmaster's SIG_IGN setting. On the other hand, manually-launched
processes such as standalone backends will have default processing,
which seems fine.
In passing, also remove useless resets of SIGCONT and SIGWINCH signal
processing. Perhaps the postmaster once changed those to something
besides SIG_DFL, but it doesn't now, so these are just wasted (and
confusing) syscalls.
Basically, this propagates the changes made in commit 8e2998d8a from
backends to other postmaster children. Probably the only reason these
calls now exist elsewhere is that I missed changing pgstat.c along with
postgres.c at the time.
Given the lack of field complaints that can be traced to this, I don't
presently feel a need to back-patch.
Andres Freund [Sat, 17 Nov 2018 00:35:11 +0000 (16:35 -0800)]
Make TupleTableSlots extensible, finish split of existing slot type.
This commit completes the work prepared in 1a0586de36, splitting the
old TupleTableSlot implementation (which could store buffer, heap,
minimal and virtual slots) into four different slot types. As
described in the aforementioned commit, this is done with the goal of
making tuple table slots extensible, to allow for pluggable table
access methods.
To achieve runtime extensibility for TupleTableSlots, operations on
slots that can differ between types of slots are performed using the
TupleTableSlotOps struct provided at slot creation time. That
includes information ranging from the size of the TupleTableSlot struct
to be allocated, to initialization, deforming, etc. See the struct's
definition for more detailed information about the TupleTableSlotOps callbacks.
I decided to rename TTSOpsBufferTuple to TTSOpsBufferHeapTuple and
ExecCopySlotTuple to ExecCopySlotHeapTuple, as that seems more
consistent with other naming introduced in recent patches.
There's plenty of optimization potential in the slot implementation, but
according to benchmarking the state after this commit has similar
performance characteristics to before this set of changes, which seems
sufficient.
There are a few changes in execReplication.c that currently need to poke
through the slot abstraction; those will be repaired once the pluggable
storage patchset provides the necessary infrastructure.
Author: Andres Freund and Ashutosh Bapat, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Alvaro Herrera [Fri, 16 Nov 2018 18:43:40 +0000 (15:43 -0300)]
pgbench: introduce a RandomState struct
This becomes useful when used to retry a transaction after a
serialization error or deadlock abort. (We don't yet have that feature,
but this is preparation for it.)
While at it, use a separate random state for thread administrivia such
as deciding which script to run, how long to delay for throttling, or
whether to log a message when sampling; this not only makes these tasks
independent of each other, but makes the actual thread run
deterministic.
Alvaro Herrera [Fri, 16 Nov 2018 17:54:15 +0000 (14:54 -0300)]
Redesign initialization of partition routing structures
This speeds up write operations (INSERT, UPDATE, DELETE, COPY, as well
as the future MERGE) on partitioned tables.
This changes the setup for tuple routing so that it does far less work
during the initial setup and pushes more work out to when partitions
receive tuples. PartitionDispatchData structs for sub-partitioned
tables are only created when a tuple gets routed through it. The
possibly large arrays in the PartitionTupleRouting struct have largely
been removed. The partitions[] array remains but now never contains any
NULL gaps. Previously the NULLs had to be skipped during
ExecCleanupTupleRouting(), which could add a large overhead to the
cleanup when the number of partitions was large. The partitions[] array
is allocated small to start with and only enlarged when we route tuples
to enough partitions that it runs out of space. This allows us to keep
simple single-row partition INSERTs running quickly.
The arrays in PartitionTupleRouting which stored the tuple translation maps
have now been removed. These have been moved out into a
PartitionRoutingInfo struct which is an additional field in ResultRelInfo.
The find_all_inheritors() call still remains by far the slowest part of
ExecSetupPartitionTupleRouting(). This commit just removes the other slow
parts.
In passing also rename the tuple translation maps from being ParentToChild
and ChildToParent to being RootToPartition and PartitionToRoot. The old
names mislead you into thinking that a partition of some sub-partitioned
table would translate to the rowtype of the sub-partitioned table rather
than the root partitioned table.
Authors: David Rowley and Amit Langote, heavily revised by Álvaro Herrera
Testing help from Jesper Pedersen and Kato Sho.
Discussion: https://postgr.es/m/CAKJS1f_1RJyFquuCKRFHTdcXqoPX-PYqAd7nz=GVBwvGh4a6xA@mail.gmail.com
Andres Freund [Fri, 16 Nov 2018 07:10:50 +0000 (23:10 -0800)]
Fix slot type assumptions for nodeGather[Merge].
The assumption made in 1a0586de3657c was wrong, as evidenced by
buildfarm failure on locust, which runs with
force_parallel_mode=regress. The tuples accessed in either node are
in the outer slot, and we can't trivially rely on the slot type being
known because the leader might execute the subsidiary node directly,
or via the tuple queue on a worker. In the latter case the tuple will
always be a heaptuple slot, but in the former, it'll be whatever the
subsidiary node returns.
Andres Freund [Fri, 16 Nov 2018 06:00:30 +0000 (22:00 -0800)]
Don't generate tuple deforming functions for virtual slots.
Virtual tuple table slots never need tuple deforming. Therefore, if we
know at expression compilation time that a certain slot will always
be virtual, there's no need to create a tuple deforming routine for
it.
Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Andres Freund [Fri, 16 Nov 2018 06:00:30 +0000 (22:00 -0800)]
Verify that expected slot types match returned slot types.
This is important so JIT compilation knows what kind of tuple slot the
deforming routine can expect. There's also optimization potential for
expression initialization without JIT compilation. It e.g. seems
plausible to elide EEOP_*_FETCHSOME ops entirely when dealing with
virtual slots.
Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Andres Freund [Fri, 16 Nov 2018 06:00:30 +0000 (22:00 -0800)]
Compute information about EEOP_*_FETCHSOME at expression init time.
Previously this information was computed when JIT compiling an
expression. But the information is useful for assertions in the
non-JIT case too, so it makes sense to move it.
This will, in a followup commit, allow different slot types to be
treated differently. E.g. for virtual slots there's no need to generate a JIT
function to deform the slot.
Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Andres Freund [Fri, 16 Nov 2018 06:00:30 +0000 (22:00 -0800)]
Introduce notion of different types of slots (without implementing them).
Upcoming work intends to allow pluggable ways to introduce new ways of
storing table data. Accessing those table access methods from the
executor requires TupleTableSlots to carry tuples in the native
format of such storage methods; otherwise there'll be a significant
conversion overhead.
Different access methods will require different data to store tuples
efficiently (just like virtual, minimal, heap already require fields
in TupleTableSlot). To allow that without requiring additional pointer
indirections, we want to have different structs (embedding
TupleTableSlot) for different types of slots. Thus different types of
slots are needed, which requires adapting creators of slots.
The slot that most efficiently can represent a type of tuple in an
executor node will often depend on the type of slot a child node
uses. Therefore we need to track the type of slot that is returned by
nodes, so parent nodes can create slots based on that.
Relatedly, JIT compilation of tuple deforming needs to know which type
of slot a certain expression refers to, so it can create an
appropriate deforming function for the type of tuple in the slot.
But not all nodes will only return one type of slot, e.g. an append
node will potentially return different types of slots for each of its
subplans.
Therefore add a function that allows querying the type of a node's
result slot, and whether it'll always be the same type (whether it's
fixed). This can be queried using ExecGetResultSlotOps().
The scan, result, inner, and outer slot types are automatically
inferred from ExecInitScanTupleSlot(), ExecInitResultSlot(), and the
left/right subtrees respectively. If that's not correct for a node,
it can be overridden using new fields in PlanState.
This commit does not introduce the actually abstracted implementation
of different kinds of TupleTableSlots; that will be left for a followup
commit. The different types of slots introduced will, for now, still
use the same backing implementation.
While this already partially invalidates the big comment in
tuptable.h, it seems to make more sense to update it later, when the
different TupleTableSlot implementations actually exist.
Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Andres Freund [Thu, 15 Nov 2018 22:26:14 +0000 (14:26 -0800)]
Rejigger materializing and fetching a HeapTuple from a slot.
Previously materializing a slot always returned a HeapTuple. As
current work aims to reduce the reliance on HeapTuples (so other
storage systems can work efficiently), that needs to change. Thus
split the tasks of materializing a slot (i.e. making it independent
from the underlying storage / other memory contexts) from fetching a
HeapTuple from the slot. For brevity, allow fetching a HeapTuple from
a slot and materializing the slot at the same time, controlled by a
parameter.
For now some callers of ExecFetchSlotHeapTuple, with materialize =
true, expect that changes to the heap tuple will be reflected in the
underlying slot. Those places will be adapted in due course, so while
not pretty, that's OK for now.
Also rename ExecFetchSlotTuple to ExecFetchSlotHeapTuple and
ExecFetchSlotTupleDatum to ExecFetchSlotHeapTupleDatum, as it's likely
that future storage methods will need similar methods. There already
is ExecFetchSlotMinimalTuple, so the new names make the naming scheme
more coherent.
Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Peter Eisentraut [Thu, 15 Nov 2018 22:22:19 +0000 (23:22 +0100)]
A small tweak to some comments for PartitionKeyData
It was not really that obvious that there's meant to be exactly 1 item
in the partexprs List for each zero-valued partattrs element. Some
incorrect code using these fields was the cause of CVE-2018-1052, so
it's worthwhile to mention how they should be used in the comments.
Author: David Rowley <david.rowley@2ndquadrant.com>
Peter Eisentraut [Thu, 15 Nov 2018 22:04:48 +0000 (23:04 +0100)]
Correct code comments for PartitionedRelPruneInfo struct
The comments above the PartitionedRelPruneInfo struct incorrectly
document how subplan_map and subpart_map are indexed. This seems to
have snuck in on 4e232364033.
Author: David Rowley <david.rowley@2ndquadrant.com>
Andres Freund [Tue, 13 Nov 2018 20:18:25 +0000 (12:18 -0800)]
Rationalize expression context reset in ExecModifyTable().
The current pattern of resetting expressions both in
ExecProcessReturning() and ExecOnConflictUpdate() makes it harder than
necessary to reason about memory lifetimes. It also requires
materializing slots unnecessarily, although this patch doesn't take
advantage of the fact that that's not necessary anymore.
Instead reset the expression context once for each input tuple.
Andres Freund [Thu, 15 Nov 2018 19:35:07 +0000 (11:35 -0800)]
Make reformat-dat-files, expand-dat-files VPATH safe.
The reformat_dat_file.pl script, added by 372728b0d49552641, supported
all the necessary options to make it work in a VPATH build, but the
makefile invocations didn't take VPATH into account. Fix that.
Discussion: https://postgr.es/m/20181115185303.d2z7wonx23mdfvd3@alap3.anarazel.de
Backpatch: 11-, where 372728b0d49552641 was merged
Tom Lane [Thu, 15 Nov 2018 18:34:16 +0000 (13:34 -0500)]
Improve performance of partition pruning remapping a little.
ExecFindInitialMatchingSubPlans has to update the PartitionPruneState's
subplan mapping data to account for the removal of subplans it prunes.
However, that's only necessary if run-time pruning will also occur,
so we can skip it when that won't happen, which should result in not
needing to do the remapping in many cases. (We now need it only when
some partitions are potentially startup-time prunable and others are
potentially run-time prunable, which seems like an unusual case.)
Also make some marginal performance improvements in the remapping
itself. These will mainly win if most partitions got pruned by
the startup-time pruning, which is perhaps a debatable assumption
in this context.
Also fix some bogus comments, and rearrange code to marginally
reduce space consumption in the executor's query-lifespan context.
Amit Kapila [Thu, 15 Nov 2018 05:26:49 +0000 (10:56 +0530)]
Fix the omission in docs.
Commit 5373bc2a08 added a type for background workers but forgot to
update one place in the documentation.
Reported-by: John Naylor
Author: John Naylor
Reviewed-by: Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/CAJVSVGVmvgJ8Lq4WBxC3zV5wf0txdCqRSgkWVP+jaBF=HgWscA@mail.gmail.com
Thomas Munro [Thu, 15 Nov 2018 03:25:30 +0000 (16:25 +1300)]
Increase the number of possible random seeds per time period.
Commit 197e4af9 refactored the initialization of the libc random()
seed, but reduced the number of possible seeds values that could be
chosen in a given time period. This negation of the effects of
commit 98c50656c was unintentional. Replace with code that
shifts the fast moving timestamp bits left, similar to the original
algorithm (though not the previous float-tolerating coding, which
is no longer necessary).
Author: Thomas Munro
Reported-by: Noah Misch
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20181112083358.GA1073796%40rfd.leadboat.com
Thomas Munro [Wed, 14 Nov 2018 23:34:04 +0000 (12:34 +1300)]
Use 64 bit type for BufFileSize().
BufFileSize() can't use off_t, because it's only 32 bits wide on
some systems. BufFile objects can have many 1GB segments so the
total size can exceed 2^31. The only known client of the function
is parallel CREATE INDEX, which was reported to fail when building
large indexes on Windows.
Though this is technically an ABI break on platforms with a 32 bit
off_t and we might normally avoid back-patching it, the function is
brand new and thus unlikely to have been discovered by extension
authors yet, and it's fairly thoroughly broken on those platforms
anyway, so just fix it.
Defect in 9da0cc35. Bug #15460. Back-patch to 11, where this
function landed.
Author: Thomas Munro
Reported-by: Paul van der Linden, Pavel Oskin
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/15460-b6db80de822fa0ad%40postgresql.org
Discussion: https://postgr.es/m/CAHDGBJP_GsESbTt4P3FZA8kMUKuYxjg57XHF7NRBoKnR%3DCAR-g%40mail.gmail.com
Tom Lane [Wed, 14 Nov 2018 21:39:59 +0000 (16:39 -0500)]
Make psql's "\pset format" command reject non-unique abbreviations.
The previous behavior of preferring the oldest match had the advantage
of not breaking existing scripts when we add a conflicting format name;
but that behavior was undocumented and fragile (it seems just luck that
commit add9182e5 didn't break it). Let's go over to the less mistake-
prone approach of complaining when there are multiple matches.
Since this is a small compatibility break, no back-patch.
Tom Lane [Wed, 14 Nov 2018 21:29:57 +0000 (16:29 -0500)]
Doc: remove claim that all \pset format options are unique in 1 letter.
This hasn't been correct since 9.3 added "latex-longtable".
I left the phraseology "Unique abbreviations are allowed" alone.
It's correct as far as it goes, and we are studiously refraining
from specifying exactly what happens if you give a non-unique
abbreviation. (The answer in the back branches is "you get a
backwards-compatible choice", and the answer in HEAD will shortly
be "you get an error", but there seems no need to mention such
details here.)
Tom Lane [Wed, 14 Nov 2018 20:41:07 +0000 (15:41 -0500)]
Add a timezone-specific variant of date_trunc().
date_trunc(field, timestamptz, zone_name) performs truncation using
the named time zone as reference, rather than working in the session
time zone as is the default behavior. It's equivalent to
date_trunc(field, timestamptz at time zone zone_name) at time zone zone_name
but it's faster, easier to type, and arguably easier to understand.
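For example (the zone name is arbitrary):

    SELECT date_trunc('day', timestamptz '2018-11-14 20:15:00+00', 'Australia/Sydney');

    -- equivalent, but slower and clumsier:
    SELECT date_trunc('day', timestamptz '2018-11-14 20:15:00+00'
                      AT TIME ZONE 'Australia/Sydney') AT TIME ZONE 'Australia/Sydney';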
Tom Lane [Wed, 14 Nov 2018 16:27:30 +0000 (11:27 -0500)]
Second try at fixing numeric data passed through an ECPG SQLDA.
In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the
grounds that we should duplicate the state of the numeric value's digit
buffer even when all the digits are zeroes. However, that still isn't
quite right, because another possible state of the digit buffer is
buf == digits == NULL (this occurs for a NaN). As the code now stands,
it'll invoke memcpy with a NULL source address and zero bytecount,
which we know a few platforms crash on. Hence, reinstate the no-copy
short-circuit, but make it test specifically for buf != NULL rather than
some other condition. In hindsight, the ndigits test (added by commit
f2ae9f9c3) was almost certainly meant to fix the NaN case, not the
all-zeroes case as the associated thread alleged.
Peter Eisentraut [Thu, 25 Oct 2018 07:33:17 +0000 (08:33 +0100)]
Lower lock level for renaming indexes
Change lock level for renaming index (either ALTER INDEX or implicitly
via some other commands) from AccessExclusiveLock to
ShareUpdateExclusiveLock.
One reason we need a strong lock for relation renaming is that the
name change causes a rebuild of the relcache entry. Concurrent
sessions that have the relation open might not be able to handle the
relcache entry changing underneath them. Therefore, we need to lock
the relation in a way that no one can have the relation open
concurrently. But for indexes, the relcache handles reloads specially
in RelationReloadIndexInfo() in a way that keeps changes in the
relcache entry to a minimum. As long as no one keeps pointers to
rd_amcache and rd_options around across possible relcache flushes,
which is the case, this ought to be safe.
We also want to use a self-exclusive lock for correctness, so that
concurrent DDL doesn't overwrite the rename if they start updating
while still seeing the old version. Therefore, we use
ShareUpdateExclusiveLock, which is already used by other DDL commands
that want to operate in a concurrent manner.
The reason this is interesting at all is that renaming an index is a
typical part of a concurrent reindexing workflow (CREATE INDEX
CONCURRENTLY new + DROP INDEX CONCURRENTLY old + rename back). And
indeed a future built-in REINDEX CONCURRENTLY might rely on the ability
to do concurrent renames as well.
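A sketch of that workflow (index and table names hypothetical):

    CREATE INDEX CONCURRENTLY tab_col_idx_new ON tab (col);
    DROP INDEX CONCURRENTLY tab_col_idx;
    -- with this change the rename needs only ShareUpdateExclusiveLock
    ALTER INDEX tab_col_idx_new RENAME TO tab_col_idx;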
Michael Paquier [Wed, 14 Nov 2018 07:46:53 +0000 (16:46 +0900)]
Initialize TransactionState and user ID consistently at transaction start
If a failure happens when a transaction is starting between the moment
the transaction status is changed from TRANS_DEFAULT to TRANS_START and
the moment the current user ID and security context flags are fetched
via GetUserIdAndSecContext(), or before initializing its basic fields,
then those may get reset to incorrect values when the transaction
aborts, leaving the session in an inconsistent state.
One problem reported is that failing a starting transaction at the first
query of a session could cause several kinds of system crashes on the
follow-up queries.
In order to solve that, move the initialization of the transaction state
fields and the call of GetUserIdAndSecContext() in charge of fetching
the current user ID close to the point where the transaction status is
switched to TRANS_START, where there cannot be any error triggered
in-between, per an idea of Tom Lane. This properly ensures that the
current user ID, the security context flags and the basic fields of
TransactionState remain consistent even if the transaction fails while
starting.
Reported-by: Richard Guo
Diagnosed-by: Richard Guo
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAN_9JTxECSb=pEPcb0a8d+6J+bDcOZ4=DgRo_B7Y5gRHJUM=Rw@mail.gmail.com
Backpatch-through: 9.4
Michael Paquier [Wed, 14 Nov 2018 01:33:10 +0000 (10:33 +0900)]
Add flag values in WAL description to all heap records
Hexadecimal is consistently used as the format so as not to bloat the
output too much while keeping it readable. This information is useful
mainly for debugging purposes, for example with pg_waldump.
Author: Michael Paquier
Reviewed-by: Nathan Bossart, Dmitry Dolgov, Andres Freund, Álvaro Herrera
Discussion: https://postgr.es/m/20180413034734.GE1552@paquier.xyz
Michael Paquier [Wed, 14 Nov 2018 01:01:49 +0000 (10:01 +0900)]
Refactor code creating PartitionBoundInfo
The code building PartitionBoundInfo based on the constituent partition
data read from catalogs has been located in partcache.c, with a specific
set of routines dedicated to bound types, like sorting or bound data
creation. All this logic is moved to partbounds.c, relocating all the
bound-specific logic into it, with partition_bounds_create() as the
principal entry point.
Author: Amit Langote
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/3f289da8-6d10-75fe-814a-635e8b191d43@lab.ntt.co.jp
Alvaro Herrera [Tue, 13 Nov 2018 21:12:39 +0000 (18:12 -0300)]
Add INSERT ON CONFLICT test on partitioned tables with transition table
This case was not covered by existing tests, so breakage went undetected.
Make sure it remains stable.
Extracted from a larger patch by
Author: David Rowley
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAKJS1f-aGCJ5H7_hiSs5PhWs6Obmj+vGARjGymqH1=o5PcrNnQ@mail.gmail.com
Tom Lane [Tue, 13 Nov 2018 19:54:41 +0000 (14:54 -0500)]
Fix realfailN lexer rules to not make assumptions about input format.
The realfail1 and realfail2 backup-prevention rules always returned
token type FCONST, ignoring the possibility that what we've scanned
is more appropriately described as ICONST. I think that at the
time that code was added, it might actually have been safe to not
distinguish; but since we started allowing AS-less aliases in SELECT
target lists, it's definitely legal to have a number immediately
followed by an identifier.
In the SELECT case, it seems there's no visible consequence because
make_const() will change the type back to integer anyway. But I'm
worried that there are other contexts, or will be in future, where
it's more important to get the constant's type right.
Hence, use process_integer_literal to correctly determine which
token type to return.
Arguably this is a bug fix, but given the lack of evidence of
user-visible problems, I'll refrain from back-patching.
Tom Lane [Tue, 13 Nov 2018 17:57:52 +0000 (12:57 -0500)]
Align ECPG lexer more closely with the core and psql lexers.
Make a bunch of basically-cosmetic changes to reduce the diffs between
the flex rules in scan.l, psqlscan.l, and pgc.l. Reorder some code,
adjust a lot of whitespace, sync some comments, make use of flex start
condition scopes to do that.
There are a few non-cosmetic changes in the ECPG lexer:
* Bring over the decimalfail rule (and support function
process_integer_literal) so that ECPG will lex "1..10" into
the same tokens as the backend would. I'm not sure this makes any
visible difference to users, but I'm not sure it doesn't, either.
* <xdc><<EOF>> gets its own rule so as to produce a more on-point
error message.
A table with OIDs that was the first in the dump output would not get
dumped with OIDs enabled. Fix that.
The reason was that the currWithOids flag was declared to be bool but
actually also takes a -1 value for "don't know yet". But under
stdbool.h semantics, that is coerced to true, so the required SET
default_with_oids command is not output again. Change the variable
type to char to fix that.
Amit Kapila [Tue, 13 Nov 2018 05:22:40 +0000 (10:52 +0530)]
Fix the initialization of atomic variables introduced by the
group clearing mechanism.
Commits 0e141c0fbb and baaf272ac9 introduced initialization of atomic
variables in InitProcess which means that it's not safe to look at those
for backends that aren't currently in use. Fix that by initializing them
during postmaster startup.
Reported-by: Andres Freund
Author: Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/20181027104138.qmbbelopvy7cw2qv@alap3.anarazel.de
Thomas Munro [Tue, 13 Nov 2018 04:39:36 +0000 (17:39 +1300)]
Fix handling of HBA ldapserver with multiple hostnames.
Commit 35c0754f failed to handle space-separated lists of alternative
hostnames in ldapserver, when building a URI for ldap_initialize()
(OpenLDAP). Such lists need to be expanded to space-separated URIs.
Repair. Back-patch to 11, to fix bug report #15495.
Author: Thomas Munro
Reported-by: Renaud Navarro
Discussion: https://postgr.es/m/15495-2c39fc196c95cd72%40postgresql.org
Thomas Munro [Tue, 13 Nov 2018 03:27:13 +0000 (16:27 +1300)]
Fix possible buffer overrun in hba.c.
Coverity reports a possible buffer overrun in the code that populates the
pg_hba_file_rules view. It may not be a live bug due to restrictions
on options that can be used together, but let's increase MAX_HBA_OPTIONS
and correct a nearby misleading comment.
Michael Paquier [Mon, 12 Nov 2018 23:59:41 +0000 (08:59 +0900)]
Remove CommandCounterIncrement() after processing ON COMMIT DELETE
This comes from f9b5b41, which is part of one of the original commits that
implemented ON COMMIT actions. By looking at the truncation code, any
CCI needed happens locally when rebuilding indexes, so it looks safe to
just remove this final incrementation.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20181109024731.GF2652@paquier.xyz
Tom Lane [Mon, 12 Nov 2018 16:50:28 +0000 (11:50 -0500)]
Simplify null-element handling in extension_config_remove().
There's no point in asking deconstruct_array() for a null-flags
array when we already checked the array has no nulls, and aren't
going to examine the output anyhow. Not asking for this output
should make the code marginally faster, and it's also more
robust since if there somehow were nulls, deconstruct_array()
would throw an error.
Tom Lane [Mon, 12 Nov 2018 16:19:04 +0000 (11:19 -0500)]
Limit the number of index clauses considered in choose_bitmap_and().
classify_index_clause_usage() is O(N^2) in the number of distinct index
qual clauses it considers, because of its use of a simple search list to
store them. For nearly all queries, that's fine because only a few clauses
will be considered. But Alexander Kuzmenkov reported a machine-generated
query with 80000 (!) index qual clauses, which caused this code to take
forever. Somewhat remarkably, this is the only O(N^2) behavior we now
have for such a query, so let's fix it.
We can get rid of the O(N^2) runtime for cases like this without much
damage to the functionality of choose_bitmap_and() by separating out
paths with "too many" qual or pred clauses, and deeming them to always
be nonredundant with other paths. Then their clauses needn't go into
the search list, so it doesn't get too long, but we don't lose the
ability to consider bitmap AND plans altogether. I set the threshold
for "too many" to be 100 clauses per path, which should be plenty to
ensure no change in planning behavior for normal queries.
There are other things we could do to make this go faster, but it's not
clear that it's worth any additional effort. 80000 qual clauses require
a whole lot of work in many other places, too.
The code's been like this for a long time, so back-patch to all supported
branches. The troublesome query only works back to 9.5 (in 9.4 it fails
with stack overflow in the parser); so I'm not sure that fixing this in
9.4 has any real-world benefit, but perhaps it does.
Peter Eisentraut [Mon, 12 Nov 2018 13:34:28 +0000 (14:34 +0100)]
doc: Small run-time pruning doc fix
A note in ddl.sgml used to mention that run-time pruning was only
implemented for Append. When we got MergeAppend support, this was
updated to mention that MergeAppend is supported too. This is
slightly weird as it's not all that obvious what exactly isn't
supported when we mention:
<para>
Both of these behaviors are likely to be changed in a future release
of <productname>PostgreSQL</productname>.
</para>
This patch updates this to mention that ModifyTable is unsupported,
which makes the above fragment make sense again.
Author: David Rowley <david.rowley@2ndquadrant.com>
Andres Freund [Sat, 10 Nov 2018 04:54:40 +0000 (20:54 -0800)]
Remove volatiles from {procarray,varsup}.c and fix memory ordering issue.
The use of volatiles in procarray.c largely originated from the time
when postgres did not have reliable compiler and memory
barriers. That's not the case anymore, so we can do better.
Several of the functions in procarray.c can be bottlenecks, and
removal of volatile yields mildly better code.
The new state, with explicit memory barriers, is also more
correct. The previous use of volatile did not actually deliver
sufficient guarantees on weakly ordered machines, in particular the
logic in GetNewTransactionId() does not look safe. It seems unlikely
to be a problem in practice, but worth fixing.
Thomas and I independently wrote a patch for this.
Reported-By: Andres Freund and Thomas Munro
Author: Andres Freund, with cherrypicked changes from a patch by Thomas Munro
Discussion:
https://postgr.es/m/20181005172955.wyjb4fzcdzqtaxjq@alap3.anarazel.de
https://postgr.es/m/CAEepm=1nff0x=7i3YQO16jLA2qw-F9O39YmUew4oq-xcBQBs0g@mail.gmail.com
Peter Eisentraut [Thu, 19 Jul 2018 06:37:32 +0000 (08:37 +0200)]
Apply RI trigger skipping tests also for DELETE
The tests added in cfa0f4255bb0f5550d37a01c4d8fe2966d20040c to skip
firing an RI trigger if any old key value is NULL can also be applied
for DELETE. This should give a performance gain in those cases, and it
also saves a lot of duplicate code in the actual RI triggers. (That
code was already dead code for the UPDATE cases.)
Peter Eisentraut [Thu, 19 Jul 2018 06:20:59 +0000 (08:20 +0200)]
Remove dead foreign key optimization code
The ri_KeysEqual() calls in the foreign-key trigger functions to
optimize away some updates are useless because, since
adfeef55cbcc5dc72a772777f88c1be05a70dfee, those triggers are not enqueued
at all. (It's also not useful to keep these checks as some kind of
backstop, since it's also semantically correct to just run the full
check even with equal keys.)
Andres Freund [Sat, 10 Nov 2018 04:43:56 +0000 (20:43 -0800)]
Combine two flag tests in GetSnapshotData().
Previously the code checked PROC_IN_LOGICAL_DECODING and
PROC_IN_VACUUM separately. As the relevant variable is marked as
volatile, the compiler cannot combine the two tests. As
GetSnapshotData() is pretty hot in a number of workloads, it's
worthwhile to fix that.
It'd also be a good idea to get rid of the volatiles altogether. But
for one that's a larger patch, and for another, the code after this
change still seems at least as easy to read as before.
Author: Andres Freund
Discussion: https://postgr.es/m/20181005172955.wyjb4fzcdzqtaxjq@alap3.anarazel.de
Tom Lane [Sat, 10 Nov 2018 03:04:14 +0000 (22:04 -0500)]
Fix error-cleanup mistakes in exec_stmt_call().
Commit 15c729347 was a couple bricks shy of a load: we need to
ensure that expr->plan gets reset to NULL on any error exit,
if it's not supposed to be saved. Also ensure that the
stmt->target calculation gets redone if needed.
The easy way to exhibit a problem is to set up code that
violates the writable-argument restriction and then execute
it twice. But error exits out of, eg, setup_param_list()
could also break it. Make the existing PG_TRY block cover
all of that code to be sure.
Tom Lane [Sat, 10 Nov 2018 01:42:03 +0000 (20:42 -0500)]
Fix missing role dependencies for some schema and type ACLs.
This patch fixes several related cases in which pg_shdepend entries were
never made, or were lost, for references to roles appearing in the ACLs of
schemas and/or types. While that did no immediate harm, if a referenced
role were later dropped, the drop would be allowed and would leave a
dangling reference in the object's ACL. That still wasn't a big problem
for normal database usage, but it would cause obscure failures in
subsequent dump/reload or pg_upgrade attempts, taking the form of
attempts to grant privileges to all-numeric role names. (I think I've
seen field reports matching that symptom, but can't find any right now.)
Several cases are fixed here:
1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any
existing ACL entries for the domain. This case is ancient, dating
back as far as we've had pg_shdepend tracking at all.
2. If a default type privilege applies, CREATE TYPE recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for types in 9.2.
3. If a default schema privilege applies, CREATE SCHEMA recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for schemas in v10
(commit ab89e465c).
Another somewhat-related problem is that when creating a relation
rowtype or implicit array type, TypeCreate would apply any available
default type privileges to that type, which we don't really want
since such an object isn't supposed to have privileges of its own.
(You can't, for example, drop such privileges once they've been added
to an array type.)
ab89e465c is also to blame for a race condition in the regression tests:
privileges.sql transiently installed globally-applicable default
privileges on schemas, which sometimes got absorbed into the ACLs of
schemas created by concurrent test scripts. This should have resulted
in failures when privileges.sql tried to drop the role holding such
privileges; but thanks to the bug fixed here, it instead led to dangling
ACLs in the final state of the regression database. We'd managed not to
notice that, but it became obvious in the wake of commit da906766c, which
allowed the race condition to occur in pg_upgrade tests.
To fix, add a function recordDependencyOnNewAcl to encapsulate what
callers of get_user_default_acl need to do; while the original call
sites got that right via ad-hoc code, none of the later-added ones
have. Also change GenerateTypeDependencies to generate these
dependencies, which requires adding the typacl to its parameter list.
(That might be annoying if there are any extensions calling that
function directly; but if there are, they're most likely buggy in the
same way as the core callers were, so they need work anyway.) While
I was at it, I changed GenerateTypeDependencies to accept most of its
parameters in the form of a Form_pg_type pointer, making its parameter
list a bit less unwieldy and mistake-prone.
The test race condition is fixed just by wrapping the addition and
removal of default privileges into a single transaction, so that that
state is never visible externally. We might eventually prefer to
separate out tests of default privileges into a script that runs by
itself, but that would be a bigger change and would make the tests
run slower overall.
Back-patch relevant parts to all supported branches.
Andres Freund [Sat, 10 Nov 2018 01:40:40 +0000 (17:40 -0800)]
Remove ineffective check against dropped columns from slot_getattr().
Before this commit slot_getattr() checked for dropped
columns (returning NULL in that case), but only after checking for
previously deformed columns. As slot_deform_tuple() does not contain
such a check, the check in slot_getattr() would often not have been
reached, depending on previous use of the slot.
These days locking and plan invalidation ought to ensure that dropped
columns are not accessed in query plans. Therefore this commit just
drops the insufficient check in slot_getattr(). It's possible that
we'll find some holes against use of dropped columns, but if so, those
need to be addressed independent of slot_getattr(), as most accesses
don't go through that function anyway.
Author: Andres Freund
Discussion: https://postgr.es/m/20181107174403.zai7fedgcjoqx44p@alap3.anarazel.de
Andres Freund [Sat, 10 Nov 2018 01:19:39 +0000 (17:19 -0800)]
Don't require return slots for nodes without projection.
In a lot of nodes the return slot is not required. That can either be
because the node doesn't do any projection (say an Append node), or
because the node does perform projections but the projection is
optimized away because the projection would yield an identical row.
Slots aren't that small, especially for wide rows, so it's worthwhile
to avoid creating them. It's not possible to just skip creating the
slot - it's currently used to determine the tuple descriptor returned
by ExecGetResultType(). So separate the determination of the result
type from the slot creation. The work previously done internally
ExecInitResultTupleSlotTL() can now also be done separately with
ExecInitResultTypeTL() and ExecInitResultSlot(). That way nodes that
aren't guaranteed to need a result slot, can use
ExecInitResultTypeTL() to determine the result type of the node, and
ExecAssignScanProjectionInfo() (via
ExecConditionalAssignProjectionInfo()) determines that a result slot
is needed, it is created with ExecInitResultSlot().
Besides the advantage of avoiding the creation of slots that then are
unused, this is necessary preparation for later patches around tuple
table slot abstraction. In particular separating the return descriptor
and slot is a prerequisite to allow JITing of tuple deforming with
knowledge of the underlying tuple format, and to avoid unnecessarily
creating JITed tuple deforming for virtual slots.
This commit removes a redundant argument from
ExecInitResultTupleSlotTL(). While this commit touches a lot of the
relevant lines anyway, it'd normally still not be worthwhile to cause
breakage, except that the aforementioned later commits will touch *all*
ExecInitResultTupleSlotTL() callers anyway (but fits worse
thematically).
Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Alvaro Herrera [Fri, 9 Nov 2018 16:08:00 +0000 (13:08 -0300)]
Indicate session name in isolationtester notices
When a session under isolationtester produces printable notices (NOTICE,
WARNING) we were just printing them unadorned, which can be confusing
when debugging. Prefix them with the session name, which makes things
clearer.
Author: Álvaro Herrera
Reviewed-by: Hari Babu Kommi
Discussion: https://postgr.es/m/20181024213451.75nh3f3dctmcdbfq@alvherre.pgsql
Michael Paquier [Fri, 9 Nov 2018 01:03:22 +0000 (10:03 +0900)]
Fix dependency handling of partitions and inheritance for ON COMMIT
This commit fixes a set of issues with ON COMMIT actions when used on
partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a
table with inheritance children caused a failure at commit time, with
complaints about the children being already dropped, as all relations are
dropped one by one.
- Applying ON COMMIT DELETE ROWS on a partition relying on a partitioned
table which uses ON COMMIT DROP would cause the partition truncation to
fail as the parent is removed first.
The solution to the first problem is to handle the removal of all the
dependencies in one go instead of dropping relations one-by-one, based
on a suggestion from Álvaro Herrera. So instead all the relation OIDs
to remove are gathered and then processed in one round of multiple
deletions.
The solution to the second problem is to reorder the actions, with
truncation happening first and relation drop done after. Even if it
means that a partition could be first truncated, then immediately
dropped if its partitioned table is dropped, this has the merit to keep
the code simple as there is no need to do existence checks on the
relations to drop.
Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE ROWS
does not cascade to its partitions. The ON COMMIT action defined on
each partition gets the priority.
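A sketch of the first previously-failing case (names hypothetical):

    BEGIN;
    CREATE TEMP TABLE temp_parted (a int) PARTITION BY LIST (a) ON COMMIT DROP;
    CREATE TEMP TABLE temp_parted_1 PARTITION OF temp_parted FOR VALUES IN (1);
    COMMIT;  -- used to fail, complaining that the partition was already dropped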
Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10
Tom Lane [Thu, 8 Nov 2018 22:33:25 +0000 (17:33 -0500)]
Disallow setting client_min_messages higher than ERROR.
Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client. Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end. While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused. Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.
In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages. (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)
In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way. I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.
In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.
Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.
Alvaro Herrera [Thu, 8 Nov 2018 19:22:09 +0000 (16:22 -0300)]
Revise attribute handling code on partition creation
The original code to propagate NOT NULL and default expressions
specified when creating a partition was mostly copy-pasted from
typed-tables creation, but not being a great match it contained some
duplication, inefficiency and bugs.
This commit fixes the bug that NOT NULL constraints declared in the
parent table would not be honored in the partition. One reported issue
that is not fixed is that a DEFAULT declared in the child is not used
when inserting through the parent. That would amount to a behavioral
change that's better not back-patched.
This rewrite makes the code simpler:
1. instead of checking for duplicate column names in its own block,
reuse the original one that already did that;
2. instead of concatenating the list of columns from parent and the one
declared in the partition and scanning the result to (incorrectly)
propagate defaults and not-null constraints, just scan the latter
searching the former for a match, and merging sensibly. This works
because we know the list in the parent is already correct and there can
only be one parent.
This rewrite makes ColumnDef->is_from_parent unused, so it's removed
on branch master; on released branches, it's kept as an unused field in
order not to cause ABI incompatibilities.
This commit also adds a test case for creating partitions with
collations mismatching that on the parent table, something that is
closely related to the code being patched. No code change is introduced
though, since that'd be a behavior change that could break some (broken)
working applications.
Amit Langote wrote a less invasive fix for the original
NOT NULL/defaults bug, but while I kept the tests he added, I ended up
not using his original code. Ashutosh Bapat reviewed Amit's fix. Amit
reviewed mine.
Author: Álvaro Herrera, Amit Langote
Reviewed-by: Ashutosh Bapat, Amit Langote
Reported-by: Jürgen Strobel (bug #15212)
Discussion: https://postgr.es/m/152746742177.1291.9847032632907407358@wrigleys.postgresql.org
Andres Freund [Wed, 7 Nov 2018 19:08:45 +0000 (11:08 -0800)]
Move EEOP_*_SYSVAR evaluation out of line.
This mainly de-duplicates code. As evaluating a system variable isn't
the hottest path and the current inline implementation ends up calling
out to an external function anyway, this is OK from a performance POV.
The main motivation for de-duplicating is the upcoming slot
abstraction work, after which there's not guaranteed to be a HeapTuple
backing the slot.
Author: Andres Freund, Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
Add another transfer mode --clone to pg_upgrade (besides the existing
--link and the default copy), using special file cloning calls. This
makes the file transfer faster and more space efficient, achieving
speed similar to --link mode without the associated drawbacks.
On Linux, file cloning is supported on Btrfs and XFS (if formatted with
reflink support). On macOS, file cloning is supported on APFS.
Reviewed-by: Michael Paquier <michael@paquier.xyz>