Andres Freund [Fri, 1 Mar 2019 18:37:57 +0000 (10:37 -0800)]
Store tuples for EvalPlanQual in slots, rather than as HeapTuples.
For the upcoming pluggable table access methods it's quite
inconvenient to store tuples as HeapTuples, as that'd require
converting tuples from their native format into HeapTuples. Instead
use slots to manage EPQ tuples.
To fit into that scheme, change the foreign data wrapper callback
RefetchForeignRow to store the tuple in a slot. Insist on using the
caller-provided slot, so it conveniently can be stored in the
corresponding EPQ slot. As there is no in-core user of
RefetchForeignRow, that change was done blindly, but we plan to test
that soon.
To avoid duplicating that work for row locks, change row locking to just
directly use the EPQ slots - it previously temporarily stored tuples
in LockRowsState.lr_curtuples, but that doesn't seem beneficial, given
that we'd possibly end up with a significant number of additional slots.
What was previously tracked via es_epqTupleSet[rti - 1] is now checked
via es_epqTupleSlot[rti - 1] != NULL, as that is distinguishable from a
slot containing an empty tuple.
Author: Andres Freund, Haribabu Kommi, Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andrew Dunstan [Fri, 1 Mar 2019 15:47:44 +0000 (10:47 -0500)]
Add --exclude-database option to pg_dumpall
This option functions similarly to pg_dump's --exclude-table option, but
for database names. The option can be given once, and the argument can
be a pattern including wildcard characters.
Author: Andrew Dunstan
Reviewed-by: Fabien Coelho and Michael Paquier
Discussion: https://postgr.es/m/43a54a47-4aa7-c70e-9ca6-648f436dd6e6@2ndQuadrant.com
Amit Kapila [Fri, 1 Mar 2019 02:08:47 +0000 (07:38 +0530)]
Clear the local map when not used.
After commit b0eaa4c51b, we use a local map of pages to find the required
space for small relations. We do clear this map when we have found a block
with enough free space, when we extend the relation, or on transaction
abort so that it can be used next time. However, we missed clearing it
when we didn't find any pages to try from the map, which leads to an
assertion failure when we later try to use it after relation extension.
In passing, improve some comments in this area.
Reported-by: Tom Lane based on buildfarm results
Author: Amit Kapila
Reviewed-by: John Naylor
Tested-by: Kuntal Ghosh
Discussion: https://postgr.es/m/32368.1551114120@sss.pgh.pa.us
Michael Paquier [Fri, 1 Mar 2019 00:07:07 +0000 (09:07 +0900)]
Make pg_partition_tree return no rows on unsupported and undefined objects
As of cc53123, the function was tweaked so that it returned one row
full of NULLs when working on an unsupported relkind or an undefined
object; after discussion with Amit and Álvaro it looks more natural to
make it return no rows.
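For illustration (object names are made up), the new behavior looks roughly
like this:

    CREATE SEQUENCE some_seq;
    SELECT * FROM pg_partition_tree('some_seq');   -- unsupported relkind
    SELECT * FROM pg_partition_tree(0::regclass);  -- undefined object
    -- Both queries now return zero rows instead of one row full of NULLs.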
Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Amit Langote
Discussion: https://postgr.es/m/20190227184808.GA17357@alvherre.pgsql
Andres Freund [Thu, 28 Feb 2019 21:55:27 +0000 (13:55 -0800)]
Don't force materializing when copying a buffer tuple table slot.
After 5408e233f0667478 it's not necessary to force materializing the
target slot when copying from one buffer slot to another. Previously
that was required because the HeapTupleData portion of the source slot
wasn't guaranteed to stay valid long enough, but now we can simply
copy that part into the destination slot's tupdata.
Joe Conway [Thu, 28 Feb 2019 20:57:40 +0000 (15:57 -0500)]
Make get_controlfile not leak file descriptors
When backend functions were added to expose controldata via SQL,
reading of pg_control was consolidated under src/common so that
both frontend and backend could share the same code. That move
from frontend-only to shared frontend-backend failed to recognize
the risk (and coding standards violation) of using a bare open().
In particular, it risked leaking file descriptors if transient
errors occurred while reading the file. Fix that by using
OpenTransientFile() instead in the backend case, which is
purpose-built for this type of usage.
Since there have been no complaints from the field, and an intermittent
failure is low risk, no backpatch. Hard failure would of course be bad, but
in that case these functions are probably the least of your worries.
Author: Joe Conway
Reviewed-by: Michael Paquier
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/20190227074728.GA15710@paquier.xyz
Andres Freund [Thu, 28 Feb 2019 20:27:58 +0000 (12:27 -0800)]
Allow buffer tuple table slots to materialize after ExecStoreVirtualTuple().
While not common, it can be useful to store a virtual tuple into a
buffer tuple table slot, and then materialize that slot. So far we've
asserted out, which surprisingly wasn't a problem for anything in
core. But that seems fragile, and it also breaks redis_fdw after ff11e7f4b9.
Thus, allow materializing a virtual tuple stored in a buffer tuple
table slot.
Author: Andres Freund
Discussion:
https://postgr.es/m/20190227181621.xholonj7ff7ohxsg@alap3.anarazel.de
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Alvaro Herrera [Thu, 28 Feb 2019 20:16:08 +0000 (17:16 -0300)]
pg_dump: Fix ArchiveEntry handling of some empty values
Commit f831d4acc changed what pg_dump emits for some empty fields: they
were output as empty strings before, NULL pointer afterwards. That
makes old pg_restore unable to work (crash) with such files, which is
unacceptable. Return to the original representation by explicitly
setting those struct members to "" where needed; remove some no longer
needed checks for NULL input.
We can declutter the code a little by returning to NULLs when we next
update the archive version, so add a note to remind us later.
Tom Lane [Thu, 28 Feb 2019 19:25:01 +0000 (14:25 -0500)]
Standardize some more loops that chase down parallel lists.
We have forboth() and forthree() macros that simplify iterating
through several parallel lists, but not everyplace that could
reasonably use those was doing so. Also invent forfour() and
forfive() macros to do the same for four or five parallel lists,
and use those where applicable.
The immediate motivation for doing this is to reduce the number
of ad-hoc lnext() calls, to reduce the footprint of a WIP patch.
However, it seems like good cleanup and error-proofing anyway;
the places that were combining forthree() with a manually iterated
loop seem particularly illegible and bug-prone.
There was some speculation about restructuring related parsetree
representations to reduce the need for parallel list chasing of
this sort. Perhaps that's a win, or perhaps not, but in any case
it would be considerably more invasive than this patch; and it's
not particularly related to my immediate goal of improving the
List infrastructure. So I'll leave that question for another day.
Peter Eisentraut [Thu, 28 Feb 2019 14:29:00 +0000 (15:29 +0100)]
Clean up some variable names in ri_triggers.c
There was a mix of old_slot/oldslot, new_slot/newslot. Since we've
changed everything from row to slot, we might as well take this
opportunity to clean this up.
Also update some more comments for the slot change.
Peter Eisentraut [Thu, 28 Feb 2019 09:54:44 +0000 (10:54 +0100)]
Reduce comments
Reduce the vertical space used by comments in ri_triggers.c, which made
the file longer and more tedious to read than it needs to be. Update
some comments to use a more common style.
Peter Eisentraut [Thu, 28 Feb 2019 09:54:44 +0000 (10:54 +0100)]
Remove unnecessary unused MATCH PARTIAL code
ri_triggers.c spends a lot of space catering to a not-yet-implemented
MATCH PARTIAL option. An actual implementation would probably not use
the existing code structure anyway, so let's just simplify this for
now.
First, have ri_FetchConstraintInfo() check that riinfo->confmatchtype
is valid. Then we don't have to repeat that everywhere.
In the various referential action functions, we don't need to pay
attention to the match type at all right now, so remove all that code.
A future MATCH PARTIAL implementation would probably have some
conditions added to the present code, but it won't need an entirely
separate switch branch in each case.
In RI_FKey_fk_upd_check_required(), reorganize the code to make it
much simpler.
Michael Paquier [Thu, 28 Feb 2019 00:40:28 +0000 (09:40 +0900)]
Fix SCRAM authentication via SSL when mixing versions of OpenSSL
When using a libpq client linked with OpenSSL 1.0.1 or older to connect
to a backend linked with OpenSSL 1.0.2 or newer, the server would send
SCRAM-SHA-256-PLUS and SCRAM-SHA-256 as valid mechanisms for the SASL
exchange, and the client would choose SCRAM-SHA-256-PLUS even if it does
not support channel binding, leading to a confusing error. In this
case, what the client ought to do is switch to SCRAM-SHA-256 so that the
authentication can move on and succeed.
So for a SCRAM authentication over SSL, here are all the cases present
and how we deal with them using libpq:
1) Server supports channel binding, it sends SCRAM-SHA-256-PLUS and
SCRAM-SHA-256 as allowed mechanisms.
1-1) Client supports channel binding, chooses SCRAM-SHA-256-PLUS.
1-2) Client does not support channel binding, chooses SCRAM-SHA-256.
2) Server does not support channel binding, sends SCRAM-SHA-256 as
allowed mechanism.
2-1) Client supports channel binding, still it has no choice but to
choose SCRAM-SHA-256.
2-2) Client does not support channel binding, it chooses SCRAM-SHA-256.
In all these scenarios the connection should succeed; the one which
was handled incorrectly prior to this commit is 1-2), causing the
connection attempt to fail because the client chose SCRAM-SHA-256-PLUS
over SCRAM-SHA-256.
Reported-by: Hugh Ranalli
Diagnosed-by: Peter Eisentraut
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/CAAhbUMO89SqUk-5mMY+OapgWf-twF2NA5sCucbHEzMfGbvcepA@mail.gmail.com
Backpatch-through: 11
Andres Freund [Wed, 27 Feb 2019 17:14:34 +0000 (09:14 -0800)]
Initialize variable to silence compiler warning.
After ff11e7f4b9ae Tom's compiler warns about accessing a potentially
uninitialized rInfo. That's not actually possible, but it's
understandable the compiler would get this wrong. NULL-initialize it too.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/11199.1551285318@sss.pgh.pa.us
Set fallback_application_name for a walreceiver to cluster_name
By default, the fallback_application_name for a physical walreceiver
is "walreceiver". This means that multiple standbys cannot be
distinguished easily on a primary, for example in pg_stat_activity or
synchronous_standby_names.
If cluster_name is set, use that for fallback_application_name in the
walreceiver. (If it's not set, it remains "walreceiver".) If someone
set cluster_name to identify their instance, we might as well use that
by default to identify the node remotely as well. It's still possible
to specify another application_name in primary_conninfo explicitly.
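As a rough illustration (the cluster_name value is made up): with
cluster_name = 'standby_a' set on a standby, the primary would now report
something like:

    SELECT application_name, state
      FROM pg_stat_activity
     WHERE backend_type = 'walsender';
    -- application_name shows 'standby_a' rather than the generic 'walreceiver'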
Michael Paquier [Wed, 27 Feb 2019 05:14:06 +0000 (14:14 +0900)]
Fix memory leak when inserting tuple at relation creation for CTAS
The leak was introduced by 763f2ed, which addressed the problem for
transient tables but forgot CREATE TABLE AS, which shares similar
logic when receiving a new tuple to store into the newly-created
relation.
Author: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1xZXtz3mziPEPD2Fubbas4G2RWkZm5HHABtfKVcbu1=Sg@mail.gmail.com
Andres Freund [Wed, 27 Feb 2019 04:30:28 +0000 (20:30 -0800)]
Use slots in trigger infrastructure, except for the actual invocation.
In preparation for abstracting table storage, convert trigger.c to
track tuples in slots. Which also happens to make code calling
triggers simpler.
As the calling interface for triggers themselves is not changed in
this patch, HeapTuples still are extracted from the slot at that
time. But that's handled solely inside trigger.c, not visible to
callers. It's quite likely that we'll want to revise the external
trigger interface, but that's a separate large project.
As part of this work the slots used for old/new/return tuples are
moved from EState into ResultRelInfo, as different updated tables
might need different slots. The slots are now also created on-demand,
which is good both from an efficiency POV and because it makes the
modifying code simpler.
Author: Andres Freund, Amit Khandekar and Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Wed, 27 Feb 2019 02:21:44 +0000 (18:21 -0800)]
Store table oid and tuple's tid in tuple slots directly.
After the introduction of tuple table slots all table AMs need to
support returning the table oid of the tuple stored in a slot created
by said AM. It does not make sense to re-implement that in every AM,
therefore move handling of table OIDs into the TupleTableSlot
structure itself. It's possible that we, at a later date, might want
to get rid of HeapTupleData.t_tableOid entirely, but doing so before
the abstractions for table AMs are integrated turns out to be too
hard, so delay that for now.
Similarly, every AM needs to support the concept of a tuple
identifier (tid / item pointer) for its tuples. It's quite possible
that we'll generalize the exact form of a tid at a future point (to
allow for things like index organized tables), but for now many parts
of the code know about tids, so there's not much point in abstracting
tids away. Therefore also move the tid into the slot (rather than
providing an API to set/get the tid associated with the tuple in a slot).
Once table AMs include inserting/updating/deleting tuples, the
responsibility for setting the correct tid after such an action will
move there. After that change, code doing such modifications should
not have to deal with HeapTuples directly anymore.
Author: Andres Freund, Haribabu Kommi and Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Wed, 27 Feb 2019 02:15:59 +0000 (18:15 -0800)]
Allow use of HeapTupleData embedded in [Buffer]HeapTupleTableSlot.
That avoids having to care about the lifetime of the
HeapTupleHeaderData passed to ExecStore[Buffer]HeapTuple(). That
doesn't make a huge difference for a plain HeapTupleTableSlot, but for
BufferHeapTupleTableSlot it can be a significant advantage, avoiding
the need to materialize slots where it's inconvenient to provide a
HeapTupleData with appropriate lifetime to point to the on-disk tuple.
It's quite possible that we'll want to add support functions for
constructing HeapTuples using that embedded HeapTupleData, but for now
callers do so themselves.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Wed, 27 Feb 2019 01:59:01 +0000 (17:59 -0800)]
Add ExecStorePinnedBufferHeapTuple.
This allows avoiding an unnecessary pin/unpin cycle when storing a
tuple that is in an already-pinned buffer into a slot, when the pin isn't
further needed at the call site.
Only a single caller for now (to ensure coverage), but upcoming
patches will increase use of the new function.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Robert Haas [Tue, 26 Feb 2019 17:22:57 +0000 (12:22 -0500)]
Change lock acquisition order in expand_inherited_rtentry.
Previously, this function acquired locks using find_all_inheritors(),
which locks the children of each table that it
processes in ascending OID order, and which processes the inheritance
hierarchy as a whole in a breadth-first fashion. Now, it processes
the inheritance hierarchy in a depth-first fashion, and at each level
it proceeds in the order in which tables appear in the PartitionDesc.
If table inheritance rather than table partitioning is used, the old
order is preserved.
This change moves the locking of any given partition much closer to
the code that actually expands that partition. This seems essential
if we ever want to allow concurrent DDL to add or remove partitions,
because if the set of partitions can change, we must use the same data
to decide which partitions to lock as we do to decide which partitions
to expand; otherwise, we might expand a partition that we haven't
locked. It should hopefully also facilitate efforts to postpone
inheritance expansion or locking for performance reasons, because
there's really no way to postpone locking some partitions if
we're blindly locking them all using find_all_inheritors().
The only downside of this change which is known to me is that it
further deviates from the principle that we should always lock the
inheritance hierarchy in find_all_inheritors() order to avoid deadlock
risk. However, we've already crossed that bridge in commit
9eefba181f7782d27d85d7e94e6028371e7ab2d7, and there are further patches
pending that make similar changes, so this isn't really giving up
anything that we haven't surrendered already -- and it seems entirely
worth it, given the performance benefits some of those changes seem
likely to bring.
Patch by me; thanks to David Rowley for discussion of these issues.
Peter Geoghegan [Tue, 26 Feb 2019 01:47:43 +0000 (17:47 -0800)]
Remove unneeded argument from _bt_getstackbuf().
_bt_getstackbuf() is called at exactly two points following commit efada2b8e92 (one call site is concerned with page splits, while the
other is concerned with page deletion). The parent buffer returned by
_bt_getstackbuf() is write-locked in both cases. Remove the 'access'
argument and make _bt_getstackbuf() assume that callers require a
write-lock.
Peter Geoghegan [Tue, 26 Feb 2019 00:54:18 +0000 (16:54 -0800)]
Correct obsolete nbtree page deletion comment.
Commit efada2b8e92, which made the nbtree page deletion algorithm more
robust, removed _bt_getstackbuf() calls from _bt_pagedel(). It failed
to update a comment that referenced the earlier approach. Update the
comment to explain that the _bt_getstackbuf() page deletion call site
mirrors the only other remaining _bt_getstackbuf() call site, which is
reached during page splits.
Peter Eisentraut [Mon, 25 Feb 2019 07:38:59 +0000 (08:38 +0100)]
Remove unnecessary use of PROCEDURAL
Remove some unnecessary, legacy-looking use of the PROCEDURAL keyword
before LANGUAGE. We mostly don't use this anymore, so some of these
look a bit old.
There is still some use in pg_dump, which is harder to remove because
it's baked into the archive format, so I'm not touching that.
Michael Paquier [Mon, 25 Feb 2019 05:19:34 +0000 (14:19 +0900)]
Make release of 2PC identifier and locks consistent in COMMIT PREPARED
When preparing a transaction in two-phase commit, a dummy PGPROC entry
holding the GID used for the transaction is registered, which gets
released once COMMIT PREPARED is run. Prior releasing its shared memory
state, all the locks taken in the prepared transaction are released
using a dedicated set of callbacks (pgstat and multixact having similar
callbacks), which may cause the locks to be released before the GID is
set free.
Hence, there is a small window where lock conflicts could happen, for
example:
- Transaction A releases its locks, still holding its GID in shared
memory.
- Transaction B held a lock which conflicted with locks of transaction
A.
- Transaction B continues its processing, trying to reuse the same GID as
transaction A.
- Transaction B fails because of a conflicting GID, already in use by
transaction A.
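Expressed in SQL (the GID is made up), the failure in the last step looks
roughly like this:

    -- Transaction B, while transaction A still holds 'gid_1' in shared memory:
    BEGIN;
    -- ... do work, acquiring the lock just released by A ...
    PREPARE TRANSACTION 'gid_1';
    -- fails: transaction identifier "gid_1" is already in use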
This commit changes the shared memory state release so that post-commit
callbacks and predicate lock cleanup happen consistently with the shared
memory state cleanup for the dummy PGPROC entry. The race window is
small and 2PC had this issue from the start, so no backpatch is done.
On top of that, the fixes discussed involved ABI breakages, which are not
welcome in stable branches.
Thomas Munro [Sun, 24 Feb 2019 21:54:12 +0000 (10:54 +1300)]
Fix inconsistent out-of-memory error reporting in dsa.c.
Commit 16be2fd1 introduced the flag DSA_ALLOC_NO_OOM to control whether
the DSA allocator would raise an error or return InvalidDsaPointer on
failure to allocate. One edge case was not handled correctly: if we
fail to allocate an internal "span" object for a large allocation, we
would always return InvalidDsaPointer regardless of the flag; a caller
not expecting that could then dereference a null pointer.
This is a plausible explanation for a one-off report of a segfault.
Remove a redundant pair of braces so that all three stanzas that handle
DSA_ALLOC_NO_OOM match in style, for visual consistency.
While fixing inconsistencies, if FreePageManagerGet() can't supply the
pages that our book-keeping says it should be able to supply, then we
should always report a FATAL error. Previously we treated that as a
regular allocation failure in one code path, but as a FATAL condition
in another.
Back-patch to 10, where dsa.c landed.
Author: Thomas Munro
Reported-by: Jakub Glapa
Discussion: https://postgr.es/m/CAEepm=2oPqXxyWQ-1o60tpOLrwkw=VpgNXqqF1VN2EyO9zKGQw@mail.gmail.com
Tom Lane [Sun, 24 Feb 2019 17:51:50 +0000 (12:51 -0500)]
Fix ecpg bugs caused by missing semicolons in the backend grammar.
The Bison documentation clearly states that a semicolon is required
after every grammar rule, and our scripts that generate ecpg's
grammar from the backend's implicitly assumed this is true. But it
turns out that only ancient versions of Bison actually enforce that.
There have been a couple of rules without trailing semicolons in
gram.y for some time, and as a consequence, ecpg's grammar was faulty
and produced wrong output for the affected statements.
To fix, add the missing semis, and add some cross-checks to ecpg's
scripts so that they'll bleat if we mess this up again.
The cases that were broken were:
* "SET variable = DEFAULT" (but not "SET variable TO DEFAULT"),
as well as allied syntaxes such as ALTER SYSTEM SET ... DEFAULT.
These produced syntactically invalid output that the server
would reject.
* Multiple type names in DROP TYPE/DOMAIN commands. Only the
first type name would be listed in the emitted command.
Per report from Daisuke Higuchi. Back-patch to all supported versions.
Thomas Munro [Sun, 24 Feb 2019 10:48:52 +0000 (23:48 +1300)]
Tolerate EINVAL when calling fsync() on a directory.
Previously, we tolerated EBADF as a way for the operating system to
indicate that it doesn't support fsync() on a directory. Tolerate
EINVAL too, for older versions of Linux CIFS.
Bug #15636. Back-patch all the way.
Reported-by: John Klann
Discussion: https://postgr.es/m/15636-d380890dafd78fc6@postgresql.org
Thomas Munro [Sun, 24 Feb 2019 00:38:15 +0000 (13:38 +1300)]
Tolerate ENOSYS failure from sync_file_range().
One unintended consequence of commit 9ccdd7f6 was that Windows WSL
users started getting a panic whenever we tried to initiate data
flushing with sync_file_range(), because WSL does not implement that
system call. Previously, they got a stream of periodic warnings,
which was also undesirable but at least ignorable.
Prevent the panic by handling ENOSYS specially and skipping the panic
promotion with data_sync_elevel(). Also suppress future attempts
after the first such failure so that the pre-existing problem of
noisy warnings is improved.
Back-patch to 9.6 (older branches were not affected in this way by 9ccdd7f6).
Author: Thomas Munro and James Sewell
Tested-by: James Sewell
Reported-by: Bruce Klein
Discussion: https://postgr.es/m/CA+mCpegfOUph2U4ZADtQT16dfbkjjYNJL1bSTWErsazaFjQW9A@mail.gmail.com
Michael Paquier [Fri, 22 Feb 2019 23:19:31 +0000 (08:19 +0900)]
Add TAP tests for 2PC post-commit callbacks of multixacts at recovery
The current set of TAP tests for two-phase transactions includes some
coverage for post-commit callbacks of multixacts, but it lacked tests
for the recovery of those callbacks. This commit adds some tests
with soft and hard restarts in this case, using transactions which
include DDLs.
Author: Michael Paquier
Reviewed-by: Oleksii Kliukin
Discussion: https://postgr.es/m/20190221055431.GO15532@paquier.xyz
Tom Lane [Fri, 22 Feb 2019 17:23:00 +0000 (12:23 -0500)]
Fix plan created for inherited UPDATE/DELETE with all tables excluded.
In the case where inheritance_planner() finds that every table has
been excluded by constraints, it thought it could get away with
making a plan consisting of just a dummy Result node. While certainly
there's no updating or deleting to be done, this had two user-visible
problems: the plan did not report the correct set of output columns
when a RETURNING clause was present, and if there were any
statement-level triggers that should be fired, it didn't fire them.
Hence, rather than only generating the dummy Result, we need to
stick a valid ModifyTable node on top, which requires a tad more
effort here.
It's been broken this way for as long as inheritance_planner() has
known about deleting excluded subplans at all (cf commit 635d42e9c),
so back-patch to all supported branches.
Amit Langote and Tom Lane, per a report from Petr Fedorov.
Peter Eisentraut [Fri, 22 Feb 2019 08:01:19 +0000 (09:01 +0100)]
Add const qualifier
New code introduced in 050710b36964dee7e1b2bf6b5ef00041fd5d2787. The
lack of const is not currently a compiler warning, but it's nice to
have for consistency with surrounding code.
Tom Lane [Thu, 21 Feb 2019 23:55:29 +0000 (18:55 -0500)]
Fix mark-and-restore-skipping test case to not be a self-join.
There isn't any good reason for this test to be a self-join rather
than a join between separate tables, except that it saved a couple
of SQL commands for setup. A proposed patch to optimize away
self-joins breaks the test, so adjust it to avoid that happening.
Tom Lane [Thu, 21 Feb 2019 19:59:02 +0000 (14:59 -0500)]
Move estimate_hashagg_tablesize to selfuncs.c, and widen result to double.
It seems to make more sense for this to be in selfuncs.c, since it's
largely a statistical-estimation thing, and it's related to other
functions like estimate_hash_bucket_stats that are there.
While at it, change the result type from Size to double. Perhaps at one
point it was impossible for the result to overflow an integer, but
I've got no confidence in that proposition anymore. Nothing's actually
done with the result except to compare it to a work_mem-based limit,
so as long as we don't get an overflow on the way to that comparison,
things should be fine even with very large dNumGroups.
Code movement proposed by Antonin Houska, type change by me
Peter Eisentraut [Wed, 20 Feb 2019 10:38:44 +0000 (11:38 +0100)]
Hide other user's pg_stat_ssl rows
Change pg_stat_ssl so that an unprivileged user can only see their own
rows; other rows will be all null. This makes the behavior consistent
with pg_stat_activity, where information about where the connection
came from is also restricted.
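Roughly, for an unprivileged user the view now behaves like this:

    SELECT pid, ssl, version, cipher FROM pg_stat_ssl;
    -- rows belonging to other users' sessions now come back as NULL
    -- instead of exposing their SSL details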
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/63117976-d02c-c8e2-3aef-caa31a5ab8d3%402ndquadrant.com
Peter Eisentraut [Thu, 21 Feb 2019 17:34:19 +0000 (18:34 +0100)]
pg_regress: Don't use absolute paths for the diff
Don't expand inputfile and outputfile to absolute paths globally, just
where needed. In particular, pass them as is to the file name
arguments of the diff command, so that we don't see the full absolute
path in the diff header, which makes the diff unnecessarily verbose
and harder to read.
Robert Haas [Thu, 21 Feb 2019 16:38:54 +0000 (11:38 -0500)]
Move code for managing PartitionDescs into a new file, partdesc.c
This is similar in spirit to the existing partbounds.c file in the
same directory, except that there's a lot less code in the new file
created by this commit. Pending work in this area proposes to add a
bunch more code related to PartitionDescs, though, and this will give
us a good place to put it.
Robert Haas [Thu, 21 Feb 2019 16:24:40 +0000 (11:24 -0500)]
Delay lock acquisition for partitions until we route a tuple to them.
Instead of locking all partitions to which we might route a tuple at
executor startup, just lock them as we use them. In some cases such a
partition might get locked at executor startup anyway because it
appears in the query's range table for some other reason, but in other
cases this is a big savings.
This changes the order in which partitions are locked in some cases,
which might conceivably create deadlock hazards that don't exist
today, but per discussion, it seems like such cases should be rare
enough that we can neglect them in favor of improving performance.
David Rowley, reviewed and tested by Tomas Vondra, Sho Kato, John
Naylor, Tom Lane, and me.
Peter Eisentraut [Thu, 21 Feb 2019 14:39:37 +0000 (15:39 +0100)]
Fix dbtoepub output file name
In previous releases, the input file of dbtoepub was postgres.xml, and
dbtoepub knows to derive the output file name postgres.epub from that
automatically. But now the input file is postgres.sgml (since
postgres.sgml is itself an XML file and we no longer need the
intermediate postgres.xml file), but dbtoepub doesn't know how to deal
with the .sgml suffix, so the automatically derived output file name
becomes postgres.sgml.epub. Fix by adding an explicit -o option.
Tom Lane [Thu, 21 Feb 2019 01:53:08 +0000 (20:53 -0500)]
Speed up match_eclasses_to_foreign_key_col() when there are many ECs.
Check ec_relids before bothering to iterate through the EC members.
On a perhaps extreme, but still real-world, query in which
match_eclasses_to_foreign_key_col() accounts for the bulk of the
planner's runtime, this saves nearly 40% of the runtime. It's a bit
of a stopgap fix, but it's simple enough to be back-patched to 9.6
where this code came in; so let's do that.
Andrew Gierth [Wed, 20 Feb 2019 21:31:02 +0000 (21:31 +0000)]
Use an unsigned char for bool if we don't use the native bool.
On (rare) platforms where sizeof(bool) > 1, we need to use our own
bool, but imported c99 code (such as Ryu) may want to use bool values
as array subscripts, which elicits warnings if bool is defined as
char. Using unsigned char instead should work just as well for our
purposes, and avoid such warnings.
Tom Lane [Wed, 20 Feb 2019 19:39:11 +0000 (14:39 -0500)]
Improve planner's understanding of strictness of type coercions.
PG type coercions are generally strict, ie a NULL input must produce
a NULL output (or, in domain cases, possibly an error). The planner's
understanding of that was a bit incomplete though, so improve it:
* Teach contain_nonstrict_functions() that CoerceViaIO can always be
considered strict. Previously it believed that only if the underlying
I/O functions were marked strict, which is often but not always true.
* Teach clause_is_strict_for() that CoerceViaIO, ArrayCoerceExpr,
ConvertRowtypeExpr, CoerceToDomain can all be considered strict.
Previously it knew nothing about any of them.
The main user-visible impact of this is that IS NOT NULL predicates
can be proven to hold from expressions involving casts in more cases
than before, allowing partial indexes with such predicates to be used
without extra pushups. This reduces the surprise factor for users,
who may well be used to ordinary (function-call-based) casts being
known to be strict.
Per a gripe from Samuel Williams. This doesn't rise to the level of
a bug, IMO, so no back-patch.
Tom Lane [Wed, 20 Feb 2019 18:36:55 +0000 (13:36 -0500)]
Fix incorrect strictness test for ArrayCoerceExpr expressions.
The recursion in contain_nonstrict_functions_walker() was done wrong,
causing the strictness check to be bypassed for a parse node that
is the immediate input of an ArrayCoerceExpr node. This could allow,
for example, incorrect decisions about whether a strict SQL function
can be inlined.
I didn't add a regression test, because (a) the bug is so narrow
and (b) I couldn't think of a test case that wasn't dependent on a
large number of other behaviors, to the point where it would likely
soon rot to the point of not testing what it was intended to.
I broke this in commit c12d570fa, so back-patch to v11.
Amit Kapila [Wed, 20 Feb 2019 12:07:39 +0000 (17:37 +0530)]
Doc: Update the documentation for FSM behavior for small tables.
In commit b0eaa4c51b, we have avoided the creation of FSM for small tables.
So the functions that use FSM to compute the free space can return zero for
such tables. This was previously also possible for the cases where
vacuum has not been triggered to update the FSM.
This commit updates the comments in code and documentation to reflect this
behavior.
Author: John Naylor
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACPNZCtba-3m1q3A8gxA_vxg=T7gQf7gMbpR4Ciy5LntY-j+0Q@mail.gmail.com
Dean Rasheed [Wed, 20 Feb 2019 08:30:21 +0000 (08:30 +0000)]
Fix DEFAULT-handling in multi-row VALUES lists for updatable views.
INSERT ... VALUES for a single VALUES row is implemented differently
from a multi-row VALUES list, which causes inconsistent behaviour in
the way that DEFAULT items are handled. In particular, when inserting
into an auto-updatable view on top of a table with a column default, a
DEFAULT item in a single VALUES row gets correctly replaced with the
table column's default, but for a multi-row VALUES list it is replaced
with NULL.
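A minimal sketch of the inconsistency (table and view names are made up):

    CREATE TABLE t (a int, b int DEFAULT 10);
    CREATE VIEW v AS SELECT * FROM t;
    INSERT INTO v VALUES (1, DEFAULT);                 -- b correctly becomes 10
    INSERT INTO v VALUES (2, DEFAULT), (3, DEFAULT);   -- b wrongly became NULL before this fix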
Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the
VALUES list untouched if the target relation is an auto-updatable view
and has no column default, deferring DEFAULT-expansion until the query
against the base relation is rewritten. For all other types of target
relation, including tables and trigger- and rule-updatable views, we
must continue to replace DEFAULT items with NULL in the absence of a
column default.
This is somewhat complicated by the fact that if an auto-updatable
view has DO ALSO rules attached, the VALUES lists for the product
queries need to be handled differently from the original query, since
the product queries need to act like rule-updatable views whereas the
original query has auto-updatable view semantics.
Back-patch to all supported versions.
Reported by Roger Curley (bug #15623). Patch by Amit Langote and me.
Michael Paquier [Wed, 20 Feb 2019 03:31:07 +0000 (12:31 +0900)]
Mark correctly initial slot snapshots with MVCC type when built
When building an initial slot snapshot, snapshots are marked with the
historic MVCC snapshot type, with the marker field being set in
SnapBuildBuildSnapshot() but not overridden in SnapBuildInitialSnapshot().
Existing callers of SnapBuildBuildSnapshot() do not care about the type
of snapshot used, but extensions calling it actually may, as reported.
While on it, correctly mark the snapshot type when importing one. This
is cosmetic, as the field is enforced to 0.
Andrew Dunstan [Mon, 18 Feb 2019 12:22:00 +0000 (07:22 -0500)]
Provide an extra-float-digits setting for pg_dump / pg_dumpall
Changes made by commit 02ddd49 mean that dumps made against pre version
12 instances are no longer comparable with those made against version 12
or later instances. This makes cross-version upgrade testing fail in the
buildfarm. Experimentation has shown that the error is cured if the
dumps are made when extra_float_digits is set to 0. Hence this patch
allows for it to be explicitly set rather than relying on pg_dump's
builtin default (3 in almost all cases). This feature might have other
uses, but should not normally be used.
Michael Meskes [Mon, 18 Feb 2019 09:20:31 +0000 (10:20 +0100)]
Add bytea datatype to ECPG.
So far ECPG programs had to treat binary data for a bytea column as 'char' type.
But this meant converting from/to the escaped format with PQunescapeBytea()/
PQescapeBytea(), forcing users to add unnecessary code and pay the cost of
the conversion at runtime. By adding a dedicated datatype for bytea, most of
this special handling is no longer needed.
Etsuro Fujita [Mon, 18 Feb 2019 07:13:46 +0000 (16:13 +0900)]
Save PathTargets for distinct/ordered relations in root->upper_targets[].
For the convenience of extensions, we previously only saved PathTargets
for grouped, window, and final relations in root->upper_targets[] in
grouping_planner(). To improve the convenience, save PathTargets for
distinct and ordered relations as well.
Author: Antonin Houska, with an additional change by me
Discussion: https://postgr.es/m/10994.1549559088@localhost
Michael Paquier [Mon, 18 Feb 2019 05:23:30 +0000 (14:23 +0900)]
Fix some issues with TAP tests of pg_basebackup and pg_verify_checksums
ee9e145 fixed the checksum tests of pg_basebackup a first time, but
one seek() call was still missed. Also, the data written to files to
emulate corruption was not actually zeros, as the quoting style was
incorrect.
Backpatch the portion for pg_basebackup to v11 where these tests have
been introduced. The tests of pg_verify_checksums are new as of v12.
Author: Michael Banck
Discussion: https://postgr.es/m/1550153276.796.35.camel@credativ.de
Backpatch-through: 11
Michael Paquier [Mon, 18 Feb 2019 00:52:02 +0000 (09:52 +0900)]
Avoid some unnecessary block reads in WAL reader
When reading a new page internally and depending on the way the WAL
reader facility gets used by plugins, the current implementation of the
WAL reader may end up reading a block multiple times when it is not
actually necessary, as the requested data length may be equal to what has
already been read. This can happen for any size, but is more likely to
happen at the end of a page. This can cause performance penalties in
plugins which rely on the block reads to be purely sequential, zlib not
liking backward reads for example. The new behavior also shaves some
cycles when doing recovery.
Author: Arthur Zakirov
Reviewed-by: Andrey Lepikhov, Michael Paquier, Grigory Smolkin
Discussion: https://postgr.es/m/2ddf4a32-517e-d6f4-d992-4a63b6035bfd@postgrespro.ru
Thomas Munro [Sun, 17 Feb 2019 20:53:26 +0000 (09:53 +1300)]
Fix race in dsm_unpin_segment() when handles are reused.
Teach dsm_unpin_segment() to skip segments that are in the process
of being destroyed by another backend, when searching for a handle.
Such a segment cannot possibly be the one we are looking for, even
if its handle matches. Another slot might hold a recently created
segment that has the same handle value by coincidence, and we need
to keep searching for that one.
The bug caused rare "cannot unpin a segment that is not pinned"
errors on 10 and 11. Similar to commit 6c0fb941 for dsm_attach().
Back-patch to 10, where dsm_unpin_segment() landed.
Author: Thomas Munro
Reported-by: Justin Pryzby
Tested-by: Justin Pryzby (along with other recent DSA/DSM fixes)
Discussion: https://postgr.es/m/20190216023854.GF30291@telsasoft.com
Joe Conway [Sun, 17 Feb 2019 18:15:14 +0000 (13:15 -0500)]
Fix documentation for dblink_error_message() return value
The dblink documentation claims that an empty string is returned if there
has been no error, however OK is actually returned in that case. Also,
clarify that an async error may not be seen unless dblink_is_busy() or
dblink_get_result() have been called first.
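For example, with a made-up connection name:

    SELECT dblink_error_message('myconn');
    -- returns 'OK', not an empty string, when no error has occurred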
Tom Lane [Sun, 17 Feb 2019 17:37:31 +0000 (12:37 -0500)]
Fix CREATE VIEW to allow zero-column views.
We should logically have allowed this case when we allowed zero-column
tables, but it was overlooked.
Although this might be thought a feature addition, it's really a bug
fix, because it was possible to create a zero-column view via
the convert-table-to-view code path, and then you'd have a situation
where dump/reload would fail. Hence, back-patch to all supported
branches.
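A minimal sketch of the now-accepted case (names are made up):

    CREATE TABLE zerocols ();
    CREATE VIEW zerocols_v AS SELECT FROM zerocols;  -- previously rejected, now allowed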
Arrange the added test cases to provide coverage of the related
pg_dump code paths (since these views will be dumped and reloaded
during the pg_upgrade regression test). I also made them test
the case where pg_dump has to postpone the view rule into post-data,
which disturbingly had no regression coverage before.
Report and patch by Ashutosh Sharma (test case by me)
Joe Conway [Sun, 17 Feb 2019 14:21:13 +0000 (09:21 -0500)]
Mark pg_config() stable rather than immutable
pg_config() has been marked immutable since its inception. As part of a
larger discussion around the definition of immutable versus stable and
related implications for marking functions parallel safe raised by
Andres, the consensus was clearly that pg_config() is stable, since
it could possibly change output even for the same minor version with
a recompile or installation of a new binary. So mark it stable.
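The change is visible in the catalogs, for example:

    SELECT provolatile FROM pg_proc WHERE proname = 'pg_config';
    -- now 's' (stable) rather than the previous 'i' (immutable)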
Theoretically this could/should be backpatched, but it was deemed to be not
worth the effort since in practice this is very unlikely to cause problems
in the real world.
Tatsuo Ishii [Sun, 17 Feb 2019 11:23:10 +0000 (20:23 +0900)]
Doc: remove ancient comment.
There's a very old comment in rules.sgml, added back in 2003. It
expected a feature to come back, but that never happened. So now we can
safely remove the comment. Back-patched to all supported branches.
Noah Misch [Sat, 16 Feb 2019 23:28:27 +0000 (15:28 -0800)]
In imath.h, replace stdint.h usage with c.h equivalents.
As things stood, buildfarm member dory failed. MSVC versions lacking
stdint.h are unusable for building PostgreSQL, but pg_config.h.win32
doesn't know that. Even so, we support other systems lacking stdint.h,
including buildfarm member gaur. Per a suggestion from Tom Lane.
Andrew Gierth [Sat, 16 Feb 2019 22:11:04 +0000 (22:11 +0000)]
Remove float8-small-is-zero regression test variant.
Since this was also the variant used as an example in the docs, update
the docs to use float4-misrounded-input as an example instead, since
that is now the only remaining variant file.
Noah Misch [Sat, 16 Feb 2019 21:12:28 +0000 (13:12 -0800)]
Import changes from IMath versions (1.3, 1.29].
Upstream fixed bugs over the years, but none are confirmed to have
affected pgcrypto. We're better off naively tracking upstream than
reactively maintaining a twelve-year-old snapshot of upstream. Add a
header comment describing the synchronization procedure. Discard use of
INVERT_COMPARE_RESULT(); the domain of the comparisons in question is
{-1,0,1}, controlled entirely by code in imath.c.
Andrew Gierth reviewed the Makefile change. Tom Lane reviewed the
synchronization procedure description.
Tom Lane [Sat, 16 Feb 2019 21:11:12 +0000 (16:11 -0500)]
Allow user control of CTE materialization, and change the default behavior.
Historically we've always materialized the full output of a CTE query,
treating WITH as an optimization fence (so that, for example, restrictions
from the outer query cannot be pushed into it). This is appropriate when
the CTE query is INSERT/UPDATE/DELETE, or is recursive; but when the CTE
query is non-recursive and side-effect-free, there's no hazard of changing
the query results by pushing restrictions down.
Another argument for materialization is that it can avoid duplicate
computation of an expensive WITH query --- but that only applies if
the WITH query is called more than once in the outer query. Even then
it could still be a net loss, if each call has restrictions that
would allow just a small part of the WITH query to be computed.
Hence, let's change the behavior for WITH queries that are non-recursive
and side-effect-free. By default, we will inline them into the outer
query (removing the optimization fence) if they are called just once.
If they are called more than once, we will keep the old behavior by
default, but the user can override this and force inlining by specifying
NOT MATERIALIZED. Lastly, the user can force the old behavior by
specifying MATERIALIZED; this would mainly be useful when the query had
deliberately been employing WITH as an optimization fence to prevent a
poor choice of plan.
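A sketch of the resulting syntax (table and column names are made up):

    -- Referenced once and side-effect-free: now inlined by default.
    WITH w AS (SELECT * FROM big_tab) SELECT * FROM w WHERE key = 123;
    -- Force the old optimization-fence behavior:
    WITH w AS MATERIALIZED (SELECT * FROM big_tab) SELECT * FROM w WHERE key = 123;
    -- Force inlining even when referenced more than once:
    WITH w AS NOT MATERIALIZED (SELECT * FROM big_tab)
    SELECT * FROM w w1 JOIN w w2 ON w1.key = w2.ref;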
Michael Meskes [Sat, 16 Feb 2019 09:55:17 +0000 (10:55 +0100)]
Add DECLARE STATEMENT support to ECPG.
DECLARE STATEMENT is a statement that lets users declare an identifier
pointing at a connection. This identifier will be used in other embedded
dynamic SQL statement such as PREPARE, EXECUTE, DECLARE CURSOR and so on.
When connecting to a non-default connection, the AT clause can be used in
a DECLARE STATEMENT once and is no longer needed in every dynamic SQL
statement. This makes ECPG applications easier to write and more efficient.
Moreover, writing code without designating the connection explicitly improves
portability.
Tom Lane [Sat, 16 Feb 2019 04:22:27 +0000 (23:22 -0500)]
Make use of compiler builtins and/or assembly for CLZ, CTZ, POPCNT.
Test for the compiler builtins __builtin_clz, __builtin_ctz, and
__builtin_popcount, and make use of these in preference to
handwritten C code if they're available. Create src/port
infrastructure for "leftmost one", "rightmost one", and "popcount"
so as to centralize these decisions.
On x86_64, __builtin_popcount generally won't make use of the POPCNT
opcode because that's not universally supported yet. Provide code
that checks CPUID and then calls POPCNT via asm() if available.
This requires indirecting through a function pointer, which is
an annoying amount of overhead for a one-instruction operation,
but it's probably not worth working harder than this for our
current use-cases.
I'm not sure we've found all the existing places that could profit
from this new infrastructure; but we at least touched all the
ones that used copied-and-pasted versions of the bitmapset.c code,
and got rid of multiple copies of the associated constant arrays.
While at it, replace c-compiler.m4's one-per-builtin-function
macros with a single one that can handle all the cases we need
to worry about so far. Also, because I'm paranoid, make those
checks into AC_LINK checks rather than just AC_COMPILE; the
former coding failed to verify that libgcc has support for the
builtin, in cases where it's not inline code.
David Rowley, Thomas Munro, Alvaro Herrera, Tom Lane
Andrew Gierth [Sat, 16 Feb 2019 01:50:16 +0000 (01:50 +0000)]
Cygwin and Mingw floating-point fixes.
Deal with silent-underflow errors in float4 for cygwin and mingw by
using our strtof() wrapper; deal with misrounding errors by adding
them to the resultmap. Some slight reorganization of declarations was
done to avoid duplicating material between cygwin.h and win32_port.h.
While here, remove from the resultmap all references to
float8-small-is-zero; inspection of cygwin output suggests it's no
longer required there, and the freebsd/netbsd/openbsd entries should
no longer be necessary (these date back to c. 2000). This commit
doesn't remove the file itself nor the documentation references for
it; that will happen in a subsequent commit if all goes well.
Somebody will have to try harder before submitting this patch again.
I've spent entirely too much time on it already, and the #ifdef maze yet
to be written in order for it to build at all got on my nerves. The
amount of work needed to get a platform-specific performance improvement
that's barely above the noise level is not worth it.
Tom Lane [Fri, 15 Feb 2019 18:05:19 +0000 (13:05 -0500)]
Refactor index cost estimation functions in view of IndexClause changes.
Get rid of deconstruct_indexquals() in favor of just iterating over the
IndexClause list directly. The extra services that that function used to
provide, such as hiding clause commutation and associating the right index
column with each clause, are no longer useful given the new data structure.
I'd originally thought that it'd provide a useful amount of abstraction
by freeing callers from paying attention to the exact clause type of each
indexqual, but that hope proves to have been vain, because few callers can
ignore the semantic differences between different clause types. Indeed,
removing it results in a net code savings, and probably some cycles shaved
by not having to build an extra list-of-structs data structure.
Also, export a few formerly-static support functions, with the goal
of allowing extension AMs to write functionality equivalent to
genericcostestimate() without pointless code duplication.
Alvaro Herrera [Fri, 15 Feb 2019 16:07:02 +0000 (13:07 -0300)]
Fix compiler builtin usage in new pg_bitutils.c
Split out these new functions in three parts: one in a new file that
uses the compiler builtin and gets compiled with the -mpopcnt compiler
option if it exists; another one that uses the compiler builtin but not
the compiler option; and finally the fallback with open-coded
algorithms.
Split out the configure logic: in the original commit, it was selecting
to use the -mpopcnt compiler switch together with deciding whether to
use the compiler builtin, but those two things are really separate.
Split them out. Also, expose whether the builtin exists to
Makefile.global, so that src/port's Makefile can decide whether to
compile the hw-optimized file.
Remove CPUID test for CTZ/CLZ. Make pg_{right,left}most_ones use either
the compiler intrinsic or open-coded algo; trying to use the
HW-optimized version is a waste of time. Make them static inline
functions.
Peter Eisentraut [Fri, 15 Feb 2019 16:29:41 +0000 (17:29 +0100)]
doc: Update README.links
The guideline to "not use text with <ulink> so the URL appears in
printed output" is obsolete, since the URL appears as a footnote in
printed output in that case.
Michael Paquier [Fri, 15 Feb 2019 08:12:24 +0000 (17:12 +0900)]
Fix support for CREATE TABLE IF NOT EXISTS AS EXECUTE
The grammar IF NOT EXISTS for CTAS has been supported since 9.5 and documented
as such; however, the case of using EXECUTE as the query has never been
covered, as EXECUTE CTAS statements and normal CTAS statements are parsed
separately.
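A sketch of the now-handled case (names are made up):

    PREPARE q AS SELECT 1 AS a;
    CREATE TABLE IF NOT EXISTS t AS EXECUTE q;
    CREATE TABLE IF NOT EXISTS t AS EXECUTE q;  -- now skipped with a notice instead of erroring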
Author: Andreas Karlsson
Discussion: https://postgr.es/m/2ddcc188-e37c-a0be-32bf-a56b07c3559e@proxel.se
Backpatch-through: 9.5
Thomas Munro [Thu, 14 Feb 2019 21:19:11 +0000 (10:19 +1300)]
Fix race in dsm_attach() when handles are reused.
DSM handle values can be reused as soon as the underlying shared memory
object has been destroyed. That means that for a brief moment we
might have two DSM slots with the same handle. While trying to attach,
if we encounter a slot with refcnt == 1, meaning that it is currently
being destroyed, we should continue our search in case the same handle
exists in another slot.
The race manifested as a rare "dsa_area could not attach to segment"
error, and was more likely in 10 and 11 due to the lack of distinct
seed for random() in parallel workers. It was made very unlikely
in master by commit 197e4af9, and older releases don't usually create
new DSM segments in background workers so it was also unlikely there.
This fixes the root cause of bug report #15585, in which the error
could also sometimes result in a self-deadlock in the error path.
It's not yet clear if further changes are needed to avoid that failure
mode.
Back-patch to 9.4, where dsm.c arrived.
Author: Thomas Munro
Reported-by: Justin Pryzby, Sergei Kornilov
Discussion: https://postgr.es/m/20190207014719.GJ29720@telsasoft.com
Discussion: https://postgr.es/m/15585-324ff6a93a18da46@postgresql.org