Jeff Davis [Thu, 18 Jul 2019 19:42:39 +0000 (12:42 -0700)]
Fix daterange canonicalization for +/- infinity.
The values 'infinity' and '-infinity' are a part of the DATE type
itself, so a bound of the date 'infinity' is not the same as an
unbounded/infinite range. However, it is still wrong to try to
canonicalize such values, because adding or subtracting one has no
effect. Fix by treating 'infinity' and '-infinity' the same as
unbounded ranges for the purposes of canonicalization (but not other
purposes).
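For illustration, a minimal standalone C sketch of the idea (the
sentinel values and helper names are illustrative, not the actual
rangetypes.c code): discrete-range canonicalization converts an
exclusive bound to an inclusive one by adding or subtracting one,
and that step must be skipped when the bound is the type's own
'infinity' sentinel.

    #include <stdio.h>
    #include <stdbool.h>
    #include <limits.h>

    /* Illustrative stand-ins for DATE's special values. */
    #define DATEVAL_NOBEGIN INT_MIN     /* '-infinity' */
    #define DATEVAL_NOEND   INT_MAX     /* 'infinity'  */
    #define DATE_NOT_FINITE(d) \
        ((d) == DATEVAL_NOBEGIN || (d) == DATEVAL_NOEND)

    typedef struct
    {
        int  val;
        bool inclusive;
        bool infinite;      /* a truly unbounded range bound */
    } RangeBound;

    /* Canonicalize toward '[)', skipping non-finite dates. */
    static void
    canonicalize_lower(RangeBound *lower)
    {
        if (!lower->infinite &&
            !DATE_NOT_FINITE(lower->val) &&
            !lower->inclusive)
        {
            lower->val += 1;        /* '(d,' becomes '[d+1,' */
            lower->inclusive = true;
        }
    }

    int
    main(void)
    {
        RangeBound b1 = {10, false, false};
        RangeBound b2 = {DATEVAL_NOBEGIN, false, false};

        canonicalize_lower(&b1);
        canonicalize_lower(&b2);    /* date '-infinity': no-op */
        printf("finite: val=%d inclusive=%d\n", b1.val, b1.inclusive);
        printf("-infinity unchanged: inclusive=%d\n", b2.inclusive);
        return 0;
    }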
Backpatch to all versions because it is inconsistent with the
documented behavior. Note that this could be an incompatibility for
applications relying on the behavior contrary to the documentation.
Author: Laurenz Albe
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
Backpatch-through: 9.4
Peter Geoghegan [Thu, 18 Jul 2019 20:22:56 +0000 (13:22 -0700)]
Fix nbtree metapage cache upgrade bug.
Commit 857f9c36cda, which taught nbtree VACUUM to avoid unnecessary
index scans, bumped the nbtree version number from 2 to 3, while adding
the ability for nbtree indexes to be upgraded on-the-fly. Various
assertions that assumed that an nbtree index was always on version 2 had
to be changed to accept any supported version (version 2 or 3 on
Postgres 11).
However, a few assertions were missed in the initial commit, all of
which were in code paths that cache a local copy of the metapage
metadata, where the index had been expected to be on the current version
(no longer version 2) as a generic sanity check. Rather than simply
update the assertions, follow-up commit 0a64b45152b intentionally made
the metapage caching code update the per-backend cached metadata version
without changing the on-disk version at the same time. This could even
happen when the planner needed to determine the height of a B-Tree for
costing purposes. The assertions only fail on Postgres v12 when
upgrading from v10, because they were adjusted to use the authoritative
shared memory metapage by v12's commit dd299df8.
To fix, remove the cache-only upgrade mechanism entirely, and update the
assertions themselves to accept any supported version (go back to using
the cached version in v12). The fix is almost a full revert of commit 0a64b45152b on the v11 branch.
VACUUM only considers the authoritative metapage, and never bothers with
a locally cached version, whereas everywhere else isn't interested in
the metapage fields that were added by commit 857f9c36cda. It seems
unlikely that this bug has affected any user on v11.
Reported-By: Christoph Berg
Bug: #15896
Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.
Tom Lane [Thu, 18 Jul 2019 14:37:13 +0000 (10:37 -0400)]
Further adjust SPITupleTable to provide a public row-count field.
Now that commit fec0778c8 drew a clear line between public and private
fields in SPITupleTable, it seems pretty silly that the count of valid
tuples isn't on the public side of that line. The reason why not was
that there wasn't such a count. For reasons lost in the mists of time,
spi.c preferred to keep a count of remaining free entries in the array.
But that seems pretty pointless: it's unlike the way we handle similar
code everywhere else, and it involves extra subtractions that surely
outweigh having to do a comparison rather than test-for-zero to check
for array-full.
Hence, rearrange so that this code does the expansible array logic
the same as everywhere else, with a count of valid entries alongside
the allocated array length. And document the count as public.
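For illustration, the expansible-array pattern being adopted, as a
standalone C sketch (field names echo the commit's description; this
is not the actual spi.c code, and error handling is elided):

    #include <stdlib.h>
    #include <stdio.h>

    typedef struct
    {
        int   *vals;     /* the tuples, in the real struct */
        int    numvals;  /* number of valid entries (now public) */
        int    alloced;  /* allocated length of vals[] */
    } TupTable;

    static void
    tuptable_add(TupTable *t, int v)
    {
        if (t->numvals >= t->alloced)   /* compare, not test-for-zero */
        {
            t->alloced *= 2;
            t->vals = realloc(t->vals, t->alloced * sizeof(int));
        }
        t->vals[t->numvals++] = v;
    }

    int
    main(void)
    {
        TupTable t = {malloc(4 * sizeof(int)), 0, 4};

        for (int i = 0; i < 10; i++)
            tuptable_add(&t, i);
        printf("numvals=%d alloced=%d\n", t.numvals, t.alloced);
        free(t.vals);
        return 0;
    }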
I looked for core-code callers where it would make sense to start
relying on tuptable->numvals rather than the separate SPI_processed
variable. Right now there don't seem to be places where it'd be
a win to do so without more code restructuring than I care to
undertake today. In principle, though, having SPITupleTables be
fully self-contained should be helpful down the line.
Tomas Vondra [Wed, 17 Jul 2019 16:16:50 +0000 (18:16 +0200)]
Simplify bitmap updates in multivariate MCV code
When evaluating clauses on a multivariate MCV list, we build a bitmap
tracking how the clauses match each item of the MCV list. When updating
the bitmap we need to consider the current value (tracking how the item
matches preceding clauses), match for the current clause and whether the
clauses are connected by AND or OR.
Until now the logic was copied on every place updating the bitmap, which
was not quite readable. So just move it to a separate function and call
it where needed.
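The consolidated logic is essentially a three-input combine step; a
standalone C sketch (the function name is illustrative, not the
actual statext code):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Combine the existing bitmap value for an MCV item with the
     * result of the current clause, under AND or OR semantics.
     */
    static bool
    bitmap_update(bool curr_match, bool clause_match, bool is_or)
    {
        return is_or ? (curr_match || clause_match)
                     : (curr_match && clause_match);
    }

    int
    main(void)
    {
        printf("AND(true,false)=%d\n", bitmap_update(true, false, false));
        printf("OR(true,false)=%d\n", bitmap_update(true, false, true));
        return 0;
    }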
Backpatch to 12, where the code was introduced. While not a bugfix, this
should make maintenance and future backpatches easier.
Tomas Vondra [Mon, 15 Jul 2019 00:00:31 +0000 (02:00 +0200)]
Fix handling of NULLs in MCV items and constants
There were two issues in how the extended statistics handled NULL values
in opclauses. Firstly, the code was oblivious to the possibility that
Const may be NULL (constisnull=true) in which case the constvalue is
undefined. We need to treat this as a mismatch, and not call the proc.
Secondly, the MCV item itself may contain NULL values too - the code
already did check that, and updated the match bitmap accordingly, but
failed to ensure we won't call the operator procedure anyway. It did
work for AND-clauses, because in that case false in the bitmap stops
evaluation of further clauses. But for OR-clauses it was easy to
get incorrect estimates or even trigger a crash.
This fixes both issues by extending the existing check so that it looks
at constisnull too, and making sure it skips calling the procedure.
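The shape of the fix, as a standalone C sketch: never invoke the
comparison procedure when either side is NULL, and record a mismatch
instead (names here are illustrative, not the extended-statistics
code itself):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        int  value;
        bool isnull;
    } NullableValue;

    /* Illustrative comparison "proc"; must never see a NULL input. */
    static bool
    op_eq(int a, int b)
    {
        return a == b;
    }

    static bool
    clause_matches(NullableValue item, NullableValue cnst)
    {
        /* NULL on either side is a mismatch; skip calling the proc. */
        if (item.isnull || cnst.isnull)
            return false;
        return op_eq(item.value, cnst.value);
    }

    int
    main(void)
    {
        NullableValue a = {42, false}, n = {0, true};

        printf("%d %d\n", clause_matches(a, a), clause_matches(a, n));
        return 0;
    }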
Tomas Vondra [Fri, 12 Jul 2019 22:12:16 +0000 (00:12 +0200)]
Fix handling of opclauses in extended statistics
We expect opclauses to have exactly one Var and one Const, but the code
was checking the Const by calling is_pseudo_constant_clause() which is
incorrect - we need a proper constant.
Fixed by using plain IsA(x,Const) to check type of the node. We need to
do these checks in two places, so move it into a separate function that
can be called in both places.
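The distinction matters because a pseudo-constant clause merely
contains no Vars (it could be a stable function call), while this
code needs an actual Const node. A standalone sketch of the stricter
node-tag test (the enum and macro imitate, but are not, the
backend's):

    #include <stdio.h>

    typedef enum { T_Var, T_Const, T_FuncExpr } NodeTag;

    typedef struct { NodeTag type; } Node;

    /* Imitation of the backend's IsA(): an exact node-tag test. */
    #define IsA(nodeptr, t)  (((const Node *) (nodeptr))->type == T_##t)

    int
    main(void)
    {
        Node c = {T_Const};
        Node f = {T_FuncExpr};   /* pseudo-constant, but not a Const */

        printf("const: %d, funcexpr: %d\n", IsA(&c, Const), IsA(&f, Const));
        return 0;
    }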
Reported by Andreas Seltenreich, based on crash reported by sqlsmith.
Tomas Vondra [Wed, 17 Jul 2019 16:13:39 +0000 (18:13 +0200)]
Remove unnecessary TYPECACHE_GT_OPR lookup
The TYPECACHE_GT_OPR lookup is not needed (it was used in older
versions of the MCV code), but the compiler failed to detect this
because the result was used in a fmgr_info() call, populating a
FmgrInfo entry.
Michael Paquier [Thu, 18 Jul 2019 01:05:59 +0000 (10:05 +0900)]
Simplify description of --data-checksums in documentation of initdb
The documentation mentioned that data checksums cannot be changed after
initialization, which is not true as pg_checksums can do that with its
--enable option introduced in v12. This simply removes that
sentence.
Reported-by: Basil Bourque
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/15909-e9d74271f1647472@postgresql.org
Backpatch-through: 12
Tom Lane [Wed, 17 Jul 2019 22:26:23 +0000 (18:26 -0400)]
Sync our copy of the timezone library with IANA release tzcode2019b.
A large fraction of this diff is just due to upstream's somewhat
random decision to rename a bunch of internal variables and struct
fields. However, there is an interesting new feature in zic:
it's grown a "-b slim" option that emits zone files without 32-bit
data and other backwards-compatibility hacks. We should consider
whether we wish to enable that.
Tom Lane [Wed, 17 Jul 2019 18:55:13 +0000 (14:55 -0400)]
Clarify the distinction between public and private SPITupleTable fields.
The fields that we consider public are "tupdesc" and "vals", which
historically are in the middle of the struct. Move them to the front
(this should be perfectly safe to do in HEAD) and add comments to make
it quite clear which fields are public or not.
Also adjust spi.sgml's documentation of the struct to match.
That doc had bit-rotted somewhat, as it was missing some fields.
(Arguably we should just remove all the private fields from the docs,
but for now I refrained.)
Tom Lane [Wed, 17 Jul 2019 17:13:15 +0000 (13:13 -0400)]
Doc: explain where to find Makefile used to build sepgsql-regtest.pp.
At least on Fedora and RHEL, it's not in the same RPM that's needed
for building sepgsql itself. Today is the second or third time I've
had to rediscover how to install that, so let's document it this time.
The aggregate-order difference explained in my previous commit
turns out to also affect the order of log entries emitted in the
contrib/sepgsql regression test. Per buildfarm.
Tom Lane [Wed, 17 Jul 2019 15:15:28 +0000 (11:15 -0400)]
Avoid using lcons and list_delete_first where it's easy to do so.
Formerly, lcons was about the same speed as lappend, but with the new
List implementation, that's not so; with a long List, data movement
imposes an O(N) cost on lcons and list_delete_first, but not lappend.
Hence, invent list_delete_last with semantics parallel to
list_delete_first (but O(1) cost), and change various places to use
lappend and list_delete_last where this can be done without much
violence to the code logic.
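Why the asymmetry: with an array-backed list, popping the last
element just decrements the length, while popping the first must
slide every remaining element down. A standalone C sketch of the two
costs (not the pg_list code):

    #include <string.h>
    #include <stdio.h>

    typedef struct
    {
        int len;
        int elems[8];
    } IntList;

    /* O(1): nothing moves. */
    static void
    delete_last(IntList *l)
    {
        l->len--;
    }

    /* O(N): every surviving element shifts left by one. */
    static void
    delete_first(IntList *l)
    {
        memmove(&l->elems[0], &l->elems[1],
                (l->len - 1) * sizeof(int));
        l->len--;
    }

    int
    main(void)
    {
        IntList l = {4, {1, 2, 3, 4}};

        delete_first(&l);
        delete_last(&l);
        printf("len=%d first=%d last=%d\n",
               l.len, l.elems[0], l.elems[l.len - 1]);
        return 0;
    }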
There are quite a few places that construct result lists using lcons not
lappend. Some have semantic rationales for that; I added comments about
it to a couple that didn't have them already. In many such places though,
I think the coding is that way only because back in the dark ages lcons
was faster than lappend. Hence, switch to lappend where this can be done
without causing semantic changes.
In ExecInitExprRec(), this results in aggregates and window functions that
are in the same plan node being executed in a different order than before.
Generally, the executions of such functions ought to be independent of
each other, so this shouldn't result in visibly different query results.
But if you push it, as one regression test case does, you can show that
the order is different. The new order seems saner; it's closer to
the order of the functions in the query text. And we never documented
or promised anything about this, anyway.
Also, in gistfinishsplit(), don't bother building a reverse-order list;
it's easy now to iterate backwards through the original list.
It'd be possible to go further towards removing uses of lcons and
list_delete_first, but it'd require more extensive logic changes,
and I'm not convinced it's worth it. Most of the remaining uses
deal with queues that probably never get long enough to be worth
sweating over. (Actually, I doubt that any of the changes in this
patch will have measurable performance effects either. But better
to have good examples than bad ones in the code base.)
Patch by me, thanks to David Rowley and Daniel Gustafsson for review.
Thomas Munro [Wed, 17 Jul 2019 00:14:08 +0000 (12:14 +1200)]
Move some md.c-specific logic from smgr.c to md.c.
Potential future SMGR implementations may not want to create
tablespace directories when creating an SMGR relation. Move that
logic to mdcreate(). Move the initialization of md-specific
data structures from smgropen() to a new callback mdopen().
Author: Thomas Munro
Reviewed-by: Shawn Debnath (as part of an earlier patch set)
Discussion: https://postgr.es/m/CA%2BhUKG%2BOZqOiOuDm5tC5DyQZtJ3FH4%2BFSVMqtdC4P1atpJ%2Bqhg%40mail.gmail.com
Tom Lane [Tue, 16 Jul 2019 22:17:47 +0000 (18:17 -0400)]
Fix thinko in construction of old_conpfeqop list.
This should lappend the OIDs, not lcons them; the existing code produced
a list in reversed order. This is harmless for single-key FKs or FKs
where all the key columns are of the same type, which probably explains
how it went unnoticed. But if those conditions are not met,
ATAddForeignKeyConstraint would make the wrong decision about whether an
existing FK needs to be revalidated. I think it would almost always err
in the safe direction by revalidating a constraint that didn't need it.
You could imagine scenarios where the pfeqop check was fooled by
swapping the types of two FK columns in one ALTER TABLE, but that case
would probably be rejected by other tests, so it might be impossible to
get to the worst-case scenario where an FK should be revalidated and
isn't. (And even then, it's likely to be fine, unless there are weird
inconsistencies in the equality behavior of the replacement types.)
However, this is a performance bug at least.
Noted while poking around to see whether lcons calls could be converted
to lappend.
This bug is old, dating to commit cb3a7c2b9, so back-patch to all
supported branches.
Tom Lane [Tue, 16 Jul 2019 17:12:24 +0000 (13:12 -0400)]
Remove lappend_cell...() family of List functions.
It seems worth getting rid of these functions because they require the
caller to retain a ListCell pointer into a List that it's modifying,
which is a dangerous practice with the new List implementation.
(The only other List-modifying function that takes a ListCell pointer
as input is list_delete_cell, which nowadays is preferentially used
via the constrained API foreach_delete_current.)
There was only one remaining caller of these functions after commit 2f5b8eb5a, and that was some fairly ugly GEQO code that can be much
more clearly expressed using a list-index variable and list_insert_nth.
Hence, rewrite that code, and remove the functions.
Tom Lane [Tue, 16 Jul 2019 16:04:06 +0000 (12:04 -0400)]
Clean up some ad-hoc code for sorting and de-duplicating Lists.
heap.c and relcache.c contained nearly identical copies of logic
to insert OIDs into an OID list while preserving the list's OID
ordering (and rejecting duplicates, in one case but not the other).
The comments argue that this is faster than qsort for small numbers
of OIDs, which is at best unproven, and seems even less likely to be
true now that lappend_cell_oid has to move data around. In any case
it's ugly and hard-to-follow code, and if we do have a lot of OIDs
to consider, it's O(N^2).
Hence, replace with simply lappend'ing OIDs to a List, then list_sort
the completed List, then remove adjacent duplicates if necessary.
This is demonstrably O(N log N) and it's much simpler for the
callers. It's possible that this would be somewhat inefficient
if there were a very large number of duplicates, but that seems
unlikely in the existing usage.
This adds list_deduplicate_oid and list_oid_cmp infrastructure
to list.c. I didn't bother with equivalent functionality for
integer or pointer Lists, but such could always be added later
if we find a use for it.
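The replacement pattern -- append freely, sort once, then squeeze
out adjacent duplicates -- is O(N log N) overall. A standalone C
sketch (the helper names are analogs, not the list.c functions):

    #include <stdio.h>
    #include <stdlib.h>

    static int
    oid_cmp_(const void *a, const void *b)
    {
        unsigned x = *(const unsigned *) a, y = *(const unsigned *) b;

        return (x > y) - (x < y);
    }

    /* Remove adjacent dups from a sorted array; return new length. */
    static int
    deduplicate(unsigned *a, int n)
    {
        int out = n > 0 ? 1 : 0;

        for (int i = 1; i < n; i++)
            if (a[i] != a[out - 1])
                a[out++] = a[i];
        return out;
    }

    int
    main(void)
    {
        unsigned oids[] = {2619, 1247, 2619, 1259, 1247};
        int n = 5;

        qsort(oids, n, sizeof(unsigned), oid_cmp_); /* list_sort analog */
        n = deduplicate(oids, n);   /* list_deduplicate_oid analog */
        for (int i = 0; i < n; i++)
            printf("%u ", oids[i]);
        printf("\n");
        return 0;
    }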
Tom Lane [Tue, 16 Jul 2019 15:51:44 +0000 (11:51 -0400)]
Redesign the API for list sorting (list_qsort becomes list_sort).
In the wake of commit 1cff1b95a, the obvious way to sort a List
is to apply qsort() directly to the array of ListCells. list_qsort
was building an intermediate array of pointers-to-ListCells, which
we no longer need, but getting rid of it forces an API change:
the comparator functions need to do one less level of indirection.
Since we're having to touch the callers anyway, let's do two additional
changes: sort the given list in-place rather than making a copy (as
none of the existing callers have any use for the copying behavior),
and rename list_qsort to list_sort. It was argued that the old name
exposes more about the implementation than it should, which I find
pretty questionable, but a better reason to rename it is to be sure
we get the attention of any external callers about the need to fix
their comparator functions.
While we're at it, change four existing callers of qsort() to use
list_sort instead; previously, they all had local reinventions
of list_qsort, ie build-an-array-from-a-List-and-qsort-it.
(There are some other places where changing to list_sort perhaps
would be worthwhile, but they're less obviously wins.)
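For illustration, the indirection change in miniature, using plain
ints in standalone C (this mimics, but is not, the pg_list code):
the old scheme sorted an auxiliary array of pointers, so comparators
dereferenced twice; the new scheme sorts the element array in place.

    #include <stdio.h>
    #include <stdlib.h>

    /* New style: the comparator sees a pointer to the element. */
    static int
    cmp_direct(const void *a, const void *b)
    {
        return *(const int *) a - *(const int *) b;
    }

    /* Old style: a pointer to a pointer to the element. */
    static int
    cmp_indirect(const void *a, const void *b)
    {
        return **(const int *const *) a - **(const int *const *) b;
    }

    int
    main(void)
    {
        int vals[] = {3, 1, 2};
        const int *ptrs[] = {&vals[0], &vals[1], &vals[2]};

        /* Old list_qsort approach: sort an auxiliary pointer array. */
        qsort(ptrs, 3, sizeof(const int *), cmp_indirect);
        printf("via pointers: %d %d %d\n", *ptrs[0], *ptrs[1], *ptrs[2]);

        /* New list_sort approach: sort the elements in place. */
        qsort(vals, 3, sizeof(int), cmp_direct);
        printf("in place:     %d %d %d\n", vals[0], vals[1], vals[2]);
        return 0;
    }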
Michael Paquier [Tue, 16 Jul 2019 04:23:53 +0000 (13:23 +0900)]
Fix inconsistencies and typos in the tree
This is numbered take 7, and addresses a set of issues around:
- Fixes for typos and incorrect reference names.
- Removal of unneeded comments.
- Removal of unreferenced functions and structures.
- Fixes regarding variable name consistency.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/10bfd4ac-3e7c-40ab-2b2e-355ed15495e8@gmail.com
Peter Geoghegan [Mon, 15 Jul 2019 21:35:06 +0000 (14:35 -0700)]
Correct nbtsplitloc.c comment.
The logic just added by commit e3899ffd falls back on a 50:50 page split
in the event of a new item that's just to the right of our provisional
"many duplicates" split point. Fix a comment that incorrectly claimed
that the new item had to be just to the left of our provisional split
point.
Peter Geoghegan [Mon, 15 Jul 2019 20:19:13 +0000 (13:19 -0700)]
Fix pathological nbtree split point choice issue.
Specific ever-decreasing insertion patterns could cause successive
unbalanced nbtree page splits. Problem cases involve a large group of
duplicates to the left, and ever-decreasing insertions to the right.
To fix, detect the situation by considering the newitem offset before
performing a split using nbtsplitloc.c's "many duplicates" strategy. If
the new item was inserted just to the right of our provisional "many
duplicates" split point, infer ever-decreasing insertions and fall back
on a 50:50 (space delta optimal) split. This seems to barely affect
cases that already had acceptable space utilization.
An alternative fix also seems possible. Instead of changing
nbtsplitloc.c split choice logic, we could instead teach _bt_truncate()
to generate a new value for new high keys by interpolating from the
lastleft and firstright key values. That would certainly be a more
elegant fix, but it isn't suitable for backpatching.
Discussion: https://postgr.es/m/CAH2-WznCNvhZpxa__GqAa1fgQ9uYdVc=_apArkW2nc-K3O7_NA@mail.gmail.com
Backpatch: 12-, where the nbtree page split enhancements were introduced.
Tom Lane [Mon, 15 Jul 2019 17:41:58 +0000 (13:41 -0400)]
Represent Lists as expansible arrays, not chains of cons-cells.
Originally, Postgres Lists were a more or less exact reimplementation of
Lisp lists, which consist of chains of separately-allocated cons cells,
each having a value and a next-cell link. We'd hacked that once before
(commit d0b4399d8) to add a separate List header, but the data was still
in cons cells. That makes some operations -- notably list_nth() -- O(N),
and it's bulky because of the next-cell pointers and per-cell palloc
overhead, and it's very cache-unfriendly if the cons cells end up
scattered around rather than being adjacent.
In this rewrite, we still have List headers, but the data is in a
resizable array of values, with no next-cell links. Now we need at
most two palloc's per List, and often only one, since we can allocate
some values in the same palloc call as the List header. (Of course,
extending an existing List may require repalloc's to enlarge the array.
But this involves just O(log N) allocations not O(N).)
Of course this is not without downsides. The key difficulty is that
addition or deletion of a list entry may now cause other entries to
move, which it did not before.
For example, that breaks foreach() and sister macros, which historically
used a pointer to the current cons-cell as loop state. We can repair
those macros transparently by making their actual loop state be an
integer list index; the exposed "ListCell *" pointer is no longer state
carried across loop iterations, but is just a derived value. (In
practice, modern compilers can optimize things back to having just one
loop state value, at least for simple cases with inline loop bodies.)
In principle, this is a semantics change for cases where the loop body
inserts or deletes list entries ahead of the current loop index; but
I found no such cases in the Postgres code.
The change is not at all transparent for code that doesn't use foreach()
but chases lists "by hand" using lnext(). The largest share of such
code in the backend is in loops that were maintaining "prev" and "next"
variables in addition to the current-cell pointer, in order to delete
list cells efficiently using list_delete_cell(). However, we no longer
need a previous-cell pointer to delete a list cell efficiently. Keeping
a next-cell pointer doesn't work, as explained above, but we can improve
matters by changing such code to use a regular foreach() loop and then
using the new macro foreach_delete_current() to delete the current cell.
(This macro knows how to update the associated foreach loop's state so
that no cells will be missed in the traversal.)
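The hazard and its fix in miniature: deleting the current element
shifts later elements left, so the loop must step back one slot --
which is what foreach_delete_current() arranges behind the scenes.
A standalone C sketch (an analog, not the actual macro):

    #include <stdio.h>
    #include <string.h>

    static int
    delete_nth(int *a, int len, int i)
    {
        memmove(&a[i], &a[i + 1], (len - i - 1) * sizeof(int));
        return len - 1;
    }

    int
    main(void)
    {
        int a[] = {1, 2, 3, 4, 5, 6};
        int len = 6;

        for (int i = 0; i < len; i++)       /* foreach analog */
        {
            if (a[i] % 2 == 0)
            {
                len = delete_nth(a, len, i);
                i--;    /* what foreach_delete_current() does */
            }
        }
        for (int i = 0; i < len; i++)
            printf("%d ", a[i]);            /* prints: 1 3 5 */
        printf("\n");
        return 0;
    }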
There remains a nontrivial risk of code assuming that a ListCell *
pointer will remain good over an operation that could now move the list
contents. To help catch such errors, list.c can be compiled with a new
define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents
whenever that could possibly happen. This makes list operations
significantly more expensive so it's not normally turned on (though it
is on by default if USE_VALGRIND is on).
There are two notable API differences from the previous code:
* lnext() now requires the List's header pointer in addition to the
current cell's address.
* list_delete_cell() no longer requires a previous-cell argument.
These changes are somewhat unfortunate, but on the other hand code using
either function needs inspection to see if it is assuming anything
it shouldn't, so it's not all bad.
Programmers should be aware of these significant performance changes:
* list_nth() and related functions are now O(1); so there's no
major access-speed difference between a list and an array.
* Inserting or deleting a list element now takes time proportional to
the distance to the end of the list, due to moving the array elements.
(However, it typically *doesn't* require palloc or pfree, so except in
long lists it's probably still faster than before.) Notably, lcons()
used to be about the same cost as lappend(), but that's no longer true
if the list is long. Code that uses lcons() and list_delete_first()
to maintain a stack might usefully be rewritten to push and pop at the
end of the list rather than the beginning.
* There are now list_insert_nth...() and list_delete_nth...() functions
that add or remove a list cell identified by index. These have the
data-movement penalty explained above, but there's no search penalty.
* list_concat() and variants now copy the second list's data into
storage belonging to the first list, so there is no longer any
sharing of cells between the input lists. The second argument is
now declared "const List *" to reflect that it isn't changed.
This patch just does the minimum needed to get the new implementation
in place and fix bugs exposed by the regression tests. As suggested
by the foregoing, there's a fair amount of followup work remaining to
do.
Also, the ENABLE_LIST_COMPAT macros are finally removed in this
commit. Code using those should have been gone a dozen years ago.
Patch by me; thanks to David Rowley, Jesper Pedersen, and others
for review.
Thomas Munro [Mon, 15 Jul 2019 05:03:46 +0000 (17:03 +1200)]
Provide XLogRecGetFullXid().
In order to be able to work with FullTransactionId values during replay
without increasing the size of the WAL, infer the epoch. In general we
can't do that safely, but during replay we can because we know that
nextFullXid can't advance concurrently.
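The inference itself is simple arithmetic: take the epoch from
nextFullXid, and if the 32-bit xid in the record is numerically
greater than the low half of nextFullXid, the record must belong to
the previous epoch. A standalone C sketch (types and names imitate,
but are not, the backend's):

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Reconstruct a 64-bit xid, given the next full xid. */
    static uint64_t
    infer_full_xid(uint32_t xid, uint64_t next_full_xid)
    {
        uint32_t epoch = (uint32_t) (next_full_xid >> 32);
        uint32_t next_xid = (uint32_t) next_full_xid;

        /* xid can't be in the future => larger means prior epoch */
        if (xid > next_xid)
            epoch--;
        return ((uint64_t) epoch << 32) | xid;
    }

    int
    main(void)
    {
        uint64_t next = ((uint64_t) 5 << 32) | 100;  /* epoch 5 */

        printf("%" PRIx64 "\n", infer_full_xid(90, next));
        printf("%" PRIx64 "\n", infer_full_xid(4000000000u, next));
        return 0;
    }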
Prevent frontend code from seeing this new function, due to the above
restriction. Perhaps in future it will be possible to extract the value
entirely from independent WAL records, and then this restriction can be
lifted.
Author: Thomas Munro, based on earlier code from Andres Freund
Discussion: https://postgr.es/m/CA%2BhUKG%2BmLmuDjMi6o1dxkKvGRL56Y2Rz%2BiXAcrZV03G9ZuFQ8Q%40mail.gmail.com
Peter Eisentraut [Sun, 14 Jul 2019 12:30:27 +0000 (14:30 +0200)]
Add gen_random_uuid function
This adds a built-in function to generate UUIDs.
PostgreSQL hasn't had a built-in function to generate a UUID yet,
relying on external modules such as uuid-ossp and pgcrypto to provide
one. Now that we have a strong random number generator built-in, we
can easily provide a version 4 (random) UUID generation function.
This patch takes the existing function gen_random_uuid() from pgcrypto
and makes it a built-in function. The pgcrypto implementation now
internally redirects to the built-in one.
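Version-4 UUID generation is little more than 16 random bytes with
the version and variant bits patched in. A standalone C sketch using
a non-cryptographic source as a stand-in (the built-in function
draws from the strong generator instead):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int
    main(void)
    {
        uint8_t b[16];

        for (int i = 0; i < 16; i++)
            b[i] = rand() & 0xFF;   /* stand-in for strong randomness */

        b[6] = (b[6] & 0x0F) | 0x40;    /* version 4 */
        b[8] = (b[8] & 0x3F) | 0x80;    /* RFC 4122 variant */

        printf("%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
               "%02x%02x%02x%02x%02x%02x\n",
               b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7],
               b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]);
        return 0;
    }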
Add support for <-> (box, point) operator to GiST box_ops
Index-based calculation of this operator is exact, so the signature
of the gist_bbox_distance() function is changed so that the caller
is responsible for setting the *recheck flag.
Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
Some of the <-> operators between geometric types are missing their
commutators. This commit adds them. The motivation is upcoming kNN
support for some of those operators.
Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
Andrew Gierth [Sun, 14 Jul 2019 11:07:40 +0000 (12:07 +0100)]
Teach pg_stat_statements not to ignore FOR UPDATE clauses
Performance of a SELECT FOR UPDATE may be quite distinct from the
non-UPDATE version of the query, so treat all of the FOR UPDATE clause
as being significant for distinguishing queries.
Andrew Gierth and Vik Fearing, reviewed by Sergei Kornilov, Thomas
Munro, Tom Lane
Commit 578b229718e8f15fa779e20f086c4b6bb3776106 replaced it with a
concurrent "nextval" test. That version does not detect PostgreSQL's
incompatibility with xlc 13.1.3, so bring back an OID-based test that
does. Back-patch to v12, where that commit first appeared.
Michael Paquier [Sat, 13 Jul 2019 07:51:31 +0000 (16:51 +0900)]
Fix some inconsistencies in MSVC scripts
In configure scripts, --with-ossp-uuid is obsolete and has been
replaced by --with-uuid, and when building with the MSVC scripts it
needs to specify a path to its library builds. --with-perl also
needs to specify a path.
Thomas Munro [Sat, 13 Jul 2019 01:55:10 +0000 (13:55 +1200)]
Forward received condition variable signals on cancel.
After a process decides not to wait for a condition variable, it can
still consume a signal before it reaches ConditionVariableCancelSleep().
In that case, pass the signal on to another waiter if possible, so that
a signal doesn't go missing when there is another process ready to
receive it.
Author: Thomas Munro
Reviewed-by: Shawn Debnath
Discussion: https://postgr.es/m/CA%2BhUKGLQ_RW%2BXs8znDn36e-%2Bmq2--zrPemBqTQ8eKT-VO1OF4Q%40mail.gmail.com
Thomas Munro [Fri, 12 Jul 2019 22:35:34 +0000 (10:35 +1200)]
Warn if wal_level is too low when creating a publication.
Provide a hint to users that they need to increase wal_level before
subscriptions can work.
Author: Lucas Viecelli, with some adjustments by Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAPjy-57rn5Y9g4e5u--eSOP-7P4QrE9uOZmT2ZcUebF8qxsYhg%40mail.gmail.com
Tom Lane [Fri, 12 Jul 2019 20:24:59 +0000 (16:24 -0400)]
Fix get_actual_variable_range() to cope with broken HOT chains.
Commit 3ca930fc3 modified get_actual_variable_range() to use a new
"SnapshotNonVacuumable" snapshot type for selecting tuples that it
would consider valid. However, because that snapshot type can accept
recently-dead tuples, this caused a bug when using a recently-created
index: we might accept a recently-dead tuple that is an early member
of a broken HOT chain and does not actually match the index entry.
Then, the data extracted from the heap tuple would not necessarily be
an endpoint value of the column; it could even be NULL, leading to
get_actual_variable_range() itself reporting "found unexpected null
value in index". Even without an error, this could lead to poor
plan choices due to an erroneous notion of the endpoint value.
We can improve matters by changing the code to use the index-only
scan technique (which didn't exist when get_actual_variable_range was
originally written). If any of the tuples in a HOT chain are live
enough to satisfy SnapshotNonVacuumable, we take the data from the
index entry, ignoring what is in the heap. This fixes the problem
without changing the live-vs-dead-tuple behavior from what was
intended by commit 3ca930fc3.
A side benefit is that for static tables we might not have to touch
the heap at all (when the extremal value is in an all-visible page).
In addition, we can save some overhead by not having to create a
complete ExecutorState, and we don't need to run FormIndexDatum,
avoiding more cycles as well as the possibility of failure for
indexes on expressions. (I'm not sure that this code would ever
be used to determine the extreme value of an expression, in the
current state of the planner; but it's definitely possible that
lower-order columns of the selected index could be expressions.
So one could construct perhaps-artificial examples in which the
old code unexpectedly failed due to trying to compute an
expression's value for a now-dead row.)
Per report from Manuel Rigger. Back-patch to v11 where commit 3ca930fc3 came in.
David Rowley [Fri, 12 Jul 2019 07:12:38 +0000 (19:12 +1200)]
Fix RANGE partition pruning with multiple boolean partition keys
match_clause_to_partition_key incorrectly would return
PARTCLAUSE_UNSUPPORTED if a bool qual could not be matched to the current
partition key. This was a problem, as it causes the calling function to
discard the qual and not try to match it to any other partition key. If
there was another partition key which did match this qual, then the qual
would not be checked again and we could fail to prune some partitions.
The worst this could do was to cause partitions not to be pruned when they
could have been, so there was no danger of incorrect query results here.
Fix this by changing match_boolean_partition_clause to have it return a
PartClauseMatchStatus rather than a boolean value. This allows it to
communicate if the qual is unsupported or if it just does not match this
particular partition key, previously these two cases were treated the
same. Now, if match_clause_to_partition_key is unable to match the qual
to any other qual type then we can simply return the value from the
match_boolean_partition_clause call so that the calling function properly
treats the qual as either unmatched or unsupported.
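The crux of the fix is widening a yes/no answer to a three-way one,
so "does not match this key" no longer masquerades as "unsupported".
A standalone C sketch of the control flow (the enum values echo the
commit; the matching logic is a stand-in):

    #include <stdio.h>

    typedef enum
    {
        PARTCLAUSE_NOMATCH,         /* try the next partition key */
        PARTCLAUSE_MATCH,           /* usable for pruning on this key */
        PARTCLAUSE_UNSUPPORTED      /* give up on this qual entirely */
    } PartClauseMatchStatus;

    /* Stand-in: a bool qual matches only the key it references. */
    static PartClauseMatchStatus
    match_bool_clause(int qual_keyno, int partkeyno)
    {
        return (qual_keyno == partkeyno) ? PARTCLAUSE_MATCH
                                         : PARTCLAUSE_NOMATCH;
    }

    int
    main(void)
    {
        int nkeys = 2, qual_keyno = 1;

        for (int k = 0; k < nkeys; k++) /* keep trying other keys */
        {
            if (match_bool_clause(qual_keyno, k) == PARTCLAUSE_MATCH)
            {
                printf("qual matched partition key %d\n", k);
                break;
            }
        }
        return 0;
    }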
Reported-by: Rares Salcudean
Reviewed-by: Amit Langote
Backpatch-through: 11 where partition pruning was introduced
Discussion: https://postgr.es/m/CAHp_FN2xwEznH6oyS0hNTuUUZKp5PvegcVv=Co6nBXJ+mC7Y5w@mail.gmail.com
Tom Lane [Wed, 10 Jul 2019 18:32:28 +0000 (14:32 -0400)]
Reduce memory consumption for multi-statement query strings.
Previously, exec_simple_query always ran parse analysis, rewrite, and
planning in MessageContext, allowing all the data generated thereby
to persist until the end of processing of the whole query string.
That's fine for single-command strings, but if a client sends many
commands in a single simple-Query message, this strategy could result
in annoying memory bloat, as complained of by Andreas Seltenreich.
To fix, create a child context to do this work in, and reclaim it
after each command. But we only do so for parsetrees that are not
last in their query string. That avoids adding any memory management
overhead for the typical case of a single-command string. Memory
allocated for the last parsetree would be freed immediately after
finishing the command string anyway.
Similarly, adjust extension.c's execute_sql_string() to reclaim memory
after each command. In that usage, multi-command strings are the norm,
so it's a bit surprising that no one has yet complained of bloat ---
especially since the bloat extended to whatever data ProcessUtility
execution might leak.
Michael Paquier [Wed, 10 Jul 2019 06:14:54 +0000 (15:14 +0900)]
Fix variable initialization when using buffering build with GiST
This can cause valgrind to complain, as the flag marking a buffer as a
temporary copy was not getting initialized.
While on it, fill in with zeros newly-created buffer pages. This does
not matter when loading a block from a temporary file, but it makes the
push of an index tuple into a new buffer page safer.
This has been introduced by 1d27dcf, so backpatch all the way down to
9.4.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/15899-0d24fb273b3dd90c@postgresql.org
Backpatch-through: 9.4
David Rowley [Wed, 10 Jul 2019 04:03:04 +0000 (16:03 +1200)]
Fix missing calls to table_finish_bulk_insert during COPY, take 2
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM. However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition. Before 86b85044e, when we always called the heap functions,
there was no need to call heapam_finish_bulk_insert for partitions since
it only did any work when performing a copy without WAL. For partitioned
tables, this was unsupported anyway, so there was no issue. With
pluggable storage, we can't make any assumptions about what the table AM
might want to do in its equivalent function, so we'd better ensure we
always call table_finish_bulk_insert for each partition that's
received a row.
For now, we make the table_finish_bulk_insert call whenever we evict a
CopyMultiInsertBuffer out of the CopyMultiInsertInfo. This does mean
that it's possible that we call table_finish_bulk_insert multiple times
per partition, which is not a problem other than being an inefficiency.
Improving this requires a more invasive patch, so let's leave that for
another day.
This also changes things so that we no longer needlessly call
table_finish_bulk_insert when performing a COPY FROM for a non-partitioned
table when not using multi-inserts.
Reported-by: Robert Haas
Backpatch-through: 12
Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
Amit Kapila [Wed, 10 Jul 2019 02:22:51 +0000 (07:52 +0530)]
Fix few typos and minor wordsmithing in tableam comments.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
Thomas Munro [Tue, 9 Jul 2019 22:15:32 +0000 (10:15 +1200)]
Pass QueryEnvironment down to EvalPlanQual's EState.
Otherwise the executor can't see trigger transition tables during
EPQ evaluation. Fixes bug #15900 and almost certainly also #15720.
Back-patch to 10, where trigger transition tables landed.
Author: Alex Aktsipetrov
Reviewed-by: Thomas Munro, Tom Lane
Discussion: https://postgr.es/m/15900-bc482754fe8d7415%40postgresql.org
Discussion: https://postgr.es/m/15720-38c2b29e5d720187%40postgresql.org
We were creating the cloned triggers with an empty list of arguments,
losing the ones that had been specified by the user when creating the
trigger in the partitioned table. Repair.
Author: Patrick McHardy
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20190709130027.amr2cavjvo7rdvac@access1.trash.net
Discussion: https://postgr.es/m/15752-123bc90287986de4@postgresql.org
Robert Haas [Mon, 8 Jul 2019 18:51:53 +0000 (14:51 -0400)]
tableam: Provide helper functions for relation sizing.
Most block-based table AMs will need the exact same implementation of
the relation_size callback as the heap, and if they use a standard
page layout, they will likely need an implementation of the
relation_estimate_size callback that is very similar to that of the
heap. Rearrange to facilitate code reuse.
Patch by me, reviewed by Michael Paquier, Daniel Gustafsson, and
Álvaro Herrera.
Michael Paquier [Mon, 8 Jul 2019 04:15:09 +0000 (13:15 +0900)]
Fix inconsistencies in the code
This addresses a couple of issues in the code:
- Typos and inconsistencies in comments and function declarations.
- Removal of unreferenced function declarations.
- Removal of unnecessary compile flags.
- A cleanup error in regressplans.sh.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/0c991fdf-2670-1997-c027-772a420c4604@gmail.com
Tom Lane [Sat, 6 Jul 2019 15:25:37 +0000 (11:25 -0400)]
In pg_log_generic(), be more paranoid about preserving errno.
This code failed to account for the possibility that malloc() would
change errno, resulting in wrong output for %m, not to mention the
possibility of message truncation. Such a change is obviously
expected when malloc fails, but there's reason to fear that on some
platforms even a successful malloc call can modify errno.
In normal interactive mode, psql's log messages accidentally got a
"psql:" prefix that was not supposed to be there. This only happened
if there was no .psqlrc file being read, so it wasn't discovered for a
while. Fix this by adding the appropriate logging format
configuration call in the right code path.
Amit Kapila [Sat, 6 Jul 2019 06:11:23 +0000 (11:41 +0530)]
Add missing assertions for required table am callbacks.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
Tom Lane [Sat, 6 Jul 2019 03:56:34 +0000 (23:56 -0400)]
Add some test cases to improve test coverage of parse_expr.c.
I chanced to notice while thumbing through lcov reports that we had
exactly no coverage of BETWEEN SYMMETRIC, nor of current_time(N) and
localtime(N). Improve that.
parse_expr.c still has a pretty awful coverage number, but a large part
of that is due to lack of coverage of the operator_precedence_warning
logic. I have zero desire to write tests for that; I think ripping it
out would be more sensible at this point.
Tom Lane [Fri, 5 Jul 2019 18:17:27 +0000 (14:17 -0400)]
Remove dead encoding-conversion functions.
The code for conversions SQL_ASCII <-> MULE_INTERNAL and
SQL_ASCII <-> UTF8 was unreachable, because we long ago changed
the wrapper functions pg_do_encoding_conversion() et al so that
they have hard-wired behaviors for conversions involving SQL_ASCII.
(At least some of those fast paths date back to 2002, though it
looks like we may not have been totally consistent about this until
later.) Given the lack of complaints, it seems nobody is dissatisfied
with this state of affairs. Hence, let's just remove the unreachable
code.
Also, change CREATE CONVERSION so that it rejects attempts to
define such conversions. Since we consider that SQL_ASCII represents
lack of knowledge about the encoding in use, such a conversion would
be semantically dubious even if it were reachable.
Adjust a couple of regression test cases that had randomly decided
to rely on these conversion functions rather than any other ones.
Tomas Vondra [Fri, 5 Jul 2019 16:06:02 +0000 (18:06 +0200)]
Remove unused variable in statext_mcv_serialize()
The itemlen variable used to be referenced in multiple places, but since
reworking the serialization code it's used only in one assert. Fixed by
removing the variable and calling the macro from the assert directly.
Backpatch to 12, where this code was introduced.
Reported-by: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1zc_ovH9NZd_9ovuiEWkF9yX06URUDdXCmgDydf-bqB5A@mail.gmail.com
Tom Lane [Fri, 5 Jul 2019 16:32:36 +0000 (12:32 -0400)]
Add \warn command to psql.
This is like \echo except that the text is sent to stderr not stdout.
In passing, fix a pre-existing bug in \echo and \qecho: per documentation
the -n switch should only be recognized when it is the first argument,
but actually any argument matching "-n" was treated as a switch.
(Should we back-patch that?)
David Fetter (bug fix by me), reviewed by Fabien Coelho
Michael Paquier [Fri, 5 Jul 2019 01:47:32 +0000 (10:47 +0900)]
Update hardcoded DH parameters to IANA standards
The source defining the current fallback and hardcoded DH parameters
disappeared from the web a long time ago, and RFC 3526 defines the
most current Diffie-Hellman MODP groups, so update to those new values.
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/5E60AC9A-CB10-4851-9EF2-7209490A164C@yesql.se
Tomas Vondra [Thu, 4 Jul 2019 22:45:20 +0000 (00:45 +0200)]
Simplify pg_mcv_list (de)serialization
The serialization format of multivariate MCV lists included alignment in
order to allow direct access to part of the serialized data, but despite
multiple fixes (see for example commits d85e0f366a and ea4e1c0e8f) this
proved to be problematic.
This commit abandons alignment in the serialized format, and just copies
everything during deserialization. We now also track the amount of memory
needed after deserialization (including alignment), which allows us to
deserialize the MCV list in a single pass.
Bump catversion, as this affects contents of pg_statistic_ext_data.
Backpatch to 12, where multi-column MCV lists were introduced.
Author: Tomas Vondra
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2201.1561521148@sss.pgh.pa.us
Tomas Vondra [Thu, 4 Jul 2019 21:43:04 +0000 (23:43 +0200)]
Fix pg_mcv_list_items() to produce text[]
The function pg_mcv_list_items() returns values stored in MCV items. The
items may contain columns with different data types, so the function was
generating a text array-like representation, but in an ad-hoc way without
properly escaping various characters etc.
Fixed by simply building a text[] array, which also makes it easier to
use from queries etc.
Requires changes to pg_proc entry, so bump catversion.
Backpatch to 12, where multi-column MCV lists were introduced.
Author: Tomas Vondra
Reviewed-by: Dean Rasheed
Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development
Tomas Vondra [Thu, 4 Jul 2019 21:02:02 +0000 (23:02 +0200)]
Speed-up build of MCV lists with many distinct values
When building multi-column MCV lists, we compute base frequency for each
item, i.e. a product of per-column frequencies for values from the item.
As a value may be in multiple groups, the code was scanning the whole
array of groups while adding items to the MCV list. This works fine as
long as the number of distinct groups is small, but it's easy to trigger
O(N^2) behavior, especially after increasing the statistics target.
This commit precomputes frequencies for values in all columns, so that
when computing the base frequency it's enough to make a simple bsearch
lookup in the array.
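A standalone C sketch of the speed-up: precompute a sorted
per-column frequency table once, then each base-frequency lookup is
a bsearch rather than a scan over all groups (names and data are
illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        int    value;
        double freq;    /* per-column frequency of this value */
    } ValueFreq;

    static int
    valuefreq_cmp(const void *a, const void *b)
    {
        int x = ((const ValueFreq *) a)->value;
        int y = ((const ValueFreq *) b)->value;

        return (x > y) - (x < y);
    }

    int
    main(void)
    {
        /* Precomputed once per column, sorted by value. */
        ValueFreq col[] = {{1, 0.5}, {2, 0.3}, {3, 0.2}};
        ValueFreq key = {2, 0.0};
        ValueFreq *hit = bsearch(&key, col, 3, sizeof(ValueFreq),
                                 valuefreq_cmp);

        /* Base frequency is a product of per-column lookups. */
        if (hit)
            printf("freq(value=2) = %.1f\n", hit->freq);
        return 0;
    }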
Backpatch to 12, where multi-column MCV lists were introduced.
Unwind some workarounds for lack of portable int64 format specifier
Because there is no portable int64/uint64 format specifier and we
can't stick macros like INT64_FORMAT into the middle of a translatable
string, we have been using various workarounds that put the number to
be printed into a string buffer first. Now that we always use our own
sprintf(), we can rely on %lld and %llu to work, so we can use those.
This patch undoes this workaround in a few places where it was
egregiously verbose.
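The replacement idiom in miniature: cast the int64 to long long and
print it inline, safe now that PostgreSQL always uses its own
sprintf. A standalone C illustration (the _() macro here is a
stand-in for the gettext translation macro):

    #include <stdio.h>
    #include <stdint.h>

    #define _(x) (x)    /* stand-in for gettext() */

    int
    main(void)
    {
        int64_t processed = 123456789012345;

        /* Before: format into a side buffer with INT64_FORMAT,
         * then print that buffer with "%s".
         * After: a plain translatable string with %lld. */
        printf(_("%lld rows processed\n"), (long long) processed);
        return 0;
    }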
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAH2-Wz%3DWbNxc5ob5NJ9yqo2RMJ0q4HXDS30GVCobeCvC9A1L9A%40mail.gmail.com
Michael Paquier [Thu, 4 Jul 2019 07:08:09 +0000 (16:08 +0900)]
Introduce safer encoding and decoding routines for base64.c
This is a follow-up refactoring after 09ec55b and b674211, which
have proved that the encoding and decoding routines used by SCRAM
have a poor interface when it comes to checking for buffer
overflows. This adds
an extra argument in the shape of the length of the result buffer for
each routine, which is used for overflow checks when encoding or
decoding an input string. The original idea comes from Tom Lane.
As a result of that, the encoding routine can now fail, so all its
callers are adjusted to generate proper error messages in case of
problems.
On failure, the result buffer gets zeroed.
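A standalone C sketch of the safer interface shape: the encoder
takes the destination buffer's length, fails (returning -1) instead
of overrunning, and zeroes the buffer on failure as described above
(the function imitates the idea, not the backend's base64.c):

    #include <stdio.h>
    #include <string.h>

    static const char b64[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
        "0123456789+/";

    /* Encode srclen bytes into dst; return length or -1 on overflow. */
    static int
    b64_encode(const unsigned char *src, int srclen,
               char *dst, int dstlen)
    {
        int need = 4 * ((srclen + 2) / 3);
        int di = 0;

        if (need > dstlen)
        {
            memset(dst, 0, dstlen); /* zero result buffer on failure */
            return -1;
        }
        for (int i = 0; i < srclen; i += 3)
        {
            unsigned v = src[i] << 16;

            if (i + 1 < srclen) v |= src[i + 1] << 8;
            if (i + 2 < srclen) v |= src[i + 2];
            dst[di++] = b64[(v >> 18) & 63];
            dst[di++] = b64[(v >> 12) & 63];
            dst[di++] = (i + 1 < srclen) ? b64[(v >> 6) & 63] : '=';
            dst[di++] = (i + 2 < srclen) ? b64[v & 63] : '=';
        }
        return di;
    }

    int
    main(void)
    {
        char out[16];
        int n = b64_encode((const unsigned char *) "hi!", 3,
                           out, sizeof(out));

        printf("%.*s (%d bytes)\n", n, out, n);
        return 0;
    }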
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20190623132535.GB1628@paquier.xyz
Michael Paquier [Thu, 4 Jul 2019 02:33:42 +0000 (11:33 +0900)]
Simplify TAP tests of pg_dump for connection strings
The last set of scenarios did an initialization of nodes followed by an
extra command to set up the authentication policy with pg_regress
--config-auth. This configuration step can be integrated directly using
the option auth_extra from PostgresNode::init when initializing the
node, saving one extra command. On Windows, this also makes
pg_ident.conf more restrictive for the SSPI user mapping by removing
the entry for the OS user running the test, which is not needed
anyway.
Note that IPC::Run mishandles double quotes, so the restore user name
is changed to avoid them. This was already done in the test as a
later step, but not in a consistent way, causing the switch to
auth_extra to fail.
David Rowley [Thu, 4 Jul 2019 01:01:13 +0000 (13:01 +1200)]
Use appendStringInfoString and appendPQExpBufferStr where possible
This changes various places where appendPQExpBuffer was used in places
where it was possible to use appendPQExpBufferStr, and likewise for
appendStringInfo and appendStringInfoString. This is really just a
stylistic improvement, but there are also small performance gains to be
had from doing this.
Tom Lane [Wed, 3 Jul 2019 22:08:53 +0000 (18:08 -0400)]
Ensure plpgsql result tuples have the right composite type marking.
A function that is declared to return a named composite type must
return tuple datums that are physically marked as having that type.
The plpgsql code path that allowed directly returning an expanded-record
datum forgot to check that, so that an expanded record marked as type
RECORDOID could be returned if it had a physically-compatible tupdesc.
This'd be harmless, I think, if the record value never escaped the
current session --- but it's possible for it to get stored into a table,
and then subsequent sessions can't interpret the anonymous record type.
Fix by flattening the record into a tuple datum and overwriting its
type/typmod fields, if its declared type doesn't match the function's
declared type. (In principle it might be possible to just change the
expanded record's stored type ID info, but there are enough tricky
consequences that I didn't want to mess with that, especially not in
a back-patched bug fix.)
Per bug report from Steve Rogerson. Back-patch to v11 where the bug
was introduced.
David Rowley [Wed, 3 Jul 2019 11:44:54 +0000 (23:44 +1200)]
Don't remove surplus columns from GROUP BY for inheritance parents
d4c3a156c added code to remove columns that were not part of a table's
PRIMARY KEY constraint from the GROUP BY clause when all the primary key
columns were present in the group by. This is fine to do since we know
that there will only be one row per group coming from this relation.
However, the logic failed to consider inheritance parent relations. These
can have child relations without a primary key, but even if they did, they
could duplicate one of the parent's rows or one from another child
relation. In this case, those additional GROUP BY columns are required.
Fix this by disabling the optimization for inheritance parent tables.
In v11 and beyond, partitioned tables are fine since partitions cannot
overlap and before v11 partitioned tables could not have a primary key.
Reported-by: Manuel Rigger
Discussion: http://postgr.es/m/CA+u7OA7VLKf_vEr6kLF3MnWSA9LToJYncgpNX2tQ-oWzYCBQAw@mail.gmail.com
Backpatch-through: 9.6
postgres_fdw: Remove redundancy in postgresAcquireSampleRowsFunc().
Previously, in the loop in postgresAcquireSampleRowsFunc() to iterate
fetching rows from a given remote table, we redundantly 1) determined the
fetch size by parsing the table's server/table-level options and then 2)
constructed the fetch command; remove that redundancy.