The threadRun() function is very long and deeply nested. Extract the code to
print progress reports to a separate function, to make it slightly easier
to read.
Alvaro Herrera [Mon, 25 Mar 2019 15:16:07 +0000 (12:16 -0300)]
pgbench: Remove \cset
Partial revert of commit 6260cc550b0e, "pgbench: add \cset and \gset
commands".
While \gset is widely considered a useful and necessary tool for user-
defined benchmarks, \cset does not have as much value, and its
implementation was considered "not to be up to project standards"
(though I, Álvaro, can't quite understand exactly how). Therefore,
remove \cset.
Robert Haas [Mon, 25 Mar 2019 14:59:04 +0000 (10:59 -0400)]
Add progress reporting for CLUSTER and VACUUM FULL.
This uses the same progress reporting infrastructure added in commit c16dc1aca5e01e6acaadfcf38f5fc964a381dc62 and extends it to these
additional cases. We lack the ability to track the internal progress
of sorts and index builds so the information reported is
coarse-grained for some parts of the operation, but it still seems
like a significant improvement over having nothing at all.
Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me. A fair
amount of polishing also by me.
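As a rough illustration, the progress of a running CLUSTER or VACUUM FULL can
be watched from another session through the new pg_stat_progress_cluster view;
this is only a sketch, and the exact column list should be checked against the
documentation:

    -- monitor a CLUSTER / VACUUM FULL running in another backend
    SELECT pid, relid::regclass, command, phase,
           heap_tuples_scanned, heap_tuples_written
    FROM pg_stat_progress_cluster;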
Non-backtracking flex parsers work faster than backtracking ones. So, this
commit gets rid of backtracking in jsonpath_scan.l. That required explicit
handling of some cases, as well as manual backtracking in a few others. More
regression tests for numerics are added.
Discussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063
Author: John Naylor, Nikita Glukhov, Alexander Korotkov
Cosmetic changes for jsonpath_gram.y and jsonpath_scan.l
This commit includes formatting improvements, renamings and comments. Also,
it makes jsonpath_scan.l more uniform with our other lexers. Firstly, state
names are renamed to shorter alternatives. Secondly, the <INITIAL> prefix is
removed from the rules. The corresponding rules are moved to the tail, so
they would in any case apply only in the initial state.
Author: Alexander Korotkov Reviewed-by: John Naylor
Coverity complained that simple8b_encode() might read beyond the end of
the 'diffs' array, in the loop to encode the integers. That was a false
positive, because we never get into the loop in modes 0 or 1, and the
array is large enough for all the other modes. But I admit it's very
subtle, so it's not surprising that Coverity didn't see it, and it's not
very obvious to humans either. Refactor it, so that the second loop
re-computes the differences, instead of carrying them over from the first
loop in the 'diffs' array. This way, the 'diffs' array is not needed
anymore. It makes no measurable difference in performance, and seems more
straightforward this way.
Also, improve the comments in simple8b_encode(): fix the comment about its
return value that was flat-out wrong, and explain the condition when it
returns EMPTY_CODEWORD better.
In passing, move the 'selector' from the codeword's low bits to the
high bits. It doesn't matter much, but looking at the original paper, and
googling around for other Simple-8b implementations, that's how it's
usually done.
Andres Freund [Mon, 25 Mar 2019 07:13:42 +0000 (00:13 -0700)]
tableam: Add and use table_fetch_row_version().
This is essentially the tableam version of heapam_fetch(),
i.e. fetching a tuple identified by a tid, performing visibility
checks.
Note that this is different from table_index_fetch_tuple(), which is for
index lookups. It therefore has to handle a tid pointing to an earlier
version of a tuple if the AM uses an optimization like heap's HOT. Add
comments to that end.
This commit removes the stats_relation argument from heap_fetch, as
it's been unused for a long time.
Author: Andres Freund Reviewed-By: Haribabu Kommi
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Mon, 25 Mar 2019 01:55:34 +0000 (18:55 -0700)]
tableam: Use in CREATE TABLE AS and CREATE MATERIALIZED VIEW.
Previously those directly performed a heap_insert(). Use
table_insert() instead. The input slot of those routines is not of
the target relation - we could fix that by copying if necessary, but
that'd not be beneficial for performance. As those codepaths don't
access any AM specific tuple fields (say xmin/xmax), there's no need
to use an AM specific slot.
Author: Andres Freund Reviewed-By: Haribabu Kommi
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Tom Lane [Sun, 24 Mar 2019 23:15:37 +0000 (19:15 -0400)]
Un-hide most cascaded-drop details in regression test results.
Now that the ordering of DROP messages ought to be stable everywhere,
we should not need these kluges of hiding DETAIL output just to avoid
unstable ordering. Hiding it is not great for test coverage, so
let's undo that where possible.
In a small number of places, it's necessary to leave it in, for
example because the output might include a variable pg_temp_nnn
schema name. I also left things alone in places where the details
would depend on other regression test scripts, e.g. plpython_drop.sql.
Perhaps buildfarm experience will show this to be a bad idea,
but if so I'd like to know why.
Tom Lane [Sun, 24 Mar 2019 22:17:41 +0000 (18:17 -0400)]
Sort dependent objects before reporting them in DROP ROLE.
Commit 8aa9dd74b didn't quite finish the job in this area after all,
because DROP ROLE has a code path distinct from DROP OWNED BY, and
it was still reporting dependent objects in whatever order the index
scan returned them in.
Buildfarm experience shows that index ordering of equal-keyed objects is
significantly less stable than before in the wake of using heap TIDs as
tie-breakers. So if we try to hide the unstable ordering by suppressing
DETAIL reports, we're just going to end up having to do that for every
DROP that reports multiple objects. That's not great from a coverage
or problem-detection standpoint, and it's something we'll inevitably
forget in future patches, leading to more iterations of fixing-an-
unstable-result. So let's just bite the bullet and sort here too.
Peter Geoghegan [Sun, 24 Mar 2019 19:28:58 +0000 (12:28 -0700)]
Remove dead code from nbtsplitloc.c.
It doesn't make sense to consider the possibility that there will only
be one candidate split point when choosing among split points to find
the split with the lowest penalty. This is a vestige of an earlier
version of the patch that became commit fab25024.
Issue spotted while rereviewing coverage of the nbtree patch series
using gcov.
Tom Lane [Sun, 24 Mar 2019 19:13:20 +0000 (15:13 -0400)]
Avoid double-free in vacuumlo error path.
The code would do "PQclear(res)" twice if lo_unlink failed, evidently
due to careless thinking about how far out a "break" would break.
Remove the extra PQclear and adjust the loop logic so that we'll fall
out of both levels of loop after an error, as was clearly the intent.
Spotted by Coverity. I have no idea why it took this long to notice,
since the bug has been there since commit 67ccbb080. Accordingly,
back-patch to all supported branches.
Michael Paquier [Sun, 24 Mar 2019 12:00:35 +0000 (21:00 +0900)]
Make current_logfiles use permissions assigned to files in data directory
Since its introduction in 19dc233c, current_logfiles has been assigned
the same permissions as a log file, which can be enforced with
log_file_mode. This setup can lead to incompatibility problems with
group access permissions as current_logfiles is not located in the log
directory, but at the root of the data folder. Hence, if group
permissions are used but log_file_mode is more restrictive, a backup
with a user in the group having read access could fail even if the log
directory is located outside of the data folder.
Per discussion with the folks mentioned below, we have concluded that
current_logfiles should not be treated as a log file as it only stores
metadata related to log files, and that it should use the same
permissions as all other files in the data directory. This solution has
the merit of being simple and fixes all the interaction problems between
group access and log_file_mode.
Author: Haribabu Kommi Reviewed-by: Stephen Frost, Robert Haas, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com
Backpatch-through: 11, where group access has been added.
Peter Eisentraut [Sun, 24 Mar 2019 09:33:14 +0000 (10:33 +0100)]
Transaction chaining
Add command variants COMMIT AND CHAIN and ROLLBACK AND CHAIN, which
start new transactions with the same transaction characteristics as the
just finished one, per SQL standard.
Support for transaction chaining in PL/pgSQL is also added. This
functionality is especially useful when running COMMIT in a loop in
PL/pgSQL.
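A minimal sketch of the new syntax (illustrative only; the audit_log table is
made up):

    CREATE TABLE audit_log (msg text);
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    INSERT INTO audit_log VALUES ('step 1');
    COMMIT AND CHAIN;    -- commit, then immediately start a new transaction
                         -- with the same characteristics (isolation level etc.)
    INSERT INTO audit_log VALUES ('step 2');
    ROLLBACK AND CHAIN;  -- abort, then chain into another such transaction
    COMMIT;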
Andres Freund [Sun, 24 Mar 2019 02:55:57 +0000 (19:55 -0700)]
tableam: Add tuple_{insert, delete, update, lock} and use.
This adds new, required, table AM callbacks for insert/delete/update
and lock_tuple. To be able to reasonably use those, the EvalPlanQual
mechanism had to be adapted, moving more logic into the AM.
Previously both delete/update/lock call-sites and the EPQ mechanism had
to have awareness of the specific tuple format to be able to fetch the
latest version of a tuple. Obviously that needs to be abstracted
away. To do so, move the logic that finds the latest row version into
the AM. lock_tuple has a new flag argument,
TUPLE_LOCK_FLAG_FIND_LAST_VERSION, that forces it to lock the last
version, rather than the current one. It'd have been possible to do
so via a separate callback as well, but finding the last version
usually also necessitates locking the newest version, making it
sensible to combine the two. This replaces the previous use of
EvalPlanQualFetch(). Additionally HeapTupleUpdated, which previously
signaled either a concurrent update or delete, is now split into two,
to avoid callers needing AM specific knowledge to differentiate.
The move of finding the latest row version into tuple_lock means that
encountering a row concurrently moved into another partition will now
raise an error about "tuple to be locked" rather than "tuple to be
updated/deleted" - which is accurate, as that always happens when
locking rows. While possibly slightly less helpful for users, it seems
like an acceptable trade-off.
As part of this commit HTSU_Result has been renamed to TM_Result, and
its members have been expanded to differentiate between updating and
deleting. HeapUpdateFailureData has been renamed to TM_FailureData.
The interface to speculative insertion is changed so nodeModifyTable.c
does not have to set the speculative token itself anymore. Instead
there's a version of tuple_insert, tuple_insert_speculative, that
performs the speculative insertion (without requiring a flag to signal
that fact), and the speculative insertion is either made permanent
with table_complete_speculative(succeeded = true) or aborted with
succeeded = false.
Note that multi_insert is not yet routed through tableam, nor is
COPY. Changing multi_insert requires changes to copy.c that are large
enough to better be done separately.
Similarly, although simpler, CREATE TABLE AS and CREATE MATERIALIZED
VIEW are also only going to be adjusted in a later commit.
Author: Andres Freund and Haribabu Kommi
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
Tom Lane [Sat, 23 Mar 2019 20:24:30 +0000 (16:24 -0400)]
Accept XML documents when xmloption = content, as required by SQL:2006+.
Previously we were using the SQL:2003 definition, which doesn't allow
this, but that creates a serious dump/restore gotcha: there is no
setting of xmloption that will allow all valid XML data. Hence,
switch to the 2006 definition.
Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
in the input and switch to DOCUMENT parsing mode. This should not
cost much, because <!DOCTYPE> should be close to the front of the
input if it's there at all. It's possible that this causes the
error messages for malformed input to be slightly different than
they were before, if said input includes <!DOCTYPE>; but that does
not seem like a big problem.
In passing, buy back a few cycles in parsing of large XML documents
by not doing strlen() of the whole input in parse_xml_decl().
Back-patch because dump/restore failures are not nice. This change
shouldn't break any cases that worked before, so it seems safe to
back-patch.
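For illustration (assuming the behavior described above), a value containing a
<!DOCTYPE> declaration is now accepted even while xmloption = content:

    SET xmloption = content;
    SELECT '<!DOCTYPE book><book><title>x</title></book>'::xml;
    -- before this change, the cast was rejected unless xmloption = document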
Peter Geoghegan [Sat, 23 Mar 2019 20:49:53 +0000 (13:49 -0700)]
Suppress DETAIL output from an event_trigger test.
Suppress 3 lines of unstable DETAIL output from a DROP ROLE statement in
event_trigger.sql. This is further cleanup for commit dd299df8.
Note that the event_trigger test instability issue is very similar to
the recently suppressed foreign_data test instability issue. Both
issues involve DETAIL output for a DROP ROLE statement that needed to be
changed as part of dd299df8.
Peter Geoghegan [Sat, 23 Mar 2019 18:01:53 +0000 (11:01 -0700)]
Add nbtree high key "continuescan" optimization.
Teach nbtree forward index scans to check the high key before moving to
the right sibling page in the hope of finding that it isn't actually
necessary to do so. The new check may indicate that the scan definitely
cannot find matching tuples to the right, ending the scan immediately.
We already opportunistically force a similar "continuescan"-oriented
key check of the final non-pivot tuple when it's clear that it cannot be
returned to the scan due to being dead-to-all. The new high key check
is complementary.
The new approach for forward scans is more effective than checking the
final non-pivot tuple, especially with composite indexes and non-unique
indexes. The improvements to the logic for picking a split point added
by commit fab25024 make it likely that relatively dissimilar high keys
will appear on a page. A distinguishing key value that can only appear
on non-pivot tuples on the right sibling page will often be present in
leaf page high keys.
Since forcing the final item to be key checked no longer makes any
difference in the case of forward scans, the existing extra key check is
now only used for backwards scans. Backward scans continue to
opportunistically check the final non-pivot tuple, which is actually the
first non-pivot tuple on the page (not the last).
Note that even pg_upgrade'd v3 indexes make use of this optimization.
Author: Peter Geoghegan, Heikki Linnakangas Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/CAH2-WzkOmUduME31QnuTFpimejuQoiZ-HOf0pOWeFZNhTMctvA@mail.gmail.com
Andres Freund [Sat, 23 Mar 2019 02:55:23 +0000 (19:55 -0700)]
Expand EPQ tests for UPDATEs and DELETEs
Previously there was basically no coverage for UPDATEs encountering
deleted rows, and no coverage for DELETE having to perform EPQ. That's
problematic for an upcoming commit in which EPQ is taught to integrate
with tableams. Also, there was no test for UPDATE to encounter a row
UPDATEd into another partition.
Michael Paquier [Fri, 22 Mar 2019 23:37:36 +0000 (08:37 +0900)]
Add option -N/--no-sync to pg_checksums
This is an option consistent with what pg_dump, pg_rewind and
pg_basebackup provide, useful for reducing the I/O effort when testing
things; it is not meant to be used in a production environment.
Author: Michael Paquier Reviewed-by: Michael Banck, Fabien Coelho, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
Michael Paquier [Fri, 22 Mar 2019 23:12:55 +0000 (08:12 +0900)]
Add options to enable and disable checksums in pg_checksums
An offline cluster can now work with more modes in pg_checksums:
- --enable enables checksums in a cluster, updating all blocks with a
correct checksum, and updating the control file at the end.
- --disable disables checksums in a cluster, updating only the control
file.
- --check verifies the checksums of a cluster, and is the default mode
used if none is specified.
When running --enable or --disable, the data folder gets fsync'd for
durability, and then it is followed by a control file update and flush
to keep the operation consistent should the tool be interrupted, killed
or the host unplugged. If no mode is specified in the options, then
--check is used for compatibility with older versions of pg_checksums
(named pg_verify_checksums in v11 where it was introduced).
Author: Michael Banck, Michael Paquier Reviewed-by: Fabien Coelho, Magnus Hagander, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
Tom Lane [Fri, 22 Mar 2019 18:56:05 +0000 (14:56 -0400)]
Rearrange make_partitionedrel_pruneinfo to avoid work when we can't prune.
Postpone most of the effort of constructing PartitionedRelPruneInfos
until after we have found out whether run-time pruning is needed at all.
This costs very little duplicated effort (basically just an extra
find_base_rel() call per partition) and saves quite a bit when we
can't do run-time pruning.
Also, merge the first loop (for building relid_subpart_map) into
the second loop, since we don't need the map to be valid during
that loop.
Peter Geoghegan [Fri, 22 Mar 2019 18:34:28 +0000 (11:34 -0700)]
Go back to suppressing foreign_data DETAIL test output.
This is almost a straight revert of commit fff518d, which itself was a
revert of 7d3bf73ac.
It turns out that commit 8aa9dd74, which sorted dependent objects before
deletion in DROP OWNED BY, was not sufficient to make all remaining
unstable DETAIL output stable. Unstable DETAIL output from DROP ROLE
was not affected, because that happens to use a different code path. It
doesn't seem worthwhile to fix the other code path at this time.
Tom Lane [Fri, 22 Mar 2019 18:16:50 +0000 (14:16 -0400)]
Don't copy PartitionBoundInfo in set_relation_partition_info.
I (tgl) remain dubious that it's a good idea for PartitionDirectory
to hold a pin on a relcache entry throughout planning, rather than
copying the data or using some kind of refcount scheme. However, it's
certainly the responsibility of the PartitionDirectory code to ensure
that what it's handing back is a stable data structure, not that of
its caller. So this is a pretty clear oversight in commit 898e5e329,
and one that can cost a lot of performance when there are many
partitions.
Amit Langote (extracted from a much larger patch set)
Fix yet more portability bugs in integerset and its tests.
There were more large constants that needed UINT64CONST. And one variable
was declared as "int", when it needed to be uint64. These bugs were only
visible on 32-bit systems; clearly I should've tested on one, given that
this code does a lot of work with 64-bit integers.
Also, in the "huge distances" test, the code created some values with
random distances between them, but the test logic didn't take into account
the possibility that the random distance was exactly 1. That never actually
happens with the seed we're using, but let's be tidy.
Buildfarm member 'woodlouse' failed one of the tests, and I'm not sure
which test failed. Better to print the names of the tests, so that it
will appear in the regression.diffs on failure.
To do this, we scan the GiST index twice. In the first pass we make note of
empty leaf pages and internal pages. In the second pass we scan through the
internal pages, looking for downlinks to the empty pages.
Deleting internal pages is still not supported; as in nbtree, the last
child of an internal page is never deleted. That means that if you have a
workload where new keys are always inserted into a different area than where
old keys are removed, the index will still grow without bound. But the rate
of growth will be an order of magnitude slower than before.
Add IntegerSet, to hold large sets of 64-bit ints efficiently.
The set is implemented as a B-tree, with a compact representation for leaf
items using the Simple-8b algorithm, so that clusters of nearby values use
less memory.
The IntegerSet isn't used for anything yet, aside from the test code, but
we have two patches in the works that would benefit from this: A patch to
allow GiST vacuum to delete empty pages, and a patch to reduce heap
VACUUM's memory usage, by storing the list of dead TIDs more efficiently
and lifting the 1 GB limit on its size.
This includes a unit test module, in src/test/modules/test_integerset.
It can be used to verify correctness, as a regression test, but if you run
it manually, it can also print memory usage and execution time of some of
the tests.
Author: Heikki Linnakangas, Andrey Borodin Reviewed-by: Julien Rouhaud
Discussion: https://www.postgresql.org/message-id/b5e82599-1966-5783-733c-1a947ddb729f@iki.fi
Peter Eisentraut [Fri, 22 Mar 2019 11:09:32 +0000 (12:09 +0100)]
Collations with nondeterministic comparison
This adds a flag "deterministic" to collations. If that is false,
such a collation disables various optimizations that assume that
strings are equal only if they are byte-wise equal. That then allows
use cases such as case-insensitive or accent-insensitive comparisons
or handling of strings with different Unicode normal forms.
This functionality is only supported with the ICU provider. At least
glibc doesn't appear to have any locales that work in a
nondeterministic way, so it's not worth supporting this for the libc
provider.
The term "deterministic comparison" in this context is from Unicode
Technical Standard #10
(https://unicode.org/reports/tr10/#Deterministic_Comparison).
This patch makes changes in three areas:
- CREATE COLLATION DDL changes and system catalog changes to support
this new flag.
- Many executor nodes and auxiliary code are extended to track
collations. Previously, this code would just throw away collation
information, because the eventually-called user-defined functions
didn't use it since they only cared about equality, which didn't
need collation information.
- String data type functions that do equality comparisons and hashing
are changed to take the (non-)deterministic flag into account. For
comparison, this just means skipping various shortcuts and tie
breakers that use byte-wise comparison. For hashing, we first need
to convert the input string to a canonical "sort key" using the ICU
analogue of strxfrm().
Reviewed-by: Daniel Verite <daniel@manitou-mail.org> Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7@2ndquadrant.com
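A sketch of the kind of use case this enables, assuming a build with ICU (the
locale string and table are just examples):

    CREATE COLLATION case_insensitive (
        provider = icu,
        locale = 'und-u-ks-level2',
        deterministic = false
    );
    CREATE TABLE users (name text COLLATE case_insensitive);
    INSERT INTO users VALUES ('Alice'), ('ALICE');
    SELECT count(*) FROM users WHERE name = 'alice';  -- counts both rows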
Michael Paquier [Fri, 22 Mar 2019 08:27:38 +0000 (17:27 +0900)]
Fix crash with pg_partition_root
Trying to call the function with the top-most parent of a partition tree
was leading to a crash. In this case the correct result is to return
the top-most parent itself.
Reported-by: Álvaro Herrera
Author: Michael Paquier Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20190322032612.GA323@alvherre.pgsql
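Hedged illustration of the fixed case (table names are made up):

    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE TABLE child PARTITION OF parent FOR VALUES IN (1);
    SELECT pg_partition_root('child');   -- returns 'parent'
    SELECT pg_partition_root('parent');  -- now returns 'parent' instead of crashing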
Alvaro Herrera [Thu, 21 Mar 2019 21:34:29 +0000 (18:34 -0300)]
Fix dependency recording bug for partitioned PKs
When DefineIndex recurses to create constraints on partitions, it needs
to use the value returned by index_constraint_create to set up partition
dependencies. However, in the course of fixing the DEPENDENCY_INTERNAL_AUTO
mess, commit 1d92a0c9f7dd introduced some code to that function that
clobbered the return value, causing the recorded OID to be of the wrong
object. Close examination of pg_depend after creating the tables leads
to indescribable objects :-( My sin (in commit bdc3d7fa2376, while
preparing for DDL deparsing in event triggers) was to use a variable
name for the return value that's typically used for throwaway objects in
dependency-setting calls ("referenced"). Fix by changing the variable
names to match extended practice (the return value is "myself" rather
than "referenced".)
The pg_upgrade test notices the problem (in an indirect way: the pg_dump
outputs are in different order), but only if you create the objects in a
specific way that wasn't being used in the existing tests. Add a stanza
to leave some objects around that shows the bug.
Catversion bump because preexisting databases might have bogus pg_depend
entries.
Tom Lane [Thu, 21 Mar 2019 15:51:55 +0000 (11:51 -0400)]
Improve error reporting for DROP FUNCTION/PROCEDURE/AGGREGATE/ROUTINE.
These commands allow the argument type list to be omitted if there is
just one object that matches by name. However, if that syntax was
used with DROP IF EXISTS and there was more than one match, you got
a "function ... does not exist, skipping" notice message rather than a
truthful complaint about the ambiguity. This was basically due to
poor factorization and a rats-nest of logic, so refactor the relevant
lookup code to make it cleaner.
Note that this amounts to narrowing the scope of which sorts of
error conditions IF EXISTS will bypass. Per discussion, we only
intend it to skip no-such-object cases, not multiple-possible-matches
cases.
Per bug #15572 from Ash Marath. Although this definitely seems like
a bug, it's not clear that people would thank us for changing the
behavior in minor releases, so no back-patch.
David Rowley, reviewed by Julien Rouhaud and Pavel Stehule
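A hypothetical example of the behavior change:

    CREATE FUNCTION f(int) RETURNS int LANGUAGE sql AS 'SELECT $1';
    CREATE FUNCTION f(text) RETURNS text LANGUAGE sql AS 'SELECT $1';
    DROP FUNCTION IF EXISTS f;
    -- before: NOTICE about the function not existing, skipping
    -- now: an error complaining that more than one function matches the name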
Thomas Munro [Thu, 21 Mar 2019 02:19:03 +0000 (15:19 +1300)]
Add DNS SRV support for LDAP server discovery.
LDAP servers can be advertised on a network with RFC 2782 DNS SRV
records. The OpenLDAP command-line tools automatically try to find
servers that way, if no server name is provided by the user. Teach
PostgreSQL to do the same using OpenLDAP's support functions, when
building with OpenLDAP.
For now, we assume that HAVE_LDAP_INITIALIZE (an OpenLDAP extension
available since OpenLDAP 2.0 and also present in Apple LDAP) implies
that you also have ldap_domain2hostlist() (which arrived in the same
OpenLDAP version and is also present in Apple LDAP).
Author: Thomas Munro Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CAEepm=2hAnSfhdsd6vXsM6VZVN0br-FbAZ-O+Swk18S5HkCP=A@mail.gmail.com
Tom Lane [Wed, 20 Mar 2019 22:06:29 +0000 (18:06 -0400)]
Sort the dependent objects before deletion in DROP OWNED BY.
This finishes a task we left undone in commit f1ad067fc, by extending
the delete-in-descending-OID-order rule to deletions triggered by
DROP OWNED BY. We've coped with machine-dependent deletion orders
one time too many, and the new issues caused by Peter G's recent
nbtree hacking seem like the last straw.
Peter Geoghegan [Wed, 20 Mar 2019 20:38:38 +0000 (13:38 -0700)]
Suppress DETAIL output from a foreign_data test.
Unstable sort order related to changes to nbtree from commit dd299df8
can cause two lines of DETAIL output to be in opposite-of-expected
order. Suppress the output using the same VERBOSITY hack that is used
elsewhere in the foreign_data tests.
Note that the same foreign_data.out DETAIL output was mechanically
updated by commit dd299df8. Only a few such changes were required,
though.
Alvaro Herrera [Wed, 20 Mar 2019 20:28:43 +0000 (17:28 -0300)]
Restore RI trigger sanity check
I unnecessarily removed this check in 3de241dba86f because I
misunderstood what the final representation of constraints across a
partitioning hierarchy was to be. Put it back (in both branches).
Peter Geoghegan [Wed, 20 Mar 2019 17:41:36 +0000 (10:41 -0700)]
Allow amcheck to re-find tuples using new search.
Teach contrib/amcheck's bt_index_parent_check() function to take
advantage of the uniqueness property of heapkeyspace indexes in support
of a new verification option: non-pivot tuples (non-highkey tuples on
the leaf level) can optionally be re-found using a new search for each,
that starts from the root page. If a tuple cannot be re-found, report
that the index is corrupt.
The new "rootdescend" verification option is exhaustive, and can
therefore make a call to bt_index_parent_check() take a lot longer.
Re-finding tuples during verification is mostly intended as an option
for backend developers, since the corruption scenarios that it alone is
uniquely capable of detecting seem fairly far-fetched.
For example, "rootdescend" verification is much more likely to detect
corruption of the least significant byte of a key from a pivot tuple in
the root page of a B-Tree that already has at least three levels.
Typically, only a few tuples on a cousin leaf page are at risk of
"getting overlooked" by index scans in this scenario. The corrupt key
in the root page is only slightly corrupt: corrupt enough to give wrong
answers to some queries, and yet not corrupt enough to allow the problem
to be detected without verifying agreement between the leaf page and the
root page, skipping at least one internal page level. The existing
bt_index_parent_check() checks never cross more than a single level.
Author: Peter Geoghegan Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/CAH2-Wz=yTWnVu+HeHGKb2AGiADL9eprn-cKYAto4MkKOuiGtRQ@mail.gmail.com
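A sketch of invoking the new option; the "rootdescend" parameter name follows
the commit message, and the table and index are made up:

    CREATE EXTENSION IF NOT EXISTS amcheck;
    CREATE TABLE t (a int PRIMARY KEY);
    SELECT bt_index_parent_check('t_pkey'::regclass,
                                 heapallindexed => true,
                                 rootdescend => true);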
Peter Geoghegan [Wed, 20 Mar 2019 17:12:19 +0000 (10:12 -0700)]
Consider secondary factors during nbtree splits.
Teach nbtree to give some consideration to how "distinguishing"
candidate leaf page split points are. This should not noticeably affect
the balance of free space within each half of the split, while still
making suffix truncation truncate away significantly more attributes on
average.
The logic for choosing a leaf split point now uses a fallback mode in
the case where the page is full of duplicates and it isn't possible to
find even a minimally distinguishing split point. When the page is full
of duplicates, the split should pack the left half very tightly, while
leaving the right half mostly empty. Our assumption is that logical
duplicates will almost always be inserted in ascending heap TID order
with v4 indexes. This strategy leaves most of the free space on the
half of the split that will likely be where future logical duplicates of
the same value need to be placed.
The number of cycles added is not very noticeable. This is important
because deciding on a split point takes place while at least one
exclusive buffer lock is held. We avoid using authoritative insertion
scankey comparisons to save cycles, unlike suffix truncation proper. We
use a faster binary comparison instead.
Note that even pg_upgrade'd v3 indexes make use of these optimizations.
Benchmarking has shown that even v3 indexes benefit, despite the fact
that suffix truncation will only truncate non-key attributes in INCLUDE
indexes. Grouping relatively similar tuples together is beneficial in
and of itself, since it reduces the number of leaf pages that must be
accessed by subsequent index scans.
Author: Peter Geoghegan Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/CAH2-WzmmoLNQOj9mAD78iQHfWLJDszHEDrAzGTUMG3mVh5xWPw@mail.gmail.com
Peter Geoghegan [Wed, 20 Mar 2019 17:04:01 +0000 (10:04 -0700)]
Make heap TID a tiebreaker nbtree index column.
Make nbtree treat all index tuples as having a heap TID attribute.
Index searches can distinguish duplicates by heap TID, since heap TID is
always guaranteed to be unique. This general approach has numerous
benefits for performance, and is prerequisite to teaching VACUUM to
perform "retail index tuple deletion".
Naively adding a new attribute to every pivot tuple has unacceptable
overhead (it bloats internal pages), so suffix truncation of pivot
tuples is added. This will usually truncate away the "extra" heap TID
attribute from pivot tuples during a leaf page split, and may also
truncate away additional user attributes. This can increase fan-out,
especially in a multi-column index. Truncation can only occur at the
attribute granularity, which isn't particularly effective, but works
well enough for now. A future patch may add support for truncating
"within" text attributes by generating truncated key values using new
opclass infrastructure.
Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
treat heap TID as a tiebreaker attribute, or will have pivot tuples
undergo suffix truncation during a leaf page split (on-disk
compatibility with versions 2 and 3 is preserved). Upgrades to version
4 cannot be performed on-the-fly, unlike upgrades from version 2 to
version 3. contrib/amcheck continues to work with version 2 and 3
indexes, while also enforcing stricter invariants when verifying version
4 indexes. These stricter invariants are the same invariants described
by "3.1.12 Sequencing" from the Lehman and Yao paper.
A later patch will enhance the logic used by nbtree to pick a split
point. This patch is likely to negatively impact performance without
smarter choices around the precise point to split leaf pages at. Making
these two mostly-distinct sets of enhancements into distinct commits
seems like it might clarify their design, even though neither commit is
particularly useful on its own.
The maximum allowed size of new tuples is reduced by an amount equal to
the space required to store an extra MAXALIGN()'d TID in a new high key
during leaf page splits. The user-facing definition of the "1/3 of a
page" restriction is already imprecise, and so does not need to be
revised. However, there should be a compatibility note in the v12
release notes.
Author: Peter Geoghegan Reviewed-By: Heikki Linnakangas, Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
Peter Geoghegan [Wed, 20 Mar 2019 16:30:57 +0000 (09:30 -0700)]
Refactor nbtree insertion scankeys.
Use dedicated struct to represent nbtree insertion scan keys. Having a
dedicated struct makes the difference between search type scankeys and
insertion scankeys a lot clearer, and simplifies the signature of
several related functions. This is based on a suggestion by Andrey
Lepikhov.
Streamline how unique index insertions cache binary search progress.
Cache the state of in-progress binary searches within _bt_check_unique()
for later instead of having callers avoid repeating the binary search in
an ad-hoc manner. This makes it easy to add a new optimization:
_bt_check_unique() now falls out of its loop immediately in the common
case where it's already clear that there couldn't possibly be a
duplicate.
The new _bt_check_unique() scheme makes it a lot easier to manage cached
binary search effort afterwards, from within _bt_findinsertloc(). This
is needed for the upcoming patch to make nbtree tuples unique by
treating heap TID as a final tiebreaker column. Unique key binary
searches need to restore lower and upper bounds. They cannot simply
continue to use the >= lower bound as the offset to insert at, because
the heap TID tiebreaker column must be used in comparisons for the
restored binary search (unlike the original _bt_check_unique() binary
search, where scankey's heap TID column must be omitted).
Author: Peter Geoghegan, Heikki Linnakangas Reviewed-By: Heikki Linnakangas, Andrey Lepikhov
Discussion: https://postgr.es/m/CAH2-WzmE6AhUdk9NdWBf4K3HjWXZBX3+umC7mH7+WDrKcRtsOw@mail.gmail.com
The jsonpath grammar and scanner are both quite small. It isn't worth the
complexity to compile them separately. This commit makes the grammar and
scanner be compiled
at once. Therefore, jsonpath_gram.h and jsonpath_scan.h are no longer needed.
This commit also does some reorganization of code in jsonpath_gram.y.
Remove ambiguity for jsonb_path_match() and jsonb_path_exists()
There are 2-argument and 4-argument versions of jsonb_path_match() and
jsonb_path_exists(). But the 4-argument versions have optional 3rd and 4th
arguments, which leads to ambiguity. At the same time, the 2-argument versions
are needed only for the @@ and @? operators. So, rename the 2-argument
versions to remove the ambiguity.
Tom Lane [Tue, 19 Mar 2019 20:20:20 +0000 (16:20 -0400)]
Restructure libpq's handling of send failures.
Originally, if libpq got a failure (e.g., ECONNRESET) while trying to
send data to the server, it would just report that and wash its hands
of the matter. It was soon found that that wasn't a very pleasant way
of coping with server-initiated disconnections, so we introduced a hack
(pqHandleSendFailure) in the code that sends queries to make it peek
ahead for server error reports before reporting the send failure.
It now emerges that related cases can occur during connection setup;
in particular, as of TLS 1.3 it's unsafe to assume that SSL connection
failures will be reported by SSL_connect rather than during our first
send attempt. We could have fixed that in a hacky way by applying
pqHandleSendFailure after a startup packet send failure, but
(a) pqHandleSendFailure explicitly disclaims suitability for use in any
state except query startup, and (b) the problem still potentially exists
for other send attempts in libpq.
Instead, let's fix this in a more general fashion by eliminating
pqHandleSendFailure altogether, and instead arranging to postpone
all reports of send failures in libpq until after we've made an
attempt to read and process server messages. The send failure won't
be reported at all if we find a server message or detect input EOF.
(Note: this removes one of the reasons why libpq typically overwrites,
rather than appending to, conn->errorMessage: pqHandleSendFailure needed
that behavior so that the send failure report would be replaced if we
got a server message or read failure report. Eventually I'd like to get
rid of that overwrite behavior altogether, but today is not that day.
For the moment, pqSendSome is assuming that its callees will overwrite
not append to conn->errorMessage.)
Possibly this change should get back-patched someday; but it needs
testing first, so let's not consider that till after v12 beta.
Tom Lane [Tue, 19 Mar 2019 16:49:27 +0000 (12:49 -0400)]
Make checkpoint requests more robust.
Commit 6f6a6d8b1 introduced a delay of up to 2 seconds if we're trying
to request a checkpoint but the checkpointer hasn't started yet (or,
much less likely, our kill() call fails). However buildfarm experience
shows that that's not quite enough for slow or heavily-loaded machines.
There's no good reason to assume that the checkpointer won't start
eventually, so we may as well make the timeout much longer, say 60 sec.
However, if the caller didn't say CHECKPOINT_WAIT, it seems like a bad
idea to be waiting at all, much less for as long as 60 sec. We can
remove the need for that, and make this whole thing more robust, by
adjusting the code so that the existence of a pending checkpoint
request is clear from the contents of shared memory, and making sure
that the checkpointer process will notice it at startup even if it did
not get a signal. In this way there's no need for a non-CHECKPOINT_WAIT
call to wait at all; if it can't send the signal, it can nonetheless
assume that the checkpointer will eventually service the request.
A potential downside of this change is that "kill -INT" on the checkpointer
process is no longer enough to trigger a checkpoint, should anyone be
relying on something so hacky. But there's no obvious reason to do it
like that rather than issuing a plain old CHECKPOINT command, so we'll
assume that nobody is. There doesn't seem to be a way to preserve this
undocumented quasi-feature without introducing race conditions.
Since a principal reason for messing with this is to prevent intermittent
buildfarm failures, back-patch to all supported branches.
Peter Eisentraut [Tue, 19 Mar 2019 09:48:03 +0000 (10:48 +0100)]
Ignore attempts to add TOAST table to shared or catalog tables
Running ALTER TABLE on any table will check if a TOAST table needs to be
added. On shared tables, this would previously fail, thus effectively
disabling ALTER TABLE for those tables. On (non-shared) system
catalogs, on the other hand, it would add a TOAST table, even though we
don't really want TOAST tables on some system catalogs. In some cases,
it would also fail with an error "AccessExclusiveLock required to add
toast table.", depending on what locks the ALTER TABLE actions had
already taken.
So instead, just ignore attempts to add TOAST tables to such tables,
outside of bootstrap mode, pretending they don't need one.
This allows running ALTER TABLE on such tables without messing up the
TOAST situation. Legitimate uses for ALTER TABLE on system catalogs
include setting reloptions (say, fillfactor or autovacuum settings).
(All this still requires allow_system_table_mods, which is independent
of this.)
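A hypothetical illustration of what now works (this still requires a server
started with allow_system_table_mods enabled, as noted above):

    ALTER TABLE pg_database SET (fillfactor = 90);
    -- previously, ALTER TABLE on this shared catalog could fail while trying
    -- to add a TOAST table; the TOAST-table addition is now simply skipped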
Peter Eisentraut [Tue, 19 Mar 2019 08:37:46 +0000 (09:37 +0100)]
Fix bug in support for collation attributes on older ICU versions
Unrecognized attribute names are supposed to be ignored. But the code
would error out on an unrecognized attribute value even if it did not
recognize the attribute name. So unrecognized attributes wouldn't
really be ignored unless the value happened to be one that matched a
recognized value. This would break some important cases where the
attribute would be processed by ucol_open() directly. Fix that and
add a test case.
The restructured code should also avoid compiler warnings about
initializing a UColAttribute value to -1, because the type might be an
unsigned enum. (reported by Andres Freund)
Andrew Gierth [Tue, 19 Mar 2019 01:16:50 +0000 (01:16 +0000)]
Implement OR REPLACE option for CREATE AGGREGATE.
Aggregates have acquired a dozen or so optional attributes in recent
years for things like parallel query and moving-aggregate mode; the
lack of an OR REPLACE option to add or change these for an existing
agg makes extension upgrades gratuitously hard. Rectify.
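A sketch of the new syntax, reusing the built-in float8 transition functions
(the aggregate itself is made up):

    CREATE AGGREGATE my_avg(float8) (
        sfunc = float8_accum,
        stype = float8[],
        finalfunc = float8_avg,
        initcond = '{0,0,0}'
    );
    -- later, add optional attributes without having to drop the aggregate
    CREATE OR REPLACE AGGREGATE my_avg(float8) (
        sfunc = float8_accum,
        stype = float8[],
        finalfunc = float8_avg,
        combinefunc = float8_combine,
        initcond = '{0,0,0}',
        parallel = safe
    );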
Tom Lane [Mon, 18 Mar 2019 21:54:24 +0000 (17:54 -0400)]
Fix memory leak in printtup.c.
Commit f2dec34e1 changed things so that printtup's output stringinfo
buffer was allocated outside the per-row temporary context, not inside
it. This creates a need to free that buffer explicitly when the temp
context is freed, but that was overlooked. In most cases, this is all
happening inside a portal or executor context that will go away shortly
anyhow, but that's not always true. Notably, the stringinfo ends up
getting leaked when JDBC uses row-at-a-time fetches. For a query
that returns wide rows, that adds up after awhile.
Per bug #15700 from Matthias Otterbach. Back-patch to v11 where the
faulty code was added.
We should try to prewarm each database only once. Otherwise, if
prewarming fails for some reason, it will just keep retrying in an
infinite loop. This can happen if, for example, the database has been
dropped. The existing code was intended to implement the try-once
behavior, but failed to do so because it neglected to set
worker.bgw_restart_time to BGW_NEVER_RESTART.
Robert Haas [Mon, 18 Mar 2019 19:14:52 +0000 (15:14 -0400)]
Revise parse tree representation for VACUUM and ANALYZE.
Like commit f41551f61f9cf4eedd5b7173f985a3bdb4d9858c, this aims
to make it easier to add non-Boolean options to VACUUM (or, in
this case, to ANALYZE). Instead of building up a bitmap of
options directly in the parser, build up a list of DefElem
objects and let ExecVacuum() sort it out; right now, we make
no use of the fact that a DefElem can carry an associated value,
but it will be easy to make that change in the future.
Robert Haas [Mon, 18 Mar 2019 17:57:33 +0000 (13:57 -0400)]
Fold vacuum's 'int options' parameter into VacuumParams.
Many places need both, so this allows a few functions to take one
fewer parameter. More importantly, as soon as we add a VACUUM
option that takes a non-Boolean parameter, we need to replace
'int options' with a struct, and it seems better to think
of adding more fields to VacuumParams rather than passing around
both VacuumParams and a separate struct as well.
Peter Eisentraut [Mon, 18 Mar 2019 16:01:40 +0000 (17:01 +0100)]
Fix optimization of foreign-key on update actions
In RI_FKey_pk_upd_check_required(), we check among other things
whether the old and new key are equal, so that we don't need to run
cascade actions when nothing has actually changed. This was using the
equality operator. But the effect of this is that if a value in the
primary key is changed to one that "looks" different but compares as
equal, the update is not propagated. (Examples are float -0 and 0 and
case-insensitive text.) This appears to violate the SQL standard, and
it also behaves inconsistently if in a multicolumn key another key is
also updated that would cause the row to compare as not equal.
To fix, if we are looking at the PK table in ri_KeysEqual(), then do a
bytewise comparison similar to record_image_eq() instead of using the
equality operators. This only makes a difference for ON UPDATE
CASCADE, but for consistency we treat all changes to the PK the same. For
the FK table, we continue to use the equality operators.
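A made-up example of the float -0 / 0 case mentioned above:

    CREATE TABLE pk (f float8 PRIMARY KEY);
    CREATE TABLE fk (f float8 REFERENCES pk ON UPDATE CASCADE);
    INSERT INTO pk VALUES ('0');
    INSERT INTO fk VALUES ('0');
    UPDATE pk SET f = '-0';
    -- '0' and '-0' compare as equal, so the cascade used to be skipped;
    -- with the bytewise comparison the new value now propagates to fk.f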
Michael Paquier [Mon, 18 Mar 2019 03:59:35 +0000 (12:59 +0900)]
Refactor more code logic to update the control file
ce6afc6 has begun the refactoring work by plugging pg_rewind into a
central routine to update the control file, and left around two extra
copies, with one in xlog.c for the backend and one in pg_resetwal.c. By
adding an extra option to the central routine in controldata_utils.c to
control whether a flush of the control file needs to be done, it proves
straightforward to make xlog.c and pg_resetwal.c use the central code
path, on the condition of moving the wait event tracking there.
Hence, this allows having only one central code path to update the
control file, removing the duplicated code.
This refactoring actually fixes a problem in pg_resetwal. Previously,
the control file was first removed before being recreated. So if a
crash happened between the moment the file was removed and the moment
the file was created, then it would have been possible to not have a
control file anymore in the database folder.
Author: Fabien Coelho Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/alpine.DEB.2.21.1903170935210.2506@lancre
Michael Paquier [Mon, 18 Mar 2019 01:34:45 +0000 (10:34 +0900)]
Fix pg_rewind when rewinding new database with tables included
This fixes an issue introduced by 266b6ac, which has added filters to
exclude file patterns on the target and source data directories to
reduce the number of files transferred. Filters get applied to both
the target and source data files, and include pg_internal.init which is
present for each database once relations are created on it. However, if
the target differed from the source with at least one new database with
relations, the rewind would fail due to the exclusion filters applied on
the target files, causing pg_internal.init to still be present in the
target database folder, while its contents should have been completely
removed so that nothing remains inside at the time of the folder
deletion.
Applying exclusion filters on the source files is fine, because this way
the amount of data copied from the source to the target is reduced. And
actually, not applying the filters on the target is what pg_rewind
should do, because this causes such files to be automatically removed
during the rewind on the target. Exclusion filters apply to paths which
are removed or recreated automatically at startup, so removing all those
files on the target during the rewind is a win.
The existing set of TAP tests already stresses the rewind of databases,
but it did not include any tables on those newly-created databases.
Creating extra tables in this case is enough to reproduce the failure,
so the existing tests are extended to close the gap.
Reported-by: Mithun Cy
Author: Michael Paquier
Discussion: https://postgr.es/m/CADq3xVYt6_pO7ZzmjOqPgY9HWsL=kLd-_tNyMtdfjKqEALDyTA@mail.gmail.com
Backpatch-through: 11
Michael Paquier [Mon, 18 Mar 2019 00:11:52 +0000 (09:11 +0900)]
Error out in pg_checksums on incompatible block size
pg_checksums is compiled with a given block size and has a hard
dependency on it per the way checksums are calculated via
checksum_impl.h, and trying to use the tool on a data folder which does
not have the same block size would result in incorrect checksum
calculations and/or block read errors, making the data folder appear
corrupted. This is harmless as checksums are only checked for now, but
very confusing for the user, so issue an error properly if the block size
used at compilation and the block size used in the data folder do not match.
Reported-by: Sergei Kornilov
Author: Michael Banck, Michael Paquier Reviewed-by: Fabien Coelho, Magnus Hagander
Discussion: https://postgr.es/m/20190317054657.GA3357@paquier.xyz
Backpatch-through: 11
Peter Eisentraut [Sun, 17 Mar 2019 07:16:33 +0000 (08:16 +0100)]
Add support for collation attributes on older ICU versions
Starting in ICU 54, collation customization attributes can be
specified in the locale string, for example
"@colStrength=primary;colCaseLevel=yes". Add support for this for
older ICU versions as well, by adding some minimal parsing of the
attributes in the locale string and calling ucol_setAttribute() on
them. This is essentially what newer ICU versions do internally in
ucol_open(). This way we can offer this functionality in a consistent
way in all ICU versions supported by PostgreSQL.
Also add some tests for ICU collation customization.
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Discussion: https://www.postgresql.org/message-id/0270ebd4-f67c-8774-1a5a-91adfb9bb41f@2ndquadrant.com
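A hedged sketch of the locale-string syntax, using the attribute string quoted
above (requires a build with ICU):

    CREATE COLLATION testcoll_prim (
        provider = icu,
        locale = '@colStrength=primary;colCaseLevel=yes'
    );
    -- sorting with this collation then uses the customized comparison
    SELECT x FROM (VALUES ('cote'), ('coté'), ('côte')) AS t(x)
    ORDER BY x COLLATE testcoll_prim;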
Tom Lane [Sat, 16 Mar 2019 19:50:13 +0000 (15:50 -0400)]
Further adjust the tests for the hyperbolic functions.
It looks like we can leave in most of the test cases for Infinity/NaN
inputs, but buildfarm member jacana gets the wrong answer for acosh(Inf).
It's not worth carrying a variant expected file for that, so just disable
that one test.
Add support for numeric error suppression to jsonpath, as required by the
standard. This commit doesn't use PG_TRY()/PG_CATCH() in order to implement
that. Instead, it provides internal versions of the numeric functions used,
which support error suppression.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Alexander Korotkov, Nikita Glukhov Reviewed-by: Tomas Vondra
The SQL:2016 standard, among other things, contains a set of SQL/JSON features
for JSON processing inside a relational database. The core of SQL/JSON is the
JSON path language, allowing access to parts of JSON documents and computations
over them. This commit implements partial support for the JSON path language as
a separate datatype called "jsonpath". The implementation is partial because
it's lacking datetime support and suppression of numeric errors. Missing
features will be added later by separate commits.
Support of SQL/JSON features requires implementation of separate nodes, and it
will be considered in subsequent patches. This commit includes a set of plain
functions allowing execution of jsonpath over jsonb values.
This commit also implements "jsonb @? jsonpath" and "jsonb @@ jsonpath", which
are wrappers over jsonpath_exists(jsonb, jsonpath) and jsonpath_predicate(jsonb,
jsonpath) respectively. These operators will have index support
(implemented in subsequent patches).
Catversion bumped, to add new functions and operators.
Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova. The work
was inspired by Oleg Bartunov.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
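Illustrative queries using the new datatype and operators (values made up):

    SELECT '{"a": [1, 2, 3, 4, 5]}'::jsonb @? '$.a[*] ? (@ > 3)';  -- true
    SELECT '{"a": [1, 2, 3, 4, 5]}'::jsonb @@ '$.a[*] > 3';        -- true
    SELECT '$.a[*] ? (@ > 3)'::jsonpath;                           -- a plain datatype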
Peter Eisentraut [Fri, 15 Mar 2019 20:24:05 +0000 (21:24 +0100)]
Don't propagate PGAPPNAME through pg_ctl in tests
When libpq is loaded in the server (for instance, by
libpqwalreceiver), it may use libpq environment variables set in the
postmaster environment for connection parameter defaults. This has
some confusing effects in our test suites. For example, the TAP test
infrastructure sets PGAPPNAME to allow identifying clients in the
server log. But this environment variable is also inherited by
temporary servers started with pg_ctl and is then in turn used by
libpqwalreceiver as the application_name for connecting to remote
servers where it then shows up in pg_stat_replication and is relevant
for things like synchronous_standby_names. Replication already has a
suitable default for application_name, and overriding that
accidentally then requires the individual test cases to re-override
that, which is all very confusing and unnecessary.
To fix, unset PGAPPNAME temporarily before running pg_ctl start or
restart in the tests.
More comprehensive approaches like unsetting all environment variables
in pg_ctl were considered but might be too complicated to achieve
portably.
The now unnecessary re-overriding of application_name by test cases is
also removed.