Michael Paquier [Wed, 29 May 2019 15:19:06 +0000 (11:19 -0400)]
Fix some documentation about FKs and partitioned tables
This got forgotten in f56f8f which has added foreign key support for
partitioned tables. In passing, add a mention about caveats applying to
tables partitioned using inheritance regarding indexes and foreign keys.
Author: Paul A Jungwirth
Reviewed-by: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/CA+renyUuSmYgmZjKc_DfUNVZ0uttF91-FwhDVW3F7WEPj0jL5w@mail.gmail.com
Noah Misch [Tue, 28 May 2019 19:59:00 +0000 (12:59 -0700)]
In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released fifty-four days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Noah Misch [Tue, 28 May 2019 19:58:30 +0000 (12:58 -0700)]
In the pg_upgrade test suite, remove and recreate "tmp_check".
This allows "vcregress upgradecheck" to pass twice in immediate
succession, and it's more like how $(prove_check) works. Back-patch to
9.5, where pg_upgrade moved to src/bin.
Tom Lane [Sun, 26 May 2019 14:39:11 +0000 (10:39 -0400)]
Fix more thinkos in new ECPG "PREPARE AS" code.
ecpg_build_params() failed to check for ecpg_alloc failure in one
newly-added code path, and leaked a temporary string in another path.
Errors in commit a1dc6ab46, spotted by Coverity.
Tom Lane [Sun, 26 May 2019 14:06:37 +0000 (10:06 -0400)]
Fix thinko in new ECPG "PREPARE AS" code.
ecpg_register_prepared_stmt() is pretty obviously checking the wrong
variable while trying to detect malloc failure.
Error in commit a1dc6ab46, spotted by Coverity.
Amit Kapila [Sun, 26 May 2019 12:58:18 +0000 (18:28 +0530)]
Fix typos.
Reported-by: Alexander Lakhin
Author: Alexander Lakhin
Reviewed-by: Amit Kapila and Tom Lane
Discussion: https://postgr.es/m/7208de98-add8-8537-91c0-f8b089e2928c@gmail.com
Andres Freund [Thu, 23 May 2019 23:25:48 +0000 (16:25 -0700)]
tableam: Rename wrapper functions to match callback names.
Some of the wrapper functions didn't match the callback names, many of
them because they stayed "consistent" with the historic naming of the
wrapped functionality. We decided that for most cases it's more
important for tableam to be consistent going forward than with the past.
The one exception is beginscan/endscan/... because it'd have looked
odd to have systable_beginscan/endscan/... with a different naming
scheme, and changing the systable_* APIs would have caused way too
much churn (including breaking a lot of external users).
Author: Ashwin Agrawal, with some small additions by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CALfoeiugyrXZfX7n0ORCa4L-m834dzmaE8eFdbNR6PMpetU4Ww@mail.gmail.com
Michael Paquier [Thu, 23 May 2019 23:19:21 +0000 (08:19 +0900)]
Fix table dump in pg_dump[all] with backends older than 9.5
The access method name "amname" can be dumped as of 3b925e90, but
queries for backends older than 9.5 forgot to map it to a dummy NULL
value, causing the column not to be mapped to a column number. As a result,
pg_dump was throwing some spurious errors in its stderr output coming
from libpq:
pg_dump: column number -1 is out of range 0..36
Fix this issue by adding a mapping of "amname" to NULL to all the older
queries.
Discussion: https://postgr.es/m/20190522083038.GA16837@paquier.xyz
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Andres Freund, Tom Lane
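For illustration, the general pattern of such version-gated queries (a
sketch, not pg_dump's actual query text):
    -- Servers that can report a table access method:
    SELECT c.relname, am.amname
      FROM pg_class c LEFT JOIN pg_am am ON am.oid = c.relam;
    -- Older servers: keep the column list identical by selecting a
    -- dummy NULL under the same alias, so lookups by column name in
    -- the result still succeed:
    SELECT c.relname, NULL AS amname
      FROM pg_class c;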
Andres Freund [Thu, 23 May 2019 21:46:52 +0000 (14:46 -0700)]
pg_upgrade: Make test.sh's installcheck use to-be-upgraded version's bindir.
On master (after 700538) the old version's installed psql was used -
even when the old version might not actually be installed, or might be
installed into a temporary directory. That is commonly the case when
just executing make check for pg_upgrade, since $oldbindir is then just
the current version's $bindir.
In the back branches, with --install specified, psql from the new
version's temporary installation was used, without --install (e.g for
NO_TEMP_INSTALL, cf 47b3c26642), the new version's installed psql was
used (which might or might not exist).
Author: Andres Freund
Discussion: https://postgr.es/m/20190522175150.c26f4jkqytahajdg@alap3.anarazel.de
Andrew Gierth [Thu, 23 May 2019 14:26:01 +0000 (15:26 +0100)]
Fix array size allocation for HashAggregate hash keys.
When there were duplicate columns in the hash key list, the array
sizes could be miscomputed, resulting in access off the end of the
array. Adjust the computation to ensure the array is always large
enough.
(I considered whether the duplicates could be removed in planning, but
I can't rule out the possibility that duplicate columns might have
different hash functions assigned. Simpler to just make sure it works
at execution time regardless.)
Bug apparently introduced in fc4b3dea2 as part of narrowing down the
tuples stored in the hashtable. Reported by Colm McHugh of Salesforce,
though I didn't use their patch. Backpatch back to version 10 where
the bug was introduced.
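A hypothetical query of the problematic shape (the actual reproducer
differs): a semijoin whose inner side emits the same column twice, so
the de-duplicating HashAggregate's hash key list contains duplicates:
    SELECT count(*)
      FROM tenk1 t
     WHERE (t.hundred, t.hundred) IN (SELECT o.twothousand,
                                             o.twothousand
                                        FROM onek o);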
Michael Paquier [Thu, 23 May 2019 01:48:17 +0000 (10:48 +0900)]
Fix ordering of GRANT commands in pg_dumpall for tablespaces
This uses a method similar to 68a7c24f and now b8c6014 (applied for
database creation), which guarantees that GRANT commands using WITH
GRANT OPTION are dumped in an order that respects cascading
dependencies. Note that tablespaces do not have support for initial
privileges via pg_init_privs, so the same method needs to be applied
again. It would be nice to merge all the logic generating ACL queries
in dumps under the same banner, but this requires extending the support
of pg_init_privs to objects that cannot use it yet, so this is left as
future work.
Discussion: https://postgr.es/m/20190522071555.GB1278@paquier.xyz
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Backpatch-through: 9.6
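For illustration, why dump order matters (hypothetical roles and
tablespace): the second GRANT is only valid once the first one, with
its WITH GRANT OPTION, has been restored:
    GRANT CREATE ON TABLESPACE ts1 TO alice WITH GRANT OPTION;
    SET ROLE alice;
    GRANT CREATE ON TABLESPACE ts1 TO bob;  -- depends on alice's grant
    RESET ROLE;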
Tom Lane [Wed, 22 May 2019 17:04:48 +0000 (13:04 -0400)]
Phase 2 pgindent run for v12.
Switch to 2.1 version of pg_bsd_indent. This formats
multiline function declarations "correctly", that is with
additional lines of parameter declarations indented to match
where the first line's left parenthesis is.
Tom Lane [Wed, 22 May 2019 16:55:34 +0000 (12:55 -0400)]
Initial pgindent run for v12.
This is still using the 2.0 version of pg_bsd_indent.
I thought it would be good to commit this separately,
so as to document the differences between 2.0 and 2.1 behavior.
Peter Eisentraut [Wed, 15 May 2019 17:37:52 +0000 (19:37 +0200)]
Convert ExecComputeStoredGenerated to use tuple slots
This code was still using the old style of forming a heap tuple rather
than using tuple slots. This would be less efficient if a non-heap
access method were used. And using tuple slots is actually quite a bit
faster when using heap as well.
Also add some test cases for generated columns with null values and
with varlena values. This lack of coverage was discovered while
working on this patch.
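Roughly the kind of cases now covered (hypothetical table):
    CREATE TABLE gtest (a int, b text,
        c text GENERATED ALWAYS AS (b || b) STORED);
    INSERT INTO gtest VALUES (1, NULL);               -- NULL input
    INSERT INTO gtest VALUES (2, repeat('x', 10000)); -- varlena input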
Fujii Masao [Wed, 22 May 2019 16:18:16 +0000 (01:18 +0900)]
Mention ANALYZE boolean options in documentation.
Commit 41b54ba78e allowed not only VACUUM but also ANALYZE options
to take a boolean argument. But it forgot to update the documentation
for ANALYZE. This commit adds descriptions of those ANALYZE
boolean options to the documentation.
This patch also updates tab-completion for ANALYZE boolean options.
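For example (table name hypothetical); an option with no argument
defaults to true:
    ANALYZE (VERBOSE) my_table;
    ANALYZE (VERBOSE TRUE) my_table;   -- same as above
    ANALYZE (VERBOSE FALSE) my_table;  -- option explicitly disabled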
Tom Lane [Wed, 22 May 2019 15:46:57 +0000 (11:46 -0400)]
Fix O(N^2) performance issue in pg_publication_tables view.
The original coding of this view relied on a correlated IN sub-query.
Our planner is not very bright about correlated sub-queries, and even
if it were, there's no way for it to know that the output of
pg_get_publication_tables() is duplicate-free, making the de-duplicating
semantics of IN unnecessary. Hence, rewrite as a LATERAL sub-query.
This provides circa 100X speedup for me with a few hundred published
tables (the whole regression database), and things would degrade as
roughly O(published_relations * all_relations) beyond that.
Because the rules.out expected output changes, force a catversion bump.
Ordinarily we might not want to do that post-beta1; but we already know
we'll be doing a catversion bump before beta2 to fix pg_statistic_ext
issues, so it's pretty much free to fix it now instead of waiting for v13.
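A simplified sketch of the rewrite (not the view's exact text):
    -- Before: correlated IN sub-query; the planner can't know the
    -- IN's de-duplication is unnecessary:
    SELECT p.pubname, c.oid::regclass
      FROM pg_publication p, pg_class c
     WHERE c.oid IN (SELECT relid
                       FROM pg_get_publication_tables(p.pubname));
    -- After: LATERAL call, applied once per publication row:
    SELECT p.pubname, gpt.relid::regclass
      FROM pg_publication p,
           LATERAL pg_get_publication_tables(p.pubname) gpt(relid);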
Tom Lane [Wed, 22 May 2019 14:38:21 +0000 (10:38 -0400)]
In transam.h, don't expose static inline functions to frontend code.
That leads to unsatisfied external references if the C compiler fails
to elide unused static functions. Apparently, we have no buildfarm
members building HEAD that have that issue ... but such compilers still
exist in the wild. Need to do something about that.
Michael Paquier [Wed, 22 May 2019 05:48:00 +0000 (14:48 +0900)]
Fix ordering of GRANT commands in pg_dump for database creation
This uses a method similar to 68a7c24f, which guarantees that GRANT
commands using WITH GRANT OPTION are dumped in an order that respects
cascading dependencies. As databases do not have support for initial
privileges via pg_init_privs, we need to apply the same ACL reordering
method again.
ACLs for databases were moved from pg_dumpall to pg_dump in v11, so
this impacts pg_dump for v11 and above, and pg_dumpall for v9.6 and
v10.
Michael Meskes [Wed, 22 May 2019 02:58:29 +0000 (04:58 +0200)]
Implement PREPARE AS statement for ECPG.
Besides implementing the new statement, this change fixes some issues
with the parsing of PREPARE and EXECUTE statements. The different forms
of these statements are now all handled in a unified way.
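For reference, the corresponding server-side statement forms (table
name hypothetical), onto which ECPG's embedded statements roughly map:
    PREPARE q(int) AS SELECT * FROM my_table WHERE id = $1;
    EXECUTE q(42);
    DEALLOCATE q;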
When $(MAKE) is present in a rule, make assumes that the target is a
submake whose output doesn't need to be buffered. But in this case
it's a shell script that needs buffered output. Avoid that heuristic
by referring to $(MAKE) via an indirection.
Andres Freund [Tue, 21 May 2019 21:56:29 +0000 (14:56 -0700)]
pg_upgrade: Don't use separate installation for test.
For pg_upgrade's test we (unless prevented by the caller via
NO_TEMP_INSTALL) built a separate installation. That causes an
unnecessary slowdown after the infrastructure introduced by
dcae5faccab (and unnecessarily duplicates code).
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion:
https://postgr.es/m/20190521191918.z7kwnrlj45mk2k67@alap3.anarazel.de
https://postgr.es/m/20190521195209.qfzwfxvymguuwlu5@alap3.anarazel.de
Tom Lane [Tue, 21 May 2019 17:11:57 +0000 (13:11 -0400)]
Make pg_upgrade's test.sh less chatty.
The use of "set -x" to echo a subset of the test's commands might've
been a good idea during development of this test, but it's been stable
for long enough now that the extra output isn't very useful. Also
our project expectations have been trending towards less output in
non-error cases; the fact that "set -x" produces output on stderr
is particularly annoying from that standpoint. So get rid of it.
Also, pass "-A trust" to initdb explicitly so that it won't issue
a warning about "trust" being an insecure default. This matches
what the TAP tests have done for a long time, and again gets rid
of some noise on stderr.
Tom Lane [Tue, 21 May 2019 16:23:16 +0000 (12:23 -0400)]
Insert temporary debugging output in regression tests.
We're seeing occasional instability in the plans generated for
parallel queries on the "a_star" table hierarchy. This suggests
that something is changing the planner's stats for those tables,
but that should not be happening within a regression test run.
To try to gather some information about what's happening, insert
additional queries to check the basic page/tuple counts for these
tables, as well as whether any vacuums or analyzes have happened
on them. (We expect that only the database-wide VACUUM in
sanity_check.sql will have touched them.)
I added the probes not only in select_parallel.sql itself, but
also in stats.sql, bearing in mind that the stats collector's
lag may prevent the initial query from reporting current truth.
If any extra vacuum/analyze has happened, the recheck in stats.sql
definitely ought to see it.
This commit can be reverted once we figure out what's going on.
Per suggestion from David Rowley, though I changed the queries around.
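The probes are roughly of this shape (a sketch, not the exact queries
added):
    -- planner-visible page/tuple counts for the hierarchy
    SELECT relname, relpages, reltuples
      FROM pg_class WHERE relname LIKE '__star' ORDER BY relname;
    -- whether any vacuum/analyze has touched the tables
    SELECT relname, vacuum_count, autovacuum_count,
           analyze_count, autoanalyze_count
      FROM pg_stat_user_tables
     WHERE relname LIKE '__star' ORDER BY relname;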
Tom Lane [Mon, 20 May 2019 22:39:53 +0000 (18:39 -0400)]
Doc: improve description of regexp character classes.
Define the meanings of the POSIX-spec character classes in line,
rather than referring to the ctype(3) man page. That man page
doesn't even exist on many modern systems, and if it does exist
it probably says the wrong things about non-ASCII characters.
Also document our non-POSIX-spec "ascii" character class.
Also, point out here that this behavior is controlled by collation or
LC_CTYPE, since the existing text explaining that is pretty far away.
Per gripe from Geert Lobbestael. Given the lack of prior complaints,
I'm not excited about back-patching this.
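For example (results for non-ASCII input depend on collation/LC_CTYPE):
    SELECT 'abc123' ~ '^[[:alpha:]]+[[:digit:]]+$';  -- true
    SELECT 'résumé' ~ '^[[:alpha:]]+$';   -- collation-dependent
    SELECT 'résumé' ~ '^[[:ascii:]]+$';   -- false: é is not ASCII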
This shouldn't have been committed without even running the tests (nor
were the tests added that were suggested). I'm fixing up the results
to get the buildfarm back to green; it's quite possible we'll want to
revert this later.
Fujii Masao [Mon, 20 May 2019 15:22:06 +0000 (00:22 +0900)]
Make VACUUM accept 1 and 0 as a boolean value.
Commit 41b54ba78e allowed existing VACUUM options to take a boolean
argument. It's documented that valid boolean values that VACUUM can
accept are true, false, on, off, 1, and 0. But previously the parser
failed to accept 1 and 0 as a boolean value in VACUUM syntax because
of a lack of NumericOnly clause for vac_analyze_option_arg in gram.y.
This commit adds such a NumericOnly clause so that VACUUM options
can also take 1 and 0 as a boolean value.
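For example (table name hypothetical), all of these now parse:
    VACUUM (VERBOSE, ANALYZE) my_table;
    VACUUM (VERBOSE true, ANALYZE false) my_table;
    VACUUM (VERBOSE 1, ANALYZE 0) my_table;  -- accepted with this fix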
Andres Freund [Mon, 20 May 2019 01:01:06 +0000 (18:01 -0700)]
Minimally fix partial aggregation for aggregates that don't have one argument.
For partial aggregation combine steps,
AggStatePerTrans->numTransInputs was set to the transition function's
number of inputs, rather than the combine function's number of
inputs (always 1).
That led partial aggregates with strict combine functions to wrongly
check for NOT NULL input as required by strictness. When the aggregate
wasn't passed exactly one argument, the strictness check was either
omitted (in the 0-argument case) or too many arguments were checked.
In the latter case we'd read beyond the end of
FunctionCallInfoData->args (only in master).
AggStatePerTrans->numTransInputs has actually been wrong since 9.6,
where partial aggregates were added. But it turns out not to be
an active problem in 9.6 and 10, because numTransInputs wasn't used at
all for combine functions: Before c253b722f6 there simply was no NULL
check for the input to strict trans functions, and after that the
check was simply hardcoded for the right offset in fcinfo, as it's
done by code specific to combine functions.
In bf6c614a2f2 (11) the strictness check was generalized, with common
code doing the strictness checks for both plain and combine transition
functions, based on numTransInputs. For combine functions this led to
not emitting an expression step to check for strict input in the
0-argument case, and in the > 1 argument case we'd check too many
arguments. Because the relevant fcinfo->isnull[2..] was always
zero-initialized (more or less by accident, by being part of the
AggStatePerTrans struct, which is palloc0'ed), there was no observable
damage in the latter case before a9c35cf85ca1f; we just checked too
many array elements.
Due to the changes in a9c35cf85ca1f, the > 1 argument bug became
visible, because these days fcinfo is a) dynamically allocated without
being zeroed, and b) exactly the length required for the number of
specified arguments (hardcoded to 2 in this case).
This commit only contains a fairly minimal fix, setting numTransInputs
to a hardcoded 1 when building a pertrans for a combine function. It
seems likely that we'll want to clean this up further (e.g. the
arguments to build_pertrans_for_aggref() aren't particularly meaningful
for combine functions). But the wrap date for 12 beta1 is coming up
fast, so it seems good to have a minimal fix in place.
Backpatch to 11. While AggStatePerTrans->numTransInputs was set
wrongly before that, the value was not used for combine functions.
Reported-By: Rajkumar Raghuwanshi
Diagnosed-By: Kyotaro Horiguchi, Jeevan Chalke, Andres Freund, David Rowley
Author: David Rowley, Kyotaro Horiguchi, Andres Freund
Discussion: https://postgr.es/m/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com
Michael Paquier [Mon, 20 May 2019 00:47:19 +0000 (09:47 +0900)]
Fix some grammar in documentation of spgist and pgbench
Discussion: https://postgr.es/m/92961161-9b49-e42f-0a72-d5d47e0ed4de@postgrespro.ru
Author: Liudmila Mantrova
Reviewed-by: Jonathan Katz, Tom Lane, Michael Paquier
Backpatch-through: 9.4
Andres Freund [Sun, 19 May 2019 23:17:18 +0000 (16:17 -0700)]
Fix and improve SnapshotType comments.
The comment for SNAPSHOT_SELF was unfortunately explaining
SNAPSHOT_DIRTY, as reported by Sergei. Also expand a few comments, and
include a few more comments from heapam_visibility.c, so they're in an
AM independent place.
Reported-By: Sergei Kornilov
Author: Andres Freund
Discussion: https://postgr.es/m/9152241558192351@sas1-d856b3d759c7.qloud-c.yandex.net
Andres Freund [Sun, 19 May 2019 22:10:28 +0000 (15:10 -0700)]
Don't acquire predicate locks for analyze scans, refactor scan option passing.
Before this commit, when ANALYZE was run on a table and serializable
isolation was in use (either via an explicit BEGIN TRANSACTION
ISOLATION LEVEL SERIALIZABLE, or default_transaction_isolation being
set to serializable), a null pointer dereference led to a crash.
The analyze scan doesn't need a snapshot (nor predicate locking), but
before this commit a scan only contained information about being a
bitmap or sample scan.
Refactor the option passing to the scan_begin callback to use a
bitmask instead. Alternatively we could have added a new boolean
parameter, but that seems harder to read. Even before this issue
various people (Heikki, Tom, Robert) suggested doing so.
These changes don't change the scan APIs outside of tableam. The flags
argument could be exposed, but that's not necessary to fix this
problem. Also, the wrapper table_beginscan* functions encapsulate most
of that complexity.
After these changes fixing the bug is trivial: just don't acquire a
predicate lock for analyze-style scans. That was already done for
bitmap heap scans. Add an assert that a snapshot is passed when
acquiring the predicate lock, so this kind of bug doesn't require
running with serializable to be detected.
Also add a comment about sample scans currently requiring a predicate
lock on the entire relation; that previously wasn't remarked upon.
Reported-By: Joe Wildish
Author: Andres Freund
Discussion:
https://postgr.es/m/4EA80A20-E9BF-49F1-9F01-5B66CAB21453@elusive.cx
https://postgr.es/m/20190411164947.nkii4gaeilt4bui7@alap3.anarazel.de
https://postgr.es/m/20190518203102.g7peu2fianukjuxm@alap3.anarazel.de
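The previously-crashing case was simply (table name hypothetical):
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    ANALYZE some_table;   -- crashed before this fix
    COMMIT;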
Noah Misch [Sun, 19 May 2019 21:36:44 +0000 (14:36 -0700)]
In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Tom Lane [Sun, 19 May 2019 17:55:39 +0000 (13:55 -0400)]
Improve logrotate test so that it meaningfully exercises syslogger.
Discussion of bug #15804 reveals that this test didn't really prove
that the syslogger child process ever launched successfully, much
less did anything. It was only checking that the expected log file
gets created, and that's done in the postmaster. Moreover, the
test assumed it could rename the log file, which is likely to fail
on Windows (cf. commit d611175e5).
Instead, use the default log file name pattern, which should result
in a new file name being chosen after 1 second, and verify that
rotation has occurred by checking for a new file name. Also add code
to test that messages actually do propagate through the syslogger.
In theory this version of the test should work on Windows, so
revert d611175e5.
While that's still a good idea in the abstract, we found out
that there are multiple crasher bugs in it on Windows builds,
making the logging_collector option unusable on Windows.
There's no time left to fix these issues before 12beta1,
so revert the patch to allow Windows beta testing to proceed.
We'll try again at some future date.
Per bug #15804 from Yulian Khodorkovskiy and additional
investigation by Michael Paquier.
Tom Lane [Sun, 19 May 2019 00:16:50 +0000 (20:16 -0400)]
ANSI-ify a few straggler K&R-style function definitions.
We still had a couple of these left in ancient src/port/ files.
Convert them to modern style in preparation for switching to
a version of pg_bsd_indent that doesn't cope well with K&R style.
Tom Lane [Sat, 18 May 2019 17:51:16 +0000 (13:51 -0400)]
Make BufFileCreateTemp() ensure that temp tablespaces are set up.
If PrepareTempTablespaces() has never been called in the current
transaction, OpenTemporaryFile() will fall back to using the default
tablespace, which is a bug if the user wanted temp files placed elsewhere.
gistInitBuildBuffers() appears to have this disease already, and it
seems like an easy trap for future coders to fall into.
We discussed other ways to close this gap, but none of them are prettier
or more reliable than just having BufFileCreateTemp do it. In particular,
having fd.c do this creates layering issues that we could do without.
Per suggestion from Melanie Plageman. Arguably this is a bug fix, but
nobody seems very excited about back-patching, so change in HEAD only.
Andres Freund [Sat, 18 May 2019 01:52:01 +0000 (18:52 -0700)]
tableam: Avoid relying on relation size to determine validity of tids.
Instead add a tableam callback to do so. To avoid adding per-validation
overhead, pass a scan to tuple_tid_valid. In heap's case we'd otherwise
have incurred a RelationGetNumberOfBlocks() call for each tid, which
would have added noticeable overhead to nodeTidscan.c.
Author: Andres Freund
Reviewed-By: Ashwin Agrawal
Discussion: https://postgr.es/m/20190515185447.gno2jtqxyktylyvs@alap3.anarazel.de
Andres Freund [Sat, 18 May 2019 01:06:18 +0000 (18:06 -0700)]
tableam: Don't assume that every AM uses md.c style storage.
Previously various parts of the code routed size requests through
RelationGetNumberOfBlocks[InFork]. That works if md.c is used by the
AM, but not otherwise.
Add a tableam callback to return the size of the table. As not every
AM will use postgres' BLCKSZ, have it return bytes, and have
RelationGetNumberOfBlocksInFork() round the byte size up into blocks.
To allow code outside of the AM to determine the actual relation size,
map InvalidForkNumber to the total size of the relation, as not every
AM might use only the postgres-defined forks.
A few users of RelationGetNumberOfBlocks() ought to be converted away
from that. One case, the use of it to determine whether a tid is
valid, will be fixed in a follow up commit. Others will have to wait
for v13.
Author: Andres Freund
Discussion: https://postgr.es/m/20190423225201.3bbv6tbqzkb5w7cw@alap3.anarazel.de
Tom Lane [Fri, 17 May 2019 23:44:19 +0000 (19:44 -0400)]
Restructure creation of run-time pruning steps.
Previously, gen_partprune_steps() always built executor pruning steps
using all suitable clauses, including those containing PARAM_EXEC
Params. This meant that the pruning steps were only completely safe
for executor run-time (scan start) pruning. To prune at executor
startup, we had to ignore the steps involving exec Params. But this
doesn't really work in general, since there may be logic changes
needed as well --- for example, pruning according to the last operator's
btree strategy is the wrong thing if we're not applying that operator.
The rules embodied in gen_partprune_steps() and its minions are
sufficiently complicated that tracking their incremental effects in
other logic seems quite impractical.
Short of a complete redesign, the only safe fix seems to be to run
gen_partprune_steps() twice, once to create executor startup pruning
steps and then again for run-time pruning steps. We can save a few
cycles however by noting during the first scan whether we rejected
any clauses because they involved exec Params --- if not, we don't
need to do the second scan.
In support of this, refactor the internal APIs in partprune.c to make
more use of passing information in the GeneratePruningStepsContext
struct, rather than as separate arguments.
This is, I hope, the last piece of our response to a bug report from
Alan Jackson. Back-patch to v11 where this code came in.
Peter Geoghegan [Thu, 16 May 2019 22:11:58 +0000 (15:11 -0700)]
Remove extra nbtree half-dead internal page check.
It's not safe for nbtree VACUUM to attempt to delete a target page whose
right sibling is already half-dead, since that would fail the
cross-check when VACUUM attempts to re-find a downlink to the right
sibling in the parent page. Logic to prevent this from happening was
added by commit 8da31837803, which addressed a bug in the overhaul of
page deletion that went into PostgreSQL 9.4 (commit efada2b8e92).
VACUUM was made to check the right sibling page, and back off when it
happened to be half-dead already.
However, it is only truly necessary to do the right sibling check on the
leaf level, since that transitively determines if the deletion target's
parent's right sibling page is itself undergoing deletion. Remove the
internal page level check, and add a comment explaining why the leaf
level check alone suffices.
The extra check is also unnecessary due to the fact that internal pages
that are marked half-dead are generally considered corrupt. Commit
efada2b8e92 established the principle that there should never be
half-dead internal pages (internal pages pending deletion are possible,
but that status is never directly represented in the internal page).
VACUUM will complain about corruption when it encounters half-dead
internal pages, so VACUUM is bound to raise an error one way or another
when an nbtree index has a half-dead internal page (contrib/amcheck will
also report that the page is corrupt).
It's possible that a pg_upgrade'd 9.3 database will still have half-dead
internal pages, so it may seem like there is an argument for leaving the
check in place to reliably get a cleaner error message that advises the
user to REINDEX. However, leaf pages are also deleted in the first
phase of deletion prior to PostgreSQL 9.4, so I believe we won't even
attempt to re-find the parent page anyway (we won't have the fully
deleted leaf page as the right sibling of our target page, so we won't
even try to find a downlink for it).
Tom Lane [Thu, 16 May 2019 15:58:21 +0000 (11:58 -0400)]
Fix partition pruning to treat stable comparison operators properly.
Cross-type comparison operators in a btree or hash opclass might be
only stable not immutable (this is true of timestamp vs. timestamptz
for example). partprune.c ignored this possibility and would perform
plan-time pruning with them anyway, possibly leading to wrong answers
if the environment changed between planning and execution.
To fix, teach gen_partprune_steps() to do things differently when
creating plan-time pruning steps vs. run-time pruning steps.
analyze_partkey_exprs() also needs an extra check, which is rather
annoying but now is not the time to restructure things enough to
avoid that.
While at it, simplify the logic for the plan-time case a little
by insisting that the comparison value be a Const and nothing else.
This relies on the assumption that eval_const_expressions will have
reduced any immutable expression to a Const; which is not quite
100% true, but certainly any case that comes up often enough to be
interesting should have simplification logic there.
Also improve a bunch of inadequate/obsolete/wrong comments.
Per discussion of a report from Alan Jackson (though this fixes only one
aspect of that problem). Back-patch to v11 where this code came in.
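A hypothetical illustration of the hazard:
    CREATE TABLE events (ts timestamp) PARTITION BY RANGE (ts);
    CREATE TABLE events_2019 PARTITION OF events
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
    -- now() is timestamptz, so ts >= now() uses a cross-type
    -- comparison that is only stable (it depends on TimeZone);
    -- pruning on it must wait until executor startup:
    PREPARE q AS SELECT * FROM events WHERE ts >= now();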
Peter Geoghegan [Wed, 15 May 2019 23:53:11 +0000 (16:53 -0700)]
Remove obsolete nbtree insertion comment.
Remove a Berkeley-era comment above _bt_insertonpg() that admonishes the
reader to grok Lehman and Yao's paper before making any changes. This
made a certain amount of sense back when _bt_insertonpg() was
responsible for most of the things that are now spread across
_bt_insertonpg(), _bt_findinsertloc(), _bt_insert_parent(), and
_bt_split(), but it doesn't work like that anymore.
I believe that this comment alludes to the need to "couple" or "crab"
buffer locks as we ascend the tree as page splits cascade upwards. The
nbtree README already explains this in detail, which seems sufficient.
Besides, the changes to page splits made by commit 40dae7ec537 altered
the exact details of how buffer locks are retained during splits; Lehman
and Yao's original algorithm seems to release the lock on the left child
page/buffer slightly earlier than _bt_insertonpg()/_bt_insert_parent()
can.
Peter Geoghegan [Wed, 15 May 2019 19:22:07 +0000 (12:22 -0700)]
Reverse order of newitem nbtree candidate splits.
Commit fab25024, which taught nbtree to choose candidate split points
more carefully, had _bt_findsplitloc() record all possible split points
in an initial pass over a page that is about to be split. The order
that candidate split points were processed and stored in was assumed to
match the offset number order of split points on an imaginary version of
the page that contains the same items as the original, but also fits
newitem (the item that provoked the split precisely because it didn't
fit).
However, the order of split points in the final array was not quite what
was expected: the split point that makes newitem the firstright item
came after the split point that makes newitem the lastleft item -- not
before. As a result, _bt_findsplitloc() could get confused about the
leftmost and rightmost tuples among all possible split points recorded
for the page. This seems to have no appreciable impact on the quality
of the final split point chosen by _bt_findsplitloc(), but it's still
wrong.
To fix, switch the order in which newitem candidate splits are
recorded. This also makes it possible to describe candidate split
points in
terms of which pair of adjoining tuples enclose the split point within
_bt_findsplitloc(), making it clearer why it's generally safe for
_bt_split() to expect lastleft and firstright tuples.
Andres Freund [Tue, 14 May 2019 19:11:26 +0000 (12:11 -0700)]
Handle table_complete_speculative's succeeded argument as documented.
For some reason both the callsite and the implementation for heapam
had the meaning inverted (i.e. succeeded == true was passed in case of
conflict). That's confusing.
I (Andres) briefly pondered whether it'd be better to rename
table_complete_speculative's argument to 'bool specConflict' or such,
but decided not to. The 'complete' in the function name for me makes
`succeeded` sound a bit better.
Reported-By: Ashwin Agrawal, Melanie Plageman, Heikki Linnakangas
Discussion:
https://postgr.es/m/CALfoeitk7-TACwYv3hCw45FNPjkA86RfXg4iQ5kAOPhR+F1Y4w@mail.gmail.com
https://postgr.es/m/97673451-339f-b21e-a781-998d06b1067c@iki.fi
Andres Freund [Tue, 14 May 2019 18:45:40 +0000 (11:45 -0700)]
Add isolation test for INSERT ON CONFLICT speculative insertion failure.
This path previously was not reliably covered. There was some
heuristic coverage via insert-conflict-toast.spec, but that test is
not deterministic, and only tested for a somewhat specific bug.
Backpatch, as this is a complicated and otherwise untested code
path. Unfortunately 9.5 cannot handle two waiting sessions, and thus
cannot execute this test.
Triggered by a conversation with Melanie Plageman.
Author: Andres Freund
Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com
Backpatch: 9.5-
Tom Lane [Tue, 14 May 2019 18:19:49 +0000 (14:19 -0400)]
Move logging.h and logging.c from src/fe_utils/ to src/common/.
The original placement of this module in src/fe_utils/ is ill-considered,
because several src/common/ modules have dependencies on it, meaning that
libpgcommon and libpgfeutils now have mutual dependencies. That makes it
pointless to have distinct libraries at all. The intended design is that
libpgcommon is lower-level than libpgfeutils, so only dependencies from
the latter to the former are acceptable.
We already have the precedent that fe_memutils and a couple of other
modules in src/common/ are frontend-only, so it's not stretching anything
out of whack to treat logging.c as a frontend-only module in src/common/.
To the extent that such modules help provide a common frontend/backend
environment for the rest of common/ to use, it's a reasonable design.
(logging.c does not yet provide an ereport() emulation, but one can
dream.)
Hence, move these files over, and revert basically all of the build-system
changes made by commit cc8d41511. There are no places that need to grow
new dependencies on libpgcommon, further reinforcing the idea that this
is the right solution.
The existence of these files became rather confusing with the
introduction of a widely-known logging.h header in commit cc8d41511.
(Indeed, there's already some duplicative #includes here, perhaps
betraying such confusion.) The only thing left in them, after that
commit, is a progress-reporting function that's neither general-purpose
nor tied in any way to other logging infrastructure. Hence, let's just
move that function to pg_rewind.c, and get rid of the separate files.
Tom Lane [Tue, 14 May 2019 15:27:31 +0000 (11:27 -0400)]
Fix SQL-style substring() to have spec-compliant greediness behavior.
SQL's regular-expression substring() function is defined to have a
pattern argument that's separated into three subpatterns by escape-
double-quote markers; the function result is the part of the input
matching the second subpattern. The standard makes it clear that
if there is ambiguity about how to match the input to the subpatterns,
the first and third subpatterns should be taken to match the smallest
possible amount of text (i.e., they're "non greedy", in the terms of
our regex code). We were not doing it that way: the first subpattern
would eat the largest possible amount of text, causing the function
result to be shorter than what the spec requires.
Fix that by attaching explicit greediness quantifiers to the
subpatterns. (This depends on the regex fix in commit 8a29ed053;
before that, this didn't reliably change the regex engine's behavior.)
Also, by adding parentheses around each subpattern, we ensure that
"|" (OR) in the subpatterns behave sanely. Previously, "|" in the
first or third subpatterns didn't work.
This patch also makes the function throw error if you write more than
two escape-double-quote markers, and do something sane if you write
just one, and document that behavior. Previously, an odd number of
markers led to a confusing complaint about unbalanced parentheses,
while extra pairs of markers were just ignored. (Note that the spec
requires exactly two markers, but we've historically allowed there
to be none, and this patch preserves the old behavior for that case.)
In passing, adjust some substring() test cases that didn't really
prove what they said they were testing for: they used patterns
that didn't match the data string, so that the output would be
NULL whether or not the function was really strict.
Although this is certainly a bug fix, changing the behavior in back
branches seems undesirable: applications could perhaps be depending on
the old behavior, since it's not obviously wrong unless you read the
spec very closely. Hence, no back-patch.
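For example, the classic documentation-style case (the #" markers
split the pattern into three parts; the result is the text matched by
the middle part, with the outer parts now matching as little as
possible):
    SELECT substring('Thomas' FROM '%#"o_a#"_' FOR '#');   -- oma
    -- More than two markers now raises an error instead of a
    -- confusing complaint about unbalanced parentheses:
    SELECT substring('Thomas' FROM '%#"o#"m#"a#"_' FOR '#');  -- ERROR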
Tom Lane [Tue, 14 May 2019 14:22:28 +0000 (10:22 -0400)]
In bootstrap mode, use default signal handling for SIGINT etc.
Previously, the code pointed the standard process-termination signals
to postgres.c's die(). That would typically result in an attempt to
execute a transaction abort, which is not possible in bootstrap mode,
leading to PANIC. This choice seems to be a leftover from an old code
structure in which the same signal-assignment code was used for many
sorts of auxiliary processes, including interactive standalone
backends. It's not very sensible for bootstrap mode, which has no
interest in either interactivity or continuing after an error. We can
get better behavior with less effort by just letting normal process
termination happen, after which the parent initdb process will clean up.
This is basically cosmetic in any case, since initdb will react the
same way whether bootstrap dies on a signal or abort(). Given the
lack of previous complaints, I don't feel a need to back-patch,
even though the behavior is old.
Note: SQL:2016-2 lists a large number of non-reserved keywords that
are really just information_schema column names related to new
features. Those kinds of thing have not previously been listed as
keywords, and this was apparently done here by mistake, since these
keywords have been removed again in post-2016 working drafts. So in
order to avoid bloating the keywords table unnecessarily, I have
omitted these erroneous keywords here.
Detect internal GiST page splits correctly during index build.
As we descend the GiST tree during insertion, we modify any downlinks on
the way down to include the new tuple we're about to insert (if they don't
cover it already). Modifying an existing downlink might cause an internal
page to split, if the new downlink tuple is larger than the old one. If
that happens, we need to back up to the parent and re-choose a page to
insert to. We used to detect that situation, thanks to the NSN-LSN
interlock normally used to detect concurrent page splits, but that got
broken by commit 9155580fd5. With that commit, we now use a dummy constant
LSN value for every page during index build, so the LSN-NSN interlock no
longer works. I thought that was OK because there can't be any other
backends modifying the index during index build, but missed that the
insertion itself can modify the page we're inserting to. The consequence
was that we would sometimes insert the new tuple to an incorrect page, one
whose downlink doesn't cover the new tuple.
To fix, add a flag to the stack that keeps track of the state while
descending the tree, to indicate that a page was split and that we need
to retry the descent from the parent.
Thomas Munro first reported that the contrib/intarray regression test was
failing occasionally on the buildfarm after commit 9155580fd5. The failure
was intermittent, because the gistchoose() function is not deterministic,
and would only occasionally create the right circumstances for this bug to
cause the failure.
Patch by Anastasia Lubennikova, with some changes by me to make it work
correctly also when the internal page split causes the "grandparent"
to be split.