Andres Freund [Sun, 19 May 2019 23:17:18 +0000 (16:17 -0700)]
Fix and improve SnapshotType comments.
The comment for SNAPSHOT_SELF was unfortunately explaining
SNAPSHOT_DIRTY, as reported by Sergei. Also expand a few comments, and
include a few more comments from heapam_visibility.c, so they're in an
AM-independent place.
Reported-By: Sergei Kornilov
Author: Andres Freund
Discussion: https://postgr.es/m/9152241558192351@sas1-d856b3d759c7.qloud-c.yandex.net
Andres Freund [Sun, 19 May 2019 22:10:28 +0000 (15:10 -0700)]
Don't take a predicate lock for analyze scans; refactor scan option passing.
Before this commit, when ANALYZE was run on a table and the serializable
isolation level was in use (either by virtue of an explicit BEGIN
TRANSACTION ISOLATION LEVEL SERIALIZABLE, or default_transaction_isolation
being set to serializable), a null pointer dereference led to a crash.
The analyze scan doesn't need a snapshot (nor predicate locking), but
before this commit the scan options only recorded whether a scan was a
bitmap or sample scan.
Refactor the option passing to the scan_begin callback to use a
bitmask instead. Alternatively we could have added a new boolean
parameter, but that seems harder to read. Even before this issue
various people (Heikki, Tom, Robert) suggested doing so.
These changes don't change the scan APIs outside of tableam. The flags
argument could be exposed, but that's not necessary to fix this
problem. Also, the wrapper table_beginscan* functions encapsulate most
of that complexity.
After these changes fixing the bug is trivial, just don't acquire
predicate lock for analyze style scans. That was already done for
bitmap heap scans. Add an assert that a snapshot is passed when
acquiring the predicate lock, so this kind of bug doesn't require
running with serializable.
Also add a comment about sample scans currently requiring a predicate
lock on the entire relation, which previously wasn't remarked upon.
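As a sketch of the bitmask approach (flag and variable names here are
illustrative, not necessarily the exact identifiers used in tableam.h,
and the condition is simplified):

    /* Illustrative scan-option bits, OR'd into one flags argument. */
    typedef enum ScanOptions
    {
        SO_TYPE_SEQSCAN    = 1 << 0,
        SO_TYPE_BITMAPSCAN = 1 << 1,
        SO_TYPE_SAMPLESCAN = 1 << 2,
        SO_TYPE_ANALYZE    = 1 << 3
    } ScanOptions;

    /* In scan_begin: analyze scans carry no snapshot, so skip the
     * predicate lock there, and assert a snapshot exists otherwise. */
    if ((flags & SO_TYPE_ANALYZE) == 0)
    {
        Assert(snapshot != NULL);
        PredicateLockRelation(relation, snapshot);
    }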
Reported-By: Joe Wildish
Author: Andres Freund
Discussion:
https://postgr.es/m/4EA80A20-E9BF-49F1-9F01-5B66CAB21453@elusive.cx
https://postgr.es/m/20190411164947.nkii4gaeilt4bui7@alap3.anarazel.de
https://postgr.es/m/20190518203102.g7peu2fianukjuxm@alap3.anarazel.de
Noah Misch [Sun, 19 May 2019 21:36:44 +0000 (14:36 -0700)]
In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Tom Lane [Sun, 19 May 2019 17:55:39 +0000 (13:55 -0400)]
Improve logrotate test so that it meaningfully exercises syslogger.
Discussion of bug #15804 reveals that this test didn't really prove
that the syslogger child process ever launched successfully, much
less did anything. It was only checking that the expected log file
gets created, and that's done in the postmaster. Moreover, the
test assumed it could rename the log file, which is likely to fail
on Windows (cf. commit d611175e5).
Instead, use the default log file name pattern, which should result
in a new file name being chosen after 1 second, and verify that
rotation has occurred by checking for a new file name. Also add code
to test that messages actually do propagate through the syslogger.
In theory this version of the test should work on Windows, so
revert d611175e5.
While that's still a good idea in the abstract, we found out
that there are multiple crasher bugs in it on Windows builds,
making the logging_collector option unusable on Windows.
There's no time left to fix these issues before 12beta1,
so revert the patch to allow Windows beta testing to proceed.
We'll try again at some future date.
Per bug #15804 from Yulian Khodorkovskiy and additional
investigation by Michael Paquier.
Tom Lane [Sun, 19 May 2019 00:16:50 +0000 (20:16 -0400)]
ANSI-ify a few straggler K&R-style function definitions.
We still had a couple of these left in ancient src/port/ files.
Convert them to modern style in preparation for switching to
a version of pg_bsd_indent that doesn't cope well with K&R style.
Tom Lane [Sat, 18 May 2019 17:51:16 +0000 (13:51 -0400)]
Make BufFileCreateTemp() ensure that temp tablespaces are set up.
If PrepareTempTablespaces() has never been called in the current
transaction, OpenTemporaryFile() will fall back to using the default
tablespace, which is a bug if the user wanted temp files placed elsewhere.
gistInitBuildBuffers() appears to have this disease already, and it
seems like an easy trap for future coders to fall into.
We discussed other ways to close this gap, but none of them are prettier
or more reliable than just having BufFileCreateTemp do it. In particular,
having fd.c do this creates layering issues that we could do without.
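A sketch of the shape of the fix (simplified relative to the real
buffile.c code; makeBufFile stands in for its internal constructor):

    BufFile *
    BufFileCreateTemp(bool interXact)
    {
        File        pfile;

        /*
         * Ensure temp tablespaces are set up for OpenTemporaryFile to
         * use; otherwise it silently falls back to the default
         * tablespace.
         */
        PrepareTempTablespaces();

        pfile = OpenTemporaryFile(interXact);
        return makeBufFile(pfile);
    }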
Per suggestion from Melanie Plageman. Arguably this is a bug fix, but
nobody seems very excited about back-patching, so change in HEAD only.
Andres Freund [Sat, 18 May 2019 01:52:01 +0000 (18:52 -0700)]
tableam: Avoid relying on relation size to determine validity of tids.
Instead add a tableam callback to do so. To avoid adding per-validation
overhead, pass a scan to tuple_tid_valid. In heap's case we'd otherwise
have incurred a RelationGetNumberOfBlocks() call for each tid, which
would have added noticeable overhead to nodeTidscan.c.
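A sketch of what heap's callback can look like under this scheme,
comparing against the block count cached when the scan started (names
approximate):

    static bool
    heapam_tuple_tid_valid(TableScanDesc scan, ItemPointer tid)
    {
        HeapScanDesc hscan = (HeapScanDesc) scan;

        /* No fresh RelationGetNumberOfBlocks() call: use the relation
         * size cached in the scan descriptor at scan start. */
        return ItemPointerIsValid(tid) &&
            ItemPointerGetBlockNumber(tid) < hscan->rs_nblocks;
    }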
Author: Andres Freund
Reviewed-By: Ashwin Agrawal
Discussion: https://postgr.es/m/20190515185447.gno2jtqxyktylyvs@alap3.anarazel.de
Andres Freund [Sat, 18 May 2019 01:06:18 +0000 (18:06 -0700)]
tableam: Don't assume that every AM uses md.c style storage.
Previously various parts of the code routed size requests through
RelationGetNumberOfBlocks[InFork]. That works if md.c is used by the
AM, but not otherwise.
Add a tableam callback to return the size of the table. As not every
AM will use postgres' BLCKSZ, have it return bytes, and have
RelationGetNumberOfBlocksInFork() round the byte size up into blocks.
To allow code outside of the AM to determine the actual relation size,
map InvalidForkNumber to the total size of the relation, since not
every AM will use only the postgres-defined forks.
A few users of RelationGetNumberOfBlocks() ought to be converted away
from that. One case, the use of it to determine whether a tid is
valid, will be fixed in a follow up commit. Others will have to wait
for v13.
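The block-count conversion amounts to a round-up division; roughly
(callback name approximate):

    /* The AM callback returns bytes; AMs need not use BLCKSZ-sized
     * pages, so round up rather than assume divisibility. */
    uint64      sz = table_relation_size(relation, forkNum);

    return (BlockNumber) ((sz + (BLCKSZ - 1)) / BLCKSZ);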
Author: Andres Freund
Discussion: https://postgr.es/m/20190423225201.3bbv6tbqzkb5w7cw@alap3.anarazel.de
Tom Lane [Fri, 17 May 2019 23:44:19 +0000 (19:44 -0400)]
Restructure creation of run-time pruning steps.
Previously, gen_partprune_steps() always built executor pruning steps
using all suitable clauses, including those containing PARAM_EXEC
Params. This meant that the pruning steps were only completely safe
for executor run-time (scan start) pruning. To prune at executor
startup, we had to ignore the steps involving exec Params. But this
doesn't really work in general, since there may be logic changes
needed as well --- for example, pruning according to the last operator's
btree strategy is the wrong thing if we're not applying that operator.
The rules embodied in gen_partprune_steps() and its minions are
sufficiently complicated that tracking their incremental effects in
other logic seems quite impractical.
Short of a complete redesign, the only safe fix seems to be to run
gen_partprune_steps() twice, once to create executor startup pruning
steps and then again for run-time pruning steps. We can save a few
cycles however by noting during the first scan whether we rejected
any clauses because they involved exec Params --- if not, we don't
need to do the second scan.
In support of this, refactor the internal APIs in partprune.c to make
more use of passing information in the GeneratePruningStepsContext
struct, rather than as separate arguments.
This is, I hope, the last piece of our response to a bug report from
Alan Jackson. Back-patch to v11 where this code came in.
Peter Geoghegan [Thu, 16 May 2019 22:11:58 +0000 (15:11 -0700)]
Remove extra nbtree half-dead internal page check.
It's not safe for nbtree VACUUM to attempt to delete a target page whose
right sibling is already half-dead, since that would fail the
cross-check when VACUUM attempts to re-find a downlink to the right
sibling in the parent page. Logic to prevent this from happening was
added by commit 8da31837803, which addressed a bug in the overhaul of
page deletion that went into PostgreSQL 9.4 (commit efada2b8e92).
VACUUM was made to check the right sibling page, and back off when it
happened to be half-dead already.
However, it is only truly necessary to do the right sibling check on the
leaf level, since that transitively determines if the deletion target's
parent's right sibling page is itself undergoing deletion. Remove the
internal page level check, and add a comment explaining why the leaf
level check alone suffices.
The extra check is also unnecessary because internal pages that are
marked half-dead are generally considered corrupt. Commit efada2b8e92
established the principle that there should never be
half-dead internal pages (internal pages pending deletion are possible,
but that status is never directly represented in the internal page).
VACUUM will complain about corruption when it encounters half-dead
internal pages, so VACUUM is bound to raise an error one way or another
when an nbtree index has a half-dead internal page (contrib/amcheck will
also report that the page is corrupt).
It's possible that a pg_upgrade'd 9.3 database will still have half-dead
internal pages, so it may seem like there is an argument for leaving the
check in place to reliably get a cleaner error message that advises the
user to REINDEX. However, leaf pages are also deleted in the first
phase of deletion prior to PostgreSQL 9.4, so I believe we won't even
attempt to re-find the parent page anyway (we won't have the fully
deleted leaf page as the right sibling of our target page, so we won't
even try to find a downlink for it).
Tom Lane [Thu, 16 May 2019 15:58:21 +0000 (11:58 -0400)]
Fix partition pruning to treat stable comparison operators properly.
Cross-type comparison operators in a btree or hash opclass might be
only stable not immutable (this is true of timestamp vs. timestamptz
for example). partprune.c ignored this possibility and would perform
plan-time pruning with them anyway, possibly leading to wrong answers
if the environment changed between planning and execution.
To fix, teach gen_partprune_steps() to do things differently when
creating plan-time pruning steps vs. run-time pruning steps.
analyze_partkey_exprs() also needs an extra check, which is rather
annoying but now is not the time to restructure things enough to
avoid that.
While at it, simplify the logic for the plan-time case a little
by insisting that the comparison value be a Const and nothing else.
This relies on the assumption that eval_const_expressions will have
reduced any immutable expression to a Const; which is not quite
100% true, but certainly any case that comes up often enough to be
interesting should have simplification logic there.
Also improve a bunch of inadequate/obsolete/wrong comments.
Per discussion of a report from Alan Jackson (though this fixes only one
aspect of that problem). Back-patch to v11 where this code came in.
Peter Geoghegan [Wed, 15 May 2019 23:53:11 +0000 (16:53 -0700)]
Remove obsolete nbtree insertion comment.
Remove a Berkeley-era comment above _bt_insertonpg() that admonishes the
reader to grok Lehman and Yao's paper before making any changes. This
made a certain amount of sense back when _bt_insertonpg() was
responsible for most of the things that are now spread across
_bt_insertonpg(), _bt_findinsertloc(), _bt_insert_parent(), and
_bt_split(), but it doesn't work like that anymore.
I believe that this comment alludes to the need to "couple" or "crab"
buffer locks as we ascend the tree as page splits cascade upwards. The
nbtree README already explains this in detail, which seems sufficient.
Besides, the changes to page splits made by commit 40dae7ec537 altered
the exact details of how buffer locks are retained during splits; Lehman
and Yao's original algorithm seems to release the lock on the left child
page/buffer slightly earlier than _bt_insertonpg()/_bt_insert_parent()
can.
Peter Geoghegan [Wed, 15 May 2019 19:22:07 +0000 (12:22 -0700)]
Reverse order of newitem nbtree candidate splits.
Commit fab25024, which taught nbtree to choose candidate split points
more carefully, had _bt_findsplitloc() record all possible split points
in an initial pass over a page that is about to be split. The order
that candidate split points were processed and stored in was assumed to
match the offset number order of split points on an imaginary version of
the page that contains the same items as the original, but also fits
newitem (the item that provoked the split precisely because it didn't
fit).
However, the order of split points in the final array was not quite what
was expected: the split point that makes newitem the firstright item
came after the split point that makes newitem the lastleft item -- not
before. As a result, _bt_findsplitloc() could get confused about the
leftmost and rightmost tuples among all possible split points recorded
for the page. This seems to have no appreciable impact on the quality
of the final split point chosen by _bt_findsplitloc(), but it's still
wrong.
To fix, switch the order in which newitem candidate splits are
recorded. This also makes it possible to describe candidate split points in
terms of which pair of adjoining tuples enclose the split point within
_bt_findsplitloc(), making it clearer why it's generally safe for
_bt_split() to expect lastleft and firstright tuples.
Andres Freund [Tue, 14 May 2019 19:11:26 +0000 (12:11 -0700)]
Handle table_complete_speculative's succeeded argument as documented.
For some reason both the callsite and the heapam implementation had
the meaning inverted (i.e. succeeded == true was passed in case of
conflict). That's confusing.
I (Andres) briefly pondered whether it'd be better to rename
table_complete_speculative's argument to 'bool specConflict' or such,
but decided not to. The 'complete' in the function name for me makes
`succeeded` sound a bit better.
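So the callsite now passes the documented meaning, roughly like this
(variable names illustrative):

    /* succeeded == true confirms the speculative insertion; on
     * conflict we must pass false so the tuple gets killed instead. */
    table_complete_speculative(resultRelationDesc, slot,
                               specToken, !specConflict);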
Reported-By: Ashwin Agrawal, Melanie Plageman, Heikki Linnakangas
Discussion:
https://postgr.es/m/CALfoeitk7-TACwYv3hCw45FNPjkA86RfXg4iQ5kAOPhR+F1Y4w@mail.gmail.com
https://postgr.es/m/97673451-339f-b21e-a781-998d06b1067c@iki.fi
Andres Freund [Tue, 14 May 2019 18:45:40 +0000 (11:45 -0700)]
Add isolation test for INSERT ON CONFLICT speculative insertion failure.
This path previously was not reliably covered. There was some
heuristic coverage via insert-conflict-toast.spec, but that test is
not deterministic, and only tested for a somewhat specific bug.
Backpatch, as this is a complicated and otherwise untested code
path. Unfortunately 9.5 cannot handle two waiting sessions, and thus
cannot execute this test.
Triggered by a conversation with Melanie Plageman.
Author: Andres Freund
Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com
Backpatch: 9.5-
Tom Lane [Tue, 14 May 2019 18:19:49 +0000 (14:19 -0400)]
Move logging.h and logging.c from src/fe_utils/ to src/common/.
The original placement of this module in src/fe_utils/ is ill-considered,
because several src/common/ modules have dependencies on it, meaning that
libpgcommon and libpgfeutils now have mutual dependencies. That makes it
pointless to have distinct libraries at all. The intended design is that
libpgcommon is lower-level than libpgfeutils, so only dependencies from
the latter to the former are acceptable.
We already have the precedent that fe_memutils and a couple of other
modules in src/common/ are frontend-only, so it's not stretching anything
out of whack to treat logging.c as a frontend-only module in src/common/.
To the extent that such modules help provide a common frontend/backend
environment for the rest of common/ to use, it's a reasonable design.
(logging.c does not yet provide an ereport() emulation, but one can
dream.)
Hence, move these files over, and revert basically all of the build-system
changes made by commit cc8d41511. There are no places that need to grow
new dependencies on libpgcommon, further reinforcing the idea that this
is the right solution.
The existence of these files became rather confusing with the
introduction of a widely-known logging.h header in commit cc8d41511.
(Indeed, there's already some duplicative #includes here, perhaps
betraying such confusion.) The only thing left in them, after that
commit, is a progress-reporting function that's neither general-purpose
nor tied in any way to other logging infrastructure. Hence, let's just
move that function to pg_rewind.c, and get rid of the separate files.
Tom Lane [Tue, 14 May 2019 15:27:31 +0000 (11:27 -0400)]
Fix SQL-style substring() to have spec-compliant greediness behavior.
SQL's regular-expression substring() function is defined to have a
pattern argument that's separated into three subpatterns by escape-
double-quote markers; the function result is the part of the input
matching the second subpattern. The standard makes it clear that
if there is ambiguity about how to match the input to the subpatterns,
the first and third subpatterns should be taken to match the smallest
possible amount of text (i.e., they're "non-greedy", in the terms of
our regex code). We were not doing it that way: the first subpattern
would eat the largest possible amount of text, causing the function
result to be shorter than what the spec requires.
Fix that by attaching explicit greediness quantifiers to the
subpatterns. (This depends on the regex fix in commit 8a29ed053;
before that, this didn't reliably change the regex engine's behavior.)
Also, by adding parentheses around each subpattern, we ensure that
"|" (OR) in the subpatterns behave sanely. Previously, "|" in the
first or third subpatterns didn't work.
This patch also makes the function throw error if you write more than
two escape-double-quote markers, and do something sane if you write
just one, and document that behavior. Previously, an odd number of
markers led to a confusing complaint about unbalanced parentheses,
while extra pairs of markers were just ignored. (Note that the spec
requires exactly two markers, but we've historically allowed there
to be none, and this patch preserves the old behavior for that case.)
In passing, adjust some substring() test cases that didn't really
prove what they said they were testing for: they used patterns
that didn't match the data string, so that the output would be
NULL whether or not the function was really strict.
Although this is certainly a bug fix, changing the behavior in back
branches seems undesirable: applications could perhaps be depending on
the old behavior, since it's not obviously wrong unless you read the
spec very closely. Hence, no back-patch.
Tom Lane [Tue, 14 May 2019 14:22:28 +0000 (10:22 -0400)]
In bootstrap mode, use default signal handling for SIGINT etc.
Previously, the code pointed the standard process-termination signals
to postgres.c's die(). That would typically result in an attempt to
execute a transaction abort, which is not possible in bootstrap mode,
leading to PANIC. This choice seems to be a leftover from an old code
structure in which the same signal-assignment code was used for many
sorts of auxiliary processes, including interactive standalone
backends. It's not very sensible for bootstrap mode, which has no
interest in either interactivity or continuing after an error. We can
get better behavior with less effort by just letting normal process
termination happen, after which the parent initdb process will clean up.
This is basically cosmetic in any case, since initdb will react the
same way whether bootstrap dies on a signal or abort(). Given the
lack of previous complaints, I don't feel a need to back-patch,
even though the behavior is old.
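In code terms, bootstrap mode now leaves the termination signals at
their defaults; roughly (a sketch, not the exact code):

    /* Bootstrap mode: let SIGINT etc. just terminate the process;
     * the parent initdb will notice and clean up. */
    pqsignal(SIGHUP, SIG_DFL);
    pqsignal(SIGINT, SIG_DFL);
    pqsignal(SIGTERM, SIG_DFL);
    pqsignal(SIGQUIT, SIG_DFL);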
Note: SQL:2016-2 lists a large number of non-reserved keywords that
are really just information_schema column names related to new
features. Those kinds of thing have not previously been listed as
keywords, and this was apparently done here by mistake, since these
keywords have been removed again in post-2016 working drafts. So in
order to avoid bloating the keywords table unnecessarily, I have
omitted these erroneous keywords here.
Detect internal GiST page splits correctly during index build.
As we descend the GiST tree during insertion, we modify any downlinks on
the way down to include the new tuple we're about to insert (if they don't
cover it already). Modifying an existing downlink might cause an internal
page to split, if the new downlink tuple is larger than the old one. If
that happens, we need to back up to the parent and re-choose a page to
insert to. We used to detect that situation, thanks to the NSN-LSN
interlock normally used to detect concurrent page splits, but that got
broken by commit 9155580fd5. With that commit, we now use a dummy constant
LSN value for every page during index build, so the LSN-NSN interlock no
longer works. I thought that was OK because there can't be any other
backends modifying the index during index build, but missed that the
insertion itself can modify the page we're inserting to. The consequence
was that we would sometimes insert the new tuple to an incorrect page, one
whose downlink doesn't cover the new tuple.
To fix, add a flag to the stack that keeps track of the state while
descending the tree, indicating that a page was split and that we need
to retry the descent from the parent.
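Conceptually, the stack entry grows one field (a sketch; the struct is
simplified and the field name is an approximation):

    typedef struct GISTInsertStack
    {
        BlockNumber blkno;          /* this page's block number */
        Buffer      buffer;
        GistNSN     lsn;            /* page LSN at time of visit */
        /* true if this page was split while we worked on a child,
         * meaning the descent must be retried from the parent */
        bool        retry_from_parent;
        struct GISTInsertStack *parent;
    } GISTInsertStack;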
Thomas Munro first reported that the contrib/intarray regression test was
failing occasionally on the buildfarm after commit 9155580fd5. The failure
was intermittent, because the gistchoose() function is not deterministic,
and would only occasionally create the right circumstances for this bug to
cause the failure.
Patch by Anastasia Lubennikova, with some changes by me to make it work
correctly when the internal page split also causes the "grandparent"
to be split.
Peter Geoghegan [Mon, 13 May 2019 22:53:39 +0000 (15:53 -0700)]
Standardize ItemIdData terminology.
The term "item pointer" should not be used to refer to ItemIdData
variables, since that is needlessly ambiguous. Only
ItemPointerData/ItemPointer variables should be called item pointers.
To fix, establish the convention that ItemIdData variables should always
be referred to either as "item identifiers" or "line pointers". The
term "item identifier" already predominates in docs and translatable
messages, and so should be the preferred alternative there.
Peter Geoghegan [Mon, 13 May 2019 22:39:06 +0000 (15:39 -0700)]
Doc: Refer to line pointers as item identifiers.
An upcoming HEAD-only patch will standardize the terminology around
ItemIdData variables/line pointers, ending the practice of referring to
them as "item pointers". Make the "Database Page Layout" docs
consistent with the new policy. The term "item identifier" is already
used in the same section, so stick with that.
Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
Backpatch: All supported branches.
Tom Lane [Mon, 13 May 2019 21:23:00 +0000 (17:23 -0400)]
Fix logical replication's ideas about which type OIDs are built-in.
Only hand-assigned type OIDs should be presumed to match across different
PG servers; those assigned during genbki.pl or during initdb are likely
to change due to addition or removal of unrelated objects.
This means that the cutoff should be FirstGenbkiObjectId (in HEAD)
or FirstBootstrapObjectId (before that), not FirstNormalObjectId.
Compare postgres_fdw's is_builtin() test.
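That is, the test should reduce to something like the following
(a sketch; the macro name is hypothetical):

    /* Only hand-assigned OIDs are stable across installations. */
    #define IsBuiltinTypeOid(oid)  ((oid) < (Oid) FirstGenbkiObjectId)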
It's likely that this error has no observable consequence in a
normally-functioning system, since ATM the only affected type OIDs are
system catalog rowtypes and information_schema types, which would not
typically be interesting for logical replication. But you could
probably break it if you tried hard, so back-patch.
Tom Lane [Mon, 13 May 2019 21:05:48 +0000 (17:05 -0400)]
Improve commentary about hack in is_publishable_class().
The FirstNormalObjectId test here is a kluge that needs to go away,
but the only substitute we can think of is to add a column to pg_class,
which will take more work than can be handled right now. Add some
commentary in the meanwhile.
Peter Geoghegan [Mon, 13 May 2019 17:27:59 +0000 (10:27 -0700)]
Don't leave behind junk nbtree pages during split.
Commit 8fa30f906be reduced the elevel of a number of "can't happen"
_bt_split() errors from PANIC to ERROR. At the same time, the new right
page buffer for the split could continue to be acquired well before the
critical section. This was possible because it was relatively
straightforward to make sure that _bt_split() could not throw an error,
with a few specific exceptions. The exceptional cases were safe because
they involved specific, well understood errors, making it possible to
consistently zero the right page before actually raising an error using
elog(). There was no danger of leaving around a junk page, provided
_bt_split() stuck to this coding rule.
Commit 8224de4f, which introduced INCLUDE indexes, added code to make
_bt_split() truncate away non-key attributes. This happened at a point
that broke the rule around zeroing the right page in _bt_split(). If
truncation failed (perhaps due to palloc() failure), that would result
in an errant right page buffer with junk contents. This could confuse
VACUUM when it attempted to delete the page, and should be avoided on
general principle.
To fix, reorganize _bt_split() so that truncation occurs before the new
right page buffer is even acquired. A junk page/buffer will not be left
behind if _bt_nonkey_truncate()/_bt_truncate() raise an error.
Discussion: https://postgr.es/m/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com
Backpatch: 11-, where INCLUDE indexes were introduced.
Robert Haas [Mon, 13 May 2019 17:13:24 +0000 (13:13 -0400)]
Improve comment for att_isnull.
The comment implies that a 1 in the null bitmap indicates a null value,
but actually a 0 in the null bitmap indicates a null value. Try to
be more clear.
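For reference, the sense of the test is essentially this (per the
macro in access/tupmacs.h):

    /* true when attribute ATT's bit in the null bitmap is 0,
     * i.e. 0 = null, 1 = not null */
    #define att_isnull(ATT, BITS) \
        (!((BITS)[(ATT) >> 3] & (1 << ((ATT) & 0x07))))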
Patch by me; proposed wording reviewed by Alvaro Herrera and Tom Lane.
Tom Lane [Mon, 13 May 2019 14:53:19 +0000 (10:53 -0400)]
Fix misuse of an integer as a bool.
pgtls_read_pending is declared to return bool, but what the underlying
SSL_pending function returns is a count of available bytes.
This is actually somewhat harmless if we're using C99 bools, but in
the back branches it's a live bug: if the available-bytes count happened
to be a multiple of 256, it would get converted to a zero char value.
On machines where char is signed, counts of 128 and up could misbehave
as well. The net effect is that when using SSL, libpq might block
waiting for data even though some has already been received.
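The hazard in miniature, assuming the pre-C99 "typedef char bool"
fallback (function names here are hypothetical):

    #include <openssl/ssl.h>

    typedef char bool;                  /* pre-C99 fallback */

    static bool
    read_pending_broken(SSL *ssl)
    {
        /* SSL_pending() returns a byte count; 256 narrows to 0 in a
         * char, so "data available" silently becomes "no data". */
        return SSL_pending(ssl);
    }

    static bool
    read_pending_fixed(SSL *ssl)
    {
        return SSL_pending(ssl) > 0;    /* compare, don't truncate */
    }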
Broken by careless refactoring in commit 4e86f1b16, so back-patch
to 9.5 where that came in.
Michael Paquier [Mon, 13 May 2019 00:11:50 +0000 (09:11 +0900)]
Fix incorrect return value in JSON equality function for scalars
equalsJsonbScalarValue() uses a boolean as its return type, yet for one
code path -1 gets returned, which is confusing. The origin of the
confusion is evidently that this code got copy-pasted from
compareJsonbScalarValue() when it was introduced in d1d50bf.
Tom Lane [Sun, 12 May 2019 22:53:12 +0000 (18:53 -0400)]
Fix misoptimization of "{1,1}" quantifiers in regular expressions.
A bounded quantifier with m = n = 1 might be thought a no-op. But
according to our documentation (which traces back to Henry Spencer's
original man page) it still imposes greediness, or non-greediness in the
case of the non-greedy variant "{1,1}?", on whatever it's attached to.
This turns out not to work though, because parseqatom() optimizes away
the m = n = 1 case without regard for whether it's supposed to change
the greediness of the argument RE.
We can fix this by just not applying the optimization when the greediness
needs to change; the subsequent general cases handle it fine.
The three cases in which we can still apply the optimization are
(a) no quantifier, or quantifier does not impose a preference;
(b) atom has no greediness property, implying it cannot match a
variable amount of text anyway; or
(c) quantifier's greediness is same as atom's.
Note that in most cases where one of these applies, we'd have exited
earlier in the "not a messy case" fast path. I think it's now only
possible to get to the optimization when the atom involves capturing
parentheses or a non-top-level backref.
Back-patch to all supported branches. I'd ordinarily be hesitant to
put a subtle behavioral change into back branches, but in this case
it's very hard to see a reason why somebody would write "{1,1}?" unless
they're trying to get the documented change-of-greediness behavior.
Noah Misch [Sun, 12 May 2019 17:33:05 +0000 (10:33 -0700)]
Fail pgwin32_message_to_UTF16() for SQL_ASCII messages.
The function had been interpreting SQL_ASCII messages as UTF8, throwing
an error when they were invalid UTF8. The new behavior is consistent
with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
write() and ReportEventA(). On buildfarm member bowerbird, enabling
log_connections caused an error whenever the role name was not valid
UTF8. Back-patch to 9.4 (all supported versions).
Tom Lane [Sun, 12 May 2019 01:27:13 +0000 (21:27 -0400)]
Rearrange pgstat_bestart() to avoid failures within its critical section.
We long ago decided to design the shared PgBackendStatus data structure to
minimize the cost of writing status updates, which means that writers just
have to increment the st_changecount field twice. That isn't hooked into
any sort of resource management mechanism, which means that if something
were to throw an error between the two increments, the st_changecount field
would be left odd indefinitely. That would cause readers to lock up.
Now, since it's also a bad idea to leave the field odd for longer than
absolutely necessary (because readers will spin while we have it set),
the expectation was that we'd treat these segments like spinlock critical
sections, with only short, more or less straight-line, code in them.
That was fine as originally designed, but commit 9029f4b37 broke it
by inserting a significant amount of non-straight-line code into
pgstat_bestart(), code that is very capable of throwing errors, not to
mention taking a significant amount of time during which readers will spin.
We have a report from Neeraj Kumar of readers actually locking up, which
I suspect was due to an encoding conversion error in X509_NAME_to_cstring,
though conceivably it was just a garden-variety OOM failure.
Subsequent commits have loaded even more dubious code into pgstat_bestart's
critical section (and commit fc70a4b0d deserves some kind of booby prize
for managing to miss the critical section entirely, although the negative
consequences seem minimal given that the PgBackendStatus entry should be
seen by readers as inactive at that point).
The right way to fix this mess seems to be to compute all these values
into a local copy of the process' PgBackendStatus struct, and then just
copy the data back within the critical section proper. This plan can't
be implemented completely cleanly because of the struct's heavy reliance
on out-of-line strings, which we must initialize separately within the
critical section. But still, the critical section is far smaller and
safer than it was before.
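The resulting pattern, sketched with the renamed macros (as I read
this commit; details simplified):

    volatile PgBackendStatus *vbeentry = MyBEEntry;
    PgBackendStatus lbeentry;

    /* Do all failure-prone computation outside the critical section. */
    lbeentry.st_procpid = MyProcPid;
    /* ... fill in the rest of the local copy ... */

    PGSTAT_BEGIN_WRITE_ACTIVITY(vbeentry);
    memcpy((void *) vbeentry, &lbeentry, sizeof(PgBackendStatus));
    /* out-of-line strings must still be initialized in here */
    PGSTAT_END_WRITE_ACTIVITY(vbeentry);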
In hopes of forestalling future errors of the same ilk, rename the
macros for st_changecount management to make it more apparent that
the writer-side macros create a critical section. And to prevent
the worst consequences if we nonetheless manage to mess it up anyway,
adjust those macros so that they really are a critical section, ie
they now bump CritSectionCount. That doesn't add much overhead, and
it guarantees that if we do somehow throw an error while the counter
is odd, it will lead to PANIC and a database restart to reset shared
memory.
Back-patch to 9.5 where the problem was introduced.
In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach
pgstat_read_current_status to copy st_gssstatus data from shared memory to
local memory. Hence, subsequent use of that data within the transaction
would potentially see changing data that it shouldn't see.
Noah Misch [Sat, 11 May 2019 07:22:38 +0000 (00:22 -0700)]
Honor TEMP_CONFIG in TAP suites.
The buildfarm client uses TEMP_CONFIG to implement its extra_config
setting. Except for stats_temp_directory, extra_config now applies to
TAP suites; extra_config values seen in the past month are compatible
with this. Back-patch to 9.6, where PostgresNode was introduced, so the
buildfarm can rely on it sooner.
Andres Freund [Mon, 6 May 2019 06:31:58 +0000 (23:31 -0700)]
Remove reindex_catalog test from test schedules.
As none of the approaches for avoiding the deadlock issues seem
promising enough, and all the expected reindex-related changes have
been made, apply 60c2951e1bab7e to master as well.
Tom Lane [Fri, 10 May 2019 18:56:41 +0000 (14:56 -0400)]
Cope with EINVAL and EIDRM shmat() failures in PGSharedMemoryAttach.
There's a very old race condition in our code to see whether a pre-existing
shared memory segment is still in use by a conflicting postmaster: it's
possible for the other postmaster to remove the segment in between our
shmctl() and shmat() calls. It's a narrow window, and there's no risk
unless both postmasters are using the same port number, but that's possible
during parallelized "make check" tests. (Note that while the TAP tests
take some pains to choose a randomized port number, pg_regress doesn't.)
If it does happen, we treated that as an unexpected case and errored out.
To fix, allow EINVAL to be treated as segment-not-present, and the same
for EIDRM on Linux. AFAICS, the considerations here are basically
identical to the checks for acceptable shmctl() failures, so I documented
and coded it that way.
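A sketch of the attach-time check (simplified):

    void       *memAddress = shmat(shmid, requestedAddress, 0);

    if (memAddress == (void *) -1 &&
        (errno == EINVAL || errno == EIDRM))
    {
        /* The segment vanished between our shmctl() and shmat()
         * calls; treat it as segment-not-present rather than failing.
         * (EIDRM is the Linux spelling; it may not exist everywhere.) */
        return NULL;
    }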
While at it, adjust PGSharedMemoryAttach's API to remove its undocumented
dependency on UsedShmemSegAddr in favor of passing the attach address
explicitly. This makes it easier to be sure we're using a null shmaddr
when probing for segment conflicts (thus avoiding questions about what
EINVAL means). I don't think there was a bug there, but it required
fragile assumptions about the state of UsedShmemSegAddr during
PGSharedMemoryIsInUse.
Commit c09850992 may have made this failure more probable by applying
the conflicting-segment tests more often. Hence, back-patch to all
supported branches, as that was.
Michael Paquier [Fri, 10 May 2019 00:35:27 +0000 (09:35 +0900)]
Fix and improve description of locktag types in lock.h
The description of the lock type for speculative insertions was
incorrect, being copy-pasted from another one.
As discussed, also move the description for all the fields of lock tag
types from the structure listing lock tag types to the set of macros
setting each LOCKTAG.
Author: John Naylor
Discussion: https://postgr.es/m/CACPNZCtA0-ybaC4fFfaDq_8p_TUOLvGxZH9Dm-=TMHZJarBa7Q@mail.gmail.com
Michael Paquier [Thu, 9 May 2019 23:18:46 +0000 (08:18 +0900)]
Improve and fix some error handling for REINDEX INDEX/TABLE CONCURRENTLY
This improves the user experience when it comes to restricting several
flavors of REINDEX CONCURRENTLY. First, for INDEX, remove a restriction
on shared relations, as catalog relations are already checked. Then,
for TABLE, add a proper error message when attempting to run the command
on system catalogs. The code path of CREATE INDEX CONCURRENTLY already
complains about that, but if a REINDEX is issued the error generated is
confusing.
While at it, add more tests to check restrictions on catalog indexes and
on toast table/index for catalogs. Some error messages are improved,
with wording suggestion coming from Tom Lane.
Reported-by: Tom Lane
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/23694.1556806002@sss.pgh.pa.us
Tom Lane [Thu, 9 May 2019 20:52:48 +0000 (16:52 -0400)]
Repair issues with faulty generation of merge-append plans.
create_merge_append_plan failed to honor the CP_EXACT_TLIST flag:
it would generate the expected targetlist but then it felt free to
add resjunk sort targets to it. This demonstrably leads to assertion
failures in v11 and HEAD, and it's probably just accidental that we
don't see the same in older branches. I've not looked into whether
there would be any real-world consequences in non-assert builds.
In HEAD, create_append_plan has sprouted the same problem, so fix
that too (although we do not have any test cases that seem able to
reach that bug). This is an oversight in commit 3fc6e2d7f which
invented the CP_EXACT_TLIST flag, so back-patch to 9.6 where that
came in.
convert_subquery_pathkeys would create pathkeys for subquery output
values if they match any EquivalenceClass known in the outer query
and are available in the subquery's syntactic targetlist. However,
the second part of that condition is wrong, because such values might
not appear in the subquery relation's reltarget list, which would
mean that they couldn't be accessed above the level of the subquery
scan. We must check that they appear in the reltarget list, instead.
This can lead to dropping knowledge about the subquery's sort
ordering, but I believe it's okay, because any sort key that the
outer query actually has any interest in would appear in the
reltarget list.
This second issue is of very long standing, but right now there's no
evidence that it causes observable problems before 9.6, so I refrained
from back-patching further than that. We can revisit that choice if
somebody finds a way to make it cause problems in older branches.
(Developing useful test cases for these issues is really problematic;
fixing convert_subquery_pathkeys removes the only known way to exhibit
the create_merge_append_plan bug, and neither of the test cases added
by this patch causes a problem in all branches, even when considering
the issues separately.)
The second issue explains bug #15795 from Suresh Kumar R ("could not
find pathkey item to sort" with nested DISTINCT queries). I stumbled
across the first issue while investigating that.
Bruce Momjian [Thu, 9 May 2019 20:44:27 +0000 (16:44 -0400)]
doc: update PG 12 release notes, v2
Adjustments requested by reviewers.
Reported-by: Amit Kapila, Thomas Munro, Andrew Gierth, Amit Langote, Oleg Bartunov, Michael Paquier, Alvaro Herrera, Tatsuo Ishii
Discussion: https://postgr.es/m/20190506233029.ozwged67i7s4qd6c@momjian.us
Etsuro Fujita [Thu, 9 May 2019 10:50:15 +0000 (19:50 +0900)]
Doc: Update FDW documentation about GetForeignUpperPaths().
In commit d50d172e51, which added support for LIMIT/OFFSET pushdown in
postgres_fdw, a new struct was introduced as the extra parameter of
GetForeignUpperPaths(), set for UPPERREL_FINAL, but I forgot to update
the documentation to mention that.
Etsuro Fujita [Thu, 9 May 2019 09:39:23 +0000 (18:39 +0900)]
postgres_fdw: Fix cost estimation for aggregate pushdown.
In commit 7012b132d0, which added support for aggregate pushdown in
postgres_fdw, the expense of evaluating the final scan/join target
computed by make_group_input_target() was not accounted for at all in
costing aggregate pushdown paths with local statistics. The right fix
for this would be to have a separate upper stage to adjust the final
scan/join relation (see comments for apply_scanjoin_target_to_paths());
but for now, fix by adding the tlist eval cost when costing aggregate
pushdown paths with local statistics.
Apply this to HEAD only to avoid destabilizing existing plan choices.
Thomas Munro [Thu, 9 May 2019 07:11:46 +0000 (19:11 +1200)]
Fix SxactGlobalXmin tracking.
Commit bb16aba50 broke the code that maintains SxactGlobalXmin. It
could get stuck when a well-timed READ ONLY transaction runs. If
SxactGlobalXmin stops advancing, transactions on the
FinishedSerializableTransactions queue are never cleaned up, so
resources are effectively leaked. Revert that hunk of the commit.
Also revert another similar hunk that was probably harmless, but
unnecessary and unjustified, relating to the DOOMED flag in case of
RO_SAFE early release.
Author: Thomas Munro
Reported-by: Tom Lane
Discussion: https://postgr.es/m/16170.1557251214%40sss.pgh.pa.us
Tom Lane [Thu, 9 May 2019 03:27:29 +0000 (23:27 -0400)]
Clean up the behavior and API of catalog.c's is-catalog-relation tests.
The right way for IsCatalogRelation/Class to behave is to return true
for OIDs less than FirstBootstrapObjectId (not FirstNormalObjectId),
without any of the ad-hoc fooling around with schema membership.
The previous code was wrong because (1) it claimed that
information_schema tables were not catalog relations but their toast
tables were, which is silly; and (2) if you dropped and recreated
information_schema, which is a supported operation, the behavior
changed. That's even sillier. With this definition, "catalog
relations" are exactly the ones traceable to the postgres.bki data,
which seems like what we want.
With this simplification, we don't actually need access to the pg_class
tuple to identify a catalog relation; we only need its OID. Hence,
replace IsCatalogClass with "IsCatalogRelationOid(oid)". But keep
IsCatalogRelation as a convenience function.
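With that definition the check reduces to a bare OID comparison,
roughly:

    bool
    IsCatalogRelationOid(Oid relid)
    {
        /* Exactly the relations traceable to postgres.bki data. */
        return (relid < (Oid) FirstBootstrapObjectId);
    }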
This allows fixing some arguably-wrong semantics in contrib/sepgsql and
ReindexRelationConcurrently, which were using an IsSystemNamespace test
where what they really should be using is IsCatalogRelationOid. The
previous coding failed to protect toast tables of system catalogs, and
also was not on board with the general principle that user-created tables
do not become catalogs just by virtue of being renamed into pg_catalog.
We can also get rid of a messy hack in ReindexMultipleTables.
While we're at it, also rename IsSystemNamespace to IsCatalogNamespace,
because the previous name invited confusion with the more expansive
semantics used by IsSystemRelation/Class.
Also improve the comments in catalog.c.
There are a few remaining places in replication-related code that are
special-casing OIDs below FirstNormalObjectId. I'm inclined to think
those are wrong too, and if there should be any special case it should
just extend to FirstBootstrapObjectId. But first we need to debate
whether a FOR ALL TABLES publication should include information_schema.
Michael Paquier [Thu, 9 May 2019 01:29:10 +0000 (10:29 +0900)]
Fix error status of vacuumdb when multiple jobs are used
When running a batch of VACUUM or ANALYZE commands on a given database,
there were cases where vacuumdb would fail to report an error when it
actually should, leading to an incorrect exit status.
Author: Julien Rouhaud
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_ZuTwz7CtqLYJ1Ouuh272bTQPLN8b1bAPk0bCBm4PDMTQ@mail.gmail.com
Backpatch-through: 9.5
Commit dd299df8189, which added suffix truncation to nbtree, simplified
the WAL record format used by page splits. It became necessary to
explicitly WAL-log the new high key for the left half of a split in all
cases, which relieved the REDO routine from having to reconstruct a new
high key for the left page by copying the first item from the right
page. Remove a comment that referred to the previous practice.
Alvaro Herrera [Wed, 8 May 2019 17:16:54 +0000 (13:16 -0400)]
Fix error messages
Some messages related to foreign servers were reporting the server name
without quotes, or not at all; our style is to have all names be quoted,
and the server name already appears quoted in a few other messages, so
just add quotes and make them all consistent.
Remove an extra "s" in other messages (typos introduced by myself in f56f8f8da6af).
Fujii Masao [Wed, 8 May 2019 16:35:13 +0000 (01:35 +0900)]
Fix documentation for the privileges required for replication functions.
Previously it was documented that use of replication functions is
restricted to superusers. This is true for the functions which
use replication origins, but not for pg_logical_emit_message() and
the functions which use replication slots. For example, not only
superusers but also users with the REPLICATION privilege are allowed
to use the functions for replication slots. This commit fixes
the documentation of the privileges required for those replication
functions.
Fix table lock levels for REINDEX INDEX CONCURRENTLY
REINDEX CONCURRENTLY locks tables with ShareUpdateExclusiveLock rather
than the ShareLock used by a plain REINDEX. However,
RangeVarCallbackForReindexIndex() was not updated for that and still
used only ShareLock. This would lead to lock upgrades later,
possibly resulting in deadlocks.
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de
Thomas Munro [Mon, 6 May 2019 03:02:41 +0000 (15:02 +1200)]
Probe only 127.0.0.1 when looking for ports on Unix.
Commit c0985099, later adjusted by commit 4ab02e81, probed 0.0.0.0
in addition to 127.0.0.1, for the benefit of Windows build farm
animals. It isn't really useful on Unix systems, and turned out to
be a bit inconvenient to users of some corporate firewall software.
Switch back to probing just 127.0.0.1 on non-Windows systems.