Tom Lane [Wed, 23 May 2018 23:04:34 +0000 (19:04 -0400)]
Fix simple_prompt() to disable echo on Windows when stdin != terminal.
If echo = false, simple_prompt() is supposed to prevent echoing the
input (for password input). However, the Windows implementation applied
the mode change to STD_INPUT_HANDLE. That would not have the desired
effect if stdin isn't actually the terminal, for instance if the user
is piping something into psql. Fix it to apply the mode change to
the correct input file, so that passwords do not echo in such cases.
In passing, shorten and de-uglify this code by using #elif rather than
an #if nest and removing some duplicated code.
Back-patch to all supported versions. To simplify that, also back-patch
the portions of commit 9daec77e1 that got rid of an unnecessary
malloc/free in the same area.
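For illustration, a minimal sketch of the intended Windows pattern,
assuming the prompt routine has opened the terminal as a stdio stream
(handle plumbing and names are simplified from the real simple_prompt()):

    #ifdef WIN32
    #include <windows.h>
    #include <io.h>
    #include <stdio.h>

    /* Disable echo on the file we will actually read from, not on
     * STD_INPUT_HANDLE, which may be a pipe when stdin is redirected. */
    static DWORD
    disable_echo(FILE *termin)
    {
        HANDLE  h = (HANDLE) _get_osfhandle(_fileno(termin));
        DWORD   saved_mode = 0;

        GetConsoleMode(h, &saved_mode);
        SetConsoleMode(h, saved_mode & ~ENABLE_ECHO_INPUT);
        return saved_mode;      /* caller restores this afterwards */
    }
    #endif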
Tom Lane [Tue, 22 May 2018 17:32:52 +0000 (13:32 -0400)]
Widen COPY FROM's current-line-number counter from 32 to 64 bits.
Because the code for the HEADER option skips a line when this counter
is zero, a very long COPY FROM WITH HEADER operation would drop a line
every 2^32 lines. A lesser but still unfortunate problem is that errors
would show a wrong input line number for errors occurring beyond the
2^31'st input line. While such large input streams seemed impractical
when this code was first written, they're not any more. Widening the
counter (and some associated variables) to uint64 should be enough to
prevent problems for the foreseeable future.
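A toy demonstration of the wraparound hazard (standalone C, not the
COPY code itself):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 32-bit counter wraps to zero at line 2^32, so a HEADER test
         * of "counter == 0" fires again mid-stream and drops a line. */
        uint32_t lineno32 = UINT32_MAX;
        uint64_t lineno64 = UINT32_MAX;

        printf("32-bit: %u\n", ++lineno32);   /* prints 0: wrapped */
        printf("64-bit: %llu\n", (unsigned long long) ++lineno64);
        return 0;
    }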
Andrew Gierth [Mon, 21 May 2018 16:02:17 +0000 (17:02 +0100)]
Fix SQL:2008 FETCH FIRST syntax to allow parameters.
OFFSET <x> ROWS FETCH FIRST <y> ROWS ONLY syntax is supposed to accept
<simple value specification>, which includes parameters as well as
literals. When this syntax was added all those years ago, it was done
inconsistently, with <x> and <y> being different subsets of the
standard syntax.
Rectify that by making <x> and <y> accept the same thing, and allowing
either a (signed) numeric literal or a c_expr there, which allows for
parameters, variables, and parenthesized arbitrary expressions.
Per bug #15200 from Lukas Eder.
Backpatch all the way, since this has been broken from the start.
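As a sketch of what the fix enables, a libpq client can now bind both
values as parameters (connection string and table name are placeholders):

    #include <libpq-fe.h>
    #include <stdio.h>

    int main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        const char *params[2] = {"10", "5"};    /* offset, fetch count */
        PGresult   *res;

        res = PQexecParams(conn,
                           "SELECT * FROM t ORDER BY id "
                           "OFFSET $1 ROWS FETCH FIRST $2 ROWS ONLY",
                           2, NULL, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }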
Tom Lane [Mon, 21 May 2018 04:32:28 +0000 (00:32 -0400)]
Fix unsafe usage of strerror(errno) within ereport().
This is the converse of the unsafe-usage-of-%m problem: the reason
ereport/elog provide that format code is mainly to dodge the hazard
of errno getting changed before control reaches functions within the
arguments of the macro. I only found one instance of this hazard,
but it's been there since 9.4 :-(.
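Schematically, the hazard and its fix look like this (backend fragment;
the helper name is a placeholder):

    /* Unsafe: another argument of the macro may clobber errno before
     * strerror() reads it, since evaluation order is unspecified. */
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not open file \"%s\": %s",
                    helper_that_may_set_errno(), strerror(errno))));

    /* Safe: ereport/elog capture errno up front and expand %m itself. */
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not open file \"%s\": %m", filename)));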
Tom Lane [Sun, 20 May 2018 15:40:54 +0000 (11:40 -0400)]
printf("%lf") is not portable, so omit the "l".
The "l" (ell) width spec means something in the corresponding scanf usage,
but not here. While modern POSIX says that applying "l" to "f" and other
floating format specs is a no-op, SUSv2 says it's undefined. Buildfarm
experience says that some old compilers emit warnings about it, and at
least one old stdio implementation (mingw's "ANSI" option) actually
produces wrong answers and/or crashes.
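A minimal illustration of the asymmetry between printf and scanf here:

    #include <stdio.h>

    int main(void)
    {
        double  x = 3.14;

        printf("%f\n", x);    /* portable: float args promote to double */
        /* printf("%lf\n", x) -- "l" is a no-op in modern POSIX, undefined
         * under SUSv2, and broken on some old stdio implementations */
        if (scanf("%lf", &x) == 1)   /* in scanf, "l" IS required */
            printf("%f\n", x);
        return 0;
    }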
Tom Lane [Sat, 19 May 2018 18:22:18 +0000 (14:22 -0400)]
Support platforms where strtoll/strtoull are spelled __strtoll/__strtoull.
Ancient HPUX, for one, does this. We hadn't noticed due to the lack
of regression tests that required a working strtoll.
(I was slightly tempted to remove the other historical spelling,
strto[u]q, since it seems we have no buildfarm members testing that case.
But I refrained.)
Tom Lane [Sat, 19 May 2018 02:42:10 +0000 (22:42 -0400)]
Arrange to supply declarations for strtoll/strtoull if needed.
Buildfarm member dromedary is still unhappy about the recently-added
ecpg "long long" tests. The reason turns out to be that it includes
"-ansi" in its CFLAGS, and in their infinite wisdom Apple have decided
to hide the declarations of strtoll/strtoull in C89-compliant builds.
(I find it pretty curious that they hide those function declarations
when you can nonetheless declare a "long long" variable, but anyway
that is their behavior, both on dromedary's obsolete macOS version and
the newest and shiniest.) As a result, gcc assumes these functions
return "int", leading naturally to wrong results.
(Looking at dromedary's past build results, it's evident that this
problem also breaks pg_strtouint64() on 32-bit platforms; but we
evidently have no regression tests that exercise that function with
values above 32 bits.)
To fix, supply declarations for these functions when the platform
provides the functions but not the declarations, using the same type
of mechanism as we use for some other similar cases.
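The mechanism amounts to something like the following sketch, assuming
configure probes the declarations (so HAVE_DECL_STRTOLL is defined to
0 or 1):

    #include <stdlib.h>

    /* Supply declarations when the function exists but the headers hide
     * it; otherwise the compiler assumes an implicit "int" return. */
    #if defined(HAVE_STRTOLL) && !HAVE_DECL_STRTOLL
    extern long long strtoll(const char *str, char **endptr, int base);
    #endif
    #if defined(HAVE_STRTOULL) && !HAVE_DECL_STRTOULL
    extern unsigned long long strtoull(const char *str, char **endptr, int base);
    #endif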
Tom Lane [Fri, 18 May 2018 23:03:32 +0000 (19:03 -0400)]
Hot-fix ecpg regression test for missing ecpg_config.h inclusion.
I don't think this is really the best long-term answer, and in
particular it doesn't fix the pre-existing hazard in sqltypes.h.
But for the moment let's just try to make the buildfarm green again.
Tom Lane [Fri, 18 May 2018 17:04:59 +0000 (13:04 -0400)]
Add some test coverage for ecpg's "long long" support.
This will only actually exercise the "long long" code paths on platforms
where "long" is 32 bits --- otherwise, the SQL bigint type maps to
plain "long", and we will test that code path instead. But that's
probably sufficient coverage, and anyway we weren't testing either
code path before.
Tom Lane [Fri, 18 May 2018 16:52:28 +0000 (12:52 -0400)]
Recognize that MSVC can support strtoll() and strtoull().
This is needed for full support of "long long" variables in ecpg, but
the previous patch for bug #15080 (commits 51057feaa et al) missed it.
In MSVC versions where the functions don't exist under those names,
we can nonetheless use _strtoi64() and _strtoui64().
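The substitution is roughly this (sketch; the guard macro is
illustrative):

    /* On MSVC versions lacking strtoll/strtoull, map them onto the
     * native equivalents, which have the same signatures. */
    #if defined(_MSC_VER) && !defined(HAVE_STRTOLL)
    #define strtoll(str, endptr, base) _strtoi64(str, endptr, base)
    #define strtoull(str, endptr, base) _strtoui64(str, endptr, base)
    #endif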
Tom Lane [Wed, 16 May 2018 17:46:09 +0000 (13:46 -0400)]
Fix misprocessing of equivalence classes involving record_eq().
canonicalize_ec_expression() is supposed to agree with coerce_type() as to
whether a RelabelType should be inserted to make a subexpression be valid
input for the operators of a given opclass. However, it did the wrong
thing with named-composite-type inputs to record_eq(): it put in a
RelabelType to RECORDOID, which the parser doesn't. In some cases this was
harmless because all code paths involving a particular equivalence class
did the same thing, but in other cases this would result in failing to
recognize a composite-type expression as being a member of an equivalence
class that it actually is a member of. The most obvious bad effect was to
fail to recognize that an index on a composite column could provide the
sort order needed for a mergejoin on that column, as reported by Teodor
Sigaev. I think there might be other, subtler, cases that result in
misoptimization. It also seems possible that an unwanted RelabelType
would sometimes get into an emitted plan --- but because record_eq and
friends don't examine the declared type of their input expressions, that
would not create any visible problems.
To fix, just treat RECORDOID as if it were a polymorphic type, which in
some sense it is. We might want to consider formalizing that a bit more
someday, but for the moment this seems to be the only place where an
IsPolymorphicType() test ought to include RECORDOID as well.
This has been broken for a long time, so back-patch to all supported
branches.
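Schematically, the check in canonicalize_ec_expression now behaves like
this sketch (backend fragment; the exact code differs):

    /* Treat RECORDOID like a polymorphic type: insert no RelabelType,
     * matching what the parser does for record_eq's arguments. */
    if (IsPolymorphicType(req_type) || req_type == RECORDOID)
        return expr;            /* leave the expression's type alone */
    if (exprType((Node *) expr) != req_type)
        expr = (Expr *) makeRelabelType(expr, req_type, -1,
                                        InvalidOid, COERCE_IMPLICIT_CAST);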
Tom Lane [Tue, 15 May 2018 19:06:53 +0000 (15:06 -0400)]
Fix type checking for support functions of parallel VARIADIC aggregates.
The impact of VARIADIC on the combine/serialize/deserialize support
functions of an aggregate wasn't thought through carefully. There is
actually no impact, because variadicity isn't passed through to these
functions (and it doesn't seem like it would need to be). However,
lookup_agg_function was mistakenly told to check things as though it were
passed through. The net result was that it was impossible to declare an
aggregate that had both VARIADIC input and parallelism support functions.
In passing, fix a runtime check in nodeAgg.c for the combine function's
strictness to make its error message agree with the creation-time check.
The previous message was actually backwards, and it doesn't seem like
there's a good reason to have two versions of this message text anyway.
Back-patch to 9.6 where parallel aggregation was introduced.
Tom Lane [Wed, 9 May 2018 17:55:27 +0000 (13:55 -0400)]
Update time zone data files to tzdata release 2018e.
DST law changes in North Korea. Redefinition of "daylight savings" in
Ireland, as well as for some past years in Namibia and Czechoslovakia.
Additional historical corrections for Czechoslovakia.
With this change, the IANA database models Irish timekeeping as following
"standard time" in summer, and "daylight savings" in winter, so that the
daylight savings offset is one hour behind standard time not one hour
ahead. This does not change their UTC offset (+1:00 in summer, 0:00 in
winter) nor their timezone abbreviations (IST in summer, GMT in winter),
though now "IST" is more correctly read as "Irish Standard Time" not "Irish
Summer Time". However, the "is_dst" column in the pg_timezone_names view
will now be true in winter and false in summer for the Europe/Dublin zone.
Similar changes were made for Namibia between 1994 and 2017, and for
Czechoslovakia between 1946 and 1947.
So far as I can find, no Postgres internal logic cares about which way
tm_isdst is reported; in particular, since commit b2cbced9e we do not
rely on it to decide how to interpret ambiguous timestamps during DST
transitions. So I don't think this change will affect any Postgres
behavior other than the timezone-view outputs.
Tom Lane [Wed, 9 May 2018 00:17:43 +0000 (20:17 -0400)]
Improve inefficient regexes in vacuumdb TAP test.
The regexes used in 102_vacuumdb_stages.pl to check the postmaster log
for expected output contained several places with ".*.*", which is
underdetermined and can cause exponential runtime growth in Perl's regex
matcher (since it's not bright enough not to waste time seeing whether
different splits of the same substring would allow a match). We were
fortunate that the amount of text in the postmaster log was generally not
enough to make the runtime go to the moon, though commit 6271fceb8 had
been on the hairy edge of an obvious problem, thanks to its increasing the
default log verbosity to DEBUG1. Experimentation shows that anyone who
tried to run this test case with an even higher log verbosity would have
been in for serious pain. But even at default logging level, fixing this
saves several hundred ms on my workstation, more on slower buildfarm
members.
Remove the extra ".*"s, restoring more-or-less-linear matching speed.
Back-patch to 9.4 where the test case was added, mostly in case anyone
tries to do related debugging in a back branch.
Tom Lane [Mon, 7 May 2018 17:13:27 +0000 (13:13 -0400)]
Last-minute updates for release notes.
The set of functions that need parallel-safety adjustments isn't the
same in 9.6 as 10, so I shouldn't have blindly back-patched that list.
Adjust as needed. Also, provide examples of the commands to issue.
Stephen Frost [Mon, 7 May 2018 14:10:45 +0000 (10:10 -0400)]
adminpack: Revoke EXECUTE on pg_logfile_rotate()
In 9.6, we moved a number of functions over to using the GRANT system to
control access instead of having hard-coded superuser checks.
As it turns out, adminpack was creating another function in the catalog
for one of those backend functions where the superuser check was
removed, specifically pg_rotate_logfile(), but it didn't get the memo
about having to REVOKE EXECUTE on the alternative-name function
(pg_logfile_rotate()), meaning that in any installations with adminpack
on 9.6 and higher, any user is able to run the pg_logfile_rotate()
function, which then calls pg_rotate_logfile() and rotates the logfile.
Fix by adding a new version of adminpack (1.1) which handles the REVOKE.
As this function should have only been available to the superuser, this
is a security issue, albeit a minor one.
Peter Eisentraut [Tue, 13 Feb 2018 21:13:20 +0000 (16:13 -0500)]
Tweak tests to support Python 3.7
Python 3.7 removes the trailing comma in the repr() of
BaseException (see <https://bugs.python.org/issue30399>), leading to
test output differences. Work around that by composing the equivalent
test output in a more manual way.
Fix scenario where streaming standby gets stuck at a continuation record.
If a continuation record is split so that its first half has already been
removed from the master, and is only present in pg_wal, and there is a
recycled WAL segment in the standby server that looks like it would
contain the second half, recovery would get stuck. The code in
XLogPageRead() incorrectly started streaming at the beginning of the
WAL record, even if we had already read the first page.
Backpatch to 9.4. In principle, older versions have the same problem, but
without replication slots, there was no straightforward mechanism to
prevent the master from recycling old WAL that was still needed by the standby.
Without such a mechanism, I think it's reasonable to assume that there's
enough slack in how many old segments are kept around to not run into this,
or you have a WAL archive.
Reported by Jonathon Nelson. Analysis and patch by Kyotaro HORIGUCHI, with
some extra comments by me.
Alvaro Herrera [Fri, 4 May 2018 18:24:44 +0000 (15:24 -0300)]
Don't mark pages all-visible spuriously
Dan Wood diagnosed a long-standing problem that pages containing tuples
that are locked by multixacts containing live lockers may spuriously end
up as candidates for getting their all-visible flag set. This has the
long-term effect that multixacts remain unfrozen; this may previously
have passed undetected, but since commit XYZ it would be reported as
"ERROR: found multixact 134100944 from before relminmxid 192042633"
because when a later vacuum tries to freeze the page it detects that a
multixact that should have gotten frozen, wasn't.
Dan proposed a (correct) patch that simply sets a variable to its
correct value, after a bogus initialization. But, per discussion, it
seems better coding to avoid the bogus initializations altogether, since
they could give rise to more bugs later. Therefore this fix rewrites
the logic a little bit to avoid depending on the bogus initializations.
This bug was part of a family introduced in 9.6 by commit a892234f830e;
later, commit 38e9f90a227d fixed most of them, but this one was
unnoticed.
Andrew Dunstan [Fri, 4 May 2018 19:22:48 +0000 (15:22 -0400)]
Provide for testing of python3 modules when under MSVC
This should have been done some years ago as promised in commit
c4dcdd0c2. However, better late than never.
Along the way do a little housekeeping, including using a simpler test
for the python version being tested, and removing a redundant subroutine
parameter. These changes only apply back to release 9.5.
Tom Lane [Fri, 4 May 2018 16:26:25 +0000 (12:26 -0400)]
Sync our copy of the timezone library with IANA release tzcode2018e.
The non-cosmetic changes involve teaching the "zic" tzdata compiler about
negative DST. While I'm not currently intending that we start using
negative-DST data right away, it seems possible that somebody would try
to use our copy of zic with bleeding-edge IANA data. So we'd better be
out in front of this change code-wise, even though it doesn't matter for
the data file we're shipping.
Teodor Sigaev [Thu, 3 May 2018 17:09:28 +0000 (20:09 +0300)]
Add HOLD_INTERRUPTS section into FinishPreparedTransaction.
If an interrupt arrives in the middle of FinishPreparedTransaction
and any callback decides to call CHECK_FOR_INTERRUPTS (e.g.
RemoveTwoPhaseFile can write a warning with ereport, which checks for
interrupts), then it's possible to leave the current GXact undeleted.
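The fix follows the standard backend pattern, roughly (schematic
fragment):

    HOLD_INTERRUPTS();          /* CHECK_FOR_INTERRUPTS() is a no-op now */

    /* ... run callbacks, remove the two-phase file (which may
     * ereport(WARNING, ...)), and delete the GXact entry ... */

    RESUME_INTERRUPTS();
    CHECK_FOR_INTERRUPTS();     /* safe to service pending interrupts */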
Tom Lane [Wed, 2 May 2018 21:32:40 +0000 (17:32 -0400)]
Revert back-branch changes in power()'s behavior for NaN inputs.
Per discussion, the value of fixing these bugs in the back branches
doesn't outweigh the downsides of changing corner-case behavior in
a minor release. Hence, revert commits 217d8f3a1 and 4d864de48 in
the v10 branch and the corresponding commits in 9.3-9.6.
Tom Lane [Mon, 30 Apr 2018 01:56:28 +0000 (21:56 -0400)]
Fix bogus list-iteration code in pg_regress.c, affecting ecpg tests only.
While looking at a recent buildfarm failure in the ecpg tests, I wondered
why the pg_regress output claimed the stderr part of the test failed, when
the regression diffs were clearly for the stdout part. Looking into it,
the reason is that pg_regress.c's logic for iterating over three parallel
lists is wrong, and has been wrong since it was written: it advances the
"tag" pointer at a different place in the loop than the other two pointers.
Fix that.
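A toy version of the bug class, for illustration (pg_regress uses its
own string lists, but the principle is the same):

    #include <stdio.h>

    int main(void)
    {
        const char *names[] = {"a", "b", "c", NULL};
        const char *tags[]  = {"stdout", "stderr", "log", NULL};
        const char **name = names, **tag = tags;

        /* All parallel pointers must advance at the same point in the
         * loop; advancing one of them elsewhere desynchronizes them. */
        for (; *name != NULL; name++, tag++)
            printf("%s -> %s\n", *name, *tag);
        return 0;
    }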
Tom Lane [Sun, 29 Apr 2018 22:15:16 +0000 (18:15 -0400)]
Avoid wrong results for power() with NaN input on more platforms.
Buildfarm results show that the modern POSIX rule that 1 ^ NaN = 1 is not
honored on *BSD until relatively recently, and really old platforms don't
believe that NaN ^ 0 = 1 either. (This is unsurprising, perhaps, since
SUSv2 doesn't require either behavior.) In hopes of getting to platform
independent behavior, let's deal with all the NaN-input cases explicitly
in dpow().
Note that numeric_power() doesn't know either of these special cases.
But since that behavior is platform-independent, I think it should be
addressed separately, and probably not back-patched.
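The explicit handling amounts to something like this sketch (the name
is illustrative; the real dpow() does more):

    #include <math.h>

    static double
    dpow_sketch(double x, double y)
    {
        /* POSIX: pow(NaN, 0) == 1 and pow(1, NaN) == 1; anything else
         * involving a NaN input yields NaN. */
        if (isnan(x))
            return (y == 0.0) ? 1.0 : NAN;
        if (isnan(y))
            return (x == 1.0) ? 1.0 : NAN;
        return pow(x, y);
    }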
Tom Lane [Sun, 29 Apr 2018 19:50:08 +0000 (15:50 -0400)]
Update time zone data files to tzdata release 2018d.
DST law changes in Palestine and Antarctica (Casey Station). Historical
corrections for Portugal and its colonies, as well as Enderbury, Jamaica,
Turks & Caicos Islands, and Uruguay.
Tom Lane [Sun, 29 Apr 2018 19:21:45 +0000 (15:21 -0400)]
Avoid wrong results for power() with NaN input on some platforms.
Per spec, the result of power() should be NaN if either input is NaN.
It appears that on some versions of Windows, the libc function does
return NaN, but it also sets errno = EDOM, confusing our code that
attempts to work around shortcomings of other platforms. Hence, add
guard tests to avoid substituting a wrong result for the right one.
It's been like this for a long time (and the odd behavior only appears
in older MSVC releases, too) so back-patch to all supported branches.
The predecessor test boiled down to "PQserverVersion(NULL) >= 100000",
which is always false. No release includes that, so it could not have
reintroduced CVE-2018-1058. Back-patch to 9.4, like the addition of the
predecessor in commit 8d2814f274def85f39fbe997d454b01628cb5667.
Tom Lane [Fri, 20 Apr 2018 21:15:31 +0000 (17:15 -0400)]
Fix race conditions when an event trigger is added concurrently with DDL.
EventTriggerTableRewrite crashed if there were table_rewrite triggers
present, but there had not been when the calling command started.
EventTriggerDDLCommandEnd called ddl_command_end triggers if present,
even if there had been no such triggers when the calling command started,
which would lead to a failure in pg_event_trigger_ddl_commands.
In both cases, fix by doing nothing; it's better to wait till the next
command when things will be properly initialized.
In passing, remove an elog(DEBUG1) call that might have seemed interesting
four years ago but surely isn't today.
We found this because of intermittent failures in the buildfarm. Thanks
to Alvaro Herrera and Andrew Gierth for analysis.
Back-patch to 9.5; some of this code exists before that, but the specific
hazards we need to guard against don't.
Tom Lane [Fri, 20 Apr 2018 19:19:17 +0000 (15:19 -0400)]
Change more places to be less trusting of RestrictInfo.is_pushed_down.
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids. Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.
Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.
In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.
Like the previous patch, back-patch to all supported branches.
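The macro's shape is roughly this (sketch; see the optimizer headers
for the authoritative version):

    /* A clause acts as pushed-down at a given join level if its flag
     * says so OR its required_relids aren't all available there. */
    #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \
        ((rinfo)->is_pushed_down || \
         !bms_is_subset((rinfo)->required_relids, joinrelids))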
Tom Lane [Thu, 19 Apr 2018 22:29:39 +0000 (18:29 -0400)]
Fix broken extract_actual_join_clauses call in 9.6 postgres_fdw.
In commits e5d83995e et al, I changed the signature of
extract_actual_join_clauses, thinking that it was not called from
anywhere but createplan.c. I missed that postgres_fdw uses it
in the 9.6 branch only.
This opens up the question of whether any third-party modules might
be calling it, and whether we need to take steps to avoid an API break
for them. But for the moment, just get the buildfarm green again.
Tom Lane [Thu, 19 Apr 2018 19:49:12 +0000 (15:49 -0400)]
Fix incorrect handling of join clauses pushed into parameterized paths.
In some cases a clause attached to an outer join can be pushed down into
the outer join's RHS even though the clause is not degenerate --- this
can happen if we choose to make a parameterized path for the RHS. If
the clause ends up attached to a lower outer join, we'd misclassify it
as being a "join filter" not a plain "filter" condition at that node,
leading to wrong query results.
To fix, teach extract_actual_join_clauses to examine each join clause's
required_relids, not just its is_pushed_down flag. (The latter now
seems vestigial, or at least in need of rethinking, but we won't do
anything so invasive as redefining it in a bug-fix patch.)
This has been wrong since we introduced parameterized paths in 9.2,
though it's evidently hard to hit given the lack of previous reports.
The test case used here involves a lateral function call, and I think
that a lateral reference may be required to get the planner to select
a broken plan; though I wouldn't swear to that. In any case, even if
LATERAL is needed to trigger the bug, it still affects all supported
branches, so back-patch to all.
Per report from Andreas Karlsson. Thanks to Andrew Gierth for
preliminary investigation.
The buffer was 100 bytes long, which is barely sufficient when the
version string gets longer (such as by configure --with-extra-version).
Set it to MAXPGPATH.
Tom Lane [Wed, 18 Apr 2018 16:07:37 +0000 (12:07 -0400)]
Better fix for deadlock hazard in CREATE INDEX CONCURRENTLY.
Commit 54eff5311 did not account for the possibility that we'd have
a transaction snapshot due to default_transaction_isolation being
set high enough to require one. The transaction snapshot is enough
to hold back our advertised xmin and thus risk deadlock anyway.
The only way to get rid of that snap is to start a new transaction,
so let's do that instead. Also throw in an assert checking that we
really have gotten to a state where no xmin is being advertised.
Tom Lane [Mon, 16 Apr 2018 20:06:47 +0000 (16:06 -0400)]
Fix broken collation-aware searches in SP-GiST text opclass.
spg_text_leaf_consistent() supposed that it should compare only
Min(querylen, entrylen) bytes of the two strings, and then deal with
any excess bytes in one string or the other by assuming the longer
string is greater if the prefixes are equal. Quite aside from the
fact that that's just wrong in some locales (e.g., 'ch' is not less
than 'd' in cs_CZ), it also risked passing incomplete multibyte
characters to strcoll(), with ensuing bad results.
Instead, just pass the full strings to varstr_cmp, and let it decide
what to do about unequal-length strings.
Fortunately, this error doesn't imply any index corruption, it's just
that searches might return the wrong set of entries.
Per report from Emre Hasegeli, though this is not his patch.
Thanks to Peter Geoghegan for review and discussion.
This code was born broken, so back-patch to all supported branches.
In HEAD, I failed to resist the temptation to do a bit of cosmetic
cleanup/pgindent'ing on 710d90da1, too.
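The essence of the fix, as a schematic backend fragment (variable names
illustrative):

    /* Hand the complete strings to varstr_cmp() and let it apply the
     * collation's rules to unequal lengths, instead of strcoll() on
     * Min(querylen, entrylen) bytes, which can split multibyte chars. */
    r = varstr_cmp(fullValue, fullLen, queryStr, queryLen,
                   PG_GET_COLLATION());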
Tom Lane [Sun, 15 Apr 2018 17:02:11 +0000 (13:02 -0400)]
Fix potentially-unportable code in contrib/adminpack.
Spelling access(2)'s second argument as "2" is just horrid.
POSIX makes no promises as to the numeric values of W_OK and related
macros. Even if it accidentally works as intended on every supported
platform, it's still unreadable and inconsistent with adjacent code.
In passing, don't spell "NULL" as "0" either. Yes, that's legal C;
no, it's not project style.
Back-patch, just in case the unportability is real and not theoretical.
(Most likely, even if a platform had different bit assignments for
access()'s modes, there'd not be an observable behavior difference
here; but I'm being paranoid today.)
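For reference, the readable form (standalone C; the path is a
placeholder):

    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        const char *logdir = "/tmp/example";

        /* Use the POSIX macro, not a magic "2": W_OK's numeric value
         * is not promised by the standard. */
        if (access(logdir, W_OK) != 0)
            perror("access");
        return 0;
    }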
Tom Lane [Fri, 13 Apr 2018 16:53:46 +0000 (12:53 -0400)]
In libpq, free any partial query result before collecting a server error.
We'd throw away the partial result anyway after parsing the error message.
Throwing it away beforehand costs nothing and reduces the risk of
out-of-memory failure. Also, at least in systems that behave like
glibc/Linux, if the partial result was very large then the error PGresult
would get allocated at high heap addresses, preventing the heap storage
used by the partial result from being released to the OS until the error
PGresult is freed.
In psql >= 9.6, we hold onto the error PGresult until another error is
received (for \errverbose), so that this behavior causes a seeming
memory leak to persist for awhile, as in a recent complaint from
Darafei Praliaskouski. This is a potential performance regression from
older versions, justifying back-patching at least that far. But similar
behavior may occur in other client applications, so it seems worth just
back-patching to all supported branches.
Tom Lane [Thu, 12 Apr 2018 22:39:51 +0000 (18:39 -0400)]
Fix bogus affix-merging code.
NISortAffixes() compared successive compound affixes incorrectly,
thus possibly failing to merge identical affixes, or (less likely)
merging ones that shouldn't be merged. The user-visible effects
of this are unclear, to me anyway.
Per bug #15150 from Alexander Lakhin. It's been broken for a long time,
so back-patch to all supported branches.
Tom Lane [Wed, 11 Apr 2018 22:11:30 +0000 (18:11 -0400)]
Ignore nextOid when replaying an ONLINE checkpoint.
The nextOid value is from the start of the checkpoint and may well be stale
compared to values from more recent XLOG_NEXTOID records. Previously, we
adopted it anyway, allowing the OID counter to go backwards during a crash.
While this should be harmless, it contributed to the severity of the bug
fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned
immediately following a crash. Without this error, that issue would only
have arisen when TOAST objects just younger than a multiple of 2^32 OIDs
were deleted and then not vacuumed in time to avoid a conflict.
Tom Lane [Wed, 11 Apr 2018 21:41:10 +0000 (17:41 -0400)]
Do not select new object OIDs that match recently-dead entries.
When selecting a new OID, we take care to avoid picking one that's already
in use in the target table, so as not to create duplicates after the OID
counter has wrapped around. However, up to now we used SnapshotDirty when
scanning for pre-existing entries. That ignores committed-dead rows, so
that we could select an OID matching a deleted-but-not-yet-vacuumed row.
While that mostly worked, it has two problems:
* If recently deleted, the dead row might still be visible to MVCC
snapshots, creating a risk for duplicate OIDs when examining the catalogs
within our own transaction. Such duplication couldn't be visible outside
the object-creating transaction, though, and we've heard few if any field
reports corresponding to such a symptom.
* When selecting a TOAST OID, deleted toast rows definitely *are* visible
to SnapshotToast, and will remain so until vacuumed away. This leads to
a conflict that will manifest in errors like "unexpected chunk number 0
(expected 1) for toast value nnnnn". We've been seeing reports of such
errors from the field for years, but the cause was unclear before.
The fix is simple: just use SnapshotAny to search for conflicting rows.
This results in a slightly longer window before object OIDs can be
recycled, but that seems unlikely to create any large problems.
Make local copy of client hostnames in backend status array.
The other strings, application_name and query string, were snapshotted to
local memory in pgstat_read_current_status(), but we forgot to do that for
client hostnames. As a result, the client hostname would appear to change in
the local copy, if the client disconnected.
Backpatch to all supported versions.
Author: Edmund Horner
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com
Tom Lane [Tue, 10 Apr 2018 22:34:40 +0000 (18:34 -0400)]
Fix incorrect close() call in dsm_impl_mmap().
One improbable error-exit path in this function used close() where
it should have used CloseTransientFile(). This is unlikely to be
hit in the field, and I think the consequences wouldn't be awful
(just an elog(LOG) bleat later). But a bug is a bug, so back-patch
to 9.4 where this code came in.
Remove wrongly backpatched piece of code in cube.c
Due to sloppy division of changes between f50c80dbb (which was not
back-patched) and 563a053bd, this piece of code was wrongly backpatched to
REL_10_STABLE and REL9_6_STABLE. This code never causes a real error
because its condition is never satisfied, but it's dead code, which
needs to be removed.
Tom Lane [Sun, 8 Apr 2018 20:35:43 +0000 (16:35 -0400)]
Doc: clarify explanation of pg_dump usage.
This section confusingly used both "infile" and "outfile" to refer
to the same file, i.e. the textual output of pg_dump. Use "dumpfile"
for both cases, per suggestion from Jonathan Katz.
Andres Freund [Sun, 8 Apr 2018 01:27:14 +0000 (18:27 -0700)]
Remove overzealous assertions in pg_atomic_flag code.
The atomics code asserts proper alignment in various places. That's
mainly because the alignment of 64bit integers is not sufficient for
atomic operations on all platforms. Some ABIs only have four byte
alignment, but don't have atomic behavior when crossing page
boundaries.
The flags code isn't affected by that however, as the type alignment
always is sufficient for atomic operations. Nevertheless the code
asserted alignment requirements. Before 8c3debbb this was only broken
on hppa; after it, further platforms are probably affected.
Thus remove the assertions for pg_atomic_flag operators.
Andres Freund [Sat, 7 Apr 2018 03:02:02 +0000 (20:02 -0700)]
Fix and improve pg_atomic_flag fallback implementation.
The atomics fallback implementation for pg_atomic_flag was broken,
returning the inverted value from pg_atomic_test_set_flag(). This was
unnoticed because a) atomic flags were unused until recently b) the
test code wasn't run when the fallback implementation was in
use (because it didn't allow testing some edge cases).
Fix the bug, and improve the fallback so it has the same behaviour as
the non-fallback implementation in the problematic edge cases. That
breaks ABI compatibility in the back branches when fallbacks are in
use, but given they were broken until now...
Author: Andres Freund
Reported-by: Daniel Gustafsson
Discussion:
https://postgr.es/m/FB948276-7B32-4B77-83E6-D00167F8EEB4@yesql.se
https://postgr.es/m/20180406233854.uni2h3mbnveczl32@alap3.anarazel.de
Backpatch: 9.5-, where the atomics abstraction was introduced.
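The contract the fallback must match, as a schematic backend fragment:

    #include "port/atomics.h"       /* backend fragment, not standalone */

    static pg_atomic_flag mylock;

    static void
    example(void)
    {
        pg_atomic_init_flag(&mylock);

        if (pg_atomic_test_set_flag(&mylock))
        {
            /* true means we acquired the flag; the broken fallback
             * returned the inverted value here */
            pg_atomic_clear_flag(&mylock);
        }
    }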
Tom Lane [Sat, 31 Mar 2018 20:28:52 +0000 (16:28 -0400)]
Fix assorted issues in parallel vacuumdb.
Avoid storing the result of PQsocket() in a pgsocket variable; it's
declared as int, and the no-socket test is properly written as "x < 0"
not "x == PGINVALID_SOCKET". This accidentally had no bad effect
because we never got to init_slot() with a bad connection, but it's
still wrong.
Actually, it seems like we should avoid storing the result for a long
period at all. The function's not so expensive that it's worth avoiding,
and the existing coding technique here would fail if anyone tried to
PQreset the connection during the life of the program. Hence, just
re-call PQsocket every time we construct a select(2) mask.
Speaking of select(), GetIdleSlot imagined that it could compute the
select mask once and continue to use it over multiple calls to
select_loop(), which is pretty bogus since that would stomp on the
mask on return. This could only matter if the function's outer loop
iterated more than once, which is unlikely (it'd take some connection
receiving data, but not enough to complete its command). But if it
did happen, we'd acquire "tunnel vision" and stop watching the other
connections for query termination, with the effect of losing parallelism.
Another way in which GetIdleSlot could lose parallelism is that once
PQisBusy returns false, it would lock in on that connection and do
PQgetResult until that returns NULL; in some cases that could result
in blocking. (Perhaps this can never happen in vacuumdb due to the
limited set of commands that it can issue, but I'm not quite sure
of that, and even if true today it's not a future-proof assumption.)
Refactor the code to do that properly, so that it risks blocking in
PQgetResult only in cases where we need to wait anyway.
Another loss-of-parallelism problem, which *is* easily demonstrable,
is that any setup queries issued during prepare_vacuum_command() were
always issued on the last-to-be-created connection, whether or not
that was idle. Long-running operations on that connection thus
prevented issuance of additional operations on the other ones, except
in the limited cases where no preparatory query was needed. Instead,
wait till we've identified a free connection and use that one.
Also, avoid core dump due to undersized malloc request in the case
that no tables are identified to be vacuumed.
The bogus no-socket test was noted by CharSyam, the other problems
identified in my own code review. Back-patch to 9.5 where parallel
vacuumdb was introduced.
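The select-mask rebuild now looks roughly like this (sketch; the slot
bookkeeping is simplified):

    fd_set  mask;
    int     maxsock = -1;

    FD_ZERO(&mask);
    for (int i = 0; i < numslots; i++)
    {
        int     sock = PQsocket(slots[i].connection);  /* re-call each time */

        if (sock < 0)           /* PQsocket returns int: test "< 0" */
            continue;
        FD_SET(sock, &mask);
        if (sock > maxsock)
            maxsock = sock;
    }
    select(maxsock + 1, &mask, NULL, NULL, NULL);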
Tom Lane [Fri, 30 Mar 2018 22:14:51 +0000 (18:14 -0400)]
Fix bogus provolatile/proparallel markings on a few built-in functions.
Richard Yen reported that pg_upgrade failed if the target cluster had
force_parallel_mode = on, because binary_upgrade_create_empty_extension()
is marked parallel restricted, allowing it to be executed in parallel
mode, which complains because it tries to acquire an XID.
In general, no function that might try to modify database data should
be considered parallel safe or restricted, since execution of it might
force XID acquisition. We found several other examples of this mistake.
Furthermore, functions that execute user-supplied SQL queries or query
fragments, or pull data from user-supplied cursors, had better be marked
both volatile and parallel unsafe, because we don't know what the supplied
query or cursor might try to do. There were several tsquery and XML
functions that had the wrong proparallel marking for this, and some of
them were even mislabeled as to volatility.
All these bugs are old, dating back to 9.6 for the proparallel mistakes
and much further for the provolatile mistakes. We can't force a
catversion bump in the back branches, but we can at least ensure that
installations initdb'd in future have the right values.
Fujii Masao [Wed, 28 Mar 2018 19:00:21 +0000 (04:00 +0900)]
Fix handling of files that the source server removes while pg_rewind is running.
After processing the filemap to build the list of chunks that will be
fetched from the source to rewind the target server, it is possible that
a file which was previously processed is removed from the source. A
simple example of such an occurrence is a WAL segment which gets recycled
on the source in-between. When the filemap is processed, files not
categorized as relation files are first truncated to prepare for the
full copy, which is taken from the source in a set of chunks. However,
for a recycled WAL segment, this would result in a segment which has a
zero-byte size. With such an empty file, post-rewind recovery thinks
that records are saved, but they actually are not, because of the
truncation which happened when processing the filemap, resulting in
data loss.
In order to fix the problem, make sure that files which are found as
removed on the source when receiving chunks of them are as well deleted
on the target server for consistency.
Tom Lane [Wed, 28 Mar 2018 17:26:43 +0000 (13:26 -0400)]
Fix actual and potential double-frees around tuplesort usage.
tuplesort_gettupleslot() passed back tuples allocated in the tuplesort's
own memory context, even when the caller was responsible to free them.
This created a double-free hazard, because some callers might destroy
the tuplesort object (via tuplesort_end) before trying to clean up the
last returned tuple. To avoid this, change the API to specify that the
tuple is allocated in the caller's memory context. v10 and HEAD already
did things that way, but in 9.5 and 9.6 this is a live bug that can
demonstrably cause crashes with some grouping-set usages.
In 9.5 and 9.6, this requires doing an extra tuple copy in some cases,
which is unfortunate. But the amount of refactoring needed to avoid it
seems excessive for a back-patched change, especially since the cases
where an extra copy happens are less performance-critical.
Likewise change tuplesort_getdatum() to return pass-by-reference Datums
in the caller's context not the tuplesort's context. There seem to be
no live bugs among its callers, but clearly the same sort of situation
could happen in future.
For other tuplesort fetch routines, continue to allocate the memory in
the tuplesort's context. This is a little inconsistent with what we now
do for tuplesort_gettupleslot() and tuplesort_getdatum(), but that's
preferable to adding new copy overhead in the back branches where it's
clearly unnecessary. These other fetch routines provide the weakest
possible guarantees about tuple memory lifespan from v10 on, anyway,
so this actually seems more consistent overall.
Adjust relevant comments to reflect these API redefinitions.
Arguably, we should change the pre-9.5 branches as well, but since
there are no known failure cases there, it seems not worth the risk.
Peter Geoghegan, per report from Bernd Helmle. Reviewed by Kyotaro
Horiguchi; thanks also to Andreas Seltenreich for extracting a
self-contained test case.
Alvaro Herrera [Mon, 26 Mar 2018 15:00:25 +0000 (12:00 -0300)]
Fix thinko in comment
The listed numbers disagreed with the ones being used in the symbols;
but instead of just fixing the numbers in the comment, use the symbolic
name instead, which seems clearer.
This has been wrong all along, so apply back to 9.5 where BRIN was
introduced.
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/5ff514f2-8b1e-6366-b11c-8e2ed442562d@2ndquadrant.com
Tom Lane [Sun, 25 Mar 2018 20:15:16 +0000 (16:15 -0400)]
Doc: add example of type resolution in nested UNIONs.
Section 10.5 didn't say explicitly that multiple UNIONs are resolved
pairwise. Since the resolution algorithm is described as taking any
number of inputs, readers might well think that a query like
"select x union select y union select z" would be resolved by
considering x, y, and z in one resolution step. But that's not what
happens (and I think that behavior is per SQL spec). Add an example
clarifying this point.
Noah Misch [Sat, 24 Mar 2018 03:31:03 +0000 (20:31 -0700)]
Don't qualify type pg_catalog.text in extend-extensions-example.
Extension scripts begin execution with pg_catalog at the front of the
search path, so type names reliably refer to pg_catalog. Remove these
superfluous qualifications. Earlier <programlisting> of this <sect1>
already omitted them. Back-patch to 9.3 (all supported versions).
Tom Lane [Fri, 23 Mar 2018 17:45:38 +0000 (13:45 -0400)]
Fix make rules that generate multiple output files.
For years, our makefiles have correctly observed that "there is no correct
way to write a rule that generates two files". However, what we did is to
provide empty rules that "generate" the secondary output files from the
primary one, and that's not right either. Depending on the details of
the creating process, the primary file might end up timestamped later than
one or more secondary files, causing subsequent make runs to consider the
secondary file(s) out of date. That's harmless in a plain build, since
make will just re-execute the empty rule and nothing happens. But it's
fatal in a VPATH build, since make will expect the secondary file to be
rebuilt in the build directory. This would manifest as "file not found"
failures during VPATH builds from tarballs, if we were ever unlucky enough
to ship a tarball with apparently out-of-date secondary files. (It's not
clear whether that has ever actually happened, but it definitely could.)
To ensure that secondary output files have timestamps >= their primary's,
change our makefile convention to be that we provide a "touch $@" action
not an empty rule. Also, make sure that this rule actually gets invoked
during a distprep run, else the hazard remains.
It's been like this a long time, so back-patch to all supported branches.
In HEAD, I skipped the changes in src/backend/catalog/Makefile, because
those rules are due to get replaced soon in the bootstrap data format
patch, and there seems no need to create a merge issue for that patch.
If for some reason we fail to land that patch in v11, we'll need to
back-fill the changes in that one makefile from v10.
Tom Lane [Thu, 22 Mar 2018 17:23:48 +0000 (13:23 -0400)]
Fix tuple counting in SP-GiST index build.
Count the number of tuples in the index honestly, instead of assuming
that it's the same as the number of tuples in the heap. (It might be
different if the index is partial.)
Tom Lane [Thu, 22 Mar 2018 17:13:58 +0000 (13:13 -0400)]
Fix errors in contrib/bloom index build.
Count the number of tuples in the index honestly, instead of assuming
that it's the same as the number of tuples in the heap. (It might be
different if the index is partial.)
Fix counting of tuples in current index page, too. This error would
have led to failing to write out the final page of the index if it
contained exactly one tuple, so that the last tuple of the relation
would not get indexed.
Tom Lane [Thu, 22 Mar 2018 00:03:28 +0000 (20:03 -0400)]
Fix mishandling of quoted-list GUC values in pg_dump and ruleutils.c.
Code that prints out the contents of setconfig or proconfig arrays in
SQL format needs to handle GUC_LIST_QUOTE variables differently from
other ones, because for those variables, flatten_set_variable_args()
already applied a layer of quoting. The value can therefore safely
be printed as-is, and indeed must be, or flatten_set_variable_args()
will muck it up completely on reload. For all other GUC variables,
it's necessary and sufficient to quote the value as a SQL literal.
We'd recognized the need for this long ago, but mis-analyzed the
need slightly, thinking that all GUC_LIST_INPUT variables needed
the special treatment. That's actually wrong, since a valid value
of a LIST variable might include characters that need quoting,
although no existing variables accept such values.
More to the point, we hadn't made any particular effort to keep the
various places that deal with this up-to-date with the set of variables
that actually need special treatment, meaning that we'd do the wrong
thing with, for example, temp_tablespaces values. This affects dumping
of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET
commands.
In ruleutils.c we can fix it reasonably honestly by exporting a guc.c
function that allows discovering the flags for a given GUC variable.
But pg_dump doesn't have easy access to that, so continue the old method
of having a hard-wired list of affected variable names. At least we can
fix it to have just one list not two, and update the list to match
current reality.
A remaining problem with this is that it only works for built-in
GUC variables. pg_dump's list obviously knows nothing of third-party
extensions, and even the "ask guc.c" method isn't bulletproof since
the relevant extension might not be loaded. There's no obvious
solution to that, so for now, we'll just have to discourage extension
authors from inventing custom GUCs that need GUC_LIST_QUOTE.
This has been busted for a long time, so back-patch to all supported
branches.
Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and
Pavel Stehule
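The ruleutils.c side then reduces to roughly this (sketch; the
flag-lookup function name is illustrative):

    /* If the variable got flatten_set_variable_args() quoting already,
     * print it verbatim; otherwise quote it as a SQL literal. */
    if (GetConfigOptionFlags(configitem, true) & GUC_LIST_QUOTE)
        appendStringInfoString(&buf, value);
    else
        simple_quote_literal(&buf, value);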
Teodor Sigaev [Wed, 21 Mar 2018 11:37:51 +0000 (14:37 +0300)]
Rework word_similarity documentation, make it close to actual algorithm.
word_similarity previously claimed to return the similarity of the
closest word in a string, but it actually returns the similarity of a
substring. Also fix mistyped comments.
Author: Alexander Korotkov
Review by: David Steele, Liudmila Mantrova
Discussions:
https://www.postgresql.org/message-id/flat/CY4PR17MB13207ED8310F847CF117EED0D85A0@CY4PR17MB1320.namprd17.prod.outlook.com
https://www.postgresql.org/message-id/flat/f43b242d-000c-f4c8-cb8b-d37e9752cd93%40postgrespro.ru
Tom Lane [Tue, 20 Mar 2018 03:59:17 +0000 (23:59 -0400)]
Prevent query-lifespan memory leakage of SP-GiST traversal values.
The original coding of the SP-GiST scan traversalValue feature (commit
ccd6eb49a) arranged for traversal values to be stored in the query's main
executor context. That's fine if there's only one index scan per query,
but if there are many, we have a memory leak as successive scans create
new traversal values. Fix it by creating a separate memory context for
traversal values, which we can reset during spgrescan(). Back-patch
to 9.6 where this code was introduced.
In principle, adding the traversalCxt field to SpGistScanOpaqueData
creates an ABI break in the back branches. But I (tgl) have little
sympathy for extensions including spgist_private.h, so I'm not very
worried about that. Alternatively we could stick the new field at the
end of the struct in back branches, but that has its own downsides.
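The fix follows the usual context discipline, roughly (schematic
backend fragment):

    /* Give traversal values their own context... */
    so->traversalCxt = AllocSetContextCreate(CurrentMemoryContext,
                                             "SP-GiST traversal-value context",
                                             ALLOCSET_DEFAULT_SIZES);

    /* ...and reset it in spgrescan(), so repeated scans in one query
     * cannot accumulate leaked traversal values. */
    MemoryContextReset(so->traversalCxt);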
Tom Lane [Mon, 19 Mar 2018 22:49:53 +0000 (18:49 -0400)]
Fix some corner-case issues in REFRESH MATERIALIZED VIEW CONCURRENTLY.
refresh_by_match_merge() has some issues in the way it builds a SQL
query to construct the "diff" table:
1. It doesn't require the selected unique index(es) to be indimmediate.
2. It doesn't pay attention to the particular equality semantics enforced
by a given index, but just assumes that they must be those of the column
datatype's default btree opclass.
3. It doesn't check that the indexes are btrees.
4. It's insufficiently careful to ensure that the parser will pick the
intended operator when parsing the query. (This would have been a
security bug before CVE-2018-1058.)
5. It's not careful about indexes on system columns.
The way to fix #4 is to make use of the existing code in ri_triggers.c
for generating an arbitrary binary operator clause. I chose to move
that to ruleutils.c, since that seems a more reasonable place to be
exporting such functionality from than ri_triggers.c.
While #1, #3, and #5 are just latent given existing feature restrictions,
and #2 doesn't arise in the core system for lack of alternate opclasses
with different equality behaviors, #4 seems like an issue worth
back-patching. That's the bulk of the change anyway, so just back-patch
the whole thing to 9.4 where this code was introduced.
Tom Lane [Mon, 19 Mar 2018 21:23:07 +0000 (17:23 -0400)]
Fix performance hazard in REFRESH MATERIALIZED VIEW CONCURRENTLY.
Jeff Janes discovered that commit 7ca25b7de made one of the queries run by
REFRESH MATERIALIZED VIEW CONCURRENTLY perform badly. The root cause is
bad cardinality estimation for correlated quals, but a principled solution
to that problem is some way off, especially since the planner lacks any
statistics about whole-row variables. Moreover, in non-error cases this
query produces no rows, meaning it must be run to completion; but use of
LIMIT 1 encourages the planner to pick a fast-start, slow-completion plan,
exactly not what we want. Remove the LIMIT clause, and instead rely on
the count parameter we pass to SPI_execute() to prevent excess work if the
query does return some rows.
While we've heard no field reports of planner misbehavior with this query,
it could be that people are having performance issues that haven't reached
the level of pain needed to cause a bug report. In any case, that LIMIT
clause can't possibly do anything helpful with any existing version of the
planner, and it demonstrably can cause bad choices in some cases, so
back-patch to 9.4 where the code was introduced.
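Schematically (backend fragment; error handling abbreviated):

    /* Bound the work with SPI_execute's count argument instead of a
     * LIMIT clause: a nonzero count stops after that many rows without
     * biasing the planner toward fast-start plans. */
    if (SPI_execute(querybuf.data, false, 1) != SPI_OK_SELECT)
        elog(ERROR, "SPI_execute failed");
    if (SPI_processed > 0)
        elog(ERROR, "duplicate rows detected");  /* simplified message */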
Tom Lane [Sun, 18 Mar 2018 19:10:28 +0000 (15:10 -0400)]
Doc: note that statement-level view triggers require an INSTEAD OF trigger.
If a view lacks an INSTEAD OF trigger, DML on it can only work by rewriting
the command into a command on the underlying base table(s). Then we will
fire triggers attached to those table(s), not those for the view. This
seems appropriate from a consistency standpoint, but nowhere was the
behavior explicitly documented, so let's do that.
There was some discussion of throwing an error or warning if a statement
trigger is created on a view without creating a row INSTEAD OF trigger.
But a simple implementation of that would result in dump/restore ordering
hazards. Given that it's been like this all along, and we hadn't heard
a complaint till now, a documentation improvement seems sufficient.
Per bug #15106 from Pu Qun. Back-patch to all supported branches.
Magnus Hagander [Sun, 18 Mar 2018 12:08:25 +0000 (13:08 +0100)]
Fix pg_recvlogical for pre-10 versions
In e170b8c8, protection against modified search_path was added. However,
PostgreSQL versions prior to 10 do not accept SQL commands over a
replication connection, so the protection would generate a syntax error.
Since we cannot run SQL commands on it, we are also not vulnerable to
the issue that e170b8c8 fixes, so we can just skip this command for
older versions.
Tom Lane [Sat, 17 Mar 2018 19:38:15 +0000 (15:38 -0400)]
Fix overflow handling in plpgsql's integer FOR loops.
The test to exit the loop if the integer control value would overflow
an int32 turns out not to work on some ICC versions, as it's dependent
on the assumption that the compiler will execute the code as written
rather than "optimize" it. ICC lacks any equivalent of gcc's -fwrapv
switch, so it was optimizing on the assumption of no integer overflow,
and that breaks this. Rewrite into a form that in fact does not
do any overflowing computations.
Per Tomas Vondra and buildfarm member fulmar. It's been like this
for a long time, although it was not till we added a regression test
case covering the behavior (in commit dd2243f2a) that the problem
became apparent. Back-patch to all supported versions.
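The safe form tests the headroom before doing the addition, e.g.
(sketch; not the actual plpgsql code):

    #include <stdint.h>

    /* For a positive step, INT32_MAX - step cannot itself overflow, and
     * no overflowing add is ever executed, so an optimizer assuming
     * no-overflow semantics cannot break the loop-exit test. */
    static int
    can_step(int32_t value, int32_t step)   /* assumes step > 0 */
    {
        return value <= INT32_MAX - step;
    }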
Tom Lane [Sat, 17 Mar 2018 18:59:31 +0000 (14:59 -0400)]
Fix WHERE CURRENT OF when the referenced cursor uses an index-only scan.
"UPDATE/DELETE WHERE CURRENT OF cursor_name" failed, with an error message
like "cannot extract system attribute from virtual tuple", if the cursor
was using a index-only scan for the target table. Fix it by digging the
current TID out of the indexscan state.
It seems likely that the same failure could occur for CustomScan plans
and perhaps some FDW plan types, so that leaving this to be treated as an
internal error with an obscure message isn't as good an idea as it first
seemed. Hence, add a bit of heaptuple.c infrastructure to let us deliver
a more on-topic message. I chose to make the message match what you get
for the case where execCurrentOf can't identify the target scan node at
all, "cursor "foo" is not a simply updatable scan of table "bar"".
Perhaps it should be different, but we can always adjust that later.
In the future, it might be nice to provide hooks that would let custom
scan providers and/or FDWs deal with this in other ways; but that's
not a suitable topic for a back-patchable bug fix.
It's been like this all along, so back-patch to all supported branches.
Tom Lane [Fri, 16 Mar 2018 20:03:45 +0000 (16:03 -0400)]
Fix query-lifespan memory leakage in repeatedly executed hash joins.
ExecHashTableCreate allocated some memory that wasn't freed by
ExecHashTableDestroy, specifically the per-hash-key function information.
That's not a huge amount of data, but if one runs a query that repeats
a hash join enough times, it builds up. Fix by arranging for the data
in question to be kept in the hashtable's hashCxt instead of leaving it
"loose" in the query-lifespan executor context. (This ensures that we'll
also clean up anything that the hash functions allocate in fn_mcxt.)
Per report from Amit Khandekar. It's been like this forever, so back-patch
to all supported branches.
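The fix is the standard context discipline, roughly (schematic backend
fragment):

    /* Allocate per-hash-key function info under the hashtable's own
     * context, so ExecHashTableDestroy's cleanup of hashCxt reclaims it
     * instead of it accumulating in the query-lifespan context. */
    oldcxt = MemoryContextSwitchTo(hashtable->hashCxt);
    /* ... fmgr_info() lookups and related allocations here ... */
    MemoryContextSwitchTo(oldcxt);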
Tom Lane [Fri, 16 Mar 2018 17:44:34 +0000 (13:44 -0400)]
Doc: explicitly point out that enum values can't be dropped.
This was not stated in so many words anywhere. Document it to make
clear that it's a design limitation and not just an oversight or
documentation omission.
Alvaro Herrera [Thu, 15 Mar 2018 12:51:12 +0000 (09:51 -0300)]
test_ddl_deparse: Don't use pg_class as source for a matview
Doing so causes a pg_upgrade of a database containing these objects to
fail whenever pg_class changes. And it's pointless anyway: we have more
interesting tables anyhow.
Tom Lane [Tue, 13 Mar 2018 17:24:27 +0000 (13:24 -0400)]
When updating reltuples after ANALYZE, just extrapolate from our sample.
The existing logic for updating pg_class.reltuples trusted the sampling
results only for the pages ANALYZE actually visited, preferring to
believe the previous tuple density estimate for all the unvisited pages.
While there's some rationale for doing that for VACUUM (first that
VACUUM is likely to visit a very nonrandom subset of pages, and second
that we know for sure that the unvisited pages did not change), there's
no such rationale for ANALYZE: by assumption, it's looked at an unbiased
random sample of the table's pages. Furthermore, in a very large table
ANALYZE will have examined only a tiny fraction of the table's pages,
meaning it cannot slew the overall density estimate very far at all.
In a table that is physically growing, this causes reltuples to increase
nearly proportionally to the change in relpages, regardless of what is
actually happening in the table. This has been observed to cause reltuples
to become so much larger than reality that it effectively shuts off
autovacuum, whose threshold for doing anything is a fraction of reltuples.
(Getting to the point where that would happen seems to require some
additional, not well understood, conditions. But it's undeniable that if
reltuples is seriously off in a large table, ANALYZE alone will not fix it
in any reasonable number of iterations, especially not if the table is
continuing to grow.)
Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone,
and in ANALYZE, just extrapolate from the sample pages on the assumption
that they provide an accurate model of the whole table. If, by very bad
luck, they don't, at least another ANALYZE will fix it; in the old logic
a single bad estimate could cause problems indefinitely.
In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
altogether; it was never used for anything and now it's totally pointless.
But keep it in the back branches, in case any third-party code is calling
this function.
Per bug #15005. Back-patch to all supported branches.
David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me
Tom Lane [Tue, 13 Mar 2018 16:28:15 +0000 (12:28 -0400)]
Avoid holding AutovacuumScheduleLock while rechecking table statistics.
In databases with many tables, re-fetching the statistics takes some time,
so that this behavior seriously decreases the available concurrency for
multiple autovac workers. There's discussion afoot about more complete
fixes, but a simple and back-patchable amelioration is to claim the table
and release the lock before rechecking stats. If we find out there's no
longer a reason to process the table, re-taking the lock to un-claim the
table is cheap enough.
(This patch is quite old, but got lost amongst a discussion of more
aggressive fixes. It's not clear when or if such a fix will be
accepted, but in any case it'd be unlikely to get back-patched.
Let's do this now so we have some improvement for the back branches.)
In passing, make the normal un-claim step take AutovacuumScheduleLock
not AutovacuumLock, since that is what is documented to protect the
wi_tableoid field. This wasn't an actual bug in view of the fact that
readers of that field hold both locks, but it creates some concurrency
penalty against operations that need only AutovacuumLock.
Tom Lane [Sun, 11 Mar 2018 22:10:42 +0000 (18:10 -0400)]
Fix improper uses of canonicalize_qual().
One of the things canonicalize_qual() does is to remove constant-NULL
subexpressions of top-level AND/OR clauses. It does that on the assumption
that what it's given is a top-level WHERE clause, so that NULL can be
treated like FALSE. Although this is documented down inside a subroutine
of canonicalize_qual(), it wasn't mentioned in the documentation of that
function itself, and some callers hadn't gotten that memo.
Notably, commit d007a9505 caused get_relation_constraints() to apply
canonicalize_qual() to CHECK constraints. That allowed constraint
exclusion to misoptimize situations in which a CHECK constraint had a
provably-NULL subclause, as seen in the regression test case added here,
in which a child table that should be scanned is not. (Although this
thinko is ancient, the test case doesn't fail before 9.2, for reasons
I've not bothered to track down in detail. There may be related cases
that do fail before that.)
More recently, commit f0e44751d added an independent bug by applying
canonicalize_qual() to index expressions, which is even sillier since
those might not even be boolean. If they are, though, I think this
could lead to making incorrect index entries for affected index
expressions in v10. I haven't attempted to prove that though.
To fix, add an "is_check" parameter to canonicalize_qual() to specify
whether it should assume WHERE or CHECK semantics, and make it perform
NULL-elimination accordingly. Adjust the callers to apply the right
semantics, or remove the call entirely in cases where it's not known
that the expression has one or the other semantics. I also removed
the call in some cases involving partition expressions, where it should
be a no-op because such expressions should be canonical already ...
and was a no-op, independently of whether it could in principle have
done something, because it was being handed the qual in implicit-AND
format which isn't what it expects. In HEAD, add an Assert to catch
that type of mistake in future.
This represents an API break for external callers of canonicalize_qual().
While that's intentional in HEAD to make such callers think about which
case applies to them, it seems like something we probably wouldn't be
thanked for in released branches. Hence, in released branches, the
extra parameter is added to a new function canonicalize_qual_ext(),
and canonicalize_qual() is a wrapper that retains its old behavior.
Patch by me with suggestions from Dean Rasheed. Back-patch to all
supported branches.
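So in the back branches the API has roughly this shape (sketch):

    /* New entry point carries the semantics flag ... */
    extern Expr *canonicalize_qual_ext(Expr *qual, bool is_check);

    /* ... while the old name keeps its old WHERE-clause behavior. */
    Expr *
    canonicalize_qual(Expr *qual)
    {
        return canonicalize_qual_ext(qual, false);  /* WHERE semantics */
    }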
Peter Eisentraut [Wed, 28 Feb 2018 13:22:51 +0000 (08:22 -0500)]
Fix warnings in man page build
The changes in the CREATE POLICY man page from commit
87c2a17fee784c7e1004ba3d3c5d8147da676783 triggered a stylesheet bug that
created some warning messages and incorrect output. This installs a
workaround.
Also improve the whitespace a bit so it looks better.
Alvaro Herrera [Tue, 6 Mar 2018 18:57:20 +0000 (15:57 -0300)]
Refrain from duplicating data in reorderbuffers
If a walsender exits leaving data in reorderbuffers, the next walsender
that tries to decode the same transaction would append its decoded data
in the same spill files without truncating it first, which effectively
duplicate the data. Avoid that by removing any leftover reorderbuffer
spill files when a walsender starts.
Backpatch to 9.4; this bug has been there from the very beginning of
logical decoding.
Author: Craig Ringer, revised by me
Reviewed by: Álvaro Herrera, Petr Jelínek, Masahiko Sawada
Fujii Masao [Mon, 5 Mar 2018 17:08:18 +0000 (02:08 +0900)]
Fix pg_rewind to handle relation data files in tablespaces properly.
pg_rewind checks whether each file is a relation data file, from its path.
Previously this check logic had a bug which made pg_rewind fail to
recognize any relation data files in tablespaces. This also caused
an assertion failure in pg_rewind.
Tom Lane [Sun, 4 Mar 2018 01:31:35 +0000 (20:31 -0500)]
Fix assorted issues in convert_to_scalar().
If convert_to_scalar is passed a pair of datatypes it can't cope with,
its former behavior was just to elog(ERROR). While this is OK so far as
the core code is concerned, there's extension code that would like to use
scalarltsel/scalargtsel/etc as selectivity estimators for operators that
work on non-core datatypes, and this behavior is a show-stopper for that
use-case. If we simply allow convert_to_scalar to return FALSE instead of
outright failing, then the main logic of scalarltsel/scalargtsel will work
fine for any operator that behaves like a scalar inequality comparison.
The lack of conversion capability will mean that we can't estimate to
better than histogram-bin-width precision, since the code will effectively
assume that the comparison constant falls at the middle of its bin. But
that's still a lot better than nothing. (Someday we should provide a way
for extension code to supply a custom version of convert_to_scalar, but
today is not that day.)
While poking at this issue, we noted that the existing code for handling
type bytea in convert_to_scalar is several bricks shy of a load.
It assumes without checking that if the comparison value is type bytea,
the bounds values are too; in the worst case this could lead to a crash.
It also fails to detoast the input values, so that the comparison result is
complete garbage if any input is toasted out-of-line, compressed, or even
just short-header. I'm not sure how often such cases actually occur ---
the bounds values, at least, are probably safe since they are elements of
an array and hence can't be toasted. But that doesn't make this code OK.
Back-patch to all supported branches, partly because author requested that,
but mostly because of the bytea bugs. The change in API for the exposed
routine convert_network_to_scalar() is theoretically a back-patch hazard,
but it seems pretty unlikely that any third-party code is calling that
function directly.
Tom Lane [Fri, 2 Mar 2018 22:40:48 +0000 (17:40 -0500)]
Fix VM buffer pin management in heap_lock_updated_tuple_rec().
Sloppy coding in this function could lead to leaking a VM buffer pin,
or to attempting to free the same pin twice. Repair. While at it,
reduce the code's tendency to free and reacquire the same page pin.
Back-patch to 9.6; before that, this routine did not concern itself
with VM pages.
Tom Lane [Fri, 2 Mar 2018 16:22:42 +0000 (11:22 -0500)]
Make gistvacuumcleanup() count the actual number of index tuples.
Previously, it just returned the heap tuple count, which might be only an
estimate, and would be completely the wrong thing if the index is partial.
Since this function scans every index page anyway to find free pages,
it's practically free to count the surviving index tuples. Let's do that
and return an accurate count.
This is easily visible as a wrong reltuples value for a partial GiST
index following VACUUM, so back-patch to all supported branches.