Tom Lane [Wed, 15 Aug 2018 20:29:31 +0000 (16:29 -0400)]
Clean up assorted misuses of snprintf()'s result value.
Fix a small number of places that were testing the result of snprintf()
but doing so incorrectly. The right test for buffer overrun, per C99,
is "result >= bufsize" not "result > bufsize". Some places were also
checking for failure with "result == -1", but the standard only says
that a negative value is delivered on failure.
(Note that this only makes these places correct if snprintf() delivers
C99-compliant results. But at least now these places are consistent
with all the other places where we assume that.)
Also, make psql_start_test() and isolation_start_test() check for
buffer overrun while constructing their shell commands. There seems
like a higher risk of overrun, with more severe consequences, here
than there is for the individual file paths that are made elsewhere
in the same functions, so this seemed like a worthwhile change.
Also fix guc.c's do_serialize() to initialize errno = 0 before
calling vsnprintf. In principle, this should be unnecessary because
vsnprintf should have set errno if it returns a failure indication ...
but the other two places this coding pattern is cribbed from don't
assume that, so let's be consistent.
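For illustration, a minimal sketch of the corrected checking pattern
(function and variable names are hypothetical, not the committed code):

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical helper showing the corrected checks. */
    static int
    format_path(char *buf, size_t bufsize, const char *dir, const char *file)
    {
        int     len;

        errno = 0;      /* same paranoia as the do_serialize fix */
        len = snprintf(buf, bufsize, "%s/%s", dir, file);
        /* C99: negative means failure; len >= bufsize means overrun */
        if (len < 0 || (size_t) len >= bufsize)
            return -1;
        return len;
    }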
These errors are all very old, so back-patch as appropriate. I think
that only the shell command overrun cases are even theoretically
reachable in practice, but there's not much point in erroneous error
checks.
Tom Lane [Wed, 15 Aug 2018 17:21:05 +0000 (13:21 -0400)]
Make snprintf.c follow the C99 standard for snprintf's result value.
C99 says that the result should be the number of bytes that would have
been emitted given a large enough buffer, not the number we actually
were able to put in the buffer. It's time to make our substitute
implementation comply with that. Not doing so results in inefficiency
in buffer-enlargement cases, and also poses a portability hazard for
third-party code that might expect C99-compliant snprintf behavior
within Postgres.
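As a sketch of the buffer-sizing idiom that C99 semantics enable
(hypothetical code, not taken from snprintf.c):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical example: size an allocation from snprintf's result. */
    static char *
    format_alloc(const char *name, int value)
    {
        int     needed = snprintf(NULL, 0, "%s = %d", name, value);
        char   *buf;

        if (needed < 0)
            return NULL;
        buf = malloc((size_t) needed + 1);
        if (buf != NULL)
            snprintf(buf, (size_t) needed + 1, "%s = %d", name, value);
        return buf;
    }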
In passing, remove useless tests for str == NULL; neither C99 nor
predecessor standards ever allowed that except when count == 0,
so I see no reason to expend cycles on making that a non-crash case
for this implementation. Also, don't waste a byte in pg_vfprintf's
local I/O buffer; this might have performance benefits by allowing
aligned writes during flushbuffer calls.
Bruce Momjian [Tue, 14 Aug 2018 21:19:02 +0000 (17:19 -0400)]
pg_upgrade: fix shutdown check for standby servers
Commit 244142d32afd02e7408a2ef1f249b00393983822 only tested the
pg_controldata output for primary servers, but standby servers have
different "Database cluster state" output, so check for that too.
Diagnosed-by: Michael Paquier
Discussion: https://postgr.es/m/20180810164240.GM13638@paquier.xyz
Tom Lane [Mon, 13 Aug 2018 17:07:52 +0000 (13:07 -0400)]
Fix libpq's implementation of per-host connection timeouts.
Commit 5f374fe7a attempted to turn the connect_timeout from an overall
maximum time limit into a per-host limit, but it didn't do a great job of
that. The timer would only get restarted if we actually detected timeout
within connectDBComplete(), not if we changed our attention to a new host
for some other reason. In that case the old timeout continued to run,
possibly causing a premature timeout failure for the new host.
Fix that, and also tweak the logic so that if we do get a timeout,
we advance to the next available IP address, not to the next host name.
There doesn't seem to be a good reason to assume that all the IP
addresses supplied for a given host name will necessarily fail the
same way as the current one. Moreover, this conforms better to the
admittedly-vague documentation statement that the timeout is "per
connection attempt". I changed that to "per host name or IP address"
to be clearer. (Note that reconnections to the same server, such as for
switching protocol version or SSL status, don't get their own separate
timeout; that was true before and remains so.)
Also clarify documentation about the interpretation of connect_timeout
values less than 2.
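For illustration, a hedged libpq example of the clarified semantics
(host names are made up):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* connect_timeout applies per host name or IP address, so each
         * of db1 and db2 gets its own 5-second budget. */
        PGconn *conn = PQconnectdb(
            "host=db1.example.com,db2.example.com connect_timeout=5");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }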
This seems like a bug, so back-patch to v10 where this logic came in.
Michael Paquier [Mon, 13 Aug 2018 09:49:04 +0000 (11:49 +0200)]
Make autovacuum more aggressive to remove orphaned temp tables
Commit dafa084, added in 10, made the removal of orphaned temporary
tables more aggressive. This commit goes a step further by adding a
flag to each backend's MyProc which tracks the temporary namespace
currently in use. The flag is set when the namespace gets created, and
is reset if the transaction or sub-transaction that created the
temporary namespace aborts. The flag assignment is assumed to be
atomic, so this can be done in a lock-less fashion like other flags
already present in PGPROC, such as databaseId or backendId; meanwhile,
the fact that a newly created temporary namespace and its tables remain
locked until the creating transaction commits acts as a barrier for
other backends.
Autovacuum uses this new flag to discard orphaned tables more
aggressively: it additionally checks which database a backend is
connected to as well as which temporary namespace it has in use, so
orphaned temporary relations are removed even if a backend reuses the
same PGPROC slot as one which created temporary relations in a past
session.
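A rough sketch of the check this enables (stand-in types and field
names, not PostgreSQL's actual declarations):

    #include <stdbool.h>
    #include <stddef.h>

    typedef unsigned int Oid;

    typedef struct PGPROC
    {
        Oid     databaseId;         /* database the backend is using */
        Oid     tempNamespaceId;    /* temp namespace in use, or 0 */
    } PGPROC;

    /* Autovacuum-side test: is the temp table's creator really gone? */
    static bool
    temp_table_is_orphaned(const PGPROC *proc, Oid dbId, Oid nspId)
    {
        /* An empty slot, a different database, or a different temp
         * namespace all mean the creating backend is no longer there. */
        return proc == NULL ||
               proc->databaseId != dbId ||
               proc->tempNamespaceId != nspId;
    }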
The base idea of this patch comes from Robert Haas; its first version
was written by Tsunakawa Takayuki and then heavily reviewed by me.
Author: Tsunakawa Takayuki
Reviewed-by: Michael Paquier, Kyotaro Horiguchi, Andres Freund
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8A4DC6@G01JPEXMBYT05
Backpatch: 11-, as PGPROC gains a new flag and we don't want silent ABI
breakages on already released versions.
Amit Kapila [Mon, 13 Aug 2018 04:34:39 +0000 (10:04 +0530)]
Adjust comment atop ExecShutdownNode.
After commits a315b967cc and b805b63ac2, part of the comment atop
ExecShutdownNode is redundant. Adjust it.
Author: Amit Kapila
Backpatch-through: 10 where both the mentioned commits are present.
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
Amit Kapila [Mon, 13 Aug 2018 02:52:18 +0000 (08:22 +0530)]
Prohibit shutting down resources if there is a possibility of backing up.
Currently, we release the asynchronous resources as soon as it is evident
that no more rows will be needed, e.g. when a Limit is filled. This can be
problematic, especially for custom and foreign scans, where we can scan
backward. Fix that by disallowing the shutdown of resources in such
cases.
Reported-by: Robert Haas
Analysed-by: Robert Haas and Amit Kapila
Author: Amit Kapila
Reviewed-by: Robert Haas
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
Andrew Gierth [Mon, 13 Aug 2018 00:45:35 +0000 (01:45 +0100)]
Avoid query-lifetime memory leaks in XMLTABLE (bug #15321)
Multiple calls to XMLTABLE in a query (e.g. laterally applying it to a
table with an xml column, an important use-case) were leaking large
amounts of memory into the per-query context, blowing up memory usage.
Repair by reorganizing memory context usage in nodeTableFuncscan; use
the usual per-tuple context for row-by-row evaluations instead of
perValueCxt, and use the explicitly created context -- renamed from
perValueCxt to perTableCxt -- for arguments and state for each
individual table-generation operation.
Backpatch to PG10 where this code was introduced.
Original report by IRC user begriffs; analysis and patch by me.
Reviewed by Tom Lane and Pavel Stehule.
Tom Lane [Sun, 12 Aug 2018 22:46:01 +0000 (18:46 -0400)]
Revert "Distinguish printf-like functions that support %m from those that don't."
This reverts commit 3a60c8ff892a8242b907f44702bfd9f1ff877d45. Buildfarm
results show that that caused a whole bunch of new warnings on platforms
where gcc believes the local printf to be non-POSIX-compliant. This
problem outweighs the hypothetical-anyway possibility of getting warnings
for misuse of %m. We could use gnu_printf archetype when we've substituted
src/port/snprintf.c, but that brings us right back to the problem of not
getting warnings for %m.
A possible answer is to attack it in the other direction by insisting
that %m support be included in printf's feature set, but that will take
more investigation. In the meantime, revert the previous change, and
update the comment for PGAC_C_PRINTF_ARCHETYPE to more fully explain
what's going on.
Tom Lane [Sun, 12 Aug 2018 22:05:49 +0000 (18:05 -0400)]
Fix bogus loop logic in 013_crash_restart test's pump_until subroutine.
The pump_nb() step might've already received the desired data, so we must
check for that at the top of the loop not the bottom. Otherwise, the
call to pump() will sit with nothing to do until the timeout elapses.
pump_until then falls out with apparent success ... but the timeout has
been used up, causing the next call of pump_until to report a timeout
failure. I believe this explains the intermittent timeout failures
we've seen in the buildfarm ever since this test went in. I was able
to reproduce the problem on gaur semi-repeatably, and this appears to
fix it.
In passing, remove a duplicate assignment, fix one stdin-assignment to
look like the rest, and document the test's dependency on test_decoding.
Tom Lane [Sat, 11 Aug 2018 19:53:20 +0000 (15:53 -0400)]
Fix wrong order of operations in inheritance_planner.
When considering a partitioning parent rel, we should stop processing that
subroot as soon as we've done adjust_appendrel_attrs and any securityQuals
updates. The rest of this is unnecessary, and indeed adding duplicate
subquery RTEs to the subroot is *wrong*. As the code stood, the children
of that partition ended up with two sets of copied subquery RTEs, confusing
matters greatly. Even more hilarity ensued if all of the children got
excluded by constraint exclusion, so that the extra RTEs didn't make it
back into the parent rtable.
Per fuzz testing by Andreas Seltenreich. Back-patch to v11 where this
got broken (by commit 0a480502b, it looks like).
Tom Lane [Sat, 11 Aug 2018 15:23:41 +0000 (11:23 -0400)]
Produce compiler errors if errno is referenced inside elog/ereport calls.
It's often unsafe to reference errno within an elog/ereport call, because
there are a lot of sub-functions involved and they might not all preserve
errno. (This is why we support the %m format spec: it works off a value
of errno captured before we execute any potentially-unsafe functions in
the arguments.) Therefore, we have a project policy not to use errno
there.
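To illustrate the policy (a sketch; the surrounding code is
hypothetical):

    /* Unsafe: sub-functions evaluated inside the call may clobber errno. */
    elog(LOG, "could not open file: %s", strerror(errno));

    /* Safe: %m is formatted from a value of errno captured up front. */
    elog(LOG, "could not open file: %m");

    /* Also safe: capture errno manually before the call. */
    int     save_errno = errno;

    elog(LOG, "could not open file: %s", strerror(save_errno));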
This patch adds a hack to cause an (admittedly obscure) compiler error
for such unsafe usages. With the current code, the error will only be seen
on Linux, macOS, and FreeBSD, but that should certainly be enough to catch
mistakes in the buildfarm if they somehow get missed earlier.
In addition, fix some places in src/common/exec.c that trip the error.
I think these places are actually all safe, but it's simple enough to
avoid the error by capturing errno manually, and doing so is good
future-proofing in case these call sites get any more complicated.
Tom Lane [Sat, 11 Aug 2018 15:11:05 +0000 (11:11 -0400)]
Distinguish printf-like functions that support %m from those that don't.
The elog/ereport family of functions certainly support the %m format spec,
because they implement it "by hand". But elsewhere we have printf wrappers
that might or might not allow it depending on whether the platform's printf
does. (Most non-glibc versions don't, and notably, src/port/snprintf.c
doesn't.) Hence, rather than using the gnu_printf format archetype
interchangeably for all these functions, use it only for elog/ereport.
This will allow us to get compiler warnings for mistakes like the ones
fixed in commit a13b47a59, at least on platforms where printf doesn't
take %m and gcc is correctly configured to know it. (Unfortunately,
that won't happen on Linux, nor on macOS according to my testing.
It remains to be seen what the buildfarm's gcc-on-Windows animals will
think of this, but we may well have to rely on less-popular platforms
to warn us about unportable code of this kind.)
Peter Geoghegan [Fri, 10 Aug 2018 20:01:34 +0000 (13:01 -0700)]
Handle parallel index builds on mapped relations.
Commit 9da0cc35284, which introduced parallel CREATE INDEX, failed to
propagate relmapper.c backend local cache state to parallel worker
processes. This could result in parallel index builds against mapped
catalog relations where the leader process (participating as a worker)
scans the new, pristine relfilenode, while worker processes scan the
obsolescent relfilenode. When this happened, the final index structure
was typically not consistent with the owning table's structure. The
final index structure could contain entries formed from both heap
relfilenodes. Only rebuilds on mapped catalog relations that occur as
part of a VACUUM FULL or CLUSTER could become corrupt in practice, since
their mapped relation relfilenode swap is what allows the inconsistency
to arise.
On master, fix the problem by propagating the required relmapper.c
backend state as part of standard parallel initialization (Cf. commit 29d58fd3). On v11, simply disallow builds against mapped catalog
relations by deeming them parallel unsafe.
Author: Peter Geoghegan
Reported-By: "death lock"
Reviewed-By: Tom Lane, Amit Kapila
Bug: #15309
Discussion: https://postgr.es/m/153329671686.1405.18298309097348420351@wrigleys.postgresql.org
Backpatch: 11-, where parallel CREATE INDEX was introduced.
Michael Paquier [Fri, 10 Aug 2018 16:26:59 +0000 (18:26 +0200)]
Improve TRUNCATE by avoiding early lock queue
A caller of TRUNCATE could previously queue for an access exclusive lock
on a relation it may not have permission to truncate, potentially
interfering with users authorized to work on it. This can be very
intrusive depending on the lock being taken. For example, pg_authid
could be blocked, preventing any authentication attempt from happening
on a PostgreSQL instance.
This commit fixes the case of TRUNCATE so that RangeVarGetRelidExtended
is used with a callback doing the necessary ACL checks at an earlier
stage, avoiding lock-queuing issues; an immediate failure now happens
for unprivileged users instead of waiting on a lock that would never be
granted.
This is rather similar to the work done in cbe24a6 for CLUSTER, and
this time the TRUNCATE code is refactored so that there are no
user-facing changes. As with the commit for CLUSTER, no back-patch is
done.
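In sketch form (the callback name follows the commit; the surrounding
code is condensed):

    /* Resolve the name and run the ACL checks in a callback, before
     * queuing for the AccessExclusiveLock. */
    relid = RangeVarGetRelidExtended(relation, AccessExclusiveLock, 0,
                                     RangeVarCallbackForTruncate, NULL);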
Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier
Reviewed by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180806165816.GA19883@paquier.xyz
Add missing documentation for argument of amcostestimate()
5262f7a4fc44 introduced parallel index scan. In order to estimate the
number of parallel workers, it added an extra argument to the
amcostestimate() index access method API function. However, this extra
argument was missing from the documentation. This commit fixes that.
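For reference, the signature as now documented, with the parallel-worker
output parameter last:

    void
    amcostestimate(PlannerInfo *root,
                   IndexPath *path,
                   double loop_count,
                   Cost *indexStartupCost,
                   Cost *indexTotalCost,
                   Selectivity *indexSelectivity,
                   double *indexCorrelation,
                   double *indexPages);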
Discussion: https://postgr.es/m/4128fdb4-8b63-2e05-38f6-3125f8c27263%40lab.ntt.co.jp
Author: Tatsuro Yamada, Alexander Korotkov
Backpatch-through: 10
Tom Lane [Thu, 9 Aug 2018 19:21:09 +0000 (15:21 -0400)]
Document need to clear MAKELEVEL when invoking PG build from a makefile.
Since commit 3b8f6e75f, failure to do this would lead to
submake-generated-headers not doing anything, so that references to
generated or symlinked headers would fail. Previous to that, the
omission only led to temp-install not doing anything, which apparently
affects many fewer people (doesn't anybody use "make check" in their
build rules??). Hence, backpatch to v11 but not further.
Per complaints from Christoph Berg, Jakob Egger, and others.
Bruce Momjian [Thu, 9 Aug 2018 14:13:15 +0000 (10:13 -0400)]
docs: Only first instance of a PREPARE parameter sets data type
If the first reference to $1 is "($1 = col) or ($1 is null)", the data
type can be determined, but not for "($1 is null) or ($1 = col)". This
change documents this.
Michael Paquier [Thu, 9 Aug 2018 07:40:15 +0000 (09:40 +0200)]
Restrict access to reindex of shared catalogs for non-privileged users
A database owner running a database-level REINDEX could also perform
the operation on shared system catalogs without being their owner,
which allowed him to block resources he should not have access to. The
same goes for a schema owner. For example, PostgreSQL would become
unresponsive and could even block authentication if a lock is waited
for on pg_authid. This commit makes sure that a user running REINDEX
SYSTEM, DATABASE or SCHEMA only works on a relation if one of the
following holds:
- The user is a superuser
- The user is the table owner
- The user is the database/schema owner, and the relation worked on is
not shared.
Robert has worded most of the documentation changes, and I have coded
the core part.
Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier, Robert Haas
Reviewed by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180805211059.GA2185@paquier.xyz
Backpatch-through: 11- as the current behavior has been around for a
very long time and could be disruptive for already released branches.
Tom Lane [Thu, 9 Aug 2018 00:02:10 +0000 (20:02 -0400)]
Remove bogus Assert in make_partitionedrel_pruneinfo().
This Assert thought that a given rel couldn't be both leaf and
non-leaf, but it turns out that in some unusual plan trees
that's wrong, so remove it.
The lack of testing for cases like that is quite concerning ---
there is little reason for confidence that there aren't other
bugs in the area. But developing a stable test case seems
rather difficult, and in any case we don't need this Assert.
Peter Geoghegan [Wed, 8 Aug 2018 19:56:11 +0000 (12:56 -0700)]
Doc: Correct description of amcheck example query.
The amcheck documentation incorrectly claimed that its example query
verifies every catalog index in the database. In fact, the query only
verifies the 10 largest indexes (as determined by pg_class.relpages).
Adjust the description accordingly.
Backpatch: 10-, where contrib/amcheck was introduced.
Tom Lane [Wed, 8 Aug 2018 18:32:29 +0000 (14:32 -0400)]
Remove unwanted "garbage cleanup" logic in Makefiles.
GNUmakefile.in defined a macro "garbage" that seems to have been meant
as a suitable target for automatic "rm -rf" treatment, but it isn't
actually used anywhere (and indeed never was, AFAICT).
Moreover, we have concluded that the Makefiles shouldn't take it upon
themselves to remove files that aren't expected by-products of building,
so that doing anything like that would be against project policy anyway.
Hence, just remove the macro.
Grepping around finds another violation of that policy in ecpg/preproc,
so clean that up too.
Don't run atexit callbacks in quickdie signal handlers.
exit() is not async-signal safe. Even if the libc implementation is, 3rd
party libraries might have installed unsafe atexit() callbacks. After
receiving SIGQUIT, we really just want to exit as quickly as possible, so
we don't really want to run the atexit() callbacks anyway.
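A simplified sketch of the idea (not the actual handler):

    #include <signal.h>
    #include <unistd.h>

    /* Exit from a SIGQUIT handler without running atexit() callbacks:
     * _exit() is async-signal safe and skips them; exit() is neither
     * safe nor callback-free. */
    static void
    quickdie_sketch(int signo)
    {
        _exit(2);
    }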
The original report by Jimmy Yih was a self-deadlock in startup_die().
However, this patch doesn't address that scenario; the signal handling
while waiting for the startup packet is more complicated. But at least this
alleviates similar problems in the SIGQUIT handlers, like that reported
by Asim R P later in the same thread.
Tom Lane [Wed, 8 Aug 2018 15:44:50 +0000 (11:44 -0400)]
Match RelOptInfos by relids not pointer equality.
Commit 1c2cb2744 added some code that tried to detect whether two
RelOptInfos were the "same" rel by pointer comparison; but it turns
out that inheritance_planner breaks that, through its shenanigans
with copying some relations forward into new subproblems. Compare
relid sets instead. Add a regression test case to exercise this
area.
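The comparison change, sketched (bms_equal() and RelOptInfo's relids
field are real; the surrounding code is condensed):

    /* Before: rel1 == rel2, i.e. pointer equality, which the copied
     * rels from inheritance_planner can break.  After: compare the
     * sets of base relids instead. */
    same_rel = bms_equal(rel1->relids, rel2->relids);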
Problem reported by Rushabh Lathia; diagnosis and fix by Amit Langote,
modified a bit by me.
Tom Lane [Tue, 7 Aug 2018 20:32:50 +0000 (16:32 -0400)]
Don't record FDW user mappings as members of extensions.
CreateUserMapping has a recordDependencyOnCurrentExtension call that's
been there since extensions were introduced (very possibly my fault).
However, there's no support anywhere else for user mappings as members
of extensions, nor are they listed as a possible member object type in
the documentation. Nor does it really seem like a good idea for user
mappings to belong to extensions when roles don't. Hence, remove the
bogus call.
(As we saw in bug #15310, the lack of any pg_dump support for this case
ensures that any such membership record would silently disappear during
pg_upgrade. So there's probably no need for us to do anything else
about cleaning up after this mistake.)
Tom Lane [Tue, 7 Aug 2018 20:00:44 +0000 (16:00 -0400)]
Fix incorrect initialization of BackendActivityBuffer.
Since commit c8e8b5a6e, this has been zeroed out using the wrong length.
In practice the length would always be too small, leading to not zeroing
the whole buffer rather than clobbering additional memory; and that's
pretty harmless, both because shmem would likely start out as zeroes
and because we'd reinitialize any given entry before use. Still,
it's bogus, so fix it.
Tom Lane [Tue, 7 Aug 2018 19:43:48 +0000 (15:43 -0400)]
Fix pg_upgrade to handle event triggers in extensions correctly.
pg_dump with --binary-upgrade must emit ALTER EXTENSION ADD commands
for all objects that are members of extensions. It forgot to do so for
event triggers, as per bug #15310 from Nick Barnes. Back-patch to 9.3
where event triggers were introduced.
Tom Lane [Tue, 7 Aug 2018 17:13:42 +0000 (13:13 -0400)]
Ensure pg_dump_sort.c sorts null vs non-null namespace consistently.
The original coding here (which is, I believe, my fault) supposed that
it didn't need to concern itself with the possibility that one object
of a given type-priority has a namespace while another doesn't. But
that's not reliably true anymore, if it ever was; and if it does happen
then it's possible that DOTypeNameCompare returns self-inconsistent
comparison results. That leads to unspecified behavior in qsort()
and a resultant weird output order from pg_dump.
This should end up being only a cosmetic problem, because any ordering
constraints that actually matter should be enforced by the later
dependency-based sort. Still, it's a bug, so back-patch.
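A sketch of a self-consistent comparison where NULL sorts after
non-NULL (a hypothetical helper, not the committed code):

    #include <string.h>

    static int
    compare_namespace(const char *a, const char *b)
    {
        if (a == NULL && b == NULL)
            return 0;
        if (a == NULL)
            return 1;       /* NULL sorts after non-NULL */
        if (b == NULL)
            return -1;
        return strcmp(a, b);
    }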
Report and fix by Jacob Champion, though I editorialized on his
patch to the extent of making NULL sort after non-NULL, for consistency
with our usual sorting definitions.
Tom Lane [Mon, 6 Aug 2018 14:53:35 +0000 (10:53 -0400)]
Fix failure to reset libpq's state fully between connection attempts.
The logic in PQconnectPoll() did not take care to ensure that all of
a PGconn's internal state variables were reset before trying a new
connection attempt. If we got far enough in the connection sequence
to have changed any of these variables, and then decided to try a new
server address or server name, the new connection might be completed
with some state that really only applied to the failed connection.
While this has assorted bad consequences, the only one that is clearly
a security issue is that password_needed didn't get reset, so that
if the first server asked for a password and the second didn't,
PQconnectionUsedPassword() would return an incorrect result. This
could be leveraged by unprivileged users of dblink or postgres_fdw
to allow them to use server-side login credentials that they should
not be able to use.
Other notable problems include the possibility of forcing a v2-protocol
connection to a server capable of supporting v3, or overriding
"sslmode=prefer" to cause a non-encrypted connection to a server that
would have accepted an encrypted one. Those are certainly bugs but
it's harder to paint them as security problems in themselves. However,
forcing a v2-protocol connection could result in libpq having a wrong
idea of the server's standard_conforming_strings setting, which opens
the door to SQL-injection attacks. The extent to which that's actually
a problem, given the prerequisite that the attacker needs control of
the client's connection parameters, is unclear.
These problems have existed for a long time, but became more easily
exploitable in v10, both because it introduced easy ways to force libpq
to abandon a connection attempt at a late stage and then try another one
(rather than just giving up), and because it provided an easy way to
specify multiple target hosts.
Fix by rearranging PQconnectPoll's state machine to provide centralized
places to reset state properly when moving to a new target host or when
dropping and retrying a connection to the same host.
Tom Lane, reviewed by Noah Misch. Our thanks to Andrew Krasichkov
for finding and reporting the problem.
There are some problems with the tls-unique channel binding type. It's not
supported by all SSL libraries, and strictly speaking it's not defined for
TLS 1.3 at all, even though at least in OpenSSL, the functions used for it
still seem to work with TLS 1.3 connections. And since we had no
mechanism to negotiate what channel binding type to use, there would be
awkward interoperability issues if a server only supported some channel
binding types. tls-server-end-point seems feasible to support with any SSL
library, so let's just stick to that.
This removes the scram_channel_binding libpq option altogether, since there
is now only one supported channel binding type.
This also removes all the channel binding tests from the SSL test suite.
They were really just testing the scram_channel_binding option, which
is now gone. Channel binding is used if both client and server support it,
so it is used in the existing tests. It would be good to have some tests
specifically for channel binding, to make sure it really is used, and to
cover the different combinations of a client and a server that do or do
not support it. The current set of settings we have makes it hard to
write such tests, but I did test those things manually, by disabling
HAVE_BE_TLS_GET_CERTIFICATE_HASH and/or
HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH.
I also removed the SCRAM_CHANNEL_BINDING_TLS_END_POINT constant. This is a
matter of taste, but IMO it's more readable to just use the
"tls-server-end-point" string.
Refactor the checks on whether the SSL library supports the functions
needed for tls-server-end-point channel binding. Now the server won't
advertise, and the client won't choose, the SCRAM-SHA-256-PLUS variant, if
compiled with an OpenSSL version too old to support it.
In passing, add some sanity checks to verify that the chosen SASL
mechanism, SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, matches whether the SCRAM
exchange used channel binding or not -- for example, the case where the
client selects the non-channel-binding variant SCRAM-SHA-256 but then
uses channel binding in the SCRAM message anyway. It's harmless from a
security point of view, I believe, and I'm not sure if there are other
conditions that would cause the connection to fail, but it seems better
to be strict about these things and check explicitly.
Tom Lane [Sun, 5 Aug 2018 03:49:53 +0000 (23:49 -0400)]
Update version 11 release notes.
Remove description of commit 1944cdc98, which has now been back-patched
so it's not relevant to v11 any longer. Add descriptions of other
recent commits that seemed worth mentioning.
I marked the update as stopping at 2018-07-30, because it's unclear
whether d06eebce5 will be allowed to stay in v11, and I didn't feel like
putting effort into writing a description of it yet. If it does stay,
I think it will deserve mention in the Source Code section.
Tom Lane [Sat, 4 Aug 2018 23:38:58 +0000 (19:38 -0400)]
Fix INSERT ON CONFLICT UPDATE through a view that isn't just SELECT *.
When expanding an updatable view that is an INSERT's target, the rewriter
failed to rewrite Vars in the ON CONFLICT UPDATE clause. This accidentally
worked if the view was just "SELECT * FROM ...", as the transformation
would be a no-op in that case. With more complicated view targetlists,
this omission would often lead to "attribute ... has the wrong type" errors
or even crashes, as reported by Mario De Frutos Dieguez.
Fix by adding code to rewriteTargetView to fix up the data structure
correctly. The easiest way to update the exclRelTlist list is to rebuild
it from scratch looking at the new target relation, so factor the code
for that out of transformOnConflictClause to make it sharable.
In passing, avoid duplicate permissions checks against the EXCLUDED
pseudo-relation, and prevent useless view expansion of that relation's
dummy RTE. The latter is only known to happen (after this patch) in cases
where the query would fail later due to not having any INSTEAD OF triggers
for the view. But by exactly that token, it would create an unintended
and very poorly tested state of the query data structure, so it seems like
a good idea to prevent it from happening at all.
This has been broken since ON CONFLICT was introduced, so back-patch
to 9.5.
Dean Rasheed, based on an earlier patch by Amit Langote;
comment-kibitzing and back-patching by me
Michael Paquier [Sat, 4 Aug 2018 20:31:18 +0000 (05:31 +0900)]
Reset properly errno before calling write()
6cb3372 forces errno to ENOSPC when fewer bytes than expected have been
written and errno is unset, but it forgot to properly reset errno before
making the system call to write(), so errno could potentially carry over
from a previous system call.
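A minimal sketch of the corrected pattern (hypothetical helper):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t
    write_with_enospc(int fd, const void *buf, size_t len)
    {
        ssize_t rc;

        errno = 0;      /* don't trust leftover errno */
        rc = write(fd, buf, len);
        /* A short write with errno unset is treated as out of space. */
        if (rc >= 0 && rc != (ssize_t) len && errno == 0)
            errno = ENOSPC;
        return rc;
    }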
Reported-by: Tom Lane
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/31797.1533326676@sss.pgh.pa.us
Noah Misch [Sat, 4 Aug 2018 03:53:25 +0000 (20:53 -0700)]
Make "kerberos" test suite independent of "localhost" name resolution.
This suite malfunctioned if the canonical name of "localhost" was
something other than "localhost", such as "localhost.localdomain". Use
hostaddr=127.0.0.1 and a fictitious host=, so the resolver's answers for
"localhost" don't affect the outcome. Back-patch to v11, which
introduced this test suite.
Peter Geoghegan [Fri, 3 Aug 2018 21:45:11 +0000 (14:45 -0700)]
Add table relcache invalidation to index builds.
It's necessary to make sure that owning tables have a relcache
invalidation prior to advancing the command counter to make
newly-entered catalog tuples for the index visible. inval.c must be
able to maintain the consistency of the local caches in the event of
transaction abort. There is usually only a problem when CREATE INDEX
transactions abort, since there is a generic invalidation once we reach
index_update_stats().
This bug is of long standing. Problems were made much more likely by
the addition of parallel CREATE INDEX (commit 9da0cc35284), but it is
strongly suspected that similar problems can be triggered without
involving plan_create_index_workers(). (plan_create_index_workers()
triggers a relcache build or rebuild, which previously only happened in
rare edge cases.)
Author: Peter Geoghegan
Reported-By: Luca Ferrari
Diagnosed-By: Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAKoxK+5fVodiCtMsXKV_1YAKXbzwSfp7DgDqUmcUAzeAhf=HEQ@mail.gmail.com
Backpatch: 9.3-
Tom Lane [Fri, 3 Aug 2018 16:20:47 +0000 (12:20 -0400)]
Remove no-longer-appropriate special case in psql's \conninfo code.
\conninfo prints the results of PQhost() and some other libpq functions.
It used to override the PQhost() result with the hostaddr parameter if
that'd been given, but that's unhelpful when multiple hosts were listed
in the connection string. Furthermore, it seems unnecessary in the wake
of commit 1944cdc98, since PQhost does any useful substitution itself.
So let's just remove the extra code and print PQhost()'s result without
any editorialization.
Tom Lane [Fri, 3 Aug 2018 16:12:10 +0000 (12:12 -0400)]
Change libpq's internal uses of PQhost() to inspect host field directly.
Commit 1944cdc98 changed PQhost() to return the hostaddr value when that
is specified and host isn't. This is a good idea in general, but
fe-auth.c and related files contain PQhost() calls for which it isn't.
Specifically, when we compare SSL certificates or other server identity
information to the host field, we do not want to use hostaddr instead;
that's not what's documented, that's not what happened pre-v10, and
it doesn't seem like a good idea.
Instead, we can just look at connhost[].host directly. This does what
we want in v10 and up; in particular, if neither host nor hostaddr
were given, the host field will be replaced with the default host name.
That seems useful, and it's likely the reason that these places were
coded to call PQhost() originally (since pre-v10, the stored field was
not replaced with the default).
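In sketch form (field names as in libpq's internal structs, to the best
of my knowledge; the surrounding code is condensed):

    /* Inspect the stored host field for the current connection directly,
     * so a hostaddr value is never substituted. */
    const char *host = conn->connhost[conn->whichhost].host;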
Amit Kapila [Fri, 3 Aug 2018 05:32:02 +0000 (11:02 +0530)]
Fix buffer usage stats for parallel nodes.
Buffer usage stats are accounted only for the execution phase of a
node. For Gather and Gather Merge nodes, such stats are accumulated
when the workers are shut down, which happens after the node's
execution, so we failed to account for them for such nodes. Fix it by
treating the nodes as running while we shut them down.
We can also miss accounting for a Limit node when Gather or Gather
Merge is beneath it, because it can finish execution before shutting
down such nodes. So we allow a Limit node to shut down the resources
before it completes execution.
In passing, fix the Gather node code to allow workers to shut down as
soon as we find that all the tuples from the workers have been
retrieved. The original code used to do that, but it was accidentally
removed by commit 01edb5c7fc.
Reported-by: Adrien Nayrat
Author: Amit Kapila and Robert Haas
Reviewed-by: Robert Haas and Andres Freund
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
Amit Kapila [Fri, 3 Aug 2018 03:41:37 +0000 (09:11 +0530)]
Match the buffer usage tracking for leader and worker backends.
In the leader backend, we don't track buffer usage for the ExecutorStart
phase, whereas in the worker backend we track it for the ExecutorStart
phase as well. This leads to different buffer usage stats values for
parallel and non-parallel queries. Change the code so that the worker
backend also starts tracking buffer usage only after ExecutorStart.
Author: Amit Kapila and Robert Haas
Reviewed-by: Robert Haas and Andres Freund
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
Thomas Munro [Wed, 1 Aug 2018 23:25:31 +0000 (11:25 +1200)]
Add missing header include to pmsignal.h.
pmsignal.h uses sig_atomic_t in some builds, but relied on signal.h
having been included already. We could include it conditionally
but evidently that wouldn't save anything in practice and would
add more ugly macros, so let's just include signal.h always.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/4166.1533154074%40sss.pgh.pa.us
Tom Lane [Wed, 1 Aug 2018 23:42:46 +0000 (19:42 -0400)]
Fix run-time partition pruning for appends with multiple source rels.
The previous coding here supposed that if run-time partitioning applied to
a particular Append/MergeAppend plan, then all child plans of that node
must be members of a single partitioning hierarchy. This is totally wrong,
since an Append could be formed from a UNION ALL: we could have multiple
hierarchies sharing the same Append, or child plans that aren't part of any
hierarchy.
To fix, restructure the related plan-time and execution-time data
structures so that we can have a separate list or array for each
partitioning hierarchy. Also track subplans that are not part of any
hierarchy, and make sure they don't get pruned.
Per reports from Phil Florent and others. Back-patch to v11, since
the bug originated there.
David Rowley, with a lot of cosmetic adjustments by me; thanks also
to Amit Langote for review.
Alvaro Herrera [Wed, 1 Aug 2018 21:39:07 +0000 (17:39 -0400)]
Fix logical replication slot initialization
This was broken in commit 9c7d06d60680, which inadvertently gave the
wrong value to fast_forward in one StartupDecodingContext call. Fix by
flipping the value. Add a test for the obvious error, namely trying to
initialize a replication slot with a nonexistent output plugin.
While at it, move the CreateDecodingContext call earlier, so that any
errors are reported before sending the CopyBoth message.
Author: Dave Cramer <davecramer@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CADK3HHLVkeRe1v4P02-5hj55H3_yJg3AEtpXyEY5T3wuzO2jSg@mail.gmail.com
Alvaro Herrera [Wed, 1 Aug 2018 19:06:47 +0000 (15:06 -0400)]
Fix per-tuple memory leak in partition tuple routing
Some operations were being done in a longer-lived memory context,
causing intra-query leaks. It's not noticeable unless you're doing a
large COPY, but if you are, it eats enough memory to cause a problem.
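The usual discipline, sketched (the executor calls are real; the
surrounding code is hypothetical):

    /* Do per-row work in the per-tuple context, which is reset for
     * every row, not in a query-lifespan context. */
    oldcontext = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
    /* ... form and route the tuple ... */
    MemoryContextSwitchTo(oldcontext);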
Tom Lane [Wed, 1 Aug 2018 16:30:36 +0000 (12:30 -0400)]
Fix libpq's code for searching .pgpass; rationalize empty-list-item cases.
Before v10, we always searched ~/.pgpass using the host parameter,
and nothing else, to match to the "hostname" field of ~/.pgpass.
(However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
"localhost".) In v10, this got broken by commit 274bb2b38, repaired by
commit bdac9836d, and broken again by commit 7b02ba62e; in the code
actually shipped, we'd search with hostaddr if both that and host were
specified --- though oddly, *not* if only hostaddr were specified.
Since this is directly contrary to the documentation, and not
backwards-compatible, it's clearly a bug.
However, the change wasn't totally without justification, even though it
wasn't done quite right, because the pre-v10 behavior has arguably been
buggy since we added hostaddr. If hostaddr is specified and host isn't,
the pre-v10 code will search ~/.pgpass for "localhost", and ship that
password off to a server that most likely isn't local at all. That's
unhelpful at best, and could be a security breach at worst.
Therefore, rather than just revert to that old behavior, let's define
the behavior as "search with host if provided, else with hostaddr if
provided, else search for localhost". (As before, a host name matching
DEFAULT_PGSOCKET_DIR is replaced by localhost.) This matches the
behavior of the actual connection code, so that we don't pick up an
inappropriate password; and it allows useful searches to happen when
only hostaddr is given.
While we're messing around here, ensure that empty elements within a
host or hostaddr list select the same behavior as a totally-empty
field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
if DEFAULT_PGSOCKET_DIR is /tmp. Things worked that way in some cases
already, but not consistently so, which contributed to the confusion
about what key ~/.pgpass would get searched with.
Update documentation accordingly, and also clarify some nearby text.
Back-patch to v10 where the host/hostaddr list functionality was
introduced.
Allow multi-inserts during COPY into a partitioned table
CopyFrom allows multi-inserts to be used for non-partitioned tables, but
this was disabled for partitioned tables. The reason for this appeared
to be that the tuple may not belong to the same partition as the
previous tuple did. Not allowing multi-inserts here greatly slowed down
imports into partitioned tables. These could take twice as long as a
copy to an equivalent non-partitioned table. It seems wise to do
something about this, so this change allows the multi-inserts by
flushing the so-far inserted tuples to the partition when the next tuple
does not belong to the same partition, or when the buffer fills. This
improves performance when the next tuple in the stream commonly belongs
to the same partition as the previous tuple.
In cases where the target partition changes on every tuple, using
multi-inserts slightly slows the performance. To get around this we
track the average size of the batches that have been inserted and
adaptively enable or disable multi-inserts based on the size of the
batch. Some testing was done and the regression only seems to exist
when the average size of the insert batch is close to 1, so let's just
enable multi-inserts when the average size is at least 1.3. More
performance testing might reveal a better number for this, but since
the slowdown was only 1-2% it does not seem critical enough to spend too
much time calculating it. In any case it may depend on other factors
rather than just the size of the batch.
Allowing multi-inserts for partitions required a bit of work around the
per-tuple memory contexts, as we must flush the tuples when the next
tuple does not belong to the same partition, in which case there is no
good time to reset the per-tuple context, as we've already built the new
tuple by this time. In order to work around this we maintain two
per-tuple contexts and just switch between them every time the partition
changes and reset the old one. This does mean that the first of each
batch of tuples is not allocated in the same memory context as the
others, but that does not matter since we only reset the context once
the previous batch has been inserted.
Author: David Rowley <david.rowley@2ndquadrant.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Tom Lane [Tue, 31 Jul 2018 17:00:07 +0000 (13:00 -0400)]
Further fixes for quoted-list GUC values in pg_dump and ruleutils.c.
Commits 742869946 et al turn out to be a couple bricks shy of a load.
We were dumping the stored values of GUC_LIST_QUOTE variables as they
appear in proconfig or setconfig catalog columns. However, although that
quoting rule looks a lot like SQL-identifier double quotes, there are two
critical differences: empty strings ("") are legal, and depending on which
variable you're considering, values longer than NAMEDATALEN might be valid
too. So the current technique fails altogether on empty-string list
entries (as reported by Steven Winfield in bug #15248) and it also risks
truncating file pathnames during dump/reload of GUC values that are lists
of pathnames.
To fix, split the stored value without any downcasing or truncation,
and then emit each element as a SQL string literal.
This is a tad annoying, because we now have three copies of the
comma-separated-string splitting logic in varlena.c as well as a fourth
one in dumputils.c. (Not to mention the randomly-different-from-those
splitting logic in libpq...) I looked at unifying these, but it would
be rather a mess unless we're willing to tweak the API definitions of
SplitIdentifierString, SplitDirectoriesString, or both. That might be
worth doing in future; but it seems pretty unsafe for a back-patched
bug fix, so for now accept the duplication.
Back-patch to all supported branches, as the previous fix was.
Change bms_add_range to be a no-op for empty ranges
In commit 84940644de93, bms_add_range was added with an API to fail with
an error if an empty range was specified. This seems arbitrary and
unhelpful, so turn that case into a no-op instead. Callers that require
further verification of the arguments or result can perform it
themselves.
This fixes the bug that partition pruning throws an API error for a case
involving the default partition of a default partition, as in the
included test case.
Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/16590.1532622503@sss.pgh.pa.us
Tom Lane [Mon, 30 Jul 2018 22:04:39 +0000 (18:04 -0400)]
Ensure we build generated headers at the start of some more cases.
"make installcheck" and some related cases, when invoked from the toplevel
directory, start out by doing "make all" in src/test/regress. Since that's
one make recursion level down, the submake-generated-headers target will
do nothing, causing us to fail to create/update generated headers before
building pg_regress. This is, I believe, a new failure mode induced by
commit 3b8f6e75f, so let's fix it. To do so, we have to invoke
submake-generated-headers at the top level.
Set ActiveSnapshot when logically replaying inserts
Input functions for the inserted tuples may require a snapshot, when
they are replayed by native logical replication. An example is a domain
with a constraint using a SQL-language function, which prior to this
commit failed to apply on the subscriber side.
Tom Lane [Mon, 30 Jul 2018 16:35:49 +0000 (12:35 -0400)]
Fix pg_dump's failure to dump REPLICA IDENTITY for constraint indexes.
pg_dump knew about printing ALTER TABLE ... REPLICA IDENTITY USING INDEX
for indexes declared as indexes, but it failed to print that for indexes
declared as unique or primary-key constraints. Per report from Achilleas
Mantzios.
This has been broken since the feature was introduced, AFAICS.
Back-patch to 9.4.
Tom Lane [Mon, 30 Jul 2018 15:54:41 +0000 (11:54 -0400)]
Doc: fix oversimplified example for CREATE POLICY.
As written, this policy constrained only the post-image not the pre-image
of rows, meaning that users could delete other users' rows or take
ownership of such rows, contrary to what the docs claimed would happen.
We need two separate policies to achieve the documented effect.
While at it, try to explain what's happening a bit more fully.
Per report from Олег Самойлов. Back-patch to 9.5 where this was added.
Thanks to Stephen Frost for off-list discussion.
Tomas Vondra [Sun, 29 Jul 2018 21:08:00 +0000 (23:08 +0200)]
Mark variable used only in assertion with PG_USED_FOR_ASSERTS_ONLY
Perpendicular lines always intersect, so the line_interpt_line() return
value in line_closept_point() was used only in an assertion, triggering
compiler warnings in non-assert builds.
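The pattern, sketched (the macro is real; the variable and the call's
exact signature are illustrative):

    bool        found PG_USED_FOR_ASSERTS_ONLY;

    /* Keeps the variable warning-free in non-assert builds. */
    found = line_interpt_line(&interpt, &l1, &l2);
    Assert(found);      /* perpendicular lines always intersect */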
Tomas Vondra [Sun, 29 Jul 2018 18:35:18 +0000 (20:35 +0200)]
Restore handling of -0 in the C field of lines in line_construct().
Commit a7dc63d904 inadvertently removed this bit, originally introduced
by 43fe90f66a, causing regression test failures on some platforms due to
producing {1,-1,-0} instead of {1,-1,0}.
Affected test queries have been testing the wrong thing since their
introduction in commit 4c1383efd132e4f532213c8a8cc63a455f55e344.
Back-patch to 9.3 (all supported versions).
Michael Paquier [Sun, 29 Jul 2018 14:50:35 +0000 (23:50 +0900)]
Make error message of pageinspect more consistent for raw page inputs
A copy-paste error from bt_page_items() had gotten into
bt_page_items_bytea(). A second message, in get_raw_page_internal(),
was inconsistent with all the other sub-modules.
Michael Paquier [Sun, 29 Jul 2018 13:00:42 +0000 (22:00 +0900)]
Fix two oversights from 9ebe0572 which refactored cluster_rel
The recheck option became a no-op as ClusterOption failed to set proper
values for each element. There was a second code path where local
options got overwritten.
Document security implications of qualified names.
Commit 5770172cb0c9df9e6ce27c507b449557e5b45124 documented secure schema
usage, and that advice suffices for using unqualified names securely.
Document, in typeconv-func primarily, the additional issues that arise
with qualified names. Back-patch to 9.3 (all supported versions).
Tomas Vondra [Sun, 29 Jul 2018 01:30:48 +0000 (03:30 +0200)]
Provide separate header file for built-in float types
Some data types under adt/ have separate header files, but most simple
ones do not, and their public functions are defined in builtins.h. As
the patches improving geometric types will require making additional
functions public, this seems like a good opportunity to create a header
for float types.
Commit 1acf757255 made _cmp functions public to solve NaN issues locally
for GiST indexes. This patch reworks it in favour of a more widely
applicable API. The API uses inline functions, as they are easier to
use compared to macros, and avoid double-evaluation hazards.
Tomas Vondra [Sun, 29 Jul 2018 00:02:48 +0000 (02:02 +0200)]
Refactor geometric functions and operators
The primary goal of this patch is to eliminate duplicate code and share
code between different geometric data types more often, to prepare the
ground for additional patches. Until now the code reuse was limited,
probably because the simpler types (line and point) were implemented
after the more complex ones.
The changes are quite extensive and can be summarised as:
* Eliminate SQL-level function calls.
* Re-use more functions to implement others.
* Unify internal function names and signatures.
* Remove private functions from geo_decls.h.
* Replace should-not-happen checks with assertions.
* Add comments describing various functions.
* Remove some unreachable code.
* Define delimiter symbols of line datatype like the other ones.
* Remove the GEODEBUG macro and printf() calls.
* Unify code style of a few oddly formatted lines.
While the goal was to cause minimal user-visible changes, it was not
possible to keep the original behavior in all cases - for example when
handling NaN values, or when reusing code makes the functions return
consistent results.
Author: Emre Hasegeli
Reviewed-by: Kyotaro Horiguchi, me
Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com
Michael Paquier [Sat, 28 Jul 2018 22:53:11 +0000 (07:53 +0900)]
Add verbosity to pg_basebackup for sync
This is useful for knowing when the data copy has finished. The
current situation can be confusing for users, as the last message is
"waiting for background process to finish streaming", which makes it
look like the streaming is what is taking time when it is actually the
final sync.
Author: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1ypeoMJ=tFBG8vP13sxEtXd4Pm_x1SqsJdW_RvzpcvN=A@mail.gmail.com
Bruce Momjian [Sat, 28 Jul 2018 19:01:55 +0000 (15:01 -0400)]
pg_upgrade: check for clean server shutdowns
Previously pg_upgrade checked for the pid file and started/stopped the
server to force a clean shutdown. However, "pg_ctl -m immediate"
removes the pid file but doesn't do a clean shutdown, so check
pg_controldata for a clean shutdown too.
Diagnosed-by: Vimalraj A
Discussion: https://postgr.es/m/CAFKBAK5e4Q-oTUuPPJ56EU_d2Rzodq6GWKS3ncAk3xo7hAsOZg@mail.gmail.com
Reduce path length for locking leaf B-tree pages during insertion
In our B-tree implementation, the appropriate leaf page for a new
tuple insertion is acquired using the _bt_search() function. This
function always returned the leaf page locked in shared mode. In order
to obtain an exclusive lock, the caller had to relock the page.
This commit makes _bt_search() lock the leaf page immediately in
exclusive mode when needed. That removes an unnecessary relock and, in
turn, reduces lock contention for B-tree leaf pages. Our experiments
on multi-core systems showed speedups of up to 4.5 times in a corner
case.
Discussion: https://postgr.es/m/CAPpHfduAMDFMNYTCN7VMBsFg_hsf0GqiqXnt%2BbSeaJworwFoig%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Yoshikazu Imai, Simon Riggs, Peter Geoghegan
Robert Haas [Fri, 27 Jul 2018 13:34:57 +0000 (09:34 -0400)]
Use key and partdesc from PartitionDispatch where possible.
Instead of repeatedly fishing the data out of the relcache entry,
let's use the version that we cached in the PartitionDispatch. We
could alternatively rip out the PartitionDispatch fields altogether,
but it doesn't make much sense to have them and not use them; before
this patch, partdesc was set but altogether unused. Amit Langote and
I both thought using them was a little better than removing them, so
this patch takes that approach.
Amit Kapila [Fri, 27 Jul 2018 05:23:00 +0000 (10:53 +0530)]
Fix the buffer release order for parallel index scans.
During parallel index scans, if the current page to be read is deleted, we
skip it and try to get the next page for a scan without releasing the buffer
lock on the current page. To get the next page, sometimes it needs to wait
for another process to complete its scan and advance it to the next page.
Now, it is quite possible that the master backend has errored out before
advancing the scan and issued a termination signal to all workers. The
workers failed to notice the termination request while waiting, because
interrupts are held off due to the buffer lock on the previous page.
This led to all workers being stuck.
The fix is to release the buffer lock on the current page before trying
to get the next page. We were already doing the same in backward scans,
but missed it for forward scans.
Reported-by: Victor Yegorov
Bug: 15290
Diagnosed-by: Thomas Munro and Amit Kapila
Author: Amit Kapila
Reviewed-by: Thomas Munro
Tested-By: Thomas Munro and Victor Yegorov
Backpatch-through: 10 where parallel index scans were introduced
Discussion: https://postgr.es/m/153228422922.1395.1746424054206154747@wrigleys.postgresql.org