Michael Paquier [Mon, 21 Oct 2019 02:17:13 +0000 (11:17 +0900)]
Fix parsing of integer values for connection parameters in libpq
Commit e7a2217 introduced stricter checks for integer values in
connection parameters for libpq. However, it failed to check trailing
whitespace correctly, while leading whitespace was already discarded by
the use of strtol(3). This fixes and refactors the parsing logic to
handle both cases consistently. Note that rejecting trailing whitespace
outright can easily break connection strings, like those in the ECPG
regression tests (these have allowed me to catch the parsing bug with
connect_timeout).
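For illustration, a minimal sketch of the kind of check involved (generic C,
not the actual libpq code; the function name here is hypothetical): strtol(3)
already skips leading whitespace, so the remaining work is to accept trailing
whitespace while rejecting any other trailing garbage:

    #include <ctype.h>
    #include <errno.h>
    #include <limits.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* Parse an integer parameter value, tolerating surrounding whitespace
     * but rejecting any other trailing garbage. */
    static bool
    parse_int_value(const char *value, int *result)
    {
        char   *end;
        long    numval;

        errno = 0;
        numval = strtol(value, &end, 10);   /* skips leading whitespace */
        if (errno != 0 || end == value)
            return false;                   /* overflow or no digits */
        while (*end != '\0')
        {
            if (!isspace((unsigned char) *end))
                return false;               /* trailing garbage */
            end++;
        }
        if (numval < INT_MIN || numval > INT_MAX)
            return false;
        *result = (int) numval;
        return true;
    }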
Author: Michael Paquier Reviewed-by: Lars Kanis
Discussion: https://postgr.es/m/a9b4cbd7-4ecb-06b2-ebd7-1739bbff3217@greiz-reinsdorf.de
Backpatch-through: 12
Peter Eisentraut [Sun, 20 Oct 2019 08:19:13 +0000 (10:19 +0200)]
Clean up MinGW def file generation
There were some leftovers from ancient ad-hoc ways to build on
Windows, prior to the standardization on MSVC and MinGW. We don't
need to build a lib$(NAME)ddll.def (debug build, as opposed to
lib$(NAME)dll.def) for MinGW, since nothing uses that. We also don't
need to build the regular .def file during distprep, since the MinGW
build environment is perfectly capable of creating that normally at
build time.
Peter Eisentraut [Sat, 19 Oct 2019 16:21:58 +0000 (18:21 +0200)]
Fix most -Wundef warnings
In some cases #if was used instead of #ifdef in an inconsistent style.
Cleaning this up also helps when analyzing cases like 38d8dce61fff09daae0edb6bcdd42b0c7f10ebcd where this makes a
difference.
There are no behavior changes here, but the change in pg_bswap.h would
prevent possible accidental misuse by third-party code.
Noah Misch [Sat, 19 Oct 2019 03:20:28 +0000 (20:20 -0700)]
For PowerPC instruction "addi", use constraint "b".
Without "b", a variant of the tas() code miscompiles on macOS 10.4.
This may also fix a compilation failure involving macOS 10.1. Today's
compilers have been allocating acceptable registers with or without this
change, but this future-proofs the code by precisely conveying the
acceptable registers. Back-patch to 9.4 (all supported versions).
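For context, a minimal sketch of what the constraint means in GCC extended
asm (illustrative only, not the actual s_lock.h tas() code): "b" restricts
the operand to a base register, i.e. any GPR except r0, which matters because
"addi" interprets r0 in the rA position as the constant zero rather than a
register:

    #if defined(__powerpc__) || defined(__powerpc64__)
    static inline long
    add_four(long base)
    {
        long result;

        /* "b" forces 'base' into a register other than r0, so the
         * immediate add really adds to the value and not to literal 0. */
        __asm__ __volatile__(
            "addi %0,%1,4"
            : "=r"(result)
            : "b"(base));
        return result;
    }
    #endif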
Michael Paquier [Sat, 19 Oct 2019 02:18:15 +0000 (11:18 +0900)]
Remove last traces of heap_open/close in the tree
Since pluggable storage has been introduced, those two routines have
been replaced by table_open/close, with some compatibility macros still
present to allow extensions to compile correctly with v12.
Some code paths were still using the old routines, so replace them.
Per discussion, the consensus is that it is better to also remove the
compatibility macros, so that nothing new uses the old routines.
Fujii Masao [Fri, 18 Oct 2019 13:32:18 +0000 (22:32 +0900)]
Fix failure of archive recovery with recovery_min_apply_delay enabled.
The recovery_min_apply_delay parameter is intended for use with streaming
replication deployments. However, the documentation clearly explains that
the parameter will be honored in all cases if it's specified, so it should
take effect even in archive recovery. But, previously, archive recovery
with recovery_min_apply_delay enabled always failed, and caused an assertion
failure if --enable-cassert was used.
The cause of this problem is that the ownership of recoveryWakeupLatch,
which recovery_min_apply_delay uses, was taken only when standby mode
was requested. So an unowned latch could be used in archive recovery,
which caused the failure.
This commit changes the recovery code so that the ownership of
recoveryWakeupLatch is taken even in archive recovery, which prevents
archive recovery with recovery_min_apply_delay from failing.
Back-patch to v9.4 where recovery_min_apply_delay was added.
Author: Fujii Masao Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAHGQGwEyD6HdZLfdWc+95g=VQFPR4zQL4n+yHxQgGEGjaSVheQ@mail.gmail.com
Fujii Masao [Fri, 18 Oct 2019 13:24:18 +0000 (22:24 +0900)]
Make crash recovery ignore recovery_min_apply_delay setting.
In v11 or before, this setting could not take effect in crash recovery
because it's specified in recovery.conf and crash recovery always
starts without recovery.conf. But commit 2dedf4d9a8 integrated
recovery.conf into postgresql.conf, which unexpectedly allowed
this setting to take effect even in crash recovery. This is definitely
not good behavior.
To fix the issue, this commit makes crash recovery always ignore
recovery_min_apply_delay setting.
Back-patch to v12 where the issue was added.
Author: Fujii Masao Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAHGQGwEyD6HdZLfdWc+95g=VQFPR4zQL4n+yHxQgGEGjaSVheQ@mail.gmail.com
Discussion: https://postgr.es/m/e445616d-023e-a268-8aa1-67b8b335340c@pgmasters.net
Alvaro Herrera [Fri, 18 Oct 2019 12:49:39 +0000 (14:49 +0200)]
Fix typo
Apparently while this code was being developed,
ReindexRelationConcurrently operated on multiple relations. The version
that was ultimately pushed doesn't, so this comment's use of plural is
inaccurate.
Michael Paquier [Fri, 18 Oct 2019 05:26:29 +0000 (14:26 +0900)]
Fix timeout handling in logical replication worker
The timestamp tracking the last moment a message was received in a
logical replication worker was initialized in each iteration of the loop
checking whether a message had been received, causing wal_receiver_timeout
to be ignored in basically any logical replication deployment. This also
broke the ping sent to the server when reaching half of
wal_receiver_timeout. This simply moves the initialization of the timestamp
out of the apply loop to the beginning of LogicalRepApplyLoop().
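The shape of the fix, as a generic hedged sketch (plain C with a stand-in
receive function, not the actual LogicalRepApplyLoop() code): the timestamp
is set once before the loop and refreshed only when data actually arrives,
so the elapsed time measured against the timeout can accumulate:

    #include <stdbool.h>
    #include <time.h>

    extern bool receive_message(void);       /* stand-in for the real receive path */

    void
    apply_loop(int timeout_seconds)
    {
        time_t  last_recv = time(NULL);      /* initialize once, before the loop */

        for (;;)
        {
            if (receive_message())
                last_recv = time(NULL);      /* got data: reset the clock */
            else if (time(NULL) - last_recv > timeout_seconds)
                break;                       /* timeout exceeded: bail out */
        }
    }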
Alvaro Herrera [Thu, 17 Oct 2019 13:06:06 +0000 (15:06 +0200)]
Fix minor bug in logical-replication walsender shutdown
The logical walsender should exit when it has caught up with sending WAL
during shutdown; but there was a rare corner case where it failed to,
because a race condition put it back to waiting for more WAL instead --
and since there wasn't any, it would not shut down immediately. It would
only continue the shutdown when wal_sender_timeout terminated the sleep,
which causes annoying waits during the shutdown procedure. Restructure the
code so that we no longer forget to set WalSndCaughtUp in that case.
Alvaro Herrera [Thu, 17 Oct 2019 07:58:01 +0000 (09:58 +0200)]
Fix parallel restore of FKs to partitioned tables
When an FK constraint is created, it needs the index on the referenced
table to exist and be valid. When doing parallel pg_restore and the
referenced table is partitioned, this condition can sometimes not be
met, because pg_dump didn't emit sufficient object dependencies to
ensure it; this means that parallel pg_restore would fail under certain
conditions. Fix by having pg_dump make the FK constraint object
dependent on the partition attachment objects for the constraint's
referenced index.
This has been broken since f56f8f8da6af, so backpatch to Postgres 12.
Thomas Munro [Thu, 17 Oct 2019 00:24:50 +0000 (13:24 +1300)]
When restoring GUCs in parallel workers, show an error context.
Otherwise it can be hard to see where an error is coming from, when
the parallel worker sets all the GUCs that it received from the
leader. Bug #15726. Back-patch to 9.5, where RestoreGUCState()
appeared.
Reported-by: Tiago Anastacio Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/15726-6d67e4fa14f027b3%40postgresql.org
Thomas Munro [Wed, 16 Oct 2019 20:59:21 +0000 (09:59 +1300)]
Fix bug that could try to freeze running multixacts.
Commit 801c2dc7, together with a related commit, made it possible for
vacuum to try to freeze a multixact that is still running. A check
prevented that, but it raised an error. Repair.
Back-patch all the way.
Author: Nathan Bossart, Jeremy Schneider Reported-by: Jeremy Schneider Reviewed-by: Jim Nasby, Thomas Munro
Discussion: https://postgr.es/m/DAFB8AFF-2F05-4E33-AD7F-FF8B0F760C17%40amazon.com
Alvaro Herrera [Wed, 16 Oct 2019 12:51:34 +0000 (14:51 +0200)]
Fix crash when reporting CREATE INDEX progress
A race condition can make us try to dereference a NULL pointer to the
PGPROC struct of a process that's already finished. That results in
crashes during REINDEX CONCURRENTLY and CREATE INDEX CONCURRENTLY.
This was introduced in ab0dfc961b6a, so backpatch to pg12.
Tomas Vondra [Wed, 16 Oct 2019 11:23:18 +0000 (13:23 +0200)]
Improve the check for pg_catalog.unknown data type in pg_upgrade
The pg_upgrade check for pg_catalog.unknown type when upgrading from 9.6
had a couple of issues with domains and composite types - it detected
even composite types unused in objects with storage. So for example this
was enough to trigger an unnecessary pg_upgrade failure:
CREATE TYPE unknown_composite AS (u pg_catalog.unknown)
On the other hand, this only happened with composite types directly on
the pg_catalog.unknown data type, but not with a domain. So this was not
detected
CREATE DOMAIN unknown_domain AS pg_catalog.unknown;
CREATE TYPE unknown_composite_2 AS (u unknown_domain);
unlike the first example. These false positives and inconsistencies are
unfortunate, but what's worse, we failed to detect objects using the
pg_catalog.unknown type through a domain. So we missed cases like this:
CREATE TABLE t (u unknown_composite_2);
The consequence is clusters broken after a pg_upgrade.
This fixes these false positives and false negatives by using the same
recursive CTE introduced by eaf900e842 for sql_identifier. Backpatch all
the way to 10, where use of the pg_catalog.unknown data type was restricted.
Author: Tomas Vondra Backpatch-to: 10-
Discussion: https://postgr.es/m/16045-673e8fa6b5ace196%40postgresql.org
Tomas Vondra [Wed, 16 Oct 2019 11:23:14 +0000 (13:23 +0200)]
Improve the check for pg_catalog.line data type in pg_upgrade
The pg_upgrade check for pg_catalog.line data type when upgrading from
9.3 had a couple of issues with domains and composite types. Firstly, it
triggered false positives for composite types unused in objects with
storage. This was enough to trigger an unnecessary pg_upgrade failure:
CREATE TYPE line_composite AS (l pg_catalog.line)
On the other hand, this only happened with composite types directly on
the pg_catalog.line data type, but not with a domain. So this was not
detected
CREATE DOMAIN line_domain AS pg_catalog.line;
CREATE TYPE line_composite_2 AS (l line_domain);
unlike the first example. These false positives and inconsistencies are
unfortunate, but what's worse, we failed to detect objects using the
pg_catalog.line data type through a domain. So we missed cases like this:
CREATE TABLE t (l line_composite_2);
The consequence is clusters broken after a pg_upgrade.
This fixes these false positives and false negatives by using the same
recursive CTE introduced by eaf900e842 for sql_identifier. 9.3 did not
support domains on composite types, but we can still have multi-level
composite types.
Backpatch all the way to 9.4, where the format for pg_catalog.line data
type changed.
Author: Tomas Vondra Backpatch-to: 9.4-
Discussion: https://postgr.es/m/16045-673e8fa6b5ace196%40postgresql.org
Andres Freund [Wed, 16 Oct 2019 09:37:34 +0000 (02:37 -0700)]
Replace alter_table.sql test usage of event triggers.
The test in 93765bd956b added an event trigger to ensure that the
tested table rewrites do not get optimized away (as happened in the
past). But doing so would require running the tests in isolation, as
otherwise the trigger might also fire in concurrent sessions, causing
test failures there.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/3328.1570740683@sss.pgh.pa.us
Backpatch: 12, just as 93765bd956b
Thomas Munro [Wed, 16 Oct 2019 03:51:40 +0000 (16:51 +1300)]
Use libc version as a collation version on glibc systems.
Using glibc's version string to detect potential collation definition
changes is not 100% reliable, but it's better than nothing. Currently
this affects only collations explicitly provided by "libc". More work
will be needed to handle the default collation.
Author: Thomas Munro, based on a suggestion from Christoph Berg Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/4b76c6d4-ae5e-0dc6-7d0d-b5c796a07e34%402ndquadrant.com
Michael Paquier [Wed, 16 Oct 2019 04:09:52 +0000 (13:09 +0900)]
Doc: Fix various inconsistencies
This fixes multiple areas of the documentation:
- COPY, for its past compatibility section.
- SET ROLE, mentioning INHERITS instead of INHERIT.
- PREPARE, referring to stmt_name, which is not present.
- Extension documentation about the name format used with upgrade scripts.
Backpatch down to 9.4 for the relevant parts.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/bf95233a-9943-b341-e2ff-a860c28af481@gmail.com
Backpatch-through: 9.4
Andres Freund [Tue, 15 Oct 2019 17:40:13 +0000 (10:40 -0700)]
Fix CLUSTER on expression indexes.
Since the introduction of different slot types, in 1a0586de3657, we
create a virtual slot in tuplesort_begin_cluster(). While that looks
right, it unfortunately doesn't actually work, as ExecStoreHeapTuple()
is used to store tuples in the slot. Unfortunately no regression tests
for CLUSTER on expression indexes existed so far.
Fix the slot type, and add bare bones tests for CLUSTER on expression
indexes.
Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20191011210320.GS10470@telsasoft.com
Backpatch: 12, like 1a0586de3657
Tomas Vondra [Mon, 14 Oct 2019 20:31:56 +0000 (22:31 +0200)]
Check for tables with sql_identifier during pg_upgrade
Commit 7c15cef86d changed the sql_identifier data type to be based on name
instead of varchar. Unfortunately, this breaks the on-disk format for this
data type. Luckily, that should be a very rare problem, as this data
type normally appears only in information_schema views, so the breakage
only affects user objects (tables, materialized views and indexes) that
use it. One way to end up in such a situation is to do CTAS with a query
on those system views.
There are two options to deal with this - we can either abort pg_upgrade
if there are user objects with sql_identifier columns, or we could
replace the sql_identifier type with varchar. Considering how rare the
issue is expected to be, and the complexity of replacing the data type
(e.g. in matviews), we've decided to go with the simple check.
The query is somewhat complex - the sql_identifier data type may be used
indirectly - through a domain, a composite type or both, possibly in
multiple levels. Detecting this requires a recursive CTE.
Backpatch to 12, where the sql_identifier definition changed.
Reported-by: Hans Buschmann
Author: Tomas Vondra Reviewed-by: Tom Lane Backpatch-to: 12
Discussion: https://postgr.es/m/16045-673e8fa6b5ace196%40postgresql.org
Michael Paquier [Sun, 13 Oct 2019 23:58:38 +0000 (08:58 +0900)]
Update test output of sepgsql for ALTER TABLE COLUMN DROP
1df5875 has changed the way dependencies are dropped for this command
with inheritance trees, which impacts sepgsql. This just updates the
regression test output to take care of the failures and adapt to the new
code.
Reported by buildfarm member rhinoceros.
Author: Michael Paquier Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20191013101331.GC1434@paquier.xyz
Backpatch-through: 12
Tom Lane [Sun, 13 Oct 2019 19:48:26 +0000 (15:48 -0400)]
In the postmaster, rely on the signal infrastructure to block signals.
POSIX sigaction(2) can be told to block a set of signals while a
signal handler executes. Make use of that instead of manually
blocking and unblocking signals in the postmaster's signal handlers.
This should save a few cycles, and it also prevents recursive
invocation of signal handlers when many signals arrive in close
succession. We have seen buildfarm failures that seem to be due to
postmaster stack overflow caused by such recursion (exacerbated by
a Linux PPC64 kernel bug).
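For reference, a generic sketch of that sigaction(2) facility (not the
postmaster's actual handler setup): the registering code fills sa_mask with
the signals to block for the duration of the handler, so the handler itself
no longer has to block and unblock them:

    #include <signal.h>
    #include <string.h>

    static void
    handle_signal(int signo)
    {
        (void) signo;
        /* handler body runs with the signals in sa_mask blocked */
    }

    static void
    install_handler(int signo)
    {
        struct sigaction act;

        memset(&act, 0, sizeof(act));
        act.sa_handler = handle_signal;
        sigfillset(&act.sa_mask);    /* block other signals while handling */
        act.sa_flags = SA_RESTART;
        sigaction(signo, &act, NULL);
    }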
This doesn't change anything about the way that it works on Windows.
Somebody might consider adjusting port/win32/signal.c to let it work
similarly, but I'm not in a position to do that.
For the moment, just apply to HEAD. Possibly we should consider
back-patching this, but it'd be good to let it age awhile first.
Michael Paquier [Sun, 13 Oct 2019 08:51:55 +0000 (17:51 +0900)]
Fix dependency handling of column drop with partitioned tables
When dropping a column on a partitioned table which has one or more
partitioned indexes, the operation was failing, as the dependencies between
the partitioned indexes and the dropped column were not getting removed in
a way consistent with the columns involved across all the relations of the
inheritance tree.
This commit refactors the code executing the column drop so that all the
columns to remove across the inheritance tree are gathered first, and
dropped all at the end. This way, we let the dependency machinery sort
out by itself the deletion of all the columns along with the partitioned
indexes across the partition tree.
This issue has been introduced by 1d92a0c, so backpatch down to
REL_12_STABLE.
Author: Amit Langote, Michael Paquier Reviewed-by: Álvaro Herrera, Ashutosh Sharma
Discussion: https://postgr.es/m/CA+HiwqE9kuBsZ3b5pob2-cvE8ofzPWs-og+g8bKKGnu6b4-yTQ@mail.gmail.com
Backpatch-through: 12
Peter Eisentraut [Sat, 12 Oct 2019 19:17:34 +0000 (21:17 +0200)]
Fix use of term "verifier"
Within the context of SCRAM, "verifier" has a specific meaning in the
protocol, per RFCs. The existing code used "verifier" differently, to
mean whatever is or would be stored in pg_authid.rolpassword.
Fix this by using the term "secret" for this, following RFC 5803.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/be397b06-6e4b-ba71-c7fb-54cae84a7e18%402ndquadrant.com
Noah Misch [Sat, 12 Oct 2019 07:21:47 +0000 (00:21 -0700)]
AIX: Stop adding option -qsrcmsg.
With xlc v16.1.0, it causes internal compiler errors. With xlc versions
not exhibiting that bug, removing -qsrcmsg merely changes the compiler
error reporting format. Back-patch to 9.4 (all supported versions).
Fujii Masao [Fri, 11 Oct 2019 06:47:59 +0000 (15:47 +0900)]
Make crash recovery ignore restore_command and recovery_end_command settings.
In v11 or before, those settings could not take effect in crash recovery
because they are specified in recovery.conf and crash recovery always
starts without recovery.conf. But commit 2dedf4d9a8 integrated
recovery.conf into postgresql.conf, which unexpectedly allowed
those settings to take effect even in crash recovery. This is definitely
not good behavior.
To fix the issue, this commit makes crash recovery always ignore
restore_command and recovery_end_command settings.
Back-patch to v12 where the issue was added.
Author: Fujii Masao Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/e445616d-023e-a268-8aa1-67b8b335340c@pgmasters.net
Tom Lane [Thu, 10 Oct 2019 18:24:56 +0000 (14:24 -0400)]
Put back pqsignal() as an exported libpq symbol.
This reverts commit f7ab80285. Per discussion, we can't remove an
exported symbol without a SONAME bump, which we don't want to do.
In particular that breaks usage of current libpq.so with pre-9.3
versions of psql etc, which need libpq to export pqsignal().
As noted in that commit message, exporting the symbol from libpgport.a
won't work reliably; but actually we don't want to export src/port's
implementation anyway. Any pre-9.3 client is going to be expecting the
definition that pqsignal() had before 9.3, which was that it didn't
set SA_RESTART for SIGALRM. Hence, put back pqsignal() in a separate
source file in src/interfaces/libpq, and give it the old semantics.
Andres Freund [Thu, 10 Oct 2019 05:00:50 +0000 (22:00 -0700)]
Fix table rewrites that include a column without a default.
In c2fe139c201c I made ATRewriteTable() use tuple slots. Unfortunately
I did not notice that columns can be added in a rewrite that do not
have a default, when another column is added/altered requiring one.
Initialize columns to NULL again, and add tests.
Bug: #16038 Reported-By: anonymous
Author: Andres Freund
Discussion: https://postgr.es/m/16038-5c974541f2bf6749@postgresql.org
Backpatch: 12, where the bug was introduced in c2fe139c201c
Michael Paquier [Wed, 9 Oct 2019 04:30:43 +0000 (13:30 +0900)]
Flush logical mapping files with fd opened for read/write at checkpoint
The file descriptor was opened read-only to fsync a regular file,
which would cause EBADFD errors on some platforms.
This is similar to the recent fix done by a586cc4b (which was broken by
me with 82a5649), except that I noticed this issue while monitoring the
backend code for similar mistakes. Backpatch to 9.4, as this has been an
issue ever since logical decoding was introduced in b89e151.
Author: Michael Paquier Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20191006045548.GA14532@paquier.xyz
Backpatch-through: 9.4
Bruce Momjian [Wed, 9 Oct 2019 02:16:48 +0000 (22:16 -0400)]
pg_upgrade: clarify the database names in error files
Previously, the "Database:" label in the error file was unclear if the
label was a status report or the problem was _in_ the database. New
text is "In database:".
Bruce Momjian [Wed, 9 Oct 2019 01:49:08 +0000 (21:49 -0400)]
doc: improve docs so config value default units are clearer
Previously, our docs would say "Specifies the number of milliseconds"
but it wasn't clear that "milliseconds" was merely the default unit.
New text says "Specifies duration (defaults to milliseconds)", which is
clearer.
Remove some code for old unsupported versions of MSVC
As of d9dd406fe281d22d5238d3c26a7182543c711e74, we require MSVC 2013,
which means _MSC_VER >= 1800. This means that conditionals about
older versions of _MSC_VER can be removed or simplified.
In some cases the previous code also handled MinGW (where _MSC_VER is
not defined at all) incorrectly, such as in pg_ctl.c and win32_port.h,
leading to some compiler warnings. This should now be handled better.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Michael Paquier [Tue, 8 Oct 2019 02:46:30 +0000 (11:46 +0900)]
Improve test coverage of pg_rewind
This includes new TAP tests for a couple of areas not covered yet and
some improvements:
- More coverage for --no-ensure-shutdown, the enforced recovery step and
--dry-run.
- Failures with option combinations and basic option checks.
- Removal of a duplicated comment.
Author: Alexey Kondratov, Michael Paquier
Discussion: https://postgr.es/m/20191007010651.GD14532@paquier.xyz
Tom Lane [Mon, 7 Oct 2019 16:39:09 +0000 (12:39 -0400)]
Check for too many postmaster children before spawning a bgworker.
The postmaster's code path for spawning a bgworker neglected to check
whether we already have the max number of live child processes. That's
a bit hard to hit, since it would necessarily be a transient condition;
but if we do, AssignPostmasterChildSlot() fails causing a postmaster
crash, as seen in a report from Bhargav Kamineni.
To fix, invoke canAcceptConnections() in the bgworker code path, as we
do in the other code paths that spawn children. Since we don't want
the same pmState tests in this case, add a child-process-type parameter
to canAcceptConnections() so that it can know what to do.
Back-patch to 9.5. In principle the same hazard exists in 9.4, but the
code is enough different that this patch wouldn't quite fix it there.
Given the tiny usage of bgworkers in that branch it doesn't seem worth
creating a variant patch for it.
Since 63bd0db12199c5df043e1dea0f2b574f622b3a4c we don't use tzname
anymore, so we don't need to check for it. Instead, just keep the
part of PGAC_STRUCT_TIMEZONE that we need, which is the check for
struct tm.tm_zone.
Tom Lane [Mon, 7 Oct 2019 14:39:07 +0000 (10:39 -0400)]
Hack pg_ctl to report postmaster's exit status.
Temporarily change pg_ctl so that the postmaster's exit status will
be printed (to the postmaster's stdout). This is to help identify
the cause of intermittent "postmaster exited during a parallel
transaction" failures seen on a couple of buildfarm members. This
change degrades pg_ctl's functionality in a couple of minor ways,
so we'll revert it once we've obtained the desired info.
Michael Paquier [Mon, 7 Oct 2019 00:07:22 +0000 (09:07 +0900)]
Improve handling and coverage of --no-ensure-shutdown in pg_rewind
This includes a couple of changes around the new behavior of pg_rewind
which enforces recovery to happen once on a cluster not shut down
cleanly:
- Some comments and documentation improvements.
- Shut down the cluster to rewind with immediate mode in all the tests;
this allows checking the forced recovery behavior that is wanted as the
new default.
- Use -F for the forced recovery step, so that postgres does not use
fsync. That fsync was useless anyway, as a final sync is done once the
tool is done.
Author: Michael Paquier Reviewed-by: Alexey Kondratov
Discussion: https://postgr.es/m/20191004083721.GA1829@paquier.xyz
Tom Lane [Sun, 6 Oct 2019 18:14:45 +0000 (14:14 -0400)]
Doc: improve docs about pg_statistic_ext_data.
Commit aa087ec64 was a bit over-hasty about the doc changes needed
while splitting pg_statistic_ext_data off from pg_statistic_ext.
It duplicated one para and inserted another in what seems to me
to be the wrong section. Fix that up, and in passing do some minor
copy-editing.
Tom Lane [Sun, 6 Oct 2019 16:06:30 +0000 (12:06 -0400)]
Avoid trying to release a List's initial allocation via repalloc().
Commit 1cff1b95a included some code that supposed it could repalloc()
a memory chunk to a smaller size without risk of the chunk moving.
That was not a great idea, because it depended on undocumented behavior
of AllocSetRealloc, which commit c477f3e44 changed thereby breaking it.
(Not to mention that this code ought to work with other memory context
types, which might not work the same...) So get rid of the repalloc
calls, and instead just wipe the now-unused ListCell array and/or tell
Valgrind it's NOACCESS, as if we'd freed it.
In cases where the initial list allocation had been quite large, this
could represent an annoying waste of space. In principle we could
ameliorate that by allocating the initial cell array separately when
it exceeds some threshold. But that would complicate new_list() which
is hot code, and the returns would materialize only in narrow cases.
On balance I don't think it'd be worth it.
Tomas Vondra [Sat, 5 Oct 2019 18:49:39 +0000 (20:49 +0200)]
Change MemoryContextMemAllocated to return Size
Commit f2369bc610 switched most of the memory accounting from int64 to
Size, but it forgot to change the MemoryContextMemAllocated return type.
So this fixes that omission.
Noah Misch [Sat, 5 Oct 2019 17:05:05 +0000 (10:05 -0700)]
Report test_atomic_ops() failures consistently, via macros.
This prints the unexpected value in more failure cases, and it removes
forty-eight hand-maintained error messages. Back-patch to 9.5, which
introduced these tests.
Reviewed (in an earlier version) by Andres Freund.
Tom Lane [Sat, 5 Oct 2019 16:26:55 +0000 (12:26 -0400)]
Avoid use of wildcard in pg_waldump's .gitignore.
This would be all right, maybe, if it didn't also match a file that
definitely should not be ignored. We don't add rmgrs so often that
manual maintenance of this file list is impractical, so just write
out the list.
(I find the equivalent wildcard use in the Makefile pretty lazy and
unsafe as well, but will leave that alone until it actually causes a
problem.)
One of the upsert related tests is unstable (sometimes even hanging
until isolationtester's step timeout is reached). Based on preliminary
analysis that might be a problem outside of just that test, but not
really related to EPQ and triggers. Disable for now, to get the
buildfarm greener again.
Discussion: https://postgr.es/m/20191004222437.45qmglpto43pd3jb@alap3.anarazel.de
Backpatch: 9.6-, just like c8841199509.
Andres Freund [Fri, 4 Oct 2019 20:56:04 +0000 (13:56 -0700)]
Add isolation tests for the combination of EPQ and triggers.
As evidenced by bug #16036 this area is woefully under-tested. Add
fairly extensive tests for the combination.
Backpatch back to 9.6 - before that isolationtester was not capable
enough. While we don't backpatch tests all the time, future fixes to
trigger.c would potentially look different enough in 12+ from the
earlier branches that introducing bugs during backpatching is more
likely than normal. Also, it's just a crucial and undertested area of
the code.
Author: Andres Freund
Discussion: https://postgr.es/m/16036-28184c90d952fb7f@postgresql.org
Backpatch: 9.6-, the earliest these tests work
Andres Freund [Fri, 4 Oct 2019 18:59:34 +0000 (11:59 -0700)]
Fix crash caused by EPQ happening with a before update trigger present.
When ExecBRUpdateTriggers()'s GetTupleForTrigger() follows an EPQ
chain the former needs to run the result tuple through the junkfilter
again, and update the slot containing the new version of the tuple to
contain that new version. The input tuple may already be in the
junkfilter's output slot, which used to be OK - we don't need the
previous version anymore. Unfortunately ff11e7f4b9ae started to use
ExecCopySlot() to update newslot, and ExecCopySlot() doesn't support
copying a slot into itself, leading to a slot in a corrupt
state, which then can cause crashes or other symptoms.
Fix this by skipping the ExecCopySlot() when copying into itself.
While we could have easily made ExecCopySlot() handle that case, it
seems better to add an assert forbidding doing so instead. As the goal
of copying might be to make the contents of one slot independent from
another, it seems failure prone to handle doing so silently.
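In outline, the fix amounts to something like the following (a hedged
sketch; the variable names are illustrative, not the exact trigger.c code):

    /* Don't copy a slot onto itself; only copy when they differ. */
    if (epqslot != newslot)
        ExecCopySlot(newslot, epqslot);

    /* ... and ExecCopySlot() itself now asserts against self-copy: */
    Assert(srcslot != dstslot);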
A follow-up commit will add tests for the obviously under-covered
combination of EPQ and triggers. Done as a separate commit as it might
make sense to backpatch them further than this bug.
Also clean up some confusingly named slot variables in
ExecBRDeleteTriggers() and ExecBRUpdateTriggers().
Bug: #16036 Reported-By: Антон Власов
Author: Andres Freund
Discussion: https://postgr.es/m/16036-28184c90d952fb7f@postgresql.org
Backpatch: 12-, where ff11e7f4b9ae was merged
Andres Freund [Fri, 4 Oct 2019 20:08:51 +0000 (13:08 -0700)]
Use a fd opened for read/write when syncing slots during startup, take 2.
Cribbing from dfbaed45975:
Some operating systems, including the reporter's Windows, return EBADFD
or similar when fsync() is invoked on an O_RDONLY file descriptor.
Unfortunately RestoreSlotFromDisk() does exactly that, which causes
failures after restarts in at least some scenarios.
If you hit the bug the error message will be something like
ERROR: could not fsync file "pg_replslot/$name/state": Bad file descriptor
Simply use O_RDWR instead of O_RDONLY when opening the relevant file
descriptor to fix the bug.
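The pattern, as a generic hedged sketch (not the exact RestoreSlotFromDisk()
code):

    #include <fcntl.h>
    #include <unistd.h>

    /* Open read/write even though we only read the file, so that fsync()
     * is accepted on the descriptor on every platform. */
    static int
    open_and_sync(const char *path)
    {
        int     fd = open(path, O_RDWR);    /* not O_RDONLY */

        if (fd < 0)
            return -1;
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        return fd;
    }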
Unfortunately this fix was undone in 82a5649fb9db. Re-apply, and add a
comment.
Bug: 16039 Reported-By: Hans Buschmann
Author: Andres Freund
Discussion: https://postgr.es/m/16039-196fc97cc05e141c@postgresql.org
Backpatch: 12-, as 82a5649fb9db
Andrew Dunstan [Fri, 4 Oct 2019 19:34:40 +0000 (15:34 -0400)]
Handle spaces in OpenSSL install location for MSVC
First, make sure that the .exe name is quoted when trying to get the
version number. Also, don't quote the lib name for using in the project
files if it's already been quoted. This second change applies to all
libraries, not just OpenSSL.
This has clearly been broken forever, so backpatch to all live branches.
Robert Haas [Fri, 4 Oct 2019 18:24:46 +0000 (14:24 -0400)]
Rename some toasting functions based on whether they are heap-specific.
The old names for the attribute-detoasting functions included
the word "heap," which seems outdated now that the heap is only one of
potentially many table access methods.
On the other hand, toast_insert_or_update and toast_delete are
heap-specific, so rename them by adding "heap_" as a prefix.
Not all of the work of making the TOAST system fully accessible to AMs
other than the heap is done yet, but there seems to be little harm in
getting this renaming out of the way now. Commit 8b94dab06617ef80a0901ab103ebd8754427ef5a already divided up the
functions among various files partially according to whether it was
intended that they should be heap-specific or AM-agnostic, so this is
just clarifying the division contemplated by that commit.
Patch by me, reviewed and tested by Prabhat Sabu, Thomas Munro,
Andres Freund, and Álvaro Herrera.
Tom Lane [Fri, 4 Oct 2019 14:34:21 +0000 (10:34 -0400)]
Fix bitshiftright()'s zero-padding some more.
Commit 5ac0d9360 failed to entirely fix bitshiftright's habit of
leaving one-bits in the pad space that should be all zeroes,
because in a moment of sheer brain fade I'd concluded that only
the code path used for not-a-multiple-of-8 shift distances needed
to be fixed. Of course, a multiple-of-8 shift distance can also
cause the problem, so we need to forcibly zero the extra bits
in both cases.
Per bug #16037 from Alexander Lakhin. As before, back-patch to all
supported branches.
Tomas Vondra [Fri, 4 Oct 2019 14:10:56 +0000 (16:10 +0200)]
Use Size instead of int64 to track allocated memory
Commit 5dd7fc1519 added block-level memory accounting, but used an int64
variable to track the amount of allocated memory. That is incorrect,
because we have Size for exactly these purposes, but it was mostly harmless
until c477f3e449, which changed how we handle repalloc() when downsizing a
chunk. Previously we ignored those cases and just kept using the original
chunk, but now we need to update the accounting, and the code was doing this:
context->mem_allocated += blksize - oldblksize;
Both blksize and oldblksize are Size (so unsigned), which means the
subtraction underflows, producing a very high positive value. On 64-bit
platforms (where Size has the same size as mem_allocated) this happens to
work because the result wraps to the right value, but on (some) 32-bit
platforms this fails.
This fixes two things - it changes mem_allocated (and related variables)
to Size, and it splits the update into two separate steps, to prevent any
underflow.
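To make the failure mode concrete, a simplified sketch (using size_t in
place of PostgreSQL's Size): when the new block is smaller, the
single-expression update wraps around, while the two-step form stays
correct:

    #include <stddef.h>

    typedef size_t Size;    /* Size is an unsigned type, like size_t */

    static void
    update_accounting(Size *mem_allocated, Size oldblksize, Size blksize)
    {
        /* Wrong: if blksize < oldblksize, the unsigned subtraction wraps
         * around to a huge positive value before being added:
         *
         *     *mem_allocated += blksize - oldblksize;
         */

        /* Safe: apply the change as two separate steps. */
        *mem_allocated -= oldblksize;
        *mem_allocated += blksize;
    }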
Michael Paquier [Fri, 4 Oct 2019 07:18:29 +0000 (16:18 +0900)]
Fix issues in pg_rewind with --no-ensure-shutdown/--write-recovery-conf
This fixes two issues with recent features added in pg_rewind:
- --dry-run should do nothing on the target directory, but 927474c
forgot to consider that for --write-recovery-conf.
- --no-ensure-shutdown was not actually working. There is no test
coverage for this option yet, but a subsequent patch will add that.
Michael Paquier [Fri, 4 Oct 2019 00:14:51 +0000 (09:14 +0900)]
Fix --dry-run mode of pg_rewind
Even if --dry-run mode was specified, the control file was getting
updated, preventing follow-up runs of pg_rewind from working properly on
the target data folder. The problem originated in the refactoring done
by ce6afc6.
Tom Lane [Thu, 3 Oct 2019 21:34:25 +0000 (17:34 -0400)]
Avoid unnecessary out-of-memory errors during encoding conversion.
Encoding conversion uses the very simplistic rule that the output
can't be more than 4X longer than the input, and palloc's a buffer
of that size. This results in failure to convert any string longer
than 1/4 GB, which is becoming an annoying limitation.
As a band-aid to improve matters, allow the allocated output buffer
size to exceed 1GB. We still insist that the final result fit into
MaxAllocSize (1GB), though. Perhaps it'd be safe to relax that
restriction, but it'd require close analysis of all callers, which
is daunting (not least because external modules might call these
functions). For the moment, this should allow a 2X to 4X improvement
in the longest string we can convert, which is a useful gain in
return for quite a simple patch.
Also, once we have successfully converted a long string, repalloc
the output down to the actual string length, returning the excess
to the malloc pool. This seems worth doing since we can usually
expect to give back several MB if we take this path at all.
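Schematically, the approach looks like this (a hedged sketch, not the actual
conversion code; do_conversion() is a hypothetical helper standing in for the
real conversion call, and the surrounding variable declarations are assumed):

    /* Worst case: every input byte grows by MAX_CONVERSION_GROWTH (4). */
    Size    dstlen = (Size) srclen * MAX_CONVERSION_GROWTH + 1;
    char   *dst = palloc_extended(dstlen, MCXT_ALLOC_HUGE);  /* may exceed 1GB */
    int     resultlen = do_conversion(src, srclen, dst);      /* hypothetical */

    /* The final result must still fit in an ordinary allocation. */
    if ((Size) resultlen >= MaxAllocSize)
        ereport(ERROR, (errmsg("out of memory")));

    /* Give the unused tail of the buffer back to the allocator. */
    dst = repalloc(dst, resultlen + 1);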
This still leaves much to be desired, most notably that the assumption
that MAX_CONVERSION_GROWTH == 4 is very fragile, and yet we have no
guard code verifying that the output buffer isn't overrun. Fixing
that would require significant changes in the encoding conversion
APIs, so it'll have to wait for some other day.
The present patch seems safely back-patchable, so patch all supported
branches.
Tom Lane [Thu, 3 Oct 2019 17:56:26 +0000 (13:56 -0400)]
Allow repalloc() to give back space when a large chunk is downsized.
Up to now, if you resized a large (>8K) palloc chunk down to a smaller
size, aset.c made no attempt to return any space to the malloc pool.
That's unpleasant if a really large allocation is resized to a
significantly smaller size. I think no such cases existed when this
code was designed, and I'm not sure whether they're common even yet,
but an upcoming fix to encoding conversion will certainly create such
cases. Therefore, fix AllocSetRealloc so that it gives realloc()
a chance to do something with the block. This doesn't noticeably
increase complexity, we mostly just have to change the order in which
the cases are considered.
Andrew Gierth [Thu, 3 Oct 2019 09:54:52 +0000 (10:54 +0100)]
Selectively include window frames in expression walks/mutates.
query_tree_walker and query_tree_mutator were skipping the
windowClause of the query, without regard for the fact that the
startOffset and endOffset in a WindowClause node are expression trees
that need to be processed. This was an oversight in commit ec4be2ee6
from 2010 which added the expression fields; the main symptom is that
function parameters in window frame clauses don't work in inlined
functions.
Fix (as conservatively as possible since this needs to not break
existing out-of-tree callers) and add tests.
Backpatch all the way, since this has been broken since 9.0.
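Conceptually, the walker now needs to visit those offsets like any other
expression; a hedged sketch of the relevant loop (simplified, omitting the
flag handling in the real nodeFuncs.c code):

    /* Visit the frame offset expressions of each window clause. */
    foreach(lc, query->windowClause)
    {
        WindowClause *wc = lfirst_node(WindowClause, lc);

        if (walker(wc->startOffset, context))
            return true;
        if (walker(wc->endOffset, context))
            return true;
    }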
Per report from Alastair McKinley; fix by me with kibitzing and review
from Tom Lane.
Amit Kapila [Tue, 1 Oct 2019 04:20:26 +0000 (09:50 +0530)]
pgbench: add --partitions and --partition-method options.
These new options allow users to partition the pgbench_accounts table by
specifying the number of partitions and partitioning method. The values
allowed for partitioning method are range and hash.
This feature allows users to measure the overhead of partitioning if any.
Author: Fabien COELHO Reviewed-by: Amit Kapila, Amit Langote, Dilip Kumar, Asif Rehman, and
Alvaro Herrera
Discussion: https://postgr.es/m/alpine.DEB.2.21.1907230826190.7008@lancre
Michael Paquier [Wed, 2 Oct 2019 06:53:07 +0000 (15:53 +0900)]
Remove temporary WAL and history files at the end of archive recovery
cbc55da has reworked the order of some actions at the end of archive
recovery. Unfortunately this overlooked the fact that the startup
process needs to remove RECOVERYXLOG (for a temporary WAL segment newly
recovered from the archives) and RECOVERYHISTORY (for a temporary history
file) at this step, leaving the files around even after recovery ended.
Michael Paquier [Wed, 2 Oct 2019 00:55:27 +0000 (09:55 +0900)]
Revert hooks for session start and end, take two
The location of the session end hook had been chosen so as to make it
possible for modules to run their own transactions; however, anything
trying to access a subsystem that had already gone through
before_shmem_exit() would cause issues, limiting the pluggability of the
hook.
Tomas Vondra [Tue, 1 Oct 2019 14:53:04 +0000 (16:53 +0200)]
Blind attempt to fix pglz_maximum_compressed_size
Commit 11a078cf87 triggered failures on big-endian machines, and the
only plausible place for an issue seems to be that TOAST_COMPRESS_SIZE
calls VARSIZE instead of VARSIZE_ANY. So try fixing that blindly.
Tomas Vondra [Tue, 1 Oct 2019 12:13:44 +0000 (14:13 +0200)]
Optimize partial TOAST decompression
Commit 4d0e994eed added support for partial TOAST decompression, so the
decompression is interrupted after producing the requested prefix. For
prefixes and slices near the beginning of the entry, this may save a lot
of decompression work.
That however only deals with decompression - the whole compressed entry
was still fetched and re-assembled, even though only a small fraction of
it was actually needed. This commit improves that by computing how
much compressed data may be needed to decompress the requested prefix,
and then fetching only that part.
We always need to fetch a bit more compressed data than the requested
(uncompressed) prefix, because the prefix may not be compressible at all
and pglz itself adds a bit of overhead. That means this optimization is
most effective when the requested prefix is much smaller than the whole
compressed entry.
Author: Binguo Bao Reviewed-by: Andrey Borodin, Tomas Vondra, Paul Ramsey
Discussion: https://www.postgresql.org/message-id/flat/CAL-OGkthU9Gs7TZchf5OWaL-Gsi=hXqufTxKv9qpNG73d5na_g@mail.gmail.com
Michael Paquier [Tue, 1 Oct 2019 06:19:32 +0000 (15:19 +0900)]
Fix test_session_hooks with parallel workers
Several buildfarm machines, like crake and thorntail, have been complaining
about the new module test_session_hooks being unstable. The issue
was that the module was trying to log some session start and end
activity for parallel workers, which makes little sense as they don't
support DML, so just prevent this pattern from happening in the module.
This could be reproduced by enforcing force_parallel_mode=regress, which
is the value used by some of the buildfarm members.
Michael Paquier [Tue, 1 Oct 2019 03:15:25 +0000 (12:15 +0900)]
Add hooks for session start and session end, take two
These hooks can be used in loadable modules. A simple test module is
included.
The first attempt was done with cd8ce3a, but we lacked handling for
NO_INSTALLCHECK in the MSVC scripts (a problem solved afterwards by
431f1599), so the buildfarm got angry. This also fixes a couple of
issues noticed upon review compared to the first attempt, so the code
has changed slightly, resulting in a simpler test module.
Author: Fabrízio de Royes Mello, Yugo Nagata Reviewed-by: Andrew Dunstan, Michael Paquier, Aleksandr Parfenov
Discussion: https://postgr.es/m/20170720204733.40f2b7eb.nagata@sraoss.co.jp
Discussion: https://postgr.es/m/20190823042602.GB5275@paquier.xyz
Michael Paquier [Tue, 1 Oct 2019 01:56:27 +0000 (10:56 +0900)]
Fix confusing error caused by connection parameter channel_binding
When using a client compiled without channel binding support (linking to
OpenSSL 1.0.1 or older) to connect to a server which supports channel
binding (linking to OpenSSL 1.0.2 or newer), libpq would generate a
confusing error message with channel_binding=require for an SSL
connection, where the server sends back SCRAM-SHA-256-PLUS:
"channel binding is required, but server did not offer an authentication
method that supports channel binding."
This is confusing because the server did send a SASL mechanism able to
support channel binding, but libpq was not able to detect that
properly.
The situation can be summarized as follows for the case described in
the previous paragraph, for the SASL mechanisms used with the various
modes of channel_binding:
1) Client supports channel binding.
1-1) channel_binding = disable => OK, with SCRAM-SHA-256.
1-2) channel_binding = prefer => OK, with SCRAM-SHA-256-PLUS.
1-3) channel_binding = require => OK, with SCRAM-SHA-256-PLUS.
2) Client does not support channel binding.
2-1) channel_binding = disable => OK, with SCRAM-SHA-256.
2-2) channel_binding = prefer => OK, with SCRAM-SHA-256.
2-3) channel_binding = require => failure with new error message,
instead of the confusing one.
This commit updates case 2-3 to generate a better error message. Note
that the SSL TAP tests are not impacted as it is not possible to test
with mixed versions of OpenSSL for the backend and libpq.
Reported-by: Tom Lane
Author: Michael Paquier Reviewed-by: Jeff Davis, Tom Lane
Discussion: https://postgr.es/m/24857.1569775891@sss.pgh.pa.us
Tomas Vondra [Tue, 1 Oct 2019 01:13:39 +0000 (03:13 +0200)]
Add transparent block-level memory accounting
Adds accounting of memory allocated in a memory context. Compared to
various ad hoc solutions, the main advantage is that the accounting is
transparent and does not require direct control over allocations (this
matters for use cases where the allocations happen in user code, like
for example aggregate states allocated in transition functions).
To reduce overhead, the accounting happens at the block level (not for
individual chunks) and only the context immediately owning the block is
updated. When inquiring about amount of memory allocated in a context,
we have to recursively walk all children contexts.
This "lazy" accounting works well for cases with relatively small number
of contexts in the relevant subtree and/or with infrequent inquiries.
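As a hedged sketch of the inquiry side (field names approximate, not the
exact MemoryContextMemAllocated() code): because only the context owning a
block updates its own counter, reporting the total for a subtree means
recursing over the children:

    /* Sum block-level accounting over a context and all of its children. */
    static Size
    mem_allocated_recurse(MemoryContext context)
    {
        Size            total = context->mem_allocated;
        MemoryContext   child;

        for (child = context->firstchild; child != NULL; child = child->nextchild)
            total += mem_allocated_recurse(child);

        return total;
    }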
Author: Jeff Davis Reviewed-by: Tomas Vondra, Melanie Plageman, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/flat/027a129b8525601c6a680d27ce3a7172dab61aab.camel@j-davis.com
Andres Freund [Mon, 30 Sep 2019 23:06:16 +0000 (16:06 -0700)]
Don't generate EEOP_*_FETCHSOME operations for slots known to be virtual.
That avoids unnecessary work during both interpreted execution and JIT
compiled expression evaluation. Both benefit from fewer expression
steps needing to be processed, and for interpreted execution there now is
a fastpath dedicated to just fetching a value from a virtual
a fastpath dedicated to just fetching a value from a virtual
slot. That's e.g. beneficial for hashjoins over nodes that perform
projections, as the hashed columns are currently fetched individually.
Author: Soumyadeep Chakraborty, Andres Freund
Discussion: https://postgr.es/m/CAE-ML+9OKSN71+mHtfMD-L24oDp8dGTfaVjDU6U+j+FNAW5kRQ@mail.gmail.com
Tom Lane [Mon, 30 Sep 2019 21:14:00 +0000 (17:14 -0400)]
Rely on plan_cache_mode to force generic plans in partition_prune test.
This file had a very weird mix of tests that did "set plan_cache_mode =
force_generic_plan" to get a generic plan, and tests that relied on
using five dummy executions of a prepared statement. Converting them
all to rely on plan_cache_mode is more consistent and shaves off a
noticeable fraction of the test script's runtime.
Tom Lane [Mon, 30 Sep 2019 18:31:12 +0000 (14:31 -0400)]
Doc: improve PREPARE documentation, cross-referencing to plan_cache_mode.
The behavior described in the PREPARE man page applies only for the
default plan_cache_mode setting, so explain that properly. Rewrite
some of the text while I'm here. Per suggestion from Bruce.
This commit adds a mention that the order of columns specified when creating
multi-column most-common-value statistics is insignificant, and tries to
simplify the examples.
Michael Paquier [Mon, 30 Sep 2019 04:11:31 +0000 (13:11 +0900)]
Fix SSL test for libpq connection parameter channel_binding
When compiling Postgres with OpenSSL 1.0.1 or older versions, SCRAM's
channel binding cannot be supported as X509_get_signature_nid() is
needed, which causes a regression test with channel_binding='require' to
fail as the server cannot publish SCRAM-SHA-256-PLUS as a SASL mechanism
over an SSL connection.
Fix the issue by using a method similar to c3d41cc, making the test
result conditional. The test passes if X509_get_signature_nid() is
present, and when missing we test for a connection failure. Testing a
connection failure is more useful than skipping the test as we should
fail the connection if channel binding is required by the client but the
server does not support it.
Reported-by: Tom Lane, Michael Paquier
Author: Michael Paquier
Discussion: https://postgr.es/m/20190927024457.GA8485@paquier.xyz
Discussion: https://postgr.es/m/24857.1569775891@sss.pgh.pa.us
Make crash recovery ignore recovery target settings.
In v11 or before, recovery target settings could not take effect in
crash recovery because they are specified in recovery.conf and
crash recovery always starts without recovery.conf. But commit 2dedf4d9a8
integrated recovery.conf into postgresql.conf, which unexpectedly allowed
recovery target settings to take effect even in crash recovery. This is
definitely not good behavior.
To fix the issue, this commit makes crash recovery always ignore
recovery target settings.
Back-patch to v12.
Author: Peter Eisentraut Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/e445616d-023e-a268-8aa1-67b8b335340c@pgmasters.net
Andres Freund [Sun, 29 Sep 2019 23:24:32 +0000 (16:24 -0700)]
jit: Re-allow JIT compilation of execGrouping.c hashtable comparisons.
In the course of 5567d12ce03, 356687bd8 and 317ffdfeaac, I changed
BuildTupleHashTable[Ext]'s call to ExecBuildGroupingEqual to not pass
in the parent node, but NULL. Which in turn prevents the tuple
equality comparator from being JIT compiled. While that fixes
bug #15486, it is not actually necessary after all of the above commits,
as we don't re-build the comparator when using the new
BuildTupleHashTableExt() interface (as the contents of the hashtable
are reset, but the TupleHashTable itself is not).
Therefore re-allow jit compilation for callers that use
BuildTupleHashTableExt with a separate context for "metadata" and
content.
As in the previous commit, there's ongoing work to make this easier to
test to prevent such regressions in the future, but that
infrastructure is not going to be backpatchable.
The performance impact of not JIT compiling hashtable equality
comparators can be substantial e.g. for aggregation queries that
aggregate a lot of input rows to few output rows (when there are a lot
of output groups, there will be fewer comparisons).
Author: Andres Freund
Discussion: https://postgr.es/m/20190927072053.njf6prdl3vb7y7qb@alap3.anarazel.de
Backpatch: 11, just as 5567d12ce03
Andres Freund [Sun, 29 Sep 2019 22:24:54 +0000 (15:24 -0700)]
Fix determination when slot types for upper executor nodes are fixed.
For many queries, the fact that the tuple descriptor from the lower
node was not taken into account when determining whether the type of a
slot is fixed led to tuple deforming for such upper nodes not being
JIT accelerated.
There is ongoing work to enable writing regression tests for related
behavior (including a patch that would have detected this
regression), by optionally showing such details in EXPLAIN. But as it
seems unlikely that that will be suitable for stable branches, just
merge the fix for now.
While it's fairly close to the 12 release window, the fact that 11
continues to perform JITed tuple deforming in these cases, that
there's still cases where we do so in 12, and the fact that the
performance regression can be sizable, weigh in favor of fixing it
now.
Author: Andres Freund
Discussion: https://postgr.es/m/20190927072053.njf6prdl3vb7y7qb@alap3.anarazel.de
Backpatch: 12-, where 675af5c01e297 was merged.