Robert Haas [Fri, 6 Apr 2018 15:42:28 +0000 (11:42 -0400)]
Enforce child constraints during COPY TO a partitioned table.
The previous coding inadvertently checked the constraints for the
partitioned table rather than the target partition, which could
lead to data in a partition that fails to satisfy some constraint
on that partition. This problem seems to date back to when
table partitioning was introduced; prior to that, there was only
one target table for a COPY, so the problem didn't occur, and the
code just didn't get updated.
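As a rough sketch of the case being fixed (table and constraint names are
hypothetical):
CREATE TABLE measurement (city_id int, logdate date, peak int)
    PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2018 PARTITION OF measurement
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
ALTER TABLE measurement_y2018 ADD CONSTRAINT peak_positive CHECK (peak > 0);
-- With the fix, loading a row with a negative "peak" through the parent is
-- rejected, because the target partition's own CHECK constraint is what
-- gets verified, not just the parent's constraints:
COPY measurement FROM '/path/to/data.csv' WITH (FORMAT csv);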
Etsuro Fujita, reviewed by Amit Langote and Ashutosh Bapat
Robert Haas [Fri, 6 Apr 2018 15:29:43 +0000 (11:29 -0400)]
Refactor PgFdwModifyState creation/destruction into separate functions.
Etsuro Fujita. The larger patch series of which this is a part has
been reviewed by Amit Langote, David Fetter, Maksim Milyutin,
Álvaro Herrera, Stephen Frost, and me.
Magnus Hagander [Thu, 5 Apr 2018 19:57:26 +0000 (21:57 +0200)]
Allow on-line enabling and disabling of data checksums
This makes it possible to turn checksums on in a live cluster, without
the previous need for dump/reload or logical replication (and also to
turn them off again).
Enabling checksums starts a background process, in the form of a
launcher/worker combination, that goes through the entire database and
recalculates checksums on each and every page. Only when all pages have
been checksummed are checksums fully enabled in the cluster. Any failure
of the process reverts the cluster to checksums off, and the process has
to be started again.
This adds a new WAL record that indicates the state of checksums, so
the process works across replicated clusters.
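As a rough usage sketch (the enable/disable function names below are
assumed from this patch and should be treated as illustrative; the
read-only data_checksums setting already existed):
SELECT pg_enable_data_checksums();   -- start the background checksumming process
SHOW data_checksums;                 -- report the current state
SELECT pg_disable_data_checksums();  -- turn checksums back off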
Authors: Magnus Hagander and Daniel Gustafsson
Review: Tomas Vondra, Michael Banck, Heikki Linnakangas, Andrey Borodin
Peter Eisentraut [Thu, 29 Mar 2018 16:00:51 +0000 (12:00 -0400)]
PL/pgSQL: Add support for SET TRANSACTION
A normal SQL command run inside PL/pgSQL acquires a snapshot, but SET
TRANSACTION no longer works once a snapshot has been set, so we have to
handle this case separately.
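A minimal sketch of the now-working usage (the procedure name is
illustrative):
CREATE PROCEDURE set_transaction_test()
LANGUAGE plpgsql
AS $$
BEGIN
    COMMIT;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- further statements run under the new isolation level ...
END;
$$;
CALL set_transaction_test();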
Reviewed-by: Alexander Korotkov <a.korotkov@postgrespro.ru>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
If we are not going to save the plan, then we need to unset expr->plan
after we are done, also in error cases. Otherwise, we get a dangling
pointer next time around.
This is not the ideal solution. It would be better if we could convince
SPI not to associate a cached plan with a resource owner, and then we
could just save the plan in all cases. But that would require bigger
surgery.
Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
Magnus Hagander [Thu, 5 Apr 2018 16:59:32 +0000 (18:59 +0200)]
Allow background workers to bypass datallowconn
This adds a "flags" argument to the BackgroundWorkerInitializeConnection()
and BackgroundWorkerInitializeConnectionByOid() functions. For now only one
flag, BGWORKER_BYPASS_ALLOWCONN, is defined; it allows the worker to ignore
datallowconn.
Add an error-tolerant conversion function with web-like syntax for search
queries; it makes it easier to constrain the search engine through an
interface that is familiar to users.
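An illustrative example (this entry appears to describe the
websearch_to_tsquery() function; the query string is made up):
SELECT websearch_to_tsquery('english', '"sad cat" or fat rat');
-- quoted phrases, "or" and "-" are understood, and malformed input does
-- not raise an error; it is simply treated as ordinary words.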
Bump catalog version
Authors: Victor Drobny and Dmitry Ivanov, with editorialization by me
Reviewed by: Aleksander Alekseev, Tomas Vondra, Thomas Munro, Aleksandr Parfenov
Discussion: https://www.postgresql.org/message-id/flat/fe931111ff7e9ad79196486ada79e268@postgrespro.ru
Commit 857f9c36 bumped the B-tree metapage version, with the upgrade
performed "on the fly" when needed. However, some asserts fired when an
old-version metapage had been cached in rel->rd_amcache. Even though the
new metadata fields are never used from rel->rd_amcache, that still needs
to be fixed. This patch upgrades the metapage while caching it, filling
the unavailable fields with their default values. contrib/pageinspect is
also patched to handle non-upgraded metapages in the same way.
Simon Riggs [Thu, 5 Apr 2018 08:54:07 +0000 (09:54 +0100)]
MERGE post-commit review
Review comments from Andres Freund
* Consolidate code into AfterTriggerGetTransitionTable()
* Rename nodeMerge.c to execMerge.c
* Rename nodeMerge.h to execMerge.h
* Move MERGE handling in ExecInitModifyTable()
into a execMerge.c ExecInitMerge()
* Move mt_merge_subcommands flags into execMerge.h
* Rename opt_and_condition to opt_merge_when_and_condition
* Wordsmith various comments
Andrew Gierth [Thu, 5 Apr 2018 03:05:40 +0000 (04:05 +0100)]
Install errcodes.txt for use by extensions.
Maintainers of out-of-tree PLs typically need access to the set of
error codes. To avoid the need to duplicate that information in some
form in PL source trees, provide errcodes.txt as part of a server
installation.
Thomas Munro, based on a suggestion from Andrew Gierth
Discussion: https://postgr.es/m/87woykk7mu.fsf%40news-spur.riddles.org.uk
Stephen Frost [Wed, 4 Apr 2018 19:26:51 +0000 (15:26 -0400)]
Rewrite pg_dump TAP tests
This reworks how the tests to run are defined. Instead of having to
define all runs for all tests, we define those tests which should pass
(generally using one of the defined broad hashes), add in any which
should be specific for this test, and exclude any specific runs that
shouldn't pass for this test. This ends up removing some 4k+ lines
(more than half the file) but, more importantly, greatly simplifies the
way runs-to-be-tested are defined.
As discussed in the updated comments, for example, take the test which
does CREATE TABLE test_table. That CREATE TABLE should show up in all
'full' runs of pg_dump, except those cases where 'test_table' is
excluded, of course, and that's exactly how the test gets defined now
(modulo a few other related cases, like where we dump only that table,
or we dump the schema it's in, or we exclude the schema it's in).
Next, we no longer expect every run to be listed for every test. If a
run is listed in 'like' (directly or through a hash) then it's a 'like',
unless it's listed in 'unlike' in which case it's an 'unlike'. If it
isn't listed in either, then it's considered an 'unlike' automatically.
Lastly, this changes the code to no longer use like/unlike but rather to
use 'ok()' with 'diag()' which allows much more control over what gets
spit out to the screen. Gone are the days of the entire dump being sent
to the console, now you'll just get a couple of lines for each failing
test which say the test that failed and the run that it failed on.
This covers both the pg_dump TAP tests in src/bin/pg_dump and those in
src/test/modules/test_pg_dump.
Tom Lane [Wed, 4 Apr 2018 18:26:04 +0000 (14:26 -0400)]
Improve FSM management for BRIN indexes.
BRIN indexes like to propagate additions of free space into the upper pages
of their free space maps as soon as the new space is known, even when it's
just on one individual index page. Previously this required calling
FreeSpaceMapVacuum, which is quite an expensive thing if the map is large.
Use the FreeSpaceMapVacuumRange function recently added by commit c79f6df75
to reduce the amount of work done for this purpose.
Fix a couple of places that neglected to do the upper-page vacuuming at all
after recording new free space. If the policy is to be that BRIN should do
that, it should do it everywhere.
Do RecordPageWithFreeSpace unconditionally in brin_page_cleanup, and do
FreeSpaceMapVacuum unconditionally in brin_vacuum_scan. Because of the
FSM's imprecise storage of free space, the old complications here seldom
bought anything, they just slowed things down. This approach also
provides a predictable path for FSM corruption to be repaired.
Remove premature RecordPageWithFreeSpace call in brin_getinsertbuffer
where it's about to return an extended page to the caller. The caller
should do that, instead, after it's inserted its new tuple. Fix the
one caller that forgot to do so.
Simplify logic in brin_doupdate's same-page-update case by postponing
brin_initialize_empty_new_buffer to after the critical section; I see
little point in doing it before.
Avoid repeat calls of RelationGetNumberOfBlocks in brin_vacuum_scan.
Avoid duplicate BufferGetBlockNumber and BufferGetPage calls in
a couple of places where we already had the right values.
Move a BRIN_elog debug logging call out of a critical section; that's
pretty unsafe and I don't think it buys us anything to not wait till
after the critical section.
Move the "*extended = false" step in brin_getinsertbuffer into the
routine's main loop. There's no actual bug there, since the loop can't
iterate with *extended still true, but it doesn't seem very future-proof
as coded; and it's certainly not documented as a loop invariant.
This is all from follow-on investigation inspired by commit c79f6df75.
Skip full index scan during cleanup of B-tree indexes when possible
Vacuum of an index consists of two stages: multiple (zero or more)
ambulkdelete calls and one amvacuumcleanup call. When the workload on a
particular table is append-only, autovacuum isn't intended to touch that
table. However, a user may run vacuum manually in order to fill the
visibility map and get the benefit of index-only scans. In that case,
ambulkdelete isn't called for the table's indexes (because no heap tuples
were deleted); only amvacuumcleanup is called, and it performs a full index
scan for two purposes: putting recyclable pages into the free space map and
updating index statistics.
This patch allows btvacuumcleanup to skip the full index scan when two
conditions are satisfied: no pages need to be put into the free space map,
and the index statistics aren't stale. To check the first condition, we
store the oldest btpo_xact in the metapage; when it precedes
RecentGlobalXmin, there are some recyclable pages. To check the second
condition, we store the number of heap tuples observed during the previous
full index scan by cleanup. If the fraction of newly inserted tuples is
less than vacuum_cleanup_index_scale_factor, the statistics aren't
considered stale. vacuum_cleanup_index_scale_factor can be defined both as
a reloption and as a GUC (providing the default).
This patch bumps the B-tree metapage version. The metapage is upgraded
"on the fly": during VACUUM it is rewritten with the new version. No special
handling in pg_upgrade is required.
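For example, the threshold can be set per index through the reloption, or
globally through the GUC (the index name and values here are illustrative):
ALTER INDEX my_btree_idx SET (vacuum_cleanup_index_scale_factor = 0.2);
SET vacuum_cleanup_index_scale_factor = 0.05;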
Author: Masahiko Sawada, Alexander Korotkov
Review by: Peter Geoghegan, Kyotaro Horiguchi, Alexander Korotkov, Yura Sokolov
Discussion: https://www.postgresql.org/message-id/flat/CAD21AoAX+d2oD_nrd9O2YkpzHaFr=uQeGr9s1rKC3O4ENc568g@mail.gmail.com
Tom Lane [Wed, 4 Apr 2018 15:51:36 +0000 (11:51 -0400)]
Remove less-portable-than-believed test case.
In commit 331b2369c I added a test to see what jsonb_plperl would do
with a qr{} result. Turns out the answer is Perl version dependent.
That fact doesn't bother me particularly, but coping with multiple
result possibilities is way more work than this test seems worth.
So remove it again.
Tom Lane [Wed, 4 Apr 2018 15:28:33 +0000 (11:28 -0400)]
Fix platform and Perl-version dependencies in new jsonb_plperl code.
Testing SvTYPE() directly is more fraught with problems than one might
think, because depending on context Perl might be storing a scalar value
in one of several forms, eg both numeric and string values. This resulted
in Perl-version-dependent buildfarm test failures. Instead use the SvTYPE
test only to distinguish non-scalar cases (AV, HV, NULL). Disambiguate
scalars by testing SvIOK, SvNOK, then SvPOK. This creates a preference
order for how we will resolve cases where the value is available in more
than one form, which seems fine to me.
Furthermore, because we're now dealing directly with a "double" value
in the SvNOK case, we can get rid of an inadequate and unportable
string-comparison test for infinities, and use isinf() instead.
(We do need some additional #include and "-lm" infrastructure to use
that in a contrib module, per prior experiences.)
In passing, prevent the regression test results from depending on DROP
CASCADE order; I've not seen that malfunction, but it's trouble waiting
to happen.
Fix the new ARMv8 CRC code for short and unaligned input.
The code before the main loop, which handles the possible 1-7 unaligned
bytes at the beginning of the input, was broken and read past the end of
the input if the input was very short.
Magnus Hagander [Wed, 4 Apr 2018 09:35:48 +0000 (11:35 +0200)]
Fix pg_basebackup checksum tests
Hopefully fix the fact that these checks are unstable, by introducing
the corruption in a separate table from pg_class, and also explicitly
disable autovacuum on those tables. Also make sure PostgreSQL is
stopped while the corruption is introduced to avoid possible caching
effects.
ARMv8 introduced special CPU instructions for calculating CRC-32C. Use
them, when available, for speed.
Like with the similar Intel CRC instructions, several factors affect
whether the instructions can be used. The compiler intrinsics for them must
be supported by the compiler, and the instructions must be supported by the
target architecture. If the compilation target architecture does not
support the instructions, but adding "-march=armv8-a+crc" makes them
available, then we compile the code with a runtime check to determine if
the host we're running on supports them or not.
For the runtime check, use the glibc getauxval() function. Unfortunately,
that's not very portable, but I couldn't find any more portable way to do
it. If getauxval() is not available, the CRC instructions will still be
used if the target architecture supports them without any additional
compiler flags, but the runtime check will not be available.
Original patch by Yuqi Gu, heavily modified by me. Reviewed by Andres
Freund, Thomas Munro.
Trigger cloning to partitions was supposed to occur for user-visible
triggers only, but during development the protection that prevented it
from occurring to internal triggers was lost. Reinstate it, as well as
add a test case to ensure internal triggers (in the tested case,
triggers implementing a deferred unique constraint) are not cloned.
Without the code fix, the partitions in the test end up with different
numbers of triggers, which is clearly wrong ...
The previous coding passed the wrong table's tuple descriptor, which
accidentally failed to fail only because no existing test case exercises
a foreign key in which the referenced attributes lie further to the
right than the referencing attributes.
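A hedged illustration of the previously untested column layout (table and
column names are hypothetical):
CREATE TABLE referenced_tbl (filler1 int, filler2 int, id int PRIMARY KEY);
CREATE TABLE referencing_tbl (id int REFERENCES referenced_tbl (id));
-- the referencing column is attribute 1 of its table while the referenced
-- column is attribute 3, i.e. the referenced attribute lies further to the
-- right than the referencing one.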
Tom Lane [Tue, 3 Apr 2018 20:26:05 +0000 (16:26 -0400)]
Prevent accidental linking of system-supplied copies of libpq.so etc.
We were being careless in some places about the order of -L switches in
link command lines, such that -L switches referring to external directories
could come before those referring to directories within the build tree.
This made it possible to accidentally link a system-supplied library, for
example /usr/lib/libpq.so, in place of the one built in the build tree.
Hilarity ensued, the more so the older the system-supplied library is.
To fix, break LDFLAGS into two parts, a sub-variable LDFLAGS_INTERNAL
and the main LDFLAGS variable, both of which are "recursively expanded"
so that they can be incrementally adjusted by different makefiles.
Establish a policy that -L switches for directories in the build tree
must always be added to LDFLAGS_INTERNAL, while -L switches for external
directories must always be added to LDFLAGS. This is sufficient to
ensure a safe search order. For simplicity, we typically also put -l
switches for the respective libraries into those same variables.
(Traditional make usage would have us put -l switches into LIBS, but
cleaning that up is a project for another day, as there's no clear
need for it.)
This turns out to also require separating SHLIB_LINK into two variables,
SHLIB_LINK and SHLIB_LINK_INTERNAL, with a similar rule about which
switches go into which variable. And likewise for PG_LIBS.
Although this change might appear to affect external users of pgxs.mk,
I think it doesn't; they shouldn't have any need to touch the _INTERNAL
variables.
In passing, tweak src/common/Makefile so that the value of CPPFLAGS
recorded in pg_config lacks "-DFRONTEND" and the recorded value of
LDFLAGS lacks "-L../../../src/common". Both of those things are
mistakes, apparently introduced during prior code rearrangements,
as old versions of pg_config don't print them. In general we don't
want anything that's specific to the src/common subdirectory to
appear in those outputs.
This is certainly a bug fix, but in view of the lack of field
complaints, I'm unsure whether it's worth the risk of back-patching.
In any case it seems wise to see what the buildfarm makes of it first.
Tom Lane [Tue, 3 Apr 2018 18:47:26 +0000 (14:47 -0400)]
Suppress compiler warning in new jsonb_plperl code.
Some compilers are evidently pickier than others about whether Perl's
I32 typedef should be considered equivalent to int. Dodge the problem
by using a separate variable; the prior coding was a bit confusing anyway.
Per buildfarm. Note this does nothing to fix the test failures due to
SV_to_JsonbValue not covering enough variable types.
The prefix operator, together with SP-GiST indexes, can be used as an
alternative to LIKE 'word%' queries, and it does not have the string/prefix
length limitation that B-tree has.
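For example (assuming this entry refers to the ^@ text prefix operator and
its starts_with() function; the "words" table is hypothetical):
SELECT 'alphabet' ^@ 'alph';             -- true
SELECT starts_with('alphabet', 'alph');  -- the function form
CREATE INDEX ON words USING spgist (w);
SELECT * FROM words WHERE w ^@ 'alph';   -- can use the SP-GiST index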
Bump catalog version
Author: Ildus Kurbangaliev, with some editorialization by me
Review by: Arthur Zakirov, Alexander Korotkov, and me
Discussion: https://www.postgresql.org/message-id/flat/20180202180327.222b04b3@wp.localdomain
Add a new contrib module jsonb_plperl that provides a transform between
jsonb and PL/Perl. jsonb values are converted to appropriate Perl types
such as arrays and hashes, and vice versa.
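A minimal usage sketch (the function body is illustrative):
CREATE EXTENSION jsonb_plperl CASCADE;
CREATE FUNCTION roundtrip(val jsonb) RETURNS jsonb
LANGUAGE plperl
TRANSFORM FOR TYPE jsonb
AS $$
return $_[0];
$$;
SELECT roundtrip('{"a": [1, 2, 3]}');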
Author: Anthony Bykov <a.bykov@postgrespro.ru>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Aleksander Alekseev <a.alekseev@postgrespro.ru>
Reviewed-by: Nikita Glukhov <n.gluhov@postgrespro.ru>
Magnus Hagander [Tue, 3 Apr 2018 11:47:16 +0000 (13:47 +0200)]
Validate page level checksums in base backups
When base backups are run over the replication protocol (for example
using pg_basebackup), verify the checksums of all data blocks if
checksums are enabled. If checksum failures are encountered, log them
as warnings but don't abort the backup.
This becomes the default behaviour in pg_basebackup (provided checksums
are enabled on the server), so add a switch (-k) to disable the checks
if necessary.
Author: Michael Banck
Reviewed-By: Magnus Hagander, David Steele
Discussion: https://postgr.es/m/20180228180856.GE13784@nighthawk.caipicrew.dd-dns.de
Simon Riggs [Tue, 3 Apr 2018 08:28:16 +0000 (09:28 +0100)]
MERGE SQL Command following SQL:2016
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows,
a task that would otherwise require multiple PL statements.
e.g.
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
DO NOTHING;
MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.
MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.
MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.
Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.
This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.
Various issues reported via sqlsmith by Andreas Seltenreich
Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Simon Riggs [Mon, 2 Apr 2018 20:04:35 +0000 (21:04 +0100)]
MERGE SQL Command following SQL:2016
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows,
a task that would otherwise require multiple PL statements.
e.g.
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
DO NOTHING;
MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.
MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.
MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.
Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.
This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.
Various issues reported via sqlsmith by Andreas Seltenreich
Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Tom Lane [Mon, 2 Apr 2018 17:46:13 +0000 (13:46 -0400)]
Fix some dubious WAL-parsing code.
Coverity complained about possible buffer overrun in two places added by
commit 1eb6d6527, and AFAICS it's reasonable to worry: even granting that
the WAL originator properly truncated the commit GID to GIDSIZE, we should
not really bet our lives on that having the same value as it does in the
current build. Hence, use strlcpy() not strcpy(), and adjust the pointer
advancement logic to be sure we skip over the whole source string even if
strlcpy() truncated it.
Tom Lane [Mon, 2 Apr 2018 16:36:21 +0000 (12:36 -0400)]
Remove contrib/jsonb_plpython's tests for infinity and NaN conversions.
These tests don't work reliably with pre-2.6 Python versions, since
Python code like float('inf') was not guaranteed to work before that,
even granting an IEEE-compliant platform.
Since there's no explicit handling of these cases in jsonb_plpython,
we're not adding any real code coverage by testing them, and thus
it doesn't seem to make sense to go to any great lengths to work
around the test instability.
Tom Lane [Mon, 2 Apr 2018 16:26:09 +0000 (12:26 -0400)]
Teach configure --with-python to report the Python version number.
We already do this for Perl and some other interesting tools, so it
seems sensible to do it for Python as well, especially since the
sub-release number is never determinable from other configure output
and even the major/minor numbers may not be clear without excavation
in config.log.
While at it, get rid of the code's assumption that both the major and
minor numbers contain exactly one digit. That will foreseeably be
broken by Python 3.10 in perhaps four or five years. That's far enough
out that we probably don't need to back-patch this.
Make be-secure-common.c more consistent for future SSL implementations
Recent commit 8a3d9425 introduced be-secure-common.c, which is aimed at
holding backend-side APIs that can be used by any SSL implementation; its
purpose is similar to fe-secure-common.c for the frontend-side APIs.
However, it forgot to include check_ssl_key_file_permissions() in the
move, which causes a double dependency between be-secure.c and
be-secure-openssl.c.
Refactor the code in a more logical way. This also highlights an API
that future SSL implementations can use for checking permissions on SSL
key files.
Robert Haas [Mon, 2 Apr 2018 14:51:50 +0000 (10:51 -0400)]
postgres_fdw: Push down partition-wise aggregation.
Since commit 7012b132d07c2b4ea15b0b3cb1ea9f3278801d98, postgres_fdw
has been able to push down the toplevel aggregation operation to the
remote server. Commit e2f1eb0ee30d144628ab523432320f174a2c8966 made
it possible to break down the toplevel aggregation into one
aggregate per partition. This commit lets postgres_fdw push down
aggregation in that case just as it does at the top level.
In order to make this work, this commit adds an additional argument
to the GetForeignUpperPaths FDW API. A matching argument is added
to the signature for create_upper_paths_hook. Third-party code using
either of these will need to be updated.
Also adjust create_foreignscan_plan() so that it picks up the correct
set of relids in this case.
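For example (relation and column names are hypothetical), with partition-wise
aggregation enabled the per-partition aggregates over foreign partitions can
now be shipped to the remote server:
SET enable_partitionwise_aggregate = on;
EXPLAIN (VERBOSE, COSTS OFF)
SELECT part_key, count(*) FROM parted_foreign_tbl GROUP BY part_key;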
Jeevan Chalke, reviewed by Ashutosh Bapat and by me and with some
adjustments by me. The larger patch series of which this patch is a
part was also reviewed and tested by Antonin Houska, Rajkumar
Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal
Legrand, and Rafia Sabih.
Andres Freund [Sun, 1 Apr 2018 03:26:47 +0000 (20:26 -0700)]
Fix non-portable use of round().
round() is from C99. Use rint() instead. There are behavioral
differences between round() and rint(), but they should not matter to
the Bloom filter optimal_k() function. We already assume POSIX
behavior for rint(), so there is no question of rint() not using
"rounds towards nearest" as its rounding mode.
Andres Freund [Sun, 1 Apr 2018 02:52:01 +0000 (19:52 -0700)]
Add amcheck verification of heap relations belonging to btree indexes.
Add a new, optional, capability to bt_index_check() and
bt_index_parent_check(): check that each heap tuple that should have an
index entry does in fact have one. The extra checking is performed at
the end of the existing nbtree checks.
This is implemented by using a Bloom filter data structure. The
implementation performs set membership tests within a callback (the same
type of callback that each index AM registers for CREATE INDEX). The
Bloom filter is populated during the initial index verification scan.
Reusing the CREATE INDEX infrastructure allows the new verification
option to automatically benefit from the heap consistency checks that
CREATE INDEX already performs. CREATE INDEX does thorough sanity
checking of HOT chains, so the new check actually manages to detect
problems in heap-only tuples.
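For example (the index name is illustrative; the boolean argument is the
new heap-checking option added here):
SELECT bt_index_check('my_table_pkey', true);
SELECT bt_index_parent_check('my_table_pkey', true);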
Author: Peter Geoghegan
Reviewed-By: Pavan Deolasee, Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
Andres Freund [Sun, 1 Apr 2018 00:49:41 +0000 (17:49 -0700)]
Add Bloom filter implementation.
A Bloom filter is a space-efficient, probabilistic data structure that
can be used to test set membership. Callers will sometimes incur false
positives, but never false negatives. The rate of false positives is a
function of the total number of elements and the amount of memory
available for the Bloom filter.
Two classic applications of Bloom filters are cache filtering, and data
synchronization testing. Any user of Bloom filters must accept the
possibility of false positives as a cost worth paying for the benefit in
space efficiency.
This commit adds a test harness extension module, test_bloomfilter. It
can be used to get a sense of how the Bloom filter implementation
performs under varying conditions.
This is infrastructure for the upcoming "heapallindexed" amcheck patch,
which verifies the consistency of a heap relation against one of its
indexes.
Author: Peter Geoghegan
Reviewed-By: Andrey Borodin, Michael Paquier, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
Tom Lane [Sat, 31 Mar 2018 20:28:52 +0000 (16:28 -0400)]
Fix assorted issues in parallel vacuumdb.
Avoid storing the result of PQsocket() in a pgsocket variable; it's
declared as int, and the no-socket test is properly written as "x < 0"
not "x == PGINVALID_SOCKET". This accidentally had no bad effect
because we never got to init_slot() with a bad connection, but it's
still wrong.
Actually, it seems like we should avoid storing the result for a long
period at all. The function's not so expensive that it's worth avoiding,
and the existing coding technique here would fail if anyone tried to
PQreset the connection during the life of the program. Hence, just
re-call PQsocket every time we construct a select(2) mask.
Speaking of select(), GetIdleSlot imagined that it could compute the
select mask once and continue to use it over multiple calls to
select_loop(), which is pretty bogus since that would stomp on the
mask on return. This could only matter if the function's outer loop
iterated more than once, which is unlikely (it'd take some connection
receiving data, but not enough to complete its command). But if it
did happen, we'd acquire "tunnel vision" and stop watching the other
connections for query termination, with the effect of losing parallelism.
Another way in which GetIdleSlot could lose parallelism is that once
PQisBusy returns false, it would lock in on that connection and do
PQgetResult until that returns NULL; in some cases that could result
in blocking. (Perhaps this can never happen in vacuumdb due to the
limited set of commands that it can issue, but I'm not quite sure
of that, and even if true today it's not a future-proof assumption.)
Refactor the code to do that properly, so that it risks blocking in
PQgetResult only in cases where we need to wait anyway.
Another loss-of-parallelism problem, which *is* easily demonstrable,
is that any setup queries issued during prepare_vacuum_command() were
always issued on the last-to-be-created connection, whether or not
that was idle. Long-running operations on that connection thus
prevented issuance of additional operations on the other ones, except
in the limited cases where no preparatory query was needed. Instead,
wait till we've identified a free connection and use that one.
Also, avoid core dump due to undersized malloc request in the case
that no tables are identified to be vacuumed.
The bogus no-socket test was noted by CharSyam, the other problems
identified in my own code review. Back-patch to 9.5 where parallel
vacuumdb was introduced.
So far as I can find, NI_MAXHOST isn't actually required anywhere by
POSIX. Nonetheless, commit 9a895462d supposed that it could rely on
having that symbol without any ceremony at all. We do have a hack
for providing it if the platform doesn't, in getaddrinfo.h, so fix
the problem by #including that file. Per buildfarm.
Andres Freund [Sat, 31 Mar 2018 00:24:07 +0000 (17:24 -0700)]
Remove PARTIAL_LINKING build mode.
In 9956ddc19164b02dc1925fb389a1af77472eba5e, ten years ago, the
current objfile.txt based linking model was introduced. It's time to
retire the old SUBSYS.o based model.
This is primarily pertinent because the bitcode files for LLVM-based
inlining are not produced when using PARTIAL_LINKING, and it does not
seem worth fixing PARTIAL_LINKING to support that.
Author: Andres Freund
Discussion: https://postgr.es/m/20180121204356.d5oeu34jetqhmdv2@alap3.anarazel.de
Tatsuo Ishii [Sat, 31 Mar 2018 00:26:43 +0000 (09:26 +0900)]
Fix bug with view locking code.
LockViewRecurse() opens the view relation with heap_open() and passes
it to get_view_query() to obtain the view information. It then
immediately closes the relation and uses the returned view information
by calling LockViewRecurse_walker(). Since get_view_query() returns a
pointer into the relcache, the relcache entry must be kept until
LockViewRecurse_walker() returns; otherwise the pointer could reference
garbage memory.
The fix is to move the heap_close() call to after LockViewRecurse_walker().
Problem reported by Tom Lane (the buildfarm is unhappy, especially prion,
since it enables the -DRELCACHE_FORCE_RELEASE cpp flag); fix by me.
Andres Freund [Fri, 30 Mar 2018 23:56:41 +0000 (16:56 -0700)]
Add SKIP_LOCKED option to RangeVarGetRelidExtended().
This will be used for VACUUM (SKIP LOCKED).
Author: Nathan Bossart
Reviewed-By: Michael Paquier and Andres Freund
Discussion: https://postgr.es/m/20180306005349.b65whmvj7z6hbe2y@alap3.anarazel.de
Andres Freund [Fri, 30 Mar 2018 23:33:42 +0000 (16:33 -0700)]
Combine options for RangeVarGetRelidExtended() into a flags argument.
A followup patch will add a SKIP_LOCKED option. To avoid introducing
ever more arguments, breaking existing callers each time, introduce a
flags argument. This'll no doubt break a few external users...
Also change the MISSING_OK behaviour so a DEBUG1 debug message is
emitted when a relation is not found.
Author: Nathan Bossart
Reviewed-By: Michael Paquier and Andres Freund
Discussion: https://postgr.es/m/20180306005349.b65whmvj7z6hbe2y@alap3.anarazel.de
Fujii Masao [Fri, 30 Mar 2018 22:51:22 +0000 (07:51 +0900)]
Enhance pg_stat_wal_receiver view to display host and port of sender server.
Previously there was no way on the standby side to find out the host and port
of the sender server that the walreceiver was currently connected to when
multiple hosts and ports were specified in primary_conninfo. For that purpose,
this patch adds sender_host and sender_port columns to the pg_stat_wal_receiver
view. They report the host and port that the active replication connection
currently uses.
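For example, on a standby:
SELECT sender_host, sender_port FROM pg_stat_wal_receiver;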
Bump catalog version.
Author: Haribabu Kommi
Reviewed-by: Michael Paquier and me
Discussion: https://postgr.es/m/CAJrrPGcV_aq8=cdqkFhVDJKEnDQ70yRTTdY9RODzMnXNrCz2Ow@mail.gmail.com
Tom Lane [Fri, 30 Mar 2018 22:14:51 +0000 (18:14 -0400)]
Fix bogus provolatile/proparallel markings on a few built-in functions.
Richard Yen reported that pg_upgrade failed if the target cluster had
force_parallel_mode = on, because binary_upgrade_create_empty_extension()
is marked parallel restricted, allowing it to be executed in parallel
mode, which complains because it tries to acquire an XID.
In general, no function that might try to modify database data should
be considered parallel safe or restricted, since execution of it might
force XID acquisition. We found several other examples of this mistake.
Furthermore, functions that execute user-supplied SQL queries or query
fragments, or pull data from user-supplied cursors, had better be marked
both volatile and parallel unsafe, because we don't know what the supplied
query or cursor might try to do. There were several tsquery and XML
functions that had the wrong proparallel marking for this, and some of
them were even mislabeled as to volatility.
All these bugs are old, dating back to 9.6 for the proparallel mistakes
and much further for the provolatile mistakes. We can't force a
catversion bump in the back branches, but we can at least ensure that
installations initdb'd in future have the right values.
Tom Lane [Fri, 30 Mar 2018 20:18:18 +0000 (16:18 -0400)]
Ensure that WAL pages skipped by a forced WAL switch are zero-filled.
In the previous coding, skipped pages were mostly zeroes, but they still
had valid WAL page headers. That makes them very much less compressible
than an unbroken string of zeroes would be --- about 10X worse for bzip2
compression, for instance. We don't need those headers, so tweak the logic
so that we zero them out.
Tom Lane [Fri, 30 Mar 2018 19:11:39 +0000 (15:11 -0400)]
Remove obsolete SLRU wrapping and warnings from predicate.c.
When SSI was developed, slru.c was limited to segment files with names in
the range 0000-FFFF. This didn't allow enough space for predicate.c to
store every possible XID when spilling old transactions to disk, so it
would wrap around sooner and print warnings. Since commits 638cf09e and 73c986ad increased the number of segment files slru.c could manage, that
behavior is unnecessary. Therefore remove that code.
Also remove the macro OldSerXidSegment, which has been unused since 4cd3fb6e.
Tom Lane [Fri, 30 Mar 2018 17:53:33 +0000 (13:53 -0400)]
Improve out-of-memory error reports by including memory context name.
Add the target context's name to the errdetail field of "out of memory"
errors in mcxt.c. Per discussion, this seems likely to be useful to
help narrow down the cause of a reported failure, and it costs little.
Also, now that context names are required to be compile-time constants
in all cases, there's little reason to be concerned about security
issues from exposing these names to users. (Because of such concerns,
we are *not* including the context "ident" field.)
In passing, add unlikely() markers to the allocation-failed tests,
just to be sure the compiler is on the right page about that.
Also, in palloc and friends, copy CurrentMemoryContext into a local
variable, as that's almost surely cheaper to reference than a global.
Tom Lane [Fri, 30 Mar 2018 15:48:17 +0000 (11:48 -0400)]
Do index FSM vacuuming sooner.
In btree and SP-GiST indexes, move the responsibility for calling
IndexFreeSpaceMapVacuum from the vacuumcleanup phase to the bulkdelete
phase, and do it if and only if we found some pages that could be put into
FSM. As in commit 851a26e26, the idea is to make free pages visible to FSM
searchers sooner when vacuuming very large tables (large enough to need
multiple bulkdelete scans). This adds more redundant work than that commit
did, since we have to scan the entire index FSM each time rather than being
able to localize what needs to be updated; but it still seems worthwhile.
However, we can buy something back by not touching the FSM at all when
there are no pages that can be put in it. That will result in slower
recovery from corrupt upper FSM pages in such a scenario, but it doesn't
seem like that's a case we need to optimize for.
Hash indexes don't use FSM at all. GIN, GiST, and bloom indexes update
FSM during the vacuumcleanup phase not bulkdelete, so that doing something
comparable to this would be a much more invasive change, and it's not clear
it's worth it. BRIN indexes do things sufficiently differently that this
change doesn't apply to them, either.
Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional
tweaks by me
Robert Haas [Fri, 30 Mar 2018 15:37:48 +0000 (11:37 -0400)]
Don't call IS_DUMMY_REL() when cheapest_total_path might be junk.
Unlike the previous coding, this might result in a Gather per Append
subplan when the target list is parallel-restricted, but such a plan
is probably worth considering in that case, since a single Gather
on top of the entire Append is impossible.
Teodor Sigaev [Fri, 30 Mar 2018 11:23:17 +0000 (14:23 +0300)]
Predicate locking in GIN index
Predicate locks are used on a per-page basis only if fastupdate = off;
otherwise a predicate lock on the pending list would effectively lock the
whole index, so to reduce locking overhead we just lock the relation
instead. Entry and posting trees are essentially B-trees, so locks are
acquired on leaf pages only.
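For example (table and column names are hypothetical), page-level predicate
locking applies when fast update is turned off for the index:
CREATE INDEX docs_body_idx ON docs USING gin (body_tsv) WITH (fastupdate = off);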
Author: Shubham Barai, with some editorialization by me and Dmitry Ivanov
Review by: Alexander Korotkov, Dmitry Ivanov, Fedor Sigaev
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPt5sWW+EwTaKUGFL5_XFcZ0MuGBcyJ70oqbWqr42YKR8Q@mail.gmail.com
Robert Haas [Thu, 29 Mar 2018 19:47:57 +0000 (15:47 -0400)]
Rewrite the code that applies scan/join targets to paths.
If the toplevel scan/join target list is parallel-safe, postpone
generating Gather (or Gather Merge) paths until after the toplevel has
been adjusted to return it. This (correctly) makes queries with
expensive functions in the target list more likely to choose a
parallel plan, since the cost of the plan now reflects the fact that
the evaluation will happen in the workers rather than the leader.
The original complaint about this problem was from Jeff Janes.
If the toplevel scan/join relation is partitioned, recursively apply
the changes to all partitions. This sometimes allows us to get rid of
Result nodes, because Append is not projection-capable but its
children may be. It also cleans up what appears to be incorrect SRF
handling from commit e2f1eb0ee30d144628ab523432320f174a2c8966: the old
code had no knowledge of SRFs for child scan/join rels.
Because we now use create_projection_path() in some cases where we
formerly used apply_projection_to_path(), this changes the ordering
of columns in some queries generated by postgres_fdw. Update
regression outputs accordingly.
Patch by me, reviewed by Amit Kapila and by Ashutosh Bapat. Other
fixes for this problem (substantially different from this version)
were reviewed by Dilip Kumar, Amit Khandekar, and Marina Polyakova.
Robert Haas [Mon, 12 Mar 2018 20:45:15 +0000 (16:45 -0400)]
Postpone generate_gather_paths for topmost scan/join rel.
Don't call generate_gather_paths for the topmost scan/join relation
when it is initially populated with paths. Instead, do the work in
grouping_planner. By itself, this gains nothing; in fact it loses
slightly because we end up calling set_cheapest() for the topmost
scan/join rel twice rather than once. However, it paves the way for
a future commit which will postpone generate_gather_paths for the
topmost scan/join relation even further, allowing more accurate
costing of parallel paths.
Amit Kapila and Robert Haas. Earlier versions of this patch (which
differed substantially) were reviewed by Dilip Kumar, Amit Khandekar,
Marina Polyakova, and Ashutosh Bapat.