Simon Riggs [Mon, 2 Apr 2018 20:04:35 +0000 (21:04 +0100)]
MERGE SQL Command following SQL:2016
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows,
a task that would otherwise require multiple PL statements.
e.g.
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
DO NOTHING;
MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.
MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.
MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.
Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.
This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.
Various issues reported via sqlsmith by Andreas Seltenreich
Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Tom Lane [Mon, 2 Apr 2018 17:46:13 +0000 (13:46 -0400)]
Fix some dubious WAL-parsing code.
Coverity complained about possible buffer overrun in two places added by
commit 1eb6d6527, and AFAICS it's reasonable to worry: even granting that
the WAL originator properly truncated the commit GID to GIDSIZE, we should
not really bet our lives on that having the same value as it does in the
current build. Hence, use strlcpy() not strcpy(), and adjust the pointer
advancement logic to be sure we skip over the whole source string even if
strlcpy() truncated it.
Tom Lane [Mon, 2 Apr 2018 16:36:21 +0000 (12:36 -0400)]
Remove contrib/jsonb_plpython's tests for infinity and NaN conversions.
These tests don't work reliably with pre-2.6 Python versions, since
Python code like float('inf') was not guaranteed to work before that,
even granting an IEEE-compliant platform.
Since there's no explicit handling of these cases in jsonb_plpython,
we're not adding any real code coverage by testing them, and thus
it doesn't seem to make sense to go to any great lengths to work
around the test instability.
Tom Lane [Mon, 2 Apr 2018 16:26:09 +0000 (12:26 -0400)]
Teach configure --with-python to report the Python version number.
We already do this for Perl and some other interesting tools, so it
seems sensible to do it for Python as well, especially since the
sub-release number is never determinable from other configure output
and even the major/minor numbers may not be clear without excavation
in config.log.
While at it, get rid of the code's assumption that both the major and
minor numbers contain exactly one digit. That will foreseeably be
broken by Python 3.10 in perhaps four or five years. That's far enough
out that we probably don't need to back-patch this.
Make be-secure-common.c more consistent for future SSL implementations
Recent commit 8a3d9425 has introduced be-secure-common.c, which is aimed
at including backend-side APIs that can be used by any SSL
implementation. The purpose is similar to fe-secure-common.c for the
frontend-side APIs.
However, the move forgot to include check_ssl_key_file_permissions(),
which causes a double dependency between be-secure.c and
be-secure-openssl.c.
Refactor the code in a more logical way. This also brings to light an
API usable by future SSL implementations for checking permissions on SSL
key files.
Robert Haas [Mon, 2 Apr 2018 14:51:50 +0000 (10:51 -0400)]
postgres_fdw: Push down partition-wise aggregation.
Since commit 7012b132d07c2b4ea15b0b3cb1ea9f3278801d98, postgres_fdw
has been able to push down the toplevel aggregation operation to the
remote server. Commit e2f1eb0ee30d144628ab523432320f174a2c8966 made
it possible to break down the toplevel aggregation into one
aggregate per partition. This commit lets postgres_fdw push down
aggregation in that case just as it does at the top level.
In order to make this work, this commit adds an additional argument
to the GetForeignUpperPaths FDW API. A matching argument is added
to the signature for create_upper_paths_hook. Third-party code using
either of these will need to be updated.
Also adjust create_foreignscan_plan() so that it picks up the correct
set of relids in this case.
Jeevan Chalke, reviewed by Ashutosh Bapat and by me and with some
adjustments by me. The larger patch series of which this patch is a
part was also reviewed and tested by Antonin Houska, Rajkumar
Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal
Legrand, and Rafia Sabih.
Andres Freund [Sun, 1 Apr 2018 03:26:47 +0000 (20:26 -0700)]
Fix non-portable use of round().
round() is from C99. Use rint() instead. There are behavioral
differences between round() and rint(), but they should not matter to
the Bloom filter optimal_k() function. We already assume POSIX
behavior for rint(), so there is no question of rint() not using
"rounds towards nearest" as its rounding mode.
Andres Freund [Sun, 1 Apr 2018 02:52:01 +0000 (19:52 -0700)]
Add amcheck verification of heap relations belonging to btree indexes.
Add a new, optional, capability to bt_index_check() and
bt_index_parent_check(): check that each heap tuple that should have an
index entry does in fact have one. The extra checking is performed at
the end of the existing nbtree checks.
This is implemented by using a Bloom filter data structure. The
implementation performs set membership tests within a callback (the same
type of callback that each index AM registers for CREATE INDEX). The
Bloom filter is populated during the initial index verification scan.
Reusing the CREATE INDEX infrastructure allows the new verification
option to automatically benefit from the heap consistency checks that
CREATE INDEX already performs. CREATE INDEX does thorough sanity
checking of HOT chains, so the new check actually manages to detect
problems in heap-only tuples.
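For illustration, the new option could be exercised like this (a sketch; the
index name is hypothetical, heapallindexed is the new optional argument):
SELECT bt_index_check('orders_pkey'::regclass, heapallindexed => true);
SELECT bt_index_parent_check('orders_pkey'::regclass, heapallindexed => true);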
Author: Peter Geoghegan
Reviewed-By: Pavan Deolasee, Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
Andres Freund [Sun, 1 Apr 2018 00:49:41 +0000 (17:49 -0700)]
Add Bloom filter implementation.
A Bloom filter is a space-efficient, probabilistic data structure that
can be used to test set membership. Callers will sometimes incur false
positives, but never false negatives. The rate of false positives is a
function of the total number of elements and the amount of memory
available for the Bloom filter.
Two classic applications of Bloom filters are cache filtering, and data
synchronization testing. Any user of Bloom filters must accept the
possibility of false positives as a cost worth paying for the benefit in
space efficiency.
This commit adds a test harness extension module, test_bloomfilter. It
can be used to get a sense of how the Bloom filter implementation
performs under varying conditions.
This is infrastructure for the upcoming "heapallindexed" amcheck patch,
which verifies the consistency of a heap relation against one of its
indexes.
Author: Peter Geoghegan
Reviewed-By: Andrey Borodin, Michael Paquier, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
Tom Lane [Sat, 31 Mar 2018 20:28:52 +0000 (16:28 -0400)]
Fix assorted issues in parallel vacuumdb.
Avoid storing the result of PQsocket() in a pgsocket variable; it's
declared as int, and the no-socket test is properly written as "x < 0"
not "x == PGINVALID_SOCKET". This accidentally had no bad effect
because we never got to init_slot() with a bad connection, but it's
still wrong.
Actually, it seems like we should avoid storing the result for a long
period at all. The function's not so expensive that it's worth avoiding,
and the existing coding technique here would fail if anyone tried to
PQreset the connection during the life of the program. Hence, just
re-call PQsocket every time we construct a select(2) mask.
Speaking of select(), GetIdleSlot imagined that it could compute the
select mask once and continue to use it over multiple calls to
select_loop(), which is pretty bogus since that would stomp on the
mask on return. This could only matter if the function's outer loop
iterated more than once, which is unlikely (it'd take some connection
receiving data, but not enough to complete its command). But if it
did happen, we'd acquire "tunnel vision" and stop watching the other
connections for query termination, with the effect of losing parallelism.
Another way in which GetIdleSlot could lose parallelism is that once
PQisBusy returns false, it would lock in on that connection and do
PQgetResult until that returns NULL; in some cases that could result
in blocking. (Perhaps this can never happen in vacuumdb due to the
limited set of commands that it can issue, but I'm not quite sure
of that, and even if true today it's not a future-proof assumption.)
Refactor the code to do that properly, so that it risks blocking in
PQgetResult only in cases where we need to wait anyway.
Another loss-of-parallelism problem, which *is* easily demonstrable,
is that any setup queries issued during prepare_vacuum_command() were
always issued on the last-to-be-created connection, whether or not
that was idle. Long-running operations on that connection thus
prevented issuance of additional operations on the other ones, except
in the limited cases where no preparatory query was needed. Instead,
wait till we've identified a free connection and use that one.
Also, avoid core dump due to undersized malloc request in the case
that no tables are identified to be vacuumed.
The bogus no-socket test was noted by CharSyam, the other problems
identified in my own code review. Back-patch to 9.5 where parallel
vacuumdb was introduced.
So far as I can find, NI_MAXHOST isn't actually required anywhere by
POSIX. Nonetheless, commit 9a895462d supposed that it could rely on
having that symbol without any ceremony at all. We do have a hack
for providing it if the platform doesn't, in getaddrinfo.h, so fix
the problem by #including that file. Per buildfarm.
Andres Freund [Sat, 31 Mar 2018 00:24:07 +0000 (17:24 -0700)]
Remove PARTIAL_LINKING build mode.
In 9956ddc19164b02dc1925fb389a1af77472eba5e, ten years ago, the
current objfile.txt based linking model was introduced. It's time to
retire the old SUBSYS.o based model.
This primarily is pertinent because the bitcode files for LLVM based
inlining are not produced when using PARTIAL_LINKING. It does not seem
worth fixing PARTIAL_LINKING to support that.
Author: Andres Freund
Discussion: https://postgr.es/m/20180121204356.d5oeu34jetqhmdv2@alap3.anarazel.de
Tatsuo Ishii [Sat, 31 Mar 2018 00:26:43 +0000 (09:26 +0900)]
Fix bug with view locking code.
LockViewRecurse() obtains the view relation using heap_open() and passes
it to get_view_query() to get the view info. It immediately closes the
relation and then uses the returned view info by calling
LockViewRecurse_walker(). Since get_view_query() returns a pointer
within the relcache, the relcache entry should be kept until
LockViewRecurse_walker() returns; otherwise the relation could point
to a garbage memory area.
The fix is to move the heap_close() call after LockViewRecurse_walker().
Problem reported by Tom Lane (buildfarm is unhappy, especially prion
since it enables -DRELCACHE_FORCE_RELEASE cpp flag), fix by me.
Andres Freund [Fri, 30 Mar 2018 23:56:41 +0000 (16:56 -0700)]
Add SKIP_LOCKED option to RangeVarGetRelidExtended().
This will be used for VACUUM (SKIP LOCKED).
Author: Nathan Bossart
Reviewed-By: Michael Paquier and Andres Freund
Discussion: https://postgr.es/m/20180306005349.b65whmvj7z6hbe2y@alap3.anarazel.de
Andres Freund [Fri, 30 Mar 2018 23:33:42 +0000 (16:33 -0700)]
Combine options for RangeVarGetRelidExtended() into a flags argument.
A followup patch will add a SKIP_LOCKED option. To avoid introducing
ever more arguments, breaking existing callers each time, introduce a
flags argument. This'll no doubt break a few external users...
Also change the MISSING_OK behaviour so a DEBUG1 debug message is
emitted when a relation is not found.
Author: Nathan Bossart
Reviewed-By: Michael Paquier and Andres Freund
Discussion: https://postgr.es/m/20180306005349.b65whmvj7z6hbe2y@alap3.anarazel.de
Fujii Masao [Fri, 30 Mar 2018 22:51:22 +0000 (07:51 +0900)]
Enhance pg_stat_wal_receiver view to display host and port of sender server.
Previously there was no way in the standby side to find out the host and port
of the sender server that the walreceiver was currently connected to when
multiple hosts and ports were specified in primary_conninfo. For that purpose,
this patch adds sender_host and sender_port columns into pg_stat_wal_receiver
view. They report the host and port that the active replication connection
currently uses.
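For example, on a standby one could now run (a simple sketch):
SELECT status, sender_host, sender_port FROM pg_stat_wal_receiver;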
Bump catalog version.
Author: Haribabu Kommi
Reviewed-by: Michael Paquier and me
Discussion: https://postgr.es/m/CAJrrPGcV_aq8=cdqkFhVDJKEnDQ70yRTTdY9RODzMnXNrCz2Ow@mail.gmail.com
Tom Lane [Fri, 30 Mar 2018 22:14:51 +0000 (18:14 -0400)]
Fix bogus provolatile/proparallel markings on a few built-in functions.
Richard Yen reported that pg_upgrade failed if the target cluster had
force_parallel_mode = on, because binary_upgrade_create_empty_extension()
is marked parallel restricted, allowing it to be executed in parallel
mode, which complains because it tries to acquire an XID.
In general, no function that might try to modify database data should
be considered parallel safe or restricted, since execution of it might
force XID acquisition. We found several other examples of this mistake.
Furthermore, functions that execute user-supplied SQL queries or query
fragments, or pull data from user-supplied cursors, had better be marked
both volatile and parallel unsafe, because we don't know what the supplied
query or cursor might try to do. There were several tsquery and XML
functions that had the wrong proparallel marking for this, and some of
them were even mislabeled as to volatility.
All these bugs are old, dating back to 9.6 for the proparallel mistakes
and much further for the provolatile mistakes. We can't force a
catversion bump in the back branches, but we can at least ensure that
installations initdb'd in future have the right values.
Tom Lane [Fri, 30 Mar 2018 20:18:18 +0000 (16:18 -0400)]
Ensure that WAL pages skipped by a forced WAL switch are zero-filled.
In the previous coding, skipped pages were mostly zeroes, but they still
had valid WAL page headers. That makes them very much less compressible
than an unbroken string of zeroes would be --- about 10X worse for bzip2
compression, for instance. We don't need those headers, so tweak the logic
so that we zero them out.
Tom Lane [Fri, 30 Mar 2018 19:11:39 +0000 (15:11 -0400)]
Remove obsolete SLRU wrapping and warnings from predicate.c.
When SSI was developed, slru.c was limited to segment files with names in
the range 0000-FFFF. This didn't allow enough space for predicate.c to
store every possible XID when spilling old transactions to disk, so it
would wrap around sooner and print warnings. Since commits 638cf09e and
73c986ad increased the number of segment files slru.c could manage, that
behavior is unnecessary. Therefore remove that code.
Also remove the macro OldSerXidSegment, which has been unused since 4cd3fb6e.
Tom Lane [Fri, 30 Mar 2018 17:53:33 +0000 (13:53 -0400)]
Improve out-of-memory error reports by including memory context name.
Add the target context's name to the errdetail field of "out of memory"
errors in mcxt.c. Per discussion, this seems likely to be useful to
help narrow down the cause of a reported failure, and it costs little.
Also, now that context names are required to be compile-time constants
in all cases, there's little reason to be concerned about security
issues from exposing these names to users. (Because of such concerns,
we are *not* including the context "ident" field.)
In passing, add unlikely() markers to the allocation-failed tests,
just to be sure the compiler is on the right page about that.
Also, in palloc and friends, copy CurrentMemoryContext into a local
variable, as that's almost surely cheaper to reference than a global.
Tom Lane [Fri, 30 Mar 2018 15:48:17 +0000 (11:48 -0400)]
Do index FSM vacuuming sooner.
In btree and SP-GiST indexes, move the responsibility for calling
IndexFreeSpaceMapVacuum from the vacuumcleanup phase to the bulkdelete
phase, and do it if and only if we found some pages that could be put into
FSM. As in commit 851a26e26, the idea is to make free pages visible to FSM
searchers sooner when vacuuming very large tables (large enough to need
multiple bulkdelete scans). This adds more redundant work than that commit
did, since we have to scan the entire index FSM each time rather than being
able to localize what needs to be updated; but it still seems worthwhile.
However, we can buy something back by not touching the FSM at all when
there are no pages that can be put in it. That will result in slower
recovery from corrupt upper FSM pages in such a scenario, but it doesn't
seem like that's a case we need to optimize for.
Hash indexes don't use FSM at all. GIN, GiST, and bloom indexes update
FSM during the vacuumcleanup phase not bulkdelete, so that doing something
comparable to this would be a much more invasive change, and it's not clear
it's worth it. BRIN indexes do things sufficiently differently that this
change doesn't apply to them, either.
Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional
tweaks by me
Robert Haas [Fri, 30 Mar 2018 15:37:48 +0000 (11:37 -0400)]
Don't call IS_DUMMY_REL() when cheapest_total_path might be junk.
Unlike the previous coding, this might result in a Gather per Append
subplan when the target list is parallel-restricted, but such a plan
is probably worth considering in that case, since a single Gather
on top of the entire Append is impossible.
Teodor Sigaev [Fri, 30 Mar 2018 11:23:17 +0000 (14:23 +0300)]
Predicate locking in GIN index
Predicate locks are used on a per-page basis only if fastupdate = off; in
the opposite case a predicate lock on the pending list would effectively lock
the whole index, so to reduce locking overhead we just lock the relation.
Entry and posting trees are essentially B-trees, so locks are acquired on
leaf pages only.
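For illustration, a sketch with hypothetical names; with fastupdate = off the
serializable scan takes page-level predicate locks on the index, otherwise
the whole relation is locked:
CREATE TABLE docs (body tsvector);
CREATE INDEX docs_body_idx ON docs USING gin (body) WITH (fastupdate = off);
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM docs WHERE body @@ to_tsquery('postgres');
COMMIT;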
Author: Shubham Barai with some editorization by me and Dmitry Ivanov
Review by: Alexander Korotkov, Dmitry Ivanov, Fedor Sigaev
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPt5sWW+EwTaKUGFL5_XFcZ0MuGBcyJ70oqbWqr42YKR8Q@mail.gmail.com
Robert Haas [Thu, 29 Mar 2018 19:47:57 +0000 (15:47 -0400)]
Rewrite the code that applies scan/join targets to paths.
If the toplevel scan/join target list is parallel-safe, postpone
generating Gather (or Gather Merge) paths until after the toplevel has
been adjusted to return it. This (correctly) makes queries with
expensive functions in the target list more likely to choose a
parallel plan, since the cost of the plan now reflects the fact that
the evaluation will happen in the workers rather than the leader.
The original complaint about this problem was from Jeff Janes.
If the toplevel scan/join relation is partitioned, recursively apply
the changes to all partitions. This sometimes allows us to get rid of
Result nodes, because Append is not projection-capable but its
children may be. It also cleans up what appears to be incorrect SRF
handling from commit e2f1eb0ee30d144628ab523432320f174a2c8966: the old
code had no knowledge of SRFs for child scan/join rels.
Because we now use create_projection_path() in some cases where we
formerly used apply_projection_to_path(), this changes the ordering
of columns in some queries generated by postgres_fdw. Update
regression outputs accordingly.
Patch by me, reviewed by Amit Kapila and by Ashutosh Bapat. Other
fixes for this problem (substantially different from this version)
were reviewed by Dilip Kumar, Amit Khandekar, and Marina Polyakova.
Robert Haas [Mon, 12 Mar 2018 20:45:15 +0000 (16:45 -0400)]
Postpone generate_gather_paths for topmost scan/join rel.
Don't call generate_gather_paths for the topmost scan/join relation
when it is initially populated with paths. Instead, do the work in
grouping_planner. By itself, this gains nothing; in fact it loses
slightly because we end up calling set_cheapest() for the topmost
scan/join rel twice rather than once. However, it paves the way for
a future commit which will postpone generate_gather_paths for the
topmost scan/join relation even further, allowing more accurate
costing of parallel paths.
Amit Kapila and Robert Haas. Earlier versions of this patch (which
differed substantially) were reviewed by Dilip Kumar, Amit
Khandekar, Marina Polyakova, and Ashutosh Bapat.
Robert Haas [Thu, 29 Mar 2018 19:37:39 +0000 (15:37 -0400)]
Teach create_projection_plan to omit projection where possible.
We sometimes insert a ProjectionPath into a plan tree when projection
is not strictly required. The existing code already arranges to avoid
emitting a Result node when the ProjectionPath's subpath can perform
the projection itself, but previously it didn't consider the
possibility that the parent node might not actually require the
projection to be performed at all.
Skipping projection when it's not required can not only avoid Result
nodes that aren't needed, but also avoid losing the "physical tlist"
optimization unnecessarily.
Tom Lane [Thu, 29 Mar 2018 16:44:19 +0000 (12:44 -0400)]
Remove unnecessary BufferGetPage() calls in fsm_vacuum_page().
Just noticed that these were quite redundant, since we're holding the
page address in a local variable anyway, and we have pin on the buffer
throughout.
Tom Lane [Thu, 29 Mar 2018 16:22:37 +0000 (12:22 -0400)]
Remove UpdateFreeSpaceMap(), use FreeSpaceMapVacuumRange() instead.
FreeSpaceMapVacuumRange has the same effect, is more efficient if many
pages are involved, and makes fewer assumptions about how it's used.
Notably, Claudio Freire pointed out that UpdateFreeSpaceMap could fail
if the specified freespace value isn't the maximum possible. This isn't
a problem for the single existing user, but the function represents an
attractive nuisance IMO, because it's named as though it were a
general-purpose update function and its limitations are undocumented.
In any case we don't need multiple ways to get the same result.
In passing, do some code review and cleanup in RelationAddExtraBlocks.
In particular, I see no excuse for it to omit the PageIsNew safety check
that's done in the mainline extension path in RelationGetBufferForTuple.
Tom Lane [Thu, 29 Mar 2018 15:29:54 +0000 (11:29 -0400)]
While vacuuming a large table, update upper-level FSM data every so often.
VACUUM updates leaf-level FSM entries immediately after cleaning the
corresponding heap blocks. fsmpage.c updates the intra-page search trees
on the leaf-level FSM pages when this happens, but it does not touch the
upper-level FSM pages, so that the released space might not actually be
findable by searchers. Previously, updating the upper-level pages happened
only at the conclusion of the VACUUM run, in a single FreeSpaceMapVacuum()
call. This is bad because the VACUUM might get canceled before ever
reaching that point, so that from the point of view of searchers no space
has been freed at all, leading to table bloat.
We can improve matters by updating the upper pages immediately after each
cycle of index-cleaning and heap-cleaning, processing just the FSM pages
corresponding to the range of heap blocks we have now fully cleaned.
This adds a small amount of extra work, since the FSM pages leading down
to each range boundary will be touched twice, but it's pretty negligible
compared to everything else going on in a large VACUUM.
If there are no indexes, VACUUM doesn't work in cycles but just cleans
each heap page on first visit. In that case we just arbitrarily update
upper FSM pages after each 8GB of heap. That maintains the goal of not
letting all this work slide until the very end, and it doesn't seem worth
expending extra complexity on a case that so seldom occurs in practice.
In either case, the FSM is fully up to date before any attempt is made
to truncate the relation, so that the most likely scenario for VACUUM
cancellation no longer results in out-of-date upper FSM pages. When
we do successfully truncate, adjusting the FSM to reflect that is now
fully handled within FreeSpaceMapTruncateRel.
Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional
tweaks by me
Teodor Sigaev [Thu, 29 Mar 2018 13:33:56 +0000 (16:33 +0300)]
Add casts from jsonb
Add explicit casts from scalar jsonb to all numeric and bool types. It would
be better to have a cast from scalar jsonb to text too, but there is already
a cast from jsonb to text that just returns the text representation of the
json, and there is no way to have two different casts for the same pair of
types.
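For example (a sketch of the new casts):
SELECT '42'::jsonb::int4;
SELECT '3.14'::jsonb::numeric;
SELECT 'true'::jsonb::boolean;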
Bump catalog version
Author: Anastasia Lubennikova with editorization by Nikita Glukhov and me
Review by: Aleksander Alekseev, Nikita Glukhov, Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e@postgrespro.ru
Peter Eisentraut [Wed, 28 Mar 2018 22:57:10 +0000 (18:57 -0400)]
Allow committing inside cursor loop
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor. To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks. Portals now have a new state "auto-held", which
means they have been converted automatically from pinned. An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.
This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.
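A minimal PL/pgSQL sketch (procedure and table names are hypothetical):
CREATE PROCEDURE process_queue() LANGUAGE plpgsql AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT id FROM work_queue LOOP
        UPDATE work_queue SET done = true WHERE id = r.id;
        COMMIT;  -- previously an error; the loop's cursor is now auto-held
    END LOOP;
END;
$$;
CALL process_queue();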
Andres Freund [Wed, 28 Mar 2018 21:22:42 +0000 (14:22 -0700)]
Add documentation for the JIT feature.
As promised in earlier commits, this adds documentation about the new
build options, the new GUCs, about the planner logic when JIT is used,
and the benefits of JIT in general.
Also adds a more implementation oriented README.
I'm sure we're going to want to expand this further, but I think this
is a reasonable start.
Author: Andres Freund, with contributions by Thomas Munro
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 28 Mar 2018 20:26:51 +0000 (13:26 -0700)]
Add EXPLAIN support for JIT.
This just shows a few details about JITing, e.g. how many functions
have been JITed, and how long that took. To avoid noise in regression
tests with functions sometimes being JITed in --with-llvm builds,
disable display when COSTS OFF is specified.
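A sketch (assumes a --with-llvm build and a hypothetical table "big";
jit_above_cost is lowered only to force JIT for the example):
SET jit = on;
SET jit_above_cost = 0;
EXPLAIN (ANALYZE) SELECT sum(x) FROM big;             -- shows a JIT section
EXPLAIN (ANALYZE, COSTS OFF) SELECT sum(x) FROM big;  -- JIT details suppressed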
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 28 Mar 2018 20:19:08 +0000 (13:19 -0700)]
Add inlining support to LLVM JIT provider.
This provides infrastructure to allow JITed code to inline code
implemented in C. This e.g. can be postgres internal functions or
extension code.
This already speeds up long running queries, by allowing the LLVM
optimizer to optimize across function boundaries. The optimization
potential currently doesn't reach its full potential because LLVM
cannot optimize the FunctionCallInfoData argument fully away, because
it's allocated on the heap rather than the stack. Fixing that is
beyond what's realistic for v11.
To be able to do that, use CLANG to convert C code to LLVM bitcode,
and have LLVM build a summary for it. That bitcode can then be used
to inline functions at runtime. For that the bitcode needs to be
installed. Postgres bitcode goes into $pkglibdir/bitcode/postgres,
extensions go into equivalent directories. PGXS has been modified so
that happens automatically if postgres has been compiled with LLVM
support.
Currently this isn't the fastest inline implementation; modules are
reloaded from disk during inlining. That's to work around an apparent
LLVM bug, triggering an apparently spurious error in LLVM assertion
enabled builds. Once that is resolved we can remove the superfluous
read from disk.
Docs will follow in a later commit containing docs for the whole JIT
feature.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 28 Mar 2018 19:45:32 +0000 (12:45 -0700)]
Use isinf builtin for clang, for performance.
When compiling with clang, glibc's definition of isinf() ends up
leading to an external libc function call. That's because there was a
bug in the builtin in an old gcc version, and clang claims
compatibility with an older version. That causes clang to be
measurably slower for floating point heavy workloads than gcc.
To fix, simply redirect isinf() to the builtin when using clang and
clang confirms it has __builtin_isinf().
Fujii Masao [Wed, 28 Mar 2018 19:56:52 +0000 (04:56 +0900)]
Make pg_rewind skip files and directories that are removed during server start.
The target cluster that was rewound needs to perform recovery from
the checkpoint created at failover, which leads it to remove or recreate
some files and directories that may have been copied from the source
cluster. So pg_rewind can skip synchronizing such files and directories,
which reduces the amount of data transferred during a rewind
without changing the usefulness of the operation.
Author: Michael Paquier
Reviewed-by: Anastasia Lubennikova, Stephen Frost and me
Discussion: https://postgr.es/m/20180205071022.GA17337@paquier.xyz
Fujii Masao [Wed, 28 Mar 2018 19:00:21 +0000 (04:00 +0900)]
Fix handling of files that the source server removes while pg_rewind is running.
After processing the filemap to build the list of chunks that will be
fetched from the source to rewind the target server, it is possible that
a file which was previously processed is removed from the source. A
simple example of such an occurrence is a WAL segment which gets recycled
on the target in-between. When the filemap is processed, files not
categorized as relation files are first truncated to prepare for the
full copy that is going to be taken from the source, divided into a
set of chunks. However, for a recycled WAL segment, this would result in
a segment which has a zero-byte size. With such an empty file,
post-rewind recovery thinks that records are saved but they are actually
not because of the truncation which happened when processing the
filemap, resulting in data loss.
In order to fix the problem, make sure that files which are found as
removed on the source when receiving chunks of them are as well deleted
on the target server for consistency.
Peter Eisentraut [Sat, 24 Mar 2018 14:05:06 +0000 (10:05 -0400)]
PL/pgSQL: Nested CALL with transactions
So far, a nested CALL or DO in PL/pgSQL would not establish a context
where transaction control statements were allowed. This fixes that by
handling CALL and DO specially in PL/pgSQL, passing the atomic/nonatomic
execution context through and doing the required management around
transaction boundaries.
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Tom Lane [Wed, 28 Mar 2018 17:26:43 +0000 (13:26 -0400)]
Fix actual and potential double-frees around tuplesort usage.
tuplesort_gettupleslot() passed back tuples allocated in the tuplesort's
own memory context, even when the caller was responsible to free them.
This created a double-free hazard, because some callers might destroy
the tuplesort object (via tuplesort_end) before trying to clean up the
last returned tuple. To avoid this, change the API to specify that the
tuple is allocated in the caller's memory context. v10 and HEAD already
did things that way, but in 9.5 and 9.6 this is a live bug that can
demonstrably cause crashes with some grouping-set usages.
In 9.5 and 9.6, this requires doing an extra tuple copy in some cases,
which is unfortunate. But the amount of refactoring needed to avoid it
seems excessive for a back-patched change, especially since the cases
where an extra copy happens are less performance-critical.
Likewise change tuplesort_getdatum() to return pass-by-reference Datums
in the caller's context not the tuplesort's context. There seem to be
no live bugs among its callers, but clearly the same sort of situation
could happen in future.
For other tuplesort fetch routines, continue to allocate the memory in
the tuplesort's context. This is a little inconsistent with what we now
do for tuplesort_gettupleslot() and tuplesort_getdatum(), but that's
preferable to adding new copy overhead in the back branches where it's
clearly unnecessary. These other fetch routines provide the weakest
possible guarantees about tuple memory lifespan from v10 on, anyway,
so this actually seems more consistent overall.
Adjust relevant comments to reflect these API redefinitions.
Arguably, we should change the pre-9.5 branches as well, but since
there are no known failure cases there, it seems not worth the risk.
Peter Geoghegan, per report from Bernd Helmle. Reviewed by Kyotaro
Horiguchi; thanks also to Andreas Seltenreich for extracting a
self-contained test case.
Simon Riggs [Wed, 28 Mar 2018 16:42:50 +0000 (17:42 +0100)]
Store 2PC GID in commit/abort WAL recs for logical decoding
Store GID of 2PC in commit/abort WAL records when wal_level = logical.
This allows logical decoding to send the SAME gid to subscribers
across restarts of logical replication.
Track replica origin replay progress for 2PC.
(Edited from patch 0003 in the logical decoding 2PC series.)
Authors: Nikhil Sontakke, Stas Kelvich
Reviewed-by: Simon Riggs, Andres Freund
Peter Eisentraut [Wed, 28 Mar 2018 12:32:43 +0000 (08:32 -0400)]
Transforms for jsonb to PL/Python
Add a new contrib module jsonb_plpython that provides a transform between
jsonb and PL/Python. jsonb values are converted to appropriate Python
types such as dicts and lists, and vice versa.
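A sketch of using the transform (assumes Python 3; function and table-free
example with hypothetical names):
CREATE EXTENSION plpython3u;
CREATE EXTENSION jsonb_plpython3u;
CREATE FUNCTION jsonb_keys(val jsonb) RETURNS jsonb
TRANSFORM FOR TYPE jsonb
LANGUAGE plpython3u
AS $$
# "val" arrives as a Python dict; the returned list is converted back to jsonb
return sorted(val.keys())
$$;
SELECT jsonb_keys('{"b": 2, "a": 1}');  -- ["a", "b"]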
Author: Anthony Bykov <a.bykov@postgrespro.ru>
Reviewed-by: Aleksander Alekseev <a.alekseev@postgrespro.ru>
Reviewed-by: Nikita Glukhov <n.gluhov@postgrespro.ru>
Simon Riggs [Wed, 28 Mar 2018 04:21:00 +0000 (05:21 +0100)]
Use pg_stat_get_xact* functions within xacts
Resolve build farm failures from c203d6cf81b4d7e43,
diagnosed by Tom Lane.
The output of pg_stat_get_xact_tuples_hot_updated() and friends
is not guaranteed to show anything after the transaction completes.
Data is flushed only slowly to the stats collector, so using them
can cause timing issues.
Andres Freund [Wed, 28 Mar 2018 04:03:10 +0000 (21:03 -0700)]
Quick adaption of JIT tuple deforming to the fast default patch.
Instead of using memset to set tts_isnull, call the new
slot_getmissingattrs().
Also fix a bug (= instead of >=) in the code generation. Normally = is
correct, but when repeatedly deforming fields not in a
tuple (e.g. deform up to natts + 1 and then natts + 2) >= is needed.
Andrew Dunstan [Wed, 28 Mar 2018 00:13:52 +0000 (10:43 +1030)]
Fast ALTER TABLE ADD COLUMN with a non-NULL default
Currently adding a column to a table with a non-NULL default results in
a rewrite of the table. For large tables this can be both expensive and
disruptive. This patch removes the need for the rewrite as long as the
default value is not volatile. The default expression is evaluated at
the time of the ALTER TABLE and the result stored in a new column
(attmissingval) in pg_attribute, and a new column (atthasmissing) is set
to true. Any existing row when fetched will be supplied with the
attmissingval. New rows will have the supplied value or the default and
so will never need the attmissingval.
Any time the table is rewritten all the atthasmissing and attmissingval
settings for the attributes are cleared, as they are no longer needed.
The most visible code change from this is in heap_attisnull, which
acquires a third TupleDesc argument, allowing it to detect a missing
value if there is one. In many cases where it is known that there will
not be any (e.g. catalog relations) NULL can be passed for this
argument.
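For example (a hypothetical table; the first ADD COLUMN now avoids a rewrite,
the second still rewrites because its default is volatile):
ALTER TABLE accounts ADD COLUMN status text DEFAULT 'active';
ALTER TABLE accounts ADD COLUMN created_at timestamptz DEFAULT clock_timestamp();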
Andrew Dunstan, heavily modified from an original patch from Serge
Rielau.
Reviewed by Tom Lane, Andres Freund, Tomas Vondra and David Rowley.
Tom Lane [Tue, 27 Mar 2018 22:15:39 +0000 (18:15 -0400)]
Update pgindent's typedefs blacklist, and make it easier to adjust.
It seems that all buildfarm members are now using the <stdbool.h> code
path, so that none of them report "bool" as a typedef. We still need it
to be treated that way, so adjust pgindent to force that whether or not
it's in the given list.
Also, the recent introduction of LLVM infrastructure has caused the
appearance of some typedef names that we definitely *don't* want
treated as typedefs, such as "string" and "abs". Extend the existing
blacklist to include these. (Additions based on comparing v10's
typedefs list to what the buildfarm is currently emitting.)
Rearrange the code so that the lists of whitelisted/blacklisted
names are a bit easier to find and modify.
Tom Lane [Tue, 27 Mar 2018 20:46:47 +0000 (16:46 -0400)]
Allow memory contexts to have both fixed and variable ident strings.
Originally, we treated memory context names as potentially variable in
all cases, and therefore always copied them into the context header.
Commit 9fa6f00b1 rethought this a little bit and invented a distinction
between fixed and variable names, skipping the copy step for the former.
But we can make things both simpler and more useful by instead allowing
there to be two parts to a context's identification, a fixed "name" and
an optional, variable "ident". The name supplied in the context create
call is now required to be a compile-time-constant string in all cases,
as it is never copied but just pointed to. The "ident" string, if
wanted, is supplied later. This is needed because typically we want
the ident to be stored inside the context so that it's cleaned up
automatically on context deletion; that means it has to be copied into
the context before we can set the pointer.
The cost of this approach is basically just an additional pointer field
in struct MemoryContextData, which isn't much overhead, and is bought
back entirely in the AllocSet case by not needing a headerSize field
anymore, since we no longer have to cope with variable header length.
In addition, we can simplify the internal interfaces for memory context
creation still further, saving a few cycles there. And it's no longer
true that a custom identifier disqualifies a context from participating
in aset.c's freelist scheme, so possibly there's some win on that end.
All the places that were using non-compile-time-constant context names
are adjusted to put the variable info into the "ident" instead. This
allows more effective identification of those contexts in many cases;
for example, subsidiary contexts of relcache entries are now identified
by both type (e.g. "index info") and relname, where before you got only
one or the other. Contexts associated with PL function cache entries
are now identified more fully and uniformly, too.
I also arranged for plancache contexts to use the query source string
as their identifier. This is basically free for CachedPlanSources, as
they contained a copy of that string already. We pay an extra pstrdup
to do it for CachedPlans. That could perhaps be avoided, but it would
make things more fragile (since the CachedPlanSource is sometimes
destroyed first). I suspect future improvements in error reporting will
require CachedPlans to have a copy of that string anyway, so it's not
clear that it's worth moving mountains to avoid it now.
This also changes the APIs for context statistics routines so that the
context-specific routines no longer assume that output goes straight
to stderr, nor do they know all details of the output format. This
is useful immediately to reduce code duplication, and it also allows
for external code to do something with stats output that's different
from printing to stderr.
The reason for pushing this now rather than waiting for v12 is that
it rethinks some of the API changes made by commit 9fa6f00b1. Seems
better for extension authors to endure just one round of API changes
not two.
Simon Riggs [Tue, 27 Mar 2018 18:57:02 +0000 (19:57 +0100)]
Allow HOT updates for some expression indexes
If the value of an index expression is unchanged after UPDATE,
allow HOT updates where previously we disallowed them, giving
a significant performance boost in those cases.
Particularly useful for indexes such as JSON->>field where the
JSON value changes but the indexed value does not.
Submitted as "surjective indexes" patch, now enabled by use
of new "recheck_on_update" parameter.
Author: Konstantin Knizhnik
Reviewer: Simon Riggs, with much wordsmithing and some cleanup
Peter Eisentraut [Tue, 27 Mar 2018 16:31:34 +0000 (12:31 -0400)]
libpq: PQhost to return active connected host or hostaddr
Previously, PQhost didn't return the connected host details when the
connection type was CHT_HOST_ADDRESS (i.e., via hostaddr). Instead, it
returned the complete host connection parameter (which could contain
multiple hosts) or the default host details, which was confusing and
arguably incorrect.
Change this to return the actually connected host or hostaddr
irrespective of the connection type. When hostaddr but no host was
specified, hostaddr is now returned. Never return the original host
connection parameter, and document that PQhost cannot be relied on
before the connection is established.
PQport is similarly changed to always return the active connection port
and never the original connection parameter.
Author: Hari Babu <kommi.haribabu@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Teodor Sigaev [Tue, 27 Mar 2018 12:43:19 +0000 (15:43 +0300)]
Add predicate locking for GiST
Add page-level predicate locking. Due to GiST's code organization, the patch
is close to trivial: add a check before changing a page, and acquire a
predicate lock before scanning a page. Choosing the right place for the check
is not entirely simple, though: it must not be done during index build, it
must support insertion of new downlinks, and so on.
Author: Shubham Barai with editorization by me and Alexander Korotkov
Reviewed by: Alexander Korotkov, Andrey Borodin, me
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPtdcANpw5ePU3LvnTP8HCENFw6wygupQAyNBgD-sG3h0g@mail.gmail.com
Andres Freund [Mon, 26 Mar 2018 21:59:37 +0000 (14:59 -0700)]
Make new regression test independent of max_parallel_workers_per_gather.
The tests in e2f1eb0ee30d1 ("Implement partition-wise
grouping/aggregation.") weren't independent of the server's
max_parallel_workers_per_gather setting. I (Andres) find it useful to
locally run with that disabled, and the aforementioned patch broke
this.
Andres Freund [Mon, 26 Mar 2018 19:57:19 +0000 (12:57 -0700)]
JIT tuple deforming in LLVM JIT provider.
Performing JIT compilation for deforming gains performance benefits
over unJITed deforming from compile-time knowledge of the tuple
descriptor. Fixed column widths, NOT NULLness, etc can be taken
advantage of.
Right now the JITed deforming is only used when deforming tuples as
part of expression evaluation (and obviously only if the descriptor is
known). It's likely to be beneficial in other cases, too.
By default tuple deforming is JITed whenever an expression is JIT
compiled. There's a separate boolean GUC controlling it, but that's
expected to be primarily useful for development and benchmarking.
Docs will follow in a later commit containing docs for the whole JIT
feature.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Teodor Sigaev [Mon, 26 Mar 2018 15:26:27 +0000 (18:26 +0300)]
Set random seed for pgbench.
Setting the random seed can increase the reproducibility of tests in some
cases. The patch provides three seed providers: time (the default), a strong
random generator (if available), and an unsigned constant. The seed can be
set from the command line or an environment variable.
Alvaro Herrera [Mon, 26 Mar 2018 15:00:25 +0000 (12:00 -0300)]
Fix thinko in comment
The listed numbers disagreed with the ones being used in the symbols;
but instead of just fixing the numbers in the comment, use the symbolic
name instead, which seems clearer.
This has been wrong all along, so apply back to 9.5 where BRIN was
introduced.
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/5ff514f2-8b1e-6366-b11c-8e2ed442562d@2ndquadrant.com
Alvaro Herrera [Mon, 26 Mar 2018 13:43:54 +0000 (10:43 -0300)]
Handle INSERT .. ON CONFLICT with partitioned tables
Commit eb7ed3f30634 enabled unique constraints on partitioned tables,
but one thing that was not working properly is INSERT/ON CONFLICT.
This commit introduces a new node that keeps state related to the ON CONFLICT
clause per partition, and fills it when that partition is about to be
used for tuple routing.
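For example (a sketch with hypothetical names):
CREATE TABLE meas (id int PRIMARY KEY, val text) PARTITION BY RANGE (id);
CREATE TABLE meas_1 PARTITION OF meas FOR VALUES FROM (1) TO (1000);
INSERT INTO meas VALUES (1, 'a')
ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;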
Andrew Dunstan [Mon, 26 Mar 2018 12:09:24 +0000 (22:39 +1030)]
Optimize btree insertions for common case of increasing values
Remember the last page of an index insert if it's the rightmost leaf
page. If the next entry belongs on and can fit in the remembered page,
insert the new entry there as long as we can get a lock on the page.
Otherwise, fall back on the more expensive method of searching for
the right place to insert the entry.
This provides a performance improvement for the common case where an
index entry is for monotonically increasing or nearly monotonically
increasing values, such as an identity field or a current timestamp.
Pavan Deolasee
Reviewed by Claudio Freire, Simon Riggs and Peter Geoghegan
Tom Lane [Sun, 25 Mar 2018 20:15:15 +0000 (16:15 -0400)]
Doc: add example of type resolution in nested UNIONs.
Section 10.5 didn't say explicitly that multiple UNIONs are resolved
pairwise. Since the resolution algorithm is described as taking any
number of inputs, readers might well think that a query like
"select x union select y union select z" would be resolved by
considering x, y, and z in one resolution step. But that's not what
happens (and I think that behavior is per SQL spec). Add an example
clarifying this point.
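One way to see the pairwise behavior (an illustration, not necessarily the
exact example added to the docs):
SELECT NULL UNION SELECT NULL UNION SELECT 1;
-- fails: the first UNION resolves (unknown, unknown) to text, and text then
-- cannot be matched against integer; a single three-way resolution step
-- would instead have chosen integer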
Tom Lane [Sun, 25 Mar 2018 19:15:32 +0000 (15:15 -0400)]
Fix unsafe extraction of the OID part of a relation filename.
Commit 8694cc96b did this randomly differently from other callers of
parse_filename_for_nontemp_relation(). Perhaps unsurprisingly,
the randomly different way is wrong; it fails to ensure the
extracted string is null-terminated. Per buildfarm member skink.
Tom Lane [Sun, 25 Mar 2018 18:54:16 +0000 (14:54 -0400)]
Remove useless if-test.
Coverity complained that this check is pointless, and it's right.
There is no case where we'd call ExecutorStart with a null plannedstmt,
and if we did, it'd have crashed before here. Thinko in commit cc415a56d.
Tom Lane [Sun, 25 Mar 2018 04:09:26 +0000 (00:09 -0400)]
Stabilize regression test result.
If random() returns a result sufficiently close to zero, float8out
switches to scientific notation, breaking this test case's expectation
that the output should look like '0.xxxxxxxxx'. Casting to numeric
should fix that. Per buildfarm member pogona.
Peter Eisentraut [Sun, 25 Mar 2018 01:14:20 +0000 (21:14 -0400)]
Add long options to pg_resetwal and pg_controldata
We were running out of good single-letter options for some upcoming
pg_resetwal functionality, so add long options to create more
possibilities. Add to pg_controldata as well for symmetry.
based on patch by Bossart, Nathan <bossartn@amazon.com>