Stephen Frost [Wed, 8 Mar 2017 20:14:03 +0000 (15:14 -0500)]
Expose explain's SUMMARY option
This exposes the existing explain summary option to users, allowing them
to choose whether they wish to have the planning time and totalled run
time included in the EXPLAIN result. The existing default behavior is
retained if SUMMARY is not specified: running explain without analyze
will not print the summary lines (currently just the planning time),
while running explain with analyze will include the summary lines (both
the planning time and the totalled execution time).
Users who wish to see the summary information for plain explain can now
use: EXPLAIN (SUMMARY ON) query; Users who do not want to have the
summary printed for an analyze run can use:
EXPLAIN (ANALYZE ON, SUMMARY OFF) query;
With this, we can now also have EXPLAIN ANALYZE queries included in our
regression tests by using:
EXPLAIN (ANALYZE ON, TIMING OFF, SUMMARY OFF) query;
I went ahead and added an example of this, which will hopefully not make
the buildfarm complain.
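For example, a plain explain with the summary requested (output
abridged; the cost and timing values here are invented for
illustration):
    EXPLAIN (SUMMARY ON) SELECT * FROM t;
    --  Seq Scan on t  (cost=0.00..35.50 rows=2550 width=4)
    --  Planning time: 0.082 ms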
Tom Lane [Wed, 8 Mar 2017 17:21:12 +0000 (12:21 -0500)]
Use doubly-linked block lists in aset.c to reduce large-chunk overhead.
Large chunks (those too large for any palloc freelist) are managed as
separate blocks. Formerly, realloc'ing or pfree'ing such a chunk required
O(N) time in a context with N blocks, since we had to traipse down the
singly-linked block list to locate the block's predecessor before we could
fix the list links. This can result in O(N^2) runtime in situations where
large numbers of such chunks are manipulated within one context. Cases
like that were not foreseen in the original design of aset.c, and indeed
didn't arise until fairly recently. But such problems can now occur in
reorderbuffer.c and in hash joining, both of which make repeated large
requests without scaling up their request size as they do so, and which
will free their requests in not-necessarily-LIFO order.
To fix, change the block list from singly-linked to doubly-linked.
This adds another 4 or 8 bytes to ALLOC_BLOCKHDRSZ, but that doesn't
seem like unacceptable overhead, since aset.c's blocks are normally
8K or more, and never less than 1K in current practice.
In passing, get rid of some redundant AllocChunkGetPointer() calls in
AllocSetRealloc (the compiler might be smart enough to optimize these
away anyway, but no need to assume that) and improve AllocSetCheck's
checking of block header fields.
Back-patch to 9.4 where reorderbuffer.c appeared. We could take this
further back, but currently there's no evidence that it would be useful.
Robert Haas [Wed, 8 Mar 2017 17:05:43 +0000 (12:05 -0500)]
Support parallel bitmap heap scans.
The index is scanned by a single process, but then all cooperating
processes can iterate jointly over the resulting set of heap blocks.
In the future, we might also want to support using a parallel bitmap
index scan to set up for a parallel bitmap heap scan, but that's a
job for another day.
Dilip Kumar, with some corrections and cosmetic changes by me. The
larger patch set of which this is a part has been reviewed and tested
by (at least) Andres Freund, Amit Khandekar, Tushar Ahuja, Rafia
Sabih, Haribabu Kommi, Thomas Munro, and me.
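The resulting plan shape looks roughly like this (a sketch; the table,
index, and worker count are invented):
    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM big WHERE x BETWEEN 100 AND 200;
    --  Finalize Aggregate
    --    ->  Gather
    --          Workers Planned: 2
    --          ->  Partial Aggregate
    --                ->  Parallel Bitmap Heap Scan on big
    --                      Recheck Cond: ((x >= 100) AND (x <= 200))
    --                      ->  Bitmap Index Scan on big_x_idx
    --                            Index Cond: ((x >= 100) AND (x <= 200))
Note that the Bitmap Index Scan still runs in a single process; only
the heap scan above it is shared among the cooperating workers.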
Fujii Masao [Wed, 8 Mar 2017 16:44:23 +0000 (01:44 +0900)]
Prevent logical rep workers with removed subscriptions from starting.
All logical rep workers must have subscription entries in
pg_subscription. To ensure this, we need to prevent the launcher
from starting a new worker for a subscription that the
DROP SUBSCRIPTION command is removing. To implement this,
LogicalRepLauncherLock was previously introduced and held until
the end of the transaction running DROP SUBSCRIPTION. But using
an LWLock for that purpose was not valid.
Instead, this commit changes DROP SUBSCRIPTION so that it takes
AccessExclusiveLock on pg_subscription, in order to ensure that
the launcher cannot see any subscriptions being removed. Also this
commit gets rid of LogicalRepLauncherLock.
Alvaro Herrera [Wed, 8 Mar 2017 16:29:48 +0000 (13:29 -0300)]
Fix XMLTABLE on older libxml2
libxml2 older than 2.9.1 does not have xmlXPathSetContextNode (2.9.1
was released in 2013, so even reasonably current platforms can lack
it). That function is fairly trivial, so I have inlined it in the one
added caller. This passes tests on my machine; let's see what the
buildfarm thinks about it.
Alvaro Herrera [Wed, 8 Mar 2017 15:39:37 +0000 (12:39 -0300)]
Support XMLTABLE query expression
XMLTABLE is defined by the SQL/XML standard as a feature that allows
turning XML-formatted data into relational form, so that it can be used
as a <table primary> in the FROM clause of a query.
This new construct provides significant simplicity and performance
benefit for XML data processing; what in a client-side custom
implementation was reported to take 20 minutes can be executed in 400ms
using XMLTABLE. (The same functionality was said to take 10 seconds
using nested PostgreSQL XPath function calls, and 5 seconds using
XMLReader under PL/Python).
The implemented syntax deviates slightly from what the standard
requires. First, the standard indicates that the PASSING clause is
optional and that multiple XML input documents may be given to it; we
make it mandatory and accept a single document only. Second, we don't
currently support specifying a default namespace.
This implementation relies on a new executor node based on a hardcoded
method table. (Because the grammar is fixed, there is no extensibility
in the current approach; further constructs can be implemented on top of
this such as JSON_TABLE, but they require changes to core code.)
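A minimal sketch of the new construct (the table and the XML layout
are invented for illustration):
    SELECT t.*
      FROM docs,
           XMLTABLE('/rows/row' PASSING docs.body
                    COLUMNS id   int  PATH '@id',
                            name text PATH 'name') AS t;
Each /rows/row element in docs.body produces one output row, with the
column values extracted by the per-column XPath expressions.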
Fujii Masao [Wed, 8 Mar 2017 14:43:38 +0000 (23:43 +0900)]
Fix connection leak in DROP SUBSCRIPTION command, take 2.
Commit 898a792eb8283e31efc0b6fcbc03bbcd5f7df667 fixed the connection
leak issue, but it was an unreliable fix. It assumed that the
walrcv_command() subroutine cannot throw an error, which is an
untenable assumption; if that code were ever changed so that an error
could be thrown, the connection leak would reappear.
This patch ensures that the connection is closed even when the
walrcv_command() subroutine throws an error.
Patch by me, reviewed by Petr Jelinek and Michael Paquier
Robert Haas [Wed, 8 Mar 2017 13:15:24 +0000 (08:15 -0500)]
Fix parallel index and index-only scans to fall back to serial.
Parallel executor nodes can't assume that parallel execution will
happen in every case where the plan calls for it, because it might
not work out that way. However, parallel index scan and parallel
index-only scan failed to do the right thing here. Repair.
Robert Haas [Wed, 8 Mar 2017 13:02:03 +0000 (08:02 -0500)]
tidbitmap: Support shared iteration.
When a shared iterator is used, each call to tbm_shared_iterate()
returns a result that has not yet been returned to any process
attached to the shared iterator. In other words, each cooperating
process gets a disjoint subset of the full result set, but all
results are returned exactly once.
This is infrastructure for parallel bitmap heap scan.
Dilip Kumar. The larger patch set of which this is a part has been
reviewed and tested by (at least) Andres Freund, Amit Khandekar,
Tushar Ahuja, Rafia Sabih, Haribabu Kommi, and Thomas Munro.
Robert Haas [Tue, 7 Mar 2017 22:03:51 +0000 (17:03 -0500)]
hash: Refactor hash index creation.
The primary goal here is to move all of the related page modifications
to a single section of code, in preparation for adding write-ahead
logging. In passing, rename _hash_metapinit to _hash_init, since it
initializes more than just the metapage.
Amit Kapila. The larger patch series of which this is a part has been
reviewed and tested by Álvaro Herrera, Ashutosh Sharma, Mark Kirkwood,
Jeff Janes, and Jesper Pedersen.
Tom Lane [Tue, 7 Mar 2017 17:40:44 +0000 (12:40 -0500)]
Invent start_proc parameters for PL/Tcl.
Define GUCs pltcl.start_proc and pltclu.start_proc. When set to a
nonempty value at the time a new Tcl interpreter is created, the
parameterless pltcl or pltclu function named by the GUC is called to
allow user-controlled initialization to occur within the interpreter.
This is modeled on plv8's start_proc parameter, and also has much in
common with plperl's on_init feature. It allows users to fully
replace the "modules" feature that was removed in commit 817f2a586.
Since an initializer function could subvert later Tcl code in nearly
arbitrary ways, mark both GUCs as SUSET for now. It would be nice
to find a way to relax that someday; but the corresponding GUCs in
plperl are also SUSET, and there's not been much complaint.
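A minimal sketch (the function name is invented; since the GUC is
SUSET, setting it requires superuser privileges):
    CREATE FUNCTION tcl_session_init() RETURNS void AS $$
        # plain Tcl, run once when the interpreter is created
        set ::session_start [clock seconds]
    $$ LANGUAGE pltcl;
    SET pltcl.start_proc = 'tcl_session_init';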
Tom Lane [Tue, 7 Mar 2017 17:05:57 +0000 (12:05 -0500)]
Clean up test_ifaddrs a bit.
We customarily #include <netinet/in.h> before <arpa/inet.h>; according
to our git history (cf commit 527f8babc) there used to be platform(s)
where <arpa/inet.h> didn't compile otherwise. That's probably not
really an issue anymore, but since test_ifaddrs.c is the one and only
place in our code that's not following that rule, bring it into line.
Also remove #include <sys/socket.h>, as that's duplicative given that
libpq/ifaddr.h does so (via pqcomm.h).
In passing, add a .gitignore file so nobody accidentally commits the
test_ifaddrs executable, as I nearly did.
I see no particular need to back-patch this, as it's just neatnik-ism
considering we don't build test_ifaddrs by default, or even document
it anywhere.
Robert Haas [Tue, 7 Mar 2017 16:49:49 +0000 (11:49 -0500)]
Consider parallel merge joins.
Commit 45be99f8cd5d606086e0a458c9c72910ba8a613d took the position
that performing a merge join in parallel was not likely to work out
well, but this conclusion was greeted with skepticism even at the
time. Whether it was true then or not, it's clearly not true any
more now that we have parallel index scan.
Tom Lane [Tue, 7 Mar 2017 16:36:35 +0000 (11:36 -0500)]
Fix pgbench's failure to honor the documented long-form option "--builtin".
Not only did it not accept --builtin as a synonym for -b, but what it did
accept as a synonym was --tpc-b (huh?), which it got even further wrong
by marking as no_argument, so that if you did try that you got a core
dump. I suppose this is leftover from some early design for the new
switches added by commit 8bea3d221, but it's still pretty sloppy work.
Per bug #14580 from Stepan Pesternikov. Back-patch to 9.6 where the
error was introduced.
Tom Lane [Tue, 7 Mar 2017 15:42:11 +0000 (10:42 -0500)]
Remove vestigial grammar support for CHARACTER ... CHARACTER SET option.
The SQL standard says that you should be able to write "CHARACTER SET foo"
as part of the declaration of a char-type column. We don't implement that,
but a rough form of support has existed in gram.y since commit f10b63923.
That's now sat there for nigh 20 years without anyone fleshing it out ---
and even if someone did, the contemplated approach of having separate data
type name(s) for every character set certainly isn't what we'd do today.
Let's just remove the grammar production; if anyone is ever motivated to
work on this, reinventing the grammar support is a trivial fraction of
what they'd have to do. And we've never documented anything about
supporting such a clause.
Robert Haas [Tue, 7 Mar 2017 15:33:29 +0000 (10:33 -0500)]
Preparatory refactoring for parallel merge join support.
Extract the logic used by hash_inner_and_outer into a separate
function, get_cheapest_parallel_safe_total_inner, so that it can
also be used to plan parallel merge joins.
Also, add a require_parallel_safe argument to the existing function
get_cheapest_path_for_pathkeys, because parallel merge join needs
to find the cheapest path for a given set of pathkeys that is
parallel-safe, not just the cheapest one overall.
Robert Haas [Tue, 7 Mar 2017 15:22:07 +0000 (10:22 -0500)]
Fix parallel hash join path search.
When the very cheapest path is not parallel-safe, we want to instead use
the cheapest unparameterized path that is. The old code searched
innerrel->cheapest_parameterized_paths, but that isn't right, because
the path we want may not be in that list. Search innerrel->pathlist
instead.
Stephen Frost [Tue, 7 Mar 2017 14:31:52 +0000 (09:31 -0500)]
psql: Add \gx command
It can often be useful to use expanded mode output (\x) for just a
single query. Introduce a \gx which acts exactly like \g except that it
will force expanded output mode for that one \gx call. This is simpler
than having to use \x as a toggle and also means that the user doesn't
have to worry about the current state of the expanded variable, or
resetting it later, to ensure a given query is always returned in
expanded mode.
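For instance, in psql (illustrative; any query works the same way):
    SELECT * FROM pg_stat_replication \gx
runs the query and displays the result in expanded mode, without
changing the current \x setting.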
Primarily Christoph's patch, though I did tweak the documentation and
help text a bit, and re-indented the tab completion section.
Author: Christoph Berg
Reviewed By: Daniel Verite
Discussion: https://postgr.es/m/20170127132737.6skslelaf4txs6iw%40msg.credativ.de
Simon Riggs [Tue, 7 Mar 2017 14:00:54 +0000 (22:00 +0800)]
Allow pg_dumpall to dump roles w/o user passwords
Add new option --no-role-passwords which dumps roles without passwords.
Since we do not need passwords, we choose to use pg_roles in
preference to pg_authid, since access to pg_authid may be restricted
for security reasons in some configurations.
Support SCRAM-SHA-256 authentication (RFC 5802 and 7677).
This introduces a new generic SASL authentication method, similar to the
GSS and SSPI methods. The server first tells the client which SASL
authentication mechanism to use, and then the mechanism-specific SASL
messages are exchanged in AuthenticationSASLContinue and PasswordMessage
messages. Only SCRAM-SHA-256 is supported at the moment, but this allows
adding more SASL mechanisms in the future, without changing the overall
protocol.
Support for channel binding, aka SCRAM-SHA-256-PLUS is left for later.
The SASLprep algorithm, for pre-processing the password, is not yet
implemented. That could cause trouble if you use a password with
non-ASCII characters and a client library that does implement
SASLprep. That will hopefully be added later.
Authorization identities, as specified in the SCRAM-SHA-256 specification,
are ignored. SET SESSION AUTHORIZATION provides more or less the same
functionality, anyway.
If a user doesn't exist, perform a "mock" authentication, by constructing
an authentic-looking challenge on the fly. The challenge is derived from
a new system-wide random value, "mock authentication nonce", which is
created at initdb, and stored in the control file. We go through these
motions, in order to not give away the information on whether the user
exists, to unauthenticated users.
Bumps PG_CONTROL_VERSION, because of the new field in the control file.
Patch by Michael Paquier and Heikki Linnakangas, reviewed at different
stages by Robert Haas, Stephen Frost, David Steele, Aleksander Alekseev,
and many others.
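A sketch of how this is used, with spellings as in the released
version (the role name and password are invented; a matching
scram-sha-256 line in pg_hba.conf is also required):
    SET password_encryption = 'scram-sha-256';
    CREATE ROLE app_user LOGIN PASSWORD 'correct horse battery staple';
The stored verifier is then a SCRAM-SHA-256 verifier rather than an
MD5 hash, and the client authenticates via the SASL message exchange
described above.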
Stephen Frost [Tue, 7 Mar 2017 04:29:02 +0000 (23:29 -0500)]
pg_dump: Properly handle public schema ACLs with --clean
pg_dump has always handled the public schema in a special way when it
comes to the "--clean" option: in "normal" mode we do not drop or
recreate the public schema, but in "--clean" mode we do both.
When dropped and recreated in "--clean" mode, the public schema gets
the normal schema-default privileges of "nothing". This is unlike how
the public schema starts life, which is with CREATE and USAGE GRANT'd
to the PUBLIC role, and that is what is recorded in pg_init_privs.
Due to this, in "--clean" mode, pg_dump would mistakenly only dump out
the set of privileges required to go from the initdb-time privileges on
the public schema to whatever the current-state privileges are. If the
privileges were not changed from initdb time, then no privileges would
be dumped out for the public schema, but with the schema being dropped
and recreated, the result was that the public schema would have no ACLs
on it instead of what it should have, which is the initdb-time
privileges.
Practically speaking, this meant that pg_dump with --clean mode dumping
a database where the ACLs on the public schema were not changed from the
default would, upon restore, result in a public schema with *no*
privileges GRANT'd, not matching the state of the existing database
(where the initdb-time privileges would have been CREATE and USAGE to
the PUBLIC role for the public schema).
To fix, adjust the query in getNamespaces() to ignore the pg_init_privs
entry for the public schema when running in "--clean" mode, meaning that
the privileges for the public schema would be dumped, correctly, as if
it was going from a newly-created schema to the current state (which is,
indeed, what will happen during the restore thanks to the DROP/CREATE).
Only the public schema is handled in this special way by pg_dump, no
other initdb-time objects are dropped/recreated in --clean mode.
Tom Lane [Tue, 7 Mar 2017 00:33:59 +0000 (19:33 -0500)]
Repair incorrect pg_dump labeling for some comments and security labels.
We attached no schema label to comments for procedural languages, casts,
transforms, operator classes, operator families, or text search objects.
The first three categories of objects don't really have schemas, but
pg_dump treats them as if they do, and it seems like the TocEntry fields
for their comments had better match the TocEntry fields for the parent
objects. (As an example of a possible hazard, the type names in a CAST
will be formatted with the assumption of a particular search_path, so
failing to ensure that this same path is active for the COMMENT ON command
could lead to an error or to attaching the comment to the wrong cast.)
In the last six cases, this was a flat-out error --- possibly mine to
begin with, but it was a long time ago.
The security label for a procedural language was likewise not correctly
labeled as to schema, and both the comment and security label for a
procedural language were not correctly labeled as to owner.
In simple cases the restore would accidentally work correctly anyway, since
these comments and security labels would normally get emitted right after
the owning object, and so the search path and active user would be correct
anyhow. But it could fail in corner cases; for example a schema-selective
restore would omit comments it should include.
Giuseppe Broccolo noted the oversight, and proposed the correct fix, for
text search dictionary objects; I found the rest by cross-checking other
dumpComment() calls. These oversights are ancient, so back-patch all
the way.
Andres Freund [Wed, 23 Nov 2016 08:23:42 +0000 (00:23 -0800)]
Make simplehash.h grow hashtable in additional cases.
Increase the size when either the distance between actual and optimal
slot grows too large, or when too many subsequent entries would have
to be moved.
This addresses reports that the simplehash performed, sometimes
considerably, worse than dynahash.
The reason turned out to be that insertions into the hashtable were,
due to the use of parallel query, in effect done from another
hashtable, in hash-value order. If the target hashtable, due to
mis-estimation, was sized a lot smaller than the source table(s), that
led to very imbalanced tables; a lot of entries in many close-by
buckets from the source tables were inserted into a single, wider
bucket in the target table. As the growth factor was computed solely
from the fillfactor, the performance of the table decreased further
and further.
b81b5a96f424531b was an attempt to address this problem for hash
aggregates (but not for bitmap scans), but it turns out that the
current method of mixing hash values often actually leaves neighboring
hash values close to each other, just in a different value range. It
might be worth revisiting that independently of the performance issues
addressed in this patch.
To address that problem, resize tables in two additional cases:
firstly, when the optimal position for an entry would be far from the
actual position; secondly, when many entries would have to be moved to
make space for the new entry (while satisfying the robin hood
property).
Due to the additional resizing threshold it seems possible, and
testing so far confirms, that a higher fillfactor doesn't hurt
performance and saves a bit of memory. It seems better to increase it
now, before a release containing any of this code, rather than wonder
in some later release.
The various boundaries aren't determined in a particularly scientific
manner, they might need some fine-tuning.
In all my tests the new code now, even with parallelism, performs at
least as well as the old code, and in several scenarios significantly
better.
Reported-By: Dilip Kumar, Robert Haas, Kuntal Ghosh
Discussion:
https://postgr.es/m/CAFiTN-vagvuAydKG9VnWcoK=ADAhxmOa4ZTrmNsViBBooTnriQ@mail.gmail.com
https://postgr.es/m/CAGz5QC+=fNTYgzMLTBUNeKt6uaWZFXJbkB5+7oWm-n9DwVxcLA@mail.gmail.com
Stephen Frost [Mon, 6 Mar 2017 22:03:57 +0000 (17:03 -0500)]
pg_upgrade: Fix large object COMMENTS, SECURITY LABELS
When performing a pg_upgrade, we copy the files behind pg_largeobject
and pg_largeobject_metadata, allowing us to avoid having to dump out and
reload the actual data for large objects and their ACLs.
Unfortunately, that isn't all of the information which can be associated
with large objects. Currently, we also support COMMENTs and SECURITY
LABELs with large objects and these were being silently dropped during a
pg_upgrade as pg_dump would skip everything having to do with a large
object and pg_upgrade only copied the tables mentioned to the new
cluster.
As the file copies happen after the catalog dump and reload, we can't
simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode
output; we also have to include the actual large object definition as
well. With the definition, comments, and security labels in the pg_dump
output and the file copies performed by pg_upgrade, all of the data and
metadata associated with large objects is able to be successfully pulled
forward across a pg_upgrade.
In 9.6 and master, we can simply adjust the dump bitmask to indicate
which components we don't want. In 9.5 and earlier, we have to put
explicit checks in dumpBlob() and dumpBlobs() to not include the ACL
or the data when in binary-upgrade mode.
The privileges regression test was adjusted to allow another test
(large_object.sql) to be added, which explicitly leaves in place a
large object with a comment, providing coverage of that case with
pg_upgrade.
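For instance, metadata like the following now survives a pg_upgrade
(the OID and comment text are invented):
    SELECT lo_create(42);
    COMMENT ON LARGE OBJECT 42 IS 'kept across pg_upgrade';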
Tom Lane [Mon, 6 Mar 2017 21:50:47 +0000 (16:50 -0500)]
Avoid dangling pointer to relation name in RLS code path in DoCopy().
With RLS active, "COPY tab TO ..." failed under -DRELCACHE_FORCE_RELEASE,
and would sometimes fail without that, because it used the relation name
directly from the relcache as part of the parsetree it's building. That
becomes a potentially-dangling pointer as soon as the relcache entry is
closed, a bit further down. Typical symptom if the relcache entry chanced
to get cleared would be "relation does not exist" error with a garbage
relation name, or possibly a core dump; but if you were really truly
unlucky, the COPY might copy from the wrong table.
Per report from Andrew Dunstan that regression tests fail with
-DRELCACHE_FORCE_RELEASE. The core tests now pass for me (but I have
not tried "make check-world" yet).
Peter Eisentraut [Wed, 28 Dec 2016 17:00:00 +0000 (12:00 -0500)]
Allow dropping multiple functions at once
The generic drop support already supported dropping multiple objects of
the same kind at once. But the previous representation
of function signatures across two grammar symbols and structure members
made this cumbersome to do for functions, so it was not supported. Now
that function signatures are represented by a single structure, it's
trivial to add this support. Same for aggregates and operators.
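For example (the function names and signatures are invented):
    DROP FUNCTION f1(int), f2(text, text);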
Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Wed, 28 Dec 2016 17:00:00 +0000 (12:00 -0500)]
Replace LookupFuncNameTypeNames() with LookupFuncWithArgs()
The old function took function name and function argument list as
separate arguments. Now that all function signatures are passed around
as ObjectWithArgs structs, this is no longer necessary and can be
replaced by a function that takes ObjectWithArgs directly. Similarly
for aggregates and operators.
Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Sat, 12 Nov 2016 17:00:00 +0000 (12:00 -0500)]
Remove objname/objargs split for referring to objects
In simpler times, it might have worked to refer to all kinds of objects
by a list of name components and an optional argument list. But this
doesn't work for all objects, which has resulted in a collection of
hacks to place various other node types into these fields, which have
to be unpacked at the other end. This makes it also weird to represent
lists of such things in the grammar, because they would have to be lists
of singleton lists, to make the unpacking work consistently. The other
problem is that keeping separate name and args fields makes it awkward
to deal with lists of functions.
Change that by dropping the objargs field and have objname, renamed to
object, be a generic Node, which can then be flexibly assigned and
managed using the normal Node mechanisms. In many cases it will still
be a List of names, in some cases it will be a string Value, for types
it will be the existing Typename, for functions it will now use the
existing ObjectWithArgs node type. Some of the more obscure object
types still use somewhat arbitrary nested lists.
Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Robert Haas [Mon, 6 Mar 2017 18:10:55 +0000 (13:10 -0500)]
Fix incorrect comments.
Commit 19dc233c32f2900e57b8da4f41c0f662ab42e080 introduced these
comments. Michael Paquier noticed that one of them had a typo, but
a bigger problem is that they were not an accurate description of
what the code was doing.
Robert Haas [Mon, 6 Mar 2017 17:41:55 +0000 (12:41 -0500)]
Mark pg_start_backup and pg_stop_backup as parallel-restricted.
They depend on backend-private state that will not be synchronized by
the parallel machinery, so they should not be marked parallel-safe.
This issue also exists in 9.6, but we obviously can't do anything
about 9.6 clusters that already exist. Possibly this could be
back-patched so that future 9.6 clusters would come out OK, or
possibly we should back-patch some other fix, but that would need more
discussion.
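The markings can be inspected in pg_proc, where proparallel 'r' means
parallel-restricted:
    SELECT proname, proparallel
      FROM pg_proc
     WHERE proname IN ('pg_start_backup', 'pg_stop_backup');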
Simon Riggs [Mon, 6 Mar 2017 10:34:31 +0000 (16:04 +0530)]
Reduce lock levels for table storage params related to planning
The following parameters are now updatable with only ShareUpdateExclusiveLock:
effective_io_concurrency
parallel_workers
seq_page_cost
random_page_cost
n_distinct
n_distinct_inherited
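For example, these now take only ShareUpdateExclusiveLock rather than
AccessExclusiveLock (the table and column names are invented):
    ALTER TABLE measurements SET (parallel_workers = 8);
    ALTER TABLE measurements ALTER COLUMN ts SET (n_distinct = -0.5);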
Tom Lane [Sat, 4 Mar 2017 21:09:33 +0000 (16:09 -0500)]
In rebuild_relation(), don't access an already-closed relcache entry.
This reliably fails with -DRELCACHE_FORCE_RELEASE, as reported by
Andrew Dunstan, and could sometimes fail in normal operation, resulting
in a wrong persistence value being used for the transient table.
It's not immediately clear to me what effects that might have beyond
the risk of a crash while accessing OldHeap->rd_rel->relpersistence,
but it's probably not good.
Bug introduced by commit f41872d0c, and made substantially worse by
commit 85b506bbf, which added a second such access significantly
later than the heap_close. I doubt the first reference could fail
in a production scenario, but the second one definitely could.
Disallow CREATE/DROP SUBSCRIPTION in transaction block
Disallow CREATE SUBSCRIPTION and DROP SUBSCRIPTION in a transaction
block when the replication slot is to be created or dropped, since that
cannot be rolled back.
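A sketch of the now-rejected usage (connection string and names
invented; error wording approximate):
    BEGIN;
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=primary dbname=app'
        PUBLICATION mypub;
    -- ERROR: cannot run inside a transaction block
    --        (reported when a replication slot would be created)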
based on patch by Masahiko Sawada <sawada.mshk@gmail.com>
Add tab completion for publications and subscriptions. Also, to be able
to get a list of subscriptions, make pg_subscription world-readable but
revoke access to subconninfo using column privileges.
This makes the connection attempt from CREATE SUBSCRIPTION and from
WalReceiver interruptable by the user in case the libpq connection is
hanging. The previous coding required immediate shutdown (SIGQUIT) of
PostgreSQL in that situation.
From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Tested-by: Thom Brown <thom@linux.com>
Simon Riggs [Fri, 3 Mar 2017 13:48:25 +0000 (19:18 +0530)]
Allow vacuums to report oldestxmin
Allow VACUUM and Autovacuum to report the oldestxmin value they
used while cleaning tables, helping to make better sense out of
the other statistics we report in various cases.
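For example (output abridged; the values are invented):
    VACUUM VERBOSE pgbench_accounts;
    -- INFO:  vacuuming "public.pgbench_accounts"
    -- DETAIL: ... 100000 row versions ... oldest xmin: 1234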
Robert Haas [Fri, 3 Mar 2017 06:02:45 +0000 (11:32 +0530)]
Add pg_current_logfile() function.
The syslogger will write out the current stderr and csvlog names, if
it's running and there are any, to a new file in the data directory
called "current_logfiles". We take care to remove this file when it
might no longer be valid (but not at shutdown). The function
pg_current_logfile() can be used to read the entries in the file.
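For example (the paths are invented; the result is NULL if the
requested log format is not in use):
    SELECT pg_current_logfile();           -- e.g. log/postgresql-2017-03-03.log
    SELECT pg_current_logfile('csvlog');   -- the current csvlog file, if any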
Gilles Darold, reviewed and modified by Karl O. Pinc, Michael
Paquier, and me. Further review by Álvaro Herrera and Christoph Berg.
Robert Haas [Fri, 3 Mar 2017 03:44:49 +0000 (09:14 +0530)]
Notify bgworker registrant after freeing worker slot.
Tom Lane observed buildfarm failures caused by the select_parallel
regression test trying to launch new parallel queries before the
worker slots used by the previous ones were freed. Try to fix this by
having the postmaster free the worker slots before it sends the
SIGUSR1 notifications to the registering process. This doesn't
completely eliminate the possibility that the user backend might
(correctly) observe the worker as dead before the slot is free, but I
believe it should make the window significantly narrower.
Patch by me, per complaint from Tom Lane. Reviewed by Amit Kapila.
Robert Haas [Fri, 3 Mar 2017 03:37:41 +0000 (09:07 +0530)]
Improve error reporting for tuple-routing failures.
Currently, the whole row is shown without column names. Instead,
adopt a style similar to _bt_check_unique() in ExecFindPartition()
and show the failing key: (key1, ...) = (val1, ...).
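An illustrative session (names and values invented; error wording
approximate):
    CREATE TABLE events (id int) PARTITION BY RANGE (id);
    CREATE TABLE events_low PARTITION OF events
        FOR VALUES FROM (1) TO (100);
    INSERT INTO events VALUES (500);
    -- ERROR:  no partition of relation "events" found for row
    -- DETAIL: Partition key of the failing row contains (id) = (500).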
Amit Langote, per a complaint from Simon Riggs. Reviewed by me;
I also adjusted the grammar in one of the comments.
Robert Haas [Thu, 2 Mar 2017 13:17:40 +0000 (18:47 +0530)]
Refactor bitmap heap scan in preparation for parallel support.
The final patch will be less messy if the prefetching support is
a bit better isolated, so do that.
Dilip Kumar, with some changes by me. The larger patch set of which
this is a part has been reviewed and tested by (at least) Andres
Freund, Amit Khandekar, Tushar Ahuja, Rafia Sabih, Haribabu Kommi, and
Thomas Munro.
Robert Haas [Thu, 2 Mar 2017 11:48:19 +0000 (17:18 +0530)]
Don't uselessly rewrite, truncate, VACUUM, or ANALYZE partitioned tables.
Also, recursively perform VACUUM and ANALYZE on partitions when the
command is applied to a partitioned table. In passing, some related
documentation updates.
Amit Langote, reviewed by Michael Paquier, Ashutosh Bapat, and by me.
Noah Misch [Thu, 2 Mar 2017 05:03:27 +0000 (00:03 -0500)]
Handle unaligned SerializeSnapshot() buffer.
Likewise in RestoreSnapshot(). Do so by copying between the user buffer
and a stack buffer of known alignment. Back-patch to 9.6, where this
last applies cleanly. In master, the select_parallel test dies with
SIGBUS on "Oracle Solaris 10 1/13 s10s_u11wos_24a SPARC", building
32-bit with gcc 4.9.2. In 9.6 and 9.5, the buffers in question happen
to be sufficiently-aligned, and this change is mere insurance against
future 9.6 changes or extension code compromising that.
Amit Kapila, with slight changes by me. The larger patch series of
which this is a part has been reviewed and tested by Álvaro Herrera,
Ashutosh Sharma, Mark Kirkwood, Jeff Janes, and Jesper Pedersen.
Andres Freund [Wed, 1 Mar 2017 04:41:44 +0000 (20:41 -0800)]
Fix assertion failure due to over-eager code deduplication.
In the previous commit I'd made MemoryContextContains() use
GetMemoryChunkContext(), but that causes trouble when the passed
pointer isn't allocated in any memory context; that's probably
something we shouldn't do, but the previous commit isn't the place
for a "policy" change.
Andres Freund [Tue, 28 Feb 2017 18:36:29 +0000 (10:36 -0800)]
Overhaul memory management README.
The README was written as a "historical account", and that style
hasn't aged particularly well. Rephrase it to describe the current
situation, instead of having various version specific comments.
This also updates the description of how allocated chunks are
associated with their corresponding context, the method of which has
changed in the preceding commit.
Author: Andres Freund
Discussion: https://postgr.es/m/20170228074420.aazv4iw6k562mnxg@alap3.anarazel.de
Andres Freund [Tue, 28 Feb 2017 07:32:22 +0000 (23:32 -0800)]
Reduce size of common allocation header.
The new slab allocator needs different per-allocation information than
the classical aset.c. The definition in 58b25e981 wasn't sufficiently
careful on 32-bit platforms with 8-byte alignment, leading to buildfarm
failures. That's not entirely easy to fix by just adjusting the
definition.
As slab.c doesn't actually need the size part(s) of the common header
(all chunks are equally sized, after all), it seems better to instead
reduce the header to the part needed by all allocators, namely which
context an allocation belongs to. That has the advantage of reducing
the overhead of slab allocations, and also allows for more flexibility
in future allocators.
To avoid spreading the logic about accessing a chunk's context around,
centralize it in GetMemoryChunkContext(), which allows us to delete a
good number of lines.
A followup commit will revise the mmgr/README portion about
StandardChunkHeader, and more.
Author: Andres Freund
Discussion: https://postgr.es/m/20170228074420.aazv4iw6k562mnxg@alap3.anarazel.de
Tom Lane [Mon, 27 Feb 2017 22:20:34 +0000 (17:20 -0500)]
Allow index AMs to return either HeapTuple or IndexTuple format during IOS.
Previously, only IndexTuple format was supported for the output data of
an index-only scan. This is fine for btree, which is just returning a
verbatim index tuple anyway. It's not so fine for SP-GiST, which can
return reconstructed data that's much larger than a page.
To fix, extend the index AM API so that index-only scan data can be
returned in either HeapTuple or IndexTuple format. There's other ways
we could have done it, but this way avoids an API break for index AMs
that aren't concerned with the issue, and it costs little except a couple
more fields in IndexScanDescs.
I changed both GiST and SP-GiST to use the HeapTuple method. I'm not
very clear on whether GiST can reconstruct data that's too large for an
IndexTuple, but that seems possible, and it's not much of a code change to
fix.
Per a complaint from Vik Fearing. Reviewed by Jason Li.
Robert Haas [Mon, 27 Feb 2017 17:26:34 +0000 (22:56 +0530)]
hash: Refactor overflow page allocation.
As with commit b0f18cb77f50a54e997d857d592f6a511617f52c, the goal
here is to move all of the related page modifications to a single
section of code, in preparation for adding write-ahead logging.
Amit Kapila, with slight changes by me. The larger patch series
of which this is a part has been reviewed and tested by Álvaro
Herrera, Ashutosh Sharma, Mark Kirkwood, Jeff Janes, and Jesper
Pedersen, all of whom should also have been credited in the
previous commit message.
Robert Haas [Mon, 27 Feb 2017 17:04:21 +0000 (22:34 +0530)]
hash: Refactor bucket squeeze code.
In preparation for adding write-ahead logging to hash indexes,
refactor _hash_freeovflpage and _hash_squeezebucket so that all
related page modifications happen in a single section of code. The
previous coding assumed that it would be fine to move tuples one at a
time, and also that the various operations involved in freeing an
overflow page didn't necessarily all need to be done together, all
of which is true if you don't care about write-ahead logging.
Tom Lane [Mon, 27 Feb 2017 16:20:22 +0000 (11:20 -0500)]
Remove PL/Tcl's "module" facility.
PL/Tcl has long had a facility whereby Tcl code could be autoloaded from
a database table named "pltcl_modules". However, nobody is using it, as
evidenced by the recent discovery that it's never been fixed to work with
standard_conforming_strings turned on. Moreover, it's rather shaky from
a security standpoint, and the table design is very old and crufty (partly
because it dates from before we had TOAST). A final problem is that
because the table-population scripts depend on the Tcl client library
Pgtcl, which we removed from the core distribution in 2004, it's
impossible to create a self-contained regression test for the feature.
Rather than try to surmount these problems, let's just remove it.
A follow-on patch will provide a way to execute user-defined
initialization code, similar to features that exist in plperl and plv8.
With that, it will be possible to implement this feature or similar ones
entirely in userspace, which is where it belongs.
Peter Eisentraut [Mon, 27 Feb 2017 13:30:06 +0000 (08:30 -0500)]
chomp PQerrorMessage() in backend uses
PQerrorMessage() returns an error message with a trailing newline, but
in backend use (dblink, postgres_fdw, libpqwalreceiver), we want to have
the error message without that for emitting via ereport(). To simplify
that, add a function pchomp() that returns a pstrdup'ed string with the
trailing newline characters removed.
Andres Freund [Mon, 27 Feb 2017 11:41:44 +0000 (03:41 -0800)]
Use the new "Slab" context for some allocations in reorderbuffer.h.
Note that this change alone does not yet fully address the performance
problems triggering this work; a large portion of the slowdown is
caused by the tuple allocator, which isn't converted to the new
allocator. It would be possible to do so, but using evenly sized
objects, like both the current implementation in reorderbuffer.c and
slab.c, wastes a fair amount of memory. A later patch by Tomas will
introduce a better approach.
Author: Tomas Vondra
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/d15dff83-0b37-28ed-0809-95a5cc7292ad@2ndquadrant.com
Andres Freund [Mon, 27 Feb 2017 11:41:44 +0000 (03:41 -0800)]
Add "Slab" MemoryContext implementation for efficient equal-sized allocations.
The default general purpose aset.c style memory context is not a great
choice for allocations that are all going to be evenly sized,
especially when those objects aren't small, and have varying
lifetimes. There tends to be a lot of fragmentation, and larger
allocations always go directly to libc rather than having their cost
amortized over several pallocs.
These problems led to the introduction of ad-hoc slab allocators in
reorderbuffer.c. But it turns out that the simplistic implementation
leads to problems when a lot of objects are allocated and freed, as
aset.c is still the underlying implementation. Especially freeing can
easily run into O(n^2) behavior in aset.c.
While the O(n^2) behavior in aset.c can, and probably will, be
addressed, custom allocators for this usage pattern are more efficient
both in space and time.
This allocator is for evenly sized allocations, and supports both
cheap allocations and freeing, without fragmenting significantly. It
does so by allocating evenly sized blocks via malloc(), and carves
them into chunks that can be used for allocations. In order to
release blocks to the OS as early as possible, chunks are allocated
from the fullest block that still has free objects, increasing the
likelihood of a block being entirely unused.
A subsequent commit uses this in reorderbuffer.c, but a further
allocator is needed to resolve the performance problems triggering
this work.
There are likely further potential uses of this allocator besides
reorderbuffer.c.
There's potential further optimizations of the new slab.c, in
particular the array of freelists could be replaced by a more
intelligent structure - but for now this looks more than good enough.
Author: Tomas Vondra, editorialized by Andres Freund
Reviewed-By: Andres Freund, Petr Jelinek, Robert Haas, Jim Nasby
Discussion: https://postgr.es/m/d15dff83-0b37-28ed-0809-95a5cc7292ad@2ndquadrant.com
Andres Freund [Mon, 27 Feb 2017 11:41:44 +0000 (03:41 -0800)]
Make useful infrastructure from aset.c generally available.
An upcoming patch introduces a new type of memory context. To avoid
duplicating debugging infrastructure within aset.c, move useful pieces
to memdebug.[ch].
While touching aset.c, fix printf format code in AllocFree* debug
macros.
Author: Tomas Vondra
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/b3b2245c-b37a-e1e5-ebc4-857c914bc747@2ndquadrant.com