Tom Lane [Mon, 9 Jan 2017 22:47:02 +0000 (17:47 -0500)]
Fix error handling in pltcl_returnnext.
We can't throw elog(ERROR) out of a Tcl command procedure; we have
to catch the error and return TCL_ERROR to the Tcl interpreter.
pltcl_returnnext failed to meet this requirement, so that errors
detected by pltcl_build_tuple_result or other functions called here
led to longjmp'ing out of the Tcl interpreter and thereby leaving it
in a bad state. Use the existing subtransaction support to prevent
that. Oversight in commit 26abb50c4, found more or less accidentally
by the buildfarm thanks to the tests added in 961bed020.
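The shape of the fix, roughly (a simplified sketch of the usual PL error-trapping
idiom, not the committed code; the real function goes through pltcl's shared
subtransaction helpers):

    MemoryContext oldcontext = CurrentMemoryContext;
    ResourceOwner oldowner = CurrentResourceOwner;

    BeginInternalSubTransaction(NULL);
    PG_TRY();
    {
        /* work that may elog(ERROR), e.g. pltcl_build_tuple_result() */
        ReleaseCurrentSubTransaction();
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        /* save the error data, then clean up the failed subtransaction */
        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();
        FlushErrorState();
        RollbackAndReleaseCurrentSubTransaction();
        MemoryContextSwitchTo(oldcontext);
        CurrentResourceOwner = oldowner;

        /* report the error to Tcl instead of longjmp'ing out of it */
        Tcl_SetObjResult(interp, Tcl_NewStringObj(edata->message, -1));
        return TCL_ERROR;
    }
    PG_END_TRY();

    return TCL_OK;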
Alvaro Herrera [Mon, 9 Jan 2017 22:26:58 +0000 (19:26 -0300)]
Fix ALTER TABLE / SET TYPE for irregular inheritance
If inherited tables don't have exactly the same schema, the USING clause
in an ALTER TABLE / SET DATA TYPE misbehaves when applied to the
children tables since commit 9550e8348b79. Starting with that commit,
the attribute numbers in the USING expression are fixed during parse
analysis. This can lead to bogus errors being reported during
execution, such as:
ERROR: attribute 2 has wrong type
DETAIL: Table has type smallint, but query expects integer.
Since it wouldn't do to revert to the original coding, we now apply a
transformation to map the attribute numbers to the correct ones for each
child.
Reported by Justin Pryzby
Analysis by Tom Lane; patch by me.
Discussion: https://postgr.es/m/20170102225618.GA10071@telsasoft.com
Alvaro Herrera [Mon, 9 Jan 2017 21:19:29 +0000 (18:19 -0300)]
BRIN revmap pages are not standard pages ...
... and therefore we ought not to tell XLogRegisterBuffer the opposite,
when writing XLog for a brin update that moves the index tuple to a
different page. Otherwise, xlog insertion would try to "compress the
hole" when producing a full-page image for it; but since we don't update
pd_lower/upper, the hole covers the whole page. On WAL replay, the
revmap page becomes empty, and so the portion of the index it maps becomes
useless and needs to be recomputed.
This is low-probability: a BRIN update only moves an index tuple to a
different page when the summary tuple is larger than the existing one,
which doesn't happen with fixed-width datatypes. Also, the update must be
the first modification of the revmap page since a checkpoint, so that a
full-page image is emitted at all.
Report and patch: Kuntal Ghosh
The bug is alleged to have been detected by a WAL-consistency-checking tool.
Discussion: https://postgr.es/m/CAGz5QCJ=00UQjScSEFbV=0qO5ShTZB9WWz_Fm7+Wd83zPs9Geg@mail.gmail.com
I posted a test case demonstrating the problem, but I'm refraining from
adding it to the test suite; if the WAL consistency tool makes it in,
that will be a better way to catch any regression here. (We should
definitely have something that causes not-same-page updates, though.)
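For illustration, the difference is just in the flags passed when registering
the buffer for the WAL record (sketch, not the literal patch; block IDs here
are arbitrary):

    /* a regular index page: pd_lower/pd_upper are valid, hole can be skipped */
    XLogRegisterBuffer(0, buf, REGBUF_STANDARD);

    /* a BRIN revmap page: pd_lower/pd_upper are not maintained, so don't
     * let full-page images "compress the hole" */
    XLogRegisterBuffer(1, revmapbuf, 0);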
Tom Lane [Sat, 7 Jan 2017 21:02:16 +0000 (16:02 -0500)]
Get rid of ParseState.p_value_substitute; use a columnref hook instead.
I noticed that p_value_substitute, which is a single-purpose kluge I added
in 2002 (commit b0422b215), could be replaced by having domainAddConstraint
install a parser hook that looks for the name "value". The parser hook
code only dates back to 2009, so it's not surprising that we had to kluge
this in 2002, but we can do it more cleanly now.
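In outline, the new arrangement looks something like this (hook and variable
names here are illustrative, not necessarily the ones in the patch):

    /* resolve the otherwise-unknown column name "value" to the domain's
     * CoerceToDomainValue placeholder */
    static Node *
    domain_value_columnref_hook(ParseState *pstate, ColumnRef *cref, Node *var)
    {
        if (var == NULL && list_length(cref->fields) == 1)
        {
            char   *colname = strVal(linitial(cref->fields));

            if (strcmp(colname, "value") == 0)
                return copyObject(pstate->p_ref_hook_state);
        }
        return NULL;            /* not ours; let normal resolution proceed */
    }

    /* ... and in domainAddConstraint(): */
    pstate->p_post_columnref_hook = domain_value_columnref_hook;
    pstate->p_ref_hook_state = (void *) domVal;     /* the CoerceToDomainValue */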
Tom Lane [Sat, 7 Jan 2017 20:34:28 +0000 (15:34 -0500)]
Improve documentation of struct ParseState.
I got annoyed about how some fields of ParseState were documented in the
struct's block comment and some weren't; not all of the latter are trivial.
Fix that. Also reorder a couple of fields that seem to have been placed
rather randomly, or maybe with an idea of avoiding padding space; but there
are never so many ParseStates in existence at one time that we ought to
value pad space over readability.
Stephen Frost [Fri, 6 Jan 2017 21:29:31 +0000 (16:29 -0500)]
Add basic pg_dumpall/pg_restore TAP tests
For reasons unknown, pg_dumpall and pg_restore managed to escape the
basic set of TAP tests that were added for pg_dump in 6bd356c3, so
let's get them added now. A few minor adjustments are also made to the
dump/restore tests to improve code coverage for pg_restore/pg_dumpall.
Tom Lane [Fri, 6 Jan 2017 21:21:57 +0000 (16:21 -0500)]
Merge two copies of tuple-building code in pltcl.c.
Make pltcl_trigger_handler() construct modified tuples using
pltcl_build_tuple_result(), rather than its own copy of essentially
the same logic. This results in slightly different message wording for
the error cases, and in one case a different SQLSTATE, but it seems
unlikely that any existing applications are depending on any of those
details.
While at it, fix a typo in commit 26abb50c4: pltcl_build_tuple_result was
applying encoding conversion in the wrong direction. That would be a
back-patchable bug fix, except the code hasn't shipped yet.
Stephen Frost [Fri, 6 Jan 2017 20:27:47 +0000 (15:27 -0500)]
Protect against NULL-dereference in pg_dump
findTableByOid() is allowed to return NULL and we should therefore be
checking for that case. getOwnedSeqs() and dumpSequence() shouldn't
ever actually see this happen, but given odd circumstances it might, and
commit f9e439b1 probably shouldn't have removed that check.
Pointed out by Coverity. Initial patch from Michael Paquier.
Back-patch to 9.6, where that commit had removed the check.
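The fix is essentially just to re-check the lookup result before using it;
a sketch with illustrative variable names:

    TableInfo  *owning_tab = findTableByOid(seqtbloid);   /* may return NULL */

    if (owning_tab == NULL)
        exit_horribly(NULL,
                      "failed sanity check, parent table with OID %u of sequence with OID %u not found\n",
                      seqtbloid, seqoid);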
Tom Lane [Fri, 6 Jan 2017 19:12:52 +0000 (14:12 -0500)]
Invalidate cached plans on FDW option changes.
This fixes problems where a plan must change but fails to do so,
as seen in a bug report from Rajkumar Raghuwanshi.
For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of
forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER
and ALTER SERVER, just flush the whole plan cache on any change in
pg_foreign_data_wrapper or pg_foreign_server. That matches the way
we handle some other low-probability cases such as opclass changes, and
it's unclear that the case arises often enough to be worth working harder.
Besides, that gives a patch that is simple enough to back-patch with
confidence.
Back-patch to 9.3. In principle we could apply the code change to 9.2 as
well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that
anyone is doing anything exciting enough with FDWs that far back to need
this desperately, and (c) the patch doesn't apply cleanly.
Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh
Bapat, who each contributed substantial changes as well.
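The ALTER FOREIGN DATA WRAPPER / ALTER SERVER part amounts to wiring two more
syscaches into the plan cache's existing invalidation machinery; the gist
(not the full patch):

    /* in InitPlanCache(): flush all cached plans whenever one of these
     * catalogs changes, just as we already do for pg_opclass and friends */
    CacheRegisterSyscacheCallback(FOREIGNDATAWRAPPEROID, PlanCacheSysCallback, (Datum) 0);
    CacheRegisterSyscacheCallback(FOREIGNSERVEROID, PlanCacheSysCallback, (Datum) 0);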
This commit purported to use a variable hash seed for Partial
HashAggregate, but actually did the opposite - it made us use a
variable seed for any HashAggregate that is NOT partial. Woops.
Robert Haas [Thu, 5 Jan 2017 18:12:16 +0000 (13:12 -0500)]
Fix possible leak of semaphore count.
Commit 4aec49899e5782247e134f94ce1c6ee926f88e1c reorganized the order
of operations here so that we no longer increment the number of "extra
waits" before locking the semaphore, but it did not change the
starting value of extraWaits from 0 to -1 to compensate. In the worst
case, this could leak a semaphore count, but that seems to be unlikely
in practice.
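For context, here is the general shape of the idiom, paraphrased rather than
quoted: every semaphore wakeup we consume that wasn't "ours" must be handed
back afterwards, and if the counter is bumped unconditionally after each wait
it has to start at -1 so the final wakeup isn't treated as extra.

    int extraWaits = -1;            /* -1 because the loop also counts the
                                     * wakeup that actually releases us */

    for (;;)
    {
        PGSemaphoreLock(&MyProc->sem);
        extraWaits++;
        if (wakeupConditionSatisfied)   /* illustrative placeholder */
            break;
    }

    /* give back any wakeups we absorbed that belonged to someone else */
    while (extraWaits-- > 0)
        PGSemaphoreUnlock(&MyProc->sem);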
Robert Haas [Thu, 5 Jan 2017 17:27:09 +0000 (12:27 -0500)]
Fix possible crash reading pg_stat_activity.
With the old code, a backend that read pg_stat_activity without ever
having executed a parallel query might see a backend in the midst of
executing one waiting on a DSA LWLock, resulting in a crash. The
solution is for backends to register the tranche at startup time, not
the first time a parallel query is executed.
Report by Andreas Seltenreich. Patch by me, reviewed by Thomas Munro.
Tom Lane [Thu, 5 Jan 2017 16:33:51 +0000 (11:33 -0500)]
Fix handling of empty arrays in array_fill().
array_fill(..., array[0]) produced an empty array, which is probably
what users expect, but it was a one-dimensional zero-length array
which is not our standard representation of empty arrays. Also, for
no very good reason, it rejected empty input arrays; that case should
be allowed and produce an empty output array.
In passing, remove the restriction that the input array(s) have lower
bound 1. That seems rather pointless, and it would have needed extra
complexity to make the check deal with empty input arrays.
Per bug #14487 from Andrew Gierth. It's been broken all along, so
back-patch to all supported branches.
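The standard representation of an empty array is zero-dimensional, which the
backend builds with construct_empty_array(); a sketch of the intended
behavior (variable names illustrative):

    /* array_fill(..., array[0]) and an empty input array should both yield
     * the canonical zero-dimensional empty array */
    if (total_elements == 0)
        PG_RETURN_ARRAYTYPE_P(construct_empty_array(element_type));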
Tom Lane [Wed, 4 Jan 2017 23:00:11 +0000 (18:00 -0500)]
Handle OID column inheritance correctly in ALTER TABLE ... INHERIT.
Inheritance operations must treat the OID column, if any, much like
regular user columns. But MergeAttributesIntoExisting() neglected to
do that, leading to weird results after a table with OIDs is associated
to a parent with OIDs via ALTER TABLE ... INHERIT.
Report and patch by Amit Langote, reviewed by Ashutosh Bapat, some
adjustments by me. It's been broken all along, so back-patch to
all supported branches.
Robert Haas [Wed, 4 Jan 2017 21:30:16 +0000 (16:30 -0500)]
Improve documentation of timestamp internal representation.
Be more clear that we represent timestamps in microseconds when
integer timestamps are used, and in fractional seconds when
floating-point timestamps are used.
Robert Haas [Wed, 4 Jan 2017 19:36:34 +0000 (14:36 -0500)]
Fix reporting of constraint violations for table partitioning.
After a tuple is routed to a partition, it has been converted from the
root table's row type to the partition's row type. ExecConstraints
needs to report the failure using the original tuple and the parent's
tuple descriptor rather than the ones for the selected partition.
Tom Lane [Wed, 4 Jan 2017 18:36:44 +0000 (13:36 -0500)]
Prefer int-wide pg_atomic_flag over char-wide when using gcc intrinsics.
configure can only probe the existence of gcc intrinsics, not how well
they're implemented, and unfortunately the answer is sometimes "badly".
In particular we've found that multiple compilers fail to implement
char-width __sync_lock_test_and_set() correctly on PPC; and even a correct
implementation would necessarily be pretty inefficient, since that hardware
has only a word-wide primitive to work with.
Given the knowledge we've accumulated in s_lock.h, it appears that it's
best to rely on int-width TAS operations on most non-Intel architectures.
Hence, pick int not char when both are nominally available to us in
generic-gcc.h (note that that code is not used for x86[_64]).
Back-patch to fix regression test failures on FreeBSD/PPC. Ordinarily
back-patching a change like this would be verboten because of ABI breakage.
But since pg_atomic_flag is not yet used in any Postgres data structure,
there's no ABI to break. It seems safer to back-patch to avoid possible
gotchas, if someday we do back-patch something that uses pg_atomic_flag.
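In essence the choice is which type backs pg_atomic_flag in generic-gcc.h; a
stripped-down sketch of the int-width variant (the real header has more
configure conditionals):

    /* int-width test-and-set: well supported even where the char-width
     * __sync_lock_test_and_set() is broken or slow (e.g. PPC) */
    typedef struct pg_atomic_flag
    {
        volatile int value;
    } pg_atomic_flag;

    static inline bool
    pg_atomic_test_set_flag_impl(volatile pg_atomic_flag *ptr)
    {
        /* true if we were the one to set the flag */
        return __sync_lock_test_and_set(&ptr->value, 1) == 0;
    }

    static inline void
    pg_atomic_clear_flag_impl(volatile pg_atomic_flag *ptr)
    {
        __sync_lock_release(&ptr->value);
    }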
Robert Haas [Wed, 4 Jan 2017 18:05:29 +0000 (13:05 -0500)]
Move partition_tuple_slot out of EState.
Commit 2ac3ef7a01df859c62d0a02333b646d65eaec5ff added a TupleTableSlot
for partition tuples to EState (es_partition_tuple_slot), but it's
more logical to have it as part of ModifyTableState
(mt_partition_tuple_slot) and CopyState (partition_tuple_slot).
Tom Lane [Wed, 4 Jan 2017 17:43:52 +0000 (12:43 -0500)]
Re-allow SSL passphrase prompt at server start, but not thereafter.
Leave OpenSSL's default passphrase collection callback in place during
the first call of secure_initialize() in server startup. Although that
doesn't work terribly well in daemon contexts, some people feel we should
not break it for anyone who was successfully using it before. We still
block passphrase demands during SIGHUP, meaning that you can't adjust SSL
configuration on-the-fly if you used a passphrase, but this is no worse
than what it was before commit de41869b6. And we block passphrase demands
during EXEC_BACKEND reloads; that behavior wasn't useful either, but at
least now it's documented.
Tweak some related log messages for more readability, and avoid issuing
essentially duplicate messages about reload failure caused by a passphrase.
Robert Haas [Wed, 4 Jan 2017 17:03:40 +0000 (12:03 -0500)]
Update obsolete comments in lwlock.h.
The typical size of an LWLock is now 16 bytes even on 64-bit platforms,
and the size of slock_t is now irrelevant. But pg_atomic_uint32 can
(perhaps surprisingly) still be larger than 4 bytes, so there's still
some marginal point to allowing LWLOCK_MINIMAL_SIZE == 64.
Simon Riggs [Wed, 4 Jan 2017 16:50:23 +0000 (16:50 +0000)]
Allow PostgresNode.pm tests to wait for catchup
Add methods to the core test framework PostgresNode.pm to allow us to
test that standby nodes have caught up with the master, as well as
basic LSN handling. Used in tests recovery/t/001_stream_rep.pl and
recovery/t/004_timeline_switch.pl
Craig Ringer, reviewed by Aleksander Alekseev and Simon Riggs
Better fix for sequence access in hot standby test
The purpose of the test was to check access to the sequence relation on
a hot standby, so change the test to read a different column from the
sequence, instead of just reading the catalog.
Magnus Hagander [Wed, 4 Jan 2017 09:48:30 +0000 (10:48 +0100)]
Attempt to handle pending-delete files on Windows
These files are deleted but not yet gone from the filesystem. Operations
on them will return ERROR_DELETE_PENDING.
With this we start treating that as ENOENT, meaning the file does not
exist (which is the state it will soon reach). This should be safe in
every case except when we try to recreate a file with exactly the same
name. This is an operation that PostgreSQL does very seldom, so
hopefully that won't happen much -- and even if it does, this treatment
should be no worse than treating it as an unhandled error.
We've been unable to reproduce the bug reliably, so pushing this to
master to get buildfarm coverage and other testing. Once it's proven to
be stable, it should be considered for backpatching.
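The mapping is roughly the following (sketch of the idea; the real change
sits in the Win32 file-access wrappers and is somewhat more involved):

    /* Windows reports ERROR_DELETE_PENDING for a file that has been deleted
     * but whose handles haven't all been closed yet; treat it as if the
     * file were already gone */
    if (GetLastError() == ERROR_DELETE_PENDING)
    {
        errno = ENOENT;
        return -1;
    }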
Tom Lane [Tue, 3 Jan 2017 17:33:29 +0000 (12:33 -0500)]
Disable prompting for passphrase while (re)loading SSL config files.
OpenSSL's default behavior when loading a passphrase-protected key file
is to open /dev/tty and demand the password from there. It was kinda
sorta okay to allow that to happen at server start, but really that was
never workable in standard daemon environments. And it was a complete
fail on Windows, where the same thing would happen at every backend launch.
Yesterday's commit de41869b6 put the final nail in the coffin by causing
that to happen at every SIGHUP; even if you've still got a terminal acting
as the server's TTY, having the postmaster freeze until you enter the
passphrase again isn't acceptable.
Hence, override the default behavior with a callback that returns an empty
string, ensuring failure. Change the documentation to say that you can't
have a passphrase-protected server key, period.
If we can think of a production-grade way of collecting a passphrase from
somewhere, we might do that once at server startup and use this callback
to feed it to OpenSSL, but it's far from clear that anyone cares enough
to invest that much work in the feature. The lack of complaints about
the existing fractionally-baked behavior suggests nobody's using it anyway.
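The override amounts to installing a callback that refuses to supply
anything, so key loading fails cleanly instead of prompting (sketch; the
function name is illustrative):

    /* return an empty passphrase, guaranteeing that key loading fails
     * rather than hanging on a /dev/tty prompt */
    static int
    dummy_ssl_passwd_cb(char *buf, int size, int rwflag, void *userdata)
    {
        if (size > 0)
            buf[0] = '\0';
        return 0;
    }

    /* ... and during SSL_CTX setup: */
    SSL_CTX_set_default_passwd_cb(context, dummy_ssl_passwd_cb);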
Remove bogus notice that older clients might not work with MD5 passwords.
That was written when we still had "crypt" authentication, and it was
referring to the fact that an older client might support "crypt"
authentication but not "md5". But we haven't supported "crypt" for years.
(As soon as we add a new authentication mechanism that doesn't work with
MD5 hashes, we'll need a similar notice again. But this text as it's worded
now is just wrong.)
Tom Lane [Tue, 3 Jan 2017 02:37:12 +0000 (21:37 -0500)]
Allow SSL configuration to be updated at SIGHUP.
It is no longer necessary to restart the server to enable, disable,
or reconfigure SSL. Instead, we just create a new SSL_CTX struct
(by re-reading all relevant files) whenever we get SIGHUP. Testing
shows that this is fast enough that it shouldn't be a problem.
In conjunction with that, downgrade the logic that complains about
pg_hba.conf "hostssl" lines when SSL isn't active: now that's just
a warning condition not an error.
An issue that still needs to be addressed is what shall we do with
passphrase-protected server keys? As this stands, the server would
demand the passphrase again on every SIGHUP, which is certainly
impractical. But the case was only barely supported before, so that
does not seem a sufficient reason to hold up committing this patch.
Andreas Karlsson, reviewed by Michael Banck and Michael Paquier
Tom Lane [Mon, 2 Jan 2017 18:41:51 +0000 (13:41 -0500)]
Use clock_gettime(), if available, in instr_time measurements.
The advantage of clock_gettime() is that the API allows the result to
be precise to nanoseconds, not just microseconds as in gettimeofday().
Now that it's routinely possible to do tens of plan node executions
in 1us, we really need more precision than gettimeofday() can offer
for EXPLAIN ANALYZE to accumulate statistics with.
Some research shows that clock_gettime() is available on pretty nearly
every modern Unix-ish platform, and as far as I have been able to test,
it has about the same execution time as gettimeofday(), so there's no
loss in switching over. (By the same token, this doesn't do anything
to fix the fact that we really wish clock readings were faster. But
there's enough win here to justify changing anyway.)
A small side benefit is that on most platforms, we can use CLOCK_MONOTONIC
instead of CLOCK_REALTIME and thereby render EXPLAIN impervious to
concurrent resets of the system clock. (This means that code must not
assume that the contents of struct instr_time have any well-defined
interpretation as timestamps, but really that was true before.)
Some platforms offer nonstandard clock IDs that might be of interest.
This patch knows we should use CLOCK_MONOTONIC_RAW on macOS, because it
provides more precision and is faster to read than their CLOCK_MONOTONIC.
If there turn out to be many more cases where we need special rules, it
might be appropriate to handle the selection of clock ID in configure,
but for the moment that doesn't seem worth the trouble.
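The heart of the change is which clock instr_time reads; approximately
(simplified from the instr_time.h logic):

    #include <time.h>

    typedef struct timespec instr_time;

    #if defined(__darwin__) && defined(CLOCK_MONOTONIC_RAW)
    #define PG_INSTR_CLOCK  CLOCK_MONOTONIC_RAW     /* finer and faster on macOS */
    #elif defined(CLOCK_MONOTONIC)
    #define PG_INSTR_CLOCK  CLOCK_MONOTONIC         /* immune to clock resets */
    #else
    #define PG_INSTR_CLOCK  CLOCK_REALTIME
    #endif

    #define INSTR_TIME_SET_CURRENT(t)  ((void) clock_gettime(PG_INSTR_CLOCK, &(t)))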
Tom Lane [Mon, 2 Jan 2017 17:26:03 +0000 (12:26 -0500)]
In pgbench logging, avoid assuming that instr_times match Unix timestamps.
For aggregated logging, pgbench supposed that printing the integer part of
INSTR_TIME_GET_DOUBLE() would produce a Unix timestamp. That was already
broken on Windows, and it's about to get broken on most other platforms as
well. As in commit 74baa1e3b, we can remove the entanglement at the price
of one extra syscall per transaction; though here it seems more convenient
to use time(NULL) instead of gettimeofday(), since we only need
integral-second precision.
I took the time to do some wordsmithing on the documentation about
pgbench's logging features, too.
Tom Lane [Sun, 1 Jan 2017 20:17:08 +0000 (15:17 -0500)]
Avoid assuming that instr_time == struct timeval in pgbench logging.
This code was presuming undue familiarity with the contents of the
instr_time struct. That was already broken on Windows, and it's about
to get broken on most other platforms as well. The simplest solution
that preserves the current output definition is to just do our own
gettimeofday() call here. Realistically, the extra cost is probably
negligible in comparison to everything else that's going on in a
pgbench transaction, so it's not worth sweating over.
On Windows, the precision delivered by gettimeofday() is lower than
one could wish, but this is still a big improvement over printing
zeroes, as the code did before.
Tom Lane [Sat, 31 Dec 2016 23:39:08 +0000 (18:39 -0500)]
Fix unstable regression test results.
Commit 2ac3ef7a0 added a query with an underdetermined output row order;
it has failed multiple times in the buildfarm since then. Add an ORDER BY
to fix. Also, don't rely on a DROP CASCADE to drop in a well-determined
order; that hasn't failed yet but I don't trust it much, and we're not
saving any typing by using CASCADE anyway.
Tom Lane [Thu, 29 Dec 2016 21:57:41 +0000 (16:57 -0500)]
Remove manual breaks in NodeTag assignments to fix duplicate tag numbers.
Commit f0e44751d added new node tags at a place in the tag numbering
where there was no daylight left before the next hard-coded number,
resulting in some duplicate tag assignments. This doesn't seem to have
caused any big problem so far, but it's surely trouble waiting to happen.
We could adjust the manually assigned breakpoints to make more room,
but that just leaves the same hazard waiting to strike again in future.
What seems like a better idea is to get rid of the manual assignments
and leave NodeTags to be automatically assigned, consecutively from one
on up. This means that any change in the tag list forces a backend-wide
recompile, but realistically that's usually needed anyway.
Peter Eisentraut [Wed, 28 Dec 2016 17:00:00 +0000 (12:00 -0500)]
Make more use of RoleSpec struct
Most code was casting this through a generic Node. By declaring
everything as RoleSpec appropriately, we can remove a bunch of casts and
ad-hoc node type checking.
Tom Lane [Tue, 27 Dec 2016 20:43:54 +0000 (15:43 -0500)]
Fix interval_transform so it doesn't throw away non-no-op casts.
interval_transform() contained two separate bugs that caused it to
sometimes mistakenly decide that a cast from interval to restricted
interval is a no-op and throw it away.
First, it was wrong to rely on dt.h's field type macros to have an
ordering consistent with the field's significance; in one case they do
not. This led to mistakenly treating YEAR as less significant than MONTH,
so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly
discarded.
Second, fls(1<<k) produces k+1 not k, so comparing its output directly
to SECOND was wrong. This led to supposing that a cast to INTERVAL
MINUTE was really a cast to INTERVAL SECOND and so could be discarded.
To fix, get rid of the use of fls(), and make a function based on
intervaltypmodout to produce a field ID code adapted to the need here.
Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform
functions were introduced, because this code was born broken.
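To see the off-by-one concretely: fls() returns the 1-based position of the
highest set bit, so for a single-bit mask 1<<k it answers k+1. A standalone
illustration (my_fls stands in for the platform fls()):

    #include <stdio.h>

    /* 1-based position of the most significant set bit; 0 for x == 0 */
    static int
    my_fls(unsigned int x)
    {
        int pos = 0;

        while (x != 0)
        {
            pos++;
            x >>= 1;
        }
        return pos;
    }

    int
    main(void)
    {
        int k;

        /* prints 1, 2, 3, 4 -- i.e. k+1, not k */
        for (k = 0; k < 4; k++)
            printf("fls(1<<%d) = %d\n", k, my_fls(1u << k));
        return 0;
    }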
Andrew Dunstan [Tue, 27 Dec 2016 16:23:46 +0000 (11:23 -0500)]
Explain unaccounted for space in pgstattuple.
In addition to space accounted for by tuple_len, dead_tuple_len and
free_space, the table_len includes page overhead, the item pointers
table and padding bytes.
Magnus Hagander [Tue, 27 Dec 2016 09:37:11 +0000 (10:37 +0100)]
Don't rename .partial files in pg_receivexlog if an error occurred
In 56c7d8d the behavior of keeping .partial segments around
(considered corrupt) when a connection failure occurs was
accidentally removed. This would lead to an incomplete segment
being considered complete.
Tom Lane [Mon, 26 Dec 2016 19:58:02 +0000 (14:58 -0500)]
Remove triggerable Assert in hashname().
hashname() asserted that the key string it is given is shorter than
NAMEDATALEN. That should surely always be true if the input is in fact a
regular value of type "name". However, for reasons of coding convenience,
we allow plain old C strings to be treated as "name" values in many places.
Some SQL functions accept arbitrary "text" inputs, convert them to C
strings, and pass them otherwise-untransformed to syscache lookups for name
columns, allowing an overlength input value to trigger hashname's Assert.
This would be a DoS problem, except that it only happens in assert-enabled
builds which aren't recommended for production. In a production build,
you'll just get a name lookup error, since regardless of the hash value
computed by hashname, the later equality comparison checks can't match.
Likewise, if the catalog lookup is done by seqscan or indexscan searches,
there will just be a lookup error, since the name comparison functions
don't contain any similar length checks, and will see an overlength input
as unequal to any stored entry.
After discussion we concluded that we should simply remove this Assert.
It's inessential to hashname's own functionality, and having such an
assertion in only some paths for name lookup is more of a foot-gun than
a useful check. There may or may not be a case for the affected callers
to do something other than let the name lookup fail, but we'll consider
that separately; in any case we probably don't want to change such
behavior in the back branches.
Per report from Tushar Ahuja. Back-patch to all supported branches.
Tom Lane [Sun, 25 Dec 2016 21:04:31 +0000 (16:04 -0500)]
Fix incorrect error reporting for duplicate data in \crosstabview.
\crosstabview's complaint about multiple entries for the same crosstab
cell quoted the wrong row and/or column values. It would accidentally
appear to work if the data had been in strcmp() order to start with,
which probably explains how we missed noticing this during development.
This could be fixed in more than one way, but the way I chose was to
hang onto both result pointers from bsearch() and use those to get at
the value names.
In passing, avoid casting away const in the bsearch comparison functions.
No bug there, just poor style.
Per bug #14476 from Tomonari Katsumata. Back-patch to 9.6 where
\crosstabview was introduced.
Stephen Frost [Sat, 24 Dec 2016 06:41:59 +0000 (01:41 -0500)]
pg_dumpall: Include --verbose option in --help output
The -v/--verbose option was not included in the output from --help for
pg_dumpall even though it's in the pg_dumpall documentation and has
apparently been around since pg_dumpall was reimplemented in C in 2002.
Stephen Frost [Sat, 24 Dec 2016 02:01:29 +0000 (21:01 -0500)]
Fix tab completion in psql for ALTER DEFAULT PRIVILEGES
When providing tab completion for ALTER DEFAULT PRIVILEGES, we are
including the list of roles as possible options for completion after the
GRANT or REVOKE. Further, we accept FOR ROLE/IN SCHEMA at the same time
and in either order, but the tab completion was only working for one or
the other. Lastly, we weren't using the actual list of allowed kinds of
objects for default privileges for completion after the 'GRANT X ON' but
instead were completing to what 'GRANT X ON' supports, which isn't the
same at all.
Address these issues by improving the forward tab-completion for ALTER
DEFAULT PRIVILEGES and then constrain and correct how the tail
completion is done when it is for ALTER DEFAULT PRIVILEGES.
Back-patch the forward/tail tab-completion to 9.6, where we made it easy
to handle such cases.
For 9.5 and earlier, correct the initial tab-completion to at least be
correct as far as it goes and then add a check for GRANT/REVOKE to only
tab-complete when the GRANT/REVOKE is the start of the command, so we
don't try to do tab-completion after we get to the GRANT/REVOKE part of
the ALTER DEFAULT PRIVILEGES command, which is better than providing
incorrect completions.
Initial patch for master and 9.6 by Gilles Darold, though I cleaned it
up and added a few comments. All bugs in the 9.5 and earlier patch are
mine.
Tom Lane [Fri, 23 Dec 2016 18:35:11 +0000 (13:35 -0500)]
Replace enum InhOption with simple boolean.
Now that it has only INH_NO and INH_YES values, it's just weird that
it's not a plain bool, so make it that way.
Also rename RangeVar.inhOpt to "inh", to be like RangeTblEntry.inh.
My recollection is that we gave it a different name specifically because
it had a different representation than the derived bool value, but it
no longer does. And this is a good forcing function to be sure we
catch any places that are affected by the change.
Bump catversion because of possible effect on stored RangeVar nodes.
I'm not exactly convinced that we ever store RangeVar on disk, but
we have a readfuncs function for it, so be cautious. (If we do do so,
then commit e13486eba was in error not to bump catversion.)
Tom Lane [Fri, 23 Dec 2016 17:53:09 +0000 (12:53 -0500)]
Doc: improve index entry for "median".
We had an index entry for "median" attached to the percentile_cont function
entry, which was pretty useless because a person following the link would
never realize that that function was the one they were being hinted to use.
Instead, make the index entry point at the example in syntax-aggregates,
and add a <seealso> link to "percentile".
Also, since that example explicitly claims to be calculating the median,
make it use percentile_cont not percentile_disc. This makes no difference
in terms of the larger goals of that section, but so far as I can find,
nearly everyone thinks that "median" means the continuous not discrete
calculation.
Per gripe from Steven Winfield. Back-patch to 9.4 where we introduced
percentile_cont.
Tom Lane [Fri, 23 Dec 2016 16:53:35 +0000 (11:53 -0500)]
Spellcheck: s/descendent/descendant/g
I got a little annoyed by reading documentation paragraphs containing
both spellings within a few lines of each other. My dictionary says
"descendant" is the preferred spelling, and it's certainly the majority
usage in our tree, so standardize on that.
For one usage in parallel.sgml, I thought it better to rewrite to avoid
the term altogether.
Peter Eisentraut [Fri, 23 Dec 2016 17:00:00 +0000 (12:00 -0500)]
pg_dump: Remove obsolete handling of sequence names
There was code that attempted to check whether the sequence name stored
inside the sequence was the same as the name in pg_class. But that code
was already ifdef'ed out, and now that the sequence no longer stores its
own name, it's altogether obsolete, so remove it.
Robert Haas [Fri, 23 Dec 2016 12:14:37 +0000 (07:14 -0500)]
Remove _hash_chgbufaccess().
This is basically for the same reasons I got rid of _hash_wrtbuf()
in commit 25216c98938495fd741bf585dcbef45b3a9ffd40: it's not
convenient to have a function which encapsulates MarkBufferDirty(),
especially as we move towards having hash indexes be WAL-logged.
Patch by me, reviewed (but not entirely endorsed) by Amit Kapila.
Joe Conway [Fri, 23 Dec 2016 01:56:50 +0000 (17:56 -0800)]
Improve RLS documentation with respect to COPY
Documentation for pg_restore said COPY TO does not support row security
when in fact it should say COPY FROM. Fix that.
While at it, make it clear that "COPY FROM" does not allow RLS to be
enabled and INSERT should be used instead. Also that SELECT policies
will apply to COPY TO statements.
Back-patch to 9.5 where RLS first appeared.
Author: Joe Conway
Reviewed-By: Dean Rasheed and Robert Haas
Discussion: https://postgr.es/m/5744FA24.3030008%40joeconway.com
Robert Haas [Thu, 22 Dec 2016 22:31:52 +0000 (17:31 -0500)]
Fix tuple routing in cases where tuple descriptors don't match.
The previous coding failed to work correctly when we have a
multi-level partitioned hierarchy where tables at successive levels
have different attribute numbers for the partition key attributes. To
fix, have each PartitionDispatch object store a standalone
TupleTableSlot initialized with the TupleDesc of the corresponding
partitioned table, along with a TupleConversionMap to map tuples from
its parent's rowtype to its own rowtype. After tuple routing chooses
a leaf partition, we must use the leaf partition's tuple descriptor,
not the root table's. To that end, a dedicated TupleTableSlot for
tuple routing is now allocated in EState.
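In outline, each level of the descent converts the tuple with a
parent-to-child map before routing continues; a sketch using the generic
tuple-conversion helpers (variable names illustrative):

    /* built once per PartitionDispatch, where descriptors can differ */
    map = convert_tuples_by_name(RelationGetDescr(parent_rel),
                                 RelationGetDescr(partitioned_rel),
                                 gettext_noop("could not convert row type"));

    /* ... at routing time ... */
    if (map != NULL)
    {
        tuple = do_convert_tuple(tuple, map);
        ExecStoreTuple(tuple, pd->tupslot, InvalidBuffer, true);
    }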
Stephen Frost [Thu, 22 Dec 2016 22:08:43 +0000 (17:08 -0500)]
Use TSConfigRelationId in AlterTSConfiguration()
When we are altering a text search configuration, we are getting the
tuple from pg_ts_config and using its OID, so use TSConfigRelationId
when invoking any post-alter hooks and setting the object address.
Further, in the functions called from AlterTSConfiguration(), we're
saving information about the command via
EventTriggerCollectAlterTSConfig(), so we should be setting
commandCollected to true. Also add a regression test to
test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION.
Author: Artur Zakirov, a few additional comments by me
Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru
Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where
it was introduced, and the fix for the ObjectAddressSet() call and
setting commandCollected to true to 9.5 where those changes to
ProcessUtilitySlow() were introduced.
Tom Lane [Thu, 22 Dec 2016 21:23:33 +0000 (16:23 -0500)]
Fix CREATE TABLE ... LIKE ... WITH OIDS.
Having a WITH OIDS specification should result in the creation of an OID
column, but commit b943f502b broke that in the case that there were LIKE
tables without OIDS. Commentary in that patch makes it look like this was
intentional, but if so it was based on a faulty reading of what inheritance
does: the parent tables can add an OID column, but they can't subtract one.
AFAICS, the behavior ought to be that you get an OID column if any of the
inherited tables, LIKE tables, or WITH clause ask for one.
Also, revert that patch's unnecessary split of transformCreateStmt's loop
over the tableElts list into two passes. That seems to have been based on
a misunderstanding as well: we already have two-pass processing here,
we don't need three passes.
Per bug #14474 from Jeff Dafoe. Back-patch to 9.6 where the misbehavior
was introduced.
Tom Lane [Thu, 22 Dec 2016 20:01:27 +0000 (15:01 -0500)]
Fix handling of expanded objects in CoerceToDomain and CASE execution.
When the input value to a CoerceToDomain expression node is a read-write
expanded datum, we should pass a read-only pointer to any domain CHECK
expressions and then return the original read-write pointer as the
expression result. Previously we were blindly passing the same pointer to
all the consumers of the value, making it possible for a function in CHECK
to modify or even delete the expanded value. (Since a plpgsql function
will absorb a passed-in read-write expanded array as a local variable
value, it will in fact delete the value on exit.)
A similar hazard of passing the same read-write pointer to multiple
consumers exists in domain_check() and in ExecEvalCase, so fix those too.
The fix requires adding MakeExpandedObjectReadOnly calls at the appropriate
places, which is simple enough except that we need to get the data type's
typlen from somewhere. For the domain cases, solve this by redefining
DomainConstraintRef.tcache as okay for callers to access; there wasn't any
reason for the original convention against that, other than not wanting the
API of typcache.c to be any wider than it had to be. For CASE, there's
no good solution except to add a syscache lookup during executor start.
Per bug #14472 from Marcos Castedo. Back-patch to 9.5 where expanded
values were introduced.
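The fix boils down to inserting a read-only wrapper at the hand-off points,
roughly like this (sketch, not the committed hunks):

    /* give CHECK expressions a read-only view of the value ... */
    econtext->domainValue_datum =
        MakeExpandedObjectReadOnly(value, isnull,
                                   dcref->tcache->typlen);

    /* ... while the CoerceToDomain node itself still returns the original,
     * possibly read-write, datum */
    return value;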
Andres Freund [Thu, 22 Dec 2016 19:31:50 +0000 (11:31 -0800)]
Skip checkpoints, archiving on idle systems.
Some background activity (like checkpoints, archive timeout, standby
snapshots) is not supposed to happen on an idle system. Unfortunately
so far it was not easy to determine when a system is idle, which
defeated some of the attempts to avoid redundant activity on an idle
system.
To make that easier, allow individual WAL insertions to be marked as not
"important". By checking whether any important activity happened
since the last time an activity was performed, it now is easy to check
whether some action needs to be repeated.
Use the new facility for checkpoints, archive timeout and standby
snapshots.
The lack of a facility causes some issues in older releases, but in my
opinion the consequences (superfluous checkpoints / archived segments)
aren't grave enough to warrant backpatching.
Author: Michael Paquier, editorialized by Andres Freund
Reviewed-By: Andres Freund, David Steele, Amit Kapila, Kyotaro HORIGUCHI
Bug: #13685
Discussion:
https://www.postgresql.org/message-id/20151016203031.3019.72930@wrigleys.postgresql.org
https://www.postgresql.org/message-id/CAB7nPqQcPqxEM3S735Bd2RzApNqSNJVietAC=6kfkYv_45dKwA@mail.gmail.com
Backpatch: -
Robert Haas [Thu, 22 Dec 2016 18:54:40 +0000 (13:54 -0500)]
Fix broken error check in _hash_doinsert.
You can't just cast a HashMetaPage to a Page, because the meta page
data is stored after the page header, not at offset 0. Fortunately,
this didn't break anything because it happens to find hashm_bsize
at the offset at which it expects to find pd_pagesize_version, and
the values are close enough to the same that this works out.
Still, it's a bug, so back-patch to all supported versions.
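For illustration, the meta data must be fetched from the page contents,
while page-size checks must look at the Page itself (sketch with
illustrative error wording):

    Page         metapage = BufferGetPage(metabuf);
    HashMetaPage metap = HashPageGetMeta(metapage);   /* data sits after the
                                                       * page header, not at
                                                       * offset 0 */

    /* checks that read pd_pagesize_version must use the Page, not a
     * HashMetaPage cast to one */
    if (itemsz > HashMaxItemSize(metapage))
        elog(ERROR, "index row size %zu exceeds hash maximum %zu",
             itemsz, HashMaxItemSize(metapage));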
Joe Conway [Thu, 22 Dec 2016 17:48:05 +0000 (09:48 -0800)]
Make dblink try harder to form useful error messages
When libpq encounters a connection-level error, e.g. runs out of memory
while forming a result, there will be no error associated with PGresult,
but a message will be placed into PGconn's error buffer. postgres_fdw
takes care to use the PGconn error message when PGresult does not have
one, but dblink has been negligent in that regard. Modify dblink to mirror
what postgres_fdw has been doing.
Back-patch to all supported branches.
Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
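The pattern borrowed from postgres_fdw is simply to fall back to the
connection-level message when the result carries none (sketch):

    char   *message_primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

    /* no result-level message (e.g. libpq ran out of memory building the
     * PGresult): use whatever is in the connection's error buffer */
    if (message_primary == NULL || message_primary[0] == '\0')
        message_primary = PQerrorMessage(conn);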
Joe Conway [Thu, 22 Dec 2016 17:19:44 +0000 (09:19 -0800)]
Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options, therefore protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.
Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.
Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
No more indirect blocks. The blocks form a linked list instead.
This saves some memory, because we don't need to have a buffer in memory to
hold the indirect block (or blocks). To reflect that, TAPE_BUFFER_OVERHEAD
is reduced from 3 to 1 buffer, which allows using more memory for building
the initial runs.
Tom Lane [Thu, 22 Dec 2016 16:19:04 +0000 (11:19 -0500)]
Give a useful error message if uuid-ossp is built without preconfiguration.
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without
having told configure you meant to; you could just cd into that directory
and "make". That no longer works because the code depends on configure to
have done header and library probes, but the ensuing error messages are
not so easy to interpret if you're not an old C hand. We've gotten a
couple of complaints recently from people trying to do this the low-tech
way, so add an explicit #error directing the user to use --with-uuid.
(In principle we might want to do something similar in the other
optionally-built contrib modules; but I don't think any of the others have
ever worked without preconfiguration, so there are no bad habits to break
people of.)
Back-patch to 9.4 where the previous commit came in.
Joe Conway [Wed, 21 Dec 2016 23:47:54 +0000 (15:47 -0800)]
Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.
Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.
Tom Lane [Wed, 21 Dec 2016 22:39:32 +0000 (17:39 -0500)]
Fix detection of unfinished Unicode surrogate pair at end of string.
The U&'...' and U&"..." syntaxes silently discarded a surrogate pair
start (that is, a code between U+D800 and U+DBFF) if it occurred at
the very end of the string. This seems like an obvious oversight,
since we throw an error for every other invalid combination of surrogate
characters, including the very same situation in E'...' syntax.
This has been wrong since the pair processing was added (in 9.0),
so back-patch to all supported branches.
Tom Lane [Wed, 21 Dec 2016 20:18:25 +0000 (15:18 -0500)]
Fix strange behavior (and possible crashes) in full text phrase search.
In an attempt to simplify the tsquery matching engine, the original
phrase search patch invented rewrite rules that would rearrange a
tsquery so that no AND/OR/NOT operator appeared below a PHRASE operator.
But this approach had numerous problems. The rearrangement step was
missed by ts_rewrite (and perhaps other places), allowing tsqueries
to be created that would cause Assert failures or perhaps crashes at
execution, as reported by Andreas Seltenreich. The rewrite rules
effectively defined semantics for operators underneath PHRASE that were
buggy, or at least unintuitive. And because rewriting was done in
tsqueryin() rather than at execution, the rearrangement was user-visible,
which is not very desirable --- for example, it might cause unexpected
matches or failures to match in ts_rewrite.
As a somewhat independent problem, the behavior of nested PHRASE operators
was only sane for left-deep trees; queries like "x <-> (y <-> z)" did not
behave intuitively at all.
To fix, get rid of the rewrite logic altogether, and instead teach the
tsquery execution engine to manage AND/OR/NOT below a PHRASE operator
by explicitly computing the match location(s) and match widths for these
operators.
This requires introducing some additional fields into the publicly visible
ExecPhraseData struct; but since there's no way for third-party code to
pass such a struct to TS_phrase_execute, it shouldn't create an ABI problem
as long as we don't move the offsets of the existing fields.
Another related problem was that index searches supposed that "!x <-> y"
could be lossily approximated as "!x & y", which isn't correct because
the latter will reject, say, "x q y" which the query itself accepts.
This required some tweaking in TS_execute_ternary along with the main
tsquery engine.
Back-patch to 9.6 where phrase operators were introduced. While this
could be argued to change behavior more than we'd like in a stable branch,
we have to do something about the crash hazards and index-vs-seqscan
inconsistency, and it doesn't seem desirable to let the unintuitive
behaviors induced by the rewriting implementation stand as precedent.
Stephen Frost [Wed, 21 Dec 2016 20:03:32 +0000 (15:03 -0500)]
Improve ALTER TABLE documentation
The ALTER TABLE documentation wasn't terribly clear when it came to
which commands could be combined together and what it meant when they
were.
In particular, SET TABLESPACE *can* be combined with other commands,
when it's operating against a single table, but not when multiple tables
are being moved with ALL IN TABLESPACE. Further, the actions are
applied together but not really in 'parallel', at least today.
Pointed out by: Amit Langote
Improved wording from Tom.
Back-patch to 9.4, where the ALL IN TABLESPACE option was added.
Stephen Frost [Wed, 21 Dec 2016 18:47:06 +0000 (13:47 -0500)]
Fix dumping of casts and transforms using built-in functions
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the
cast or transform if it happened to use a built-in function because we
weren't including the information about built-in functions when querying
pg_proc from getFuncs().
Modify the query in getFuncs() to also gather information about
functions which are used by user-defined casts and transforms (where
"user-defined" means "has an OID >= FirstNormalObjectId"). This also
adds to the TAP regression tests for 9.6 and master to cover these
types of objects.
Back-patch all the way for casts, back to 9.5 for transforms.
Stephen Frost [Wed, 21 Dec 2016 18:47:06 +0000 (13:47 -0500)]
For 8.0 servers, get last built-in oid from pg_database
We didn't start ensuring that all built-in objects had OIDs less than
16384 until 8.1, so for 8.0 servers we still need to query the value out
of pg_database. We need this, in particular, to distinguish which casts
were built-in and which were user-defined.
For HEAD, we only worry about going back to 8.0; for the back-branches,
we also ensure that 7.0-7.4 work.
Dean Rasheed [Wed, 21 Dec 2016 16:58:18 +0000 (16:58 +0000)]
Fix order of operations in CREATE OR REPLACE VIEW.
When CREATE OR REPLACE VIEW acts on an existing view, don't update the
view options until after the view query has been updated.
This is necessary in the case where CREATE OR REPLACE VIEW is used on
an existing view that is not updatable, and the new view is updatable
and specifies the WITH CHECK OPTION. In this case, attempting to apply
the new options to the view before updating its query fails, because
the options are applied using the ALTER TABLE infrastructure which
checks that WITH CHECK OPTION is only applied to an updatable view.
If new columns are being added to the view, that is also done using
the ALTER TABLE infrastructure, but it is important that that still be
done before updating the view query, because the rules system checks
that the query columns match those on the view relation. Added a
comment to explain that, in case someone is tempted to move that to
where the view options are now being set.
Back-patch to 9.4 where WITH CHECK OPTION was added.
Robert Haas [Wed, 21 Dec 2016 16:47:13 +0000 (11:47 -0500)]
Convert elog() to ereport() and do some wordsmithing.
It's not entirely clear that we should log a message here at all, but
it's certainly wrong to use elog() for a message that should clearly
be translatable.
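For reference, the practical difference (message text here is only
illustrative):

    /* elog(): the format string is never marked for translation */
    elog(LOG, "could not rename file \"%s\": %m", path);

    /* ereport(): errmsg() puts the message into the translation catalog */
    ereport(LOG,
            (errmsg("could not rename file \"%s\": %m", path)));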
Robert Haas [Wed, 21 Dec 2016 16:01:48 +0000 (11:01 -0500)]
Fix corner-case bug in WaitEventSetWaitBlock on Windows.
If we do not reset the FD_READ event, WaitForMultipleObjects won't
return it again unless we've meanwhile read from the socket,
which is generally true but not guaranteed. WaitEventSetWaitBlock
itself may fail to return the event to the caller if the latch is
also set, and even if we changed that, the caller isn't obliged to
handle all returned events at once. On non-Windows systems, the
socket-read event is purely level-triggered, so this issue does
not exist. To fix, make Windows reset the event when needed.
Robert Haas [Wed, 21 Dec 2016 14:44:33 +0000 (09:44 -0500)]
Refactor merge path generation code.
This shouldn't change the set of paths that get generated in any
way, but it is preparatory work for further changes to allow a
partial path to be merge-joined with a non-partial path to produce
a partial join path.
Peter Eisentraut [Wed, 21 Dec 2016 17:00:00 +0000 (12:00 -0500)]
Reorder pg_sequence columns to avoid alignment issue
On AIX, doubles are aligned at 4 bytes, but int64 is aligned at 8 bytes.
Our code assumes that doubles have alignment that can also be applied to
int64, but that fails in this case. One effect is that
heap_form_tuple() writes tuples in a different layout than
Form_pg_sequence expects.
Rather than rewrite the whole alignment code, work around the issue by
reordering the columns in pg_sequence so that the first int64 column
naturally comes out at an 8-byte boundary.
Fujii Masao [Wed, 21 Dec 2016 11:27:37 +0000 (20:27 +0900)]
Forbid invalid combination of options in pg_basebackup.
Commit 56c7d8d4552180fd66fe48423bb2a9bb767c2d87 allowed pg_basebackup
to stream WAL in tar mode. But there is the restriction that WAL
streaming in tar mode works only when the value - (dash) is not
specified as output directory. This means that the combination of
three options "-D -", "-F t" and "-X stream" is invalid. However,
previously, even when those options were specified at the same time, the
pg_basebackup background process unexpectedly started streaming WAL and
then exited with an error.
This commit changes pg_basebackup so that it errors out on such
invalid combination of options at the beginning.
Tom Lane [Wed, 21 Dec 2016 00:22:02 +0000 (19:22 -0500)]
Fix minor oversights in nodeAgg.c.
aggstate->evalproj is always set up by ExecInitAgg, so there's no
need to test. Doing so led Coverity to think that we might be
intending "slot" to be possibly NULL here, and it quite properly
complained that the rest of combine_aggregates() wasn't prepared
for that.
Also fix a couple of obvious thinkos in Asserts checking that
"inputoff" isn't past the end of the slot.
Errors introduced in commit 8ed3f11bb, so no need for back-patch.
Peter Eisentraut [Tue, 20 Dec 2016 17:00:00 +0000 (12:00 -0500)]
Add pg_sequence system catalog
Move sequence metadata (start, increment, etc.) into a proper system
catalog instead of storing it in the sequence heap object. This
separates the metadata from the sequence data. Sequence metadata is now
operated on transactionally by DDL commands, whereas previously
rollbacks of sequence-related DDL commands would be ignored.
Fix sharing Agg transition state of DISTINCT or ordered aggs.
If a query contained two aggregates that could share the transition value,
we would correctly collect the input into a tuplesort only once, but
incorrectly run the transition function over the accumulated input twice,
in finalize_aggregates(). That caused a crash, when we tried to call
tuplesort_performsort() on an already-freed NULL tuplestore.
Backport to 9.6, where sharing of transition state and this bug were
introduced.