Fujii Masao [Tue, 21 Feb 2017 18:11:58 +0000 (03:11 +0900)]
Make walsender always initialize the buffers.
Walsender uses local buffers for each outgoing and incoming message.
Previously, when creating a replication slot, walsender forgot to initialize
one of them, which could cause a segmentation fault. To fix this
issue, this commit changes walsender so that it always initializes them
before executing the requested replication command.
Back-patch to 9.4 where replication slots were introduced.
Problem report and initial patch by Stas Kelvich, modified by me.
Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru
Tom Lane [Tue, 21 Feb 2017 22:51:27 +0000 (17:51 -0500)]
Fix sloppy handling of corner-case errors in fd.c.
Several places in fd.c had badly-thought-through handling of error returns
from lseek() and close(). The fact that those would seldom fail on valid
FDs is probably the reason we've not noticed this up to now; but if they
did fail, we'd get quite confused.
LruDelete and LruInsert actually just Assert'd that lseek never fails,
which is pretty awful on its face.
In LruDelete, we indeed can't throw an error, because that's likely to get
called during error abort and so throwing an error would probably just lead
to an infinite loop. But by the same token, throwing an error from the
close() right after that was ill-advised, not to mention that it would've
left the LRU state corrupted since we'd already unlinked the VFD from the
list. I also noticed that really, most of the time, we should know the
current seek position and it shouldn't be necessary to do an lseek here at
all. As patched, if we don't have a seek position and an lseek attempt
doesn't give us one, we'll close the file but then subsequent re-open
attempts will fail (except in the somewhat-unlikely case that a
FileSeek(SEEK_SET) call comes between and allows us to re-establish a known
target seek position). This isn't great but it won't result in any state
corruption.
Meanwhile, having an Assert instead of an honest test in LruInsert is
really dangerous: if that lseek failed, a subsequent read or write would
read or write from the start of the file, not where the caller expected,
leading to data corruption.
In both LruDelete and FileClose, if close() fails, just LOG that and mark
the VFD closed anyway. Possibly leaking an FD is preferable to getting
into an infinite loop or corrupting the VFD list. Besides, as far as I can
tell from the POSIX spec, it's unspecified whether or not the file has been
closed, so treating it as still open could be the wrong thing anyhow.
I also fixed a number of other places that were being sloppy about
behaving correctly when the seekPos is unknown.
Also, I changed FileSeek to return -1 with EINVAL for the cases where it
detects a bad offset, rather than throwing a hard elog(ERROR). It seemed
pretty inconsistent that some bad-offset cases would get a failure return
while others got elog(ERROR). It was missing an offset validity check for
the SEEK_CUR case on a closed file, too.
Back-patch to all supported branches, since all this code is fundamentally
identical in all of them.
Tom Lane [Mon, 20 Feb 2017 15:05:00 +0000 (10:05 -0500)]
Fix documentation of to_char/to_timestamp TZ, tz, OF formatting patterns.
These are only supported in to_char, not in the other direction, but the
documentation failed to mention that. Also, describe TZ/tz as printing the
time zone "abbreviation", not "name", because what they print is elsewhere
referred to that way. Per bug #14558.
Tom Lane [Sun, 19 Feb 2017 21:14:52 +0000 (16:14 -0500)]
Adjust PL/Tcl regression test to dodge a possible bug or zone dependency.
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish
time zone setting. It's not clear if this is an old Tcl bug or something
odd about the zone data, but in any case that test is meant to see if the
Tcl [clock] command works at all, not what its corner-case behaviors are.
Therefore we have no need to test exactly which week a Sunday midnight is
considered to fall into. Probe the following Tuesday instead.
Magnus Hagander [Sat, 18 Feb 2017 12:45:14 +0000 (13:45 +0100)]
Fix help message for pg_basebackup -R
The recovery.conf file that's generated is specifically for replication,
and not needed (or wanted) for regular backup restore, so indicate that
in the message.
Tom Lane [Thu, 16 Feb 2017 16:30:07 +0000 (11:30 -0500)]
Doc: remove duplicate index entry.
This causes a warning with the old html-docs toolchain, though not with the
new. I had originally supposed that we needed both <indexterm> entries to
get both a primary index entry and a see-also link; but evidently not,
as pointed out by Fabien Coelho.
Tom Lane [Wed, 15 Feb 2017 23:15:47 +0000 (18:15 -0500)]
Formatting and docs corrections for logical decoding output plugins.
Make the typedefs for output plugins consistent with project style;
they were previously not even consistent with each other as to layout
or inclusion of parameter names. Make the documentation look the same,
and fix errors therein (missing and misdescribed parameters).
Tom Lane [Wed, 15 Feb 2017 21:40:05 +0000 (16:40 -0500)]
Make sure that hash join's bulk-tuple-transfer loops are interruptible.
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and
ExecHashRemoveNextSkewBucket() are all capable of iterating over many
tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend
might fail to respond to SIGINT or SIGTERM for an unreasonably long time.
Fix that. In the case of ExecHashJoinNewBatch(), it seems useful to put
the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather
than directly in the loop, because that will also ensure that both
principal code paths through ExecHashJoinOuterGetTuple() will do a
CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.
Tom Lane [Wed, 15 Feb 2017 20:23:19 +0000 (15:23 -0500)]
Fix tab completion for "ALTER SYSTEM SET variable ...".
It wouldn't complete "TO" after the variable name, which is certainly
minor enough. But since we do complete "TO" after "SET variable ...",
and since this case used to work pre-9.6, I think this is a bug.
Also, fix the query used to collect the variable names; whoever last
touched it evidently didn't understand how the pieces are supposed
to fit together. It accidentally worked anyway, because readline
ignores irrelevant completions, but it was randomly unlike the ones
around it, and could be a source of actual bugs if someone copied
it as a prototype for another query.
Tom Lane [Wed, 15 Feb 2017 19:44:00 +0000 (14:44 -0500)]
Fix YA unwanted behavioral difference with operator_precedence_warning.
Jeff Janes noted that the error cursor position shown for some errors
would vary when operator_precedence_warning is turned on. We'd prefer
that option to have no undocumented effects, so this isn't desirable.
To fix, make sure that an AEXPR_PAREN node has the same exprLocation
as its child node.
(Note: it would be a little cheaper to use @2 here instead of an
exprLocation call, but there are cases where that wouldn't produce
the identical answer, so don't do it like that.)
Back-patch to 9.5 where this feature was introduced.
Noah Misch [Sun, 12 Feb 2017 21:03:41 +0000 (16:03 -0500)]
Ignore tablespace ACLs when ignoring schema ACLs.
The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and
CREATE INDEX to refit existing indexes for the new column type. Since
this CREATE INDEX is an implementation detail of an index alteration,
the ensuing DefineIndex() should skip ACL checks specific to index
creation. It already skips the namespace ACL check. Make it skip the
tablespace ACL check, too. Back-patch to 9.2 (all supported versions).
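For illustration only (not part of the commit), a sketch of the kind of
command this affects, assuming a tablespace "ts1" on which the table's
owner no longer holds CREATE privilege:
CREATE TABLE t (id int PRIMARY KEY USING INDEX TABLESPACE ts1);
-- ... later, CREATE on ts1 is revoked from t's owner ...
-- The implicit DROP INDEX / CREATE INDEX behind this type change should
-- not demand CREATE privilege on ts1, since it is only an implementation
-- detail of the alteration:
ALTER TABLE t ALTER COLUMN id TYPE bigint;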
Tom Lane [Thu, 9 Feb 2017 20:49:57 +0000 (15:49 -0500)]
Blind try to fix portability issue in commit 8f93bd851 et al.
The S/390 members of the buildfarm are showing failures indicating
that they're having trouble with the rint() calls I added yesterday.
There's no good reason for that, and I wonder if it is a compiler bug
similar to the one we worked around in d9476b838. Try to fix it using
the same method as before, namely to store the result of rint() back
into a "double" variable rather than immediately converting to int64.
(This isn't entirely waving a dead chicken, since on machines with
wider-than-double float registers, the extra store forces a width
conversion. I don't know if S/390 is like that, but it seems worth
trying.)
In passing, merge duplicate ereport() calls in float8_timestamptz().
Tom Lane [Wed, 8 Feb 2017 23:04:59 +0000 (18:04 -0500)]
Fix roundoff problems in float8_timestamptz() and make_interval().
When converting a float value to integer microseconds, we should be careful
to round the value to the nearest integer, typically with rint(); simply
assigning to an int64 variable will truncate, causing apparently off-by-one
values in cases that should work. Most places in the datetime code got
this right, but not these two.
float8_timestamptz() is new as of commit e511d878f (9.6). Previous
versions effectively depended on interval_mul() to do roundoff correctly,
which it does, so this fixes an accuracy regression in 9.6.
The problem in make_interval() dates to its introduction in 9.4. Aside
from being careful to round not truncate, let's incorporate the hours and
minutes inputs into the result with exact integer arithmetic, rather than
risk introducing roundoff error where there need not have been any.
float8_timestamptz() problem reported by Erik Nordström, though this is
not his proposed patch. make_interval() problem found by me.
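A hedged example of the make_interval() symptom (the comment describes the
intended rounding behavior, not a captured result):
-- 1.0000009 seconds is 1000000.9 microseconds; it should round to
-- 1.000001 seconds rather than truncate to 1 second.
SELECT make_interval(secs => 1.0000009);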
Tom Lane [Tue, 7 Feb 2017 15:24:25 +0000 (10:24 -0500)]
Correct thinko in last-minute release note item.
The CREATE INDEX CONCURRENTLY bug can only be triggered by row updates,
not inserts, since the problem would arise from an update incorrectly
being made HOT. Noted by Alvaro.
Tom Lane [Mon, 6 Feb 2017 18:19:51 +0000 (13:19 -0500)]
Avoid returning stale attribute bitmaps in RelationGetIndexAttrBitmap().
The problem with the original coding here is that we might receive (and
clear) a relcache invalidation signal for the target relation down inside
one of the index_open calls we're doing. Since the target is open, we
would not drop the relcache entry, just reset its rd_indexvalid and
rd_indexlist fields. But RelationGetIndexAttrBitmap() kept going, and
would eventually cache and return potentially-obsolete attribute bitmaps.
The case where this matters is where the inval signal was from a CREATE
INDEX CONCURRENTLY telling us about a new index on a formerly-unindexed
column. (In all other cases, the lock we hold on the target rel should
prevent any concurrent change in index state.) Even just returning the
stale attribute bitmap is not such a problem, because it shouldn't matter
during the transaction in which we receive the signal. What hurts is
caching the stale data, because it can survive into later transactions,
breaking CREATE INDEX CONCURRENTLY's expectation that later transactions
will not create new broken HOT chains. The upshot is that there's a window
for building corrupted indexes during CREATE INDEX CONCURRENTLY.
This patch fixes the problem by rechecking that the set of index OIDs
is still the same at the end of RelationGetIndexAttrBitmap() as it was
at the start. If not, we loop back and try again. That's a little
more than is strictly necessary to fix the bug --- in principle, we
could return the stale data but not cache it --- but it seems like a
bad idea on general principles for relcache to return data it knows
is stale.
There might be more hazards of the same ilk, or there might be a better
way to fix this one, but this patch definitely improves matters and seems
unlikely to make anything worse. So let's push it into today's releases
even as we continue to study the problem.
Tom Lane [Fri, 3 Feb 2017 00:11:27 +0000 (19:11 -0500)]
Fix placement of initPlans when forcibly materializing a subplan.
If we forcibly place a Material node atop a finished subplan, we need
to move any initPlans attached to the subplan up to the Material node,
in order to keep SS_finalize_plan() happy. I'd figured this out in
commit 7b67a0a49 for the case of materializing a cursor plan, but out of
an abundance of caution, I put the initPlan movement hack at the call
site for that case, rather than inside materialize_finished_plan().
That was the wrong thing, because it turns out to also be necessary for
the only other caller of materialize_finished_plan(), ie subselect.c.
We lacked any test cases that exposed the mistake, but bug#14524 from
Wei Congrui shows that it's possible to get an initPlan reference into
the top tlist in that case too, and then SS_finalize_plan() complains.
Hence, move the hack into materialize_finished_plan().
In HEAD, also relocate some recently-added tests in subselect.sql, which
I'd unthinkingly dropped into the middle of a sequence of related tests.
Andrew Dunstan [Wed, 1 Feb 2017 22:52:35 +0000 (17:52 -0500)]
Don't count background workers against a user's connection limit.
Doing so doesn't seem to be within the purpose of the per user
connection limits, and has particularly unfortunate effects in
conjunction with parallel queries.
Backpatch to 9.6 where parallel queries were introduced.
David Rowley, reviewed by Robert Haas and Albe Laurenz.
Stephen Frost [Tue, 31 Jan 2017 21:24:14 +0000 (16:24 -0500)]
pg_dump: Fix handling of ALTER DEFAULT PRIVILEGES
In commit 23f34fa, we changed how ACLs were handled to use the new
pg_init_privs catalog and to dump out the ACL commands as REVOKE+GRANT
combinations instead of trying to REVOKE all rights always and then
GRANT back just the ones which were in place.
Unfortunately, the DEFAULT PRIVILEGES system didn't quite get the
correct treatment with this change and ended up (incorrectly) only
including positive GRANTs instead of both the REVOKEs and GRANTs
necessary to preserve the correct privileges.
There are only a couple of cases where such REVOKEs are possible because,
generally speaking, there are few rights which exist on objects by
default to be revoked.
Examples of REVOKEs which weren't being correctly preserved are when
privileges are REVOKE'd from the creator/owner, like so:
ALTER DEFAULT PRIVILEGES
FOR ROLE myrole
REVOKE SELECT ON TABLES FROM myrole;
or when other default privileges are being revoked, such as EXECUTE
rights granted to public for functions:
ALTER DEFAULT PRIVILEGES
FOR ROLE myrole
REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
Fix this by correctly working out what the correct REVOKE statements are
(if any) and dump them out, just as we do for everything else.
Noticed while developing additional regression tests for pg_dump, which
will be landing shortly.
Stephen Frost [Tue, 31 Jan 2017 16:17:40 +0000 (11:17 -0500)]
test_pg_dump: perltidy cleanup
As pointed out by Alvaro, we actually use perltidy on the perl scripts
in the source tree, so go back to the results of a perltidy run for the
test_pg_dump TAP script.
To make it look slightly less tragic, I changed most of the independent
arguments into long-form single arguments (eg: -f file.sql changed to be
--file=file.sql) to avoid having them confusingly split across lines due
to perltidy.
Tom Lane [Mon, 30 Jan 2017 16:40:22 +0000 (11:40 -0500)]
Update time zone data files to tzdata release 2016j.
DST law changes in northern Cyprus (new zone Asia/Famagusta), Russia (new
zone Europe/Saratov), Tonga, Antarctica/Casey. Historical corrections for
Asia/Aqtau, Asia/Atyrau, Asia/Gaza, Asia/Hebron, Italy, Malta. Replace
invented zone abbreviation "TOT" for Tonga with numeric UTC offset; but
as in the past, we'll keep accepting "TOT" for input.
Stephen Frost [Mon, 30 Jan 2017 04:05:09 +0000 (23:05 -0500)]
Handle ALTER EXTENSION ADD/DROP with pg_init_privs
In commit 6c268df, pg_init_privs was added to track the initial
privileges of catalog objects and extensions. Unfortunately, that
commit didn't include understanding of ALTER EXTENSION ADD/DROP, which
allows the objects associated with an extension to be changed after the
initial CREATE EXTENSION script has been run.
The result was that ACLs for objects added through
ALTER EXTENSION ADD were not recorded in pg_init_privs, and we would
end up including those ACLs in pg_dump when we shouldn't have.
This commit corrects that by making sure to have pg_init_privs updated
when ALTER EXTENSION ADD/DROP is run, recording the permissions as they
are at ALTER EXTENSION ADD time, and removing any if/when ALTER
EXTENSION DROP is called.
This issue was pointed out by Moshe Jacobson as commentary on bug #14456
(which was actually a bug about versions prior to 9.6 not handling
custom ACLs on extensions correctly, an issue now addressed with
pg_init_privs in 9.6).
Back-patch to 9.6 where pg_init_privs was introduced.
Stephen Frost [Mon, 30 Jan 2017 04:05:09 +0000 (23:05 -0500)]
test_pg_dump TAP test whitespace cleanup
The formatting of the perl hashes used in the TAP tests for test_pg_dump
was rather horribly inconsistent and made it more difficult than it
really should have been to add new tests or adjust what tests are for
what runs, etc.
Andres Freund [Fri, 27 Jan 2017 00:47:03 +0000 (16:47 -0800)]
Add castNode(type, ptr) for safe casting between NodeTag based types.
The new function allows casting from one NodeTag based type to
another, while asserting that the conversion is valid. This replaces
the common pattern of doing a cast together with an
Assert(IsA(ptr, type)) nearby.
As this seems likely to be used pervasively, we decided to back-patch
the addition of this macro. Otherwise back-patched fixes are more
likely not to work on back-branches.
On branches before 9.6, where we do not yet rely on inline functions
being available, the type assertion is only performed if PG_USE_INLINE
support is detected. The cast obviously is performed regardless.
For the benefit of verifying the macro compiles in the back-branches,
this commit contains a single use of the new macro. On master, a
somewhat larger conversion will be committed separately.
Author: Peter Eisentraut and Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/c5d387d9-3440-f5e0-f9d4-71d53b9fbe52@2ndquadrant.com
Backpatch: 9.2-
Alvaro Herrera [Thu, 26 Jan 2017 20:45:22 +0000 (17:45 -0300)]
Remove test for COMMENT ON DATABASE
Our current DDL only allows a database name to be specified in COMMENT
ON DATABASE, which Andrew Dunstan reports to make this test fail on the
buildfarm. Remove the line until we gain a DDL command that allows the
current database to be operated on without having to specify it by
name.
Simon Riggs [Thu, 26 Jan 2017 20:06:44 +0000 (20:06 +0000)]
Reset hot standby xmin after restart
Hot_standby_feedback could be reset by reload and worked correctly, but if
the server was restarted rather than reloaded, the xmin was not reset.
Always force a reset if hot_standby_feedback is enabled at startup.
Tom Lane [Thu, 26 Jan 2017 17:17:47 +0000 (12:17 -0500)]
Ensure that a tsquery like '!foo' matches empty tsvectors.
!foo means "the tsvector does not contain foo", and therefore it should
match an empty tsvector. ts_match_vq() overenthusiastically supposed
that an empty tsvector could never match any query, so it forcibly
returned FALSE, the wrong answer. Remove the premature optimization.
Our behavior on this point was inconsistent, because while seqscans and
GIST index searches both failed to match empty tsvectors, GIN index
searches would find them, since GIN scans don't rely on ts_match_vq().
That makes this certainly a bug, not a debatable definition disagreement,
so back-patch to all supported branches.
Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me.
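For example, a minimal check of the corrected behavior:
-- A negated term should match an empty tsvector:
SELECT ''::tsvector @@ '!foo'::tsquery;   -- expected: true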
Robert Haas [Fri, 20 Jan 2017 20:55:45 +0000 (15:55 -0500)]
Avoid uselessly respawning the autovacuum launcher at high speed.
When (1) autovacuum = off and (2) there's at least one database with
an XID age greater than autovacuum_freeze_max_age and (3) all tables
in that database that need vacuuming are already being processed by a
worker and (4) the autovacuum launcher is started, a kind of infinite
loop occurs. The launcher starts a worker and immediately exits. The
worker, finding no work to do, immediately starts the launcher,
supposedly so that the next database can be processed. But because
datfrozenxid for that database hasn't been advanced yet, the new
worker gets put right back into the same database as the old one,
where it once again starts the launcher and exits. High-speed ping
pong ensues.
There are several possible ways to break the cycle; this seems like
the safest one.
Amit Khandekar (code) and Robert Haas (comments), reviewed by
Álvaro Herrera.
Stephen Frost [Thu, 19 Jan 2017 17:06:27 +0000 (12:06 -0500)]
Dump sequence data based on the TableDataInfo flag
When considering a sequence's Data entry in dumpSequenceData, we were
actually looking at the sequence definition's dump flag to decide if we
should dump the data or not. That's generally fine, except for when the
sequence data entry was created by processExtensionTables() because it's
a config sequence. In that case, the sequence itself won't be marked as
dumping data because it's part of an extension, leading to the need for
processExtensionTables() to create the sequence data entry.
This leads to extension config sequence data not being included in the
dump when it should be. Fix this by looking at the sequence data's dump
flag instead, just as dumpTableData() was doing for tables (which is why
config tables were correctly being handled), and add a regression test
to make sure we don't break it moving forward.
All of this is a bit round-about since we can now represent which
components of a given dump item should be dumped out through the dump
flag. A future improvement might be to change checkExtensionMembership()
to check for config sequences/tables and set the dump flag based on that
directly, possibly removing the need for processExtensionTables().
Tom Lane [Wed, 18 Jan 2017 21:33:18 +0000 (16:33 -0500)]
Reset the proper GUC in create_index test.
Thinko in commit a4523c5aa. It doesn't really affect anything at
present, but it would be a problem if any tests added later in this
file ought to get index-only-scan plans. Back-patch, like the previous
commit, just to avoid surprises in case we add such a test and then
back-patch it.
Alvaro Herrera [Wed, 18 Jan 2017 21:06:13 +0000 (18:06 -0300)]
Change some test macros to return true booleans
These macros work fine when they are used directly in an "if" test or
similar, but as soon as the return values are assigned to boolean
variables (or passed as boolean arguments to some function), they become
bugs, hopefully caught by compiler warnings. To avoid future problems,
fix the definitions so that they return actual booleans.
To further minimize the risk that somebody uses them in back-patched
fixes that only work correctly in branches starting from the current
master and not in old ones, back-patch the change to supported branches
as appropriate.
Tom Lane [Wed, 18 Jan 2017 20:21:52 +0000 (15:21 -0500)]
Disable transforms that replaced AT TIME ZONE with RelabelType.
These resulted in wrong answers if the relabeled argument could be matched
to an index column, as shown in bug #14504 from Evgeniy Kozlov. We might
be able to resurrect these optimizations by adjusting the planner's
treatment of RelabelType, or by adjusting btree's rules for selecting
comparison functions, but either solution will take careful analysis
and does not sound like a fit candidate for backpatching.
I left the catalog infrastructure in place and just reduced the transform
functions to always-return-NULL. This would be necessary anyway in the
back branches, and it doesn't seem important to be more invasive in HEAD.
Bug introduced by commit b8a18ad48. Back-patch to 9.5 where that came in.
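A hypothetical sketch of the query shape involved (table, column, and
constant are invented for illustration):
CREATE TABLE events (ts timestamptz);
CREATE INDEX ON events (ts);
-- With the transform in place, the AT TIME ZONE expression could be
-- reduced to a RelabelType and matched directly to the index on ts,
-- yielding wrong answers; with the transform disabled it is evaluated
-- honestly.
SELECT * FROM events WHERE ts AT TIME ZONE 'UTC' > '2017-01-01';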
Fujii Masao [Tue, 17 Jan 2017 08:29:15 +0000 (17:29 +0900)]
Fix an assertion failure related to an exclusive backup.
Previously multiple sessions could execute pg_start_backup() and
pg_stop_backup() to start and stop an exclusive backup at the same time.
This could trigger the assertion failure of
"FailedAssertion("!(XLogCtl->Insert.exclusiveBackup)".
This happened because, even while pg_start_backup() was starting
an exclusive backup, another session could run pg_stop_backup()
concurrently and mark the backup as not-in-progress unconditionally.
This patch introduces ExclusiveBackupState indicating the state of
an exclusive backup. This state is used to ensure that there is only
one session running pg_start_backup() or pg_stop_backup() at
the same time, to avoid the assertion failure.
Back-patch to all supported versions.
Author: Michael Paquier
Reviewed-By: Kyotaro Horiguchi and me
Reported-By: Andreas Seltenreich
Discussion: <87mvktojme.fsf@credativ.de>
Tom Lane [Sat, 14 Jan 2017 18:27:47 +0000 (13:27 -0500)]
Throw suitable error for COPY TO STDOUT/FROM STDIN in a SQL function.
A client copy can't work inside a function because the FE/BE wire protocol
doesn't support nesting of a COPY operation within query results. (Maybe
it could, but the protocol spec doesn't suggest that clients should support
this, and libpq for one certainly doesn't.)
In most PLs, this prohibition is enforced by spi.c, but SQL functions don't
use SPI. A comparison of _SPI_execute_plan() and init_execution_state()
shows that rejecting client COPY is the only discrepancy in what they
allow, so there are no other similar bugs.
This is an astonishingly ancient oversight, so back-patch to all supported
branches.
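A minimal sketch of the now-rejected case (function name invented):
-- A client-side COPY cannot be run from inside a SQL-language function;
-- this now fails with a suitable error (when the function body is checked
-- or when it is executed) instead of misbehaving.
CREATE FUNCTION copy_out() RETURNS void AS
  'COPY pg_class TO STDOUT' LANGUAGE sql;
SELECT copy_out();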
Peter Eisentraut [Fri, 13 Jan 2017 17:00:00 +0000 (12:00 -0500)]
pg_upgrade: Fix for changed pg_ctl default stop mode
In 9.5, the default pg_ctl stop mode was changed from "smart" to "fast".
pg_upgrade still thought the default mode was "smart" and only specified
the mode when "fast" was asked for. This results in using "fast" all
the time. It's not clear what the effect in practice is, but fix it
nonetheless to restore the previous behavior.
Robert Haas [Fri, 13 Jan 2017 18:29:31 +0000 (13:29 -0500)]
Fix cardinality estimates for parallel joins.
For a partial path, the cardinality estimate needs to reflect the
number of rows we think each worker will see, rather than the total
number of rows; otherwise, costing will go wrong. The previous coding
got this completely wrong for parallel joins.
Unfortunately, this change may destabilize plans for users of 9.6 who
have enabled parallel query, but since 9.6 is still fairly new I'm
hoping expectations won't be too settled yet. Also, this is really a
brown-paper-bag bug, so leaving it unfixed for the entire lifetime of
9.6 seems unwise.
Related reports (whose import I initially failed to recognize) by
Tomas Vondra and Tom Lane.
Stephen Frost [Wed, 11 Jan 2017 20:45:53 +0000 (15:45 -0500)]
pg_restore: Don't allow non-positive number of jobs
pg_restore will currently accept invalid values for the number of
parallel jobs to run (eg: -1), unlike pg_dump which does check that the
value provided is reasonable.
Worse, '-1' is actually a valid, independent, parameter (as an alias for
--single-transaction), leading to potentially completely unexpected
results from a command line such as:
-> pg_restore -j -1
Where a user would get neither parallel jobs nor a single-transaction.
Add in validity checking of the parallel jobs option, as we already have
in pg_dump, before we try to open up the archive. Also move the check
that we haven't been asked to run more parallel jobs than possible on
Windows to the same place, so we do all the option validity checking
before opening the archive.
Back-patch all the way, though for 9.2 we're adding the Windows-specific
check against MAXIMUM_WAIT_OBJECTS as that check wasn't back-patched
originally.
Stephen Frost [Tue, 10 Jan 2017 16:34:55 +0000 (11:34 -0500)]
pg_dump: Strict names with no matching schema
When using pg_dump --strict-names and a schema pattern which doesn't
match any schemas (eg: --schema='nonexistant*'), we were incorrectly
throwing an error claiming no tables were found when, really, there
were no schemas found:
-> pg_dump --strict-names --schema='nonexistant*'
pg_dump: no matching tables were found for pattern "nonexistant*"
Fix that by changing the error message to say 'schemas' instead, since
that is what we are actually complaining about.
Noticed while testing pg_dump error cases.
Back-patch to 9.6 where --strict-names and this error message were
introduced.
Alvaro Herrera [Mon, 9 Jan 2017 22:26:58 +0000 (19:26 -0300)]
Fix ALTER TABLE / SET TYPE for irregular inheritance
If inherited tables don't have exactly the same schema, the USING clause
in an ALTER TABLE / SET DATA TYPE misbehaves when applied to the
children tables since commit 9550e8348b79. Starting with that commit,
the attribute numbers in the USING expression are fixed during parse
analysis. This can lead to bogus errors being reported during
execution, such as:
ERROR: attribute 2 has wrong type
DETAIL: Table has type smallint, but query expects integer.
Since it wouldn't do to revert to the original coding, we now apply a
transformation to map the attribute numbers to the correct ones for each
child.
Reported by Justin Pryzby
Analysis by Tom Lane; patch by me.
Discussion: https://postgr.es/m/20170102225618.GA10071@telsasoft.com
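A hypothetical reproducer under the stated conditions (the child's column
order differs from the parent's; names invented):
CREATE TABLE parent (a int, b smallint);
CREATE TABLE child  (b smallint, a int);
ALTER TABLE child INHERIT parent;
-- Before the fix, recursing to child could evaluate the USING expression
-- against the wrong attribute number and fail with a bogus
-- "attribute N has wrong type" error.
ALTER TABLE parent ALTER COLUMN b TYPE integer USING b + 1;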
Alvaro Herrera [Mon, 9 Jan 2017 21:19:29 +0000 (18:19 -0300)]
BRIN revmap pages are not standard pages ...
... and therefore we ought not to tell XLogRegisterBuffer the opposite,
when writing XLog for a brin update that moves the index tuple to a
different page. Otherwise, xlog insertion would try to "compress the
hole" when producing a full-page image for it; but since we don't update
pd_lower/upper, the hole covers the whole page. On WAL replay, the
revmap page becomes empty and so the entire portion of the index is
useless and needs to be recomputed.
This is low-probability: a BRIN update only moves an index tuple to a
different page when the summary tuple is larger than the existing one,
which doesn't happen with fixed-width datatypes. Also, the revmap
page must be first after a checkpoint.
Report and patch: Kuntal Ghosh
The bug is alleged to have been detected by a WAL-consistency-checking tool.
Discussion: https://postgr.es/m/CAGz5QCJ=00UQjScSEFbV=0qO5ShTZB9WWz_Fm7+Wd83zPs9Geg@mail.gmail.com
I posted a test case demonstrating the problem, but I'm refraining from
adding it to the test suite; if the WAL consistency tool makes it in,
that will be a better way to catch this from regressing. (We should
definitely have something that causes not-same-page updates, though.)
Stephen Frost [Fri, 6 Jan 2017 20:27:50 +0000 (15:27 -0500)]
Protect against NULL-dereference in pg_dump
findTableByOid() is allowed to return NULL and we should therefore be
checking for that case. getOwnedSeqs() and dumpSequence() shouldn't
ever actually see this happen, but given odd circumstances it might and
commit f9e439b1 probably shouldn't have removed that check.
Pointed out by Coverity. Initial patch from Michael Paquier.
Back-patch to 9.6, where that commit had removed the check.
Tom Lane [Fri, 6 Jan 2017 19:12:52 +0000 (14:12 -0500)]
Invalidate cached plans on FDW option changes.
This fixes problems where a plan must change but fails to do so,
as seen in a bug report from Rajkumar Raghuwanshi.
For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of
forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER
and ALTER SERVER, just flush the whole plan cache on any change in
pg_foreign_data_wrapper or pg_foreign_server. That matches the way
we handle some other low-probability cases such as opclass changes, and
it's unclear that the case arises often enough to be worth working harder.
Besides, that gives a patch that is simple enough to back-patch with
confidence.
Back-patch to 9.3. In principle we could apply the code change to 9.2 as
well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that
anyone is doing anything exciting enough with FDWs that far back to need
this desperately, and (c) the patch doesn't apply cleanly.
Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh
Bapat, who each contributed substantial changes as well.
Robert Haas [Thu, 5 Jan 2017 18:12:16 +0000 (13:12 -0500)]
Fix possible leak of semaphore count.
Commit 4aec49899e5782247e134f94ce1c6ee926f88e1c reorganized the order
of operations here so that we no longer increment the number of "extra
waits" before locking the semaphore, but it did not change the
starting value of extraWaits from 0 to -1 to compensate. In the worst
case, this could leak a semaphore count, but that seems to be unlikely
in practice.
Tom Lane [Thu, 5 Jan 2017 16:33:51 +0000 (11:33 -0500)]
Fix handling of empty arrays in array_fill().
array_fill(..., array[0]) produced an empty array, which is probably
what users expect, but it was a one-dimensional zero-length array
which is not our standard representation of empty arrays. Also, for
no very good reason, it rejected empty input arrays; that case should
be allowed and produce an empty output array.
In passing, remove the restriction that the input array(s) have lower
bound 1. That seems rather pointless, and it would have needed extra
complexity to make the check deal with empty input arrays.
Per bug #14487 from Andrew Gierth. It's been broken all along, so
back-patch to all supported branches.
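For example, both statements below reflect the corrected behavior:
SELECT array_dims(array_fill(0, ARRAY[0]));   -- NULL, i.e. a standard empty array
SELECT array_fill(0, '{}'::int[]);            -- {}  (no longer rejected)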
Tom Lane [Wed, 4 Jan 2017 23:00:11 +0000 (18:00 -0500)]
Handle OID column inheritance correctly in ALTER TABLE ... INHERIT.
Inheritance operations must treat the OID column, if any, much like
regular user columns. But MergeAttributesIntoExisting() neglected to
do that, leading to weird results after a table with OIDs is associated
to a parent with OIDs via ALTER TABLE ... INHERIT.
Report and patch by Amit Langote, reviewed by Ashutosh Bapat, some
adjustments by me. It's been broken all along, so back-patch to
all supported branches.
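A minimal sketch of the affected operation (table names invented):
CREATE TABLE parent (a int) WITH OIDS;
CREATE TABLE child  (a int) WITH OIDS;
-- The child's oid column must be merged like a regular user column here:
ALTER TABLE child INHERIT parent;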
Tom Lane [Wed, 4 Jan 2017 18:36:44 +0000 (13:36 -0500)]
Prefer int-wide pg_atomic_flag over char-wide when using gcc intrinsics.
configure can only probe the existence of gcc intrinsics, not how well
they're implemented, and unfortunately the answer is sometimes "badly".
In particular we've found that multiple compilers fail to implement
char-width __sync_lock_test_and_set() correctly on PPC; and even a correct
implementation would necessarily be pretty inefficient, since that hardware
has only a word-wide primitive to work with.
Given the knowledge we've accumulated in s_lock.h, it appears that it's
best to rely on int-width TAS operations on most non-Intel architectures.
Hence, pick int not char when both are nominally available to us in
generic-gcc.h (note that that code is not used for x86[_64]).
Back-patch to fix regression test failures on FreeBSD/PPC. Ordinarily
back-patching a change like this would be verboten because of ABI breakage.
But since pg_atomic_flag is not yet used in any Postgres data structure,
there's no ABI to break. It seems safer to back-patch to avoid possible
gotchas, if someday we do back-patch something that uses pg_atomic_flag.
Remove bogus notice that older clients might not work with MD5 passwords.
That was written when we still had "crypt" authentication, and it was
referring to the fact that an older client might support "crypt"
authentication but not "md5". But we haven't supported "crypt" for years.
(As soon as we add a new authentication mechanism that doesn't work with
MD5 hashes, we'll need a similar notice again. But this text as it's worded
now is just wrong.)
Joe Conway [Mon, 2 Jan 2017 22:42:28 +0000 (14:42 -0800)]
Silence compiler warnings
Rearrange a bit of code to ensure that 'mode' in LWLockRelease is
obviously always set, which seems a bit cleaner and avoids a compiler
warning (thanks to Robert for the suggestion!).
Back-patch back to 9.5 where the warning is first seen.
Author: Stephen Frost
Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net
Joe Conway [Mon, 2 Jan 2017 22:11:50 +0000 (14:11 -0800)]
Silence compiler warnings
In GetCachedPlan(), initialize 'plan' to silence a compiler warning, but
also add an Assert() to make sure we don't ever actually fall through
with 'plan' still being set to NULL, since we are about to dereference
it.
Back-patch back to 9.2.
Author: Stephen Frost
Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net
Tom Lane [Tue, 27 Dec 2016 20:43:54 +0000 (15:43 -0500)]
Fix interval_transform so it doesn't throw away non-no-op casts.
interval_transform() contained two separate bugs that caused it to
sometimes mistakenly decide that a cast from interval to restricted
interval is a no-op and throw it away.
First, it was wrong to rely on dt.h's field type macros to have an
ordering consistent with the field's significance; in one case they do
not. This led to mistakenly treating YEAR as less significant than MONTH,
so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly
discarded.
Second, fls(1<<k) produces k+1 not k, so comparing its output directly
to SECOND was wrong. This led to supposing that a cast to INTERVAL
MINUTE was really a cast to INTERVAL SECOND and so could be discarded.
To fix, get rid of the use of fls(), and make a function based on
intervaltypmodout to produce a field ID code adapted to the need here.
Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform
functions were introduced, because this code was born broken.
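A hedged sketch of the first case (behavior described per the report, not
re-verified here):
-- Casting INTERVAL MONTH to INTERVAL YEAR must not be discarded as a no-op:
SELECT '5 months'::interval month::interval year;
-- the correct result keeps only whole years (here: zero); the buggy
-- transform let the 5 months through unchanged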
Andrew Dunstan [Tue, 27 Dec 2016 16:23:46 +0000 (11:23 -0500)]
Explain unaccounted for space in pgstattuple.
In addition to space accounted for by tuple_len, dead_tuple_len and
free_space, the table_len includes page overhead, the item pointers
table and padding bytes.
Tom Lane [Mon, 26 Dec 2016 19:58:02 +0000 (14:58 -0500)]
Remove triggerable Assert in hashname().
hashname() asserted that the key string it is given is shorter than
NAMEDATALEN. That should surely always be true if the input is in fact a
regular value of type "name". However, for reasons of coding convenience,
we allow plain old C strings to be treated as "name" values in many places.
Some SQL functions accept arbitrary "text" inputs, convert them to C
strings, and pass them otherwise-untransformed to syscache lookups for name
columns, allowing an overlength input value to trigger hashname's Assert.
This would be a DOS problem, except that it only happens in assert-enabled
builds which aren't recommended for production. In a production build,
you'll just get a name lookup error, since regardless of the hash value
computed by hashname, the later equality comparison checks can't match.
Likewise, if the catalog lookup is done by seqscan or indexscan searches,
there will just be a lookup error, since the name comparison functions
don't contain any similar length checks, and will see an overlength input
as unequal to any stored entry.
After discussion we concluded that we should simply remove this Assert.
It's inessential to hashname's own functionality, and having such an
assertion in only some paths for name lookup is more of a foot-gun than
a useful check. There may or may not be a case for the affected callers
to do something other than let the name lookup fail, but we'll consider
that separately; in any case we probably don't want to change such
behavior in the back branches.
Per report from Tushar Ahuja. Back-patch to all supported branches.
Tom Lane [Sun, 25 Dec 2016 21:04:31 +0000 (16:04 -0500)]
Fix incorrect error reporting for duplicate data in \crosstabview.
\crosstabview's complaint about multiple entries for the same crosstab
cell quoted the wrong row and/or column values. It would accidentally
appear to work if the data had been in strcmp() order to start with,
which probably explains how we missed noticing this during development.
This could be fixed in more than one way, but the way I chose was to
hang onto both result pointers from bsearch() and use those to get at
the value names.
In passing, avoid casting away const in the bsearch comparison functions.
No bug there, just poor style.
Per bug #14476 from Tomonari Katsumata. Back-patch to 9.6 where
\crosstabview was introduced.
Stephen Frost [Sat, 24 Dec 2016 06:42:07 +0000 (01:42 -0500)]
pg_dumpall: Include --verbose option in --help output
The -v/--verbose option was not included in the output from --help for
pg_dumpall even though it's in the pg_dumpall documentation and has
apparently been around since pg_dumpall was reimplemented in C in 2002.
Stephen Frost [Sat, 24 Dec 2016 02:01:33 +0000 (21:01 -0500)]
Fix tab completion in psql for ALTER DEFAULT PRIVILEGES
When providing tab completion for ALTER DEFAULT PRIVILEGES, we are
including the list of roles as possible options for completion after the
GRANT or REVOKE. Further, we accept FOR ROLE/IN SCHEMA at the same time
and in either order, but the tab completion was only working for one or
the other. Lastly, we weren't using the actual list of allowed kinds of
objects for default privileges for completion after the 'GRANT X ON' but
instead were completing to what 'GRANT X ON' supports, which isn't the
same at all.
Address these issues by improving the forward tab-completion for ALTER
DEFAULT PRIVILEGES and then constrain and correct how the tail
completion is done when it is for ALTER DEFAULT PRIVILEGES.
Back-patch the forward/tail tab-completion to 9.6, where we made it easy
to handle such cases.
For 9.5 and earlier, correct the initial tab-completion to at least be
correct as far as it goes and then add a check for GRANT/REVOKE to only
tab-complete when the GRANT/REVOKE is the start of the command, so we
don't try to do tab-completion after we get to the GRANT/REVOKE part of
the ALTER DEFAULT PRIVILEGES command, which is better than providing
incorrect completions.
Initial patch for master and 9.6 by Gilles Darold, though I cleaned it
up and added a few comments. All bugs in the 9.5 and earlier patch are
mine.
Tom Lane [Fri, 23 Dec 2016 17:53:09 +0000 (12:53 -0500)]
Doc: improve index entry for "median".
We had an index entry for "median" attached to the percentile_cont function
entry, which was pretty useless because a person following the link would
never realize that that function was the one they were being hinted to use.
Instead, make the index entry point at the example in syntax-aggregates,
and add a <seealso> link to "percentile".
Also, since that example explicitly claims to be calculating the median,
make it use percentile_cont not percentile_disc. This makes no difference
in terms of the larger goals of that section, but so far as I can find,
nearly everyone thinks that "median" means the continuous not discrete
calculation.
Per gripe from Steven Winfield. Back-patch to 9.4 where we introduced
percentile_cont.
Joe Conway [Fri, 23 Dec 2016 01:57:01 +0000 (17:57 -0800)]
Improve RLS documentation with respect to COPY
Documentation for pg_restore said COPY TO does not support row security
when in fact it should say COPY FROM. Fix that.
While at it, make it clear that "COPY FROM" does not allow RLS to be
enabled and INSERT should be used instead. Also that SELECT policies
will apply to COPY TO statements.
Back-patch to 9.5 where RLS first appeared.
Author: Joe Conway
Reviewed-By: Dean Rasheed and Robert Haas
Discussion: https://postgr.es/m/5744FA24.3030008%40joeconway.com
Stephen Frost [Thu, 22 Dec 2016 22:08:47 +0000 (17:08 -0500)]
Use TSConfigRelationId in AlterTSConfiguration()
When we are altering a text search configuration, we are getting the
tuple from pg_ts_config and using its OID, so use TSConfigRelationId
when invoking any post-alter hooks and setting the object address.
Further, in the functions called from AlterTSConfiguration(), we're
saving information about the command via
EventTriggerCollectAlterTSConfig(), so we should be setting
commandCollected to true. Also add a regression test to
test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION.
Author: Artur Zakirov, a few additional comments by me
Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru
Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where
it was introduced, and the fix for the ObjectAddressSet() call and
setting commandCollected to true to 9.5 where those changes to
ProcessUtilitySlow() were introduced.
Tom Lane [Thu, 22 Dec 2016 21:23:34 +0000 (16:23 -0500)]
Fix CREATE TABLE ... LIKE ... WITH OIDS.
Having a WITH OIDS specification should result in the creation of an OID
column, but commit b943f502b broke that in the case that there were LIKE
tables without OIDS. Commentary in that patch makes it look like this was
intentional, but if so it was based on a faulty reading of what inheritance
does: the parent tables can add an OID column, but they can't subtract one.
AFAICS, the behavior ought to be that you get an OID column if any of the
inherited tables, LIKE tables, or WITH clause ask for one.
Also, revert that patch's unnecessary split of transformCreateStmt's loop
over the tableElts list into two passes. That seems to have been based on
a misunderstanding as well: we already have two-pass processing here,
we don't need three passes.
Per bug #14474 from Jeff Dafoe. Back-patch to 9.6 where the misbehavior
was introduced.
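For example (table names invented):
CREATE TABLE src (a int);                -- no OIDs
CREATE TABLE dst (LIKE src) WITH OIDS;   -- dst must still get an oid column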
Tom Lane [Thu, 22 Dec 2016 20:01:27 +0000 (15:01 -0500)]
Fix handling of expanded objects in CoerceToDomain and CASE execution.
When the input value to a CoerceToDomain expression node is a read-write
expanded datum, we should pass a read-only pointer to any domain CHECK
expressions and then return the original read-write pointer as the
expression result. Previously we were blindly passing the same pointer to
all the consumers of the value, making it possible for a function in CHECK
to modify or even delete the expanded value. (Since a plpgsql function
will absorb a passed-in read-write expanded array as a local variable
value, it will in fact delete the value on exit.)
A similar hazard of passing the same read-write pointer to multiple
consumers exists in domain_check() and in ExecEvalCase, so fix those too.
The fix requires adding MakeExpandedObjectReadOnly calls at the appropriate
places, which is simple enough except that we need to get the data type's
typlen from somewhere. For the domain cases, solve this by redefining
DomainConstraintRef.tcache as okay for callers to access; there wasn't any
reason for the original convention against that, other than not wanting the
API of typcache.c to be any wider than it had to be. For CASE, there's
no good solution except to add a syscache lookup during executor start.
Per bug #14472 from Marcos Castedo. Back-patch to 9.5 where expanded
values were introduced.
Robert Haas [Thu, 22 Dec 2016 18:54:40 +0000 (13:54 -0500)]
Fix broken error check in _hash_doinsert.
You can't just cast a HashMetaPage to a Page, because the meta page
data is stored after the page header, not at offset 0. Fortunately,
this didn't break anything because it happens to find hashm_bsize
at the offset at which it expects to find pd_pagesize_version, and
the values are close enough to the same that this works out.
Still, it's a bug, so back-patch to all supported versions.
Joe Conway [Thu, 22 Dec 2016 17:47:55 +0000 (09:47 -0800)]
Make dblink try harder to form useful error messages
When libpq encounters a connection-level error, e.g. runs out of memory
while forming a result, there will be no error associated with PGresult,
but a message will be placed into PGconn's error buffer. postgres_fdw
takes care to use the PGconn error message when PGresult does not have
one, but dblink has been negligent in that regard. Modify dblink to mirror
what postgres_fdw has been doing.
Back-patch to all supported branches.
Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
Joe Conway [Thu, 22 Dec 2016 17:19:34 +0000 (09:19 -0800)]
Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options, therefore protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.
Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.
Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
Tom Lane [Thu, 22 Dec 2016 16:19:04 +0000 (11:19 -0500)]
Give a useful error message if uuid-ossp is built without preconfiguration.
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without
having told configure you meant to; you could just cd into that directory
and "make". That no longer works because the code depends on configure to
have done header and library probes, but the ensuing error messages are
not so easy to interpret if you're not an old C hand. We've gotten a
couple of complaints recently from people trying to do this the low-tech
way, so add an explicit #error directing the user to use --with-uuid.
(In principle we might want to do something similar in the other
optionally-built contrib modules; but I don't think any of the others have
ever worked without preconfiguration, so there are no bad habits to break
people of.)
Back-patch to 9.4 where the previous commit came in.
Joe Conway [Wed, 21 Dec 2016 23:48:05 +0000 (15:48 -0800)]
Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.
Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.
Tom Lane [Wed, 21 Dec 2016 22:39:32 +0000 (17:39 -0500)]
Fix detection of unfinished Unicode surrogate pair at end of string.
The U&'...' and U&"..." syntaxes silently discarded a surrogate pair
start (that is, a code between U+D800 and U+DBFF) if it occurred at
the very end of the string. This seems like an obvious oversight,
since we throw an error for every other invalid combination of surrogate
characters, including the very same situation in E'...' syntax.
This has been wrong since the pair processing was added (in 9.0),
so back-patch to all supported branches.
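For example:
-- A surrogate-pair start with nothing after it is now an error,
-- matching the E'...' behavior, instead of being silently dropped:
SELECT U&'\D83D';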
Tom Lane [Wed, 21 Dec 2016 20:18:25 +0000 (15:18 -0500)]
Fix strange behavior (and possible crashes) in full text phrase search.
In an attempt to simplify the tsquery matching engine, the original
phrase search patch invented rewrite rules that would rearrange a
tsquery so that no AND/OR/NOT operator appeared below a PHRASE operator.
But this approach had numerous problems. The rearrangement step was
missed by ts_rewrite (and perhaps other places), allowing tsqueries
to be created that would cause Assert failures or perhaps crashes at
execution, as reported by Andreas Seltenreich. The rewrite rules
effectively defined semantics for operators underneath PHRASE that were
buggy, or at least unintuitive. And because rewriting was done in
tsqueryin() rather than at execution, the rearrangement was user-visible,
which is not very desirable --- for example, it might cause unexpected
matches or failures to match in ts_rewrite.
As a somewhat independent problem, the behavior of nested PHRASE operators
was only sane for left-deep trees; queries like "x <-> (y <-> z)" did not
behave intuitively at all.
To fix, get rid of the rewrite logic altogether, and instead teach the
tsquery execution engine to manage AND/OR/NOT below a PHRASE operator
by explicitly computing the match location(s) and match widths for these
operators.
This requires introducing some additional fields into the publicly visible
ExecPhraseData struct; but since there's no way for third-party code to
pass such a struct to TS_phrase_execute, it shouldn't create an ABI problem
as long as we don't move the offsets of the existing fields.
Another related problem was that index searches supposed that "!x <-> y"
could be lossily approximated as "!x & y", which isn't correct because
the latter will reject, say, "x q y" which the query itself accepts.
This required some tweaking in TS_execute_ternary along with the main
tsquery engine.
Back-patch to 9.6 where phrase operators were introduced. While this
could be argued to change behavior more than we'd like in a stable branch,
we have to do something about the crash hazards and index-vs-seqscan
inconsistency, and it doesn't seem desirable to let the unintuitive
behaviors induced by the rewriting implementation stand as precedent.
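One concrete consequence of the index-approximation fix, taken from the
description above:
-- "!x <-> y" accepts this document (position 2 is not x, position 3 is y),
-- so approximating it as "!x & y" in an index search would wrongly reject it:
SELECT to_tsvector('x q y') @@ to_tsquery('!x <-> y');   -- expected: true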
Stephen Frost [Wed, 21 Dec 2016 20:03:37 +0000 (15:03 -0500)]
Improve ALTER TABLE documentation
The ALTER TABLE documentation wasn't terribly clear when it came to
which commands could be combined together and what it meant when they
were.
In particular, SET TABLESPACE *can* be combined with other commands,
when it's operating against a single table, but not when multiple tables
are being moved with ALL IN TABLESPACE. Further, the actions are
applied together but not really in 'parallel', at least today.
Pointed out by: Amit Langote
Improved wording from Tom.
Back-patch to 9.4, where the ALL IN TABLESPACE option was added.
Stephen Frost [Wed, 21 Dec 2016 18:47:13 +0000 (13:47 -0500)]
Fix dumping of casts and transforms using built-in functions
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the
cast or transform if it happened to use a built-in function because we
weren't including the information about built-in functions when querying
pg_proc from getFuncs().
Modify the query in getFuncs() to also gather information about
functions which are used by user-defined casts and transforms (where
"user-defined" means "has an OID >= FirstNormalObjectId"). This also
adds to the TAP regression tests for 9.6 and master to cover these
types of objects.
Back-patch all the way for casts, back to 9.5 for transforms.
Stephen Frost [Wed, 21 Dec 2016 18:47:13 +0000 (13:47 -0500)]
For 8.0 servers, get last built-in oid from pg_database
We didn't start ensuring that all built-in objects had OIDs less than
16384 until 8.1, so for 8.0 servers we still need to query the value out
of pg_database. We need this, in particular, to distinguish which casts
were built-in and which were user-defined.
For HEAD, we only worry about going back to 8.0, for the back-branches,
we also ensure that 7.0-7.4 work.
Dean Rasheed [Wed, 21 Dec 2016 17:01:52 +0000 (17:01 +0000)]
Fix order of operations in CREATE OR REPLACE VIEW.
When CREATE OR REPLACE VIEW acts on an existing view, don't update the
view options until after the view query has been updated.
This is necessary in the case where CREATE OR REPLACE VIEW is used on
an existing view that is not updatable, and the new view is updatable
and specifies the WITH CHECK OPTION. In this case, attempting to apply
the new options to the view before updating its query fails, because
the options are applied using the ALTER TABLE infrastructure which
checks that WITH CHECK OPTION is only applied to an updatable view.
If new columns are being added to the view, that is also done using
the ALTER TABLE infrastructure, but it is important that that still be
done before updating the view query, because the rules system checks
that the query columns match those on the view relation. Added a
comment to explain that, in case someone is tempted to move that to
where the view options are now being set.
Back-patch to 9.4 where WITH CHECK OPTION was added.
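A sketch of the previously-failing sequence (names invented):
CREATE TABLE base (a int);
CREATE VIEW v AS SELECT a FROM base GROUP BY a;     -- not auto-updatable
-- Replacing it with an updatable definition plus WITH CHECK OPTION used to
-- fail, because the option was applied before the query was swapped in:
CREATE OR REPLACE VIEW v WITH (check_option = local)
  AS SELECT a FROM base WHERE a > 0;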
Robert Haas [Wed, 21 Dec 2016 16:01:48 +0000 (11:01 -0500)]
Fix corner-case bug in WaitEventSetWaitBlock on Windows.
If we do not reset the FD_READ event, WaitForMultipleObjects won't
return it again unless we've meanwhile read from the socket,
which is generally true but not guaranteed. WaitEventSetWaitBlock
itself may fail to return the event to the caller if the latch is
also set, and even if we changed that, the caller isn't obliged to
handle all returned events at once. On non-Windows systems, the
socket-read event is purely level-triggered, so this issue does
not exist. To fix, make Windows reset the event when needed.
Fix sharing Agg transition state of DISTINCT or ordered aggs.
If a query contained two aggregates that could share the transition value,
we would correctly collect the input into a tuplesort only once, but
incorrectly run the transition function over the accumulated input twice,
in finalize_aggregates(). That caused a crash, when we tried to call
tuplesort_performsort() on an already-freed NULL tuplestore.
Backport to 9.6, where sharing of transition state and this bug were
introduced.
Tom Lane [Mon, 19 Dec 2016 18:49:45 +0000 (13:49 -0500)]
Fix handling of phrase operator removal while removing tsquery stopwords.
The distance of a removed phrase operator should propagate up to a
parent phrase operator if there is one, but this only worked correctly
in left-deep trees. Throwing in a few parentheses confused it completely,
as indeed was illustrated by bizarre results in existing regression test
cases.
To fix, track unaccounted-for distances that should propagate to the left
and to the right of the current node, rather than trying to make it work
with only one returned distance.
Also make some adjustments to behave as well as we can for cases of
intermixed phrase and regular (AND/OR) operators. I don't think it's
possible to be 100% correct for that without a rethinking of the tsquery
representation; for example, maybe we should just not drop stopword nodes
at all underneath phrase operators. But this is better than it was,
and changing tsquery representation wouldn't be safely back-patchable.
While at it, I simplified the API of the clean_fakeval_intree function
a bit by getting rid of the "char *result" output parameter; that wasn't
doing anything that wasn't redundant with whether the result node is
NULL or not, and testing for NULL seems a lot clearer/safer.
This is part of a larger project to fix various infelicities in the
phrase-search implementation, but this part seems committable on its own.
Back-patch to 9.6 where phrase operators were introduced.
Magnus Hagander [Mon, 19 Dec 2016 09:11:04 +0000 (10:11 +0100)]
Fix base backup rate limiting in presence of slow i/o
When source i/o on disk was too slow compared to the rate limiting
specified, the system could end up with a negative value for sleep that
it never got out of, which caused rate limiting to effectively be
turned off.
Tom Lane [Sun, 18 Dec 2016 03:24:13 +0000 (22:24 -0500)]
In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
Oversight in commit b8cc8f947. I just noticed this causes compiler
warnings on FreeBSD, and it really ought to cause warnings elsewhere too:
all references I can find say that <arpa/inet.h> is required for these.
We have a lot of code elsewhere that thinks that both <netinet/in.h>
and <arpa/inet.h> should be included for these functions, so do it that
way here too, even though <arpa/inet.h> ought to be sufficient according
to the references I consulted.
Back-patch to 9.4 where the previous commit landed.
Tom Lane [Sat, 17 Dec 2016 20:28:54 +0000 (15:28 -0500)]
Fix FK-based join selectivity estimation for semi/antijoins.
This case wasn't thought through sufficiently in commit 100340e2d.
It's true that the FK proves that every outer row has a match in the
inner table, but we forgot that some of the inner rows might be filtered
away by WHERE conditions located within the semijoin's RHS.
If the RHS is just one table, we can reasonably take the semijoin
selectivity as equal to the fraction of the referenced table's rows
that are expected to survive its restriction clauses.
If the RHS is a join, it's not clear how much of the referenced table
might get through the join, so fall back to the same rule we were
already using for other outer-join cases: use the minimum of the
regular per-clause selectivity estimates. This gives the same result
as if we hadn't considered the FK at all when there's a single FK
column, but it should still help for multi-column FKs, which is the
case that 100340e2d is really meant to help with.
Back-patch to 9.6 where the previous commit came in.
Fujii Masao [Fri, 16 Dec 2016 17:20:59 +0000 (02:20 +0900)]
Ensure that num_sync is greater than zero in synchronous_standby_names.
Previously num_sync could be set to zero and this setting caused
an assertion failure. This means that multiple synchronous standbys
code should assume that num_sync is greater than zero.
Also setting num_sync to zero is nonsense because it's basically
the configuration for synchronous replication. If users want not to
make transaction commits wait for any standbys,
synchronous_standby_names should be emptied to disable synchronous
replication instead of setting num_sync to zero.
This patch forbids users from setting num_sync to zero in
synchronous_standby_names. If zero is specified, an error will
happen while processing the parameter settings.
Back-patch to 9.6 where multiple synchronous standbys feature was added.
Patch by me. Reviewed by Tom Lane.
Discussion: <CAHGQGwHWB3izc6cXuFLh5kOcAbFXaRhhgwd-X5PeN9TEjxqXwg@mail.gmail.com>
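For example, a setting like the following should now be rejected when the
parameter value is validated (standby names invented):
ALTER SYSTEM SET synchronous_standby_names = '0 (s1, s2)';   -- now an error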