Noah Misch [Mon, 5 Oct 2015 14:06:29 +0000 (10:06 -0400)]
Prevent stack overflow in container-type functions.
A range type can name another range type as its subtype, and a record
type can bear a column of another record type. Consequently, functions
like range_cmp() and record_recv() are recursive. Functions at risk
include operator family members and referents of pg_type regproc
columns. Treat as recursive any such function that looks up and calls
the same-purpose function for a record column type or the range subtype.
Back-patch to 9.0 (all supported versions).
An array type's element type is never itself an array type, so array
functions are unaffected. Recursion depth proportional to array
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
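For illustration, a nesting like the following (type names hypothetical)
makes range_cmp() recurse once per level:

    CREATE TYPE r1 AS RANGE (subtype = int4range);
    CREATE TYPE r2 AS RANGE (subtype = r1);
    -- continued deep enough, comparing two values of the outermost type
    -- previously overflowed the stack; it now raises a clean error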
Noah Misch [Mon, 5 Oct 2015 14:06:29 +0000 (10:06 -0400)]
Prevent stack overflow in json-related functions.
Sufficiently-deep recursion heretofore elicited a SIGSEGV. If an
application constructs PostgreSQL json or jsonb values from arbitrary
user input, application users could have exploited this to terminate all
active database connections. That applies to 9.3, where the json parser
adopted recursive descent, and later versions. Only row_to_json() and
array_to_json() were at risk in 9.2, both in a non-security capacity.
Back-patch to 9.2, where the json type was introduced.
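For illustration, deeply nested input of this kind (depth exaggerated)
used to crash the backend and now fails cleanly:

    -- previously a SIGSEGV; now "stack depth limit exceeded"
    SELECT (repeat('[', 10000000) || repeat(']', 10000000))::json;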
Noah Misch [Mon, 5 Oct 2015 14:06:29 +0000 (10:06 -0400)]
pgcrypto: Detect and report too-short crypt() salts.
Certain short salts crashed the backend or disclosed a few bytes of
backend memory. For existing salt-induced error conditions, emit a
message saying as much. Back-patch to 9.0 (all supported versions).
Stephen Frost [Mon, 5 Oct 2015 11:55:13 +0000 (07:55 -0400)]
Apply SELECT policies in INSERT/UPDATE+RETURNING
Similar to 7d8db3e, given that INSERT+RETURNING requires SELECT rights
on the table, apply the SELECT policies as WCOs to the tuples being
inserted. Apply the same logic to UPDATE+RETURNING.
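For illustration (table and policy names hypothetical):

    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    CREATE POLICY p_sel ON accounts FOR SELECT USING (owner = current_user);
    CREATE POLICY p_ins ON accounts FOR INSERT WITH CHECK (balance >= 0);
    -- now errors if the new row would not be visible under p_sel:
    INSERT INTO accounts (owner, balance) VALUES ('someone', 10) RETURNING *;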
Andres Freund [Mon, 5 Oct 2015 09:53:43 +0000 (11:53 +0200)]
Re-Align *_freeze_max_age reloption limits with corresponding GUC limits.
In 020235a5754 I lowered the autovacuum_*freeze_max_age minimums to
allow for easier testing of wraparounds. I did not touch the
corresponding per-table limits. While those don't matter for the purpose
of wraparound, it seems more consistent to lower them as well.
It's noteworthy that the previous reloption lower limit for
autovacuum_multixact_freeze_max_age was too high by an order of magnitude, even
before 020235a5754.
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: back to 9.0 (in parts), like the prior patch
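For illustration, per-table settings like this are now accepted (value
illustrative; the point is it may be far lower than before):

    ALTER TABLE tiny SET (autovacuum_freeze_max_age = 100000);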
Stephen Frost [Mon, 5 Oct 2015 01:05:08 +0000 (21:05 -0400)]
ALTER TABLE .. FORCE ROW LEVEL SECURITY
To allow users to force RLS to always be applied, even for table owners,
add ALTER TABLE .. FORCE ROW LEVEL SECURITY.
row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump
output is complete (by default).
Also add SECURITY_NOFORCE_RLS context to avoid data corruption when
ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used. The
SECURITY_NOFORCE_RLS security context is used only during referential
integrity checks and is only considered in check_enable_rls() after we
have already checked that the current user is the owner of the relation
(which should always be the case during referential integrity checks).
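For example (table name hypothetical):

    ALTER TABLE confidential ENABLE ROW LEVEL SECURITY;
    ALTER TABLE confidential FORCE ROW LEVEL SECURITY;
    -- policies now apply to the table owner as well; SET row_security = off
    -- still bypasses them, which keeps pg_dump output complete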
Tom Lane [Sun, 4 Oct 2015 21:58:29 +0000 (17:58 -0400)]
Improve contrib/pg_stat_statements' handling of garbage collection failure.
If we can't read the query texts file (whether because of out-of-memory, or
for some other reason), give up and reset the file to empty, discarding all
stored query texts, though not the statistics per se. We used to leave
things alone and hope for better luck next time, but the problem is that
the file is only going to get bigger and even harder to slurp into memory.
Better to do something that will get us out of trouble.
Likewise reset the file to empty for any other failure within gc_qtexts().
The previous behavior after a write error was to discard query texts but
not do anything to truncate the file, which is just weird.
Also, increase the maximum supported file size from MaxAllocSize to
MaxAllocHugeSize; this makes it more likely we'll be able to do a garbage
collection successfully.
Also, fix recalculation of mean_query_len within entry_dealloc() to match
the calculation in gc_qtexts(). The previous coding overlooked the
possibility of dropped texts (query_len == -1) and would underestimate the
mean of the remaining entries in such cases, thus possibly causing excess
garbage collection cycles.
In passing, add some errdetail to the log entry that complains about
insufficient memory to read the query texts file, which after all was
Jim Nasby's original complaint.
Back-patch to 9.4 where the current handling of query texts was
introduced.
Andres Freund [Sun, 4 Oct 2015 20:24:13 +0000 (22:24 +0200)]
Fix hstore_plpython test when python3 is used.
Due to b67aaf21e8ef8 / CREATE EXTENSION ... CASCADE the test output
contains the extension name in yet another place. Since that name varies
with the python version, add yet another name-mangling stanza to
regress-python3-mangle.mk.
Tom Lane [Sun, 4 Oct 2015 19:55:07 +0000 (15:55 -0400)]
Further twiddling of nodeHash.c hashtable sizing calculation.
On reflection, the submitted patch didn't really work to prevent the
request size from exceeding MaxAllocSize, because of the fact that we'd
happily round nbuckets up to the next power of 2 after we'd limited it to
max_pointers. The simplest way to enforce the limit correctly is to
round max_pointers down to a power of 2 when it isn't one already.
(Note that the constraint to INT_MAX / 2, if it were doing anything useful
at all, is properly applied after that.)
Tom Lane [Sun, 4 Oct 2015 18:06:40 +0000 (14:06 -0400)]
Fix some issues in new hashtable size calculations in nodeHash.c.
Limit the size of the hashtable pointer array to not more than
MaxAllocSize, per reports from Kouhei Kaigai and others of "invalid memory
alloc request size" failures. There was discussion of allowing the array
to get larger than that by using the "huge" palloc API, but so far no proof
that that is actually a good idea, and at this point in the 9.5 cycle major
changes from old behavior don't seem like the way to go.
Fix a rather serious secondary bug in the new code, which was that it
didn't ensure nbuckets remained a power of 2 when recomputing it for the
multiple-batch case.
Clean up sloppy division of labor between ExecHashIncreaseNumBuckets and
its sole call site.
Noah Misch [Sun, 4 Oct 2015 00:19:57 +0000 (20:19 -0400)]
Make BYPASSRLS behave like superuser RLS bypass.
Specifically, make its effect independent from the row_security GUC, and
make it affect permission checks pertinent to views the BYPASSRLS role
owns. The row_security GUC thereby ceases to change successful-query
behavior; it can only make a query fail with an error. Back-patch to
9.5, where BYPASSRLS was introduced.
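For illustration (role name hypothetical):

    CREATE ROLE auditor LOGIN BYPASSRLS;
    SET ROLE auditor;
    SET row_security = on;  -- no longer changes which rows auditor sees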
Andres Freund [Sat, 3 Oct 2015 16:19:37 +0000 (18:19 +0200)]
Add CASCADE support for CREATE EXTENSION.
Without CASCADE, if an extension has an unfulfilled dependency on
another extension, CREATE EXTENSION ERRORs out with "required extension
... is not installed". That is annoying, especially when that dependency
is an implementation detail of the extension, rather than something the
extension's user can make sense of.
In addition to CASCADE this also includes a small set of regression
tests around CREATE EXTENSION.
Author: Petr Jelinek, editorialized by Michael Paquier, Andres Freund
Reviewed-By: Michael Paquier, Andres Freund, Jeff Janes
Discussion: 557E0520.3040800@2ndquadrant.com
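For example:

    -- earthdistance requires cube; CASCADE installs the dependency first,
    -- emitting a notice that the required extension is being installed
    CREATE EXTENSION earthdistance CASCADE;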
Andres Freund [Sat, 3 Oct 2015 13:12:10 +0000 (15:12 +0200)]
Fix several bugs related to ON CONFLICT's EXCLUDED pseudo relation.
Four related issues:
1) attnos/varnos/resnos for EXCLUDED were out of sync when a column
after one dropped in the underlying relation was referenced.
2) References to whole-row variables (i.e. EXCLUDED.*) led to errors.
3) It was possible to reference system columns in the EXCLUDED pseudo
relations, even though they would not have valid contents.
4) References to EXCLUDED were rewritten by the RLS machinery, as
EXCLUDED was treated as if it were the underlying relation.
To fix the first two issues, generate the excluded targetlist with
dropped columns in mind and add an entry for whole row
variables. Instead of unconditionally adding a wholerow entry we could
pull up the expression if needed, but doing it unconditionally seems
simpler. The wholerow entry is only really needed for ruleutils/EXPLAIN
support anyway.
The remaining two issues are addressed by changing the EXCLUDED RTE to
have relkind = composite. That fits with EXCLUDED not actually being a
real relation, and allows us to treat it differently in the relevant
places. scanRTEForColumn now skips looking up system columns when the
RTE has a composite relkind; fireRIRrules() already had a corresponding
check, thereby preventing RLS expansion on EXCLUDED.
Also add tests for these issues, and improve a few comments around
excluded handling in setrefs.c.
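For illustration of the constructs involved (names hypothetical):

    CREATE TABLE t (a int PRIMARY KEY, b text);
    INSERT INTO t VALUES (1, 'old');
    INSERT INTO t VALUES (1, 'new')
      ON CONFLICT (a) DO UPDATE SET b = EXCLUDED.b;  -- per-column reference
    -- whole-row references (EXCLUDED.*) used to raise errors, and system
    -- columns such as EXCLUDED.ctid are now rejected instead of yielding
    -- invalid contents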
Reported-By: Peter Geoghegan, Geoff Winkless
Author: Andres Freund, Amit Langote, Peter Geoghegan
Discussion: CAEzk6fdzJ3xYQZGbcuYM2rBd2BuDkUksmK=mY9UYYDugg_GgZg@mail.gmail.com,
CAM3SWZS+CauzbiCEcg-GdE6K6ycHE_Bz6Ksszy8AoixcMHOmsA@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
Tom Lane [Fri, 2 Oct 2015 23:15:39 +0000 (19:15 -0400)]
Update time zone data files to tzdata release 2015g.
DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island,
North Korea, Turkey, Uruguay. New zone America/Fort_Nelson for Canadian
Northern Rockies.
Tom Lane [Fri, 2 Oct 2015 18:51:58 +0000 (14:51 -0400)]
Add recursion depth protections to regular expression matching.
Some of the functions in regex compilation and execution recurse, and
therefore could in principle be driven to stack overflow. The Tcl crew
has seen this happen in practice in duptraverse(), though their fix was
to put in a hard-wired limit on the number of recursive levels, which is
not too appetizing --- fortunately, we have enough infrastructure to check
the actually available stack. Greg Stark has also seen it in other places
while fuzz testing on a machine with limited stack space. Let's put guards
in to prevent crashes in all these places.
Since the regex code would leak memory if we simply threw elog(ERROR),
we have to introduce an API that checks for stack depth without throwing
such an error. Fortunately that's not difficult.
Tom Lane [Fri, 2 Oct 2015 18:26:36 +0000 (14:26 -0400)]
Fix potential infinite loop in regular expression execution.
In cfindloop(), if the initial call to shortest() reports that a
zero-length match is possible at the current search start point, but then
it is unable to construct any actual match to that, it'll just loop around
with the same start point, and thus make no progress. We need to force the
start point to be advanced. This is safe because the loop over "begin"
points has already tried and failed to match starting at "close", so there
is surely no need to try that again.
This bug was introduced in commit e2bd904955e2221eddf01110b1f25002de2aaa83,
wherein we allowed continued searching after we'd run out of match
possibilities, but evidently failed to think hard enough about exactly
where we needed to search next.
Because of the way this code works, such a match failure is only possible
in the presence of backrefs --- otherwise, shortest()'s judgment that a
match is possible should always be correct. That probably explains how
come the bug has escaped detection for several years.
The actual fix is a one-liner, but I took the trouble to add/improve some
comments related to the loop logic.
After fixing that, the submitted test case "()*\1" didn't loop anymore.
But it reported failure, though it seems like it ought to match a
zero-length string; both Tcl and Perl think it does. That seems to be from
overenthusiastic optimization on my part when I rewrote the iteration match
logic in commit 173e29aa5deefd9e71c183583ba37805c8102a72: we can't just
"declare victory" for a zero-length match without bothering to set match
data for capturing parens inside the iterator node.
Per fuzz testing by Greg Stark. The first part of this is a bug in all
supported branches, and the second part is a bug since 9.2 where the
iteration rewrite happened.
Tom Lane [Fri, 2 Oct 2015 17:45:39 +0000 (13:45 -0400)]
Add some more query-cancel checks to regular expression matching.
Commit 9662143f0c35d64d7042fbeaf879df8f0b54be32 added infrastructure to
allow regular-expression operations to be terminated early in the event
of SIGINT etc. However, fuzz testing by Greg Stark disclosed that there
are still cases where regex compilation could run for a long time without
noticing a cancel request. Specifically, the fixempties() phase never
adds new states, only new arcs, so it doesn't hit the cancel check I'd put
in newstate(). Add one to newarc() as well to cover that.
Some experimentation of my own found that regex execution could also run
for a long time despite a pending cancel. We'd put a high-level cancel
check into cdissect(), but there was none inside the core text-matching
routines longest() and shortest(). Ordinarily those inner loops are very
very fast ... but in the presence of lookahead constraints, not so much.
As a compromise, stick a cancel check into the stateset cache-miss
function, which is enough to guarantee a cancel check at least once per
lookahead constraint test.
Making this work required more attention to error handling throughout the
regex executor. Henry Spencer had apparently originally intended longest()
and shortest() to be incapable of incurring errors while running, so
neither they nor their subroutines had well-defined error reporting
behaviors. However, that was already broken by the lookahead constraint
feature, since lacon() can surely suffer an out-of-memory failure ---
which, in the code as it stood, might never be reported to the user at all,
but just silently be treated as a non-match of the lookahead constraint.
Normalize all that by inserting explicit error tests as needed. I took the
opportunity to add some more comments to the code, too.
Back-patch to all supported branches, like the previous patch.
Tom Lane [Fri, 2 Oct 2015 17:30:42 +0000 (13:30 -0400)]
Docs: add disclaimer about hazards of using regexps from untrusted sources.
It's not terribly hard to devise regular expressions that take large
amounts of time and/or memory to process. Recent testing by Greg Stark has
also shown that machines with small stack limits can be driven to stack
overflow by suitably crafted regexps. While we intend to fix these things
as much as possible, it's probably impossible to eliminate slow-execution
cases altogether. In any case we don't want to treat such things as
security issues. The history of that code should already discourage
prudent DBAs from allowing execution of regexp patterns coming from
possibly-hostile sources, but it seems like a good idea to warn about the
hazard explicitly.
Currently, similar_escape() allows access to enough of the underlying
regexp behavior that the warning has to apply to SIMILAR TO as well.
We might be able to make it safer if we tightened things up to allow only
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly
non-backwards-compatible change, so it requires discussion and probably
could not be back-patched.
Tom Lane [Fri, 2 Oct 2015 16:20:01 +0000 (12:20 -0400)]
Docs: add another example of creating a range type.
The "floatrange" example is a bit too simple because float8mi can be
used without any additional type conversion. Add an example that does
have to account for that, and do some minor other wordsmithing.
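A sketch of the flavor of the new example (subtype_diff must return
float8, so the time case needs an explicit conversion):

    CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS
      'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;
    CREATE TYPE timerange AS RANGE (
        subtype = time,
        subtype_diff = time_subtype_diff
    );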
The output of a typical pg_rewind run contained a mix of capitalized and
not-capitalized and punctuated and not-punctuated phrases for no
apparent reason. Make that consistent. Also fix some problems in other
messages.
Tom Lane [Thu, 1 Oct 2015 20:19:49 +0000 (16:19 -0400)]
Fix pg_dump to handle inherited NOT VALID check constraints correctly.
This case seems to have been overlooked when unvalidated check constraints
were introduced, in 9.2. The code would attempt to dump such constraints
over again for each child table, even though adding them to the parent
table is sufficient.
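For illustration:

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ADD CONSTRAINT a_positive CHECK (a > 0) NOT VALID;
    -- pg_dump used to emit the ADD CONSTRAINT again for "child";
    -- emitting it for "parent" alone suffices, since inheritance
    -- propagates it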
In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make
clean" target fully cleans up after a failed test. This evidently got
dealt with at some point in 9.4, but it wasn't back-patched. I ran into
it while testing this fix ...
Alvaro Herrera [Thu, 1 Oct 2015 18:06:55 +0000 (15:06 -0300)]
Fix commit_ts for standby
Module initialization was still not completely correct after commit 6b61955135e9, per crash report from Takashi Ohnishi. To fix, instead of
trying to monkey around with the value of the GUC setting directly, add
a separate boolean flag that enables the feature on a standby, but only
for the startup (recovery) process, when it sees that its master server
has the feature enabled.
Discussion: http://www.postgresql.org/message-id/ca44c6c7f9314868bdc521aea4f77cbf@MP-MSGSS-MBX004.msg.nttdata.co.jp
Also change the deactivation routine to delete all segment files rather
than leaving the last one around. (This doesn't need separate
WAL-logging, because on recovery we execute the same deactivation
routine anyway.)
In passing, clean up the code structure somewhat, particularly so that
xlog.c doesn't know so much about when to activate/deactivate the
feature.
Thanks to Fujii Masao for testing and Petr Jelínek for off-list discussion.
Back-patch to 9.5, where commit_ts was introduced.
Fujii Masao [Thu, 1 Oct 2015 14:39:02 +0000 (23:39 +0900)]
Fix incorrect tab-completion for GRANT and REVOKE
Previously "GRANT * ON * TO " was tab-completed to add an extra "TO",
rather than with a list of roles. This is the bug that commit 2f88807
introduced unexpectedly. This commit fixes that incorrect tab-completion.
Fujii Masao [Thu, 1 Oct 2015 14:00:52 +0000 (23:00 +0900)]
Fix mention of htup.h in storage.sgml
Previously it was documented that the details on HeapTupleHeaderData
struct could be found in htup.h. This is not correct because it's now
defined in htup_details.h.
Back-patch to 9.3 where the definition of HeapTupleHeaderData struct
was moved from htup.h to htup_details.h.
Tom Lane [Thu, 1 Oct 2015 03:32:22 +0000 (23:32 -0400)]
Improve LISTEN startup time when there are many unread notifications.
If some existing listener is far behind, incoming new listener sessions
would start from that session's read pointer and then need to advance over
many already-committed notification messages, which they have no interest
in. This was expensive in itself and also thrashed the pg_notify SLRU
buffers a lot more than necessary. We can improve matters considerably
in typical scenarios, without much added cost, by starting from the
furthest-ahead read pointer, not the furthest-behind one. We do have to
consider only sessions in our own database when doing this, which requires
an extra field in the data structure, but that's a pretty small cost.
Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.
Robert Haas [Wed, 30 Sep 2015 23:23:36 +0000 (19:23 -0400)]
Add a Gather executor node.
A Gather executor node runs any number of copies of a plan in an equal
number of workers and merges all of the results into a single tuple
stream. It can also run the plan itself, if the workers are
unavailable or haven't started up yet. It is intended to work with
the Partial Seq Scan node which will be added in future commits.
It could also be used to implement parallel query of a different sort
by itself, without help from Partial Seq Scan, if the single_copy mode
is used. In that mode, a worker executes the plan, and the parallel
leader does not, merely collecting the worker's results. So, a Gather
node could be inserted into a plan to split the execution of that plan
across two processes. Nested Gather nodes aren't currently supported,
but we might want to add support for that in the future.
There's nothing in the planner to actually generate Gather nodes yet,
so it's not quite time to break out the champagne. But we're getting
close.
Amit Kapila. Some designs suggestions were provided by me, and I also
reviewed the patch. Single-copy mode, documentation, and other minor
changes also by me.
Robert Haas [Wed, 30 Sep 2015 22:36:31 +0000 (18:36 -0400)]
Don't dump core when destroying an unused ParallelContext.
If a transaction or subtransaction creates a ParallelContext but ends
without calling InitializeParallelDSM, the previous code would
seg fault. Fix that.
Stephen Frost [Wed, 30 Sep 2015 11:39:24 +0000 (07:39 -0400)]
Include policies based on ACLs needed
When considering which policies should be included, rather than looking
at individual bits of the query (e.g., whether a RETURNING clause exists,
whether a WHERE clause referencing the table exists, or whether it's a
FOR SHARE/UPDATE query), consider any case where we've determined that
the user needs SELECT rights on the relation while doing an UPDATE or
DELETE to be a case where we apply SELECT policies, and any case where
we've determined that the user needs UPDATE rights on the relation while
doing a SELECT to be a case where we apply UPDATE policies.
This simplifies the logic and addresses concerns that a user could use
UPDATE or DELETE with a WHERE clause to determine if rows exist, or
they could use SELECT .. FOR UPDATE to lock rows which they are not
actually allowed to modify through UPDATE policies.
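For illustration, the probes being addressed (names hypothetical):

    -- probing for row existence via the WHERE clause of an UPDATE:
    UPDATE accounts SET balance = balance WHERE secret_flag;
    -- locking rows the user is not allowed to modify:
    SELECT * FROM accounts WHERE id = 42 FOR UPDATE;
    -- the first now has SELECT policies applied; the second, UPDATE policies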
Use list_append_unique() to avoid adding the same quals multiple times,
as, on balance, the cost of checking when adding the quals will almost
always be cheaper than keeping them and doing busywork for each tuple
during execution.
Tom Lane [Wed, 30 Sep 2015 02:06:59 +0000 (22:06 -0400)]
Small improvements in comments in async.c.
We seem to have lost a line somewhere along the way in the comment block
that discusses async.c's locks, because it suddenly refers to "both locks"
without previously having mentioned more than one. Add a sentence to make
that read more sanely. Also, refer to the "pos of the slowest backend"
not the "tail of the slowest backend", since we have no per-backend value
called "tail".
Fix incorrect tps number calculation in "excluding connections establishing".
The deviation (a reported tps number larger than the actual one) grows
as the number of threads decreases.
was introduced in 9.0. Because back patching introduces incompatible
behavior changes regarding the tps number, the fix is committed to
master and 9.5 stable branches only.
Problem spotted by me and fix proposed by Fabien COELHO. Note that his
original patch also included a code refactoring unrelated to the
problem; I omitted that part.
1. No longer cause a start failure in a standby if the feature is
disabled in postgresql.conf but enabled in the master. This reverts one
part of commit 4f3924d9cd43; what we keep is the ability of the standby
to activate/deactivate the module (which includes creating and removing
segments as appropriate) during replay of such actions in the master.
2. Replay WAL records affecting commitTS even if the feature is
disabled. This means the standby will always have the same state as the
master after replay.
3. Have COMMIT PREPARE record the transaction commit time as well. We
were previously only applying it in the normal transaction commit path.
Author: Petr Jelínek
Discussion: http://www.postgresql.org/message-id/CAHGQGwHereDzzzmfxEBYcVQu3oZv6vZcgu1TPeERWbDc+gQ06g@mail.gmail.com
Discussion: http://www.postgresql.org/message-id/CAHGQGwFuzfO4JscM9LCAmCDCxp_MfLvN4QdB+xWsS-FijbjTYQ@mail.gmail.com
Additionally, I cleaned up nearby code related to replication origins,
which I found a bit hard to follow, and fixed a couple of typos.
Backpatch to 9.5, where this code was introduced.
Per bug reports from Fujii Masao and subsequent discussion.
Tom Lane [Tue, 29 Sep 2015 14:52:22 +0000 (10:52 -0400)]
Fix plperl to handle non-ASCII error message texts correctly.
We were passing error message texts to croak() verbatim, which turns out
not to work if the text contains non-ASCII characters; Perl mangles their
encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert
the text into a UTF8-encoded SV first.
It's hard to test this without risking failures in different database
encodings; but we can follow the lead of plpython, which is already
assuming that no-break space (U+00A0) has an equivalent in all encodings
we care about running the regression tests in (cf commit 2dfa15de5).
Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems
too risky to put something like this into 9.0's final minor release.
Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane
Robert Haas [Tue, 29 Sep 2015 01:55:57 +0000 (21:55 -0400)]
Parallel executor support.
This code provides infrastructure for a parallel leader to start up
parallel workers to execute subtrees of the plan tree being executed
in the master. User-supplied parameters from ParamListInfo are passed
down, but PARAM_EXEC parameters are not. Various other constructs,
such as initplans, subplans, and CTEs, are also not currently shared.
Nevertheless, there's enough here to support a basic implementation of
parallel query, and we can lift some of the current restrictions as
needed.
Tom Lane [Mon, 28 Sep 2015 22:02:38 +0000 (18:02 -0400)]
Fix poor errno handling in libpq's version of our custom OpenSSL BIO.
Thom Brown reported that SSL connections didn't seem to work on Windows in
9.5. Asif Naeem figured out that the cause was my_sock_read() looking at
"errno" when it needs to look at "SOCK_ERRNO". This mistake was introduced
in commit 680513ab79c7e12e402a2aad7921b95a25a4bcc8, which cloned the
backend's custom SSL BIO code into libpq, and didn't translate the errno
handling properly. Moreover, it introduced unnecessary errno save/restore
logic, which was particularly confusing because it was incomplete; and it
failed to check for all three of EINTR, EAGAIN, and EWOULDBLOCK in
my_sock_write. (That might not be necessary; but since we're copying
well-tested backend code that does do that, it seems prudent to copy it
faithfully.)
Stephen Frost [Mon, 28 Sep 2015 19:48:36 +0000 (15:48 -0400)]
Ensure a few policies remain for pg_upgrade
To make sure that pg_dump/pg_restore function properly with RLS
policies, arrange to have a few of them left around at the end of the
regression tests.
Andres Freund [Mon, 28 Sep 2015 17:12:48 +0000 (19:12 +0200)]
Fix ON CONFLICT DO UPDATE for tables with oids.
When taking the UPDATE path in an INSERT .. ON CONFLICT .. UPDATE tables
with oids were not supported. The tuple generated by the update target
list was projected without space for an oid - a simple oversight.
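For illustration:

    CREATE TABLE wo (a int PRIMARY KEY, b text) WITH OIDS;
    INSERT INTO wo VALUES (1, 'x');
    -- taking the UPDATE path here previously failed, because the
    -- projected tuple lacked room for an oid:
    INSERT INTO wo VALUES (1, 'y') ON CONFLICT (a) DO UPDATE SET b = EXCLUDED.b;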
Reported-By: Peter Geoghegan
Author: Andres Freund
Backpatch: 9.5, where ON CONFLICT was introduced
Robert Haas [Mon, 28 Sep 2015 14:47:05 +0000 (10:47 -0400)]
Don't try to create a temp install without abs_top_builddir.
Otherwise, we effectively act as if abs_top_builddir were the root
directory, which is quite dangerous if the user happens to have
permissions to do things there. This can crop up in PGXS builds,
for example.
Report by Sandro Santilli, patch by me, review by Noah Misch.
Peter Eisentraut [Sun, 27 Sep 2015 01:00:59 +0000 (21:00 -0400)]
doc: Tweak "cube" index entry
With the arrival of the CUBE key word/feature, the index entries for the
cube extension and the CUBE feature were collapsed into one. Tweak the
entry for the cube extension so they are separate entries.
Andres Freund [Sat, 26 Sep 2015 17:04:25 +0000 (19:04 +0200)]
Remove legacy multixact truncation support.
In 9.5 and master there is no need to support legacy truncation. This is
just committed separately to make it easier to backpatch the WAL logged
multixact truncation to 9.3 and 9.4 if we later decide to do so.
I bumped master's magic from 0xD086 to 0xD088 and 9.5's from 0xD085 to
0xD087 to avoid 9.5 reusing a value that has been in use on master while
keeping the numbers increasing between major versions.
Andres Freund [Sat, 26 Sep 2015 17:04:25 +0000 (19:04 +0200)]
Rework the way multixact truncations work.
The fact that multixact truncations are not WAL logged has caused a fair
share of problems. Among other things, it requires doing computations
during recovery while the database is not in a consistent state, delaying
truncations until checkpoints, and handling the case where members have
been truncated but offsets have not.
We tried to put bandaids on lots of these issues over the last years,
but it seems time to change course. Thus this patch introduces WAL
logging for multixact truncations.
This allows:
1) performing the truncation directly during VACUUM, instead of delaying it
to the next checkpoint.
2) avoiding any need to look at the offsets SLRU for truncation during
recovery; we can just use the master's values.
3) simplifying a fair amount of the logic that keeps in-memory limits
straight.
During the course of fixing this a bunch of additional bugs had to be
fixed:
1) Data was not purged from the members SLRU's in-memory buffers before deleting
segments. This happened to be hard or impossible to hit due to the
interlock between checkpoints and truncation.
2) find_multixact_start() relied on SimpleLruDoesPhysicalPageExist - but
that doesn't work for offsets that haven't yet been flushed to
disk. Add code to flush the SLRUs to fix. Not pretty, but it feels
slightly safer to only make decisions based on actual on-disk state.
3) find_multixact_start() could be called concurrently with a truncation
and thus fail. Via SetOffsetVacuumLimit() that could lead to a round
of emergency vacuuming. The problem remains in
pg_get_multixact_members(), but that's quite harmless.
For now this is going to only get applied to 9.5+, leaving the issues in
the older branches in place. It is quite possible that we need to
backpatch at a later point though.
For the case this gets backpatched we need to handle that an updated
standby may be replaying WAL from a not-yet upgraded primary. We have to
recognize that situation and use "old style" truncation (i.e. looking at
the SLRUs) during WAL replay. In contrast to before, this now happens in
the startup process, when replaying a checkpoint record, instead of the
checkpointer. Doing truncation at restartpoints is incorrect: they can
happen much later than the original checkpoint, thereby leading to
wraparound. To avoid "multixact_redo: unknown op code 48" errors
standbys would have to be upgraded before primaries.
A later patch will bump the WAL page magic, and remove the legacy
truncation codepaths. Legacy truncation support is just included to make
a possible future backpatch easier.
Discussion: 20150621192409.GA4797@alap3.anarazel.de
Reviewed-By: Robert Haas, Alvaro Herrera, Thomas Munro
Backpatch: 9.5 for now
Tom Lane [Fri, 25 Sep 2015 17:16:30 +0000 (13:16 -0400)]
Second try at fixing O(N^2) problem in foreign key references.
This replaces ill-fated commit 5ddc72887a012f6a8b85707ef27d85c274faf53d,
which was reverted because it broke active uses of FK cache entries. In
this patch, we still do nothing more to invalidatable cache entries than
mark them as needing revalidation, so we won't break active uses. To keep
down the overhead of InvalidateConstraintCacheCallBack(), keep a list of
just the currently-valid cache entries. (The entries are large enough that
some added space for list links doesn't seem like a big problem.) This
would still be O(N^2) when there are many valid entries, though, so when
the list gets too long, just force the "sinval reset" behavior to remove
everything from the list. I set the threshold at 1000 entries, somewhat
arbitrarily. Possibly that could be fine-tuned later. Another item for
future study is whether it's worth adding reference counting so that we
could safely remove invalidated entries. As-is, problem cases are likely
to end up with large and mostly invalid FK caches.
Tom Lane [Fri, 25 Sep 2015 16:20:45 +0000 (12:20 -0400)]
Further fix for psql's code for locale-aware formatting of numeric output.
(Third time's the charm, I hope.)
Additional testing disclosed that this code could mangle already-localized
output from the "money" datatype. We can't very easily skip applying it
to "money" values, because the logic is tied to column right-justification
and people expect "money" output to be right-justified. Short of
decoupling that, we can fix it in what should be a safe enough way by
testing to make sure the string doesn't contain any characters that would
not be expected in plain numeric output.
Tom Lane [Fri, 25 Sep 2015 04:00:33 +0000 (00:00 -0400)]
Further fix for psql's code for locale-aware formatting of numeric output.
On closer inspection, those seemingly redundant atoi() calls were not so
much inefficient as just plain wrong: the author of this code either had
not read, or had not understood, the POSIX specification for localeconv().
The grouping field is *not* a textual digit string but separate integers
encoded as chars.
We'll follow the existing code as well as the backend's cash.c in only
honoring the first group width, but let's at least honor it correctly.
This doesn't actually result in any behavioral change in any of the
locales I have installed on my Linux box, which may explain why nobody's
complained; grouping width 3 is close enough to universal that it's barely
worth considering other cases. Still, wrong is wrong, so back-patch.
Tom Lane [Fri, 25 Sep 2015 03:01:04 +0000 (23:01 -0400)]
Fix psql's code for locale-aware formatting of numeric output.
This code did the wrong thing entirely for numbers with an exponent
but no decimal point (e.g., '1e6'), as reported by Jeff Janes in
bug #13636. More generally, it made lots of unverified assumptions
about what the input string could possibly look like. Rearrange so
that it only fools with leading digits that it's directly verified
are there, and an immediately adjacent decimal point. While at it,
get rid of some useless inefficiencies, like converting the grouping
count string to integer over and over (and over).
This has been broken for a long time, so back-patch to all supported
branches.
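For illustration, in psql:

    \pset numericlocale on
    SELECT 1e6::float4;
    -- the displayed value "1e+06" (an exponent, no decimal point) was
    -- previously mangled by the grouping logic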
Tom Lane [Thu, 24 Sep 2015 22:35:46 +0000 (18:35 -0400)]
Allow planner to use expression-index stats for function calls in WHERE.
Previously, a function call appearing at the top level of WHERE had a
hard-wired selectivity estimate of 0.3333333, a kludge conveniently dated
in the source code itself to July 1992. The expectation at the time was
that somebody would soon implement estimator support functions analogous
to those for operators; but no such code has appeared, nor does it seem
likely to in the near future. We do have an alternative solution though,
at least for immutable functions on single relations: creating an
expression index on the function call will allow ANALYZE to gather stats
about the function's selectivity. But the code in clause_selectivity()
failed to make use of such data even if it exists.
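For illustration of that technique (names hypothetical):

    CREATE INDEX ON orders (is_urgent(status));   -- expression index
    ANALYZE orders;                               -- gathers stats on is_urgent(status)
    SELECT * FROM orders WHERE is_urgent(status); -- can now use those stats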
Refactor so that that will happen. I chose to make it try this technique
for any clause type for which clause_selectivity() doesn't have a special
case, not just functions. To avoid adding unnecessary overhead in the
common case where we don't learn anything new, make selfuncs.c provide an
API that hooks directly to examine_variable() and then var_eq_const(),
rather than the previous coding which laboriously constructed an OpExpr
only so that it could be expensively deconstructed again.
I preserved the behavior that the default estimate for a function call
is 0.3333333. (For any other expression node type, it's 0.5, as before.)
I had originally thought to make the default be 0.5 across the board, but
changing a default estimate that's survived for twenty-three years seems
like something not to do without a lot more testing than I care to put
into it right now.
Per a complaint from Jehan-Guillaume de Rorthais. Back-patch into 9.5,
but not further, at least for the moment.
Tom Lane [Thu, 24 Sep 2015 16:47:29 +0000 (12:47 -0400)]
Improve handling of collations in contrib/postgres_fdw.
If we have a local Var of say varchar type with default collation, and
we apply a RelabelType to convert that to text with default collation, we
don't want to consider that as creating an FDW_COLLATE_UNSAFE situation.
It should be okay to compare that to a remote Var, so long as the remote
Var determines the comparison collation. (When we actually ship such an
expression to the remote side, the local Var would become a Param with
default collation, meaning the remote Var would in fact control the
comparison collation, because non-default implicit collation overrides
default implicit collation in parse_collate.c.) To fix, be more precise
about what FDW_COLLATE_NONE means: it applies either to a noncollatable
data type or to a collatable type with default collation, if that collation
can't be traced to a remote Var. (When it can, FDW_COLLATE_SAFE is
appropriate.) We were essentially using that interpretation already at
the Var/Const/Param level, but we weren't bubbling it up properly.
An alternative fix would be to introduce a separate FDW_COLLATE_DEFAULT
value to describe the second situation, but that would add more code
without changing the actual behavior, so it didn't seem worthwhile.
Also, since we're clarifying the rule to be that we care about whether
operator/function input collations match, there seems no need to fail
immediately upon seeing a Const/Param/non-foreign-Var with nondefault
collation. We only have to reject if it appears in a collation-sensitive
context (for example, "var IS NOT NULL" is perfectly safe from a collation
standpoint, whatever collation the var has). So just set the state to
UNSAFE rather than failing immediately.
Per report from Jeevan Chalke. This essentially corrects some sloppy
thinking in commit ed3ddf918b59545583a4b374566bc1148e75f593, so back-patch
to 9.3 where that logic appeared.
Robert Haas [Thu, 24 Sep 2015 15:27:20 +0000 (11:27 -0400)]
Don't zero opfuncid when reading nodes.
The comments here stated that this was just in case we ever had an
ALTER OPERATOR command that could remap an operator to a different
function. But those comments have been here for a long time, and no
such command has come about. In the absence of such a feature,
forcing the pg_proc OID to be looked up again each time we reread a
stored rule or similar is just a waste of cycles. Moreover, parallel
query needs a way to reread the exact same node tree that was written
out, not one that has been slightly stomped on. So just get rid of
this for now.
Make pg_controldata report newest XID with valid commit timestamp
Previously pg_controldata didn't report newestCommitTs and this was
an oversight in commit 73c986a.
Also this patch changes pg_resetxlog so that it uses the same sentences
as pg_controldata does, regarding oldestCommitTs and newestCommitTs,
for the sake of consistency.
Back-patch to 9.5 where track_commit_timestamp was added.
Andres Freund [Thu, 24 Sep 2015 12:53:32 +0000 (14:53 +0200)]
Lower *_freeze_max_age minimum values.
The old minimum values are rather large, making it time consuming to
test related behaviour. Additionally the current limits, especially for
multixacts, can be problematic in space-constrained systems. 10000000
multixacts can contain a lot of members.
Since there's no good reason for the current limits, lower them a good
bit. Setting them to 0 would be a bad idea, triggering endless vacuums,
so still retain a limit.
While at it fix autovacuum_multixact_freeze_max_age to refer to
multixact.c instead of varsup.c.
Reviewed-By: Robert Haas
Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com
Backpatch: back to 9.0 (in parts)
Tom Lane [Wed, 23 Sep 2015 22:26:49 +0000 (18:26 -0400)]
Make ANALYZE compute basic statistics even for types with no "=" operator.
Previously, ANALYZE simply ignored columns of datatypes that have neither
a btree nor hash opclass (which means they have no recognized equality
operator). Without a notion of equality, we can't identify most-common
values nor estimate the number of distinct values. But we can still
count nulls and compute the average physical column width, and those
stats might be of value. Moreover there are some tools out there that
don't work so well if rows are missing from pg_statistic. So let's
add suitable logic for this case.
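For illustration, using a type that lacks a default equality operator:

    CREATE TABLE geo (p point);   -- point has no btree or hash opclass
    INSERT INTO geo SELECT point(g, g) FROM generate_series(1, 1000) g;
    ANALYZE geo;
    SELECT null_frac, avg_width FROM pg_stats
     WHERE tablename = 'geo' AND attname = 'p';   -- now populated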
While this is arguably a bug fix, it also has the potential to change
query plans, and the gain seems not worth taking a risk of that in
stable branches. So back-patch into 9.5 but not further.
Robert Haas [Wed, 23 Sep 2015 15:51:50 +0000 (11:51 -0400)]
Add readfuncs.c support for plan nodes.
For parallel query, we need to be able to pass a Plan to a worker, so
that it knows what it's supposed to do. We could invent our own way
of serializing plans for that purpose, but piggybacking on the
existing node infrastructure seems like a much better idea.
Initially, we'll probably only support a limited number of nodes
within parallel workers, but this commit adds support for everything
in plannodes.h except CustomScan, because doing it all at once seems
easier than doing it piecemeal, and it makes testing this code easier,
too. CustomScan is excluded because making that work requires a
larger rework of that facility.
Allow autoanalyze to add pages deleted from pending list to FSM
Commit e95680832854cf300e64c10de9cc2f586df558e8 introduced adding pages
to the FSM for ordinary inserts, but autoanalyze could only clean up the
pending list without adding its pages to the FSM.
Also fix a double call of IndexFreeSpaceMapVacuum() during ginvacuumcleanup().
Report from Fujii Masao
Patch by me
Review by Jeff Janes
Robert Haas [Wed, 23 Sep 2015 01:42:00 +0000 (21:42 -0400)]
Teach planstate_tree_walker about custom scans.
This logic was missing from ExplainPreScanNode, from which I derived
planstate_tree_walker. But it shouldn't be missing, especially not
from a generic walker function, so add it.
Andres Freund [Tue, 22 Sep 2015 13:33:30 +0000 (15:33 +0200)]
test_decoding: Protect against rare spurious test failures.
A bunch of tests missed specifying that empty transactions shouldn't be
displayed. That causes problems when e.g. autovacuum runs in an
unfortunate moment. The tests in question only run for a very short
time, making this quite unlikely.
Reported-By: Buildfarm member axolotl
Backpatch: 9.4, where logical decoding was introduced
Andres Freund [Tue, 22 Sep 2015 09:05:48 +0000 (11:05 +0200)]
Correct value of LW_SHARED_MASK.
The previous wrong value led to wrong LOCK_DEBUG output, never showing
any shared lock holders.
Reported-By: Alexander Korotkov
Discussion: CAPpHfdsPmWqz9FB0AnxJrwp1=KLF0n=-iB+QvR0Q8GSmpFVdUQ@mail.gmail.com
Backpatch: 9.5, where the bug was introduced.
Tom Lane [Mon, 21 Sep 2015 16:11:32 +0000 (12:11 -0400)]
Fix possible internal overflow in numeric multiplication.
mul_var() postpones propagating carries until it risks overflow in its
internal digit array. However, the logic failed to account for the
possibility of overflow in the carry propagation step, allowing wrong
results to be generated in corner cases. We must slightly reduce the
when-to-propagate-carries threshold to avoid that.
Discovered and fixed by Dean Rasheed, with small adjustments by me.
Remove the SECURITY_ROW_LEVEL_DISABLED security context bit.
This commit's parent made superfluous the bit's sole usage. Referential
integrity checks have long run as the subject table's owner, and that
now implies RLS bypass. Safe use of the bit was tricky, requiring
strict control over the SQL expressions evaluating therein. Back-patch
to 9.5, where the bit was introduced.
Every query of a single ENABLE ROW SECURITY table has two meanings, with
the row_security GUC selecting between them. With row_security=force
available, every function author would have been advised to either set
the GUC locally or test both meanings. Non-compliance would have
threatened reliability and, for SECURITY DEFINER functions, security.
Authors already face an obligation to account for search_path, and we
should not mimic that example. With this change, only BYPASSRLS roles
need exercise the aforementioned care. Back-patch to 9.5, where the
row_security GUC was introduced.
Since this narrows the domain of pg_db_role_setting.setconfig and
pg_proc.proconfig, one might bump catversion. A row_security=force
setting in one of those columns will elicit a clear message, so don't.
Tom Lane [Sun, 20 Sep 2015 20:48:44 +0000 (16:48 -0400)]
Be more wary about partially-valid LOCALLOCK data in RemoveLocalLock().
RemoveLocalLock() must consider the possibility that LockAcquireExtended()
failed to palloc the initial space for a locallock's lockOwners array.
I had evidently meant to cope with this hazard when the code was originally
written (commit 1785acebf2ed14fd66955e2d9a55d77a025f418d), but missed that
the pfree needed to be protected with an if-test. Just to make sure things
are left in a clean state, reset numLockOwners as well.
Per low-memory testing by Andreas Seltenreich. Back-patch to all supported
branches.
Peter Eisentraut [Sat, 19 Sep 2015 02:03:42 +0000 (22:03 -0400)]
Remove trailing slashes from directories in find command
BSD find is not very smart and ends up writing double slashes into the
output in those cases. Also, xgettext is not very smart and splits the
file names incorrectly in those cases, resulting in slightly incorrect
file names being written into the POT file.
Robert Haas [Sat, 19 Sep 2015 01:10:08 +0000 (21:10 -0400)]
Glue layer to connect the executor to the shm_mq mechanism.
The shm_mq mechanism was built to send error (and notice) messages and
tuples between backends. However, shm_mq itself only deals in raw
bytes. Since commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d, we have
had infrastructure for one message to redirect protocol messages to a
queue and for another backend to parse them and do useful things with
them. This commit introduces a somewhat analogous facility for tuples
by adding a new type of DestReceiver, DestTupleQueue, which writes
each tuple generated by a query into a shm_mq, and a new
TupleQueueFunnel facility which reads raw tuples out of the queue and
reconstructs the HeapTuple format expected by the executor.
The TupleQueueFunnel abstraction supports reading from multiple tuple
streams at the same time, but only in round-robin fashion. Someone
could imaginably want other policies, but this should be good enough
to meet our short-term needs related to parallel query, and we can
always extend it later.
This also makes one minor addition to the shm_mq API that didn't
seem worth breaking out as a separate patch.
Extracted from Amit Kapila's parallel sequential scan patch. This
code was originally written by me, and then it was revised by Amit,
and then it was revised some more by me.
Andrew Dunstan [Fri, 18 Sep 2015 18:39:39 +0000 (14:39 -0400)]
Cache argument type information in json(b) aggregate functions.
These functions have been looking up type info for every row they
process. Instead of doing that we only look them up the first time
through and stash the information in the aggregate state object.
Affects json_agg, json_object_agg, jsonb_agg and jsonb_object_agg.
There is plenty more work to do in making these more efficient,
especially the jsonb functions, but this is a virtually cost free
improvement that can be done right away.
Backpatch to 9.5 where the jsonb variants were introduced.
Tom Lane [Fri, 18 Sep 2015 17:55:17 +0000 (13:55 -0400)]
Fix low-probability memory leak in regex execution.
After an internal failure in shortest() or longest() while pinning down the
exact location of a match, find() forgot to free the DFA structure before
returning. This is pretty unlikely to occur, since we just successfully
ran the "search" variant of the DFA; but it could happen, and it would
result in a session-lifespan memory leak since this code uses malloc()
directly. Problem seems to have been aboriginal in Spencer's library,
so back-patch all the way.
In passing, correct a thinko in a comment I added awhile back about the
meaning of the "ntree" field.
I happened across these issues while comparing our code to Tcl's version
of the library.