Tom Lane [Tue, 8 Mar 2016 21:28:27 +0000 (16:28 -0500)]
Finish refactoring make_foo() functions in createplan.c.
This patch removes some redundant cost calculations that I left for later
cleanup in commit 3fc6e2d7f5b652b4. There's now a uniform policy that the
make_foo() convenience functions don't do any cost calculations. Most of
their callers copy costs from the source Path node, and for those that
don't, the calculation in the make_foo() function wasn't necessarily right
anyhow. (make_result() was particularly a mess, as it was serving multiple
callers using cost calcs designed for only the first one or two that had
ever existed.) Aside from saving a few cycles, this ensures that what
EXPLAIN prints matches the costs we used for planning purposes. It does
not change any planner decisions, since the decisions are already made.
Robert Haas [Tue, 8 Mar 2016 15:09:50 +0000 (10:09 -0500)]
Add some functions to fd.c for the convenience of extensions.
For example, if you want to perform an ioctl() on a file descriptor
opened through the fd.c routines, there's no way to do that without
being able to get at the underlying fd.
Robert Haas [Tue, 8 Mar 2016 13:46:48 +0000 (08:46 -0500)]
Department of second thoughts: remove PD_ALL_FROZEN.
Commit a892234f830e832110f63fc0a2afce2fb21d1584 added a second bit per
page to the visibility map, which still seems like a good idea, but it
also added a second page-level bit alongside PD_ALL_VISIBLE to track
whether the visibility map bit was set. That no longer seems like a
clever plan, because we don't really need that bit for anything. We
always clear both bits when the page is modified anyway.
Patch by me, reviewed by Kyotaro Horiguchi and Masahiko Sawada.
Robert Haas [Tue, 8 Mar 2016 13:38:50 +0000 (08:38 -0500)]
Add pg_visibility contrib module.
This lets you examine the visibility map as well as page-level
visibility information. I initially wrote it as a debugging aid,
but was encouraged to polish it for commit.
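For reference, a quick way to exercise it from psql (a minimal sketch, using the function names as I understand the module's interface):
    CREATE EXTENSION pg_visibility;
    -- visibility map bits (all-visible, all-frozen) for each page of a table
    SELECT * FROM pg_visibility_map('pg_class');
    -- the same, plus the page-level PD_ALL_VISIBLE flag
    SELECT * FROM pg_visibility('pg_class');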
Robert Haas [Tue, 8 Mar 2016 13:13:02 +0000 (08:13 -0500)]
pg_upgrade: Remove converter plugin facility.
We've not found a use for this so far, and the current need, which
is to convert the visibility map to a new format, does not suit the
existing design anyway. So just rip it out.
Author: Masahiko Sawada, slightly revised by me.
Discussion: 20160215211313.GB31273@momjian.us
Joe Conway [Mon, 7 Mar 2016 23:14:20 +0000 (15:14 -0800)]
Make get_controlfile() error logging consistent with src/common
As originally committed, get_controlfile() used a non-standard approach
to error logging. Make it consistent with the majority of error logging
done in src/common.
Coverity and inspection for the issue addressed in fd45d16f found some
questionable code.
Specifically, Coverity noticed that the wrong length was added in
ReorderBufferSerializeChange() - without immediate negative consequences
as the variable isn't used afterwards. During code-review and testing I
noticed that a bit of space was wasted when allocating tuple bufs in
several places. Thirdly, the debug memset()s in
ReorderBufferGetTupleBuf() reduce the error checking valgrind can do.
Tom Lane [Mon, 7 Mar 2016 20:58:22 +0000 (15:58 -0500)]
Make the upper part of the planner work by generating and comparing Paths.
I've been saying we needed to do this for more than five years, and here it
finally is. This patch removes the ever-growing tangle of spaghetti logic
that grouping_planner() used to use to try to identify the best plan for
post-scan/join query steps. Now, there is (nearly) independent
consideration of each execution step, and entirely separate construction of
Paths to represent each of the possible ways to do that step. We choose
the best Path or set of Paths using the same add_path() logic that's been
used inside query_planner() for years.
In addition, this patch removes the old restriction that subquery_planner()
could return only a single Plan. It now returns a RelOptInfo containing a
set of Paths, just as query_planner() does, and the parent query level can
use each of those Paths as the basis of a SubqueryScanPath at its level.
This allows finding some optimizations that we missed before, wherein a
subquery was capable of returning presorted data and thereby avoiding a
sort in the parent level, making the overall cost cheaper even though
delivering sorted output was not the cheapest plan for the subquery in
isolation. (A couple of regression test outputs change in consequence of
that. However, there is very little change in visible planner behavior
overall, because the point of this patch is not to get immediate planning
benefits but to create the infrastructure for future improvements.)
There is a great deal left to do here. This patch unblocks a lot of
planner work that was basically impractical in the old code structure,
such as allowing FDWs to implement remote aggregation, or rewriting
plan_set_operations() to allow consideration of multiple implementation
orders for set operations. (The latter will likely require a full
rewrite of plan_set_operations(); what I've done here is only to fix it
to return Paths not Plans.) I have also left unfinished some localized
refactoring in createplan.c and planner.c, because it was not necessary
to get this patch to a working state.
Thanks to Robert Haas, David Rowley, and Amit Kapila for review.
Tom Lane [Mon, 7 Mar 2016 15:40:44 +0000 (10:40 -0500)]
Fix backwards test for Windows service-ness in pg_ctl.
A thinko in a96761391 caused pg_ctl to get it exactly backwards when
deciding whether to report problems to the Windows eventlog or to stderr.
Per bug #14001 from Manuel Mathar, who also identified the fix.
Like the previous patch, back-patch to all supported branches.
Tom Lane [Mon, 7 Mar 2016 02:04:25 +0000 (21:04 -0500)]
Fix broken definition for function name in pgbench's exprscan.l.
As written, this would accept e.g. 123e9 as a function name. Aside
from being mildly astonishing, that would come back to haunt us if
we ever try to add float constants to the expression syntax. Insist
that function names start with letters (or at least non-digits).
In passing reset yyline as well as yycol when starting a new expression.
This variable is useless since it's used nowhere, but if we're going
to have it, we should have it act sanely.
In c8f621c43 I forgot to account for MAXALIGN when allocating a new
tuplebuf in ReorderBufferGetTupleBuf(). That happens to currently not
cause active problems on a number of platforms because the affected
pointer is already aligned, but others, like ppc and hppa, trigger this
in the regression test, due to a debug memset clearing memory.
Tom Lane [Mon, 7 Mar 2016 00:20:55 +0000 (19:20 -0500)]
Fix not-terribly-safe coding in NIImportOOAffixes() and NIImportAffixes().
There were two places in spell.c that supposed that they could search
for a location in a string produced by lowerstr() and then transpose
the offset into the original string. But this fails completely if
lowerstr() transforms any characters into characters of different byte
length, as can happen in Turkish UTF8 for instance.
We'd added some comments about this coding in commit 51e78ab4ff328296,
but failed to realize that it was not merely confusing but wrong.
Coverity complained about this code years ago, but in such an opaque
fashion that nobody understood what it was on about. I'm not entirely
sure that this issue *is* what it's on about, actually, but perhaps
this patch will shut it up -- and in any case the problem is clear.
Tom Lane [Sun, 6 Mar 2016 23:23:53 +0000 (18:23 -0500)]
Fix unportable usage of <ctype.h> functions.
isdigit(), isspace(), etc are likely to give surprising results if passed a
signed char. We should always cast the argument to unsigned char to avoid
that. Error in commit d78a7d9c7fa3e9cd, found by buildfarm member gaur.
Andres Freund [Sun, 6 Mar 2016 02:02:20 +0000 (18:02 -0800)]
logical decoding: Fix handling of large old tuples with replica identity full.
When decoding the old version of an UPDATE or DELETE change, and if that
tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or
failed in more subtle ways in non-assert builds. Normally individual
tuples aren't bigger than MaxHeapTupleSize, with big datums toasted.
But that's not the case for the old version of a tuple for logical
decoding; the replica identity is logged as one piece. With the default
replica identity, btree limits that to small tuples, but that's not the
case for FULL.
Change the tuple buffer infrastructure to separate allocate over-large
tuples, instead of always going through the slab cache.
This unfortunately requires changing the ReorderBufferTupleBuf
definition, since we need to store the allocated size someplace. To avoid
requiring output plugins to recompile, don't store HeapTupleHeaderData
directly after HeapTupleData, but point to it via t_data; that leaves
room for the allocated size. As there's no reason for an output plugin
to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
was just a minor convenience having it directly accessible.
Reported-By: Adam Dratwiński
Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
Andres Freund [Sun, 6 Mar 2016 02:02:20 +0000 (18:02 -0800)]
logical decoding: old/newtuple in spooled UPDATE changes was switched around.
Somehow I managed to flip the order of restoring old & new tuples when
de-spooling a change in a large transaction from disk. This happens to
only take effect when a change is spooled to disk which has old/new
versions of the tuple. That is only the case for UPDATEs where the
primary key changed or where the replica identity is set to FULL.
The tests didn't catch this because they exercised either spooled
updates or updates that changed primary keys, but not both at the same time.
Found while adding tests for the following commit.
Andres Freund [Sun, 6 Mar 2016 02:02:20 +0000 (18:02 -0800)]
logical decoding: Tell reorderbuffer about all xids.
Logical decoding's reorderbuffer keeps transactions in an LSN ordered
list for efficiency. To make that efficiently possible, upper-level
xids are forced to be logged before nested subtransaction xids. That
only works, though, if these records are all looked at; unfortunately we
didn't do so for e.g. row level locks, which are otherwise uninteresting
for logical decoding.
This could lead to errors like:
"ERROR: subxact logged without previous toplevel record".
It's not sufficient to just look at row locking records, since the xid could
appear first due to a lot of other types of records (which will trigger
the transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny).
So invent infrastructure to tell reorderbuffer about xids seen, when
they'd otherwise not pass through reorderbuffer.c.
Reported-By: Jarred Ward
Bug: #13844
Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
Backpatch: 9.4, where logical decoding was added
Joe Conway [Sat, 5 Mar 2016 19:10:19 +0000 (11:10 -0800)]
Expose control file data via SQL accessible functions.
Add four new SQL accessible functions: pg_control_system(),
pg_control_checkpoint(), pg_control_recovery(), and pg_control_init()
which expose a subset of the control file data.
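For illustration, the new functions can be queried directly (a minimal sketch; result columns omitted here):
    SELECT * FROM pg_control_system();
    SELECT * FROM pg_control_checkpoint();
    SELECT * FROM pg_control_recovery();
    SELECT * FROM pg_control_init();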
Along the way move the code to read and validate the control file to
src/common, where it can be shared by the new backend functions
and the original pg_controldata frontend program.
Patch by me, significant input, testing, and review by Michael Paquier.
Fujii Masao [Sat, 5 Mar 2016 17:29:04 +0000 (02:29 +0900)]
Ignore recovery_min_apply_delay until recovery has reached consistent state
Previously recovery_min_apply_delay was applied even before recovery
had reached consistency. This could cause us to wait a long time
unexpectedly for read-only connections to be allowed. It's problematic
because the standby was useless during that wait time.
This patch changes recovery_min_apply_delay so that it's applied once
the database has reached the consistent state. That is, even if the delay
is set, the standby tries to replay WAL records as fast as possible until
it has reached consistency.
Author: Michael Paquier
Reviewed-By: Julien Rouhaud
Reported-By: Greg Clough
Backpatch: 9.4, where recovery_min_apply_delay was added
Bug: #13770
Discussion: http://www.postgresql.org/message-id/20151111155006.2644.84564@wrigleys.postgresql.org
Tom Lane [Fri, 4 Mar 2016 21:20:44 +0000 (16:20 -0500)]
Make stats regression test robust in the face of parallel query.
Historically, the wait_for_stats() function in this test has simply checked
for a report of an indexscan on tenk2, corresponding to the last command
issued before we expect stats updates to appear. However, with parallel
query that indexscan could be done by a parallel worker that will emit
its stats counters to the collector before the session's main backend does
(a full second before, in fact, thanks to the "pg_sleep(1.0)" added by
commit 957d08c81f9cc277). That leaves a sizable window in which an
autovacuum-triggered write of the stats files would present a state in
which the indexscan on tenk2 appears to have been done, but none of the
write updates performed by the test have been. This is evidently the
explanation for intermittent failures seen by me and on buildfarm member
mandrill.
To fix, we should check separately for both the tenk2 seqscan and indexscan
counts, since those might be reported by different processes that could be
delayed arbitrarily on an overloaded test machine. And we need to check
for at least one update-related count. If we ever allow parallel workers
to do writes, this will get even more complicated ... but in view of all
the other hard problems that will entail, I don't feel a need to solve this
one today.
Per research by Rahila Syed and myself; part of this patch is Rahila's.
Robert Haas [Fri, 4 Mar 2016 19:12:28 +0000 (14:12 -0500)]
Minor improvements to transaction manager README.
A simple SELECT is handled by PortalRunSelect, not ProcessQuery. Also,
the previous indentation was unclear: change it so that a deeper level
of indentation indicates that the outer function calls the inner one.
Robert Haas [Fri, 4 Mar 2016 17:59:10 +0000 (12:59 -0500)]
Minor optimizations based on ParallelContext having nworkers_launched.
Originally, we didn't have nworkers_launched, so code that used parallel
contexts had to be prepared for the possibility that not all of the
workers requested actually got launched. But now we can count on knowing
the number of workers that were successfully launched, which can shave
off a few cycles and simplify some code slightly.
Amit Kapila, reviewed by Haribabu Kommi, per a suggestion from Peter
Geoghegan.
Teodor Sigaev [Fri, 4 Mar 2016 17:08:10 +0000 (20:08 +0300)]
Improve support of Hunspell in ispell dictionary.
Now it's possible to load recent versions of Hunspell dictionaries for several languages.
To handle these dictionaries, the patch adds support for:
* FLAG long - sets the double extended ASCII character flag type
* FLAG num - sets the decimal number flag type (from 1 to 65535)
* AF parameter - alias for a set of flags
It also moves the test dictionaries into a separate directory.
Robert Haas [Fri, 4 Mar 2016 16:53:20 +0000 (11:53 -0500)]
Fix query-based tab completion for multibyte characters.
The existing code confuses the byte length of the string (which is
relevant when passing it to pg_strncasecmp) with the character length
of the string (which is relevant when it is used with the SQL substring
function). Separate those two concepts.
Report and patch by Kyotaro Horiguchi, reviewed by Thomas Munro and
reviewed and further revised by me.
Robert Haas [Fri, 4 Mar 2016 16:35:46 +0000 (11:35 -0500)]
postgres_fdw: When sending ORDER BY, always include NULLS FIRST/LAST.
Previously, we included NULLS FIRST when appropriate but relied on the
default behavior to be NULLS LAST. This is, however, not true for a
sort in descending order and seems like a fragile assumption anyway.
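To illustrate (hypothetical table, not from the report): the server's default null ordering depends on the sort direction, so omitting NULLS FIRST/LAST is only safe for ascending sorts.
    SELECT x FROM t ORDER BY x ASC;              -- default is NULLS LAST
    SELECT x FROM t ORDER BY x DESC;             -- default is NULLS FIRST
    SELECT x FROM t ORDER BY x DESC NULLS LAST;  -- must now be spelled out explicitly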
Report by Rajkumar Raghuwanshi. Patch by Ashutosh Bapat. Review
comments from Michael Paquier and Tom Lane.
Alvaro Herrera [Fri, 4 Mar 2016 15:59:47 +0000 (12:59 -0300)]
Add 'tap_tests' flag in config_default.pl
This makes the flag more visible for testers using the default file as a
template, increasing the likelihood that the test suite will be run.
Also have the flag be displayed in the fake "configure" output, if set.
This patch is two new lines only, but perltidy decides to shift things
around, which makes it appear a bit bigger.
Alvaro Herrera [Thu, 3 Mar 2016 20:58:30 +0000 (17:58 -0300)]
Rework PostgresNode's psql method
This makes the psql() method much more capable: it captures both stdout
and stderr; it now returns the psql exit code rather than stdout; a
timeout can now be specified, as can ON_ERROR_STOP behavior; it gained a
new "on_error_die" (defaulting to off) parameter to raise an exception
if there's any problem. Finally, additional parameters to psql can be
passed if there's need for further tweaking.
For convenience, a new safe_psql() method retains much of the old
behavior of psql(), except that it runs with on_error_die enabled, so that
problems like syntax errors in SQL commands can be detected more easily.
Many existing TAP test files now use safe_psql, which is what is really
wanted. A couple of ->psql() calls are now added in the commit_ts
tests, which verify that the right thing is happening on certain errors.
Some ->command_fails() calls in recovery tests that were verifying that
psql failed also became ->psql() calls now.
Author: Craig Ringer. Some tweaks by Álvaro Herrera
Reviewed-By: Michaël Paquier
Alvaro Herrera [Thu, 3 Mar 2016 16:20:46 +0000 (13:20 -0300)]
perltidy PostgresNode and SimpleTee
Also, mention in README that Perl files should be perltidy'ed. This
isn't really the best place (since we have Perl files elsewhere in the
tree) and this is already in pgindent's README, but this subdir is
likely to get hacked a whole lot more than the other Perl files, so it
seems okay to spend two lines on this.
Alvaro Herrera [Thu, 3 Mar 2016 15:49:02 +0000 (12:49 -0300)]
Fix mistakes in recovery tests
One test was relying on the remove_tree method, which isn't implemented in the
oldest Perl we support; fix it by using the older rmtree instead.
Another test had a typo in a SQL command, which wasn't noticed because
the PostgresNode->psql() method doesn't check that queries return
correctly. That's undesirable and will also be fixed later on, but for
now let's make the test actually work.
Andres Freund [Thu, 3 Mar 2016 07:42:21 +0000 (23:42 -0800)]
logical decoding: fix decoding of a commit's commit time.
When adding replication origins in 5aa235042, I somehow managed to set
the timestamp of decoded transactions to InvalidXLogRecptr when decoding
one made without a replication origin. Fix that, and the wrong type of
the new commit_time variable.
This didn't trigger a regression test failure because we explicitly
don't show commit timestamps in the regression tests, as they obviously
are variable. Add a test that checks that a decoded commit's timestamp
is within minutes of NOW() from before the commit.
Reported-By: Weiping Qu
Diagnosed-By: Artur Zakirov
Discussion: 56D4197E.9050706@informatik.uni-kl.de, 56D42918.1010108@postgrespro.ru
Backpatch: 9.5, where 5aa235042 originates.
Tom Lane [Thu, 3 Mar 2016 04:31:39 +0000 (23:31 -0500)]
Fix json_to_record() bug with nested objects.
A thinko concerning nesting depth caused json_to_record() to produce bogus
output if a field of its input object contained a sub-object with a field
name matching one of the requested output column names. Per bug #13996
from Johann Visagie.
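An illustrative case in the spirit of the report (not his exact example): the nested object's "a" key must not clobber the requested output column "a".
    SELECT * FROM json_to_record('{"a": 1, "b": {"a": 2, "c": 3}}')
             AS rec(a int, b json);
    -- expected: a = 1, b = {"a": 2, "c": 3}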
I added a regression test case based on his example, plus parallel tests
for json_to_recordset, jsonb_to_record, jsonb_to_recordset. The latter
three do not exhibit the same bug (which suggests that we may be missing
some opportunities to share code...) but testing seems like a good idea
in any case.
Back-patch to 9.4 where these functions were introduced.
Tom Lane [Wed, 2 Mar 2016 22:37:54 +0000 (17:37 -0500)]
Create stub functions to support pg_upgrade of old contrib/tsearch2.
Commits 9ff60273e35cad6e and dbe2328959e12701 adjusted the declarations
of some core functions referenced by contrib/tsearch2's install script,
forgetting that in a pg_upgrade situation, we'll be trying to restore
operator class definitions that reference the old signatures. We've
hit this problem before; solve it in the same way as before, namely by
installing stub functions that have the expected signature and just
invoke the correct function. Per report from Jeff Janes.
(Someday we ought to stop supporting contrib/tsearch2, but I'm not
sure today is that day.)
Tom Lane [Wed, 2 Mar 2016 18:30:14 +0000 (13:30 -0500)]
Fix PL/Tcl's encoding conversion logic.
PL/Tcl appears to contain logic to convert strings between the database
encoding and UTF8, which is the only encoding modern Tcl will deal with.
However, that code has been disabled since commit 034895125d648b86, which
made it "#if defined(UNICODE_CONVERSION)" and neglected to provide any way
for that symbol to become defined. That might have been all right back
in 2001, but these days we take a dim view of allowing strings with
incorrect encoding into the database.
Remove the conditional compilation, fix warnings about signed/unsigned char
conversions, clean up assorted places that didn't bother with conversions.
(Notably, there were lots of assumptions that database table and field
names didn't need conversion...)
Add a regression test based on plpython_unicode. It's not terribly
thorough, but better than no test at all.
Tom Lane [Wed, 2 Mar 2016 17:24:30 +0000 (12:24 -0500)]
Make PL/Tcl require Tcl 8.4 or later.
As of commit 287822068246a6ae30bb2c7191de727672ae6328, PL/Tcl will not
compile against pre-8.0 Tcl, whereas it used to work (more or less anyway)
with quite prehistoric versions. As long as we're moving these goalposts,
let's reinstall them at someplace that has some thought behind it. This
commit sets the minimum allowed Tcl version at 8.4, and rips out some bits
of compatibility cruft that are in consequence no longer needed. Reasons
for requiring 8.4 include:
* 8.4 was released in 2002; there seems little reason to believe that
anyone would want to use older versions with Postgres 9.6+.
* We have no buildfarm members testing anything older than 8.4, and
thus no way to know if it's broken.
* We need at least 8.1 to allow enforcement of database encoding
security (8.1 standardized Tcl on using UTF8 internally, before that
it was pretty unpredictable).
* Some versions between 8.1 and 8.4 allowed the backend to become
multithreaded, which is disastrous. We need at least 8.4 to be able
to disable the Tcl notifier subsystem to prevent that.
A small side benefit is that we can make the code more readable by
doing s/CONST84/const/g.
Tom Lane [Wed, 2 Mar 2016 17:07:31 +0000 (12:07 -0500)]
Convert PL/Tcl to use Tcl's "object" interfaces.
The original implementation of Tcl was all strings, but performance was
improved significantly by the introduction of typed "objects" (integers,
lists, code, etc) in Tcl 8.0, released in 1997. It's past time we made
use of that.
This patch also modernizes some of the error-reporting code, which may
cause small changes in the spelling of complaints about bad calls to
PL/Tcl-provided commands.
Jim Nasby and Karl Lehenbauer, reviewed by Victor Wagner
Tom Lane [Wed, 2 Mar 2016 06:06:31 +0000 (01:06 -0500)]
Fix TAP tests for older Perls.
Commit 7132810c (Retain tempdirs for failed tests) used Test::More's
is_passing method, but that was added in Test::More 0.89_01 which is
sometime later than Perl 5.10.1. Popular platforms such as RHEL6 don't
have that, nevermind some of our older dinosaurs. Do it the hard way.
Michael Paquier, based on research by Craig Ringer
Robert Haas [Wed, 2 Mar 2016 02:49:41 +0000 (21:49 -0500)]
Change the format of the VM fork to add a second bit per page.
The new bit indicates whether every tuple on the page is already frozen.
It is cleared only when the all-visible bit is cleared, and it can be
set only when we vacuum a page and find that every tuple on that page is
both visible to every transaction and in no need of any future
vacuuming.
A future commit will use this new bit to optimize away full-table scans
that would otherwise be triggered by XID wraparound considerations. A
page which is merely all-visible must still be scanned in that case, but
a page which is all-frozen need not be. This commit does not attempt
that optimization, although it is the goal here. It
seems better to get the basic infrastructure in place first.
Per discussion, it's very desirable for pg_upgrade to automatically
migrate existing VM forks from the old format to the new format. That,
too, will be handled in a follow-on patch.
Masahiko Sawada, reviewed by Kyotaro Horiguchi, Fujii Masao, Amit
Kapila, Simon Riggs, Andres Freund, and others, and substantially
revised by me.
Tom Lane [Wed, 2 Mar 2016 01:01:07 +0000 (20:01 -0500)]
Improve coverage of pltcl regression tests.
Test composite-type arguments and the argisnull and spi_lastoid Tcl
commands. This stuff was not covered before, but needs to be exercised
since the upcoming Tcl object-conversion patch changes these code paths
(and broke at least one of them).
Alvaro Herrera [Tue, 1 Mar 2016 22:53:18 +0000 (19:53 -0300)]
Add more tests for commit_timestamp feature
These tests verify that 1) WAL replay preserves the stored value,
2) a streaming standby server replays the value obtained from the
master, and 3) the behavior is sensible in the face of repeated
configuration changes.
One annoyance is that the tmp_check/ subdir from the TAP tests is clobbered
when the pg_regress test runs in the same subdirectory. This is
bothersome but not too terrible a problem, since the pg_regress test is
not run anyway if the TAP tests fail (unless "make -k" is used).
I had these tests around since commit 69e7235c93e2; add them now that we
have the recovery test framework in place.
Robert Haas [Tue, 1 Mar 2016 18:04:09 +0000 (13:04 -0500)]
Extend pgbench's expression syntax to support a few built-in functions.
Fabien Coelho, reviewed mostly by Michael Paquier and me, but also by
Heikki Linnakangas, BeomYong Lee, Kyotaro Horiguchi, Oleksander
Shulgin, and Álvaro Herrera.
Tom Lane [Tue, 1 Mar 2016 00:29:19 +0000 (19:29 -0500)]
Suppress scary-looking log messages from async-notify isolation test.
I noticed that the async-notify test results in log messages like these:
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
This is because it unceremoniously disconnects a client session that is
about to have some NOTIFY messages delivered to it. Such log messages
during a regression test might well cause people to go looking for a
problem that doesn't really exist (it did cause me to waste some time that
way). We can shut it up by adding an UNLISTEN command to session teardown.
Patch HEAD only; this doesn't seem significant enough to back-patch.
Tom Lane [Tue, 1 Mar 2016 00:11:38 +0000 (19:11 -0500)]
Improve error message for rejecting RETURNING clauses with dropped columns.
This error message was written with only ON SELECT rules in mind, but since
then we also made RETURNING-clause targetlists go through the same logic.
This means that you got a rather off-topic error message if you tried to
add a rule with RETURNING to a table having dropped columns. Ideally we'd
just support that, but some preliminary investigation says that it might be
a significant amount of work. Seeing that Nicklas Avén's complaint is the
first one we've gotten about this in the ten years or so that the code's
been like that, I'm unwilling to put much time into it. Instead, improve
the error report by issuing a different message for RETURNING cases, and
revise the associated comment based on this investigation.
Alvaro Herrera [Mon, 29 Feb 2016 19:28:54 +0000 (16:28 -0300)]
Make new isolationtester test more stable
The original coding of the test was relying too much on the ordering in
which backends are awakened once an advisory lock which they wait for is
released. Change the code so that each backend uses its own advisory
lock instead, so that the output becomes stable. Also add a few seconds
of sleep between lock releases, so that the test isn't broken in
overloaded buildfarm animals, as suggested by Tom Lane.
Tom Lane [Mon, 29 Feb 2016 15:48:40 +0000 (10:48 -0500)]
Remove useless unary plus.
It's harmless, but might confuse readers. Seems to have been introduced
in 6bc8ef0b7f1f1df3. Back-patch, just to avoid cosmetic cross-branch
differences.
Tom Lane [Mon, 29 Feb 2016 15:14:12 +0000 (10:14 -0500)]
Fix build under OPTIMIZER_DEBUG.
In commit 19a541143a09c067 I replaced RelOptInfo.width with
RelOptInfo.reltarget.width, but I missed updating debug_print_rel()
for that because it's not compiled by default.
Reported by Salvador Fandino, patch by Michael Paquier.
Dean Rasheed [Mon, 29 Feb 2016 12:28:06 +0000 (12:28 +0000)]
Fix incorrect varlevelsup in security_barrier_replace_vars().
When converting an RTE with securityQuals into a security barrier
subquery RTE, ensure that the Vars in the new subquery's targetlist
all have varlevelsup = 0 so that they correctly refer to the
underlying base relation being wrapped.
The original code was creating new Vars by copying them from existing
Vars referencing the base relation found elsewhere in the query, but
failed to account for the fact that such Vars could come from sublink
subqueries, and hence have varlevelsup > 0. In practice it looks like
this could only happen with nested security barrier views, where the
outer view has a WHERE clause containing a correlated subquery, due to
the order in which the Vars are processed.
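A hypothetical sketch of that shape (object names invented for illustration):
    CREATE TABLE base_tbl (a int, b int);
    CREATE VIEW inner_v WITH (security_barrier = true) AS
        SELECT * FROM base_tbl WHERE b > 0;
    CREATE VIEW outer_v WITH (security_barrier = true) AS
        SELECT * FROM inner_v v
        WHERE EXISTS (SELECT 1 FROM base_tbl t WHERE t.a = v.a);
    -- updating through the nested views exercises the Var-copying path
    UPDATE outer_v SET b = b + 1;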
Bug: #13988
Reported-by: Adam Guthrie
Backpatch-to: 9.4, where updatable SB views were introduced
Tom Lane [Mon, 29 Feb 2016 04:39:20 +0000 (23:39 -0500)]
Avoid multiple free_struct_lconv() calls on same data.
A failure partway through PGLC_localeconv() led to a situation where
the next call would call free_struct_lconv() a second time, leading
to free() on already-freed strings, typically leading to a core dump.
Add a flag to remember whether we need to do that.
Per report from Thom Brown. His example case only provokes the failure
as far back as 9.4, but nonetheless this code is obviously broken, so
back-patch to all supported branches.
Alvaro Herrera [Fri, 26 Feb 2016 20:11:15 +0000 (17:11 -0300)]
Add isolationtester spec for old heapam.c bug
In 0e5680f4737a, I fixed a bug in heapam that caused spurious deadlocks
when multiple updates concurrently attempted to modify the old version
of an updated tuple whose new version was key-share locked. I proposed
an isolationtester spec file that reproduced the bug, but back then
isolationtester wasn't mature enough to be able to run it. Now that 38f8bdcac498 is in the tree, we can have this spec file too.
Alvaro Herrera [Fri, 26 Feb 2016 19:22:53 +0000 (16:22 -0300)]
Apply last revision of recovery patch
I applied the previous-to-last revision of Michaël's submitted patch
instead of the last; these two tweaks pointed out by Craig were left out
of the previous commit by accident.
Alvaro Herrera [Fri, 26 Feb 2016 19:13:30 +0000 (16:13 -0300)]
Add a test framework for recovery
This long-awaited framework is an expansion of the existing PostgresNode
stuff to support additional features for recovery testing; the recovery
tests included in this commit are a starting point that cover some of
the recovery features we have. More scripts are expected to be added
later.
Author: Michaël Paquier, a bit of help from Amir Rohan
Reviewed by: Amir Rohan, Stas Kelvich, Kyotaro Horiguchi, Victor Wagner,
Craig Ringer, Álvaro Herrera
Discussion: http://www.postgresql.org/message-id/CAB7nPqTf7V6rswrFa=q_rrWeETUWagP=h8LX8XAov2Jcxw0DRg@mail.gmail.com
Discussion: http://www.postgresql.org/message-id/trinity-b4a8035d-59af-4c42-a37e-258f0f28e44a-1443795007012@3capp-mailcom-lxa08
Alvaro Herrera [Fri, 26 Feb 2016 16:24:22 +0000 (13:24 -0300)]
Move some code from RewindTest into PostgresNode
Some code in the RewindTest test suite is more generally useful than
just for that suite, so put it where other test suites can reach it.
Some postgresql.conf parameters now get different default values than
before when a cluster is initialized with 'allows_streaming'; most
notably, autovacuum is no longer turned off.
(Also, we no longer call pg_ctl promote with -w, but that flag doesn't
actually do anything in promote so there's no behavior change.)
Robert Haas [Fri, 26 Feb 2016 11:03:37 +0000 (16:33 +0530)]
On second thought, disable parallelism for prepared statements.
CREATE TABLE .. AS EXECUTE can turn an apparently read-only query into
a write operation, which parallel query can't handle. It's a bit of a
shame that this requires us to avoid parallel query for queries prepared via
PREPARE in all cases, but for right now it does.
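To illustrate the problem case (object names here are hypothetical):
    -- the prepared statement itself looks read-only ...
    PREPARE q AS SELECT * FROM some_table;
    -- ... but this way of executing it writes a new table, which a
    -- parallel plan could not handle
    CREATE TABLE some_table_copy AS EXECUTE q;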
Robert Haas [Fri, 26 Feb 2016 10:44:46 +0000 (16:14 +0530)]
Add new FDW API to test for parallel-safety.
This is basically a bug fix; the old code assumes that a ForeignScan
is always parallel-safe, but for postgres_fdw, for example, this is
definitely false. It should be true for file_fdw, though, since a
worker can read a file from the filesystem just as well as any other
backend process.
Original patch by Thomas Munro. Documentation, and changes to the
comments, by me.
Robert Haas [Thu, 25 Feb 2016 07:32:18 +0000 (13:02 +0530)]
Enable parallelism for prepared statements and extended query protocol.
Parallel query can't handle running a query only partially rather than
to completion. However, there seems to be no way to run a statement
prepared via SQL PREPARE other than to completion, so we can enable it
there without a problem.
The situation is more complicated for the extended query protocol.
libpq seems to provide no way to send an Execute message with a
non-zero rowcount, but some other client might. If that happens, and
a parallel plan was chosen, we'll execute the parallel plan without
using any workers, which may be somewhat inefficient but should still
work. Hopefully this won't be a problem; users can always set
max_parallel_degree=0 to avoid choosing parallel plans in the first
place.
Tom Lane [Mon, 22 Feb 2016 19:31:43 +0000 (14:31 -0500)]
Create a function to reliably identify which sessions block which others.
This patch introduces "pg_blocking_pids(int) returns int[]", which returns
the PIDs of any sessions that are blocking the session with the given PID.
Historically people have obtained such information using a self-join on
the pg_locks view, but it's unreasonably tedious to do it that way with any
modicum of correctness, and the addition of parallel queries has pretty
much broken that approach altogether. (Given some more columns in the view
than there are today, you could imagine handling parallel-query cases with
a 4-way join; but ugh.)
The new function has the following behaviors that are painful or impossible
to get right via pg_locks:
1. Correctly understands which lock modes block which other ones.
2. In soft-block situations (two processes both waiting for conflicting lock
modes), only the one that's in front in the wait queue is reported to
block the other.
3. In parallel-query cases, reports all sessions blocking any member of
the given PID's lock group, and reports a session by naming its leader
process's PID, which will be the pg_backend_pid() value visible to
clients.
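For example (a minimal usage sketch, not part of this commit):
    -- which sessions are blocking PID 12345?
    SELECT pg_blocking_pids(12345);
    -- all currently blocked sessions and the PIDs blocking them
    SELECT pid, pg_blocking_pids(pid) AS blocked_by
      FROM pg_stat_activity
     WHERE cardinality(pg_blocking_pids(pid)) > 0;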
The motivation for doing this right now is mostly to fix the isolation
tests. Commit 38f8bdcac4982215beb9f65a19debecaf22fd470 lobotomized
isolationtester's is-it-waiting query by removing its ability to recognize
nonconflicting lock modes, as a crude workaround for the inability to
handle soft-block situations properly. But even without the lock mode
tests, the old query was excessively slow, particularly in
CLOBBER_CACHE_ALWAYS builds; some of our buildfarm animals fail the new
deadlock-hard test because the deadlock timeout elapses before they can
probe the waiting status of all eight sessions. Replacing the pg_locks
self-join with use of pg_blocking_pids() is not only much more correct, but
a lot faster: I measure it at about 9X faster in a typical dev build with
Asserts, and 3X faster in CLOBBER_CACHE_ALWAYS builds. That should provide
enough headroom for the slower CLOBBER_CACHE_ALWAYS animals to pass the
test, without having to lengthen deadlock_timeout yet more and thus slow
down the test for everyone else.
We don't really need this field, because it's either zero or redundant with
PGPROC.pid. The use of zero to mark "not a group leader" is not necessary
since we can just as well test whether lockGroupLeader is NULL. This does
not save very much, either as to code or data, but the simplification seems
worthwhile anyway.
Andres Freund [Mon, 22 Feb 2016 06:48:44 +0000 (22:48 -0800)]
Fix wrong keysize in PrivateRefCountHash creation.
In 4b4b680c3 I accidentally used sizeof(PrivateRefCountArray) instead of
sizeof(PrivateRefCountEntry) when creating the refcount overflow
hashtable. As the former is bigger than the latter, this luckily only
resulted in a slightly increased memory usage when many buffers are
pinned in a backend.
Reported-By: Takashi Horikawa
Discussion: 73FA3881462C614096F815F75628AFCD035A48C3@BPXM01GP.gisp.nec.co.jp
Backpatch: 9.5, where the new ref count infrastructure was introduced
Tom Lane [Sun, 21 Feb 2016 20:23:17 +0000 (15:23 -0500)]
Docs: make prose discussion match the ordering of Table 9-58.
The "Session Information Functions" table seems to be sorted mostly
alphabetically (although it's not perfect), which would be all right
if it didn't lead to some related functions being described in a
pretty nonintuitive order. Also, the prose discussions after the table
were in an order that hardly matched the table at all. Rearrange to
make things a bit easier to follow.
Tom Lane [Sun, 21 Feb 2016 16:38:24 +0000 (11:38 -0500)]
Cosmetic improvements in new config_info code.
Coverity griped about use of unchecked strcpy() into a local variable.
There's unlikely to be any actual bug there, since no caller would be
passing a path longer than MAXPGPATH, but nonetheless use of strlcpy()
seems preferable.
While at it, get rid of unmaintainable separation between list of
field names and list of field values in favor of initializing them
in parallel. And we might as well declare get_configdata()'s path
argument as const char *, even though no current caller needs that.
Andrew Dunstan [Sun, 21 Feb 2016 15:30:49 +0000 (10:30 -0500)]
Fix two-argument jsonb_object when called with empty arrays
Some over-eager copy-and-pasting on my part resulted in a nonsense
result being returned in this case. I have adopted the same pattern for
handling this case as is used in the one argument form of the function,
i.e. we just skip over the code that adds values to the object.
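For example (a quick sketch of the intended behavior):
    SELECT jsonb_object('{}'::text[], '{}'::text[]);
    -- now returns an empty object: {}
    SELECT jsonb_object('{a,b}'::text[], '{1,2}'::text[]);
    -- returns {"a": "1", "b": "2"} as before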
Diagnosis and patch from Michael Paquier, although not quite his
solution.
Fixes bug #13936.
Backpatch to 9.5 where jsonb_object was introduced.