Tom Lane [Mon, 18 Jan 2016 00:36:59 +0000 (19:36 -0500)]
Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function. All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function. This is similar to
the designs we've adopted for FDWs and tablesample methods. There
are multiple advantages. For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.
A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL. We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.
Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
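As an illustrative sketch (not part of the patch; output elided), the slimmed-down
catalog and the new validation hook look roughly like this from SQL:
    -- pg_am is now just a name plus a handler function
    SELECT amname, amhandler FROM pg_am;
    -- opr_sanity now largely delegates its old SQL-level crosschecks to the
    -- per-AM amvalidate hook, run against each operator class
    SELECT amvalidate(oid) FROM pg_opclass WHERE opcname = 'int4_ops';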
Tom Lane [Sun, 17 Jan 2016 16:38:40 +0000 (11:38 -0500)]
Remove dead code in pg_dump.
Coverity quite reasonably complained that this check for fout==NULL
occurred after we'd already dereferenced fout. However, the check
is just dead code since there is no code path by which CreateArchive
can return a null pointer. Errors such as can't-open-that-file are
reported down inside CreateArchive, and control doesn't return.
So let's silence the warning by removing the dead code, rather than
continuing to pretend it does something.
Coverity didn't complain about this before 5b5fea2a1, so back-patch
to 9.5 like that patch.
Tom Lane [Thu, 14 Jan 2016 16:51:57 +0000 (11:51 -0500)]
Fix build_grouping_chain() to not clobber its input lists.
There's no good reason for stomping on the input data; it makes the logic
in this function no simpler, in fact probably the reverse. And it makes
it impossible to separate path generation from plan generation, as I'm
working towards doing; that will require more than one traversal of these
lists.
Magnus Hagander [Thu, 14 Jan 2016 12:06:03 +0000 (13:06 +0100)]
Properly close token in SSPI authentication
We can never leak more than one token, but we shouldn't leak even that one. We
don't bother closing it in the error paths since the process will
exit shortly anyway.
Tom Lane [Wed, 13 Jan 2016 23:55:27 +0000 (18:55 -0500)]
Handle extension members when first setting object dump flags in pg_dump.
pg_dump's original approach to handling extension member objects was to
run around and clear (or set) their dump flags rather late in its data
collection process. Unfortunately, quite a lot of code expects those flags
to be valid before that, which was an entirely reasonable expectation
before we added extensions. In particular, this explains Karsten Hilbert's
recent report of pg_upgrade failing on a database in which an extension
has been installed into the pg_catalog schema. Its objects are initially
marked as not-to-be-dumped on the strength of their schema, and later we
change them to must-dump because we're doing a binary upgrade of their
extension; but we've already skipped essential tasks like making associated
DO_SHELL_TYPE objects.
To fix, collect extension membership data first, and incorporate it in the
initial setting of the dump flags, so that those are once again correct
from the get-go. This has the undesirable side effect of slightly
lengthening the time taken before pg_dump acquires table locks, but testing
suggests that the increase in that window is not very much.
Along the way, get rid of ugly special-case logic for deciding whether
to dump procedural languages, FDWs, and foreign servers; dump decisions
for those are now correct up-front, too.
In 9.3 and up, this also fixes erroneous logic about when to dump event
triggers (basically, they were *always* dumped before). In 9.5 and up,
transform objects had that problem too.
Since this problem came in with extensions, back-patch to all supported
versions.
Tom Lane [Wed, 13 Jan 2016 22:48:33 +0000 (17:48 -0500)]
Access pg_dump's options structs through Archive struct, not directly.
Rather than passing around DumpOptions and RestoreOptions as separate
arguments, add fields to struct Archive to carry pointers to these objects,
and access them through those fields when needed. There already was a
RestoreOptions pointer in Archive, though for no obvious reason it was part
of the "private" struct rather than out where pg_dump.c could see it.
Doing this allows reversion of quite a lot of parameter-addition changes
made in commit 0eea8047bf, which is a good thing IMO because this will
reduce the code delta between 9.4 and 9.5, probably easing a few future
back-patch efforts. Moreover, the previous commit only added a DumpOptions
argument to functions that had to have it at the time, which means we could
anticipate still more code churn (and more back-patch hazard) as the
requirement spread further. I'd hit exactly that problem in my upcoming
patch to fix extension membership marking, which is what motivated me to
do this.
Peter Eisentraut [Sun, 10 Jan 2016 16:43:27 +0000 (11:43 -0500)]
psql: Fix CREATE INDEX tab completion
The previous code offered completion for a syntax like CREATE INDEX name
CONCURRENTLY, which has never existed. The mistake was introduced in commit
37ec19a15ce452ee94f32ebc3d6a9a45868e82fd. Remove the addition of CONCURRENTLY
at that point.
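For reference, CONCURRENTLY goes directly after CREATE INDEX, before the (optional)
index name; the names here are illustrative:
    CREATE INDEX CONCURRENTLY idx_foo_bar ON foo (bar);    -- valid
    -- CREATE INDEX idx_foo_bar CONCURRENTLY ...            -- never valid syntax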
Tom Lane [Tue, 12 Jan 2016 01:13:31 +0000 (20:13 -0500)]
Remove no-longer-needed old-style check for incompatible plpythons.
Commit 866566a690bb9916 introduced a new mechanism for incompatible
plpythons to detect each other. I left the old mechanism in place,
because it seems possible that a plpython predating that commit might be
used with one postdating it. (This would require updating plpython3 but
not plpython2 or vice versa, but that seems well within the realm of
possibility.) However, that certainly cannot happen in 9.6 or
later, so we can delete the old mechanism in HEAD.
Tom Lane [Tue, 12 Jan 2016 01:06:36 +0000 (20:06 -0500)]
Use LOAD not actual code execution to pull in plpython library.
Commit 866566a690bb9916 is insufficient to prevent dump/reload failures
when using transform modules in a database with both plpython2 and
plpython3 installed. The reason is that the transform extension scripts
use DO blocks as a mechanism to pull in the libpython library before
creating the transform function. It's necessary to preload the library
because the dynamic loader won't do it for us on every platform, leading
to "unresolved symbol" failures when the transform library is loaded.
But it's *not* necessary to execute Python code, and doing so will
provoke a multiple-Pythons-are-loaded error even after the preceding
commit.
To fix, use LOAD instead of a DO block. That requires superuser privilege,
but creation of a C function does anyway. It also embeds knowledge of
the underlying library name for each PL language; but that's wired into
the initdb-time contents of pg_pltemplate too, so that doesn't seem like
a large problem either. Note that CREATE TRANSFORM as such doesn't call
the language module at all.
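A rough sketch of the difference (the DO body and library name are illustrative,
not the exact script contents):
    -- old approach: executes a trivial bit of Python, which now trips the
    -- multiple-Pythons-are-loaded check
    DO 'pass' LANGUAGE plpythonu;
    -- new approach: pulls in the shared library without running any Python
    LOAD 'plpython2';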
Per a report from Paul Jones. Back-patch to 9.5 where transform modules
were introduced.
Tom Lane [Tue, 12 Jan 2016 00:55:39 +0000 (19:55 -0500)]
Avoid dump/reload problems when using both plpython2 and plpython3.
Commit 803716013dc1350f installed a safeguard against loading plpython2
and plpython3 at the same time, but asserted that both could still be
used in the same database, just not in the same session. However, that's
not actually all that practical because dumping and reloading will fail
(since both libraries necessarily get loaded into the restoring session).
pg_upgrade is even worse, because it checks for missing libraries by
loading every .so library mentioned in the entire installation into one
session, so that in effect you can have only one Python version across the whole cluster.
We can improve matters by not throwing the error immediately in _PG_init,
but only when and if we're asked to do something that requires calling
into libpython. This ameliorates both of the above situations, since
while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in
loading plpython, it isn't asked to do anything interesting (at least
not if check_function_bodies is off, as it will be during a restore).
It's possible that this opens some corner-case holes in which a crash
could be provoked with sufficient effort. However, since plpython
only exists as an untrusted language, any such crash would require
superuser privileges, making it "don't do that" not a security issue.
To reduce the hazards in this area, the error is still FATAL when it
does get thrown.
Per a report from Paul Jones. Back-patch to 9.2, which is as far back
as the patch applies without work. (It could be made to work in 9.1,
but given the lack of previous complaints, I'm disinclined to expend
effort so far back. We've been pretty desultory about support for
Python 3 in 9.1 anyway.)
Tom Lane [Sat, 9 Jan 2016 22:39:45 +0000 (17:39 -0500)]
Remove a useless PG_GETARG_DATUM() call from jsonb_build_array.
This loop uselessly fetched the argument after the one it's currently
looking at. No real harm is done since we couldn't possibly fetch off
the end of memory, but it's confusing to the reader.
Also remove a duplicate (and therefore confusing) PG_ARGISNULL check in
jsonb_build_object.
I happened to notice these things while trolling for missed null-arg
checks earlier today. Back-patch to 9.5, not because there is any
real bug, but just because 9.5 and HEAD are still in sync in this
file and we might as well keep them so.
Tom Lane [Sat, 9 Jan 2016 22:20:58 +0000 (17:20 -0500)]
Add some checks on "char"-type columns to type_sanity and opr_sanity.
I noticed that the sanity checks in the regression tests omitted to
check a couple of "poor man's enum" columns that you'd reasonably
expect them to check.
There are other "char"-type columns in system catalogs that are not
covered by either type_sanity or opr_sanity, e.g. pg_rewrite.ev_type.
However, those catalogs are not populated with any manually-created
data during bootstrap, so it seems less necessary to check them
this way.
The first three of these take OID, so a null argument will normally look
like a zero to them, resulting in "ERROR: could not open relation with OID
0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX
functions. The other three will dump core on a null argument, though this
is mitigated by the fact that they won't do so until after checking that
the caller is superuser or has rolreplication privilege.
In addition, the pg_logical_slot_get/peek[_binary]_changes family was
intentionally marked nonstrict, but failed to make nullness checks on all
the arguments; so again a null-pointer-dereference crash is possible but
only for superusers and rolreplication users.
Add the missing ARGISNULL checks to the latter functions, and mark the
former functions as strict in pg_proc. Make that change in the back
branches too, even though we can't force initdb there, just so that
installations initdb'd in future won't have the issue. Since none of these
bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX
functions hardly misbehave at all), it seems sufficient to do this.
In addition, fix some order-of-operations oddities in the slot_get_changes
family, mostly cosmetic, but not the part that moves the function's last
few operations into the PG_TRY block. As it stood, there was significant
risk for an error to exit without clearing historical information from
the system caches.
The slot_get_changes bugs go back to 9.4 where that code was introduced.
Back-patch appropriate subsets of the pg_proc changes into all active
branches, as well.
Tom Lane [Sat, 9 Jan 2016 18:44:27 +0000 (13:44 -0500)]
Clean up code for widget_in() and widget_out().
Given syntactically wrong input, widget_in() could call atof() with an
indeterminate pointer argument, typically leading to a crash; or if it
didn't do that, it might return a NULL pointer, which again would lead
to a crash since old-style C functions aren't supposed to do things
that way. Fix that by correcting the off-by-one syntax test and
throwing a proper error rather than just returning NULL.
Also, since widget_in and widget_out have been marked STRICT for a
long time, their tests for null inputs are just dead code; remove 'em.
In the oldest branches, also improve widget_out to use snprintf not
sprintf, just to be sure.
In passing, get rid of a long-since-useless sprintf into a local buffer
that nothing further is done with, and make some other minor coding
style cleanups.
In the intended regression-testing usage of these functions, none of
this is very significant; but if the regression test database were
left around in a production installation, these bugs could amount
to a minor security hazard.
Tom Lane [Sat, 9 Jan 2016 18:02:54 +0000 (13:02 -0500)]
Add STRICT to some C functions created by the regression tests.
These functions readily crash when passed a NULL input value. The tests
themselves do not pass NULL values to them; but when the regression
database is used as a basis for fuzz testing, they cause a lot of noise.
Also, if someone were to leave a regression database lying about in a
production installation, these would create a minor security hazard.
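The shape of the fix, with a hypothetical function name and library path:
    -- a STRICT C function is never called with a NULL argument; the executor
    -- just returns NULL instead
    CREATE FUNCTION regress_helper(int4) RETURNS int4
        AS '/path/to/regress.so', 'regress_helper'
        LANGUAGE C STRICT;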
Simon Riggs [Sat, 9 Jan 2016 10:10:08 +0000 (10:10 +0000)]
Avoid pin scan for replay of XLOG_BTREE_VACUUM
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require
complex interlocking that matched the requirements on the master. This required
an O(N) operation that became a significant problem with large indexes, causing
replication delays of seconds or in some cases minutes while the
XLOG_BTREE_VACUUM was replayed.
This commit skips the “pin scan” that was previously required, by observing in
detail when and how it is safe to do so, with full documentation. The pin scan
is skipped only in replay; the VACUUM code path on master is not touched here.
The current commit still performs the pin scan for toast indexes, though this
too could be avoided if we recheck scans on toast indexes. A later patch will
address that.
No tests are included. Manual tests using an additional patch to view WAL records
and their timing have shown that the changes in WAL records and their handling
successfully reduce replication delay.
Alvaro Herrera [Fri, 8 Jan 2016 16:18:40 +0000 (13:18 -0300)]
Revert "Blind attempt at a Cygwin fix"
This reverts commit e9282e953205a2f3125fc8d1052bc01cb77cd2a3, which blew
up in a pretty spectacular way. Re-introduce the original code while we
search for a real fix.
Alvaro Herrera [Fri, 8 Jan 2016 14:48:39 +0000 (11:48 -0300)]
Blind attempt at a Cygwin fix
Further portability fix for a967613911f7. MinGW- and MSVC-based builds
appear to be working fine, but Cygwin needs an extra tweak whereby the
new win32security.c file is explicitly added to the list of files to
build in pgport, per Cygwin buildfarm members brolga and lorikeet.
Tom Lane [Fri, 8 Jan 2016 01:32:35 +0000 (20:32 -0500)]
Marginal cleanup of GROUPING SETS code in grouping_planner().
Improve comments and make it a shade less messy. I think we might want
to move all of this somewhere else later, but it needs to be more
readable first.
In passing, re-pgindent the file, affecting some recently-added comments
concerning parallel query planning.
Tom Lane [Fri, 8 Jan 2016 01:23:57 +0000 (20:23 -0500)]
Delay creation of subplan tlist until after create_plan().
Once upon a time it was necessary for grouping_planner() to determine
the tlist it wanted from the scan/join plan subtree before it called
query_planner(), because query_planner() would actually make a Plan using
that. But we refactored things a long time ago to delay construction of
the Plan tree till later, so there's no need to build that tlist until
(and indeed unless) we're ready to plaster it onto the Plan. The only
thing query_planner() cares about is what Vars are going to be needed for
the tlist, and it can perfectly well get that by looking at the real tlist
rather than some masticated version.
Well, actually, there is one minor glitch in that argument, which is that
make_subplanTargetList also adds Vars appearing only in HAVING to the
tlist it produces. So now we have to account for HAVING explicitly in
build_base_rel_tlists. But that just adds a few lines of code, and
I doubt it moves the needle much on processing time; we might be doing
pull_var_clause() twice on the havingQual, but before we had it scanning
dummy tlist entries instead.
This is a very small down payment on rationalizing grouping_planner
enough so it can be refactored.
Tom Lane [Thu, 7 Jan 2016 23:20:57 +0000 (18:20 -0500)]
Fix unobvious interaction between -X switch and subdirectory creation.
Turns out the only reason initdb -X worked is that pg_mkdir_p won't
whine if you point it at something that's a symlink to a directory.
Otherwise, the attempt to create pg_xlog/ just like all the other
subdirectories would have failed. Let's be a little more explicit
about what's happening. Oversight in my patch for bug #13853
(mea culpa for not testing -X ...)
Tom Lane [Thu, 7 Jan 2016 20:22:01 +0000 (15:22 -0500)]
Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA.
When we're creating subdirectories of PGDATA during initdb, we know darn
well that the parent directory exists (or should exist) and that the new
subdirectory doesn't (or shouldn't). There is therefore no need to use
anything more complicated than mkdir(). Using pg_mkdir_p() just opens us
up to unexpected failure modes, such as the one exhibited in bug #13853
from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there,
but it is clear that we didn't need to be trying to create parent
directories in the first place. We're not even saving any code, as proven
by the fact that this patch nets out at minus five lines.
Since this is a response to a field bug report, back-patch to all branches.
Alvaro Herrera [Thu, 7 Jan 2016 19:21:19 +0000 (16:21 -0300)]
pgstat: add WAL receiver status view & SRF
This new view provides insight into the state of a running WAL receiver
on a hot-standby node.
The information returned includes the PID of the WAL receiver process,
its status (stopped, starting, streaming, etc), start LSN and TLI, last
received LSN and TLI, timestamp of last message send and receipt, latest
end-of-WAL LSN and time, and the name of the slot (if any).
Access to the detailed data is only granted to superusers; others only
get the PID.
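An illustrative query against the new view (only a subset of the columns described
above is shown):
    SELECT pid, status, received_lsn, received_tli, slot_name
    FROM pg_stat_wal_receiver;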
Tom Lane [Thu, 7 Jan 2016 16:26:54 +0000 (11:26 -0500)]
Remove vestigial CHECK_FOR_INTERRUPTS call.
Commit e710b65c inserted code in md5_crypt_verify to disable and later
re-enable interrupts, with a CHECK_FOR_INTERRUPTS call as part of the
second step, to process any interrupts that had been held off. Commit 6647248e removed the interrupt disable/re-enable code, but left behind
the CHECK_FOR_INTERRUPTS, even though this is now an entirely random,
pointless place for one. md5_crypt_verify doesn't run long enough to
need such a check, and if it did, this would still be the wrong place
to put one.
Tom Lane [Thu, 7 Jan 2016 16:19:33 +0000 (11:19 -0500)]
Provide more detail in postmaster log for password authentication failures.
We tell people to examine the postmaster log if they're unsure why they are
getting auth failures, but actually only a few relatively-uncommon failure
cases were given their own log detail messages in commit 64e43c59b817a78d.
Expand on that so that every failure case detected within md5_crypt_verify
gets a specific log detail message. This should cover pretty much every
ordinary password auth failure cause.
So far I've not noticed user demand for a similar level of auth detail
for the other auth methods, but sooner or later somebody might want to
work on them. This is not that patch, though.
Alvaro Herrera [Thu, 7 Jan 2016 14:59:08 +0000 (11:59 -0300)]
Windows: Make pg_ctl reliably detect service status
pg_ctl is using isatty() to verify whether the process is running in a
terminal, and if not it sends its output to Windows' Event Log ... which
does the wrong thing when the output has been redirected to a pipe, as
reported in bug #13592.
To fix, make pg_ctl use the code we already have to detect service-ness:
in the master branch, move src/backend/port/win32/security.c to src/port
(with suitable tweaks so that it runs properly in backend and frontend
environments); pg_ctl already has access to pgport so it Just Works. In
older branches, that's likely to cause trouble, so instead duplicate the
required code in pg_ctl.c.
Author: Michael Paquier
Bug report and diagnosis: Egon Kocjan
Backpatch: all supported branches
Tom Lane [Wed, 6 Jan 2016 17:25:32 +0000 (12:25 -0500)]
In initdb's post-bootstrap phase, drop temp tables explicitly.
Although these temp tables will get removed from template1 at the end of
the standalone-backend run, that's too late to keep them from getting
copied into the template0 and postgres databases, now that we use only a
single backend run for the whole sequence. While no real harm is done
by the extra copies (since they'd be deleted on first use of the temp
schema), it's still unsightly, and it would mean some wasted cycles for
every database creation for the life of the installation.
Tom Lane [Tue, 5 Jan 2016 21:43:40 +0000 (16:43 -0500)]
Remove some ancient and unmaintained encoding-conversion test cruft.
In commit 921191912c48a68d I claimed that we weren't testing encoding
conversion functions, but further poking around reveals that we did
have an equivalent though hard-wired set of tests in conversion.sql.
AFAICS there is no advantage to doing it like that as compared to letting
the catalog contents drive the test, so let the opr_sanity addition stand
and remove the now-redundant tests in conversion.sql.
Also, remove some infrastructure in src/backend/utils/mb/conversion_procs
for building conversion.sql's list of tests. That was unmaintained, and
had not corresponded to the actual contents of conversion.sql since 2007
or perhaps even further back.
Tom Lane [Tue, 5 Jan 2016 20:47:05 +0000 (15:47 -0500)]
Sort $(wildcard) output where needed for reproducible build output.
The order of inclusion of .o files makes a difference in linker output;
not a functional difference, but still a bitwise difference, which annoys
some packagers who would like reproducible builds.
Alvaro Herrera [Tue, 5 Jan 2016 20:25:12 +0000 (17:25 -0300)]
Make pg_receivexlog silent with 9.3 and older servers
A pointless and confusing error message is shown to the user when
attempting to identify a 9.3 or older remote server with a 9.5/9.6
pg_receivexlog, because the return signature of IDENTIFY_SYSTEM was
changed in 9.4. There's no good reason for the warning message, so
shuffle code around to keep it quiet.
(pg_recvlogical is also affected by this commit, but since it obviously
cannot work with 9.3 that doesn't actually matter much.)
Backpatch to 9.5.
Reported by Marco Nenciarini, who also wrote the initial patch. Further
tweaked by Robert Haas and Fujii Masao; reviewed by Michael Paquier and
Craig Ringer.
Tom Lane [Tue, 5 Jan 2016 20:00:54 +0000 (15:00 -0500)]
In opr_sanity regression test, check for unexpected uses of cstring.
In light of commit ea0d494dae0d3d6f, it seems like a good idea to add
a regression test that will complain about random functions taking or
returning cstring. Only I/O support functions and encoding conversion
functions should be declared that way.
While at it, add some checks that encoding conversion functions are
declared properly. Since pg_conversion isn't populated manually,
it's not quite as necessary to check its contents as it is for catalogs
like pg_proc; but one thing we definitely have not tested in the past
is whether the identified conproc for a conversion actually does that
conversion vs. some other one.
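A simplified sketch of the flavor of check added (not the actual regression query):
    -- functions returning cstring should only be type-output or typmod-output
    -- support functions
    SELECT p.oid::regprocedure
    FROM pg_proc p
    WHERE p.prorettype = 'cstring'::regtype
      AND NOT EXISTS (SELECT 1 FROM pg_type t WHERE t.typoutput = p.oid)
      AND NOT EXISTS (SELECT 1 FROM pg_type t WHERE t.typmodout = p.oid);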
Tom Lane [Tue, 5 Jan 2016 18:02:43 +0000 (13:02 -0500)]
Make the to_reg*() functions accept text not cstring.
Using cstring as the input type was a poor decision, because that's not
really a full-fledged type. In particular, it lacks implicit coercions
from text or varchar, meaning that usages like to_regproc('foo'||'bar')
wouldn't work; basically the only case that did work without explicit
casting was a simple literal constant argument.
The lack of field complaints about this suggests that hardly anyone
is using these functions, so hopefully fixing it won't cause much of
a compatibility problem. They've only been there since 9.4, anyway.
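For instance (the particular lookups are illustrative):
    SELECT to_regproc('now');           -- a plain literal worked before and still does
    SELECT to_regproc('fo' || 'o');     -- previously failed for lack of a
                                        -- text-to-cstring coercion; now fine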
Alvaro Herrera [Tue, 5 Jan 2016 17:50:53 +0000 (14:50 -0300)]
Make pg_shseclabel available in early backend startup
While the in-core authentication mechanism doesn't need to access
pg_shseclabel at all, it's reasonable to think that an authentication
hook will want to look at the label for the role logging in, or for rows
in other catalogs used during the authentication phase of startup.
Catalog version bumped, because this changes the "is nailed" status for
pg_shseclabel.
Tom Lane [Tue, 5 Jan 2016 17:00:13 +0000 (12:00 -0500)]
Convert psql's tab completion for backslash commands to the new style.
This requires adding some more infrastructure to handle both case-sensitive
and case-insensitive matching, as well as the ability to match a prefix of
a previous word. So it ends up being about a wash line-count-wise, but
it's just as big a readability win here as in the SQL tab completion rules.
Tom Lane [Tue, 5 Jan 2016 01:08:08 +0000 (20:08 -0500)]
In psql's tab completion, change most TailMatches patterns to Matches.
In the refactoring in commit d37b816dc9e8f976c8913296781e08cbd45c5af1,
we mostly kept to the original design whereby only the last few words
on the line were matched to identify a completable pattern. However,
after commit d854118c8df8c413d069f7e88bb01b9e18e4c8ed, there's really
no reason to do it like that: where it's sensible, we can use patterns
that expect to match the entire input line. And mostly, it's sensible.
Matching the entire line greatly reduces the odds of a false match that
leads to offering irrelevant completions. Moreover (though I've not
tried to measure this), it should make tab completion faster since
many of the patterns will be discarded after a single integer comparison
that finds that the wrong number of words appear on the line.
There are certain identifiable places where we still need to use
TailMatches because the statement in question is allowed to appear
embedded in a larger statement. These are just a small minority of
the existing patterns, though, so the benefit of switching where
possible is large.
It's possible that this patch has removed some within-line matching
behaviors that are in fact desirable, but we can put those back when
we get complaints. Most of the removed behaviors are certainly silly.
Michael Paquier, with some further adjustments by me
Tom Lane [Mon, 4 Jan 2016 20:11:43 +0000 (15:11 -0500)]
Docs: provide a concrete discussion and example for RLS race conditions.
Commit 43cd468cf01007f3 added some wording to create_policy.sgml purporting
to warn users against a race condition of the sort that had been noted some
time ago by Peter Geoghegan. However, that warning was far too vague to be
useful (or at least, I completely failed to grasp what it was on about).
Since the problem case occurs with a security design pattern that lots of
people are likely to try to use, we need to be as clear as possible about
it. Provide a concrete example in the main-line docs in place of the
original warning.
Tom Lane [Mon, 4 Jan 2016 17:21:31 +0000 (12:21 -0500)]
Adjust behavior of row_security GUC to match the docs.
Some time back we agreed that row_security=off should not be a way to
bypass RLS entirely, but only a way to get an error if it was being
applied. However, the code failed to act that way for table owners.
Per discussion, this is a must-fix bug for 9.5.0.
Adjust the logic in rls.c to behave as expected; also, modify the
error message to be more consistent with the new interpretation.
The regression tests need minor corrections as well. Also update
the comments about row_security in ddl.sgml to be correct. (The
official description of the GUC in config.sgml is already correct.)
I failed to resist the temptation to do some other very minor
cleanup as well, such as getting rid of a duplicate extern declaration.
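Roughly, for a user to whom a policy applies (table name and exact message wording
shown illustratively):
    SET row_security = off;
    SELECT * FROM protected_tbl;
    -- ERROR:  query would be affected by row-level security policy for table "protected_tbl"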
Tom Lane [Mon, 4 Jan 2016 06:03:53 +0000 (01:03 -0500)]
Fix regrole and regnamespace types to honor quoting like other reg* types.
Aside from any consistency arguments, this is logically necessary because
the I/O functions for these types also handle numeric OID values. Without
a quoting rule it is impossible to distinguish numeric OIDs from role or
namespace names that happen to contain only digits.
Also change the to_regrole and to_regnamespace functions to dequote their
arguments. While not logically essential, this seems like a good idea
since the other to_reg* functions do it. Anyone who really wants raw
lookup of an uninterpreted name can fall back on the time-honored solution
of (SELECT oid FROM pg_namespace WHERE nspname = whatever).
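For example (role names and OIDs are illustrative):
    SELECT '10'::regrole;        -- numeric string: treated as OID 10
    SELECT '"10"'::regrole;      -- quoted: looked up as a role literally named 10
    SELECT to_regrole('"some role"');    -- the to_reg* functions now dequote too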
Report and patch by Jim Nasby, reviewed by Michael Paquier
Tom Lane [Mon, 4 Jan 2016 01:53:35 +0000 (20:53 -0500)]
Fix bogus lock release in RemovePolicyById and RemoveRoleFromObjectPolicy.
Can't release the AccessExclusiveLock on the target table until commit.
Otherwise there is a race condition whereby other backends might service
our cache invalidation signals before they can actually see the updated
catalog rows.
Just to add insult to injury, RemovePolicyById was closing the rel (with
incorrect lock drop) and then passing the now-dangling rel pointer to
CacheInvalidateRelcache. Probably the reason this doesn't fall over on
CLOBBER_CACHE buildfarm members is that some outer level of the DROP logic
is still holding the rel open ... but it'd have bit us on the arse
eventually, no doubt.
Tom Lane [Sun, 3 Jan 2016 21:26:38 +0000 (16:26 -0500)]
Guard against null arguments in binary_upgrade_create_empty_extension().
The CHECK_IS_BINARY_UPGRADE macro is not sufficient security protection
if we're going to dereference pass-by-reference arguments before it.
But in any case we really need to explicitly check PG_ARGISNULL for all
the arguments of a non-strict function, not only the ones we expect null
values for.
Tom Lane [Sun, 3 Jan 2016 18:56:29 +0000 (13:56 -0500)]
Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition.
pgwin32_recv() has treated a non-error return of zero bytes from WSARecv()
as being a reason to block ever since the current implementation was
introduced in commit a4c40f140d23cefb. However, so far as one can tell
from Microsoft's documentation, that is just wrong: what it means is
graceful connection closure (in stream protocols) or receipt of a
zero-length message (in message protocols), and neither case should result
in blocking here. The only reason the code worked at all was that control
then fell into the retry loop, which did *not* treat zero bytes specially,
so we'd get out after only wasting some cycles. But as of 9.5 we do not
normally reach the retry loop and so the bug is exposed, as reported by
Shay Rojansky and diagnosed by Andres Freund.
Remove the unnecessary test on the byte count, and rearrange the code
in the retry loop so that it looks identical to the initial sequence.
Back-patch to 9.5. The code is wrong all the way back, AFAICS, but
since it's relatively harmless in earlier branches we'll leave it alone.
Tom Lane [Sun, 3 Jan 2016 00:04:45 +0000 (19:04 -0500)]
Teach pg_dump to quote reloption values safely.
Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected
the fact that several code paths in pg_dump were printing reloptions
values that had not gotten massaged by ruleutils. Apply essentially the
same quoting logic in those places, too.
Tom Lane [Sat, 2 Jan 2016 21:24:50 +0000 (16:24 -0500)]
Fix overly-strict assertions in spgtextproc.c.
spg_text_inner_consistent is capable of reconstructing an empty string
to pass down to the next index level; this happens if we have an empty
string coming in, no prefix, and a dummy node label. (In practice, what
is needed to trigger that is insertion of a whole bunch of empty-string
values.) Then, we will arrive at the next level with in->level == 0
and a non-NULL (but zero length) in->reconstructedValue, which is valid
but the Assert tests weren't expecting it.
Per report from Andreas Seltenreich. This has no impact in non-Assert
builds, so should not be a problem in production, but back-patch to
all affected branches anyway.
In passing, remove a couple of useless variable initializations and
shorten the code by not duplicating DatumGetPointer() calls.
Tom Lane [Sat, 2 Jan 2016 20:29:02 +0000 (15:29 -0500)]
Adjust back-branch release note description of commits a2a718b22 et al.
As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist
in 9.0-9.3, making the release note text not very useful. Instead make it
talk about recovery_target_xid, which did exist then.
9.0 is already out of support, but we can fix the text in the newer
branches' copies of its release notes.
Tom Lane [Sat, 2 Jan 2016 19:19:48 +0000 (14:19 -0500)]
Update copyright for 2016
On closer inspection, the reason copyright.pl was missing files is
that it is looking for 'Copyright (c)' and they had 'Copyright (C)'.
Fix that, and update a couple more that grepping for that revealed.
Tom Lane [Fri, 1 Jan 2016 20:27:53 +0000 (15:27 -0500)]
Teach flatten_reloptions() to quote option values safely.
flatten_reloptions() supposed that it didn't really need to do anything
beyond inserting commas between reloption array elements. However, in
principle the value of a reloption could be nearly anything, since the
grammar allows a quoted string there. Any restrictions on it would come
from validity checking appropriate to the particular option, if any.
A reloption value that isn't a simple identifier or number could thus lead
to dump/reload failures due to syntax errors in CREATE statements issued
by pg_dump. We've gotten away with not worrying about this so far with
the core-supported reloptions, but extensions might allow reloption values
that cause trouble, as in bug #13840 from Kouhei Sutou.
To fix, split the reloption array elements explicitly, and then convert
any value that doesn't look like a safe identifier to a string literal.
(The details of the quoting rule could be debated, but this way is safe
and requires little code.) While we're at it, also quote reloption names
if they're not safe identifiers; that may not be a likely problem in the
field, but we might as well try to be bulletproof here.
It's been like this for a long time, so back-patch to all supported
branches.
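A hypothetical illustration (the access method and option names are invented; the
point is the quoting):
    CREATE INDEX idx_doc ON doc USING some_am (body)
        WITH (tokenizer = 'token mecab');
    -- pg_dump now reproduces the option as tokenizer='token mecab' rather than
    -- emitting the value unquoted and breaking the reload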
Tom Lane [Fri, 1 Jan 2016 18:42:21 +0000 (13:42 -0500)]
Add some more defenses against silly estimates to gincostestimate().
A report from Andy Colson showed that gincostestimate() was not being
nearly paranoid enough about whether to believe the statistics it finds in
the index metapage. The problem is that the metapage stats (other than the
pending-pages count) are only updated by VACUUM, and in the worst case
could still reflect the index's original empty state even when it has grown
to many entries. We attempted to deal with that by scaling up the stats to
match the current index size, but if nEntries is zero then scaling it up
still gives zero. Moreover, the proportion of pages that are entry pages
vs. data pages vs. pending pages is unlikely to be estimated very well by
scaling if the index is now orders of magnitude larger than before.
We can improve matters by expanding the use of the rule-of-thumb estimates
I introduced in commit 7fb008c5ee59b040: if the index has grown by more
than a cutoff amount (here set at 4X growth) since VACUUM, then use the
rule-of-thumb numbers instead of scaling. This might not be exactly right
but it seems much less likely to produce insane estimates.
I also improved both the scaling estimate and the rule-of-thumb estimate
to account for numPendingPages, since it's reasonable to expect that that
is accurate in any case, and certainly pages that are in the pending list
are not either entry or data pages.
As a somewhat separate issue, adjust the estimation equations that are
concerned with extra fetches for partial-match searches. These equations
suppose that a fraction partialEntries / numEntries of the entry and data
pages will be visited as a consequence of a partial-match search. Now,
it's physically impossible for that fraction to exceed one, but our
estimate of partialEntries is mostly bunk, and our estimate of numEntries
isn't exactly gospel either, so we could arrive at a silly value. In the
example presented by Andy we were coming out with a value of 100, leading
to insane cost estimates. Clamp the fraction to one to avoid that.
Like the previous patch, back-patch to all supported branches; this
problem can be demonstrated in one form or another in all of them.
Noah Misch [Fri, 1 Jan 2016 06:46:46 +0000 (01:46 -0500)]
Fix comments about WAL rule "write xlog before data" versus pg_multixact.
Recovery does not achieve its goal of zeroing all pg_multixact entries
whose accompanying WAL records never reached disk. Remove that claim
and justify its expendability. Detail the need for TrimMultiXact(),
which has little in common with the TrimCLOG() rationale. Merge two
tightly-related comments. Stop presenting pg_multixact as specific to
heap_lock_tuple(); PostgreSQL 9.3 extended its use to heap_update().
Noticed while investigating a report from Andres Freund.
doc: Remove redundant duplicate URLs from ulink elements
Empty ulink elements default to displaying the URL, so there is no need
to specify the URL again. This was already done for most occurrences,
but some cases didn't follow this convention.
Tom Lane [Thu, 31 Dec 2015 22:59:10 +0000 (17:59 -0500)]
Add a comment noting that FDWs don't have to implement EXCEPT or LIMIT TO.
postgresImportForeignSchema pays attention to IMPORT's EXCEPT and LIMIT TO
options, but only as an efficiency hack, not for correctness' sake. The
FDW documentation does explain that, but someone using postgres_fdw.c
as a coding guide might not remember it, so let's add a comment here.
Per question from Regina Obe.
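For reference, the options in question (server, schema, and table names are
illustrative):
    IMPORT FOREIGN SCHEMA remote_public
        LIMIT TO (orders, customers)
        FROM SERVER pgserver INTO local_schema;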
Tom Lane [Thu, 31 Dec 2015 22:37:31 +0000 (17:37 -0500)]
Fix ALTER OPERATOR to update dependencies properly.
Fix an oversight in commit 321eed5f0f7563a0: replacing an operator's
selectivity functions needs to result in a corresponding update in
pg_depend. We have a function that can handle that, but it was not
called by AlterOperator().
To fix this without enlarging pg_operator.h's #include list beyond
what clients can safely include, split off the function definitions
into a new file pg_operator_fn.h, similarly to what we've done for
some other catalog header files. It's not entirely clear whether
any client-side code needs to include pg_operator.h, but it seems
prudent to assume that there is some such code somewhere.
Tom Lane [Wed, 30 Dec 2015 22:32:23 +0000 (17:32 -0500)]
Dept of second thoughts: the !scan_all exit mustn't increase scanned_pages.
In the extreme edge case where contended pages are the only ones that
escape being scanned, the previous commit would have allowed us to think
that relfrozenxid could be advanced, which is exactly wrong.
Tom Lane [Wed, 30 Dec 2015 22:13:15 +0000 (17:13 -0500)]
Avoid useless truncation attempts during VACUUM.
VACUUM can skip heap pages altogether when there's a run of consecutive
pages that are all-visible according to the visibility map. This causes it
to not update its nonempty_pages count, just as if those pages were empty,
which means that at the end we will think they are candidates for deletion.
Thus, we may take the table's AccessExclusive lock only to find that no
pages are really truncatable. This usually causes no real problems on a
master server, thanks to the lock being acquired only conditionally; but on
hot-standby servers, the same lock must be acquired unconditionally which
can result in unnecessary query cancellations.
To improve matters, force examination of the table's last page whenever
we reach there with a nonempty_pages count that would allow a truncation
attempt. If it's not empty, we'll advance nonempty_pages and thereby
prevent the truncation attempt.
If we are unable to acquire cleanup lock on that page, there's no need to
force it, unless we're doing an anti-wraparound vacuum. We can just check
for tuples with a shared buffer lock and then give up. (When we are doing
an anti-wraparound vacuum, and decide it's okay to skip the page because it
contains no freezable tuples, this patch still improves matters because
nonempty_pages is properly updated, which it was not before.)
Since only the last page is special-cased in this way, we might attempt a
truncation that will release many fewer pages than the normal heuristic
would suggest; at worst, only one page would be truncated. But that seems
all right, because the situation won't repeat during the next vacuum.
The real problem with the old logic is that the useless truncation attempt
happens every time we vacuum, so long as the state of the last few dozen
pages doesn't change.
This is a longstanding deficiency, but since the consequences aren't very
severe in most scenarios, I'm not going to risk a back-patch.
Tom Lane [Wed, 30 Dec 2015 02:21:04 +0000 (21:21 -0500)]
Minor hacking on contrib/cube documentation.
Improve markup, particularly of the table of functions; add or improve
examples for some of the functions; wordsmith some of the function
descriptions.
Tom Lane [Tue, 29 Dec 2015 23:50:35 +0000 (18:50 -0500)]
Add some comments about division of labor between rewriter and planner.
The rationale for the way targetlist processing is done wasn't clearly
stated anywhere, and I for one had forgotten some of the details. Having
just painfully re-learned them, add some breadcrumbs for the next person.
Tom Lane [Tue, 29 Dec 2015 21:45:47 +0000 (16:45 -0500)]
Put back one copyObject() in rewriteTargetView().
Commit 6f8cb1e23485bd6d tried to centralize rewriteTargetView's copying
of a target view's Query struct. However, it ignored the fact that the
jointree->quals field was used twice. This only accidentally failed to
fail immediately because the same ChangeVarNodes mutation is applied in
both cases, so that we end up with logically identical expression trees
for both uses (and, as the code stands, the second ChangeVarNodes call
actually does nothing). However, we end up linking *physically*
identical expression trees into both an RTE's securityQuals list and
the WithCheckOption list. That's pretty dangerous, mainly because
prepsecurity.c is utterly cavalier about further munging such structures
without copying them first.
There may be no live bug in HEAD as a consequence of the fact that we apply
preprocess_expression in between here and prepsecurity.c, and that will
make a copy of the tree anyway. Or it may just be that the regression
tests happen to not trip over it. (I noticed this only because things
fell over pretty badly when I tried to relocate the planner's call of
expand_security_quals to before expression preprocessing.) In any case
it's very fragile because if anyone tried to make the securityQuals and
WithCheckOption trees diverge before we reach preprocess_expression, it
would not work. The fact that the current code will preprocess
securityQuals and WithCheckOptions lists at completely different times in
different query levels does nothing to increase my trust that that can't
happen.
In view of the fact that 9.5.0 is almost upon us and the aforesaid commit
has seen exactly zero field testing, the prudent course is to make an extra
copy of the quals so that the behavior is not different from what has been
in the field during beta.
Joe Conway [Mon, 28 Dec 2015 20:34:11 +0000 (12:34 -0800)]
Rename (new|old)estCommitTs to (new|old)estCommitTsXid
The variables newestCommitTs and oldestCommitTs sound as if they are
timestamps, but in fact they are the transaction Ids that correspond
to the newest and oldest timestamps rather than the actual timestamps.
Rename these variables to reflect that they are actually xids: to wit
newestCommitTsXid and oldestCommitTsXid respectively. Also modify
related code in a similar fashion, particularly the user facing output
emitted by pg_controldata and pg_resetxlog.
Complaint and patch by me, review by Tom Lane and Alvaro Herrera.
Backpatch to 9.5 where these variables were first introduced.
Adjust coding so that compilers unfamiliar with elog/ereport don't complain
about uninitialized values.
Fix misuse of PG_GETARG_INT16 to retrieve arguments declared as "integer"
at the SQL level. (This was evidently copied from cube_ll_coord and
cube_ur_coord, but those were wrong too.)
Fix non-style-guide-conforming error messages.
Fix underparenthesized if statements, which pgindent would have made a
hash of, and remove some unnecessary parens elsewhere.
Run pgindent over new code.
Revise documentation: repeated accretion of more operators without any
rethinking of the text already there had left things in a bit of a mess.
Merge all the cube operators into one table and adjust surrounding text
appropriately.
Tom Lane [Mon, 28 Dec 2015 17:09:00 +0000 (12:09 -0500)]
Document the exponentiation operator as associating left to right.
Common mathematical convention is that exponentiation associates right to
left. We aren't going to change the parser for this, but we could note
it in the operator's description. (It's already noted in the operator
precedence/associativity table, but users might not look there.)
Per bug #13829 from Henrik Pauli.
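Concretely:
    SELECT 2 ^ 3 ^ 3;        -- 512, i.e. (2 ^ 3) ^ 3
    SELECT 2 ^ (3 ^ 3);      -- 134217728, the usual mathematical reading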
Tom Lane [Mon, 28 Dec 2015 16:46:32 +0000 (11:46 -0500)]
Fix omission of -X (--no-psqlrc) in some psql invocations.
As of commit d5563d7df, psql -c no longer implies -X, but not all of
our regression testing scripts had gotten that memo.
To ensure consistency of results across different developers, make
sure that *all* invocations of psql in all scripts in our tree
use -X, even where this is not what previously happened.
Tom Lane [Mon, 28 Dec 2015 16:04:42 +0000 (11:04 -0500)]
Update documentation about pseudo-types.
Tone down an overly strong statement about which pseudo-types PLs are
likely to allow. Add "event_trigger" to the list, as well as
"pg_ddl_command" in 9.5/HEAD. Back-patch to 9.3 where event_trigger
was added.
Alvaro Herrera [Mon, 28 Dec 2015 13:50:35 +0000 (10:50 -0300)]
Fix translation domain in pg_basebackup
For some reason, we've been overlooking the fact that pg_receivexlog
and pg_recvlogical are using wrong translation domains all along,
so their output hasn't ever been translated. The right domain is
pg_basebackup, not their own executable names.
Noticed by Ioseph Kim, who's been working on the Korean translation.
Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.
Alvaro Herrera [Sun, 27 Dec 2015 16:03:19 +0000 (13:03 -0300)]
Add forgotten CHECK_FOR_INTERRUPTS calls in pgcrypto's crypt()
Both the Blowfish and DES implementations of crypt() can take an arbitrarily
long time, depending on the number of rounds specified by the caller;
make sure they can be interrupted.
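For instance, a deliberately expensive bcrypt call such as the following can now
be cancelled partway through (cost factor chosen only for illustration):
    SELECT crypt('secret', gen_salt('bf', 13));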
Tom Lane [Sat, 26 Dec 2015 18:41:29 +0000 (13:41 -0500)]
Include typmod when complaining about inherited column type mismatches.
MergeAttributes() rejects cases where columns to be merged have the same
type but different typmod, which is correct; but the error message it
printed didn't show either typmod, which is unhelpful. Changing this
requires using format_type_with_typemod() in place of TypeNameToString(),
which will have some minor side effects on the way some type names are
printed, but on balance this is an improvement: the old code sometimes
printed one type according to one set of rules and the other type according
to the other set, which could be confusing in its own way.
Oddly, there were no regression test cases covering any of this behavior,
so add some.
Tom Lane [Sat, 26 Dec 2015 17:56:09 +0000 (12:56 -0500)]
Fix brin_summarize_new_values() to check index type and ownership.
brin_summarize_new_values() did not check that the passed OID was for
an index at all, much less that it was a BRIN index, and would fail in
obscure ways if it wasn't (possibly damaging data first?). It also
lacked any permissions test; by analogy to VACUUM, we should only allow
the table's owner to summarize.
Noted by Jeff Janes, fix by Michael Paquier and me
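Illustrative call, now restricted to the table's owner and to genuine BRIN indexes
(index name hypothetical):
    SELECT brin_summarize_new_values('my_brin_idx'::regclass);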