Tom Lane [Thu, 26 Jun 2014 23:22:15 +0000 (16:22 -0700)]
Get rid of bogus separate pg_proc entries for json_extract_path operators.
These should not have existed to begin with, but there was apparently some
misunderstanding of the purpose of the opr_sanity regression test item
that checks for operator implementation functions with their own comments.
The idea there is to check for unintentional violations of the rule that
operator implementation functions shouldn't be documented separately
... but for these functions, that is in fact what we want, since the
variadic option is useful and not accessible via the operator syntax.
Get rid of the extra pg_proc entries and fix the regression test and
documentation to be explicit about what we're doing here.
Tom Lane [Thu, 26 Jun 2014 17:40:50 +0000 (10:40 -0700)]
Forward-patch regression test for "could not find pathkey item to sort".
Commit a87c729153e372f3731689a7be007bc2b53f1410 already fixed the bug this
is checking for, but the regression test case it added didn't cover this
scenario. Since we managed to miss the fact that there was a bug at all,
it seems like a good idea to propagate the extra test case forward to HEAD.
Fujii Masao [Thu, 26 Jun 2014 05:27:27 +0000 (14:27 +0900)]
Remove obsolete example of CSV log file name from log_filename document.
7380b63 changed log_filename so that the epoch was no longer appended to it
when no format specifier is given. But an example of a CSV log file name
with the epoch was still left in the log_filename documentation. This
commit removes that obsolete example.
This commit also documents the defaults of log_directory and
log_filename.
Tom Lane [Wed, 25 Jun 2014 22:25:22 +0000 (15:25 -0700)]
Rationalize error messages within jsonfuncs.c.
I noticed that the functions in jsonfuncs.c sometimes printed error
messages that claimed I'd called some other function. Investigation showed
that this was from repurposing code into "worker" functions without taking
much care as to whether it would mention the right SQL-level function if it
threw an error. Moreover, there was a weird mishmash of messages that
contained a fixed function name, messages that used %s for a function name,
and messages that constructed a function name out of spare parts, like
"json%s_populate_record" (which, quite aside from being ugly as sin, wasn't
even sufficient to cover all the cases). This would put an undue burden on
our long-suffering translators. Standardize on inserting the SQL function
name with %s so as to reduce the number of translatable strings, and pass
function names around as needed to make sure we can report the right one.
Fix up some gratuitous variations in wording, too.
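A minimal sketch of the pattern, in the style of the jsonfuncs.c code (the
worker function and message text here are illustrative, not the actual
source):

    #include "postgres.h"

    /* The SQL-visible function name is passed down to the worker so an
     * error blames the function the user actually called; using %s keeps
     * the message a single translatable string. */
    static void
    populate_worker_sketch(const char *funcname)
    {
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("cannot call %s on a scalar", funcname)));
    }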
Tom Lane [Wed, 25 Jun 2014 18:22:18 +0000 (11:22 -0700)]
Cosmetic improvements in jsonfuncs.c.
Re-pgindent, remove a lot of random vertical whitespace, remove useless
(if not counterproductive) inline markings, get rid of unnecessary
zero-padding of strings for hashtable searches. No functional changes.
Tom Lane [Wed, 25 Jun 2014 04:22:40 +0000 (21:22 -0700)]
Fix handling of nested JSON objects in json_populate_recordset and friends.
populate_recordset_object_start() improperly created a new hash table
(overwriting the link to the existing one) if called at nest levels
greater than one. This resulted in previous fields not appearing in
the final output, as reported by Matti Hameister in bug #10728.
In 9.4 the problem also affects json_to_recordset.
This perhaps missed detection earlier because the default behavior is to
throw an error for nested objects: you have to pass use_json_as_text = true
to see the problem.
In addition, fix query-lifespan leakage of the hashtable created by
json_populate_record(). This is pretty much the same problem recently
fixed in dblink: creating an intended-to-be-temporary context underneath
the executor's per-tuple context isn't enough to make it go away at the
end of the tuple cycle, because MemoryContextReset is not
MemoryContextResetAndDeleteChildren.
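A sketch of that distinction, assuming the 9.x memory-context API (function
and context names are illustrative):

    #include "postgres.h"
    #include "utils/memutils.h"

    void
    tuple_cycle_sketch(MemoryContext per_tuple_ctx)
    {
        MemoryContext tmp_ctx;

        /* A child context created under the per-tuple context ... */
        tmp_ctx = AllocSetContextCreate(per_tuple_ctx,
                                        "temporary hash context",
                                        ALLOCSET_DEFAULT_MINSIZE,
                                        ALLOCSET_DEFAULT_INITSIZE,
                                        ALLOCSET_DEFAULT_MAXSIZE);
        (void) tmp_ctx;

        /* ... survives a plain reset of its parent at end of tuple ... */
        MemoryContextReset(per_tuple_ctx);

        /* ... and only goes away when children are deleted as well. */
        MemoryContextResetAndDeleteChildren(per_tuple_ctx);
    }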
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
was still possible with default_with_oids=true. But the rest of the system,
including pg_dump, isn't prepared to handle foreign tables with OIDs
properly.
Backpatch down to 9.1, where foreign tables were introduced. It's possible
that there are databases out there that already have foreign tables with
OIDs. There isn't much we can do about that, but at least we can prevent
them from being created in the future.
Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
Robert Haas [Tue, 24 Jun 2014 01:45:21 +0000 (21:45 -0400)]
Check for interrupts during tuple-insertion loops.
Normally, this won't matter too much; but if I/O is really slow, for
example because the system is overloaded, we might write many pages
before checking for interrupts. A single toast insertion might
write up to 1GB of data, and a multi-insert could write hundreds
of tuples (and their corresponding TOAST data).
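A sketch of the pattern, with the insertion itself elided:

    #include "postgres.h"
    #include "miscadmin.h"

    void
    insert_many_sketch(int ntuples)
    {
        int     i;

        for (i = 0; i < ntuples; i++)
        {
            /* Stay responsive to query cancel/terminate even when each
             * write stalls on slow or overloaded I/O. */
            CHECK_FOR_INTERRUPTS();

            /* ... insert one tuple (and its TOAST data) here ... */
        }
    }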
Improve tab-completion of DROP and ALTER ENABLE/DISABLE on triggers and rules.
At "DROP RULE/TRIGGER triggername ON ...", tab-complete tables that have
a rule/trigger with that name.
At "ALTER TABLE tablename ENABLE/DISABLE TRIGGER/RULE ...", tab-complete to
rules/triggers on that table. Previously, we would tab-complete to all
rules or triggers, not just those that are on that table.
Also, filter out internal RI triggers from the list. You can't DROP them,
and enabling/disabling them is such a rare (and dangerous) operation that
it seems better to hide them.
Kevin Grittner [Sat, 21 Jun 2014 14:17:04 +0000 (09:17 -0500)]
Fix documentation template for CREATE TRIGGER.
By using curly braces, the template had specified that one of
"NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"
was required in any CREATE TRIGGER statement, which is not
accurate. Changing to square brackets makes it optional.
Tom Lane [Fri, 20 Jun 2014 22:20:56 +0000 (18:20 -0400)]
Add Asserts to verify that catalog cache keys are unique and not null.
The catcache code is effectively assuming this already, so let's insist
that the catalog and index are actually declared that way.
Having done that, the comments in indexing.h about non-unique indexes
not being used for catcaches are completely redundant, not just mostly
so; and we didn't have such a comment for every such index anyway. So
let's get rid of them.
Per discussion of whether we should identify primary keys for catalogs.
We might or might not take that further step, but this change in itself
will allow quicker detection of misdeclared catcaches, so it seems worth
doing in any case.
Joe Conway [Fri, 20 Jun 2014 19:22:13 +0000 (12:22 -0700)]
Clean up data conversion short-lived memory context.
dblink uses a short-lived data conversion memory context. However it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.
Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
Andres Freund [Fri, 20 Jun 2014 09:06:42 +0000 (11:06 +0200)]
Do all-visible handling in lazy_vacuum_page() outside its critical section.
Since fdf9e21196a, lazy_vacuum_page() rechecks the all-visible status
of pages in the second pass over the heap. It does so inside a
critical section, but both visibilitymap_test() and
heap_page_is_all_visible() perform operations that should not happen
inside one: the former potentially performs I/O, and both potentially
allocate memory.
To fix, simply move all the all-visible handling outside the critical
section. Doing so means that the PD_ALL_VISIBLE flag on the page won't
be included in the full-page image of the HEAP2_CLEAN record anymore.
But that's fine; the flag will be set by the HEAP2_VISIBLE record
logged later.
Backpatch to 9.3 where the problem was introduced. The bug only came
to light due to the assertion added in 4a170ee9 and isn't likely to
cause problems in production scenarios. The worst outcome is an
avoidable PANIC restart.
This also gets rid of the difference in the order of operations
between master and standby mentioned in 2a8e1ac5.
Per reports from David Leverton and Keith Fiske in bug #10533.
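A sketch of the reordering, with the all-visible check behind a
hypothetical stand-in function:

    #include "postgres.h"
    #include "miscadmin.h"

    extern bool check_all_visible(void);    /* hypothetical stand-in */

    void
    vacuum_page_sketch(void)
    {
        bool    all_visible;

        /* Work that may do I/O or allocate memory happens first ... */
        all_visible = check_all_visible();
        (void) all_visible;

        /* ... because inside a critical section any ERROR is a PANIC. */
        START_CRIT_SECTION();
        /* modify the page and emit the WAL record here */
        END_CRIT_SECTION();
    }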
Andres Freund [Fri, 20 Jun 2014 09:06:42 +0000 (11:06 +0200)]
Don't allow backend assertions to be disabled via the debug_assertions GUC.
The existence of the assert_enabled variable (backing the
debug_assertions GUC) reduced the amount of knowledge some static code
checkers (like Coverity and various compilers) could infer from the
existence of the assertion. That could have been solved by optionally
removing the assert_enabled variable from the Assert() et al. macros
at compile time when some special macro is defined, but the resulting
complication doesn't seem to be worth the gain from having
debug_assertions. Recompiling is fast enough.
The debug_assertions GUC is still available, but read-only, as it's
useful when diagnosing problems. The command-line/client startup option
-A, which previously also allowed enabling/disabling assertions, has
been removed as it no longer serves a purpose.
While at it, reduce code duplication in bufmgr.c and localbuf.c
assertions checking for spurious buffer pins. That code had to be
reindented anyway to cope with the assert_enabled removal.
Tom Lane [Fri, 20 Jun 2014 02:13:41 +0000 (22:13 -0400)]
Avoid leaking memory while evaluating arguments for a table function.
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
in the query-lifespan memory context. This is insignificant in simple
cases where the function relation is scanned only once; but if the function
is in a sub-SELECT or is on the inside of a nested loop, any memory
consumed during argument evaluation can add up quickly. (The potential for
trouble here had been foreseen long ago, per existing comments; but we'd
not previously seen a complaint from the field about it.) To fix, create
an additional temporary context just for this purpose.
Per an example from MauMau. Back-patch to all active branches.
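A sketch of the fix's shape, assuming a dedicated context has already been
created (names are illustrative):

    #include "postgres.h"
    #include "utils/memutils.h"

    void
    eval_args_sketch(MemoryContext argContext)
    {
        MemoryContext oldcontext;

        oldcontext = MemoryContextSwitchTo(argContext);
        /* ... evaluate the function's arguments here; allocations now
         * land in argContext, not the query-lifespan context ... */
        MemoryContextSwitchTo(oldcontext);

        /* Reclaim the per-call garbage on every rescan. */
        MemoryContextReset(argContext);
    }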
Noah Misch [Fri, 20 Jun 2014 01:41:26 +0000 (21:41 -0400)]
Let installcheck-world pass against a server requiring a password.
Give passwords to each user created in support of an ECPG connection
test case. Use SET SESSION AUTHORIZATION, not a fresh connection, to
reduce privileges during a dblink test case.
To test against such a server, both the "make installcheck-world"
environment and the postmaster environment must provide the default
user's password; $PGPASSFILE is the principal way to do so. (The
postmaster environment needs it for dblink and postgres_fdw tests.)
Kevin Grittner [Thu, 19 Jun 2014 13:40:37 +0000 (08:40 -0500)]
Fix calculation of PREDICATELOCK_MANAGER_LWLOCK_OFFSET.
Commit ea9df812d8502fff74e7bc37d61bdc7d66d77a7f failed to include
NUM_BUFFER_PARTITIONS in this offset, resulting in a bad offset.
Ultimately this threw off NUM_FIXED_LWLOCKS which is based on
earlier offsets, leading to memory allocation problems. It seems
likely to have also caused increased LWLOCK contention when
serializable transactions were used, because lightweight locks used
for that overlapped others.
Reported by Amit Kapila with analysis and fix.
Backpatch to 9.4, where the bug was introduced.
Fujii Masao [Thu, 19 Jun 2014 11:31:20 +0000 (20:31 +0900)]
Don't allow data_directory to be set in postgresql.auto.conf by ALTER SYSTEM.
Previously, data_directory could be set in both postgresql.conf and
postgresql.auto.conf. This could cause problematic situations such as a
circular definition. To avoid them, this commit forbids setting
data_directory in postgresql.auto.conf.
Backpatch this to 9.4 where the ALTER SYSTEM command was introduced.
Amit Kapila, reviewed by Abhijit Menon-Sen, with minor adjustments by me.
Tom Lane [Thu, 19 Jun 2014 00:12:47 +0000 (20:12 -0400)]
Improve our mechanism for controlling the Linux out-of-memory killer.
Arrange for postmaster child processes to respond to two environment
variables, PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE, to determine whether
they reset their OOM score adjustments and if so to what. This is superior
to the previous design involving #ifdef's in several ways. The behavior is
now available in a default build, and both ends of the adjustment --- the
original adjustment of the postmaster's level and the subsequent
readjustment by child processes --- can now be controlled in one place,
namely the postmaster launch script. So it's no longer necessary for the
launch script to act on faith that the server was compiled with the
appropriate options. In addition, if someone wants to use an OOM score
other than zero for the child processes, that doesn't take a recompile
anymore; and we no longer have to cater separately to the two different
historical kernel APIs for this adjustment.
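A sketch of the child-process side, under the behavior described above
(the default adjustment is zero; error handling is minimal):

    #include <stdio.h>
    #include <stdlib.h>

    static void
    reset_oom_score_sketch(void)
    {
        const char *file = getenv("PG_OOM_ADJUST_FILE");
        const char *val = getenv("PG_OOM_ADJUST_VALUE");
        FILE       *fp;

        if (file == NULL)
            return;                 /* adjustment not requested */
        if (val == NULL)
            val = "0";              /* default: reset to zero */

        fp = fopen(file, "w");      /* e.g. /proc/self/oom_score_adj */
        if (fp != NULL)
        {
            fprintf(fp, "%s\n", val);
            fclose(fp);
        }
    }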
Tom Lane [Wed, 18 Jun 2014 17:22:25 +0000 (13:22 -0400)]
Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ...
This SQL-standard feature allows a sub-SELECT yielding multiple columns
(but only one row) to be used to compute the new values of several columns
to be updated. While the same results can be had with an independent
sub-SELECT per column, such a workaround can require a great deal of
duplicated computation.
The standard actually says that the source for a multi-column assignment
could be any row-valued expression. The implementation used here is
tightly tied to our existing sub-SELECT support and can't handle other
cases; the Bison grammar would have some issues with them too. However,
I don't feel too bad about this since other cases can be converted into
sub-SELECTs. For instance, "SET (a,b,c) = row_valued_function(x)" could
be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
Noah Misch [Wed, 18 Jun 2014 13:21:50 +0000 (09:21 -0400)]
Fix the MSVC build process for uuid-ossp.
Catch up with commit b8cc8f94730610c0189aa82dfec4ae6ce9b13e34's
introduction of the HAVE_UUID_OSSP symbol to the principal build
process. Back-patch to 9.4, where that commit appeared.
Tom Lane [Mon, 16 Jun 2014 19:55:05 +0000 (15:55 -0400)]
Avoid recursion when processing simple lists of AND'ed or OR'ed clauses.
Since most of the system thinks AND and OR are N-argument expressions
anyway, let's have the grammar generate a representation of that form when
dealing with input like "x AND y AND z AND ...", rather than generating
a deeply-nested binary tree that just has to be flattened later by the
planner. This avoids stack overflow in parse analysis when dealing with
queries having more than a few thousand such clauses; and in any case it
removes some rather unsightly inconsistencies, since some parts of parse
analysis were generating N-argument ANDs/ORs already.
It's still possible to get a stack overflow with weirdly parenthesized
input, such as "x AND (y AND (z AND ( ... )))", but such cases are not
mainstream usage. The maximum depth of parenthesization is already
limited by Bison's stack in such cases, anyway, so that the limit is
probably fairly platform-independent.
Patch originally by Gurjeet Singh, heavily revised by me.
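A sketch of the flat representation, using the existing node-building
helpers:

    #include "postgres.h"
    #include "nodes/makefuncs.h"
    #include "nodes/pg_list.h"

    /* One three-argument AND instead of AND(AND(x, y), z). */
    Expr *
    make_flat_and_sketch(Expr *x, Expr *y, Expr *z)
    {
        return makeBoolExpr(AND_EXPR, list_make3(x, y, z), -1);
    }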
Noah Misch [Sat, 14 Jun 2014 13:41:13 +0000 (09:41 -0400)]
Secure Unix-domain sockets of "make check" temporary clusters.
Any OS user able to access the socket can connect as the bootstrap
superuser and proceed to execute arbitrary code as the OS user running
the test. Protect against that by placing the socket in a temporary,
mode-0700 subdirectory of /tmp. The pg_regress-based test suites and
the pg_upgrade test suite were vulnerable; the $(prove_check)-based test
suites were already secure. Back-patch to 8.4 (all supported versions).
The hazard remains wherever the temporary cluster accepts TCP
connections, notably on Windows.
As a convenient side effect, this lets testing proceed smoothly in
builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values
like /var/run/postgresql are often unwritable to the build user.
Noah Misch [Sat, 14 Jun 2014 13:41:13 +0000 (09:41 -0400)]
Add mkdtemp() to libpgport.
This function is pervasive on free software operating systems; import
NetBSD's implementation. Back-patch to 8.4, like the commit that will
harness it.
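A minimal standalone usage sketch of the POSIX API:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        /* The trailing XXXXXX is replaced with a unique name; the
         * directory is created with mode 0700. */
        char    template[] = "/tmp/pg_sockets-XXXXXX";

        if (mkdtemp(template) == NULL)
        {
            perror("mkdtemp");
            return 1;
        }
        printf("private socket directory: %s\n", template);
        return 0;
    }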
Tom Lane [Fri, 13 Jun 2014 04:02:56 +0000 (00:02 -0400)]
Improve predtest.c's ability to reason about operator expressions.
We have for a long time been able to prove implications and refutations
between clauses structured like "expr op const" with the same subexpression
and btree-related operators; for example that "x < 4" implies "x <= 5".
The implication machinery is needed to detect usability of partial indexes,
and the refutation machinery is needed to implement constraint exclusion.
This patch extends that machinery to make proofs for operator expressions
involving the same two immutable-but-not-necessarily-just-Const input
expressions, ie does "expr1 op1 expr2" prove or refute "expr1 op2 expr2" or
"expr2 op2 expr1"? An important example is that we can now prove "x = y"
given "y = x", which formerly the code could not deduce unless x or y was a
constant. We can make use of the system's knowledge of operator commutator
and negator pairs, and can also make use of btree opclass relationships,
for example "x < y" implies "x <= y" and refutes "x > y" (notice that
neither of these could be proven just from commutator or negator links).
Inspired by a gripe from Brian Dunavant. This seems more like a new
feature than a bug fix, though, so no back-patch.
Tom Lane [Fri, 13 Jun 2014 00:14:32 +0000 (20:14 -0400)]
Fix pg_restore's processing of old-style BLOB COMMENTS data.
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
of COMMENT commands into a single BLOB COMMENTS archive object. With
sufficiently many such comments, some of the commands would likely get
split across bufferloads when restoring, causing failures in
direct-to-database restores (though no problem would be evident in text
output). This is the same type of issue we have with table data dumped as
INSERT commands, and it can be fixed in the same way, by using a mini SQL
lexer to figure out where the command boundaries are. Fortunately, the
COMMENT commands are no more complex to lex than INSERTs, so we can just
re-use the existing lexer for INSERTs.
Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
Tom Lane [Thu, 12 Jun 2014 22:59:06 +0000 (18:59 -0400)]
Improve tuplestore's error messages for I/O failures.
We should report the errno when we get a failure from functions like
BufFileWrite. "ERROR: write failed" is unreasonably taciturn for a
case that's well within the realm of possibility; I've seen it a
couple times in the buildfarm recently, in situations that were
probably out-of-disk-space, but it'd be good to see the errno
to confirm it.
I think this code was originally written without assuming that
the buffile.c functions would return useful errno; but most other
callers *are* assuming that, and a quick look at the buffile code
gives no reason to suppose otherwise.
Also, a couple of the old messages were phrased on the assumption
that a short read might indicate a logic bug in tuplestore itself;
but that code's pretty well tested by now, so a filesystem-level
problem seems much more likely.
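A sketch of the improved reporting, assuming the 9.x buffile API and that
buffile.c leaves a useful errno behind (message text is illustrative):

    #include "postgres.h"
    #include "storage/buffile.h"

    void
    write_block_sketch(BufFile *file, void *ptr, size_t nbytes)
    {
        if (BufFileWrite(file, ptr, nbytes) != nbytes)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     /* %m appends the strerror() text for errno */
                     errmsg("could not write to tuplestore temporary file: %m")));
    }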
Tom Lane [Thu, 12 Jun 2014 21:51:47 +0000 (17:51 -0400)]
Adjust largeobject regression test to leave a couple of LOs behind.
Since we commonly test pg_dump/pg_restore by seeing whether they can dump
and restore the regression test database, it behooves us to include some
large objects in that test scenario.
I tried to include a comment on one of these large objects to improve
the test scenario further ... but it turns out that pg_upgrade fails to
preserve comments on large objects, and its regression test notices
the discrepancy. So uncommenting that COMMENT is a TODO for later.
Tom Lane [Thu, 12 Jun 2014 20:51:02 +0000 (16:51 -0400)]
Remove inadvertent copyright violation in largeobject regression test.
Robert Frost is no longer with us, but his copyrights still are, so
let's stop using "Stopping by Woods on a Snowy Evening" as test data
before somebody decides to sue us. Wordsworth is more safely dead.
Tom Lane [Thu, 12 Jun 2014 19:54:13 +0000 (15:54 -0400)]
Add regression test to prevent future breakage of legacy query in libpq.
Memorialize the expected output of the query that libpq has been using for
many years to get the OIDs of large-object support functions. Although
we really ought to change the way libpq does this, we must expect that
this query will remain in use in the field for the foreseeable future,
so until we're ready to break compatibility with old libpq versions
we'd better check the results stay the same. See the recent lo_create()
fiasco.
Tom Lane [Thu, 12 Jun 2014 19:39:09 +0000 (15:39 -0400)]
Rename lo_create(oid, bytea) to lo_from_bytea().
The previous naming broke the query that libpq's lo_initialize() uses
to collect the OIDs of the server-side functions it requires, because
that query effectively assumes that there is only one function named
lo_create in the pg_catalog schema (and likewise only one lo_open, etc).
While we should certainly make libpq more robust about this, the naive
query will remain in use in the field for the foreseeable future, so it
seems the only workable choice is to use a different name for the new
function. lo_from_bytea() won a small straw poll.
Back-patch into 9.4 where the new function was introduced.
Tom Lane [Thu, 12 Jun 2014 17:12:53 +0000 (13:12 -0400)]
Remove unnecessary output expressions from unflattened subqueries.
If a sub-select-in-FROM gets flattened into the upper query, then we
naturally get rid of any output columns that are defined in the sub-select
text but not actually used in the upper query. However, this doesn't
happen when it's not possible to flatten the subquery, for example because
it contains GROUP BY, LIMIT, etc. Allowing the subquery to compute useless
output columns is often fairly harmless, but sometimes it has significant
performance cost: the unused output might be an expensive expression,
or it might be a Var from a relation that we could remove entirely (via
the join-removal logic) if only we realized that we didn't really need
that Var. Situations like this are common when expanding views, so it
seems worth taking the trouble to detect and remove unused outputs.
Because the upper query's Var numbering for subquery references depends on
positions in the subquery targetlist, we don't want to renumber the items
we leave behind. Instead, we can implement "removal" by replacing the
unwanted expressions with simple NULL constants. This wastes a few cycles
at runtime, but not enough to justify more work in the planner.
Andres Freund [Thu, 12 Jun 2014 11:23:46 +0000 (13:23 +0200)]
Consistency improvements for slot and decoding code.
Change the order of checks in similar functions to be the same; remove
a parameter that's not needed anymore; rename a memory context and
expand a couple of comments.
Noah Misch [Wed, 11 Jun 2014 23:50:57 +0000 (19:50 -0400)]
Have configuration templates augment, not replace, LDFLAGS.
This preserves user-specified LDFLAGS; we already kept user-specified
CFLAGS and CPPFLAGS. Given the shortage of complaints and the fact that
any problem caused is likely to appear at build time, no back-patch.
Noah Misch [Wed, 11 Jun 2014 23:50:41 +0000 (19:50 -0400)]
Consistently define BUILDING_DLL during builds of src/port for Windows.
The MSVC build process already did so; this fixes the principal build
process to match. Both processes already did likewise for src/common.
This lets server builds of src/port reference postgres.exe data symbols.
Tom Lane [Wed, 11 Jun 2014 02:48:16 +0000 (22:48 -0400)]
Fix ancient encoding error in hungarian.stop.
When we grabbed this file off the Snowball project's website, we mistakenly
supposed that it was in LATIN1 encoding, but evidently it was actually in
LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in
LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in
bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too,
but there aren't any of those in the file. Other characters used in the
file have the same codes in LATIN1 and LATIN2, which no doubt helped hide
the problem for so long.
The error is not only ours: the Snowball project also was confused about
which encoding is required for Hungarian. But dealing with that will
require source-code changes that I'm not at all sure we'll wish to
back-patch. Fixing the stopword file seems reasonably safe to back-patch
however.
Tom Lane [Tue, 10 Jun 2014 01:37:18 +0000 (21:37 -0400)]
Forward-port regression test for bug #10587 into 9.3 and HEAD.
Although this bug is already fixed in post-9.2 branches, the case
triggering it is quite different from what was under consideration
at the time. It seems worth memorializing this example in HEAD
just to make sure it doesn't get broken again in future.
Tom Lane [Mon, 9 Jun 2014 20:30:40 +0000 (16:30 -0400)]
Fix infinite loop when splitting inner tuples in SPGiST text indexes.
Previously, the code used a node label of zero both for strings that
contain no bytes beyond the inner tuple's prefix, and for cases where an
"allTheSame" inner tuple has to be split to allow a string with a different
next byte to be inserted into it. Failing to distinguish these cases meant
that if a string ending with the current prefix needed to be inserted into
an allTheSame tuple, we got into an infinite loop, because after splitting
the tuple we'd descend into the child allTheSame tuple and then find we
need to split again.
To fix, instead use -1 and -2 as the node labels for these two cases.
This requires widening the node label type from "char" to int2, but
fortunately SPGiST stores all pass-by-value node label types in their
Datum representation, which means that this change is transparently upward
compatible so far as the on-disk representation goes. We continue to
recognize zero as a dummy node label for reading purposes, but will not
attempt to push new index entries down into such a label, so that the loop
won't occur even when dealing with an existing index.
Per report from Teodor Sigaev. Back-patch to 9.2 where the faulty
code was introduced.
Alvaro Herrera [Mon, 9 Jun 2014 19:17:23 +0000 (15:17 -0400)]
Wrap multixact/members correctly during extension, take 2
In a50d97625497b7 I already changed this, but got it wrong for the case
where the number of members is larger than the number of entries that
fit in the last page of the last segment.
As reported by Serge Negodyuck in a followup to bug #8673.
Andres Freund [Thu, 5 Jun 2014 16:27:11 +0000 (18:27 +0200)]
Fix off-by-one in decoding causing one-record events to be skipped.
A ReorderBufferTransaction's end_lsn, the sentPtr advocated by
walsender keepalive messages, and the end location remembered by the
decoding get_*changes* SQL functions all use the location of the last
read record + 1. I.e. the LSN points to the beginning of the next
record. That cannot realistically be changed without changing the
replication protocol because that's how keepalive messages have worked
since 9.0.
The bug is that the logic inside the snapshot builder, which decides
whether a transaction's contents should be decoded, assumed the start
location would point towards the last byte of the last record. The
reason this didn't actually cause visible problems is that currently
that decision is only made for commit records. Since interesting
transactions always have at least one additional record - containing
actual data - we'd never skip a transaction.
But if there ever were transactions, or other events, with just one
record containing important information, we'd skip them after stopping
and restarting logical decoding.
Tom Lane [Thu, 5 Jun 2014 15:31:06 +0000 (11:31 -0400)]
Add defenses against running with a wrong selection of LOBLKSIZE.
It's critical that the backend's idea of LOBLKSIZE match the way data has
actually been divided up in pg_largeobject. While we don't provide any
direct way to adjust that value, doing so is a one-line source code change
and various people have expressed interest recently in changing it. So,
just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in
pg_control and cross-check that the backend's compiled-in setting matches
the on-disk data.
Also tweak the code in inv_api.c so that fetches from pg_largeobject
explicitly verify that the length of the data field is not more than
LOBLKSIZE. Formerly we just had Asserts() for that, which is no protection
at all in production builds. In some of the call sites an overlength data
value would translate directly to a security-relevant stack clobber, so it
seems worth one extra runtime comparison to be sure.
In the back branches, we can't change the contents of pg_control; but we
can still make the extra checks in inv_api.c, which will offer some amount
of protection against running with the wrong value of LOBLKSIZE.
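A sketch of the runtime check, which unlike an Assert() survives into
production builds (the surrounding fetch logic is elided):

    #include "postgres.h"
    #include "storage/large_object.h"

    void
    check_chunk_len_sketch(int pageno, int len)
    {
        if (len < 0 || len > LOBLKSIZE)
            elog(ERROR, "pg_largeobject chunk %d has invalid length %d",
                 pageno, len);
    }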
Andres Freund [Thu, 5 Jun 2014 14:29:20 +0000 (16:29 +0200)]
Consistently spell a replication slot's name as slot_name.
Previously there's been a mix between 'slotname' and 'slot_name'. It's
not nice to be unnecessarily inconsistent in a new feature. As a
post-beta1 initdb is now required anyway in the wake of eeca4cd35e, fix
the inconsistencies.
Most of the changes won't affect usage of replication slots because the
majority of the changes are around function parameter names. The
prominent exception is that the recovery.conf parameter
'primary_slotname' is now named 'primary_slot_name'.
Andres Freund [Thu, 5 Jun 2014 11:54:16 +0000 (13:54 +0200)]
Move regression test listing of builtin leakproof functions to opr_sanity.sql.
The original location in create_function_3.sql didn't invite the close
scrutiny warranted for adding new leakproof functions. Add comments
to the test explaining that functions should only be added after
careful consideration and understanding of what a leakproof function is.
Adjust SP-GiST WAL record formats to reduce alignment padding.
The way the code was written, the padding was copied from uninitialized
memory areas. Because the structs are local variables in the code where
the WAL records are constructed, making them larger and zeroing the
padding bytes would not make the code very pretty, so rather than fixing
this directly by zeroing out the padding bytes, it seems clearer not to
try to align the tuples in the WAL records. The redo functions are
taught to copy the tuple header to a local variable to avoid unaligned
access.
Stable-branches have the same problem, but we can't change the WAL format
there, so fix in master only. Reading a few random extra bytes at the stack
is harmless in practice, so it's not worth crafting a different
back-patchable fix.
Per reports from Kevin Grittner and Andres Freund, using clang static
analyzer and Valgrind, respectively.
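A sketch of the redo-side technique (field use is illustrative):

    #include "postgres.h"
    #include "access/spgist_private.h"

    void
    redo_read_sketch(char *recdata)
    {
        SpGistInnerTupleData tupHdr;

        /* Copy the possibly-unaligned header into an aligned local
         * before reading any of its fields. */
        memcpy(&tupHdr, recdata, sizeof(SpGistInnerTupleData));
        /* ... fields such as tupHdr.nNodes can now be read safely ... */
        (void) tupHdr;
    }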
Tom Lane [Thu, 5 Jun 2014 01:31:41 +0000 (21:31 -0400)]
Tweak new regression test case for better portability.
Buildfarm says we get different plans on 32-bit and 64-bit platforms,
probably because of MAXALIGN-related differences in memory-consumption
calculations. Add some dummy WHERE clauses so that the planner estimates
different sizes for the three generate_series() relations; that should
stabilize the choice of join order.
Tom Lane [Thu, 5 Jun 2014 00:45:56 +0000 (20:45 -0400)]
Add btree and hash opclasses for pg_lsn.
This is needed to allow ORDER BY, DISTINCT, etc to work as expected for
pg_lsn values.
We had previously decided to put this off for 9.5, but in view of commit eeca4cd35e284c72b2ea1b4494e64e7738896e81 there's no reason to avoid a
catversion bump for 9.4beta2, and this does make a pretty significant
usability difference for pg_lsn.
Michael Paquier, with fixes from Andres Freund and Tom Lane
Andres Freund [Wed, 4 Jun 2014 19:36:19 +0000 (21:36 +0200)]
Fix longstanding bug in HeapTupleSatisfiesVacuum().
HeapTupleSatisfiesVacuum() didn't properly discern between
DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been
inserted in the current transaction and deleted in an aborted
subtransaction of the current backend. At the very least that caused
problems for CLUSTER and CREATE INDEX in transactions that had
aborting subtransactions producing rows, leading to warnings like:
WARNING: concurrent delete in progress within table "..."
possibly in an endless, uninterruptible, loop.
Instead of treating *InProgress xmins the same as *IsCurrent ones,
treat them as being distinct, like the other visibility routines do. As
implemented, this separation can cause a behaviour change for rows
that have been inserted and deleted in another, still running,
transaction: HTSV will now return INSERT_IN_PROGRESS instead of
DELETE_IN_PROGRESS for those. That's both more in line with the other
visibility routines and arguably more correct, the latter because an
INSERT_IN_PROGRESS result will make callers look at/wait for xmin,
instead of xmax.
The only current caller where that's possibly worse than the old
behaviour is heap_prune_chain() which now won't mark the page as
prunable if a row has concurrently been inserted and deleted. That's
harmless enough.
As a cautionary measure, also insert an interrupt check before the
gotos in IndexBuildHeapScan() that lead to the uninterruptible loop.
There are other possible causes of repeated loops, like a row that
several sessions try to update and all fail, and the cost of doing the
check in the retry case is low.
As this bug goes back all the way to the introduction of
subtransactions in 573a71a5da, backpatch to all supported releases.
Fujii Masao [Wed, 4 Jun 2014 03:09:45 +0000 (12:09 +0900)]
Save pg_stat_statements statistics file into $PGDATA/pg_stat directory at shutdown.
187492b6c2e8cafc5b39063ca3b67846e8155d24 changed pgstat.c so that
the stats files were saved into the $PGDATA/pg_stat directory when the
server was shut down. But it accidentally forgot to change the location
of the pg_stat_statements permanent stats file. This commit fixes
pg_stat_statements so that its stats file is also saved into
$PGDATA/pg_stat at shutdown.
Since this fix changes the file layout, we don't back-patch it to 9.3
where this oversight was introduced.
Andrew Dunstan [Tue, 3 Jun 2014 22:26:47 +0000 (18:26 -0400)]
Use EncodeDateTime instead of to_char to render JSON timestamps.
Per gripe from Peter Eisentraut and Tom Lane.
The output is slightly different, but still ISO 8601 compliant: to_char
doesn't output the minutes when the time zone offset is an integer
number of hours, while EncodeDateTime outputs ":00".
Andrew Dunstan [Tue, 3 Jun 2014 20:11:31 +0000 (16:11 -0400)]
Do not escape a unicode sequence when escaping JSON text.
Previously, any backslash in text being escaped for JSON was doubled so
that the result was still valid JSON. However, this led to some perverse
results in the case of Unicode sequences. These are now detected, and
the initial backslash is no longer escaped. All other backslashes are
still escaped. No validity check is performed; all that is looked for is
\uXXXX where X is a hexadecimal digit.
This is a change from the 9.2 and 9.3 behaviour, as noted in the release
notes.
Andrew Dunstan [Tue, 3 Jun 2014 17:56:53 +0000 (13:56 -0400)]
Output timestamps in ISO 8601 format when rendering JSON.
Many JSON processors require timestamp strings in ISO 8601 format in
order to convert the strings. When converting a timestamp, with or
without timezone, to a JSON datum we therefore now use such a format
rather than the type's default text output, in functions such as
to_json().
This is a change in behaviour from 9.2 and 9.3, as noted in the release
notes.
Tom Lane [Tue, 3 Jun 2014 16:01:27 +0000 (12:01 -0400)]
Make plpython_unicode regression test work in more database encodings.
This test previously used a data value containing U+0080, and would
therefore fail if the database encoding didn't have an equivalent to
that; which only about half of our supported server encodings do.
We could fall back to using some plain-ASCII character, but that seems
like it's losing most of the point of the test. Instead switch to using
U+00A0 (no-break space), which translates into all our supported encodings
except the four in the EUC_xx family.
Per buildfarm testing. Back-patch to 9.1, which is as far back as this
test is expected to succeed everywhere. (9.0 has the test, but without
back-patching some 9.1 code changes we could not expect to get consistent
results across platforms anyway.)
Andres Freund [Tue, 3 Jun 2014 12:02:54 +0000 (14:02 +0200)]
Set the process latch when processing recovery conflict interrupts.
Because RecoveryConflictInterrupt() didn't set the process latch,
anything using the latter to wait for events didn't get notified about
recovery conflicts. Most latch users are never the target of recovery
conflicts, which explains the lack of reports about this until now.
Since 9.3, though, two possibly affected users exist: the SQL-callable
pg_sleep() now uses latches to wait, and background workers are
expected to use latches in their main loop. Both would currently wait
until the end of WaitLatch's timeout.
Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also
be possible to fix the issue by having each latch user set
set_latch_on_sigusr1. That seems failure-prone though, as most of
these callsites won't often receive recovery conflicts and thus will
likely only be tested against normal query cancels et al. It'd also be
unnecessarily verbose.
Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be
sufficient, because that's where pg_sleep() was converted to waiting
on the latch and background workers got introduced; but there could be
user-level code making use of the latch pre-9.3.
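The essence of the fix, as a sketch:

    #include "postgres.h"
    #include "storage/latch.h"
    #include "storage/proc.h"

    void
    recovery_conflict_sketch(void)
    {
        /* ... existing recovery-conflict processing ... */

        /* Wake any WaitLatch() caller so it notices the conflict. */
        SetLatch(&MyProc->procLatch);
    }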
Andres Freund [Tue, 3 Jun 2014 10:19:18 +0000 (12:19 +0200)]
Use unaligned output in another regression test query to reduce diff noise.
Use the unaligned/no rowcount output mode in a regression test that
shows all built-in leakproof functions. Currently a new leakproof
function will often change the alignment of all existing functions,
making it hard to see the actual difference and creating unnecessary
patch conflicts.
Noticed while looking over a patch introducing new leakproof functions.
Andrew Dunstan [Sun, 1 Jun 2014 23:04:02 +0000 (19:04 -0400)]
Improve the efficiency of certain jsonb get operations.
Instead of iterating over jsonb structures, use the built-in functions
findJsonbValueFromContainerLen() and getIthJsonbValueFromContainer() to
extract values directly. These functions use algorithms that are
O(log n) and O(1) respectively, whereas iterating is O(n), so we should
see considerable speedup here.
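A sketch of the direct lookups, with signatures assumed from the 9.4-era
jsonb headers:

    #include "postgres.h"
    #include "utils/jsonb.h"

    void
    jsonb_fetch_sketch(JsonbContainer *container)
    {
        /* Object key lookup: binary search over sorted keys, O(log n). */
        JsonbValue *field = findJsonbValueFromContainerLen(container,
                                                           JB_FOBJECT,
                                                           "somekey", 7);

        /* Array element lookup: direct indexing, O(1). */
        JsonbValue *elem = getIthJsonbValueFromContainer(container, 0);

        (void) field;
        (void) elem;
    }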
Andres Freund [Sat, 31 May 2014 13:58:04 +0000 (15:58 +0200)]
Improvements to the replication protocol documentation.
Document CREATE_REPLICATION_SLOT's output_plugin parameter; that
START_REPLICATION ... LOGICAL takes parameters; that START_REPLICATION
... LOGICAL uses the same messages as ... PHYSICAL; and be more
consistent with the usage of <literal/>.
Michael Paquier, with some additional changes by me.
Tom Lane [Fri, 30 May 2014 22:18:11 +0000 (18:18 -0400)]
On OS X, link libpython normally, ignoring the "framework" framework.
As of Xcode 5.0, Apple isn't including the Python framework as part of the
SDK-level files, which means that linking to it might fail depending on
whether Xcode thinks you've selected a specific SDK version. According to
their Tech Note 2328, they've basically deprecated the framework method of
linking to libpython and are telling people to link to the shared library
normally. (I'm pretty sure this is in direct contradiction to the advice
they were giving a few years ago, but whatever.) Testing says that this
approach works fine at least as far back as OS X 10.4.11, so let's just
rip out the framework special case entirely. We do still need a special
case to decide that OS X provides a shared library at all, unfortunately
(I wonder why the distutils check doesn't work ...). But this is still
less of a special case than before, so it's fine.
Back-patch to all supported branches, since we'll doubtless be hearing
about this more as more people update to recent Xcode.
Tom Lane [Thu, 29 May 2014 17:51:02 +0000 (13:51 -0400)]
When using the OSSP UUID library, cache its uuid_t state object.
The original coding in contrib/uuid-ossp created and destroyed a uuid_t
object (or, in some cases, even two of them) each time it was called.
This is not the intended usage: you're supposed to keep the uuid_t object
around so that the library can cache its state across uses. (Other UUID
libraries seem to keep equivalent state behind-the-scenes in static
variables, but OSSP chose differently.) Aside from being quite inefficient,
creating a new uuid_t loses knowledge of the previously generated UUID,
which in theory could result in duplicate V1-style UUIDs being created
on sufficiently fast machines.
On at least some platforms, creating a new uuid_t also draws some entropy
from /dev/urandom, leaving less for the rest of the system. This seems
sufficiently unpleasant to justify back-patching this change.
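A sketch of the cached-state pattern against the OSSP API (names are
illustrative; return codes go unchecked for brevity):

    #include <uuid.h>       /* OSSP UUID */

    /* One uuid_t for the whole session, so the library can carry its
     * node and clock-sequence state from one generated UUID to the next. */
    static uuid_t *cached_state = NULL;

    static void
    generate_v1_sketch(void)
    {
        if (cached_state == NULL)
            uuid_create(&cached_state);         /* create once, reuse */
        uuid_make(cached_state, UUID_MAKE_V1);  /* state persists */
    }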
Tom Lane [Thu, 29 May 2014 03:15:51 +0000 (23:15 -0400)]
Fix uuid-ossp regression tests based on buildfarm feedback.
The previous version of these tests expected uuid_generate_v1() to always
emit MAC addresses with the local-admin and multicast address bits zero.
However, several of the buildfarm critters are reporting values with the
local-admin bit set. (Perhaps they're running inside VMs or jails.)
And a couple are reporting values with the multicast bit set, probably
meaning that the UUID library couldn't read the system MAC address.
Also, it emerges that if OSSP UUID can't read the system MAC address, it
falls back to V1MC behavior wherein the whole node field gets randomized
each time, breaking the test that expected the node field to remain stable
in V1 output. (It looks like e2fs doesn't behave that way, though.)
It's not entirely clear why we can't get a system MAC address, since the
buildfarm scripts would not work without internet access. Nonetheless,
the regression tests had better cope with the case, so adjust the tests
to expect these behaviors.
It turns out that the %name-prefix syntax without "=" does not work
at all in pre-2.4 Bison. We are not prepared to make such a large
jump in the minimum required Bison version just to suppress a warning
message in a version hardly any developers are using yet.
When 3.0 gets more popular, we'll figure out a way to deal with this.
In the meantime, BISONFLAGS=-Wno-deprecated is recommended for
anyone using 3.0 who doesn't want to see the warning.
Andres Freund [Wed, 28 May 2014 22:32:09 +0000 (00:32 +0200)]
Don't pay heed to wal_sender_timeout while creating a decoding slot.
Sometimes CREATE_REPLICATION_SLOT ... LOGICAL ... needs to wait for
further WAL using WalSndWaitForWal(). That used to always respect
wal_sender_timeout and kill the session when waiting long enough,
because no feedback/ping messages can be sent while the slot is still
being created.
Introduce the notion that last_reply_timestamp = 0 means that the
walsender currently doesn't need timeout processing to avoid that
problem. Use that notion for CREATE_REPLICATION_SLOT ... LOGICAL.
Bug report and initial patch by Steve Singer, revised by me.
The only caller of compareJsonbScalarValue that needed locale-sensitive
comparison of strings was also the only caller that didn't just check for
equality. Separate the two cases for clarity: compareJsonbScalarValue now
does locale-sensitive comparison, and a new function,
equalsJsonbScalarValue, just checks for equality.
Tom Lane [Wed, 28 May 2014 19:41:53 +0000 (15:41 -0400)]
Fix bogus %name-prefix option syntax in all our Bison files.
%name-prefix doesn't use an "=" sign according to the Bison docs, but it
silently accepted one anyway, until Bison 3.0. This was originally a
typo of mine in commit 012abebab1bc72043f3f670bf32e91ae4ee04bd2, and we
seem to have slavishly copied the error into all the other grammar files.
Per report from Vik Fearing; analysis by Peter Eisentraut.
Back-patch to all active branches, since somebody might try to build
a back branch with up-to-date tools.
Tom Lane [Wed, 28 May 2014 18:21:17 +0000 (14:21 -0400)]
Improve regression tests for uuid-ossp.
On reflection, the timestamp-advances test might fail if we're unlucky
enough for the time_mid field to change between two calls, since uuid_cmp
is just bytewise comparison and the field ordering has more significant
fields later. Build some field extraction functions so we can do a more
honest test of that. Also check that the version and reserved fields
contain what they should.
Tom Lane [Wed, 28 May 2014 15:50:41 +0000 (11:50 -0400)]
Fix stack clobber in new uuid-ossp code.
The V5 (SHA1 hashing) code wrote 20 bytes into a 16-byte local variable.
This had accidentally failed to fail in my testing and Matteo's, but
buildfarm results exposed the problem.
Magnus Hagander [Wed, 28 May 2014 10:40:45 +0000 (12:40 +0200)]
Ensure cleanup in case of early errors in streaming base backups
Move the code that sends the initial status information as well as the
calculation of paths inside the ENSURE_ERROR_CLEANUP block. If this
code failed, we would "leak" a counter of the number of concurrent
backups, thereby making the system always believe it was in backup
mode. This could happen
if the sending failed (which it probably never did given that the small
amount of data to send would never cause a flush) or if the psprintf calls
ran out of memory. Both are very low risk, but all operations after
do_pg_start_backup should be protected.
Tom Lane [Wed, 28 May 2014 04:26:46 +0000 (00:26 -0400)]
pg_lsn should not be marked typispreferred.
In general it's not a good idea for built-in types in the 'U' category
to be marked preferred; they could draw behavior away from user-defined
types with similarly-named operators. pg_lsn is probably at low risk
of that right now given the lack of casts between it and other types,
but that doesn't make this marking OK.
Ordinarily we'd bump catversion when changing any predefined catalog
contents like this, but since we're past beta1, the costs of a forced
initdb seem to outweigh the benefits of guaranteed behavioral consistency.
There's no known behavioral impact today anyway --- this is more
in the nature of making sure there are no problems in the future.
Tom Lane [Wed, 28 May 2014 02:31:21 +0000 (22:31 -0400)]
Fix obsolete config-module-exclusion logic in vcregress.pl.
The recent addition of regression tests to uuid-ossp exposed the fact
that the MSVC build system wasn't being consistent about whether it was
building/testing that contrib module, ie, it would try to test the module
even when it hadn't built it. The same hazard was latent for sslinfo.
For the moment I just copied the more up-to-date logic from point A to
point B, but this is screaming for refactoring.