Tom Lane [Tue, 22 Jul 2014 17:30:01 +0000 (13:30 -0400)]
Re-enable error for "SELECT ... OFFSET -1".
The executor has thrown errors for negative OFFSET values since 8.4 (see
commit bfce56eea45b1369b7bb2150a150d1ac109f5073), but in a moment of brain
fade I taught the planner that OFFSET with a constant negative value was a
no-op (commit 1a1832eb085e5bca198735e5d0e766a3cb61b8fc). Reinstate the
former behavior by only discarding OFFSET with a value of exactly 0. In
passing, adjust a planner comment that referenced the ancient behavior.
Back-patch to 9.3 where the mistake was introduced.
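For illustration, a quick sketch of the restored behavior (table name
hypothetical):

    SELECT * FROM tab OFFSET 0;    -- constant zero: planner still drops the no-op
    SELECT * FROM tab OFFSET -1;   -- ERROR:  OFFSET must not be negative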
Tom Lane [Tue, 22 Jul 2014 15:45:50 +0000 (11:45 -0400)]
Check block number against the correct fork in get_raw_page().
get_raw_page tried to validate the supplied block number against
RelationGetNumberOfBlocks(), which of course is only right when
accessing the main fork. In most cases, the main fork is longer
than the others, so that the check was too weak (allowing a
lower-level error to be reported, but no real harm to be done).
However, very small tables could have an FSM larger than their heap,
in which case the mistake prevented access to some FSM pages.
Per report from Torsten Foertsch.
In passing, make the bad-block-number error into an ereport not elog
(since it's certainly not an internal error); and fix sloppily
maintained comment for RelationGetNumberOfBlocksInFork.
This has been wrong since we invented relation forks, so back-patch
to all supported branches.
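An illustrative sketch (table name hypothetical; get_raw_page() comes from
the pageinspect module):

    -- a tiny table whose FSM has more pages than its heap:
    SELECT get_raw_page('tiny_tab', 'fsm', 2);    -- wrongly rejected before
    SELECT get_raw_page('tiny_tab', 'main', 99);  -- now a proper ereport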
Diagnose incompatible OpenLDAP versions during build and test.
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL
backends can crash at exit. Raise a warning during "configure" based on
the compile-time OpenLDAP version number, and test the crash scenario in
the dblink test suite. Back-patch to 9.0 (all supported versions).
Peter Eisentraut [Tue, 22 Jul 2014 04:42:36 +0000 (00:42 -0400)]
Unset some local environment variables in TAP tests
Unset environment variables that control message language, so that we
can compare some program output with expected strings. This is very
similar to what pg_regress does.
In commit 631dc390f49909a5c8ebd6002cfb2bcee5415a9d, we started to handle
simple numeric timezone offsets via the zic library instead of the old
CTimeZone/HasCTZSet kluge. However, we overlooked the fact that the zic
code will reject UTC offsets exceeding a week (which seems a bit arbitrary,
but not because it's too tight ...). This led to possibly setting
session_timezone to NULL, which results in crashes in most timezone-related
operations as of 9.4, and crashes in a small number of places even before
that. So check for NULL return from pg_tzset_offset() and report an
appropriate error message. Per bug #11014 from Duncan Gillis.
Back-patch to all supported branches, like the previous patch.
(Unfortunately, as of today that no longer includes 8.4.)
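For example, something along these lines (offsets hypothetical) now draws a
clean error rather than a crash:

    SET TIME ZONE INTERVAL '-08:00' HOUR TO MINUTE;  -- ordinary, handled by zic
    SET TIME ZONE INTERVAL '10 days';                -- exceeds a week: error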
Tom Lane [Mon, 21 Jul 2014 15:41:30 +0000 (11:41 -0400)]
Defend against bad relfrozenxid/relminmxid/datfrozenxid/datminmxid values.
In commit a61daa14d56867e90dc011bbba52ef771cea6770, we fixed pg_upgrade so
that it would install sane relminmxid and datminmxid values, but that does
not cure the problem for installations that were already pg_upgraded to
9.3; they'll initially have "1" in those fields. This is not a big problem
so long as 1 is "in the past" compared to the current nextMultiXact
counter. But if an installation were more than halfway to the MXID wrap
point at the time of upgrade, 1 would appear to be "in the future" and
that would effectively disable tracking of oldest MXIDs in those
tables/databases, until such time as the counter wrapped around.
While in itself this isn't worse than the situation pre-9.3, where we did
not manage MXID wraparound risk at all, the consequences of premature
truncation of pg_multixact are worse now; so we ought to make some effort
to cope with this. We discussed advising users to fix the tracking values
manually, but that seems both very tedious and very error-prone.
Instead, this patch adopts two amelioration rules. First, a relminmxid
value that is "in the future" is allowed to be overwritten with a
full-table VACUUM's actual freeze cutoff, ignoring the normal rule that
relminmxid should never go backwards. (This essentially assumes that we
have enough defenses in place that wraparound can never occur anymore,
and thus that a value "in the future" must be corrupt.) Second, if we see
any "in the future" values then we refrain from truncating pg_clog and
pg_multixact. This prevents loss of clog data until we have cleaned up
all the broken tracking data. In the worst case that could result in
considerable clog bloat, but in practice we expect that relfrozenxid-driven
freezing will happen soon enough to fix the problem before clog bloat
becomes intolerable. (Users could do manual VACUUM FREEZEs if not.)
Note that this mechanism cannot save us if there are already-wrapped or
already-truncated-away MXIDs in the table; it's only capable of dealing
with corrupt tracking values. But that's the situation we have with the
pg_upgrade bug.
For consistency, apply the same rules to relfrozenxid/datfrozenxid. There
are not known mechanisms for these to get messed up, but if they were, the
same tactics seem appropriate for fixing them.
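For reference, the tracking values in question can be inspected directly,
and a database-wide VACUUM FREEZE forces the full-table vacuums that apply
the first rule (a sketch, not an official repair procedure):

    SELECT relname, relfrozenxid, relminmxid FROM pg_class WHERE relkind = 'r';
    SELECT datname, datfrozenxid, datminmxid FROM pg_database;
    VACUUM FREEZE;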
Tom Lane [Sat, 19 Jul 2014 18:28:25 +0000 (14:28 -0400)]
Partial fix for dropped columns in functions returning composite.
When a view has a function-returning-composite in FROM, and there are
some dropped columns in the underlying composite type, ruleutils.c
printed junk in the column alias list for the reconstructed FROM entry.
Before 9.3, this was prevented by doing get_rte_attribute_is_dropped
tests while printing the column alias list; but that solution is not
currently available to us for reasons I'll explain below. Instead,
check for empty-string entries in the alias list, which can only exist
if that column position had been dropped at the time the view was made.
(The parser fills in empty strings to preserve the invariant that the
aliases correspond to physical column positions.)
While this is sufficient to handle the case of columns dropped before
the view was made, we have still got issues with columns dropped after
the view was made. In particular, the view could contain Vars that
explicitly reference such columns! The dependency machinery really
ought to refuse the column drop attempt in such cases, as it would do
when trying to drop a table column that's explicitly referenced in
views. However, we currently neglect to store dependencies on columns
of composite types, and fixing that is likely to be too big to be
back-patchable (not to mention that existing views in existing databases
would not have the needed pg_depend entries anyway). So I'll leave that
for a separate patch.
Pre-9.3, ruleutils would print such Vars normally (with their original
column names) even though it suppressed their entries in the RTE's
column alias list. This is certainly bogus, since the printed view
definition would fail to reload, but at least it didn't crash. However,
as of 9.3 the printed column alias list is tightly tied to the names
printed for Vars; so we can't treat columns as dropped for one purpose
and not dropped for the other. This is why we can't just put back the
get_rte_attribute_is_dropped test: it results in an assertion failure
if the view in fact contains any Vars referencing the dropped column.
Once we've got dependencies preventing such cases, we'll probably want
to do it that way instead of relying on the empty-string test used here.
This fix turned up a very ancient bug in outfuncs/readfuncs, namely
that T_String nodes containing empty strings were not dumped/reloaded
correctly: the node was printed as "<>", which reads back as a string
value of "<>". Since (per SQL) we disallow empty-string identifiers,
such nodes don't occur normally, which is why we'd not noticed.
(Such nodes aren't used for literal constants, just identifiers.)
Per report from Marc Schablewski. Back-patch to 9.3 which is where
the rule printing behavior changed. The dangling-variable case is
broken all the way back, but that's not what his complaint is about.
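A minimal way to reproduce the pre-fix symptom (names hypothetical):

    CREATE TABLE src (a int, b int, c int);
    ALTER TABLE src DROP COLUMN b;
    CREATE FUNCTION get_src() RETURNS SETOF src
      LANGUAGE sql AS 'SELECT * FROM src';
    CREATE VIEW v AS SELECT * FROM get_src();
    SELECT pg_get_viewdef('v'::regclass);  -- alias list printed junk before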
Limit pg_upgrade authentication advice to always-secure techniques.
~/.pgpass is a sound choice everywhere, and "peer" authentication is
safe on every platform it supports. Cease to recommend "trust"
authentication, the safety of which is deeply configuration-specific.
Back-patch to 9.0, where pg_upgrade was introduced.
Tom Lane [Fri, 18 Jul 2014 17:00:27 +0000 (13:00 -0400)]
Fix two low-probability memory leaks in regular expression parsing.
If pg_regcomp failed after having invoked markst/cleanst, it would leak any
"struct subre" nodes it had created. (We've already detected all regex
syntax errors at that point, so the only likely causes of later failure
would be query cancel or out-of-memory.) To fix, make sure freesrnode
knows the difference between the pre-cleanst and post-cleanst cleanup
procedures. Add some documentation of this less-than-obvious point.
Also, newlacon did the wrong thing with an out-of-memory failure from
realloc(), so that the previously allocated array would be leaked.
Both of these are pretty low-probability scenarios, but a bug is a bug,
so patch all the way back.
Fix bugs in SP-GiST search with range type's -|- (adjacent) operator.
The consistent function contained several bugs:
* The "if (which2) { ... }" block was broken. It compared the argument's
lower bound against centroid's upper bound, while it was supposed to compare
the argument's upper bound against the centroid's lower bound (the comment
was correct, code was wrong). Also, it cleared bits in the "which1"
variable, while it was supposed to clear bits in "which2".
* If the argument's upper bound was equal to the centroid's lower bound, we
descended to both halves (= all quadrants). That's unnecessary; searching
the right quadrants is sufficient. This didn't lead to incorrect query
results, but was clearly wrong, and slowed down queries unnecessarily.
* In the case that argument's lower bound is adjacent to the centroid's
upper bound, we also don't need to visit all quadrants. Per similar
reasoning as previous point.
* The code where we compare the previous centroid with the current centroid
should match the code where we compare the current centroid with the
argument. The point of that code is to redo the calculation done in the
previous level, to see if we were supposed to traverse left or right (or up
or down), and if we actually did. If we moved in a different direction,
then we know there are no matches for that bound.
Refactor the code and add comments to make it more readable and easier to
reason about.
Backpatch to 9.3 where SP-GiST support for range types was introduced.
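A sketch of the kind of query affected (names hypothetical):

    CREATE TABLE ranges (r int4range);
    CREATE INDEX ranges_spgist_idx ON ranges USING spgist (r);
    SELECT * FROM ranges WHERE r -|- int4range(10, 20);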
Trying to reassign objects owned by a user that had text search
dictionaries or configurations used to fail with:
ERROR: unexpected classid 3600
or
ERROR: unexpected classid 3602
Fix by adding cases for those object types in a switch in pg_shdepend.c.
Both REASSIGN OWNED and text search objects go back all the way to 8.1,
so backpatch to all supported branches. In 9.3 the alter-owner code was
made generic, so the required change in recent branches is pretty
simple; however, for 9.2 and older ones we need some additional
reshuffling to enable specifying objects by OID rather than name.
Text search templates and parsers are not owned objects, so there's no
change required for them.
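A sketch of the previously failing sequence (role and object names
hypothetical):

    CREATE TEXT SEARCH DICTIONARY some_dict (TEMPLATE = simple);
    ALTER TEXT SEARCH DICTIONARY some_dict OWNER TO old_role;
    REASSIGN OWNED BY old_role TO new_role;  -- formerly: unexpected classid 3600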
Magnus Hagander [Tue, 15 Jul 2014 16:04:43 +0000 (18:04 +0200)]
Detect presence of SSL_get_current_compression
Apparently we still build against OpenSSL so old that it doesn't
have this function, so add an autoconf check for it to make the
buildfarm happy. If the function doesn't exist, always return
that compression is disabled, since presumably the actual
compression functionality is always missing.
For now, hardcode the function as present on MSVC, since we should
hopefully be well beyond those old versions on that platform.
Magnus Hagander [Tue, 15 Jul 2014 13:07:38 +0000 (15:07 +0200)]
Include SSL compression status in psql banner and connection logging
Both the psql banner and the connection logging already included
SSL status, cipher and bitlength, this adds the information about
compression being on or off.
Prevent bitmap heap scans from showing unnecessary block info in EXPLAIN ANALYZE.
EXPLAIN ANALYZE reports the numbers of exact and lossy blocks that a bitmap
heap scan processes. Previously, when both numbers were zero, it still
displayed the bare prefix "Heap Blocks:" in TEXT output format, which was
strange and confusing. This commit suppresses that unnecessary output.
Backpatch to 9.4 where EXPLAIN ANALYZE was changed to display this
information.
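A sketch of the output in question (plan and counts hypothetical):

    EXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM tab WHERE k = 42;
    --  Bitmap Heap Scan on tab
    --    Heap Blocks: exact=1
    -- the "Heap Blocks:" line is now omitted when both counts are zero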
Andres Freund [Sat, 12 Jul 2014 12:28:19 +0000 (14:28 +0200)]
Fix decoding of consecutive MULTI_INSERTs emitted by one heap_multi_insert().
Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
contained in one xl_heap_multi_insert record. But that's not actually
enough, because heap_multi_insert() will first toast all passed-in rows
and then emit several *_multi_insert records, one for each page it fills
with tuples.
Add a XLOG_HEAP_LAST_MULTI_INSERT flag which is set in
xl_heap_multi_insert->flag denoting that this multi_insert record is
the last emitted by one heap_multi_insert() call. Then use that flag
in decode.c to only set clear_toast_afterwards in the right situation.
Expand the number of rows inserted via COPY in the corresponding
regression test to make sure that more than one heap page is filled
with tuples by one heap_multi_insert() call.
Tom Lane [Fri, 11 Jul 2014 23:12:38 +0000 (19:12 -0400)]
Fix bug with whole-row references to append subplans.
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
TupleTableSlot just once per query. But if the input is coming from an
Append (or, perhaps, other cases?) more than one slot might be returned
over the query run. This led to "record type has not been registered"
errors when a composite datum was extracted from a non-blessed slot.
This bug has been there a long time; I guess it escaped notice because when
dealing with subqueries the planner tends to expand whole-row Vars into
RowExprs, which don't have the same problem. It is possible to trigger
the problem in all active branches, though, as illustrated by the added
regression test.
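One way to hit the problem, sketched with hypothetical tables of identical
row type:

    SELECT row_to_json(q)
    FROM (SELECT * FROM a UNION ALL SELECT * FROM b) q;
    -- formerly could fail with "record type has not been registered"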
Andres Freund [Wed, 2 Jul 2014 19:07:47 +0000 (21:07 +0200)]
Rename logical decoding's pg_llog directory to pg_logical.
The old name wasn't very descriptive of the actual contents of the
directory, which are historical snapshots in the snapshots/
subdirectory and mapping data for rewritten tuples in
mappings/. There's been a fair amount of discussion about what would be a
good name. I'm settling for pg_logical because it's likely that
further data around logical decoding and replication will need saving
in the future.
Also add the missing entry for the directory into storage.sgml's list
of PGDATA contents.
Bumps catversion as the data directories won't be compatible.
Backpatch to 9.4, which I had missed doing, as Michael Paquier luckily
noticed. As there has already been a catversion bump after 9.4beta1,
there's no reason for having 9.4 diverge from master.
Tom Lane [Tue, 8 Jul 2014 18:03:16 +0000 (14:03 -0400)]
Don't assume a subquery's output is unique if there's a SRF in its tlist.
While the x output of "select x from t group by x" can be presumed unique,
this does not hold for "select x, generate_series(1,10) from t group by x",
because we may expand the set-returning function after the grouping step.
(Perhaps that should be re-thought; but considering all the other oddities
involved with SRFs in targetlists, it seems unlikely we'll change it.)
Put a check in query_is_distinct_for() so it's not fooled by such cases.
Bruce Momjian [Mon, 7 Jul 2014 17:24:08 +0000 (13:24 -0400)]
pg_upgrade: allow upgrades for new-only TOAST tables
Previously, when calculations on the need for toast tables changed,
pg_upgrade could not handle cases where the new cluster needed a TOAST
table and the old cluster did not. (It already handled the opposite
case.) This fixes the "OID mismatch" error typically generated in this
case.
Andres Freund [Sun, 6 Jul 2014 13:58:01 +0000 (15:58 +0200)]
Fix decoding of MULTI_INSERTs when rows other than the last are toasted.
When decoding the results of a HEAP2_MULTI_INSERT (currently only
generated by COPY FROM) toast columns for all but the last tuple
weren't replaced by their actual contents before being handed to the
output plugin. The reassembled toast datums were disregarded after
every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE) which is correct
for plain inserts, updates, deletes, but not multi inserts - there we
generate several REORDER_BUFFER_CHANGE_INSERTs for a single
xl_heap_multi_insert record.
To solve the problem add a clear_toast_afterwards boolean to
ReorderBufferChange's union member that's used by modifications. All
row changes but multi_inserts always set that to true, but
multi_insert sets it only for the last change generated.
Add a regression test covering decoding of multi_inserts - there was
none at all before.
Backpatch to 9.4 where logical decoding was introduced.
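A sketch of the scenario using the test_decoding module (slot, table, and
file names hypothetical):

    SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding');
    COPY big_tab FROM '/tmp/big_tab.csv' (FORMAT csv);  -- heap_multi_insert() path
    SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);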
Consistently pass an "unsigned char" to ctype.h functions.
The isxdigit() calls relied on undefined behavior. The isascii() call
was well-defined, but our prevailing style is to include the cast.
Back-patch to 9.4, where the isxdigit() calls were introduced.
Tom Lane [Thu, 3 Jul 2014 22:47:09 +0000 (18:47 -0400)]
Don't cache per-group context across the whole query in orderedsetaggs.c.
Although nodeAgg.c currently uses the same per-group memory context for
all groups of a query, that might change in future. Avoid assuming it.
This costs us an extra AggCheckCallContext() call per group, but that's
pretty cheap and is probably good from a safety standpoint anyway.
Back-patch to 9.4 in case any third-party code copies this logic.
Tom Lane [Thu, 3 Jul 2014 22:25:37 +0000 (18:25 -0400)]
Redesign API presented by nodeAgg.c for ordered-set and similar aggregates.
The previous design exposed the input and output ExprContexts of the
Agg plan node, but work on grouping sets has suggested that we'll regret
doing that. Instead provide more narrowly-defined APIs that can be
implemented in multiple ways, namely a way to get a short-term memory
context and a way to register an aggregate shutdown callback.
Back-patch to 9.4 where the bad APIs were introduced, since we don't
want third-party code using these APIs and then having to change in 9.5.
Use a separate temporary directory for the Unix-domain socket
Creating the Unix-domain socket in the build directory can run into
name-length limitations. Therefore, create the socket file in the
default temporary directory of the operating system. Keep the temporary
data directory etc. in the build tree.
Kevin Grittner [Wed, 2 Jul 2014 20:03:57 +0000 (15:03 -0500)]
Smooth reporting of commit/rollback statistics.
If a connection committed or rolled back any transactions within a
PGSTAT_STAT_INTERVAL pacing interval without accessing any tables,
the reporting of those statistics would be held up until the
connection closed or until it ended a PGSTAT_STAT_INTERVAL interval
in which it had accessed a table. This could result in under-
reporting of transactions for an extended period, followed by a
spike in reported transactions.
While this is arguably a bug, the impact is minimal, primarily
affecting, and being affected by, monitoring software. It might
cause more confusion than benefit to change the existing behavior
in released stable branches, so apply only to master and the 9.4
beta.
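The counters involved are the ones monitoring software typically reads,
e.g.:

    SELECT xact_commit, xact_rollback
    FROM pg_stat_database
    WHERE datname = current_database();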
Gurjeet Singh, with review and editing by Kevin Grittner,
incorporating suggested changes from Abhijit Menon-Sen and Tom
Lane.
Tom Lane [Wed, 2 Jul 2014 16:31:27 +0000 (12:31 -0400)]
Add some errdetail to checkRuleResultList().
This function wasn't originally thought to be really user-facing,
because converting a table to a view isn't something we expect people
to do manually. So not all that much effort was spent on the error
messages; in particular, while the code will complain that you got
the column types wrong it won't say exactly what they are. But since
we repurposed the code to also check compatibility of rule RETURNING
lists, it's definitely user-facing. It now seems worthwhile to add
errdetail messages showing exactly what the conflict is when there's
a mismatch of column names or types. This is prompted by bug #10836
from Matthias Raffelsieper, which might have been forestalled if the
error message had reported the wrong column type as being "record".
Back-patch to 9.4, but not into older branches where the set of
translatable error strings is supposed to be stable.
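A hedged sketch of a mismatch that now gets an errdetail (names
hypothetical; the rule's RETURNING list must match the row type of the
table the rule is attached to):

    CREATE TABLE t1 (a int, b text);
    CREATE TABLE t2 (a int, b int);
    CREATE RULE r1 AS ON INSERT TO t1 DO INSTEAD
      INSERT INTO t2 VALUES (new.a, 0) RETURNING a, b;
    -- fails: entry 2 of the RETURNING list is integer, but t1.b is text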
Prevent psql from issuing BEGIN before ALTER SYSTEM when AUTOCOMMIT is off.
The autocommit-off mode works by issuing an implicit BEGIN just before
any command that is not already in a transaction block and is not itself
a BEGIN or other transaction-control command, nor a command that
cannot be executed inside a transaction block. This commit prevents psql
from issuing such an implicit BEGIN before ALTER SYSTEM because it's
not allowed inside a transaction block.
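For instance, in a psql session:

    \set AUTOCOMMIT off
    ALTER SYSTEM SET archive_mode = on;
    -- psql no longer sends an implicit BEGIN first, since ALTER SYSTEM
    -- cannot run inside a transaction block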
Tom Lane [Tue, 1 Jul 2014 15:22:46 +0000 (11:22 -0400)]
Fix inadequately-sized output buffer in contrib/unaccent.
The output buffer size in unaccent_lexize() was calculated as input string
length times pg_database_encoding_max_length(), which effectively assumes
that replacement strings aren't more than one character. While that was
all that we previously documented it to support, the code actually has
always allowed replacement strings of arbitrary length; so if you tried
to make use of longer strings, you were at risk of buffer overrun. To fix,
use an expansible StringInfo buffer instead of trying to determine the
maximum space needed a priori.
This would be a security issue if unaccent rules files could be installed
by unprivileged users; but fortunately they can't, so in the back branches
the problem can be labeled as improper configuration by a superuser.
Nonetheless, a memory stomp isn't a nice way of reacting to improper
configuration, so let's back-patch the fix.
Bruce Momjian [Mon, 30 Jun 2014 23:57:47 +0000 (19:57 -0400)]
pg_upgrade: update C comments about pg_dumpall
There were some C comments that hadn't been updated since the switch from
using only pg_dumpall to using pg_dump and pg_dumpall, so update them.
Also, don't bother using --schema-only for pg_dumpall --globals-only.
Noah Misch [Mon, 30 Jun 2014 20:59:19 +0000 (16:59 -0400)]
Don't prematurely free the BufferAccessStrategy in pgstat_heap().
This function continued to use it after heap_endscan() freed it. In
passing, don't explicitly create a strategy here. Instead, use the one
created by heap_beginscan_strat(), if any. Back-patch to 9.2, where use
of a BufferAccessStrategy here was introduced.
Andres Freund [Sun, 29 Jun 2014 15:08:04 +0000 (17:08 +0200)]
Check interrupts during logical decoding more frequently.
When reading large amounts of preexisting WAL during logical decoding
using the SQL interface we possibly could fail to check interrupts in
due time. Similarly the same could happen on systems with a very high
WAL volume while creating a new logical replication slot, independent
of the used interface.
Previously these checks were only performed in xlogreader's read_page
callbacks, while waiting for new WAL to be produced. That's not
sufficient though, if there's never a need to wait. Walsender's send
loop already contains an interrupt check.
Backpatch to 9.4 where the logical decoding feature was introduced.
Revert the assertion of no palloc's in critical section.
Per discussion, it still fires too often to be safe to enable in
production. Keep it in master, so that we find the issues, but disable it
in the stable branch.
Tom Lane [Sun, 29 Jun 2014 17:51:02 +0000 (13:51 -0400)]
Remove use_json_as_text options from json_to_record/json_populate_record.
The "false" case was really quite useless since all it did was to throw
an error; a definition not helped in the least by making it the default.
Instead let's just have the "true" case, which emits nested objects and
arrays in JSON syntax. We might later want to provide the ability to
emit sub-objects in Postgres record or array syntax, but we'd be best off
to drive that off a check of the target field datatype, not a separate
argument.
For the functions newly added in 9.4, we can just remove the flag arguments
outright. We can't do that for json_populate_record[set], which already
existed in 9.3, but we can ignore the argument and always behave as if it
were "true". It helps that the flag arguments were optional and not
documented in any useful fashion anyway.
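A sketch of the resulting 9.4 behavior (no flag argument; nested values
simply come back in JSON syntax):

    SELECT * FROM json_to_record('{"a": 1, "b": {"c": 2}}')
        AS x(a int, b json);  -- b comes back as {"c": 2}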
Alvaro Herrera [Fri, 27 Jun 2014 18:43:52 +0000 (14:43 -0400)]
Have multixact be truncated by checkpoint, not vacuum
Instead of truncating pg_multixact at vacuum time, do it only at
checkpoint time. The reason for doing it this way is twofold: first, we
want it to delete only segments that we're certain will not be required
if there's a crash immediately after the removal; and second, we want to
do it relatively often so that older files are not left behind if
there's an untimely crash.
Per my proposal in
http://www.postgresql.org/message-id/20140626044519.GJ7340@eldon.alvh.no-ip.org
we now execute the truncation in the checkpointer process rather than as
part of vacuum. Vacuum is only in charge of maintaining in shared
memory the value to which it's possible to truncate the files; that
value is stored as part of checkpoints also, and so upon recovery we can
reuse the same value to re-execute truncate and reset the
oldest-value-still-safe-to-use to one known to remain after truncation.
Per bug reported by Jeff Janes in the course of his tests involving
bug #8673.
While at it, update some comments that hadn't been updated since
multixacts were changed.
Backpatch to 9.3, where persistency of pg_multixact files was
introduced by commit 0ac5ad5134f2.
Alvaro Herrera [Fri, 27 Jun 2014 18:43:46 +0000 (14:43 -0400)]
Don't allow relminmxid to go backwards during VACUUM FULL
We were allowing a table's pg_class.relminmxid value to move backwards
when heaps were swapped by VACUUM FULL or CLUSTER. There is a
similar protection against relfrozenxid going backwards, which we
neglected to clone when the multixact stuff was rejiggered by commit 0ac5ad5134f276.
Backpatch to 9.3, where relminmxid was introduced.
As reported by Heikki in
http://www.postgresql.org/message-id/52401AEA.9000608@vmware.com
Don't assert MultiXactIdIsRunning if the multi came from a tuple that
had been share-locked and later copied over to the new cluster by
pg_upgrade. Doing that causes an error to be raised unnecessarily:
MultiXactIdIsRunning is not open to the possibility that its argument
came from a pg_upgraded tuple, and all its other callers are already
checking; but such multis cannot, obviously, have transactions still
running, so the assert is pointless.
Noticed while investigating the bogus pg_multixact/offsets/0000 file
left over by pg_upgrade, as reported by Andres Freund in
http://www.postgresql.org/message-id/20140530121631.GE25431@alap3.anarazel.de
Backpatch to 9.3, as the commit that introduced the buglet.
Tom Lane [Fri, 27 Jun 2014 18:08:51 +0000 (11:08 -0700)]
Disallow pushing volatile qual expressions down into DISTINCT subqueries.
A WHERE clause applied to the output of a subquery with DISTINCT should
theoretically be applied only once per distinct row; but if we push it
into the subquery then it will be evaluated at each row before duplicate
elimination occurs. If the qual is volatile this can give rise to
observably wrong results, so don't do that.
While at it, refactor a little bit to allow subquery_is_pushdown_safe
to report more than one kind of restrictive condition without indefinitely
expanding its argument list.
Although this is a bug fix, it seems unwise to back-patch it into released
branches, since it might de-optimize plans for queries that aren't giving
any trouble in practice. So apply to 9.4 but not further back.
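A sketch of the sort of query affected (table name hypothetical):

    SELECT * FROM (SELECT DISTINCT x FROM t) ss
    WHERE x > random();  -- volatile qual: no longer pushed below the DISTINCT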
Tom Lane [Thu, 26 Jun 2014 23:22:18 +0000 (16:22 -0700)]
Get rid of bogus separate pg_proc entries for json_extract_path operators.
These should not have existed to begin with, but there was apparently some
misunderstanding of the purpose of the opr_sanity regression test item
that checks for operator implementation functions with their own comments.
The idea there is to check for unintentional violations of the rule that
operator implementation functions shouldn't be documented separately
.... but for these functions, that is in fact what we want, since the
variadic option is useful and not accessible via the operator syntax.
Get rid of the extra pg_proc entries and fix the regression test and
documentation to be explicit about what we're doing here.
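For reference, the two equivalent forms; the second stays separately
documented because it is variadic:

    SELECT '{"a": {"b": 1}}'::json #> '{a,b}';
    SELECT json_extract_path('{"a": {"b": 1}}', 'a', 'b');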
Tom Lane [Thu, 26 Jun 2014 17:40:55 +0000 (10:40 -0700)]
Forward-patch regression test for "could not find pathkey item to sort".
Commit a87c729153e372f3731689a7be007bc2b53f1410 already fixed the bug this
is checking for, but the regression test case it added didn't cover this
scenario. Since we managed to miss the fact that there was a bug at all,
it seems like a good idea to propagate the extra test case forward to HEAD.
Fujii Masao [Thu, 26 Jun 2014 05:27:27 +0000 (14:27 +0900)]
Remove obsolete example of CSV log file name from log_filename document.
Commit 7380b63 changed log_filename so that the epoch is no longer appended
when no format specifier is given. But the example of a CSV log file name
with the epoch was still left in the log_filename documentation. This
commit removes that obsolete example.
This commit also documents the defaults of log_directory and
log_filename.
Tom Lane [Wed, 25 Jun 2014 22:25:26 +0000 (15:25 -0700)]
Rationalize error messages within jsonfuncs.c.
I noticed that the functions in jsonfuncs.c sometimes printed error
messages that claimed I'd called some other function. Investigation showed
that this was from repurposing code into "worker" functions without taking
much care as to whether it would mention the right SQL-level function if it
threw an error. Moreover, there was a weird mishmash of messages that
contained a fixed function name, messages that used %s for a function name,
and messages that constructed a function name out of spare parts, like
"json%s_populate_record" (which, quite aside from being ugly as sin, wasn't
even sufficient to cover all the cases). This would put an undue burden on
our long-suffering translators. Standardize on inserting the SQL function
name with %s so as to reduce the number of translatable strings, and pass
function names around as needed to make sure we can report the right one.
Fix up some gratuitous variations in wording, too.
Tom Lane [Wed, 25 Jun 2014 18:22:21 +0000 (11:22 -0700)]
Cosmetic improvements in jsonfuncs.c.
Re-pgindent, remove a lot of random vertical whitespace, remove useless
(if not counterproductive) inline markings, get rid of unnecessary
zero-padding of strings for hashtable searches. No functional changes.
Tom Lane [Wed, 25 Jun 2014 04:22:43 +0000 (21:22 -0700)]
Fix handling of nested JSON objects in json_populate_recordset and friends.
populate_recordset_object_start() improperly created a new hash table
(overwriting the link to the existing one) if called at nest levels
greater than one. This resulted in previous fields not appearing in
the final output, as reported by Matti Hameister in bug #10728.
In 9.4 the problem also affects json_to_recordset.
This perhaps missed detection earlier because the default behavior is to
throw an error for nested objects: you have to pass use_json_as_text = true
to see the problem.
In addition, fix query-lifespan leakage of the hashtable created by
json_populate_record(). This is pretty much the same problem recently
fixed in dblink: creating an intended-to-be-temporary context underneath
the executor's per-tuple context isn't enough to make it go away at the
end of the tuple cycle, because MemoryContextReset is not
MemoryContextResetAndDeleteChildren.
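A minimal reproduction sketch (type name hypothetical; the flag argument
is shown as it existed before its removal):

    CREATE TYPE rt AS (a int, b json);
    SELECT * FROM json_populate_recordset(NULL::rt,
        '[{"a": 1, "b": {"c": 2}}]', true);
    -- before the fix, fields parsed before the nested object could vanish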
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
was still possible with default_with_oids=true. But the rest of the system,
including pg_dump, isn't prepared to handle foreign tables with OIDs
properly.
Backpatch down to 9.1, where foreign tables were introduced. It's possible
that there are databases out there that already have foreign tables with
OIDs. There isn't much we can do about that, but at least we can prevent
them from being created in the future.
Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
Kevin Grittner [Sat, 21 Jun 2014 14:17:14 +0000 (09:17 -0500)]
Fix documentation template for CREATE TRIGGER.
By using curly braces, the template had specified that one of
"NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"
was required on any CREATE TRIGGER statement, which is not
accurate. Changing to square brackets makes those clauses optional.
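The corrected fragment reads roughly as follows (shown as it appears in
the constraint-trigger part of the synopsis):

    [ NOT DEFERRABLE | [ DEFERRABLE ] [ INITIALLY IMMEDIATE | INITIALLY DEFERRED ] ]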
Joe Conway [Fri, 20 Jun 2014 19:22:34 +0000 (12:22 -0700)]
Clean up data conversion short-lived memory context.
dblink uses a short-lived data conversion memory context. However it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.
Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
Andres Freund [Fri, 20 Jun 2014 09:06:48 +0000 (11:06 +0200)]
Do all-visible handling in lazy_vacuum_page() outside its critical section.
Since fdf9e21196a lazy_vacuum_page() rechecks the all-visible status
of pages in the second pass over the heap. It does so inside a
critical section, but both visibilitymap_test() and
heap_page_is_all_visible() perform operations that should not happen
inside one. The former potentially performs IO and both potentially do
memory allocations.
To fix, simply move all the all-visible handling outside the critical
section. Doing so means that the PD_ALL_VISIBLE on the page won't be
included in the full page image of the HEAP2_CLEAN record anymore. But
that's fine, the flag will be set by the HEAP2_VISIBLE logged later.
Backpatch to 9.3 where the problem was introduced. The bug only came
to light due to the assertion added in 4a170ee9 and isn't likely to
cause problems in production scenarios. The worst outcome is an
avoidable PANIC restart.
This also gets rid of the difference in the order of operations
between master and standby mentioned in 2a8e1ac5.
Per reports from David Leverton and Keith Fiske in bug #10533.
Tom Lane [Fri, 20 Jun 2014 02:13:44 +0000 (22:13 -0400)]
Avoid leaking memory while evaluating arguments for a table function.
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
in the query-lifespan memory context. This is insignificant in simple
cases where the function relation is scanned only once; but if the function
is in a sub-SELECT or is on the inside of a nested loop, any memory
consumed during argument evaluation can add up quickly. (The potential for
trouble here had been foreseen long ago, per existing comments; but we'd
not previously seen a complaint from the field about it.) To fix, create
an additional temporary context just for this purpose.
Per an example from MauMau. Back-patch to all active branches.
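A sketch of the kind of query where per-call argument evaluation added up
(names hypothetical; LATERAL exists in the newer affected branches):

    SELECT * FROM outer_tab o,
        LATERAL generate_series(1, o.n) AS g;  -- function-in-FROM in a nested loop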
Kevin Grittner [Thu, 19 Jun 2014 13:51:54 +0000 (08:51 -0500)]
Fix calculation of PREDICATELOCK_MANAGER_LWLOCK_OFFSET.
Commit ea9df812d8502fff74e7bc37d61bdc7d66d77a7f failed to include
NUM_BUFFER_PARTITIONS in this offset, resulting in a bad offset.
Ultimately this threw off NUM_FIXED_LWLOCKS which is based on
earlier offsets, leading to memory allocation problems. It seems
likely to have also caused increased LWLOCK contention when
serializable transactions were used, because lightweight locks used
for that overlapped others.
Reported by Amit Kapila with analysis and fix.
Backpatch to 9.4, where the bug was introduced.
Fujii Masao [Thu, 19 Jun 2014 11:31:20 +0000 (20:31 +0900)]
Don't allow data_directory to be set in postgresql.auto.conf by ALTER SYSTEM.
Until now, data_directory could be set in both postgresql.conf and
postgresql.auto.conf. This could cause problematic situations such as a
circular definition. To avoid them, this commit forbids setting
data_directory in postgresql.auto.conf.
Backpatch this to 9.4 where the ALTER SYSTEM command was introduced.
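For example, something along these lines is now rejected outright:

    ALTER SYSTEM SET data_directory = '/some/other/location';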
Amit Kapila, reviewed by Abhijit Menon-Sen, with minor adjustments by me.
Noah Misch [Wed, 18 Jun 2014 13:21:50 +0000 (09:21 -0400)]
Fix the MSVC build process for uuid-ossp.
Catch up with commit b8cc8f94730610c0189aa82dfec4ae6ce9b13e34's
introduction of the HAVE_UUID_OSSP symbol to the principal build
process. Back-patch to 9.4, where that commit appeared.
Noah Misch [Sat, 14 Jun 2014 13:41:13 +0000 (09:41 -0400)]
Secure Unix-domain sockets of "make check" temporary clusters.
Any OS user able to access the socket can connect as the bootstrap
superuser and proceed to execute arbitrary code as the OS user running
the test. Protect against that by placing the socket in a temporary,
mode-0700 subdirectory of /tmp. The pg_regress-based test suites and
the pg_upgrade test suite were vulnerable; the $(prove_check)-based test
suites were already secure. Back-patch to 8.4 (all supported versions).
The hazard remains wherever the temporary cluster accepts TCP
connections, notably on Windows.
As a convenient side effect, this lets testing proceed smoothly in
builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values
like /var/run/postgresql are often unwritable to the build user.
Noah Misch [Sat, 14 Jun 2014 13:41:13 +0000 (09:41 -0400)]
Add mkdtemp() to libpgport.
This function is pervasive on free software operating systems; import
NetBSD's implementation. Back-patch to 8.4, like the commit that will
harness it.
Tom Lane [Fri, 13 Jun 2014 00:14:36 +0000 (20:14 -0400)]
Fix pg_restore's processing of old-style BLOB COMMENTS data.
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
of COMMENT commands into a single BLOB COMMENTS archive object. With
sufficiently many such comments, some of the commands would likely get
split across bufferloads when restoring, causing failures in
direct-to-database restores (though no problem would be evident in text
output). This is the same type of issue we have with table data dumped as
INSERT commands, and it can be fixed in the same way, by using a mini SQL
lexer to figure out where the command boundaries are. Fortunately, the
COMMENT commands are no more complex to lex than INSERTs, so we can just
re-use the existing lexer for INSERTs.
Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
Tom Lane [Thu, 12 Jun 2014 22:59:06 +0000 (18:59 -0400)]
Improve tuplestore's error messages for I/O failures.
We should report the errno when we get a failure from functions like
BufFileWrite. "ERROR: write failed" is unreasonably taciturn for a
case that's well within the realm of possibility; I've seen it a
couple times in the buildfarm recently, in situations that were
probably out-of-disk-space, but it'd be good to see the errno
to confirm it.
I think this code was originally written without assuming that
the buffile.c functions would return useful errno; but most other
callers *are* assuming that, and a quick look at the buffile code
gives no reason to suppose otherwise.
Also, a couple of the old messages were phrased on the assumption
that a short read might indicate a logic bug in tuplestore itself;
but that code's pretty well tested by now, so a filesystem-level
problem seems much more likely.
Tom Lane [Thu, 12 Jun 2014 21:51:47 +0000 (17:51 -0400)]
Adjust largeobject regression test to leave a couple of LOs behind.
Since we commonly test pg_dump/pg_restore by seeing whether they can dump
and restore the regression test database, it behooves us to include some
large objects in that test scenario.
I tried to include a comment on one of these large objects to improve
the test scenario further ... but it turns out that pg_upgrade fails to
preserve comments on large objects, and its regression test notices
the discrepancy. So uncommenting that COMMENT is a TODO for later.
Tom Lane [Thu, 12 Jun 2014 20:51:05 +0000 (16:51 -0400)]
Remove inadvertent copyright violation in largeobject regression test.
Robert Frost is no longer with us, but his copyrights still are, so
let's stop using "Stopping by Woods on a Snowy Evening" as test data
before somebody decides to sue us. Wordsworth is more safely dead.
Tom Lane [Thu, 12 Jun 2014 19:54:13 +0000 (15:54 -0400)]
Add regression test to prevent future breakage of legacy query in libpq.
Memorialize the expected output of the query that libpq has been using for
many years to get the OIDs of large-object support functions. Although
we really ought to change the way libpq does this, we must expect that
this query will remain in use in the field for the foreseeable future,
so until we're ready to break compatibility with old libpq versions
we'd better check the results stay the same. See the recent lo_create()
fiasco.