Robert Haas [Sun, 6 Jul 2014 18:52:25 +0000 (14:52 -0400)]
Remove swpb-based spinlock implementation for ARMv5 and earlier.
Per recent analysis by Andres Freund, this implementation is in fact
unsafe, because ARMv5 has weak memory ordering, which means that the
CPU could move loads or stores across the volatile store performed by
the default S_UNLOCK. We could try to fix this, but have no ARMv5
hardware to test on, so removing support seems better. We can still
support ARMv5 systems on GCC versions new enough to have built-in
atomics support for this platform, and can also re-add support for
the old way if someone has hardware that can be used to test a fix.
However, since the requirement to use a relatively-new GCC hasn't
been an issue for ARMv6 or ARMv7, which lack the swpb instruction
altogether, perhaps it won't be an issue for ARMv5 either.
Andres Freund [Sun, 6 Jul 2014 13:58:01 +0000 (15:58 +0200)]
Fix decoding of MULTI_INSERTs when rows other than the last are toasted.
When decoding the results of a HEAP2_MULTI_INSERT (currently only
generated by COPY FROM), toast columns for all but the last tuple
weren't replaced by their actual contents before being handed to the
output plugin. The reassembled toast datums were discarded after
every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE), which is correct
for plain inserts, updates, and deletes, but not for multi-inserts - there we
generate several REORDER_BUFFER_CHANGE_INSERTs for a single
xl_heap_multi_insert record.
To solve the problem, add a clear_toast_afterwards boolean to
ReorderBufferChange's union member that's used by modifications. All
row changes other than multi-inserts always set it to true;
multi_insert sets it only for the last change generated.
Add a regression test covering decoding of multi_inserts - there was
none at all before.
Backpatch to 9.4 where logical decoding was introduced.
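A minimal SQL sketch of the failure scenario (hypothetical table and
slot names; requires wal_level = logical and the contrib
test_decoding plugin):
    CREATE TABLE toasted (id serial PRIMARY KEY, data text);
    SELECT pg_create_logical_replication_slot('regression_slot',
                                              'test_decoding');
    -- COPY FROM generates HEAP2_MULTI_INSERT records; make a row
    -- other than the last large enough that 'data' gets toasted.
    COPY toasted (data) FROM STDIN;
    SELECT data FROM pg_logical_slot_get_changes('regression_slot',
                                                 NULL, NULL);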
Consistently pass an "unsigned char" to ctype.h functions.
The isxdigit() calls relied on undefined behavior. The isascii() call
was well-defined, but our prevailing style is to include the cast.
Back-patch to 9.4, where the isxdigit() calls were introduced.
Refactor pg_receivexlog main loop code, for readability.
Previously the code for receiving data and for polling the socket
was inlined in pg_receivexlog's main loop. This commit splits it out
into separate functions, which improves the readability of the main
loop and will make future pg_receivexlog-related patches simpler.
Tom Lane [Thu, 3 Jul 2014 22:47:09 +0000 (18:47 -0400)]
Don't cache per-group context across the whole query in orderedsetaggs.c.
Although nodeAgg.c currently uses the same per-group memory context for
all groups of a query, that might change in future. Avoid assuming it.
This costs us an extra AggCheckCallContext() call per group, but that's
pretty cheap and is probably good from a safety standpoint anyway.
Back-patch to 9.4 in case any third-party code copies this logic.
Tom Lane [Thu, 3 Jul 2014 22:25:33 +0000 (18:25 -0400)]
Redesign API presented by nodeAgg.c for ordered-set and similar aggregates.
The previous design exposed the input and output ExprContexts of the
Agg plan node, but work on grouping sets has suggested that we'll regret
doing that. Instead provide more narrowly-defined APIs that can be
implemented in multiple ways, namely a way to get a short-term memory
context and a way to register an aggregate shutdown callback.
Back-patch to 9.4 where the bad APIs were introduced, since we don't
want third-party code using these APIs and then having to change in 9.5.
Tom Lane [Thu, 3 Jul 2014 20:10:50 +0000 (16:10 -0400)]
Improve support for composite types in PL/Python.
Allow PL/Python functions to return arrays of composite types.
Also, fix the restriction that plpy.prepare/plpy.execute couldn't
handle query parameters or result columns of composite types.
In passing, adopt a saner arrangement for where to release the
tupledesc reference counts acquired via lookup_rowtype_tupdesc.
The callers of PLyObject_ToCompositeDatum were doing the lookups,
but then the releases happened somewhere down inside subroutines
of PLyObject_ToCompositeDatum, which is bizarre and bug-prone.
Instead release in the same function that acquires the refcount.
Ed Behn and Ronan Dunklau, reviewed by Abhijit Menon-Sen
Use a separate temporary directory for the Unix-domain socket
Creating the Unix-domain socket in the build directory can run into
name-length limitations. Therefore, create the socket file in the
default temporary directory of the operating system. Keep the temporary
data directory etc. in the build tree.
Kevin Grittner [Wed, 2 Jul 2014 20:20:30 +0000 (15:20 -0500)]
Smooth reporting of commit/rollback statistics.
If a connection committed or rolled back any transactions within a
PGSTAT_STAT_INTERVAL pacing interval without accessing any tables,
the reporting of those statistics would be held up until the
connection closed or until it ended a PGSTAT_STAT_INTERVAL interval
in which it had accessed a table. This could result in under-
reporting of transactions for an extended period, followed by a
spike in reported transactions.
While this is arguably a bug, the impact is minimal, primarily
affecting, and being affected by, monitoring software. It might
cause more confusion than benefit to change the existing behavior
in released stable branches, so apply only to master and the 9.4
beta.
Gurjeet Singh, with review and editing by Kevin Grittner,
incorporating suggested changes from Abhijit Menon-Sen and Tom
Lane.
Andres Freund [Wed, 2 Jul 2014 19:07:47 +0000 (21:07 +0200)]
Rename logical decoding's pg_llog directory to pg_logical.
The old name wasn't very descriptive of the directory's actual
contents, which are historical snapshots in the snapshots/
subdirectory and mapping data for rewritten tuples in mappings/.
There's been a fair amount of discussion about what would be a
good name. I'm settling for pg_logical because it's likely that
further data around logical decoding and replication will need saving
in the future.
Also add the missing entry for the directory into storage.sgml's list
of PGDATA contents.
Bumps catversion as the data directories won't be compatible.
Tom Lane [Wed, 2 Jul 2014 16:31:24 +0000 (12:31 -0400)]
Add some errdetail to checkRuleResultList().
This function wasn't originally thought to be really user-facing,
because converting a table to a view isn't something we expect people
to do manually. So not all that much effort was spent on the error
messages; in particular, while the code will complain that you got
the column types wrong it won't say exactly what they are. But since
we repurposed the code to also check compatibility of rule RETURNING
lists, it's definitely user-facing. It now seems worthwhile to add
errdetail messages showing exactly what the conflict is when there's
a mismatch of column names or types. This is prompted by bug #10836
from Matthias Raffelsieper, which might have been forestalled if the
error message had reported the wrong column type as being "record".
Back-patch to 9.4, but not into older branches where the set of
translatable error strings is supposed to be stable.
Prevent psql from issuing BEGIN before ALTER SYSTEM when AUTOCOMMIT is off.
The autocommit-off mode works by issuing an implicit BEGIN just before
any command that is not already in a transaction block and is not itself
a BEGIN or other transaction-control command, nor a command that
cannot be executed inside a transaction block. This commit prevents psql
from issuing such an implicit BEGIN before ALTER SYSTEM, because ALTER
SYSTEM is not allowed inside a transaction block.
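For example, in a psql session with autocommit off (a sketch; any
GUC settable via ALTER SYSTEM would do):
    \set AUTOCOMMIT off
    -- psql no longer sends an implicit BEGIN before this command:
    ALTER SYSTEM SET work_mem = '64MB';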
Tom Lane [Wed, 2 Jul 2014 00:10:38 +0000 (20:10 -0400)]
Allow CREATE/ALTER DATABASE to manipulate datistemplate and datallowconn.
Historically these database properties could be manipulated only by
manually updating pg_database, which is error-prone and only possible for
superusers. But there seems no good reason not to allow database owners to
set them for their databases, so invent CREATE/ALTER DATABASE options to do
that. Adjust a couple of places that were doing it the hard way to use the
commands instead.
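A sketch of the new options (hypothetical database name, assuming the
IS_TEMPLATE/ALLOW_CONNECTIONS spellings used by the new syntax):
    CREATE DATABASE appdb IS_TEMPLATE true ALLOW_CONNECTIONS false;
    ALTER DATABASE appdb IS_TEMPLATE false;
    ALTER DATABASE appdb ALLOW_CONNECTIONS true;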
Tom Lane [Tue, 1 Jul 2014 23:02:21 +0000 (19:02 -0400)]
Refactor CREATE/ALTER DATABASE syntax so options need not be keywords.
Most of the existing option names are keywords anyway, but we can get rid
of LC_COLLATE and LC_CTYPE as keywords known to the lexer/grammar. This
immediately reduces the size of the grammar tables by about 8KB, and will
save more when we add additional CREATE/ALTER DATABASE options in future.
A side effect of the implementation is that the CONNECTION LIMIT option
can now also be spelled CONNECTION_LIMIT. We choose not to document this,
however.
Vik Fearing, based on a suggestion by me; reviewed by Pavel Stehule
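That is, both of these are now accepted (hypothetical database name):
    ALTER DATABASE appdb CONNECTION LIMIT 10;    -- documented spelling
    ALTER DATABASE appdb CONNECTION_LIMIT 10;    -- undocumented synonym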
Tom Lane [Tue, 1 Jul 2014 21:51:53 +0000 (17:51 -0400)]
Remove some useless code in the configure script.
Almost ten years ago, commit e48322a6d6cfce1ec52ab303441df329ddbc04d1 broke
the logic in ACX_PTHREAD by looping through all the possible flags rather
than stopping with the first one that would work. This meant that
$acx_pthread_ok was no longer meaningful after the loop; it would usually
be "no", whether or not we'd found working thread flags. The reason nobody
noticed is that Postgres doesn't actually use any of the symbols set up
by the code after the loop. Rather than complicate things some more to
make it work as designed, let's just remove all that dead code, and thereby
save a few cycles in each configure run.
Tom Lane [Tue, 1 Jul 2014 15:22:43 +0000 (11:22 -0400)]
Fix inadequately-sized output buffer in contrib/unaccent.
The output buffer size in unaccent_lexize() was calculated as input string
length times pg_database_encoding_max_length(), which effectively assumes
that replacement strings aren't more than one character. While that was
all that we previously documented it to support, the code actually has
always allowed replacement strings of arbitrary length; so if you tried
to make use of longer strings, you were at risk of buffer overrun. To fix,
use an expansible StringInfo buffer instead of trying to determine the
maximum space needed a priori.
This would be a security issue if unaccent rules files could be installed
by unprivileged users; but fortunately they can't, so in the back branches
the problem can be labeled as improper configuration by a superuser.
Nonetheless, a memory stomp isn't a nice way of reacting to improper
configuration, so let's back-patch the fix.
Robert Haas [Tue, 1 Jul 2014 14:34:42 +0000 (10:34 -0400)]
Avoid copying index tuples when building an index.
The previous code, perhaps out of concern for avoiding memory leaks, formed
the tuple in one memory context and then copied it to another memory
context. However, this doesn't appear to be necessary, since
index_form_tuple and the functions it calls take precautions against
leaking memory. In my testing, building the tuple directly inside the
sort context shaves several percent off the index build time.
Rearrange things so we do that.
Patch by me. Review by Amit Kapila, Tom Lane, Andres Freund.
Tom Lane [Tue, 1 Jul 2014 02:03:37 +0000 (22:03 -0400)]
Issue a WARNING about invalid rule file format in contrib/unaccent.
We were already issuing a WARNING, albeit only elog not ereport, for
duplicate source strings; so warning rather than just being stoically
silent seems like the best thing to do here. Arguably both of these
complaints should be upgraded to ERRORs, but that might be more
behavioral change than people want.
Note: the faulty line is already printed via an errcontext hook,
so there's no need for more information than these messages provide.
Tom Lane [Tue, 1 Jul 2014 01:46:29 +0000 (21:46 -0400)]
Allow multi-character source strings in contrib/unaccent.
This could be useful in languages where diacritic signs are represented as
separate characters; more generally it supports using unaccent dictionaries
for substring substitutions beyond narrowly conceived "diacritic removal".
In any case, since the rule-file parser doesn't complain about
multi-character source strings, it behooves us to do something unsurprising
with them.
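An illustrative sketch, assuming a hypothetical rules file
german.rules in the tsearch_data directory containing a
multi-character mapping line such as "ß ss":
    CREATE EXTENSION unaccent;
    CREATE TEXT SEARCH DICTIONARY german_unaccent
        (TEMPLATE = unaccent, RULES = 'german');
    SELECT ts_lexize('german_unaccent', 'straße');  -- expected: {strasse}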
Tom Lane [Tue, 1 Jul 2014 00:51:26 +0000 (20:51 -0400)]
Allow empty replacement strings in contrib/unaccent.
This is useful in languages where diacritic signs are represented as
separate characters; it's also one step towards letting unaccent be used
for arbitrary substring substitutions.
In passing, improve the user documentation for unaccent, which was sadly
vague about some important details.
Bruce Momjian [Mon, 30 Jun 2014 23:55:55 +0000 (19:55 -0400)]
pg_upgrade: update C comments about pg_dumpall
There were some C comments that hadn't been updated since the switch
from using only pg_dumpall to using pg_dump and pg_dumpall, so update
them.
Also, don't bother using --schema-only for pg_dumpall --globals-only.
Noah Misch [Mon, 30 Jun 2014 20:59:19 +0000 (16:59 -0400)]
Don't prematurely free the BufferAccessStrategy in pgstat_heap().
This function continued to use it after heap_endscan() freed it. In
passing, don't explicitly create a strategy here. Instead, use the one
created by heap_beginscan_strat(), if any. Back-patch to 9.2, where use
of a BufferAccessStrategy here was introduced.
Andres Freund [Sun, 29 Jun 2014 15:08:04 +0000 (17:08 +0200)]
Check interrupts during logical decoding more frequently.
When reading large amounts of preexisting WAL during logical decoding
using the SQL interface, we could possibly fail to check interrupts in
due time. Similarly, the same could happen on systems with a very high
WAL volume while creating a new logical replication slot, independent
of the interface used.
Previously these checks were only performed in xlogreader's read_page
callbacks, while waiting for new WAL to be produced. That's not
sufficient though, if there's never a need to wait. Walsender's send
loop already contains an interrupt check.
Backpatch to 9.4 where the logical decoding feature was introduced.
Fix and enhance the assertion of no palloc's in a critical section.
The assertion failed if WAL_DEBUG or LWLOCK_STATS was enabled; fix that by
using separate memory contexts for the allocations made within those code
blocks.
This patch introduces a mechanism for marking any memory context as allowed
in a critical section. Previously ErrorContext was exempt as a special case.
Instead of a blanket exemption for the checkpointer process, exempt only the
memory context used for the pending ops hash table.
Tom Lane [Sun, 29 Jun 2014 17:50:58 +0000 (13:50 -0400)]
Remove use_json_as_text options from json_to_record/json_populate_record.
The "false" case was really quite useless since all it did was to throw
an error; a definition not helped in the least by making it the default.
Instead let's just have the "true" case, which emits nested objects and
arrays in JSON syntax. We might later want to provide the ability to
emit sub-objects in Postgres record or array syntax, but we'd be best off
to drive that off a check of the target field datatype, not a separate
argument.
For the functions newly added in 9.4, we can just remove the flag arguments
outright. We can't do that for json_populate_record[set], which already
existed in 9.3, but we can ignore the argument and always behave as if it
were "true". It helps that the flag arguments were optional and not
documented in any useful fashion anyway.
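A sketch of the resulting behavior (nested values now simply come
back as JSON):
    SELECT * FROM json_to_record('{"a": 1, "b": {"c": 2}}')
           AS x(a int, b json);
    -- returns a = 1 and b = {"c": 2}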
Andres Freund [Sun, 29 Jun 2014 12:15:09 +0000 (14:15 +0200)]
Add cluster_name GUC which is included in process titles if set.
When running several postgres clusters on one OS instance it's often
inconveniently hard to identify which "postgres" process belongs to
which postgres instance.
Add the cluster_name GUC, whose value will be included as part of the
process titles if set. With that, processes can be identified more
easily using tools like 'ps'.
To avoid problems with encoding mismatches between postgresql.conf,
consoles, and individual databases, replace non-ASCII chars in the name
with question marks. The length is limited to NAMEDATALEN to make it
less likely to truncate important information at the end of the
status.
Thomas Munro, with some adjustments by me and review by a host of people.
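For instance (hypothetical cluster name; cluster_name is a
postmaster-lifetime setting, so a restart is required, and the exact
title format shown is approximate):
    ALTER SYSTEM SET cluster_name = 'acme_primary';
    -- after restart, process titles look something like:
    --   postgres: acme_primary: checkpointer process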
Andres Freund [Sat, 28 Jun 2014 19:40:40 +0000 (21:40 +0200)]
Remove Alpha and Tru64 support.
Support for running postgres on Alpha hasn't been tested for a long
while. Due to Alpha's uniquely lax cache coherency model it's a hard
platform to develop for (especially blindly!), and it is thought
unlikely to work correctly at present.
As Alpha is the only supported architecture for Tru64, drop support for
it as well. Tru64 support ended in 2012, and it had been in
maintenance-only mode for much longer.
Also remove stray references to __ksr__ and ultrix defines.
Tom Lane [Sat, 28 Jun 2014 06:08:08 +0000 (23:08 -0700)]
Allow pushdown of WHERE quals into subqueries with window functions.
We can allow this even without any specific knowledge of the semantics
of the window function, so long as pushed-down quals will either accept
every row in a given window partition, or reject every such row. Because
window functions act only within a partition, such a case can't result
in changing the window functions' outputs for any surviving row.
Eliminating entire partitions in this way obviously can reduce the cost
of the window-function computations substantially.
The fly in the ointment is that it's hard to be entirely sure whether
this is true for an arbitrary qual condition. This patch allows pushdown
if (a) the qual references only partitioning columns, and (b) the qual
contains no volatile functions. We are at risk of incorrect results if
the qual can produce different answers for values that the partitioning
equality operator sees as equal. While it's not hard to invent cases
for which that can happen, it seems to seldom be a problem in practice,
since no one has complained about a similar assumption that we've had
for many years with respect to DISTINCT. The potential performance
gains seem to be worth the risk.
David Rowley, reviewed by Vik Fearing; some credit is due also to
Thomas Mayer who did considerable preliminary investigation.
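A sketch of a query that now benefits (hypothetical table and
columns; depname is the partitioning column, so the qual can be
pushed into the subquery and eliminate entire partitions):
    SELECT *
      FROM (SELECT depname, empno, salary,
                   rank() OVER (PARTITION BY depname
                                ORDER BY salary DESC) AS pos
              FROM empsalary) emp
     WHERE depname = 'sales';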
Alvaro Herrera [Fri, 27 Jun 2014 18:43:53 +0000 (14:43 -0400)]
Have multixact be truncated by checkpoint, not vacuum
Instead of truncating pg_multixact at vacuum time, do it only at
checkpoint time. The reason for doing it this way is twofold: first, we
want it to delete only segments that we're certain will not be required
if there's a crash immediately after the removal; and second, we want to
do it relatively often so that older files are not left behind if
there's an untimely crash.
Per my proposal in
http://www.postgresql.org/message-id/20140626044519.GJ7340@eldon.alvh.no-ip.org
we now execute the truncation in the checkpointer process rather than as
part of vacuum. Vacuum is only in charge of maintaining in shared
memory the value to which it's possible to truncate the files; that
value is stored as part of checkpoints also, and so upon recovery we can
reuse the same value to re-execute truncate and reset the
oldest-value-still-safe-to-use to one known to remain after truncation.
Per bug reported by Jeff Janes in the course of his tests involving
bug #8673.
While at it, update some comments that hadn't been updated since
multixacts were changed.
Backpatch to 9.3, where persistency of pg_multixact files was
introduced by commit 0ac5ad5134f2.
Alvaro Herrera [Fri, 27 Jun 2014 18:43:46 +0000 (14:43 -0400)]
Don't allow relminmxid to go backwards during VACUUM FULL
We were allowing a table's pg_class.relminmxid value to move backwards
when heaps were swapped by VACUUM FULL or CLUSTER. There is a
similar protection against relfrozenxid going backwards, which we
neglected to clone when the multixact stuff was rejiggered by commit 0ac5ad5134f276.
Backpatch to 9.3, where relminmxid was introduced.
As reported by Heikki in
http://www.postgresql.org/message-id/52401AEA.9000608@vmware.com
Don't assert MultiXactIdIsRunning if the multi came from a tuple that
had been share-locked and later copied over to the new cluster by
pg_upgrade. Doing that causes an error to be raised unnecessarily:
MultiXactIdIsRunning is not open to the possibility that its argument
came from a pg_upgraded tuple, and all its other callers are already
checking; but such multis cannot, obviously, have transactions still
running, so the assert is pointless.
Noticed while investigating the bogus pg_multixact/offsets/0000 file
left over by pg_upgrade, as reported by Andres Freund in
http://www.postgresql.org/message-id/20140530121631.GE25431@alap3.anarazel.de
Backpatch to 9.3, as the commit that introduced the buglet.
Tom Lane [Fri, 27 Jun 2014 18:08:48 +0000 (11:08 -0700)]
Disallow pushing volatile qual expressions down into DISTINCT subqueries.
A WHERE clause applied to the output of a subquery with DISTINCT should
theoretically be applied only once per distinct row; but if we push it
into the subquery then it will be evaluated at each row before duplicate
elimination occurs. If the qual is volatile this can give rise to
observably wrong results, so don't do that.
While at it, refactor a little bit to allow subquery_is_pushdown_safe
to report more than one kind of restrictive condition without indefinitely
expanding its argument list.
Although this is a bug fix, it seems unwise to back-patch it into released
branches, since it might de-optimize plans for queries that aren't giving
any trouble in practice. So apply to 9.4 but not further back.
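A sketch of the sort of query affected (hypothetical table name):
    SELECT * FROM (SELECT DISTINCT n FROM tab) ss
     WHERE n > 0 AND random() < 0.5;
    -- the volatile random() qual must now stay above the DISTINCT;
    -- the non-volatile n > 0 qual can still be pushed down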
Tom Lane [Thu, 26 Jun 2014 23:22:15 +0000 (16:22 -0700)]
Get rid of bogus separate pg_proc entries for json_extract_path operators.
These should not have existed to begin with, but there was apparently some
misunderstanding of the purpose of the opr_sanity regression test item
that checks for operator implementation functions with their own comments.
The idea there is to check for unintentional violations of the rule that
operator implementation functions shouldn't be documented separately
... but for these functions, that is in fact what we want, since the
variadic option is useful and not accessible via the operator syntax.
Get rid of the extra pg_proc entries and fix the regression test and
documentation to be explicit about what we're doing here.
Tom Lane [Thu, 26 Jun 2014 17:40:50 +0000 (10:40 -0700)]
Forward-patch regression test for "could not find pathkey item to sort".
Commit a87c729153e372f3731689a7be007bc2b53f1410 already fixed the bug this
is checking for, but the regression test case it added didn't cover this
scenario. Since we managed to miss the fact that there was a bug at all,
it seems like a good idea to propagate the extra test case forward to HEAD.
Fujii Masao [Thu, 26 Jun 2014 05:27:27 +0000 (14:27 +0900)]
Remove obsolete example of CSV log file name from log_filename document.
Commit 7380b63 changed log_filename so that the epoch was not appended
to it when no format specifier is given. But an example of a CSV log
file name with the epoch was still left in the log_filename
documentation. This commit removes that obsolete example.
This commit also documents the defaults of log_directory and
log_filename.
Tom Lane [Wed, 25 Jun 2014 22:25:22 +0000 (15:25 -0700)]
Rationalize error messages within jsonfuncs.c.
I noticed that the functions in jsonfuncs.c sometimes printed error
messages that claimed I'd called some other function. Investigation showed
that this was from repurposing code into "worker" functions without taking
much care as to whether it would mention the right SQL-level function if it
threw an error. Moreover, there was a weird mishmash of messages that
contained a fixed function name, messages that used %s for a function name,
and messages that constructed a function name out of spare parts, like
"json%s_populate_record" (which, quite aside from being ugly as sin, wasn't
even sufficient to cover all the cases). This would put an undue burden on
our long-suffering translators. Standardize on inserting the SQL function
name with %s so as to reduce the number of translatable strings, and pass
function names around as needed to make sure we can report the right one.
Fix up some gratuitous variations in wording, too.
Tom Lane [Wed, 25 Jun 2014 18:22:18 +0000 (11:22 -0700)]
Cosmetic improvements in jsonfuncs.c.
Re-pgindent, remove a lot of random vertical whitespace, remove useless
(if not counterproductive) inline markings, get rid of unnecessary
zero-padding of strings for hashtable searches. No functional changes.
Tom Lane [Wed, 25 Jun 2014 04:22:40 +0000 (21:22 -0700)]
Fix handling of nested JSON objects in json_populate_recordset and friends.
populate_recordset_object_start() improperly created a new hash table
(overwriting the link to the existing one) if called at nest levels
greater than one. This resulted in previous fields not appearing in
the final output, as reported by Matti Hameister in bug #10728.
In 9.4 the problem also affects json_to_recordset.
This perhaps missed detection earlier because the default behavior is to
throw an error for nested objects: you have to pass use_json_as_text = true
to see the problem.
In addition, fix query-lifespan leakage of the hashtable created by
json_populate_record(). This is pretty much the same problem recently
fixed in dblink: creating an intended-to-be-temporary context underneath
the executor's per-tuple context isn't enough to make it go away at the
end of the tuple cycle, because MemoryContextReset is not
MemoryContextResetAndDeleteChildren.
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
was still possible with default_with_oids=true. But the rest of the system,
including pg_dump, isn't prepared to handle foreign tables with OIDs
properly.
Backpatch down to 9.1, where foreign tables were introduced. It's possible
that there are databases out there that already have foreign tables with
OIDs. There isn't much we can do about that, but at least we can prevent
them from being created in the future.
Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
Robert Haas [Tue, 24 Jun 2014 01:45:21 +0000 (21:45 -0400)]
Check for interrupts during tuple-insertion loops.
Normally, this won't matter too much; but if I/O is really slow, for
example because the system is overloaded, we might write many pages
before checking for interrupts. A single toast insertion might
write up to 1GB of data, and a multi-insert could write hundreds
of tuples (and their corresponding TOAST data).
Improve tab-completion of DROP and ALTER ENABLE/DISABLE on triggers and rules.
At "DROP RULE/TRIGGER triggername ON ...", tab-complete tables that have
a rule/trigger with that name.
At "ALTER TABLE tablename ENABLE/DISABLE TRIGGER/RULE ...", tab-complete to
rules/triggers on that table. Previously, we would tab-complete to all
rules or triggers, not just those that are on that table.
Also, filter out internal RI triggers from the list. You can't DROP them,
and enabling/disabling them is such a rare (and dangerous) operation that
it seems better to hide them.
Kevin Grittner [Sat, 21 Jun 2014 14:17:04 +0000 (09:17 -0500)]
Fix documentation template for CREATE TRIGGER.
By using curly braces, the template had specified that one of
"NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"
was required on any CREATE TRIGGER statement, which is not
accurate. Change to square brackets makes that optional.
Tom Lane [Fri, 20 Jun 2014 22:20:56 +0000 (18:20 -0400)]
Add Asserts to verify that catalog cache keys are unique and not null.
The catcache code is effectively assuming this already, so let's insist
that the catalog and index are actually declared that way.
Having done that, the comments in indexing.h about non-unique indexes
not being used for catcaches are completely redundant, not just mostly so;
and we didn't have such a comment for every such index anyway. So let's
get rid of them.
Per discussion of whether we should identify primary keys for catalogs.
We might or might not take that further step, but this change in itself
will allow quicker detection of misdeclared catcaches, so it seems worth
doing in any case.
Joe Conway [Fri, 20 Jun 2014 19:22:13 +0000 (12:22 -0700)]
Clean up data conversion short-lived memory context.
dblink uses a short-lived data conversion memory context. However, it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.
Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
Andres Freund [Fri, 20 Jun 2014 09:06:42 +0000 (11:06 +0200)]
Do all-visible handling in lazy_vacuum_page() outside its critical section.
Since commit fdf9e21196a, lazy_vacuum_page() rechecks the all-visible status
of pages in the second pass over the heap. It does so inside a
critical section, but both visibilitymap_test() and
heap_page_is_all_visible() perform operations that should not happen
inside one. The former potentially performs IO and both potentially do
memory allocations.
To fix, simply move all the all-visible handling outside the critical
section. Doing so means that the PD_ALL_VISIBLE flag on the page won't be
included in the full-page image of the HEAP2_CLEAN record anymore. But
that's fine; the flag will be set by the HEAP2_VISIBLE record logged later.
Backpatch to 9.3 where the problem was introduced. The bug only came
to light due to the assertion added in 4a170ee9 and isn't likely to
cause problems in production scenarios. The worst outcome is an
avoidable PANIC restart.
This also gets rid of the difference in the order of operations
between master and standby mentioned in 2a8e1ac5.
Per reports from David Leverton and Keith Fiske in bug #10533.
Andres Freund [Fri, 20 Jun 2014 09:06:42 +0000 (11:06 +0200)]
Don't allow to disable backend assertions via the debug_assertions GUC.
The existence of the assert_enabled variable (backing the
debug_assertions GUC) reduced the amount of knowledge some static code
checkers (like coverity and various compilers) could infer from the
existence of the assertion. That could have been solved by optionally
removing the assert_enabled variable from the Assert() et al macros
at compile time when some special macro is defined, but the resulting
complication doesn't seem to be worth the gain from having
debug_assertions. Recompiling is fast enough.
The debug_assertions GUC is still available, but read-only, as it's
useful when diagnosing problems. The commandline/client startup option
-A, which previously also allowed enabling/disabling assertions, has
been removed as it no longer serves a purpose.
While at it, reduce code duplication in bufmgr.c and localbuf.c
assertions checking for spurious buffer pins. That code had to be
reindented anyway to cope with the assert_enabled removal.
Tom Lane [Fri, 20 Jun 2014 02:13:41 +0000 (22:13 -0400)]
Avoid leaking memory while evaluating arguments for a table function.
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
in the query-lifespan memory context. This is insignificant in simple
cases where the function relation is scanned only once; but if the function
is in a sub-SELECT or is on the inside of a nested loop, any memory
consumed during argument evaluation can add up quickly. (The potential for
trouble here had been foreseen long ago, per existing comments; but we'd
not previously seen a complaint from the field about it.) To fix, create
an additional temporary context just for this purpose.
Per an example from MauMau. Back-patch to all active branches.
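A sketch of the kind of query where this mattered (hypothetical
table; the function's arguments are re-evaluated for every row of the
outer relation, so any memory leaked there used to accumulate for the
whole query):
    SELECT t.id, g
      FROM big_table t,
           LATERAL generate_series(1, t.n) AS g;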
Noah Misch [Fri, 20 Jun 2014 01:41:26 +0000 (21:41 -0400)]
Let installcheck-world pass against a server requiring a password.
Give passwords to each user created in support of an ECPG connection
test case. Use SET SESSION AUTHORIZATION, not a fresh connection, to
reduce privileges during a dblink test case.
To test against such a server, both the "make installcheck-world"
environment and the postmaster environment must provide the default
user's password; $PGPASSFILE is the principal way to do so. (The
postmaster environment needs it for dblink and postgres_fdw tests.)
Kevin Grittner [Thu, 19 Jun 2014 13:40:37 +0000 (08:40 -0500)]
Fix calculation of PREDICATELOCK_MANAGER_LWLOCK_OFFSET.
Commit ea9df812d8502fff74e7bc37d61bdc7d66d77a7f failed to include
NUM_BUFFER_PARTITIONS in this offset, resulting in a bad offset.
Ultimately this threw off NUM_FIXED_LWLOCKS which is based on
earlier offsets, leading to memory allocation problems. It seems
likely to have also caused increased LWLOCK contention when
serializable transactions were used, because lightweight locks used
for that overlapped others.
Reported by Amit Kapila with analysis and fix.
Backpatch to 9.4, where the bug was introduced.
Fujii Masao [Thu, 19 Jun 2014 11:31:20 +0000 (20:31 +0900)]
Don't allow data_directory to be set in postgresql.auto.conf by ALTER SYSTEM.
Until now, data_directory could be set in both postgresql.conf and
postgresql.auto.conf. This could cause problematic situations such as a
circular definition. To avoid them, this commit forbids setting
data_directory in postgresql.auto.conf.
Backpatch this to 9.4 where ALTER SYSTEM command was introduced.
Amit Kapila, reviewed by Abhijit Menon-Sen, with minor adjustments by me.
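For example, this is now rejected (the exact error wording is
approximate):
    ALTER SYSTEM SET data_directory = '/srv/pgdata';
    -- ERROR:  parameter "data_directory" cannot be changed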
Tom Lane [Thu, 19 Jun 2014 00:12:47 +0000 (20:12 -0400)]
Improve our mechanism for controlling the Linux out-of-memory killer.
Arrange for postmaster child processes to respond to two environment
variables, PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE, to determine whether
they reset their OOM score adjustments and if so to what. This is superior
to the previous design involving #ifdef's in several ways. The behavior is
now available in a default build, and both ends of the adjustment --- the
original adjustment of the postmaster's level and the subsequent
readjustment by child processes --- can now be controlled in one place,
namely the postmaster launch script. So it's no longer necessary for the
launch script to act on faith that the server was compiled with the
appropriate options. In addition, if someone wants to use an OOM score
other than zero for the child processes, that doesn't take a recompile
anymore; and we no longer have to cater separately to the two different
historical kernel APIs for this adjustment.
Tom Lane [Wed, 18 Jun 2014 17:22:25 +0000 (13:22 -0400)]
Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ...
This SQL-standard feature allows a sub-SELECT yielding multiple columns
(but only one row) to be used to compute the new values of several columns
to be updated. While the same results can be had with an independent
sub-SELECT per column, such a workaround can require a great deal of
duplicated computation.
The standard actually says that the source for a multi-column assignment
could be any row-valued expression. The implementation used here is
tightly tied to our existing sub-SELECT support and can't handle other
cases; the Bison grammar would have some issues with them too. However,
I don't feel too bad about this since other cases can be converted into
sub-SELECTs. For instance, "SET (a,b,c) = row_valued_function(x)" could
be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
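A sketch of the new syntax (hypothetical tables and columns):
    UPDATE accounts a
       SET (contact_first_name, contact_last_name) =
           (SELECT s.first_name, s.last_name
              FROM salesmen s
             WHERE s.id = a.sales_person_id);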
Noah Misch [Wed, 18 Jun 2014 13:21:50 +0000 (09:21 -0400)]
Fix the MSVC build process for uuid-ossp.
Catch up with commit b8cc8f94730610c0189aa82dfec4ae6ce9b13e34's
introduction of the HAVE_UUID_OSSP symbol to the principal build
process. Back-patch to 9.4, where that commit appeared.
Tom Lane [Mon, 16 Jun 2014 19:55:05 +0000 (15:55 -0400)]
Avoid recursion when processing simple lists of AND'ed or OR'ed clauses.
Since most of the system thinks AND and OR are N-argument expressions
anyway, let's have the grammar generate a representation of that form when
dealing with input like "x AND y AND z AND ...", rather than generating
a deeply-nested binary tree that just has to be flattened later by the
planner. This avoids stack overflow in parse analysis when dealing with
queries having more than a few thousand such clauses; and in any case it
removes some rather unsightly inconsistencies, since some parts of parse
analysis were generating N-argument ANDs/ORs already.
It's still possible to get a stack overflow with weirdly parenthesized
input, such as "x AND (y AND (z AND ( ... )))", but such cases are not
mainstream usage. The maximum depth of parenthesization is already
limited by Bison's stack in such cases, anyway, so that the limit is
probably fairly platform-independent.
Patch originally by Gurjeet Singh, heavily revised by me
Noah Misch [Sat, 14 Jun 2014 13:41:13 +0000 (09:41 -0400)]
Secure Unix-domain sockets of "make check" temporary clusters.
Any OS user able to access the socket can connect as the bootstrap
superuser and proceed to execute arbitrary code as the OS user running
the test. Protect against that by placing the socket in a temporary,
mode-0700 subdirectory of /tmp. The pg_regress-based test suites and
the pg_upgrade test suite were vulnerable; the $(prove_check)-based test
suites were already secure. Back-patch to 8.4 (all supported versions).
The hazard remains wherever the temporary cluster accepts TCP
connections, notably on Windows.
As a convenient side effect, this lets testing proceed smoothly in
builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values
like /var/run/postgresql are often unwritable to the build user.
Noah Misch [Sat, 14 Jun 2014 13:41:13 +0000 (09:41 -0400)]
Add mkdtemp() to libpgport.
This function is pervasive on free software operating systems; import
NetBSD's implementation. Back-patch to 8.4, like the commit that will
harness it.
Tom Lane [Fri, 13 Jun 2014 04:02:56 +0000 (00:02 -0400)]
Improve predtest.c's ability to reason about operator expressions.
We have for a long time been able to prove implications and refutations
between clauses structured like "expr op const" with the same subexpression
and btree-related operators; for example that "x < 4" implies "x <= 5".
The implication machinery is needed to detect usability of partial indexes,
and the refutation machinery is needed to implement constraint exclusion.
This patch extends that machinery to make proofs for operator expressions
involving the same two immutable-but-not-necessarily-just-Const input
expressions, ie does "expr1 op1 expr2" prove or refute "expr1 op2 expr2" or
"expr2 op2 expr1"? An important example is that we can now prove "x = y"
given "y = x", which formerly the code could not deduce unless x or y was a
constant. We can make use of the system's knowledge of operator commutator
and negator pairs, and can also make use of btree opclass relationships,
for example "x < y" implies "x <= y" and refutes "x > y" (notice that
neither of these could be proven just from commutator or negator links).
Inspired by a gripe from Brian Dunavant. This seems more like a new
feature than a bug fix, though, so no back-patch.
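An illustrative sketch (hypothetical table and index; the partial
index becomes usable because "x < y" now provably implies "x <= y"):
    CREATE TABLE t (x int, y int);
    CREATE INDEX t_partial ON t (x) WHERE x <= y;
    SELECT * FROM t WHERE x < y;   -- may now use t_partial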
Tom Lane [Fri, 13 Jun 2014 00:14:32 +0000 (20:14 -0400)]
Fix pg_restore's processing of old-style BLOB COMMENTS data.
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
of COMMENT commands into a single BLOB COMMENTS archive object. With
sufficiently many such comments, some of the commands would likely get
split across bufferloads when restoring, causing failures in
direct-to-database restores (though no problem would be evident in text
output). This is the same type of issue we have with table data dumped as
INSERT commands, and it can be fixed in the same way, by using a mini SQL
lexer to figure out where the command boundaries are. Fortunately, the
COMMENT commands are no more complex to lex than INSERTs, so we can just
re-use the existing lexer for INSERTs.
Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
Tom Lane [Thu, 12 Jun 2014 22:59:06 +0000 (18:59 -0400)]
Improve tuplestore's error messages for I/O failures.
We should report the errno when we get a failure from functions like
BufFileWrite. "ERROR: write failed" is unreasonably taciturn for a
case that's well within the realm of possibility; I've seen it a
couple times in the buildfarm recently, in situations that were
probably out-of-disk-space, but it'd be good to see the errno
to confirm it.
I think this code was originally written without assuming that
the buffile.c functions would return useful errno; but most other
callers *are* assuming that, and a quick look at the buffile code
gives no reason to suppose otherwise.
Also, a couple of the old messages were phrased on the assumption
that a short read might indicate a logic bug in tuplestore itself;
but that code's pretty well tested by now, so a filesystem-level
problem seems much more likely.
Tom Lane [Thu, 12 Jun 2014 21:51:47 +0000 (17:51 -0400)]
Adjust largeobject regression test to leave a couple of LOs behind.
Since we commonly test pg_dump/pg_restore by seeing whether they can dump
and restore the regression test database, it behooves us to include some
large objects in that test scenario.
I tried to include a comment on one of these large objects to improve
the test scenario further ... but it turns out that pg_upgrade fails to
preserve comments on large objects, and its regression test notices
the discrepancy. So uncommenting that COMMENT is a TODO for later.
Tom Lane [Thu, 12 Jun 2014 20:51:02 +0000 (16:51 -0400)]
Remove inadvertent copyright violation in largeobject regression test.
Robert Frost is no longer with us, but his copyrights still are, so
let's stop using "Stopping by Woods on a Snowy Evening" as test data
before somebody decides to sue us. Wordsworth is more safely dead.
Tom Lane [Thu, 12 Jun 2014 19:54:13 +0000 (15:54 -0400)]
Add regression test to prevent future breakage of legacy query in libpq.
Memorialize the expected output of the query that libpq has been using for
many years to get the OIDs of large-object support functions. Although
we really ought to change the way libpq does this, we must expect that
this query will remain in use in the field for the foreseeable future,
so until we're ready to break compatibility with old libpq versions
we'd better check the results stay the same. See the recent lo_create()
fiasco.
Tom Lane [Thu, 12 Jun 2014 19:39:09 +0000 (15:39 -0400)]
Rename lo_create(oid, bytea) to lo_from_bytea().
The previous naming broke the query that libpq's lo_initialize() uses
to collect the OIDs of the server-side functions it requires, because
that query effectively assumes that there is only one function named
lo_create in the pg_catalog schema (and likewise only one lo_open, etc).
While we should certainly make libpq more robust about this, the naive
query will remain in use in the field for the foreseeable future, so it
seems the only workable choice is to use a different name for the new
function. lo_from_bytea() won a small straw poll.
Back-patch into 9.4 where the new function was introduced.
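Usage sketch (passing 0 as the first argument lets the server assign
a fresh large-object OID):
    SELECT lo_from_bytea(0, '\x0102030405');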
Tom Lane [Thu, 12 Jun 2014 17:12:53 +0000 (13:12 -0400)]
Remove unnecessary output expressions from unflattened subqueries.
If a sub-select-in-FROM gets flattened into the upper query, then we
naturally get rid of any output columns that are defined in the sub-select
text but not actually used in the upper query. However, this doesn't
happen when it's not possible to flatten the subquery, for example because
it contains GROUP BY, LIMIT, etc. Allowing the subquery to compute useless
output columns is often fairly harmless, but sometimes it has significant
performance cost: the unused output might be an expensive expression,
or it might be a Var from a relation that we could remove entirely (via
the join-removal logic) if only we realized that we didn't really need
that Var. Situations like this are common when expanding views, so it
seems worth taking the trouble to detect and remove unused outputs.
Because the upper query's Var numbering for subquery references depends on
positions in the subquery targetlist, we don't want to renumber the items
we leave behind. Instead, we can implement "removal" by replacing the
unwanted expressions with simple NULL constants. This wastes a few cycles
at runtime, but not enough to justify more work in the planner.
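A sketch of the situation (hypothetical names; the GROUP BY prevents
flattening, and the unused output column is replaced by a NULL
constant):
    CREATE VIEW v AS
        SELECT x, expensive_fn(y) AS e FROM tab GROUP BY x, y;
    SELECT v.x FROM v;   -- expensive_fn(y) is no longer evaluated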
Andres Freund [Thu, 12 Jun 2014 11:23:46 +0000 (13:23 +0200)]
Consistency improvements for slot and decoding code.
Change the order of checks in similar functions to be the same; remove
a parameter that's not needed anymore; rename a memory context and
expand a couple of comments.
Noah Misch [Wed, 11 Jun 2014 23:50:57 +0000 (19:50 -0400)]
Have configuration templates augment, not replace, LDFLAGS.
This preserves user-specified LDFLAGS; we already kept user-specified
CFLAGS and CPPFLAGS. Given the shortage of complaints and the fact that
any problem caused is likely to appear at build time, no back-patch.
Noah Misch [Wed, 11 Jun 2014 23:50:41 +0000 (19:50 -0400)]
Consistently define BUILDING_DLL during builds of src/port for Windows.
The MSVC build process already did so; this fixes the principal build
process to match. Both processes already did likewise for src/common.
This lets server builds of src/port reference postgres.exe data symbols.
Tom Lane [Wed, 11 Jun 2014 02:48:16 +0000 (22:48 -0400)]
Fix ancient encoding error in hungarian.stop.
When we grabbed this file off the Snowball project's website, we mistakenly
supposed that it was in LATIN1 encoding, but evidently it was actually in
LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in
LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in
bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too,
but there aren't any of those in the file. Other characters used in the
file have the same codes in LATIN1 and LATIN2, which no doubt helped hide
the problem for so long.
The error is not only ours: the Snowball project also was confused about
which encoding is required for Hungarian. But dealing with that will
require source-code changes that I'm not at all sure we'll wish to
back-patch. Fixing the stopword file seems reasonably safe to back-patch
however.