Alvaro Herrera [Fri, 4 Oct 2013 13:32:48 +0000 (10:32 -0300)]
isolationtester: Allow tuples to be returned in more places
Previously, isolationtester would forbid returning tuples in
session-specific teardown (but not global teardown), as well as in
global setup. Allow these places to return tuples, too.
It makes for cleaner code to have separate Get/Add functions for PostingItems
and ItemPointers. A few callsites that have to deal with both types need to
be duplicated because of this, but all the callers have to know which one
they're dealing with anyway. Overall, this reduces the amount of casting
required.
Extracted from Alexander Korotkov's larger patch to change the data page
format.
The cancel handler was uselessly set up even before the first connection
was opened. By setting it up afterwards, the user can use Ctrl+C to
abort psql if the initial connection attempt hangs.
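A hedged sketch of the reordering in psql's startup (names come from psql's
startup.c; this is an illustration, not the exact committed hunk):

    /* Sketch: connect first, install the cancel handler afterwards, so
     * Ctrl+C during a hanging connection attempt falls through to the
     * default handler and aborts psql. */
    pset.db = PQconnectdbParams(keywords, values, true);
    /* ... password prompting / retry loop elided ... */
    setup_cancel_handler();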
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Ryan Kelly <rpkelly22@gmail.com>
Alvaro Herrera [Tue, 1 Oct 2013 20:36:15 +0000 (17:36 -0300)]
Remove broken PGXS code for pg_xlogdump
With the PGXS boilerplate in place, pg_xlogdump currently fails with an
ominous error message that certain targets cannot be built because
certain files do not exist. Remove that and instead throw a quick error
message alerting the user of the actual problem, which should be easier
to diagnose that the statu quo.
In bms_add_member(), use repalloc() if the bms needs to be enlarged.
Previously bms_add_member() would palloc a whole new copy of the existing
set, copy the words, and pfree the old one. repalloc() is potentially much
faster, and more importantly, this is less surprising if CurrentMemoryContext
is not the same as the context the old set is in. bms_add_member() still
allocates a new bitmapset in CurrentMemoryContext if NULL is passed as
argument, but that is a lot less likely to induce bugs.
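A hedged sketch of the enlarge-in-place pattern described above (close in
spirit to, but not necessarily identical with, the committed code):

    /* Sketch: grow the existing set with repalloc() instead of
     * palloc + copy + pfree, so the result stays in the memory context
     * the set was originally allocated in. */
    if (wordnum >= a->nwords)
    {
        int     oldnwords = a->nwords;
        int     i;

        a = (Bitmapset *) repalloc(a, BITMAPSET_SIZE(wordnum + 1));
        a->nwords = wordnum + 1;
        /* zero out the newly added words */
        for (i = oldnwords; i < a->nwords; i++)
            a->words[i] = 0;
    }
    a->words[wordnum] |= ((bitmapword) 1 << bitnum);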
Fix snapshot leak if lo_open called on non-existent object.
lo_open registers the currently active snapshot, and checks if the
large object exists after that. Normally, snapshots registered by lo_open
are unregistered at end of transaction when the lo descriptor is closed, but
if we error out before the lo descriptor is added to the list of open
descriptors, it is leaked. Fix by moving the snapshot registration to after
checking if the large object exists.
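A hedged sketch of the new ordering (names approximate the backend's
large-object code, not the committed hunk):

    /* Sketch: verify that the object exists before registering the
     * snapshot, so erroring out here cannot leak a registered snapshot. */
    if (!LargeObjectExists(lobjId))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("large object %u does not exist", lobjId)));
    obj_desc->snapshot = RegisterSnapshot(GetActiveSnapshot());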
Reported by Pavel Stehule. Backpatch to 8.4. The snapshot registration
system was introduced in 8.4, so prior versions are not affected (and not
supported, anyway).
Andrew Dunstan [Sun, 29 Sep 2013 21:41:56 +0000 (17:41 -0400)]
Use a new hstore extension version for added json functions.
This should have been done when the json functionality was added to
hstore in 9.3.0. To handle this correctly, the upgrade script uses
conditional logic, via plpgsql in a DO block, to add the two new
functions and the new cast. If hstore_to_json_loose is detected as
already present and dependent on the hstore extension, nothing is done.
This requires that plpgsql be installed in the database.
People who have installed the earlier and spurious 1.1 version of hstore
will need to do:
Fix spurious warning after vacuuming a page on a table with no indexes.
There is a rare race condition, when a transaction that inserted a tuple
aborts while vacuum is processing the page containing the inserted tuple.
Vacuum prunes the page first, which normally removes any dead tuples, but
if the inserting transaction aborts right after that, the loop after
pruning will see a dead tuple and remove it instead. That's OK, but if the
page is on a table with no indexes, and the page becomes completely empty
after removing the dead tuple (or tuples) on it, it will be immediately
marked as all-visible. That's fine too, but the sanity check in vacuum would
throw a warning because it thinks that the page contains dead tuples and
was nevertheless marked as all-visible, even though it just vacuumed away
the dead tuples and so it doesn't actually contain any.
Spotted this while reading the code. It's difficult to hit the race
condition otherwise, but can be done by putting a breakpoint after the
heap_page_prune() call.
Backpatch all the way to 8.4, where this code first appeared.
B-tree operators are not allowed to leak memory into the current memory
context. Range_cmp leaked detoasted copies of the arguments. That caused
a quick out-of-memory error when creating an index on a range column.
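The fix follows the usual convention for btree support functions; a hedged
sketch of the pattern:

    /* Sketch: free any detoasted copies before returning, so the
     * comparator does not leak into the caller's memory context. */
    Datum
    range_cmp_sketch(PG_FUNCTION_ARGS)
    {
        RangeType  *r1 = PG_GETARG_RANGE(0);
        RangeType  *r2 = PG_GETARG_RANGE(1);
        int         cmp;

        cmp = 0;            /* ... compare the range bounds here ... */

        PG_FREE_IF_COPY(r1, 0);
        PG_FREE_IF_COPY(r2, 1);
        PG_RETURN_INT32(cmp);
    }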
Use @libdir@ in both of regress/{input,output}/security_label.source
Though @libdir@ almost always matches @abs_builddir@ in this context,
the test could only fail if they differed. Back-patch to 9.1, where the
test was introduced.
Robert Haas [Mon, 23 Sep 2013 17:31:22 +0000 (13:31 -0400)]
Don't allow system columns in CHECK constraints, except tableoid.
Previously, arbitrary system columns could be mentioned in table
constraints, but they were not correctly checked at runtime, because
the values weren't actually set correctly in the tuple. Since it
seems easy enough to initialize the table OID properly, do that,
and continue allowing that column, but disallow the rest unless and
until someone figures out a way to make them work properly.
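A hedged illustration of the kind of initialization involved (variable names
are approximations, not the committed hunk):

    /* Sketch: make the tableoid system column readable when the new
     * tuple is evaluated against the table's CHECK constraints. */
    tuple->t_tableOid = RelationGetRelid(resultRelationDesc);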
No back-patch, because this doesn't seem important enough to take the
risk of destabilizing the back branches. In fact, this will pose a
dump-and-reload hazard for those upgrading from previous versions:
constraints that were accepted before but were not correctly enforced
will now either be enforced correctly or not accepted at all. Either
could result in restore failures, but in practice I think very few
users will notice the difference, since the use case is pretty
marginal anyway and few users will be relying on features that have
not historically worked.
Amit Kapila, reviewed by Rushabh Lathia, with doc changes by me.
Stephen Frost [Mon, 23 Sep 2013 12:33:41 +0000 (08:33 -0400)]
Fix SSL deadlock risk in libpq
In libpq, we set up callback routines and pass them to OpenSSL to handle
locking. When we run out of SSL connections, we try to clean things
up by de-registering the hooks. Unfortunately, we had a few calls
into the OpenSSL library after these hooks were de-registered during
SSL cleanup, which led to deadlocks. This moves the thread callback
cleanup to be after all SSL-cleanup related OpenSSL library calls.
I've been unable to reproduce the deadlock with this fix.
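A hedged sketch of the ordering (internal names approximate libpq's
fe-secure.c; the final call is a hypothetical stand-in for the callback
cleanup, not a real function name):

    static void
    close_SSL_sketch(PGconn *conn)
    {
        if (conn->ssl)
        {
            SSL_shutdown(conn->ssl);
            SSL_free(conn->ssl);
            conn->ssl = NULL;
        }
        if (conn->peer)
        {
            X509_free(conn->peer);
            conn->peer = NULL;
        }
        /* Only now, with no further OpenSSL calls pending, is it safe to
         * de-register the locking callbacks. */
        ssl_callback_cleanup_sketch();
    }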
In passing, also move the close_SSL call to be after unlocking our
ssl_config mutex when in a failure state. While it looks pretty
unlikely to be an issue, it could have resulted in deadlocks if we
ended up in this code path due to something other than SSL_new
failing. Thanks to Heikki for pointing this out.
Back-patch to all supported versions; note that the close_SSL issue
only goes back to 9.0, so that hunk isn't included in the 8.4 patch.
Initially found and reported by Vesa-Matti J Kari; many thanks to
both Heikki and Andres for their help running down the specific
issue and reviewing the patch.
When a timeline history file is fetched from the server, it is initially
created with a temporary file name and then renamed into place. However, the temporary
file name was constructed using an uninitialized buffer. Usually that meant
that the file was created in current directory instead of the target, which
usually goes unnoticed, but if the target is on a different filesystem than
the current dir, the rename() would fail. Fix that.
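A hedged sketch of building the temporary name explicitly under the target
directory (variable names approximate pg_basebackup's receivelog.c):

    /* Sketch: histfname comes from the server's response; construct both
     * the final path and the temporary path inside basedir. */
    char        path[MAXPGPATH];
    char        tmppath[MAXPGPATH];

    snprintf(path, sizeof(path), "%s/%s", basedir, histfname);
    snprintf(tmppath, sizeof(tmppath), "%s.tmp", path);

    /* ... write and fsync the contents to tmppath, then ... */
    if (rename(tmppath, path) < 0)
        fprintf(stderr, "could not rename \"%s\" to \"%s\": %s\n",
                tmppath, path, strerror(errno));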
The second issue is that pg_receivexlog would not take .partial files into
account when scanning the target directory for existing WAL files to
determine the starting point. If the timeline has switched in the server
several times in the last WAL segment, and pg_receivexlog is restarted, it
would choose too old a starting point. That's not a problem as long as the old WAL segment
exists in the server and can be streamed over, but will cause a failure if
it's not.
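A hedged sketch of the directory-scan change (close in spirit to
pg_receivexlog's FindStreamingStart(), but not the exact hunk):

    /* Sketch: recognise both completed segments and "<segment>.partial"
     * files when looking for the highest WAL position already received. */
    bool        ispartial;

    if (strlen(dirent->d_name) == 24 &&
        strspn(dirent->d_name, "0123456789ABCDEF") == 24)
        ispartial = false;
    else if (strlen(dirent->d_name) == 24 + strlen(".partial") &&
             strspn(dirent->d_name, "0123456789ABCDEF") == 24 &&
             strcmp(dirent->d_name + 24, ".partial") == 0)
        ispartial = true;
    else
        continue;           /* not a WAL segment file, ignore it */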
Backpatch to 9.3, where this timeline handling code was written.
Analysed by Andrew Gierth, bug #8453, based on a bug report on IRC.
It seems to make more sense to use "cutoff multixact" terminology
throughout the backend code; "freeze" is associated with replacing an
Xid with FrozenTransactionId, which is not what we do for MultiXactIds.
Once the administrator has called for an immediate shutdown or a backend
crash has triggered a reinitialization, no mere SIGINT or SIGTERM should
change that course. Such derailment remains possible when the signal
arrives before quickdie() blocks signals. That being a narrow race
affecting most PostgreSQL signal handlers in some way, leave it for
another patch. Back-patch this to all supported versions.
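A hedged sketch of the idea for the crash-reinitialization case (names
approximate postmaster.c; not necessarily the committed mechanism):

    /* Sketch: in the postmaster's SIGINT/SIGTERM handler, ignore the
     * request if a crash-induced reinitialization is already underway,
     * so the softer signal cannot derail it. */
    if (FatalError)
        return;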
The prototype for inval_twophase_postcommit wasn't removed when its definition
was removed in efc16ea520679d713d98a2c7bf1453c4ff7b91ec (the initial HS commit).
Bruce Momjian [Sat, 7 Sep 2013 15:44:33 +0000 (11:44 -0400)]
intarray: return empty zero-dimensional array for an empty array
Previously a one-dimensional empty array was returned, but its text
representation matched a zero-dimensional array, and there is no way to
dump/reload a one-dimensional empty array.
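A hedged sketch of the return path (the surrounding variable is an
assumption; the call itself is the standard way to build such an array):

    /* Sketch: hand back a true zero-dimensional empty array instead of a
     * one-dimensional array with zero elements. */
    if (num == 0)
        PG_RETURN_ARRAYTYPE_P(construct_empty_array(INT4OID));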
Doing so was helpful for some Valgrind usage and distracting for other
usage. One can achieve the same effect by changing log_statement and
pointing both PostgreSQL and Valgrind logging to stderr.
Kevin Grittner [Thu, 5 Sep 2013 19:03:43 +0000 (14:03 -0500)]
Eliminate pg_rewrite.ev_attr column and related dead code.
Commit 95ef6a344821655ce4d0a74999ac49dd6af6d342 removed the
ability to create rules on an individual column as of 7.3, but
left some residual code which has since been useless. This cleans
up that dead code without any change in behavior other than
dropping the useless column from the catalog.
If the hash table backing a catalog cache becomes too full (fillfactor > 2),
enlarge it. A new buckets array, double the size of the old, is allocated,
and all entries in the old hash are moved to the right bucket in the new
hash.
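A hedged sketch of that rehash step (type and field names follow catcache.c,
but this is a sketch, not the committed function):

    /* Sketch: double the bucket array and re-link every entry under the
     * new bucket mask; cp is the catalog cache being enlarged. */
    dlist_head *newbucket;
    int         newnbuckets = cp->cc_nbuckets * 2;
    int         i;

    newbucket = (dlist_head *)
        MemoryContextAllocZero(CacheMemoryContext,
                               newnbuckets * sizeof(dlist_head));

    for (i = 0; i < cp->cc_nbuckets; i++)
    {
        dlist_mutable_iter iter;

        dlist_foreach_modify(iter, &cp->cc_bucket[i])
        {
            CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);
            int         newidx = HASH_INDEX(ct->hash_value, newnbuckets);

            dlist_delete(iter.cur);
            dlist_push_head(&newbucket[newidx], &ct->cache_elem);
        }
    }

    pfree(cp->cc_bucket);
    cp->cc_bucket = newbucket;
    cp->cc_nbuckets = newnbuckets;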
This has two benefits. First, cache lookups don't get so expensive when
there are lots of entries in a cache, like if you access hundreds of
thousands of tables. Second, we can make the (initial) sizes of the caches
much smaller, which saves memory.
This patch dials down the initial sizes of the catcaches. The new sizes are
chosen so that a backend that only runs a few basic queries still won't need
to enlarge any of them.
Keep heavily-contended fields in XLogCtlInsert on different cache lines.
Performance testing shows that if the insertpos_lck spinlock and the fields
that it protects are on the same cache line with other variables that are
frequently accessed, the false sharing can hurt performance a lot. Keep
them apart by adding some padding.
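A hedged sketch of the layout idea (the padding constant is an assumption
for illustration; the committed struct may differ):

    #define CACHE_LINE_SIZE_SKETCH  128

    typedef struct
    {
        slock_t     insertpos_lck;      /* protects the two positions below */
        uint64      CurrBytePos;
        uint64      PrevBytePos;
        char        pad[CACHE_LINE_SIZE_SKETCH];   /* keep the hot fields on
                                                     * their own cache line */
        /* ... other, less contended members of XLogCtlInsert ... */
    } XLogCtlInsertSketch;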
Tom Lane [Tue, 3 Sep 2013 22:56:22 +0000 (18:56 -0400)]
Update comments concerning PGC_S_TEST.
This GUC context value was once only used by ALTER DATABASE SET and
ALTER USER SET. That's not true anymore, though, so rewrite the
comments to be a bit more general.
Patch in HEAD only, since this is just an internal documentation issue.
Tom Lane [Tue, 3 Sep 2013 22:32:20 +0000 (18:32 -0400)]
Don't fail for bad GUCs in CREATE FUNCTION with check_function_bodies off.
The previous coding attempted to activate all the GUC settings specified
in SET clauses, so that the function validator could operate in the GUC
environment expected by the function body. However, this is problematic
when restoring a dump, since the SET clauses might refer to database
objects that don't exist yet. We already have the parameter
check_function_bodies that's meant to prevent forward references in
function definitions from breaking dumps, so let's change CREATE FUNCTION
to not install the SET values if check_function_bodies is off.
Authors of function validators were already advised not to make any
"context sensitive" checks when check_function_bodies is off, if indeed
they're checking anything at all in that mode. But extend the
documentation to point out the GUC issue in particular.
(Note that we still check the SET clauses to some extent; the behavior
with !check_function_bodies is now approximately equivalent to what ALTER
DATABASE/ROLE have been doing for awhile with context-dependent GUCs.)
This problem can be demonstrated in all active branches, so back-patch
all the way.
Tom Lane [Tue, 3 Sep 2013 21:08:38 +0000 (17:08 -0400)]
Allow aggregate functions to be VARIADIC.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
Tom Lane [Tue, 3 Sep 2013 20:28:56 +0000 (16:28 -0400)]
Docs: wording improvements in discussion of timestamp arithmetic.
I started out just to fix the broken markup in commit
1c2085766187031eaeaae7db4785b9e1d4241988, but got distracted by
copy-editing. I see Bruce already fixed the markup, but I'll
commit the wordsmithing anyway.
Tom Lane [Sun, 1 Sep 2013 23:43:02 +0000 (19:43 -0400)]
Update "Using EXPLAIN" documentation examples using current code.
It seems like a good idea to update these examples since some fairly
basic planner behaviors have changed in 9.3; notably that the startup cost
for an indexscan plan node is no longer invariably estimated at 0.00.
Tom Lane [Fri, 30 Aug 2013 23:15:21 +0000 (19:15 -0400)]
Reset the binary heap in MergeAppend rescans.
Failing to do so can cause queries to return wrong data, error out or crash.
This requires adding a new binaryheap_reset() method to binaryheap.c,
but that probably should have been there anyway.
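A hedged sketch of what the new method amounts to (field names from
binaryheap.h; the committed version may differ in detail):

    /* Sketch: forget the heap's contents but keep its allocated node
     * array, so a rescan can re-add rows without reallocating. */
    void
    binaryheap_reset(binaryheap *heap)
    {
        heap->bh_size = 0;
        heap->bh_has_heap_property = true;
    }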
Per bug #8410 from Terje Elde. Diagnosis and patch by Andres Freund.
Use a non-locking initial test in TAS_SPIN on x86_64.
Testing done in 2011 by Tom Lane concluded that this is a win on Intel Xeons
and AMD Opterons, but it was not changed back then, because of an old
comment in tas() that suggested that it's a huge loss on older Opterons.
However, we didn't have separate TAS() and TAS_SPIN() macros back then, so the
comment referred to doing a non-locked initial test even on the first
access, in the uncontended case. I don't have access to older Opterons, but I'm
pretty sure that doing an initial unlocked test is unlikely to be a loss
while spinning, even though it might be for the first access.
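The change amounts to a definition along these lines for x86_64 (a sketch,
but it matches the pattern described):

    /* Sketch: test the lock with a plain read first; only issue the
     * locked instruction (via TAS) when the lock looks free, so spinning
     * waiters don't keep taking the cache line in exclusive mode. */
    #define TAS_SPIN(lock)      (*(lock) ? 1 : TAS(lock))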
We probably should do the same on 32-bit x86, but I'm afraid of changing it
without any testing. Hence just add a note to the x86 implementation
suggesting that we probably should do the same there.
Robert Haas [Wed, 28 Aug 2013 18:08:13 +0000 (14:08 -0400)]
Allow discovery of whether a dynamic background worker is running.
Using the infrastructure provided by this patch, it's possible either
to wait for the startup of a dynamically-registered background worker,
or to poll the status of such a worker without waiting. In either
case, the current PID of the worker process can also be obtained.
As usual, worker_spi is updated to demonstrate the new functionality.
As noted by Tom Lane, commit 813fb0315587d32e3b77af1051a0ef517d187763
was overly optimistic about how safe it is to concurrently change
enumsortorder values under MVCC catalog scan semantics. Restore
some of the previous text, with hopefully-correct adjustments for
the new state of play.
Tom Lane [Sat, 24 Aug 2013 19:14:17 +0000 (15:14 -0400)]
Account better for planning cost when choosing whether to use custom plans.
The previous coding in plancache.c essentially used 10% of the estimated
runtime as its cost estimate for planning. This can be pretty bogus,
especially when the estimated runtime is very small, such as in a simple
expression plan created by plpgsql, or a simple INSERT ... VALUES.
While we don't have a really good handle on how planning time compares
to runtime, it seems reasonable to use an estimate based on the number of
relations referenced in the query, with a rather large multiplier. This
patch uses 1000 * cpu_operator_cost * (nrelations + 1), so that even a
trivial query will be charged 1000 * cpu_operator_cost for planning.
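In code terms, the charge looks roughly like this (a hedged sketch; variable
names are approximations of plancache.c):

    /* Sketch: estimate planning cost from the number of relations the
     * query references, with a floor of 1000 * cpu_operator_cost. */
    int     nrelations = list_length(plannedstmt->relationOids);
    Cost    plan_cost = 1000.0 * cpu_operator_cost * (nrelations + 1);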
This should address the problem reported by Marc Cousin and others that
9.2 and up prefer custom plans in cases where the planning time greatly
exceeds what can be saved.
Magnus Hagander [Sat, 24 Aug 2013 15:11:31 +0000 (17:11 +0200)]
Don't crash when pg_xlog is empty and pg_basebackup -x is used
The backup will not work in this case (without a log archive, and that's
the whole point of -x); this patch just changes pg_basebackup to throw an
error instead of crashing when this happens.
Tom Lane [Fri, 23 Aug 2013 21:30:53 +0000 (17:30 -0400)]
In locate_grouping_columns(), don't expect an exact match of Var typmods.
It's possible that inlining of SQL functions (or perhaps other changes?)
has exposed typmod information not known at parse time. In such cases,
Vars generated by query_planner might have valid typmod values while the
original grouping columns only have typmod -1. This isn't a semantic
problem since the behavior of grouping only depends on type not typmod,
but it breaks locate_grouping_columns' use of tlist_member to locate the
matching entry in query_planner's result tlist.
We can fix this without an excessive amount of new code or complexity by
relying on the fact that locate_grouping_columns only gets called when
make_subplanTargetList has set need_tlist_eval == false, and that can only
happen if all the grouping columns are simple Vars. Therefore we only need
to search the sub_tlist for a matching Var, and we can reasonably define a
"match" as being a match of the Var identity fields
varno/varattno/varlevelsup. The code still Asserts that vartype matches,
but ignores vartypmod.
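A hedged sketch of the relaxed match (not the exact committed loop; groupvar
stands for the grouping column being located):

    /* Sketch: match grouping Vars by identity only; Assert on vartype
     * but deliberately ignore vartypmod. */
    ListCell   *sl;
    TargetEntry *te = NULL;

    foreach(sl, sub_tlist)
    {
        TargetEntry *tle = (TargetEntry *) lfirst(sl);
        Var         *sv = (Var *) tle->expr;

        if (IsA(sv, Var) &&
            sv->varno == groupvar->varno &&
            sv->varattno == groupvar->varattno &&
            sv->varlevelsup == groupvar->varlevelsup)
        {
            Assert(sv->vartype == groupvar->vartype);
            te = tle;
            break;
        }
    }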
Per bug #8393 from Evan Martin. The added regression test case is
basically the same as his example. This has been broken for a very long
time, so back-patch to all supported branches.
Tom Lane [Wed, 21 Aug 2013 17:38:16 +0000 (13:38 -0400)]
Fix hash table size estimation error in choose_hashed_distinct().
We should account for the per-group hashtable entry overhead when
considering whether to use a hash aggregate to implement DISTINCT. The
comparable logic in choose_hashed_grouping() gets this right, but I think
I omitted it here in the mistaken belief that there would be no overhead
if there were no aggregate functions to be evaluated. This can result in
a more than 2X underestimate of the hash table size, if the tuples being
aggregated aren't very wide. Per report from Tomas Vondra.
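A hedged sketch of the corrected accounting (mirroring what
choose_hashed_grouping does; the exact expression may differ):

    /* Sketch: the per-entry size is the tuple plus a minimal tuple header
     * plus the per-hash-entry overhead, not just the bare tuple width. */
    Size    hashentrysize = MAXALIGN(path_width) +
                            MAXALIGN(sizeof(MinimalTupleData)) +
                            hash_agg_entry_size(0);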
This bug is of long standing, but per discussion we'll only back-patch into
9.3. Changing the estimation behavior in stable branches seems to carry too
much risk of destabilizing plan choices for already-tuned applications.