TYPEALIGN doesn't work on int64 on 32-bit platforms.
The TYPEALIGN macro, and the related ones like MAXALIGN, don't work with
values larger than intptr_t, because TYPEALIGN casts the argument to
intptr_t to do the arithmetic. That's not a problem when dealing with
pointers or lengths or offsets related to pointers, but the XLogInsert
scaling patch added a call to MAXALIGN with an XLogRecPtr argument.
To fix, add wider variants of the macros, called TYPEALIGN64 and MAXALIGN64,
which are just like the existing variants but work with uint64 instead of
intptr_t.
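For reference, the new macros mirror the existing ones, with the casts
changed (a sketch of the c.h definitions; MAXIMUM_ALIGNOF is the
platform's strictest alignment requirement):

    /* Existing variant: the intptr_t casts truncate a uint64 argument
     * on 32-bit platforms. */
    #define TYPEALIGN(ALIGNVAL, LEN)  \
        (((intptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((intptr_t) ((ALIGNVAL) - 1)))
    #define MAXALIGN(LEN)  TYPEALIGN(MAXIMUM_ALIGNOF, (LEN))

    /* New 64-bit-safe variants: same arithmetic, done in uint64. */
    #define TYPEALIGN64(ALIGNVAL, LEN)  \
        (((uint64) (LEN) + ((ALIGNVAL) - 1)) & ~((uint64) ((ALIGNVAL) - 1)))
    #define MAXALIGN64(LEN)  TYPEALIGN64(MAXIMUM_ALIGNOF, (LEN))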
Report and patch by David Rowley, analysis by Andres Freund.
1. In heap_hot_search_buffer(), the PredicateLockTuple() call is passed the
wrong offset number. heapTuple->t_self is set to the tid of the first
tuple in the chain that's visited, not the one actually being read.
2. CheckForSerializableConflictIn() uses the tuple's t_ctid field
instead of t_self to check for existing predicate locks on the tuple. If
the tuple was updated, but the updater rolled back, t_ctid points to the
aborted dead tuple.
Kevin Grittner [Mon, 7 Oct 2013 19:16:54 +0000 (14:16 -0500)]
Eliminate xmin from hash tag for predicate locks on heap tuples.
If a tuple was frozen while its predicate locks mattered,
read-write dependencies could be missed, resulting in failure to
detect conflicts which could lead to anomalies in committed
serializable transactions.
This field was added to the tag when we still thought that it was
necessary to carry locks forward to a new version of an updated
row. That was later proven to be unnecessary, which allowed
simplification of the code, but elimination of xmin from the tag
was missed at the time.
Per report and analysis by Heikki Linnakangas.
Backpatch to 9.1.
Alvaro Herrera [Sun, 6 Oct 2013 02:24:50 +0000 (23:24 -0300)]
Fix various bugs in postmaster SIGKILL processing
Clamp the sleep time during immediate shutdown or crash processing to a
minimum of zero rather than a maximum of one second. The previous code
could produce a negative sleep time, leading to failure in select() calls.
Also, during crash recovery, reset AbortStartTime as soon as SIGKILL is
sent or abort processing has commenced, rather than waiting until the
startup process completes; this also keeps us from sending SIGKILL
repeatedly.
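The clamping amounts to something like this (a minimal sketch with
hypothetical variable names; SIGKILL_CHILDREN_AFTER_SECS stands for the
escalation delay):

    /* Compute how long to sleep before escalating to SIGKILL.  The
     * previous coding could yield a negative value here, which makes
     * select() fail; clamp to zero instead. */
    int timeout = (int) (AbortStartTime + SIGKILL_CHILDREN_AFTER_SECS -
                         time(NULL));

    if (timeout < 0)
        timeout = 0;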
Per trouble report from Jeff Janes on
CAMkU=1xd3=wFqZwwuXPWe4BQs3h1seYo8LV9JtSjW5RodoPxMg@mail.gmail.com
Noah Misch [Sat, 5 Oct 2013 21:33:38 +0000 (17:33 -0400)]
pgbench: Elaborate latency reporting.
Isolate transaction latency (elapsed time between submitting first
command and receiving response to last command) from client-side delays
pertaining to the --rate schedule. Under --rate, report schedule lag as
defined in the documentation. Report latency standard deviation
whenever we collect the measurements to do so. All of these changes
affect --progress messages and the final report.
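In terms of per-transaction timestamps, the two reported quantities are
roughly as follows (a sketch; the microsecond variable names are
illustrative, not pgbench's actual fields):

    /* All values in microseconds. */
    int64 lag     = txn_begin_us - scheduled_us;  /* --rate schedule lag */
    int64 latency = txn_end_us   - txn_begin_us;  /* first command sent to
                                                   * last response received */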
Alvaro Herrera [Fri, 4 Oct 2013 13:32:48 +0000 (10:32 -0300)]
isolationtester: Allow tuples to be returned in more places
Previously, isolationtester would forbid returning tuples in
session-specific teardown (but not global teardown), as well as in
global setup. Allow these places to return tuples, too.
It makes for cleaner code to have separate Get/Add functions for PostingItems
and ItemPointers. A few call sites that deal with both types need duplicate
code as a result, but all the callers have to know which type they're
dealing with anyway. Overall, this reduces the amount of casting required.
Extracted from Alexander Korotkov's larger patch to change the data page
format.
The cancel handler was uselessly set up even before the first connection
was opened. By setting it up afterwards, the user can use Ctrl+C to
abort psql if the initial connection attempt hangs.
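The ordering change amounts to this (a sketch; assume keywords/values
hold the connection parameters, and the handler-installation function
stands in for psql's actual one):

    /* Connect first; while this blocks, SIGINT keeps its default
     * disposition, so Ctrl+C terminates psql rather than being
     * swallowed by the cancel handler. */
    PGconn *conn = PQconnectdbParams(keywords, values, true);

    /* Only once a connection exists, turn Ctrl+C into a query cancel. */
    setup_cancel_handler();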
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Ryan Kelly <rpkelly22@gmail.com>
Alvaro Herrera [Tue, 1 Oct 2013 20:36:15 +0000 (17:36 -0300)]
Remove broken PGXS code for pg_xlogdump
With the PGXS boilerplate in place, pg_xlogdump currently fails with an
ominous error message that certain targets cannot be built because
certain files do not exist. Remove that and instead throw a quick error
message alerting the user of the actual problem, which should be easier
to diagnose than the status quo.
In bms_add_member(), use repalloc() if the bms needs to be enlarged.
Previously bms_add_member() would palloc a whole new copy of the existing
set, copy the words, and pfree the old one. repalloc() is potentially much
faster, and more importantly, this is less surprising if CurrentMemoryContext
is not the same as the context the old set is in. bms_add_member() still
allocates a new bitmapset in CurrentMemoryContext if NULL is passed as
argument, but that is a lot less likely to induce bugs.
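The enlargement path now looks roughly like this (a sketch of the
repalloc()-based version; WORDNUM/BITNUM and BITMAPSET_SIZE are the usual
bitmapset helper macros):

    Bitmapset *
    bms_add_member(Bitmapset *a, int x)
    {
        int     wordnum = WORDNUM(x);
        int     bitnum = BITNUM(x);

        if (a == NULL)
            return bms_make_singleton(x);  /* uses CurrentMemoryContext */

        if (wordnum >= a->nwords)
        {
            /* Grow in place: the set stays in its original context. */
            int     oldnwords = a->nwords;
            int     i;

            a = (Bitmapset *) repalloc(a, BITMAPSET_SIZE(wordnum + 1));
            a->nwords = wordnum + 1;
            /* Zero out the newly added words. */
            for (i = oldnwords; i < a->nwords; i++)
                a->words[i] = 0;
        }
        a->words[wordnum] |= ((bitmapword) 1 << bitnum);
        return a;
    }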
Fix snapshot leak if lo_open called on non-existent object.
lo_open registers the currently active snapshot, and checks if the
large object exists after that. Normally, snapshots registered by lo_open
are unregistered at end of transaction when the lo descriptor is closed, but
if we error out before the lo descriptor is added to the list of open
descriptors, it is leaked. Fix by moving the snapshot registration to after
checking if the large object exists.
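In outline, the reordering looks like this (a sketch; the existence-check
function name is illustrative):

    Snapshot    snap = GetActiveSnapshot();

    /* Check existence before registering anything, so erroring out
     * here cannot leak a registered snapshot. */
    if (!myLargeObjectExists(lobjId, snap))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("large object %u does not exist", lobjId)));

    /* Only now take the long-lived reference for the descriptor. */
    obj_desc->snapshot = RegisterSnapshot(snap);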
Reported by Pavel Stehule. Backpatch to 8.4. The snapshot registration
system was introduced in 8.4, so prior versions are not affected (and not
supported, anyway).
Andrew Dunstan [Sun, 29 Sep 2013 21:41:56 +0000 (17:41 -0400)]
Use a new hstore extension version for added json functions.
This should have been done when the json functionality was added to
hstore in 9.3.0. To handle this correctly, the upgrade script uses
conditional logic, via plpgsql in a DO statement, to add the two
new functions and the new cast. If hstore_to_json_loose is detected as
already present and dependent on the hstore extension nothing is done.
This will require that the database be loaded with plpgsql.
People who have installed the earlier and spurious 1.1 version of hstore
will need to do:

    ALTER EXTENSION hstore UPDATE;

to pick up the new functions properly.
Fix spurious warning after vacuuming a page on a table with no indexes.
There is a rare race condition, when a transaction that inserted a tuple
aborts while vacuum is processing the page containing the inserted tuple.
Vacuum prunes the page first, which normally removes any dead tuples, but
if the inserting transaction aborts right after that, the loop after
pruning will see a dead tuple and remove it instead. That's OK, but if the
page is on a table with no indexes, and the page becomes completely empty
after removing the dead tuple (or tuples) on it, it will be immediately
marked as all-visible. That's also fine, but the sanity check in vacuum would
throw a warning because it thinks that the page contains dead tuples and
was nevertheless marked as all-visible, even though it just vacuumed away
the dead tuples and so it doesn't actually contain any.
Spotted this while reading the code. It's difficult to hit the race
condition otherwise, but can be done by putting a breakpoint after the
heap_page_prune() call.
Backpatch all the way to 8.4, where this code first appeared.
B-tree operators are not allowed to leak memory into the current memory
context. range_cmp() leaked detoasted copies of its arguments. That caused
a quick out-of-memory error when creating an index on a range column.
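The usual idiom for this kind of fix is PG_FREE_IF_COPY, which pfrees a
detoasted copy but leaves an inline datum alone. A sketch of the shape of
the fixed function (not the exact committed diff):

    Datum
    range_cmp(PG_FUNCTION_ARGS)
    {
        RangeType  *r1 = PG_GETARG_RANGE(0);  /* may be a detoasted copy */
        RangeType  *r2 = PG_GETARG_RANGE(1);
        int         cmp;

        cmp = 0;  /* ... compare bounds as before ... */

        /* Free any detoasted copies before returning. */
        PG_FREE_IF_COPY(r1, 0);
        PG_FREE_IF_COPY(r2, 1);

        PG_RETURN_INT32(cmp);
    }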
Use @libdir@ in both of regress/{input,output}/security_label.source
Though @libdir@ almost always matches @abs_builddir@ in this context,
the test could only fail if they differed. Back-patch to 9.1, where the
test was introduced.
Robert Haas [Mon, 23 Sep 2013 17:31:22 +0000 (13:31 -0400)]
Don't allow system columns in CHECK constraints, except tableoid.
Previously, arbitrary system columns could be mentioned in table
constraints, but they were not correctly checked at runtime, because
the values weren't actually set correctly in the tuple. Since it
seems easy enough to initialize the table OID properly, do that,
and continue allowing that column, but disallow the rest unless and
until someone figures out a way to make them work properly.
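Initializing the table OID is a one-liner wherever the tuple is formed,
along these lines (a sketch, not the exact patch):

    /* Fill in t_tableOid so a CHECK expression referencing tableoid
     * sees the correct value at runtime. */
    tuple->t_tableOid = RelationGetRelid(resultRelationDesc);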
No back-patch, because this doesn't seem important enough to take the
risk of destabilizing the back branches. In fact, this will pose a
dump-and-reload hazard for those upgrading from previous versions:
constraints that were accepted before but were not correctly enforced
will now either be enforced correctly or not accepted at all. Either
could result in restore failures, but in practice I think very few
users will notice the difference, since the use case is pretty
marginal anyway and few users will be relying on features that have
not historically worked.
Amit Kapila, reviewed by Rushabh Lathia, with doc changes by me.
Stephen Frost [Mon, 23 Sep 2013 12:33:41 +0000 (08:33 -0400)]
Fix SSL deadlock risk in libpq
In libpq, we set up and pass to OpenSSL callback routines to handle
locking. When we run out of SSL connections, we try to clean things
up by de-registering the hooks. Unfortunately, we had a few calls
into the OpenSSL library after these hooks were de-registered during
SSL cleanup, which led to deadlocks. This moves the thread callback
cleanup to be after all SSL-cleanup related OpenSSL library calls.
I've been unable to reproduce the deadlock with this fix.
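In outline, the hazard and the fix (a sketch using the raw OpenSSL
callback API; libpq wraps these calls in its own helpers):

    /* Wrong: subsequent OpenSSL calls may try to take locks through
     * callbacks that have already been removed. */
    CRYPTO_set_locking_callback(NULL);
    CRYPTO_set_id_callback(NULL);
    SSL_free(ssl);

    /* Right: finish all OpenSSL calls first, then drop the callbacks. */
    SSL_free(ssl);
    CRYPTO_set_locking_callback(NULL);
    CRYPTO_set_id_callback(NULL);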
In passing, also move the close_SSL call to be after unlocking our
ssl_config mutex when in a failure state. While it looks pretty
unlikely to be an issue, it could have resulted in deadlocks if we
ended up in this code path due to something other than SSL_new
failing. Thanks to Heikki for pointing this out.
Back-patch to all supported versions; note that the close_SSL issue
only goes back to 9.0, so that hunk isn't included in the 8.4 patch.
Initially found and reported by Vesa-Matti J Kari; many thanks to
both Heikki and Andres for their help running down the specific
issue and reviewing the patch.
When a timeline history file is fetched from the server, it is initially
created with a temporary file name and then renamed into place. However,
the temporary file name was constructed using an uninitialized buffer.
Usually that meant that the file was created in the current directory
instead of the target, which went unnoticed, but if the target is on a
different filesystem than the current directory, the rename() would fail.
Fix that.
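The first fix just builds the temporary name explicitly in the target
directory, along these lines (a sketch with illustrative variable names):

    char    tmppath[MAXPGPATH];

    /* Construct the temp name under basedir, rather than relying on an
     * uninitialized buffer; the later rename() then stays within one
     * filesystem. */
    snprintf(tmppath, sizeof(tmppath), "%s/%s.tmp", basedir, histfname);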
The second issue is that pg_receivexlog did not take .partial files into
account when scanning the target directory for existing WAL files to
determine the starting point. If the timeline has switched in the server
several times within the last WAL segment, and pg_receivexlog is
restarted, it would choose too old a starting point. That's not a problem
as long as the old WAL segment
exists in the server and can be streamed over, but will cause a failure if
it's not.
Backpatch to 9.3, where this timeline handling code was written.
Analysed by Andrew Gierth, bug #8453, based on a bug report on IRC.
It seems to make more sense to use "cutoff multixact" terminology
throughout the backend code; "freeze" is associated with replacing an
Xid with FrozenTransactionId, which is not what we do for MultiXactIds.
Once the administrator has called for an immediate shutdown or a backend
crash has triggered a reinitialization, no mere SIGINT or SIGTERM should
change that course. Such derailment remains possible when the signal
arrives before quickdie() blocks signals. That being a narrow race
affecting most PostgreSQL signal handlers in some way, leave it for
another patch. Back-patch this to all supported versions.
The prototype for inval_twophase_postcommit wasn't removed when its
definition was removed in efc16ea520679d713d98a2c7bf1453c4ff7b91ec (the
initial HS commit).
Bruce Momjian [Sat, 7 Sep 2013 15:44:33 +0000 (11:44 -0400)]
intarray: return empty zero-dimensional array for an empty array
Previously a one-dimensional empty array was returned, but its text
representation matched a zero-dimensional array, and there is no way to
dump/reload a one-dimensional empty array.
Doing so was helpful for some Valgrind usage and distracting for other
usage. One can achieve the same effect by changing log_statement and
pointing both PostgreSQL and Valgrind logging to stderr.
Kevin Grittner [Thu, 5 Sep 2013 19:03:43 +0000 (14:03 -0500)]
Eliminate pg_rewrite.ev_attr column and related dead code.
Commit 95ef6a344821655ce4d0a74999ac49dd6af6d342 removed the
ability to create rules on an individual column as of 7.3, but
left some residual code which has since been useless. This cleans
up that dead code without any change in behavior other than
dropping the useless column from the catalog.
If the hash table backing a catalog cache becomes too full (fillfactor > 2),
enlarge it. A new buckets array, double the size of the old, is allocated,
and all entries in the old hash are moved to the right bucket in the new
hash.
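A sketch of the rehash step (simplified from the catcache code; the
dlist_* routines are PostgreSQL's embedded doubly-linked-list
primitives):

    static void
    RehashCatCache(CatCache *cp)
    {
        dlist_head *newbucket;
        int         newnbuckets = cp->cc_nbuckets * 2;
        int         i;

        newbucket = (dlist_head *)
            MemoryContextAllocZero(CacheMemoryContext,
                                   newnbuckets * sizeof(dlist_head));

        /* Move every entry to its bucket in the doubled array. */
        for (i = 0; i < cp->cc_nbuckets; i++)
        {
            dlist_mutable_iter iter;

            dlist_foreach_modify(iter, &cp->cc_bucket[i])
            {
                CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);
                int         idx = HASH_INDEX(ct->hash_value, newnbuckets);

                dlist_delete(iter.cur);
                dlist_push_head(&newbucket[idx], &ct->cache_elem);
            }
        }

        pfree(cp->cc_bucket);
        cp->cc_nbuckets = newnbuckets;
        cp->cc_bucket = newbucket;
    }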
This has two benefits. First, cache lookups don't get so expensive when
there are lots of entries in a cache, like if you access hundreds of
thousands of tables. Second, we can make the (initial) sizes of the caches
much smaller, which saves memory.
This patch dials down the initial sizes of the catcaches. The new sizes are
chosen so that a backend that only runs a few basic queries still won't need
to enlarge any of them.
Keep heavily-contended fields in XLogCtlInsert on different cache lines.
Performance testing shows that if the insertpos_lck spinlock and the fields
that it protects are on the same cache line with other variables that are
frequently accessed, the false sharing can hurt performance a lot. Keep
them apart by adding some padding.
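The separation is done with explicit padding, roughly as follows (a
sketch; the real struct has more fields, and the 64-byte line size is an
assumption about typical hardware):

    #define CACHE_LINE_SIZE 64      /* assumed typical cache line size */

    typedef struct XLogCtlInsert
    {
        slock_t     insertpos_lck;  /* protects the two fields below */
        uint64      CurrBytePos;
        uint64      PrevBytePos;

        /* Pad so the frequently-read fields that follow land on a
         * different cache line than insertpos_lck, avoiding false
         * sharing. */
        char        pad[CACHE_LINE_SIZE];

        /* ... less-contended fields follow ... */
    } XLogCtlInsert;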
Tom Lane [Tue, 3 Sep 2013 22:56:22 +0000 (18:56 -0400)]
Update comments concerning PGC_S_TEST.
This GUC context value was once only used by ALTER DATABASE SET and
ALTER USER SET. That's not true anymore, though, so rewrite the
comments to be a bit more general.
Patch in HEAD only, since this is just an internal documentation issue.
Tom Lane [Tue, 3 Sep 2013 22:32:20 +0000 (18:32 -0400)]
Don't fail for bad GUCs in CREATE FUNCTION with check_function_bodies off.
The previous coding attempted to activate all the GUC settings specified
in SET clauses, so that the function validator could operate in the GUC
environment expected by the function body. However, this is problematic
when restoring a dump, since the SET clauses might refer to database
objects that don't exist yet. We already have the parameter
check_function_bodies that's meant to prevent forward references in
function definitions from breaking dumps, so let's change CREATE FUNCTION
to not install the SET values if check_function_bodies is off.
Authors of function validators were already advised not to make any
"context sensitive" checks when check_function_bodies is off, if indeed
they're checking anything at all in that mode. But extend the
documentation to point out the GUC issue in particular.
(Note that we still check the SET clauses to some extent; the behavior
with !check_function_bodies is now approximately equivalent to what ALTER
DATABASE/ROLE have been doing for awhile with context-dependent GUCs.)
This problem can be demonstrated in all active branches, so back-patch
all the way.
Tom Lane [Tue, 3 Sep 2013 21:08:38 +0000 (17:08 -0400)]
Allow aggregate functions to be VARIADIC.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
Tom Lane [Tue, 3 Sep 2013 20:28:56 +0000 (16:28 -0400)]
Docs: wording improvements in discussion of timestamp arithmetic.
I started out just to fix the broken markup in commit 1c2085766187031eaeaae7db4785b9e1d4241988, but got distracted by
copy-editing. I see Bruce already fixed the markup, but I'll
commit the wordsmithing anyway.
Tom Lane [Sun, 1 Sep 2013 23:43:02 +0000 (19:43 -0400)]
Update "Using EXPLAIN" documentation examples using current code.
It seems like a good idea to update these examples since some fairly
basic planner behaviors have changed in 9.3; notably that the startup cost
for an indexscan plan node is no longer invariably estimated at 0.00.