It was designed to test the longest possible interval output length,
so removing four zeros from the number of hours, as this patch does,
is not ideal. But the test still has some utility for its original
purpose, and there aren't a lot of other good options.
Noah Misch suggested a different approach where we test that the
output either matches what we expect from integer timestamps or what
we expect from floating-point timestamps. That seemed to obscure an
otherwise simple test, however.
Tom Lane [Wed, 7 May 2014 02:49:32 +0000 (22:49 -0400)]
hash_any returns Datum, not uint32 (and definitely not "int").
The coding in JsonbHashScalarValue might have accidentally failed to fail
given current representational choices, but the key word there would be
"accidental". Insert the appropriate datatype conversion macro. And
use the right conversion macro for hash_numeric's result, too.
In passing make the code a bit cleaner and less repetitive by factoring
out the xor step from the switch.
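As an illustration of the rule, a minimal sketch (the helper function is a made-up example, not from the commit) of the proper conversion:

    #include "postgres.h"
    #include "access/hash.h"

    /* Hypothetical example: hash some bytes and extract the uint32 result.
     * hash_any() returns Datum, so go through the conversion macro rather
     * than casting to int. */
    static uint32
    hash_bytes_example(const unsigned char *data, int len)
    {
        return DatumGetUInt32(hash_any(data, len));
    }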
Jeff Davis [Sun, 4 May 2014 20:18:55 +0000 (13:18 -0700)]
Improve comment for tricky aspect of index-only scans.
Index-only scans avoid taking a lock on the VM buffer, which would
cause a lot of contention. To be correct, that requires some intricate
assumptions that weren't completely documented in the previous
comment.
There is currently nothing in the build system that enforces that things
stay valid, because that requires additional tools and will receive
separate consideration.
Simon Riggs [Tue, 6 May 2014 12:44:15 +0000 (13:44 +0100)]
pg_basebackup streaming: adjust version check msg
Commit d298b50a3b469c088bb40a4d36d38111b4cd574d by Heikki Linnakangas
requested that the version check message be updated at the next release,
suggesting that the appropriate text would be "9.3 or later". The logic used
for the check means that the correct text for 9.4 is "9.3 or 9.4", since the
check would fail for any later release.
Michael Meskes [Tue, 6 May 2014 11:04:30 +0000 (13:04 +0200)]
Fix handling of array of char pointers in ecpglib.
When an array of char * was used as the target of a FETCH statement returning
more than one row, ecpglib tried to store all the results in the first
element. Instead it should write into the array of char pointers at the right
offset, use the address rather than the value of the C variable while reading
the array, and treat such a variable as char ** rather than char * for
pointer arithmetic.
Patch by Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
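A self-contained sketch of the addressing rule the fix implements (plain C
illustration under assumed data, not ecpglib code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        const char *fetched[] = {"one", "two", "three"};  /* stand-in result set */
        char       *vals[3];        /* FETCH target: array of char pointers */
        char      **slot = vals;    /* step as char **, not char * */
        int         i;

        for (i = 0; i < 3; i++)
            slot[i] = strdup(fetched[i]);   /* row i lands in element i */

        for (i = 0; i < 3; i++)
            printf("%s\n", vals[i]);
        return 0;
    }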
Tom Lane [Mon, 5 May 2014 18:43:39 +0000 (14:43 -0400)]
Fix possible cache invalidation failure in ReceiveSharedInvalidMessages.
Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to reduce
the number of calls into sinvaladt.c (which require taking a shared lock)
by keeping a local buffer of collected-but-not-yet-processed messages.
However, if processing of the last message in a batch resulted in a
recursive call to ReceiveSharedInvalidMessages, we could overwrite that
message with a new one while the outer invalidation function was still
working on it. This would be likely to lead to invalidation of the wrong
cache entry, allowing subsequent processing to use stale cache data.
The fix is just to make a local copy of each message while we're processing
it.
Spotted by Andres Freund. Back-patch to 8.4 where the bug was introduced.
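The shape of the fix, as a hedged fragment (the type and callback name are
sinval.c's, but this is an illustration, not the committed diff):

    /* Copy the message out of the buffer before calling the handler,
     * because the handler may recurse into ReceiveSharedInvalidMessages
     * and overwrite the buffered messages. */
    SharedInvalidationMessage msgCopy = messages[nextmsg++];

    invalFunction(&msgCopy);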
Tom Lane [Mon, 5 May 2014 17:37:54 +0000 (13:37 -0400)]
Fix pg_type.typlen for newly-revived line type.
Commit 261c7d4b653bc3e44c31fd456d94f292caa50d8f removed the "m" field
from struct LINE, but neglected to make pg_type.h's idea of the type's
size match. This resulted in reading past the end of palloc'd LINE
values when inserting them into tuples etc. In principle that could
cause a SIGSEGV, though the odds of detectable problems seem low.
Bump catversion since this makes an incompatible on-disk format change.
Note that if the line type had been in use in the field, this would
break pg_upgrade'ability of databases containing line values; but
it seems unlikely that there are any (they'd have had to be compiled
with -DENABLE_LINE_TYPE).
Tom Lane [Mon, 5 May 2014 15:26:41 +0000 (11:26 -0400)]
Fix case of pg_dump -Fc to an unseekable file (such as a pipe).
This was accidentally broken in commits cfa1b4a711/5e8e794e3b.
It saves a line or so to call ftello unconditionally in _CloseArchive,
but we have to expect that it might fail if we're not in hasSeek mode.
Per report from Bernd Helmle.
In passing, improve _getFilePos to print an appropriate message if
ftello fails unexpectedly, rather than just a vague complaint about
"ftell mismatch".
Assert that pre/post-fix updated tuples are on the same page during replay.
If they were not, 'oldtup.t_data' would be dereferenced while set to NULL
whenever a full-page image was taken of block 0.
Do so primarily to silence Coverity, but also to make sure this prerequisite
isn't changed without adapting the replay routine, since such a change would
appear to work in many cases.
Replace SYSTEMQUOTEs with Windows-specific wrapper functions.
It's easy to forget using SYSTEMQUOTEs when constructing command strings
for system() or popen(). Even if we fix all the places missing it now, it is
bound to be forgotten again in the future. Introduce wrapper functions that
do the extra quoting for you, and get rid of SYSTEMQUOTEs in all the
callers.
We previously used SYSTEMQUOTEs in all the hard-coded command strings, and
this doesn't change the behavior of those. But user-supplied commands, like
archive_command, restore_command, COPY TO/FROM PROGRAM calls, as well as
pgbench's \shell, will now gain an extra pair of quotes. That is desirable,
but if you have existing scripts or config files that include an extra
pair of quotes, those might need to be adjusted.
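A hedged sketch of the wrapper idea (hypothetical name; the committed
functions live in src/port and differ in detail):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* On Windows, cmd.exe strips the outermost quotes from the command
     * line, so wrap the whole command in one extra pair; elsewhere, pass
     * it through unchanged. */
    static int
    system_quoted(const char *command)
    {
    #ifdef WIN32
        int   rc;
        char *buf = malloc(strlen(command) + 3);

        sprintf(buf, "\"%s\"", command);
        rc = system(buf);
        free(buf);
        return rc;
    #else
        return system(command);
    #endif
    }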
Tom Lane [Fri, 2 May 2014 00:22:37 +0000 (20:22 -0400)]
Fix yet another corner case in dumping rules/views with USING clauses.
ruleutils.c tries to cope with additions/deletions/renamings of columns in
tables referenced by views, by means of adding machine-generated aliases to
the printed form of a view when needed to preserve the original semantics.
A recent blog post by Marko Tiikkaja pointed out a case I'd missed though:
if one input of a join with USING is itself a join, there is nothing to
stop the user from adding a column of the same name as the USING column to
whichever side of the sub-join didn't provide the USING column. And then
there'll be an error when the view is re-parsed, since now the sub-join
exposes two columns matching the USING specification. We were catching a
lot of related cases, but not this one, so add some logic to cope with it.
Back-patch to 9.3, which is the first release that makes any serious
attempt to cope with such cases (cf commit 2ffa740be and follow-ons).
Tom Lane [Thu, 1 May 2014 20:16:36 +0000 (16:16 -0400)]
Fix "quiet inline" configure test for newer clang compilers.
This test used to just define an unused static inline function and check
whether that causes a warning. But newer clang versions warn about
unused static inline functions when defined inside a .c file, but not
when defined in an included header, which is the case we care about.
Change the test to cope.
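Roughly, the probe now has to look like an included header rather than a lone
.c file; a hedged sketch of that shape (file names hypothetical, not the
exact configure code):

    /* conftest.h: the case we actually care about -- an unused static
     * inline defined in a header. */
    static inline int
    fun(void)
    {
        return 0;
    }

    /* conftest.c: compiled with warnings promoted to errors; if pulling in
     * the header draws an unused-function warning, quiet inline is absent. */
    #include "conftest.h"

    int
    main(void)
    {
        return 0;
    }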
Tom Lane [Thu, 1 May 2014 19:19:06 +0000 (15:19 -0400)]
Fix failure to detoast fields in composite elements of structured types.
If we have an array of records stored on disk, the individual record fields
cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are
only prepared to deal with TOAST pointers appearing in top-level fields of
a stored row. The same applies for ranges over composite types, nested
composites, etc. However, the existing code only took care of expanding
sub-field TOAST pointers for the case of nested composites, not for other
structured types containing composites. For example, given a command such
as
UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...
where x is a direct reference to a field of an on-disk tuple, if that field
is long enough to be toasted out-of-line then the TOAST pointer would be
inserted as-is into the array column. If the source record for x is later
deleted, the array field value would become a dangling pointer, leading
to errors along the line of "missing chunk number 0 for toast value ..."
when the value is referenced. A reproducible test case for this was
provided by Jan Pecek, but it seems likely that some of the "missing chunk
number" reports we've heard in the past were caused by similar issues.
Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to
produce a self-contained Datum value if the Datum is of composite type.
Seen in this light, the problem is not just confined to arrays and ranges,
but could also affect some other places where detoasting is done in that
way, for example form_index_tuple().
I tried teaching the array code to apply toast_flatten_tuple_attribute()
along with PG_DETOAST_DATUM() when the array element type is composite,
but this was messy and imposed extra cache lookup costs whether or not any
TOAST pointers were present, indeed sometimes when the array element type
isn't even composite (since sometimes it takes a typcache lookup to find
that out). The idea of extending that approach to all the places that
currently use PG_DETOAST_DATUM() wasn't attractive at all.
This patch instead solves the problem by decreeing that composite Datum
values must not contain any out-of-line TOAST pointers in the first place;
that is, we expand out-of-line fields at the point of constructing a
composite Datum, not at the point where we're about to insert it into a
larger tuple. This rule is applied only to true composite Datums, not
to tuples that are being passed around the system as tuples, so it's not
as invasive as it might sound at first. With this approach, the amount
of code that has to be touched for a full solution is greatly reduced,
and added cache lookup costs are avoided except when there actually is
a TOAST pointer that needs to be inlined.
The main drawback of this approach is that we might sometimes dereference
a TOAST pointer that will never actually be used by the query, imposing a
rather large cost that wasn't there before. On the other side of the coin,
if the field value is used multiple times then we'll come out ahead by
avoiding repeat detoastings. Experimentation suggests that common SQL
coding patterns are unaffected either way, though. Applications that are
very negatively affected could be advised to modify their code to not fetch
columns they won't be using.
In future, we might consider reverting this solution in favor of detoasting
only at the point where data is about to be stored to disk, using some
method that can drill down into multiple levels of nested structured types.
That will require defining new APIs for structured types, though, so it
doesn't seem feasible as a back-patchable fix.
Note that this patch changes HeapTupleGetDatum() from a macro to a function
call; this means that any third-party code using that macro will not get
protection against creating TOAST-pointer-containing Datums until it's
recompiled. The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER().
It seems likely that this is not a big problem in practice: most of the
tuple-returning functions in core and contrib produce outputs that could
not possibly be toasted anyway, and the same probably holds for third-party
extensions.
This bug has existed since TOAST was invented, so back-patch to all
supported branches.
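For extension authors, a hedged sketch of the pattern affected (the funcapi
calls are the standard ones; the function itself is a made-up example):

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "funcapi.h"

    PG_FUNCTION_INFO_V1(make_pair);

    /* Hypothetical composite-returning function: after this commit,
     * HeapTupleGetDatum() expands any out-of-line fields while forming
     * the Datum, so a simple recompile picks up the protection. */
    Datum
    make_pair(PG_FUNCTION_ARGS)
    {
        TupleDesc   tupdesc;
        Datum       values[2] = {PG_GETARG_DATUM(0), PG_GETARG_DATUM(1)};
        bool        nulls[2] = {false, false};
        HeapTuple   tuple;

        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");
        tuple = heap_form_tuple(BlessTupleDesc(tupdesc), values, nulls);
        return HeapTupleGetDatum(tuple);
    }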
Tom Lane [Wed, 30 Apr 2014 22:16:53 +0000 (18:16 -0400)]
Improve error messages in reorderbuffer.c.
Be more clear about failure cases in relfilenode->relation lookup,
and fix some other places that were inconsistent or not per our
message style guidelines.
Robert Haas [Wed, 30 Apr 2014 21:38:18 +0000 (17:38 -0400)]
Consistently allow reading of messages from a detached shm_mq.
This was intended to work always, but the previous code only allowed
it if at least one message was successfully read by the receiver
before the sender detached the queue.
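A hedged receive-loop fragment using the shm_mq API the message refers to
(the handler and the mqh setup are assumed context, not the committed test):

    /* Keep reading: messages already queued remain readable even after
     * the sender detaches; SHM_MQ_DETACHED then ends the loop. */
    for (;;)
    {
        Size          nbytes;
        void         *data;
        shm_mq_result res = shm_mq_receive(mqh, &nbytes, &data, false);

        if (res == SHM_MQ_DETACHED)
            break;
        if (res == SHM_MQ_SUCCESS)
            process_message(data, nbytes);  /* hypothetical handler */
    }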
Tom Lane [Wed, 30 Apr 2014 21:30:50 +0000 (17:30 -0400)]
Rationalize common/relpath.[hc].
Commit a73018392636ce832b09b5c31f6ad1f18a4643ea created rather a mess by
putting dependencies on backend-only include files into include/common.
We really shouldn't do that. To clean it up:
* Move TABLESPACE_VERSION_DIRECTORY back to its longtime home in
catalog/catalog.h. We won't consider this symbol part of the FE/BE API.
* Push enum ForkNumber from relfilenode.h into relpath.h. We'll consider
relpath.h as the source of truth for fork numbers, since relpath.c was
already partially serving that function, and anyway relfilenode.h was
kind of a random place for that enum.
* So, relfilenode.h now includes relpath.h rather than vice-versa. This
direction of dependency is fine. (That allows most, but not quite all,
of the existing explicit #includes of relpath.h to go away again.)
* Push forkname_to_number from catalog.c to relpath.c, just to centralize
fork number stuff a bit better.
* Push GetDatabasePath from catalog.c to relpath.c; it was rather odd
that the previous commit didn't keep this together with relpath().
* To avoid needing relfilenode.h in common/, redefine the underlying
function (now called GetRelationPath) as taking separate OID arguments,
and make the APIs using RelFileNode or RelFileNodeBackend into macro
wrappers. (The macros have a potential multiple-eval risk, but none of
the existing call sites have an issue with that; one of them had such a
risk already anyway.)
* Fix failure to follow the directions when "init" fork type was added;
specifically, the errhint in forkname_to_number wasn't updated, and neither
was the SGML documentation for pg_relation_size().
* Fix tablespace-path-too-long check in CreateTableSpace() to account for
fork-name component of maximum-length pathnames. This requires putting
FORKNAMECHARS into a header file, but it was rather useless (and
actually unreferenced) where it was.
The last couple of items are potentially back-patchable bug fixes,
if anyone is sufficiently excited about them; but personally I'm not.
Per a gripe from Christoph Berg about how include/common wasn't
self-contained.
Tom Lane [Wed, 30 Apr 2014 17:46:13 +0000 (13:46 -0400)]
Check for interrupts and stack overflow during rule/view dumps.
Since ruleutils.c recurses, it could be driven to stack overflow by
deeply nested constructs. Very large queries might also take long
enough to deparse that a check for interrupts seems like a good idea.
Stick appropriate tests into a couple of key places.
Noted by Greg Stark. Back-patch to all supported branches.
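The two guards in question, in a minimal sketch (real backend macros and
functions from miscadmin.h; the placement here is illustrative, not the
committed diff):

    #include "postgres.h"
    #include "miscadmin.h"

    static void
    deparse_node_sketch(void)
    {
        /* error out cleanly rather than overflowing the stack */
        check_stack_depth();
        /* let a cancel or die request interrupt a long deparse */
        CHECK_FOR_INTERRUPTS();

        /* ... recurse into child nodes ... */
    }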
Tom Lane [Wed, 30 Apr 2014 17:26:26 +0000 (13:26 -0400)]
Reduce indentation/parenthesization of set operations in rule/view dumps.
A query such as "SELECT x UNION SELECT y UNION SELECT z UNION ..."
produces a left-deep nested parse tree, which we formerly showed in its
full nested glory and with all the possible parentheses. This does little
for readability, though, and long UNION lists resulting in excessive
indentation are common. Instead, let's omit parentheses and indent all
the subqueries at the same level in such cases.
This patch skips indentation/parenthesization whenever the lefthand input
of a SetOperationStmt is another SetOperationStmt of the same kind and
ALL/DISTINCT property. We could teach the code the exact syntactic
precedence of set operations and thereby avoid parenthesization in some
more cases, but it's not clear that that'd be a readability win: it seems
better to parenthesize if the set operation changes. (As an example,
if there's one UNION in a long list of UNION ALL, it now stands out like
a sore thumb, which seems like a good thing.)
Back-patch to 9.3. This completes our response to a complaint from Greg
Stark that since commit 62e666400d there's a performance problem in pg_dump
for views containing long UNION sequences (or other types of deeply nested
constructs). The previous commit 0601cb54dac14d979d726ab2ebeda251ae36e857
handles the general problem, but this one makes the specific case of UNION
lists look a lot nicer.
Tom Lane [Wed, 30 Apr 2014 16:48:12 +0000 (12:48 -0400)]
Limit overall indentation in rule/view dumps.
Continuing to indent no matter how deeply nested we get doesn't really
do anything for readability; what's worse, it results in O(N^2) total
whitespace, which can become a performance and memory-consumption issue.
To address this, once we get past 40 characters of indentation, reduce
the indentation step distance 4x, and also limit the maximum indentation
by reducing it modulo 40. This latter choice is a bit weird at first
glance, but it seems to preserve readability better than a simple cap
would do.
Back-patch to 9.3, because since commit 62e666400d the performance issue
is a hazard for pg_dump.
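Read literally, the rule computes something like the following (a hedged
sketch; the committed ruleutils.c code differs in detail, and the constant is
just the "40" named above):

    #define PRETTY_LIMIT 40     /* indentation cap described above */

    static int
    indent_amount(int indentLevel)
    {
        int amount;

        if (indentLevel < PRETTY_LIMIT)
            return indentLevel > 0 ? indentLevel : 0;
        /* past the limit, advance at a quarter of the usual rate ... */
        amount = PRETTY_LIMIT + (indentLevel - PRETTY_LIMIT) / 4;
        /* ... and wrap modulo the limit so total width stays bounded */
        return amount % PRETTY_LIMIT;
    }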
Tom Lane [Wed, 30 Apr 2014 16:01:19 +0000 (12:01 -0400)]
Fix indentation of JOIN clauses in rule/view dumps.
The code attempted to outdent JOIN clauses further left than the parent
FROM keyword, which was odd in any case, and led to inconsistent formatting
since in simple cases the clauses couldn't be moved any further left than
that. And it left a permanent decrement of the indentation level, causing
subsequent lines to be much further left than they should be (again, this
couldn't be seen in simple cases for lack of indentation to give up).
After a little experimentation I chose to make it indent JOIN keywords
two spaces from the parent FROM, which is one space more than the join's
lefthand input in cases where that appears on a different line from FROM.
Back-patch to 9.3. This is a purely cosmetic change, and the bug is quite
old, so that may seem arbitrary; but we are going to be making some other
changes to the indentation behavior in both HEAD and 9.3, so it seems
reasonable to include this in 9.3 too. I committed this one first because
its effects are more visible in the regression test results as they
currently stand than they will be later.
Some popen() calls were missing SYSTEMQUOTEs, which caused initdb and
pg_upgrade to fail on Windows if the installation path contained both
spaces and @ signs.
Patch by Nikhil Deshpande. Backpatch to all supported versions.
Peter Eisentraut [Wed, 30 Apr 2014 02:16:16 +0000 (22:16 -0400)]
PL/Python: Adjust the regression tests for Python 3.4
The error test case in the plpython_do test resulted in a slightly
different error message with Python 3.4. So pick a different way to
test it that avoids that and is perhaps also a bit clearer.
Tom Lane [Tue, 29 Apr 2014 17:12:26 +0000 (13:12 -0400)]
Improve planner to drop constant-NULL inputs of AND/OR where it's legal.
In general we can't discard constant-NULL inputs, since they could change
the result of the AND/OR to be NULL. But at top level of WHERE, we do not
need to distinguish a NULL result from a FALSE result, so it's okay to
treat NULL as FALSE and then simplify AND/OR accordingly.
This is a very ancient oversight, but in 9.2 and later it can lead to
failure to optimize queries that previous releases did optimize, as a
result of more aggressive parameter substitution rules making it possible
to reduce more subexpressions to NULL constants. This is the root cause of
bug #10171 from Arnold Scheffler. We could alternatively have fixed that
by teaching orclauses.c to ignore constant-NULL OR arms, but it seems
better to get rid of them globally.
I resisted the temptation to back-patch this change into all active
branches, but it seems appropriate to back-patch as far as 9.2 so that
there will not be performance regressions of the kind shown in this bug.
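To make the semantics concrete, a self-contained three-valued sketch
(illustrative only, not planner code): at the top level of WHERE, a NULL
result rejects the row just as FALSE does, so a constant-NULL arm lets the
whole AND collapse.

    #include <stdio.h>

    typedef enum { V_FALSE, V_TRUE, V_NULL } TriBool;

    /* Fold an AND whose arms are known constants, under top-of-WHERE rules:
     * NULL may be treated as FALSE, so any NULL or FALSE arm collapses the
     * conjunction to FALSE. */
    static TriBool
    fold_and_toplevel(const TriBool *arms, int n)
    {
        int i;

        for (i = 0; i < n; i++)
            if (arms[i] == V_NULL || arms[i] == V_FALSE)
                return V_FALSE;
        return V_TRUE;
    }

    int
    main(void)
    {
        TriBool arms[] = {V_TRUE, V_NULL};
        printf("%d\n", fold_and_toplevel(arms, 2));   /* prints 0 (FALSE) */
        return 0;
    }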
Greg Stark [Mon, 28 Apr 2014 17:41:36 +0000 (18:41 +0100)]
Add support for wrapping to psql's "extended" mode. This makes it much
more practical to display tables that have both many columns and large
values in some columns (such as pg_stats).
Emre Hasegeli, with review and rewriting from Sergey Muraviov, and
reviewed by Greg Stark
Fix two bugs in WAL-logging of GIN pending-list pages.
In writeListPage, never take a full-page image of the page, because we
have all the information required to re-initialize in the WAL record
anyway. Before this fix, a full-page image was always generated, unless
full_page_writes=off, because when the page is initialized its LSN is
always 0. In stable branches, keep the code to restore the backup blocks
if they exist, in case the WAL was generated with an older minor
version; but in master, Assert that there are no full-page images.
In the redo routine, add missing "off++". Otherwise the tuples are added
to the page in reverse order. That happens to be harmless because we
always scan and remove all the tuples together, but it was clearly wrong.
Also, it was masked by the first bug unless full_page_writes=off, because
the page was always restored from a full-page image.
Tom Lane [Mon, 28 Apr 2014 01:24:19 +0000 (21:24 -0400)]
Can't completely get rid of #ifndef FRONTEND in palloc.h :-(
pg_controldata includes postgres.h not postgres_fe.h, so utils/palloc.h
must be able to compile in a "#define FRONTEND" context. It appears that
Solaris Studio is smart enough to persuade us to define PG_USE_INLINE,
but not smart enough to not make a copy of unreferenced static functions;
which leads to an unsatisfied reference to CurrentMemoryContext. So we
need an #ifndef FRONTEND around that declaration. Per buildfarm.
Tom Lane [Sat, 26 Apr 2014 19:11:10 +0000 (15:11 -0400)]
Improve generation algorithm for database system identifier.
As noted some time ago, the original coding had a typo ("|" for "^")
that made the result less unique than intended. Even the intended
behavior is obsolete since it was based on wanting to produce a
usable value even if we didn't have int64 arithmetic --- a limitation
we stopped supporting years ago. Instead, let's redefine the system
identifier as tv_sec in the upper 32 bits (same as before), tv_usec
in the next 20 bits, and the low 12 bits of getpid() in the remaining
bits. This is still hardly guaranteed-universally-unique, but it's
noticeably better than before. Per my proposal at
<29019.1374535940@sss.pgh.pa.us>
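Assembled from the description (a sketch, not xlog.c itself), the layout is:

    #include <stdint.h>
    #include <sys/time.h>
    #include <unistd.h>

    static uint64_t
    generate_system_identifier(void)
    {
        struct timeval tv;
        uint64_t       sysid;

        gettimeofday(&tv, NULL);
        sysid = (uint64_t) tv.tv_sec << 32;                 /* upper 32 bits */
        sysid |= ((uint64_t) tv.tv_usec & 0xFFFFF) << 12;   /* next 20 bits  */
        sysid |= (uint64_t) (getpid() & 0xFFF);             /* low 12 bits   */
        return sysid;
    }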
Tom Lane [Sat, 26 Apr 2014 18:14:28 +0000 (14:14 -0400)]
Don't #include utils/palloc.h in common/fe_memutils.h.
This breaks the principle that common/ ought not depend on anything in the
server, not only code-wise but in the headers. The only arguable advantage
is avoidance of duplication of half a dozen extern declarations, and even
that is rather dubious, considering that the previous coding was wrong
about which declarations to duplicate: it exposed pnstrdup() to frontend
code even though no such function is provided in fe_memutils.c.
On the same principle, don't #include utils/memutils.h in the frontend
build of psprintf.c. This requires duplicating the definition of
MaxAllocSize, but that seems fine to me: there's no a-priori reason why
frontend code should use the same size limit as the backend anyway.
In passing, clean up some rather odd layout and ordering choices that
were imposed on palloc.h to reduce the number of #ifdefs required by
the previous approach.
Per gripe from Christoph Berg. There's still more work to do to make
include/common/ clean, but this part seems reasonably noncontroversial.
Tom Lane [Sat, 26 Apr 2014 16:22:09 +0000 (12:22 -0400)]
Record the proper typmod for an index expression column.
We should use exprTypmod() to extract the typmod of the expression,
instead of just blindly storing -1. This seems to have been an aboriginal
oversight in commit fc8d970cbcdd6f025475822a4cf01dfda0873226 which
introduced general-expression indexes. The consequences are only cosmetic
at present, since the index machinery doesn't really look at typmod for
index columns; but still it seems best to describe the column type as
precisely as we can. Per off-list complaint from Thomas Fanghaenel.
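The corrected pattern, in a minimal sketch (exprType/exprTypmod are the real
nodeFuncs.h helpers; the wrapper function is hypothetical):

    #include "postgres.h"
    #include "nodes/nodeFuncs.h"

    static void
    fill_index_column_type(Node *indexkey, Oid *typid, int32 *typmod)
    {
        *typid = exprType(indexkey);
        *typmod = exprTypmod(indexkey);   /* previously hard-wired to -1 */
    }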
Tom Lane [Fri, 25 Apr 2014 19:40:35 +0000 (15:40 -0400)]
Clean up temp installations after client program tests.
Commit 7d0f493f19607774fdccb1a1ea06fdd96a3d9698 added infrastructure
to perform tests in assorted src/bin/ subdirectories, but forgot to
teach "make clean" to clean up the detritus the tests leave behind.
Fix race when updating a tuple concurrently locked by another process
If a tuple is locked, and this lock is later upgraded either to an
update or to a stronger lock, and in the meantime some other process
tries to lock, update or delete the same tuple, it (the tuple) could end
up being updated twice, or having conflicting locks held.
The reason for this is that the second updater checks for a change in
Xmax value, or in the HEAP_XMAX_IS_MULTI infomask bit, after noticing
the first lock; and if there's a change, it restarts and re-evaluates
its ability to update the tuple. But it neglected to check for changes
in lock strength or in lock-vs-update status when those two properties
stayed the same. This would lead it to take the wrong decision and
continue with its own update, when in reality it shouldn't do so but
instead restart from the top.
This could lead to either an assertion failure much later (when a
multixact containing multiple updates is detected), or duplicate copies
of tuples.
To fix, make sure to compare the other relevant infomask bits alongside
the Xmax value and HEAP_XMAX_IS_MULTI bit, and restart from the top if
necessary.
Also, in the belt-and-suspenders spirit, add a check to
MultiXactCreateFromMembers that a multixact being created does not have
two or more members that are claimed to be updates. This should protect
against other bugs that might cause similar bogus situations.
Backpatch to 9.3, where the possibility of multixacts containing updates
was introduced. (In prior versions it was possible to have the tuple
lock upgraded from shared to exclusive, and an update would not restart
from the top; yet we're protected against a bug there because there's
always a sleep to wait for the locking transaction to complete before
continuing to do anything. Really, the fact that tuple locks always
conflicted with concurrent updates is what protected against bugs here.)
Per report from Andrew Dunstan and Josh Berkus in thread at
http://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com
Tom Lane [Thu, 24 Apr 2014 17:29:48 +0000 (13:29 -0400)]
Reset pg_stat_activity.xact_start during PREPARE TRANSACTION.
Once we've completed a PREPARE, our session is not running a transaction,
so its entry in pg_stat_activity should show xact_start as null, rather
than leaving the value as the start time of the now-prepared transaction.
I think possibly this oversight was triggered by faulty extrapolation
from the adjacent comment that says PrepareTransaction should not call
AtEOXact_PgStat, so tweak the wording of that comment.
Noted by Andres Freund while considering bug #10123 from Maxim Boguk,
although this error doesn't seem to explain that report.
Tom Lane [Thu, 24 Apr 2014 01:21:05 +0000 (21:21 -0400)]
Fix incorrect pg_proc.proallargtypes entries for two built-in functions.
pg_sequence_parameters() and pg_identify_object() have had incorrect
proallargtypes entries since 9.1 and 9.3 respectively. This was mostly
masked by the correct information in proargtypes, but a few operations
such as pg_get_function_arguments() (and thus psql's \df display) would
show the wrong data types for these functions' input parameters.
In HEAD, fix the wrong info, bump catversion, and add an opr_sanity
regression test to catch future mistakes of this sort.
In the back branches, just fix the wrong info so that installations
initdb'd with future minor releases will have the right data. We
can't force an initdb, and it doesn't seem like a good idea to add
a regression test that will fail on existing installations.
Tom Lane [Wed, 23 Apr 2014 23:17:31 +0000 (19:17 -0400)]
Allow polymorphic aggregates to have non-polymorphic state data types.
Before 9.4, such an aggregate couldn't be declared, because its final
function would have to have polymorphic result type but no polymorphic
argument, which CREATE FUNCTION would quite properly reject. The
ordered-set-aggregate patch found a workaround: allow the final function
to be declared as accepting additional dummy arguments that have types
matching the aggregate's regular input arguments. However, we failed
to notice that this problem applies just as much to regular aggregates,
despite the fact that we had a built-in regular aggregate array_agg()
that was known to be undeclarable in SQL because its final function
had an illegal signature. So what we should have done, and what this
patch does, is to decouple the extra-dummy-arguments behavior from
ordered-set aggregates and make it generally available for all aggregate
declarations. We have to put this into 9.4 rather than waiting till
later because it slightly alters the rules for declaring ordered-set
aggregates.
The patch turned out a bit bigger than I'd hoped because it proved
necessary to record the extra-arguments option in a new pg_aggregate
column. I'd thought we could just look at the final function's pronargs
at runtime, but that didn't work well for variadic final functions.
It's probably just as well though, because it simplifies life for pg_dump
to record the option explicitly.
While at it, fix array_agg() to have a valid final-function signature,
and add an opr_sanity test to notice future deviations from polymorphic
consistency. I also marked the percentile_cont() aggregates as not
needing extra arguments, since they don't.
When marking a branch as half-dead, a pointer to the top of the branch is
stored in the leaf block's hi-key. During normal operation, the high key
was left in place, and the block number was just stored in the ctid field
of the high key tuple, but in WAL replay, the high key was recreated as a
truncated tuple with zero columns. For the sake of easier debugging, also
truncate the tuple in normal operation, so that the page is identical
after WAL replay. Also, rename the 'downlink' field in the WAL record to
'topparent', as that seems like a more descriptive name. And make sure
it's set to invalid when unlinking the leaf page.
Tom Lane [Wed, 23 Apr 2014 03:22:12 +0000 (23:22 -0400)]
Fix documentation of FmgrInfo.fn_nargs.
Some ancient comments claimed that fn_nargs could be -1 to indicate a
variable number of input arguments; but this was never implemented, and
is at variance with what we ultimately did with "variadic" functions.
Update the comments.
Forgot to update the LSN of the left sibling's page when creating a new root.
I fixed this for regular insertions and page splits earlier, but missed
new root creation.
The README incorrectly claimed that GIN posting tree pages contain an array
of uncompressed items in addition to compressed posting lists. Earlier
versions of the GIN posting list compression patch worked that way, but not
the one that was committed.
Avoid transient bogus page contents when creating a sequence.
Don't use simple_heap_insert to insert the tuple to a sequence relation.
simple_heap_insert creates a heap insertion WAL record, and replaying that
will create a regular heap page without the special area containing the
sequence magic constant, which is wrong for a sequence. That was not a bug,
because we always created a sequence WAL record afterwards, and replaying it
overwrote the bogus heap page; the transient state could never be seen by
another backend anyway, since all this happened only while creating a new
sequence relation. But it's simpler and cleaner to avoid the transient state
in the first place.
Tom Lane [Mon, 21 Apr 2014 17:28:07 +0000 (13:28 -0400)]
pg_stat_statements forgot to let previous occupant of hook get control too.
pgss_post_parse_analyze() neglected to pass the call on to any earlier
occupant of the post_parse_analyze_hook. There are no other users of that
hook in contrib/, and most likely none in the wild either, so this is
probably just a latent bug. But it's a bug nonetheless, so back-patch
to 9.2 where this code was introduced.
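The convention the fix restores is the usual hook-chaining pattern; a hedged
sketch (simplified illustration, not the pg_stat_statements source):

    #include "postgres.h"
    #include "fmgr.h"
    #include "parser/analyze.h"

    PG_MODULE_MAGIC;

    void _PG_init(void);

    static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;

    static void
    my_post_parse_analyze(ParseState *pstate, Query *query)
    {
        /* hand control to any earlier occupant of the hook first */
        if (prev_post_parse_analyze_hook)
            prev_post_parse_analyze_hook(pstate, query);

        /* ... this module's own processing ... */
    }

    void
    _PG_init(void)
    {
        prev_post_parse_analyze_hook = post_parse_analyze_hook;
        post_parse_analyze_hook = my_post_parse_analyze;
    }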