Accept postgres:// URIs in libpq connection functions
postgres:// URIs are an attempt to "stop the bleeding" that has been said
to occur in this general area as external projects adopt their own
syntaxes. The syntaxes supported by this patch should be enough to cover
most interesting cases without having to resort to "param=value" pairs,
but those are provided for the cases that need them regardless.
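For illustration (a sketch, not the exact grammar --- the libpq docs have
the authoritative syntax; host and database names here are hypothetical):

    postgres://alice:secret@db.example.com:5432/mydb?connect_timeout=10
    postgres://localhost/mydb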
libpq documentation has been shuffled around a bit, to avoid stuffing
all the format details into the PQconnectdbParams description, which was
already a bit overwhelming. The list of keywords has moved to its own
subsection, and the details on the URI format live in another subsection.
This includes a simple test program, as requested in discussion, to
ensure that interesting corner cases continue to work appropriately in
the future.
Author: Alexander Shulgin
Some tweaking by Álvaro Herrera, Greg Smith, Daniel Farina, Peter Eisentraut
Reviewed by Robert Haas, Alexey Klyukin (offlist), Heikki Linnakangas,
Marko Kreen, and others
Oh, it also supports postgresql:// but that's probably just an accident.
Tom Lane [Wed, 11 Apr 2012 01:42:46 +0000 (21:42 -0400)]
Make pg_tablespace_location(0) return the database's default tablespace.
This definition is convenient when applying the function to the
reltablespace column of pg_class, since that's what zero means there;
and it doesn't interfere with any other plausible use of the function.
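For instance (an illustrative query), applying the function directly to
that column now does the right thing for every row:

    SELECT relname, pg_tablespace_location(reltablespace) FROM pg_class;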
Per gripe from Bruce Momjian.
Bruce Momjian [Tue, 10 Apr 2012 23:57:14 +0000 (19:57 -0400)]
Fix pg_upgrade to properly upgrade a table that is stored in the cluster's
default tablespace but belongs to a database that is in a user-defined
tablespace. Previously this caused a "file not found" error during upgrade.
Tom Lane [Tue, 10 Apr 2012 16:04:42 +0000 (12:04 -0400)]
Measure epoch of timestamp-without-time-zone from local not UTC midnight.
This patch reverts commit 191ef2b407f065544ceed5700e42400857d9270f
and thereby restores the pre-7.3 behavior of EXTRACT(EPOCH FROM
timestamp-without-tz). Per discussion, the more recent behavior was
misguided on a couple of grounds: it makes it hard to get a
non-timezone-aware epoch value for a timestamp, and it makes this one
case dependent on the value of the timezone GUC, which is incompatible
with having timestamp_part() labeled as immutable.
The other behavior is still available (in all releases) by explicitly
casting the timestamp to timestamp with time zone before applying EXTRACT.
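A sketch of the distinction:

    -- non-timezone-aware epoch, independent of the TimeZone setting:
    SELECT extract(epoch FROM timestamp '1970-01-01 00:00:00');  -- 0
    -- timezone-aware epoch, as in 7.3 through 9.1, via an explicit cast:
    SELECT extract(epoch FROM
                   CAST(timestamp '1970-01-01 00:00:00' AS timestamptz));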
This will need to be called out as an incompatible change in the 9.2
release notes. Although having mutable behavior in a function marked
immutable is clearly a bug, we're not going to back-patch such a change.
Tom Lane [Tue, 10 Apr 2012 00:49:01 +0000 (20:49 -0400)]
Adjust various references to GEQO being non-deterministic.
It's still non-deterministic in some sense ... but given fixed settings
and identical planning problems, it will now always choose the same plan,
so we probably shouldn't tar it with that brush. Per bug #6565 from
Guillaume Cottenceau. Back-patch to 9.0 where the behavior was fixed.
Tom Lane [Mon, 9 Apr 2012 15:58:24 +0000 (11:58 -0400)]
Fix an Assert that turns out to be reachable after all.
estimate_num_groups() gets unhappy with
create table empty();
select * from empty except select * from empty e2;
I can't see any actual use-case for such a query (and the table is illegal
per SQL spec), but it seems like a good idea that it not cause an assert
failure.
Tom Lane [Mon, 9 Apr 2012 15:41:54 +0000 (11:41 -0400)]
Don't bother copying empty support arrays in a zero-column MergeJoin.
The case could not arise when this code was originally written, but it can
now (since we made zero-column MergeJoins work for the benefit of FULL JOIN
ON TRUE). I don't think there is any actual bug here, but we might as well
treat it consistently with other uses of COPY_POINTER_FIELD(). Per comment
from Ashutosh Bapat.
Tom Lane [Mon, 9 Apr 2012 15:16:04 +0000 (11:16 -0400)]
Save a few cycles while creating "sticky" entries in pg_stat_statements.
There's no need to sit there and increment the stats when we know all the
increments would be zero anyway. The actual additions might not be very
expensive, but skipping acquisition of the spinlock seems like a good
thing. Pushing the logic about initialization of the usage count down into
entry_alloc() allows us to do that while making the code actually simpler,
not more complex. Expansion on a suggestion by Peter Geoghegan.
Tom Lane [Sun, 8 Apr 2012 19:49:47 +0000 (15:49 -0400)]
Improve management of "sticky" entries in contrib/pg_stat_statements.
This patch addresses a deficiency in the previous pg_stat_statements patch.
We want to give sticky entries an initial "usage" factor high enough that
they probably will stick around until their query is completed. However,
if the query never completes (eg it gets an error during execution), the
entry shouldn't persist indefinitely. Manage this by starting out with
a usage setting equal to the (approximate) median usage value within the
whole hashtable, but decaying the value much more aggressively than we
do for normal entries.
set_stack_base() no longer needs to be called in PostgresMain.
This was a thinko in the previous commit. Now that the stack base pointer
is set in PostmasterMain and SubPostmasterMain, it doesn't need to be set
in PostgresMain anymore.
Do stack-depth checking in all postmaster children.
We used to only initialize the stack base pointer when starting up a regular
backend, not in other processes. In particular, autovacuum workers can run
arbitrary user code, and without stack-depth checking, infinite recursion
in e.g. an index expression will bring down the whole cluster.
The comment about PL/Java using set_stack_base() is not yet true. As the
code stands, PL/java still modifies the stack_base_ptr variable directly.
However, it's been discussed in the PL/Java mailing list that it should be
changed to use the function, because PL/Java is currently oblivious to the
register stack used on Itanium. There's another issue with PL/Java, namely
that the stack base pointer it sets is not really the base of the stack; it
could be something close to the bottom of the stack. That's a separate issue
that might need some further changes to this code, but that's a different
story.
Tom Lane [Fri, 6 Apr 2012 22:10:21 +0000 (18:10 -0400)]
Fix misleading output from gin_desc().
XLOG_GIN_UPDATE_META_PAGE and XLOG_GIN_DELETE_LISTPAGE records were printed
with a list link field labeled as "blkno", which was confusing, especially
when the link was empty (InvalidBlockNumber). Print the metapage block
number instead, since that's what's actually being updated. We could
include the link values too as a separate field, but it's not clear that
would be worth the trouble.
Back-patch to 8.4 where the dubious code was added.
Tom Lane [Fri, 6 Apr 2012 20:58:17 +0000 (16:58 -0400)]
Fix broken comparetup_datum code.
Commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0 contained the entirely
fanciful assumption that it had made comparetup_datum unreachable.
Reported and patched by Takashi Yamamoto.
Fix up some not terribly accurate/useful comments from that commit, too.
Tom Lane [Fri, 6 Apr 2012 20:04:10 +0000 (16:04 -0400)]
Dept of second thoughts: improve the API for AnalyzeForeignTable.
If we make the initially-called function return the table physical-size
estimate, acquire_inherited_sample_rows will be able to use that to
allocate numbers of samples among child tables, when the day comes that
we want to support foreign tables in inheritance trees.
Tom Lane [Fri, 6 Apr 2012 19:02:35 +0000 (15:02 -0400)]
Allow statistics to be collected for foreign tables.
ANALYZE now accepts foreign tables and allows the table's FDW to control
how the sample rows are collected. (But only manual ANALYZEs will touch
foreign tables, for the moment, since among other things it's not very
clear how to handle remote permissions checks in an auto-analyze.)
contrib/file_fdw is extended to support this.
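A usage sketch (server and file names are hypothetical):

    CREATE EXTENSION file_fdw;
    CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE words (word text)
        SERVER files OPTIONS (filename '/usr/share/dict/words', format 'text');
    ANALYZE words;  -- the FDW now supplies the sample rows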
Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
Robert Haas [Thu, 5 Apr 2012 15:37:31 +0000 (11:37 -0400)]
Expose track_iotiming data via the statistics collector.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly. I've here exposed it at the database level instead.
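Assuming the new counters appear in pg_stat_database as blk_read_time and
blk_write_time, they can be read with an ordinary query:

    SELECT datname, blk_read_time, blk_write_time FROM pg_stat_database;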
Tom Lane [Thu, 5 Apr 2012 01:50:31 +0000 (21:50 -0400)]
Fix plpgsql named-cursor-parameter feature for variable name conflicts.
The parser got confused if a cursor parameter had the same name as
a plpgsql variable. Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.
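A sketch of the formerly-confused case (names are hypothetical):

    DO $$
    DECLARE
        x int := 0;
        c CURSOR (x int) FOR SELECT x + 1;  -- parameter shadows the variable
        r record;
    BEGIN
        OPEN c (x := 41);
        FETCH c INTO r;
        CLOSE c;
    END
    $$;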
Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
Tom Lane [Wed, 4 Apr 2012 22:39:08 +0000 (18:39 -0400)]
Improve efficiency of dblink by using libpq's new row processor API.
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result. Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.
A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result(). In the latter
case we don't know the desired tuple rowtype soon enough. While hackish
solutions to that are possible, a different user-level API would probably
be a better answer.
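For instance, a plain dblink() call of the supported form (connection
string and query are illustrative):

    SELECT *
    FROM dblink('dbname=postgres', 'SELECT generate_series(1, 1000000)')
         AS t(i int);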
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
Tom Lane [Wed, 4 Apr 2012 22:27:56 +0000 (18:27 -0400)]
Add a "row processor" API to libpq for better handling of large results.
Traditionally libpq has collected an entire query result before passing
it back to the application. That provides a simple and transactional API,
but it's pretty inefficient for large result sets. This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult. Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
Tom Lane [Wed, 4 Apr 2012 20:15:04 +0000 (16:15 -0400)]
Remove useless PGRES_COPY_BOTH "support" in psql.
There is no existing or foreseeable case in which psql should see a
PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a
pretty good bet that these code fragments wouldn't do the right thing
anyway. Remove them, and let the existing default cases do the appropriate
thing, namely emit an "unexpected PQresultStatus" bleat.
Noted while working on libpq row processor patch, for which I was
considering adding a PGRES_SUSPENDED status code --- the same default-case
treatment would be appropriate for that.
Tom Lane [Wed, 4 Apr 2012 19:05:10 +0000 (15:05 -0400)]
Fix syslogger to not lose log coherency under high load.
The original coding of the syslogger had an arbitrary limit of 20 large
messages concurrently in progress, after which it would just punt and dump
message fragments to the output file separately. Our ambitions are a bit
higher than that now, so allow the data structure to expand as necessary.
Reported and patched by Andrew Dunstan; some editing by Tom
Tom Lane [Wed, 4 Apr 2012 00:43:15 +0000 (20:43 -0400)]
Fix a couple of contrib/dblink bugs.
dblink_exec leaked temporary database connections if any error occurred
after connection setup, for example
SELECT dblink_exec('...connect string...', 'select 1/0');
Add a PG_TRY block to ensure PQfinish gets done when it is needed.
(dblink_record_internal is on the hairy edge of needing similar treatment,
but seems not to be actively broken at the moment.)
Also, in 9.0 and up, only one of the three functions using tuplestore
return mode was properly checking that the query context would allow
a tuplestore result.
Noted while reviewing dblink patch. Back-patch to all supported branches.
Tom Lane [Sat, 31 Mar 2012 19:51:07 +0000 (15:51 -0400)]
Fix O(N^2) behavior in pg_dump when many objects are in dependency loops.
Combining the loop workspace with the record of already-processed objects
might have been a cute trick, but it behaves horridly if there are many
dependency loops to repair: the time spent in the first step of findLoop()
grows as O(N^2). Instead use a separate flag array indexed by dump ID,
which we can check in constant time. The length of the workspace array
is now never more than the actual length of a dependency chain, which
should be reasonably short in all cases of practical interest. The code
is noticeably easier to understand this way, too.
Per gripe from Mike Roest. Since this is a longstanding performance bug,
backpatch to all supported versions.
Tom Lane [Sat, 31 Mar 2012 18:42:17 +0000 (14:42 -0400)]
Fix O(N^2) behavior in pg_dump for large numbers of owned sequences.
The loop that matched owned sequences to their owning tables required time
proportional to the number of owned sequences times the number of tables;
this work was only expended in selective-dump situations, which is probably
why the issue wasn't recognized long since. Refactor slightly so that we
can perform this work after the index array for findTableByOid has been
set up, reducing the time to O(M log N).
Per gripe from Mike Roest. Since this is a longstanding performance bug,
backpatch to all supported versions.
Tom Lane [Sat, 31 Mar 2012 17:15:53 +0000 (13:15 -0400)]
Rename frontend keyword arrays to avoid conflict with backend.
ecpg and pg_dump each contain keyword arrays with structure similar
to the backend's keyword array. Up to now, we actually named those
arrays the same as the backend's and relied on parser/keywords.h
to declare them. This seems a tad too cute, though, and it breaks
now that we need to PGDLLIMPORT-decorate the backend symbols.
Rename to avoid the problem. Per buildfarm.
(It strikes me that maybe we should get rid of the separate keywords.c
files altogether, and just define these arrays in the modules that use
them, but that's a rather more invasive change.)
Tom Lane [Sat, 31 Mar 2012 15:19:23 +0000 (11:19 -0400)]
Fix glitch recently introduced in psql tab completion.
Over-optimization (by me, looks like :-() broke the case of recognizing
a word boundary just before a quoted identifier. Reported and diagnosed
by Dean Rasheed.
Peter Eisentraut [Fri, 30 Mar 2012 17:42:06 +0000 (20:42 +0300)]
Add new files to NLS file lists
Some of these are newly added, some are older and were forgotten, some
don't contain any translatable strings right now but look like they
could in the future.
Tom Lane [Thu, 29 Mar 2012 21:52:28 +0000 (17:52 -0400)]
Fix dblink's failure to report correct connection name in error messages.
The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the
surrounding function's conname variable, causing errors to be incorrectly
reported as having occurred on the "unnamed" connection in some cases.
This bug was actually visible in two cases in the regression tests,
but apparently whoever added those cases wasn't paying attention.
Noted by Kyotaro Horiguchi, though this is different from his proposed
patch.
Back-patch to 8.4; 8.3 does not have the same type of error reporting
so the patch is not relevant.
Tom Lane [Thu, 29 Mar 2012 20:42:09 +0000 (16:42 -0400)]
Improve contrib/pg_stat_statements' handling of PREPARE/EXECUTE statements.
It's actually more useful for the module to ignore these. Ignoring
EXECUTE (and not incrementing the nesting level) allows the executor
hooks to charge the time to the underlying prepared query, which
shows up as a stats entry with the original PREPARE as query string
(possibly modified by suppression of constants, which might not be
terribly useful here but it's not worth avoiding). This is much more
useful than cluttering the stats table with a distinct entry for each
textually distinct EXECUTE.
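For example (table name hypothetical):

    PREPARE q(int) AS SELECT * FROM tab WHERE id = $1;
    EXECUTE q(1);
    EXECUTE q(2);

Both executions are charged to the single entry whose text is the PREPARE.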
Experimentation with this idea shows that it's also preferable to ignore
PREPARE. If we don't, we get two stats table entries, one with the query
string hash and one with the jumble-derived hash, but with the same visible
query string (modulo those constants). This is confusing and not very
helpful, since the first entry will only receive costs associated with
initial planning of the query, which is not something counted at all
normally by pg_stat_statements. (And if we do start tracking planning
costs, we'd want them blamed on the other hash table entry anyway.)
Tom Lane [Thu, 29 Mar 2012 19:32:50 +0000 (15:32 -0400)]
Improve handling of utility statements containing plannable statements.
When tracking nested statements, contrib/pg_stat_statements formerly
double-counted the execution costs of utility statements that directly
contain an executable statement, such as EXPLAIN and DECLARE CURSOR.
This was not obvious since the ProcessUtility and Executor hooks
would each add their measured costs to the same stats table entry.
However, with the new implementation that hashes utility and plannable
statements differently, this showed up as seemingly-duplicate stats
entries. Fix that by disabling the Executor hooks when the query has a
queryId of zero, which was the case already for such statements but is now
more clearly specified in the code. (The zero queryId was causing problems
anyway because all such statements would add to a single bogus entry.)
The PREPARE/EXECUTE case still results in counting the same execution
in two different stats table entries, but it should be much less surprising
to users that there are two entries in such cases.
In passing, include a CommonTableExpr's ctename in the query hash.
I had left it out originally on the grounds that we wanted to omit all
inessential aliases, but since RTE_CTE RTEs are hashing their referenced
names, we'd better hash the CTE names too to make sure we don't hash
semantically different queries the same.
Simon Riggs [Thu, 29 Mar 2012 13:55:30 +0000 (14:55 +0100)]
Correct epoch of txid_current() when executed on a Hot Standby server.
Initialise ckptXidEpoch from starting checkpoint and maintain the correct
value as we roll forwards. This allows GetNextXidAndEpoch() to return the
correct epoch when executed during recovery. Backpatch to 9.0, where the
problem is first observable by a user.
Inherit max_safe_fds to child processes in EXEC_BACKEND mode.
Postmaster sets max_safe_fds by testing how many open file descriptors it
can open, and that is normally inherited by all child processes at fork().
Not so on EXEC_BACKEND, i.e. Windows, however. Because of that, we
effectively ignored max_files_per_process on Windows, and always assumed
a conservative default of 32 simultaneous open files.
impact on performance, if you need to access a lot of different files
in a query. After this patch, the value is passed to child processes by
save/restore_backend_variables() among many other global variables.
It has been like this forever, but given the lack of complaints about it,
I'm not backpatching this.
Tom Lane [Thu, 29 Mar 2012 01:00:31 +0000 (21:00 -0400)]
Improve contrib/pg_stat_statements to lump "similar" queries together.
pg_stat_statements now hashes selected fields of the analyzed parse tree
to assign a "fingerprint" to each query, and groups all queries with the
same fingerprint into a single entry in the pg_stat_statements view.
In practice it is expected that queries with the same fingerprint will be
equivalent except for values of literal constants. To make the display
more useful, such constants are replaced by "?" in the displayed query
strings.
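For instance (illustrative), after running

    SELECT * FROM tab WHERE id = 1;
    SELECT * FROM tab WHERE id = 2;

the view shows a single entry with calls = 2 and the query string
"SELECT * FROM tab WHERE id = ?".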
This mechanism currently supports only optimizable queries (SELECT,
INSERT, UPDATE, DELETE). Utility commands are still matched on the
basis of their literal query strings.
There remain some open questions about how to deal with utility statements
that contain optimizable queries (such as EXPLAIN and SELECT INTO) and how
to deal with expiring speculative hashtable entries that are made to save
the normalized form of a query string. However, fixing these issues should
require only localized changes, and since there are other open patches
involving contrib/pg_stat_statements, it seems best to go ahead and commit
what we've got.
Tom Lane [Tue, 27 Mar 2012 19:17:00 +0000 (15:17 -0400)]
Bend parse location rules for the convenience of pg_stat_statements.
Generally, the parse location assigned to a multiple-token construct is
the location of its leftmost token. This commit breaks that rule for
the syntaxes TYPENAME 'LITERAL' and CAST(CONSTANT AS TYPENAME) --- the
resulting Const will have the location of the literal string, not the
typename or CAST keyword. The cases where this matters are pretty thin on
the ground (no error messages in the regression tests change, for example),
and it's unlikely that any user would be confused anyway by an error cursor
pointing at the literal. But still it's less than consistent. The reason
for changing it is that contrib/pg_stat_statements wants to know the parse
location of the original literal, and it was agreed that this is the least
unpleasant way to preserve that information through parse analysis.
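For example, in the (illustrative) query

    SELECT timestamp 'bogus';

the error cursor now points at 'bogus' rather than at the timestamp
keyword.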
Tom Lane [Tue, 27 Mar 2012 19:14:13 +0000 (15:14 -0400)]
Add some infrastructure for contrib/pg_stat_statements.
Add a queryId field to Query and PlannedStmt. This is not used by the
core backend, except for being copied around at appropriate times.
It's meant to allow plug-ins to track a particular query forward from
parse analysis to execution.
The queryId is intentionally not dumped into stored rules (and hence this
commit doesn't bump catversion). You could argue that choice either way,
but it seems better that stored rule strings not have any dependency
on plug-ins that might or might not be present.
Also, add a post_parse_analyze_hook that gets invoked at the end of
parse analysis (but only for top-level analysis of complete queries,
not cases such as analyzing a domain's default-value expression).
This is mainly meant to be used to compute and assign a queryId,
but it could have other applications.
Robert Haas [Tue, 27 Mar 2012 18:52:37 +0000 (14:52 -0400)]
New GUC, track_iotiming, to track I/O timings.
Currently, the only way to see the numbers this gathers is via
EXPLAIN (ANALYZE, BUFFERS), but the plan is to add visibility through
the stats collector and pg_stat_statements in subsequent patches.
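A minimal sketch of seeing the numbers today (requires superuser; the
table name is hypothetical):

    SET track_iotiming = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;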
Ants Aasma, reviewed by Greg Smith, with some further changes by me.
Robert Haas [Mon, 26 Mar 2012 15:03:06 +0000 (11:03 -0400)]
Code cleanup for heap_freeze_tuple.
It used to be the case that lazy vacuum could call this function with only
a shared lock on the buffer, but neither lazy vacuum nor any other
code path does that any more. Simplify the code accordingly and clean
up some related, obsolete comments.
Tom Lane [Mon, 26 Mar 2012 03:17:22 +0000 (23:17 -0400)]
Fix COPY FROM for null marker strings that correspond to invalid encoding.
The COPY documentation says "COPY FROM matches the input against the null
string before removing backslashes". It is therefore reasonable to presume
that null markers like E'\\0' will work ... and they did, until someone put
the tests in the wrong order during microoptimization-driven rewrites.
Since then, we've been failing if the null marker is something that would
de-escape to an invalidly-encoded string. Since null markers generally
need to be something that can't appear in the data, this represents a
nontrivial loss of functionality; it's surprising nobody noticed it earlier.
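For example, this (illustrative) command works again:

    COPY tbl FROM STDIN WITH (NULL E'\\0');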
Per report from Jeff Davis. Backpatch to 8.4 where this got broken.
Tom Lane [Mon, 26 Mar 2012 01:47:22 +0000 (21:47 -0400)]
Replace empty locale name with implied value in CREATE DATABASE and initdb.
setlocale() accepts locale name "" as meaning "the locale specified by the
process's environment variables". Historically we've accepted that for
Postgres' locale settings, too. However, it's fairly unsafe to store an
empty string in a new database's pg_database.datcollate or datctype fields,
because then the interpretation could vary across postmaster restarts,
possibly resulting in index corruption and other unpleasantness.
Instead, we should expand "" to whatever it means at the moment of calling
CREATE DATABASE, which we can do by saving the value returned by
setlocale().
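A sketch of the effect (assuming the environment locale is compatible
with template0's encoding):

    CREATE DATABASE mydb TEMPLATE template0 LC_COLLATE '' LC_CTYPE '';
    -- datcollate/datctype now store the expanded locale names, not ""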
For consistency, make initdb set up the initial lc_xxx parameter values the
same way. initdb was already doing the right thing for empty locale names,
but it did not replace non-empty names with setlocale results. On a
platform where setlocale chooses to canonicalize the spellings of locale
names, this would result in annoying inconsistency. (It seems that popular
implementations of setlocale don't do such canonicalization, which is a
pity, but the POSIX spec certainly allows it to be done.) The same risk
of inconsistency leads me to not venture back-patching this, although it
could certainly be seen as a longstanding bug.
Per report from Jeff Davis, though this is not his proposed patch.
Tom Lane [Sat, 24 Mar 2012 20:21:39 +0000 (16:21 -0400)]
Fix planner's handling of outer PlaceHolderVars within subqueries.
For some reason, in the original coding of the PlaceHolderVar mechanism
I had supposed that PlaceHolderVars couldn't propagate into subqueries.
That is of course entirely possible. When it happens, we need to treat
an outer-level PlaceHolderVar much like an outer Var or Aggref, that is
SS_replace_correlation_vars() needs to replace the PlaceHolderVar with
a Param, and then when building the finished SubPlan we have to provide
the PlaceHolderVar expression as an actual parameter for the SubPlan.
The handling of the contained expression is a bit delicate but it can be
treated exactly like an Aggref's expression.
In addition to the missing logic in subselect.c, prepjointree.c was failing
to search subqueries for PlaceHolderVars that need their relids adjusted
during subquery pullup. It looks like everyplace else that touches
PlaceHolderVars got it right, though.
Per report from Mark Murawski. In 9.1 and HEAD, queries affected by this
oversight would fail with "ERROR: Upper-level PlaceHolderVar found where
not expected". But in 9.0 and 8.4, you'd silently get possibly-wrong
answers, since the value transmitted into the subquery wouldn't go to null
when it should.
Tom Lane [Fri, 23 Mar 2012 23:15:58 +0000 (19:15 -0400)]
Refactor simplify_function et al to centralize argument simplification.
We were doing the recursive simplification of function/operator arguments
in half a dozen different places, with rather baroque logic to ensure it
didn't get done multiple times on some arguments. This patch improves that
by postponing argument simplification until after we've dealt with named
parameters and added any needed default expressions.
Tom Lane [Fri, 23 Mar 2012 21:29:57 +0000 (17:29 -0400)]
Code review for protransform patches.
Fix loss of previous expression-simplification work when a transform
function fires: we must not simply revert to the untransformed input tree.
Instead, build a dummy FuncExpr node to pass to the transform function.
This has the additional advantage of providing a simpler, more uniform
API for transform functions.
Move documentation to a somewhat less buried spot, relocate some
poorly-placed code, be more wary of null constants and invalid typmod
values, add an opr_sanity check on protransform function signatures,
and some other minor cosmetic adjustments.
Note: although this patch touches pg_proc.h, no need for catversion
bump, because the changes are cosmetic and don't actually change the
intended catalog contents.
Tom Lane [Thu, 22 Mar 2012 18:13:17 +0000 (14:13 -0400)]
Fix GET DIAGNOSTICS for case of assignment to function's first variable.
An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag()
caused the code to treat an assignment to a variable with dno zero as a
no-op. Unfortunately, that's a perfectly valid dno. This has been broken
since GET DIAGNOSTICS was invented. It's not terribly surprising that the
bug went unnoticed for so long, since in most cases you probably wouldn't
use the function's first-created variable (normally its first parameter)
as a GET DIAGNOSTICS target. Nonetheless, it's broken. Per bug #6551
from Adam Buraczewski.
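A sketch of the now-working case (names are hypothetical):

    CREATE FUNCTION f(n int) RETURNS int AS $$
    BEGIN
        UPDATE tab SET flag = true WHERE id = n;
        GET DIAGNOSTICS n = ROW_COUNT;  -- n is the first-created variable
        RETURN n;
    END
    $$ LANGUAGE plpgsql;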
Tom Lane [Thu, 22 Mar 2012 06:08:25 +0000 (02:08 -0400)]
If a role has a password expiration date, show that in psql's \du output.
Per a suggestion from Euler Taveira, it seems like a good idea to include
this information in \du (and \dg) output. This costs nothing for people
who are not using the VALID UNTIL feature, while for those who are, it's
rather critical information.
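For instance (hypothetical role), after

    CREATE ROLE alice LOGIN PASSWORD 'secret' VALID UNTIL '2012-12-31';

\du lists the expiration date among alice's attributes.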
Tom Lane [Thu, 22 Mar 2012 04:46:03 +0000 (00:46 -0400)]
Fix configure's search for collateindex.pl.
PGAC_PATH_COLLATEINDEX supposed that it could use AC_PATH_PROGS to search
for collateindex.pl, but that macro will only accept files that are marked
executable, and at least some DocBook installations don't mark the script
executable (a case the docs Makefile was already prepared for). Accept the
script if it's present and readable in $DOCBOOKSTYLE/bin, and otherwise
search the PATH as before.
Having fixed that up, we don't need the fallback case that was in the docs
Makefile, and instead can throw an understandable error if configure didn't
find the script. Per recent trouble report from John Lumby.
Peter Eisentraut [Wed, 21 Mar 2012 21:30:14 +0000 (23:30 +0200)]
Clean up compiler warnings from unused variables with asserts disabled
For those variables only used when asserts are enabled, use a new
macro PG_USED_FOR_ASSERTS_ONLY, which expands to
__attribute__((unused)) when asserts are not enabled.
Robert Haas [Wed, 21 Mar 2012 18:51:11 +0000 (14:51 -0400)]
Doc updates for index-only scans.
Document that routine vacuuming is now also important for the purpose
of index-only scans; and mention in the section that describes the
visibility map that it is used to implement index-only scans.