Tom Lane [Wed, 19 Oct 2011 00:09:18 +0000 (20:09 -0400)]
Reject empty pg_hba.conf files.
An empty HBA file is surely an error, since it means there is no way to
connect to the server. We've not heard identifiable reports of people
actually doing that, but this will also close off the case Thom Brown just
complained of, namely pointing hba_file at a directory. (On at least some
platforms with some directories, it will read as an empty file.)
Perhaps this should be back-patched, but given the lack of previous
complaints, I won't add extra work for the translators.
Tom Lane [Tue, 18 Oct 2011 21:39:14 +0000 (17:39 -0400)]
Remove unnecessary AssertMacro() to suppress gcc 4.6 compiler warning.
There's no particular value in doing AssertMacro((tup) != NULL) in front
of code that's certain to crash anyway if tup is NULL. And if "tup" is
actually the address of a local variable, gcc 4.6 whinges about it. That's
arguably pretty broken on gcc's part, but we might as well remove the
useless test to silence the warnings. This gets rid of all the -Waddress
warnings in the backend; there are some in libpq and psql that are a bit
harder to avoid.
Tom Lane [Tue, 18 Oct 2011 21:10:56 +0000 (17:10 -0400)]
Fix pg_dump to dump casts between auto-generated types.
The heuristic for when to dump a cast failed for a cast between table
rowtypes, as reported by Frédéric Rejol. Fix it by setting
the "dump" flag for such a type the same way as the flag is set for the
underlying table or base type. This won't result in the auto-generated
type appearing in the output, since setting its objType to DO_DUMMY_TYPE
unconditionally suppresses that. But it will result in dumpCast doing what
was intended.
Back-patch to 8.3. The 8.2 code is rather different in this area, and it
doesn't seem worth any risk to fix a corner case that nobody has stumbled
on before.
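As an illustration of the kind of cast involved (table and function names hypothetical), a user-defined cast between two table rowtypes such as the following was previously skipped by pg_dump:

    CREATE TABLE celsius (t numeric);
    CREATE TABLE fahrenheit (t numeric);
    CREATE FUNCTION c_to_f(celsius) RETURNS fahrenheit
        AS $$ SELECT ROW($1.t * 9 / 5 + 32)::fahrenheit $$ LANGUAGE sql;
    -- A cast between two auto-generated rowtypes; pg_dump now emits it.
    CREATE CAST (celsius AS fahrenheit) WITH FUNCTION c_to_f(celsius);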
Tom Lane [Sun, 16 Oct 2011 23:15:04 +0000 (19:15 -0400)]
Avoid assuming that index-only scan data matches the index's rowtype.
In general the data returned by an index-only scan should have the
datatypes originally computed by FormIndexDatum. If the index opclasses
use "storage" datatypes different from their input datatypes, the scan
tuple will not have the same rowtype attributed to the index; but we had
a hard-wired assumption that that was true in nodeIndexonlyscan.c. We'd
already hacked around the issue for the one case where the types are
different in btree indexes (btree name_ops), but this would definitely
come back to bite us if we ever implement index-only scans in GiST.
To fix, require the index AM to explicitly provide the tupdesc for the
tuple it is returning. btree can just pass back the index's tupdesc, but
GiST will have to work harder when and if it supports index-only scans.
I had previously proposed fixing this by allowing the index AM to fill the
scan tuple slot directly; but on reflection that seemed like a module
layering violation, since TupleTableSlots are creatures of the executor.
At least in the btree case, it would also be less efficient, since the
tuple deconstruction work would occur even for rows later found to be
invisible to the scan's snapshot.
Tom Lane [Sat, 15 Oct 2011 17:02:37 +0000 (13:02 -0400)]
Marginal improvements to documentation of plpgsql's OPEN cursor statement.
Rearrange text to improve clarity, and add an example of implicit reference
to a plpgsql variable in a bound cursor's query. Byproduct of some work
I'd done on the "named cursor parameters" patch before giving up on it.
Bruce Momjian [Sat, 15 Oct 2011 00:26:28 +0000 (20:26 -0400)]
Allow psql to fall back to a .psqlrc file matching the major PG version
when no file matching the exact minor version exists. This avoids needing
to rename .psqlrc files after minor version upgrades.
Tom Lane [Sat, 15 Oct 2011 00:24:17 +0000 (20:24 -0400)]
Fix bugs in information_schema.referential_constraints view.
This view was being insufficiently careful about matching the FK constraint
to the depended-on primary or unique key constraint. That could result in
failure to show an FK constraint at all, or showing it multiple times, or
claiming that it depended on a different constraint than the one it really
does. Fix by joining via pg_depend to ensure that we find only the correct
dependency.
Back-patch, but don't bump catversion because we can't force initdb in back
branches. The next minor-version release notes should explain that if you
need to fix this in an existing installation, you can drop the
information_schema schema then re-create it by sourcing
$SHAREDIR/information_schema.sql in each database (as a superuser of
course).
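A sketch of that repair in psql (substitute your installation's actual share directory, e.g. the output of pg_config --sharedir, for $SHAREDIR):

    -- Run as a superuser, once per database; psql does not expand
    -- $SHAREDIR, so paste the real path instead.
    DROP SCHEMA information_schema CASCADE;
    \i $SHAREDIR/information_schema.sql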
Tom Lane [Fri, 14 Oct 2011 21:23:01 +0000 (17:23 -0400)]
Measure the number of all-visible pages for use in index-only scan costing.
Add a column pg_class.relallvisible to remember the number of pages that
were all-visible according to the visibility map as of the last VACUUM
(or ANALYZE, or some other operations that update pg_class.relpages).
Use relallvisible/relpages, instead of an arbitrary constant, to estimate
how many heap page fetches can be avoided during an index-only scan.
This is pretty primitive and will no doubt see refinements once we've
acquired more field experience with the index-only scan mechanism, but
it's way better than using a constant.
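For example, the new column and the fraction now feeding the cost estimate can be inspected like this (table name hypothetical):

    SELECT relname, relpages, relallvisible,
           relallvisible::float8 / GREATEST(relpages, 1) AS allvisible_frac
    FROM pg_class
    WHERE relname = 'my_table';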
Note: I had to adjust an underspecified query in the window.sql regression
test, because it was changing answers when the plan changed to use an
index-only scan. Some of the adjacent tests perhaps should be adjusted
as well, but I didn't do that here.
Robert Haas [Fri, 14 Oct 2011 18:16:02 +0000 (14:16 -0400)]
Dump all roles first, then all config settings on roles.
This way, if a role's config setting uses the name of another role,
the validity of the dump isn't dependent on the order in which those
two roles are dumped.
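For instance (role names hypothetical), a setting like the following now restores cleanly regardless of which role pg_dumpall happens to emit first:

    CREATE ROLE bob;
    CREATE ROLE alice;
    -- alice's per-role setting refers to bob by name; with all CREATE ROLE
    -- commands dumped first, bob is guaranteed to exist when this replays.
    ALTER ROLE alice SET role = 'bob';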
Tom Lane [Thu, 13 Oct 2011 22:02:43 +0000 (18:02 -0400)]
Fix up Perl-to-Postgres datatype conversions in pl/perl.
This patch restores the pre-9.1 behavior that pl/perl functions returning
VOID ignore the result value of their last Perl statement. 9.1.0
unintentionally threw an error if the last statement returned a reference,
as reported by Amit Khandekar.
Also, make sure it works to return a string value for a composite type,
so long as the string meets the type's input format. We already allowed
the equivalent behavior for arrays, so it seems inconsistent to not allow
it for composites.
In addition, ensure we throw errors for attempts to return arrays or hashes
when the function's declared result type is not an array or composite type,
respectively. Pre-9.1 versions rather uselessly returned strings like
ARRAY(0x221a9a0) or HASH(0x221aa90), while 9.1.0 threw an error for the
hash case and returned a garbage value for the array case.
Also, clean up assorted grotty coding in Perl array conversion, including
use of a session-lifespan memory context to accumulate the array value
(resulting in session-lifespan memory leak on error), failure to apply the
declared typmod if any, and failure to detect some cases of non-rectangular
multi-dimensional arrays.
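For example, returning a composite value as a string now works as long as the string is valid input for the type (assumes plperl is installed; type and function names hypothetical):

    CREATE TYPE pair AS (a int, b int);
    CREATE FUNCTION make_pair() RETURNS pair LANGUAGE plperl AS $$
        return "(1,2)";   # a string in the composite type's input format
    $$;
    SELECT * FROM make_pair();   -- a = 1, b = 2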
Tom Lane [Wed, 12 Oct 2011 22:40:09 +0000 (18:40 -0400)]
Don't mark auto-generated types as extension members.
Relation rowtypes and automatically-generated array types do not need to
have their own extension membership dependency entries. If we create such
then it becomes more difficult to remove items from an extension, and it's
also harder for an extension upgrade script to make sure it duplicates the
dependencies created by the extension's regular installation script.
I changed the code to create such entries in commit 988cccc620dd8c16d77f88ede167b22056176324, I think because of worries about
the shell-type-replacement case; but that cure was worse than the disease.
It would only matter if one extension created a shell type that was
replaced with an auto-generated type in another extension, which seems
pretty far-fetched. Better to make this work unsurprisingly in normal
cases.
Report and patch by Robert Haas, comment adjustments by me.
Bruce Momjian [Wed, 12 Oct 2011 19:45:46 +0000 (15:45 -0400)]
Modify pgindent to use a renamed pg_bsd_indent binary. New features
include the ability to supply a typedef file rather than listing typedefs
on the command line. Also improve the README.
Tom Lane [Wed, 12 Oct 2011 19:45:03 +0000 (15:45 -0400)]
Throw a useful error message if an extension script file is fed to psql.
We have seen one too many reports of people trying to use 9.1 extension
files in the old-fashioned way of sourcing them in psql. Not only does
that usually not work (due to failure to substitute for MODULE_PATHNAME
and/or @extschema@), but if it did work they'd get a collection of loose
objects not an extension. To prevent this, insert an \echo ... \quit
line that prints a suitable error message into each extension script file,
and teach commands/extension.c to ignore lines starting with \echo.
That should not only prevent any adverse consequences of loading a script
file the wrong way, but make it crystal clear to users that they need to
do it differently now.
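The guard line placed at the top of each script looks roughly like this (exact wording varies per extension):

    \echo Use "CREATE EXTENSION hstore" to load this file. \quit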
Tom Lane, following an idea of Andrew Dunstan's. Back-patch into 9.1
... there is not going to be much value in this if we wait till 9.2.
Tom Lane [Wed, 12 Oct 2011 17:59:30 +0000 (13:59 -0400)]
Improve documentation of psql's \q command.
The documentation neglected to explain its behavior in a script file
(it only ends execution of the script, not psql as a whole), and failed
to mention the long form \quit either.
Tom Lane [Tue, 11 Oct 2011 22:40:53 +0000 (18:40 -0400)]
Add comment on why pulling data from a "name" index column can't crash.
It's been bothering me for several days that pretending that the cstring
data stored in a btree name_ops column is really a "name" Datum could lead
to reading past the end of memory. However, given the current memory
layout used for index-only scans in the btree code, a crash is in fact not
possible. Document that so we don't break it. I have not thought of any
other solutions that aren't fairly ugly too, and most of them lose the
functionality of index-only scans on name columns altogether, so this seems
like the way to go.
Tom Lane [Tue, 11 Oct 2011 22:11:51 +0000 (18:11 -0400)]
Generate index-only scan tuple descriptor from the plan node's indextlist.
Dept. of second thoughts: as long as we've got that tlist hanging around
anyway, we can apply ExecTypeFromTL to it to get a suitable descriptor for
the ScanTupleSlot. This is a nicer solution than the previous one because
it eliminates some hard-wired knowledge about btree name_ops, and because
it avoids the somewhat shaky assumption that we needn't set up the scan
tuple descriptor in EXPLAIN_ONLY mode. It doesn't change what actually
happens at run-time though, and I'm still a bit nervous about that.
Tom Lane [Tue, 11 Oct 2011 18:20:06 +0000 (14:20 -0400)]
Rearrange the implementation of index-only scans.
This commit changes index-only scans so that data is read directly from the
index tuple without first generating a faux heap tuple. The only immediate
benefit is that indexes on system columns (such as OID) can be used in
index-only scans, but this is necessary infrastructure if we are ever to
support index-only scans on expression indexes. The executor is now ready
for that, though the planner still needs substantial work to recognize
the possibility.
To do this, Vars in index-only plan nodes have to refer to index columns
not heap columns. I introduced a new special varno, INDEX_VAR, to mark
such Vars to avoid confusion. (In passing, this commit renames the two
existing special varnos to OUTER_VAR and INNER_VAR.) This allows
ruleutils.c to handle them with logic similar to what we use for subplan
reference Vars.
Since index-only scans are now fundamentally different from regular
indexscans so far as their expression subtrees are concerned, I also chose
to change them to have their own plan node type (and hence, their own
executor source file).
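For instance, with this change an index on a system column can feed an index-only scan; depending on statistics and the visibility map, a query like this may now show one:

    -- pg_class_oid_index is an index on the system column "oid"; the plan
    -- may now be an Index Only Scan rather than a plain Index Scan.
    EXPLAIN SELECT oid FROM pg_class WHERE oid > 16384;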
Robert Haas [Tue, 11 Oct 2011 13:14:30 +0000 (09:14 -0400)]
Replace hardcoded switch in object_exists() with a lookup table.
There's no particular advantage to this change on its face; indeed,
it's possible that this might be slightly slower than the old way.
But it makes this information more easily accessible to other
functions, and therefore paves the way for future code consolidation.
Performance isn't critical here, so there's no need to be smart about
how we do the search.
This is a heavily cut-down version of a patch from KaiGai Kohei,
with several fixes by me. Additional review from Dimitri Fontaine.
Robert Haas [Mon, 10 Oct 2011 17:38:32 +0000 (13:38 -0400)]
Make the reference to "CREATE USER" in the CREATE ROLE page a link.
This might help to avoid confusion between the CREATE USER command,
and the deprecated CREATEUSER option to CREATE ROLE, as per a recent
complaint from Ron Adams. At any rate, having a cross-link here
seems like a good idea; two commands that are so similar should
reference each other.
Robert Haas [Mon, 10 Oct 2011 03:39:52 +0000 (23:39 -0400)]
Fix ALTER TABLE ONLY .. DROP CONSTRAINT.
When I consolidated two copies of the HOT-chain search logic in commit 4da99ea4231e3d8bbf28b666748c1028e7b7d665, I introduced a behavior
change: the old code wouldn't necessarily traverse the entire chain,
if the most recently returned tuple was updated while the HOT chain
traversal was in progress. The new behavior seems more correct, but
unfortunately, the code here relies on a scan with SnapshotNow failing
to see its own updates. That seems pretty shaky even with the old HOT
chain traversal behavior, since there's no guarantee that these
updates will always be HOT, but it's trivial to provoke a failure with
the new HOT search logic. Fix by updating just the first matching
pg_constraint tuple, rather than all of them, since there should be
only one anyway. But since nobody has reproduced this failure on older
versions, no back-patch for now.
Report and test case by Alex Hunsaker; tablecmds.c changes by me.
Robert Haas [Mon, 10 Oct 2011 02:20:44 +0000 (22:20 -0400)]
Revert accidental change to pg_config_manual.h.
This was broken in commit 53dbc27c62d8e1b6c5253feba04a5094cb8fe046, which
introduced unlogged tables. Fortunately, as debugging tools go, this one
is pretty cheap, which is probably why it took nine months for someone to
notice, but it's not intended to be enabled by default, so revert.
The original idea of this patch was to make box picksplit run faster, by
eliminating unnecessary palloc() overhead, but that was obsoleted by the new
double-sorting split algorithm that doesn't call these functions so heavily
anymore. Nevertheless, the code looks better this way.
Original patch by me, reviewed and tidied up after the double-sorting patch
by Kevin Grittner.
Tom Lane [Sun, 9 Oct 2011 04:21:08 +0000 (00:21 -0400)]
Improve index-only scans to avoid repeated access to the index page.
We copy all the matched tuples off the page during _bt_readpage, instead of
expensively re-locking the page during each subsequent tuple fetch. This
costs a bit more local storage, but not more than 2*BLCKSZ worth, and the
reduction in LWLock traffic is certainly worth that. What's more, this
lets us get rid of the API wart in the original patch that said an index AM
could randomly decline to supply an index tuple despite having asserted
pg_am.amcanreturn. That will be important for future improvements in the
index-only-scan feature, since the executor will now be able to rely on
having the index data available.
Tom Lane [Sun, 9 Oct 2011 03:45:58 +0000 (23:45 -0400)]
Prevent index-only scans in stats regression test.
An index-only scan bollixes the test, because the test expects to see the
idx_tup_fetch counter increase, which won't happen if heap fetches are
avoided. Per buildfarm results.
While at it, let's just make sure that enable_seqscan and enable_indexscan
are ON for this test ...
Don't let transform_null_equals=on affect CASE foo WHEN NULL ... constructs.
transform_null_equals is only supposed to affect "foo = NULL" expressions
given directly by the user, not the internal "foo = NULL" expression
generated from CASE-WHEN.
This fixes bug #6242, reported by Sergey. Backpatch to all supported
branches.
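For example, under the stated GUC setting, the implicit equality generated by a simple CASE is no longer rewritten:

    SET transform_null_equals = on;
    -- The internal "1 = NULL" comparison is left alone, so the result is
    -- 'no match', not 'match'.
    SELECT CASE 1 WHEN NULL THEN 'match' ELSE 'no match' END;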
Tom Lane [Sat, 8 Oct 2011 00:13:02 +0000 (20:13 -0400)]
Support index-only scans using the visibility map to avoid heap fetches.
When a btree index contains all columns required by the query, and the
visibility map shows that all tuples on a target heap page are
visible-to-all, we don't need to fetch that heap page. This patch depends
on the previous patches that made the visibility map reliable.
There's a fair amount left to do here, notably trying to figure out a less
chintzy way of estimating the cost of an index-only scan, but the core
functionality seems ready to commit.
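A minimal sketch of the feature in action (table and index names hypothetical; the plan choice still depends on costing and on the visibility map being set by VACUUM):

    CREATE TABLE ios_demo AS
        SELECT g AS id FROM generate_series(1, 100000) AS g;
    CREATE INDEX ios_demo_id_idx ON ios_demo (id);
    VACUUM ios_demo;                       -- sets visibility map bits
    EXPLAIN SELECT id FROM ios_demo WHERE id < 100;
    -- expected: Index Only Scan using ios_demo_id_idx on ios_demo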
Robert Haas and Ibrar Ahmed, with some previous work by Heikki Linnakangas.
Magnus Hagander [Thu, 6 Oct 2011 19:43:14 +0000 (21:43 +0200)]
Ensure walsenders can be SIGTERMed while in non-walsender code
In order to exit on SIGTERM when in non-walsender code,
such as do_pg_stop_backup(), we need to set the interrupt
variables that are used there, and not just the walsender
local ones.
Robert Haas [Thu, 6 Oct 2011 16:08:59 +0000 (12:08 -0400)]
Make pgstatindex respond to cancel interrupts.
A similar problem for pgstattuple() was fixed in April of 2010 by commit 33065ef8bc52253ae855bc959576e52d8a28ba06, but pgstatindex() seems to have
been overlooked.
Back-patch all the way, as with that commit, though not to 7.4 through
8.1, since those are now EOL.
Bruce Momjian [Thu, 6 Oct 2011 13:38:39 +0000 (09:38 -0400)]
Add postmaster -C option to query configuration parameters, and have
pg_ctl use that to query the data directory for config-only installs.
This fixes awkward or impossible pg_ctl operation for config-only
installs.
Replace the "New Linear" GiST split algorithm for boxes and points with a
new double-sorting algorithm. The new algorithm produces better quality
trees, making searches faster.
Tom Lane [Thu, 6 Oct 2011 00:44:16 +0000 (20:44 -0400)]
Improve and simplify CREATE EXTENSION's management of GUC variables.
CREATE EXTENSION needs to transiently set search_path, as well as
client_min_messages and log_min_messages. We were doing this by the
expedient of saving the current string value of each variable, doing a
SET LOCAL, and then doing another SET LOCAL with the previous value at
the end of the command. This is a bit expensive though, and it also fails
badly if there is anything funny about the existing search_path value,
as seen in a recent report from Roger Niederland. Fortunately, there's a
much better way, which is to piggyback on the GUC infrastructure previously
developed for functions with SET options. We just open a new GUC nesting
level, do our assignments with GUC_ACTION_SAVE, and then close the nesting
level when done. This automatically restores the prior settings without a
re-parsing pass, so (in principle anyway) there can't be an error. And
guc.c still takes care of cleanup in event of an error abort.
The CREATE EXTENSION code for this was modeled on some much older code in
ri_triggers.c, which I also changed to use the better method, even though
there wasn't really much risk of failure there. Also improve the comments
in guc.c to reflect this additional usage.
Tom Lane [Tue, 4 Oct 2011 23:57:21 +0000 (19:57 -0400)]
Improve define_custom_variable's handling of pre-existing settings.
Arrange for any problems with pre-existing settings to be reported as
WARNING not ERROR, so that we don't undesirably abort the loading of the
incoming add-on module. The bad setting is just discarded, as though it
had never been applied at all. (This requires a change in the API of
set_config_option. After some thought I decided the most potentially
useful addition was to allow callers to just pass in a desired elevel.)
Arrange to restore the complete stacked state of the variable, rather than
cheesily reinstalling only the active value. This ensures that custom GUCs
will behave unsurprisingly even when the module loading operation occurs
within nested subtransactions that have changed the active value. Since a
module load could occur as a result of, eg, a PL function call, this is not
an unlikely scenario.
Tom Lane [Tue, 4 Oct 2011 20:47:48 +0000 (16:47 -0400)]
Add sourcefile/sourceline data to EXEC_BACKEND GUC transmission files.
This oversight meant that on Windows, the pg_settings view would not
display source file or line number information for values coming from
postgresql.conf, unless the backend had received a SIGHUP since starting.
In passing, also make the error detection in read_nondefault_variables a
tad more thorough, and fix it to not lose precision on float GUCs (these
changes are already in HEAD as of my previous commit).
Tom Lane [Tue, 4 Oct 2011 20:13:16 +0000 (16:13 -0400)]
Remember the source GucContext for each GUC parameter.
We used to just remember the GucSource, but saving GucContext too provides
a little more information --- notably, whether a SET was done by a
superuser or regular user. This allows us to rip out the fairly dodgy code
that define_custom_variable used to use to try to infer the context to
re-install a pre-existing setting with. In particular, it now works for
a superuser to SET an extension's SUSET custom variable before loading the
associated extension, because GUC can remember whether the SET was done as
a superuser or not. The plperl regression tests contain an example where
this is useful.
Use callbacks in SlruScanDirectory for the actual action
Previously, the code assumed that the only possible action to take was
to delete files behind a certain cutoff point. The async notify code
was already a crock: it used a different "pagePrecedes" function for
truncation than for regular operation. By allowing it to pass a
callback to SlruScanDirectory it can do cleanly exactly what it needs to
do.
The clog.c code also had its own use for SlruScanDirectory, which is
made a bit simpler with this.
Tom Lane [Tue, 4 Oct 2011 16:36:18 +0000 (12:36 -0400)]
Remove the custom_variable_classes parameter.
This variable provides only marginal error-prevention capability (since
it can only check the prefix of a qualified GUC name), and the consensus
is that that isn't worth the amount of hassle that maintaining the setting
creates for DBAs. So, let's just remove it.
With this commit, the system will silently accept a value for any qualified
GUC name at all, whether it has anything to do with any known extension or
not. (Unqualified names still have to match known built-in settings,
though; and you will get a WARNING at extension load time if there's an
unrecognized setting with that extension's prefix.)
There's still some discussion ongoing about whether to tighten that up and
if so how; but if we do come up with a solution, it's not likely to look
anything like custom_variable_classes.
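For instance (prefix and setting name hypothetical), this now just works without any prior declaration:

    SET myapp.batch_size = '100';   -- accepted: qualified name, no class list
    SHOW myapp.batch_size;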
Tom Lane [Mon, 3 Oct 2011 16:13:15 +0000 (12:13 -0400)]
ProcedureCreate neglected to record dependencies on default expressions.
Thus, an object referenced in a default expression could be dropped while
the function remained present. This was unaccountably missed in the
original patch to add default parameters for functions. Reported by
Pavel Stehule.
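A sketch of the hazard being closed (object names hypothetical): the sequence referenced in the parameter default now has a recorded dependency, so it can no longer be dropped out from under the function:

    CREATE SEQUENCE s;
    CREATE FUNCTION f(x bigint DEFAULT nextval('s')) RETURNS bigint
        LANGUAGE sql AS 'SELECT $1';
    DROP SEQUENCE s;   -- now fails unless CASCADE is used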
Tom Lane [Sun, 2 Oct 2011 20:50:04 +0000 (16:50 -0400)]
Restructure error handling in reading of postgresql.conf.
This patch has two distinct purposes: to report multiple problems in
postgresql.conf rather than always bailing out after the first one,
and to change the policy for whether changes are applied when there are
unrelated errors in postgresql.conf.
Formerly the policy was to apply no changes if any errors could be
detected, but that had a significant consistency problem, because in some
cases specific values might be seen as valid by some processes but invalid
by others. This meant that the latter processes would fail to adopt
changes in other parameters even though the former processes had done so.
The new policy is that during SIGHUP, the file is rejected as a whole
if there are any errors in the "name = value" syntax, or if any lines
attempt to set nonexistent built-in parameters, or if any lines attempt
to set custom parameters whose prefix is not listed in (the new value of)
custom_variable_classes. These tests should always give the same results
in all processes, and provide what seems a reasonably robust defense
against loading values from badly corrupted config files. If these tests
pass, all processes will apply all settings that they individually see as
good, ignoring (but logging) any they don't.
In addition, the postmaster does not abandon reading a configuration file
after the first syntax error, but continues to read the file and report
syntax errors (up to a maximum of 100 syntax errors per file).
The postmaster will still refuse to start up if the configuration file
contains any errors at startup time, but these changes allow multiple
errors to be detected and reported before quitting.
Alexey Klyukin, reviewed by Andy Colson and av (Alexander ?)
with some additional hacking by Tom Lane
Tom Lane [Sat, 1 Oct 2011 18:01:46 +0000 (14:01 -0400)]
Improve generated column names for cases involving sub-SELECTs.
We'll now use "exists" for EXISTS(SELECT ...), "array" for ARRAY(SELECT
...), or the sub-select's own result column name for a simple expression
sub-select. Previously, you usually got "?column?" in such cases.
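For example, a query of this shape now gets meaningful column names:

    SELECT EXISTS (SELECT 1),
           ARRAY (SELECT 2),
           (SELECT 3 AS three);
    -- result columns are now named "exists", "array", and "three"
    -- rather than "?column?"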
Tom Lane [Sat, 1 Oct 2011 03:54:27 +0000 (23:54 -0400)]
Cache the result of makesign() across calls of gtrgm_penalty().
Since gtrgm_penalty() is usually called many times in a row with the same
"newval" (to determine which item on an index page newval fits into best),
the makesign() calculation is repetitious. It's expensive enough to make
it worth caching the result, so do so. On my machine this is good for
more than a 40% savings in the time needed to build a trigram index on
/usr/share/dict/words. This is all per a suggestion of Heikki's.
In passing, make some mostly-cosmetic improvements in the caching logic in
the other functions in this file that rely on caching info in fn_extra.
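For context, the savings apply to building a trigram GiST index, e.g. (assumes the pg_trgm module is installed and a words(word text) table exists):

    CREATE EXTENSION pg_trgm;
    CREATE INDEX words_trgm_idx ON words USING gist (word gist_trgm_ops);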
Tom Lane [Fri, 30 Sep 2011 23:48:57 +0000 (19:48 -0400)]
Support GiST index support functions that want to cache data across calls.
pg_trgm was already doing this unofficially, but the implementation hadn't
been thought through very well and leaked memory. Restructure the core
GiST code so that it actually works, and document it. Ordinarily this
would have required an extra memory context creation/destruction for each
GiST index search, but I was able to avoid that in the normal case of a
non-rescanned search by finessing the handling of the RBTree. It used to
have its own context always, but now shares a context with the
scan-lifespan data structures, unless there is more than one rescan call.
This should make the added overhead unnoticeable in typical cases.
Tom Lane [Thu, 29 Sep 2011 22:12:34 +0000 (18:12 -0400)]
Fix recursion into previously planned sub-query in examine_simple_variable.
This code was looking at the sub-Query tree as seen in the parent query's
RangeTblEntry; but that's the pristine parser output, and what we need to
look at is the tree as it stands at the completion of planning. Otherwise
we might pick up a Var that references a subquery that got flattened and
hence has no RelOptInfo in the subroot. Per report from Peter Geoghegan.
Tom Lane [Thu, 29 Sep 2011 04:43:42 +0000 (00:43 -0400)]
Fix index matching for operators with mixed collatable/noncollatable inputs.
If an indexable operator for a non-collatable indexed datatype has a
collatable right-hand input type, any OpExpr for it will be marked with a
nonzero inputcollid (since having one collatable input is sufficient to
make that happen). However, an index on a non-collatable column certainly
doesn't have any collation. This caused us to fail to match such operators
to their indexes, because indxpath.c required an exact match of index
collation and clause collation. It seems correct to allow a match when the
index is collation-less regardless of the clause's inputcollid: an operator
with both noncollatable and collatable inputs could perhaps depend on the
collation of the collatable input, but it could hardly expect the index for
the noncollatable input to have that same collation.
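The reported shape of the problem, roughly (assumes the hstore module; table and index names hypothetical):

    CREATE EXTENSION hstore;
    CREATE TABLE h (attrs hstore);
    CREATE INDEX h_attrs_idx ON h USING gist (attrs);
    -- The text constant gives the OpExpr a collation, but the hstore index
    -- has none; the clause can now still match the index.
    EXPLAIN SELECT * FROM h WHERE attrs ? 'color';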
Per bug #6232 from Pierre Ducroquet. His example is specifically about
"hstore ? text" but the problem seems quite generic.
Bruce Momjian [Thu, 29 Sep 2011 02:30:44 +0000 (22:30 -0400)]
In pg_upgrade, because toast table names can be mismatched with the heap
oid on 8.4, restrict the toast name comparison test to old servers running
9.0 or later. (The test previously applied to 8.4 and later.)
Tom Lane [Wed, 28 Sep 2011 23:39:54 +0000 (19:39 -0400)]
Update and extend the EXPLAIN-related documentation.
I've made a significant effort at filling in the "Using EXPLAIN" section
to be reasonably complete about mentioning everything that EXPLAIN can
output, including the "Rows Removed" outputs that were added by Marko
Tiikkaja's recent documentation-free patch. I also updated the examples to
be consistent with current behavior; several of them were not close to what
the current code will do. No doubt there's more that can be done here, but
I'm out of patience for today.
Tom Lane [Wed, 28 Sep 2011 00:07:15 +0000 (20:07 -0400)]
Take sepgsql regression tests out of the regular regression test mechanism.
Because these tests require root privileges, not to mention invasive
changes to the security configuration of the host system, it's not
reasonable for them to be invoked by a regular "make check" or "make
installcheck". Instead, dike out the Makefile's knowledge of the tests,
and change chkselinuxenv (now renamed "test_sepgsql") into a script that
verifies the environment is workable and then runs the tests. It's
expected that test_sepgsql will only be run manually.
While at it, do some cleanup in the error checking in the script, and
do some wordsmithing in the documentation.
Remove dependency on error ordering in isolation tests
We now report errors reported by the just-unblocked and unblocking
transactions identically; this should fix relatively common buildfarm
failures reported by animals that are failing the "wrong" session.
Robert Haas [Tue, 27 Sep 2011 13:30:23 +0000 (09:30 -0400)]
Update comments related to the crash-safety of the visibility map.
In hio.c, document how we avoid deadlock with respect to visibility map
buffer locks. In visibilitymap.c, update the LOCKING section of the
file header comment.
Tom Lane [Tue, 27 Sep 2011 03:48:39 +0000 (23:48 -0400)]
Fix window functions that sort by expressions involving aggregates.
In commit c1d9579dd8bf3c921ca6bc2b62c40da6d25372e5, I changed things so
that the output of the Agg node that feeds the window functions would not
list any ungrouped Vars directly. Formerly, for example, the Agg tlist
might have included both "x" and "sum(x)", which is not really valid if
"x" isn't a grouping column. If we then had a window function ordering on
something like "sum(x) + 1", prepare_sort_from_pathkeys would find no exact
match for this in the Agg tlist, and would conclude that it must recompute
the expression. But it would break the expression down to just the Var
"x", which it would find in the tlist, and then rebuild the ORDER BY
expression using a reference to the subplan's "x" output. Now, after the
above-referenced changes, "x" isn't in the Agg tlist if it's not a grouping
column, so that prepare_sort_from_pathkeys fails with "could not find
pathkey item to sort", as reported by Bricklen Anderson.
The fix is to not break down Aggrefs into their component parts, but just
treat them as irreducible expressions to be sought in the subplan tlist.
This is definitely OK for the use with respect to window functions in
grouping_planner, since it just built the tlist being used on the same
basis. AFAICT it is safe for other uses too; most of the other call sites
couldn't encounter Aggrefs anyway.
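A query of roughly the failing shape (hypothetical data), which previously drew "could not find pathkey item to sort":

    SELECT y,
           sum(x) AS total,
           rank() OVER (ORDER BY sum(x) + 1)
    FROM (VALUES (1, 1), (2, 1), (3, 2)) AS t(x, y)
    GROUP BY y;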
Tom Lane [Tue, 27 Sep 2011 02:25:28 +0000 (22:25 -0400)]
Allow snapshot references to still work during transaction abort.
In REPEATABLE READ (nee SERIALIZABLE) mode, an attempt to do
GetTransactionSnapshot() between AbortTransaction and CleanupTransaction
failed, because GetTransactionSnapshot would recompute the transaction
snapshot (which is already wrong, given the isolation mode) and then
re-register it in the TopTransactionResourceOwner, leading to an Assert
because the TopTransactionResourceOwner should be empty of resources after
AbortTransaction. This is the root cause of bug #6218 from Yamamoto
Takashi. While changing plancache.c to avoid requesting a snapshot when
handling a ROLLBACK masks the problem, I think this is really a snapmgr.c
bug: it's lower-level than the resource manager mechanism and should not be
shutting itself down before we unwind resource manager resources. However,
just postponing the release of the transaction snapshot until cleanup time
didn't work because of the circular dependency with
TopTransactionResourceOwner. Fix by managing the internal reference to
that snapshot manually instead of depending on TopTransactionResourceOwner.
This saves a few cycles as well as making the module layering more
straightforward. predicate.c's dependencies on TopTransactionResourceOwner
go away too.
I think this is a longstanding bug, but there's no evidence that it's more
than a latent bug, so it doesn't seem worth any risk of back-patching.
Tom Lane [Mon, 26 Sep 2011 16:44:17 +0000 (12:44 -0400)]
Use a fresh copy of query_list when making a second plan in GetCachedPlan.
The code path that tried a generic plan, didn't like it, and then made a
custom plan was mistakenly passing the same copy of the query_list to the
planner both times. This doesn't work too well for nontrivial queries,
since the planner tends to scribble on its input. Diagnosis and fix by
Yamamoto Takashi.