Fix the number of lwlocks needed by the "fast path" lock patch. It needs
one lock per backend or auxiliary process - the need for a lock for each
aux process was not accounted for in NumLWLocks(). No one noticed,
because the three locks needed for the three aux processes fit into the
few extra lwlocks we allocate for 3rd party modules that don't call
RequestAddinLWLocks() (NUM_USER_DEFINED_LWLOCKS, 4 by default).
Tom Lane [Thu, 27 Oct 2011 19:21:51 +0000 (15:21 -0400)]
Avoid recursion while processing ELSIF lists in plpgsql.
The original implementation of ELSIF in plpgsql converted the construct
into nested simple IF statements. This was prone to stack overflow with
long ELSIF lists, in two different ways. First, it's difficult to generate
the parsetree without using right-recursion in the bison grammar, and
that's prone to parser stack overflow since nothing can be reduced until
the whole list has been read. Second, we'd recurse during execution, thus
creating an unnecessary risk of execution-time stack overflow. Rewrite
so that the ELSIF list is represented as a flat list, scanned via iteration
not recursion, and generated through left-recursion in the grammar.
Per a gripe from Håvard Kongsgård.
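A minimal sketch, with a hypothetical function and thresholds, of the
kind of chain this affects:
    CREATE FUNCTION classify(n int) RETURNS text LANGUAGE plpgsql AS $$
    BEGIN
        IF n < 10 THEN RETURN 'small';
        ELSIF n < 100 THEN RETURN 'medium';
        ELSIF n < 1000 THEN RETURN 'large';
        -- ... hundreds more ELSIF arms no longer risk parser or
        -- execution-time stack overflow
        ELSE RETURN 'huge';
        END IF;
    END;
    $$;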
Tom Lane [Thu, 27 Oct 2011 17:50:57 +0000 (13:50 -0400)]
Add simple script to check for right recursion in Bison grammars.
We should generally use left-recursion not right-recursion to parse lists.
Bison hasn't got any built-in way to check for this type of inefficiency,
and I didn't find anything on the net in a quick search, so I wrote a
little Perl script to do it. Add to src/tools/ so we don't have to
re-invent this wheel next time we wonder if we're doing anything stupid.
Currently, the only place that seems to need fixing is plpgsql's stmt_else
production, so the problem doesn't appear to be common enough to warrant
trying to include such a test in our standard build process. If we did
want to do that, we'd need a way to ignore some false positives, such as
a_expr := '-' a_expr
Tom Lane [Wed, 26 Oct 2011 21:52:02 +0000 (17:52 -0400)]
Improve planner's ability to recognize cases where an IN's RHS is unique.
If the right-hand side of a semijoin is unique, then we can treat it like a
normal join (or another way to say that is: we don't need to explicitly
unique-ify the data before doing it as a normal join). We were recognizing
such cases when the RHS was a sub-query with appropriate DISTINCT or GROUP
BY decoration, but there's another way: if the RHS is a plain relation with
unique indexes, we can check if any of the indexes prove the output is
unique. Most of the infrastructure for that was there already in the join
removal code, though I had to rearrange it a bit. Per reflection about a
recent example in pgsql-performance.
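A minimal sketch of the newly recognized case, using hypothetical tables:
    CREATE TABLE parts (id int PRIMARY KEY, name text);
    CREATE TABLE orders (part_id int, qty int);
    -- the unique index behind parts' primary key proves the IN's RHS is
    -- unique, so no explicit unique-ification step is needed
    EXPLAIN SELECT * FROM orders WHERE part_id IN (SELECT id FROM parts);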
Magnus Hagander [Wed, 26 Oct 2011 18:13:33 +0000 (20:13 +0200)]
Implement streaming xlog for backup tools
Add option for parallel streaming of the transaction log while a
base backup is running, to get the logfiles before the server has
removed them.
Also add a tool called pg_receivexlog, which streams the transaction
log into files, creating a log archive without having to wait for
segments to complete, thus decreasing the window of data loss without
having to waste space using archive_timeout. This works best in
combination with archive_command; suggested usage docs etc. will come later.
Tom Lane [Wed, 26 Oct 2011 17:19:42 +0000 (13:19 -0400)]
Change FK trigger naming convention to fix self-referential FKs.
Use names like "RI_ConstraintTrigger_a_NNNN" for FK action triggers and
"RI_ConstraintTrigger_c_NNNN" for FK check triggers. This ensures the
action trigger fires first in self-referential cases where the very same
row update fires both an action and a check trigger. This change provides
a non-probabilistic solution for bug #6268, at the risk that it could break
client code that is making assumptions about the exact names assigned to
auto-generated FK triggers. Hence, change this in HEAD only. No need for
forced initdb since old triggers continue to work fine.
Tom Lane [Wed, 26 Oct 2011 17:02:28 +0000 (13:02 -0400)]
Change FK trigger creation order to better support self-referential FKs.
When a foreign-key constraint references another column of the same table,
row updates will queue both the PK's ON UPDATE action and the FK's CHECK
action in the same event. The ON UPDATE action must execute first, else
the CHECK will check a non-final state of the row and possibly throw an
inappropriate error, as seen in bug #6268 from Roman Lytovchenko.
Now, the firing order of multiple triggers for the same event is determined
by the sort order of their pg_trigger.tgnames, and the auto-generated names
we use for FK triggers are "RI_ConstraintTrigger_NNNN" where NNNN is the
trigger OID. So most of the time the firing order is the same as creation
order, and so rearranging the creation order fixes it.
This patch will fail to fix the problem if the OID counter wraps around or
adds a decimal digit (eg, from 99999 to 100000) while we are creating the
triggers for an FK constraint. Given the small odds of that, and the low
usage of self-referential FKs, we'll live with that solution in the back
branches. A better fix is to change the auto-generated names for FK
triggers, but it seems unwise to do that in stable branches because there
may be client code that depends on the naming convention. We'll fix it
that way in HEAD in a separate patch.
Back-patch to all supported branches, since this bug has existed for a long
time.
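A sketch of the self-referential case in question (names hypothetical):
    CREATE TABLE node (
        id     int PRIMARY KEY,
        parent int REFERENCES node (id) ON UPDATE CASCADE
    );
    INSERT INTO node VALUES (1, 1);
    -- this single-row update queues both the ON UPDATE action and the FK
    -- check; the action must run first or the check sees a non-final row
    UPDATE node SET id = 2 WHERE id = 1;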
Tom Lane [Sun, 23 Oct 2011 18:34:36 +0000 (14:34 -0400)]
Improve git_changelog's handling of inconsistent commit orderings.
Use the CommitDate not the AuthorDate, as the former is representative of
the order in which things went into the main repository, and the latter
isn't very; we now have instances where the AuthorDate is as much as a
month before the patch really went in. Also, get rid of the "commit order
inversions" heuristic, which turns out not to do anything very desirable.
Instead we just print commits in strict timestamp order, interpreting the
"timestamp" of a merged commit as its timestamp on the newest branch it
appears in. This fixes some cases where very ancient commits were being
printed relatively early in the report.
Tom Lane [Sun, 23 Oct 2011 04:43:39 +0000 (00:43 -0400)]
Don't trust deferred-unique indexes for join removal.
The uniqueness condition might fail to hold intra-transaction, and assuming
it does can give incorrect query results. Per report from Marti Raudsepp,
though this is not his proposed patch.
Back-patch to 9.0, where both these features were introduced. In the
released branches, add the new IndexOptInfo field to the end of the struct,
to try to minimize ABI breakage for third-party code that may be examining
that struct.
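A sketch of the kind of constraint no longer trusted (names hypothetical):
    -- the index backing a deferrable unique constraint may transiently
    -- contain duplicates within a transaction, so it cannot be used to
    -- justify removing a join
    CREATE TABLE ref (id int UNIQUE DEFERRABLE INITIALLY DEFERRED, val text);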
Tom Lane [Sat, 22 Oct 2011 22:22:45 +0000 (18:22 -0400)]
Support synchronization of snapshots through an export/import procedure.
A transaction can export a snapshot with pg_export_snapshot(), and then
others can import it with SET TRANSACTION SNAPSHOT. The data does not
leave the server so there are no security issues. A snapshot can only
be imported while the exporting transaction is still running, and there
are some other restrictions.
I'm not totally convinced that we've covered all the bases for SSI (true
serializable) mode, but it works fine for lesser isolation modes.
Joachim Wieland, reviewed by Marko Tiikkaja, and rather heavily modified
by Tom Lane
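A usage sketch; the snapshot identifier shown is illustrative only:
    -- session 1: export a snapshot and keep the transaction open
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();        -- returns e.g. '000003A1-1'
    -- session 2: import it before running any query in the transaction
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '000003A1-1';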
Fix overly-complicated usage of errcode_for_file_access().
No need to do "errcode(errcode_for_file_access())", just
"errcode_for_file_access()" is enough. The extra errcode() call is useless
but harmless, so there's no user-visible bug here. Nevertheless, backpatch
to 9.1 where this code was added.
Tom Lane [Fri, 21 Oct 2011 20:36:04 +0000 (16:36 -0400)]
Code review for pgstat_get_crashed_backend_activity patch.
Avoid possibly dumping core when pgstat_track_activity_query_size has a
less-than-default value; avoid uselessly searching for the query string
of a successfully-exited backend; don't bother putting out an ERRDETAIL if
we don't have a query to show; some other minor stylistic improvements.
Tom Lane [Fri, 21 Oct 2011 17:49:51 +0000 (13:49 -0400)]
More cleanup after failed reduced-lock-levels-for-DDL feature.
Turns out that use of ShareUpdateExclusiveLock or ShareRowExclusiveLock
to protect DDL changes had gotten copied into several places that were
not touched by either of Simon's original patches for the feature, and
thus neither he nor I thought to revert them. (Indeed, it appears that
two of these uses were committed *after* the reversion, which just goes
to show that git merging is no panacea.) Change these places to use
AccessExclusiveLock again. If we ever manage to resurrect that feature,
we're going to have to think a bit harder about how to keep lock level
usage in sync for DDL operations that aren't within the AlterTable
infrastructure.
Two of these bugs are only in HEAD, but one is in the 9.1 branch too.
Alvaro found one of them, I found the other two.
Robert Haas [Fri, 21 Oct 2011 17:26:40 +0000 (13:26 -0400)]
Try to log the current query string when a backend crashes.
To minimize risk inside the postmaster, we subject this feature
to a number of significant limitations. We very much wish to avoid
doing any complex processing inside the postmaster, due to the
possibility that the crashed backend has completely corrupted shared
memory. To that end, no encoding conversion is done; instead, we just
replace anything that doesn't look like an ASCII character with a
question mark. We limit the amount of data copied to 1024 characters,
and carefully sanity check the source of that data. While these
restrictions would doubtless be unacceptable in a general-purpose
logging facility, even this limited facility seems like an improvement
over the status quo ante.
Tom Lane [Thu, 20 Oct 2011 23:43:31 +0000 (19:43 -0400)]
Simplify and improve ProcessStandbyHSFeedbackMessage logic.
There's no need to clamp the standby's xmin to be greater than
GetOldestXmin's result; if there were any such need this logic would be
hopelessly inadequate anyway, because it fails to account for
within-database versus cluster-wide values of GetOldestXmin. So get rid of
that, and just rely on sanity-checking that the xmin is not wrapped around
relative to the nextXid counter. Also, don't reset the walsender's xmin if
the current feedback xmin is indeed out of range; that just creates more
problems than we already had. Lastly, don't bother to take the
ProcArrayLock; there's no need to do that to set xmin.
Also improve the comments about this in GetOldestXmin itself.
Tom Lane [Thu, 20 Oct 2011 19:38:57 +0000 (15:38 -0400)]
Rewrite tab completion's previous-word fetching for more sanity.
Make it return empty strings when there are no more words to the left of
the current position, instead of sometimes returning NULL and other times
returning copies of the leftmost word. Also, fetch the words in one scan,
rather than the previous wasteful approach of starting from scratch for
each word. Make the code a bit harder to break when someone decides we
need more words of context, too. (There was actually a memory leak here,
because whoever added prev6_wd neglected to free it.)
Robert Haas [Thu, 20 Oct 2011 04:05:31 +0000 (00:05 -0400)]
Fix get_object_namespace() not to think extensions are "in" a schema.
extnamespace means something altogether different in this context.
Mostly by accident, this coding error (introduced in my commit
82a4a777d94bec965ab2f1d04b6e6a3f0447b377) broke the buildfarm instead
of just silently doing the wrong thing.
Tom Lane [Wed, 19 Oct 2011 01:37:51 +0000 (21:37 -0400)]
Suppress -Wunused-result warnings about write() and fwrite().
This is merely an exercise in satisfying pedants, not a bug fix, because
in every case we were checking for failure later with ferror(), or else
there was nothing useful to be done about a failure anyway. Document
the latter cases.
Tom Lane [Wed, 19 Oct 2011 00:09:18 +0000 (20:09 -0400)]
Reject empty pg_hba.conf files.
An empty HBA file is surely an error, since it means there is no way to
connect to the server. We've not heard identifiable reports of people
actually doing that, but this will also close off the case Thom Brown just
complained of, namely pointing hba_file at a directory. (On at least some
platforms with some directories, it will read as an empty file.)
Perhaps this should be back-patched, but given the lack of previous
complaints, I won't add extra work for the translators.
Tom Lane [Tue, 18 Oct 2011 21:39:14 +0000 (17:39 -0400)]
Remove unnecessary AssertMacro() to suppress gcc 4.6 compiler warning.
There's no particular value in doing AssertMacro((tup) != NULL) in front
of code that's certain to crash anyway if tup is NULL. And if "tup" is
actually the address of a local variable, gcc 4.6 whinges about it. That's
arguably pretty broken on gcc's part, but we might as well remove the
useless test to silence the warnings. This gets rid of all the -Waddress
warnings in the backend; there are some in libpq and psql that are a bit
harder to avoid.
Tom Lane [Tue, 18 Oct 2011 21:10:56 +0000 (17:10 -0400)]
Fix pg_dump to dump casts between auto-generated types.
The heuristic for when to dump a cast failed for a cast between table
rowtypes, as reported by Frédéric Rejol. Fix it by setting
the "dump" flag for such a type the same way as the flag is set for the
underlying table or base type. This won't result in the auto-generated
type appearing in the output, since setting its objType to DO_DUMMY_TYPE
unconditionally suppresses that. But it will result in dumpCast doing what
was intended.
Back-patch to 8.3. The 8.2 code is rather different in this area, and it
doesn't seem worth any risk to fix a corner case that nobody has stumbled
on before.
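A sketch of the affected case, with hypothetical tables and a
hypothetical conversion function:
    CREATE TABLE a (x int);
    CREATE TABLE b (x int);
    CREATE FUNCTION a_to_b(a) RETURNS b
        LANGUAGE sql AS $$ SELECT ROW(($1).x)::b $$;
    -- a cast between the two auto-generated rowtypes; pg_dump now dumps it
    CREATE CAST (a AS b) WITH FUNCTION a_to_b(a);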
Tom Lane [Sun, 16 Oct 2011 23:15:04 +0000 (19:15 -0400)]
Avoid assuming that index-only scan data matches the index's rowtype.
In general the data returned by an index-only scan should have the
datatypes originally computed by FormIndexDatum. If the index opclasses
use "storage" datatypes different from their input datatypes, the scan
tuple will not have the same rowtype attributed to the index; but we had
a hard-wired assumption that that was true in nodeIndexonlyscan.c. We'd
already hacked around the issue for the one case where the types are
different in btree indexes (btree name_ops), but this would definitely
come back to bite us if we ever implement index-only scans in GiST.
To fix, require the index AM to explicitly provide the tupdesc for the
tuple it is returning. btree can just pass back the index's tupdesc, but
GiST will have to work harder when and if it supports index-only scans.
I had previously proposed fixing this by allowing the index AM to fill the
scan tuple slot directly; but on reflection that seemed like a module
layering violation, since TupleTableSlots are creatures of the executor.
At least in the btree case, it would also be less efficient, since the
tuple deconstruction work would occur even for rows later found to be
invisible to the scan's snapshot.
Tom Lane [Sat, 15 Oct 2011 17:02:37 +0000 (13:02 -0400)]
Marginal improvements to documentation of plpgsql's OPEN cursor statement.
Rearrange text to improve clarity, and add an example of implicit reference
to a plpgsql variable in a bound cursor's query. Byproduct of some work
I'd done on the "named cursor parameters" patch before giving up on it.
Bruce Momjian [Sat, 15 Oct 2011 00:26:28 +0000 (20:26 -0400)]
Allow a psql .psqlrc file matching only the major PG version to be used
if a file matching the minor version does not exist. This avoids needing
to rename .psqlrc files after minor version upgrades.
Tom Lane [Sat, 15 Oct 2011 00:24:17 +0000 (20:24 -0400)]
Fix bugs in information_schema.referential_constraints view.
This view was being insufficiently careful about matching the FK constraint
to the depended-on primary or unique key constraint. That could result in
failure to show an FK constraint at all, or showing it multiple times, or
claiming that it depended on a different constraint than the one it really
does. Fix by joining via pg_depend to ensure that we find only the correct
dependency.
Back-patch, but don't bump catversion because we can't force initdb in back
branches. The next minor-version release notes should explain that if you
need to fix this in an existing installation, you can drop the
information_schema schema then re-create it by sourcing
$SHAREDIR/information_schema.sql in each database (as a superuser of
course).
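A sketch of that repair, run through psql as a superuser in each
database; $SHAREDIR stands for the installation's share directory
(pg_config --sharedir reports it):
    DROP SCHEMA information_schema CASCADE;
    \i $SHAREDIR/information_schema.sql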
Tom Lane [Fri, 14 Oct 2011 21:23:01 +0000 (17:23 -0400)]
Measure the number of all-visible pages for use in index-only scan costing.
Add a column pg_class.relallvisible to remember the number of pages that
were all-visible according to the visibility map as of the last VACUUM
(or ANALYZE, or some other operations that update pg_class.relpages).
Use relallvisible/relpages, instead of an arbitrary constant, to estimate
how many heap page fetches can be avoided during an index-only scan.
This is pretty primitive and will no doubt see refinements once we've
acquired more field experience with the index-only scan mechanism, but
it's way better than using a constant.
Note: I had to adjust an underspecified query in the window.sql regression
test, because it was changing answers when the plan changed to use an
index-only scan. Some of the adjacent tests perhaps should be adjusted
as well, but I didn't do that here.
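A quick way to inspect the new column (table name hypothetical):
    SELECT relname, relpages, relallvisible
    FROM pg_class
    WHERE relname = 'my_table';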
Robert Haas [Fri, 14 Oct 2011 18:16:02 +0000 (14:16 -0400)]
Dump all roles first, then all config settings on roles.
This way, if a role's config setting uses the name of another role,
the validity of the dump isn't dependent on the order in which those
two roles are dumped.
Tom Lane [Thu, 13 Oct 2011 22:02:43 +0000 (18:02 -0400)]
Fix up Perl-to-Postgres datatype conversions in pl/perl.
This patch restores the pre-9.1 behavior that pl/perl functions returning
VOID ignore the result value of their last Perl statement. 9.1.0
unintentionally threw an error if the last statement returned a reference,
as reported by Amit Khandekar.
Also, make sure it works to return a string value for a composite type,
so long as the string meets the type's input format. We already allowed
the equivalent behavior for arrays, so it seems inconsistent to not allow
it for composites.
In addition, ensure we throw errors for attempts to return arrays or hashes
when the function's declared result type is not an array or composite type,
respectively. Pre-9.1 versions rather uselessly returned strings like
ARRAY(0x221a9a0) or HASH(0x221aa90), while 9.1.0 threw an error for the
hash case and returned a garbage value for the array case.
Also, clean up assorted grotty coding in Perl array conversion, including
use of a session-lifespan memory context to accumulate the array value
(resulting in session-lifespan memory leak on error), failure to apply the
declared typmod if any, and failure to detect some cases of non-rectangular
multi-dimensional arrays.
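A sketch of the now-supported string-for-composite case (names
hypothetical):
    CREATE TYPE pair AS (a int, b int);
    CREATE FUNCTION make_pair() RETURNS pair LANGUAGE plperl AS $$
        return '(1,2)';    # a string in the composite type's input format
    $$;
    SELECT * FROM make_pair();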
Tom Lane [Wed, 12 Oct 2011 22:40:09 +0000 (18:40 -0400)]
Don't mark auto-generated types as extension members.
Relation rowtypes and automatically-generated array types do not need to
have their own extension membership dependency entries. If we create such
then it becomes more difficult to remove items from an extension, and it's
also harder for an extension upgrade script to make sure it duplicates the
dependencies created by the extension's regular installation script.
I changed the code in such a way that this happened in commit
988cccc620dd8c16d77f88ede167b22056176324, I think because of worries
about the shell-type-replacement case; but that cure was worse than
the disease.
It would only matter if one extension created a shell type that was
replaced with an auto-generated type in another extension, which seems
pretty far-fetched. Better to make this work unsurprisingly in normal
cases.
Report and patch by Robert Haas, comment adjustments by me.
Bruce Momjian [Wed, 12 Oct 2011 19:45:46 +0000 (15:45 -0400)]
Modify pgindent to use a renamed pg_bsd_indent binary. New features
include the ability to supply a typedef file, rather than list them on
the command line. Also improve the README.
Tom Lane [Wed, 12 Oct 2011 19:45:03 +0000 (15:45 -0400)]
Throw a useful error message if an extension script file is fed to psql.
We have seen one too many reports of people trying to use 9.1 extension
files in the old-fashioned way of sourcing them in psql. Not only does
that usually not work (due to failure to substitute for MODULE_PATHNAME
and/or @extschema@), but if it did work they'd get a collection of loose
objects not an extension. To prevent this, insert an \echo ... \quit
line that prints a suitable error message into each extension script file,
and teach commands/extension.c to ignore lines starting with \echo.
That should not only prevent any adverse consequences of loading a script
file the wrong way, but make it crystal clear to users that they need to
do it differently now.
Tom Lane, following an idea of Andrew Dunstan's. Back-patch into 9.1
... there is not going to be much value in this if we wait till 9.2.
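The inserted guard line looks roughly like this (hstore used here only
as an example module):
    \echo Use "CREATE EXTENSION hstore" to load this file. \quit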
Tom Lane [Wed, 12 Oct 2011 17:59:30 +0000 (13:59 -0400)]
Improve documentation of psql's \q command.
The documentation neglected to explain its behavior in a script file
(it only ends execution of the script, not psql as a whole), and failed
to mention the long form \quit either.
Tom Lane [Tue, 11 Oct 2011 22:40:53 +0000 (18:40 -0400)]
Add comment on why pulling data from a "name" index column can't crash.
It's been bothering me for several days that pretending that the cstring
data stored in a btree name_ops column is really a "name" Datum could lead
to reading past the end of memory. However, given the current memory
layout used for index-only scans in the btree code, a crash is in fact not
possible. Document that so we don't break it. I have not thought of any
other solutions that aren't fairly ugly too, and most of them lose the
functionality of index-only scans on name columns altogether, so this seems
like the way to go.
Tom Lane [Tue, 11 Oct 2011 22:11:51 +0000 (18:11 -0400)]
Generate index-only scan tuple descriptor from the plan node's indextlist.
Dept. of second thoughts: as long as we've got that tlist hanging around
anyway, we can apply ExecTypeFromTL to it to get a suitable descriptor for
the ScanTupleSlot. This is a nicer solution than the previous one because
it eliminates some hard-wired knowledge about btree name_ops, and because
it avoids the somewhat shaky assumption that we needn't set up the scan
tuple descriptor in EXPLAIN_ONLY mode. It doesn't change what actually
happens at run-time though, and I'm still a bit nervous about that.
Tom Lane [Tue, 11 Oct 2011 18:20:06 +0000 (14:20 -0400)]
Rearrange the implementation of index-only scans.
This commit changes index-only scans so that data is read directly from the
index tuple without first generating a faux heap tuple. The only immediate
benefit is that indexes on system columns (such as OID) can be used in
index-only scans, but this is necessary infrastructure if we are ever to
support index-only scans on expression indexes. The executor is now ready
for that, though the planner still needs substantial work to recognize
the possibility.
To do this, Vars in index-only plan nodes have to refer to index columns
not heap columns. I introduced a new special varno, INDEX_VAR, to mark
such Vars to avoid confusion. (In passing, this commit renames the two
existing special varnos to OUTER_VAR and INNER_VAR.) This allows
ruleutils.c to handle them with logic similar to what we use for subplan
reference Vars.
Since index-only scans are now fundamentally different from regular
indexscans so far as their expression subtrees are concerned, I also chose
to change them to have their own plan node type (and hence, their own
executor source file).
Robert Haas [Tue, 11 Oct 2011 13:14:30 +0000 (09:14 -0400)]
Replace hardcoded switch in object_exists() with a lookup table.
There's no particular advantage to this change on its face; indeed,
it's possible that this might be slightly slower than the old way.
But it makes this information more easily accessible to other
functions, and therefore paves the way for future code consolidation.
Performance isn't critical here, so there's no need to be smart about
how we do the search.
This is a heavily cut-down version of a patch from KaiGai Kohei,
with several fixes by me. Additional review from Dimitri Fontaine.
Robert Haas [Mon, 10 Oct 2011 17:38:32 +0000 (13:38 -0400)]
Make the reference to "CREATE USER" in the CREATE ROLE page a link.
This might help to avoid confusion between the CREATE USER command,
and the deprecated CREATEUSER option to CREATE ROLE, as per a recent
complaint from Ron Adams. At any rate, having a cross-link here
seems like a good idea; two commands that are so similar should
reference each other.
Robert Haas [Mon, 10 Oct 2011 03:39:52 +0000 (23:39 -0400)]
Fix ALTER TABLE ONLY .. DROP CONSTRAINT.
When I consolidated two copies of the HOT-chain search logic in commit
4da99ea4231e3d8bbf28b666748c1028e7b7d665, I introduced a behavior
change: the old code wouldn't necessarily traverse the entire chain,
if the most recently returned tuple is updated while the HOT chain
traversal is in progress. The new behavior seems more correct, but
unfortunately, the code here relies on a scan with SnapshotNow failing
to see its own updates. That seems pretty shaky even with the old HOT
chain traversal behavior, since there's no guarantee that these
updates will always be HOT, but it's trivial to provoke a failure with
the new HOT search logic. Fix by updating just the first matching
pg_constraint tuple, rather than all of them, since there should be
only one anyway. But since nobody has reproduced this failure on older
versions, no back-patch for now.
Report and test case by Alex Hunsaker; tablecmds.c changes by me.
Robert Haas [Mon, 10 Oct 2011 02:20:44 +0000 (22:20 -0400)]
Revert accidental change to pg_config_manual.h.
This was broken in commit 53dbc27c62d8e1b6c5253feba04a5094cb8fe046, which
introduced unlogged tables. Fortunately, as debugging tools go, this one
is pretty cheap, which is probably why it took nine months for someone to
notice, but it's not intended to be enabled by default, so revert.
The original idea of this patch was to make box picksplit run faster, by
eliminating unnecessary palloc() overhead, but that was obsoleted by the new
double-sorting split algorithm that doesn't call these functions so heavily
anymore. Nevertheless, the code looks better this way.
Original patch by me, reviewed and tidied up after the double-sorting patch
by Kevin Grittner.
Tom Lane [Sun, 9 Oct 2011 04:21:08 +0000 (00:21 -0400)]
Improve index-only scans to avoid repeated access to the index page.
We copy all the matched tuples off the page during _bt_readpage, instead of
expensively re-locking the page during each subsequent tuple fetch. This
costs a bit more local storage, but not more than 2*BLCKSZ worth, and the
reduction in LWLock traffic is certainly worth that. What's more, this
lets us get rid of the API wart in the original patch that said an index AM
could randomly decline to supply an index tuple despite having asserted
pg_am.amcanreturn. That will be important for future improvements in the
index-only-scan feature, since the executor will now be able to rely on
having the index data available.
Tom Lane [Sun, 9 Oct 2011 03:45:58 +0000 (23:45 -0400)]
Prevent index-only scans in stats regression test.
This bollixes the test because it's expecting to see the idx_tup_fetch
counter increase, which won't happen if heap fetches were avoided by use
of an index-only scan. Per buildfarm results.
While at it, let's just make sure that enable_seqscan and enable_indexscan
are ON for this test ...
Don't let transform_null_equals=on affect CASE foo WHEN NULL ... constructs.
transform_null_equals is only supposed to affect "foo = NULL" expressions
given directly by the user, not the internal "foo = NULL" expression
generated from CASE-WHEN.
This fixes bug #6242, reported by Sergey. Backpatch to all supported
branches.
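A sketch of the distinction; with the fix this returns 'no match',
because the internally generated comparison is no longer rewritten to
IS NULL, while a user-written "x = NULL" still is:
    SET transform_null_equals = on;
    SELECT CASE x WHEN NULL THEN 'matched' ELSE 'no match' END
    FROM (VALUES (NULL::int)) AS t(x);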
Tom Lane [Sat, 8 Oct 2011 00:13:02 +0000 (20:13 -0400)]
Support index-only scans using the visibility map to avoid heap fetches.
When a btree index contains all columns required by the query, and the
visibility map shows that all tuples on a target heap page are
visible-to-all, we don't need to fetch that heap page. This patch depends
on the previous patches that made the visibility map reliable.
There's a fair amount left to do here, notably trying to figure out a less
chintzy way of estimating the cost of an index-only scan, but the core
functionality seems ready to commit.
Robert Haas and Ibrar Ahmed, with some previous work by Heikki Linnakangas.
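A minimal sketch (table and data are hypothetical); once VACUUM has set
the visibility map, EXPLAIN may report an Index Only Scan here:
    CREATE TABLE t (id int PRIMARY KEY, payload text);
    INSERT INTO t SELECT g, 'x' FROM generate_series(1, 100000) AS g;
    VACUUM ANALYZE t;
    EXPLAIN SELECT id FROM t WHERE id BETWEEN 100 AND 200;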