Andres Freund [Tue, 21 Apr 2015 09:51:06 +0000 (11:51 +0200)]
Add 'active_in' column to pg_replication_slots.
Right now it is visible whether a replication slot is active in any
session, but not in which one. Adding the active_in column, containing the
pid of the backend that has acquired the slot, makes it much easier to
associate pg_replication_slots entries with the corresponding
pg_stat_replication/pg_stat_activity row.
This should have been done from the start, but I (Andres) dropped the
ball there somehow.
Author: Craig Ringer, revised by me
Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
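As an illustration of the new correlation, here is a minimal libpq sketch (a hypothetical example program, not part of the commit; it assumes the column name active_in described above and the standard pid column of pg_stat_activity):

    /* slot_activity.c -- join replication slots to the backends holding them.
     * Build with: cc slot_activity.c -lpq
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("");     /* connection parameters from environment */

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* active_in holds the pid of the backend that acquired the slot,
         * so it joins directly against pg_stat_activity.pid. */
        PGresult *res = PQexec(conn,
            "SELECT s.slot_name, s.active_in, a.state"
            "  FROM pg_replication_slots s"
            "  JOIN pg_stat_activity a ON a.pid = s.active_in");

        if (PQresultStatus(res) == PGRES_TUPLES_OK)
        {
            for (int i = 0; i < PQntuples(res); i++)
                printf("%s\t%s\t%s\n", PQgetvalue(res, i, 0),
                       PQgetvalue(res, i, 1), PQgetvalue(res, i, 2));
        }
        PQclear(res);
        PQfinish(conn);
        return 0;
    }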
Stephen Frost [Fri, 17 Apr 2015 20:37:11 +0000 (16:37 -0400)]
Fix typo in relcache's equalPolicy()
The USING policies were not being checked for differences, because the
same policy was being passed in to both sides of the equal() call. This
could result in backends not realizing that a policy had been changed, if
none of the other attributes had been changed.
Fix by passing policy1's and policy2's USING quals to equal() for
comparison.
No need to back-patch as this is not yet released. Noticed while
testing changes to RLS proposed by Dean Rasheed.
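The bug is the classic self-comparison slip. A standalone sketch of the pattern and its fix (a hypothetical struct and strcmp stand in for the real relcache code and equal()):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct Policy
    {
        const char *with_check;     /* WITH CHECK qual, as a string here */
        const char *using_qual;     /* USING qual */
    } Policy;

    static bool
    policy_equal(const Policy *p1, const Policy *p2)
    {
        if (strcmp(p1->with_check, p2->with_check) != 0)
            return false;
        /* The buggy version compared p1 against itself, so a change in the
         * USING qual was never detected:
         *     strcmp(p1->using_qual, p1->using_qual) == 0   -- always true
         */
        return strcmp(p1->using_qual, p2->using_qual) == 0;     /* fixed */
    }

    int main(void)
    {
        Policy old_pol = {"true", "owner = current_user"};
        Policy new_pol = {"true", "true"};

        /* prints 0: the changed USING qual is now noticed */
        printf("%d\n", policy_equal(&old_pol, &new_pol));
        return 0;
    }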
Logical decoding set SnapshotData's regd_count field to prevent the
snapshot manager from prematurely freeing snapshots that are generated
by the decoding system. That was always an abuse of the field, which was
never supposed to be used outside the snapshot manager. Commit 94028691
made the snapshot manager's tracking of snapshots smarter, and that scheme
fell apart. The snapshot manager got confused and hit an assertion when
a snapshot that was marked with regd_count==1 was not found in the heap
where the snapshot manager tracks the registered snapshots.
To fix, don't abuse the regd_count field like that. Logical decoding still
abuses the active_count field for similar purposes, but that's currently
harmless.
The assertion failure was first reported by Michael Paquier.
Update comments and function names to use the terms "source" and "target"
consistently. Some places were calling them remote and local instead, which
was confusing.
Fix an incorrect comment in extractPageInfo on the database creation
record - it was wrong about what happens for databases created in the
target that don't exist in the source.
Shut down test servers after pg_rewind regression tests.
Now that the test servers are initialized twice in each .pl script,
the single END block is not enough to stop them. Add a new clean_rewind_test
function that is called at the end of each test.
Fix logic to skip checkpoint if no records have been inserted.
After the WAL format changes, the calculation of the size of a checkpoint
record became incorrect. Instead of trying to fix the math, check that the
previous record, i.e. the xl_prev value that we'd write for the next
record, matches the last checkpoint's redo pointer. That way it's not
dependent on the size of the checkpoint record at all.
The old logic was actually slightly wrong all along: if the previous
checkpoint record crossed a page boundary, the page headers threw off the
record size calculation, and the checkpoint was not skipped. The new
checkpoint would not cross a page boundary, so this only resulted in at
most one extra checkpoint after the system became idle. The new logic fixes
that. (It's not worth fixing in back-branches.)
However, it makes some sense to try to keep the latest checkpoint contained
fully in a page, or at least in a single WAL segment, just on general
robustness grounds. If something goes awfully wrong, it's more likely that
you can recover the latest WAL segment, than the last two WAL segments. So
I added an extra check: the checkpoint is not skipped if the previous
checkpoint crossed a WAL segment boundary.
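In outline, the new decision looks something like the standalone sketch below (simplified types and names; the real test lives in the checkpointer code and compares against the control file's copy of the last checkpoint):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t XLogRecPtr;

    #define WAL_SEGMENT_SIZE ((uint64_t) 16 * 1024 * 1024)   /* assumed default */

    /* Sketch: skip the checkpoint if the xl_prev we would write for the next
     * record still points at the last checkpoint's redo pointer (i.e. nothing
     * has been inserted since), and that checkpoint stayed within one WAL
     * segment. */
    static bool
    checkpoint_skippable(XLogRecPtr prev_ptr,
                         XLogRecPtr last_ckpt_redo,
                         XLogRecPtr last_ckpt_end)
    {
        if (prev_ptr != last_ckpt_redo)
            return false;   /* WAL was inserted since the last checkpoint */
        if (last_ckpt_redo / WAL_SEGMENT_SIZE != last_ckpt_end / WAL_SEGMENT_SIZE)
            return false;   /* previous checkpoint crossed a segment boundary */
        return true;
    }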
Peter Eisentraut [Wed, 11 Mar 2015 02:33:25 +0000 (22:33 -0400)]
Integrate pg_upgrade_support module into backend
Previously, these functions were created in a schema "binary_upgrade",
which was deleted after pg_upgrade was finished. Because we don't want
to keep that schema around permanently, move them to pg_catalog but
rename them with a binary_upgrade_... prefix.
The provided functions are only small wrappers around global variables
that were added specifically for pg_upgrade use, so keeping the module
separate does not buy any real modularity.
The functions still check that they are only called in binary upgrade
mode, so it is not possible to call these during normal operation.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Optimize pg_comp_crc32c_sse42 routine slightly, and also use it on x86.
Eliminate the separate 'len' variable from the loops, and also use the
4-byte variant of the instruction. This shaves off a few more cycles. Even
though this routine, which uses the special SSE 4.2 instructions, is much
faster than the generic routine, it's still a hot spot, so let's make it as
fast as possible.
Change the configure test to not test _mm_crc32_u64. That variant is only
available in the 64-bit x86-64 architecture, not in 32-bit x86. Modify
pg_comp_crc32c_sse42 so that it only uses _mm_crc32_u64 on x86-64. With
these changes, the SSE accelerated CRC-32C implementation can also be used
on 32-bit x86 systems.
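For reference, a sketch of the resulting shape of the routine (not the committed pg_comp_crc32c_sse42; build with -msse4.2). The 8-byte _mm_crc32_u64 step is compiled only on x86-64, with the 4-byte variant used on 32-bit x86:

    #include <stddef.h>
    #include <stdint.h>
    #include <nmmintrin.h>      /* SSE 4.2 intrinsics */

    static uint32_t
    crc32c_sse42(uint32_t crc, const void *data, size_t len)
    {
        const unsigned char *p = data;
        const unsigned char *end = p + len;     /* no separate 'len' counter */

    #ifdef __x86_64__
        while (end - p >= 8)        /* 8 bytes per instruction; unaligned
                                     * reads are fine on x86 */
        {
            crc = (uint32_t) _mm_crc32_u64(crc, *(const uint64_t *) p);
            p += 8;
        }
    #else
        while (end - p >= 4)        /* 32-bit x86: 4-byte variant */
        {
            crc = _mm_crc32_u32(crc, *(const uint32_t *) p);
            p += 4;
        }
    #endif
        while (p < end)             /* remaining tail bytes */
            crc = _mm_crc32_u8(crc, *p++);

        return crc;
    }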
Try to fix the CRC-32C autoconf magic for icc compiler.
On gcc and clang, the _mm_crc32_u8 and _mm_crc32_u64 intrinsics are not
defined at all, when not building with -msse4.2. But on icc, they are.
So we cannot assume that if those intrinsics are defined, we can always use
them safely; we might still need the runtime check.
To fix, check whether the __SSE4_2__ preprocessor symbol is defined. It is
supposed to be defined only when the compiler is targeting a processor that
has SSE 4.2 support.
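The effect is a compile-time/runtime split along these lines (a sketch; the cpuid probe shown is an assumption about how the runtime check can be written with gcc/clang's cpuid.h):

    #if defined(__SSE4_2__)
    /* The compiler targets SSE 4.2 unconditionally, so the intrinsics are
     * always safe to call and no runtime check is needed. */
    #define PG_USE_SSE42_CRC_UNCONDITIONALLY 1
    #else
    /* The intrinsics may still be defined (e.g. by icc), but the CPU must
     * be probed at runtime before using them. */
    #include <cpuid.h>

    static int
    have_sse42(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 0;
        return (ecx & bit_SSE4_2) != 0;
    }
    #endif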
I introduced this ancient typo in subtrans.c and later propagated it to
multixact.c. I fixed the latter in f741300c, but only back to 9.3;
backpatch to all supported branches for consistency.
Use Intel SSE 4.2 CRC instructions where available.
Modern x86 and x86-64 processors with SSE 4.2 support have special
instructions, crc32b and crc32q, for calculating CRC-32C. They greatly
speed up CRC calculation.
Whether the instructions can be used or not depends on the compiler and the
target architecture. If generation of SSE 4.2 instructions is allowed for
the target (-msse4.2 flag on gcc and clang), use them. If they are not
allowed by default, but the compiler supports the -msse4.2 flag to enable
them, compile just the CRC-32C function with -msse4.2 flag, and check at
runtime whether the processor we're running on supports it. If it doesn't,
fall back to the slicing-by-8 algorithm. (With the common defaults on
current operating systems, the runtime-check variant is what you get in
practice.)
Abhijit Menon-Sen, heavily modified by me, reviewed by Andres Freund.
Now that we use CRC-32C in WAL and the control file, the "traditional" and
"legacy" CRC-32 variants are not used in any frontend programs anymore.
Move the code for those back from src/common to src/backend/utils/hash.
Also move the slicing-by-8 implementation (back) to src/port. This is in
preparation for next patch that will add another implementation that uses
Intel SSE 4.2 instructions to calculate CRC-32C, where available.
* Don't pass arguments to prove, since that's not supported on perl 5.8,
which is the minimum version supported by the TAP tests. Refactor the
test files themselves to run the tests twice, in both local and remote mode.
* Use eq rather than == for string comparison. This thinko caused the remote
versions of the tests to never run.
* Add "use strict" and "use warnings", and fix the warnings that this produced.
* Increase the delay after standby promotion, to make the tests more robust.
* In remote mode, the connection string to the promoted standby was
incorrect, leading to connection errors.
Patch by Michael Paquier, to address Peter Eisentraut's report.
Don't archive bogus recycled or preallocated files after timeline switch.
After a timeline switch, we would leave behind recycled WAL segments that
are in the future, but on the old timeline. After promotion, and after they
become old enough to be recycled again, we would notice that they don't have
a .ready or .done file, create a .ready file for them, and archive them.
That's bogus, because the files contain garbage, recycled from an older
timeline (or preallocated as zeros). We shouldn't archive such files.
This could happen when we're following a timeline switch during replay, or
when we switch to new timeline at end-of-recovery.
To fix, whenever we switch to a new timeline, scan the data directory for
WAL segments on the old timeline, but with a higher segment number, and
remove them. Those don't belong to our timeline history, and are most
likely bogus recycled or preallocated files. They could also be valid files
that we streamed from the primary ahead of time, but in any case, they're
not needed to recover to the new timeline.
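The test applied to each file during that scan can be pictured with the standalone sketch below (the 24-character segment file name encodes timeline, "log" id, and segment in three 8-hex-digit fields; the combined ordering key is this sketch's own convention, not PostgreSQL's internal segment numbering):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: does this WAL file sit on the old timeline but past the switch
     * point?  If so it is a bogus recycled/preallocated leftover and should
     * be removed. */
    static bool
    should_remove_segment(const char *fname, unsigned int old_tli,
                          uint64_t switch_key)
    {
        unsigned int tli, log, seg;

        if (sscanf(fname, "%08X%08X%08X", &tli, &log, &seg) != 3)
            return false;       /* not a WAL segment file name */

        /* ordering key over (log, seg); any monotonic combination works */
        uint64_t key = ((uint64_t) log << 32) | seg;

        return tli == old_tli && key > switch_key;
    }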
Silence gettext warning about '\r' escape sequence in translatable string.
gettext was unhappy about commit b216ad7 because it exposed the problem
that translatable messages in pg_rewind may contain a '\r' escape
sequence. This commit moves the '\r' to a separate printf() call.
Magnus Hagander [Sun, 12 Apr 2015 17:07:46 +0000 (19:07 +0200)]
Add system view pg_stat_ssl
This view shows information about all connections, such as whether the
connection uses SSL, which cipher is used, and which client
certificate (if any) is used.
Reviews by Alex Shulgin, Heikki Linnakangas, Andres Freund & Michael Paquier
Optimize locking a tuple already locked by another subxact
Locking and updating the same tuple repeatedly led to some strange
multixacts being created which had several subtransactions of the same
parent transaction holding locks of the same strength. However,
once a subxact of the current transaction holds a lock of a given
strength, it's not necessary to acquire the same lock again. This made
some coding patterns much slower than required.
The fix is twofold. First we change HeapTupleSatisfiesUpdate to return
HeapTupleBeingUpdated for the case where the current transaction is
already a single-xid locker for the given tuple; it used to return
HeapTupleMayBeUpdated for that case. The new logic is simpler, and the
change to pgrowlocks is a testament to that: previously we needed to
check for the single-xid locker separately in a very ugly way. That
test is simpler now.
As fallout from the HTSU change, some of its callers need to be amended
so that tuple-locked-by-own-transaction is taken into account in the
BeingUpdated case rather than the MayBeUpdated case. For many of them
there is no difference; but heap_delete() and heap_update() now check
explicitly and do not grab the tuple lock in that case.
The HTSU change also means that routine MultiXactHasRunningRemoteMembers
introduced in commit 11ac4c73cb895 is no longer necessary and can be
removed; the case that used to require it is now handled naturally as
result of the changes to heap_delete and heap_update.
The second part of the fix to the performance issue is to adjust
heap_lock_tuple to avoid the slowness:
1. Previously we checked for the case that our own transaction already
held a strong enough lock and returned MayBeUpdated, but only in the
multixact case. Now we do it for the plain Xid case as well, which
saves having to LockTuple.
2. If the current transaction is the only locker of the tuple (but with
a lock not as strong as what we need; otherwise it would have been
caught in the check mentioned above), we can skip sleeping on the
multixact, and instead go straight to create an updated multixact with
the additional lock strength.
3. Most importantly, make sure that both the single-xid-locker and the
multixact-locker optimizations are always applied. We do this by checking
both in a single place, rather than having them appear in two separate
portions of the routine -- something that is made possible by the
HeapTupleSatisfiesUpdate API change. Previously we would only check for
the single-xid case when HTSU returned MayBeUpdated, and only checked for
the multixact case when HTSU returned BeingUpdated. This was at odds with
what HTSU actually returned in one case: if our own transaction was a
locker in a multixact, it returned MayBeUpdated, so the optimization never
applied. This is what led to the large multixacts in the first place.
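Schematically, the unified early-exit can be pictured like this (a standalone sketch with invented names; the real code operates on LockTupleMode and the tuple's xmax/multixact state):

    #include <stdbool.h>

    typedef enum
    {
        TUPLOCK_KEYSHARE,
        TUPLOCK_SHARE,
        TUPLOCK_NOKEYEXCLUSIVE,
        TUPLOCK_EXCLUSIVE
    } TupleLockStrength;

    /* Sketch: whether the locker is a plain xid or a multixact member, if a
     * subxact of our own transaction already holds a lock at least as strong
     * as the requested one, succeed immediately -- no LockTuple(), no
     * sleeping on the multixact, no new multixact member. */
    static bool
    can_skip_locking(bool locker_is_own_xact,
                     TupleLockStrength held, TupleLockStrength wanted)
    {
        return locker_is_own_xact && held >= wanted;
    }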
The FORCE option has been marked "obsolete" since the very old version 7.4
but was kept for backwards compatibility. Per discussion on pgsql-hackers,
we concluded that it's no longer worth supporting the option.
Change SQLSTATE for event triggers "wrong context" message
When certain event-trigger-only functions are called outside an event
trigger context, they were reporting the "feature not supported" SQLSTATE,
which is somewhat misleading. Create a new custom error code for such
uses instead.
Not backpatched since it may be seen as an undesirable behavioral
change.
Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqQ-5NAkHQHh_NOm7FPep37NCiLKwPoJ2Yxb8TDoGgbYYA@mail.gmail.com
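The report site then becomes something like the backend-context sketch below (requires postgres.h; the errcode identifier is my reading of the change, not verified against the committed source):

    /* backend-only sketch; the specific errcode name is an assumption */
    ereport(ERROR,
            (errcode(ERRCODE_EVENT_TRIGGER_PROTOCOL_VIOLATED),
             errmsg("%s can only be called in an event trigger function",
                    "pg_event_trigger_dropped_objects()")));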
It was previously possible to have the launcher re-execute its main loop
before shutting down if some other signal was received or an error
occurred after getting SIGTERM, as reported by Qingqing Zhou.
While investigating, Tom Lane further noticed that if autovacuum had
been disabled in the config file, it would misbehave by trying to start
a new worker instead of bailing out immediately, because it would consider
itself to have been invoked in emergency mode.
Fix both problems by checking the shutdown flag in a few more places.
These problems have existed since autovacuum was introduced, so
backpatch all the way back.
Robert Haas [Wed, 8 Apr 2015 02:45:17 +0000 (22:45 -0400)]
Make trace_sort control abbreviation debug output for the text opclass.
This is consistent with what the new numeric support for abbreviated keys
now does, and seems much more convenient than having a separate compiler
define to control this debug output.
Tom Lane [Tue, 7 Apr 2015 20:56:21 +0000 (16:56 -0400)]
Fix assorted inconsistent function declarations.
While gcc doesn't complain if you declare a function "static" and then
define it not-static, other compilers do; and in any case the code is
highly misleading this way. Add the missing "static" keywords to a
couple of recent patches. Per buildfarm member pademelon.
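A minimal example of the inconsistency (not the patched code itself):

    /* Declaration and definition disagree about "static": gcc accepts this
     * silently (the identifier keeps internal linkage), but other compilers
     * complain, and the code is misleading either way. */
    static int frob(int x);

    int             /* fix: write "static int" here too */
    frob(int x)
    {
        return x + 1;
    }

    int main(void)
    {
        return frob(41) == 42 ? 0 : 1;
    }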
Commit a2e35b53 should have removed the variable declaration in the
inner block, but didn't. As a result, the returned address might end up
not being what was intended.
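A standalone illustration of this bug class (hypothetical code, not the commit's):

    #include <stdio.h>

    static const char *
    pick_name(int use_alias)
    {
        const char *name = "default";

        if (use_alias)
        {
            /* BUG: re-declaring "name" creates a new inner variable, so the
             * outer one is never reassigned. Removing "const char *" here
             * makes the assignment reach the outer variable, as intended. */
            const char *name = "alias";
            (void) name;
        }
        return name;
    }

    int main(void)
    {
        printf("%s\n", pick_name(1));   /* prints "default", not "alias" */
        return 0;
    }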
Don't allow pg_rewind to run as root on Unix platforms, as any new or
replaced files in the data directory would become owned by root. On Windows,
it can run under a user that has Administrator rights, but a restricted
token needs to be used. This is the same thing we do, e.g., in pg_resetxlog.
Also, add missing set_pglocale_pgservice() call, to fix localization.
It now also reports dropped temporary objects that are local to the
backend. Previously we weren't reporting any temp objects, because it
was deemed unnecessary; but as it turns out, it is necessary if we want
to keep close track of DDL command execution inside one session. Temp
objects are reported as living in schema pg_temp, which works because
such a schema-qualification always refers to the temp objects of the
current session.
This was already fixed in 0d906798f, but I failed to update the
array-formatted case. This is not backpatched, since this only affects
the code path introduced by commit a676201490c.
Tom Lane [Sun, 5 Apr 2015 17:01:55 +0000 (13:01 -0400)]
Suppress clang's unhelpful gripes about -pthread switch being unused.
Considering the number of cases in which "unused" command line arguments
are silently ignored by compilers, it's fairly astonishing that anybody
thought this warning was useful; it's certainly nothing but an annoyance
when building Postgres. One such case is that neither gcc nor clang
complains about unrecognized -Wno-foo switches, making it more difficult
than one could wish to figure out whether a given switch does anything.
Back-patch to 9.3, which is as far back as the patch applies conveniently
(we'd have to back-patch PGAC_PROG_CC_VAR_OPT to go further, and it doesn't
seem worth that).
Andrew Dunstan [Sun, 5 Apr 2015 15:49:49 +0000 (11:49 -0400)]
Enable float8-byval as the default for 64 bit MSVC builds
This is a long-standing inconsistency that was probably just missed when
we got 64 bit MSVC builds. This brings the platform into line with all
other systems.
Simon Riggs [Sun, 5 Apr 2015 15:37:08 +0000 (11:37 -0400)]
Reduce lock levels of some trigger DDL and add FKs
Reduce lock levels to ShareRowExclusive for the following SQL commands:
CREATE TRIGGER (but not DROP or ALTER)
ALTER TABLE ENABLE TRIGGER
ALTER TABLE DISABLE TRIGGER
ALTER TABLE … ADD CONSTRAINT FOREIGN KEY
Original work by Simon Riggs, extracted and refreshed by Andreas Karlsson
New test cases added by Andreas Karlsson
Reviewed by Noah Misch, Andres Freund, Michael Paquier and Simon Riggs
Tom Lane [Sat, 4 Apr 2015 23:55:15 +0000 (19:55 -0400)]
Fix incorrect matching of subexpressions in outer-join plan nodes.
Previously we would re-use input subexpressions in all expression trees
attached to a Join plan node. However, if it's an outer join and the
subexpression appears in the nullable-side input, this is potentially
incorrect for apparently-matching subexpressions that came from above
the outer join (ie, targetlist and qpqual expressions), because the
executor will treat the subexpression value as NULL when maybe it should
not be.
The case is fairly hard to hit because (a) you need a non-strict
subexpression (else NULL is correct), and (b) we don't usually compute
expressions in the outputs of non-toplevel plan nodes. But we might do
so if the expressions are sort keys for a mergejoin, for example.
Probably in the long run we should make a more explicit distinction between
Vars appearing above and below an outer join, but that will be a major
planner redesign and not at all back-patchable. For the moment, just hack
set_join_references so that it will not match any non-Var expressions
coming from nullable inputs to expressions that came from above the join.
(This is somewhat overkill, in that a strict expression could still be
matched, but it doesn't seem worth the effort to check that.)
Per report from Qingqing Zhou. The added regression test case is based
on his example.
This has been broken for a very long time, so back-patch to all active
branches.
Tom Lane [Sat, 4 Apr 2015 17:34:23 +0000 (13:34 -0400)]
Fix TAP tests to use only standard command-line argument ordering.
Some of the TAP tests were supposing that PG programs would accept switches
after non-switch arguments on their command lines. While GNU getopt_long()
does allow that, our own implementation does not, and it's nowhere
suggested in our documentation that such cases should work. Adjust the
tests to use only the documented syntax.
Back-patch to 9.4, since without this the TAP tests fail when run with
src/port's getopt_long() implementation.
Robert Haas [Sat, 4 Apr 2015 02:34:37 +0000 (22:34 -0400)]
Fix numeric abbreviation for --disable-float8-byval.
When committing abd94bcac4582903765be7be959d1dbc121df0d0, I tried to make
it decide what kind of abbreviation to use based only on SIZEOF_DATUM,
without regard to USE_FLOAT8_BYVAL. That attempt was a few bricks short
of a load, so try to fix it, and add a comment explaining what we're
about.
Patch by me; review (but not a full endorsement) by Andrew Gierth.
Tom Lane [Fri, 3 Apr 2015 20:49:11 +0000 (16:49 -0400)]
Remove unnecessary variables in _hash_splitbucket().
Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to
have the internal nblkno variable either. My compiler didn't complain
about that, but some did. I also rearranged the use of oblkno a bit to
make that case more parallel.
Report and initial patch by Petr Jelinek, rearranged a bit by me.
Back-patch to all branches, like the previous patch.
Transform ALTER TABLE/SET TYPE/USING expr during parse analysis
This lets later stages have access to the transformed expression; in
particular it allows DDL-deparsing code during event triggers to pass
the transformed expression to ruleutils.c, so that the complete command
can be deparsed.
This shuffles the timing of the transform calls a bit: previously,
nothing was transformed during parse analysis, and only the
RELKIND_RELATION case was being handled during execution. After this
patch, all expressions are transformed during parse analysis (including
those for relkinds other than RELATION), and the error for other
relation kinds is thrown only during execution. So we do more work than
before to reject some bogus cases. That seems acceptable.
Fix error handling of XLogReaderAllocate in case of OOM
Similarly to the previous fix 9b8d478, commit 2c03216 switched
XLogReaderAllocate() to use a set of palloc calls instead of malloc,
causing any caller of this function to fail with an error instead of
receiving a NULL pointer in the out-of-memory case. Fix this by
using palloc_extended with MCXT_ALLOC_NO_OOM, which safely returns
NULL in case of an OOM.
Robert Haas [Fri, 3 Apr 2015 12:32:05 +0000 (08:32 -0400)]
Change the way we decide whether to give up on abbreviated text keys.
Be more aggressive about aborting early on if it looks like it's not
helping, but be less aggressive about aborting later on, since it's
more expensive at that point, and also since we're currently aborting
in some cases where abbreviation can still deliver a substantial win.
Peter Geoghegan. Extensive testing by Tomas Vondra.
Rework handling of OOM when allocating record buffer in XLOG reader.
Commit 2c03216 changed allocate_recordbuf() so that it uses palloc to
allocate the read buffer and fails immediately when an out-of-memory error
shows up, even though its callers still expect NULL to be returned in that
case. This bug is fixed by making allocate_recordbuf() use palloc_extended
with the MCXT_ALLOC_NO_OOM flag and return NULL in the OOM case.
This commit also adds pg_malloc_extended for frontend code. These interfaces
can be used to control memory allocation at a lower level, using an interface
similar to MemoryContextAllocExtended. For example, callers can specify
MCXT_ALLOC_NO_OOM if they want to suppress the "out of memory" error while
allocating the memory and handle a NULL return value.
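The pattern, as a backend-context sketch (simplified from what allocate_recordbuf presumably looks like after the fix; not compilable outside the backend):

    /* MCXT_ALLOC_NO_OOM makes palloc_extended return NULL on out-of-memory
     * instead of throwing ERROR, so the caller can report the failure and
     * recover. */
    static bool
    allocate_recordbuf_sketch(XLogReaderState *state, uint32 newSize)
    {
        char *newbuf = palloc_extended(newSize, MCXT_ALLOC_NO_OOM);

        if (newbuf == NULL)
            return false;               /* caller handles the OOM */
        if (state->readRecordBuf)
            pfree(state->readRecordBuf);
        state->readRecordBuf = newbuf;
        state->readRecordBufSize = newSize;
        return true;
    }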
Tom Lane [Fri, 3 Apr 2015 04:07:29 +0000 (00:07 -0400)]
Fix rare startup failure induced by MVCC-catalog-scans patch.
While a new backend nominally participates in sinval signaling starting
from the SharedInvalBackendInit call near the top of InitPostgres, it
cannot recognize sinval messages for unshared catalogs of its database
until it has set up MyDatabaseId. This is not problematic for the catcache
or relcache, which by definition won't have loaded any data from or about
such catalogs before that point. However, commit 568d4138c646cd7c
introduced a mechanism for re-using MVCC snapshots for catalog scans, and
made invalidation of those depend on recognizing relevant sinval messages.
So it's possible to establish a catalog snapshot to read pg_authid and
pg_database, then before we set MyDatabaseId, receive sinval messages that
should result in invalidating that snapshot --- but do not, because we
don't realize they are for our database. This mechanism explains the
intermittent buildfarm failures we've seen since commit 31eae6028eca4365.
That commit was not itself at fault, but it introduced a new regression
test that does reconnections concurrently with the "vacuum full pg_am"
command in vacuum.sql. This allowed the pre-existing error to be exposed,
given just the right timing, because we'd fail to update our information
about how to access pg_am. In principle any VACUUM FULL on a system
catalog could have created a similar hazard for concurrent incoming
connections. Perhaps there are more subtle failure cases as well.
To fix, force invalidation of the catalog snapshot as soon as we've
set MyDatabaseId.
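The fix reduces to a short sequence in backend startup (a sketch of the InitPostgres area; InvalidateCatalogSnapshot is snapmgr's existing entry point for discarding the catalog snapshot):

    /* backend sketch: only after MyDatabaseId is set can sinval messages for
     * this database's unshared catalogs be recognized, so any catalog
     * snapshot established earlier must be discarded immediately. */
    MyDatabaseId = dboid;           /* hypothetical local variable */
    InvalidateCatalogSnapshot();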
Robert Haas [Thu, 2 Apr 2015 20:26:49 +0000 (16:26 -0400)]
Improve pgbench error reporting.
This would have been worth doing on general principle anyway, but the
recent addition of an expression syntax to pgbench makes it an even
better idea than it would have been otherwise.
Commit 0d831389749a3 inadvertently reversed the meaning of the
wraparound variable. This causes vacuums which are not required for
wraparound to wait for locks to be acquired, and what is worse, it
allows wraparound vacuums to skip locked pages.
Bug reported by Jeff Janes in
http://www.postgresql.org/message-id/CAMkU=1xmTEiaY=5oMHsSQo5vd9V1Ze4kNLL0qN2eH0P_GXOaYw@mail.gmail.com
Analysis and patch by Kyotaro HORIGUCHI
Andres Freund [Thu, 2 Apr 2015 15:43:35 +0000 (17:43 +0200)]
Define integer limits independently from the system definitions.
In 83ff1618 we defined integer limits only if they're not provided by the
system. That turns out not to be the greatest idea, because some datatypes
can be represented in different ways. E.g. on OS X, PG's 64-bit
datatype will be a 'long int', but OS X unconditionally uses 'long
long'. That disparity can then lead to warnings, e.g. around printf
formats.
One way to fix that would be to back int64 using stdint.h's
int64_t. While a good idea, it's not that easy to implement. We would
e.g. need to include stdint.h in our external headers, which we don't
today. Also computing the correct int64 printf formats in that case is
nontrivial.
Instead simply prefix the integer limits with PG_ and define them
unconditionally. I've adjusted all the references to them in code, but
not the ones in comments; the latter seems unnecessary to me.
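The shape of the fix, in a standalone sketch (two's-complement values; the PG_ names follow the commit's description, and the typedef is a stand-in for PG's own int64):

    #include <stdio.h>

    typedef long long pg_sketch_int64;  /* stand-in for PG's int64 typedef */

    /* Defined unconditionally, with a PG_ prefix so they can neither collide
     * with nor subtly differ from the platform's own INT64_MAX and friends. */
    #define PG_INT32_MAX  (0x7FFFFFFF)
    #define PG_INT64_MAX  ((pg_sketch_int64) 0x7FFFFFFFFFFFFFFFLL)
    #define PG_INT64_MIN  (-PG_INT64_MAX - 1)

    int main(void)
    {
        printf("%lld\n", (long long) PG_INT64_MAX);     /* 9223372036854775807 */
        return 0;
    }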
This is the second try at this, after fcef1617295 failed miserably and
had to be reverted: as it turns out, libpq cannot depend on libpgcommon
after all. Instead of shuffling code in the master branch, make that one
just like 9.4 and accept the duplication. (This was all my own mistake,
not the patch submitter's).
psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, not finding
any server at all because of completely wrong combinations of
parameters.
Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mix with the other arguments. Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as first argument.
There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts. Therefore this is backpatched all the way back to 9.0.
Author: David Fetter, Andrew Dunstan. Some editorialization by me
(probably earning a Gierth's "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, not finding
any server at all because of completely wrong combinations of
parameters.
Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mix with the other arguments. Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as first argument.
There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts. Therefore this is backpatched all the way back to 9.0.
To implement this, routines previously private to libpq have been
duplicated so that psql can decide what looks like a conninfo/URI
string. In back branches, just duplicate the same code all the way back
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
In master, the routines are moved to src/common and renamed.
Author: David Fetter, Andrew Dunstan. Some editorialization by me
(probably earning a Gierth's "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
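The classification itself can be sketched standalone (the '=' and URI-prefix heuristics mirror how libpq-style connection strings are conventionally recognized; this is not the code that was moved into src/common):

    #include <stdbool.h>
    #include <string.h>

    /* Sketch: does the "dbname" argument to \connect look like a conninfo
     * string or a connection URI rather than a bare database name? */
    static bool
    looks_like_conninfo(const char *dbname)
    {
        if (strchr(dbname, '=') != NULL)
            return true;        /* keyword=value conninfo form */
        if (strncmp(dbname, "postgresql://", strlen("postgresql://")) == 0 ||
            strncmp(dbname, "postgres://", strlen("postgres://")) == 0)
            return true;        /* URI form */
        return false;
    }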
Tom Lane [Wed, 1 Apr 2015 21:11:21 +0000 (17:11 -0400)]
Provide real selectivity estimators for inet/cidr operators.
This patch fills in the formerly-stub networksel() and networkjoinsel()
estimation functions. Those are used for << <<= >> >>= and && operators
on inet/cidr types. The estimation is not perfect, certainly, because
we rely on the existing statistics collected for the inet btree operators.
But it's a long way better than nothing, and it's not clear that asking
ANALYZE to collect separate stats for these operators would be a win.
Emre Hasegeli, with reviews from Dilip Kumar and Heikki Linnakangas,
and some further hacking by me
Tom Lane [Wed, 1 Apr 2015 00:02:40 +0000 (20:02 -0400)]
Fix incorrect markup in documentation of window frame clauses.
You're required to write either RANGE or ROWS to start a frame clause,
but the documentation incorrectly implied this is optional. Noted by
David Johnston.