Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Obstruct shell, SQL, and conninfo injection via database and role names.
Due to simplistic quoting and confusion of database names with conninfo
strings, roles with the CREATEDB or CREATEROLE option could escalate to
superuser privileges when a superuser next ran certain maintenance
commands. The new coding rule for PQconnectdbParams() calls, documented
at conninfo_array_parse(), is to pass expand_dbname=true and wrap
literal database names in a trivial connection string. Escape
zero-length values in appendConnStrVal(). Back-patch to 9.1 (all
supported versions).
Nathan Bossart, Michael Paquier, and Noah Misch. Reviewed by Peter
Eisentraut. Reported by Nathan Bossart.
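
A minimal C sketch of the coding rule above; the quoting helper is hypothetical and only stands in for what appendConnStrVal() does in the real sources:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Hypothetical helper: single-quote a value for use inside a conninfo
     * string, escaping backslashes and quotes; a zero-length value still
     * gets quotes so it is not silently dropped. */
    static void
    quote_connstr_value(char *dst, size_t dstlen, const char *val)
    {
        size_t  j = 0;

        dst[j++] = '\'';
        for (; *val && j + 3 < dstlen; val++)
        {
            if (*val == '\'' || *val == '\\')
                dst[j++] = '\\';
            dst[j++] = *val;
        }
        dst[j++] = '\'';
        dst[j] = '\0';
    }

    static PGconn *
    connect_to_database(const char *dbname)
    {
        char        quoted[512];
        char        conninfo[540];
        const char *keywords[] = {"dbname", NULL};
        const char *values[2];

        quote_connstr_value(quoted, sizeof(quoted), dbname);
        snprintf(conninfo, sizeof(conninfo), "dbname=%s", quoted);

        values[0] = conninfo;       /* a trivial connection string */
        values[1] = NULL;

        /* expand_dbname = 1 makes libpq parse values[0] as connection
         * parameters, so the literal name is never re-parsed elsewhere */
        return PQconnectdbParams(keywords, values, 1);
    }
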
Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.
Rename these newly-extern functions with terms more typical of their new
neighbors. No functional changes; a subsequent commit will use them in
more places. Back-patch to 9.1 (all supported versions). Back branches
lack src/fe_utils, so instead rename the functions in place; the
subsequent commit will copy them into the other programs using them.
Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Fix Windows shell argument quoting.
The incorrect quoting may have permitted arbitrary command execution.
At a minimum, it gave broader control over the command line to actors
supposed to have control over a single argument. Back-patch to 9.1 (all
supported versions).
Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Reject, in pg_dumpall, names containing CR or LF.
These characters prematurely terminate Windows shell command processing,
causing the shell to execute a prefix of the intended command. The
chief alternative to rejecting these characters was to bypass the
Windows shell with CreateProcess(), but the ability to use such names
has little value. Back-patch to 9.1 (all supported versions).
This change formally revokes support for these characters in database
names and role names. Don't document this; the error message is
self-explanatory, and too few users would benefit. A future major
release may forbid creation of databases and roles so named. For now,
check only at known weak points in pg_dumpall. Future commits will,
without notice, reject affected names from other frontend programs.
Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and
--file arguments. Unlike the effects on role name arguments and
database names, this does not reflect a broad policy change. A
migration to CreateProcess() could lift these two restrictions.
Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Field conninfo strings throughout src/bin/scripts.
These programs nominally accepted conninfo strings, but they would
proceed to use the original dbname parameter as though it were an
unadorned database name. This caused "reindexdb dbname=foo" to issue an
SQL command that always failed, and other programs printed a conninfo
string in error messages that purported to print a database name. Fix
both problems by using PQdb() to retrieve actual database names.
Continue to print the full conninfo string when reporting a connection
failure. It is informative there, and if the database name is the sole
problem, the server-side error message will include the name. Beyond
those user-visible fixes, this allows a subsequent commit to synthesize
and use conninfo strings without that implementation detail leaking into
messages. As a side effect, the "vacuuming database" message now
appears after, not before, the connection attempt. Back-patch to 9.1
(all supported versions).
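
A sketch of the reporting pattern in C; PQdb() returns the database name libpq actually resolved, so messages never echo a raw conninfo string:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    report_connection(PGconn *conn)
    {
        if (PQstatus(conn) != CONNECTION_OK)
        {
            /* on failure, the full conninfo argument is still informative */
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return;
        }
        /* printed after the connection attempt, using the real name */
        printf("vacuuming database \"%s\"\n", PQdb(conn));
    }
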
Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Introduce a psql "\connect -reuse-previous=on|off" option.
The decision to reuse values of parameters from a previous connection
has been based on whether the new target is a conninfo string. Add this
means of overriding that default. This feature arose as one component
of a fix for security vulnerabilities in pg_dump, pg_dumpall, and
pg_upgrade, so back-patch to 9.1 (all supported versions). In 9.3 and
later, comment paragraphs that required update had already-incorrect
claims about behavior when no connection is open; fix those problems.
Noah Misch [Mon, 8 Aug 2016 14:07:46 +0000 (10:07 -0400)]
Sort out paired double quotes in \connect, \password and \crosstabview.
In arguments, these meta-commands wrongly treated each pair as closing
the double quoted string. Make the behavior match the documentation.
This is a compatibility break, but I more expect to find software with
untested reliance on the documented behavior than software reliant on
today's behavior. Back-patch to 9.1 (all supported versions).
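
A rough C sketch of the documented rule (buffer management assumed): inside a double-quoted argument, a doubled quote is a literal quote character, and only a lone quote closes the string.

    /* Copy one double-quoted token from src into dst (dst assumed large
     * enough); returns a pointer just past the closing quote. */
    static const char *
    scan_quoted_token(const char *src, char *dst)
    {
        src++;                              /* skip the opening quote */
        while (*src)
        {
            if (*src == '"')
            {
                if (src[1] == '"')          /* "" => keep one quote, go on */
                {
                    *dst++ = '"';
                    src += 2;
                    continue;
                }
                src++;                      /* a lone quote ends the token */
                break;
            }
            *dst++ = *src++;
        }
        *dst = '\0';
        return src;
    }
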
Tom Lane [Sun, 7 Aug 2016 22:52:02 +0000 (18:52 -0400)]
Fix misestimation of n_distinct for a nearly-unique column with many nulls.
If ANALYZE found no repeated non-null entries in its sample, it set the
column's stadistinct value to -1.0, intending to indicate that the entries
are all distinct. But what this value actually means is that the number
of distinct values is 100% of the table's rowcount, and thus it was
overestimating the number of distinct values by however many nulls there
are. This could lead to very poor selectivity estimates, as for example
in a recent report from Andreas Joseph Krogh. We should discount the
stadistinct value by whatever we've estimated the nulls fraction to be.
(That is what will happen if we choose to use a negative stadistinct for
a column that does have repeated entries, so this code path was just
inconsistent.)
In addition to fixing the stadistinct entries stored by several different
ANALYZE code paths, adjust the logic where get_variable_numdistinct()
forces an "all distinct" estimate on the basis of finding a relevant unique
index. Unique indexes don't reject nulls, so there's no reason to assume
that the null fraction doesn't apply.
Back-patch to all supported branches. Back-patching is a bit of a judgment
call, but this problem seems to affect only a few users (else we'd have
identified it long ago), and it's bad enough when it does happen that
destabilizing plan choices in a worse direction seems unlikely.
Patch by me, with documentation wording suggested by Dean Rasheed
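
A sketch of the adjusted estimate in C, with assumed variable names rather than the exact ANALYZE code:

    #include <stdbool.h>

    static double
    estimate_stadistinct(int samplerows, int null_cnt, bool found_repeats)
    {
        double  null_frac = (double) null_cnt / (double) samplerows;

        if (!found_repeats)
            /* discount by the null fraction; previously this was just -1.0 */
            return -1.0 * (1.0 - null_frac);
        /* ... other cases unchanged ... */
        return 0.0;
    }
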
Tom Lane [Fri, 5 Aug 2016 22:58:12 +0000 (18:58 -0400)]
Teach libpq to decode server version correctly from future servers.
Beginning with the next development cycle, PG servers will report two-part,
not three-part, version numbers. Fix libpq so that it will compute the
correct numeric representation of such server versions for reporting by
PQserverVersion(). It's desirable to get this into the field and
back-patched ASAP, so that older clients are more likely to understand the
new server version numbering by the time any such servers are in the wild.
(The results with an old client would probably not be catastrophic anyway
for a released server; for example "10.1" would be interpreted as 100100
which would be wrong in detail but would not likely cause an old client to
misbehave badly. But "10devel" or "10beta1" would result in sversion==0
which at best would result in disabling all use of modern features.)
Extracted from a patch by Peter Eisentraut; comments added by me
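
A simplified sketch of the decoding arithmetic (not the exact libpq code): two-part versions such as "10.1" map to 100001, pre-10 three-part versions such as "9.6.3" keep the old 90603 encoding, and trailing text such as "devel" simply leaves the later fields at zero.

    #include <stdio.h>

    static int
    decode_server_version(const char *vstr)
    {
        int     vmaj = 0, vmin = 0, vrev = 0;
        int     cnt = sscanf(vstr, "%d.%d.%d", &vmaj, &vmin, &vrev);

        if (cnt < 1)
            return 0;                               /* unrecognizable */
        if (vmaj >= 10)
            return 100 * 100 * vmaj + vmin;         /* "10.1"  -> 100001 */
        return (100 * vmaj + vmin) * 100 + vrev;    /* "9.6.3" -> 90603  */
    }
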
Tom Lane [Fri, 5 Aug 2016 16:58:17 +0000 (12:58 -0400)]
Update time zone data files to tzdata release 2016f.
DST law changes in Kemerovo and Novosibirsk. Historical corrections for
Azerbaijan, Belarus, and Morocco. Asia/Novokuznetsk and Asia/Novosibirsk
now use numeric time zone abbreviations instead of invented ones. Zones
for Antarctic bases and other locations that have been uninhabited for
portions of the time span known to the tzdata database now report "-00"
rather than "zzz" as the zone abbreviation for those time spans.
Also, I decided to remove some of the timezone/data/ files that we don't
use. At one time that subdirectory was a complete copy of what IANA
distributes in the tzdata tarballs, but that hasn't been true for a long
time. There seems no good reason to keep shipping those specific files
but not others; they're just bloating our tarballs.
Fujii Masao [Mon, 1 Aug 2016 08:36:14 +0000 (17:36 +0900)]
Fix pg_basebackup so that it accepts 0 as a valid compression level.
The help message for pg_basebackup specifies that the numbers 0 through 9
are accepted as valid values of the -Z option. But previously, -Z 0 was
rejected as an invalid compression level.
Per discussion, it's better to make pg_basebackup treat 0 as a valid
compression level meaning no compression, like pg_dump does.
Back-patch to all supported versions.
Reported-By: Jeff Janes
Reviewed-By: Amit Kapila
Discussion: CAMkU=1x+GwjSayc57v6w87ij6iRGFWt=hVfM0B64b1_bPVKRqg@mail.gmail.com
Tom Lane [Sun, 31 Jul 2016 22:32:34 +0000 (18:32 -0400)]
Doc: remove claim that hash index creation depends on effective_cache_size.
This text was added by commit ff213239c, and not long thereafter obsoleted
by commit 4adc2f72a (which made the test depend on NBuffers instead); but
nobody noticed the need for an update. Commit 9563d5b5e adds some further
dependency on maintenance_work_mem, but the existing verbiage seems to
cover that with about as much precision as we really want here. Let's
just take it all out rather than leaving ourselves open to more errors of
omission in future. (That solution makes this change back-patchable, too.)
Tom Lane [Thu, 28 Jul 2016 22:57:24 +0000 (18:57 -0400)]
Guard against empty buffer in gets_fromFile()'s check for a newline.
Per the fgets() specification, it cannot return without reading some data
unless it reports EOF or error. So the code here assumed that the data
buffer would necessarily be nonempty when we go to check for a newline
having been read. However, Agostino Sarubbo noticed that this could fail
to be true if the first byte of the data is a NUL (\0). The fgets() API
doesn't really work for embedded NULs, which is something I don't feel
any great need for us to worry about since we generally don't allow NULs
in SQL strings anyway. But we should not access off the end of our own
buffer if the case occurs. Normally this would just be a harmless read,
but if you were unlucky the byte before the buffer would contain '\n'
and we'd overwrite it with '\0', and if you were really unlucky that
might be valuable data and psql would crash.
Agostino reported this to pgsql-security, but after discussion we concluded
that it isn't worth treating as a security bug; if you can control the
input to psql you can do far more interesting things than just maybe-crash
it. Nonetheless, it is a bug, so back-patch to all supported versions.
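
A sketch of the guarded check in C:

    #include <stdio.h>
    #include <string.h>

    static char *
    read_one_line(char *buf, size_t buflen, FILE *fp)
    {
        size_t  len;

        if (fgets(buf, (int) buflen, fp) == NULL)
            return NULL;
        len = strlen(buf);                  /* zero if the first byte is '\0' */
        if (len > 0 && buf[len - 1] == '\n')
            buf[len - 1] = '\0';            /* never touches buf[-1] now */
        return buf;
    }
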
Tom Lane [Thu, 28 Jul 2016 20:09:15 +0000 (16:09 -0400)]
Fix assorted fallout from IS [NOT] NULL patch.
Commits 4452000f3 et al established semantics for NullTest.argisrow that
are a bit different from its initial conception: rather than being merely
a cache of whether we've determined the input to have composite type,
the flag now has the further meaning that we should apply field-by-field
testing as per the standard's definition of IS [NOT] NULL. If argisrow
is false and yet the input has composite type, the construct instead has
the semantics of IS [NOT] DISTINCT FROM NULL. Update the comments in
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print
such cases correctly. In the case of ruleutils.c, this merely results in
cosmetic changes in EXPLAIN output, since the case can't currently arise
in stored rules. However, it represents a live bug for deparse.c, which
would formerly have sent a remote query that had semantics different
from the local behavior. (From the user's standpoint, this means that
testing a remote nested-composite column for null-ness could have had
unexpected recursive behavior much like that fixed in 4452000f3.)
In a related but somewhat independent fix, make plancat.c set argisrow
to false in all NullTest expressions constructed to represent "attnotnull"
constructs. Since attnotnull is actually enforced as a simple null-value
check, this is a more accurate representation of the semantics; we were
previously overpromising what it meant for composite columns, which might
possibly lead to incorrect planner optimizations. (It seems that what the
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so
arguably we are violating the spec and should fix attnotnull to do the
other thing. If we ever do, this part should get reverted.)
Tom Lane [Thu, 28 Jul 2016 17:26:59 +0000 (13:26 -0400)]
Improve documentation about CREATE TABLE ... LIKE.
The docs failed to explain that LIKE INCLUDING INDEXES would not preserve
the names of indexes and associated constraints. Also, it wasn't mentioned
that EXCLUDE constraints would be copied by this option. The latter
oversight seems enough of a documentation bug to justify back-patching.
In passing, do some minor copy-editing in the same area, and add an entry
for LIKE under "Compatibility", since it's not exactly a faithful
implementation of the standard's feature.
Tom Lane [Thu, 28 Jul 2016 15:39:11 +0000 (11:39 -0400)]
Register atexit hook only once in pg_upgrade.
start_postmaster() registered stop_postmaster_atexit as an atexit(3)
callback each time through, although the obvious intention was to do
so only once per program run. The extra registrations were harmless,
so long as we didn't exceed ATEXIT_MAX, but still it's a bug.
Artur Zakirov, with bikeshedding by Kyotaro Horiguchi and me
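
The register-once pattern, as a C sketch:

    #include <stdlib.h>
    #include <stdbool.h>

    static bool atexit_registered = false;

    static void
    stop_postmaster_atexit(void)
    {
        /* ... shut down the postmaster if it is still running ... */
    }

    static void
    start_postmaster(void)
    {
        if (!atexit_registered)
        {
            atexit(stop_postmaster_atexit);
            atexit_registered = true;       /* register only once per run */
        }
        /* ... launch the postmaster ... */
    }
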
Tom Lane [Tue, 26 Jul 2016 19:25:02 +0000 (15:25 -0400)]
Fix constant-folding of ROW(...) IS [NOT] NULL with composite fields.
The SQL standard appears to specify that IS [NOT] NULL's tests of field
nullness are non-recursive, ie, we shouldn't consider that a composite
field with value ROW(NULL,NULL) is null for this purpose.
ExecEvalNullTest got this right, but eval_const_expressions did not,
leading to weird inconsistencies depending on whether the expression
was such that the planner could apply constant folding.
Also, adjust the docs to mention that IS [NOT] DISTINCT FROM NULL can be
used as a substitute test if a simple null check is wanted for a rowtype
argument. That motivated reordering things so that IS [NOT] DISTINCT FROM
is described before IS [NOT] NULL. In HEAD, I went a bit further and added
a table showing all the comparison-related predicates.
Per bug #14235. Back-patch to all supported branches, since it's certainly
undesirable that constant-folding should change the semantics.
Report and patch by Andrew Gierth; assorted wordsmithing and revised
regression test cases by me.
Tom Lane [Thu, 21 Jul 2016 20:52:36 +0000 (16:52 -0400)]
Make contrib regression tests safe for Danish locale.
In btree_gin and citext, avoid some not-particularly-interesting
dependencies on the sorting of 'aa'. In tsearch2, use COLLATE "C" to
remove an uninteresting dependency on locale sort order (and thereby
allow removal of a variant expected-file).
Also, in citext, avoid assuming that lower('I') = 'i'. This isn't relevant
to Danish but it does fail in Turkish.
Tom Lane [Thu, 21 Jul 2016 18:24:07 +0000 (14:24 -0400)]
Make pltcl regression tests safe for Danish locale.
Another peculiarity of Danish locale is that it has an unusual idea
of how to sort upper vs. lower case. One of the pltcl test cases has
an issue with that. Now that COLLATE works in all supported branches,
we can just change the test to be locale-independent, and get rid of
the variant expected file that used to support non-C locales.
Tom Lane [Tue, 19 Jul 2016 19:59:36 +0000 (15:59 -0400)]
Sync back-branch copies of the timezone code with IANA release tzcode2016c.
Back-patch commit 1c1a7cbd6a1600c9, along with subsequent portability
fixes, into all active branches. Also, back-patch commits 696027727 and 596857043 (addition of zic -P option) into 9.1 and 9.2, just to reduce
differences between the branches. src/timezone/ is now largely identical
in all active branches, except that in 9.1, pgtz.c retains the
initial-timezone-selection code that was moved over to initdb in 9.2.
Ordinarily we wouldn't risk this much code churn in back branches, but it
seems necessary in this case, because among the changes are two feature
additions in the "zic" zone data file compiler (a larger limit on the
number of allowed DST transitions, and addition of a "%z" escape in zone
abbreviations). IANA have not yet started to use those features in their
tzdata files, but presumably they will before too long. If we don't update
then we'll be unable to adopt new timezone data. Also, installations built
with --with-system-tzdata (which includes most distro-supplied builds, I
believe) might fail even if we don't update our copies of the data files.
There are assorted bug fixes too, mostly affecting obscure timezones or
post-2037 dates.
Peter Eisentraut [Sun, 17 Jul 2016 13:37:33 +0000 (09:37 -0400)]
Use correct symbol for minimum int64 value
The old code used SEQ_MINVALUE to get the smallest int64 value. This
was done as a convenience to avoid having to deal with INT64_IS_BUSTED,
but that is obsolete now. Also, it is incorrect because the smallest
int64 value is actually SEQ_MINVALUE-1. Fix by writing out the constant
the long way, as it is done elsewhere in the code.
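
A sketch of writing the constant out the long way (the macro name here is made up; the point is the spelled-out value):

    #include <stdint.h>

    /* smallest int64 value, written out instead of using SEQ_MINVALUE - 1 */
    #define MY_INT64_MIN    (-INT64_C(0x7FFFFFFFFFFFFFFF) - 1)
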
Tom Lane [Sat, 16 Jul 2016 18:42:37 +0000 (14:42 -0400)]
Fix crash in close_ps() for NaN input coordinates.
The Assert() here seems unreasonably optimistic. Andreas Seltenreich
found that it could fail with NaNs in the input geometries, and it
seems likely to me that it might fail in corner cases due to roundoff
error, even for ordinary input values. As a band-aid, make the function
return SQL NULL instead of crashing.
Andres Freund [Sat, 16 Jul 2016 00:49:49 +0000 (17:49 -0700)]
Fix torn-page, unlogged xid and further risks from heap_update().
When heap_update needs to look for a page for the new tuple version,
because the current one doesn't have sufficient free space, or when
columns have to be processed by the tuple toaster, it has to release the
lock on the old page while doing so. Otherwise there'd be lock ordering and
lock nesting issues.
To prevent concurrent sessions from trying to update / delete / lock the
tuple while the page's content lock is released, the tuple's xmax is set
to the current session's xid.
That unfortunately was done without any WAL logging, thereby violating
the rule that no XIDs may appear on disk, without an according WAL
record. If the database were to crash / fail over when the page level
lock is released, and some activity led to the page being written out
to disk, the xid could end up being reused; potentially leading to the
row becoming invisible.
There might be additional risks by not having t_ctid point at the tuple
itself, without having set the appropriate lock infomask fields.
To fix, compute the appropriate xmax/infomask combination for locking
the tuple, and perform WAL logging using the existing XLOG_HEAP_LOCK
record. That allows the fix to be backpatched.
This issue has existed for a long time. There appear to have been
partial attempts at preventing the danger, but these were never fully
implemented, and were removed a long time ago, in 11919160 (cf. HEAP_XMAX_UNLOGGED).
In master / 9.6, there's an additional issue, namely that the
visibilitymap's freeze bit isn't reset at that point yet. Since that's a
new issue, introduced only in a892234f830, that'll be fixed in a
separate commit.
Author: Masahiko Sawada and Andres Freund
Reported-By: Different aspects by Thomas Munro, Noah Misch, and others
Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
Backpatch: 9.1/all supported versions
Bruce Momjian [Sat, 2 Jul 2016 15:22:35 +0000 (11:22 -0400)]
doc: mention dependency on collation libraries
Document that index storage is dependent on the operating system's
collation library ordering, and any change in that ordering can create
invalid indexes.
Tom Lane [Wed, 22 Jun 2016 00:07:58 +0000 (20:07 -0400)]
Document that dependency tracking doesn't consider function bodies.
If there's anyplace in our SGML docs that explains this behavior, I can't
find it right at the moment. Add an explanation in "Dependency Tracking"
which seems like the authoritative place for such a discussion. Per
gripe from Michelle Schwan.
While at it, update this section's example of a dependency-related
error message: they last looked like that in 8.3. And remove the
explanation of dependency updates from pre-7.3 installations, which
is probably no longer worth anybody's brain cells to read.
The bogus error message example seems like an actual documentation bug,
so back-patch to all supported branches.
Tom Lane [Sun, 19 Jun 2016 18:01:17 +0000 (14:01 -0400)]
Increase fixed waits in "pg_ctl start -w" from 5 seconds to 10.
In the 9.1 branch only, modify test_postmaster_connection() so that it
will wait up to 10 seconds, not 5, for the postmaster pid file to appear.
This is a much simpler and safer, if less complete, way of addressing
the buildfarm instability issues we hoped to solve with c869a7d5b.
Tom Lane [Sun, 19 Jun 2016 17:45:03 +0000 (13:45 -0400)]
Revert "Fix "pg_ctl start -w" to test child process status directly."
This reverts commit c869a7d5b44e7164fadfb638786def05d510312a.
As pointed out by Maksym Sobolyev in bug #14199, that approach doesn't
work if the postmaster forks itself an extra time due to silent_mode
being enabled. We removed silent_mode in 9.2, so the pg_ctl change is
fine in 9.2 and later, but it fails when that option is enabled in 9.1.
Seeing that 9.1 is close to end-of-life, let's adopt the most conservative
fix we can, which is to revert the pg_ctl change in the 9.1 branch.
Tom Lane [Sun, 19 Jun 2016 17:11:40 +0000 (13:11 -0400)]
Docs: improve description of psql's %R prompt escape sequence.
Dilian Palauzov pointed out in bug #14201 that the docs failed to mention
the possibility of %R producing '(' due to an unmatched parenthesis.
He proposed just adding that in the same style as the other options were
listed; but it seemed to me that the sentence was already nearly
unintelligible, so I rewrote it a bit more extensively.
Tom Lane [Thu, 16 Jun 2016 21:16:32 +0000 (17:16 -0400)]
Fix validation of overly-long IPv6 addresses.
The inet/cidr types sometimes failed to reject IPv6 inputs with too many
colon-separated fields, instead translating them to '::/0'. This is the
result of a thinko in the original ISC code that seems to be as yet
unreported elsewhere. Per bug #14198 from Stefan Kaltenbrunner.
Tom Lane [Mon, 13 Jun 2016 17:53:10 +0000 (13:53 -0400)]
Fix multiple minor infelicities in aclchk.c error reports.
pg_type_aclmask reported the wrong type's OID when complaining that
it could not find a type's typelem. It also failed to provide a
suitable errcode when the initially given OID doesn't exist (which
is a user-facing error, since that OID can be user-specified).
pg_foreign_data_wrapper_aclmask and pg_foreign_server_aclmask likewise
lacked errcode specifications. Trivial cosmetic adjustments too.
The wrong-type-OID problem was reported by Petru-Florin Mihancea in
bug #14186; the other issues noted by me while reading the code.
These errors all seem to be aboriginal in the respective routines, so
back-patch as necessary.
Tom Lane [Thu, 9 Jun 2016 15:58:01 +0000 (11:58 -0400)]
Clarify documentation of ceil/ceiling/floor functions.
Document these as "nearest integer >= argument" and "nearest integer <=
argument", which will hopefully be less confusing than the old formulation.
New wording is from Matlab via Dean Rasheed.
I changed the pg_description entries as well as the SGML docs. In the
back branches, this will only affect installations initdb'd in the future,
but it should be harmless otherwise.
Alvaro Herrera [Tue, 7 Jun 2016 22:55:18 +0000 (18:55 -0400)]
nls-global.mk: search build dir for source files, too
In VPATH builds, the build directory was not being searched for files in
GETTEXT_FILES, leading to failure to construct the .pot files. This has
bitten me all along, but never hard enough to get it fixed; I suppose not a
lot of people use VPATH and NLS-enabled builds, and those that do
don't run "make update-po" often.
This is a longstanding problem, so backpatch all the way back.
Tom Lane [Mon, 6 Jun 2016 21:44:18 +0000 (17:44 -0400)]
Don't reset changes_since_analyze after a selective-columns ANALYZE.
If we ANALYZE only selected columns of a table, we should not postpone
auto-analyze because of that; other columns may well still need stats
updates. As committed, the counter is left alone if a column list is
given, whether or not it includes all analyzable columns of the table.
Per complaint from Tomasz Ostrowski.
It's been like this a long time, so back-patch to all supported branches.
Alvaro Herrera [Wed, 25 May 2016 23:39:49 +0000 (19:39 -0400)]
Avoid hot standby cancels from VAC FREEZE
VACUUM FREEZE generated false cancelations of standby queries on an
otherwise idle master. Caused by an off-by-one error on cutoff_xid
which goes back to the original commit.
Analysis and report by Marco Nenciarini
Bug fix by Simon Riggs
This is a correct backpatch of commit 66fbcb0d2e to branches 9.1 through
9.4. That commit was backpatched to 9.0 originally, but it was
immediately reverted in 9.0-9.4 because it didn't compile.
Tom Lane [Tue, 24 May 2016 19:47:51 +0000 (15:47 -0400)]
Fetch XIDs atomically during vac_truncate_clog().
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid
in-place, it's unsafe to assume that successive reads of those values will
give consistent results. Fetch each one just once to ensure sane behavior
in the minimum calculation. Noted while reviewing Alexander Korotkov's
patch in the same area.
Tom Lane [Tue, 24 May 2016 19:20:12 +0000 (15:20 -0400)]
Avoid consuming an XID during vac_truncate_clog().
vac_truncate_clog() uses its own transaction ID as the comparison point in
a sanity check that no database's datfrozenxid has already wrapped around
"into the future". That was probably fine when written, but in a lazy
vacuum we won't have assigned an XID, so calling GetCurrentTransactionId()
causes an XID to be assigned when otherwise one would not be. Most of the
time that's not a big problem ... but if we are hard up against the
wraparound limit, consuming XIDs during antiwraparound vacuums is a very
bad thing.
Instead, use ReadNewTransactionId(), which not only avoids this problem
but is in itself a better comparison point to test whether wraparound
has already occurred.
Report and patch by Alexander Korotkov. Back-patch to all versions.
Tom Lane [Mon, 23 May 2016 18:16:41 +0000 (14:16 -0400)]
Fix latent crash in do_text_output_multiline().
do_text_output_multiline() would fail (typically with a null pointer
dereference crash) if its input string did not end with a newline. Such
cases do not arise in our current sources; but it certainly could happen
in future, or in extension code's usage of the function, so we should fix
it. To fix, replace "eol += len" with "eol = text + len".
While at it, make two cosmetic improvements: mark the input string const,
and rename the argument from "text" to "txt" to dodge pgindent strangeness
(since "text" is a typedef name).
Even though this problem is only latent at present, it seems like a good
idea to back-patch the fix, since it's a very simple/safe patch and it's
not out of the realm of possibility that we might in future back-patch
something that expects sane behavior from do_text_output_multiline().
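
A sketch of the corrected line-splitting logic; emit_line() is a stand-in for the actual per-line output call:

    #include <stdio.h>
    #include <string.h>

    static void
    emit_line(const char *line, size_t len)
    {
        fwrite(line, 1, len, stdout);
        putchar('\n');
    }

    static void
    output_multiline(const char *txt)
    {
        while (*txt)
        {
            const char *eol = strchr(txt, '\n');
            size_t      len;

            if (eol == NULL)
            {
                len = strlen(txt);
                eol = txt + len;    /* was "eol += len", i.e. NULL + len */
            }
            else
                len = eol - txt;

            emit_line(txt, len);
            txt = (*eol == '\n') ? eol + 1 : eol;
        }
    }
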
Tom Lane [Fri, 13 May 2016 00:04:12 +0000 (20:04 -0400)]
Ensure plan stability in contrib/btree_gist regression test.
Buildfarm member skink failed with symptoms suggesting that an
auto-analyze had happened and changed the plan displayed for a
test query. Although this is evidently of low probability,
regression tests that sometimes fail are no fun, so add commands
to force a bitmap scan to be chosen.
Alvaro Herrera [Tue, 10 May 2016 19:23:54 +0000 (16:23 -0300)]
Fix autovacuum for shared relations
The table-skipping logic in autovacuum would fail to consider that
multiple workers could be processing the same shared catalog in
different databases. This normally wouldn't be a problem: firstly
because autovacuum workers not running for wraparound would simply ignore
tables on which they cannot acquire a lock, and secondly because most of the time
these tables are small enough that even if multiple for-wraparound
workers are stuck in the same catalog, they would be over pretty
quickly. But in cases where the catalogs are severely bloated it could
become a problem.
Backpatch all the way back, because the problem has been there since the
beginning.
OpenSSL has an unfortunate tendency to mix per-session state error
handling with per-thread error handling. This can cause problems when
programs that link to libpq with OpenSSL enabled have some other use of
OpenSSL; without care, one caller of OpenSSL may cause problems for the
other caller. Backend code might similarly be affected, for example
when a third party extension independently uses OpenSSL without taking
the appropriate precautions.
To fix, don't trust other users of OpenSSL to clear the per-thread error
queue. Instead, clear the entire per-thread queue ahead of certain I/O
operations when it appears that there might be trouble (these I/O
operations mostly need to call SSL_get_error() to check for success,
which relies on the queue being empty). This is slightly aggressive,
but it's pretty clear that the other callers have a very dubious claim
to ownership of the per-thread queue. Do this in both frontend and
backend code.
Finally, be more careful about clearing our own error queue, so as to
not cause these problems ourselves. It's possible that control previously
did not always reach SSLerrmessage(), where ERR_get_error() was supposed
to be called to clear the queue's earliest code. Make sure
ERR_get_error() is always called, so as to spare other users of OpenSSL
the possibility of similar problems caused by libpq (as opposed to
problems caused by a third party OpenSSL library like PHP's OpenSSL
extension). Again, do this in both frontend and backend code.
See bug #12799 and https://bugs.php.net/bug.php?id=68276
Based on patches by Dave Vitek and Peter Eisentraut.
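
A sketch of the defensive pattern using standard OpenSSL calls:

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    static int
    read_some(SSL *ssl, char *buf, int len)
    {
        int     n;

        ERR_clear_error();          /* drop errors left by other OpenSSL users */
        n = SSL_read(ssl, buf, len);
        if (n <= 0)
        {
            int     err = SSL_get_error(ssl, n);    /* relies on a clean queue */

            if (err == SSL_ERROR_SSL)
                (void) ERR_get_error();     /* always consume our own error */
            /* ... report or retry as appropriate ... */
        }
        return n;
    }
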
Tom Lane [Fri, 6 May 2016 16:09:20 +0000 (12:09 -0400)]
Fix possible read past end of string in to_timestamp().
to_timestamp() handles the TH/th format codes by advancing over two input
characters, whatever those are. It failed to notice whether there were
two characters available to be skipped, making it possible to advance
the pointer past the end of the input string and keep on parsing.
A similar risk existed in the handling of "Y,YYY" format: it would advance
over three characters after the "," whether or not three characters were
available.
In principle this might be exploitable to disclose contents of server
memory. But the security team concluded that it would be very hard to use
that way, because the parsing loop would stop upon hitting any zero byte,
and TH/th format codes can't be consecutive --- they have to follow some
other format code, which would have to match whatever data is there.
So it seems impractical to examine memory very much beyond the end of the
input string via this bug; and the input string will always be in local
memory not in disk buffers, making it unlikely that anything very
interesting is close to it in a predictable way. So this doesn't quite
rise to the level of needing a CVE.
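
A sketch of the bounds-checked skip for a two-character suffix such as TH/th:

    /* advance over up to two suffix characters, but never past the NUL */
    static const char *
    skip_th_suffix(const char *s)
    {
        if (*s)
            s++;
        if (*s)
            s++;
        return s;
    }
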
Tom Lane [Fri, 6 May 2016 00:08:58 +0000 (20:08 -0400)]
Update time zone data files to tzdata release 2016d.
DST law changes in Russia (Magadan, Tomsk regions) and Venezuela.
Historical corrections for Russia. There are new zone names Europe/Kirov
and Asia/Tomsk reflecting the fact that these regions now have different
time zone histories from adjacent regions.
Tom Lane [Mon, 2 May 2016 15:18:11 +0000 (11:18 -0400)]
Fix configure's incorrect version tests for flex and perl.
awk's equality-comparison operator is "==" not "=". We got this right
in many places, but not in configure's checks for supported version
numbers of flex and perl. It hadn't been noticed because unsupported
versions are so old as to be basically extinct in the wild, and because
the only consequence is whether or not a WARNING flies by during
configure.
Daniel Gustafsson noted the problem with respect to the test for flex,
I found the other by reviewing other awk calls.
CHECK_PAGE_OFFSET_RANGE() has been unused forever.
CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever since
bt_page_stats() and bt_page_items() functions were moved from pgstattuple
to the pageinspect module. It still exists in pageinspect/btreefuncs.c.
Tom Lane [Thu, 28 Apr 2016 15:50:58 +0000 (11:50 -0400)]
Adjust DatumGetBool macro, this time for sure.
Commit 23a41573c attempted to fix the DatumGetBool macro to ignore bits
in a Datum that are to the left of the actual bool value. But it did that
by casting the Datum to bool; and on compilers that use C99 semantics for
bool, that ends up being a whole-word test, not a 1-byte test. This seems
to be the true explanation for contrib/seg failing in VS2015. To fix, use
GET_1_BYTE() explicitly. I think in the previous patch, I'd had some idea
of not having to commit to bool being exactly 1 byte wide, but regardless
of what the compiler's bool is, boolean columns and Datums are certainly
1 byte wide.
The previous fix was (eventually) back-patched into all active versions,
so do likewise with this one.
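
The shape of the fix, sketched with stand-ins for the definitions in postgres.h:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uintptr_t Datum;        /* stand-in for the postgres.h typedef */

    /* test only the low-order byte, regardless of how wide the compiler's
     * bool is or what sits in the Datum's upper bytes */
    #define GET_1_BYTE(datum)   (((Datum) (datum)) & 0x000000ff)
    #define DatumGetBool(X)     ((bool) (GET_1_BYTE(X) != 0))
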
Tom Lane [Sat, 23 Apr 2016 20:53:15 +0000 (16:53 -0400)]
Rename strtoi() to strtoint().
NetBSD has seen fit to invent a libc function named strtoi(), which
conflicts with the long-established static functions of the same name in
datetime.c and ecpg's interval.c. While muttering darkly about intrusions
on application namespace, we'll rename our functions to avoid the conflict.
Back-patch to all supported branches, since this would affect attempts
to build any of them on recent NetBSD.
Tom Lane [Fri, 22 Apr 2016 00:05:58 +0000 (20:05 -0400)]
Fix planner failure with full join in RHS of left join.
Given a left join containing a full join in its righthand side, with
the left join's joinclause referencing only one side of the full join
(in a non-strict fashion, so that the full join doesn't get simplified),
the planner could fail with "failed to build any N-way joins" or related
errors. This happened because the full join was seen as overlapping the
left join's RHS, and then recent changes within join_is_legal() caused
that function to conclude that the full join couldn't validly be formed.
Rather than try to rejigger join_is_legal() yet more to allow this,
I think it's better to fix initsplan.c so that the required join order
is explicit in the SpecialJoinInfo data structure. The previous coding
there essentially ignored full joins, relying on the fact that we don't
flatten them in the joinlist data structure to preserve their ordering.
That's sufficient to prevent a wrong plan from being formed, but as this
example shows, it's not sufficient to ensure that the right plan will
be formed. We need to work a bit harder to ensure that the right plan
looks sane according to the SpecialJoinInfos.
Per bug #14105 from Vojtech Rylko. This was apparently induced by
commit 8703059c6 (though now that I've seen it, I wonder whether there
are related cases that could have failed before that); so back-patch
to all active branches. Unfortunately, that patch also went into 9.0,
so this bug is a regression that won't be fixed in that branch.
Tom Lane [Thu, 21 Apr 2016 20:58:47 +0000 (16:58 -0400)]
Improve TranslateSocketError() to handle more Windows error codes.
The coverage was rather lean for cases that bind() or listen() might
return. Add entries for everything that there's a direct equivalent
for in the set of Unix errnos that elog.c has heard of.
Tom Lane [Thu, 21 Apr 2016 20:16:19 +0000 (16:16 -0400)]
Remove dead code in win32.h.
There's no longer a need for the MSVC-version-specific code stanza that
forcibly redefines errno code symbols, because since commit 73838b52 we're
unconditionally redefining them in the stanza before this one anyway.
Now it's merely confusing and ugly, so get rid of it; and improve the
comment that explains what's going on here.
Although this is just cosmetic, back-patch anyway since I'm intending
to back-patch some less-cosmetic changes in this same hunk of code.
Tom Lane [Thu, 21 Apr 2016 18:20:18 +0000 (14:20 -0400)]
Fix ruleutils.c's dumping of ScalarArrayOpExpr containing an EXPR_SUBLINK.
When we shoehorned "x op ANY (array)" into the SQL syntax, we created a
fundamental ambiguity as to the proper treatment of a sub-SELECT on the
righthand side: perhaps what's meant is to compare x against each row of
the sub-SELECT's result, or perhaps the sub-SELECT is meant as a scalar
sub-SELECT that delivers a single array value whose members should be
compared against x. The grammar resolves it as the former case whenever
the RHS is a select_with_parens, making the latter case hard to reach ---
but you can get at it, with tricks such as attaching a no-op cast to the
sub-SELECT. Parse analysis would throw away the no-op cast, leaving a
parsetree with an EXPR_SUBLINK SubLink directly under a ScalarArrayOpExpr.
ruleutils.c was not clued in on this fine point, and would naively emit
"x op ANY ((SELECT ...))", which would be parsed as the first alternative,
typically leading to errors like "operator does not exist: text = text[]"
during dump/reload of a view or rule containing such a construct. To fix,
emit a no-op cast when dumping such a parsetree. This might well be
exactly what the user wrote to get the construct accepted in the first
place; and even if she got there with some other dodge, it is a valid
representation of the parsetree.
Per report from Karl Czajkowski. He mentioned only a case involving
RLS policies, but actually the problem is very old, so back-patch to
all supported branches.
Tom Lane [Thu, 21 Apr 2016 03:48:13 +0000 (23:48 -0400)]
Honor PGCTLTIMEOUT environment variable for pg_regress' startup wait.
In commit 2ffa86962077c588 we made pg_ctl recognize an environment variable
PGCTLTIMEOUT to set the default timeout for starting and stopping the
postmaster. However, pg_regress uses pg_ctl only for the "stop" end of
that; it has bespoke code for starting the postmaster, and that code has
historically had a hard-wired 60-second timeout. Further buildfarm
experience says it'd be a good idea if that timeout were also controlled
by PGCTLTIMEOUT, so let's make it so. Like the previous patch, back-patch
to all active branches.
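
The environment-variable override, sketched in C:

    #include <stdlib.h>

    static int
    startup_timeout_seconds(void)
    {
        const char *env = getenv("PGCTLTIMEOUT");
        int         t = (env != NULL) ? atoi(env) : 0;

        return (t > 0) ? t : 60;    /* 60 was the old hard-wired timeout */
    }
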
Tom Lane [Wed, 13 Apr 2016 22:57:52 +0000 (18:57 -0400)]
Fix pg_dump so pg_upgrade'ing an extension with simple opfamilies works.
As reported by Michael Feld, pg_upgrade'ing an installation having
extensions with operator families that contain just a single operator class
failed to reproduce the extension membership of those operator families.
This caused no immediate ill effects, but would create problems when later
trying to do a plain dump and restore, because the seemingly-not-part-of-
the-extension operator families would appear separately in the pg_dump
output, and then would conflict with the families created by loading the
extension. This has been broken ever since extensions were introduced,
and many of the standard contrib extensions are affected, so it's a bit
astonishing nobody complained before.
The cause of the problem is a perhaps-ill-considered decision to omit
such operator families from pg_dump's output on the grounds that the
CREATE OPERATOR CLASS commands could recreate them, and having explicit
CREATE OPERATOR FAMILY commands would impede loading the dump script into
pre-8.3 servers. Whatever the merits of that decision when 8.3 was being
written, it looks like a poor tradeoff now. We can fix the pg_upgrade
problem simply by removing that code, so that the operator families are
dumped explicitly (and then will be properly made to be part of their
extensions).
Although this fixes the behavior of future pg_upgrade runs, it does nothing
to clean up existing installations that may have improperly-linked operator
families. Given the small number of complaints to date, maybe we don't
need to worry about providing an automated solution for that; anyone who
needs to clean it up can do so with manual "ALTER EXTENSION ADD OPERATOR
FAMILY" commands, or even just ignore the duplicate-opfamily errors they
get during a pg_restore. In any case we need this fix.
Tom Lane [Mon, 4 Apr 2016 15:13:17 +0000 (11:13 -0400)]
Fix latent portability issue in pgwin32_dispatch_queued_signals().
The first iteration of the signal-checking loop would compute sigmask(0)
which expands to 1<<(-1) which is undefined behavior according to the
C standard. The lack of field reports of trouble suggest that it
evaluates to 0 on all existing Windows compilers, but that's hardly
something to rely on. Since signal 0 isn't a queueable signal anyway,
we can just make the loop iterate from 1 instead, and save a few cycles
as well as avoiding the undefined behavior.
In passing, avoid evaluating the volatile expression UNBLOCKED_SIGNAL_QUEUE
twice in a row; there's no reason to waste cycles like that.
Noted by Aleksander Alekseev, though this isn't his proposed fix.
Back-patch to all supported branches.
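
A sketch of the corrected loop; the signal count and queue handling are assumptions, but the classic sigmask() definition shows why signal 0 must be skipped:

    #define sigmask(sig)        (1 << ((sig) - 1))  /* sigmask(0) = 1 << -1, UB */
    #define PG_SIGNAL_COUNT     32                  /* assumed upper bound */

    static void
    dispatch_queued_signals(volatile int *queue)
    {
        int     exec_mask = *queue;             /* read the volatile just once */
        int     i;

        for (i = 1; i < PG_SIGNAL_COUNT; i++)   /* start at 1, not 0 */
        {
            if (exec_mask & sigmask(i))
            {
                /* ... run the handler for signal i ... */
            }
        }
    }
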
Tom Lane [Tue, 29 Mar 2016 15:54:58 +0000 (11:54 -0400)]
Avoid possibly-unsafe use of Windows' FormatMessage() function.
Whenever this function is used with the FORMAT_MESSAGE_FROM_SYSTEM flag,
it's good practice to include FORMAT_MESSAGE_IGNORE_INSERTS as well.
Otherwise, if the message contains any %n insertion markers, the function
will try to fetch argument strings to substitute --- which we are not
passing, possibly leading to a crash. This is exactly analogous to the
rule about not giving printf() a format string you're not in control of.
Noted and patched by Christian Ullrich.
Back-patch to all supported branches.
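
The recommended flag combination, sketched for a Windows build:

    #include <windows.h>
    #include <stdio.h>

    static void
    win32_error_text(DWORD errcode, char *buf, DWORD buflen)
    {
        if (!FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                            FORMAT_MESSAGE_IGNORE_INSERTS,  /* never expand %n markers */
                            NULL, errcode,
                            MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                            buf, buflen, NULL))
            snprintf(buf, buflen, "unknown error %lu", errcode);
    }
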
Andres Freund [Sun, 27 Mar 2016 15:47:46 +0000 (17:47 +0200)]
Change various Gin*Is* macros to return 0/1.
Returning the direct result of bit arithmetic, in a macro intended to be
used in a boolean manner, can be problematic if the return value is
stored in a variable of type 'bool'. If bool is implemented using C99's
_Bool, that can lead to comparison failures if the variable is then
compared again with the expression (see ginStepRight() for an example
that fails), as _Bool forces the result to be 0/1. That happens in some
configurations of newer MSVC compilers. It's also problematic when
storing the result of such an expression in a narrower type.
Several gin macros have been declared in that style since gin's initial
commit in 8a3631f8d86.
There's a lot more macros like this, but this is the only one causing
regression test failures; and I don't want to commit and backpatch a
larger patch with lots of conflicts just before the next set of minor
releases.
Discussion: 20150811154237.GD17575@awork2.anarazel.de
Backpatch: All supported branches
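
The general shape of the change, with a hypothetical flag macro (the real Gin macro names differ):

    typedef struct { unsigned int flags; } item_t;      /* hypothetical */
    #define MY_FLAG_BIT  0x10                           /* hypothetical */

    /* before: returned the raw bit-arithmetic result */
    #define ITEM_HAS_FLAG_OLD(it)   ((it)->flags & MY_FLAG_BIT)

    /* after: force 0/1 so storing the result in a C99 bool (or a narrower
     * type) still compares equal to re-evaluating the expression */
    #define ITEM_HAS_FLAG(it)       (((it)->flags & MY_FLAG_BIT) != 0)
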
Tom Lane [Sat, 26 Mar 2016 19:58:44 +0000 (15:58 -0400)]
Modernize zic's test for valid timezone abbreviations.
We really need to sync all of our IANA-derived timezone code with upstream,
but that's going to be a large patch and I certainly don't care to shove
such a thing into stable branches immediately before a release. As a
stopgap, copy just the tzcode2016c logic that checks validity of timezone
abbreviations. This prevents getting multiple "time zone abbreviation
differs from POSIX standard" bleats with tzdata 2014b and later.
Tom Lane [Fri, 25 Mar 2016 23:03:08 +0000 (19:03 -0400)]
Update time zone data files to tzdata release 2016c.
DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai,
Astrakhan, Kirov, Sakhalin, Ulyanovsk regions). Historical corrections
for Lithuania, Moldova, Russia (Kaliningrad, Samara, Volgograd).
As of 2015b, the keepers of the IANA timezone database started to use
numeric time zone abbreviations (e.g., "+04") instead of inventing
abbreviations not found in the wild like "ASTT". This causes our rather
old copy of zic to whine "warning: time zone abbreviation differs from
POSIX standard" several times during "make install". This warning is
harmless according to the IANA folk, and I don't see any problems with
these abbreviations in some simple tests; but it seems like now would be
a good time to update our copy of the tzcode stuff. I'll look into that
soon.
Andrew Dunstan [Sat, 19 Mar 2016 22:59:41 +0000 (18:59 -0400)]
Remove dependency on psed for MSVC builds.
Modern Perl has removed psed from its core distribution, so it might not
be readily available on some build platforms. We therefore replace its
use with a Perl script generated by s2p, which is equivalent to the sed
script. The latter is retained for non-MSVC builds to avoid creating a
new hard dependency on Perl for non-Windows tarball builds.
Tom Lane [Thu, 17 Mar 2016 03:18:08 +0000 (23:18 -0400)]
Fix "pg_bench -C -M prepared".
This didn't work because when we dropped and re-established a database
connection, we did not bother to reset session-specific state such as
the statements-are-prepared flags.
The st->prepared[] array certainly needs to be flushed, and I cleared a
couple of other fields as well that couldn't possibly retain meaningful
state for a new connection.
In passing, fix some bogus comments and strange field order choices.
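
A sketch of the reconnect path; st->prepared[] is the array named above, the rest of the structure is assumed:

    #include <stdbool.h>
    #include <string.h>

    #define MAX_SCRIPTS 128                 /* assumed bound */

    typedef struct ClientState              /* only the relevant field */
    {
        bool    prepared[MAX_SCRIPTS];      /* per-connection "already prepared" flags */
    } ClientState;

    static void
    reset_connection_state(ClientState *st)
    {
        /* a fresh connection knows nothing about previously prepared statements */
        memset(st->prepared, 0, sizeof(st->prepared));
    }
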
Tom Lane [Tue, 15 Mar 2016 17:19:58 +0000 (13:19 -0400)]
Cope if platform declares mbstowcs_l(), but not locale_t, in <xlocale.h>.
Previously, we included <xlocale.h> only if necessary to get the definition
of type locale_t. According to notes in PGAC_TYPE_LOCALE_T, this is
important because on some versions of glibc that file supplies an
incompatible declaration of locale_t. (This info may be obsolete, because
on my RHEL6 box that seems to be the *only* definition of locale_t; but
there may still be glibc's in the wild for which it's a live concern.)
It turns out though that on FreeBSD and maybe other BSDen, you can get
locale_t from stdlib.h or locale.h but mbstowcs_l() and friends only from
<xlocale.h>. This was leaving us compiling calls to mbstowcs_l() and
friends with no visible prototype, which causes a warning and could
possibly cause actual trouble, since it's not declared to return int.
Hence, adjust the configure checks so that we'll include <xlocale.h>
either if it's necessary to get type locale_t or if it's necessary to
get a declaration of mbstowcs_l().
Report and patch by Aleksander Alekseev, somewhat whacked around by me.
Back-patch to all supported branches, since we have been using
mbstowcs_l() since 9.1.
Tom Lane [Mon, 14 Mar 2016 15:31:22 +0000 (11:31 -0400)]
Add missing NULL terminator to list_SECURITY_LABEL_preposition[].
On the machines I tried this on, pressing TAB after SECURITY LABEL led to
being offered ON and FOR as intended, plus random other keywords (varying
across machines). But if you were a bit more unlucky you'd get a crash,
as reported by nummervet@mail.ru in bug #14019.
Seems to have been an aboriginal error in the SECURITY LABEL patch,
commit 4d355a8336e0f226. Hence, back-patch to all supported versions.
There's no bug in HEAD, though, thanks to our recent tab-completion
rewrite.
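
The shape of the fix: keyword lists scanned by tab completion must end with a NULL sentinel, e.g.

    static const char *const list_SECURITY_LABEL_preposition[] =
        {"ON", "FOR", NULL};        /* the trailing NULL terminator was missing */
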
Magnus Hagander [Thu, 10 Mar 2016 12:48:58 +0000 (13:48 +0100)]
Avoid crash on old Windows with AVX2-capable CPU for VS2013 builds
The Visual Studio 2013 CRT generates invalid code when it makes a 64-bit
build that is later used on a CPU that supports AVX2 instructions using a
version of Windows before 7SP1/2008R2SP1.
Detect this combination, and in those cases turn off the generation of
FMA3, per recommendation from the Visual Studio team.
The bug is actually in the CRT shipping with Visual Studio 2013, but
Microsoft have stated they're only fixing it in newer major versions.
The fix is therefore conditioned specifically on being built with this
version of Visual Studio, and not previous or later versions.
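
The workaround is to switch off the CRT's FMA3 code paths at startup; a sketch, guarded for the affected compiler (the OS-version check is omitted, and the CRT entry point is declared here as an assumption):

    #if defined(_MSC_VER) && _MSC_VER == 1800 && defined(_M_AMD64)
    extern int _set_FMA3_enable(int flag);      /* assumed VS2013 x64 CRT switch */
    #endif

    static void
    disable_fma3_if_needed(void)
    {
    #if defined(_MSC_VER) && _MSC_VER == 1800 && defined(_M_AMD64)
        _set_FMA3_enable(0);        /* avoid FMA3 code paths in CRT math routines */
    #endif
    }
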
Andres Freund [Thu, 10 Mar 2016 02:53:54 +0000 (18:53 -0800)]
Avoid unlikely data-loss scenarios due to rename() without fsync.
Renaming a file using rename(2) is not guaranteed to be durable in face
of crashes. Use the previously added durable_rename()/durable_link_or_rename()
in various places where we previously just renamed files.
Most of the changed call sites are arguably not critical, but it seems
better to err on the side of too much durability. The most prominent
known case where the previously missing fsyncs could cause data loss is
crashes at the end of a checkpoint. After the actual checkpoint has been
performed, old WAL files are recycled. When they're filled, their
contents are fdatasynced, but we did not fsync the containing
directory. An OS/hardware crash in an unfortunate moment could then end
up leaving that file with its old name, but new content; WAL replay
would thus not replay it.
Reported-By: Tomas Vondra
Author: Michael Paquier, Tomas Vondra, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
Andres Freund [Thu, 10 Mar 2016 02:53:54 +0000 (18:53 -0800)]
Introduce durable_rename() and durable_link_or_rename().
Renaming a file using rename(2) is not guaranteed to be durable in face
of crashes; especially on filesystems like xfs and ext4 when mounted
with data=writeback. To be certain that a rename() atomically replaces
the previous file contents in the face of crashes and different
filesystems, one has to fsync the old filename, rename the file, fsync
the new filename, fsync the containing directory. This sequence is not
generally adhered to currently; which exposes us to data loss risks. To
avoid having to repeat this arduous sequence, introduce
durable_rename(), which wraps all that.
Also add durable_link_or_rename(). Several places use link() (with a
fallback to rename()) to rename a file, trying to avoid replacing the
target file out of paranoia. Some of those rename sequences need to be
durable as well. There seems little reason to extend several copies of the
same logic, so centralize the link() callers.
This commit does not yet make use of the new functions; they're used in
a followup commit.
Author: Michael Paquier, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
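
A sketch of the full durable-rename sequence on POSIX systems (error handling abbreviated):

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int
    fsync_path(const char *path)
    {
        int     fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    /* fsync the old file, rename it, fsync the new name, then fsync the
     * containing directory so the rename itself survives a crash */
    static int
    durable_rename_sketch(const char *oldfile, const char *newfile)
    {
        char    dir[1024];

        if (fsync_path(oldfile) != 0)
            return -1;
        if (rename(oldfile, newfile) != 0)
            return -1;
        if (fsync_path(newfile) != 0)
            return -1;

        strncpy(dir, newfile, sizeof(dir) - 1);
        dir[sizeof(dir) - 1] = '\0';
        return fsync_path(dirname(dir));    /* dirname may modify its argument */
    }
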
Tom Lane [Wed, 9 Mar 2016 19:51:02 +0000 (14:51 -0500)]
Fix incorrect handling of NULL index entries in indexed ROW() comparisons.
An index search using a row comparison such as ROW(a, b) > ROW('x', 'y')
would stop upon reaching a NULL entry in the "b" column, ignoring the
fact that there might be non-NULL "b" values associated with later values
of "a". This happens because _bt_mark_scankey_required() marks the
subsidiary scankey for "b" as required, which is just wrong: it's for
a column after the one with the first inequality key (namely "a"), and
thus can't be considered a required match.
This bit of brain fade dates back to the very beginnings of our support
for indexed ROW() comparisons, in 2006. Kind of astonishing that no one
came across it before Glen Takahashi, in bug #14010.
Back-patch to all supported versions.
Note: the given test case doesn't actually fail in unpatched 9.1, evidently
because the fix for bug #6278 (i.e., stopping at nulls in either scan
direction) is required to make it fail. I'm sure I could devise a case
that fails in 9.1 as well, perhaps with something involving making a cursor
back up; but it doesn't seem worth the trouble.
Andres Freund [Tue, 8 Mar 2016 22:59:29 +0000 (14:59 -0800)]
ltree: Zero padding bytes when allocating memory for externally visible data.
ltree/ltree_gist/ltxtquery's headers store data at MAXALIGN alignment,
requiring some padding bytes. So far we left these uninitialized. Zero
those by using palloc0.
Author: Andres Freund
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Backpatch: 9.1-
Andres Freund [Tue, 8 Mar 2016 21:33:24 +0000 (13:33 -0800)]
plperl: Correctly handle empty arrays in plperl_ref_from_pg_array.
plperl_ref_from_pg_array() didn't consider the case that postgres arrays
can have 0 dimensions (when they're empty) and accessed the first
dimension without a check. Fix that by special-casing the empty-array
case.
Author: Alex Hunsaker
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de
Backpatch: 9.1-
Tom Lane [Mon, 7 Mar 2016 15:40:44 +0000 (10:40 -0500)]
Fix backwards test for Windows service-ness in pg_ctl.
A thinko in a96761391 caused pg_ctl to get it exactly backwards when
deciding whether to report problems to the Windows eventlog or to stderr.
Per bug #14001 from Manuel Mathar, who also identified the fix.
Like the previous patch, back-patch to all supported branches.
Tom Lane [Mon, 7 Mar 2016 00:21:03 +0000 (19:21 -0500)]
Fix not-terribly-safe coding in NIImportOOAffixes() and NIImportAffixes().
There were two places in spell.c that supposed that they could search
for a location in a string produced by lowerstr() and then transpose
the offset into the original string. But this fails completely if
lowerstr() transforms any characters into characters of different byte
length, as can happen in Turkish UTF8 for instance.
We'd added some comments about this coding in commit 51e78ab4ff328296,
but failed to realize that it was not merely confusing but wrong.
Coverity complained about this code years ago, but in such an opaque
fashion that nobody understood what it was on about. I'm not entirely
sure that this issue *is* what it's on about, actually, but perhaps
this patch will shut it up -- and in any case the problem is clear.
Robert Haas [Fri, 4 Mar 2016 16:53:20 +0000 (11:53 -0500)]
Fix query-based tab completion for multibyte characters.
The existing code confuses the byte length of the string (which is
relevant when passing it to pg_strncasecmp) with the character length
of the string (which is relevant when it is used with the SQL substring
function). Separate those two concepts.
Report and patch by Kyotaro Horiguchi, reviewed by Thomas Munro and
reviewed and further revised by me.
Tom Lane [Tue, 1 Mar 2016 00:11:38 +0000 (19:11 -0500)]
Improve error message for rejecting RETURNING clauses with dropped columns.
This error message was written with only ON SELECT rules in mind, but since
then we also made RETURNING-clause targetlists go through the same logic.
This means that you got a rather off-topic error message if you tried to
add a rule with RETURNING to a table having dropped columns. Ideally we'd
just support that, but some preliminary investigation says that it might be
a significant amount of work. Seeing that Nicklas Avén's complaint is the
first one we've gotten about this in the ten years or so that the code's
been like that, I'm unwilling to put much time into it. Instead, improve
the error report by issuing a different message for RETURNING cases, and
revise the associated comment based on this investigation.