Michael Paquier [Wed, 26 Jun 2019 01:44:46 +0000 (10:44 +0900)]
Add support for OpenSSL 1.1.0 and newer versions in MSVC scripts
Up to now, the MSVC build scripts have been able to support only one fixed
version of OpenSSL, and they lacked the logic to detect which version of
OpenSSL a given build of Postgres is linking to (currently 1.0.2, the
latest upstream LTS, which will be EOL'd at the end of 2019).
This commit adds more logic to detect the version of OpenSSL used by a
build and makes use of it to add support for compilation with OpenSSL
1.1.0 which requires a new set of compilation flags to work properly.
The supported OpenSSL installers have changed their library layout with
various library renames as of the upgrade to 1.1.0, making the logic a
bit more complicated. The scripts are now able to adapt to the new
world order.
Reported-by: Sergey Pashkov
Author: Juan José Santamaría Flecha, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15789-8fc75dea3c5a17c8@postgresql.org
Michael Paquier [Tue, 25 Jun 2019 00:09:27 +0000 (09:09 +0900)]
Add toast-level reloption for vacuum_index_cleanup
a96c41f has introduced the option for heap, but it still lacked the
variant to control the behavior for toast relations.
While on it, refactor the tests so that they stress more scenarios with
the various values that vacuum_index_cleanup can use. It would be
useful to couple those tests with pageinspect to check that pages are
actually cleaned up, but this is left for later.
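For illustration, a minimal sketch of using the new reloption (the table
name is hypothetical):
CREATE TABLE t (id int, payload text);
ALTER TABLE t SET (toast.vacuum_index_cleanup = false);
VACUUM t;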
Author: Masahiko Sawada, Michael Paquier
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAD21AoCqs8iN04RX=i1KtLSaX5RrTEM04b7NHYps4+rqtpWNEg@mail.gmail.com
Tom Lane [Mon, 24 Jun 2019 21:19:32 +0000 (17:19 -0400)]
Purely-cosmetic adjustments in tablecmds.c.
Move ATExecAlterColumnGenericOptions away from where it was unthinkingly
dropped, in the middle of a lot of ALTER COLUMN TYPE code. I don't have
any high principles about where to put it instead, so let's just put it
after ALTER COLUMN TYPE and before ALTER OWNER, matching existing
decisions about how to order related code stanzas.
Also add the minimal function header comment that the original author
was too cool to bother with.
Along the way, upgrade header comments for nearby ALTER COLUMN TYPE
functions.
Tom Lane [Mon, 24 Jun 2019 20:43:05 +0000 (16:43 -0400)]
Further fix ALTER COLUMN TYPE's handling of indexes and index constraints.
This patch reverts all the code changes of commit e76de8861, which turns
out to have been seriously misguided. We can't wait till later to compute
the definition string for an index; we must capture that before applying
the data type change for any column it depends on, else ruleutils.c will
deliver wrong/misleading results. (This fine point was documented
nowhere, of course.)
I'd also managed to forget that ATExecAlterColumnType executes once per
ALTER COLUMN TYPE clause, not once per statement; which resulted in the
code being basically completely broken for any case in which multiple ALTER
COLUMN TYPE clauses are applied to a table having non-constraint indexes
that must be rebuilt. Through very bad luck, neither the existing test
cases nor the ones added by e76de8861 caught that, but of course it was
soon found in the field.
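For reference, the kind of statement that was broken looks like this
(table, column, and index names are hypothetical):
CREATE TABLE t (a int, b int);
CREATE INDEX t_a_b_idx ON t (a, b);  -- plain index, not backing a constraint
ALTER TABLE t
    ALTER COLUMN a TYPE bigint,
    ALTER COLUMN b TYPE bigint;      -- two ALTER COLUMN TYPE clauses, one statement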
The previous patch also had an implicit assumption that if a constraint's
index had a dependency on a table column, so would the constraint --- but
that isn't actually true, so it didn't fix such cases.
Instead of trying to delete unneeded index dependencies later, do the
is-there-a-constraint lookup immediately on seeing an index dependency,
and switch to remembering the constraint if so. In the unusual case of
multiple column dependencies for a constraint index, this will result in
duplicate constraint lookups, but that's not that horrible compared to all
the other work that happens here. Besides, such cases did not work at all
before, so it's hard to argue that they're performance-critical for anyone.
Per bug #15865 from Keith Fiske. As before, back-patch to all supported
branches.
Tom Lane [Mon, 24 Jun 2019 16:36:51 +0000 (12:36 -0400)]
Drop test user when done with it.
Commit d7f8d26d9 added a test case that created a user, but forgot
to drop it again. This is no good; for one thing, it causes repeated
"make installcheck" runs to fail.
Peter Eisentraut [Mon, 24 Jun 2019 08:39:12 +0000 (10:39 +0200)]
Upgrade internal error message to external
As part of REINDEX CONCURRENTLY, this formerly internal-only error
message becomes potentially user-visible (see regression tests), so
change from errmsg_internal() to errmsg(), and update comment.
Dean Rasheed [Sun, 23 Jun 2019 17:50:08 +0000 (18:50 +0100)]
Add security checks to the multivariate MCV estimation code.
The multivariate MCV estimation code may run user-defined operators on
the values in the MCV list, which means that those operators may
potentially leak the values from the MCV list. Guard against leaking
data to unprivileged users by checking that the user has SELECT
privileges on the table or all of the columns referred to by the
statistics.
Additionally, if there are any securityQuals on the RTE (either due to
RLS policies on the table, or accessing the table via a security
barrier view), not all rows may be visible to the current user, even
if they have table or column privileges. Thus we further insist that
the operator be leakproof in this case.
Tom Lane [Sun, 23 Jun 2019 00:31:50 +0000 (20:31 -0400)]
Fix spinlock assembly code for MIPS so it works on MIPS r6.
Original MIPS-I processors didn't have the LL/SC instructions (nor any
other userland synchronization primitive). If the build toolchain
targets that ISA variant by default, as an astonishingly large fraction
of MIPS platforms still do, the assembler won't take LL/SC without
coercion in the form of a ".set mips2" directive. But we issued that
unconditionally, making it an ISA downgrade for chips later than MIPS2.
That breaks things for the latest MIPS r6 ISA, which encodes these
instructions differently. Adjust the code so we don't change ISA level
if it's >= 2.
Note that this patch doesn't change what happens on an actual MIPS-I
processor: either the kernel will emulate these instructions
transparently, or you'll get a SIGILL failure. That tradeoff seemed
fine in 2002 when this code was added (cf 3cbe6b247), and it's even
more so today when MIPS-I is basically extinct. But let's add a
comment about that.
YunQiang Su (with cosmetic adjustments by me). Back-patch to all
supported branches.
Noah Misch [Sat, 22 Jun 2019 03:34:23 +0000 (20:34 -0700)]
Consolidate methods for translating a Perl path to a Windows path.
This fixes some TAP suites when using msys Perl and a builddir located
in an msys mount point other than "/". For example, builddir=/c/pg
exhibited the problem, since /c/pg falls in mount point "/c".
Back-patch to 9.6, where tests first started to perform such
translations. In back branches, offer both new and old APIs.
Thomas Munro [Thu, 20 Jun 2019 22:57:07 +0000 (10:57 +1200)]
Remove obsolete comments about semaphores from proc.c.
Commit 6753333f switched from a semaphore-based wait to a latch-based
wait for ProcSleep()/ProcWakeup(), but left behind some stray references
to semaphores.
Back-patch to 9.5.
Reviewed-by: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/CA+hUKGLs5H6zhmgTijZ1OaJvC1sG0=AFXc1aHuce32tKiQrdEA@mail.gmail.com
Peter Eisentraut [Wed, 19 Jun 2019 19:43:31 +0000 (21:43 +0200)]
pg_upgrade: Improve invalid option handling
Currently, calling pg_upgrade with an invalid command-line option
aborts pg_upgrade but leaves a pg_upgrade_internal.log file lying
around. Reorder things a bit so that that file is not created until
all the options have been parsed.
The description ended partway through, and the PASSING clause is not
implemented yet. But the variables might be passed as parameters to
several jsonpath functions. So, complete the description based on the
current implementation, leaving the description of the PASSING clause
as a TODO.
Discussion: https://postgr.es/m/CAKPRHz%2BxOuQSSvkuB1mCQjedd%2BB2B1Vnkrq0E-pLmoXyTO%2Bz9Q%40mail.gmail.com
Author: Kyotaro Horiguchi, Alexander Korotkov
Support 'q' flag in jsonpath 'like_regex' predicate
The SQL/JSON standard defines that the jsonpath 'like_regex' predicate
should support the same set of flags as XQuery/XPath. It appears that
the implementation of the 'q' flag was missed. This commit fixes that.
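For example, the 'q' flag quotes the pattern so that it is matched as a
literal string rather than as a regular expression (the query below is
purely illustrative):
SELECT jsonb_path_query('["a.c", "abc"]', '$[*] ? (@ like_regex "a.c" flag "q")');
-- returns only "a.c"; without the flag, "abc" would also match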
Discussion: https://postgr.es/m/CAPpHfdtyfPsxLYiTjp5Ov8T5xGsB5t3CwE5%2B3PS%3DLLwA%2BxTJog%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov
Michael Paquier [Wed, 19 Jun 2019 02:18:50 +0000 (11:18 +0900)]
Remove last references to WAL segment size in MSVC scripts
fc49e24 removed the last use of this compile-time variable, as the WAL
segment size is something that can now be set at initdb time; still, that
commit forgot some references to it.
Michael Paquier [Wed, 19 Jun 2019 02:02:19 +0000 (11:02 +0900)]
Fix description of WAL record XLOG_BTREE_META_CLEANUP
This record uses one metadata buffer and registers some data associated
with the buffer, but when parsing the record for its description, the
record data was accessed directly, although there is none. This usually
leads to an incorrect description, but it can also cause crashes, as in
pg_waldump. Instead, fix things so that the parsing uses the data
associated with the metadata block.
This is an oversight from 3d92796, so backpatch down to 11.
Author: Michael Paquier
Discussion: https://postgr.es/m/20190617013059.GA3153@paquier.xyz
Backpatch-through: 11
Andres Freund [Tue, 18 Jun 2019 22:51:04 +0000 (15:51 -0700)]
Fix memory corruption/crash in ANALYZE.
This fixes an embarrassing oversight I (Andres) made in 737a292b,
namely missing two places where liverows/deadrows were used when
converting those variables to pointers, leading to incrementing the
pointer, rather than the value.
It's actually not that easy to trigger a crash: one needs tuples
deleted by the current transaction, followed by a tuple deleted in
another session, all on one page. That is presumably why this hasn't
been noticed before.
Reported-By: Steve Singer
Author: Steve Singer
Discussion: https://postgr.es/m/c7988239-d42c-ddc4-41db-171b23b35e4f@ssinger.info
Alvaro Herrera [Tue, 18 Jun 2019 22:23:16 +0000 (18:23 -0400)]
Avoid spurious deadlocks when upgrading a tuple lock
This puts back reverted commit de87a084c0a5, with some bug fixes.
When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes. The simplest example case seems to be:
T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;
At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping. Kaboom.
This can be avoided by having Z skip the heavyweight lock acquisition. As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.
Backpatch to 9.6. The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.
Thomas Munro [Tue, 18 Jun 2019 10:38:30 +0000 (22:38 +1200)]
Prevent Parallel Hash Join for JOIN_UNIQUE_INNER.
WHERE EXISTS (...) queries cannot be executed by Parallel Hash Join
with jointype JOIN_UNIQUE_INNER, because there is no way to make a
partial plan totally unique. The consequence of allowing such plans
was duplicate results from some EXISTS queries.
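For reference, the affected query shape looks roughly like this (table
and column names are hypothetical):
SELECT o.id FROM orders o
WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.id);
-- with the bug, such a query could return duplicate rows when the inner
-- side was planned as a Parallel Hash Join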
Back-patch to 11. Bug #15857.
Author: Thomas Munro
Reviewed-by: Tom Lane
Reported-by: Vladimir Kriukov
Discussion: https://postgr.es/m/15857-d1ba2a64bce0795e%40postgresql.org
Michael Paquier [Mon, 17 Jun 2019 13:13:57 +0000 (22:13 +0900)]
Fix buffer overflow when processing SCRAM final message in libpq
When a client connects to a rogue server sending specially crafted
messages, this can suffice to execute arbitrary code as the operating
system account used by the client.
While on it, fix the error handling when decoding an incorrect salt
included in the first message received from the server.
Author: Michael Paquier
Reviewed-by: Jonathan Katz, Heikki Linnakangas
Security: CVE-2019-10164
Backpatch-through: 10
Michael Paquier [Mon, 17 Jun 2019 12:48:17 +0000 (21:48 +0900)]
Fix buffer overflow when parsing SCRAM verifiers in backend
Any authenticated user can overflow a stack-based buffer by changing the
user's own password to a purpose-crafted value. This often suffices to
execute arbitrary code as the PostgreSQL operating system account.
This fix is contributed by multiple folks, based on an initial analysis
from Tom Lane. This issue was introduced by 68e61ee, so it was
possible to make use of it at authentication time. It became easier
to trigger after ccae190, which made the SCRAM parsing stricter when
changing a password, in the case where the client passes down a
verifier already hashed using SCRAM. Back-patch to v10 where SCRAM
was introduced.
Reported-by: Alexander Lakhin
Author: Jonathan Katz, Heikki Linnakangas, Michael Paquier
Security: CVE-2019-10164
Backpatch-through: 10
This code has some tricky corner cases that I'm not sure are correct and
not properly tested anyway, so I'm reverting the whole thing for next
week's releases (reintroducing the deadlock bug that we set out to fix).
I'll try again afterwards.
Tom Lane [Sun, 16 Jun 2019 15:00:23 +0000 (11:00 -0400)]
Further fix privileges on pg_statistic_ext[_data].
We don't need to restrict column privileges on pg_statistic_ext;
all of that data is OK to read publicly. What we *do* need to do,
which was overlooked by 6cbfb784c, is revoke public read access on
pg_statistic_ext_data; otherwise we still have the same security
hole we started with.
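The substance of the fix is roughly the following, as done for other
catalogs that carry user data:
REVOKE ALL ON pg_statistic_ext_data FROM PUBLIC;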
Catversion bump to ensure that installations calling themselves
beta2 will have this fix.
Diagnosis/correction by Dean Rasheed and Tomas Vondra, but I'm
going to go ahead and push this fix ASAP so we get more buildfarm
cycles on it.
Tomas Vondra [Sun, 16 Jun 2019 10:04:40 +0000 (12:04 +0200)]
Fix privileges on pg_statistic_ext.tableoid
The GRANT in system_views allowed SELECT privileges on various columns in
the pg_statistic_ext catalog, but tableoid was not included in the list.
That made pg_dump fail because it accesses this column when building
the list of extended statistics to dump.
Tomas Vondra [Thu, 13 Jun 2019 15:25:04 +0000 (17:25 +0200)]
Add pg_stats_ext view for extended statistics
Regular per-column statistics are stored in the pg_statistic catalog, which
is however rather difficult to read, so we also have the pg_stats view with
a human-readable version of the data.
For extended statistics the catalog was fairly easy to read, so we did
not have such a human-readable view so far. Commit 9b6babfa2d however
split the catalog into two, which makes querying harder. Furthermore,
we want to show the multi-column MCV list in a way similar to per-column
stats (and not as a bytea value).
This commit introduces pg_stats_ext view, joining the two catalogs and
massaging the data to produce human-readable output similar to pg_stats.
It also considers RLS and access privileges - the data is shown only when
the user has access to all columns the extended statistics are defined on.
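As a usage sketch (the statistics object, table, and column names are
hypothetical):
CREATE STATISTICS addr_stats (mcv) ON city, zip FROM addresses;
ANALYZE addresses;
SELECT statistics_name, attnames, most_common_vals
FROM pg_stats_ext WHERE tablename = 'addresses';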
Bumped CATVERSION due to adding new system view.
Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
Tomas Vondra [Thu, 13 Jun 2019 15:19:21 +0000 (17:19 +0200)]
Rework the pg_statistic_ext catalog
Since extended statistics were introduced in PostgreSQL 10, there has been
a single catalog, pg_statistic_ext, storing both the definitions and the
built statistics. That's however problematic when a user is supposed to
have access only to the definitions, but not to user data.
Consider for example pg_dump on a database with RLS enabled - if the
pg_statistic_ext catalog respects RLS (which it should, if it contains
user data), pg_dump would not see any records and the result would not
define any extended statistics. That would be a surprising behavior.
Until now this was not a pressing issue, because the existing types of
extended statistics (functional dependencies and ndistinct coefficients)
do not include any user data directly. This changed with the introduction
of MCV lists, which do include the most common combinations of values.
The easiest way to fix this is to split the pg_statistic_ext catalog
into two - one for definitions, one for the built statistic values.
The new catalog is called pg_statistic_ext_data, and we're maintaining
a 1:1 relationship with the old catalog - either there are matching
records in both catalogs, or neither of them.
Bumped CATVERSION due to changing system catalog definitions.
Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
Andrew Gierth [Sat, 15 Jun 2019 17:15:23 +0000 (18:15 +0100)]
Prefer timezone name "UTC" over alternative spellings.
tzdb 2019a made "UCT" a link to the "UTC" zone rather than a separate
zone with its own abbreviation. Unfortunately, our code for choosing a
timezone in initdb has an arbitrary preference for names earlier in
the alphabet, and so it would choose the spelling "UCT" over "UTC"
when the system is running on a UTC zone.
Commit 23bd3cec6 was backpatched in order to address this issue, but
that code helps only when /etc/localtime exists as a symlink, and does
nothing to help on systems where /etc/localtime is a copy of a zone
file (as is the standard setup on FreeBSD and probably some other
platforms too) or when /etc/localtime is simply absent (giving UTC as
the default).
Accordingly, add a preference for the spelling "UTC", such that if
multiple zone names have equally good content matches, we prefer that
name before applying the existing arbitrary rules. Also add a slightly
lower preference for "Etc/UTC"; lower because that preserves the
previous behaviour of choosing the shorter name, but letting us still
choose "Etc/UTC" over "Etc/UCT" when both exist but "UTC" does
not (not common, but I've seen it happen).
Backpatch all the way, because the tzdb change that sparked this issue
is in those branches too.
* When reusing conninfo data from the previous connection in \connect,
the host address should only be reused if it was specified as
hostaddr; if it wasn't, then 'host' is resolved afresh. We were
reusing the same IP address, which ignores a possible DNS change as
well as any other addresses the name might resolve to besides the one
that was used in the original connection.
* PQhost, PQhostaddr: Don't present user-specified hostaddr when we have
an inet_net_ntop-produced equivalent address. The latter has been
put in canonical format, which is cleaner (so it produces "127.0.0.1"
when given "host=2130706433", for example).
* Document the hostaddr-reusing aspect of \connect.
Etsuro Fujita [Fri, 14 Jun 2019 11:49:59 +0000 (20:49 +0900)]
postgres_fdw: Fix costing of pre-sorted foreign paths with local stats.
Commit aa09cd242 modified estimate_path_cost_size() so that it reuses
cached costs of a basic foreign path for a given foreign-base/join
relation when costing pre-sorted foreign paths for that relation, but it
incorrectly re-computed retrieved_rows, an estimated number of rows
fetched from the remote side, which is needed for costing both the basic
and pre-sorted foreign paths. To fix, handle retrieved_rows the same way
as the cached costs: store in that relation's fpinfo the retrieved_rows
estimate computed for costing the basic foreign path, and reuse it when
costing the pre-sorted foreign paths. Also, reuse the rows/width
estimates stored in that relation's fpinfo when costing the pre-sorted
foreign paths, to make the code consistent.
In commit ffab494a4, to extend the costing mentioned above to the
foreign-grouping case, I made a change to add_foreign_grouping_paths() to
store in a given foreign-grouped relation's RelOptInfo the rows estimate
for that relation for reuse, but this patch makes that change unnecessary
since we already store the row estimate in that relation's fpinfo, which
this patch reuses when costing a foreign path for that relation with the
sortClause ordering; remove that change.
In passing, fix thinko in commit 7012b132d: in estimate_path_cost_size(),
the width estimate for a given foreign-grouped relation to be stored in
that relation's fpinfo was reset incorrectly when costing a basic foreign
path for that relation with local stats.
Apply the patch to HEAD only to avoid destabilizing existing plan choices.
Michael Paquier [Fri, 14 Jun 2019 00:00:36 +0000 (09:00 +0900)]
Use OpenSSL-specific ifdefs in sha2.h
In order to separate OpenSSL's SHA symbols, this header has been using
USE_SSL, which is equivalent to USE_OPENSSL. There is now only one SSL
implementation included in the tree, so this works fine, but when
adding a new SSL implementation this would run into failures.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/0DF29010-CE26-4F51-85A6-9C8ABF5536F9@yesql.se
Tom Lane [Thu, 13 Jun 2019 22:10:08 +0000 (18:10 -0400)]
Avoid combinatorial explosion in add_child_rel_equivalences().
If an EquivalenceClass member expression includes variables from
multiple appendrels, then instead of producing one substituted
expression per child relation as intended, we'd create additional
child expressions for combinations of children of different appendrels.
This happened because the child expressions generated while considering
the first appendrel were taken as sources during substitution of the
second appendrel, and so on. The extra expressions are useless, and are
harmless unless there are too many of them --- but if you have several
appendrels with a thousand or so members each, it gets bad fast.
To fix, consider only original (non-em_is_child) EC members as candidates
to be expanded. This requires the ability to substitute directly from a
top parent relation's Vars to those of an indirect descendant relation,
but we already have that in adjust_appendrel_attrs_multilevel().
Per bug #15847 from Feike Steenbergen. This is a longstanding misbehavior,
but it's only worth worrying about when there are more appendrel children
than we've historically considered wise to use. So I'm not going to take
the risk of back-patching this.
Alvaro Herrera [Thu, 13 Jun 2019 21:28:24 +0000 (17:28 -0400)]
Avoid spurious deadlocks when upgrading a tuple lock
When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes. The simplest example case seems to be:
T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;
At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping. Kaboom.
This can be avoided by having Z skip the heavyweight lock acquisition. As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.
Backpatch to 9.6. The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.
Tom Lane [Thu, 13 Jun 2019 14:53:17 +0000 (10:53 -0400)]
Mark ReplicationSlotCtl as PGDLLIMPORT.
Also MyReplicationSlot, in branches where it wasn't already.
This was discussed in the thread that resulted in c572599c6, but
for some reason nobody pulled the trigger. Now that we have another
request for the same thing, we should just do it.
Etsuro Fujita [Thu, 13 Jun 2019 08:59:09 +0000 (17:59 +0900)]
postgres_fdw: Account for triggers in non-direct remote UPDATE planning.
Previously, in postgresPlanForeignModify, we planned an UPDATE operation
on a foreign table so that we transmit only columns that were explicitly
targets of the UPDATE, so as to avoid unnecessary data transmission, but
if there were BEFORE ROW UPDATE triggers on the foreign table, those
triggers might change values for non-target columns, in which case we
would miss sending changed values for those columns. Prevent optimizing
away transmitting all columns if there are BEFORE ROW UPDATE triggers on
the foreign table.
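As a sketch of the scenario (the foreign table, trigger, and trigger
function are hypothetical):
CREATE TRIGGER remote_items_touch
    BEFORE UPDATE ON remote_items
    FOR EACH ROW EXECUTE PROCEDURE touch_modified_col();
UPDATE remote_items SET price = 10 WHERE id = 1;
-- the trigger may also change a column that is not a target of the UPDATE,
-- so all columns must now be transmitted to the remote server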
This is an oversight in commit 7cbe57c34 which added triggers on foreign
tables, so apply the patch all the way back to 9.4 where that came in.
Author: Shohei Mochizuki
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/201905270152.x4R1q3qi014550@toshiba.co.jp
Tom Lane [Thu, 13 Jun 2019 02:54:46 +0000 (22:54 -0400)]
Doc: improve description of allowed spellings for Boolean input.
datatype.sgml failed to explain that boolin() accepts any unique
prefix of the basic input strings. Indeed it was actively misleading
because it called out a few minimal prefixes without mentioning that
there were more valid inputs.
I also felt that it wasn't doing anybody any favors by conflating
SQL key words, valid Boolean input, and string literals containing
valid Boolean input. Rewrite in hopes of reducing the confusion.
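For instance, with the current boolin() behavior:
SELECT 't'::boolean, 'tr'::boolean, 'ye'::boolean, 'of'::boolean;
-- returns true, true, true, false
SELECT 'o'::boolean;
-- fails, since 'o' is an ambiguous prefix of both 'on' and 'off'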
Per bug #15836 from Yuming Wang, as diagnosed by David Johnston.
Back-patch to supported branches.
Tom Lane [Wed, 12 Jun 2019 23:42:38 +0000 (19:42 -0400)]
Fix incorrect printing of queries with duplicated join names.
Given a query in which multiple JOIN nodes used the same alias
(which'd necessarily be in different sub-SELECTs), ruleutils.c
would assign the JOIN nodes distinct aliases for clarity ...
but then it forgot to print the modified aliases when dumping
the JOIN nodes themselves. This results in a dump/reload hazard
for views, because the emitted query is flat-out incorrect:
Vars will be printed with table names that have no referent.
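A sketch of the kind of view involved (tables t1 through t4 are
hypothetical); both sub-SELECTs use the join alias j, so ruleutils.c
renames one and must print the renamed alias consistently:
CREATE VIEW v AS
SELECT s1.id AS id1, s2.id AS id2
FROM (SELECT j.id FROM (t1 JOIN t2 USING (id)) AS j) AS s1,
     (SELECT j.id FROM (t3 JOIN t4 USING (id)) AS j) AS s2;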
This has been wrong for a long time, so back-patch to all supported
branches.
Tom Lane [Wed, 12 Jun 2019 16:29:24 +0000 (12:29 -0400)]
Fix ALTER COLUMN TYPE failure with a partial exclusion constraint.
ATExecAlterColumnType failed to consider the possibility that an index
that needs to be rebuilt might be a child of a constraint that needs to be
rebuilt. We missed this so far because usually a constraint index doesn't
have a direct dependency on its table, just on the constraint object.
But if there's a WHERE clause, then dependency analysis of the WHERE
clause results in direct dependencies on the column(s) mentioned in WHERE.
This led to trying to drop and rebuild both the constraint and its
underlying index.
In v11/HEAD, we successfully drop both the index and the constraint,
and then try to rebuild both, and of course the second rebuild hits a
duplicate-index-name problem. Before v11, it fails with obscure messages
about a missing relation OID, due to trying to drop the index twice.
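A minimal sketch of the failing case (table and column names are
hypothetical):
CREATE TABLE t (
    c int,
    EXCLUDE USING btree (c WITH =) WHERE (c > 0)  -- partial exclusion constraint
);
ALTER TABLE t ALTER COLUMN c TYPE bigint;         -- this kind of command failed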
This is essentially the same kind of problem noted in commit 20bef2c31:
the possible dependency linkages are broader than what
ATExecAlterColumnType was designed for. It was probably OK when
written, but it's certainly been broken since the introduction of
partial exclusion constraints. Fix by adding an explicit check
for whether any of the indexes-to-be-rebuilt belong to any of the
constraints-to-be-rebuilt, and ignoring any that do.
In passing, fix a latent bug introduced by commit 8b08f7d48: in
get_constraint_index() we must "continue" not "break" when rejecting
a relation of a wrong relkind. This is harmless today because we don't
expect that code path to be taken anyway; but if there ever were any
relations to be ignored, the existing coding would have an extremely
undesirable dependency on the order of pg_depend entries.
Also adjust a couple of obsolete comments.
Per bug #15835 from Yaroslav Schekin. Back-patch to all supported
branches.
Michael Paquier [Wed, 12 Jun 2019 02:30:11 +0000 (11:30 +0900)]
Fix handling of COMMENT for domain constraints
For a non-superuser, changing a comment on a domain constraint was
leading to a cache lookup failure as the code tried to perform the
ownership lookup on the constraint OID itself, thinking that it was a
type, but this check needs to happen on the type the domain constraint
relies on. As the type a domain constraint relies on can be guessed
directly based on the constraint OID, first fetch its type OID and
perform the ownership check on it.
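The affected command looks like this (domain and constraint names are
hypothetical), run by a non-superuser owner of the domain:
CREATE DOMAIN positive_int AS int
    CONSTRAINT positive_int_check CHECK (VALUE > 0);
COMMENT ON CONSTRAINT positive_int_check ON DOMAIN positive_int
    IS 'must be strictly positive';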
This is broken since 7eca575, which has split the handling of comments
for table constraints and domain constraints, so back-patch down to
9.5.
Reported-by: Clemens Ladisch
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15833-808e11904835d26f@postgresql.org
Backpatch-through: 9.5
David Rowley [Tue, 11 Jun 2019 20:08:57 +0000 (08:08 +1200)]
doc: Add best practices section to partitioning docs
A few questionable partitioning designs have been cropping up lately
around the mailing lists. Generally, these cases have used too many
partitions, which has caused performance or OOM problems for the users.
Since we have very little else to guide users into good design, here we
add a new section to the partitioning documentation with some best
practice guidelines for good design.
Tom Lane [Tue, 11 Jun 2019 17:33:08 +0000 (13:33 -0400)]
Fix conversion of JSON strings to JSON output columns in json_to_record().
json_to_record(), when an output column is declared as type json or jsonb,
should emit the corresponding field of the input JSON object. But it got
this slightly wrong when the field is just a string literal: it failed to
escape the contents of the string. That typically resulted in syntax
errors if the string contained any double quotes or backslashes.
jsonb_to_record() handles such cases correctly, but I added corresponding
test cases for it too, to prevent future backsliding.
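A sketch of the previously-failing case: a JSON string field mapped to an
output column declared as json, where the string contains characters that
need escaping:
SELECT * FROM json_to_record('{"a": "has \"quotes\" and \\ backslash"}')
    AS x(a json);
-- before the fix, the unescaped string contents could yield syntax errors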
Improve the documentation, as it provided only a very hand-wavy
description of the conversion rules used by these functions.
Per bug report from Robert Vollmert. Back-patch to v10 where the
error was introduced (by commit cf35346e8).
Note that PG 9.4 - 9.6 also get this case wrong, but differently so:
they feed the de-escaped contents of the string literal to json[b]_in.
That behavior is less obviously wrong, so possibly it's being depended on
in the field, so I won't risk trying to make the older branches behave
like the newer ones.
Andres Freund [Tue, 11 Jun 2019 06:20:48 +0000 (23:20 -0700)]
Don't access catalogs to validate GUCs when not connected to a DB.
Vignesh found this bug in default_table_access_method's check hook,
but that code was just copied from older GUCs. Investigation by
Michael and me then found the bug in
further places.
When not connected to a database (e.g. in a walsender connection), we
cannot perform (most) GUC checks that need database access. Even when
only shared tables are needed, unless they're
nailed (c.f. RelationCacheInitializePhase2()), they cannot be accessed
without pg_class etc. being present.
Fix by extending the existing IsTransactionState() checks to also
check for MyDatabaseOid.
Reported-By: Vignesh C, Michael Paquier, Andres Freund
Author: Vignesh C, Andres Freund
Discussion: https://postgr.es/m/CALDaNm1KXK9gbZfY-p_peRFm_XrBh1OwQO1Kk6Gig0c0fVZ2uw%40mail.gmail.com
Backpatch: 9.4-
All of the other code thinks that the 8th parameter is the number of
blocks, but this declaration thinks that it's the ending block number.
Repair this inconsistency.
Alvaro Herrera [Mon, 10 Jun 2019 22:56:23 +0000 (18:56 -0400)]
Make pg_dump emit ATTACH PARTITION instead of PARTITION OF (reprise)
Using PARTITION OF can result in column ordering being changed from the
database being dumped, if the partition uses a column layout different
from the parent's. It's not pg_dump's job to editorialize on table
definitions, so this is not acceptable; back-patch all the way back to
pg10, where partitioned tables were introduced.
This change also ensures that partitions end up in the correct
tablespace, if different from the parent's; this is an oversight in
ca4103025dfe (in pg12 only). Partitioned indexes (in pg11) don't have
this problem, because they're already created as independent indexes
and attached to their parents afterwards.
This change also has the advantage that the partition is restorable from
the dump (as a standalone table) even if its parent table isn't
restored.
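Schematically, the emitted DDL changes from the first form below to the
second (table names, columns, and bounds are illustrative only):
-- previously emitted
CREATE TABLE measurement_y2019 PARTITION OF measurement
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
-- now emitted
CREATE TABLE measurement_y2019 (
    logdate date NOT NULL,
    value integer
);
ALTER TABLE ONLY measurement ATTACH PARTITION measurement_y2019
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');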
The original commits (3b23552ad8bb in branch master) failed to cover
subsidiary column elements correctly, such as NOT NULL and CHECK
constraints, as reported by Rushabh Lathia (initially as a failure
to restore serial columns). They were reverted. This recapitulation
commit fixes those problems.
Add some pg_dump tests to verify these things more exhaustively,
including constraints with legacy-inheritance tables, which were not
tested originally. In branches 10 and 11, add a local constraint to the
pg_dump test partition that was added by commit 2d7eeb1b1492 to master.
Fix operator naming in pg_trgm GUC option descriptions
Descriptions of pg_trgm GUC options had % replaced with %% as if they
were printf-like format strings, but that's not needed since they are
just plain strings. This commit fixes that. Backpatch to the last
supported version, since this error has been present from the beginning.
5871b884 introduced the pg_trgm.word_similarity_threshold GUC, but its
documentation contained incorrect indentation. This commit fixes that.
Backpatch for easier backpatching of other documentation fixes.
Discussion: https://postgr.es/m/4c735d30-ab59-fc0e-45d8-f90eb5ed3855%402ndquadrant.com
Author: Ian Barwick
Backpatch-through: 9.6
Noah Misch [Sun, 9 Jun 2019 21:00:36 +0000 (14:00 -0700)]
Reconcile nodes/*funcs.c with PostgreSQL 12 work.
One would have needed out-of-tree code to observe the defects. Remove
unreferenced fields instead of completing their support functions.
Since in-tree code can't reach _readIntoClause(), no catversion bump.
Noah Misch [Sat, 8 Jun 2019 17:12:26 +0000 (10:12 -0700)]
Update documentation about SIGKILL of postmaster.
Removing shared memory and semaphores in response to server start
failure often masks the real problem, a live process associated with the
data directory; see commit 5a907404b52753c4d6c6a7c21765aeaa42fd6a3b.
Since 9.6, it's rarely necessary to kill subprocesses manually. (When
it is necessary, that commit's HINT will say as much, in all supported
versions.)
Amit Kapila [Sat, 8 Jun 2019 02:46:38 +0000 (08:16 +0530)]
Fix assorted inconsistencies.
There were a number of issues in the recent commits, including typos,
mismatches between code and comments, and leftover function declarations.
Fix them.
Reported-by: Alexander Lakhin
Author: Alexander Lakhin, Amit Kapila and Amit Langote
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/ef0c0232-0c1d-3a35-63d4-0ebd06e31387@gmail.com
Michael Paquier [Sat, 8 Jun 2019 00:59:02 +0000 (09:59 +0900)]
Move be-gssapi-common.h into src/include/libpq/
The file has been introduced in src/backend/libpq/ as of b0b39f72, but
all backend-side headers of libpq are located in src/include/libpq/.
Note that the identification path on top of the file referred to
src/include/libpq/ from the start.
Author: Michael Paquier
Reviewed-by: Stephen Frost
Discussion: https://postgr.es/m/20190607043415.GE1736@paquier.xyz
Fix copy-pasto in freeing memory on error in vacuumlo.
It's harmless to call PQfreemem() with a NULL argument, so the only
consequence was that if allocating 'schema' failed, but allocating 'table'
or 'field' succeeded, we would leak a bit of memory. That's highly
unlikely to happen, so this is purely academic, but let's get it right.
Per bug #15838 from Timur Birsh. Backpatch back to 9.5, where the
PQfreemem() calls were introduced.
Alvaro Herrera [Fri, 7 Jun 2019 04:44:17 +0000 (00:44 -0400)]
Fix default_tablespace usage for partitioned tables
In commit 87259588d0ab I (Álvaro) tried to rationalize the determination
of tablespace to use for partitioned tables, but failed to handle the
default_tablespace case. Repair and add proper tests.
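The mishandled case looks roughly like this (assuming a tablespace ts1
exists; table names are hypothetical):
SET default_tablespace = 'ts1';
CREATE TABLE parted (a int) PARTITION BY RANGE (a);
CREATE TABLE parted_1 PARTITION OF parted FOR VALUES FROM (1) TO (100);
-- the tablespace determination for these relations previously went wrong
-- when default_tablespace was set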
Author: Amit Langote, Rushabh Lathia
Reported-by: Rushabh Lathia
Reviewed-by: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/CAGPqQf0cYjm1=rjxk_6gU0SjUS70=yFUAdCJLwWzh9bhNJnyVg@mail.gmail.com
Amit Kapila [Thu, 6 Jun 2019 23:53:52 +0000 (05:23 +0530)]
Fix inconsistency in comments atop ExecParallelEstimate.
When this code was initially introduced in commit d1b7c1ff, the structure
used was SharedPlanStateInstrumentation, but later when it got changed to
Instrumentation structure in commit b287df70, we forgot to update the
comment.
Previously, running pg_waldump with an invalid option (pg_waldump
--foo) would print the help output and exit successfully. This was
because it tried to process the option letter '?' as a normal option,
but that letter is used by getopt() to report an invalid option.
To fix, process help and version options separately, like we do
everywhere else. Also add a basic test suite for pg_waldump and run
the basic option handling tests, which would have caught this.
Fix confusion on different kinds of slots in IndexOnlyScans.
We used the same slot to store a tuple from the index, and to store a
tuple from the table. That's not OK. It worked with the heap, because
heapam_getnextslot() stores a HeapTuple to the slot, and doesn't care how
large the tts_values/nulls arrays are. But when I played with a toy table
AM implementation that used a virtual tuple, it caused memory overruns.
In passing, tidy up comments on the ioss_PscanLen fields.