Peter Eisentraut [Mon, 30 Apr 2018 17:49:20 +0000 (13:49 -0400)]
Don't do logical replication of TRUNCATE of zero tables
When due to publication configuration, a TRUNCATE change ends up with
zero tables to be published, don't send the message out, just skip it.
It's not wrong, but obviously useless overhead.
Peter Eisentraut [Mon, 30 Apr 2018 16:28:45 +0000 (12:28 -0400)]
Prevent infinity and NaN in jsonb/plperl transform
jsonb uses numeric internally, and numeric can store NaN, but that is
not allowed by jsonb on input, so we shouldn't store it. Also reject
infinity, to get a consistent error message. (Numeric input would reject
infinity anyway.)
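For illustration, a minimal sketch of the kind of check involved (the helper
name is made up; this is not the transform's actual code):

    #include <math.h>
    #include <stdbool.h>

    /* Hypothetical helper: jsonb numbers are backed by numeric, which does
     * not accept NaN on jsonb input and cannot represent infinity at all,
     * so refuse such Perl values up front. */
    static bool
    perl_number_is_jsonb_safe(double nv)
    {
        return !isnan(nv) && !isinf(nv);
    }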
Reported-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Tom Lane [Mon, 30 Apr 2018 15:16:21 +0000 (11:16 -0400)]
Dump full memory maps around failing Windows reattach code.
This morning's results from buildfarm member dory make it pretty
clear that something is getting mapped into the just-freed space,
but not what that something is. Replace my minimalistic probes
with a full dump of the process address space and module space,
based on Noah's work at
<20170403065106.GA2624300%40tornado.leadboat.com>
This is all (probably) to get reverted once we have fixed the
problem, but for now we need information.
Tom Lane [Mon, 30 Apr 2018 01:56:27 +0000 (21:56 -0400)]
Fix bogus list-iteration code in pg_regress.c, affecting ecpg tests only.
While looking at a recent buildfarm failure in the ecpg tests, I wondered
why the pg_regress output claimed the stderr part of the test failed, when
the regression diffs were clearly for the stdout part. Looking into it,
the reason is that pg_regress.c's logic for iterating over three parallel
lists is wrong, and has been wrong since it was written: it advances the
"tag" pointer at a different place in the loop than the other two pointers.
Fix that.
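As a sketch of the corrected shape (with illustrative types, not
pg_regress.c's actual ones), all three pointers should advance at the same
point in the loop:

    #include <stddef.h>

    typedef struct strlist
    {
        const char     *str;
        struct strlist *next;
    } strlist;

    /* Walk three parallel lists, advancing every pointer in the loop header
     * rather than advancing one of them somewhere else in the loop body. */
    static void
    walk_parallel_lists(strlist *expected, strlist *result, strlist *tag)
    {
        for (; expected != NULL;
             expected = expected->next,
             result = result->next,
             tag = tag ? tag->next : NULL)
        {
            /* compare one expected/result pair, labelling any diff with its tag */
        }
    }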
Tom Lane [Mon, 30 Apr 2018 00:41:19 +0000 (20:41 -0400)]
Get still more info about Windows can't-reattach-to-shared-memory errors.
After some thought about the info captured so far, it seems possible
that MapViewOfFileEx is itself causing some DLL to get loaded into
the space just freed by VirtualFree. The previous commit here didn't
capture enough info to really prove the case for that, so let's add
one more VirtualQuery in between those steps. Also, be sure to
capture the post-Map state before we emit any log entries, just in
case elog() is invoking some code not previously loaded.
Tom Lane [Sun, 29 Apr 2018 22:15:16 +0000 (18:15 -0400)]
Avoid wrong results for power() with NaN input on more platforms.
Buildfarm results show that the modern POSIX rule that 1 ^ NaN = 1 is not
honored on *BSD until relatively recently, and really old platforms don't
believe that NaN ^ 0 = 1 either. (This is unsurprising, perhaps, since
SUSv2 doesn't require either behavior.) In hopes of getting to platform
independent behavior, let's deal with all the NaN-input cases explicitly
in dpow().
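A minimal sketch of the explicit handling described above (not dpow()'s
actual code, which also has to deal with errno and overflow):

    #include <math.h>

    static double
    pow_with_nan_inputs(double x, double y)
    {
        if (isnan(x))
            return (y == 0.0) ? 1.0 : x;    /* NaN ^ 0 = 1, else NaN */
        if (isnan(y))
            return (x == 1.0) ? 1.0 : y;    /* 1 ^ NaN = 1, else NaN */
        return pow(x, y);
    }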
Note that numeric_power() doesn't know either of these special cases.
But since that behavior is platform-independent, I think it should be
addressed separately, and probably not back-patched.
Tom Lane [Sun, 29 Apr 2018 19:50:08 +0000 (15:50 -0400)]
Update time zone data files to tzdata release 2018d.
DST law changes in Palestine and Antarctica (Casey Station). Historical
corrections for Portugal and its colonies, as well as Enderbury, Jamaica,
Turks & Caicos Islands, and Uruguay.
Tom Lane [Sun, 29 Apr 2018 19:21:44 +0000 (15:21 -0400)]
Avoid wrong results for power() with NaN input on some platforms.
Per spec, the result of power() should be NaN if either input is NaN.
It appears that on some versions of Windows, the libc function does
return NaN, but it also sets errno = EDOM, confusing our code that
attempts to work around shortcomings of other platforms. Hence, add
guard tests to avoid substituting a wrong result for the right one.
It's been like this for a long time (and the odd behavior only appears
in older MSVC releases, too) so back-patch to all supported branches.
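A sketch of the guard idea, under the assumption (as described above) that
the platform may report EDOM even though its NaN result is already correct:

    #include <errno.h>
    #include <math.h>

    static double
    guarded_pow(double x, double y)
    {
        double      result;

        errno = 0;
        result = pow(x, y);
        if (errno == EDOM && isnan(result) && (isnan(x) || isnan(y)))
            return result;      /* NaN is the right answer; don't "fix" it */
        /* ... otherwise fall through to the existing platform workarounds ... */
        return result;
    }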
Tom Lane [Sun, 29 Apr 2018 17:26:26 +0000 (13:26 -0400)]
Cosmetic improvement: use BKI_DEFAULT and BKI_LOOKUP in pg_language.
The point of this is not really to remove redundancy in pg_language.dat;
with only three entries, it's hardly worth it. Rather, it is to get
to a point where there are exactly zero hard-coded numeric pg_proc OID
references in the catalog .dat files. The lanvalidator column was the
only remaining location of such references, and it seems like a good
thing for future-proofing reasons to make it not be a special case.
There are still a few places in the .dat files with numeric OID references
to other catalogs, but after review I don't see any that seem worth
changing at present. In each case there are just too few entries to make
it worth the trouble to create lookup infrastructure.
This doesn't change the emitted postgres.bki file, so no catversion bump.
Tom Lane [Sat, 28 Apr 2018 21:45:02 +0000 (17:45 -0400)]
In AtEOXact_Files, complain if any files remain unclosed at commit.
This change makes this module act more like most of our other low-level
resource management modules. It's a caller error if something is not
explicitly closed by the end of a successful transaction, so issue
a WARNING about it. This would not actually have caught the file leak
bug fixed in commit 231bcd080, because that was in a transaction-abort
path; but it still seems like a good, and pretty cheap, cross-check.
Tom Lane [Sat, 28 Apr 2018 20:09:03 +0000 (16:09 -0400)]
Tweak reformat_dat_file.pl to make it more easily hand-invokable.
Use the same code we already applied in duplicate_oids and unused_oids
to let this script find Catalog.pm without help. This removes the need
to supply a -I switch in most cases.
Also, mark the script executable, again to follow the precedent of
duplicate_oids and unused_oids. Now you can just do
"./reformat_dat_file.pl pg_proc.dat"
if you want to reformat only one or a few .dat files rather than all.
It'd be possible to remove the -I switches in the Makefile's convenience
targets, but I chose to leave them: they don't hurt anything, and it's
possible that in weird VPATH situations they might be of value.
Tom Lane [Sat, 28 Apr 2018 19:27:16 +0000 (15:27 -0400)]
Clarify handling of special-case values in bootstrap catalog data.
I (tgl) originally coded the special case for pg_proc.pronargs as
though it were a kind of default value. It seems better though to
treat computable columns as an independent concern: this makes the
code clearer, and probably a bit faster too since we needn't do
work inside the per-column loop.
Improve related comments, as well, in the expectation that there
might be more cases like this in future.
John Naylor, some additional comment-hacking by me
Tom Lane [Sat, 28 Apr 2018 18:45:39 +0000 (14:45 -0400)]
Un-break contrib install with llvm.
Apparently $(foreach ... $(call install_llvm_module,...)) doesn't work
too well without a blank line ending the install_llvm_module macro.
The previous coding hackishly dodged this problem with some parens,
but that's not really a good solution because make misunderstands
where the command boundaries are that way.
Tom Lane [Sat, 28 Apr 2018 18:02:57 +0000 (14:02 -0400)]
Minor cleanups for install_llvm_module/uninstall_llvm_module Make macros.
Don't put comments inside the macros, per complaint from Michael Paquier.
Quote target directory path with single quotes, not double; that seems
to be our project standard. Not quoting it at all definitely isn't
per standard.
Tom Lane [Sat, 28 Apr 2018 15:46:15 +0000 (11:46 -0400)]
Assorted minor doc/comment fixes.
Identify pg_replication_origin as a shared catalog in catalogs.sgml,
using the same boilerplate wording used for most other shared catalogs
(and tweak another place where someone had randomly deviated from
that boilerplate).
Make an example in mmgr/README more consistent with surrounding text.
Update an obsolete cross-reference in a comment in storage/block.h.
Tom Lane [Sat, 28 Apr 2018 01:59:58 +0000 (21:59 -0400)]
Try to get some info about Windows can't-reattach-to-shared-memory errors.
Add some debug printouts focused on the idea that MapViewOfFileEx might
be rounding its virtual memory allocation up more than we expect (and,
in particular, more than VirtualAllocEx does).
Once we've seen what this reports in one of the failures on buildfarm
members dory or jacana, we might revert this ... or perhaps just
decrease the log level.
Tom Lane [Fri, 27 Apr 2018 17:42:03 +0000 (13:42 -0400)]
Adjust hints and docs to suggest CREATE EXTENSION not CREATE LANGUAGE.
The core PLs have been extension-ified for seven years now, and we can
reasonably hope that all out-of-core PLs have been too. So adjust a few
places that were still recommending CREATE LANGUAGE as the user-level
way to install a PL.
Remove outdated comment on how to set logtape's read buffer size.
Commit b75f467b6e removed the LogicalTapeAssignReadBufferSize() function,
but forgot to update this comment. The read buffer size is an argument to
LogicalTapeRewindForRead() now. Doesn't seem worth going into the details
in the file header comment, so remove the outdated sentence altogether.
Tom Lane [Thu, 26 Apr 2018 18:45:04 +0000 (14:45 -0400)]
Preliminary work for pgindent run.
Update typedefs.list from current buildfarm results. Adjust pgindent's
typedef blacklist to block some more unfortunate typedef names that have
snuck in since last time. Manually tweak a few places where I didn't
like the initial results of pgindent'ing.
Tom Lane [Thu, 26 Apr 2018 17:22:27 +0000 (13:22 -0400)]
Avoid parsing catalog data twice during BKI file construction.
In the wake of commit 5602265f7, we were doing duplicate-OID detection
quite inefficiently, by invoking duplicate_oids which does all the same
parsing of catalog headers and .dat files as genbki.pl does. That adds
under half a second on modern machines, but quite a bit more on slow
buildfarm critters, so it seems worth avoiding. Let's just extend
genbki.pl a little so it can also detect duplicate OIDs, and remove
the duplicate_oids call from the build process.
(This also means that duplicate OID detection will happen during
Windows builds, which AFAICS it didn't before.)
This makes the use-case for duplicate_oids a bit dubious, but it's
possible that people will still want to run that check without doing
a whole build run, so let's keep that script.
In passing, move down genbki.pl's creation of its temp output files
so that it doesn't happen until after we've done parsing and validation
of the input. This avoids leaving a lot of clutter around after a
failure.
Tom Lane [Thu, 26 Apr 2018 15:19:52 +0000 (11:19 -0400)]
Fix duplicate_oids and unused_oids so user needn't cd to catalog dir.
Previously, you had to cd into src/include/catalog before running either
of these scripts. That's a bit tedious, so let's make the scripts do it
for you.
In passing, improve the initial comments in both scripts. Also remove
unused_oids' code to complain about duplicate oids. That was added in
yesterday's commit 5602265f7, but on second thought we shouldn't be
randomly redefining the script's behavior that way.
The predecessor test boiled down to "PQserverVersion(NULL) >= 100000",
which is always false. No release includes that, so it could not have
reintroduced CVE-2018-1058. Back-patch to 9.4, like the addition of the
predecessor in commit 8d2814f274def85f39fbe997d454b01628cb5667.
Tom Lane [Wed, 25 Apr 2018 20:01:47 +0000 (16:01 -0400)]
Convert unused_oids and duplicate_oids to use Catalog.pm infrastructure.
unused_oids was previously a shell script, which of course didn't work at
all on Windows. Also, commit 372728b0d introduced some other portability
problems, as complained of by Stas Kelvich. We can improve matters by
converting it to Perl.
While we're at it, let's future-proof both this script and duplicate_oids
to use Catalog.pm rather than having a bunch of ad-hoc logic for parsing
catalog headers and .dat files. These scripts are thereby a bit slower,
which doesn't seem like a problem for typical manual use. It is a little
annoying for buildfarm purposes, but we should be able to fix that case
by having genbki.pl make the check instead of parsing the headers twice.
(That's not done in this commit, though.)
Tom Lane [Wed, 25 Apr 2018 19:19:44 +0000 (15:19 -0400)]
Make Catalog.pm's representation of toast and index decls more abstract.
Instead of immediately constructing the string we need to emit into the
.BKI file, preserve the items we extracted from the header file in a hash.
This eases using the info for other purposes.
Robert Haas [Wed, 25 Apr 2018 19:14:14 +0000 (15:14 -0400)]
Prevent generation of bogus subquery scan paths.
Commit 0927d2f46ddd4cf7d6bf2cc84b3be923e0aedc52 didn't check that
consider_parallel was set for the target relation or account for
the possibility that required_outer might be non-empty.
To prevent future bugs of this ilk, add some assertions to
add_partial_path and do a bit of future-proofing of the code
recently added to recurse_set_operations.
Report by Andreas Seltenreich. Patch by Jeevan Chalke. Review
by Amit Kapila and by me.
Add missing and dangling downlink checks to amcheck
When bt_index_parent_check() is called with the heapallindexed option,
allocate a second Bloom filter to fingerprint block numbers that appear
in the downlinks of internal pages. Use Bloom filter probes when
walking the B-Tree to detect missing downlinks. This can detect subtle
problems with page deletion/VACUUM, such as corruption caused by the bug
just fixed in commit 6db4b499.
The downlink Bloom filter is bound in size by work_mem. Its optimal
size is typically far smaller than that of the regular heapallindexed
Bloom filter, especially when the index has high fan-out.
Author: Peter Geoghegan
Reviewer: Teodor Sigaev
Discussion: https://postgr.es/m/CAH2-WznUzY4fWTjm1tBB3JpVz8cCfz7k_qVp5BhuPyhivmWJFg@mail.gmail.com
Initialize ExprStates once in run-time partition pruning
Instead of doing ExecInitExpr every time a Param needs to be evaluated
in run-time partition pruning, do it once during run-time pruning
set-up and cache the exprstate in PartitionPruneContext, saving a lot of
work.
Author: David Rowley
Reviewed-by: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f8-x+q-90QAPDu_okhQBV4DPEtPz8CJ=m0940GyT4DA4w@mail.gmail.com
This controls both plan-time and execution-time new-style partition
pruning. While finer-grain control is possible (maybe using an enum GUC
instead of boolean), there doesn't seem to be much need for that.
This new parameter controls partition pruning for all queries:
trivially, SELECT queries that affect partitioned tables are naturally
under its control since they are using the new technology. However,
while UPDATE/DELETE queries do not use the new code, we make the new GUC
control their behavior also (stealing control from
constraint_exclusion), because it is more natural, and it leads to a
more natural transition to the future in which those queries will also
use the new pruning code.
Constraint exclusion still controls pruning for regular inheritance
situations (those not involving partitioned tables).
Author: David Rowley
Review: Amit Langote, Ashutosh Bapat, Justin Pryzby, David G. Johnston
Discussion: https://postgr.es/m/CAKJS1f_0HwsxJG9m+nzU+CizxSdGtfe6iF_ykPYBiYft302DCw@mail.gmail.com
Tom Lane [Mon, 23 Apr 2018 19:29:11 +0000 (15:29 -0400)]
Fix handling of partition bounds for boolean partitioning columns.
Previously, you could partition by a boolean column as long as you
spelled the bound values as string literals, for instance FOR VALUES
IN ('t'). The trouble with this is that ruleutils.c printed that as
FOR VALUES IN (TRUE), which is reasonable syntax but wasn't accepted by
the grammar. That results in dump-and-reload failures for such cases.
Apply a minimal fix that just causes TRUE and FALSE to be converted to
strings 'true' and 'false'. This is pretty grotty, but it's too late for
a more principled fix in v11 (to say nothing of v10). We should revisit
the whole issue of how partition bound values are parsed for v12.
Peter Eisentraut [Mon, 23 Apr 2018 15:44:31 +0000 (11:44 -0400)]
Make Emacs settings match perltidy configuration
Set Emacs's perl-continued-statement-offset to match perltidy's
--continuation-indentation, which is 2 (not overridden in PostgreSQL's
profile) rather than the 4 that Emacs uses by default.
Make bms_prev_member work correctly with a 64 bit bitmapword
5c067521 erroneously coded bms_prev_member on the assumption that a bitmapword
always holds 32 bits, and started its search on what it thought were the
highest 8 bits of the word. That was not the case when bitmapwords are 64
bits.
In passing, add a test to exercise this function a little. Previously there
was no coverage at all.
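A standalone sketch of the width-independent idea (illustrative; not
bitmapset.c's actual code): start the byte-wise search from the most
significant byte of whatever width the word actually has.

    #include <stdint.h>

    typedef uint64_t bitmapword;
    #define BITS_PER_BITMAPWORD ((int) (sizeof(bitmapword) * 8))

    /* Return the index of the highest non-zero byte in the word, or -1. */
    static int
    highest_nonzero_byte(bitmapword w)
    {
        int         shift = BITS_PER_BITMAPWORD - 8;

        for (; shift >= 0; shift -= 8)
        {
            if ((w >> shift) & 0xFF)
                return shift / 8;
        }
        return -1;
    }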
Fix wrong validation of top-parent pointer during page deletion in Btree.
Since we began using the t_tid of inner tuples and page high keys to store
the number of attributes of a tuple, validating a tuple's ItemPointer with
ItemPointerIsValid is incorrect; only the block number of the ItemPointer
should be validated. Missing this caused an incorrect page deletion; fix
that, and add a test.
BTW, current contrib/amcheck doesn't fail on an index corrupted this way.
Also introduce BTreeTupleGetTopParent/BTreeTupleSetTopParent macros to improve
code readability and to avoid possible confusion with the page high key: the
high key is used to store the top-parent link for the branch to remove.
Bug found by Michael Paquier; the bug doesn't exist in previous versions
because t_tid was set to P_HIKEY.
Author: Teodor Sigaev
Reviewer: Peter Geoghegan
Discussion: https://www.postgresql.org/message-id/flat/20180419052436.GA16000%40paquier.xyz
Tom Lane [Fri, 20 Apr 2018 23:54:58 +0000 (19:54 -0400)]
Test conversion of NaN between float4 and float8.
Results from buildfarm member opossum suggest that this doesn't work
quite right on that platform. We've seen issues with NaN support on
MIPS/NetBSD before ... allegedly they fixed this stuff back in 2010,
but maybe only for small values of "fixed".
If, in fact, opossum fails this test then I plan to revert it;
it's mainly for diagnostic purposes rather than something we'd
necessarily keep long-term. I think that the failures in window.sql
could be worked around with some code duplication, but I want to
verify my theory about the cause first.
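The property being exercised boils down to something like this (a standalone
sketch, not the SQL test itself):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double      d = NAN;        /* a float8-style NaN */
        float       f = (float) d;  /* narrow it to float4 */

        /* On a conforming platform the narrowed value must still be NaN. */
        printf("%s\n", isnan(f) ? "NaN preserved" : "NaN lost");
        return 0;
    }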
Tom Lane [Fri, 20 Apr 2018 21:27:56 +0000 (17:27 -0400)]
Don't run fast_default regression test in parallel with other tests.
Since it sets up an event trigger that would fire on DDL done by any
concurrent test script, the original scheduling is just an invitation
to irreproducible test failures. (The fact that we found a bug through
exactly such irreproducible test failures doesn't really change the
calculus here: this script is a hazard to anything that runs in parallel
with it today or might be added to that parallel group in future. No,
I don't believe that the trigger is protecting itself sufficiently to
avoid all possible trouble.)
Tom Lane [Fri, 20 Apr 2018 21:15:31 +0000 (17:15 -0400)]
Fix race conditions when an event trigger is added concurrently with DDL.
EventTriggerTableRewrite crashed if there were table_rewrite triggers
present, but there had not been when the calling command started.
EventTriggerDDLCommandEnd called ddl_command_end triggers if present,
even if there had been no such triggers when the calling command started,
which would lead to a failure in pg_event_trigger_ddl_commands.
In both cases, fix by doing nothing; it's better to wait till the next
command when things will be properly initialized.
In passing, remove an elog(DEBUG1) call that might have seemed interesting
four years ago but surely isn't today.
We found this because of intermittent failures in the buildfarm. Thanks
to Alvaro Herrera and Andrew Gierth for analysis.
Back-patch to 9.5; some of this code exists before that, but the specific
hazards we need to guard against don't.
Tom Lane [Fri, 20 Apr 2018 19:19:16 +0000 (15:19 -0400)]
Change more places to be less trusting of RestrictInfo.is_pushed_down.
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids. Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.
Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.
In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.
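The test the macro encapsulates is roughly of this shape (a sketch of the
intent; the exact definition lives in the planner headers):

    /* A clause behaves as pushed down relative to a given join if its flag
     * says so, or if it requires relations outside that join's relids. */
    #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \
        ((rinfo)->is_pushed_down || \
         !bms_is_subset((rinfo)->required_relids, (joinrelids)))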
Like the previous patch, back-patch to all supported branches.
Tom Lane [Thu, 19 Apr 2018 21:14:09 +0000 (17:14 -0400)]
Improve consistency of comments in system catalog headers.
Use the term "system catalog" rather than "system relation" in assorted
places where it's clearly referring to a table rather than, say, an
index. Use more natural word order in the header boilerplate, improve
some of the one-liner catalog descriptions, and fix assorted random
deviations from the normal boilerplate. All purely neatnik-ism, but
why not.
Tom Lane [Thu, 19 Apr 2018 19:49:12 +0000 (15:49 -0400)]
Fix incorrect handling of join clauses pushed into parameterized paths.
In some cases a clause attached to an outer join can be pushed down into
the outer join's RHS even though the clause is not degenerate --- this
can happen if we choose to make a parameterized path for the RHS. If
the clause ends up attached to a lower outer join, we'd misclassify it
as being a "join filter" not a plain "filter" condition at that node,
leading to wrong query results.
To fix, teach extract_actual_join_clauses to examine each join clause's
required_relids, not just its is_pushed_down flag. (The latter now
seems vestigial, or at least in need of rethinking, but we won't do
anything so invasive as redefining it in a bug-fix patch.)
This has been wrong since we introduced parameterized paths in 9.2,
though it's evidently hard to hit given the lack of previous reports.
The test case used here involves a lateral function call, and I think
that a lateral reference may be required to get the planner to select
a broken plan; though I wouldn't swear to that. In any case, even if
LATERAL is needed to trigger the bug, it still affects all supported
branches, so back-patch to all.
Per report from Andreas Karlsson. Thanks to Andrew Gierth for
preliminary investigation.
Remove quick path in ExecInitPartitionInfo for equal tupdescs
I added this "optimization" on top of Amit Langote's 158b7bc6d779, but
the quick path is never taken because the partition uses a different
pg_type oid than its parent table (causing equalTupleDescs to return
false). Changing that requires more analysis and is considered too
dangerous at this point in the cycle, so revert it.
Rework code to determine partition pruning procedure
Amit Langote reported that partition pruning was unable to work with
arrays, enums, etc., which led him to research the appropriate way to
match query clauses to partition keys: instead of searching for an exact
match of the expression's type, it is better to rely on the fact that
the expression qual has already been resolved to a specific operator,
and that the partition key is linked to a specific operator family.
With that info, it's possible to figure out the strategy and comparison
function to use for the pruning clause in a manner that works reliably
for pseudo-types also.
Include new test cases that demonstrate pruning where pseudotypes are
involved.
Author: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/2b02f1e9-9812-9c41-972d-517bdc0f815d@lab.ntt.co.jp
The buffer was 100 bytes long, which is barely sufficient when the
version string gets longer (such as by configure --with-extra-version).
Set it to MAXPGPATH.
Fix datatype for number of heap tuples during last cleanup
It appears that new fields introduced in 857f9c36 have inconsistent datatypes:
BTMetaPageData.btm_last_cleanup_num_heap_tuples is of float4 type,
while xl_btree_metadata.last_cleanup_num_heap_tuples is of double type.
IndexVacuumInfo.num_heap_tuples, which is the source of values for
both of the former fields, is of double type. So, make both those fields in
BTMetaPageData and xl_btree_metadata use float8 type in order to match the
precision of the source. (Plain double can't be used, because we always
use types with explicit width in WAL.)
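To illustrate why the width matters (a standalone sketch, not code from the
patch): narrowing a double heap-tuple count into a float4 field silently
loses precision once the count exceeds 2^24.

    #include <stdio.h>

    int
    main(void)
    {
        double      num_heap_tuples = 16777217.0;  /* 2^24 + 1 */
        float       narrowed = (float) num_heap_tuples;

        /* prints 16777217.0 for the double, 16777216.0 for the float4 */
        printf("double: %.1f  float4: %.1f\n",
               num_heap_tuples, (double) narrowed);
        return 0;
    }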
The patch introduces an on-disk format incompatibility relative to commit
857f9c36, but that version was never released, so just bump the catalog
version to avoid possible confusion.
Remove an obsolete reference to the 'afteritem' argument, which was
removed by commit bc292937. Add a comment that clarifies how
_bt_insertonpg() indirectly handles the insertion of high key items.
Handle XLOG_BTREE_META_CLEANUP in btree_desc() and btree_identify()
The new WAL record XLOG_BTREE_META_CLEANUP, introduced in 857f9c36, had no
handling in btree_desc() and btree_identify(). This patch implements the
corresponding handling.
Adjust INCLUDE index truncation comments and code.
Add several assertions that ensure that we're dealing with a pivot tuple
without non-key attributes where that's expected. Also, remove the
assertion within _bt_isequal(), restoring the v10 function signature. A
similar check will be performed for the page highkey within
_bt_moveright() in most cases. Also avoid dropping all objects within
regression tests, to increase pg_dump test coverage for INCLUDE indexes.
Rather than using infrastructure that's generally intended to be used
with reference counted heap tuple descriptors during truncation, use the
same function that was introduced to store flat TupleDescs in shared
memory (we use a temp palloc'd buffer). This isn't strictly necessary,
but seems more future-proof than the old approach. It also lets us
avoid including rel.h within indextuple.c, which was arguably a
modularity violation. Also, we now call index_deform_tuple() with the
truncated TupleDesc, not the source TupleDesc, since that's more robust,
and saves a few cycles.
In passing, fix a memory leak by pfree'ing truncated pivot tuple memory
during CREATE INDEX. Also pfree during a page split, just to be
consistent.
Refactor _bt_check_natts() to be more readable.
Author: Peter Geoghegan, with some editorialization by me
Reviewed by: Alexander Korotkov, Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CAH2-Wz%3DkCWuXeMrBCopC-tFs3FbiVxQNjjgNKdG2sHxZ5k2y3w%40mail.gmail.com
Tom Lane [Wed, 18 Apr 2018 22:17:02 +0000 (18:17 -0400)]
Improve error detection/reporting in Catalog.pm and genbki.pl.
Clean up error messages relating to mistakes in .dat files: make sure they
provide the .dat file name and line number, not the place in the Perl
script that's reporting the problem. Adopt more uniform message phrasing,
too.
Make genbki.pl spit up on unrecognized field names in the input hashes.
Previously, it just silently ignored such fields, which could make a
misspelled field name into a very hard-to-decipher problem. (This is in
genbki.pl, *not* Catalog.pm, because we don't want reformat_dat_file.pl to
complain about unrecognized fields. We'd rather it silently dropped them,
to facilitate removing unwanted fields after a reorganization.)
Tom Lane [Wed, 18 Apr 2018 16:07:37 +0000 (12:07 -0400)]
Better fix for deadlock hazard in CREATE INDEX CONCURRENTLY.
Commit 54eff5311 did not account for the possibility that we'd have
a transaction snapshot due to default_transaction_isolation being
set high enough to require one. The transaction snapshot is enough
to hold back our advertised xmin and thus risk deadlock anyway.
The only way to get rid of that snap is to start a new transaction,
so let's do that instead. Also throw in an assert checking that we
really have gotten to a state where no xmin is being advertised.
Tom Lane [Tue, 17 Apr 2018 23:53:50 +0000 (19:53 -0400)]
Rationalize handling of single and double quotes in bootstrap data.
Change things around so that proper quoting of values interpolated into
the BKI data by initdb is the responsibility of initdb, not something
we half-heartedly handle by putting double quotes into the raw BKI data.
(Note: experimentation shows that it still doesn't work to put a double
quote into the initial superuser username, but that's the fault of
inadequate quoting while interpolating the name into SQL scripts;
the BKI aspect of it works fine now.)
Having done that, we can remove the special-case handling of values
that look like "something" from genbki.pl, and instead teach it to
escape double --- and single --- quotes properly. This removes the
nowhere-documented need to treat those specially in the BKI source
data; whatever you write will be passed through unchanged into the
inserted data value, modulo Perl's rules about single-quoted strings.
Add documentation explaining the (pre-existing) handling of backslashes
in the BKI data.
Tom Lane [Tue, 17 Apr 2018 22:29:11 +0000 (18:29 -0400)]
Rationalize handling of array type names in bootstrap data.
Formerly, Catalog.pm turned a C array type declaration in the catalog
header files into a SQL type, e.g., 'foo[]'. Along the way, genbki.pl
turned this into '_foo' for the purpose of type lookups, but wrote 'foo[]'
to postgres.bki. During bootstrap, bootscanner.l had to have a special
case rule to tokenize this, and then MapArrayTypeName() would turn 'foo[]'
into '_foo' one more time.
This seems unnecessarily complicated, especially since nobody cares that
much about the readability of postgres.bki. Instead, make Catalog.pm
convert the C declaration into '_foo' to start with, and preserve that
representation of the type name throughout bootstrap data processing.
Then rip out the special-case code in bootscanner.l and bootstrap.c.
This changes postgres.bki to the extent that array fields are now
declared like
proconfig = _text ,
rather than
proconfig = text[] ,
No documentation update, since the SGML docs didn't mention any of this
in the first place, and it's all pretty transparent to writers of
catalog header files anyway.
Tom Lane [Tue, 17 Apr 2018 22:10:16 +0000 (18:10 -0400)]
Simplify genbki.pl's data quoting rules.
During the bootstrap data format conversion, it seemed important for
verifiability's sake that the generated postgres.bki file stayed the same
as before. That resulted in adding a bunch of ad-hoc rules about when to
quote emitted data values, to match previous manual decisions that had
often quoted values unnecessarily. Now that the conversion is complete,
it seems fine to remove all those ad-hoc rules. The net actual effect on
the current contents of postgres.bki is that some fields that had been
quoted despite containing only digits or only "-" lose their unnecessary
quotes.
Also, now that genbki.pl will always quote values containing a backslash,
there's no need for bootscanner.l to allow unquoted octal escapes;
so simplify its production for "id" by removing that possibility.
Fix confusion on the padding of GIDs in commit and abort records.
Review of commit 1eb6d652: It's pointless to add padding to the GID fields,
when the code that follows assumes that there is no alignment, and uses
memcpy(). Remove the pointless padding.
Update comments to note the new fields in the WAL records.
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/33b787bf-dc20-1161-54e9-3f3b607bf59d%40iki.fi
It turns out that after runtime partition pruning, Append's
first_partial_plan does not accurately represent partial plans to run,
if any of those got pruned. This could limit participation of workers
in some partial subplans, if other subplans got pruned. Fix it by
keeping an index of the first valid partial subplan in the state node,
determined at execnode Init time.
Author: David Rowley, with cosmetic changes by me.
Discussion: https://postgr.es/m/CAKJS1f8o2Yd=rOP=Et3A0FWgF+gSAOkFSU6eNhnGzTPV7nN8sQ@mail.gmail.com
Improve coverage of nodeAppend runtime partition prune
The coverage report indicated that mark_invalid_subplans_as_finished() and
nearby code were not getting exercised by any tests. Add a new test which
has execution-time Params rather than only external Params to fix this.
In passing, David noticed that ab_q6 tests were not actually required to
have a generic plan. The tests were testing exec Params not external
Params, so there was no need for the PREPARE. Remove the PREPARE,
making these plain queries. (The new queries are called from
explain_parallel_append, which may be unnecessary since they don't
actually have a Parallel Append node, just an Append. But it doesn't
seem to hurt anything, either.)
Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f--hopb6JBSDY4wiXTS3ZcDp-wparXjTQ1nzNdBa04Fog@mail.gmail.com
Restore partition_prune's usage of parallel workers
This reverts commit 4d0f6d3f207d ("Attempt to stabilize partition_prune
test output (2)"), and attempts to stabilize the test by using string
replacement to hide any loop count difference in parallel nodes.
Tom Lane [Mon, 16 Apr 2018 20:06:47 +0000 (16:06 -0400)]
Fix broken collation-aware searches in SP-GiST text opclass.
spg_text_leaf_consistent() supposed that it should compare only
Min(querylen, entrylen) bytes of the two strings, and then deal with
any excess bytes in one string or the other by assuming the longer
string is greater if the prefixes are equal. Quite aside from the
fact that that's just wrong in some locales (e.g., 'ch' is not less
than 'd' in cs_CZ), it also risked passing incomplete multibyte
characters to strcoll(), with ensuing bad results.
Instead, just pass the full strings to varstr_cmp, and let it decide
what to do about unequal-length strings.
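The underlying point, in miniature (a standalone sketch; the server-side
code goes through varstr_cmp rather than calling strcoll() directly): with
multi-character collation elements such as cs_CZ's 'ch', only a full-string,
locale-aware comparison gives the right answer, so a prefix comparison plus
a length tiebreak is not a substitute.

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        /* Assumes the cs_CZ.UTF-8 locale is installed. */
        if (setlocale(LC_COLLATE, "cs_CZ.UTF-8") == NULL)
            return 1;

        /* In Czech, "ch" collates after "h", so strcoll() reports "ch" > "d"
         * even though a byte-wise comparison says 'c' < 'd'. */
        printf("strcoll: %d, strcmp: %d\n",
               strcoll("ch", "d"), strcmp("ch", "d"));
        return 0;
    }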
Fortunately, this error doesn't imply any index corruption, it's just
that searches might return the wrong set of entries.
Per report from Emre Hasegeli, though this is not his patch.
Thanks to Peter Geoghegan for review and discussion.
This code was born broken, so back-patch to all supported branches.
In HEAD, I failed to resist the temptation to do a bit of cosmetic
cleanup/pgindent'ing on 710d90da1, too.
Ignore whole-rows in INSERT/CONFLICT with partitioned tables
We had an Assert() preventing whole-row expressions from being used in
the SET clause of INSERT ON CONFLICT, but it seems unnecessary, given
some tests, so remove it. Add a new test to exercise the case.
Still in ExecInitPartitionInfo, we used map_partition_varattnos (which
constructs an attribute map, then calls map_variable_attnos) using
the same two relations many times in different expressions and with
different parameters. Constructing the map over and over is a waste.
To avoid this repeated work, construct the map once, and use
map_variable_attnos() directly instead.
Author: Amit Langote, per comments by me (Álvaro)
Discussion: https://postgr.es/m/20180326142016.m4st5e34chrzrknk@alvherre.pgsql
Tom Lane [Sun, 15 Apr 2018 17:02:11 +0000 (13:02 -0400)]
Fix potentially-unportable code in contrib/adminpack.
Spelling access(2)'s second argument as "2" is just horrid.
POSIX makes no promises as to the numeric values of W_OK and related
macros. Even if it accidentally works as intended on every supported
platform, it's still unreadable and inconsistent with adjacent code.
In passing, don't spell "NULL" as "0" either. Yes, that's legal C;
no, it's not project style.
Back-patch, just in case the unportability is real and not theoretical.
(Most likely, even if a platform had different bit assignments for
access()'s modes, there'd not be an observable behavior difference
here; but I'm being paranoid today.)
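For illustration, the readable spelling (a trivial standalone sketch, not
adminpack's code):

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";

        /* Use the symbolic mode macro; POSIX does not pin down its value. */
        if (access(path, W_OK) == 0)
            printf("%s is writable\n", path);
        else
            perror("access");
        return 0;
    }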
Tom Lane [Sun, 15 Apr 2018 16:40:01 +0000 (12:40 -0400)]
Clean up callers of JsonbIteratorNext().
Coverity complained about the lack of a check on the return value in
parse_jsonb_index_flags' last call of JsonbIteratorNext. Seems like
a reasonable gripe to me, especially since the code is depending on
that being WJB_DONE to not leak memory, so add a check.
In passing, improve a couple other places where the result was being
ignored, either by adding an assert or at least a cast to void.
Also, don't spell "WJB_DONE" as "0". That's horrid coding style,
and it wasn't consistent either.
Magnus Hagander [Sun, 15 Apr 2018 11:57:02 +0000 (13:57 +0200)]
Fix build of pg_verify_checksum docs
They were accidentally excluded when reverting the backend online
checksum functionality, and since they weren't built, the incorrect
reference to a removed section also did not trigger a problem.
Tom Lane [Sun, 15 Apr 2018 01:00:54 +0000 (21:00 -0400)]
Simplify view-expansion code in rewriteHandler.c.
In the wake of commit 50c6bb022, it's not necessary for ApplyRetrieveRule
to have a forUpdatePushedDown parameter. By the time control gets here for
any given view-referencing RTE, we should already have pushed down the
effects of any FOR UPDATE/SHARE clauses affecting the view from outer query
levels. Hence if we don't find a RowMarkClause at the current query level,
that's sufficient proof that there is no outer one either. This in turn
means we need no forUpdatePushedDown parameter for fireRIRrules.
I wonder whether we oughtn't also revert commit cba2d2717, since it now
seems likely that that was band-aiding around the bad effects of doing
FOR UPDATE pushdown and view expansion in the wrong order. However,
in the absence of evidence that the current coding of markQueryForLocking
is actually buggy (i.e. missing RTEs it ought to mark), it seems best to
leave it alone.
There's been a massive addition of partitioning code in PostgreSQL 11,
with little oversight on its placement, resulting in a
catalog/partition.c with poorly defined boundaries and responsibilities.
This commit tries to set up a couple of distinct modules to separate things
a little bit. There are no code changes here, only code movement.
There are three new files:
src/backend/utils/cache/partcache.c
src/include/partitioning/partdefs.h
src/include/utils/partcache.h
The previous arrangement of #including catalog/partition.h almost
everywhere is no more.
Authors: Amit Langote and Álvaro Herrera
Discussion: https://postgr.es/m/98e8d509-790a-128c-be7f-e48a5b2d8d97@lab.ntt.co.jp
https://postgr.es/m/11aa0c50-316b-18bb-722d-c23814f39059@lab.ntt.co.jp
https://postgr.es/m/143ed9a4-6038-76d4-9a55-502035815e68@lab.ntt.co.jp
https://postgr.es/m/20180413193503.nynq7bnmgh6vs5vm@alvherre.pgsql
Tom Lane [Sat, 14 Apr 2018 23:02:30 +0000 (19:02 -0400)]
Improve regression test coverage of expand_tuple().
I was dissatisfied with the code coverage report for expand_tuple() in the
wake of commit 7c44c46de: while better than no coverage at all, it was
still not exercising the core function of inserting out-of-line default
values, nor was the HeapTuple-output path covered. So far as I can find,
the only code path that reaches the latter at present is EvalPlanQual
fetches for non-locked tables. Hence, extend eval-plan-qual.spec to
test cases where out-of-line defaults must be inserted into a tuple
fetched from a non-locked table.
Tom Lane [Sat, 14 Apr 2018 19:38:09 +0000 (15:38 -0400)]
Fix enforcement of SELECT FOR UPDATE permissions with nested views.
SELECT FOR UPDATE on a view should require UPDATE (as well as SELECT)
permissions on the view, and then the view's owner needs those same
permissions against the relations it references, and so on all the way
down to base tables. But ApplyRetrieveRule did things in the wrong order,
resulting in failure to mark intermediate view levels as needing UPDATE
permission. Thus for example, if user A creates a table T and an updatable
view V1 on T, then grants only SELECT permissions on V1 to user B, B could
create a second view V2 on V1 and then would be allowed to perform SELECT
FOR UPDATE via V2 (since V1 wouldn't be checked for UPDATE permissions).
To fix, just switch the order of expanding sub-views and marking referenced
objects as needing UPDATE permission. I think additional simplifications
are now possible, but that's distinct from the bug fix proper.
This is certainly a security issue, but the consequences are pretty minor
(just the ability to lock rows that shouldn't be lockable). Against that
we have a small risk of breaking applications that are working as-desired,
since nested views have behaved this way since such cases worked at all.
On balance I'm inclined not to back-patch.
Tom Lane [Sat, 14 Apr 2018 16:33:15 +0000 (12:33 -0400)]
Add commentary explaining why MaxIndexTuplesPerPage calculation is safe.
MaxIndexTuplesPerPage ignores the fact that btree indexes sometimes
store tuples with no data payload. But it also ignores the possibility
of "special space" on index pages, which offsets that, so that the
result isn't an underestimate. This all seems worth documenting, though.
In passing, remove #define MinIndexTupleSize, which was added by
commit 2c03216d8 but not used in that commit nor later ones.
Comment text by me; issue noticed by Peter Geoghegan.