Tom Lane [Fri, 18 Dec 2015 00:38:21 +0000 (19:38 -0500)]
Use just one standalone-backend session for initdb's post-bootstrap steps.
Previously, each subroutine in initdb fired up its own standalone backend
session. Over time we'd grown as many as fifteen of these sessions,
and the cumulative startup and shutdown work for them was getting pretty
noticeable. Combining things so that all these steps share a single
backend session cuts a good 10% off the total runtime of initdb, more
if you're not fsync'ing.
The main stumbling block to doing this before was that some of the sessions
were run with -j and some not. The improved definition of -j mode
implemented by my previous commit makes it possible to fix that by running
all the post-bootstrap steps with -j; we just have to use double instead of
single newlines to end command strings. (This is only absolutely necessary
around the VACUUM and CREATE DATABASE steps, since those can't be run in a
transaction block. But it seems best to make them all use double newlines
so that the commands remain separate for error-reporting purposes.)
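As an illustration of the convention (a hedged sketch, not the actual initdb source; the command text and array are invented):

    #include <stddef.h>

    /*
     * Illustrative sketch: each post-bootstrap command ends with ";\n\n",
     * i.e. a semicolon followed by an empty line, so that the -j backend
     * treats it as a separate command for error-reporting purposes.
     */
    static const char *const setup_commands[] = {
        "REVOKE ALL ON pg_largeobject FROM PUBLIC;\n\n",
        "VACUUM;\n\n",                     /* can't run in a transaction block */
        "CREATE DATABASE template0;\n\n",  /* likewise */
        NULL
    };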
A minor disadvantage is that since initdb can't tell how much of its
output the backend has executed, we can no longer have the per-step
progress reporting initdb used to print. But things are fast enough
nowadays that that's not really all that useful anyway.
In passing, add more const decoration to some of the static arrays in
initdb.c.
Tom Lane [Fri, 18 Dec 2015 00:34:15 +0000 (19:34 -0500)]
Adjust behavior of single-user -j mode for better initdb error reporting.
Previously, -j caused the entire input file to be read in and executed as
a single command string. That's undesirable, not least because any error
causes the entire file to be regurgitated as the "failing query". Some
experimentation suggests a better rule: end the command string when we see
a semicolon immediately followed by two newlines, ie, an empty line after
a query. This serves nicely to break up the existing examples such as
information_schema.sql and system_views.sql. A limitation is that it's
no longer possible to write such a sequence within a string literal or
multiline comment in a file meant to be read with -j; but there are no
instances of such a problem within the data currently used by initdb.
(If someone does make such a mistake in future, it'll be obvious because
they'll get an unterminated-literal or unterminated-comment syntax error.)
Other than that, there shouldn't be any negative consequences; you're not
forced to end statements that way, it's just a better idea in most cases.
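In code form, the boundary rule amounts to something like this (a simplified sketch; the real code must also track string literals and comments):

    #include <stdbool.h>

    /*
     * True if the character at 'pos' ends a -j command string: a semicolon
     * immediately followed by two newlines (an empty line after a query).
     * Assumes a NUL-terminated buffer, so the lookahead is safe.
     */
    static bool
    ends_command(const char *buf, int pos)
    {
        return buf[pos] == ';' &&
               buf[pos + 1] == '\n' &&
               buf[pos + 2] == '\n';
    }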
In passing, remove src/include/tcop/tcopdebug.h, which is dead code
because it's not included anywhere, and hasn't been for more than
ten years. One of the debug-support symbols it purported to describe
has been unreferenced for at least the same amount of time, and the
other is removed by this commit on the grounds that it was useless:
forcing -j mode all the time would have broken initdb. The lack of
complaints about that, or about the missing inclusion, shows that
no one has tried to use TCOP_DONTUSENEWLINE in many years.
Tom Lane [Thu, 17 Dec 2015 21:55:23 +0000 (16:55 -0500)]
Fix improper initialization order for readline.
Turns out we must set rl_basic_word_break_characters *before* we call
rl_initialize() the first time, because it will quietly copy that value
elsewhere --- but only on the first call. (Love these undocumented
dependencies.) I broke this yesterday in commit 2ec477dc8108339d;
like that commit, back-patch to all active branches. Per report from
Pavel Stehule.
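In code form, the required ordering is simply this (a sketch, not the actual psql code; the break-character list shown is illustrative):

    #include <readline/readline.h>

    static void
    init_line_input(void)
    {
        /*
         * Must happen before the first rl_initialize() call: readline
         * quietly copies this value elsewhere, but only on that first call.
         */
        rl_basic_word_break_characters = "\t\n@$><=;|&{() ";

        rl_initialize();
    }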
Alvaro Herrera [Thu, 17 Dec 2015 17:25:41 +0000 (14:25 -0300)]
Rework internals of changing a type's ownership
This is necessary so that REASSIGN OWNED does the right thing with
composite types, to wit, that it also alters ownership of the type's
pg_class entry -- previously, the pg_class entry remained owned by the
original user, which later caused other failures such as the new owner's
inability to use ALTER TYPE to rename an attribute of the affected
composite. Also, if the original owner is later dropped, the pg_class
entry becomes owned by a nonexistent user, which is bogus.
To fix, create a new routine AlterTypeOwner_oid which knows whether to
pass the request to ATExecChangeOwner or deal with it directly, and use
that in shdepReassignOwner rather than calling AlterTypeOwnerInternal
directly. AlterTypeOwnerInternal is now simpler in that it only
modifies the pg_type entry and recurses to handle a possible array type;
higher-level tasks are handled by either AlterTypeOwner directly or
AlterTypeOwner_oid.
I took the opportunity to add a few more objects to the test rig for
REASSIGN OWNED, so that more cases are exercised. Additional ones could
be added for superuser-only-ownable objects (such as FDWs and event
triggers) but I didn't want to push my luck by adding a new superuser to
the tests on a backpatchable bug fix.
Per bug #13666 reported by Chris Pacejo.
Backpatch to 9.5.
(I would back-patch this all the way back, except that it doesn't apply
cleanly in 9.4 and earlier because 59367fdf9 wasn't backpatched. If we
decide that we need this in earlier branches too, we should backpatch
both.)
Tom Lane [Wed, 16 Dec 2015 21:58:55 +0000 (16:58 -0500)]
Cope with Readline's failure to track SIGWINCH events outside of input.
It emerges that libreadline doesn't notice terminal window size change
events unless they occur while collecting input. This is easy to stumble
over if you resize the window while using a pager to look at query output,
but it can be demonstrated without any pager involvement. The symptom is
that queries exceeding one line are misdisplayed during subsequent input
cycles, because libreadline has the wrong idea of the screen dimensions.
The safest, simplest way to fix this is to call rl_reset_screen_size()
just before calling readline(). That causes an extra ioctl(TIOCGWINSZ)
for every command; but since it only happens when reading from a tty, the
performance impact should be negligible. A more valid objection is that
this still leaves a tiny window during entry to readline() wherein delivery
of SIGWINCH will be missed; but the practical consequences of that are
probably negligible. In any case, there doesn't seem to be any good way to
avoid the race, since readline exposes no functions that seem safe to call
from a generic signal handler --- rl_reset_screen_size() certainly isn't.
It turns out that we also need an explicit rl_initialize() call, else
rl_reset_screen_size() dumps core when called before the first readline()
call.
rl_reset_screen_size() is not present in old versions of libreadline,
so we need a configure test for that. (rl_initialize() is present at
least back to readline 4.0, so we won't bother with a test for it.)
We would need a configure test anyway since libedit's emulation of
libreadline doesn't currently include such a function. Fortunately,
libedit seems not to have any corresponding bug.
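Put together, the input loop ends up shaped roughly like this (a sketch; HAVE_RL_RESET_SCREEN_SIZE stands for the result of the new configure test):

    #include <stdbool.h>
    #include <readline/readline.h>

    static char *
    get_input_line(const char *prompt)
    {
        static bool rl_initialized = false;

        if (!rl_initialized)
        {
            rl_initialize();        /* required, or rl_reset_screen_size()
                                     * dumps core on its first use */
            rl_initialized = true;
        }

    #ifdef HAVE_RL_RESET_SCREEN_SIZE
        rl_reset_screen_size();     /* costs an extra ioctl(TIOCGWINSZ) per
                                     * command, but only when on a tty */
    #endif

        return readline(prompt);
    }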
Robert Haas [Wed, 16 Dec 2015 20:23:45 +0000 (15:23 -0500)]
Speed up CREATE INDEX CONCURRENTLY's TID sort.
Encode TIDs as 64-bit integers to speed up comparisons. This seems to
speed things up on all platforms, but is even more beneficial when
8-byte integers are passed by value.
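The encoding is essentially this (an illustrative sketch; the function name is invented):

    #include <stdint.h>

    /*
     * Pack a TID's 32-bit block number and 16-bit offset into one 64-bit
     * integer, so that comparing two TIDs becomes a single integer
     * comparison that sorts by block, then by offset.
     */
    static inline int64_t
    tid_encode(uint32_t block, uint16_t offset)
    {
        return (int64_t) (((uint64_t) block << 16) | offset);
    }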
Peter Geoghegan. Design suggestions and review by Tom Lane. Review
also by Simon Riggs and by me.
Robert Haas [Wed, 16 Dec 2015 12:43:56 +0000 (07:43 -0500)]
Mark CHECK constraints declared NOT VALID valid if created with table.
FOREIGN KEY constraints have behaved this way for a long time, but for
some reason the behavior of CHECK constraints has been inconsistent up
until now.
Amit Langote and Amul Sul, with assorted tweaks by me.
Robert Haas [Tue, 15 Dec 2015 18:57:45 +0000 (13:57 -0500)]
Teach mdnblocks() not to create zero-length files.
It's entirely surprising that mdnblocks() has the side effect of
creating new files on disk, so let's make it not do that. One
consequence of the old behavior is that, if running on a damaged
cluster that is missing a file, mdnblocks() can recreate the file
and allow a subsequent _mdfd_getseg() for a higher segment to succeed.
This happens because, while mdnblocks() stops when it finds a segment
that is shorter than 1GB, _mdfd_getseg() has no such check, and thus
the empty file created by mdnblocks() can allow it to continue its
traversal and find higher-numbered segments which remain.
It might be a good idea for _mdfd_getseg() to actually verify that
each segment it finds is exactly 1GB before proceeding to the next
one, but that would involve some additional system calls, so for
now I'm just doing this much.
Patch by me, per off-list analysis by Kevin Grittner and Rahila Syed.
Review by Andres Freund.
Robert Haas [Tue, 15 Dec 2015 18:32:54 +0000 (13:32 -0500)]
Move buffer I/O and content LWLocks out of the main tranche.
Move the content lock directly into the BufferDesc, so that locking and
pinning a buffer touches only one cache line rather than two. Adjust
the definition of BufferDesc slightly so that this doesn't make the
BufferDesc any larger than one cache line (at least on platforms where
a spinlock is only 1 or 2 bytes).
We can't fit the I/O locks into the BufferDesc and stay within one
cache line, so move those to a completely separate tranche. This
leaves a relatively limited number of LWLocks in the main tranche, so
increase the padding of those remaining locks to a full cache line,
rather than allowing adjacent locks to share a cache line, hopefully
reducing false sharing.
Performance testing shows that these changes make little difference
on laptop-class machines, but help significantly on larger servers,
especially those with more than 2 sockets.
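The padding trick looks roughly like this (a sketch; the lock contents and the 64-byte figure are illustrative):

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64      /* illustrative; platform-dependent */

    typedef struct LWLock
    {
        uint32_t state;             /* stand-in for the real lock fields */
    } LWLock;

    /*
     * Pad each main-tranche lock to a full cache line so that adjacent
     * locks never share one, reducing false sharing.
     */
    typedef union LWLockPadded
    {
        LWLock lock;
        char   pad[CACHE_LINE_SIZE];
    } LWLockPadded;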
Andres Freund, originally based on an earlier patch by Simon Riggs.
Review and cosmetic adjustments (including heavy rewriting of the
comments) by me.
Robert Haas [Tue, 15 Dec 2015 16:32:13 +0000 (11:32 -0500)]
Provide a way to predefine LWLock tranche IDs.
It's a bit cumbersome to use LWLockNewTrancheId(), because the returned
value needs to be shared between backends so that each backend can call
LWLockRegisterTranche() with the correct ID. So, for built-in tranches,
use a hard-coded value instead.
This is motivated by an upcoming patch adding further built-in tranches.
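A sketch of the idea (the names and values here are illustrative, not the actual identifiers):

    /*
     * Illustrative: built-in tranches get fixed IDs that every backend
     * knows at compile time, so nothing needs to be shared at runtime;
     * LWLockNewTrancheId() hands out IDs starting above these.
     */
    typedef enum BuiltinTrancheIds
    {
        LWTRANCHE_MAIN = 0,
        LWTRANCHE_BUFFER_IO,
        LWTRANCHE_BUFFER_CONTENT,
        LWTRANCHE_FIRST_USER_DEFINED
    } BuiltinTrancheIds;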
Stephen Frost [Tue, 15 Dec 2015 15:08:09 +0000 (10:08 -0500)]
Improve CREATE POLICY documentation
Clarify that SELECT policies are now applied when SELECT rights
are required for a given query, even if the query is an UPDATE or
DELETE query. Pointed out by Noah.
Additionally, note the risk regarding concurrently open transactions
where a relation which controls access to the rows of another relation
is updated while the rows of the primary relation are also being
modified. Pointed out by Peter Geoghegan.
Stephen Frost [Tue, 15 Dec 2015 01:05:43 +0000 (20:05 -0500)]
Collect the global OR of hasRowSecurity flags for plancache
We carry around information about whether a given query has row security,
so that the plancache can use that information to invalidate a planned
query in the event that the environment changes.
Previously, the flag of one of the subqueries was simply being copied
into place to indicate whether the query overall included RLS components.
That's wrong, as we need the global OR of all subqueries. Fix by
changing the code to match how fireRIRules works, which results
in OR'ing all of the flags.
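The shape of the fix, in miniature (a sketch with invented names):

    #include <stdbool.h>

    /*
     * The plancache flag must be the OR across all subqueries, not a copy
     * of whichever single subquery's flag happened to be examined.
     */
    static bool
    query_has_row_security(const bool subquery_flags[], int nsubqueries)
    {
        bool hasRowSecurity = false;

        for (int i = 0; i < nsubqueries; i++)
            hasRowSecurity |= subquery_flags[i];
        return hasRowSecurity;
    }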
Alvaro Herrera [Mon, 14 Dec 2015 19:44:40 +0000 (16:44 -0300)]
Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly
Apparently, there are bugs in this code that cause it to loop endlessly.
Those bugs still need more research, but in the meantime it's clear that
the loop is missing a check for interrupts, without which it cannot be
cancelled in a timely fashion.
Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
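The shape of the fix (a sketch with an invented function name; the loop body is elided, and CHECK_FOR_INTERRUPTS() is the standard macro from miscadmin.h):

    #include "miscadmin.h"          /* for CHECK_FOR_INTERRUPTS() */

    static void
    walk_polygon_segments(void)
    {
        for (;;)                    /* loop known to sometimes not terminate */
        {
            CHECK_FOR_INTERRUPTS(); /* lets a pending cancel request break in */

            /* ... per-segment containment test, elided ... */
        }
    }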
Kevin Grittner [Mon, 14 Dec 2015 17:37:26 +0000 (11:37 -0600)]
Remove xmlparse(document '') test
This one test was behaving differently between the Ubuntu fix for
CVE-2015-7499 and the base "expected" file. It's not worth having
yet another version of the expected file for this test, so drop it.
Perhaps at some point when all distros have settled down to the
same behavior on this test, it can be restored.
Problem found by me on libxml2 (2.9.1+dfsg1-3ubuntu4.6).
Solution suggested by Tom Lane.
Backpatch to 9.5, where the test was added.
Fix out-of-memory error handling in ParameterDescription message processing.
If libpq ran out of memory while constructing the result set, it would hang,
waiting for more data from the server, which might never arrive. To fix,
distinguish between out-of-memory error and not-enough-data cases, and give
a proper error message back to the client on OOM.
There are still similar issues in handling COPY start messages, but let's
handle that as a separate patch.
Michael Paquier, Amit Kapila and me. Backpatch to all supported versions.
Andres Freund [Mon, 14 Dec 2015 10:34:16 +0000 (11:34 +0100)]
Fix bug in SetOffsetVacuumLimit() triggered by find_multixact_start() failure.
Previously, if find_multixact_start() failed, SetOffsetVacuumLimit() would
install 0 into MultiXactState->offsetStopLimit even if a previous call had
succeeded.
Luckily, there are no known cases where find_multixact_start() will return
an error in 9.5 and above. But if it were to happen, for example due to
filesystem permission issues, it'd be somewhat bad: GetNewMultiXactId()
could continue allocating mxids even if close to a wraparound, or it could
erroneously stop allocating mxids, even if no wraparound is looming. The
wrong value would be corrected the next time SetOffsetVacuumLimit() is
called, or by a restart.
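The shape of the fix is roughly this (a hedged sketch; the type and helper shown are stand-ins, not the real multixact.c code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t MultiXactOffset;   /* illustrative stand-in */

    extern bool find_multixact_start_stub(MultiXactOffset *result);

    static void
    set_offset_vacuum_limit(void)
    {
        MultiXactOffset oldestOffset;

        if (find_multixact_start_stub(&oldestOffset))
        {
            /* compute and install offsetStopLimit from oldestOffset */
        }
        /* else: keep the previously installed limit rather than storing 0 */
    }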
Reported-By: Noah Misch, although this is not his preferred fix
Discussion: 20151210140450.GA22278@alap3.anarazel.de
Backpatch: 9.5, where the bug was introduced as part of 4f627f
Tom Lane [Mon, 14 Dec 2015 04:42:54 +0000 (23:42 -0500)]
Docs: document that psql's "\i -" means read from stdin.
This has worked that way for a long time, maybe always, but you would
not have known it from the documentation. Also back-patch the notes
I added to HEAD earlier today about behavior of the "-f -" switch,
which likewise have been valid for many releases.
Tom Lane [Sun, 13 Dec 2015 19:52:07 +0000 (14:52 -0500)]
Code and docs review for multiple -c and -f options in psql.
Commit d5563d7df94488bf drew complaints from Coverity, which quite
correctly complained that one copy of each -c or -f string was being
leaked. What's more, simple_action_list_append was allocating enough space
for still a third copy of each string as part of the SimpleActionListCell,
even though that coding method had been superseded by a separate strdup
operation. There were some other minor coding infelicities too. The
documentation needed more work as well, eg it forgot to explain that -c
causes psql not to accept any interactive input.
Magnus Hagander [Sun, 13 Dec 2015 15:53:38 +0000 (16:53 +0100)]
Consistently set all fields in pg_stat_replication to null instead of 0
Previously the "sent" field would be set to 0 and all other xlog
pointers be set to NULL if there were no valid values (such as when
in a backup sending walsender).
Magnus Hagander [Sun, 13 Dec 2015 15:40:37 +0000 (16:40 +0100)]
Properly initialize write, flush and replay locations in walsender slots
These would leak random xlog positions if a walsender used for a backup
reused a walsender slot previously used by a replication walsender.
In passing also fix a couple of cases where the xlog pointer is directly
compared to zero instead of using XLogRecPtrIsInvalid, noted by
Michael Paquier.
Andres Freund [Sat, 12 Dec 2015 13:12:35 +0000 (14:12 +0100)]
Fix ALTER TABLE ... SET TABLESPACE for unlogged relations.
Changing the tablespace of an unlogged relation did not WAL log the
creation and content of the init fork. Thus, after a standby is
promoted, unlogged relations cannot be accessed anymore, with errors
like:
ERROR: 58P01: could not open file "pg_tblspc/...": No such file or directory
Additionally, the init fork was not synced to disk regardless of the
configured wal_level -- a relatively small durability risk.
Investigation of that problem also brought to light that, even for
permanent relations, the creation of !main forks was not WAL logged,
i.e. no XLOG_SMGR_CREATE records were emitted. That mostly turns out not
to be a problem, because these files are created when the actual
relation data is copied; nonexistent files are not treated as an error
condition during replay. But that doesn't work for empty files, and
generally feels a bit haphazard. Luckily, outside the init and main forks,
empty forks either don't occur often or are not a problem.
Add the required WAL logging and syncing to disk.
Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: 20151210163230.GA11331@alap3.anarazel.de
Backpatch: 9.1, where unlogged relations were introduced
Tom Lane [Sat, 12 Dec 2015 00:08:40 +0000 (19:08 -0500)]
Add an expected-file to match behavior of latest libxml2.
Recent releases of libxml2 do not provide error context reports for errors
detected at the very end of the input string. This appears to be a bug, or
at least an infelicity, introduced by the fix for libxml2's CVE-2015-7499.
We can hope that this behavioral change will get undone before too long;
but the security patch is likely to spread a lot faster/further than any
follow-on cleanup, which means this behavior is likely to be present in the
wild for some time to come. As a stopgap, add a variant regression test
expected-file that matches what you get with a libxml2 that acts this way.
Alvaro Herrera [Fri, 11 Dec 2015 21:39:09 +0000 (18:39 -0300)]
Fix REASSIGN OWNED for foreign user mappings
As reported in bug #13809 by Alexander Ashurkov, the code for REASSIGN
OWNED hadn't gotten word about user mappings. Deal with them in the
same way default ACLs do, which is to ignore them altogether; they are
handled just fine by DROP OWNED. The other foreign object cases are
already handled correctly by both commands.
Also add a REASSIGN OWNED statement to foreign_data test to exercise the
foreign data objects. (The changes are just before the "cleanup" phase,
so they shouldn't remove any existing live test.)
Reported by Alexander Ashurkov, then independently by Jaime Casanova.
Stephen Frost [Fri, 11 Dec 2015 21:12:25 +0000 (16:12 -0500)]
Handle policies during DROP OWNED BY
DROP OWNED BY handled GRANT-based ACLs but was not removing roles from
policies. Fix that by having DROP OWNED BY remove the role specified
from the list of roles the policy (or policies) apply to, or the entire
policy (or policies) if it only applied to the role specified.
As with ACLs, the DROP OWNED BY caller must have permission to modify
the policy or a WARNING is thrown and no change is made to the policy.
Tom Lane [Fri, 11 Dec 2015 20:52:16 +0000 (15:52 -0500)]
Get rid of the planner's LateralJoinInfo data structure.
I originally modeled this data structure on SpecialJoinInfo, but after
commit acfcd45cacb6df23 that looks like a pretty poor decision.
All we really need is relid sets identifying laterally-referenced rels;
and most of the time, what we want to know about includes indirect lateral
references, a case the LateralJoinInfo data was unsuited to compute with
any efficiency. The previous commit redefined RelOptInfo.lateral_relids
as the transitive closure of lateral references, so that it easily supports
checking indirect references. For the places where we really do want just
direct references, add a new RelOptInfo field direct_lateral_relids, which
is easily set up as a copy of lateral_relids before we perform the
transitive closure calculation. Then we can just drop lateral_info_list
and LateralJoinInfo and the supporting code. This makes the planner's
handling of lateral references noticeably more efficient, and shorter too.
Such a change can't be back-patched into stable branches for fear of
breaking extensions that might be looking at the planner's data structures;
but it seems not too late to push it into 9.5, so I've done so.
Stephen Frost [Fri, 11 Dec 2015 20:43:03 +0000 (15:43 -0500)]
Handle dependencies properly in ALTER POLICY
ALTER POLICY hadn't fully considered partial policy alteration
(e.g., changing just the roles on the policy, or just one of the
expressions) when rebuilding the dependencies. Instead, it
would happily remove all dependencies which existed for the
policy and then only recreate the dependencies for the objects
referred to in the specific ALTER POLICY command.
Correct that by extracting and building the dependencies for all
objects referenced by the policy, regardless of whether they were
provided as part of the ALTER POLICY command or were already in
place as part of the pre-existing policy.
Tom Lane [Fri, 11 Dec 2015 19:22:20 +0000 (14:22 -0500)]
Still more fixes for planner's handling of LATERAL references.
More fuzz testing by Andreas Seltenreich exposed that the planner did not
cope well with chains of lateral references. If relation X references Y
laterally, and Y references Z laterally, then we will have to scan X on the
inside of a nestloop with Z, so for all intents and purposes X is laterally
dependent on Z too. The planner did not understand this and would generate
intermediate joins that could not be used. While that was usually harmless
except for wasting some planning cycles, under the right circumstances it
would lead to "failed to build any N-way joins" or "could not devise a
query plan" planner failures.
To fix that, convert the existing per-relation lateral_relids and
lateral_referencers relid sets into their transitive closures; that is,
they now show all relations on which a rel is directly or indirectly
laterally dependent. This not only fixes the chained-reference problem
but allows some of the relevant tests to be made substantially simpler
and faster, since they can be reduced to simple bitmap manipulations
instead of searches of the LateralJoinInfo list.
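The closure computation itself is simple; here is a self-contained sketch over plain bitmasks (the planner uses Bitmapsets, but the idea is the same):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * refs[i] is a bitmask of the rels that rel i references laterally.
     * Expand each set until it also contains all indirect references.
     */
    static void
    lateral_closure(uint64_t refs[], int nrels)
    {
        bool changed;

        do
        {
            changed = false;
            for (int i = 0; i < nrels; i++)
                for (int j = 0; j < nrels; j++)
                    if ((refs[i] & (UINT64_C(1) << j)) &&
                        (refs[i] | refs[j]) != refs[i])
                    {
                        refs[i] |= refs[j];
                        changed = true;
                    }
        } while (changed);
    }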
Also, when a PlaceHolderVar that is due to be evaluated at a join contains
lateral references, we should treat those references as indirect lateral
dependencies of each of the join's base relations. This prevents us from
trying to join any individual base relations to the lateral reference
source before the join is formed, which again cannot work.
Andreas' testing also exposed another oversight in the "dangerous
PlaceHolderVar" test added in commit 85e5e222b1dd02f1. Simply rejecting
unsafe join paths in joinpath.c is insufficient, because in some cases
we will end up rejecting *all* possible paths for a particular join, again
leading to "could not devise a query plan" failures. The restriction has
to be known also to join_is_legal and its cohort functions, so that they
will not select a join for which that will happen. I chose to move the
supporting logic into joinrels.c where the latter functions are.
Back-patch to 9.3 where LATERAL support was introduced.
Alvaro Herrera [Fri, 11 Dec 2015 17:30:43 +0000 (14:30 -0300)]
Fix commit timestamp initialization
This module needs explicit initialization in order to replay WAL records
in recovery, but we had broken this recently following changes to make
other (stranger) scenarios work correctly. To fix, rework the
initialization sequence so that it always takes place before WAL replay
commences for both master and standby.
I could have gone for a more localized fix that just added a "startup"
call for the master server, but it seemed better to restructure the
existing callers as well so that the whole thing made more sense. As a
drawback, there is more control logic in xlog.c now than previously, but
doing otherwise meant passing down the ControlFile flag, which seemed
uglier as a whole.
This also meant adding a check to not re-execute ActivateCommitTs if it
had already been called.
Robert Haas [Thu, 10 Dec 2015 17:28:46 +0000 (12:28 -0500)]
Improve ALTER POLICY tab completion.
Complete "ALTER POLICY" with a policy name, as we do for DROP POLICY.
And, complete "ALTER POLICY polname ON" with a table name that has such
a policy, as we do for DROP POLICY, rather than with any table name
at all.
Andres Freund [Thu, 10 Dec 2015 15:26:45 +0000 (16:26 +0100)]
Fix ON CONFLICT UPDATE bug breaking AFTER UPDATE triggers.
ExecOnConflictUpdate() passed t_ctid of the to-be-updated tuple to
ExecUpdate(). That's problematic primarily for two reasons: first
and foremost, t_ctid could point to a different tuple. Secondly, and
that's what triggered the complaint by Stanislav, t_ctid is changed by
heap_update() to point to the new tuple version. The behavior of AFTER
UPDATE triggers was therefore broken, with NEW.* and OLD.* tuples
spuriously identical within AFTER UPDATE triggers.
To fix both issues, pass a pointer to t_self of an on-stack HeapTuple
instead.
Fixing this bug led to one change in the regression tests, which previously
failed due to the first issue mentioned above. There's a reasonable
expectation that that test fails, as it updates one row repeatedly within one
INSERT ... ON CONFLICT statement. That is only possible if the second
update is triggered via ON CONFLICT ... SET, ON CONFLICT ... WHERE, or
by a WITH CHECK expression, as those are executed after
ExecOnConflictUpdate() does a visibility check. That could easily be
prohibited, but given that it's allowed for plain UPDATEs and is a rare
corner case, it doesn't seem worthwhile.
Reported-By: Stanislav Grozev
Author: Andres Freund and Peter Geoghegan
Discussion: CAA78GVqy1+LisN-8DygekD_Ldfy=BJLarSpjGhytOsgkpMavfQ@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
Andres Freund [Thu, 10 Dec 2015 15:25:12 +0000 (16:25 +0100)]
Fix bug leading to restoring unlogged relations from empty files.
At the end of crash recovery, unlogged relations are reset to the empty
state, using their init fork as the template. The init fork is copied to
the main fork without going through shared buffers. Unfortunately WAL
replay so far has not necessarily flushed writes from shared buffers to
disk at that point. In normal crash recovery, and before the
introduction of 'fast promotions' in fd4ced523 / 9.3, the
END_OF_RECOVERY checkpoint flushes the buffers out in time. But with
fast promotions that's not the case anymore.
To fix, force WAL writes targeting the init fork to be flushed
immediately (using the new FlushOneBuffer() function). In 9.5+ that
flush can centrally be triggered from the code dealing with restoring
full page writes (XLogReadBufferForRedoExtended), in earlier releases
that responsibility is in the hands of XLOG_HEAP_NEWPAGE's replay
function.
Backpatch to 9.1, even though this is currently only known to trigger in
9.3+. Flushing earlier is more robust, and it is advantageous to keep
the branches similar.
Typical symptoms of this bug are errors like
'ERROR: index "..." contains unexpected zero page at block 0'
shortly after promoting a node.
Reported-By: Thom Brown
Author: Andres Freund and Michael Paquier
Discussion: 20150326175024.GJ451@alap3.anarazel.de
Backpatch: 9.1-
Robert Haas [Wed, 9 Dec 2015 18:18:09 +0000 (13:18 -0500)]
Allow EXPLAIN (ANALYZE, VERBOSE) to display per-worker statistics.
The original parallel sequential scan commit included only very limited
changes to the EXPLAIN output. Aggregated totals from all workers were
displayed, but there was no way to see what each individual worker did
or to distinguish the effort made by the workers from the effort made by
the leader.
Per a gripe by Thom Brown (and maybe others). Patch by me, reviewed
by Amit Kapila.
Kevin Grittner [Tue, 8 Dec 2015 23:32:49 +0000 (17:32 -0600)]
Improve performance in freeing memory contexts
The singly linked list of memory contexts could result in O(N^2)
performance when freeing a set of contexts if they were not freed in
reverse order of creation. In many cases the reverse order was
used, but there were some significant exceptions that caused
real-world performance problems. Rather than requiring all callers to
care about the order in which contexts were freed, and hunting down
and changing all existing cases where the wrong order was used, we
add one pointer per memory context so that the implementation
details are not so visible.
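In outline, the change is just an extra back-pointer (a sketch; field names are illustrative):

    /*
     * Children used to hang off their parent on a singly linked list, so
     * unlinking a context cost O(number of siblings).  With a back-pointer,
     * unlinking is O(1) and contexts can be freed in any order without
     * quadratic behavior.
     */
    typedef struct MemoryContextData
    {
        struct MemoryContextData *parent;
        struct MemoryContextData *firstchild;
        struct MemoryContextData *prevchild;    /* the one added pointer */
        struct MemoryContextData *nextchild;
        /* ... allocator state elided ... */
    } MemoryContextData;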
Tom Lane [Tue, 8 Dec 2015 22:14:46 +0000 (17:14 -0500)]
Make failure to open psql's --log-file fatal.
Commit 344cdff2c made failure to open the target of --output fatal.
For consistency, the --log-file switch should behave similarly.
Like the previous commit, back-patch to 9.5 but no further.
Tom Lane [Tue, 8 Dec 2015 21:58:05 +0000 (16:58 -0500)]
Avoid odd portability problem in TestLib.pm's slurp_file function.
For unclear reasons, this function doesn't always read the expected data
in some old Perl versions. Rewriting it to avoid use of ARGV seems to
dodge the problem, and this version is clearer anyway if you ask me.
In passing, also improve error message in adjacent append_to_file function.
Robert Haas [Tue, 8 Dec 2015 19:04:08 +0000 (14:04 -0500)]
psql: Support multiple -c and -f options, and allow mixing them.
To support this, we must reconcile some historical anomalies in the
behavior of -c. In particular, as a backward-incompatibility, -c no
longer implies --no-psqlrc.
Pavel Stehule (code) and Catalin Iacob (documentation). Review by
Michael Paquier and myself. Proposed behavior per Tom Lane.
Robert Haas [Tue, 8 Dec 2015 17:31:03 +0000 (12:31 -0500)]
Allow foreign and custom joins to handle EvalPlanQual rechecks.
Commit e7cb7ee14555cc9c5773e2c102efd6371f6f2005 provided basic
infrastructure for allowing a foreign data wrapper or custom scan
provider to replace a join of one or more tables with a scan.
However, this infrastructure failed to take into account the need
for possible EvalPlanQual rechecks, and ExecScanFetch would fail
an assertion (or just overwrite memory) if such a check was attempted
for a plan containing a pushed-down join. To fix, adjust the EPQ
machinery to skip some processing steps when scanrelid == 0, making
those the responsibility of the scan's recheck method, which also has
the responsibility in this case of correctly populating the relevant
slot.
To allow foreign scans to gain control in the right place to make
use of this new facility, add a new, optional RecheckForeignScan
method. Also, allow a foreign scan to have a child plan, which can
be used to correctly populate the slot (or perhaps for something
else, but this is the only use currently envisioned).
KaiGai Kohei, reviewed by Robert Haas, Etsuro Fujita, and Kyotaro
Horiguchi.
Tom Lane [Mon, 7 Dec 2015 23:56:14 +0000 (18:56 -0500)]
Simplify LATERAL-related calculations within add_paths_to_joinrel().
While convincing myself that commit 7e19db0c09719d79 would solve both of
the problems recently reported by Andreas Seltenreich, I realized that
add_paths_to_joinrel's handling of LATERAL restrictions could be made
noticeably simpler and faster if we were to retain the minimum possible
parameterization for each joinrel (that is, the set of relids supplying
unsatisfied lateral references in it). We already retain that for
baserels, in RelOptInfo.lateral_relids, so we can use that field for
joinrels too.
I re-pgindent'd the files touched here, which affects some unrelated
comments.
This is, I believe, just a minor optimization not a bug fix, so no
back-patch.
Tom Lane [Mon, 7 Dec 2015 22:41:45 +0000 (17:41 -0500)]
Fix another oversight in checking if a join with LATERAL refs is legal.
It was possible for the planner to decide to join a LATERAL subquery to
the outer side of an outer join before the outer join itself is completed.
Normally that's fine because of the associativity rules, but it doesn't
work if the subquery contains a lateral reference to the inner side of the
outer join. In such a situation the outer join *must* be done first.
join_is_legal() missed this consideration and would allow the join to be
attempted, but the actual path-building code correctly decided that no
valid join path could be made, sometimes leading to planner errors such as
"failed to build any N-way joins".
Per report from Andreas Seltenreich. Back-patch to 9.3 where LATERAL
support was added.
Alvaro Herrera [Mon, 7 Dec 2015 22:25:31 +0000 (19:25 -0300)]
Cleanup some problems in new Perl test code
Noted by Tom Lane:
- PostgresNode had a BEGIN block which created files, contrary to
perlmod suggestions to do that only on INIT blocks.
- Assign ports randomly rather than starting from 90600.
Noted by Noah Misch:
- Change use of no-longer-set PGPORT environment variable to $node->port
- Don't start a server in pg_controldata test
- PostgresNode was reading the PID file incorrectly; test the right
thing, and chomp the line we read from the PID file.
- Remove an unused $devnull variable
- Use 'pg_ctl kill' instead of "kill" directly, for Windows portability.
- Make server log names more informative.
Tom Lane [Sat, 5 Dec 2015 18:23:48 +0000 (13:23 -0500)]
Create TestLib.pm's tempdir underneath tmp_check/, not out in the open.
This way, existing .gitignore entries and makefile clean actions will
automatically apply to the tempdir, should it survive a TAP test run
(which can happen if the user control-C's out of the run, for example).
Noah Misch [Sat, 5 Dec 2015 08:04:17 +0000 (03:04 -0500)]
Instruct Coverity using an assertion.
This should make Coverity deduce that plperl_call_perl_func() does not
dereference NULL argtypes. Back-patch to 9.5, where the affected code
was introduced.
Tom Lane [Fri, 4 Dec 2015 19:44:13 +0000 (14:44 -0500)]
Further improve documentation of the role-dropping process.
In commit 1ea0c73c2 I added a section to user-manag.sgml about how to drop
roles that own objects; but as pointed out by Stephen Frost, I neglected
that shared objects (databases or tablespaces) may need special treatment.
Fix that. Back-patch to supported versions, like the previous patch.
Alvaro Herrera [Thu, 3 Dec 2015 22:22:31 +0000 (19:22 -0300)]
Further tweak commit_timestamp behavior
As pointed out by Fujii Masao, we weren't quite there on a standby
behaving sanely: first because we were failing to acquire the correct
state in the case where no XLOG_PARAMETER_CHANGE message was sent
(because a checkpoint had already happened after the setting was changed
in the master, and then the standby was restarted); and second because
promoting the standby with the feature enabled failed to activate it if
the master had the feature disabled.
This patch fixes both those misbehaviors hopefully without
re-introducing any old problems.
Also change the hint emitted in a standby together with the error
message about the feature being disabled, to make it point out that the
place to change the setting is the master. Otherwise, if the setting is
already enabled in the standby, it is very confusing to have it say that
the setting must be enabled ...
Authors: Álvaro Herrera, Petr Jelínek.
Backpatch to 9.5.
Tom Lane [Thu, 3 Dec 2015 19:28:58 +0000 (14:28 -0500)]
Clean up some psql issues around handling of the query output file.
Formerly, if "psql -o foo" failed to open the output file "foo", it would
print an error message but then carry on as though -o had not been
specified at all. This seems contrary to expectation: a program that
cannot open its output file normally fails altogether. Make psql do
exit(1) after reporting the error.
If "\o foo" failed to open "foo", it would print an error message but then
reset the output file to stdout, as if the argument had been omitted.
This is likewise pretty surprising behavior. Make it keep the previous
output state, instead.
psql keeps SIGPIPE interrupts disabled when it is writing to a pipe, either
a pipe specified by -o/\o or a transient pipe opened for purposes such as
using a pager on query output. The logic for this was too simple and could
sometimes re-enable SIGPIPE when a -o pipe was still active, thus possibly
leading to an unexpected psql crash later.
Fixing the last point required getting rid of the kluge in PrintQueryTuples
and ExecQueryUsingCursor whereby they'd transiently change the global
queryFout state, but that seems like good cleanup anyway.
Back-patch to 9.5 but not further; these are minor-enough issues that
changing the behavior in stable branches doesn't seem appropriate.
Tom Lane [Wed, 2 Dec 2015 23:20:33 +0000 (18:20 -0500)]
Fix behavior of printTable() and friends with externally-invoked pager.
The formatting modes that depend on knowledge of the terminal window width
did not work right when printing a query result that's been fetched in
sections (as a result of FETCH_SIZE). ExecQueryUsingCursor() would force
use of the pager as soon as there's more than one result section, and then
print.c would see an output file pointer that's not stdout and incorrectly
conclude that the terminal window width isn't relevant.
This has been broken all along for non-expanded "wrapped" output format,
and as of 9.5 the issue affects expanded mode as well. The problem also
caused "\pset expanded auto" mode to invariably *not* switch to expanded
output in a segmented result, which seems to me to be exactly backwards.
To fix, we need to pass down an "is_pager" flag to inform the print.c
subroutines that some calling level has already replaced stdout with a
pager pipe, so they should (a) not do that again and (b) nonetheless honor
the window size. (Notably, this makes the first is_pager test in
print_aligned_text() not be dead code anymore.)
This patch is a bit invasive because there are so many existing calls of
printQuery()/printTable(), but fortunately all but a couple can just pass
"false" for the added parameter.
Back-patch to 9.5 but no further. Given the lack of field complaints,
it's not clear that we should change the behavior in stable branches.
Also, the API change for printQuery()/printTable() might possibly break
third-party code, again something we don't like to do in stable branches.
However, it's not quite too late to do this in 9.5, and with the larger
scope of the problem there, it seems worth doing.
Alvaro Herrera [Wed, 2 Dec 2015 21:46:16 +0000 (18:46 -0300)]
Refactor Perl test code
The original code was a bit clunky; make it more amenable for further
reuse by creating a new Perl package PostgresNode, which is an
object-oriented representation of a single server, with some support
routines such as init, start, stop, psql. This serves as a better basis
on which to build further test code, and enables writing tests that use
more than one server without too much complication.
This commit modifies a lot of the existing test files, mostly to remove
explicit calls to system commands (pg_ctl) replacing them with method
calls of a PostgresNode object. The result is quite a bit more
straightforward.
Also move some initialization code to BEGIN and INIT blocks instead of
having it straight in as top-level code.
This commit also introduces package RecursiveCopy so that we can copy
whole directories without having to depend on packages that may not be
present on vanilla Perl 5.8 installations.
I also ran perltidy on the modified files, which changes some code sites
that are not otherwise touched by this patch. I tried to avoid this,
but it ended up being more trouble than it's worth.
Authors: Michael Paquier, Álvaro Herrera
Review: Noah Misch
Tom Lane [Tue, 1 Dec 2015 21:24:34 +0000 (16:24 -0500)]
Make gincostestimate() cope with hypothetical GIN indexes.
We tried to fetch statistics data from the index metapage, which does not
work if the index isn't actually present. If the index is hypothetical,
instead extrapolate some plausible internal statistics based on the index
page count provided by the index-advisor plugin.
There was already some code in gincostestimate() to invent internal stats
in this way, but since it was only meant as a stopgap for pre-9.1 GIN
indexes that hadn't been vacuumed since upgrading, it was pretty crude.
If we want it to support index advisors, we should try a little harder.
A small amount of testing says that it's better to estimate the entry pages
as 90% of the index, not 100%. Also, estimating the number of entries
(keys) as equal to the heap tuple count could be wildly wrong in either
direction. Instead, let's estimate 100 entries per entry page.
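In code form, the heuristics for a hypothetical index amount to this (a sketch; the function and variable names are invented):

    /*
     * For a hypothetical GIN index we can't read the metapage, so
     * extrapolate internal statistics from the advisor-supplied page count.
     */
    static void
    estimate_gin_internal_stats(double index_pages,
                                double *entry_pages, double *num_entries)
    {
        *entry_pages = 0.90 * index_pages;      /* ~90% of index is entry pages */
        *num_entries = 100.0 * (*entry_pages);  /* ~100 entries per entry page */
    }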
Perhaps someday somebody will want the index advisor to be able to provide
these numbers more directly, but for the moment this should serve.
Problem report and initial patch by Julien Rouhaud; modified by me to
invent less-bogus internal statistics. Back-patch to all supported
branches, since we've supported index advisors since 9.0.
Tom Lane [Tue, 1 Dec 2015 19:47:13 +0000 (14:47 -0500)]
Further tweaking of print_aligned_vertical().
Don't force the data width to extend all the way to the right margin if it
doesn't need to. This reverts the behavior in non-wrapping cases to be
what it was in 9.4. Also, make the logic that ensures the data line width
is at least equal to the record-header line width a little less obscure.
In passing, avoid possible calculation of log10(0). Probably that's
harmless, given the lack of field complaints, but it seems risky:
conversion of NaN to an integer isn't well defined.
Tom Lane [Tue, 1 Dec 2015 16:42:25 +0000 (11:42 -0500)]
Use "g" not "f" format in ecpg's PGTYPESnumeric_from_double().
The previous coding could overrun the provided buffer size for a very large
input, or lose precision for a very small input. Adopt the methodology
that's been in use in the equivalent backend code for a long time.
Per private report from Bas van Schaik. Back-patch to all supported
branches.
Tom Lane [Tue, 1 Dec 2015 16:07:29 +0000 (11:07 -0500)]
Further adjustment to psql's print_aligned_vertical() function.
We should ignore output_columns unless it's greater than zero.
A zero means we couldn't get any information from ioctl(TIOCGWINSZ);
in that case the expected behavior is to print the data at native width,
not to wrap it at the smallest possible value. print_aligned_text()
gets this consideration right, but print_aligned_vertical() lost track
of this detail somewhere along the line.
Teodor Sigaev [Tue, 1 Dec 2015 15:56:44 +0000 (18:56 +0300)]
Use pg_rewind when target timeline was switched
Allow pg_rewind to work when the target timeline was switched. Now a
user can return a promoted standby to the old master.
The target timeline history becomes a global variable. An index into the
target timeline history is used in function interfaces instead of
specifying the TLI directly. Thus, SimpleXLogPageRead() can easily start
reading XLOG from the next timeline when the current timeline ends.
Author: Alexander Korotkov
Review: Michael Paquier
Tom Lane [Mon, 30 Nov 2015 22:53:32 +0000 (17:53 -0500)]
Rework wrap-width calculation in psql's print_aligned_vertical() function.
This area was rather heavily whacked around in 6513633b9 and follow-on
commits, and it was showing it, because the logic to calculate the
allowable data width in wrapped expanded mode had only the vaguest
relationship to the logic that was actually printing the data. It was
not very close to being right about the conditions requiring overhead
columns to be added. Aside from being wrong, it was pretty unreadable
and under-commented. Rewrite it so it corresponds to what the printing
code actually does.
In passing, remove a couple of dead tests in the printing logic, too.
Per a complaint from Jeff Janes, though this doesn't look much like his
patch because it fixes a number of other corner-case bogosities too.
One such fix that's visible in the regression test results is that
although the code was attempting to enforce a minimum data width of
3 columns, it sometimes left less space than that available.
Tom Lane [Sun, 29 Nov 2015 23:18:42 +0000 (18:18 -0500)]
Avoid caching expression state trees for domain constraints across queries.
In commit 8abb3cda0ddc00a0ab98977a1633a95b97068d4e I attempted to cache
the expression state trees constructed for domain CHECK constraints for
the life of the backend (assuming the domain's constraints don't get
redefined). However, this turns out not to work very well, because
execQual.c will run those state trees with ecxt_per_query_memory pointing
to a query-lifespan context, and in some situations we'll end up with
pointers into that context getting stored into the state trees. This
happens in particular with SQL-language functions, as reported by
Emre Hasegeli, but there are many other cases.
To fix, keep only the expression plan trees for domain CHECK constraints
in the typcache's data structure, and revert to performing ExecInitExpr
(at least) once per query to set up expression state trees in the query's
context.
Eventually it'd be nice to undo this, but that will require some careful
thought about memory management for expression state trees, and it seems
far too late for any such redesign in 9.5. This way is still much more
efficient than what happened before 8abb3cda0.
Tom Lane [Sat, 28 Nov 2015 18:42:27 +0000 (13:42 -0500)]
Avoid doing encoding conversions by double-conversion via MULE_INTERNAL.
Previously, we did many conversions for Cyrillic and Central European
single-byte encodings by converting to a related MULE_INTERNAL coding
scheme before converting to the destination. This seems unnecessarily
inefficient. Moreover, if the conversion encounters an untranslatable
character, the error message will confusingly complain about failure
to convert to or from MULE_INTERNAL, rather than the user-visible
encodings. Worse still, this approach results in some completely
unnecessary conversion failures; there are cases where the chosen
MULE subset lacks characters that exist in both of the user-visible
encodings, causing a conversion failure that need not occur.
This patch fixes the first two of those deficiencies by introducing
a new local2local() conversion support subroutine for direct conversion
between any two single-byte character sets, and adding new conversion
tables where needed. However, I generated the new conversion tables by
testing PG 9.5's behavior, so that the actual conversion behavior is
bug-compatible with previous releases; the only user-visible behavior
change is that the error messages for conversion failures are saner.
Changes in the conversion behavior will probably ensue after discussion.
Interestingly, although this approach requires more tables, the .so files
actually end up smaller (at least on my x86_64 machine); the tables are
smaller than the management code needed for double conversion.
Tom Lane [Fri, 27 Nov 2015 21:50:47 +0000 (16:50 -0500)]
Auto-generate file header comments in Unicode mapping files.
Some of the Unicode/*.map files had identification comments added to them,
evidently by hand. Others did not. Modify the generating scripts to
produce these comments automatically, and update the generated files that
lacked them.
This is just minor cleanup as a by-product of trying to verify that the
*.map files can indeed be reproduced from authoritative data. There are a
depressingly large number that fail to reproduce from the claimed sources.
I have not touched those in this commit, except for the JIS 2004-related
files which required only a single comment update to match.
Since this only affects comments, no need to consider a back-patch.
Tom Lane [Fri, 27 Nov 2015 19:13:53 +0000 (14:13 -0500)]
Improve PQhost() to return useful data for default Unix-socket connections.
Previously, if no host information had been specified at connection time,
PQhost() would return NULL (unless you are on Windows, in which case you
got "localhost"). This is an unhelpful definition for a couple of reasons:
it can cause corner-case crashes in applications (cf commit c5ef8ce53d),
and there's no well-defined way for applications to find out the socket
directory path that's actually in use. As an example of the latter
problem, psql substituted DEFAULT_PGSOCKET_DIR for NULL in a couple of
places, but this is subtly wrong because it's conceivable that psql is
using a libpq shared library that was built with a different setting.
Hence, change PQhost() to return DEFAULT_PGSOCKET_DIR when appropriate,
and strip out the now-dead substitutions in psql. (There is still one
remaining reference to DEFAULT_PGSOCKET_DIR in psql, in prompt.c, which
I don't see a nice way to get rid of. But it only controls a prompt
abbreviation decision, so it seems noncritical.)
Also update the docs for PQhost, which had never previously mentioned
the possibility of a socket directory path being returned. In passing
fix the outright-incorrect code comment about PGconn.pgunixsocket.
Tom Lane [Thu, 26 Nov 2015 18:23:02 +0000 (13:23 -0500)]
Fix failure to consider failure cases in GetComboCommandId().
Failure to initially palloc the comboCids array, or to realloc it bigger
when needed, left combocid's data structures in an inconsistent state that
would cause trouble if the top transaction continues to execute. Noted
while examining a user complaint about the amount of memory used for this.
(There's not much we can do about that, but it does point up that repalloc
failure has a non-negligible chance of occurring here.)
In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer
in SerializeComboCIDState; cf commit 13bba0227.
Tom Lane [Wed, 25 Nov 2015 22:31:53 +0000 (17:31 -0500)]
Be more paranoid about null return values from libpq status functions.
PQhost() can return NULL in non-error situations, namely when a Unix-socket
connection has been selected by default. That behavior is a tad debatable
perhaps, but for the moment we should make sure that psql copes with it.
Unfortunately, do_connect() failed to: it could pass a NULL pointer to
strcmp(), resulting in crashes on most platforms. This was reported as a
security issue by ChenQin of Topsec Security Team, but the consensus of
the security list is that it's just a garden-variety bug with no security
implications.
For paranoia's sake, I made the keep_password test not trust PQuser or
PQport either, even though I believe those will never return NULL given
a valid PGconn.
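The defensive pattern boils down to this (a sketch, not psql's exact code):

    #include <stdbool.h>
    #include <string.h>

    /*
     * Compare two strings that may be NULL: two NULLs compare equal,
     * NULL vs. non-NULL unequal, and nothing NULL ever reaches strcmp().
     */
    static bool
    str_equal_or_both_null(const char *a, const char *b)
    {
        if (a == NULL || b == NULL)
            return a == b;
        return strcmp(a, b) == 0;
    }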
Tom Lane [Wed, 25 Nov 2015 21:05:57 +0000 (16:05 -0500)]
Improve div_var_fast(), mostly by making comments better.
The integer overflow situation in div_var_fast() is a great deal more
complicated than the pre-existing comments would suggest. Moreover, the
comments were also flat out incorrect as to the precise statement of the
maxdiv loop invariant. Upon clarifying that, it becomes apparent that the
way in which we updated maxdiv after a carry propagation pass was overly
slow, complex, and conservative: we can just reset it to one, which is much
easier and also reduces the number of times carry propagation occurs.
Fix that and improve the relevant comments.
Since this is mostly a comment fix, with only a rather marginal performance
boost, no need for back-patch.
Tom Lane [Sun, 22 Nov 2015 01:21:31 +0000 (20:21 -0500)]
Adopt the GNU convention for handling tar-archive members exceeding 8GB.
The POSIX standard for tar headers requires archive member sizes to be
printed in octal with at most 11 digits, limiting the representable file
size to 8GB. However, GNU tar and apparently most other modern tars
support a convention in which oversized values can be stored in base-256,
allowing any practical file to be a tar member. Adopt this convention
to remove two limitations:
* pg_dump with -Ft output format failed if the contents of any one table
exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding
8GB. (This would be a fatal problem for installations configured with a
table segment size of 8GB or more, and it has also been seen to fail when
large core dump files exist in the data directory.)
File sizes under 8GB are still printed in octal, so that no compatibility
issues are created except in cases that would have failed entirely before.
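The convention, in outline (a hedged sketch of filling a 12-byte tar size field; not the actual pg_dump code):

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Write a tar member size into the 12-byte header field: plain
     * 11-digit octal below 8GB, GNU base-256 (high bit of the first byte
     * set, value big-endian in the remaining bytes) at 8GB and above.
     */
    static void
    tar_write_size(char field[12], uint64_t size)
    {
        if (size < (UINT64_C(1) << 33))
            snprintf(field, 12, "%011llo", (unsigned long long) size);
        else
        {
            field[0] = (char) 0x80;             /* base-256 marker */
            for (int i = 11; i >= 1; i--)
            {
                field[i] = (char) (size & 0xFF);
                size >>= 8;
            }
        }
    }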
In addition, this patch fixes several bugs in the same area:
* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
size_t, which meant that on 32-bit machines it would write a corrupt tar
header for file sizes between 4GB and 8GB, even though no error was raised.
This broke both "pg_dump -Ft" and pg_basebackup for such cases.
* pg_restore from a tar archive would fail on tables of size between 4GB
and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits.
This happened even with an archive file not affected by the previous bug.
* pg_basebackup would fail if there were files of size between 4GB and 8GB,
even on 64-bit machines.
* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size,
on 64-bit big-endian machines.
In view of these potential data-loss bugs, back-patch to all supported
branches, even though removal of the documented 8GB limit might otherwise
be considered a new feature rather than a bug fix.
Tom Lane [Fri, 20 Nov 2015 19:55:28 +0000 (14:55 -0500)]
Fix handling of inherited check constraints in ALTER COLUMN TYPE (again).
The previous way of reconstructing check constraints was to do a separate
"ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance
hierarchy. However, that way has no hope of reconstructing the check
constraints' own inheritance properties correctly, as pointed out in
bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do
a regular "ALTER TABLE", allowing recursion, at the topmost table that
has a particular constraint, and then suppress the work queue entries
for inherited instances of the constraint.
Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf,
but we failed to notice that it wasn't reconstructing the pg_constraint
field values correctly.
As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to
always schema-qualify the target table name; this seems like useful backup
to the protections installed by commit 5f173040.
In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused.
(I could alternatively have modified it to also return conislocal, but that
seemed like a pretty single-purpose API, so let's not pretend it has some
other use.) It's unused in the back branches as well, but I left it in
place just in case some third-party code has decided to use it.
In HEAD/9.5, also rename pg_get_constraintdef_string to
pg_get_constraintdef_command, as the previous name did nothing to explain
what that entry point did differently from others (and its comment was
equally useless). Again, that change doesn't seem like material for
back-patching.
I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.
Robert Haas [Fri, 20 Nov 2015 18:03:39 +0000 (13:03 -0500)]
Avoid server crash when worker registration fails at execution time.
The previous coding attempts to destroy the DSM in this case, but
child nodes might have stored data there and still be holding onto
pointers in this case. So don't do that.
Also, free the reader array instead of leaking it.
Extracted from two different patch versions both by Amit Kapila.
Tom Lane [Thu, 19 Nov 2015 19:54:05 +0000 (14:54 -0500)]
Dodge a macro-name conflict with Perl.
Some versions of Perl export a macro named HS_KEY. This creates a
conflict in contrib/hstore_plperl against hstore's macro of the same
name. The most future-proof solution seems to be to rename our macro;
I chose HSTORE_KEY. For consistency, rename HS_VAL and related macros
similarly.
Back-patch to 9.5. contrib/hstore_plperl doesn't exist before that
so there is no need to worry about the conflict in older releases.