Robert Haas [Mon, 5 Dec 2016 20:54:28 +0000 (15:54 -0500)]
Ensure gatherstate->nextreader is properly initialized.
The previous code worked OK as long as a Gather node was never
rescanned, or if it was rescanned, as long as it got at least as
many workers on rescan as it had originally. But if the number
of workers ever decreased on a rescan, then it could crash.
Fujii Masao [Mon, 5 Dec 2016 11:29:41 +0000 (20:29 +0900)]
Fix incorrect output from gin_desc().
Previously gin_desc() displayed incorrect output "unknown action 0"
for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with
valid actions. The cause of this problem was that gin_desc() wrongly
used XLogRecGetData() to extract data from those records.
Since they were registered by XLogRegisterBufData(), gin_desc() should
have used XLogRecGetBlockData() instead, like gin_redo().
There were also other differences between gin_desc() and gin_redo()
in how they treated XLOG_GIN_INSERT records.
This commit fixes gin_desc() routine so that it treats those records
in the same way as gin_redo().
Back-patch to 9.5, where the WAL record format was revamped and
XLogRegisterBufData() was added.
Reported-By: Andres Freund
Reviewed-By: Tom Lane
Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>
Tom Lane [Sun, 4 Dec 2016 20:02:27 +0000 (15:02 -0500)]
Don't mess up pstate->p_next_resno in transformOnConflictClause().
transformOnConflictClause incremented p_next_resno while generating the
phony targetlist for the EXCLUDED pseudo-rel. Then that field got
incremented some more during transformTargetList, possibly leading to
free_parsestate concluding that we'd overrun the allowed length of a tlist,
as reported by Justin Pryzby.
We could fix this by resetting p_next_resno to 1 after using it for the
EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other
places if we just don't use that field at all in this loop. (Note that
this doesn't change anything about the resnos that end up appearing in
the main target list, because those are all replaced with target-column
numbers by updateTargetListEntry.)
In passing, fix incorrect type OID assigned to the whole-row Var for
"EXCLUDED.*" (somehow this escaped having any bad consequences so far,
but it's certainly wrong); remove useless assignment to var->location;
pstrdup the column names in case of a relcache flush; and improve
nearby comments.
Back-patch to 9.5 where ON CONFLICT was introduced.
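A minimal sketch of the construct involved (table and column names are
illustrative): the phony targetlist built for the EXCLUDED pseudo-rel
consumed resnos that the main targetlist then kept counting from.

    CREATE TABLE upsert_t (k int PRIMARY KEY, a int, b int, c int);
    INSERT INTO upsert_t VALUES (1, 2, 3, 4)
      ON CONFLICT (k) DO UPDATE
      SET a = EXCLUDED.a, b = EXCLUDED.b, c = EXCLUDED.c;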
Noah Misch [Sat, 3 Dec 2016 20:46:36 +0000 (15:46 -0500)]
Make pgwin32_putenv() visit debug CRTs.
This has no effect in the most conventional case, where no relevant DLL
uses a debug build. For an example where it does matter, given a debug
build of MIT Kerberos, the krb_server_keyfile parameter usually had no
effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all
supported versions).
Noah Misch [Sat, 3 Dec 2016 20:46:35 +0000 (15:46 -0500)]
Remove wrong CloseHandle() call.
In accordance with its own documentation, invoke CloseHandle() only when
directed in the documentation for the function that furnished the
handle. GetModuleHandle() does not so direct. We have been issuing
this call only in the rare event that a CRT DLL contains no "_putenv"
symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all
supported versions).
Noah Misch [Sat, 3 Dec 2016 20:46:35 +0000 (15:46 -0500)]
Refine win32env.c cosmetics.
Replace use of plain 0 as a null pointer constant. In comments, update
terminology and lessen redundancy. Back-patch to 9.2 (all supported
versions) for the convenience of back-patching the next two commits.
Christian Ullrich and Noah Misch, reviewed (in earlier versions) by
Michael Paquier.
Alvaro Herrera [Fri, 2 Dec 2016 03:34:01 +0000 (00:34 -0300)]
Permit dump/reload of not-too-large >1GB tuples
Our documentation states that our maximum field size is 1 GB, and that
our maximum row size is 1.6 TB. However, while this might be attainable
in theory with enough contortions, it is not workable in practice; for
starters, pg_dump fails to dump tables containing rows larger than 1 GB,
even if individual columns are well below the limit; and even if one
does manage to manufacture a dump file containing a row that large, the
server refuses to load it anyway.
This commit enables dumping and reloading of such tuples, provided two
conditions are met:
1. no single column is larger than 1 GB (in output size -- for bytea
this includes the formatting overhead)
2. the whole row is not larger than 2 GB
There are three related changes to enable this:
a. StringInfo's API now has two additional functions that allow creating
a string that grows beyond the typical 1GB limit (a "long" string).
ABI compatibility is maintained. We still limit these strings to 2 GB,
though, for reasons explained below.
b. COPY now uses long StringInfos, so that pg_dump doesn't choke
trying to emit rows longer than 1GB.
c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation
for the input tuple, which means that large tuples are accepted on
input. Note that at this point we do not apply any further limit to the
input tuple size.
The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit
length words to describe each row; and because the documentation is
ambiguous on its signedness and libpq does consider it signed, we cannot
use the highest-order bit. Additionally, the StringInfo API uses "int"
(which is 4 bytes wide on most platforms) in many places, so we'd need
to change that API too in order to improve matters, which would have
lots of fallout.
Backpatch to 9.5, which is the oldest that has
MemoryContextAllocExtended, a necessary piece of infrastructure. We
could apply this to 9.4 with very minimal additional effort, but any further
than that would require backpatching "huge" allocations too.
This is the largest set of changes we could find that can be
back-patched without breaking compatibility with existing systems.
Fixing a bigger set of problems (for example, dumping tuples bigger than
2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
protocol and/or changing the StringInfo API in an ABI-incompatible way,
neither of which would be back-patchable.
Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
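A hedged illustration of a row that previously broke pg_dump: each
column stays under the 1 GB field limit, but the whole row exceeds
1 GB (building it requires correspondingly large amounts of memory):

    CREATE TABLE wide_row (a text, b text);
    INSERT INTO wide_row
      SELECT repeat('x', 700 * 1024 * 1024),   -- ~700 MB, below 1 GB
             repeat('x', 700 * 1024 * 1024);   -- row total ~1.4 GB, below 2 GB
    -- pg_dump of wide_row failed before this commit; it now works because
    -- COPY uses "long" StringInfos for rows between 1 GB and 2 GB.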
Tom Lane [Wed, 30 Nov 2016 18:34:14 +0000 (13:34 -0500)]
Doc: improve description of trim() and related functions.
Per bug #14441 from Mark Pether, the documentation could be misread,
mainly because some of the examples failed to show what happens with
a multicharacter "characters to trim" string. Also, while the text
description in most of these entries was fairly clear that the
"characters" argument is a set of characters not a substring to match,
some of them used variant wording that was a bit less clear.
trim() itself suffered from both deficiencies and was thus pretty
misinterpretable.
Also fix failure to explain which of LEADING/TRAILING/BOTH is the
default.
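For instance (this mirrors the kind of example the clarified docs show;
BOTH is the default):

    SELECT trim(BOTH 'xyz' FROM 'yxTomxx');
    -- yields 'Tom': 'xyz' is a set of characters to strip, not a substring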
Tom Lane [Wed, 30 Nov 2016 00:32:35 +0000 (19:32 -0500)]
Fix bogus handling of JOIN_UNIQUE_OUTER/INNER cases for parallel joins.
consider_parallel_nestloop passed the wrong jointype down to its
subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it
thought that it could pass paths other than innerrel->cheapest_total_path
to create_unique_path, which create_unique_path is not on board with.
These bugs would lead to assertion failures or other errors, suggesting
that this code path hasn't been tested much.
hash_inner_and_outer's code for parallel join effectively treated both
JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for
different reasons :-(), leading to incorrect plans that treated a semijoin
as if it were a plain join.
Michael Day submitted a test case demonstrating that hash_inner_and_outer
failed for JOIN_UNIQUE_OUTER, and I found the other cases through code
review.
Stephen Frost [Tue, 29 Nov 2016 15:35:07 +0000 (10:35 -0500)]
Clarify pg_dump -b documentation
The documentation around the -b/--blobs option to pg_dump seemed to
imply that it might be possible to add blobs to a "schema-only" dump or
similar. Clarify that blobs are data and therefore will only be
included in dumps where data is being included, even when -b is used to
request blobs be included.
The -b option has been around since before 9.2, so back-patch to all
supported branches.
Tom Lane [Sat, 26 Nov 2016 18:31:35 +0000 (13:31 -0500)]
Fix test about ignoring extension dependencies during extension scripts.
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts. However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects. Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one. (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)
To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.
Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level. However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check). Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.
Back-patch to all active branches. In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.
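A hedged sketch of the bug-#14434 pattern (names illustrative), as run
from within an extension's update script:

    -- ext_tbl has a serial column, so this drop recurses to the owned
    -- sequence ext_tbl_id_seq, itself an extension member; the
    -- creating_extension exception must apply at that inner recursion
    -- level too, not only at the outermost one.
    DROP TABLE ext_tbl;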
Tom Lane [Fri, 25 Nov 2016 23:36:10 +0000 (18:36 -0500)]
Bring some clarity to the defaults for the xxx_flush_after parameters.
Instead of confusingly stating platform-dependent defaults for these
parameters in the comments in postgresql.conf.sample (with the main
entry being a lie on Linux), teach initdb to install the correct
platform-dependent value in postgresql.conf, similarly to the way
we handle other platform-dependent defaults. This won't do anything
for existing 9.6 installations, but since it's effectively only a
documentation improvement, that seems OK.
Since this requires initdb to have access to the default values,
move the #define's for those to pg_config_manual.h; the original
placement in bufmgr.h is unworkable because that file can't be
included by frontend programs.
Adjust the default value for wal_writer_flush_after so that it is 1MB
regardless of XLOG_BLCKSZ, conforming to what is stated in both the
SGML docs and postgresql.conf. (We could alternatively make it scale
with XLOG_BLCKSZ, but I'm not sure I see the point.)
Copy-edit related SGML documentation.
Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.
Tom Lane [Fri, 25 Nov 2016 21:20:12 +0000 (16:20 -0500)]
Mark a query's topmost Paths parallel-unsafe if they will have initPlans.
Andreas Seltenreich found another case where we were being too optimistic
about allowing a plan to be considered parallelizable despite it containing
initPlans. It seems like the real issue here is that if we know we are
going to tack initPlans onto the topmost Plan node for a subquery, we
had better mark that subquery's result Paths as not-parallel-safe. That
fixes this problem and allows reversion of a kluge (added in commit 7b67a0a49 and extended in f24cf960d) to not trust the parallel_safe flag
at top level.
Tom Lane [Fri, 25 Nov 2016 18:44:47 +0000 (13:44 -0500)]
Check for pending trigger events on far end when dropping an FK constraint.
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
we refuse the drop if there are any pending trigger events on the named
table; this ensures that we won't remove the pg_trigger row that will be
consulted by those events. But we should make the same check for the
referenced relation, else we might remove a due-to-be-referenced pg_trigger
row for that relation too, resulting in "could not find trigger NNN" or
"relation NNN has no triggers" errors at commit. Per bug #14431 from
Benjie Gillam. Back-patch to all supported branches.
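A hedged reproduction sketch (names illustrative; the constraint is
assumed DEFERRABLE so that trigger events stay queued):

    BEGIN;
    SET CONSTRAINTS ALL DEFERRED;
    DELETE FROM pk_table WHERE k = 1;  -- queues RI trigger events on pk_table
    ALTER TABLE fk_table DROP CONSTRAINT fk_table_k_fkey;  -- now refused here
    COMMIT;   -- formerly could fail with "could not find trigger NNN"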
Alvaro Herrera [Thu, 24 Nov 2016 18:39:55 +0000 (15:39 -0300)]
Fix commit_ts for FrozenXid and BootstrapXid
Previously, requesting commit timestamp for transactions
FrozenTransactionId and BootstrapTransactionId resulted in an error.
But since those values can validly appear in committed tuples' Xmin,
this behavior is unhelpful and error prone: each caller would have to
special-case those values before requesting timestamp data for an Xid.
We already have a perfectly good interface for returning "the Xid you
requested is too old for us to have commit TS data for it", so let's use
that instead.
Backpatch to 9.5, where commit timestamps appeared.
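For example (FrozenTransactionId is xid 2; this assumes
track_commit_timestamp is on):

    SELECT pg_xact_commit_timestamp('2'::xid);
    -- now returns NULL, the same answer as for any Xid too old to have
    -- commit timestamp data, instead of raising an error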
Tom Lane [Wed, 23 Nov 2016 18:45:56 +0000 (13:45 -0500)]
Make sure ALTER TABLE preserves index tablespaces.
When rebuilding an existing index, ALTER TABLE correctly kept the
physical file in the same tablespace, but it messed up the pg_class
entry if the index had been in the database's default tablespace
and "default_tablespace" was set to some non-default tablespace.
This led to an inaccessible index.
Fix by fixing pg_get_indexdef_string() to always include a tablespace
clause, whether or not the index is in the default tablespace. The
previous behavior was installed in commit 537e92e41, and I think it just
wasn't thought through very clearly; certainly the possible effect of
default_tablespace wasn't considered. There's some risk in changing the
behavior of this function, but there are no other call sites in the core
code. Even if it's being used by some third party extension, it's fairly
hard to envision a usage that is okay with a tablespace clause being
appended some of the time but can't handle it being appended all the time.
Back-patch to all supported versions.
Code fix by me, investigation and test cases by Michael Paquier.
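A hedged sketch of the failing sequence (alt_ts is an assumed existing
tablespace; t's indexes start out in the database's default tablespace):

    SET default_tablespace = alt_ts;
    ALTER TABLE t ALTER COLUMN k TYPE bigint;  -- rebuilds indexes on k
    -- Before the fix, the rebuilt index's file correctly stayed put, but
    -- its pg_class entry claimed alt_ts, leaving the index inaccessible.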
Tom Lane [Tue, 22 Nov 2016 23:07:43 +0000 (18:07 -0500)]
Doc: in back branches, don't call it a row constructor if it isn't really.
Before commit 906bfcad7, we were not actually processing the righthand
side of a multiple-column assignment in UPDATE as a row constructor:
it was just a parenthesized list of expressions. Call it that rather
than risking confusion by people who would expect the documented behaviors
of row constructors to apply.
Back-patch to 9.5; before that, the text correctly described the construct
as a "list of independent expressions".
Tom Lane [Tue, 22 Nov 2016 22:56:16 +0000 (17:56 -0500)]
Doc: improve documentation about composite-value usage.
Create a section specifically for the syntactic rules around whole-row
variable usage, such as expansion of "foo.*". This was previously
documented only haphazardly, with some critical info buried in
unexpected places like xfunc-sql-composite-functions. Per repeated
questions in different mailing lists.
Tom Lane [Tue, 22 Nov 2016 19:02:52 +0000 (14:02 -0500)]
Doc: add a section in Part II concerning RETURNING.
There are assorted references to RETURNING in Part II, but nothing
that would qualify as an explanation of the feature, which seems
like an oversight considering how useful it is. Add something.
Noted while looking for a place to point a cross-reference to ...
Tom Lane [Mon, 21 Nov 2016 23:21:55 +0000 (18:21 -0500)]
Fix PGLC_localeconv() to handle errors better.
The code was intentionally not very careful about leaking strdup'd
strings in case of an error. That was forgivable probably, but it
also failed to notice strdup() failures, which could lead to subsequent
null-pointer-dereference crashes, since many callers unsurprisingly
didn't check for null pointers in the struct lconv fields. An even
worse problem is that it could throw error while we were setlocale'd
to a non-C locale, causing unwanted behavior in subsequent libc calls.
Rewrite to ensure that we cannot throw elog(ERROR) until after we've
restored the previous locale settings, or at least attempted to.
(I'm sorely tempted to make restore failure be a FATAL error, but
will refrain for the moment.) Having done that, it's not much more
work to ensure that we clean up strdup'd storage on the way out, too.
This code is substantially the same in all supported branches, so
back-patch all the way.
Tom Lane [Mon, 21 Nov 2016 16:09:24 +0000 (11:09 -0500)]
Fix test for subplans in force-parallel mode.
We mustn't force parallel mode if the query has any subplans, since
ExecSerializePlan doesn't transmit them to workers. Testing
top_plan->initPlan is inadequate because (1) there might be initPlans
attached to lower plan nodes, and (2) non-initPlan subplans don't
work either. There's certainly room for improvement in those
restrictions, but for the moment that's what we've got.
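A hedged sketch of the query shape at issue (t1, t2 illustrative):

    SET force_parallel_mode = on;
    -- the correlated sub-select becomes a subplan, which
    -- ExecSerializePlan cannot transmit to a worker, so the planner
    -- must not force this query into parallel mode
    SELECT x, (SELECT y FROM t2 WHERE t2.id = t1.id) FROM t1;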
Tom Lane [Sun, 20 Nov 2016 19:26:19 +0000 (14:26 -0500)]
Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.
Because we use transformTargetList() for UPDATE as well as SELECT
tlists, the code accidentally tried to expand a "*" reference into
several columns. This is nonsensical, because the UPDATE syntax
provides exactly one target column to put the value into. The
immediate result was that transformUpdateTargetList() got confused
and reported "UPDATE target count mismatch --- internal error".
It seems better to treat such a reference as a plain whole-row
variable, as it would be in other contexts. (This could produce
useful results when the target column is of composite type.)
Fix by tweaking transformTargetList() to perform *-expansion only
conditionally, depending on its exprKind parameter.
Back-patch to 9.3. The problem exists further back, but a fix would be
much more invasive before that, because transformTargetList() wasn't
told what kind of list it was working on. Doesn't seem worth the
trouble given the lack of field reports. (I only noticed it because
I was checking the code while trying to improve the documentation about
how we handle "foo.*".)
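A hedged sketch of the formerly-failing shape (c is assumed to be a
composite column matching s's row type):

    UPDATE t SET c = s.* FROM s WHERE t.id = s.id;
    -- "s.*" is now kept as a whole-row variable rather than expanded
    -- into multiple columns; previously this drew
    -- "UPDATE target count mismatch --- internal error"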
Tom Lane [Sat, 19 Nov 2016 19:26:19 +0000 (14:26 -0500)]
Code review for GUC serialization/deserialization code.
The serialization code dumped core for a string-valued GUC whose value
is NULL, which is a legal state. The infrastructure isn't capable of
transmitting that state exactly, but fortunately, transmitting an empty
string instead should be close enough (compare, eg, commit e45e990e4).
The code potentially underestimated the space required to format a
real-valued variable, both because it made an unwarranted assumption that
%g output would never be longer than %e output, and because it didn't count
right even for %e format. In practice this would pretty much always be
masked by overestimates for other variables, but it's still wrong.
Also fix boundary-case error in read_gucstate, incorrect handling of the
case where guc_sourcefile is non-NULL but zero length (not clear that can
happen, but if it did, this code would get totally confused), and
confusingly useless check for a NULL result from read_gucstate.
Andreas Seltenreich discovered the core dump; other issues noted while
reading nearby code. Back-patch to 9.5 where this code was introduced.
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified. That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.
Back-patch to 9.4 where this option was introduced.
Alvaro Herrera [Thu, 17 Nov 2016 16:31:30 +0000 (13:31 -0300)]
Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
require complex interlocking that matched the requirements on the
master. This required an O(N) operation that became a significant
problem with large indexes, causing replication delays of seconds or in
some cases minutes while the XLOG_BTREE_VACUUM was replayed.
This commit skips the "pin scan" that was previously required, by
observing in detail when and how it is safe to do so, with full
documentation. The pin scan is skipped only in replay; the VACUUM code
path on master is not touched here.
No tests included. Manual tests using an additional patch to view WAL
records and their timing have shown that the change in WAL records and
their handling successfully reduces replication delay.
This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375
by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is
possible because this depends on catalog scans being MVCC. I (Álvaro)
additionally updated a slight problem in the README, which explains why
this touches the 9.6 and master branches.
Tom Lane [Tue, 15 Nov 2016 21:17:19 +0000 (16:17 -0500)]
Allow DOS-style line endings in ~/.pgpass files.
On Windows, libc will mask \r\n line endings for us, since we read the
password file in text mode. But that doesn't happen on Unix. People
who share password files across both systems might have \r\n line endings
in a file they use on Unix, so as a convenience, ignore trailing \r.
Per gripe from Josh Berkus.
In passing, put the existing check for empty line somewhere where it's
actually useful, ie after stripping the newline not before.
Tom Lane [Tue, 15 Nov 2016 20:55:35 +0000 (15:55 -0500)]
Account for catalog snapshot in PGXACT->xmin updates.
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
for whether MyPgXact->xmin could be cleared or advanced. In normal
transactions this was masked by the fact that the transaction snapshot
would be older, but during backend startup and certain utility commands
it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
been cleared, meaning that recently-deleted rows could be pruned even
though this snapshot could still see them, causing unexpected catalog
lookup failures. This effect appears to be the explanation for a recent
failure on buildfarm member piculet.
To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
it is valid.
In the previous logic, it was possible for the CatalogSnapshot to remain
valid across waits for client input, but with this change that would mean
it delays advance of global xmin in cases where it did not before. To
avoid possibly causing new table-bloat problems with clients that sit idle
for long intervals, add code to invalidate the CatalogSnapshot before
waiting for client input. (When the backend is busy, it's unlikely that
the CatalogSnapshot would be the oldest snap for very long, so we don't
worry about forcing early invalidation of it otherwise.)
In passing, remove the CatalogSnapshotStale flag in favor of using
"CatalogSnapshot != NULL" to represent validity, as we do for the other
special snapshots in snapmgr.c. And improve some obsolete comments.
No regression test because I don't know a deterministic way to cause this
failure. But the stress test shown in the original discussion provokes
"cache lookup failed for relation 1255" within a few dozen seconds for me.
Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's
quite easy to produce similar failures with the same test case in branches
before 9.4. But MVCC catalog scans were supposed to fix that.)
Alvaro Herrera [Mon, 14 Nov 2016 14:14:34 +0000 (11:14 -0300)]
Fix duplication in ALTER MATERIALIZED VIEW synopsis
Commit 3c4cf080879b should have removed SET TABLESPACE from the synopsis
of ALTER MATERIALIZED VIEW as a possible "action" when it added a
separate line for it in the main command listing, but failed to.
Repair.
Tom Lane [Sun, 13 Nov 2016 18:12:35 +0000 (13:12 -0500)]
Doc: remove obsolete example.
The documentation for ts_headline() recommends using a sub-select to
avoid extra evaluations of ts_headline() in a query with ORDER BY+LIMIT.
Since commit 9118d03a8 this contortionism is unnecessary, so remove the
recommendation. Noted by Oleg Bartunov.
Tom Lane [Fri, 11 Nov 2016 17:03:49 +0000 (12:03 -0500)]
Doc: fix data types of FuncCallContext's call_cntr and max_calls fields.
Commit 23a27b039 widened these from uint32 to uint64, but I overlooked
that the documentation explicitly showed them as uint32. Per report
from Vicky Vergara.
Tom Lane [Thu, 10 Nov 2016 20:00:58 +0000 (15:00 -0500)]
Re-allow user_catalog_table option for materialized views.
The reloptions stuff allows this option to be set on a matview.
While it's questionable whether that is useful or was really intended,
it does work, and we shouldn't change that in minor releases.
Commit e3e66d8a9 disabled the option since I didn't realize that it was
possible for it to be set on a matview. Tweak the test to re-allow it.
Tom Lane [Thu, 10 Nov 2016 16:31:56 +0000 (11:31 -0500)]
Fix partial aggregation for the case of a degenerate GROUP BY clause.
The plan generated for sorted partial aggregation with "GROUP BY constant"
included a Sort node with no sort keys, which the executor does not like.
Per report from Steve Randall. I'd add a regression test case if I could
think of a compact one, but it doesn't seem worth expending lots of cycles
on.
Noah Misch [Tue, 8 Nov 2016 01:27:30 +0000 (20:27 -0500)]
Change qr/foo$/m to qr/foo\n/m, for Perl 5.8.8.
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug. Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version. Commit
e7293e3271bf618eeb2d4779a15fc516a69fe463 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code. As that
commit had shown, slight increases in pattern complexity can suppress
the bug. This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes. Back-patch to 9.6, where test_pg_dump was introduced.
As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8. The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
Tom Lane [Mon, 7 Nov 2016 17:08:19 +0000 (12:08 -0500)]
Band-aid fix for incorrect use of view options as StdRdOptions.
We really ought to make StdRdOptions and the other decoded forms of
reloptions self-identifying, but for the moment, assume that only plain
relations could possibly be user_catalog_tables. Fixes problem with bogus
"ON CONFLICT is not supported on table ... used as a catalog table" error
when target is a view with cascade option.
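A hedged reproduction sketch (names illustrative):

    CREATE TABLE base_t (x int PRIMARY KEY);
    CREATE VIEW v AS SELECT * FROM base_t WHERE x > 0
      WITH CASCADED CHECK OPTION;
    INSERT INTO v VALUES (1) ON CONFLICT DO NOTHING;
    -- formerly drew the bogus "used as a catalog table" error because the
    -- view's reloptions were decoded as if they were StdRdOptions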
Tom Lane [Sun, 6 Nov 2016 19:43:13 +0000 (14:43 -0500)]
Rationalize and document pltcl's handling of magic ".tupno" array element.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented. Fix that.
For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple. Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name. We can fix it by not
checking for ".tupno" until after we check for a column name match. This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.
In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.
Noted while working on Jim Nasby's patch to support composite results
from pltcl. The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
Tom Lane [Sun, 6 Nov 2016 17:09:36 +0000 (12:09 -0500)]
Need to do SPI_push/SPI_pop around expression evaluation in plpgsql.
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so that there
was no bug before that.
Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.
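A sketch of the regression case the commit describes (names
illustrative):

    CREATE FUNCTION check_pos(i int) RETURNS boolean
      LANGUAGE plpgsql AS $$ BEGIN RETURN i > 0; END $$;
    CREATE DOMAIN posint AS int CHECK (check_pos(VALUE));
    DO $$ DECLARE d posint; BEGIN d := 42; END $$;
    -- assigning to the domain-typed variable evaluates the domain check,
    -- which calls back into plpgsql and hence requires SPI_push/SPI_pop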
Tom Lane [Sun, 6 Nov 2016 15:45:58 +0000 (10:45 -0500)]
More zic cleanup.
The workaround the IANA guys chose to get rid of the clang warning
we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity.
Go back to the previous solution, ie, remove the useless comparison
to SIZE_MAX. (In principle, there could be machines out there where
it's not useless because ptrdiff_t is wider than size_t. But the whole
thing is pretty academic anyway, as we could never approach this limit
for any sane estimate of the amount of data that zic will ever be asked
to work with.)
Also, s/lineno/lineno_t/g, because if we accept their decision to start
using "lineno" as a typedef, it is going to have very unpleasant
consequences in our next pgindent run. Noted that while fooling with
pltcl yesterday.
Tom Lane [Fri, 4 Nov 2016 14:44:16 +0000 (10:44 -0400)]
Sync our copy of the timezone library with IANA tzcode master.
This patch absorbs some unreleased fixes for symlink manipulation bugs
introduced in tzcode 2016g. Ordinarily I'd wait around for a released
version, but in this case it seems like we could do with extra testing,
in particular checking whether it works in EDB's VMware build environment.
This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in
https://github.com/eggert/tz.
Per a report from Sandeep Thakkar, building in an environment where hard
links are not supported in the timezone data installation directory failed,
because upstream code refactoring had broken the case of symlinking from an
existing symlink. Further experimentation also showed that the symlinks
were sometimes made incorrectly, with too many or too few "../"'s in the
symlink contents.
Tom Lane [Wed, 2 Nov 2016 19:50:15 +0000 (15:50 -0400)]
Don't make FK-based selectivity estimates in inheritance situations.
The foreign-key-aware logic for estimation of join sizes (added in
commit 100340e2d) blindly tried to apply the concept to rels that are
actually
parents of inheritance trees. This is just plain wrong so far as the
referenced relation is concerned, since the inheritance scan may well
produce lots of rows that are not participating in the constraint. It's
wrong for the referencing relation too, for the same reason; although on
that end we could conceivably detect whether all members of the inheritance
tree have equivalent FK constraints pointing to the same referenced rel,
and then proceed more or less as we do now. But pending somebody writing
code to do that, we must disable this, because it's producing completely
silly estimates when there's an FK linking the heads of inheritance trees.
Per bug #14404 from Clinton Adams. Back-patch to 9.6 where the new
estimation logic came in.
Tom Lane [Wed, 2 Nov 2016 18:32:13 +0000 (14:32 -0400)]
Don't convert Consts into Vars during setrefs.c processing.
While converting expressions in an upper-level plan node so that they
reference Vars and expressions provided by the input plan node(s),
don't convert plain Const items, even if there happens to be a matching
Const in the input. It's silly to do so because a Var is more expensive to
execute than a Const. Moreover, converting can fool ExecCheckPlanOutput's
check that an insert or update query inserts nulls into dropped columns,
leading to "query provides a value for a dropped column" errors during
INSERT or UPDATE on a table with a dropped column. We could solve this
by making that check more complicated, but I don't see the point; this fix
should save a marginal number of cycles, and it also makes for less messy
EXPLAIN output, as shown by the ensuing regression test result changes.
Per report from Pavel Hanák. I have not incorporated a test case based
on that example, as there doesn't seem to be a simple way of checking
this in isolation without making a bunch of assumptions about other
planner and SQL-function behavior.
Back-patch to 9.6. This setrefs.c behavior exists much further back,
but there is currently no reason to think that it causes problems
before 9.6.
Tom Lane [Wed, 2 Nov 2016 04:09:28 +0000 (00:09 -0400)]
Fix portability bug in gin_page_opaque_info().
Somebody apparently thought that "if Int32GetDatum is good,
Int64GetDatum must be better". Per buildfarm failures now
that Peter has added some regression tests here.
Tom Lane [Sun, 30 Oct 2016 21:35:42 +0000 (17:35 -0400)]
Fix nasty performance problem in tsquery_rewrite().
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions;
for example, in the query 'a | b | c' the substitution subquery 'a | c'
should match and lead to replacement of the first and third items.
That's fine, but the matching algorithm apparently takes about O(2^N)
for an N-clause query (I say "apparently" because the code is also both
unintelligible and uncommented). We could probably do better than that
even without any extra assumptions --- but actually, we know that the
subclauses are sorted, indeed are depending on that elsewhere in this very
same function. So we can just scan the two lists a single time to detect
matches, as though we were doing a merge join.
Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just
to make sure that the tree fits the expectations of the next search cycle.
I didn't try to devise a test case for this, but I'm pretty sure that the
oversight could have led to failure to match in some cases where a match
would be expected.
Improve comments, and also stick a CHECK_FOR_INTERRUPTS into
dofindsubquery, just in case it's still too slow for somebody.
Per report from Andreas Seltenreich. Back-patch to all supported branches.
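The commit's own example, in runnable form (the ordering of the
surviving OR arms may vary):

    SELECT ts_rewrite('a | b | c'::tsquery,
                      'a | c'::tsquery,
                      'd'::tsquery);
    -- the 'a | c' subset matches, so the result is 'b' | 'd'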
Tom Lane [Sun, 30 Oct 2016 19:24:40 +0000 (15:24 -0400)]
Fix bogus tree-flattening logic in QTNTernary().
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c',
which is all well and good, but it tries to do that to NOT nodes as well,
so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to
be done only on AND and OR nodes, and add a test case illustrating the bug.
In passing, provide some comments for the sadly naked functions in
tsquery_util.c, and simplify some baroque logic in QTNFree(), which
I think may have been leaking some items it intended to free.
Noted while investigating a complaint from Andreas Seltenreich.
Back-patch to all supported versions.
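The test case boils down to (hedged sketch):

    SELECT '!!a'::tsquery;
    -- must yield !!'a'; the flattening bug collapsed it to !'a'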
Tom Lane [Sun, 30 Oct 2016 16:27:41 +0000 (12:27 -0400)]
Improve speed of aggregates that use array_append as transition function.
In the previous coding, if an aggregate's transition function returned an
expanded array, nodeAgg.c and nodeWindowAgg.c would always copy it and thus
force it into the flat representation. This led to ping-ponging between
flat and expanded formats, which costs a lot. For an aggregate using
array_append as transition function, I measured about a 15X slowdown
compared to the pre-9.5 code, when working on simple int[] arrays.
Of course, the old code was already O(N^2) in this usage due to copying
flat arrays all the time, but it wasn't quite this inefficient.
To fix, teach nodeAgg.c and nodeWindowAgg.c to allow expanded transition
values without copying, so long as the transition function takes care to
return the transition value already properly parented under the aggcontext.
That puts a bit of extra responsibility on the transition function, but
doing it this way allows us to not need any extra logic in the fast path
of advance_transition_function (ie, with a pass-by-value transition value,
or with a modified-in-place pass-by-reference value). We already know
that that's a hot spot so I'm loath to add any cycles at all there. Also,
while only array_append currently knows how to follow this convention,
this solution allows other transition functions to opt-in without needing
to have a whitelist in the core aggregation code.
(The reason we would need a whitelist is that currently, if you pass a
R/W expanded-object pointer to an arbitrary function, it's allowed to do
anything with it including deleting it; that breaks the core agg code's
assumption that it should free discarded values. Returning a value under
aggcontext is the transition function's signal that it knows it is an
aggregate transition function and will play nice. Possibly the API rules
for expanded objects should be refined, but that would not be a
back-patchable change.)
With this fix, an aggregate using array_append is no longer O(N^2), so it's
much faster than pre-9.5 code rather than much slower. It's still a bit
slower than the bespoke infrastructure for array_agg, but the differential
seems to be only about 10%-20% rather than orders of magnitude.
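The classic usage this speeds up looks like this (a sketch following the
documentation's array_accum example):

    CREATE AGGREGATE array_accum(anyelement) (
      sfunc = array_append,
      stype = anyarray,
      initcond = '{}'
    );
    SELECT array_accum(i) FROM generate_series(1, 100000) AS i;
    -- no longer O(N^2): array_append returns its expanded result parented
    -- under the aggcontext, so nodeAgg.c can skip the flattening copy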
Robert Haas [Thu, 27 Oct 2016 15:19:51 +0000 (11:19 -0400)]
Fix possible pg_basebackup failure on standby with "include WAL".
If a restartpoint flushed no dirty buffers, it could fail to update
the minimum recovery point, leading to a minimum recovery point prior
to the starting REDO location. perform_base_backup() would interpret
that as meaning that no WAL files at all needed to be included in the
backup, failing an internal sanity check. To fix, have restartpoints
always update the minimum recovery point to just after the checkpoint
record itself, so that the file (or files) containing the checkpoint
record will always be included in the backup.
Code by Amit Kapila, per a design suggestion by me, with some
additional work on the code comment by me. Test case by Michael
Paquier. Report by Kyotaro Horiguchi.
Tom Lane [Wed, 26 Oct 2016 21:05:06 +0000 (17:05 -0400)]
Fix incorrect trigger-property updating in ALTER CONSTRAINT.
The code to change the deferrability properties of a foreign-key constraint
updated all the associated triggers to match; but a moment's examination of
the code that creates those triggers in the first place shows that only
some of them should track the constraint's deferrability properties. This
leads to odd failures in subsequent exercise of the foreign key, as the
triggers are fired at the wrong times. Fix that, and add a regression test
comparing the trigger properties produced by ALTER CONSTRAINT with those
you get by creating the constraint as-intended to begin with.
Per report from James Parks. Back-patch to 9.4 where this ALTER
functionality was introduced.
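The command whose trigger updates were overbroad (a hedged sketch, names
illustrative):

    ALTER TABLE orders ALTER CONSTRAINT orders_customer_id_fkey
      DEFERRABLE INITIALLY DEFERRED;
    -- only some of the constraint's implementation triggers should track
    -- the new deferrability; the rest must keep their original timing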
Tom Lane [Wed, 26 Oct 2016 17:40:41 +0000 (13:40 -0400)]
Fix not-HAVE_SYMLINK code in zic.c.
I broke this in commit f3094920a. Apparently it's dead code anyway,
at least as far as our buildfarm is concerned (and the upstream IANA
code doesn't worry at all about symlink() not being present).
But as long as the rest of our code is willing to guard against not
having symlink(), this should too. Noted while investigating a
tangentially-related complaint from Sandeep Thakkar.
Tom Lane [Wed, 26 Oct 2016 15:46:25 +0000 (11:46 -0400)]
Doc: improve documentation about inheritance.
Clarify documentation about inheritance of check constraints, in
particular mentioning the NO INHERIT option, which didn't exist when
this text was written.
Document that in an inherited query, the applicable row security policies
are those of the explicitly-named table, not its children. This is the
intended behavior (per off-list discussion with Stephen Frost), and there
are regression tests for it, but it wasn't documented anywhere user-facing
as far as I could find.
Do a bit of wordsmithing on the description of inherited access-privilege
checks.
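The option in question (a minimal sketch):

    CREATE TABLE parent (
      x int,
      CONSTRAINT x_positive CHECK (x > 0) NO INHERIT
    );
    CREATE TABLE child () INHERITS (parent);
    -- child does not acquire x_positive, unlike an ordinary inherited
    -- CHECK constraint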
Alvaro Herrera [Mon, 24 Oct 2016 12:27:24 +0000 (09:27 -0300)]
Preserve commit timestamps across clean restart
An oversight in setting the boundaries of known commit timestamps during
startup caused old commit timestamps to become inaccessible after a
server restart.
Author and reporter: Julien Rouhaud
Review, test code: Craig Ringer
Tom Lane [Sun, 23 Oct 2016 23:14:32 +0000 (19:14 -0400)]
Avoid testing tuple visibility without buffer lock.
INSERT ... ON CONFLICT (specifically ExecCheckHeapTupleVisible) contains
another example of this unsafe coding practice. It is much harder to get
a failure out of it than the case fixed in commit 6292c2339, because in
most scenarios any hint bits that could be set would have already been set
earlier in the command. However, Konstantin Knizhnik reported a failure
with a custom transaction manager, and it's clearly possible to get a
failure via a race condition in async-commit mode.
For lack of a reproducible example, no regression test case in this
commit.
I did some testing with Asserts added to tqual.c's functions, and can say
that running "make check-world" exposed these two bugs and no others.
The Asserts are messy enough that I've not added them to the code for now.
Tom Lane [Sun, 23 Oct 2016 22:36:13 +0000 (18:36 -0400)]
Don't throw serialization errors for self-conflicts in INSERT ON CONFLICT.
A transaction that conflicts against itself, for example
INSERT INTO t(pk) VALUES (1),(1) ON CONFLICT DO NOTHING;
should behave the same regardless of isolation level. It certainly
shouldn't throw a serialization error, as retrying will not help.
We got this wrong due to the ON CONFLICT logic not considering the case,
as reported by Jason Dusek.
Core of this patch is by Peter Geoghegan (based on an earlier patch by
Thomas Munro), though I didn't take his proposed code refactoring for fear
that it might have unexpected side-effects. Test cases by Thomas Munro
and myself.
Tom Lane [Sun, 23 Oct 2016 19:01:24 +0000 (15:01 -0400)]
Avoid testing tuple visibility without buffer lock in RI_FKey_check().
Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do
this, because in corner cases it's possible for HeapTupleSatisfiesSelf
to try to set hint bits on the target tuple; and at least since 8.2 we
have required the buffer content lock to be held while setting hint bits.
The added regression test exercises one such corner case. Unpatched, it
causes an assertion failure in assert-enabled builds, or otherwise would
cause a hint bit change in a buffer we don't hold lock on, which given
the right race condition could result in checksum failures or other data
consistency problems. The odds of a problem in the field are probably
pretty small, but nonetheless back-patch to all supported branches.
Tom Lane [Sat, 22 Oct 2016 18:04:51 +0000 (14:04 -0400)]
Improve documentation about use of Linux huge pages.
Show how to get the system's huge page size, rather than misleadingly
referring to PAGE_SIZE (which is usually understood to be the regular
page size). Show how to confirm whether huge pages have been allocated.
Minor wordsmithing. Back-patch to 9.4 where this section appeared.
Tom Lane [Fri, 21 Oct 2016 15:01:35 +0000 (11:01 -0400)]
Doc: wording tweak for PERL, PYTHON, TCLSH configuration variables.
Replace "Full path to ..." with "Full path name of ...". At least one
user has misinterpreted the existing wording as meaning "Directory
containing ...".
Tom Lane [Thu, 20 Oct 2016 21:17:50 +0000 (17:17 -0400)]
Fix EXPLAIN so that it doesn't emit invalid XML in corner cases.
With track_io_timing = on, EXPLAIN (ANALYZE, BUFFERS) will emit fields
named like "I/O Read Time". The slash makes that invalid as an XML
element name, so that adding FORMAT XML would produce invalid XML.
We already have code in there to translate spaces to dashes, so let's
generalize that to convert anything that isn't a valid XML name character,
viz letters, digits, hyphens, underscores, and periods. We could just
reject slashes, which would run a bit faster. But the fact that this went
unnoticed for so long doesn't give me a warm feeling that we'd notice the
next creative violation, so let's make it a permanent fix.
Reported by Markus Winand, though this isn't his initial patch proposal.
Back-patch to 9.2 where track_io_timing was added. The problem is only
latent in 9.1, so I don't feel a need to fix it there.
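For instance (hedged; table name illustrative):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT * FROM some_table;
    -- "I/O Read Time" now maps to the valid element name I-O-Read-Time,
    -- every character not legal in an XML name being converted to a dash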
Tom Lane [Thu, 20 Oct 2016 19:40:07 +0000 (15:40 -0400)]
Sync our copy of the timezone library with IANA release tzcode2016h.
This absorbs a fix for a symlink-manipulation bug in zic that was
introduced in 2016g. It probably isn't interesting for our use-case,
but I'm not quite sure, so let's update while we're at it.
Tom Lane [Wed, 19 Oct 2016 22:55:52 +0000 (18:55 -0400)]
Sync our copy of the timezone library with IANA release tzcode2016g.
This is mostly to absorb some corner-case fixes in zic for year-2037
timestamps. The other changes that have been made are unlikely to affect
our usage, but nonetheless we may as well take 'em.
Tom Lane [Wed, 19 Oct 2016 22:11:49 +0000 (18:11 -0400)]
Suppress "Factory" zone in pg_timezone_names view for tzdata >= 2016g.
IANA got rid of the really silly "abbreviation" and replaced it with one
that's only moderately silly. But it's still pointless, so keep on not
showing it.
Tom Lane [Wed, 19 Oct 2016 21:56:38 +0000 (17:56 -0400)]
Update time zone data files to tzdata release 2016g.
DST law changes in Turkey. Historical corrections for America/Los_Angeles,
Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk.
Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in Antarctica and the former Soviet Union, for instance Antarctica/Casey
now reports "+08" not "AWST" in the pg_timezone_names view. I kept these
abbreviations in the tznames/ data files, however, so that we will still
accept them for input. (We may want to start trimming those files someday,
but today is not that day.)
An exception is that since IANA no longer claims that "AMT" is in use
in Armenia for GMT+4, I replaced it in the Default file with GMT-4,
corresponding to Amazon Time which is in use in South America. It may be
that that meaning is also invented and IANA will drop it in a future
update; but for now, it seems silly to give pride of place to a meaning
not traceable to IANA over one that is.
When a relation is truncated, it is important that the FSM is truncated as
well. Otherwise, after recovery, the FSM can return a page that has been
truncated away, leading to errors like:
ERROR: could not read block 28991 in file "base/16390/572026": read only 0
of 8192 bytes
We were using MarkBufferDirtyHint() to dirty the buffer holding the last
remaining page of the FSM, but during recovery, that might in fact not
dirty the page, and the FSM update might be lost.
To fix, use the stronger MarkBufferDirty() function. MarkBufferDirty()
requires us to do WAL-logging ourselves, to protect from a torn page, if
checksumming is enabled.
Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log
when checksumming is enabled.
Tom Lane [Tue, 18 Oct 2016 16:24:46 +0000 (12:24 -0400)]
Fix cidin() to handle values above 2^31 platform-independently.
CommandId is declared as uint32, and values up to 4G are indeed legal.
cidout() handles them properly by treating the value as unsigned int.
But cidin() was just using atoi(), which has platform-dependent behavior
for values outside the range of signed int, as reported by Bart Lengkeek
in bug #14379. Use strtoul() instead, as xidin() does.
In passing, make some purely cosmetic changes to make xidin/xidout
look more like cidin/cidout; the former didn't have a monopoly on
best practice IMO.
Neither xidin nor cidin makes any attempt to throw error for invalid input.
I didn't change that here, and am not sure it's worth worrying about
since neither is really a user-facing type. The point is just to ensure
that indubitably-valid inputs work as expected.
It's been like this for a long time, so back-patch to all supported
branches.
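For example:

    SELECT '3000000000'::cid;
    -- above 2^31; with strtoul() this now parses the same way on all
    -- platforms, instead of depending on atoi()'s overflow behavior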
Fix use-after-free around DISTINCT transition function calls.
Have tuplesort_gettupleslot() copy the contents of its current table slot
as needed. This is based on an approach taken by tuplestore_gettupleslot().
In the future, tuplesort_gettupleslot() may also be taught to avoid copying
the tuple where caller can determine that that is safe (the
tuplestore_gettupleslot() interface already offers this option to callers).
Patch by Peter Geoghegan. Fixes bug #14344, reported by Regina Obe.
Tom Lane [Fri, 14 Oct 2016 20:28:34 +0000 (16:28 -0400)]
Fix assorted integer-overflow hazards in varbit.c.
bitshiftright() and bitshiftleft() would recursively call each other
infinitely if the user passed INT_MIN for the shift amount, due to integer
overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN.
That doesn't change the results since any shift distance larger than the
input bit string's length produces an all-zeroes result.
Also fix some places that seemed inadequately paranoid about input typmods
exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will
certainly be much less than that, at least some of these spots are
reachable with user-chosen integer values.
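The INT_MIN case boils down to (hedged sketch):

    SELECT b'1001' >> (-2147483648)::int;
    -- negating the shift amount used to overflow and recurse forever;
    -- it is now clamped to -VARBITMAXLEN, and any shift distance longer
    -- than the string yields all zeroes: 0000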
Tom Lane [Thu, 13 Oct 2016 23:45:58 +0000 (19:45 -0400)]
Fix handling of pgstat counters for TRUNCATE in a prepared transaction.
pgstat_twophase_postcommit is supposed to duplicate the math in
AtEOXact_PgStat, but it had missed out the bit about clearing
t_delta_live_tuples/t_delta_dead_tuples for a TRUNCATE.
It's harder than you might think to replicate the issue here, because
those counters would only be nonzero when a previous transaction in
the same backend had added/deleted tuples in the truncated table,
and those counts hadn't been sent to the stats collector yet.
Evident oversight in commit d42358efb. I've not added a regression
test for this; we tried to add one in d42358efb, and had to revert it
because it was too timing-sensitive for the buildfarm.
Tom Lane [Thu, 13 Oct 2016 21:05:14 +0000 (17:05 -0400)]
Fix another bug in merging of inherited CHECK constraints.
It's not good for an inherited child constraint to be marked connoinherit;
that would result in the constraint not propagating to grandchild tables,
if any are created later. The code mostly prevented this from happening
but there was one case that was missed.
This is somewhat related to commit e55a946a8, which also tightened checks
on constraint merging. Hence, back-patch to 9.2 like that one. This isn't
so much because there's a concrete feature-related reason to stop there,
as to avoid having more distinct behaviors than we have to in this area.
Tom Lane [Thu, 13 Oct 2016 19:06:46 +0000 (15:06 -0400)]
Try to find out the actual hugepage size when making a MAP_HUGETLB request.
Even if Linux's mmap() is okay with a partial-hugepage request, munmap()
is not, as reported by Chris Richards. Therefore it behooves us to try
a bit harder to find out the actual hugepage size, instead of assuming
that we can skate by with a guess.
For the moment, just look into /proc/meminfo to find out the default
hugepage size, and use that. Later, on kernels that support requests
for nondefault sizes, we might try to consider other alternatives.
But that smells more like a new feature than a bug fix, especially if
we want to provide any way for the DBA to control it, so leave it for
another day.
I set this up to allow easy addition of platform-specific code for
non-Linux platforms, if needed; but right now there are no reports
suggesting that we need to work harder on other platforms.
Back-patch to 9.4 where hugepage support was introduced.
Tom Lane [Thu, 13 Oct 2016 17:59:56 +0000 (13:59 -0400)]
Clean up handling of anonymous mmap'd shared-memory segment.
Fix detaching of the mmap'd segment to have its own on_shmem_exit callback,
rather than piggybacking on the one for detaching from the SysV segment.
That was confusing, and given the distance between the two attach calls,
it was trouble waiting to happen.
Make the detaching calls idempotent by clearing AnonymousShmem to show
we've already unmapped. I spent quite a bit of time yesterday trying
to find a path that would allow the munmap()'s to be done twice, and
while I did not succeed, it seems silly that there's even a question.
Make the #ifdef logic less confusing by separating "do we want to use
anonymous shmem" from EXEC_BACKEND. Even though there's no current
scenario where those conditions are different, it is not helpful for
different places in the same file to be testing EXEC_BACKEND for what
are fundamentally different reasons.
Don't do on_exit_reset() in StartBackgroundWorker(). At best that's
useless (InitPostmasterChild would have done it already) and at worst
it could zap some callback that's unrelated to shared memory.
Improve comments, and simplify the huge_pages enablement logic slightly.
Back-patch to 9.4 where hugepage support was introduced.
Arguably this should go into 9.3 as well, but the code looks
significantly different there, and I doubt it's worth the
trouble of adapting the patch given I can't show a live bug.
Tom Lane [Thu, 13 Oct 2016 04:25:28 +0000 (00:25 -0400)]
Fix broken jsonb_set() logic for replacing array elements.
Commit 0b62fd036 did a fairly sloppy job of refactoring setPath()
to support jsonb_insert() along with jsonb_set(). In its defense,
though, there was no regression test case exercising the case of
replacing an existing element in a jsonb array.
Per bug #14366 from Peng Sun. Back-patch to 9.6 where bug was introduced.
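The previously untested case (a minimal sketch):

    SELECT jsonb_set('[0, 1, 2]', '{1}', '9');
    -- replaces the existing element at index 1, yielding [0, 9, 2]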
Tom Lane [Wed, 12 Oct 2016 22:01:43 +0000 (18:01 -0400)]
Revert addition of PGDLLEXPORT in PG_FUNCTION_INFO_V1 macro.
This turns out not to be as harmless as I thought: MSVC will complain
if it sees an "extern" declaration without PGDLLEXPORT and then one with.
(Seems fairly silly, given that this can be changed after the fact by the
linker, but there you have it.) Therefore, contrib modules that have
extern's for V1 functions in header files are falling over in the
buildfarm, since none of those externs are marked PGDLLEXPORT.
We might or might not conclude that we're willing to plaster those
declarations with PGDLLEXPORT in HEAD, but in any case there's no way we're
going to ship this change in the back branches. Third-party authors would
not thank us for breaking their code in a minor release. Hence, revert
the addition of PGDLLEXPORT (but let's keep the extra info in the comment).
If we do the other changes we can revert this commit in HEAD.
Tom Lane [Wed, 12 Oct 2016 16:45:50 +0000 (12:45 -0400)]
Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.
This isn't really necessary for our own code, because we use a .DEF file
in MSVC builds (see gendef.pl), or --export-all-symbols in MinGW and
Cygwin builds, to ensure that all global symbols in loadable modules
will be exported on Windows. However, third-party authors might use
different build processes that need this marker, and it's harmless
enough for our own builds.
To some extent, this is an oversight in commit e7128e8db, so back-patch
to 9.4 where that was added.