Robert Haas [Sun, 12 Jun 2011 04:07:04 +0000 (00:07 -0400)]
Code cleanup for InitProcGlobal.
The old code creates three separate arrays when only one is needed,
using two different shmem allocation functions for no obvious reason.
It also strangely splits up the initialization of AuxiliaryProcs
between the top and bottom of the function, to no evident purpose.
Tom Lane [Fri, 10 Jun 2011 21:03:03 +0000 (17:03 -0400)]
Work around gcc 4.6.0 bug that breaks WAL replay.
ReadRecord's habit of using both direct references to tmpRecPtr and
references to *RecPtr (which is pointing at tmpRecPtr) triggers an
optimization bug in gcc 4.6.0, which apparently has forgotten about
aliasing rules. Avoid the compiler bug, and make the code more readable
to boot, by getting rid of the direct references. Improve the comments
while at it.
Back-patch to all supported versions, in case they get built with 4.6.0.
Tom Lane, with some cosmetic suggestions from Alex Hunsaker
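For illustration, a minimal sketch of the kind of pattern involved (names
here are hypothetical, not the actual xlog.c code): once a pointer is set to
alias a local variable, touching the value through both the pointer and the
local invites miscompilation under aggressive optimization, so the fix is to
use a single access path.

    /* Hypothetical sketch of the aliasing pattern; not the real xlog.c code. */
    #include <stdint.h>

    typedef uint64_t RecPtrLike;

    static RecPtrLike
    read_record_sketch(RecPtrLike *RecPtr)
    {
        RecPtrLike  tmpRecPtr = 0;

        if (RecPtr == NULL)
            RecPtr = &tmpRecPtr;    /* RecPtr now aliases tmpRecPtr */

        /*
         * Mixing direct writes to tmpRecPtr with reads of *RecPtr is what
         * tripped gcc 4.6.0; accessing the value only via RecPtr from here
         * on sidesteps the bug and reads more clearly.
         */
        *RecPtr += 8;
        return *RecPtr;
    }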
Fix locking while setting flags in MySerializableXact.
Even if a flag is modified only by the backend owning the transaction, it's
not safe to modify it without a lock. Another backend might be setting or
clearing a different flag in the flags field concurrently, and that
operation might be lost because setting or clearing a bit in a word is not
atomic.
Make did-write flag a simple backend-private boolean variable, because it
was only set or tested in the owning backend (except when committing a
prepared transaction, but it's not worthwhile to optimize for the case of a
read-only prepared transaction). This also eliminates the need to add
locking where that flag is set.
Also, set the did-write flag when doing DDL operations like DROP TABLE or
TRUNCATE -- that was missed earlier.
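To illustrate why the lock is needed, here is a hedged sketch (hypothetical
struct and function names, 9.1-era LWLock API; not the actual predicate.c
code): an unlocked flags |= bit is a load-modify-store, so two backends
updating different bits in the same word can lose one update unless both
hold the protecting lock.

    /* Illustrative sketch only; struct and function names are hypothetical. */
    #include "postgres.h"
    #include "storage/lwlock.h"

    typedef struct SharedXactSketch
    {
        uint32      flags;          /* shared flags word */
    } SharedXactSketch;

    #define SXACT_FLAG_A    0x01
    #define SXACT_FLAG_B    0x02

    static void
    set_flag_safely(SharedXactSketch *sxact, uint32 flag)
    {
        /*
         * sxact->flags |= flag is not atomic; protect it with the lock that
         * covers the structure (SerializableXactHashLock in the SSI code).
         */
        LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
        sxact->flags |= flag;
        LWLockRelease(SerializableXactHashLock);
    }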
Alvaro Herrera [Fri, 10 Jun 2011 17:43:02 +0000 (13:43 -0400)]
Use "transient" files for blind writes, take 2
"Blind writes" are a mechanism to push buffers down to disk when
evicting them; since they may belong to different databases than the one
a backend is connected to, the backend does not necessarily have a
relation to link them to, and thus no way to blow them away. We were
keeping those files open indefinitely, which would cause a problem if
the underlying table was deleted, because the operating system would not
be able to reclaim the disk space used by those files.
To fix, have bufmgr mark such files as transient to smgr; the lower
layer is allowed to close the file descriptor when the current
transaction ends. We must be careful to have any other access to the file
remove the transient marking, to prevent unnecessarily expensive system
calls when evicting buffers belonging to our own database (whose files
we're likely to need again soon).
This commit fixes a bug in the previous one, which neglected to cleanly
handle the LRU ring that fd.c uses to manage open files, and caused an
unacceptable failure just before beta2 and was thus reverted.
Alvaro Herrera [Thu, 9 Jun 2011 17:41:12 +0000 (13:41 -0400)]
Use "transient" files for blind writes
"Blind writes" are a mechanism to push buffers down to disk when
evicting them; since they may belong to different databases than the one
a backend is connected to, the backend does not necessarily have a
relation to link them to, and thus no way to blow them away. We were
keeping those files open indefinitely, which would cause a problem if
the underlying table was deleted, because the operating system would not
be able to reclaim the disk space used by those files.
To fix, have bufmgr mark such files as transient to smgr; the lower
layer is allowed to close the file descriptor when the current
transaction ends. We must be careful to have any other access to the file
remove the transient marking, to prevent unnecessarily expensive system
calls when evicting buffers belonging to our own database (whose files
we're likely to need again soon).
Fix the truncation logic of the OldSerXid SLRU mechanism. We can't pass
SimpleLruTruncate() a page number that's "in the future", because it will
issue a warning and refuse to truncate anything. Instead, we leave behind
the latest segment. If the SLRU is not needed before XID wraparound, the
segment will appear as new again, and not be cleaned up until it gets old
enough again. That's a bit unpleasant, but better than not cleaning up
anything.
Also, fix broken calculation to check and warn if the span of the OldSerXid
SLRU is getting too large to fit in the 64k SLRU pages that we have
available. It was not XID wraparound aware.
Tom Lane [Wed, 8 Jun 2011 19:24:27 +0000 (15:24 -0400)]
Make citext's equality and hashing functions collation-insensitive.
This is an ugly hack to get around the fact that significant parts of the
core backend assume they don't need to worry about passing collation to
equality and hashing functions. That's true for the core string datatypes,
but citext should ideally have equality behavior that depends on the
specified collation's LC_CTYPE. However, there's no chance of fixing the
core before 9.2, so we'll have to live with this compromise arrangement for
now. Per bug #6053 from Regina Obe.
The code changes in this commit should be reverted in full once the core
code is up to speed, but be careful about reverting the docs changes:
I fixed a number of obsolete statements while at it.
Since start/stop/restart/reload/status is a kind of standard command
set, it seems odd to insert the special-purpose "promote" in between
the closely related "restart" and "reload". So put it after "status"
in code and documentation.
Put the documentation of the -U option in some sensible place.
Rewrite the synopsis sentence in help and documentation to make it
less of a growing mouthful.
Tom Lane [Wed, 8 Jun 2011 16:52:12 +0000 (12:52 -0400)]
Allow domains over arrays to match ANYARRAY parameters again.
This use-case was broken in commit 529cb267a6843a6a8190c86b75d091771d99d6a9
of 2010-10-21, in which I commented "For the moment, we just forbid such
matching. We might later wish to insert an automatic downcast to the
underlying array type, but such a change should also change matching of
domains to ANYELEMENT for consistency". We still lack consensus about what
to do with ANYELEMENT; but not matching ANYARRAY is a clear loss of
functionality compared to prior releases, so let's go ahead and make that
happen. Per complaint from Regina Obe and extensive subsequent discussion.
Make DDL operations play nicely with Serializable Snapshot Isolation.
Truncating or dropping a table is treated like deletion of all tuples, and
we check for conflicts accordingly. If a table is clustered or rewritten by
ALTER TABLE, all predicate locks on the heap are promoted to relation-level
locks, because the tuple or page ids of any existing tuples will change and
won't be valid after rewriting the table. Arguably ALTER TABLE should be
treated like a mass-UPDATE of every row, but if you e.g. change the datatype
of a column, you could also argue that it's just a change to the physical
layout, not a logical change. Reindexing promotes all locks on the index to
a relation-level lock on the heap.
Kevin Grittner, with a lot of cosmetic changes by me.
Robert Haas [Wed, 8 Jun 2011 02:12:44 +0000 (22:12 -0400)]
Complain politely about access to temp/unlogged tables during recovery.
This has never been supported, but we previously let md.c issue the
complaint for us at whatever point we tried to examine the backing file.
Now we print a nicer error message.
Per bug #6041, reported by Emanuel, and extensive discussion with Tom
Lane over where to put the check.
Tom Lane [Tue, 7 Jun 2011 04:08:31 +0000 (00:08 -0400)]
Fix rewriter to cope (more or less) with CTEs in the query being rewritten.
Since the original implementation of CTEs only allowed them in SELECT
queries, the rule rewriter did not expect to find any CTEs in statements
being rewritten by ON INSERT/UPDATE/DELETE rules. We had dealt with this
to some extent but the code was still several bricks shy of a load, as
illustrated in bug #6051 from Jehan-Guillaume de Rorthais.
In particular, we have to be able to copy CTEs from the original query's
cteList into that of a rule action, in case the rule action references the
CTE (which it pretty much always will). This also implies we were doing
things in the wrong order in RewriteQuery: we have to recursively rewrite
the CTE queries before expanding the main query, so that we have the
rewritten queries available to copy.
There are unpleasant limitations yet to resolve here, but at least we now
throw understandable FEATURE_NOT_SUPPORTED errors for them instead of just
failing with bizarre implementation-dependent errors. In particular, we
can't handle propagating the same CTE into multiple post-rewrite queries
(because then the CTE would be evaluated multiple times), and we can't cope
with conflicts between CTE names in the original query and in the rule
actions.
Tom Lane [Sat, 4 Jun 2011 19:48:17 +0000 (15:48 -0400)]
Expose the "*VALUES*" alias that we generate for a stand-alone VALUES list.
We were trying to make that strictly an internal implementation detail,
but it turns out that it's exposed anyway when dumping a view defined
like
CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;
This comes out as
CREATE VIEW ... ORDER BY "*VALUES*".column1;
which fails to parse when reloading the dump.
Hacking ruleutils.c to suppress the column qualification looks like it'd
be a risky business, so instead promote the RTE alias to full-fledged
usability.
Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
Alvaro Herrera [Thu, 2 Jun 2011 16:39:53 +0000 (12:39 -0400)]
Fix pg_get_constraintdef to cope with NOT VALID constraints
This case was missed when NOT VALID constraints were first introduced in
commit 722bf7017bbe796decc79c1fde03e7a83dae9ada by Simon Riggs on
2011-02-08. Among other things, it causes pg_dump to omit the NOT VALID
flag when dumping such constraints, which may cause them to fail to load
afterwards if the table contains rows that violate the constraint.
Tom Lane [Fri, 3 Jun 2011 19:38:12 +0000 (15:38 -0400)]
Fix failure to check whether a rowtype's component types are sortable.
The existence of a btree opclass accepting composite types caused us to
assume that every composite type is sortable. This isn't true of course;
we need to check if the column types are all sortable. There was logic
for this for the case of array comparison (ie, check that the element
type is sortable), but we missed the point for rowtypes. Per Teodor's
report of an ANALYZE failure for an unsortable composite type.
Rather than just add some more ad-hoc logic for this, I moved knowledge of
the issue into typcache.c. The typcache will now only report array_eq,
record_cmp, and friends as usable operators if the array or composite type
will work with those functions.
Unfortunately we don't have enough info to do this for anonymous RECORD
types; in that case, just assume it will work, and take the runtime failure
as before if it doesn't.
This patch might be a candidate for back-patching at some point, but
given the lack of complaints from the field, I'd rather just test it in
HEAD for now.
Note: most of the places touched in this patch will need further work
when we get around to supporting hashing of record types.
This is the original DocBook SGML limit, but apparently most
installations have changed it or ignore it, which is why few people
have run into this problem.
SSI comment fixes and enhancements. Notably, document that the conflict-out
flag actually means that the transaction has a conflict out to a transaction
that committed before the flagged transaction.
Tom Lane [Thu, 2 Jun 2011 22:37:57 +0000 (18:37 -0400)]
Handle domains when checking for recursive inclusion of composite types.
We need this now because we allow domains over arrays, and we'll probably
allow domains over composites pretty soon, which makes the problem even
more obvious.
Although domains over arrays also exist in previous versions, this does not
need to be back-patched, because the coding used in older versions
successfully "looked through" domains over arrays. The problem is exposed
by not treating a domain as having a typelem.
Problem identified by Noah Misch, though I did not use his patch, since
it would require additional work to handle domains over composites that
way. This approach is more future-proof.
Tom Lane [Thu, 2 Jun 2011 19:30:56 +0000 (15:30 -0400)]
Clean up after erroneous SELECT FOR UPDATE/SHARE on a sequence.
My previous commit disallowed this operation, but did nothing about
cleaning up the damage if one had already been done. With the operation
disallowed, it's okay to just forcibly clear xmax in a sequence's tuple,
since any value seen there could not represent a live transaction's lock.
So, any sequence-specific operation will repair the problem automatically,
whether or not the user has already seen "could not access status of
transaction" failures.
Tom Lane [Thu, 2 Jun 2011 18:46:15 +0000 (14:46 -0400)]
Disallow SELECT FOR UPDATE/SHARE on sequences.
We can't allow this because such an operation stores its transaction XID
into the sequence tuple's xmax. Because VACUUM doesn't process sequences
(and we don't want it to start doing so), such an xmax value won't get
frozen, meaning it will eventually refer to nonexistent pg_clog storage,
and even wrap around completely. Since the row lock is ignored by nextval
and setval, the usefulness of the operation is highly debatable anyway.
Per reports of trouble with pgpool 3.0, which had ill-advisedly started
using such commands as a form of locking.
In HEAD, also disallow SELECT FOR UPDATE/SHARE on toast tables. Although
this does work safely given the current implementation, there seems no
good reason to allow it. I refrained from changing that behavior in
back branches, however.
Tom Lane [Wed, 1 Jun 2011 21:01:59 +0000 (17:01 -0400)]
Allow hash joins to be interrupted while searching hash table for match.
Per experimentation with a recent example, in which unreasonable amounts
of time could elapse before the backend would respond to a query-cancel.
This might be something to back-patch, but the patch doesn't apply cleanly
because this code was rewritten for 9.1. Given the lack of field
complaints I won't bother for now.
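For reference, the usual way to make such a loop cancellable is to add
CHECK_FOR_INTERRUPTS() inside it; a hedged sketch of the pattern (not the
literal nodeHashjoin.c change):

    /* Hedged sketch of the general pattern; not the committed code verbatim. */
    #include "postgres.h"
    #include "miscadmin.h"              /* CHECK_FOR_INTERRUPTS() */
    #include "executor/hashjoin.h"      /* HashJoinTuple */

    static void
    scan_bucket_sketch(HashJoinTuple tuple)
    {
        while (tuple != NULL)
        {
            CHECK_FOR_INTERRUPTS();     /* respond to query cancel promptly */

            /* ... compare hash values and evaluate join quals here ... */

            tuple = tuple->next;
        }
    }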
Tom Lane [Wed, 1 Jun 2011 17:09:07 +0000 (13:09 -0400)]
Further improvements in pg_ctl's new wait-for-postmaster-start logic.
Add a postmaster_is_alive() test to the wait loop, so that we stop waiting
if the postmaster dies without removing its pidfile. Unfortunately this
only helps after the postmaster has created its pidfile, since until then
we don't know which PID to check. But if it never does create the pidfile,
we can give up in a relatively short time, so this is a useful addition
in practice. Per suggestion from Fujii Masao, though this doesn't look
very much like his patch.
In addition, improve pg_ctl's ability to cope with pre-existing pidfiles.
Such a file might or might not represent a live postmaster that is going to
block our postmaster from starting, but the previous code pre-judged the
situation and gave up waiting immediately. Now, we will wait for up to 5
seconds to see if our postmaster overwrites such a file. This issue
interacts with Fujii's patch because we would make the wrong conclusion
if we did the postmaster_is_alive() test with a pre-existing PID.
All of this could be improved if we rewrote start_postmaster() so that it
could report the child postmaster's PID, so that we'd know a-priori the
correct PID to test with postmaster_is_alive(). That looks like a bit too
much change for so late in the 9.1 development cycle, unfortunately.
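On Unix, a postmaster_is_alive() style check typically boils down to
kill(pid, 0); a minimal sketch under that assumption (the real pg_ctl code
differs in detail):

    /* Minimal sketch; the real pg_ctl implementation differs in detail. */
    #include <errno.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>

    static bool
    postmaster_is_alive_sketch(pid_t pid)
    {
        /* Signal 0 performs error checking only; no signal is sent. */
        if (kill(pid, 0) == 0)
            return true;
        return (errno == EPERM);    /* exists, but owned by someone else */
    }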
Tom Lane [Tue, 31 May 2011 21:53:45 +0000 (17:53 -0400)]
Protect GIST logic that assumes penalty values can't be negative.
Apparently sane-looking penalty code might return small negative values,
for example because of roundoff error. This will confuse places like
gistchoose(). Prevent problems by clamping negative penalty values to
zero. (Just to be really sure, I also made it force NaNs to zero.)
Back-patch to all supported branches.
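The clamp amounts to something like the following sketch (illustrative only;
the committed change lives in the GiST code):

    /* Illustrative sketch of the clamping idea, not the literal patch. */
    #include <math.h>

    static float
    clamp_penalty(float penalty)
    {
        /* Force NaN and small negative values (e.g. roundoff) to zero. */
        if (isnan(penalty) || penalty < 0.0f)
            return 0.0f;
        return penalty;
    }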
Peter Eisentraut [Tue, 31 May 2011 20:10:05 +0000 (23:10 +0300)]
Recode non-ASCII characters in source to UTF-8
For consistency, have all non-ASCII characters from contributors'
names in the source be in UTF-8. But remove some other more
gratuitous uses of non-ASCII characters.
Tom Lane [Tue, 31 May 2011 20:10:46 +0000 (16:10 -0400)]
Replace use of credential control messages with getsockopt(LOCAL_PEERCRED).
It turns out the reason we hadn't found out about the portability issues
with our credential-control-message code is that almost no modern platforms
use that code at all; the ones that used to need it now offer getpeereid(),
which we choose first. The last holdout was NetBSD, and they added
getpeereid() as of 5.0. So far as I can tell, the only live platform on
which that code was being exercised was Debian/kFreeBSD, ie, FreeBSD kernel
with Linux userland --- since glibc doesn't provide getpeereid(), we fell
back to the control message code. However, the FreeBSD kernel provides a
LOCAL_PEERCRED socket parameter that's functionally equivalent to Linux's
SO_PEERCRED. That is both much simpler to use than control messages, and
superior because it doesn't require receiving a message from the other end
at just the right time.
Therefore, add code to use LOCAL_PEERCRED when necessary, and rip out all
the credential-control-message code in the backend. (libpq still has such
code so that it can still talk to pre-9.1 servers ... but eventually we can
get rid of it there too.) Clean up related autoconf probes, too.
This means that libpq's requirepeer parameter now works on exactly the same
platforms where the backend supports peer authentication, so adjust the
documentation accordingly.
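On FreeBSD-derived kernels the peer's credentials can be fetched roughly as
below; a hedged sketch, since the backend's actual auth code handles more
cases and error reporting:

    /* Hedged sketch for FreeBSD-style kernels; not the exact backend code. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/ucred.h>              /* struct xucred, XUCRED_VERSION */
    #include <sys/un.h>                 /* LOCAL_PEERCRED */

    static int
    get_peer_uid(int sock, uid_t *uid)
    {
        struct xucred   peercred;
        socklen_t       len = sizeof(peercred);

        if (getsockopt(sock, 0, LOCAL_PEERCRED, &peercred, &len) != 0 ||
            len != sizeof(peercred) ||
            peercred.cr_version != XUCRED_VERSION)
            return -1;                  /* no credentials available */

        *uid = peercred.cr_uid;
        return 0;
    }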
Tom Lane [Mon, 30 May 2011 23:16:05 +0000 (19:16 -0400)]
Fix portability bugs in use of credentials control messages for peer auth.
Even though our existing code for handling credentials control messages has
been basically unchanged since 2001, it was fundamentally wrong: it did not
ensure proper alignment of the supplied buffer, and it was calculating
buffer sizes and message sizes incorrectly. This led to failures on
platforms where alignment padding is relevant, for instance FreeBSD on
64-bit platforms, as seen in a recent Debian bug report passed on by
Martin Pitt (http://bugs.debian.org//cgi-bin/bugreport.cgi?bug=612888).
Rewrite to do the message-whacking using the macros specified in RFC 2292,
following a suggestion from Theo de Raadt in that thread. Tested by me
on Debian/kFreeBSD-amd64; since OpenBSD and NetBSD document the identical
CMSG API, it should work there too.
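For reference, using the RFC 2292 macros keeps the ancillary-data buffer
properly aligned and sized; a hedged sketch of receiving SCM_CREDS that way
(platform details vary, and this is not the backend code verbatim):

    /* Hedged sketch of RFC 2292-style credential receipt (FreeBSD SCM_CREDS). */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    static int
    recv_peer_creds_sketch(int sock, struct cmsgcred *out)
    {
        char            dummy;
        struct iovec    iov = { &dummy, 1 };
        union
        {
            struct cmsghdr  hdr;
            char            buf[CMSG_SPACE(sizeof(struct cmsgcred))];
        }               cmsgbuf;        /* union guarantees alignment */
        struct msghdr   msg;
        struct cmsghdr *cmsg;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf.buf;
        msg.msg_controllen = sizeof(cmsgbuf.buf);

        if (recvmsg(sock, &msg, 0) < 0)
            return -1;

        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg == NULL ||
            cmsg->cmsg_len < CMSG_LEN(sizeof(struct cmsgcred)) ||
            cmsg->cmsg_level != SOL_SOCKET ||
            cmsg->cmsg_type != SCM_CREDS)
            return -1;

        memcpy(out, CMSG_DATA(cmsg), sizeof(struct cmsgcred));
        return 0;
    }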
Tom Lane [Mon, 30 May 2011 21:05:26 +0000 (17:05 -0400)]
Fix VACUUM so that it always updates pg_class.reltuples/relpages.
When we added the ability for vacuum to skip heap pages by consulting the
visibility map, we made it just not update the reltuples/relpages
statistics if it skipped any pages. But this could leave us with extremely
out-of-date stats for a table that contains any unchanging areas,
especially for TOAST tables which never get processed by ANALYZE. In
particular this could result in autovacuum making poor decisions about when
to process the table, as in a recent report from Florian Helmberger. And in
general it's a bad idea to not update the stats at all. Instead, use the
previous values of reltuples/relpages as an estimate of the tuple density
in unvisited pages. This approach results in a "moving average" estimate
of reltuples, which should converge to the correct value over multiple
VACUUM and ANALYZE cycles even when individual measurements aren't very
good.
This new method for updating reltuples is used by both VACUUM and ANALYZE,
with the result that we no longer need the grotty interconnections that
caused ANALYZE to not update the stats depending on what had happened
in the parent VACUUM command.
Also, fix the logic for skipping all-visible pages during VACUUM so that it
looks ahead rather than behind to decide what to do, as per a suggestion
from Greg Stark. This eliminates useless scanning of all-visible pages at
the start of the relation or just after a not-all-visible page. In
particular, the first few pages of the relation will not be invariably
included in the scanned pages, which seems to help in not overweighting
them in the reltuples estimate.
Back-patch to 8.4, where the visibility map was introduced.
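The moving-average arithmetic amounts to pricing the unscanned pages at the
previously observed tuple density and adding what was actually counted; a
sketch of that calculation (not the committed code verbatim):

    /* Sketch of the estimation arithmetic; the committed code differs in detail. */
    static double
    estimate_reltuples_sketch(double old_rel_pages, double old_rel_tuples,
                              double total_pages, double scanned_pages,
                              double scanned_tuples)
    {
        double      old_density;

        if (scanned_pages >= total_pages)
            return scanned_tuples;          /* whole table was scanned */
        if (scanned_pages <= 0)
            return old_rel_tuples;          /* nothing new learned */
        if (old_rel_pages <= 0)             /* no history: extrapolate */
            return scanned_tuples * total_pages / scanned_pages;

        /* Assume unscanned pages still have the previously observed density. */
        old_density = old_rel_tuples / old_rel_pages;
        return scanned_tuples + old_density * (total_pages - scanned_pages);
    }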
Magnus Hagander [Mon, 30 May 2011 18:46:14 +0000 (20:46 +0200)]
Don't recommend upgrading to latest available Windows SDK
We only support up to version 7.0, so don't recommend
upgrading past it. The rest of the documentation around this
was already updated, but one spot was missed.
Magnus Hagander [Mon, 30 May 2011 18:09:51 +0000 (20:09 +0200)]
Don't include local line on platforms without support
Since we now include a sample line for replication on local
connections in pg_hba.conf, don't include it where local
connections aren't available (such as on win32).
Also make sure we use authmethodlocal and not authmethod on
the sample line.
The row-version chaining in Serializable Snapshot Isolation was still wrong.
On further analysis, it turns out that it is not necessary to duplicate
predicate locks to the new row version at update; the lock on the version
that the transaction saw as visible is enough. However, there was a
different bug in
the code that checks for dangerous structures when a new rw-conflict happens.
Fix that bug, and remove all the row-version chaining related code.
Kevin Grittner & Dan Ports, with some comment editorialization by me.
Alvaro Herrera [Mon, 30 May 2011 16:15:13 +0000 (12:15 -0400)]
Remove usage of &PL_sv_undef in hashes and arrays
According to perlguts, &PL_sv_undef is not the right thing to use in
those cases because it doesn't behave the same way as an undef value via
Perl code. It seems the intuitive way to deal with undef values is broken
subtly enough that it's hard to notice when it is misused.
The broken uses got inadvertently introduced in commit 87bb2ade2ce646083f39d5ab3e3307490211ad04 by Alexey Klyukin, Alex
Hunsaker and myself on 2011-02-17; no backpatch is necessary.
Tom Lane [Sat, 28 May 2011 16:36:04 +0000 (12:36 -0400)]
Fix null-dereference crash in parse_xml_decl().
parse_xml_decl's header comment says you can pass NULL for any unwanted
output parameter, but it failed to honor this contract for the "standalone"
flag. The only currently-affected caller is xml_recv, so the net effect is
that sending a binary XML value containing a standalone parameter in its
xml declaration would crash the backend. Per bug #6044 from Christopher
Dillard.
In passing, remove useless initializations of parse_xml_decl's output
parameters in xml_parse.
Back-patch to 8.3, where this code was introduced.
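The fix pattern is simply to test each optional output pointer before
storing through it; schematically (a sketch, not the literal parse_xml_decl
code):

    /* Schematic sketch of honoring "NULL output pointer means not wanted". */
    #include <stdbool.h>
    #include <stddef.h>

    static void
    return_standalone_sketch(bool *standalone, bool value)
    {
        if (standalone != NULL)     /* previously dereferenced unconditionally */
            *standalone = value;
    }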
Tom Lane [Fri, 27 May 2011 18:13:38 +0000 (14:13 -0400)]
Improve corner cases in pg_ctl's new wait-for-postmaster-startup code.
With "-w -t 0", we should report "still starting up", not "ok". If we
fall out of the loop without ever being able to call PQping (because we
were never able to construct a connection string), report "no response",
not "ok". This gets rid of corner cases in which we'd claim the server
had started even though it had not.
Also, if the postmaster.pid file is not there at any point after we've
waited 5 seconds, assume the postmaster has failed and report that, rather
than almost-certainly-fruitlessly continuing to wait. The pidfile should
appear almost instantly even when there is extensive startup work to do,
so 5 seconds is already a very conservative figure. This part is per a
gripe from MauMau --- there might be better ways to do it, but nothing
simple enough to get done for 9.1.
Tom Lane [Fri, 27 May 2011 16:10:32 +0000 (12:10 -0400)]
Preserve caller's memory context in ProcessCompletedNotifies().
This is necessary to avoid long-term memory leakage, because the main loop
in PostgresMain expects to be executing in MessageContext, and hence is a
bit sloppy about freeing stuff that is only needed for the duration of
processing the current client message. The known case of an actual leak
is when encoding conversion has to be done on the incoming command string,
but there might be others. Per report from Per-Olov Esgard.
Back-patch to 9.0, where the bug was introduced by the LISTEN/NOTIFY
rewrite.
Check the return code of pthread_create(). Otherwise we go into an infinite
loop if it fails, which is what happened on my HP-UX box. (I think
the reason it failed on that box is a misconfiguration on my part, but
that's no reason to hang.)
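The pattern is simply to treat a nonzero return from pthread_create() as an
error instead of retrying forever; a minimal sketch (error handling here is
illustrative):

    /* Minimal sketch of checking pthread_create(); error handling illustrative. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void *
    worker(void *arg)
    {
        return arg;
    }

    static pthread_t
    start_worker_or_die(void *arg)
    {
        pthread_t   tid;
        int         err = pthread_create(&tid, NULL, worker, arg);

        if (err != 0)               /* returns an errno value; errno unchanged */
        {
            fprintf(stderr, "pthread_create failed: %s\n", strerror(err));
            exit(1);
        }
        return tid;
    }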
Tom Lane [Thu, 26 May 2011 23:25:19 +0000 (19:25 -0400)]
Make decompilation of optimized CASE constructs more robust.
We had some hacks in ruleutils.c to cope with various odd transformations
that the optimizer could do on a CASE foo WHEN "CaseTestExpr = RHS" clause.
However, the fundamental impossibility of covering all cases was exposed
by Heikki, who pointed out that the "=" operator could get replaced by an
inlined SQL function, which could contain nearly anything at all. So give
up on the hacks and just print the expression as-is if we fail to recognize
it as "CaseTestExpr = RHS". (We must cover that case so that decompiled
rules print correctly; but we are not under any obligation to make EXPLAIN
output be 100% valid SQL in all cases, and already could not do so in some
other cases.) This approach requires that we have some printable
representation of the CaseTestExpr node type; I used "CASE_TEST_EXPR".
Back-patch to all supported branches, since the problem case fails in all.
Tom Lane [Thu, 26 May 2011 21:29:33 +0000 (17:29 -0400)]
Adjust configure to use "+Olibmerrno" with HP-UX C compiler, if possible.
This is reported to be necessary on some versions of that OS. In service
of this, cause PGAC_PROG_CC_CFLAGS_OPT to reject switches that result in
compiler warnings, since on yet other versions of that OS, the switch does
nothing except provoke a warning.
Report and patch by Ibrar Ahmed, further tweaking by me.
Tom Lane [Wed, 25 May 2011 20:26:45 +0000 (16:26 -0400)]
Suppress extensions in partial dumps.
We initially had pg_dump emit CREATE EXTENSION commands unconditionally.
However, pg_dump has long been in the habit of not dumping procedural
language definitions when a --schema or --table switch is given. It seems
appropriate to handle extensions the same way, since like PLs they are SQL
objects that are not in any particular schema. Per complaint from Adrian
Schreyer.
Peter Eisentraut [Wed, 25 May 2011 18:53:26 +0000 (21:53 +0300)]
Put options in some sensible order
For the --help output and reference pages of pg_dump, pg_dumpall,
pg_restore, put the options in a consistent, mostly alphabetical order,
rather than newest option last or something like that.
Andrew Dunstan [Wed, 25 May 2011 04:21:07 +0000 (00:21 -0400)]
Convert builddoc.bat into a perl script that actually works.
The old .bat file wasn't working for reasons that are unclear and that
did not seem worth the trouble to ascertain.
The new perl script has been tested and is known to work.
Soon it will be tested regularly on the buildfarm.
The .bat file is kept as a simple wrapper for the perl script.
Tom Lane [Tue, 24 May 2011 21:56:52 +0000 (17:56 -0400)]
Cleanup for pull-up-isReset patch.
Clear isReset before, not after, calling the context-specific alloc method,
so as to preserve the option to do a tail call in MemoryContextAlloc
(and also so this code isn't assuming that a failed alloc call won't have
changed the context's state before failing). Fix missed direct invocation
of reset method. Reformat a comment.
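The ordering point looks roughly like this (a hedged sketch of mcxt.c's
logic, not the literal code): clearing isReset after calling the alloc
method both prevents a tail call and quietly assumes a failed allocation
left the context untouched, so clear it first.

    /* Hedged sketch of the ordering; not the literal MemoryContextAlloc code. */
    #include "postgres.h"
    #include "nodes/memnodes.h"         /* MemoryContextData, methods */

    static void *
    context_alloc_sketch(MemoryContext context, Size size)
    {
        /*
         * isReset only means "no allocations since the last reset", which
         * stops being true the moment we attempt one, so clear it before
         * the call; that leaves the alloc call in tail position and makes
         * no assumption about the context's state after a failed alloc.
         */
        context->isReset = false;

        return (*context->methods->alloc) (context, size);
    }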
Tom Lane [Mon, 23 May 2011 20:34:27 +0000 (16:34 -0400)]
Make plpgsql complain about conflicting IN and OUT parameter names.
The core CREATE FUNCTION code only enforces that IN parameter names are
non-duplicate, and that OUT parameter names are separately non-duplicate.
This is because some function languages might not have any confusion
between the two. But in plpgsql, such names are all in the same namespace,
so we'd better disallow it.
Per a recent complaint from Dan S. Not back-patching since this is a small
issue and the change could cause unexpected failures if we started to
enforce it in a minor release.