Tom Lane [Sat, 12 Mar 2005 21:33:55 +0000 (21:33 +0000)]
When cloning template0 (or other fully-frozen databases), set the new
database's datvacuumxid and datfrozenxid to the current transaction ID
instead of copying the source database's values. This is OK because we
assume the source DB contains no normal transaction IDs whatsoever.
This keeps VACUUM from immediately starting to complain about unvacuumed
databases in the situation where we are more than 2 billion transactions
out from the XID stamp of template0. Per discussion with Milen Radev
(although his complaint turned out to be due to something else, the
problem is real anyway).
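The mechanism, as a hedged sketch (hypothetical helper, not the actual
dbcommands.c code; how "fully frozen" is detected is simply taken as a
flag here):

    #include "postgres.h"
    #include "access/xact.h"        /* GetCurrentTransactionId() */

    /*
     * Illustrative sketch only.  When the source database is known to
     * contain no normal XIDs (template0-style), the copy can safely be
     * stamped with the current XID, so VACUUM never thinks the new
     * database is billions of transactions overdue.
     */
    static void
    choose_new_db_xids(bool src_fully_frozen,
                       TransactionId src_vacuumxid,
                       TransactionId src_frozenxid,
                       TransactionId *dst_vacuumxid,
                       TransactionId *dst_frozenxid)
    {
        if (src_fully_frozen)
        {
            *dst_vacuumxid = GetCurrentTransactionId();
            *dst_frozenxid = GetCurrentTransactionId();
        }
        else
        {
            /* normal case: inherit the source database's XID horizons */
            *dst_vacuumxid = src_vacuumxid;
            *dst_frozenxid = src_frozenxid;
        }
    }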
Tom Lane [Sat, 12 Mar 2005 21:11:50 +0000 (21:11 +0000)]
Fix ALTER DATABASE RENAME to allow the operation if the user is a superuser
who for some reason isn't marked usecreatedb. Per report from Alexander
Pravking. Also fix sloppy coding in have_createdb_privilege().
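A hedged sketch of the corrected privilege test (illustrative only; the
real have_createdb_privilege() looks the flag up in pg_shadow rather
than taking it as an argument):

    #include "postgres.h"
    #include "miscadmin.h"          /* superuser() */

    /* Illustrative sketch, not the actual dbcommands.c code. */
    static bool
    have_createdb_privilege_sketch(bool usecreatedb)
    {
        /*
         * Superusers may create or rename databases even if their account
         * was never marked usecreatedb.
         */
        if (superuser())
            return true;

        /* Everyone else needs the explicit flag. */
        return usecreatedb;
    }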
Tom Lane [Sat, 12 Mar 2005 20:25:06 +0000 (20:25 +0000)]
Adjust the API for aggregate function calls so that a C-coded function
can tell whether it is being used as an aggregate or not. This allows
such a function to avoid re-pallocing a pass-by-reference transition
value; normally it would be unsafe for a function to scribble on an input,
but in the aggregate case it's safe to reuse the old transition value.
Make int8inc() do this. This gets a useful improvement in the speed of
COUNT(*), at least on narrow tables (it seems to be swamped by I/O when
the table rows are wide). Per a discussion in early December with
Neil Conway. I also fixed int_aggregate.c to check this, thereby
turning it into something approaching a supportable technique instead
of being a crude hack.
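A hedged sketch of the technique, assuming the mechanism is the executor
exposing its AggState node through fcinfo->context (later servers wrap
the same test in AggCheckCallContext()); overflow checking is omitted:

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/execnodes.h"    /* AggState */

    PG_FUNCTION_INFO_V1(int8_count_step);

    /*
     * Sketch of an int8-counting transition function.  Assumes int8 is
     * pass-by-reference, as it was on every platform at the time.
     */
    Datum
    int8_count_step(PG_FUNCTION_ARGS)
    {
        if (fcinfo->context && IsA(fcinfo->context, AggState))
        {
            /*
             * Aggregate call: the transition value belongs to nodeAgg
             * alone, so it is safe to increment it in place and skip a
             * palloc per input row.
             */
            int64  *arg = (int64 *) PG_GETARG_POINTER(0);

            *arg += 1;
            PG_RETURN_POINTER(arg);
        }
        else
        {
            /* Ordinary call: inputs are read-only, return a fresh value. */
            int64   arg = PG_GETARG_INT64(0);

            PG_RETURN_INT64(arg + 1);
        }
    }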
Neil Conway [Sat, 12 Mar 2005 06:53:54 +0000 (06:53 +0000)]
Some builds (depending on crypto engine support?) of OpenSSL
0.9.7x have an EVP_DigestFinal function which clears all of
the EVP_MD_CTX. This makes pgcrypto crash in functions which
re-use one digest context several times: hmac() and crypt()
with the md5 algorithm.
The following patch fixes it by carrying the digest info around
EVP_DigestFinal and re-initializing the digest context.
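A hedged sketch of the workaround against the 0.9.7-era EVP interface
(standalone C, not the pgcrypto source; newer OpenSSL makes EVP_MD_CTX
opaque, so the stack-allocated context is specific to that era):

    #include <openssl/evp.h>

    /*
     * Sketch only: remember the digest algorithm next to the context, so
     * the context can be re-initialized after every EVP_DigestFinal,
     * which some 0.9.7 builds leave completely cleared.
     */
    typedef struct
    {
        const EVP_MD   *algo;   /* remembered digest algorithm */
        EVP_MD_CTX      ctx;    /* 0.9.7-era contexts are plain structs */
    } ReusableDigest;

    static void
    digest_init(ReusableDigest *d, const EVP_MD *algo)
    {
        d->algo = algo;
        EVP_DigestInit(&d->ctx, d->algo);
    }

    static void
    digest_update(ReusableDigest *d, const unsigned char *data, unsigned int len)
    {
        EVP_DigestUpdate(&d->ctx, data, len);
    }

    static unsigned int
    digest_finish(ReusableDigest *d, unsigned char *out)
    {
        unsigned int    outlen = 0;

        EVP_DigestFinal(&d->ctx, out, &outlen);

        /* Do not trust whatever EVP_DigestFinal left behind: start over. */
        EVP_DigestInit(&d->ctx, d->algo);
        return outlen;
    }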
Tom Lane [Sat, 12 Mar 2005 01:54:44 +0000 (01:54 +0000)]
Fix problem with infinite recursion between write_syslogger_file and
elog if the former has trouble writing its file. Code review for
Magnus' patch to redirect stderr to syslog on Windows (Bruce's version
seemed right, but I did some minor prettification).
Tom Lane [Thu, 10 Mar 2005 23:21:26 +0000 (23:21 +0000)]
Make the behavior of HAVING without GROUP BY conform to the SQL spec.
Formerly, if such a clause contained no aggregate functions we mistakenly
treated it as equivalent to WHERE. Per spec it must cause the query to
be treated as a grouped query of a single group, the same as the
appearance of aggregate functions would. Also, the HAVING filter must execute
after aggregate function computation even if it itself contains no
aggregate functions.
Neil Conway [Thu, 10 Mar 2005 07:14:03 +0000 (07:14 +0000)]
Refactor fork()-related code. We need to do various housekeeping tasks
before we can invoke fork() -- flush stdio buffers, save and restore the
profiling timer on Linux with LINUX_PROFILE, and handle BeOS stuff. This
patch moves that code into a single function, fork_process(), instead of
duplicating it at the various callsites of fork().
This patch doesn't address the EXEC_BACKEND case; there is room for
further cleanup there.
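A hedged sketch of the wrapper's shape (standalone C; the profiling-timer
and BeOS steps are only noted in comments):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    /*
     * Sketch of a fork() wrapper that centralizes pre-fork housekeeping.
     * The real fork_process() also saves and restores the ITIMER_PROF
     * timer under LINUX_PROFILE and handles BeOS; both are omitted here.
     * Returns exactly what fork() returns.
     */
    static pid_t
    fork_process_sketch(void)
    {
        /*
         * Flush stdio buffers first, so buffered output is not emitted
         * twice, once from the parent and once from the child.
         */
        fflush(NULL);

        return fork();
    }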
Tom Lane [Mon, 7 Mar 2005 04:42:17 +0000 (04:42 +0000)]
Adjust creation/destruction of TupleDesc data structure to reduce the
number of palloc calls. This has a salutary impact on plpgsql operations
with record variables (which create and destroy tupdescs constantly)
and probably helps a bit in some other cases too.
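A hedged sketch of the allocation pattern with hypothetical structs (not
the real TupleDesc): header and per-column entries come from one palloc,
so teardown is one pfree.

    #include "postgres.h"

    /* Hypothetical row-descriptor types, for illustration only. */
    typedef struct ColumnInfoData
    {
        char    name[NAMEDATALEN];
        Oid     typid;
    } ColumnInfoData;

    typedef struct RowDescData
    {
        int             natts;
        ColumnInfoData  attrs[1];   /* variable length, old-style */
    } RowDescData;

    typedef RowDescData *RowDesc;

    static RowDesc
    create_row_desc(int natts)
    {
        /* One palloc covers the header plus all column slots. */
        RowDesc desc = (RowDesc) palloc0(offsetof(RowDescData, attrs) +
                                         natts * sizeof(ColumnInfoData));

        desc->natts = natts;
        return desc;
    }

    static void
    free_row_desc(RowDesc desc)
    {
        pfree(desc);            /* and one pfree tears it all down */
    }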
Tom Lane [Sun, 6 Mar 2005 22:15:05 +0000 (22:15 +0000)]
Revise hash join code so that we can increase the number of batches
on-the-fly, and thereby avoid blowing out memory when the planner has
underestimated the hash table size. Hash join will now obey the
work_mem limit with some faithfulness. Per my recent proposal
(hash aggregate part isn't done yet though).
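A hedged sketch of the batch-splitting idea, shown for a single bucket
chain with hypothetical structures (the real nodeHash.c derives bucket
and batch numbers from different bits of the hash value, which is
glossed over here):

    #include <stdint.h>
    #include <stddef.h>

    /* Minimal stand-in for a hashed tuple kept in the in-memory table. */
    typedef struct HashTuple
    {
        uint32_t            hashvalue;
        struct HashTuple   *next;
    } HashTuple;

    /* nbatch is kept a power of two, so masking picks the batch number. */
    static int
    batch_for_hash(uint32_t hashvalue, int nbatch)
    {
        return (int) (hashvalue & (uint32_t) (nbatch - 1));
    }

    /*
     * Called when the current in-memory batch outgrows its memory budget:
     * double the number of batches and evict every tuple that no longer
     * maps to the current batch.  dump_to_batch_file() stands in for
     * writing a tuple to the temp file of a later batch.
     */
    static void
    increase_num_batches(HashTuple **bucket_head, int *nbatch, int curbatch,
                         void (*dump_to_batch_file)(HashTuple *, int))
    {
        HashTuple  *tup = *bucket_head;
        HashTuple  *keep = NULL;

        *nbatch *= 2;

        while (tup != NULL)
        {
            HashTuple  *next = tup->next;
            int         batchno = batch_for_hash(tup->hashvalue, *nbatch);

            if (batchno == curbatch)
            {
                tup->next = keep;       /* still ours: keep it in memory */
                keep = tup;
            }
            else
                dump_to_batch_file(tup, batchno);   /* roughly half spill */

            tup = next;
        }
        *bucket_head = keep;
    }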
Tom Lane [Fri, 4 Mar 2005 20:21:07 +0000 (20:21 +0000)]
Replace the BufMgrLock with separate locks on the lookup hashtable and
the freelist, plus per-buffer spinlocks that protect access to individual
shared buffer headers. This requires abandoning a global freelist (since
the freelist is a global contention point), which shoots down ARC and 2Q
as well as plain LRU management. Adopt a clock sweep algorithm instead.
Preliminary results show substantial improvement in multi-backend situations.
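A hedged sketch of the clock sweep itself (standalone C; per-buffer
spinlock acquisition and the everything-is-pinned error path are only
noted in comments):

    /*
     * Sketch of a clock-sweep victim search.  The real bufmgr takes each
     * buffer's spinlock before inspecting its header, and errors out if
     * it finds that every buffer is pinned; both are omitted here.
     */
    typedef struct BufferHeader
    {
        int     refcount;       /* pins currently held by backends */
        int     usage_count;    /* bumped on access, decayed by the sweep */
    } BufferHeader;

    static int
    clock_sweep(BufferHeader *buffers, int nbuffers, int *next_victim)
    {
        for (;;)
        {
            BufferHeader   *buf = &buffers[*next_victim];
            int             this_buf = *next_victim;

            /* advance the clock hand, wrapping around the pool */
            *next_victim = (*next_victim + 1) % nbuffers;

            if (buf->refcount == 0)
            {
                if (buf->usage_count == 0)
                    return this_buf;    /* unpinned and unused: victim */
                buf->usage_count--;     /* recently used: one more lap */
            }
            /* pinned buffers are passed over without touching usage_count */
        }
    }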
Tom Lane [Wed, 2 Mar 2005 04:10:53 +0000 (04:10 +0000)]
Another go at making pred_test() handle all reasonable combinations
of AND and OR clauses. The key point here is that an OR on the
predicate side has to be treated gingerly: we may be able to prove
that the OR is implied even when no one of its components is implied.
For example (x OR y) implies (x OR y OR z) even though no one of x,
y, or z can be individually proven. This code handles both the
example shown recently by Sergey Koshcheyev and the one shown last
October by Dawid Kuroczko.
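A hedged sketch of the OR-handling rule over a hypothetical expression
representation (the real pred_test() works on planner clause trees, and
its leaf-level proof is far more involved than the placeholder below):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical node: either an OR of children, or an atom. */
    typedef struct Expr
    {
        bool                is_or;
        int                 nargs;
        const struct Expr **args;
    } Expr;

    /*
     * Placeholder for the leaf-level proof; the real code reasons about
     * operator semantics, while this sketch just treats identical atoms
     * as implied.
     */
    static bool
    atom_implies(const Expr *clause, const Expr *predicate)
    {
        return clause == predicate;
    }

    /* Does 'clause' imply 'predicate'? */
    static bool
    implies(const Expr *clause, const Expr *predicate)
    {
        int     i;

        if (clause->is_or)
        {
            /* An OR clause implies P only if every one of its arms does. */
            for (i = 0; i < clause->nargs; i++)
                if (!implies(clause->args[i], predicate))
                    return false;
            return true;
        }

        if (predicate->is_or)
        {
            /* A non-OR clause implies an OR predicate if it implies any arm... */
            for (i = 0; i < predicate->nargs; i++)
                if (implies(clause, predicate->args[i]))
                    return true;
            /* ...otherwise fall through and try the atoms directly. */
        }

        return atom_implies(clause, predicate);
    }

Under that rule, (x OR y) implies (x OR y OR z) because each of x and y
individually implies the predicate OR, even though no single predicate
arm is implied by the clause as a whole.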
Tom Lane [Tue, 1 Mar 2005 21:14:59 +0000 (21:14 +0000)]
Release proclock immediately in RemoveFromWaitQueue() if it represents
no held locks. This maintains the invariant that proclocks are present
only for procs that are holding or awaiting a lock; when this is not
true, LockRelease will fail. Per report from Stephen Clouse.
Tom Lane [Tue, 1 Mar 2005 01:40:05 +0000 (01:40 +0000)]
Adjust OR indexscan logic to not generate redundant condition-free OR
indexscans involving partial indexes. These would always be dominated
by a simple indexscan on such an index, so there's no point in considering
them. Fixes overoptimism in a patch I applied last October.
Tom Lane [Tue, 1 Mar 2005 00:24:52 +0000 (00:24 +0000)]
Revert the logic for expanding AND/OR conditions in pred_test() to what
it was in 7.4, and add some comments explaining why it has to be this way.
I broke it for OR'd index predicates in a fit of code cleanup last summer.
Per example from Sergey Koshcheyev.
Neil Conway [Mon, 28 Feb 2005 03:45:24 +0000 (03:45 +0000)]
Implement max() and min() aggregates for array types. Patch from Koju
Iijima, reviewed by Neil Conway. Catalog version number bumped,
regression tests updated.
Bruce Momjian [Sat, 26 Feb 2005 23:19:05 +0000 (23:19 +0000)]
Adjust OS-specific kernel settings to mention old and new BSD methods of
adjusting values:
> But to be on the safe side, it would make sense to do something similar
> to the BSD section, and comment about older distributions maybe needing
> to manipulate /proc/kernel/* directly.
Tom Lane [Sat, 26 Feb 2005 18:43:34 +0000 (18:43 +0000)]
Finish up the flat-files project: get rid of GetRawDatabaseInfo() hack
in favor of looking at the flat file copy of pg_database during backend
startup. This should finally eliminate the various corner cases in which
backend startup fails unexpectedly because it isn't able to distinguish
live and dead tuples in pg_database. Simplify locking on pg_database
to be similar to the rules used with pg_shadow and pg_group, and eliminate
FlushRelationBuffers operations that were used only to reduce the odds
of failure of GetRawDatabaseInfo.
initdb forced due to addition of a trigger to pg_database.
Bruce Momjian [Sat, 26 Feb 2005 18:39:04 +0000 (18:39 +0000)]
Clarify PGPASSWORD usage:
! authentication. Use of this environment variable is not
! recommended for security reasons (some operating systems
! allow non-root users to see process environment variables via
! <application>ps</>); instead consider using the
! <filename>~/.pgpass</> file (see <xref linkend="libpq-pgpass">).
Neil Conway [Wed, 23 Feb 2005 22:46:17 +0000 (22:46 +0000)]
This patch optimizes the md5_text() function (which is used to
implement the md5() SQL-level function). The old code did the
following:
1. de-toast the datum
2. convert it to a cstring via textout()
3. get the length of the cstring via strlen()
Since we are treating the datum's contents as a blob of binary data,
the latter two steps are unnecessary. Once the data has been
detoasted, we can just use it as-is, and derive its length from
the varlena metadata.
This patch improves some run-of-the-mill md5() computations by
just under 10% in my limited tests, and passes the regression tests.
I also noticed that md5_text() wasn't checking the return value
of md5_hash(); encountering OOM at precisely the right moment
could result in returning a random md5 hash. This patch corrects
that. A better fix would be to make md5_hash() only return on
success (and/or allocate via palloc()), but since it's used in
the frontend as well I don't see an easy way to do that.
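A hedged sketch of the optimized flow (written against present-day
varlena macros rather than the 2005 ones, and with an assumed prototype
for the md5_hash() helper mentioned above):

    #include "postgres.h"
    #include "fmgr.h"

    /*
     * Assumed prototype for the backend's md5_hash() helper: hashes 'len'
     * bytes and writes a 32-character hex digest plus NUL into 'hexsum',
     * returning false on failure.
     */
    extern bool md5_hash(const void *buff, size_t len, char *hexsum);

    PG_FUNCTION_INFO_V1(md5_text_sketch);

    Datum
    md5_text_sketch(PG_FUNCTION_ARGS)
    {
        text       *in_text = PG_GETARG_TEXT_P(0);      /* detoasted for us */
        size_t      len = VARSIZE(in_text) - VARHDRSZ;  /* from varlena header */
        char        hexsum[32 + 1];
        text       *result;

        /* Treat the contents as raw bytes: no textout(), no strlen(). */
        if (!md5_hash(VARDATA(in_text), len, hexsum))
            ereport(ERROR,
                    (errcode(ERRCODE_OUT_OF_MEMORY),
                     errmsg("out of memory")));

        /* Build the 32-character text result directly. */
        result = (text *) palloc(VARHDRSZ + 32);
        SET_VARSIZE(result, VARHDRSZ + 32);
        memcpy(VARDATA(result), hexsum, 32);
        PG_RETURN_TEXT_P(result);
    }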
Neil Conway [Tue, 22 Feb 2005 07:18:27 +0000 (07:18 +0000)]
This patch makes some significant changes to how compilation
and parsing work in PL/PgSQL:
- memory management is now done via palloc(). The compiled representation
of each function now has its own memory context. Therefore, the storage
consumed by a function can be reclaimed via MemoryContextDelete().
During compilation, the CurrentMemoryContext is the function's memory
context. This means that a palloc() is sufficient to allocate memory
that will have the same lifetime as the function itself. As a result,
code invoked during compilation should be careful to pfree() temporary
allocations to avoid leaking memory. Since a lot of the code in the
backend is not careful about releasing palloc'ed memory, that means
we should switch into a temporary memory context before invoking
backend functions. A temporary context appropriate for such allocations
is `compile_tmp_cxt' (see the sketch after this list).
- The ability to use palloc() allows us to simplify a lot of the code in
the parser. Rather than representing lists of elements via ad hoc
linked lists or arrays, we can use the List type. Rather than doing
malloc followed by memset(0), we can just use palloc0().
- We now check that the user has supplied the right number of parameters
to a RAISE statement. Supplying either too few or too many results in
an error (at runtime).
- PL/PgSQL's parser needs to accept arbitrary SQL statements. Since we
do not want to duplicate the SQL grammar in the PL/PgSQL grammar, this
means we need to be quite lax in what the PL/PgSQL grammar considers
a "SQL statement". This can lead to misleading behavior if there is a
syntax error in the function definition, since we assume a malformed
PL/PgSQL construct is a SQL statement. Furthermore, these errors were
only detected at runtime (when we tried to execute the alleged "SQL
statement" via SPI).
To rectify this, the patch changes the parser to invoke the main SQL
parser when it sees a string it believes to be a SQL expression. This
means that syntactically invalid SQL will be rejected during the
compilation of the PL/PgSQL function. This is only done when compiling
for "validation" purposes (i.e. at CREATE FUNCTION time), so it should
not impose a runtime overhead.
- Fixes for the various buffer overruns I've patched in stable branches
in the past few weeks. I've rewritten code where I thought it was
warranted (unlike the patches applied to older branches, which were
minimally invasive).
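A minimal sketch of the per-function memory context arrangement
described in the first item above (hypothetical structure names, written
against the current memory-context API):

    #include "postgres.h"
    #include "utils/memutils.h"     /* AllocSetContextCreate, TopMemoryContext */

    /* Hypothetical stand-in for a compiled function's representation. */
    typedef struct CompiledFunc
    {
        MemoryContext   func_cxt;   /* owns every allocation made below */
        /* ... parse tree, datum list, etc. would hang off this struct ... */
    } CompiledFunc;

    static CompiledFunc *
    compile_function_sketch(void)
    {
        MemoryContext   func_cxt;
        MemoryContext   oldcxt;
        CompiledFunc   *func;

        func_cxt = AllocSetContextCreate(TopMemoryContext,
                                         "sketch: compiled function",
                                         ALLOCSET_DEFAULT_SIZES);

        /*
         * While this is CurrentMemoryContext, a bare palloc() gives
         * storage with the same lifetime as the compiled function.
         */
        oldcxt = MemoryContextSwitchTo(func_cxt);

        func = (CompiledFunc *) palloc0(sizeof(CompiledFunc));
        func->func_cxt = func_cxt;
        /* ... compile into func here ... */

        MemoryContextSwitchTo(oldcxt);
        return func;
    }

    static void
    free_function_sketch(CompiledFunc *func)
    {
        /* One call reclaims the entire compiled representation. */
        MemoryContextDelete(func->func_cxt);
    }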