Andres Freund [Wed, 29 Mar 2017 20:16:30 +0000 (13:16 -0700)]
Move contrib/seg to only use V1 calling conventions.
A later commit will remove V0 support.
Author: Andres Freund, with contributions by Craig Ringer
Reviewed-By: Peter Eisentraut, Craig Ringer
Discussion: https://postgr.es/m/20161208213441.k3mbno4twhg2qf7g@alap3.anarazel.de
Peter Eisentraut [Wed, 29 Mar 2017 19:17:14 +0000 (15:17 -0400)]
pg_dump: Remove query truncation in error messages
Remove the behavior that a query mentioned in an error message would be
truncated to 128 characters. The queries that pg_dump runs are often
longer than that, and this behavior made analyzing failures
unnecessarily hard.
Alvaro Herrera [Wed, 29 Mar 2017 15:18:48 +0000 (12:18 -0300)]
Simplify check of modified attributes in heap_update
The old coding was getting more complicated as new things were added,
and it would be barely tolerable with upcoming WARM updates and other
future features such as indirect indexes. The new coding incurs a small
performance cost in synthetic benchmark cases, and is barely measurable
in normal cases. A much larger benefit is expected from WARM, which
could actually bolt its needs on top of the existing coding, but doing so
would be much uglier and more bug-prone than building on this new code. Additional
optimization can be applied on top of this, if need be.
Reviewed-by: Pavan Deolasee, Amit Kapila, Mithun CY
Discussion: https://postgr.es/m/20161228232018.4hc66ndrzpz4g4wn@alvherre.pgsql
https://postgr.es/m/CABOikdMJfz69dBNRTOZcB6s5A0tf8OMCyQVYQyR-WFFdoEwKMQ@mail.gmail.com
Robert Haas [Wed, 29 Mar 2017 14:59:27 +0000 (10:59 -0400)]
Mark more functions parallel-restricted.
Commit 61c2e1a95f94bb904953a6281ce17a18ac38ee6d allowed parallel
query to be used in more places, revealing via buildfarm member
mandrill that several functions intended to be called from triggers
were incorrectly marked parallel-safe rather than parallel-restricted.
Report by Tom Lane. Patch by Rafia Sabih. Reviewed by me.
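For illustration only (the function name here is hypothetical; the commit
itself fixes the markings of built-in catalog entries), this is how a
function's parallel safety is declared in SQL:

    ALTER FUNCTION my_trigger_helper() PARALLEL RESTRICTED;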
Robert Haas [Wed, 29 Mar 2017 13:44:29 +0000 (09:44 -0400)]
Plug race in dsa_attach.
With sufficiently bad luck, it was possible for a parallel worker to
attempt to attach to a DSA area after all other backends had detached
from it, which is not legal. If the worker had waited a little longer
to get started, the DSM itself would have been destroyed, which is why
this wasn't noticed before.
Thomas Munro, per a report from Andreas Seltenreich
copyObject() is declared to return void *, which allows easily assigning
the result independent of the input, but it loses all type checking.
If the compiler supports typeof or something similar, cast the result to
the input type. This provides more type safety. In some
cases, where the result is assigned to a generic type such as Node * or
Expr *, new casts are now necessary, but in the normal case casts are now
unnecessary, and a remaining cast indicates that something unusual is
happening.
Reviewed-by: Mark Dilger <hornschnorter@gmail.com>
Peter Eisentraut [Wed, 29 Mar 2017 00:38:06 +0000 (20:38 -0400)]
Change 'diag' to 'note' in TAP tests
Reduce noise from TAP tests by changing 'diag' to 'note', so output only
goes to the test's log file not stdout, unless in verbose mode. This
also removes the junk on screen when running the TAP tests in parallel.
Alvaro Herrera [Fri, 24 Mar 2017 22:27:22 +0000 (19:27 -0300)]
Allow DSM segments to be created as pinned
dsm_create and dsm_attach assumed that a current resource owner was
always in place. Exploration with the API shows that this is
inconvenient: sometimes one must create a dummy resowner and create/attach
the DSM, only to pin the mapping later, which is wasteful. Change
create/attach so that if there is no current resowner, the DSM is
effectively pinned right from the start.
Discussion: https://postgr.es/m/20170324232710.32acsfsvjqfgc6ud@alvherre.pgsql
Reviewed by Thomas Munro.
Tom Lane [Tue, 28 Mar 2017 22:05:03 +0000 (18:05 -0400)]
Make new expression eval code reject references to dropped columns.
Formerly, a Var referencing an already-dropped column was allowed and would
always produce a NULL value. However, that behavior was implemented in
slot_getattr which the new expression code doesn't use; thus there is now a
risk of returning theoretically-deleted data. We had regression test cases
that purported to exercise this, but they failed to expose any problem,
apparently because plpgsql filters the dropped column and produces an
output tuple that has a NULL there already.
Ideally the DROP or ALTER attempt in these test cases would get rejected
due to dependency checks; but until that happens, let's modify the behavior
so that we fail the query during executor start. This was already true for
the related case of a column having changed type underneath us, and there's
no obvious reason why we need to be laxer for dropped columns.
In passing, adjust the error messages in CheckVarSlotCompatibility to
include the composite type name. In the cases shown in the regression
tests this is always just "record", but it should be more useful in
actual stale-plan cases, where the slot tupdesc would be a table's
tupdesc directly.
Alvaro Herrera [Tue, 28 Mar 2017 15:52:55 +0000 (12:52 -0300)]
Remove direct uses of ItemPointer.{ip_blkid,ip_posid}
There are no functional changes here; this simply encapsulates knowledge
of the ItemPointerData struct so that a future patch can change things
without more breakage.
All direct users of ip_blkid and ip_posid are changed to use existing
macros ItemPointerGetBlockNumber and ItemPointerGetOffsetNumber
respectively. For callers where that's inappropriate (because they
Assert that the ItemPointer is valid-looking), add
ItemPointerGetBlockNumberNoCheck and ItemPointerGetOffsetNumberNoCheck,
which lack the assertion but are otherwise identical.
Tom Lane [Tue, 28 Mar 2017 17:16:19 +0000 (13:16 -0400)]
Suppress implicit-conversion warnings seen with newer clang versions.
We were assigning values near 255 through "char *" pointers. On machines
where char is signed, that's not entirely kosher, and it's reasonable
for compilers to warn about it.
A better solution would be to change the pointer type to "unsigned char *",
but that would be vastly more invasive. For the moment, let's just apply
this simple backpatchable solution.
Peter Eisentraut [Tue, 28 Mar 2017 15:08:38 +0000 (11:08 -0400)]
dblink: Fix error reporting
The conname variable was not initialized in some code paths, resulting
in error reports referring to the "unnamed" connection rather than the
correct connection name.
Simon Riggs [Tue, 28 Mar 2017 14:05:21 +0000 (10:05 -0400)]
Cleanup slots during drop database
Automatically drop all logical replication slots associated with a
database when the database is dropped. Previously we threw an ERROR
if a slot existed. Now we throw ERROR only if a slot is active in
the database being dropped.
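As a rough sketch of the user-visible difference (the database name is
hypothetical):

    -- Before: failed if mydb had any logical replication slots.
    -- Now: fails only if a slot in mydb is still active.
    DROP DATABASE mydb;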
Peter Eisentraut [Tue, 28 Mar 2017 13:00:59 +0000 (09:00 -0400)]
Fix Perl code which had broken the Windows build
The previous change wanted to avoid modifying $_ in grep, but the code
just made the change in a local variable and then lost it. Rewrite the
code using a separate map and grep, which is clearer anyway.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Tom Lane [Tue, 28 Mar 2017 00:14:36 +0000 (20:14 -0400)]
Show ignored constants as "$N" rather than "?" in pg_stat_statements.
The trouble with the original choice here is that "?" is a valid (and
indeed used) operator name, so that you could end up with ambiguous
statement texts like "SELECT ? ? ?". With this patch, you instead
see "SELECT $1 ? $2", which seems significantly more readable.
The numbers used for this purpose begin after the last actual $N parameter
in the particular query. The conflict with external parameters has its own
potential for confusion of course, but it was agreed to be an improvement
over the previous behavior.
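For illustration, a minimal sketch (assuming the pg_stat_statements
extension is installed; table and column names are hypothetical) of how a
constant-bearing statement is now displayed:

    SELECT * FROM t WHERE a = 1 AND b = 2;
    -- previously shown in pg_stat_statements.query as:
    --   SELECT * FROM t WHERE a = ? AND b = ?
    -- now shown as:
    --   SELECT * FROM t WHERE a = $1 AND b = $2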
Alvaro Herrera [Mon, 27 Mar 2017 17:52:19 +0000 (14:52 -0300)]
Fix uninitialized memory propagation mistakes
Valgrind complains that some uninitialized bytes are being passed around
by the extended statistics code since commit 7b504eb282ca2f, as reported
by Andres Freund. Silence it.
Tomas Vondra submitted a patch which he verified to fix the complaints
on his machine; however, I messed with it a bit before pushing, so any
remaining problems are likely my (Álvaro's) fault.
Author: Tomas Vondra
Discussion: https://postgr.es/m/20170325211031.4xxoptigqxm2emn2@alap3.anarazel.de
Robert Haas [Mon, 27 Mar 2017 16:50:51 +0000 (12:50 -0400)]
Still more code review for single-page hash vacuuming.
Most seriously, fix use of incorrect block ID, per a report from
Jeff Janes that it causes a crash and a diagnosis from Amit Kapila.
Improve consistency between the hash and btree versions of this
code by adding back a PANIC that btree has, and by registering
data in the xlog record in the same way, per complaints from
Jeff Janes and Amit Kapila.
Tidy up some minor cosmetic points, per complaints from Amit
Kapila.
Patch by Ashutosh Sharma, reviewed by Amit Kapila, and tested by
Jeff Janes.
Teodor Sigaev [Mon, 27 Mar 2017 16:33:01 +0000 (19:33 +0300)]
Fsync directory after creating or unlinking file.
If a file was created or deleted just before a power loss, it is possible
that the file system will lose that change. To prevent this, call fsync()
on the containing directory where creating or unlinking a file is critical.
Author: Michael Paquier
Reviewed-by: Ashutosh Bapat, Takayuki Tsunakawa, me
Alvaro Herrera [Mon, 27 Mar 2017 15:52:50 +0000 (12:52 -0300)]
Fix thinko in estimate_num_groups
The code for the reworked n-distinct estimation on commit 7b504eb282 was
written differently in a previous version of the patch, prior to commit;
on rewriting it, we missed updating an initializer. This caused the
code to (mistakenly) apply a fudge factor even in the case where a
single value is applied, leading to incorrect results.
This means that the 'relvarcount' variable name is now wrong. Add a
comment to try and make the situation clearer, and remove an incorrect
comment I added.
Problem noticed, and code patch, by Tomas Vondra. Additional commentary
by Álvaro.
Alvaro Herrera [Mon, 27 Mar 2017 15:40:42 +0000 (12:40 -0300)]
Rework the stats_ext test
As suggested by Tom Lane, avoid printing specific estimated cost values,
because they vary across architectures; instead, verify plan shapes (in
this case, HashAggregate vs. GroupAggregate), as we do in other planner
tests.
Peter Eisentraut [Mon, 27 Mar 2017 14:34:33 +0000 (10:34 -0400)]
Change default of log_directory to 'log'
The previous default 'pg_log', with its "pg_" prefix, might have suggested
that it is an internal system directory. The new default is more in
line with the typical naming of directories with user-facing log files.
Together with the renaming of pg_clog and pg_xlog, this should clear up
that difference.
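A quick way to check the effective setting (output assumes an unmodified
configuration):

    SHOW log_directory;
     log_directory
    ---------------
     log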
Alvaro Herrera [Mon, 27 Mar 2017 03:53:59 +0000 (00:53 -0300)]
Fix a couple of problems in pg_get_statisticsextdef
There was a thinko whereby we tested the wrong tuple after fetching it
from cache; avoid that by using generate_relation_name instead, which is
simpler. Also, the statistics name was not qualified, so add that. (It
could be argued that qualification should be conditional on the schema
not being on search path. We can add that later, but at least this form
is correct.)
Author: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f8RjLeVZJ2+93pdQGuZJeBF-ifsHaFMR-q-6-Z0qxA8cA@mail.gmail.com
Andrew Gierth [Mon, 27 Mar 2017 03:20:54 +0000 (04:20 +0100)]
Support hashed aggregation with grouping sets.
This extends the Aggregate node with two new features: HashAggregate
can now run multiple hashtables concurrently, and a new strategy
MixedAggregate populates hashtables while doing sorted grouping.
The planner will now attempt to save as many sorts as possible when
planning grouping sets queries, while not exceeding work_mem for the
estimated combined sizes of all hashtables used. No SQL-level changes
are required. There should be no user-visible impact other than the
new EXPLAIN output and possible changes to result ordering when ORDER
BY was not used (which affected a few regression tests). The
enable_hashagg option is respected.
Author: Andrew Gierth
Reviewers: Mark Dilger, Andres Freund
Discussion: https://postgr.es/m/87vatszyhj.fsf@news-spur.riddles.org.uk
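A rough illustration (table and column names are hypothetical) of a grouping
sets query that can now use hashing, subject to work_mem and enable_hashagg;
the plan may show HashAggregate or the new MixedAggregate strategy:

    EXPLAIN (COSTS OFF)
    SELECT a, b, count(*)
    FROM t
    GROUP BY GROUPING SETS ((a), (b));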
Robert Haas [Mon, 27 Mar 2017 02:02:22 +0000 (22:02 -0400)]
Show more processes in pg_stat_activity.
Previously, auxiliary processes and background workers not connected
to a database (such as the logical replication launcher) weren't
shown. Include them, so that we can see the associated wait state
information. Add a new column to identify the process type, so that
people can filter them out easily using SQL if they wish.
Before this patch was written, there was discussion about whether we
should expose this information in a separate view, so as to avoid
contaminating pg_stat_activity with things people might not want to
see. But putting everything in pg_stat_activity was a more popular
choice, so that's what the patch does.
Kuntal Ghosh, reviewed by Amit Langote and Michael Paquier. Some
revisions and bug fixes by me.
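A minimal sketch of using the new information, assuming the new column is
named backend_type as in the released version:

    SELECT pid, backend_type, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE backend_type <> 'client backend';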
Tom Lane [Sun, 26 Mar 2017 23:14:47 +0000 (19:14 -0400)]
Improve performance of ExecEvalWholeRowVar.
In commit b8d7f053c, we needed to fix ExecEvalWholeRowVar to not change
the state of the slot it's copying. The initial quick hack at that
required two rounds of tuple construction, which is not very nice.
To fix, add another primitive to tuptoaster.c that does precisely what
we need. (I initially tried to do this by refactoring one of the
existing functions into two pieces; but it looked like that might hurt
performance for the existing case, and the amount of code that could
be shared is not very large, so I gave up on that.)
Tom Lane [Sun, 26 Mar 2017 22:14:03 +0000 (18:14 -0400)]
Use ExecPrepareExpr in place of ExecPrepareCheck where appropriate.
Change one more place where ExecInitCheck/ExecPrepareCheck's insistence
on getting implicit-AND-format quals wasn't really helpful, because the
caller had to do make_ands_implicit() for no reason that it cared about.
Using ExecPrepareExpr directly simplifies the code and saves cycles.
The only remaining use of these functions is to process
resultRelInfo->ri_PartitionCheck quals. However, implicit-AND format
does seem to be what we want for that, so leave it alone.
Tom Lane [Sun, 26 Mar 2017 21:35:35 +0000 (17:35 -0400)]
Fix unportable disregard of alignment requirements in RADIUS code.
The compiler is entitled to store a char[] local variable with no
particular alignment requirement. Our RADIUS code cavalierly took such
a local variable and cast its address to a struct type that does have
alignment requirements. On an alignment-picky machine this would lead
to bus errors. To fix, declare the local variable honestly, and then
cast its address to char * for use in the I/O calls.
Given the lack of field complaints, there must be very few if any
people affected; but nonetheless this is a clear portability issue,
so back-patch to all supported branches.
Noted while looking at a Coverity complaint in the same code.
Tom Lane [Sun, 26 Mar 2017 21:02:38 +0000 (17:02 -0400)]
Fix some minor resource leaks in PerformRadiusTransaction().
Failure to free serveraddrs pointed out by Coverity, failure to close
socket noted by code-reading. These bugs seem to be quite old, but
given the low probability of taking these error-exit paths and the
minimal consequences of the leaks (since the process would presumably
exit shortly anyway), it doesn't seem worth back-patching.
Tom Lane [Sun, 26 Mar 2017 19:57:02 +0000 (15:57 -0400)]
Improve implementation of EEOP_BOOLTEST_* opcodes.
Both Andres and I were happy with "*op->resvalue = *op->resvalue;",
but Coverity isn't; and it has a point, because some compilers might
not be smart enough to elide that. So remove it. In passing, also
avoid doing unnecessary assignments to *op->resnull when it's already
known to have the right value.
Peter Eisentraut [Sun, 26 Mar 2017 18:54:56 +0000 (14:54 -0400)]
doc: Clean up bibliography rendering for XSLT
In the DSSSL stylesheets, we had an extensive customization of the
bibliography rendering. Since the bibliography isn't used that much, it
doesn't seem worth doing an elaborate porting of that to XSLT. So this
just moves some things around, removes some unused things, and does some
minimal XSLT stylesheet customizations to make things look clean.
Andres Freund [Sat, 25 Mar 2017 22:32:01 +0000 (15:32 -0700)]
Remove unreachable code in expression evaluation.
The previous code still contained expression evaluation time support
for CaseExprs without a defresult. But transformCaseExpr() creates a
default expression if necessary.
Author: Andres Freund
Discussion: https://postgr.es/m/4834.1490480275@sss.pgh.pa.us
Andres Freund [Tue, 14 Mar 2017 22:45:36 +0000 (15:45 -0700)]
Faster expression evaluation and targetlist projection.
This replaces the old, recursive tree-walk based evaluation with
non-recursive, opcode-dispatch-based expression evaluation.
Projection is now implemented as part of expression evaluation.
This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.
The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
out operation metadata sequentially; including the avoidance of
nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
constant re-checks at evaluation time
Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.
The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
overhead of expression evaluation, by caching state in prepared
statements. That'd be helpful in OLTPish scenarios where
initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
work have already been made.
- optimizing the interpreter. Similarly a number of proposals have
been made here too.
The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
initialization, whereas previously they were done during
execution. In edge cases this can lead to errors being raised that
previously wouldn't have been, e.g. a NULL array being coerced to a
different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
during expression initialization, whereas previously it was rebuilt
every time a domain check was evaluated. For normal queries this
doesn't change much, but e.g. for plpgsql functions, which cache
ExprStates, the old set could stick around longer. The behavior
in this area might still change.
Author: Andres Freund, with significant changes by Tom Lane,
changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
Tom Lane [Sat, 25 Mar 2017 21:25:28 +0000 (17:25 -0400)]
Remember to drop roles created by regression tests.
Commit e3920ac82 created "regress_subscription_user2" in subscription.sql,
but forgot to drop it, causing the regression tests to fail if run twice
without re-initdb'ing.
Simon Riggs [Sat, 25 Mar 2017 14:07:27 +0000 (14:07 +0000)]
Report catalog_xmin separately in hot_standby_feedback
If the upstream walsender is using a physical replication slot, store the
catalog_xmin in the slot's catalog_xmin field. If the upstream doesn't use a
slot and has only a PGPROC entry, behaviour doesn't change, as we store the
combined xmin and catalog_xmin in the PGPROC entry.
Peter Eisentraut [Sat, 25 Mar 2017 04:28:28 +0000 (00:28 -0400)]
Remove ICU tests from default run
These tests require the test database to be in UTF8 encoding. Until
there is a better solution, take them out of the default test set and
treat them like the existing collate.linux.utf8 test, meaning they have to
be selected manually.
Alvaro Herrera [Fri, 24 Mar 2017 19:14:34 +0000 (16:14 -0300)]
Fix stats_ext test in 32 bit machines
Because tuple packing is different (because of the MAXALIGN difference),
the expected cost of a seqscan is different.
The commonly used trick of eliding costs in EXPLAIN output (COSTS OFF)
would make the tests completely pointless. Instead, add an alternative
expected file.
Robert Haas [Fri, 24 Mar 2017 18:46:33 +0000 (14:46 -0400)]
Improve access to parallel query from procedural languages.
In SQL, the ability to use parallel query was previously contingent on
fcache->readonly_func, which is only set for non-volatile functions;
but the volatility of a function has no bearing on whether queries
inside it can use parallelism. Remove that condition.
SPI_execute and SPI_execute_with_args always run the plan just once,
though not necessarily to completion. Given the changes in commit 691b8d59281b5177f16fe80858df921f77a8e955, it's sensible to pass
CURSOR_OPT_PARALLEL_OK here, so do that. This improves access to
parallelism for any caller that uses these functions to execute
queries. Such callers include plperl, plpython, pltcl, and plpgsql,
though it's not the case that they all use these functions
exclusively.
In plpgsql, allow parallel query for plain SELECT queries (as
opposed to PERFORM, which already worked) and for plain expressions
(which probably won't go through the executor at all, because they
will likely be simple expressions, but if they do then this helps).
Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila
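As a sketch (the function and table names are hypothetical, and whether a
parallel plan is actually chosen depends on the planner), a plain SELECT
inside a PL/pgSQL function can now be considered for parallelism:

    CREATE FUNCTION count_big() RETURNS bigint
    LANGUAGE plpgsql STABLE AS $$
    DECLARE
        n bigint;
    BEGIN
        SELECT count(*) INTO n FROM big_table;  -- plain SELECT; may now use a parallel plan
        RETURN n;
    END
    $$;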
Fujii Masao [Fri, 24 Mar 2017 17:39:44 +0000 (02:39 +0900)]
Make VACUUM VERBOSE report the number of skipped frozen pages.
Previously, manual VACUUM did not report the number of skipped frozen
pages even when the VERBOSE option was specified. This information is
helpful for monitoring VACUUM activity, and autovacuum already reports that
number in the log file when the log_autovacuum_min_duration condition
is met.
This commit changes VACUUM VERBOSE so that it reports the number
of frozen pages that it skips.
Author: Masahiko Sawada
Reviewed-by: Yugo Nagata and Jim Nasby
Discussion: http://postgr.es/m/CAD21AoDZQKCxo0L39Mrq08cONNkXQKXuh=2DP1Q8ebmt35SoaA@mail.gmail.com
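For example (table name hypothetical; exact message wording may differ),
the per-table summary now includes the skipped-frozen count:

    VACUUM (VERBOSE) t;
    -- INFO:  ... pages: 0 removed, 1234 remain, 0 skipped due to pins, 56 skipped frozen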
Alvaro Herrera [Fri, 24 Mar 2017 17:06:10 +0000 (14:06 -0300)]
Implement multivariate n-distinct coefficients
Add support for explicitly declared statistic objects (CREATE
STATISTICS), allowing collection of statistics on more complex
combinations than individual table columns. Companion commands DROP
STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
added too. All this DDL has been designed so that more statistic types
can be added later on, such as multivariate most-common-values and
multivariate histograms between columns of a single table, leaving room
for permitting columns on multiple tables, too, as well as expressions.
This commit only adds support for collection of n-distinct coefficient
on user-specified sets of columns in a single table. This is useful to
estimate number of distinct groups in GROUP BY and DISTINCT clauses;
estimation errors there can cause over-allocation of memory in hashed
aggregates, for instance, so it's a worthwhile problem to solve. A new
special pseudo-type pg_ndistinct is used.
(num-distinct estimation was deemed sufficiently useful by itself that
this is worthwhile even if no further statistic types are added
immediately; so much so that another version of essentially the same
functionality was submitted by Kyotaro Horiguchi:
https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
though this commit does not use that code.)
Author: Tomas Vondra. Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
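A minimal sketch (object, table, and column names are hypothetical; the
syntax is shown as it ended up in the PostgreSQL 10 release and may differ
slightly from this commit):

    CREATE STATISTICS s1 (ndistinct) ON a, b FROM t1;
    ANALYZE t1;
    -- GROUP BY a, b can now use the multivariate n-distinct estimate
    EXPLAIN SELECT a, b FROM t1 GROUP BY a, b;
    DROP STATISTICS s1;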
Robert Haas [Fri, 24 Mar 2017 16:30:39 +0000 (12:30 -0400)]
plpgsql: Don't generate parallel plans for RETURN QUERY.
Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b allowed a parallel
plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE
statement in a PL/pgSQL block, but that's a bad idea because plpgsql
asks the executor for 50 rows at a time. That means that we'll always
be running serially a plan that was intended for parallel execution,
which is not a good idea. Fix by not requesting a parallel plan from
the outset.
Per discussion, back-patch to 9.6. There is a slight risk that, due
to optimizer error, somebody could have a case where the parallel plan
executed serially is actually faster than the supposedly-best serial
plan, but the consensus seems to be that that's not sufficient
justification for leaving 9.6 unpatched.
Robert Haas [Fri, 24 Mar 2017 16:00:53 +0000 (12:00 -0400)]
Add a txid_status function.
If your connection to the database server is lost while a COMMIT is
in progress, it may be difficult to figure out whether the COMMIT was
successful or not. This function will tell you, provided that you
don't wait too long to ask. It may be useful in other situations,
too.
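A minimal usage sketch; the transaction ID shown is hypothetical, and the
function returns 'committed', 'aborted', 'in progress', or NULL if the XID
is too old to be looked up:

    SELECT txid_current();      -- note the XID before issuing COMMIT
    -- after reconnecting, check what happened to it:
    SELECT txid_status(123456);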
Allow SCRAM authentication when pg_hba.conf says 'md5'.
If a user has a SCRAM verifier in pg_authid.rolpassword, there's no reason
we cannot attempt to perform SCRAM authentication instead of MD5. The worst
that can happen is that the client doesn't support SCRAM, and the
authentication will fail. But previously, it would fail for sure, because
we would not even try. SCRAM is strictly more secure than MD5, so there's
no harm in trying it. This allows for a more graceful transition from MD5
passwords to SCRAM, as user passwords can be changed to SCRAM verifiers
incrementally, without changing pg_hba.conf.
Refactor the code in auth.c to support that better. Notably, we now have to
look up the user's pg_authid entry before sending the password challenge,
also when performing MD5 authentication. Also simplify the concept of a
"doomed" authentication. Previously, if a user had a password, but it had
expired, we still performed SCRAM authentication (but always returned error
at the end) using the salt and iteration count from the expired password.
Now we construct a fake salt, like we do when the user doesn't have a
password or doesn't exist at all. That simplifies get_role_password(), and
we no longer need to distinguish the "user has expired password" and
"user does not exist" cases in auth.c.
On second thoughts, also rename uaSASL to uaSCRAM. It refers to the
mechanism specified in pg_hba.conf, and while we use SASL for SCRAM
authentication at the protocol level, the mechanism should be called SCRAM,
not SASL. As a comparison, we have uaLDAP, even though it looks like the
plain 'password' authentication at the protocol level.
Discussion: https://www.postgresql.org/message-id/6425.1489506016@sss.pgh.pa.us
Reviewed-by: Michael Paquier
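A sketch of the incremental transition this enables (role name is
hypothetical; the setting value is spelled as in the PostgreSQL 10 release):

    -- pg_hba.conf can keep using the "md5" method while individual
    -- passwords are converted to SCRAM verifiers:
    SET password_encryption = 'scram-sha-256';
    ALTER ROLE alice PASSWORD 'new-password';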
Teodor Sigaev [Fri, 24 Mar 2017 10:53:40 +0000 (13:53 +0300)]
Fix backup canceling
An assert-enabled build crashes, but without asserts the code misbehaves:
it may fail to reset the forcing of full page writes, preventing a new
exclusive backup from being started with the same name as the cancelled one.
The patch replaces the pair of booleans
nonexclusive_backup_running/exclusive_backup_running with a single enum that
correctly describes the backup state.
Backpatch to 9.6, where the bug was introduced.
Reported-by: David Steele
Authors: Michael Paquier, David Steele
Reviewed-by: Anastasia Lubennikova
https://commitfest.postgresql.org/13/1068/
Robert Haas [Thu, 23 Mar 2017 20:10:43 +0000 (16:10 -0400)]
Fix enum definition.
Commit 249cf070e36721a65be74838c53acf8249faf935 assigned the value
that should have been assigned to the first member of the enum to one
of the labels in the middle. Rushabh's patch didn't have that
defect as submitted, but I managed to mess it up while editing.
Repair.
Peter Eisentraut [Thu, 23 Mar 2017 19:25:34 +0000 (15:25 -0400)]
ICU support
Add a column collprovider to pg_collation that determines which library
provides the collation data. The existing choices are default and libc,
and this adds an icu choice, which uses the ICU4C library.
The pg_locale_t type is changed to a union that contains the
provider-specific locale handles. Users of locale information are
changed to look into that struct for the appropriate handle to use.
Also add a collversion column that records the version of the collation
when it is created, and check at run time whether it is still the same.
This detects potentially incompatible library upgrades that can corrupt
indexes and other structures. This is currently only supported by
ICU-provided collations.
initdb initializes the default collation set as before from the `locale
-a` output but also adds all available ICU locales with a "-x-icu"
appended.
Currently, ICU-provided collations can only be explicitly named
collations. The global database locales are still always libc-provided.
ICU support is enabled by configure --with-icu.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
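A brief sketch of using an ICU-provided collation, assuming the server was
built with --with-icu and the database encoding supports it (the table name
is hypothetical; "de-x-icu" follows the initdb naming described above):

    SELECT name FROM cities ORDER BY name COLLATE "de-x-icu";
    -- or create a named ICU collation explicitly:
    CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk');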
Robert Haas [Thu, 23 Mar 2017 18:08:23 +0000 (14:08 -0400)]
Track the oldest XID that can be safely looked up in CLOG.
This provides infrastructure for looking up arbitrary, user-supplied
XIDs without a risk of scary-looking failures from within the clog
module. Normally, the oldest XID that can be safely looked up in CLOG
is the same as the oldest XID that can be reused without causing
wraparound, and the latter is already tracked. However, while
truncation is in progress, the values are different, so we must
keep track of them separately.
Robert Haas [Thu, 23 Mar 2017 17:05:48 +0000 (13:05 -0400)]
Allow for parallel execution whenever ExecutorRun() is done only once.
Previously, it was unsafe to execute a plan in parallel if
ExecutorRun() might be called with a non-zero row count. However,
it's quite easy to fix things up so that we can support that case,
provided that it is known that we will never call ExecutorRun() a
second time for the same QueryDesc. Add infrastructure to signal
this, and cross-checks to make sure that a caller who claims this is
true doesn't later renege.
While that pattern never happens with queries received directly from a
client -- there's no way to know whether multiple Execute messages
will be sent unless the first one requests all the rows -- it's pretty
common for queries originating from procedural languages, which often
limit the result to a single tuple or to a user-specified number of
tuples.
This commit doesn't actually enable parallelism in any additional
cases, because currently none of the places that would be able to
benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the
first place, but it makes it much more palatable to pass
CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it
eliminates some cases where we'd end up having to run the parallel
plan serially.
Patch by me, based on some ideas from Rafia Sabih and corrected by
Rafia Sabih based on feedback from Dilip Kumar and myself.
Teodor Sigaev [Thu, 23 Mar 2017 16:38:47 +0000 (19:38 +0300)]
Reduce page locking in GIN vacuum
While cleaning a posting tree, GIN vacuum could lock the whole tree for a long
time by holding LockBufferForCleanup() on its root. The patch changes this in
two ways: first, the cleanup lock is taken only if there is an empty page
(which should be deleted) and, second, it tries to lock only the affected
subtree, not the whole posting tree.
Author: Andrey Borodin, with minor editorialization by me
Reviewed-by: Jeff Davis, me
https://commitfest.postgresql.org/13/896/