<para>
The query writes any data or locks any database rows. If a query
contains a data-modifying operation either at the top level or within
- a CTE, no parallel plans for that query will be generated. This is a
- limitation of the current implementation which could be lifted in a
- future release.
+ a CTE, no parallel plans for that query will be generated. As an
+ exception, the commands <literal>CREATE TABLE</>, <literal>SELECT
+ INTO</>, and <literal>CREATE MATERIALIZED VIEW</> which create a new
+ table and populate it can use a parallel plan.
</para>
</listitem>
</para>
</listitem>
- <listitem>
- <para>
- A prepared statement is executed using a <literal>CREATE TABLE .. AS
- EXECUTE ..</literal> statement. This construct converts what otherwise
- would have been a read-only operation into a read-write operation,
- making it ineligible for parallel query.
- </para>
- </listitem>
-
<listitem>
<para>
The transaction isolation level is serializable. This situation
CommandId cid, int options)
{
/*
- * For now, parallel operations are required to be strictly read-only.
- * Unlike heap_update() and heap_delete(), an insert should never create a
- * combo CID, so it might be possible to relax this restriction, but not
- * without more thought and testing.
- */
- if (IsInParallelMode())
+ * Parallel operations are required to be strictly read-only in a parallel
+ * worker. Parallel inserts are not safe even in the leader in the
+ * general case, because group locking means that heavyweight locks for
+ * relation extension or GIN page locks will not conflict between members
+ * of a lock group, but we don't prohibit that case here because there are
+ * useful special cases that we can safely allow, such as CREATE TABLE AS.
+ */
+ if (IsParallelWorker())
ereport(ERROR,
(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
- errmsg("cannot insert tuples during a parallel operation")));
+ errmsg("cannot insert tuples in a parallel worker")));
if (relation->rd_rel->relhasoids)
{
query = linitial_node(Query, rewritten);
Assert(query->commandType == CMD_SELECT);
- /* plan the query --- note we disallow parallelism */
- plan = pg_plan_query(query, 0, params);
+ /* plan the query */
+ plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, params);
/*
* Use a snapshot with an updated command ID to ensure this query sees
* We have to rewrite the contained SELECT and then pass it back to
* ExplainOneQuery. It's probably not really necessary to copy the
* contained parsetree another time, but let's be safe.
- *
- * Like ExecCreateTableAs, disallow parallelism in the plan.
*/
CreateTableAsStmt *ctas = (CreateTableAsStmt *) utilityStmt;
List *rewritten;
rewritten = QueryRewrite(castNode(Query, copyObject(ctas->query)));
Assert(list_length(rewritten) == 1);
ExplainOneQuery(linitial_node(Query, rewritten),
- 0, ctas->into, es,
+ CURSOR_OPT_PARALLEL_OK, ctas->into, es,
queryString, params, queryEnv);
}
else if (IsA(utilityStmt, DeclareCursorStmt))
/*
* If the plan might potentially be executed multiple times, we must force
- * it to run without parallelism, because we might exit early. Also
- * disable parallelism when writing into a relation, because no database
- * changes are allowed in parallel mode.
+ * it to run without parallelism, because we might exit early.
*/
- if (!execute_once || dest->mydest == DestIntoRel)
+ if (!execute_once)
use_parallel_mode = false;
if (use_parallel_mode)
* to values that don't permit parallelism, or if parallel-unsafe
* functions are present in the query tree.
*
+ * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
+ * MATERIALIZED VIEW to use parallel plans, but this is safe only because
+ * the command is writing into a completely new table which workers won't
+ * be able to see. If the workers could see the table, the fact that
+ * group locking would cause them to ignore the leader's heavyweight
+ * relation extension lock and GIN page locks would make this unsafe.
+ * We'll have to fix that somehow if we want to allow parallel inserts in
+ * general; updates and deletes have additional problems especially around
+ * combo CIDs.)
+ *
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this
* restriction, but for now it seems best not to have parallel workers
--- /dev/null
+--
+-- PARALLEL
+--
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+ select length(stringu1) from tenk1 group by length(stringu1);
+ QUERY PLAN
+---------------------------------------------------
+ Finalize HashAggregate
+ Group Key: (length((stringu1)::text))
+ -> Gather
+ Workers Planned: 4
+ -> Partial HashAggregate
+ Group Key: length((stringu1)::text)
+ -> Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as
+ select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) select length(stringu1) into parallel_write
+ from tenk1 group by length(stringu1);
+ QUERY PLAN
+---------------------------------------------------
+ Finalize HashAggregate
+ Group Key: (length((stringu1)::text))
+ -> Gather
+ Workers Planned: 4
+ -> Partial HashAggregate
+ Group Key: length((stringu1)::text)
+ -> Parallel Seq Scan on tenk1
+(7 rows)
+
+select length(stringu1) into parallel_write
+ from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) create materialized view parallel_mat_view as
+ select length(stringu1) from tenk1 group by length(stringu1);
+ QUERY PLAN
+---------------------------------------------------
+ Finalize HashAggregate
+ Group Key: (length((stringu1)::text))
+ -> Gather
+ Workers Planned: 4
+ -> Partial HashAggregate
+ Group Key: length((stringu1)::text)
+ -> Parallel Seq Scan on tenk1
+(7 rows)
+
+create materialized view parallel_mat_view as
+ select length(stringu1) from tenk1 group by length(stringu1);
+drop materialized view parallel_mat_view;
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+ QUERY PLAN
+---------------------------------------------------
+ Finalize HashAggregate
+ Group Key: (length((stringu1)::text))
+ -> Gather
+ Workers Planned: 4
+ -> Partial HashAggregate
+ Group Key: length((stringu1)::text)
+ -> Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+rollback;
# run by itself so it can run parallel workers
test: select_parallel
+test: write_parallel
# no relation related tests can be put in this group
test: publication subscription
test: rules
test: psql_crosstab
test: select_parallel
+test: write_parallel
test: publication
test: subscription
test: amutils
--- /dev/null
+--
+-- PARALLEL
+--
+
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+ select length(stringu1) from tenk1 group by length(stringu1);
+create table parallel_write as
+ select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) select length(stringu1) into parallel_write
+ from tenk1 group by length(stringu1);
+select length(stringu1) into parallel_write
+ from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) create materialized view parallel_mat_view as
+ select length(stringu1) from tenk1 group by length(stringu1);
+create materialized view parallel_mat_view as
+ select length(stringu1) from tenk1 group by length(stringu1);
+drop materialized view parallel_mat_view;
+
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+
+rollback;