From 3fc6e2d7f5b652b417fa6937c34de2438d60fa9f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 7 Mar 2016 15:58:22 -0500 Subject: [PATCH] Make the upper part of the planner work by generating and comparing Paths. I've been saying we needed to do this for more than five years, and here it finally is. This patch removes the ever-growing tangle of spaghetti logic that grouping_planner() used to use to try to identify the best plan for post-scan/join query steps. Now, there is (nearly) independent consideration of each execution step, and entirely separate construction of Paths to represent each of the possible ways to do that step. We choose the best Path or set of Paths using the same add_path() logic that's been used inside query_planner() for years. In addition, this patch removes the old restriction that subquery_planner() could return only a single Plan. It now returns a RelOptInfo containing a set of Paths, just as query_planner() does, and the parent query level can use each of those Paths as the basis of a SubqueryScanPath at its level. This allows finding some optimizations that we missed before, wherein a subquery was capable of returning presorted data and thereby avoiding a sort in the parent level, making the overall cost cheaper even though delivering sorted output was not the cheapest plan for the subquery in isolation. (A couple of regression test outputs change in consequence of that. However, there is very little change in visible planner behavior overall, because the point of this patch is not to get immediate planning benefits but to create the infrastructure for future improvements.) There is a great deal left to do here. This patch unblocks a lot of planner work that was basically impractical in the old code structure, such as allowing FDWs to implement remote aggregation, or rewriting plan_set_operations() to allow consideration of multiple implementation orders for set operations. 
(The latter will likely require a full rewrite of plan_set_operations(); what I've done here is only to fix it to return Paths not Plans.) I have also left unfinished some localized refactoring in createplan.c and planner.c, because it was not necessary to get this patch to a working state. Thanks to Robert Haas, David Rowley, and Amit Kapila for review. --- doc/src/sgml/fdwhandler.sgml | 34 + src/backend/executor/execAmi.c | 23 +- src/backend/nodes/copyfuncs.c | 2 +- src/backend/nodes/outfuncs.c | 238 +- src/backend/nodes/readfuncs.c | 4 +- src/backend/optimizer/README | 108 +- src/backend/optimizer/path/allpaths.c | 133 +- src/backend/optimizer/path/costsize.c | 152 +- src/backend/optimizer/path/equivclass.c | 42 - src/backend/optimizer/path/pathkeys.c | 9 +- src/backend/optimizer/plan/createplan.c | 1900 ++++++++++---- src/backend/optimizer/plan/planagg.c | 320 +-- src/backend/optimizer/plan/planmain.c | 7 +- src/backend/optimizer/plan/planner.c | 2856 +++++++++------------ src/backend/optimizer/plan/setrefs.c | 72 +- src/backend/optimizer/plan/subselect.c | 180 +- src/backend/optimizer/prep/prepjointree.c | 7 +- src/backend/optimizer/prep/prepunion.c | 696 ++--- src/backend/optimizer/util/pathnode.c | 1043 +++++++- src/backend/optimizer/util/plancat.c | 22 +- src/backend/optimizer/util/relnode.c | 59 +- src/backend/optimizer/util/tlist.c | 154 +- src/include/nodes/nodes.h | 54 +- src/include/nodes/plannodes.h | 32 +- src/include/nodes/relation.h | 295 ++- src/include/optimizer/cost.h | 7 +- src/include/optimizer/pathnode.h | 98 +- src/include/optimizer/paths.h | 8 +- src/include/optimizer/planmain.h | 45 +- src/include/optimizer/planner.h | 11 +- src/include/optimizer/prep.h | 3 +- src/include/optimizer/subselect.h | 8 +- src/include/optimizer/tlist.h | 11 +- src/test/regress/expected/aggregates.out | 9 +- src/test/regress/expected/join.out | 39 +- 35 files changed, 5573 insertions(+), 3108 deletions(-) diff --git a/doc/src/sgml/fdwhandler.sgml 
b/doc/src/sgml/fdwhandler.sgml index 9ad4e1c960..279daef2d1 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -1316,6 +1316,40 @@ GetForeignServerByName(const char *name, bool missing_ok); (extra->restrictlist). + + An FDW might additionally support direct execution of some plan actions + that are above the level of scans and joins, such as grouping or + aggregation. To offer such options, the FDW should generate paths + (probably ForeignPaths or CustomPaths) and insert them into the + appropriate upper relation. For example, a path + representing remote aggregation should be inserted into the relation + obtained from fetch_upper_rel(root, UPPERREL_GROUP_AGG, + NULL), using add_path. This path will be compared on a + cost basis with local aggregation performed by reading a simple scan path + for the foreign relation (note that such a path must also be supplied, + else there will be an error at plan time). If the remote-aggregation + path wins, which it usually would, it will be converted into a plan in + the usual way, by calling GetForeignPlan. + + + + PlanForeignModify and the other callbacks described in + are designed around the assumption + that the foreign relation will be scanned in the usual way and then + individual row updates will be driven by a local ModifyTable + plan node. This approach is necessary for the general case where an + update requires reading local tables as well as foreign tables. + However, if the operation could be executed entirely by the foreign + server, the FDW could generate a path representing that and insert it + into the UPPERREL_FINAL upper relation, where it would + compete against the ModifyTable approach. This approach + could also be used to implement remote SELECT FOR UPDATE, + rather than using the row locking callbacks described in + . Keep in mind that a path + inserted into UPPERREL_FINAL is responsible for + implementing all behavior of the query. 
+ + When planning an UPDATE or DELETE, PlanForeignModify can look up the RelOptInfo diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c index 35864c1681..0c8e939905 100644 --- a/src/backend/executor/execAmi.c +++ b/src/backend/executor/execAmi.c @@ -407,17 +407,20 @@ ExecSupportsMarkRestore(Path *pathnode) case T_Result: /* - * Although Result supports mark/restore if it has a child plan - * that does, we presently come here only for ResultPath nodes, - * which represent Result plans without a child plan. So there is - * nothing to recurse to and we can just say "false". (This means - * that Result's support for mark/restore is in fact dead code. We - * keep it since it's not much code, and someday the planner might - * be smart enough to use it. That would require making this - * function smarter too, of course.) + * Result supports mark/restore iff it has a child plan that does. + * + * We have to be careful here because there is more than one Path + * type that can produce a Result plan node. 
*/ - Assert(IsA(pathnode, ResultPath)); - return false; + if (IsA(pathnode, ProjectionPath)) + return ExecSupportsMarkRestore(((ProjectionPath *) pathnode)->subpath); + else if (IsA(pathnode, MinMaxAggPath)) + return false; /* childless Result */ + else + { + Assert(IsA(pathnode, ResultPath)); + return false; /* childless Result */ + } default: break; diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index a9e9cc379b..df7c2fa892 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -867,9 +867,9 @@ _copyAgg(const Agg *from) CopyPlanFields((const Plan *) from, (Plan *) newnode); COPY_SCALAR_FIELD(aggstrategy); - COPY_SCALAR_FIELD(numCols); COPY_SCALAR_FIELD(combineStates); COPY_SCALAR_FIELD(finalizeAggs); + COPY_SCALAR_FIELD(numCols); if (from->numCols > 0) { COPY_POINTER_FIELD(grpColIdx, from->numCols * sizeof(AttrNumber)); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 85acce819c..3119b9ea01 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -706,21 +706,19 @@ _outAgg(StringInfo str, const Agg *node) _outPlanInfo(str, (const Plan *) node); WRITE_ENUM_FIELD(aggstrategy, AggStrategy); + WRITE_BOOL_FIELD(combineStates); + WRITE_BOOL_FIELD(finalizeAggs); WRITE_INT_FIELD(numCols); appendStringInfoString(str, " :grpColIdx"); for (i = 0; i < node->numCols; i++) appendStringInfo(str, " %d", node->grpColIdx[i]); - WRITE_BOOL_FIELD(combineStates); - WRITE_BOOL_FIELD(finalizeAggs); - appendStringInfoString(str, " :grpOperators"); for (i = 0; i < node->numCols; i++) appendStringInfo(str, " %u", node->grpOperators[i]); WRITE_LONG_FIELD(numGroups); - WRITE_NODE_FIELD(groupingSets); WRITE_NODE_FIELD(chain); } @@ -1603,6 +1601,15 @@ _outPathInfo(StringInfo str, const Path *node) if (node->pathtarget != &(node->parent->reltarget)) { WRITE_NODE_FIELD(pathtarget->exprs); + if (node->pathtarget->sortgrouprefs) + { + int i; + + appendStringInfoString(str, " 
:pathtarget->sortgrouprefs"); + for (i = 0; i < list_length(node->pathtarget->exprs); i++) + appendStringInfo(str, " %u", + node->pathtarget->sortgrouprefs[i]); + } WRITE_FLOAT_FIELD(pathtarget->cost.startup, "%.2f"); WRITE_FLOAT_FIELD(pathtarget->cost.per_tuple, "%.2f"); WRITE_INT_FIELD(pathtarget->width); @@ -1703,6 +1710,16 @@ _outTidPath(StringInfo str, const TidPath *node) WRITE_NODE_FIELD(tidquals); } +static void +_outSubqueryScanPath(StringInfo str, const SubqueryScanPath *node) +{ + WRITE_NODE_TYPE("SUBQUERYSCANPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); +} + static void _outForeignPath(StringInfo str, const ForeignPath *node) { @@ -1793,6 +1810,174 @@ _outGatherPath(StringInfo str, const GatherPath *node) WRITE_BOOL_FIELD(single_copy); } +static void +_outProjectionPath(StringInfo str, const ProjectionPath *node) +{ + WRITE_NODE_TYPE("PROJECTIONPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); +} + +static void +_outSortPath(StringInfo str, const SortPath *node) +{ + WRITE_NODE_TYPE("SORTPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); +} + +static void +_outGroupPath(StringInfo str, const GroupPath *node) +{ + WRITE_NODE_TYPE("GROUPPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_NODE_FIELD(groupClause); + WRITE_NODE_FIELD(qual); +} + +static void +_outUpperUniquePath(StringInfo str, const UpperUniquePath *node) +{ + WRITE_NODE_TYPE("UPPERUNIQUEPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_INT_FIELD(numkeys); +} + +static void +_outAggPath(StringInfo str, const AggPath *node) +{ + WRITE_NODE_TYPE("AGGPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_ENUM_FIELD(aggstrategy, AggStrategy); + WRITE_FLOAT_FIELD(numGroups, "%.0f"); + WRITE_NODE_FIELD(groupClause); + WRITE_NODE_FIELD(qual); +} + +static void 
+_outGroupingSetsPath(StringInfo str, const GroupingSetsPath *node) +{ + WRITE_NODE_TYPE("GROUPINGSETSPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + /* we don't bother to print groupColIdx */ + WRITE_NODE_FIELD(rollup_groupclauses); + WRITE_NODE_FIELD(rollup_lists); + WRITE_NODE_FIELD(qual); +} + +static void +_outMinMaxAggPath(StringInfo str, const MinMaxAggPath *node) +{ + WRITE_NODE_TYPE("MINMAXAGGPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(mmaggregates); + WRITE_NODE_FIELD(quals); +} + +static void +_outWindowAggPath(StringInfo str, const WindowAggPath *node) +{ + WRITE_NODE_TYPE("WINDOWAGGPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_NODE_FIELD(winclause); + WRITE_NODE_FIELD(winpathkeys); +} + +static void +_outSetOpPath(StringInfo str, const SetOpPath *node) +{ + WRITE_NODE_TYPE("SETOPPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_ENUM_FIELD(cmd, SetOpCmd); + WRITE_ENUM_FIELD(strategy, SetOpStrategy); + WRITE_NODE_FIELD(distinctList); + WRITE_INT_FIELD(flagColIdx); + WRITE_INT_FIELD(firstFlag); + WRITE_FLOAT_FIELD(numGroups, "%.0f"); +} + +static void +_outRecursiveUnionPath(StringInfo str, const RecursiveUnionPath *node) +{ + WRITE_NODE_TYPE("RECURSIVEUNIONPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(leftpath); + WRITE_NODE_FIELD(rightpath); + WRITE_NODE_FIELD(distinctList); + WRITE_INT_FIELD(wtParam); + WRITE_FLOAT_FIELD(numGroups, "%.0f"); +} + +static void +_outLockRowsPath(StringInfo str, const LockRowsPath *node) +{ + WRITE_NODE_TYPE("LOCKROWSPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_NODE_FIELD(rowMarks); + WRITE_INT_FIELD(epqParam); +} + +static void +_outModifyTablePath(StringInfo str, const ModifyTablePath *node) +{ + WRITE_NODE_TYPE("MODIFYTABLEPATH"); + + _outPathInfo(str, (const Path *) node); + + 
WRITE_ENUM_FIELD(operation, CmdType); + WRITE_BOOL_FIELD(canSetTag); + WRITE_UINT_FIELD(nominalRelation); + WRITE_NODE_FIELD(resultRelations); + WRITE_NODE_FIELD(subpaths); + WRITE_NODE_FIELD(subroots); + WRITE_NODE_FIELD(withCheckOptionLists); + WRITE_NODE_FIELD(returningLists); + WRITE_NODE_FIELD(rowMarks); + WRITE_NODE_FIELD(onconflict); + WRITE_INT_FIELD(epqParam); +} + +static void +_outLimitPath(StringInfo str, const LimitPath *node) +{ + WRITE_NODE_TYPE("LIMITPATH"); + + _outPathInfo(str, (const Path *) node); + + WRITE_NODE_FIELD(subpath); + WRITE_NODE_FIELD(limitOffset); + WRITE_NODE_FIELD(limitCount); +} + static void _outNestPath(StringInfo str, const NestPath *node) { @@ -1881,6 +2066,7 @@ _outPlannerInfo(StringInfo str, const PlannerInfo *node) WRITE_NODE_FIELD(window_pathkeys); WRITE_NODE_FIELD(distinct_pathkeys); WRITE_NODE_FIELD(sort_pathkeys); + WRITE_NODE_FIELD(processed_tlist); WRITE_NODE_FIELD(minmax_aggs); WRITE_FLOAT_FIELD(total_table_pages, "%.0f"); WRITE_FLOAT_FIELD(tuple_fraction, "%.4f"); @@ -1910,6 +2096,7 @@ _outRelOptInfo(StringInfo str, const RelOptInfo *node) WRITE_BOOL_FIELD(consider_param_startup); WRITE_BOOL_FIELD(consider_parallel); WRITE_NODE_FIELD(reltarget.exprs); + /* reltarget.sortgrouprefs is never interesting, at present anyway */ WRITE_FLOAT_FIELD(reltarget.cost.startup, "%.2f"); WRITE_FLOAT_FIELD(reltarget.cost.per_tuple, "%.2f"); WRITE_INT_FIELD(reltarget.width); @@ -1933,7 +2120,6 @@ _outRelOptInfo(StringInfo str, const RelOptInfo *node) WRITE_UINT_FIELD(pages); WRITE_FLOAT_FIELD(tuples, "%.0f"); WRITE_FLOAT_FIELD(allvisfrac, "%.6f"); - WRITE_NODE_FIELD(subplan); WRITE_NODE_FIELD(subroot); WRITE_NODE_FIELD(subplan_params); WRITE_OID_FIELD(serverid); @@ -3331,6 +3517,9 @@ _outNode(StringInfo str, const void *obj) case T_TidPath: _outTidPath(str, obj); break; + case T_SubqueryScanPath: + _outSubqueryScanPath(str, obj); + break; case T_ForeignPath: _outForeignPath(str, obj); break; @@ -3355,6 +3544,45 @@ 
_outNode(StringInfo str, const void *obj) case T_GatherPath: _outGatherPath(str, obj); break; + case T_ProjectionPath: + _outProjectionPath(str, obj); + break; + case T_SortPath: + _outSortPath(str, obj); + break; + case T_GroupPath: + _outGroupPath(str, obj); + break; + case T_UpperUniquePath: + _outUpperUniquePath(str, obj); + break; + case T_AggPath: + _outAggPath(str, obj); + break; + case T_GroupingSetsPath: + _outGroupingSetsPath(str, obj); + break; + case T_MinMaxAggPath: + _outMinMaxAggPath(str, obj); + break; + case T_WindowAggPath: + _outWindowAggPath(str, obj); + break; + case T_SetOpPath: + _outSetOpPath(str, obj); + break; + case T_RecursiveUnionPath: + _outRecursiveUnionPath(str, obj); + break; + case T_LockRowsPath: + _outLockRowsPath(str, obj); + break; + case T_ModifyTablePath: + _outModifyTablePath(str, obj); + break; + case T_LimitPath: + _outLimitPath(str, obj); + break; case T_NestPath: _outNestPath(str, obj); break; diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index e6e6f2981c..a2c2243fb5 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -1997,10 +1997,10 @@ _readAgg(void) ReadCommonPlan(&local_node->plan); READ_ENUM_FIELD(aggstrategy, AggStrategy); - READ_INT_FIELD(numCols); - READ_ATTRNUMBER_ARRAY(grpColIdx, local_node->numCols); READ_BOOL_FIELD(combineStates); READ_BOOL_FIELD(finalizeAggs); + READ_INT_FIELD(numCols); + READ_ATTRNUMBER_ARRAY(grpColIdx, local_node->numCols); READ_OID_ARRAY(grpOperators, local_node->numCols); READ_LONG_FIELD(numGroups); READ_NODE_FIELD(groupingSets); diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README index 501980449c..f9967c0828 100644 --- a/src/backend/optimizer/README +++ b/src/backend/optimizer/README @@ -20,7 +20,7 @@ Paths and Join Pairs During the planning/optimizing process, we build "Path" trees representing the different ways of doing a query. 
We select the cheapest Path that generates the desired relation and turn it into a Plan to pass to the -executor. (There is pretty much a one-to-one correspondence between the +executor. (There is pretty nearly a one-to-one correspondence between the Path and Plan trees, but Path nodes omit info that won't be needed during planning, and include info needed for planning that won't be needed by the executor.) @@ -43,10 +43,8 @@ base rels of the query. Possible Paths for a primitive table relation include plain old sequential scan, plus index scans for any indexes that exist on the table, plus bitmap -index scans using one or more indexes. A subquery base relation just has -one Path, a "SubqueryScan" path (which links to the subplan that was built -by a recursive invocation of the planner). Likewise a function-RTE base -relation has only one possible Path. +index scans using one or more indexes. Specialized RTE types, such as +function RTEs, may have only one possible Path. Joins always occur using two RelOptInfos. One is outer, the other inner. Outers drive lookups of values in the inner. In a nested loop, lookups of @@ -59,9 +57,10 @@ hashjoin, the inner is scanned first and all its rows are entered in a hashtable, then the outer is scanned and for each row we lookup the join key in the hashtable. -A Path for a join relation is actually a tree structure, with the top -Path node representing the join method. It has left and right subpaths -that represent the scan or join methods used for the two input relations. +A Path for a join relation is actually a tree structure, with the topmost +Path node representing the last-applied join method. It has left and right +subpaths that represent the scan or join methods used for the two input +relations. Join Tree Construction @@ -292,8 +291,7 @@ Optimizer Functions The primary entry point is planner(). 
planner() - set up for recursive handling of subqueries - do final cleanup after planning +set up for recursive handling of subqueries -subquery_planner() pull up sublinks and subqueries from rangetable, if possible canonicalize qual @@ -326,14 +324,15 @@ planner() Back at standard_join_search(), apply set_cheapest() to extract the cheapest path for each newly constructed joinrel. Loop back if this wasn't the top join level. - Back at grouping_planner: - convert Path tree returned by query_planner into a Plan tree - do grouping(GROUP) - do aggregates - do window functions - make unique(DISTINCT) - make sort(ORDER BY) - make limit(LIMIT/OFFSET) + Back at grouping_planner: + do grouping (GROUP BY) and aggregation + do window functions + make unique (DISTINCT) + do sorting (ORDER BY) + do limit (LIMIT/OFFSET) +Back at planner(): +convert finished Path tree into a Plan tree +do final cleanup after planning Optimizer Data Structures @@ -355,12 +354,28 @@ RelOptInfo - a relation or joined relations IndexPath - index scan BitmapHeapPath - top of a bitmapped index scan TidPath - scan by CTID + SubqueryScanPath - scan a subquery-in-FROM ForeignPath - scan a foreign table + CustomPath - for custom scan providers AppendPath - append multiple subpaths together MergeAppendPath - merge multiple subpaths, preserving their common sort order - ResultPath - a Result plan node (used for FROM-less SELECT) + ResultPath - a childless Result plan node (used for FROM-less SELECT) MaterialPath - a Material plan node - UniquePath - remove duplicate rows + UniquePath - remove duplicate rows (either by hashing or sorting) + GatherPath - collect the results of parallel workers + ProjectionPath - a Result plan node with child (used for projection) + SortPath - a Sort plan node applied to some sub-path + GroupPath - a Group plan node applied to some sub-path + UpperUniquePath - a Unique plan node applied to some sub-path + AggPath - an Agg plan node applied to some sub-path + GroupingSetsPath - 
an Agg plan node used to implement GROUPING SETS + MinMaxAggPath - a Result plan node with subplans performing MIN/MAX + WindowAggPath - a WindowAgg plan node applied to some sub-path + SetOpPath - a SetOp plan node applied to some sub-path + RecursiveUnionPath - a RecursiveUnion plan node applied to two sub-paths + LockRowsPath - a LockRows plan node applied to some sub-path + ModifyTablePath - a ModifyTable plan node applied to some sub-path(s) + LimitPath - a Limit plan node applied to some sub-path NestPath - nested-loop joins MergePath - merge joins HashPath - hash joins @@ -851,6 +866,59 @@ lateral reference. (Perhaps now that that stuff works, we could relax the pullup restriction?) +Post scan/join planning +----------------------- + +So far we have discussed only scan/join planning, that is, implementation +of the FROM and WHERE clauses of a SQL query. But the planner must also +determine how to deal with GROUP BY, aggregation, and other higher-level +features of queries; and in many cases there are multiple ways to do these +steps and thus opportunities for optimization choices. These steps, like +scan/join planning, are handled by constructing Paths representing the +different ways to do a step, then choosing the cheapest Path. + +Since all Paths require a RelOptInfo as "parent", we create RelOptInfos +representing the outputs of these upper-level processing steps. These +RelOptInfos are mostly dummy, but their pathlist lists hold all the Paths +considered useful for each step. 
Currently, we may create these types of +additional RelOptInfos during upper-level planning: + +UPPERREL_SETOP result of UNION/INTERSECT/EXCEPT, if any +UPPERREL_GROUP_AGG result of grouping/aggregation, if any +UPPERREL_WINDOW result of window functions, if any +UPPERREL_DISTINCT result of "SELECT DISTINCT", if any +UPPERREL_ORDERED result of ORDER BY, if any +UPPERREL_FINAL result of any remaining top-level actions + +UPPERREL_FINAL is used to represent any final processing steps, currently +LockRows (SELECT FOR UPDATE), LIMIT/OFFSET, and ModifyTable. There is no +flexibility about the order in which these steps are done, and thus no need +to subdivide this stage more finely. + +These "upper relations" are identified by the UPPERREL enum values shown +above, plus a relids set, which allows there to be more than one upperrel +of the same kind. We use NULL for the relids if there's no need for more +than one upperrel of the same kind. Currently, in fact, the relids set +is vestigial because it's always NULL, but that's expected to change in +future. For example, in planning set operations, we might need the relids +to denote which subset of the leaf SELECTs has been combined in a +particular group of Paths that are competing with each other. + +The result of subquery_planner() is always returned as a set of Paths +stored in the UPPERREL_FINAL rel with NULL relids. The other types of +upperrels are created only if needed for the particular query. + +The upper-relation infrastructure is designed so that things will work +properly if a particular upper relation is created and Paths are added +to it sooner than would normally happen. This allows, for example, +for an FDW's GetForeignPaths function to insert a Path representing +remote aggregation into the UPPERREL_GROUP_AGG upperrel, if it notices +that the query represents an aggregation that could be done entirely on +the foreign server. 
That Path will then compete with Paths representing +local aggregation on a regular scan of the foreign table, once the core +planner reaches the point of considering aggregation. + + Parallel Query and Partial Paths -------------------------------- diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 6233be3e50..a08c248e14 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -37,6 +37,7 @@ #include "optimizer/planner.h" #include "optimizer/prep.h" #include "optimizer/restrictinfo.h" +#include "optimizer/tlist.h" #include "optimizer/var.h" #include "parser/parse_clause.h" #include "parser/parsetree.h" @@ -97,7 +98,6 @@ static Path *get_cheapest_parameterized_child_path(PlannerInfo *root, RelOptInfo *rel, Relids required_outer); static List *accumulate_append_subpath(List *subpaths, Path *path); -static void set_dummy_rel_pathlist(RelOptInfo *rel); static void set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel, Index rti, RangeTblEntry *rte); static void set_function_pathlist(PlannerInfo *root, RelOptInfo *rel, @@ -1507,8 +1507,10 @@ accumulate_append_subpath(List *subpaths, Path *path) * * Rather than inventing a special "dummy" path type, we represent this as an * AppendPath with no members (see also IS_DUMMY_PATH/IS_DUMMY_REL macros). + * + * This is exported because inheritance_planner() has need for it. */ -static void +void set_dummy_rel_pathlist(RelOptInfo *rel) { /* Set dummy size estimates --- we leave attr_widths[] as zeroes */ @@ -1554,15 +1556,15 @@ has_multiple_baserels(PlannerInfo *root) /* * set_subquery_pathlist - * Build the (single) access path for a subquery RTE + * Generate SubqueryScan access paths for a subquery RTE * * We don't currently support generating parameterized paths for subqueries * by pushing join clauses down into them; it seems too expensive to re-plan - * the subquery multiple times to consider different alternatives. 
So the - * subquery will have exactly one path. (The path will be parameterized - * if the subquery contains LATERAL references, otherwise not.) Since there's - * no freedom of action here, there's no need for a separate set_subquery_size - * phase: we just make the path right away. + * the subquery multiple times to consider different alternatives. + * (XXX that could stand to be reconsidered, now that we use Paths.) + * So the paths made here will be parameterized if the subquery contains + * LATERAL references, otherwise not. As long as that's true, there's no need + * for a separate set_subquery_size phase: just make the paths right away. */ static void set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel, @@ -1573,8 +1575,8 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel, Relids required_outer; pushdown_safety_info safetyInfo; double tuple_fraction; - PlannerInfo *subroot; - List *pathkeys; + RelOptInfo *sub_final_rel; + ListCell *lc; /* * Must copy the Query so that planning doesn't mess up the RTE contents @@ -1685,12 +1687,10 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel, /* plan_params should not be in use in current query level */ Assert(root->plan_params == NIL); - /* Generate the plan for the subquery */ - rel->subplan = subquery_planner(root->glob, subquery, + /* Generate a subroot and Paths for the subquery */ + rel->subroot = subquery_planner(root->glob, subquery, root, - false, tuple_fraction, - &subroot); - rel->subroot = subroot; + false, tuple_fraction); /* Isolate the params needed by this specific subplan */ rel->subplan_params = root->plan_params; @@ -1698,23 +1698,44 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel, /* * It's possible that constraint exclusion proved the subquery empty. If - * so, it's convenient to turn it back into a dummy path so that we will + * so, it's desirable to produce an unadorned dummy path so that we will * recognize appropriate optimizations at this query level. 
*/ - if (is_dummy_plan(rel->subplan)) + sub_final_rel = fetch_upper_rel(rel->subroot, UPPERREL_FINAL, NULL); + + if (IS_DUMMY_REL(sub_final_rel)) { set_dummy_rel_pathlist(rel); return; } - /* Mark rel with estimated output rows, width, etc */ + /* + * Mark rel with estimated output rows, width, etc. Note that we have to + * do this before generating outer-query paths, else cost_subqueryscan is + * not happy. + */ set_subquery_size_estimates(root, rel); - /* Convert subquery pathkeys to outer representation */ - pathkeys = convert_subquery_pathkeys(root, rel, subroot->query_pathkeys); - - /* Generate appropriate path */ - add_path(rel, create_subqueryscan_path(root, rel, pathkeys, required_outer)); + /* + * For each Path that subquery_planner produced, make a SubqueryScanPath + * in the outer query. + */ + foreach(lc, sub_final_rel->pathlist) + { + Path *subpath = (Path *) lfirst(lc); + List *pathkeys; + + /* Convert subpath's pathkeys to outer representation */ + pathkeys = convert_subquery_pathkeys(root, + rel, + subpath->pathkeys, + make_tlist_from_pathtarget(subpath->pathtarget)); + + /* Generate outer path using this subpath */ + add_path(rel, (Path *) + create_subqueryscan_path(root, rel, subpath, + pathkeys, required_outer)); + } } /* @@ -1858,7 +1879,7 @@ set_cte_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte) cteplan = (Plan *) list_nth(root->glob->subplans, plan_id - 1); /* Mark rel with estimated output rows, width, etc */ - set_cte_size_estimates(root, rel, cteplan); + set_cte_size_estimates(root, rel, cteplan->plan_rows); /* * We don't support pushing join clauses into the quals of a CTE scan, but @@ -1881,13 +1902,13 @@ set_cte_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte) static void set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte) { - Plan *cteplan; + Path *ctepath; PlannerInfo *cteroot; Index levelsup; Relids required_outer; /* - * We need to find the non-recursive term's plan, which 
is in the plan + * We need to find the non-recursive term's path, which is in the plan * level that's processing the recursive UNION, which is one level *below* * where the CTE comes from. */ @@ -1902,12 +1923,12 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte) if (!cteroot) /* shouldn't happen */ elog(ERROR, "bad levelsup for CTE \"%s\"", rte->ctename); } - cteplan = cteroot->non_recursive_plan; - if (!cteplan) /* shouldn't happen */ - elog(ERROR, "could not find plan for CTE \"%s\"", rte->ctename); + ctepath = cteroot->non_recursive_path; + if (!ctepath) /* shouldn't happen */ + elog(ERROR, "could not find path for CTE \"%s\"", rte->ctename); /* Mark rel with estimated output rows, width, etc */ - set_cte_size_estimates(root, rel, cteplan); + set_cte_size_estimates(root, rel, ctepath->rows); /* * We don't support pushing join clauses into the quals of a worktable @@ -2859,6 +2880,9 @@ print_path(PlannerInfo *root, Path *path, int indent) case T_TidPath: ptype = "TidScan"; break; + case T_SubqueryScanPath: + ptype = "SubqueryScanScan"; + break; case T_ForeignPath: ptype = "ForeignScan"; break; @@ -2883,6 +2907,55 @@ print_path(PlannerInfo *root, Path *path, int indent) ptype = "Gather"; subpath = ((GatherPath *) path)->subpath; break; + case T_ProjectionPath: + ptype = "Projection"; + subpath = ((ProjectionPath *) path)->subpath; + break; + case T_SortPath: + ptype = "Sort"; + subpath = ((SortPath *) path)->subpath; + break; + case T_GroupPath: + ptype = "Group"; + subpath = ((GroupPath *) path)->subpath; + break; + case T_UpperUniquePath: + ptype = "UpperUnique"; + subpath = ((UpperUniquePath *) path)->subpath; + break; + case T_AggPath: + ptype = "Agg"; + subpath = ((AggPath *) path)->subpath; + break; + case T_GroupingSetsPath: + ptype = "GroupingSets"; + subpath = ((GroupingSetsPath *) path)->subpath; + break; + case T_MinMaxAggPath: + ptype = "MinMaxAgg"; + break; + case T_WindowAggPath: + ptype = "WindowAgg"; + subpath = 
((WindowAggPath *) path)->subpath; + break; + case T_SetOpPath: + ptype = "SetOp"; + subpath = ((SetOpPath *) path)->subpath; + break; + case T_RecursiveUnionPath: + ptype = "RecursiveUnion"; + break; + case T_LockRowsPath: + ptype = "LockRows"; + subpath = ((LockRowsPath *) path)->subpath; + break; + case T_ModifyTablePath: + ptype = "ModifyTable"; + break; + case T_LimitPath: + ptype = "Limit"; + subpath = ((LimitPath *) path)->subpath; + break; case T_NestPath: ptype = "NestLoop"; join = true; diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 5fc2f9ceb4..ffff3c0128 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -1169,7 +1169,7 @@ cost_tidscan(Path *path, PlannerInfo *root, * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL */ void -cost_subqueryscan(Path *path, PlannerInfo *root, +cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root, RelOptInfo *baserel, ParamPathInfo *param_info) { Cost startup_cost; @@ -1183,17 +1183,18 @@ cost_subqueryscan(Path *path, PlannerInfo *root, /* Mark the path with the correct row estimate */ if (param_info) - path->rows = param_info->ppi_rows; + path->path.rows = param_info->ppi_rows; else - path->rows = baserel->rows; + path->path.rows = baserel->rows; /* * Cost of path is cost of evaluating the subplan, plus cost of evaluating - * any restriction clauses that will be attached to the SubqueryScan node, - * plus cpu_tuple_cost to account for selection and projection overhead. + * any restriction clauses and tlist that will be attached to the + * SubqueryScan node, plus cpu_tuple_cost to account for selection and + * projection overhead. 
*/ - path->startup_cost = baserel->subplan->startup_cost; - path->total_cost = baserel->subplan->total_cost; + path->path.startup_cost = path->subpath->startup_cost; + path->path.total_cost = path->subpath->total_cost; get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost); @@ -1202,11 +1203,11 @@ cost_subqueryscan(Path *path, PlannerInfo *root, run_cost = cpu_per_tuple * baserel->tuples; /* tlist eval costs are paid per output row, not per tuple scanned */ - startup_cost += path->pathtarget->cost.startup; - run_cost += path->pathtarget->cost.per_tuple * path->rows; + startup_cost += path->path.pathtarget->cost.startup; + run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows; - path->startup_cost += startup_cost; - path->total_cost += startup_cost + run_cost; + path->path.startup_cost += startup_cost; + path->path.total_cost += startup_cost + run_cost; } /* @@ -1369,14 +1370,10 @@ cost_ctescan(Path *path, PlannerInfo *root, * Determines and returns the cost of performing a recursive union, * and also the estimated output size. * - * We are given Plans for the nonrecursive and recursive terms. - * - * Note that the arguments and output are Plans, not Paths as in most of - * the rest of this module. That's because we don't bother setting up a - * Path representation for recursive union --- we have only one way to do it. + * We are given Paths for the nonrecursive and recursive terms. 
*/ void -cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm) +cost_recursive_union(Path *runion, Path *nrterm, Path *rterm) { Cost startup_cost; Cost total_cost; @@ -1385,7 +1382,7 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm) /* We probably have decent estimates for the non-recursive term */ startup_cost = nrterm->startup_cost; total_cost = nrterm->total_cost; - total_rows = nrterm->plan_rows; + total_rows = nrterm->rows; /* * We arbitrarily assume that about 10 recursive iterations will be @@ -1394,7 +1391,7 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm) * hard to see how to do better. */ total_cost += 10 * rterm->total_cost; - total_rows += 10 * rterm->plan_rows; + total_rows += 10 * rterm->rows; /* * Also charge cpu_tuple_cost per row to account for the costs of @@ -1405,8 +1402,9 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm) runion->startup_cost = startup_cost; runion->total_cost = total_cost; - runion->plan_rows = total_rows; - runion->plan_width = Max(nrterm->plan_width, rterm->plan_width); + runion->rows = total_rows; + runion->pathtarget->width = Max(nrterm->pathtarget->width, + rterm->pathtarget->width); } /* @@ -3996,8 +3994,8 @@ calc_joinrel_size_estimate(PlannerInfo *root, * Set the size estimates for a base relation that is a subquery. * * The rel's targetlist and restrictinfo list must have been constructed - * already, and the plan for the subquery must have been completed. - * We look at the subquery's plan and PlannerInfo to extract data. + * already, and the Paths for the subquery must have been completed. + * We look at the subquery's PlannerInfo to extract data. * * We set the same fields as set_baserel_size_estimates. 
*/ @@ -4005,6 +4003,7 @@ void set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel) { PlannerInfo *subroot = rel->subroot; + RelOptInfo *sub_final_rel; RangeTblEntry *rte PG_USED_FOR_ASSERTS_ONLY; ListCell *lc; @@ -4013,8 +4012,12 @@ set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel) rte = planner_rt_fetch(rel->relid, root); Assert(rte->rtekind == RTE_SUBQUERY); - /* Copy raw number of output rows from subplan */ - rel->tuples = rel->subplan->plan_rows; + /* + * Copy raw number of output rows from subquery. All of its paths should + * have the same output rowcount, so just look at cheapest-total. + */ + sub_final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL); + rel->tuples = sub_final_rel->cheapest_total_path->rows; /* * Compute per-output-column width estimates by examining the subquery's @@ -4144,13 +4147,13 @@ set_values_size_estimates(PlannerInfo *root, RelOptInfo *rel) * Set the size estimates for a base relation that is a CTE reference. * * The rel's targetlist and restrictinfo list must have been constructed - * already, and we need the completed plan for the CTE (if a regular CTE) - * or the non-recursive term (if a self-reference). + * already, and we need an estimate of the number of rows returned by the CTE + * (if a regular CTE) or the non-recursive term (if a self-reference). * * We set the same fields as set_baserel_size_estimates. */ void -set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, Plan *cteplan) +set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, double cte_rows) { RangeTblEntry *rte; @@ -4165,12 +4168,12 @@ set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, Plan *cteplan) * In a self-reference, arbitrarily assume the average worktable size * is about 10 times the nonrecursive term's size. 
*/ - rel->tuples = 10 * cteplan->plan_rows; + rel->tuples = 10 * cte_rows; } else { - /* Otherwise just believe the CTE plan's output estimate */ - rel->tuples = cteplan->plan_rows; + /* Otherwise just believe the CTE's rowcount estimate */ + rel->tuples = cte_rows; } /* Now estimate number of output rows, etc */ @@ -4225,7 +4228,7 @@ set_foreign_size_estimates(PlannerInfo *root, RelOptInfo *rel) * any better number. * * The per-attribute width estimates are cached for possible re-use while - * building join relations. + * building join relations or post-scan/join pathtargets. */ static void set_rel_width(PlannerInfo *root, RelOptInfo *rel) @@ -4373,6 +4376,91 @@ set_rel_width(PlannerInfo *root, RelOptInfo *rel) rel->reltarget.width = tuple_width; } +/* + * set_pathtarget_cost_width + * Set the estimated eval cost and output width of a PathTarget tlist. + * + * As a notational convenience, returns the same PathTarget pointer passed in. + * + * Most, though not quite all, uses of this function occur after we've run + * set_rel_width() for base relations; so we can usually obtain cached width + * estimates for Vars. If we can't, fall back on datatype-based width + * estimates. Present early-planning uses of PathTargets don't need accurate + * widths badly enough to justify going to the catalogs for better data. 
+ */ +PathTarget * +set_pathtarget_cost_width(PlannerInfo *root, PathTarget *target) +{ + int32 tuple_width = 0; + ListCell *lc; + + /* Vars are assumed to have cost zero, but other exprs do not */ + target->cost.startup = 0; + target->cost.per_tuple = 0; + + foreach(lc, target->exprs) + { + Node *node = (Node *) lfirst(lc); + + if (IsA(node, Var)) + { + Var *var = (Var *) node; + int32 item_width; + + /* We should not see any upper-level Vars here */ + Assert(var->varlevelsup == 0); + + /* Try to get data from RelOptInfo cache */ + if (var->varno < root->simple_rel_array_size) + { + RelOptInfo *rel = root->simple_rel_array[var->varno]; + + if (rel != NULL && + var->varattno >= rel->min_attr && + var->varattno <= rel->max_attr) + { + int ndx = var->varattno - rel->min_attr; + + if (rel->attr_widths[ndx] > 0) + { + tuple_width += rel->attr_widths[ndx]; + continue; + } + } + } + + /* + * No cached data available, so estimate using just the type info. + */ + item_width = get_typavgwidth(var->vartype, var->vartypmod); + Assert(item_width > 0); + tuple_width += item_width; + } + else + { + /* + * Handle general expressions using type info. 
+ */ + int32 item_width; + QualCost cost; + + item_width = get_typavgwidth(exprType(node), exprTypmod(node)); + Assert(item_width > 0); + tuple_width += item_width; + + /* Account for cost, too */ + cost_qual_eval_node(&cost, node, root); + target->cost.startup += cost.startup; + target->cost.per_tuple += cost.per_tuple; + } + } + + Assert(tuple_width >= 0); + target->width = tuple_width; + + return target; +} + /* * relation_byte_size * Estimate the storage space in bytes for a given number of tuples diff --git a/src/backend/optimizer/path/equivclass.c b/src/backend/optimizer/path/equivclass.c index a36d0c9fcf..d9a65eba35 100644 --- a/src/backend/optimizer/path/equivclass.c +++ b/src/backend/optimizer/path/equivclass.c @@ -1998,48 +1998,6 @@ add_child_rel_equivalences(PlannerInfo *root, } -/* - * mutate_eclass_expressions - * Apply an expression tree mutator to all expressions stored in - * equivalence classes (but ignore child exprs unless include_child_exprs). - * - * This is a bit of a hack ... it's currently needed only by planagg.c, - * which needs to do a global search-and-replace of MIN/MAX Aggrefs - * after eclasses are already set up. Without changing the eclasses too, - * subsequent matching of ORDER BY and DISTINCT clauses would fail. - * - * Note that we assume the mutation won't affect relation membership or any - * other properties we keep track of (which is a bit bogus, but by the time - * planagg.c runs, it no longer matters). Also we must be called in the - * main planner memory context. 
- */ -void -mutate_eclass_expressions(PlannerInfo *root, - Node *(*mutator) (), - void *context, - bool include_child_exprs) -{ - ListCell *lc1; - - foreach(lc1, root->eq_classes) - { - EquivalenceClass *cur_ec = (EquivalenceClass *) lfirst(lc1); - ListCell *lc2; - - foreach(lc2, cur_ec->ec_members) - { - EquivalenceMember *cur_em = (EquivalenceMember *) lfirst(lc2); - - if (cur_em->em_is_child && !include_child_exprs) - continue; /* ignore children unless requested */ - - cur_em->em_expr = (Expr *) - mutator((Node *) cur_em->em_expr, context); - } - } -} - - /* * generate_implied_equalities_for_column * Create EC-derived joinclauses usable with a specific column. diff --git a/src/backend/optimizer/path/pathkeys.c b/src/backend/optimizer/path/pathkeys.c index eed39b9e1b..4436ac111d 100644 --- a/src/backend/optimizer/path/pathkeys.c +++ b/src/backend/optimizer/path/pathkeys.c @@ -557,6 +557,7 @@ build_expression_pathkey(PlannerInfo *root, * * 'rel': outer query's RelOptInfo for the subquery relation. * 'subquery_pathkeys': the subquery's output pathkeys, in its terms. + * 'subquery_tlist': the subquery's output targetlist, in its terms. 
* * It is not necessary for caller to do truncate_useless_pathkeys(), * because we select keys in a way that takes usefulness of the keys into @@ -564,12 +565,12 @@ build_expression_pathkey(PlannerInfo *root, */ List * convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel, - List *subquery_pathkeys) + List *subquery_pathkeys, + List *subquery_tlist) { List *retval = NIL; int retvallen = 0; int outer_query_keys = list_length(root->query_pathkeys); - List *sub_tlist = rel->subplan->targetlist; ListCell *i; foreach(i, subquery_pathkeys) @@ -589,7 +590,7 @@ convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel, if (sub_eclass->ec_sortref == 0) /* can't happen */ elog(ERROR, "volatile EquivalenceClass has no sortref"); - tle = get_sortgroupref_tle(sub_eclass->ec_sortref, sub_tlist); + tle = get_sortgroupref_tle(sub_eclass->ec_sortref, subquery_tlist); Assert(tle); /* resjunk items aren't visible to outer query */ if (!tle->resjunk) @@ -669,7 +670,7 @@ convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel, if (sub_member->em_is_child) continue; /* ignore children here */ - foreach(k, sub_tlist) + foreach(k, subquery_tlist) { TargetEntry *tle = (TargetEntry *) lfirst(k); Expr *tle_expr; diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 198b06b849..88c72792c5 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -44,24 +44,78 @@ #include "utils/lsyscache.h" -static Plan *create_plan_recurse(PlannerInfo *root, Path *best_path); -static Plan *create_scan_plan(PlannerInfo *root, Path *best_path); +/* + * Flag bits that can appear in the flags argument of create_plan_recurse(). + * These can be OR-ed together. + * + * CP_EXACT_TLIST specifies that the generated plan node must return exactly + * the tlist specified by the path's pathtarget (this overrides both + * CP_SMALL_TLIST and CP_LABEL_TLIST, if those are set). 
Otherwise, the + * plan node is allowed to return just the Vars and PlaceHolderVars needed + * to evaluate the pathtarget. + * + * CP_SMALL_TLIST specifies that a narrower tlist is preferred. This is + * passed down by parent nodes such as Sort and Hash, which will have to + * store the returned tuples. + * + * CP_LABEL_TLIST specifies that the plan node must return columns matching + * any sortgrouprefs specified in its pathtarget, with appropriate + * ressortgroupref labels. This is passed down by parent nodes such as Sort + * and Group, which need these values to be available in their inputs. + */ +#define CP_EXACT_TLIST 0x0001 /* Plan must return specified tlist */ +#define CP_SMALL_TLIST 0x0002 /* Prefer narrower tlists */ +#define CP_LABEL_TLIST 0x0004 /* tlist must contain sortgrouprefs */ + + +static Plan *create_plan_recurse(PlannerInfo *root, Path *best_path, + int flags); +static Plan *create_scan_plan(PlannerInfo *root, Path *best_path, + int flags); static List *build_path_tlist(PlannerInfo *root, Path *path); -static bool use_physical_tlist(PlannerInfo *root, RelOptInfo *rel); -static void disuse_physical_tlist(PlannerInfo *root, Plan *plan, Path *path); -static Plan *create_gating_plan(PlannerInfo *root, Plan *plan, List *quals); +static bool use_physical_tlist(PlannerInfo *root, Path *path, int flags); +static List *get_gating_quals(PlannerInfo *root, List *quals); +static Plan *create_gating_plan(PlannerInfo *root, Path *path, Plan *plan, + List *gating_quals); static Plan *create_join_plan(PlannerInfo *root, JoinPath *best_path); static Plan *create_append_plan(PlannerInfo *root, AppendPath *best_path); static Plan *create_merge_append_plan(PlannerInfo *root, MergeAppendPath *best_path); static Result *create_result_plan(PlannerInfo *root, ResultPath *best_path); -static Material *create_material_plan(PlannerInfo *root, MaterialPath *best_path); -static Plan *create_unique_plan(PlannerInfo *root, UniquePath *best_path); +static Material 
*create_material_plan(PlannerInfo *root, MaterialPath *best_path, + int flags); +static Plan *create_unique_plan(PlannerInfo *root, UniquePath *best_path, + int flags); +static Gather *create_gather_plan(PlannerInfo *root, GatherPath *best_path); +static Plan *create_projection_plan(PlannerInfo *root, ProjectionPath *best_path); +static Sort *create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags); +static Group *create_group_plan(PlannerInfo *root, GroupPath *best_path); +static Unique *create_upper_unique_plan(PlannerInfo *root, UpperUniquePath *best_path, + int flags); +static Agg *create_agg_plan(PlannerInfo *root, AggPath *best_path); +static Plan *create_groupingsets_plan(PlannerInfo *root, GroupingSetsPath *best_path); +static Result *create_minmaxagg_plan(PlannerInfo *root, MinMaxAggPath *best_path); +static WindowAgg *create_windowagg_plan(PlannerInfo *root, WindowAggPath *best_path); +static SetOp *create_setop_plan(PlannerInfo *root, SetOpPath *best_path, + int flags); +static RecursiveUnion *create_recursiveunion_plan(PlannerInfo *root, RecursiveUnionPath *best_path); +static void get_column_info_for_window(PlannerInfo *root, WindowClause *wc, + List *tlist, + int numSortCols, AttrNumber *sortColIdx, + int *partNumCols, + AttrNumber **partColIdx, + Oid **partOperators, + int *ordNumCols, + AttrNumber **ordColIdx, + Oid **ordOperators); +static LockRows *create_lockrows_plan(PlannerInfo *root, LockRowsPath *best_path, + int flags); +static ModifyTable *create_modifytable_plan(PlannerInfo *root, ModifyTablePath *best_path); +static Limit *create_limit_plan(PlannerInfo *root, LimitPath *best_path, + int flags); static SeqScan *create_seqscan_plan(PlannerInfo *root, Path *best_path, List *tlist, List *scan_clauses); static SampleScan *create_samplescan_plan(PlannerInfo *root, Path *best_path, List *tlist, List *scan_clauses); -static Gather *create_gather_plan(PlannerInfo *root, - GatherPath *best_path); static Scan 
*create_indexscan_plan(PlannerInfo *root, IndexPath *best_path, List *tlist, List *scan_clauses, bool indexonly); static BitmapHeapScan *create_bitmap_scan_plan(PlannerInfo *root, @@ -71,7 +125,8 @@ static Plan *create_bitmap_subplan(PlannerInfo *root, Path *bitmapqual, List **qual, List **indexqual, List **indexECs); static TidScan *create_tidscan_plan(PlannerInfo *root, TidPath *best_path, List *tlist, List *scan_clauses); -static SubqueryScan *create_subqueryscan_plan(PlannerInfo *root, Path *best_path, +static SubqueryScan *create_subqueryscan_plan(PlannerInfo *root, + SubqueryScanPath *best_path, List *tlist, List *scan_clauses); static FunctionScan *create_functionscan_plan(PlannerInfo *root, Path *best_path, List *tlist, List *scan_clauses); @@ -86,12 +141,9 @@ static ForeignScan *create_foreignscan_plan(PlannerInfo *root, ForeignPath *best static CustomScan *create_customscan_plan(PlannerInfo *root, CustomPath *best_path, List *tlist, List *scan_clauses); -static NestLoop *create_nestloop_plan(PlannerInfo *root, NestPath *best_path, - Plan *outer_plan, Plan *inner_plan); -static MergeJoin *create_mergejoin_plan(PlannerInfo *root, MergePath *best_path, - Plan *outer_plan, Plan *inner_plan); -static HashJoin *create_hashjoin_plan(PlannerInfo *root, HashPath *best_path, - Plan *outer_plan, Plan *inner_plan); +static NestLoop *create_nestloop_plan(PlannerInfo *root, NestPath *best_path); +static MergeJoin *create_mergejoin_plan(PlannerInfo *root, MergePath *best_path); +static HashJoin *create_hashjoin_plan(PlannerInfo *root, HashPath *best_path); static Node *replace_nestloop_params(PlannerInfo *root, Node *expr); static Node *replace_nestloop_params_mutator(Node *node, PlannerInfo *root); static void process_subquery_nestloop_params(PlannerInfo *root, @@ -106,8 +158,6 @@ static void copy_plan_costsize(Plan *dest, Plan *src); static SeqScan *make_seqscan(List *qptlist, List *qpqual, Index scanrelid); static SampleScan *make_samplescan(List *qptlist, List 
*qpqual, Index scanrelid, TableSampleClause *tsc); -static Gather *make_gather(List *qptlist, List *qpqual, - int nworkers, bool single_copy, Plan *subplan); static IndexScan *make_indexscan(List *qptlist, List *qpqual, Index scanrelid, Oid indexid, List *indexqual, List *indexqualorig, List *indexorderby, List *indexorderbyorig, @@ -128,6 +178,10 @@ static BitmapHeapScan *make_bitmap_heapscan(List *qptlist, Index scanrelid); static TidScan *make_tidscan(List *qptlist, List *qpqual, Index scanrelid, List *tidquals); +static SubqueryScan *make_subqueryscan(List *qptlist, + List *qpqual, + Index scanrelid, + Plan *subplan); static FunctionScan *make_functionscan(List *qptlist, List *qpqual, Index scanrelid, List *functions, bool funcordinality); static ValuesScan *make_valuesscan(List *qptlist, List *qpqual, @@ -136,6 +190,13 @@ static CteScan *make_ctescan(List *qptlist, List *qpqual, Index scanrelid, int ctePlanId, int cteParam); static WorkTableScan *make_worktablescan(List *qptlist, List *qpqual, Index scanrelid, int wtParam); +static Append *make_append(List *appendplans, List *tlist); +static RecursiveUnion *make_recursive_union(List *tlist, + Plan *lefttree, + Plan *righttree, + int wtParam, + List *distinctList, + long numGroups); static BitmapAnd *make_bitmap_and(List *bitmapplans); static BitmapOr *make_bitmap_or(List *bitmapplans); static NestLoop *make_nestloop(List *tlist, @@ -179,7 +240,48 @@ static Plan *prepare_sort_from_pathkeys(PlannerInfo *root, static EquivalenceMember *find_ec_member_for_tle(EquivalenceClass *ec, TargetEntry *tle, Relids relids); +static Sort *make_sort_from_pathkeys(PlannerInfo *root, Plan *lefttree, + List *pathkeys, double limit_tuples); +static Sort *make_sort_from_sortclauses(PlannerInfo *root, List *sortcls, + Plan *lefttree); +static Sort *make_sort_from_groupcols(PlannerInfo *root, + List *groupcls, + AttrNumber *grpColIdx, + Plan *lefttree); static Material *make_material(Plan *lefttree); +static Agg *make_agg(List 
*tlist, List *qual, AggStrategy aggstrategy, + bool combineStates, bool finalizeAggs, + int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators, + List *groupingSets, List *chain, + double dNumGroups, Plan *lefttree); +static WindowAgg *make_windowagg(List *tlist, Index winref, + int partNumCols, AttrNumber *partColIdx, Oid *partOperators, + int ordNumCols, AttrNumber *ordColIdx, Oid *ordOperators, + int frameOptions, Node *startOffset, Node *endOffset, + Plan *lefttree); +static Group *make_group(List *tlist, List *qual, int numGroupCols, + AttrNumber *grpColIdx, Oid *grpOperators, + Plan *lefttree); +static Unique *make_unique_from_sortclauses(Plan *lefttree, List *distinctList); +static Unique *make_unique_from_pathkeys(Plan *lefttree, + List *pathkeys, int numCols); +static Gather *make_gather(List *qptlist, List *qpqual, + int nworkers, bool single_copy, Plan *subplan); +static SetOp *make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, + List *distinctList, AttrNumber flagColIdx, int firstFlag, + long numGroups); +static LockRows *make_lockrows(Plan *lefttree, List *rowMarks, int epqParam); +static Limit *make_limit(Plan *lefttree, Node *limitOffset, Node *limitCount); +static Result *make_result(PlannerInfo *root, + List *tlist, + Node *resconstantqual, + Plan *subplan); +static ModifyTable *make_modifytable(PlannerInfo *root, + CmdType operation, bool canSetTag, + Index nominalRelation, + List *resultRelations, List *subplans, + List *withCheckOptionLists, List *returningLists, + List *rowMarks, OnConflictExpr *onconflict, int epqParam); /* @@ -209,8 +311,26 @@ create_plan(PlannerInfo *root, Path *best_path) root->curOuterRels = NULL; root->curOuterParams = NIL; - /* Recursively process the path tree */ - plan = create_plan_recurse(root, best_path); + /* Recursively process the path tree, demanding the correct tlist result */ + plan = create_plan_recurse(root, best_path, CP_EXACT_TLIST); + + /* + * Make sure the topmost plan node's targetlist 
exposes the original + * column names and other decorative info. Targetlists generated within + * the planner don't bother with that stuff, but we must have it on the + * top-level tlist seen at execution time. However, ModifyTable plan + * nodes don't have a tlist matching the querytree targetlist. + */ + if (!IsA(plan, ModifyTable)) + apply_tlist_labeling(plan->targetlist, root->processed_tlist); + + /* + * Attach any initPlans created in this query level to the topmost plan + * node. (The initPlans could actually go in any plan node at or above + * where they're referenced, but there seems no reason to put them any + * lower than the topmost node for the query level.) + */ + SS_attach_initplans(root, plan); /* Update parallel safety information if needed. */ if (!best_path->parallel_safe) @@ -234,7 +354,7 @@ create_plan(PlannerInfo *root, Path *best_path) * Recursive guts of create_plan(). */ static Plan * -create_plan_recurse(PlannerInfo *root, Path *best_path) +create_plan_recurse(PlannerInfo *root, Path *best_path, int flags) { Plan *plan; @@ -253,7 +373,7 @@ create_plan_recurse(PlannerInfo *root, Path *best_path) case T_WorkTableScan: case T_ForeignScan: case T_CustomScan: - plan = create_scan_plan(root, best_path); + plan = create_scan_plan(root, best_path, flags); break; case T_HashJoin: case T_MergeJoin: @@ -270,21 +390,94 @@ create_plan_recurse(PlannerInfo *root, Path *best_path) (MergeAppendPath *) best_path); break; case T_Result: - plan = (Plan *) create_result_plan(root, - (ResultPath *) best_path); + if (IsA(best_path, ProjectionPath)) + { + plan = create_projection_plan(root, + (ProjectionPath *) best_path); + } + else if (IsA(best_path, MinMaxAggPath)) + { + plan = (Plan *) create_minmaxagg_plan(root, + (MinMaxAggPath *) best_path); + } + else + { + Assert(IsA(best_path, ResultPath)); + plan = (Plan *) create_result_plan(root, + (ResultPath *) best_path); + } break; case T_Material: plan = (Plan *) create_material_plan(root, - (MaterialPath *) 
best_path); + (MaterialPath *) best_path, + flags); break; case T_Unique: - plan = create_unique_plan(root, - (UniquePath *) best_path); + if (IsA(best_path, UpperUniquePath)) + { + plan = (Plan *) create_upper_unique_plan(root, + (UpperUniquePath *) best_path, + flags); + } + else + { + Assert(IsA(best_path, UniquePath)); + plan = create_unique_plan(root, + (UniquePath *) best_path, + flags); + } break; case T_Gather: plan = (Plan *) create_gather_plan(root, (GatherPath *) best_path); break; + case T_Sort: + plan = (Plan *) create_sort_plan(root, + (SortPath *) best_path, + flags); + break; + case T_Group: + plan = (Plan *) create_group_plan(root, + (GroupPath *) best_path); + break; + case T_Agg: + if (IsA(best_path, GroupingSetsPath)) + plan = create_groupingsets_plan(root, + (GroupingSetsPath *) best_path); + else + { + Assert(IsA(best_path, AggPath)); + plan = (Plan *) create_agg_plan(root, + (AggPath *) best_path); + } + break; + case T_WindowAgg: + plan = (Plan *) create_windowagg_plan(root, + (WindowAggPath *) best_path); + break; + case T_SetOp: + plan = (Plan *) create_setop_plan(root, + (SetOpPath *) best_path, + flags); + break; + case T_RecursiveUnion: + plan = (Plan *) create_recursiveunion_plan(root, + (RecursiveUnionPath *) best_path); + break; + case T_LockRows: + plan = (Plan *) create_lockrows_plan(root, + (LockRowsPath *) best_path, + flags); + break; + case T_ModifyTable: + plan = (Plan *) create_modifytable_plan(root, + (ModifyTablePath *) best_path); + break; + case T_Limit: + plan = (Plan *) create_limit_plan(root, + (LimitPath *) best_path, + flags); + break; default: elog(ERROR, "unrecognized node type: %d", (int) best_path->pathtype); @@ -300,34 +493,68 @@ create_plan_recurse(PlannerInfo *root, Path *best_path) * Create a scan plan for the parent relation of 'best_path'. 
*/ static Plan * -create_scan_plan(PlannerInfo *root, Path *best_path) +create_scan_plan(PlannerInfo *root, Path *best_path, int flags) { RelOptInfo *rel = best_path->parent; - List *tlist; List *scan_clauses; + List *gating_clauses; + List *tlist; Plan *plan; + /* + * Extract the relevant restriction clauses from the parent relation. The + * executor must apply all these restrictions during the scan, except for + * pseudoconstants which we'll take care of below. + */ + scan_clauses = rel->baserestrictinfo; + + /* + * If this is a parameterized scan, we also need to enforce all the join + * clauses available from the outer relation(s). + * + * For paranoia's sake, don't modify the stored baserestrictinfo list. + */ + if (best_path->param_info) + scan_clauses = list_concat(list_copy(scan_clauses), + best_path->param_info->ppi_clauses); + + /* + * Detect whether we have any pseudoconstant quals to deal with. Then, if + * we'll need a gating Result node, it will be able to project, so there + * are no requirements on the child's tlist. + */ + gating_clauses = get_gating_quals(root, scan_clauses); + if (gating_clauses) + flags = 0; + /* * For table scans, rather than using the relation targetlist (which is * only those Vars actually needed by the query), we prefer to generate a * tlist containing all Vars in order. This will allow the executor to - * optimize away projection of the table tuples, if possible. (Note that - * planner.c may replace the tlist we generate here, forcing projection to - * occur.) + * optimize away projection of the table tuples, if possible. 
*/ - if (use_physical_tlist(root, rel)) + if (use_physical_tlist(root, best_path, flags)) { if (best_path->pathtype == T_IndexOnlyScan) { /* For index-only scan, the preferred tlist is the index's */ tlist = copyObject(((IndexPath *) best_path)->indexinfo->indextlist); + /* Transfer any sortgroupref data to the replacement tlist */ + apply_pathtarget_labeling_to_tlist(tlist, best_path->pathtarget); } else { tlist = build_physical_tlist(root, rel); - /* if fail because of dropped cols, use regular method */ if (tlist == NIL) + { + /* Failed because of dropped cols, so use regular method */ tlist = build_path_tlist(root, best_path); + } + else + { + /* Transfer any sortgroupref data to the replacement tlist */ + apply_pathtarget_labeling_to_tlist(tlist, best_path->pathtarget); + } } } else @@ -335,23 +562,6 @@ create_scan_plan(PlannerInfo *root, Path *best_path) tlist = build_path_tlist(root, best_path); } - /* - * Extract the relevant restriction clauses from the parent relation. The - * executor must apply all these restrictions during the scan, except for - * pseudoconstants which we'll take care of below. - */ - scan_clauses = rel->baserestrictinfo; - - /* - * If this is a parameterized scan, we also need to enforce all the join - * clauses available from the outer relation(s). - * - * For paranoia's sake, don't modify the stored baserestrictinfo list. - */ - if (best_path->param_info) - scan_clauses = list_concat(list_copy(scan_clauses), - best_path->param_info->ppi_clauses); - switch (best_path->pathtype) { case T_SeqScan: @@ -400,7 +610,7 @@ create_scan_plan(PlannerInfo *root, Path *best_path) case T_SubqueryScan: plan = (Plan *) create_subqueryscan_plan(root, - best_path, + (SubqueryScanPath *) best_path, tlist, scan_clauses); break; @@ -459,27 +669,30 @@ create_scan_plan(PlannerInfo *root, Path *best_path) * gating Result node that evaluates the pseudoconstants as one-time * quals. 
*/ - if (root->hasPseudoConstantQuals) - plan = create_gating_plan(root, plan, scan_clauses); + if (gating_clauses) + plan = create_gating_plan(root, best_path, plan, gating_clauses); return plan; } /* * Build a target list (ie, a list of TargetEntry) for the Path's output. + * + * This is almost just make_tlist_from_pathtarget(), but we also have to + * deal with replacing nestloop params. */ static List * build_path_tlist(PlannerInfo *root, Path *path) { - RelOptInfo *rel = path->parent; List *tlist = NIL; + Index *sortgrouprefs = path->pathtarget->sortgrouprefs; int resno = 1; ListCell *v; - foreach(v, rel->reltarget.exprs) + foreach(v, path->pathtarget->exprs) { - /* Do we really need to copy here? Not sure */ - Node *node = (Node *) copyObject(lfirst(v)); + Node *node = (Node *) lfirst(v); + TargetEntry *tle; /* * If it's a parameterized path, there might be lateral references in @@ -490,10 +703,14 @@ build_path_tlist(PlannerInfo *root, Path *path) if (path->param_info) node = replace_nestloop_params(root, node); - tlist = lappend(tlist, makeTargetEntry((Expr *) node, - resno, - NULL, - false)); + tle = makeTargetEntry((Expr *) node, + resno, + NULL, + false); + if (sortgrouprefs) + tle->ressortgroupref = sortgrouprefs[resno - 1]; + + tlist = lappend(tlist, tle); resno++; } return tlist; @@ -505,11 +722,18 @@ build_path_tlist(PlannerInfo *root, Path *path) * rather than only those Vars actually referenced. */ static bool -use_physical_tlist(PlannerInfo *root, RelOptInfo *rel) +use_physical_tlist(PlannerInfo *root, Path *path, int flags) { + RelOptInfo *rel = path->parent; int i; ListCell *lc; + /* + * Forget it if either exact tlist or small tlist is demanded. + */ + if (flags & (CP_EXACT_TLIST | CP_SMALL_TLIST)) + return false; + /* * We can do this for real relation scans, subquery scans, function scans, * values scans, and CTE scans (but not for, eg, joins). 
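The tlist-demand flags used throughout these hunks are plain OR-able bits, so the guard at the top of use_physical_tlist() reduces to a single mask test. Below is a minimal standalone C sketch of that discipline; the CP_* values copy the ones this patch adds to createplan.c, but physical_tlist_allowed is a hypothetical stand-in for the real guard, not PostgreSQL code:

```c
#include <assert.h>

/* Flag bits as defined by this patch in createplan.c */
#define CP_EXACT_TLIST 0x0001   /* Plan must return specified tlist */
#define CP_SMALL_TLIST 0x0002   /* Prefer narrower tlists */
#define CP_LABEL_TLIST 0x0004   /* tlist must contain sortgrouprefs */

/*
 * Hypothetical stand-in for the guard at the top of use_physical_tlist():
 * a physical (all-columns) tlist is acceptable only when the parent has
 * demanded neither an exact nor a narrow tlist.  CP_LABEL_TLIST alone does
 * not rule it out, because sortgroupref labeling can still be applied to a
 * physical tlist via apply_pathtarget_labeling_to_tlist().
 */
static int physical_tlist_allowed(int flags)
{
    return (flags & (CP_EXACT_TLIST | CP_SMALL_TLIST)) == 0;
}
```

This is also why create_scan_plan() can simply reset flags to 0 once it knows a gating Result will be added: the Result node projects, so every tlist demand on the child is dropped in one assignment.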
@@ -523,7 +747,8 @@ use_physical_tlist(PlannerInfo *root, RelOptInfo *rel) /* * Can't do it with inheritance cases either (mainly because Append - * doesn't project). + * doesn't project; this test may be unnecessary now that + * create_append_plan instructs its children to return an exact tlist). */ if (rel->reloptkind != RELOPT_BASEREL) return false; @@ -552,52 +777,60 @@ use_physical_tlist(PlannerInfo *root, RelOptInfo *rel) return false; } + /* + * Also, can't do it if CP_LABEL_TLIST is specified and path is requested + * to emit any sort/group columns that are not simple Vars. (If they are + * simple Vars, they should appear in the physical tlist, and + * apply_pathtarget_labeling_to_tlist will take care of getting them + * labeled again.) + */ + if ((flags & CP_LABEL_TLIST) && path->pathtarget->sortgrouprefs) + { + i = 0; + foreach(lc, path->pathtarget->exprs) + { + Expr *expr = (Expr *) lfirst(lc); + + if (path->pathtarget->sortgrouprefs[i]) + { + if (expr && IsA(expr, Var)) + /* okay */ ; + else + return false; + } + i++; + } + } + return true; } /* - * disuse_physical_tlist - * Switch a plan node back to emitting only Vars actually referenced. + * get_gating_quals + * See if there are pseudoconstant quals in a node's quals list * - * If the plan node immediately above a scan would prefer to get only - * needed Vars and not a physical tlist, it must call this routine to - * undo the decision made by use_physical_tlist(). Currently, Hash, Sort, - * Material, and Gather nodes want this, so they don't have to store or - * transfer useless columns. + * If the node's quals list includes any pseudoconstant quals, + * return just those quals. 
*/ -static void -disuse_physical_tlist(PlannerInfo *root, Plan *plan, Path *path) +static List * +get_gating_quals(PlannerInfo *root, List *quals) { - /* Only need to undo it for path types handled by create_scan_plan() */ - switch (path->pathtype) - { - case T_SeqScan: - case T_SampleScan: - case T_IndexScan: - case T_IndexOnlyScan: - case T_BitmapHeapScan: - case T_TidScan: - case T_SubqueryScan: - case T_FunctionScan: - case T_ValuesScan: - case T_CteScan: - case T_WorkTableScan: - case T_ForeignScan: - case T_CustomScan: - plan->targetlist = build_path_tlist(root, path); - break; - default: - break; - } + /* No need to look if we know there are no pseudoconstants */ + if (!root->hasPseudoConstantQuals) + return NIL; + + /* Sort into desirable execution order while still in RestrictInfo form */ + quals = order_qual_clauses(root, quals); + + /* Pull out any pseudoconstant quals from the RestrictInfo list */ + return extract_actual_clauses(quals, true); } /* * create_gating_plan * Deal with pseudoconstant qual clauses * - * If the node's quals list includes any pseudoconstant quals, put them - * into a gating Result node atop the already-built plan. Otherwise, - * return the plan as-is. + * Add a gating Result node atop the already-built plan. * * Note that we don't change cost or size estimates when doing gating. * The costs of qual eval were already folded into the plan's startup cost. @@ -611,22 +844,19 @@ disuse_physical_tlist(PlannerInfo *root, Plan *plan, Path *path) * qual being true. 
*/ static Plan * -create_gating_plan(PlannerInfo *root, Plan *plan, List *quals) +create_gating_plan(PlannerInfo *root, Path *path, Plan *plan, + List *gating_quals) { - List *pseudoconstants; - - /* Sort into desirable execution order while still in RestrictInfo form */ - quals = order_qual_clauses(root, quals); - - /* Pull out any pseudoconstant quals from the RestrictInfo list */ - pseudoconstants = extract_actual_clauses(quals, true); - - if (!pseudoconstants) - return plan; + Assert(gating_quals); + /* + * Since we need a Result node anyway, always return the path's requested + * tlist; that's never a wrong choice, even if the parent node didn't ask + * for CP_EXACT_TLIST. + */ return (Plan *) make_result(root, - plan->targetlist, - (Node *) pseudoconstants, + build_path_tlist(root, path), + (Node *) gating_quals, plan); } @@ -638,43 +868,22 @@ create_gating_plan(PlannerInfo *root, Plan *plan, List *quals) static Plan * create_join_plan(PlannerInfo *root, JoinPath *best_path) { - Plan *outer_plan; - Plan *inner_plan; Plan *plan; - Relids saveOuterRels = root->curOuterRels; - - outer_plan = create_plan_recurse(root, best_path->outerjoinpath); - - /* For a nestloop, include outer relids in curOuterRels for inner side */ - if (best_path->path.pathtype == T_NestLoop) - root->curOuterRels = bms_union(root->curOuterRels, - best_path->outerjoinpath->parent->relids); - - inner_plan = create_plan_recurse(root, best_path->innerjoinpath); + List *gating_clauses; switch (best_path->path.pathtype) { case T_MergeJoin: plan = (Plan *) create_mergejoin_plan(root, - (MergePath *) best_path, - outer_plan, - inner_plan); + (MergePath *) best_path); break; case T_HashJoin: plan = (Plan *) create_hashjoin_plan(root, - (HashPath *) best_path, - outer_plan, - inner_plan); + (HashPath *) best_path); break; case T_NestLoop: - /* Restore curOuterRels */ - bms_free(root->curOuterRels); - root->curOuterRels = saveOuterRels; - plan = (Plan *) create_nestloop_plan(root, - (NestPath *) 
best_path, - outer_plan, - inner_plan); + (NestPath *) best_path); break; default: elog(ERROR, "unrecognized node type: %d", @@ -688,8 +897,10 @@ create_join_plan(PlannerInfo *root, JoinPath *best_path) * gating Result node that evaluates the pseudoconstants as one-time * quals. */ - if (root->hasPseudoConstantQuals) - plan = create_gating_plan(root, plan, best_path->joinrestrictinfo); + gating_clauses = get_gating_quals(root, best_path->joinrestrictinfo); + if (gating_clauses) + plan = create_gating_plan(root, (Path *) best_path, plan, + gating_clauses); #ifdef NOT_USED @@ -745,8 +956,12 @@ create_append_plan(PlannerInfo *root, AppendPath *best_path) foreach(subpaths, best_path->subpaths) { Path *subpath = (Path *) lfirst(subpaths); + Plan *subplan; + + /* Must insist that all children return the same tlist */ + subplan = create_plan_recurse(root, subpath, CP_EXACT_TLIST); - subplans = lappend(subplans, create_plan_recurse(root, subpath)); + subplans = lappend(subplans, subplan); } /* @@ -817,7 +1032,8 @@ create_merge_append_plan(PlannerInfo *root, MergeAppendPath *best_path) bool *nullsFirst; /* Build the child plan */ - subplan = create_plan_recurse(root, subpath); + /* Must insist that all children return the same tlist */ + subplan = create_plan_recurse(root, subpath, CP_EXACT_TLIST); /* Compute sort column info, and adjust subplan's tlist as needed */ subplan = prepare_sort_from_pathkeys(root, subplan, pathkeys, @@ -893,15 +1109,18 @@ create_result_plan(PlannerInfo *root, ResultPath *best_path) * Returns a Plan node. 
*/ static Material * -create_material_plan(PlannerInfo *root, MaterialPath *best_path) +create_material_plan(PlannerInfo *root, MaterialPath *best_path, int flags) { Material *plan; Plan *subplan; - subplan = create_plan_recurse(root, best_path->subpath); - - /* We don't want any excess columns in the materialized tuples */ - disuse_physical_tlist(root, subplan, best_path->subpath); + /* + * We don't want any excess columns in the materialized tuples, so request + * a smaller tlist. Otherwise, since Material doesn't project, tlist + * requirements pass through. + */ + subplan = create_plan_recurse(root, best_path->subpath, + flags | CP_SMALL_TLIST); plan = make_material(subplan); @@ -918,7 +1137,7 @@ create_material_plan(PlannerInfo *root, MaterialPath *best_path) * Returns a Plan node. */ static Plan * -create_unique_plan(PlannerInfo *root, UniquePath *best_path) +create_unique_plan(PlannerInfo *root, UniquePath *best_path, int flags) { Plan *plan; Plan *subplan; @@ -932,7 +1151,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path) int groupColPos; ListCell *l; - subplan = create_plan_recurse(root, best_path->subpath); + /* Unique doesn't project, so tlist requirements pass through */ + subplan = create_plan_recurse(root, best_path->subpath, flags); /* Done if we don't need to do any actual unique-ifying */ if (best_path->umethod == UNIQUE_PATH_NOOP) @@ -1018,11 +1238,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path) if (best_path->umethod == UNIQUE_PATH_HASH) { - long numGroups; Oid *groupOperators; - numGroups = (long) Min(best_path->path.rows, (double) LONG_MAX); - /* * Get the hashable equality operators for the Agg node to use. * Normally these are the same as the IN clause operators, but if @@ -1047,18 +1264,17 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path) * minimum output tlist, without any stuff we might have added to the * subplan tlist. 
*/ - plan = (Plan *) make_agg(root, - build_path_tlist(root, &best_path->path), + plan = (Plan *) make_agg(build_path_tlist(root, &best_path->path), NIL, AGG_HASHED, - NULL, + false, + true, numGroupCols, groupColIdx, groupOperators, NIL, - numGroups, - false, - true, + NIL, + best_path->path.rows, subplan); } else @@ -1106,11 +1322,11 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path) groupColPos++; } plan = (Plan *) make_sort_from_sortclauses(root, sortList, subplan); - plan = (Plan *) make_unique(plan, sortList); + plan = (Plan *) make_unique_from_sortclauses(plan, sortList); } - /* Adjust output size estimate (other fields should be OK already) */ - plan->plan_rows = best_path->path.rows; + /* Copy cost data from Path to Plan */ + copy_generic_path_info(plan, &best_path->path); return plan; } @@ -1121,28 +1337,843 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path) * Create a Gather plan for 'best_path' and (recursively) plans * for its subpaths. */ -static Gather * -create_gather_plan(PlannerInfo *root, GatherPath *best_path) +static Gather * +create_gather_plan(PlannerInfo *root, GatherPath *best_path) +{ + Gather *gather_plan; + Plan *subplan; + + /* Must insist that all children return the same tlist */ + subplan = create_plan_recurse(root, best_path->subpath, CP_EXACT_TLIST); + + gather_plan = make_gather(subplan->targetlist, + NIL, + best_path->path.parallel_degree, + best_path->single_copy, + subplan); + + copy_generic_path_info(&gather_plan->plan, &best_path->path); + + /* use parallel mode for parallel plans. */ + root->glob->parallelModeNeeded = true; + + return gather_plan; +} + +/* + * create_projection_plan + * + * Create a Result node to do a projection step and (recursively) plans + * for its subpaths. 
+ */ +static Plan * +create_projection_plan(PlannerInfo *root, ProjectionPath *best_path) +{ + Plan *plan; + Plan *subplan; + List *tlist; + + /* Since we intend to project, we don't need to constrain child tlist */ + subplan = create_plan_recurse(root, best_path->subpath, 0); + + tlist = build_path_tlist(root, &best_path->path); + + /* + * Although the ProjectionPath node wouldn't have been made unless its + * pathtarget is different from the subpath's, it can still happen that + * the constructed tlist matches the subplan's. (An example is that + * MergeAppend doesn't project, so we would have thought that we needed a + * projection to attach resjunk sort columns to its output ... but + * create_merge_append_plan might have added those same resjunk sort + * columns to both MergeAppend and its children.) So, if the desired + * tlist is the same expression-wise as the subplan's, just jam it in + * there. We'll have charged for a Result that doesn't actually appear in + * the plan, but that's better than having a Result we don't need. + */ + if (tlist_same_exprs(tlist, subplan->targetlist)) + { + plan = subplan; + plan->targetlist = tlist; + + /* Adjust cost to match what we thought during planning */ + plan->startup_cost = best_path->path.startup_cost; + plan->total_cost = best_path->path.total_cost; + /* ... but be careful not to munge subplan's parallel-aware flag */ + } + else + { + plan = (Plan *) make_result(root, tlist, NULL, subplan); + + copy_generic_path_info(plan, (Path *) best_path); + } + + return plan; +} + +/* + * create_sort_plan + * + * Create a Sort plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static Sort * +create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags) +{ + Sort *plan; + Plan *subplan; + + /* + * We don't want any excess columns in the sorted tuples, so request a + * smaller tlist. Otherwise, since Sort doesn't project, tlist + * requirements pass through. 
+ */ + subplan = create_plan_recurse(root, best_path->subpath, + flags | CP_SMALL_TLIST); + + /* + * Don't need to have correct limit_tuples; that only affects the cost + * estimate, which we'll overwrite. (XXX should refactor so that we don't + * have a useless cost_sort call in here.) + */ + plan = make_sort_from_pathkeys(root, + subplan, + best_path->path.pathkeys, + -1.0); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * create_group_plan + * + * Create a Group plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static Group * +create_group_plan(PlannerInfo *root, GroupPath *best_path) +{ + Group *plan; + Plan *subplan; + List *tlist; + List *quals; + + /* + * Group can project, so no need to be terribly picky about child tlist, + * but we do need grouping columns to be available + */ + subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST); + + tlist = build_path_tlist(root, &best_path->path); + + quals = order_qual_clauses(root, best_path->qual); + + plan = make_group(tlist, + quals, + list_length(best_path->groupClause), + extract_grouping_cols(best_path->groupClause, + subplan->targetlist), + extract_grouping_ops(best_path->groupClause), + subplan); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * create_upper_unique_plan + * + * Create a Unique plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static Unique * +create_upper_unique_plan(PlannerInfo *root, UpperUniquePath *best_path, int flags) +{ + Unique *plan; + Plan *subplan; + + /* + * Unique doesn't project, so tlist requirements pass through; moreover we + * need grouping columns to be labeled. 
+ */ + subplan = create_plan_recurse(root, best_path->subpath, + flags | CP_LABEL_TLIST); + + plan = make_unique_from_pathkeys(subplan, + best_path->path.pathkeys, + best_path->numkeys); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * create_agg_plan + * + * Create an Agg plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static Agg * +create_agg_plan(PlannerInfo *root, AggPath *best_path) +{ + Agg *plan; + Plan *subplan; + List *tlist; + List *quals; + + /* + * Agg can project, so no need to be terribly picky about child tlist, but + * we do need grouping columns to be available + */ + subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST); + + tlist = build_path_tlist(root, &best_path->path); + + quals = order_qual_clauses(root, best_path->qual); + + plan = make_agg(tlist, quals, + best_path->aggstrategy, + false, + true, + list_length(best_path->groupClause), + extract_grouping_cols(best_path->groupClause, + subplan->targetlist), + extract_grouping_ops(best_path->groupClause), + NIL, + NIL, + best_path->numGroups, + subplan); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * Given a groupclause for a collection of grouping sets, produce the + * corresponding groupColIdx. + * + * root->grouping_map maps the tleSortGroupRef to the actual column position in + * the input tuple. So we get the ref from the entries in the groupclause and + * look them up there. 
+ */ +static AttrNumber * +remap_groupColIdx(PlannerInfo *root, List *groupClause) +{ + AttrNumber *grouping_map = root->grouping_map; + AttrNumber *new_grpColIdx; + ListCell *lc; + int i; + + Assert(grouping_map); + + new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(groupClause)); + + i = 0; + foreach(lc, groupClause) + { + SortGroupClause *clause = lfirst(lc); + + new_grpColIdx[i++] = grouping_map[clause->tleSortGroupRef]; + } + + return new_grpColIdx; +} + +/* + * create_groupingsets_plan + * Create a plan for 'best_path' and (recursively) plans + * for its subpaths. + * + * What we emit is an Agg plan with some vestigial Agg and Sort nodes + * hanging off the side. The top Agg implements the last grouping set + * specified in the GroupingSetsPath, and any additional grouping sets + * each give rise to a subsidiary Agg and Sort node in the top Agg's + * "chain" list. These nodes don't participate in the plan directly, + * but they are a convenient way to represent the required data for + * the extra steps. + * + * Returns a Plan node. + */ +static Plan * +create_groupingsets_plan(PlannerInfo *root, GroupingSetsPath *best_path) +{ + Agg *plan; + Plan *subplan; + AttrNumber *groupColIdx = best_path->groupColIdx; + List *rollup_groupclauses = best_path->rollup_groupclauses; + List *rollup_lists = best_path->rollup_lists; + AttrNumber *grouping_map; + int maxref; + List *chain; + int i; + ListCell *lc, + *lc2; + + /* Shouldn't get here without grouping sets */ + Assert(root->parse->groupingSets); + Assert(rollup_lists != NIL); + Assert(list_length(rollup_lists) == list_length(rollup_groupclauses)); + + /* + * Agg can project, so no need to be terribly picky about child tlist, but + * we do need grouping columns to be available + */ + subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST); + + /* + * Compute the mapping from tleSortGroupRef to column index. First, + * identify max SortGroupRef in groupClause, for array sizing. 
+ */ + maxref = 0; + foreach(lc, root->parse->groupClause) + { + SortGroupClause *gc = (SortGroupClause *) lfirst(lc); + + if (gc->tleSortGroupRef > maxref) + maxref = gc->tleSortGroupRef; + } + + grouping_map = (AttrNumber *) palloc0((maxref + 1) * sizeof(AttrNumber)); + + i = 0; + foreach(lc, root->parse->groupClause) + { + SortGroupClause *gc = (SortGroupClause *) lfirst(lc); + + grouping_map[gc->tleSortGroupRef] = groupColIdx[i++]; + } + + /* + * During setrefs.c, we'll need the grouping_map to fix up the cols lists + * in GroupingFunc nodes. Save it for setrefs.c to use. + * + * This doesn't work if we're in an inheritance subtree (see notes in + * create_modifytable_plan). Fortunately we can't be because there would + * never be grouping in an UPDATE/DELETE; but let's Assert that. + */ + Assert(!root->hasInheritedTarget); + Assert(root->grouping_map == NULL); + root->grouping_map = grouping_map; + + /* + * Generate the side nodes that describe the other sort and group + * operations besides the top one. Note that we don't worry about putting + * accurate cost estimates in the side nodes; only the topmost Agg node's + * costs will be shown by EXPLAIN. 
+ */ + chain = NIL; + if (list_length(rollup_groupclauses) > 1) + { + forboth(lc, rollup_groupclauses, lc2, rollup_lists) + { + List *groupClause = (List *) lfirst(lc); + List *gsets = (List *) lfirst(lc2); + AttrNumber *new_grpColIdx; + Plan *sort_plan; + Plan *agg_plan; + + /* We want to iterate over all but the last rollup list elements */ + if (lnext(lc) == NULL) + break; + + new_grpColIdx = remap_groupColIdx(root, groupClause); + + sort_plan = (Plan *) + make_sort_from_groupcols(root, + groupClause, + new_grpColIdx, + subplan); + + agg_plan = (Plan *) make_agg(NIL, + NIL, + AGG_SORTED, + false, + true, + list_length((List *) linitial(gsets)), + new_grpColIdx, + extract_grouping_ops(groupClause), + gsets, + NIL, + 0, /* numGroups not needed */ + sort_plan); + + /* + * Nuke stuff we don't need to avoid bloating debug output. + */ + sort_plan->targetlist = NIL; + sort_plan->lefttree = NULL; + + chain = lappend(chain, agg_plan); + } + } + + /* + * Now make the final Agg node + */ + { + List *groupClause = (List *) llast(rollup_groupclauses); + List *gsets = (List *) llast(rollup_lists); + AttrNumber *top_grpColIdx; + int numGroupCols; + + top_grpColIdx = remap_groupColIdx(root, groupClause); + + numGroupCols = list_length((List *) linitial(gsets)); + + plan = make_agg(build_path_tlist(root, &best_path->path), + best_path->qual, + (numGroupCols > 0) ? AGG_SORTED : AGG_PLAIN, + false, + true, + numGroupCols, + top_grpColIdx, + extract_grouping_ops(groupClause), + gsets, + chain, + 0, /* numGroups not needed */ + subplan); + + /* Copy cost data from Path to Plan */ + copy_generic_path_info(&plan->plan, &best_path->path); + } + + return (Plan *) plan; +} + +/* + * create_minmaxagg_plan + * + * Create a Result plan for 'best_path' and (recursively) plans + * for its subpaths. 
+ */ +static Result * +create_minmaxagg_plan(PlannerInfo *root, MinMaxAggPath *best_path) +{ + Result *plan; + List *tlist; + ListCell *lc; + + /* Prepare an InitPlan for each aggregate's subquery. */ + foreach(lc, best_path->mmaggregates) + { + MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); + PlannerInfo *subroot = mminfo->subroot; + Query *subparse = subroot->parse; + Plan *plan; + + /* + * Generate the plan for the subquery. We already have a Path, but we + * have to convert it to a Plan and attach a LIMIT node above it. + * Since we are entering a different planner context (subroot), + * recurse to create_plan not create_plan_recurse. + */ + plan = create_plan(subroot, mminfo->path); + + plan = (Plan *) make_limit(plan, + subparse->limitOffset, + subparse->limitCount); + + /* Must apply correct cost/width data to Limit node */ + plan->startup_cost = mminfo->path->startup_cost; + plan->total_cost = mminfo->pathcost; + plan->plan_rows = 1; + plan->plan_width = mminfo->path->pathtarget->width; + plan->parallel_aware = false; + + /* Convert the plan into an InitPlan in the outer query. */ + SS_make_initplan_from_plan(root, subroot, plan, mminfo->param); + } + + /* Generate the output plan --- basically just a Result */ + tlist = build_path_tlist(root, &best_path->path); + + plan = make_result(root, tlist, (Node *) best_path->quals, NULL); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + /* + * During setrefs.c, we'll need to replace references to the Agg nodes + * with InitPlan output params. (We can't just do that locally in the + * MinMaxAgg node, because path nodes above here may have Agg references + * as well.) Save the mmaggregates list to tell setrefs.c to do that. + * + * This doesn't work if we're in an inheritance subtree (see notes in + * create_modifytable_plan). Fortunately we can't be because there would + * never be aggregates in an UPDATE/DELETE; but let's Assert that. 
+ */ + Assert(!root->hasInheritedTarget); + Assert(root->minmax_aggs == NIL); + root->minmax_aggs = best_path->mmaggregates; + + return plan; +} + +/* + * create_windowagg_plan + * + * Create a WindowAgg plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static WindowAgg * +create_windowagg_plan(PlannerInfo *root, WindowAggPath *best_path) +{ + WindowAgg *plan; + WindowClause *wc = best_path->winclause; + Plan *subplan; + List *tlist; + int numsortkeys; + AttrNumber *sortColIdx; + Oid *sortOperators; + Oid *collations; + bool *nullsFirst; + int partNumCols; + AttrNumber *partColIdx; + Oid *partOperators; + int ordNumCols; + AttrNumber *ordColIdx; + Oid *ordOperators; + + /* + * WindowAgg can project, so no need to be terribly picky about child + * tlist, but we do need grouping columns to be available + */ + subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST); + + tlist = build_path_tlist(root, &best_path->path); + + /* + * We shouldn't need to actually sort, but it's convenient to use + * prepare_sort_from_pathkeys to identify the input's sort columns. 
+ */ + subplan = prepare_sort_from_pathkeys(root, + subplan, + best_path->winpathkeys, + NULL, + NULL, + false, + &numsortkeys, + &sortColIdx, + &sortOperators, + &collations, + &nullsFirst); + + /* Now deconstruct that into partition and ordering portions */ + get_column_info_for_window(root, + wc, + subplan->targetlist, + numsortkeys, + sortColIdx, + &partNumCols, + &partColIdx, + &partOperators, + &ordNumCols, + &ordColIdx, + &ordOperators); + + /* And finally we can make the WindowAgg node */ + plan = make_windowagg(tlist, + wc->winref, + partNumCols, + partColIdx, + partOperators, + ordNumCols, + ordColIdx, + ordOperators, + wc->frameOptions, + wc->startOffset, + wc->endOffset, + subplan); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * get_column_info_for_window + * Get the partitioning/ordering column numbers and equality operators + * for a WindowAgg node. + * + * This depends on the behavior of planner.c's make_pathkeys_for_window! + * + * We are given the target WindowClause and an array of the input column + * numbers associated with the resulting pathkeys. In the easy case, there + * are the same number of pathkey columns as partitioning + ordering columns + * and we just have to copy some data around. However, it's possible that + * some of the original partitioning + ordering columns were eliminated as + * redundant during the transformation to pathkeys. (This can happen even + * though the parser gets rid of obvious duplicates. A typical scenario is a + * window specification "PARTITION BY x ORDER BY y" coupled with a clause + * "WHERE x = y" that causes the two sort columns to be recognized as + * redundant.) In that unusual case, we have to work a lot harder to + * determine which keys are significant. + * + * The method used here is a bit brute-force: add the sort columns to a list + * one at a time and note when the resulting pathkey list gets longer. 
But + * it's a sufficiently uncommon case that a faster way doesn't seem worth + * the amount of code refactoring that'd be needed. + */ +static void +get_column_info_for_window(PlannerInfo *root, WindowClause *wc, List *tlist, + int numSortCols, AttrNumber *sortColIdx, + int *partNumCols, + AttrNumber **partColIdx, + Oid **partOperators, + int *ordNumCols, + AttrNumber **ordColIdx, + Oid **ordOperators) +{ + int numPart = list_length(wc->partitionClause); + int numOrder = list_length(wc->orderClause); + + if (numSortCols == numPart + numOrder) + { + /* easy case */ + *partNumCols = numPart; + *partColIdx = sortColIdx; + *partOperators = extract_grouping_ops(wc->partitionClause); + *ordNumCols = numOrder; + *ordColIdx = sortColIdx + numPart; + *ordOperators = extract_grouping_ops(wc->orderClause); + } + else + { + List *sortclauses; + List *pathkeys; + int scidx; + ListCell *lc; + + /* first, allocate what's certainly enough space for the arrays */ + *partNumCols = 0; + *partColIdx = (AttrNumber *) palloc(numPart * sizeof(AttrNumber)); + *partOperators = (Oid *) palloc(numPart * sizeof(Oid)); + *ordNumCols = 0; + *ordColIdx = (AttrNumber *) palloc(numOrder * sizeof(AttrNumber)); + *ordOperators = (Oid *) palloc(numOrder * sizeof(Oid)); + sortclauses = NIL; + pathkeys = NIL; + scidx = 0; + foreach(lc, wc->partitionClause) + { + SortGroupClause *sgc = (SortGroupClause *) lfirst(lc); + List *new_pathkeys; + + sortclauses = lappend(sortclauses, sgc); + new_pathkeys = make_pathkeys_for_sortclauses(root, + sortclauses, + tlist); + if (list_length(new_pathkeys) > list_length(pathkeys)) + { + /* this sort clause is actually significant */ + (*partColIdx)[*partNumCols] = sortColIdx[scidx++]; + (*partOperators)[*partNumCols] = sgc->eqop; + (*partNumCols)++; + pathkeys = new_pathkeys; + } + } + foreach(lc, wc->orderClause) + { + SortGroupClause *sgc = (SortGroupClause *) lfirst(lc); + List *new_pathkeys; + + sortclauses = lappend(sortclauses, sgc); + new_pathkeys = 
make_pathkeys_for_sortclauses(root, + sortclauses, + tlist); + if (list_length(new_pathkeys) > list_length(pathkeys)) + { + /* this sort clause is actually significant */ + (*ordColIdx)[*ordNumCols] = sortColIdx[scidx++]; + (*ordOperators)[*ordNumCols] = sgc->eqop; + (*ordNumCols)++; + pathkeys = new_pathkeys; + } + } + /* complain if we didn't eat exactly the right number of sort cols */ + if (scidx != numSortCols) + elog(ERROR, "failed to deconstruct sort operators into partitioning/ordering operators"); + } +} + +/* + * create_setop_plan + * + * Create a SetOp plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static SetOp * +create_setop_plan(PlannerInfo *root, SetOpPath *best_path, int flags) +{ + SetOp *plan; + Plan *subplan; + long numGroups; + + /* + * SetOp doesn't project, so tlist requirements pass through; moreover we + * need grouping columns to be labeled. + */ + subplan = create_plan_recurse(root, best_path->subpath, + flags | CP_LABEL_TLIST); + + /* Convert numGroups to long int --- but 'ware overflow! */ + numGroups = (long) Min(best_path->numGroups, (double) LONG_MAX); + + plan = make_setop(best_path->cmd, + best_path->strategy, + subplan, + best_path->distinctList, + best_path->flagColIdx, + best_path->firstFlag, + numGroups); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * create_recursiveunion_plan + * + * Create a RecursiveUnion plan for 'best_path' and (recursively) plans + * for its subpaths. 
+ */ +static RecursiveUnion * +create_recursiveunion_plan(PlannerInfo *root, RecursiveUnionPath *best_path) +{ + RecursiveUnion *plan; + Plan *leftplan; + Plan *rightplan; + List *tlist; + long numGroups; + + /* Need both children to produce same tlist, so force it */ + leftplan = create_plan_recurse(root, best_path->leftpath, CP_EXACT_TLIST); + rightplan = create_plan_recurse(root, best_path->rightpath, CP_EXACT_TLIST); + + tlist = build_path_tlist(root, &best_path->path); + + /* Convert numGroups to long int --- but 'ware overflow! */ + numGroups = (long) Min(best_path->numGroups, (double) LONG_MAX); + + plan = make_recursive_union(tlist, + leftplan, + rightplan, + best_path->wtParam, + best_path->distinctList, + numGroups); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * create_lockrows_plan + * + * Create a LockRows plan for 'best_path' and (recursively) plans + * for its subpaths. + */ +static LockRows * +create_lockrows_plan(PlannerInfo *root, LockRowsPath *best_path, + int flags) +{ + LockRows *plan; + Plan *subplan; + + /* LockRows doesn't project, so tlist requirements pass through */ + subplan = create_plan_recurse(root, best_path->subpath, flags); + + plan = make_lockrows(subplan, best_path->rowMarks, best_path->epqParam); + + copy_generic_path_info(&plan->plan, (Path *) best_path); + + return plan; +} + +/* + * create_modifytable_plan + * Create a ModifyTable plan for 'best_path'. + * + * Returns a Plan node. 
+ */ +static ModifyTable * +create_modifytable_plan(PlannerInfo *root, ModifyTablePath *best_path) +{ + ModifyTable *plan; + List *subplans = NIL; + ListCell *subpaths, + *subroots; + + /* Build the plan for each input path */ + forboth(subpaths, best_path->subpaths, + subroots, best_path->subroots) + { + Path *subpath = (Path *) lfirst(subpaths); + PlannerInfo *subroot = (PlannerInfo *) lfirst(subroots); + Plan *subplan; + + /* + * In an inherited UPDATE/DELETE, reference the per-child modified + * subroot while creating Plans from Paths for the child rel. This is + * a kluge, but otherwise it's too hard to ensure that Plan creation + * functions (particularly in FDWs) don't depend on the contents of + * "root" matching what they saw at Path creation time. The main + * downside is that creation functions for Plans that might appear + * below a ModifyTable cannot expect to modify the contents of "root" + * and have it "stick" for subsequent processing such as setrefs.c. + * That's not great, but it seems better than the alternative. + */ + subplan = create_plan_recurse(subroot, subpath, CP_EXACT_TLIST); + + /* Transfer resname/resjunk labeling, too, to keep executor happy */ + apply_tlist_labeling(subplan->targetlist, subroot->processed_tlist); + + subplans = lappend(subplans, subplan); + } + + plan = make_modifytable(root, + best_path->operation, + best_path->canSetTag, + best_path->nominalRelation, + best_path->resultRelations, + subplans, + best_path->withCheckOptionLists, + best_path->returningLists, + best_path->rowMarks, + best_path->onconflict, + best_path->epqParam); + + copy_generic_path_info(&plan->plan, &best_path->path); + + return plan; +} + +/* + * create_limit_plan + * + * Create a Limit plan for 'best_path' and (recursively) plans + * for its subpaths. 
+ */ +static Limit * +create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags) { - Gather *gather_plan; + Limit *plan; Plan *subplan; - subplan = create_plan_recurse(root, best_path->subpath); - - disuse_physical_tlist(root, subplan, best_path->subpath); - - gather_plan = make_gather(subplan->targetlist, - NIL, - best_path->path.parallel_degree, - best_path->single_copy, - subplan); + /* Limit doesn't project, so tlist requirements pass through */ + subplan = create_plan_recurse(root, best_path->subpath, flags); - copy_generic_path_info(&gather_plan->plan, &best_path->path); + plan = make_limit(subplan, + best_path->limitOffset, + best_path->limitCount); - /* use parallel mode for parallel plans. */ - root->glob->parallelModeNeeded = true; + copy_generic_path_info(&plan->plan, (Path *) best_path); - return gather_plan; + return plan; } @@ -1814,15 +2845,24 @@ create_tidscan_plan(PlannerInfo *root, TidPath *best_path, * with restriction clauses 'scan_clauses' and targetlist 'tlist'. */ static SubqueryScan * -create_subqueryscan_plan(PlannerInfo *root, Path *best_path, +create_subqueryscan_plan(PlannerInfo *root, SubqueryScanPath *best_path, List *tlist, List *scan_clauses) { SubqueryScan *scan_plan; - Index scan_relid = best_path->parent->relid; + RelOptInfo *rel = best_path->path.parent; + Index scan_relid = rel->relid; + Plan *subplan; /* it should be a subquery base rel... */ Assert(scan_relid > 0); - Assert(best_path->parent->rtekind == RTE_SUBQUERY); + Assert(rel->rtekind == RTE_SUBQUERY); + + /* + * Recursively create Plan from Path for subquery. Since we are entering + * a different planner context (subroot), recurse to create_plan not + * create_plan_recurse. 
+ */ + subplan = create_plan(rel->subroot, best_path->subpath); /* Sort clauses into best execution order */ scan_clauses = order_qual_clauses(root, scan_clauses); @@ -1831,20 +2871,20 @@ create_subqueryscan_plan(PlannerInfo *root, Path *best_path, scan_clauses = extract_actual_clauses(scan_clauses, false); /* Replace any outer-relation variables with nestloop params */ - if (best_path->param_info) + if (best_path->path.param_info) { scan_clauses = (List *) replace_nestloop_params(root, (Node *) scan_clauses); process_subquery_nestloop_params(root, - best_path->parent->subplan_params); + rel->subplan_params); } scan_plan = make_subqueryscan(tlist, scan_clauses, scan_relid, - best_path->parent->subplan); + subplan); - copy_generic_path_info(&scan_plan->scan.plan, best_path); + copy_generic_path_info(&scan_plan->scan.plan, &best_path->path); return scan_plan; } @@ -2108,7 +3148,8 @@ create_foreignscan_plan(PlannerInfo *root, ForeignPath *best_path, /* transform the child path if any */ if (best_path->fdw_outerpath) - outer_plan = create_plan_recurse(root, best_path->fdw_outerpath); + outer_plan = create_plan_recurse(root, best_path->fdw_outerpath, + CP_EXACT_TLIST); /* * If we're scanning a base relation, fetch its OID. (Irrelevant if @@ -2243,7 +3284,8 @@ create_customscan_plan(PlannerInfo *root, CustomPath *best_path, /* Recursively transform child paths. 
*/ foreach(lc, best_path->custom_paths) { - Plan *plan = create_plan_recurse(root, (Path *) lfirst(lc)); + Plan *plan = create_plan_recurse(root, (Path *) lfirst(lc), + CP_EXACT_TLIST); custom_plans = lappend(custom_plans, plan); } @@ -2303,21 +3345,35 @@ create_customscan_plan(PlannerInfo *root, CustomPath *best_path, static NestLoop * create_nestloop_plan(PlannerInfo *root, - NestPath *best_path, - Plan *outer_plan, - Plan *inner_plan) + NestPath *best_path) { NestLoop *join_plan; + Plan *outer_plan; + Plan *inner_plan; List *tlist = build_path_tlist(root, &best_path->path); List *joinrestrictclauses = best_path->joinrestrictinfo; List *joinclauses; List *otherclauses; Relids outerrelids; List *nestParams; + Relids saveOuterRels = root->curOuterRels; ListCell *cell; ListCell *prev; ListCell *next; + /* NestLoop can project, so no need to be picky about child tlists */ + outer_plan = create_plan_recurse(root, best_path->outerjoinpath, 0); + + /* For a nestloop, include outer relids in curOuterRels for inner side */ + root->curOuterRels = bms_union(root->curOuterRels, + best_path->outerjoinpath->parent->relids); + + inner_plan = create_plan_recurse(root, best_path->innerjoinpath, 0); + + /* Restore curOuterRels */ + bms_free(root->curOuterRels); + root->curOuterRels = saveOuterRels; + /* Sort join qual clauses into best execution order */ joinrestrictclauses = order_qual_clauses(root, joinrestrictclauses); @@ -2394,10 +3450,11 @@ create_nestloop_plan(PlannerInfo *root, static MergeJoin * create_mergejoin_plan(PlannerInfo *root, - MergePath *best_path, - Plan *outer_plan, - Plan *inner_plan) + MergePath *best_path) { + MergeJoin *join_plan; + Plan *outer_plan; + Plan *inner_plan; List *tlist = build_path_tlist(root, &best_path->jpath.path); List *joinclauses; List *otherclauses; @@ -2409,12 +3466,23 @@ create_mergejoin_plan(PlannerInfo *root, Oid *mergecollations; int *mergestrategies; bool *mergenullsfirst; - MergeJoin *join_plan; int i; ListCell *lc; ListCell 
*lop; ListCell *lip; + /* + * MergeJoin can project, so we don't have to demand exact tlists from the + * inputs. However, if we're intending to sort an input's result, it's + * best to request a small tlist so we aren't sorting more data than + * necessary. + */ + outer_plan = create_plan_recurse(root, best_path->jpath.outerjoinpath, + (best_path->outersortkeys != NIL) ? CP_SMALL_TLIST : 0); + + inner_plan = create_plan_recurse(root, best_path->jpath.innerjoinpath, + (best_path->innersortkeys != NIL) ? CP_SMALL_TLIST : 0); + /* Sort join qual clauses into best execution order */ /* NB: do NOT reorder the mergeclauses */ joinclauses = order_qual_clauses(root, best_path->jpath.joinrestrictinfo); @@ -2462,11 +3530,9 @@ create_mergejoin_plan(PlannerInfo *root, /* * Create explicit sort nodes for the outer and inner paths if necessary. - * Make sure there are no excess columns in the inputs if sorting. */ if (best_path->outersortkeys) { - disuse_physical_tlist(root, outer_plan, best_path->jpath.outerjoinpath); outer_plan = (Plan *) make_sort_from_pathkeys(root, outer_plan, @@ -2479,7 +3545,6 @@ create_mergejoin_plan(PlannerInfo *root, if (best_path->innersortkeys) { - disuse_physical_tlist(root, inner_plan, best_path->jpath.innerjoinpath); inner_plan = (Plan *) make_sort_from_pathkeys(root, inner_plan, @@ -2689,10 +3754,12 @@ create_mergejoin_plan(PlannerInfo *root, static HashJoin * create_hashjoin_plan(PlannerInfo *root, - HashPath *best_path, - Plan *outer_plan, - Plan *inner_plan) + HashPath *best_path) { + HashJoin *join_plan; + Hash *hash_plan; + Plan *outer_plan; + Plan *inner_plan; List *tlist = build_path_tlist(root, &best_path->jpath.path); List *joinclauses; List *otherclauses; @@ -2702,8 +3769,19 @@ create_hashjoin_plan(PlannerInfo *root, bool skewInherit = false; Oid skewColType = InvalidOid; int32 skewColTypmod = -1; - HashJoin *join_plan; - Hash *hash_plan; + + /* + * HashJoin can project, so we don't have to demand exact tlists from the + * inputs. 
However, it's best to request a small tlist from the inner + * side, so that we aren't storing more data than necessary. Likewise, if + * we anticipate batching, request a small tlist from the outer side so + * that we don't put extra data in the outer batch files. + */ + outer_plan = create_plan_recurse(root, best_path->jpath.outerjoinpath, + (best_path->num_batches > 1) ? CP_SMALL_TLIST : 0); + + inner_plan = create_plan_recurse(root, best_path->jpath.innerjoinpath, + CP_SMALL_TLIST); /* Sort join qual clauses into best execution order */ joinclauses = order_qual_clauses(root, best_path->jpath.joinrestrictinfo); @@ -2749,13 +3827,6 @@ create_hashjoin_plan(PlannerInfo *root, hashclauses = get_switched_clauses(best_path->path_hashclauses, best_path->jpath.outerjoinpath->parent->relids); - /* We don't want any excess columns in the hashed tuples */ - disuse_physical_tlist(root, inner_plan, best_path->jpath.innerjoinpath); - - /* If we expect batching, suppress excess columns in outer tuples too */ - if (best_path->num_batches > 1) - disuse_physical_tlist(root, outer_plan, best_path->jpath.outerjoinpath); - /* * If there is a single join clause and we can identify the outer variable * as a simple column reference, supply its identity for possible use in @@ -3661,7 +4732,7 @@ make_tidscan(List *qptlist, return node; } -SubqueryScan * +static SubqueryScan * make_subqueryscan(List *qptlist, List *qpqual, Index scanrelid, @@ -3805,7 +4876,7 @@ make_foreignscan(List *qptlist, return node; } -Append * +static Append * make_append(List *appendplans, List *tlist) { Append *node = makeNode(Append); @@ -3852,7 +4923,7 @@ make_append(List *appendplans, List *tlist) return node; } -RecursiveUnion * +static RecursiveUnion * make_recursive_union(List *tlist, Plan *lefttree, Plan *righttree, @@ -3864,8 +4935,6 @@ make_recursive_union(List *tlist, Plan *plan = &node->plan; int numCols = list_length(distinctList); - cost_recursive_union(plan, lefttree, righttree); - plan->targetlist 
= tlist; plan->qual = NIL; plan->lefttree = lefttree; @@ -4408,7 +5477,7 @@ find_ec_member_for_tle(EquivalenceClass *ec, * 'limit_tuples' is the bound on the number of output tuples; * -1 if no bound */ -Sort * +static Sort * make_sort_from_pathkeys(PlannerInfo *root, Plan *lefttree, List *pathkeys, double limit_tuples) { @@ -4442,7 +5511,7 @@ make_sort_from_pathkeys(PlannerInfo *root, Plan *lefttree, List *pathkeys, * 'sortcls' is a list of SortGroupClauses * 'lefttree' is the node which yields input tuples */ -Sort * +static Sort * make_sort_from_sortclauses(PlannerInfo *root, List *sortcls, Plan *lefttree) { List *sub_tlist = lefttree->targetlist; @@ -4491,7 +5560,7 @@ make_sort_from_sortclauses(PlannerInfo *root, List *sortcls, Plan *lefttree) * appropriate to the grouping node. So, only the sort ordering info * is used from the SortGroupClause entries. */ -Sort * +static Sort * make_sort_from_groupcols(PlannerInfo *root, List *groupcls, AttrNumber *grpColIdx, @@ -4552,7 +5621,7 @@ make_material(Plan *lefttree) * materialize_finished_plan: stick a Material node atop a completed plan * * There are a couple of places where we want to attach a Material node - * after completion of subquery_planner(), without any MaterialPath path. + * after completion of create_plan(), without any MaterialPath path. 
*/ Plan * materialize_finished_plan(Plan *subplan) @@ -4572,81 +5641,46 @@ materialize_finished_plan(Plan *subplan) matplan->total_cost = matpath.total_cost; matplan->plan_rows = subplan->plan_rows; matplan->plan_width = subplan->plan_width; + matplan->parallel_aware = false; return matplan; } -Agg * -make_agg(PlannerInfo *root, List *tlist, List *qual, - AggStrategy aggstrategy, const AggClauseCosts *aggcosts, +static Agg * +make_agg(List *tlist, List *qual, + AggStrategy aggstrategy, + bool combineStates, bool finalizeAggs, int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators, - List *groupingSets, long numGroups, bool combineStates, - bool finalizeAggs, Plan *lefttree) + List *groupingSets, List *chain, + double dNumGroups, Plan *lefttree) { Agg *node = makeNode(Agg); Plan *plan = &node->plan; - Path agg_path; /* dummy for result of cost_agg */ - QualCost qual_cost; + long numGroups; + + /* Reduce to long, but 'ware overflow! */ + numGroups = (long) Min(dNumGroups, (double) LONG_MAX); node->aggstrategy = aggstrategy; - node->numCols = numGroupCols; node->combineStates = combineStates; node->finalizeAggs = finalizeAggs; + node->numCols = numGroupCols; node->grpColIdx = grpColIdx; node->grpOperators = grpOperators; node->numGroups = numGroups; - - copy_plan_costsize(plan, lefttree); /* only care about copying size */ - cost_agg(&agg_path, root, - aggstrategy, aggcosts, - numGroupCols, numGroups, - lefttree->startup_cost, - lefttree->total_cost, - lefttree->plan_rows); - plan->startup_cost = agg_path.startup_cost; - plan->total_cost = agg_path.total_cost; - - /* - * We will produce a single output tuple if not grouping, and a tuple per - * group otherwise. - */ - if (aggstrategy == AGG_PLAIN) - plan->plan_rows = groupingSets ? list_length(groupingSets) : 1; - else - plan->plan_rows = numGroups; - node->groupingSets = groupingSets; - - /* - * We also need to account for the cost of evaluation of the qual (ie, the - * HAVING clause) and the tlist. 
Note that cost_qual_eval doesn't charge - * anything for Aggref nodes; this is okay since they are really - * comparable to Vars. - * - * See notes in add_tlist_costs_to_plan about why only make_agg, - * make_windowagg and make_group worry about tlist eval cost. - */ - if (qual) - { - cost_qual_eval(&qual_cost, qual, root); - plan->startup_cost += qual_cost.startup; - plan->total_cost += qual_cost.startup; - plan->total_cost += qual_cost.per_tuple * plan->plan_rows; - } - add_tlist_costs_to_plan(root, plan, tlist); + node->chain = chain; plan->qual = qual; plan->targetlist = tlist; - plan->lefttree = lefttree; plan->righttree = NULL; return node; } -WindowAgg * -make_windowagg(PlannerInfo *root, List *tlist, - List *windowFuncs, Index winref, +static WindowAgg * +make_windowagg(List *tlist, Index winref, int partNumCols, AttrNumber *partColIdx, Oid *partOperators, int ordNumCols, AttrNumber *ordColIdx, Oid *ordOperators, int frameOptions, Node *startOffset, Node *endOffset, @@ -4654,7 +5688,6 @@ make_windowagg(PlannerInfo *root, List *tlist, { WindowAgg *node = makeNode(WindowAgg); Plan *plan = &node->plan; - Path windowagg_path; /* dummy for result of cost_windowagg */ node->winref = winref; node->partNumCols = partNumCols; @@ -4667,23 +5700,6 @@ make_windowagg(PlannerInfo *root, List *tlist, node->startOffset = startOffset; node->endOffset = endOffset; - copy_plan_costsize(plan, lefttree); /* only care about copying size */ - cost_windowagg(&windowagg_path, root, - windowFuncs, partNumCols, ordNumCols, - lefttree->startup_cost, - lefttree->total_cost, - lefttree->plan_rows); - plan->startup_cost = windowagg_path.startup_cost; - plan->total_cost = windowagg_path.total_cost; - - /* - * We also need to account for the cost of evaluation of the tlist. - * - * See notes in add_tlist_costs_to_plan about why only make_agg, - * make_windowagg and make_group worry about tlist eval cost. 
- */ - add_tlist_costs_to_plan(root, plan, tlist); - plan->targetlist = tlist; plan->lefttree = lefttree; plan->righttree = NULL; @@ -4693,58 +5709,23 @@ make_windowagg(PlannerInfo *root, List *tlist, return node; } -Group * -make_group(PlannerInfo *root, - List *tlist, +static Group * +make_group(List *tlist, List *qual, int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators, - double numGroups, Plan *lefttree) { Group *node = makeNode(Group); Plan *plan = &node->plan; - Path group_path; /* dummy for result of cost_group */ - QualCost qual_cost; + + /* caller must fill cost/size fields */ node->numCols = numGroupCols; node->grpColIdx = grpColIdx; node->grpOperators = grpOperators; - copy_plan_costsize(plan, lefttree); /* only care about copying size */ - cost_group(&group_path, root, - numGroupCols, numGroups, - lefttree->startup_cost, - lefttree->total_cost, - lefttree->plan_rows); - plan->startup_cost = group_path.startup_cost; - plan->total_cost = group_path.total_cost; - - /* One output tuple per estimated result group */ - plan->plan_rows = numGroups; - - /* - * We also need to account for the cost of evaluation of the qual (ie, the - * HAVING clause) and the tlist. - * - * XXX this double-counts the cost of evaluation of any expressions used - * for grouping, since in reality those will have been evaluated at a - * lower plan level and will only be copied by the Group node. Worth - * fixing? - * - * See notes in add_tlist_costs_to_plan about why only make_agg, - * make_windowagg and make_group worry about tlist eval cost. - */ - if (qual) - { - cost_qual_eval(&qual_cost, qual, root); - plan->startup_cost += qual_cost.startup; - plan->total_cost += qual_cost.startup; - plan->total_cost += qual_cost.per_tuple * plan->plan_rows; - } - add_tlist_costs_to_plan(root, plan, tlist); - plan->qual = qual; plan->targetlist = tlist; plan->lefttree = lefttree; @@ -4758,8 +5739,8 @@ make_group(PlannerInfo *root, * that should be considered by the Unique filter. 
The input path must * already be sorted accordingly. */ -Unique * -make_unique(Plan *lefttree, List *distinctList) +static Unique * +make_unique_from_sortclauses(Plan *lefttree, List *distinctList) { Unique *node = makeNode(Unique); Plan *plan = &node->plan; @@ -4769,21 +5750,6 @@ make_unique(Plan *lefttree, List *distinctList) Oid *uniqOperators; ListCell *slitem; - copy_plan_costsize(plan, lefttree); - - /* - * Charge one cpu_operator_cost per comparison per input tuple. We assume - * all columns get compared at most of the tuples. (XXX probably this is - * an overestimate.) - */ - plan->total_cost += cpu_operator_cost * plan->plan_rows * numCols; - - /* - * plan->plan_rows is left as a copy of the input subplan's plan_rows; ie, - * we assume the filter removes nothing. The caller must alter this if he - * has a better idea. - */ - plan->targetlist = lefttree->targetlist; plan->qual = NIL; plan->lefttree = lefttree; @@ -4815,6 +5781,111 @@ make_unique(Plan *lefttree, List *distinctList) return node; } +/* + * as above, but use pathkeys to identify the sort columns and semantics + */ +static Unique * +make_unique_from_pathkeys(Plan *lefttree, List *pathkeys, int numCols) +{ + Unique *node = makeNode(Unique); + Plan *plan = &node->plan; + int keyno = 0; + AttrNumber *uniqColIdx; + Oid *uniqOperators; + ListCell *lc; + + plan->targetlist = lefttree->targetlist; + plan->qual = NIL; + plan->lefttree = lefttree; + plan->righttree = NULL; + + /* + * Convert pathkeys list into arrays of attr indexes and equality + * operators, as wanted by executor. This has a lot in common with + * prepare_sort_from_pathkeys ... maybe unify sometime? 
+ */ + Assert(numCols >= 0 && numCols <= list_length(pathkeys)); + uniqColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols); + uniqOperators = (Oid *) palloc(sizeof(Oid) * numCols); + + foreach(lc, pathkeys) + { + PathKey *pathkey = (PathKey *) lfirst(lc); + EquivalenceClass *ec = pathkey->pk_eclass; + EquivalenceMember *em; + TargetEntry *tle = NULL; + Oid pk_datatype = InvalidOid; + Oid eqop; + ListCell *j; + + /* Ignore pathkeys beyond the specified number of columns */ + if (keyno >= numCols) + break; + + if (ec->ec_has_volatile) + { + /* + * If the pathkey's EquivalenceClass is volatile, then it must + * have come from an ORDER BY clause, and we have to match it to + * that same targetlist entry. + */ + if (ec->ec_sortref == 0) /* can't happen */ + elog(ERROR, "volatile EquivalenceClass has no sortref"); + tle = get_sortgroupref_tle(ec->ec_sortref, plan->targetlist); + Assert(tle); + Assert(list_length(ec->ec_members) == 1); + pk_datatype = ((EquivalenceMember *) linitial(ec->ec_members))->em_datatype; + } + else + { + /* + * Otherwise, we can use any non-constant expression listed in the + * pathkey's EquivalenceClass. For now, we take the first tlist + * item found in the EC. + */ + foreach(j, plan->targetlist) + { + tle = (TargetEntry *) lfirst(j); + em = find_ec_member_for_tle(ec, tle, NULL); + if (em) + { + /* found expr already in tlist */ + pk_datatype = em->em_datatype; + break; + } + tle = NULL; + } + } + + if (!tle) + elog(ERROR, "could not find pathkey item to sort"); + + /* + * Look up the correct equality operator from the PathKey's slightly + * abstracted representation. 
+ */ + eqop = get_opfamily_member(pathkey->pk_opfamily, + pk_datatype, + pk_datatype, + BTEqualStrategyNumber); + if (!OidIsValid(eqop)) /* should not happen */ + elog(ERROR, "could not find member %d(%u,%u) of opfamily %u", + BTEqualStrategyNumber, pk_datatype, pk_datatype, + pathkey->pk_opfamily); + + uniqColIdx[keyno] = tle->resno; + uniqOperators[keyno] = eqop; + + keyno++; + } + + node->numCols = numCols; + node->uniqColIdx = uniqColIdx; + node->uniqOperators = uniqOperators; + + return node; +} + static Gather * make_gather(List *qptlist, List *qpqual, @@ -4842,10 +5913,10 @@ make_gather(List *qptlist, * items that should be considered by the SetOp filter. The input path must * already be sorted accordingly. */ -SetOp * +static SetOp * make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, List *distinctList, AttrNumber flagColIdx, int firstFlag, - long numGroups, double outputRows) + long numGroups) { SetOp *node = makeNode(SetOp); Plan *plan = &node->plan; @@ -4855,15 +5926,6 @@ make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, Oid *dupOperators; ListCell *slitem; - copy_plan_costsize(plan, lefttree); - plan->plan_rows = outputRows; - - /* - * Charge one cpu_operator_cost per comparison per input tuple. We assume - * all columns get compared at most of the tuples. - */ - plan->total_cost += cpu_operator_cost * lefttree->plan_rows * numCols; - plan->targetlist = lefttree->targetlist; plan->qual = NIL; plan->lefttree = lefttree; @@ -4904,17 +5966,12 @@ make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, * make_lockrows * Build a LockRows plan node */ -LockRows * +static LockRows * make_lockrows(Plan *lefttree, List *rowMarks, int epqParam) { LockRows *node = makeNode(LockRows); Plan *plan = &node->plan; - copy_plan_costsize(plan, lefttree); - - /* charge cpu_tuple_cost to reflect locking costs (underestimate?) 
*/ - plan->total_cost += cpu_tuple_cost * plan->plan_rows; - plan->targetlist = lefttree->targetlist; plan->qual = NIL; plan->lefttree = lefttree; @@ -4927,68 +5984,15 @@ make_lockrows(Plan *lefttree, List *rowMarks, int epqParam) } /* - * Note: offset_est and count_est are passed in to save having to repeat - * work already done to estimate the values of the limitOffset and limitCount - * expressions. Their values are as returned by preprocess_limit (0 means - * "not relevant", -1 means "couldn't estimate"). Keep the code below in sync - * with that function! + * make_limit + * Build a Limit plan node */ -Limit * -make_limit(Plan *lefttree, Node *limitOffset, Node *limitCount, - int64 offset_est, int64 count_est) +static Limit * +make_limit(Plan *lefttree, Node *limitOffset, Node *limitCount) { Limit *node = makeNode(Limit); Plan *plan = &node->plan; - copy_plan_costsize(plan, lefttree); - - /* - * Adjust the output rows count and costs according to the offset/limit. - * This is only a cosmetic issue if we are at top level, but if we are - * building a subquery then it's important to report correct info to the - * outer planner. - * - * When the offset or count couldn't be estimated, use 10% of the - * estimated number of rows emitted from the subplan. 
- */ - if (offset_est != 0) - { - double offset_rows; - - if (offset_est > 0) - offset_rows = (double) offset_est; - else - offset_rows = clamp_row_est(lefttree->plan_rows * 0.10); - if (offset_rows > plan->plan_rows) - offset_rows = plan->plan_rows; - if (plan->plan_rows > 0) - plan->startup_cost += - (plan->total_cost - plan->startup_cost) - * offset_rows / plan->plan_rows; - plan->plan_rows -= offset_rows; - if (plan->plan_rows < 1) - plan->plan_rows = 1; - } - - if (count_est != 0) - { - double count_rows; - - if (count_est > 0) - count_rows = (double) count_est; - else - count_rows = clamp_row_est(lefttree->plan_rows * 0.10); - if (count_rows > plan->plan_rows) - count_rows = plan->plan_rows; - if (plan->plan_rows > 0) - plan->total_cost = plan->startup_cost + - (plan->total_cost - plan->startup_cost) - * count_rows / plan->plan_rows; - plan->plan_rows = count_rows; - if (plan->plan_rows < 1) - plan->plan_rows = 1; - } - plan->targetlist = lefttree->targetlist; plan->qual = NIL; plan->lefttree = lefttree; @@ -5008,8 +6012,9 @@ make_limit(Plan *lefttree, Node *limitOffset, Node *limitCount, * were already factored into the subplan's startup cost, and just copy the * subplan cost. If there's no subplan, we should include the qual eval * cost. In either case, tlist eval cost is not to be included here. + * XXX really we don't want to be doing cost estimation here. */ -Result * +static Result * make_result(PlannerInfo *root, List *tlist, Node *resconstantqual, @@ -5049,14 +6054,8 @@ make_result(PlannerInfo *root, /* * make_modifytable * Build a ModifyTable plan node - * - * Currently, we don't charge anything extra for the actual table modification - * work, nor for the WITH CHECK OPTIONS or RETURNING expressions if any. It - * would only be window dressing, since these are always top-level nodes and - * there is no way for the costs to change any higher-level planning choices. - * But we might want to make it look better sometime. 
*/ -ModifyTable * +static ModifyTable * make_modifytable(PlannerInfo *root, CmdType operation, bool canSetTag, Index nominalRelation, @@ -5065,10 +6064,7 @@ make_modifytable(PlannerInfo *root, List *rowMarks, OnConflictExpr *onconflict, int epqParam) { ModifyTable *node = makeNode(ModifyTable); - Plan *plan = &node->plan; - double total_size; List *fdw_private_list; - ListCell *subnode; ListCell *lc; int i; @@ -5078,28 +6074,6 @@ make_modifytable(PlannerInfo *root, Assert(returningLists == NIL || list_length(resultRelations) == list_length(returningLists)); - /* - * Compute cost as sum of subplan costs. - */ - plan->startup_cost = 0; - plan->total_cost = 0; - plan->plan_rows = 0; - total_size = 0; - foreach(subnode, subplans) - { - Plan *subplan = (Plan *) lfirst(subnode); - - if (subnode == list_head(subplans)) /* first node? */ - plan->startup_cost = subplan->startup_cost; - plan->total_cost += subplan->total_cost; - plan->plan_rows += subplan->plan_rows; - total_size += subplan->plan_width * subplan->plan_rows; - } - if (plan->plan_rows > 0) - plan->plan_width = rint(total_size / plan->plan_rows); - else - plan->plan_width = 0; - node->plan.lefttree = NULL; node->plan.righttree = NULL; node->plan.qual = NIL; @@ -5193,6 +6167,42 @@ make_modifytable(PlannerInfo *root, return node; } +/* + * is_projection_capable_path + * Check whether a given Path node is able to do projection. + */ +bool +is_projection_capable_path(Path *path) +{ + /* Most plan types can project, so just list the ones that can't */ + switch (path->pathtype) + { + case T_Hash: + case T_Material: + case T_Sort: + case T_Unique: + case T_SetOp: + case T_LockRows: + case T_Limit: + case T_ModifyTable: + case T_MergeAppend: + case T_RecursiveUnion: + return false; + case T_Append: + + /* + * Append can't project, but if it's being used to represent a + * dummy path, claim that it can project. 
This prevents us from + * converting a rel from dummy to non-dummy status by applying a + * projection to its dummy path. + */ + return IS_DUMMY_PATH(path); + default: + break; + } + return true; +} + /* * is_projection_capable_plan * Check whether a given Plan node is able to do projection. diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c index 373e6ccf3d..9d6c181e36 100644 --- a/src/backend/optimizer/plan/planagg.c +++ b/src/backend/optimizer/plan/planagg.c @@ -35,13 +35,14 @@ #include "nodes/nodeFuncs.h" #include "optimizer/clauses.h" #include "optimizer/cost.h" +#include "optimizer/pathnode.h" #include "optimizer/paths.h" #include "optimizer/planmain.h" -#include "optimizer/planner.h" #include "optimizer/subselect.h" #include "optimizer/tlist.h" #include "parser/parsetree.h" #include "parser/parse_clause.h" +#include "rewrite/rewriteManip.h" #include "utils/lsyscache.h" #include "utils/syscache.h" @@ -50,8 +51,6 @@ static bool find_minmax_aggs_walker(Node *node, List **context); static bool build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, Oid eqop, Oid sortop, bool nulls_first); static void minmax_qp_callback(PlannerInfo *root, void *extra); -static void make_agg_subplan(PlannerInfo *root, MinMaxAggInfo *mminfo); -static Node *replace_aggs_with_params_mutator(Node *node, PlannerInfo *root); static Oid fetch_agg_sort_op(Oid aggfnoid); @@ -60,8 +59,14 @@ static Oid fetch_agg_sort_op(Oid aggfnoid); * * Check to see whether the query contains MIN/MAX aggregate functions that * might be optimizable via indexscans. If it does, and all the aggregates - * are potentially optimizable, then set up root->minmax_aggs with a list of - * these aggregates. + * are potentially optimizable, then create a MinMaxAggPath and add it to + * the (UPPERREL_GROUP_AGG, NULL) upperrel. 
+ * + * This should be called by grouping_planner() just before it's ready to call + * query_planner(), because we generate indexscan paths by cloning the + * planner's state and invoking query_planner() on a modified version of + * the query parsetree. Thus, all preprocessing needed before query_planner() + * must already be done. * * Note: we are passed the preprocessed targetlist separately, because it's * not necessarily equal to root->parse->targetList. @@ -74,6 +79,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist) RangeTblRef *rtr; RangeTblEntry *rte; List *aggs_list; + RelOptInfo *grouped_rel; ListCell *lc; /* minmax_aggs list should be empty at this point */ @@ -91,12 +97,10 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist) * * We don't handle GROUP BY or windowing, because our current * implementations of grouping require looking at all the rows anyway, and - * so there's not much point in optimizing MIN/MAX. (Note: relaxing this - * would likely require some restructuring in grouping_planner(), since it - * performs assorted processing related to these features between calling - * preprocess_minmax_aggregates and optimize_minmax_aggregates.) + * so there's not much point in optimizing MIN/MAX. */ - if (parse->groupClause || list_length(parse->groupingSets) > 1 || parse->hasWindowFuncs) + if (parse->groupClause || list_length(parse->groupingSets) > 1 || + parse->hasWindowFuncs) return; /* @@ -138,11 +142,9 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist) /* * OK, there is at least the possibility of performing the optimization. - * Build an access path for each aggregate. (We must do this now because - * we need to call query_planner with a pristine copy of the current query - * tree; it'll be too late when optimize_minmax_aggregates gets called.) - * If any of the aggregates prove to be non-indexable, give up; there is - * no point in optimizing just some of them. + * Build an access path for each aggregate. 
If any of the aggregates + * prove to be non-indexable, give up; there is no point in optimizing + * just some of them. */ foreach(lc, aggs_list) { @@ -177,111 +179,40 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist) } /* - * We're done until path generation is complete. Save info for later. - * (Setting root->minmax_aggs non-NIL signals we succeeded in making index - * access paths for all the aggregates.) - */ - root->minmax_aggs = aggs_list; -} - -/* - * optimize_minmax_aggregates - check for optimizing MIN/MAX via indexes - * - * Check to see whether using the aggregate indexscans is cheaper than the - * generic aggregate method. If so, generate and return a Plan that does it - * that way. Otherwise, return NULL. - * - * Note: it seems likely that the generic method will never be cheaper - * in practice, except maybe for tiny tables where it'd hardly matter. - * Should we skip even trying to build the standard plan, if - * preprocess_minmax_aggregates succeeds? - * - * We are passed the preprocessed tlist, as well as the estimated costs for - * doing the aggregates the regular way, and the best path devised for - * computing the input of a standard Agg node. - */ -Plan * -optimize_minmax_aggregates(PlannerInfo *root, List *tlist, - const AggClauseCosts *aggcosts, Path *best_path) -{ - Query *parse = root->parse; - Cost total_cost; - Path agg_p; - Plan *plan; - Node *hqual; - ListCell *lc; - - /* Nothing to do if preprocess_minmax_aggs rejected the query */ - if (root->minmax_aggs == NIL) - return NULL; - - /* - * Now we have enough info to compare costs against the generic aggregate - * implementation. + * OK, we can do the query this way. Prepare to create a MinMaxAggPath + * node. * - * Note that we don't include evaluation cost of the tlist here; this is - * OK since it isn't included in best_path's cost either, and should be - * the same in either case. + * First, create an output Param node for each agg. 
(If we end up not + * using the MinMaxAggPath, we'll waste a PARAM_EXEC slot for each agg, + * which is not worth worrying about. We can't wait till create_plan time + * to decide whether to make the Param, unfortunately.) */ - total_cost = 0; - foreach(lc, root->minmax_aggs) + foreach(lc, aggs_list) { MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); - total_cost += mminfo->pathcost; + mminfo->param = + SS_make_initplan_output_param(root, + exprType((Node *) mminfo->target), + -1, + exprCollation((Node *) mminfo->target)); } - cost_agg(&agg_p, root, AGG_PLAIN, aggcosts, - 0, 0, - best_path->startup_cost, best_path->total_cost, - best_path->parent->rows); - - if (total_cost > agg_p.total_cost) - return NULL; /* too expensive */ - /* - * OK, we are going to generate an optimized plan. + * Create a MinMaxAggPath node with the appropriate estimated costs and + * other needed data, and add it to the UPPERREL_GROUP_AGG upperrel, where + * it will compete against the standard aggregate implementation. (It + * will likely always win, but we need not assume that here.) * - * First, generate a subplan and output Param node for each agg. - */ - foreach(lc, root->minmax_aggs) - { - MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); - - make_agg_subplan(root, mminfo); - } - - /* - * Modify the targetlist and HAVING qual to reference subquery outputs + * Note: grouping_planner won't have created this upperrel yet, but it's + * fine for us to create it first. */ - tlist = (List *) replace_aggs_with_params_mutator((Node *) tlist, root); - hqual = replace_aggs_with_params_mutator(parse->havingQual, root); - - /* - * We have to replace Aggrefs with Params in equivalence classes too, else - * ORDER BY or DISTINCT on an optimized aggregate will fail. We don't - * need to process child eclass members though, since they aren't of - * interest anymore --- and replace_aggs_with_params_mutator isn't able to - * handle Aggrefs containing translated child Vars, anyway. 
- * - * Note: at some point it might become necessary to mutate other data - * structures too, such as the query's sortClause or distinctClause. Right - * now, those won't be examined after this point. - */ - mutate_eclass_expressions(root, - replace_aggs_with_params_mutator, - (void *) root, - false); - - /* - * Generate the output plan --- basically just a Result - */ - plan = (Plan *) make_result(root, tlist, hqual, NULL); - - /* Account for evaluation cost of the tlist (make_result did the rest) */ - add_tlist_costs_to_plan(root, plan, tlist); - - return plan; + grouped_rel = fetch_upper_rel(root, UPPERREL_GROUP_AGG, NULL); + add_path(grouped_rel, (Path *) + create_minmaxagg_path(root, grouped_rel, + create_pathtarget(root, tlist), + aggs_list, + (List *) parse->havingQual)); } /* @@ -403,6 +334,7 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, PlannerInfo *subroot; Query *parse; TargetEntry *tle; + List *tlist; NullTest *ntest; SortGroupClause *sortcl; RelOptInfo *final_rel; @@ -410,40 +342,51 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, Cost path_cost; double path_fraction; - /*---------- - * Generate modified query of the form - * (SELECT col FROM tab - * WHERE col IS NOT NULL AND existing-quals - * ORDER BY col ASC/DESC - * LIMIT 1) - * - * We cheat a bit here by building what is effectively a subplan query - * level without taking the trouble to increment varlevelsup of outer - * references. Therefore we don't increment the subroot's query_level nor - * repoint its parent_root to the parent level. We can get away with that - * because the plan will be an initplan and therefore cannot need any - * parameters from the parent level. But see hackery in make_agg_subplan; - * we might someday need to do this the hard way. - *---------- + /* + * We are going to construct what is effectively a sub-SELECT query, so + * clone the current query level's state and adjust it to make it look + * like a subquery. 
Any outer references will now be one level higher + * than before. (This means that when we are done, there will be no Vars + * of level 1, which is why the subquery can become an initplan.) */ subroot = (PlannerInfo *) palloc(sizeof(PlannerInfo)); memcpy(subroot, root, sizeof(PlannerInfo)); - subroot->parse = parse = (Query *) copyObject(root->parse); + subroot->query_level++; + subroot->parent_root = root; /* reset subplan-related stuff */ subroot->plan_params = NIL; subroot->outer_params = NULL; subroot->init_plans = NIL; + subroot->cte_plan_ids = NIL; + + subroot->parse = parse = (Query *) copyObject(root->parse); + IncrementVarSublevelsUp((Node *) parse, 1, 1); + + /* append_rel_list might contain outer Vars? */ + subroot->append_rel_list = (List *) copyObject(root->append_rel_list); + IncrementVarSublevelsUp((Node *) subroot->append_rel_list, 1, 1); /* There shouldn't be any OJ info to translate, as yet */ Assert(subroot->join_info_list == NIL); + /* and we haven't made equivalence classes, either */ + Assert(subroot->eq_classes == NIL); /* and we haven't created PlaceHolderInfos, either */ Assert(subroot->placeholder_list == NIL); + /*---------- + * Generate modified query of the form + * (SELECT col FROM tab + * WHERE col IS NOT NULL AND existing-quals + * ORDER BY col ASC/DESC + * LIMIT 1) + *---------- + */ /* single tlist entry that is the aggregate target */ tle = makeTargetEntry(copyObject(mminfo->target), (AttrNumber) 1, pstrdup("agg_target"), false); - parse->targetList = list_make1(tle); + tlist = list_make1(tle); + subroot->processed_tlist = parse->targetList = tlist; /* No HAVING, no DISTINCT, no aggregates anymore */ parse->havingQual = NULL; @@ -467,7 +410,7 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, /* Build suitable ORDER BY clause */ sortcl = makeNode(SortGroupClause); - sortcl->tleSortGroupRef = assignSortGroupRef(tle, parse->targetList); + sortcl->tleSortGroupRef = assignSortGroupRef(tle, tlist); sortcl->eqop = eqop; 
sortcl->sortop = sortop; sortcl->nulls_first = nulls_first; @@ -488,8 +431,16 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, subroot->tuple_fraction = 1.0; subroot->limit_tuples = 1.0; - final_rel = query_planner(subroot, parse->targetList, - minmax_qp_callback, NULL); + final_rel = query_planner(subroot, tlist, minmax_qp_callback, NULL); + + /* + * Since we didn't go through subquery_planner() to handle the subquery, + * we have to do some of the same cleanup it would do, in particular cope + * with params and initplans used within this subquery. (This won't + * matter if we end up not using the subplan.) + */ + SS_identify_outer_params(subroot); + SS_charge_for_initplans(subroot, final_rel); /* * Get the best presorted path, that being the one that's cheapest for @@ -508,6 +459,14 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, if (!sorted_path) return false; + /* + * The path might not return exactly what we want, so fix that. (We + * assume that this won't change any conclusions about which was the + * cheapest path.) + */ + sorted_path = apply_projection_to_path(subroot, final_rel, sorted_path, + create_pathtarget(root, tlist)); + /* * Determine cost to get just the first row of the presorted path. * @@ -526,7 +485,7 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo, } /* - * Compute query_pathkeys and other pathkeys during plan generation + * Compute query_pathkeys and other pathkeys during query_planner() */ static void minmax_qp_callback(PlannerInfo *root, void *extra) @@ -543,105 +502,6 @@ minmax_qp_callback(PlannerInfo *root, void *extra) root->query_pathkeys = root->sort_pathkeys; } -/* - * Construct a suitable plan for a converted aggregate query - */ -static void -make_agg_subplan(PlannerInfo *root, MinMaxAggInfo *mminfo) -{ - PlannerInfo *subroot = mminfo->subroot; - Query *subparse = subroot->parse; - Plan *plan; - - /* - * Generate the plan for the subquery. 
We already have a Path, but we have - * to convert it to a Plan and attach a LIMIT node above it. - */ - plan = create_plan(subroot, mminfo->path); - - /* - * If the top-level plan node is one that cannot do expression evaluation - * and its existing target list isn't already what we need, we must insert - * a Result node to project the desired tlist. - */ - if (!is_projection_capable_plan(plan) && - !tlist_same_exprs(subparse->targetList, plan->targetlist)) - { - plan = (Plan *) make_result(subroot, - subparse->targetList, - NULL, - plan); - } - else - { - /* - * Otherwise, just replace the subplan's flat tlist with the desired - * tlist. - */ - plan->targetlist = subparse->targetList; - } - - plan = (Plan *) make_limit(plan, - subparse->limitOffset, - subparse->limitCount, - 0, 1); - - /* - * We have to do some of the same cleanup that subquery_planner() would - * do, namely cope with params and initplans used within this plan tree. - * - * This is a little bit messy because although we initially created the - * subroot by cloning the outer root, it really is a subplan and needs to - * consider initplans belonging to the outer root as providing available - * parameters. So temporarily change its parent_root pointer. - * (Fortunately, SS_identify_outer_params doesn't care whether the depth - * of parent_root nesting matches query_level.) - */ - subroot->parent_root = root; - SS_identify_outer_params(subroot); - subroot->parent_root = root->parent_root; - - SS_attach_initplans(subroot, plan); - - /* - * Convert the plan into an InitPlan, and make a Param for its result. 
- */ - mminfo->param = - SS_make_initplan_from_plan(root, subroot, plan, - exprType((Node *) mminfo->target), - -1, - exprCollation((Node *) mminfo->target)); -} - -/* - * Replace original aggregate calls with subplan output Params - */ -static Node * -replace_aggs_with_params_mutator(Node *node, PlannerInfo *root) -{ - if (node == NULL) - return NULL; - if (IsA(node, Aggref)) - { - Aggref *aggref = (Aggref *) node; - TargetEntry *curTarget = (TargetEntry *) linitial(aggref->args); - ListCell *lc; - - foreach(lc, root->minmax_aggs) - { - MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); - - if (mminfo->aggfnoid == aggref->aggfnoid && - equal(mminfo->target, curTarget->expr)) - return (Node *) mminfo->param; - } - elog(ERROR, "failed to re-find MinMaxAggInfo record"); - } - Assert(!IsA(node, SubLink)); - return expression_tree_mutator(node, replace_aggs_with_params_mutator, - (void *) root); -} - /* * Get the OID of the sort operator, if any, associated with an aggregate. * Returns InvalidOid if there is no such operator. diff --git a/src/backend/optimizer/plan/planmain.c b/src/backend/optimizer/plan/planmain.c index f4319c6101..da2c7f6606 100644 --- a/src/backend/optimizer/plan/planmain.c +++ b/src/backend/optimizer/plan/planmain.c @@ -36,9 +36,7 @@ * Since query_planner does not handle the toplevel processing (grouping, * sorting, etc) it cannot select the best path by itself. Instead, it * returns the RelOptInfo for the top level of joining, and the caller - * (grouping_planner) can choose one of the surviving paths for the rel. - * Normally it would choose either the rel's cheapest path, or the cheapest - * path for the desired sort order. + * (grouping_planner) can choose among the surviving paths for the rel. 
* * root describes the query to plan * tlist is the target list the query should produce @@ -85,6 +83,7 @@ query_planner(PlannerInfo *root, List *tlist, /* The only path for it is a trivial Result path */ add_path(final_rel, (Path *) create_result_path(final_rel, + &(final_rel->reltarget), (List *) parse->jointree->quals)); /* Select cheapest path (pretty easy in this case...) */ @@ -104,7 +103,7 @@ query_planner(PlannerInfo *root, List *tlist, * Init planner lists to empty. * * NOTE: append_rel_list was set up by subquery_planner, so do not touch - * here; eq_classes and minmax_aggs may contain data already, too. + * here. */ root->join_rel_list = NIL; root->join_rel_hash = NULL; diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 65b99e2af3..5fc8e5bd36 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -84,8 +84,9 @@ typedef struct /* Local functions */ static Node *preprocess_expression(PlannerInfo *root, Node *expr, int kind); static void preprocess_qual_conditions(PlannerInfo *root, Node *jtnode); -static Plan *inheritance_planner(PlannerInfo *root); -static Plan *grouping_planner(PlannerInfo *root, double tuple_fraction); +static void inheritance_planner(PlannerInfo *root); +static void grouping_planner(PlannerInfo *root, bool inheritance_update, + double tuple_fraction); static void preprocess_rowmarks(PlannerInfo *root); static double preprocess_limit(PlannerInfo *root, double tuple_fraction, @@ -96,52 +97,44 @@ static List *preprocess_groupclause(PlannerInfo *root, List *force); static List *extract_rollup_sets(List *groupingSets); static List *reorder_grouping_sets(List *groupingSets, List *sortclause); static void standard_qp_callback(PlannerInfo *root, void *extra); -static bool choose_hashed_grouping(PlannerInfo *root, - double tuple_fraction, double limit_tuples, - double path_rows, - Path *cheapest_path, Path *sorted_path, - double dNumGroups, AggClauseCosts 
*agg_costs); -static bool choose_hashed_distinct(PlannerInfo *root, - double tuple_fraction, double limit_tuples, - double path_rows, - Cost cheapest_startup_cost, Cost cheapest_total_cost, - int cheapest_path_width, - Cost sorted_startup_cost, Cost sorted_total_cost, - int sorted_path_width, - List *sorted_pathkeys, - double dNumDistinctRows); -static List *make_subplanTargetList(PlannerInfo *root, List *tlist, - AttrNumber **groupColIdx, bool *need_tlist_eval); +static double get_number_of_groups(PlannerInfo *root, + double path_rows, + List *rollup_lists, + List *rollup_groupclauses); +static RelOptInfo *create_grouping_paths(PlannerInfo *root, + RelOptInfo *input_rel, + PathTarget *target, + AttrNumber *groupColIdx, + List *rollup_lists, + List *rollup_groupclauses); +static RelOptInfo *create_window_paths(PlannerInfo *root, + RelOptInfo *input_rel, + List *base_tlist, + List *tlist, + WindowFuncLists *wflists, + List *activeWindows); +static void create_one_window_path(PlannerInfo *root, + RelOptInfo *window_rel, + Path *path, + List *base_tlist, + List *tlist, + WindowFuncLists *wflists, + List *activeWindows); +static RelOptInfo *create_distinct_paths(PlannerInfo *root, + RelOptInfo *input_rel); +static RelOptInfo *create_ordered_paths(PlannerInfo *root, + RelOptInfo *input_rel, + double limit_tuples); +static PathTarget *make_scanjoin_target(PlannerInfo *root, List *tlist, + AttrNumber **groupColIdx); static int get_grouping_column_index(Query *parse, TargetEntry *tle); -static void locate_grouping_columns(PlannerInfo *root, - List *tlist, - List *sub_tlist, - AttrNumber *groupColIdx); static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist); static List *select_active_windows(PlannerInfo *root, WindowFuncLists *wflists); static List *make_windowInputTargetList(PlannerInfo *root, List *tlist, List *activeWindows); static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc, List *tlist); -static void 
get_column_info_for_window(PlannerInfo *root, WindowClause *wc, - List *tlist, - int numSortCols, AttrNumber *sortColIdx, - int *partNumCols, - AttrNumber **partColIdx, - Oid **partOperators, - int *ordNumCols, - AttrNumber **ordColIdx, - Oid **ordOperators); -static Plan *build_grouping_chain(PlannerInfo *root, - Query *parse, - List *tlist, - bool need_sort_for_grouping, - List *rollup_groupclauses, - List *rollup_lists, - AttrNumber *groupColIdx, - AggClauseCosts *agg_costs, - long numGroups, - Plan *result_plan); + /***************************************************************************** * @@ -175,6 +168,8 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) PlannerGlobal *glob; double tuple_fraction; PlannerInfo *root; + RelOptInfo *final_rel; + Path *best_path; Plan *top_plan; ListCell *lp, *lr; @@ -292,8 +287,14 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) } /* primary planning entry point (may recurse for subqueries) */ - top_plan = subquery_planner(glob, parse, NULL, - false, tuple_fraction, &root); + root = subquery_planner(glob, parse, NULL, + false, tuple_fraction); + + /* Select best Path and turn it into a Plan */ + final_rel = fetch_upper_rel(root, UPPERREL_FINAL, NULL); + best_path = get_cheapest_fractional_path(final_rel, tuple_fraction); + + top_plan = create_plan(root, best_path); /* * If creating a plan for a scrollable cursor, make sure it can run @@ -407,9 +408,6 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) * tuple_fraction is the fraction of tuples we expect will be retrieved. * tuple_fraction is interpreted as explained for grouping_planner, below. * - * If subroot isn't NULL, we pass back the query's final PlannerInfo struct; - * among other things this tells the output sort ordering of the plan. - * * Basically, this routine does the stuff that should only be done once * per Query object. It then calls grouping_planner. 
At one time, * grouping_planner could be invoked recursively on the same Query object; @@ -419,20 +417,23 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) * subquery_planner will be called recursively to handle sub-Query nodes * found within the query's expressions and rangetable. * - * Returns a query plan. + * Returns the PlannerInfo struct ("root") that contains all data generated + * while planning the subquery. In particular, the Path(s) attached to + * the (UPPERREL_FINAL, NULL) upperrel represent our conclusions about the + * cheapest way(s) to implement the query. The top level will select the + * best Path and pass it through createplan.c to produce a finished Plan. *-------------------- */ -Plan * +PlannerInfo * subquery_planner(PlannerGlobal *glob, Query *parse, PlannerInfo *parent_root, - bool hasRecursion, double tuple_fraction, - PlannerInfo **subroot) + bool hasRecursion, double tuple_fraction) { PlannerInfo *root; - Plan *plan; List *newWithCheckOptions; List *newHaving; bool hasOuterJoins; + RelOptInfo *final_rel; ListCell *l; /* Create a PlannerInfo data structure for this subquery */ @@ -450,15 +451,17 @@ subquery_planner(PlannerGlobal *glob, Query *parse, root->eq_classes = NIL; root->append_rel_list = NIL; root->rowMarks = NIL; - root->hasInheritedTarget = false; + memset(root->upper_rels, 0, sizeof(root->upper_rels)); + root->processed_tlist = NIL; root->grouping_map = NULL; - + root->minmax_aggs = NIL; + root->hasInheritedTarget = false; root->hasRecursion = hasRecursion; if (hasRecursion) root->wt_param_id = SS_assign_special_param(root); else root->wt_param_id = -1; - root->non_recursive_plan = NULL; + root->non_recursive_path = NULL; /* * If there is a WITH list, process each WITH query and build an initplan @@ -732,54 +735,9 @@ subquery_planner(PlannerGlobal *glob, Query *parse, */ if (parse->resultRelation && rt_fetch(parse->resultRelation, parse->rtable)->inh) - plan = inheritance_planner(root); + 
inheritance_planner(root); else - { - plan = grouping_planner(root, tuple_fraction); - /* If it's not SELECT, we need a ModifyTable node */ - if (parse->commandType != CMD_SELECT) - { - List *withCheckOptionLists; - List *returningLists; - List *rowMarks; - - /* - * Set up the WITH CHECK OPTION and RETURNING lists-of-lists, if - * needed. - */ - if (parse->withCheckOptions) - withCheckOptionLists = list_make1(parse->withCheckOptions); - else - withCheckOptionLists = NIL; - - if (parse->returningList) - returningLists = list_make1(parse->returningList); - else - returningLists = NIL; - - /* - * If there was a FOR [KEY] UPDATE/SHARE clause, the LockRows node - * will have dealt with fetching non-locked marked rows, else we - * need to have ModifyTable do that. - */ - if (parse->rowMarks) - rowMarks = NIL; - else - rowMarks = root->rowMarks; - - plan = (Plan *) make_modifytable(root, - parse->commandType, - parse->canSetTag, - parse->resultRelation, - list_make1_int(parse->resultRelation), - list_make1(plan), - withCheckOptionLists, - returningLists, - rowMarks, - parse->onConflict, - SS_assign_special_param(root)); - } - } + grouping_planner(root, false, tuple_fraction); /* * Capture the set of outer-level param IDs we have access to, for use in @@ -788,17 +746,22 @@ subquery_planner(PlannerGlobal *glob, Query *parse, SS_identify_outer_params(root); /* - * If any initPlans were created in this query level, attach them to the - * topmost plan node for the level, and increment that node's cost to - * account for them. + * If any initPlans were created in this query level, increment the + * surviving Paths' costs to account for them. They won't actually get + * attached to the plan tree till create_plan() runs, but we want to be + * sure their costs are included now. 
*/ - SS_attach_initplans(root, plan); + final_rel = fetch_upper_rel(root, UPPERREL_FINAL, NULL); + SS_charge_for_initplans(root, final_rel); - /* Return internal info if caller wants it */ - if (subroot) - *subroot = root; + /* + * Make sure we've identified the cheapest Path for the final rel. (By + * doing this here not in grouping_planner, we include initPlan costs in + * the decision, though it's unlikely that will change anything.) + */ + set_cheapest(final_rel); - return plan; + return root; } /* @@ -944,7 +907,7 @@ preprocess_phv_expression(PlannerInfo *root, Expr *expr) /* * inheritance_planner - * Generate a plan in the case where the result relation is an + * Generate Paths in the case where the result relation is an * inheritance set. * * We have to handle this case differently from cases where a source relation @@ -955,9 +918,13 @@ preprocess_phv_expression(PlannerInfo *root, Expr *expr) * the UPDATE/DELETE target can never be the nullable side of an outer join, * so it's OK to generate the plan this way. * - * Returns a query plan. + * Returns nothing; the useful output is in the Paths we attach to + * the (UPPERREL_FINAL, NULL) upperrel stored in *root. + * + * Note that we have not done set_cheapest() on the final rel; it's convenient + * to leave this to the caller. 
*/ -static Plan * +static void inheritance_planner(PlannerInfo *root) { Query *parse = root->parse; @@ -969,11 +936,13 @@ inheritance_planner(PlannerInfo *root) List *final_rtable = NIL; int save_rel_array_size = 0; RelOptInfo **save_rel_array = NULL; - List *subplans = NIL; + List *subpaths = NIL; + List *subroots = NIL; List *resultRelations = NIL; List *withCheckOptionLists = NIL; List *returningLists = NIL; List *rowMarks; + RelOptInfo *final_rel; ListCell *lc; Index rti; @@ -1060,8 +1029,9 @@ inheritance_planner(PlannerInfo *root) foreach(lc, root->append_rel_list) { AppendRelInfo *appinfo = (AppendRelInfo *) lfirst(lc); - PlannerInfo subroot; - Plan *subplan; + PlannerInfo *subroot; + RelOptInfo *sub_final_rel; + Path *subpath; /* append_rel_list contains all append rels; ignore others */ if (appinfo->parent_relid != parentRTindex) @@ -1071,7 +1041,8 @@ inheritance_planner(PlannerInfo *root) * We need a working copy of the PlannerInfo so that we can control * propagation of information back to the main copy. */ - memcpy(&subroot, root, sizeof(PlannerInfo)); + subroot = makeNode(PlannerInfo); + memcpy(subroot, root, sizeof(PlannerInfo)); /* * Generate modified query with this rel as target. We first apply @@ -1079,7 +1050,7 @@ inheritance_planner(PlannerInfo *root) * references to the parent RTE to refer to the current child RTE, * then fool around with subquery RTEs. */ - subroot.parse = (Query *) + subroot->parse = (Query *) adjust_appendrel_attrs(root, (Node *) parse, appinfo); @@ -1090,7 +1061,7 @@ inheritance_planner(PlannerInfo *root) * executor doesn't need to see the modified copies --- we can just * pass it the original rowMarks list.) 
*/ - subroot.rowMarks = (List *) copyObject(root->rowMarks); + subroot->rowMarks = (List *) copyObject(root->rowMarks); /* * The append_rel_list likewise might contain references to subquery @@ -1106,7 +1077,7 @@ inheritance_planner(PlannerInfo *root) { ListCell *lc2; - subroot.append_rel_list = NIL; + subroot->append_rel_list = NIL; foreach(lc2, root->append_rel_list) { AppendRelInfo *appinfo2 = (AppendRelInfo *) lfirst(lc2); @@ -1114,8 +1085,8 @@ inheritance_planner(PlannerInfo *root) if (bms_is_member(appinfo2->child_relid, modifiableARIindexes)) appinfo2 = (AppendRelInfo *) copyObject(appinfo2); - subroot.append_rel_list = lappend(subroot.append_rel_list, - appinfo2); + subroot->append_rel_list = lappend(subroot->append_rel_list, + appinfo2); } } @@ -1125,9 +1096,9 @@ inheritance_planner(PlannerInfo *root) * These won't be referenced, so there's no need to make them very * valid-looking. */ - while (list_length(subroot.parse->rtable) < list_length(final_rtable)) - subroot.parse->rtable = lappend(subroot.parse->rtable, - makeNode(RangeTblEntry)); + while (list_length(subroot->parse->rtable) < list_length(final_rtable)) + subroot->parse->rtable = lappend(subroot->parse->rtable, + makeNode(RangeTblEntry)); /* * If this isn't the first child Query, generate duplicates of all @@ -1156,15 +1127,15 @@ inheritance_planner(PlannerInfo *root) * save a few cycles by applying ChangeVarNodes before we * append the RTE to the rangetable. 
*/ - newrti = list_length(subroot.parse->rtable) + 1; - ChangeVarNodes((Node *) subroot.parse, rti, newrti, 0); - ChangeVarNodes((Node *) subroot.rowMarks, rti, newrti, 0); + newrti = list_length(subroot->parse->rtable) + 1; + ChangeVarNodes((Node *) subroot->parse, rti, newrti, 0); + ChangeVarNodes((Node *) subroot->rowMarks, rti, newrti, 0); /* Skip processing unchanging parts of append_rel_list */ if (modifiableARIindexes != NULL) { ListCell *lc2; - foreach(lc2, subroot.append_rel_list) + foreach(lc2, subroot->append_rel_list) { AppendRelInfo *appinfo2 = (AppendRelInfo *) lfirst(lc2); @@ -1175,28 +1146,28 @@ inheritance_planner(PlannerInfo *root) } rte = copyObject(rte); ChangeVarNodes((Node *) rte->securityQuals, rti, newrti, 0); - subroot.parse->rtable = lappend(subroot.parse->rtable, - rte); + subroot->parse->rtable = lappend(subroot->parse->rtable, + rte); } rti++; } } /* There shouldn't be any OJ info to translate, as yet */ - Assert(subroot.join_info_list == NIL); + Assert(subroot->join_info_list == NIL); /* and we haven't created PlaceHolderInfos, either */ - Assert(subroot.placeholder_list == NIL); + Assert(subroot->placeholder_list == NIL); /* hack to mark target relation as an inheritance partition */ - subroot.hasInheritedTarget = true; + subroot->hasInheritedTarget = true; - /* Generate plan */ - subplan = grouping_planner(&subroot, 0.0 /* retrieve all tuples */ ); + /* Generate Path(s) for accessing this result relation */ + grouping_planner(subroot, true, 0.0 /* retrieve all tuples */ ); /* * Planning may have modified the query result relation (if there were * security barrier quals on the result RTE). 
*/ - appinfo->child_relid = subroot.parse->resultRelation; + appinfo->child_relid = subroot->parse->resultRelation; /* * We'll use the first child relation (even if it's excluded) as the @@ -1212,22 +1183,29 @@ inheritance_planner(PlannerInfo *root) if (nominalRelation < 0) nominalRelation = appinfo->child_relid; + /* + * Select cheapest path in case there's more than one. We always run + * modification queries to conclusion, so we care only for the + * cheapest-total path. + */ + sub_final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL); + set_cheapest(sub_final_rel); + subpath = sub_final_rel->cheapest_total_path; + /* * If this child rel was excluded by constraint exclusion, exclude it * from the result plan. */ - if (is_dummy_plan(subplan)) + if (IS_DUMMY_PATH(subpath)) continue; - subplans = lappend(subplans, subplan); - /* * If this is the first non-excluded child, its post-planning rtable * becomes the initial contents of final_rtable; otherwise, append * just its modified subquery RTEs to final_rtable. */ if (final_rtable == NIL) - final_rtable = subroot.parse->rtable; + final_rtable = subroot->parse->rtable; else { List *tmp_rtable = NIL; @@ -1244,7 +1222,7 @@ inheritance_planner(PlannerInfo *root) * When this happens, we want to use the new subqueries in the * final rtable. */ - forboth(cell1, final_rtable, cell2, subroot.parse->rtable) + forboth(cell1, final_rtable, cell2, subroot->parse->rtable) { RangeTblEntry *rte1 = (RangeTblEntry *) lfirst(cell1); RangeTblEntry *rte2 = (RangeTblEntry *) lfirst(cell2); @@ -1261,7 +1239,7 @@ inheritance_planner(PlannerInfo *root) } final_rtable = list_concat(tmp_rtable, - list_copy_tail(subroot.parse->rtable, + list_copy_tail(subroot->parse->rtable, list_length(final_rtable))); } @@ -1272,19 +1250,25 @@ inheritance_planner(PlannerInfo *root) * have to propagate forward the RelOptInfos that were already built * in previous children. 
*/ - Assert(subroot.simple_rel_array_size >= save_rel_array_size); + Assert(subroot->simple_rel_array_size >= save_rel_array_size); for (rti = 1; rti < save_rel_array_size; rti++) { RelOptInfo *brel = save_rel_array[rti]; if (brel) - subroot.simple_rel_array[rti] = brel; + subroot->simple_rel_array[rti] = brel; } - save_rel_array_size = subroot.simple_rel_array_size; - save_rel_array = subroot.simple_rel_array; + save_rel_array_size = subroot->simple_rel_array_size; + save_rel_array = subroot->simple_rel_array; /* Make sure any initplans from this rel get into the outer list */ - root->init_plans = subroot.init_plans; + root->init_plans = subroot->init_plans; + + /* Build list of sub-paths */ + subpaths = lappend(subpaths, subpath); + + /* Build list of modified subroots, too */ + subroots = lappend(subroots, subroot); /* Build list of target-relation RT indexes */ resultRelations = lappend_int(resultRelations, appinfo->child_relid); @@ -1292,40 +1276,44 @@ inheritance_planner(PlannerInfo *root) /* Build lists of per-relation WCO and RETURNING targetlists */ if (parse->withCheckOptions) withCheckOptionLists = lappend(withCheckOptionLists, - subroot.parse->withCheckOptions); + subroot->parse->withCheckOptions); if (parse->returningList) returningLists = lappend(returningLists, - subroot.parse->returningList); + subroot->parse->returningList); Assert(!parse->onConflict); } - /* Mark result as unordered (probably unnecessary) */ - root->query_pathkeys = NIL; + /* Result path must go into outer query's FINAL upperrel */ + final_rel = fetch_upper_rel(root, UPPERREL_FINAL, NULL); /* * If we managed to exclude every child rel, return a dummy plan; it * doesn't even need a ModifyTable node. 
*/ - if (subplans == NIL) + if (subpaths == NIL) { - /* although dummy, it must have a valid tlist for executor */ - List *tlist; - - tlist = preprocess_targetlist(root, parse->targetList); - return (Plan *) make_result(root, - tlist, - (Node *) list_make1(makeBoolConst(false, - false)), - NULL); + set_dummy_rel_pathlist(final_rel); + return; } /* * Put back the final adjusted rtable into the master copy of the Query. + * (We mustn't do this if we found no non-excluded children.) */ parse->rtable = final_rtable; root->simple_rel_array_size = save_rel_array_size; root->simple_rel_array = save_rel_array; + /* Must reconstruct master's simple_rte_array, too */ + root->simple_rte_array = (RangeTblEntry **) + palloc0((list_length(final_rtable) + 1) * sizeof(RangeTblEntry *)); + rti = 1; + foreach(lc, final_rtable) + { + RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc); + + root->simple_rte_array[rti++] = rte; + } /* * If there was a FOR [KEY] UPDATE/SHARE clause, the LockRows node will @@ -1337,28 +1325,35 @@ inheritance_planner(PlannerInfo *root) else rowMarks = root->rowMarks; - /* And last, tack on a ModifyTable node to do the UPDATE/DELETE work */ - return (Plan *) make_modifytable(root, + /* Create Path representing a ModifyTable to do the UPDATE/DELETE work */ + add_path(final_rel, (Path *) + create_modifytable_path(root, final_rel, parse->commandType, parse->canSetTag, nominalRelation, resultRelations, - subplans, + subpaths, + subroots, withCheckOptionLists, returningLists, rowMarks, NULL, - SS_assign_special_param(root)); + SS_assign_special_param(root))); } /*-------------------- * grouping_planner * Perform planning steps related to grouping, aggregation, etc. - * This primarily means adding top-level processing to the basic - * query plan produced by query_planner. * - * tuple_fraction is the fraction of tuples we expect will be retrieved + * This function adds all required top-level processing to the scan/join + * Path(s) produced by query_planner. 
* + * If inheritance_update is true, we're being called from inheritance_planner + * and should not include a ModifyTable step in the resulting Path(s). + * (inheritance_planner will create a single ModifyTable node covering all the + * target tables.) + * + * tuple_fraction is the fraction of tuples we expect will be retrieved. * tuple_fraction is interpreted as follows: * 0: expect all tuples to be retrieved (normal case) * 0 < tuple_fraction < 1: expect the given fraction of tuples available @@ -1366,23 +1361,26 @@ inheritance_planner(PlannerInfo *root) * tuple_fraction >= 1: tuple_fraction is the absolute number of tuples * expected to be retrieved (ie, a LIMIT specification) * - * Returns a query plan. Also, root->query_pathkeys is returned as the - * actual output ordering of the plan (in pathkey format). + * Returns nothing; the useful output is in the Paths we attach to the + * (UPPERREL_FINAL, NULL) upperrel in *root. In addition, + * root->processed_tlist contains the final processed targetlist. + * + * Note that we have not done set_cheapest() on the final rel; it's convenient + * to leave this to the caller. 
*-------------------- */ -static Plan * -grouping_planner(PlannerInfo *root, double tuple_fraction) +static void +grouping_planner(PlannerInfo *root, bool inheritance_update, + double tuple_fraction) { Query *parse = root->parse; List *tlist = parse->targetList; int64 offset_est = 0; int64 count_est = 0; double limit_tuples = -1.0; - Plan *result_plan; - List *current_pathkeys; - double dNumGroups = 0; - bool use_hashed_distinct = false; - bool tested_hashed_distinct = false; + RelOptInfo *current_rel; + RelOptInfo *final_rel; + ListCell *lc; /* Tweak caller-supplied tuple_fraction if have LIMIT/OFFSET */ if (parse->limitCount || parse->limitOffset) @@ -1398,36 +1396,29 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) limit_tuples = (double) count_est + (double) offset_est; } + /* Make tuple_fraction accessible to lower-level routines */ + root->tuple_fraction = tuple_fraction; + if (parse->setOperations) { - List *set_sortclauses; - /* * If there's a top-level ORDER BY, assume we have to fetch all the * tuples. This might be too simplistic given all the hackery below * to possibly avoid the sort; but the odds of accurate estimates here - * are pretty low anyway. + * are pretty low anyway. XXX try to get rid of this in favor of + * letting plan_set_operations generate both fast-start and + * cheapest-total paths. */ if (parse->sortClause) - tuple_fraction = 0.0; + root->tuple_fraction = 0.0; /* - * Construct the plan for set operations. The result will not need - * any work except perhaps a top-level sort and/or LIMIT. Note that - * any special work for recursive unions is the responsibility of + * Construct Paths for set operations. The results will not need any + * work except perhaps a top-level sort and/or LIMIT. Note that any + * special work for recursive unions is the responsibility of * plan_set_operations. 
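The header comment above defines the three regimes of `tuple_fraction`, and the first thing `grouping_planner` does is override the caller's value when a LIMIT/OFFSET is present. A hedged sketch of just that arithmetic, detached from the planner (function names and the `has_limit` flag are illustrative, not the real API; in the patch the estimates come from `preprocess_limit`):

```c
#include <stdbool.h>

/*
 * tuple_fraction conventions, per the comment block above:
 *   0                     expect all tuples to be retrieved
 *   0 < fraction < 1      expect that fraction of the tuples
 *   >= 1                  absolute number of tuples (a LIMIT spec)
 */
static double
effective_tuple_fraction(double tuple_fraction, bool has_limit,
                         long count_est, long offset_est)
{
    if (has_limit && count_est >= 0)
    {
        /* A known LIMIT overrides the caller's hint with an absolute count;
         * OFFSET rows must still be fetched, so they are included. */
        return (double) count_est + (double) offset_est;
    }
    return tuple_fraction;
}

/* Convert an absolute tuple_fraction to a fraction of "rows" total rows. */
static double
fractionalize(double tuple_fraction, double rows)
{
    if (tuple_fraction >= 1.0 && rows > 0)
        return tuple_fraction / rows;
    return tuple_fraction;
}
```

Storing the result into `root->tuple_fraction`, as the patch now does up front, is what lets lower-level routines such as `create_ordered_paths` see the same fetch expectation without threading a parameter through every call.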
*/ - result_plan = plan_set_operations(root, tuple_fraction, - &set_sortclauses); - - /* - * Calculate pathkeys representing the sort order (if any) of the set - * operation's result. We have to do this before overwriting the sort - * key information... - */ - current_pathkeys = make_pathkeys_for_sortclauses(root, - set_sortclauses, - result_plan->targetlist); + current_rel = plan_set_operations(root); /* * We should not need to call preprocess_targetlist, since we must be @@ -1438,8 +1429,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) */ Assert(parse->commandType == CMD_SELECT); - tlist = postprocess_setop_tlist(copyObject(result_plan->targetlist), - tlist); + tlist = root->processed_tlist; /* from plan_set_operations */ + + /* for safety, copy processed_tlist instead of modifying in-place */ + tlist = postprocess_setop_tlist(copyObject(tlist), parse->targetList); + + /* Save aside the final decorated tlist */ + root->processed_tlist = tlist; /* * Can't handle FOR [KEY] UPDATE/SHARE here (parser should have @@ -1465,33 +1461,25 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) else { /* No set operations, do regular planning */ - long numGroups = 0; - AggClauseCosts agg_costs; - int numGroupCols; - double path_rows; - bool use_hashed_grouping = false; + PathTarget *sub_target; + AttrNumber *groupColIdx; + double tlist_rows; + List *grouping_tlist; WindowFuncLists *wflists = NULL; List *activeWindows = NIL; - OnConflictExpr *onconfl; - int maxref = 0; List *rollup_lists = NIL; List *rollup_groupclauses = NIL; standard_qp_extra qp_extra; - RelOptInfo *final_rel; - Path *cheapest_path; - Path *sorted_path; - Path *best_path; - - MemSet(&agg_costs, 0, sizeof(AggClauseCosts)); /* A recursive query should always have setOperations */ Assert(!root->hasRecursion); - /* Preprocess grouping sets, if any */ + /* Preprocess grouping sets and GROUP BY clause, if any */ if (parse->groupingSets) { int *tleref_to_colnum_map; List *sets; + int maxref; 
ListCell *lc; ListCell *lc2; ListCell *lc_set; @@ -1499,7 +1487,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) parse->groupingSets = expand_grouping_sets(parse->groupingSets, -1); /* Identify max SortGroupRef in groupClause, for array sizing */ - /* (note this value will be used again later) */ + maxref = 0; foreach(lc, parse->groupClause) { SortGroupClause *gc = lfirst(lc); @@ -1570,21 +1558,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) } else { - /* Preprocess GROUP BY clause, if any */ + /* Preprocess regular GROUP BY clause, if any */ if (parse->groupClause) parse->groupClause = preprocess_groupclause(root, NIL); - rollup_groupclauses = list_make1(parse->groupClause); } - numGroupCols = list_length(parse->groupClause); - /* Preprocess targetlist */ tlist = preprocess_targetlist(root, tlist); - onconfl = parse->onConflict; - if (onconfl) - onconfl->onConflictSet = - preprocess_onconflict_targetlist(onconfl->onConflictSet, + if (parse->onConflict) + parse->onConflict->onConflictSet = + preprocess_onconflict_targetlist(parse->onConflict->onConflictSet, parse->resultRelation, parse->rtable); @@ -1596,6 +1580,15 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) if (parse->hasRowSecurity) root->glob->hasRowSecurity = true; + /* + * We are now done hacking up the query's targetlist. Most of the + * remaining planning work will be done with the PathTarget + * representation of tlists, but save aside the full representation so + * that we can transfer its decoration (resnames etc) to the topmost + * tlist of the finished Plan. + */ + root->processed_tlist = tlist; + /* * Locate any window functions in the tlist. (We don't need to look * anywhere else, since expressions used in ORDER BY will be in there @@ -1613,34 +1606,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) } /* - * Do aggregate preprocessing, if the query has any aggs. - * - * Note: think not that we can turn off hasAggs if we find no aggs. 
It - * is possible for constant-expression simplification to remove all - * explicit references to aggs, but we still have to follow the - * aggregate semantics (eg, producing only one output row). + * Preprocess MIN/MAX aggregates, if any. Note: be careful about + * adding logic between here and the query_planner() call. Anything + * that is needed in MIN/MAX-optimizable cases will have to be + * duplicated in planagg.c. */ if (parse->hasAggs) - { - /* - * Collect statistics about aggregates for estimating costs. Note: - * we do not attempt to detect duplicate aggregates here; a - * somewhat-overestimated cost is okay for our present purposes. - */ - count_agg_clauses(root, (Node *) tlist, &agg_costs); - count_agg_clauses(root, parse->havingQual, &agg_costs); - - /* - * Preprocess MIN/MAX aggregates, if any. Note: be careful about - * adding logic between here and the optimize_minmax_aggregates - * call. Anything that is needed in MIN/MAX-optimizable cases - * will have to be duplicated in planagg.c. - */ preprocess_minmax_aggregates(root, tlist); - } - - /* Make tuple_fraction accessible to lower-level routines */ - root->tuple_fraction = tuple_fraction; /* * Figure out whether there's a hard limit on the number of rows that @@ -1661,1073 +1633,256 @@ grouping_planner(PlannerInfo *root, double tuple_fraction) /* Set up data needed by standard_qp_callback */ qp_extra.tlist = tlist; qp_extra.activeWindows = activeWindows; - qp_extra.groupClause = llast(rollup_groupclauses); + qp_extra.groupClause = + parse->groupingSets ? llast(rollup_groupclauses) : parse->groupClause; /* - * Generate the best unsorted and presorted paths for this Query (but - * note there may not be any presorted paths). We also generate (in - * standard_qp_callback) pathkey representations of the query's sort - * clause, distinct clause, etc. 
+ * Generate the best unsorted and presorted paths for the scan/join + * portion of this Query, ie the processing represented by the + * FROM/WHERE clauses. (Note there may not be any presorted paths.) + * We also generate (in standard_qp_callback) pathkey representations + * of the query's sort clause, distinct clause, etc. */ - final_rel = query_planner(root, tlist, - standard_qp_callback, &qp_extra); + current_rel = query_planner(root, tlist, + standard_qp_callback, &qp_extra); /* - * Extract rowcount estimate for use below. If final_rel has been - * proven dummy, its rows estimate will be zero; clamp it to one to - * avoid zero-divide in subsequent calculations. + * Now determine the tlist that we want the topmost scan/join plan + * node to emit; this may be different from the final tlist if + * grouping or aggregation is needed. This is also a convenient spot + * for conversion of the tlist to PathTarget format. + * + * Note: it's desirable to not do this till after query_planner(), + * because the target width estimates can use per-Var width numbers + * that were obtained within query_planner(). */ - path_rows = clamp_row_est(final_rel->rows); + sub_target = make_scanjoin_target(root, tlist, + &groupColIdx); /* - * If there's grouping going on, estimate the number of result groups. - * We couldn't do this any earlier because it depends on relation size - * estimates that are created within query_planner(). + * Forcibly apply that tlist to all the Paths for the scan/join rel. * - * Then convert tuple_fraction to fractional form if it is absolute, - * and if grouping or aggregation is involved, adjust tuple_fraction - * to describe the fraction of the underlying un-aggregated tuples - * that will be fetched. + * In principle we should re-run set_cheapest() here to identify the + * cheapest path, but it seems unlikely that adding the same tlist + * eval costs to all the paths would change that, so we don't bother. 
+ * Instead, just assume that the cheapest-startup and cheapest-total + * paths remain so. (There should be no parameterized paths anymore, + * so we needn't worry about updating cheapest_parameterized_paths.) */ - dNumGroups = 1; /* in case not grouping */ - - if (parse->groupClause) + foreach(lc, current_rel->pathlist) { - List *groupExprs; - - if (parse->groupingSets) - { - ListCell *lc, - *lc2; - - dNumGroups = 0; - - forboth(lc, rollup_groupclauses, lc2, rollup_lists) - { - ListCell *lc3; - - groupExprs = get_sortgrouplist_exprs(lfirst(lc), - parse->targetList); - - foreach(lc3, lfirst(lc2)) - { - List *gset = lfirst(lc3); - - dNumGroups += estimate_num_groups(root, - groupExprs, - path_rows, - &gset); - } - } - } - else + Path *subpath = (Path *) lfirst(lc); + Path *path; + + Assert(subpath->param_info == NULL); + path = apply_projection_to_path(root, current_rel, + subpath, sub_target); + /* If we had to add a Result, path is different from subpath */ + if (path != subpath) { - groupExprs = get_sortgrouplist_exprs(parse->groupClause, - parse->targetList); - - dNumGroups = estimate_num_groups(root, groupExprs, path_rows, - NULL); + lfirst(lc) = path; + if (subpath == current_rel->cheapest_startup_path) + current_rel->cheapest_startup_path = path; + if (subpath == current_rel->cheapest_total_path) + current_rel->cheapest_total_path = path; } - - /* - * In GROUP BY mode, an absolute LIMIT is relative to the number - * of groups not the number of tuples. If the caller gave us a - * fraction, keep it as-is. (In both cases, we are effectively - * assuming that all the groups are about the same size.) - */ - if (tuple_fraction >= 1.0) - tuple_fraction /= dNumGroups; - - /* - * If there's more than one grouping set, we'll have to sort the - * entire input. 
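The `foreach` loop above overwrites pathlist cells in place when `apply_projection_to_path` had to add a Result node, and then repairs the rel's cached cheapest-path pointers. A simplified mock of that invariant-maintenance pattern (an array stands in for the `List`, and `MockPath`/`MockRel` are assumed names, not the real structs):

```c
#include <stddef.h>

typedef struct MockPath
{
    double      startup_cost;
    double      total_cost;
} MockPath;

typedef struct MockRel
{
    MockPath  **pathlist;               /* stands in for a List of paths */
    int         npaths;
    MockPath   *cheapest_startup_path;  /* cached; may alias pathlist[i] */
    MockPath   *cheapest_total_path;
} MockRel;

/*
 * Replace pathlist[i] with newpath, redirecting any cached cheapest-path
 * pointers that referenced the old path.  If no wrapper node was added,
 * newpath == oldpath and there is nothing to do.
 */
static void
replace_path(MockRel *rel, int i, MockPath *newpath)
{
    MockPath   *oldpath = rel->pathlist[i];

    if (newpath == oldpath)
        return;
    rel->pathlist[i] = newpath;
    if (rel->cheapest_startup_path == oldpath)
        rel->cheapest_startup_path = newpath;
    if (rel->cheapest_total_path == oldpath)
        rel->cheapest_total_path = newpath;
}
```

Skipping a fresh `set_cheapest()` here, as the patch's comment argues, is sound because the same tlist evaluation cost lands on every path, so relative ordering is unchanged.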
- */ - if (list_length(rollup_lists) > 1) - tuple_fraction = 0.0; - - /* - * If both GROUP BY and ORDER BY are specified, we will need two - * levels of sort --- and, therefore, certainly need to read all - * the tuples --- unless ORDER BY is a subset of GROUP BY. - * Likewise if we have both DISTINCT and GROUP BY, or if we have a - * window specification not compatible with the GROUP BY. - */ - if (!pathkeys_contained_in(root->sort_pathkeys, - root->group_pathkeys) || - !pathkeys_contained_in(root->distinct_pathkeys, - root->group_pathkeys) || - !pathkeys_contained_in(root->window_pathkeys, - root->group_pathkeys)) - tuple_fraction = 0.0; - } - else if (parse->hasAggs || root->hasHavingQual || parse->groupingSets) - { - /* - * Ungrouped aggregate will certainly want to read all the tuples, - * and it will deliver a single result row per grouping set (or 1 - * if no grouping sets were explicitly given, in which case leave - * dNumGroups as-is) - */ - tuple_fraction = 0.0; - if (parse->groupingSets) - dNumGroups = list_length(parse->groupingSets); } - else if (parse->distinctClause) - { - /* - * Since there was no grouping or aggregation, it's reasonable to - * assume the UNIQUE filter has effects comparable to GROUP BY. - * (If DISTINCT is used with grouping, we ignore its effects for - * rowcount estimation purposes; this amounts to assuming the - * grouped rows are distinct already.) - */ - List *distinctExprs; - - distinctExprs = get_sortgrouplist_exprs(parse->distinctClause, - parse->targetList); - dNumGroups = estimate_num_groups(root, distinctExprs, path_rows, NULL); - /* - * Adjust tuple_fraction the same way as for GROUP BY, too. - */ - if (tuple_fraction >= 1.0) - tuple_fraction /= dNumGroups; - } + /* + * Determine the tlist we need grouping paths to emit. While we could + * skip this if we're not going to call create_grouping_paths, it's + * trivial unless we've got window functions, and then we have to do + * the work anyway. 
(XXX: refactor to work with PathTargets instead + * of tlists) + */ + if (activeWindows) + grouping_tlist = make_windowInputTargetList(root, + tlist, + activeWindows); else + grouping_tlist = tlist; + + /* + * If we have grouping and/or aggregation, consider ways to implement + * that. We build a new upperrel representing the output of this + * phase. + */ + if (parse->groupClause || parse->groupingSets || parse->hasAggs || + root->hasHavingQual) { - /* - * Plain non-grouped, non-aggregated query: an absolute tuple - * fraction can be divided by the number of tuples. - */ - if (tuple_fraction >= 1.0) - tuple_fraction /= path_rows; + current_rel = create_grouping_paths(root, + current_rel, + create_pathtarget(root, + grouping_tlist), + groupColIdx, + rollup_lists, + rollup_groupclauses); } /* - * Pick out the cheapest-total path as well as the cheapest presorted - * path for the requested pathkeys (if there is one). We should take - * the tuple fraction into account when selecting the cheapest - * presorted path, but not when selecting the cheapest-total path, - * since if we have to sort then we'll have to fetch all the tuples. - * (But there's a special case: if query_pathkeys is NIL, meaning - * order doesn't matter, then the "cheapest presorted" path will be - * the cheapest overall for the tuple fraction.) + * If we have window functions, consider ways to implement those. We + * build a new upperrel representing the output of this phase. 
*/ - cheapest_path = final_rel->cheapest_total_path; - - sorted_path = - get_cheapest_fractional_path_for_pathkeys(final_rel->pathlist, - root->query_pathkeys, - NULL, - tuple_fraction); - - /* Don't consider same path in both guises; just wastes effort */ - if (sorted_path == cheapest_path) - sorted_path = NULL; + if (activeWindows) + { + current_rel = create_window_paths(root, + current_rel, + grouping_tlist, + tlist, + wflists, + activeWindows); + } /* - * Forget about the presorted path if it would be cheaper to sort the - * cheapest-total path. Here we need consider only the behavior at - * the tuple_fraction point. Also, limit_tuples is only relevant if - * not grouping/aggregating, so use root->limit_tuples in the - * cost_sort call. + * If there are set-returning functions in the tlist, scale up the + * assumed output rowcounts of all surviving Paths to account for + * that. This is a bit of a kluge, but it's not clear how to account + * for it in a more principled way. We definitely don't want to apply + * the multiplier more than once, which would happen if we tried to + * fold it into PathTarget accounting. And the expansion does happen + * before any explicit DISTINCT or ORDER BY processing is done. 
*/ - if (sorted_path) + tlist_rows = tlist_returns_set_rows(tlist); + if (tlist_rows > 1) { - Path sort_path; /* dummy for result of cost_sort */ - - if (root->query_pathkeys == NIL || - pathkeys_contained_in(root->query_pathkeys, - cheapest_path->pathkeys)) - { - /* No sort needed for cheapest path */ - sort_path.startup_cost = cheapest_path->startup_cost; - sort_path.total_cost = cheapest_path->total_cost; - } - else + foreach(lc, current_rel->pathlist) { - /* Figure cost for sorting */ - cost_sort(&sort_path, root, root->query_pathkeys, - cheapest_path->total_cost, - path_rows, cheapest_path->pathtarget->width, - 0.0, work_mem, root->limit_tuples); - } + Path *path = (Path *) lfirst(lc); - if (compare_fractional_path_costs(sorted_path, &sort_path, - tuple_fraction) > 0) - { - /* Presorted path is a loser */ - sorted_path = NULL; + /* + * We assume that execution costs of the tlist as such were + * already accounted for. However, it still seems appropriate + * to charge something more for the executor's general costs + * of processing the added tuples. The cost is probably less + * than cpu_tuple_cost, though, so we arbitrarily use half of + * that. + */ + path->total_cost += path->rows * (tlist_rows - 1) * + cpu_tuple_cost / 2; + + path->rows *= tlist_rows; } + + /* There seems no need for a fresh set_cheapest comparison. */ } /* - * Consider whether we want to use hashing instead of sorting. + * If there is a DISTINCT clause, consider ways to implement that. We + * build a new upperrel representing the output of this phase. */ - if (parse->groupClause) + if (parse->distinctClause) { - /* - * If grouping, decide whether to use sorted or hashed grouping. - * If grouping sets are present, we can currently do only sorted - * grouping. 
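The set-returning-function kluge above applies two adjustments to every surviving path: a surcharge of half a `cpu_tuple_cost` per extra emitted tuple, then a rowcount multiplication. A self-contained sketch of that arithmetic (the struct and constant are simplified stand-ins; 0.01 is the documented default of the real `cpu_tuple_cost` GUC):

```c
#define CPU_TUPLE_COST 0.01     /* default value of the cpu_tuple_cost GUC */

typedef struct SrfPath
{
    double      rows;
    double      total_cost;
} SrfPath;

/*
 * Scale a path's estimates for set-returning functions in the tlist.
 * tlist_rows is the expansion factor (tlist_returns_set_rows() result);
 * 1 means no SRFs, so nothing to do.
 */
static void
scale_path_for_srfs(SrfPath *path, double tlist_rows)
{
    if (tlist_rows <= 1)
        return;
    /* tlist eval cost is already charged; add only half a cpu_tuple_cost
     * of general executor overhead per added tuple, as the comment says. */
    path->total_cost += path->rows * (tlist_rows - 1) * CPU_TUPLE_COST / 2;
    path->rows *= tlist_rows;
}
```

Note the order matters: the cost adjustment uses the pre-expansion `rows`, so it must precede the `rows *= tlist_rows` step, exactly as in the patch.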
- */ + current_rel = create_distinct_paths(root, + current_rel); + } - if (parse->groupingSets) - { - use_hashed_grouping = false; - } - else - { - use_hashed_grouping = - choose_hashed_grouping(root, - tuple_fraction, limit_tuples, - path_rows, - cheapest_path, sorted_path, - dNumGroups, &agg_costs); - } + } /* end of if (setOperations) */ - /* Also convert # groups to long int --- but 'ware overflow! */ - numGroups = (long) Min(dNumGroups, (double) LONG_MAX); - } - else if (parse->distinctClause && sorted_path && - !root->hasHavingQual && !parse->hasAggs && !activeWindows) - { - /* - * We'll reach the DISTINCT stage without any intermediate - * processing, so figure out whether we will want to hash or not - * so we can choose whether to use cheapest or sorted path. - */ - use_hashed_distinct = - choose_hashed_distinct(root, - tuple_fraction, limit_tuples, - path_rows, - cheapest_path->startup_cost, - cheapest_path->total_cost, - cheapest_path->pathtarget->width, - sorted_path->startup_cost, - sorted_path->total_cost, - sorted_path->pathtarget->width, - sorted_path->pathkeys, - dNumGroups); - tested_hashed_distinct = true; - } + /* + * If ORDER BY was given, consider ways to implement that, and generate a + * new upperrel containing only paths that emit the correct ordering. We + * can apply the original limit_tuples limit in sorting now. + */ + if (parse->sortClause) + { + current_rel = create_ordered_paths(root, + current_rel, + limit_tuples); + } + + /* + * Now we are prepared to build the final-output upperrel. Insert all + * surviving paths, with LockRows, Limit, and/or ModifyTable steps added + * if needed. + */ + final_rel = fetch_upper_rel(root, UPPERREL_FINAL, NULL); + + foreach(lc, current_rel->pathlist) + { + Path *path = (Path *) lfirst(lc); /* - * Select the best path. If we are doing hashed grouping, we will - * always read all the input tuples, so use the cheapest-total path. - * Otherwise, the comparison above is correct. 
+ * If there is a FOR [KEY] UPDATE/SHARE clause, add the LockRows node. + * (Note: we intentionally test parse->rowMarks not root->rowMarks + * here. If there are only non-locking rowmarks, they should be + * handled by the ModifyTable node instead. However, root->rowMarks + * is what goes into the LockRows node.) */ - if (use_hashed_grouping || use_hashed_distinct || !sorted_path) - best_path = cheapest_path; - else - best_path = sorted_path; + if (parse->rowMarks) + { + path = (Path *) create_lockrows_path(root, final_rel, path, + root->rowMarks, + SS_assign_special_param(root)); + } /* - * Check to see if it's possible to optimize MIN/MAX aggregates. If - * so, we will forget all the work we did so far to choose a "regular" - * path ... but we had to do it anyway to be able to tell which way is - * cheaper. + * If there is a LIMIT/OFFSET clause, add the LIMIT node. */ - result_plan = optimize_minmax_aggregates(root, - tlist, - &agg_costs, - best_path); - if (result_plan != NULL) + if (limit_needed(parse)) { - /* - * optimize_minmax_aggregates generated the full plan, with the - * right tlist, and it has no sort order. - */ - current_pathkeys = NIL; + path = (Path *) create_limit_path(root, final_rel, path, + parse->limitOffset, + parse->limitCount, + offset_est, count_est); } - else - { - /* - * Normal case --- create a plan according to query_planner's - * results. - */ - List *sub_tlist; - AttrNumber *groupColIdx = NULL; - bool need_tlist_eval = true; - bool need_sort_for_grouping = false; - result_plan = create_plan(root, best_path); - current_pathkeys = best_path->pathkeys; - - /* Detect if we'll need an explicit sort for grouping */ - if (parse->groupClause && !use_hashed_grouping && - !pathkeys_contained_in(root->group_pathkeys, current_pathkeys)) - need_sort_for_grouping = true; - - /* - * Generate appropriate target list for scan/join subplan; may be - * different from tlist if grouping or aggregation is needed. 
- */ - sub_tlist = make_subplanTargetList(root, tlist, - &groupColIdx, - &need_tlist_eval); + /* + * If this is an INSERT/UPDATE/DELETE, and we're not being called from + * inheritance_planner, add the ModifyTable node. + */ + if (parse->commandType != CMD_SELECT && !inheritance_update) + { + List *withCheckOptionLists; + List *returningLists; + List *rowMarks; /* - * create_plan returns a plan with just a "flat" tlist of required - * Vars. Usually we need to insert the sub_tlist as the tlist of - * the top plan node. However, we can skip that if we determined - * that whatever create_plan chose to return will be good enough. - * - * If we need_sort_for_grouping, always override create_plan's - * tlist, so that we don't sort useless data from a "physical" - * tlist. + * Set up the WITH CHECK OPTION and RETURNING lists-of-lists, if + * needed. */ - if (need_tlist_eval || need_sort_for_grouping) - { - /* - * If the top-level plan node is one that cannot do expression - * evaluation and its existing target list isn't already what - * we need, we must insert a Result node to project the - * desired tlist. - */ - if (!is_projection_capable_plan(result_plan) && - !tlist_same_exprs(sub_tlist, result_plan->targetlist)) - { - result_plan = (Plan *) make_result(root, - sub_tlist, - NULL, - result_plan); - } - else - { - /* - * Otherwise, just replace the subplan's flat tlist with - * the desired tlist. - */ - result_plan->targetlist = sub_tlist; - } + if (parse->withCheckOptions) + withCheckOptionLists = list_make1(parse->withCheckOptions); + else + withCheckOptionLists = NIL; - /* - * Also, account for the cost of evaluation of the sub_tlist. - * See comments for add_tlist_costs_to_plan() for more info. 
- */ - add_tlist_costs_to_plan(root, result_plan, sub_tlist); - } + if (parse->returningList) + returningLists = list_make1(parse->returningList); else - { - /* - * Since we're using create_plan's tlist and not the one - * make_subplanTargetList calculated, we have to refigure any - * grouping-column indexes make_subplanTargetList computed. - */ - locate_grouping_columns(root, tlist, result_plan->targetlist, - groupColIdx); - } + returningLists = NIL; /* - * groupColIdx is now cast in stone, so record a mapping from - * tleSortGroupRef to column index. setrefs.c will need this to - * finalize GROUPING() operations. + * If there was a FOR [KEY] UPDATE/SHARE clause, the LockRows node + * will have dealt with fetching non-locked marked rows, else we + * need to have ModifyTable do that. */ - if (parse->groupingSets) - { - AttrNumber *grouping_map = palloc0(sizeof(AttrNumber) * (maxref + 1)); - ListCell *lc; - int i = 0; + if (parse->rowMarks) + rowMarks = NIL; + else + rowMarks = root->rowMarks; - foreach(lc, parse->groupClause) - { - SortGroupClause *gc = lfirst(lc); + path = (Path *) + create_modifytable_path(root, final_rel, + parse->commandType, + parse->canSetTag, + parse->resultRelation, + list_make1_int(parse->resultRelation), + list_make1(path), + list_make1(root), + withCheckOptionLists, + returningLists, + rowMarks, + parse->onConflict, + SS_assign_special_param(root)); + } - grouping_map[gc->tleSortGroupRef] = groupColIdx[i++]; - } + /* And shove it into final_rel */ + add_path(final_rel, path); + } - root->grouping_map = grouping_map; - } - - /* - * Insert AGG or GROUP node if needed, plus an explicit sort step - * if necessary. - * - * HAVING clause, if any, becomes qual of the Agg or Group node. 
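The final-rel loop above wraps each surviving path, in a fixed order, with LockRows (for `FOR [KEY] UPDATE/SHARE`), then Limit (for `LIMIT/OFFSET`), then ModifyTable (for DML outside `inheritance_planner`), before `add_path` files it into the `UPPERREL_FINAL` rel. A simplified mock capturing just that wrapping order (all types and names here are stand-ins, not the real `create_*_path` API):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct PlanStep
{
    const char *label;
    struct PlanStep *child;
} PlanStep;

static PlanStep *
wrap(const char *label, PlanStep *child)
{
    PlanStep   *p = malloc(sizeof(PlanStep));

    p->label = label;
    p->child = child;
    return p;
}

/*
 * Mirror the stacking order used in the patch's final_rel loop:
 * LockRows below Limit below ModifyTable, each added only if needed.
 */
static PlanStep *
finalize_path(PlanStep *scanjoin, bool has_row_marks, bool has_limit,
              bool is_dml, bool inheritance_update)
{
    PlanStep   *path = scanjoin;

    if (has_row_marks)
        path = wrap("LockRows", path);
    if (has_limit)
        path = wrap("Limit", path);
    if (is_dml && !inheritance_update)
        path = wrap("ModifyTable", path);
    return path;
}
```

The `inheritance_update` escape hatch reflects the division of labor described earlier: when called from `inheritance_planner`, the single covering ModifyTable is added by the caller, not here.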
- */ - if (use_hashed_grouping) - { - /* Hashed aggregate plan --- no sort needed */ - result_plan = (Plan *) make_agg(root, - tlist, - (List *) parse->havingQual, - AGG_HASHED, - &agg_costs, - numGroupCols, - groupColIdx, - extract_grouping_ops(parse->groupClause), - NIL, - numGroups, - false, - true, - result_plan); - /* Hashed aggregation produces randomly-ordered results */ - current_pathkeys = NIL; - } - else if (parse->hasAggs || - (parse->groupingSets && parse->groupClause)) - { - /* - * Aggregation and/or non-degenerate grouping sets. - * - * Output is in sorted order by group_pathkeys if, and only - * if, there is a single rollup operation on a non-empty list - * of grouping expressions. - */ - if (list_length(rollup_groupclauses) == 1 - && list_length(linitial(rollup_groupclauses)) > 0) - current_pathkeys = root->group_pathkeys; - else - current_pathkeys = NIL; - - result_plan = build_grouping_chain(root, - parse, - tlist, - need_sort_for_grouping, - rollup_groupclauses, - rollup_lists, - groupColIdx, - &agg_costs, - numGroups, - result_plan); - } - else if (parse->groupClause) - { - /* - * GROUP BY without aggregation, so insert a group node (plus - * the appropriate sort node, if necessary). - * - * Add an explicit sort if we couldn't make the path come out - * the way the GROUP node needs it. 
- */ - if (need_sort_for_grouping) - { - result_plan = (Plan *) - make_sort_from_groupcols(root, - parse->groupClause, - groupColIdx, - result_plan); - current_pathkeys = root->group_pathkeys; - } - - result_plan = (Plan *) make_group(root, - tlist, - (List *) parse->havingQual, - numGroupCols, - groupColIdx, - extract_grouping_ops(parse->groupClause), - dNumGroups, - result_plan); - /* The Group node won't change sort ordering */ - } - else if (root->hasHavingQual || parse->groupingSets) - { - int nrows = list_length(parse->groupingSets); - - /* - * No aggregates, and no GROUP BY, but we have a HAVING qual - * or grouping sets (which by elimination of cases above must - * consist solely of empty grouping sets, since otherwise - * groupClause will be non-empty). - * - * This is a degenerate case in which we are supposed to emit - * either 0 or 1 row for each grouping set depending on - * whether HAVING succeeds. Furthermore, there cannot be any - * variables in either HAVING or the targetlist, so we - * actually do not need the FROM table at all! We can just - * throw away the plan-so-far and generate a Result node. This - * is a sufficiently unusual corner case that it's not worth - * contorting the structure of this routine to avoid having to - * generate the plan in the first place. - */ - result_plan = (Plan *) make_result(root, - tlist, - parse->havingQual, - NULL); - - /* - * Doesn't seem worthwhile writing code to cons up a - * generate_series or a values scan to emit multiple rows. - * Instead just clone the result in an Append. - */ - if (nrows > 1) - { - List *plans = list_make1(result_plan); - - while (--nrows > 0) - plans = lappend(plans, copyObject(result_plan)); - - result_plan = (Plan *) make_append(plans, tlist); - } - } - } /* end of non-minmax-aggregate case */ - - /* - * Since each window function could require a different sort order, we - * stack up a WindowAgg node for each window, with sort steps between - * them as needed. 
- */ - if (activeWindows) - { - List *window_tlist; - ListCell *l; - - /* - * If the top-level plan node is one that cannot do expression - * evaluation, we must insert a Result node to project the desired - * tlist. (In some cases this might not really be required, but - * it's not worth trying to avoid it. In particular, think not to - * skip adding the Result if the initial window_tlist matches the - * top-level plan node's output, because we might change the tlist - * inside the following loop.) Note that on second and subsequent - * passes through the following loop, the top-level node will be a - * WindowAgg which we know can project; so we only need to check - * once. - */ - if (!is_projection_capable_plan(result_plan)) - { - result_plan = (Plan *) make_result(root, - NIL, - NULL, - result_plan); - } - - /* - * The "base" targetlist for all steps of the windowing process is - * a flat tlist of all Vars and Aggs needed in the result. (In - * some cases we wouldn't need to propagate all of these all the - * way to the top, since they might only be needed as inputs to - * WindowFuncs. It's probably not worth trying to optimize that - * though.) We also add window partitioning and sorting - * expressions to the base tlist, to ensure they're computed only - * once at the bottom of the stack (that's critical for volatile - * functions). As we climb up the stack, we'll add outputs for - * the WindowFuncs computed at each level. - */ - window_tlist = make_windowInputTargetList(root, - tlist, - activeWindows); - - /* - * The copyObject steps here are needed to ensure that each plan - * node has a separately modifiable tlist. (XXX wouldn't a - * shallow list copy do for that?) 
- */ - result_plan->targetlist = (List *) copyObject(window_tlist); - - foreach(l, activeWindows) - { - WindowClause *wc = (WindowClause *) lfirst(l); - List *window_pathkeys; - int partNumCols; - AttrNumber *partColIdx; - Oid *partOperators; - int ordNumCols; - AttrNumber *ordColIdx; - Oid *ordOperators; - - window_pathkeys = make_pathkeys_for_window(root, - wc, - tlist); - - /* - * This is a bit tricky: we build a sort node even if we don't - * really have to sort. Even when no explicit sort is needed, - * we need to have suitable resjunk items added to the input - * plan's tlist for any partitioning or ordering columns that - * aren't plain Vars. (In theory, make_windowInputTargetList - * should have provided all such columns, but let's not assume - * that here.) Furthermore, this way we can use existing - * infrastructure to identify which input columns are the - * interesting ones. - */ - if (window_pathkeys) - { - Sort *sort_plan; - - sort_plan = make_sort_from_pathkeys(root, - result_plan, - window_pathkeys, - -1.0); - if (!pathkeys_contained_in(window_pathkeys, - current_pathkeys)) - { - /* we do indeed need to sort */ - result_plan = (Plan *) sort_plan; - current_pathkeys = window_pathkeys; - } - /* In either case, extract the per-column information */ - get_column_info_for_window(root, wc, tlist, - sort_plan->numCols, - sort_plan->sortColIdx, - &partNumCols, - &partColIdx, - &partOperators, - &ordNumCols, - &ordColIdx, - &ordOperators); - } - else - { - /* empty window specification, nothing to sort */ - partNumCols = 0; - partColIdx = NULL; - partOperators = NULL; - ordNumCols = 0; - ordColIdx = NULL; - ordOperators = NULL; - } - - if (lnext(l)) - { - /* Add the current WindowFuncs to the running tlist */ - window_tlist = add_to_flat_tlist(window_tlist, - wflists->windowFuncs[wc->winref]); - } - else - { - /* Install the original tlist in the topmost WindowAgg */ - window_tlist = tlist; - } - - /* ... 
and make the WindowAgg plan node */ - result_plan = (Plan *) - make_windowagg(root, - (List *) copyObject(window_tlist), - wflists->windowFuncs[wc->winref], - wc->winref, - partNumCols, - partColIdx, - partOperators, - ordNumCols, - ordColIdx, - ordOperators, - wc->frameOptions, - wc->startOffset, - wc->endOffset, - result_plan); - } - } - } /* end of if (setOperations) */ - - /* - * If there is a DISTINCT clause, add the necessary node(s). - */ - if (parse->distinctClause) - { - double dNumDistinctRows; - long numDistinctRows; - - /* - * If there was grouping or aggregation, use the current number of - * rows as the estimated number of DISTINCT rows (ie, assume the - * result was already mostly unique). If not, use the number of - * distinct-groups calculated previously. - */ - if (parse->groupClause || parse->groupingSets || root->hasHavingQual || parse->hasAggs) - dNumDistinctRows = result_plan->plan_rows; - else - dNumDistinctRows = dNumGroups; - - /* Also convert to long int --- but 'ware overflow! */ - numDistinctRows = (long) Min(dNumDistinctRows, (double) LONG_MAX); - - /* Choose implementation method if we didn't already */ - if (!tested_hashed_distinct) - { - /* - * At this point, either hashed or sorted grouping will have to - * work from result_plan, so we pass that as both "cheapest" and - * "sorted". 
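The "'ware overflow" conversion above (take the Min of the double estimate and LONG_MAX, then cast) can be sketched as a standalone helper. This is a model, not the patch's code; the clamp-to-one floor is an addition here, reflecting the planner's stated convention of never estimating below one row:

```c
#include <assert.h>
#include <limits.h>

/* Standalone sketch of the "'ware overflow" conversion: clamp a double
 * row estimate into a long without overflowing in the cast. */
static long
clamp_rows_to_long(double d)
{
    if (d >= (double) LONG_MAX)
        return LONG_MAX;
    if (d < 1.0)
        return 1;               /* planner never estimates below one row */
    return (long) d;
}
```

For example, `clamp_rows_to_long(1e30)` pins the result at `LONG_MAX` instead of invoking undefined behavior in the double-to-long cast.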
- */ - use_hashed_distinct = - choose_hashed_distinct(root, - tuple_fraction, limit_tuples, - result_plan->plan_rows, - result_plan->startup_cost, - result_plan->total_cost, - result_plan->plan_width, - result_plan->startup_cost, - result_plan->total_cost, - result_plan->plan_width, - current_pathkeys, - dNumDistinctRows); - } - - if (use_hashed_distinct) - { - /* Hashed aggregate plan --- no sort needed */ - result_plan = (Plan *) make_agg(root, - result_plan->targetlist, - NIL, - AGG_HASHED, - NULL, - list_length(parse->distinctClause), - extract_grouping_cols(parse->distinctClause, - result_plan->targetlist), - extract_grouping_ops(parse->distinctClause), - NIL, - numDistinctRows, - false, - true, - result_plan); - /* Hashed aggregation produces randomly-ordered results */ - current_pathkeys = NIL; - } - else - { - /* - * Use a Unique node to implement DISTINCT. Add an explicit sort - * if we couldn't make the path come out the way the Unique node - * needs it. If we do have to sort, always sort by the more - * rigorous of DISTINCT and ORDER BY, to avoid a second sort - * below. However, for regular DISTINCT, don't sort now if we - * don't have to --- sorting afterwards will likely be cheaper, - * and also has the possibility of optimizing via LIMIT. But for - * DISTINCT ON, we *must* force the final sort now, else it won't - * have the desired behavior. - */ - List *needed_pathkeys; - - if (parse->hasDistinctOn && - list_length(root->distinct_pathkeys) < - list_length(root->sort_pathkeys)) - needed_pathkeys = root->sort_pathkeys; - else - needed_pathkeys = root->distinct_pathkeys; - - if (!pathkeys_contained_in(needed_pathkeys, current_pathkeys)) - { - if (list_length(root->distinct_pathkeys) >= - list_length(root->sort_pathkeys)) - current_pathkeys = root->distinct_pathkeys; - else - { - current_pathkeys = root->sort_pathkeys; - /* Assert checks that parser didn't mess up... 
*/ - Assert(pathkeys_contained_in(root->distinct_pathkeys, - current_pathkeys)); - } - - result_plan = (Plan *) make_sort_from_pathkeys(root, - result_plan, - current_pathkeys, - -1.0); - } - - result_plan = (Plan *) make_unique(result_plan, - parse->distinctClause); - result_plan->plan_rows = dNumDistinctRows; - /* The Unique node won't change sort ordering */ - } - } - - /* - * If ORDER BY was given and we were not able to make the plan come out in - * the right order, add an explicit sort step. - */ - if (parse->sortClause) - { - if (!pathkeys_contained_in(root->sort_pathkeys, current_pathkeys)) - { - result_plan = (Plan *) make_sort_from_pathkeys(root, - result_plan, - root->sort_pathkeys, - limit_tuples); - current_pathkeys = root->sort_pathkeys; - } - } - - /* - * If there is a FOR [KEY] UPDATE/SHARE clause, add the LockRows node. - * (Note: we intentionally test parse->rowMarks not root->rowMarks here. - * If there are only non-locking rowmarks, they should be handled by the - * ModifyTable node instead.) - */ - if (parse->rowMarks) - { - result_plan = (Plan *) make_lockrows(result_plan, - root->rowMarks, - SS_assign_special_param(root)); - - /* - * The result can no longer be assumed sorted, since locking might - * cause the sort key columns to be replaced with new values. - */ - current_pathkeys = NIL; - } - - /* - * Finally, if there is a LIMIT/OFFSET clause, add the LIMIT node. - */ - if (limit_needed(parse)) - { - result_plan = (Plan *) make_limit(result_plan, - parse->limitOffset, - parse->limitCount, - offset_est, - count_est); - } - - /* - * Return the actual output ordering in query_pathkeys for possible use by - * an outer query level. - */ - root->query_pathkeys = current_pathkeys; - - return result_plan; -} + /* Note: currently, we leave it to callers to do set_cheapest() */ +} -/* - * Given a groupclause for a collection of grouping sets, produce the - * corresponding groupColIdx. 
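Several of the decisions above (window sorts, DISTINCT, ORDER BY) hinge on pathkeys_contained_in(). The real function compares canonical PathKey pointers; as a rough standalone model, one required ordering is satisfied by an existing ordering exactly when it is a leading prefix of it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Rough model of pathkeys_contained_in(): the ordering "needed" is
 * satisfied by an input sorted on "have" iff needed is a leading
 * prefix of have.  Keys are simplified to int identifiers here. */
static bool
ordering_satisfied(const int *needed, size_t n_needed,
                   const int *have, size_t n_have)
{
    if (n_needed > n_have)
        return false;
    for (size_t i = 0; i < n_needed; i++)
    {
        if (needed[i] != have[i])
            return false;
    }
    return true;
}
```

An empty requirement is trivially satisfied, which is why, as the grouping code later notes, every input path qualifies when there is no GROUP BY clause.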
- * - * root->grouping_map maps the tleSortGroupRef to the actual column position in - * the input tuple. So we get the ref from the entries in the groupclause and - * look them up there. - */ -static AttrNumber * -remap_groupColIdx(PlannerInfo *root, List *groupClause) -{ - AttrNumber *grouping_map = root->grouping_map; - AttrNumber *new_grpColIdx; - ListCell *lc; - int i; - - Assert(grouping_map); - - new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(groupClause)); - - i = 0; - foreach(lc, groupClause) - { - SortGroupClause *clause = lfirst(lc); - - new_grpColIdx[i++] = grouping_map[clause->tleSortGroupRef]; - } - - return new_grpColIdx; -} - -/* - * Build Agg and Sort nodes to implement sorted grouping with one or more - * grouping sets. A plain GROUP BY or just the presence of aggregates counts - * for this purpose as a single grouping set; the calling code is responsible - * for providing a single-element rollup_groupclauses list for such cases, - * though rollup_lists may be nil. - * - * The last entry in rollup_groupclauses (which is the one the input is sorted - * on, if at all) is the one used for the returned Agg node. Any additional - * rollups are attached, with corresponding sort info, to subsidiary Agg and - * Sort nodes attached to the side of the real Agg node; these nodes don't - * participate in the plan directly, but they are both a convenient way to - * represent the required data and a convenient way to account for the costs - * of execution. 
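remap_groupColIdx(), whose header comment appears above, is essentially one array-indirection pass. A simplified standalone version, with plain ints standing in for SortGroupClause nodes and AttrNumber:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified sketch of remap_groupColIdx(): grouping_map translates a
 * tleSortGroupRef into the column position in the input tuple, so the
 * output array is just grouping_map looked up per clause entry. */
static int *
remap_group_col_idx(const int *grouping_map,
                    const int *sort_group_refs, int nrefs)
{
    int *new_idx = calloc(nrefs, sizeof(int));

    for (int i = 0; i < nrefs; i++)
        new_idx[i] = grouping_map[sort_group_refs[i]];
    return new_idx;
}
```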
- */ -static Plan * -build_grouping_chain(PlannerInfo *root, - Query *parse, - List *tlist, - bool need_sort_for_grouping, - List *rollup_groupclauses, - List *rollup_lists, - AttrNumber *groupColIdx, - AggClauseCosts *agg_costs, - long numGroups, - Plan *result_plan) -{ - AttrNumber *top_grpColIdx = groupColIdx; - List *chain = NIL; - - /* - * Prepare the grpColIdx for the real Agg node first, because we may need - * it for sorting - */ - if (parse->groupingSets) - top_grpColIdx = remap_groupColIdx(root, llast(rollup_groupclauses)); - - /* - * If we need a Sort operation on the input, generate that. - */ - if (need_sort_for_grouping) - { - result_plan = (Plan *) - make_sort_from_groupcols(root, - llast(rollup_groupclauses), - top_grpColIdx, - result_plan); - } - - /* - * Generate the side nodes that describe the other sort and group - * operations besides the top one. - */ - if (list_length(rollup_groupclauses) > 1) - { - ListCell *lc, - *lc2; - - Assert(list_length(rollup_groupclauses) == list_length(rollup_lists)); - forboth(lc, rollup_groupclauses, lc2, rollup_lists) - { - List *groupClause = (List *) lfirst(lc); - List *gsets = (List *) lfirst(lc2); - AttrNumber *new_grpColIdx; - Plan *sort_plan; - Plan *agg_plan; - - /* We want to iterate over all but the last rollup list elements */ - if (lnext(lc) == NULL) - break; - - new_grpColIdx = remap_groupColIdx(root, groupClause); - - sort_plan = (Plan *) - make_sort_from_groupcols(root, - groupClause, - new_grpColIdx, - result_plan); - - /* - * sort_plan includes the cost of result_plan, which is not what - * we want (since we'll not actually run that plan again). So - * correct the cost figures. 
- */ - sort_plan->startup_cost -= result_plan->total_cost; - sort_plan->total_cost -= result_plan->total_cost; - - agg_plan = (Plan *) make_agg(root, - tlist, - (List *) parse->havingQual, - AGG_SORTED, - agg_costs, - list_length(linitial(gsets)), - new_grpColIdx, - extract_grouping_ops(groupClause), - gsets, - numGroups, - false, - true, - sort_plan); - - /* - * Nuke stuff we don't need to avoid bloating debug output. - */ - sort_plan->targetlist = NIL; - sort_plan->lefttree = NULL; - - agg_plan->targetlist = NIL; - agg_plan->qual = NIL; - - chain = lappend(chain, agg_plan); - } - } - - /* - * Now make the final Agg node - */ - { - List *groupClause = (List *) llast(rollup_groupclauses); - List *gsets = rollup_lists ? (List *) llast(rollup_lists) : NIL; - int numGroupCols; - ListCell *lc; - - if (gsets) - numGroupCols = list_length(linitial(gsets)); - else - numGroupCols = list_length(parse->groupClause); - - result_plan = (Plan *) make_agg(root, - tlist, - (List *) parse->havingQual, - (numGroupCols > 0) ? AGG_SORTED : AGG_PLAIN, - agg_costs, - numGroupCols, - top_grpColIdx, - extract_grouping_ops(groupClause), - gsets, - numGroups, - false, - true, - result_plan); - - ((Agg *) result_plan)->chain = chain; - - /* - * Add the additional costs. But only the total costs count, since the - * additional sorts aren't run on startup. - */ - foreach(lc, chain) - { - Plan *subplan = lfirst(lc); - - result_plan->total_cost += subplan->total_cost; - } - } - - return result_plan; -} - -/* - * add_tlist_costs_to_plan - * - * Estimate the execution costs associated with evaluating the targetlist - * expressions, and add them to the cost estimates for the Plan node. - * - * If the tlist contains set-returning functions, also inflate the Plan's cost - * and plan_rows estimates accordingly. (Hence, this must be called *after* - * any logic that uses plan_rows to, eg, estimate qual evaluation costs.) 
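The cost correction above, subtracting the shared input's total cost from each side Sort and then folding the chain members' totals into the final Agg, is plain arithmetic. A hedged sketch with a minimal cost struct (the struct and function names are inventions of this sketch):

```c
#include <assert.h>

/* Minimal cost struct for sketching the side-node correction above. */
typedef struct PlanCostModel
{
    double startup_cost;
    double total_cost;
} PlanCostModel;

/* A side Sort's estimate initially includes its input subtree, but the
 * input runs only once for the whole chain, so the shared portion is
 * subtracted back out. */
static void
discount_shared_input(PlanCostModel *sort_plan, const PlanCostModel *input)
{
    sort_plan->startup_cost -= input->total_cost;
    sort_plan->total_cost -= input->total_cost;
}

/* The final Agg then absorbs only the chain members' total costs;
 * the side sorts don't contribute to startup cost. */
static double
chain_total(double final_total, const double *side_totals, int nsides)
{
    for (int i = 0; i < nsides; i++)
        final_total += side_totals[i];
    return final_total;
}
```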
- * - * Note: during initial stages of planning, we mostly consider plan nodes with - * "flat" tlists, containing just Vars and PlaceHolderVars. The evaluation - * cost of Vars is zero according to the model used by cost_qual_eval() (or if - * you prefer, the cost is factored into cpu_tuple_cost). The evaluation cost - * of a PHV's expression is charged as part of the scan cost of whichever plan - * node first computes it, and then subsequent references to the PHV can be - * taken as having cost zero. Thus we can avoid worrying about tlist cost - * as such throughout query_planner() and subroutines. But once we apply a - * tlist that might contain actual operators, sub-selects, etc, we'd better - * account for its cost. Any set-returning functions in the tlist must also - * affect the estimated rowcount. - * - * Once grouping_planner() has applied a general tlist to the topmost - * scan/join plan node, any tlist eval cost for added-on nodes should be - * accounted for as we create those nodes. Presently, of the node types we - * can add on later, only Agg, WindowAgg, and Group project new tlists (the - * rest just copy their input tuples) --- so make_agg(), make_windowagg() and - * make_group() are responsible for calling this function to account for their - * tlist costs. - */ -void -add_tlist_costs_to_plan(PlannerInfo *root, Plan *plan, List *tlist) -{ - QualCost tlist_cost; - double tlist_rows; - - cost_qual_eval(&tlist_cost, tlist, root); - plan->startup_cost += tlist_cost.startup; - plan->total_cost += tlist_cost.startup + - tlist_cost.per_tuple * plan->plan_rows; - - tlist_rows = tlist_returns_set_rows(tlist); - if (tlist_rows > 1) - { - /* - * We assume that execution costs of the tlist proper were all - * accounted for by cost_qual_eval. However, it still seems - * appropriate to charge something more for the executor's general - * costs of processing the added tuples. 
The cost is probably less - * than cpu_tuple_cost, though, so we arbitrarily use half of that. - */ - plan->total_cost += plan->plan_rows * (tlist_rows - 1) * - cpu_tuple_cost / 2; - - plan->plan_rows *= tlist_rows; - } -} - /* * Detect whether a plan node is a "dummy" plan created when a relation * is deemed not to need scanning due to constraint exclusion. @@ -2987,7 +2142,7 @@ select_rowmark_type(RangeTblEntry *rte, LockClauseStrength strength) * for OFFSET but a little bit bogus for LIMIT: effectively we estimate * LIMIT 0 as though it were LIMIT 1. But this is in line with the planner's * usual practice of never estimating less than one row.) These values will - * be passed to make_limit, which see if you change this code. + * be passed to create_limit_path, which see if you change this code. * * The return value is the suitably adjusted tuple_fraction to use for * planning the query. This adjustment is not overridable, since it reflects @@ -3839,333 +2994,767 @@ standard_qp_callback(PlannerInfo *root, void *extra) } /* - * choose_hashed_grouping - should we use hashed grouping? + * Estimate number of groups produced by grouping clauses (1 if not grouping) * - * Returns TRUE to select hashing, FALSE to select sorting. 
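The set-returning-function adjustment in add_tlist_costs_to_plan() above can be sketched as a standalone helper. cpu_tuple_cost is a GUC (default 0.01 in PostgreSQL); here it is simply a parameter:

```c
#include <assert.h>

/* Sketch of the SRF adjustment in add_tlist_costs_to_plan(): charge
 * half a cpu_tuple_cost for each tuple the tlist adds, then inflate
 * the row estimate by the tlist's rows-out multiplier. */
static void
inflate_for_srfs(double *plan_rows, double *total_cost,
                 double tlist_rows, double cpu_tuple_cost)
{
    if (tlist_rows > 1)
    {
        *total_cost += *plan_rows * (tlist_rows - 1) * cpu_tuple_cost / 2;
        *plan_rows *= tlist_rows;
    }
}
```

Note the ordering dependency the header comment warns about: the cost term uses the pre-inflation row count, so the multiplication must come last.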
+ * path_rows: number of output rows from scan/join step + * rollup_lists: list of grouping sets, or NIL if not doing grouping sets + * rollup_groupclauses: list of grouping clauses for grouping sets, + * or NIL if not doing grouping sets */ -static bool -choose_hashed_grouping(PlannerInfo *root, - double tuple_fraction, double limit_tuples, - double path_rows, - Path *cheapest_path, Path *sorted_path, - double dNumGroups, AggClauseCosts *agg_costs) +static double +get_number_of_groups(PlannerInfo *root, + double path_rows, + List *rollup_lists, + List *rollup_groupclauses) { Query *parse = root->parse; - int numGroupCols = list_length(parse->groupClause); - bool can_hash; - bool can_sort; - Size hashentrysize; - List *target_pathkeys; - List *current_pathkeys; - Path hashed_p; - Path sorted_p; - int sorted_p_width; + double dNumGroups; - /* - * Executor doesn't support hashed aggregation with DISTINCT or ORDER BY - * aggregates. (Doing so would imply storing *all* the input values in - * the hash table, and/or running many sorts in parallel, either of which - * seems like a certain loser.) We similarly don't support ordered-set - * aggregates in hashed aggregation, but that case is included in the - * numOrderedAggs count. 
- */ - can_hash = (agg_costs->numOrderedAggs == 0 && - grouping_is_hashable(parse->groupClause)); - can_sort = grouping_is_sortable(parse->groupClause); - - /* Quick out if only one choice is workable */ - if (!(can_hash && can_sort)) + if (parse->groupClause) { - if (can_hash) - return true; - else if (can_sort) - return false; + List *groupExprs; + + if (parse->groupingSets) + { + /* Add up the estimates for each grouping set */ + ListCell *lc, + *lc2; + + dNumGroups = 0; + forboth(lc, rollup_groupclauses, lc2, rollup_lists) + { + List *groupClause = (List *) lfirst(lc); + List *gsets = (List *) lfirst(lc2); + ListCell *lc3; + + groupExprs = get_sortgrouplist_exprs(groupClause, + parse->targetList); + + foreach(lc3, gsets) + { + List *gset = (List *) lfirst(lc3); + + dNumGroups += estimate_num_groups(root, + groupExprs, + path_rows, + &gset); + } + } + } else - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("could not implement GROUP BY"), - errdetail("Some of the datatypes only support hashing, while others only support sorting."))); + { + /* Plain GROUP BY */ + groupExprs = get_sortgrouplist_exprs(parse->groupClause, + parse->targetList); + + dNumGroups = estimate_num_groups(root, groupExprs, path_rows, + NULL); + } + } + else if (parse->groupingSets) + { + /* Empty grouping sets ... one result row for each one */ + dNumGroups = list_length(parse->groupingSets); + } + else if (parse->hasAggs || root->hasHavingQual) + { + /* Plain aggregation, one result row */ + dNumGroups = 1; + } + else + { + /* Not grouping */ + dNumGroups = 1; } - /* Prefer sorting when enable_hashagg is off */ - if (!enable_hashagg) - return false; + return dNumGroups; +} + +/* + * create_grouping_paths + * + * Build a new upperrel containing Paths for grouping and/or aggregation. 
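get_number_of_groups() above distinguishes four cases. A simplified case model, with caller-supplied per-grouping-set estimates standing in for estimate_num_groups():

```c
#include <assert.h>
#include <stddef.h>

/* Case sketch of get_number_of_groups().  The per-set estimates stand
 * in for estimate_num_groups(); plain aggregation and "not grouping
 * at all" both yield a single result row. */
static double
number_of_groups_model(int n_group_cols, int n_grouping_sets,
                       const double *per_set_estimates)
{
    if (n_group_cols > 0)
    {
        if (n_grouping_sets > 0)
        {
            double total = 0.0;         /* sum the estimate for each set */

            for (int i = 0; i < n_grouping_sets; i++)
                total += per_set_estimates[i];
            return total;
        }
        return per_set_estimates[0];    /* plain GROUP BY */
    }
    if (n_grouping_sets > 0)            /* all-empty grouping sets */
        return (double) n_grouping_sets;
    return 1.0;                         /* aggregation or no grouping */
}
```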
+ * + * input_rel: contains the source-data Paths + * target: the pathtarget for the result Paths to compute + * groupColIdx: array of indexes of grouping columns in the source data + * rollup_lists: list of grouping sets, or NIL if not doing grouping sets + * rollup_groupclauses: list of grouping clauses for grouping sets, + * or NIL if not doing grouping sets + * + * We need to consider sorted and hashed aggregation in the same function, + * because otherwise (1) it would be harder to throw an appropriate error + * message if neither way works, and (2) we should not allow enable_hashagg or + * hashtable size considerations to dissuade us from using hashing if sorting + * is not possible. + */ +static RelOptInfo * +create_grouping_paths(PlannerInfo *root, + RelOptInfo *input_rel, + PathTarget *target, + AttrNumber *groupColIdx, + List *rollup_lists, + List *rollup_groupclauses) +{ + Query *parse = root->parse; + Path *cheapest_path = input_rel->cheapest_total_path; + RelOptInfo *grouped_rel; + AggClauseCosts agg_costs; + double dNumGroups; + bool allow_hash; + ListCell *lc; + + /* For now, do all work in the (GROUP_AGG, NULL) upperrel */ + grouped_rel = fetch_upper_rel(root, UPPERREL_GROUP_AGG, NULL); /* - * Don't do it if it doesn't look like the hashtable will fit into - * work_mem. + * Check for degenerate grouping. */ + if ((root->hasHavingQual || parse->groupingSets) && + !parse->hasAggs && parse->groupClause == NIL) + { + /* + * We have a HAVING qual and/or grouping sets, but no aggregates and + * no GROUP BY (which implies that the grouping sets are all empty). + * + * This is a degenerate case in which we are supposed to emit either + * zero or one row for each grouping set depending on whether HAVING + * succeeds. Furthermore, there cannot be any variables in either + * HAVING or the targetlist, so we actually do not need the FROM table + * at all! We can just throw away the plan-so-far and generate a + * Result node. 
This is a sufficiently unusual corner case that it's + * not worth contorting the structure of this module to avoid having + * to generate the earlier paths in the first place. + */ + int nrows = list_length(parse->groupingSets); + Path *path; + + if (nrows > 1) + { + /* + * Doesn't seem worthwhile writing code to cons up a + * generate_series or a values scan to emit multiple rows. Instead + * just make N clones and append them. (With a volatile HAVING + * clause, this means you might get between 0 and N output rows. + * Offhand I think that's desired.) + */ + List *paths = NIL; + + while (--nrows >= 0) + { + path = (Path *) + create_result_path(grouped_rel, + target, + (List *) parse->havingQual); + paths = lappend(paths, path); + } + path = (Path *) + create_append_path(grouped_rel, + paths, + NULL, + 0); + path->pathtarget = target; + } + else + { + /* No grouping sets, or just one, so one output row */ + path = (Path *) + create_result_path(grouped_rel, + target, + (List *) parse->havingQual); + } - /* Estimate per-hash-entry space at tuple width... */ - hashentrysize = MAXALIGN(cheapest_path->pathtarget->width) + - MAXALIGN(SizeofMinimalTupleHeader); - /* plus space for pass-by-ref transition values... */ - hashentrysize += agg_costs->transitionSpace; - /* plus the per-hash-entry overhead */ - hashentrysize += hash_agg_entry_size(agg_costs->numAggs); + add_path(grouped_rel, path); - if (hashentrysize * dNumGroups > work_mem * 1024L) - return false; + /* No need to consider any other alternatives. */ + set_cheapest(grouped_rel); + + return grouped_rel; + } /* - * When we have both GROUP BY and DISTINCT, use the more-rigorous of - * DISTINCT and ORDER BY as the assumed required output sort order. This - * is an oversimplification because the DISTINCT might get implemented via - * hashing, but it's not clear that the case is common enough (or that our - * estimates are good enough) to justify trying to solve it exactly. 
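The degenerate case above (grouping sets with no aggregates and no GROUP BY columns) appends N Result clones, so a volatile HAVING can pass for some clones and fail for others. The resulting row-count behavior, as a sketch:

```c
#include <assert.h>

/* Behavior sketch of the degenerate grouping-sets case above: each of
 * the N appended Result clones emits one row iff its HAVING evaluation
 * passes, so the output is between 0 and N rows. */
static int
degenerate_output_rows(const int *having_passed, int n_sets)
{
    int rows = 0;

    for (int i = 0; i < n_sets; i++)
    {
        if (having_passed[i])
            rows++;
    }
    return rows;
}
```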
+ * Collect statistics about aggregates for estimating costs. Note: we do + * not detect duplicate aggregates here; a somewhat-overestimated cost is + * okay for our purposes. */ - if (list_length(root->distinct_pathkeys) > - list_length(root->sort_pathkeys)) - target_pathkeys = root->distinct_pathkeys; - else - target_pathkeys = root->sort_pathkeys; + MemSet(&agg_costs, 0, sizeof(AggClauseCosts)); + if (parse->hasAggs) + { + count_agg_clauses(root, (Node *) target->exprs, &agg_costs); + count_agg_clauses(root, parse->havingQual, &agg_costs); + } /* - * See if the estimated cost is no more than doing it the other way. While - * avoiding the need for sorted input is usually a win, the fact that the - * output won't be sorted may be a loss; so we need to do an actual cost - * comparison. + * Estimate number of groups. Note: if cheapest_path is a dummy, it will + * have zero rowcount estimate, which we don't want to use for fear of + * divide-by-zero. Hence clamp. + */ + dNumGroups = get_number_of_groups(root, + clamp_row_est(cheapest_path->rows), + rollup_lists, + rollup_groupclauses); + + /* + * Consider sort-based implementations of grouping, if possible. (Note + * that if groupClause is empty, grouping_is_sortable() is trivially true, + * and all the pathkeys_contained_in() tests will succeed too, so that + * we'll consider every surviving input path.) + */ + if (grouping_is_sortable(parse->groupClause)) + { + /* + * Use any available suitably-sorted path as input, and also consider + * sorting the cheapest-total path. 
+ */ + foreach(lc, input_rel->pathlist) + { + Path *path = (Path *) lfirst(lc); + bool is_sorted; + + is_sorted = pathkeys_contained_in(root->group_pathkeys, + path->pathkeys); + if (path == cheapest_path || is_sorted) + { + /* Sort the cheapest-total path if it isn't already sorted */ + if (!is_sorted) + path = (Path *) create_sort_path(root, + grouped_rel, + path, + root->group_pathkeys, + -1.0); + + /* Now decide what to stick atop it */ + if (parse->groupingSets) + { + /* + * We have grouping sets, possibly with aggregation. Make + * a GroupingSetsPath. + */ + add_path(grouped_rel, (Path *) + create_groupingsets_path(root, + grouped_rel, + path, + target, + (List *) parse->havingQual, + groupColIdx, + rollup_lists, + rollup_groupclauses, + &agg_costs, + dNumGroups)); + } + else if (parse->hasAggs) + { + /* + * We have aggregation, possibly with plain GROUP BY. Make + * an AggPath. + */ + add_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + path, + target, + parse->groupClause ? AGG_SORTED : AGG_PLAIN, + parse->groupClause, + (List *) parse->havingQual, + &agg_costs, + dNumGroups)); + } + else if (parse->groupClause) + { + /* + * We have GROUP BY without aggregation or grouping sets. + * Make a GroupPath. + */ + add_path(grouped_rel, (Path *) + create_group_path(root, + grouped_rel, + path, + target, + parse->groupClause, + (List *) parse->havingQual, + dNumGroups)); + } + else + { + /* Other cases should have been handled above */ + Assert(false); + } + } + } + } + + /* + * Consider hash-based implementations of grouping, if possible. * - * We need to consider cheapest_path + hashagg [+ final sort] versus - * either cheapest_path [+ sort] + group or agg [+ final sort] or - * presorted_path + group or agg [+ final sort] where brackets indicate a - * step that may not be needed. We assume grouping_planner() will have - * passed us a presorted path only if it's a winner compared to - * cheapest_path for this purpose. 
+ * Hashed aggregation only applies if we're grouping. We currently can't + * hash if there are grouping sets, though. * - * These path variables are dummies that just hold cost fields; we don't - * make actual Paths for these steps. + * Executor doesn't support hashed aggregation with DISTINCT or ORDER BY + * aggregates. (Doing so would imply storing *all* the input values in + * the hash table, and/or running many sorts in parallel, either of which + * seems like a certain loser.) We similarly don't support ordered-set + * aggregates in hashed aggregation, but that case is also included in the + * numOrderedAggs count. + * + * Note: grouping_is_hashable() is much more expensive to check than the + * other gating conditions, so we want to do it last. */ - cost_agg(&hashed_p, root, AGG_HASHED, agg_costs, - numGroupCols, dNumGroups, - cheapest_path->startup_cost, cheapest_path->total_cost, - path_rows); - /* Result of hashed agg is always unsorted */ - if (target_pathkeys) - cost_sort(&hashed_p, root, target_pathkeys, hashed_p.total_cost, - dNumGroups, cheapest_path->pathtarget->width, - 0.0, work_mem, limit_tuples); - - if (sorted_path) + allow_hash = (parse->groupClause != NIL && + parse->groupingSets == NIL && + agg_costs.numOrderedAggs == 0); + + /* Consider reasons to disable hashing, but only if we can sort instead */ + if (allow_hash && grouped_rel->pathlist != NIL) { - sorted_p.startup_cost = sorted_path->startup_cost; - sorted_p.total_cost = sorted_path->total_cost; - sorted_p_width = sorted_path->pathtarget->width; - current_pathkeys = sorted_path->pathkeys; + if (!enable_hashagg) + allow_hash = false; + else + { + /* + * Don't hash if it doesn't look like the hashtable will fit into + * work_mem. + */ + Size hashentrysize; + + /* Estimate per-hash-entry space at tuple width... */ + hashentrysize = MAXALIGN(cheapest_path->pathtarget->width) + + MAXALIGN(SizeofMinimalTupleHeader); + /* plus space for pass-by-ref transition values... 
*/ + hashentrysize += agg_costs.transitionSpace; + /* plus the per-hash-entry overhead */ + hashentrysize += hash_agg_entry_size(agg_costs.numAggs); + + if (hashentrysize * dNumGroups > work_mem * 1024L) + allow_hash = false; + } } - else + + if (allow_hash && grouping_is_hashable(parse->groupClause)) { - sorted_p.startup_cost = cheapest_path->startup_cost; - sorted_p.total_cost = cheapest_path->total_cost; - sorted_p_width = cheapest_path->pathtarget->width; - current_pathkeys = cheapest_path->pathkeys; + /* + * We just need an Agg over the cheapest-total input path, since input + * order won't matter. + */ + add_path(grouped_rel, (Path *) + create_agg_path(root, grouped_rel, + cheapest_path, + target, + AGG_HASHED, + parse->groupClause, + (List *) parse->havingQual, + &agg_costs, + dNumGroups)); } - if (!pathkeys_contained_in(root->group_pathkeys, current_pathkeys)) + + /* Give a helpful error if we failed to find any implementation */ + if (grouped_rel->pathlist == NIL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("could not implement GROUP BY"), + errdetail("Some of the datatypes only support hashing, while others only support sorting."))); + + /* Now choose the best path(s) */ + set_cheapest(grouped_rel); + + return grouped_rel; +} + +/* + * create_window_paths + * + * Build a new upperrel containing Paths for window-function evaluation. 
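The work_mem gate used above for hashed aggregation can be sketched as follows; MAXALIGN_ and the entry-size components are simplified stand-ins for the PostgreSQL versions, and work_mem is expressed in kilobytes as in the real GUC:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the hash-table fitness test: estimated per-entry size is
 * aligned tuple width plus aligned header, plus pass-by-ref transition
 * space, plus per-entry hash overhead; hashing is allowed only if all
 * entries together fit in work_mem. */
#define ALIGNOF_MAX 8
#define MAXALIGN_(len) \
    (((size_t) (len) + (ALIGNOF_MAX - 1)) & ~((size_t) (ALIGNOF_MAX - 1)))

static int
hashagg_fits_in_work_mem(int tuple_width, size_t header_size,
                         size_t transition_space,
                         size_t per_entry_overhead,
                         double num_groups, long work_mem_kb)
{
    size_t entry_size = MAXALIGN_(tuple_width) + MAXALIGN_(header_size)
        + transition_space + per_entry_overhead;

    return (double) entry_size * num_groups <= (double) work_mem_kb * 1024.0;
}
```

With a 104-byte entry, a thousand groups fit comfortably in a 4MB work_mem, while a million groups do not, which is exactly the situation in which the code above clears allow_hash.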
+ * + * input_rel: contains the source-data Paths + * base_tlist: result of make_windowInputTargetList + * tlist: query's final target list (which is what output paths should emit) + * wflists: result of find_window_functions + * activeWindows: result of select_active_windows + */ +static RelOptInfo * +create_window_paths(PlannerInfo *root, + RelOptInfo *input_rel, + List *base_tlist, + List *tlist, + WindowFuncLists *wflists, + List *activeWindows) +{ + RelOptInfo *window_rel; + ListCell *lc; + + /* For now, do all work in the (WINDOW, NULL) upperrel */ + window_rel = fetch_upper_rel(root, UPPERREL_WINDOW, NULL); + + /* + * Consider computing window functions starting from the existing + * cheapest-total path (which will likely require a sort) as well as any + * existing paths that satisfy root->window_pathkeys (which won't). + */ + foreach(lc, input_rel->pathlist) { - cost_sort(&sorted_p, root, root->group_pathkeys, sorted_p.total_cost, - path_rows, sorted_p_width, - 0.0, work_mem, -1.0); - current_pathkeys = root->group_pathkeys; + Path *path = (Path *) lfirst(lc); + + if (path == input_rel->cheapest_total_path || + pathkeys_contained_in(root->window_pathkeys, path->pathkeys)) + create_one_window_path(root, + window_rel, + path, + base_tlist, + tlist, + wflists, + activeWindows); } - if (parse->hasAggs) - cost_agg(&sorted_p, root, AGG_SORTED, agg_costs, - numGroupCols, dNumGroups, - sorted_p.startup_cost, sorted_p.total_cost, - path_rows); - else - cost_group(&sorted_p, root, numGroupCols, dNumGroups, - sorted_p.startup_cost, sorted_p.total_cost, - path_rows); - /* The Agg or Group node will preserve ordering */ - if (target_pathkeys && - !pathkeys_contained_in(target_pathkeys, current_pathkeys)) - cost_sort(&sorted_p, root, target_pathkeys, sorted_p.total_cost, - dNumGroups, sorted_p_width, - 0.0, work_mem, limit_tuples); + /* Now choose the best path(s) */ + set_cheapest(window_rel); + + return window_rel; +} + +/* + * Stack window-function implementation 
steps atop the given Path, and + * add the result to window_rel. + * + * window_rel: upperrel to contain result + * path: input Path to use + * base_tlist: result of make_windowInputTargetList + * tlist: query's final target list (which is what output paths should emit) + * wflists: result of find_window_functions + * activeWindows: result of select_active_windows + */ +static void +create_one_window_path(PlannerInfo *root, + RelOptInfo *window_rel, + Path *path, + List *base_tlist, + List *tlist, + WindowFuncLists *wflists, + List *activeWindows) +{ + List *window_tlist; + ListCell *l; /* - * Now make the decision using the top-level tuple fraction. + * Since each window clause could require a different sort order, we stack + * up a WindowAgg node for each clause, with sort steps between them as + * needed. (We assume that select_active_windows chose a good order for + * executing the clauses in.) + * + * The "base" targetlist for all steps of the windowing process is a flat + * tlist of all Vars and Aggs needed in the result. (In some cases we + * wouldn't need to propagate all of these all the way to the top, since + * they might only be needed as inputs to WindowFuncs. It's probably not + * worth trying to optimize that though.) We also add window partitioning + * and sorting expressions to the base tlist, to ensure they're computed + * only once at the bottom of the stack (that's critical for volatile + * functions). As we climb up the stack, we'll add outputs for the + * WindowFuncs computed at each level. */ - if (compare_fractional_path_costs(&hashed_p, &sorted_p, - tuple_fraction) < 0) + window_tlist = base_tlist; + + /* + * Apply base_tlist to the given base path. If that path node is one that + * cannot do expression evaluation, we must insert a Result node to + * project the desired tlist. (In some cases this might not really be + * required, but it's not worth trying to avoid it.) 
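The stacking loop in create_one_window_path() described above puts one WindowAgg atop the path for every active window clause, inserting a Sort only when the running order does not already satisfy the clause. Its shape, as a toy model with int orderings (0 meaning "no required order"):

```c
#include <assert.h>

/* Shape sketch of create_one_window_path()'s loop: a WindowAgg is
 * stacked for every active window clause, and an explicit Sort is
 * inserted only when the running order doesn't already satisfy the
 * clause's keys.  Prefix relationships between orderings are ignored
 * in this simplification. */
static int
count_window_sorts(const int *required_orders, int n_windows,
                   int starting_order)
{
    int current = starting_order;
    int sorts = 0;

    for (int i = 0; i < n_windows; i++)
    {
        if (required_orders[i] != 0 && required_orders[i] != current)
        {
            sorts++;            /* a create_sort_path() step goes here */
            current = required_orders[i];
        }
        /* WindowAgg preserves its input ordering */
    }
    return sorts;
}
```

This is why select_active_windows tries to choose a good clause order: grouping compatible windows together minimizes the number of intermediate sorts.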
If the query has + * both grouping and windowing, base_tlist was already applied to the + * input path, but apply_projection_to_path is smart about that. + * + * The seemingly redundant create_pathtarget() steps here are important to + * ensure that each path node has a separately modifiable tlist. + */ + path = apply_projection_to_path(root, window_rel, + path, + create_pathtarget(root, base_tlist)); + + foreach(l, activeWindows) { - /* Hashed is cheaper, so use it */ - return true; + WindowClause *wc = (WindowClause *) lfirst(l); + List *window_pathkeys; + + window_pathkeys = make_pathkeys_for_window(root, + wc, + tlist); + + /* Sort if necessary */ + if (!pathkeys_contained_in(window_pathkeys, path->pathkeys)) + { + path = (Path *) create_sort_path(root, window_rel, + path, + window_pathkeys, + -1.0); + } + + if (lnext(l)) + { + /* Add the current WindowFuncs to the running tlist */ + window_tlist = add_to_flat_tlist(window_tlist, + wflists->windowFuncs[wc->winref]); + } + else + { + /* Install the final tlist in the topmost WindowAgg */ + window_tlist = tlist; + } + + path = (Path *) + create_windowagg_path(root, window_rel, path, + create_pathtarget(root, window_tlist), + wflists->windowFuncs[wc->winref], + wc, + window_pathkeys); } - return false; + + add_path(window_rel, path); } /* - * choose_hashed_distinct - should we use hashing for DISTINCT? + * create_distinct_paths * - * This is fairly similar to choose_hashed_grouping, but there are enough - * differences that it doesn't seem worth trying to unify the two functions. - * (One difference is that we sometimes apply this after forming a Plan, - * so the input alternatives can't be represented as Paths --- instead we - * pass in the costs as individual variables.) + * Build a new upperrel containing Paths for SELECT DISTINCT evaluation. * - * But note that making the two choices independently is a bit bogus in - * itself. 
If the two could be combined into a single choice operation - * it'd probably be better, but that seems far too unwieldy to be practical, - * especially considering that the combination of GROUP BY and DISTINCT - * isn't very common in real queries. By separating them, we are giving - * extra preference to using a sorting implementation when a common sort key - * is available ... and that's not necessarily wrong anyway. + * input_rel: contains the source-data Paths * - * Returns TRUE to select hashing, FALSE to select sorting. + * Note: input paths should already compute the desired pathtarget, since + * Sort/Unique won't project anything. */ -static bool -choose_hashed_distinct(PlannerInfo *root, - double tuple_fraction, double limit_tuples, - double path_rows, - Cost cheapest_startup_cost, Cost cheapest_total_cost, - int cheapest_path_width, - Cost sorted_startup_cost, Cost sorted_total_cost, - int sorted_path_width, - List *sorted_pathkeys, - double dNumDistinctRows) +static RelOptInfo * +create_distinct_paths(PlannerInfo *root, + RelOptInfo *input_rel) { Query *parse = root->parse; - int numDistinctCols = list_length(parse->distinctClause); - bool can_sort; - bool can_hash; - Size hashentrysize; - List *current_pathkeys; - List *needed_pathkeys; - Path hashed_p; - Path sorted_p; - - /* - * If we have a sortable DISTINCT ON clause, we always use sorting. This - * enforces the expected behavior of DISTINCT ON. 
- */ - can_sort = grouping_is_sortable(parse->distinctClause); - if (can_sort && parse->hasDistinctOn) - return false; + Path *cheapest_input_path = input_rel->cheapest_total_path; + RelOptInfo *distinct_rel; + double numDistinctRows; + bool allow_hash; + Path *path; + ListCell *lc; - can_hash = grouping_is_hashable(parse->distinctClause); + /* For now, do all work in the (DISTINCT, NULL) upperrel */ + distinct_rel = fetch_upper_rel(root, UPPERREL_DISTINCT, NULL); - /* Quick out if only one choice is workable */ - if (!(can_hash && can_sort)) + /* Estimate number of distinct rows there will be */ + if (parse->groupClause || parse->groupingSets || parse->hasAggs || + root->hasHavingQual) { - if (can_hash) - return true; - else if (can_sort) - return false; - else - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("could not implement DISTINCT"), - errdetail("Some of the datatypes only support hashing, while others only support sorting."))); + /* + * If there was grouping or aggregation, use the number of input rows + * as the estimated number of DISTINCT rows (ie, assume the input is + * already mostly unique). + */ + numDistinctRows = cheapest_input_path->rows; } + else + { + /* + * Otherwise, the UNIQUE filter has effects comparable to GROUP BY. + */ + List *distinctExprs; - /* Prefer sorting when enable_hashagg is off */ - if (!enable_hashagg) - return false; + distinctExprs = get_sortgrouplist_exprs(parse->distinctClause, + parse->targetList); + numDistinctRows = estimate_num_groups(root, distinctExprs, + cheapest_input_path->rows, + NULL); + } /* - * Don't do it if it doesn't look like the hashtable will fit into - * work_mem. + * Consider sort-based implementations of DISTINCT, if possible. */ + if (grouping_is_sortable(parse->distinctClause)) + { + /* + * First, if we have any adequately-presorted paths, just stick a + * Unique node on those. Then consider doing an explicit sort of the + * cheapest input path and Unique'ing that. 
+ * + * When we have DISTINCT ON, we must sort by the more rigorous of + * DISTINCT and ORDER BY, else it won't have the desired behavior. + * Also, if we do have to do an explicit sort, we might as well use + * the more rigorous ordering to avoid a second sort later. (Note + * that the parser will have ensured that one clause is a prefix of + * the other.) + */ + List *needed_pathkeys; - /* Estimate per-hash-entry space at tuple width... */ - hashentrysize = MAXALIGN(cheapest_path_width) + - MAXALIGN(SizeofMinimalTupleHeader); - /* plus the per-hash-entry overhead */ - hashentrysize += hash_agg_entry_size(0); + if (parse->hasDistinctOn && + list_length(root->distinct_pathkeys) < + list_length(root->sort_pathkeys)) + needed_pathkeys = root->sort_pathkeys; + else + needed_pathkeys = root->distinct_pathkeys; + + foreach(lc, input_rel->pathlist) + { + Path *path = (Path *) lfirst(lc); - if (hashentrysize * dNumDistinctRows > work_mem * 1024L) - return false; + if (pathkeys_contained_in(needed_pathkeys, path->pathkeys)) + { + add_path(distinct_rel, (Path *) + create_upper_unique_path(root, distinct_rel, + path, + list_length(root->distinct_pathkeys), + numDistinctRows)); + } + } + + /* For explicit-sort case, always use the more rigorous clause */ + if (list_length(root->distinct_pathkeys) < + list_length(root->sort_pathkeys)) + { + needed_pathkeys = root->sort_pathkeys; + /* Assert checks that parser didn't mess up... */ + Assert(pathkeys_contained_in(root->distinct_pathkeys, + needed_pathkeys)); + } + else + needed_pathkeys = root->distinct_pathkeys; + + path = cheapest_input_path; + if (!pathkeys_contained_in(needed_pathkeys, path->pathkeys)) + path = (Path *) create_sort_path(root, distinct_rel, + path, + needed_pathkeys, + -1.0); + + add_path(distinct_rel, (Path *) + create_upper_unique_path(root, distinct_rel, + path, + list_length(root->distinct_pathkeys), + numDistinctRows)); + } /* - * See if the estimated cost is no more than doing it the other way. 
While - * avoiding the need for sorted input is usually a win, the fact that the - * output won't be sorted may be a loss; so we need to do an actual cost - * comparison. + * Consider hash-based implementations of DISTINCT, if possible. * - * We need to consider cheapest_path + hashagg [+ final sort] versus - * sorted_path [+ sort] + group [+ final sort] where brackets indicate a - * step that may not be needed. + * If we were not able to make any other types of path, we *must* hash or + * die trying. If we do have other choices, there are several things that + * should prevent selection of hashing: if the query uses DISTINCT ON + * (because it won't really have the expected behavior if we hash), or if + * enable_hashagg is off, or if it looks like the hashtable will exceed + * work_mem. * - * These path variables are dummies that just hold cost fields; we don't - * make actual Paths for these steps. + * Note: grouping_is_hashable() is much more expensive to check than the + * other gating conditions, so we want to do it last. */ - cost_agg(&hashed_p, root, AGG_HASHED, NULL, - numDistinctCols, dNumDistinctRows, - cheapest_startup_cost, cheapest_total_cost, - path_rows); + if (distinct_rel->pathlist == NIL) + allow_hash = true; /* we have no alternatives */ + else if (parse->hasDistinctOn || !enable_hashagg) + allow_hash = false; /* policy-based decision not to hash */ + else + { + Size hashentrysize; - /* - * Result of hashed agg is always unsorted, so if ORDER BY is present we - * need to charge for the final sort. - */ - if (parse->sortClause) - cost_sort(&hashed_p, root, root->sort_pathkeys, hashed_p.total_cost, - dNumDistinctRows, cheapest_path_width, - 0.0, work_mem, limit_tuples); + /* Estimate per-hash-entry space at tuple width... */ + hashentrysize = MAXALIGN(cheapest_input_path->pathtarget->width) + + MAXALIGN(SizeofMinimalTupleHeader); + /* plus the per-hash-entry overhead */ + hashentrysize += hash_agg_entry_size(0); - /* - * Now for the GROUP case. 
See comments in grouping_planner about the - * sorting choices here --- this code should match that code. - */ - sorted_p.startup_cost = sorted_startup_cost; - sorted_p.total_cost = sorted_total_cost; - current_pathkeys = sorted_pathkeys; - if (parse->hasDistinctOn && - list_length(root->distinct_pathkeys) < - list_length(root->sort_pathkeys)) - needed_pathkeys = root->sort_pathkeys; - else - needed_pathkeys = root->distinct_pathkeys; - if (!pathkeys_contained_in(needed_pathkeys, current_pathkeys)) + /* Allow hashing only if hashtable is predicted to fit in work_mem */ + allow_hash = (hashentrysize * numDistinctRows <= work_mem * 1024L); + } + + if (allow_hash && grouping_is_hashable(parse->distinctClause)) { - if (list_length(root->distinct_pathkeys) >= - list_length(root->sort_pathkeys)) - current_pathkeys = root->distinct_pathkeys; - else - current_pathkeys = root->sort_pathkeys; - cost_sort(&sorted_p, root, current_pathkeys, sorted_p.total_cost, - path_rows, sorted_path_width, - 0.0, work_mem, -1.0); + /* Generate hashed aggregate path --- no sort needed */ + add_path(distinct_rel, (Path *) + create_agg_path(root, + distinct_rel, + cheapest_input_path, + cheapest_input_path->pathtarget, + AGG_HASHED, + parse->distinctClause, + NIL, + NULL, + numDistinctRows)); } - cost_group(&sorted_p, root, numDistinctCols, dNumDistinctRows, - sorted_p.startup_cost, sorted_p.total_cost, - path_rows); - if (parse->sortClause && - !pathkeys_contained_in(root->sort_pathkeys, current_pathkeys)) - cost_sort(&sorted_p, root, root->sort_pathkeys, sorted_p.total_cost, - dNumDistinctRows, sorted_path_width, - 0.0, work_mem, limit_tuples); - /* - * Now make the decision using the top-level tuple fraction. 
- */ - if (compare_fractional_path_costs(&hashed_p, &sorted_p, - tuple_fraction) < 0) + /* Give a helpful error if we failed to find any implementation */ + if (distinct_rel->pathlist == NIL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("could not implement DISTINCT"), + errdetail("Some of the datatypes only support hashing, while others only support sorting."))); + + /* Now choose the best path(s) */ + set_cheapest(distinct_rel); + + return distinct_rel; +} + +/* + * create_ordered_paths + * + * Build a new upperrel containing Paths for ORDER BY evaluation. + * + * All paths in the result must satisfy the ORDER BY ordering. + * The only new path we need consider is an explicit sort on the + * cheapest-total existing path. + * + * input_rel: contains the source-data Paths + * limit_tuples: estimated bound on the number of output tuples, + * or -1 if no LIMIT or couldn't estimate + */ +static RelOptInfo * +create_ordered_paths(PlannerInfo *root, + RelOptInfo *input_rel, + double limit_tuples) +{ + Path *cheapest_input_path = input_rel->cheapest_total_path; + RelOptInfo *ordered_rel; + ListCell *lc; + + /* For now, do all work in the (ORDERED, NULL) upperrel */ + ordered_rel = fetch_upper_rel(root, UPPERREL_ORDERED, NULL); + + foreach(lc, input_rel->pathlist) { - /* Hashed is cheaper, so use it */ - return true; + Path *path = (Path *) lfirst(lc); + bool is_sorted; + + is_sorted = pathkeys_contained_in(root->sort_pathkeys, + path->pathkeys); + if (path == cheapest_input_path || is_sorted) + { + if (!is_sorted) + { + /* An explicit sort here can take advantage of LIMIT */ + path = (Path *) create_sort_path(root, + ordered_rel, + path, + root->sort_pathkeys, + limit_tuples); + } + add_path(ordered_rel, path); + } } - return false; + + /* + * No need to bother with set_cheapest here; grouping_planner does not + * need us to do it. 
+ */ + Assert(ordered_rel->pathlist != NIL); + + return ordered_rel; } + /* - * make_subplanTargetList - * Generate appropriate target list when grouping is required. + * make_scanjoin_target + * Generate appropriate PathTarget for the result of scan/join steps. * - * When grouping_planner inserts grouping or aggregation plan nodes - * above the scan/join plan constructed by query_planner+create_plan, - * we typically want the scan/join plan to emit a different target list - * than the outer plan nodes should have. This routine generates the - * correct target list for the scan/join subplan. + * If there is grouping/aggregation or window functions, we typically want the + * scan/join plan to emit a different target list than the upper plan nodes + * will (in particular, it certainly can't include any aggregate or window + * function calls). This routine generates the correct target list for the + * scan/join subplan. * * The initial target list passed from the parser already contains entries * for all ORDER BY and GROUP BY expressions, but it will not have entries * for variables used only in HAVING clauses; so we need to add those * variables to the subplan target list. Also, we flatten all expressions - * except GROUP BY items into their component variables; the other expressions - * will be computed by the inserted nodes rather than by the subplan. + * except GROUP BY items into their component variables; other expressions + * will be computed by the upper plan nodes rather than by the subplan. * For example, given a query like * SELECT a+b,SUM(c+d) FROM table GROUP BY a+b; * we want to pass this targetlist to the subplan: @@ -4173,28 +3762,20 @@ choose_hashed_distinct(PlannerInfo *root, * where the a+b target will be used by the Sort/Group steps, and the * other targets will be used for computing the final results. 
* - * If we are grouping or aggregating, *and* there are no non-Var grouping - * expressions, then the returned tlist is effectively dummy; we do not - * need to force it to be evaluated, because all the Vars it contains - * should be present in the "flat" tlist generated by create_plan, though - * possibly in a different order. In that case we'll use create_plan's tlist, - * and the tlist made here is only needed as input to query_planner to tell - * it which Vars are needed in the output of the scan/join plan. + * We also convert from targetlist format (List of TargetEntry nodes) + * into PathTarget format, which is more compact and includes cost/width. * * 'tlist' is the query's target list. * 'groupColIdx' receives an array of column numbers for the GROUP BY * expressions (if there are any) in the returned target list. - * 'need_tlist_eval' is set true if we really need to evaluate the - * returned tlist as-is. (Note: locate_grouping_columns assumes - * that if this is FALSE, all grouping columns are simple Vars.) * - * The result is the targetlist to be passed to query_planner. + * The result is the PathTarget to be applied to the Paths returned from + * query_planner(). */ -static List * -make_subplanTargetList(PlannerInfo *root, - List *tlist, - AttrNumber **groupColIdx, - bool *need_tlist_eval) +static PathTarget * +make_scanjoin_target(PlannerInfo *root, + List *tlist, + AttrNumber **groupColIdx) { Query *parse = root->parse; List *sub_tlist; @@ -4205,15 +3786,12 @@ make_subplanTargetList(PlannerInfo *root, *groupColIdx = NULL; /* - * If we're not grouping or aggregating, there's nothing to do here; - * query_planner should receive the unmodified target list. + * If we're not grouping or aggregating or windowing, there's nothing to + * do here except convert to PathTarget format. 
*/ - if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual && - !parse->hasWindowFuncs) - { - *need_tlist_eval = true; - return tlist; - } + if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && + !root->hasHavingQual && !parse->hasWindowFuncs) + return create_pathtarget(root, tlist); /* * Otherwise, we must build a tlist containing all grouping columns, plus @@ -4221,7 +3799,6 @@ make_subplanTargetList(PlannerInfo *root, */ sub_tlist = NIL; non_group_cols = NIL; - *need_tlist_eval = false; /* only eval if not flat tlist */ numCols = list_length(parse->groupClause); if (numCols > 0) @@ -4257,13 +3834,11 @@ make_subplanTargetList(PlannerInfo *root, list_length(sub_tlist) + 1, NULL, false); + newtle->ressortgroupref = tle->ressortgroupref; sub_tlist = lappend(sub_tlist, newtle); Assert(grpColIdx[colno] == 0); /* no dups expected */ grpColIdx[colno] = newtle->resno; - - if (!(newtle->expr && IsA(newtle->expr, Var))) - *need_tlist_eval = true; /* tlist contains non Vars */ } else { @@ -4308,7 +3883,7 @@ make_subplanTargetList(PlannerInfo *root, list_free(non_group_vars); list_free(non_group_cols); - return sub_tlist; + return create_pathtarget(root, sub_tlist); } /* @@ -4342,59 +3917,6 @@ get_grouping_column_index(Query *parse, TargetEntry *tle) return -1; } -/* - * locate_grouping_columns - * Locate grouping columns in the tlist chosen by create_plan. - * - * This is only needed if we don't use the sub_tlist chosen by - * make_subplanTargetList. We have to forget the column indexes found - * by that routine and re-locate the grouping exprs in the real sub_tlist. - * We assume the grouping exprs are just Vars (see make_subplanTargetList). - */ -static void -locate_grouping_columns(PlannerInfo *root, - List *tlist, - List *sub_tlist, - AttrNumber *groupColIdx) -{ - int keyno = 0; - ListCell *gl; - - /* - * No work unless grouping. 
- */ - if (!root->parse->groupClause) - { - Assert(groupColIdx == NULL); - return; - } - Assert(groupColIdx != NULL); - - foreach(gl, root->parse->groupClause) - { - SortGroupClause *grpcl = (SortGroupClause *) lfirst(gl); - Var *groupexpr = (Var *) get_sortgroupclause_expr(grpcl, tlist); - TargetEntry *te; - - /* - * The grouping column returned by create_plan might not have the same - * typmod as the original Var. (This can happen in cases where a - * set-returning function has been inlined, so that we now have more - * knowledge about what it returns than we did when the original Var - * was created.) So we can't use tlist_member() to search the tlist; - * instead use tlist_member_match_var. For safety, still check that - * the vartype matches. - */ - if (!(groupexpr && IsA(groupexpr, Var))) - elog(ERROR, "grouping column is not a Var as expected"); - te = tlist_member_match_var(groupexpr, sub_tlist); - if (!te) - elog(ERROR, "failed to locate grouping columns"); - Assert(((Var *) te->expr)->vartype == groupexpr->vartype); - groupColIdx[keyno++] = te->resno; - } -} - /* * postprocess_setop_tlist * Fix up targetlist returned by plan_set_operations(). @@ -4506,28 +4028,31 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists) * make_windowInputTargetList * Generate appropriate target list for initial input to WindowAgg nodes. * - * When grouping_planner inserts one or more WindowAgg nodes into the plan, - * this function computes the initial target list to be computed by the node - * just below the first WindowAgg. This list must contain all values needed - * to evaluate the window functions, compute the final target list, and - * perform any required final sort step. If multiple WindowAggs are needed, - * each intermediate one adds its window function results onto this tlist; - * only the topmost WindowAgg computes the actual desired target list. 
+ * When the query has window functions, this function computes the initial + * target list to be computed by the node just below the first WindowAgg. + * This list must contain all values needed to evaluate the window functions, + * compute the final target list, and perform any required final sort step. + * If multiple WindowAggs are needed, each intermediate one adds its window + * function results onto this tlist; only the topmost WindowAgg computes the + * actual desired target list. * - * This function is much like make_subplanTargetList, though not quite enough + * This function is much like make_scanjoin_target, though not quite enough * like it to share code. As in that function, we flatten most expressions * into their component variables. But we do not want to flatten window * PARTITION BY/ORDER BY clauses, since that might result in multiple * evaluations of them, which would be bad (possibly even resulting in - * inconsistent answers, if they contain volatile functions). Also, we must - * not flatten GROUP BY clauses that were left unflattened by - * make_subplanTargetList, because we may no longer have access to the + * inconsistent answers, if they contain volatile functions). + * Also, we must not flatten GROUP BY clauses that were left unflattened by + * make_scanjoin_target, because we may no longer have access to the * individual Vars in them. * - * Another key difference from make_subplanTargetList is that we don't flatten + * Another key difference from make_scanjoin_target is that we don't flatten * Aggref expressions, since those are to be computed below the window * functions and just referenced like Vars above that. * + * XXX another difference is that this produces targetlist format not a + * PathTarget, but that should change sometime soon. + * * 'tlist' is the query's final target list. * 'activeWindows' is the list of active windows previously identified by * select_active_windows. 
@@ -4651,6 +4176,8 @@ make_windowInputTargetList(PlannerInfo *root, * The required ordering is first the PARTITION keys, then the ORDER keys. * In the future we might try to implement windowing using hashing, in which * case the ordering could be relaxed, but for now we always sort. + * + * Caution: if you change this, see createplan.c's get_column_info_for_window! */ static List * make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc, @@ -4681,113 +4208,42 @@ make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc, return window_pathkeys; } -/*---------- - * get_column_info_for_window - * Get the partitioning/ordering column numbers and equality operators - * for a WindowAgg node. - * - * This depends on the behavior of make_pathkeys_for_window()! +/* + * get_cheapest_fractional_path + * Find the cheapest path for retrieving a specified fraction of all + * the tuples expected to be returned by the given relation. * - * We are given the target WindowClause and an array of the input column - * numbers associated with the resulting pathkeys. In the easy case, there - * are the same number of pathkey columns as partitioning + ordering columns - * and we just have to copy some data around. However, it's possible that - * some of the original partitioning + ordering columns were eliminated as - * redundant during the transformation to pathkeys. (This can happen even - * though the parser gets rid of obvious duplicates. A typical scenario is a - * window specification "PARTITION BY x ORDER BY y" coupled with a clause - * "WHERE x = y" that causes the two sort columns to be recognized as - * redundant.) In that unusual case, we have to work a lot harder to - * determine which keys are significant. + * We interpret tuple_fraction the same way as grouping_planner. * - * The method used here is a bit brute-force: add the sort columns to a list - * one at a time and note when the resulting pathkey list gets longer. 
But - * it's a sufficiently uncommon case that a faster way doesn't seem worth - * the amount of code refactoring that'd be needed. - *---------- + * We assume set_cheapest() has been run on the given rel. */ -static void -get_column_info_for_window(PlannerInfo *root, WindowClause *wc, List *tlist, - int numSortCols, AttrNumber *sortColIdx, - int *partNumCols, - AttrNumber **partColIdx, - Oid **partOperators, - int *ordNumCols, - AttrNumber **ordColIdx, - Oid **ordOperators) +Path * +get_cheapest_fractional_path(RelOptInfo *rel, double tuple_fraction) { - int numPart = list_length(wc->partitionClause); - int numOrder = list_length(wc->orderClause); + Path *best_path = rel->cheapest_total_path; + ListCell *l; - if (numSortCols == numPart + numOrder) - { - /* easy case */ - *partNumCols = numPart; - *partColIdx = sortColIdx; - *partOperators = extract_grouping_ops(wc->partitionClause); - *ordNumCols = numOrder; - *ordColIdx = sortColIdx + numPart; - *ordOperators = extract_grouping_ops(wc->orderClause); - } - else + /* If all tuples will be retrieved, just return the cheapest-total path */ + if (tuple_fraction <= 0.0) + return best_path; + + /* Convert absolute # of tuples to a fraction; no need to clamp */ + if (tuple_fraction >= 1.0) + tuple_fraction /= best_path->rows; + + foreach(l, rel->pathlist) { - List *sortclauses; - List *pathkeys; - int scidx; - ListCell *lc; - - /* first, allocate what's certainly enough space for the arrays */ - *partNumCols = 0; - *partColIdx = (AttrNumber *) palloc(numPart * sizeof(AttrNumber)); - *partOperators = (Oid *) palloc(numPart * sizeof(Oid)); - *ordNumCols = 0; - *ordColIdx = (AttrNumber *) palloc(numOrder * sizeof(AttrNumber)); - *ordOperators = (Oid *) palloc(numOrder * sizeof(Oid)); - sortclauses = NIL; - pathkeys = NIL; - scidx = 0; - foreach(lc, wc->partitionClause) - { - SortGroupClause *sgc = (SortGroupClause *) lfirst(lc); - List *new_pathkeys; + Path *path = (Path *) lfirst(l); - sortclauses = lappend(sortclauses, 
sgc); - new_pathkeys = make_pathkeys_for_sortclauses(root, - sortclauses, - tlist); - if (list_length(new_pathkeys) > list_length(pathkeys)) - { - /* this sort clause is actually significant */ - (*partColIdx)[*partNumCols] = sortColIdx[scidx++]; - (*partOperators)[*partNumCols] = sgc->eqop; - (*partNumCols)++; - pathkeys = new_pathkeys; - } - } - foreach(lc, wc->orderClause) - { - SortGroupClause *sgc = (SortGroupClause *) lfirst(lc); - List *new_pathkeys; + if (path == rel->cheapest_total_path || + compare_fractional_path_costs(best_path, path, tuple_fraction) <= 0) + continue; - sortclauses = lappend(sortclauses, sgc); - new_pathkeys = make_pathkeys_for_sortclauses(root, - sortclauses, - tlist); - if (list_length(new_pathkeys) > list_length(pathkeys)) - { - /* this sort clause is actually significant */ - (*ordColIdx)[*ordNumCols] = sortColIdx[scidx++]; - (*ordOperators)[*ordNumCols] = sgc->eqop; - (*ordNumCols)++; - pathkeys = new_pathkeys; - } - } - /* complain if we didn't eat exactly the right number of sort cols */ - if (scidx != numSortCols) - elog(ERROR, "failed to deconstruct sort operators into partitioning/ordering operators"); + best_path = path; } -} + return best_path; +} /* * expression_planner diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c index 615f3a2687..d296d09814 100644 --- a/src/backend/optimizer/plan/setrefs.c +++ b/src/backend/optimizer/plan/setrefs.c @@ -304,8 +304,8 @@ add_rtes_to_flat_rtable(PlannerInfo *root, bool recursing) * in our query level. In this case apply * flatten_unplanned_rtes. * - * If it was planned but the plan is dummy, we assume that it - * has been omitted from our plan tree (see + * If it was planned but the result rel is dummy, we assume + * that it has been omitted from our plan tree (see * set_subquery_pathlist), and recurse to pull up its RTEs. 
* * Otherwise, it should be represented by a SubqueryScan node @@ -313,17 +313,16 @@ add_rtes_to_flat_rtable(PlannerInfo *root, bool recursing) * we process that plan node. * * However, if we're recursing, then we should pull up RTEs - * whether the subplan is dummy or not, because we've found + * whether the subquery is dummy or not, because we've found * that some upper query level is treating this one as dummy, * and so we won't scan this level's plan tree at all. */ - if (rel->subplan == NULL) + if (rel->subroot == NULL) flatten_unplanned_rtes(glob, rte); - else if (recursing || is_dummy_plan(rel->subplan)) - { - Assert(rel->subroot != NULL); + else if (recursing || + IS_DUMMY_REL(fetch_upper_rel(rel->subroot, + UPPERREL_FINAL, NULL))) add_rtes_to_flat_rtable(rel->subroot, true); - } } } rti++; @@ -979,7 +978,6 @@ set_subqueryscan_references(PlannerInfo *root, /* Need to look up the subquery's RelOptInfo, since we need its subroot */ rel = find_base_rel(root, plan->scan.scanrelid); - Assert(rel->subplan == plan->subplan); /* Recursively process the subplan */ plan->subplan = set_plan_references(rel->subroot, plan->subplan); @@ -1386,6 +1384,7 @@ fix_param_node(PlannerInfo *root, Param *p) * * This consists of incrementing all Vars' varnos by rtoffset, * replacing PARAM_MULTIEXPR Params, expanding PlaceHolderVars, + * replacing Aggref nodes that should be replaced by initplan output Params, * looking up operator opcode info for OpExpr and related nodes, * and adding OIDs from regclass Const nodes into root->glob->relationOids. 
*/ @@ -1399,7 +1398,8 @@ fix_scan_expr(PlannerInfo *root, Node *node, int rtoffset) if (rtoffset != 0 || root->multiexpr_params != NIL || - root->glob->lastPHId != 0) + root->glob->lastPHId != 0 || + root->minmax_aggs != NIL) { return fix_scan_expr_mutator(node, &context); } @@ -1409,7 +1409,8 @@ fix_scan_expr(PlannerInfo *root, Node *node, int rtoffset) * If rtoffset == 0, we don't need to change any Vars, and if there * are no MULTIEXPR subqueries then we don't need to replace * PARAM_MULTIEXPR Params, and if there are no placeholders anywhere - * we won't need to remove them. Then it's OK to just scribble on the + * we won't need to remove them, and if there are no minmax Aggrefs we + * won't need to replace them. Then it's OK to just scribble on the * input node tree instead of copying (since the only change, filling * in any unset opfuncid fields, is harmless). This saves just enough * cycles to be noticeable on trivial queries. @@ -1444,6 +1445,28 @@ fix_scan_expr_mutator(Node *node, fix_scan_expr_context *context) } if (IsA(node, Param)) return fix_param_node(context->root, (Param *) node); + if (IsA(node, Aggref)) + { + Aggref *aggref = (Aggref *) node; + + /* See if the Aggref should be replaced by a Param */ + if (context->root->minmax_aggs != NIL && + list_length(aggref->args) == 1) + { + TargetEntry *curTarget = (TargetEntry *) linitial(aggref->args); + ListCell *lc; + + foreach(lc, context->root->minmax_aggs) + { + MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); + + if (mminfo->aggfnoid == aggref->aggfnoid && + equal(mminfo->target, curTarget->expr)) + return (Node *) copyObject(mminfo->param); + } + } + /* If no match, just fall through to process it normally */ + } if (IsA(node, CurrentOfExpr)) { CurrentOfExpr *cexpr = (CurrentOfExpr *) copyObject(node); @@ -2091,8 +2114,9 @@ fix_join_expr_mutator(Node *node, fix_join_expr_context *context) /* * fix_upper_expr * Modifies an expression tree so that all Var nodes reference outputs - * of a 
subplan. Also performs opcode lookup, and adds regclass OIDs to - * root->glob->relationOids. + * of a subplan. Also looks for Aggref nodes that should be replaced + * by initplan output Params. Also performs opcode lookup, and adds + * regclass OIDs to root->glob->relationOids. * * This is used to fix up target and qual expressions of non-join upper-level * plan nodes, as well as index-only scan nodes. @@ -2169,6 +2193,28 @@ fix_upper_expr_mutator(Node *node, fix_upper_expr_context *context) } if (IsA(node, Param)) return fix_param_node(context->root, (Param *) node); + if (IsA(node, Aggref)) + { + Aggref *aggref = (Aggref *) node; + + /* See if the Aggref should be replaced by a Param */ + if (context->root->minmax_aggs != NIL && + list_length(aggref->args) == 1) + { + TargetEntry *curTarget = (TargetEntry *) linitial(aggref->args); + ListCell *lc; + + foreach(lc, context->root->minmax_aggs) + { + MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); + + if (mminfo->aggfnoid == aggref->aggfnoid && + equal(mminfo->target, curTarget->expr)) + return (Node *) copyObject(mminfo->param); + } + } + /* If no match, just fall through to process it normally */ + } /* Try matching more complex expressions too, if tlist has any */ if (context->subplan_itlist->has_non_vars) { diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c index 31db35cb22..1ff430229d 100644 --- a/src/backend/optimizer/plan/subselect.c +++ b/src/backend/optimizer/plan/subselect.c @@ -478,8 +478,10 @@ make_subplan(PlannerInfo *root, Query *orig_subquery, Query *subquery; bool simple_exists = false; double tuple_fraction; - Plan *plan; PlannerInfo *subroot; + RelOptInfo *final_rel; + Path *best_path; + Plan *plan; List *plan_params; Node *result; @@ -527,18 +529,24 @@ make_subplan(PlannerInfo *root, Query *orig_subquery, /* plan_params should not be in use in current query level */ Assert(root->plan_params == NIL); - /* - * Generate the plan for the subquery. 
- */ - plan = subquery_planner(root->glob, subquery, - root, - false, tuple_fraction, - &subroot); + /* Generate Paths for the subquery */ + subroot = subquery_planner(root->glob, subquery, + root, + false, tuple_fraction); /* Isolate the params needed by this specific subplan */ plan_params = root->plan_params; root->plan_params = NIL; + /* + * Select best Path and turn it into a Plan. At least for now, there + * seems no reason to postpone doing that. + */ + final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL); + best_path = get_cheapest_fractional_path(final_rel, tuple_fraction); + + plan = create_plan(subroot, best_path); + /* And convert to SubPlan or InitPlan format. */ result = build_subplan(root, plan, subroot, plan_params, subLinkType, subLinkId, @@ -568,17 +576,23 @@ make_subplan(PlannerInfo *root, Query *orig_subquery, &newtestexpr, ¶mIds); if (subquery) { - /* Generate the plan for the ANY subquery; we'll need all rows */ - plan = subquery_planner(root->glob, subquery, - root, - false, 0.0, - &subroot); + /* Generate Paths for the ANY subquery; we'll need all rows */ + subroot = subquery_planner(root->glob, subquery, + root, + false, 0.0); /* Isolate the params needed by this specific subplan */ plan_params = root->plan_params; root->plan_params = NIL; + /* Select best Path and turn it into a Plan */ + final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL); + best_path = final_rel->cheapest_total_path; + + plan = create_plan(subroot, best_path); + /* Now we can check if it'll fit in work_mem */ + /* XXX can we check this at the Path stage? 
*/ if (subplan_is_hashable(plan)) { SubPlan *hashplan; @@ -1133,8 +1147,10 @@ SS_process_ctes(PlannerInfo *root) CommonTableExpr *cte = (CommonTableExpr *) lfirst(lc); CmdType cmdType = ((Query *) cte->ctequery)->commandType; Query *subquery; - Plan *plan; PlannerInfo *subroot; + RelOptInfo *final_rel; + Path *best_path; + Plan *plan; SubPlan *splan; int paramid; @@ -1158,13 +1174,12 @@ SS_process_ctes(PlannerInfo *root) Assert(root->plan_params == NIL); /* - * Generate the plan for the CTE query. Always plan for full - * retrieval --- we don't have enough info to predict otherwise. + * Generate Paths for the CTE query. Always plan for full retrieval + * --- we don't have enough info to predict otherwise. */ - plan = subquery_planner(root->glob, subquery, - root, - cte->cterecursive, 0.0, - &subroot); + subroot = subquery_planner(root->glob, subquery, + root, + cte->cterecursive, 0.0); /* * Since the current query level doesn't yet contain any RTEs, it @@ -1174,6 +1189,15 @@ SS_process_ctes(PlannerInfo *root) if (root->plan_params) elog(ERROR, "unexpected outer reference in CTE query"); + /* + * Select best Path and turn it into a Plan. At least for now, there + * seems no reason to postpone doing that. + */ + final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL); + best_path = final_rel->cheapest_total_path; + + plan = create_plan(subroot, best_path); + /* * Make a SubPlan node for it. This is just enough unlike * build_subplan that we can't share code. @@ -2109,35 +2133,70 @@ SS_identify_outer_params(PlannerInfo *root) } /* - * SS_attach_initplans - attach initplans to topmost plan node + * SS_charge_for_initplans - account for cost of initplans in Path costs * - * Attach any initplans created in the current query level to the topmost plan - * node for the query level, and increment that node's cost to account for - * them. 
(The initPlans could actually go in any node at or above where - * they're referenced, but there seems no reason to put them any lower than - * the topmost node for the query level.) + * If any initPlans have been created in the current query level, they will + * get attached to the Plan tree created from whichever Path we select from + * the given rel; so increment all the rel's Paths' costs to account for them. + * + * This is separate from SS_attach_initplans because we might conditionally + * create more initPlans during create_plan(), depending on which Path we + * select. However, Paths that would generate such initPlans are expected + * to have included their cost already. */ void -SS_attach_initplans(PlannerInfo *root, Plan *plan) +SS_charge_for_initplans(PlannerInfo *root, RelOptInfo *final_rel) { + Cost initplan_cost; ListCell *lc; - plan->initPlan = root->init_plans; - foreach(lc, plan->initPlan) + /* Nothing to do if no initPlans */ + if (root->init_plans == NIL) + return; + + /* + * Compute the cost increment just once, since it will be the same for all + * Paths. We assume each initPlan gets run once during top plan startup. + * This is a conservative overestimate, since in fact an initPlan might be + * executed later than plan startup, or even not at all. + */ + initplan_cost = 0; + foreach(lc, root->init_plans) { SubPlan *initsubplan = (SubPlan *) lfirst(lc); - Cost initplan_cost; - /* - * Assume each initPlan gets run once during top plan startup. This - * is a conservative overestimate, since in fact an initPlan might be - * executed later than plan startup, or even not at all. - */ - initplan_cost = initsubplan->startup_cost + initsubplan->per_call_cost; + initplan_cost += initsubplan->startup_cost + initsubplan->per_call_cost; + } + + /* + * Now adjust the costs. 
+ */ + foreach(lc, final_rel->pathlist) + { + Path *path = (Path *) lfirst(lc); - plan->startup_cost += initplan_cost; - plan->total_cost += initplan_cost; + path->startup_cost += initplan_cost; + path->total_cost += initplan_cost; } + + /* We needn't do set_cheapest() here, caller will do it */ +} + +/* + * SS_attach_initplans - attach initplans to topmost plan node + * + * Attach any initplans created in the current query level to the specified + * plan node, which should normally be the topmost node for the query level. + * (The initPlans could actually go in any node at or above where they're + * referenced; but there seems no reason to put them any lower than the + * topmost node, so we don't bother to track exactly where they came from.) + * We do not touch the plan node's cost; the initplans should have been + * accounted for in path costing. + */ +void +SS_attach_initplans(PlannerInfo *root, Plan *plan) +{ + plan->initPlan = root->init_plans; } /* @@ -2298,7 +2357,6 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, /* We must run SS_finalize_plan on the subquery */ rel = find_base_rel(root, sscan->scan.scanrelid); - Assert(rel->subplan == sscan->subplan); SS_finalize_plan(rel->subroot, sscan->subplan); /* Now we can add its extParams to the parent's params */ @@ -2740,21 +2798,35 @@ finalize_primnode(Node *node, finalize_primnode_context *context) } /* - * SS_make_initplan_from_plan - given a plan tree, make it an InitPlan + * SS_make_initplan_output_param - make a Param for an initPlan's output * * The plan is expected to return a scalar value of the given type/collation. + * + * Note that in some cases the initplan may not ever appear in the finished + * plan tree. If that happens, we'll have wasted a PARAM_EXEC slot, which + * is no big deal. 
+ */ +Param * +SS_make_initplan_output_param(PlannerInfo *root, + Oid resulttype, int32 resulttypmod, + Oid resultcollation) +{ + return generate_new_param(root, resulttype, resulttypmod, resultcollation); +} + +/* + * SS_make_initplan_from_plan - given a plan tree, make it an InitPlan + * * We build an EXPR_SUBLINK SubPlan node and put it into the initplan * list for the outer query level. A Param that represents the initplan's - * output is returned. + * output has already been assigned using SS_make_initplan_output_param. */ -Param * +void SS_make_initplan_from_plan(PlannerInfo *root, PlannerInfo *subroot, Plan *plan, - Oid resulttype, int32 resulttypmod, - Oid resultcollation) + Param *prm) { SubPlan *node; - Param *prm; /* * Add the subplan and its PlannerInfo to the global lists. @@ -2769,9 +2841,12 @@ SS_make_initplan_from_plan(PlannerInfo *root, */ node = makeNode(SubPlan); node->subLinkType = EXPR_SUBLINK; + node->plan_id = list_length(root->glob->subplans); + node->plan_name = psprintf("InitPlan %d (returns $%d)", + node->plan_id, prm->paramid); get_first_col_type(plan, &node->firstColType, &node->firstColTypmod, &node->firstColCollation); - node->plan_id = list_length(root->glob->subplans); + node->setParam = list_make1_int(prm->paramid); root->init_plans = lappend(root->init_plans, node); @@ -2780,17 +2855,6 @@ SS_make_initplan_from_plan(PlannerInfo *root, * parParam and args lists remain empty. */ + /* Set costs of SubPlan using info from the plan tree */ cost_subplan(subroot, node, plan); - - /* - * Make a Param that will be the subplan's output. 
- */ - prm = generate_new_param(root, resulttype, resulttypmod, resultcollation); - node->setParam = list_make1_int(prm->paramid); - - /* Label the subplan for EXPLAIN purposes */ - node->plan_name = psprintf("InitPlan %d (returns $%d)", - node->plan_id, prm->paramid); - - return prm; } diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c index b368cf9458..ba6770b4d5 100644 --- a/src/backend/optimizer/prep/prepjointree.c +++ b/src/backend/optimizer/prep/prepjointree.c @@ -907,9 +907,14 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte, subroot->eq_classes = NIL; subroot->append_rel_list = NIL; subroot->rowMarks = NIL; + memset(subroot->upper_rels, 0, sizeof(subroot->upper_rels)); + subroot->processed_tlist = NIL; + subroot->grouping_map = NULL; + subroot->minmax_aggs = NIL; + subroot->hasInheritedTarget = false; subroot->hasRecursion = false; subroot->wt_param_id = -1; - subroot->non_recursive_plan = NULL; + subroot->non_recursive_path = NULL; /* No CTEs to worry about */ Assert(subquery->cteList == NIL); diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index e509a1aa1f..6ea3319e5f 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -40,6 +40,7 @@ #include "nodes/nodeFuncs.h" #include "optimizer/cost.h" #include "optimizer/pathnode.h" +#include "optimizer/paths.h" #include "optimizer/planmain.h" #include "optimizer/planner.h" #include "optimizer/prep.h" @@ -58,35 +59,33 @@ typedef struct int sublevels_up; } adjust_appendrel_attrs_context; -static Plan *recurse_set_operations(Node *setOp, PlannerInfo *root, - double tuple_fraction, +static Path *recurse_set_operations(Node *setOp, PlannerInfo *root, List *colTypes, List *colCollations, bool junkOK, int flag, List *refnames_tlist, - List **sortClauses, double *pNumGroups); -static Plan *generate_recursion_plan(SetOperationStmt *setOp, - PlannerInfo 
*root, double tuple_fraction, + List **pTargetList, + double *pNumGroups); +static Path *generate_recursion_path(SetOperationStmt *setOp, + PlannerInfo *root, List *refnames_tlist, - List **sortClauses); -static Plan *generate_union_plan(SetOperationStmt *op, PlannerInfo *root, - double tuple_fraction, + List **pTargetList); +static Path *generate_union_path(SetOperationStmt *op, PlannerInfo *root, List *refnames_tlist, - List **sortClauses, double *pNumGroups); -static Plan *generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, - double tuple_fraction, + List **pTargetList, + double *pNumGroups); +static Path *generate_nonunion_path(SetOperationStmt *op, PlannerInfo *root, List *refnames_tlist, - List **sortClauses, double *pNumGroups); + List **pTargetList, + double *pNumGroups); static List *recurse_union_children(Node *setOp, PlannerInfo *root, - double tuple_fraction, SetOperationStmt *top_union, - List *refnames_tlist); -static Plan *make_union_unique(SetOperationStmt *op, Plan *plan, - PlannerInfo *root, double tuple_fraction, - List **sortClauses); + List *refnames_tlist, + List **tlist_list); +static Path *make_union_unique(SetOperationStmt *op, Path *path, List *tlist, + PlannerInfo *root); static bool choose_hashed_setop(PlannerInfo *root, List *groupClauses, - Plan *input_plan, + Path *input_path, double dNumGroups, double dNumOutputRows, - double tuple_fraction, const char *construct); static List *generate_setop_tlist(List *colTypes, List *colCollations, int flag, @@ -96,7 +95,7 @@ static List *generate_setop_tlist(List *colTypes, List *colCollations, List *refnames_tlist); static List *generate_append_tlist(List *colTypes, List *colCollations, bool flag, - List *input_plans, + List *input_tlists, List *refnames_tlist); static List *generate_setop_grouplist(SetOperationStmt *op, List *targetlist); static void expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, @@ -120,27 +119,24 @@ static List *adjust_inherited_tlist(List *tlist, 
* Plans the queries for a tree of set operations (UNION/INTERSECT/EXCEPT) * * This routine only deals with the setOperations tree of the given query. - * Any top-level ORDER BY requested in root->parse->sortClause will be added - * when we return to grouping_planner. - * - * tuple_fraction is the fraction of tuples we expect will be retrieved. - * tuple_fraction is interpreted as for grouping_planner(); in particular, - * zero means "all the tuples will be fetched". Any LIMIT present at the - * top level has already been factored into tuple_fraction. + * Any top-level ORDER BY requested in root->parse->sortClause will be handled + * when we return to grouping_planner; likewise for LIMIT. * - * *sortClauses is an output argument: it is set to a list of SortGroupClauses - * representing the result ordering of the topmost set operation. (This will - * be NIL if the output isn't ordered.) + * What we return is an "upperrel" RelOptInfo containing at least one Path + * that implements the set-operation tree. In addition, root->processed_tlist + * receives a targetlist representing the output of the topmost setop node. */ -Plan * -plan_set_operations(PlannerInfo *root, double tuple_fraction, - List **sortClauses) +RelOptInfo * +plan_set_operations(PlannerInfo *root) { Query *parse = root->parse; SetOperationStmt *topop = (SetOperationStmt *) parse->setOperations; Node *node; RangeTblEntry *leftmostRTE; Query *leftmostQuery; + RelOptInfo *setop_rel; + Path *path; + List *top_tlist; Assert(topop && IsA(topop, SetOperationStmt)); @@ -171,54 +167,82 @@ plan_set_operations(PlannerInfo *root, double tuple_fraction, leftmostQuery = leftmostRTE->subquery; Assert(leftmostQuery != NULL); + /* + * We return our results in the (SETOP, NULL) upperrel. For the moment, + * this is also the parent rel of all Paths in the setop tree; we may well + * change that in future. 
+ */ + setop_rel = fetch_upper_rel(root, UPPERREL_SETOP, NULL); + + /* * If the topmost node is a recursive union, it needs special processing. */ if (root->hasRecursion) - return generate_recursion_plan(topop, root, tuple_fraction, + { + path = generate_recursion_path(topop, root, leftmostQuery->targetList, - sortClauses); + &top_tlist); + } + else + { + /* + * Recurse on setOperations tree to generate paths for set ops. The + * final output path should have just the column types shown as the + * output from the top-level node, plus possibly resjunk working + * columns (we can rely on upper-level nodes to deal with that). + */ + path = recurse_set_operations((Node *) topop, root, + topop->colTypes, topop->colCollations, + true, -1, + leftmostQuery->targetList, + &top_tlist, + NULL); + } - /* - * Recurse on setOperations tree to generate plans for set ops. The final - * output plan should have just the column types shown as the output from - * the top-level node, plus possibly resjunk working columns (we can rely - * on upper-level nodes to deal with that). - */ - return recurse_set_operations((Node *) topop, root, tuple_fraction, - topop->colTypes, topop->colCollations, - true, -1, - leftmostQuery->targetList, - sortClauses, NULL); + /* Must return the built tlist into root->processed_tlist. */ + root->processed_tlist = top_tlist; + + /* Add only the final path to the SETOP upperrel. 
*/ + add_path(setop_rel, path); + + /* Select cheapest path (pretty easy at the moment) */ + set_cheapest(setop_rel); + + return setop_rel; } /* * recurse_set_operations * Recursively handle one step in a tree of set operations * - * tuple_fraction: fraction of tuples we expect to retrieve from node * colTypes: OID list of set-op's result column datatypes * colCollations: OID list of set-op's result column collations * junkOK: if true, child resjunk columns may be left in the result * flag: if >= 0, add a resjunk output column indicating value of flag * refnames_tlist: targetlist to take column names from * - * Returns a plan for the subtree, as well as these output parameters: - * *sortClauses: receives list of SortGroupClauses for result plan, if any + * Returns a path for the subtree, as well as these output parameters: + * *pTargetList: receives the fully-fledged tlist for the subtree's top plan * *pNumGroups: if not NULL, we estimate the number of distinct groups * in the result, and store it there * + * The pTargetList output parameter is mostly redundant with the pathtarget + * of the returned path, but for the moment we need it because much of the + * logic in this file depends on flag columns being marked resjunk. Pending + * a redesign of how that works, this is the easy way out. + * * We don't have to care about typmods here: the only allowed difference * between set-op input and output typmods is input is a specific typmod * and output is -1, and that does not require a coercion. 
*/ -static Plan * +static Path * recurse_set_operations(Node *setOp, PlannerInfo *root, - double tuple_fraction, List *colTypes, List *colCollations, bool junkOK, int flag, List *refnames_tlist, - List **sortClauses, double *pNumGroups) + List **pTargetList, + double *pNumGroups) { if (IsA(setOp, RangeTblRef)) { @@ -227,14 +251,16 @@ recurse_set_operations(Node *setOp, PlannerInfo *root, Query *subquery = rte->subquery; RelOptInfo *rel; PlannerInfo *subroot; - Plan *subplan, - *plan; + RelOptInfo *final_rel; + Path *subpath; + Path *path; + List *tlist; Assert(subquery != NULL); /* * We need to build a RelOptInfo for each leaf subquery. This isn't - * used for anything here, but it carries the subroot data structures + * used for much here, but it carries the subroot data structures * forward to setrefs.c processing. */ rel = build_simple_rel(root, rtr->rtindex, RELOPT_BASEREL); @@ -242,17 +268,11 @@ recurse_set_operations(Node *setOp, PlannerInfo *root, /* plan_params should not be in use in current query level */ Assert(root->plan_params == NIL); - /* - * Generate plan for primitive subquery - */ - subplan = subquery_planner(root->glob, subquery, - root, - false, tuple_fraction, - &subroot); - - /* Save subroot and subplan in RelOptInfo for setrefs.c */ - rel->subplan = subplan; - rel->subroot = subroot; + /* Generate a subroot and Paths for the subquery */ + subroot = rel->subroot = subquery_planner(root->glob, subquery, + root, + false, + root->tuple_fraction); /* * It should not be possible for the primitive query to contain any @@ -261,6 +281,50 @@ recurse_set_operations(Node *setOp, PlannerInfo *root, if (root->plan_params) elog(ERROR, "unexpected outer reference in set operation subquery"); + /* + * Mark rel with estimated output rows, width, etc. Note that we have + * to do this before generating outer-query paths, else + * cost_subqueryscan is not happy. 
+ */ + set_subquery_size_estimates(root, rel); + + /* + * For the moment, we consider only a single Path for the subquery. + * This should change soon (make it look more like + * set_subquery_pathlist). + */ + final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL); + subpath = get_cheapest_fractional_path(final_rel, + root->tuple_fraction); + + /* + * Stick a SubqueryScanPath atop that. + * + * We don't bother to determine the subquery's output ordering since + * it won't be reflected in the set-op result anyhow; so just label + * the SubqueryScanPath with nil pathkeys. (XXX that should change + * soon too, likely.) + */ + path = (Path *) create_subqueryscan_path(root, rel, subpath, + NIL, NULL); + + /* + * Figure out the appropriate target list, and update the + * SubqueryScanPath with the PathTarget form of that. + */ + tlist = generate_setop_tlist(colTypes, colCollations, + flag, + rtr->rtindex, + true, + subroot->processed_tlist, + refnames_tlist); + + path = apply_projection_to_path(root, rel, path, + create_pathtarget(root, tlist)); + + /* Return the fully-fledged tlist to caller, too */ + *pTargetList = tlist; + /* * Estimate number of groups if caller wants it. 
If the subquery used * grouping or aggregation, its output is probably mostly unique @@ -271,50 +335,32 @@ recurse_set_operations(Node *setOp, PlannerInfo *root, if (subquery->groupClause || subquery->groupingSets || subquery->distinctClause || subroot->hasHavingQual || subquery->hasAggs) - *pNumGroups = subplan->plan_rows; + *pNumGroups = subpath->rows; else *pNumGroups = estimate_num_groups(subroot, - get_tlist_exprs(subquery->targetList, false), - subplan->plan_rows, + get_tlist_exprs(subroot->processed_tlist, false), + subpath->rows, NULL); } - /* - * Add a SubqueryScan with the caller-requested targetlist - */ - plan = (Plan *) - make_subqueryscan(generate_setop_tlist(colTypes, colCollations, - flag, - rtr->rtindex, - true, - subplan->targetlist, - refnames_tlist), - NIL, - rtr->rtindex, - subplan); - - /* - * We don't bother to determine the subquery's output ordering since - * it won't be reflected in the set-op result anyhow. - */ - *sortClauses = NIL; - - return plan; + return (Path *) path; } else if (IsA(setOp, SetOperationStmt)) { SetOperationStmt *op = (SetOperationStmt *) setOp; - Plan *plan; + Path *path; /* UNIONs are much different from INTERSECT/EXCEPT */ if (op->op == SETOP_UNION) - plan = generate_union_plan(op, root, tuple_fraction, + path = generate_union_path(op, root, refnames_tlist, - sortClauses, pNumGroups); + pTargetList, + pNumGroups); else - plan = generate_nonunion_plan(op, root, tuple_fraction, + path = generate_nonunion_path(op, root, refnames_tlist, - sortClauses, pNumGroups); + pTargetList, + pNumGroups); /* * If necessary, add a Result node to project the caller-requested @@ -330,45 +376,49 @@ recurse_set_operations(Node *setOp, PlannerInfo *root, * generate_setop_tlist() to use varno 0. 
*/ if (flag >= 0 || - !tlist_same_datatypes(plan->targetlist, colTypes, junkOK) || - !tlist_same_collations(plan->targetlist, colCollations, junkOK)) + !tlist_same_datatypes(*pTargetList, colTypes, junkOK) || + !tlist_same_collations(*pTargetList, colCollations, junkOK)) { - plan = (Plan *) - make_result(root, - generate_setop_tlist(colTypes, colCollations, - flag, - 0, - false, - plan->targetlist, - refnames_tlist), - NULL, - plan); + *pTargetList = generate_setop_tlist(colTypes, colCollations, + flag, + 0, + false, + *pTargetList, + refnames_tlist); + path = apply_projection_to_path(root, + path->parent, + path, + create_pathtarget(root, + *pTargetList)); } - return plan; + return path; } else { elog(ERROR, "unrecognized node type: %d", (int) nodeTag(setOp)); + *pTargetList = NIL; return NULL; /* keep compiler quiet */ } } /* - * Generate plan for a recursive UNION node + * Generate path for a recursive UNION node */ -static Plan * -generate_recursion_plan(SetOperationStmt *setOp, PlannerInfo *root, - double tuple_fraction, +static Path * +generate_recursion_path(SetOperationStmt *setOp, PlannerInfo *root, List *refnames_tlist, - List **sortClauses) + List **pTargetList) { - Plan *plan; - Plan *lplan; - Plan *rplan; + RelOptInfo *result_rel = fetch_upper_rel(root, UPPERREL_SETOP, NULL); + Path *path; + Path *lpath; + Path *rpath; + List *lpath_tlist; + List *rpath_tlist; List *tlist; List *groupList; - long numGroups; + double dNumGroups; /* Parser should have rejected other cases */ if (setOp->op != SETOP_UNION) @@ -380,37 +430,41 @@ generate_recursion_plan(SetOperationStmt *setOp, PlannerInfo *root, * Unlike a regular UNION node, process the left and right inputs * separately without any intention of combining them into one Append. 
*/ - lplan = recurse_set_operations(setOp->larg, root, tuple_fraction, + lpath = recurse_set_operations(setOp->larg, root, setOp->colTypes, setOp->colCollations, false, -1, - refnames_tlist, sortClauses, NULL); - /* The right plan will want to look at the left one ... */ - root->non_recursive_plan = lplan; - rplan = recurse_set_operations(setOp->rarg, root, tuple_fraction, + refnames_tlist, + &lpath_tlist, + NULL); + /* The right path will want to look at the left one ... */ + root->non_recursive_path = lpath; + rpath = recurse_set_operations(setOp->rarg, root, setOp->colTypes, setOp->colCollations, false, -1, - refnames_tlist, sortClauses, NULL); - root->non_recursive_plan = NULL; + refnames_tlist, + &rpath_tlist, + NULL); + root->non_recursive_path = NULL; /* - * Generate tlist for RecursiveUnion plan node --- same as in Append cases + * Generate tlist for RecursiveUnion path node --- same as in Append cases */ tlist = generate_append_tlist(setOp->colTypes, setOp->colCollations, false, - list_make2(lplan, rplan), + list_make2(lpath_tlist, rpath_tlist), refnames_tlist); + *pTargetList = tlist; + /* * If UNION, identify the grouping operators */ if (setOp->all) { groupList = NIL; - numGroups = 0; + dNumGroups = 0; } else { - double dNumGroups; - /* Identify the grouping semantics */ groupList = generate_setop_grouplist(setOp, tlist); @@ -425,36 +479,41 @@ generate_recursion_plan(SetOperationStmt *setOp, PlannerInfo *root, * For the moment, take the number of distinct groups as equal to the * total input size, ie, the worst case. */ - dNumGroups = lplan->plan_rows + rplan->plan_rows * 10; - - /* Also convert to long int --- but 'ware overflow! */ - numGroups = (long) Min(dNumGroups, (double) LONG_MAX); + dNumGroups = lpath->rows + rpath->rows * 10; } /* - * And make the plan node. + * And make the path node. 
*/ - plan = (Plan *) make_recursive_union(tlist, lplan, rplan, - root->wt_param_id, - groupList, numGroups); - - *sortClauses = NIL; /* RecursiveUnion result is always unsorted */ - - return plan; + path = (Path *) create_recursiveunion_path(root, + result_rel, + lpath, + rpath, + create_pathtarget(root, tlist), + groupList, + root->wt_param_id, + dNumGroups); + + return path; } /* - * Generate plan for a UNION or UNION ALL node + * Generate path for a UNION or UNION ALL node */ -static Plan * -generate_union_plan(SetOperationStmt *op, PlannerInfo *root, - double tuple_fraction, +static Path * +generate_union_path(SetOperationStmt *op, PlannerInfo *root, List *refnames_tlist, - List **sortClauses, double *pNumGroups) + List **pTargetList, + double *pNumGroups) { - List *planlist; + RelOptInfo *result_rel = fetch_upper_rel(root, UPPERREL_SETOP, NULL); + double save_fraction = root->tuple_fraction; + List *pathlist; + List *child_tlists1; + List *child_tlists2; + List *tlist_list; List *tlist; - Plan *plan; + Path *path; /* * If plain UNION, tell children to fetch all tuples. @@ -468,20 +527,21 @@ generate_union_plan(SetOperationStmt *op, PlannerInfo *root, * of preferring fast-start plans. */ if (!op->all) - tuple_fraction = 0.0; + root->tuple_fraction = 0.0; /* * If any of my children are identical UNION nodes (same op, all-flag, and * colTypes) then they can be merged into this node so that we generate * only one Append and unique-ification for the lot. Recurse to find such - * nodes and compute their children's plans. + * nodes and compute their children's paths. 
*/ - planlist = list_concat(recurse_union_children(op->larg, root, - tuple_fraction, - op, refnames_tlist), + pathlist = list_concat(recurse_union_children(op->larg, root, + op, refnames_tlist, + &child_tlists1), recurse_union_children(op->rarg, root, - tuple_fraction, - op, refnames_tlist)); + op, refnames_tlist, + &child_tlists2)); + tlist_list = list_concat(child_tlists1, child_tlists2); /* * Generate tlist for Append plan node. @@ -491,21 +551,24 @@ generate_union_plan(SetOperationStmt *op, PlannerInfo *root, * next plan level up. */ tlist = generate_append_tlist(op->colTypes, op->colCollations, false, - planlist, refnames_tlist); + tlist_list, refnames_tlist); + + *pTargetList = tlist; /* * Append the child results together. */ - plan = (Plan *) make_append(planlist, tlist); + path = (Path *) create_append_path(result_rel, pathlist, NULL, 0); + + /* We have to manually jam the right tlist into the path; ick */ + path->pathtarget = create_pathtarget(root, tlist); /* - * For UNION ALL, we just need the Append plan. For UNION, need to add + * For UNION ALL, we just need the Append path. For UNION, need to add * node(s) to remove duplicates. */ - if (op->all) - *sortClauses = NIL; /* result of UNION ALL is always unsorted */ - else - plan = make_union_unique(op, plan, root, tuple_fraction, sortClauses); + if (!op->all) + path = make_union_unique(op, path, tlist, root); /* * Estimate number of groups if caller wants it. For now we just assume @@ -513,49 +576,63 @@ generate_union_plan(SetOperationStmt *op, PlannerInfo *root, * we want worst-case estimates anyway. 
*/ if (pNumGroups) - *pNumGroups = plan->plan_rows; + *pNumGroups = path->rows; + + /* Undo effects of possibly forcing tuple_fraction to 0 */ + root->tuple_fraction = save_fraction; - return plan; + return path; } /* - * Generate plan for an INTERSECT, INTERSECT ALL, EXCEPT, or EXCEPT ALL node + * Generate path for an INTERSECT, INTERSECT ALL, EXCEPT, or EXCEPT ALL node */ -static Plan * -generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, - double tuple_fraction, +static Path * +generate_nonunion_path(SetOperationStmt *op, PlannerInfo *root, List *refnames_tlist, - List **sortClauses, double *pNumGroups) + List **pTargetList, + double *pNumGroups) { - Plan *lplan, - *rplan, - *plan; - List *tlist, + RelOptInfo *result_rel = fetch_upper_rel(root, UPPERREL_SETOP, NULL); + double save_fraction = root->tuple_fraction; + Path *lpath, + *rpath, + *path; + List *lpath_tlist, + *rpath_tlist, + *tlist_list, + *tlist, *groupList, - *planlist, - *child_sortclauses; + *pathlist; double dLeftGroups, dRightGroups, dNumGroups, dNumOutputRows; - long numGroups; bool use_hash; SetOpCmd cmd; int firstFlag; + /* + * Tell children to fetch all tuples. + */ + root->tuple_fraction = 0.0; + /* Recurse on children, ensuring their outputs are marked */ - lplan = recurse_set_operations(op->larg, root, - 0.0 /* all tuples needed */ , + lpath = recurse_set_operations(op->larg, root, op->colTypes, op->colCollations, false, 0, refnames_tlist, - &child_sortclauses, &dLeftGroups); - rplan = recurse_set_operations(op->rarg, root, - 0.0 /* all tuples needed */ , + &lpath_tlist, + &dLeftGroups); + rpath = recurse_set_operations(op->rarg, root, op->colTypes, op->colCollations, false, 1, refnames_tlist, - &child_sortclauses, &dRightGroups); + &rpath_tlist, + &dRightGroups); + + /* Undo effects of forcing tuple_fraction to 0 */ + root->tuple_fraction = save_fraction; /* * For EXCEPT, we must put the left input first. 
For INTERSECT, either @@ -565,12 +642,14 @@ generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, */ if (op->op == SETOP_EXCEPT || dLeftGroups <= dRightGroups) { - planlist = list_make2(lplan, rplan); + pathlist = list_make2(lpath, rpath); + tlist_list = list_make2(lpath_tlist, rpath_tlist); firstFlag = 0; } else { - planlist = list_make2(rplan, lplan); + pathlist = list_make2(rpath, lpath); + tlist_list = list_make2(rpath_tlist, lpath_tlist); firstFlag = 1; } @@ -584,22 +663,24 @@ generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, * confused. */ tlist = generate_append_tlist(op->colTypes, op->colCollations, true, - planlist, refnames_tlist); + tlist_list, refnames_tlist); + + *pTargetList = tlist; /* * Append the child results together. */ - plan = (Plan *) make_append(planlist, tlist); + path = (Path *) create_append_path(result_rel, pathlist, NULL, 0); + + /* We have to manually jam the right tlist into the path; ick */ + path->pathtarget = create_pathtarget(root, tlist); /* Identify the grouping semantics */ groupList = generate_setop_grouplist(op, tlist); /* punt if nothing to group on (can this happen?) */ if (groupList == NIL) - { - *sortClauses = NIL; - return plan; - } + return path; /* * Estimate number of distinct groups that we'll need hashtable entries @@ -612,29 +693,32 @@ generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, if (op->op == SETOP_EXCEPT) { dNumGroups = dLeftGroups; - dNumOutputRows = op->all ? lplan->plan_rows : dNumGroups; + dNumOutputRows = op->all ? lpath->rows : dNumGroups; } else { dNumGroups = Min(dLeftGroups, dRightGroups); - dNumOutputRows = op->all ? Min(lplan->plan_rows, rplan->plan_rows) : dNumGroups; + dNumOutputRows = op->all ? Min(lpath->rows, rpath->rows) : dNumGroups; } - /* Also convert to long int --- but 'ware overflow! */ - numGroups = (long) Min(dNumGroups, (double) LONG_MAX); - /* * Decide whether to hash or sort, and add a sort node if needed. 
*/ - use_hash = choose_hashed_setop(root, groupList, plan, - dNumGroups, dNumOutputRows, tuple_fraction, + use_hash = choose_hashed_setop(root, groupList, path, + dNumGroups, dNumOutputRows, (op->op == SETOP_INTERSECT) ? "INTERSECT" : "EXCEPT"); if (!use_hash) - plan = (Plan *) make_sort_from_sortclauses(root, groupList, plan); + path = (Path *) create_sort_path(root, + result_rel, + path, + make_pathkeys_for_sortclauses(root, + groupList, + tlist), + -1.0); /* - * Finally, add a SetOp plan node to generate the correct output. + * Finally, add a SetOp path node to generate the correct output. */ switch (op->op) { @@ -649,19 +733,21 @@ generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, cmd = SETOPCMD_INTERSECT; /* keep compiler quiet */ break; } - plan = (Plan *) make_setop(cmd, use_hash ? SETOP_HASHED : SETOP_SORTED, - plan, groupList, - list_length(op->colTypes) + 1, - use_hash ? firstFlag : -1, - numGroups, dNumOutputRows); - - /* Result is sorted only if we're not hashing */ - *sortClauses = use_hash ? NIL : groupList; + path = (Path *) create_setop_path(root, + result_rel, + path, + cmd, + use_hash ? SETOP_HASHED : SETOP_SORTED, + groupList, + list_length(op->colTypes) + 1, + use_hash ? firstFlag : -1, + dNumGroups, + dNumOutputRows); if (pNumGroups) *pNumGroups = dNumGroups; - return plan; + return path; } /* @@ -675,15 +761,16 @@ generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root, * collations have the same notion of equality. It is valid from an * implementation standpoint because we don't care about the ordering of * a UNION child's result: UNION ALL results are always unordered, and - * generate_union_plan will force a fresh sort if the top level is a UNION. + * generate_union_path will force a fresh sort if the top level is a UNION. 
*/ static List * recurse_union_children(Node *setOp, PlannerInfo *root, - double tuple_fraction, SetOperationStmt *top_union, - List *refnames_tlist) + List *refnames_tlist, + List **tlist_list) { - List *child_sortclauses; + List *result; + List *child_tlist; if (IsA(setOp, SetOperationStmt)) { @@ -693,15 +780,20 @@ recurse_union_children(Node *setOp, PlannerInfo *root, (op->all == top_union->all || op->all) && equal(op->colTypes, top_union->colTypes)) { - /* Same UNION, so fold children into parent's subplan list */ - return list_concat(recurse_union_children(op->larg, root, - tuple_fraction, - top_union, - refnames_tlist), - recurse_union_children(op->rarg, root, - tuple_fraction, - top_union, - refnames_tlist)); + /* Same UNION, so fold children into parent's subpath list */ + List *child_tlists1; + List *child_tlists2; + + result = list_concat(recurse_union_children(op->larg, root, + top_union, + refnames_tlist, + &child_tlists1), + recurse_union_children(op->rarg, root, + top_union, + refnames_tlist, + &child_tlists2)); + *tlist_list = list_concat(child_tlists1, child_tlists2); + return result; } } @@ -714,36 +806,34 @@ recurse_union_children(Node *setOp, PlannerInfo *root, * tuples have junk and some don't. This case only arises when we have an * EXCEPT or INTERSECT as child, else there won't be resjunk anyway. */ - return list_make1(recurse_set_operations(setOp, root, - tuple_fraction, - top_union->colTypes, - top_union->colCollations, - false, -1, - refnames_tlist, - &child_sortclauses, NULL)); + result = list_make1(recurse_set_operations(setOp, root, + top_union->colTypes, + top_union->colCollations, + false, -1, + refnames_tlist, + &child_tlist, + NULL)); + *tlist_list = list_make1(child_tlist); + return result; } /* - * Add nodes to the given plan tree to unique-ify the result of a UNION. + * Add nodes to the given path tree to unique-ify the result of a UNION. 
*/ -static Plan * -make_union_unique(SetOperationStmt *op, Plan *plan, - PlannerInfo *root, double tuple_fraction, - List **sortClauses) +static Path * +make_union_unique(SetOperationStmt *op, Path *path, List *tlist, + PlannerInfo *root) { + RelOptInfo *result_rel = fetch_upper_rel(root, UPPERREL_SETOP, NULL); List *groupList; double dNumGroups; - long numGroups; /* Identify the grouping semantics */ - groupList = generate_setop_grouplist(op, plan->targetlist); + groupList = generate_setop_grouplist(op, tlist); /* punt if nothing to group on (can this happen?) */ if (groupList == NIL) - { - *sortClauses = NIL; - return plan; - } + return path; /* * XXX for the moment, take the number of distinct groups as equal to the @@ -753,45 +843,44 @@ make_union_unique(SetOperationStmt *op, Plan *plan, * as well the propensity of novices to write UNION rather than UNION ALL * even when they don't expect any duplicates... */ - dNumGroups = plan->plan_rows; - - /* Also convert to long int --- but 'ware overflow! 
*/ - numGroups = (long) Min(dNumGroups, (double) LONG_MAX); + dNumGroups = path->rows; /* Decide whether to hash or sort */ - if (choose_hashed_setop(root, groupList, plan, - dNumGroups, dNumGroups, tuple_fraction, + if (choose_hashed_setop(root, groupList, path, + dNumGroups, dNumGroups, "UNION")) { /* Hashed aggregate plan --- no sort needed */ - plan = (Plan *) make_agg(root, - plan->targetlist, - NIL, - AGG_HASHED, - NULL, - list_length(groupList), - extract_grouping_cols(groupList, - plan->targetlist), - extract_grouping_ops(groupList), - NIL, - numGroups, - false, - true, - plan); - /* Hashed aggregation produces randomly-ordered results */ - *sortClauses = NIL; + path = (Path *) create_agg_path(root, + result_rel, + path, + create_pathtarget(root, tlist), + AGG_HASHED, + groupList, + NIL, + NULL, + dNumGroups); } else { /* Sort and Unique */ - plan = (Plan *) make_sort_from_sortclauses(root, groupList, plan); - plan = (Plan *) make_unique(plan, groupList); - plan->plan_rows = dNumGroups; - /* We know the sort order of the result */ - *sortClauses = groupList; + path = (Path *) create_sort_path(root, + result_rel, + path, + make_pathkeys_for_sortclauses(root, + groupList, + tlist), + -1.0); + /* We have to manually jam the right tlist into the path; ick */ + path->pathtarget = create_pathtarget(root, tlist); + path = (Path *) create_upper_unique_path(root, + result_rel, + path, + list_length(path->pathkeys), + dNumGroups); } - return plan; + return path; } /* @@ -799,9 +888,8 @@ make_union_unique(SetOperationStmt *op, Plan *plan, */ static bool choose_hashed_setop(PlannerInfo *root, List *groupClauses, - Plan *input_plan, + Path *input_path, double dNumGroups, double dNumOutputRows, - double tuple_fraction, const char *construct) { int numGroupCols = list_length(groupClauses); @@ -810,6 +898,7 @@ choose_hashed_setop(PlannerInfo *root, List *groupClauses, Size hashentrysize; Path hashed_p; Path sorted_p; + double tuple_fraction; /* Check whether the operators 
support sorting or hashing */ can_sort = grouping_is_sortable(groupClauses); @@ -837,7 +926,7 @@ choose_hashed_setop(PlannerInfo *root, List *groupClauses, * Don't do it if it doesn't look like the hashtable will fit into * work_mem. */ - hashentrysize = MAXALIGN(input_plan->plan_width) + MAXALIGN(SizeofMinimalTupleHeader); + hashentrysize = MAXALIGN(input_path->pathtarget->width) + MAXALIGN(SizeofMinimalTupleHeader); if (hashentrysize * dNumGroups > work_mem * 1024L) return false; @@ -855,27 +944,28 @@ choose_hashed_setop(PlannerInfo *root, List *groupClauses, */ cost_agg(&hashed_p, root, AGG_HASHED, NULL, numGroupCols, dNumGroups, - input_plan->startup_cost, input_plan->total_cost, - input_plan->plan_rows); + input_path->startup_cost, input_path->total_cost, + input_path->rows); /* * Now for the sorted case. Note that the input is *always* unsorted, * since it was made by appending unrelated sub-relations together. */ - sorted_p.startup_cost = input_plan->startup_cost; - sorted_p.total_cost = input_plan->total_cost; + sorted_p.startup_cost = input_path->startup_cost; + sorted_p.total_cost = input_path->total_cost; /* XXX cost_sort doesn't actually look at pathkeys, so just pass NIL */ cost_sort(&sorted_p, root, NIL, sorted_p.total_cost, - input_plan->plan_rows, input_plan->plan_width, + input_path->rows, input_path->pathtarget->width, 0.0, work_mem, -1.0); cost_group(&sorted_p, root, numGroupCols, dNumGroups, sorted_p.startup_cost, sorted_p.total_cost, - input_plan->plan_rows); + input_path->rows); /* * Now make the decision using the top-level tuple fraction. First we * have to convert an absolute count (LIMIT) into fractional form. 
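[Reviewer note, not part of the patch: the work_mem gate in choose_hashed_setop above can be sketched standalone. Everything here is simplified and illustrative — the MAXALIGN macro and the minimal-tuple-header size are assumed stand-ins, and `hashtable_fits` is not a PostgreSQL function.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for PostgreSQL's macros/constants (assumed values) */
#define MAXALIGN(len) (((size_t) (len) + 7) & ~((size_t) 7))
#define SIZEOF_MINIMAL_TUPLE_HEADER 16  /* illustrative, not the real value */

/*
 * Mirror of the gate in choose_hashed_setop: reject the hashed strategy
 * when the estimated hash table would not fit in work_mem (kilobytes).
 * Each entry costs the aligned tuple width plus an aligned header.
 */
bool
hashtable_fits(int tuple_width, double dNumGroups, int work_mem_kb)
{
    size_t hashentrysize = MAXALIGN(tuple_width) +
        MAXALIGN(SIZEOF_MINIMAL_TUPLE_HEADER);

    return hashentrysize * dNumGroups <= work_mem_kb * 1024.0;
}
```

With a 32-byte tuple width each entry is 48 bytes under these assumptions, so a thousand groups fit easily in a 4 MB work_mem while a billion groups do not.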
*/ + tuple_fraction = root->tuple_fraction; if (tuple_fraction >= 1.0) tuple_fraction /= dNumOutputRows; @@ -995,6 +1085,14 @@ generate_setop_tlist(List *colTypes, List *colCollations, (AttrNumber) resno++, pstrdup(reftle->resname), false); + + /* + * By convention, all non-resjunk columns in a setop tree have + * ressortgroupref equal to their resno. In some cases the ref isn't + * needed, but this is a cleaner way than modifying the tlist later. + */ + tle->ressortgroupref = tle->resno; + tlist = lappend(tlist, tle); } @@ -1025,17 +1123,21 @@ generate_setop_tlist(List *colTypes, List *colCollations, * colTypes: OID list of set-op's result column datatypes * colCollations: OID list of set-op's result column collations * flag: true to create a flag column copied up from subplans - * input_plans: list of sub-plans of the Append + * input_tlists: list of tlists for sub-plans of the Append * refnames_tlist: targetlist to take column names from * * The entries in the Append's targetlist should always be simple Vars; * we just have to make sure they have the right datatypes/typmods/collations. * The Vars are always generated with varno 0. + * + * XXX a problem with the varno-zero approach is that set_pathtarget_cost_width + * cannot figure out a realistic width for the tlist we make here. But we + * ought to refactor this code to produce a PathTarget directly, anyway. 
*/ static List * generate_append_tlist(List *colTypes, List *colCollations, bool flag, - List *input_plans, + List *input_tlists, List *refnames_tlist) { List *tlist = NIL; @@ -1046,7 +1148,7 @@ generate_append_tlist(List *colTypes, List *colCollations, int colindex; TargetEntry *tle; Node *expr; - ListCell *planl; + ListCell *tlistl; int32 *colTypmods; /* @@ -1057,16 +1159,16 @@ generate_append_tlist(List *colTypes, List *colCollations, */ colTypmods = (int32 *) palloc(list_length(colTypes) * sizeof(int32)); - foreach(planl, input_plans) + foreach(tlistl, input_tlists) { - Plan *subplan = (Plan *) lfirst(planl); - ListCell *subtlist; + List *subtlist = (List *) lfirst(tlistl); + ListCell *subtlistl; curColType = list_head(colTypes); colindex = 0; - foreach(subtlist, subplan->targetlist) + foreach(subtlistl, subtlist) { - TargetEntry *subtle = (TargetEntry *) lfirst(subtlist); + TargetEntry *subtle = (TargetEntry *) lfirst(subtlistl); if (subtle->resjunk) continue; @@ -1076,7 +1178,7 @@ generate_append_tlist(List *colTypes, List *colCollations, /* If first subplan, copy the typmod; else compare */ int32 subtypmod = exprTypmod((Node *) subtle->expr); - if (planl == list_head(input_plans)) + if (tlistl == list_head(input_tlists)) colTypmods[colindex] = subtypmod; else if (subtypmod != colTypmods[colindex]) colTypmods[colindex] = -1; @@ -1116,6 +1218,14 @@ generate_append_tlist(List *colTypes, List *colCollations, (AttrNumber) resno++, pstrdup(reftle->resname), false); + + /* + * By convention, all non-resjunk columns in a setop tree have + * ressortgroupref equal to their resno. In some cases the ref isn't + * needed, but this is a cleaner way than modifying the tlist later. 
+ */ + tle->ressortgroupref = tle->resno; + tlist = lappend(tlist, tle); } @@ -1150,7 +1260,7 @@ generate_append_tlist(List *colTypes, List *colCollations, * list, except that the entries do not have sortgrouprefs set because * the parser output representation doesn't include a tlist for each * setop. So what we need to do here is copy that list and install - * proper sortgrouprefs into it and into the targetlist. + * proper sortgrouprefs into it (copying those from the targetlist). */ static List * generate_setop_grouplist(SetOperationStmt *op, List *targetlist) @@ -1158,7 +1268,6 @@ generate_setop_grouplist(SetOperationStmt *op, List *targetlist) List *grouplist = (List *) copyObject(op->groupClauses); ListCell *lg; ListCell *lt; - Index refno = 1; lg = list_head(grouplist); foreach(lt, targetlist) @@ -1166,11 +1275,15 @@ generate_setop_grouplist(SetOperationStmt *op, List *targetlist) TargetEntry *tle = (TargetEntry *) lfirst(lt); SortGroupClause *sgc; - /* tlist shouldn't have any sortgrouprefs yet */ - Assert(tle->ressortgroupref == 0); - if (tle->resjunk) + { + /* resjunk columns should not have sortgrouprefs */ + Assert(tle->ressortgroupref == 0); continue; /* ignore resjunk columns */ + } + + /* non-resjunk columns should have sortgroupref = resno */ + Assert(tle->ressortgroupref == tle->resno); /* non-resjunk columns should have grouping clauses */ Assert(lg != NULL); @@ -1178,8 +1291,7 @@ generate_setop_grouplist(SetOperationStmt *op, List *targetlist) lg = lnext(lg); Assert(sgc->tleSortGroupRef == 0); - /* we could use assignSortGroupRef here, but seems a bit silly */ - sgc->tleSortGroupRef = tle->ressortgroupref = refno++; + sgc->tleSortGroupRef = tle->ressortgroupref; } Assert(lg == NULL); return grouplist; diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 9417587abf..19c15709a4 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -1063,7 +1063,7 @@ 
create_bitmap_heap_path(PlannerInfo *root, pathnode->path.param_info = get_baserel_parampathinfo(root, rel, required_outer); pathnode->path.parallel_aware = false; - pathnode->path.parallel_safe = bitmapqual->parallel_safe; + pathnode->path.parallel_safe = rel->consider_parallel; pathnode->path.parallel_degree = 0; pathnode->path.pathkeys = NIL; /* always unordered */ @@ -1208,7 +1208,7 @@ create_append_path(RelOptInfo *rel, List *subpaths, Relids required_outer, * Compute rows and costs as sums of subplan rows and costs. We charge * nothing extra for the Append itself, which perhaps is too optimistic, * but since it doesn't do any selection or projection, it is a pretty - * cheap node. If you change this, see also make_append(). + * cheap node. */ pathnode->path.rows = 0; pathnode->path.startup_cost = 0; @@ -1323,16 +1323,18 @@ create_merge_append_path(PlannerInfo *root, /* * create_result_path * Creates a path representing a Result-and-nothing-else plan. - * This is only used for the case of a query with an empty jointree. + * + * This is only used for degenerate cases, such as a query with an empty + * jointree. */ ResultPath * -create_result_path(RelOptInfo *rel, List *quals) +create_result_path(RelOptInfo *rel, PathTarget *target, List *quals) { ResultPath *pathnode = makeNode(ResultPath); pathnode->path.pathtype = T_Result; pathnode->path.parent = rel; - pathnode->path.pathtarget = &(rel->reltarget); + pathnode->path.pathtarget = target; pathnode->path.param_info = NULL; /* there are no other rels... */ pathnode->path.parallel_aware = false; pathnode->path.parallel_safe = rel->consider_parallel; @@ -1342,8 +1344,9 @@ create_result_path(RelOptInfo *rel, List *quals) /* Hardly worth defining a cost_result() function ... 
just do it */ pathnode->path.rows = 1; - pathnode->path.startup_cost = 0; - pathnode->path.total_cost = cpu_tuple_cost; + pathnode->path.startup_cost = target->cost.startup; + pathnode->path.total_cost = target->cost.startup + + cpu_tuple_cost + target->cost.per_tuple; /* * In theory we should include the qual eval cost as well, but at present @@ -1351,8 +1354,8 @@ create_result_path(RelOptInfo *rel, List *quals) * again in make_result; since this is only used for degenerate cases, * nothing interesting will be done with the path cost values. * - * (Likewise, we don't worry about pathtarget->cost since that tlist will - * be empty at this point.) + * XXX should refactor so that make_result does not do costing work, at + * which point this will need to do it honestly. */ return pathnode; @@ -1375,8 +1378,9 @@ create_material_path(RelOptInfo *rel, Path *subpath) pathnode->path.pathtarget = &(rel->reltarget); pathnode->path.param_info = subpath->param_info; pathnode->path.parallel_aware = false; - pathnode->path.parallel_safe = subpath->parallel_safe; - pathnode->path.parallel_degree = 0; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; pathnode->path.pathkeys = subpath->pathkeys; pathnode->subpath = subpath; @@ -1439,8 +1443,9 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, pathnode->path.pathtarget = &(rel->reltarget); pathnode->path.param_info = subpath->param_info; pathnode->path.parallel_aware = false; - pathnode->path.parallel_safe = subpath->parallel_safe; - pathnode->path.parallel_degree = 0; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; /* * Assume the output is unsorted, since we don't necessarily have pathkeys @@ -1540,7 +1545,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, * Charge one cpu_operator_cost per 
comparison per input tuple. We * assume all columns get compared at most of the tuples. (XXX * probably this is an overestimate.) This should agree with - * make_unique. + * create_upper_unique_path. */ sort_path.total_cost += cpu_operator_cost * rel->rows * numCols; } @@ -1607,8 +1612,37 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, } /* - * create_gather_path + * translate_sub_tlist - get subquery column numbers represented by tlist + * + * The given targetlist usually contains only Vars referencing the given relid. + * Extract their varattnos (ie, the column numbers of the subquery) and return + * as an integer List. * + * If any of the tlist items is not a simple Var, we cannot determine whether + * the subquery's uniqueness condition (if any) matches ours, so punt and + * return NIL. + */ +static List * +translate_sub_tlist(List *tlist, int relid) +{ + List *result = NIL; + ListCell *l; + + foreach(l, tlist) + { + Var *var = (Var *) lfirst(l); + + if (!var || !IsA(var, Var) || + var->varno != relid) + return NIL; /* punt */ + + result = lappend_int(result, var->varattno); + } + return result; +} + +/* + * create_gather_path * Creates a path corresponding to a gather scan, returning the * pathnode. */ @@ -1645,58 +1679,30 @@ create_gather_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, return pathnode; } -/* - * translate_sub_tlist - get subquery column numbers represented by tlist - * - * The given targetlist usually contains only Vars referencing the given relid. - * Extract their varattnos (ie, the column numbers of the subquery) and return - * as an integer List. - * - * If any of the tlist items is not a simple Var, we cannot determine whether - * the subquery's uniqueness condition (if any) matches ours, so punt and - * return NIL. 
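[Reviewer note, not part of the patch: the punt-on-non-Var rule that translate_sub_tlist's comment describes can be sketched without the planner's node machinery. The tag enum, struct, and array-based interface below are illustrative stand-ins, not PostgreSQL types; the real function walks a List of TargetEntry expressions.]

```c
/* Minimal stand-ins for the planner's node types (illustrative only) */
typedef enum { T_Var_tag, T_OpExpr_tag } NodeTag;

typedef struct Var
{
    NodeTag tag;
    int     varno;          /* which relation the Var references */
    int     varattno;       /* column number within that relation */
} Var;

/*
 * Sketch of translate_sub_tlist's logic: map each tlist expression to its
 * subquery column number, writing the attnos into out[] and returning how
 * many were written; return -1 ("punt") as soon as an entry is not a
 * simple Var of the expected relid, since then the subquery's uniqueness
 * condition cannot be matched against ours.
 */
int
translate_sub_tlist_sketch(const Var *tlist, int ntlist, int relid, int *out)
{
    for (int i = 0; i < ntlist; i++)
    {
        if (tlist[i].tag != T_Var_tag || tlist[i].varno != relid)
            return -1;      /* punt */
        out[i] = tlist[i].varattno;
    }
    return ntlist;
}
```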
- */ -static List * -translate_sub_tlist(List *tlist, int relid) -{ - List *result = NIL; - ListCell *l; - - foreach(l, tlist) - { - Var *var = (Var *) lfirst(l); - - if (!var || !IsA(var, Var) || - var->varno != relid) - return NIL; /* punt */ - - result = lappend_int(result, var->varattno); - } - return result; -} - /* * create_subqueryscan_path - * Creates a path corresponding to a sequential scan of a subquery, + * Creates a path corresponding to a scan of a subquery, * returning the pathnode. */ -Path * -create_subqueryscan_path(PlannerInfo *root, RelOptInfo *rel, +SubqueryScanPath * +create_subqueryscan_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, List *pathkeys, Relids required_outer) { - Path *pathnode = makeNode(Path); + SubqueryScanPath *pathnode = makeNode(SubqueryScanPath); - pathnode->pathtype = T_SubqueryScan; - pathnode->parent = rel; - pathnode->pathtarget = &(rel->reltarget); - pathnode->param_info = get_baserel_parampathinfo(root, rel, - required_outer); - pathnode->parallel_aware = false; - pathnode->parallel_safe = rel->consider_parallel; - pathnode->parallel_degree = 0; - pathnode->pathkeys = pathkeys; + pathnode->path.pathtype = T_SubqueryScan; + pathnode->path.parent = rel; + pathnode->path.pathtarget = &(rel->reltarget); + pathnode->path.param_info = get_baserel_parampathinfo(root, rel, + required_outer); + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + pathnode->path.pathkeys = pathkeys; + pathnode->subpath = subpath; - cost_subqueryscan(pathnode, root, rel, pathnode->param_info); + cost_subqueryscan(pathnode, root, rel, pathnode->path.param_info); return pathnode; } @@ -2035,7 +2041,8 @@ create_mergejoin_path(PlannerInfo *root, pathnode->jpath.path.parallel_aware = false; pathnode->jpath.path.parallel_safe = joinrel->consider_parallel && outer_path->parallel_safe && 
inner_path->parallel_safe; - pathnode->jpath.path.parallel_degree = 0; + /* This is a foolish way to estimate parallel_degree, but for now... */ + pathnode->jpath.path.parallel_degree = outer_path->parallel_degree; pathnode->jpath.path.pathkeys = pathkeys; pathnode->jpath.jointype = jointype; pathnode->jpath.outerjoinpath = outer_path; @@ -2123,6 +2130,911 @@ create_hashjoin_path(PlannerInfo *root, return pathnode; } +/* + * create_projection_path + * Creates a pathnode that represents performing a projection. + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'target' is the PathTarget to be computed + */ +ProjectionPath * +create_projection_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target) +{ + ProjectionPath *pathnode = makeNode(ProjectionPath); + + pathnode->path.pathtype = T_Result; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + /* Projection does not change the sort order */ + pathnode->path.pathkeys = subpath->pathkeys; + + pathnode->subpath = subpath; + + /* + * The Result node's cost is cpu_tuple_cost per row, plus the cost of + * evaluating the tlist. + */ + pathnode->path.rows = subpath->rows; + pathnode->path.startup_cost = subpath->startup_cost + target->cost.startup; + pathnode->path.total_cost = subpath->total_cost + target->cost.startup + + (cpu_tuple_cost + target->cost.per_tuple) * subpath->rows; + + return pathnode; +} + +/* + * apply_projection_to_path + * Add a projection step, or just apply the target directly to given path. 
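[Reviewer note, not part of the patch: create_projection_path's costing above is simple enough to check by hand. The structs and `projection_cost` below are illustrative stand-ins for the real Path/PathTarget fields, with cpu_tuple_cost passed in rather than read from a GUC.]

```c
/* Simplified cost structs (illustrative; not the real planner types) */
typedef struct { double startup; double per_tuple; } TargetCost;
typedef struct { double startup_cost; double total_cost; double rows; } CostEst;

/*
 * Mirror of create_projection_path's costing: the Result node charges
 * cpu_tuple_cost per row plus the cost of evaluating the new tlist,
 * on top of the subpath's own costs; row count is unchanged.
 */
CostEst
projection_cost(CostEst sub, TargetCost tgt, double cpu_tuple_cost)
{
    CostEst c;

    c.rows = sub.rows;
    c.startup_cost = sub.startup_cost + tgt.startup;
    c.total_cost = sub.total_cost + tgt.startup +
        (cpu_tuple_cost + tgt.per_tuple) * sub.rows;
    return c;
}
```

For a subpath costing 10..110 over 100 rows, a target with startup 1 and per-tuple 0.5, and cpu_tuple_cost 0.25, this yields startup 11 and total 110 + 1 + 0.75 * 100 = 186.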
+ * + * Most plan types include ExecProject, so we can implement a new projection + * without an extra plan node: just replace the given path's pathtarget with + * the desired one. If the given path can't project, add a ProjectionPath. + * + * We can also short-circuit cases where the targetlist expressions are + * actually equal; this is not an uncommon case, since it may arise from + * trying to apply a PathTarget with sortgroupref labeling to a derived + * path without such labeling. + * + * This requires knowing that the source path won't be referenced for other + * purposes (e.g., other possible paths), since we modify it in-place. Note + * also that we mustn't change the source path's parent link; so when it is + * add_path'd to "rel" things will be a bit inconsistent. So far that has + * not caused any trouble. + * + * 'rel' is the parent relation associated with the result + * 'path' is the path representing the source of data + * 'target' is the PathTarget to be computed + */ +Path * +apply_projection_to_path(PlannerInfo *root, + RelOptInfo *rel, + Path *path, + PathTarget *target) +{ + QualCost oldcost; + + /* Make a separate ProjectionPath if needed */ + if (!is_projection_capable_path(path) && + !equal(path->pathtarget->exprs, target->exprs)) + return (Path *) create_projection_path(root, rel, path, target); + + /* + * We can just jam the desired tlist into the existing path, being sure to + * update its cost estimates appropriately. + */ + oldcost = path->pathtarget->cost; + path->pathtarget = target; + + path->startup_cost += target->cost.startup - oldcost.startup; + path->total_cost += target->cost.startup - oldcost.startup + + (target->cost.per_tuple - oldcost.per_tuple) * path->rows; + + return path; +} + +/* + * create_sort_path + * Creates a pathnode that represents performing an explicit sort. 
+ * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'pathkeys' represents the desired sort order + * 'limit_tuples' is the estimated bound on the number of output tuples, + * or -1 if no LIMIT or couldn't estimate + */ +SortPath * +create_sort_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + List *pathkeys, + double limit_tuples) +{ + SortPath *pathnode = makeNode(SortPath); + + pathnode->path.pathtype = T_Sort; + pathnode->path.parent = rel; + /* Sort doesn't project, so use source path's pathtarget */ + pathnode->path.pathtarget = subpath->pathtarget; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + pathnode->path.pathkeys = pathkeys; + + pathnode->subpath = subpath; + + cost_sort(&pathnode->path, root, pathkeys, + subpath->total_cost, + subpath->rows, + subpath->pathtarget->width, + 0.0, /* XXX comparison_cost shouldn't be 0? 
*/ + work_mem, limit_tuples); + + return pathnode; +} + +/* + * create_group_path + * Creates a pathnode that represents performing grouping of presorted input + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'target' is the PathTarget to be computed + * 'groupClause' is a list of SortGroupClause's representing the grouping + * 'qual' is the HAVING quals if any + * 'numGroups' is the estimated number of groups + */ +GroupPath * +create_group_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + List *groupClause, + List *qual, + double numGroups) +{ + GroupPath *pathnode = makeNode(GroupPath); + + pathnode->path.pathtype = T_Group; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + /* Group doesn't change sort ordering */ + pathnode->path.pathkeys = subpath->pathkeys; + + pathnode->subpath = subpath; + + pathnode->groupClause = groupClause; + pathnode->qual = qual; + + cost_group(&pathnode->path, root, + list_length(groupClause), + numGroups, + subpath->startup_cost, subpath->total_cost, + subpath->rows); + + /* add tlist eval cost for each output row */ + pathnode->path.startup_cost += target->cost.startup; + pathnode->path.total_cost += target->cost.startup + + target->cost.per_tuple * pathnode->path.rows; + + return pathnode; +} + +/* + * create_upper_unique_path + * Creates a pathnode that represents performing an explicit Unique step + * on presorted input. + * + * This produces a Unique plan node, but the use-case is so different from + * create_unique_path that it doesn't seem worth trying to merge the two. 
+ * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'numCols' is the number of grouping columns + * 'numGroups' is the estimated number of groups + * + * The input path must be sorted on the grouping columns, plus possibly + * additional columns; so the first numCols pathkeys are the grouping columns + */ +UpperUniquePath * +create_upper_unique_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + int numCols, + double numGroups) +{ + UpperUniquePath *pathnode = makeNode(UpperUniquePath); + + pathnode->path.pathtype = T_Unique; + pathnode->path.parent = rel; + /* Unique doesn't project, so use source path's pathtarget */ + pathnode->path.pathtarget = subpath->pathtarget; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + /* Unique doesn't change the input ordering */ + pathnode->path.pathkeys = subpath->pathkeys; + + pathnode->subpath = subpath; + pathnode->numkeys = numCols; + + /* + * Charge one cpu_operator_cost per comparison per input tuple. We assume + * all columns get compared at most of the tuples. (XXX probably this is + * an overestimate.) 
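[Reviewer note, not part of the patch: create_upper_unique_path's charge of one cpu_operator_cost per comparison per input tuple reduces to a one-line formula. The struct and function below are illustrative stand-ins, with cpu_operator_cost passed in explicitly.]

```c
/* Simplified cost struct (illustrative; not the real Path fields) */
typedef struct { double startup_cost; double total_cost; double rows; } UniqueCost;

/*
 * Mirror of create_upper_unique_path's costing: startup cost passes
 * through, total cost adds one operator comparison per input tuple per
 * grouping column (pessimistically assuming all columns get compared),
 * and output rows become the estimated group count.
 */
UniqueCost
upper_unique_cost(double sub_startup, double sub_total, double sub_rows,
                  int numCols, double numGroups, double cpu_operator_cost)
{
    UniqueCost c;

    c.startup_cost = sub_startup;
    c.total_cost = sub_total + cpu_operator_cost * sub_rows * numCols;
    c.rows = numGroups;
    return c;
}
```

With 1000 input rows, 2 grouping columns, and cpu_operator_cost 0.25, the Unique step adds 500 to the total cost.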
+ */ + pathnode->path.startup_cost = subpath->startup_cost; + pathnode->path.total_cost = subpath->total_cost + + cpu_operator_cost * subpath->rows * numCols; + pathnode->path.rows = numGroups; + + return pathnode; +} + +/* + * create_agg_path + * Creates a pathnode that represents performing aggregation/grouping + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'target' is the PathTarget to be computed + * 'aggstrategy' is the Agg node's basic implementation strategy + * 'groupClause' is a list of SortGroupClause's representing the grouping + * 'qual' is the HAVING quals if any + * 'aggcosts' contains cost info about the aggregate functions to be computed + * 'numGroups' is the estimated number of groups (1 if not grouping) + */ +AggPath * +create_agg_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + AggStrategy aggstrategy, + List *groupClause, + List *qual, + const AggClauseCosts *aggcosts, + double numGroups) +{ + AggPath *pathnode = makeNode(AggPath); + + pathnode->path.pathtype = T_Agg; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + if (aggstrategy == AGG_SORTED) + pathnode->path.pathkeys = subpath->pathkeys; /* preserves order */ + else + pathnode->path.pathkeys = NIL; /* output is unordered */ + pathnode->subpath = subpath; + + pathnode->aggstrategy = aggstrategy; + pathnode->numGroups = numGroups; + pathnode->groupClause = groupClause; + pathnode->qual = qual; + + cost_agg(&pathnode->path, root, + aggstrategy, aggcosts, + list_length(groupClause), numGroups, + subpath->startup_cost, subpath->total_cost, + subpath->rows); + + /* add 
tlist eval cost for each output row */ + pathnode->path.startup_cost += target->cost.startup; + pathnode->path.total_cost += target->cost.startup + + target->cost.per_tuple * pathnode->path.rows; + + return pathnode; +} + +/* + * create_groupingsets_path + * Creates a pathnode that represents performing GROUPING SETS aggregation + * + * GroupingSetsPath represents sorted grouping with one or more grouping sets. + * The input path's result must be sorted to match the last entry in + * rollup_groupclauses, and groupColIdx[] identifies its sort columns. + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'target' is the PathTarget to be computed + * 'having_qual' is the HAVING quals if any + * 'groupColIdx' is an array of indexes of grouping columns in the source data + * 'rollup_lists' is a list of grouping sets + * 'rollup_groupclauses' is a list of grouping clauses for grouping sets + * 'agg_costs' contains cost info about the aggregate functions to be computed + * 'numGroups' is the estimated number of groups + */ +GroupingSetsPath * +create_groupingsets_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + List *having_qual, + AttrNumber *groupColIdx, + List *rollup_lists, + List *rollup_groupclauses, + const AggClauseCosts *agg_costs, + double numGroups) +{ + GroupingSetsPath *pathnode = makeNode(GroupingSetsPath); + int numGroupCols; + + /* The topmost generated Plan node will be an Agg */ + pathnode->path.pathtype = T_Agg; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + pathnode->path.param_info = subpath->param_info; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + pathnode->subpath = subpath; + + /* + * Output will be in sorted order by group_pathkeys if, and only if, there + * is a single rollup 
operation on a non-empty list of grouping + * expressions. + */ + if (list_length(rollup_groupclauses) == 1 && + ((List *) linitial(rollup_groupclauses)) != NIL) + pathnode->path.pathkeys = root->group_pathkeys; + else + pathnode->path.pathkeys = NIL; + + pathnode->groupColIdx = groupColIdx; + pathnode->rollup_groupclauses = rollup_groupclauses; + pathnode->rollup_lists = rollup_lists; + pathnode->qual = having_qual; + + Assert(rollup_lists != NIL); + Assert(list_length(rollup_lists) == list_length(rollup_groupclauses)); + + /* Account for cost of the topmost Agg node */ + numGroupCols = list_length((List *) linitial((List *) llast(rollup_lists))); + + cost_agg(&pathnode->path, root, + (numGroupCols > 0) ? AGG_SORTED : AGG_PLAIN, + agg_costs, + numGroupCols, + numGroups, + subpath->startup_cost, + subpath->total_cost, + subpath->rows); + + /* + * Add in the costs and output rows of the additional sorting/aggregation + * steps, if any. Only total costs count, since the extra sorts aren't + * run on startup. + */ + if (list_length(rollup_lists) > 1) + { + ListCell *lc; + + foreach(lc, rollup_lists) + { + List *gsets = (List *) lfirst(lc); + Path sort_path; /* dummy for result of cost_sort */ + Path agg_path; /* dummy for result of cost_agg */ + + /* We must iterate over all but the last rollup_lists element */ + if (lnext(lc) == NULL) + break; + + /* Account for cost of sort, but don't charge input cost again */ + cost_sort(&sort_path, root, NIL, + 0.0, + subpath->rows, + subpath->pathtarget->width, + 0.0, + work_mem, + -1.0); + + /* Account for cost of aggregation */ + numGroupCols = list_length((List *) linitial(gsets)); + + cost_agg(&agg_path, root, + AGG_SORTED, + agg_costs, + numGroupCols, + numGroups, /* XXX surely not right for all steps? 
*/ + sort_path.startup_cost, + sort_path.total_cost, + sort_path.rows); + + pathnode->path.total_cost += agg_path.total_cost; + pathnode->path.rows += agg_path.rows; + } + } + + /* add tlist eval cost for each output row */ + pathnode->path.startup_cost += target->cost.startup; + pathnode->path.total_cost += target->cost.startup + + target->cost.per_tuple * pathnode->path.rows; + + return pathnode; +} + +/* + * create_minmaxagg_path + * Creates a pathnode that represents computation of MIN/MAX aggregates + * + * 'rel' is the parent relation associated with the result + * 'target' is the PathTarget to be computed + * 'mmaggregates' is a list of MinMaxAggInfo structs + * 'quals' is the HAVING quals if any + */ +MinMaxAggPath * +create_minmaxagg_path(PlannerInfo *root, + RelOptInfo *rel, + PathTarget *target, + List *mmaggregates, + List *quals) +{ + MinMaxAggPath *pathnode = makeNode(MinMaxAggPath); + Cost initplan_cost; + ListCell *lc; + + /* The topmost generated Plan node will be a Result */ + pathnode->path.pathtype = T_Result; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + /* A MinMaxAggPath implies use of subplans, so cannot be parallel-safe */ + pathnode->path.parallel_safe = false; + pathnode->path.parallel_degree = 0; + /* Result is one unordered row */ + pathnode->path.rows = 1; + pathnode->path.pathkeys = NIL; + + pathnode->mmaggregates = mmaggregates; + pathnode->quals = quals; + + /* Calculate cost of all the initplans ... 
*/ + initplan_cost = 0; + foreach(lc, mmaggregates) + { + MinMaxAggInfo *mminfo = (MinMaxAggInfo *) lfirst(lc); + + initplan_cost += mminfo->pathcost; + } + + /* add tlist eval cost for each output row, plus cpu_tuple_cost */ + pathnode->path.startup_cost = initplan_cost + target->cost.startup; + pathnode->path.total_cost = initplan_cost + target->cost.startup + + target->cost.per_tuple + cpu_tuple_cost; + + return pathnode; +} + +/* + * create_windowagg_path + * Creates a pathnode that represents computation of window functions + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'target' is the PathTarget to be computed + * 'windowFuncs' is a list of WindowFunc structs + * 'winclause' is a WindowClause that is common to all the WindowFuncs + * 'winpathkeys' is the pathkeys for the PARTITION keys + ORDER keys + * + * The actual sort order of the input must match winpathkeys, but might + * have additional keys after those. 
+ */ +WindowAggPath * +create_windowagg_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + List *windowFuncs, + WindowClause *winclause, + List *winpathkeys) +{ + WindowAggPath *pathnode = makeNode(WindowAggPath); + + pathnode->path.pathtype = T_WindowAgg; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + /* WindowAgg preserves the input sort order */ + pathnode->path.pathkeys = subpath->pathkeys; + + pathnode->subpath = subpath; + pathnode->winclause = winclause; + pathnode->winpathkeys = winpathkeys; + + /* + * For costing purposes, assume that there are no redundant partitioning + * or ordering columns; it's not worth the trouble to deal with that + * corner case here. So we just pass the unmodified list lengths to + * cost_windowagg. 
+ */ + cost_windowagg(&pathnode->path, root, + windowFuncs, + list_length(winclause->partitionClause), + list_length(winclause->orderClause), + subpath->startup_cost, + subpath->total_cost, + subpath->rows); + + /* add tlist eval cost for each output row */ + pathnode->path.startup_cost += target->cost.startup; + pathnode->path.total_cost += target->cost.startup + + target->cost.per_tuple * pathnode->path.rows; + + return pathnode; +} + +/* + * create_setop_path + * Creates a pathnode that represents computation of INTERSECT or EXCEPT + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'cmd' is the specific semantics (INTERSECT or EXCEPT, with/without ALL) + * 'strategy' is the implementation strategy (sorted or hashed) + * 'distinctList' is a list of SortGroupClause's representing the grouping + * 'flagColIdx' is the column number where the flag column will be, if any + * 'firstFlag' is the flag value for the first input relation when hashing; + * or -1 when sorting + * 'numGroups' is the estimated number of distinct groups + * 'outputRows' is the estimated number of output rows + */ +SetOpPath * +create_setop_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + SetOpCmd cmd, + SetOpStrategy strategy, + List *distinctList, + AttrNumber flagColIdx, + int firstFlag, + double numGroups, + double outputRows) +{ + SetOpPath *pathnode = makeNode(SetOpPath); + + pathnode->path.pathtype = T_SetOp; + pathnode->path.parent = rel; + /* SetOp doesn't project, so use source path's pathtarget */ + pathnode->path.pathtarget = subpath->pathtarget; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + /* SetOp preserves the input sort order if in sort mode */ + 
pathnode->path.pathkeys = + (strategy == SETOP_SORTED) ? subpath->pathkeys : NIL; + + pathnode->subpath = subpath; + pathnode->cmd = cmd; + pathnode->strategy = strategy; + pathnode->distinctList = distinctList; + pathnode->flagColIdx = flagColIdx; + pathnode->firstFlag = firstFlag; + pathnode->numGroups = numGroups; + + /* + * Charge one cpu_operator_cost per comparison per input tuple. We assume + * all columns get compared for most of the tuples. + */ + pathnode->path.startup_cost = subpath->startup_cost; + pathnode->path.total_cost = subpath->total_cost + + cpu_operator_cost * subpath->rows * list_length(distinctList); + pathnode->path.rows = outputRows; + + return pathnode; +} + +/* + * create_recursiveunion_path + * Creates a pathnode that represents a recursive UNION node + * + * 'rel' is the parent relation associated with the result + * 'leftpath' is the source of data for the non-recursive term + * 'rightpath' is the source of data for the recursive term + * 'target' is the PathTarget to be computed + * 'distinctList' is a list of SortGroupClause's representing the grouping + * 'wtParam' is the ID of Param representing work table + * 'numGroups' is the estimated number of groups + * + * For recursive UNION ALL, distinctList is empty and numGroups is zero + */ +RecursiveUnionPath * +create_recursiveunion_path(PlannerInfo *root, + RelOptInfo *rel, + Path *leftpath, + Path *rightpath, + PathTarget *target, + List *distinctList, + int wtParam, + double numGroups) +{ + RecursiveUnionPath *pathnode = makeNode(RecursiveUnionPath); + + pathnode->path.pathtype = T_RecursiveUnion; + pathnode->path.parent = rel; + pathnode->path.pathtarget = target; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + leftpath->parallel_safe && rightpath->parallel_safe; + /* Foolish, but we'll do it like joins for now: */ + 
pathnode->path.parallel_degree = leftpath->parallel_degree; + /* RecursiveUnion result is always unsorted */ + pathnode->path.pathkeys = NIL; + + pathnode->leftpath = leftpath; + pathnode->rightpath = rightpath; + pathnode->distinctList = distinctList; + pathnode->wtParam = wtParam; + pathnode->numGroups = numGroups; + + cost_recursive_union(&pathnode->path, leftpath, rightpath); + + return pathnode; +} + +/* + * create_lockrows_path + * Creates a pathnode that represents acquiring row locks + * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'rowMarks' is a list of PlanRowMark's + * 'epqParam' is the ID of Param for EvalPlanQual re-eval + */ +LockRowsPath * +create_lockrows_path(PlannerInfo *root, RelOptInfo *rel, + Path *subpath, List *rowMarks, int epqParam) +{ + LockRowsPath *pathnode = makeNode(LockRowsPath); + + pathnode->path.pathtype = T_LockRows; + pathnode->path.parent = rel; + /* LockRows doesn't project, so use source path's pathtarget */ + pathnode->path.pathtarget = subpath->pathtarget; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = false; + pathnode->path.parallel_degree = 0; + pathnode->path.rows = subpath->rows; + + /* + * The result cannot be assumed sorted, since locking might cause the sort + * key columns to be replaced with new values. + */ + pathnode->path.pathkeys = NIL; + + pathnode->subpath = subpath; + pathnode->rowMarks = rowMarks; + pathnode->epqParam = epqParam; + + /* + * We should charge something extra for the costs of row locking and + * possible refetches, but it's hard to say how much. For now, use + * cpu_tuple_cost per row. 
+ */ + pathnode->path.startup_cost = subpath->startup_cost; + pathnode->path.total_cost = subpath->total_cost + + cpu_tuple_cost * subpath->rows; + + return pathnode; +} + +/* + * create_modifytable_path + * Creates a pathnode that represents performing INSERT/UPDATE/DELETE mods + * + * 'rel' is the parent relation associated with the result + * 'operation' is the operation type + * 'canSetTag' is true if we set the command tag/es_processed + * 'nominalRelation' is the parent RT index for use of EXPLAIN + * 'resultRelations' is an integer list of actual RT indexes of target rel(s) + * 'subpaths' is a list of Path(s) producing source data (one per rel) + * 'subroots' is a list of PlannerInfo structs (one per rel) + * 'withCheckOptionLists' is a list of WCO lists (one per rel) + * 'returningLists' is a list of RETURNING tlists (one per rel) + * 'rowMarks' is a list of PlanRowMarks (non-locking only) + * 'onconflict' is the ON CONFLICT clause, or NULL + * 'epqParam' is the ID of Param for EvalPlanQual re-eval + */ +ModifyTablePath * +create_modifytable_path(PlannerInfo *root, RelOptInfo *rel, + CmdType operation, bool canSetTag, + Index nominalRelation, + List *resultRelations, List *subpaths, + List *subroots, + List *withCheckOptionLists, List *returningLists, + List *rowMarks, OnConflictExpr *onconflict, + int epqParam) +{ + ModifyTablePath *pathnode = makeNode(ModifyTablePath); + double total_size; + ListCell *lc; + + Assert(list_length(resultRelations) == list_length(subpaths)); + Assert(list_length(resultRelations) == list_length(subroots)); + Assert(withCheckOptionLists == NIL || + list_length(resultRelations) == list_length(withCheckOptionLists)); + Assert(returningLists == NIL || + list_length(resultRelations) == list_length(returningLists)); + + pathnode->path.pathtype = T_ModifyTable; + pathnode->path.parent = rel; + /* pathtarget is not interesting, just make it minimally valid */ + pathnode->path.pathtarget = &(rel->reltarget); + /* For now, assume we are 
above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = false; + pathnode->path.parallel_degree = 0; + pathnode->path.pathkeys = NIL; + + /* + * Compute cost & rowcount as sum of subpath costs & rowcounts. + * + * Currently, we don't charge anything extra for the actual table + * modification work, nor for the WITH CHECK OPTIONS or RETURNING + * expressions if any. It would only be window dressing, since + * ModifyTable is always a top-level node and there is no way for the + * costs to change any higher-level planning choices. But we might want + * to make it look better sometime. + */ + pathnode->path.startup_cost = 0; + pathnode->path.total_cost = 0; + pathnode->path.rows = 0; + total_size = 0; + foreach(lc, subpaths) + { + Path *subpath = (Path *) lfirst(lc); + + if (lc == list_head(subpaths)) /* first node? */ + pathnode->path.startup_cost = subpath->startup_cost; + pathnode->path.total_cost += subpath->total_cost; + pathnode->path.rows += subpath->rows; + total_size += subpath->pathtarget->width * subpath->rows; + } + + /* + * Set width to the average width of the subpath outputs. XXX this is + * totally wrong: we should report zero if no RETURNING, else an average + * of the RETURNING tlist widths. But it's what happened historically, + * and improving it is a task for another day. 
+ */ + if (pathnode->path.rows > 0) + total_size /= pathnode->path.rows; + pathnode->path.pathtarget->width = rint(total_size); + + pathnode->operation = operation; + pathnode->canSetTag = canSetTag; + pathnode->nominalRelation = nominalRelation; + pathnode->resultRelations = resultRelations; + pathnode->subpaths = subpaths; + pathnode->subroots = subroots; + pathnode->withCheckOptionLists = withCheckOptionLists; + pathnode->returningLists = returningLists; + pathnode->rowMarks = rowMarks; + pathnode->onconflict = onconflict; + pathnode->epqParam = epqParam; + + return pathnode; +} + +/* + * create_limit_path + * Creates a pathnode that represents performing LIMIT/OFFSET + * + * In addition to providing the actual OFFSET and LIMIT expressions, + * the caller must provide estimates of their values for costing purposes. + * The estimates are as computed by preprocess_limit(), ie, 0 represents + * the clause not being present, and -1 means it's present but we could + * not estimate its value. 
+ * + * 'rel' is the parent relation associated with the result + * 'subpath' is the path representing the source of data + * 'limitOffset' is the actual OFFSET expression, or NULL + * 'limitCount' is the actual LIMIT expression, or NULL + * 'offset_est' is the estimated value of the OFFSET expression + * 'count_est' is the estimated value of the LIMIT expression + */ +LimitPath * +create_limit_path(PlannerInfo *root, RelOptInfo *rel, + Path *subpath, + Node *limitOffset, Node *limitCount, + int64 offset_est, int64 count_est) +{ + LimitPath *pathnode = makeNode(LimitPath); + + pathnode->path.pathtype = T_Limit; + pathnode->path.parent = rel; + /* Limit doesn't project, so use source path's pathtarget */ + pathnode->path.pathtarget = subpath->pathtarget; + /* For now, assume we are above any joins, so no parameterization */ + pathnode->path.param_info = NULL; + pathnode->path.parallel_aware = false; + pathnode->path.parallel_safe = rel->consider_parallel && + subpath->parallel_safe; + pathnode->path.parallel_degree = subpath->parallel_degree; + pathnode->path.rows = subpath->rows; + pathnode->path.startup_cost = subpath->startup_cost; + pathnode->path.total_cost = subpath->total_cost; + pathnode->path.pathkeys = subpath->pathkeys; + pathnode->subpath = subpath; + pathnode->limitOffset = limitOffset; + pathnode->limitCount = limitCount; + + /* + * Adjust the output rows count and costs according to the offset/limit. + * This is only a cosmetic issue if we are at top level, but if we are + * building a subquery then it's important to report correct info to the + * outer planner. + * + * When the offset or count couldn't be estimated, use 10% of the + * estimated number of rows emitted from the subpath. + * + * XXX we don't bother to add eval costs of the offset/limit expressions + * themselves to the path costs. In theory we should, but in most cases + * those expressions are trivial and it's just not worth the trouble. 
+ */ + if (offset_est != 0) + { + double offset_rows; + + if (offset_est > 0) + offset_rows = (double) offset_est; + else + offset_rows = clamp_row_est(subpath->rows * 0.10); + if (offset_rows > pathnode->path.rows) + offset_rows = pathnode->path.rows; + if (subpath->rows > 0) + pathnode->path.startup_cost += + (subpath->total_cost - subpath->startup_cost) + * offset_rows / subpath->rows; + pathnode->path.rows -= offset_rows; + if (pathnode->path.rows < 1) + pathnode->path.rows = 1; + } + + if (count_est != 0) + { + double count_rows; + + if (count_est > 0) + count_rows = (double) count_est; + else + count_rows = clamp_row_est(subpath->rows * 0.10); + if (count_rows > pathnode->path.rows) + count_rows = pathnode->path.rows; + if (subpath->rows > 0) + pathnode->path.total_cost = pathnode->path.startup_cost + + (subpath->total_cost - subpath->startup_cost) + * count_rows / subpath->rows; + pathnode->path.rows = count_rows; + if (pathnode->path.rows < 1) + pathnode->path.rows = 1; + } + + return pathnode; +} + + /* * reparameterize_path * Attempt to modify a Path to have greater parameterization @@ -2186,8 +3098,15 @@ reparameterize_path(PlannerInfo *root, Path *path, loop_count); } case T_SubqueryScan: - return create_subqueryscan_path(root, rel, path->pathkeys, - required_outer); + { + SubqueryScanPath *spath = (SubqueryScanPath *) path; + + return (Path *) create_subqueryscan_path(root, + rel, + spath->subpath, + spath->path.pathkeys, + required_outer); + } default: break; } diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index 0ea9fcf7c2..ad715bbcc5 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -1142,7 +1142,27 @@ relation_excluded_by_constraints(PlannerInfo *root, List *safe_constraints; ListCell *lc; - /* Skip the test if constraint exclusion is disabled for the rel */ + /* + * Regardless of the setting of constraint_exclusion, detect + * constant-FALSE-or-NULL restriction 
clauses. Because const-folding will + * reduce "anything AND FALSE" to just "FALSE", any such case should + * result in exactly one baserestrictinfo entry. This doesn't fire very + * often, but it seems cheap enough to be worth doing anyway. (Without + * this, we'd miss some optimizations that 9.5 and earlier found via much + * more roundabout methods.) + */ + if (list_length(rel->baserestrictinfo) == 1) + { + RestrictInfo *rinfo = (RestrictInfo *) linitial(rel->baserestrictinfo); + Expr *clause = rinfo->clause; + + if (clause && IsA(clause, Const) && + (((Const *) clause)->constisnull || + !DatumGetBool(((Const *) clause)->constvalue))) + return true; + } + + /* Skip further tests if constraint exclusion is disabled for the rel */ if (constraint_exclusion == CONSTRAINT_EXCLUSION_OFF || (constraint_exclusion == CONSTRAINT_EXCLUSION_PARTITION && !(rel->reloptkind == RELOPT_OTHER_MEMBER_REL || diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index e658f27180..763d39d142 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -107,6 +107,7 @@ build_simple_rel(PlannerInfo *root, int relid, RelOptKind reloptkind) rel->consider_param_startup = false; /* might get changed later */ rel->consider_parallel = false; /* might get changed later */ rel->reltarget.exprs = NIL; + rel->reltarget.sortgrouprefs = NULL; rel->reltarget.cost.startup = 0; rel->reltarget.cost.per_tuple = 0; rel->reltarget.width = 0; @@ -128,7 +129,6 @@ build_simple_rel(PlannerInfo *root, int relid, RelOptKind reloptkind) rel->pages = 0; rel->tuples = 0; rel->allvisfrac = 0; - rel->subplan = NULL; rel->subroot = NULL; rel->subplan_params = NIL; rel->serverid = InvalidOid; @@ -394,6 +394,7 @@ build_join_rel(PlannerInfo *root, joinrel->consider_param_startup = false; joinrel->consider_parallel = false; joinrel->reltarget.exprs = NIL; + joinrel->reltarget.sortgrouprefs = NULL; joinrel->reltarget.cost.startup = 0; 
joinrel->reltarget.cost.per_tuple = 0; joinrel->reltarget.width = 0; @@ -422,7 +423,6 @@ build_join_rel(PlannerInfo *root, joinrel->pages = 0; joinrel->tuples = 0; joinrel->allvisfrac = 0; - joinrel->subplan = NULL; joinrel->subroot = NULL; joinrel->subplan_params = NIL; joinrel->serverid = InvalidOid; @@ -839,6 +839,61 @@ build_empty_join_rel(PlannerInfo *root) } +/* + * fetch_upper_rel + * Build a RelOptInfo describing some post-scan/join query processing, + * or return a pre-existing one if somebody already built it. + * + * An "upper" relation is identified by an UpperRelationKind and a Relids set. + * The meaning of the Relids set is not specified here, and very likely will + * vary for different relation kinds. + * + * Most of the fields in an upper-level RelOptInfo are not used and are not + * set here (though makeNode should ensure they're zeroes). We basically only + * care about fields that are of interest to add_path() and set_cheapest(). + */ +RelOptInfo * +fetch_upper_rel(PlannerInfo *root, UpperRelationKind kind, Relids relids) +{ + RelOptInfo *upperrel; + ListCell *lc; + + /* + * For the moment, our indexing data structure is just a List for each + * relation kind. If we ever get so many of one kind that this stops + * working well, we can improve it. No code outside this function should + * assume anything about how to find a particular upperrel. 
+ */ + + /* If we already made this upperrel for the query, return it */ + foreach(lc, root->upper_rels[kind]) + { + upperrel = (RelOptInfo *) lfirst(lc); + + if (bms_equal(upperrel->relids, relids)) + return upperrel; + } + + upperrel = makeNode(RelOptInfo); + upperrel->reloptkind = RELOPT_UPPER_REL; + upperrel->relids = bms_copy(relids); + + /* cheap startup cost is interesting iff not all tuples to be retrieved */ + upperrel->consider_startup = (root->tuple_fraction > 0); + upperrel->consider_param_startup = false; + upperrel->consider_parallel = false; /* might get changed later */ + upperrel->pathlist = NIL; + upperrel->cheapest_startup_path = NULL; + upperrel->cheapest_total_path = NULL; + upperrel->cheapest_unique_path = NULL; + upperrel->cheapest_parameterized_paths = NIL; + + root->upper_rels[kind] = lappend(root->upper_rels[kind], upperrel); + + return upperrel; +} + + /* * find_childrel_appendrelinfo * Get the AppendRelInfo associated with an appendrel child rel. diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c index 6cd02c3460..2e90ecb4a6 100644 --- a/src/backend/optimizer/util/tlist.c +++ b/src/backend/optimizer/util/tlist.c @@ -74,13 +74,12 @@ tlist_member_ignore_relabel(Node *node, List *targetlist) /* * tlist_member_match_var * Same as above, except that we match the provided Var on the basis - * of varno/varattno/varlevelsup only, rather than using full equal(). + * of varno/varattno/varlevelsup/vartype only, rather than full equal(). * * This is needed in some cases where we can't be sure of an exact typmod - * match. It's probably a good idea to check the vartype anyway, but - * we leave it to the caller to apply any suitable sanity checks. + * match. For safety, though, we insist on vartype match. 
*/ -TargetEntry * +static TargetEntry * tlist_member_match_var(Var *var, List *targetlist) { ListCell *temp; @@ -94,7 +93,8 @@ tlist_member_match_var(Var *var, List *targetlist) continue; if (var->varno == tlvar->varno && var->varattno == tlvar->varattno && - var->varlevelsup == tlvar->varlevelsup) + var->varlevelsup == tlvar->varlevelsup && + var->vartype == tlvar->vartype) return tlentry; } return NULL; @@ -316,6 +316,34 @@ tlist_same_collations(List *tlist, List *colCollations, bool junkOK) return true; } +/* + * apply_tlist_labeling + * Apply the TargetEntry labeling attributes of src_tlist to dest_tlist + * + * This is useful for reattaching column names etc to a plan's final output + * targetlist. + */ +void +apply_tlist_labeling(List *dest_tlist, List *src_tlist) +{ + ListCell *ld, + *ls; + + Assert(list_length(dest_tlist) == list_length(src_tlist)); + forboth(ld, dest_tlist, ls, src_tlist) + { + TargetEntry *dest_tle = (TargetEntry *) lfirst(ld); + TargetEntry *src_tle = (TargetEntry *) lfirst(ls); + + Assert(dest_tle->resno == src_tle->resno); + dest_tle->resname = src_tle->resname; + dest_tle->ressortgroupref = src_tle->ressortgroupref; + dest_tle->resorigtbl = src_tle->resorigtbl; + dest_tle->resorigcol = src_tle->resorigcol; + dest_tle->resjunk = src_tle->resjunk; + } +} + /* * get_sortgroupref_tle @@ -506,3 +534,119 @@ grouping_is_hashable(List *groupClause) } return true; } + + +/***************************************************************************** + * PathTarget manipulation functions + * + * PathTarget is a somewhat stripped-down version of a full targetlist; it + * omits all the TargetEntry decoration except (optionally) sortgroupref data, + * and it adds evaluation cost and output data width info. + *****************************************************************************/ + +/* + * make_pathtarget_from_tlist + * Construct a PathTarget equivalent to the given targetlist. + * + * This leaves the cost and width fields as zeroes. 
Most callers will want + * to use create_pathtarget(), so as to get those set. + */ +PathTarget * +make_pathtarget_from_tlist(List *tlist) +{ + PathTarget *target = (PathTarget *) palloc0(sizeof(PathTarget)); + int i; + ListCell *lc; + + target->sortgrouprefs = (Index *) palloc(list_length(tlist) * sizeof(Index)); + + i = 0; + foreach(lc, tlist) + { + TargetEntry *tle = (TargetEntry *) lfirst(lc); + + target->exprs = lappend(target->exprs, tle->expr); + target->sortgrouprefs[i] = tle->ressortgroupref; + i++; + } + + return target; +} + +/* + * make_tlist_from_pathtarget + * Construct a targetlist from a PathTarget. + */ +List * +make_tlist_from_pathtarget(PathTarget *target) +{ + List *tlist = NIL; + int i; + ListCell *lc; + + i = 0; + foreach(lc, target->exprs) + { + Expr *expr = (Expr *) lfirst(lc); + TargetEntry *tle; + + tle = makeTargetEntry(expr, + i + 1, + NULL, + false); + if (target->sortgrouprefs) + tle->ressortgroupref = target->sortgrouprefs[i]; + tlist = lappend(tlist, tle); + i++; + } + + return tlist; +} + +/* + * apply_pathtarget_labeling_to_tlist + * Apply any sortgrouprefs in the PathTarget to matching tlist entries + * + * Here, we do not assume that the tlist entries are one-for-one with the + * PathTarget. The intended use of this function is to deal with cases + * where createplan.c has decided to use some other tlist and we have + * to identify what matches exist. 
+ */ +void +apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target) +{ + int i; + ListCell *lc; + + /* Nothing to do if PathTarget has no sortgrouprefs data */ + if (target->sortgrouprefs == NULL) + return; + + i = 0; + foreach(lc, target->exprs) + { + Expr *expr = (Expr *) lfirst(lc); + TargetEntry *tle; + + if (target->sortgrouprefs[i]) + { + /* + * For Vars, use tlist_member_match_var's weakened matching rule; + * this allows us to deal with some cases where a set-returning + * function has been inlined, so that we now have more knowledge + * about what it returns than we did when the original Var was + * created. Otherwise, use regular equal() to see if there's a + * matching TLE. (In current usage, only the Var case is actually + * needed; but it seems best to have sane behavior here for + * non-Vars too.) + */ + if (expr && IsA(expr, Var)) + tle = tlist_member_match_var((Var *) expr, tlist); + else + tle = tlist_member((Node *) expr, tlist); + if (tle) + tle->ressortgroupref = target->sortgrouprefs[i]; + } + i++; + } +} diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h index c407fa2cd4..fad9988119 100644 --- a/src/include/nodes/nodes.h +++ b/src/include/nodes/nodes.h @@ -229,18 +229,33 @@ typedef enum NodeTag T_BitmapHeapPath, T_BitmapAndPath, T_BitmapOrPath, - T_NestPath, - T_MergePath, - T_HashPath, T_TidPath, + T_SubqueryScanPath, T_ForeignPath, T_CustomPath, + T_NestPath, + T_MergePath, + T_HashPath, T_AppendPath, T_MergeAppendPath, T_ResultPath, T_MaterialPath, T_UniquePath, T_GatherPath, + T_ProjectionPath, + T_SortPath, + T_GroupPath, + T_UpperUniquePath, + T_AggPath, + T_GroupingSetsPath, + T_MinMaxAggPath, + T_WindowAggPath, + T_SetOpPath, + T_RecursiveUnionPath, + T_LockRowsPath, + T_ModifyTablePath, + T_LimitPath, + /* these aren't subclasses of Path: */ T_EquivalenceClass, T_EquivalenceMember, T_PathKey, @@ -653,6 +668,39 @@ typedef enum JoinType (1 << JOIN_RIGHT) | \ (1 << JOIN_ANTI))) != 0) +/* + * AggStrategy - + * 
overall execution strategies for Agg plan nodes
+ *
+ * This is needed in both plannodes.h and relation.h, so put it here...
+ */
+typedef enum AggStrategy
+{
+	AGG_PLAIN,					/* simple agg across all input rows */
+	AGG_SORTED,					/* grouped agg, input must be sorted */
+	AGG_HASHED					/* grouped agg, use internal hashtable */
+} AggStrategy;
+
+/*
+ * SetOpCmd and SetOpStrategy -
+ *	  overall semantics and execution strategies for SetOp plan nodes
+ *
+ * This is needed in both plannodes.h and relation.h, so put it here...
+ */
+typedef enum SetOpCmd
+{
+	SETOPCMD_INTERSECT,
+	SETOPCMD_INTERSECT_ALL,
+	SETOPCMD_EXCEPT,
+	SETOPCMD_EXCEPT_ALL
+} SetOpCmd;
+
+typedef enum SetOpStrategy
+{
+	SETOP_SORTED,				/* input must be sorted */
+	SETOP_HASHED				/* use internal hashtable */
+} SetOpStrategy;
+
 /*
  * OnConflictAction -
  *	  "ON CONFLICT" clause type of query
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index ae224cfa31..5961f2c988 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -714,23 +714,17 @@ typedef struct Group
  * we are using the Agg node to implement hash-based grouping.)
  * ---------------
  */
-typedef enum AggStrategy
-{
-	AGG_PLAIN,					/* simple agg across all input rows */
-	AGG_SORTED,					/* grouped agg, input must be sorted */
-	AGG_HASHED					/* grouped agg, use internal hashtable */
-} AggStrategy;
-
 typedef struct Agg
 {
 	Plan		plan;
-	AggStrategy aggstrategy;
-	int			numCols;		/* number of grouping columns */
-	AttrNumber *grpColIdx;		/* their indexes in the target list */
+	AggStrategy aggstrategy;	/* basic strategy, see nodes.h */
 	bool		combineStates;	/* input tuples contain transition states */
 	bool		finalizeAggs;	/* should we call the finalfn on agg states? */
+	int			numCols;		/* number of grouping columns */
+	AttrNumber *grpColIdx;		/* their indexes in the target list */
 	Oid		   *grpOperators;	/* equality operators to compare with */
 	long		numGroups;		/* estimated number of groups in input */
+	/* Note: the planner only provides numGroups in AGG_HASHED case */
 	List	   *groupingSets;	/* grouping sets to use */
 	List	   *chain;			/* chained Agg/Sort nodes */
 } Agg;
@@ -802,25 +796,11 @@ typedef struct Hash
  * setop node
  * ----------------
  */
-typedef enum SetOpCmd
-{
-	SETOPCMD_INTERSECT,
-	SETOPCMD_INTERSECT_ALL,
-	SETOPCMD_EXCEPT,
-	SETOPCMD_EXCEPT_ALL
-} SetOpCmd;
-
-typedef enum SetOpStrategy
-{
-	SETOP_SORTED,				/* input must be sorted */
-	SETOP_HASHED				/* use internal hashtable */
-} SetOpStrategy;
-
 typedef struct SetOp
 {
 	Plan		plan;
-	SetOpCmd	cmd;			/* what to do */
-	SetOpStrategy strategy;		/* how to do it */
+	SetOpCmd	cmd;			/* what to do, see nodes.h */
+	SetOpStrategy strategy;		/* how to do it, see nodes.h */
 	int			numCols;		/* number of columns to check for
								 * duplicate-ness */
 	AttrNumber *dupColIdx;		/* their indexes in the target list */
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index af8cb6be68..098a48690f 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -70,16 +70,40 @@ typedef struct AggClauseCosts
 * example, an indexscan might return index expressions that would otherwise
 * need to be explicitly calculated.
 *
- * Note that PathTarget.exprs is just a list of expressions; they do not have
- * TargetEntry nodes on top, though those will appear in the finished Plan.
+ * exprs contains bare expressions; they do not have TargetEntry nodes on top,
+ * though those will appear in finished Plans.
+ *
+ * sortgrouprefs[] is an array of the same length as exprs, containing the
+ * corresponding sort/group refnos, or zeroes for expressions not referenced
+ * by sort/group clauses.  If sortgrouprefs is NULL (which it always is in
+ * RelOptInfo.reltarget structs; only upper-level Paths contain this info),
+ * we have not identified sort/group columns in this tlist.  This allows us
+ * to deal with sort/group refnos when needed with less expense than
+ * including TargetEntry nodes in the exprs list.
 */
 typedef struct PathTarget
 {
 	List	   *exprs;			/* list of expressions to be computed */
-	QualCost	cost;			/* cost of evaluating the above */
+	Index	   *sortgrouprefs;	/* corresponding sort/group refnos, or 0 */
+	QualCost	cost;			/* cost of evaluating the expressions */
 	int			width;			/* estimated avg width of result tuples */
 } PathTarget;
 
+/*
+ * This enum identifies the different types of "upper" (post-scan/join)
+ * relations that we might deal with during planning.
+ */
+typedef enum UpperRelationKind
+{
+	UPPERREL_SETOP,				/* result of UNION/INTERSECT/EXCEPT, if any */
+	UPPERREL_GROUP_AGG,			/* result of grouping/aggregation, if any */
+	UPPERREL_WINDOW,			/* result of window functions, if any */
+	UPPERREL_DISTINCT,			/* result of "SELECT DISTINCT", if any */
+	UPPERREL_ORDERED,			/* result of ORDER BY, if any */
+	UPPERREL_FINAL				/* result of any remaining top-level actions */
+	/* NB: UPPERREL_FINAL must be last enum entry; it's used to size arrays */
+} UpperRelationKind;
+
 /*----------
  * PlannerGlobal
@@ -255,18 +279,28 @@ typedef struct PlannerInfo
 	List	   *placeholder_list;	/* list of PlaceHolderInfos */
 
-	List	   *query_pathkeys; /* desired pathkeys for query_planner(), and
-								 * actual pathkeys after planning */
+	List	   *query_pathkeys; /* desired pathkeys for query_planner() */
 
 	List	   *group_pathkeys; /* groupClause pathkeys, if any */
 	List	   *window_pathkeys;	/* pathkeys of bottom window, if any */
 	List	   *distinct_pathkeys;	/* distinctClause pathkeys, if any */
 	List	   *sort_pathkeys;	/* sortClause pathkeys, if any */
 
-	List	   *minmax_aggs;	/* List of MinMaxAggInfos */
-	List	   *initial_rels;	/* RelOptInfos we are now trying to join */
+	/* Use fetch_upper_rel() to get any
particular upper rel */ + List *upper_rels[UPPERREL_FINAL + 1]; /* upper-rel RelOptInfos */ + + /* + * grouping_planner passes back its final processed targetlist here, for + * use in relabeling the topmost tlist of the finished Plan. + */ + List *processed_tlist; + + /* Fields filled during create_plan() for use in setrefs.c */ + AttrNumber *grouping_map; /* for GroupingFunc fixup */ + List *minmax_aggs; /* List of MinMaxAggInfos */ + MemoryContext planner_cxt; /* context holding PlannerInfo */ double total_table_pages; /* # of pages in all tables of query */ @@ -286,7 +320,7 @@ typedef struct PlannerInfo /* These fields are used only when hasRecursion is true: */ int wt_param_id; /* PARAM_EXEC ID for the work table */ - struct Plan *non_recursive_plan; /* plan for non-recursive term */ + struct Path *non_recursive_path; /* a path for non-recursive term */ /* These fields are workspace for createplan.c */ Relids curOuterRels; /* outer rels above current node */ @@ -294,9 +328,6 @@ typedef struct PlannerInfo /* optional private data for join_search_hook, e.g., GEQO */ void *join_search_private; - - /* for GroupingFunc fixup in setrefs */ - AttrNumber *grouping_map; } PlannerInfo; @@ -328,10 +359,7 @@ typedef struct PlannerInfo * * We also have "other rels", which are like base rels in that they refer to * single RT indexes; but they are not part of the join tree, and are given - * a different RelOptKind to identify them. Lastly, there is a RelOptKind - * for "dead" relations, which are base rels that we have proven we don't - * need to join after all. - * + * a different RelOptKind to identify them. * Currently the only kind of otherrels are those made for member relations * of an "append relation", that is an inheritance set or UNION ALL subquery. * An append relation has a parent RTE that is a base rel, which represents @@ -346,6 +374,14 @@ typedef struct PlannerInfo * handling join alias Vars. 
Currently this is not needed because all join * alias Vars are expanded to non-aliased form during preprocess_expression. * + * There is also a RelOptKind for "upper" relations, which are RelOptInfos + * that describe post-scan/join processing steps, such as aggregation. + * Many of the fields in these RelOptInfos are meaningless, but their Path + * fields always hold Paths showing ways to do that processing step. + * + * Lastly, there is a RelOptKind for "dead" relations, which are base rels + * that we have proven we don't need to join after all. + * * Parts of this data structure are specific to various scan and join * mechanisms. It didn't seem worth creating new node types for them. * @@ -401,11 +437,10 @@ typedef struct PlannerInfo * pages - number of disk pages in relation (zero if not a table) * tuples - number of tuples in relation (not considering restrictions) * allvisfrac - fraction of disk pages that are marked all-visible - * subplan - plan for subquery (NULL if it's not a subquery) * subroot - PlannerInfo for subquery (NULL if it's not a subquery) * subplan_params - list of PlannerParamItems to be passed to subquery * - * Note: for a subquery, tuples, subplan, subroot are not set immediately + * Note: for a subquery, tuples and subroot are not set immediately * upon creation of the RelOptInfo object; they are filled in when * set_subquery_pathlist processes the object. 
* @@ -455,6 +490,7 @@ typedef enum RelOptKind RELOPT_BASEREL, RELOPT_JOINREL, RELOPT_OTHER_MEMBER_REL, + RELOPT_UPPER_REL, RELOPT_DEADREL } RelOptKind; @@ -506,8 +542,6 @@ typedef struct RelOptInfo BlockNumber pages; /* size estimates derived from pg_class */ double tuples; double allvisfrac; - /* use "struct Plan" to avoid including plannodes.h here */ - struct Plan *subplan; /* if subquery */ PlannerInfo *subroot; /* if subquery */ List *subplan_params; /* if subquery */ @@ -938,6 +972,20 @@ typedef struct TidPath List *tidquals; /* qual(s) involving CTID = something */ } TidPath; +/* + * SubqueryScanPath represents a scan of an unflattened subquery-in-FROM + * + * Note that the subpath comes from a different planning domain; for example + * RTE indexes within it mean something different from those known to the + * SubqueryScanPath. path.parent->subroot is the planning context needed to + * interpret the subpath. + */ +typedef struct SubqueryScanPath +{ + Path path; + Path *subpath; /* path representing subquery execution */ +} SubqueryScanPath; + /* * ForeignPath represents a potential scan of a foreign table * @@ -1062,14 +1110,13 @@ typedef struct MaterialPath * UniquePath represents elimination of distinct rows from the output of * its subpath. * - * This is unlike the other Path nodes in that it can actually generate - * different plans: either hash-based or sort-based implementation, or a - * no-op if the input path can be proven distinct already. The decision - * is sufficiently localized that it's not worth having separate Path node - * types. (Note: in the no-op case, we could eliminate the UniquePath node - * entirely and just return the subpath; but it's convenient to have a - * UniquePath in the path tree to signal upper-level routines that the input - * is known distinct.) + * This can represent significantly different plans: either hash-based or + * sort-based implementation, or a no-op if the input path can be proven + * distinct already. 
The decision is sufficiently localized that it's not
+ * worth having separate Path node types.  (Note: in the no-op case, we could
+ * eliminate the UniquePath node entirely and just return the subpath; but
+ * it's convenient to have a UniquePath in the path tree to signal upper-level
+ * routines that the input is known distinct.)
  */
 typedef enum
 {
@@ -1180,6 +1227,195 @@ typedef struct HashPath
 	int			num_batches;	/* number of batches expected */
 } HashPath;
 
+/*
+ * ProjectionPath represents a projection (that is, targetlist computation)
+ *
+ * This path node represents using a Result plan node to do a projection.
+ * It's only needed atop a node that doesn't support projection (such as
+ * Sort); otherwise we just jam the new desired PathTarget into the lower
+ * path node, and adjust that node's estimated cost accordingly.
+ */
+typedef struct ProjectionPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+} ProjectionPath;
+
+/*
+ * SortPath represents an explicit sort step
+ *
+ * The sort keys are, by definition, the same as path.pathkeys.
+ *
+ * Note: the Sort plan node cannot project, so path.pathtarget must be the
+ * same as the input's pathtarget.
+ */
+typedef struct SortPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+} SortPath;
+
+/*
+ * GroupPath represents grouping (of presorted input)
+ *
+ * groupClause represents the columns to be grouped on; the input path
+ * must be at least that well sorted.
+ *
+ * We can also apply a qual to the grouped rows (equivalent of HAVING)
+ */
+typedef struct GroupPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	List	   *groupClause;	/* a list of SortGroupClause's */
+	List	   *qual;			/* quals (HAVING quals), if any */
+} GroupPath;
+
+/*
+ * UpperUniquePath represents adjacent-duplicate removal (in presorted input)
+ *
+ * The columns to be compared are the first numkeys columns of the path's
+ * pathkeys.  The input is presumed already sorted that way.
+ */
+typedef struct UpperUniquePath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	int			numkeys;		/* number of pathkey columns to compare */
+} UpperUniquePath;
+
+/*
+ * AggPath represents generic computation of aggregate functions
+ *
+ * This may involve plain grouping (but not grouping sets), using either
+ * sorted or hashed grouping; for the AGG_SORTED case, the input must be
+ * appropriately presorted.
+ */
+typedef struct AggPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	AggStrategy aggstrategy;	/* basic strategy, see nodes.h */
+	double		numGroups;		/* estimated number of groups in input */
+	List	   *groupClause;	/* a list of SortGroupClause's */
+	List	   *qual;			/* quals (HAVING quals), if any */
+} AggPath;
+
+/*
+ * GroupingSetsPath represents a GROUPING SETS aggregation
+ *
+ * Currently we only support this in sorted not hashed form, so the input
+ * must always be appropriately presorted.
+ */
+typedef struct GroupingSetsPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	AttrNumber *groupColIdx;	/* grouping col indexes */
+	List	   *rollup_groupclauses;	/* list of lists of SortGroupClause's */
+	List	   *rollup_lists;	/* parallel list of lists of grouping sets */
+	List	   *qual;			/* quals (HAVING quals), if any */
+} GroupingSetsPath;
+
+/*
+ * MinMaxAggPath represents computation of MIN/MAX aggregates from indexes
+ */
+typedef struct MinMaxAggPath
+{
+	Path		path;
+	List	   *mmaggregates;	/* list of MinMaxAggInfo */
+	List	   *quals;			/* HAVING quals, if any */
+} MinMaxAggPath;
+
+/*
+ * WindowAggPath represents generic computation of window functions
+ *
+ * Note: winpathkeys is separate from path.pathkeys because the actual sort
+ * order might be an extension of winpathkeys; but createplan.c needs to
+ * know exactly how many pathkeys match the window clause.
+ */
+typedef struct WindowAggPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	WindowClause *winclause;	/* WindowClause we'll be using */
+	List	   *winpathkeys;	/* PathKeys for PARTITION keys + ORDER keys */
+} WindowAggPath;
+
+/*
+ * SetOpPath represents a set-operation, that is INTERSECT or EXCEPT
+ */
+typedef struct SetOpPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	SetOpCmd	cmd;			/* what to do, see nodes.h */
+	SetOpStrategy strategy;		/* how to do it, see nodes.h */
+	List	   *distinctList;	/* SortGroupClauses identifying target cols */
+	AttrNumber	flagColIdx;		/* where is the flag column, if any */
+	int			firstFlag;		/* flag value for first input relation */
+	double		numGroups;		/* estimated number of groups in input */
+} SetOpPath;
+
+/*
+ * RecursiveUnionPath represents a recursive UNION node
+ */
+typedef struct RecursiveUnionPath
+{
+	Path		path;
+	Path	   *leftpath;		/* paths representing input sources */
+	Path	   *rightpath;
+	List	   *distinctList;	/* SortGroupClauses identifying target cols */
+	int			wtParam;		/* ID of Param representing work table */
+	double		numGroups;		/* estimated number of groups in input */
+} RecursiveUnionPath;
+
+/*
+ * LockRowsPath represents acquiring row locks for SELECT FOR UPDATE/SHARE
+ */
+typedef struct LockRowsPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	List	   *rowMarks;		/* a list of PlanRowMark's */
+	int			epqParam;		/* ID of Param for EvalPlanQual re-eval */
+} LockRowsPath;
+
+/*
+ * ModifyTablePath represents performing INSERT/UPDATE/DELETE modifications
+ *
+ * We represent most things that will be in the ModifyTable plan node
+ * literally, except we have child Path(s) not Plan(s).  But analysis of the
+ * OnConflictExpr is deferred to createplan.c, as is collection of FDW data.
+ */
+typedef struct ModifyTablePath
+{
+	Path		path;
+	CmdType		operation;		/* INSERT, UPDATE, or DELETE */
+	bool		canSetTag;		/* do we set the command tag/es_processed? */
+	Index		nominalRelation;	/* Parent RT index for use of EXPLAIN */
+	List	   *resultRelations;	/* integer list of RT indexes */
+	List	   *subpaths;		/* Path(s) producing source data */
+	List	   *subroots;		/* per-target-table PlannerInfos */
+	List	   *withCheckOptionLists;	/* per-target-table WCO lists */
+	List	   *returningLists; /* per-target-table RETURNING tlists */
+	List	   *rowMarks;		/* PlanRowMarks (non-locking only) */
+	OnConflictExpr *onconflict; /* ON CONFLICT clause, or NULL */
+	int			epqParam;		/* ID of Param for EvalPlanQual re-eval */
+} ModifyTablePath;
+
+/*
+ * LimitPath represents applying LIMIT/OFFSET restrictions
+ */
+typedef struct LimitPath
+{
+	Path		path;
+	Path	   *subpath;		/* path representing input source */
+	Node	   *limitOffset;	/* OFFSET parameter, or NULL if none */
+	Node	   *limitCount;		/* COUNT parameter, or NULL if none */
+} LimitPath;
+
+
 /*
  * Restriction clause info.
  *
@@ -1615,8 +1851,9 @@ typedef struct PlaceHolderInfo
 } PlaceHolderInfo;
 
 /*
- * For each potentially index-optimizable MIN/MAX aggregate function,
- * root->minmax_aggs stores a MinMaxAggInfo describing it.
+ * This struct describes one potentially index-optimizable MIN/MAX aggregate
+ * function.  MinMaxAggPath contains a list of these, and if we accept that
+ * path, the list is stored into root->minmax_aggs for use during setrefs.c.
*/ typedef struct MinMaxAggInfo { diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 78c7cae99b..fea2bb77f4 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -85,7 +85,7 @@ extern void cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root); extern void cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec); extern void cost_tidscan(Path *path, PlannerInfo *root, RelOptInfo *baserel, List *tidquals, ParamPathInfo *param_info); -extern void cost_subqueryscan(Path *path, PlannerInfo *root, +extern void cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root, RelOptInfo *baserel, ParamPathInfo *param_info); extern void cost_functionscan(Path *path, PlannerInfo *root, RelOptInfo *baserel, ParamPathInfo *param_info); @@ -93,7 +93,7 @@ extern void cost_valuesscan(Path *path, PlannerInfo *root, RelOptInfo *baserel, ParamPathInfo *param_info); extern void cost_ctescan(Path *path, PlannerInfo *root, RelOptInfo *baserel, ParamPathInfo *param_info); -extern void cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm); +extern void cost_recursive_union(Path *runion, Path *nrterm, Path *rterm); extern void cost_sort(Path *path, PlannerInfo *root, List *pathkeys, Cost input_cost, double tuples, int width, Cost comparison_cost, int sort_mem, @@ -180,8 +180,9 @@ extern void set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel); extern void set_function_size_estimates(PlannerInfo *root, RelOptInfo *rel); extern void set_values_size_estimates(PlannerInfo *root, RelOptInfo *rel); extern void set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, - Plan *cteplan); + double cte_rows); extern void set_foreign_size_estimates(PlannerInfo *root, RelOptInfo *rel); +extern PathTarget *set_pathtarget_cost_width(PlannerInfo *root, PathTarget *target); /* * prototypes for clausesel.c diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h index f479981d37..37744bf972 100644 
--- a/src/include/optimizer/pathnode.h +++ b/src/include/optimizer/pathnode.h @@ -68,13 +68,15 @@ extern MergeAppendPath *create_merge_append_path(PlannerInfo *root, List *subpaths, List *pathkeys, Relids required_outer); -extern ResultPath *create_result_path(RelOptInfo *rel, List *quals); +extern ResultPath *create_result_path(RelOptInfo *rel, + PathTarget *target, List *quals); extern MaterialPath *create_material_path(RelOptInfo *rel, Path *subpath); extern UniquePath *create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, SpecialJoinInfo *sjinfo); extern GatherPath *create_gather_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, Relids required_outer); -extern Path *create_subqueryscan_path(PlannerInfo *root, RelOptInfo *rel, +extern SubqueryScanPath *create_subqueryscan_path(PlannerInfo *root, + RelOptInfo *rel, Path *subpath, List *pathkeys, Relids required_outer); extern Path *create_functionscan_path(PlannerInfo *root, RelOptInfo *rel, List *pathkeys, Relids required_outer); @@ -132,6 +134,96 @@ extern HashPath *create_hashjoin_path(PlannerInfo *root, Relids required_outer, List *hashclauses); +extern ProjectionPath *create_projection_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target); +extern Path *apply_projection_to_path(PlannerInfo *root, + RelOptInfo *rel, + Path *path, + PathTarget *target); +extern SortPath *create_sort_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + List *pathkeys, + double limit_tuples); +extern GroupPath *create_group_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + List *groupClause, + List *qual, + double numGroups); +extern UpperUniquePath *create_upper_unique_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + int numCols, + double numGroups); +extern AggPath *create_agg_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + AggStrategy aggstrategy, + List *groupClause, + List *qual, + 
const AggClauseCosts *aggcosts, + double numGroups); +extern GroupingSetsPath *create_groupingsets_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + List *having_qual, + AttrNumber *groupColIdx, + List *rollup_lists, + List *rollup_groupclauses, + const AggClauseCosts *agg_costs, + double numGroups); +extern MinMaxAggPath *create_minmaxagg_path(PlannerInfo *root, + RelOptInfo *rel, + PathTarget *target, + List *mmaggregates, + List *quals); +extern WindowAggPath *create_windowagg_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + PathTarget *target, + List *windowFuncs, + WindowClause *winclause, + List *winpathkeys); +extern SetOpPath *create_setop_path(PlannerInfo *root, + RelOptInfo *rel, + Path *subpath, + SetOpCmd cmd, + SetOpStrategy strategy, + List *distinctList, + AttrNumber flagColIdx, + int firstFlag, + double numGroups, + double outputRows); +extern RecursiveUnionPath *create_recursiveunion_path(PlannerInfo *root, + RelOptInfo *rel, + Path *leftpath, + Path *rightpath, + PathTarget *target, + List *distinctList, + int wtParam, + double numGroups); +extern LockRowsPath *create_lockrows_path(PlannerInfo *root, RelOptInfo *rel, + Path *subpath, List *rowMarks, int epqParam); +extern ModifyTablePath *create_modifytable_path(PlannerInfo *root, + RelOptInfo *rel, + CmdType operation, bool canSetTag, + Index nominalRelation, + List *resultRelations, List *subpaths, + List *subroots, + List *withCheckOptionLists, List *returningLists, + List *rowMarks, OnConflictExpr *onconflict, + int epqParam); +extern LimitPath *create_limit_path(PlannerInfo *root, RelOptInfo *rel, + Path *subpath, + Node *limitOffset, Node *limitCount, + int64 offset_est, int64 count_est); + extern Path *reparameterize_path(PlannerInfo *root, Path *path, Relids required_outer, double loop_count); @@ -155,6 +247,8 @@ extern Relids min_join_parameterization(PlannerInfo *root, RelOptInfo *outer_rel, RelOptInfo *inner_rel); extern RelOptInfo 
*build_empty_join_rel(PlannerInfo *root); +extern RelOptInfo *fetch_upper_rel(PlannerInfo *root, UpperRelationKind kind, + Relids relids); extern AppendRelInfo *find_childrel_appendrelinfo(PlannerInfo *root, RelOptInfo *rel); extern RelOptInfo *find_childrel_top_parent(PlannerInfo *root, RelOptInfo *rel); diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index 20474c3e1f..2fccc3a998 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -47,6 +47,7 @@ extern PGDLLIMPORT join_search_hook_type join_search_hook; extern RelOptInfo *make_one_rel(PlannerInfo *root, List *joinlist); +extern void set_dummy_rel_pathlist(RelOptInfo *rel); extern RelOptInfo *standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels); @@ -137,10 +138,6 @@ extern void add_child_rel_equivalences(PlannerInfo *root, AppendRelInfo *appinfo, RelOptInfo *parent_rel, RelOptInfo *child_rel); -extern void mutate_eclass_expressions(PlannerInfo *root, - Node *(*mutator) (), - void *context, - bool include_child_exprs); extern List *generate_implied_equalities_for_column(PlannerInfo *root, RelOptInfo *rel, ec_matches_callback_type callback, @@ -182,7 +179,8 @@ extern List *build_expression_pathkey(PlannerInfo *root, Expr *expr, Relids nullable_relids, Oid opno, Relids rel, bool create_it); extern List *convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel, - List *subquery_pathkeys); + List *subquery_pathkeys, + List *subquery_tlist); extern List *build_join_pathkeys(PlannerInfo *root, RelOptInfo *joinrel, JoinType jointype, diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h index eaa642bc57..cd7338a98c 100644 --- a/src/include/optimizer/planmain.h +++ b/src/include/optimizer/planmain.h @@ -43,60 +43,17 @@ extern RelOptInfo *query_planner(PlannerInfo *root, List *tlist, * prototypes for plan/planagg.c */ extern void preprocess_minmax_aggregates(PlannerInfo *root, List *tlist); -extern Plan 
*optimize_minmax_aggregates(PlannerInfo *root, List *tlist, - const AggClauseCosts *aggcosts, Path *best_path); /* * prototypes for plan/createplan.c */ extern Plan *create_plan(PlannerInfo *root, Path *best_path); -extern SubqueryScan *make_subqueryscan(List *qptlist, List *qpqual, - Index scanrelid, Plan *subplan); extern ForeignScan *make_foreignscan(List *qptlist, List *qpqual, Index scanrelid, List *fdw_exprs, List *fdw_private, List *fdw_scan_tlist, List *fdw_recheck_quals, Plan *outer_plan); -extern Append *make_append(List *appendplans, List *tlist); -extern RecursiveUnion *make_recursive_union(List *tlist, - Plan *lefttree, Plan *righttree, int wtParam, - List *distinctList, long numGroups); -extern Sort *make_sort_from_pathkeys(PlannerInfo *root, Plan *lefttree, - List *pathkeys, double limit_tuples); -extern Sort *make_sort_from_sortclauses(PlannerInfo *root, List *sortcls, - Plan *lefttree); -extern Sort *make_sort_from_groupcols(PlannerInfo *root, List *groupcls, - AttrNumber *grpColIdx, Plan *lefttree); -extern Agg *make_agg(PlannerInfo *root, List *tlist, List *qual, - AggStrategy aggstrategy, const AggClauseCosts *aggcosts, - int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators, - List *groupingSets, long numGroups, bool combineStates, - bool finalizeAggs, Plan *lefttree); -extern WindowAgg *make_windowagg(PlannerInfo *root, List *tlist, - List *windowFuncs, Index winref, - int partNumCols, AttrNumber *partColIdx, Oid *partOperators, - int ordNumCols, AttrNumber *ordColIdx, Oid *ordOperators, - int frameOptions, Node *startOffset, Node *endOffset, - Plan *lefttree); -extern Group *make_group(PlannerInfo *root, List *tlist, List *qual, - int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators, - double numGroups, - Plan *lefttree); extern Plan *materialize_finished_plan(Plan *subplan); -extern Unique *make_unique(Plan *lefttree, List *distinctList); -extern LockRows *make_lockrows(Plan *lefttree, List *rowMarks, int epqParam); -extern Limit 
*make_limit(Plan *lefttree, Node *limitOffset, Node *limitCount, - int64 offset_est, int64 count_est); -extern SetOp *make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, - List *distinctList, AttrNumber flagColIdx, int firstFlag, - long numGroups, double outputRows); -extern Result *make_result(PlannerInfo *root, List *tlist, - Node *resconstantqual, Plan *subplan); -extern ModifyTable *make_modifytable(PlannerInfo *root, - CmdType operation, bool canSetTag, - Index nominalRelation, - List *resultRelations, List *subplans, - List *withCheckOptionLists, List *returningLists, - List *rowMarks, OnConflictExpr *onconflict, int epqParam); +extern bool is_projection_capable_path(Path *path); extern bool is_projection_capable_plan(Plan *plan); /* diff --git a/src/include/optimizer/planner.h b/src/include/optimizer/planner.h index 886acfeac0..3fb7cb58cb 100644 --- a/src/include/optimizer/planner.h +++ b/src/include/optimizer/planner.h @@ -30,19 +30,18 @@ extern PlannedStmt *planner(Query *parse, int cursorOptions, extern PlannedStmt *standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams); -extern Plan *subquery_planner(PlannerGlobal *glob, Query *parse, +extern PlannerInfo *subquery_planner(PlannerGlobal *glob, Query *parse, PlannerInfo *parent_root, - bool hasRecursion, double tuple_fraction, - PlannerInfo **subroot); - -extern void add_tlist_costs_to_plan(PlannerInfo *root, Plan *plan, - List *tlist); + bool hasRecursion, double tuple_fraction); extern bool is_dummy_plan(Plan *plan); extern RowMarkType select_rowmark_type(RangeTblEntry *rte, LockClauseStrength strength); +extern Path *get_cheapest_fractional_path(RelOptInfo *rel, + double tuple_fraction); + extern Expr *expression_planner(Expr *expr); extern Expr *preprocess_phv_expression(PlannerInfo *root, Expr *expr); diff --git a/src/include/optimizer/prep.h b/src/include/optimizer/prep.h index cebd8b6b0f..fb35b689bb 100644 --- a/src/include/optimizer/prep.h +++ 
b/src/include/optimizer/prep.h @@ -53,8 +53,7 @@ extern PlanRowMark *get_plan_rowmark(List *rowmarks, Index rtindex); /* * prototypes for prepunion.c */ -extern Plan *plan_set_operations(PlannerInfo *root, double tuple_fraction, - List **sortClauses); +extern RelOptInfo *plan_set_operations(PlannerInfo *root); extern void expand_inherited_tables(PlannerInfo *root); diff --git a/src/include/optimizer/subselect.h b/src/include/optimizer/subselect.h index 4c652fa9bb..f100d02940 100644 --- a/src/include/optimizer/subselect.h +++ b/src/include/optimizer/subselect.h @@ -26,11 +26,15 @@ extern JoinExpr *convert_EXISTS_sublink_to_join(PlannerInfo *root, extern Node *SS_replace_correlation_vars(PlannerInfo *root, Node *expr); extern Node *SS_process_sublinks(PlannerInfo *root, Node *expr, bool isQual); extern void SS_identify_outer_params(PlannerInfo *root); +extern void SS_charge_for_initplans(PlannerInfo *root, RelOptInfo *final_rel); extern void SS_attach_initplans(PlannerInfo *root, Plan *plan); extern void SS_finalize_plan(PlannerInfo *root, Plan *plan); -extern Param *SS_make_initplan_from_plan(PlannerInfo *root, +extern Param *SS_make_initplan_output_param(PlannerInfo *root, + Oid resulttype, int32 resulttypmod, + Oid resultcollation); +extern void SS_make_initplan_from_plan(PlannerInfo *root, PlannerInfo *subroot, Plan *plan, - Oid resulttype, int32 resulttypmod, Oid resultcollation); + Param *prm); extern Param *assign_nestloop_param_var(PlannerInfo *root, Var *var); extern Param *assign_nestloop_param_placeholdervar(PlannerInfo *root, PlaceHolderVar *phv); diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h index 25e2581c7a..fc6bc088b1 100644 --- a/src/include/optimizer/tlist.h +++ b/src/include/optimizer/tlist.h @@ -19,7 +19,6 @@ extern TargetEntry *tlist_member(Node *node, List *targetlist); extern TargetEntry *tlist_member_ignore_relabel(Node *node, List *targetlist); -extern TargetEntry *tlist_member_match_var(Var *var, List 
*targetlist); extern List *flatten_tlist(List *tlist, PVCAggregateBehavior aggbehavior, PVCPlaceHolderBehavior phbehavior); @@ -34,6 +33,8 @@ extern bool tlist_same_exprs(List *tlist1, List *tlist2); extern bool tlist_same_datatypes(List *tlist, List *colTypes, bool junkOK); extern bool tlist_same_collations(List *tlist, List *colCollations, bool junkOK); +extern void apply_tlist_labeling(List *dest_tlist, List *src_tlist); + extern TargetEntry *get_sortgroupref_tle(Index sortref, List *targetList); extern TargetEntry *get_sortgroupclause_tle(SortGroupClause *sgClause, @@ -51,4 +52,12 @@ extern AttrNumber *extract_grouping_cols(List *groupClause, List *tlist); extern bool grouping_is_sortable(List *groupClause); extern bool grouping_is_hashable(List *groupClause); +extern PathTarget *make_pathtarget_from_tlist(List *tlist); +extern List *make_tlist_from_pathtarget(PathTarget *target); +extern void apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target); + +/* Convenience macro to get a PathTarget with valid cost/width fields */ +#define create_pathtarget(root, tlist) \ + set_pathtarget_cost_width(root, make_pathtarget_from_tlist(tlist)) + #endif /* TLIST_H */ diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index e434c5d8cd..601bdb405a 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -806,8 +806,7 @@ explain (costs off) select distinct min(f1), max(f1) from minmaxtest; QUERY PLAN ---------------------------------------------------------------------------------------------- - HashAggregate - Group Key: $0, $1 + Unique InitPlan 1 (returns $0) -> Limit -> Merge Append @@ -832,8 +831,10 @@ explain (costs off) Index Cond: (f1 IS NOT NULL) -> Index Only Scan Backward using minmaxtest3i on minmaxtest3 minmaxtest3_1 Index Cond: (f1 IS NOT NULL) - -> Result -(27 rows) + -> Sort + Sort Key: ($0), ($1) + -> Result +(28 rows) select distinct min(f1), max(f1) from 
minmaxtest; min | max diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 59d7877b57..cafbc5e54d 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -3949,39 +3949,34 @@ select d.* from d left join (select distinct * from b) s explain (costs off) select d.* from d left join (select * from b group by b.id, b.c_id) s on d.a = s.id; - QUERY PLAN ---------------------------------------- - Merge Left Join - Merge Cond: (d.a = s.id) + QUERY PLAN +------------------------------------------ + Merge Right Join + Merge Cond: (b.id = d.a) + -> Group + Group Key: b.id + -> Index Scan using b_pkey on b -> Sort Sort Key: d.a -> Seq Scan on d - -> Sort - Sort Key: s.id - -> Subquery Scan on s - -> HashAggregate - Group Key: b.id - -> Seq Scan on b -(11 rows) +(8 rows) -- similarly, but keying off a DISTINCT clause explain (costs off) select d.* from d left join (select distinct * from b) s on d.a = s.id; - QUERY PLAN ---------------------------------------------- - Merge Left Join - Merge Cond: (d.a = s.id) + QUERY PLAN +-------------------------------------- + Merge Right Join + Merge Cond: (b.id = d.a) + -> Unique + -> Sort + Sort Key: b.id, b.c_id + -> Seq Scan on b -> Sort Sort Key: d.a -> Seq Scan on d - -> Sort - Sort Key: s.id - -> Subquery Scan on s - -> HashAggregate - Group Key: b.id, b.c_id - -> Seq Scan on b -(11 rows) +(9 rows) -- check join removal works when uniqueness of the join condition is enforced -- by a UNION -- 2.40.0