LCOV - code coverage report
Current view: top level - src/backend/optimizer/path - costsize.c (source / functions)
Test: PostgreSQL 19devel    Date: 2025-07-21 04:17:26
Coverage: Lines: 1742 of 1781 hit (97.8%)    Functions: 75 of 75 hit (100.0%)

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * costsize.c
       4             :  *    Routines to compute (and set) relation sizes and path costs
       5             :  *
       6             :  * Path costs are measured in arbitrary units established by these basic
       7             :  * parameters:
       8             :  *
       9             :  *  seq_page_cost       Cost of a sequential page fetch
      10             :  *  random_page_cost    Cost of a non-sequential page fetch
      11             :  *  cpu_tuple_cost      Cost of typical CPU time to process a tuple
      12             :  *  cpu_index_tuple_cost  Cost of typical CPU time to process an index tuple
      13             :  *  cpu_operator_cost   Cost of CPU time to execute an operator or function
      14             :  *  parallel_tuple_cost Cost of CPU time to pass a tuple from worker to leader backend
      15             :  *  parallel_setup_cost Cost of setting up shared memory for parallelism
      16             :  *
      17             :  * We expect that the kernel will typically do some amount of read-ahead
      18             :  * optimization; this in conjunction with seek costs means that seq_page_cost
      19             :  * is normally considerably less than random_page_cost.  (However, if the
      20             :  * database is fully cached in RAM, it is reasonable to set them equal.)
      21             :  *
      22             :  * We also use a rough estimate "effective_cache_size" of the number of
      23             :  * disk pages in Postgres + OS-level disk cache.  (We can't simply use
      24             :  * NBuffers for this purpose because that would ignore the effects of
      25             :  * the kernel's disk cache.)
      26             :  *
      27             :  * Obviously, taking constants for these values is an oversimplification,
      28             :  * but it's tough enough to get any useful estimates even at this level of
      29             :  * detail.  Note that all of these parameters are user-settable, in case
      30             :  * the default values are drastically off for a particular platform.
      31             :  *
      32             :  * seq_page_cost and random_page_cost can also be overridden for an individual
      33             :  * tablespace, in case some data is on a fast disk and other data is on a slow
      34             :  * disk.  Per-tablespace overrides never apply to temporary work files such as
      35             :  * an external sort or a materialize node that overflows work_mem.
      36             :  *
      37             :  * We compute two separate costs for each path:
      38             :  *      total_cost: total estimated cost to fetch all tuples
      39             :  *      startup_cost: cost that is expended before first tuple is fetched
      40             :  * In some scenarios, such as when there is a LIMIT or we are implementing
      41             :  * an EXISTS(...) sub-select, it is not necessary to fetch all tuples of the
      42             :  * path's result.  A caller can estimate the cost of fetching a partial
      43             :  * result by interpolating between startup_cost and total_cost.  In detail:
      44             :  *      actual_cost = startup_cost +
      45             :  *          (total_cost - startup_cost) * tuples_to_fetch / path->rows;
      46             :  * Note that a base relation's rows count (and, by extension, plan_rows for
      47             :  * plan nodes below the LIMIT node) are set without regard to any LIMIT, so
      48             :  * that this equation works properly.  (Note: while path->rows is never zero
      49             :  * for ordinary relations, it is zero for paths for provably-empty relations,
      50             :  * so beware of division-by-zero.)  The LIMIT is applied as a top-level
      51             :  * plan node.
      52             :  *
      53             :  * Each path stores the total number of disabled nodes that exist at or
      54             :  * below that point in the plan tree. This is regarded as a component of
      55             :  * the cost, and paths with fewer disabled nodes should be regarded as
      56             :  * cheaper than those with more. Disabled nodes occur when the user sets
      57             :  * a GUC like enable_seqscan=false. We can't necessarily respect such a
       58             :  * setting in every part of the plan tree, but we want to respect it in as many
      59             :  * parts of the plan tree as possible. Simpler schemes like storing a Boolean
      60             :  * here rather than a count fail to do that. We used to disable nodes by
      61             :  * adding a large constant to the startup cost, but that distorted planning
      62             :  * in other ways.
      63             :  *
      64             :  * For largely historical reasons, most of the routines in this module use
      65             :  * the passed result Path only to store their results (rows, startup_cost and
      66             :  * total_cost) into.  All the input data they need is passed as separate
      67             :  * parameters, even though much of it could be extracted from the Path.
      68             :  * An exception is made for the cost_XXXjoin() routines, which expect all
      69             :  * the other fields of the passed XXXPath to be filled in, and similarly
      70             :  * cost_index() assumes the passed IndexPath is valid except for its output
      71             :  * values.
      72             :  *
      73             :  *
      74             :  * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
      75             :  * Portions Copyright (c) 1994, Regents of the University of California
      76             :  *
      77             :  * IDENTIFICATION
      78             :  *    src/backend/optimizer/path/costsize.c
      79             :  *
      80             :  *-------------------------------------------------------------------------
      81             :  */
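
The interpolation rule in the header comment is easy to exercise in isolation. Below is a minimal standalone C sketch, not planner code: DemoPath, the numbers, and partial_fetch_cost are all hypothetical, and the rows <= 0 guard corresponds to the division-by-zero warning above.

    #include <stdio.h>

    /* Hypothetical path costs, standing in for Path's fields. */
    typedef struct
    {
        double      startup_cost;
        double      total_cost;
        double      rows;
    } DemoPath;

    /*
     * actual_cost = startup_cost +
     *     (total_cost - startup_cost) * tuples_to_fetch / rows
     * Guard against rows == 0, which occurs for provably-empty relations.
     */
    static double
    partial_fetch_cost(const DemoPath *p, double tuples_to_fetch)
    {
        if (p->rows <= 0)
            return p->startup_cost;
        return p->startup_cost +
            (p->total_cost - p->startup_cost) * tuples_to_fetch / p->rows;
    }

    int
    main(void)
    {
        DemoPath    p = {10.0, 1010.0, 10000.0};

        /* A LIMIT 100 caller expects about 1% of the run cost: prints 20.00 */
        printf("cost for 100 tuples: %.2f\n", partial_fetch_cost(&p, 100.0));
        return 0;
    }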
      82             : 
      83             : #include "postgres.h"
      84             : 
      85             : #include <limits.h>
      86             : #include <math.h>
      87             : 
      88             : #include "access/amapi.h"
      89             : #include "access/htup_details.h"
      90             : #include "access/tsmapi.h"
      91             : #include "executor/executor.h"
      92             : #include "executor/nodeAgg.h"
      93             : #include "executor/nodeHash.h"
      94             : #include "executor/nodeMemoize.h"
      95             : #include "miscadmin.h"
      96             : #include "nodes/makefuncs.h"
      97             : #include "nodes/nodeFuncs.h"
      98             : #include "optimizer/clauses.h"
      99             : #include "optimizer/cost.h"
     100             : #include "optimizer/optimizer.h"
     101             : #include "optimizer/pathnode.h"
     102             : #include "optimizer/paths.h"
     103             : #include "optimizer/placeholder.h"
     104             : #include "optimizer/plancat.h"
     105             : #include "optimizer/restrictinfo.h"
     106             : #include "parser/parsetree.h"
     107             : #include "utils/lsyscache.h"
     108             : #include "utils/selfuncs.h"
     109             : #include "utils/spccache.h"
     110             : #include "utils/tuplesort.h"
     111             : 
     112             : 
     113             : #define LOG2(x)  (log(x) / 0.693147180559945)
     114             : 
     115             : /*
     116             :  * Append and MergeAppend nodes are less expensive than some other operations
     117             :  * which use cpu_tuple_cost; instead of adding a separate GUC, estimate the
     118             :  * per-tuple cost as cpu_tuple_cost multiplied by this value.
     119             :  */
     120             : #define APPEND_CPU_COST_MULTIPLIER 0.5
     121             : 
     122             : /*
     123             :  * Maximum value for row estimates.  We cap row estimates to this to help
     124             :  * ensure that costs based on these estimates remain within the range of what
     125             :  * double can represent.  add_path() wouldn't act sanely given infinite or NaN
     126             :  * cost values.
     127             :  */
     128             : #define MAXIMUM_ROWCOUNT 1e100
     129             : 
     130             : double      seq_page_cost = DEFAULT_SEQ_PAGE_COST;
     131             : double      random_page_cost = DEFAULT_RANDOM_PAGE_COST;
     132             : double      cpu_tuple_cost = DEFAULT_CPU_TUPLE_COST;
     133             : double      cpu_index_tuple_cost = DEFAULT_CPU_INDEX_TUPLE_COST;
     134             : double      cpu_operator_cost = DEFAULT_CPU_OPERATOR_COST;
     135             : double      parallel_tuple_cost = DEFAULT_PARALLEL_TUPLE_COST;
     136             : double      parallel_setup_cost = DEFAULT_PARALLEL_SETUP_COST;
     137             : double      recursive_worktable_factor = DEFAULT_RECURSIVE_WORKTABLE_FACTOR;
     138             : 
     139             : int         effective_cache_size = DEFAULT_EFFECTIVE_CACHE_SIZE;
     140             : 
     141             : Cost        disable_cost = 1.0e10;
     142             : 
     143             : int         max_parallel_workers_per_gather = 2;
     144             : 
     145             : bool        enable_seqscan = true;
     146             : bool        enable_indexscan = true;
     147             : bool        enable_indexonlyscan = true;
     148             : bool        enable_bitmapscan = true;
     149             : bool        enable_tidscan = true;
     150             : bool        enable_sort = true;
     151             : bool        enable_incremental_sort = true;
     152             : bool        enable_hashagg = true;
     153             : bool        enable_nestloop = true;
     154             : bool        enable_material = true;
     155             : bool        enable_memoize = true;
     156             : bool        enable_mergejoin = true;
     157             : bool        enable_hashjoin = true;
     158             : bool        enable_gathermerge = true;
     159             : bool        enable_partitionwise_join = false;
     160             : bool        enable_partitionwise_aggregate = false;
     161             : bool        enable_parallel_append = true;
     162             : bool        enable_parallel_hash = true;
     163             : bool        enable_partition_pruning = true;
     164             : bool        enable_presorted_aggregate = true;
     165             : bool        enable_async_append = true;
     166             : 
     167             : typedef struct
     168             : {
     169             :     PlannerInfo *root;
     170             :     QualCost    total;
     171             : } cost_qual_eval_context;
     172             : 
     173             : static List *extract_nonindex_conditions(List *qual_clauses, List *indexclauses);
     174             : static MergeScanSelCache *cached_scansel(PlannerInfo *root,
     175             :                                          RestrictInfo *rinfo,
     176             :                                          PathKey *pathkey);
     177             : static void cost_rescan(PlannerInfo *root, Path *path,
     178             :                         Cost *rescan_startup_cost, Cost *rescan_total_cost);
     179             : static bool cost_qual_eval_walker(Node *node, cost_qual_eval_context *context);
     180             : static void get_restriction_qual_cost(PlannerInfo *root, RelOptInfo *baserel,
     181             :                                       ParamPathInfo *param_info,
     182             :                                       QualCost *qpqual_cost);
     183             : static bool has_indexed_join_quals(NestPath *path);
     184             : static double approx_tuple_count(PlannerInfo *root, JoinPath *path,
     185             :                                  List *quals);
     186             : static double calc_joinrel_size_estimate(PlannerInfo *root,
     187             :                                          RelOptInfo *joinrel,
     188             :                                          RelOptInfo *outer_rel,
     189             :                                          RelOptInfo *inner_rel,
     190             :                                          double outer_rows,
     191             :                                          double inner_rows,
     192             :                                          SpecialJoinInfo *sjinfo,
     193             :                                          List *restrictlist);
     194             : static Selectivity get_foreign_key_join_selectivity(PlannerInfo *root,
     195             :                                                     Relids outer_relids,
     196             :                                                     Relids inner_relids,
     197             :                                                     SpecialJoinInfo *sjinfo,
     198             :                                                     List **restrictlist);
     199             : static Cost append_nonpartial_cost(List *subpaths, int numpaths,
     200             :                                    int parallel_workers);
     201             : static void set_rel_width(PlannerInfo *root, RelOptInfo *rel);
     202             : static int32 get_expr_width(PlannerInfo *root, const Node *expr);
     203             : static double relation_byte_size(double tuples, int width);
     204             : static double page_size(double tuples, int width);
     205             : static double get_parallel_divisor(Path *path);
     206             : 
     207             : 
     208             : /*
     209             :  * clamp_row_est
     210             :  *      Force a row-count estimate to a sane value.
     211             :  */
     212             : double
     213     9008926 : clamp_row_est(double nrows)
     214             : {
     215             :     /*
     216             :      * Avoid infinite and NaN row estimates.  Costs derived from such values
     217             :      * are going to be useless.  Also force the estimate to be at least one
     218             :      * row, to make explain output look better and to avoid possible
     219             :      * divide-by-zero when interpolating costs.  Make it an integer, too.
     220             :      */
     221     9008926 :     if (nrows > MAXIMUM_ROWCOUNT || isnan(nrows))
     222           0 :         nrows = MAXIMUM_ROWCOUNT;
     223     9008926 :     else if (nrows <= 1.0)
     224     3213630 :         nrows = 1.0;
     225             :     else
     226     5795296 :         nrows = rint(nrows);
     227             : 
     228     9008926 :     return nrows;
     229             : }
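
A tiny standalone check of the three branches above (a local transcription of the logic, not a call into the planner). One subtlety worth demonstrating: rint() rounds half-to-even under the default rounding mode, so 2.5 clamps to 2, not 3.

    #include <assert.h>
    #include <math.h>

    #define MAXIMUM_ROWCOUNT 1e100

    static double
    clamp_row_est_demo(double nrows)
    {
        if (nrows > MAXIMUM_ROWCOUNT || isnan(nrows))
            nrows = MAXIMUM_ROWCOUNT;
        else if (nrows <= 1.0)
            nrows = 1.0;
        else
            nrows = rint(nrows);
        return nrows;
    }

    int
    main(void)
    {
        assert(clamp_row_est_demo(-3.0) == 1.0);   /* negatives forced to 1 */
        assert(clamp_row_est_demo(0.4) == 1.0);    /* sub-row estimates too */
        assert(clamp_row_est_demo(2.5) == 2.0);    /* round-half-to-even */
        assert(clamp_row_est_demo(NAN) == MAXIMUM_ROWCOUNT);
        return 0;
    }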
     230             : 
     231             : /*
     232             :  * clamp_width_est
     233             :  *      Force a tuple-width estimate to a sane value.
     234             :  *
     235             :  * The planner represents datatype width and tuple width estimates as int32.
     236             :  * When summing column width estimates to create a tuple width estimate,
     237             :  * it's possible to reach integer overflow in edge cases.  To ensure sane
     238             :  * behavior, we form such sums in int64 arithmetic and then apply this routine
     239             :  * to clamp to int32 range.
     240             :  */
     241             : int32
     242     1895846 : clamp_width_est(int64 tuple_width)
     243             : {
     244             :     /*
     245             :      * Anything more than MaxAllocSize is clearly bogus, since we could not
     246             :      * create a tuple that large.
     247             :      */
     248     1895846 :     if (tuple_width > MaxAllocSize)
     249           0 :         return (int32) MaxAllocSize;
     250             : 
     251             :     /*
     252             :      * Unlike clamp_row_est, we just Assert that the value isn't negative,
     253             :      * rather than masking such errors.
     254             :      */
     255             :     Assert(tuple_width >= 0);
     256             : 
     257     1895846 :     return (int32) tuple_width;
     258             : }
     259             : 
     260             : /*
     261             :  * clamp_cardinality_to_long
     262             :  *      Cast a Cardinality value to a sane long value.
     263             :  */
     264             : long
     265       45860 : clamp_cardinality_to_long(Cardinality x)
     266             : {
     267             :     /*
     268             :      * Just for paranoia's sake, ensure we do something sane with negative or
     269             :      * NaN values.
     270             :      */
     271       45860 :     if (isnan(x))
     272           0 :         return LONG_MAX;
     273       45860 :     if (x <= 0)
     274         556 :         return 0;
     275             : 
     276             :     /*
     277             :      * If "long" is 64 bits, then LONG_MAX cannot be represented exactly as a
     278             :      * double.  Casting it to double and back may well result in overflow due
     279             :      * to rounding, so avoid doing that.  We trust that any double value that
     280             :      * compares strictly less than "(double) LONG_MAX" will cast to a
     281             :      * representable "long" value.
     282             :      */
     283       45304 :     return (x < (double) LONG_MAX) ? (long) x : LONG_MAX;
     284             : }
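
The LONG_MAX subtlety above is worth seeing concretely: with a 64-bit long, the nearest double to LONG_MAX (2^63 - 1) is 2^63 itself, so naively casting that double back to long overflows. A standalone illustration of the guarded comparison (assumes an LP64 platform for the printed values):

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* (double) LONG_MAX rounds up to 2^63, which long cannot hold. */
        double      d = (double) LONG_MAX;

        printf("LONG_MAX          = %ld\n", LONG_MAX);
        printf("(double) LONG_MAX = %.1f\n", d);

        /* The safe pattern: strictly-less-than before casting back. */
        double      x = 9.3e18;     /* larger than LONG_MAX */
        long        clamped = (x < (double) LONG_MAX) ? (long) x : LONG_MAX;

        printf("clamped           = %ld\n", clamped);
        return 0;
    }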
     285             : 
     286             : 
     287             : /*
     288             :  * cost_seqscan
     289             :  *    Determines and returns the cost of scanning a relation sequentially.
     290             :  *
     291             :  * 'baserel' is the relation to be scanned
     292             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     293             :  */
     294             : void
     295      427500 : cost_seqscan(Path *path, PlannerInfo *root,
     296             :              RelOptInfo *baserel, ParamPathInfo *param_info)
     297             : {
     298      427500 :     Cost        startup_cost = 0;
     299             :     Cost        cpu_run_cost;
     300             :     Cost        disk_run_cost;
     301             :     double      spc_seq_page_cost;
     302             :     QualCost    qpqual_cost;
     303             :     Cost        cpu_per_tuple;
     304             : 
     305             :     /* Should only be applied to base relations */
     306             :     Assert(baserel->relid > 0);
     307             :     Assert(baserel->rtekind == RTE_RELATION);
     308             : 
     309             :     /* Mark the path with the correct row estimate */
     310      427500 :     if (param_info)
     311         840 :         path->rows = param_info->ppi_rows;
     312             :     else
     313      426660 :         path->rows = baserel->rows;
     314             : 
     315             :     /* fetch estimated page cost for tablespace containing table */
     316      427500 :     get_tablespace_page_costs(baserel->reltablespace,
     317             :                               NULL,
     318             :                               &spc_seq_page_cost);
     319             : 
     320             :     /*
     321             :      * disk costs
     322             :      */
     323      427500 :     disk_run_cost = spc_seq_page_cost * baserel->pages;
     324             : 
     325             :     /* CPU costs */
     326      427500 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
     327             : 
     328      427500 :     startup_cost += qpqual_cost.startup;
     329      427500 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     330      427500 :     cpu_run_cost = cpu_per_tuple * baserel->tuples;
     331             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     332      427500 :     startup_cost += path->pathtarget->cost.startup;
     333      427500 :     cpu_run_cost += path->pathtarget->cost.per_tuple * path->rows;
     334             : 
     335             :     /* Adjust costing for parallelism, if used. */
     336      427500 :     if (path->parallel_workers > 0)
     337             :     {
     338       26156 :         double      parallel_divisor = get_parallel_divisor(path);
     339             : 
     340             :         /* The CPU cost is divided among all the workers. */
     341       26156 :         cpu_run_cost /= parallel_divisor;
     342             : 
     343             :         /*
     344             :          * It may be possible to amortize some of the I/O cost, but probably
     345             :          * not very much, because most operating systems already do aggressive
     346             :          * prefetching.  For now, we assume that the disk run cost can't be
     347             :          * amortized at all.
     348             :          */
     349             : 
     350             :         /*
     351             :          * In the case of a parallel plan, the row count needs to represent
     352             :          * the number of tuples processed per worker.
     353             :          */
     354       26156 :         path->rows = clamp_row_est(path->rows / parallel_divisor);
     355             :     }
     356             : 
     357      427500 :     path->disabled_nodes = enable_seqscan ? 0 : 1;
     358      427500 :     path->startup_cost = startup_cost;
     359      427500 :     path->total_cost = startup_cost + cpu_run_cost + disk_run_cost;
     360      427500 : }
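
To make the shape of the estimate concrete, here is a back-of-the-envelope sketch using PostgreSQL's documented defaults (seq_page_cost = 1.0, cpu_tuple_cost = 0.01) for a hypothetical 10,000-page, 1,000,000-tuple table with no quals, ignoring tlist costs and the parallel adjustment:

    #include <stdio.h>

    int
    main(void)
    {
        const double seq_page_cost = 1.0;     /* default GUC value */
        const double cpu_tuple_cost = 0.01;   /* default GUC value */

        double      pages = 10000.0;          /* hypothetical table */
        double      tuples = 1000000.0;

        double      disk_run_cost = seq_page_cost * pages;    /* 10000 */
        double      cpu_run_cost = cpu_tuple_cost * tuples;   /* 10000 */

        /* No quals, no tlist costs: startup stays zero. */
        printf("total_cost = %.1f\n", disk_run_cost + cpu_run_cost);
        return 0;
    }

Note how, at the defaults, CPU and disk contribute equally here; that balance shifts as soon as quals add per-tuple evaluation cost.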
     361             : 
     362             : /*
     363             :  * cost_samplescan
     364             :  *    Determines and returns the cost of scanning a relation using sampling.
     365             :  *
     366             :  * 'baserel' is the relation to be scanned
     367             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     368             :  */
     369             : void
     370         306 : cost_samplescan(Path *path, PlannerInfo *root,
     371             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
     372             : {
     373         306 :     Cost        startup_cost = 0;
     374         306 :     Cost        run_cost = 0;
     375             :     RangeTblEntry *rte;
     376             :     TableSampleClause *tsc;
     377             :     TsmRoutine *tsm;
     378             :     double      spc_seq_page_cost,
     379             :                 spc_random_page_cost,
     380             :                 spc_page_cost;
     381             :     QualCost    qpqual_cost;
     382             :     Cost        cpu_per_tuple;
     383             : 
     384             :     /* Should only be applied to base relations with tablesample clauses */
     385             :     Assert(baserel->relid > 0);
     386         306 :     rte = planner_rt_fetch(baserel->relid, root);
     387             :     Assert(rte->rtekind == RTE_RELATION);
     388         306 :     tsc = rte->tablesample;
     389             :     Assert(tsc != NULL);
     390         306 :     tsm = GetTsmRoutine(tsc->tsmhandler);
     391             : 
     392             :     /* Mark the path with the correct row estimate */
     393         306 :     if (param_info)
     394          72 :         path->rows = param_info->ppi_rows;
     395             :     else
     396         234 :         path->rows = baserel->rows;
     397             : 
     398             :     /* fetch estimated page cost for tablespace containing table */
     399         306 :     get_tablespace_page_costs(baserel->reltablespace,
     400             :                               &spc_random_page_cost,
     401             :                               &spc_seq_page_cost);
     402             : 
     403             :     /* if NextSampleBlock is used, assume random access, else sequential */
     404         612 :     spc_page_cost = (tsm->NextSampleBlock != NULL) ?
     405         306 :         spc_random_page_cost : spc_seq_page_cost;
     406             : 
     407             :     /*
     408             :      * disk costs (recall that baserel->pages has already been set to the
     409             :      * number of pages the sampling method will visit)
     410             :      */
     411         306 :     run_cost += spc_page_cost * baserel->pages;
     412             : 
     413             :     /*
     414             :      * CPU costs (recall that baserel->tuples has already been set to the
     415             :      * number of tuples the sampling method will select).  Note that we ignore
     416             :      * execution cost of the TABLESAMPLE parameter expressions; they will be
     417             :      * evaluated only once per scan, and in most usages they'll likely be
     418             :      * simple constants anyway.  We also don't charge anything for the
     419             :      * calculations the sampling method might do internally.
     420             :      */
     421         306 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
     422             : 
     423         306 :     startup_cost += qpqual_cost.startup;
     424         306 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     425         306 :     run_cost += cpu_per_tuple * baserel->tuples;
     426             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     427         306 :     startup_cost += path->pathtarget->cost.startup;
     428         306 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
     429             : 
     430         306 :     path->disabled_nodes = 0;
     431         306 :     path->startup_cost = startup_cost;
     432         306 :     path->total_cost = startup_cost + run_cost;
     433         306 : }
     434             : 
     435             : /*
     436             :  * cost_gather
     437             :  *    Determines and returns the cost of gather path.
     438             :  *
     439             :  * 'rel' is the relation to be operated upon
     440             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     441             :  * 'rows' may be used to point to a row estimate; if non-NULL, it overrides
     442             :  * both 'rel' and 'param_info'.  This is useful when the path doesn't exactly
     443             :  * correspond to any particular RelOptInfo.
     444             :  */
     445             : void
     446       19134 : cost_gather(GatherPath *path, PlannerInfo *root,
     447             :             RelOptInfo *rel, ParamPathInfo *param_info,
     448             :             double *rows)
     449             : {
     450       19134 :     Cost        startup_cost = 0;
     451       19134 :     Cost        run_cost = 0;
     452             : 
     453             :     /* Mark the path with the correct row estimate */
     454       19134 :     if (rows)
     455        1752 :         path->path.rows = *rows;
     456       17382 :     else if (param_info)
     457           0 :         path->path.rows = param_info->ppi_rows;
     458             :     else
     459       17382 :         path->path.rows = rel->rows;
     460             : 
     461       19134 :     startup_cost = path->subpath->startup_cost;
     462             : 
     463       19134 :     run_cost = path->subpath->total_cost - path->subpath->startup_cost;
     464             : 
     465             :     /* Parallel setup and communication cost. */
     466       19134 :     startup_cost += parallel_setup_cost;
     467       19134 :     run_cost += parallel_tuple_cost * path->path.rows;
     468             : 
     469       19134 :     path->path.disabled_nodes = path->subpath->disabled_nodes;
     470       19134 :     path->path.startup_cost = startup_cost;
     471       19134 :     path->path.total_cost = (startup_cost + run_cost);
     472       19134 : }
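
With the default parallel_setup_cost = 1000 and parallel_tuple_cost = 0.1, the Gather overhead is easy to quantify, which helps explain why small queries rarely choose parallel plans. A standalone sketch with hypothetical subpath costs:

    #include <stdio.h>

    int
    main(void)
    {
        const double parallel_setup_cost = 1000.0;  /* default */
        const double parallel_tuple_cost = 0.1;     /* default */

        /* Hypothetical subpath: no startup, 12000 total, 100000 rows out. */
        double      sub_startup = 0.0;
        double      sub_total = 12000.0;
        double      rows = 100000.0;

        double      startup = sub_startup + parallel_setup_cost;
        double      total = startup + (sub_total - sub_startup)
            + parallel_tuple_cost * rows;

        /* 1000 setup + 12000 subpath run + 10000 tuple passing = 23000 */
        printf("gather startup=%.0f total=%.0f\n", startup, total);
        return 0;
    }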
     473             : 
     474             : /*
     475             :  * cost_gather_merge
     476             :  *    Determines and returns the cost of gather merge path.
     477             :  *
     478             :  * GatherMerge merges several pre-sorted input streams, using a heap that at
     479             :  * any given instant holds the next tuple from each stream. If there are N
     480             :  * streams, we need about N*log2(N) tuple comparisons to construct the heap at
     481             :  * startup, and then for each output tuple, about log2(N) comparisons to
     482             :  * replace the top heap entry with the next tuple from the same stream.
     483             :  */
     484             : void
     485       10190 : cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
     486             :                   RelOptInfo *rel, ParamPathInfo *param_info,
     487             :                   int input_disabled_nodes,
     488             :                   Cost input_startup_cost, Cost input_total_cost,
     489             :                   double *rows)
     490             : {
     491       10190 :     Cost        startup_cost = 0;
     492       10190 :     Cost        run_cost = 0;
     493             :     Cost        comparison_cost;
     494             :     double      N;
     495             :     double      logN;
     496             : 
     497             :     /* Mark the path with the correct row estimate */
     498       10190 :     if (rows)
     499        4624 :         path->path.rows = *rows;
     500        5566 :     else if (param_info)
     501           0 :         path->path.rows = param_info->ppi_rows;
     502             :     else
     503        5566 :         path->path.rows = rel->rows;
     504             : 
     505             :     /*
     506             :      * Add one to the number of workers to account for the leader.  This might
     507             :      * be overgenerous since the leader will do less work than other workers
     508             :      * in typical cases, but we'll go with it for now.
     509             :      */
     510             :     Assert(path->num_workers > 0);
     511       10190 :     N = (double) path->num_workers + 1;
     512       10190 :     logN = LOG2(N);
     513             : 
     514             :     /* Assumed cost per tuple comparison */
     515       10190 :     comparison_cost = 2.0 * cpu_operator_cost;
     516             : 
     517             :     /* Heap creation cost */
     518       10190 :     startup_cost += comparison_cost * N * logN;
     519             : 
     520             :     /* Per-tuple heap maintenance cost */
     521       10190 :     run_cost += path->path.rows * comparison_cost * logN;
     522             : 
     523             :     /* small cost for heap management, like cost_merge_append */
     524       10190 :     run_cost += cpu_operator_cost * path->path.rows;
     525             : 
     526             :     /*
     527             :      * Parallel setup and communication cost.  Since Gather Merge, unlike
     528             :      * Gather, requires us to block until a tuple is available from every
     529             :      * worker, we bump the IPC cost up a little bit as compared with Gather.
     530             :      * For lack of a better idea, charge an extra 5%.
     531             :      */
     532       10190 :     startup_cost += parallel_setup_cost;
     533       10190 :     run_cost += parallel_tuple_cost * path->path.rows * 1.05;
     534             : 
     535       10190 :     path->path.disabled_nodes = input_disabled_nodes
     536       10190 :         + (enable_gathermerge ? 0 : 1);
     537       10190 :     path->path.startup_cost = startup_cost + input_startup_cost;
     538       10190 :     path->path.total_cost = (startup_cost + run_cost + input_total_cost);
     539       10190 : }
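
To see the magnitudes involved: with the default of two workers plus the leader, N = 3 and log2(3) is about 1.585, so heap creation is nearly free while the per-row IPC term dominates. A standalone sketch of the same arithmetic using the default GUC values (cpu_operator_cost = 0.0025, parallel_tuple_cost = 0.1, parallel_setup_cost = 1000) and a hypothetical row count:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        const double cpu_operator_cost = 0.0025;
        const double parallel_tuple_cost = 0.1;
        const double parallel_setup_cost = 1000.0;

        double      N = 2 + 1;              /* two workers plus the leader */
        double      logN = log2(N);
        double      comparison_cost = 2.0 * cpu_operator_cost;
        double      rows = 100000.0;        /* hypothetical */

        double      startup = comparison_cost * N * logN + parallel_setup_cost;
        double      run = rows * comparison_cost * logN  /* heap maintenance */
            + cpu_operator_cost * rows                   /* heap management */
            + parallel_tuple_cost * rows * 1.05;         /* IPC, +5% vs Gather */

        /* roughly: startup ~ 1000.024, run ~ 792.5 + 250 + 10500 = 11542.5 */
        printf("startup=%.3f run=%.1f\n", startup, run);
        return 0;
    }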
     540             : 
     541             : /*
     542             :  * cost_index
     543             :  *    Determines and returns the cost of scanning a relation using an index.
     544             :  *
     545             :  * 'path' describes the indexscan under consideration, and is complete
     546             :  *      except for the fields to be set by this routine
     547             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
     548             :  *      estimates of caching behavior
     549             :  *
     550             :  * In addition to rows, startup_cost and total_cost, cost_index() sets the
     551             :  * path's indextotalcost and indexselectivity fields.  These values will be
     552             :  * needed if the IndexPath is used in a BitmapIndexScan.
     553             :  *
     554             :  * NOTE: path->indexquals must contain only clauses usable as index
     555             :  * restrictions.  Any additional quals evaluated as qpquals may reduce the
     556             :  * number of returned tuples, but they won't reduce the number of tuples
     557             :  * we have to fetch from the table, so they don't reduce the scan cost.
     558             :  */
     559             : void
     560      783406 : cost_index(IndexPath *path, PlannerInfo *root, double loop_count,
     561             :            bool partial_path)
     562             : {
     563      783406 :     IndexOptInfo *index = path->indexinfo;
     564      783406 :     RelOptInfo *baserel = index->rel;
     565      783406 :     bool        indexonly = (path->path.pathtype == T_IndexOnlyScan);
     566             :     amcostestimate_function amcostestimate;
     567             :     List       *qpquals;
     568      783406 :     Cost        startup_cost = 0;
     569      783406 :     Cost        run_cost = 0;
     570      783406 :     Cost        cpu_run_cost = 0;
     571             :     Cost        indexStartupCost;
     572             :     Cost        indexTotalCost;
     573             :     Selectivity indexSelectivity;
     574             :     double      indexCorrelation,
     575             :                 csquared;
     576             :     double      spc_seq_page_cost,
     577             :                 spc_random_page_cost;
     578             :     Cost        min_IO_cost,
     579             :                 max_IO_cost;
     580             :     QualCost    qpqual_cost;
     581             :     Cost        cpu_per_tuple;
     582             :     double      tuples_fetched;
     583             :     double      pages_fetched;
     584             :     double      rand_heap_pages;
     585             :     double      index_pages;
     586             : 
     587             :     /* Should only be applied to base relations */
     588             :     Assert(IsA(baserel, RelOptInfo) &&
     589             :            IsA(index, IndexOptInfo));
     590             :     Assert(baserel->relid > 0);
     591             :     Assert(baserel->rtekind == RTE_RELATION);
     592             : 
     593             :     /*
     594             :      * Mark the path with the correct row estimate, and identify which quals
     595             :      * will need to be enforced as qpquals.  We need not check any quals that
     596             :      * are implied by the index's predicate, so we can use indrestrictinfo not
     597             :      * baserestrictinfo as the list of relevant restriction clauses for the
     598             :      * rel.
     599             :      */
     600      783406 :     if (path->path.param_info)
     601             :     {
     602      143600 :         path->path.rows = path->path.param_info->ppi_rows;
     603             :         /* qpquals come from the rel's restriction clauses and ppi_clauses */
     604      143600 :         qpquals = list_concat(extract_nonindex_conditions(path->indexinfo->indrestrictinfo,
     605             :                                                           path->indexclauses),
     606      143600 :                               extract_nonindex_conditions(path->path.param_info->ppi_clauses,
     607             :                                                           path->indexclauses));
     608             :     }
     609             :     else
     610             :     {
     611      639806 :         path->path.rows = baserel->rows;
     612             :         /* qpquals come from just the rel's restriction clauses */
     613      639806 :         qpquals = extract_nonindex_conditions(path->indexinfo->indrestrictinfo,
     614             :                                               path->indexclauses);
     615             :     }
     616             : 
     617             :     /* we don't need to check enable_indexonlyscan; indxpath.c does that */
     618      783406 :     path->path.disabled_nodes = enable_indexscan ? 0 : 1;
     619             : 
     620             :     /*
     621             :      * Call index-access-method-specific code to estimate the processing cost
     622             :      * for scanning the index, as well as the selectivity of the index (ie,
     623             :      * the fraction of main-table tuples we will have to retrieve) and its
     624             :      * correlation to the main-table tuple order.  We need a cast here because
     625             :      * pathnodes.h uses a weak function type to avoid including amapi.h.
     626             :      */
     627      783406 :     amcostestimate = (amcostestimate_function) index->amcostestimate;
     628      783406 :     amcostestimate(root, path, loop_count,
     629             :                    &indexStartupCost, &indexTotalCost,
     630             :                    &indexSelectivity, &indexCorrelation,
     631             :                    &index_pages);
     632             : 
     633             :     /*
     634             :      * Save amcostestimate's results for possible use in bitmap scan planning.
     635             :      * We don't bother to save indexStartupCost or indexCorrelation, because a
     636             :      * bitmap scan doesn't care about either.
     637             :      */
     638      783406 :     path->indextotalcost = indexTotalCost;
     639      783406 :     path->indexselectivity = indexSelectivity;
     640             : 
     641             :     /* all costs for touching index itself included here */
     642      783406 :     startup_cost += indexStartupCost;
     643      783406 :     run_cost += indexTotalCost - indexStartupCost;
     644             : 
     645             :     /* estimate number of main-table tuples fetched */
     646      783406 :     tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);
     647             : 
     648             :     /* fetch estimated page costs for tablespace containing table */
     649      783406 :     get_tablespace_page_costs(baserel->reltablespace,
     650             :                               &spc_random_page_cost,
     651             :                               &spc_seq_page_cost);
     652             : 
     653             :     /*----------
     654             :      * Estimate number of main-table pages fetched, and compute I/O cost.
     655             :      *
     656             :      * When the index ordering is uncorrelated with the table ordering,
     657             :      * we use an approximation proposed by Mackert and Lohman (see
     658             :      * index_pages_fetched() for details) to compute the number of pages
     659             :      * fetched, and then charge spc_random_page_cost per page fetched.
     660             :      *
     661             :      * When the index ordering is exactly correlated with the table ordering
     662             :      * (just after a CLUSTER, for example), the number of pages fetched should
     663             :      * be exactly selectivity * table_size.  What's more, all but the first
     664             :      * will be sequential fetches, not the random fetches that occur in the
     665             :      * uncorrelated case.  So if the number of pages is more than 1, we
     666             :      * ought to charge
     667             :      *      spc_random_page_cost + (pages_fetched - 1) * spc_seq_page_cost
     668             :      * For partially-correlated indexes, we ought to charge somewhere between
     669             :      * these two estimates.  We currently interpolate linearly between the
     670             :      * estimates based on the correlation squared (XXX is that appropriate?).
     671             :      *
     672             :      * If it's an index-only scan, then we will not need to fetch any heap
     673             :      * pages for which the visibility map shows all tuples are visible.
     674             :      * Hence, reduce the estimated number of heap fetches accordingly.
     675             :      * We use the measured fraction of the entire heap that is all-visible,
     676             :      * which might not be particularly relevant to the subset of the heap
     677             :      * that this query will fetch; but it's not clear how to do better.
     678             :      *----------
     679             :      */
     680      783406 :     if (loop_count > 1)
     681             :     {
     682             :         /*
     683             :          * For repeated indexscans, the appropriate estimate for the
     684             :          * uncorrelated case is to scale up the number of tuples fetched in
     685             :          * the Mackert and Lohman formula by the number of scans, so that we
     686             :          * estimate the number of pages fetched by all the scans; then
     687             :          * pro-rate the costs for one scan.  In this case we assume all the
     688             :          * fetches are random accesses.
     689             :          */
     690       82992 :         pages_fetched = index_pages_fetched(tuples_fetched * loop_count,
     691             :                                             baserel->pages,
     692       82992 :                                             (double) index->pages,
     693             :                                             root);
     694             : 
     695       82992 :         if (indexonly)
     696        9360 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     697             : 
     698       82992 :         rand_heap_pages = pages_fetched;
     699             : 
     700       82992 :         max_IO_cost = (pages_fetched * spc_random_page_cost) / loop_count;
     701             : 
     702             :         /*
     703             :          * In the perfectly correlated case, the number of pages touched by
     704             :          * each scan is selectivity * table_size, and we can use the Mackert
     705             :          * and Lohman formula at the page level to estimate how much work is
     706             :          * saved by caching across scans.  We still assume all the fetches are
     707             :          * random, though, which is an overestimate that's hard to correct for
     708             :          * without double-counting the cache effects.  (But in most cases
     709             :          * where such a plan is actually interesting, only one page would get
     710             :          * fetched per scan anyway, so it shouldn't matter much.)
     711             :          */
     712       82992 :         pages_fetched = ceil(indexSelectivity * (double) baserel->pages);
     713             : 
     714       82992 :         pages_fetched = index_pages_fetched(pages_fetched * loop_count,
     715             :                                             baserel->pages,
     716       82992 :                                             (double) index->pages,
     717             :                                             root);
     718             : 
     719       82992 :         if (indexonly)
     720        9360 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     721             : 
     722       82992 :         min_IO_cost = (pages_fetched * spc_random_page_cost) / loop_count;
     723             :     }
     724             :     else
     725             :     {
     726             :         /*
     727             :          * Normal case: apply the Mackert and Lohman formula, and then
     728             :          * interpolate between that and the correlation-derived result.
     729             :          */
     730      700414 :         pages_fetched = index_pages_fetched(tuples_fetched,
     731             :                                             baserel->pages,
     732      700414 :                                             (double) index->pages,
     733             :                                             root);
     734             : 
     735      700414 :         if (indexonly)
     736       64364 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     737             : 
     738      700414 :         rand_heap_pages = pages_fetched;
     739             : 
     740             :         /* max_IO_cost is for the perfectly uncorrelated case (csquared=0) */
     741      700414 :         max_IO_cost = pages_fetched * spc_random_page_cost;
     742             : 
     743             :         /* min_IO_cost is for the perfectly correlated case (csquared=1) */
     744      700414 :         pages_fetched = ceil(indexSelectivity * (double) baserel->pages);
     745             : 
     746      700414 :         if (indexonly)
     747       64364 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     748             : 
     749      700414 :         if (pages_fetched > 0)
     750             :         {
     751      635782 :             min_IO_cost = spc_random_page_cost;
     752      635782 :             if (pages_fetched > 1)
     753      187546 :                 min_IO_cost += (pages_fetched - 1) * spc_seq_page_cost;
     754             :         }
     755             :         else
     756       64632 :             min_IO_cost = 0;
     757             :     }
     758             : 
     759      783406 :     if (partial_path)
     760             :     {
     761             :         /*
     762             :          * For index only scans compute workers based on number of index pages
     763             :          * fetched; the number of heap pages we fetch might be so small as to
     764             :          * effectively rule out parallelism, which we don't want to do.
     765             :          */
     766      272286 :         if (indexonly)
     767       23216 :             rand_heap_pages = -1;
     768             : 
     769             :         /*
     770             :          * Estimate the number of parallel workers required to scan index. Use
     771             :          * the number of heap pages computed considering heap fetches won't be
     772             :          * sequential as for parallel scans the pages are accessed in random
     773             :          * order.
     774             :          */
     775      272286 :         path->path.parallel_workers = compute_parallel_worker(baserel,
     776             :                                                               rand_heap_pages,
     777             :                                                               index_pages,
     778             :                                                               max_parallel_workers_per_gather);
     779             : 
     780             :         /*
     781             :          * Fall out if workers can't be assigned for parallel scan, because in
     782             :          * such a case this path will be rejected.  So there is no benefit in
     783             :          * doing extra computation.
     784             :          */
     785      272286 :         if (path->path.parallel_workers <= 0)
     786      262222 :             return;
     787             : 
     788       10064 :         path->path.parallel_aware = true;
     789             :     }
     790             : 
     791             :     /*
     792             :      * Now interpolate based on estimated index order correlation to get total
     793             :      * disk I/O cost for main table accesses.
     794             :      */
     795      521184 :     csquared = indexCorrelation * indexCorrelation;
     796             : 
     797      521184 :     run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);
     798             : 
     799             :     /*
     800             :      * Estimate CPU costs per tuple.
     801             :      *
     802             :      * What we want here is cpu_tuple_cost plus the evaluation costs of any
     803             :      * qual clauses that we have to evaluate as qpquals.
     804             :      */
     805      521184 :     cost_qual_eval(&qpqual_cost, qpquals, root);
     806             : 
     807      521184 :     startup_cost += qpqual_cost.startup;
     808      521184 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     809             : 
     810      521184 :     cpu_run_cost += cpu_per_tuple * tuples_fetched;
     811             : 
     812             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     813      521184 :     startup_cost += path->path.pathtarget->cost.startup;
     814      521184 :     cpu_run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
     815             : 
     816             :     /* Adjust costing for parallelism, if used. */
     817      521184 :     if (path->path.parallel_workers > 0)
     818             :     {
     819       10064 :         double      parallel_divisor = get_parallel_divisor(&path->path);
     820             : 
     821       10064 :         path->path.rows = clamp_row_est(path->path.rows / parallel_divisor);
     822             : 
     823             :         /* The CPU cost is divided among all the workers. */
     824       10064 :         cpu_run_cost /= parallel_divisor;
     825             :     }
     826             : 
     827      521184 :     run_cost += cpu_run_cost;
     828             : 
     829      521184 :     path->path.startup_cost = startup_cost;
     830      521184 :     path->path.total_cost = startup_cost + run_cost;
     831             : }
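
The correlation-based interpolation near the end is the heart of the index-versus-seqscan tradeoff. A standalone sketch of just that step, with the default page costs (random_page_cost = 4.0, seq_page_cost = 1.0) and a hypothetical 100 fetched pages:

    #include <stdio.h>

    int
    main(void)
    {
        const double spc_random_page_cost = 4.0;    /* default */
        const double spc_seq_page_cost = 1.0;       /* default */

        double      pages_fetched = 100.0;          /* hypothetical */

        /* Perfectly uncorrelated: every heap fetch is random. */
        double      max_IO_cost = pages_fetched * spc_random_page_cost;

        /* Perfectly correlated: one random fetch, the rest sequential. */
        double      min_IO_cost = spc_random_page_cost +
            (pages_fetched - 1) * spc_seq_page_cost;

        /* Interpolate on correlation squared, as cost_index does. */
        for (double corr = 0.0; corr <= 1.0; corr += 0.5)
        {
            double      csquared = corr * corr;
            double      io = max_IO_cost + csquared * (min_IO_cost - max_IO_cost);

            printf("correlation %.1f -> IO cost %.2f\n", corr, io);
        }
        return 0;
    }

Because the interpolation uses the squared correlation, an index must be quite well correlated before the cheap mostly-sequential estimate carries much weight.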
     832             : 
     833             : /*
     834             :  * extract_nonindex_conditions
     835             :  *
     836             :  * Given a list of quals to be enforced in an indexscan, extract the ones that
     837             :  * will have to be applied as qpquals (ie, the index machinery won't handle
     838             :  * them).  Here we detect only whether a qual clause is directly redundant
     839             :  * with some indexclause.  If the index path is chosen for use, createplan.c
     840             :  * will try a bit harder to get rid of redundant qual conditions; specifically
     841             :  * it will see if quals can be proven to be implied by the indexquals.  But
     842             :  * it does not seem worth the cycles to try to factor that in at this stage,
     843             :  * since we're only trying to estimate qual eval costs.  Otherwise this must
     844             :  * match the logic in create_indexscan_plan().
     845             :  *
     846             :  * qual_clauses, and the result, are lists of RestrictInfos.
     847             :  * indexclauses is a list of IndexClauses.
     848             :  */
     849             : static List *
     850      927006 : extract_nonindex_conditions(List *qual_clauses, List *indexclauses)
     851             : {
     852      927006 :     List       *result = NIL;
     853             :     ListCell   *lc;
     854             : 
     855     1945178 :     foreach(lc, qual_clauses)
     856             :     {
     857     1018172 :         RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);
     858             : 
     859     1018172 :         if (rinfo->pseudoconstant)
     860        9694 :             continue;           /* we may drop pseudoconstants here */
     861     1008478 :         if (is_redundant_with_indexclauses(rinfo, indexclauses))
     862      591930 :             continue;           /* dup or derived from same EquivalenceClass */
     863             :         /* ... skip the predicate proof attempt createplan.c will try ... */
     864      416548 :         result = lappend(result, rinfo);
     865             :     }
     866      927006 :     return result;
     867             : }
     868             : 
     869             : /*
     870             :  * index_pages_fetched
     871             :  *    Estimate the number of pages actually fetched after accounting for
     872             :  *    cache effects.
     873             :  *
     874             :  * We use an approximation proposed by Mackert and Lohman, "Index Scans
     875             :  * Using a Finite LRU Buffer: A Validated I/O Model", ACM Transactions
     876             :  * on Database Systems, Vol. 14, No. 3, September 1989, Pages 401-424.
     877             :  * The Mackert and Lohman approximation is that the number of pages
     878             :  * fetched is
     879             :  *  PF =
     880             :  *      min(2TNs/(2T+Ns), T)            when T <= b
     881             :  *      2TNs/(2T+Ns)                    when T > b and Ns <= 2Tb/(2T-b)
     882             :  *      b + (Ns - 2Tb/(2T-b))*(T-b)/T   when T > b and Ns > 2Tb/(2T-b)
     883             :  * where
     884             :  *      T = # pages in table
     885             :  *      N = # tuples in table
     886             :  *      s = selectivity = fraction of table to be scanned
     887             :  *      b = # buffer pages available (we include kernel space here)
     888             :  *
     889             :  * We assume that effective_cache_size is the total number of buffer pages
     890             :  * available for the whole query, and pro-rate that space across all the
     891             :  * tables in the query and the index currently under consideration.  (This
     892             :  * ignores space needed for other indexes used by the query, but since we
     893             :  * don't know which indexes will get used, we can't estimate that very well;
     894             :  * and in any case counting all the tables may well be an overestimate, since
     895             :  * depending on the join plan not all the tables may be scanned concurrently.)
     896             :  *
     897             :  * The product Ns is the number of tuples fetched; we pass in that
     898             :  * product rather than calculating it here.  "pages" is the number of pages
     899             :  * in the object under consideration (either an index or a table).
     900             :  * "index_pages" is the amount to add to the total table space, which was
     901             :  * computed for us by make_one_rel.
     902             :  *
     903             :  * Caller is expected to have ensured that tuples_fetched is greater than zero
     904             :  * and rounded to integer (see clamp_row_est).  The result will likewise be
     905             :  * greater than zero and integral.
     906             :  */
     907             : double
     908     1093930 : index_pages_fetched(double tuples_fetched, BlockNumber pages,
     909             :                     double index_pages, PlannerInfo *root)
     910             : {
     911             :     double      pages_fetched;
     912             :     double      total_pages;
     913             :     double      T,
     914             :                 b;
     915             : 
     916             :     /* T is # pages in table, but don't allow it to be zero */
     917     1093930 :     T = (pages > 1) ? (double) pages : 1.0;
     918             : 
     919             :     /* Compute number of pages assumed to be competing for cache space */
     920     1093930 :     total_pages = root->total_table_pages + index_pages;
     921     1093930 :     total_pages = Max(total_pages, 1.0);
     922             :     Assert(T <= total_pages);
     923             : 
     924             :     /* b is pro-rated share of effective_cache_size */
     925     1093930 :     b = (double) effective_cache_size * T / total_pages;
     926             : 
     927             :     /* force it positive and integral */
     928     1093930 :     if (b <= 1.0)
     929           0 :         b = 1.0;
     930             :     else
     931     1093930 :         b = ceil(b);
     932             : 
     933             :     /* This part is the Mackert and Lohman formula */
     934     1093930 :     if (T <= b)
     935             :     {
     936     1093930 :         pages_fetched =
     937     1093930 :             (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
     938     1093930 :         if (pages_fetched >= T)
     939      634516 :             pages_fetched = T;
     940             :         else
     941      459414 :             pages_fetched = ceil(pages_fetched);
     942             :     }
     943             :     else
     944             :     {
     945             :         double      lim;
     946             : 
     947           0 :         lim = (2.0 * T * b) / (2.0 * T - b);
     948           0 :         if (tuples_fetched <= lim)
     949             :         {
     950           0 :             pages_fetched =
     951           0 :                 (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
     952             :         }
     953             :         else
     954             :         {
     955           0 :             pages_fetched =
     956           0 :                 b + (tuples_fetched - lim) * (T - b) / T;
     957             :         }
     958           0 :         pages_fetched = ceil(pages_fetched);
     959             :     }
     960     1093930 :     return pages_fetched;
     961             : }
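As a worked check of the T <= b branch above, here is a minimal standalone
sketch, not PostgreSQL code, with a hypothetical table size and cache share:

    #include <math.h>
    #include <stdio.h>

    /* The T <= b case of the Mackert-Lohman estimate, for illustration. */
    static double
    ml_pages_fetched(double tuples_fetched, double T)
    {
        double pages = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);

        return (pages >= T) ? T : ceil(pages);
    }

    int
    main(void)
    {
        /* hypothetical: 1000-page table, 500 tuples fetched, cache share b >= T */
        printf("%.0f\n", ml_pages_fetched(500.0, 1000.0));
        /* 2*1000*500 / (2*1000 + 500) = 1000000 / 2500 = 400 pages */
        return 0;
    }

With the table comfortably inside its pro-rated cache share, fetching 500
tuples from a 1000-page table is estimated to touch 400 distinct pages.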
     962             : 
     963             : /*
     964             :  * get_indexpath_pages
     965             :  *      Determine the total size of the indexes used in a bitmap index path.
     966             :  *
     967             :  * Note: if the same index is used more than once in a bitmap tree, we will
     968             :  * count it multiple times, which perhaps is the wrong thing ... but it's
     969             :  * not completely clear, and detecting duplicates is difficult, so ignore it
     970             :  * for now.
     971             :  */
     972             : static double
     973      186588 : get_indexpath_pages(Path *bitmapqual)
     974             : {
     975      186588 :     double      result = 0;
     976             :     ListCell   *l;
     977             : 
     978      186588 :     if (IsA(bitmapqual, BitmapAndPath))
     979             :     {
     980       23352 :         BitmapAndPath *apath = (BitmapAndPath *) bitmapqual;
     981             : 
     982       70056 :         foreach(l, apath->bitmapquals)
     983             :         {
     984       46704 :             result += get_indexpath_pages((Path *) lfirst(l));
     985             :         }
     986             :     }
     987      163236 :     else if (IsA(bitmapqual, BitmapOrPath))
     988             :     {
     989          70 :         BitmapOrPath *opath = (BitmapOrPath *) bitmapqual;
     990             : 
     991         222 :         foreach(l, opath->bitmapquals)
     992             :         {
     993         152 :             result += get_indexpath_pages((Path *) lfirst(l));
     994             :         }
     995             :     }
     996      163166 :     else if (IsA(bitmapqual, IndexPath))
     997             :     {
     998      163166 :         IndexPath  *ipath = (IndexPath *) bitmapqual;
     999             : 
    1000      163166 :         result = (double) ipath->indexinfo->pages;
    1001             :     }
    1002             :     else
    1003           0 :         elog(ERROR, "unrecognized node type: %d", nodeTag(bitmapqual));
    1004             : 
    1005      186588 :     return result;
    1006             : }
    1007             : 
    1008             : /*
    1009             :  * cost_bitmap_heap_scan
    1010             :  *    Determines and returns the cost of scanning a relation using a bitmap
    1011             :  *    index-then-heap plan.
    1012             :  *
    1013             :  * 'baserel' is the relation to be scanned
    1014             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1015             :  * 'bitmapqual' is a tree of IndexPaths, BitmapAndPaths, and BitmapOrPaths
    1016             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
    1017             :  *      estimates of caching behavior
    1018             :  *
    1019             :  * Note: the component IndexPaths in bitmapqual should have been costed
    1020             :  * using the same loop_count.
    1021             :  */
    1022             : void
    1023      535218 : cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,
    1024             :                       ParamPathInfo *param_info,
    1025             :                       Path *bitmapqual, double loop_count)
    1026             : {
    1027      535218 :     Cost        startup_cost = 0;
    1028      535218 :     Cost        run_cost = 0;
    1029             :     Cost        indexTotalCost;
    1030             :     QualCost    qpqual_cost;
    1031             :     Cost        cpu_per_tuple;
    1032             :     Cost        cost_per_page;
    1033             :     Cost        cpu_run_cost;
    1034             :     double      tuples_fetched;
    1035             :     double      pages_fetched;
    1036             :     double      spc_seq_page_cost,
    1037             :                 spc_random_page_cost;
    1038             :     double      T;
    1039             : 
    1040             :     /* Should only be applied to base relations */
    1041             :     Assert(IsA(baserel, RelOptInfo));
    1042             :     Assert(baserel->relid > 0);
    1043             :     Assert(baserel->rtekind == RTE_RELATION);
    1044             : 
    1045             :     /* Mark the path with the correct row estimate */
    1046      535218 :     if (param_info)
    1047      222466 :         path->rows = param_info->ppi_rows;
    1048             :     else
    1049      312752 :         path->rows = baserel->rows;
    1050             : 
    1051      535218 :     pages_fetched = compute_bitmap_pages(root, baserel, bitmapqual,
    1052             :                                          loop_count, &indexTotalCost,
    1053             :                                          &tuples_fetched);
    1054             : 
    1055      535218 :     startup_cost += indexTotalCost;
    1056      535218 :     T = (baserel->pages > 1) ? (double) baserel->pages : 1.0;
    1057             : 
    1058             :     /* Fetch estimated page costs for tablespace containing table. */
    1059      535218 :     get_tablespace_page_costs(baserel->reltablespace,
    1060             :                               &spc_random_page_cost,
    1061             :                               &spc_seq_page_cost);
    1062             : 
    1063             :     /*
    1064             :      * For small numbers of pages we should charge spc_random_page_cost
    1065             :      * apiece, while if nearly all the table's pages are being read, it's more
    1066             :      * appropriate to charge spc_seq_page_cost apiece.  The effect is
    1067             :      * nonlinear, too. For lack of a better idea, interpolate like this to
    1068             :      * determine the cost per page.
    1069             :      */
    1070      535218 :     if (pages_fetched >= 2.0)
    1071      113116 :         cost_per_page = spc_random_page_cost -
    1072      113116 :             (spc_random_page_cost - spc_seq_page_cost)
    1073      113116 :             * sqrt(pages_fetched / T);
    1074             :     else
    1075      422102 :         cost_per_page = spc_random_page_cost;
    1076             : 
    1077      535218 :     run_cost += pages_fetched * cost_per_page;
    1078             : 
    1079             :     /*
    1080             :      * Estimate CPU costs per tuple.
    1081             :      *
    1082             :      * Often the indexquals don't need to be rechecked at each tuple ... but
    1083             :      * not always, especially not if there are enough tuples involved that the
    1084             :      * bitmaps become lossy.  For the moment, just assume they will be
    1085             :      * rechecked always.  This means we charge the full freight for all the
    1086             :      * scan clauses.
    1087             :      */
    1088      535218 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1089             : 
    1090      535218 :     startup_cost += qpqual_cost.startup;
    1091      535218 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1092      535218 :     cpu_run_cost = cpu_per_tuple * tuples_fetched;
    1093             : 
    1094             :     /* Adjust costing for parallelism, if used. */
    1095      535218 :     if (path->parallel_workers > 0)
    1096             :     {
    1097        4190 :         double      parallel_divisor = get_parallel_divisor(path);
    1098             : 
    1099             :         /* The CPU cost is divided among all the workers. */
    1100        4190 :         cpu_run_cost /= parallel_divisor;
    1101             : 
    1102        4190 :         path->rows = clamp_row_est(path->rows / parallel_divisor);
    1103             :     }
    1104             : 
    1105             : 
    1106      535218 :     run_cost += cpu_run_cost;
    1107             : 
    1108             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1109      535218 :     startup_cost += path->pathtarget->cost.startup;
    1110      535218 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1111             : 
    1112      535218 :     path->disabled_nodes = enable_bitmapscan ? 0 : 1;
    1113      535218 :     path->startup_cost = startup_cost;
    1114      535218 :     path->total_cost = startup_cost + run_cost;
    1115      535218 : }
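To get a feel for the interpolation above, here is a standalone sketch using
the stock default page costs (random 4.0, sequential 1.0) and a hypothetical
10000-page table:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  random_cost = 4.0,
                seq_cost = 1.0,
                T = 10000.0;
        double  fetched[] = {100.0, 2500.0, 10000.0};

        for (int i = 0; i < 3; i++)
        {
            double  p = fetched[i];
            double  cost_per_page = (p >= 2.0)
                ? random_cost - (random_cost - seq_cost) * sqrt(p / T)
                : random_cost;

            printf("%5.0f pages: %.2f per page\n", p, cost_per_page);
        }
        return 0;               /* prints 3.70, 2.50 and 1.00 per page */
    }

The charge slides from near-random cost for sparse fetches down to exactly
the sequential cost when every page of the table is read.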
    1116             : 
    1117             : /*
    1118             :  * cost_bitmap_tree_node
    1119             :  *      Extract cost and selectivity from a bitmap tree node (index/and/or)
    1120             :  */
    1121             : void
    1122     1002220 : cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)
    1123             : {
    1124     1002220 :     if (IsA(path, IndexPath))
    1125             :     {
    1126      947418 :         *cost = ((IndexPath *) path)->indextotalcost;
    1127      947418 :         *selec = ((IndexPath *) path)->indexselectivity;
    1128             : 
    1129             :         /*
    1130             :          * Charge a small amount per retrieved tuple to reflect the costs of
    1131             :          * manipulating the bitmap.  This is mostly to make sure that a bitmap
    1132             :          * scan doesn't look to be the same cost as an indexscan to retrieve a
    1133             :          * single tuple.
    1134             :          */
    1135      947418 :         *cost += 0.1 * cpu_operator_cost * path->rows;
    1136             :     }
    1137       54802 :     else if (IsA(path, BitmapAndPath))
    1138             :     {
    1139       51570 :         *cost = path->total_cost;
    1140       51570 :         *selec = ((BitmapAndPath *) path)->bitmapselectivity;
    1141             :     }
    1142        3232 :     else if (IsA(path, BitmapOrPath))
    1143             :     {
    1144        3232 :         *cost = path->total_cost;
    1145        3232 :         *selec = ((BitmapOrPath *) path)->bitmapselectivity;
    1146             :     }
    1147             :     else
    1148             :     {
    1149           0 :         elog(ERROR, "unrecognized node type: %d", nodeTag(path));
    1150             :         *cost = *selec = 0;     /* keep compiler quiet */
    1151             :     }
    1152     1002220 : }
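For scale, here is the surcharge under the default cpu_operator_cost of
0.0025, in a standalone sketch with hypothetical row counts:

    #include <stdio.h>

    int
    main(void)
    {
        double  cpu_operator_cost = 0.0025; /* PostgreSQL's default setting */
        double  rows[] = {1.0, 1000.0, 1000000.0};

        /* 0.1 * cpu_operator_cost per retrieved tuple, as charged above */
        for (int i = 0; i < 3; i++)
            printf("%9.0f rows -> surcharge %g\n",
                   rows[i], 0.1 * cpu_operator_cost * rows[i]);
        return 0;               /* 0.00025, 0.25 and 250 */
    }

The charge is negligible for a single tuple yet still keeps a bitmap scan
from costing exactly the same as a plain index scan.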
    1153             : 
    1154             : /*
    1155             :  * cost_bitmap_and_node
    1156             :  *      Estimate the cost of a BitmapAnd node
    1157             :  *
    1158             :  * Note that this considers only the costs of index scanning and bitmap
    1159             :  * creation, not the eventual heap access.  In that sense the object isn't
    1160             :  * truly a Path, but it has enough path-like properties (costs in particular)
    1161             :  * to warrant treating it as one.  We don't bother to set the path rows field,
    1162             :  * however.
    1163             :  */
    1164             : void
    1165       51384 : cost_bitmap_and_node(BitmapAndPath *path, PlannerInfo *root)
    1166             : {
    1167             :     Cost        totalCost;
    1168             :     Selectivity selec;
    1169             :     ListCell   *l;
    1170             : 
    1171             :     /*
    1172             :      * We estimate AND selectivity on the assumption that the inputs are
    1173             :      * independent.  This is probably often wrong, but we don't have the info
    1174             :      * to do better.
    1175             :      *
    1176             :      * The runtime cost of the BitmapAnd itself is estimated at 100x
    1177             :      * cpu_operator_cost for each tbm_intersect needed.  Probably too small,
    1178             :      * definitely too simplistic?
    1179             :      */
    1180       51384 :     totalCost = 0.0;
    1181       51384 :     selec = 1.0;
    1182      154152 :     foreach(l, path->bitmapquals)
    1183             :     {
    1184      102768 :         Path       *subpath = (Path *) lfirst(l);
    1185             :         Cost        subCost;
    1186             :         Selectivity subselec;
    1187             : 
    1188      102768 :         cost_bitmap_tree_node(subpath, &subCost, &subselec);
    1189             : 
    1190      102768 :         selec *= subselec;
    1191             : 
    1192      102768 :         totalCost += subCost;
    1193      102768 :         if (l != list_head(path->bitmapquals))
    1194       51384 :             totalCost += 100.0 * cpu_operator_cost;
    1195             :     }
    1196       51384 :     path->bitmapselectivity = selec;
    1197       51384 :     path->path.rows = 0;     /* per above, not used */
    1198       51384 :     path->path.disabled_nodes = 0;
    1199       51384 :     path->path.startup_cost = totalCost;
    1200       51384 :     path->path.total_cost = totalCost;
    1201       51384 : }
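A worked instance of the independence assumption, in a standalone sketch with
hypothetical input selectivities and costs:

    #include <stdio.h>

    int
    main(void)
    {
        double  cpu_operator_cost = 0.0025;     /* default setting */
        double  subselec[] = {0.01, 0.05};      /* hypothetical inputs */
        double  subcost[] = {25.0, 80.0};       /* hypothetical input costs */
        double  selec = 1.0,
                total = 0.0;

        for (int i = 0; i < 2; i++)
        {
            selec *= subselec[i];               /* assume independence */
            total += subcost[i];
            if (i > 0)                          /* one tbm_intersect */
                total += 100.0 * cpu_operator_cost;
        }
        printf("selec = %g, cost = %g\n", selec, total);
        return 0;               /* selec = 0.0005, cost = 105.25 */
    }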
    1202             : 
    1203             : /*
    1204             :  * cost_bitmap_or_node
    1205             :  *      Estimate the cost of a BitmapOr node
    1206             :  *
    1207             :  * See comments for cost_bitmap_and_node.
    1208             :  */
    1209             : void
    1210         976 : cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)
    1211             : {
    1212             :     Cost        totalCost;
    1213             :     Selectivity selec;
    1214             :     ListCell   *l;
    1215             : 
    1216             :     /*
    1217             :      * We estimate OR selectivity on the assumption that the inputs are
    1218             :      * non-overlapping, since that's often the case in "x IN (list)" type
    1219             :      * situations.  Of course, we clamp to 1.0 at the end.
    1220             :      *
    1221             :      * The runtime cost of the BitmapOr itself is estimated at 100x
    1222             :      * cpu_operator_cost for each tbm_union needed.  Probably too small,
    1223             :      * definitely too simplistic?  We are aware that the tbm_unions are
    1224             :      * optimized out when the inputs are BitmapIndexScans.
    1225             :      */
    1226         976 :     totalCost = 0.0;
    1227         976 :     selec = 0.0;
    1228        2736 :     foreach(l, path->bitmapquals)
    1229             :     {
    1230        1760 :         Path       *subpath = (Path *) lfirst(l);
    1231             :         Cost        subCost;
    1232             :         Selectivity subselec;
    1233             : 
    1234        1760 :         cost_bitmap_tree_node(subpath, &subCost, &subselec);
    1235             : 
    1236        1760 :         selec += subselec;
    1237             : 
    1238        1760 :         totalCost += subCost;
    1239        1760 :         if (l != list_head(path->bitmapquals) &&
    1240         784 :             !IsA(subpath, IndexPath))
    1241           6 :             totalCost += 100.0 * cpu_operator_cost;
    1242             :     }
    1243         976 :     path->bitmapselectivity = Min(selec, 1.0);
     1244         976 :     path->path.rows = 0;     /* per above, not used */
     1245         976 :     path->path.disabled_nodes = 0;
     1246         976 :     path->path.startup_cost = totalCost;
     1247         976 :     path->path.total_cost = totalCost;
    1247         976 : }
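The same sketch for the OR case shows why the final clamp matters when the
non-overlap assumption fails (hypothetical selectivities):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  subselec[] = {0.6, 0.7};    /* hypothetical, overlapping */
        double  selec = 0.0;

        for (int i = 0; i < 2; i++)
            selec += subselec[i];           /* assume non-overlap */

        printf("%g\n", fmin(selec, 1.0));   /* 1.3 clamped to 1.0 */
        return 0;
    }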
    1248             : 
    1249             : /*
    1250             :  * cost_tidscan
    1251             :  *    Determines and returns the cost of scanning a relation using TIDs.
    1252             :  *
    1253             :  * 'baserel' is the relation to be scanned
    1254             :  * 'tidquals' is the list of TID-checkable quals
    1255             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1256             :  */
    1257             : void
    1258         852 : cost_tidscan(Path *path, PlannerInfo *root,
    1259             :              RelOptInfo *baserel, List *tidquals, ParamPathInfo *param_info)
    1260             : {
    1261         852 :     Cost        startup_cost = 0;
    1262         852 :     Cost        run_cost = 0;
    1263             :     QualCost    qpqual_cost;
    1264             :     Cost        cpu_per_tuple;
    1265             :     QualCost    tid_qual_cost;
    1266             :     double      ntuples;
    1267             :     ListCell   *l;
    1268             :     double      spc_random_page_cost;
    1269             : 
    1270             :     /* Should only be applied to base relations */
    1271             :     Assert(baserel->relid > 0);
    1272             :     Assert(baserel->rtekind == RTE_RELATION);
    1273             :     Assert(tidquals != NIL);
    1274             : 
    1275             :     /* Mark the path with the correct row estimate */
    1276         852 :     if (param_info)
    1277         144 :         path->rows = param_info->ppi_rows;
    1278             :     else
    1279         708 :         path->rows = baserel->rows;
    1280             : 
    1281             :     /* Count how many tuples we expect to retrieve */
    1282         852 :     ntuples = 0;
    1283        1728 :     foreach(l, tidquals)
    1284             :     {
    1285         876 :         RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    1286         876 :         Expr       *qual = rinfo->clause;
    1287             : 
    1288             :         /*
    1289             :          * We must use a TID scan for CurrentOfExpr; in any other case, we
    1290             :          * should be generating a TID scan only if enable_tidscan=true. Also,
    1291             :          * if CurrentOfExpr is the qual, there should be only one.
    1292             :          */
    1293             :         Assert(enable_tidscan || IsA(qual, CurrentOfExpr));
    1294             :         Assert(list_length(tidquals) == 1 || !IsA(qual, CurrentOfExpr));
    1295             : 
    1296         876 :         if (IsA(qual, ScalarArrayOpExpr))
    1297             :         {
    1298             :             /* Each element of the array yields 1 tuple */
    1299          50 :             ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) qual;
    1300          50 :             Node       *arraynode = (Node *) lsecond(saop->args);
    1301             : 
    1302          50 :             ntuples += estimate_array_length(root, arraynode);
    1303             :         }
    1304         826 :         else if (IsA(qual, CurrentOfExpr))
    1305             :         {
    1306             :             /* CURRENT OF yields 1 tuple */
    1307         404 :             ntuples++;
    1308             :         }
    1309             :         else
    1310             :         {
    1311             :             /* It's just CTID = something, count 1 tuple */
    1312         422 :             ntuples++;
    1313             :         }
    1314             :     }
    1315             : 
    1316             :     /*
    1317             :      * The TID qual expressions will be computed once, any other baserestrict
    1318             :      * quals once per retrieved tuple.
    1319             :      */
    1320         852 :     cost_qual_eval(&tid_qual_cost, tidquals, root);
    1321             : 
    1322             :     /* fetch estimated page cost for tablespace containing table */
    1323         852 :     get_tablespace_page_costs(baserel->reltablespace,
    1324             :                               &spc_random_page_cost,
    1325             :                               NULL);
    1326             : 
    1327             :     /* disk costs --- assume each tuple on a different page */
    1328         852 :     run_cost += spc_random_page_cost * ntuples;
    1329             : 
    1330             :     /* Add scanning CPU costs */
    1331         852 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1332             : 
    1333             :     /* XXX currently we assume TID quals are a subset of qpquals */
    1334         852 :     startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    1335         852 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple -
    1336         852 :         tid_qual_cost.per_tuple;
    1337         852 :     run_cost += cpu_per_tuple * ntuples;
    1338             : 
    1339             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1340         852 :     startup_cost += path->pathtarget->cost.startup;
    1341         852 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1342             : 
    1343             :     /*
    1344             :      * There are assertions above verifying that we only reach this function
    1345             :      * either when enable_tidscan=true or when the TID scan is the only legal
    1346             :      * path, so it's safe to set disabled_nodes to zero here.
    1347             :      */
    1348         852 :     path->disabled_nodes = 0;
    1349         852 :     path->startup_cost = startup_cost;
    1350         852 :     path->total_cost = startup_cost + run_cost;
    1351         852 : }
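To make the tuple-counting rule concrete, here is a standalone sketch in
which the qual kinds and the 3-element array are hypothetical stand-ins for
the RestrictInfo list:

    #include <stdio.h>

    typedef enum {TID_EQ, TID_ANY_ARRAY, TID_CURRENT_OF} TidQualKind;

    typedef struct
    {
        TidQualKind kind;
        int         array_len;  /* used only for TID_ANY_ARRAY */
    } TidQual;

    int
    main(void)
    {
        /* hypothetical: "ctid = X" plus "ctid = ANY(3-element array)" */
        TidQual quals[] = {{TID_EQ, 0}, {TID_ANY_ARRAY, 3}};
        double  spc_random_page_cost = 4.0; /* stock default */
        double  ntuples = 0.0;

        for (int i = 0; i < 2; i++)
            ntuples += (quals[i].kind == TID_ANY_ARRAY) ?
                quals[i].array_len : 1;

        /* each tuple is assumed to sit on its own page */
        printf("ntuples = %.0f, disk cost = %.1f\n",
               ntuples, spc_random_page_cost * ntuples);
        return 0;               /* ntuples = 4, disk cost = 16.0 */
    }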
    1352             : 
    1353             : /*
    1354             :  * cost_tidrangescan
    1355             :  *    Determines and sets the costs of scanning a relation using a range of
    1356             :  *    TIDs for 'path'
    1357             :  *
    1358             :  * 'baserel' is the relation to be scanned
    1359             :  * 'tidrangequals' is the list of TID-checkable range quals
    1360             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1361             :  */
    1362             : void
    1363        1940 : cost_tidrangescan(Path *path, PlannerInfo *root,
    1364             :                   RelOptInfo *baserel, List *tidrangequals,
    1365             :                   ParamPathInfo *param_info)
    1366             : {
    1367             :     Selectivity selectivity;
    1368             :     double      pages;
    1369        1940 :     Cost        startup_cost = 0;
    1370        1940 :     Cost        run_cost = 0;
    1371             :     QualCost    qpqual_cost;
    1372             :     Cost        cpu_per_tuple;
    1373             :     QualCost    tid_qual_cost;
    1374             :     double      ntuples;
    1375             :     double      nseqpages;
    1376             :     double      spc_random_page_cost;
    1377             :     double      spc_seq_page_cost;
    1378             : 
    1379             :     /* Should only be applied to base relations */
    1380             :     Assert(baserel->relid > 0);
    1381             :     Assert(baserel->rtekind == RTE_RELATION);
    1382             : 
    1383             :     /* Mark the path with the correct row estimate */
    1384        1940 :     if (param_info)
    1385           0 :         path->rows = param_info->ppi_rows;
    1386             :     else
    1387        1940 :         path->rows = baserel->rows;
    1388             : 
    1389             :     /* Count how many tuples and pages we expect to scan */
    1390        1940 :     selectivity = clauselist_selectivity(root, tidrangequals, baserel->relid,
    1391             :                                          JOIN_INNER, NULL);
    1392        1940 :     pages = ceil(selectivity * baserel->pages);
    1393             : 
    1394        1940 :     if (pages <= 0.0)
    1395          42 :         pages = 1.0;
    1396             : 
    1397             :     /*
    1398             :      * The first page in a range requires a random seek, but each subsequent
    1399             :      * page is just a normal sequential page read. NOTE: it's desirable for
    1400             :      * TID Range Scans to cost more than the equivalent Sequential Scans,
    1401             :      * because Seq Scans have some performance advantages such as scan
     1402             :  * synchronization and parallelizability, and we'd prefer a Seq Scan to
     1403             :  * be picked unless a TID Range Scan really is better.
    1404             :      */
    1405        1940 :     ntuples = selectivity * baserel->tuples;
    1406        1940 :     nseqpages = pages - 1.0;
    1407             : 
    1408             :     /*
    1409             :      * The TID qual expressions will be computed once, any other baserestrict
    1410             :      * quals once per retrieved tuple.
    1411             :      */
    1412        1940 :     cost_qual_eval(&tid_qual_cost, tidrangequals, root);
    1413             : 
    1414             :     /* fetch estimated page cost for tablespace containing table */
    1415        1940 :     get_tablespace_page_costs(baserel->reltablespace,
    1416             :                               &spc_random_page_cost,
    1417             :                               &spc_seq_page_cost);
    1418             : 
    1419             :     /* disk costs; 1 random page and the remainder as seq pages */
    1420        1940 :     run_cost += spc_random_page_cost + spc_seq_page_cost * nseqpages;
    1421             : 
    1422             :     /* Add scanning CPU costs */
    1423        1940 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1424             : 
    1425             :     /*
    1426             :      * XXX currently we assume TID quals are a subset of qpquals at this
    1427             :      * point; they will be removed (if possible) when we create the plan, so
    1428             :      * we subtract their cost from the total qpqual cost.  (If the TID quals
    1429             :      * can't be removed, this is a mistake and we're going to underestimate
    1430             :      * the CPU cost a bit.)
    1431             :      */
    1432        1940 :     startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    1433        1940 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple -
    1434        1940 :         tid_qual_cost.per_tuple;
    1435        1940 :     run_cost += cpu_per_tuple * ntuples;
    1436             : 
    1437             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1438        1940 :     startup_cost += path->pathtarget->cost.startup;
    1439        1940 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1440             : 
    1441             :     /* we should not generate this path type when enable_tidscan=false */
    1442             :     Assert(enable_tidscan);
    1443        1940 :     path->disabled_nodes = 0;
    1444        1940 :     path->startup_cost = startup_cost;
    1445        1940 :     path->total_cost = startup_cost + run_cost;
    1446        1940 : }
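A worked disk-cost example for a TID range, as a standalone sketch with a
hypothetical 500-page table, 10% selectivity, and the stock default page
costs:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  selectivity = 0.1,
                rel_pages = 500.0;
        double  spc_random_page_cost = 4.0,
                spc_seq_page_cost = 1.0;
        double  pages = ceil(selectivity * rel_pages);

        if (pages <= 0.0)
            pages = 1.0;

        /* one random seek to find the range, the rest read sequentially */
        printf("disk cost = %.1f\n",
               spc_random_page_cost + spc_seq_page_cost * (pages - 1.0));
        return 0;               /* 4.0 + 49 * 1.0 = 53.0 for 50 pages */
    }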
    1447             : 
    1448             : /*
    1449             :  * cost_subqueryscan
    1450             :  *    Determines and returns the cost of scanning a subquery RTE.
    1451             :  *
    1452             :  * 'baserel' is the relation to be scanned
    1453             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1454             :  * 'trivial_pathtarget' is true if the pathtarget is believed to be trivial.
    1455             :  */
    1456             : void
    1457       48672 : cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root,
    1458             :                   RelOptInfo *baserel, ParamPathInfo *param_info,
    1459             :                   bool trivial_pathtarget)
    1460             : {
    1461             :     Cost        startup_cost;
    1462             :     Cost        run_cost;
    1463             :     List       *qpquals;
    1464             :     QualCost    qpqual_cost;
    1465             :     Cost        cpu_per_tuple;
    1466             : 
    1467             :     /* Should only be applied to base relations that are subqueries */
    1468             :     Assert(baserel->relid > 0);
    1469             :     Assert(baserel->rtekind == RTE_SUBQUERY);
    1470             : 
    1471             :     /*
    1472             :      * We compute the rowcount estimate as the subplan's estimate times the
    1473             :      * selectivity of relevant restriction clauses.  In simple cases this will
    1474             :      * come out the same as baserel->rows; but when dealing with parallelized
    1475             :      * paths we must do it like this to get the right answer.
    1476             :      */
    1477       48672 :     if (param_info)
    1478         606 :         qpquals = list_concat_copy(param_info->ppi_clauses,
    1479         606 :                                    baserel->baserestrictinfo);
    1480             :     else
    1481       48066 :         qpquals = baserel->baserestrictinfo;
    1482             : 
    1483       48672 :     path->path.rows = clamp_row_est(path->subpath->rows *
    1484       48672 :                                     clauselist_selectivity(root,
    1485             :                                                            qpquals,
    1486             :                                                            0,
    1487             :                                                            JOIN_INNER,
    1488             :                                                            NULL));
    1489             : 
    1490             :     /*
    1491             :      * Cost of path is cost of evaluating the subplan, plus cost of evaluating
    1492             :      * any restriction clauses and tlist that will be attached to the
    1493             :      * SubqueryScan node, plus cpu_tuple_cost to account for selection and
    1494             :      * projection overhead.
    1495             :      */
    1496       48672 :     path->path.disabled_nodes = path->subpath->disabled_nodes;
    1497       48672 :     path->path.startup_cost = path->subpath->startup_cost;
    1498       48672 :     path->path.total_cost = path->subpath->total_cost;
    1499             : 
    1500             :     /*
    1501             :      * However, if there are no relevant restriction clauses and the
    1502             :      * pathtarget is trivial, then we expect that setrefs.c will optimize away
    1503             :      * the SubqueryScan plan node altogether, so we should just make its cost
    1504             :      * and rowcount equal to the input path's.
    1505             :      *
    1506             :      * Note: there are some edge cases where createplan.c will apply a
    1507             :      * different targetlist to the SubqueryScan node, thus falsifying our
    1508             :      * current estimate of whether the target is trivial, and making the cost
    1509             :      * estimate (though not the rowcount) wrong.  It does not seem worth the
    1510             :      * extra complication to try to account for that exactly, especially since
    1511             :      * that behavior falsifies other cost estimates as well.
    1512             :      */
    1513       48672 :     if (qpquals == NIL && trivial_pathtarget)
    1514       24682 :         return;
    1515             : 
    1516       23990 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1517             : 
    1518       23990 :     startup_cost = qpqual_cost.startup;
    1519       23990 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1520       23990 :     run_cost = cpu_per_tuple * path->subpath->rows;
    1521             : 
    1522             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1523       23990 :     startup_cost += path->path.pathtarget->cost.startup;
    1524       23990 :     run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
    1525             : 
    1526       23990 :     path->path.startup_cost += startup_cost;
    1527       23990 :     path->path.total_cost += startup_cost + run_cost;
    1528             : }
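The rowcount rule can be checked by hand with a simplified stand-in for
clamp_row_est (the subplan estimate and selectivity are hypothetical):

    #include <math.h>
    #include <stdio.h>

    /* simplified clamp_row_est(): round, and keep at least one row */
    static double
    clamp_row_est(double nrows)
    {
        return (nrows <= 1.0) ? 1.0 : rint(nrows);
    }

    int
    main(void)
    {
        double  subpath_rows = 1000.0;  /* hypothetical per-worker estimate */
        double  selectivity = 0.2;      /* hypothetical qual selectivity */

        printf("%.0f rows\n", clamp_row_est(subpath_rows * selectivity));
        return 0;               /* 200 rows, regardless of baserel->rows */
    }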
    1529             : 
    1530             : /*
    1531             :  * cost_functionscan
    1532             :  *    Determines and returns the cost of scanning a function RTE.
    1533             :  *
    1534             :  * 'baserel' is the relation to be scanned
    1535             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1536             :  */
    1537             : void
    1538       51610 : cost_functionscan(Path *path, PlannerInfo *root,
    1539             :                   RelOptInfo *baserel, ParamPathInfo *param_info)
    1540             : {
    1541       51610 :     Cost        startup_cost = 0;
    1542       51610 :     Cost        run_cost = 0;
    1543             :     QualCost    qpqual_cost;
    1544             :     Cost        cpu_per_tuple;
    1545             :     RangeTblEntry *rte;
    1546             :     QualCost    exprcost;
    1547             : 
    1548             :     /* Should only be applied to base relations that are functions */
    1549             :     Assert(baserel->relid > 0);
    1550       51610 :     rte = planner_rt_fetch(baserel->relid, root);
    1551             :     Assert(rte->rtekind == RTE_FUNCTION);
    1552             : 
    1553             :     /* Mark the path with the correct row estimate */
    1554       51610 :     if (param_info)
    1555        8294 :         path->rows = param_info->ppi_rows;
    1556             :     else
    1557       43316 :         path->rows = baserel->rows;
    1558             : 
    1559             :     /*
    1560             :      * Estimate costs of executing the function expression(s).
    1561             :      *
    1562             :      * Currently, nodeFunctionscan.c always executes the functions to
    1563             :      * completion before returning any rows, and caches the results in a
    1564             :      * tuplestore.  So the function eval cost is all startup cost, and per-row
    1565             :      * costs are minimal.
    1566             :      *
    1567             :      * XXX in principle we ought to charge tuplestore spill costs if the
    1568             :      * number of rows is large.  However, given how phony our rowcount
    1569             :      * estimates for functions tend to be, there's not a lot of point in that
    1570             :      * refinement right now.
    1571             :      */
    1572       51610 :     cost_qual_eval_node(&exprcost, (Node *) rte->functions, root);
    1573             : 
    1574       51610 :     startup_cost += exprcost.startup + exprcost.per_tuple;
    1575             : 
    1576             :     /* Add scanning CPU costs */
    1577       51610 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1578             : 
    1579       51610 :     startup_cost += qpqual_cost.startup;
    1580       51610 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1581       51610 :     run_cost += cpu_per_tuple * baserel->tuples;
    1582             : 
    1583             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1584       51610 :     startup_cost += path->pathtarget->cost.startup;
    1585       51610 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1586             : 
    1587       51610 :     path->disabled_nodes = 0;
    1588       51610 :     path->startup_cost = startup_cost;
    1589       51610 :     path->total_cost = startup_cost + run_cost;
    1590       51610 : }
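Because the functions run to completion into a tuplestore, their evaluation
cost lands entirely in startup_cost; a standalone sketch with hypothetical
numbers:

    #include <stdio.h>

    int
    main(void)
    {
        double  func_eval_cost = 100.0; /* hypothetical expression cost */
        double  tuples = 1000.0;        /* hypothetical rowcount */
        double  cpu_tuple_cost = 0.01;  /* default setting */

        /* the whole eval cost is paid before the first row comes out */
        double  startup_cost = func_eval_cost;
        double  run_cost = cpu_tuple_cost * tuples;

        printf("startup = %.1f, total = %.1f\n",
               startup_cost, startup_cost + run_cost);
        return 0;               /* startup = 100.0, total = 110.0 */
    }

One consequence: a LIMIT above a function scan saves little, because the
startup charge is paid before any row is returned.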
    1591             : 
    1592             : /*
    1593             :  * cost_tablefuncscan
    1594             :  *    Determines and returns the cost of scanning a table function.
    1595             :  *
    1596             :  * 'baserel' is the relation to be scanned
    1597             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1598             :  */
    1599             : void
    1600         626 : cost_tablefuncscan(Path *path, PlannerInfo *root,
    1601             :                    RelOptInfo *baserel, ParamPathInfo *param_info)
    1602             : {
    1603         626 :     Cost        startup_cost = 0;
    1604         626 :     Cost        run_cost = 0;
    1605             :     QualCost    qpqual_cost;
    1606             :     Cost        cpu_per_tuple;
    1607             :     RangeTblEntry *rte;
    1608             :     QualCost    exprcost;
    1609             : 
    1610             :     /* Should only be applied to base relations that are functions */
    1611             :     Assert(baserel->relid > 0);
    1612         626 :     rte = planner_rt_fetch(baserel->relid, root);
    1613             :     Assert(rte->rtekind == RTE_TABLEFUNC);
    1614             : 
    1615             :     /* Mark the path with the correct row estimate */
    1616         626 :     if (param_info)
    1617         234 :         path->rows = param_info->ppi_rows;
    1618             :     else
    1619         392 :         path->rows = baserel->rows;
    1620             : 
    1621             :     /*
    1622             :      * Estimate costs of executing the table func expression(s).
    1623             :      *
    1624             :      * XXX in principle we ought to charge tuplestore spill costs if the
    1625             :      * number of rows is large.  However, given how phony our rowcount
    1626             :      * estimates for tablefuncs tend to be, there's not a lot of point in that
    1627             :      * refinement right now.
    1628             :      */
    1629         626 :     cost_qual_eval_node(&exprcost, (Node *) rte->tablefunc, root);
    1630             : 
    1631         626 :     startup_cost += exprcost.startup + exprcost.per_tuple;
    1632             : 
    1633             :     /* Add scanning CPU costs */
    1634         626 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1635             : 
    1636         626 :     startup_cost += qpqual_cost.startup;
    1637         626 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1638         626 :     run_cost += cpu_per_tuple * baserel->tuples;
    1639             : 
    1640             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1641         626 :     startup_cost += path->pathtarget->cost.startup;
    1642         626 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1643             : 
    1644         626 :     path->disabled_nodes = 0;
    1645         626 :     path->startup_cost = startup_cost;
    1646         626 :     path->total_cost = startup_cost + run_cost;
    1647         626 : }
    1648             : 
    1649             : /*
    1650             :  * cost_valuesscan
    1651             :  *    Determines and returns the cost of scanning a VALUES RTE.
    1652             :  *
    1653             :  * 'baserel' is the relation to be scanned
    1654             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1655             :  */
    1656             : void
    1657        8246 : cost_valuesscan(Path *path, PlannerInfo *root,
    1658             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
    1659             : {
    1660        8246 :     Cost        startup_cost = 0;
    1661        8246 :     Cost        run_cost = 0;
    1662             :     QualCost    qpqual_cost;
    1663             :     Cost        cpu_per_tuple;
    1664             : 
    1665             :     /* Should only be applied to base relations that are values lists */
    1666             :     Assert(baserel->relid > 0);
    1667             :     Assert(baserel->rtekind == RTE_VALUES);
    1668             : 
    1669             :     /* Mark the path with the correct row estimate */
    1670        8246 :     if (param_info)
    1671          66 :         path->rows = param_info->ppi_rows;
    1672             :     else
    1673        8180 :         path->rows = baserel->rows;
    1674             : 
    1675             :     /*
    1676             :      * For now, estimate list evaluation cost at one operator eval per list
    1677             :      * (probably pretty bogus, but is it worth being smarter?)
    1678             :      */
    1679        8246 :     cpu_per_tuple = cpu_operator_cost;
    1680             : 
    1681             :     /* Add scanning CPU costs */
    1682        8246 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1683             : 
    1684        8246 :     startup_cost += qpqual_cost.startup;
    1685        8246 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1686        8246 :     run_cost += cpu_per_tuple * baserel->tuples;
    1687             : 
    1688             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1689        8246 :     startup_cost += path->pathtarget->cost.startup;
    1690        8246 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1691             : 
    1692        8246 :     path->disabled_nodes = 0;
    1693        8246 :     path->startup_cost = startup_cost;
    1694        8246 :     path->total_cost = startup_cost + run_cost;
    1695        8246 : }
    1696             : 
    1697             : /*
    1698             :  * cost_ctescan
    1699             :  *    Determines and returns the cost of scanning a CTE RTE.
    1700             :  *
    1701             :  * Note: this is used for both self-reference and regular CTEs; the
    1702             :  * possible cost differences are below the threshold of what we could
    1703             :  * estimate accurately anyway.  Note that the costs of evaluating the
    1704             :  * referenced CTE query are added into the final plan as initplan costs,
    1705             :  * and should NOT be counted here.
    1706             :  */
    1707             : void
    1708        5094 : cost_ctescan(Path *path, PlannerInfo *root,
    1709             :              RelOptInfo *baserel, ParamPathInfo *param_info)
    1710             : {
    1711        5094 :     Cost        startup_cost = 0;
    1712        5094 :     Cost        run_cost = 0;
    1713             :     QualCost    qpqual_cost;
    1714             :     Cost        cpu_per_tuple;
    1715             : 
    1716             :     /* Should only be applied to base relations that are CTEs */
    1717             :     Assert(baserel->relid > 0);
    1718             :     Assert(baserel->rtekind == RTE_CTE);
    1719             : 
    1720             :     /* Mark the path with the correct row estimate */
    1721        5094 :     if (param_info)
    1722           0 :         path->rows = param_info->ppi_rows;
    1723             :     else
    1724        5094 :         path->rows = baserel->rows;
    1725             : 
    1726             :     /* Charge one CPU tuple cost per row for tuplestore manipulation */
    1727        5094 :     cpu_per_tuple = cpu_tuple_cost;
    1728             : 
    1729             :     /* Add scanning CPU costs */
    1730        5094 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1731             : 
    1732        5094 :     startup_cost += qpqual_cost.startup;
    1733        5094 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1734        5094 :     run_cost += cpu_per_tuple * baserel->tuples;
    1735             : 
    1736             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1737        5094 :     startup_cost += path->pathtarget->cost.startup;
    1738        5094 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1739             : 
    1740        5094 :     path->disabled_nodes = 0;
    1741        5094 :     path->startup_cost = startup_cost;
    1742        5094 :     path->total_cost = startup_cost + run_cost;
    1743        5094 : }
    1744             : 
    1745             : /*
    1746             :  * cost_namedtuplestorescan
    1747             :  *    Determines and returns the cost of scanning a named tuplestore.
    1748             :  */
    1749             : void
    1750         466 : cost_namedtuplestorescan(Path *path, PlannerInfo *root,
    1751             :                          RelOptInfo *baserel, ParamPathInfo *param_info)
    1752             : {
    1753         466 :     Cost        startup_cost = 0;
    1754         466 :     Cost        run_cost = 0;
    1755             :     QualCost    qpqual_cost;
    1756             :     Cost        cpu_per_tuple;
    1757             : 
    1758             :     /* Should only be applied to base relations that are Tuplestores */
    1759             :     Assert(baserel->relid > 0);
    1760             :     Assert(baserel->rtekind == RTE_NAMEDTUPLESTORE);
    1761             : 
    1762             :     /* Mark the path with the correct row estimate */
    1763         466 :     if (param_info)
    1764           0 :         path->rows = param_info->ppi_rows;
    1765             :     else
    1766         466 :         path->rows = baserel->rows;
    1767             : 
    1768             :     /* Charge one CPU tuple cost per row for tuplestore manipulation */
    1769         466 :     cpu_per_tuple = cpu_tuple_cost;
    1770             : 
    1771             :     /* Add scanning CPU costs */
    1772         466 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1773             : 
    1774         466 :     startup_cost += qpqual_cost.startup;
    1775         466 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1776         466 :     run_cost += cpu_per_tuple * baserel->tuples;
    1777             : 
    1778         466 :     path->disabled_nodes = 0;
    1779         466 :     path->startup_cost = startup_cost;
    1780         466 :     path->total_cost = startup_cost + run_cost;
    1781         466 : }
    1782             : 
    1783             : /*
    1784             :  * cost_resultscan
    1785             :  *    Determines and returns the cost of scanning an RTE_RESULT relation.
    1786             :  */
    1787             : void
    1788        4280 : cost_resultscan(Path *path, PlannerInfo *root,
    1789             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
    1790             : {
    1791        4280 :     Cost        startup_cost = 0;
    1792        4280 :     Cost        run_cost = 0;
    1793             :     QualCost    qpqual_cost;
    1794             :     Cost        cpu_per_tuple;
    1795             : 
    1796             :     /* Should only be applied to RTE_RESULT base relations */
    1797             :     Assert(baserel->relid > 0);
    1798             :     Assert(baserel->rtekind == RTE_RESULT);
    1799             : 
    1800             :     /* Mark the path with the correct row estimate */
    1801        4280 :     if (param_info)
    1802         156 :         path->rows = param_info->ppi_rows;
    1803             :     else
    1804        4124 :         path->rows = baserel->rows;
    1805             : 
    1806             :     /* We charge qual cost plus cpu_tuple_cost */
    1807        4280 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1808             : 
    1809        4280 :     startup_cost += qpqual_cost.startup;
    1810        4280 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1811        4280 :     run_cost += cpu_per_tuple * baserel->tuples;
    1812             : 
    1813        4280 :     path->disabled_nodes = 0;
    1814        4280 :     path->startup_cost = startup_cost;
    1815        4280 :     path->total_cost = startup_cost + run_cost;
    1816        4280 : }
    1817             : 
    1818             : /*
    1819             :  * cost_recursive_union
    1820             :  *    Determines and returns the cost of performing a recursive union,
    1821             :  *    and also the estimated output size.
    1822             :  *
    1823             :  * We are given Paths for the nonrecursive and recursive terms.
    1824             :  */
    1825             : void
    1826        1000 : cost_recursive_union(Path *runion, Path *nrterm, Path *rterm)
    1827             : {
    1828             :     Cost        startup_cost;
    1829             :     Cost        total_cost;
    1830             :     double      total_rows;
    1831             : 
    1832             :     /* We probably have decent estimates for the non-recursive term */
    1833        1000 :     startup_cost = nrterm->startup_cost;
    1834        1000 :     total_cost = nrterm->total_cost;
    1835        1000 :     total_rows = nrterm->rows;
    1836             : 
    1837             :     /*
    1838             :      * We arbitrarily assume that about 10 recursive iterations will be
    1839             :      * needed, and that we've managed to get a good fix on the cost and output
    1840             :      * size of each one of them.  These are mighty shaky assumptions but it's
    1841             :      * hard to see how to do better.
    1842             :      */
    1843        1000 :     total_cost += 10 * rterm->total_cost;
    1844        1000 :     total_rows += 10 * rterm->rows;
    1845             : 
    1846             :     /*
    1847             :      * Also charge cpu_tuple_cost per row to account for the costs of
    1848             :      * manipulating the tuplestores.  (We don't worry about possible
    1849             :      * spill-to-disk costs.)
    1850             :      */
    1851        1000 :     total_cost += cpu_tuple_cost * total_rows;
    1852             : 
    1853        1000 :     runion->disabled_nodes = nrterm->disabled_nodes + rterm->disabled_nodes;
    1854        1000 :     runion->startup_cost = startup_cost;
    1855        1000 :     runion->total_cost = total_cost;
    1856        1000 :     runion->rows = total_rows;
    1857        1000 :     runion->pathtarget->width = Max(nrterm->pathtarget->width,
    1858             :                                     rterm->pathtarget->width);
    1859        1000 : }
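A quick arithmetic check of the ten-iteration assumption, as a standalone
sketch with hypothetical term estimates:

    #include <stdio.h>

    int
    main(void)
    {
        double  cpu_tuple_cost = 0.01;  /* default setting */
        double  nrterm_cost = 100.0,
                nrterm_rows = 100.0;    /* hypothetical */
        double  rterm_cost = 20.0,
                rterm_rows = 50.0;      /* hypothetical */

        /* assume about 10 recursive iterations, as above */
        double  total_rows = nrterm_rows + 10 * rterm_rows;
        double  total_cost = nrterm_cost + 10 * rterm_cost +
            cpu_tuple_cost * total_rows;    /* tuplestore handling */

        printf("rows = %.0f, cost = %.0f\n", total_rows, total_cost);
        return 0;               /* rows = 600, cost = 306 */
    }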
    1860             : 
    1861             : /*
    1862             :  * cost_tuplesort
    1863             :  *    Determines and returns the cost of sorting a relation using tuplesort,
    1864             :  *    not including the cost of reading the input data.
    1865             :  *
    1866             :  * If the total volume of data to sort is less than sort_mem, we will do
    1867             :  * an in-memory sort, which requires no I/O and about t*log2(t) tuple
    1868             :  * comparisons for t tuples.
    1869             :  *
    1870             :  * If the total volume exceeds sort_mem, we switch to a tape-style merge
    1871             :  * algorithm.  There will still be about t*log2(t) tuple comparisons in
    1872             :  * total, but we will also need to write and read each tuple once per
    1873             :  * merge pass.  We expect about ceil(logM(r)) merge passes where r is the
    1874             :  * number of initial runs formed and M is the merge order used by tuplesort.c.
    1875             :  * Since the average initial run should be about sort_mem, we have
    1876             :  *      disk traffic = 2 * relsize * ceil(logM(p / sort_mem))
    1877             :  *      cpu = comparison_cost * t * log2(t)
    1878             :  *
    1879             :  * If the sort is bounded (i.e., only the first k result tuples are needed)
    1880             :  * and k tuples can fit into sort_mem, we use a heap method that keeps only
    1881             :  * k tuples in the heap; this will require about t*log2(k) tuple comparisons.
    1882             :  *
    1883             :  * The disk traffic is assumed to be 3/4ths sequential and 1/4th random
    1884             :  * accesses (XXX can't we refine that guess?)
    1885             :  *
    1886             :  * By default, we charge two operator evals per tuple comparison, which should
    1887             :  * be in the right ballpark in most cases.  The caller can tweak this by
    1888             :  * specifying nonzero comparison_cost; typically that's used for any extra
    1889             :  * work that has to be done to prepare the inputs to the comparison operators.
    1890             :  *
    1891             :  * 'tuples' is the number of tuples in the relation
    1892             :  * 'width' is the average tuple width in bytes
    1893             :  * 'comparison_cost' is the extra cost per comparison, if any
    1894             :  * 'sort_mem' is the number of kilobytes of work memory allowed for the sort
    1895             :  * 'limit_tuples' is the bound on the number of output tuples; -1 if no bound
    1896             :  */
    1897             : static void
    1898     1753248 : cost_tuplesort(Cost *startup_cost, Cost *run_cost,
    1899             :                double tuples, int width,
    1900             :                Cost comparison_cost, int sort_mem,
    1901             :                double limit_tuples)
    1902             : {
    1903     1753248 :     double      input_bytes = relation_byte_size(tuples, width);
    1904             :     double      output_bytes;
    1905             :     double      output_tuples;
    1906     1753248 :     int64       sort_mem_bytes = sort_mem * (int64) 1024;
    1907             : 
    1908             :     /*
    1909             :      * We want to be sure the cost of a sort is never estimated as zero, even
    1910             :      * if passed-in tuple count is zero.  Besides, mustn't do log(0)...
    1911             :      */
    1912     1753248 :     if (tuples < 2.0)
    1913      516000 :         tuples = 2.0;
    1914             : 
    1915             :     /* Include the default cost-per-comparison */
    1916     1753248 :     comparison_cost += 2.0 * cpu_operator_cost;
    1917             : 
    1918             :     /* Do we have a useful LIMIT? */
    1919     1753248 :     if (limit_tuples > 0 && limit_tuples < tuples)
    1920             :     {
    1921        1846 :         output_tuples = limit_tuples;
    1922        1846 :         output_bytes = relation_byte_size(output_tuples, width);
    1923             :     }
    1924             :     else
    1925             :     {
    1926     1751402 :         output_tuples = tuples;
    1927     1751402 :         output_bytes = input_bytes;
    1928             :     }
    1929             : 
    1930     1753248 :     if (output_bytes > sort_mem_bytes)
    1931             :     {
    1932             :         /*
    1933             :          * We'll have to use a disk-based sort of all the tuples
    1934             :          */
    1935       20224 :         double      npages = ceil(input_bytes / BLCKSZ);
    1936       20224 :         double      nruns = input_bytes / sort_mem_bytes;
    1937       20224 :         double      mergeorder = tuplesort_merge_order(sort_mem_bytes);
    1938             :         double      log_runs;
    1939             :         double      npageaccesses;
    1940             : 
    1941             :         /*
    1942             :          * CPU costs
    1943             :          *
    1944             :          * Assume about N log2 N comparisons
    1945             :          */
    1946       20224 :         *startup_cost = comparison_cost * tuples * LOG2(tuples);
    1947             : 
    1948             :         /* Disk costs */
    1949             : 
    1950             :         /* Compute logM(r) as log(r) / log(M) */
    1951       20224 :         if (nruns > mergeorder)
    1952        4856 :             log_runs = ceil(log(nruns) / log(mergeorder));
    1953             :         else
    1954       15368 :             log_runs = 1.0;
    1955       20224 :         npageaccesses = 2.0 * npages * log_runs;
    1956             :         /* Assume 3/4ths of accesses are sequential, 1/4th are not */
    1957       20224 :         *startup_cost += npageaccesses *
    1958       20224 :             (seq_page_cost * 0.75 + random_page_cost * 0.25);
    1959             :     }
    1960     1733024 :     else if (tuples > 2 * output_tuples || input_bytes > sort_mem_bytes)
    1961             :     {
    1962             :         /*
    1963             :          * We'll use a bounded heap-sort keeping just K tuples in memory, for
    1964             :          * a total number of tuple comparisons of N log2 K; but the constant
    1965             :          * factor is a bit higher than for quicksort.  Tweak it so that the
    1966             :          * cost curve is continuous at the crossover point.
    1967             :          */
    1968        1376 :         *startup_cost = comparison_cost * tuples * LOG2(2.0 * output_tuples);
    1969             :     }
    1970             :     else
    1971             :     {
    1972             :         /* We'll use plain quicksort on all the input tuples */
    1973     1731648 :         *startup_cost = comparison_cost * tuples * LOG2(tuples);
    1974             :     }
    1975             : 
    1976             :     /*
    1977             :      * Also charge a small amount (arbitrarily set equal to operator cost) per
    1978             :      * extracted tuple.  We don't charge cpu_tuple_cost because a Sort node
    1979             :      * doesn't do qual-checking or projection, so it has less overhead than
    1980             :      * most plan nodes.  Note it's correct to use tuples not output_tuples
    1981             :      * here --- the upper LIMIT will pro-rate the run cost so we'd be double
    1982             :      * counting the LIMIT otherwise.
    1983             :      */
    1984     1753248 :     *run_cost = cpu_operator_cost * tuples;
    1985     1753248 : }
    1986             : 
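                     : /*
                     :  * Editor's sketch (not part of costsize.c): a standalone program that
                     :  * redoes the disk-cost arithmetic above for hypothetical inputs, using
                     :  * the default GUC values (seq_page_cost = 1.0, random_page_cost = 4.0)
                     :  * and an assumed merge order; the real code gets the merge order from
                     :  * tuplesort_merge_order().
                     :  */
                     : #include <math.h>
                     : #include <stdio.h>
                     : 
                     : int
                     : main(void)
                     : {
                     :     double  input_bytes = 100.0 * 1024 * 1024;  /* 100MB of sort input */
                     :     double  sort_mem_bytes = 4.0 * 1024 * 1024; /* work_mem = 4MB */
                     :     double  mergeorder = 500.0;                 /* assumed merge order */
                     :     double  blcksz = 8192.0;                    /* default BLCKSZ */
                     :     double  seq_page_cost = 1.0;
                     :     double  random_page_cost = 4.0;
                     : 
                     :     double  npages = ceil(input_bytes / blcksz);    /* 12800 */
                     :     double  nruns = input_bytes / sort_mem_bytes;   /* 25 initial runs */
                     :     double  log_runs = (nruns > mergeorder) ?
                     :         ceil(log(nruns) / log(mergeorder)) : 1.0;   /* one merge pass */
                     :     double  npageaccesses = 2.0 * npages * log_runs;
                     : 
                     :     /* 3/4 sequential and 1/4 random page accesses, as assumed above */
                     :     printf("disk cost = %g\n",
                     :            npageaccesses * (seq_page_cost * 0.75 + random_page_cost * 0.25));
                     :     /* prints: disk cost = 44800 */
                     :     return 0;
                     : }
                     : 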
    1987             : /*
    1988             :  * cost_incremental_sort
    1989             :  *  Determines and returns the cost of sorting a relation incrementally, when
    1990             :  *  the input path is presorted by a prefix of the pathkeys.
    1991             :  *
    1992             :  * 'presorted_keys' is the number of leading pathkeys by which the input path
    1993             :  * is sorted.
    1994             :  *
    1995             :  * We estimate the number of groups into which the relation is divided by the
    1996             :  * leading pathkeys, and then calculate the cost of sorting a single group
    1997             :  * with tuplesort using cost_tuplesort().
    1998             :  */
    1999             : void
    2000       11734 : cost_incremental_sort(Path *path,
    2001             :                       PlannerInfo *root, List *pathkeys, int presorted_keys,
    2002             :                       int input_disabled_nodes,
    2003             :                       Cost input_startup_cost, Cost input_total_cost,
    2004             :                       double input_tuples, int width, Cost comparison_cost, int sort_mem,
    2005             :                       double limit_tuples)
    2006             : {
    2007             :     Cost        startup_cost,
    2008             :                 run_cost,
    2009       11734 :                 input_run_cost = input_total_cost - input_startup_cost;
    2010             :     double      group_tuples,
    2011             :                 input_groups;
    2012             :     Cost        group_startup_cost,
    2013             :                 group_run_cost,
    2014             :                 group_input_run_cost;
    2015       11734 :     List       *presortedExprs = NIL;
    2016             :     ListCell   *l;
    2017       11734 :     bool        unknown_varno = false;
    2018             : 
    2019             :     Assert(presorted_keys > 0 && presorted_keys < list_length(pathkeys));
    2020             : 
    2021             :     /*
    2022             :      * We want to be sure the cost of a sort is never estimated as zero, even
    2023             :      * if passed-in tuple count is zero.  Besides, mustn't do log(0)...
    2024             :      */
    2025       11734 :     if (input_tuples < 2.0)
    2026        6760 :         input_tuples = 2.0;
    2027             : 
    2028             :     /* Default estimate of number of groups, capped to one group per row. */
    2029       11734 :     input_groups = Min(input_tuples, DEFAULT_NUM_DISTINCT);
    2030             : 
    2031             :     /*
    2032             :      * Extract presorted keys as list of expressions.
    2033             :      *
    2034             :      * We need to be careful about Vars containing "varno 0" which might have
    2035             :      * been introduced by generate_append_tlist, which would confuse
    2036             :      * estimate_num_groups (in fact it'd fail for such expressions). See
    2037             :      * recurse_set_operations which has to deal with the same issue.
    2038             :      *
    2039             :      * Unlike recurse_set_operations we can't access the original target list
    2040             :      * here, and even if we could, it's not very clear how useful that would be
    2041             :      * for a set operation combining multiple tables. So we simply detect if
    2042             :      * there are any expressions with "varno 0" and use the default
    2043             :      * DEFAULT_NUM_DISTINCT in that case.
    2044             :      *
    2045             :      * We might also use either 1.0 (a single group) or input_tuples (each row
    2046             :      * being a separate group), pretty much the worst and best case for
    2047             :      * incremental sort. But those are extreme cases and using something in
    2048             :      * between seems reasonable. Furthermore, generate_append_tlist is used
    2049             :      * for set operations, which are likely to produce mostly unique output
    2050             :      * anyway - from that standpoint the DEFAULT_NUM_DISTINCT is defensive
    2051             :      * while maintaining lower startup cost.
    2052             :      */
    2053       11830 :     foreach(l, pathkeys)
    2054             :     {
    2055       11830 :         PathKey    *key = (PathKey *) lfirst(l);
    2056       11830 :         EquivalenceMember *member = (EquivalenceMember *)
    2057       11830 :             linitial(key->pk_eclass->ec_members);
    2058             : 
    2059             :         /*
    2060             :          * Check if the expression contains Var with "varno 0" so that we
    2061             :          * don't call estimate_num_groups in that case.
    2062             :          */
    2063       11830 :         if (bms_is_member(0, pull_varnos(root, (Node *) member->em_expr)))
    2064             :         {
    2065          10 :             unknown_varno = true;
    2066          10 :             break;
    2067             :         }
    2068             : 
    2069             :         /* expression not containing any Vars with "varno 0" */
    2070       11820 :         presortedExprs = lappend(presortedExprs, member->em_expr);
    2071             : 
    2072       11820 :         if (foreach_current_index(l) + 1 >= presorted_keys)
    2073       11724 :             break;
    2074             :     }
    2075             : 
    2076             :     /* Estimate the number of groups with equal presorted keys. */
    2077       11734 :     if (!unknown_varno)
    2078       11724 :         input_groups = estimate_num_groups(root, presortedExprs, input_tuples,
    2079             :                                            NULL, NULL);
    2080             : 
    2081       11734 :     group_tuples = input_tuples / input_groups;
    2082       11734 :     group_input_run_cost = input_run_cost / input_groups;
    2083             : 
    2084             :     /*
    2085             :      * Estimate the average cost of sorting of one group where presorted keys
    2086             :      * are equal.
    2087             :      */
    2088       11734 :     cost_tuplesort(&group_startup_cost, &group_run_cost,
    2089             :                    group_tuples, width, comparison_cost, sort_mem,
    2090             :                    limit_tuples);
    2091             : 
    2092             :     /*
    2093             :      * Startup cost of incremental sort is the startup cost of its first group
    2094             :      * plus the cost of its input.
    2095             :      */
    2096       11734 :     startup_cost = group_startup_cost + input_startup_cost +
    2097             :         group_input_run_cost;
    2098             : 
    2099             :     /*
    2100             :      * Once we have started producing tuples from the first group, the cost of
    2101             :      * producing all the tuples is given by the cost to finish processing this
    2102             :      * group, plus the total cost to process the remaining groups, plus the
    2103             :      * remaining cost of input.
    2104             :      */
    2105       11734 :     run_cost = group_run_cost + (group_run_cost + group_startup_cost) *
    2106       11734 :         (input_groups - 1) + group_input_run_cost * (input_groups - 1);
    2107             : 
    2108             :     /*
    2109             :      * Incremental sort adds some overhead by itself. Firstly, it has to
    2110             :      * detect the sort groups. This is roughly equal to one extra copy and
    2111             :      * comparison per tuple.
    2112             :      */
    2113       11734 :     run_cost += (cpu_tuple_cost + comparison_cost) * input_tuples;
    2114             : 
    2115             :     /*
    2116             :      * Additionally, we charge double cpu_tuple_cost for each input group to
    2117             :      * account for the tuplesort_reset that's performed after each group.
    2118             :      */
    2119       11734 :     run_cost += 2.0 * cpu_tuple_cost * input_groups;
    2120             : 
    2121       11734 :     path->rows = input_tuples;
    2122             : 
    2123             :     /* should not generate these paths when enable_incremental_sort=false */
    2124             :     Assert(enable_incremental_sort);
    2125       11734 :     path->disabled_nodes = input_disabled_nodes;
    2126             : 
    2127       11734 :     path->startup_cost = startup_cost;
    2128       11734 :     path->total_cost = startup_cost + run_cost;
    2129       11734 : }
    2130             : 
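                     : /*
                     :  * Editor's illustration (not part of costsize.c), with hypothetical
                     :  * numbers: for input_tuples = 10000 presorted into input_groups = 100
                     :  * groups, each tuplesort handles group_tuples = 100 rows.  If
                     :  * cost_tuplesort() then reports group_startup_cost = S and
                     :  * group_run_cost = R for one group, the formulas above give
                     :  *
                     :  *     startup_cost = S + input_startup_cost + input_run_cost/100
                     :  *     run_cost     = R + (R + S) * 99 + (input_run_cost/100) * 99
                     :  *                    + (cpu_tuple_cost + comparison_cost) * 10000
                     :  *                    + 2 * cpu_tuple_cost * 100
                     :  *
                     :  * so only the first group's sort is paid before the first tuple is
                     :  * returned, which is the main attraction of incremental sort.
                     :  */
                     : 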
    2131             : /*
    2132             :  * cost_sort
    2133             :  *    Determines and returns the cost of sorting a relation, including
    2134             :  *    the cost of reading the input data.
    2135             :  *
    2136             :  * NOTE: some callers currently pass NIL for pathkeys because they
    2137             :  * can't conveniently supply the sort keys.  Since this routine doesn't
    2138             :  * currently do anything with pathkeys anyway, that doesn't matter...
    2139             :  * but if it ever does, it should react gracefully to lack of key data.
    2140             :  * (Actually, the thing we'd most likely be interested in is just the number
    2141             :  * of sort keys, which all callers *could* supply.)
    2142             :  */
    2143             : void
    2144     1741514 : cost_sort(Path *path, PlannerInfo *root,
    2145             :           List *pathkeys, int input_disabled_nodes,
    2146             :           Cost input_cost, double tuples, int width,
    2147             :           Cost comparison_cost, int sort_mem,
    2148             :           double limit_tuples)
    2149             : 
    2150             : {
    2151             :     Cost        startup_cost;
    2152             :     Cost        run_cost;
    2153             : 
    2154     1741514 :     cost_tuplesort(&startup_cost, &run_cost,
    2155             :                    tuples, width,
    2156             :                    comparison_cost, sort_mem,
    2157             :                    limit_tuples);
    2158             : 
    2159     1741514 :     startup_cost += input_cost;
    2160             : 
    2161     1741514 :     path->rows = tuples;
    2162     1741514 :     path->disabled_nodes = input_disabled_nodes + (enable_sort ? 0 : 1);
    2163     1741514 :     path->startup_cost = startup_cost;
    2164     1741514 :     path->total_cost = startup_cost + run_cost;
    2165     1741514 : }
    2166             : 
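                     : /*
                     :  * Editor's note (not part of costsize.c): cost_sort is a thin wrapper
                     :  * around cost_tuplesort.  For example, a sort whose input costs 1000
                     :  * units simply has that 1000 added to its startup cost; and setting
                     :  * enable_sort = off does not alter the numbers at all, it just marks
                     :  * the path with one more disabled node, which the planner treats as
                     :  * worse than any cost difference.
                     :  */
                     : 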
    2167             : /*
    2168             :  * append_nonpartial_cost
    2169             :  *    Estimate the cost of the non-partial paths in a Parallel Append.
    2170             :  *    The non-partial paths are assumed to be the first "numpaths" paths
    2171             :  *    from the subpaths list, and to be in order of decreasing cost.
    2172             :  */
    2173             : static Cost
    2174       18330 : append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
    2175             : {
    2176             :     Cost       *costarr;
    2177             :     int         arrlen;
    2178             :     ListCell   *l;
    2179             :     ListCell   *cell;
    2180             :     int         path_index;
    2181             :     int         min_index;
    2182             :     int         max_index;
    2183             : 
    2184       18330 :     if (numpaths == 0)
    2185       14190 :         return 0;
    2186             : 
    2187             :     /*
    2188             :      * Array length is number of workers or number of relevant paths,
    2189             :      * whichever is less.
    2190             :      */
    2191        4140 :     arrlen = Min(parallel_workers, numpaths);
    2192        4140 :     costarr = (Cost *) palloc(sizeof(Cost) * arrlen);
    2193             : 
    2194             :     /* The first few paths will each be claimed by a different worker. */
    2195        4140 :     path_index = 0;
    2196       11946 :     foreach(cell, subpaths)
    2197             :     {
    2198        8650 :         Path       *subpath = (Path *) lfirst(cell);
    2199             : 
    2200        8650 :         if (path_index == arrlen)
    2201         844 :             break;
    2202        7806 :         costarr[path_index++] = subpath->total_cost;
    2203             :     }
    2204             : 
    2205             :     /*
    2206             :      * Since subpaths are sorted by decreasing cost, the last one will have
    2207             :      * the minimum cost.
    2208             :      */
    2209        4140 :     min_index = arrlen - 1;
    2210             : 
    2211             :     /*
    2212             :      * For each of the remaining subpaths, add its cost to the array element
    2213             :      * with minimum cost.
    2214             :      */
    2215        4622 :     for_each_cell(l, subpaths, cell)
    2216             :     {
    2217        1028 :         Path       *subpath = (Path *) lfirst(l);
    2218             : 
    2219             :         /* Consider only the non-partial paths */
    2220        1028 :         if (path_index++ == numpaths)
    2221         546 :             break;
    2222             : 
    2223         482 :         costarr[min_index] += subpath->total_cost;
    2224             : 
    2225             :         /* Update the new min cost array index */
    2226         482 :         min_index = 0;
    2227        1482 :         for (int i = 0; i < arrlen; i++)
    2228             :         {
    2229        1000 :             if (costarr[i] < costarr[min_index])
    2230         202 :                 min_index = i;
    2231             :         }
    2232             :     }
    2233             : 
    2234             :     /* Return the highest cost from the array */
    2235        4140 :     max_index = 0;
    2236       11946 :     for (int i = 0; i < arrlen; i++)
    2237             :     {
    2238        7806 :         if (costarr[i] > costarr[max_index])
    2239         182 :             max_index = i;
    2240             :     }
    2241             : 
    2242        4140 :     return costarr[max_index];
    2243             : }
    2244             : 
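                     : /*
                     :  * Editor's trace (not part of costsize.c): given non-partial subpath
                     :  * costs {10, 8, 5, 3}, sorted by decreasing cost, and 2 workers, the
                     :  * greedy assignment above proceeds as
                     :  *
                     :  *     costarr = [10, 8]   (first two paths claim a worker each)
                     :  *     costarr = [10, 13]  (cost 5 added to the current minimum, 8)
                     :  *     costarr = [13, 13]  (cost 3 added to the new minimum, 10)
                     :  *
                     :  * and the function returns 13, the estimated completion time of the
                     :  * busiest worker.
                     :  */
                     : 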
    2245             : /*
    2246             :  * cost_append
    2247             :  *    Determines and returns the cost of an Append node.
    2248             :  */
    2249             : void
    2250       54580 : cost_append(AppendPath *apath, PlannerInfo *root)
    2251             : {
    2252             :     ListCell   *l;
    2253             : 
    2254       54580 :     apath->path.disabled_nodes = 0;
    2255       54580 :     apath->path.startup_cost = 0;
    2256       54580 :     apath->path.total_cost = 0;
    2257       54580 :     apath->path.rows = 0;
    2258             : 
    2259       54580 :     if (apath->subpaths == NIL)
    2260        1738 :         return;
    2261             : 
    2262       52842 :     if (!apath->path.parallel_aware)
    2263             :     {
    2264       34512 :         List       *pathkeys = apath->path.pathkeys;
    2265             : 
    2266       34512 :         if (pathkeys == NIL)
    2267             :         {
    2268       32298 :             Path       *firstsubpath = (Path *) linitial(apath->subpaths);
    2269             : 
    2270             :             /*
    2271             :              * For an unordered, non-parallel-aware Append we take the startup
    2272             :              * cost as the startup cost of the first subpath.
    2273             :              */
    2274       32298 :             apath->path.startup_cost = firstsubpath->startup_cost;
    2275             : 
    2276             :             /*
    2277             :              * Compute rows, number of disabled nodes, and total cost as sums
    2278             :              * of underlying subplan values.
    2279             :              */
    2280      124900 :             foreach(l, apath->subpaths)
    2281             :             {
    2282       92602 :                 Path       *subpath = (Path *) lfirst(l);
    2283             : 
    2284       92602 :                 apath->path.rows += subpath->rows;
    2285       92602 :                 apath->path.disabled_nodes += subpath->disabled_nodes;
    2286       92602 :                 apath->path.total_cost += subpath->total_cost;
    2287             :             }
    2288             :         }
    2289             :         else
    2290             :         {
    2291             :             /*
    2292             :              * For an ordered, non-parallel-aware Append we take the startup
    2293             :              * cost as the sum of the subpath startup costs.  This ensures
    2294             :              * that we don't underestimate the startup cost when a query's
    2295             :              * LIMIT is such that several of the children have to be run to
    2296             :              * satisfy it.  This might be overkill --- another plausible hack
    2297             :              * would be to take the Append's startup cost as the maximum of
    2298             :              * the child startup costs.  But we don't want to risk believing
    2299             :              * that an ORDER BY LIMIT query can be satisfied at small cost
    2300             :              * when the first child has small startup cost but later ones
    2301             :              * don't.  (If we had the ability to deal with nonlinear cost
    2302             :              * interpolation for partial retrievals, we would not need to be
    2303             :              * so conservative about this.)
    2304             :              *
    2305             :              * This case is also different from the above in that we have to
    2306             :              * account for possibly injecting sorts into subpaths that aren't
    2307             :              * natively ordered.
    2308             :              */
    2309        8588 :             foreach(l, apath->subpaths)
    2310             :             {
    2311        6374 :                 Path       *subpath = (Path *) lfirst(l);
    2312             :                 int         presorted_keys;
    2313             :                 Path        sort_path;  /* dummy for result of
    2314             :                                          * cost_sort/cost_incremental_sort */
    2315             : 
    2316        6374 :                 if (!pathkeys_count_contained_in(pathkeys, subpath->pathkeys,
    2317             :                                                  &presorted_keys))
    2318             :                 {
    2319             :                     /*
    2320             :                      * We'll need to insert a Sort node, so include costs for
    2321             :                      * that.  We choose to use incremental sort if it is
    2322             :                      * enabled and there are presorted keys; otherwise we use
    2323             :                      * full sort.
    2324             :                      *
    2325             :                      * We can use the parent's LIMIT if any, since we
    2326             :                      * certainly won't pull more than that many tuples from
    2327             :                      * any child.
    2328             :                      */
    2329          56 :                     if (enable_incremental_sort && presorted_keys > 0)
    2330             :                     {
    2331          12 :                         cost_incremental_sort(&sort_path,
    2332             :                                               root,
    2333             :                                               pathkeys,
    2334             :                                               presorted_keys,
    2335             :                                               subpath->disabled_nodes,
    2336             :                                               subpath->startup_cost,
    2337             :                                               subpath->total_cost,
    2338             :                                               subpath->rows,
    2339          12 :                                               subpath->pathtarget->width,
    2340             :                                               0.0,
    2341             :                                               work_mem,
    2342             :                                               apath->limit_tuples);
    2343             :                     }
    2344             :                     else
    2345             :                     {
    2346          44 :                         cost_sort(&sort_path,
    2347             :                                   root,
    2348             :                                   pathkeys,
    2349             :                                   subpath->disabled_nodes,
    2350             :                                   subpath->total_cost,
    2351             :                                   subpath->rows,
    2352          44 :                                   subpath->pathtarget->width,
    2353             :                                   0.0,
    2354             :                                   work_mem,
    2355             :                                   apath->limit_tuples);
    2356             :                     }
    2357             : 
    2358          56 :                     subpath = &sort_path;
    2359             :                 }
    2360             : 
    2361        6374 :                 apath->path.rows += subpath->rows;
    2362        6374 :                 apath->path.disabled_nodes += subpath->disabled_nodes;
    2363        6374 :                 apath->path.startup_cost += subpath->startup_cost;
    2364        6374 :                 apath->path.total_cost += subpath->total_cost;
    2365             :             }
    2366             :         }
    2367             :     }
    2368             :     else                        /* parallel-aware */
    2369             :     {
    2370       18330 :         int         i = 0;
    2371       18330 :         double      parallel_divisor = get_parallel_divisor(&apath->path);
    2372             : 
    2373             :         /* Parallel-aware Append never produces ordered output. */
    2374             :         Assert(apath->path.pathkeys == NIL);
    2375             : 
    2376             :         /* Calculate startup cost. */
    2377       71526 :         foreach(l, apath->subpaths)
    2378             :         {
    2379       53196 :             Path       *subpath = (Path *) lfirst(l);
    2380             : 
    2381             :             /*
    2382             :              * Append will start returning tuples when the child node having
    2383             :              * the lowest startup cost is done setting up. We consider only the
    2384             :              * first few subplans that immediately get a worker assigned.
    2385             :              */
    2386       53196 :             if (i == 0)
    2387       18330 :                 apath->path.startup_cost = subpath->startup_cost;
    2388       34866 :             else if (i < apath->path.parallel_workers)
    2389       17772 :                 apath->path.startup_cost = Min(apath->path.startup_cost,
    2390             :                                                subpath->startup_cost);
    2391             : 
    2392             :             /*
    2393             :              * Apply parallel divisor to subpaths.  Scale the number of rows
    2394             :              * for each partial subpath based on the ratio of the parallel
    2395             :              * divisor originally used for the subpath to the one we adopted.
    2396             :              * Also add the cost of partial paths to the total cost, but
    2397             :              * ignore non-partial paths for now.
    2398             :              */
    2399       53196 :             if (i < apath->first_partial_path)
    2400        8288 :                 apath->path.rows += subpath->rows / parallel_divisor;
    2401             :             else
    2402             :             {
    2403             :                 double      subpath_parallel_divisor;
    2404             : 
    2405       44908 :                 subpath_parallel_divisor = get_parallel_divisor(subpath);
    2406       44908 :                 apath->path.rows += subpath->rows * (subpath_parallel_divisor /
    2407             :                                                      parallel_divisor);
    2408       44908 :                 apath->path.total_cost += subpath->total_cost;
    2409             :             }
    2410             : 
    2411       53196 :             apath->path.disabled_nodes += subpath->disabled_nodes;
    2412       53196 :             apath->path.rows = clamp_row_est(apath->path.rows);
    2413             : 
    2414       53196 :             i++;
    2415             :         }
    2416             : 
    2417             :         /* Add cost for non-partial subpaths. */
    2418       18330 :         apath->path.total_cost +=
    2419       18330 :             append_nonpartial_cost(apath->subpaths,
    2420             :                                    apath->first_partial_path,
    2421             :                                    apath->path.parallel_workers);
    2422             :     }
    2423             : 
    2424             :     /*
    2425             :      * Although Append does not do any selection or projection, it's not free;
    2426             :      * add a small per-tuple overhead.
    2427             :      */
    2428       52842 :     apath->path.total_cost +=
    2429       52842 :         cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * apath->path.rows;
    2430             : }
    2431             : 
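                     : /*
                     :  * Editor's summary (not part of costsize.c) of the three startup-cost
                     :  * rules above: an unordered non-parallel Append uses the first child's
                     :  * startup cost; an ordered Append conservatively sums every child's
                     :  * startup cost; a parallel-aware Append takes the minimum startup cost
                     :  * among the first parallel_workers children, since one of those is the
                     :  * first that can deliver a tuple.  E.g. for child startup costs
                     :  * {5, 2, 9} and 2 workers, the three rules yield 5, 16, and 2.
                     :  */
                     : 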
    2432             : /*
    2433             :  * cost_merge_append
    2434             :  *    Determines and returns the cost of a MergeAppend node.
    2435             :  *
    2436             :  * MergeAppend merges several pre-sorted input streams, using a heap that
    2437             :  * at any given instant holds the next tuple from each stream.  If there
    2438             :  * are N streams, we need about N*log2(N) tuple comparisons to construct
    2439             :  * the heap at startup, and then for each output tuple, about log2(N)
    2440             :  * comparisons to replace the top entry.
    2441             :  *
    2442             :  * (The effective value of N will drop once some of the input streams are
    2443             :  * exhausted, but it seems unlikely to be worth trying to account for that.)
    2444             :  *
    2445             :  * The heap is never spilled to disk, since we assume N is not very large.
    2446             :  * So this is much simpler than cost_sort.
    2447             :  *
    2448             :  * As in cost_sort, we charge two operator evals per tuple comparison.
    2449             :  *
    2450             :  * 'pathkeys' is a list of sort keys
    2451             :  * 'n_streams' is the number of input streams
    2452             :  * 'input_disabled_nodes' is the sum of the input streams' disabled node counts
    2453             :  * 'input_startup_cost' is the sum of the input streams' startup costs
    2454             :  * 'input_total_cost' is the sum of the input streams' total costs
    2455             :  * 'tuples' is the number of tuples in all the streams
    2456             :  */
    2457             : void
    2458        4220 : cost_merge_append(Path *path, PlannerInfo *root,
    2459             :                   List *pathkeys, int n_streams,
    2460             :                   int input_disabled_nodes,
    2461             :                   Cost input_startup_cost, Cost input_total_cost,
    2462             :                   double tuples)
    2463             : {
    2464        4220 :     Cost        startup_cost = 0;
    2465        4220 :     Cost        run_cost = 0;
    2466             :     Cost        comparison_cost;
    2467             :     double      N;
    2468             :     double      logN;
    2469             : 
    2470             :     /*
    2471             :      * Avoid log(0)...
    2472             :      */
    2473        4220 :     N = (n_streams < 2) ? 2.0 : (double) n_streams;
    2474        4220 :     logN = LOG2(N);
    2475             : 
    2476             :     /* Assumed cost per tuple comparison */
    2477        4220 :     comparison_cost = 2.0 * cpu_operator_cost;
    2478             : 
    2479             :     /* Heap creation cost */
    2480        4220 :     startup_cost += comparison_cost * N * logN;
    2481             : 
    2482             :     /* Per-tuple heap maintenance cost */
    2483        4220 :     run_cost += tuples * comparison_cost * logN;
    2484             : 
    2485             :     /*
    2486             :      * Although MergeAppend does not do any selection or projection, it's not
    2487             :      * free; add a small per-tuple overhead.
    2488             :      */
    2489        4220 :     run_cost += cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * tuples;
    2490             : 
    2491        4220 :     path->disabled_nodes = input_disabled_nodes;
    2492        4220 :     path->startup_cost = startup_cost + input_startup_cost;
    2493        4220 :     path->total_cost = startup_cost + run_cost + input_total_cost;
    2494        4220 : }
    2495             : 
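                     : /*
                     :  * Editor's illustration (not part of costsize.c): merging N = 4 streams
                     :  * with default cpu_operator_cost = 0.0025 gives comparison_cost = 0.005
                     :  * and logN = 2, so heap creation adds 0.005 * 4 * 2 = 0.04 to startup
                     :  * cost and each output tuple adds 0.005 * 2 = 0.01 for heap
                     :  * maintenance, plus the small per-tuple Append overhead charged above.
                     :  */
                     : 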
    2496             : /*
    2497             :  * cost_material
    2498             :  *    Determines and returns the cost of materializing a relation, including
    2499             :  *    the cost of reading the input data.
    2500             :  *
    2501             :  * If the total volume of data to materialize exceeds work_mem, we will need
    2502             :  * to write it to disk, so the cost is much higher in that case.
    2503             :  *
    2504             :  * Note that here we are estimating the costs for the first scan of the
    2505             :  * relation, so the materialization is all overhead --- any savings will
    2506             :  * occur only on rescan, which is estimated in cost_rescan.
    2507             :  */
    2508             : void
    2509      532066 : cost_material(Path *path,
    2510             :               int input_disabled_nodes,
    2511             :               Cost input_startup_cost, Cost input_total_cost,
    2512             :               double tuples, int width)
    2513             : {
    2514      532066 :     Cost        startup_cost = input_startup_cost;
    2515      532066 :     Cost        run_cost = input_total_cost - input_startup_cost;
    2516      532066 :     double      nbytes = relation_byte_size(tuples, width);
    2517      532066 :     double      work_mem_bytes = work_mem * (Size) 1024;
    2518             : 
    2519      532066 :     path->rows = tuples;
    2520             : 
    2521             :     /*
    2522             :      * Whether spilling or not, charge 2x cpu_operator_cost per tuple to
    2523             :      * reflect bookkeeping overhead.  (This rate must be more than what
    2524             :      * cost_rescan charges for materialize, ie, cpu_operator_cost per tuple;
    2525             :      * if it is exactly the same then there will be a cost tie between
    2526             :      * nestloop with A outer, materialized B inner and nestloop with B outer,
    2527             :      * materialized A inner.  The extra cost ensures we'll prefer
    2528             :      * materializing the smaller rel.)  Note that this is normally a good deal
    2529             :      * less than cpu_tuple_cost; which is OK because a Material plan node
    2530             :      * doesn't do qual-checking or projection, so it's got less overhead than
    2531             :      * most plan nodes.
    2532             :      */
    2533      532066 :     run_cost += 2 * cpu_operator_cost * tuples;
    2534             : 
    2535             :     /*
    2536             :      * If we will spill to disk, charge at the rate of seq_page_cost per page.
    2537             :      * This cost is assumed to be evenly spread through the plan run phase,
    2538             :      * which isn't exactly accurate but our cost model doesn't allow for
    2539             :      * nonuniform costs within the run phase.
    2540             :      */
    2541      532066 :     if (nbytes > work_mem_bytes)
    2542             :     {
    2543        5464 :         double      npages = ceil(nbytes / BLCKSZ);
    2544             : 
    2545        5464 :         run_cost += seq_page_cost * npages;
    2546             :     }
    2547             : 
    2548      532066 :     path->disabled_nodes = input_disabled_nodes + (enable_material ? 0 : 1);
    2549      532066 :     path->startup_cost = startup_cost;
    2550      532066 :     path->total_cost = startup_cost + run_cost;
    2551      532066 : }
    2552             : 
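                     : /*
                     :  * Editor's illustration (not part of costsize.c): materializing 100000
                     :  * tuples adds 2 * 0.0025 * 100000 = 500 cost units (default
                     :  * cpu_operator_cost) of bookkeeping overhead.  If the tuples amount
                     :  * to, say, 16MB while work_mem is 4MB, the node is expected to spill,
                     :  * adding ceil(16MB / 8192) = 2048 pages times seq_page_cost = 1.0,
                     :  * i.e. another 2048 cost units spread over the run phase.
                     :  */
                     : 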
    2553             : /*
    2554             :  * cost_memoize_rescan
    2555             :  *    Determines the estimated cost of rescanning a Memoize node.
    2556             :  *
    2557             :  * In order to estimate this, we must gain knowledge of how often we expect to
    2558             :  * be called and how many distinct sets of parameters we are likely to be
    2559             :  * called with. If we expect a good cache hit ratio, then we can set our
    2560             :  * costs to account for that hit ratio, plus a little bit of cost for the
    2561             :  * caching itself.  Caching will not work out well if we expect to be called
    2562             :  * with too many distinct parameter values.  The worst-case here is that we
    2563             :  * never see any parameter value twice, in which case we'd never get a cache
    2564             :  * hit and caching would be a complete waste of effort.
    2565             :  */
    2566             : static void
    2567      299506 : cost_memoize_rescan(PlannerInfo *root, MemoizePath *mpath,
    2568             :                     Cost *rescan_startup_cost, Cost *rescan_total_cost)
    2569             : {
    2570             :     EstimationInfo estinfo;
    2571             :     ListCell   *lc;
    2572      299506 :     Cost        input_startup_cost = mpath->subpath->startup_cost;
    2573      299506 :     Cost        input_total_cost = mpath->subpath->total_cost;
    2574      299506 :     double      tuples = mpath->subpath->rows;
    2575      299506 :     double      calls = mpath->calls;
    2576      299506 :     int         width = mpath->subpath->pathtarget->width;
    2577             : 
    2578             :     double      hash_mem_bytes;
    2579             :     double      est_entry_bytes;
    2580             :     double      est_cache_entries;
    2581             :     double      ndistinct;
    2582             :     double      evict_ratio;
    2583             :     double      hit_ratio;
    2584             :     Cost        startup_cost;
    2585             :     Cost        total_cost;
    2586             : 
    2587             :     /* available cache space */
    2588      299506 :     hash_mem_bytes = get_hash_memory_limit();
    2589             : 
    2590             :     /*
    2591             :      * Set the number of bytes each cache entry should consume in the cache.
    2592             :      * To provide us with better estimations on how many cache entries we can
    2593             :      * store at once, we make a call to the executor here to ask it what
    2594             :      * memory overheads there are for a single cache entry.
    2595             :      */
    2596      299506 :     est_entry_bytes = relation_byte_size(tuples, width) +
    2597      299506 :         ExecEstimateCacheEntryOverheadBytes(tuples);
    2598             : 
    2599             :     /* include the estimated width for the cache keys */
    2600      643636 :     foreach(lc, mpath->param_exprs)
    2601      344130 :         est_entry_bytes += get_expr_width(root, (Node *) lfirst(lc));
    2602             : 
    2603             :     /* estimate on the upper limit of cache entries we can hold at once */
    2604      299506 :     est_cache_entries = floor(hash_mem_bytes / est_entry_bytes);
    2605             : 
    2606             :     /* estimate on the distinct number of parameter values */
    2607      299506 :     ndistinct = estimate_num_groups(root, mpath->param_exprs, calls, NULL,
    2608             :                                     &estinfo);
    2609             : 
    2610             :     /*
    2611             :      * When the estimation fell back on using a default value, it's a bit too
    2612             :      * risky to assume that it's ok to use a Memoize node.  The use of a
    2613             :      * default could cause us to use a Memoize node when it's really
    2614             :      * inappropriate to do so.  If we see that this has been done, then we'll
    2615             :      * assume that every call will have unique parameters, which will almost
    2616             :      * certainly mean a MemoizePath will never survive add_path().
    2617             :      */
    2618      299506 :     if ((estinfo.flags & SELFLAG_USED_DEFAULT) != 0)
    2619       16610 :         ndistinct = calls;
    2620             : 
    2621             :     /*
    2622             :      * Since we've already estimated the maximum number of entries we can
    2623             :      * store at once and know the estimated number of distinct values we'll be
    2624             :      * called with, we'll take this opportunity to set the path's est_entries.
    2625             :      * This will ultimately determine the hash table size that the executor
    2626             :      * will use.  If we leave this at zero, the executor will just choose the
    2627             :      * size itself.  Really this is not the right place to do this, but it's
    2628             :      * convenient since everything is already calculated.
    2629             :      */
    2630      299506 :     mpath->est_entries = Min(Min(ndistinct, est_cache_entries),
    2631             :                              PG_UINT32_MAX);
    2632             : 
    2633             :     /*
    2634             :      * When the number of distinct parameter values is above the amount we can
    2635             :      * store in the cache, then we'll have to evict some entries from the
    2636             :      * cache.  This is not free. Here we estimate how often we'll incur the
    2637             :      * cost of that eviction.
    2638             :      */
    2639      299506 :     evict_ratio = 1.0 - Min(est_cache_entries, ndistinct) / ndistinct;
    2640             : 
    2641             :     /*
    2642             :      * In order to estimate how costly a single scan will be, we need to
    2643             :      * attempt to estimate what the cache hit ratio will be.  To do that we
    2644             :      * must look at how many scans are estimated in total for this node and
    2645             :      * how many of those scans we expect to get a cache hit.
    2646             :      */
    2647      599012 :     hit_ratio = ((calls - ndistinct) / calls) *
    2648      299506 :         (est_cache_entries / Max(ndistinct, est_cache_entries));
    2649             : 
    2650             :     Assert(hit_ratio >= 0 && hit_ratio <= 1.0);
    2651             : 
    2652             :     /*
    2653             :      * Set the total_cost accounting for the expected cache hit ratio.  We
    2654             :      * also add on a cpu_operator_cost to account for a cache lookup. This
    2655             :      * will happen regardless of whether it's a cache hit or not.
    2656             :      */
    2657      299506 :     total_cost = input_total_cost * (1.0 - hit_ratio) + cpu_operator_cost;
    2658             : 
    2659             :     /* Now adjust the total cost to account for cache evictions */
    2660             : 
    2661             :     /* Charge a cpu_tuple_cost for evicting the actual cache entry */
    2662      299506 :     total_cost += cpu_tuple_cost * evict_ratio;
    2663             : 
    2664             :     /*
    2665             :      * Charge a 10th of cpu_operator_cost to evict every tuple in that entry.
    2666             :      * The per-tuple eviction is really just a pfree, so charging a whole
    2667             :      * cpu_operator_cost seems a little excessive.
    2668             :      */
    2669      299506 :     total_cost += cpu_operator_cost / 10.0 * evict_ratio * tuples;
    2670             : 
    2671             :     /*
    2672             :      * Now adjust for storing things in the cache, since that's not free
    2673             :      * either.  Everything must go in the cache.  We don't proportion this
    2674             :      * over any ratio, just apply it once for the scan.  We charge a
    2675             :      * cpu_tuple_cost for the creation of the cache entry and also a
    2676             :      * cpu_operator_cost for each tuple we expect to cache.
    2677             :      */
    2678      299506 :     total_cost += cpu_tuple_cost + cpu_operator_cost * tuples;
    2679             : 
    2680             :     /*
    2681             :      * Getting the first row must also be proportioned according to the
    2682             :      * expected cache hit ratio.
    2683             :      */
    2684      299506 :     startup_cost = input_startup_cost * (1.0 - hit_ratio);
    2685             : 
    2686             :     /*
    2687             :      * Additionally we charge a cpu_tuple_cost to account for cache lookups,
    2688             :      * which we'll do regardless of whether it was a cache hit or not.
    2689             :      */
    2690      299506 :     startup_cost += cpu_tuple_cost;
    2691             : 
    2692      299506 :     *rescan_startup_cost = startup_cost;
    2693      299506 :     *rescan_total_cost = total_cost;
    2694      299506 : }
    2695             : 
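                     : /*
                     :  * Editor's illustration (not part of costsize.c): suppose the node is
                     :  * called 1000 times with ndistinct = 100 parameter values.  If the
                     :  * cache can hold all 100 entries, hit_ratio = ((1000 - 100) / 1000) *
                     :  * (100 / 100) = 0.9 and evict_ratio = 0, so a rescan costs roughly
                     :  * 10% of the subpath's total cost plus the lookup/caching surcharges.
                     :  * If only 50 entries fit, hit_ratio drops to 0.9 * (50 / 100) = 0.45
                     :  * and evict_ratio rises to 1 - 50/100 = 0.5, making the cache look
                     :  * half as attractive.
                     :  */
                     : 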
    2696             : /*
    2697             :  * cost_agg
    2698             :  *      Determines and returns the cost of performing an Agg plan node,
    2699             :  *      including the cost of its input.
    2700             :  *
    2701             :  * aggcosts can be NULL when there are no actual aggregate functions (i.e.,
    2702             :  * we are using a hashed Agg node just to do grouping).
    2703             :  *
    2704             :  * Note: when aggstrategy == AGG_SORTED, caller must ensure that input costs
    2705             :  * are for appropriately-sorted input.
    2706             :  */
    2707             : void
    2708       69476 : cost_agg(Path *path, PlannerInfo *root,
    2709             :          AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
    2710             :          int numGroupCols, double numGroups,
    2711             :          List *quals,
    2712             :          int disabled_nodes,
    2713             :          Cost input_startup_cost, Cost input_total_cost,
    2714             :          double input_tuples, double input_width)
    2715             : {
    2716             :     double      output_tuples;
    2717             :     Cost        startup_cost;
    2718             :     Cost        total_cost;
    2719       69476 :     const AggClauseCosts dummy_aggcosts = {0};
    2720             : 
    2721             :     /* Use all-zero per-aggregate costs if NULL is passed */
    2722       69476 :     if (aggcosts == NULL)
    2723             :     {
    2724             :         Assert(aggstrategy == AGG_HASHED);
    2725       12588 :         aggcosts = &dummy_aggcosts;
    2726             :     }
    2727             : 
    2728             :     /*
    2729             :      * The transCost.per_tuple component of aggcosts should be charged once
    2730             :      * per input tuple, corresponding to the costs of evaluating the aggregate
    2731             :      * transfns and their input expressions. The finalCost.per_tuple component
    2732             :      * is charged once per output tuple, corresponding to the costs of
    2733             :      * evaluating the finalfns.  Startup costs are of course charged but once.
    2734             :      *
    2735             :      * If we are grouping, we charge an additional cpu_operator_cost per
    2736             :      * grouping column per input tuple for grouping comparisons.
    2737             :      *
    2738             :      * We will produce a single output tuple if not grouping, and a tuple per
    2739             :      * group otherwise.  We charge cpu_tuple_cost for each output tuple.
    2740             :      *
    2741             :      * Note: in this cost model, AGG_SORTED and AGG_HASHED have exactly the
    2742             :      * same total CPU cost, but AGG_SORTED has lower startup cost.  If the
    2743             :      * input path is already sorted appropriately, AGG_SORTED should be
    2744             :      * preferred (since it has no risk of memory overflow).  This will happen
    2745             :      * as long as the computed total costs are indeed exactly equal --- but if
    2746             :      * there's roundoff error we might do the wrong thing.  So be sure that
    2747             :      * the computations below form the same intermediate values in the same
    2748             :      * order.
    2749             :      */
    2750       69476 :     if (aggstrategy == AGG_PLAIN)
    2751             :     {
    2752       37230 :         startup_cost = input_total_cost;
    2753       37230 :         startup_cost += aggcosts->transCost.startup;
    2754       37230 :         startup_cost += aggcosts->transCost.per_tuple * input_tuples;
    2755       37230 :         startup_cost += aggcosts->finalCost.startup;
    2756       37230 :         startup_cost += aggcosts->finalCost.per_tuple;
    2757             :         /* we aren't grouping */
    2758       37230 :         total_cost = startup_cost + cpu_tuple_cost;
    2759       37230 :         output_tuples = 1;
    2760             :     }
    2761       32246 :     else if (aggstrategy == AGG_SORTED || aggstrategy == AGG_MIXED)
    2762             :     {
    2763             :         /* Here we are able to deliver output on-the-fly */
    2764       11266 :         startup_cost = input_startup_cost;
    2765       11266 :         total_cost = input_total_cost;
    2766       11266 :         if (aggstrategy == AGG_MIXED && !enable_hashagg)
    2767         456 :             ++disabled_nodes;
    2768             :         /* calcs phrased this way to match HASHED case, see note above */
    2769       11266 :         total_cost += aggcosts->transCost.startup;
    2770       11266 :         total_cost += aggcosts->transCost.per_tuple * input_tuples;
    2771       11266 :         total_cost += (cpu_operator_cost * numGroupCols) * input_tuples;
    2772       11266 :         total_cost += aggcosts->finalCost.startup;
    2773       11266 :         total_cost += aggcosts->finalCost.per_tuple * numGroups;
    2774       11266 :         total_cost += cpu_tuple_cost * numGroups;
    2775       11266 :         output_tuples = numGroups;
    2776             :     }
    2777             :     else
    2778             :     {
    2779             :         /* must be AGG_HASHED */
    2780       20980 :         startup_cost = input_total_cost;
    2781       20980 :         if (!enable_hashagg)
    2782        1578 :             ++disabled_nodes;
    2783       20980 :         startup_cost += aggcosts->transCost.startup;
    2784       20980 :         startup_cost += aggcosts->transCost.per_tuple * input_tuples;
    2785             :         /* cost of computing hash value */
    2786       20980 :         startup_cost += (cpu_operator_cost * numGroupCols) * input_tuples;
    2787       20980 :         startup_cost += aggcosts->finalCost.startup;
    2788             : 
    2789       20980 :         total_cost = startup_cost;
    2790       20980 :         total_cost += aggcosts->finalCost.per_tuple * numGroups;
    2791             :         /* cost of retrieving from hash table */
    2792       20980 :         total_cost += cpu_tuple_cost * numGroups;
    2793       20980 :         output_tuples = numGroups;
    2794             :     }
    2795             : 
    2796             :     /*
    2797             :      * Add the disk costs of hash aggregation that spills to disk.
    2798             :      *
    2799             :      * Groups that go into the hash table stay in memory until finalized, so
    2800             :      * spilling and reprocessing tuples doesn't incur additional invocations
    2801             :      * of transCost or finalCost. Furthermore, the computed hash value is
    2802             :      * stored with the spilled tuples, so we don't incur extra invocations of
    2803             :      * the hash function.
    2804             :      *
    2805             :      * Hash Agg begins returning tuples after the first batch is complete.
    2806             :      * Accrue writes (spilled tuples) to startup_cost and to total_cost;
    2807             :      * accrue reads only to total_cost.
    2808             :      */
    2809       69476 :     if (aggstrategy == AGG_HASHED || aggstrategy == AGG_MIXED)
    2810             :     {
    2811             :         double      pages;
    2812       21896 :         double      pages_written = 0.0;
    2813       21896 :         double      pages_read = 0.0;
    2814             :         double      spill_cost;
    2815             :         double      hashentrysize;
    2816             :         double      nbatches;
    2817             :         Size        mem_limit;
    2818             :         uint64      ngroups_limit;
    2819             :         int         num_partitions;
    2820             :         int         depth;
    2821             : 
    2822             :         /*
    2823             :          * Estimate number of batches based on the computed limits. If less
    2824             :          * than or equal to one, all groups are expected to fit in memory;
    2825             :          * otherwise we expect to spill.
    2826             :          */
    2827       21896 :         hashentrysize = hash_agg_entry_size(list_length(root->aggtransinfos),
    2828             :                                             input_width,
    2829       21896 :                                             aggcosts->transitionSpace);
    2830       21896 :         hash_agg_set_limits(hashentrysize, numGroups, 0, &mem_limit,
    2831             :                             &ngroups_limit, &num_partitions);
    2832             : 
    2833       21896 :         nbatches = Max((numGroups * hashentrysize) / mem_limit,
    2834             :                        numGroups / ngroups_limit);
    2835             : 
    2836       21896 :         nbatches = Max(ceil(nbatches), 1.0);
    2837       21896 :         num_partitions = Max(num_partitions, 2);
    2838             : 
    2839             :         /*
    2840             :          * The number of partitions can change at different levels of
    2841             :          * recursion; but for the purposes of this calculation assume it stays
    2842             :          * constant.
    2843             :          */
    2844       21896 :         depth = ceil(log(nbatches) / log(num_partitions));
    2845             : 
    2846             :         /*
    2847             :          * Estimate number of pages read and written. For each level of
    2848             :          * recursion, a tuple must be written and then later read.
    2849             :          */
    2850       21896 :         pages = relation_byte_size(input_tuples, input_width) / BLCKSZ;
    2851       21896 :         pages_written = pages_read = pages * depth;
    2852             : 
    2853             :         /*
    2854             :          * HashAgg has somewhat worse IO behavior than Sort on typical
    2855             :          * hardware/OS combinations. Account for this with a generic penalty.
    2856             :          */
    2857       21896 :         pages_read *= 2.0;
    2858       21896 :         pages_written *= 2.0;
    2859             : 
    2860       21896 :         startup_cost += pages_written * random_page_cost;
    2861       21896 :         total_cost += pages_written * random_page_cost;
    2862       21896 :         total_cost += pages_read * seq_page_cost;
    2863             : 
    2864             :         /* account for CPU cost of spilling a tuple and reading it back */
    2865       21896 :         spill_cost = depth * input_tuples * 2.0 * cpu_tuple_cost;
    2866       21896 :         startup_cost += spill_cost;
    2867       21896 :         total_cost += spill_cost;
    2868             :     }
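                     : 
                     :     /*
                     :      * Worked example for the spill arithmetic above (assumed figures):
                     :      * with numGroups = 1e6, hashentrysize = 64 bytes and mem_limit = 4MB,
                     :      * nbatches = ceil(6.4e7 / 4194304) = 16; with num_partitions = 4,
                     :      * depth = ceil(log(16) / log(4)) = 2, so each input tuple is expected
                     :      * to be spilled and re-read twice before its group is finalized.
                     :      */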
    2869             : 
    2870             :     /*
    2871             :      * If there are quals (HAVING quals), account for their cost and
    2872             :      * selectivity.
    2873             :      */
    2874       69476 :     if (quals)
    2875             :     {
    2876             :         QualCost    qual_cost;
    2877             : 
    2878        4400 :         cost_qual_eval(&qual_cost, quals, root);
    2879        4400 :         startup_cost += qual_cost.startup;
    2880        4400 :         total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
    2881             : 
    2882        4400 :         output_tuples = clamp_row_est(output_tuples *
    2883        4400 :                                       clauselist_selectivity(root,
    2884             :                                                              quals,
    2885             :                                                              0,
    2886             :                                                              JOIN_INNER,
    2887             :                                                              NULL));
    2888             :     }
    2889             : 
    2890       69476 :     path->rows = output_tuples;
    2891       69476 :     path->disabled_nodes = disabled_nodes;
    2892       69476 :     path->startup_cost = startup_cost;
    2893       69476 :     path->total_cost = total_cost;
    2894       69476 : }
    2895             : 
    2896             : /*
    2897             :  * get_windowclause_startup_tuples
    2898             :  *      Estimate how many tuples we'll need to fetch from a WindowAgg's
    2899             :  *      subnode before we can output the first WindowAgg tuple.
    2900             :  *
    2901             :  * How many tuples need to be read depends on the WindowClause.  For example,
    2902             :  * a WindowClause with no PARTITION BY and no ORDER BY requires that all
    2903             :  * subnode tuples are read and aggregated before the WindowAgg can output
    2904             :  * anything.  If there's a PARTITION BY, then we only need to look at tuples
    2905             :  * in the first partition.  Here we attempt to estimate just how many
    2906             :  * 'input_tuples' the WindowAgg will need to read for the given WindowClause
    2907             :  * before the first tuple can be output.
    2908             :  */
    2909             : static double
    2910        2754 : get_windowclause_startup_tuples(PlannerInfo *root, WindowClause *wc,
    2911             :                                 double input_tuples)
    2912             : {
    2913        2754 :     int         frameOptions = wc->frameOptions;
    2914             :     double      partition_tuples;
    2915             :     double      return_tuples;
    2916             :     double      peer_tuples;
    2917             : 
    2918             :     /*
    2919             :      * First, figure out how many partitions there are likely to be and set
    2920             :      * partition_tuples according to that estimate.
    2921             :      */
    2922        2754 :     if (wc->partitionClause != NIL)
    2923             :     {
    2924             :         double      num_partitions;
    2925         716 :         List       *partexprs = get_sortgrouplist_exprs(wc->partitionClause,
    2926         716 :                                                         root->parse->targetList);
    2927             : 
    2928         716 :         num_partitions = estimate_num_groups(root, partexprs, input_tuples,
    2929             :                                              NULL, NULL);
    2930         716 :         list_free(partexprs);
    2931             : 
    2932         716 :         partition_tuples = input_tuples / num_partitions;
    2933             :     }
    2934             :     else
    2935             :     {
    2936             :         /* all tuples belong to the same partition */
    2937        2038 :         partition_tuples = input_tuples;
    2938             :     }
    2939             : 
    2940             :     /* estimate the number of tuples in each peer group */
    2941        2754 :     if (wc->orderClause != NIL)
    2942             :     {
    2943             :         double      num_groups;
    2944             :         List       *orderexprs;
    2945             : 
    2946        2274 :         orderexprs = get_sortgrouplist_exprs(wc->orderClause,
    2947        2274 :                                              root->parse->targetList);
    2948             : 
    2949             :         /* estimate how many peer groups there are in the partition */
    2950        2274 :         num_groups = estimate_num_groups(root, orderexprs,
    2951             :                                          partition_tuples, NULL,
    2952             :                                          NULL);
    2953        2274 :         list_free(orderexprs);
    2954        2274 :         peer_tuples = partition_tuples / num_groups;
    2955             :     }
    2956             :     else
    2957             :     {
    2958             :         /* no ORDER BY so only 1 tuple belongs in each peer group */
    2959         480 :         peer_tuples = 1.0;
    2960             :     }
    2961             : 
    2962        2754 :     if (frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)
    2963             :     {
    2964             :         /* include all partition rows */
    2965         346 :         return_tuples = partition_tuples;
    2966             :     }
    2967        2408 :     else if (frameOptions & FRAMEOPTION_END_CURRENT_ROW)
    2968             :     {
    2969        1418 :         if (frameOptions & FRAMEOPTION_ROWS)
    2970             :         {
    2971             :             /* just count the current row */
    2972         608 :             return_tuples = 1.0;
    2973             :         }
    2974         810 :         else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS))
    2975             :         {
    2976             :             /*
    2977             :              * When in RANGE/GROUPS mode, it's more complex.  If there's no
    2978             :              * ORDER BY, then all rows in the partition are peers, otherwise
    2979             :              * we'll need to read the first group of peers.
    2980             :              */
    2981         810 :             if (wc->orderClause == NIL)
    2982         308 :                 return_tuples = partition_tuples;
    2983             :             else
    2984         502 :                 return_tuples = peer_tuples;
    2985             :         }
    2986             :         else
    2987             :         {
    2988             :             /*
    2989             :              * Something new we don't support yet?  This needs attention.
    2990             :              * We'll just return 1.0 in the meantime.
    2991             :              */
    2992             :             Assert(false);
    2993           0 :             return_tuples = 1.0;
    2994             :         }
    2995             :     }
    2996         990 :     else if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING)
    2997             :     {
    2998             :         /*
    2999             :          * BETWEEN ... AND N PRECEDING will only need to read the WindowAgg's
    3000             :          * subnode after N ROWS/RANGES/GROUPS.  N can be 0, but not negative,
    3001             :          * so we'll just assume only the current row needs to be read to fetch
    3002             :          * the first WindowAgg row.
    3003             :          */
    3004         108 :         return_tuples = 1.0;
    3005             :     }
    3006         882 :     else if (frameOptions & FRAMEOPTION_END_OFFSET_FOLLOWING)
    3007             :     {
    3008         882 :         Const      *endOffset = (Const *) wc->endOffset;
    3009             :         double      end_offset_value;
    3010             : 
    3011             :         /* try to figure out the value specified in the endOffset. */
    3012         882 :         if (IsA(endOffset, Const))
    3013             :         {
    3014         882 :             if (endOffset->constisnull)
    3015             :             {
    3016             :                 /*
    3017             :                  * NULLs are not allowed, but currently, there's no code to
    3018             :                  * error out if there's a NULL Const.  We'll only discover
    3019             :                  * this during execution.  For now, just pretend everything is
    3020             :                  * fine and assume that just the first row/range/group will be
    3021             :                  * needed.
    3022             :                  */
    3023           0 :                 end_offset_value = 1.0;
    3024             :             }
    3025             :             else
    3026             :             {
    3027         882 :                 switch (endOffset->consttype)
    3028             :                 {
    3029          24 :                     case INT2OID:
    3030          24 :                         end_offset_value =
    3031          24 :                             (double) DatumGetInt16(endOffset->constvalue);
    3032          24 :                         break;
    3033         132 :                     case INT4OID:
    3034         132 :                         end_offset_value =
    3035         132 :                             (double) DatumGetInt32(endOffset->constvalue);
    3036         132 :                         break;
    3037         384 :                     case INT8OID:
    3038         384 :                         end_offset_value =
    3039         384 :                             (double) DatumGetInt64(endOffset->constvalue);
    3040         384 :                         break;
    3041         342 :                     default:
    3042         342 :                         end_offset_value =
    3043         342 :                             partition_tuples / peer_tuples *
    3044             :                             DEFAULT_INEQ_SEL;
    3045         342 :                         break;
    3046             :                 }
    3047             :             }
    3048             :         }
    3049             :         else
    3050             :         {
    3051             :             /*
    3052             :              * When the end bound is not a Const, we have no basis for a real
    3053             :              * estimate, so we fall back on DEFAULT_INEQ_SEL as a guess.
    3054             :              */
    3055           0 :             end_offset_value =
    3056           0 :                 partition_tuples / peer_tuples * DEFAULT_INEQ_SEL;
    3057             :         }
    3058             : 
    3059         882 :         if (frameOptions & FRAMEOPTION_ROWS)
    3060             :         {
    3061             :             /* include the N FOLLOWING and the current row */
    3062         222 :             return_tuples = end_offset_value + 1.0;
    3063             :         }
    3064         660 :         else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS))
    3065             :         {
    3066             :             /* include the N FOLLOWING ranges/groups and the initial range/group */
    3067         660 :             return_tuples = peer_tuples * (end_offset_value + 1.0);
    3068             :         }
    3069             :         else
    3070             :         {
    3071             :             /*
    3072             :              * Something new we don't support yet?  This needs attention.
    3073             :              * We'll just return 1.0 in the meantime.
    3074             :              */
    3075             :             Assert(false);
    3076           0 :             return_tuples = 1.0;
    3077             :         }
    3078             :     }
    3079             :     else
    3080             :     {
    3081             :         /*
    3082             :          * Something new we don't support yet?  This needs attention.  We'll
    3083             :          * just return 1.0 in the meantime.
    3084             :          */
    3085             :         Assert(false);
    3086           0 :         return_tuples = 1.0;
    3087             :     }
    3088             : 
    3089        2754 :     if (wc->partitionClause != NIL || wc->orderClause != NIL)
    3090             :     {
    3091             :         /*
    3092             :          * Cap the return value to the estimated partition tuples and account
    3093             :          * for the extra tuple WindowAgg will need to read to confirm the next
    3094             :          * tuple does not belong to the same partition or peer group.
    3095             :          */
    3096        2474 :         return_tuples = Min(return_tuples + 1.0, partition_tuples);
    3097             :     }
    3098             :     else
    3099             :     {
    3100             :         /*
    3101             :          * Cap the return value so it's never higher than the expected tuples
    3102             :          * in the partition.
    3103             :          */
    3104         280 :         return_tuples = Min(return_tuples, partition_tuples);
    3105             :     }
    3106             : 
    3107             :     /*
    3108             :      * We needn't worry about any EXCLUDE options as those only exclude rows
    3109             :      * from being aggregated, not from being read from the WindowAgg's
    3110             :      * subnode.
    3111             :      */
    3112             : 
    3113        2754 :     return clamp_row_est(return_tuples);
    3114             : }
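                     : 
                     : /*
                     :  * Worked example (figures assumed for illustration): for a clause such as
                     :  * OVER (PARTITION BY p ORDER BY o) with the default RANGE ... CURRENT ROW
                     :  * frame, 10000 input tuples, an estimated 100 partitions and 10 peer
                     :  * groups per partition, we get partition_tuples = 100, peer_tuples = 10
                     :  * and return_tuples = peer_tuples = 10; the final estimate is then
                     :  * Min(10 + 1, 100) = 11 tuples read before the first output row.
                     :  */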
    3115             : 
    3116             : /*
    3117             :  * cost_windowagg
    3118             :  *      Determines and returns the cost of performing a WindowAgg plan node,
    3119             :  *      including the cost of its input.
    3120             :  *
    3121             :  * Input is assumed already properly sorted.
    3122             :  */
    3123             : void
    3124        2754 : cost_windowagg(Path *path, PlannerInfo *root,
    3125             :                List *windowFuncs, WindowClause *winclause,
    3126             :                int input_disabled_nodes,
    3127             :                Cost input_startup_cost, Cost input_total_cost,
    3128             :                double input_tuples)
    3129             : {
    3130             :     Cost        startup_cost;
    3131             :     Cost        total_cost;
    3132             :     double      startup_tuples;
    3133             :     int         numPartCols;
    3134             :     int         numOrderCols;
    3135             :     ListCell   *lc;
    3136             : 
    3137        2754 :     numPartCols = list_length(winclause->partitionClause);
    3138        2754 :     numOrderCols = list_length(winclause->orderClause);
    3139             : 
    3140        2754 :     startup_cost = input_startup_cost;
    3141        2754 :     total_cost = input_total_cost;
    3142             : 
    3143             :     /*
    3144             :      * Window functions are assumed to cost their stated execution cost, plus
    3145             :      * the cost of evaluating their input expressions, per tuple.  Since they
    3146             :      * may in fact evaluate their inputs at multiple rows during each cycle,
    3147             :      * this could be a drastic underestimate; but without a way to know how
    3148             :      * many rows the window function will fetch, it's hard to do better.  In
    3149             :      * any case, it's a good estimate for all the built-in window functions,
    3150             :      * so we'll just do this for now.
    3151             :      */
    3152        6246 :     foreach(lc, windowFuncs)
    3153             :     {
    3154        3492 :         WindowFunc *wfunc = lfirst_node(WindowFunc, lc);
    3155             :         Cost        wfunccost;
    3156             :         QualCost    argcosts;
    3157             : 
    3158        3492 :         argcosts.startup = argcosts.per_tuple = 0;
    3159        3492 :         add_function_cost(root, wfunc->winfnoid, (Node *) wfunc,
    3160             :                           &argcosts);
    3161        3492 :         startup_cost += argcosts.startup;
    3162        3492 :         wfunccost = argcosts.per_tuple;
    3163             : 
    3164             :         /* also add the input expressions' cost to per-input-row costs */
    3165        3492 :         cost_qual_eval_node(&argcosts, (Node *) wfunc->args, root);
    3166        3492 :         startup_cost += argcosts.startup;
    3167        3492 :         wfunccost += argcosts.per_tuple;
    3168             : 
    3169             :         /*
    3170             :          * Add the filter's cost to per-input-row costs.  XXX We should reduce
    3171             :          * input expression costs according to filter selectivity.
    3172             :          */
    3173        3492 :         cost_qual_eval_node(&argcosts, (Node *) wfunc->aggfilter, root);
    3174        3492 :         startup_cost += argcosts.startup;
    3175        3492 :         wfunccost += argcosts.per_tuple;
    3176             : 
    3177        3492 :         total_cost += wfunccost * input_tuples;
    3178             :     }
    3179             : 
    3180             :     /*
    3181             :      * We also charge cpu_operator_cost per grouping column per tuple for
    3182             :      * grouping comparisons, plus cpu_tuple_cost per tuple for general
    3183             :      * overhead.
    3184             :      *
    3185             :      * XXX this neglects costs of spooling the data to disk when it overflows
    3186             :      * work_mem.  Sooner or later that should get accounted for.
    3187             :      */
    3188        2754 :     total_cost += cpu_operator_cost * (numPartCols + numOrderCols) * input_tuples;
    3189        2754 :     total_cost += cpu_tuple_cost * input_tuples;
    3190             : 
    3191        2754 :     path->rows = input_tuples;
    3192        2754 :     path->disabled_nodes = input_disabled_nodes;
    3193        2754 :     path->startup_cost = startup_cost;
    3194        2754 :     path->total_cost = total_cost;
    3195             : 
    3196             :     /*
    3197             :      * Also, take into account how many tuples we need to read from the
    3198             :      * subnode in order to produce the first tuple from the WindowAgg.  To do
    3199             :      * this we prorate the run cost (total cost not including startup cost)
    3200             :      * over the estimated startup tuples.  We already included the startup
    3201             :      * cost of the subnode, so we only need to do this when the estimated
    3202             :      * startup tuples is above 1.0.
    3203             :      */
    3204        2754 :     startup_tuples = get_windowclause_startup_tuples(root, winclause,
    3205             :                                                      input_tuples);
    3206             : 
    3207        2754 :     if (startup_tuples > 1.0)
    3208        2466 :         path->startup_cost += (total_cost - startup_cost) / input_tuples *
    3209        2466 :             (startup_tuples - 1.0);
    3210        2754 : }
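                     : 
                     : /*
                     :  * Sketch of the startup adjustment above (assumed numbers): if
                     :  * input_tuples = 1000, startup_tuples = 11 and the run cost
                     :  * (total_cost - startup_cost) is 500, the WindowAgg's startup cost is
                     :  * increased by 500 / 1000 * (11 - 1) = 5 cost units.
                     :  */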
    3211             : 
    3212             : /*
    3213             :  * cost_group
    3214             :  *      Determines and returns the cost of performing a Group plan node,
    3215             :  *      including the cost of its input.
    3216             :  *
    3217             :  * Note: caller must ensure that input costs are for appropriately-sorted
    3218             :  * input.
    3219             :  */
    3220             : void
    3221        1214 : cost_group(Path *path, PlannerInfo *root,
    3222             :            int numGroupCols, double numGroups,
    3223             :            List *quals,
    3224             :            int input_disabled_nodes,
    3225             :            Cost input_startup_cost, Cost input_total_cost,
    3226             :            double input_tuples)
    3227             : {
    3228             :     double      output_tuples;
    3229             :     Cost        startup_cost;
    3230             :     Cost        total_cost;
    3231             : 
    3232        1214 :     output_tuples = numGroups;
    3233        1214 :     startup_cost = input_startup_cost;
    3234        1214 :     total_cost = input_total_cost;
    3235             : 
    3236             :     /*
    3237             :      * Charge one cpu_operator_cost per comparison per input tuple.  We
    3238             :      * assume all columns get compared for most of the tuples.
    3239             :      */
    3240        1214 :     total_cost += cpu_operator_cost * input_tuples * numGroupCols;
    3241             : 
    3242             :     /*
    3243             :      * If there are quals (HAVING quals), account for their cost and
    3244             :      * selectivity.
    3245             :      */
    3246        1214 :     if (quals)
    3247             :     {
    3248             :         QualCost    qual_cost;
    3249             : 
    3250           0 :         cost_qual_eval(&qual_cost, quals, root);
    3251           0 :         startup_cost += qual_cost.startup;
    3252           0 :         total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
    3253             : 
    3254           0 :         output_tuples = clamp_row_est(output_tuples *
    3255           0 :                                       clauselist_selectivity(root,
    3256             :                                                              quals,
    3257             :                                                              0,
    3258             :                                                              JOIN_INNER,
    3259             :                                                              NULL));
    3260             :     }
    3261             : 
    3262        1214 :     path->rows = output_tuples;
    3263        1214 :     path->disabled_nodes = input_disabled_nodes;
    3264        1214 :     path->startup_cost = startup_cost;
    3265        1214 :     path->total_cost = total_cost;
    3266        1214 : }
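                     : 
                     : /*
                     :  * For instance (assumed values): grouping 10000 sorted input tuples by
                     :  * two columns with the default cpu_operator_cost = 0.0025 adds
                     :  * 0.0025 * 10000 * 2 = 50 cost units to total_cost for the grouping
                     :  * comparisons.
                     :  */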
    3267             : 
    3268             : /*
    3269             :  * initial_cost_nestloop
    3270             :  *    Preliminary estimate of the cost of a nestloop join path.
    3271             :  *
    3272             :  * This must quickly produce lower-bound estimates of the path's startup and
    3273             :  * total costs.  If we are unable to eliminate the proposed path from
    3274             :  * consideration using the lower bounds, final_cost_nestloop will be called
    3275             :  * to obtain the final estimates.
    3276             :  *
    3277             :  * The exact division of labor between this function and final_cost_nestloop
    3278             :  * is private to them, and represents a tradeoff between speed of the initial
    3279             :  * estimate and getting a tight lower bound.  We choose to not examine the
    3280             :  * join quals here, since that's by far the most expensive part of the
    3281             :  * calculations.  The end result is that CPU-cost considerations must be
    3282             :  * left for the second phase; and for SEMI/ANTI joins, we must also postpone
    3283             :  * incorporation of the inner path's run cost.
    3284             :  *
    3285             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    3286             :  *      other data to be used by final_cost_nestloop
    3287             :  * 'jointype' is the type of join to be performed
    3288             :  * 'outer_path' is the outer input to the join
    3289             :  * 'inner_path' is the inner input to the join
    3290             :  * 'extra' contains miscellaneous information about the join
    3291             :  */
    3292             : void
    3293     2915176 : initial_cost_nestloop(PlannerInfo *root, JoinCostWorkspace *workspace,
    3294             :                       JoinType jointype,
    3295             :                       Path *outer_path, Path *inner_path,
    3296             :                       JoinPathExtraData *extra)
    3297             : {
    3298             :     int         disabled_nodes;
    3299     2915176 :     Cost        startup_cost = 0;
    3300     2915176 :     Cost        run_cost = 0;
    3301     2915176 :     double      outer_path_rows = outer_path->rows;
    3302             :     Cost        inner_rescan_start_cost;
    3303             :     Cost        inner_rescan_total_cost;
    3304             :     Cost        inner_run_cost;
    3305             :     Cost        inner_rescan_run_cost;
    3306             : 
    3307             :     /* Count up disabled nodes. */
    3308     2915176 :     disabled_nodes = enable_nestloop ? 0 : 1;
    3309     2915176 :     disabled_nodes += inner_path->disabled_nodes;
    3310     2915176 :     disabled_nodes += outer_path->disabled_nodes;
    3311             : 
    3312             :     /* estimate costs to rescan the inner relation */
    3313     2915176 :     cost_rescan(root, inner_path,
    3314             :                 &inner_rescan_start_cost,
    3315             :                 &inner_rescan_total_cost);
    3316             : 
    3317             :     /* cost of source data */
    3318             : 
    3319             :     /*
    3320             :      * NOTE: clearly, we must pay both outer and inner paths' startup_cost
    3321             :      * before we can start returning tuples, so the join's startup cost is
    3322             :      * their sum.  We'll also pay the inner path's rescan startup cost
    3323             :      * multiple times.
    3324             :      */
    3325     2915176 :     startup_cost += outer_path->startup_cost + inner_path->startup_cost;
    3326     2915176 :     run_cost += outer_path->total_cost - outer_path->startup_cost;
    3327     2915176 :     if (outer_path_rows > 1)
    3328     2072332 :         run_cost += (outer_path_rows - 1) * inner_rescan_start_cost;
    3329             : 
    3330     2915176 :     inner_run_cost = inner_path->total_cost - inner_path->startup_cost;
    3331     2915176 :     inner_rescan_run_cost = inner_rescan_total_cost - inner_rescan_start_cost;
    3332             : 
    3333     2915176 :     if (jointype == JOIN_SEMI || jointype == JOIN_ANTI ||
    3334     2852572 :         extra->inner_unique)
    3335             :     {
    3336             :         /*
    3337             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3338             :          * executor will stop after the first match.
    3339             :          *
    3340             :          * Getting decent estimates requires inspection of the join quals,
    3341             :          * which we choose to postpone to final_cost_nestloop.
    3342             :          */
    3343             : 
    3344             :         /* Save private data for final_cost_nestloop */
    3345     1333264 :         workspace->inner_run_cost = inner_run_cost;
    3346     1333264 :         workspace->inner_rescan_run_cost = inner_rescan_run_cost;
    3347             :     }
    3348             :     else
    3349             :     {
    3350             :         /* Normal case; we'll scan whole input rel for each outer row */
    3351     1581912 :         run_cost += inner_run_cost;
    3352     1581912 :         if (outer_path_rows > 1)
    3353     1138416 :             run_cost += (outer_path_rows - 1) * inner_rescan_run_cost;
    3354             :     }
    3355             : 
    3356             :     /* CPU costs left for later */
    3357             : 
    3358             :     /* Public result fields */
    3359     2915176 :     workspace->disabled_nodes = disabled_nodes;
    3360     2915176 :     workspace->startup_cost = startup_cost;
    3361     2915176 :     workspace->total_cost = startup_cost + run_cost;
    3362             :     /* Save private data for final_cost_nestloop */
    3363     2915176 :     workspace->run_cost = run_cost;
    3364     2915176 : }
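                     : 
                     : /*
                     :  * Rough worked example for the normal (non-unique, non-SEMI/ANTI) case
                     :  * above, with assumed inputs: outer_path_rows = 100, inner_run_cost = 10,
                     :  * inner_rescan_start_cost = 0 and inner_rescan_run_cost = 5 give
                     :  * run_cost = outer run cost + 10 + 99 * 5, i.e. the inner path is paid
                     :  * once in full and 99 times at its (possibly cheaper) rescan cost.
                     :  */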
    3365             : 
    3366             : /*
    3367             :  * final_cost_nestloop
    3368             :  *    Final estimate of the cost and result size of a nestloop join path.
    3369             :  *
    3370             :  * 'path' is already filled in except for the rows and cost fields
    3371             :  * 'workspace' is the result from initial_cost_nestloop
    3372             :  * 'extra' contains miscellaneous information about the join
    3373             :  */
    3374             : void
    3375     1405576 : final_cost_nestloop(PlannerInfo *root, NestPath *path,
    3376             :                     JoinCostWorkspace *workspace,
    3377             :                     JoinPathExtraData *extra)
    3378             : {
    3379     1405576 :     Path       *outer_path = path->jpath.outerjoinpath;
    3380     1405576 :     Path       *inner_path = path->jpath.innerjoinpath;
    3381     1405576 :     double      outer_path_rows = outer_path->rows;
    3382     1405576 :     double      inner_path_rows = inner_path->rows;
    3383     1405576 :     Cost        startup_cost = workspace->startup_cost;
    3384     1405576 :     Cost        run_cost = workspace->run_cost;
    3385             :     Cost        cpu_per_tuple;
    3386             :     QualCost    restrict_qual_cost;
    3387             :     double      ntuples;
    3388             : 
    3389             :     /* Set the number of disabled nodes. */
    3390     1405576 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    3391             : 
    3392             :     /* Protect some assumptions below that rowcounts aren't zero */
    3393     1405576 :     if (outer_path_rows <= 0)
    3394           0 :         outer_path_rows = 1;
    3395     1405576 :     if (inner_path_rows <= 0)
    3396         702 :         inner_path_rows = 1;
    3397             :     /* Mark the path with the correct row estimate */
    3398     1405576 :     if (path->jpath.path.param_info)
    3399       29138 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    3400             :     else
    3401     1376438 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    3402             : 
    3403             :     /* For partial paths, scale row estimate. */
    3404     1405576 :     if (path->jpath.path.parallel_workers > 0)
    3405             :     {
    3406       12758 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    3407             : 
    3408       12758 :         path->jpath.path.rows =
    3409       12758 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    3410             :     }
    3411             : 
    3412             :     /* cost of inner-relation source data (we already dealt with outer rel) */
    3413             : 
    3414     1405576 :     if (path->jpath.jointype == JOIN_SEMI || path->jpath.jointype == JOIN_ANTI ||
    3415     1362990 :         extra->inner_unique)
    3416      910432 :     {
    3417             :         /*
    3418             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3419             :          * executor will stop after the first match.
    3420             :          */
    3421      910432 :         Cost        inner_run_cost = workspace->inner_run_cost;
    3422      910432 :         Cost        inner_rescan_run_cost = workspace->inner_rescan_run_cost;
    3423             :         double      outer_matched_rows;
    3424             :         double      outer_unmatched_rows;
    3425             :         Selectivity inner_scan_frac;
    3426             : 
    3427             :         /*
    3428             :          * For an outer-rel row that has at least one match, we can expect the
    3429             :          * inner scan to stop after a fraction 1/(match_count+1) of the inner
    3430             :          * rows, if the matches are evenly distributed.  Since they probably
    3431             :          * aren't quite evenly distributed, we apply a fuzz factor of 2.0 to
    3432             :          * that fraction.  (If we used a larger fuzz factor, we'd have to
    3433             :          * clamp inner_scan_frac to at most 1.0; but since match_count is at
    3434             :          * least 1, no such clamp is needed now.)
    3435             :          */
    3436      910432 :         outer_matched_rows = rint(outer_path_rows * extra->semifactors.outer_match_frac);
    3437      910432 :         outer_unmatched_rows = outer_path_rows - outer_matched_rows;
    3438      910432 :         inner_scan_frac = 2.0 / (extra->semifactors.match_count + 1.0);
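                     : 
                     :         /*
                     :          * For example (illustrative only): match_count = 3 gives
                     :          * inner_scan_frac = 2.0 / 4.0 = 0.5, i.e. we expect each matched
                     :          * outer row to scan about half of the inner rows before stopping.
                     :          */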
    3439             : 
    3440             :         /*
    3441             :          * Compute number of tuples processed (not number emitted!).  First,
    3442             :          * account for successfully-matched outer rows.
    3443             :          */
    3444      910432 :         ntuples = outer_matched_rows * inner_path_rows * inner_scan_frac;
    3445             : 
    3446             :         /*
    3447             :          * Now we need to estimate the actual costs of scanning the inner
    3448             :          * relation, which may be quite a bit less than N times inner_run_cost
    3449             :          * due to early scan stops.  We consider two cases.  If the inner path
    3450             :          * is an indexscan using all the joinquals as indexquals, then an
    3451             :          * unmatched outer row results in an indexscan returning no rows,
    3452             :          * which is probably quite cheap.  Otherwise, the executor will have
    3453             :          * to scan the whole inner rel for an unmatched row; not so cheap.
    3454             :          */
    3455      910432 :         if (has_indexed_join_quals(path))
    3456             :         {
    3457             :             /*
    3458             :              * Successfully-matched outer rows will only require scanning
    3459             :              * inner_scan_frac of the inner relation.  In this case, we don't
    3460             :              * need to charge the full inner_run_cost even when that's more
    3461             :              * than inner_rescan_run_cost, because we can assume that none of
    3462             :              * the inner scans ever scan the whole inner relation.  So it's
    3463             :              * okay to assume that all the inner scan executions can be
    3464             :              * fractions of the full cost, even if materialization is reducing
    3465             :              * the rescan cost.  At this writing, it's impossible to get here
    3466             :              * for a materialized inner scan, so inner_run_cost and
    3467             :              * inner_rescan_run_cost will be the same anyway; but just in
    3468             :              * case, use inner_run_cost for the first matched tuple and
    3469             :              * inner_rescan_run_cost for additional ones.
    3470             :              */
    3471      149778 :             run_cost += inner_run_cost * inner_scan_frac;
    3472      149778 :             if (outer_matched_rows > 1)
    3473       20642 :                 run_cost += (outer_matched_rows - 1) * inner_rescan_run_cost * inner_scan_frac;
    3474             : 
    3475             :             /*
    3476             :              * Add the cost of inner-scan executions for unmatched outer rows.
    3477             :              * We estimate this as the same cost as returning the first tuple
    3478             :              * of a nonempty scan.  We consider that these are all rescans,
    3479             :              * since we used inner_run_cost once already.
    3480             :              */
    3481      149778 :             run_cost += outer_unmatched_rows *
    3482      149778 :                 inner_rescan_run_cost / inner_path_rows;
    3483             : 
    3484             :             /*
    3485             :              * We won't be evaluating any quals at all for unmatched rows, so
    3486             :              * don't add them to ntuples.
    3487             :              */
    3488             :         }
    3489             :         else
    3490             :         {
    3491             :             /*
    3492             :              * Here, a complicating factor is that rescans may be cheaper than
    3493             :              * first scans.  If we never scan all the way to the end of the
    3494             :              * inner rel, it might be (depending on the plan type) that we'd
    3495             :              * never pay the whole inner first-scan run cost.  However it is
    3496             :              * difficult to estimate whether that will happen (and it could
    3497             :              * not happen if there are any unmatched outer rows!), so be
    3498             :              * conservative and always charge the whole first-scan cost once.
    3499             :              * We consider this charge to correspond to the first unmatched
    3500             :              * outer row, unless there isn't one in our estimate, in which
    3501             :              * case blame it on the first matched row.
    3502             :              */
    3503             : 
    3504             :             /* First, count all unmatched join tuples as being processed */
    3505      760654 :             ntuples += outer_unmatched_rows * inner_path_rows;
    3506             : 
    3507             :             /* Now add the forced full scan, and decrement appropriate count */
    3508      760654 :             run_cost += inner_run_cost;
    3509      760654 :             if (outer_unmatched_rows >= 1)
    3510      728896 :                 outer_unmatched_rows -= 1;
    3511             :             else
    3512       31758 :                 outer_matched_rows -= 1;
    3513             : 
    3514             :             /* Add inner run cost for additional outer tuples having matches */
    3515      760654 :             if (outer_matched_rows > 0)
    3516      268356 :                 run_cost += outer_matched_rows * inner_rescan_run_cost * inner_scan_frac;
    3517             : 
    3518             :             /* Add inner run cost for additional unmatched outer tuples */
    3519      760654 :             if (outer_unmatched_rows > 0)
    3520      507888 :                 run_cost += outer_unmatched_rows * inner_rescan_run_cost;
    3521             :         }
    3522             :     }
    3523             :     else
    3524             :     {
    3525             :         /* Normal-case source costs were included in preliminary estimate */
    3526             : 
    3527             :         /* Compute number of tuples processed (not number emitted!) */
    3528      495144 :         ntuples = outer_path_rows * inner_path_rows;
    3529             :     }
    3530             : 
    3531             :     /* CPU costs */
    3532     1405576 :     cost_qual_eval(&restrict_qual_cost, path->jpath.joinrestrictinfo, root);
    3533     1405576 :     startup_cost += restrict_qual_cost.startup;
    3534     1405576 :     cpu_per_tuple = cpu_tuple_cost + restrict_qual_cost.per_tuple;
    3535     1405576 :     run_cost += cpu_per_tuple * ntuples;
    3536             : 
    3537             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    3538     1405576 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    3539     1405576 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    3540             : 
    3541     1405576 :     path->jpath.path.startup_cost = startup_cost;
    3542     1405576 :     path->jpath.path.total_cost = startup_cost + run_cost;
    3543     1405576 : }
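                     : 
                     : /*
                     :  * Worked example for the SEMI/ANTI accounting above (all values assumed):
                     :  * with outer_path_rows = 1000, outer_match_frac = 0.2 and match_count = 4,
                     :  * outer_matched_rows = 200, outer_unmatched_rows = 800 and
                     :  * inner_scan_frac = 0.4; with inner_path_rows = 50 the matched rows
                     :  * contribute 200 * 50 * 0.4 = 4000 processed tuples, and in the
                     :  * non-indexed branch the unmatched rows add 800 * 50 = 40000 more.
                     :  */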
    3544             : 
    3545             : /*
    3546             :  * initial_cost_mergejoin
    3547             :  *    Preliminary estimate of the cost of a mergejoin path.
    3548             :  *
    3549             :  * This must quickly produce lower-bound estimates of the path's startup and
    3550             :  * total costs.  If we are unable to eliminate the proposed path from
    3551             :  * consideration using the lower bounds, final_cost_mergejoin will be called
    3552             :  * to obtain the final estimates.
    3553             :  *
    3554             :  * The exact division of labor between this function and final_cost_mergejoin
    3555             :  * is private to them, and represents a tradeoff between speed of the initial
    3556             :  * estimate and getting a tight lower bound.  We choose to not examine the
    3557             :  * join quals here, except for obtaining the scan selectivity estimate which
    3558             :  * is really essential (but fortunately, use of caching keeps the cost of
    3559             :  * getting that down to something reasonable).
    3560             :  * We also assume that cost_sort/cost_incremental_sort is cheap enough to use
    3561             :  * here.
    3562             :  *
    3563             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    3564             :  *      other data to be used by final_cost_mergejoin
    3565             :  * 'jointype' is the type of join to be performed
    3566             :  * 'mergeclauses' is the list of joinclauses to be used as merge clauses
    3567             :  * 'outer_path' is the outer input to the join
    3568             :  * 'inner_path' is the inner input to the join
    3569             :  * 'outersortkeys' is the list of sort keys for the outer path
    3570             :  * 'innersortkeys' is the list of sort keys for the inner path
    3571             :  * 'outer_presorted_keys' is the number of presorted keys of the outer path
    3572             :  * 'extra' contains miscellaneous information about the join
    3573             :  *
    3574             :  * Note: outersortkeys and innersortkeys should be NIL if no explicit
    3575             :  * sort is needed because the respective source path is already ordered.
    3576             :  */
    3577             : void
    3578     1284824 : initial_cost_mergejoin(PlannerInfo *root, JoinCostWorkspace *workspace,
    3579             :                        JoinType jointype,
    3580             :                        List *mergeclauses,
    3581             :                        Path *outer_path, Path *inner_path,
    3582             :                        List *outersortkeys, List *innersortkeys,
    3583             :                        int outer_presorted_keys,
    3584             :                        JoinPathExtraData *extra)
    3585             : {
    3586             :     int         disabled_nodes;
    3587     1284824 :     Cost        startup_cost = 0;
    3588     1284824 :     Cost        run_cost = 0;
    3589     1284824 :     double      outer_path_rows = outer_path->rows;
    3590     1284824 :     double      inner_path_rows = inner_path->rows;
    3591             :     Cost        inner_run_cost;
    3592             :     double      outer_rows,
    3593             :                 inner_rows,
    3594             :                 outer_skip_rows,
    3595             :                 inner_skip_rows;
    3596             :     Selectivity outerstartsel,
    3597             :                 outerendsel,
    3598             :                 innerstartsel,
    3599             :                 innerendsel;
    3600             :     Path        sort_path;      /* dummy for result of
    3601             :                                  * cost_sort/cost_incremental_sort */
    3602             : 
    3603             :     /* Protect some assumptions below that rowcounts aren't zero */
    3604     1284824 :     if (outer_path_rows <= 0)
    3605          96 :         outer_path_rows = 1;
    3606     1284824 :     if (inner_path_rows <= 0)
    3607         126 :         inner_path_rows = 1;
    3608             : 
    3609             :     /*
    3610             :      * A merge join will stop as soon as it exhausts either input stream
    3611             :      * (unless it's an outer join, in which case the outer side has to be
    3612             :      * scanned all the way anyway).  Estimate fraction of the left and right
    3613             :      * inputs that will actually need to be scanned.  Likewise, we can
    3614             :      * estimate the number of rows that will be skipped before the first join
    3615             :      * pair is found, which should be factored into startup cost. We use only
    3616             :      * the first (most significant) merge clause for this purpose. Since
    3617             :      * mergejoinscansel() is a fairly expensive computation, we cache the
    3618             :      * results in the merge clause RestrictInfo.
    3619             :      */
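                     : 
                     :     /*
                     :      * For instance (illustrative only): if the first merge clause is
                     :      * o.x = i.x, the outer x values span 1..100 and the inner span 1..50,
                     :      * the join can stop once the outer scan passes 50, so outerendsel
                     :      * would come out near 0.5 while innerendsel stays at 1.0.
                     :      */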
    3620     1284824 :     if (mergeclauses && jointype != JOIN_FULL)
    3621     1278684 :     {
    3622     1278684 :         RestrictInfo *firstclause = (RestrictInfo *) linitial(mergeclauses);
    3623             :         List       *opathkeys;
    3624             :         List       *ipathkeys;
    3625             :         PathKey    *opathkey;
    3626             :         PathKey    *ipathkey;
    3627             :         MergeScanSelCache *cache;
    3628             : 
    3629             :         /* Get the input pathkeys to determine the sort-order details */
    3630     1278684 :         opathkeys = outersortkeys ? outersortkeys : outer_path->pathkeys;
    3631     1278684 :         ipathkeys = innersortkeys ? innersortkeys : inner_path->pathkeys;
    3632             :         Assert(opathkeys);
    3633             :         Assert(ipathkeys);
    3634     1278684 :         opathkey = (PathKey *) linitial(opathkeys);
    3635     1278684 :         ipathkey = (PathKey *) linitial(ipathkeys);
    3636             :         /* debugging check */
    3637     1278684 :         if (opathkey->pk_opfamily != ipathkey->pk_opfamily ||
    3638     1278684 :             opathkey->pk_eclass->ec_collation != ipathkey->pk_eclass->ec_collation ||
    3639     1278684 :             opathkey->pk_cmptype != ipathkey->pk_cmptype ||
    3640     1278684 :             opathkey->pk_nulls_first != ipathkey->pk_nulls_first)
    3641           0 :             elog(ERROR, "left and right pathkeys do not match in mergejoin");
    3642             : 
    3643             :         /* Get the selectivity with caching */
    3644     1278684 :         cache = cached_scansel(root, firstclause, opathkey);
    3645             : 
    3646     1278684 :         if (bms_is_subset(firstclause->left_relids,
    3647     1278684 :                           outer_path->parent->relids))
    3648             :         {
    3649             :             /* left side of clause is outer */
    3650      687210 :             outerstartsel = cache->leftstartsel;
    3651      687210 :             outerendsel = cache->leftendsel;
    3652      687210 :             innerstartsel = cache->rightstartsel;
    3653      687210 :             innerendsel = cache->rightendsel;
    3654             :         }
    3655             :         else
    3656             :         {
    3657             :             /* left side of clause is inner */
    3658      591474 :             outerstartsel = cache->rightstartsel;
    3659      591474 :             outerendsel = cache->rightendsel;
    3660      591474 :             innerstartsel = cache->leftstartsel;
    3661      591474 :             innerendsel = cache->leftendsel;
    3662             :         }
    3663     1278684 :         if (jointype == JOIN_LEFT ||
    3664             :             jointype == JOIN_ANTI)
    3665             :         {
    3666      222462 :             outerstartsel = 0.0;
    3667      222462 :             outerendsel = 1.0;
    3668             :         }
    3669     1056222 :         else if (jointype == JOIN_RIGHT ||
    3670             :                  jointype == JOIN_RIGHT_ANTI)
    3671             :         {
    3672      215144 :             innerstartsel = 0.0;
    3673      215144 :             innerendsel = 1.0;
    3674             :         }
    3675             :     }
    3676             :     else
    3677             :     {
    3678             :         /* cope with clauseless or full mergejoin */
    3679        6140 :         outerstartsel = innerstartsel = 0.0;
    3680        6140 :         outerendsel = innerendsel = 1.0;
    3681             :     }
    3682             : 
    3683             :     /*
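                     : 
                     :     /*
                     :      * Example (assumed selectivities): outer_path_rows = 1000 with
                     :      * outerstartsel = 0.1 and outerendsel = 0.8 gives outer_skip_rows =
                     :      * 100 and outer_rows = 800; the skipped 10% of the input is charged
                     :      * to startup cost below, and the 70% between the two bounds to run
                     :      * cost.
                     :      */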
    3684             :      * Convert selectivities to row counts.  We force outer_rows and
    3685             :      * inner_rows to be at least 1, but the skip_rows estimates can be zero.
    3686             :      */
    3687     1284824 :     outer_skip_rows = rint(outer_path_rows * outerstartsel);
    3688     1284824 :     inner_skip_rows = rint(inner_path_rows * innerstartsel);
    3689     1284824 :     outer_rows = clamp_row_est(outer_path_rows * outerendsel);
    3690     1284824 :     inner_rows = clamp_row_est(inner_path_rows * innerendsel);
    3691             : 
    3692             :     Assert(outer_skip_rows <= outer_rows);
    3693             :     Assert(inner_skip_rows <= inner_rows);
    3694             : 
    3695             :     /*
    3696             :      * Readjust scan selectivities to account for above rounding.  This is
    3697             :      * normally an insignificant effect, but when there are only a few rows in
    3698             :      * the inputs, failing to do this makes for a large percentage error.
    3699             :      */
    3700     1284824 :     outerstartsel = outer_skip_rows / outer_path_rows;
    3701     1284824 :     innerstartsel = inner_skip_rows / inner_path_rows;
    3702     1284824 :     outerendsel = outer_rows / outer_path_rows;
    3703     1284824 :     innerendsel = inner_rows / inner_path_rows;
    3704             : 
    3705             :     Assert(outerstartsel <= outerendsel);
    3706             :     Assert(innerstartsel <= innerendsel);
    3707             : 
    3708     1284824 :     disabled_nodes = enable_mergejoin ? 0 : 1;
    3709             : 
    3710             :     /* cost of source data */
    3711             : 
    3712     1284824 :     if (outersortkeys)          /* do we need to sort outer? */
    3713             :     {
    3714             :         /*
    3715             :          * We can assert that the outer path is not already ordered
    3716             :          * appropriately for the mergejoin; otherwise, outersortkeys would
    3717             :          * have been set to NIL.
    3718             :          */
    3719             :         Assert(!pathkeys_contained_in(outersortkeys, outer_path->pathkeys));
    3720             : 
    3721             :         /*
    3722             :          * We choose to use incremental sort if it is enabled and there are
    3723             :          * presorted keys; otherwise we use full sort.
    3724             :          */
    3725      616130 :         if (enable_incremental_sort && outer_presorted_keys > 0)
    3726             :         {
    3727        1978 :             cost_incremental_sort(&sort_path,
    3728             :                                   root,
    3729             :                                   outersortkeys,
    3730             :                                   outer_presorted_keys,
    3731             :                                   outer_path->disabled_nodes,
    3732             :                                   outer_path->startup_cost,
    3733             :                                   outer_path->total_cost,
    3734             :                                   outer_path_rows,
    3735        1978 :                                   outer_path->pathtarget->width,
    3736             :                                   0.0,
    3737             :                                   work_mem,
    3738             :                                   -1.0);
    3739             :         }
    3740             :         else
    3741             :         {
    3742      614152 :             cost_sort(&sort_path,
    3743             :                       root,
    3744             :                       outersortkeys,
    3745             :                       outer_path->disabled_nodes,
    3746             :                       outer_path->total_cost,
    3747             :                       outer_path_rows,
    3748      614152 :                       outer_path->pathtarget->width,
    3749             :                       0.0,
    3750             :                       work_mem,
    3751             :                       -1.0);
    3752             :         }
    3753             : 
    3754      616130 :         disabled_nodes += sort_path.disabled_nodes;
    3755      616130 :         startup_cost += sort_path.startup_cost;
    3756      616130 :         startup_cost += (sort_path.total_cost - sort_path.startup_cost)
    3757      616130 :             * outerstartsel;
    3758      616130 :         run_cost += (sort_path.total_cost - sort_path.startup_cost)
    3759      616130 :             * (outerendsel - outerstartsel);
    3760             :     }
    3761             :     else
    3762             :     {
    3763      668694 :         disabled_nodes += outer_path->disabled_nodes;
    3764      668694 :         startup_cost += outer_path->startup_cost;
    3765      668694 :         startup_cost += (outer_path->total_cost - outer_path->startup_cost)
    3766      668694 :             * outerstartsel;
    3767      668694 :         run_cost += (outer_path->total_cost - outer_path->startup_cost)
    3768      668694 :             * (outerendsel - outerstartsel);
    3769             :     }
    3770             : 
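                     :     /*
                     :      * The apportionment used above for the outer side (and below for the
                     :      * inner side) can be illustrated with made-up numbers: if a source
                     :      * path has startup_cost = 10 and total_cost = 110 (run cost 100),
                     :      * and its start/end selectivities are 0.1 and 0.75, we charge
                     :      * 10 + 100 * 0.1 = 20 to startup_cost and 100 * (0.75 - 0.1) = 65 to
                     :      * run_cost; the final 25% of the run cost is never charged, because
                     :      * the merge is expected to stop before fetching those rows.
                     :      */
                     :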
    3771     1284824 :     if (innersortkeys)          /* do we need to sort inner? */
    3772             :     {
    3773             :         /*
    3774             :          * We can assert that the inner path is not already ordered
    3775             :          * appropriately for the mergejoin; otherwise, innersortkeys would
    3776             :          * have been set to NIL.
    3777             :          */
    3778             :         Assert(!pathkeys_contained_in(innersortkeys, inner_path->pathkeys));
    3779             : 
    3780             :         /*
    3781             :          * We do not consider incremental sort for the inner path, because
    3782             :          * incremental sort does not support mark/restore.
    3783             :          */
    3784             : 
    3785     1006564 :         cost_sort(&sort_path,
    3786             :                   root,
    3787             :                   innersortkeys,
    3788             :                   inner_path->disabled_nodes,
    3789             :                   inner_path->total_cost,
    3790             :                   inner_path_rows,
    3791     1006564 :                   inner_path->pathtarget->width,
    3792             :                   0.0,
    3793             :                   work_mem,
    3794             :                   -1.0);
    3795     1006564 :         disabled_nodes += sort_path.disabled_nodes;
    3796     1006564 :         startup_cost += sort_path.startup_cost;
    3797     1006564 :         startup_cost += (sort_path.total_cost - sort_path.startup_cost)
    3798     1006564 :             * innerstartsel;
    3799     1006564 :         inner_run_cost = (sort_path.total_cost - sort_path.startup_cost)
    3800     1006564 :             * (innerendsel - innerstartsel);
    3801             :     }
    3802             :     else
    3803             :     {
    3804      278260 :         disabled_nodes += inner_path->disabled_nodes;
    3805      278260 :         startup_cost += inner_path->startup_cost;
    3806      278260 :         startup_cost += (inner_path->total_cost - inner_path->startup_cost)
    3807      278260 :             * innerstartsel;
    3808      278260 :         inner_run_cost = (inner_path->total_cost - inner_path->startup_cost)
    3809      278260 :             * (innerendsel - innerstartsel);
    3810             :     }
    3811             : 
    3812             :     /*
    3813             :      * We can't yet determine whether rescanning occurs, or whether
    3814             :      * materialization of the inner input should be done.  The minimum
    3815             :      * possible inner input cost, regardless of rescan and materialization
    3816             :      * considerations, is inner_run_cost.  We include that in
    3817             :      * workspace->total_cost, but not yet in run_cost.
    3818             :      */
    3819             : 
    3820             :     /* CPU costs left for later */
    3821             : 
    3822             :     /* Public result fields */
    3823     1284824 :     workspace->disabled_nodes = disabled_nodes;
    3824     1284824 :     workspace->startup_cost = startup_cost;
    3825     1284824 :     workspace->total_cost = startup_cost + run_cost + inner_run_cost;
    3826             :     /* Save private data for final_cost_mergejoin */
    3827     1284824 :     workspace->run_cost = run_cost;
    3828     1284824 :     workspace->inner_run_cost = inner_run_cost;
    3829     1284824 :     workspace->outer_rows = outer_rows;
    3830     1284824 :     workspace->inner_rows = inner_rows;
    3831     1284824 :     workspace->outer_skip_rows = outer_skip_rows;
    3832     1284824 :     workspace->inner_skip_rows = inner_skip_rows;
    3833     1284824 : }
    3834             : 
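                     : /*
                     :  * Illustrative calling pattern (a simplified sketch with argument lists
                     :  * abbreviated; the real driver is try_mergejoin_path() in joinpath.c):
                     :  *
                     :  *     initial_cost_mergejoin(root, &workspace, jointype, mergeclauses,
                     :  *                            outer_path, inner_path, ...);
                     :  *     if (add_path_precheck(joinrel, workspace.disabled_nodes,
                     :  *                           workspace.startup_cost, workspace.total_cost,
                     :  *                           pathkeys, required_outer))
                     :  *         add_path(joinrel, (Path *) create_mergejoin_path(...));
                     :  *
                     :  * final_cost_mergejoin() is then reached via create_mergejoin_path(), so
                     :  * the expensive parts of the costing are paid only for paths that survive
                     :  * the lower-bound precheck.
                     :  */
                     :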
    3835             : /*
    3836             :  * final_cost_mergejoin
    3837             :  *    Final estimate of the cost and result size of a mergejoin path.
    3838             :  *
    3839             :  * Unlike other costsize functions, this routine makes two actual decisions:
    3840             :  * whether the executor will need to do mark/restore, and whether we should
    3841             :  * materialize the inner path.  It would be logically cleaner to build
    3842             :  * separate paths testing these alternatives, but that would require repeating
    3843             :  * most of the cost calculations, which are not all that cheap.  Since the
    3844             :  * choice will not affect output pathkeys or startup cost, only total cost,
    3845             :  * there is no possibility of wanting to keep more than one path.  So it seems
    3846             :  * best to make the decisions here and record them in the path's
    3847             :  * skip_mark_restore and materialize_inner fields.
    3848             :  *
    3849             :  * Mark/restore overhead is usually required, but can be skipped if we know
    3850             :  * that the executor need find only one match per outer tuple, and that the
    3851             :  * mergeclauses are sufficient to identify a match.
    3852             :  *
    3853             :  * We materialize the inner path if we need mark/restore and either the inner
    3854             :  * path can't support mark/restore, or it's cheaper to use an interposed
    3855             :  * Material node to handle mark/restore.
    3856             :  *
    3857             :  * 'path' is already filled in except for the rows and cost fields and
    3858             :  *      skip_mark_restore and materialize_inner
    3859             :  * 'workspace' is the result from initial_cost_mergejoin
    3860             :  * 'extra' contains miscellaneous information about the join
    3861             :  */
    3862             : void
    3863      327262 : final_cost_mergejoin(PlannerInfo *root, MergePath *path,
    3864             :                      JoinCostWorkspace *workspace,
    3865             :                      JoinPathExtraData *extra)
    3866             : {
    3867      327262 :     Path       *outer_path = path->jpath.outerjoinpath;
    3868      327262 :     Path       *inner_path = path->jpath.innerjoinpath;
    3869      327262 :     double      inner_path_rows = inner_path->rows;
    3870      327262 :     List       *mergeclauses = path->path_mergeclauses;
    3871      327262 :     List       *innersortkeys = path->innersortkeys;
    3872      327262 :     Cost        startup_cost = workspace->startup_cost;
    3873      327262 :     Cost        run_cost = workspace->run_cost;
    3874      327262 :     Cost        inner_run_cost = workspace->inner_run_cost;
    3875      327262 :     double      outer_rows = workspace->outer_rows;
    3876      327262 :     double      inner_rows = workspace->inner_rows;
    3877      327262 :     double      outer_skip_rows = workspace->outer_skip_rows;
    3878      327262 :     double      inner_skip_rows = workspace->inner_skip_rows;
    3879             :     Cost        cpu_per_tuple,
    3880             :                 bare_inner_cost,
    3881             :                 mat_inner_cost;
    3882             :     QualCost    merge_qual_cost;
    3883             :     QualCost    qp_qual_cost;
    3884             :     double      mergejointuples,
    3885             :                 rescannedtuples;
    3886             :     double      rescanratio;
    3887             : 
    3888             :     /* Set the number of disabled nodes. */
    3889      327262 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    3890             : 
    3891             :     /* Protect some assumptions below that rowcounts aren't zero */
    3892      327262 :     if (inner_path_rows <= 0)
    3893          90 :         inner_path_rows = 1;
    3894             : 
    3895             :     /* Mark the path with the correct row estimate */
    3896      327262 :     if (path->jpath.path.param_info)
    3897         764 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    3898             :     else
    3899      326498 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    3900             : 
    3901             :     /* For partial paths, scale row estimate. */
    3902      327262 :     if (path->jpath.path.parallel_workers > 0)
    3903             :     {
    3904        9474 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    3905             : 
    3906        9474 :         path->jpath.path.rows =
    3907        9474 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    3908             :     }
    3909             : 
    3910             :     /*
    3911             :      * Compute cost of the mergequals and qpquals (other restriction clauses)
    3912             :      * separately.
    3913             :      */
    3914      327262 :     cost_qual_eval(&merge_qual_cost, mergeclauses, root);
    3915      327262 :     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
    3916      327262 :     qp_qual_cost.startup -= merge_qual_cost.startup;
    3917      327262 :     qp_qual_cost.per_tuple -= merge_qual_cost.per_tuple;
    3918             : 
    3919             :     /*
    3920             :      * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3921             :      * executor will stop scanning for matches after the first match.  When
    3922             :      * all the joinclauses are merge clauses, this means we don't ever need to
    3923             :      * back up the merge, and so we can skip mark/restore overhead.
    3924             :      */
    3925      327262 :     if ((path->jpath.jointype == JOIN_SEMI ||
    3926      320450 :          path->jpath.jointype == JOIN_ANTI ||
    3927      460500 :          extra->inner_unique) &&
    3928      147510 :         (list_length(path->jpath.joinrestrictinfo) ==
    3929      147510 :          list_length(path->path_mergeclauses)))
    3930      123870 :         path->skip_mark_restore = true;
    3931             :     else
    3932      203392 :         path->skip_mark_restore = false;
    3933             : 
    3934             :     /*
    3935             :      * Get approx # tuples passing the mergequals.  We use approx_tuple_count
    3936             :      * here because we need an estimate done with JOIN_INNER semantics.
    3937             :      */
    3938      327262 :     mergejointuples = approx_tuple_count(root, &path->jpath, mergeclauses);
    3939             : 
    3940             :     /*
    3941             :      * When there are equal merge keys in the outer relation, the mergejoin
    3942             :      * must rescan any matching tuples in the inner relation. This means
    3943             :      * re-fetching inner tuples; we have to estimate how often that happens.
    3944             :      *
    3945             :      * For regular inner and outer joins, the number of re-fetches can be
    3946             :      * estimated approximately as size of merge join output minus size of
    3947             :      * inner relation. Assume that the distinct key values are 1, 2, ..., and
    3948             :      * denote the number of values of each key in the outer relation as m1,
    3949             :      * m2, ...; in the inner relation, n1, n2, ...  Then we have
    3950             :      *
    3951             :      * size of join = m1 * n1 + m2 * n2 + ...
    3952             :      *
    3953             :      * number of rescanned tuples = (m1 - 1) * n1 + (m2 - 1) * n2 + ... = m1 *
    3954             :      * n1 + m2 * n2 + ... - (n1 + n2 + ...) = size of join - size of inner
    3955             :      * relation
    3956             :      *
    3957             :      * This equation works correctly for outer tuples having no inner match
    3958             :      * (nk = 0), but not for inner tuples having no outer match (mk = 0); we
    3959             :      * are effectively subtracting those from the number of rescanned tuples,
    3960             :      * when we should not.  Can we do better without expensive selectivity
    3961             :      * computations?
    3962             :      *
    3963             :      * The whole issue is moot if we are working from a unique-ified outer
    3964             :      * input, or if we know we don't need to mark/restore at all.
    3965             :      */
    3966      327262 :     if (IsA(outer_path, UniquePath) || path->skip_mark_restore)
    3967      126420 :         rescannedtuples = 0;
    3968             :     else
    3969             :     {
    3970      200842 :         rescannedtuples = mergejointuples - inner_path_rows;
    3971             :         /* Must clamp because of possible underestimate */
    3972      200842 :         if (rescannedtuples < 0)
    3973       84362 :             rescannedtuples = 0;
    3974             :     }
    3975             : 
    3976             :     /*
    3977             :      * We'll inflate various costs this much to account for rescanning.  Note
    3978             :      * that this is to be multiplied by something involving inner_rows, or
    3979             :      * another number related to the portion of the inner rel we'll scan.
    3980             :      */
    3981      327262 :     rescanratio = 1.0 + (rescannedtuples / inner_rows);
    3982             : 
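                     :     /*
                     :      * A worked example of the estimate above (hypothetical numbers):
                     :      * suppose the outer rel has 2 rows with key 1 and 1 row with key 2
                     :      * (m1 = 2, m2 = 1), and the inner rel has n1 = 3 and n2 = 2 matching
                     :      * rows.  Then size of join = 2*3 + 1*2 = 8 and size of inner = 5,
                     :      * so rescannedtuples = 8 - 5 = 3, agreeing with the direct count
                     :      * (2-1)*3 + (1-1)*2 = 3.  Taking inner_rows = 5 for simplicity,
                     :      * rescanratio = 1 + 3/5 = 1.6: every per-inner-row cost below is
                     :      * inflated by 60%.
                     :      */
                     :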
    3983             :     /*
    3984             :      * Decide whether we want to materialize the inner input to shield it
    3985             :      * from mark/restore overhead and from performing re-fetches.  Our cost
    3986             :      * model for regular re-fetches is that a re-fetch costs the same as an
    3987             :      * original fetch, which is probably an overestimate; but on the other
    3988             :      * hand we ignore the bookkeeping costs of mark/restore.  It is not clear
    3989             :      * whether it is worth developing a more refined model.  So we just need
    3990             :      * to inflate the inner run cost by rescanratio.
    3991             :      */
    3992      327262 :     bare_inner_cost = inner_run_cost * rescanratio;
    3993             : 
    3994             :     /*
    3995             :      * When we interpose a Material node the re-fetch cost is assumed to be
    3996             :      * just cpu_operator_cost per tuple, independently of the underlying
    3997             :      * plan's cost; and we charge an extra cpu_operator_cost per original
    3998             :      * fetch as well.  Note that we're assuming the materialize node will
    3999             :      * never spill to disk, since it only has to remember tuples back to the
    4000             :      * last mark.  (If there are a huge number of duplicates, our other cost
    4001             :      * factors will make the path so expensive that it probably won't get
    4002             :      * chosen anyway.)  So we don't use cost_rescan here.
    4003             :      *
    4004             :      * Note: keep this estimate in sync with create_mergejoin_plan's labeling
    4005             :      * of the generated Material node.
    4006             :      */
    4007      327262 :     mat_inner_cost = inner_run_cost +
    4008      327262 :         cpu_operator_cost * inner_rows * rescanratio;
    4009             : 
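                     :     /*
                     :      * Comparing the two estimates with made-up numbers: if
                     :      * inner_run_cost = 100, rescanratio = 1.6, inner_rows = 10000, and
                     :      * cpu_operator_cost has its default value of 0.0025, then
                     :      * bare_inner_cost = 160 while mat_inner_cost = 100 +
                     :      * 0.0025 * 10000 * 1.6 = 140, so (enable_material permitting) the
                     :      * decision logic below would choose to materialize.
                     :      */
                     :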
    4010             :     /*
    4011             :      * If we don't need mark/restore at all, we don't need materialization.
    4012             :      */
    4013      327262 :     if (path->skip_mark_restore)
    4014      123870 :         path->materialize_inner = false;
    4015             : 
    4016             :     /*
    4017             :      * Prefer materializing if it looks cheaper, unless the user has asked to
    4018             :      * suppress materialization.
    4019             :      */
    4020      203392 :     else if (enable_material && mat_inner_cost < bare_inner_cost)
    4021        2650 :         path->materialize_inner = true;
    4022             : 
    4023             :     /*
    4024             :      * Even if materializing doesn't look cheaper, we *must* do it if the
    4025             :      * inner path is to be used directly (without sorting) and it doesn't
    4026             :      * support mark/restore.
    4027             :      *
    4028             :      * Since the inner side must be ordered, and only Sorts and IndexScans can
    4029             :      * create order to begin with, and they both support mark/restore, you
    4030             :      * might think there's no problem --- but you'd be wrong.  Nestloop and
    4031             :      * merge joins can *preserve* the order of their inputs, so they can be
    4032             :      * selected as the input of a mergejoin, and they don't support
    4033             :      * mark/restore at present.
    4034             :      *
    4035             :      * We don't test the value of enable_material here, because
    4036             :      * materialization is required for correctness in this case, and turning
    4037             :      * it off does not entitle us to deliver an invalid plan.
    4038             :      */
    4039      200742 :     else if (innersortkeys == NIL &&
    4040        8274 :              !ExecSupportsMarkRestore(inner_path))
    4041        1540 :         path->materialize_inner = true;
    4042             : 
    4043             :     /*
    4044             :      * Also, force materializing if the inner path is to be sorted and the
    4045             :      * sort is expected to spill to disk.  This is because the final merge
    4046             :      * pass can be done on-the-fly if it doesn't have to support mark/restore.
    4047             :      * We don't try to adjust the cost estimates for this consideration,
    4048             :      * though.
    4049             :      *
    4050             :      * Since materialization is a performance optimization in this case,
    4051             :      * rather than necessary for correctness, we skip it if enable_material is
    4052             :      * off.
    4053             :      */
    4054      199202 :     else if (enable_material && innersortkeys != NIL &&
    4055      192420 :              relation_byte_size(inner_path_rows,
    4056      192420 :                                 inner_path->pathtarget->width) >
    4057      192420 :              work_mem * (Size) 1024)
    4058         256 :         path->materialize_inner = true;
    4059             :     else
    4060      198946 :         path->materialize_inner = false;
    4061             : 
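                     :     /*
                     :      * Summarizing the decision ladder above:
                     :      *   skip_mark_restore                             -> don't materialize
                     :      *   enable_material and mat cost < bare cost     -> materialize
                     :      *   unsorted inner without mark/restore support  -> materialize (required)
                     :      *   enable_material and inner sort spills        -> materialize
                     :      *   otherwise                                    -> don't materialize
                     :      */
                     :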
    4062             :     /* Charge the right incremental cost for the chosen case */
    4063      327262 :     if (path->materialize_inner)
    4064        4446 :         run_cost += mat_inner_cost;
    4065             :     else
    4066      322816 :         run_cost += bare_inner_cost;
    4067             : 
    4068             :     /* CPU costs */
    4069             : 
    4070             :     /*
    4071             :      * The number of tuple comparisons needed is approximately number of outer
    4072             :      * rows plus number of inner rows plus number of rescanned tuples (can we
    4073             :      * refine this?).  At each one, we need to evaluate the mergejoin quals.
    4074             :      */
    4075      327262 :     startup_cost += merge_qual_cost.startup;
    4076      327262 :     startup_cost += merge_qual_cost.per_tuple *
    4077      327262 :         (outer_skip_rows + inner_skip_rows * rescanratio);
    4078      327262 :     run_cost += merge_qual_cost.per_tuple *
    4079      327262 :         ((outer_rows - outer_skip_rows) +
    4080      327262 :          (inner_rows - inner_skip_rows) * rescanratio);
    4081             : 
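                     :     /*
                     :      * For instance (hypothetical numbers): with outer_rows = 10000,
                     :      * inner_rows = 5000, rescanratio = 1.6, and 10% of each input skipped
                     :      * (outer_skip_rows = 1000, inner_skip_rows = 500), startup is charged
                     :      * for 1000 + 500 * 1.6 = 1800 comparisons and run cost for
                     :      * 9000 + 4500 * 1.6 = 16200 comparisons, each at
                     :      * merge_qual_cost.per_tuple.
                     :      */
                     :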
    4082             :     /*
    4083             :      * For each tuple that gets through the mergejoin proper, we charge
    4084             :      * cpu_tuple_cost plus the cost of evaluating additional restriction
    4085             :      * clauses that are to be applied at the join.  (This is pessimistic since
    4086             :      * not all of the quals may get evaluated at each tuple.)
    4087             :      *
    4088             :      * Note: we could adjust for SEMI/ANTI joins skipping some qual
    4089             :      * evaluations here, but it's probably not worth the trouble.
    4090             :      */
    4091      327262 :     startup_cost += qp_qual_cost.startup;
    4092      327262 :     cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    4093      327262 :     run_cost += cpu_per_tuple * mergejointuples;
    4094             : 
    4095             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    4096      327262 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    4097      327262 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    4098             : 
    4099      327262 :     path->jpath.path.startup_cost = startup_cost;
    4100      327262 :     path->jpath.path.total_cost = startup_cost + run_cost;
    4101      327262 : }
    4102             : 
    4103             : /*
    4104             :  * run mergejoinscansel() with caching
    4105             :  */
    4106             : static MergeScanSelCache *
    4107     1278684 : cached_scansel(PlannerInfo *root, RestrictInfo *rinfo, PathKey *pathkey)
    4108             : {
    4109             :     MergeScanSelCache *cache;
    4110             :     ListCell   *lc;
    4111             :     Selectivity leftstartsel,
    4112             :                 leftendsel,
    4113             :                 rightstartsel,
    4114             :                 rightendsel;
    4115             :     MemoryContext oldcontext;
    4116             : 
    4117             :     /* Do we have this result already? */
    4118     1278762 :     foreach(lc, rinfo->scansel_cache)
    4119             :     {
    4120     1160130 :         cache = (MergeScanSelCache *) lfirst(lc);
    4121     1160130 :         if (cache->opfamily == pathkey->pk_opfamily &&
    4122     1160130 :             cache->collation == pathkey->pk_eclass->ec_collation &&
    4123     1160130 :             cache->cmptype == pathkey->pk_cmptype &&
    4124     1160052 :             cache->nulls_first == pathkey->pk_nulls_first)
    4125     1160052 :             return cache;
    4126             :     }
    4127             : 
    4128             :     /* Nope, do the computation */
    4129      118632 :     mergejoinscansel(root,
    4130      118632 :                      (Node *) rinfo->clause,
    4131             :                      pathkey->pk_opfamily,
    4132             :                      pathkey->pk_cmptype,
    4133      118632 :                      pathkey->pk_nulls_first,
    4134             :                      &leftstartsel,
    4135             :                      &leftendsel,
    4136             :                      &rightstartsel,
    4137             :                      &rightendsel);
    4138             : 
    4139             :     /* Cache the result in suitably long-lived workspace */
    4140      118632 :     oldcontext = MemoryContextSwitchTo(root->planner_cxt);
    4141             : 
    4142      118632 :     cache = (MergeScanSelCache *) palloc(sizeof(MergeScanSelCache));
    4143      118632 :     cache->opfamily = pathkey->pk_opfamily;
    4144      118632 :     cache->collation = pathkey->pk_eclass->ec_collation;
    4145      118632 :     cache->cmptype = pathkey->pk_cmptype;
    4146      118632 :     cache->nulls_first = pathkey->pk_nulls_first;
    4147      118632 :     cache->leftstartsel = leftstartsel;
    4148      118632 :     cache->leftendsel = leftendsel;
    4149      118632 :     cache->rightstartsel = rightstartsel;
    4150      118632 :     cache->rightendsel = rightendsel;
    4151             : 
    4152      118632 :     rinfo->scansel_cache = lappend(rinfo->scansel_cache, cache);
    4153             : 
    4154      118632 :     MemoryContextSwitchTo(oldcontext);
    4155             : 
    4156      118632 :     return cache;
    4157             : }
    4158             : 
    4159             : /*
    4160             :  * initial_cost_hashjoin
    4161             :  *    Preliminary estimate of the cost of a hashjoin path.
    4162             :  *
    4163             :  * This must quickly produce lower-bound estimates of the path's startup and
    4164             :  * total costs.  If we are unable to eliminate the proposed path from
    4165             :  * consideration using the lower bounds, final_cost_hashjoin will be called
    4166             :  * to obtain the final estimates.
    4167             :  *
    4168             :  * The exact division of labor between this function and final_cost_hashjoin
    4169             :  * is private to them, and represents a tradeoff between speed of the initial
    4170             :  * estimate and getting a tight lower bound.  We choose to not examine the
    4171             :  * estimate and getting a tight lower bound.  We choose not to examine the
    4172             :  * so we can't do much with CPU costs.  We do assume that
    4173             :  * ExecChooseHashTableSize is cheap enough to use here.
    4174             :  *
    4175             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    4176             :  *      other data to be used by final_cost_hashjoin
    4177             :  * 'jointype' is the type of join to be performed
    4178             :  * 'hashclauses' is the list of joinclauses to be used as hash clauses
    4179             :  * 'outer_path' is the outer input to the join
    4180             :  * 'inner_path' is the inner input to the join
    4181             :  * 'extra' contains miscellaneous information about the join
    4182             :  * 'parallel_hash' indicates that inner_path is partial and that a shared
    4183             :  *      hash table will be built in parallel
    4184             :  */
    4185             : void
    4186      690778 : initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace,
    4187             :                       JoinType jointype,
    4188             :                       List *hashclauses,
    4189             :                       Path *outer_path, Path *inner_path,
    4190             :                       JoinPathExtraData *extra,
    4191             :                       bool parallel_hash)
    4192             : {
    4193             :     int         disabled_nodes;
    4194      690778 :     Cost        startup_cost = 0;
    4195      690778 :     Cost        run_cost = 0;
    4196      690778 :     double      outer_path_rows = outer_path->rows;
    4197      690778 :     double      inner_path_rows = inner_path->rows;
    4198      690778 :     double      inner_path_rows_total = inner_path_rows;
    4199      690778 :     int         num_hashclauses = list_length(hashclauses);
    4200             :     int         numbuckets;
    4201             :     int         numbatches;
    4202             :     int         num_skew_mcvs;
    4203             :     size_t      space_allowed;  /* unused */
    4204             : 
    4205             :     /* Count up disabled nodes. */
    4206      690778 :     disabled_nodes = enable_hashjoin ? 0 : 1;
    4207      690778 :     disabled_nodes += inner_path->disabled_nodes;
    4208      690778 :     disabled_nodes += outer_path->disabled_nodes;
    4209             : 
    4210             :     /* cost of source data */
    4211      690778 :     startup_cost += outer_path->startup_cost;
    4212      690778 :     run_cost += outer_path->total_cost - outer_path->startup_cost;
    4213      690778 :     startup_cost += inner_path->total_cost;
    4214             : 
    4215             :     /*
    4216             :      * Cost of computing hash function: must do it once per input tuple. We
    4217             :      * charge one cpu_operator_cost for each column's hash function.  Also,
    4218             :      * tack on one cpu_tuple_cost per inner row, to model the costs of
    4219             :      * inserting the row into the hashtable.
    4220             :      *
    4221             :      * XXX when a hashclause is more complex than a single operator, we really
    4222             :      * should charge the extra eval costs of the left or right side, as
    4223             :      * appropriate, here.  This seems more work than it's worth at the moment.
    4224             :      */
    4225      690778 :     startup_cost += (cpu_operator_cost * num_hashclauses + cpu_tuple_cost)
    4226      690778 :         * inner_path_rows;
    4227      690778 :     run_cost += cpu_operator_cost * num_hashclauses * outer_path_rows;
    4228             : 
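                     :     /*
                     :      * Illustrative arithmetic with the default cost parameters
                     :      * (cpu_operator_cost = 0.0025, cpu_tuple_cost = 0.01) and two hash
                     :      * clauses: each inner row costs 2 * 0.0025 + 0.01 = 0.015 and each
                     :      * outer row 0.005, so 1000 inner rows add 15 to startup_cost and
                     :      * 10000 outer rows add 50 to run_cost.
                     :      */
                     :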
    4229             :     /*
    4230             :      * If this is a parallel hash build, then the value we have for
    4231             :      * inner_rows_total currently refers only to the rows returned by each
    4232             :      * participant.  For shared hash table size estimation, we need the total
    4233             :      * number, so we need to undo the division.
    4234             :      */
    4235      690778 :     if (parallel_hash)
    4236       12552 :         inner_path_rows_total *= get_parallel_divisor(inner_path);
    4237             : 
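                     :     /*
                     :      * For example (hypothetical numbers): with 2 workers and the leader
                     :      * participating, get_parallel_divisor() returns 2 + (1 - 0.3 * 2) =
                     :      * 2.4, so a per-participant estimate of 1000 rows is scaled back up
                     :      * to about 2400 rows for sizing the shared hash table.
                     :      */
                     :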
    4238             :     /*
    4239             :      * Get hash table size that executor would use for inner relation.
    4240             :      *
    4241             :      * XXX for the moment, always assume that skew optimization will be
    4242             :      * performed.  As long as SKEW_HASH_MEM_PERCENT is small, it's not worth
    4243             :      * trying to determine that for sure.
    4244             :      *
    4245             :      * XXX at some point it might be interesting to try to account for skew
    4246             :      * optimization in the cost estimate, but for now, we don't.
    4247             :      */
    4248      690778 :     ExecChooseHashTableSize(inner_path_rows_total,
    4249      690778 :                             inner_path->pathtarget->width,
    4250             :                             true,   /* useskew */
    4251             :                             parallel_hash,  /* try_combined_hash_mem */
    4252             :                             outer_path->parallel_workers,
    4253             :                             &space_allowed,
    4254             :                             &numbuckets,
    4255             :                             &numbatches,
    4256             :                             &num_skew_mcvs);
    4257             : 
    4258             :     /*
    4259             :      * If inner relation is too big then we will need to "batch" the join,
    4260             :      * which implies writing and reading most of the tuples to disk an extra
    4261             :      * time.  Charge seq_page_cost per page, since the I/O should be nice and
    4262             :      * sequential.  Writing the inner rel counts as startup cost, all the rest
    4263             :      * as run cost.
    4264             :      */
    4265      690778 :     if (numbatches > 1)
    4266             :     {
    4267        5376 :         double      outerpages = page_size(outer_path_rows,
    4268        5376 :                                            outer_path->pathtarget->width);
    4269        5376 :         double      innerpages = page_size(inner_path_rows,
    4270        5376 :                                            inner_path->pathtarget->width);
    4271             : 
    4272        5376 :         startup_cost += seq_page_cost * innerpages;
    4273        5376 :         run_cost += seq_page_cost * (innerpages + 2 * outerpages);
    4274             :     }
    4275             : 
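                     :     /*
                     :      * Worked example (made-up sizes): with innerpages = 1000, outerpages
                     :      * = 5000, and seq_page_cost = 1.0, batching adds 1000 to startup_cost
                     :      * (writing out the inner rel) and 1000 + 2 * 5000 = 11000 to run_cost
                     :      * (re-reading the inner rel, plus writing and re-reading the outer
                     :      * rel).  The charge is the same for any numbatches > 1, since each
                     :      * tuple is spilled and re-read about once however many batches there
                     :      * are.
                     :      */
                     :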
    4276             :     /* CPU costs left for later */
    4277             : 
    4278             :     /* Public result fields */
    4279      690778 :     workspace->disabled_nodes = disabled_nodes;
    4280      690778 :     workspace->startup_cost = startup_cost;
    4281      690778 :     workspace->total_cost = startup_cost + run_cost;
    4282             :     /* Save private data for final_cost_hashjoin */
    4283      690778 :     workspace->run_cost = run_cost;
    4284      690778 :     workspace->numbuckets = numbuckets;
    4285      690778 :     workspace->numbatches = numbatches;
    4286      690778 :     workspace->inner_rows_total = inner_path_rows_total;
    4287      690778 : }
    4288             : 
    4289             : /*
    4290             :  * final_cost_hashjoin
    4291             :  *    Final estimate of the cost and result size of a hashjoin path.
    4292             :  *
    4293             :  * Note: the numbatches estimate is also saved into 'path' for use later
    4294             :  *
    4295             :  * 'path' is already filled in except for the rows and cost fields and
    4296             :  *      num_batches
    4297             :  * 'workspace' is the result from initial_cost_hashjoin
    4298             :  * 'extra' contains miscellaneous information about the join
    4299             :  */
    4300             : void
    4301      295816 : final_cost_hashjoin(PlannerInfo *root, HashPath *path,
    4302             :                     JoinCostWorkspace *workspace,
    4303             :                     JoinPathExtraData *extra)
    4304             : {
    4305      295816 :     Path       *outer_path = path->jpath.outerjoinpath;
    4306      295816 :     Path       *inner_path = path->jpath.innerjoinpath;
    4307      295816 :     double      outer_path_rows = outer_path->rows;
    4308      295816 :     double      inner_path_rows = inner_path->rows;
    4309      295816 :     double      inner_path_rows_total = workspace->inner_rows_total;
    4310      295816 :     List       *hashclauses = path->path_hashclauses;
    4311      295816 :     Cost        startup_cost = workspace->startup_cost;
    4312      295816 :     Cost        run_cost = workspace->run_cost;
    4313      295816 :     int         numbuckets = workspace->numbuckets;
    4314      295816 :     int         numbatches = workspace->numbatches;
    4315             :     Cost        cpu_per_tuple;
    4316             :     QualCost    hash_qual_cost;
    4317             :     QualCost    qp_qual_cost;
    4318             :     double      hashjointuples;
    4319             :     double      virtualbuckets;
    4320             :     Selectivity innerbucketsize;
    4321             :     Selectivity innermcvfreq;
    4322             :     ListCell   *hcl;
    4323             : 
    4324             :     /* Set the number of disabled nodes. */
    4325      295816 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    4326             : 
    4327             :     /* Mark the path with the correct row estimate */
    4328      295816 :     if (path->jpath.path.param_info)
    4329        1482 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    4330             :     else
    4331      294334 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    4332             : 
    4333             :     /* For partial paths, scale row estimate. */
    4334      295816 :     if (path->jpath.path.parallel_workers > 0)
    4335             :     {
    4336       11332 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    4337             : 
    4338       11332 :         path->jpath.path.rows =
    4339       11332 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    4340             :     }
    4341             : 
    4342             :     /* mark the path with estimated # of batches */
    4343      295816 :     path->num_batches = numbatches;
    4344             : 
    4345             :     /* store the total number of tuples (sum of partial row estimates) */
    4346      295816 :     path->inner_rows_total = inner_path_rows_total;
    4347             : 
    4348             :     /* and compute the number of "virtual" buckets in the whole join */
    4349      295816 :     virtualbuckets = (double) numbuckets * (double) numbatches;
    4350             : 
    4351             :     /*
    4352             :      * Determine bucketsize fraction and MCV frequency for the inner relation.
    4353             :      * We use the smallest bucketsize or MCV frequency estimated for any
    4354             :      * individual hashclause; this is undoubtedly conservative.
    4355             :      *
    4356             :      * BUT: if inner relation has been unique-ified, we can assume it's good
    4357             :      * for hashing.  This is important both because it's the right answer, and
    4358             :      * because we avoid contaminating the cache with a value that's wrong for
    4359             :      * non-unique-ified paths.
    4360             :      */
    4361      295816 :     if (IsA(inner_path, UniquePath))
    4362             :     {
    4363        4656 :         innerbucketsize = 1.0 / virtualbuckets;
    4364        4656 :         innermcvfreq = 0.0;
    4365             :     }
    4366             :     else
    4367             :     {
    4368             :         List       *otherclauses;
    4369             : 
    4370      291160 :         innerbucketsize = 1.0;
    4371      291160 :         innermcvfreq = 1.0;
    4372             : 
    4373             :         /* At first, try to estimate bucket size using extended statistics. */
    4374      291160 :         otherclauses = estimate_multivariate_bucketsize(root,
    4375             :                                                         inner_path->parent,
    4376             :                                                         hashclauses,
    4377             :                                                         &innerbucketsize);
    4378             : 
    4379             :         /* Pass through the remaining clauses */
    4380      617810 :         foreach(hcl, otherclauses)
    4381             :         {
    4382      326650 :             RestrictInfo *restrictinfo = lfirst_node(RestrictInfo, hcl);
    4383             :             Selectivity thisbucketsize;
    4384             :             Selectivity thismcvfreq;
    4385             : 
    4386             :             /*
    4387             :              * First we have to figure out which side of the hashjoin clause
    4388             :              * is the inner side.
    4389             :              *
    4390             :              * Since we tend to visit the same clauses over and over when
    4391             :              * planning a large query, we cache the bucket stats estimates in
    4392             :              * the RestrictInfo node to avoid repeated lookups of statistics.
    4393             :              */
    4394      326650 :             if (bms_is_subset(restrictinfo->right_relids,
    4395      326650 :                               inner_path->parent->relids))
    4396             :             {
    4397             :                 /* righthand side is inner */
    4398      170980 :                 thisbucketsize = restrictinfo->right_bucketsize;
    4399      170980 :                 if (thisbucketsize < 0)
    4400             :                 {
    4401             :                     /* not cached yet */
    4402       90780 :                     estimate_hash_bucket_stats(root,
    4403       90780 :                                                get_rightop(restrictinfo->clause),
    4404             :                                                virtualbuckets,
    4405             :                                                &restrictinfo->right_mcvfreq,
    4406             :                                                &restrictinfo->right_bucketsize);
    4407       90780 :                     thisbucketsize = restrictinfo->right_bucketsize;
    4408             :                 }
    4409      170980 :                 thismcvfreq = restrictinfo->right_mcvfreq;
    4410             :             }
    4411             :             else
    4412             :             {
    4413             :                 Assert(bms_is_subset(restrictinfo->left_relids,
    4414             :                                      inner_path->parent->relids));
    4415             :                 /* lefthand side is inner */
    4416      155670 :                 thisbucketsize = restrictinfo->left_bucketsize;
    4417      155670 :                 if (thisbucketsize < 0)
    4418             :                 {
    4419             :                     /* not cached yet */
    4420       78184 :                     estimate_hash_bucket_stats(root,
    4421       78184 :                                                get_leftop(restrictinfo->clause),
    4422             :                                                virtualbuckets,
    4423             :                                                &restrictinfo->left_mcvfreq,
    4424             :                                                &restrictinfo->left_bucketsize);
    4425       78184 :                     thisbucketsize = restrictinfo->left_bucketsize;
    4426             :                 }
    4427      155670 :                 thismcvfreq = restrictinfo->left_mcvfreq;
    4428             :             }
    4429             : 
    4430      326650 :             if (innerbucketsize > thisbucketsize)
    4431      207802 :                 innerbucketsize = thisbucketsize;
    4432      326650 :             if (innermcvfreq > thismcvfreq)
    4433      295052 :                 innermcvfreq = thismcvfreq;
    4434             :         }
    4435             :     }
    4436             : 
    4437             :     /*
    4438             :      * If the bucket holding the inner MCV would exceed hash_mem, we don't
    4439             :      * want to hash unless there is really no other alternative, so apply
    4440             :      * disable_cost.  (The executor normally copes with excessive memory usage
    4441             :      * by splitting batches, but obviously it cannot separate equal values
    4442             :      * that way, so it will be unable to drive the batch size below hash_mem
    4443             :      * when this is true.)
    4444             :      */
    4445      295816 :     if (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),
    4446      591632 :                            inner_path->pathtarget->width) > get_hash_memory_limit())
    4447           6 :         startup_cost += disable_cost;
    4448             : 
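                     :     /*
                     :      * Rough illustration (hypothetical numbers, ignoring the per-tuple
                     :      * header overhead that relation_byte_size() adds): with
                     :      * inner_path_rows = 1,000,000, innermcvfreq = 0.2, and 64-byte rows,
                     :      * the MCV's bucket would hold about 200,000 * 64 bytes ~= 12.8 MB; if
                     :      * hash_mem (work_mem times hash_mem_multiplier) were only 8 MB, the
                     :      * path would be penalized with disable_cost here.
                     :      */
                     :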
    4449             :     /*
    4450             :      * Compute cost of the hashquals and qpquals (other restriction clauses)
    4451             :      * separately.
    4452             :      */
    4453      295816 :     cost_qual_eval(&hash_qual_cost, hashclauses, root);
    4454      295816 :     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
    4455      295816 :     qp_qual_cost.startup -= hash_qual_cost.startup;
    4456      295816 :     qp_qual_cost.per_tuple -= hash_qual_cost.per_tuple;
    4457             : 
    4458             :     /* CPU costs */
    4459             : 
    4460      295816 :     if (path->jpath.jointype == JOIN_SEMI ||
    4461      289848 :         path->jpath.jointype == JOIN_ANTI ||
    4462      284954 :         extra->inner_unique)
    4463      123714 :     {
    4464             :         double      outer_matched_rows;
    4465             :         Selectivity inner_scan_frac;
    4466             : 
    4467             :         /*
    4468             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    4469             :          * executor will stop after the first match.
    4470             :          *
    4471             :          * For an outer-rel row that has at least one match, we can expect the
    4472             :          * bucket scan to stop after a fraction 1/(match_count+1) of the
    4473             :          * bucket's rows, if the matches are evenly distributed.  Since they
    4474             :          * probably aren't quite evenly distributed, we apply a fuzz factor of
    4475             :          * 2.0 to that fraction.  (If we used a larger fuzz factor, we'd have
    4476             :          * to clamp inner_scan_frac to at most 1.0; but since match_count is
    4477             :          * at least 1, no such clamp is needed now.)
    4478             :          */
    4479      123714 :         outer_matched_rows = rint(outer_path_rows * extra->semifactors.outer_match_frac);
    4480      123714 :         inner_scan_frac = 2.0 / (extra->semifactors.match_count + 1.0);
    4481             : 
    4482      123714 :         startup_cost += hash_qual_cost.startup;
    4483      247428 :         run_cost += hash_qual_cost.per_tuple * outer_matched_rows *
    4484      123714 :             clamp_row_est(inner_path_rows * innerbucketsize * inner_scan_frac) * 0.5;
    4485             : 
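                     :         /*
                     :          * For example (hypothetical numbers): with outer_path_rows = 10000
                     :          * and outer_match_frac = 0.5, outer_matched_rows = 5000; with
                     :          * match_count = 4, inner_scan_frac = 2/5 = 0.4.  If a bucket holds
                     :          * inner_path_rows * innerbucketsize = 10 rows, each matched outer
                     :          * row is charged for 10 * 0.4 = 4 bucket entries, halved as above:
                     :          * 5000 * 4 * 0.5 = 10000 qual evaluations in total.
                     :          */
                     :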
    4486             :         /*
    4487             :          * For unmatched outer-rel rows, the picture is quite a lot different.
    4488             :          * In the first place, there is no reason to assume that these rows
    4489             :          * preferentially hit heavily-populated buckets; instead assume they
    4490             :          * are uncorrelated with the inner distribution and so they see an
    4491             :          * average bucket size of inner_path_rows / virtualbuckets.  In the
    4492             :          * second place, it seems likely that they will have few if any exact
    4493             :          * hash-code matches and so very few of the tuples in the bucket will
    4494             :          * actually require eval of the hash quals.  We don't have any good
    4495             :          * way to estimate how many will, but for the moment assume that the
    4496             :          * effective cost per bucket entry is one-tenth what it is for
    4497             :          * matchable tuples.
    4498             :          */
    4499      247428 :         run_cost += hash_qual_cost.per_tuple *
    4500      247428 :             (outer_path_rows - outer_matched_rows) *
    4501      123714 :             clamp_row_est(inner_path_rows / virtualbuckets) * 0.05;
    4502             : 
    4503             :         /* Get # of tuples that will pass the basic join */
    4504      123714 :         if (path->jpath.jointype == JOIN_ANTI)
    4505        4894 :             hashjointuples = outer_path_rows - outer_matched_rows;
    4506             :         else
    4507      118820 :             hashjointuples = outer_matched_rows;
    4508             :     }
    4509             :     else
    4510             :     {
    4511             :         /*
    4512             :          * The number of tuple comparisons needed is the number of outer
    4513             :          * tuples times the typical number of tuples in a hash bucket, which
    4514             :          * is the inner relation size times its bucketsize fraction.  At each
    4515             :          * one, we need to evaluate the hashjoin quals.  But actually,
    4516             :          * charging the full qual eval cost at each tuple is pessimistic,
    4517             :          * since we don't evaluate the quals unless the hash values match
    4518             :          * exactly.  For lack of a better idea, halve the cost estimate to
    4519             :          * allow for that.
    4520             :          */
    4521      172102 :         startup_cost += hash_qual_cost.startup;
    4522      344204 :         run_cost += hash_qual_cost.per_tuple * outer_path_rows *
    4523      172102 :             clamp_row_est(inner_path_rows * innerbucketsize) * 0.5;
    4524             : 
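                     :         /*
                     :          * For example (hypothetical numbers): with outer_path_rows = 10000
                     :          * and a typical bucket of inner_path_rows * innerbucketsize = 10
                     :          * tuples, we charge hash_qual_cost.per_tuple for
                     :          * 10000 * 10 * 0.5 = 50000 comparisons.
                     :          */
                     :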
    4525             :         /*
    4526             :          * Get approx # tuples passing the hashquals.  We use
    4527             :          * approx_tuple_count here because we need an estimate done with
    4528             :          * JOIN_INNER semantics.
    4529             :          */
    4530      172102 :         hashjointuples = approx_tuple_count(root, &path->jpath, hashclauses);
    4531             :     }
    4532             : 
    4533             :     /*
    4534             :      * For each tuple that gets through the hashjoin proper, we charge
    4535             :      * cpu_tuple_cost plus the cost of evaluating additional restriction
    4536             :      * clauses that are to be applied at the join.  (This is pessimistic since
    4537             :      * not all of the quals may get evaluated at each tuple.)
    4538             :      */
    4539      295816 :     startup_cost += qp_qual_cost.startup;
    4540      295816 :     cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    4541      295816 :     run_cost += cpu_per_tuple * hashjointuples;
    4542             : 
    4543             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    4544      295816 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    4545      295816 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    4546             : 
    4547      295816 :     path->jpath.path.startup_cost = startup_cost;
    4548      295816 :     path->jpath.path.total_cost = startup_cost + run_cost;
    4549      295816 : }
    4550             : 
    4551             : 
    4552             : /*
    4553             :  * cost_subplan
    4554             :  *      Figure the costs for a SubPlan (or initplan).
    4555             :  *
    4556             :  * Note: we could dig the subplan's Plan out of the root list, but in practice
    4557             :  * all callers have it handy already, so we make them pass it.
    4558             :  */
    4559             : void
    4560       45832 : cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
    4561             : {
    4562             :     QualCost    sp_cost;
    4563             : 
    4564             :     /* Figure any cost for evaluating the testexpr */
    4565       45832 :     cost_qual_eval(&sp_cost,
    4566       45832 :                    make_ands_implicit((Expr *) subplan->testexpr),
    4567             :                    root);
    4568             : 
    4569       45832 :     if (subplan->useHashTable)
    4570             :     {
    4571             :         /*
    4572             :          * If we are using a hash table for the subquery outputs, then the
    4573             :          * cost of evaluating the query is a one-time cost.  We charge one
    4574             :          * cpu_operator_cost per tuple for the work of loading the hashtable,
    4575             :          * too.
    4576             :          */
    4577        2172 :         sp_cost.startup += plan->total_cost +
    4578        2172 :             cpu_operator_cost * plan->plan_rows;
    4579             : 
    4580             :         /*
    4581             :          * The per-tuple costs include the cost of evaluating the lefthand
    4582             :          * expressions, plus the cost of probing the hashtable.  We already
    4583             :          * accounted for the lefthand expressions as part of the testexpr, and
    4584             :          * will also have counted one cpu_operator_cost for each comparison
    4585             :          * operator.  That is probably too low for the probing cost, but it's
    4586             :          * hard to make a better estimate, so live with it for now.
    4587             :          */
    4588             :     }
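                     :
                     :         /*
                     :          * Worked example (made-up numbers): if the subplan's total_cost is
                     :          * 1000, plan_rows is 5000, and cpu_operator_cost is the default
                     :          * 0.0025, the one-time charge is 1000 + 0.0025 * 5000 = 1012.5,
                     :          * all added to sp_cost.startup; each probe afterwards costs only
                     :          * the testexpr evaluation already accounted for above.
                     :          */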
    4589             :     else
    4590             :     {
    4591             :         /*
    4592             :          * Otherwise we will be rescanning the subplan output on each
    4593             :          * evaluation.  We need to estimate how much of the output we will
    4594             :          * actually need to scan.  NOTE: this logic should agree with the
    4595             :          * tuple_fraction estimates used by make_subplan() in
    4596             :          * plan/subselect.c.
    4597             :          */
    4598       43660 :         Cost        plan_run_cost = plan->total_cost - plan->startup_cost;
    4599             : 
    4600       43660 :         if (subplan->subLinkType == EXISTS_SUBLINK)
    4601             :         {
    4602             :             /* we only need to fetch 1 tuple; clamp to avoid zero divide */
    4603        2644 :             sp_cost.per_tuple += plan_run_cost / clamp_row_est(plan->plan_rows);
    4604             :         }
    4605       41016 :         else if (subplan->subLinkType == ALL_SUBLINK ||
    4606       40998 :                  subplan->subLinkType == ANY_SUBLINK)
    4607             :         {
    4608             :             /* assume we need 50% of the tuples */
    4609         134 :             sp_cost.per_tuple += 0.50 * plan_run_cost;
    4610             :             /* also charge a cpu_operator_cost per row examined */
    4611         134 :             sp_cost.per_tuple += 0.50 * plan->plan_rows * cpu_operator_cost;
    4612             :         }
    4613             :         else
    4614             :         {
    4615             :             /* assume we need all tuples */
    4616       40882 :             sp_cost.per_tuple += plan_run_cost;
    4617             :         }
    4618             : 
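                     :         /*
                     :          * Concretely (hypothetical numbers, plan_run_cost = 100, plan_rows
                     :          * = 50): an EXISTS sublink is charged 100 / 50 = 2 per call, since
                     :          * one row suffices; an ALL or ANY sublink is charged 0.5 * 100 =
                     :          * 50 plus 0.5 * 50 * cpu_operator_cost; all other sublink types
                     :          * are charged the full 100 per call.
                     :          */
                     :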
    4619             :         /*
    4620             :          * Also account for subplan's startup cost. If the subplan is
    4621             :          * uncorrelated or undirect correlated, AND its topmost node is one
    4622             :          * that materializes its output, assume that we'll only need to pay
    4623             :          * its startup cost once; otherwise assume we pay the startup cost
    4624             :          * every time.
    4625             :          */
    4626       57394 :         if (subplan->parParam == NIL &&
    4627       13734 :             ExecMaterializesOutput(nodeTag(plan)))
    4628         694 :             sp_cost.startup += plan->startup_cost;
    4629             :         else
    4630       42966 :             sp_cost.per_tuple += plan->startup_cost;
    4631             :     }
    4632             : 
    4633       45832 :     subplan->startup_cost = sp_cost.startup;
    4634       45832 :     subplan->per_call_cost = sp_cost.per_tuple;
    4635       45832 : }
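
To make the tuple-fraction rules above concrete, here is a minimal standalone sketch of the per-call charge for each sublink case. It is not PostgreSQL code: the plan costs and row count are invented, clamp_rows() is a simplified stand-in for clamp_row_est(), and cpu_operator_cost is hard-coded to its stock default.

    #include <stdio.h>

    /* Simplified stand-in for clamp_row_est(): at least one row. */
    static double clamp_rows(double rows) { return rows < 1.0 ? 1.0 : rows; }

    int main(void)
    {
        /* Hypothetical subplan: total cost 1000, startup 100, 500 rows */
        double total_cost = 1000.0, startup_cost = 100.0, plan_rows = 500.0;
        double cpu_operator_cost = 0.0025;  /* default GUC value */
        double run_cost = total_cost - startup_cost;

        /* EXISTS: expect to fetch only the first tuple */
        double exists_cost = run_cost / clamp_rows(plan_rows);

        /* ANY/ALL: assume 50% of the output is scanned, plus one
         * cpu_operator_cost per row examined */
        double any_cost = 0.50 * run_cost +
            0.50 * plan_rows * cpu_operator_cost;

        /* expression sublink: assume the whole output is scanned */
        double expr_cost = run_cost;

        printf("EXISTS %.3f  ANY/ALL %.3f  EXPR %.3f\n",
               exists_cost, any_cost, expr_cost);
        return 0;
    }

With these numbers the per-call charges come out to 1.8, 450.625, and 900 respectively, which shows why an EXISTS sublink over a cheap-to-start subplan is costed so much lower than a plain expression sublink.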
    4636             : 
    4637             : 
    4638             : /*
    4639             :  * cost_rescan
    4640             :  *      Given a finished Path, estimate the costs of rescanning it after
    4641             :  *      having done so the first time.  For some Path types a rescan is
    4642             :  *      cheaper than an original scan (if no parameters change), and this
    4643             :  *      function embodies knowledge about that.  The default is to return
    4644             :  *      the same costs stored in the Path.  (Note that the cost estimates
    4645             :  *      actually stored in Paths are always for first scans.)
    4646             :  *
    4647             :  * This function is not currently intended to model effects such as rescans
    4648             :  * being cheaper due to disk block caching; what we are concerned with is
    4649             :  * plan types wherein the executor caches results explicitly, or doesn't
    4650             :  * redo startup calculations, etc.
    4651             :  */
    4652             : static void
    4653     2915176 : cost_rescan(PlannerInfo *root, Path *path,
    4654             :             Cost *rescan_startup_cost,  /* output parameters */
    4655             :             Cost *rescan_total_cost)
    4656             : {
    4657     2915176 :     switch (path->pathtype)
    4658             :     {
    4659       56198 :         case T_FunctionScan:
    4660             : 
    4661             :             /*
    4662             :              * Currently, nodeFunctionscan.c always executes the function to
    4663             :              * completion before returning any rows, and caches the results in
    4664             :              * a tuplestore.  So the function eval cost is all startup cost
    4665             :              * and isn't paid over again on rescans. However, all run costs
    4666             :              * will be paid over again.
    4667             :              */
    4668       56198 :             *rescan_startup_cost = 0;
    4669       56198 :             *rescan_total_cost = path->total_cost - path->startup_cost;
    4670       56198 :             break;
    4671      132126 :         case T_HashJoin:
    4672             : 
    4673             :             /*
    4674             :              * If it's a single-batch join, we don't need to rebuild the hash
    4675             :              * table during a rescan.
    4676             :              */
    4677      132126 :             if (((HashPath *) path)->num_batches == 1)
    4678             :             {
    4679             :                 /* Startup cost is exactly the cost of hash table building */
    4680      132126 :                 *rescan_startup_cost = 0;
    4681      132126 :                 *rescan_total_cost = path->total_cost - path->startup_cost;
    4682             :             }
    4683             :             else
    4684             :             {
    4685             :                 /* Otherwise, no special treatment */
    4686           0 :                 *rescan_startup_cost = path->startup_cost;
    4687           0 :                 *rescan_total_cost = path->total_cost;
    4688             :             }
    4689      132126 :             break;
    4690        8716 :         case T_CteScan:
    4691             :         case T_WorkTableScan:
    4692             :             {
    4693             :                 /*
    4694             :                  * These plan types materialize their final result in a
    4695             :                  * tuplestore or tuplesort object.  So the rescan cost is only
    4696             :                  * cpu_tuple_cost per tuple, unless the result is large enough
    4697             :                  * to spill to disk.
    4698             :                  */
    4699        8716 :                 Cost        run_cost = cpu_tuple_cost * path->rows;
    4700        8716 :                 double      nbytes = relation_byte_size(path->rows,
    4701        8716 :                                                         path->pathtarget->width);
    4702        8716 :                 double      work_mem_bytes = work_mem * (Size) 1024;
    4703             : 
    4704        8716 :                 if (nbytes > work_mem_bytes)
    4705             :                 {
    4706             :                     /* It will spill, so account for re-read cost */
    4707         296 :                     double      npages = ceil(nbytes / BLCKSZ);
    4708             : 
    4709         296 :                     run_cost += seq_page_cost * npages;
    4710             :                 }
    4711        8716 :                 *rescan_startup_cost = 0;
    4712        8716 :                 *rescan_total_cost = run_cost;
    4713             :             }
    4714        8716 :             break;
    4715      985730 :         case T_Material:
    4716             :         case T_Sort:
    4717             :             {
    4718             :                 /*
    4719             :                  * These plan types not only materialize their results, but do
    4720             :                  * not implement qual filtering or projection.  So they are
    4721             :                  * even cheaper to rescan than the ones above.  We charge only
    4722             :                  * cpu_operator_cost per tuple.  (Note: keep that in sync with
    4723             :                  * the run_cost charge in cost_sort, and also see comments in
    4724             :                  * cost_material before you change it.)
    4725             :                  */
    4726      985730 :                 Cost        run_cost = cpu_operator_cost * path->rows;
    4727      985730 :                 double      nbytes = relation_byte_size(path->rows,
    4728      985730 :                                                         path->pathtarget->width);
    4729      985730 :                 double      work_mem_bytes = work_mem * (Size) 1024;
    4730             : 
    4731      985730 :                 if (nbytes > work_mem_bytes)
    4732             :                 {
    4733             :                     /* It will spill, so account for re-read cost */
    4734       11514 :                     double      npages = ceil(nbytes / BLCKSZ);
    4735             : 
    4736       11514 :                     run_cost += seq_page_cost * npages;
    4737             :                 }
    4738      985730 :                 *rescan_startup_cost = 0;
    4739      985730 :                 *rescan_total_cost = run_cost;
    4740             :             }
    4741      985730 :             break;
    4742      299506 :         case T_Memoize:
    4743             :             /* All the hard work is done by cost_memoize_rescan */
    4744      299506 :             cost_memoize_rescan(root, (MemoizePath *) path,
    4745             :                                 rescan_startup_cost, rescan_total_cost);
    4746      299506 :             break;
    4747     1432900 :         default:
    4748     1432900 :             *rescan_startup_cost = path->startup_cost;
    4749     1432900 :             *rescan_total_cost = path->total_cost;
    4750     1432900 :             break;
    4751             :     }
    4752     2915176 : }
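
The Material/Sort arm of cost_rescan() is easy to work through by hand. The sketch below recomputes it outside the planner, with invented row/width numbers, the stock GUC defaults, and relation_byte_size() approximated as rows * width (the real function also adds per-tuple overhead).

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double cpu_operator_cost = 0.0025;      /* default GUC value */
        double seq_page_cost = 1.0;             /* default GUC value */
        double work_mem_bytes = 4096.0 * 1024;  /* work_mem = 4MB default */
        double block_size = 8192.0;             /* BLCKSZ */

        double rows = 1e6, width = 64.0;        /* invented estimates */
        double nbytes = rows * width;           /* ~61 MB: will spill */
        double run_cost = cpu_operator_cost * rows;

        /* Spilled result must be re-read sequentially on each rescan */
        if (nbytes > work_mem_bytes)
            run_cost += seq_page_cost * ceil(nbytes / block_size);

        printf("rescan startup 0, rescan run cost %.1f\n", run_cost);
        return 0;
    }

Here the CPU charge is 2500 and the re-read charge 7813 pages, so the spill term dominates; under work_mem the rescan would cost only the CPU term.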
    4753             : 
    4754             : 
    4755             : /*
    4756             :  * cost_qual_eval
    4757             :  *      Estimate the CPU costs of evaluating a WHERE clause.
    4758             :  *      The input can be either an implicitly-ANDed list of boolean
    4759             :  *      expressions, or a list of RestrictInfo nodes.  (The latter is
    4760             :  *      preferred since it allows caching of the results.)
    4761             :  *      The result includes both a one-time (startup) component,
    4762             :  *      and a per-evaluation component.
    4763             :  *
    4764             :  * Note: in some code paths root can be passed as NULL, resulting in
    4765             :  * slightly worse estimates.
    4766             :  */
    4767             : void
    4768     3969926 : cost_qual_eval(QualCost *cost, List *quals, PlannerInfo *root)
    4769             : {
    4770             :     cost_qual_eval_context context;
    4771             :     ListCell   *l;
    4772             : 
    4773     3969926 :     context.root = root;
    4774     3969926 :     context.total.startup = 0;
    4775     3969926 :     context.total.per_tuple = 0;
    4776             : 
    4777             :     /* We don't charge any cost for the implicit ANDing at top level ... */
    4778             : 
    4779     7498918 :     foreach(l, quals)
    4780             :     {
    4781     3528992 :         Node       *qual = (Node *) lfirst(l);
    4782             : 
    4783     3528992 :         cost_qual_eval_walker(qual, &context);
    4784             :     }
    4785             : 
    4786     3969926 :     *cost = context.total;
    4787     3969926 : }
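
A QualCost is consumed by charging its startup component once and its per_tuple component once per tuple examined. A minimal sketch of that folding, with invented numbers and the default cpu_tuple_cost:

    #include <stdio.h>

    typedef struct { double startup, per_tuple; } QualCost;

    int main(void)
    {
        /* Hypothetical result of cost_qual_eval() on a scan's quals */
        QualCost qc = { .startup = 0.5, .per_tuple = 0.0075 };
        double tuples = 10000.0;        /* invented input row count */
        double cpu_tuple_cost = 0.01;   /* default GUC value */

        /* Startup is paid once; per-tuple work is paid per input row */
        double startup_cost = qc.startup;
        double run_cost = (cpu_tuple_cost + qc.per_tuple) * tuples;

        printf("startup %.2f, total %.2f\n", startup_cost,
               startup_cost + run_cost);
        return 0;
    }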
    4788             : 
    4789             : /*
    4790             :  * cost_qual_eval_node
    4791             :  *      As above, for a single RestrictInfo or expression.
    4792             :  */
    4793             : void
    4794     1794360 : cost_qual_eval_node(QualCost *cost, Node *qual, PlannerInfo *root)
    4795             : {
    4796             :     cost_qual_eval_context context;
    4797             : 
    4798     1794360 :     context.root = root;
    4799     1794360 :     context.total.startup = 0;
    4800     1794360 :     context.total.per_tuple = 0;
    4801             : 
    4802     1794360 :     cost_qual_eval_walker(qual, &context);
    4803             : 
    4804     1794360 :     *cost = context.total;
    4805     1794360 : }
    4806             : 
    4807             : static bool
    4808     8732252 : cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
    4809             : {
    4810     8732252 :     if (node == NULL)
    4811       87530 :         return false;
    4812             : 
    4813             :     /*
    4814             :      * RestrictInfo nodes contain an eval_cost field reserved for this
    4815             :      * routine's use, so that it's not necessary to evaluate the qual clause's
    4816             :      * cost more than once.  If the clause's cost hasn't been computed yet,
    4817             :      * the field's startup value will contain -1.
    4818             :      */
    4819     8644722 :     if (IsA(node, RestrictInfo))
    4820             :     {
    4821     3712906 :         RestrictInfo *rinfo = (RestrictInfo *) node;
    4822             : 
    4823     3712906 :         if (rinfo->eval_cost.startup < 0)
    4824             :         {
    4825             :             cost_qual_eval_context locContext;
    4826             : 
    4827      579698 :             locContext.root = context->root;
    4828      579698 :             locContext.total.startup = 0;
    4829      579698 :             locContext.total.per_tuple = 0;
    4830             : 
    4831             :             /*
    4832             :              * For an OR clause, recurse into the marked-up tree so that we
    4833             :              * set the eval_cost for contained RestrictInfos too.
    4834             :              */
    4835      579698 :             if (rinfo->orclause)
    4836       10180 :                 cost_qual_eval_walker((Node *) rinfo->orclause, &locContext);
    4837             :             else
    4838      569518 :                 cost_qual_eval_walker((Node *) rinfo->clause, &locContext);
    4839             : 
    4840             :             /*
    4841             :              * If the RestrictInfo is marked pseudoconstant, it will be tested
    4842             :              * only once, so treat its cost as all startup cost.
    4843             :              */
    4844      579698 :             if (rinfo->pseudoconstant)
    4845             :             {
    4846             :                 /* count one execution during startup */
    4847        9880 :                 locContext.total.startup += locContext.total.per_tuple;
    4848        9880 :                 locContext.total.per_tuple = 0;
    4849             :             }
    4850      579698 :             rinfo->eval_cost = locContext.total;
    4851             :         }
    4852     3712906 :         context->total.startup += rinfo->eval_cost.startup;
    4853     3712906 :         context->total.per_tuple += rinfo->eval_cost.per_tuple;
    4854             :         /* do NOT recurse into children */
    4855     3712906 :         return false;
    4856             :     }
    4857             : 
    4858             :     /*
    4859             :      * For each operator or function node in the given tree, we charge the
    4860             :      * estimated execution cost given by pg_proc.procost (remember to multiply
    4861             :      * this by cpu_operator_cost).
    4862             :      *
    4863             :      * Vars and Consts are charged zero, and so are boolean operators (AND,
    4864             :      * OR, NOT). Simplistic, but a lot better than no model at all.
    4865             :      *
    4866             :      * Should we try to account for the possibility of short-circuit
    4867             :      * evaluation of AND/OR?  Probably *not*, because that would make the
    4868             :      * results depend on the clause ordering, and we are not in any position
    4869             :      * to expect that the current ordering of the clauses is the one that's
    4870             :      * going to end up being used.  The above per-RestrictInfo caching would
    4871             :      * not mix well with trying to re-order clauses anyway.
    4872             :      *
    4873             :      * Another issue that is entirely ignored here is that if a set-returning
    4874             :      * function is below top level in the tree, the functions/operators above
    4875             :      * it will need to be evaluated multiple times.  In practical use, such
    4876             :      * cases arise so seldom as to not be worth the added complexity needed;
    4877             :      * moreover, since our rowcount estimates for functions tend to be pretty
    4878             :      * phony, the results would also be pretty phony.
    4879             :      */
    4880     4931816 :     if (IsA(node, FuncExpr))
    4881             :     {
    4882      347878 :         add_function_cost(context->root, ((FuncExpr *) node)->funcid, node,
    4883             :                           &context->total);
    4884             :     }
    4885     4583938 :     else if (IsA(node, OpExpr) ||
    4886     3948680 :              IsA(node, DistinctExpr) ||
    4887     3947422 :              IsA(node, NullIfExpr))
    4888             :     {
    4889             :         /* rely on struct equivalence to treat these all alike */
    4890      636640 :         set_opfuncid((OpExpr *) node);
    4891      636640 :         add_function_cost(context->root, ((OpExpr *) node)->opfuncid, node,
    4892             :                           &context->total);
    4893             :     }
    4894     3947298 :     else if (IsA(node, ScalarArrayOpExpr))
    4895             :     {
    4896       44180 :         ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) node;
    4897       44180 :         Node       *arraynode = (Node *) lsecond(saop->args);
    4898             :         QualCost    sacosts;
    4899             :         QualCost    hcosts;
    4900       44180 :         double      estarraylen = estimate_array_length(context->root, arraynode);
    4901             : 
    4902       44180 :         set_sa_opfuncid(saop);
    4903       44180 :         sacosts.startup = sacosts.per_tuple = 0;
    4904       44180 :         add_function_cost(context->root, saop->opfuncid, NULL,
    4905             :                           &sacosts);
    4906             : 
    4907       44180 :         if (OidIsValid(saop->hashfuncid))
    4908             :         {
    4909             :             /* Handle costs for hashed ScalarArrayOpExpr */
    4910         438 :             hcosts.startup = hcosts.per_tuple = 0;
    4911             : 
    4912         438 :             add_function_cost(context->root, saop->hashfuncid, NULL, &hcosts);
    4913         438 :             context->total.startup += sacosts.startup + hcosts.startup;
    4914             : 
    4915             :             /* Estimate the cost of building the hashtable. */
    4916         438 :             context->total.startup += estarraylen * hcosts.per_tuple;
    4917             : 
    4918             :             /*
    4919             :              * XXX should we charge a little bit for sacosts.per_tuple when
    4920             :              * building the table, or is it ok to assume there will be zero
    4921             :              * hash collisions?
    4922             :              */
    4923             : 
    4924             :             /*
    4925             :              * Charge for hashtable lookups.  Charge a single hash and a
    4926             :              * single comparison.
    4927             :              */
    4928         438 :             context->total.per_tuple += hcosts.per_tuple + sacosts.per_tuple;
    4929             :         }
    4930             :         else
    4931             :         {
    4932             :             /*
    4933             :              * Estimate that the operator will be applied to about half of the
    4934             :              * array elements before the answer is determined.
    4935             :              */
    4936       43742 :             context->total.startup += sacosts.startup;
    4937       87484 :             context->total.per_tuple += sacosts.per_tuple *
    4938       43742 :                 estimate_array_length(context->root, arraynode) * 0.5;
    4939             :         }
    4940             :     }
    4941     3903118 :     else if (IsA(node, Aggref) ||
    4942     3847844 :              IsA(node, WindowFunc))
    4943             :     {
    4944             :         /*
    4945             :          * Aggref and WindowFunc nodes are (and should be) treated like Vars,
    4946             :          * ie, zero execution cost in the current model, because they behave
    4947             :          * essentially like Vars at execution.  We disregard the costs of
    4948             :          * their input expressions for the same reason.  The actual execution
    4949             :          * costs of the aggregate/window functions and their arguments have to
    4950             :          * be factored into plan-node-specific costing of the Agg or WindowAgg
    4951             :          * plan node.
    4952             :          */
    4953       58798 :         return false;           /* don't recurse into children */
    4954             :     }
    4955     3844320 :     else if (IsA(node, GroupingFunc))
    4956             :     {
    4957             :         /* Treat this as having cost 1 */
    4958         422 :         context->total.per_tuple += cpu_operator_cost;
    4959         422 :         return false;           /* don't recurse into children */
    4960             :     }
    4961     3843898 :     else if (IsA(node, CoerceViaIO))
    4962             :     {
    4963       22202 :         CoerceViaIO *iocoerce = (CoerceViaIO *) node;
    4964             :         Oid         iofunc;
    4965             :         Oid         typioparam;
    4966             :         bool        typisvarlena;
    4967             : 
    4968             :         /* check the result type's input function */
    4969       22202 :         getTypeInputInfo(iocoerce->resulttype,
    4970             :                          &iofunc, &typioparam);
    4971       22202 :         add_function_cost(context->root, iofunc, NULL,
    4972             :                           &context->total);
    4973             :         /* check the input type's output function */
    4974       22202 :         getTypeOutputInfo(exprType((Node *) iocoerce->arg),
    4975             :                           &iofunc, &typisvarlena);
    4976       22202 :         add_function_cost(context->root, iofunc, NULL,
    4977             :                           &context->total);
    4978             :     }
    4979     3821696 :     else if (IsA(node, ArrayCoerceExpr))
    4980             :     {
    4981        5288 :         ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node;
    4982             :         QualCost    perelemcost;
    4983             : 
    4984        5288 :         cost_qual_eval_node(&perelemcost, (Node *) acoerce->elemexpr,
    4985             :                             context->root);
    4986        5288 :         context->total.startup += perelemcost.startup;
    4987        5288 :         if (perelemcost.per_tuple > 0)
    4988          66 :             context->total.per_tuple += perelemcost.per_tuple *
    4989          66 :                 estimate_array_length(context->root, (Node *) acoerce->arg);
    4990             :     }
    4991     3816408 :     else if (IsA(node, RowCompareExpr))
    4992             :     {
    4993             :         /* Conservatively assume we will check all the columns */
    4994         252 :         RowCompareExpr *rcexpr = (RowCompareExpr *) node;
    4995             :         ListCell   *lc;
    4996             : 
    4997         810 :         foreach(lc, rcexpr->opnos)
    4998             :         {
    4999         558 :             Oid         opid = lfirst_oid(lc);
    5000             : 
    5001         558 :             add_function_cost(context->root, get_opcode(opid), NULL,
    5002             :                               &context->total);
    5003             :         }
    5004             :     }
    5005     3816156 :     else if (IsA(node, MinMaxExpr) ||
    5006     3815896 :              IsA(node, SQLValueFunction) ||
    5007     3811126 :              IsA(node, XmlExpr) ||
    5008     3810424 :              IsA(node, CoerceToDomain) ||
    5009     3800566 :              IsA(node, NextValueExpr) ||
    5010     3800204 :              IsA(node, JsonExpr))
    5011             :     {
    5012             :         /* Treat all these as having cost 1 */
    5013       18500 :         context->total.per_tuple += cpu_operator_cost;
    5014             :     }
    5015     3797656 :     else if (IsA(node, SubLink))
    5016             :     {
    5017             :         /* This routine should not be applied to un-planned expressions */
    5018           0 :         elog(ERROR, "cannot handle unplanned sub-select");
    5019             :     }
    5020     3797656 :     else if (IsA(node, SubPlan))
    5021             :     {
    5022             :         /*
    5023             :          * A subplan node in an expression typically indicates that the
    5024             :          * subplan will be executed on each evaluation, so charge accordingly.
    5025             :          * (Sub-selects that can be executed as InitPlans have already been
    5026             :          * removed from the expression.)
    5027             :          */
    5028       45338 :         SubPlan    *subplan = (SubPlan *) node;
    5029             : 
    5030       45338 :         context->total.startup += subplan->startup_cost;
    5031       45338 :         context->total.per_tuple += subplan->per_call_cost;
    5032             : 
    5033             :         /*
    5034             :          * We don't want to recurse into the testexpr, because it was already
    5035             :          * counted in the SubPlan node's costs.  So we're done.
    5036             :          */
    5037       45338 :         return false;
    5038             :     }
    5039     3752318 :     else if (IsA(node, AlternativeSubPlan))
    5040             :     {
    5041             :         /*
    5042             :          * Arbitrarily use the first alternative plan for costing.  (We should
    5043             :          * certainly only include one alternative, and we don't yet have
    5044             :          * enough information to know which one the executor is most likely to
    5045             :          * use.)
    5046             :          */
    5047        1910 :         AlternativeSubPlan *asplan = (AlternativeSubPlan *) node;
    5048             : 
    5049        1910 :         return cost_qual_eval_walker((Node *) linitial(asplan->subplans),
    5050             :                                      context);
    5051             :     }
    5052     3750408 :     else if (IsA(node, PlaceHolderVar))
    5053             :     {
    5054             :         /*
    5055             :          * A PlaceHolderVar should be given cost zero when considering general
    5056             :          * expression evaluation costs.  The expense of doing the contained
    5057             :          * expression is charged as part of the tlist eval costs of the scan
    5058             :          * or join where the PHV is first computed (see set_rel_width and
    5059             :          * add_placeholders_to_joinrel).  If we charged it again here, we'd be
    5060             :          * double-counting the cost for each level of plan that the PHV
    5061             :          * bubbles up through.  Hence, return without recursing into the
    5062             :          * phexpr.
    5063             :          */
    5064        5154 :         return false;
    5065             :     }
    5066             : 
    5067             :     /* recurse into children */
    5068     4820194 :     return expression_tree_walker(node, cost_qual_eval_walker, context);
    5069             : }
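
Two of the walker's charging rules lend themselves to a worked example: an ordinary operator costs pg_proc.procost times cpu_operator_cost, and an un-hashed ScalarArrayOpExpr charges that operator cost against half the estimated array length. The sketch below hard-codes procost = 1 and an array length of 10; both are assumptions for illustration.

    #include <stdio.h>

    int main(void)
    {
        double cpu_operator_cost = 0.0025;  /* default GUC value */

        /* An operator whose function has pg_proc.procost = 1 (the
         * common case) is charged procost * cpu_operator_cost. */
        double op_cost = 1.0 * cpu_operator_cost;

        /* "x = ANY(array)" without hashing: assume the comparison is
         * applied to half the array elements before the answer is
         * determined. */
        double array_len = 10.0;    /* estimate_array_length() stand-in */
        double saop_cost = op_cost * array_len * 0.5;

        printf("operator %.4f per tuple, = ANY over %d elems %.4f per tuple\n",
               op_cost, (int) array_len, saop_cost);
        return 0;
    }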
    5070             : 
    5071             : /*
    5072             :  * get_restriction_qual_cost
    5073             :  *    Compute evaluation costs of a baserel's restriction quals, plus any
    5074             :  *    movable join quals that have been pushed down to the scan.
    5075             :  *    Results are returned into *qpqual_cost.
    5076             :  *
    5077             :  * This is a convenience subroutine that works for seqscans and other cases
    5078             :  * where all the given quals will be evaluated the hard way.  It's not useful
    5079             :  * for cost_index(), for example, where the index machinery takes care of
    5080             :  * some of the quals.  We assume baserestrictcost was previously set by
    5081             :  * set_baserel_size_estimates().
    5082             :  */
    5083             : static void
    5084     1060128 : get_restriction_qual_cost(PlannerInfo *root, RelOptInfo *baserel,
    5085             :                           ParamPathInfo *param_info,
    5086             :                           QualCost *qpqual_cost)
    5087             : {
    5088     1060128 :     if (param_info)
    5089             :     {
    5090             :         /* Include costs of pushed-down clauses */
    5091      232504 :         cost_qual_eval(qpqual_cost, param_info->ppi_clauses, root);
    5092             : 
    5093      232504 :         qpqual_cost->startup += baserel->baserestrictcost.startup;
    5094      232504 :         qpqual_cost->per_tuple += baserel->baserestrictcost.per_tuple;
    5095             :     }
    5096             :     else
    5097      827624 :         *qpqual_cost = baserel->baserestrictcost;
    5098     1060128 : }
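
In other words, a parameterized scan's qual cost is simply the rel's cached baserestrictcost plus the cost of the pushed-down ppi_clauses. A trivial sketch with invented component costs:

    #include <stdio.h>

    typedef struct { double startup, per_tuple; } QualCost;

    int main(void)
    {
        /* Invented numbers: the rel's own restriction clauses, plus
         * the clauses pushed down into a parameterized scan. */
        QualCost baserestrictcost = { 0.0, 0.0050 };
        QualCost ppi_clause_cost  = { 0.0, 0.0025 };

        QualCost qpqual = {
            baserestrictcost.startup + ppi_clause_cost.startup,
            baserestrictcost.per_tuple + ppi_clause_cost.per_tuple,
        };

        printf("qual cost: startup %.4f, per tuple %.4f\n",
               qpqual.startup, qpqual.per_tuple);
        return 0;
    }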
    5099             : 
    5100             : 
    5101             : /*
    5102             :  * compute_semi_anti_join_factors
    5103             :  *    Estimate how much of the inner input a SEMI, ANTI, or inner_unique join
    5104             :  *    can be expected to scan.
    5105             :  *
    5106             :  * In a hash or nestloop SEMI/ANTI join, the executor will stop scanning
    5107             :  * inner rows as soon as it finds a match to the current outer row.
    5108             :  * The same happens if we have detected the inner rel is unique.
    5109             :  * We should therefore adjust some of the cost components for this effect.
    5110             :  * This function computes some estimates needed for these adjustments.
    5111             :  * These estimates will be the same regardless of the particular paths used
    5112             :  * for the outer and inner relation, so we compute these once and then pass
    5113             :  * them to all the join cost estimation functions.
    5114             :  *
    5115             :  * Input parameters:
    5116             :  *  joinrel: join relation under consideration
    5117             :  *  outerrel: outer relation under consideration
    5118             :  *  innerrel: inner relation under consideration
    5119             :  *  jointype: if not JOIN_SEMI or JOIN_ANTI, we assume it's inner_unique
    5120             :  *  sjinfo: SpecialJoinInfo relevant to this join
    5121             :  *  restrictlist: join quals
    5122             :  * Output parameters:
    5123             :  *  *semifactors is filled in (see pathnodes.h for field definitions)
    5124             :  */
    5125             : void
    5126      213488 : compute_semi_anti_join_factors(PlannerInfo *root,
    5127             :                                RelOptInfo *joinrel,
    5128             :                                RelOptInfo *outerrel,
    5129             :                                RelOptInfo *innerrel,
    5130             :                                JoinType jointype,
    5131             :                                SpecialJoinInfo *sjinfo,
    5132             :                                List *restrictlist,
    5133             :                                SemiAntiJoinFactors *semifactors)
    5134             : {
    5135             :     Selectivity jselec;
    5136             :     Selectivity nselec;
    5137             :     Selectivity avgmatch;
    5138             :     SpecialJoinInfo norm_sjinfo;
    5139             :     List       *joinquals;
    5140             :     ListCell   *l;
    5141             : 
    5142             :     /*
    5143             :      * In an ANTI join, we must ignore clauses that are "pushed down", since
    5144             :      * those won't affect the match logic.  In a SEMI join, we do not
    5145             :      * distinguish joinquals from "pushed down" quals, so just use the whole
    5146             :      * restrictinfo list.  For other outer join types, we should consider only
    5147             :      * non-pushed-down quals, so that this devolves to an IS_OUTER_JOIN check.
    5148             :      */
    5149      213488 :     if (IS_OUTER_JOIN(jointype))
    5150             :     {
    5151       79452 :         joinquals = NIL;
    5152      175456 :         foreach(l, restrictlist)
    5153             :         {
    5154       96004 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    5155             : 
    5156       96004 :             if (!RINFO_IS_PUSHED_DOWN(rinfo, joinrel->relids))
    5157       90246 :                 joinquals = lappend(joinquals, rinfo);
    5158             :         }
    5159             :     }
    5160             :     else
    5161      134036 :         joinquals = restrictlist;
    5162             : 
    5163             :     /*
    5164             :      * Get the JOIN_SEMI or JOIN_ANTI selectivity of the join clauses.
    5165             :      */
    5166      213488 :     jselec = clauselist_selectivity(root,
    5167             :                                     joinquals,
    5168             :                                     0,
    5169             :                                     (jointype == JOIN_ANTI) ? JOIN_ANTI : JOIN_SEMI,
    5170             :                                     sjinfo);
    5171             : 
    5172             :     /*
    5173             :      * Also get the normal inner-join selectivity of the join clauses.
    5174             :      */
    5175      213488 :     init_dummy_sjinfo(&norm_sjinfo, outerrel->relids, innerrel->relids);
    5176             : 
    5177      213488 :     nselec = clauselist_selectivity(root,
    5178             :                                     joinquals,
    5179             :                                     0,
    5180             :                                     JOIN_INNER,
    5181             :                                     &norm_sjinfo);
    5182             : 
    5183             :     /* Avoid leaking a lot of ListCells */
    5184      213488 :     if (IS_OUTER_JOIN(jointype))
    5185       79452 :         list_free(joinquals);
    5186             : 
    5187             :     /*
    5188             :      * jselec can be interpreted as the fraction of outer-rel rows that have
    5189             :      * any matches (this is true for both SEMI and ANTI cases).  And nselec is
    5190             :      * the fraction of the Cartesian product that matches.  So, the average
    5191             :      * number of matches for each outer-rel row that has at least one match is
    5192             :      * nselec * inner_rows / jselec.
    5193             :      *
    5194             :      * Note: it is correct to use the inner rel's "rows" count here, even
    5195             :      * though we might later be considering a parameterized inner path with
    5196             :      * fewer rows.  This is because we have included all the join clauses in
    5197             :      * the selectivity estimate.
    5198             :      */
    5199      213488 :     if (jselec > 0)              /* protect against zero divide */
    5200             :     {
    5201      212984 :         avgmatch = nselec * innerrel->rows / jselec;
    5202             :         /* Clamp to sane range */
    5203      212984 :         avgmatch = Max(1.0, avgmatch);
    5204             :     }
    5205             :     else
    5206         504 :         avgmatch = 1.0;
    5207             : 
    5208      213488 :     semifactors->outer_match_frac = jselec;
    5209      213488 :     semifactors->match_count = avgmatch;
    5210      213488 : }
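
A worked example of the avgmatch formula may help. Suppose (invented numbers) 20% of outer rows have at least one match, 1% of the Cartesian product matches, and the inner rel has 1000 rows; then each matched outer row expects nselec * inner_rows / jselec = 50 inner matches before the join can stop early.

    #include <stdio.h>

    #define Max(a,b) ((a) > (b) ? (a) : (b))

    int main(void)
    {
        double jselec = 0.20;       /* SEMI/ANTI selectivity (invented) */
        double nselec = 0.01;       /* inner-join selectivity (invented) */
        double inner_rows = 1000.0;
        double avgmatch;

        if (jselec > 0)             /* protect against zero divide */
        {
            avgmatch = nselec * inner_rows / jselec;
            avgmatch = Max(1.0, avgmatch);  /* clamp to sane range */
        }
        else
            avgmatch = 1.0;

        printf("outer_match_frac %.2f, match_count %.1f\n",
               jselec, avgmatch);
        return 0;
    }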
    5211             : 
    5212             : /*
    5213             :  * has_indexed_join_quals
    5214             :  *    Check whether all the joinquals of a nestloop join are used as
    5215             :  *    inner index quals.
    5216             :  *
    5217             :  * If the inner path of a SEMI/ANTI join is an indexscan (including bitmap
    5218             :  * indexscan) that uses all the joinquals as indexquals, we can assume that an
    5219             :  * unmatched outer tuple is cheap to process, whereas otherwise it's probably
    5220             :  * expensive.
    5221             :  */
    5222             : static bool
    5223      910432 : has_indexed_join_quals(NestPath *path)
    5224             : {
    5225      910432 :     JoinPath   *joinpath = &path->jpath;
    5226      910432 :     Relids      joinrelids = joinpath->path.parent->relids;
    5227      910432 :     Path       *innerpath = joinpath->innerjoinpath;
    5228             :     List       *indexclauses;
    5229             :     bool        found_one;
    5230             :     ListCell   *lc;
    5231             : 
    5232             :     /* If join still has quals to evaluate, it's not fast */
    5233      910432 :     if (joinpath->joinrestrictinfo != NIL)
    5234      645950 :         return false;
    5235             :     /* Nor if the inner path isn't parameterized at all */
    5236      264482 :     if (innerpath->param_info == NULL)
    5237        4800 :         return false;
    5238             : 
    5239             :     /* Find the indexclauses list for the inner scan */
    5240      259682 :     switch (innerpath->pathtype)
    5241             :     {
    5242      154940 :         case T_IndexScan:
    5243             :         case T_IndexOnlyScan:
    5244      154940 :             indexclauses = ((IndexPath *) innerpath)->indexclauses;
    5245      154940 :             break;
    5246         270 :         case T_BitmapHeapScan:
    5247             :             {
    5248             :                 /* Accept only a simple bitmap scan, not AND/OR cases */
    5249         270 :                 Path       *bmqual = ((BitmapHeapPath *) innerpath)->bitmapqual;
    5250             : 
    5251         270 :                 if (IsA(bmqual, IndexPath))
    5252         222 :                     indexclauses = ((IndexPath *) bmqual)->indexclauses;
    5253             :                 else
    5254          48 :                     return false;
    5255         222 :                 break;
    5256             :             }
    5257      104472 :         default:
    5258             : 
    5259             :             /*
    5260             :              * If it's not a simple indexscan, it probably doesn't run quickly
    5261             :              * for zero rows out, even if it's a parameterized path using all
    5262             :              * the joinquals.
    5263             :              */
    5264      104472 :             return false;
    5265             :     }
    5266             : 
    5267             :     /*
    5268             :      * Examine the inner path's param clauses.  Any that are from the outer
    5269             :      * path must be found in the indexclauses list, either exactly or in an
    5270             :      * equivalent form generated by equivclass.c.  Also, we must find at least
    5271             :      * one such clause, else it's a clauseless join which isn't fast.
    5272             :      */
    5273      155162 :     found_one = false;
    5274      308648 :     foreach(lc, innerpath->param_info->ppi_clauses)
    5275             :     {
    5276      158366 :         RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);
    5277             : 
    5278      158366 :         if (join_clause_is_movable_into(rinfo,
    5279      158366 :                                         innerpath->parent->relids,
    5280             :                                         joinrelids))
    5281             :         {
    5282      157862 :             if (!is_redundant_with_indexclauses(rinfo, indexclauses))
    5283        4880 :                 return false;
    5284      152982 :             found_one = true;
    5285             :         }
    5286             :     }
    5287      150282 :     return found_one;
    5288             : }
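
Stripped of the planner data structures, the test reduces to: every movable parameterized clause must be redundant with some index clause, and at least one such clause must exist. The sketch below models that over plain integer clause IDs; it is an illustration of the logic only, not the RestrictInfo-based implementation.

    #include <stdbool.h>
    #include <stdio.h>

    /* Return true if every parameterized clause is absorbed by the
     * index, and at least one is (clauseless joins aren't fast). */
    static bool all_quals_indexed(const int *param, int nparam,
                                  const int *index, int nindex)
    {
        bool found_one = false;

        for (int i = 0; i < nparam; i++)
        {
            bool matched = false;

            for (int j = 0; j < nindex; j++)
                if (param[i] == index[j])
                    matched = true;
            if (!matched)
                return false;   /* a joinqual the index can't absorb */
            found_one = true;
        }
        return found_one;
    }

    int main(void)
    {
        int param[] = {1, 2}, index[] = {1, 2, 3};

        printf("%s\n",
               all_quals_indexed(param, 2, index, 3) ? "fast" : "slow");
        return 0;
    }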
    5289             : 
    5290             : 
    5291             : /*
    5292             :  * approx_tuple_count
    5293             :  *      Quick-and-dirty estimation of the number of join rows passing
    5294             :  *      a set of qual conditions.
    5295             :  *
    5296             :  * The quals can be either an implicitly-ANDed list of boolean expressions,
    5297             :  * or a list of RestrictInfo nodes (typically the latter).
    5298             :  *
    5299             :  * We intentionally compute the selectivity under JOIN_INNER rules, even
    5300             :  * if it's some type of outer join.  This is appropriate because we are
    5301             :  * trying to figure out how many tuples pass the initial merge or hash
    5302             :  * join step.
    5303             :  *
    5304             :  * This is quick-and-dirty because we bypass clauselist_selectivity, and
    5305             :  * simply multiply the independent clause selectivities together.  Now
    5306             :  * clauselist_selectivity often can't do any better than that anyhow, but
    5307             :  * for some situations (such as range constraints) it is smarter.  However,
    5308             :  * we can't effectively cache the results of clauselist_selectivity, whereas
    5309             :  * the individual clause selectivities can be and are cached.
    5310             :  *
    5311             :  * Since we are only using the results to estimate how many potential
    5312             :  * output tuples are generated and passed through qpqual checking, it
    5313             :  * seems OK to live with the approximation.
    5314             :  */
    5315             : static double
    5316      499364 : approx_tuple_count(PlannerInfo *root, JoinPath *path, List *quals)
    5317             : {
    5318             :     double      tuples;
    5319      499364 :     double      outer_tuples = path->outerjoinpath->rows;
    5320      499364 :     double      inner_tuples = path->innerjoinpath->rows;
    5321             :     SpecialJoinInfo sjinfo;
    5322      499364 :     Selectivity selec = 1.0;
    5323             :     ListCell   *l;
    5324             : 
    5325             :     /*
    5326             :      * Make up a SpecialJoinInfo for JOIN_INNER semantics.
    5327             :      */
    5328      499364 :     init_dummy_sjinfo(&sjinfo, path->outerjoinpath->parent->relids,
    5329      499364 :                       path->innerjoinpath->parent->relids);
    5330             : 
    5331             :     /* Get the approximate selectivity */
    5332     1080070 :     foreach(l, quals)
    5333             :     {
    5334      580706 :         Node       *qual = (Node *) lfirst(l);
    5335             : 
    5336             :         /* Note that clause_selectivity will be able to cache its result */
    5337      580706 :         selec *= clause_selectivity(root, qual, 0, JOIN_INNER, &sjinfo);
    5338             :     }
    5339             : 
    5340             :     /* Apply it to the input relation sizes */
    5341      499364 :     tuples = selec * outer_tuples * inner_tuples;
    5342             : 
    5343      499364 :     return clamp_row_est(tuples);
    5344             : }
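
For example (invented selectivities), two cached clause selectivities of 0.1 and 0.5 multiply to 0.05, which applied to a 1000 x 200 Cartesian product yields 10000 rows. A standalone sketch, with clamp_row_est() simplified to round-and-floor-at-one:

    #include <math.h>
    #include <stdio.h>

    static double clamp_row_est(double nrows)   /* simplified stand-in */
    {
        return nrows <= 1.0 ? 1.0 : rint(nrows);
    }

    int main(void)
    {
        /* Invented per-clause selectivities, multiplied as if
         * independent -- the "quick and dirty" part. */
        double quals[] = {0.1, 0.5};
        double selec = 1.0;

        for (int i = 0; i < 2; i++)
            selec *= quals[i];

        double outer_tuples = 1000.0, inner_tuples = 200.0;

        printf("approx tuples: %.0f\n",
               clamp_row_est(selec * outer_tuples * inner_tuples));
        return 0;
    }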
    5345             : 
    5346             : 
    5347             : /*
    5348             :  * set_baserel_size_estimates
    5349             :  *      Set the size estimates for the given base relation.
    5350             :  *
    5351             :  * The rel's targetlist and restrictinfo list must have been constructed
    5352             :  * already, and rel->tuples must be set.
    5353             :  *
    5354             :  * We set the following fields of the rel node:
    5355             :  *  rows: the estimated number of output tuples (after applying
    5356             :  *        restriction clauses).
    5357             :  *  width: the estimated average output tuple width in bytes.
    5358             :  *  baserestrictcost: estimated cost of evaluating baserestrictinfo clauses.
    5359             :  */
    5360             : void
    5361      501452 : set_baserel_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5362             : {
    5363             :     double      nrows;
    5364             : 
    5365             :     /* Should only be applied to base relations */
    5366             :     Assert(rel->relid > 0);
    5367             : 
    5368     1002874 :     nrows = rel->tuples *
    5369      501452 :         clauselist_selectivity(root,
    5370             :                                rel->baserestrictinfo,
    5371             :                                0,
    5372             :                                JOIN_INNER,
    5373             :                                NULL);
    5374             : 
    5375      501422 :     rel->rows = clamp_row_est(nrows);
    5376             : 
    5377      501422 :     cost_qual_eval(&rel->baserestrictcost, rel->baserestrictinfo, root);
    5378             : 
    5379      501422 :     set_rel_width(root, rel);
    5380      501422 : }
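
The clamping step matters: raw tuples-times-selectivity can easily come out fractional or below one, and downstream cost formulas divide by row counts. A sketch of the arithmetic, using a simplified clamp_row_est() (the real one rounds to an integer and never returns less than one row):

    #include <math.h>
    #include <stdio.h>

    static double clamp_row_est(double nrows)   /* simplified stand-in */
    {
        return nrows <= 1.0 ? 1.0 : rint(nrows);
    }

    int main(void)
    {
        double tuples = 250000.0;       /* rel->tuples (invented) */
        double selectivity = 0.0000031; /* clauselist_selectivity() result */

        /* 0.775 raw rows clamps up to 1: the planner never believes
         * in fractional or zero-row base relations. */
        printf("rel->rows = %.0f\n",
               clamp_row_est(tuples * selectivity));
        return 0;
    }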
    5381             : 
    5382             : /*
    5383             :  * get_parameterized_baserel_size
    5384             :  *      Make a size estimate for a parameterized scan of a base relation.
    5385             :  *
    5386             :  * 'param_clauses' lists the additional join clauses to be used.
    5387             :  *
    5388             :  * set_baserel_size_estimates must have been applied already.
    5389             :  */
    5390             : double
    5391      149102 : get_parameterized_baserel_size(PlannerInfo *root, RelOptInfo *rel,
    5392             :                                List *param_clauses)
    5393             : {
    5394             :     List       *allclauses;
    5395             :     double      nrows;
    5396             : 
    5397             :     /*
    5398             :      * Estimate the number of rows returned by the parameterized scan, knowing
    5399             :      * that it will apply all the extra join clauses as well as the rel's own
    5400             :      * restriction clauses.  Note that we force the clauses to be treated as
    5401             :      * non-join clauses during selectivity estimation.
    5402             :      */
    5403      149102 :     allclauses = list_concat_copy(param_clauses, rel->baserestrictinfo);
    5404      298204 :     nrows = rel->tuples *
    5405      149102 :         clauselist_selectivity(root,
    5406             :                                allclauses,
    5407      149102 :                                rel->relid,   /* do not use 0! */
    5408             :                                JOIN_INNER,
    5409             :                                NULL);
    5410      149102 :     nrows = clamp_row_est(nrows);
    5411             :     /* For safety, make sure result is not more than the base estimate */
    5412      149102 :     if (nrows > rel->rows)
    5413           0 :         nrows = rel->rows;
    5414      149102 :     return nrows;
    5415             : }
    5416             : 
    5417             : /*
    5418             :  * set_joinrel_size_estimates
    5419             :  *      Set the size estimates for the given join relation.
    5420             :  *
    5421             :  * The rel's targetlist must have been constructed already, and a
    5422             :  * restriction clause list that matches the given component rels must
    5423             :  * be provided.
    5424             :  *
    5425             :  * Since there is more than one way to make a joinrel for more than two
    5426             :  * base relations, the results we get here could depend on which component
    5427             :  * rel pair is provided.  In theory we should get the same answers no matter
    5428             :  * which pair is provided; in practice, since the selectivity estimation
    5429             :  * routines don't handle all cases equally well, we might not.  But there's
    5430             :  * not much to be done about it.  (Would it make sense to repeat the
    5431             :  * calculations for each pair of input rels that's encountered, and somehow
    5432             :  * average the results?  Probably way more trouble than it's worth, and
    5433             :  * anyway we must keep the rowcount estimate the same for all paths for the
    5434             :  * joinrel.)
    5435             :  *
    5436             :  * We set only the rows field here.  The reltarget field was already set by
    5437             :  * build_joinrel_tlist, and baserestrictcost is not used for join rels.
    5438             :  */
    5439             : void
    5440      216968 : set_joinrel_size_estimates(PlannerInfo *root, RelOptInfo *rel,
    5441             :                            RelOptInfo *outer_rel,
    5442             :                            RelOptInfo *inner_rel,
    5443             :                            SpecialJoinInfo *sjinfo,
    5444             :                            List *restrictlist)
    5445             : {
    5446      216968 :     rel->rows = calc_joinrel_size_estimate(root,
    5447             :                                            rel,
    5448             :                                            outer_rel,
    5449             :                                            inner_rel,
    5450             :                                            outer_rel->rows,
    5451             :                                            inner_rel->rows,
    5452             :                                            sjinfo,
    5453             :                                            restrictlist);
    5454      216968 : }
    5455             : 
    5456             : /*
    5457             :  * get_parameterized_joinrel_size
    5458             :  *      Make a size estimate for a parameterized scan of a join relation.
    5459             :  *
    5460             :  * 'rel' is the joinrel under consideration.
    5461             :  * 'outer_path', 'inner_path' are (probably also parameterized) Paths that
    5462             :  *      produce the relations being joined.
    5463             :  * 'sjinfo' is any SpecialJoinInfo relevant to this join.
    5464             :  * 'restrict_clauses' lists the join clauses that need to be applied at the
    5465             :  * join node (including any movable clauses that were moved down to this join,
    5466             :  * and not including any movable clauses that were pushed down into the
    5467             :  * child paths).
    5468             :  *
    5469             :  * set_joinrel_size_estimates must have been applied already.
    5470             :  */
    5471             : double
    5472        8170 : get_parameterized_joinrel_size(PlannerInfo *root, RelOptInfo *rel,
    5473             :                                Path *outer_path,
    5474             :                                Path *inner_path,
    5475             :                                SpecialJoinInfo *sjinfo,
    5476             :                                List *restrict_clauses)
    5477             : {
    5478             :     double      nrows;
    5479             : 
    5480             :     /*
    5481             :      * Estimate the number of rows returned by the parameterized join as the
    5482             :      * sizes of the input paths times the selectivity of the clauses that have
    5483             :      * ended up at this join node.
    5484             :      *
    5485             :      * As with set_joinrel_size_estimates, the rowcount estimate could depend
    5486             :      * on the pair of input paths provided, though ideally we'd get the same
    5487             :      * estimate for any pair with the same parameterization.
    5488             :      */
    5489        8170 :     nrows = calc_joinrel_size_estimate(root,
    5490             :                                        rel,
    5491             :                                        outer_path->parent,
    5492             :                                        inner_path->parent,
    5493             :                                        outer_path->rows,
    5494             :                                        inner_path->rows,
    5495             :                                        sjinfo,
    5496             :                                        restrict_clauses);
    5497             :     /* For safety, make sure result is not more than the base estimate */
    5498        8170 :     if (nrows > rel->rows)
    5499          12 :         nrows = rel->rows;
    5500        8170 :     return nrows;
    5501             : }
    5502             : 
    5503             : /*
    5504             :  * calc_joinrel_size_estimate
    5505             :  *      Workhorse for set_joinrel_size_estimates and
    5506             :  *      get_parameterized_joinrel_size.
    5507             :  *
    5508             :  * outer_rel/inner_rel are the relations being joined, but they should be
    5509             :  * assumed to have sizes outer_rows/inner_rows; those numbers might be less
    5510             :  * than what rel->rows says, when we are considering parameterized paths.
    5511             :  */
    5512             : static double
    5513      225138 : calc_joinrel_size_estimate(PlannerInfo *root,
    5514             :                            RelOptInfo *joinrel,
    5515             :                            RelOptInfo *outer_rel,
    5516             :                            RelOptInfo *inner_rel,
    5517             :                            double outer_rows,
    5518             :                            double inner_rows,
    5519             :                            SpecialJoinInfo *sjinfo,
    5520             :                            List *restrictlist)
    5521             : {
    5522      225138 :     JoinType    jointype = sjinfo->jointype;
    5523             :     Selectivity fkselec;
    5524             :     Selectivity jselec;
    5525             :     Selectivity pselec;
    5526             :     double      nrows;
    5527             : 
    5528             :     /*
    5529             :      * Compute joinclause selectivity.  Note that we are only considering
    5530             :      * clauses that become restriction clauses at this join level; we are not
    5531             :      * double-counting them because they were not considered in estimating the
    5532             :      * sizes of the component rels.
    5533             :      *
    5534             :      * First, see whether any of the joinclauses can be matched to known FK
    5535             :      * constraints.  If so, drop those clauses from the restrictlist, and
    5536             :      * instead estimate their selectivity using FK semantics.  (We do this
    5537             :      * without regard to whether said clauses are local or "pushed down".
    5538             :      * Probably, an FK-matching clause could never be seen as pushed down at
    5539             :      * an outer join, since it would be strict and hence would be grounds for
    5540             :      * join strength reduction.)  fkselec gets the net selectivity for
    5541             :      * FK-matching clauses, or 1.0 if there are none.
    5542             :      */
    5543      225138 :     fkselec = get_foreign_key_join_selectivity(root,
    5544             :                                                outer_rel->relids,
    5545             :                                                inner_rel->relids,
    5546             :                                                sjinfo,
    5547             :                                                &restrictlist);
    5548             : 
    5549             :     /*
    5550             :      * For an outer join, we have to distinguish the selectivity of the join's
    5551             :      * own clauses (JOIN/ON conditions) from any clauses that were "pushed
    5552             :      * down".  For inner joins we just count them all as joinclauses.
    5553             :      */
    5554      225138 :     if (IS_OUTER_JOIN(jointype))
    5555             :     {
    5556       85886 :         List       *joinquals = NIL;
    5557       85886 :         List       *pushedquals = NIL;
    5558             :         ListCell   *l;
    5559             : 
    5560             :         /* Grovel through the clauses to separate into two lists */
    5561      195264 :         foreach(l, restrictlist)
    5562             :         {
    5563      109378 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    5564             : 
    5565      109378 :             if (RINFO_IS_PUSHED_DOWN(rinfo, joinrel->relids))
    5566        4764 :                 pushedquals = lappend(pushedquals, rinfo);
    5567             :             else
    5568      104614 :                 joinquals = lappend(joinquals, rinfo);
    5569             :         }
    5570             : 
    5571             :         /* Get the separate selectivities */
    5572       85886 :         jselec = clauselist_selectivity(root,
    5573             :                                         joinquals,
    5574             :                                         0,
    5575             :                                         jointype,
    5576             :                                         sjinfo);
    5577       85886 :         pselec = clauselist_selectivity(root,
    5578             :                                         pushedquals,
    5579             :                                         0,
    5580             :                                         jointype,
    5581             :                                         sjinfo);
    5582             : 
    5583             :         /* Avoid leaking a lot of ListCells */
    5584       85886 :         list_free(joinquals);
    5585       85886 :         list_free(pushedquals);
    5586             :     }
    5587             :     else
    5588             :     {
    5589      139252 :         jselec = clauselist_selectivity(root,
    5590             :                                         restrictlist,
    5591             :                                         0,
    5592             :                                         jointype,
    5593             :                                         sjinfo);
    5594      139252 :         pselec = 0.0;           /* not used, keep compiler quiet */
    5595             :     }
    5596             : 
    5597             :     /*
    5598             :      * Basically, we multiply size of Cartesian product by selectivity.
    5599             :      *
    5600             :      * If we are doing an outer join, take that into account: the joinqual
    5601             :      * selectivity has to be clamped using the knowledge that the output must
    5602             :      * be at least as large as the non-nullable input.  However, any
    5603             :      * pushed-down quals are applied after the outer join, so their
    5604             :      * selectivity applies fully.
    5605             :      *
    5606             :      * For JOIN_SEMI and JOIN_ANTI, the selectivity is defined as the fraction
    5607             :      * of LHS rows that have matches, and we apply that straightforwardly.
    5608             :      */
    5609      225138 :     switch (jointype)
    5610             :     {
    5611      131444 :         case JOIN_INNER:
    5612      131444 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5613             :             /* pselec not used */
    5614      131444 :             break;
    5615       78838 :         case JOIN_LEFT:
    5616       78838 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5617       78838 :             if (nrows < outer_rows)
    5618       32184 :                 nrows = outer_rows;
    5619       78838 :             nrows *= pselec;
    5620       78838 :             break;
    5621        1714 :         case JOIN_FULL:
    5622        1714 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5623        1714 :             if (nrows < outer_rows)
    5624        1136 :                 nrows = outer_rows;
    5625        1714 :             if (nrows < inner_rows)
    5626         120 :                 nrows = inner_rows;
    5627        1714 :             nrows *= pselec;
    5628        1714 :             break;
    5629        7808 :         case JOIN_SEMI:
    5630        7808 :             nrows = outer_rows * fkselec * jselec;
    5631             :             /* pselec not used */
    5632        7808 :             break;
    5633        5334 :         case JOIN_ANTI:
    5634        5334 :             nrows = outer_rows * (1.0 - fkselec * jselec);
    5635        5334 :             nrows *= pselec;
    5636        5334 :             break;
    5637           0 :         default:
    5638             :             /* other values not expected here */
    5639           0 :             elog(ERROR, "unrecognized join type: %d", (int) jointype);
    5640             :             nrows = 0;          /* keep compiler quiet */
    5641             :             break;
    5642             :     }
    5643             : 
    5644      225138 :     return clamp_row_est(nrows);
    5645             : }
    5646             : 
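                     : /*
                     :  * Worked example of the arithmetic above (invented numbers, not taken from
                     :  * the source): for a LEFT join with outer_rows = 1000, inner_rows = 100,
                     :  * fkselec * jselec = 0.0005, and pselec = 0.1, the raw product is
                     :  * 1000 * 100 * 0.0005 = 50.  That is clamped up to 1000, since a left join
                     :  * must emit every outer row at least once; the pushed-down quals then apply
                     :  * in full, giving 1000 * 0.1 = 100 rows.  The sketch below mirrors just the
                     :  * JOIN_LEFT branch as a standalone function.
                     :  */
                     : static double
                     : left_join_rows_sketch(double outer_rows, double inner_rows,
                     :                       double fkselec, double jselec, double pselec)
                     : {
                     :     double      nrows = outer_rows * inner_rows * fkselec * jselec;
                     : 
                     :     if (nrows < outer_rows)     /* output must cover the outer side */
                     :         nrows = outer_rows;
                     :     return nrows * pselec;      /* pushed-down quals apply afterwards */
                     : }
                     : 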
    5647             : /*
    5648             :  * get_foreign_key_join_selectivity
    5649             :  *      Estimate join selectivity for foreign-key-related clauses.
    5650             :  *
    5651             :  * Remove any clauses that can be matched to FK constraints from *restrictlist,
    5652             :  * and return a substitute estimate of their selectivity.  1.0 is returned
    5653             :  * when there are no such clauses.
    5654             :  *
    5655             :  * The reason for treating such clauses specially is that we can get better
    5656             :  * estimates this way than by relying on clauselist_selectivity(), especially
    5657             :  * for multi-column FKs where that function's assumption that the clauses are
    5658             :  * independent falls down badly.  But even with single-column FKs, we may be
    5659             :  * able to get a better answer when the pg_statistic stats are missing or out
    5660             :  * of date.
    5661             :  */
    5662             : static Selectivity
    5663      225138 : get_foreign_key_join_selectivity(PlannerInfo *root,
    5664             :                                  Relids outer_relids,
    5665             :                                  Relids inner_relids,
    5666             :                                  SpecialJoinInfo *sjinfo,
    5667             :                                  List **restrictlist)
    5668             : {
    5669      225138 :     Selectivity fkselec = 1.0;
    5670      225138 :     JoinType    jointype = sjinfo->jointype;
    5671      225138 :     List       *worklist = *restrictlist;
    5672             :     ListCell   *lc;
    5673             : 
    5674             :     /* Consider each FK constraint that is known to match the query */
    5675      227154 :     foreach(lc, root->fkey_list)
    5676             :     {
    5677        2016 :         ForeignKeyOptInfo *fkinfo = (ForeignKeyOptInfo *) lfirst(lc);
    5678             :         bool        ref_is_outer;
    5679             :         List       *removedlist;
    5680             :         ListCell   *cell;
    5681             : 
    5682             :         /*
    5683             :          * This FK is not relevant unless it connects a baserel on one side of
    5684             :          * this join to a baserel on the other side.
    5685             :          */
    5686        3680 :         if (bms_is_member(fkinfo->con_relid, outer_relids) &&
    5687        1664 :             bms_is_member(fkinfo->ref_relid, inner_relids))
    5688        1490 :             ref_is_outer = false;
    5689         866 :         else if (bms_is_member(fkinfo->ref_relid, outer_relids) &&
    5690         340 :                  bms_is_member(fkinfo->con_relid, inner_relids))
    5691         130 :             ref_is_outer = true;
    5692             :         else
    5693         396 :             continue;
    5694             : 
    5695             :         /*
    5696             :          * If we're dealing with a semi/anti join, and the FK's referenced
    5697             :          * relation is on the outside, then knowledge of the FK doesn't help
    5698             :          * us figure out what we need to know (which is the fraction of outer
    5699             :          * rows that have matches).  On the other hand, if the referenced rel
    5700             :          * is on the inside, then all outer rows must have matches in the
    5701             :          * referenced table (ignoring nulls).  But any restriction or join
    5702             :          * clauses that filter that table will reduce the fraction of matches.
    5703             :          * We can account for restriction clauses, but it's too hard to guess
    5704             :          * how many table rows would get through a join that's inside the RHS.
    5705             :          * Hence, if either case applies, punt and ignore the FK.
    5706             :          */
    5707        1620 :         if ((jointype == JOIN_SEMI || jointype == JOIN_ANTI) &&
    5708        1098 :             (ref_is_outer || bms_membership(inner_relids) != BMS_SINGLETON))
    5709          12 :             continue;
    5710             : 
    5711             :         /*
    5712             :          * Modify the restrictlist by removing clauses that match the FK (and
    5713             :          * putting them into removedlist instead).  It seems unsafe to modify
    5714             :          * the originally-passed List structure, so we make a shallow copy the
    5715             :          * first time through.
    5716             :          */
    5717        1608 :         if (worklist == *restrictlist)
    5718        1370 :             worklist = list_copy(worklist);
    5719             : 
    5720        1608 :         removedlist = NIL;
    5721        3364 :         foreach(cell, worklist)
    5722             :         {
    5723        1756 :             RestrictInfo *rinfo = (RestrictInfo *) lfirst(cell);
    5724        1756 :             bool        remove_it = false;
    5725             :             int         i;
    5726             : 
    5727             :             /* Drop this clause if it matches any column of the FK */
    5728        2228 :             for (i = 0; i < fkinfo->nkeys; i++)
    5729             :             {
    5730        2198 :                 if (rinfo->parent_ec)
    5731             :                 {
    5732             :                     /*
    5733             :                      * EC-derived clauses can only match by EC.  It is okay to
    5734             :                      * consider any clause derived from the same EC as
    5735             :                      * matching the FK: even if equivclass.c chose to generate
    5736             :                      * a clause equating some other pair of Vars, it could
    5737             :                      * have generated one equating the FK's Vars.  So for
    5738             :                      * purposes of estimation, we can act as though it did so.
    5739             :                      *
    5740             :                      * Note: checking parent_ec is a bit of a cheat because
    5741             :                      * there are EC-derived clauses that don't have parent_ec
    5742             :                      * set; but such clauses must compare expressions that
    5743             :                      * aren't just Vars, so they cannot match the FK anyway.
    5744             :                      */
    5745         304 :                     if (fkinfo->eclass[i] == rinfo->parent_ec)
    5746             :                     {
    5747         298 :                         remove_it = true;
    5748         298 :                         break;
    5749             :                     }
    5750             :                 }
    5751             :                 else
    5752             :                 {
    5753             :                     /*
    5754             :                      * Otherwise, see if rinfo was previously matched to FK as
    5755             :                      * a "loose" clause.
    5756             :                      */
    5757        1894 :                     if (list_member_ptr(fkinfo->rinfos[i], rinfo))
    5758             :                     {
    5759        1428 :                         remove_it = true;
    5760        1428 :                         break;
    5761             :                     }
    5762             :                 }
    5763             :             }
    5764        1756 :             if (remove_it)
    5765             :             {
    5766        1726 :                 worklist = foreach_delete_current(worklist, cell);
    5767        1726 :                 removedlist = lappend(removedlist, rinfo);
    5768             :             }
    5769             :         }
    5770             : 
    5771             :         /*
    5772             :          * If we failed to remove all the matching clauses we expected to
    5773             :          * find, chicken out and ignore this FK; applying its selectivity
    5774             :          * might result in double-counting.  Put any clauses we did manage to
    5775             :          * remove back into the worklist.
    5776             :          *
    5777             :          * Since the matching clauses are known not outerjoin-delayed, they
    5778             :          * would normally have appeared in the initial joinclause list.  If we
    5779             :          * didn't find them, there are two possibilities:
    5780             :          *
    5781             :          * 1. If the FK match is based on an EC that is ec_has_const, it won't
    5782             :          * have generated any join clauses at all.  We discount such ECs while
    5783             :          * checking to see if we have "all" the clauses.  (Below, we'll adjust
    5784             :          * the selectivity estimate for this case.)
    5785             :          *
    5786             :          * 2. The clauses were matched to some other FK in a previous
    5787             :          * iteration of this loop, and thus removed from worklist.  (A likely
    5788             :          * case is that two FKs are matched to the same EC; there will be only
    5789             :          * one EC-derived clause in the initial list, so the first FK will
    5790             :          * consume it.)  Applying both FKs' selectivity independently risks
    5791             :          * underestimating the join size; in particular, this would undo one
    5792             :          * of the main things that ECs were invented for, namely to avoid
    5793             :          * double-counting the selectivity of redundant equality conditions.
    5794             :          * Later we might think of a reasonable way to combine the estimates,
    5795             :          * but for now, just punt, since this is a fairly uncommon situation.
    5796             :          */
    5797        1608 :         if (removedlist == NIL ||
    5798        1308 :             list_length(removedlist) !=
    5799        1308 :             (fkinfo->nmatched_ec - fkinfo->nconst_ec + fkinfo->nmatched_ri))
    5800             :         {
    5801         300 :             worklist = list_concat(worklist, removedlist);
    5802         300 :             continue;
    5803             :         }
    5804             : 
    5805             :         /*
    5806             :          * Finally we get to the payoff: estimate selectivity using the
    5807             :          * knowledge that each referencing row will match exactly one row in
    5808             :          * the referenced table.
    5809             :          *
    5810             :          * XXX that's not true in the presence of nulls in the referencing
    5811             :          * column(s), so in principle we should derate the estimate for those.
    5812             :          * However (1) if there are any strict restriction clauses for the
    5813             :          * referencing column(s) elsewhere in the query, derating here would
    5814             :          * be double-counting the null fraction, and (2) it's not very clear
    5815             :          * how to combine null fractions for multiple referencing columns. So
    5816             :          * we do nothing for now about correcting for nulls.
    5817             :          *
    5818             :          * XXX another point here is that if either side of an FK constraint
    5819             :          * is an inheritance parent, we estimate as though the constraint
    5820             :          * covers all its children as well.  This is not an unreasonable
    5821             :          * assumption for a referencing table, ie the user probably applied
    5822             :          * identical constraints to all child tables (though perhaps we ought
    5823             :          * to check that).  But it's not possible to have done that for a
    5824             :          * referenced table.  Fortunately, precisely because that doesn't
    5825             :          * work, it is uncommon in practice to have an FK referencing a parent
    5826             :          * table.  So, at least for now, disregard inheritance here.
    5827             :          */
    5828        1308 :         if (jointype == JOIN_SEMI || jointype == JOIN_ANTI)
    5829         860 :         {
    5830             :             /*
    5831             :              * For JOIN_SEMI and JOIN_ANTI, we only get here when the FK's
    5832             :              * referenced table is exactly the inside of the join.  The join
    5833             :              * selectivity is defined as the fraction of LHS rows that have
    5834             :              * matches.  The FK implies that every LHS row has a match *in the
    5835             :              * referenced table*; but any restriction clauses on it will
    5836             :              * reduce the number of matches.  Hence we take the join
    5837             :              * selectivity as equal to the selectivity of the table's
    5838             :              * restriction clauses, which is rows / tuples; but we must guard
    5839             :              * against tuples == 0.
    5840             :              */
    5841         860 :             RelOptInfo *ref_rel = find_base_rel(root, fkinfo->ref_relid);
    5842         860 :             double      ref_tuples = Max(ref_rel->tuples, 1.0);
    5843             : 
    5844         860 :             fkselec *= ref_rel->rows / ref_tuples;
    5845             :         }
    5846             :         else
    5847             :         {
    5848             :             /*
    5849             :              * Otherwise, selectivity is exactly 1/referenced-table-size; but
    5850             :              * guard against tuples == 0.  Note we should use the raw table
    5851             :              * tuple count, not any estimate of its filtered or joined size.
    5852             :              */
    5853         448 :             RelOptInfo *ref_rel = find_base_rel(root, fkinfo->ref_relid);
    5854         448 :             double      ref_tuples = Max(ref_rel->tuples, 1.0);
    5855             : 
    5856         448 :             fkselec *= 1.0 / ref_tuples;
    5857             :         }
    5858             : 
    5859             :         /*
    5860             :          * If any of the FK columns participated in ec_has_const ECs, then
    5861             :          * equivclass.c will have generated "var = const" restrictions for
    5862             :          * each side of the join, thus reducing the sizes of both input
    5863             :          * relations.  Taking the fkselec at face value would amount to
    5864             :          * double-counting the selectivity of the constant restriction for the
    5865             :          * referencing Var.  Hence, look for the restriction clause(s) that
    5866             :          * were applied to the referencing Var(s), and divide out their
    5867             :          * selectivity to correct for this.
    5868             :          */
    5869        1308 :         if (fkinfo->nconst_ec > 0)
    5870             :         {
    5871          24 :             for (int i = 0; i < fkinfo->nkeys; i++)
    5872             :             {
    5873          18 :                 EquivalenceClass *ec = fkinfo->eclass[i];
    5874             : 
    5875          18 :                 if (ec && ec->ec_has_const)
    5876             :                 {
    5877           6 :                     EquivalenceMember *em = fkinfo->fk_eclass_member[i];
    5878           6 :                     RestrictInfo *rinfo = find_derived_clause_for_ec_member(root,
    5879             :                                                                             ec,
    5880             :                                                                             em);
    5881             : 
    5882           6 :                     if (rinfo)
    5883             :                     {
    5884             :                         Selectivity s0;
    5885             : 
    5886           6 :                         s0 = clause_selectivity(root,
    5887             :                                                 (Node *) rinfo,
    5888             :                                                 0,
    5889             :                                                 jointype,
    5890             :                                                 sjinfo);
    5891           6 :                         if (s0 > 0)
    5892           6 :                             fkselec /= s0;
    5893             :                     }
    5894             :                 }
    5895             :             }
    5896             :         }
    5897             :     }
    5898             : 
    5899      225138 :     *restrictlist = worklist;
    5900      225138 :     CLAMP_PROBABILITY(fkselec);
    5901      225138 :     return fkselec;
    5902             : }
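                     : 
                     : /*
                     :  * Worked example (invented numbers): joining 1,000,000 "orders" rows to a
                     :  * "customers" table of 50,000 tuples via an FK-matched clause gives
                     :  * fkselec = 1/50000, so the inner-join estimate becomes
                     :  * 1000000 * 50000 * (1/50000) = 1000000 rows, i.e. exactly one match per
                     :  * referencing row, which is what the FK guarantees.  For a semi-join whose
                     :  * restriction clauses filter customers down to 5,000 of its 50,000 tuples,
                     :  * fkselec = 5000/50000 = 0.1 instead: only a tenth of the orders can still
                     :  * find their referenced row.
                     :  */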
    5903             : 
    5904             : /*
    5905             :  * set_subquery_size_estimates
    5906             :  *      Set the size estimates for a base relation that is a subquery.
    5907             :  *
    5908             :  * The rel's targetlist and restrictinfo list must have been constructed
    5909             :  * already, and the Paths for the subquery must have been completed.
    5910             :  * We look at the subquery's PlannerInfo to extract data.
    5911             :  *
    5912             :  * We set the same fields as set_baserel_size_estimates.
    5913             :  */
    5914             : void
    5915       27788 : set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5916             : {
    5917       27788 :     PlannerInfo *subroot = rel->subroot;
    5918             :     RelOptInfo *sub_final_rel;
    5919             :     ListCell   *lc;
    5920             : 
    5921             :     /* Should only be applied to base relations that are subqueries */
    5922             :     Assert(rel->relid > 0);
    5923             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_SUBQUERY);
    5924             : 
    5925             :     /*
    5926             :      * Copy raw number of output rows from subquery.  All of its paths should
    5927             :      * have the same output rowcount, so just look at cheapest-total.
    5928             :      */
    5929       27788 :     sub_final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL);
    5930       27788 :     rel->tuples = sub_final_rel->cheapest_total_path->rows;
    5931             : 
    5932             :     /*
    5933             :      * Compute per-output-column width estimates by examining the subquery's
    5934             :      * targetlist.  For any output that is a plain Var, get the width estimate
    5935             :      * that was made while planning the subquery.  Otherwise, we leave it to
    5936             :      * set_rel_width to fill in a datatype-based default estimate.
    5937             :      */
    5938      113540 :     foreach(lc, subroot->parse->targetList)
    5939             :     {
    5940       85752 :         TargetEntry *te = lfirst_node(TargetEntry, lc);
    5941       85752 :         Node       *texpr = (Node *) te->expr;
    5942       85752 :         int32       item_width = 0;
    5943             : 
    5944             :         /* junk columns aren't visible to upper query */
    5945       85752 :         if (te->resjunk)
    5946        1292 :             continue;
    5947             : 
    5948             :         /*
    5949             :          * The subquery could be an expansion of a view that's had columns
    5950             :          * added to it since the current query was parsed, so that there are
    5951             :          * non-junk tlist columns in it that don't correspond to any column
    5952             :          * visible at our query level.  Ignore such columns.
    5953             :          */
    5954       84460 :         if (te->resno < rel->min_attr || te->resno > rel->max_attr)
    5955           0 :             continue;
    5956             : 
    5957             :         /*
    5958             :          * XXX This currently doesn't work for subqueries containing set
    5959             :          * operations, because the Vars in their tlists are bogus references
    5960             :          * to the first leaf subquery, which wouldn't give the right answer
    5961             :          * even if we could still get to its PlannerInfo.
    5962             :          *
    5963             :          * Also, the subquery could be an appendrel for which all branches are
    5964             :          * known empty due to constraint exclusion, in which case
    5965             :          * set_append_rel_pathlist will have left the attr_widths set to zero.
    5966             :          *
    5967             :          * In either case, we just leave the width estimate zero until
    5968             :          * set_rel_width fixes it.
    5969             :          */
    5970       84460 :         if (IsA(texpr, Var) &&
    5971       38446 :             subroot->parse->setOperations == NULL)
    5972             :         {
    5973       36668 :             Var        *var = (Var *) texpr;
    5974       36668 :             RelOptInfo *subrel = find_base_rel(subroot, var->varno);
    5975             : 
    5976       36668 :             item_width = subrel->attr_widths[var->varattno - subrel->min_attr];
    5977             :         }
    5978       84460 :         rel->attr_widths[te->resno - rel->min_attr] = item_width;
    5979             :     }
    5980             : 
    5981             :     /* Now estimate number of output rows, etc */
    5982       27788 :     set_baserel_size_estimates(root, rel);
    5983       27788 : }
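                     : 
                     : /*
                     :  * Illustration (hypothetical query): for a subquery SELECT a, b + 1 FROM t,
                     :  * output column 1 is a plain Var, so it inherits the attr_width cached for
                     :  * t.a while the subquery was planned; column 2 is an expression, so its
                     :  * width stays zero here and set_rel_width later falls back to a
                     :  * datatype-based estimate for it.
                     :  */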
    5984             : 
    5985             : /*
    5986             :  * set_function_size_estimates
    5987             :  *      Set the size estimates for a base relation that is a function call.
    5988             :  *
    5989             :  * The rel's targetlist and restrictinfo list must have been constructed
    5990             :  * already.
    5991             :  *
    5992             :  * We set the same fields as set_baserel_size_estimates.
    5993             :  */
    5994             : void
    5995       51610 : set_function_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5996             : {
    5997             :     RangeTblEntry *rte;
    5998             :     ListCell   *lc;
    5999             : 
    6000             :     /* Should only be applied to base relations that are functions */
    6001             :     Assert(rel->relid > 0);
    6002       51610 :     rte = planner_rt_fetch(rel->relid, root);
    6003             :     Assert(rte->rtekind == RTE_FUNCTION);
    6004             : 
    6005             :     /*
    6006             :      * Estimate number of rows the functions will return. The rowcount of the
    6007             :      * node is that of the largest function result.
    6008             :      */
    6009       51610 :     rel->tuples = 0;
    6010      103712 :     foreach(lc, rte->functions)
    6011             :     {
    6012       52102 :         RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
    6013       52102 :         double      ntup = expression_returns_set_rows(root, rtfunc->funcexpr);
    6014             : 
    6015       52102 :         if (ntup > rel->tuples)
    6016       51634 :             rel->tuples = ntup;
    6017             :     }
    6018             : 
    6019             :     /* Now estimate number of output rows, etc */
    6020       51610 :     set_baserel_size_estimates(root, rel);
    6021       51610 : }
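                     : 
                     : /*
                     :  * For example (illustrative): with ROWS FROM (generate_series(1, 1000),
                     :  * generate_series(1, 10)) the per-function estimates are 1000 and 10, so
                     :  * rel->tuples becomes max(1000, 10) = 1000, matching the number of rows the
                     :  * node emits once the shorter result is null-padded.
                     :  */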
    6022             : 
    6023             : /*
    6024             :  * set_tablefunc_size_estimates
    6025             :  *      Set the size estimates for a base relation that is a table function.
    6026             :  *
    6027             :  * The rel's targetlist and restrictinfo list must have been constructed
    6028             :  * already.
    6029             :  *
    6030             :  * We set the same fields as set_baserel_size_estimates.
    6031             :  */
    6032             : void
    6033         626 : set_tablefunc_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6034             : {
    6035             :     /* Should only be applied to base relations that are functions */
    6036             :     Assert(rel->relid > 0);
    6037             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_TABLEFUNC);
    6038             : 
    6039         626 :     rel->tuples = 100;
    6040             : 
    6041             :     /* Now estimate number of output rows, etc */
    6042         626 :     set_baserel_size_estimates(root, rel);
    6043         626 : }
    6044             : 
    6045             : /*
    6046             :  * set_values_size_estimates
    6047             :  *      Set the size estimates for a base relation that is a values list.
    6048             :  *
    6049             :  * The rel's targetlist and restrictinfo list must have been constructed
    6050             :  * already.
    6051             :  *
    6052             :  * We set the same fields as set_baserel_size_estimates.
    6053             :  */
    6054             : void
    6055        8246 : set_values_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6056             : {
    6057             :     RangeTblEntry *rte;
    6058             : 
    6059             :     /* Should only be applied to base relations that are values lists */
    6060             :     Assert(rel->relid > 0);
    6061        8246 :     rte = planner_rt_fetch(rel->relid, root);
    6062             :     Assert(rte->rtekind == RTE_VALUES);
    6063             : 
    6064             :     /*
    6065             :      * Estimate number of rows the values list will return. We know this
    6066             :      * precisely based on the list length (well, barring set-returning
    6067             :      * functions in list items, but that's a refinement not catered for
    6068             :      * anywhere else either).
    6069             :      */
    6070        8246 :     rel->tuples = list_length(rte->values_lists);
    6071             : 
    6072             :     /* Now estimate number of output rows, etc */
    6073        8246 :     set_baserel_size_estimates(root, rel);
    6074        8246 : }
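                     : 
                     : /*
                     :  * E.g. VALUES (1), (2), (3) has a three-element values_lists, so
                     :  * rel->tuples is exactly 3; no statistics are consulted.
                     :  */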
    6075             : 
    6076             : /*
    6077             :  * set_cte_size_estimates
    6078             :  *      Set the size estimates for a base relation that is a CTE reference.
    6079             :  *
    6080             :  * The rel's targetlist and restrictinfo list must have been constructed
    6081             :  * already, and we need an estimate of the number of rows returned by the CTE
    6082             :  * (if a regular CTE) or the non-recursive term (if a self-reference).
    6083             :  *
    6084             :  * We set the same fields as set_baserel_size_estimates.
    6085             :  */
    6086             : void
    6087        5094 : set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, double cte_rows)
    6088             : {
    6089             :     RangeTblEntry *rte;
    6090             : 
    6091             :     /* Should only be applied to base relations that are CTE references */
    6092             :     Assert(rel->relid > 0);
    6093        5094 :     rte = planner_rt_fetch(rel->relid, root);
    6094             :     Assert(rte->rtekind == RTE_CTE);
    6095             : 
    6096        5094 :     if (rte->self_reference)
    6097             :     {
    6098             :         /*
    6099             :          * In a self-reference, we assume the average worktable size is a
    6100             :          * multiple of the nonrecursive term's size.  The best multiplier will
    6101             :          * vary depending on query "fan-out", so make its value adjustable.
    6102             :          */
    6103        1006 :         rel->tuples = clamp_row_est(recursive_worktable_factor * cte_rows);
    6104             :     }
    6105             :     else
    6106             :     {
    6107             :         /* Otherwise just believe the CTE's rowcount estimate */
    6108        4088 :         rel->tuples = cte_rows;
    6109             :     }
    6110             : 
    6111             :     /* Now estimate number of output rows, etc */
    6112        5094 :     set_baserel_size_estimates(root, rel);
    6113        5094 : }
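                     : 
                     : /*
                     :  * Worked example: for a WITH RECURSIVE query whose nonrecursive term is
                     :  * estimated at cte_rows = 100, the default recursive_worktable_factor of
                     :  * 10.0 sizes the self-reference at clamp_row_est(10.0 * 100) = 1000 rows.
                     :  */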
    6114             : 
    6115             : /*
    6116             :  * set_namedtuplestore_size_estimates
    6117             :  *      Set the size estimates for a base relation that is a tuplestore reference.
    6118             :  *
    6119             :  * The rel's targetlist and restrictinfo list must have been constructed
    6120             :  * already.
    6121             :  *
    6122             :  * We set the same fields as set_baserel_size_estimates.
    6123             :  */
    6124             : void
    6125         466 : set_namedtuplestore_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6126             : {
    6127             :     RangeTblEntry *rte;
    6128             : 
    6129             :     /* Should only be applied to base relations that are tuplestore references */
    6130             :     Assert(rel->relid > 0);
    6131         466 :     rte = planner_rt_fetch(rel->relid, root);
    6132             :     Assert(rte->rtekind == RTE_NAMEDTUPLESTORE);
    6133             : 
    6134             :     /*
    6135             :      * Use the estimate provided by the code which is generating the named
    6136             :      * tuplestore.  In some cases, the actual number might be available; in
    6137             :      * others the same plan will be re-used, so a "typical" value might be
    6138             :      * estimated and used.
    6139             :      */
    6140         466 :     rel->tuples = rte->enrtuples;
    6141         466 :     if (rel->tuples < 0)
    6142           0 :         rel->tuples = 1000;
    6143             : 
    6144             :     /* Now estimate number of output rows, etc */
    6145         466 :     set_baserel_size_estimates(root, rel);
    6146         466 : }
    6147             : 
    6148             : /*
    6149             :  * set_result_size_estimates
    6150             :  *      Set the size estimates for an RTE_RESULT base relation
    6151             :  *
    6152             :  * The rel's targetlist and restrictinfo list must have been constructed
    6153             :  * already.
    6154             :  *
    6155             :  * We set the same fields as set_baserel_size_estimates.
    6156             :  */
    6157             : void
    6158        4220 : set_result_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6159             : {
    6160             :     /* Should only be applied to RTE_RESULT base relations */
    6161             :     Assert(rel->relid > 0);
    6162             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_RESULT);
    6163             : 
    6164             :     /* RTE_RESULT always generates a single row, natively */
    6165        4220 :     rel->tuples = 1;
    6166             : 
    6167             :     /* Now estimate number of output rows, etc */
    6168        4220 :     set_baserel_size_estimates(root, rel);
    6169        4220 : }
    6170             : 
    6171             : /*
    6172             :  * set_foreign_size_estimates
    6173             :  *      Set the size estimates for a base relation that is a foreign table.
    6174             :  *
    6175             :  * There is not a whole lot that we can do here; the foreign-data wrapper
    6176             :  * is responsible for producing useful estimates.  We can do a decent job
    6177             :  * of estimating baserestrictcost, so we set that, and we also set up width
    6178             :  * using what will be purely datatype-driven estimates from the targetlist.
    6179             :  * There is no way to do anything sane with the rows value, so we just put
    6180             :  * a default estimate and hope that the wrapper can improve on it.  The
    6181             :  * wrapper's GetForeignRelSize function will be called momentarily.
    6182             :  *
    6183             :  * The rel's targetlist and restrictinfo list must have been constructed
    6184             :  * already.
    6185             :  */
    6186             : void
    6187        2424 : set_foreign_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6188             : {
    6189             :     /* Should only be applied to base relations */
    6190             :     Assert(rel->relid > 0);
    6191             : 
    6192        2424 :     rel->rows = 1000;            /* entirely bogus default estimate */
    6193             : 
    6194        2424 :     cost_qual_eval(&rel->baserestrictcost, rel->baserestrictinfo, root);
    6195             : 
    6196        2424 :     set_rel_width(root, rel);
    6197        2424 : }
    6198             : 
    6199             : 
    6200             : /*
    6201             :  * set_rel_width
    6202             :  *      Set the estimated output width of a base relation.
    6203             :  *
    6204             :  * The estimated output width is the sum of the per-attribute width estimates
    6205             :  * for the actually-referenced columns, plus any PHVs or other expressions
    6206             :  * that have to be calculated at this relation.  This is the amount of data
    6207             :  * we'd need to pass upwards in case of a sort, hash, etc.
    6208             :  *
    6209             :  * This function also sets reltarget->cost, so it's a bit misnamed now.
    6210             :  *
    6211             :  * NB: this works best on plain relations because it prefers to look at
    6212             :  * real Vars.  For subqueries, set_subquery_size_estimates will already have
    6213             :  * copied up whatever per-column estimates were made within the subquery,
    6214             :  * and for other types of rels there isn't much we can do anyway.  We fall
    6215             :  * back on (fairly stupid) datatype-based width estimates if we can't get
    6216             :  * any better number.
    6217             :  *
    6218             :  * The per-attribute width estimates are cached for possible re-use while
    6219             :  * building join relations or post-scan/join pathtargets.
    6220             :  */
    6221             : static void
    6222      503846 : set_rel_width(PlannerInfo *root, RelOptInfo *rel)
    6223             : {
    6224      503846 :     Oid         reloid = planner_rt_fetch(rel->relid, root)->relid;
    6225      503846 :     int64       tuple_width = 0;
    6226      503846 :     bool        have_wholerow_var = false;
    6227             :     ListCell   *lc;
    6228             : 
    6229             :     /* Vars are assumed to have cost zero, but other exprs do not */
    6230      503846 :     rel->reltarget->cost.startup = 0;
    6231      503846 :     rel->reltarget->cost.per_tuple = 0;
    6232             : 
    6233     1820816 :     foreach(lc, rel->reltarget->exprs)
    6234             :     {
    6235     1316970 :         Node       *node = (Node *) lfirst(lc);
    6236             : 
    6237             :         /*
    6238             :          * Ordinarily, a Var in a rel's targetlist must belong to that rel;
    6239             :          * but there are corner cases involving LATERAL references where that
    6240             :          * isn't so.  If the Var has the wrong varno, fall through to the
    6241             :          * generic case (it doesn't seem worth the trouble to be any smarter).
    6242             :          */
    6243     1316970 :         if (IsA(node, Var) &&
    6244     1292802 :             ((Var *) node)->varno == rel->relid)
    6245      342538 :         {
    6246     1292736 :             Var        *var = (Var *) node;
    6247             :             int         ndx;
    6248             :             int32       item_width;
    6249             : 
    6250             :             Assert(var->varattno >= rel->min_attr);
    6251             :             Assert(var->varattno <= rel->max_attr);
    6252             : 
    6253     1292736 :             ndx = var->varattno - rel->min_attr;
    6254             : 
    6255             :             /*
    6256             :              * If it's a whole-row Var, we'll deal with it below after we have
    6257             :              * already cached as many attr widths as possible.
    6258             :              */
    6259     1292736 :             if (var->varattno == 0)
    6260             :             {
    6261        2990 :                 have_wholerow_var = true;
    6262        2990 :                 continue;
    6263             :             }
    6264             : 
    6265             :             /*
    6266             :              * The width may have been cached already (especially if it's a
    6267             :              * subquery), so don't duplicate effort.
    6268             :              */
    6269     1289746 :             if (rel->attr_widths[ndx] > 0)
    6270             :             {
    6271      237042 :                 tuple_width += rel->attr_widths[ndx];
    6272      237042 :                 continue;
    6273             :             }
    6274             : 
    6275             :             /* Try to get column width from statistics */
    6276     1052704 :             if (reloid != InvalidOid && var->varattno > 0)
    6277             :             {
    6278      837462 :                 item_width = get_attavgwidth(reloid, var->varattno);
    6279      837462 :                 if (item_width > 0)
    6280             :                 {
    6281      710166 :                     rel->attr_widths[ndx] = item_width;
    6282      710166 :                     tuple_width += item_width;
    6283      710166 :                     continue;
    6284             :                 }
    6285             :             }
    6286             : 
    6287             :             /*
    6288             :              * Not a plain relation, or can't find statistics for it. Estimate
    6289             :              * using just the type info.
    6290             :              */
    6291      342538 :             item_width = get_typavgwidth(var->vartype, var->vartypmod);
    6292             :             Assert(item_width > 0);
    6293      342538 :             rel->attr_widths[ndx] = item_width;
    6294      342538 :             tuple_width += item_width;
    6295             :         }
    6296       24234 :         else if (IsA(node, PlaceHolderVar))
    6297             :         {
    6298             :             /*
    6299             :              * We will need to evaluate the PHV's contained expression while
    6300             :              * scanning this rel, so be sure to include it in reltarget->cost.
    6301             :              */
    6302        1978 :             PlaceHolderVar *phv = (PlaceHolderVar *) node;
    6303        1978 :             PlaceHolderInfo *phinfo = find_placeholder_info(root, phv);
    6304             :             QualCost    cost;
    6305             : 
    6306        1978 :             tuple_width += phinfo->ph_width;
    6307        1978 :             cost_qual_eval_node(&cost, (Node *) phv->phexpr, root);
    6308        1978 :             rel->reltarget->cost.startup += cost.startup;
    6309        1978 :             rel->reltarget->cost.per_tuple += cost.per_tuple;
    6310             :         }
    6311             :         else
    6312             :         {
    6313             :             /*
    6314             :              * We could be looking at an expression pulled up from a subquery,
    6315             :              * or a ROW() representing a whole-row child Var, etc.  Do what we
    6316             :              * can using the expression type information.
    6317             :              */
    6318             :             int32       item_width;
    6319             :             QualCost    cost;
    6320             : 
    6321       22256 :             item_width = get_typavgwidth(exprType(node), exprTypmod(node));
    6322             :             Assert(item_width > 0);
    6323       22256 :             tuple_width += item_width;
    6324             :             /* Not entirely clear if we need to account for cost, but do so */
    6325       22256 :             cost_qual_eval_node(&cost, node, root);
    6326       22256 :             rel->reltarget->cost.startup += cost.startup;
    6327       22256 :             rel->reltarget->cost.per_tuple += cost.per_tuple;
    6328             :         }
    6329             :     }
    6330             : 
    6331             :     /*
    6332             :      * If we have a whole-row reference, estimate its width as the sum of
    6333             :      * per-column widths plus heap tuple header overhead.
    6334             :      */
    6335      503846 :     if (have_wholerow_var)
    6336             :     {
    6337        2990 :         int64       wholerow_width = MAXALIGN(SizeofHeapTupleHeader);
    6338             : 
    6339        2990 :         if (reloid != InvalidOid)
    6340             :         {
    6341             :             /* Real relation, so estimate true tuple width */
    6342        2332 :             wholerow_width += get_relation_data_width(reloid,
    6343        2332 :                                                       rel->attr_widths - rel->min_attr);
    6344             :         }
    6345             :         else
    6346             :         {
    6347             :             /* Do what we can with info for a phony rel */
    6348             :             AttrNumber  i;
    6349             : 
    6350        1794 :             for (i = 1; i <= rel->max_attr; i++)
    6351        1136 :                 wholerow_width += rel->attr_widths[i - rel->min_attr];
    6352             :         }
    6353             : 
    6354        2990 :         rel->attr_widths[0 - rel->min_attr] = clamp_width_est(wholerow_width);
    6355             : 
    6356             :         /*
    6357             :          * Include the whole-row Var as part of the output tuple.  Yes, that
    6358             :          * really is what happens at runtime.
    6359             :          */
    6360        2990 :         tuple_width += wholerow_width;
    6361             :     }
    6362             : 
    6363      503846 :     rel->reltarget->width = clamp_width_est(tuple_width);
    6364      503846 : }
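                     : 
                     : /*
                     :  * Worked example (invented columns): a scan emitting an int4 column
                     :  * (width 4) and a text column whose pg_statistic average width is 20 gets
                     :  * reltarget->width = clamp_width_est(4 + 20) = 24.  Referencing a whole-row
                     :  * Var as well would add MAXALIGN(SizeofHeapTupleHeader), 24 bytes on
                     :  * 8-byte-MAXALIGN builds, plus the table's full per-column data width.
                     :  */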
    6365             : 
    6366             : /*
    6367             :  * set_pathtarget_cost_width
    6368             :  *      Set the estimated eval cost and output width of a PathTarget tlist.
    6369             :  *
    6370             :  * As a notational convenience, returns the same PathTarget pointer passed in.
    6371             :  *
    6372             :  * Most, though not quite all, uses of this function occur after we've run
    6373             :  * set_rel_width() for base relations; so we can usually obtain cached width
    6374             :  * estimates for Vars.  If we can't, fall back on datatype-based width
    6375             :  * estimates.  Present early-planning uses of PathTargets don't need accurate
    6376             :  * widths badly enough to justify going to the catalogs for better data.
    6377             :  */
    6378             : PathTarget *
    6379      600952 : set_pathtarget_cost_width(PlannerInfo *root, PathTarget *target)
    6380             : {
    6381      600952 :     int64       tuple_width = 0;
    6382             :     ListCell   *lc;
    6383             : 
    6384             :     /* Vars are assumed to have cost zero, but other exprs do not */
    6385      600952 :     target->cost.startup = 0;
    6386      600952 :     target->cost.per_tuple = 0;
    6387             : 
    6388     2094896 :     foreach(lc, target->exprs)
    6389             :     {
    6390     1493944 :         Node       *node = (Node *) lfirst(lc);
    6391             : 
    6392     1493944 :         tuple_width += get_expr_width(root, node);
    6393             : 
    6394             :         /* For non-Vars, account for evaluation cost */
    6395     1493944 :         if (!IsA(node, Var))
    6396             :         {
    6397             :             QualCost    cost;
    6398             : 
    6399      622716 :             cost_qual_eval_node(&cost, node, root);
    6400      622716 :             target->cost.startup += cost.startup;
    6401      622716 :             target->cost.per_tuple += cost.per_tuple;
    6402             :         }
    6403             :     }
    6404             : 
    6405      600952 :     target->width = clamp_width_est(tuple_width);
    6406             : 
    6407      600952 :     return target;
    6408             : }
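                     : 
                     : /*
                     :  * Worked example (hypothetical tlist): for a PathTarget containing the Var
                     :  * "x" and the expression "x + 1", the Var contributes only its width, while
                     :  * cost_qual_eval_node() charges the "+" operator to the target's eval cost,
                     :  * so cost.per_tuple ends up at one cpu_operator_cost (0.0025 by default).
                     :  */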
    6409             : 
    6410             : /*
    6411             :  * get_expr_width
    6412             :  *      Estimate the width of the given expr attempting to use the width
    6413             :  *      cached in a Var's owning RelOptInfo, else fallback on the type's
    6414             :  *      average width when unable to or when the given Node is not a Var.
    6415             :  */
    6416             : static int32
    6417     1838074 : get_expr_width(PlannerInfo *root, const Node *expr)
    6418             : {
    6419             :     int32       width;
    6420             : 
    6421     1838074 :     if (IsA(expr, Var))
    6422             :     {
    6423     1202094 :         const Var  *var = (const Var *) expr;
    6424             : 
    6425             :         /* We should not see any upper-level Vars here */
    6426             :         Assert(var->varlevelsup == 0);
    6427             : 
    6428             :         /* Try to get data from RelOptInfo cache */
    6429     1202094 :         if (!IS_SPECIAL_VARNO(var->varno) &&
    6430     1196388 :             var->varno < root->simple_rel_array_size)
    6431             :         {
    6432     1196388 :             RelOptInfo *rel = root->simple_rel_array[var->varno];
    6433             : 
    6434     1196388 :             if (rel != NULL &&
    6435     1167506 :                 var->varattno >= rel->min_attr &&
    6436     1167506 :                 var->varattno <= rel->max_attr)
    6437             :             {
    6438     1167506 :                 int         ndx = var->varattno - rel->min_attr;
    6439             : 
    6440     1167506 :                 if (rel->attr_widths[ndx] > 0)
    6441     1134070 :                     return rel->attr_widths[ndx];
    6442             :             }
    6443             :         }
    6444             : 
    6445             :         /*
    6446             :          * No cached data available, so estimate using just the type info.
    6447             :          */
    6448       68024 :         width = get_typavgwidth(var->vartype, var->vartypmod);
    6449             :         Assert(width > 0);
    6450             : 
    6451       68024 :         return width;
    6452             :     }
    6453             : 
    6454      635980 :     width = get_typavgwidth(exprType(expr), exprTypmod(expr));
    6455             :     Assert(width > 0);
    6456      635980 :     return width;
    6457             : }
    6458             : 
    6459             : /*
    6460             :  * relation_byte_size
    6461             :  *    Estimate the storage space in bytes for a given number of tuples
    6462             :  *    of a given width (size in bytes).
    6463             :  */
    6464             : static double
    6465     4101996 : relation_byte_size(double tuples, int width)
    6466             : {
    6467     4101996 :     return tuples * (MAXALIGN(width) + MAXALIGN(SizeofHeapTupleHeader));
    6468             : }
    6469             : 
    6470             : /*
    6471             :  * page_size
    6472             :  *    Returns an estimate of the number of pages covered by a given
    6473             :  *    number of tuples of a given width (size in bytes).
    6474             :  */
    6475             : static double
    6476       10752 : page_size(double tuples, int width)
    6477             : {
    6478       10752 :     return ceil(relation_byte_size(tuples, width) / BLCKSZ);
    6479             : }
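                     : 
                     : /*
                     :  * Worked example: 1,000,000 tuples of width 28 occupy
                     :  * 1e6 * (MAXALIGN(28) + MAXALIGN(SizeofHeapTupleHeader)) =
                     :  * 1e6 * (32 + 24) = 56,000,000 bytes on an 8-byte-MAXALIGN build, and
                     :  * page_size() turns that into ceil(56000000 / 8192) = 6836 pages with the
                     :  * default 8kB BLCKSZ.
                     :  */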
    6480             : 
    6481             : /*
    6482             :  * Estimate the fraction of the work that each worker will do given the
    6483             :  * number of workers budgeted for the path.
    6484             :  */
    6485             : static double
    6486      179082 : get_parallel_divisor(Path *path)
    6487             : {
    6488      179082 :     double      parallel_divisor = path->parallel_workers;
    6489             : 
    6490             :     /*
    6491             :      * Early experience with parallel query suggests that when there is only
    6492             :      * one worker, the leader often makes a very substantial contribution to
    6493             :      * executing the parallel portion of the plan, but as more workers are
    6494             :      * added, it does less and less, because it's busy reading tuples from the
    6495             :      * workers and doing whatever non-parallel post-processing is needed.  By
    6496             :      * the time we reach 4 workers, the leader no longer makes a meaningful
    6497             :      * contribution.  Thus, for now, estimate that the leader spends 30% of
    6498             :      * its time servicing each worker, and the remainder executing the
    6499             :      * parallel plan.
    6500             :      */
    6501      179082 :     if (parallel_leader_participation)
    6502             :     {
    6503             :         double      leader_contribution;
    6504             : 
    6505      177780 :         leader_contribution = 1.0 - (0.3 * path->parallel_workers);
    6506      177780 :         if (leader_contribution > 0)
    6507      175464 :             parallel_divisor += leader_contribution;
    6508             :     }
    6509             : 
    6510      179082 :     return parallel_divisor;
    6511             : }
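                     : 
                     : /*
                     :  * A minimal standalone sketch of the divisor rule above, with an explicit
                     :  * flag standing in for the parallel_leader_participation GUC: 2 workers
                     :  * yield 2 + (1.0 - 0.6) = 2.4, while 4 or more workers yield just the
                     :  * worker count, because 1.0 - 0.3 * 4 is no longer positive.
                     :  */
                     : #include <stdbool.h>
                     : 
                     : static double
                     : parallel_divisor_sketch(int workers, bool leader_participates)
                     : {
                     :     double      divisor = workers;
                     : 
                     :     if (leader_participates)
                     :     {
                     :         double      leader_contribution = 1.0 - 0.3 * workers;
                     : 
                     :         if (leader_contribution > 0)
                     :             divisor += leader_contribution;
                     :     }
                     :     return divisor;             /* e.g. 2 -> 2.4, 4 -> 4.0 */
                     : }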
    6512             : 
    6513             : /*
    6514             :  * compute_bitmap_pages
    6515             :  *    Estimate number of pages fetched from heap in a bitmap heap scan.
    6516             :  *
    6517             :  * 'baserel' is the relation to be scanned
    6518             :  * 'bitmapqual' is a tree of IndexPaths, BitmapAndPaths, and BitmapOrPaths
    6519             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
    6520             :  *      estimates of caching behavior
    6521             :  *
    6522             :  * If cost_p isn't NULL, the indexTotalCost estimate is returned in *cost_p.
    6523             :  * If tuples_p isn't NULL, the tuples_fetched estimate is returned in *tuples_p.
    6524             :  */
    6525             : double
    6526      680148 : compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel,
    6527             :                      Path *bitmapqual, double loop_count,
    6528             :                      Cost *cost_p, double *tuples_p)
    6529             : {
    6530             :     Cost        indexTotalCost;
    6531             :     Selectivity indexSelectivity;
    6532             :     double      T;
    6533             :     double      pages_fetched;
    6534             :     double      tuples_fetched;
    6535             :     double      heap_pages;
    6536             :     double      maxentries;
    6537             : 
    6538             :     /*
    6539             :      * Fetch total cost of obtaining the bitmap, as well as its total
    6540             :      * selectivity.
    6541             :      */
    6542      680148 :     cost_bitmap_tree_node(bitmapqual, &indexTotalCost, &indexSelectivity);
    6543             : 
    6544             :     /*
    6545             :      * Estimate number of main-table pages fetched.
    6546             :      */
    6547      680148 :     tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);
    6548             : 
    6549      680148 :     T = (baserel->pages > 1) ? (double) baserel->pages : 1.0;
    6550             : 
    6551             :     /*
    6552             :      * For a single scan, the number of heap pages that need to be fetched is
    6553             :      * the same as the Mackert and Lohman formula for the case T <= b (ie, no
    6554             :      * re-reads needed).
    6555             :      */
    6556      680148 :     pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
    6557             : 
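                     /*
                      * [Editor's note, not in the PostgreSQL source] Worked
                      * instance of the formula above: with T = 1000 heap pages
                      * and tuples_fetched = 500, pages_fetched =
                      * (2*1000*500) / (2*1000 + 500) = 400 -- fewer than 500
                      * because some of the fetched tuples share a page.
                      */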
    6558             :     /*
    6559             :      * Calculate the number of pages fetched from the heap.  Then, based on
    6560             :      * the current work_mem, estimate the maximum number of entries the bitmap can hold.
    6561             :      * (Note that we always do this calculation based on the number of pages
    6562             :      * that would be fetched in a single iteration, even if loop_count > 1.
    6563             :      * That's correct, because only that number of entries will be stored in
    6564             :      * the bitmap at one time.)
    6565             :      */
    6566      680148 :     heap_pages = Min(pages_fetched, baserel->pages);
    6567      680148 :     maxentries = tbm_calculate_entries(work_mem * (Size) 1024);
    6568             : 
    6569      680148 :     if (loop_count > 1)
    6570             :     {
    6571             :         /*
    6572             :          * For repeated bitmap scans, scale up the number of tuples fetched in
    6573             :          * the Mackert and Lohman formula by the number of scans, so that we
    6574             :          * estimate the number of pages fetched by all the scans. Then
    6575             :          * pro-rate for one scan.
    6576             :          */
    6577      139732 :         pages_fetched = index_pages_fetched(tuples_fetched * loop_count,
    6578             :                                             baserel->pages,
    6579             :                                             get_indexpath_pages(bitmapqual),
    6580             :                                             root);
    6581      139732 :         pages_fetched /= loop_count;
    6582             :     }
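                     /*
                      * [Editor's note, not in the PostgreSQL source]
                      * index_pages_fetched() applies the Mackert-Lohman
                      * approximation with effective_cache_size, so the total
                      * page count across loop_count scans grows sublinearly
                      * when caching helps.  In the limit of a fully cached
                      * table, each page is fetched about once in total, and
                      * the per-scan estimate after the division approaches
                      * baserel->pages / loop_count.
                      */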
    6583             : 
    6584      680148 :     if (pages_fetched >= T)
    6585       69016 :         pages_fetched = T;
    6586             :     else
    6587      611132 :         pages_fetched = ceil(pages_fetched);
    6588             : 
    6589      680148 :     if (maxentries < heap_pages)
    6590             :     {
    6591             :         double      exact_pages;
    6592             :         double      lossy_pages;
    6593             : 
    6594             :         /*
    6595             :          * Crude approximation of the number of lossy pages.  Because of the
    6596             :          * way tbm_lossify() is coded, the number of lossy pages increases
    6597             :          * very sharply as soon as we run short of memory; this formula has
    6598             :          * that property and seems to perform adequately in testing, but it's
    6599             :          * possible we could do better somehow.
    6600             :          */
    6601          18 :         lossy_pages = Max(0, heap_pages - maxentries / 2);
    6602          18 :         exact_pages = heap_pages - lossy_pages;
    6603             : 
    6604             :         /*
    6605             :          * If there are lossy pages then recompute the number of tuples
    6606             :          * processed by the bitmap heap node.  We assume here that the chance
    6607             :          * of a given tuple coming from an exact page is the same as the
    6608             :          * chance that a given page is exact.  This might not be true, but
    6609             :          * it's not clear how we can do any better.
    6610             :          */
    6611          18 :         if (lossy_pages > 0)
    6612             :             tuples_fetched =
    6613          18 :                 clamp_row_est(indexSelectivity *
    6614          18 :                               (exact_pages / heap_pages) * baserel->tuples +
    6615          18 :                               (lossy_pages / heap_pages) * baserel->tuples);
    6616             :     }
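                     /*
                      * [Editor's note, not in the PostgreSQL source] Example of
                      * the lossy/exact split above: with heap_pages = 100 and
                      * maxentries = 80, lossy_pages = Max(0, 100 - 80/2) = 60
                      * and exact_pages = 40, showing how quickly the estimate
                      * turns lossy once the bitmap overruns work_mem.
                      */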
    6617             : 
    6618      680148 :     if (cost_p)
    6619      535218 :         *cost_p = indexTotalCost;
    6620      680148 :     if (tuples_p)
    6621      535218 :         *tuples_p = tuples_fetched;
    6622             : 
    6623      680148 :     return pages_fetched;
    6624             : }
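(Editor's note: a sketch of how a caller can retrieve the optional
out-parameters documented in the header comment above; the variable names are
hypothetical, but the call shape follows the signature as shown.)

    Cost        indexTotalCost;
    double      tuples_fetched;
    double      pages_fetched;

    pages_fetched = compute_bitmap_pages(root, baserel, bitmapqual, loop_count,
                                         &indexTotalCost, &tuples_fetched);
    /* pages_fetched drives the heap-access cost; indexTotalCost and
     * tuples_fetched feed the index and per-tuple CPU cost terms. */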
    6625             : 
    6626             : /*
    6627             :  * compute_gather_rows
    6628             :  *    Estimate number of rows for gather (merge) nodes.
    6629             :  *
    6630             :  * In a parallel plan, each worker's row estimate is determined by dividing the
    6631             :  * total number of rows by parallel_divisor, which accounts for the leader's
    6632             :  * contribution in addition to the number of workers.  Accordingly, when
    6633             :  * estimating the number of rows for gather (merge) nodes, we multiply the rows
    6634             :  * per worker by the same parallel_divisor to undo the division.
    6635             :  */
    6636             : double
    6637       29318 : compute_gather_rows(Path *path)
    6638             : {
    6639             :     Assert(path->parallel_workers > 0);
    6640             : 
    6641       29318 :     return clamp_row_est(path->rows * get_parallel_divisor(path));
    6642             : }
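(Editor's note: an illustrative round trip through the two estimates, not from
the source.)

    /* With 2 planned workers and leader participation, get_parallel_divisor()
     * returns 2.4.  If each worker is estimated to emit path->rows = 1000
     * rows, the Gather estimate is clamp_row_est(1000 * 2.4) = 2400 rows. */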

Generated by: LCOV version 1.16