LCOV - code coverage report
Current view: top level - src/backend/optimizer/path - costsize.c (source / functions)
Test:         PostgreSQL 18devel
Date:         2025-04-24 12:15:10

Coverage:               Hit     Total   Coverage
    Lines:             1743      1782     97.8 %
    Functions:           75        75    100.0 %

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * costsize.c
       4             :  *    Routines to compute (and set) relation sizes and path costs
       5             :  *
       6             :  * Path costs are measured in arbitrary units established by these basic
       7             :  * parameters:
       8             :  *
       9             :  *  seq_page_cost       Cost of a sequential page fetch
      10             :  *  random_page_cost    Cost of a non-sequential page fetch
      11             :  *  cpu_tuple_cost      Cost of typical CPU time to process a tuple
      12             :  *  cpu_index_tuple_cost  Cost of typical CPU time to process an index tuple
      13             :  *  cpu_operator_cost   Cost of CPU time to execute an operator or function
      14             :  *  parallel_tuple_cost Cost of CPU time to pass a tuple from worker to leader backend
      15             :  *  parallel_setup_cost Cost of setting up shared memory for parallelism
      16             :  *
      17             :  * We expect that the kernel will typically do some amount of read-ahead
      18             :  * optimization; this in conjunction with seek costs means that seq_page_cost
      19             :  * is normally considerably less than random_page_cost.  (However, if the
      20             :  * database is fully cached in RAM, it is reasonable to set them equal.)
      21             :  *
      22             :  * We also use a rough estimate "effective_cache_size" of the number of
      23             :  * disk pages in Postgres + OS-level disk cache.  (We can't simply use
      24             :  * NBuffers for this purpose because that would ignore the effects of
      25             :  * the kernel's disk cache.)
      26             :  *
      27             :  * Obviously, taking constants for these values is an oversimplification,
      28             :  * but it's tough enough to get any useful estimates even at this level of
      29             :  * detail.  Note that all of these parameters are user-settable, in case
      30             :  * the default values are drastically off for a particular platform.
      31             :  *
      32             :  * seq_page_cost and random_page_cost can also be overridden for an individual
      33             :  * tablespace, in case some data is on a fast disk and other data is on a slow
      34             :  * disk.  Per-tablespace overrides never apply to temporary work files such as
      35             :  * an external sort or a materialize node that overflows work_mem.
      36             :  *
      37             :  * We compute two separate costs for each path:
      38             :  *      total_cost: total estimated cost to fetch all tuples
      39             :  *      startup_cost: cost that is expended before first tuple is fetched
      40             :  * In some scenarios, such as when there is a LIMIT or we are implementing
      41             :  * an EXISTS(...) sub-select, it is not necessary to fetch all tuples of the
      42             :  * path's result.  A caller can estimate the cost of fetching a partial
      43             :  * result by interpolating between startup_cost and total_cost.  In detail:
      44             :  *      actual_cost = startup_cost +
      45             :  *          (total_cost - startup_cost) * tuples_to_fetch / path->rows;
      46             :  * Note that a base relation's rows count (and, by extension, plan_rows for
       47             :  * plan nodes below the LIMIT node) is set without regard to any LIMIT, so
      48             :  * that this equation works properly.  (Note: while path->rows is never zero
      49             :  * for ordinary relations, it is zero for paths for provably-empty relations,
      50             :  * so beware of division-by-zero.)  The LIMIT is applied as a top-level
      51             :  * plan node.
      52             :  *
      53             :  * Each path stores the total number of disabled nodes that exist at or
      54             :  * below that point in the plan tree. This is regarded as a component of
      55             :  * the cost, and paths with fewer disabled nodes should be regarded as
      56             :  * cheaper than those with more. Disabled nodes occur when the user sets
      57             :  * a GUC like enable_seqscan=false. We can't necessarily respect such a
       58             :  * setting in every part of the plan tree, but we want to respect it in as many
      59             :  * parts of the plan tree as possible. Simpler schemes like storing a Boolean
      60             :  * here rather than a count fail to do that. We used to disable nodes by
      61             :  * adding a large constant to the startup cost, but that distorted planning
      62             :  * in other ways.
      63             :  *
      64             :  * For largely historical reasons, most of the routines in this module use
      65             :  * the passed result Path only to store their results (rows, startup_cost and
      66             :  * total_cost) into.  All the input data they need is passed as separate
      67             :  * parameters, even though much of it could be extracted from the Path.
      68             :  * An exception is made for the cost_XXXjoin() routines, which expect all
      69             :  * the other fields of the passed XXXPath to be filled in, and similarly
      70             :  * cost_index() assumes the passed IndexPath is valid except for its output
      71             :  * values.
      72             :  *
      73             :  *
      74             :  * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
      75             :  * Portions Copyright (c) 1994, Regents of the University of California
      76             :  *
      77             :  * IDENTIFICATION
      78             :  *    src/backend/optimizer/path/costsize.c
      79             :  *
      80             :  *-------------------------------------------------------------------------
      81             :  */
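                     :
                     : /*
                     :  * A hypothetical sketch (not part of the PostgreSQL sources) of the
                     :  * partial-fetch interpolation described in the header comment above,
                     :  * including the division-by-zero guard it warns about.  The function
                     :  * name is invented for illustration.
                     :  */
                     : #ifdef COSTSIZE_EXAMPLES        /* illustrative only, never compiled */
                     : static Cost
                     : partial_fetch_cost(Path *path, double tuples_to_fetch)
                     : {
                     :     /* path->rows is zero only for provably-empty relations */
                     :     if (path->rows <= 0)
                     :         return path->startup_cost;
                     :     return path->startup_cost +
                     :         (path->total_cost - path->startup_cost) *
                     :         (tuples_to_fetch / path->rows);
                     : }
                     : #endif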
      82             : 
      83             : #include "postgres.h"
      84             : 
      85             : #include <limits.h>
      86             : #include <math.h>
      87             : 
      88             : #include "access/amapi.h"
      89             : #include "access/htup_details.h"
      90             : #include "access/tsmapi.h"
      91             : #include "executor/executor.h"
      92             : #include "executor/nodeAgg.h"
      93             : #include "executor/nodeHash.h"
      94             : #include "executor/nodeMemoize.h"
      95             : #include "miscadmin.h"
      96             : #include "nodes/makefuncs.h"
      97             : #include "nodes/nodeFuncs.h"
      98             : #include "optimizer/clauses.h"
      99             : #include "optimizer/cost.h"
     100             : #include "optimizer/optimizer.h"
     101             : #include "optimizer/pathnode.h"
     102             : #include "optimizer/paths.h"
     103             : #include "optimizer/placeholder.h"
     104             : #include "optimizer/plancat.h"
     105             : #include "optimizer/restrictinfo.h"
     106             : #include "parser/parsetree.h"
     107             : #include "utils/lsyscache.h"
     108             : #include "utils/selfuncs.h"
     109             : #include "utils/spccache.h"
     110             : #include "utils/tuplesort.h"
     111             : 
     112             : 
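                     : /* The divisor below is ln(2); LOG2(x) computes log2(x) by change of base. */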
     113             : #define LOG2(x)  (log(x) / 0.693147180559945)
     114             : 
     115             : /*
     116             :  * Append and MergeAppend nodes are less expensive than some other operations
     117             :  * which use cpu_tuple_cost; instead of adding a separate GUC, estimate the
     118             :  * per-tuple cost as cpu_tuple_cost multiplied by this value.
     119             :  */
     120             : #define APPEND_CPU_COST_MULTIPLIER 0.5
     121             : 
     122             : /*
     123             :  * Maximum value for row estimates.  We cap row estimates to this to help
     124             :  * ensure that costs based on these estimates remain within the range of what
     125             :  * double can represent.  add_path() wouldn't act sanely given infinite or NaN
     126             :  * cost values.
     127             :  */
     128             : #define MAXIMUM_ROWCOUNT 1e100
     129             : 
     130             : double      seq_page_cost = DEFAULT_SEQ_PAGE_COST;
     131             : double      random_page_cost = DEFAULT_RANDOM_PAGE_COST;
     132             : double      cpu_tuple_cost = DEFAULT_CPU_TUPLE_COST;
     133             : double      cpu_index_tuple_cost = DEFAULT_CPU_INDEX_TUPLE_COST;
     134             : double      cpu_operator_cost = DEFAULT_CPU_OPERATOR_COST;
     135             : double      parallel_tuple_cost = DEFAULT_PARALLEL_TUPLE_COST;
     136             : double      parallel_setup_cost = DEFAULT_PARALLEL_SETUP_COST;
     137             : double      recursive_worktable_factor = DEFAULT_RECURSIVE_WORKTABLE_FACTOR;
     138             : 
     139             : int         effective_cache_size = DEFAULT_EFFECTIVE_CACHE_SIZE;
     140             : 
     141             : Cost        disable_cost = 1.0e10;
     142             : 
     143             : int         max_parallel_workers_per_gather = 2;
     144             : 
     145             : bool        enable_seqscan = true;
     146             : bool        enable_indexscan = true;
     147             : bool        enable_indexonlyscan = true;
     148             : bool        enable_bitmapscan = true;
     149             : bool        enable_tidscan = true;
     150             : bool        enable_sort = true;
     151             : bool        enable_incremental_sort = true;
     152             : bool        enable_hashagg = true;
     153             : bool        enable_nestloop = true;
     154             : bool        enable_material = true;
     155             : bool        enable_memoize = true;
     156             : bool        enable_mergejoin = true;
     157             : bool        enable_hashjoin = true;
     158             : bool        enable_gathermerge = true;
     159             : bool        enable_partitionwise_join = false;
     160             : bool        enable_partitionwise_aggregate = false;
     161             : bool        enable_parallel_append = true;
     162             : bool        enable_parallel_hash = true;
     163             : bool        enable_partition_pruning = true;
     164             : bool        enable_presorted_aggregate = true;
     165             : bool        enable_async_append = true;
     166             : 
     167             : typedef struct
     168             : {
     169             :     PlannerInfo *root;
     170             :     QualCost    total;
     171             : } cost_qual_eval_context;
     172             : 
     173             : static List *extract_nonindex_conditions(List *qual_clauses, List *indexclauses);
     174             : static MergeScanSelCache *cached_scansel(PlannerInfo *root,
     175             :                                          RestrictInfo *rinfo,
     176             :                                          PathKey *pathkey);
     177             : static void cost_rescan(PlannerInfo *root, Path *path,
     178             :                         Cost *rescan_startup_cost, Cost *rescan_total_cost);
     179             : static bool cost_qual_eval_walker(Node *node, cost_qual_eval_context *context);
     180             : static void get_restriction_qual_cost(PlannerInfo *root, RelOptInfo *baserel,
     181             :                                       ParamPathInfo *param_info,
     182             :                                       QualCost *qpqual_cost);
     183             : static bool has_indexed_join_quals(NestPath *path);
     184             : static double approx_tuple_count(PlannerInfo *root, JoinPath *path,
     185             :                                  List *quals);
     186             : static double calc_joinrel_size_estimate(PlannerInfo *root,
     187             :                                          RelOptInfo *joinrel,
     188             :                                          RelOptInfo *outer_rel,
     189             :                                          RelOptInfo *inner_rel,
     190             :                                          double outer_rows,
     191             :                                          double inner_rows,
     192             :                                          SpecialJoinInfo *sjinfo,
     193             :                                          List *restrictlist);
     194             : static Selectivity get_foreign_key_join_selectivity(PlannerInfo *root,
     195             :                                                     Relids outer_relids,
     196             :                                                     Relids inner_relids,
     197             :                                                     SpecialJoinInfo *sjinfo,
     198             :                                                     List **restrictlist);
     199             : static Cost append_nonpartial_cost(List *subpaths, int numpaths,
     200             :                                    int parallel_workers);
     201             : static void set_rel_width(PlannerInfo *root, RelOptInfo *rel);
     202             : static int32 get_expr_width(PlannerInfo *root, const Node *expr);
     203             : static double relation_byte_size(double tuples, int width);
     204             : static double page_size(double tuples, int width);
     205             : static double get_parallel_divisor(Path *path);
     206             : 
     207             : 
     208             : /*
     209             :  * clamp_row_est
     210             :  *      Force a row-count estimate to a sane value.
     211             :  */
     212             : double
     213     8942002 : clamp_row_est(double nrows)
     214             : {
     215             :     /*
     216             :      * Avoid infinite and NaN row estimates.  Costs derived from such values
     217             :      * are going to be useless.  Also force the estimate to be at least one
     218             :      * row, to make explain output look better and to avoid possible
     219             :      * divide-by-zero when interpolating costs.  Make it an integer, too.
     220             :      */
     221     8942002 :     if (nrows > MAXIMUM_ROWCOUNT || isnan(nrows))
     222           0 :         nrows = MAXIMUM_ROWCOUNT;
     223     8942002 :     else if (nrows <= 1.0)
     224     3251690 :         nrows = 1.0;
     225             :     else
     226     5690312 :         nrows = rint(nrows);
     227             : 
     228     8942002 :     return nrows;
     229             : }
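                     :
                     : /*
                     :  * Illustration (invented inputs): clamp_row_est(0.3) returns 1.0,
                     :  * clamp_row_est(2.6) returns 3.0 via rint(), and clamp_row_est(NAN)
                     :  * returns MAXIMUM_ROWCOUNT, i.e. 1e100.
                     :  */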
     230             : 
     231             : /*
     232             :  * clamp_width_est
     233             :  *      Force a tuple-width estimate to a sane value.
     234             :  *
     235             :  * The planner represents datatype width and tuple width estimates as int32.
     236             :  * When summing column width estimates to create a tuple width estimate,
     237             :  * it's possible to reach integer overflow in edge cases.  To ensure sane
     238             :  * behavior, we form such sums in int64 arithmetic and then apply this routine
     239             :  * to clamp to int32 range.
     240             :  */
     241             : int32
     242     1902542 : clamp_width_est(int64 tuple_width)
     243             : {
     244             :     /*
     245             :      * Anything more than MaxAllocSize is clearly bogus, since we could not
     246             :      * create a tuple that large.
     247             :      */
     248     1902542 :     if (tuple_width > MaxAllocSize)
     249           0 :         return (int32) MaxAllocSize;
     250             : 
     251             :     /*
     252             :      * Unlike clamp_row_est, we just Assert that the value isn't negative,
     253             :      * rather than masking such errors.
     254             :      */
     255             :     Assert(tuple_width >= 0);
     256             : 
     257     1902542 :     return (int32) tuple_width;
     258             : }
     259             : 
     260             : /*
     261             :  * clamp_cardinality_to_long
     262             :  *      Cast a Cardinality value to a sane long value.
     263             :  */
     264             : long
     265       45676 : clamp_cardinality_to_long(Cardinality x)
     266             : {
     267             :     /*
     268             :      * Just for paranoia's sake, ensure we do something sane with negative or
     269             :      * NaN values.
     270             :      */
     271       45676 :     if (isnan(x))
     272           0 :         return LONG_MAX;
     273       45676 :     if (x <= 0)
     274         556 :         return 0;
     275             : 
     276             :     /*
     277             :      * If "long" is 64 bits, then LONG_MAX cannot be represented exactly as a
     278             :      * double.  Casting it to double and back may well result in overflow due
     279             :      * to rounding, so avoid doing that.  We trust that any double value that
     280             :      * compares strictly less than "(double) LONG_MAX" will cast to a
     281             :      * representable "long" value.
     282             :      */
     283       45120 :     return (x < (double) LONG_MAX) ? (long) x : LONG_MAX;
     284             : }
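                     :
                     : /*
                     :  * Illustration of the rounding hazard above (assuming 64-bit "long"):
                     :  * LONG_MAX is 2^63 - 1, which a 53-bit double mantissa cannot represent,
                     :  * so (double) LONG_MAX rounds up to exactly 2^63; casting that value
                     :  * back to long would overflow.  The strict "<" comparison never
                     :  * performs such a cast.
                     :  */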
     285             : 
     286             : 
     287             : /*
     288             :  * cost_seqscan
     289             :  *    Determines and returns the cost of scanning a relation sequentially.
     290             :  *
     291             :  * 'baserel' is the relation to be scanned
     292             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     293             :  */
     294             : void
     295      426398 : cost_seqscan(Path *path, PlannerInfo *root,
     296             :              RelOptInfo *baserel, ParamPathInfo *param_info)
     297             : {
     298      426398 :     Cost        startup_cost = 0;
     299             :     Cost        cpu_run_cost;
     300             :     Cost        disk_run_cost;
     301             :     double      spc_seq_page_cost;
     302             :     QualCost    qpqual_cost;
     303             :     Cost        cpu_per_tuple;
     304             : 
     305             :     /* Should only be applied to base relations */
     306             :     Assert(baserel->relid > 0);
     307             :     Assert(baserel->rtekind == RTE_RELATION);
     308             : 
     309             :     /* Mark the path with the correct row estimate */
     310      426398 :     if (param_info)
     311         840 :         path->rows = param_info->ppi_rows;
     312             :     else
     313      425558 :         path->rows = baserel->rows;
     314             : 
     315             :     /* fetch estimated page cost for tablespace containing table */
     316      426398 :     get_tablespace_page_costs(baserel->reltablespace,
     317             :                               NULL,
     318             :                               &spc_seq_page_cost);
     319             : 
     320             :     /*
     321             :      * disk costs
     322             :      */
     323      426398 :     disk_run_cost = spc_seq_page_cost * baserel->pages;
     324             : 
     325             :     /* CPU costs */
     326      426398 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
     327             : 
     328      426398 :     startup_cost += qpqual_cost.startup;
     329      426398 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     330      426398 :     cpu_run_cost = cpu_per_tuple * baserel->tuples;
     331             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     332      426398 :     startup_cost += path->pathtarget->cost.startup;
     333      426398 :     cpu_run_cost += path->pathtarget->cost.per_tuple * path->rows;
     334             : 
     335             :     /* Adjust costing for parallelism, if used. */
     336      426398 :     if (path->parallel_workers > 0)
     337             :     {
     338       26118 :         double      parallel_divisor = get_parallel_divisor(path);
     339             : 
     340             :         /* The CPU cost is divided among all the workers. */
     341       26118 :         cpu_run_cost /= parallel_divisor;
     342             : 
     343             :         /*
     344             :          * It may be possible to amortize some of the I/O cost, but probably
     345             :          * not very much, because most operating systems already do aggressive
     346             :          * prefetching.  For now, we assume that the disk run cost can't be
     347             :          * amortized at all.
     348             :          */
     349             : 
     350             :         /*
     351             :          * In the case of a parallel plan, the row count needs to represent
     352             :          * the number of tuples processed per worker.
     353             :          */
     354       26118 :         path->rows = clamp_row_est(path->rows / parallel_divisor);
     355             :     }
     356             : 
     357      426398 :     path->disabled_nodes = enable_seqscan ? 0 : 1;
     358      426398 :     path->startup_cost = startup_cost;
     359      426398 :     path->total_cost = startup_cost + cpu_run_cost + disk_run_cost;
     360      426398 : }
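                     :
                     : /*
                     :  * Worked example (invented table, default parameters seq_page_cost = 1.0
                     :  * and cpu_tuple_cost = 0.01): for a 100-page, 10000-tuple table with no
                     :  * quals and a trivial tlist,
                     :  *     disk_run_cost = 1.0  * 100   = 100.0
                     :  *     cpu_run_cost  = 0.01 * 10000 = 100.0
                     :  * so EXPLAIN would report "cost=0.00..200.00" for the sequential scan.
                     :  */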
     361             : 
     362             : /*
     363             :  * cost_samplescan
     364             :  *    Determines and returns the cost of scanning a relation using sampling.
     365             :  *
     366             :  * 'baserel' is the relation to be scanned
     367             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     368             :  */
     369             : void
     370         306 : cost_samplescan(Path *path, PlannerInfo *root,
     371             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
     372             : {
     373         306 :     Cost        startup_cost = 0;
     374         306 :     Cost        run_cost = 0;
     375             :     RangeTblEntry *rte;
     376             :     TableSampleClause *tsc;
     377             :     TsmRoutine *tsm;
     378             :     double      spc_seq_page_cost,
     379             :                 spc_random_page_cost,
     380             :                 spc_page_cost;
     381             :     QualCost    qpqual_cost;
     382             :     Cost        cpu_per_tuple;
     383             : 
     384             :     /* Should only be applied to base relations with tablesample clauses */
     385             :     Assert(baserel->relid > 0);
     386         306 :     rte = planner_rt_fetch(baserel->relid, root);
     387             :     Assert(rte->rtekind == RTE_RELATION);
     388         306 :     tsc = rte->tablesample;
     389             :     Assert(tsc != NULL);
     390         306 :     tsm = GetTsmRoutine(tsc->tsmhandler);
     391             : 
     392             :     /* Mark the path with the correct row estimate */
     393         306 :     if (param_info)
     394          72 :         path->rows = param_info->ppi_rows;
     395             :     else
     396         234 :         path->rows = baserel->rows;
     397             : 
     398             :     /* fetch estimated page cost for tablespace containing table */
     399         306 :     get_tablespace_page_costs(baserel->reltablespace,
     400             :                               &spc_random_page_cost,
     401             :                               &spc_seq_page_cost);
     402             : 
     403             :     /* if NextSampleBlock is used, assume random access, else sequential */
     404         612 :     spc_page_cost = (tsm->NextSampleBlock != NULL) ?
     405         306 :         spc_random_page_cost : spc_seq_page_cost;
     406             : 
     407             :     /*
     408             :      * disk costs (recall that baserel->pages has already been set to the
     409             :      * number of pages the sampling method will visit)
     410             :      */
     411         306 :     run_cost += spc_page_cost * baserel->pages;
     412             : 
     413             :     /*
     414             :      * CPU costs (recall that baserel->tuples has already been set to the
     415             :      * number of tuples the sampling method will select).  Note that we ignore
     416             :      * execution cost of the TABLESAMPLE parameter expressions; they will be
     417             :      * evaluated only once per scan, and in most usages they'll likely be
     418             :      * simple constants anyway.  We also don't charge anything for the
     419             :      * calculations the sampling method might do internally.
     420             :      */
     421         306 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
     422             : 
     423         306 :     startup_cost += qpqual_cost.startup;
     424         306 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     425         306 :     run_cost += cpu_per_tuple * baserel->tuples;
     426             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     427         306 :     startup_cost += path->pathtarget->cost.startup;
     428         306 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
     429             : 
     430         306 :     path->disabled_nodes = 0;
     431         306 :     path->startup_cost = startup_cost;
     432         306 :     path->total_cost = startup_cost + run_cost;
     433         306 : }
     434             : 
     435             : /*
     436             :  * cost_gather
      437             :  *    Determines and returns the cost of a gather path.
     438             :  *
     439             :  * 'rel' is the relation to be operated upon
     440             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     441             :  * 'rows' may be used to point to a row estimate; if non-NULL, it overrides
     442             :  * both 'rel' and 'param_info'.  This is useful when the path doesn't exactly
     443             :  * correspond to any particular RelOptInfo.
     444             :  */
     445             : void
     446       19082 : cost_gather(GatherPath *path, PlannerInfo *root,
     447             :             RelOptInfo *rel, ParamPathInfo *param_info,
     448             :             double *rows)
     449             : {
     450       19082 :     Cost        startup_cost = 0;
     451       19082 :     Cost        run_cost = 0;
     452             : 
     453             :     /* Mark the path with the correct row estimate */
     454       19082 :     if (rows)
     455        1752 :         path->path.rows = *rows;
     456       17330 :     else if (param_info)
     457           0 :         path->path.rows = param_info->ppi_rows;
     458             :     else
     459       17330 :         path->path.rows = rel->rows;
     460             : 
     461       19082 :     startup_cost = path->subpath->startup_cost;
     462             : 
     463       19082 :     run_cost = path->subpath->total_cost - path->subpath->startup_cost;
     464             : 
     465             :     /* Parallel setup and communication cost. */
     466       19082 :     startup_cost += parallel_setup_cost;
     467       19082 :     run_cost += parallel_tuple_cost * path->path.rows;
     468             : 
     469       19082 :     path->path.disabled_nodes = path->subpath->disabled_nodes;
     470       19082 :     path->path.startup_cost = startup_cost;
     471       19082 :     path->path.total_cost = (startup_cost + run_cost);
     472       19082 : }
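                     :
                     : /*
                     :  * Worked example (invented subpath, default parallel_setup_cost = 1000
                     :  * and parallel_tuple_cost = 0.1): for a subpath with startup_cost = 0,
                     :  * total_cost = 500 and 1000 output rows,
                     :  *     startup_cost = 0 + 1000         = 1000.0
                     :  *     run_cost     = 500 + 0.1 * 1000 = 600.0
                     :  * giving a Gather total_cost of 1600.0.
                     :  */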
     473             : 
     474             : /*
     475             :  * cost_gather_merge
      476             :  *    Determines and returns the cost of a gather merge path.
     477             :  *
     478             :  * GatherMerge merges several pre-sorted input streams, using a heap that at
     479             :  * any given instant holds the next tuple from each stream. If there are N
     480             :  * streams, we need about N*log2(N) tuple comparisons to construct the heap at
     481             :  * startup, and then for each output tuple, about log2(N) comparisons to
     482             :  * replace the top heap entry with the next tuple from the same stream.
     483             :  */
     484             : void
     485       10134 : cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
     486             :                   RelOptInfo *rel, ParamPathInfo *param_info,
     487             :                   int input_disabled_nodes,
     488             :                   Cost input_startup_cost, Cost input_total_cost,
     489             :                   double *rows)
     490             : {
     491       10134 :     Cost        startup_cost = 0;
     492       10134 :     Cost        run_cost = 0;
     493             :     Cost        comparison_cost;
     494             :     double      N;
     495             :     double      logN;
     496             : 
     497             :     /* Mark the path with the correct row estimate */
     498       10134 :     if (rows)
     499        4612 :         path->path.rows = *rows;
     500        5522 :     else if (param_info)
     501           0 :         path->path.rows = param_info->ppi_rows;
     502             :     else
     503        5522 :         path->path.rows = rel->rows;
     504             : 
     505             :     /*
     506             :      * Add one to the number of workers to account for the leader.  This might
     507             :      * be overgenerous since the leader will do less work than other workers
     508             :      * in typical cases, but we'll go with it for now.
     509             :      */
     510             :     Assert(path->num_workers > 0);
     511       10134 :     N = (double) path->num_workers + 1;
     512       10134 :     logN = LOG2(N);
     513             : 
     514             :     /* Assumed cost per tuple comparison */
     515       10134 :     comparison_cost = 2.0 * cpu_operator_cost;
     516             : 
     517             :     /* Heap creation cost */
     518       10134 :     startup_cost += comparison_cost * N * logN;
     519             : 
     520             :     /* Per-tuple heap maintenance cost */
     521       10134 :     run_cost += path->path.rows * comparison_cost * logN;
     522             : 
     523             :     /* small cost for heap management, like cost_merge_append */
     524       10134 :     run_cost += cpu_operator_cost * path->path.rows;
     525             : 
     526             :     /*
     527             :      * Parallel setup and communication cost.  Since Gather Merge, unlike
     528             :      * Gather, requires us to block until a tuple is available from every
     529             :      * worker, we bump the IPC cost up a little bit as compared with Gather.
     530             :      * For lack of a better idea, charge an extra 5%.
     531             :      */
     532       10134 :     startup_cost += parallel_setup_cost;
     533       10134 :     run_cost += parallel_tuple_cost * path->path.rows * 1.05;
     534             : 
     535       10134 :     path->path.disabled_nodes = input_disabled_nodes
     536       10134 :         + (enable_gathermerge ? 0 : 1);
     537       10134 :     path->path.startup_cost = startup_cost + input_startup_cost;
     538       10134 :     path->path.total_cost = (startup_cost + run_cost + input_total_cost);
     539       10134 : }
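                     :
                     : /*
                     :  * Worked example (invented figures): with num_workers = 2, N = 3 and
                     :  * logN = log2(3) ~= 1.585.  Under the default cpu_operator_cost =
                     :  * 0.0025, comparison_cost = 0.005, so heap creation adds
                     :  * 0.005 * 3 * 1.585 ~= 0.024 to startup, and each output row is charged
                     :  * 0.005 * 1.585 + 0.0025 ~= 0.0104 for heap maintenance, plus the IPC
                     :  * charge of parallel_tuple_cost * 1.05 = 0.105 per row and
                     :  * parallel_setup_cost = 1000 at startup.
                     :  */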
     540             : 
     541             : /*
     542             :  * cost_index
     543             :  *    Determines and returns the cost of scanning a relation using an index.
     544             :  *
     545             :  * 'path' describes the indexscan under consideration, and is complete
     546             :  *      except for the fields to be set by this routine
     547             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
     548             :  *      estimates of caching behavior
     549             :  *
     550             :  * In addition to rows, startup_cost and total_cost, cost_index() sets the
     551             :  * path's indextotalcost and indexselectivity fields.  These values will be
     552             :  * needed if the IndexPath is used in a BitmapIndexScan.
     553             :  *
     554             :  * NOTE: path->indexquals must contain only clauses usable as index
     555             :  * restrictions.  Any additional quals evaluated as qpquals may reduce the
     556             :  * number of returned tuples, but they won't reduce the number of tuples
     557             :  * we have to fetch from the table, so they don't reduce the scan cost.
     558             :  */
     559             : void
     560      784292 : cost_index(IndexPath *path, PlannerInfo *root, double loop_count,
     561             :            bool partial_path)
     562             : {
     563      784292 :     IndexOptInfo *index = path->indexinfo;
     564      784292 :     RelOptInfo *baserel = index->rel;
     565      784292 :     bool        indexonly = (path->path.pathtype == T_IndexOnlyScan);
     566             :     amcostestimate_function amcostestimate;
     567             :     List       *qpquals;
     568      784292 :     Cost        startup_cost = 0;
     569      784292 :     Cost        run_cost = 0;
     570      784292 :     Cost        cpu_run_cost = 0;
     571             :     Cost        indexStartupCost;
     572             :     Cost        indexTotalCost;
     573             :     Selectivity indexSelectivity;
     574             :     double      indexCorrelation,
     575             :                 csquared;
     576             :     double      spc_seq_page_cost,
     577             :                 spc_random_page_cost;
     578             :     Cost        min_IO_cost,
     579             :                 max_IO_cost;
     580             :     QualCost    qpqual_cost;
     581             :     Cost        cpu_per_tuple;
     582             :     double      tuples_fetched;
     583             :     double      pages_fetched;
     584             :     double      rand_heap_pages;
     585             :     double      index_pages;
     586             : 
     587             :     /* Should only be applied to base relations */
     588             :     Assert(IsA(baserel, RelOptInfo) &&
     589             :            IsA(index, IndexOptInfo));
     590             :     Assert(baserel->relid > 0);
     591             :     Assert(baserel->rtekind == RTE_RELATION);
     592             : 
     593             :     /*
     594             :      * Mark the path with the correct row estimate, and identify which quals
     595             :      * will need to be enforced as qpquals.  We need not check any quals that
     596             :      * are implied by the index's predicate, so we can use indrestrictinfo not
     597             :      * baserestrictinfo as the list of relevant restriction clauses for the
     598             :      * rel.
     599             :      */
     600      784292 :     if (path->path.param_info)
     601             :     {
     602      144868 :         path->path.rows = path->path.param_info->ppi_rows;
     603             :         /* qpquals come from the rel's restriction clauses and ppi_clauses */
     604      144868 :         qpquals = list_concat(extract_nonindex_conditions(path->indexinfo->indrestrictinfo,
     605             :                                                           path->indexclauses),
     606      144868 :                               extract_nonindex_conditions(path->path.param_info->ppi_clauses,
     607             :                                                           path->indexclauses));
     608             :     }
     609             :     else
     610             :     {
     611      639424 :         path->path.rows = baserel->rows;
     612             :         /* qpquals come from just the rel's restriction clauses */
     613      639424 :         qpquals = extract_nonindex_conditions(path->indexinfo->indrestrictinfo,
     614             :                                               path->indexclauses);
     615             :     }
     616             : 
     617             :     /* we don't need to check enable_indexonlyscan; indxpath.c does that */
     618      784292 :     path->path.disabled_nodes = enable_indexscan ? 0 : 1;
     619             : 
     620             :     /*
     621             :      * Call index-access-method-specific code to estimate the processing cost
     622             :      * for scanning the index, as well as the selectivity of the index (ie,
     623             :      * the fraction of main-table tuples we will have to retrieve) and its
     624             :      * correlation to the main-table tuple order.  We need a cast here because
     625             :      * pathnodes.h uses a weak function type to avoid including amapi.h.
     626             :      */
     627      784292 :     amcostestimate = (amcostestimate_function) index->amcostestimate;
     628      784292 :     amcostestimate(root, path, loop_count,
     629             :                    &indexStartupCost, &indexTotalCost,
     630             :                    &indexSelectivity, &indexCorrelation,
     631             :                    &index_pages);
     632             : 
     633             :     /*
     634             :      * Save amcostestimate's results for possible use in bitmap scan planning.
     635             :      * We don't bother to save indexStartupCost or indexCorrelation, because a
     636             :      * bitmap scan doesn't care about either.
     637             :      */
     638      784292 :     path->indextotalcost = indexTotalCost;
     639      784292 :     path->indexselectivity = indexSelectivity;
     640             : 
     641             :     /* all costs for touching index itself included here */
     642      784292 :     startup_cost += indexStartupCost;
     643      784292 :     run_cost += indexTotalCost - indexStartupCost;
     644             : 
     645             :     /* estimate number of main-table tuples fetched */
     646      784292 :     tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);
     647             : 
     648             :     /* fetch estimated page costs for tablespace containing table */
     649      784292 :     get_tablespace_page_costs(baserel->reltablespace,
     650             :                               &spc_random_page_cost,
     651             :                               &spc_seq_page_cost);
     652             : 
     653             :     /*----------
     654             :      * Estimate number of main-table pages fetched, and compute I/O cost.
     655             :      *
     656             :      * When the index ordering is uncorrelated with the table ordering,
     657             :      * we use an approximation proposed by Mackert and Lohman (see
     658             :      * index_pages_fetched() for details) to compute the number of pages
     659             :      * fetched, and then charge spc_random_page_cost per page fetched.
     660             :      *
     661             :      * When the index ordering is exactly correlated with the table ordering
     662             :      * (just after a CLUSTER, for example), the number of pages fetched should
     663             :      * be exactly selectivity * table_size.  What's more, all but the first
     664             :      * will be sequential fetches, not the random fetches that occur in the
     665             :      * uncorrelated case.  So if the number of pages is more than 1, we
     666             :      * ought to charge
     667             :      *      spc_random_page_cost + (pages_fetched - 1) * spc_seq_page_cost
     668             :      * For partially-correlated indexes, we ought to charge somewhere between
     669             :      * these two estimates.  We currently interpolate linearly between the
     670             :      * estimates based on the correlation squared (XXX is that appropriate?).
     671             :      *
     672             :      * If it's an index-only scan, then we will not need to fetch any heap
     673             :      * pages for which the visibility map shows all tuples are visible.
     674             :      * Hence, reduce the estimated number of heap fetches accordingly.
     675             :      * We use the measured fraction of the entire heap that is all-visible,
     676             :      * which might not be particularly relevant to the subset of the heap
     677             :      * that this query will fetch; but it's not clear how to do better.
     678             :      *----------
     679             :      */
     680      784292 :     if (loop_count > 1)
     681             :     {
     682             :         /*
     683             :          * For repeated indexscans, the appropriate estimate for the
     684             :          * uncorrelated case is to scale up the number of tuples fetched in
     685             :          * the Mackert and Lohman formula by the number of scans, so that we
     686             :          * estimate the number of pages fetched by all the scans; then
     687             :          * pro-rate the costs for one scan.  In this case we assume all the
     688             :          * fetches are random accesses.
     689             :          */
     690       82932 :         pages_fetched = index_pages_fetched(tuples_fetched * loop_count,
     691             :                                             baserel->pages,
     692       82932 :                                             (double) index->pages,
     693             :                                             root);
     694             : 
     695       82932 :         if (indexonly)
     696        9036 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     697             : 
     698       82932 :         rand_heap_pages = pages_fetched;
     699             : 
     700       82932 :         max_IO_cost = (pages_fetched * spc_random_page_cost) / loop_count;
     701             : 
     702             :         /*
     703             :          * In the perfectly correlated case, the number of pages touched by
     704             :          * each scan is selectivity * table_size, and we can use the Mackert
     705             :          * and Lohman formula at the page level to estimate how much work is
     706             :          * saved by caching across scans.  We still assume all the fetches are
     707             :          * random, though, which is an overestimate that's hard to correct for
     708             :          * without double-counting the cache effects.  (But in most cases
     709             :          * where such a plan is actually interesting, only one page would get
     710             :          * fetched per scan anyway, so it shouldn't matter much.)
     711             :          */
     712       82932 :         pages_fetched = ceil(indexSelectivity * (double) baserel->pages);
     713             : 
     714       82932 :         pages_fetched = index_pages_fetched(pages_fetched * loop_count,
     715             :                                             baserel->pages,
     716       82932 :                                             (double) index->pages,
     717             :                                             root);
     718             : 
     719       82932 :         if (indexonly)
     720        9036 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     721             : 
     722       82932 :         min_IO_cost = (pages_fetched * spc_random_page_cost) / loop_count;
     723             :     }
     724             :     else
     725             :     {
     726             :         /*
     727             :          * Normal case: apply the Mackert and Lohman formula, and then
     728             :          * interpolate between that and the correlation-derived result.
     729             :          */
     730      701360 :         pages_fetched = index_pages_fetched(tuples_fetched,
     731             :                                             baserel->pages,
     732      701360 :                                             (double) index->pages,
     733             :                                             root);
     734             : 
     735      701360 :         if (indexonly)
     736       64374 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     737             : 
     738      701360 :         rand_heap_pages = pages_fetched;
     739             : 
     740             :         /* max_IO_cost is for the perfectly uncorrelated case (csquared=0) */
     741      701360 :         max_IO_cost = pages_fetched * spc_random_page_cost;
     742             : 
     743             :         /* min_IO_cost is for the perfectly correlated case (csquared=1) */
     744      701360 :         pages_fetched = ceil(indexSelectivity * (double) baserel->pages);
     745             : 
     746      701360 :         if (indexonly)
     747       64374 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     748             : 
     749      701360 :         if (pages_fetched > 0)
     750             :         {
     751      631462 :             min_IO_cost = spc_random_page_cost;
     752      631462 :             if (pages_fetched > 1)
     753      187834 :                 min_IO_cost += (pages_fetched - 1) * spc_seq_page_cost;
     754             :         }
     755             :         else
     756       69898 :             min_IO_cost = 0;
     757             :     }
     758             : 
     759      784292 :     if (partial_path)
     760             :     {
     761             :         /*
     762             :          * For index only scans compute workers based on number of index pages
     763             :          * fetched; the number of heap pages we fetch might be so small as to
     764             :          * effectively rule out parallelism, which we don't want to do.
     765             :          */
     766      272496 :         if (indexonly)
     767       23214 :             rand_heap_pages = -1;
     768             : 
     769             :         /*
     770             :          * Estimate the number of parallel workers required to scan index. Use
     771             :          * the number of heap pages computed considering heap fetches won't be
     772             :          * sequential as for parallel scans the pages are accessed in random
     773             :          * order.
     774             :          */
     775      272496 :         path->path.parallel_workers = compute_parallel_worker(baserel,
     776             :                                                               rand_heap_pages,
     777             :                                                               index_pages,
     778             :                                                               max_parallel_workers_per_gather);
     779             : 
     780             :         /*
     781             :          * Fall out if workers can't be assigned for parallel scan, because in
     782             :          * such a case this path will be rejected.  So there is no benefit in
     783             :          * doing extra computation.
     784             :          */
     785      272496 :         if (path->path.parallel_workers <= 0)
     786      262464 :             return;
     787             : 
     788       10032 :         path->path.parallel_aware = true;
     789             :     }
     790             : 
     791             :     /*
     792             :      * Now interpolate based on estimated index order correlation to get total
     793             :      * disk I/O cost for main table accesses.
     794             :      */
     795      521828 :     csquared = indexCorrelation * indexCorrelation;
     796             : 
     797      521828 :     run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);
     798             : 
     799             :     /*
     800             :      * Estimate CPU costs per tuple.
     801             :      *
     802             :      * What we want here is cpu_tuple_cost plus the evaluation costs of any
     803             :      * qual clauses that we have to evaluate as qpquals.
     804             :      */
     805      521828 :     cost_qual_eval(&qpqual_cost, qpquals, root);
     806             : 
     807      521828 :     startup_cost += qpqual_cost.startup;
     808      521828 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     809             : 
     810      521828 :     cpu_run_cost += cpu_per_tuple * tuples_fetched;
     811             : 
     812             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     813      521828 :     startup_cost += path->path.pathtarget->cost.startup;
     814      521828 :     cpu_run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
     815             : 
     816             :     /* Adjust costing for parallelism, if used. */
     817      521828 :     if (path->path.parallel_workers > 0)
     818             :     {
     819       10032 :         double      parallel_divisor = get_parallel_divisor(&path->path);
     820             : 
     821       10032 :         path->path.rows = clamp_row_est(path->path.rows / parallel_divisor);
     822             : 
     823             :         /* The CPU cost is divided among all the workers. */
     824       10032 :         cpu_run_cost /= parallel_divisor;
     825             :     }
     826             : 
     827      521828 :     run_cost += cpu_run_cost;
     828             : 
     829      521828 :     path->path.startup_cost = startup_cost;
     830      521828 :     path->path.total_cost = startup_cost + run_cost;
     831             : }
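                     :
                     : /*
                     :  * Worked example of the correlation interpolation (invented figures):
                     :  * with max_IO_cost = 100, min_IO_cost = 10 and indexCorrelation = 0.5,
                     :  * csquared = 0.25 and the charged I/O cost is
                     :  *     100 + 0.25 * (10 - 100) = 77.5
                     :  * so a half-correlated index is costed much closer to the fully-random
                     :  * worst case than to the fully-sequential best case.
                     :  */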
     832             : 
     833             : /*
     834             :  * extract_nonindex_conditions
     835             :  *
     836             :  * Given a list of quals to be enforced in an indexscan, extract the ones that
     837             :  * will have to be applied as qpquals (ie, the index machinery won't handle
     838             :  * them).  Here we detect only whether a qual clause is directly redundant
     839             :  * with some indexclause.  If the index path is chosen for use, createplan.c
     840             :  * will try a bit harder to get rid of redundant qual conditions; specifically
     841             :  * it will see if quals can be proven to be implied by the indexquals.  But
     842             :  * it does not seem worth the cycles to try to factor that in at this stage,
     843             :  * since we're only trying to estimate qual eval costs.  Otherwise this must
     844             :  * match the logic in create_indexscan_plan().
     845             :  *
     846             :  * qual_clauses, and the result, are lists of RestrictInfos.
     847             :  * indexclauses is a list of IndexClauses.
     848             :  */
     849             : static List *
     850      929160 : extract_nonindex_conditions(List *qual_clauses, List *indexclauses)
     851             : {
     852      929160 :     List       *result = NIL;
     853             :     ListCell   *lc;
     854             : 
     855     1951374 :     foreach(lc, qual_clauses)
     856             :     {
     857     1022214 :         RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);
     858             : 
     859     1022214 :         if (rinfo->pseudoconstant)
     860        9706 :             continue;           /* we may drop pseudoconstants here */
     861     1012508 :         if (is_redundant_with_indexclauses(rinfo, indexclauses))
     862      592024 :             continue;           /* dup or derived from same EquivalenceClass */
     863             :         /* ... skip the predicate proof attempt createplan.c will try ... */
     864      420484 :         result = lappend(result, rinfo);
     865             :     }
     866      929160 :     return result;
     867             : }
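                     :
                     : /*
                     :  * Illustration (hypothetical quals): for an index on (a) with quals
                     :  * "a > 5 AND b = 10", the clause a > 5 is matched by an IndexClause and
                     :  * is dropped here as redundant, while b = 10 is returned so that its
                     :  * evaluation cost is charged as a qpqual.
                     :  */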
     868             : 
     869             : /*
     870             :  * index_pages_fetched
     871             :  *    Estimate the number of pages actually fetched after accounting for
     872             :  *    cache effects.
     873             :  *
     874             :  * We use an approximation proposed by Mackert and Lohman, "Index Scans
     875             :  * Using a Finite LRU Buffer: A Validated I/O Model", ACM Transactions
     876             :  * on Database Systems, Vol. 14, No. 3, September 1989, Pages 401-424.
     877             :  * The Mackert and Lohman approximation is that the number of pages
     878             :  * fetched is
     879             :  *  PF =
     880             :  *      min(2TNs/(2T+Ns), T)            when T <= b
     881             :  *      2TNs/(2T+Ns)                    when T > b and Ns <= 2Tb/(2T-b)
     882             :  *      b + (Ns - 2Tb/(2T-b))*(T-b)/T   when T > b and Ns > 2Tb/(2T-b)
     883             :  * where
     884             :  *      T = # pages in table
     885             :  *      N = # tuples in table
     886             :  *      s = selectivity = fraction of table to be scanned
     887             :  *      b = # buffer pages available (we include kernel space here)
     888             :  *
     889             :  * We assume that effective_cache_size is the total number of buffer pages
     890             :  * available for the whole query, and pro-rate that space across all the
     891             :  * tables in the query and the index currently under consideration.  (This
     892             :  * ignores space needed for other indexes used by the query, but since we
     893             :  * don't know which indexes will get used, we can't estimate that very well;
     894             :  * and in any case counting all the tables may well be an overestimate, since
     895             :  * depending on the join plan not all the tables may be scanned concurrently.)
     896             :  *
     897             :  * The product Ns is the number of tuples fetched; we pass in that
     898             :  * product rather than calculating it here.  "pages" is the number of pages
     899             :  * in the object under consideration (either an index or a table).
     900             :  * "index_pages" is the amount to add to the total table space, which was
     901             :  * computed for us by make_one_rel.
     902             :  *
     903             :  * Caller is expected to have ensured that tuples_fetched is greater than zero
     904             :  * and rounded to integer (see clamp_row_est).  The result will likewise be
     905             :  * greater than zero and integral.
     906             :  */
     907             : double
     908     1094750 : index_pages_fetched(double tuples_fetched, BlockNumber pages,
     909             :                     double index_pages, PlannerInfo *root)
     910             : {
     911             :     double      pages_fetched;
     912             :     double      total_pages;
     913             :     double      T,
     914             :                 b;
     915             : 
     916             :     /* T is # pages in table, but don't allow it to be zero */
     917     1094750 :     T = (pages > 1) ? (double) pages : 1.0;
     918             : 
     919             :     /* Compute number of pages assumed to be competing for cache space */
     920     1094750 :     total_pages = root->total_table_pages + index_pages;
     921     1094750 :     total_pages = Max(total_pages, 1.0);
     922             :     Assert(T <= total_pages);
     923             : 
     924             :     /* b is pro-rated share of effective_cache_size */
     925     1094750 :     b = (double) effective_cache_size * T / total_pages;
     926             : 
     927             :     /* force it positive and integral */
     928     1094750 :     if (b <= 1.0)
     929           0 :         b = 1.0;
     930             :     else
     931     1094750 :         b = ceil(b);
     932             : 
     933             :     /* This part is the Mackert and Lohman formula */
     934     1094750 :     if (T <= b)
     935             :     {
     936     1094750 :         pages_fetched =
     937     1094750 :             (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
     938     1094750 :         if (pages_fetched >= T)
     939      634444 :             pages_fetched = T;
     940             :         else
     941      460306 :             pages_fetched = ceil(pages_fetched);
     942             :     }
     943             :     else
     944             :     {
     945             :         double      lim;
     946             : 
     947           0 :         lim = (2.0 * T * b) / (2.0 * T - b);
     948           0 :         if (tuples_fetched <= lim)
     949             :         {
     950           0 :             pages_fetched =
     951           0 :                 (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
     952             :         }
     953             :         else
     954             :         {
     955           0 :             pages_fetched =
     956           0 :                 b + (tuples_fetched - lim) * (T - b) / T;
     957             :         }
     958           0 :         pages_fetched = ceil(pages_fetched);
     959             :     }
     960     1094750 :     return pages_fetched;
     961             : }
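
As a rough worked example of the formula above (the numbers are invented for illustration, not drawn from any real plan): with T = 1000 pages, Ns = 500 tuples fetched, and a cache share b >= T, the first branch applies and the estimate comes out well under a page per tuple:

/* Standalone sketch of the T <= b branch of index_pages_fetched; all
 * values are hypothetical and chosen by hand for illustration. */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double      T = 1000.0;             /* pages in table */
    double      tuples_fetched = 500.0; /* Ns, already clamped positive */
    double      pages_fetched;

    /* Mackert and Lohman: min(2TNs/(2T+Ns), T) when T <= b */
    pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
    if (pages_fetched >= T)
        pages_fetched = T;
    else
        pages_fetched = ceil(pages_fetched);

    printf("%.0f\n", pages_fetched);    /* prints 400, not 500 */
    return 0;
}
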
     962             : 
     963             : /*
     964             :  * get_indexpath_pages
     965             :  *      Determine the total size of the indexes used in a bitmap index path.
     966             :  *
     967             :  * Note: if the same index is used more than once in a bitmap tree, we will
     968             :  * count it multiple times, which perhaps is the wrong thing ... but it's
     969             :  * not completely clear, and detecting duplicates is difficult, so ignore it
     970             :  * for now.
     971             :  */
     972             : static double
     973      186226 : get_indexpath_pages(Path *bitmapqual)
     974             : {
     975      186226 :     double      result = 0;
     976             :     ListCell   *l;
     977             : 
     978      186226 :     if (IsA(bitmapqual, BitmapAndPath))
     979             :     {
     980       23300 :         BitmapAndPath *apath = (BitmapAndPath *) bitmapqual;
     981             : 
     982       69900 :         foreach(l, apath->bitmapquals)
     983             :         {
     984       46600 :             result += get_indexpath_pages((Path *) lfirst(l));
     985             :         }
     986             :     }
     987      162926 :     else if (IsA(bitmapqual, BitmapOrPath))
     988             :     {
     989          74 :         BitmapOrPath *opath = (BitmapOrPath *) bitmapqual;
     990             : 
     991         234 :         foreach(l, opath->bitmapquals)
     992             :         {
     993         160 :             result += get_indexpath_pages((Path *) lfirst(l));
     994             :         }
     995             :     }
     996      162852 :     else if (IsA(bitmapqual, IndexPath))
     997             :     {
     998      162852 :         IndexPath  *ipath = (IndexPath *) bitmapqual;
     999             : 
    1000      162852 :         result = (double) ipath->indexinfo->pages;
    1001             :     }
    1002             :     else
    1003           0 :         elog(ERROR, "unrecognized node type: %d", nodeTag(bitmapqual));
    1004             : 
    1005      186226 :     return result;
    1006             : }
    1007             : 
    1008             : /*
    1009             :  * cost_bitmap_heap_scan
    1010             :  *    Determines and returns the cost of scanning a relation using a bitmap
    1011             :  *    index-then-heap plan.
    1012             :  *
    1013             :  * 'baserel' is the relation to be scanned
    1014             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1015             :  * 'bitmapqual' is a tree of IndexPaths, BitmapAndPaths, and BitmapOrPaths
    1016             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
    1017             :  *      estimates of caching behavior
    1018             :  *
    1019             :  * Note: the component IndexPaths in bitmapqual should have been costed
    1020             :  * using the same loop_count.
    1021             :  */
    1022             : void
    1023      537172 : cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,
    1024             :                       ParamPathInfo *param_info,
    1025             :                       Path *bitmapqual, double loop_count)
    1026             : {
    1027      537172 :     Cost        startup_cost = 0;
    1028      537172 :     Cost        run_cost = 0;
    1029             :     Cost        indexTotalCost;
    1030             :     QualCost    qpqual_cost;
    1031             :     Cost        cpu_per_tuple;
    1032             :     Cost        cost_per_page;
    1033             :     Cost        cpu_run_cost;
    1034             :     double      tuples_fetched;
    1035             :     double      pages_fetched;
    1036             :     double      spc_seq_page_cost,
    1037             :                 spc_random_page_cost;
    1038             :     double      T;
    1039             : 
    1040             :     /* Should only be applied to base relations */
    1041             :     Assert(IsA(baserel, RelOptInfo));
    1042             :     Assert(baserel->relid > 0);
    1043             :     Assert(baserel->rtekind == RTE_RELATION);
    1044             : 
    1045             :     /* Mark the path with the correct row estimate */
    1046      537172 :     if (param_info)
    1047      224424 :         path->rows = param_info->ppi_rows;
    1048             :     else
    1049      312748 :         path->rows = baserel->rows;
    1050             : 
    1051      537172 :     pages_fetched = compute_bitmap_pages(root, baserel, bitmapqual,
    1052             :                                          loop_count, &indexTotalCost,
    1053             :                                          &tuples_fetched);
    1054             : 
    1055      537172 :     startup_cost += indexTotalCost;
    1056      537172 :     T = (baserel->pages > 1) ? (double) baserel->pages : 1.0;
    1057             : 
    1058             :     /* Fetch estimated page costs for tablespace containing table. */
    1059      537172 :     get_tablespace_page_costs(baserel->reltablespace,
    1060             :                               &spc_random_page_cost,
    1061             :                               &spc_seq_page_cost);
    1062             : 
    1063             :     /*
    1064             :      * For small numbers of pages we should charge spc_random_page_cost
    1065             :      * apiece, while if nearly all the table's pages are being read, it's more
    1066             :      * appropriate to charge spc_seq_page_cost apiece.  The effect is
    1067             :      * nonlinear, too. For lack of a better idea, interpolate like this to
    1068             :      * determine the cost per page.
    1069             :      */
    1070      537172 :     if (pages_fetched >= 2.0)
    1071      108850 :         cost_per_page = spc_random_page_cost -
    1072      108850 :             (spc_random_page_cost - spc_seq_page_cost)
    1073      108850 :             * sqrt(pages_fetched / T);
    1074             :     else
    1075      428322 :         cost_per_page = spc_random_page_cost;
    1076             : 
    1077      537172 :     run_cost += pages_fetched * cost_per_page;
    1078             : 
    1079             :     /*
    1080             :      * Estimate CPU costs per tuple.
    1081             :      *
    1082             :      * Often the indexquals don't need to be rechecked at each tuple ... but
    1083             :      * not always, especially not if there are enough tuples involved that the
    1084             :      * bitmaps become lossy.  For the moment, just assume they will be
    1085             :      * rechecked always.  This means we charge the full freight for all the
    1086             :      * scan clauses.
    1087             :      */
    1088      537172 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1089             : 
    1090      537172 :     startup_cost += qpqual_cost.startup;
    1091      537172 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1092      537172 :     cpu_run_cost = cpu_per_tuple * tuples_fetched;
    1093             : 
    1094             :     /* Adjust costing for parallelism, if used. */
    1095      537172 :     if (path->parallel_workers > 0)
    1096             :     {
    1097        4182 :         double      parallel_divisor = get_parallel_divisor(path);
    1098             : 
    1099             :         /* The CPU cost is divided among all the workers. */
    1100        4182 :         cpu_run_cost /= parallel_divisor;
    1101             : 
    1102        4182 :         path->rows = clamp_row_est(path->rows / parallel_divisor);
    1103             :     }
    1104             : 
    1105             : 
    1106      537172 :     run_cost += cpu_run_cost;
    1107             : 
    1108             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1109      537172 :     startup_cost += path->pathtarget->cost.startup;
    1110      537172 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1111             : 
    1112      537172 :     path->disabled_nodes = enable_bitmapscan ? 0 : 1;
    1113      537172 :     path->startup_cost = startup_cost;
    1114      537172 :     path->total_cost = startup_cost + run_cost;
    1115      537172 : }
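
To see what the sqrt interpolation above does in practice, here is a minimal sketch assuming the stock GUC defaults (random_page_cost = 4.0, seq_page_cost = 1.0 — assumptions, not values read from a server): fetching a quarter of a 10000-page table lands at 2.5 cost units per page, halfway between the two extremes on the sqrt scale.

/* Sketch of the cost-per-page interpolation in cost_bitmap_heap_scan;
 * the page costs and table size below are assumed example values. */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double      spc_random_page_cost = 4.0; /* assumed default */
    double      spc_seq_page_cost = 1.0;    /* assumed default */
    double      T = 10000.0;                /* pages in table */
    double      pages_fetched = 2500.0;     /* a quarter of the table */
    double      cost_per_page;

    /* same formula applied above when pages_fetched >= 2 */
    cost_per_page = spc_random_page_cost -
        (spc_random_page_cost - spc_seq_page_cost) *
        sqrt(pages_fetched / T);

    printf("%.2f\n", cost_per_page);    /* prints 2.50 */
    return 0;
}
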
    1116             : 
    1117             : /*
    1118             :  * cost_bitmap_tree_node
    1119             :  *      Extract cost and selectivity from a bitmap tree node (index/and/or)
    1120             :  */
    1121             : void
    1122     1012496 : cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)
    1123             : {
    1124     1012496 :     if (IsA(path, IndexPath))
    1125             :     {
    1126      957236 :         *cost = ((IndexPath *) path)->indextotalcost;
    1127      957236 :         *selec = ((IndexPath *) path)->indexselectivity;
    1128             : 
    1129             :         /*
    1130             :          * Charge a small amount per retrieved tuple to reflect the costs of
    1131             :          * manipulating the bitmap.  This is mostly to make sure that a bitmap
    1132             :          * scan doesn't look to be the same cost as an indexscan to retrieve a
    1133             :          * single tuple.
    1134             :          */
    1135      957236 :         *cost += 0.1 * cpu_operator_cost * path->rows;
    1136             :     }
    1137       55260 :     else if (IsA(path, BitmapAndPath))
    1138             :     {
    1139       52024 :         *cost = path->total_cost;
    1140       52024 :         *selec = ((BitmapAndPath *) path)->bitmapselectivity;
    1141             :     }
    1142        3236 :     else if (IsA(path, BitmapOrPath))
    1143             :     {
    1144        3236 :         *cost = path->total_cost;
    1145        3236 :         *selec = ((BitmapOrPath *) path)->bitmapselectivity;
    1146             :     }
    1147             :     else
    1148             :     {
    1149           0 :         elog(ERROR, "unrecognized node type: %d", nodeTag(path));
    1150             :         *cost = *selec = 0;     /* keep compiler quiet */
    1151             :     }
    1152     1012496 : }
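
For a sense of scale, the 0.1 * cpu_operator_cost surcharge above is tiny per row but enough to break ties against a plain indexscan; assuming the default cpu_operator_cost of 0.0025 (an assumption, not read from a live server):

/* Magnitude of the per-row bitmap-manipulation surcharge; the GUC
 * default below is an assumed value. */
#include <stdio.h>

int
main(void)
{
    double      cpu_operator_cost = 0.0025; /* assumed default */
    double      rows = 10000.0;

    /* same surcharge cost_bitmap_tree_node adds for an IndexPath */
    printf("%.2f\n", 0.1 * cpu_operator_cost * rows);   /* prints 2.50 */
    return 0;
}
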
    1153             : 
    1154             : /*
    1155             :  * cost_bitmap_and_node
    1156             :  *      Estimate the cost of a BitmapAnd node
    1157             :  *
    1158             :  * Note that this considers only the costs of index scanning and bitmap
    1159             :  * creation, not the eventual heap access.  In that sense the object isn't
    1160             :  * truly a Path, but it has enough path-like properties (costs in particular)
    1161             :  * to warrant treating it as one.  We don't bother to set the path rows field,
    1162             :  * however.
    1163             :  */
    1164             : void
    1165       51834 : cost_bitmap_and_node(BitmapAndPath *path, PlannerInfo *root)
    1166             : {
    1167             :     Cost        totalCost;
    1168             :     Selectivity selec;
    1169             :     ListCell   *l;
    1170             : 
    1171             :     /*
    1172             :      * We estimate AND selectivity on the assumption that the inputs are
    1173             :      * independent.  This is probably often wrong, but we don't have the info
    1174             :      * to do better.
    1175             :      *
    1176             :      * The runtime cost of the BitmapAnd itself is estimated at 100x
    1177             :      * cpu_operator_cost for each tbm_intersect needed.  Probably too small,
    1178             :      * definitely too simplistic?
    1179             :      */
    1180       51834 :     totalCost = 0.0;
    1181       51834 :     selec = 1.0;
    1182      155502 :     foreach(l, path->bitmapquals)
    1183             :     {
    1184      103668 :         Path       *subpath = (Path *) lfirst(l);
    1185             :         Cost        subCost;
    1186             :         Selectivity subselec;
    1187             : 
    1188      103668 :         cost_bitmap_tree_node(subpath, &subCost, &subselec);
    1189             : 
    1190      103668 :         selec *= subselec;
    1191             : 
    1192      103668 :         totalCost += subCost;
    1193      103668 :         if (l != list_head(path->bitmapquals))
    1194       51834 :             totalCost += 100.0 * cpu_operator_cost;
    1195             :     }
    1196       51834 :     path->bitmapselectivity = selec;
    1197       51834 :     path->path.rows = 0;     /* per above, not used */
    1198       51834 :     path->path.disabled_nodes = 0;
    1199       51834 :     path->path.startup_cost = totalCost;
    1200       51834 :     path->path.total_cost = totalCost;
    1201       51834 : }
    1202             : 
    1203             : /*
    1204             :  * cost_bitmap_or_node
    1205             :  *      Estimate the cost of a BitmapOr node
    1206             :  *
    1207             :  * See comments for cost_bitmap_and_node.
    1208             :  */
    1209             : void
    1210         976 : cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)
    1211             : {
    1212             :     Cost        totalCost;
    1213             :     Selectivity selec;
    1214             :     ListCell   *l;
    1215             : 
    1216             :     /*
    1217             :      * We estimate OR selectivity on the assumption that the inputs are
    1218             :      * non-overlapping, since that's often the case in "x IN (list)" type
    1219             :      * situations.  Of course, we clamp to 1.0 at the end.
    1220             :      *
    1221             :      * The runtime cost of the BitmapOr itself is estimated at 100x
    1222             :      * cpu_operator_cost for each tbm_union needed.  Probably too small,
    1223             :      * definitely too simplistic?  We are aware that the tbm_unions are
    1224             :      * optimized out when the inputs are BitmapIndexScans.
    1225             :      */
    1226         976 :     totalCost = 0.0;
    1227         976 :     selec = 0.0;
    1228        2736 :     foreach(l, path->bitmapquals)
    1229             :     {
    1230        1760 :         Path       *subpath = (Path *) lfirst(l);
    1231             :         Cost        subCost;
    1232             :         Selectivity subselec;
    1233             : 
    1234        1760 :         cost_bitmap_tree_node(subpath, &subCost, &subselec);
    1235             : 
    1236        1760 :         selec += subselec;
    1237             : 
    1238        1760 :         totalCost += subCost;
    1239        1760 :         if (l != list_head(path->bitmapquals) &&
    1240         784 :             !IsA(subpath, IndexPath))
    1241           6 :             totalCost += 100.0 * cpu_operator_cost;
    1242             :     }
    1243         976 :     path->bitmapselectivity = Min(selec, 1.0);
    1244         976 :     path->path.rows = 0;     /* per above, not used */
    1245         976 :     path->path.startup_cost = totalCost;
    1246         976 :     path->path.total_cost = totalCost;
    1247         976 : }
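
The two selectivity rules above compose in opposite directions: BitmapAnd multiplies (independence), BitmapOr adds and clamps (non-overlap). A minimal sketch with two hypothetical input selectivities:

/* Sketch of the AND/OR selectivity composition used above; the input
 * selectivities are invented example values. */
#include <stdio.h>

#define Min(x, y)   ((x) < (y) ? (x) : (y))

int
main(void)
{
    double      s1 = 0.01;      /* hypothetical input selectivities */
    double      s2 = 0.05;

    double      and_selec = s1 * s2;            /* independence => 0.0005 */
    double      or_selec = Min(s1 + s2, 1.0);   /* non-overlap => 0.06 */

    printf("AND %.4f OR %.2f\n", and_selec, or_selec);
    return 0;
}
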
    1248             : 
    1249             : /*
    1250             :  * cost_tidscan
    1251             :  *    Determines and returns the cost of scanning a relation using TIDs.
    1252             :  *
    1253             :  * 'baserel' is the relation to be scanned
    1254             :  * 'tidquals' is the list of TID-checkable quals
    1255             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1256             :  */
    1257             : void
    1258         852 : cost_tidscan(Path *path, PlannerInfo *root,
    1259             :              RelOptInfo *baserel, List *tidquals, ParamPathInfo *param_info)
    1260             : {
    1261         852 :     Cost        startup_cost = 0;
    1262         852 :     Cost        run_cost = 0;
    1263             :     QualCost    qpqual_cost;
    1264             :     Cost        cpu_per_tuple;
    1265             :     QualCost    tid_qual_cost;
    1266             :     double      ntuples;
    1267             :     ListCell   *l;
    1268             :     double      spc_random_page_cost;
    1269             : 
    1270             :     /* Should only be applied to base relations */
    1271             :     Assert(baserel->relid > 0);
    1272             :     Assert(baserel->rtekind == RTE_RELATION);
    1273             :     Assert(tidquals != NIL);
    1274             : 
    1275             :     /* Mark the path with the correct row estimate */
    1276         852 :     if (param_info)
    1277         144 :         path->rows = param_info->ppi_rows;
    1278             :     else
    1279         708 :         path->rows = baserel->rows;
    1280             : 
    1281             :     /* Count how many tuples we expect to retrieve */
    1282         852 :     ntuples = 0;
    1283        1728 :     foreach(l, tidquals)
    1284             :     {
    1285         876 :         RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    1286         876 :         Expr       *qual = rinfo->clause;
    1287             : 
    1288             :         /*
    1289             :          * We must use a TID scan for CurrentOfExpr; in any other case, we
    1290             :          * should be generating a TID scan only if enable_tidscan=true. Also,
    1291             :          * if CurrentOfExpr is the qual, there should be only one.
    1292             :          */
    1293             :         Assert(enable_tidscan || IsA(qual, CurrentOfExpr));
    1294             :         Assert(list_length(tidquals) == 1 || !IsA(qual, CurrentOfExpr));
    1295             : 
    1296         876 :         if (IsA(qual, ScalarArrayOpExpr))
    1297             :         {
    1298             :             /* Each element of the array yields 1 tuple */
    1299          50 :             ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) qual;
    1300          50 :             Node       *arraynode = (Node *) lsecond(saop->args);
    1301             : 
    1302          50 :             ntuples += estimate_array_length(root, arraynode);
    1303             :         }
    1304         826 :         else if (IsA(qual, CurrentOfExpr))
    1305             :         {
    1306             :             /* CURRENT OF yields 1 tuple */
    1307         404 :             ntuples++;
    1308             :         }
    1309             :         else
    1310             :         {
    1311             :             /* It's just CTID = something, count 1 tuple */
    1312         422 :             ntuples++;
    1313             :         }
    1314             :     }
    1315             : 
    1316             :     /*
    1317             :      * The TID qual expressions will be computed once, any other baserestrict
    1318             :      * quals once per retrieved tuple.
    1319             :      */
    1320         852 :     cost_qual_eval(&tid_qual_cost, tidquals, root);
    1321             : 
    1322             :     /* fetch estimated page cost for tablespace containing table */
    1323         852 :     get_tablespace_page_costs(baserel->reltablespace,
    1324             :                               &spc_random_page_cost,
    1325             :                               NULL);
    1326             : 
    1327             :     /* disk costs --- assume each tuple on a different page */
    1328         852 :     run_cost += spc_random_page_cost * ntuples;
    1329             : 
    1330             :     /* Add scanning CPU costs */
    1331         852 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1332             : 
    1333             :     /* XXX currently we assume TID quals are a subset of qpquals */
    1334         852 :     startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    1335         852 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple -
    1336         852 :         tid_qual_cost.per_tuple;
    1337         852 :     run_cost += cpu_per_tuple * ntuples;
    1338             : 
    1339             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1340         852 :     startup_cost += path->pathtarget->cost.startup;
    1341         852 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1342             : 
    1343             :     /*
    1344             :      * There are assertions above verifying that we only reach this function
    1345             :      * either when enable_tidscan=true or when the TID scan is the only legal
    1346             :      * path, so it's safe to set disabled_nodes to zero here.
    1347             :      */
    1348         852 :     path->disabled_nodes = 0;
    1349         852 :     path->startup_cost = startup_cost;
    1350         852 :     path->total_cost = startup_cost + run_cost;
    1351         852 : }
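
The per-qual tuple counting above is easy to trace by hand. For a hypothetical mix of one ctid = ANY(...) qual over a 3-element array, one CURRENT OF, and one plain ctid equality, the scan expects 5 tuples, and the disk charge is simply spc_random_page_cost per expected tuple:

/* Hand-traced version of the ntuples loop in cost_tidscan; the qual mix
 * and the page cost are assumed example values. */
#include <stdio.h>

int
main(void)
{
    double      spc_random_page_cost = 4.0; /* assumed default */
    double      ntuples = 0;

    ntuples += 3;               /* ScalarArrayOpExpr: 3-element array */
    ntuples += 1;               /* CurrentOfExpr: CURRENT OF yields 1 */
    ntuples += 1;               /* plain CTID = something */

    /* disk costs --- assume each tuple on a different page */
    printf("%.1f\n", spc_random_page_cost * ntuples);   /* prints 20.0 */
    return 0;
}
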
    1352             : 
    1353             : /*
    1354             :  * cost_tidrangescan
    1355             :  *    Determines and sets the costs of scanning a relation using a range of
    1356             :  *    TIDs for 'path'
    1357             :  *
    1358             :  * 'baserel' is the relation to be scanned
    1359             :  * 'tidrangequals' is the list of TID-checkable range quals
    1360             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1361             :  */
    1362             : void
    1363        1940 : cost_tidrangescan(Path *path, PlannerInfo *root,
    1364             :                   RelOptInfo *baserel, List *tidrangequals,
    1365             :                   ParamPathInfo *param_info)
    1366             : {
    1367             :     Selectivity selectivity;
    1368             :     double      pages;
    1369        1940 :     Cost        startup_cost = 0;
    1370        1940 :     Cost        run_cost = 0;
    1371             :     QualCost    qpqual_cost;
    1372             :     Cost        cpu_per_tuple;
    1373             :     QualCost    tid_qual_cost;
    1374             :     double      ntuples;
    1375             :     double      nseqpages;
    1376             :     double      spc_random_page_cost;
    1377             :     double      spc_seq_page_cost;
    1378             : 
    1379             :     /* Should only be applied to base relations */
    1380             :     Assert(baserel->relid > 0);
    1381             :     Assert(baserel->rtekind == RTE_RELATION);
    1382             : 
    1383             :     /* Mark the path with the correct row estimate */
    1384        1940 :     if (param_info)
    1385           0 :         path->rows = param_info->ppi_rows;
    1386             :     else
    1387        1940 :         path->rows = baserel->rows;
    1388             : 
    1389             :     /* Count how many tuples and pages we expect to scan */
    1390        1940 :     selectivity = clauselist_selectivity(root, tidrangequals, baserel->relid,
    1391             :                                          JOIN_INNER, NULL);
    1392        1940 :     pages = ceil(selectivity * baserel->pages);
    1393             : 
    1394        1940 :     if (pages <= 0.0)
    1395          42 :         pages = 1.0;
    1396             : 
    1397             :     /*
    1398             :      * The first page in a range requires a random seek, but each subsequent
    1399             :      * page is just a normal sequential page read. NOTE: it's desirable for
    1400             :      * TID Range Scans to cost more than the equivalent Sequential Scans,
    1401             :      * because Seq Scans have some performance advantages such as scan
    1402             :      * synchronization and parallelizability, and we'd prefer one of them to
    1403             :      * be picked unless a TID Range Scan really is better.
    1404             :      */
    1405        1940 :     ntuples = selectivity * baserel->tuples;
    1406        1940 :     nseqpages = pages - 1.0;
    1407             : 
    1408             :     /*
    1409             :      * The TID qual expressions will be computed once, any other baserestrict
    1410             :      * quals once per retrieved tuple.
    1411             :      */
    1412        1940 :     cost_qual_eval(&tid_qual_cost, tidrangequals, root);
    1413             : 
    1414             :     /* fetch estimated page cost for tablespace containing table */
    1415        1940 :     get_tablespace_page_costs(baserel->reltablespace,
    1416             :                               &spc_random_page_cost,
    1417             :                               &spc_seq_page_cost);
    1418             : 
    1419             :     /* disk costs; 1 random page and the remainder as seq pages */
    1420        1940 :     run_cost += spc_random_page_cost + spc_seq_page_cost * nseqpages;
    1421             : 
    1422             :     /* Add scanning CPU costs */
    1423        1940 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1424             : 
    1425             :     /*
    1426             :      * XXX currently we assume TID quals are a subset of qpquals at this
    1427             :      * point; they will be removed (if possible) when we create the plan, so
    1428             :      * we subtract their cost from the total qpqual cost.  (If the TID quals
    1429             :      * can't be removed, this is a mistake and we're going to underestimate
    1430             :      * the CPU cost a bit.)
    1431             :      */
    1432        1940 :     startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    1433        1940 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple -
    1434        1940 :         tid_qual_cost.per_tuple;
    1435        1940 :     run_cost += cpu_per_tuple * ntuples;
    1436             : 
    1437             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1438        1940 :     startup_cost += path->pathtarget->cost.startup;
    1439        1940 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1440             : 
    1441             :     /* we should not generate this path type when enable_tidscan=false */
    1442             :     Assert(enable_tidscan);
    1443        1940 :     path->disabled_nodes = 0;
    1444        1940 :     path->startup_cost = startup_cost;
    1445        1940 :     path->total_cost = startup_cost + run_cost;
    1446        1940 : }
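
The disk charge above is what keeps a TID Range Scan slightly above the equivalent Seq Scan: one random page plus sequential reads for the rest. A sketch with assumed default page costs and a 100-page range:

/* Sketch of the tidrangescan disk charge; page costs and page count are
 * assumed example values. */
#include <stdio.h>

int
main(void)
{
    double      spc_random_page_cost = 4.0; /* assumed default */
    double      spc_seq_page_cost = 1.0;    /* assumed default */
    double      pages = 100.0;              /* pages in the TID range */
    double      nseqpages = pages - 1.0;

    /* 1 random page and the remainder as seq pages */
    printf("%.1f\n", spc_random_page_cost + spc_seq_page_cost * nseqpages);
    /* prints 103.0; seq-scanning the same pages would charge 100.0 */
    return 0;
}
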
    1447             : 
    1448             : /*
    1449             :  * cost_subqueryscan
    1450             :  *    Determines and returns the cost of scanning a subquery RTE.
    1451             :  *
    1452             :  * 'baserel' is the relation to be scanned
    1453             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1454             :  * 'trivial_pathtarget' is true if the pathtarget is believed to be trivial.
    1455             :  */
    1456             : void
    1457       48856 : cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root,
    1458             :                   RelOptInfo *baserel, ParamPathInfo *param_info,
    1459             :                   bool trivial_pathtarget)
    1460             : {
    1461             :     Cost        startup_cost;
    1462             :     Cost        run_cost;
    1463             :     List       *qpquals;
    1464             :     QualCost    qpqual_cost;
    1465             :     Cost        cpu_per_tuple;
    1466             : 
    1467             :     /* Should only be applied to base relations that are subqueries */
    1468             :     Assert(baserel->relid > 0);
    1469             :     Assert(baserel->rtekind == RTE_SUBQUERY);
    1470             : 
    1471             :     /*
    1472             :      * We compute the rowcount estimate as the subplan's estimate times the
    1473             :      * selectivity of relevant restriction clauses.  In simple cases this will
    1474             :      * come out the same as baserel->rows; but when dealing with parallelized
    1475             :      * paths we must do it like this to get the right answer.
    1476             :      */
    1477       48856 :     if (param_info)
    1478         552 :         qpquals = list_concat_copy(param_info->ppi_clauses,
    1479         552 :                                    baserel->baserestrictinfo);
    1480             :     else
    1481       48304 :         qpquals = baserel->baserestrictinfo;
    1482             : 
    1483       48856 :     path->path.rows = clamp_row_est(path->subpath->rows *
    1484       48856 :                                     clauselist_selectivity(root,
    1485             :                                                            qpquals,
    1486             :                                                            0,
    1487             :                                                            JOIN_INNER,
    1488             :                                                            NULL));
    1489             : 
    1490             :     /*
    1491             :      * Cost of path is cost of evaluating the subplan, plus cost of evaluating
    1492             :      * any restriction clauses and tlist that will be attached to the
    1493             :      * SubqueryScan node, plus cpu_tuple_cost to account for selection and
    1494             :      * projection overhead.
    1495             :      */
    1496       48856 :     path->path.disabled_nodes = path->subpath->disabled_nodes;
    1497       48856 :     path->path.startup_cost = path->subpath->startup_cost;
    1498       48856 :     path->path.total_cost = path->subpath->total_cost;
    1499             : 
    1500             :     /*
    1501             :      * However, if there are no relevant restriction clauses and the
    1502             :      * pathtarget is trivial, then we expect that setrefs.c will optimize away
    1503             :      * the SubqueryScan plan node altogether, so we should just make its cost
    1504             :      * and rowcount equal to the input path's.
    1505             :      *
    1506             :      * Note: there are some edge cases where createplan.c will apply a
    1507             :      * different targetlist to the SubqueryScan node, thus falsifying our
    1508             :      * current estimate of whether the target is trivial, and making the cost
    1509             :      * estimate (though not the rowcount) wrong.  It does not seem worth the
    1510             :      * extra complication to try to account for that exactly, especially since
    1511             :      * that behavior falsifies other cost estimates as well.
    1512             :      */
    1513       48856 :     if (qpquals == NIL && trivial_pathtarget)
    1514       24874 :         return;
    1515             : 
    1516       23982 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1517             : 
    1518       23982 :     startup_cost = qpqual_cost.startup;
    1519       23982 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1520       23982 :     run_cost = cpu_per_tuple * path->subpath->rows;
    1521             : 
    1522             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1523       23982 :     startup_cost += path->path.pathtarget->cost.startup;
    1524       23982 :     run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
    1525             : 
    1526       23982 :     path->path.startup_cost += startup_cost;
    1527       23982 :     path->path.total_cost += startup_cost + run_cost;
    1528             : }
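
The rowcount rule above (subplan rows times restriction selectivity, then clamped) matters mainly for parallel subpaths, whose row estimates are already divided among workers. A toy version of the arithmetic, with invented numbers and a loose stand-in for clamp_row_est:

/* Toy version of the subqueryscan rowcount computation; the subpath row
 * count and selectivity are assumed example values. */
#include <math.h>
#include <stdio.h>

static double
clamp_row_est_sketch(double nrows)
{
    /* loosely mimics clamp_row_est: at least 1, rounded to integer */
    if (nrows <= 1.0)
        return 1.0;
    return rint(nrows);
}

int
main(void)
{
    double      subpath_rows = 2500.0;  /* per-worker estimate, say */
    double      selectivity = 0.1;      /* of the attached quals */

    printf("%.0f\n", clamp_row_est_sketch(subpath_rows * selectivity));
    /* prints 250 */
    return 0;
}
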
    1529             : 
    1530             : /*
    1531             :  * cost_functionscan
    1532             :  *    Determines and returns the cost of scanning a function RTE.
    1533             :  *
    1534             :  * 'baserel' is the relation to be scanned
    1535             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1536             :  */
    1537             : void
    1538       55114 : cost_functionscan(Path *path, PlannerInfo *root,
    1539             :                   RelOptInfo *baserel, ParamPathInfo *param_info)
    1540             : {
    1541       55114 :     Cost        startup_cost = 0;
    1542       55114 :     Cost        run_cost = 0;
    1543             :     QualCost    qpqual_cost;
    1544             :     Cost        cpu_per_tuple;
    1545             :     RangeTblEntry *rte;
    1546             :     QualCost    exprcost;
    1547             : 
    1548             :     /* Should only be applied to base relations that are functions */
    1549             :     Assert(baserel->relid > 0);
    1550       55114 :     rte = planner_rt_fetch(baserel->relid, root);
    1551             :     Assert(rte->rtekind == RTE_FUNCTION);
    1552             : 
    1553             :     /* Mark the path with the correct row estimate */
    1554       55114 :     if (param_info)
    1555        8342 :         path->rows = param_info->ppi_rows;
    1556             :     else
    1557       46772 :         path->rows = baserel->rows;
    1558             : 
    1559             :     /*
    1560             :      * Estimate costs of executing the function expression(s).
    1561             :      *
    1562             :      * Currently, nodeFunctionscan.c always executes the functions to
    1563             :      * completion before returning any rows, and caches the results in a
    1564             :      * tuplestore.  So the function eval cost is all startup cost, and per-row
    1565             :      * costs are minimal.
    1566             :      *
    1567             :      * XXX in principle we ought to charge tuplestore spill costs if the
    1568             :      * number of rows is large.  However, given how phony our rowcount
    1569             :      * estimates for functions tend to be, there's not a lot of point in that
    1570             :      * refinement right now.
    1571             :      */
    1572       55114 :     cost_qual_eval_node(&exprcost, (Node *) rte->functions, root);
    1573             : 
    1574       55114 :     startup_cost += exprcost.startup + exprcost.per_tuple;
    1575             : 
    1576             :     /* Add scanning CPU costs */
    1577       55114 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1578             : 
    1579       55114 :     startup_cost += qpqual_cost.startup;
    1580       55114 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1581       55114 :     run_cost += cpu_per_tuple * baserel->tuples;
    1582             : 
    1583             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1584       55114 :     startup_cost += path->pathtarget->cost.startup;
    1585       55114 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1586             : 
    1587       55114 :     path->disabled_nodes = 0;
    1588       55114 :     path->startup_cost = startup_cost;
    1589       55114 :     path->total_cost = startup_cost + run_cost;
    1590       55114 : }
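
The CPU charging pattern in this function recurs nearly verbatim in the scan-cost functions below (tablefunc, values, CTE, named tuplestore, result): qual startup cost is paid once, then cpu_tuple_cost plus per-tuple qual cost for every input tuple, with tlist costs paid per output row. A condensed sketch in which every input is an assumed example value:

/* Condensed form of the common scan CPU costing pattern; all inputs
 * below are invented for illustration. */
#include <stdio.h>

int
main(void)
{
    double      cpu_tuple_cost = 0.01;  /* assumed default */
    double      qual_startup = 0.0;     /* QualCost.startup */
    double      qual_per_tuple = 0.0025;    /* QualCost.per_tuple */
    double      tlist_per_row = 0.0025; /* pathtarget cost per row */
    double      tuples = 1000.0;        /* tuples scanned */
    double      rows = 100.0;           /* rows output after quals */

    double      startup_cost = qual_startup;
    double      run_cost = (cpu_tuple_cost + qual_per_tuple) * tuples;

    run_cost += tlist_per_row * rows;   /* per output row, not per tuple */

    printf("startup %.2f run %.2f\n", startup_cost, run_cost);
    /* prints startup 0.00 run 12.75 */
    return 0;
}
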
    1591             : 
    1592             : /*
    1593             :  * cost_tablefuncscan
    1594             :  *    Determines and returns the cost of scanning a table function.
    1595             :  *
    1596             :  * 'baserel' is the relation to be scanned
    1597             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1598             :  */
    1599             : void
    1600         626 : cost_tablefuncscan(Path *path, PlannerInfo *root,
    1601             :                    RelOptInfo *baserel, ParamPathInfo *param_info)
    1602             : {
    1603         626 :     Cost        startup_cost = 0;
    1604         626 :     Cost        run_cost = 0;
    1605             :     QualCost    qpqual_cost;
    1606             :     Cost        cpu_per_tuple;
    1607             :     RangeTblEntry *rte;
    1608             :     QualCost    exprcost;
    1609             : 
    1610             :     /* Should only be applied to base relations that are functions */
    1611             :     Assert(baserel->relid > 0);
    1612         626 :     rte = planner_rt_fetch(baserel->relid, root);
    1613             :     Assert(rte->rtekind == RTE_TABLEFUNC);
    1614             : 
    1615             :     /* Mark the path with the correct row estimate */
    1616         626 :     if (param_info)
    1617         234 :         path->rows = param_info->ppi_rows;
    1618             :     else
    1619         392 :         path->rows = baserel->rows;
    1620             : 
    1621             :     /*
    1622             :      * Estimate costs of executing the table func expression(s).
    1623             :      *
    1624             :      * XXX in principle we ought to charge tuplestore spill costs if the
    1625             :      * number of rows is large.  However, given how phony our rowcount
    1626             :      * estimates for tablefuncs tend to be, there's not a lot of point in that
    1627             :      * refinement right now.
    1628             :      */
    1629         626 :     cost_qual_eval_node(&exprcost, (Node *) rte->tablefunc, root);
    1630             : 
    1631         626 :     startup_cost += exprcost.startup + exprcost.per_tuple;
    1632             : 
    1633             :     /* Add scanning CPU costs */
    1634         626 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1635             : 
    1636         626 :     startup_cost += qpqual_cost.startup;
    1637         626 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1638         626 :     run_cost += cpu_per_tuple * baserel->tuples;
    1639             : 
    1640             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1641         626 :     startup_cost += path->pathtarget->cost.startup;
    1642         626 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1643             : 
    1644         626 :     path->disabled_nodes = 0;
    1645         626 :     path->startup_cost = startup_cost;
    1646         626 :     path->total_cost = startup_cost + run_cost;
    1647         626 : }
    1648             : 
    1649             : /*
    1650             :  * cost_valuesscan
    1651             :  *    Determines and returns the cost of scanning a VALUES RTE.
    1652             :  *
    1653             :  * 'baserel' is the relation to be scanned
    1654             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1655             :  */
    1656             : void
    1657        8232 : cost_valuesscan(Path *path, PlannerInfo *root,
    1658             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
    1659             : {
    1660        8232 :     Cost        startup_cost = 0;
    1661        8232 :     Cost        run_cost = 0;
    1662             :     QualCost    qpqual_cost;
    1663             :     Cost        cpu_per_tuple;
    1664             : 
    1665             :     /* Should only be applied to base relations that are values lists */
    1666             :     Assert(baserel->relid > 0);
    1667             :     Assert(baserel->rtekind == RTE_VALUES);
    1668             : 
    1669             :     /* Mark the path with the correct row estimate */
    1670        8232 :     if (param_info)
    1671          66 :         path->rows = param_info->ppi_rows;
    1672             :     else
    1673        8166 :         path->rows = baserel->rows;
    1674             : 
    1675             :     /*
    1676             :      * For now, estimate list evaluation cost at one operator eval per list
    1677             :      * (probably pretty bogus, but is it worth being smarter?)
    1678             :      */
    1679        8232 :     cpu_per_tuple = cpu_operator_cost;
    1680             : 
    1681             :     /* Add scanning CPU costs */
    1682        8232 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1683             : 
    1684        8232 :     startup_cost += qpqual_cost.startup;
    1685        8232 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1686        8232 :     run_cost += cpu_per_tuple * baserel->tuples;
    1687             : 
    1688             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1689        8232 :     startup_cost += path->pathtarget->cost.startup;
    1690        8232 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1691             : 
    1692        8232 :     path->disabled_nodes = 0;
    1693        8232 :     path->startup_cost = startup_cost;
    1694        8232 :     path->total_cost = startup_cost + run_cost;
    1695        8232 : }
    1696             : 
    1697             : /*
    1698             :  * cost_ctescan
    1699             :  *    Determines and returns the cost of scanning a CTE RTE.
    1700             :  *
    1701             :  * Note: this is used for both self-reference and regular CTEs; the
    1702             :  * possible cost differences are below the threshold of what we could
    1703             :  * estimate accurately anyway.  Note that the costs of evaluating the
    1704             :  * referenced CTE query are added into the final plan as initplan costs,
    1705             :  * and should NOT be counted here.
    1706             :  */
    1707             : void
    1708        5100 : cost_ctescan(Path *path, PlannerInfo *root,
    1709             :              RelOptInfo *baserel, ParamPathInfo *param_info)
    1710             : {
    1711        5100 :     Cost        startup_cost = 0;
    1712        5100 :     Cost        run_cost = 0;
    1713             :     QualCost    qpqual_cost;
    1714             :     Cost        cpu_per_tuple;
    1715             : 
    1716             :     /* Should only be applied to base relations that are CTEs */
    1717             :     Assert(baserel->relid > 0);
    1718             :     Assert(baserel->rtekind == RTE_CTE);
    1719             : 
    1720             :     /* Mark the path with the correct row estimate */
    1721        5100 :     if (param_info)
    1722           0 :         path->rows = param_info->ppi_rows;
    1723             :     else
    1724        5100 :         path->rows = baserel->rows;
    1725             : 
    1726             :     /* Charge one CPU tuple cost per row for tuplestore manipulation */
    1727        5100 :     cpu_per_tuple = cpu_tuple_cost;
    1728             : 
    1729             :     /* Add scanning CPU costs */
    1730        5100 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1731             : 
    1732        5100 :     startup_cost += qpqual_cost.startup;
    1733        5100 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1734        5100 :     run_cost += cpu_per_tuple * baserel->tuples;
    1735             : 
    1736             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1737        5100 :     startup_cost += path->pathtarget->cost.startup;
    1738        5100 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1739             : 
    1740        5100 :     path->disabled_nodes = 0;
    1741        5100 :     path->startup_cost = startup_cost;
    1742        5100 :     path->total_cost = startup_cost + run_cost;
    1743        5100 : }
    1744             : 
    1745             : /*
    1746             :  * cost_namedtuplestorescan
    1747             :  *    Determines and returns the cost of scanning a named tuplestore.
    1748             :  */
    1749             : void
    1750         462 : cost_namedtuplestorescan(Path *path, PlannerInfo *root,
    1751             :                          RelOptInfo *baserel, ParamPathInfo *param_info)
    1752             : {
    1753         462 :     Cost        startup_cost = 0;
    1754         462 :     Cost        run_cost = 0;
    1755             :     QualCost    qpqual_cost;
    1756             :     Cost        cpu_per_tuple;
    1757             : 
    1758             :     /* Should only be applied to base relations that are Tuplestores */
    1759             :     Assert(baserel->relid > 0);
    1760             :     Assert(baserel->rtekind == RTE_NAMEDTUPLESTORE);
    1761             : 
    1762             :     /* Mark the path with the correct row estimate */
    1763         462 :     if (param_info)
    1764           0 :         path->rows = param_info->ppi_rows;
    1765             :     else
    1766         462 :         path->rows = baserel->rows;
    1767             : 
    1768             :     /* Charge one CPU tuple cost per row for tuplestore manipulation */
    1769         462 :     cpu_per_tuple = cpu_tuple_cost;
    1770             : 
    1771             :     /* Add scanning CPU costs */
    1772         462 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1773             : 
    1774         462 :     startup_cost += qpqual_cost.startup;
    1775         462 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1776         462 :     run_cost += cpu_per_tuple * baserel->tuples;
    1777             : 
    1778         462 :     path->disabled_nodes = 0;
    1779         462 :     path->startup_cost = startup_cost;
    1780         462 :     path->total_cost = startup_cost + run_cost;
    1781         462 : }
    1782             : 
    1783             : /*
    1784             :  * cost_resultscan
    1785             :  *    Determines and returns the cost of scanning an RTE_RESULT relation.
    1786             :  */
    1787             : void
    1788        4244 : cost_resultscan(Path *path, PlannerInfo *root,
    1789             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
    1790             : {
    1791        4244 :     Cost        startup_cost = 0;
    1792        4244 :     Cost        run_cost = 0;
    1793             :     QualCost    qpqual_cost;
    1794             :     Cost        cpu_per_tuple;
    1795             : 
    1796             :     /* Should only be applied to RTE_RESULT base relations */
    1797             :     Assert(baserel->relid > 0);
    1798             :     Assert(baserel->rtekind == RTE_RESULT);
    1799             : 
    1800             :     /* Mark the path with the correct row estimate */
    1801        4244 :     if (param_info)
    1802         150 :         path->rows = param_info->ppi_rows;
    1803             :     else
    1804        4094 :         path->rows = baserel->rows;
    1805             : 
    1806             :     /* We charge qual cost plus cpu_tuple_cost */
    1807        4244 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1808             : 
    1809        4244 :     startup_cost += qpqual_cost.startup;
    1810        4244 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1811        4244 :     run_cost += cpu_per_tuple * baserel->tuples;
    1812             : 
    1813        4244 :     path->disabled_nodes = 0;
    1814        4244 :     path->startup_cost = startup_cost;
    1815        4244 :     path->total_cost = startup_cost + run_cost;
    1816        4244 : }
    1817             : 
    1818             : /*
    1819             :  * cost_recursive_union
    1820             :  *    Determines and returns the cost of performing a recursive union,
    1821             :  *    and also the estimated output size.
    1822             :  *
    1823             :  * We are given Paths for the nonrecursive and recursive terms.
    1824             :  */
    1825             : void
    1826        1004 : cost_recursive_union(Path *runion, Path *nrterm, Path *rterm)
    1827             : {
    1828             :     Cost        startup_cost;
    1829             :     Cost        total_cost;
    1830             :     double      total_rows;
    1831             : 
    1832             :     /* We probably have decent estimates for the non-recursive term */
    1833        1004 :     startup_cost = nrterm->startup_cost;
    1834        1004 :     total_cost = nrterm->total_cost;
    1835        1004 :     total_rows = nrterm->rows;
    1836             : 
    1837             :     /*
    1838             :      * We arbitrarily assume that about 10 recursive iterations will be
    1839             :      * needed, and that we've managed to get a good fix on the cost and output
    1840             :      * size of each one of them.  These are mighty shaky assumptions but it's
    1841             :      * hard to see how to do better.
    1842             :      */
    1843        1004 :     total_cost += 10 * rterm->total_cost;
    1844        1004 :     total_rows += 10 * rterm->rows;
    1845             : 
    1846             :     /*
    1847             :      * Also charge cpu_tuple_cost per row to account for the costs of
    1848             :      * manipulating the tuplestores.  (We don't worry about possible
    1849             :      * spill-to-disk costs.)
    1850             :      */
    1851        1004 :     total_cost += cpu_tuple_cost * total_rows;
    1852             : 
    1853        1004 :     runion->disabled_nodes = nrterm->disabled_nodes + rterm->disabled_nodes;
    1854        1004 :     runion->startup_cost = startup_cost;
    1855        1004 :     runion->total_cost = total_cost;
    1856        1004 :     runion->rows = total_rows;
    1857        1004 :     runion->pathtarget->width = Max(nrterm->pathtarget->width,
    1858             :                                     rterm->pathtarget->width);
    1859        1004 : }
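
Concretely, the ten-iteration heuristic above makes the recursive term dominate almost any recursive-union estimate. With invented term costs:

/* Arithmetic of cost_recursive_union under the 10-iteration guess; the
 * term costs and row counts are assumed example values. */
#include <stdio.h>

int
main(void)
{
    double      cpu_tuple_cost = 0.01;  /* assumed default */
    double      nrterm_cost = 20.0,
                nrterm_rows = 100.0;    /* non-recursive term */
    double      rterm_cost = 50.0,
                rterm_rows = 100.0;     /* one recursive iteration */

    double      total_rows = nrterm_rows + 10 * rterm_rows;    /* 1100 */
    double      total_cost = nrterm_cost + 10 * rterm_cost;    /* 520 */

    total_cost += cpu_tuple_cost * total_rows;  /* tuplestore handling */

    printf("rows %.0f cost %.1f\n", total_rows, total_cost);
    /* prints rows 1100 cost 531.0 */
    return 0;
}
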
    1860             : 
    1861             : /*
    1862             :  * cost_tuplesort
    1863             :  *    Determines and returns the cost of sorting a relation using tuplesort,
    1864             :  *    not including the cost of reading the input data.
    1865             :  *
    1866             :  * If the total volume of data to sort is less than sort_mem, we will do
    1867             :  * an in-memory sort, which requires no I/O and about t*log2(t) tuple
    1868             :  * comparisons for t tuples.
    1869             :  *
    1870             :  * If the total volume exceeds sort_mem, we switch to a tape-style merge
    1871             :  * algorithm.  There will still be about t*log2(t) tuple comparisons in
    1872             :  * total, but we will also need to write and read each tuple once per
    1873             :  * merge pass.  We expect about ceil(logM(r)) merge passes where r is the
    1874             :  * number of initial runs formed and M is the merge order used by tuplesort.c.
    1875             :  * Since the average initial run should be about sort_mem, we have
    1876             :  *      disk traffic = 2 * relsize * ceil(logM(p / sort_mem))
    1877             :  *      cpu = comparison_cost * t * log2(t)
    1878             :  *
    1879             :  * If the sort is bounded (i.e., only the first k result tuples are needed)
    1880             :  * and k tuples can fit into sort_mem, we use a heap method that keeps only
    1881             :  * k tuples in the heap; this will require about t*log2(k) tuple comparisons.
    1882             :  *
    1883             :  * The disk traffic is assumed to be 3/4ths sequential and 1/4th random
    1884             :  * accesses (XXX can't we refine that guess?)
    1885             :  *
    1886             :  * By default, we charge two operator evals per tuple comparison, which should
    1887             :  * be in the right ballpark in most cases.  The caller can tweak this by
    1888             :  * specifying nonzero comparison_cost; typically that's used for any extra
    1889             :  * work that has to be done to prepare the inputs to the comparison operators.
    1890             :  *
    1891             :  * 'tuples' is the number of tuples in the relation
    1892             :  * 'width' is the average tuple width in bytes
    1893             :  * 'comparison_cost' is the extra cost per comparison, if any
    1894             :  * 'sort_mem' is the number of kilobytes of work memory allowed for the sort
    1895             :  * 'limit_tuples' is the bound on the number of output tuples; -1 if no bound
    1896             :  */
    1897             : static void
    1898     1712880 : cost_tuplesort(Cost *startup_cost, Cost *run_cost,
    1899             :                double tuples, int width,
    1900             :                Cost comparison_cost, int sort_mem,
    1901             :                double limit_tuples)
    1902             : {
    1903     1712880 :     double      input_bytes = relation_byte_size(tuples, width);
    1904             :     double      output_bytes;
    1905             :     double      output_tuples;
    1906     1712880 :     int64       sort_mem_bytes = sort_mem * (int64) 1024;
    1907             : 
    1908             :     /*
    1909             :      * We want to be sure the cost of a sort is never estimated as zero, even
    1910             :      * if passed-in tuple count is zero.  Besides, mustn't do log(0)...
    1911             :      */
    1912     1712880 :     if (tuples < 2.0)
    1913      509520 :         tuples = 2.0;
    1914             : 
    1915             :     /* Include the default cost-per-comparison */
    1916     1712880 :     comparison_cost += 2.0 * cpu_operator_cost;
    1917             : 
    1918             :     /* Do we have a useful LIMIT? */
    1919     1712880 :     if (limit_tuples > 0 && limit_tuples < tuples)
    1920             :     {
    1921        1874 :         output_tuples = limit_tuples;
    1922        1874 :         output_bytes = relation_byte_size(output_tuples, width);
    1923             :     }
    1924             :     else
    1925             :     {
    1926     1711006 :         output_tuples = tuples;
    1927     1711006 :         output_bytes = input_bytes;
    1928             :     }
    1929             : 
    1930     1712880 :     if (output_bytes > sort_mem_bytes)
    1931             :     {
    1932             :         /*
    1933             :          * We'll have to use a disk-based sort of all the tuples
    1934             :          */
    1935       20356 :         double      npages = ceil(input_bytes / BLCKSZ);
    1936       20356 :         double      nruns = input_bytes / sort_mem_bytes;
    1937       20356 :         double      mergeorder = tuplesort_merge_order(sort_mem_bytes);
    1938             :         double      log_runs;
    1939             :         double      npageaccesses;
    1940             : 
    1941             :         /*
    1942             :          * CPU costs
    1943             :          *
    1944             :          * Assume about N log2 N comparisons
    1945             :          */
    1946       20356 :         *startup_cost = comparison_cost * tuples * LOG2(tuples);
    1947             : 
    1948             :         /* Disk costs */
    1949             : 
    1950             :         /* Compute logM(r) as log(r) / log(M) */
    1951       20356 :         if (nruns > mergeorder)
    1952        5302 :             log_runs = ceil(log(nruns) / log(mergeorder));
    1953             :         else
    1954       15054 :             log_runs = 1.0;
    1955       20356 :         npageaccesses = 2.0 * npages * log_runs;
    1956             :         /* Assume 3/4ths of accesses are sequential, 1/4th are not */
    1957       20356 :         *startup_cost += npageaccesses *
    1958       20356 :             (seq_page_cost * 0.75 + random_page_cost * 0.25);
    1959             :     }
    1960     1692524 :     else if (tuples > 2 * output_tuples || input_bytes > sort_mem_bytes)
    1961             :     {
    1962             :         /*
    1963             :          * We'll use a bounded heap-sort keeping just K tuples in memory, for
    1964             :          * a total number of tuple comparisons of N log2 K; but the constant
    1965             :          * factor is a bit higher than for quicksort.  Tweak it so that the
    1966             :          * cost curve is continuous at the crossover point.
    1967             :          */
    1968        1370 :         *startup_cost = comparison_cost * tuples * LOG2(2.0 * output_tuples);
    1969             :     }
    1970             :     else
    1971             :     {
    1972             :         /* We'll use plain quicksort on all the input tuples */
    1973     1691154 :         *startup_cost = comparison_cost * tuples * LOG2(tuples);
    1974             :     }
    1975             : 
    1976             :     /*
    1977             :      * Also charge a small amount (arbitrarily set equal to operator cost) per
    1978             :      * extracted tuple.  We don't charge cpu_tuple_cost because a Sort node
    1979             :      * doesn't do qual-checking or projection, so it has less overhead than
    1980             :      * most plan nodes.  Note it's correct to use tuples not output_tuples
    1981             :      * here --- the upper LIMIT will pro-rate the run cost so we'd be double
    1982             :      * counting the LIMIT otherwise.
    1983             :      */
    1984     1712880 :     *run_cost = cpu_operator_cost * tuples;
    1985     1712880 : }
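
/*
 * A standalone sketch (not part of costsize.c) of the disk-sort branch
 * above, using the stock default cost parameters (seq_page_cost = 1.0,
 * random_page_cost = 4.0, cpu_operator_cost = 0.0025) and an assumed
 * merge order of 500; the real merge order comes from
 * tuplesort_merge_order(), and relation_byte_size() also adds per-tuple
 * header overhead that this sketch ignores.
 */
#include <math.h>
#include <stdio.h>

#define BLCKSZ 8192

int
main(void)
{
    double      seq_page_cost = 1.0;
    double      random_page_cost = 4.0;
    double      comparison_cost = 2.0 * 0.0025; /* 2 * cpu_operator_cost */
    double      tuples = 1e6;
    double      width = 100.0;                  /* avg tuple width, bytes */
    double      sort_mem_bytes = 4096.0 * 1024.0;   /* work_mem = 4MB */
    double      mergeorder = 500.0;             /* assumed, see above */

    double      input_bytes = tuples * width;   /* ~95MB, so we spill */
    double      npages = ceil(input_bytes / BLCKSZ);
    double      nruns = input_bytes / sort_mem_bytes;
    double      log_runs = (nruns > mergeorder) ?
        ceil(log(nruns) / log(mergeorder)) : 1.0;
    double      npageaccesses = 2.0 * npages * log_runs;
    double      cpu = comparison_cost * tuples * log2(tuples);
    double      disk = npageaccesses *
        (seq_page_cost * 0.75 + random_page_cost * 0.25);

    printf("startup_cost = %.0f (cpu %.0f + disk %.0f)\n",
           cpu + disk, cpu, disk);
    return 0;
}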
    1986             : 
    1987             : /*
    1988             :  * cost_incremental_sort
    1989             :  *  Determines and returns the cost of sorting a relation incrementally, when
    1990             :  *  the input path is presorted by a prefix of the pathkeys.
    1991             :  *
    1992             :  * 'presorted_keys' is the number of leading pathkeys by which the input path
    1993             :  * is sorted.
    1994             :  *
    1995             :  * We estimate the number of groups into which the relation is divided by the
    1996             :  * leading pathkeys, and then calculate the cost of sorting a single group
    1997             :  * with tuplesort using cost_tuplesort().
    1998             :  */
    1999             : void
    2000       11968 : cost_incremental_sort(Path *path,
    2001             :                       PlannerInfo *root, List *pathkeys, int presorted_keys,
    2002             :                       int input_disabled_nodes,
    2003             :                       Cost input_startup_cost, Cost input_total_cost,
    2004             :                       double input_tuples, int width, Cost comparison_cost, int sort_mem,
    2005             :                       double limit_tuples)
    2006             : {
    2007             :     Cost        startup_cost,
    2008             :                 run_cost,
    2009       11968 :                 input_run_cost = input_total_cost - input_startup_cost;
    2010             :     double      group_tuples,
    2011             :                 input_groups;
    2012             :     Cost        group_startup_cost,
    2013             :                 group_run_cost,
    2014             :                 group_input_run_cost;
    2015       11968 :     List       *presortedExprs = NIL;
    2016             :     ListCell   *l;
    2017       11968 :     bool        unknown_varno = false;
    2018             : 
    2019             :     Assert(presorted_keys > 0 && presorted_keys < list_length(pathkeys));
    2020             : 
    2021             :     /*
    2022             :      * We want to be sure the cost of a sort is never estimated as zero, even
    2023             :      * if passed-in tuple count is zero.  Besides, mustn't do log(0)...
    2024             :      */
    2025       11968 :     if (input_tuples < 2.0)
    2026        7146 :         input_tuples = 2.0;
    2027             : 
    2028             :     /* Default estimate of number of groups, capped to one group per row. */
    2029       11968 :     input_groups = Min(input_tuples, DEFAULT_NUM_DISTINCT);
    2030             : 
    2031             :     /*
    2032             :      * Extract presorted keys as list of expressions.
    2033             :      *
    2034             :      * We need to be careful about Vars containing "varno 0" which might have
    2035             :      * been introduced by generate_append_tlist, which would confuse
    2036             :      * estimate_num_groups (in fact it'd fail for such expressions). See
    2037             :      * recurse_set_operations which has to deal with the same issue.
    2038             :      *
    2039             :      * Unlike recurse_set_operations we can't access the original target list
    2040             :  * here, and even if we could, it's not very clear how useful that would be
    2041             :      * for a set operation combining multiple tables. So we simply detect if
    2042             :      * there are any expressions with "varno 0" and use the default
    2043             :      * DEFAULT_NUM_DISTINCT in that case.
    2044             :      *
    2045             :      * We might also use either 1.0 (a single group) or input_tuples (each row
    2046             :      * being a separate group), pretty much the worst and best case for
    2047             :      * incremental sort. But those are extreme cases and using something in
    2048             :      * between seems reasonable. Furthermore, generate_append_tlist is used
    2049             :      * for set operations, which are likely to produce mostly unique output
    2050             :      * anyway - from that standpoint the DEFAULT_NUM_DISTINCT is defensive
    2051             :      * while maintaining lower startup cost.
    2052             :      */
    2053       12064 :     foreach(l, pathkeys)
    2054             :     {
    2055       12064 :         PathKey    *key = (PathKey *) lfirst(l);
    2056       12064 :         EquivalenceMember *member = (EquivalenceMember *)
    2057       12064 :             linitial(key->pk_eclass->ec_members);
    2058             : 
    2059             :         /*
    2060             :          * Check if the expression contains Var with "varno 0" so that we
    2061             :          * don't call estimate_num_groups in that case.
    2062             :          */
    2063       12064 :         if (bms_is_member(0, pull_varnos(root, (Node *) member->em_expr)))
    2064             :         {
    2065          10 :             unknown_varno = true;
    2066          10 :             break;
    2067             :         }
    2068             : 
    2069             :         /* expression not containing any Vars with "varno 0" */
    2070       12054 :         presortedExprs = lappend(presortedExprs, member->em_expr);
    2071             : 
    2072       12054 :         if (foreach_current_index(l) + 1 >= presorted_keys)
    2073       11958 :             break;
    2074             :     }
    2075             : 
    2076             :     /* Estimate the number of groups with equal presorted keys. */
    2077       11968 :     if (!unknown_varno)
    2078       11958 :         input_groups = estimate_num_groups(root, presortedExprs, input_tuples,
    2079             :                                            NULL, NULL);
    2080             : 
    2081       11968 :     group_tuples = input_tuples / input_groups;
    2082       11968 :     group_input_run_cost = input_run_cost / input_groups;
    2083             : 
    2084             :     /*
    2085             :      * Estimate the average cost of sorting of one group where presorted keys
    2086             :      * are equal.
    2087             :      */
    2088       11968 :     cost_tuplesort(&group_startup_cost, &group_run_cost,
    2089             :                    group_tuples, width, comparison_cost, sort_mem,
    2090             :                    limit_tuples);
    2091             : 
    2092             :     /*
    2093             :      * Startup cost of incremental sort is the startup cost of its first group
    2094             :      * plus the cost of its input.
    2095             :      */
    2096       11968 :     startup_cost = group_startup_cost + input_startup_cost +
    2097             :         group_input_run_cost;
    2098             : 
    2099             :     /*
    2100             :      * Once we have started producing tuples from the first group, the cost of
    2101             :      * producing all the tuples is given by the cost to finish processing this
    2102             :      * group, plus the total cost to process the remaining groups, plus the
    2103             :      * remaining cost of input.
    2104             :      */
    2105       11968 :     run_cost = group_run_cost + (group_run_cost + group_startup_cost) *
    2106       11968 :         (input_groups - 1) + group_input_run_cost * (input_groups - 1);
    2107             : 
    2108             :     /*
    2109             :      * Incremental sort adds some overhead by itself. Firstly, it has to
    2110             :      * detect the sort groups. This is roughly equal to one extra copy and
    2111             :      * comparison per tuple.
    2112             :      */
    2113       11968 :     run_cost += (cpu_tuple_cost + comparison_cost) * input_tuples;
    2114             : 
    2115             :     /*
    2116             :      * Additionally, we charge double cpu_tuple_cost for each input group to
    2117             :      * account for the tuplesort_reset that's performed after each group.
    2118             :      */
    2119       11968 :     run_cost += 2.0 * cpu_tuple_cost * input_groups;
    2120             : 
    2121       11968 :     path->rows = input_tuples;
    2122             : 
    2123             :     /* should not generate these paths when enable_incremental_sort=false */
    2124             :     Assert(enable_incremental_sort);
    2125       11968 :     path->disabled_nodes = input_disabled_nodes;
    2126             : 
    2127       11968 :     path->startup_cost = startup_cost;
    2128       11968 :     path->total_cost = startup_cost + run_cost;
    2129       11968 : }
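
/*
 * A back-of-envelope sketch (not part of costsize.c) of how the
 * per-group figures above combine.  All the inputs are invented for
 * illustration; in the planner the group costs come from
 * cost_tuplesort() and input_groups from estimate_num_groups().
 */
#include <stdio.h>

int
main(void)
{
    double      input_tuples = 10000.0;
    double      input_groups = 100.0;       /* pretend estimate */
    double      input_startup_cost = 0.0;
    double      input_run_cost = 1000.0;
    double      group_startup_cost = 5.0;   /* pretend tuplesort figures */
    double      group_run_cost = 2.5;
    double      cpu_tuple_cost = 0.01;      /* default GUC value */
    double      comparison_cost = 0.005;    /* 2 * cpu_operator_cost */

    double      group_input_run_cost = input_run_cost / input_groups;
    double      startup_cost = group_startup_cost + input_startup_cost +
        group_input_run_cost;
    double      run_cost = group_run_cost +
        (group_run_cost + group_startup_cost) * (input_groups - 1) +
        group_input_run_cost * (input_groups - 1);

    /* group-boundary detection and per-group tuplesort_reset overhead */
    run_cost += (cpu_tuple_cost + comparison_cost) * input_tuples;
    run_cost += 2.0 * cpu_tuple_cost * input_groups;

    /* startup is only one group deep; total covers all 100 groups */
    printf("startup %.2f, total %.2f\n",
           startup_cost, startup_cost + run_cost);
    return 0;
}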
    2130             : 
    2131             : /*
    2132             :  * cost_sort
    2133             :  *    Determines and returns the cost of sorting a relation, including
    2134             :  *    the cost of reading the input data.
    2135             :  *
    2136             :  * NOTE: some callers currently pass NIL for pathkeys because they
    2137             :  * can't conveniently supply the sort keys.  Since this routine doesn't
    2138             :  * currently do anything with pathkeys anyway, that doesn't matter...
    2139             :  * but if it ever does, it should react gracefully to lack of key data.
    2140             :  * (Actually, the thing we'd most likely be interested in is just the number
    2141             :  * of sort keys, which all callers *could* supply.)
    2142             :  */
    2143             : void
    2144     1700912 : cost_sort(Path *path, PlannerInfo *root,
    2145             :           List *pathkeys, int input_disabled_nodes,
    2146             :           Cost input_cost, double tuples, int width,
    2147             :           Cost comparison_cost, int sort_mem,
    2148             :           double limit_tuples)
    2149             : 
    2150             : {
    2151             :     Cost        startup_cost;
    2152             :     Cost        run_cost;
    2153             : 
    2154     1700912 :     cost_tuplesort(&startup_cost, &run_cost,
    2155             :                    tuples, width,
    2156             :                    comparison_cost, sort_mem,
    2157             :                    limit_tuples);
    2158             : 
    2159     1700912 :     startup_cost += input_cost;
    2160             : 
    2161     1700912 :     path->rows = tuples;
    2162     1700912 :     path->disabled_nodes = input_disabled_nodes + (enable_sort ? 0 : 1);
    2163     1700912 :     path->startup_cost = startup_cost;
    2164     1700912 :     path->total_cost = startup_cost + run_cost;
    2165     1700912 : }
    2166             : 
    2167             : /*
    2168             :  * append_nonpartial_cost
    2169             :  *    Estimate the cost of the non-partial paths in a Parallel Append.
    2170             :  *    The non-partial paths are assumed to be the first "numpaths" paths
    2171             :  *    from the subpaths list, and to be in order of decreasing cost.
    2172             :  */
    2173             : static Cost
    2174       18312 : append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
    2175             : {
    2176             :     Cost       *costarr;
    2177             :     int         arrlen;
    2178             :     ListCell   *l;
    2179             :     ListCell   *cell;
    2180             :     int         path_index;
    2181             :     int         min_index;
    2182             :     int         max_index;
    2183             : 
    2184       18312 :     if (numpaths == 0)
    2185       14166 :         return 0;
    2186             : 
    2187             :     /*
    2188             :      * Array length is number of workers or number of relevant paths,
    2189             :      * whichever is less.
    2190             :      */
    2191        4146 :     arrlen = Min(parallel_workers, numpaths);
    2192        4146 :     costarr = (Cost *) palloc(sizeof(Cost) * arrlen);
    2193             : 
    2194             :     /* The first few paths will each be claimed by a different worker. */
    2195        4146 :     path_index = 0;
    2196       11964 :     foreach(cell, subpaths)
    2197             :     {
    2198        8662 :         Path       *subpath = (Path *) lfirst(cell);
    2199             : 
    2200        8662 :         if (path_index == arrlen)
    2201         844 :             break;
    2202        7818 :         costarr[path_index++] = subpath->total_cost;
    2203             :     }
    2204             : 
    2205             :     /*
    2206             :      * Since subpaths are sorted by decreasing cost, the last one will have
    2207             :      * the minimum cost.
    2208             :      */
    2209        4146 :     min_index = arrlen - 1;
    2210             : 
    2211             :     /*
    2212             :      * For each of the remaining subpaths, add its cost to the array element
    2213             :      * with minimum cost.
    2214             :      */
    2215        4628 :     for_each_cell(l, subpaths, cell)
    2216             :     {
    2217        1028 :         Path       *subpath = (Path *) lfirst(l);
    2218             : 
    2219             :         /* Consider only the non-partial paths */
    2220        1028 :         if (path_index++ == numpaths)
    2221         546 :             break;
    2222             : 
    2223         482 :         costarr[min_index] += subpath->total_cost;
    2224             : 
    2225             :         /* Recompute which array element now has the minimum cost */
    2226         482 :         min_index = 0;
    2227        1482 :         for (int i = 0; i < arrlen; i++)
    2228             :         {
    2229        1000 :             if (costarr[i] < costarr[min_index])
    2230         196 :                 min_index = i;
    2231             :         }
    2232             :     }
    2233             : 
    2234             :     /* Return the highest cost from the array */
    2235        4146 :     max_index = 0;
    2236       11964 :     for (int i = 0; i < arrlen; i++)
    2237             :     {
    2238        7818 :         if (costarr[i] > costarr[max_index])
    2239         188 :             max_index = i;
    2240             :     }
    2241             : 
    2242        4146 :     return costarr[max_index];
    2243             : }
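
/*
 * A toy standalone version (not the function above) of the greedy
 * assignment: non-partial subpath costs arrive sorted by decreasing
 * cost, each remaining path is piled onto the currently cheapest
 * worker, and the answer is the busiest worker.  The cost values are
 * made up for illustration.
 */
#include <stdio.h>

int
main(void)
{
    double      costs[] = {10.0, 8.0, 5.0, 3.0};    /* decreasing cost */
    int         numpaths = 4;
    int         parallel_workers = 2;
    int         arrlen = (parallel_workers < numpaths) ?
        parallel_workers : numpaths;
    double      costarr[4];
    int         i, j, min_index = arrlen - 1, max_index = 0;

    /* the first few paths each claim a different worker */
    for (i = 0; i < arrlen; i++)
        costarr[i] = costs[i];

    /* remaining paths go to whichever worker is currently cheapest */
    for (; i < numpaths; i++)
    {
        costarr[min_index] += costs[i];
        min_index = 0;
        for (j = 1; j < arrlen; j++)
            if (costarr[j] < costarr[min_index])
                min_index = j;
    }

    for (j = 1; j < arrlen; j++)
        if (costarr[j] > costarr[max_index])
            max_index = j;

    /* workers end up with {10+3, 8+5} = {13.0, 13.0} => 13.0 */
    printf("estimated non-partial cost = %.1f\n", costarr[max_index]);
    return 0;
}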
    2244             : 
    2245             : /*
    2246             :  * cost_append
    2247             :  *    Determines and returns the cost of an Append node.
    2248             :  */
    2249             : void
    2250       54110 : cost_append(AppendPath *apath)
    2251             : {
    2252             :     ListCell   *l;
    2253             : 
    2254       54110 :     apath->path.disabled_nodes = 0;
    2255       54110 :     apath->path.startup_cost = 0;
    2256       54110 :     apath->path.total_cost = 0;
    2257       54110 :     apath->path.rows = 0;
    2258             : 
    2259       54110 :     if (apath->subpaths == NIL)
    2260        1642 :         return;
    2261             : 
    2262       52468 :     if (!apath->path.parallel_aware)
    2263             :     {
    2264       34156 :         List       *pathkeys = apath->path.pathkeys;
    2265             : 
    2266       34156 :         if (pathkeys == NIL)
    2267             :         {
    2268       32026 :             Path       *firstsubpath = (Path *) linitial(apath->subpaths);
    2269             : 
    2270             :             /*
    2271             :              * For an unordered, non-parallel-aware Append we take the startup
    2272             :              * cost as the startup cost of the first subpath.
    2273             :              */
    2274       32026 :             apath->path.startup_cost = firstsubpath->startup_cost;
    2275             : 
    2276             :             /*
    2277             :              * Compute rows, number of disabled nodes, and total cost as sums
    2278             :              * of underlying subplan values.
    2279             :              */
    2280      123948 :             foreach(l, apath->subpaths)
    2281             :             {
    2282       91922 :                 Path       *subpath = (Path *) lfirst(l);
    2283             : 
    2284       91922 :                 apath->path.rows += subpath->rows;
    2285       91922 :                 apath->path.disabled_nodes += subpath->disabled_nodes;
    2286       91922 :                 apath->path.total_cost += subpath->total_cost;
    2287             :             }
    2288             :         }
    2289             :         else
    2290             :         {
    2291             :             /*
    2292             :              * For an ordered, non-parallel-aware Append we take the startup
    2293             :              * cost as the sum of the subpath startup costs.  This ensures
    2294             :              * that we don't underestimate the startup cost when a query's
    2295             :              * LIMIT is such that several of the children have to be run to
    2296             :              * satisfy it.  This might be overkill --- another plausible hack
    2297             :              * would be to take the Append's startup cost as the maximum of
    2298             :              * the child startup costs.  But we don't want to risk believing
    2299             :              * that an ORDER BY LIMIT query can be satisfied at small cost
    2300             :              * when the first child has small startup cost but later ones
    2301             :              * don't.  (If we had the ability to deal with nonlinear cost
    2302             :              * interpolation for partial retrievals, we would not need to be
    2303             :              * so conservative about this.)
    2304             :              *
    2305             :              * This case is also different from the above in that we have to
    2306             :              * account for possibly injecting sorts into subpaths that aren't
    2307             :              * natively ordered.
    2308             :              */
    2309        8318 :             foreach(l, apath->subpaths)
    2310             :             {
    2311        6188 :                 Path       *subpath = (Path *) lfirst(l);
    2312             :                 Path        sort_path;  /* dummy for result of cost_sort */
    2313             : 
    2314        6188 :                 if (!pathkeys_contained_in(pathkeys, subpath->pathkeys))
    2315             :                 {
    2316             :                     /*
    2317             :                      * We'll need to insert a Sort node, so include costs for
    2318             :                      * that.  We can use the parent's LIMIT if any, since we
    2319             :                      * certainly won't pull more than that many tuples from
    2320             :                      * any child.
    2321             :                      */
    2322          44 :                     cost_sort(&sort_path,
    2323             :                               NULL, /* doesn't currently need root */
    2324             :                               pathkeys,
    2325             :                               subpath->disabled_nodes,
    2326             :                               subpath->total_cost,
    2327             :                               subpath->rows,
    2328          44 :                               subpath->pathtarget->width,
    2329             :                               0.0,
    2330             :                               work_mem,
    2331             :                               apath->limit_tuples);
    2332          44 :                     subpath = &sort_path;
    2333             :                 }
    2334             : 
    2335        6188 :                 apath->path.rows += subpath->rows;
    2336        6188 :                 apath->path.disabled_nodes += subpath->disabled_nodes;
    2337        6188 :                 apath->path.startup_cost += subpath->startup_cost;
    2338        6188 :                 apath->path.total_cost += subpath->total_cost;
    2339             :             }
    2340             :         }
    2341             :     }
    2342             :     else                        /* parallel-aware */
    2343             :     {
    2344       18312 :         int         i = 0;
    2345       18312 :         double      parallel_divisor = get_parallel_divisor(&apath->path);
    2346             : 
    2347             :         /* Parallel-aware Append never produces ordered output. */
    2348             :         Assert(apath->path.pathkeys == NIL);
    2349             : 
    2350             :         /* Calculate startup cost. */
    2351       71472 :         foreach(l, apath->subpaths)
    2352             :         {
    2353       53160 :             Path       *subpath = (Path *) lfirst(l);
    2354             : 
    2355             :             /*
    2356             :              * Append will start returning tuples when the child node having
    2357             :              * lowest startup cost is done setting up. We consider only the
    2358             :              * first few subplans that immediately get a worker assigned.
    2359             :              */
    2360       53160 :             if (i == 0)
    2361       18312 :                 apath->path.startup_cost = subpath->startup_cost;
    2362       34848 :             else if (i < apath->path.parallel_workers)
    2363       17754 :                 apath->path.startup_cost = Min(apath->path.startup_cost,
    2364             :                                                subpath->startup_cost);
    2365             : 
    2366             :             /*
    2367             :              * Apply parallel divisor to subpaths.  Scale the number of rows
    2368             :              * for each partial subpath based on the ratio of the parallel
    2369             :              * divisor originally used for the subpath to the one we adopted.
    2370             :              * Also add the cost of partial paths to the total cost, but
    2371             :              * ignore non-partial paths for now.
    2372             :              */
    2373       53160 :             if (i < apath->first_partial_path)
    2374        8300 :                 apath->path.rows += subpath->rows / parallel_divisor;
    2375             :             else
    2376             :             {
    2377             :                 double      subpath_parallel_divisor;
    2378             : 
    2379       44860 :                 subpath_parallel_divisor = get_parallel_divisor(subpath);
    2380       44860 :                 apath->path.rows += subpath->rows * (subpath_parallel_divisor /
    2381             :                                                      parallel_divisor);
    2382       44860 :                 apath->path.total_cost += subpath->total_cost;
    2383             :             }
    2384             : 
    2385       53160 :             apath->path.disabled_nodes += subpath->disabled_nodes;
    2386       53160 :             apath->path.rows = clamp_row_est(apath->path.rows);
    2387             : 
    2388       53160 :             i++;
    2389             :         }
    2390             : 
    2391             :         /* Add cost for non-partial subpaths. */
    2392       18312 :         apath->path.total_cost +=
    2393       18312 :             append_nonpartial_cost(apath->subpaths,
    2394             :                                    apath->first_partial_path,
    2395             :                                    apath->path.parallel_workers);
    2396             :     }
    2397             : 
    2398             :     /*
    2399             :      * Although Append does not do any selection or projection, it's not free;
    2400             :      * add a small per-tuple overhead.
    2401             :      */
    2402       52468 :     apath->path.total_cost +=
    2403       52468 :         cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * apath->path.rows;
    2404             : }
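
/*
 * A toy illustration (not part of costsize.c) of the parallel-aware row
 * scaling above, assuming parallel_leader_participation is on so that
 * get_parallel_divisor() yields workers + max(0, 1 - 0.3 * workers).
 * The row counts are invented.
 */
#include <stdio.h>

static double
parallel_divisor(int workers)
{
    double      leader = 1.0 - 0.3 * workers;

    return workers + (leader > 0.0 ? leader : 0.0);
}

int
main(void)
{
    double      append_rows = 0.0;
    double      append_div = parallel_divisor(3);   /* Append: 3 workers */

    /* non-partial child: 1000 total rows, spread over the whole Append */
    append_rows += 1000.0 / append_div;

    /*
     * Partial child originally planned for 2 workers (divisor 2.4): its
     * per-worker row estimate is rescaled to the Append's divisor.
     */
    append_rows += 500.0 * (parallel_divisor(2) / append_div);

    printf("Append rows per worker ~ %.1f\n", append_rows);
    return 0;
}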
    2405             : 
    2406             : /*
    2407             :  * cost_merge_append
    2408             :  *    Determines and returns the cost of a MergeAppend node.
    2409             :  *
    2410             :  * MergeAppend merges several pre-sorted input streams, using a heap that
    2411             :  * at any given instant holds the next tuple from each stream.  If there
    2412             :  * are N streams, we need about N*log2(N) tuple comparisons to construct
    2413             :  * the heap at startup, and then for each output tuple, about log2(N)
    2414             :  * comparisons to replace the top entry.
    2415             :  *
    2416             :  * (The effective value of N will drop once some of the input streams are
    2417             :  * exhausted, but it seems unlikely to be worth trying to account for that.)
    2418             :  *
    2419             :  * The heap is never spilled to disk, since we assume N is not very large.
    2420             :  * So this is much simpler than cost_sort.
    2421             :  *
    2422             :  * As in cost_sort, we charge two operator evals per tuple comparison.
    2423             :  *
    2424             :  * 'pathkeys' is a list of sort keys
    2425             :  * 'n_streams' is the number of input streams
    2426             :  * 'input_disabled_nodes' is the sum of the input streams' disabled node counts
    2427             :  * 'input_startup_cost' is the sum of the input streams' startup costs
    2428             :  * 'input_total_cost' is the sum of the input streams' total costs
    2429             :  * 'tuples' is the number of tuples in all the streams
    2430             :  */
    2431             : void
    2432        4166 : cost_merge_append(Path *path, PlannerInfo *root,
    2433             :                   List *pathkeys, int n_streams,
    2434             :                   int input_disabled_nodes,
    2435             :                   Cost input_startup_cost, Cost input_total_cost,
    2436             :                   double tuples)
    2437             : {
    2438        4166 :     Cost        startup_cost = 0;
    2439        4166 :     Cost        run_cost = 0;
    2440             :     Cost        comparison_cost;
    2441             :     double      N;
    2442             :     double      logN;
    2443             : 
    2444             :     /*
    2445             :      * Avoid log(0)...
    2446             :      */
    2447        4166 :     N = (n_streams < 2) ? 2.0 : (double) n_streams;
    2448        4166 :     logN = LOG2(N);
    2449             : 
    2450             :     /* Assumed cost per tuple comparison */
    2451        4166 :     comparison_cost = 2.0 * cpu_operator_cost;
    2452             : 
    2453             :     /* Heap creation cost */
    2454        4166 :     startup_cost += comparison_cost * N * logN;
    2455             : 
    2456             :     /* Per-tuple heap maintenance cost */
    2457        4166 :     run_cost += tuples * comparison_cost * logN;
    2458             : 
    2459             :     /*
    2460             :      * Although MergeAppend does not do any selection or projection, it's not
    2461             :      * free; add a small per-tuple overhead.
    2462             :      */
    2463        4166 :     run_cost += cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * tuples;
    2464             : 
    2465        4166 :     path->disabled_nodes = input_disabled_nodes;
    2466        4166 :     path->startup_cost = startup_cost + input_startup_cost;
    2467        4166 :     path->total_cost = startup_cost + run_cost + input_total_cost;
    2468        4166 : }
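
/*
 * Worked numbers (not part of costsize.c) for the heap arithmetic
 * above: four input streams and 10000 total tuples are invented inputs,
 * combined with the default cpu_operator_cost (0.0025), the default
 * cpu_tuple_cost (0.01), and APPEND_CPU_COST_MULTIPLIER (0.5).
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double      N = 4.0;                    /* input streams */
    double      tuples = 10000.0;
    double      comparison_cost = 2.0 * 0.0025;
    double      logN = log2(N);             /* comparisons per heap pop */

    double      startup_cost = comparison_cost * N * logN;
    double      run_cost = tuples * comparison_cost * logN +
        0.01 * 0.5 * tuples;                /* per-tuple Append overhead */

    printf("heap startup %.3f, run %.1f\n", startup_cost, run_cost);
    return 0;
}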
    2469             : 
    2470             : /*
    2471             :  * cost_material
    2472             :  *    Determines and returns the cost of materializing a relation, including
    2473             :  *    the cost of reading the input data.
    2474             :  *
    2475             :  * If the total volume of data to materialize exceeds work_mem, we will need
    2476             :  * to write it to disk, so the cost is much higher in that case.
    2477             :  *
    2478             :  * Note that here we are estimating the costs for the first scan of the
    2479             :  * relation, so the materialization is all overhead --- any savings will
    2480             :  * occur only on rescan, which is estimated in cost_rescan.
    2481             :  */
    2482             : void
    2483      530244 : cost_material(Path *path,
    2484             :               int input_disabled_nodes,
    2485             :               Cost input_startup_cost, Cost input_total_cost,
    2486             :               double tuples, int width)
    2487             : {
    2488      530244 :     Cost        startup_cost = input_startup_cost;
    2489      530244 :     Cost        run_cost = input_total_cost - input_startup_cost;
    2490      530244 :     double      nbytes = relation_byte_size(tuples, width);
    2491      530244 :     double      work_mem_bytes = work_mem * (Size) 1024;
    2492             : 
    2493      530244 :     path->rows = tuples;
    2494             : 
    2495             :     /*
    2496             :      * Whether spilling or not, charge 2x cpu_operator_cost per tuple to
    2497             :      * reflect bookkeeping overhead.  (This rate must be more than what
    2498             :      * cost_rescan charges for materialize, ie, cpu_operator_cost per tuple;
    2499             :      * if it is exactly the same then there will be a cost tie between
    2500             :      * nestloop with A outer, materialized B inner and nestloop with B outer,
    2501             :      * materialized A inner.  The extra cost ensures we'll prefer
    2502             :      * materializing the smaller rel.)  Note that this is normally a good deal
    2503             :      * less than cpu_tuple_cost; which is OK because a Material plan node
    2504             :      * doesn't do qual-checking or projection, so it's got less overhead than
    2505             :      * most plan nodes.
    2506             :      */
    2507      530244 :     run_cost += 2 * cpu_operator_cost * tuples;
    2508             : 
    2509             :     /*
    2510             :      * If we will spill to disk, charge at the rate of seq_page_cost per page.
    2511             :      * This cost is assumed to be evenly spread through the plan run phase,
    2512             :      * which isn't exactly accurate but our cost model doesn't allow for
    2513             :      * nonuniform costs within the run phase.
    2514             :      */
    2515      530244 :     if (nbytes > work_mem_bytes)
    2516             :     {
    2517        5472 :         double      npages = ceil(nbytes / BLCKSZ);
    2518             : 
    2519        5472 :         run_cost += seq_page_cost * npages;
    2520             :     }
    2521             : 
    2522      530244 :     path->disabled_nodes = input_disabled_nodes + (enable_material ? 0 : 1);
    2523      530244 :     path->startup_cost = startup_cost;
    2524      530244 :     path->total_cost = startup_cost + run_cost;
    2525      530244 : }
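
/*
 * A quick sketch (not part of costsize.c) of the spill threshold above:
 * with the defaults (work_mem = 4MB, seq_page_cost = 1.0,
 * cpu_operator_cost = 0.0025) and invented input figures, materializing
 * 100000 80-byte tuples overflows work_mem and picks up a sequential
 * page charge.  relation_byte_size() overhead is ignored here.
 */
#include <math.h>
#include <stdio.h>

#define BLCKSZ 8192

int
main(void)
{
    double      tuples = 100000.0;
    double      width = 80.0;
    double      nbytes = tuples * width;        /* ~7.6MB */
    double      work_mem_bytes = 4096.0 * 1024.0;
    double      run_cost = 2 * 0.0025 * tuples; /* bookkeeping overhead */

    if (nbytes > work_mem_bytes)
        run_cost += 1.0 * ceil(nbytes / BLCKSZ);    /* seq_page_cost */

    printf("material run_cost %.1f\n", run_cost);
    return 0;
}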
    2526             : 
    2527             : /*
    2528             :  * cost_memoize_rescan
    2529             :  *    Determines the estimated cost of rescanning a Memoize node.
    2530             :  *
    2531             :  * In order to estimate this, we must gain knowledge of how often we expect to
    2532             :  * be called and how many distinct sets of parameters we are likely to be
    2533             :  * called with. If we expect a good cache hit ratio, then we can set our
    2534             :  * costs to account for that hit ratio, plus a little bit of cost for the
    2535             :  * caching itself.  Caching will not work out well if we expect to be called
    2536             :  * with too many distinct parameter values.  The worst-case here is that we
    2537             :  * never see any parameter value twice, in which case we'd never get a cache
    2538             :  * hit and caching would be a complete waste of effort.
    2539             :  */
    2540             : static void
    2541      290258 : cost_memoize_rescan(PlannerInfo *root, MemoizePath *mpath,
    2542             :                     Cost *rescan_startup_cost, Cost *rescan_total_cost)
    2543             : {
    2544             :     EstimationInfo estinfo;
    2545             :     ListCell   *lc;
    2546      290258 :     Cost        input_startup_cost = mpath->subpath->startup_cost;
    2547      290258 :     Cost        input_total_cost = mpath->subpath->total_cost;
    2548      290258 :     double      tuples = mpath->subpath->rows;
    2549      290258 :     double      calls = mpath->calls;
    2550      290258 :     int         width = mpath->subpath->pathtarget->width;
    2551             : 
    2552             :     double      hash_mem_bytes;
    2553             :     double      est_entry_bytes;
    2554             :     double      est_cache_entries;
    2555             :     double      ndistinct;
    2556             :     double      evict_ratio;
    2557             :     double      hit_ratio;
    2558             :     Cost        startup_cost;
    2559             :     Cost        total_cost;
    2560             : 
    2561             :     /* available cache space */
    2562      290258 :     hash_mem_bytes = get_hash_memory_limit();
    2563             : 
    2564             :     /*
    2565             :      * Set the number of bytes each cache entry should consume in the cache.
    2566             :      * To provide us with better estimations on how many cache entries we can
    2567             :      * store at once, we make a call to the executor here to ask it what
    2568             :      * memory overheads there are for a single cache entry.
    2569             :      */
    2570      290258 :     est_entry_bytes = relation_byte_size(tuples, width) +
    2571      290258 :         ExecEstimateCacheEntryOverheadBytes(tuples);
    2572             : 
    2573             :     /* include the estimated width for the cache keys */
    2574      618206 :     foreach(lc, mpath->param_exprs)
    2575      327948 :         est_entry_bytes += get_expr_width(root, (Node *) lfirst(lc));
    2576             : 
    2577             :     /* estimate on the upper limit of cache entries we can hold at once */
    2578      290258 :     est_cache_entries = floor(hash_mem_bytes / est_entry_bytes);
    2579             : 
    2580             :     /* estimate on the distinct number of parameter values */
    2581      290258 :     ndistinct = estimate_num_groups(root, mpath->param_exprs, calls, NULL,
    2582             :                                     &estinfo);
    2583             : 
    2584             :     /*
    2585             :      * When the estimation fell back on using a default value, it's a bit too
    2586             :      * risky to assume that it's ok to use a Memoize node.  The use of a
    2587             :      * default could cause us to use a Memoize node when it's really
    2588             :      * inappropriate to do so.  If we see that this has been done, then we'll
    2589             :      * assume that every call will have unique parameters, which will almost
    2590             :      * certainly mean a MemoizePath will never survive add_path().
    2591             :      */
    2592      290258 :     if ((estinfo.flags & SELFLAG_USED_DEFAULT) != 0)
    2593       15516 :         ndistinct = calls;
    2594             : 
    2595             :     /*
    2596             :      * Since we've already estimated the maximum number of entries we can
    2597             :      * store at once and know the estimated number of distinct values we'll be
    2598             :      * called with, we'll take this opportunity to set the path's est_entries.
    2599             :      * This will ultimately determine the hash table size that the executor
    2600             :      * will use.  If we leave this at zero, the executor will just choose the
    2601             :      * size itself.  Really this is not the right place to do this, but it's
    2602             :      * convenient since everything is already calculated.
    2603             :      */
    2604      290258 :     mpath->est_entries = Min(Min(ndistinct, est_cache_entries),
    2605             :                              PG_UINT32_MAX);
    2606             : 
    2607             :     /*
    2608             :      * When the number of distinct parameter values is above the amount we can
    2609             :      * store in the cache, then we'll have to evict some entries from the
    2610             :      * cache.  This is not free. Here we estimate how often we'll incur the
    2611             :      * cost of that eviction.
    2612             :      */
    2613      290258 :     evict_ratio = 1.0 - Min(est_cache_entries, ndistinct) / ndistinct;
    2614             : 
    2615             :     /*
    2616             :      * In order to estimate how costly a single scan will be, we need to
    2617             :      * attempt to estimate what the cache hit ratio will be.  To do that we
    2618             :      * must look at how many scans are estimated in total for this node and
    2619             :      * how many of those scans we expect to get a cache hit.
    2620             :      */
    2621      580516 :     hit_ratio = ((calls - ndistinct) / calls) *
    2622      290258 :         (est_cache_entries / Max(ndistinct, est_cache_entries));
    2623             : 
    2624             :     Assert(hit_ratio >= 0 && hit_ratio <= 1.0);
    2625             : 
    2626             :     /*
    2627             :      * Set the total_cost accounting for the expected cache hit ratio.  We
    2628             :      * also add on a cpu_operator_cost to account for a cache lookup. This
    2629             :      * will happen regardless of whether it's a cache hit or not.
    2630             :      */
    2631      290258 :     total_cost = input_total_cost * (1.0 - hit_ratio) + cpu_operator_cost;
    2632             : 
    2633             :     /* Now adjust the total cost to account for cache evictions */
    2634             : 
    2635             :     /* Charge a cpu_tuple_cost for evicting the actual cache entry */
    2636      290258 :     total_cost += cpu_tuple_cost * evict_ratio;
    2637             : 
    2638             :     /*
    2639             :      * Charge a 10th of cpu_operator_cost to evict every tuple in that entry.
    2640             :      * The per-tuple eviction is really just a pfree, so charging a whole
    2641             :      * cpu_operator_cost seems a little excessive.
    2642             :      */
    2643      290258 :     total_cost += cpu_operator_cost / 10.0 * evict_ratio * tuples;
    2644             : 
    2645             :     /*
    2646             :      * Now adjust for storing things in the cache, since that's not free
    2647             :      * either.  Everything must go in the cache.  We don't proportion this
    2648             :      * over any ratio, just apply it once for the scan.  We charge a
    2649             :      * cpu_tuple_cost for the creation of the cache entry and also a
    2650             :      * cpu_operator_cost for each tuple we expect to cache.
    2651             :      */
    2652      290258 :     total_cost += cpu_tuple_cost + cpu_operator_cost * tuples;
    2653             : 
    2654             :     /*
    2655             :      * Getting the first row must also be proportioned according to the
    2656             :      * expected cache hit ratio.
    2657             :      */
    2658      290258 :     startup_cost = input_startup_cost * (1.0 - hit_ratio);
    2659             : 
    2660             :     /*
    2661             :      * Additionally we charge a cpu_tuple_cost to account for cache lookups,
    2662             :      * which we'll do regardless of whether it was a cache hit or not.
    2663             :      */
    2664      290258 :     startup_cost += cpu_tuple_cost;
    2665             : 
    2666      290258 :     *rescan_startup_cost = startup_cost;
    2667      290258 :     *rescan_total_cost = total_cost;
    2668      290258 : }
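
/*
 * The hit-ratio formula above, worked with invented numbers (not part
 * of costsize.c): 1000 expected calls, 100 distinct parameter sets,
 * but cache room for only 50 entries at once.
 */
#include <stdio.h>

int
main(void)
{
    double      calls = 1000.0;
    double      ndistinct = 100.0;
    double      est_cache_entries = 50.0;

    double      evict_ratio = 1.0 -
        (est_cache_entries < ndistinct ? est_cache_entries : ndistinct) /
        ndistinct;
    double      hit_ratio = ((calls - ndistinct) / calls) *
        (est_cache_entries /
         (ndistinct > est_cache_entries ? ndistinct : est_cache_entries));

    /* 90% of calls repeat a seen key, but only half the keys fit */
    printf("evict %.2f, hit %.2f\n", evict_ratio, hit_ratio);
    return 0;
}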
    2669             : 
    2670             : /*
    2671             :  * cost_agg
    2672             :  *      Determines and returns the cost of performing an Agg plan node,
    2673             :  *      including the cost of its input.
    2674             :  *
    2675             :  * aggcosts can be NULL when there are no actual aggregate functions (i.e.,
    2676             :  * we are using a hashed Agg node just to do grouping).
    2677             :  *
    2678             :  * Note: when aggstrategy == AGG_SORTED, caller must ensure that input costs
    2679             :  * are for appropriately-sorted input.
    2680             :  */
    2681             : void
    2682       69284 : cost_agg(Path *path, PlannerInfo *root,
    2683             :          AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
    2684             :          int numGroupCols, double numGroups,
    2685             :          List *quals,
    2686             :          int disabled_nodes,
    2687             :          Cost input_startup_cost, Cost input_total_cost,
    2688             :          double input_tuples, double input_width)
    2689             : {
    2690             :     double      output_tuples;
    2691             :     Cost        startup_cost;
    2692             :     Cost        total_cost;
    2693       69284 :     const AggClauseCosts dummy_aggcosts = {0};
    2694             : 
    2695             :     /* Use all-zero per-aggregate costs if NULL is passed */
    2696       69284 :     if (aggcosts == NULL)
    2697             :     {
    2698             :         Assert(aggstrategy == AGG_HASHED);
    2699       12584 :         aggcosts = &dummy_aggcosts;
    2700             :     }
    2701             : 
    2702             :     /*
    2703             :      * The transCost.per_tuple component of aggcosts should be charged once
    2704             :      * per input tuple, corresponding to the costs of evaluating the aggregate
    2705             :      * transfns and their input expressions. The finalCost.per_tuple component
    2706             :      * is charged once per output tuple, corresponding to the costs of
    2707             :      * evaluating the finalfns.  Startup costs are of course charged but once.
    2708             :      *
    2709             :      * If we are grouping, we charge an additional cpu_operator_cost per
    2710             :      * grouping column per input tuple for grouping comparisons.
    2711             :      *
    2712             :      * We will produce a single output tuple if not grouping, and a tuple per
    2713             :      * group otherwise.  We charge cpu_tuple_cost for each output tuple.
    2714             :      *
    2715             :      * Note: in this cost model, AGG_SORTED and AGG_HASHED have exactly the
    2716             :      * same total CPU cost, but AGG_SORTED has lower startup cost.  If the
    2717             :      * input path is already sorted appropriately, AGG_SORTED should be
    2718             :      * preferred (since it has no risk of memory overflow).  This will happen
    2719             :      * as long as the computed total costs are indeed exactly equal --- but if
    2720             :      * there's roundoff error we might do the wrong thing.  So be sure that
    2721             :      * the computations below form the same intermediate values in the same
    2722             :      * order.
    2723             :      */
    2724       69284 :     if (aggstrategy == AGG_PLAIN)
    2725             :     {
    2726       37096 :         startup_cost = input_total_cost;
    2727       37096 :         startup_cost += aggcosts->transCost.startup;
    2728       37096 :         startup_cost += aggcosts->transCost.per_tuple * input_tuples;
    2729       37096 :         startup_cost += aggcosts->finalCost.startup;
    2730       37096 :         startup_cost += aggcosts->finalCost.per_tuple;
    2731             :         /* we aren't grouping */
    2732       37096 :         total_cost = startup_cost + cpu_tuple_cost;
    2733       37096 :         output_tuples = 1;
    2734             :     }
    2735       32188 :     else if (aggstrategy == AGG_SORTED || aggstrategy == AGG_MIXED)
    2736             :     {
    2737             :         /* Here we are able to deliver output on-the-fly */
    2738       11244 :         startup_cost = input_startup_cost;
    2739       11244 :         total_cost = input_total_cost;
    2740       11244 :         if (aggstrategy == AGG_MIXED && !enable_hashagg)
    2741         456 :             ++disabled_nodes;
    2742             :         /* calcs phrased this way to match HASHED case, see note above */
    2743       11244 :         total_cost += aggcosts->transCost.startup;
    2744       11244 :         total_cost += aggcosts->transCost.per_tuple * input_tuples;
    2745       11244 :         total_cost += (cpu_operator_cost * numGroupCols) * input_tuples;
    2746       11244 :         total_cost += aggcosts->finalCost.startup;
    2747       11244 :         total_cost += aggcosts->finalCost.per_tuple * numGroups;
    2748       11244 :         total_cost += cpu_tuple_cost * numGroups;
    2749       11244 :         output_tuples = numGroups;
    2750             :     }
    2751             :     else
    2752             :     {
    2753             :         /* must be AGG_HASHED */
    2754       20944 :         startup_cost = input_total_cost;
    2755       20944 :         if (!enable_hashagg)
    2756        1578 :             ++disabled_nodes;
    2757       20944 :         startup_cost += aggcosts->transCost.startup;
    2758       20944 :         startup_cost += aggcosts->transCost.per_tuple * input_tuples;
    2759             :         /* cost of computing hash value */
    2760       20944 :         startup_cost += (cpu_operator_cost * numGroupCols) * input_tuples;
    2761       20944 :         startup_cost += aggcosts->finalCost.startup;
    2762             : 
    2763       20944 :         total_cost = startup_cost;
    2764       20944 :         total_cost += aggcosts->finalCost.per_tuple * numGroups;
    2765             :         /* cost of retrieving from hash table */
    2766       20944 :         total_cost += cpu_tuple_cost * numGroups;
    2767       20944 :         output_tuples = numGroups;
    2768             :     }
    2769             : 
    2770             :     /*
    2771             :      * Add the disk costs of hash aggregation that spills to disk.
    2772             :      *
    2773             :      * Groups that go into the hash table stay in memory until finalized, so
    2774             :      * spilling and reprocessing tuples doesn't incur additional invocations
    2775             :      * of transCost or finalCost. Furthermore, the computed hash value is
    2776             :      * stored with the spilled tuples, so we don't incur extra invocations of
    2777             :      * the hash function.
    2778             :      *
    2779             :      * Hash Agg begins returning tuples after the first batch is complete.
    2780             :      * Accrue writes (spilled tuples) to startup_cost and to total_cost;
    2781             :      * accrue reads only to total_cost.
    2782             :      */
    2783       69284 :     if (aggstrategy == AGG_HASHED || aggstrategy == AGG_MIXED)
    2784             :     {
    2785             :         double      pages;
    2786       21860 :         double      pages_written = 0.0;
    2787       21860 :         double      pages_read = 0.0;
    2788             :         double      spill_cost;
    2789             :         double      hashentrysize;
    2790             :         double      nbatches;
    2791             :         Size        mem_limit;
    2792             :         uint64      ngroups_limit;
    2793             :         int         num_partitions;
    2794             :         int         depth;
    2795             : 
    2796             :         /*
    2797             :          * Estimate number of batches based on the computed limits. If less
    2798             :          * than or equal to one, all groups are expected to fit in memory;
    2799             :          * otherwise we expect to spill.
    2800             :          */
    2801       21860 :         hashentrysize = hash_agg_entry_size(list_length(root->aggtransinfos),
    2802             :                                             input_width,
    2803             :                                             aggcosts->transitionSpace);
    2804       21860 :         hash_agg_set_limits(hashentrysize, numGroups, 0, &mem_limit,
    2805             :                             &ngroups_limit, &num_partitions);
    2806             : 
    2807       21860 :         nbatches = Max((numGroups * hashentrysize) / mem_limit,
    2808             :                        numGroups / ngroups_limit);
    2809             : 
    2810       21860 :         nbatches = Max(ceil(nbatches), 1.0);
    2811       21860 :         num_partitions = Max(num_partitions, 2);
    2812             : 
    2813             :         /*
    2814             :          * The number of partitions can change at different levels of
    2815             :          * recursion; but for the purposes of this calculation assume it stays
    2816             :          * constant.
    2817             :          */
    2818       21860 :         depth = ceil(log(nbatches) / log(num_partitions));
    2819             : 
    2820             :         /*
    2821             :          * Estimate number of pages read and written. For each level of
    2822             :          * recursion, a tuple must be written and then later read.
    2823             :          */
    2824       21860 :         pages = relation_byte_size(input_tuples, input_width) / BLCKSZ;
    2825       21860 :         pages_written = pages_read = pages * depth;
    2826             : 
    2827             :         /*
    2828             :          * HashAgg has somewhat worse IO behavior than Sort on typical
    2829             :          * hardware/OS combinations. Account for this with a generic penalty.
    2830             :          */
    2831       21860 :         pages_read *= 2.0;
    2832       21860 :         pages_written *= 2.0;
    2833             : 
    2834       21860 :         startup_cost += pages_written * random_page_cost;
    2835       21860 :         total_cost += pages_written * random_page_cost;
    2836       21860 :         total_cost += pages_read * seq_page_cost;
    2837             : 
    2838             :         /* account for CPU cost of spilling a tuple and reading it back */
    2839       21860 :         spill_cost = depth * input_tuples * 2.0 * cpu_tuple_cost;
    2840       21860 :         startup_cost += spill_cost;
    2841       21860 :         total_cost += spill_cost;
    2842             :     }
    2843             : 
    2844             :     /*
    2845             :      * If there are quals (HAVING quals), account for their cost and
    2846             :      * selectivity.
    2847             :      */
    2848       69284 :     if (quals)
    2849             :     {
    2850             :         QualCost    qual_cost;
    2851             : 
    2852        4364 :         cost_qual_eval(&qual_cost, quals, root);
    2853        4364 :         startup_cost += qual_cost.startup;
    2854        4364 :         total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
    2855             : 
    2856        4364 :         output_tuples = clamp_row_est(output_tuples *
    2857        4364 :                                       clauselist_selectivity(root,
    2858             :                                                              quals,
    2859             :                                                              0,
    2860             :                                                              JOIN_INNER,
    2861             :                                                              NULL));
    2862             :     }
    2863             : 
    2864       69284 :     path->rows = output_tuples;
    2865       69284 :     path->disabled_nodes = disabled_nodes;
    2866       69284 :     path->startup_cost = startup_cost;
    2867       69284 :     path->total_cost = total_cost;
    2868       69284 : }
    2869             : 
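To make the spilled-HashAgg arithmetic above easier to check by hand, here is a
minimal standalone sketch of the same calculation.  All inputs are assumed,
hard-coded stand-ins: in the real planner they come from hash_agg_entry_size(),
hash_agg_set_limits(), relation_byte_size() and the cost GUCs.

    /* sketch of the HashAgg spill-cost arithmetic; compile with: cc sketch.c -lm */
    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* assumed inputs, not taken from any real plan */
        double  numGroups = 1e6;                /* estimated groups */
        double  hashentrysize = 64.0;           /* bytes per hash entry */
        double  mem_limit = 4.0 * 1024 * 1024;  /* ~work_mem, in bytes */
        double  ngroups_limit = mem_limit / hashentrysize;
        double  num_partitions = 4.0;   /* spill fan-out; clamped to >= 2 above */
        double  input_tuples = 5e6;
        double  input_width = 32.0;
        double  blcksz = 8192.0;                /* BLCKSZ */
        double  cpu_tuple_cost = 0.01;          /* default GUC value */

        /* batches needed, by memory limit and by group-count limit */
        double  nbatches = fmax((numGroups * hashentrysize) / mem_limit,
                                numGroups / ngroups_limit);

        nbatches = fmax(ceil(nbatches), 1.0);

        /* levels of recursive spilling: log_{num_partitions}(nbatches) */
        double  depth = ceil(log(nbatches) / log(num_partitions));

        /* crude stand-in for relation_byte_size(): ignores tuple overhead */
        double  pages = input_tuples * input_width / blcksz;
        double  pages_written = pages * depth;
        double  pages_read = pages * depth;

        /* generic 2x penalty for HashAgg's worse I/O pattern than Sort */
        pages_written *= 2.0;
        pages_read *= 2.0;

        /* CPU cost of spilling each tuple and reading it back, per level */
        double  spill_cost = depth * input_tuples * 2.0 * cpu_tuple_cost;

        printf("nbatches=%.0f depth=%.0f pages_written=%.0f spill_cost=%.0f\n",
               nbatches, depth, pages_written, spill_cost);
        return 0;
    }

With these numbers the sketch prints nbatches=16, depth=2, pages_written=78125
and spill_cost=200000; both the page traffic and the per-tuple spill CPU charge
scale linearly with the recursion depth.
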
    2870             : /*
    2871             :  * get_windowclause_startup_tuples
    2872             :  *      Estimate how many tuples we'll need to fetch from a WindowAgg's
    2873             :  *      subnode before we can output the first WindowAgg tuple.
    2874             :  *
    2875             :  * How many tuples need to be read depends on the WindowClause.  For example,
    2876             :  * a WindowClause with no PARTITION BY and no ORDER BY requires that all
    2877             :  * subnode tuples are read and aggregated before the WindowAgg can output
    2878             :  * anything.  If there's a PARTITION BY, then we only need to look at tuples
    2879             :  * in the first partition.  Here we attempt to estimate just how many
    2880             :  * 'input_tuples' the WindowAgg will need to read for the given WindowClause
    2881             :  * before the first tuple can be output.
    2882             :  */
    2883             : static double
    2884        2754 : get_windowclause_startup_tuples(PlannerInfo *root, WindowClause *wc,
    2885             :                                 double input_tuples)
    2886             : {
    2887        2754 :     int         frameOptions = wc->frameOptions;
    2888             :     double      partition_tuples;
    2889             :     double      return_tuples;
    2890             :     double      peer_tuples;
    2891             : 
    2892             :     /*
    2893             :      * First, figure out how many partitions there are likely to be and set
    2894             :      * partition_tuples according to that estimate.
    2895             :      */
    2896        2754 :     if (wc->partitionClause != NIL)
    2897             :     {
    2898             :         double      num_partitions;
    2899         716 :         List       *partexprs = get_sortgrouplist_exprs(wc->partitionClause,
    2900         716 :                                                         root->parse->targetList);
    2901             : 
    2902         716 :         num_partitions = estimate_num_groups(root, partexprs, input_tuples,
    2903             :                                              NULL, NULL);
    2904         716 :         list_free(partexprs);
    2905             : 
    2906         716 :         partition_tuples = input_tuples / num_partitions;
    2907             :     }
    2908             :     else
    2909             :     {
    2910             :         /* all tuples belong to the same partition */
    2911        2038 :         partition_tuples = input_tuples;
    2912             :     }
    2913             : 
    2914             :     /* estimate the number of tuples in each peer group */
    2915        2754 :     if (wc->orderClause != NIL)
    2916             :     {
    2917             :         double      num_groups;
    2918             :         List       *orderexprs;
    2919             : 
    2920        2274 :         orderexprs = get_sortgrouplist_exprs(wc->orderClause,
    2921        2274 :                                              root->parse->targetList);
    2922             : 
    2923             :             /* estimate how many peer groups there are in the partition */
    2924        2274 :         num_groups = estimate_num_groups(root, orderexprs,
    2925             :                                          partition_tuples, NULL,
    2926             :                                          NULL);
    2927        2274 :         list_free(orderexprs);
    2928        2274 :         peer_tuples = partition_tuples / num_groups;
    2929             :     }
    2930             :     else
    2931             :     {
    2932             :         /* no ORDER BY so only 1 tuple belongs in each peer group */
    2933         480 :         peer_tuples = 1.0;
    2934             :     }
    2935             : 
    2936        2754 :     if (frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)
    2937             :     {
    2938             :         /* include all partition rows */
    2939         346 :         return_tuples = partition_tuples;
    2940             :     }
    2941        2408 :     else if (frameOptions & FRAMEOPTION_END_CURRENT_ROW)
    2942             :     {
    2943        1418 :         if (frameOptions & FRAMEOPTION_ROWS)
    2944             :         {
    2945             :             /* just count the current row */
    2946         608 :             return_tuples = 1.0;
    2947             :         }
    2948         810 :         else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS))
    2949             :         {
    2950             :             /*
    2951             :              * When in RANGE/GROUPS mode, it's more complex.  If there's no
    2952             :              * ORDER BY, then all rows in the partition are peers, otherwise
    2953             :              * we'll need to read the first group of peers.
    2954             :              */
    2955         810 :             if (wc->orderClause == NIL)
    2956         308 :                 return_tuples = partition_tuples;
    2957             :             else
    2958         502 :                 return_tuples = peer_tuples;
    2959             :         }
    2960             :         else
    2961             :         {
    2962             :             /*
    2963             :              * Something new we don't support yet?  This needs attention.
    2964             :              * We'll just return 1.0 in the meantime.
    2965             :              */
    2966             :             Assert(false);
    2967           0 :             return_tuples = 1.0;
    2968             :         }
    2969             :     }
    2970         990 :     else if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING)
    2971             :     {
    2972             :         /*
    2973             :          * A frame ending at N PRECEDING cannot extend past the current
    2974             :          * row, since N can be 0 but not negative.  So we'll just assume
    2975             :          * only the current row needs to be read to fetch the first
    2976             :          * WindowAgg row.
    2977             :          */
    2978         108 :         return_tuples = 1.0;
    2979             :     }
    2980         882 :     else if (frameOptions & FRAMEOPTION_END_OFFSET_FOLLOWING)
    2981             :     {
    2982         882 :         Const      *endOffset = (Const *) wc->endOffset;
    2983             :         double      end_offset_value;
    2984             : 
    2985             :         /* try to figure out the value specified in the endOffset */
    2986         882 :         if (IsA(endOffset, Const))
    2987             :         {
    2988         882 :             if (endOffset->constisnull)
    2989             :             {
    2990             :                 /*
    2991             :                  * NULLs are not allowed, but currently, there's no code to
    2992             :                  * error out if there's a NULL Const.  We'll only discover
    2993             :                  * this during execution.  For now, just pretend everything is
    2994             :                  * fine and assume that just the first row/range/group will be
    2995             :                  * needed.
    2996             :                  */
    2997           0 :                 end_offset_value = 1.0;
    2998             :             }
    2999             :             else
    3000             :             {
    3001         882 :                 switch (endOffset->consttype)
    3002             :                 {
    3003          24 :                     case INT2OID:
    3004          24 :                         end_offset_value =
    3005          24 :                             (double) DatumGetInt16(endOffset->constvalue);
    3006          24 :                         break;
    3007         132 :                     case INT4OID:
    3008         132 :                         end_offset_value =
    3009         132 :                             (double) DatumGetInt32(endOffset->constvalue);
    3010         132 :                         break;
    3011         384 :                     case INT8OID:
    3012         384 :                         end_offset_value =
    3013         384 :                             (double) DatumGetInt64(endOffset->constvalue);
    3014         384 :                         break;
    3015         342 :                     default:
    3016         342 :                         end_offset_value =
    3017         342 :                             partition_tuples / peer_tuples *
    3018             :                             DEFAULT_INEQ_SEL;
    3019         342 :                         break;
    3020             :                 }
    3021             :             }
    3022             :         }
    3023             :         else
    3024             :         {
    3025             :             /*
    3026             :              * When the end bound is not a Const, we can only guess, so we
    3027             :              * just fall back on DEFAULT_INEQ_SEL.
    3028             :              */
    3029           0 :             end_offset_value =
    3030           0 :                 partition_tuples / peer_tuples * DEFAULT_INEQ_SEL;
    3031             :         }
    3032             : 
    3033         882 :         if (frameOptions & FRAMEOPTION_ROWS)
    3034             :         {
    3035             :             /* include the N FOLLOWING and the current row */
    3036         222 :             return_tuples = end_offset_value + 1.0;
    3037             :         }
    3038         660 :         else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS))
    3039             :         {
    3040             :             /* include N FOLLOWING ranges/groups and the initial range/group */
    3041         660 :             return_tuples = peer_tuples * (end_offset_value + 1.0);
    3042             :         }
    3043             :         else
    3044             :         {
    3045             :             /*
    3046             :              * Something new we don't support yet?  This needs attention.
    3047             :              * We'll just return 1.0 in the meantime.
    3048             :              */
    3049             :             Assert(false);
    3050           0 :             return_tuples = 1.0;
    3051             :         }
    3052             :     }
    3053             :     else
    3054             :     {
    3055             :         /*
    3056             :          * Something new we don't support yet?  This needs attention.  We'll
    3057             :          * just return 1.0 in the meantime.
    3058             :          */
    3059             :         Assert(false);
    3060           0 :         return_tuples = 1.0;
    3061             :     }
    3062             : 
    3063        2754 :     if (wc->partitionClause != NIL || wc->orderClause != NIL)
    3064             :     {
    3065             :         /*
    3066             :          * Cap the return value to the estimated partition tuples and account
    3067             :          * for the extra tuple WindowAgg will need to read to confirm the next
    3068             :          * tuple does not belong to the same partition or peer group.
    3069             :          */
    3070        2474 :         return_tuples = Min(return_tuples + 1.0, partition_tuples);
    3071             :     }
    3072             :     else
    3073             :     {
    3074             :         /*
    3075             :          * Cap the return value so it's never higher than the expected tuples
    3076             :          * in the partition.
    3077             :          */
    3078         280 :         return_tuples = Min(return_tuples, partition_tuples);
    3079             :     }
    3080             : 
    3081             :     /*
    3082             :      * We needn't worry about any EXCLUDE options as those only exclude rows
    3083             :      * from being aggregated, not from being read from the WindowAgg's
    3084             :      * subnode.
    3085             :      */
    3086             : 
    3087        2754 :     return clamp_row_est(return_tuples);
    3088             : }
    3089             : 
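As a worked illustration, the sketch below reimplements just one common case of
the estimate above: ROWS BETWEEN UNBOUNDED PRECEDING AND N FOLLOWING with both
PARTITION BY and ORDER BY present.  num_partitions is assumed rather than taken
from estimate_num_groups(), and ceil() stands in for clamp_row_est().

    /* sketch: startup tuples for a ROWS ... N FOLLOWING frame; cc sketch.c -lm */
    #include <math.h>
    #include <stdio.h>

    static double
    startup_tuples_rows_n_following(double input_tuples,
                                    double num_partitions,  /* assumed estimate */
                                    double end_offset_n)
    {
        double  partition_tuples = input_tuples / num_partitions;

        /* ROWS mode: the current row plus the N following rows */
        double  return_tuples = end_offset_n + 1.0;

        /*
         * One extra tuple must be read to confirm that the next tuple starts
         * a new partition or peer group; then cap at the partition size.
         */
        return_tuples = fmin(return_tuples + 1.0, partition_tuples);

        return ceil(return_tuples);     /* stand-in for clamp_row_est() */
    }

    int
    main(void)
    {
        /* e.g. 1M input rows, ~100 partitions, frame end "3 FOLLOWING" */
        printf("%.0f\n", startup_tuples_rows_n_following(1e6, 100.0, 3.0));
        return 0;   /* prints 5 */
    }
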
    3090             : /*
    3091             :  * cost_windowagg
    3092             :  *      Determines and returns the cost of performing a WindowAgg plan node,
    3093             :  *      including the cost of its input.
    3094             :  *
    3095             :  * Input is assumed already properly sorted.
    3096             :  */
    3097             : void
    3098        2754 : cost_windowagg(Path *path, PlannerInfo *root,
    3099             :                List *windowFuncs, WindowClause *winclause,
    3100             :                int input_disabled_nodes,
    3101             :                Cost input_startup_cost, Cost input_total_cost,
    3102             :                double input_tuples)
    3103             : {
    3104             :     Cost        startup_cost;
    3105             :     Cost        total_cost;
    3106             :     double      startup_tuples;
    3107             :     int         numPartCols;
    3108             :     int         numOrderCols;
    3109             :     ListCell   *lc;
    3110             : 
    3111        2754 :     numPartCols = list_length(winclause->partitionClause);
    3112        2754 :     numOrderCols = list_length(winclause->orderClause);
    3113             : 
    3114        2754 :     startup_cost = input_startup_cost;
    3115        2754 :     total_cost = input_total_cost;
    3116             : 
    3117             :     /*
    3118             :      * Window functions are assumed to cost their stated execution cost, plus
    3119             :      * the cost of evaluating their input expressions, per tuple.  Since they
    3120             :      * may in fact evaluate their inputs at multiple rows during each cycle,
    3121             :      * this could be a drastic underestimate; but without a way to know how
    3122             :      * many rows the window function will fetch, it's hard to do better.  In
    3123             :      * any case, it's a good estimate for all the built-in window functions,
    3124             :      * so we'll just do this for now.
    3125             :      */
    3126        6246 :     foreach(lc, windowFuncs)
    3127             :     {
    3128        3492 :         WindowFunc *wfunc = lfirst_node(WindowFunc, lc);
    3129             :         Cost        wfunccost;
    3130             :         QualCost    argcosts;
    3131             : 
    3132        3492 :         argcosts.startup = argcosts.per_tuple = 0;
    3133        3492 :         add_function_cost(root, wfunc->winfnoid, (Node *) wfunc,
    3134             :                           &argcosts);
    3135        3492 :         startup_cost += argcosts.startup;
    3136        3492 :         wfunccost = argcosts.per_tuple;
    3137             : 
    3138             :         /* also add the input expressions' cost to per-input-row costs */
    3139        3492 :         cost_qual_eval_node(&argcosts, (Node *) wfunc->args, root);
    3140        3492 :         startup_cost += argcosts.startup;
    3141        3492 :         wfunccost += argcosts.per_tuple;
    3142             : 
    3143             :         /*
    3144             :          * Add the filter's cost to per-input-row costs.  XXX We should reduce
    3145             :          * input expression costs according to filter selectivity.
    3146             :          */
    3147        3492 :         cost_qual_eval_node(&argcosts, (Node *) wfunc->aggfilter, root);
    3148        3492 :         startup_cost += argcosts.startup;
    3149        3492 :         wfunccost += argcosts.per_tuple;
    3150             : 
    3151        3492 :         total_cost += wfunccost * input_tuples;
    3152             :     }
    3153             : 
    3154             :     /*
    3155             :      * We also charge cpu_operator_cost per grouping column per tuple for
    3156             :      * grouping comparisons, plus cpu_tuple_cost per tuple for general
    3157             :      * overhead.
    3158             :      *
    3159             :      * XXX this neglects costs of spooling the data to disk when it overflows
    3160             :      * work_mem.  Sooner or later that should get accounted for.
    3161             :      */
    3162        2754 :     total_cost += cpu_operator_cost * (numPartCols + numOrderCols) * input_tuples;
    3163        2754 :     total_cost += cpu_tuple_cost * input_tuples;
    3164             : 
    3165        2754 :     path->rows = input_tuples;
    3166        2754 :     path->disabled_nodes = input_disabled_nodes;
    3167        2754 :     path->startup_cost = startup_cost;
    3168        2754 :     path->total_cost = total_cost;
    3169             : 
    3170             :     /*
    3171             :      * Also, take into account how many tuples we need to read from the
    3172             :      * subnode in order to produce the first tuple from the WindowAgg.  To do
    3173             :      * this we prorate the run cost (total cost not including startup cost)
    3174             :      * over the estimated startup tuples.  We already included the startup
    3175             :      * cost of the subnode, so we only need to do this when the estimated
    3176             :      * number of startup tuples exceeds 1.0.
    3177             :      */
    3178        2754 :     startup_tuples = get_windowclause_startup_tuples(root, winclause,
    3179             :                                                      input_tuples);
    3180             : 
    3181        2754 :     if (startup_tuples > 1.0)
    3182        2466 :         path->startup_cost += (total_cost - startup_cost) / input_tuples *
    3183        2466 :             (startup_tuples - 1.0);
    3184        2754 : }
    3185             : 
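The final proration step above is easy to replay by hand; the sketch below does
so with assumed costs.

    /* sketch: prorating WindowAgg run cost into startup cost */
    #include <stdio.h>

    int
    main(void)
    {
        double  startup_cost = 100.0;   /* already includes subnode startup */
        double  total_cost = 1100.0;
        double  input_tuples = 1000.0;
        double  startup_tuples = 51.0;  /* assumed estimate from the function above */

        if (startup_tuples > 1.0)
            startup_cost += (total_cost - startup_cost) / input_tuples *
                (startup_tuples - 1.0);

        /* run cost is 1.0 per input tuple, so 50 extra rows add 50.0 */
        printf("startup_cost=%.1f\n", startup_cost);    /* prints 150.0 */
        return 0;
    }
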
    3186             : /*
    3187             :  * cost_group
    3188             :  *      Determines and returns the cost of performing a Group plan node,
    3189             :  *      including the cost of its input.
    3190             :  *
    3191             :  * Note: caller must ensure that input costs are for appropriately-sorted
    3192             :  * input.
    3193             :  */
    3194             : void
    3195        1214 : cost_group(Path *path, PlannerInfo *root,
    3196             :            int numGroupCols, double numGroups,
    3197             :            List *quals,
    3198             :            int input_disabled_nodes,
    3199             :            Cost input_startup_cost, Cost input_total_cost,
    3200             :            double input_tuples)
    3201             : {
    3202             :     double      output_tuples;
    3203             :     Cost        startup_cost;
    3204             :     Cost        total_cost;
    3205             : 
    3206        1214 :     output_tuples = numGroups;
    3207        1214 :     startup_cost = input_startup_cost;
    3208        1214 :     total_cost = input_total_cost;
    3209             : 
    3210             :     /*
    3211             :      * Charge one cpu_operator_cost per comparison per input tuple. We assume
    3212             :      * all columns get compared for most of the tuples.
    3213             :      */
    3214        1214 :     total_cost += cpu_operator_cost * input_tuples * numGroupCols;
    3215             : 
    3216             :     /*
    3217             :      * If there are quals (HAVING quals), account for their cost and
    3218             :      * selectivity.
    3219             :      */
    3220        1214 :     if (quals)
    3221             :     {
    3222             :         QualCost    qual_cost;
    3223             : 
    3224           0 :         cost_qual_eval(&qual_cost, quals, root);
    3225           0 :         startup_cost += qual_cost.startup;
    3226           0 :         total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
    3227             : 
    3228           0 :         output_tuples = clamp_row_est(output_tuples *
    3229           0 :                                       clauselist_selectivity(root,
    3230             :                                                              quals,
    3231             :                                                              0,
    3232             :                                                              JOIN_INNER,
    3233             :                                                              NULL));
    3234             :     }
    3235             : 
    3236        1214 :     path->rows = output_tuples;
    3237        1214 :     path->disabled_nodes = input_disabled_nodes;
    3238        1214 :     path->startup_cost = startup_cost;
    3239        1214 :     path->total_cost = total_cost;
    3240        1214 : }
    3241             : 
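In isolation the Group node's comparison charge is a single multiplication; the
sketch below evaluates it with the default cpu_operator_cost and assumed inputs.

    /* sketch: Group node comparison cost */
    #include <stdio.h>

    int
    main(void)
    {
        double  cpu_operator_cost = 0.0025; /* default GUC value */
        double  input_tuples = 100000.0;    /* assumed */
        int     numGroupCols = 3;           /* assumed */

        /* one comparison per grouping column per input tuple */
        double  cmp_cost = cpu_operator_cost * input_tuples * numGroupCols;

        printf("comparison cost = %.1f\n", cmp_cost);   /* prints 750.0 */
        return 0;
    }
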
    3242             : /*
    3243             :  * initial_cost_nestloop
    3244             :  *    Preliminary estimate of the cost of a nestloop join path.
    3245             :  *
    3246             :  * This must quickly produce lower-bound estimates of the path's startup and
    3247             :  * total costs.  If we are unable to eliminate the proposed path from
    3248             :  * consideration using the lower bounds, final_cost_nestloop will be called
    3249             :  * to obtain the final estimates.
    3250             :  *
    3251             :  * The exact division of labor between this function and final_cost_nestloop
    3252             :  * is private to them, and represents a tradeoff between speed of the initial
    3253             :  * estimate and getting a tight lower bound.  We choose to not examine the
    3254             :  * join quals here, since that's by far the most expensive part of the
    3255             :  * calculations.  The end result is that CPU-cost considerations must be
    3256             :  * left for the second phase; and for SEMI/ANTI joins, we must also postpone
    3257             :  * incorporation of the inner path's run cost.
    3258             :  *
    3259             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    3260             :  *      other data to be used by final_cost_nestloop
    3261             :  * 'jointype' is the type of join to be performed
    3262             :  * 'outer_path' is the outer input to the join
    3263             :  * 'inner_path' is the inner input to the join
    3264             :  * 'extra' contains miscellaneous information about the join
    3265             :  */
    3266             : void
    3267     2877214 : initial_cost_nestloop(PlannerInfo *root, JoinCostWorkspace *workspace,
    3268             :                       JoinType jointype,
    3269             :                       Path *outer_path, Path *inner_path,
    3270             :                       JoinPathExtraData *extra)
    3271             : {
    3272             :     int         disabled_nodes;
    3273     2877214 :     Cost        startup_cost = 0;
    3274     2877214 :     Cost        run_cost = 0;
    3275     2877214 :     double      outer_path_rows = outer_path->rows;
    3276             :     Cost        inner_rescan_start_cost;
    3277             :     Cost        inner_rescan_total_cost;
    3278             :     Cost        inner_run_cost;
    3279             :     Cost        inner_rescan_run_cost;
    3280             : 
    3281             :     /* Count up disabled nodes. */
    3282     2877214 :     disabled_nodes = enable_nestloop ? 0 : 1;
    3283     2877214 :     disabled_nodes += inner_path->disabled_nodes;
    3284     2877214 :     disabled_nodes += outer_path->disabled_nodes;
    3285             : 
    3286             :     /* estimate costs to rescan the inner relation */
    3287     2877214 :     cost_rescan(root, inner_path,
    3288             :                 &inner_rescan_start_cost,
    3289             :                 &inner_rescan_total_cost);
    3290             : 
    3291             :     /* cost of source data */
    3292             : 
    3293             :     /*
    3294             :      * NOTE: clearly, we must pay both outer and inner paths' startup_cost
    3295             :      * before we can start returning tuples, so the join's startup cost is
    3296             :      * their sum.  We'll also pay the inner path's rescan startup cost
    3297             :      * multiple times.
    3298             :      */
    3299     2877214 :     startup_cost += outer_path->startup_cost + inner_path->startup_cost;
    3300     2877214 :     run_cost += outer_path->total_cost - outer_path->startup_cost;
    3301     2877214 :     if (outer_path_rows > 1)
    3302     2036094 :         run_cost += (outer_path_rows - 1) * inner_rescan_start_cost;
    3303             : 
    3304     2877214 :     inner_run_cost = inner_path->total_cost - inner_path->startup_cost;
    3305     2877214 :     inner_rescan_run_cost = inner_rescan_total_cost - inner_rescan_start_cost;
    3306             : 
    3307     2877214 :     if (jointype == JOIN_SEMI || jointype == JOIN_ANTI ||
    3308     2818798 :         extra->inner_unique)
    3309             :     {
    3310             :         /*
    3311             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3312             :          * executor will stop after the first match.
    3313             :          *
    3314             :          * Getting decent estimates requires inspection of the join quals,
    3315             :          * which we choose to postpone to final_cost_nestloop.
    3316             :          */
    3317             : 
    3318             :         /* Save private data for final_cost_nestloop */
    3319     1326212 :         workspace->inner_run_cost = inner_run_cost;
    3320     1326212 :         workspace->inner_rescan_run_cost = inner_rescan_run_cost;
    3321             :     }
    3322             :     else
    3323             :     {
    3324             :         /* Normal case; we'll scan whole input rel for each outer row */
    3325     1551002 :         run_cost += inner_run_cost;
    3326     1551002 :         if (outer_path_rows > 1)
    3327     1107692 :             run_cost += (outer_path_rows - 1) * inner_rescan_run_cost;
    3328             :     }
    3329             : 
    3330             :     /* CPU costs left for later */
    3331             : 
    3332             :     /* Public result fields */
    3333     2877214 :     workspace->disabled_nodes = disabled_nodes;
    3334     2877214 :     workspace->startup_cost = startup_cost;
    3335     2877214 :     workspace->total_cost = startup_cost + run_cost;
    3336             :     /* Save private data for final_cost_nestloop */
    3337     2877214 :     workspace->run_cost = run_cost;
    3338     2877214 : }
    3339             : 
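The sketch below replays the normal (non-SEMI/ANTI, non-unique-inner) branch of
the arithmetic above with assumed path costs; the cost_rescan() outputs are
hard-coded stand-ins.

    /* sketch: preliminary nestloop costing, normal case */
    #include <stdio.h>

    int
    main(void)
    {
        /* assumed outer path: startup 0, total 100, 50 rows */
        double  outer_startup = 0.0, outer_total = 100.0, outer_rows = 50.0;
        /* assumed inner path: startup 5, total 25 */
        double  inner_startup = 5.0, inner_total = 25.0;
        /* assumed cost_rescan() outputs (rescans often cheaper, e.g. cached) */
        double  rescan_start = 0.0, rescan_total = 15.0;

        double  startup_cost = outer_startup + inner_startup;
        double  run_cost = outer_total - outer_startup;

        /* pay the inner rescan startup cost once per additional outer row */
        if (outer_rows > 1)
            run_cost += (outer_rows - 1) * rescan_start;

        /* whole inner rel scanned once, then rescanned per later outer row */
        run_cost += inner_total - inner_startup;
        if (outer_rows > 1)
            run_cost += (outer_rows - 1) * (rescan_total - rescan_start);

        printf("startup=%.1f total=%.1f\n",
               startup_cost, startup_cost + run_cost); /* 5.0 and 860.0 */
        return 0;
    }
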
    3340             : /*
    3341             :  * final_cost_nestloop
    3342             :  *    Final estimate of the cost and result size of a nestloop join path.
    3343             :  *
    3344             :  * 'path' is already filled in except for the rows and cost fields
    3345             :  * 'workspace' is the result from initial_cost_nestloop
    3346             :  * 'extra' contains miscellaneous information about the join
    3347             :  */
    3348             : void
    3349     1389978 : final_cost_nestloop(PlannerInfo *root, NestPath *path,
    3350             :                     JoinCostWorkspace *workspace,
    3351             :                     JoinPathExtraData *extra)
    3352             : {
    3353     1389978 :     Path       *outer_path = path->jpath.outerjoinpath;
    3354     1389978 :     Path       *inner_path = path->jpath.innerjoinpath;
    3355     1389978 :     double      outer_path_rows = outer_path->rows;
    3356     1389978 :     double      inner_path_rows = inner_path->rows;
    3357     1389978 :     Cost        startup_cost = workspace->startup_cost;
    3358     1389978 :     Cost        run_cost = workspace->run_cost;
    3359             :     Cost        cpu_per_tuple;
    3360             :     QualCost    restrict_qual_cost;
    3361             :     double      ntuples;
    3362             : 
    3363             :     /* Set the number of disabled nodes. */
    3364     1389978 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    3365             : 
    3366             :     /* Protect some assumptions below that rowcounts aren't zero */
    3367     1389978 :     if (outer_path_rows <= 0)
    3368           0 :         outer_path_rows = 1;
    3369     1389978 :     if (inner_path_rows <= 0)
    3370         678 :         inner_path_rows = 1;
    3371             :     /* Mark the path with the correct row estimate */
    3372     1389978 :     if (path->jpath.path.param_info)
    3373       26740 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    3374             :     else
    3375     1363238 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    3376             : 
    3377             :     /* For partial paths, scale row estimate. */
    3378     1389978 :     if (path->jpath.path.parallel_workers > 0)
    3379             :     {
    3380       12648 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    3381             : 
    3382       12648 :         path->jpath.path.rows =
    3383       12648 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    3384             :     }
    3385             : 
    3386             :     /* cost of inner-relation source data (we already dealt with outer rel) */
    3387             : 
    3388     1389978 :     if (path->jpath.jointype == JOIN_SEMI || path->jpath.jointype == JOIN_ANTI ||
    3389     1349660 :         extra->inner_unique)
    3390      910470 :     {
    3391             :         /*
    3392             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3393             :          * executor will stop after the first match.
    3394             :          */
    3395      910470 :         Cost        inner_run_cost = workspace->inner_run_cost;
    3396      910470 :         Cost        inner_rescan_run_cost = workspace->inner_rescan_run_cost;
    3397             :         double      outer_matched_rows;
    3398             :         double      outer_unmatched_rows;
    3399             :         Selectivity inner_scan_frac;
    3400             : 
    3401             :         /*
    3402             :          * For an outer-rel row that has at least one match, we can expect the
    3403             :          * inner scan to stop after a fraction 1/(match_count+1) of the inner
    3404             :          * rows, if the matches are evenly distributed.  Since they probably
    3405             :          * aren't quite evenly distributed, we apply a fuzz factor of 2.0 to
    3406             :          * that fraction.  (If we used a larger fuzz factor, we'd have to
    3407             :          * clamp inner_scan_frac to at most 1.0; but since match_count is at
    3408             :          * least 1, no such clamp is needed now.)
    3409             :          */
    3410      910470 :         outer_matched_rows = rint(outer_path_rows * extra->semifactors.outer_match_frac);
    3411      910470 :         outer_unmatched_rows = outer_path_rows - outer_matched_rows;
    3412      910470 :         inner_scan_frac = 2.0 / (extra->semifactors.match_count + 1.0);
    3413             : 
    3414             :         /*
    3415             :          * Compute number of tuples processed (not number emitted!).  First,
    3416             :          * account for successfully-matched outer rows.
    3417             :          */
    3418      910470 :         ntuples = outer_matched_rows * inner_path_rows * inner_scan_frac;
    3419             : 
    3420             :         /*
    3421             :          * Now we need to estimate the actual costs of scanning the inner
    3422             :          * relation, which may be quite a bit less than N times inner_run_cost
    3423             :          * due to early scan stops.  We consider two cases.  If the inner path
    3424             :          * is an indexscan using all the joinquals as indexquals, then an
    3425             :          * unmatched outer row results in an indexscan returning no rows,
    3426             :          * which is probably quite cheap.  Otherwise, the executor will have
    3427             :          * to scan the whole inner rel for an unmatched row; not so cheap.
    3428             :          */
    3429      910470 :         if (has_indexed_join_quals(path))
    3430             :         {
    3431             :             /*
    3432             :              * Successfully-matched outer rows will only require scanning
    3433             :              * inner_scan_frac of the inner relation.  In this case, we don't
    3434             :              * need to charge the full inner_run_cost even when that's more
    3435             :              * than inner_rescan_run_cost, because we can assume that none of
    3436             :              * the inner scans ever scan the whole inner relation.  So it's
    3437             :              * okay to assume that all the inner scan executions can be
    3438             :              * fractions of the full cost, even if materialization is reducing
    3439             :              * the rescan cost.  At this writing, it's impossible to get here
    3440             :              * for a materialized inner scan, so inner_run_cost and
    3441             :              * inner_rescan_run_cost will be the same anyway; but just in
    3442             :              * case, use inner_run_cost for the first matched tuple and
    3443             :              * inner_rescan_run_cost for additional ones.
    3444             :              */
    3445      149366 :             run_cost += inner_run_cost * inner_scan_frac;
    3446      149366 :             if (outer_matched_rows > 1)
    3447       21166 :                 run_cost += (outer_matched_rows - 1) * inner_rescan_run_cost * inner_scan_frac;
    3448             : 
    3449             :             /*
    3450             :              * Add the cost of inner-scan executions for unmatched outer rows.
    3451             :              * We estimate this as the same cost as returning the first tuple
    3452             :              * of a nonempty scan.  We consider that these are all rescans,
    3453             :              * since we used inner_run_cost once already.
    3454             :              */
    3455      149366 :             run_cost += outer_unmatched_rows *
    3456      149366 :                 inner_rescan_run_cost / inner_path_rows;
    3457             : 
    3458             :             /*
    3459             :              * We won't be evaluating any quals at all for unmatched rows, so
    3460             :              * don't add them to ntuples.
    3461             :              */
    3462             :         }
    3463             :         else
    3464             :         {
    3465             :             /*
    3466             :              * Here, a complicating factor is that rescans may be cheaper than
    3467             :              * first scans.  If we never scan all the way to the end of the
    3468             :              * inner rel, it might be (depending on the plan type) that we'd
    3469             :              * never pay the whole inner first-scan run cost.  However, it is
    3470             :              * difficult to estimate whether that will happen (and it could
    3471             :              * not happen if there are any unmatched outer rows!), so be
    3472             :              * conservative and always charge the whole first-scan cost once.
    3473             :              * We consider this charge to correspond to the first unmatched
    3474             :              * outer row, unless there isn't one in our estimate, in which
    3475             :              * case blame it on the first matched row.
    3476             :              */
    3477             : 
    3478             :             /* First, count all unmatched join tuples as being processed */
    3479      761104 :             ntuples += outer_unmatched_rows * inner_path_rows;
    3480             : 
    3481             :             /* Now add the forced full scan, and decrement appropriate count */
    3482      761104 :             run_cost += inner_run_cost;
    3483      761104 :             if (outer_unmatched_rows >= 1)
    3484      731382 :                 outer_unmatched_rows -= 1;
    3485             :             else
    3486       29722 :                 outer_matched_rows -= 1;
    3487             : 
    3488             :             /* Add inner run cost for additional outer tuples having matches */
    3489      761104 :             if (outer_matched_rows > 0)
    3490      266156 :                 run_cost += outer_matched_rows * inner_rescan_run_cost * inner_scan_frac;
    3491             : 
    3492             :             /* Add inner run cost for additional unmatched outer tuples */
    3493      761104 :             if (outer_unmatched_rows > 0)
    3494      504814 :                 run_cost += outer_unmatched_rows * inner_rescan_run_cost;
    3495             :         }
    3496             :     }
    3497             :     else
    3498             :     {
    3499             :         /* Normal-case source costs were included in preliminary estimate */
    3500             : 
    3501             :         /* Compute number of tuples processed (not number emitted!) */
    3502      479508 :         ntuples = outer_path_rows * inner_path_rows;
    3503             :     }
    3504             : 
    3505             :     /* CPU costs */
    3506     1389978 :     cost_qual_eval(&restrict_qual_cost, path->jpath.joinrestrictinfo, root);
    3507     1389978 :     startup_cost += restrict_qual_cost.startup;
    3508     1389978 :     cpu_per_tuple = cpu_tuple_cost + restrict_qual_cost.per_tuple;
    3509     1389978 :     run_cost += cpu_per_tuple * ntuples;
    3510             : 
    3511             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    3512     1389978 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    3513     1389978 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    3514             : 
    3515     1389978 :     path->jpath.path.startup_cost = startup_cost;
    3516     1389978 :     path->jpath.path.total_cost = startup_cost + run_cost;
    3517     1389978 : }
    3518             : 
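The early-stop math for SEMI/ANTI and unique-inner joins reduces to a few
lines; the sketch below shows the matched-rows portion with assumed semifactors
(in the real planner these come from compute_semi_anti_join_factors()).

    /* sketch: early-stop fraction for a SEMI-style nestloop */
    #include <stdio.h>

    int
    main(void)
    {
        double  outer_rows = 1000.0;        /* assumed */
        double  inner_rows = 500.0;         /* assumed */
        double  outer_match_frac = 0.4;     /* assumed: 40% of outer rows match */
        double  match_count = 3.0;          /* assumed matches per matched row */

        double  outer_matched = outer_rows * outer_match_frac;

        /* stop after ~1/(match_count+1) of the inner rows, fuzzed by 2.0 */
        double  inner_scan_frac = 2.0 / (match_count + 1.0);

        /* join-qual evaluations for the matched outer rows */
        double  ntuples = outer_matched * inner_rows * inner_scan_frac;

        printf("inner_scan_frac=%.2f ntuples=%.0f\n",
               inner_scan_frac, ntuples);   /* 0.50 and 100000 */
        return 0;
    }
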
    3519             : /*
    3520             :  * initial_cost_mergejoin
    3521             :  *    Preliminary estimate of the cost of a mergejoin path.
    3522             :  *
    3523             :  * This must quickly produce lower-bound estimates of the path's startup and
    3524             :  * total costs.  If we are unable to eliminate the proposed path from
    3525             :  * consideration using the lower bounds, final_cost_mergejoin will be called
    3526             :  * to obtain the final estimates.
    3527             :  *
    3528             :  * The exact division of labor between this function and final_cost_mergejoin
    3529             :  * is private to them, and represents a tradeoff between speed of the initial
    3530             :  * estimate and getting a tight lower bound.  We choose to not examine the
    3531             :  * join quals here, except for obtaining the scan selectivity estimate which
    3532             :  * join quals here, except for obtaining the scan selectivity estimate, which
    3533             :  * getting that down to something reasonable).
    3534             :  * We also assume that cost_sort/cost_incremental_sort is cheap enough to use
    3535             :  * here.
    3536             :  *
    3537             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    3538             :  *      other data to be used by final_cost_mergejoin
    3539             :  * 'jointype' is the type of join to be performed
    3540             :  * 'mergeclauses' is the list of joinclauses to be used as merge clauses
    3541             :  * 'outer_path' is the outer input to the join
    3542             :  * 'inner_path' is the inner input to the join
    3543             :  * 'outersortkeys' is the list of sort keys for the outer path
    3544             :  * 'innersortkeys' is the list of sort keys for the inner path
    3545             :  * 'extra' contains miscellaneous information about the join
    3546             :  *
    3547             :  * Note: outersortkeys and innersortkeys should be NIL if no explicit
    3548             :  * sort is needed because the respective source path is already ordered.
    3549             :  */
    3550             : void
    3551     1270732 : initial_cost_mergejoin(PlannerInfo *root, JoinCostWorkspace *workspace,
    3552             :                        JoinType jointype,
    3553             :                        List *mergeclauses,
    3554             :                        Path *outer_path, Path *inner_path,
    3555             :                        List *outersortkeys, List *innersortkeys,
    3556             :                        JoinPathExtraData *extra)
    3557             : {
    3558             :     int         disabled_nodes;
    3559     1270732 :     Cost        startup_cost = 0;
    3560     1270732 :     Cost        run_cost = 0;
    3561     1270732 :     double      outer_path_rows = outer_path->rows;
    3562     1270732 :     double      inner_path_rows = inner_path->rows;
    3563             :     Cost        inner_run_cost;
    3564             :     double      outer_rows,
    3565             :                 inner_rows,
    3566             :                 outer_skip_rows,
    3567             :                 inner_skip_rows;
    3568             :     Selectivity outerstartsel,
    3569             :                 outerendsel,
    3570             :                 innerstartsel,
    3571             :                 innerendsel;
    3572             :     Path        sort_path;      /* dummy for result of
    3573             :                                  * cost_sort/cost_incremental_sort */
    3574             : 
    3575             :     /* Protect some assumptions below that rowcounts aren't zero */
    3576     1270732 :     if (outer_path_rows <= 0)
    3577          96 :         outer_path_rows = 1;
    3578     1270732 :     if (inner_path_rows <= 0)
    3579         126 :         inner_path_rows = 1;
    3580             : 
    3581             :     /*
    3582             :      * A merge join will stop as soon as it exhausts either input stream
    3583             :      * (unless it's an outer join, in which case the outer side has to be
    3584             :      * scanned all the way anyway).  Estimate fraction of the left and right
    3585             :      * inputs that will actually need to be scanned.  Likewise, we can
    3586             :      * estimate the number of rows that will be skipped before the first join
    3587             :      * pair is found, which should be factored into startup cost. We use only
    3588             :      * the first (most significant) merge clause for this purpose. Since
    3589             :      * mergejoinscansel() is a fairly expensive computation, we cache the
    3590             :      * results in the merge clause RestrictInfo.
    3591             :      */
    3592     1270732 :     if (mergeclauses && jointype != JOIN_FULL)
    3593     1264592 :     {
    3594     1264592 :         RestrictInfo *firstclause = (RestrictInfo *) linitial(mergeclauses);
    3595             :         List       *opathkeys;
    3596             :         List       *ipathkeys;
    3597             :         PathKey    *opathkey;
    3598             :         PathKey    *ipathkey;
    3599             :         MergeScanSelCache *cache;
    3600             : 
    3601             :         /* Get the input pathkeys to determine the sort-order details */
    3602     1264592 :         opathkeys = outersortkeys ? outersortkeys : outer_path->pathkeys;
    3603     1264592 :         ipathkeys = innersortkeys ? innersortkeys : inner_path->pathkeys;
    3604             :         Assert(opathkeys);
    3605             :         Assert(ipathkeys);
    3606     1264592 :         opathkey = (PathKey *) linitial(opathkeys);
    3607     1264592 :         ipathkey = (PathKey *) linitial(ipathkeys);
    3608             :         /* debugging check */
    3609     1264592 :         if (opathkey->pk_opfamily != ipathkey->pk_opfamily ||
    3610     1264592 :             opathkey->pk_eclass->ec_collation != ipathkey->pk_eclass->ec_collation ||
    3611     1264592 :             opathkey->pk_cmptype != ipathkey->pk_cmptype ||
    3612     1264592 :             opathkey->pk_nulls_first != ipathkey->pk_nulls_first)
    3613           0 :             elog(ERROR, "left and right pathkeys do not match in mergejoin");
    3614             : 
    3615             :         /* Get the selectivity with caching */
    3616     1264592 :         cache = cached_scansel(root, firstclause, opathkey);
    3617             : 
    3618     1264592 :         if (bms_is_subset(firstclause->left_relids,
    3619     1264592 :                           outer_path->parent->relids))
    3620             :         {
    3621             :             /* left side of clause is outer */
    3622      674802 :             outerstartsel = cache->leftstartsel;
    3623      674802 :             outerendsel = cache->leftendsel;
    3624      674802 :             innerstartsel = cache->rightstartsel;
    3625      674802 :             innerendsel = cache->rightendsel;
    3626             :         }
    3627             :         else
    3628             :         {
    3629             :             /* left side of clause is inner */
    3630      589790 :             outerstartsel = cache->rightstartsel;
    3631      589790 :             outerendsel = cache->rightendsel;
    3632      589790 :             innerstartsel = cache->leftstartsel;
    3633      589790 :             innerendsel = cache->leftendsel;
    3634             :         }
    3635     1264592 :         if (jointype == JOIN_LEFT ||
    3636             :             jointype == JOIN_ANTI)
    3637             :         {
    3638      209408 :             outerstartsel = 0.0;
    3639      209408 :             outerendsel = 1.0;
    3640             :         }
    3641     1055184 :         else if (jointype == JOIN_RIGHT ||
    3642             :                  jointype == JOIN_RIGHT_ANTI)
    3643             :         {
    3644      204200 :             innerstartsel = 0.0;
    3645      204200 :             innerendsel = 1.0;
    3646             :         }
    3647             :     }
    3648             :     else
    3649             :     {
    3650             :         /* cope with clauseless or full mergejoin */
    3651        6140 :         outerstartsel = innerstartsel = 0.0;
    3652        6140 :         outerendsel = innerendsel = 1.0;
    3653             :     }
    3654             : 
    3655             :     /*
    3656             :      * Convert selectivities to row counts.  We force outer_rows and
    3657             :      * inner_rows to be at least 1, but the skip_rows estimates can be zero.
    3658             :      */
    3659     1270732 :     outer_skip_rows = rint(outer_path_rows * outerstartsel);
    3660     1270732 :     inner_skip_rows = rint(inner_path_rows * innerstartsel);
    3661     1270732 :     outer_rows = clamp_row_est(outer_path_rows * outerendsel);
    3662     1270732 :     inner_rows = clamp_row_est(inner_path_rows * innerendsel);
    3663             : 
    3664             :     Assert(outer_skip_rows <= outer_rows);
    3665             :     Assert(inner_skip_rows <= inner_rows);
    3666             : 
    3667             :     /*
    3668             :      * Readjust scan selectivities to account for above rounding.  This is
    3669             :      * normally an insignificant effect, but when there are only a few rows in
    3670             :      * the inputs, failing to do this makes for a large percentage error.
    3671             :      */
    3672     1270732 :     outerstartsel = outer_skip_rows / outer_path_rows;
    3673     1270732 :     innerstartsel = inner_skip_rows / inner_path_rows;
    3674     1270732 :     outerendsel = outer_rows / outer_path_rows;
    3675     1270732 :     innerendsel = inner_rows / inner_path_rows;
    3676             : 
    3677             :     Assert(outerstartsel <= outerendsel);
    3678             :     Assert(innerstartsel <= innerendsel);
    3679             : 
    3680     1270732 :     disabled_nodes = enable_mergejoin ? 0 : 1;
    3681             : 
    3682             :     /* cost of source data */
    3683             : 
    3684     1270732 :     if (outersortkeys)          /* do we need to sort outer? */
    3685             :     {
    3686      599150 :         bool        use_incremental_sort = false;
    3687             :         int         presorted_keys;
    3688             : 
    3689             :         /*
    3690             :          * We choose to use incremental sort if it is enabled and there are
    3691             :          * presorted keys; otherwise we use full sort.
    3692             :          */
    3693      599150 :         if (enable_incremental_sort)
    3694             :         {
    3695             :             bool        is_sorted PG_USED_FOR_ASSERTS_ONLY;
    3696             : 
    3697      598220 :             is_sorted = pathkeys_count_contained_in(outersortkeys,
    3698             :                                                     outer_path->pathkeys,
    3699             :                                                     &presorted_keys);
    3700             :             Assert(!is_sorted);
    3701             : 
    3702      598220 :             if (presorted_keys > 0)
    3703        2082 :                 use_incremental_sort = true;
    3704             :         }
    3705             : 
    3706      599150 :         if (!use_incremental_sort)
    3707             :         {
    3708      597068 :             cost_sort(&sort_path,
    3709             :                       root,
    3710             :                       outersortkeys,
    3711             :                       outer_path->disabled_nodes,
    3712             :                       outer_path->total_cost,
    3713             :                       outer_path_rows,
    3714      597068 :                       outer_path->pathtarget->width,
    3715             :                       0.0,
    3716             :                       work_mem,
    3717             :                       -1.0);
    3718             :         }
    3719             :         else
    3720             :         {
    3721        2082 :             cost_incremental_sort(&sort_path,
    3722             :                                   root,
    3723             :                                   outersortkeys,
    3724             :                                   presorted_keys,
    3725             :                                   outer_path->disabled_nodes,
    3726             :                                   outer_path->startup_cost,
    3727             :                                   outer_path->total_cost,
    3728             :                                   outer_path_rows,
    3729        2082 :                                   outer_path->pathtarget->width,
    3730             :                                   0.0,
    3731             :                                   work_mem,
    3732             :                                   -1.0);
    3733             :         }
    3734      599150 :         disabled_nodes += sort_path.disabled_nodes;
    3735      599150 :         startup_cost += sort_path.startup_cost;
    3736      599150 :         startup_cost += (sort_path.total_cost - sort_path.startup_cost)
    3737      599150 :             * outerstartsel;
    3738      599150 :         run_cost += (sort_path.total_cost - sort_path.startup_cost)
    3739      599150 :             * (outerendsel - outerstartsel);
    3740             :     }
    3741             :     else
    3742             :     {
    3743      671582 :         disabled_nodes += outer_path->disabled_nodes;
    3744      671582 :         startup_cost += outer_path->startup_cost;
    3745      671582 :         startup_cost += (outer_path->total_cost - outer_path->startup_cost)
    3746      671582 :             * outerstartsel;
    3747      671582 :         run_cost += (outer_path->total_cost - outer_path->startup_cost)
    3748      671582 :             * (outerendsel - outerstartsel);
    3749             :     }
    3750             : 
    3751     1270732 :     if (innersortkeys)          /* do we need to sort inner? */
    3752             :     {
    3753             :         /*
    3754             :          * We do not consider incremental sort for the inner path, because
    3755             :          * incremental sort does not support mark/restore.
    3756             :          */
    3757             : 
    3758      982898 :         cost_sort(&sort_path,
    3759             :                   root,
    3760             :                   innersortkeys,
    3761             :                   inner_path->disabled_nodes,
    3762             :                   inner_path->total_cost,
    3763             :                   inner_path_rows,
    3764      982898 :                   inner_path->pathtarget->width,
    3765             :                   0.0,
    3766             :                   work_mem,
    3767             :                   -1.0);
    3768      982898 :         disabled_nodes += sort_path.disabled_nodes;
    3769      982898 :         startup_cost += sort_path.startup_cost;
    3770      982898 :         startup_cost += (sort_path.total_cost - sort_path.startup_cost)
    3771      982898 :             * innerstartsel;
    3772      982898 :         inner_run_cost = (sort_path.total_cost - sort_path.startup_cost)
    3773      982898 :             * (innerendsel - innerstartsel);
    3774             :     }
    3775             :     else
    3776             :     {
    3777      287834 :         disabled_nodes += inner_path->disabled_nodes;
    3778      287834 :         startup_cost += inner_path->startup_cost;
    3779      287834 :         startup_cost += (inner_path->total_cost - inner_path->startup_cost)
    3780      287834 :             * innerstartsel;
    3781      287834 :         inner_run_cost = (inner_path->total_cost - inner_path->startup_cost)
    3782      287834 :             * (innerendsel - innerstartsel);
    3783             :     }
    3784             : 
    3785             :     /*
    3786             :      * We can't yet determine whether rescanning occurs, or whether
    3787             :      * materialization of the inner input should be done.  The minimum
    3788             :      * possible inner input cost, regardless of rescan and materialization
    3789             :      * considerations, is inner_run_cost.  We include that in
    3790             :      * workspace->total_cost, but not yet in run_cost.
    3791             :      */
    3792             : 
    3793             :     /* CPU costs left for later */
    3794             : 
    3795             :     /* Public result fields */
    3796     1270732 :     workspace->disabled_nodes = disabled_nodes;
    3797     1270732 :     workspace->startup_cost = startup_cost;
    3798     1270732 :     workspace->total_cost = startup_cost + run_cost + inner_run_cost;
    3799             :     /* Save private data for final_cost_mergejoin */
    3800     1270732 :     workspace->run_cost = run_cost;
    3801     1270732 :     workspace->inner_run_cost = inner_run_cost;
    3802     1270732 :     workspace->outer_rows = outer_rows;
    3803     1270732 :     workspace->inner_rows = inner_rows;
    3804     1270732 :     workspace->outer_skip_rows = outer_skip_rows;
    3805     1270732 :     workspace->inner_skip_rows = inner_skip_rows;
    3806     1270732 : }
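                     : 
                     : /*
                     :  * Editor's note: a minimal, self-contained C sketch (not PostgreSQL code;
                     :  * all names and numbers here are invented) of how the workspace totals
                     :  * above combine.  Each input contributes its startup cost plus a startsel
                     :  * fraction of its run cost to startup_cost, and the lower-bound total
                     :  * adds the minimum possible inner run cost, as described just before the
                     :  * "Public result fields" assignments.
                     :  */
                     : #include <stdio.h>
                     : 
                     : int
                     : main(void)
                     : {
                     :     /* assumed input costs, in the planner's arbitrary units */
                     :     double outer_startup = 10.0, outer_total = 110.0;
                     :     double inner_startup = 5.0, inner_total = 55.0;
                     :     double outerstartsel = 0.2, outerendsel = 0.9;
                     :     double innerstartsel = 0.1, innerendsel = 1.0;
                     :     double startup_cost = 0.0, run_cost = 0.0, inner_run_cost;
                     : 
                     :     /* outer input: startup plus the pre-first-tuple fraction of its run */
                     :     startup_cost += outer_startup;
                     :     startup_cost += (outer_total - outer_startup) * outerstartsel;
                     :     run_cost += (outer_total - outer_startup) * (outerendsel - outerstartsel);
                     : 
                     :     /* inner input: same shape, but its run cost is tracked separately */
                     :     startup_cost += inner_startup;
                     :     startup_cost += (inner_total - inner_startup) * innerstartsel;
                     :     inner_run_cost = (inner_total - inner_startup) * (innerendsel - innerstartsel);
                     : 
                     :     /* prints "startup 40.0  total 155.0" for these inputs */
                     :     printf("startup %.1f  total %.1f\n",
                     :            startup_cost, startup_cost + run_cost + inner_run_cost);
                     :     return 0;
                     : }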
    3807             : 
    3808             : /*
    3809             :  * final_cost_mergejoin
    3810             :  *    Final estimate of the cost and result size of a mergejoin path.
    3811             :  *
    3812             :  * Unlike other costsize functions, this routine makes two actual decisions:
    3813             :  * whether the executor will need to do mark/restore, and whether we should
    3814             :  * materialize the inner path.  It would be logically cleaner to build
    3815             :  * separate paths testing these alternatives, but that would require repeating
    3816             :  * most of the cost calculations, which are not all that cheap.  Since the
    3817             :  * choice will not affect output pathkeys or startup cost, only total cost,
    3818             :  * there is no possibility of wanting to keep more than one path.  So it seems
    3819             :  * best to make the decisions here and record them in the path's
    3820             :  * skip_mark_restore and materialize_inner fields.
    3821             :  *
    3822             :  * Mark/restore overhead is usually required, but can be skipped if we know
    3823             :  * that the executor need find only one match per outer tuple, and that the
    3824             :  * mergeclauses are sufficient to identify a match.
    3825             :  *
    3826             :  * We materialize the inner path if we need mark/restore and either the inner
    3827             :  * path can't support mark/restore, or it's cheaper to use an interposed
    3828             :  * Material node to handle mark/restore.
    3829             :  *
    3830             :  * 'path' is already filled in except for the rows and cost fields and
    3831             :  *      skip_mark_restore and materialize_inner
    3832             :  * 'workspace' is the result from initial_cost_mergejoin
    3833             :  * 'extra' contains miscellaneous information about the join
    3834             :  */
    3835             : void
    3836      328052 : final_cost_mergejoin(PlannerInfo *root, MergePath *path,
    3837             :                      JoinCostWorkspace *workspace,
    3838             :                      JoinPathExtraData *extra)
    3839             : {
    3840      328052 :     Path       *outer_path = path->jpath.outerjoinpath;
    3841      328052 :     Path       *inner_path = path->jpath.innerjoinpath;
    3842      328052 :     double      inner_path_rows = inner_path->rows;
    3843      328052 :     List       *mergeclauses = path->path_mergeclauses;
    3844      328052 :     List       *innersortkeys = path->innersortkeys;
    3845      328052 :     Cost        startup_cost = workspace->startup_cost;
    3846      328052 :     Cost        run_cost = workspace->run_cost;
    3847      328052 :     Cost        inner_run_cost = workspace->inner_run_cost;
    3848      328052 :     double      outer_rows = workspace->outer_rows;
    3849      328052 :     double      inner_rows = workspace->inner_rows;
    3850      328052 :     double      outer_skip_rows = workspace->outer_skip_rows;
    3851      328052 :     double      inner_skip_rows = workspace->inner_skip_rows;
    3852             :     Cost        cpu_per_tuple,
    3853             :                 bare_inner_cost,
    3854             :                 mat_inner_cost;
    3855             :     QualCost    merge_qual_cost;
    3856             :     QualCost    qp_qual_cost;
    3857             :     double      mergejointuples,
    3858             :                 rescannedtuples;
    3859             :     double      rescanratio;
    3860             : 
    3861             :     /* Set the number of disabled nodes. */
    3862      328052 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    3863             : 
    3864             :     /* Protect some assumptions below that rowcounts aren't zero */
    3865      328052 :     if (inner_path_rows <= 0)
    3866          90 :         inner_path_rows = 1;
    3867             : 
    3868             :     /* Mark the path with the correct row estimate */
    3869      328052 :     if (path->jpath.path.param_info)
    3870         776 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    3871             :     else
    3872      327276 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    3873             : 
    3874             :     /* For partial paths, scale row estimate. */
    3875      328052 :     if (path->jpath.path.parallel_workers > 0)
    3876             :     {
    3877        9432 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    3878             : 
    3879        9432 :         path->jpath.path.rows =
    3880        9432 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    3881             :     }
    3882             : 
    3883             :     /*
    3884             :      * Compute cost of the mergequals and qpquals (other restriction clauses)
    3885             :      * separately.
    3886             :      */
    3887      328052 :     cost_qual_eval(&merge_qual_cost, mergeclauses, root);
    3888      328052 :     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
    3889      328052 :     qp_qual_cost.startup -= merge_qual_cost.startup;
    3890      328052 :     qp_qual_cost.per_tuple -= merge_qual_cost.per_tuple;
    3891             : 
    3892             :     /*
    3893             :      * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3894             :      * executor will stop scanning for matches after the first match.  When
    3895             :      * all the joinclauses are merge clauses, this means we don't ever need to
    3896             :      * back up the merge, and so we can skip mark/restore overhead.
    3897             :      */
    3898      328052 :     if ((path->jpath.jointype == JOIN_SEMI ||
    3899      321240 :          path->jpath.jointype == JOIN_ANTI ||
    3900      464940 :          extra->inner_unique) &&
    3901      151100 :         (list_length(path->jpath.joinrestrictinfo) ==
    3902      151100 :          list_length(path->path_mergeclauses)))
    3903      126338 :         path->skip_mark_restore = true;
    3904             :     else
    3905      201714 :         path->skip_mark_restore = false;
    3906             : 
    3907             :     /*
    3908             :      * Get approx # tuples passing the mergequals.  We use approx_tuple_count
    3909             :      * here because we need an estimate done with JOIN_INNER semantics.
    3910             :      */
    3911      328052 :     mergejointuples = approx_tuple_count(root, &path->jpath, mergeclauses);
    3912             : 
    3913             :     /*
    3914             :      * When there are equal merge keys in the outer relation, the mergejoin
    3915             :      * must rescan any matching tuples in the inner relation. This means
    3916             :      * re-fetching inner tuples; we have to estimate how often that happens.
    3917             :      *
    3918             :      * For regular inner and outer joins, the number of re-fetches can be
    3919             :      * estimated approximately as size of merge join output minus size of
    3920             :      * inner relation. Assume that the distinct key values are 1, 2, ..., and
    3921             :      * denote the number of values of each key in the outer relation as m1,
    3922             :      * m2, ...; in the inner relation, n1, n2, ...  Then we have
    3923             :      *
    3924             :      * size of join = m1 * n1 + m2 * n2 + ...
    3925             :      *
    3926             :      * number of rescanned tuples = (m1 - 1) * n1 + (m2 - 1) * n2 + ...
    3927             :      *     = (m1 * n1 + m2 * n2 + ...) - (n1 + n2 + ...)
    3928             :      *     = size of join - size of inner relation
    3929             :      *
    3930             :      * This equation works correctly for outer tuples having no inner match
    3931             :      * (nk = 0), but not for inner tuples having no outer match (mk = 0); we
    3932             :      * are effectively subtracting those from the number of rescanned tuples,
    3933             :      * when we should not.  Can we do better without expensive selectivity
    3934             :      * computations?
    3935             :      *
    3936             :      * The whole issue is moot if we are working from a unique-ified outer
    3937             :      * input, or if we know we don't need to mark/restore at all.
    3938             :      */
    3939      328052 :     if (IsA(outer_path, UniquePath) || path->skip_mark_restore)
    3940      129078 :         rescannedtuples = 0;
    3941             :     else
    3942             :     {
    3943      198974 :         rescannedtuples = mergejointuples - inner_path_rows;
    3944             :         /* Must clamp because of possible underestimate */
    3945      198974 :         if (rescannedtuples < 0)
    3946       79718 :             rescannedtuples = 0;
    3947             :     }
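                     :     /*
                     :      * Editor's note: a worked instance of the estimate above, with made-up
                     :      * key counts.  If the outer side holds keys {1,1,2} and the inner side
                     :      * {1,1,2,3}, then m1 = 2, m2 = 1, m3 = 0 and n1 = 2, n2 = 1, n3 = 1.
                     :      * Size of join = 2*2 + 1*1 + 0*1 = 5, so the estimate gives
                     :      * rescannedtuples = 5 - 4 = 1, while the true count is (2-1)*2 = 2;
                     :      * the shortfall of 1 is exactly the inner tuple with key 3 that has
                     :      * no outer match, the mk = 0 case the comment warns about.
                     :      */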
    3948             : 
    3949             :     /*
    3950             :      * We'll inflate various costs this much to account for rescanning.  Note
    3951             :      * that this is to be multiplied by something involving inner_rows, or
    3952             :      * another number related to the portion of the inner rel we'll scan.
    3953             :      */
    3954      328052 :     rescanratio = 1.0 + (rescannedtuples / inner_rows);
    3955             : 
    3956             :     /*
    3957             :      * Decide whether we want to materialize the inner input, to shield it
    3958             :      * from doing mark/restore and performing re-fetches.  Our cost model for
    3959             :      * regular re-fetches is that a re-fetch costs the same as an original
    3960             :      * fetch, which is probably an overestimate; but on the other hand we
    3961             :      * ignore the bookkeeping costs of mark/restore.  It is not clear whether
    3962             :      * a more refined model would be worth developing.  So we just need to
    3963             :      * inflate the inner run cost by rescanratio.
    3964             :      */
    3965      328052 :     bare_inner_cost = inner_run_cost * rescanratio;
    3966             : 
    3967             :     /*
    3968             :      * When we interpose a Material node the re-fetch cost is assumed to be
    3969             :      * just cpu_operator_cost per tuple, independently of the underlying
    3970             :      * plan's cost; and we charge an extra cpu_operator_cost per original
    3971             :      * fetch as well.  Note that we're assuming the materialize node will
    3972             :      * never spill to disk, since it only has to remember tuples back to the
    3973             :      * last mark.  (If there are a huge number of duplicates, our other cost
    3974             :      * factors will make the path so expensive that it probably won't get
    3975             :      * chosen anyway.)  So we don't use cost_rescan here.
    3976             :      *
    3977             :      * Note: keep this estimate in sync with create_mergejoin_plan's labeling
    3978             :      * of the generated Material node.
    3979             :      */
    3980      328052 :     mat_inner_cost = inner_run_cost +
    3981      328052 :         cpu_operator_cost * inner_rows * rescanratio;
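                     :     /*
                     :      * Editor's note: comparing the two estimates with assumed numbers.
                     :      * If inner_run_cost = 100, inner_rows = 1000, rescannedtuples = 4000
                     :      * and cpu_operator_cost is the default 0.0025, then rescanratio = 5,
                     :      * bare_inner_cost = 100 * 5 = 500 and mat_inner_cost =
                     :      * 100 + 0.0025 * 1000 * 5 = 112.5, so materializing looks much
                     :      * cheaper; with rescannedtuples = 0 it is 100 vs. 102.5 and loses.
                     :      */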
    3982             : 
    3983             :     /*
    3984             :      * If we don't need mark/restore at all, we don't need materialization.
    3985             :      */
    3986      328052 :     if (path->skip_mark_restore)
    3987      126338 :         path->materialize_inner = false;
    3988             : 
    3989             :     /*
    3990             :      * Prefer materializing if it looks cheaper, unless the user has asked to
    3991             :      * suppress materialization.
    3992             :      */
    3993      201714 :     else if (enable_material && mat_inner_cost < bare_inner_cost)
    3994        2670 :         path->materialize_inner = true;
    3995             : 
    3996             :     /*
    3997             :      * Even if materializing doesn't look cheaper, we *must* do it if the
    3998             :      * inner path is to be used directly (without sorting) and it doesn't
    3999             :      * support mark/restore.
    4000             :      *
    4001             :      * Since the inner side must be ordered, and only Sorts and IndexScans can
    4002             :      * create order to begin with, and they both support mark/restore, you
    4003             :      * might think there's no problem --- but you'd be wrong.  Nestloop and
    4004             :      * merge joins can *preserve* the order of their inputs, so they can be
    4005             :      * selected as the input of a mergejoin, and they don't support
    4006             :      * mark/restore at present.
    4007             :      *
    4008             :      * We don't test the value of enable_material here, because
    4009             :      * materialization is required for correctness in this case, and turning
    4010             :      * it off does not entitle us to deliver an invalid plan.
    4011             :      */
    4012      199044 :     else if (innersortkeys == NIL &&
    4013       12330 :              !ExecSupportsMarkRestore(inner_path))
    4014        1534 :         path->materialize_inner = true;
    4015             : 
    4016             :     /*
    4017             :      * Also, force materializing if the inner path is to be sorted and the
    4018             :      * sort is expected to spill to disk.  This is because the final merge
    4019             :      * pass can be done on-the-fly if it doesn't have to support mark/restore.
    4020             :      * We don't try to adjust the cost estimates for this consideration,
    4021             :      * though.
    4022             :      *
    4023             :      * Since materialization is a performance optimization in this case,
    4024             :      * rather than necessary for correctness, we skip it if enable_material is
    4025             :      * off.
    4026             :      */
    4027      197510 :     else if (enable_material && innersortkeys != NIL &&
    4028      186666 :              relation_byte_size(inner_path_rows,
    4029      186666 :                                 inner_path->pathtarget->width) >
    4030      186666 :              work_mem * (Size) 1024)
    4031         256 :         path->materialize_inner = true;
    4032             :     else
    4033      197254 :         path->materialize_inner = false;
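                     :     /*
                     :      * Editor's summary of the decision chain above: materialize_inner ends
                     :      * up true only when mark/restore is needed and either (a) materializing
                     :      * looks cheaper and enable_material is on, (b) the inner path is used
                     :      * without a sort and cannot do mark/restore itself (required for
                     :      * correctness, so enable_material is ignored), or (c) enable_material
                     :      * is on and the inner sort is expected to spill to disk.
                     :      */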
    4034             : 
    4035             :     /* Charge the right incremental cost for the chosen case */
    4036      328052 :     if (path->materialize_inner)
    4037        4460 :         run_cost += mat_inner_cost;
    4038             :     else
    4039      323592 :         run_cost += bare_inner_cost;
    4040             : 
    4041             :     /* CPU costs */
    4042             : 
    4043             :     /*
    4044             :      * The number of tuple comparisons needed is approximately the number of
    4045             :      * outer rows plus the number of inner rows plus the number of rescanned
    4046             :      * tuples (can we refine this?).  At each comparison, we must evaluate the mergejoin quals.
    4047             :      */
    4048      328052 :     startup_cost += merge_qual_cost.startup;
    4049      328052 :     startup_cost += merge_qual_cost.per_tuple *
    4050      328052 :         (outer_skip_rows + inner_skip_rows * rescanratio);
    4051      328052 :     run_cost += merge_qual_cost.per_tuple *
    4052      328052 :         ((outer_rows - outer_skip_rows) +
    4053      328052 :          (inner_rows - inner_skip_rows) * rescanratio);
    4054             : 
    4055             :     /*
    4056             :      * For each tuple that gets through the mergejoin proper, we charge
    4057             :      * cpu_tuple_cost plus the cost of evaluating additional restriction
    4058             :      * clauses that are to be applied at the join.  (This is pessimistic since
    4059             :      * not all of the quals may get evaluated at each tuple.)
    4060             :      *
    4061             :      * Note: we could adjust for SEMI/ANTI joins skipping some qual
    4062             :      * evaluations here, but it's probably not worth the trouble.
    4063             :      */
    4064      328052 :     startup_cost += qp_qual_cost.startup;
    4065      328052 :     cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    4066      328052 :     run_cost += cpu_per_tuple * mergejointuples;
    4067             : 
    4068             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    4069      328052 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    4070      328052 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    4071             : 
    4072      328052 :     path->jpath.path.startup_cost = startup_cost;
    4073      328052 :     path->jpath.path.total_cost = startup_cost + run_cost;
    4074      328052 : }
    4075             : 
    4076             : /*
    4077             :  * run mergejoinscansel() with caching
    4078             :  */
    4079             : static MergeScanSelCache *
    4080     1264592 : cached_scansel(PlannerInfo *root, RestrictInfo *rinfo, PathKey *pathkey)
    4081             : {
    4082             :     MergeScanSelCache *cache;
    4083             :     ListCell   *lc;
    4084             :     Selectivity leftstartsel,
    4085             :                 leftendsel,
    4086             :                 rightstartsel,
    4087             :                 rightendsel;
    4088             :     MemoryContext oldcontext;
    4089             : 
    4090             :     /* Do we have this result already? */
    4091     1264634 :     foreach(lc, rinfo->scansel_cache)
    4092             :     {
    4093     1146406 :         cache = (MergeScanSelCache *) lfirst(lc);
    4094     1146406 :         if (cache->opfamily == pathkey->pk_opfamily &&
    4095     1146406 :             cache->collation == pathkey->pk_eclass->ec_collation &&
    4096     1146406 :             cache->cmptype == pathkey->pk_cmptype &&
    4097     1146364 :             cache->nulls_first == pathkey->pk_nulls_first)
    4098     1146364 :             return cache;
    4099             :     }
    4100             : 
    4101             :     /* Nope, do the computation */
    4102      118228 :     mergejoinscansel(root,
    4103      118228 :                      (Node *) rinfo->clause,
    4104             :                      pathkey->pk_opfamily,
    4105             :                      pathkey->pk_cmptype,
    4106      118228 :                      pathkey->pk_nulls_first,
    4107             :                      &leftstartsel,
    4108             :                      &leftendsel,
    4109             :                      &rightstartsel,
    4110             :                      &rightendsel);
    4111             : 
    4112             :     /* Cache the result in suitably long-lived workspace */
    4113      118228 :     oldcontext = MemoryContextSwitchTo(root->planner_cxt);
    4114             : 
    4115      118228 :     cache = (MergeScanSelCache *) palloc(sizeof(MergeScanSelCache));
    4116      118228 :     cache->opfamily = pathkey->pk_opfamily;
    4117      118228 :     cache->collation = pathkey->pk_eclass->ec_collation;
    4118      118228 :     cache->cmptype = pathkey->pk_cmptype;
    4119      118228 :     cache->nulls_first = pathkey->pk_nulls_first;
    4120      118228 :     cache->leftstartsel = leftstartsel;
    4121      118228 :     cache->leftendsel = leftendsel;
    4122      118228 :     cache->rightstartsel = rightstartsel;
    4123      118228 :     cache->rightendsel = rightendsel;
    4124             : 
    4125      118228 :     rinfo->scansel_cache = lappend(rinfo->scansel_cache, cache);
    4126             : 
    4127      118228 :     MemoryContextSwitchTo(oldcontext);
    4128             : 
    4129      118228 :     return cache;
    4130             : }
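                     : 
                     : /*
                     :  * Editor's note: the cache key above is the full sort-order identity
                     :  * (opfamily, collation, cmptype, nulls_first), so one clause can carry a
                     :  * separate entry for each distinct ordering it is merged under, and the
                     :  * switch to root->planner_cxt keeps the entries alive for the rest of
                     :  * the planning run even if the caller is in a short-lived context.
                     :  */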
    4131             : 
    4132             : /*
    4133             :  * initial_cost_hashjoin
    4134             :  *    Preliminary estimate of the cost of a hashjoin path.
    4135             :  *
    4136             :  * This must quickly produce lower-bound estimates of the path's startup and
    4137             :  * total costs.  If we are unable to eliminate the proposed path from
    4138             :  * consideration using the lower bounds, final_cost_hashjoin will be called
    4139             :  * to obtain the final estimates.
    4140             :  *
    4141             :  * The exact division of labor between this function and final_cost_hashjoin
    4142             :  * is private to them, and represents a tradeoff between speed of the initial
    4143             :  * estimate and getting a tight lower bound.  We choose not to examine the
    4144             :  * join quals here (other than by counting the number of hash clauses),
    4145             :  * so we can't do much with CPU costs.  We do assume that
    4146             :  * ExecChooseHashTableSize is cheap enough to use here.
    4147             :  *
    4148             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    4149             :  *      other data to be used by final_cost_hashjoin
    4150             :  * 'jointype' is the type of join to be performed
    4151             :  * 'hashclauses' is the list of joinclauses to be used as hash clauses
    4152             :  * 'outer_path' is the outer input to the join
    4153             :  * 'inner_path' is the inner input to the join
    4154             :  * 'extra' contains miscellaneous information about the join
    4155             :  * 'parallel_hash' indicates that inner_path is partial and that a shared
    4156             :  *      hash table will be built in parallel
    4157             :  */
    4158             : void
    4159      683140 : initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace,
    4160             :                       JoinType jointype,
    4161             :                       List *hashclauses,
    4162             :                       Path *outer_path, Path *inner_path,
    4163             :                       JoinPathExtraData *extra,
    4164             :                       bool parallel_hash)
    4165             : {
    4166             :     int         disabled_nodes;
    4167      683140 :     Cost        startup_cost = 0;
    4168      683140 :     Cost        run_cost = 0;
    4169      683140 :     double      outer_path_rows = outer_path->rows;
    4170      683140 :     double      inner_path_rows = inner_path->rows;
    4171      683140 :     double      inner_path_rows_total = inner_path_rows;
    4172      683140 :     int         num_hashclauses = list_length(hashclauses);
    4173             :     int         numbuckets;
    4174             :     int         numbatches;
    4175             :     int         num_skew_mcvs;
    4176             :     size_t      space_allowed;  /* unused */
    4177             : 
    4178             :     /* Count up disabled nodes. */
    4179      683140 :     disabled_nodes = enable_hashjoin ? 0 : 1;
    4180      683140 :     disabled_nodes += inner_path->disabled_nodes;
    4181      683140 :     disabled_nodes += outer_path->disabled_nodes;
    4182             : 
    4183             :     /* cost of source data */
    4184      683140 :     startup_cost += outer_path->startup_cost;
    4185      683140 :     run_cost += outer_path->total_cost - outer_path->startup_cost;
    4186      683140 :     startup_cost += inner_path->total_cost;
    4187             : 
    4188             :     /*
    4189             :      * Cost of computing hash function: must do it once per input tuple. We
    4190             :      * charge one cpu_operator_cost for each column's hash function.  Also,
    4191             :      * tack on one cpu_tuple_cost per inner row, to model the costs of
    4192             :      * inserting the row into the hashtable.
    4193             :      *
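                     :     /*
                     :      * Editor's note: at the default cpu_operator_cost = 0.0025 and
                     :      * cpu_tuple_cost = 0.01, with two hash clauses, 1000 inner rows and
                     :      * 10000 outer rows, this charges (0.0025 * 2 + 0.01) * 1000 = 15 to
                     :      * startup cost and 0.0025 * 2 * 10000 = 50 to run cost.
                     :      */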
    4194             :      * XXX when a hashclause is more complex than a single operator, we really
    4195             :      * should charge the extra eval costs of the left or right side, as
    4196             :      * appropriate, here.  This seems more work than it's worth at the moment.
    4197             :      */
    4198      683140 :     startup_cost += (cpu_operator_cost * num_hashclauses + cpu_tuple_cost)
    4199      683140 :         * inner_path_rows;
    4200      683140 :     run_cost += cpu_operator_cost * num_hashclauses * outer_path_rows;
    4201             : 
    4202             :     /*
    4203             :      * If this is a parallel hash build, then the value we have for
    4204             :      * inner_rows_total currently refers only to the rows returned by each
    4205             :      * participant.  For shared hash table size estimation, we need the total
    4206             :      * number, so we need to undo the division.
    4207             :      */
    4208      683140 :     if (parallel_hash)
    4209       12552 :         inner_path_rows_total *= get_parallel_divisor(inner_path);
    4210             : 
    4211             :     /*
    4212             :      * Get hash table size that executor would use for inner relation.
    4213             :      *
    4214             :      * XXX for the moment, always assume that skew optimization will be
    4215             :      * performed.  As long as SKEW_HASH_MEM_PERCENT is small, it's not worth
    4216             :      * trying to determine that for sure.
    4217             :      *
    4218             :      * XXX at some point it might be interesting to try to account for skew
    4219             :      * optimization in the cost estimate, but for now, we don't.
    4220             :      */
    4221      683140 :     ExecChooseHashTableSize(inner_path_rows_total,
    4222      683140 :                             inner_path->pathtarget->width,
    4223             :                             true,   /* useskew */
    4224             :                             parallel_hash,  /* try_combined_hash_mem */
    4225             :                             outer_path->parallel_workers,
    4226             :                             &space_allowed,
    4227             :                             &numbuckets,
    4228             :                             &numbatches,
    4229             :                             &num_skew_mcvs);
    4230             : 
    4231             :     /*
    4232             :      * If inner relation is too big then we will need to "batch" the join,
    4233             :      * which implies writing and reading most of the tuples to disk an extra
    4234             :      * time.  Charge seq_page_cost per page, since the I/O should be nice and
    4235             :      * sequential.  Writing the inner rel counts as startup cost, all the rest
    4236             :      * as run cost.
    4237             :      */
    4238      683140 :     if (numbatches > 1)
    4239             :     {
    4240        5344 :         double      outerpages = page_size(outer_path_rows,
    4241        5344 :                                            outer_path->pathtarget->width);
    4242        5344 :         double      innerpages = page_size(inner_path_rows,
    4243        5344 :                                            inner_path->pathtarget->width);
    4244             : 
    4245        5344 :         startup_cost += seq_page_cost * innerpages;
    4246        5344 :         run_cost += seq_page_cost * (innerpages + 2 * outerpages);
    4247             :     }
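                     :     /*
                     :      * Editor's note: a worked instance with assumed sizes.  If the inner
                     :      * rel spans 1000 pages and the outer rel 4000 pages at the default
                     :      * seq_page_cost = 1.0, batching adds 1000 to startup cost (writing
                     :      * the inner rel) and 1000 + 2 * 4000 = 9000 to run cost (re-reading
                     :      * the inner rel, plus writing and re-reading the outer rel).
                     :      */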
    4248             : 
    4249             :     /* CPU costs left for later */
    4250             : 
    4251             :     /* Public result fields */
    4252      683140 :     workspace->disabled_nodes = disabled_nodes;
    4253      683140 :     workspace->startup_cost = startup_cost;
    4254      683140 :     workspace->total_cost = startup_cost + run_cost;
    4255             :     /* Save private data for final_cost_hashjoin */
    4256      683140 :     workspace->run_cost = run_cost;
    4257      683140 :     workspace->numbuckets = numbuckets;
    4258      683140 :     workspace->numbatches = numbatches;
    4259      683140 :     workspace->inner_rows_total = inner_path_rows_total;
    4260      683140 : }
    4261             : 
    4262             : /*
    4263             :  * final_cost_hashjoin
    4264             :  *    Final estimate of the cost and result size of a hashjoin path.
    4265             :  *
    4266             :  * Note: the numbatches estimate is also saved into 'path' for use later
    4267             :  *
    4268             :  * 'path' is already filled in except for the rows and cost fields and
    4269             :  *      num_batches
    4270             :  * 'workspace' is the result from initial_cost_hashjoin
    4271             :  * 'extra' contains miscellaneous information about the join
    4272             :  */
    4273             : void
    4274      294864 : final_cost_hashjoin(PlannerInfo *root, HashPath *path,
    4275             :                     JoinCostWorkspace *workspace,
    4276             :                     JoinPathExtraData *extra)
    4277             : {
    4278      294864 :     Path       *outer_path = path->jpath.outerjoinpath;
    4279      294864 :     Path       *inner_path = path->jpath.innerjoinpath;
    4280      294864 :     double      outer_path_rows = outer_path->rows;
    4281      294864 :     double      inner_path_rows = inner_path->rows;
    4282      294864 :     double      inner_path_rows_total = workspace->inner_rows_total;
    4283      294864 :     List       *hashclauses = path->path_hashclauses;
    4284      294864 :     Cost        startup_cost = workspace->startup_cost;
    4285      294864 :     Cost        run_cost = workspace->run_cost;
    4286      294864 :     int         numbuckets = workspace->numbuckets;
    4287      294864 :     int         numbatches = workspace->numbatches;
    4288             :     Cost        cpu_per_tuple;
    4289             :     QualCost    hash_qual_cost;
    4290             :     QualCost    qp_qual_cost;
    4291             :     double      hashjointuples;
    4292             :     double      virtualbuckets;
    4293             :     Selectivity innerbucketsize;
    4294             :     Selectivity innermcvfreq;
    4295             :     ListCell   *hcl;
    4296             : 
    4297             :     /* Set the number of disabled nodes. */
    4298      294864 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    4299             : 
    4300             :     /* Mark the path with the correct row estimate */
    4301      294864 :     if (path->jpath.path.param_info)
    4302        1422 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    4303             :     else
    4304      293442 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    4305             : 
    4306             :     /* For partial paths, scale row estimate. */
    4307      294864 :     if (path->jpath.path.parallel_workers > 0)
    4308             :     {
    4309       11340 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    4310             : 
    4311       11340 :         path->jpath.path.rows =
    4312       11340 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    4313             :     }
    4314             : 
    4315             :     /* mark the path with estimated # of batches */
    4316      294864 :     path->num_batches = numbatches;
    4317             : 
    4318             :     /* store the total number of tuples (sum of partial row estimates) */
    4319      294864 :     path->inner_rows_total = inner_path_rows_total;
    4320             : 
    4321             :     /* and compute the number of "virtual" buckets in the whole join */
    4322      294864 :     virtualbuckets = (double) numbuckets * (double) numbatches;
    4323             : 
    4324             :     /*
    4325             :      * Determine bucketsize fraction and MCV frequency for the inner relation.
    4326             :      * We use the smallest bucketsize or MCV frequency estimated for any
    4327             :      * individual hashclause; this is undoubtedly conservative.
    4328             :      *
    4329             :      * BUT: if inner relation has been unique-ified, we can assume it's good
    4330             :      * for hashing.  This is important both because it's the right answer, and
    4331             :      * because we avoid contaminating the cache with a value that's wrong for
    4332             :      * non-unique-ified paths.
    4333             :      */
    4334      294864 :     if (IsA(inner_path, UniquePath))
    4335             :     {
    4336        4716 :         innerbucketsize = 1.0 / virtualbuckets;
    4337        4716 :         innermcvfreq = 0.0;
    4338             :     }
    4339             :     else
    4340             :     {
    4341             :         List       *otherclauses;
    4342             : 
    4343      290148 :         innerbucketsize = 1.0;
    4344      290148 :         innermcvfreq = 1.0;
    4345             : 
    4346             :         /* At first, try to estimate bucket size using extended statistics. */
    4347      290148 :         otherclauses = estimate_multivariate_bucketsize(root,
    4348             :                                                         inner_path->parent,
    4349             :                                                         hashclauses,
    4350             :                                                         &innerbucketsize);
    4351             : 
    4352             :         /* Pass through the remaining clauses */
    4353      610794 :         foreach(hcl, otherclauses)
    4354             :         {
    4355      320646 :             RestrictInfo *restrictinfo = lfirst_node(RestrictInfo, hcl);
    4356             :             Selectivity thisbucketsize;
    4357             :             Selectivity thismcvfreq;
    4358             : 
    4359             :             /*
    4360             :              * First we have to figure out which side of the hashjoin clause
    4361             :              * is the inner side.
    4362             :              *
    4363             :              * Since we tend to visit the same clauses over and over when
    4364             :              * planning a large query, we cache the bucket stats estimates in
    4365             :              * the RestrictInfo node to avoid repeated lookups of statistics.
    4366             :              */
    4367      320646 :             if (bms_is_subset(restrictinfo->right_relids,
    4368      320646 :                               inner_path->parent->relids))
    4369             :             {
    4370             :                 /* righthand side is inner */
    4371      167716 :                 thisbucketsize = restrictinfo->right_bucketsize;
    4372      167716 :                 if (thisbucketsize < 0)
    4373             :                 {
    4374             :                     /* not cached yet */
    4375       90438 :                     estimate_hash_bucket_stats(root,
    4376       90438 :                                                get_rightop(restrictinfo->clause),
    4377             :                                                virtualbuckets,
    4378             :                                                &restrictinfo->right_mcvfreq,
    4379             :                                                &restrictinfo->right_bucketsize);
    4380       90438 :                     thisbucketsize = restrictinfo->right_bucketsize;
    4381             :                 }
    4382      167716 :                 thismcvfreq = restrictinfo->right_mcvfreq;
    4383             :             }
    4384             :             else
    4385             :             {
    4386             :                 Assert(bms_is_subset(restrictinfo->left_relids,
    4387             :                                      inner_path->parent->relids));
    4388             :                 /* lefthand side is inner */
    4389      152930 :                 thisbucketsize = restrictinfo->left_bucketsize;
    4390      152930 :                 if (thisbucketsize < 0)
    4391             :                 {
    4392             :                     /* not cached yet */
    4393       78468 :                     estimate_hash_bucket_stats(root,
    4394       78468 :                                                get_leftop(restrictinfo->clause),
    4395             :                                                virtualbuckets,
    4396             :                                                &restrictinfo->left_mcvfreq,
    4397             :                                                &restrictinfo->left_bucketsize);
    4398       78468 :                     thisbucketsize = restrictinfo->left_bucketsize;
    4399             :                 }
    4400      152930 :                 thismcvfreq = restrictinfo->left_mcvfreq;
    4401             :             }
    4402             : 
    4403      320646 :             if (innerbucketsize > thisbucketsize)
    4404      203874 :                 innerbucketsize = thisbucketsize;
    4405      320646 :             if (innermcvfreq > thismcvfreq)
    4406      290402 :                 innermcvfreq = thismcvfreq;
    4407             :         }
    4408             :     }
    4409             : 
    4410             :     /*
    4411             :      * If the bucket holding the inner MCV would exceed hash_mem, we don't
    4412             :      * want to hash unless there is really no other alternative, so apply
    4413             :      * disable_cost.  (The executor normally copes with excessive memory usage
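                     :     /*
                     :      * Editor's note: for scale, with assumed values of 1e6 inner rows,
                     :      * width 100 and innermcvfreq = 0.2, the bucket for the most common
                     :      * value alone holds about 200000 tuples, i.e. well over 20 MB once
                     :      * per-tuple overhead is added; unless the hash memory limit exceeds
                     :      * that, batch splitting cannot help and the path is penalized.
                     :      */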
    4414             :      * by splitting batches, but obviously it cannot separate equal values
    4415             :      * that way, so it will be unable to drive the batch size below hash_mem
    4416             :      * when this is true.)
    4417             :      */
    4418      294864 :     if (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),
    4419      589728 :                            inner_path->pathtarget->width) > get_hash_memory_limit())
    4420           6 :         startup_cost += disable_cost;
    4421             : 
    4422             :     /*
    4423             :      * Compute cost of the hashquals and qpquals (other restriction clauses)
    4424             :      * separately.
    4425             :      */
    4426      294864 :     cost_qual_eval(&hash_qual_cost, hashclauses, root);
    4427      294864 :     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
    4428      294864 :     qp_qual_cost.startup -= hash_qual_cost.startup;
    4429      294864 :     qp_qual_cost.per_tuple -= hash_qual_cost.per_tuple;
    4430             : 
    4431             :     /* CPU costs */
    4432             : 
    4433      294864 :     if (path->jpath.jointype == JOIN_SEMI ||
    4434      288874 :         path->jpath.jointype == JOIN_ANTI ||
    4435      283800 :         extra->inner_unique)
    4436      125584 :     {
    4437             :         double      outer_matched_rows;
    4438             :         Selectivity inner_scan_frac;
    4439             : 
    4440             :         /*
    4441             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    4442             :          * executor will stop after the first match.
    4443             :          *
    4444             :          * For an outer-rel row that has at least one match, we can expect the
    4445             :          * bucket scan to stop after a fraction 1/(match_count+1) of the
    4446             :          * bucket's rows, if the matches are evenly distributed.  Since they
    4447             :          * probably aren't quite evenly distributed, we apply a fuzz factor of
    4448             :          * 2.0 to that fraction.  (If we used a larger fuzz factor, we'd have
    4449             :          * to clamp inner_scan_frac to at most 1.0; but since match_count is
    4450             :          * at least 1, no such clamp is needed now.)
    4451             :          */
    4452      125584 :         outer_matched_rows = rint(outer_path_rows * extra->semifactors.outer_match_frac);
    4453      125584 :         inner_scan_frac = 2.0 / (extra->semifactors.match_count + 1.0);
    4454             : 
    4455      125584 :         startup_cost += hash_qual_cost.startup;
    4456      251168 :         run_cost += hash_qual_cost.per_tuple * outer_matched_rows *
    4457      125584 :             clamp_row_est(inner_path_rows * innerbucketsize * inner_scan_frac) * 0.5;
    4458             : 
    4459             :         /*
    4460             :          * For unmatched outer-rel rows, the picture is quite a lot different.
    4461             :          * In the first place, there is no reason to assume that these rows
    4462             :          * preferentially hit heavily-populated buckets; instead assume they
    4463             :          * are uncorrelated with the inner distribution and so they see an
    4464             :          * average bucket size of inner_path_rows / virtualbuckets.  In the
    4465             :          * second place, it seems likely that they will have few if any exact
    4466             :          * hash-code matches and so very few of the tuples in the bucket will
    4467             :          * actually require eval of the hash quals.  We don't have any good
    4468             :          * way to estimate how many will, but for the moment assume that the
    4469             :          * effective cost per bucket entry is one-tenth what it is for
    4470             :          * matchable tuples.
    4471             :          */
    4472      251168 :         run_cost += hash_qual_cost.per_tuple *
    4473      251168 :             (outer_path_rows - outer_matched_rows) *
    4474      125584 :             clamp_row_est(inner_path_rows / virtualbuckets) * 0.05;
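                     :         /*
                     :          * Editor's note: plugging assumed numbers into the above.  With
                     :          * outer_path_rows = 10000, outer_match_frac = 0.5 and match_count =
                     :          * 3, outer_matched_rows = 5000 and inner_scan_frac = 2.0 / 4.0 =
                     :          * 0.5, so each matched outer row is expected to scan half of its
                     :          * bucket before stopping; the other 5000 rows were charged above
                     :          * against the average bucket size at one-tenth of the halved
                     :          * per-entry cost, the 0.05 factor.
                     :          */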
    4475             : 
    4476             :         /* Get # of tuples that will pass the basic join */
    4477      125584 :         if (path->jpath.jointype == JOIN_ANTI)
    4478        5074 :             hashjointuples = outer_path_rows - outer_matched_rows;
    4479             :         else
    4480      120510 :             hashjointuples = outer_matched_rows;
    4481             :     }
    4482             :     else
    4483             :     {
    4484             :         /*
    4485             :          * The number of tuple comparisons needed is the number of outer
    4486             :          * tuples times the typical number of tuples in a hash bucket, which
    4487             :          * is the inner relation size times its bucketsize fraction.  At each
    4488             :          * one, we need to evaluate the hashjoin quals.  But actually,
    4489             :          * charging the full qual eval cost at each tuple is pessimistic,
    4490             :          * since we don't evaluate the quals unless the hash values match
    4491             :          * exactly.  For lack of a better idea, halve the cost estimate to
    4492             :          * allow for that.
    4493             :          */
    4494      169280 :         startup_cost += hash_qual_cost.startup;
    4495      338560 :         run_cost += hash_qual_cost.per_tuple * outer_path_rows *
    4496      169280 :             clamp_row_est(inner_path_rows * innerbucketsize) * 0.5;
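                     :         /*
                     :          * Editor's note: with assumed values of 10000 outer rows, 1000
                     :          * inner rows and innerbucketsize = 0.01, each outer tuple meets
                     :          * about 1000 * 0.01 = 10 bucket entries, so the halved charge is
                     :          * hash_qual_cost.per_tuple * 10000 * 10 * 0.5.
                     :          */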
    4497             : 
    4498             :         /*
    4499             :          * Get approx # tuples passing the hashquals.  We use
    4500             :          * approx_tuple_count here because we need an estimate done with
    4501             :          * JOIN_INNER semantics.
    4502             :          */
    4503      169280 :         hashjointuples = approx_tuple_count(root, &path->jpath, hashclauses);
    4504             :     }
    4505             : 
    4506             :     /*
    4507             :      * For each tuple that gets through the hashjoin proper, we charge
    4508             :      * cpu_tuple_cost plus the cost of evaluating additional restriction
    4509             :      * clauses that are to be applied at the join.  (This is pessimistic since
    4510             :      * not all of the quals may get evaluated at each tuple.)
    4511             :      */
    4512      294864 :     startup_cost += qp_qual_cost.startup;
    4513      294864 :     cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    4514      294864 :     run_cost += cpu_per_tuple * hashjointuples;
    4515             : 
    4516             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    4517      294864 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    4518      294864 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    4519             : 
    4520      294864 :     path->jpath.path.startup_cost = startup_cost;
    4521      294864 :     path->jpath.path.total_cost = startup_cost + run_cost;
    4522      294864 : }
    4523             : 
    4524             : 
    4525             : /*
    4526             :  * cost_subplan
    4527             :  *      Figure the costs for a SubPlan (or initplan).
    4528             :  *
    4529             :  * Note: we could dig the subplan's Plan out of the root list, but in practice
    4530             :  * all callers have it handy already, so we make them pass it.
    4531             :  */
    4532             : void
    4533       45710 : cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
    4534             : {
    4535             :     QualCost    sp_cost;
    4536             : 
    4537             :     /* Figure any cost for evaluating the testexpr */
    4538       45710 :     cost_qual_eval(&sp_cost,
    4539       45710 :                    make_ands_implicit((Expr *) subplan->testexpr),
    4540             :                    root);
    4541             : 
    4542       45710 :     if (subplan->useHashTable)
    4543             :     {
    4544             :         /*
    4545             :          * If we are using a hash table for the subquery outputs, then the
    4546             :          * cost of evaluating the query is a one-time cost.  We charge one
    4547             :          * cpu_operator_cost per tuple for the work of loading the hashtable,
    4548             :          * too.
    4549             :          */
    4550        2174 :         sp_cost.startup += plan->total_cost +
    4551        2174 :             cpu_operator_cost * plan->plan_rows;
    4552             : 
    4553             :         /*
    4554             :          * The per-tuple costs include the cost of evaluating the lefthand
    4555             :          * expressions, plus the cost of probing the hashtable.  We already
    4556             :          * accounted for the lefthand expressions as part of the testexpr, and
    4557             :          * will also have counted one cpu_operator_cost for each comparison
    4558             :          * operator.  That is probably too low for the probing cost, but it's
    4559             :          * hard to make a better estimate, so live with it for now.
    4560             :          */
    4561             :     }
    4562             :     else
    4563             :     {
    4564             :         /*
    4565             :          * Otherwise we will be rescanning the subplan output on each
    4566             :          * evaluation.  We need to estimate how much of the output we will
    4567             :          * actually need to scan.  NOTE: this logic should agree with the
    4568             :          * tuple_fraction estimates used by make_subplan() in
    4569             :          * plan/subselect.c.
    4570             :          */
    4571       43536 :         Cost        plan_run_cost = plan->total_cost - plan->startup_cost;
    4572             : 
    4573       43536 :         if (subplan->subLinkType == EXISTS_SUBLINK)
    4574             :         {
    4575             :             /* we only need to fetch 1 tuple; clamp to avoid zero divide */
    4576        2630 :             sp_cost.per_tuple += plan_run_cost / clamp_row_est(plan->plan_rows);
    4577             :         }
    4578       40906 :         else if (subplan->subLinkType == ALL_SUBLINK ||
    4579       40888 :                  subplan->subLinkType == ANY_SUBLINK)
    4580             :         {
    4581             :             /* assume we need 50% of the tuples */
    4582         134 :             sp_cost.per_tuple += 0.50 * plan_run_cost;
    4583             :             /* also charge a cpu_operator_cost per row examined */
    4584         134 :             sp_cost.per_tuple += 0.50 * plan->plan_rows * cpu_operator_cost;
    4585             :         }
    4586             :         else
    4587             :         {
    4588             :             /* assume we need all tuples */
    4589       40772 :             sp_cost.per_tuple += plan_run_cost;
    4590             :         }
    4591             : 
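                     :         /*
                     :          * Editor's note: for a subplan with startup_cost = 5, total_cost =
                     :          * 105 and plan_rows = 1000 (all assumed), plan_run_cost = 100 and
                     :          * the per-call run charge is 100 / 1000 = 0.1 for EXISTS,
                     :          * 0.5 * 100 + 0.5 * 1000 * 0.0025 = 51.25 for ANY/ALL at the
                     :          * default cpu_operator_cost, and the full 100 otherwise; the
                     :          * subplan's startup cost of 5 is accounted for separately below.
                     :          */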
    4592             :         /*
    4593             :          * Also account for subplan's startup cost. If the subplan is
    4594             :          * uncorrelated or undirect correlated, AND its topmost node is one
    4595             :          * that materializes its output, assume that we'll only need to pay
    4596             :          * its startup cost once; otherwise assume we pay the startup cost
    4597             :          * every time.
    4598             :          */
    4599       57106 :         if (subplan->parParam == NIL &&
    4600       13570 :             ExecMaterializesOutput(nodeTag(plan)))
    4601         690 :             sp_cost.startup += plan->startup_cost;
    4602             :         else
    4603       42846 :             sp_cost.per_tuple += plan->startup_cost;
    4604             :     }
    4605             : 
    4606       45710 :     subplan->startup_cost = sp_cost.startup;
    4607       45710 :     subplan->per_call_cost = sp_cost.per_tuple;
    4608       45710 : }
    4609             : 
    4610             : 
    4611             : /*
    4612             :  * cost_rescan
    4613             :  *      Given a finished Path, estimate the costs of rescanning it after
    4614             :  *      having done so the first time.  For some Path types a rescan is
    4615             :  *      cheaper than an original scan (if no parameters change), and this
    4616             :  *      function embodies knowledge about that.  The default is to return
    4617             :  *      the same costs stored in the Path.  (Note that the cost estimates
    4618             :  *      actually stored in Paths are always for first scans.)
    4619             :  *
    4620             :  * This function is not currently intended to model effects such as rescans
    4621             :  * being cheaper due to disk block caching; what we are concerned with is
    4622             :  * plan types wherein the executor caches results explicitly, or doesn't
    4623             :  * redo startup calculations, etc.
    4624             :  */
    4625             : static void
    4626     2877214 : cost_rescan(PlannerInfo *root, Path *path,
    4627             :             Cost *rescan_startup_cost,  /* output parameters */
    4628             :             Cost *rescan_total_cost)
    4629             : {
    4630     2877214 :     switch (path->pathtype)
    4631             :     {
    4632       54446 :         case T_FunctionScan:
    4633             : 
    4634             :             /*
    4635             :              * Currently, nodeFunctionscan.c always executes the function to
    4636             :              * completion before returning any rows, and caches the results in
    4637             :              * a tuplestore.  So the function eval cost is all startup cost
    4638             :              * and isn't paid again on rescans.  However, all run costs
    4639             :              * will be incurred again on each rescan.
    4640             :              */
    4641       54446 :             *rescan_startup_cost = 0;
    4642       54446 :             *rescan_total_cost = path->total_cost - path->startup_cost;
    4643       54446 :             break;
    4644      128374 :         case T_HashJoin:
    4645             : 
    4646             :             /*
    4647             :              * If it's a single-batch join, we don't need to rebuild the hash
    4648             :              * table during a rescan.
    4649             :              */
    4650      128374 :             if (((HashPath *) path)->num_batches == 1)
    4651             :             {
    4652             :                 /* Startup cost is exactly the cost of hash table building */
    4653      128374 :                 *rescan_startup_cost = 0;
    4654      128374 :                 *rescan_total_cost = path->total_cost - path->startup_cost;
    4655             :             }
    4656             :             else
    4657             :             {
    4658             :                 /* Otherwise, no special treatment */
    4659           0 :                 *rescan_startup_cost = path->startup_cost;
    4660           0 :                 *rescan_total_cost = path->total_cost;
    4661             :             }
    4662      128374 :             break;
    4663        8742 :         case T_CteScan:
    4664             :         case T_WorkTableScan:
    4665             :             {
    4666             :                 /*
    4667             :                  * These plan types materialize their final result in a
    4668             :                  * tuplestore or tuplesort object.  So the rescan cost is only
    4669             :                  * cpu_tuple_cost per tuple, unless the result is large enough
    4670             :                  * to spill to disk.
    4671             :                  */
    4672        8742 :                 Cost        run_cost = cpu_tuple_cost * path->rows;
    4673        8742 :                 double      nbytes = relation_byte_size(path->rows,
    4674        8742 :                                                         path->pathtarget->width);
    4675        8742 :                 double      work_mem_bytes = work_mem * (Size) 1024;
    4676             : 
    4677        8742 :                 if (nbytes > work_mem_bytes)
    4678             :                 {
    4679             :                     /* It will spill, so account for re-read cost */
    4680         296 :                     double      npages = ceil(nbytes / BLCKSZ);
    4681             : 
    4682         296 :                     run_cost += seq_page_cost * npages;
    4683             :                 }
    4684        8742 :                 *rescan_startup_cost = 0;
    4685        8742 :                 *rescan_total_cost = run_cost;
    4686             :             }
    4687        8742 :             break;
    4688      977210 :         case T_Material:
    4689             :         case T_Sort:
    4690             :             {
    4691             :                 /*
    4692             :                  * These plan types not only materialize their results, they also
    4693             :                  * do not implement qual filtering or projection.  So they are
    4694             :                  * even cheaper to rescan than the ones above.  We charge only
    4695             :                  * cpu_operator_cost per tuple.  (Note: keep that in sync with
    4696             :                  * the run_cost charge in cost_sort, and also see comments in
    4697             :                  * cost_material before you change it.)
    4698             :                  */
    4699      977210 :                 Cost        run_cost = cpu_operator_cost * path->rows;
    4700      977210 :                 double      nbytes = relation_byte_size(path->rows,
    4701      977210 :                                                         path->pathtarget->width);
    4702      977210 :                 double      work_mem_bytes = work_mem * (Size) 1024;
    4703             : 
    4704      977210 :                 if (nbytes > work_mem_bytes)
    4705             :                 {
    4706             :                     /* It will spill, so account for re-read cost */
    4707       11554 :                     double      npages = ceil(nbytes / BLCKSZ);
    4708             : 
    4709       11554 :                     run_cost += seq_page_cost * npages;
    4710             :                 }
    4711      977210 :                 *rescan_startup_cost = 0;
    4712      977210 :                 *rescan_total_cost = run_cost;
    4713             :             }
    4714      977210 :             break;
    4715      290258 :         case T_Memoize:
    4716             :             /* All the hard work is done by cost_memoize_rescan */
    4717      290258 :             cost_memoize_rescan(root, (MemoizePath *) path,
    4718             :                                 rescan_startup_cost, rescan_total_cost);
    4719      290258 :             break;
    4720     1418184 :         default:
    4721     1418184 :             *rescan_startup_cost = path->startup_cost;
    4722     1418184 :             *rescan_total_cost = path->total_cost;
    4723     1418184 :             break;
    4724             :     }
    4725     2877214 : }
    4726             : 
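To make the Material/Sort rescan charge above concrete, here is a minimal standalone sketch (not PostgreSQL source; the default values seq_page_cost = 1.0 and cpu_operator_cost = 0.0025 are assumed, along with BLCKSZ = 8192 and a 4 MB work_mem, and relation_byte_size is crudely approximated as rows * width, ignoring per-tuple overhead):

    #include <math.h>
    #include <stdio.h>

    /* Assumed default cost parameters (user-settable GUCs in PostgreSQL) */
    static const double seq_page_cost = 1.0;
    static const double cpu_operator_cost = 0.0025;
    static const double blcksz = 8192.0;
    static const double work_mem_bytes = 4096.0 * 1024.0;  /* work_mem = 4MB */

    /* Rescan run cost of a Material/Sort node, per the logic above;
     * relation_byte_size is approximated as rows * width. */
    static double
    material_rescan_cost(double rows, double width)
    {
        double run_cost = cpu_operator_cost * rows;
        double nbytes = rows * width;

        if (nbytes > work_mem_bytes)        /* result spills to disk */
            run_cost += seq_page_cost * ceil(nbytes / blcksz);
        return run_cost;
    }

    int
    main(void)
    {
        /* Fits in work_mem: only the per-tuple CPU charge (25). */
        printf("%.0f\n", material_rescan_cost(10000.0, 64.0));
        /* Spills: 2500 CPU + 7813 page re-reads = 10313. */
        printf("%.0f\n", material_rescan_cost(1000000.0, 64.0));
        return 0;
    }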
    4727             : 
    4728             : /*
    4729             :  * cost_qual_eval
    4730             :  *      Estimate the CPU costs of evaluating a WHERE clause.
    4731             :  *      The input can be either an implicitly-ANDed list of boolean
    4732             :  *      expressions, or a list of RestrictInfo nodes.  (The latter is
    4733             :  *      preferred since it allows caching of the results.)
    4734             :  *      The result includes both a one-time (startup) component,
    4735             :  *      and a per-evaluation component.
    4736             :  *
    4737             :  * Note: in some code paths root can be passed as NULL, resulting in
    4738             :  * slightly worse estimates.
    4739             :  */
    4740             : void
    4741     3958882 : cost_qual_eval(QualCost *cost, List *quals, PlannerInfo *root)
    4742             : {
    4743             :     cost_qual_eval_context context;
    4744             :     ListCell   *l;
    4745             : 
    4746     3958882 :     context.root = root;
    4747     3958882 :     context.total.startup = 0;
    4748     3958882 :     context.total.per_tuple = 0;
    4749             : 
    4750             :     /* We don't charge any cost for the implicit ANDing at top level ... */
    4751             : 
    4752     7482174 :     foreach(l, quals)
    4753             :     {
    4754     3523292 :         Node       *qual = (Node *) lfirst(l);
    4755             : 
    4756     3523292 :         cost_qual_eval_walker(qual, &context);
    4757             :     }
    4758             : 
    4759     3958882 :     *cost = context.total;
    4760     3958882 : }
    4761             : 
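The startup/per-tuple split returned here is meant to be folded into a path's costs roughly as follows; a hedged sketch (names and numbers are illustrative, not the actual callers):

    #include <stdio.h>

    typedef struct { double startup; double per_tuple; } QualCost;

    int
    main(void)
    {
        /* Suppose cost_qual_eval reported a one-time cost of 2.5 (e.g. a
         * pseudoconstant qual tested once) plus 0.0075 per row (three
         * operators at the default cpu_operator_cost of 0.0025). */
        QualCost qc = {2.5, 0.0075};
        double tuples = 100000.0;

        double startup_cost = qc.startup;          /* paid before first row */
        double run_cost = qc.per_tuple * tuples;   /* paid per row examined */

        printf("startup=%.1f run=%.1f\n", startup_cost, run_cost);
        return 0;
    }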
    4762             : /*
    4763             :  * cost_qual_eval_node
    4764             :  *      As above, for a single RestrictInfo or expression.
    4765             :  */
    4766             : void
    4767     1820036 : cost_qual_eval_node(QualCost *cost, Node *qual, PlannerInfo *root)
    4768             : {
    4769             :     cost_qual_eval_context context;
    4770             : 
    4771     1820036 :     context.root = root;
    4772     1820036 :     context.total.startup = 0;
    4773     1820036 :     context.total.per_tuple = 0;
    4774             : 
    4775     1820036 :     cost_qual_eval_walker(qual, &context);
    4776             : 
    4777     1820036 :     *cost = context.total;
    4778     1820036 : }
    4779             : 
    4780             : static bool
    4781     8858316 : cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
    4782             : {
    4783     8858316 :     if (node == NULL)
    4784       90594 :         return false;
    4785             : 
    4786             :     /*
    4787             :      * RestrictInfo nodes contain an eval_cost field reserved for this
    4788             :      * routine's use, so that it's not necessary to evaluate the qual clause's
    4789             :      * cost more than once.  If the clause's cost hasn't been computed yet,
    4790             :      * the field's startup value will contain -1.
    4791             :      */
    4792     8767722 :     if (IsA(node, RestrictInfo))
    4793             :     {
    4794     3715342 :         RestrictInfo *rinfo = (RestrictInfo *) node;
    4795             : 
    4796     3715342 :         if (rinfo->eval_cost.startup < 0)
    4797             :         {
    4798             :             cost_qual_eval_context locContext;
    4799             : 
    4800      579094 :             locContext.root = context->root;
    4801      579094 :             locContext.total.startup = 0;
    4802      579094 :             locContext.total.per_tuple = 0;
    4803             : 
    4804             :             /*
    4805             :              * For an OR clause, recurse into the marked-up tree so that we
    4806             :              * set the eval_cost for contained RestrictInfos too.
    4807             :              */
    4808      579094 :             if (rinfo->orclause)
    4809       10386 :                 cost_qual_eval_walker((Node *) rinfo->orclause, &locContext);
    4810             :             else
    4811      568708 :                 cost_qual_eval_walker((Node *) rinfo->clause, &locContext);
    4812             : 
    4813             :             /*
    4814             :              * If the RestrictInfo is marked pseudoconstant, it will be tested
    4815             :              * only once, so treat its cost as all startup cost.
    4816             :              */
    4817      579094 :             if (rinfo->pseudoconstant)
    4818             :             {
    4819             :                 /* count one execution during startup */
    4820        9976 :                 locContext.total.startup += locContext.total.per_tuple;
    4821        9976 :                 locContext.total.per_tuple = 0;
    4822             :             }
    4823      579094 :             rinfo->eval_cost = locContext.total;
    4824             :         }
    4825     3715342 :         context->total.startup += rinfo->eval_cost.startup;
    4826     3715342 :         context->total.per_tuple += rinfo->eval_cost.per_tuple;
    4827             :         /* do NOT recurse into children */
    4828     3715342 :         return false;
    4829             :     }
    4830             : 
    4831             :     /*
    4832             :      * For each operator or function node in the given tree, we charge the
    4833             :      * estimated execution cost given by pg_proc.procost (remember to multiply
    4834             :      * this by cpu_operator_cost).
    4835             :      *
    4836             :      * Vars and Consts are charged zero, and so are boolean operators (AND,
    4837             :      * OR, NOT). Simplistic, but a lot better than no model at all.
    4838             :      *
    4839             :      * Should we try to account for the possibility of short-circuit
    4840             :      * evaluation of AND/OR?  Probably *not*, because that would make the
    4841             :      * results depend on the clause ordering, and we are not in any position
    4842             :      * to expect that the current ordering of the clauses is the one that's
    4843             :      * going to end up being used.  The above per-RestrictInfo caching would
    4844             :      * not mix well with trying to re-order clauses anyway.
    4845             :      *
    4846             :      * Another issue that is entirely ignored here is that if a set-returning
    4847             :      * function is below top level in the tree, the functions/operators above
    4848             :      * it will need to be evaluated multiple times.  In practical use, such
    4849             :      * cases arise so seldom as to not be worth the added complexity needed;
    4850             :      * cases arise so seldom as not to be worth the added complexity;
    4851             :      * phony, the results would also be pretty phony.
    4852             :      */
    4853     5052380 :     if (IsA(node, FuncExpr))
    4854             :     {
    4855      351276 :         add_function_cost(context->root, ((FuncExpr *) node)->funcid, node,
    4856             :                           &context->total);
    4857             :     }
    4858     4701104 :     else if (IsA(node, OpExpr) ||
    4859     4057064 :              IsA(node, DistinctExpr) ||
    4860     4055816 :              IsA(node, NullIfExpr))
    4861             :     {
    4862             :         /* rely on struct equivalence to treat these all alike */
    4863      645412 :         set_opfuncid((OpExpr *) node);
    4864      645412 :         add_function_cost(context->root, ((OpExpr *) node)->opfuncid, node,
    4865             :                           &context->total);
    4866             :     }
    4867     4055692 :     else if (IsA(node, ScalarArrayOpExpr))
    4868             :     {
    4869       44480 :         ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) node;
    4870       44480 :         Node       *arraynode = (Node *) lsecond(saop->args);
    4871             :         QualCost    sacosts;
    4872             :         QualCost    hcosts;
    4873       44480 :         double      estarraylen = estimate_array_length(context->root, arraynode);
    4874             : 
    4875       44480 :         set_sa_opfuncid(saop);
    4876       44480 :         sacosts.startup = sacosts.per_tuple = 0;
    4877       44480 :         add_function_cost(context->root, saop->opfuncid, NULL,
    4878             :                           &sacosts);
    4879             : 
    4880       44480 :         if (OidIsValid(saop->hashfuncid))
    4881             :         {
    4882             :             /* Handle costs for hashed ScalarArrayOpExpr */
    4883         492 :             hcosts.startup = hcosts.per_tuple = 0;
    4884             : 
    4885         492 :             add_function_cost(context->root, saop->hashfuncid, NULL, &hcosts);
    4886         492 :             context->total.startup += sacosts.startup + hcosts.startup;
    4887             : 
    4888             :             /* Estimate the cost of building the hashtable. */
    4889         492 :             context->total.startup += estarraylen * hcosts.per_tuple;
    4890             : 
    4891             :             /*
    4892             :              * XXX should we charge a little bit for sacosts.per_tuple when
    4893             :              * building the table, or is it OK to assume there will be zero
    4894             :              * hash collisions?
    4895             :              */
    4896             : 
    4897             :             /*
    4898             :              * Charge for hashtable lookups: a single hash and a single
    4899             :              * comparison per tuple.
    4900             :              */
    4901         492 :             context->total.per_tuple += hcosts.per_tuple + sacosts.per_tuple;
    4902             :         }
    4903             :         else
    4904             :         {
    4905             :             /*
    4906             :              * Estimate that the operator will be applied to about half of the
    4907             :              * array elements before the answer is determined.
    4908             :              */
    4909       43988 :             context->total.startup += sacosts.startup;
    4910       87976 :             context->total.per_tuple += sacosts.per_tuple *
    4911       43988 :                 estimate_array_length(context->root, arraynode) * 0.5;
    4912             :         }
    4913             :     }
    4914     4011212 :     else if (IsA(node, Aggref) ||
    4915     3956090 :              IsA(node, WindowFunc))
    4916             :     {
    4917             :         /*
    4918             :          * Aggref and WindowFunc nodes are (and should be) treated like Vars,
    4919             :          * ie, zero execution cost in the current model, because they behave
    4920             :          * essentially like Vars at execution.  We disregard the costs of
    4921             :          * their input expressions for the same reason.  The actual execution
    4922             :          * costs of the aggregate/window functions and their arguments have to
    4923             :          * be factored into plan-node-specific costing of the Agg or WindowAgg
    4924             :          * plan node.
    4925             :          */
    4926       58646 :         return false;           /* don't recurse into children */
    4927             :     }
    4928     3952566 :     else if (IsA(node, GroupingFunc))
    4929             :     {
    4930             :         /* Treat this as having cost 1 */
    4931         422 :         context->total.per_tuple += cpu_operator_cost;
    4932         422 :         return false;           /* don't recurse into children */
    4933             :     }
    4934     3952144 :     else if (IsA(node, CoerceViaIO))
    4935             :     {
    4936       22074 :         CoerceViaIO *iocoerce = (CoerceViaIO *) node;
    4937             :         Oid         iofunc;
    4938             :         Oid         typioparam;
    4939             :         bool        typisvarlena;
    4940             : 
    4941             :         /* check the result type's input function */
    4942       22074 :         getTypeInputInfo(iocoerce->resulttype,
    4943             :                          &iofunc, &typioparam);
    4944       22074 :         add_function_cost(context->root, iofunc, NULL,
    4945             :                           &context->total);
    4946             :         /* check the input type's output function */
    4947       22074 :         getTypeOutputInfo(exprType((Node *) iocoerce->arg),
    4948             :                           &iofunc, &typisvarlena);
    4949       22074 :         add_function_cost(context->root, iofunc, NULL,
    4950             :                           &context->total);
    4951             :     }
    4952     3930070 :     else if (IsA(node, ArrayCoerceExpr))
    4953             :     {
    4954        5288 :         ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node;
    4955             :         QualCost    perelemcost;
    4956             : 
    4957        5288 :         cost_qual_eval_node(&perelemcost, (Node *) acoerce->elemexpr,
    4958             :                             context->root);
    4959        5288 :         context->total.startup += perelemcost.startup;
    4960        5288 :         if (perelemcost.per_tuple > 0)
    4961          66 :             context->total.per_tuple += perelemcost.per_tuple *
    4962          66 :                 estimate_array_length(context->root, (Node *) acoerce->arg);
    4963             :     }
    4964     3924782 :     else if (IsA(node, RowCompareExpr))
    4965             :     {
    4966             :         /* Conservatively assume we will check all the columns */
    4967         216 :         RowCompareExpr *rcexpr = (RowCompareExpr *) node;
    4968             :         ListCell   *lc;
    4969             : 
    4970         702 :         foreach(lc, rcexpr->opnos)
    4971             :         {
    4972         486 :             Oid         opid = lfirst_oid(lc);
    4973             : 
    4974         486 :             add_function_cost(context->root, get_opcode(opid), NULL,
    4975             :                               &context->total);
    4976             :         }
    4977             :     }
    4978     3924566 :     else if (IsA(node, MinMaxExpr) ||
    4979     3924306 :              IsA(node, SQLValueFunction) ||
    4980     3919532 :              IsA(node, XmlExpr) ||
    4981     3918830 :              IsA(node, CoerceToDomain) ||
    4982     3909312 :              IsA(node, NextValueExpr) ||
    4983     3908950 :              IsA(node, JsonExpr))
    4984             :     {
    4985             :         /* Treat all these as having cost 1 */
    4986       18164 :         context->total.per_tuple += cpu_operator_cost;
    4987             :     }
    4988     3906402 :     else if (IsA(node, SubLink))
    4989             :     {
    4990             :         /* This routine should not be applied to un-planned expressions */
    4991           0 :         elog(ERROR, "cannot handle unplanned sub-select");
    4992             :     }
    4993     3906402 :     else if (IsA(node, SubPlan))
    4994             :     {
    4995             :         /*
    4996             :          * A subplan node in an expression typically indicates that the
    4997             :          * subplan will be executed on each evaluation, so charge accordingly.
    4998             :          * (Sub-selects that can be executed as InitPlans have already been
    4999             :          * removed from the expression.)
    5000             :          */
    5001       45400 :         SubPlan    *subplan = (SubPlan *) node;
    5002             : 
    5003       45400 :         context->total.startup += subplan->startup_cost;
    5004       45400 :         context->total.per_tuple += subplan->per_call_cost;
    5005             : 
    5006             :         /*
    5007             :          * We don't want to recurse into the testexpr, because it was already
    5008             :          * counted in the SubPlan node's costs.  So we're done.
    5009             :          */
    5010       45400 :         return false;
    5011             :     }
    5012     3861002 :     else if (IsA(node, AlternativeSubPlan))
    5013             :     {
    5014             :         /*
    5015             :          * Arbitrarily use the first alternative plan for costing.  (We should
    5016             :          * certainly only include one alternative, and we don't yet have
    5017             :          * enough information to know which one the executor is most likely to
    5018             :          * use.)
    5019             :          */
    5020        1912 :         AlternativeSubPlan *asplan = (AlternativeSubPlan *) node;
    5021             : 
    5022        1912 :         return cost_qual_eval_walker((Node *) linitial(asplan->subplans),
    5023             :                                      context);
    5024             :     }
    5025     3859090 :     else if (IsA(node, PlaceHolderVar))
    5026             :     {
    5027             :         /*
    5028             :          * A PlaceHolderVar should be given cost zero when considering general
    5029             :          * expression evaluation costs.  The expense of doing the contained
    5030             :          * expression is charged as part of the tlist eval costs of the scan
    5031             :          * or join where the PHV is first computed (see set_rel_width and
    5032             :          * add_placeholders_to_joinrel).  If we charged it again here, we'd be
    5033             :          * double-counting the cost for each level of plan that the PHV
    5034             :          * bubbles up through.  Hence, return without recursing into the
    5035             :          * phexpr.
    5036             :          */
    5037        4992 :         return false;
    5038             :     }
    5039             : 
    5040             :     /* recurse into children */
    5041     4941008 :     return expression_tree_walker(node, cost_qual_eval_walker, context);
    5042             : }
    5043             : 
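The benefit of the hashed ScalarArrayOpExpr branch above is easy to see numerically. A standalone sketch (default cpu_operator_cost assumed; both the hash function and the comparison operator are taken to cost one cpu_operator_cost each):

    #include <stdio.h>

    static const double cpu_operator_cost = 0.0025;

    int
    main(void)
    {
        double arraylen = 100.0;    /* estimated array length */

        /* Linear branch: operator applied to ~half the elements per tuple */
        double linear_per_tuple = cpu_operator_cost * arraylen * 0.5;

        /* Hashed branch: build the table once, then one hash plus one
         * comparison per tuple probed */
        double hashed_startup = arraylen * cpu_operator_cost;
        double hashed_per_tuple = cpu_operator_cost + cpu_operator_cost;

        printf("linear:  %.4f per tuple\n", linear_per_tuple);   /* 0.1250 */
        printf("hashed:  %.4f startup + %.4f per tuple\n",
               hashed_startup, hashed_per_tuple);                /* 0.2500 + 0.0050 */
        return 0;
    }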
    5044             : /*
    5045             :  * get_restriction_qual_cost
    5046             :  *    Compute evaluation costs of a baserel's restriction quals, plus any
    5047             :  *    movable join quals that have been pushed down to the scan.
    5048             :  *    Results are returned into *qpqual_cost.
    5049             :  *
    5050             :  * This is a convenience subroutine that works for seqscans and other cases
    5051             :  * where all the given quals will be evaluated the hard way.  It's not useful
    5052             :  * for cost_index(), for example, where the index machinery takes care of
    5053             :  * some of the quals.  We assume baserestrictcost was previously set by
    5054             :  * set_baserel_size_estimates().
    5055             :  */
    5056             : static void
    5057     1064428 : get_restriction_qual_cost(PlannerInfo *root, RelOptInfo *baserel,
    5058             :                           ParamPathInfo *param_info,
    5059             :                           QualCost *qpqual_cost)
    5060             : {
    5061     1064428 :     if (param_info)
    5062             :     {
    5063             :         /* Include costs of pushed-down clauses */
    5064      234468 :         cost_qual_eval(qpqual_cost, param_info->ppi_clauses, root);
    5065             : 
    5066      234468 :         qpqual_cost->startup += baserel->baserestrictcost.startup;
    5067      234468 :         qpqual_cost->per_tuple += baserel->baserestrictcost.per_tuple;
    5068             :     }
    5069             :     else
    5070      829960 :         *qpqual_cost = baserel->baserestrictcost;
    5071     1064428 : }
    5072             : 
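A hedged sketch of how a simple scan coster consumes this result, roughly mirroring cost_seqscan: cpu_tuple_cost plus the per-tuple qual cost is charged for every tuple scanned (illustrative values; cpu_tuple_cost defaults to 0.01):

    #include <stdio.h>

    typedef struct { double startup; double per_tuple; } QualCost;

    static const double cpu_tuple_cost = 0.01;   /* assumed default GUC value */

    int
    main(void)
    {
        /* Pretend get_restriction_qual_cost() filled this in */
        QualCost qpqual_cost = {0.0, 0.005};
        double tuples = 50000.0;
        double disk_run_cost = 1000.0;           /* e.g. seq_page_cost * pages */

        double startup_cost = qpqual_cost.startup;
        double cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
        double run_cost = disk_run_cost + cpu_per_tuple * tuples;

        printf("startup=%.2f total=%.2f\n", startup_cost,
               startup_cost + run_cost);
        return 0;
    }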
    5073             : 
    5074             : /*
    5075             :  * compute_semi_anti_join_factors
    5076             :  *    Estimate how much of the inner input a SEMI, ANTI, or inner_unique join
    5077             :  *    can be expected to scan.
    5078             :  *
    5079             :  * In a hash or nestloop SEMI/ANTI join, the executor will stop scanning
    5080             :  * inner rows as soon as it finds a match to the current outer row.
    5081             :  * The same happens if we have detected the inner rel is unique.
    5082             :  * We should therefore adjust some of the cost components for this effect.
    5083             :  * This function computes some estimates needed for these adjustments.
    5084             :  * These estimates will be the same regardless of the particular paths used
    5085             :  * for the outer and inner relation, so we compute these once and then pass
    5086             :  * them to all the join cost estimation functions.
    5087             :  *
    5088             :  * Input parameters:
    5089             :  *  joinrel: join relation under consideration
    5090             :  *  outerrel: outer relation under consideration
    5091             :  *  innerrel: inner relation under consideration
    5092             :  *  jointype: if not JOIN_SEMI or JOIN_ANTI, we assume it's inner_unique
    5093             :  *  sjinfo: SpecialJoinInfo relevant to this join
    5094             :  *  restrictlist: join quals
    5095             :  * Output parameters:
    5096             :  *  *semifactors is filled in (see pathnodes.h for field definitions)
    5097             :  */
    5098             : void
    5099      214156 : compute_semi_anti_join_factors(PlannerInfo *root,
    5100             :                                RelOptInfo *joinrel,
    5101             :                                RelOptInfo *outerrel,
    5102             :                                RelOptInfo *innerrel,
    5103             :                                JoinType jointype,
    5104             :                                SpecialJoinInfo *sjinfo,
    5105             :                                List *restrictlist,
    5106             :                                SemiAntiJoinFactors *semifactors)
    5107             : {
    5108             :     Selectivity jselec;
    5109             :     Selectivity nselec;
    5110             :     Selectivity avgmatch;
    5111             :     SpecialJoinInfo norm_sjinfo;
    5112             :     List       *joinquals;
    5113             :     ListCell   *l;
    5114             : 
    5115             :     /*
    5116             :      * In an ANTI join, we must ignore clauses that are "pushed down", since
    5117             :      * those won't affect the match logic.  In a SEMI join, we do not
    5118             :      * distinguish joinquals from "pushed down" quals, so just use the whole
    5119             :      * restrictinfo list.  For other outer join types, we should consider only
    5120             :      * non-pushed-down quals, so that this devolves to an IS_OUTER_JOIN check.
    5121             :      */
    5122      214156 :     if (IS_OUTER_JOIN(jointype))
    5123             :     {
    5124       78776 :         joinquals = NIL;
    5125      173812 :         foreach(l, restrictlist)
    5126             :         {
    5127       95036 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    5128             : 
    5129       95036 :             if (!RINFO_IS_PUSHED_DOWN(rinfo, joinrel->relids))
    5130       89098 :                 joinquals = lappend(joinquals, rinfo);
    5131             :         }
    5132             :     }
    5133             :     else
    5134      135380 :         joinquals = restrictlist;
    5135             : 
    5136             :     /*
    5137             :      * Get the JOIN_SEMI or JOIN_ANTI selectivity of the join clauses.
    5138             :      */
    5139      214156 :     jselec = clauselist_selectivity(root,
    5140             :                                     joinquals,
    5141             :                                     0,
    5142             :                                     (jointype == JOIN_ANTI) ? JOIN_ANTI : JOIN_SEMI,
    5143             :                                     sjinfo);
    5144             : 
    5145             :     /*
    5146             :      * Also get the normal inner-join selectivity of the join clauses.
    5147             :      */
    5148      214156 :     init_dummy_sjinfo(&norm_sjinfo, outerrel->relids, innerrel->relids);
    5149             : 
    5150      214156 :     nselec = clauselist_selectivity(root,
    5151             :                                     joinquals,
    5152             :                                     0,
    5153             :                                     JOIN_INNER,
    5154             :                                     &norm_sjinfo);
    5155             : 
    5156             :     /* Avoid leaking a lot of ListCells */
    5157      214156 :     if (IS_OUTER_JOIN(jointype))
    5158       78776 :         list_free(joinquals);
    5159             : 
    5160             :     /*
    5161             :      * jselec can be interpreted as the fraction of outer-rel rows that have
    5162             :      * any matches (this is true for both SEMI and ANTI cases).  And nselec is
    5163             :      * the fraction of the Cartesian product that matches.  So, the average
    5164             :      * number of matches for each outer-rel row that has at least one match is
    5165             :      * nselec * inner_rows / jselec.
    5166             :      *
    5167             :      * Note: it is correct to use the inner rel's "rows" count here, even
    5168             :      * though we might later be considering a parameterized inner path with
    5169             :      * fewer rows.  This is because we have included all the join clauses in
    5170             :      * the selectivity estimate.
    5171             :      */
    5172      214156 :     if (jselec > 0)              /* protect against zero divide */
    5173             :     {
    5174      214124 :         avgmatch = nselec * innerrel->rows / jselec;
    5175             :         /* Clamp to sane range */
    5176      214124 :         avgmatch = Max(1.0, avgmatch);
    5177             :     }
    5178             :     else
    5179          32 :         avgmatch = 1.0;
    5180             : 
    5181      214156 :     semifactors->outer_match_frac = jselec;
    5182      214156 :     semifactors->match_count = avgmatch;
    5183      214156 : }
    5184             : 
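A worked instance of the avgmatch arithmetic above (the selectivities are invented): if 20% of outer rows have a match and the inner-join selectivity is 0.01 over 1000 inner rows, each matched outer row sees on average 0.01 * 1000 / 0.2 = 50 inner matches:

    #include <stdio.h>

    int
    main(void)
    {
        double jselec = 0.2;      /* fraction of outer rows with a match */
        double nselec = 0.01;     /* inner-join (Cartesian) selectivity */
        double inner_rows = 1000.0;

        /* average matches per outer row that has at least one match */
        double avgmatch = nselec * inner_rows / jselec;
        if (avgmatch < 1.0)
            avgmatch = 1.0;       /* clamp to a sane range */

        printf("avgmatch = %.1f\n", avgmatch);   /* 50.0 */
        return 0;
    }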
    5185             : /*
    5186             :  * has_indexed_join_quals
    5187             :  *    Check whether all the joinquals of a nestloop join are used as
    5188             :  *    inner index quals.
    5189             :  *
    5190             :  * If the inner path of a SEMI/ANTI join is an indexscan (including bitmap
    5191             :  * indexscan) that uses all the joinquals as indexquals, we can assume that an
    5192             :  * unmatched outer tuple is cheap to process, whereas otherwise it's probably
    5193             :  * expensive.
    5194             :  */
    5195             : static bool
    5196      910470 : has_indexed_join_quals(NestPath *path)
    5197             : {
    5198      910470 :     JoinPath   *joinpath = &path->jpath;
    5199      910470 :     Relids      joinrelids = joinpath->path.parent->relids;
    5200      910470 :     Path       *innerpath = joinpath->innerjoinpath;
    5201             :     List       *indexclauses;
    5202             :     bool        found_one;
    5203             :     ListCell   *lc;
    5204             : 
    5205             :     /* If join still has quals to evaluate, it's not fast */
    5206      910470 :     if (joinpath->joinrestrictinfo != NIL)
    5207      649304 :         return false;
    5208             :     /* Nor if the inner path isn't parameterized at all */
    5209      261166 :     if (innerpath->param_info == NULL)
    5210        4800 :         return false;
    5211             : 
    5212             :     /* Find the indexclauses list for the inner scan */
    5213      256366 :     switch (innerpath->pathtype)
    5214             :     {
    5215      154704 :         case T_IndexScan:
    5216             :         case T_IndexOnlyScan:
    5217      154704 :             indexclauses = ((IndexPath *) innerpath)->indexclauses;
    5218      154704 :             break;
    5219         270 :         case T_BitmapHeapScan:
    5220             :             {
    5221             :                 /* Accept only a simple bitmap scan, not AND/OR cases */
    5222         270 :                 Path       *bmqual = ((BitmapHeapPath *) innerpath)->bitmapqual;
    5223             : 
    5224         270 :                 if (IsA(bmqual, IndexPath))
    5225         222 :                     indexclauses = ((IndexPath *) bmqual)->indexclauses;
    5226             :                 else
    5227          48 :                     return false;
    5228         222 :                 break;
    5229             :             }
    5230      101392 :         default:
    5231             : 
    5232             :             /*
    5233             :              * If it's not a simple indexscan, it probably doesn't run quickly
    5234             :              * for zero rows out, even if it's a parameterized path using all
    5235             :              * the joinquals.
    5236             :              */
    5237      101392 :             return false;
    5238             :     }
    5239             : 
    5240             :     /*
    5241             :      * Examine the inner path's param clauses.  Any that are from the outer
    5242             :      * path must be found in the indexclauses list, either exactly or in an
    5243             :      * equivalent form generated by equivclass.c.  Also, we must find at least
    5244             :      * one such clause, else it's a clauseless join which isn't fast.
    5245             :      */
    5246      154926 :     found_one = false;
    5247      308344 :     foreach(lc, innerpath->param_info->ppi_clauses)
    5248             :     {
    5249      158474 :         RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);
    5250             : 
    5251      158474 :         if (join_clause_is_movable_into(rinfo,
    5252      158474 :                                         innerpath->parent->relids,
    5253             :                                         joinrelids))
    5254             :         {
    5255      157970 :             if (!is_redundant_with_indexclauses(rinfo, indexclauses))
    5256        5056 :                 return false;
    5257      152914 :             found_one = true;
    5258             :         }
    5259             :     }
    5260      149870 :     return found_one;
    5261             : }
    5262             : 
    5263             : 
    5264             : /*
    5265             :  * approx_tuple_count
    5266             :  *      Quick-and-dirty estimation of the number of join rows passing
    5267             :  *      a set of qual conditions.
    5268             :  *
    5269             :  * The quals can be either an implicitly-ANDed list of boolean expressions,
    5270             :  * or a list of RestrictInfo nodes (typically the latter).
    5271             :  *
    5272             :  * We intentionally compute the selectivity under JOIN_INNER rules, even
    5273             :  * if it's some type of outer join.  This is appropriate because we are
    5274             :  * trying to figure out how many tuples pass the initial merge or hash
    5275             :  * join step.
    5276             :  *
    5277             :  * This is quick-and-dirty because we bypass clauselist_selectivity, and
    5278             :  * simply multiply the independent clause selectivities together.  Now
    5279             :  * clauselist_selectivity often can't do any better than that anyhow, but
    5280             :  * for some situations (such as range constraints) it is smarter.  However,
    5281             :  * we can't effectively cache the results of clauselist_selectivity, whereas
    5282             :  * the individual clause selectivities can be and are cached.
    5283             :  *
    5284             :  * Since we are only using the results to estimate how many potential
    5285             :  * output tuples are generated and passed through qpqual checking, it
    5286             :  * seems OK to live with the approximation.
    5287             :  */
    5288             : static double
    5289      497332 : approx_tuple_count(PlannerInfo *root, JoinPath *path, List *quals)
    5290             : {
    5291             :     double      tuples;
    5292      497332 :     double      outer_tuples = path->outerjoinpath->rows;
    5293      497332 :     double      inner_tuples = path->innerjoinpath->rows;
    5294             :     SpecialJoinInfo sjinfo;
    5295      497332 :     Selectivity selec = 1.0;
    5296             :     ListCell   *l;
    5297             : 
    5298             :     /*
    5299             :      * Make up a SpecialJoinInfo for JOIN_INNER semantics.
    5300             :      */
    5301      497332 :     init_dummy_sjinfo(&sjinfo, path->outerjoinpath->parent->relids,
    5302      497332 :                       path->innerjoinpath->parent->relids);
    5303             : 
    5304             :     /* Get the approximate selectivity */
    5305     1067840 :     foreach(l, quals)
    5306             :     {
    5307      570508 :         Node       *qual = (Node *) lfirst(l);
    5308             : 
    5309             :         /* Note that clause_selectivity will be able to cache its result */
    5310      570508 :         selec *= clause_selectivity(root, qual, 0, JOIN_INNER, &sjinfo);
    5311             :     }
    5312             : 
    5313             :     /* Apply it to the input relation sizes */
    5314      497332 :     tuples = selec * outer_tuples * inner_tuples;
    5315             : 
    5316      497332 :     return clamp_row_est(tuples);
    5317             : }
    5318             : 
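For instance (a standalone sketch with invented selectivities): two quals with cached selectivities 0.1 and 0.05 are simply multiplied together and applied to the Cartesian product size:

    #include <math.h>
    #include <stdio.h>

    /* crude stand-in for clamp_row_est: round and enforce a minimum of 1 */
    static double
    clamp_row_est(double nrows)
    {
        nrows = rint(nrows);
        return (nrows < 1.0) ? 1.0 : nrows;
    }

    int
    main(void)
    {
        double outer_tuples = 10000.0;
        double inner_tuples = 1000.0;
        double selec = 0.1 * 0.05;   /* independent-clause product */

        /* 0.005 * 1e7 = 50000 */
        printf("%.0f\n", clamp_row_est(selec * outer_tuples * inner_tuples));
        return 0;
    }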
    5319             : 
    5320             : /*
    5321             :  * set_baserel_size_estimates
    5322             :  *      Set the size estimates for the given base relation.
    5323             :  *
    5324             :  * The rel's targetlist and restrictinfo list must have been constructed
    5325             :  * already, and rel->tuples must be set.
    5326             :  *
    5327             :  * We set the following fields of the rel node:
    5328             :  *  rows: the estimated number of output tuples (after applying
    5329             :  *        restriction clauses).
    5330             :  *  width: the estimated average output tuple width in bytes.
    5331             :  *  baserestrictcost: estimated cost of evaluating baserestrictinfo clauses.
    5332             :  */
    5333             : void
    5334      503986 : set_baserel_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5335             : {
    5336             :     double      nrows;
    5337             : 
    5338             :     /* Should only be applied to base relations */
    5339             :     Assert(rel->relid > 0);
    5340             : 
    5341     1007942 :     nrows = rel->tuples *
    5342      503986 :         clauselist_selectivity(root,
    5343             :                                rel->baserestrictinfo,
    5344             :                                0,
    5345             :                                JOIN_INNER,
    5346             :                                NULL);
    5347             : 
    5348      503956 :     rel->rows = clamp_row_est(nrows);
    5349             : 
    5350      503956 :     cost_qual_eval(&rel->baserestrictcost, rel->baserestrictinfo, root);
    5351             : 
    5352      503956 :     set_rel_width(root, rel);
    5353      503956 : }
    5354             : 
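Concretely (invented numbers): a relation with rel->tuples = 1,000,000 and a restriction list whose combined selectivity is 0.003 gets rel->rows = clamp_row_est(3000) = 3000, while baserestrictcost is filled in by the same qual-costing machinery sketched after cost_qual_eval above.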
    5355             : /*
    5356             :  * get_parameterized_baserel_size
    5357             :  *      Make a size estimate for a parameterized scan of a base relation.
    5358             :  *
    5359             :  * 'param_clauses' lists the additional join clauses to be used.
    5360             :  *
    5361             :  * set_baserel_size_estimates must have been applied already.
    5362             :  */
    5363             : double
    5364      149918 : get_parameterized_baserel_size(PlannerInfo *root, RelOptInfo *rel,
    5365             :                                List *param_clauses)
    5366             : {
    5367             :     List       *allclauses;
    5368             :     double      nrows;
    5369             : 
    5370             :     /*
    5371             :      * Estimate the number of rows returned by the parameterized scan, knowing
    5372             :      * that it will apply all the extra join clauses as well as the rel's own
    5373             :      * restriction clauses.  Note that we force the clauses to be treated as
    5374             :      * non-join clauses during selectivity estimation.
    5375             :      */
    5376      149918 :     allclauses = list_concat_copy(param_clauses, rel->baserestrictinfo);
    5377      299836 :     nrows = rel->tuples *
    5378      149918 :         clauselist_selectivity(root,
    5379             :                                allclauses,
    5380      149918 :                                rel->relid,   /* do not use 0! */
    5381             :                                JOIN_INNER,
    5382             :                                NULL);
    5383      149918 :     nrows = clamp_row_est(nrows);
    5384             :     /* For safety, make sure result is not more than the base estimate */
    5385      149918 :     if (nrows > rel->rows)
    5386           0 :         nrows = rel->rows;
    5387      149918 :     return nrows;
    5388             : }
    5389             : 
    5390             : /*
    5391             :  * set_joinrel_size_estimates
    5392             :  *      Set the size estimates for the given join relation.
    5393             :  *
    5394             :  * The rel's targetlist must have been constructed already, and a
    5395             :  * restriction clause list that matches the given component rels must
    5396             :  * be provided.
    5397             :  *
    5398             :  * Since there is more than one way to make a joinrel for more than two
    5399             :  * base relations, the results we get here could depend on which component
    5400             :  * rel pair is provided.  In theory we should get the same answers no matter
    5401             :  * which pair is provided; in practice, since the selectivity estimation
    5402             :  * routines don't handle all cases equally well, we might not.  But there's
    5403             :  * not much to be done about it.  (Would it make sense to repeat the
    5404             :  * calculations for each pair of input rels that's encountered, and somehow
    5405             :  * average the results?  Probably way more trouble than it's worth, and
    5406             :  * anyway we must keep the rowcount estimate the same for all paths for the
    5407             :  * joinrel.)
    5408             :  *
    5409             :  * We set only the rows field here.  The reltarget field was already set by
    5410             :  * build_joinrel_tlist, and baserestrictcost is not used for join rels.
    5411             :  */
    5412             : void
    5413      216270 : set_joinrel_size_estimates(PlannerInfo *root, RelOptInfo *rel,
    5414             :                            RelOptInfo *outer_rel,
    5415             :                            RelOptInfo *inner_rel,
    5416             :                            SpecialJoinInfo *sjinfo,
    5417             :                            List *restrictlist)
    5418             : {
    5419      216270 :     rel->rows = calc_joinrel_size_estimate(root,
    5420             :                                            rel,
    5421             :                                            outer_rel,
    5422             :                                            inner_rel,
    5423             :                                            outer_rel->rows,
    5424             :                                            inner_rel->rows,
    5425             :                                            sjinfo,
    5426             :                                            restrictlist);
    5427      216270 : }
    5428             : 
    5429             : /*
    5430             :  * get_parameterized_joinrel_size
    5431             :  *      Make a size estimate for a parameterized scan of a join relation.
    5432             :  *
    5433             :  * 'rel' is the joinrel under consideration.
    5434             :  * 'outer_path', 'inner_path' are (probably also parameterized) Paths that
    5435             :  *      produce the relations being joined.
    5436             :  * 'sjinfo' is any SpecialJoinInfo relevant to this join.
    5437             :  * 'restrict_clauses' lists the join clauses that need to be applied at the
    5438             :  * join node (including any movable clauses that were moved down to this join,
    5439             :  * and not including any movable clauses that were pushed down into the
    5440             :  * child paths).
    5441             :  *
    5442             :  * set_joinrel_size_estimates must have been applied already.
    5443             :  */
    5444             : double
    5445        7344 : get_parameterized_joinrel_size(PlannerInfo *root, RelOptInfo *rel,
    5446             :                                Path *outer_path,
    5447             :                                Path *inner_path,
    5448             :                                SpecialJoinInfo *sjinfo,
    5449             :                                List *restrict_clauses)
    5450             : {
    5451             :     double      nrows;
    5452             : 
    5453             :     /*
    5454             :      * Estimate the number of rows returned by the parameterized join as the
    5455             :      * sizes of the input paths times the selectivity of the clauses that have
    5456             :      * ended up at this join node.
    5457             :      *
    5458             :      * As with set_joinrel_size_estimates, the rowcount estimate could depend
    5459             :      * on the pair of input paths provided, though ideally we'd get the same
    5460             :      * estimate for any pair with the same parameterization.
    5461             :      */
    5462        7344 :     nrows = calc_joinrel_size_estimate(root,
    5463             :                                        rel,
    5464             :                                        outer_path->parent,
    5465             :                                        inner_path->parent,
    5466             :                                        outer_path->rows,
    5467             :                                        inner_path->rows,
    5468             :                                        sjinfo,
    5469             :                                        restrict_clauses);
    5470             :     /* For safety, make sure result is not more than the base estimate */
    5471        7344 :     if (nrows > rel->rows)
    5472          12 :         nrows = rel->rows;
    5473        7344 :     return nrows;
    5474             : }
    5475             : 
    5476             : /*
    5477             :  * calc_joinrel_size_estimate
    5478             :  *      Workhorse for set_joinrel_size_estimates and
    5479             :  *      get_parameterized_joinrel_size.
    5480             :  *
    5481             :  * outer_rel/inner_rel are the relations being joined, but they should be
    5482             :  * assumed to have sizes outer_rows/inner_rows; those numbers might be less
    5483             :  * than what rel->rows says, when we are considering parameterized paths.
    5484             :  */
    5485             : static double
    5486      223614 : calc_joinrel_size_estimate(PlannerInfo *root,
    5487             :                            RelOptInfo *joinrel,
    5488             :                            RelOptInfo *outer_rel,
    5489             :                            RelOptInfo *inner_rel,
    5490             :                            double outer_rows,
    5491             :                            double inner_rows,
    5492             :                            SpecialJoinInfo *sjinfo,
    5493             :                            List *restrictlist)
    5494             : {
    5495      223614 :     JoinType    jointype = sjinfo->jointype;
    5496             :     Selectivity fkselec;
    5497             :     Selectivity jselec;
    5498             :     Selectivity pselec;
    5499             :     double      nrows;
    5500             : 
    5501             :     /*
    5502             :      * Compute joinclause selectivity.  Note that we are only considering
    5503             :      * clauses that become restriction clauses at this join level; we are not
    5504             :      * double-counting them because they were not considered in estimating the
    5505             :      * sizes of the component rels.
    5506             :      *
    5507             :      * First, see whether any of the joinclauses can be matched to known FK
    5508             :      * constraints.  If so, drop those clauses from the restrictlist, and
    5509             :      * instead estimate their selectivity using FK semantics.  (We do this
    5510             :      * without regard to whether said clauses are local or "pushed down".
    5511             :      * Probably, an FK-matching clause could never be seen as pushed down at
    5512             :      * an outer join, since it would be strict and hence would be grounds for
    5513             :      * join strength reduction.)  fkselec gets the net selectivity for
    5514             :      * FK-matching clauses, or 1.0 if there are none.
    5515             :      */
    5516      223614 :     fkselec = get_foreign_key_join_selectivity(root,
    5517             :                                                outer_rel->relids,
    5518             :                                                inner_rel->relids,
    5519             :                                                sjinfo,
    5520             :                                                &restrictlist);
    5521             : 
    5522             :     /*
    5523             :      * For an outer join, we have to distinguish the selectivity of the join's
    5524             :      * own clauses (JOIN/ON conditions) from any clauses that were "pushed
    5525             :      * down".  For inner joins we just count them all as joinclauses.
    5526             :      */
    5527      223614 :     if (IS_OUTER_JOIN(jointype))
    5528             :     {
    5529       83816 :         List       *joinquals = NIL;
    5530       83816 :         List       *pushedquals = NIL;
    5531             :         ListCell   *l;
    5532             : 
    5533             :         /* Grovel through the clauses to separate into two lists */
    5534      188518 :         foreach(l, restrictlist)
    5535             :         {
    5536      104702 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    5537             : 
    5538      104702 :             if (RINFO_IS_PUSHED_DOWN(rinfo, joinrel->relids))
    5539        4864 :                 pushedquals = lappend(pushedquals, rinfo);
    5540             :             else
    5541       99838 :                 joinquals = lappend(joinquals, rinfo);
    5542             :         }
    5543             : 
    5544             :         /* Get the separate selectivities */
    5545       83816 :         jselec = clauselist_selectivity(root,
    5546             :                                         joinquals,
    5547             :                                         0,
    5548             :                                         jointype,
    5549             :                                         sjinfo);
    5550       83816 :         pselec = clauselist_selectivity(root,
    5551             :                                         pushedquals,
    5552             :                                         0,
    5553             :                                         jointype,
    5554             :                                         sjinfo);
    5555             : 
    5556             :         /* Avoid leaking a lot of ListCells */
    5557       83816 :         list_free(joinquals);
    5558       83816 :         list_free(pushedquals);
    5559             :     }
    5560             :     else
    5561             :     {
    5562      139798 :         jselec = clauselist_selectivity(root,
    5563             :                                         restrictlist,
    5564             :                                         0,
    5565             :                                         jointype,
    5566             :                                         sjinfo);
    5567      139798 :         pselec = 0.0;           /* not used, keep compiler quiet */
    5568             :     }
    5569             : 
    5570             :     /*
    5571             :      * Basically, we multiply size of Cartesian product by selectivity.
    5572             :      *
    5573             :      * If we are doing an outer join, take that into account: the joinqual
    5574             :      * selectivity has to be clamped using the knowledge that the output must
    5575             :      * be at least as large as the non-nullable input.  However, any
    5576             :      * pushed-down quals are applied after the outer join, so their
    5577             :      * selectivity applies fully.
    5578             :      *
    5579             :      * For JOIN_SEMI and JOIN_ANTI, the selectivity is defined as the fraction
    5580             :      * of LHS rows that have matches, and we apply that straightforwardly.
    5581             :      */
    5582      223614 :     switch (jointype)
    5583             :     {
    5584      132002 :         case JOIN_INNER:
    5585      132002 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5586             :             /* pselec not used */
    5587      132002 :             break;
    5588       76812 :         case JOIN_LEFT:
    5589       76812 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5590       76812 :             if (nrows < outer_rows)
    5591       29292 :                 nrows = outer_rows;
    5592       76812 :             nrows *= pselec;
    5593       76812 :             break;
    5594        1714 :         case JOIN_FULL:
    5595        1714 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5596        1714 :             if (nrows < outer_rows)
    5597        1136 :                 nrows = outer_rows;
    5598        1714 :             if (nrows < inner_rows)
    5599         120 :                 nrows = inner_rows;
    5600        1714 :             nrows *= pselec;
    5601        1714 :             break;
    5602        7796 :         case JOIN_SEMI:
    5603        7796 :             nrows = outer_rows * fkselec * jselec;
    5604             :             /* pselec not used */
    5605        7796 :             break;
    5606        5290 :         case JOIN_ANTI:
    5607        5290 :             nrows = outer_rows * (1.0 - fkselec * jselec);
    5608        5290 :             nrows *= pselec;
    5609        5290 :             break;
    5610           0 :         default:
    5611             :             /* other values not expected here */
    5612           0 :             elog(ERROR, "unrecognized join type: %d", (int) jointype);
    5613             :             nrows = 0;          /* keep compiler quiet */
    5614             :             break;
    5615             :     }
    5616             : 
    5617      223614 :     return clamp_row_est(nrows);
    5618             : }
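
/*
 * [Editor's illustration, not part of costsize.c.]  A minimal standalone
 * sketch of the JOIN_LEFT arithmetic above, with made-up row counts and
 * selectivities.  The function name left_join_rows is hypothetical; the
 * real code additionally rounds and clamps the result via clamp_row_est().
 */
#include <stdio.h>

static double
left_join_rows(double outer_rows, double inner_rows,
               double fkselec, double jselec, double pselec)
{
    /* start from the Cartesian product times joinqual selectivity */
    double      nrows = outer_rows * inner_rows * fkselec * jselec;

    /* a left join emits at least one row per outer row */
    if (nrows < outer_rows)
        nrows = outer_rows;

    /* pushed-down quals filter after the join, so they apply in full */
    return nrows * pselec;
}

int
main(void)
{
    /*
     * 1000 outer rows joined to 100 inner rows with a very selective ON
     * clause: the raw product 1000 * 100 * 0.00001 = 1 is clamped up to
     * 1000, then a pushed-down qual of selectivity 0.5 leaves 500 rows.
     */
    printf("%.0f\n", left_join_rows(1000.0, 100.0, 1.0, 0.00001, 0.5));
    return 0;
}
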
    5619             : 
    5620             : /*
    5621             :  * get_foreign_key_join_selectivity
    5622             :  *      Estimate join selectivity for foreign-key-related clauses.
    5623             :  *
    5624             :  * Remove any clauses that can be matched to FK constraints from *restrictlist,
    5625             :  * and return a substitute estimate of their selectivity.  1.0 is returned
    5626             :  * when there are no such clauses.
    5627             :  *
    5628             :  * The reason for treating such clauses specially is that we can get better
    5629             :  * estimates this way than by relying on clauselist_selectivity(), especially
    5630             :  * for multi-column FKs where that function's assumption that the clauses are
    5631             :  * independent falls down badly.  But even with single-column FKs, we may be
    5632             :  * able to get a better answer when the pg_statistic stats are missing or out
    5633             :  * of date.
    5634             :  */
    5635             : static Selectivity
    5636      223614 : get_foreign_key_join_selectivity(PlannerInfo *root,
    5637             :                                  Relids outer_relids,
    5638             :                                  Relids inner_relids,
    5639             :                                  SpecialJoinInfo *sjinfo,
    5640             :                                  List **restrictlist)
    5641             : {
    5642      223614 :     Selectivity fkselec = 1.0;
    5643      223614 :     JoinType    jointype = sjinfo->jointype;
    5644      223614 :     List       *worklist = *restrictlist;
    5645             :     ListCell   *lc;
    5646             : 
    5647             :     /* Consider each FK constraint that is known to match the query */
    5648      225636 :     foreach(lc, root->fkey_list)
    5649             :     {
    5650        2022 :         ForeignKeyOptInfo *fkinfo = (ForeignKeyOptInfo *) lfirst(lc);
    5651             :         bool        ref_is_outer;
    5652             :         List       *removedlist;
    5653             :         ListCell   *cell;
    5654             : 
    5655             :         /*
    5656             :          * This FK is not relevant unless it connects a baserel on one side of
    5657             :          * this join to a baserel on the other side.
    5658             :          */
    5659        3692 :         if (bms_is_member(fkinfo->con_relid, outer_relids) &&
    5660        1670 :             bms_is_member(fkinfo->ref_relid, inner_relids))
    5661        1496 :             ref_is_outer = false;
    5662         866 :         else if (bms_is_member(fkinfo->ref_relid, outer_relids) &&
    5663         340 :                  bms_is_member(fkinfo->con_relid, inner_relids))
    5664         130 :             ref_is_outer = true;
    5665             :         else
    5666         396 :             continue;
    5667             : 
    5668             :         /*
    5669             :          * If we're dealing with a semi/anti join, and the FK's referenced
    5670             :          * relation is on the outside, then knowledge of the FK doesn't help
    5671             :          * us figure out what we need to know (which is the fraction of outer
    5672             :          * rows that have matches).  On the other hand, if the referenced rel
    5673             :          * is on the inside, then all outer rows must have matches in the
    5674             :          * referenced table (ignoring nulls).  But any restriction or join
    5675             :          * clauses that filter that table will reduce the fraction of matches.
    5676             :          * We can account for restriction clauses, but it's too hard to guess
    5677             :          * how many table rows would get through a join that's inside the RHS.
    5678             :          * Hence, if either case applies, punt and ignore the FK.
    5679             :          */
    5680        1626 :         if ((jointype == JOIN_SEMI || jointype == JOIN_ANTI) &&
    5681        1104 :             (ref_is_outer || bms_membership(inner_relids) != BMS_SINGLETON))
    5682          12 :             continue;
    5683             : 
    5684             :         /*
    5685             :          * Modify the restrictlist by removing clauses that match the FK (and
    5686             :          * putting them into removedlist instead).  It seems unsafe to modify
    5687             :          * the originally-passed List structure, so we make a shallow copy the
    5688             :          * first time through.
    5689             :          */
    5690        1614 :         if (worklist == *restrictlist)
    5691        1376 :             worklist = list_copy(worklist);
    5692             : 
    5693        1614 :         removedlist = NIL;
    5694        3382 :         foreach(cell, worklist)
    5695             :         {
    5696        1768 :             RestrictInfo *rinfo = (RestrictInfo *) lfirst(cell);
    5697        1768 :             bool        remove_it = false;
    5698             :             int         i;
    5699             : 
    5700             :             /* Drop this clause if it matches any column of the FK */
    5701        2246 :             for (i = 0; i < fkinfo->nkeys; i++)
    5702             :             {
    5703        2216 :                 if (rinfo->parent_ec)
    5704             :                 {
    5705             :                     /*
    5706             :                      * EC-derived clauses can only match by EC.  It is okay to
    5707             :                      * consider any clause derived from the same EC as
    5708             :                      * matching the FK: even if equivclass.c chose to generate
    5709             :                      * a clause equating some other pair of Vars, it could
    5710             :                      * have generated one equating the FK's Vars.  So for
    5711             :                      * purposes of estimation, we can act as though it did so.
    5712             :                      *
    5713             :                      * Note: checking parent_ec is a bit of a cheat because
    5714             :                      * there are EC-derived clauses that don't have parent_ec
    5715             :                      * set; but such clauses must compare expressions that
    5716             :                      * aren't just Vars, so they cannot match the FK anyway.
    5717             :                      */
    5718         304 :                     if (fkinfo->eclass[i] == rinfo->parent_ec)
    5719             :                     {
    5720         298 :                         remove_it = true;
    5721         298 :                         break;
    5722             :                     }
    5723             :                 }
    5724             :                 else
    5725             :                 {
    5726             :                     /*
    5727             :                      * Otherwise, see if rinfo was previously matched to FK as
    5728             :                      * a "loose" clause.
    5729             :                      */
    5730        1912 :                     if (list_member_ptr(fkinfo->rinfos[i], rinfo))
    5731             :                     {
    5732        1440 :                         remove_it = true;
    5733        1440 :                         break;
    5734             :                     }
    5735             :                 }
    5736             :             }
    5737        1768 :             if (remove_it)
    5738             :             {
    5739        1738 :                 worklist = foreach_delete_current(worklist, cell);
    5740        1738 :                 removedlist = lappend(removedlist, rinfo);
    5741             :             }
    5742             :         }
    5743             : 
    5744             :         /*
    5745             :          * If we failed to remove all the matching clauses we expected to
    5746             :          * find, chicken out and ignore this FK; applying its selectivity
    5747             :          * might result in double-counting.  Put any clauses we did manage to
    5748             :          * remove back into the worklist.
    5749             :          *
    5750             :          * Since the matching clauses are known not outerjoin-delayed, they
    5751             :          * would normally have appeared in the initial joinclause list.  If we
    5752             :          * didn't find them, there are two possibilities:
    5753             :          *
    5754             :          * 1. If the FK match is based on an EC that is ec_has_const, it won't
    5755             :          * have generated any join clauses at all.  We discount such ECs while
    5756             :          * checking to see if we have "all" the clauses.  (Below, we'll adjust
    5757             :          * the selectivity estimate for this case.)
    5758             :          *
    5759             :          * 2. The clauses were matched to some other FK in a previous
    5760             :          * iteration of this loop, and thus removed from worklist.  (A likely
    5761             :          * case is that two FKs are matched to the same EC; there will be only
    5762             :          * one EC-derived clause in the initial list, so the first FK will
    5763             :          * consume it.)  Applying both FKs' selectivity independently risks
    5764             :          * underestimating the join size; in particular, this would undo one
    5765             :          * of the main things that ECs were invented for, namely to avoid
    5766             :          * double-counting the selectivity of redundant equality conditions.
    5767             :          * Later we might think of a reasonable way to combine the estimates,
    5768             :          * but for now, just punt, since this is a fairly uncommon situation.
    5769             :          */
    5770        1614 :         if (removedlist == NIL ||
    5771        1314 :             list_length(removedlist) !=
    5772        1314 :             (fkinfo->nmatched_ec - fkinfo->nconst_ec + fkinfo->nmatched_ri))
    5773             :         {
    5774         300 :             worklist = list_concat(worklist, removedlist);
    5775         300 :             continue;
    5776             :         }
    5777             : 
    5778             :         /*
    5779             :          * Finally we get to the payoff: estimate selectivity using the
    5780             :          * knowledge that each referencing row will match exactly one row in
    5781             :          * the referenced table.
    5782             :          *
    5783             :          * XXX that's not true in the presence of nulls in the referencing
    5784             :          * column(s), so in principle we should derate the estimate for those.
    5785             :          * However (1) if there are any strict restriction clauses for the
    5786             :          * referencing column(s) elsewhere in the query, derating here would
    5787             :          * be double-counting the null fraction, and (2) it's not very clear
    5788             :          * how to combine null fractions for multiple referencing columns. So
    5789             :          * we do nothing for now about correcting for nulls.
    5790             :          *
    5791             :          * XXX another point here is that if either side of an FK constraint
    5792             :          * is an inheritance parent, we estimate as though the constraint
    5793             :          * covers all its children as well.  This is not an unreasonable
    5794             :          * assumption for a referencing table, ie the user probably applied
    5795             :          * identical constraints to all child tables (though perhaps we ought
    5796             :          * to check that).  But it's not possible to have done that for a
    5797             :          * referenced table.  Fortunately, precisely because that doesn't
    5798             :          * work, it is uncommon in practice to have an FK referencing a parent
    5799             :          * table.  So, at least for now, disregard inheritance here.
    5800             :          */
    5801        1314 :         if (jointype == JOIN_SEMI || jointype == JOIN_ANTI)
    5802         866 :         {
    5803             :             /*
    5804             :              * For JOIN_SEMI and JOIN_ANTI, we only get here when the FK's
    5805             :              * referenced table is exactly the inside of the join.  The join
    5806             :              * selectivity is defined as the fraction of LHS rows that have
    5807             :              * matches.  The FK implies that every LHS row has a match *in the
    5808             :              * referenced table*; but any restriction clauses on it will
    5809             :              * reduce the number of matches.  Hence we take the join
    5810             :              * selectivity as equal to the selectivity of the table's
    5811             :              * restriction clauses, which is rows / tuples; but we must guard
    5812             :              * against tuples == 0.
    5813             :              */
    5814         866 :             RelOptInfo *ref_rel = find_base_rel(root, fkinfo->ref_relid);
    5815         866 :             double      ref_tuples = Max(ref_rel->tuples, 1.0);
    5816             : 
    5817         866 :             fkselec *= ref_rel->rows / ref_tuples;
    5818             :         }
    5819             :         else
    5820             :         {
    5821             :             /*
    5822             :              * Otherwise, selectivity is exactly 1/referenced-table-size; but
    5823             :              * guard against tuples == 0.  Note we should use the raw table
    5824             :              * tuple count, not any estimate of its filtered or joined size.
    5825             :              */
    5826         448 :             RelOptInfo *ref_rel = find_base_rel(root, fkinfo->ref_relid);
    5827         448 :             double      ref_tuples = Max(ref_rel->tuples, 1.0);
    5828             : 
    5829         448 :             fkselec *= 1.0 / ref_tuples;
    5830             :         }
    5831             : 
    5832             :         /*
    5833             :          * If any of the FK columns participated in ec_has_const ECs, then
    5834             :          * equivclass.c will have generated "var = const" restrictions for
    5835             :          * each side of the join, thus reducing the sizes of both input
    5836             :          * relations.  Taking the fkselec at face value would amount to
    5837             :          * double-counting the selectivity of the constant restriction for the
    5838             :          * referencing Var.  Hence, look for the restriction clause(s) that
    5839             :          * were applied to the referencing Var(s), and divide out their
    5840             :          * selectivity to correct for this.
    5841             :          */
    5842        1314 :         if (fkinfo->nconst_ec > 0)
    5843             :         {
    5844          24 :             for (int i = 0; i < fkinfo->nkeys; i++)
    5845             :             {
    5846          18 :                 EquivalenceClass *ec = fkinfo->eclass[i];
    5847             : 
    5848          18 :                 if (ec && ec->ec_has_const)
    5849             :                 {
    5850           6 :                     EquivalenceMember *em = fkinfo->fk_eclass_member[i];
    5851           6 :                     RestrictInfo *rinfo = find_derived_clause_for_ec_member(root,
    5852             :                                                                             ec,
    5853             :                                                                             em);
    5854             : 
    5855           6 :                     if (rinfo)
    5856             :                     {
    5857             :                         Selectivity s0;
    5858             : 
    5859           6 :                         s0 = clause_selectivity(root,
    5860             :                                                 (Node *) rinfo,
    5861             :                                                 0,
    5862             :                                                 jointype,
    5863             :                                                 sjinfo);
    5864           6 :                         if (s0 > 0)
    5865           6 :                             fkselec /= s0;
    5866             :                     }
    5867             :                 }
    5868             :             }
    5869             :         }
    5870             :     }
    5871             : 
    5872      223614 :     *restrictlist = worklist;
    5873      223614 :     CLAMP_PROBABILITY(fkselec);
    5874      223614 :     return fkselec;
    5875             : }
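
/*
 * [Editor's illustration, not part of costsize.c.]  Standalone sketch of
 * the two selectivity cases above; fk_selectivity is a hypothetical name.
 * For a plain join, each referencing row matches exactly one referenced
 * row, so the per-FK factor is 1/ref_tuples; for semi/anti joins it is
 * the referenced rel's post-restriction fraction, rows/tuples.
 */
#include <stdio.h>

static double
fk_selectivity(int is_semi_or_anti, double ref_rows, double ref_tuples)
{
    if (ref_tuples < 1.0)       /* guard against tuples == 0 */
        ref_tuples = 1.0;
    return is_semi_or_anti ? ref_rows / ref_tuples : 1.0 / ref_tuples;
}

int
main(void)
{
    /* referenced table holds 10000 tuples; 2500 pass its restrictions */
    printf("inner join: %g\n", fk_selectivity(0, 2500.0, 10000.0));     /* 0.0001 */
    printf("semi join:  %g\n", fk_selectivity(1, 2500.0, 10000.0));     /* 0.25 */
    return 0;
}
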
    5876             : 
    5877             : /*
    5878             :  * set_subquery_size_estimates
    5879             :  *      Set the size estimates for a base relation that is a subquery.
    5880             :  *
    5881             :  * The rel's targetlist and restrictinfo list must have been constructed
    5882             :  * already, and the Paths for the subquery must have been completed.
    5883             :  * We look at the subquery's PlannerInfo to extract data.
    5884             :  *
    5885             :  * We set the same fields as set_baserel_size_estimates.
    5886             :  */
    5887             : void
    5888       27936 : set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5889             : {
    5890       27936 :     PlannerInfo *subroot = rel->subroot;
    5891             :     RelOptInfo *sub_final_rel;
    5892             :     ListCell   *lc;
    5893             : 
    5894             :     /* Should only be applied to base relations that are subqueries */
    5895             :     Assert(rel->relid > 0);
    5896             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_SUBQUERY);
    5897             : 
    5898             :     /*
    5899             :      * Copy raw number of output rows from subquery.  All of its paths should
    5900             :      * have the same output rowcount, so just look at cheapest-total.
    5901             :      */
    5902       27936 :     sub_final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL);
    5903       27936 :     rel->tuples = sub_final_rel->cheapest_total_path->rows;
    5904             : 
    5905             :     /*
    5906             :      * Compute per-output-column width estimates by examining the subquery's
    5907             :      * targetlist.  For any output that is a plain Var, get the width estimate
    5908             :      * that was made while planning the subquery.  Otherwise, we leave it to
    5909             :      * set_rel_width to fill in a datatype-based default estimate.
    5910             :      */
    5911      116916 :     foreach(lc, subroot->parse->targetList)
    5912             :     {
    5913       88980 :         TargetEntry *te = lfirst_node(TargetEntry, lc);
    5914       88980 :         Node       *texpr = (Node *) te->expr;
    5915       88980 :         int32       item_width = 0;
    5916             : 
    5917             :         /* junk columns aren't visible to upper query */
    5918       88980 :         if (te->resjunk)
    5919        1284 :             continue;
    5920             : 
    5921             :         /*
    5922             :          * The subquery could be an expansion of a view that's had columns
    5923             :          * added to it since the current query was parsed, so that there are
    5924             :          * non-junk tlist columns in it that don't correspond to any column
    5925             :          * visible at our query level.  Ignore such columns.
    5926             :          */
    5927       87696 :         if (te->resno < rel->min_attr || te->resno > rel->max_attr)
    5928           0 :             continue;
    5929             : 
    5930             :         /*
    5931             :          * XXX This currently doesn't work for subqueries containing set
    5932             :          * operations, because the Vars in their tlists are bogus references
    5933             :          * to the first leaf subquery, which wouldn't give the right answer
    5934             :          * even if we could still get to its PlannerInfo.
    5935             :          *
    5936             :          * Also, the subquery could be an appendrel for which all branches are
    5937             :          * known empty due to constraint exclusion, in which case
    5938             :          * set_append_rel_pathlist will have left the attr_widths set to zero.
    5939             :          *
    5940             :          * In either case, we just leave the width estimate zero until
    5941             :          * set_rel_width fixes it.
    5942             :          */
    5943       87696 :         if (IsA(texpr, Var) &&
    5944       39856 :             subroot->parse->setOperations == NULL)
    5945             :         {
    5946       38082 :             Var        *var = (Var *) texpr;
    5947       38082 :             RelOptInfo *subrel = find_base_rel(subroot, var->varno);
    5948             : 
    5949       38082 :             item_width = subrel->attr_widths[var->varattno - subrel->min_attr];
    5950             :         }
    5951       87696 :         rel->attr_widths[te->resno - rel->min_attr] = item_width;
    5952             :     }
    5953             : 
    5954             :     /* Now estimate number of output rows, etc */
    5955       27936 :     set_baserel_size_estimates(root, rel);
    5956       27936 : }
    5957             : 
    5958             : /*
    5959             :  * set_function_size_estimates
    5960             :  *      Set the size estimates for a base relation that is a function call.
    5961             :  *
    5962             :  * The rel's targetlist and restrictinfo list must have been constructed
    5963             :  * already.
    5964             :  *
    5965             :  * We set the same fields as set_baserel_size_estimates.
    5966             :  */
    5967             : void
    5968       55114 : set_function_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5969             : {
    5970             :     RangeTblEntry *rte;
    5971             :     ListCell   *lc;
    5972             : 
    5973             :     /* Should only be applied to base relations that are functions */
    5974             :     Assert(rel->relid > 0);
    5975       55114 :     rte = planner_rt_fetch(rel->relid, root);
    5976             :     Assert(rte->rtekind == RTE_FUNCTION);
    5977             : 
    5978             :     /*
    5979             :      * Estimate number of rows the functions will return. The rowcount of the
    5980             :      * node is that of the largest function result.
    5981             :      */
    5982       55114 :     rel->tuples = 0;
    5983      110916 :     foreach(lc, rte->functions)
    5984             :     {
    5985       55802 :         RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
    5986       55802 :         double      ntup = expression_returns_set_rows(root, rtfunc->funcexpr);
    5987             : 
    5988       55802 :         if (ntup > rel->tuples)
    5989       55138 :             rel->tuples = ntup;
    5990             :     }
    5991             : 
    5992             :     /* Now estimate number of output rows, etc */
    5993       55114 :     set_baserel_size_estimates(root, rel);
    5994       55114 : }
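
/*
 * [Editor's illustration, not part of costsize.c.]  The loop above takes
 * the largest per-function estimate, e.g. for ROWS FROM (f(), g(), h()).
 * The per-function numbers normally come from pg_proc.prorows (1000 by
 * default for a set-returning function, settable with CREATE FUNCTION's
 * ROWS clause) or from a planner support function.
 */
#include <stdio.h>

int
main(void)
{
    double      per_func[] = {1000.0, 250.0, 30.0};     /* assumed estimates */
    double      tuples = 0.0;

    for (int i = 0; i < 3; i++)
        if (per_func[i] > tuples)
            tuples = per_func[i];
    printf("rel->tuples = %.0f\n", tuples);     /* 1000 */
    return 0;
}
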
    5995             : 
    5996             : /*
    5997             :  * set_tablefunc_size_estimates
    5998             :  *      Set the size estimates for a base relation that is a table function call.
    5999             :  *
    6000             :  * The rel's targetlist and restrictinfo list must have been constructed
    6001             :  * already.
    6002             :  *
    6003             :  * We set the same fields as set_baserel_size_estimates.
    6004             :  */
    6005             : void
    6006         626 : set_tablefunc_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6007             : {
    6008             :     /* Should only be applied to base relations that are functions */
    6009             :     Assert(rel->relid > 0);
    6010             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_TABLEFUNC);
    6011             : 
    6012         626 :     rel->tuples = 100;
    6013             : 
    6014             :     /* Now estimate number of output rows, etc */
    6015         626 :     set_baserel_size_estimates(root, rel);
    6016         626 : }
    6017             : 
    6018             : /*
    6019             :  * set_values_size_estimates
    6020             :  *      Set the size estimates for a base relation that is a values list.
    6021             :  *
    6022             :  * The rel's targetlist and restrictinfo list must have been constructed
    6023             :  * already.
    6024             :  *
    6025             :  * We set the same fields as set_baserel_size_estimates.
    6026             :  */
    6027             : void
    6028        8232 : set_values_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6029             : {
    6030             :     RangeTblEntry *rte;
    6031             : 
    6032             :     /* Should only be applied to base relations that are values lists */
    6033             :     Assert(rel->relid > 0);
    6034        8232 :     rte = planner_rt_fetch(rel->relid, root);
    6035             :     Assert(rte->rtekind == RTE_VALUES);
    6036             : 
    6037             :     /*
    6038             :      * Estimate number of rows the values list will return. We know this
    6039             :      * precisely based on the list length (well, barring set-returning
    6040             :      * functions in list items, but that's a refinement not catered for
    6041             :      * anywhere else either).
    6042             :      */
    6043        8232 :     rel->tuples = list_length(rte->values_lists);
    6044             : 
    6045             :     /* Now estimate number of output rows, etc */
    6046        8232 :     set_baserel_size_estimates(root, rel);
    6047        8232 : }
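
/*
 * [Editor's illustration, not part of costsize.c.]  For a VALUES list the
 * rowcount is simply the number of sublists, known exactly at plan time.
 */
#include <stdio.h>

int
main(void)
{
    /* SELECT * FROM (VALUES (1), (2), (3)) AS v(x) */
    int         n_value_lists = 3;      /* list_length(rte->values_lists) */

    printf("rel->tuples = %d\n", n_value_lists);
    return 0;
}
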
    6048             : 
    6049             : /*
    6050             :  * set_cte_size_estimates
    6051             :  *      Set the size estimates for a base relation that is a CTE reference.
    6052             :  *
    6053             :  * The rel's targetlist and restrictinfo list must have been constructed
    6054             :  * already, and we need an estimate of the number of rows returned by the CTE
    6055             :  * (if a regular CTE) or the non-recursive term (if a self-reference).
    6056             :  *
    6057             :  * We set the same fields as set_baserel_size_estimates.
    6058             :  */
    6059             : void
    6060        5100 : set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, double cte_rows)
    6061             : {
    6062             :     RangeTblEntry *rte;
    6063             : 
    6064             :     /* Should only be applied to base relations that are CTE references */
    6065             :     Assert(rel->relid > 0);
    6066        5100 :     rte = planner_rt_fetch(rel->relid, root);
    6067             :     Assert(rte->rtekind == RTE_CTE);
    6068             : 
    6069        5100 :     if (rte->self_reference)
    6070             :     {
    6071             :         /*
    6072             :          * In a self-reference, we assume the average worktable size is a
    6073             :          * multiple of the nonrecursive term's size.  The best multiplier will
    6074             :          * vary depending on query "fan-out", so make its value adjustable.
    6075             :          */
    6076        1010 :         rel->tuples = clamp_row_est(recursive_worktable_factor * cte_rows);
    6077             :     }
    6078             :     else
    6079             :     {
    6080             :         /* Otherwise just believe the CTE's rowcount estimate */
    6081        4090 :         rel->tuples = cte_rows;
    6082             :     }
    6083             : 
    6084             :     /* Now estimate number of output rows, etc */
    6085        5100 :     set_baserel_size_estimates(root, rel);
    6086        5100 : }
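
/*
 * [Editor's illustration, not part of costsize.c.]  Sketch of the
 * self-reference branch above.  recursive_worktable_factor is a real GUC
 * (default 10.0); the clamp below is a crude stand-in for clamp_row_est().
 */
#include <math.h>
#include <stdio.h>

static double recursive_worktable_factor = 10.0;

static double
worktable_rows(double nonrecursive_rows)
{
    double      est = recursive_worktable_factor * nonrecursive_rows;

    return (est < 1.0) ? 1.0 : rint(est);
}

int
main(void)
{
    /* WITH RECURSIVE whose non-recursive term is estimated at 5 rows */
    printf("worktable estimate: %.0f\n", worktable_rows(5.0));  /* 50 */
    return 0;
}
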
    6087             : 
    6088             : /*
    6089             :  * set_namedtuplestore_size_estimates
    6090             :  *      Set the size estimates for a base relation that is a tuplestore reference.
    6091             :  *
    6092             :  * The rel's targetlist and restrictinfo list must have been constructed
    6093             :  * already.
    6094             :  *
    6095             :  * We set the same fields as set_baserel_size_estimates.
    6096             :  */
    6097             : void
    6098         462 : set_namedtuplestore_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6099             : {
    6100             :     RangeTblEntry *rte;
    6101             : 
    6102             :     /* Should only be applied to base relations that are tuplestore references */
    6103             :     Assert(rel->relid > 0);
    6104         462 :     rte = planner_rt_fetch(rel->relid, root);
    6105             :     Assert(rte->rtekind == RTE_NAMEDTUPLESTORE);
    6106             : 
    6107             :     /*
    6108             :      * Use the estimate provided by the code which is generating the named
    6109             :      * Use the estimate provided by the code that is generating the named
    6110             :      * others the same plan will be re-used, so a "typical" value might be
    6111             :      * estimated and used.
    6112             :      */
    6113         462 :     rel->tuples = rte->enrtuples;
    6114         462 :     if (rel->tuples < 0)
    6115           0 :         rel->tuples = 1000;
    6116             : 
    6117             :     /* Now estimate number of output rows, etc */
    6118         462 :     set_baserel_size_estimates(root, rel);
    6119         462 : }
    6120             : 
    6121             : /*
    6122             :  * set_result_size_estimates
    6123             :  *      Set the size estimates for an RTE_RESULT base relation
    6124             :  *
    6125             :  * The rel's targetlist and restrictinfo list must have been constructed
    6126             :  * already.
    6127             :  *
    6128             :  * We set the same fields as set_baserel_size_estimates.
    6129             :  */
    6130             : void
    6131        4190 : set_result_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6132             : {
    6133             :     /* Should only be applied to RTE_RESULT base relations */
    6134             :     Assert(rel->relid > 0);
    6135             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_RESULT);
    6136             : 
    6137             :     /* RTE_RESULT always generates a single row, natively */
    6138        4190 :     rel->tuples = 1;
    6139             : 
    6140             :     /* Now estimate number of output rows, etc */
    6141        4190 :     set_baserel_size_estimates(root, rel);
    6142        4190 : }
    6143             : 
    6144             : /*
    6145             :  * set_foreign_size_estimates
    6146             :  *      Set the size estimates for a base relation that is a foreign table.
    6147             :  *
    6148             :  * There is not a whole lot that we can do here; the foreign-data wrapper
    6149             :  * is responsible for producing useful estimates.  We can do a decent job
    6150             :  * of estimating baserestrictcost, so we set that, and we also set up width
    6151             :  * using what will be purely datatype-driven estimates from the targetlist.
    6152             :  * There is no way to do anything sane with the rows value, so we just put
    6153             :  * a default estimate and hope that the wrapper can improve on it.  The
    6154             :  * wrapper's GetForeignRelSize function will be called momentarily.
    6155             :  *
    6156             :  * The rel's targetlist and restrictinfo list must have been constructed
    6157             :  * already.
    6158             :  */
    6159             : void
    6160        2412 : set_foreign_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6161             : {
    6162             :     /* Should only be applied to base relations */
    6163             :     Assert(rel->relid > 0);
    6164             : 
    6165        2412 :     rel->rows = 1000;            /* entirely bogus default estimate */
    6166             : 
    6167        2412 :     cost_qual_eval(&rel->baserestrictcost, rel->baserestrictinfo, root);
    6168             : 
    6169        2412 :     set_rel_width(root, rel);
    6170        2412 : }
    6171             : 
    6172             : 
    6173             : /*
    6174             :  * set_rel_width
    6175             :  *      Set the estimated output width of a base relation.
    6176             :  *
    6177             :  * The estimated output width is the sum of the per-attribute width estimates
    6178             :  * for the actually-referenced columns, plus any PHVs or other expressions
    6179             :  * that have to be calculated at this relation.  This is the amount of data
    6180             :  * we'd need to pass upwards in case of a sort, hash, etc.
    6181             :  *
    6182             :  * This function also sets reltarget->cost, so it's a bit misnamed now.
    6183             :  *
    6184             :  * NB: this works best on plain relations because it prefers to look at
    6185             :  * real Vars.  For subqueries, set_subquery_size_estimates will already have
    6186             :  * copied up whatever per-column estimates were made within the subquery,
    6187             :  * and for other types of rels there isn't much we can do anyway.  We fall
    6188             :  * back on (fairly stupid) datatype-based width estimates if we can't get
    6189             :  * any better number.
    6190             :  *
    6191             :  * The per-attribute width estimates are cached for possible re-use while
    6192             :  * building join relations or post-scan/join pathtargets.
    6193             :  */
    6194             : static void
    6195      506368 : set_rel_width(PlannerInfo *root, RelOptInfo *rel)
    6196             : {
    6197      506368 :     Oid         reloid = planner_rt_fetch(rel->relid, root)->relid;
    6198      506368 :     int64       tuple_width = 0;
    6199      506368 :     bool        have_wholerow_var = false;
    6200             :     ListCell   *lc;
    6201             : 
    6202             :     /* Vars are assumed to have cost zero, but other exprs do not */
    6203      506368 :     rel->reltarget->cost.startup = 0;
    6204      506368 :     rel->reltarget->cost.per_tuple = 0;
    6205             : 
    6206     1832376 :     foreach(lc, rel->reltarget->exprs)
    6207             :     {
    6208     1326008 :         Node       *node = (Node *) lfirst(lc);
    6209             : 
    6210             :         /*
    6211             :          * Ordinarily, a Var in a rel's targetlist must belong to that rel;
    6212             :          * but there are corner cases involving LATERAL references where that
    6213             :          * isn't so.  If the Var has the wrong varno, fall through to the
    6214             :          * generic case (it doesn't seem worth the trouble to be any smarter).
    6215             :          */
    6216     1326008 :         if (IsA(node, Var) &&
    6217     1301834 :             ((Var *) node)->varno == rel->relid)
    6218      357698 :         {
    6219     1301768 :             Var        *var = (Var *) node;
    6220             :             int         ndx;
    6221             :             int32       item_width;
    6222             : 
    6223             :             Assert(var->varattno >= rel->min_attr);
    6224             :             Assert(var->varattno <= rel->max_attr);
    6225             : 
    6226     1301768 :             ndx = var->varattno - rel->min_attr;
    6227             : 
    6228             :             /*
    6229             :              * If it's a whole-row Var, we'll deal with it below after we have
    6230             :              * already cached as many attr widths as possible.
    6231             :              */
    6232     1301768 :             if (var->varattno == 0)
    6233             :             {
    6234        2956 :                 have_wholerow_var = true;
    6235        2956 :                 continue;
    6236             :             }
    6237             : 
    6238             :             /*
    6239             :              * The width may have been cached already (especially if it's a
    6240             :              * subquery), so don't duplicate effort.
    6241             :              */
    6242     1298812 :             if (rel->attr_widths[ndx] > 0)
    6243             :             {
    6244      238584 :                 tuple_width += rel->attr_widths[ndx];
    6245      238584 :                 continue;
    6246             :             }
    6247             : 
    6248             :             /* Try to get column width from statistics */
    6249     1060228 :             if (reloid != InvalidOid && var->varattno > 0)
    6250             :             {
    6251      841604 :                 item_width = get_attavgwidth(reloid, var->varattno);
    6252      841604 :                 if (item_width > 0)
    6253             :                 {
    6254      702530 :                     rel->attr_widths[ndx] = item_width;
    6255      702530 :                     tuple_width += item_width;
    6256      702530 :                     continue;
    6257             :                 }
    6258             :             }
    6259             : 
    6260             :             /*
    6261             :              * Not a plain relation, or can't find statistics for it. Estimate
    6262             :              * using just the type info.
    6263             :              */
    6264      357698 :             item_width = get_typavgwidth(var->vartype, var->vartypmod);
    6265             :             Assert(item_width > 0);
    6266      357698 :             rel->attr_widths[ndx] = item_width;
    6267      357698 :             tuple_width += item_width;
    6268             :         }
    6269       24240 :         else if (IsA(node, PlaceHolderVar))
    6270             :         {
    6271             :             /*
    6272             :              * We will need to evaluate the PHV's contained expression while
    6273             :              * scanning this rel, so be sure to include it in reltarget->cost.
    6274             :              */
    6275        1960 :             PlaceHolderVar *phv = (PlaceHolderVar *) node;
    6276        1960 :             PlaceHolderInfo *phinfo = find_placeholder_info(root, phv);
    6277             :             QualCost    cost;
    6278             : 
    6279        1960 :             tuple_width += phinfo->ph_width;
    6280        1960 :             cost_qual_eval_node(&cost, (Node *) phv->phexpr, root);
    6281        1960 :             rel->reltarget->cost.startup += cost.startup;
    6282        1960 :             rel->reltarget->cost.per_tuple += cost.per_tuple;
    6283             :         }
    6284             :         else
    6285             :         {
    6286             :             /*
    6287             :              * We could be looking at an expression pulled up from a subquery,
    6288             :              * or a ROW() representing a whole-row child Var, etc.  Do what we
    6289             :              * can using the expression type information.
    6290             :              */
    6291             :             int32       item_width;
    6292             :             QualCost    cost;
    6293             : 
    6294       22280 :             item_width = get_typavgwidth(exprType(node), exprTypmod(node));
    6295             :             Assert(item_width > 0);
    6296       22280 :             tuple_width += item_width;
    6297             :             /* Not entirely clear if we need to account for cost, but do so */
    6298       22280 :             cost_qual_eval_node(&cost, node, root);
    6299       22280 :             rel->reltarget->cost.startup += cost.startup;
    6300       22280 :             rel->reltarget->cost.per_tuple += cost.per_tuple;
    6301             :         }
    6302             :     }
    6303             : 
    6304             :     /*
    6305             :      * If we have a whole-row reference, estimate its width as the sum of
    6306             :      * per-column widths plus heap tuple header overhead.
    6307             :      */
    6308      506368 :     if (have_wholerow_var)
    6309             :     {
    6310        2956 :         int64       wholerow_width = MAXALIGN(SizeofHeapTupleHeader);
    6311             : 
    6312        2956 :         if (reloid != InvalidOid)
    6313             :         {
    6314             :             /* Real relation, so estimate true tuple width */
    6315        2298 :             wholerow_width += get_relation_data_width(reloid,
    6316        2298 :                                                       rel->attr_widths - rel->min_attr);
    6317             :         }
    6318             :         else
    6319             :         {
    6320             :             /* Do what we can with info for a phony rel */
    6321             :             AttrNumber  i;
    6322             : 
    6323        1794 :             for (i = 1; i <= rel->max_attr; i++)
    6324        1136 :                 wholerow_width += rel->attr_widths[i - rel->min_attr];
    6325             :         }
    6326             : 
    6327        2956 :         rel->attr_widths[0 - rel->min_attr] = clamp_width_est(wholerow_width);
    6328             : 
    6329             :         /*
    6330             :          * Include the whole-row Var as part of the output tuple.  Yes, that
    6331             :          * really is what happens at runtime.
    6332             :          */
    6333        2956 :         tuple_width += wholerow_width;
    6334             :     }
    6335             : 
    6336      506368 :     rel->reltarget->width = clamp_width_est(tuple_width);
    6337      506368 : }
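
/*
 * [Editor's illustration, not part of costsize.c.]  Worked instance of
 * the width arithmetic above, with assumed per-column average widths.  A
 * whole-row Var additionally pays the MAXALIGN'ed heap tuple header
 * (SizeofHeapTupleHeader is 23 bytes, so 24 after 8-byte alignment); the
 * real code gets the data width from get_relation_data_width().
 */
#include <stdio.h>
#include <stdint.h>

#define MAXALIGN(LEN) (((uintptr_t) (LEN) + 7) & ~(uintptr_t) 7)

int
main(void)
{
    int         attr_widths[] = {4, 20, 8};     /* int4, text avg 20, timestamptz */
    long        tuple_width = 0;

    for (int i = 0; i < 3; i++)
        tuple_width += attr_widths[i];
    printf("reltarget width: %ld\n", tuple_width);              /* 32 */
    printf("whole-row width: %ld\n",
           (long) MAXALIGN(23) + tuple_width);                  /* 56 */
    return 0;
}
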
    6338             : 
    6339             : /*
    6340             :  * set_pathtarget_cost_width
    6341             :  *      Set the estimated eval cost and output width of a PathTarget tlist.
    6342             :  *
    6343             :  * As a notational convenience, returns the same PathTarget pointer passed in.
    6344             :  *
    6345             :  * Most, though not quite all, uses of this function occur after we've run
    6346             :  * set_rel_width() for base relations; so we can usually obtain cached width
    6347             :  * estimates for Vars.  If we can't, fall back on datatype-based width
    6348             :  * estimates.  Present early-planning uses of PathTargets don't need accurate
    6349             :  * widths badly enough to justify going to the catalogs for better data.
    6350             :  */
    6351             : PathTarget *
    6352      608654 : set_pathtarget_cost_width(PlannerInfo *root, PathTarget *target)
    6353             : {
    6354      608654 :     int64       tuple_width = 0;
    6355             :     ListCell   *lc;
    6356             : 
    6357             :     /* Vars are assumed to have cost zero, but other exprs do not */
    6358      608654 :     target->cost.startup = 0;
    6359      608654 :     target->cost.per_tuple = 0;
    6360             : 
    6361     2120352 :     foreach(lc, target->exprs)
    6362             :     {
    6363     1511698 :         Node       *node = (Node *) lfirst(lc);
    6364             : 
    6365     1511698 :         tuple_width += get_expr_width(root, node);
    6366             : 
    6367             :         /* For non-Vars, account for evaluation cost */
    6368     1511698 :         if (!IsA(node, Var))
    6369             :         {
    6370             :             QualCost    cost;
    6371             : 
    6372      634706 :             cost_qual_eval_node(&cost, node, root);
    6373      634706 :             target->cost.startup += cost.startup;
    6374      634706 :             target->cost.per_tuple += cost.per_tuple;
    6375             :         }
    6376             :     }
    6377             : 
    6378      608654 :     target->width = clamp_width_est(tuple_width);
    6379             : 
    6380      608654 :     return target;
    6381             : }
    6382             : 
    6383             : /*
    6384             :  * get_expr_width
    6385             :  *      Estimate the width of the given expr attempting to use the width
    6386             :  *      cached in a Var's owning RelOptInfo, else fallback on the type's
    6387             :  *      average width when unable to or when the given Node is not a Var.
    6388             :  */
    6389             : static int32
    6390     1839646 : get_expr_width(PlannerInfo *root, const Node *expr)
    6391             : {
    6392             :     int32       width;
    6393             : 
    6394     1839646 :     if (IsA(expr, Var))
    6395             :     {
    6396     1192034 :         const Var  *var = (const Var *) expr;
    6397             : 
    6398             :         /* We should not see any upper-level Vars here */
    6399             :         Assert(var->varlevelsup == 0);
    6400             : 
    6401             :         /* Try to get data from RelOptInfo cache */
    6402     1192034 :         if (!IS_SPECIAL_VARNO(var->varno) &&
    6403     1186424 :             var->varno < root->simple_rel_array_size)
    6404             :         {
    6405     1186424 :             RelOptInfo *rel = root->simple_rel_array[var->varno];
    6406             : 
    6407     1186424 :             if (rel != NULL &&
    6408     1157496 :                 var->varattno >= rel->min_attr &&
    6409     1157496 :                 var->varattno <= rel->max_attr)
    6410             :             {
    6411     1157496 :                 int         ndx = var->varattno - rel->min_attr;
    6412             : 
    6413     1157496 :                 if (rel->attr_widths[ndx] > 0)
    6414     1123992 :                     return rel->attr_widths[ndx];
    6415             :             }
    6416             :         }
    6417             : 
    6418             :         /*
    6419             :          * No cached data available, so estimate using just the type info.
    6420             :          */
    6421       68042 :         width = get_typavgwidth(var->vartype, var->vartypmod);
    6422             :         Assert(width > 0);
    6423             : 
    6424       68042 :         return width;
    6425             :     }
    6426             : 
    6427      647612 :     width = get_typavgwidth(exprType(expr), exprTypmod(expr));
    6428             :     Assert(width > 0);
    6429      647612 :     return width;
    6430             : }
    6431             : 
    6432             : /*
    6433             :  * relation_byte_size
    6434             :  *    Estimate the storage space in bytes for a given number of tuples
    6435             :  *    of a given width (size in bytes).
    6436             :  */
    6437             : static double
    6438     4035286 : relation_byte_size(double tuples, int width)
    6439             : {
    6440     4035286 :     return tuples * (MAXALIGN(width) + MAXALIGN(SizeofHeapTupleHeader));
    6441             : }
    6442             : 
    6443             : /*
    6444             :  * page_size
    6445             :  *    Returns an estimate of the number of pages covered by a given
    6446             :  *    number of tuples of a given width (size in bytes).
    6447             :  */
    6448             : static double
    6449       10688 : page_size(double tuples, int width)
    6450             : {
    6451       10688 :     return ceil(relation_byte_size(tuples, width) / BLCKSZ);
    6452             : }
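
/*
 * [Editor's illustration, not part of costsize.c.]  Worked instance of
 * relation_byte_size()/page_size() with assumed defaults: 8-byte MAXALIGN
 * and BLCKSZ = 8192.  Each tuple pays its MAXALIGN'ed width plus the
 * aligned heap header (23 -> 24 bytes).
 */
#include <math.h>
#include <stdio.h>
#include <stdint.h>

#define BLCKSZ 8192
#define MAXALIGN(LEN) (((uintptr_t) (LEN) + 7) & ~(uintptr_t) 7)

static double
relation_byte_size(double tuples, int width)
{
    return tuples * (MAXALIGN(width) + MAXALIGN(23));
}

int
main(void)
{
    /* one million 32-byte tuples: (32 + 24) * 1e6 = 56 MB -> 6836 pages */
    double      bytes = relation_byte_size(1e6, 32);

    printf("%.0f bytes, %.0f pages\n", bytes, ceil(bytes / BLCKSZ));
    return 0;
}
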
    6453             : 
    6454             : /*
    6455             :  * Estimate the fraction of the work that each worker will do given the
    6456             :  * number of workers budgeted for the path.
    6457             :  */
    6458             : static double
    6459      178686 : get_parallel_divisor(Path *path)
    6460             : {
    6461      178686 :     double      parallel_divisor = path->parallel_workers;
    6462             : 
    6463             :     /*
    6464             :      * Early experience with parallel query suggests that when there is only
    6465             :      * one worker, the leader often makes a very substantial contribution to
    6466             :      * executing the parallel portion of the plan, but as more workers are
    6467             :      * added, it does less and less, because it's busy reading tuples from the
    6468             :      * workers and doing whatever non-parallel post-processing is needed.  By
    6469             :      * the time we reach 4 workers, the leader no longer makes a meaningful
    6470             :      * contribution.  Thus, for now, estimate that the leader spends 30% of
    6471             :      * its time servicing each worker, and the remainder executing the
    6472             :      * parallel plan.
    6473             :      */
    6474      178686 :     if (parallel_leader_participation)
    6475             :     {
    6476             :         double      leader_contribution;
    6477             : 
    6478      177384 :         leader_contribution = 1.0 - (0.3 * path->parallel_workers);
    6479      177384 :         if (leader_contribution > 0)
    6480      175068 :             parallel_divisor += leader_contribution;
    6481             :     }
    6482             : 
    6483      178686 :     return parallel_divisor;
    6484             : }
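
/*
 * [Editor's illustration, not part of costsize.c.]  The divisor above for
 * a participating leader: each worker costs the leader 30% of its time,
 * so the leader's own contribution fades out entirely at 4 workers.
 */
#include <stdio.h>

static double
parallel_divisor(int workers)
{
    double      divisor = workers;
    double      leader = 1.0 - 0.3 * workers;

    if (leader > 0)
        divisor += leader;
    return divisor;
}

int
main(void)
{
    /* prints 1.7, 2.4, 3.1, 4.0 */
    for (int w = 1; w <= 4; w++)
        printf("%d workers -> divisor %.1f\n", w, parallel_divisor(w));
    return 0;
}
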
    6485             : 
    6486             : /*
    6487             :  * compute_bitmap_pages
    6488             :  *    Estimate number of pages fetched from heap in a bitmap heap scan.
    6489             :  *
    6490             :  * 'baserel' is the relation to be scanned
    6491             :  * 'bitmapqual' is a tree of IndexPaths, BitmapAndPaths, and BitmapOrPaths
    6492             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
    6493             :  *      estimates of caching behavior
    6494             :  *
    6495             :  * If cost_p isn't NULL, the indexTotalCost estimate is returned in *cost_p.
    6496             :  * If tuples_p isn't NULL, the tuples_fetched estimate is returned in *tuples_p.
    6497             :  */
    6498             : double
    6499      681804 : compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel,
    6500             :                      Path *bitmapqual, double loop_count,
    6501             :                      Cost *cost_p, double *tuples_p)
    6502             : {
    6503             :     Cost        indexTotalCost;
    6504             :     Selectivity indexSelectivity;
    6505             :     double      T;
    6506             :     double      pages_fetched;
    6507             :     double      tuples_fetched;
    6508             :     double      heap_pages;
    6509             :     double      maxentries;
    6510             : 
    6511             :     /*
    6512             :      * Fetch total cost of obtaining the bitmap, as well as its total
    6513             :      * selectivity.
    6514             :      */
    6515      681804 :     cost_bitmap_tree_node(bitmapqual, &indexTotalCost, &indexSelectivity);
    6516             : 
    6517             :     /*
    6518             :      * Estimate number of main-table pages fetched.
    6519             :      */
    6520      681804 :     tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);
    6521             : 
    6522      681804 :     T = (baserel->pages > 1) ? (double) baserel->pages : 1.0;
    6523             : 
    6524             :     /*
    6525             :      * For a single scan, the number of heap pages that need to be fetched is
    6526             :      * the same as the Mackert and Lohman formula for the case T <= b (ie, no
    6527             :      * re-reads needed).
    6528             :      */
    6529      681804 :     pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
    6530             : 
    6531             :     /*
    6532             :      * Calculate the number of pages fetched from the heap.  Then, based on
    6533             :      * the work_mem budget, estimate maxentries, the bitmap's entry capacity.
    6534             :      * (Note that we always do this calculation based on the number of pages
    6535             :      * that would be fetched in a single iteration, even if loop_count > 1.
    6536             :      * That's correct, because only that number of entries will be stored in
    6537             :      * the bitmap at one time.)
    6538             :      */
    6539      681804 :     heap_pages = Min(pages_fetched, baserel->pages);
    6540      681804 :     maxentries = tbm_calculate_entries(work_mem * (Size) 1024);
    6541             : 
    6542      681804 :     if (loop_count > 1)
    6543             :     {
    6544             :         /*
    6545             :          * For repeated bitmap scans, scale up the number of tuples fetched in
    6546             :          * the Mackert and Lohman formula by the number of scans, so that we
    6547             :          * estimate the number of pages fetched by all the scans. Then
    6548             :          * pro-rate for one scan.
    6549             :          */
    6550      139466 :         pages_fetched = index_pages_fetched(tuples_fetched * loop_count,
    6551             :                                             baserel->pages,
    6552             :                                             get_indexpath_pages(bitmapqual),
    6553             :                                             root);
    6554      139466 :         pages_fetched /= loop_count;
    6555             :     }
    6556             : 
    6557      681804 :     if (pages_fetched >= T)
    6558       68468 :         pages_fetched = T;
    6559             :     else
    6560      613336 :         pages_fetched = ceil(pages_fetched);
    6561             : 
    6562      681804 :     if (maxentries < heap_pages)
    6563             :     {
    6564             :         double      exact_pages;
    6565             :         double      lossy_pages;
    6566             : 
    6567             :         /*
    6568             :          * Crude approximation of the number of lossy pages.  Because of the
    6569             :          * way tbm_lossify() is coded, the number of lossy pages increases
    6570             :          * very sharply as soon as we run short of memory; this formula has
    6571             :          * that property and seems to perform adequately in testing, but it's
    6572             :          * possible we could do better somehow.
    6573             :          */
    6574          18 :         lossy_pages = Max(0, heap_pages - maxentries / 2);
    6575          18 :         exact_pages = heap_pages - lossy_pages;
    6576             : 
    6577             :         /*
    6578             :          * If there are lossy pages then recompute the number of tuples
    6579             :          * processed by the bitmap heap node.  We assume here that the chance
    6580             :          * of a given tuple coming from an exact page is the same as the
    6581             :          * chance that a given page is exact.  This might not be true, but
    6582             :          * it's not clear how we can do any better.
    6583             :          */
    6584          18 :         if (lossy_pages > 0)
    6585             :             tuples_fetched =
    6586          18 :                 clamp_row_est(indexSelectivity *
    6587          18 :                               (exact_pages / heap_pages) * baserel->tuples +
    6588          18 :                               (lossy_pages / heap_pages) * baserel->tuples);
    6589             :     }
    6590             : 
    6591      681804 :     if (cost_p)
    6592      537172 :         *cost_p = indexTotalCost;
    6593      681804 :     if (tuples_p)
    6594      537172 :         *tuples_p = tuples_fetched;
    6595             : 
    6596      681804 :     return pages_fetched;
    6597             : }
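
The two estimates above can be made concrete with arbitrary numbers.  The
following standalone sketch (hypothetical inputs; fmin/fmax substitute for
the backend's Min/Max macros) evaluates the Mackert and Lohman single-scan
formula and then the lossy-page approximation for a case where the bitmap
can hold only 2000 entries:

    #include <stdio.h>
    #include <math.h>

    int
    main(void)
    {
        double      T = 10000.0;        /* pages in the relation (example) */
        double      tuples_fetched = 5000.0;    /* selected tuples (example) */
        double      maxentries = 2000.0;    /* bitmap capacity (example) */

        /* Mackert and Lohman, T <= b case: no page is read twice. */
        double      pages_fetched =
            (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
        double      heap_pages = fmin(pages_fetched, T);

        printf("pages_fetched = %.0f\n", ceil(pages_fetched));

        /* Too few entries for one per heap page: some pages go lossy. */
        if (maxentries < heap_pages)
        {
            double      lossy_pages = fmax(0.0, heap_pages - maxentries / 2.0);
            double      exact_pages = heap_pages - lossy_pages;

            printf("exact = %.0f, lossy = %.0f\n", exact_pages, lossy_pages);
        }
        return 0;
    }

With these example inputs the single scan fetches an estimated 4000 pages,
of which 1000 are expected to stay exact and 3000 to be lossified.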
    6598             : 
    6599             : /*
    6600             :  * compute_gather_rows
    6601             :  *    Estimate number of rows for gather (merge) nodes.
    6602             :  *
    6603             :  * In a parallel plan, each worker's row estimate is determined by dividing the
    6604             :  * total number of rows by parallel_divisor, which accounts for the leader's
    6605             :  * contribution in addition to the number of workers.  Accordingly, when
    6606             :  * estimating the number of rows for gather (merge) nodes, we multiply the rows
    6607             :  * per worker by the same parallel_divisor to undo the division.
    6608             :  */
    6609             : double
    6610       29210 : compute_gather_rows(Path *path)
    6611             : {
    6612             :     Assert(path->parallel_workers > 0);
    6613             : 
    6614       29210 :     return clamp_row_est(path->rows * get_parallel_divisor(path));
    6615             : }
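
As a quick check of the divide-then-multiply round trip described in the
comment above, this sketch (reusing the hypothetical sketch_parallel_divisor
from the earlier example; none of this is planner code) shows that the
Gather estimate recovers the original total:

    #include <stdio.h>

    static double
    sketch_parallel_divisor(int workers)
    {
        double      divisor = (double) workers;
        double      leader_contribution = 1.0 - (0.3 * workers);

        if (leader_contribution > 0)
            divisor += leader_contribution;
        return divisor;
    }

    int
    main(void)
    {
        double      total_rows = 100000.0;  /* example row count */
        double      divisor = sketch_parallel_divisor(2);

        /* Each worker path is estimated at total / divisor rows ... */
        double      rows_per_worker = total_rows / divisor;

        /* ... and the Gather multiplies by the same divisor to undo it. */
        printf("per-worker = %.0f, gather = %.0f\n",
               rows_per_worker, rows_per_worker * divisor);
        return 0;
    }

For two workers the divisor is 2.4, so each worker is estimated at about
41667 rows and the Gather estimate comes back to 100000.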

Generated by: LCOV version 1.14