LCOV - code coverage report
Current view: top level - src/backend/optimizer/path - costsize.c
Test: PostgreSQL 19devel          Lines:     1739 of 1778 hit (97.8 %)
Date: 2025-11-27 00:18:02         Functions: 74 of 74 hit (100.0 %)

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * costsize.c
       4             :  *    Routines to compute (and set) relation sizes and path costs
       5             :  *
       6             :  * Path costs are measured in arbitrary units established by these basic
       7             :  * parameters:
       8             :  *
       9             :  *  seq_page_cost       Cost of a sequential page fetch
      10             :  *  random_page_cost    Cost of a non-sequential page fetch
      11             :  *  cpu_tuple_cost      Cost of typical CPU time to process a tuple
      12             :  *  cpu_index_tuple_cost  Cost of typical CPU time to process an index tuple
      13             :  *  cpu_operator_cost   Cost of CPU time to execute an operator or function
      14             :  *  parallel_tuple_cost Cost of CPU time to pass a tuple from worker to leader backend
      15             :  *  parallel_setup_cost Cost of setting up shared memory for parallelism
      16             :  *
      17             :  * We expect that the kernel will typically do some amount of read-ahead
      18             :  * optimization; this in conjunction with seek costs means that seq_page_cost
      19             :  * is normally considerably less than random_page_cost.  (However, if the
      20             :  * database is fully cached in RAM, it is reasonable to set them equal.)
      21             :  *
      22             :  * We also use a rough estimate "effective_cache_size" of the number of
      23             :  * disk pages in Postgres + OS-level disk cache.  (We can't simply use
      24             :  * NBuffers for this purpose because that would ignore the effects of
      25             :  * the kernel's disk cache.)
      26             :  *
      27             :  * Obviously, taking constants for these values is an oversimplification,
      28             :  * but it's tough enough to get any useful estimates even at this level of
      29             :  * detail.  Note that all of these parameters are user-settable, in case
      30             :  * the default values are drastically off for a particular platform.
      31             :  *
      32             :  * seq_page_cost and random_page_cost can also be overridden for an individual
      33             :  * tablespace, in case some data is on a fast disk and other data is on a slow
      34             :  * disk.  Per-tablespace overrides never apply to temporary work files such as
      35             :  * an external sort or a materialize node that overflows work_mem.
      36             :  *
      37             :  * We compute two separate costs for each path:
      38             :  *      total_cost: total estimated cost to fetch all tuples
      39             :  *      startup_cost: cost that is expended before first tuple is fetched
      40             :  * In some scenarios, such as when there is a LIMIT or we are implementing
      41             :  * an EXISTS(...) sub-select, it is not necessary to fetch all tuples of the
      42             :  * path's result.  A caller can estimate the cost of fetching a partial
      43             :  * result by interpolating between startup_cost and total_cost.  In detail:
      44             :  *      actual_cost = startup_cost +
      45             :  *          (total_cost - startup_cost) * tuples_to_fetch / path->rows;
       46             :  * Note that a base relation's rows count (and, by extension, plan_rows for
       47             :  * plan nodes below the LIMIT node) is set without regard to any LIMIT, so
      48             :  * that this equation works properly.  (Note: while path->rows is never zero
      49             :  * for ordinary relations, it is zero for paths for provably-empty relations,
      50             :  * so beware of division-by-zero.)  The LIMIT is applied as a top-level
      51             :  * plan node.
      52             :  *
      53             :  * Each path stores the total number of disabled nodes that exist at or
      54             :  * below that point in the plan tree. This is regarded as a component of
      55             :  * the cost, and paths with fewer disabled nodes should be regarded as
      56             :  * cheaper than those with more. Disabled nodes occur when the user sets
      57             :  * a GUC like enable_seqscan=false. We can't necessarily respect such a
      58             :  * setting in every part of the plan tree, but we want to respect in as many
       59             :  * setting in every part of the plan tree, but we want to respect it in as many
      60             :  * here rather than a count fail to do that. We used to disable nodes by
      61             :  * adding a large constant to the startup cost, but that distorted planning
      62             :  * in other ways.
      63             :  *
      64             :  * For largely historical reasons, most of the routines in this module use
      65             :  * the passed result Path only to store their results (rows, startup_cost and
      66             :  * total_cost) into.  All the input data they need is passed as separate
      67             :  * parameters, even though much of it could be extracted from the Path.
      68             :  * An exception is made for the cost_XXXjoin() routines, which expect all
      69             :  * the other fields of the passed XXXPath to be filled in, and similarly
      70             :  * cost_index() assumes the passed IndexPath is valid except for its output
      71             :  * values.
      72             :  *
      73             :  *
      74             :  * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
      75             :  * Portions Copyright (c) 1994, Regents of the University of California
      76             :  *
      77             :  * IDENTIFICATION
      78             :  *    src/backend/optimizer/path/costsize.c
      79             :  *
      80             :  *-------------------------------------------------------------------------
      81             :  */
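
/*
 * Editor's sketch (not part of costsize.c): the partial-fetch interpolation
 * described in the header comment above, as a standalone program.  All names
 * here are illustrative, not PostgreSQL APIs.
 */
#include <stdio.h>

static double
partial_fetch_cost(double startup_cost, double total_cost,
                   double tuples_to_fetch, double path_rows)
{
    /* guard against the provably-empty-relation case noted above */
    if (path_rows <= 0)
        return startup_cost;
    return startup_cost +
        (total_cost - startup_cost) * tuples_to_fetch / path_rows;
}

int
main(void)
{
    /* fetching 10 of an estimated 1000 rows: 5 + 100 * 10/1000 = 6 */
    printf("%g\n", partial_fetch_cost(5.0, 105.0, 10.0, 1000.0));
    return 0;
}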
      82             : 
      83             : #include "postgres.h"
      84             : 
      85             : #include <limits.h>
      86             : #include <math.h>
      87             : 
      88             : #include "access/amapi.h"
      89             : #include "access/htup_details.h"
      90             : #include "access/tsmapi.h"
      91             : #include "executor/executor.h"
      92             : #include "executor/nodeAgg.h"
      93             : #include "executor/nodeHash.h"
      94             : #include "executor/nodeMemoize.h"
      95             : #include "miscadmin.h"
      96             : #include "nodes/makefuncs.h"
      97             : #include "nodes/nodeFuncs.h"
      98             : #include "optimizer/clauses.h"
      99             : #include "optimizer/cost.h"
     100             : #include "optimizer/optimizer.h"
     101             : #include "optimizer/pathnode.h"
     102             : #include "optimizer/paths.h"
     103             : #include "optimizer/placeholder.h"
     104             : #include "optimizer/plancat.h"
     105             : #include "optimizer/restrictinfo.h"
     106             : #include "parser/parsetree.h"
     107             : #include "utils/lsyscache.h"
     108             : #include "utils/selfuncs.h"
     109             : #include "utils/spccache.h"
     110             : #include "utils/tuplesort.h"
     111             : 
     112             : 
      113             : #define LOG2(x)  (log(x) / 0.693147180559945)   /* i.e. log(x) / ln(2) */
     114             : 
     115             : /*
     116             :  * Append and MergeAppend nodes are less expensive than some other operations
     117             :  * which use cpu_tuple_cost; instead of adding a separate GUC, estimate the
     118             :  * per-tuple cost as cpu_tuple_cost multiplied by this value.
     119             :  */
     120             : #define APPEND_CPU_COST_MULTIPLIER 0.5
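
/*
 * Worked note: with the default cpu_tuple_cost of 0.01, each Append or
 * MergeAppend output tuple is therefore charged 0.01 * 0.5 = 0.005.
 */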
     121             : 
     122             : /*
     123             :  * Maximum value for row estimates.  We cap row estimates to this to help
     124             :  * ensure that costs based on these estimates remain within the range of what
     125             :  * double can represent.  add_path() wouldn't act sanely given infinite or NaN
     126             :  * cost values.
     127             :  */
     128             : #define MAXIMUM_ROWCOUNT 1e100
     129             : 
     130             : double      seq_page_cost = DEFAULT_SEQ_PAGE_COST;
     131             : double      random_page_cost = DEFAULT_RANDOM_PAGE_COST;
     132             : double      cpu_tuple_cost = DEFAULT_CPU_TUPLE_COST;
     133             : double      cpu_index_tuple_cost = DEFAULT_CPU_INDEX_TUPLE_COST;
     134             : double      cpu_operator_cost = DEFAULT_CPU_OPERATOR_COST;
     135             : double      parallel_tuple_cost = DEFAULT_PARALLEL_TUPLE_COST;
     136             : double      parallel_setup_cost = DEFAULT_PARALLEL_SETUP_COST;
     137             : double      recursive_worktable_factor = DEFAULT_RECURSIVE_WORKTABLE_FACTOR;
     138             : 
     139             : int         effective_cache_size = DEFAULT_EFFECTIVE_CACHE_SIZE;
     140             : 
     141             : Cost        disable_cost = 1.0e10;
     142             : 
     143             : int         max_parallel_workers_per_gather = 2;
     144             : 
     145             : bool        enable_seqscan = true;
     146             : bool        enable_indexscan = true;
     147             : bool        enable_indexonlyscan = true;
     148             : bool        enable_bitmapscan = true;
     149             : bool        enable_tidscan = true;
     150             : bool        enable_sort = true;
     151             : bool        enable_incremental_sort = true;
     152             : bool        enable_hashagg = true;
     153             : bool        enable_nestloop = true;
     154             : bool        enable_material = true;
     155             : bool        enable_memoize = true;
     156             : bool        enable_mergejoin = true;
     157             : bool        enable_hashjoin = true;
     158             : bool        enable_gathermerge = true;
     159             : bool        enable_partitionwise_join = false;
     160             : bool        enable_partitionwise_aggregate = false;
     161             : bool        enable_parallel_append = true;
     162             : bool        enable_parallel_hash = true;
     163             : bool        enable_partition_pruning = true;
     164             : bool        enable_presorted_aggregate = true;
     165             : bool        enable_async_append = true;
     166             : 
     167             : typedef struct
     168             : {
     169             :     PlannerInfo *root;
     170             :     QualCost    total;
     171             : } cost_qual_eval_context;
     172             : 
     173             : static List *extract_nonindex_conditions(List *qual_clauses, List *indexclauses);
     174             : static MergeScanSelCache *cached_scansel(PlannerInfo *root,
     175             :                                          RestrictInfo *rinfo,
     176             :                                          PathKey *pathkey);
     177             : static void cost_rescan(PlannerInfo *root, Path *path,
     178             :                         Cost *rescan_startup_cost, Cost *rescan_total_cost);
     179             : static bool cost_qual_eval_walker(Node *node, cost_qual_eval_context *context);
     180             : static void get_restriction_qual_cost(PlannerInfo *root, RelOptInfo *baserel,
     181             :                                       ParamPathInfo *param_info,
     182             :                                       QualCost *qpqual_cost);
     183             : static bool has_indexed_join_quals(NestPath *path);
     184             : static double approx_tuple_count(PlannerInfo *root, JoinPath *path,
     185             :                                  List *quals);
     186             : static double calc_joinrel_size_estimate(PlannerInfo *root,
     187             :                                          RelOptInfo *joinrel,
     188             :                                          RelOptInfo *outer_rel,
     189             :                                          RelOptInfo *inner_rel,
     190             :                                          double outer_rows,
     191             :                                          double inner_rows,
     192             :                                          SpecialJoinInfo *sjinfo,
     193             :                                          List *restrictlist);
     194             : static Selectivity get_foreign_key_join_selectivity(PlannerInfo *root,
     195             :                                                     Relids outer_relids,
     196             :                                                     Relids inner_relids,
     197             :                                                     SpecialJoinInfo *sjinfo,
     198             :                                                     List **restrictlist);
     199             : static Cost append_nonpartial_cost(List *subpaths, int numpaths,
     200             :                                    int parallel_workers);
     201             : static void set_rel_width(PlannerInfo *root, RelOptInfo *rel);
     202             : static int32 get_expr_width(PlannerInfo *root, const Node *expr);
     203             : static double relation_byte_size(double tuples, int width);
     204             : static double page_size(double tuples, int width);
     205             : static double get_parallel_divisor(Path *path);
     206             : 
     207             : 
     208             : /*
     209             :  * clamp_row_est
     210             :  *      Force a row-count estimate to a sane value.
     211             :  */
     212             : double
     213    10436928 : clamp_row_est(double nrows)
     214             : {
     215             :     /*
     216             :      * Avoid infinite and NaN row estimates.  Costs derived from such values
     217             :      * are going to be useless.  Also force the estimate to be at least one
     218             :      * row, to make explain output look better and to avoid possible
     219             :      * divide-by-zero when interpolating costs.  Make it an integer, too.
     220             :      */
     221    10436928 :     if (nrows > MAXIMUM_ROWCOUNT || isnan(nrows))
     222           0 :         nrows = MAXIMUM_ROWCOUNT;
     223    10436928 :     else if (nrows <= 1.0)
     224     3336088 :         nrows = 1.0;
     225             :     else
     226     7100840 :         nrows = rint(nrows);
     227             : 
     228    10436928 :     return nrows;
     229             : }
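
/*
 * Behavior sketch for clamp_row_est (hypothetical inputs, not drawn from the
 * report above):
 *   clamp_row_est(-5.0)  => 1.0    estimates below one row are raised
 *   clamp_row_est(42.4)  => 42.0   fractional estimates are rounded
 *   clamp_row_est(1e300) => 1e100  capped at MAXIMUM_ROWCOUNT
 */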
     230             : 
     231             : /*
     232             :  * clamp_width_est
     233             :  *      Force a tuple-width estimate to a sane value.
     234             :  *
     235             :  * The planner represents datatype width and tuple width estimates as int32.
     236             :  * When summing column width estimates to create a tuple width estimate,
     237             :  * it's possible to reach integer overflow in edge cases.  To ensure sane
     238             :  * behavior, we form such sums in int64 arithmetic and then apply this routine
     239             :  * to clamp to int32 range.
     240             :  */
     241             : int32
     242     1941796 : clamp_width_est(int64 tuple_width)
     243             : {
     244             :     /*
     245             :      * Anything more than MaxAllocSize is clearly bogus, since we could not
     246             :      * create a tuple that large.
     247             :      */
     248     1941796 :     if (tuple_width > MaxAllocSize)
     249           0 :         return (int32) MaxAllocSize;
     250             : 
     251             :     /*
     252             :      * Unlike clamp_row_est, we just Assert that the value isn't negative,
     253             :      * rather than masking such errors.
     254             :      */
     255             :     Assert(tuple_width >= 0);
     256             : 
     257     1941796 :     return (int32) tuple_width;
     258             : }
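
/*
 * Behavior sketch: clamp_width_est((int64) 8 + 4 + 16) => 28, while any sum
 * above MaxAllocSize (0x3fffffff, just under 1 GB) is clamped to MaxAllocSize;
 * a negative input trips the Assert in assert-enabled builds.
 */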
     259             : 
     260             : 
     261             : /*
     262             :  * cost_seqscan
     263             :  *    Determines and returns the cost of scanning a relation sequentially.
     264             :  *
     265             :  * 'baserel' is the relation to be scanned
     266             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     267             :  */
     268             : void
     269      432962 : cost_seqscan(Path *path, PlannerInfo *root,
     270             :              RelOptInfo *baserel, ParamPathInfo *param_info)
     271             : {
     272      432962 :     Cost        startup_cost = 0;
     273             :     Cost        cpu_run_cost;
     274             :     Cost        disk_run_cost;
     275             :     double      spc_seq_page_cost;
     276             :     QualCost    qpqual_cost;
     277             :     Cost        cpu_per_tuple;
     278             : 
     279             :     /* Should only be applied to base relations */
     280             :     Assert(baserel->relid > 0);
     281             :     Assert(baserel->rtekind == RTE_RELATION);
     282             : 
     283             :     /* Mark the path with the correct row estimate */
     284      432962 :     if (param_info)
     285         840 :         path->rows = param_info->ppi_rows;
     286             :     else
     287      432122 :         path->rows = baserel->rows;
     288             : 
     289             :     /* fetch estimated page cost for tablespace containing table */
     290      432962 :     get_tablespace_page_costs(baserel->reltablespace,
     291             :                               NULL,
     292             :                               &spc_seq_page_cost);
     293             : 
     294             :     /*
     295             :      * disk costs
     296             :      */
     297      432962 :     disk_run_cost = spc_seq_page_cost * baserel->pages;
     298             : 
     299             :     /* CPU costs */
     300      432962 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
     301             : 
     302      432962 :     startup_cost += qpqual_cost.startup;
     303      432962 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     304      432962 :     cpu_run_cost = cpu_per_tuple * baserel->tuples;
     305             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     306      432962 :     startup_cost += path->pathtarget->cost.startup;
     307      432962 :     cpu_run_cost += path->pathtarget->cost.per_tuple * path->rows;
     308             : 
     309             :     /* Adjust costing for parallelism, if used. */
     310      432962 :     if (path->parallel_workers > 0)
     311             :     {
     312       27398 :         double      parallel_divisor = get_parallel_divisor(path);
     313             : 
     314             :         /* The CPU cost is divided among all the workers. */
     315       27398 :         cpu_run_cost /= parallel_divisor;
     316             : 
     317             :         /*
     318             :          * It may be possible to amortize some of the I/O cost, but probably
     319             :          * not very much, because most operating systems already do aggressive
     320             :          * prefetching.  For now, we assume that the disk run cost can't be
     321             :          * amortized at all.
     322             :          */
     323             : 
     324             :         /*
     325             :          * In the case of a parallel plan, the row count needs to represent
     326             :          * the number of tuples processed per worker.
     327             :          */
     328       27398 :         path->rows = clamp_row_est(path->rows / parallel_divisor);
     329             :     }
     330             : 
     331      432962 :     path->disabled_nodes = enable_seqscan ? 0 : 1;
     332      432962 :     path->startup_cost = startup_cost;
     333      432962 :     path->total_cost = startup_cost + cpu_run_cost + disk_run_cost;
     334      432962 : }
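
/*
 * Editor's sketch (not part of costsize.c): the core of the sequential-scan
 * formula above, using the default GUCs (seq_page_cost = 1.0,
 * cpu_tuple_cost = 0.01) and assuming no quals, tlist costs, or parallelism.
 * For example, a 100-page, 10000-tuple table costs 100 + 100 = 200.
 */
static double
seqscan_total_cost_sketch(double pages, double tuples)
{
    const double local_seq_page_cost = 1.0;     /* DEFAULT_SEQ_PAGE_COST */
    const double local_cpu_tuple_cost = 0.01;   /* DEFAULT_CPU_TUPLE_COST */

    return local_seq_page_cost * pages + local_cpu_tuple_cost * tuples;
}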
     335             : 
     336             : /*
     337             :  * cost_samplescan
     338             :  *    Determines and returns the cost of scanning a relation using sampling.
     339             :  *
     340             :  * 'baserel' is the relation to be scanned
     341             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     342             :  */
     343             : void
     344         306 : cost_samplescan(Path *path, PlannerInfo *root,
     345             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
     346             : {
     347         306 :     Cost        startup_cost = 0;
     348         306 :     Cost        run_cost = 0;
     349             :     RangeTblEntry *rte;
     350             :     TableSampleClause *tsc;
     351             :     TsmRoutine *tsm;
     352             :     double      spc_seq_page_cost,
     353             :                 spc_random_page_cost,
     354             :                 spc_page_cost;
     355             :     QualCost    qpqual_cost;
     356             :     Cost        cpu_per_tuple;
     357             : 
     358             :     /* Should only be applied to base relations with tablesample clauses */
     359             :     Assert(baserel->relid > 0);
     360         306 :     rte = planner_rt_fetch(baserel->relid, root);
     361             :     Assert(rte->rtekind == RTE_RELATION);
     362         306 :     tsc = rte->tablesample;
     363             :     Assert(tsc != NULL);
     364         306 :     tsm = GetTsmRoutine(tsc->tsmhandler);
     365             : 
     366             :     /* Mark the path with the correct row estimate */
     367         306 :     if (param_info)
     368          72 :         path->rows = param_info->ppi_rows;
     369             :     else
     370         234 :         path->rows = baserel->rows;
     371             : 
     372             :     /* fetch estimated page cost for tablespace containing table */
     373         306 :     get_tablespace_page_costs(baserel->reltablespace,
     374             :                               &spc_random_page_cost,
     375             :                               &spc_seq_page_cost);
     376             : 
     377             :     /* if NextSampleBlock is used, assume random access, else sequential */
     378         612 :     spc_page_cost = (tsm->NextSampleBlock != NULL) ?
     379         306 :         spc_random_page_cost : spc_seq_page_cost;
     380             : 
     381             :     /*
     382             :      * disk costs (recall that baserel->pages has already been set to the
     383             :      * number of pages the sampling method will visit)
     384             :      */
     385         306 :     run_cost += spc_page_cost * baserel->pages;
     386             : 
     387             :     /*
     388             :      * CPU costs (recall that baserel->tuples has already been set to the
     389             :      * number of tuples the sampling method will select).  Note that we ignore
     390             :      * execution cost of the TABLESAMPLE parameter expressions; they will be
     391             :      * evaluated only once per scan, and in most usages they'll likely be
     392             :      * simple constants anyway.  We also don't charge anything for the
     393             :      * calculations the sampling method might do internally.
     394             :      */
     395         306 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
     396             : 
     397         306 :     startup_cost += qpqual_cost.startup;
     398         306 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     399         306 :     run_cost += cpu_per_tuple * baserel->tuples;
     400             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     401         306 :     startup_cost += path->pathtarget->cost.startup;
     402         306 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
     403             : 
     404         306 :     path->disabled_nodes = 0;
     405         306 :     path->startup_cost = startup_cost;
     406         306 :     path->total_cost = startup_cost + run_cost;
     407         306 : }
     408             : 
     409             : /*
     410             :  * cost_gather
      411             :  *    Determines and returns the cost of a Gather path.
     412             :  *
     413             :  * 'rel' is the relation to be operated upon
     414             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     415             :  * 'rows' may be used to point to a row estimate; if non-NULL, it overrides
     416             :  * both 'rel' and 'param_info'.  This is useful when the path doesn't exactly
     417             :  * correspond to any particular RelOptInfo.
     418             :  */
     419             : void
     420       24778 : cost_gather(GatherPath *path, PlannerInfo *root,
     421             :             RelOptInfo *rel, ParamPathInfo *param_info,
     422             :             double *rows)
     423             : {
     424       24778 :     Cost        startup_cost = 0;
     425       24778 :     Cost        run_cost = 0;
     426             : 
     427             :     /* Mark the path with the correct row estimate */
     428       24778 :     if (rows)
     429        6078 :         path->path.rows = *rows;
     430       18700 :     else if (param_info)
     431           0 :         path->path.rows = param_info->ppi_rows;
     432             :     else
     433       18700 :         path->path.rows = rel->rows;
     434             : 
     435       24778 :     startup_cost = path->subpath->startup_cost;
     436             : 
     437       24778 :     run_cost = path->subpath->total_cost - path->subpath->startup_cost;
     438             : 
     439             :     /* Parallel setup and communication cost. */
     440       24778 :     startup_cost += parallel_setup_cost;
     441       24778 :     run_cost += parallel_tuple_cost * path->path.rows;
     442             : 
     443       24778 :     path->path.disabled_nodes = path->subpath->disabled_nodes;
     444       24778 :     path->path.startup_cost = startup_cost;
     445       24778 :     path->path.total_cost = (startup_cost + run_cost);
     446       24778 : }
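
/*
 * Worked example: with the defaults (parallel_setup_cost = 1000,
 * parallel_tuple_cost = 0.1), gathering 10000 rows adds 1000 to startup cost
 * and 0.1 * 10000 = 1000 to run cost on top of the subpath's costs, which is
 * why cheap scans rarely come out ahead as parallel plans.
 */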
     447             : 
     448             : /*
     449             :  * cost_gather_merge
      450             :  *    Determines and returns the cost of a Gather Merge path.
     451             :  *
     452             :  * GatherMerge merges several pre-sorted input streams, using a heap that at
     453             :  * any given instant holds the next tuple from each stream. If there are N
     454             :  * streams, we need about N*log2(N) tuple comparisons to construct the heap at
     455             :  * startup, and then for each output tuple, about log2(N) comparisons to
     456             :  * replace the top heap entry with the next tuple from the same stream.
     457             :  */
     458             : void
     459       17552 : cost_gather_merge(GatherMergePath *path, PlannerInfo *root,
     460             :                   RelOptInfo *rel, ParamPathInfo *param_info,
     461             :                   int input_disabled_nodes,
     462             :                   Cost input_startup_cost, Cost input_total_cost,
     463             :                   double *rows)
     464             : {
     465       17552 :     Cost        startup_cost = 0;
     466       17552 :     Cost        run_cost = 0;
     467             :     Cost        comparison_cost;
     468             :     double      N;
     469             :     double      logN;
     470             : 
     471             :     /* Mark the path with the correct row estimate */
     472       17552 :     if (rows)
     473       10920 :         path->path.rows = *rows;
     474        6632 :     else if (param_info)
     475           0 :         path->path.rows = param_info->ppi_rows;
     476             :     else
     477        6632 :         path->path.rows = rel->rows;
     478             : 
     479             :     /*
     480             :      * Add one to the number of workers to account for the leader.  This might
     481             :      * be overgenerous since the leader will do less work than other workers
     482             :      * in typical cases, but we'll go with it for now.
     483             :      */
     484             :     Assert(path->num_workers > 0);
     485       17552 :     N = (double) path->num_workers + 1;
     486       17552 :     logN = LOG2(N);
     487             : 
     488             :     /* Assumed cost per tuple comparison */
     489       17552 :     comparison_cost = 2.0 * cpu_operator_cost;
     490             : 
     491             :     /* Heap creation cost */
     492       17552 :     startup_cost += comparison_cost * N * logN;
     493             : 
     494             :     /* Per-tuple heap maintenance cost */
     495       17552 :     run_cost += path->path.rows * comparison_cost * logN;
     496             : 
     497             :     /* small cost for heap management, like cost_merge_append */
     498       17552 :     run_cost += cpu_operator_cost * path->path.rows;
     499             : 
     500             :     /*
     501             :      * Parallel setup and communication cost.  Since Gather Merge, unlike
     502             :      * Gather, requires us to block until a tuple is available from every
     503             :      * worker, we bump the IPC cost up a little bit as compared with Gather.
     504             :      * For lack of a better idea, charge an extra 5%.
     505             :      */
     506       17552 :     startup_cost += parallel_setup_cost;
     507       17552 :     run_cost += parallel_tuple_cost * path->path.rows * 1.05;
     508             : 
     509       17552 :     path->path.disabled_nodes = input_disabled_nodes
     510       17552 :         + (enable_gathermerge ? 0 : 1);
     511       17552 :     path->path.startup_cost = startup_cost + input_startup_cost;
     512       17552 :     path->path.total_cost = (startup_cost + run_cost + input_total_cost);
     513       17552 : }
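
/*
 * Worked example with the defaults (cpu_operator_cost = 0.0025,
 * parallel_tuple_cost = 0.1): with 2 workers, N = 3 streams and
 * logN = log2(3) ~= 1.585, so heap creation adds
 * 2 * 0.0025 * 3 * 1.585 ~= 0.024 to startup cost, while each output row
 * costs about 2 * 0.0025 * 1.585 + 0.0025 ~= 0.0104 of heap maintenance
 * plus 0.1 * 1.05 = 0.105 of IPC.
 */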
     514             : 
     515             : /*
     516             :  * cost_index
     517             :  *    Determines and returns the cost of scanning a relation using an index.
     518             :  *
     519             :  * 'path' describes the indexscan under consideration, and is complete
     520             :  *      except for the fields to be set by this routine
     521             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
     522             :  *      estimates of caching behavior
     523             :  *
     524             :  * In addition to rows, startup_cost and total_cost, cost_index() sets the
     525             :  * path's indextotalcost and indexselectivity fields.  These values will be
     526             :  * needed if the IndexPath is used in a BitmapIndexScan.
     527             :  *
     528             :  * NOTE: path->indexquals must contain only clauses usable as index
     529             :  * restrictions.  Any additional quals evaluated as qpquals may reduce the
     530             :  * number of returned tuples, but they won't reduce the number of tuples
     531             :  * we have to fetch from the table, so they don't reduce the scan cost.
     532             :  */
     533             : void
     534      802418 : cost_index(IndexPath *path, PlannerInfo *root, double loop_count,
     535             :            bool partial_path)
     536             : {
     537      802418 :     IndexOptInfo *index = path->indexinfo;
     538      802418 :     RelOptInfo *baserel = index->rel;
     539      802418 :     bool        indexonly = (path->path.pathtype == T_IndexOnlyScan);
     540             :     amcostestimate_function amcostestimate;
     541             :     List       *qpquals;
     542      802418 :     Cost        startup_cost = 0;
     543      802418 :     Cost        run_cost = 0;
     544      802418 :     Cost        cpu_run_cost = 0;
     545             :     Cost        indexStartupCost;
     546             :     Cost        indexTotalCost;
     547             :     Selectivity indexSelectivity;
     548             :     double      indexCorrelation,
     549             :                 csquared;
     550             :     double      spc_seq_page_cost,
     551             :                 spc_random_page_cost;
     552             :     Cost        min_IO_cost,
     553             :                 max_IO_cost;
     554             :     QualCost    qpqual_cost;
     555             :     Cost        cpu_per_tuple;
     556             :     double      tuples_fetched;
     557             :     double      pages_fetched;
     558             :     double      rand_heap_pages;
     559             :     double      index_pages;
     560             : 
     561             :     /* Should only be applied to base relations */
     562             :     Assert(IsA(baserel, RelOptInfo) &&
     563             :            IsA(index, IndexOptInfo));
     564             :     Assert(baserel->relid > 0);
     565             :     Assert(baserel->rtekind == RTE_RELATION);
     566             : 
     567             :     /*
     568             :      * Mark the path with the correct row estimate, and identify which quals
     569             :      * will need to be enforced as qpquals.  We need not check any quals that
     570             :      * are implied by the index's predicate, so we can use indrestrictinfo not
     571             :      * baserestrictinfo as the list of relevant restriction clauses for the
     572             :      * rel.
     573             :      */
     574      802418 :     if (path->path.param_info)
     575             :     {
     576      154436 :         path->path.rows = path->path.param_info->ppi_rows;
     577             :         /* qpquals come from the rel's restriction clauses and ppi_clauses */
     578      154436 :         qpquals = list_concat(extract_nonindex_conditions(path->indexinfo->indrestrictinfo,
     579             :                                                           path->indexclauses),
     580      154436 :                               extract_nonindex_conditions(path->path.param_info->ppi_clauses,
     581             :                                                           path->indexclauses));
     582             :     }
     583             :     else
     584             :     {
     585      647982 :         path->path.rows = baserel->rows;
     586             :         /* qpquals come from just the rel's restriction clauses */
     587      647982 :         qpquals = extract_nonindex_conditions(path->indexinfo->indrestrictinfo,
     588             :                                               path->indexclauses);
     589             :     }
     590             : 
     591             :     /* we don't need to check enable_indexonlyscan; indxpath.c does that */
     592      802418 :     path->path.disabled_nodes = enable_indexscan ? 0 : 1;
     593             : 
     594             :     /*
     595             :      * Call index-access-method-specific code to estimate the processing cost
     596             :      * for scanning the index, as well as the selectivity of the index (ie,
     597             :      * the fraction of main-table tuples we will have to retrieve) and its
     598             :      * correlation to the main-table tuple order.  We need a cast here because
     599             :      * pathnodes.h uses a weak function type to avoid including amapi.h.
     600             :      */
     601      802418 :     amcostestimate = (amcostestimate_function) index->amcostestimate;
     602      802418 :     amcostestimate(root, path, loop_count,
     603             :                    &indexStartupCost, &indexTotalCost,
     604             :                    &indexSelectivity, &indexCorrelation,
     605             :                    &index_pages);
     606             : 
     607             :     /*
     608             :      * Save amcostestimate's results for possible use in bitmap scan planning.
     609             :      * We don't bother to save indexStartupCost or indexCorrelation, because a
     610             :      * bitmap scan doesn't care about either.
     611             :      */
     612      802418 :     path->indextotalcost = indexTotalCost;
     613      802418 :     path->indexselectivity = indexSelectivity;
     614             : 
     615             :     /* all costs for touching index itself included here */
     616      802418 :     startup_cost += indexStartupCost;
     617      802418 :     run_cost += indexTotalCost - indexStartupCost;
     618             : 
     619             :     /* estimate number of main-table tuples fetched */
     620      802418 :     tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);
     621             : 
     622             :     /* fetch estimated page costs for tablespace containing table */
     623      802418 :     get_tablespace_page_costs(baserel->reltablespace,
     624             :                               &spc_random_page_cost,
     625             :                               &spc_seq_page_cost);
     626             : 
     627             :     /*----------
     628             :      * Estimate number of main-table pages fetched, and compute I/O cost.
     629             :      *
     630             :      * When the index ordering is uncorrelated with the table ordering,
     631             :      * we use an approximation proposed by Mackert and Lohman (see
     632             :      * index_pages_fetched() for details) to compute the number of pages
     633             :      * fetched, and then charge spc_random_page_cost per page fetched.
     634             :      *
     635             :      * When the index ordering is exactly correlated with the table ordering
     636             :      * (just after a CLUSTER, for example), the number of pages fetched should
     637             :      * be exactly selectivity * table_size.  What's more, all but the first
     638             :      * will be sequential fetches, not the random fetches that occur in the
     639             :      * uncorrelated case.  So if the number of pages is more than 1, we
     640             :      * ought to charge
     641             :      *      spc_random_page_cost + (pages_fetched - 1) * spc_seq_page_cost
     642             :      * For partially-correlated indexes, we ought to charge somewhere between
     643             :      * these two estimates.  We currently interpolate linearly between the
     644             :      * estimates based on the correlation squared (XXX is that appropriate?).
     645             :      *
     646             :      * If it's an index-only scan, then we will not need to fetch any heap
     647             :      * pages for which the visibility map shows all tuples are visible.
     648             :      * Hence, reduce the estimated number of heap fetches accordingly.
     649             :      * We use the measured fraction of the entire heap that is all-visible,
     650             :      * which might not be particularly relevant to the subset of the heap
     651             :      * that this query will fetch; but it's not clear how to do better.
     652             :      *----------
     653             :      */
     654      802418 :     if (loop_count > 1)
     655             :     {
     656             :         /*
     657             :          * For repeated indexscans, the appropriate estimate for the
     658             :          * uncorrelated case is to scale up the number of tuples fetched in
     659             :          * the Mackert and Lohman formula by the number of scans, so that we
     660             :          * estimate the number of pages fetched by all the scans; then
     661             :          * pro-rate the costs for one scan.  In this case we assume all the
     662             :          * fetches are random accesses.
     663             :          */
     664       88988 :         pages_fetched = index_pages_fetched(tuples_fetched * loop_count,
     665             :                                             baserel->pages,
     666       88988 :                                             (double) index->pages,
     667             :                                             root);
     668             : 
     669       88988 :         if (indexonly)
     670       11208 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     671             : 
     672       88988 :         rand_heap_pages = pages_fetched;
     673             : 
     674       88988 :         max_IO_cost = (pages_fetched * spc_random_page_cost) / loop_count;
     675             : 
     676             :         /*
     677             :          * In the perfectly correlated case, the number of pages touched by
     678             :          * each scan is selectivity * table_size, and we can use the Mackert
     679             :          * and Lohman formula at the page level to estimate how much work is
     680             :          * saved by caching across scans.  We still assume all the fetches are
     681             :          * random, though, which is an overestimate that's hard to correct for
     682             :          * without double-counting the cache effects.  (But in most cases
     683             :          * where such a plan is actually interesting, only one page would get
     684             :          * fetched per scan anyway, so it shouldn't matter much.)
     685             :          */
     686       88988 :         pages_fetched = ceil(indexSelectivity * (double) baserel->pages);
     687             : 
     688       88988 :         pages_fetched = index_pages_fetched(pages_fetched * loop_count,
     689             :                                             baserel->pages,
     690       88988 :                                             (double) index->pages,
     691             :                                             root);
     692             : 
     693       88988 :         if (indexonly)
     694       11208 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     695             : 
     696       88988 :         min_IO_cost = (pages_fetched * spc_random_page_cost) / loop_count;
     697             :     }
     698             :     else
     699             :     {
     700             :         /*
     701             :          * Normal case: apply the Mackert and Lohman formula, and then
     702             :          * interpolate between that and the correlation-derived result.
     703             :          */
     704      713430 :         pages_fetched = index_pages_fetched(tuples_fetched,
     705             :                                             baserel->pages,
     706      713430 :                                             (double) index->pages,
     707             :                                             root);
     708             : 
     709      713430 :         if (indexonly)
     710       70820 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     711             : 
     712      713430 :         rand_heap_pages = pages_fetched;
     713             : 
     714             :         /* max_IO_cost is for the perfectly uncorrelated case (csquared=0) */
     715      713430 :         max_IO_cost = pages_fetched * spc_random_page_cost;
     716             : 
     717             :         /* min_IO_cost is for the perfectly correlated case (csquared=1) */
     718      713430 :         pages_fetched = ceil(indexSelectivity * (double) baserel->pages);
     719             : 
     720      713430 :         if (indexonly)
     721       70820 :             pages_fetched = ceil(pages_fetched * (1.0 - baserel->allvisfrac));
     722             : 
     723      713430 :         if (pages_fetched > 0)
     724             :         {
     725      634994 :             min_IO_cost = spc_random_page_cost;
     726      634994 :             if (pages_fetched > 1)
     727      186490 :                 min_IO_cost += (pages_fetched - 1) * spc_seq_page_cost;
     728             :         }
     729             :         else
     730       78436 :             min_IO_cost = 0;
     731             :     }
     732             : 
     733      802418 :     if (partial_path)
     734             :     {
     735             :         /*
     736             :          * For index only scans compute workers based on number of index pages
     737             :          * fetched; the number of heap pages we fetch might be so small as to
     738             :          * effectively rule out parallelism, which we don't want to do.
     739             :          */
     740      276976 :         if (indexonly)
     741       26136 :             rand_heap_pages = -1;
     742             : 
     743             :         /*
      744             :          * Estimate the number of parallel workers required to scan the index.
      745             :          * Use the heap-page count computed above, on the assumption that heap
      746             :          * fetches won't be sequential: in parallel scans, pages are accessed
      747             :          * in random order.
     748             :          */
     749      276976 :         path->path.parallel_workers = compute_parallel_worker(baserel,
     750             :                                                               rand_heap_pages,
     751             :                                                               index_pages,
     752             :                                                               max_parallel_workers_per_gather);
     753             : 
     754             :         /*
     755             :          * Fall out if workers can't be assigned for parallel scan, because in
     756             :          * such a case this path will be rejected.  So there is no benefit in
     757             :          * doing extra computation.
     758             :          */
     759      276976 :         if (path->path.parallel_workers <= 0)
     760      266874 :             return;
     761             : 
     762       10102 :         path->path.parallel_aware = true;
     763             :     }
     764             : 
     765             :     /*
     766             :      * Now interpolate based on estimated index order correlation to get total
     767             :      * disk I/O cost for main table accesses.
     768             :      */
     769      535544 :     csquared = indexCorrelation * indexCorrelation;
     770             : 
     771      535544 :     run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);
     772             : 
     773             :     /*
     774             :      * Estimate CPU costs per tuple.
     775             :      *
     776             :      * What we want here is cpu_tuple_cost plus the evaluation costs of any
     777             :      * qual clauses that we have to evaluate as qpquals.
     778             :      */
     779      535544 :     cost_qual_eval(&qpqual_cost, qpquals, root);
     780             : 
     781      535544 :     startup_cost += qpqual_cost.startup;
     782      535544 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
     783             : 
     784      535544 :     cpu_run_cost += cpu_per_tuple * tuples_fetched;
     785             : 
     786             :     /* tlist eval costs are paid per output row, not per tuple scanned */
     787      535544 :     startup_cost += path->path.pathtarget->cost.startup;
     788      535544 :     cpu_run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
     789             : 
     790             :     /* Adjust costing for parallelism, if used. */
     791      535544 :     if (path->path.parallel_workers > 0)
     792             :     {
     793       10102 :         double      parallel_divisor = get_parallel_divisor(&path->path);
     794             : 
     795       10102 :         path->path.rows = clamp_row_est(path->path.rows / parallel_divisor);
     796             : 
     797             :         /* The CPU cost is divided among all the workers. */
     798       10102 :         cpu_run_cost /= parallel_divisor;
     799             :     }
     800             : 
     801      535544 :     run_cost += cpu_run_cost;
     802             : 
     803      535544 :     path->path.startup_cost = startup_cost;
     804      535544 :     path->path.total_cost = startup_cost + run_cost;
     805             : }
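
/*
 * Interpolation example (illustrative numbers): with max_IO_cost = 400,
 * min_IO_cost = 40, and indexCorrelation = 0.9, csquared = 0.81 and the
 * charged I/O cost is 400 + 0.81 * (40 - 400) = 108.4 -- much closer to the
 * fully-correlated estimate than to the uncorrelated one.
 */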
     806             : 
     807             : /*
     808             :  * extract_nonindex_conditions
     809             :  *
     810             :  * Given a list of quals to be enforced in an indexscan, extract the ones that
     811             :  * will have to be applied as qpquals (ie, the index machinery won't handle
     812             :  * them).  Here we detect only whether a qual clause is directly redundant
     813             :  * with some indexclause.  If the index path is chosen for use, createplan.c
     814             :  * will try a bit harder to get rid of redundant qual conditions; specifically
     815             :  * it will see if quals can be proven to be implied by the indexquals.  But
     816             :  * it does not seem worth the cycles to try to factor that in at this stage,
     817             :  * since we're only trying to estimate qual eval costs.  Otherwise this must
     818             :  * match the logic in create_indexscan_plan().
     819             :  *
     820             :  * qual_clauses, and the result, are lists of RestrictInfos.
     821             :  * indexclauses is a list of IndexClauses.
     822             :  */
     823             : static List *
     824      956854 : extract_nonindex_conditions(List *qual_clauses, List *indexclauses)
     825             : {
     826      956854 :     List       *result = NIL;
     827             :     ListCell   *lc;
     828             : 
     829     1982252 :     foreach(lc, qual_clauses)
     830             :     {
     831     1025398 :         RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);
     832             : 
     833     1025398 :         if (rinfo->pseudoconstant)
     834       10070 :             continue;           /* we may drop pseudoconstants here */
     835     1015328 :         if (is_redundant_with_indexclauses(rinfo, indexclauses))
     836      602404 :             continue;           /* dup or derived from same EquivalenceClass */
     837             :         /* ... skip the predicate proof attempt createplan.c will try ... */
     838      412924 :         result = lappend(result, rinfo);
     839             :     }
     840      956854 :     return result;
     841             : }
     842             : 
     843             : /*
     844             :  * index_pages_fetched
     845             :  *    Estimate the number of pages actually fetched after accounting for
     846             :  *    cache effects.
     847             :  *
     848             :  * We use an approximation proposed by Mackert and Lohman, "Index Scans
     849             :  * Using a Finite LRU Buffer: A Validated I/O Model", ACM Transactions
     850             :  * on Database Systems, Vol. 14, No. 3, September 1989, Pages 401-424.
     851             :  * The Mackert and Lohman approximation is that the number of pages
     852             :  * fetched is
     853             :  *  PF =
     854             :  *      min(2TNs/(2T+Ns), T)            when T <= b
     855             :  *      2TNs/(2T+Ns)                    when T > b and Ns <= 2Tb/(2T-b)
     856             :  *      b + (Ns - 2Tb/(2T-b))*(T-b)/T   when T > b and Ns > 2Tb/(2T-b)
     857             :  * where
     858             :  *      T = # pages in table
     859             :  *      N = # tuples in table
     860             :  *      s = selectivity = fraction of table to be scanned
     861             :  *      b = # buffer pages available (we include kernel space here)
     862             :  *
     863             :  * We assume that effective_cache_size is the total number of buffer pages
     864             :  * available for the whole query, and pro-rate that space across all the
     865             :  * tables in the query and the index currently under consideration.  (This
     866             :  * ignores space needed for other indexes used by the query, but since we
     867             :  * don't know which indexes will get used, we can't estimate that very well;
     868             :  * and in any case counting all the tables may well be an overestimate, since
     869             :  * depending on the join plan not all the tables may be scanned concurrently.)
     870             :  *
     871             :  * The product Ns is the number of tuples fetched; we pass in that
     872             :  * product rather than calculating it here.  "pages" is the number of pages
     873             :  * in the object under consideration (either an index or a table).
     874             :  * "index_pages" is the amount to add to the total table space, which was
     875             :  * computed for us by make_one_rel.
     876             :  *
     877             :  * Caller is expected to have ensured that tuples_fetched is greater than zero
     878             :  * and rounded to integer (see clamp_row_est).  The result will likewise be
     879             :  * greater than zero and integral.
     880             :  */
     881             : double
     882     1130686 : index_pages_fetched(double tuples_fetched, BlockNumber pages,
     883             :                     double index_pages, PlannerInfo *root)
     884             : {
     885             :     double      pages_fetched;
     886             :     double      total_pages;
     887             :     double      T,
     888             :                 b;
     889             : 
     890             :     /* T is # pages in table, but don't allow it to be zero */
     891     1130686 :     T = (pages > 1) ? (double) pages : 1.0;
     892             : 
     893             :     /* Compute number of pages assumed to be competing for cache space */
     894     1130686 :     total_pages = root->total_table_pages + index_pages;
     895     1130686 :     total_pages = Max(total_pages, 1.0);
     896             :     Assert(T <= total_pages);
     897             : 
     898             :     /* b is pro-rated share of effective_cache_size */
     899     1130686 :     b = (double) effective_cache_size * T / total_pages;
     900             : 
     901             :     /* force it positive and integral */
     902     1130686 :     if (b <= 1.0)
     903           0 :         b = 1.0;
     904             :     else
     905     1130686 :         b = ceil(b);
     906             : 
     907             :     /* This part is the Mackert and Lohman formula */
     908     1130686 :     if (T <= b)
     909             :     {
     910     1130686 :         pages_fetched =
     911     1130686 :             (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
     912     1130686 :         if (pages_fetched >= T)
     913      659674 :             pages_fetched = T;
     914             :         else
     915      471012 :             pages_fetched = ceil(pages_fetched);
     916             :     }
     917             :     else
     918             :     {
     919             :         double      lim;
     920             : 
     921           0 :         lim = (2.0 * T * b) / (2.0 * T - b);
     922           0 :         if (tuples_fetched <= lim)
     923             :         {
     924           0 :             pages_fetched =
     925           0 :                 (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
     926             :         }
     927             :         else
     928             :         {
     929           0 :             pages_fetched =
     930           0 :                 b + (tuples_fetched - lim) * (T - b) / T;
     931             :         }
     932           0 :         pages_fetched = ceil(pages_fetched);
     933             :     }
     934     1130686 :     return pages_fetched;
     935             : }
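                      : 
                      : /*
                      :  * Illustrative sketch (not part of costsize.c): the three branches of
                      :  * the Mackert-Lohman formula above, spelled out as a standalone helper.
                      :  * T, Ns, and b carry the meanings defined in the comment; the worked
                      :  * numbers below are hypothetical.
                      :  */
                      : #include <math.h>
                      : 
                      : static double
                      : ml_pages_fetched(double T, double Ns, double b)
                      : {
                      :     if (T <= b)
                      :     {
                      :         /* the pro-rated cache share covers the whole table */
                      :         double      pf = (2.0 * T * Ns) / (2.0 * T + Ns);
                      : 
                      :         return (pf >= T) ? T : ceil(pf);
                      :     }
                      :     else
                      :     {
                      :         /* cache smaller than table: fetches beyond lim start missing */
                      :         double      lim = (2.0 * T * b) / (2.0 * T - b);
                      : 
                      :         if (Ns <= lim)
                      :             return ceil((2.0 * T * Ns) / (2.0 * T + Ns));
                      :         return ceil(b + (Ns - lim) * (T - b) / T);
                      :     }
                      : }
                      : 
                      : /*
                      :  * E.g. T = 1000, Ns = 1000, b = 1200 takes the first branch:
                      :  * PF = min(2*1000*1000 / (2*1000 + 1000), 1000) = ceil(666.7) = 667.
                      :  */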
     936             : 
     937             : /*
     938             :  * get_indexpath_pages
     939             :  *      Determine the total size of the indexes used in a bitmap index path.
     940             :  *
     941             :  * Note: if the same index is used more than once in a bitmap tree, we will
     942             :  * count it multiple times, which perhaps is the wrong thing ... but it's
     943             :  * not completely clear, and detecting duplicates is difficult, so ignore it
     944             :  * for now.
     945             :  */
     946             : static double
     947      190388 : get_indexpath_pages(Path *bitmapqual)
     948             : {
     949      190388 :     double      result = 0;
     950             :     ListCell   *l;
     951             : 
     952      190388 :     if (IsA(bitmapqual, BitmapAndPath))
     953             :     {
     954       22606 :         BitmapAndPath *apath = (BitmapAndPath *) bitmapqual;
     955             : 
     956       67818 :         foreach(l, apath->bitmapquals)
     957             :         {
     958       45212 :             result += get_indexpath_pages((Path *) lfirst(l));
     959             :         }
     960             :     }
     961      167782 :     else if (IsA(bitmapqual, BitmapOrPath))
     962             :     {
     963          70 :         BitmapOrPath *opath = (BitmapOrPath *) bitmapqual;
     964             : 
     965         222 :         foreach(l, opath->bitmapquals)
     966             :         {
     967         152 :             result += get_indexpath_pages((Path *) lfirst(l));
     968             :         }
     969             :     }
     970      167712 :     else if (IsA(bitmapqual, IndexPath))
     971             :     {
     972      167712 :         IndexPath  *ipath = (IndexPath *) bitmapqual;
     973             : 
     974      167712 :         result = (double) ipath->indexinfo->pages;
     975             :     }
     976             :     else
     977           0 :         elog(ERROR, "unrecognized node type: %d", nodeTag(bitmapqual));
     978             : 
     979      190388 :     return result;
     980             : }
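                      : 
                      : /*
                      :  * Illustrative example (hypothetical index sizes): for a tree
                      :  * BitmapAnd(idx_a, idx_b) with idx_a at 120 pages and idx_b at 80, the
                      :  * recursion above returns 200; if idx_a appeared under both arms it
                      :  * would contribute twice, per the note about duplicates.
                      :  */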
     981             : 
     982             : /*
     983             :  * cost_bitmap_heap_scan
     984             :  *    Determines and returns the cost of scanning a relation using a bitmap
     985             :  *    index-then-heap plan.
     986             :  *
     987             :  * 'baserel' is the relation to be scanned
     988             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
     989             :  * 'bitmapqual' is a tree of IndexPaths, BitmapAndPaths, and BitmapOrPaths
     990             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
     991             :  *      estimates of caching behavior
     992             :  *
     993             :  * Note: the component IndexPaths in bitmapqual should have been costed
     994             :  * using the same loop_count.
     995             :  */
     996             : void
     997      542954 : cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,
     998             :                       ParamPathInfo *param_info,
     999             :                       Path *bitmapqual, double loop_count)
    1000             : {
    1001      542954 :     Cost        startup_cost = 0;
    1002      542954 :     Cost        run_cost = 0;
    1003             :     Cost        indexTotalCost;
    1004             :     QualCost    qpqual_cost;
    1005             :     Cost        cpu_per_tuple;
    1006             :     Cost        cost_per_page;
    1007             :     Cost        cpu_run_cost;
    1008             :     double      tuples_fetched;
    1009             :     double      pages_fetched;
    1010             :     double      spc_seq_page_cost,
    1011             :                 spc_random_page_cost;
    1012             :     double      T;
    1013             : 
    1014             :     /* Should only be applied to base relations */
    1015             :     Assert(IsA(baserel, RelOptInfo));
    1016             :     Assert(baserel->relid > 0);
    1017             :     Assert(baserel->rtekind == RTE_RELATION);
    1018             : 
    1019             :     /* Mark the path with the correct row estimate */
    1020      542954 :     if (param_info)
    1021      232376 :         path->rows = param_info->ppi_rows;
    1022             :     else
    1023      310578 :         path->rows = baserel->rows;
    1024             : 
    1025      542954 :     pages_fetched = compute_bitmap_pages(root, baserel, bitmapqual,
    1026             :                                          loop_count, &indexTotalCost,
    1027             :                                          &tuples_fetched);
    1028             : 
    1029      542954 :     startup_cost += indexTotalCost;
    1030      542954 :     T = (baserel->pages > 1) ? (double) baserel->pages : 1.0;
    1031             : 
    1032             :     /* Fetch estimated page costs for tablespace containing table. */
    1033      542954 :     get_tablespace_page_costs(baserel->reltablespace,
    1034             :                               &spc_random_page_cost,
    1035             :                               &spc_seq_page_cost);
    1036             : 
    1037             :     /*
    1038             :      * For small numbers of pages we should charge spc_random_page_cost
    1039             :      * apiece, while if nearly all the table's pages are being read, it's more
    1040             :      * appropriate to charge spc_seq_page_cost apiece.  The effect is
    1041             :      * nonlinear, too. For lack of a better idea, interpolate like this to
    1042             :      * determine the cost per page.
    1043             :      */
    1044      542954 :     if (pages_fetched >= 2.0)
    1045      111960 :         cost_per_page = spc_random_page_cost -
    1046      111960 :             (spc_random_page_cost - spc_seq_page_cost)
    1047      111960 :             * sqrt(pages_fetched / T);
    1048             :     else
    1049      430994 :         cost_per_page = spc_random_page_cost;
    1050             : 
    1051      542954 :     run_cost += pages_fetched * cost_per_page;
    1052             : 
    1053             :     /*
    1054             :      * Estimate CPU costs per tuple.
    1055             :      *
    1056             :      * Often the indexquals don't need to be rechecked at each tuple ... but
    1057             :      * not always, especially not if there are enough tuples involved that the
    1058             :      * bitmaps become lossy.  For the moment, just assume they will be
    1059             :      * rechecked always.  This means we charge the full freight for all the
    1060             :      * scan clauses.
    1061             :      */
    1062      542954 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1063             : 
    1064      542954 :     startup_cost += qpqual_cost.startup;
    1065      542954 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1066      542954 :     cpu_run_cost = cpu_per_tuple * tuples_fetched;
    1067             : 
    1068             :     /* Adjust costing for parallelism, if used. */
    1069      542954 :     if (path->parallel_workers > 0)
    1070             :     {
    1071        4166 :         double      parallel_divisor = get_parallel_divisor(path);
    1072             : 
    1073             :         /* The CPU cost is divided among all the workers. */
    1074        4166 :         cpu_run_cost /= parallel_divisor;
    1075             : 
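                      :         /*
                      :          * The row count of a parallel path represents the tuples
                      :          * processed per worker, so scale the estimate down by the
                      :          * parallel divisor.
                      :          */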
    1076        4166 :         path->rows = clamp_row_est(path->rows / parallel_divisor);
    1077             :     }
    1078             : 
    1079             : 
    1080      542954 :     run_cost += cpu_run_cost;
    1081             : 
    1082             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1083      542954 :     startup_cost += path->pathtarget->cost.startup;
    1084      542954 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1085             : 
    1086      542954 :     path->disabled_nodes = enable_bitmapscan ? 0 : 1;
    1087      542954 :     path->startup_cost = startup_cost;
    1088      542954 :     path->total_cost = startup_cost + run_cost;
    1089      542954 : }
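                      : 
                      : /*
                      :  * Illustrative sketch (not part of costsize.c): the nonlinear
                      :  * interpolation above, with hypothetical inputs.  Under the default
                      :  * page costs (4.0 random, 1.0 sequential) and T = 10000 pages, fetching
                      :  * 100 pages costs 4.0 - 3.0*sqrt(0.01) = 3.70 apiece, while fetching
                      :  * all 10000 costs 4.0 - 3.0*sqrt(1.0) = 1.00 apiece.
                      :  */
                      : #include <math.h>
                      : 
                      : static double
                      : bitmap_cost_per_page(double random_cost, double seq_cost,
                      :                      double pages_fetched, double T)
                      : {
                      :     if (pages_fetched >= 2.0)
                      :         return random_cost -
                      :             (random_cost - seq_cost) * sqrt(pages_fetched / T);
                      :     return random_cost;         /* single page: pure random fetch */
                      : }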
    1090             : 
    1091             : /*
    1092             :  * cost_bitmap_tree_node
    1093             :  *      Extract cost and selectivity from a bitmap tree node (index/and/or)
    1094             :  */
    1095             : void
    1096     1005158 : cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)
    1097             : {
    1098     1005158 :     if (IsA(path, IndexPath))
    1099             :     {
    1100      951772 :         *cost = ((IndexPath *) path)->indextotalcost;
    1101      951772 :         *selec = ((IndexPath *) path)->indexselectivity;
    1102             : 
    1103             :         /*
    1104             :          * Charge a small amount per retrieved tuple to reflect the costs of
    1105             :          * manipulating the bitmap.  This is mostly to make sure that a bitmap
    1106             :          * scan doesn't look to be the same cost as an indexscan to retrieve a
    1107             :          * single tuple.
    1108             :          */
    1109      951772 :         *cost += 0.1 * cpu_operator_cost * path->rows;
    1110             :     }
    1111       53386 :     else if (IsA(path, BitmapAndPath))
    1112             :     {
    1113       49850 :         *cost = path->total_cost;
    1114       49850 :         *selec = ((BitmapAndPath *) path)->bitmapselectivity;
    1115             :     }
    1116        3536 :     else if (IsA(path, BitmapOrPath))
    1117             :     {
    1118        3536 :         *cost = path->total_cost;
    1119        3536 :         *selec = ((BitmapOrPath *) path)->bitmapselectivity;
    1120             :     }
    1121             :     else
    1122             :     {
    1123           0 :         elog(ERROR, "unrecognized node type: %d", nodeTag(path));
    1124             :         *cost = *selec = 0;     /* keep compiler quiet */
    1125             :     }
    1126     1005158 : }
    1127             : 
    1128             : /*
    1129             :  * cost_bitmap_and_node
    1130             :  *      Estimate the cost of a BitmapAnd node
    1131             :  *
    1132             :  * Note that this considers only the costs of index scanning and bitmap
    1133             :  * creation, not the eventual heap access.  In that sense the object isn't
    1134             :  * truly a Path, but it has enough path-like properties (costs in particular)
    1135             :  * to warrant treating it as one.  We don't bother to set the path rows field,
    1136             :  * however.
    1137             :  */
    1138             : void
    1139       49644 : cost_bitmap_and_node(BitmapAndPath *path, PlannerInfo *root)
    1140             : {
    1141             :     Cost        totalCost;
    1142             :     Selectivity selec;
    1143             :     ListCell   *l;
    1144             : 
    1145             :     /*
    1146             :      * We estimate AND selectivity on the assumption that the inputs are
    1147             :      * independent.  This is probably often wrong, but we don't have the info
    1148             :      * to do better.
    1149             :      *
    1150             :      * The runtime cost of the BitmapAnd itself is estimated at 100x
    1151             :      * cpu_operator_cost for each tbm_intersect needed.  Probably too small,
    1152             :      * definitely too simplistic?
    1153             :      */
    1154       49644 :     totalCost = 0.0;
    1155       49644 :     selec = 1.0;
    1156      148932 :     foreach(l, path->bitmapquals)
    1157             :     {
    1158       99288 :         Path       *subpath = (Path *) lfirst(l);
    1159             :         Cost        subCost;
    1160             :         Selectivity subselec;
    1161             : 
    1162       99288 :         cost_bitmap_tree_node(subpath, &subCost, &subselec);
    1163             : 
    1164       99288 :         selec *= subselec;
    1165             : 
    1166       99288 :         totalCost += subCost;
    1167       99288 :         if (l != list_head(path->bitmapquals))
    1168       49644 :             totalCost += 100.0 * cpu_operator_cost;
    1169             :     }
    1170       49644 :     path->bitmapselectivity = selec;
    1171       49644 :     path->path.rows = 0;     /* per above, not used */
    1172       49644 :     path->path.disabled_nodes = 0;
    1173       49644 :     path->path.startup_cost = totalCost;
    1174       49644 :     path->path.total_cost = totalCost;
    1175       49644 : }
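                      : 
                      : /*
                      :  * Illustrative sketch (hypothetical inputs): under the independence
                      :  * assumption above, two inputs of selectivity 0.1 and 0.2 combine to
                      :  * 0.02, and one tbm_intersect is charged for the second input.
                      :  */
                      : static void
                      : bitmap_and_example(void)
                      : {
                      :     const double cpu_operator_cost = 0.0025;    /* default GUC value */
                      :     const double sub_cost[] = {25.0, 40.0};
                      :     const double sub_selec[] = {0.1, 0.2};
                      :     double      total = 0.0;
                      :     double      selec = 1.0;
                      : 
                      :     for (int i = 0; i < 2; i++)
                      :     {
                      :         selec *= sub_selec[i];
                      :         total += sub_cost[i];
                      :         if (i > 0)              /* one intersect per extra input */
                      :             total += 100.0 * cpu_operator_cost;
                      :     }
                      :     /* selec is now 0.02 and total is 65.25 */
                      : }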
    1176             : 
    1177             : /*
    1178             :  * cost_bitmap_or_node
    1179             :  *      Estimate the cost of a BitmapOr node
    1180             :  *
    1181             :  * See comments for cost_bitmap_and_node.
    1182             :  */
    1183             : void
    1184        1016 : cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)
    1185             : {
    1186             :     Cost        totalCost;
    1187             :     Selectivity selec;
    1188             :     ListCell   *l;
    1189             : 
    1190             :     /*
    1191             :      * We estimate OR selectivity on the assumption that the inputs are
    1192             :      * non-overlapping, since that's often the case in "x IN (list)" type
    1193             :      * situations.  Of course, we clamp to 1.0 at the end.
    1194             :      *
    1195             :      * The runtime cost of the BitmapOr itself is estimated at 100x
    1196             :      * cpu_operator_cost for each tbm_union needed.  Probably too small,
    1197             :      * definitely too simplistic?  We are aware that the tbm_unions are
    1198             :      * optimized out when the inputs are BitmapIndexScans.
    1199             :      */
    1200        1016 :     totalCost = 0.0;
    1201        1016 :     selec = 0.0;
    1202        2850 :     foreach(l, path->bitmapquals)
    1203             :     {
    1204        1834 :         Path       *subpath = (Path *) lfirst(l);
    1205             :         Cost        subCost;
    1206             :         Selectivity subselec;
    1207             : 
    1208        1834 :         cost_bitmap_tree_node(subpath, &subCost, &subselec);
    1209             : 
    1210        1834 :         selec += subselec;
    1211             : 
    1212        1834 :         totalCost += subCost;
    1213        1834 :         if (l != list_head(path->bitmapquals) &&
    1214         818 :             !IsA(subpath, IndexPath))
    1215           0 :             totalCost += 100.0 * cpu_operator_cost;
    1216             :     }
    1217        1016 :     path->bitmapselectivity = Min(selec, 1.0);
     1218        1016 :     path->path.rows = 0;     /* per above, not used */
                      :     path->path.disabled_nodes = 0; /* as in cost_bitmap_and_node */
    1219        1016 :     path->path.startup_cost = totalCost;
    1220        1016 :     path->path.total_cost = totalCost;
    1221        1016 : }
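                      : 
                      : /*
                      :  * Illustrative example (hypothetical selectivities): with the
                      :  * non-overlap assumption above, OR selectivities simply add, so three
                      :  * arms of 0.4 each sum to 1.2 and are clamped to 1.0 by the Min().
                      :  */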
    1222             : 
    1223             : /*
    1224             :  * cost_tidscan
    1225             :  *    Determines and returns the cost of scanning a relation using TIDs.
    1226             :  *
    1227             :  * 'baserel' is the relation to be scanned
    1228             :  * 'tidquals' is the list of TID-checkable quals
    1229             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1230             :  */
    1231             : void
    1232         872 : cost_tidscan(Path *path, PlannerInfo *root,
    1233             :              RelOptInfo *baserel, List *tidquals, ParamPathInfo *param_info)
    1234             : {
    1235         872 :     Cost        startup_cost = 0;
    1236         872 :     Cost        run_cost = 0;
    1237             :     QualCost    qpqual_cost;
    1238             :     Cost        cpu_per_tuple;
    1239             :     QualCost    tid_qual_cost;
    1240             :     double      ntuples;
    1241             :     ListCell   *l;
    1242             :     double      spc_random_page_cost;
    1243             : 
    1244             :     /* Should only be applied to base relations */
    1245             :     Assert(baserel->relid > 0);
    1246             :     Assert(baserel->rtekind == RTE_RELATION);
    1247             :     Assert(tidquals != NIL);
    1248             : 
    1249             :     /* Mark the path with the correct row estimate */
    1250         872 :     if (param_info)
    1251         144 :         path->rows = param_info->ppi_rows;
    1252             :     else
    1253         728 :         path->rows = baserel->rows;
    1254             : 
    1255             :     /* Count how many tuples we expect to retrieve */
    1256         872 :     ntuples = 0;
    1257        1770 :     foreach(l, tidquals)
    1258             :     {
    1259         898 :         RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    1260         898 :         Expr       *qual = rinfo->clause;
    1261             : 
    1262             :         /*
    1263             :          * We must use a TID scan for CurrentOfExpr; in any other case, we
    1264             :          * should be generating a TID scan only if enable_tidscan=true. Also,
    1265             :          * if CurrentOfExpr is the qual, there should be only one.
    1266             :          */
    1267             :         Assert(enable_tidscan || IsA(qual, CurrentOfExpr));
    1268             :         Assert(list_length(tidquals) == 1 || !IsA(qual, CurrentOfExpr));
    1269             : 
    1270         898 :         if (IsA(qual, ScalarArrayOpExpr))
    1271             :         {
    1272             :             /* Each element of the array yields 1 tuple */
    1273          50 :             ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) qual;
    1274          50 :             Node       *arraynode = (Node *) lsecond(saop->args);
    1275             : 
    1276          50 :             ntuples += estimate_array_length(root, arraynode);
    1277             :         }
    1278         848 :         else if (IsA(qual, CurrentOfExpr))
    1279             :         {
    1280             :             /* CURRENT OF yields 1 tuple */
    1281         404 :             ntuples++;
    1282             :         }
    1283             :         else
    1284             :         {
    1285             :             /* It's just CTID = something, count 1 tuple */
    1286         444 :             ntuples++;
    1287             :         }
    1288             :     }
    1289             : 
    1290             :     /*
    1291             :      * The TID qual expressions will be computed once, any other baserestrict
    1292             :      * quals once per retrieved tuple.
    1293             :      */
    1294         872 :     cost_qual_eval(&tid_qual_cost, tidquals, root);
    1295             : 
    1296             :     /* fetch estimated page cost for tablespace containing table */
    1297         872 :     get_tablespace_page_costs(baserel->reltablespace,
    1298             :                               &spc_random_page_cost,
    1299             :                               NULL);
    1300             : 
    1301             :     /* disk costs --- assume each tuple on a different page */
    1302         872 :     run_cost += spc_random_page_cost * ntuples;
    1303             : 
    1304             :     /* Add scanning CPU costs */
    1305         872 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1306             : 
    1307             :     /* XXX currently we assume TID quals are a subset of qpquals */
    1308         872 :     startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    1309         872 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple -
    1310         872 :         tid_qual_cost.per_tuple;
    1311         872 :     run_cost += cpu_per_tuple * ntuples;
    1312             : 
    1313             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1314         872 :     startup_cost += path->pathtarget->cost.startup;
    1315         872 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1316             : 
    1317             :     /*
    1318             :      * There are assertions above verifying that we only reach this function
    1319             :      * either when enable_tidscan=true or when the TID scan is the only legal
    1320             :      * path, so it's safe to set disabled_nodes to zero here.
    1321             :      */
    1322         872 :     path->disabled_nodes = 0;
    1323         872 :     path->startup_cost = startup_cost;
    1324         872 :     path->total_cost = startup_cost + run_cost;
    1325         872 : }
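                      : 
                      : /*
                      :  * Illustrative example (hypothetical quals): for
                      :  * "ctid = '(0,1)' OR ctid = ANY(<3-element array>)" the loop above
                      :  * counts ntuples = 1 + 3 = 4, so the disk charge is
                      :  * 4 * spc_random_page_cost, each tuple being assumed to lie on a
                      :  * different page.
                      :  */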
    1326             : 
    1327             : /*
    1328             :  * cost_tidrangescan
    1329             :  *    Determines and sets the costs of scanning a relation using a range of
    1330             :  *    TIDs for 'path'
    1331             :  *
    1332             :  * 'baserel' is the relation to be scanned
    1333             :  * 'tidrangequals' is the list of TID-checkable range quals
    1334             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1335             :  */
    1336             : void
    1337        1944 : cost_tidrangescan(Path *path, PlannerInfo *root,
    1338             :                   RelOptInfo *baserel, List *tidrangequals,
    1339             :                   ParamPathInfo *param_info)
    1340             : {
    1341             :     Selectivity selectivity;
    1342             :     double      pages;
    1343        1944 :     Cost        startup_cost = 0;
    1344        1944 :     Cost        run_cost = 0;
    1345             :     QualCost    qpqual_cost;
    1346             :     Cost        cpu_per_tuple;
    1347             :     QualCost    tid_qual_cost;
    1348             :     double      ntuples;
    1349             :     double      nseqpages;
    1350             :     double      spc_random_page_cost;
    1351             :     double      spc_seq_page_cost;
    1352             : 
    1353             :     /* Should only be applied to base relations */
    1354             :     Assert(baserel->relid > 0);
    1355             :     Assert(baserel->rtekind == RTE_RELATION);
    1356             : 
    1357             :     /* Mark the path with the correct row estimate */
    1358        1944 :     if (param_info)
    1359           0 :         path->rows = param_info->ppi_rows;
    1360             :     else
    1361        1944 :         path->rows = baserel->rows;
    1362             : 
    1363             :     /* Count how many tuples and pages we expect to scan */
    1364        1944 :     selectivity = clauselist_selectivity(root, tidrangequals, baserel->relid,
    1365             :                                          JOIN_INNER, NULL);
    1366        1944 :     pages = ceil(selectivity * baserel->pages);
    1367             : 
    1368        1944 :     if (pages <= 0.0)
    1369          42 :         pages = 1.0;
    1370             : 
    1371             :     /*
    1372             :      * The first page in a range requires a random seek, but each subsequent
    1373             :      * page is just a normal sequential page read. NOTE: it's desirable for
    1374             :      * TID Range Scans to cost more than the equivalent Sequential Scans,
    1375             :      * because Seq Scans have some performance advantages such as scan
    1376             :      * synchronization and parallelizability, and we'd prefer one of them to
    1377             :      * be picked unless a TID Range Scan really is better.
    1378             :      */
    1379        1944 :     ntuples = selectivity * baserel->tuples;
    1380        1944 :     nseqpages = pages - 1.0;
    1381             : 
    1382             :     /*
    1383             :      * The TID qual expressions will be computed once, any other baserestrict
    1384             :      * quals once per retrieved tuple.
    1385             :      */
    1386        1944 :     cost_qual_eval(&tid_qual_cost, tidrangequals, root);
    1387             : 
    1388             :     /* fetch estimated page cost for tablespace containing table */
    1389        1944 :     get_tablespace_page_costs(baserel->reltablespace,
    1390             :                               &spc_random_page_cost,
    1391             :                               &spc_seq_page_cost);
    1392             : 
    1393             :     /* disk costs; 1 random page and the remainder as seq pages */
    1394        1944 :     run_cost += spc_random_page_cost + spc_seq_page_cost * nseqpages;
    1395             : 
    1396             :     /* Add scanning CPU costs */
    1397        1944 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1398             : 
    1399             :     /*
    1400             :      * XXX currently we assume TID quals are a subset of qpquals at this
    1401             :      * point; they will be removed (if possible) when we create the plan, so
    1402             :      * we subtract their cost from the total qpqual cost.  (If the TID quals
    1403             :      * can't be removed, this is a mistake and we're going to underestimate
    1404             :      * the CPU cost a bit.)
    1405             :      */
    1406        1944 :     startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    1407        1944 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple -
    1408        1944 :         tid_qual_cost.per_tuple;
    1409        1944 :     run_cost += cpu_per_tuple * ntuples;
    1410             : 
    1411             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1412        1944 :     startup_cost += path->pathtarget->cost.startup;
    1413        1944 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1414             : 
    1415             :     /* we should not generate this path type when enable_tidscan=false */
    1416             :     Assert(enable_tidscan);
    1417        1944 :     path->disabled_nodes = 0;
    1418        1944 :     path->startup_cost = startup_cost;
    1419        1944 :     path->total_cost = startup_cost + run_cost;
    1420        1944 : }
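                      : 
                      : /*
                      :  * Illustrative example (hypothetical numbers): a range qual selecting
                      :  * 5% of a 2000-page table scans ceil(0.05 * 2000) = 100 pages, charged
                      :  * as one random page plus 99 sequential pages; with the default 4.0 and
                      :  * 1.0 page costs, that is 4.0 + 99 * 1.0 = 103.0.
                      :  */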
    1421             : 
    1422             : /*
    1423             :  * cost_subqueryscan
    1424             :  *    Determines and returns the cost of scanning a subquery RTE.
    1425             :  *
    1426             :  * 'baserel' is the relation to be scanned
    1427             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1428             :  * 'trivial_pathtarget' is true if the pathtarget is believed to be trivial.
    1429             :  */
    1430             : void
    1431       55828 : cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root,
    1432             :                   RelOptInfo *baserel, ParamPathInfo *param_info,
    1433             :                   bool trivial_pathtarget)
    1434             : {
    1435             :     Cost        startup_cost;
    1436             :     Cost        run_cost;
    1437             :     List       *qpquals;
    1438             :     QualCost    qpqual_cost;
    1439             :     Cost        cpu_per_tuple;
    1440             : 
    1441             :     /* Should only be applied to base relations that are subqueries */
    1442             :     Assert(baserel->relid > 0);
    1443             :     Assert(baserel->rtekind == RTE_SUBQUERY);
    1444             : 
    1445             :     /*
    1446             :      * We compute the rowcount estimate as the subplan's estimate times the
    1447             :      * selectivity of relevant restriction clauses.  In simple cases this will
    1448             :      * come out the same as baserel->rows; but when dealing with parallelized
    1449             :      * paths we must do it like this to get the right answer.
    1450             :      */
    1451       55828 :     if (param_info)
    1452         606 :         qpquals = list_concat_copy(param_info->ppi_clauses,
    1453         606 :                                    baserel->baserestrictinfo);
    1454             :     else
    1455       55222 :         qpquals = baserel->baserestrictinfo;
    1456             : 
    1457       55828 :     path->path.rows = clamp_row_est(path->subpath->rows *
    1458       55828 :                                     clauselist_selectivity(root,
    1459             :                                                            qpquals,
    1460             :                                                            0,
    1461             :                                                            JOIN_INNER,
    1462             :                                                            NULL));
    1463             : 
    1464             :     /*
    1465             :      * Cost of path is cost of evaluating the subplan, plus cost of evaluating
    1466             :      * any restriction clauses and tlist that will be attached to the
    1467             :      * SubqueryScan node, plus cpu_tuple_cost to account for selection and
    1468             :      * projection overhead.
    1469             :      */
    1470       55828 :     path->path.disabled_nodes = path->subpath->disabled_nodes;
    1471       55828 :     path->path.startup_cost = path->subpath->startup_cost;
    1472       55828 :     path->path.total_cost = path->subpath->total_cost;
    1473             : 
    1474             :     /*
    1475             :      * However, if there are no relevant restriction clauses and the
    1476             :      * pathtarget is trivial, then we expect that setrefs.c will optimize away
    1477             :      * the SubqueryScan plan node altogether, so we should just make its cost
    1478             :      * and rowcount equal to the input path's.
    1479             :      *
    1480             :      * Note: there are some edge cases where createplan.c will apply a
    1481             :      * different targetlist to the SubqueryScan node, thus falsifying our
    1482             :      * current estimate of whether the target is trivial, and making the cost
    1483             :      * estimate (though not the rowcount) wrong.  It does not seem worth the
    1484             :      * extra complication to try to account for that exactly, especially since
    1485             :      * that behavior falsifies other cost estimates as well.
    1486             :      */
    1487       55828 :     if (qpquals == NIL && trivial_pathtarget)
    1488       27088 :         return;
    1489             : 
    1490       28740 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1491             : 
    1492       28740 :     startup_cost = qpqual_cost.startup;
    1493       28740 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1494       28740 :     run_cost = cpu_per_tuple * path->subpath->rows;
    1495             : 
    1496             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1497       28740 :     startup_cost += path->path.pathtarget->cost.startup;
    1498       28740 :     run_cost += path->path.pathtarget->cost.per_tuple * path->path.rows;
    1499             : 
    1500       28740 :     path->path.startup_cost += startup_cost;
    1501       28740 :     path->path.total_cost += startup_cost + run_cost;
    1502             : }
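                      : 
                      : /*
                      :  * Illustrative example (hypothetical estimates): a subplan estimated at
                      :  * 10000 rows under quals of combined selectivity 0.015 yields
                      :  * clamp_row_est(10000 * 0.015) = 150 rows, even when baserel->rows was
                      :  * derived from a differently parallelized input path.
                      :  */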
    1503             : 
    1504             : /*
    1505             :  * cost_functionscan
    1506             :  *    Determines and returns the cost of scanning a function RTE.
    1507             :  *
    1508             :  * 'baserel' is the relation to be scanned
    1509             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1510             :  */
    1511             : void
    1512       52258 : cost_functionscan(Path *path, PlannerInfo *root,
    1513             :                   RelOptInfo *baserel, ParamPathInfo *param_info)
    1514             : {
    1515       52258 :     Cost        startup_cost = 0;
    1516       52258 :     Cost        run_cost = 0;
    1517             :     QualCost    qpqual_cost;
    1518             :     Cost        cpu_per_tuple;
    1519             :     RangeTblEntry *rte;
    1520             :     QualCost    exprcost;
    1521             : 
    1522             :     /* Should only be applied to base relations that are functions */
    1523             :     Assert(baserel->relid > 0);
    1524       52258 :     rte = planner_rt_fetch(baserel->relid, root);
    1525             :     Assert(rte->rtekind == RTE_FUNCTION);
    1526             : 
    1527             :     /* Mark the path with the correct row estimate */
    1528       52258 :     if (param_info)
    1529        8582 :         path->rows = param_info->ppi_rows;
    1530             :     else
    1531       43676 :         path->rows = baserel->rows;
    1532             : 
    1533             :     /*
    1534             :      * Estimate costs of executing the function expression(s).
    1535             :      *
    1536             :      * Currently, nodeFunctionscan.c always executes the functions to
    1537             :      * completion before returning any rows, and caches the results in a
    1538             :      * tuplestore.  So the function eval cost is all startup cost, and per-row
    1539             :      * costs are minimal.
    1540             :      *
    1541             :      * XXX in principle we ought to charge tuplestore spill costs if the
    1542             :      * number of rows is large.  However, given how phony our rowcount
    1543             :      * estimates for functions tend to be, there's not a lot of point in that
    1544             :      * refinement right now.
    1545             :      */
    1546       52258 :     cost_qual_eval_node(&exprcost, (Node *) rte->functions, root);
    1547             : 
    1548       52258 :     startup_cost += exprcost.startup + exprcost.per_tuple;
    1549             : 
    1550             :     /* Add scanning CPU costs */
    1551       52258 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1552             : 
    1553       52258 :     startup_cost += qpqual_cost.startup;
    1554       52258 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1555       52258 :     run_cost += cpu_per_tuple * baserel->tuples;
    1556             : 
    1557             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1558       52258 :     startup_cost += path->pathtarget->cost.startup;
    1559       52258 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1560             : 
    1561       52258 :     path->disabled_nodes = 0;
    1562       52258 :     path->startup_cost = startup_cost;
    1563       52258 :     path->total_cost = startup_cost + run_cost;
    1564       52258 : }
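                      : 
                      : /*
                      :  * Practical consequence of the costing above: since the function runs
                      :  * to completion at startup and its result is cached in a tuplestore, a
                      :  * LIMIT on top of a function scan saves only the per-row CPU charges,
                      :  * not the function evaluation itself.
                      :  */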
    1565             : 
    1566             : /*
    1567             :  * cost_tablefuncscan
    1568             :  *    Determines and returns the cost of scanning a table function.
    1569             :  *
    1570             :  * 'baserel' is the relation to be scanned
    1571             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1572             :  */
    1573             : void
    1574         626 : cost_tablefuncscan(Path *path, PlannerInfo *root,
    1575             :                    RelOptInfo *baserel, ParamPathInfo *param_info)
    1576             : {
    1577         626 :     Cost        startup_cost = 0;
    1578         626 :     Cost        run_cost = 0;
    1579             :     QualCost    qpqual_cost;
    1580             :     Cost        cpu_per_tuple;
    1581             :     RangeTblEntry *rte;
    1582             :     QualCost    exprcost;
    1583             : 
    1584             :     /* Should only be applied to base relations that are functions */
    1585             :     Assert(baserel->relid > 0);
    1586         626 :     rte = planner_rt_fetch(baserel->relid, root);
    1587             :     Assert(rte->rtekind == RTE_TABLEFUNC);
    1588             : 
    1589             :     /* Mark the path with the correct row estimate */
    1590         626 :     if (param_info)
    1591         234 :         path->rows = param_info->ppi_rows;
    1592             :     else
    1593         392 :         path->rows = baserel->rows;
    1594             : 
    1595             :     /*
    1596             :      * Estimate costs of executing the table func expression(s).
    1597             :      *
    1598             :      * XXX in principle we ought to charge tuplestore spill costs if the
    1599             :      * number of rows is large.  However, given how phony our rowcount
    1600             :      * estimates for tablefuncs tend to be, there's not a lot of point in that
    1601             :      * refinement right now.
    1602             :      */
    1603         626 :     cost_qual_eval_node(&exprcost, (Node *) rte->tablefunc, root);
    1604             : 
    1605         626 :     startup_cost += exprcost.startup + exprcost.per_tuple;
    1606             : 
    1607             :     /* Add scanning CPU costs */
    1608         626 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1609             : 
    1610         626 :     startup_cost += qpqual_cost.startup;
    1611         626 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1612         626 :     run_cost += cpu_per_tuple * baserel->tuples;
    1613             : 
    1614             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1615         626 :     startup_cost += path->pathtarget->cost.startup;
    1616         626 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1617             : 
    1618         626 :     path->disabled_nodes = 0;
    1619         626 :     path->startup_cost = startup_cost;
    1620         626 :     path->total_cost = startup_cost + run_cost;
    1621         626 : }
    1622             : 
    1623             : /*
    1624             :  * cost_valuesscan
    1625             :  *    Determines and returns the cost of scanning a VALUES RTE.
    1626             :  *
    1627             :  * 'baserel' is the relation to be scanned
    1628             :  * 'param_info' is the ParamPathInfo if this is a parameterized path, else NULL
    1629             :  */
    1630             : void
    1631        8286 : cost_valuesscan(Path *path, PlannerInfo *root,
    1632             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
    1633             : {
    1634        8286 :     Cost        startup_cost = 0;
    1635        8286 :     Cost        run_cost = 0;
    1636             :     QualCost    qpqual_cost;
    1637             :     Cost        cpu_per_tuple;
    1638             : 
    1639             :     /* Should only be applied to base relations that are values lists */
    1640             :     Assert(baserel->relid > 0);
    1641             :     Assert(baserel->rtekind == RTE_VALUES);
    1642             : 
    1643             :     /* Mark the path with the correct row estimate */
    1644        8286 :     if (param_info)
    1645          66 :         path->rows = param_info->ppi_rows;
    1646             :     else
    1647        8220 :         path->rows = baserel->rows;
    1648             : 
    1649             :     /*
    1650             :      * For now, estimate list evaluation cost at one operator eval per list
    1651             :      * (probably pretty bogus, but is it worth being smarter?)
    1652             :      */
    1653        8286 :     cpu_per_tuple = cpu_operator_cost;
    1654             : 
    1655             :     /* Add scanning CPU costs */
    1656        8286 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1657             : 
    1658        8286 :     startup_cost += qpqual_cost.startup;
    1659        8286 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1660        8286 :     run_cost += cpu_per_tuple * baserel->tuples;
    1661             : 
    1662             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1663        8286 :     startup_cost += path->pathtarget->cost.startup;
    1664        8286 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1665             : 
    1666        8286 :     path->disabled_nodes = 0;
    1667        8286 :     path->startup_cost = startup_cost;
    1668        8286 :     path->total_cost = startup_cost + run_cost;
    1669        8286 : }
    1670             : 
    1671             : /*
    1672             :  * cost_ctescan
    1673             :  *    Determines and returns the cost of scanning a CTE RTE.
    1674             :  *
    1675             :  * Note: this is used for both self-reference and regular CTEs; the
    1676             :  * possible cost differences are below the threshold of what we could
    1677             :  * estimate accurately anyway.  Note that the costs of evaluating the
    1678             :  * referenced CTE query are added into the final plan as initplan costs,
    1679             :  * and should NOT be counted here.
    1680             :  */
    1681             : void
    1682        5194 : cost_ctescan(Path *path, PlannerInfo *root,
    1683             :              RelOptInfo *baserel, ParamPathInfo *param_info)
    1684             : {
    1685        5194 :     Cost        startup_cost = 0;
    1686        5194 :     Cost        run_cost = 0;
    1687             :     QualCost    qpqual_cost;
    1688             :     Cost        cpu_per_tuple;
    1689             : 
    1690             :     /* Should only be applied to base relations that are CTEs */
    1691             :     Assert(baserel->relid > 0);
    1692             :     Assert(baserel->rtekind == RTE_CTE);
    1693             : 
    1694             :     /* Mark the path with the correct row estimate */
    1695        5194 :     if (param_info)
    1696           0 :         path->rows = param_info->ppi_rows;
    1697             :     else
    1698        5194 :         path->rows = baserel->rows;
    1699             : 
    1700             :     /* Charge one CPU tuple cost per row for tuplestore manipulation */
    1701        5194 :     cpu_per_tuple = cpu_tuple_cost;
    1702             : 
    1703             :     /* Add scanning CPU costs */
    1704        5194 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1705             : 
    1706        5194 :     startup_cost += qpqual_cost.startup;
    1707        5194 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1708        5194 :     run_cost += cpu_per_tuple * baserel->tuples;
    1709             : 
    1710             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    1711        5194 :     startup_cost += path->pathtarget->cost.startup;
    1712        5194 :     run_cost += path->pathtarget->cost.per_tuple * path->rows;
    1713             : 
    1714        5194 :     path->disabled_nodes = 0;
    1715        5194 :     path->startup_cost = startup_cost;
    1716        5194 :     path->total_cost = startup_cost + run_cost;
    1717        5194 : }
    1718             : 
    1719             : /*
    1720             :  * cost_namedtuplestorescan
    1721             :  *    Determines and returns the cost of scanning a named tuplestore.
    1722             :  */
    1723             : void
    1724         474 : cost_namedtuplestorescan(Path *path, PlannerInfo *root,
    1725             :                          RelOptInfo *baserel, ParamPathInfo *param_info)
    1726             : {
    1727         474 :     Cost        startup_cost = 0;
    1728         474 :     Cost        run_cost = 0;
    1729             :     QualCost    qpqual_cost;
    1730             :     Cost        cpu_per_tuple;
    1731             : 
    1732             :     /* Should only be applied to base relations that are Tuplestores */
    1733             :     Assert(baserel->relid > 0);
    1734             :     Assert(baserel->rtekind == RTE_NAMEDTUPLESTORE);
    1735             : 
    1736             :     /* Mark the path with the correct row estimate */
    1737         474 :     if (param_info)
    1738           0 :         path->rows = param_info->ppi_rows;
    1739             :     else
    1740         474 :         path->rows = baserel->rows;
    1741             : 
    1742             :     /* Charge one CPU tuple cost per row for tuplestore manipulation */
    1743         474 :     cpu_per_tuple = cpu_tuple_cost;
    1744             : 
    1745             :     /* Add scanning CPU costs */
    1746         474 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1747             : 
    1748         474 :     startup_cost += qpqual_cost.startup;
    1749         474 :     cpu_per_tuple += cpu_tuple_cost + qpqual_cost.per_tuple;
    1750         474 :     run_cost += cpu_per_tuple * baserel->tuples;
    1751             : 
    1752         474 :     path->disabled_nodes = 0;
    1753         474 :     path->startup_cost = startup_cost;
    1754         474 :     path->total_cost = startup_cost + run_cost;
    1755         474 : }
    1756             : 
    1757             : /*
    1758             :  * cost_resultscan
    1759             :  *    Determines and returns the cost of scanning an RTE_RESULT relation.
    1760             :  */
    1761             : void
    1762        4268 : cost_resultscan(Path *path, PlannerInfo *root,
    1763             :                 RelOptInfo *baserel, ParamPathInfo *param_info)
    1764             : {
    1765        4268 :     Cost        startup_cost = 0;
    1766        4268 :     Cost        run_cost = 0;
    1767             :     QualCost    qpqual_cost;
    1768             :     Cost        cpu_per_tuple;
    1769             : 
    1770             :     /* Should only be applied to RTE_RESULT base relations */
    1771             :     Assert(baserel->relid > 0);
    1772             :     Assert(baserel->rtekind == RTE_RESULT);
    1773             : 
    1774             :     /* Mark the path with the correct row estimate */
    1775        4268 :     if (param_info)
    1776         156 :         path->rows = param_info->ppi_rows;
    1777             :     else
    1778        4112 :         path->rows = baserel->rows;
    1779             : 
    1780             :     /* We charge qual cost plus cpu_tuple_cost */
    1781        4268 :     get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);
    1782             : 
    1783        4268 :     startup_cost += qpqual_cost.startup;
    1784        4268 :     cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;
    1785        4268 :     run_cost += cpu_per_tuple * baserel->tuples;
    1786             : 
    1787        4268 :     path->disabled_nodes = 0;
    1788        4268 :     path->startup_cost = startup_cost;
    1789        4268 :     path->total_cost = startup_cost + run_cost;
    1790        4268 : }
    1791             : 
    1792             : /*
    1793             :  * cost_recursive_union
    1794             :  *    Determines and returns the cost of performing a recursive union,
    1795             :  *    and also the estimated output size.
    1796             :  *
    1797             :  * We are given Paths for the nonrecursive and recursive terms.
    1798             :  */
    1799             : void
    1800         928 : cost_recursive_union(Path *runion, Path *nrterm, Path *rterm)
    1801             : {
    1802             :     Cost        startup_cost;
    1803             :     Cost        total_cost;
    1804             :     double      total_rows;
    1805             : 
    1806             :     /* We probably have decent estimates for the non-recursive term */
    1807         928 :     startup_cost = nrterm->startup_cost;
    1808         928 :     total_cost = nrterm->total_cost;
    1809         928 :     total_rows = nrterm->rows;
    1810             : 
    1811             :     /*
    1812             :      * We arbitrarily assume that about 10 recursive iterations will be
    1813             :      * needed, and that we've managed to get a good fix on the cost and output
    1814             :      * size of each one of them.  These are mighty shaky assumptions but it's
    1815             :      * hard to see how to do better.
    1816             :      */
    1817         928 :     total_cost += 10 * rterm->total_cost;
    1818         928 :     total_rows += 10 * rterm->rows;
    1819             : 
    1820             :     /*
    1821             :      * Also charge cpu_tuple_cost per row to account for the costs of
    1822             :      * manipulating the tuplestores.  (We don't worry about possible
    1823             :      * spill-to-disk costs.)
    1824             :      */
    1825         928 :     total_cost += cpu_tuple_cost * total_rows;
    1826             : 
    1827         928 :     runion->disabled_nodes = nrterm->disabled_nodes + rterm->disabled_nodes;
    1828         928 :     runion->startup_cost = startup_cost;
    1829         928 :     runion->total_cost = total_cost;
    1830         928 :     runion->rows = total_rows;
    1831         928 :     runion->pathtarget->width = Max(nrterm->pathtarget->width,
    1832             :                                     rterm->pathtarget->width);
    1833         928 : }
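                      : 
                      : /*
                      :  * Illustrative example (hypothetical term estimates): with nrterm at
                      :  * 100 rows / cost 20 and rterm at 40 rows / cost 8, the assumptions
                      :  * above give total_rows = 100 + 10*40 = 500 and, at the default
                      :  * cpu_tuple_cost of 0.01, total_cost = 20 + 10*8 + 0.01*500 = 105.
                      :  */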
    1834             : 
    1835             : /*
    1836             :  * cost_tuplesort
    1837             :  *    Determines and returns the cost of sorting a relation using tuplesort,
    1838             :  *    not including the cost of reading the input data.
    1839             :  *
    1840             :  * If the total volume of data to sort is less than sort_mem, we will do
    1841             :  * an in-memory sort, which requires no I/O and about t*log2(t) tuple
    1842             :  * comparisons for t tuples.
    1843             :  *
    1844             :  * If the total volume exceeds sort_mem, we switch to a tape-style merge
    1845             :  * algorithm.  There will still be about t*log2(t) tuple comparisons in
    1846             :  * total, but we will also need to write and read each tuple once per
    1847             :  * merge pass.  We expect about ceil(logM(r)) merge passes where r is the
    1848             :  * number of initial runs formed and M is the merge order used by tuplesort.c.
    1849             :  * Since the average initial run should be about sort_mem, we have
     1850             :  *      disk traffic = 2 * relsize * ceil(logM(relsize / sort_mem))
    1851             :  *      cpu = comparison_cost * t * log2(t)
    1852             :  *
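                      :  * For example (hypothetical sizes): relsize = 1 GB with sort_mem = 64 MB
                      :  * yields about r = 16 initial runs; any merge order M >= 16 then gives a
                      :  * single merge pass, so the disk traffic is about 2 * relsize, i.e. each
                      :  * tuple is written once and read once.
                      :  *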
    1853             :  * If the sort is bounded (i.e., only the first k result tuples are needed)
    1854             :  * and k tuples can fit into sort_mem, we use a heap method that keeps only
    1855             :  * k tuples in the heap; this will require about t*log2(k) tuple comparisons.
    1856             :  *
    1857             :  * The disk traffic is assumed to be 3/4ths sequential and 1/4th random
    1858             :  * accesses (XXX can't we refine that guess?)
    1859             :  *
    1860             :  * By default, we charge two operator evals per tuple comparison, which should
    1861             :  * be in the right ballpark in most cases.  The caller can tweak this by
    1862             :  * specifying nonzero comparison_cost; typically that's used for any extra
    1863             :  * work that has to be done to prepare the inputs to the comparison operators.
    1864             :  *
    1865             :  * 'tuples' is the number of tuples in the relation
    1866             :  * 'width' is the average tuple width in bytes
    1867             :  * 'comparison_cost' is the extra cost per comparison, if any
    1868             :  * 'sort_mem' is the number of kilobytes of work memory allowed for the sort
    1869             :  * 'limit_tuples' is the bound on the number of output tuples; -1 if no bound
    1870             :  */
    1871             : static void
    1872     2080776 : cost_tuplesort(Cost *startup_cost, Cost *run_cost,
    1873             :                double tuples, int width,
    1874             :                Cost comparison_cost, int sort_mem,
    1875             :                double limit_tuples)
    1876             : {
    1877     2080776 :     double      input_bytes = relation_byte_size(tuples, width);
    1878             :     double      output_bytes;
    1879             :     double      output_tuples;
    1880     2080776 :     int64       sort_mem_bytes = sort_mem * (int64) 1024;
    1881             : 
    1882             :     /*
    1883             :      * We want to be sure the cost of a sort is never estimated as zero, even
    1884             :      * if passed-in tuple count is zero.  Besides, mustn't do log(0)...
    1885             :      */
    1886     2080776 :     if (tuples < 2.0)
    1887      548998 :         tuples = 2.0;
    1888             : 
    1889             :     /* Include the default cost-per-comparison */
    1890     2080776 :     comparison_cost += 2.0 * cpu_operator_cost;
    1891             : 
    1892             :     /* Do we have a useful LIMIT? */
    1893     2080776 :     if (limit_tuples > 0 && limit_tuples < tuples)
    1894             :     {
    1895        1830 :         output_tuples = limit_tuples;
    1896        1830 :         output_bytes = relation_byte_size(output_tuples, width);
    1897             :     }
    1898             :     else
    1899             :     {
    1900     2078946 :         output_tuples = tuples;
    1901     2078946 :         output_bytes = input_bytes;
    1902             :     }
    1903             : 
    1904     2080776 :     if (output_bytes > sort_mem_bytes)
    1905             :     {
    1906             :         /*
    1907             :          * We'll have to use a disk-based sort of all the tuples
    1908             :          */
    1909       18448 :         double      npages = ceil(input_bytes / BLCKSZ);
    1910       18448 :         double      nruns = input_bytes / sort_mem_bytes;
    1911       18448 :         double      mergeorder = tuplesort_merge_order(sort_mem_bytes);
    1912             :         double      log_runs;
    1913             :         double      npageaccesses;
    1914             : 
    1915             :         /*
    1916             :          * CPU costs
    1917             :          *
    1918             :          * Assume about N log2 N comparisons
    1919             :          */
    1920       18448 :         *startup_cost = comparison_cost * tuples * LOG2(tuples);
    1921             : 
    1922             :         /* Disk costs */
    1923             : 
    1924             :         /* Compute logM(r) as log(r) / log(M) */
    1925       18448 :         if (nruns > mergeorder)
    1926        4676 :             log_runs = ceil(log(nruns) / log(mergeorder));
    1927             :         else
    1928       13772 :             log_runs = 1.0;
    1929       18448 :         npageaccesses = 2.0 * npages * log_runs;
    1930             :         /* Assume 3/4ths of accesses are sequential, 1/4th are not */
    1931       18448 :         *startup_cost += npageaccesses *
    1932       18448 :             (seq_page_cost * 0.75 + random_page_cost * 0.25);
    1933             :     }
    1934     2062328 :     else if (tuples > 2 * output_tuples || input_bytes > sort_mem_bytes)
    1935             :     {
    1936             :         /*
    1937             :          * We'll use a bounded heap-sort keeping just K tuples in memory, for
    1938             :          * a total number of tuple comparisons of N log2 K; but the constant
    1939             :          * factor is a bit higher than for quicksort.  Tweak it so that the
    1940             :          * cost curve is continuous at the crossover point.
    1941             :          */
    1942        1352 :         *startup_cost = comparison_cost * tuples * LOG2(2.0 * output_tuples);
    1943             :     }
    1944             :     else
    1945             :     {
    1946             :         /* We'll use plain quicksort on all the input tuples */
    1947     2060976 :         *startup_cost = comparison_cost * tuples * LOG2(tuples);
    1948             :     }
    1949             : 
    1950             :     /*
    1951             :      * Also charge a small amount (arbitrarily set equal to operator cost) per
    1952             :      * extracted tuple.  We don't charge cpu_tuple_cost because a Sort node
    1953             :      * doesn't do qual-checking or projection, so it has less overhead than
    1954             :      * most plan nodes.  Note it's correct to use tuples not output_tuples
    1955             :      * here --- the upper LIMIT will pro-rate the run cost so we'd be double
    1956             :      * counting the LIMIT otherwise.
    1957             :      */
    1958     2080776 :     *run_cost = cpu_operator_cost * tuples;
    1959     2080776 : }
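                     : 
                     : /*
                     :  * A minimal standalone sketch of the merge branch above, assuming the
                     :  * default seq_page_cost = 1.0, random_page_cost = 4.0 and
                     :  * cpu_operator_cost = 0.0025, a block size of 8192 bytes, and ignoring
                     :  * the per-tuple overhead that relation_byte_size() would add.  The
                     :  * function name and argument convention are illustrative only, not
                     :  * planner API.
                     :  */
                     : #include <math.h>
                     : 
                     : static double
                     : sketch_external_sort_cost(double tuples, double width_bytes,
                     :                           double sort_mem_bytes, double merge_order)
                     : {
                     :     double      input_bytes = tuples * width_bytes;
                     :     double      npages = ceil(input_bytes / 8192.0);
                     :     double      nruns = input_bytes / sort_mem_bytes;
                     :     double      comparison_cost = 2.0 * 0.0025;    /* two operator evals */
                     :     double      log_runs;
                     :     double      startup_cost;
                     : 
                     :     /* CPU: about t * log2(t) tuple comparisons */
                     :     startup_cost = comparison_cost * tuples * (log(tuples) / log(2.0));
                     : 
                     :     /* number of merge passes, ceil(logM(r)), but at least one pass */
                     :     if (nruns > merge_order)
                     :         log_runs = ceil(log(nruns) / log(merge_order));
                     :     else
                     :         log_runs = 1.0;
                     : 
                     :     /* each pass writes and reads every page: 3/4 sequential, 1/4 random */
                     :     startup_cost += 2.0 * npages * log_runs *
                     :         (1.0 * 0.75 + 4.0 * 0.25);
                     : 
                     :     return startup_cost;
                     : }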
    1960             : 
    1961             : /*
    1962             :  * cost_incremental_sort
    1963             :  *  Determines and returns the cost of sorting a relation incrementally, when
    1964             :  *  the input path is presorted by a prefix of the pathkeys.
    1965             :  *
    1966             :  * 'presorted_keys' is the number of leading pathkeys by which the input path
    1967             :  * is sorted.
    1968             :  *
    1969             :  * We estimate the number of groups into which the relation is divided by the
    1970             :  * leading pathkeys, and then calculate the cost of sorting a single group
    1971             :  * with tuplesort using cost_tuplesort().
    1972             :  */
    1973             : void
    1974       12124 : cost_incremental_sort(Path *path,
    1975             :                       PlannerInfo *root, List *pathkeys, int presorted_keys,
    1976             :                       int input_disabled_nodes,
    1977             :                       Cost input_startup_cost, Cost input_total_cost,
    1978             :                       double input_tuples, int width, Cost comparison_cost, int sort_mem,
    1979             :                       double limit_tuples)
    1980             : {
    1981             :     Cost        startup_cost,
    1982             :                 run_cost,
    1983       12124 :                 input_run_cost = input_total_cost - input_startup_cost;
    1984             :     double      group_tuples,
    1985             :                 input_groups;
    1986             :     Cost        group_startup_cost,
    1987             :                 group_run_cost,
    1988             :                 group_input_run_cost;
    1989       12124 :     List       *presortedExprs = NIL;
    1990             :     ListCell   *l;
    1991       12124 :     bool        unknown_varno = false;
    1992             : 
    1993             :     Assert(presorted_keys > 0 && presorted_keys < list_length(pathkeys));
    1994             : 
    1995             :     /*
    1996             :      * We want to be sure the cost of a sort is never estimated as zero, even
    1997             :      * if passed-in tuple count is zero.  Besides, mustn't do log(0)...
    1998             :      */
    1999       12124 :     if (input_tuples < 2.0)
    2000        6798 :         input_tuples = 2.0;
    2001             : 
    2002             :     /* Default estimate of number of groups, capped to one group per row. */
    2003       12124 :     input_groups = Min(input_tuples, DEFAULT_NUM_DISTINCT);
    2004             : 
    2005             :     /*
    2006             :      * Extract the presorted keys as a list of expressions.
    2007             :      *
    2008             :      * We need to be careful about Vars with "varno 0", which might have been
    2009             :      * introduced by generate_append_tlist; such Vars would confuse
    2010             :      * estimate_num_groups (in fact it'd fail for such expressions). See
    2011             :      * recurse_set_operations, which has to deal with the same issue.
    2012             :      *
    2013             :      * Unlike recurse_set_operations, we can't access the original target
    2014             :      * list here, and even if we could, it's not clear how useful it would be
    2015             :      * for a set operation combining multiple tables. So we simply detect
    2016             :      * whether there are any expressions with "varno 0" and fall back on the
    2017             :      * default DEFAULT_NUM_DISTINCT in that case.
    2018             :      *
    2019             :      * We might also use either 1.0 (a single group) or input_tuples (each row
    2020             :      * being a separate group), pretty much the worst and best case for
    2021             :      * incremental sort. But those are extreme cases and using something in
    2022             :      * between seems reasonable. Furthermore, generate_append_tlist is used
    2023             :      * for set operations, which are likely to produce mostly unique output
    2024             :      * anyway; from that standpoint DEFAULT_NUM_DISTINCT is a defensive
    2025             :      * choice that still keeps the startup cost low.
    2026             :      */
    2027       12220 :     foreach(l, pathkeys)
    2028             :     {
    2029       12220 :         PathKey    *key = (PathKey *) lfirst(l);
    2030       12220 :         EquivalenceMember *member = (EquivalenceMember *)
    2031       12220 :             linitial(key->pk_eclass->ec_members);
    2032             : 
    2033             :         /*
    2034             :          * Check if the expression contains Var with "varno 0" so that we
    2035             :          * don't call estimate_num_groups in that case.
    2036             :          */
    2037       12220 :         if (bms_is_member(0, pull_varnos(root, (Node *) member->em_expr)))
    2038             :         {
    2039          10 :             unknown_varno = true;
    2040          10 :             break;
    2041             :         }
    2042             : 
    2043             :         /* expression not containing any Vars with "varno 0" */
    2044       12210 :         presortedExprs = lappend(presortedExprs, member->em_expr);
    2045             : 
    2046       12210 :         if (foreach_current_index(l) + 1 >= presorted_keys)
    2047       12114 :             break;
    2048             :     }
    2049             : 
    2050             :     /* Estimate the number of groups with equal presorted keys. */
    2051       12124 :     if (!unknown_varno)
    2052       12114 :         input_groups = estimate_num_groups(root, presortedExprs, input_tuples,
    2053             :                                            NULL, NULL);
    2054             : 
    2055       12124 :     group_tuples = input_tuples / input_groups;
    2056       12124 :     group_input_run_cost = input_run_cost / input_groups;
    2057             : 
    2058             :     /*
    2059             :      * Estimate the average cost of sorting of one group where presorted keys
    2060             :      * are equal.
    2061             :      */
    2062       12124 :     cost_tuplesort(&group_startup_cost, &group_run_cost,
    2063             :                    group_tuples, width, comparison_cost, sort_mem,
    2064             :                    limit_tuples);
    2065             : 
    2066             :     /*
    2067             :      * Startup cost of incremental sort is the startup cost of its first group
    2068             :      * plus the cost of its input.
    2069             :      */
    2070       12124 :     startup_cost = group_startup_cost + input_startup_cost +
    2071             :         group_input_run_cost;
    2072             : 
    2073             :     /*
    2074             :      * Once we have started producing tuples from the first group, the cost
    2075             :      * of producing all the tuples is the cost to finish processing this
    2076             :      * group, plus the total cost to process the remaining groups, plus the
    2077             :      * remaining cost of the input.
    2078             :      */
    2079       12124 :     run_cost = group_run_cost + (group_run_cost + group_startup_cost) *
    2080       12124 :         (input_groups - 1) + group_input_run_cost * (input_groups - 1);
    2081             : 
    2082             :     /*
    2083             :      * Incremental sort adds some overhead by itself. Firstly, it has to
    2084             :      * detect the sort groups. This is roughly equal to one extra copy and
    2085             :      * comparison per tuple.
    2086             :      */
    2087       12124 :     run_cost += (cpu_tuple_cost + comparison_cost) * input_tuples;
    2088             : 
    2089             :     /*
    2090             :      * Additionally, we charge double cpu_tuple_cost for each input group to
    2091             :      * account for the tuplesort_reset that's performed after each group.
    2092             :      */
    2093       12124 :     run_cost += 2.0 * cpu_tuple_cost * input_groups;
    2094             : 
    2095       12124 :     path->rows = input_tuples;
    2096             : 
    2097             :     /* should not generate these paths when enable_incremental_sort=false */
    2098             :     Assert(enable_incremental_sort);
    2099       12124 :     path->disabled_nodes = input_disabled_nodes;
    2100             : 
    2101       12124 :     path->startup_cost = startup_cost;
    2102       12124 :     path->total_cost = startup_cost + run_cost;
    2103       12124 : }
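                     : 
                     : /*
                     :  * A minimal sketch of the cost decomposition above, assuming the
                     :  * per-group costs have already been obtained (as from cost_tuplesort)
                     :  * and taking the default cpu_tuple_cost = 0.01 as given; the function
                     :  * name is illustrative only.
                     :  */
                     : static void
                     : sketch_incremental_sort_cost(double input_groups, double input_tuples,
                     :                              double input_startup_cost,
                     :                              double group_startup_cost,
                     :                              double group_run_cost,
                     :                              double group_input_run_cost,
                     :                              double comparison_cost,
                     :                              double *startup_cost, double *total_cost)
                     : {
                     :     double      cpu_tuple_cost = 0.01;  /* assumed default */
                     :     double      run_cost;
                     : 
                     :     /* read and sort the first group before emitting any tuple */
                     :     *startup_cost = group_startup_cost + input_startup_cost +
                     :         group_input_run_cost;
                     : 
                     :     /* finish the first group, then fully process the remaining groups */
                     :     run_cost = group_run_cost +
                     :         (group_run_cost + group_startup_cost) * (input_groups - 1) +
                     :         group_input_run_cost * (input_groups - 1);
                     : 
                     :     /* group-boundary detection: one extra copy and comparison per tuple */
                     :     run_cost += (cpu_tuple_cost + comparison_cost) * input_tuples;
                     : 
                     :     /* tuplesort_reset performed after each group */
                     :     run_cost += 2.0 * cpu_tuple_cost * input_groups;
                     : 
                     :     *total_cost = *startup_cost + run_cost;
                     : }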
    2104             : 
    2105             : /*
    2106             :  * cost_sort
    2107             :  *    Determines and returns the cost of sorting a relation, including
    2108             :  *    the cost of reading the input data.
    2109             :  *
    2110             :  * NOTE: some callers currently pass NIL for pathkeys because they
    2111             :  * can't conveniently supply the sort keys.  Since this routine doesn't
    2112             :  * currently do anything with pathkeys anyway, that doesn't matter...
    2113             :  * but if it ever does, it should react gracefully to lack of key data.
    2114             :  * (Actually, the thing we'd most likely be interested in is just the number
    2115             :  * of sort keys, which all callers *could* supply.)
    2116             :  */
    2117             : void
    2118     2068652 : cost_sort(Path *path, PlannerInfo *root,
    2119             :           List *pathkeys, int input_disabled_nodes,
    2120             :           Cost input_cost, double tuples, int width,
    2121             :           Cost comparison_cost, int sort_mem,
    2122             :           double limit_tuples)
    2123             : 
    2124             : {
    2125             :     Cost        startup_cost;
    2126             :     Cost        run_cost;
    2127             : 
    2128     2068652 :     cost_tuplesort(&startup_cost, &run_cost,
    2129             :                    tuples, width,
    2130             :                    comparison_cost, sort_mem,
    2131             :                    limit_tuples);
    2132             : 
    2133     2068652 :     startup_cost += input_cost;
    2134             : 
    2135     2068652 :     path->rows = tuples;
    2136     2068652 :     path->disabled_nodes = input_disabled_nodes + (enable_sort ? 0 : 1);
    2137     2068652 :     path->startup_cost = startup_cost;
    2138     2068652 :     path->total_cost = startup_cost + run_cost;
    2139     2068652 : }
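                     : 
                     : /*
                     :  * Worked example, assuming the default cpu_operator_cost = 0.0025 and
                     :  * input that fits in sort_mem, so the plain quicksort branch of
                     :  * cost_tuplesort applies: sorting 1000 tuples adds about
                     :  *
                     :  *      startup_cost = (2 * 0.0025) * 1000 * log2(1000) ~= 49.8
                     :  *      run_cost     = 0.0025 * 1000                     =  2.5
                     :  *
                     :  * on top of the input cost, independently of tuple width.
                     :  */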
    2140             : 
    2141             : /*
    2142             :  * append_nonpartial_cost
    2143             :  *    Estimate the cost of the non-partial paths in a Parallel Append.
    2144             :  *    The non-partial paths are assumed to be the first "numpaths" paths
    2145             :  *    from the subpaths list, and to be in order of decreasing cost.
    2146             :  */
    2147             : static Cost
    2148       25564 : append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
    2149             : {
    2150             :     Cost       *costarr;
    2151             :     int         arrlen;
    2152             :     ListCell   *l;
    2153             :     ListCell   *cell;
    2154             :     int         path_index;
    2155             :     int         min_index;
    2156             :     int         max_index;
    2157             : 
    2158       25564 :     if (numpaths == 0)
    2159       20598 :         return 0;
    2160             : 
    2161             :     /*
    2162             :      * Array length is number of workers or number of relevant paths,
    2163             :      * whichever is less.
    2164             :      */
    2165        4966 :     arrlen = Min(parallel_workers, numpaths);
    2166        4966 :     costarr = (Cost *) palloc(sizeof(Cost) * arrlen);
    2167             : 
    2168             :     /* The first few paths will each be claimed by a different worker. */
    2169        4966 :     path_index = 0;
    2170       14424 :     foreach(cell, subpaths)
    2171             :     {
    2172       10756 :         Path       *subpath = (Path *) lfirst(cell);
    2173             : 
    2174       10756 :         if (path_index == arrlen)
    2175        1298 :             break;
    2176        9458 :         costarr[path_index++] = subpath->total_cost;
    2177             :     }
    2178             : 
    2179             :     /*
    2180             :      * Since subpaths are sorted by decreasing cost, the last one will have
    2181             :      * the minimum cost.
    2182             :      */
    2183        4966 :     min_index = arrlen - 1;
    2184             : 
    2185             :     /*
    2186             :      * For each of the remaining subpaths, add its cost to the array element
    2187             :      * with minimum cost.
    2188             :      */
    2189        9674 :     for_each_cell(l, subpaths, cell)
    2190             :     {
    2191        5254 :         Path       *subpath = (Path *) lfirst(l);
    2192             : 
    2193             :         /* Consider only the non-partial paths */
    2194        5254 :         if (path_index++ == numpaths)
    2195         546 :             break;
    2196             : 
    2197        4708 :         costarr[min_index] += subpath->total_cost;
    2198             : 
    2199             :         /* Find the index of the new minimum-cost slot */
    2200        4708 :         min_index = 0;
    2201       14160 :         for (int i = 0; i < arrlen; i++)
    2202             :         {
    2203        9452 :             if (costarr[i] < costarr[min_index])
    2204        1514 :                 min_index = i;
    2205             :         }
    2206             :     }
    2207             : 
    2208             :     /* Return the highest cost from the array */
    2209        4966 :     max_index = 0;
    2210       14424 :     for (int i = 0; i < arrlen; i++)
    2211             :     {
    2212        9458 :         if (costarr[i] > costarr[max_index])
    2213         414 :             max_index = i;
    2214             :     }
    2215             : 
    2216        4966 :     return costarr[max_index];
    2217             : }
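                     : 
                     : /*
                     :  * Worked example of the scheme above (illustrative numbers only):
                     :  * parallel_workers = 3 and five non-partial subpaths with costs
                     :  * {20, 16, 10, 8, 5}, sorted by decreasing cost.
                     :  *
                     :  *      arrlen = Min(3, 5) = 3, so the first three paths fill the
                     :  *      array:                           costarr = {20, 16, 10}
                     :  *      cost 8 goes to the minimum slot: costarr = {20, 16, 18}
                     :  *      cost 5 goes to the new minimum:  costarr = {20, 21, 18}
                     :  *
                     :  * The function returns the maximum, 21: the worker that ends up with
                     :  * paths {16, 5} finishes last, so it bounds the non-partial cost.
                     :  */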
    2218             : 
    2219             : /*
    2220             :  * cost_append
    2221             :  *    Determines and returns the cost of an Append node.
    2222             :  */
    2223             : void
    2224       69582 : cost_append(AppendPath *apath, PlannerInfo *root)
    2225             : {
    2226             :     ListCell   *l;
    2227             : 
    2228       69582 :     apath->path.disabled_nodes = 0;
    2229       69582 :     apath->path.startup_cost = 0;
    2230       69582 :     apath->path.total_cost = 0;
    2231       69582 :     apath->path.rows = 0;
    2232             : 
    2233       69582 :     if (apath->subpaths == NIL)
    2234        2012 :         return;
    2235             : 
    2236       67570 :     if (!apath->path.parallel_aware)
    2237             :     {
    2238       42006 :         List       *pathkeys = apath->path.pathkeys;
    2239             : 
    2240       42006 :         if (pathkeys == NIL)
    2241             :         {
    2242       39862 :             Path       *firstsubpath = (Path *) linitial(apath->subpaths);
    2243             : 
    2244             :             /*
    2245             :              * For an unordered, non-parallel-aware Append we take the startup
    2246             :              * cost as the startup cost of the first subpath.
    2247             :              */
    2248       39862 :             apath->path.startup_cost = firstsubpath->startup_cost;
    2249             : 
    2250             :             /*
    2251             :              * Compute rows, number of disabled nodes, and total cost as sums
    2252             :              * of underlying subplan values.
    2253             :              */
    2254      155712 :             foreach(l, apath->subpaths)
    2255             :             {
    2256      115850 :                 Path       *subpath = (Path *) lfirst(l);
    2257             : 
    2258      115850 :                 apath->path.rows += subpath->rows;
    2259      115850 :                 apath->path.disabled_nodes += subpath->disabled_nodes;
    2260      115850 :                 apath->path.total_cost += subpath->total_cost;
    2261             :             }
    2262             :         }
    2263             :         else
    2264             :         {
    2265             :             /*
    2266             :              * For an ordered, non-parallel-aware Append we take the startup
    2267             :              * cost as the sum of the subpath startup costs.  This ensures
    2268             :              * that we don't underestimate the startup cost when a query's
    2269             :              * LIMIT is such that several of the children have to be run to
    2270             :              * satisfy it.  This might be overkill --- another plausible hack
    2271             :              * would be to take the Append's startup cost as the maximum of
    2272             :              * the child startup costs.  But we don't want to risk believing
    2273             :              * that an ORDER BY LIMIT query can be satisfied at small cost
    2274             :              * when the first child has small startup cost but later ones
    2275             :              * don't.  (If we had the ability to deal with nonlinear cost
    2276             :              * interpolation for partial retrievals, we would not need to be
    2277             :              * so conservative about this.)
    2278             :              *
    2279             :              * This case is also different from the above in that we have to
    2280             :              * account for possibly injecting sorts into subpaths that aren't
    2281             :              * natively ordered.
    2282             :              */
    2283        8340 :             foreach(l, apath->subpaths)
    2284             :             {
    2285        6196 :                 Path       *subpath = (Path *) lfirst(l);
    2286             :                 int         presorted_keys;
    2287             :                 Path        sort_path;  /* dummy for result of
    2288             :                                          * cost_sort/cost_incremental_sort */
    2289             : 
    2290        6196 :                 if (!pathkeys_count_contained_in(pathkeys, subpath->pathkeys,
    2291             :                                                  &presorted_keys))
    2292             :                 {
    2293             :                     /*
    2294             :                      * We'll need to insert a Sort node, so include costs for
    2295             :                      * that.  We choose to use incremental sort if it is
    2296             :                      * enabled and there are presorted keys; otherwise we use
    2297             :                      * full sort.
    2298             :                      *
    2299             :                      * We can use the parent's LIMIT if any, since we
    2300             :                      * certainly won't pull more than that many tuples from
    2301             :                      * any child.
    2302             :                      */
    2303          44 :                     if (enable_incremental_sort && presorted_keys > 0)
    2304             :                     {
    2305          12 :                         cost_incremental_sort(&sort_path,
    2306             :                                               root,
    2307             :                                               pathkeys,
    2308             :                                               presorted_keys,
    2309             :                                               subpath->disabled_nodes,
    2310             :                                               subpath->startup_cost,
    2311             :                                               subpath->total_cost,
    2312             :                                               subpath->rows,
    2313          12 :                                               subpath->pathtarget->width,
    2314             :                                               0.0,
    2315             :                                               work_mem,
    2316             :                                               apath->limit_tuples);
    2317             :                     }
    2318             :                     else
    2319             :                     {
    2320          32 :                         cost_sort(&sort_path,
    2321             :                                   root,
    2322             :                                   pathkeys,
    2323             :                                   subpath->disabled_nodes,
    2324             :                                   subpath->total_cost,
    2325             :                                   subpath->rows,
    2326          32 :                                   subpath->pathtarget->width,
    2327             :                                   0.0,
    2328             :                                   work_mem,
    2329             :                                   apath->limit_tuples);
    2330             :                     }
    2331             : 
    2332          44 :                     subpath = &sort_path;
    2333             :                 }
    2334             : 
    2335        6196 :                 apath->path.rows += subpath->rows;
    2336        6196 :                 apath->path.disabled_nodes += subpath->disabled_nodes;
    2337        6196 :                 apath->path.startup_cost += subpath->startup_cost;
    2338        6196 :                 apath->path.total_cost += subpath->total_cost;
    2339             :             }
    2340             :         }
    2341             :     }
    2342             :     else                        /* parallel-aware */
    2343             :     {
    2344       25564 :         int         i = 0;
    2345       25564 :         double      parallel_divisor = get_parallel_divisor(&apath->path);
    2346             : 
    2347             :         /* Parallel-aware Append never produces ordered output. */
    2348             :         Assert(apath->path.pathkeys == NIL);
    2349             : 
    2350             :         /* Calculate startup cost. */
    2351      101118 :         foreach(l, apath->subpaths)
    2352             :         {
    2353       75554 :             Path       *subpath = (Path *) lfirst(l);
    2354             : 
    2355             :             /*
    2356             :              * Append will start returning tuples when the child node with
    2357             :              * the lowest startup cost is done setting up.  We consider only
    2358             :              * the first few subplans that immediately get a worker assigned.
    2359             :              */
    2360       75554 :             if (i == 0)
    2361       25564 :                 apath->path.startup_cost = subpath->startup_cost;
    2362       49990 :             else if (i < apath->path.parallel_workers)
    2363       25006 :                 apath->path.startup_cost = Min(apath->path.startup_cost,
    2364             :                                                subpath->startup_cost);
    2365             : 
    2366             :             /*
    2367             :              * Apply parallel divisor to subpaths.  Scale the number of rows
    2368             :              * for each partial subpath based on the ratio of the parallel
    2369             :              * divisor originally used for the subpath to the one we adopted.
    2370             :              * Also add the cost of partial paths to the total cost, but
    2371             :              * ignore non-partial paths for now.
    2372             :              */
    2373       75554 :             if (i < apath->first_partial_path)
    2374       14166 :                 apath->path.rows += subpath->rows / parallel_divisor;
    2375             :             else
    2376             :             {
    2377             :                 double      subpath_parallel_divisor;
    2378             : 
    2379       61388 :                 subpath_parallel_divisor = get_parallel_divisor(subpath);
    2380       61388 :                 apath->path.rows += subpath->rows * (subpath_parallel_divisor /
    2381             :                                                      parallel_divisor);
    2382       61388 :                 apath->path.total_cost += subpath->total_cost;
    2383             :             }
    2384             : 
    2385       75554 :             apath->path.disabled_nodes += subpath->disabled_nodes;
    2386       75554 :             apath->path.rows = clamp_row_est(apath->path.rows);
    2387             : 
    2388       75554 :             i++;
    2389             :         }
    2390             : 
    2391             :         /* Add cost for non-partial subpaths. */
    2392       25564 :         apath->path.total_cost +=
    2393       25564 :             append_nonpartial_cost(apath->subpaths,
    2394             :                                    apath->first_partial_path,
    2395             :                                    apath->path.parallel_workers);
    2396             :     }
    2397             : 
    2398             :     /*
    2399             :      * Although Append does not do any selection or projection, it's not free;
    2400             :      * add a small per-tuple overhead.
    2401             :      */
    2402       67570 :     apath->path.total_cost +=
    2403       67570 :         cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * apath->path.rows;
    2404             : }
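                     : 
                     : /*
                     :  * Worked example of the row rescaling above (illustrative numbers,
                     :  * assuming parallel_leader_participation is on, in which case
                     :  * get_parallel_divisor adds a leader contribution of
                     :  * 1.0 - 0.3 * workers when that is positive): a partial subpath
                     :  * planned for 2 workers carries a row count already divided by 2.4,
                     :  * while an Append running with 4 workers uses a divisor of 4.0.  The
                     :  * subpath's rows are therefore rescaled by 2.4 / 4.0 before being
                     :  * added to the Append's total:
                     :  *
                     :  *      apath->path.rows += subpath->rows * (2.4 / 4.0);
                     :  */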
    2405             : 
    2406             : /*
    2407             :  * cost_merge_append
    2408             :  *    Determines and returns the cost of a MergeAppend node.
    2409             :  *
    2410             :  * MergeAppend merges several pre-sorted input streams, using a heap that
    2411             :  * at any given instant holds the next tuple from each stream.  If there
    2412             :  * are N streams, we need about N*log2(N) tuple comparisons to construct
    2413             :  * the heap at startup, and then for each output tuple, about log2(N)
    2414             :  * comparisons to replace the top entry.
    2415             :  *
    2416             :  * (The effective value of N will drop once some of the input streams are
    2417             :  * exhausted, but it seems unlikely to be worth trying to account for that.)
    2418             :  *
    2419             :  * The heap is never spilled to disk, since we assume N is not very large.
    2420             :  * So this is much simpler than cost_sort.
    2421             :  *
    2422             :  * As in cost_sort, we charge two operator evals per tuple comparison.
    2423             :  *
    2424             :  * 'pathkeys' is a list of sort keys
    2425             :  * 'n_streams' is the number of input streams
    2426             :  * 'input_disabled_nodes' is the sum of the input streams' disabled node counts
    2427             :  * 'input_startup_cost' is the sum of the input streams' startup costs
    2428             :  * 'input_total_cost' is the sum of the input streams' total costs
    2429             :  * 'tuples' is the number of tuples in all the streams
    2430             :  */
    2431             : void
    2432        9908 : cost_merge_append(Path *path, PlannerInfo *root,
    2433             :                   List *pathkeys, int n_streams,
    2434             :                   int input_disabled_nodes,
    2435             :                   Cost input_startup_cost, Cost input_total_cost,
    2436             :                   double tuples)
    2437             : {
    2438        9908 :     Cost        startup_cost = 0;
    2439        9908 :     Cost        run_cost = 0;
    2440             :     Cost        comparison_cost;
    2441             :     double      N;
    2442             :     double      logN;
    2443             : 
    2444             :     /*
    2445             :      * Avoid log(0)...
    2446             :      */
    2447        9908 :     N = (n_streams < 2) ? 2.0 : (double) n_streams;
    2448        9908 :     logN = LOG2(N);
    2449             : 
    2450             :     /* Assumed cost per tuple comparison */
    2451        9908 :     comparison_cost = 2.0 * cpu_operator_cost;
    2452             : 
    2453             :     /* Heap creation cost */
    2454        9908 :     startup_cost += comparison_cost * N * logN;
    2455             : 
    2456             :     /* Per-tuple heap maintenance cost */
    2457        9908 :     run_cost += tuples * comparison_cost * logN;
    2458             : 
    2459             :     /*
    2460             :      * Although MergeAppend does not do any selection or projection, it's not
    2461             :      * free; add a small per-tuple overhead.
    2462             :      */
    2463        9908 :     run_cost += cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * tuples;
    2464             : 
    2465        9908 :     path->disabled_nodes = input_disabled_nodes;
    2466        9908 :     path->startup_cost = startup_cost + input_startup_cost;
    2467        9908 :     path->total_cost = startup_cost + run_cost + input_total_cost;
    2468        9908 : }
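                     : 
                     : /*
                     :  * Worked example, assuming the defaults cpu_operator_cost = 0.0025 and
                     :  * cpu_tuple_cost = 0.01, and APPEND_CPU_COST_MULTIPLIER = 0.5: merging
                     :  * N = 4 streams gives logN = 2 and comparison_cost = 0.005, so
                     :  *
                     :  *      startup_cost += 0.005 * 4 * 2        = 0.04
                     :  *      run_cost     += tuples * 0.005 * 2   (heap maintenance)
                     :  *      run_cost     += tuples * 0.01 * 0.5  (per-tuple overhead)
                     :  *
                     :  * i.e. about 0.015 units of run cost per output tuple on top of the
                     :  * input streams' own costs.
                     :  */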
    2469             : 
    2470             : /*
    2471             :  * cost_material
    2472             :  *    Determines and returns the cost of materializing a relation, including
    2473             :  *    the cost of reading the input data.
    2474             :  *
    2475             :  * If the total volume of data to materialize exceeds work_mem, we will need
    2476             :  * to write it to disk, so the cost is much higher in that case.
    2477             :  *
    2478             :  * Note that here we are estimating the costs for the first scan of the
    2479             :  * relation, so the materialization is all overhead --- any savings will
    2480             :  * occur only on rescan, which is estimated in cost_rescan.
    2481             :  */
    2482             : void
    2483      673568 : cost_material(Path *path,
    2484             :               int input_disabled_nodes,
    2485             :               Cost input_startup_cost, Cost input_total_cost,
    2486             :               double tuples, int width)
    2487             : {
    2488      673568 :     Cost        startup_cost = input_startup_cost;
    2489      673568 :     Cost        run_cost = input_total_cost - input_startup_cost;
    2490      673568 :     double      nbytes = relation_byte_size(tuples, width);
    2491      673568 :     double      work_mem_bytes = work_mem * (Size) 1024;
    2492             : 
    2493      673568 :     path->rows = tuples;
    2494             : 
    2495             :     /*
    2496             :      * Whether spilling or not, charge 2x cpu_operator_cost per tuple to
    2497             :      * reflect bookkeeping overhead.  (This rate must be more than what
    2498             :      * cost_rescan charges for materialize, ie, cpu_operator_cost per tuple;
    2499             :      * if it is exactly the same then there will be a cost tie between
    2500             :      * nestloop with A outer, materialized B inner and nestloop with B outer,
    2501             :      * materialized A inner.  The extra cost ensures we'll prefer
    2502             :      * materializing the smaller rel.)  Note that this is normally a good deal
    2503             :      * less than cpu_tuple_cost, which is OK because a Material plan node
    2504             :      * doesn't do qual-checking or projection, so it's got less overhead than
    2505             :      * most plan nodes.
    2506             :      */
    2507      673568 :     run_cost += 2 * cpu_operator_cost * tuples;
    2508             : 
    2509             :     /*
    2510             :      * If we will spill to disk, charge at the rate of seq_page_cost per page.
    2511             :      * This cost is assumed to be evenly spread through the plan run phase,
    2512             :      * which isn't exactly accurate but our cost model doesn't allow for
    2513             :      * nonuniform costs within the run phase.
    2514             :      */
    2515      673568 :     if (nbytes > work_mem_bytes)
    2516             :     {
    2517        4992 :         double      npages = ceil(nbytes / BLCKSZ);
    2518             : 
    2519        4992 :         run_cost += seq_page_cost * npages;
    2520             :     }
    2521             : 
    2522      673568 :     path->disabled_nodes = input_disabled_nodes + (enable_material ? 0 : 1);
    2523      673568 :     path->startup_cost = startup_cost;
    2524      673568 :     path->total_cost = startup_cost + run_cost;
    2525      673568 : }
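                     : 
                     : /*
                     :  * A minimal sketch of the charges above, assuming the default
                     :  * seq_page_cost = 1.0 and cpu_operator_cost = 0.0025, a block size of
                     :  * 8192 bytes, and ignoring the per-tuple overhead that
                     :  * relation_byte_size() would add; the function name is illustrative
                     :  * only.
                     :  */
                     : #include <math.h>
                     : 
                     : static double
                     : sketch_material_run_cost(double tuples, double width_bytes,
                     :                          double work_mem_kb)
                     : {
                     :     double      nbytes = tuples * width_bytes;
                     :     double      run_cost;
                     : 
                     :     /* 2x cpu_operator_cost per tuple for bookkeeping */
                     :     run_cost = 2 * 0.0025 * tuples;
                     : 
                     :     /* if it spills, charge seq_page_cost per page written */
                     :     if (nbytes > work_mem_kb * 1024.0)
                     :         run_cost += 1.0 * ceil(nbytes / 8192.0);
                     : 
                     :     return run_cost;
                     : }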
    2526             : 
    2527             : /*
    2528             :  * cost_memoize_rescan
    2529             :  *    Determines the estimated cost of rescanning a Memoize node.
    2530             :  *
    2531             :  * In order to estimate this, we need to know how often we expect to be
    2532             :  * called and how many distinct sets of parameters we are likely to be
    2533             :  * called with. If we expect a good cache hit ratio, then we can set our
    2534             :  * costs to account for that hit ratio, plus a little bit of cost for the
    2535             :  * caching itself.  Caching will not work out well if we expect to be called
    2536             :  * with too many distinct parameter values.  The worst case here is that we
    2537             :  * never see any parameter value twice, in which case we'd never get a cache
    2538             :  * hit and caching would be a complete waste of effort.
    2539             :  */
    2540             : static void
    2541      291440 : cost_memoize_rescan(PlannerInfo *root, MemoizePath *mpath,
    2542             :                     Cost *rescan_startup_cost, Cost *rescan_total_cost)
    2543             : {
    2544             :     EstimationInfo estinfo;
    2545             :     ListCell   *lc;
    2546      291440 :     Cost        input_startup_cost = mpath->subpath->startup_cost;
    2547      291440 :     Cost        input_total_cost = mpath->subpath->total_cost;
    2548      291440 :     double      tuples = mpath->subpath->rows;
    2549      291440 :     Cardinality est_calls = mpath->est_calls;
    2550      291440 :     int         width = mpath->subpath->pathtarget->width;
    2551             : 
    2552             :     double      hash_mem_bytes;
    2553             :     double      est_entry_bytes;
    2554             :     Cardinality est_cache_entries;
    2555             :     Cardinality ndistinct;
    2556             :     double      evict_ratio;
    2557             :     double      hit_ratio;
    2558             :     Cost        startup_cost;
    2559             :     Cost        total_cost;
    2560             : 
    2561             :     /* available cache space */
    2562      291440 :     hash_mem_bytes = get_hash_memory_limit();
    2563             : 
    2564             :     /*
    2565             :      * Set the number of bytes each cache entry should consume in the cache.
    2566             :      * To get a better estimate of how many cache entries we can store at
    2567             :      * once, we call into the executor here to ask it what memory overheads
    2568             :      * there are for a single cache entry.
    2569             :      */
    2570      291440 :     est_entry_bytes = relation_byte_size(tuples, width) +
    2571      291440 :         ExecEstimateCacheEntryOverheadBytes(tuples);
    2572             : 
    2573             :     /* include the estimated width for the cache keys */
    2574      621238 :     foreach(lc, mpath->param_exprs)
    2575      329798 :         est_entry_bytes += get_expr_width(root, (Node *) lfirst(lc));
    2576             : 
    2577             :     /* estimate of the upper limit on cache entries we can hold at once */
    2578      291440 :     est_cache_entries = floor(hash_mem_bytes / est_entry_bytes);
    2579             : 
    2580             :     /* estimate of the number of distinct parameter values */
    2581      291440 :     ndistinct = estimate_num_groups(root, mpath->param_exprs, est_calls, NULL,
    2582             :                                     &estinfo);
    2583             : 
    2584             :     /*
    2585             :      * When the estimation fell back on using a default value, it's a bit too
    2586             :      * risky to assume that it's ok to use a Memoize node.  The use of a
    2587             :      * default could cause us to use a Memoize node when it's really
    2588             :      * inappropriate to do so.  If we see that this has been done, then we'll
    2589             :      * assume that every call will have unique parameters, which will almost
    2590             :      * certainly mean a MemoizePath will never survive add_path().
    2591             :      */
    2592      291440 :     if ((estinfo.flags & SELFLAG_USED_DEFAULT) != 0)
    2593       17172 :         ndistinct = est_calls;
    2594             : 
    2595             :     /* Remember the ndistinct estimate for EXPLAIN */
    2596      291440 :     mpath->est_unique_keys = ndistinct;
    2597             : 
    2598             :     /*
    2599             :      * Since we've already estimated the maximum number of entries we can
    2600             :      * store at once and know the estimated number of distinct values we'll be
    2601             :      * called with, we'll take this opportunity to set the path's est_entries.
    2602             :      * This will ultimately determine the hash table size that the executor
    2603             :      * will use.  If we leave this at zero, the executor will just choose the
    2604             :      * size itself.  Really this is not the right place to do this, but it's
    2605             :      * convenient since everything is already calculated.
    2606             :      */
    2607      291440 :     mpath->est_entries = Min(Min(ndistinct, est_cache_entries),
    2608             :                              PG_UINT32_MAX);
    2609             : 
    2610             :     /*
    2611             :      * When the number of distinct parameter values exceeds the number of
    2612             :      * entries we can store in the cache, we'll have to evict some entries.
    2613             :      * That is not free.  Here we estimate how often we'll incur the cost of
    2614             :      * such evictions.
    2615             :      */
    2616      291440 :     evict_ratio = 1.0 - Min(est_cache_entries, ndistinct) / ndistinct;
    2617             : 
    2618             :     /*
    2619             :      * In order to estimate how costly a single scan will be, we need to
    2620             :      * attempt to estimate what the cache hit ratio will be.  To do that we
    2621             :      * must look at how many scans are estimated in total for this node and
    2622             :      * how many of those scans are expected to be cache hits.
    2623             :      */
    2624      582880 :     hit_ratio = ((est_calls - ndistinct) / est_calls) *
    2625      291440 :         (est_cache_entries / Max(ndistinct, est_cache_entries));
    2626             : 
    2627             :     /* Remember the hit ratio estimate for EXPLAIN */
    2628      291440 :     mpath->est_hit_ratio = hit_ratio;
    2629             : 
    2630             :     Assert(hit_ratio >= 0 && hit_ratio <= 1.0);
    2631             : 
    2632             :     /*
    2633             :      * Set the total_cost accounting for the expected cache hit ratio.  We
    2634             :      * also add on a cpu_operator_cost to account for a cache lookup. This
    2635             :      * will happen regardless of whether it's a cache hit or not.
    2636             :      */
    2637      291440 :     total_cost = input_total_cost * (1.0 - hit_ratio) + cpu_operator_cost;
    2638             : 
    2639             :     /* Now adjust the total cost to account for cache evictions */
    2640             : 
    2641             :     /* Charge a cpu_tuple_cost for evicting the actual cache entry */
    2642      291440 :     total_cost += cpu_tuple_cost * evict_ratio;
    2643             : 
    2644             :     /*
    2645             :      * Charge a 10th of cpu_operator_cost to evict every tuple in that entry.
    2646             :      * The per-tuple eviction is really just a pfree, so charging a whole
    2647             :      * cpu_operator_cost seems a little excessive.
    2648             :      */
    2649      291440 :     total_cost += cpu_operator_cost / 10.0 * evict_ratio * tuples;
    2650             : 
    2651             :     /*
    2652             :      * Now adjust for storing things in the cache, since that's not free
    2653             :      * either.  Everything must go in the cache.  We don't proportion this
    2654             :      * over any ratio, just apply it once for the scan.  We charge a
    2655             :      * cpu_tuple_cost for the creation of the cache entry and also a
    2656             :      * cpu_operator_cost for each tuple we expect to cache.
    2657             :      */
    2658      291440 :     total_cost += cpu_tuple_cost + cpu_operator_cost * tuples;
    2659             : 
    2660             :     /*
    2661             :      * Getting the first row must also be proportioned according to the
    2662             :      * expected cache hit ratio.
    2663             :      */
    2664      291440 :     startup_cost = input_startup_cost * (1.0 - hit_ratio);
    2665             : 
    2666             :     /*
    2667             :      * Additionally we charge a cpu_tuple_cost to account for cache lookups,
    2668             :      * which we'll do regardless of whether it was a cache hit or not.
    2669             :      */
    2670      291440 :     startup_cost += cpu_tuple_cost;
    2671             : 
    2672      291440 :     *rescan_startup_cost = startup_cost;
    2673      291440 :     *rescan_total_cost = total_cost;
    2674      291440 : }
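                     : 
                     : /*
                     :  * Worked example of the arithmetic above (illustrative numbers): with
                     :  * est_calls = 1000, ndistinct = 200 and est_cache_entries = 100,
                     :  *
                     :  *      evict_ratio = 1.0 - Min(100, 200) / 200           = 0.5
                     :  *      hit_ratio   = ((1000 - 200) / 1000) * (100 / 200) = 0.4
                     :  *
                     :  * so 40% of rescans are expected to be served from the cache, while
                     :  * half of the distinct parameter values cannot be held at once and
                     :  * therefore incur eviction charges.
                     :  */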
    2675             : 
    2676             : /*
    2677             :  * cost_agg
    2678             :  *      Determines and returns the cost of performing an Agg plan node,
    2679             :  *      including the cost of its input.
    2680             :  *
    2681             :  * aggcosts can be NULL when there are no actual aggregate functions (i.e.,
    2682             :  * we are using a hashed Agg node just to do grouping).
    2683             :  *
    2684             :  * Note: when aggstrategy == AGG_SORTED, caller must ensure that input costs
    2685             :  * are for appropriately-sorted input.
    2686             :  */
    2687             : void
    2688       86294 : cost_agg(Path *path, PlannerInfo *root,
    2689             :          AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
    2690             :          int numGroupCols, double numGroups,
    2691             :          List *quals,
    2692             :          int disabled_nodes,
    2693             :          Cost input_startup_cost, Cost input_total_cost,
    2694             :          double input_tuples, double input_width)
    2695             : {
    2696             :     double      output_tuples;
    2697             :     Cost        startup_cost;
    2698             :     Cost        total_cost;
    2699       86294 :     const AggClauseCosts dummy_aggcosts = {0};
    2700             : 
    2701             :     /* Use all-zero per-aggregate costs if NULL is passed */
    2702       86294 :     if (aggcosts == NULL)
    2703             :     {
    2704             :         Assert(aggstrategy == AGG_HASHED);
    2705       18784 :         aggcosts = &dummy_aggcosts;
    2706             :     }
    2707             : 
    2708             :     /*
    2709             :      * The transCost.per_tuple component of aggcosts should be charged once
    2710             :      * per input tuple, corresponding to the costs of evaluating the aggregate
    2711             :      * transfns and their input expressions. The finalCost.per_tuple component
    2712             :      * is charged once per output tuple, corresponding to the costs of
    2713             :      * evaluating the finalfns.  Startup costs are of course charged but once.
    2714             :      *
    2715             :      * If we are grouping, we charge an additional cpu_operator_cost per
    2716             :      * grouping column per input tuple for grouping comparisons.
    2717             :      *
    2718             :      * We will produce a single output tuple if not grouping, and a tuple per
    2719             :      * group otherwise.  We charge cpu_tuple_cost for each output tuple.
    2720             :      *
    2721             :      * Note: in this cost model, AGG_SORTED and AGG_HASHED have exactly the
    2722             :      * same total CPU cost, but AGG_SORTED has lower startup cost.  If the
    2723             :      * input path is already sorted appropriately, AGG_SORTED should be
    2724             :      * preferred (since it has no risk of memory overflow).  This will happen
    2725             :      * as long as the computed total costs are indeed exactly equal --- but if
    2726             :      * there's roundoff error we might do the wrong thing.  So be sure that
    2727             :      * the computations below form the same intermediate values in the same
    2728             :      * order.
    2729             :      */
    2730       86294 :     if (aggstrategy == AGG_PLAIN)
    2731             :     {
    2732       37226 :         startup_cost = input_total_cost;
    2733       37226 :         startup_cost += aggcosts->transCost.startup;
    2734       37226 :         startup_cost += aggcosts->transCost.per_tuple * input_tuples;
    2735       37226 :         startup_cost += aggcosts->finalCost.startup;
    2736       37226 :         startup_cost += aggcosts->finalCost.per_tuple;
    2737             :         /* we aren't grouping */
    2738       37226 :         total_cost = startup_cost + cpu_tuple_cost;
    2739       37226 :         output_tuples = 1;
    2740             :     }
    2741       49068 :     else if (aggstrategy == AGG_SORTED || aggstrategy == AGG_MIXED)
    2742             :     {
    2743             :         /* Here we are able to deliver output on-the-fly */
    2744       17758 :         startup_cost = input_startup_cost;
    2745       17758 :         total_cost = input_total_cost;
    2746       17758 :         if (aggstrategy == AGG_MIXED && !enable_hashagg)
    2747         480 :             ++disabled_nodes;
    2748             :         /* calcs phrased this way to match HASHED case, see note above */
    2749       17758 :         total_cost += aggcosts->transCost.startup;
    2750       17758 :         total_cost += aggcosts->transCost.per_tuple * input_tuples;
    2751       17758 :         total_cost += (cpu_operator_cost * numGroupCols) * input_tuples;
    2752       17758 :         total_cost += aggcosts->finalCost.startup;
    2753       17758 :         total_cost += aggcosts->finalCost.per_tuple * numGroups;
    2754       17758 :         total_cost += cpu_tuple_cost * numGroups;
    2755       17758 :         output_tuples = numGroups;
    2756             :     }
    2757             :     else
    2758             :     {
    2759             :         /* must be AGG_HASHED */
    2760       31310 :         startup_cost = input_total_cost;
    2761       31310 :         if (!enable_hashagg)
    2762        1866 :             ++disabled_nodes;
    2763       31310 :         startup_cost += aggcosts->transCost.startup;
    2764       31310 :         startup_cost += aggcosts->transCost.per_tuple * input_tuples;
    2765             :         /* cost of computing hash value */
    2766       31310 :         startup_cost += (cpu_operator_cost * numGroupCols) * input_tuples;
    2767       31310 :         startup_cost += aggcosts->finalCost.startup;
    2768             : 
    2769       31310 :         total_cost = startup_cost;
    2770       31310 :         total_cost += aggcosts->finalCost.per_tuple * numGroups;
    2771             :         /* cost of retrieving from hash table */
    2772       31310 :         total_cost += cpu_tuple_cost * numGroups;
    2773       31310 :         output_tuples = numGroups;
    2774             :     }
    2775             : 
    2776             :     /*
    2777             :      * Add the disk costs of hash aggregation that spills to disk.
    2778             :      *
    2779             :      * Groups that go into the hash table stay in memory until finalized, so
    2780             :      * spilling and reprocessing tuples doesn't incur additional invocations
    2781             :      * of transCost or finalCost. Furthermore, the computed hash value is
    2782             :      * stored with the spilled tuples, so we don't incur extra invocations of
    2783             :      * the hash function.
    2784             :      *
    2785             :      * Hash Agg begins returning tuples after the first batch is complete.
    2786             :      * Accrue writes (spilled tuples) to startup_cost and to total_cost;
    2787             :      * accrue reads only to total_cost.
    2788             :      */
    2789       86294 :     if (aggstrategy == AGG_HASHED || aggstrategy == AGG_MIXED)
    2790             :     {
    2791             :         double      pages;
    2792       32250 :         double      pages_written = 0.0;
    2793       32250 :         double      pages_read = 0.0;
    2794             :         double      spill_cost;
    2795             :         double      hashentrysize;
    2796             :         double      nbatches;
    2797             :         Size        mem_limit;
    2798             :         uint64      ngroups_limit;
    2799             :         int         num_partitions;
    2800             :         int         depth;
    2801             : 
    2802             :         /*
    2803             :          * Estimate number of batches based on the computed limits. If less
    2804             :          * than or equal to one, all groups are expected to fit in memory;
    2805             :          * otherwise we expect to spill.
    2806             :          */
    2807       32250 :         hashentrysize = hash_agg_entry_size(list_length(root->aggtransinfos),
    2808             :                                             input_width,
    2809       32250 :                                             aggcosts->transitionSpace);
    2810       32250 :         hash_agg_set_limits(hashentrysize, numGroups, 0, &mem_limit,
    2811             :                             &ngroups_limit, &num_partitions);
    2812             : 
    2813       32250 :         nbatches = Max((numGroups * hashentrysize) / mem_limit,
    2814             :                        numGroups / ngroups_limit);
    2815             : 
    2816       32250 :         nbatches = Max(ceil(nbatches), 1.0);
    2817       32250 :         num_partitions = Max(num_partitions, 2);
    2818             : 
    2819             :         /*
    2820             :          * The number of partitions can change at different levels of
    2821             :          * recursion, but for the purposes of this calculation we assume it
    2822             :          * stays constant.
    2823             :          */
    2824       32250 :         depth = ceil(log(nbatches) / log(num_partitions));
    2825             : 
    2826             :         /*
    2827             :          * Estimate number of pages read and written. For each level of
    2828             :          * recursion, a tuple must be written and then later read.
    2829             :          */
    2830       32250 :         pages = relation_byte_size(input_tuples, input_width) / BLCKSZ;
    2831       32250 :         pages_written = pages_read = pages * depth;
    2832             : 
    2833             :         /*
    2834             :          * HashAgg has somewhat worse I/O behavior than Sort on typical
    2835             :          * hardware/OS combinations. Account for this with a generic penalty.
    2836             :          */
    2837       32250 :         pages_read *= 2.0;
    2838       32250 :         pages_written *= 2.0;
    2839             : 
    2840       32250 :         startup_cost += pages_written * random_page_cost;
    2841       32250 :         total_cost += pages_written * random_page_cost;
    2842       32250 :         total_cost += pages_read * seq_page_cost;
    2843             : 
    2844             :         /* account for CPU cost of spilling a tuple and reading it back */
    2845       32250 :         spill_cost = depth * input_tuples * 2.0 * cpu_tuple_cost;
    2846       32250 :         startup_cost += spill_cost;
    2847       32250 :         total_cost += spill_cost;
    2848             :     }
    2849             : 
    2850             :     /*
    2851             :      * If there are quals (HAVING quals), account for their cost and
    2852             :      * selectivity.
    2853             :      */
    2854       86294 :     if (quals)
    2855             :     {
    2856             :         QualCost    qual_cost;
    2857             : 
    2858        4620 :         cost_qual_eval(&qual_cost, quals, root);
    2859        4620 :         startup_cost += qual_cost.startup;
    2860        4620 :         total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
    2861             : 
    2862        4620 :         output_tuples = clamp_row_est(output_tuples *
    2863        4620 :                                       clauselist_selectivity(root,
    2864             :                                                              quals,
    2865             :                                                              0,
    2866             :                                                              JOIN_INNER,
    2867             :                                                              NULL));
    2868             :     }
    2869             : 
    2870       86294 :     path->rows = output_tuples;
    2871       86294 :     path->disabled_nodes = disabled_nodes;
    2872       86294 :     path->startup_cost = startup_cost;
    2873       86294 :     path->total_cost = total_cost;
    2874       86294 : }
    2875             : 
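To make the spill arithmetic above concrete, here is a minimal standalone sketch that mirrors the batch, recursion-depth, and page estimates with invented inputs; the memory limit, entry size, and partition fan-out stand in for what hash_agg_set_limits() and hash_agg_entry_size() would report, and the sketch does not call into the planner (compile with cc -lm):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* invented planner inputs */
        double      numGroups = 1000000.0;
        double      input_tuples = 5000000.0;
        double      input_width = 32.0;     /* bytes per input tuple */
        double      hashentrysize = 64.0;   /* assumed hash entry size */
        double      mem_limit = 4194304.0;  /* assumed 4MB memory limit */
        double      ngroups_limit = mem_limit / hashentrysize;
        int         num_partitions = 4;     /* assumed partition fan-out */
        double      nbatches;
        double      depth;
        double      pages;

        /* batches needed, by memory limit and by group-count limit */
        nbatches = fmax((numGroups * hashentrysize) / mem_limit,
                        numGroups / ngroups_limit);
        nbatches = fmax(ceil(nbatches), 1.0);
        if (num_partitions < 2)
            num_partitions = 2;

        /* recursion depth, assuming a constant fan-out per level */
        depth = ceil(log(nbatches) / log(num_partitions));

        /*
         * Pages spilled: each level writes and later re-reads every tuple,
         * and the 2.0 multiplier is the generic I/O penalty vs. Sort.
         */
        pages = input_tuples * input_width / 8192.0;    /* 8192 = BLCKSZ */

        printf("nbatches=%.0f depth=%.0f pages written=read=%.0f\n",
               nbatches, depth, pages * depth * 2.0);
        return 0;
    }

With these figures the sketch prints nbatches=16, depth=2, and 78125 pages each way.
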
    2876             : /*
    2877             :  * get_windowclause_startup_tuples
    2878             :  *      Estimate how many tuples we'll need to fetch from a WindowAgg's
    2879             :  *      subnode before we can output the first WindowAgg tuple.
    2880             :  *
    2881             :  * How many tuples need to be read depends on the WindowClause.  For example,
    2882             :  * a WindowClause with no PARTITION BY and no ORDER BY requires that all
    2883             :  * subnode tuples are read and aggregated before the WindowAgg can output
    2884             :  * anything.  If there's a PARTITION BY, then we only need to look at tuples
    2885             :  * in the first partition.  Here we attempt to estimate just how many
    2886             :  * 'input_tuples' the WindowAgg will need to read for the given WindowClause
    2887             :  * before the first tuple can be output.
    2888             :  */
    2889             : static double
    2890        2964 : get_windowclause_startup_tuples(PlannerInfo *root, WindowClause *wc,
    2891             :                                 double input_tuples)
    2892             : {
    2893        2964 :     int         frameOptions = wc->frameOptions;
    2894             :     double      partition_tuples;
    2895             :     double      return_tuples;
    2896             :     double      peer_tuples;
    2897             : 
    2898             :     /*
    2899             :      * First, figure out how many partitions there are likely to be and set
    2900             :      * partition_tuples according to that estimate.
    2901             :      */
    2902        2964 :     if (wc->partitionClause != NIL)
    2903             :     {
    2904             :         double      num_partitions;
    2905         734 :         List       *partexprs = get_sortgrouplist_exprs(wc->partitionClause,
    2906         734 :                                                         root->parse->targetList);
    2907             : 
    2908         734 :         num_partitions = estimate_num_groups(root, partexprs, input_tuples,
    2909             :                                              NULL, NULL);
    2910         734 :         list_free(partexprs);
    2911             : 
    2912         734 :         partition_tuples = input_tuples / num_partitions;
    2913             :     }
    2914             :     else
    2915             :     {
    2916             :         /* all tuples belong to the same partition */
    2917        2230 :         partition_tuples = input_tuples;
    2918             :     }
    2919             : 
    2920             :     /* estimate the number of tuples in each peer group */
    2921        2964 :     if (wc->orderClause != NIL)
    2922             :     {
    2923             :         double      num_groups;
    2924             :         List       *orderexprs;
    2925             : 
    2926        2358 :         orderexprs = get_sortgrouplist_exprs(wc->orderClause,
    2927        2358 :                                              root->parse->targetList);
    2928             : 
    2929             :         /* estimate how many peer groups there are in the partition */
    2930        2358 :         num_groups = estimate_num_groups(root, orderexprs,
    2931             :                                          partition_tuples, NULL,
    2932             :                                          NULL);
    2933        2358 :         list_free(orderexprs);
    2934        2358 :         peer_tuples = partition_tuples / num_groups;
    2935             :     }
    2936             :     else
    2937             :     {
    2938             :         /* no ORDER BY so only 1 tuple belongs in each peer group */
    2939         606 :         peer_tuples = 1.0;
    2940             :     }
    2941             : 
    2942        2964 :     if (frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)
    2943             :     {
    2944             :         /* include all partition rows */
    2945         364 :         return_tuples = partition_tuples;
    2946             :     }
    2947        2600 :     else if (frameOptions & FRAMEOPTION_END_CURRENT_ROW)
    2948             :     {
    2949        1562 :         if (frameOptions & FRAMEOPTION_ROWS)
    2950             :         {
    2951             :             /* just count the current row */
    2952         722 :             return_tuples = 1.0;
    2953             :         }
    2954         840 :         else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS))
    2955             :         {
    2956             :             /*
    2957             :              * When in RANGE/GROUPS mode, it's more complex.  If there's no
    2958             :              * ORDER BY, then all rows in the partition are peers; otherwise
    2959             :              * we'll need to read the first group of peers.
    2960             :              */
    2961         840 :             if (wc->orderClause == NIL)
    2962         326 :                 return_tuples = partition_tuples;
    2963             :             else
    2964         514 :                 return_tuples = peer_tuples;
    2965             :         }
    2966             :         else
    2967             :         {
    2968             :             /*
    2969             :              * Something new we don't support yet?  This needs attention.
    2970             :              * We'll just return 1.0 in the meantime.
    2971             :              */
    2972             :             Assert(false);
    2973           0 :             return_tuples = 1.0;
    2974             :         }
    2975             :     }
    2976        1038 :     else if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING)
    2977             :     {
    2978             :         /*
    2979             :          * BETWEEN ... AND N PRECEDING will only need to read the WindowAgg's
    2980             :          * subnode after N ROWS/RANGES/GROUPS.  N can be 0, but not negative,
    2981             :          * so we'll just assume only the current row needs to be read to fetch
    2982             :          * the first WindowAgg row.
    2983             :          */
    2984         108 :         return_tuples = 1.0;
    2985             :     }
    2986         930 :     else if (frameOptions & FRAMEOPTION_END_OFFSET_FOLLOWING)
    2987             :     {
    2988         930 :         Const      *endOffset = (Const *) wc->endOffset;
    2989             :         double      end_offset_value;
    2990             : 
    2991             :         /* try to figure out the value specified in the endOffset */
    2992         930 :         if (IsA(endOffset, Const))
    2993             :         {
    2994         930 :             if (endOffset->constisnull)
    2995             :             {
    2996             :                 /*
    2997             :                  * NULLs are not allowed, but currently, there's no code to
    2998             :                  * error out if there's a NULL Const.  We'll only discover
    2999             :                  * this during execution.  For now, just pretend everything is
    3000             :                  * fine and assume that just the first row/range/group will be
    3001             :                  * needed.
    3002             :                  */
    3003           0 :                 end_offset_value = 1.0;
    3004             :             }
    3005             :             else
    3006             :             {
    3007         930 :                 switch (endOffset->consttype)
    3008             :                 {
    3009          24 :                     case INT2OID:
    3010          24 :                         end_offset_value =
    3011          24 :                             (double) DatumGetInt16(endOffset->constvalue);
    3012          24 :                         break;
    3013         132 :                     case INT4OID:
    3014         132 :                         end_offset_value =
    3015         132 :                             (double) DatumGetInt32(endOffset->constvalue);
    3016         132 :                         break;
    3017         432 :                     case INT8OID:
    3018         432 :                         end_offset_value =
    3019         432 :                             (double) DatumGetInt64(endOffset->constvalue);
    3020         432 :                         break;
    3021         342 :                     default:
    3022         342 :                         end_offset_value =
    3023         342 :                             partition_tuples / peer_tuples *
    3024             :                             DEFAULT_INEQ_SEL;
    3025         342 :                         break;
    3026             :                 }
    3027             :             }
    3028             :         }
    3029             :         else
    3030             :         {
    3031             :             /*
    3032             :              * When the end bound is not a Const, we can only guess; fall
    3033             :              * back on DEFAULT_INEQ_SEL.
    3034             :              */
    3035           0 :             end_offset_value =
    3036           0 :                 partition_tuples / peer_tuples * DEFAULT_INEQ_SEL;
    3037             :         }
    3038             : 
    3039         930 :         if (frameOptions & FRAMEOPTION_ROWS)
    3040             :         {
    3041             :             /* include the N FOLLOWING and the current row */
    3042         270 :             return_tuples = end_offset_value + 1.0;
    3043             :         }
    3044         660 :         else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS))
    3045             :         {
    3046             :             /* include the N FOLLOWING ranges/groups and the initial range/group */
    3047         660 :             return_tuples = peer_tuples * (end_offset_value + 1.0);
    3048             :         }
    3049             :         else
    3050             :         {
    3051             :             /*
    3052             :              * Something new we don't support yet?  This needs attention.
    3053             :              * We'll just return 1.0 in the meantime.
    3054             :              */
    3055             :             Assert(false);
    3056           0 :             return_tuples = 1.0;
    3057             :         }
    3058             :     }
    3059             :     else
    3060             :     {
    3061             :         /*
    3062             :          * Something new we don't support yet?  This needs attention.  We'll
    3063             :          * just return 1.0 in the meantime.
    3064             :          */
    3065             :         Assert(false);
    3066           0 :         return_tuples = 1.0;
    3067             :     }
    3068             : 
    3069        2964 :     if (wc->partitionClause != NIL || wc->orderClause != NIL)
    3070             :     {
    3071             :         /*
    3072             :          * Cap the return value to the estimated partition tuples and account
    3073             :          * for the extra tuple WindowAgg will need to read to confirm the next
    3074             :          * tuple does not belong to the same partition or peer group.
    3075             :          */
    3076        2570 :         return_tuples = Min(return_tuples + 1.0, partition_tuples);
    3077             :     }
    3078             :     else
    3079             :     {
    3080             :         /*
    3081             :          * Cap the return value so it's never higher than the expected tuples
    3082             :          * in the partition.
    3083             :          */
    3084         394 :         return_tuples = Min(return_tuples, partition_tuples);
    3085             :     }
    3086             : 
    3087             :     /*
    3088             :      * We needn't worry about any EXCLUDE options as those only exclude rows
    3089             :      * from being aggregated, not from being read from the WindowAgg's
    3090             :      * subnode.
    3091             :      */
    3092             : 
    3093        2964 :     return clamp_row_est(return_tuples);
    3094             : }
    3095             : 
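As a worked illustration of the estimate above, consider a hypothetical OVER (PARTITION BY p ORDER BY o RANGE BETWEEN UNBOUNDED PRECEDING AND 2 FOLLOWING) clause over 10,000 input rows, with assumed group estimates of 10 partitions and 50 peer groups per partition (a real plan would get these from estimate_num_groups(); this sketch only mirrors the arithmetic):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double      input_tuples = 10000.0;
        double      num_partitions = 10.0;  /* assumed estimate_num_groups() result */
        double      peer_groups = 50.0;     /* assumed, per partition */
        double      end_offset_value = 2.0; /* RANGE ... 2 FOLLOWING */
        double      partition_tuples;
        double      peer_tuples;
        double      return_tuples;

        partition_tuples = input_tuples / num_partitions;   /* 1000 */
        peer_tuples = partition_tuples / peer_groups;       /* 20 */

        /* RANGE mode: the N FOLLOWING groups plus the initial peer group */
        return_tuples = peer_tuples * (end_offset_value + 1.0);     /* 60 */

        /* plus one tuple to detect the partition/peer boundary, capped */
        return_tuples = fmin(return_tuples + 1.0, partition_tuples);

        printf("startup tuples: %.0f\n", return_tuples);    /* 61 */
        return 0;
    }
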
    3096             : /*
    3097             :  * cost_windowagg
    3098             :  *      Determines and returns the cost of performing a WindowAgg plan node,
    3099             :  *      including the cost of its input.
    3100             :  *
    3101             :  * Input is assumed already properly sorted.
    3102             :  */
    3103             : void
    3104        2964 : cost_windowagg(Path *path, PlannerInfo *root,
    3105             :                List *windowFuncs, WindowClause *winclause,
    3106             :                int input_disabled_nodes,
    3107             :                Cost input_startup_cost, Cost input_total_cost,
    3108             :                double input_tuples)
    3109             : {
    3110             :     Cost        startup_cost;
    3111             :     Cost        total_cost;
    3112             :     double      startup_tuples;
    3113             :     int         numPartCols;
    3114             :     int         numOrderCols;
    3115             :     ListCell   *lc;
    3116             : 
    3117        2964 :     numPartCols = list_length(winclause->partitionClause);
    3118        2964 :     numOrderCols = list_length(winclause->orderClause);
    3119             : 
    3120        2964 :     startup_cost = input_startup_cost;
    3121        2964 :     total_cost = input_total_cost;
    3122             : 
    3123             :     /*
    3124             :      * Window functions are assumed to cost their stated execution cost, plus
    3125             :      * the cost of evaluating their input expressions, per tuple.  Since they
    3126             :      * may in fact evaluate their inputs at multiple rows during each cycle,
    3127             :      * this could be a drastic underestimate; but without a way to know how
    3128             :      * many rows the window function will fetch, it's hard to do better.  In
    3129             :      * any case, it's a good estimate for all the built-in window functions,
    3130             :      * so we'll just do this for now.
    3131             :      */
    3132        6798 :     foreach(lc, windowFuncs)
    3133             :     {
    3134        3834 :         WindowFunc *wfunc = lfirst_node(WindowFunc, lc);
    3135             :         Cost        wfunccost;
    3136             :         QualCost    argcosts;
    3137             : 
    3138        3834 :         argcosts.startup = argcosts.per_tuple = 0;
    3139        3834 :         add_function_cost(root, wfunc->winfnoid, (Node *) wfunc,
    3140             :                           &argcosts);
    3141        3834 :         startup_cost += argcosts.startup;
    3142        3834 :         wfunccost = argcosts.per_tuple;
    3143             : 
    3144             :         /* also add the input expressions' cost to per-input-row costs */
    3145        3834 :         cost_qual_eval_node(&argcosts, (Node *) wfunc->args, root);
    3146        3834 :         startup_cost += argcosts.startup;
    3147        3834 :         wfunccost += argcosts.per_tuple;
    3148             : 
    3149             :         /*
    3150             :          * Add the filter's cost to per-input-row costs.  XXX We should reduce
    3151             :          * input expression costs according to filter selectivity.
    3152             :          */
    3153        3834 :         cost_qual_eval_node(&argcosts, (Node *) wfunc->aggfilter, root);
    3154        3834 :         startup_cost += argcosts.startup;
    3155        3834 :         wfunccost += argcosts.per_tuple;
    3156             : 
    3157        3834 :         total_cost += wfunccost * input_tuples;
    3158             :     }
    3159             : 
    3160             :     /*
    3161             :      * We also charge cpu_operator_cost per grouping column per tuple for
    3162             :      * grouping comparisons, plus cpu_tuple_cost per tuple for general
    3163             :      * overhead.
    3164             :      *
    3165             :      * XXX this neglects costs of spooling the data to disk when it overflows
    3166             :      * work_mem.  Sooner or later that should get accounted for.
    3167             :      */
    3168        2964 :     total_cost += cpu_operator_cost * (numPartCols + numOrderCols) * input_tuples;
    3169        2964 :     total_cost += cpu_tuple_cost * input_tuples;
    3170             : 
    3171        2964 :     path->rows = input_tuples;
    3172        2964 :     path->disabled_nodes = input_disabled_nodes;
    3173        2964 :     path->startup_cost = startup_cost;
    3174        2964 :     path->total_cost = total_cost;
    3175             : 
    3176             :     /*
    3177             :      * Also, take into account how many tuples we need to read from the
    3178             :      * subnode in order to produce the first tuple from the WindowAgg.  To do
    3179             :      * this we apportion the run cost (total cost not including startup cost)
    3180             :      * over the estimated startup tuples.  We already included the startup
    3181             :      * cost of the subnode, so we only need to do this when the estimated
    3182             :      * startup tuples is above 1.0.
    3183             :      */
    3184        2964 :     startup_tuples = get_windowclause_startup_tuples(root, winclause,
    3185             :                                                      input_tuples);
    3186             : 
    3187        2964 :     if (startup_tuples > 1.0)
    3188        2556 :         path->startup_cost += (total_cost - startup_cost) / input_tuples *
    3189        2556 :             (startup_tuples - 1.0);
    3190        2964 : }
    3191             : 
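A quick numeric check of that proration with invented costs: if the node's total cost is 105, its startup cost 5, and 61 of 10,000 subnode tuples must be read before the first output, the startup cost grows by (105 - 5) / 10000 * 60 = 0.6 units:

    #include <stdio.h>

    int
    main(void)
    {
        double      input_tuples = 10000.0; /* invented */
        double      startup_cost = 5.0;     /* invented subnode + wfunc startup */
        double      total_cost = 105.0;     /* invented */
        double      startup_tuples = 61.0;  /* from the previous sketch */

        if (startup_tuples > 1.0)
            startup_cost += (total_cost - startup_cost) / input_tuples *
                (startup_tuples - 1.0);

        printf("prorated startup cost: %.3f\n", startup_cost);  /* 5.600 */
        return 0;
    }
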
    3192             : /*
    3193             :  * cost_group
    3194             :  *      Determines and returns the cost of performing a Group plan node,
    3195             :  *      including the cost of its input.
    3196             :  *
    3197             :  * Note: caller must ensure that input costs are for appropriately-sorted
    3198             :  * input.
    3199             :  */
    3200             : void
    3201        1226 : cost_group(Path *path, PlannerInfo *root,
    3202             :            int numGroupCols, double numGroups,
    3203             :            List *quals,
    3204             :            int input_disabled_nodes,
    3205             :            Cost input_startup_cost, Cost input_total_cost,
    3206             :            double input_tuples)
    3207             : {
    3208             :     double      output_tuples;
    3209             :     Cost        startup_cost;
    3210             :     Cost        total_cost;
    3211             : 
    3212        1226 :     output_tuples = numGroups;
    3213        1226 :     startup_cost = input_startup_cost;
    3214        1226 :     total_cost = input_total_cost;
    3215             : 
    3216             :     /*
    3217             :      * Charge one cpu_operator_cost per comparison per input tuple. We assume
    3218             :      * all columns get compared at most of the tuples.
    3219             :      */
    3220        1226 :     total_cost += cpu_operator_cost * input_tuples * numGroupCols;
    3221             : 
    3222             :     /*
    3223             :      * If there are quals (HAVING quals), account for their cost and
    3224             :      * selectivity.
    3225             :      * all columns get compared for most of the tuples.
    3226        1226 :     if (quals)
    3227             :     {
    3228             :         QualCost    qual_cost;
    3229             : 
    3230           0 :         cost_qual_eval(&qual_cost, quals, root);
    3231           0 :         startup_cost += qual_cost.startup;
    3232           0 :         total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
    3233             : 
    3234           0 :         output_tuples = clamp_row_est(output_tuples *
    3235           0 :                                       clauselist_selectivity(root,
    3236             :                                                              quals,
    3237             :                                                              0,
    3238             :                                                              JOIN_INNER,
    3239             :                                                              NULL));
    3240             :     }
    3241             : 
    3242        1226 :     path->rows = output_tuples;
    3243        1226 :     path->disabled_nodes = input_disabled_nodes;
    3244        1226 :     path->startup_cost = startup_cost;
    3245        1226 :     path->total_cost = total_cost;
    3246        1226 : }
    3247             : 
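The Group node's own CPU charge is a single product; the sketch below uses invented row counts and PostgreSQL's default cpu_operator_cost of 0.0025:

    #include <stdio.h>

    int
    main(void)
    {
        double      cpu_operator_cost = 0.0025;     /* PostgreSQL default */
        double      input_tuples = 100000.0;        /* invented */
        int         numGroupCols = 3;               /* invented */

        /* one comparison per grouping column per input tuple */
        printf("comparison cost: %.1f\n",
               cpu_operator_cost * input_tuples * numGroupCols);    /* 750.0 */
        return 0;
    }
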
    3248             : /*
    3249             :  * initial_cost_nestloop
    3250             :  *    Preliminary estimate of the cost of a nestloop join path.
    3251             :  *
    3252             :  * This must quickly produce lower-bound estimates of the path's startup and
    3253             :  * total costs.  If we are unable to eliminate the proposed path from
    3254             :  * consideration using the lower bounds, final_cost_nestloop will be called
    3255             :  * to obtain the final estimates.
    3256             :  *
    3257             :  * The exact division of labor between this function and final_cost_nestloop
    3258             :  * is private to them, and represents a tradeoff between speed of the initial
    3259             :  * estimate and getting a tight lower bound.  We choose to not examine the
    3260             :  * join quals here, since that's by far the most expensive part of the
    3261             :  * calculations.  The end result is that CPU-cost considerations must be
    3262             :  * left for the second phase; and for SEMI/ANTI joins, we must also postpone
    3263             :  * incorporation of the inner path's run cost.
    3264             :  *
    3265             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    3266             :  *      other data to be used by final_cost_nestloop
    3267             :  * 'jointype' is the type of join to be performed
    3268             :  * 'outer_path' is the outer input to the join
    3269             :  * 'inner_path' is the inner input to the join
    3270             :  * 'extra' contains miscellaneous information about the join
    3271             :  */
    3272             : void
    3273     3259348 : initial_cost_nestloop(PlannerInfo *root, JoinCostWorkspace *workspace,
    3274             :                       JoinType jointype,
    3275             :                       Path *outer_path, Path *inner_path,
    3276             :                       JoinPathExtraData *extra)
    3277             : {
    3278             :     int         disabled_nodes;
    3279     3259348 :     Cost        startup_cost = 0;
    3280     3259348 :     Cost        run_cost = 0;
    3281     3259348 :     double      outer_path_rows = outer_path->rows;
    3282             :     Cost        inner_rescan_start_cost;
    3283             :     Cost        inner_rescan_total_cost;
    3284             :     Cost        inner_run_cost;
    3285             :     Cost        inner_rescan_run_cost;
    3286             : 
    3287             :     /* Count up disabled nodes. */
    3288     3259348 :     disabled_nodes = enable_nestloop ? 0 : 1;
    3289     3259348 :     disabled_nodes += inner_path->disabled_nodes;
    3290     3259348 :     disabled_nodes += outer_path->disabled_nodes;
    3291             : 
    3292             :     /* estimate costs to rescan the inner relation */
    3293     3259348 :     cost_rescan(root, inner_path,
    3294             :                 &inner_rescan_start_cost,
    3295             :                 &inner_rescan_total_cost);
    3296             : 
    3297             :     /* cost of source data */
    3298             : 
    3299             :     /*
    3300             :      * NOTE: clearly, we must pay both outer and inner paths' startup_cost
    3301             :      * before we can start returning tuples, so the join's startup cost is
    3302             :      * their sum.  We'll also pay the inner path's rescan startup cost
    3303             :      * multiple times.
    3304             :      */
    3305     3259348 :     startup_cost += outer_path->startup_cost + inner_path->startup_cost;
    3306     3259348 :     run_cost += outer_path->total_cost - outer_path->startup_cost;
    3307     3259348 :     if (outer_path_rows > 1)
    3308     2383426 :         run_cost += (outer_path_rows - 1) * inner_rescan_start_cost;
    3309             : 
    3310     3259348 :     inner_run_cost = inner_path->total_cost - inner_path->startup_cost;
    3311     3259348 :     inner_rescan_run_cost = inner_rescan_total_cost - inner_rescan_start_cost;
    3312             : 
    3313     3259348 :     if (jointype == JOIN_SEMI || jointype == JOIN_ANTI ||
    3314     3198104 :         extra->inner_unique)
    3315             :     {
    3316             :         /*
    3317             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3318             :          * executor will stop after the first match.
    3319             :          *
    3320             :          * Getting decent estimates requires inspection of the join quals,
    3321             :          * which we choose to postpone to final_cost_nestloop.
    3322             :          */
    3323             : 
    3324             :         /* Save private data for final_cost_nestloop */
    3325     1323660 :         workspace->inner_run_cost = inner_run_cost;
    3326     1323660 :         workspace->inner_rescan_run_cost = inner_rescan_run_cost;
    3327             :     }
    3328             :     else
    3329             :     {
    3330             :         /* Normal case; we'll scan whole input rel for each outer row */
    3331     1935688 :         run_cost += inner_run_cost;
    3332     1935688 :         if (outer_path_rows > 1)
    3333     1499860 :             run_cost += (outer_path_rows - 1) * inner_rescan_run_cost;
    3334             :     }
    3335             : 
    3336             :     /* CPU costs left for later */
    3337             : 
    3338             :     /* Public result fields */
    3339     3259348 :     workspace->disabled_nodes = disabled_nodes;
    3340     3259348 :     workspace->startup_cost = startup_cost;
    3341     3259348 :     workspace->total_cost = startup_cost + run_cost;
    3342             :     /* Save private data for final_cost_nestloop */
    3343     3259348 :     workspace->run_cost = run_cost;
    3344     3259348 : }
    3345             : 
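A minimal sketch of the normal-case (non-SEMI/ANTI, non-unique) lower bound above, with invented path costs; the rescan figures stand in for cost_rescan() output, which this sketch does not compute:

    #include <stdio.h>

    int
    main(void)
    {
        /* invented input-path figures */
        double      outer_startup = 0.0, outer_total = 100.0, outer_rows = 50.0;
        double      inner_startup = 0.25, inner_total = 8.0;
        double      rescan_start = 0.0, rescan_total = 4.0; /* assumed cost_rescan() */
        double      startup_cost, run_cost;

        startup_cost = outer_startup + inner_startup;
        run_cost = outer_total - outer_startup;
        if (outer_rows > 1)
            run_cost += (outer_rows - 1) * rescan_start;

        /* normal case: full inner scan once, then rescans for later outer rows */
        run_cost += inner_total - inner_startup;
        if (outer_rows > 1)
            run_cost += (outer_rows - 1) * (rescan_total - rescan_start);

        printf("startup=%.2f total=%.2f\n", startup_cost, startup_cost + run_cost);
        return 0;
    }
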
    3346             : /*
    3347             :  * final_cost_nestloop
    3348             :  *    Final estimate of the cost and result size of a nestloop join path.
    3349             :  *
    3350             :  * 'path' is already filled in except for the rows and cost fields
    3351             :  * 'workspace' is the result from initial_cost_nestloop
    3352             :  * 'extra' contains miscellaneous information about the join
    3353             :  */
    3354             : void
    3355     1460290 : final_cost_nestloop(PlannerInfo *root, NestPath *path,
    3356             :                     JoinCostWorkspace *workspace,
    3357             :                     JoinPathExtraData *extra)
    3358             : {
    3359     1460290 :     Path       *outer_path = path->jpath.outerjoinpath;
    3360     1460290 :     Path       *inner_path = path->jpath.innerjoinpath;
    3361     1460290 :     double      outer_path_rows = outer_path->rows;
    3362     1460290 :     double      inner_path_rows = inner_path->rows;
    3363     1460290 :     Cost        startup_cost = workspace->startup_cost;
    3364     1460290 :     Cost        run_cost = workspace->run_cost;
    3365             :     Cost        cpu_per_tuple;
    3366             :     QualCost    restrict_qual_cost;
    3367             :     double      ntuples;
    3368             : 
    3369             :     /* Set the number of disabled nodes. */
    3370     1460290 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    3371             : 
    3372             :     /* Protect some assumptions below that rowcounts aren't zero */
    3373     1460290 :     if (outer_path_rows <= 0)
    3374           0 :         outer_path_rows = 1;
    3375     1460290 :     if (inner_path_rows <= 0)
    3376         726 :         inner_path_rows = 1;
    3377             :     /* Mark the path with the correct row estimate */
    3378     1460290 :     if (path->jpath.path.param_info)
    3379       32864 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    3380             :     else
    3381     1427426 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    3382             : 
    3383             :     /* For partial paths, scale row estimate. */
    3384     1460290 :     if (path->jpath.path.parallel_workers > 0)
    3385             :     {
    3386       44038 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    3387             : 
    3388       44038 :         path->jpath.path.rows =
    3389       44038 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    3390             :     }
    3391             : 
    3392             :     /* cost of inner-relation source data (we already dealt with outer rel) */
    3393             : 
    3394     1460290 :     if (path->jpath.jointype == JOIN_SEMI || path->jpath.jointype == JOIN_ANTI ||
    3395     1417746 :         extra->inner_unique)
    3396      913154 :     {
    3397             :         /*
    3398             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3399             :          * executor will stop after the first match.
    3400             :          */
    3401      913154 :         Cost        inner_run_cost = workspace->inner_run_cost;
    3402      913154 :         Cost        inner_rescan_run_cost = workspace->inner_rescan_run_cost;
    3403             :         double      outer_matched_rows;
    3404             :         double      outer_unmatched_rows;
    3405             :         Selectivity inner_scan_frac;
    3406             : 
    3407             :         /*
    3408             :          * For an outer-rel row that has at least one match, we can expect the
    3409             :          * inner scan to stop after a fraction 1/(match_count+1) of the inner
    3410             :          * rows, if the matches are evenly distributed.  Since they probably
    3411             :          * aren't quite evenly distributed, we apply a fuzz factor of 2.0 to
    3412             :          * that fraction.  (If we used a larger fuzz factor, we'd have to
    3413             :          * clamp inner_scan_frac to at most 1.0; but since match_count is at
    3414             :          * least 1, no such clamp is needed now.)
    3415             :          */
    3416      913154 :         outer_matched_rows = rint(outer_path_rows * extra->semifactors.outer_match_frac);
    3417      913154 :         outer_unmatched_rows = outer_path_rows - outer_matched_rows;
    3418      913154 :         inner_scan_frac = 2.0 / (extra->semifactors.match_count + 1.0);
    3419             : 
    3420             :         /*
    3421             :          * Compute number of tuples processed (not number emitted!).  First,
    3422             :          * account for successfully-matched outer rows.
    3423             :          */
    3424      913154 :         ntuples = outer_matched_rows * inner_path_rows * inner_scan_frac;
    3425             : 
    3426             :         /*
    3427             :          * Now we need to estimate the actual costs of scanning the inner
    3428             :          * relation, which may be quite a bit less than N times inner_run_cost
    3429             :          * due to early scan stops.  We consider two cases.  If the inner path
    3430             :          * is an indexscan using all the joinquals as indexquals, then an
    3431             :          * unmatched outer row results in an indexscan returning no rows,
    3432             :          * which is probably quite cheap.  Otherwise, the executor will have
    3433             :          * to scan the whole inner rel for an unmatched row; not so cheap.
    3434             :          */
    3435      913154 :         if (has_indexed_join_quals(path))
    3436             :         {
    3437             :             /*
    3438             :              * Successfully-matched outer rows will only require scanning
    3439             :              * inner_scan_frac of the inner relation.  In this case, we don't
    3440             :              * need to charge the full inner_run_cost even when that's more
    3441             :              * than inner_rescan_run_cost, because we can assume that none of
    3442             :              * the inner scans ever scan the whole inner relation.  So it's
    3443             :              * okay to assume that all the inner scan executions can be
    3444             :              * fractions of the full cost, even if materialization is reducing
    3445             :              * the rescan cost.  At this writing, it's impossible to get here
    3446             :              * for a materialized inner scan, so inner_run_cost and
    3447             :              * inner_rescan_run_cost will be the same anyway; but just in
    3448             :              * case, use inner_run_cost for the first matched tuple and
    3449             :              * inner_rescan_run_cost for additional ones.
    3450             :              */
    3451      149722 :             run_cost += inner_run_cost * inner_scan_frac;
    3452      149722 :             if (outer_matched_rows > 1)
    3453       22780 :                 run_cost += (outer_matched_rows - 1) * inner_rescan_run_cost * inner_scan_frac;
    3454             : 
    3455             :             /*
    3456             :              * Add the cost of inner-scan executions for unmatched outer rows.
    3457             :              * We estimate this as the same cost as returning the first tuple
    3458             :              * of a nonempty scan.  We consider that these are all rescans,
    3459             :              * since we used inner_run_cost once already.
    3460             :              */
    3461      149722 :             run_cost += outer_unmatched_rows *
    3462      149722 :                 inner_rescan_run_cost / inner_path_rows;
    3463             : 
    3464             :             /*
    3465             :              * We won't be evaluating any quals at all for unmatched rows, so
    3466             :              * don't add them to ntuples.
    3467             :              */
    3468             :         }
    3469             :         else
    3470             :         {
    3471             :             /*
    3472             :              * Here, a complicating factor is that rescans may be cheaper than
    3473             :              * first scans.  If we never scan all the way to the end of the
    3474             :              * inner rel, it might be (depending on the plan type) that we'd
    3475             :              * never pay the whole inner first-scan run cost.  However it is
    3476             :              * difficult to estimate whether that will happen (and it could
    3477             :              * not happen if there are any unmatched outer rows!), so be
    3478             :              * conservative and always charge the whole first-scan cost once.
    3479             :              * We consider this charge to correspond to the first unmatched
    3480             :              * outer row, unless there isn't one in our estimate, in which
    3481             :              * case blame it on the first matched row.
    3482             :              */
    3483             : 
    3484             :             /* First, count all unmatched join tuples as being processed */
    3485      763432 :             ntuples += outer_unmatched_rows * inner_path_rows;
    3486             : 
    3487             :             /* Now add the forced full scan, and decrement appropriate count */
    3488      763432 :             run_cost += inner_run_cost;
    3489      763432 :             if (outer_unmatched_rows >= 1)
    3490      726608 :                 outer_unmatched_rows -= 1;
    3491             :             else
    3492       36824 :                 outer_matched_rows -= 1;
    3493             : 
    3494             :             /* Add inner run cost for additional outer tuples having matches */
    3495      763432 :             if (outer_matched_rows > 0)
    3496      276552 :                 run_cost += outer_matched_rows * inner_rescan_run_cost * inner_scan_frac;
    3497             : 
    3498             :             /* Add inner run cost for additional unmatched outer tuples */
    3499      763432 :             if (outer_unmatched_rows > 0)
    3500      483792 :                 run_cost += outer_unmatched_rows * inner_rescan_run_cost;
    3501             :         }
    3502             :     }
    3503             :     else
    3504             :     {
    3505             :         /* Normal-case source costs were included in preliminary estimate */
    3506             : 
    3507             :         /* Compute number of tuples processed (not number emitted!) */
    3508      547136 :         ntuples = outer_path_rows * inner_path_rows;
    3509             :     }
    3510             : 
    3511             :     /* CPU costs */
    3512     1460290 :     cost_qual_eval(&restrict_qual_cost, path->jpath.joinrestrictinfo, root);
    3513     1460290 :     startup_cost += restrict_qual_cost.startup;
    3514     1460290 :     cpu_per_tuple = cpu_tuple_cost + restrict_qual_cost.per_tuple;
    3515     1460290 :     run_cost += cpu_per_tuple * ntuples;
    3516             : 
    3517             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    3518     1460290 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    3519     1460290 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    3520             : 
    3521     1460290 :     path->jpath.path.startup_cost = startup_cost;
    3522     1460290 :     path->jpath.path.total_cost = startup_cost + run_cost;
    3523     1460290 : }
    3524             : 
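To illustrate the SEMI/ANTI accounting above with invented figures: 1,000 outer rows of which 60% have matches, an average of 3 matches apiece (so inner_scan_frac = 2/(3+1) = 0.5), and 200 inner rows. A sketch of just the ntuples computation for the no-indexquals branch:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* invented semifactors and row counts */
        double      outer_rows = 1000.0;
        double      inner_rows = 200.0;
        double      outer_match_frac = 0.6;
        double      match_count = 3.0;
        double      outer_matched, outer_unmatched, inner_scan_frac, ntuples;

        outer_matched = rint(outer_rows * outer_match_frac);        /* 600 */
        outer_unmatched = outer_rows - outer_matched;               /* 400 */
        inner_scan_frac = 2.0 / (match_count + 1.0);                /* 0.5 */

        /*
         * Matched outer rows stop the inner scan early; unmatched ones
         * scan the whole inner rel (the no-indexquals branch).
         */
        ntuples = outer_matched * inner_rows * inner_scan_frac +
            outer_unmatched * inner_rows;

        printf("tuples processed: %.0f\n", ntuples);        /* 140000 */
        return 0;
    }
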
    3525             : /*
    3526             :  * initial_cost_mergejoin
    3527             :  *    Preliminary estimate of the cost of a mergejoin path.
    3528             :  *
    3529             :  * This must quickly produce lower-bound estimates of the path's startup and
    3530             :  * total costs.  If we are unable to eliminate the proposed path from
    3531             :  * consideration using the lower bounds, final_cost_mergejoin will be called
    3532             :  * to obtain the final estimates.
    3533             :  *
    3534             :  * The exact division of labor between this function and final_cost_mergejoin
    3535             :  * is private to them, and represents a tradeoff between speed of the initial
    3536             :  * estimate and getting a tight lower bound.  We choose to not examine the
    3537             :  * join quals here, except for obtaining the scan selectivity estimate, which
    3538             :  * is really essential (but fortunately, use of caching keeps the cost of
    3539             :  * getting that down to something reasonable).
    3540             :  * We also assume that cost_sort/cost_incremental_sort is cheap enough to use
    3541             :  * here.
    3542             :  *
    3543             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    3544             :  *      other data to be used by final_cost_mergejoin
    3545             :  * 'jointype' is the type of join to be performed
    3546             :  * 'mergeclauses' is the list of joinclauses to be used as merge clauses
    3547             :  * 'outer_path' is the outer input to the join
    3548             :  * 'inner_path' is the inner input to the join
    3549             :  * 'outersortkeys' is the list of sort keys for the outer path
    3550             :  * 'innersortkeys' is the list of sort keys for the inner path
    3551             :  * 'outer_presorted_keys' is the number of presorted keys of the outer path
    3552             :  * 'extra' contains miscellaneous information about the join
    3553             :  *
    3554             :  * Note: outersortkeys and innersortkeys should be NIL if no explicit
    3555             :  * sort is needed because the respective source path is already ordered.
    3556             :  */
    3557             : void
    3558     1476348 : initial_cost_mergejoin(PlannerInfo *root, JoinCostWorkspace *workspace,
    3559             :                        JoinType jointype,
    3560             :                        List *mergeclauses,
    3561             :                        Path *outer_path, Path *inner_path,
    3562             :                        List *outersortkeys, List *innersortkeys,
    3563             :                        int outer_presorted_keys,
    3564             :                        JoinPathExtraData *extra)
    3565             : {
    3566             :     int         disabled_nodes;
    3567     1476348 :     Cost        startup_cost = 0;
    3568     1476348 :     Cost        run_cost = 0;
    3569     1476348 :     double      outer_path_rows = outer_path->rows;
    3570     1476348 :     double      inner_path_rows = inner_path->rows;
    3571             :     Cost        inner_run_cost;
    3572             :     double      outer_rows,
    3573             :                 inner_rows,
    3574             :                 outer_skip_rows,
    3575             :                 inner_skip_rows;
    3576             :     Selectivity outerstartsel,
    3577             :                 outerendsel,
    3578             :                 innerstartsel,
    3579             :                 innerendsel;
    3580             :     Path        sort_path;      /* dummy for result of
    3581             :                                  * cost_sort/cost_incremental_sort */
    3582             : 
    3583             :     /* Protect some assumptions below that rowcounts aren't zero */
    3584     1476348 :     if (outer_path_rows <= 0)
    3585          96 :         outer_path_rows = 1;
    3586     1476348 :     if (inner_path_rows <= 0)
    3587         126 :         inner_path_rows = 1;
    3588             : 
    3589             :     /*
    3590             :      * A merge join will stop as soon as it exhausts either input stream
    3591             :      * (unless it's an outer join, in which case the outer side has to be
    3592             :      * scanned all the way anyway).  Estimate fraction of the left and right
    3593             :      * inputs that will actually need to be scanned.  Likewise, we can
    3594             :      * estimate the number of rows that will be skipped before the first join
    3595             :      * pair is found, which should be factored into startup cost. We use only
    3596             :      * the first (most significant) merge clause for this purpose. Since
    3597             :      * mergejoinscansel() is a fairly expensive computation, we cache the
    3598             :      * results in the merge clause RestrictInfo.
    3599             :      */
    3600     1476348 :     if (mergeclauses && jointype != JOIN_FULL)
    3601     1470192 :     {
    3602     1470192 :         RestrictInfo *firstclause = (RestrictInfo *) linitial(mergeclauses);
    3603             :         List       *opathkeys;
    3604             :         List       *ipathkeys;
    3605             :         PathKey    *opathkey;
    3606             :         PathKey    *ipathkey;
    3607             :         MergeScanSelCache *cache;
    3608             : 
    3609             :         /* Get the input pathkeys to determine the sort-order details */
    3610     1470192 :         opathkeys = outersortkeys ? outersortkeys : outer_path->pathkeys;
    3611     1470192 :         ipathkeys = innersortkeys ? innersortkeys : inner_path->pathkeys;
    3612             :         Assert(opathkeys);
    3613             :         Assert(ipathkeys);
    3614     1470192 :         opathkey = (PathKey *) linitial(opathkeys);
    3615     1470192 :         ipathkey = (PathKey *) linitial(ipathkeys);
    3616             :         /* debugging check */
    3617     1470192 :         if (opathkey->pk_opfamily != ipathkey->pk_opfamily ||
    3618     1470192 :             opathkey->pk_eclass->ec_collation != ipathkey->pk_eclass->ec_collation ||
    3619     1470192 :             opathkey->pk_cmptype != ipathkey->pk_cmptype ||
    3620     1470192 :             opathkey->pk_nulls_first != ipathkey->pk_nulls_first)
    3621           0 :             elog(ERROR, "left and right pathkeys do not match in mergejoin");
    3622             : 
    3623             :         /* Get the selectivity with caching */
    3624     1470192 :         cache = cached_scansel(root, firstclause, opathkey);
    3625             : 
    3626     1470192 :         if (bms_is_subset(firstclause->left_relids,
    3627     1470192 :                           outer_path->parent->relids))
    3628             :         {
    3629             :             /* left side of clause is outer */
    3630      766760 :             outerstartsel = cache->leftstartsel;
    3631      766760 :             outerendsel = cache->leftendsel;
    3632      766760 :             innerstartsel = cache->rightstartsel;
    3633      766760 :             innerendsel = cache->rightendsel;
    3634             :         }
    3635             :         else
    3636             :         {
    3637             :             /* left side of clause is inner */
    3638      703432 :             outerstartsel = cache->rightstartsel;
    3639      703432 :             outerendsel = cache->rightendsel;
    3640      703432 :             innerstartsel = cache->leftstartsel;
    3641      703432 :             innerendsel = cache->leftendsel;
    3642             :         }
    3643     1470192 :         if (jointype == JOIN_LEFT ||
    3644             :             jointype == JOIN_ANTI)
    3645             :         {
    3646      197822 :             outerstartsel = 0.0;
    3647      197822 :             outerendsel = 1.0;
    3648             :         }
    3649     1272370 :         else if (jointype == JOIN_RIGHT ||
    3650             :                  jointype == JOIN_RIGHT_ANTI)
    3651             :         {
    3652      197126 :             innerstartsel = 0.0;
    3653      197126 :             innerendsel = 1.0;
    3654             :         }
    3655             :     }
    3656             :     else
    3657             :     {
    3658             :         /* cope with clauseless or full mergejoin */
    3659        6156 :         outerstartsel = innerstartsel = 0.0;
    3660        6156 :         outerendsel = innerendsel = 1.0;
    3661             :     }
    3662             : 
    3663             :     /*
    3664             :      * Convert selectivities to row counts.  We force outer_rows and
    3665             :      * inner_rows to be at least 1, but the skip_rows estimates can be zero.
    3666             :      */
    3667     1476348 :     outer_skip_rows = rint(outer_path_rows * outerstartsel);
    3668     1476348 :     inner_skip_rows = rint(inner_path_rows * innerstartsel);
    3669     1476348 :     outer_rows = clamp_row_est(outer_path_rows * outerendsel);
    3670     1476348 :     inner_rows = clamp_row_est(inner_path_rows * innerendsel);
    3671             : 
    3672             :     Assert(outer_skip_rows <= outer_rows);
    3673             :     Assert(inner_skip_rows <= inner_rows);
    3674             : 
    3675             :     /*
    3676             :      * Readjust scan selectivities to account for above rounding.  This is
    3677             :      * normally an insignificant effect, but when there are only a few rows in
    3678             :      * the inputs, failing to do this makes for a large percentage error.
    3679             :      */
    3680     1476348 :     outerstartsel = outer_skip_rows / outer_path_rows;
    3681     1476348 :     innerstartsel = inner_skip_rows / inner_path_rows;
    3682     1476348 :     outerendsel = outer_rows / outer_path_rows;
    3683     1476348 :     innerendsel = inner_rows / inner_path_rows;
    3684             : 
    3685             :     Assert(outerstartsel <= outerendsel);
    3686             :     Assert(innerstartsel <= innerendsel);
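/*
 * Illustration with invented numbers: given 5 outer rows and
 * outerendsel = 0.55, outer_rows = clamp_row_est(2.75) = 3, so the
 * readjustment above bumps outerendsel to 3/5 = 0.6.  On large inputs
 * the shift is negligible, but here it is nine percent, which would
 * otherwise skew the startup/run cost split computed below.
 */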
    3687             : 
    3688     1476348 :     disabled_nodes = enable_mergejoin ? 0 : 1;
    3689             : 
    3690             :     /* cost of source data */
    3691             : 
    3692     1476348 :     if (outersortkeys)          /* do we need to sort outer? */
    3693             :     {
    3694             :         /*
    3695             :          * We can assert that the outer path is not already ordered
    3696             :          * appropriately for the mergejoin; otherwise, outersortkeys would
    3697             :          * have been set to NIL.
    3698             :          */
    3699             :         Assert(!pathkeys_contained_in(outersortkeys, outer_path->pathkeys));
    3700             : 
    3701             :         /*
    3702             :          * We choose to use incremental sort if it is enabled and there are
    3703             :          * presorted keys; otherwise we use full sort.
    3704             :          */
    3705      754442 :         if (enable_incremental_sort && outer_presorted_keys > 0)
    3706             :         {
    3707        1698 :             cost_incremental_sort(&sort_path,
    3708             :                                   root,
    3709             :                                   outersortkeys,
    3710             :                                   outer_presorted_keys,
    3711             :                                   outer_path->disabled_nodes,
    3712             :                                   outer_path->startup_cost,
    3713             :                                   outer_path->total_cost,
    3714             :                                   outer_path_rows,
    3715        1698 :                                   outer_path->pathtarget->width,
    3716             :                                   0.0,
    3717             :                                   work_mem,
    3718             :                                   -1.0);
    3719             :         }
    3720             :         else
    3721             :         {
    3722      752744 :             cost_sort(&sort_path,
    3723             :                       root,
    3724             :                       outersortkeys,
    3725             :                       outer_path->disabled_nodes,
    3726             :                       outer_path->total_cost,
    3727             :                       outer_path_rows,
    3728      752744 :                       outer_path->pathtarget->width,
    3729             :                       0.0,
    3730             :                       work_mem,
    3731             :                       -1.0);
    3732             :         }
    3733             : 
    3734      754442 :         disabled_nodes += sort_path.disabled_nodes;
    3735      754442 :         startup_cost += sort_path.startup_cost;
    3736      754442 :         startup_cost += (sort_path.total_cost - sort_path.startup_cost)
    3737      754442 :             * outerstartsel;
    3738      754442 :         run_cost += (sort_path.total_cost - sort_path.startup_cost)
    3739      754442 :             * (outerendsel - outerstartsel);
    3740             :     }
    3741             :     else
    3742             :     {
    3743      721906 :         disabled_nodes += outer_path->disabled_nodes;
    3744      721906 :         startup_cost += outer_path->startup_cost;
    3745      721906 :         startup_cost += (outer_path->total_cost - outer_path->startup_cost)
    3746      721906 :             * outerstartsel;
    3747      721906 :         run_cost += (outer_path->total_cost - outer_path->startup_cost)
    3748      721906 :             * (outerendsel - outerstartsel);
    3749             :     }
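                     : 
                     :     /*
                     :      * For illustration (with made-up numbers): if the outer input costs
                     :      * 0 to start and 100 to run to completion, and outerstartsel = 0.1
                     :      * while outerendsel = 0.8, then 10 units of its run cost are charged
                     :      * to startup (reaching the first possibly-matching tuple) and 70 to
                     :      * run_cost (scanning through the last one); the remaining 20 units
                     :      * are never paid, since the merge stops early.
                     :      */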
    3750             : 
    3751     1476348 :     if (innersortkeys)          /* do we need to sort inner? */
    3752             :     {
    3753             :         /*
    3754             :          * We can assert that the inner path is not already ordered
    3755             :          * appropriately for the mergejoin; otherwise, innersortkeys would
    3756             :          * have been set to NIL.
    3757             :          */
    3758             :         Assert(!pathkeys_contained_in(innersortkeys, inner_path->pathkeys));
    3759             : 
    3760             :         /*
    3761             :          * We do not consider incremental sort for the inner path, because
    3762             :          * incremental sort does not support mark/restore.
    3763             :          */
    3764             : 
    3765     1188296 :         cost_sort(&sort_path,
    3766             :                   root,
    3767             :                   innersortkeys,
    3768             :                   inner_path->disabled_nodes,
    3769             :                   inner_path->total_cost,
    3770             :                   inner_path_rows,
    3771     1188296 :                   inner_path->pathtarget->width,
    3772             :                   0.0,
    3773             :                   work_mem,
    3774             :                   -1.0);
    3775     1188296 :         disabled_nodes += sort_path.disabled_nodes;
    3776     1188296 :         startup_cost += sort_path.startup_cost;
    3777     1188296 :         startup_cost += (sort_path.total_cost - sort_path.startup_cost)
    3778     1188296 :             * innerstartsel;
    3779     1188296 :         inner_run_cost = (sort_path.total_cost - sort_path.startup_cost)
    3780     1188296 :             * (innerendsel - innerstartsel);
    3781             :     }
    3782             :     else
    3783             :     {
    3784      288052 :         disabled_nodes += inner_path->disabled_nodes;
    3785      288052 :         startup_cost += inner_path->startup_cost;
    3786      288052 :         startup_cost += (inner_path->total_cost - inner_path->startup_cost)
    3787      288052 :             * innerstartsel;
    3788      288052 :         inner_run_cost = (inner_path->total_cost - inner_path->startup_cost)
    3789      288052 :             * (innerendsel - innerstartsel);
    3790             :     }
    3791             : 
    3792             :     /*
    3793             :      * We can't yet determine whether rescanning occurs, or whether
    3794             :      * materialization of the inner input should be done.  The minimum
    3795             :      * possible inner input cost, regardless of rescan and materialization
    3796             :      * considerations, is inner_run_cost.  We include that in
    3797             :      * workspace->total_cost, but not yet in run_cost.
    3798             :      */
    3799             : 
    3800             :     /* CPU costs left for later */
    3801             : 
    3802             :     /* Public result fields */
    3803     1476348 :     workspace->disabled_nodes = disabled_nodes;
    3804     1476348 :     workspace->startup_cost = startup_cost;
    3805     1476348 :     workspace->total_cost = startup_cost + run_cost + inner_run_cost;
    3806             :     /* Save private data for final_cost_mergejoin */
    3807     1476348 :     workspace->run_cost = run_cost;
    3808     1476348 :     workspace->inner_run_cost = inner_run_cost;
    3809     1476348 :     workspace->outer_rows = outer_rows;
    3810     1476348 :     workspace->inner_rows = inner_rows;
    3811     1476348 :     workspace->outer_skip_rows = outer_skip_rows;
    3812     1476348 :     workspace->inner_skip_rows = inner_skip_rows;
    3813     1476348 : }
    3814             : 
    3815             : /*
    3816             :  * final_cost_mergejoin
    3817             :  *    Final estimate of the cost and result size of a mergejoin path.
    3818             :  *
    3819             :  * Unlike other costsize functions, this routine makes two actual decisions:
    3820             :  * whether the executor will need to do mark/restore, and whether we should
    3821             :  * materialize the inner path.  It would be logically cleaner to build
    3822             :  * separate paths testing these alternatives, but that would require repeating
    3823             :  * most of the cost calculations, which are not all that cheap.  Since the
    3824             :  * choice will not affect output pathkeys or startup cost, only total cost,
    3825             :  * there is no possibility of wanting to keep more than one path.  So it seems
    3826             :  * best to make the decisions here and record them in the path's
    3827             :  * skip_mark_restore and materialize_inner fields.
    3828             :  *
    3829             :  * Mark/restore overhead is usually required, but can be skipped if we know
    3830             :  * that the executor need find only one match per outer tuple, and that the
    3831             :  * mergeclauses are sufficient to identify a match.
    3832             :  *
    3833             :  * We materialize the inner path if we need mark/restore and either the inner
    3834             :  * path can't support mark/restore, or it's cheaper to use an interposed
    3835             :  * Material node to handle mark/restore.
    3836             :  *
    3837             :  * 'path' is already filled in except for the rows and cost fields and
    3838             :  *      skip_mark_restore and materialize_inner
    3839             :  * 'workspace' is the result from initial_cost_mergejoin
    3840             :  * 'extra' contains miscellaneous information about the join
    3841             :  */
    3842             : void
    3843      459832 : final_cost_mergejoin(PlannerInfo *root, MergePath *path,
    3844             :                      JoinCostWorkspace *workspace,
    3845             :                      JoinPathExtraData *extra)
    3846             : {
    3847      459832 :     Path       *outer_path = path->jpath.outerjoinpath;
    3848      459832 :     Path       *inner_path = path->jpath.innerjoinpath;
    3849      459832 :     double      inner_path_rows = inner_path->rows;
    3850      459832 :     List       *mergeclauses = path->path_mergeclauses;
    3851      459832 :     List       *innersortkeys = path->innersortkeys;
    3852      459832 :     Cost        startup_cost = workspace->startup_cost;
    3853      459832 :     Cost        run_cost = workspace->run_cost;
    3854      459832 :     Cost        inner_run_cost = workspace->inner_run_cost;
    3855      459832 :     double      outer_rows = workspace->outer_rows;
    3856      459832 :     double      inner_rows = workspace->inner_rows;
    3857      459832 :     double      outer_skip_rows = workspace->outer_skip_rows;
    3858      459832 :     double      inner_skip_rows = workspace->inner_skip_rows;
    3859             :     Cost        cpu_per_tuple,
    3860             :                 bare_inner_cost,
    3861             :                 mat_inner_cost;
    3862             :     QualCost    merge_qual_cost;
    3863             :     QualCost    qp_qual_cost;
    3864             :     double      mergejointuples,
    3865             :                 rescannedtuples;
    3866             :     double      rescanratio;
    3867             : 
    3868             :     /* Set the number of disabled nodes. */
    3869      459832 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    3870             : 
    3871             :     /* Protect some assumptions below that rowcounts aren't zero */
    3872      459832 :     if (inner_path_rows <= 0)
    3873          90 :         inner_path_rows = 1;
    3874             : 
    3875             :     /* Mark the path with the correct row estimate */
    3876      459832 :     if (path->jpath.path.param_info)
    3877        1644 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    3878             :     else
    3879      458188 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    3880             : 
    3881             :     /* For partial paths, scale row estimate. */
    3882      459832 :     if (path->jpath.path.parallel_workers > 0)
    3883             :     {
    3884       65832 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    3885             : 
    3886       65832 :         path->jpath.path.rows =
    3887       65832 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    3888             :     }
    3889             : 
    3890             :     /*
    3891             :      * Compute cost of the mergequals and qpquals (other restriction clauses)
    3892             :      * separately.
    3893             :      */
    3894      459832 :     cost_qual_eval(&merge_qual_cost, mergeclauses, root);
    3895      459832 :     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
    3896      459832 :     qp_qual_cost.startup -= merge_qual_cost.startup;
    3897      459832 :     qp_qual_cost.per_tuple -= merge_qual_cost.per_tuple;
    3898             : 
    3899             :     /*
    3900             :      * With a SEMI or ANTI join, or if the innerrel is known unique, the
    3901             :      * executor will stop scanning for matches after the first match.  When
    3902             :      * all the joinclauses are merge clauses, this means we don't ever need to
    3903             :      * back up the merge, and so we can skip mark/restore overhead.
    3904             :      */
    3905      459832 :     if ((path->jpath.jointype == JOIN_SEMI ||
    3906      452760 :          path->jpath.jointype == JOIN_ANTI ||
    3907      604898 :          extra->inner_unique) &&
    3908      159128 :         (list_length(path->jpath.joinrestrictinfo) ==
    3909      159128 :          list_length(path->path_mergeclauses)))
    3910      137356 :         path->skip_mark_restore = true;
    3911             :     else
    3912      322476 :         path->skip_mark_restore = false;
    3913             : 
    3914             :     /*
    3915             :      * Get approx # tuples passing the mergequals.  We use approx_tuple_count
    3916             :      * here because we need an estimate done with JOIN_INNER semantics.
    3917             :      */
    3918      459832 :     mergejointuples = approx_tuple_count(root, &path->jpath, mergeclauses);
    3919             : 
    3920             :     /*
    3921             :      * When there are equal merge keys in the outer relation, the mergejoin
    3922             :      * must rescan any matching tuples in the inner relation. This means
    3923             :      * re-fetching inner tuples; we have to estimate how often that happens.
    3924             :      *
    3925             :      * For regular inner and outer joins, the number of re-fetches can be
    3926             :      * estimated approximately as size of merge join output minus size of
    3927             :      * inner relation. Assume that the distinct key values are 1, 2, ..., and
    3928             :      * denote the number of values of each key in the outer relation as m1,
    3929             :      * m2, ...; in the inner relation, n1, n2, ...  Then we have
    3930             :      *
    3931             :      * size of join = m1 * n1 + m2 * n2 + ...
    3932             :      *
    3933             :      * number of rescanned tuples = (m1 - 1) * n1 + (m2 - 1) * n2 + ... = m1 *
    3934             :      * n1 + m2 * n2 + ... - (n1 + n2 + ...) = size of join - size of inner
    3935             :      * relation
    3936             :      *
    3937             :      * This equation works correctly for outer tuples having no inner match
    3938             :      * (nk = 0), but not for inner tuples having no outer match (mk = 0); we
    3939             :      * are effectively subtracting those from the number of rescanned tuples,
    3940             :      * when we should not.  Can we do better without expensive selectivity
    3941             :      * computations?
    3942             :      *
    3943             :      * The whole issue is moot if we know we don't need to mark/restore at
    3944             :      * all, or if we are working from a unique-ified outer input.
    3945             :      */
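                     :     /*
                     :      * A small worked instance of the formula above: with outer key
                     :      * multiplicities m = (2, 1) and inner multiplicities n = (3, 2),
                     :      * size of join = 2*3 + 1*2 = 8, size of inner relation = 3 + 2 = 5,
                     :      * so the number of rescanned tuples = (2-1)*3 + (1-1)*2 = 3 = 8 - 5.
                     :      */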
    3946      459832 :     if (path->skip_mark_restore ||
    3947      322476 :         RELATION_WAS_MADE_UNIQUE(outer_path->parent, extra->sjinfo,
    3948             :                                  path->jpath.jointype))
    3949      141670 :         rescannedtuples = 0;
    3950             :     else
    3951             :     {
    3952      318162 :         rescannedtuples = mergejointuples - inner_path_rows;
    3953             :         /* Must clamp because of possible underestimate */
    3954      318162 :         if (rescannedtuples < 0)
    3955       77642 :             rescannedtuples = 0;
    3956             :     }
    3957             : 
    3958             :     /*
    3959             :      * We'll inflate various costs this much to account for rescanning.  Note
    3960             :      * that this is to be multiplied by something involving inner_rows, or
    3961             :      * another number related to the portion of the inner rel we'll scan.
    3962             :      */
    3963      459832 :     rescanratio = 1.0 + (rescannedtuples / inner_rows);
    3964             : 
    3965             :     /*
    3966             :      * Decide whether we want to materialize the inner input to shield it from
    3967             :      * mark/restore and from performing re-fetches.  Our cost model for regular
    3968             :      * re-fetches is that a re-fetch costs the same as an original fetch,
    3969             :      * which is probably an overestimate; but on the other hand we ignore the
    3970             :      * bookkeeping costs of mark/restore.  Not clear if it's worth developing
    3971             :      * a more refined model.  So we just need to inflate the inner run cost by
    3972             :      * rescanratio.
    3973             :      */
    3974      459832 :     bare_inner_cost = inner_run_cost * rescanratio;
    3975             : 
    3976             :     /*
    3977             :      * When we interpose a Material node the re-fetch cost is assumed to be
    3978             :      * just cpu_operator_cost per tuple, independently of the underlying
    3979             :      * plan's cost; and we charge an extra cpu_operator_cost per original
    3980             :      * fetch as well.  Note that we're assuming the materialize node will
    3981             :      * never spill to disk, since it only has to remember tuples back to the
    3982             :      * last mark.  (If there are a huge number of duplicates, our other cost
    3983             :      * factors will make the path so expensive that it probably won't get
    3984             :      * chosen anyway.)  So we don't use cost_rescan here.
    3985             :      *
    3986             :      * Note: keep this estimate in sync with create_mergejoin_plan's labeling
    3987             :      * of the generated Material node.
    3988             :      */
    3989      459832 :     mat_inner_cost = inner_run_cost +
    3990      459832 :         cpu_operator_cost * inner_rows * rescanratio;
    3991             : 
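                     :     /*
                     :      * Illustrative numbers only: with inner_run_cost = 100, inner_rows =
                     :      * 1000, and rescannedtuples = 1000, rescanratio is 2.0, so
                     :      * bare_inner_cost = 200; at the default cpu_operator_cost of 0.0025,
                     :      * mat_inner_cost = 100 + 0.0025 * 1000 * 2.0 = 105, and materializing
                     :      * would look cheaper.
                     :      */
                     : 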
    3992             :     /*
    3993             :      * If we don't need mark/restore at all, we don't need materialization.
    3994             :      */
    3995      459832 :     if (path->skip_mark_restore)
    3996      137356 :         path->materialize_inner = false;
    3997             : 
    3998             :     /*
    3999             :      * Prefer materializing if it looks cheaper, unless the user has asked to
    4000             :      * suppress materialization.
    4001             :      */
    4002      322476 :     else if (enable_material && mat_inner_cost < bare_inner_cost)
    4003        3542 :         path->materialize_inner = true;
    4004             : 
    4005             :     /*
    4006             :      * Even if materializing doesn't look cheaper, we *must* do it if the
    4007             :      * inner path is to be used directly (without sorting) and it doesn't
    4008             :      * support mark/restore.
    4009             :      *
    4010             :      * Since the inner side must be ordered, and only Sorts and IndexScans can
    4011             :      * create order to begin with, and they both support mark/restore, you
    4012             :      * might think there's no problem --- but you'd be wrong.  Nestloop and
    4013             :      * merge joins can *preserve* the order of their inputs, so they can be
    4014             :      * selected as the input of a mergejoin, and they don't support
    4015             :      * mark/restore at present.
    4016             :      *
    4017             :      * We don't test the value of enable_material here, because
    4018             :      * materialization is required for correctness in this case, and turning
    4019             :      * it off does not entitle us to deliver an invalid plan.
    4020             :      */
    4021      318934 :     else if (innersortkeys == NIL &&
    4022        8624 :              !ExecSupportsMarkRestore(inner_path))
    4023        1876 :         path->materialize_inner = true;
    4024             : 
    4025             :     /*
    4026             :      * Also, force materializing if the inner path is to be sorted and the
    4027             :      * sort is expected to spill to disk.  This is because the final merge
    4028             :      * pass can be done on-the-fly if it doesn't have to support mark/restore.
    4029             :      * We don't try to adjust the cost estimates for this consideration,
    4030             :      * though.
    4031             :      *
    4032             :      * Since materialization is a performance optimization in this case,
    4033             :      * rather than necessary for correctness, we skip it if enable_material is
    4034             :      * off.
    4035             :      */
    4036      317058 :     else if (enable_material && innersortkeys != NIL &&
    4037      310262 :              relation_byte_size(inner_path_rows,
    4038      310262 :                                 inner_path->pathtarget->width) >
    4039      310262 :              work_mem * (Size) 1024)
    4040         284 :         path->materialize_inner = true;
    4041             :     else
    4042      316774 :         path->materialize_inner = false;
    4043             : 
    4044             :     /* Charge the right incremental cost for the chosen case */
    4045      459832 :     if (path->materialize_inner)
    4046        5702 :         run_cost += mat_inner_cost;
    4047             :     else
    4048      454130 :         run_cost += bare_inner_cost;
    4049             : 
    4050             :     /* CPU costs */
    4051             : 
    4052             :     /*
    4053             :      * The number of tuple comparisons needed is approximately the number of
    4054             :      * outer rows plus inner rows plus rescanned tuples (can we refine this?).
    4055             :      * At each one, we need to evaluate the mergejoin quals.
    4056             :      */
    4057      459832 :     startup_cost += merge_qual_cost.startup;
    4058      459832 :     startup_cost += merge_qual_cost.per_tuple *
    4059      459832 :         (outer_skip_rows + inner_skip_rows * rescanratio);
    4060      459832 :     run_cost += merge_qual_cost.per_tuple *
    4061      459832 :         ((outer_rows - outer_skip_rows) +
    4062      459832 :          (inner_rows - inner_skip_rows) * rescanratio);
    4063             : 
    4064             :     /*
    4065             :      * For each tuple that gets through the mergejoin proper, we charge
    4066             :      * cpu_tuple_cost plus the cost of evaluating additional restriction
    4067             :      * clauses that are to be applied at the join.  (This is pessimistic since
    4068             :      * not all of the quals may get evaluated at each tuple.)
    4069             :      *
    4070             :      * Note: we could adjust for SEMI/ANTI joins skipping some qual
    4071             :      * evaluations here, but it's probably not worth the trouble.
    4072             :      */
    4073      459832 :     startup_cost += qp_qual_cost.startup;
    4074      459832 :     cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    4075      459832 :     run_cost += cpu_per_tuple * mergejointuples;
    4076             : 
    4077             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    4078      459832 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    4079      459832 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    4080             : 
    4081      459832 :     path->jpath.path.startup_cost = startup_cost;
    4082      459832 :     path->jpath.path.total_cost = startup_cost + run_cost;
    4083      459832 : }
    4084             : 
    4085             : /*
    4086             :  * run mergejoinscansel() with caching
    4087             :  */
    4088             : static MergeScanSelCache *
    4089     1470192 : cached_scansel(PlannerInfo *root, RestrictInfo *rinfo, PathKey *pathkey)
    4090             : {
    4091             :     MergeScanSelCache *cache;
    4092             :     ListCell   *lc;
    4093             :     Selectivity leftstartsel,
    4094             :                 leftendsel,
    4095             :                 rightstartsel,
    4096             :                 rightendsel;
    4097             :     MemoryContext oldcontext;
    4098             : 
    4099             :     /* Do we have this result already? */
    4100     1470270 :     foreach(lc, rinfo->scansel_cache)
    4101             :     {
    4102     1328696 :         cache = (MergeScanSelCache *) lfirst(lc);
    4103     1328696 :         if (cache->opfamily == pathkey->pk_opfamily &&
    4104     1328696 :             cache->collation == pathkey->pk_eclass->ec_collation &&
    4105     1328696 :             cache->cmptype == pathkey->pk_cmptype &&
    4106     1328618 :             cache->nulls_first == pathkey->pk_nulls_first)
    4107     1328618 :             return cache;
    4108             :     }
    4109             : 
    4110             :     /* Nope, do the computation */
    4111      141574 :     mergejoinscansel(root,
    4112      141574 :                      (Node *) rinfo->clause,
    4113             :                      pathkey->pk_opfamily,
    4114             :                      pathkey->pk_cmptype,
    4115      141574 :                      pathkey->pk_nulls_first,
    4116             :                      &leftstartsel,
    4117             :                      &leftendsel,
    4118             :                      &rightstartsel,
    4119             :                      &rightendsel);
    4120             : 
    4121             :     /* Cache the result in suitably long-lived workspace */
    4122      141574 :     oldcontext = MemoryContextSwitchTo(root->planner_cxt);
    4123             : 
    4124      141574 :     cache = (MergeScanSelCache *) palloc(sizeof(MergeScanSelCache));
    4125      141574 :     cache->opfamily = pathkey->pk_opfamily;
    4126      141574 :     cache->collation = pathkey->pk_eclass->ec_collation;
    4127      141574 :     cache->cmptype = pathkey->pk_cmptype;
    4128      141574 :     cache->nulls_first = pathkey->pk_nulls_first;
    4129      141574 :     cache->leftstartsel = leftstartsel;
    4130      141574 :     cache->leftendsel = leftendsel;
    4131      141574 :     cache->rightstartsel = rightstartsel;
    4132      141574 :     cache->rightendsel = rightendsel;
    4133             : 
    4134      141574 :     rinfo->scansel_cache = lappend(rinfo->scansel_cache, cache);
    4135             : 
    4136      141574 :     MemoryContextSwitchTo(oldcontext);
    4137             : 
    4138      141574 :     return cache;
    4139             : }
    4140             : 
    4141             : /*
    4142             :  * initial_cost_hashjoin
    4143             :  *    Preliminary estimate of the cost of a hashjoin path.
    4144             :  *
    4145             :  * This must quickly produce lower-bound estimates of the path's startup and
    4146             :  * total costs.  If we are unable to eliminate the proposed path from
    4147             :  * consideration using the lower bounds, final_cost_hashjoin will be called
    4148             :  * to obtain the final estimates.
    4149             :  *
    4150             :  * The exact division of labor between this function and final_cost_hashjoin
    4151             :  * is private to them, and represents a tradeoff between speed of the initial
    4152             :  * estimate and getting a tight lower bound.  We choose not to examine the
    4153             :  * join quals here (other than by counting the number of hash clauses),
    4154             :  * so we can't do much with CPU costs.  We do assume that
    4155             :  * ExecChooseHashTableSize is cheap enough to use here.
    4156             :  *
    4157             :  * 'workspace' is to be filled with startup_cost, total_cost, and perhaps
    4158             :  *      other data to be used by final_cost_hashjoin
    4159             :  * 'jointype' is the type of join to be performed
    4160             :  * 'hashclauses' is the list of joinclauses to be used as hash clauses
    4161             :  * 'outer_path' is the outer input to the join
    4162             :  * 'inner_path' is the inner input to the join
    4163             :  * 'extra' contains miscellaneous information about the join
    4164             :  * 'parallel_hash' indicates that inner_path is partial and that a shared
    4165             :  *      hash table will be built in parallel
    4166             :  */
    4167             : void
    4168      875454 : initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace,
    4169             :                       JoinType jointype,
    4170             :                       List *hashclauses,
    4171             :                       Path *outer_path, Path *inner_path,
    4172             :                       JoinPathExtraData *extra,
    4173             :                       bool parallel_hash)
    4174             : {
    4175             :     int         disabled_nodes;
    4176      875454 :     Cost        startup_cost = 0;
    4177      875454 :     Cost        run_cost = 0;
    4178      875454 :     double      outer_path_rows = outer_path->rows;
    4179      875454 :     double      inner_path_rows = inner_path->rows;
    4180      875454 :     double      inner_path_rows_total = inner_path_rows;
    4181      875454 :     int         num_hashclauses = list_length(hashclauses);
    4182             :     int         numbuckets;
    4183             :     int         numbatches;
    4184             :     int         num_skew_mcvs;
    4185             :     size_t      space_allowed;  /* unused */
    4186             : 
    4187             :     /* Count up disabled nodes. */
    4188      875454 :     disabled_nodes = enable_hashjoin ? 0 : 1;
    4189      875454 :     disabled_nodes += inner_path->disabled_nodes;
    4190      875454 :     disabled_nodes += outer_path->disabled_nodes;
    4191             : 
    4192             :     /* cost of source data */
    4193      875454 :     startup_cost += outer_path->startup_cost;
    4194      875454 :     run_cost += outer_path->total_cost - outer_path->startup_cost;
    4195      875454 :     startup_cost += inner_path->total_cost;
    4196             : 
    4197             :     /*
    4198             :      * Cost of computing hash function: must do it once per input tuple. We
    4199             :      * charge one cpu_operator_cost for each column's hash function.  Also,
    4200             :      * tack on one cpu_tuple_cost per inner row, to model the costs of
    4201             :      * inserting the row into the hashtable.
    4202             :      *
    4203             :      * XXX when a hashclause is more complex than a single operator, we really
    4204             :      * should charge the extra eval costs of the left or right side, as
    4205             :      * appropriate, here.  This seems more work than it's worth at the moment.
    4206             :      */
    4207      875454 :     startup_cost += (cpu_operator_cost * num_hashclauses + cpu_tuple_cost)
    4208      875454 :         * inner_path_rows;
    4209      875454 :     run_cost += cpu_operator_cost * num_hashclauses * outer_path_rows;
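                     : 
                     :     /*
                     :      * For instance (hypothetical numbers): with two hash clauses, 1000
                     :      * inner rows, and 10000 outer rows, the default cpu_operator_cost
                     :      * (0.0025) and cpu_tuple_cost (0.01) give startup_cost += (2 * 0.0025
                     :      * + 0.01) * 1000 = 15 and run_cost += 2 * 0.0025 * 10000 = 50.
                     :      */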
    4210             : 
    4211             :     /*
    4212             :      * If this is a parallel hash build, then the value we have for
    4213             :      * inner_rows_total currently refers only to the rows returned by each
    4214             :      * participant.  For shared hash table size estimation, we need the total
    4215             :      * number, so we need to undo the division.
    4216             :      */
    4217      875454 :     if (parallel_hash)
    4218       75156 :         inner_path_rows_total *= get_parallel_divisor(inner_path);
    4219             : 
    4220             :     /*
    4221             :      * Get hash table size that executor would use for inner relation.
    4222             :      *
    4223             :      * XXX for the moment, always assume that skew optimization will be
    4224             :      * performed.  As long as SKEW_HASH_MEM_PERCENT is small, it's not worth
    4225             :      * trying to determine that for sure.
    4226             :      *
    4227             :      * XXX at some point it might be interesting to try to account for skew
    4228             :      * optimization in the cost estimate, but for now, we don't.
    4229             :      */
    4230      875454 :     ExecChooseHashTableSize(inner_path_rows_total,
    4231      875454 :                             inner_path->pathtarget->width,
    4232             :                             true,   /* useskew */
    4233             :                             parallel_hash,  /* try_combined_hash_mem */
    4234             :                             outer_path->parallel_workers,
    4235             :                             &space_allowed,
    4236             :                             &numbuckets,
    4237             :                             &numbatches,
    4238             :                             &num_skew_mcvs);
    4239             : 
    4240             :     /*
    4241             :      * If the inner relation is too big then we will need to "batch" the join,
    4242             :      * which implies writing and reading most of the tuples to disk an extra
    4243             :      * time.  Charge seq_page_cost per page, since the I/O should be nice and
    4244             :      * sequential.  Writing the inner rel counts as startup cost, all the rest
    4245             :      * as run cost.
    4246             :      */
    4247      875454 :     if (numbatches > 1)
    4248             :     {
    4249        4664 :         double      outerpages = page_size(outer_path_rows,
    4250        4664 :                                            outer_path->pathtarget->width);
    4251        4664 :         double      innerpages = page_size(inner_path_rows,
    4252        4664 :                                            inner_path->pathtarget->width);
    4253             : 
    4254        4664 :         startup_cost += seq_page_cost * innerpages;
    4255        4664 :         run_cost += seq_page_cost * (innerpages + 2 * outerpages);
    4256             :     }
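                     : 
                     :     /*
                     :      * To illustrate with made-up sizes: 100 inner pages and 400 outer
                     :      * pages at the default seq_page_cost of 1.0 would add 100 to startup
                     :      * cost (writing out the inner rel) and 100 + 2 * 400 = 900 to run
                     :      * cost (reading the inner rel back, writing and re-reading the outer
                     :      * rel).
                     :      */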
    4257             : 
    4258             :     /* CPU costs left for later */
    4259             : 
    4260             :     /* Public result fields */
    4261      875454 :     workspace->disabled_nodes = disabled_nodes;
    4262      875454 :     workspace->startup_cost = startup_cost;
    4263      875454 :     workspace->total_cost = startup_cost + run_cost;
    4264             :     /* Save private data for final_cost_hashjoin */
    4265      875454 :     workspace->run_cost = run_cost;
    4266      875454 :     workspace->numbuckets = numbuckets;
    4267      875454 :     workspace->numbatches = numbatches;
    4268      875454 :     workspace->inner_rows_total = inner_path_rows_total;
    4269      875454 : }
    4270             : 
    4271             : /*
    4272             :  * final_cost_hashjoin
    4273             :  *    Final estimate of the cost and result size of a hashjoin path.
    4274             :  *
    4275             :  * Note: the numbatches estimate is also saved into 'path' for use later
    4276             :  *
    4277             :  * 'path' is already filled in except for the rows and cost fields and
    4278             :  *      num_batches
    4279             :  * 'workspace' is the result from initial_cost_hashjoin
    4280             :  * 'extra' contains miscellaneous information about the join
    4281             :  */
    4282             : void
    4283      456364 : final_cost_hashjoin(PlannerInfo *root, HashPath *path,
    4284             :                     JoinCostWorkspace *workspace,
    4285             :                     JoinPathExtraData *extra)
    4286             : {
    4287      456364 :     Path       *outer_path = path->jpath.outerjoinpath;
    4288      456364 :     Path       *inner_path = path->jpath.innerjoinpath;
    4289      456364 :     double      outer_path_rows = outer_path->rows;
    4290      456364 :     double      inner_path_rows = inner_path->rows;
    4291      456364 :     double      inner_path_rows_total = workspace->inner_rows_total;
    4292      456364 :     List       *hashclauses = path->path_hashclauses;
    4293      456364 :     Cost        startup_cost = workspace->startup_cost;
    4294      456364 :     Cost        run_cost = workspace->run_cost;
    4295      456364 :     int         numbuckets = workspace->numbuckets;
    4296      456364 :     int         numbatches = workspace->numbatches;
    4297             :     Cost        cpu_per_tuple;
    4298             :     QualCost    hash_qual_cost;
    4299             :     QualCost    qp_qual_cost;
    4300             :     double      hashjointuples;
    4301             :     double      virtualbuckets;
    4302             :     Selectivity innerbucketsize;
    4303             :     Selectivity innermcvfreq;
    4304             :     ListCell   *hcl;
    4305             : 
    4306             :     /* Set the number of disabled nodes. */
    4307      456364 :     path->jpath.path.disabled_nodes = workspace->disabled_nodes;
    4308             : 
    4309             :     /* Mark the path with the correct row estimate */
    4310      456364 :     if (path->jpath.path.param_info)
    4311        3674 :         path->jpath.path.rows = path->jpath.path.param_info->ppi_rows;
    4312             :     else
    4313      452690 :         path->jpath.path.rows = path->jpath.path.parent->rows;
    4314             : 
    4315             :     /* For partial paths, scale row estimate. */
    4316      456364 :     if (path->jpath.path.parallel_workers > 0)
    4317             :     {
    4318      107100 :         double      parallel_divisor = get_parallel_divisor(&path->jpath.path);
    4319             : 
    4320      107100 :         path->jpath.path.rows =
    4321      107100 :             clamp_row_est(path->jpath.path.rows / parallel_divisor);
    4322             :     }
    4323             : 
    4324             :     /* mark the path with estimated # of batches */
    4325      456364 :     path->num_batches = numbatches;
    4326             : 
    4327             :     /* store the total number of tuples (sum of partial row estimates) */
    4328      456364 :     path->inner_rows_total = inner_path_rows_total;
    4329             : 
    4330             :     /* and compute the number of "virtual" buckets in the whole join */
    4331      456364 :     virtualbuckets = (double) numbuckets * (double) numbatches;
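                     : 
                     :     /*
                     :      * For example, numbuckets = 1024 and numbatches = 4 yield 4096
                     :      * virtual buckets; a unique-ified inner relation is then assumed to
                     :      * put 1/4096 of its rows in each bucket.
                     :      */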
    4332             : 
    4333             :     /*
    4334             :      * Determine bucketsize fraction and MCV frequency for the inner relation.
    4335             :      * We use the smallest bucketsize or MCV frequency estimated for any
    4336             :      * individual hashclause; this is undoubtedly conservative.
    4337             :      *
    4338             :      * BUT: if inner relation has been unique-ified, we can assume it's good
    4339             :      * for hashing.  This is important both because it's the right answer, and
    4340             :      * because we avoid contaminating the cache with a value that's wrong for
    4341             :      * non-unique-ified paths.
    4342             :      */
    4343      456364 :     if (RELATION_WAS_MADE_UNIQUE(inner_path->parent, extra->sjinfo,
    4344             :                                  path->jpath.jointype))
    4345             :     {
    4346        4232 :         innerbucketsize = 1.0 / virtualbuckets;
    4347        4232 :         innermcvfreq = 0.0;
    4348             :     }
    4349             :     else
    4350             :     {
    4351             :         List       *otherclauses;
    4352             : 
    4353      452132 :         innerbucketsize = 1.0;
    4354      452132 :         innermcvfreq = 1.0;
    4355             : 
    4356             :         /* At first, try to estimate bucket size using extended statistics. */
    4357      452132 :         otherclauses = estimate_multivariate_bucketsize(root,
    4358             :                                                         inner_path->parent,
    4359             :                                                         hashclauses,
    4360             :                                                         &innerbucketsize);
    4361             : 
    4362             :         /* Pass through the remaining clauses */
    4363      941098 :         foreach(hcl, otherclauses)
    4364             :         {
    4365      488966 :             RestrictInfo *restrictinfo = lfirst_node(RestrictInfo, hcl);
    4366             :             Selectivity thisbucketsize;
    4367             :             Selectivity thismcvfreq;
    4368             : 
    4369             :             /*
    4370             :              * First we have to figure out which side of the hashjoin clause
    4371             :              * is the inner side.
    4372             :              *
    4373             :              * Since we tend to visit the same clauses over and over when
    4374             :              * planning a large query, we cache the bucket stats estimates in
    4375             :              * the RestrictInfo node to avoid repeated lookups of statistics.
    4376             :              */
    4377      488966 :             if (bms_is_subset(restrictinfo->right_relids,
    4378      488966 :                               inner_path->parent->relids))
    4379             :             {
    4380             :                 /* righthand side is inner */
    4381      252898 :                 thisbucketsize = restrictinfo->right_bucketsize;
    4382      252898 :                 if (thisbucketsize < 0)
    4383             :                 {
    4384             :                     /* not cached yet */
    4385      109320 :                     estimate_hash_bucket_stats(root,
    4386      109320 :                                                get_rightop(restrictinfo->clause),
    4387             :                                                virtualbuckets,
    4388             :                                                &restrictinfo->right_mcvfreq,
    4389             :                                                &restrictinfo->right_bucketsize);
    4390      109320 :                     thisbucketsize = restrictinfo->right_bucketsize;
    4391             :                 }
    4392      252898 :                 thismcvfreq = restrictinfo->right_mcvfreq;
    4393             :             }
    4394             :             else
    4395             :             {
    4396             :                 Assert(bms_is_subset(restrictinfo->left_relids,
    4397             :                                      inner_path->parent->relids));
    4398             :                 /* lefthand side is inner */
    4399      236068 :                 thisbucketsize = restrictinfo->left_bucketsize;
    4400      236068 :                 if (thisbucketsize < 0)
    4401             :                 {
    4402             :                     /* not cached yet */
    4403       95920 :                     estimate_hash_bucket_stats(root,
    4404       95920 :                                                get_leftop(restrictinfo->clause),
    4405             :                                                virtualbuckets,
    4406             :                                                &restrictinfo->left_mcvfreq,
    4407             :                                                &restrictinfo->left_bucketsize);
    4408       95920 :                     thisbucketsize = restrictinfo->left_bucketsize;
    4409             :                 }
    4410      236068 :                 thismcvfreq = restrictinfo->left_mcvfreq;
    4411             :             }
    4412             : 
    4413      488966 :             if (innerbucketsize > thisbucketsize)
    4414      370488 :                 innerbucketsize = thisbucketsize;
    4415      488966 :             if (innermcvfreq > thismcvfreq)
    4416      455146 :                 innermcvfreq = thismcvfreq;
    4417             :         }
    4418             :     }
    4419             : 
    4420             :     /*
    4421             :      * If the bucket holding the inner MCV would exceed hash_mem, we don't
    4422             :      * want to hash unless there is really no other alternative, so apply
    4423             :      * disable_cost.  (The executor normally copes with excessive memory usage
    4424             :      * by splitting batches, but obviously it cannot separate equal values
    4425             :      * that way, so it will be unable to drive the batch size below hash_mem
    4426             :      * when this is true.)
    4427             :      */
    4428      456364 :     if (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),
    4429      912728 :                            inner_path->pathtarget->width) > get_hash_memory_limit())
    4430           8 :         startup_cost += disable_cost;
    4431             : 
    4432             :     /*
    4433             :      * Compute cost of the hashquals and qpquals (other restriction clauses)
    4434             :      * separately.
    4435             :      */
    4436      456364 :     cost_qual_eval(&hash_qual_cost, hashclauses, root);
    4437      456364 :     cost_qual_eval(&qp_qual_cost, path->jpath.joinrestrictinfo, root);
    4438      456364 :     qp_qual_cost.startup -= hash_qual_cost.startup;
    4439      456364 :     qp_qual_cost.per_tuple -= hash_qual_cost.per_tuple;
    4440             : 
    4441             :     /* CPU costs */
    4442             : 
    4443      456364 :     if (path->jpath.jointype == JOIN_SEMI ||
    4444      450180 :         path->jpath.jointype == JOIN_ANTI ||
    4445      445636 :         extra->inner_unique)
    4446      124856 :     {
    4447             :         double      outer_matched_rows;
    4448             :         Selectivity inner_scan_frac;
    4449             : 
    4450             :         /*
    4451             :          * With a SEMI or ANTI join, or if the innerrel is known unique, the
    4452             :          * executor will stop after the first match.
    4453             :          *
    4454             :          * For an outer-rel row that has at least one match, we can expect the
    4455             :          * bucket scan to stop after a fraction 1/(match_count+1) of the
    4456             :          * bucket's rows, if the matches are evenly distributed.  Since they
    4457             :          * probably aren't quite evenly distributed, we apply a fuzz factor of
    4458             :          * 2.0 to that fraction.  (If we used a larger fuzz factor, we'd have
    4459             :          * to clamp inner_scan_frac to at most 1.0; but since match_count is
    4460             :          * at least 1, no such clamp is needed now.)
    4461             :          */
    4462      124856 :         outer_matched_rows = rint(outer_path_rows * extra->semifactors.outer_match_frac);
    4463      124856 :         inner_scan_frac = 2.0 / (extra->semifactors.match_count + 1.0);
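                     : 
                     :         /*
                     :          * For example, match_count = 3 would stop an evenly-distributed
                     :          * bucket scan after 1/4 of the bucket; the 2x fuzz factor makes
                     :          * inner_scan_frac = 2.0 / 4.0 = 0.5.
                     :          */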
    4464             : 
    4465      124856 :         startup_cost += hash_qual_cost.startup;
    4466      249712 :         run_cost += hash_qual_cost.per_tuple * outer_matched_rows *
    4467      124856 :             clamp_row_est(inner_path_rows * innerbucketsize * inner_scan_frac) * 0.5;
    4468             : 
    4469             :         /*
    4470             :          * For unmatched outer-rel rows, the picture is quite a lot different.
    4471             :          * In the first place, there is no reason to assume that these rows
    4472             :          * preferentially hit heavily-populated buckets; instead assume they
    4473             :          * are uncorrelated with the inner distribution and so they see an
    4474             :          * average bucket size of inner_path_rows / virtualbuckets.  In the
    4475             :          * second place, it seems likely that they will have few if any exact
    4476             :          * hash-code matches and so very few of the tuples in the bucket will
    4477             :          * actually require eval of the hash quals.  We don't have any good
    4478             :          * way to estimate how many will, but for the moment assume that the
    4479             :          * effective cost per bucket entry is one-tenth what it is for
    4480             :          * matchable tuples; hence the 0.05 (= 0.5 * 0.1) factor below.
    4481             :          */
    4482      249712 :         run_cost += hash_qual_cost.per_tuple *
    4483      249712 :             (outer_path_rows - outer_matched_rows) *
    4484      124856 :             clamp_row_est(inner_path_rows / virtualbuckets) * 0.05;
    4485             : 
    4486             :         /* Get # of tuples that will pass the basic join */
    4487      124856 :         if (path->jpath.jointype == JOIN_ANTI)
    4488        4544 :             hashjointuples = outer_path_rows - outer_matched_rows;
    4489             :         else
    4490      120312 :             hashjointuples = outer_matched_rows;
    4491             :     }
    4492             :     else
    4493             :     {
    4494             :         /*
    4495             :          * The number of tuple comparisons needed is the number of outer
    4496             :          * tuples times the typical number of tuples in a hash bucket, which
    4497             :          * is the inner relation size times its bucketsize fraction.  At each
    4498             :          * one, we need to evaluate the hashjoin quals.  But actually,
    4499             :          * charging the full qual eval cost at each tuple is pessimistic,
    4500             :          * since we don't evaluate the quals unless the hash values match
    4501             :          * exactly.  For lack of a better idea, halve the cost estimate to
    4502             :          * allow for that.
    4503             :          */
    4504      331508 :         startup_cost += hash_qual_cost.startup;
    4505      663016 :         run_cost += hash_qual_cost.per_tuple * outer_path_rows *
    4506      331508 :             clamp_row_est(inner_path_rows * innerbucketsize) * 0.5;
    4507             : 
    4508             :         /*
    4509             :          * Get approx # tuples passing the hashquals.  We use
    4510             :          * approx_tuple_count here because we need an estimate done with
    4511             :          * JOIN_INNER semantics.
    4512             :          */
    4513      331508 :         hashjointuples = approx_tuple_count(root, &path->jpath, hashclauses);
    4514             :     }
    4515             : 
    4516             :     /*
    4517             :      * For each tuple that gets through the hashjoin proper, we charge
    4518             :      * cpu_tuple_cost plus the cost of evaluating additional restriction
    4519             :      * clauses that are to be applied at the join.  (This is pessimistic since
    4520             :      * not all of the quals may get evaluated at each tuple.)
    4521             :      */
    4522      456364 :     startup_cost += qp_qual_cost.startup;
    4523      456364 :     cpu_per_tuple = cpu_tuple_cost + qp_qual_cost.per_tuple;
    4524      456364 :     run_cost += cpu_per_tuple * hashjointuples;
    4525             : 
    4526             :     /* tlist eval costs are paid per output row, not per tuple scanned */
    4527      456364 :     startup_cost += path->jpath.path.pathtarget->cost.startup;
    4528      456364 :     run_cost += path->jpath.path.pathtarget->cost.per_tuple * path->jpath.path.rows;
    4529             : 
    4530      456364 :     path->jpath.path.startup_cost = startup_cost;
    4531      456364 :     path->jpath.path.total_cost = startup_cost + run_cost;
    4532      456364 : }
    4533             : 
    4534             : 
    4535             : /*
    4536             :  * cost_subplan
    4537             :  *      Figure the costs for a SubPlan (or initplan).
    4538             :  *
    4539             :  * Note: we could dig the subplan's Plan out of the root list, but in practice
    4540             :  * all callers have it handy already, so we make them pass it.
    4541             :  */
    4542             : void
    4543       43618 : cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
    4544             : {
    4545             :     QualCost    sp_cost;
    4546             : 
    4547             :     /*
    4548             :      * Figure any cost for evaluating the testexpr.
    4549             :      *
    4550             :      * Usually, SubPlan nodes are built very early, before we have constructed
    4551             :      * any RelOptInfos for the parent query level, which means the parent root
    4552             :      * does not yet contain enough information to safely consult statistics.
    4553             :      * Therefore, we pass root as NULL here.  cost_qual_eval() is already
    4554             :      * well-equipped to handle a NULL root.
    4555             :      *
    4556             :      * One exception is SubPlan nodes built for the initplans of MIN/MAX
    4557             :      * aggregates from indexes (cf. SS_make_initplan_from_plan).  In this
    4558             :      * case, having a NULL root is safe because testexpr will be NULL.
    4559             :      * Besides, an initplan will by definition not consult anything from the
    4560             :      * parent plan.
    4561             :      */
    4562       43618 :     cost_qual_eval(&sp_cost,
    4563       43618 :                    make_ands_implicit((Expr *) subplan->testexpr),
    4564             :                    NULL);
    4565             : 
    4566       43618 :     if (subplan->useHashTable)
    4567             :     {
    4568             :         /*
    4569             :          * If we are using a hash table for the subquery outputs, then the
    4570             :          * cost of evaluating the query is a one-time cost.  We charge one
    4571             :          * cpu_operator_cost per tuple for the work of loading the hashtable,
    4572             :          * too.
    4573             :          */
    4574        2116 :         sp_cost.startup += plan->total_cost +
    4575        2116 :             cpu_operator_cost * plan->plan_rows;
    4576             : 
    4577             :         /*
    4578             :          * The per-tuple costs include the cost of evaluating the lefthand
    4579             :          * expressions, plus the cost of probing the hashtable.  We already
    4580             :          * accounted for the lefthand expressions as part of the testexpr, and
    4581             :          * will also have counted one cpu_operator_cost for each comparison
    4582             :          * operator.  That is probably too low for the probing cost, but it's
    4583             :          * hard to make a better estimate, so live with it for now.
    4584             :          */
    4585             :     }
    4586             :     else
    4587             :     {
    4588             :         /*
    4589             :          * Otherwise we will be rescanning the subplan output on each
    4590             :          * evaluation.  We need to estimate how much of the output we will
    4591             :          * actually need to scan.  NOTE: this logic should agree with the
    4592             :          * tuple_fraction estimates used by make_subplan() in
    4593             :          * plan/subselect.c.
    4594             :          */
    4595       41502 :         Cost        plan_run_cost = plan->total_cost - plan->startup_cost;
    4596             : 
    4597       41502 :         if (subplan->subLinkType == EXISTS_SUBLINK)
    4598             :         {
    4599             :             /* we only need to fetch 1 tuple; clamp to avoid zero divide */
    4600        2506 :             sp_cost.per_tuple += plan_run_cost / clamp_row_est(plan->plan_rows);
    4601             :         }
    4602       38996 :         else if (subplan->subLinkType == ALL_SUBLINK ||
    4603       38978 :                  subplan->subLinkType == ANY_SUBLINK)
    4604             :         {
    4605             :             /* assume we need 50% of the tuples */
    4606         146 :             sp_cost.per_tuple += 0.50 * plan_run_cost;
    4607             :             /* also charge a cpu_operator_cost per row examined */
    4608         146 :             sp_cost.per_tuple += 0.50 * plan->plan_rows * cpu_operator_cost;
    4609             :         }
    4610             :         else
    4611             :         {
    4612             :             /* assume we need all tuples */
    4613       38850 :             sp_cost.per_tuple += plan_run_cost;
    4614             :         }
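                     : 
                     :         /*
                     :          * As a concrete illustration of the EXISTS case: a subplan with
                     :          * run cost 50 and an estimated 100 rows is charged 50/100 = 0.5
                     :          * per evaluation, on the theory that we stop after fetching the
                     :          * first row.
                     :          */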
    4615             : 
    4616             :         /*
    4617             :          * Also account for subplan's startup cost. If the subplan is
    4618             :          * uncorrelated or undirect correlated, AND its topmost node is one
    4619             :          * that materializes its output, assume that we'll only need to pay
    4620             :          * its startup cost once; otherwise assume we pay the startup cost
    4621             :          * every time.
    4622             :          */
    4623       54736 :         if (subplan->parParam == NIL &&
    4624       13234 :             ExecMaterializesOutput(nodeTag(plan)))
    4625         726 :             sp_cost.startup += plan->startup_cost;
    4626             :         else
    4627       40776 :             sp_cost.per_tuple += plan->startup_cost;
    4628             :     }
    4629             : 
    4630       43618 :     subplan->startup_cost = sp_cost.startup;
    4631       43618 :     subplan->per_call_cost = sp_cost.per_tuple;
    4632       43618 : }
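
/*
 * Editor's illustrative sketch -- not part of costsize.c.  It restates, with
 * hypothetical names, the per-call charge computed above for rescanned
 * subplans (the separate startup-cost handling is omitted).  Only the
 * arithmetic mirrors the code: EXISTS expects to fetch one tuple, ANY/ALL
 * about half the output plus one operator charge per row examined, and
 * everything else pays for the whole output.
 */
typedef enum SketchSubLinkType
{
    SKETCH_EXISTS, SKETCH_ANY_OR_ALL, SKETCH_OTHER
} SketchSubLinkType;

static double
sketch_subplan_per_call_cost(SketchSubLinkType sltype, double plan_run_cost,
                             double plan_rows, double cpu_op_cost)
{
    /* crude stand-in for clamp_row_est(): avoid dividing by zero */
    if (plan_rows < 1.0)
        plan_rows = 1.0;

    switch (sltype)
    {
        case SKETCH_EXISTS:
            /* we only need to fetch 1 tuple */
            return plan_run_cost / plan_rows;
        case SKETCH_ANY_OR_ALL:
            /* assume 50% of the tuples, plus one operator per row examined */
            return 0.50 * plan_run_cost +
                0.50 * plan_rows * cpu_op_cost;
        default:
            /* assume we need all tuples */
            return plan_run_cost;
    }
}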
    4633             : 
    4634             : 
    4635             : /*
    4636             :  * cost_rescan
    4637             :  *      Given a finished Path, estimate the costs of rescanning it after
    4638             :  *      having done so the first time.  For some Path types a rescan is
    4639             :  *      cheaper than an original scan (if no parameters change), and this
    4640             :  *      function embodies knowledge about that.  The default is to return
    4641             :  *      the same costs stored in the Path.  (Note that the cost estimates
    4642             :  *      actually stored in Paths are always for first scans.)
    4643             :  *
    4644             :  * This function is not currently intended to model effects such as rescans
    4645             :  * being cheaper due to disk block caching; what we are concerned with is
    4646             :  * plan types wherein the executor caches results explicitly, or doesn't
    4647             :  * redo startup calculations, etc.
    4648             :  */
    4649             : static void
    4650     3259348 : cost_rescan(PlannerInfo *root, Path *path,
    4651             :             Cost *rescan_startup_cost,  /* output parameters */
    4652             :             Cost *rescan_total_cost)
    4653             : {
    4654     3259348 :     switch (path->pathtype)
    4655             :     {
    4656       52998 :         case T_FunctionScan:
    4657             : 
    4658             :             /*
    4659             :              * Currently, nodeFunctionscan.c always executes the function to
    4660             :              * completion before returning any rows, and caches the results in
    4661             :              * a tuplestore.  So the function eval cost is all startup cost
    4662             :              * and isn't paid over again on rescans.  However, all run costs
    4663             :              * will be paid over again.
    4664             :              */
    4665       52998 :             *rescan_startup_cost = 0;
    4666       52998 :             *rescan_total_cost = path->total_cost - path->startup_cost;
    4667       52998 :             break;
    4668      135374 :         case T_HashJoin:
    4669             : 
    4670             :             /*
    4671             :              * If it's a single-batch join, we don't need to rebuild the hash
    4672             :              * table during a rescan.
    4673             :              */
    4674      135374 :             if (((HashPath *) path)->num_batches == 1)
    4675             :             {
    4676             :                 /* Startup cost is exactly the cost of hash table building */
    4677      135374 :                 *rescan_startup_cost = 0;
    4678      135374 :                 *rescan_total_cost = path->total_cost - path->startup_cost;
    4679             :             }
    4680             :             else
    4681             :             {
    4682             :                 /* Otherwise, no special treatment */
    4683           0 :                 *rescan_startup_cost = path->startup_cost;
    4684           0 :                 *rescan_total_cost = path->total_cost;
    4685             :             }
    4686      135374 :             break;
    4687        8014 :         case T_CteScan:
    4688             :         case T_WorkTableScan:
    4689             :             {
    4690             :                 /*
    4691             :                  * These plan types materialize their final result in a
    4692             :                  * tuplestore or tuplesort object.  So the rescan cost is only
    4693             :                  * cpu_tuple_cost per tuple, unless the result is large enough
    4694             :                  * to spill to disk.
    4695             :                  */
    4696        8014 :                 Cost        run_cost = cpu_tuple_cost * path->rows;
    4697        8014 :                 double      nbytes = relation_byte_size(path->rows,
    4698        8014 :                                                         path->pathtarget->width);
    4699        8014 :                 double      work_mem_bytes = work_mem * (Size) 1024;
    4700             : 
    4701        8014 :                 if (nbytes > work_mem_bytes)
    4702             :                 {
    4703             :                     /* It will spill, so account for re-read cost */
    4704         352 :                     double      npages = ceil(nbytes / BLCKSZ);
    4705             : 
    4706         352 :                     run_cost += seq_page_cost * npages;
    4707             :                 }
    4708        8014 :                 *rescan_startup_cost = 0;
    4709        8014 :                 *rescan_total_cost = run_cost;
    4710             :             }
    4711        8014 :             break;
    4712     1176396 :         case T_Material:
    4713             :         case T_Sort:
    4714             :             {
    4715             :                 /*
    4716             :                  * These plan types not only materialize their results, but do
    4717             :                  * not implement qual filtering or projection.  So they are
    4718             :                  * even cheaper to rescan than the ones above.  We charge only
    4719             :                  * cpu_operator_cost per tuple.  (Note: keep that in sync with
    4720             :                  * the run_cost charge in cost_sort, and also see comments in
    4721             :                  * cost_material before you change it.)
    4722             :                  */
    4723     1176396 :                 Cost        run_cost = cpu_operator_cost * path->rows;
    4724     1176396 :                 double      nbytes = relation_byte_size(path->rows,
    4725     1176396 :                                                         path->pathtarget->width);
    4726     1176396 :                 double      work_mem_bytes = work_mem * (Size) 1024;
    4727             : 
    4728     1176396 :                 if (nbytes > work_mem_bytes)
    4729             :                 {
    4730             :                     /* It will spill, so account for re-read cost */
    4731        9942 :                     double      npages = ceil(nbytes / BLCKSZ);
    4732             : 
    4733        9942 :                     run_cost += seq_page_cost * npages;
    4734             :                 }
    4735     1176396 :                 *rescan_startup_cost = 0;
    4736     1176396 :                 *rescan_total_cost = run_cost;
    4737             :             }
    4738     1176396 :             break;
    4739      291440 :         case T_Memoize:
    4740             :             /* All the hard work is done by cost_memoize_rescan */
    4741      291440 :             cost_memoize_rescan(root, (MemoizePath *) path,
    4742             :                                 rescan_startup_cost, rescan_total_cost);
    4743      291440 :             break;
    4744     1595126 :         default:
    4745     1595126 :             *rescan_startup_cost = path->startup_cost;
    4746     1595126 :             *rescan_total_cost = path->total_cost;
    4747     1595126 :             break;
    4748             :     }
    4749     3259348 : }
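
/*
 * Editor's illustrative sketch -- not part of costsize.c.  It isolates the
 * Material/Sort rescan charge above: cpu_operator_cost per tuple, plus a
 * sequential re-read of the spilled pages once the materialized result no
 * longer fits in work_mem.  nbytes stands in for relation_byte_size() and
 * blcksz for BLCKSZ; ceil() requires <math.h>.
 */
static double
sketch_material_rescan_run_cost(double rows, double nbytes,
                                double work_mem_bytes, double blcksz,
                                double cpu_op_cost, double seq_pg_cost)
{
    double      run_cost = cpu_op_cost * rows;

    if (nbytes > work_mem_bytes)
    {
        /* it will spill, so account for re-reading it from disk */
        double      npages = ceil(nbytes / blcksz);

        run_cost += seq_pg_cost * npages;
    }
    /* the corresponding rescan startup cost is simply zero */
    return run_cost;
}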
    4750             : 
    4751             : 
    4752             : /*
    4753             :  * cost_qual_eval
    4754             :  *      Estimate the CPU costs of evaluating a WHERE clause.
    4755             :  *      The input can be either an implicitly-ANDed list of boolean
    4756             :  *      expressions, or a list of RestrictInfo nodes.  (The latter is
    4757             :  *      preferred since it allows caching of the results.)
    4758             :  *      The result includes both a one-time (startup) component,
    4759             :  *      and a per-evaluation component.
    4760             :  *
    4761             :  * Note: in some code paths root can be passed as NULL, resulting in
    4762             :  * slightly worse estimates.
    4763             :  */
    4764             : void
    4765     4644346 : cost_qual_eval(QualCost *cost, List *quals, PlannerInfo *root)
    4766             : {
    4767             :     cost_qual_eval_context context;
    4768             :     ListCell   *l;
    4769             : 
    4770     4644346 :     context.root = root;
    4771     4644346 :     context.total.startup = 0;
    4772     4644346 :     context.total.per_tuple = 0;
    4773             : 
    4774             :     /* We don't charge any cost for the implicit ANDing at top level ... */
    4775             : 
    4776     8857452 :     foreach(l, quals)
    4777             :     {
    4778     4213106 :         Node       *qual = (Node *) lfirst(l);
    4779             : 
    4780     4213106 :         cost_qual_eval_walker(qual, &context);
    4781             :     }
    4782             : 
    4783     4644346 :     *cost = context.total;
    4784     4644346 : }
    4785             : 
    4786             : /*
    4787             :  * cost_qual_eval_node
    4788             :  *      As above, for a single RestrictInfo or expression.
    4789             :  */
    4790             : void
    4791     1827182 : cost_qual_eval_node(QualCost *cost, Node *qual, PlannerInfo *root)
    4792             : {
    4793             :     cost_qual_eval_context context;
    4794             : 
    4795     1827182 :     context.root = root;
    4796     1827182 :     context.total.startup = 0;
    4797     1827182 :     context.total.per_tuple = 0;
    4798             : 
    4799     1827182 :     cost_qual_eval_walker(qual, &context);
    4800             : 
    4801     1827182 :     *cost = context.total;
    4802     1827182 : }
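
/*
 * Editor's illustrative sketch -- not part of costsize.c.  It shows how the
 * two QualCost components produced above are typically consumed: the startup
 * part is paid once per plan node, the per-tuple part once per row the quals
 * are evaluated against.  The struct is a local stand-in for QualCost.
 */
typedef struct SketchQualCost
{
    double      startup;
    double      per_tuple;
} SketchQualCost;

static double
sketch_qual_eval_total(SketchQualCost qcost, double rows_checked)
{
    return qcost.startup + qcost.per_tuple * rows_checked;
}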
    4803             : 
    4804             : static bool
    4805     9481420 : cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
    4806             : {
    4807     9481420 :     if (node == NULL)
    4808       87926 :         return false;
    4809             : 
    4810             :     /*
    4811             :      * RestrictInfo nodes contain an eval_cost field reserved for this
    4812             :      * routine's use, so that it's not necessary to evaluate the qual clause's
    4813             :      * cost more than once.  If the clause's cost hasn't been computed yet,
    4814             :      * the field's startup value will contain -1.
    4815             :      */
    4816     9393494 :     if (IsA(node, RestrictInfo))
    4817             :     {
    4818     4400302 :         RestrictInfo *rinfo = (RestrictInfo *) node;
    4819             : 
    4820     4400302 :         if (rinfo->eval_cost.startup < 0)
    4821             :         {
    4822             :             cost_qual_eval_context locContext;
    4823             : 
    4824      597028 :             locContext.root = context->root;
    4825      597028 :             locContext.total.startup = 0;
    4826      597028 :             locContext.total.per_tuple = 0;
    4827             : 
    4828             :             /*
    4829             :              * For an OR clause, recurse into the marked-up tree so that we
    4830             :              * set the eval_cost for contained RestrictInfos too.
    4831             :              */
    4832      597028 :             if (rinfo->orclause)
    4833        9530 :                 cost_qual_eval_walker((Node *) rinfo->orclause, &locContext);
    4834             :             else
    4835      587498 :                 cost_qual_eval_walker((Node *) rinfo->clause, &locContext);
    4836             : 
    4837             :             /*
    4838             :              * If the RestrictInfo is marked pseudoconstant, it will be tested
    4839             :              * only once, so treat its cost as all startup cost.
    4840             :              */
    4841      597028 :             if (rinfo->pseudoconstant)
    4842             :             {
    4843             :                 /* count one execution during startup */
    4844       10064 :                 locContext.total.startup += locContext.total.per_tuple;
    4845       10064 :                 locContext.total.per_tuple = 0;
    4846             :             }
    4847      597028 :             rinfo->eval_cost = locContext.total;
    4848             :         }
    4849     4400302 :         context->total.startup += rinfo->eval_cost.startup;
    4850     4400302 :         context->total.per_tuple += rinfo->eval_cost.per_tuple;
    4851             :         /* do NOT recurse into children */
    4852     4400302 :         return false;
    4853             :     }
    4854             : 
    4855             :     /*
    4856             :      * For each operator or function node in the given tree, we charge the
    4857             :      * estimated execution cost given by pg_proc.procost (remember to multiply
    4858             :      * this by cpu_operator_cost).
    4859             :      *
    4860             :      * Vars and Consts are charged zero, and so are boolean operators (AND,
    4861             :      * OR, NOT).  Simplistic, but a lot better than no model at all.
    4862             :      *
    4863             :      * Should we try to account for the possibility of short-circuit
    4864             :      * evaluation of AND/OR?  Probably *not*, because that would make the
    4865             :      * results depend on the clause ordering, and we are not in any position
    4866             :      * to expect that the current ordering of the clauses is the one that's
    4867             :      * going to end up being used.  The above per-RestrictInfo caching would
    4868             :      * not mix well with trying to re-order clauses anyway.
    4869             :      *
    4870             :      * Another issue that is entirely ignored here is that if a set-returning
    4871             :      * function is below top level in the tree, the functions/operators above
    4872             :      * it will need to be evaluated multiple times.  In practical use, such
    4873             :      * cases arise so seldom as to not be worth the added complexity needed;
    4874             :      * moreover, since our rowcount estimates for functions tend to be pretty
    4875             :      * phony, the results would also be pretty phony.
    4876             :      */
    4877     4993192 :     if (IsA(node, FuncExpr))
    4878             :     {
    4879      342150 :         add_function_cost(context->root, ((FuncExpr *) node)->funcid, node,
    4880             :                           &context->total);
    4881             :     }
    4882     4651042 :     else if (IsA(node, OpExpr) ||
    4883     3998546 :              IsA(node, DistinctExpr) ||
    4884     3997472 :              IsA(node, NullIfExpr))
    4885             :     {
    4886             :         /* rely on struct equivalence to treat these all alike */
    4887      653694 :         set_opfuncid((OpExpr *) node);
    4888      653694 :         add_function_cost(context->root, ((OpExpr *) node)->opfuncid, node,
    4889             :                           &context->total);
    4890             :     }
    4891     3997348 :     else if (IsA(node, ScalarArrayOpExpr))
    4892             :     {
    4893       44388 :         ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) node;
    4894       44388 :         Node       *arraynode = (Node *) lsecond(saop->args);
    4895             :         QualCost    sacosts;
    4896             :         QualCost    hcosts;
    4897       44388 :         double      estarraylen = estimate_array_length(context->root, arraynode);
    4898             : 
    4899       44388 :         set_sa_opfuncid(saop);
    4900       44388 :         sacosts.startup = sacosts.per_tuple = 0;
    4901       44388 :         add_function_cost(context->root, saop->opfuncid, NULL,
    4902             :                           &sacosts);
    4903             : 
    4904       44388 :         if (OidIsValid(saop->hashfuncid))
    4905             :         {
    4906             :             /* Handle costs for hashed ScalarArrayOpExpr */
    4907         430 :             hcosts.startup = hcosts.per_tuple = 0;
    4908             : 
    4909         430 :             add_function_cost(context->root, saop->hashfuncid, NULL, &hcosts);
    4910         430 :             context->total.startup += sacosts.startup + hcosts.startup;
    4911             : 
    4912             :             /* Estimate the cost of building the hashtable. */
    4913         430 :             context->total.startup += estarraylen * hcosts.per_tuple;
    4914             : 
    4915             :             /*
    4916             :              * XXX should we charge a little bit for sacosts.per_tuple when
    4917             :              * building the table, or is it ok to assume there will be zero
    4918             :              * building the hash table, or is it ok to assume there will
    4919             :              * be zero hash collisions?
    4920             : 
    4921             :             /*
    4922             :              * Charge for hashtable lookups.  Charge a single hash and a
    4923             :              * single comparison.
    4924             :              */
    4925         430 :             context->total.per_tuple += hcosts.per_tuple + sacosts.per_tuple;
    4926             :         }
    4927             :         else
    4928             :         {
    4929             :             /*
    4930             :              * Estimate that the operator will be applied to about half of the
    4931             :              * array elements before the answer is determined.
    4932             :              */
    4933       43958 :             context->total.startup += sacosts.startup;
    4934       87916 :             context->total.per_tuple += sacosts.per_tuple *
    4935       43958 :                 estimate_array_length(context->root, arraynode) * 0.5;
    4936             :         }
    4937             :     }
    4938     3952960 :     else if (IsA(node, Aggref) ||
    4939     3887684 :              IsA(node, WindowFunc))
    4940             :     {
    4941             :         /*
    4942             :          * Aggref and WindowFunc nodes are (and should be) treated like Vars,
    4943             :          * ie, zero execution cost in the current model, because they behave
    4944             :          * essentially like Vars at execution.  We disregard the costs of
    4945             :          * their input expressions for the same reason.  The actual execution
    4946             :          * costs of the aggregate/window functions and their arguments have to
    4947             :          * be factored into plan-node-specific costing of the Agg or WindowAgg
    4948             :          * plan node.
    4949             :          */
    4950       69142 :         return false;           /* don't recurse into children */
    4951             :     }
    4952     3883818 :     else if (IsA(node, GroupingFunc))
    4953             :     {
    4954             :         /* Treat this as having cost 1 */
    4955         422 :         context->total.per_tuple += cpu_operator_cost;
    4956         422 :         return false;           /* don't recurse into children */
    4957             :     }
    4958     3883396 :     else if (IsA(node, CoerceViaIO))
    4959             :     {
    4960       22242 :         CoerceViaIO *iocoerce = (CoerceViaIO *) node;
    4961             :         Oid         iofunc;
    4962             :         Oid         typioparam;
    4963             :         bool        typisvarlena;
    4964             : 
    4965             :         /* check the result type's input function */
    4966       22242 :         getTypeInputInfo(iocoerce->resulttype,
    4967             :                          &iofunc, &typioparam);
    4968       22242 :         add_function_cost(context->root, iofunc, NULL,
    4969             :                           &context->total);
    4970             :         /* check the input type's output function */
    4971       22242 :         getTypeOutputInfo(exprType((Node *) iocoerce->arg),
    4972             :                           &iofunc, &typisvarlena);
    4973       22242 :         add_function_cost(context->root, iofunc, NULL,
    4974             :                           &context->total);
    4975             :     }
    4976     3861154 :     else if (IsA(node, ArrayCoerceExpr))
    4977             :     {
    4978        5102 :         ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node;
    4979             :         QualCost    perelemcost;
    4980             : 
    4981        5102 :         cost_qual_eval_node(&perelemcost, (Node *) acoerce->elemexpr,
    4982             :                             context->root);
    4983        5102 :         context->total.startup += perelemcost.startup;
    4984        5102 :         if (perelemcost.per_tuple > 0)
    4985          66 :             context->total.per_tuple += perelemcost.per_tuple *
    4986          66 :                 estimate_array_length(context->root, (Node *) acoerce->arg);
    4987             :     }
    4988     3856052 :     else if (IsA(node, RowCompareExpr))
    4989             :     {
    4990             :         /* Conservatively assume we will check all the columns */
    4991         252 :         RowCompareExpr *rcexpr = (RowCompareExpr *) node;
    4992             :         ListCell   *lc;
    4993             : 
    4994         810 :         foreach(lc, rcexpr->opnos)
    4995             :         {
    4996         558 :             Oid         opid = lfirst_oid(lc);
    4997             : 
    4998         558 :             add_function_cost(context->root, get_opcode(opid), NULL,
    4999             :                               &context->total);
    5000             :         }
    5001             :     }
    5002     3855800 :     else if (IsA(node, MinMaxExpr) ||
    5003     3855528 :              IsA(node, SQLValueFunction) ||
    5004     3850762 :              IsA(node, XmlExpr) ||
    5005     3850060 :              IsA(node, CoerceToDomain) ||
    5006     3840342 :              IsA(node, NextValueExpr) ||
    5007     3839982 :              IsA(node, JsonExpr))
    5008             :     {
    5009             :         /* Treat all these as having cost 1 */
    5010       18390 :         context->total.per_tuple += cpu_operator_cost;
    5011             :     }
    5012     3837410 :     else if (IsA(node, SubLink))
    5013             :     {
    5014             :         /* This routine should not be applied to un-planned expressions */
    5015           0 :         elog(ERROR, "cannot handle unplanned sub-select");
    5016             :     }
    5017     3837410 :     else if (IsA(node, SubPlan))
    5018             :     {
    5019             :         /*
    5020             :          * A subplan node in an expression typically indicates that the
    5021             :          * subplan will be executed on each evaluation, so charge accordingly.
    5022             :          * (Sub-selects that can be executed as InitPlans have already been
    5023             :          * removed from the expression.)
    5024             :          */
    5025       42974 :         SubPlan    *subplan = (SubPlan *) node;
    5026             : 
    5027       42974 :         context->total.startup += subplan->startup_cost;
    5028       42974 :         context->total.per_tuple += subplan->per_call_cost;
    5029             : 
    5030             :         /*
    5031             :          * We don't want to recurse into the testexpr, because it was already
    5032             :          * counted in the SubPlan node's costs.  So we're done.
    5033             :          */
    5034       42974 :         return false;
    5035             :     }
    5036     3794436 :     else if (IsA(node, AlternativeSubPlan))
    5037             :     {
    5038             :         /*
    5039             :          * Arbitrarily use the first alternative plan for costing.  (We should
    5040             :          * certainly only include one alternative, and we don't yet have
    5041             :          * enough information to know which one the executor is most likely to
    5042             :          * use.)
    5043             :          */
    5044        1848 :         AlternativeSubPlan *asplan = (AlternativeSubPlan *) node;
    5045             : 
    5046        1848 :         return cost_qual_eval_walker((Node *) linitial(asplan->subplans),
    5047             :                                      context);
    5048             :     }
    5049     3792588 :     else if (IsA(node, PlaceHolderVar))
    5050             :     {
    5051             :         /*
    5052             :          * A PlaceHolderVar should be given cost zero when considering general
    5053             :          * expression evaluation costs.  The expense of doing the contained
    5054             :          * expression is charged as part of the tlist eval costs of the scan
    5055             :          * or join where the PHV is first computed (see set_rel_width and
    5056             :          * add_placeholders_to_joinrel).  If we charged it again here, we'd be
    5057             :          * double-counting the cost for each level of plan that the PHV
    5058             :          * bubbles up through.  Hence, return without recursing into the
    5059             :          * phexpr.
    5060             :          */
    5061        5196 :         return false;
    5062             :     }
    5063             : 
    5064             :     /* recurse into children */
    5065     4873610 :     return expression_tree_walker(node, cost_qual_eval_walker, context);
    5066             : }
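
/*
 * Editor's illustrative sketch -- not part of costsize.c.  It condenses the
 * two ScalarArrayOpExpr charges made by the walker above, with the
 * sacosts/hcosts startup bookkeeping omitted: a hashed SAOP pays to build
 * the hash table once at startup and then one hash plus one comparison per
 * probe, while the unhashed form expects to apply the comparison operator
 * to about half of the array elements for each tuple.
 */
static void
sketch_saop_charges(int hashable, double array_len,
                    double cmp_cost, double hash_cost,
                    double *startup, double *per_tuple)
{
    if (hashable)
    {
        *startup += array_len * hash_cost;      /* build the hash table */
        *per_tuple += hash_cost + cmp_cost;     /* one lookup per tuple */
    }
    else
        *per_tuple += cmp_cost * array_len * 0.5;   /* linear scan */
}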
    5067             : 
    5068             : /*
    5069             :  * get_restriction_qual_cost
    5070             :  *    Compute evaluation costs of a baserel's restriction quals, plus any
    5071             :  *    movable join quals that have been pushed down to the scan.
    5072             :  *    Results are returned into *qpqual_cost.
    5073             :  *
    5074             :  * This is a convenience subroutine that works for seqscans and other cases
    5075             :  * where all the given quals will be evaluated the hard way.  It's not useful
    5076             :  * for cost_index(), for example, where the index machinery takes care of
    5077             :  * some of the quals.  We assume baserestrictcost was previously set by
    5078             :  * set_baserel_size_estimates().
    5079             :  */
    5080             : static void
    5081     1078884 : get_restriction_qual_cost(PlannerInfo *root, RelOptInfo *baserel,
    5082             :                           ParamPathInfo *param_info,
    5083             :                           QualCost *qpqual_cost)
    5084             : {
    5085     1078884 :     if (param_info)
    5086             :     {
    5087             :         /* Include costs of pushed-down clauses */
    5088      242702 :         cost_qual_eval(qpqual_cost, param_info->ppi_clauses, root);
    5089             : 
    5090      242702 :         qpqual_cost->startup += baserel->baserestrictcost.startup;
    5091      242702 :         qpqual_cost->per_tuple += baserel->baserestrictcost.per_tuple;
    5092             :     }
    5093             :     else
    5094      836182 :         *qpqual_cost = baserel->baserestrictcost;
    5095     1078884 : }
    5096             : 
    5097             : 
    5098             : /*
    5099             :  * compute_semi_anti_join_factors
    5100             :  *    Estimate how much of the inner input a SEMI, ANTI, or inner_unique join
    5101             :  *    can be expected to scan.
    5102             :  *
    5103             :  * In a hash or nestloop SEMI/ANTI join, the executor will stop scanning
    5104             :  * inner rows as soon as it finds a match to the current outer row.
    5105             :  * The same happens if we have detected the inner rel is unique.
    5106             :  * We should therefore adjust some of the cost components for this effect.
    5107             :  * This function computes some estimates needed for these adjustments.
    5108             :  * These estimates will be the same regardless of the particular paths used
    5109             :  * for the outer and inner relation, so we compute these once and then pass
    5110             :  * them to all the join cost estimation functions.
    5111             :  *
    5112             :  * Input parameters:
    5113             :  *  joinrel: join relation under consideration
    5114             :  *  outerrel: outer relation under consideration
    5115             :  *  innerrel: inner relation under consideration
    5116             :  *  jointype: if not JOIN_SEMI or JOIN_ANTI, we assume it's inner_unique
    5117             :  *  sjinfo: SpecialJoinInfo relevant to this join
    5118             :  *  restrictlist: join quals
    5119             :  * Output parameters:
    5120             :  *  *semifactors is filled in (see pathnodes.h for field definitions)
    5121             :  */
    5122             : void
    5123      212866 : compute_semi_anti_join_factors(PlannerInfo *root,
    5124             :                                RelOptInfo *joinrel,
    5125             :                                RelOptInfo *outerrel,
    5126             :                                RelOptInfo *innerrel,
    5127             :                                JoinType jointype,
    5128             :                                SpecialJoinInfo *sjinfo,
    5129             :                                List *restrictlist,
    5130             :                                SemiAntiJoinFactors *semifactors)
    5131             : {
    5132             :     Selectivity jselec;
    5133             :     Selectivity nselec;
    5134             :     Selectivity avgmatch;
    5135             :     SpecialJoinInfo norm_sjinfo;
    5136             :     List       *joinquals;
    5137             :     ListCell   *l;
    5138             : 
    5139             :     /*
    5140             :      * In an ANTI join, we must ignore clauses that are "pushed down", since
    5141             :      * those won't affect the match logic.  In a SEMI join, we do not
    5142             :      * distinguish joinquals from "pushed down" quals, so just use the whole
    5143             :      * restrictinfo list.  For other outer join types, we should consider only
    5144             :      * non-pushed-down quals, so that this devolves to an IS_OUTER_JOIN check.
    5145             :      */
    5146      212866 :     if (IS_OUTER_JOIN(jointype))
    5147             :     {
    5148       74608 :         joinquals = NIL;
    5149      163336 :         foreach(l, restrictlist)
    5150             :         {
    5151       88728 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    5152             : 
    5153       88728 :             if (!RINFO_IS_PUSHED_DOWN(rinfo, joinrel->relids))
    5154       83906 :                 joinquals = lappend(joinquals, rinfo);
    5155             :         }
    5156             :     }
    5157             :     else
    5158      138258 :         joinquals = restrictlist;
    5159             : 
    5160             :     /*
    5161             :      * Get the JOIN_SEMI or JOIN_ANTI selectivity of the join clauses.
    5162             :      */
    5163      212866 :     jselec = clauselist_selectivity(root,
    5164             :                                     joinquals,
    5165             :                                     0,
    5166             :                                     (jointype == JOIN_ANTI) ? JOIN_ANTI : JOIN_SEMI,
    5167             :                                     sjinfo);
    5168             : 
    5169             :     /*
    5170             :      * Also get the normal inner-join selectivity of the join clauses.
    5171             :      */
    5172      212866 :     init_dummy_sjinfo(&norm_sjinfo, outerrel->relids, innerrel->relids);
    5173             : 
    5174      212866 :     nselec = clauselist_selectivity(root,
    5175             :                                     joinquals,
    5176             :                                     0,
    5177             :                                     JOIN_INNER,
    5178             :                                     &norm_sjinfo);
    5179             : 
    5180             :     /* Avoid leaking a lot of ListCells */
    5181      212866 :     if (IS_OUTER_JOIN(jointype))
    5182       74608 :         list_free(joinquals);
    5183             : 
    5184             :     /*
    5185             :      * jselec can be interpreted as the fraction of outer-rel rows that have
    5186             :      * any matches (this is true for both SEMI and ANTI cases).  And nselec is
    5187             :      * the fraction of the Cartesian product that matches.  So, the average
    5188             :      * number of matches for each outer-rel row that has at least one match is
    5189             :      * nselec * inner_rows / jselec.
    5190             :      *
    5191             :      * Note: it is correct to use the inner rel's "rows" count here, even
    5192             :      * though we might later be considering a parameterized inner path with
    5193             :      * fewer rows.  This is because we have included all the join clauses in
    5194             :      * the selectivity estimate.
    5195             :      */
    5196      212866 :     if (jselec > 0)              /* protect against zero divide */
    5197             :     {
    5198      212474 :         avgmatch = nselec * innerrel->rows / jselec;
    5199             :         /* Clamp to sane range */
    5200      212474 :         avgmatch = Max(1.0, avgmatch);
    5201             :     }
    5202             :     else
    5203         392 :         avgmatch = 1.0;
    5204             : 
    5205      212866 :     semifactors->outer_match_frac = jselec;
    5206      212866 :     semifactors->match_count = avgmatch;
    5207      212866 : }
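
/*
 * Editor's worked example -- numbers invented, not from the source.  With
 * inner_rows = 1000, an inner-join selectivity nselec = 0.002, and a
 * semi-join selectivity jselec = 0.5, each outer row that has any match is
 * expected to match 0.002 * 1000 / 0.5 = 4 inner rows; the result is
 * clamped below at 1.0, mirroring the guards above.
 */
static double
sketch_avg_match_count(double nselec, double jselec, double inner_rows)
{
    double      avgmatch;

    if (jselec <= 0)            /* protect against zero divide */
        return 1.0;
    avgmatch = nselec * inner_rows / jselec;
    return (avgmatch > 1.0) ? avgmatch : 1.0;
}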
    5208             : 
    5209             : /*
    5210             :  * has_indexed_join_quals
    5211             :  *    Check whether all the joinquals of a nestloop join are used as
    5212             :  *    inner index quals.
    5213             :  *
    5214             :  * If the inner path of a SEMI/ANTI join is an indexscan (including bitmap
    5215             :  * indexscan) that uses all the joinquals as indexquals, we can assume that an
    5216             :  * unmatched outer tuple is cheap to process, whereas otherwise it's probably
    5217             :  * expensive.
    5218             :  */
    5219             : static bool
    5220      913154 : has_indexed_join_quals(NestPath *path)
    5221             : {
    5222      913154 :     JoinPath   *joinpath = &path->jpath;
    5223      913154 :     Relids      joinrelids = joinpath->path.parent->relids;
    5224      913154 :     Path       *innerpath = joinpath->innerjoinpath;
    5225             :     List       *indexclauses;
    5226             :     bool        found_one;
    5227             :     ListCell   *lc;
    5228             : 
    5229             :     /* If join still has quals to evaluate, it's not fast */
    5230      913154 :     if (joinpath->joinrestrictinfo != NIL)
    5231      648312 :         return false;
    5232             :     /* Nor if the inner path isn't parameterized at all */
    5233      264842 :     if (innerpath->param_info == NULL)
    5234        3300 :         return false;
    5235             : 
    5236             :     /* Find the indexclauses list for the inner scan */
    5237      261542 :     switch (innerpath->pathtype)
    5238             :     {
    5239      158728 :         case T_IndexScan:
    5240             :         case T_IndexOnlyScan:
    5241      158728 :             indexclauses = ((IndexPath *) innerpath)->indexclauses;
    5242      158728 :             break;
    5243         270 :         case T_BitmapHeapScan:
    5244             :             {
    5245             :                 /* Accept only a simple bitmap scan, not AND/OR cases */
    5246         270 :                 Path       *bmqual = ((BitmapHeapPath *) innerpath)->bitmapqual;
    5247             : 
    5248         270 :                 if (IsA(bmqual, IndexPath))
    5249         222 :                     indexclauses = ((IndexPath *) bmqual)->indexclauses;
    5250             :                 else
    5251          48 :                     return false;
    5252         222 :                 break;
    5253             :             }
    5254      102544 :         default:
    5255             : 
    5256             :             /*
    5257             :              * If it's not a simple indexscan, it probably doesn't run quickly
    5258             :              * for zero rows out, even if it's a parameterized path using all
    5259             :              * the joinquals.
    5260             :              */
    5261      102544 :             return false;
    5262             :     }
    5263             : 
    5264             :     /*
    5265             :      * Examine the inner path's param clauses.  Any that are from the outer
    5266             :      * path must be found in the indexclauses list, either exactly or in an
    5267             :      * equivalent form generated by equivclass.c.  Also, we must find at least
    5268             :      * one such clause, else it's a clauseless join which isn't fast.
    5269             :      */
    5270      158950 :     found_one = false;
    5271      313792 :     foreach(lc, innerpath->param_info->ppi_clauses)
    5272             :     {
    5273      163518 :         RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);
    5274             : 
    5275      163518 :         if (join_clause_is_movable_into(rinfo,
    5276      163518 :                                         innerpath->parent->relids,
    5277             :                                         joinrelids))
    5278             :         {
    5279      162966 :             if (!is_redundant_with_indexclauses(rinfo, indexclauses))
    5280        8676 :                 return false;
    5281      154290 :             found_one = true;
    5282             :         }
    5283             :     }
    5284      150274 :     return found_one;
    5285             : }
    5286             : 
    5287             : 
    5288             : /*
    5289             :  * approx_tuple_count
    5290             :  *      Quick-and-dirty estimation of the number of join rows passing
    5291             :  *      a set of qual conditions.
    5292             :  *
    5293             :  * The quals can be either an implicitly-ANDed list of boolean expressions,
    5294             :  * or a list of RestrictInfo nodes (typically the latter).
    5295             :  *
    5296             :  * We intentionally compute the selectivity under JOIN_INNER rules, even
    5297             :  * if it's some type of outer join.  This is appropriate because we are
    5298             :  * trying to figure out how many tuples pass the initial merge or hash
    5299             :  * join step.
    5300             :  *
    5301             :  * This is quick-and-dirty because we bypass clauselist_selectivity, and
    5302             :  * simply multiply the independent clause selectivities together.  Now
    5303             :  * clauselist_selectivity often can't do any better than that anyhow, but
    5304             :  * for some situations (such as range constraints) it is smarter.  However,
    5305             :  * we can't effectively cache the results of clauselist_selectivity, whereas
    5306             :  * the individual clause selectivities can be and are cached.
    5307             :  *
    5308             :  * Since we are only using the results to estimate how many potential
    5309             :  * output tuples are generated and passed through qpqual checking, it
    5310             :  * seems OK to live with the approximation.
    5311             :  */
    5312             : static double
    5313      791340 : approx_tuple_count(PlannerInfo *root, JoinPath *path, List *quals)
    5314             : {
    5315             :     double      tuples;
    5316      791340 :     double      outer_tuples = path->outerjoinpath->rows;
    5317      791340 :     double      inner_tuples = path->innerjoinpath->rows;
    5318             :     SpecialJoinInfo sjinfo;
    5319      791340 :     Selectivity selec = 1.0;
    5320             :     ListCell   *l;
    5321             : 
    5322             :     /*
    5323             :      * Make up a SpecialJoinInfo for JOIN_INNER semantics.
    5324             :      */
    5325      791340 :     init_dummy_sjinfo(&sjinfo, path->outerjoinpath->parent->relids,
    5326      791340 :                       path->innerjoinpath->parent->relids);
    5327             : 
    5328             :     /* Get the approximate selectivity */
    5329     1673196 :     foreach(l, quals)
    5330             :     {
    5331      881856 :         Node       *qual = (Node *) lfirst(l);
    5332             : 
    5333             :         /* Note that clause_selectivity will be able to cache its result */
    5334      881856 :         selec *= clause_selectivity(root, qual, 0, JOIN_INNER, &sjinfo);
    5335             :     }
    5336             : 
    5337             :     /* Apply it to the input relation sizes */
    5338      791340 :     tuples = selec * outer_tuples * inner_tuples;
    5339             : 
    5340      791340 :     return clamp_row_est(tuples);
    5341             : }
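
/*
 * Editor's illustrative sketch -- not part of costsize.c.  The approximation
 * above, in isolation: clause selectivities are treated as independent and
 * simply multiplied, then applied to the Cartesian product of the input row
 * counts.  The final clamp_row_est() step is left out here.
 */
static double
sketch_approx_tuple_count(const double *clause_selec, int nclauses,
                          double outer_tuples, double inner_tuples)
{
    double      selec = 1.0;

    for (int i = 0; i < nclauses; i++)
        selec *= clause_selec[i];
    return selec * outer_tuples * inner_tuples;
}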
    5342             : 
    5343             : 
    5344             : /*
    5345             :  * set_baserel_size_estimates
    5346             :  *      Set the size estimates for the given base relation.
    5347             :  *
    5348             :  * The rel's targetlist and restrictinfo list must have been constructed
    5349             :  * already, and rel->tuples must be set.
    5350             :  *
    5351             :  * We set the following fields of the rel node:
    5352             :  *  rows: the estimated number of output tuples (after applying
    5353             :  *        restriction clauses).
    5354             :  *  width: the estimated average output tuple width in bytes.
    5355             :  *  baserestrictcost: estimated cost of evaluating baserestrictinfo clauses.
    5356             :  */
    5357             : void
    5358      512176 : set_baserel_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5359             : {
    5360             :     double      nrows;
    5361             : 
    5362             :     /* Should only be applied to base relations */
    5363             :     Assert(rel->relid > 0);
    5364             : 
    5365     1024322 :     nrows = rel->tuples *
    5366      512176 :         clauselist_selectivity(root,
    5367             :                                rel->baserestrictinfo,
    5368             :                                0,
    5369             :                                JOIN_INNER,
    5370             :                                NULL);
    5371             : 
    5372      512146 :     rel->rows = clamp_row_est(nrows);
    5373             : 
    5374      512146 :     cost_qual_eval(&rel->baserestrictcost, rel->baserestrictinfo, root);
    5375             : 
    5376      512146 :     set_rel_width(root, rel);
    5377      512146 : }
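
/*
 * Editor's worked example -- numbers invented, not from the source.  For a
 * base rel with rel->tuples = 100000 and a combined restriction selectivity
 * of 0.0015, nrows = 150 and rel->rows = clamp_row_est(150) = 150.  With a
 * selectivity of 0.000001 the raw estimate would be 0.1, which
 * clamp_row_est() rounds up to the minimum of 1 row.
 */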
    5378             : 
    5379             : /*
    5380             :  * get_parameterized_baserel_size
    5381             :  *      Make a size estimate for a parameterized scan of a base relation.
    5382             :  *
    5383             :  * 'param_clauses' lists the additional join clauses to be used.
    5384             :  *
    5385             :  * set_baserel_size_estimates must have been applied already.
    5386             :  */
    5387             : double
    5388      159668 : get_parameterized_baserel_size(PlannerInfo *root, RelOptInfo *rel,
    5389             :                                List *param_clauses)
    5390             : {
    5391             :     List       *allclauses;
    5392             :     double      nrows;
    5393             : 
    5394             :     /*
    5395             :      * Estimate the number of rows returned by the parameterized scan, knowing
    5396             :      * that it will apply all the extra join clauses as well as the rel's own
    5397             :      * restriction clauses.  Note that we force the clauses to be treated as
    5398             :      * non-join clauses during selectivity estimation.
    5399             :      */
    5400      159668 :     allclauses = list_concat_copy(param_clauses, rel->baserestrictinfo);
    5401      319336 :     nrows = rel->tuples *
    5402      159668 :         clauselist_selectivity(root,
    5403             :                                allclauses,
    5404      159668 :                                rel->relid,   /* do not use 0! */
    5405             :                                JOIN_INNER,
    5406             :                                NULL);
    5407      159668 :     nrows = clamp_row_est(nrows);
    5408             :     /* For safety, make sure result is not more than the base estimate */
    5409      159668 :     if (nrows > rel->rows)
    5410           0 :         nrows = rel->rows;
    5411      159668 :     return nrows;
    5412             : }
    5413             : 
    5414             : /*
    5415             :  * set_joinrel_size_estimates
    5416             :  *      Set the size estimates for the given join relation.
    5417             :  *
    5418             :  * The rel's targetlist must have been constructed already, and a
    5419             :  * restriction clause list that matches the given component rels must
    5420             :  * be provided.
    5421             :  *
    5422             :  * Since there is more than one way to make a joinrel for more than two
    5423             :  * base relations, the results we get here could depend on which component
    5424             :  * rel pair is provided.  In theory we should get the same answers no matter
    5425             :  * which pair is provided; in practice, since the selectivity estimation
    5426             :  * routines don't handle all cases equally well, we might not.  But there's
    5427             :  * not much to be done about it.  (Would it make sense to repeat the
    5428             :  * calculations for each pair of input rels that's encountered, and somehow
    5429             :  * average the results?  Probably way more trouble than it's worth, and
    5430             :  * anyway we must keep the rowcount estimate the same for all paths for the
    5431             :  * joinrel.)
    5432             :  *
    5433             :  * We set only the rows field here.  The reltarget field was already set by
    5434             :  * build_joinrel_tlist, and baserestrictcost is not used for join rels.
    5435             :  */
    5436             : void
    5437      253956 : set_joinrel_size_estimates(PlannerInfo *root, RelOptInfo *rel,
    5438             :                            RelOptInfo *outer_rel,
    5439             :                            RelOptInfo *inner_rel,
    5440             :                            SpecialJoinInfo *sjinfo,
    5441             :                            List *restrictlist)
    5442             : {
    5443      253956 :     rel->rows = calc_joinrel_size_estimate(root,
    5444             :                                            rel,
    5445             :                                            outer_rel,
    5446             :                                            inner_rel,
    5447             :                                            outer_rel->rows,
    5448             :                                            inner_rel->rows,
    5449             :                                            sjinfo,
    5450             :                                            restrictlist);
    5451      253956 : }
    5452             : 
    5453             : /*
    5454             :  * get_parameterized_joinrel_size
    5455             :  *      Make a size estimate for a parameterized scan of a join relation.
    5456             :  *
    5457             :  * 'rel' is the joinrel under consideration.
    5458             :  * 'outer_path', 'inner_path' are (probably also parameterized) Paths that
    5459             :  *      produce the relations being joined.
    5460             :  * 'sjinfo' is any SpecialJoinInfo relevant to this join.
    5461             :  * 'restrict_clauses' lists the join clauses that need to be applied at the
    5462             :  * join node (including any movable clauses that were moved down to this join,
    5463             :  * and not including any movable clauses that were pushed down into the
    5464             :  * child paths).
    5465             :  *
    5466             :  * set_joinrel_size_estimates must have been applied already.
    5467             :  */
    5468             : double
    5469       10048 : get_parameterized_joinrel_size(PlannerInfo *root, RelOptInfo *rel,
    5470             :                                Path *outer_path,
    5471             :                                Path *inner_path,
    5472             :                                SpecialJoinInfo *sjinfo,
    5473             :                                List *restrict_clauses)
    5474             : {
    5475             :     double      nrows;
    5476             : 
    5477             :     /*
    5478             :      * Estimate the number of rows returned by the parameterized join as the
    5479             :      * sizes of the input paths times the selectivity of the clauses that have
    5480             :      * ended up at this join node.
    5481             :      *
    5482             :      * As with set_joinrel_size_estimates, the rowcount estimate could depend
    5483             :      * on the pair of input paths provided, though ideally we'd get the same
    5484             :      * estimate for any pair with the same parameterization.
    5485             :      */
    5486       10048 :     nrows = calc_joinrel_size_estimate(root,
    5487             :                                        rel,
    5488             :                                        outer_path->parent,
    5489             :                                        inner_path->parent,
    5490             :                                        outer_path->rows,
    5491             :                                        inner_path->rows,
    5492             :                                        sjinfo,
    5493             :                                        restrict_clauses);
    5494             :     /* For safety, make sure result is not more than the base estimate */
    5495       10048 :     if (nrows > rel->rows)
    5496          12 :         nrows = rel->rows;
    5497       10048 :     return nrows;
    5498             : }
    5499             : 
    5500             : /*
    5501             :  * calc_joinrel_size_estimate
    5502             :  *      Workhorse for set_joinrel_size_estimates and
    5503             :  *      get_parameterized_joinrel_size.
    5504             :  *
    5505             :  * outer_rel/inner_rel are the relations being joined, but they should be
    5506             :  * assumed to have sizes outer_rows/inner_rows; those numbers might be less
    5507             :  * than what rel->rows says, when we are considering parameterized paths.
    5508             :  */
    5509             : static double
    5510      264004 : calc_joinrel_size_estimate(PlannerInfo *root,
    5511             :                            RelOptInfo *joinrel,
    5512             :                            RelOptInfo *outer_rel,
    5513             :                            RelOptInfo *inner_rel,
    5514             :                            double outer_rows,
    5515             :                            double inner_rows,
    5516             :                            SpecialJoinInfo *sjinfo,
    5517             :                            List *restrictlist)
    5518             : {
    5519      264004 :     JoinType    jointype = sjinfo->jointype;
    5520             :     Selectivity fkselec;
    5521             :     Selectivity jselec;
    5522             :     Selectivity pselec;
    5523             :     double      nrows;
    5524             : 
    5525             :     /*
    5526             :      * Compute joinclause selectivity.  Note that we are only considering
    5527             :      * clauses that become restriction clauses at this join level; we are not
    5528             :      * double-counting them because they were not considered in estimating the
    5529             :      * sizes of the component rels.
    5530             :      *
    5531             :      * First, see whether any of the joinclauses can be matched to known FK
    5532             :      * constraints.  If so, drop those clauses from the restrictlist, and
    5533             :      * instead estimate their selectivity using FK semantics.  (We do this
    5534             :      * without regard to whether said clauses are local or "pushed down".
    5535             :      * Probably, an FK-matching clause could never be seen as pushed down at
    5536             :      * an outer join, since it would be strict and hence would be grounds for
    5537             :      * join strength reduction.)  fkselec gets the net selectivity for
    5538             :      * FK-matching clauses, or 1.0 if there are none.
    5539             :      */
    5540      264004 :     fkselec = get_foreign_key_join_selectivity(root,
    5541             :                                                outer_rel->relids,
    5542             :                                                inner_rel->relids,
    5543             :                                                sjinfo,
    5544             :                                                &restrictlist);
    5545             : 
    5546             :     /*
    5547             :      * For an outer join, we have to distinguish the selectivity of the join's
    5548             :      * own clauses (JOIN/ON conditions) from any clauses that were "pushed
    5549             :      * down".  For inner joins we just count them all as joinclauses.
    5550             :      */
    5551      264004 :     if (IS_OUTER_JOIN(jointype))
    5552             :     {
    5553       80016 :         List       *joinquals = NIL;
    5554       80016 :         List       *pushedquals = NIL;
    5555             :         ListCell   *l;
    5556             : 
    5557             :         /* Grovel through the clauses to separate into two lists */
    5558      180308 :         foreach(l, restrictlist)
    5559             :         {
    5560      100292 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, l);
    5561             : 
    5562      100292 :             if (RINFO_IS_PUSHED_DOWN(rinfo, joinrel->relids))
    5563        4270 :                 pushedquals = lappend(pushedquals, rinfo);
    5564             :             else
    5565       96022 :                 joinquals = lappend(joinquals, rinfo);
    5566             :         }
    5567             : 
    5568             :         /* Get the separate selectivities */
    5569       80016 :         jselec = clauselist_selectivity(root,
    5570             :                                         joinquals,
    5571             :                                         0,
    5572             :                                         jointype,
    5573             :                                         sjinfo);
    5574       80016 :         pselec = clauselist_selectivity(root,
    5575             :                                         pushedquals,
    5576             :                                         0,
    5577             :                                         jointype,
    5578             :                                         sjinfo);
    5579             : 
    5580             :         /* Avoid leaking a lot of ListCells */
    5581       80016 :         list_free(joinquals);
    5582       80016 :         list_free(pushedquals);
    5583             :     }
    5584             :     else
    5585             :     {
    5586      183988 :         jselec = clauselist_selectivity(root,
    5587             :                                         restrictlist,
    5588             :                                         0,
    5589             :                                         jointype,
    5590             :                                         sjinfo);
    5591      183988 :         pselec = 0.0;           /* not used, keep compiler quiet */
    5592             :     }
    5593             : 
    5594             :     /*
    5595             :      * Basically, we multiply the size of the Cartesian product by the selectivity.
    5596             :      *
    5597             :      * If we are doing an outer join, take that into account: the joinqual
    5598             :      * selectivity has to be clamped using the knowledge that the output must
    5599             :      * be at least as large as the non-nullable input.  However, any
    5600             :      * pushed-down quals are applied after the outer join, so their
    5601             :      * selectivity applies fully.
    5602             :      *
    5603             :      * For JOIN_SEMI and JOIN_ANTI, the selectivity is defined as the fraction
    5604             :      * of LHS rows that have matches, and we apply that straightforwardly.
    5605             :      */
    5606      264004 :     switch (jointype)
    5607             :     {
    5608      175896 :         case JOIN_INNER:
    5609      175896 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5610             :             /* pselec not used */
    5611      175896 :             break;
    5612       73298 :         case JOIN_LEFT:
    5613       73298 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5614       73298 :             if (nrows < outer_rows)
    5615       28664 :                 nrows = outer_rows;
    5616       73298 :             nrows *= pselec;
    5617       73298 :             break;
    5618        1720 :         case JOIN_FULL:
    5619        1720 :             nrows = outer_rows * inner_rows * fkselec * jselec;
    5620        1720 :             if (nrows < outer_rows)
    5621        1142 :                 nrows = outer_rows;
    5622        1720 :             if (nrows < inner_rows)
    5623         120 :                 nrows = inner_rows;
    5624        1720 :             nrows *= pselec;
    5625        1720 :             break;
    5626        8092 :         case JOIN_SEMI:
    5627        8092 :             nrows = outer_rows * fkselec * jselec;
    5628             :             /* pselec not used */
    5629        8092 :             break;
    5630        4998 :         case JOIN_ANTI:
    5631        4998 :             nrows = outer_rows * (1.0 - fkselec * jselec);
    5632        4998 :             nrows *= pselec;
    5633        4998 :             break;
    5634           0 :         default:
    5635             :             /* other values not expected here */
    5636           0 :             elog(ERROR, "unrecognized join type: %d", (int) jointype);
    5637             :             nrows = 0;          /* keep compiler quiet */
    5638             :             break;
    5639             :     }
    5640             : 
    5641      264004 :     return clamp_row_est(nrows);
    5642             : }
    5643             : 
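
The switch statement above boils down to a few lines of arithmetic. A minimal standalone sketch of the JOIN_INNER and JOIN_LEFT rules, with illustrative row counts and selectivities (clamp_rows is a simplified stand-in for the planner's clamp_row_est):

    #include <stdio.h>
    #include <math.h>

    /* Simplified stand-in for clamp_row_est(): round to an integer and
     * never report fewer than one row. */
    static double
    clamp_rows(double nrows)
    {
        return (nrows <= 1.0) ? 1.0 : rint(nrows);
    }

    int
    main(void)
    {
        double outer_rows = 1000.0;
        double inner_rows = 100.0;
        double fkselec = 1.0;       /* no FK-matched clauses */
        double jselec = 0.0005;     /* joinclause selectivity */
        double pselec = 0.5;        /* pushed-down-qual selectivity */

        /* JOIN_INNER: Cartesian product times selectivity */
        double nrows_inner = outer_rows * inner_rows * fkselec * jselec;

        /* JOIN_LEFT: clamp to the outer size before pushed-down quals */
        double nrows_left = outer_rows * inner_rows * fkselec * jselec;

        if (nrows_left < outer_rows)
            nrows_left = outer_rows;
        nrows_left *= pselec;

        printf("JOIN_INNER: %g rows\n", clamp_rows(nrows_inner)); /* 50 */
        printf("JOIN_LEFT:  %g rows\n", clamp_rows(nrows_left));  /* 500 */
        return 0;
    }

Note how the outer-join clamp dominates here: the raw product estimate (50 rows) is below the non-nullable input's 1000 rows, so it is raised to 1000 before the pushed-down qual halves it.
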
    5644             : /*
    5645             :  * get_foreign_key_join_selectivity
    5646             :  *      Estimate join selectivity for foreign-key-related clauses.
    5647             :  *
    5648             :  * Remove any clauses that can be matched to FK constraints from *restrictlist,
    5649             :  * and return a substitute estimate of their selectivity.  1.0 is returned
    5650             :  * when there are no such clauses.
    5651             :  *
    5652             :  * The reason for treating such clauses specially is that we can get better
    5653             :  * estimates this way than by relying on clauselist_selectivity(), especially
    5654             :  * for multi-column FKs where that function's assumption that the clauses are
    5655             :  * independent falls down badly.  But even with single-column FKs, we may be
    5656             :  * able to get a better answer when the pg_statistic stats are missing or out
    5657             :  * of date.
    5658             :  */
    5659             : static Selectivity
    5660      264004 : get_foreign_key_join_selectivity(PlannerInfo *root,
    5661             :                                  Relids outer_relids,
    5662             :                                  Relids inner_relids,
    5663             :                                  SpecialJoinInfo *sjinfo,
    5664             :                                  List **restrictlist)
    5665             : {
    5666      264004 :     Selectivity fkselec = 1.0;
    5667      264004 :     JoinType    jointype = sjinfo->jointype;
    5668      264004 :     List       *worklist = *restrictlist;
    5669             :     ListCell   *lc;
    5670             : 
    5671             :     /* Consider each FK constraint that is known to match the query */
    5672      265970 :     foreach(lc, root->fkey_list)
    5673             :     {
    5674        1966 :         ForeignKeyOptInfo *fkinfo = (ForeignKeyOptInfo *) lfirst(lc);
    5675             :         bool        ref_is_outer;
    5676             :         List       *removedlist;
    5677             :         ListCell   *cell;
    5678             : 
    5679             :         /*
    5680             :          * This FK is not relevant unless it connects a baserel on one side of
    5681             :          * this join to a baserel on the other side.
    5682             :          */
    5683        3580 :         if (bms_is_member(fkinfo->con_relid, outer_relids) &&
    5684        1614 :             bms_is_member(fkinfo->ref_relid, inner_relids))
    5685        1440 :             ref_is_outer = false;
    5686         866 :         else if (bms_is_member(fkinfo->ref_relid, outer_relids) &&
    5687         340 :                  bms_is_member(fkinfo->con_relid, inner_relids))
    5688         130 :             ref_is_outer = true;
    5689             :         else
    5690         396 :             continue;
    5691             : 
    5692             :         /*
    5693             :          * If we're dealing with a semi/anti join, and the FK's referenced
    5694             :          * relation is on the outside, then knowledge of the FK doesn't help
    5695             :          * us figure out what we need to know (which is the fraction of outer
    5696             :          * rows that have matches).  On the other hand, if the referenced rel
    5697             :          * is on the inside, then all outer rows must have matches in the
    5698             :          * referenced table (ignoring nulls).  But any restriction or join
    5699             :          * clauses that filter that table will reduce the fraction of matches.
    5700             :          * We can account for restriction clauses, but it's too hard to guess
    5701             :          * how many table rows would get through a join that's inside the RHS.
    5702             :          * Hence, if either case applies, punt and ignore the FK.
    5703             :          */
    5704        1570 :         if ((jointype == JOIN_SEMI || jointype == JOIN_ANTI) &&
    5705        1048 :             (ref_is_outer || bms_membership(inner_relids) != BMS_SINGLETON))
    5706          12 :             continue;
    5707             : 
    5708             :         /*
    5709             :          * Modify the restrictlist by removing clauses that match the FK (and
    5710             :          * putting them into removedlist instead).  It seems unsafe to modify
    5711             :          * the originally-passed List structure, so we make a shallow copy the
    5712             :          * first time through.
    5713             :          */
    5714        1558 :         if (worklist == *restrictlist)
    5715        1334 :             worklist = list_copy(worklist);
    5716             : 
    5717        1558 :         removedlist = NIL;
    5718        3252 :         foreach(cell, worklist)
    5719             :         {
    5720        1694 :             RestrictInfo *rinfo = (RestrictInfo *) lfirst(cell);
    5721        1694 :             bool        remove_it = false;
    5722             :             int         i;
    5723             : 
    5724             :             /* Drop this clause if it matches any column of the FK */
    5725        2140 :             for (i = 0; i < fkinfo->nkeys; i++)
    5726             :             {
    5727        2110 :                 if (rinfo->parent_ec)
    5728             :                 {
    5729             :                     /*
    5730             :                      * EC-derived clauses can only match by EC.  It is okay to
    5731             :                      * consider any clause derived from the same EC as
    5732             :                      * matching the FK: even if equivclass.c chose to generate
    5733             :                      * a clause equating some other pair of Vars, it could
    5734             :                      * have generated one equating the FK's Vars.  So for
    5735             :                      * purposes of estimation, we can act as though it did so.
    5736             :                      *
    5737             :                      * Note: checking parent_ec is a bit of a cheat because
    5738             :                      * there are EC-derived clauses that don't have parent_ec
    5739             :                      * set; but such clauses must compare expressions that
    5740             :                      * aren't just Vars, so they cannot match the FK anyway.
    5741             :                      */
    5742         304 :                     if (fkinfo->eclass[i] == rinfo->parent_ec)
    5743             :                     {
    5744         298 :                         remove_it = true;
    5745         298 :                         break;
    5746             :                     }
    5747             :                 }
    5748             :                 else
    5749             :                 {
    5750             :                     /*
    5751             :                      * Otherwise, see if rinfo was previously matched to FK as
    5752             :                      * a "loose" clause.
    5753             :                      */
    5754        1806 :                     if (list_member_ptr(fkinfo->rinfos[i], rinfo))
    5755             :                     {
    5756        1366 :                         remove_it = true;
    5757        1366 :                         break;
    5758             :                     }
    5759             :                 }
    5760             :             }
    5761        1694 :             if (remove_it)
    5762             :             {
    5763        1664 :                 worklist = foreach_delete_current(worklist, cell);
    5764        1664 :                 removedlist = lappend(removedlist, rinfo);
    5765             :             }
    5766             :         }
    5767             : 
    5768             :         /*
    5769             :          * If we failed to remove all the matching clauses we expected to
    5770             :          * find, chicken out and ignore this FK; applying its selectivity
    5771             :          * might result in double-counting.  Put any clauses we did manage to
    5772             :          * remove back into the worklist.
    5773             :          *
    5774             :          * Since the matching clauses are known not outerjoin-delayed, they
    5775             :          * would normally have appeared in the initial joinclause list.  If we
    5776             :          * didn't find them, there are two possibilities:
    5777             :          *
    5778             :          * 1. If the FK match is based on an EC that is ec_has_const, it won't
    5779             :          * have generated any join clauses at all.  We discount such ECs while
    5780             :          * checking to see if we have "all" the clauses.  (Below, we'll adjust
    5781             :          * the selectivity estimate for this case.)
    5782             :          *
    5783             :          * 2. The clauses were matched to some other FK in a previous
    5784             :          * iteration of this loop, and thus removed from worklist.  (A likely
    5785             :          * case is that two FKs are matched to the same EC; there will be only
    5786             :          * one EC-derived clause in the initial list, so the first FK will
    5787             :          * consume it.)  Applying both FKs' selectivity independently risks
    5788             :          * underestimating the join size; in particular, this would undo one
    5789             :          * of the main things that ECs were invented for, namely to avoid
    5790             :          * double-counting the selectivity of redundant equality conditions.
    5791             :          * Later we might think of a reasonable way to combine the estimates,
    5792             :          * but for now, just punt, since this is a fairly uncommon situation.
    5793             :          */
    5794        1558 :         if (removedlist == NIL ||
    5795        1272 :             list_length(removedlist) !=
    5796        1272 :             (fkinfo->nmatched_ec - fkinfo->nconst_ec + fkinfo->nmatched_ri))
    5797             :         {
    5798         286 :             worklist = list_concat(worklist, removedlist);
    5799         286 :             continue;
    5800             :         }
    5801             : 
    5802             :         /*
    5803             :          * Finally we get to the payoff: estimate selectivity using the
    5804             :          * knowledge that each referencing row will match exactly one row in
    5805             :          * the referenced table.
    5806             :          *
    5807             :          * XXX that's not true in the presence of nulls in the referencing
    5808             :          * column(s), so in principle we should derate the estimate for those.
    5809             :          * However (1) if there are any strict restriction clauses for the
    5810             :          * referencing column(s) elsewhere in the query, derating here would
    5811             :          * be double-counting the null fraction, and (2) it's not very clear
    5812             :          * how to combine null fractions for multiple referencing columns. So
    5813             :          * we do nothing for now about correcting for nulls.
    5814             :          *
    5815             :          * XXX another point here is that if either side of an FK constraint
    5816             :          * is an inheritance parent, we estimate as though the constraint
    5817             :          * covers all its children as well.  This is not an unreasonable
    5818             :          * assumption for a referencing table, ie the user probably applied
    5819             :          * identical constraints to all child tables (though perhaps we ought
    5820             :          * to check that).  But it's not possible to have done that for a
    5821             :          * referenced table.  Fortunately, precisely because that doesn't
    5822             :          * work, it is uncommon in practice to have an FK referencing a parent
    5823             :          * table.  So, at least for now, disregard inheritance here.
    5824             :          */
    5825        1272 :         if (jointype == JOIN_SEMI || jointype == JOIN_ANTI)
    5826         824 :         {
    5827             :             /*
    5828             :              * For JOIN_SEMI and JOIN_ANTI, we only get here when the FK's
    5829             :              * referenced table is exactly the inside of the join.  The join
    5830             :              * selectivity is defined as the fraction of LHS rows that have
    5831             :              * matches.  The FK implies that every LHS row has a match *in the
    5832             :              * referenced table*; but any restriction clauses on it will
    5833             :              * reduce the number of matches.  Hence we take the join
    5834             :              * selectivity as equal to the selectivity of the table's
    5835             :              * restriction clauses, which is rows / tuples; but we must guard
    5836             :              * against tuples == 0.
    5837             :              */
    5838         824 :             RelOptInfo *ref_rel = find_base_rel(root, fkinfo->ref_relid);
    5839         824 :             double      ref_tuples = Max(ref_rel->tuples, 1.0);
    5840             : 
    5841         824 :             fkselec *= ref_rel->rows / ref_tuples;
    5842             :         }
    5843             :         else
    5844             :         {
    5845             :             /*
    5846             :              * Otherwise, selectivity is exactly 1/referenced-table-size; but
    5847             :              * guard against tuples == 0.  Note we should use the raw table
    5848             :              * tuple count, not any estimate of its filtered or joined size.
    5849             :              */
    5850         448 :             RelOptInfo *ref_rel = find_base_rel(root, fkinfo->ref_relid);
    5851         448 :             double      ref_tuples = Max(ref_rel->tuples, 1.0);
    5852             : 
    5853         448 :             fkselec *= 1.0 / ref_tuples;
    5854             :         }
    5855             : 
    5856             :         /*
    5857             :          * If any of the FK columns participated in ec_has_const ECs, then
    5858             :          * equivclass.c will have generated "var = const" restrictions for
    5859             :          * each side of the join, thus reducing the sizes of both input
    5860             :          * relations.  Taking the fkselec at face value would amount to
    5861             :          * double-counting the selectivity of the constant restriction for the
    5862             :          * referencing Var.  Hence, look for the restriction clause(s) that
    5863             :          * were applied to the referencing Var(s), and divide out their
    5864             :          * selectivity to correct for this.
    5865             :          */
    5866        1272 :         if (fkinfo->nconst_ec > 0)
    5867             :         {
    5868          24 :             for (int i = 0; i < fkinfo->nkeys; i++)
    5869             :             {
    5870          18 :                 EquivalenceClass *ec = fkinfo->eclass[i];
    5871             : 
    5872          18 :                 if (ec && ec->ec_has_const)
    5873             :                 {
    5874           6 :                     EquivalenceMember *em = fkinfo->fk_eclass_member[i];
    5875           6 :                     RestrictInfo *rinfo = find_derived_clause_for_ec_member(root,
    5876             :                                                                             ec,
    5877             :                                                                             em);
    5878             : 
    5879           6 :                     if (rinfo)
    5880             :                     {
    5881             :                         Selectivity s0;
    5882             : 
    5883           6 :                         s0 = clause_selectivity(root,
    5884             :                                                 (Node *) rinfo,
    5885             :                                                 0,
    5886             :                                                 jointype,
    5887             :                                                 sjinfo);
    5888           6 :                         if (s0 > 0)
    5889           6 :                             fkselec /= s0;
    5890             :                     }
    5891             :                 }
    5892             :             }
    5893             :         }
    5894             :     }
    5895             : 
    5896      264004 :     *restrictlist = worklist;
    5897      264004 :     CLAMP_PROBABILITY(fkselec);
    5898      264004 :     return fkselec;
    5899             : }
    5900             : 
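
For the most common case — an inner join whose only join clause matches a foreign key — the payoff is easy to see: fkselec becomes 1/referenced-table-size, so the Cartesian product collapses to about one match per referencing row. A sketch with hypothetical table sizes ("orders" referencing "customers" is invented for illustration):

    #include <stdio.h>

    int
    main(void)
    {
        double order_rows = 100000.0;   /* referencing ("orders") side */
        double cust_tuples = 5000.0;    /* raw size of referenced table */

        /* Guard against tuples == 0, as the code above does with Max() */
        double ref_tuples = (cust_tuples > 1.0) ? cust_tuples : 1.0;
        double fkselec = 1.0 / ref_tuples;

        /* Inner-join estimate: each order matches exactly one customer. */
        double nrows = order_rows * cust_tuples * fkselec;

        printf("estimated join size: %g rows\n", nrows);   /* 100000 */
        return 0;
    }
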
    5901             : /*
    5902             :  * set_subquery_size_estimates
    5903             :  *      Set the size estimates for a base relation that is a subquery.
    5904             :  *
    5905             :  * The rel's targetlist and restrictinfo list must have been constructed
    5906             :  * already, and the Paths for the subquery must have been completed.
    5907             :  * We look at the subquery's PlannerInfo to extract data.
    5908             :  *
    5909             :  * We set the same fields as set_baserel_size_estimates.
    5910             :  */
    5911             : void
    5912       33498 : set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5913             : {
    5914       33498 :     PlannerInfo *subroot = rel->subroot;
    5915             :     RelOptInfo *sub_final_rel;
    5916             :     ListCell   *lc;
    5917             : 
    5918             :     /* Should only be applied to base relations that are subqueries */
    5919             :     Assert(rel->relid > 0);
    5920             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_SUBQUERY);
    5921             : 
    5922             :     /*
    5923             :      * Copy raw number of output rows from subquery.  All of its paths should
    5924             :      * have the same output rowcount, so just look at cheapest-total.
    5925             :      */
    5926       33498 :     sub_final_rel = fetch_upper_rel(subroot, UPPERREL_FINAL, NULL);
    5927       33498 :     rel->tuples = sub_final_rel->cheapest_total_path->rows;
    5928             : 
    5929             :     /*
    5930             :      * Compute per-output-column width estimates by examining the subquery's
    5931             :      * targetlist.  For any output that is a plain Var, get the width estimate
    5932             :      * that was made while planning the subquery.  Otherwise, we leave it to
    5933             :      * set_rel_width to fill in a datatype-based default estimate.
    5934             :      */
    5935      162566 :     foreach(lc, subroot->parse->targetList)
    5936             :     {
    5937      129068 :         TargetEntry *te = lfirst_node(TargetEntry, lc);
    5938      129068 :         Node       *texpr = (Node *) te->expr;
    5939      129068 :         int32       item_width = 0;
    5940             : 
    5941             :         /* junk columns aren't visible to upper query */
    5942      129068 :         if (te->resjunk)
    5943        1368 :             continue;
    5944             : 
    5945             :         /*
    5946             :          * The subquery could be an expansion of a view that's had columns
    5947             :          * added to it since the current query was parsed, so that there are
    5948             :          * non-junk tlist columns in it that don't correspond to any column
    5949             :          * visible at our query level.  Ignore such columns.
    5950             :          */
    5951      127700 :         if (te->resno < rel->min_attr || te->resno > rel->max_attr)
    5952           0 :             continue;
    5953             : 
    5954             :         /*
    5955             :          * XXX This currently doesn't work for subqueries containing set
    5956             :          * operations, because the Vars in their tlists are bogus references
    5957             :          * to the first leaf subquery, which wouldn't give the right answer
    5958             :          * even if we could still get to its PlannerInfo.
    5959             :          *
    5960             :          * Also, the subquery could be an appendrel for which all branches are
    5961             :          * known empty due to constraint exclusion, in which case
    5962             :          * set_append_rel_pathlist will have left the attr_widths set to zero.
    5963             :          *
    5964             :          * In either case, we just leave the width estimate zero until
    5965             :          * set_rel_width fixes it.
    5966             :          */
    5967      127700 :         if (IsA(texpr, Var) &&
    5968       62084 :             subroot->parse->setOperations == NULL)
    5969             :         {
    5970       60236 :             Var        *var = (Var *) texpr;
    5971       60236 :             RelOptInfo *subrel = find_base_rel(subroot, var->varno);
    5972             : 
    5973       60236 :             item_width = subrel->attr_widths[var->varattno - subrel->min_attr];
    5974             :         }
    5975      127700 :         rel->attr_widths[te->resno - rel->min_attr] = item_width;
    5976             :     }
    5977             : 
    5978             :     /* Now estimate number of output rows, etc */
    5979       33498 :     set_baserel_size_estimates(root, rel);
    5980       33498 : }
    5981             : 
    5982             : /*
    5983             :  * set_function_size_estimates
    5984             :  *      Set the size estimates for a base relation that is a function call.
    5985             :  *
    5986             :  * The rel's targetlist and restrictinfo list must have been constructed
    5987             :  * already.
    5988             :  *
    5989             :  * We set the same fields as set_baserel_size_estimates.
    5990             :  */
    5991             : void
    5992       52258 : set_function_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    5993             : {
    5994             :     RangeTblEntry *rte;
    5995             :     ListCell   *lc;
    5996             : 
    5997             :     /* Should only be applied to base relations that are functions */
    5998             :     Assert(rel->relid > 0);
    5999       52258 :     rte = planner_rt_fetch(rel->relid, root);
    6000             :     Assert(rte->rtekind == RTE_FUNCTION);
    6001             : 
    6002             :     /*
    6003             :      * Estimate number of rows the functions will return. The rowcount of the
    6004             :      * node is that of the largest function result.
    6005             :      */
    6006       52258 :     rel->tuples = 0;
    6007      105016 :     foreach(lc, rte->functions)
    6008             :     {
    6009       52758 :         RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
    6010       52758 :         double      ntup = expression_returns_set_rows(root, rtfunc->funcexpr);
    6011             : 
    6012       52758 :         if (ntup > rel->tuples)
    6013       52282 :             rel->tuples = ntup;
    6014             :     }
    6015             : 
    6016             :     /* Now estimate number of output rows, etc */
    6017       52258 :     set_baserel_size_estimates(root, rel);
    6018       52258 : }
    6019             : 
    6020             : /*
    6021             :  * set_tablefunc_size_estimates
    6022             :  *      Set the size estimates for a base relation that is a tablefunc reference.
    6023             :  *
    6024             :  * The rel's targetlist and restrictinfo list must have been constructed
    6025             :  * already.
    6026             :  *
    6027             :  * We set the same fields as set_baserel_size_estimates.
    6028             :  */
    6029             : void
    6030         626 : set_tablefunc_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6031             : {
    6032             :     /* Should only be applied to base relations that are functions */
    6033             :     /* Should only be applied to base relations that are tablefunc references */
    6034             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_TABLEFUNC);
    6035             : 
    6036         626 :     rel->tuples = 100;
    6037             : 
    6038             :     /* Now estimate number of output rows, etc */
    6039         626 :     set_baserel_size_estimates(root, rel);
    6040         626 : }
    6041             : 
    6042             : /*
    6043             :  * set_values_size_estimates
    6044             :  *      Set the size estimates for a base relation that is a values list.
    6045             :  *
    6046             :  * The rel's targetlist and restrictinfo list must have been constructed
    6047             :  * already.
    6048             :  *
    6049             :  * We set the same fields as set_baserel_size_estimates.
    6050             :  */
    6051             : void
    6052        8286 : set_values_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6053             : {
    6054             :     RangeTblEntry *rte;
    6055             : 
    6056             :     /* Should only be applied to base relations that are values lists */
    6057             :     Assert(rel->relid > 0);
    6058        8286 :     rte = planner_rt_fetch(rel->relid, root);
    6059             :     Assert(rte->rtekind == RTE_VALUES);
    6060             : 
    6061             :     /*
    6062             :      * Estimate number of rows the values list will return. We know this
    6063             :      * precisely based on the list length (well, barring set-returning
    6064             :      * functions in list items, but that's a refinement not catered for
    6065             :      * anywhere else either).
    6066             :      */
    6067        8286 :     rel->tuples = list_length(rte->values_lists);
    6068             : 
    6069             :     /* Now estimate number of output rows, etc */
    6070        8286 :     set_baserel_size_estimates(root, rel);
    6071        8286 : }
    6072             : 
    6073             : /*
    6074             :  * set_cte_size_estimates
    6075             :  *      Set the size estimates for a base relation that is a CTE reference.
    6076             :  *
    6077             :  * The rel's targetlist and restrictinfo list must have been constructed
    6078             :  * already, and we need an estimate of the number of rows returned by the CTE
    6079             :  * (if a regular CTE) or the non-recursive term (if a self-reference).
    6080             :  *
    6081             :  * We set the same fields as set_baserel_size_estimates.
    6082             :  */
    6083             : void
    6084        5194 : set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, double cte_rows)
    6085             : {
    6086             :     RangeTblEntry *rte;
    6087             : 
    6088             :     /* Should only be applied to base relations that are CTE references */
    6089             :     Assert(rel->relid > 0);
    6090        5194 :     rte = planner_rt_fetch(rel->relid, root);
    6091             :     Assert(rte->rtekind == RTE_CTE);
    6092             : 
    6093        5194 :     if (rte->self_reference)
    6094             :     {
    6095             :         /*
    6096             :          * In a self-reference, we assume the average worktable size is a
    6097             :          * multiple of the nonrecursive term's size.  The best multiplier will
    6098             :          * vary depending on query "fan-out", so make its value adjustable.
    6099             :          */
    6100         934 :         rel->tuples = clamp_row_est(recursive_worktable_factor * cte_rows);
    6101             :     }
    6102             :     else
    6103             :     {
    6104             :         /* Otherwise just believe the CTE's rowcount estimate */
    6105        4260 :         rel->tuples = cte_rows;
    6106             :     }
    6107             : 
    6108             :     /* Now estimate number of output rows, etc */
    6109        5194 :     set_baserel_size_estimates(root, rel);
    6110        5194 : }
    6111             : 
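
Numerically, the self-reference branch is a single multiply. A sketch, assuming the stock recursive_worktable_factor of 10.0 (the GUC is user-adjustable, and the rounding stands in for clamp_row_est):

    #include <stdio.h>
    #include <math.h>

    int
    main(void)
    {
        double recursive_worktable_factor = 10.0;   /* default GUC value */
        double cte_rows = 25.0;     /* nonrecursive term's estimated rows */

        /* Average worktable size is assumed to be a multiple of the
         * nonrecursive term's size. */
        double tuples = rint(recursive_worktable_factor * cte_rows);

        printf("worktable size estimate: %g rows\n", tuples);   /* 250 */
        return 0;
    }
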
    6112             : /*
    6113             :  * set_namedtuplestore_size_estimates
    6114             :  *      Set the size estimates for a base relation that is a tuplestore reference.
    6115             :  *
    6116             :  * The rel's targetlist and restrictinfo list must have been constructed
    6117             :  * already.
    6118             :  *
    6119             :  * We set the same fields as set_baserel_size_estimates.
    6120             :  */
    6121             : void
    6122         474 : set_namedtuplestore_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6123             : {
    6124             :     RangeTblEntry *rte;
    6125             : 
    6126             :     /* Should only be applied to base relations that are tuplestore references */
    6127             :     Assert(rel->relid > 0);
    6128         474 :     rte = planner_rt_fetch(rel->relid, root);
    6129             :     Assert(rte->rtekind == RTE_NAMEDTUPLESTORE);
    6130             : 
    6131             :     /*
    6132             :      * Use the estimate provided by the code which is generating the named
    6133             :      * tuplestore.  In some cases, the actual number might be available; in
    6134             :      * others the same plan will be re-used, so a "typical" value might be
    6135             :      * estimated and used.
    6136             :      */
    6137         474 :     rel->tuples = rte->enrtuples;
    6138         474 :     if (rel->tuples < 0)
    6139           0 :         rel->tuples = 1000;
    6140             : 
    6141             :     /* Now estimate number of output rows, etc */
    6142         474 :     set_baserel_size_estimates(root, rel);
    6143         474 : }
    6144             : 
    6145             : /*
    6146             :  * set_result_size_estimates
    6147             :  *      Set the size estimates for an RTE_RESULT base relation
    6148             :  *
    6149             :  * The rel's targetlist and restrictinfo list must have been constructed
    6150             :  * already.
    6151             :  *
    6152             :  * We set the same fields as set_baserel_size_estimates.
    6153             :  */
    6154             : void
    6155        4208 : set_result_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6156             : {
    6157             :     /* Should only be applied to RTE_RESULT base relations */
    6158             :     Assert(rel->relid > 0);
    6159             :     Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_RESULT);
    6160             : 
    6161             :     /* RTE_RESULT always generates a single row, natively */
    6162        4208 :     rel->tuples = 1;
    6163             : 
    6164             :     /* Now estimate number of output rows, etc */
    6165        4208 :     set_baserel_size_estimates(root, rel);
    6166        4208 : }
    6167             : 
    6168             : /*
    6169             :  * set_foreign_size_estimates
    6170             :  *      Set the size estimates for a base relation that is a foreign table.
    6171             :  *
    6172             :  * There is not a whole lot that we can do here; the foreign-data wrapper
    6173             :  * is responsible for producing useful estimates.  We can do a decent job
    6174             :  * of estimating baserestrictcost, so we set that, and we also set up width
    6175             :  * using what will be purely datatype-driven estimates from the targetlist.
    6176             :  * There is no way to do anything sane with the rows value, so we just put
    6177             :  * a default estimate and hope that the wrapper can improve on it.  The
    6178             :  * wrapper's GetForeignRelSize function will be called momentarily.
    6179             :  *
    6180             :  * The rel's targetlist and restrictinfo list must have been constructed
    6181             :  * already.
    6182             :  */
    6183             : void
    6184        2464 : set_foreign_size_estimates(PlannerInfo *root, RelOptInfo *rel)
    6185             : {
    6186             :     /* Should only be applied to base relations */
    6187             :     Assert(rel->relid > 0);
    6188             : 
    6189        2464 :     rel->rows = 1000;            /* entirely bogus default estimate */
    6190             : 
    6191        2464 :     cost_qual_eval(&rel->baserestrictcost, rel->baserestrictinfo, root);
    6192             : 
    6193        2464 :     set_rel_width(root, rel);
    6194        2464 : }
    6195             : 
    6196             : 
    6197             : /*
    6198             :  * set_rel_width
    6199             :  *      Set the estimated output width of a base relation.
    6200             :  *
    6201             :  * The estimated output width is the sum of the per-attribute width estimates
    6202             :  * for the actually-referenced columns, plus any PHVs or other expressions
    6203             :  * that have to be calculated at this relation.  This is the amount of data
    6204             :  * we'd need to pass upwards in case of a sort, hash, etc.
    6205             :  *
    6206             :  * This function also sets reltarget->cost, so it's a bit misnamed now.
    6207             :  *
    6208             :  * NB: this works best on plain relations because it prefers to look at
    6209             :  * real Vars.  For subqueries, set_subquery_size_estimates will already have
    6210             :  * copied up whatever per-column estimates were made within the subquery,
    6211             :  * and for other types of rels there isn't much we can do anyway.  We fall
    6212             :  * back on (fairly stupid) datatype-based width estimates if we can't get
    6213             :  * any better number.
    6214             :  *
    6215             :  * The per-attribute width estimates are cached for possible re-use while
    6216             :  * building join relations or post-scan/join pathtargets.
    6217             :  */
    6218             : static void
    6219      514610 : set_rel_width(PlannerInfo *root, RelOptInfo *rel)
    6220             : {
    6221      514610 :     Oid         reloid = planner_rt_fetch(rel->relid, root)->relid;
    6222      514610 :     int64       tuple_width = 0;
    6223      514610 :     bool        have_wholerow_var = false;
    6224             :     ListCell   *lc;
    6225             : 
    6226             :     /* Vars are assumed to have cost zero, but other exprs do not */
    6227      514610 :     rel->reltarget->cost.startup = 0;
    6228      514610 :     rel->reltarget->cost.per_tuple = 0;
    6229             : 
    6230     1853024 :     foreach(lc, rel->reltarget->exprs)
    6231             :     {
    6232     1338414 :         Node       *node = (Node *) lfirst(lc);
    6233             : 
    6234             :         /*
    6235             :          * Ordinarily, a Var in a rel's targetlist must belong to that rel;
    6236             :          * but there are corner cases involving LATERAL references where that
    6237             :          * isn't so.  If the Var has the wrong varno, fall through to the
    6238             :          * generic case (it doesn't seem worth the trouble to be any smarter).
    6239             :          */
    6240     1338414 :         if (IsA(node, Var) &&
    6241     1314462 :             ((Var *) node)->varno == rel->relid)
    6242      360872 :         {
    6243     1314396 :             Var        *var = (Var *) node;
    6244             :             int         ndx;
    6245             :             int32       item_width;
    6246             : 
    6247             :             Assert(var->varattno >= rel->min_attr);
    6248             :             Assert(var->varattno <= rel->max_attr);
    6249             : 
    6250     1314396 :             ndx = var->varattno - rel->min_attr;
    6251             : 
    6252             :             /*
    6253             :              * If it's a whole-row Var, we'll deal with it below after we have
    6254             :              * already cached as many attr widths as possible.
    6255             :              */
    6256     1314396 :             if (var->varattno == 0)
    6257             :             {
    6258        3050 :                 have_wholerow_var = true;
    6259        3050 :                 continue;
    6260             :             }
    6261             : 
    6262             :             /*
    6263             :              * The width may have been cached already (especially if it's a
    6264             :              * subquery), so don't duplicate effort.
    6265             :              */
    6266     1311346 :             if (rel->attr_widths[ndx] > 0)
    6267             :             {
    6268      260058 :                 tuple_width += rel->attr_widths[ndx];
    6269      260058 :                 continue;
    6270             :             }
    6271             : 
    6272             :             /* Try to get column width from statistics */
    6273     1051288 :             if (reloid != InvalidOid && var->varattno > 0)
    6274             :             {
    6275      829102 :                 item_width = get_attavgwidth(reloid, var->varattno);
    6276      829102 :                 if (item_width > 0)
    6277             :                 {
    6278      690416 :                     rel->attr_widths[ndx] = item_width;
    6279      690416 :                     tuple_width += item_width;
    6280      690416 :                     continue;
    6281             :                 }
    6282             :             }
    6283             : 
    6284             :             /*
    6285             :              * Not a plain relation, or can't find statistics for it. Estimate
    6286             :              * using just the type info.
    6287             :              */
    6288      360872 :             item_width = get_typavgwidth(var->vartype, var->vartypmod);
    6289             :             Assert(item_width > 0);
    6290      360872 :             rel->attr_widths[ndx] = item_width;
    6291      360872 :             tuple_width += item_width;
    6292             :         }
    6293       24018 :         else if (IsA(node, PlaceHolderVar))
    6294             :         {
    6295             :             /*
    6296             :              * We will need to evaluate the PHV's contained expression while
    6297             :              * scanning this rel, so be sure to include it in reltarget->cost.
    6298             :              */
    6299        2008 :             PlaceHolderVar *phv = (PlaceHolderVar *) node;
    6300        2008 :             PlaceHolderInfo *phinfo = find_placeholder_info(root, phv);
    6301             :             QualCost    cost;
    6302             : 
    6303        2008 :             tuple_width += phinfo->ph_width;
    6304        2008 :             cost_qual_eval_node(&cost, (Node *) phv->phexpr, root);
    6305        2008 :             rel->reltarget->cost.startup += cost.startup;
    6306        2008 :             rel->reltarget->cost.per_tuple += cost.per_tuple;
    6307             :         }
    6308             :         else
    6309             :         {
    6310             :             /*
    6311             :              * We could be looking at an expression pulled up from a subquery,
    6312             :              * or a ROW() representing a whole-row child Var, etc.  Do what we
    6313             :              * can using the expression type information.
    6314             :              */
    6315             :             int32       item_width;
    6316             :             QualCost    cost;
    6317             : 
    6318       22010 :             item_width = get_typavgwidth(exprType(node), exprTypmod(node));
    6319             :             Assert(item_width > 0);
    6320       22010 :             tuple_width += item_width;
    6321             :             /* Not entirely clear if we need to account for cost, but do so */
    6322       22010 :             cost_qual_eval_node(&cost, node, root);
    6323       22010 :             rel->reltarget->cost.startup += cost.startup;
    6324       22010 :             rel->reltarget->cost.per_tuple += cost.per_tuple;
    6325             :         }
    6326             :     }
    6327             : 
    6328             :     /*
    6329             :      * If we have a whole-row reference, estimate its width as the sum of
    6330             :      * per-column widths plus heap tuple header overhead.
    6331             :      */
    6332      514610 :     if (have_wholerow_var)
    6333             :     {
    6334        3050 :         int64       wholerow_width = MAXALIGN(SizeofHeapTupleHeader);
    6335             : 
    6336        3050 :         if (reloid != InvalidOid)
    6337             :         {
    6338             :             /* Real relation, so estimate true tuple width */
    6339        2392 :             wholerow_width += get_relation_data_width(reloid,
    6340        2392 :                                                       rel->attr_widths - rel->min_attr);
    6341             :         }
    6342             :         else
    6343             :         {
    6344             :             /* Do what we can with info for a phony rel */
    6345             :             AttrNumber  i;
    6346             : 
    6347        1794 :             for (i = 1; i <= rel->max_attr; i++)
    6348        1136 :                 wholerow_width += rel->attr_widths[i - rel->min_attr];
    6349             :         }
    6350             : 
    6351        3050 :         rel->attr_widths[0 - rel->min_attr] = clamp_width_est(wholerow_width);
    6352             : 
    6353             :         /*
    6354             :          * Include the whole-row Var as part of the output tuple.  Yes, that
    6355             :          * really is what happens at runtime.
    6356             :          */
    6357        3050 :         tuple_width += wholerow_width;
    6358             :     }
    6359             : 
    6360      514610 :     rel->reltarget->width = clamp_width_est(tuple_width);
    6361      514610 : }
    6362             : 
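
The per-column loop above is a three-level fallback: a cached attr_widths entry wins, then ANALYZE statistics via get_attavgwidth, then a type-based default via get_typavgwidth. A sketch of that chain, where stats_width and type_width are hypothetical stand-ins for the catalog lookups:

    #include <stdio.h>

    /* Hypothetical stand-ins for get_attavgwidth()/get_typavgwidth(); the
     * former can return 0 when no statistics exist, the latter cannot. */
    static int stats_width(int attno) { return (attno == 0) ? 18 : 0; }
    static int type_width(int attno)  { return 4; }

    int
    main(void)
    {
        int  attr_widths[3] = {0, 32, 0};   /* cache; 0 means "not cached" */
        long tuple_width = 0;

        for (int attno = 0; attno < 3; attno++)
        {
            int width = attr_widths[attno];     /* 1: cached estimate */

            if (width <= 0)
                width = stats_width(attno);     /* 2: pg_statistic average */
            if (width <= 0)
                width = type_width(attno);      /* 3: datatype default */
            attr_widths[attno] = width;         /* cache for re-use */
            tuple_width += width;
        }
        printf("estimated output width: %ld bytes\n", tuple_width); /* 54 */
        return 0;
    }
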
    6363             : /*
    6364             :  * set_pathtarget_cost_width
    6365             :  *      Set the estimated eval cost and output width of a PathTarget tlist.
    6366             :  *
    6367             :  * As a notational convenience, returns the same PathTarget pointer passed in.
    6368             :  *
    6369             :  * Most, though not quite all, uses of this function occur after we've run
    6370             :  * set_rel_width() for base relations; so we can usually obtain cached width
    6371             :  * estimates for Vars.  If we can't, fall back on datatype-based width
    6372             :  * estimates.  Present early-planning uses of PathTargets don't need accurate
    6373             :  * widths badly enough to justify going to the catalogs for better data.
    6374             :  */
    6375             : PathTarget *
    6376      614642 : set_pathtarget_cost_width(PlannerInfo *root, PathTarget *target)
    6377             : {
    6378      614642 :     int64       tuple_width = 0;
    6379             :     ListCell   *lc;
    6380             : 
    6381             :     /* Vars are assumed to have cost zero, but other exprs do not */
    6382      614642 :     target->cost.startup = 0;
    6383      614642 :     target->cost.per_tuple = 0;
    6384             : 
    6385     2138030 :     foreach(lc, target->exprs)
    6386             :     {
    6387     1523388 :         Node       *node = (Node *) lfirst(lc);
    6388             : 
    6389     1523388 :         tuple_width += get_expr_width(root, node);
    6390             : 
    6391             :         /* For non-Vars, account for evaluation cost */
    6392     1523388 :         if (!IsA(node, Var))
    6393             :         {
    6394             :             QualCost    cost;
    6395             : 
    6396      651394 :             cost_qual_eval_node(&cost, node, root);
    6397      651394 :             target->cost.startup += cost.startup;
    6398      651394 :             target->cost.per_tuple += cost.per_tuple;
    6399             :         }
    6400             :     }
    6401             : 
    6402      614642 :     target->width = clamp_width_est(tuple_width);
    6403             : 
    6404      614642 :     return target;
    6405             : }
    6406             : 
    6407             : /*
    6408             :  * get_expr_width
    6409             :  *      Estimate the width of the given expr, using the width cached in the
    6410             :  *      Var's owning RelOptInfo when possible; otherwise fall back on the
    6411             :  *      type's average width, as we must when the given Node is not a Var.
    6412             :  */
    6413             : static int32
    6414     1853186 : get_expr_width(PlannerInfo *root, const Node *expr)
    6415             : {
    6416             :     int32       width;
    6417             : 
    6418     1853186 :     if (IsA(expr, Var))
    6419             :     {
    6420     1189060 :         const Var  *var = (const Var *) expr;
    6421             : 
    6422             :         /* We should not see any upper-level Vars here */
    6423             :         Assert(var->varlevelsup == 0);
    6424             : 
    6425             :         /* Try to get data from RelOptInfo cache */
    6426     1189060 :         if (!IS_SPECIAL_VARNO(var->varno) &&
    6427     1183222 :             var->varno < root->simple_rel_array_size)
    6428             :         {
    6429     1183222 :             RelOptInfo *rel = root->simple_rel_array[var->varno];
    6430             : 
    6431     1183222 :             if (rel != NULL &&
    6432     1165538 :                 var->varattno >= rel->min_attr &&
    6433     1165538 :                 var->varattno <= rel->max_attr)
    6434             :             {
    6435     1165538 :                 int         ndx = var->varattno - rel->min_attr;
    6436             : 
    6437     1165538 :                 if (rel->attr_widths[ndx] > 0)
    6438     1133584 :                     return rel->attr_widths[ndx];
    6439             :             }
    6440             :         }
    6441             : 
    6442             :         /*
    6443             :          * No cached data available, so estimate using just the type info.
    6444             :          */
    6445       55476 :         width = get_typavgwidth(var->vartype, var->vartypmod);
    6446             :         Assert(width > 0);
    6447             : 
    6448       55476 :         return width;
    6449             :     }
    6450             : 
    6451      664126 :     width = get_typavgwidth(exprType(expr), exprTypmod(expr));
    6452             :     Assert(width > 0);
    6453      664126 :     return width;
    6454             : }
    6455             : 
    6456             : /*
    6457             :  * relation_byte_size
    6458             :  *    Estimate the storage space in bytes for a given number of tuples
    6459             :  *    of a given width (size in bytes).
    6460             :  */
    6461             : static double
    6462     5040228 : relation_byte_size(double tuples, int width)
    6463             : {
    6464     5040228 :     return tuples * (MAXALIGN(width) + MAXALIGN(SizeofHeapTupleHeader));
    6465             : }
    6466             : 
    6467             : /*
    6468             :  * page_size
    6469             :  *    Returns an estimate of the number of pages covered by a given
    6470             :  *    number of tuples of a given width (size in bytes).
    6471             :  */
    6472             : static double
    6473        9328 : page_size(double tuples, int width)
    6474             : {
    6475        9328 :     return ceil(relation_byte_size(tuples, width) / BLCKSZ);
    6476             : }
    6477             : 
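
Both helpers are one-liners, but the MAXALIGN rounding is easy to misjudge. A standalone sketch assuming a typical 64-bit build: 8-byte alignment, a 23-byte heap tuple header, and 8 kB blocks:

    #include <stdio.h>
    #include <math.h>

    #define MAXALIGN(x)  (((x) + 7) & ~((long) 7)) /* 8-byte alignment assumed */
    #define TUPLE_HDR    23     /* SizeofHeapTupleHeader on common builds */
    #define BLCKSZ       8192

    static double
    relation_bytes(double tuples, int width)
    {
        return tuples * (MAXALIGN(width) + MAXALIGN(TUPLE_HDR));
    }

    int
    main(void)
    {
        /* 1000 tuples of width 28: MAXALIGN(28) = 32, MAXALIGN(23) = 24 */
        double bytes = relation_bytes(1000.0, 28);
        double pages = ceil(bytes / BLCKSZ);

        printf("%.0f bytes -> %.0f pages\n", bytes, pages); /* 56000 -> 7 */
        return 0;
    }
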
    6478             : /*
    6479             :  * Estimate the fraction of the work that each worker will do given the
    6480             :  * number of workers budgeted for the path.
    6481             :  */
    6482             : static double
    6483      463068 : get_parallel_divisor(Path *path)
    6484             : {
    6485      463068 :     double      parallel_divisor = path->parallel_workers;
    6486             : 
    6487             :     /*
    6488             :      * Early experience with parallel query suggests that when there is only
    6489             :      * one worker, the leader often makes a very substantial contribution to
    6490             :      * executing the parallel portion of the plan, but as more workers are
    6491             :      * added, it does less and less, because it's busy reading tuples from the
    6492             :      * workers and doing whatever non-parallel post-processing is needed.  By
    6493             :      * the time we reach 4 workers, the leader no longer makes a meaningful
    6494             :      * contribution.  Thus, for now, estimate that the leader spends 30% of
    6495             :      * its time servicing each worker, and the remainder executing the
    6496             :      * parallel plan.
    6497             :      */
    6498      463068 :     if (parallel_leader_participation)
    6499             :     {
    6500             :         double      leader_contribution;
    6501             : 
    6502      461766 :         leader_contribution = 1.0 - (0.3 * path->parallel_workers);
    6503      461766 :         if (leader_contribution > 0)
    6504      459396 :             parallel_divisor += leader_contribution;
    6505             :     }
    6506             : 
    6507      463068 :     return parallel_divisor;
    6508             : }
    6509             : 
    6510             : /*
    6511             :  * compute_bitmap_pages
    6512             :  *    Estimate number of pages fetched from heap in a bitmap heap scan.
    6513             :  *
    6514             :  * 'baserel' is the relation to be scanned
    6515             :  * 'bitmapqual' is a tree of IndexPaths, BitmapAndPaths, and BitmapOrPaths
    6516             :  * 'loop_count' is the number of repetitions of the indexscan to factor into
    6517             :  *      estimates of caching behavior
    6518             :  *
    6519             :  * If cost_p isn't NULL, the indexTotalCost estimate is returned in *cost_p.
    6520             :  * If tuples_p isn't NULL, the tuples_fetched estimate is returned in *tuples_p.
    6521             :  */
    6522             : double
    6523      687152 : compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel,
    6524             :                      Path *bitmapqual, double loop_count,
    6525             :                      Cost *cost_p, double *tuples_p)
    6526             : {
    6527             :     Cost        indexTotalCost;
    6528             :     Selectivity indexSelectivity;
    6529             :     double      T;
    6530             :     double      pages_fetched;
    6531             :     double      tuples_fetched;
    6532             :     double      heap_pages;
    6533             :     double      maxentries;
    6534             : 
    6535             :     /*
    6536             :      * Fetch total cost of obtaining the bitmap, as well as its total
    6537             :      * selectivity.
    6538             :      */
    6539      687152 :     cost_bitmap_tree_node(bitmapqual, &indexTotalCost, &indexSelectivity);
    6540             : 
    6541             :     /*
    6542             :      * Estimate number of main-table pages fetched.
    6543             :      */
    6544      687152 :     tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);
    6545             : 
    6546      687152 :     T = (baserel->pages > 1) ? (double) baserel->pages : 1.0;
    6547             : 
    6548             :     /*
    6549             :      * For a single scan, the number of heap pages that need to be fetched is
    6550             :      * the same as the Mackert and Lohman formula for the case T <= b (ie, no
    6551             :      * re-reads needed).
    6552             :      */
    6553      687152 :     pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
    6554             : 
    6555             :     /*
    6556             :      * Calculate the number of pages fetched from the heap, and estimate
    6557             :      * maxentries, how many entries the bitmap can hold under work_mem.
    6558             :      * (Note that we always do this calculation based on the number of pages
    6559             :      * that would be fetched in a single iteration, even if loop_count > 1.
    6560             :      * That's correct, because only that number of entries will be stored in
    6561             :      * the bitmap at one time.)
    6562             :      */
    6563      687152 :     heap_pages = Min(pages_fetched, baserel->pages);
    6564      687152 :     maxentries = tbm_calculate_entries(work_mem * (Size) 1024);
    6565             : 
    6566      687152 :     if (loop_count > 1)
    6567             :     {
    6568             :         /*
    6569             :          * For repeated bitmap scans, scale up the number of tuples fetched in
    6570             :          * the Mackert and Lohman formula by the number of scans, so that we
    6571             :          * estimate the number of pages fetched by all the scans. Then
    6572             :          * pro-rate for one scan.
    6573             :          */
    6574      145024 :         pages_fetched = index_pages_fetched(tuples_fetched * loop_count,
    6575             :                                             baserel->pages,
    6576             :                                             get_indexpath_pages(bitmapqual),
    6577             :                                             root);
    6578      145024 :         pages_fetched /= loop_count;
    6579             :     }
    6580             : 
    6581      687152 :     if (pages_fetched >= T)
    6582       65818 :         pages_fetched = T;
    6583             :     else
    6584      621334 :         pages_fetched = ceil(pages_fetched);
    6585             : 
    6586      687152 :     if (maxentries < heap_pages)
    6587             :     {
    6588             :         double      exact_pages;
    6589             :         double      lossy_pages;
    6590             : 
    6591             :         /*
    6592             :          * Crude approximation of the number of lossy pages.  Because of the
    6593             :          * way tbm_lossify() is coded, the number of lossy pages increases
    6594             :          * very sharply as soon as we run short of memory; this formula has
    6595             :          * that property and seems to perform adequately in testing, but it's
    6596             :          * possible we could do better somehow.
    6597             :          */
    6598          18 :         lossy_pages = Max(0, heap_pages - maxentries / 2);
    6599          18 :         exact_pages = heap_pages - lossy_pages;
    6600             : 
    6601             :         /*
    6602             :          * If there are lossy pages then recompute the number of tuples
    6603             :          * processed by the bitmap heap node.  We assume here that the chance
    6604             :          * of a given tuple coming from an exact page is the same as the
    6605             :          * chance that a given page is exact.  This might not be true, but
    6606             :          * it's not clear how we can do any better.
    6607             :          */
    6608          18 :         if (lossy_pages > 0)
    6609             :             tuples_fetched =
    6610          18 :                 clamp_row_est(indexSelectivity *
    6611          18 :                               (exact_pages / heap_pages) * baserel->tuples +
    6612          18 :                               (lossy_pages / heap_pages) * baserel->tuples);
    6613             :     }
    6614             : 
    6615      687152 :     if (cost_p)
    6616      542954 :         *cost_p = indexTotalCost;
    6617      687152 :     if (tuples_p)
    6618      542954 :         *tuples_p = tuples_fetched;
    6619             : 
    6620      687152 :     return pages_fetched;
    6621             : }
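
The interplay of the Mackert-Lohman cap and the lossy-bitmap approximation above can be seen in a stand-alone sketch with assumed inputs (T, tuples, and maxentries are made up; in the real function they come from the planner and tbm_calculate_entries()):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double T = 10000.0;          /* heap pages in the relation */
        double tuples = 500000.0;    /* tuples the bitmap will match */
        double maxentries = 6000.0;  /* bitmap capacity for this work_mem */

        /* Mackert-Lohman, T <= b case: no page is re-read */
        double pages = (2.0 * T * tuples) / (2.0 * T + tuples);

        if (pages >= T)
            pages = T;               /* can't fetch more pages than exist */
        else
            pages = ceil(pages);

        double heap_pages = fmin(pages, T);
        /* crude lossy-page approximation from the source above */
        double lossy = fmax(0.0, heap_pages - maxentries / 2.0);
        double exact = heap_pages - lossy;

        printf("pages=%.0f exact=%.0f lossy=%.0f\n", pages, exact, lossy);
        /* -> pages=10000 exact=3000 lossy=7000 */
        return 0;
    }

With 500,000 matching tuples against 10,000 pages, the single-scan formula already exceeds T, so every heap page is visited; and because the bitmap can only hold 6,000 entries, most of those pages are recorded lossily, which is why the caller then scales tuples_fetched back up for the lossy fraction.
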
    6622             : 
    6623             : /*
    6624             :  * compute_gather_rows
    6625             :  *    Estimate number of rows for gather (merge) nodes.
    6626             :  *
    6627             :  * In a parallel plan, each worker's row estimate is determined by dividing the
    6628             :  * total number of rows by parallel_divisor, which accounts for the leader's
    6629             :  * contribution in addition to the number of workers.  Accordingly, when
    6630             :  * estimating the number of rows for gather (merge) nodes, we multiply the rows
    6631             :  * per worker by the same parallel_divisor to undo the division.
    6632             :  */
    6633             : double
    6634       42324 : compute_gather_rows(Path *path)
    6635             : {
    6636             :     Assert(path->parallel_workers > 0);
    6637             : 
    6638       42324 :     return clamp_row_est(path->rows * get_parallel_divisor(path));
    6639             : }
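
For example, with two planned workers and leader participation, get_parallel_divisor() returns 2.4 (see the sketch above), so a partial path estimated at 10,000 rows per worker gathers back to clamp_row_est(10000 * 2.4) = 24,000 rows.
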

Generated by: LCOV version 1.16