LCOV - code coverage report
Current view: top level - src/backend/utils/adt - selfuncs.c (source / functions)
Test: PostgreSQL 19devel
Date: 2026-01-11 20:17:25
                 Hit    Total   Coverage
Lines:          2275     2586     88.0 %
Functions:        78       81     96.3 %

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * selfuncs.c
       4             :  *    Selectivity functions and index cost estimation functions for
       5             :  *    standard operators and index access methods.
       6             :  *
       7             :  *    Selectivity routines are registered in the pg_operator catalog
       8             :  *    in the "oprrest" and "oprjoin" attributes.
       9             :  *
      10             :  *    Index cost functions are located via the index AM's API struct,
      11             :  *    which is obtained from the handler function registered in pg_am.
      12             :  *
      13             :  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
      14             :  * Portions Copyright (c) 1994, Regents of the University of California
      15             :  *
      16             :  *
      17             :  * IDENTIFICATION
      18             :  *    src/backend/utils/adt/selfuncs.c
      19             :  *
      20             :  *-------------------------------------------------------------------------
      21             :  */
      22             : 
      23             : /*----------
      24             :  * Operator selectivity estimation functions are called to estimate the
      25             :  * selectivity of WHERE clauses whose top-level operator is their operator.
      26             :  * We divide the problem into two cases:
      27             :  *      Restriction clause estimation: the clause involves vars of just
      28             :  *          one relation.
      29             :  *      Join clause estimation: the clause involves vars of multiple rels.
      30             :  * Join selectivity estimation is far more difficult and usually less accurate
      31             :  * than restriction estimation.
      32             :  *
      33             :  * When dealing with the inner scan of a nestloop join, we consider the
      34             :  * join's joinclauses as restriction clauses for the inner relation, and
      35             :  * treat vars of the outer relation as parameters (a/k/a constants of unknown
      36             :  * values).  So, restriction estimators need to be able to accept an argument
      37             :  * telling which relation is to be treated as the variable.
      38             :  *
      39             :  * The call convention for a restriction estimator (oprrest function) is
      40             :  *
      41             :  *      Selectivity oprrest (PlannerInfo *root,
      42             :  *                           Oid operator,
      43             :  *                           List *args,
      44             :  *                           int varRelid);
      45             :  *
      46             :  * root: general information about the query (rtable and RelOptInfo lists
      47             :  * are particularly important for the estimator).
      48             :  * operator: OID of the specific operator in question.
      49             :  * args: argument list from the operator clause.
      50             :  * varRelid: if not zero, the relid (rtable index) of the relation to
      51             :  * be treated as the variable relation.  May be zero if the args list
      52             :  * is known to contain vars of only one relation.
      53             :  *
      54             :  * This is represented at the SQL level (in pg_proc) as
      55             :  *
      56             :  *      float8 oprrest (internal, oid, internal, int4);
      57             :  *
      58             :  * The result is a selectivity, that is, a fraction (0 to 1) of the rows
      59             :  * of the relation that are expected to produce a TRUE result for the
      60             :  * given operator.
      61             :  *
      62             :  * The call convention for a join estimator (oprjoin function) is similar
      63             :  * except that varRelid is not needed, and instead join information is
      64             :  * supplied:
      65             :  *
      66             :  *      Selectivity oprjoin (PlannerInfo *root,
      67             :  *                           Oid operator,
      68             :  *                           List *args,
      69             :  *                           JoinType jointype,
      70             :  *                           SpecialJoinInfo *sjinfo);
      71             :  *
      72             :  *      float8 oprjoin (internal, oid, internal, int2, internal);
      73             :  *
      74             :  * (Before Postgres 8.4, join estimators had only the first four of these
      75             :  * parameters.  That signature is still allowed, but deprecated.)  The
      76             :  * relationship between jointype and sjinfo is explained in the comments for
      77             :  * clause_selectivity() --- the short version is that jointype is usually
      78             :  * best ignored in favor of examining sjinfo.
      79             :  *
      80             :  * Join selectivity for regular inner and outer joins is defined as the
      81             :  * fraction (0 to 1) of the cross product of the relations that is expected
      82             :  * to produce a TRUE result for the given operator.  For both semi and anti
      83             :  * joins, however, the selectivity is defined as the fraction of the left-hand
      84             :  * side relation's rows that are expected to have a match (ie, at least one
      85             :  * row with a TRUE result) in the right-hand side.
      86             :  *
      87             :  * For both oprrest and oprjoin functions, the operator's input collation OID
      88             :  * (if any) is passed using the standard fmgr mechanism, so that the estimator
      89             :  * function can fetch it with PG_GET_COLLATION().  Note, however, that all
      90             :  * statistics in pg_statistic are currently built using the relevant column's
      91             :  * collation.
      92             :  *----------
      93             :  */
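
/*
 * To make the call convention above concrete, here is a minimal sketch of a
 * restriction estimator.  "myopsel" and the guard macro are invented names
 * used purely for illustration (they are not part of this file), and the
 * sketch assumes the headers included just below; a real estimator would go
 * on to consult the column statistics, as the functions in this file do.
 */
#ifdef SELFUNCS_ILLUSTRATION_ONLY   /* hypothetical guard; never defined */
Datum
myopsel(PG_FUNCTION_ARGS)
{
    PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0);
    Oid         operator = PG_GETARG_OID(1);
    List       *args = (List *) PG_GETARG_POINTER(2);
    int         varRelid = PG_GETARG_INT32(3);
    VariableStatData vardata;
    Node       *other;
    bool        varonleft;
    double      selec = DEFAULT_EQ_SEL; /* fallback guess */

    /* An invalid operator OID leaves us nothing sensible to estimate. */
    if (!OidIsValid(operator))
        PG_RETURN_FLOAT8((float8) selec);

    /*
     * Identify which argument is the variable (honoring varRelid) and which
     * is the comparison value; keep the default estimate if we cannot.
     */
    if (get_restriction_variable(root, args, varRelid,
                                 &vardata, &other, &varonleft))
    {
        /* ... examine vardata's statistics here to refine selec ... */
        ReleaseVariableStats(vardata);
    }

    /* Selectivities must lie in [0, 1]. */
    CLAMP_PROBABILITY(selec);

    PG_RETURN_FLOAT8((float8) selec);
}
#endif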
      94             : 
      95             : #include "postgres.h"
      96             : 
      97             : #include <ctype.h>
      98             : #include <math.h>
      99             : 
     100             : #include "access/brin.h"
     101             : #include "access/brin_page.h"
     102             : #include "access/gin.h"
     103             : #include "access/table.h"
     104             : #include "access/tableam.h"
     105             : #include "access/visibilitymap.h"
     106             : #include "catalog/pg_collation.h"
     107             : #include "catalog/pg_operator.h"
     108             : #include "catalog/pg_statistic.h"
     109             : #include "catalog/pg_statistic_ext.h"
     110             : #include "executor/nodeAgg.h"
     111             : #include "miscadmin.h"
     112             : #include "nodes/makefuncs.h"
     113             : #include "nodes/nodeFuncs.h"
     114             : #include "optimizer/clauses.h"
     115             : #include "optimizer/cost.h"
     116             : #include "optimizer/optimizer.h"
     117             : #include "optimizer/pathnode.h"
     118             : #include "optimizer/paths.h"
     119             : #include "optimizer/plancat.h"
     120             : #include "parser/parse_clause.h"
     121             : #include "parser/parse_relation.h"
     122             : #include "parser/parsetree.h"
     123             : #include "rewrite/rewriteManip.h"
     124             : #include "statistics/statistics.h"
     125             : #include "storage/bufmgr.h"
     126             : #include "utils/acl.h"
     127             : #include "utils/array.h"
     128             : #include "utils/builtins.h"
     129             : #include "utils/date.h"
     130             : #include "utils/datum.h"
     131             : #include "utils/fmgroids.h"
     132             : #include "utils/index_selfuncs.h"
     133             : #include "utils/lsyscache.h"
     134             : #include "utils/memutils.h"
     135             : #include "utils/pg_locale.h"
     136             : #include "utils/rel.h"
     137             : #include "utils/selfuncs.h"
     138             : #include "utils/snapmgr.h"
     139             : #include "utils/spccache.h"
     140             : #include "utils/syscache.h"
     141             : #include "utils/timestamp.h"
     142             : #include "utils/typcache.h"
     143             : 
     144             : #define DEFAULT_PAGE_CPU_MULTIPLIER 50.0
     145             : 
     146             : /*
     147             :  * In production builds, switch to hash-based MCV matching when the lists are
     148             :  * large enough to amortize hash setup cost.  (This threshold is compared to
     149             :  * the sum of the lengths of the two MCV lists.  This is simplistic but seems
     150             :  * to work well enough.)  In debug builds, we use a smaller threshold so that
     151             :  * the regression tests cover both paths well.
     152             :  */
     153             : #ifndef USE_ASSERT_CHECKING
     154             : #define EQJOINSEL_MCV_HASH_THRESHOLD 200
     155             : #else
     156             : #define EQJOINSEL_MCV_HASH_THRESHOLD 20
     157             : #endif
     158             : 
     159             : /* Entries in the simplehash hash table used by eqjoinsel_find_matches */
     160             : typedef struct MCVHashEntry
     161             : {
     162             :     Datum       value;          /* the value represented by this entry */
     163             :     int         index;          /* its index in the relevant AttStatsSlot */
     164             :     uint32      hash;           /* hash code for the Datum */
     165             :     char        status;         /* status code used by simplehash.h */
     166             : } MCVHashEntry;
     167             : 
     168             : /* private_data for the simplehash hash table */
     169             : typedef struct MCVHashContext
     170             : {
     171             :     FunctionCallInfo equal_fcinfo;  /* the equality join operator */
     172             :     FunctionCallInfo hash_fcinfo;   /* the hash function to use */
     173             :     bool        op_is_reversed; /* equality compares hash type to probe type */
     174             :     bool        insert_mode;    /* doing inserts or lookups? */
     175             :     bool        hash_typbyval;  /* typbyval of hashed data type */
     176             :     int16       hash_typlen;    /* typlen of hashed data type */
     177             : } MCVHashContext;
     178             : 
     179             : /* forward reference */
     180             : typedef struct MCVHashTable_hash MCVHashTable_hash;
     181             : 
     182             : /* Hooks for plugins to get control when we ask for stats */
     183             : get_relation_stats_hook_type get_relation_stats_hook = NULL;
     184             : get_index_stats_hook_type get_index_stats_hook = NULL;
     185             : 
     186             : static double eqsel_internal(PG_FUNCTION_ARGS, bool negate);
     187             : static double eqjoinsel_inner(FmgrInfo *eqproc, Oid collation,
     188             :                               Oid hashLeft, Oid hashRight,
     189             :                               VariableStatData *vardata1, VariableStatData *vardata2,
     190             :                               double nd1, double nd2,
     191             :                               bool isdefault1, bool isdefault2,
     192             :                               AttStatsSlot *sslot1, AttStatsSlot *sslot2,
     193             :                               Form_pg_statistic stats1, Form_pg_statistic stats2,
     194             :                               bool have_mcvs1, bool have_mcvs2,
     195             :                               bool *hasmatch1, bool *hasmatch2,
     196             :                               int *p_nmatches);
     197             : static double eqjoinsel_semi(FmgrInfo *eqproc, Oid collation,
     198             :                              Oid hashLeft, Oid hashRight,
     199             :                              bool op_is_reversed,
     200             :                              VariableStatData *vardata1, VariableStatData *vardata2,
     201             :                              double nd1, double nd2,
     202             :                              bool isdefault1, bool isdefault2,
     203             :                              AttStatsSlot *sslot1, AttStatsSlot *sslot2,
     204             :                              Form_pg_statistic stats1, Form_pg_statistic stats2,
     205             :                              bool have_mcvs1, bool have_mcvs2,
     206             :                              bool *hasmatch1, bool *hasmatch2,
     207             :                              int *p_nmatches,
     208             :                              RelOptInfo *inner_rel);
     209             : static void eqjoinsel_find_matches(FmgrInfo *eqproc, Oid collation,
     210             :                                    Oid hashLeft, Oid hashRight,
     211             :                                    bool op_is_reversed,
     212             :                                    AttStatsSlot *sslot1, AttStatsSlot *sslot2,
     213             :                                    int nvalues1, int nvalues2,
     214             :                                    bool *hasmatch1, bool *hasmatch2,
     215             :                                    int *p_nmatches, double *p_matchprodfreq);
     216             : static uint32 hash_mcv(MCVHashTable_hash *tab, Datum key);
     217             : static bool mcvs_equal(MCVHashTable_hash *tab, Datum key0, Datum key1);
     218             : static bool estimate_multivariate_ndistinct(PlannerInfo *root,
     219             :                                             RelOptInfo *rel, List **varinfos, double *ndistinct);
     220             : static bool convert_to_scalar(Datum value, Oid valuetypid, Oid collid,
     221             :                               double *scaledvalue,
     222             :                               Datum lobound, Datum hibound, Oid boundstypid,
     223             :                               double *scaledlobound, double *scaledhibound);
     224             : static double convert_numeric_to_scalar(Datum value, Oid typid, bool *failure);
     225             : static void convert_string_to_scalar(char *value,
     226             :                                      double *scaledvalue,
     227             :                                      char *lobound,
     228             :                                      double *scaledlobound,
     229             :                                      char *hibound,
     230             :                                      double *scaledhibound);
     231             : static void convert_bytea_to_scalar(Datum value,
     232             :                                     double *scaledvalue,
     233             :                                     Datum lobound,
     234             :                                     double *scaledlobound,
     235             :                                     Datum hibound,
     236             :                                     double *scaledhibound);
     237             : static double convert_one_string_to_scalar(char *value,
     238             :                                            int rangelo, int rangehi);
     239             : static double convert_one_bytea_to_scalar(unsigned char *value, int valuelen,
     240             :                                           int rangelo, int rangehi);
     241             : static char *convert_string_datum(Datum value, Oid typid, Oid collid,
     242             :                                   bool *failure);
     243             : static double convert_timevalue_to_scalar(Datum value, Oid typid,
     244             :                                           bool *failure);
     245             : static Node *strip_all_phvs_deep(PlannerInfo *root, Node *node);
     246             : static bool contain_placeholder_walker(Node *node, void *context);
     247             : static Node *strip_all_phvs_mutator(Node *node, void *context);
     248             : static void examine_simple_variable(PlannerInfo *root, Var *var,
     249             :                                     VariableStatData *vardata);
     250             : static void examine_indexcol_variable(PlannerInfo *root, IndexOptInfo *index,
     251             :                                       int indexcol, VariableStatData *vardata);
     252             : static bool get_variable_range(PlannerInfo *root, VariableStatData *vardata,
     253             :                                Oid sortop, Oid collation,
     254             :                                Datum *min, Datum *max);
     255             : static void get_stats_slot_range(AttStatsSlot *sslot,
     256             :                                  Oid opfuncoid, FmgrInfo *opproc,
     257             :                                  Oid collation, int16 typLen, bool typByVal,
     258             :                                  Datum *min, Datum *max, bool *p_have_data);
     259             : static bool get_actual_variable_range(PlannerInfo *root,
     260             :                                       VariableStatData *vardata,
     261             :                                       Oid sortop, Oid collation,
     262             :                                       Datum *min, Datum *max);
     263             : static bool get_actual_variable_endpoint(Relation heapRel,
     264             :                                          Relation indexRel,
     265             :                                          ScanDirection indexscandir,
     266             :                                          ScanKey scankeys,
     267             :                                          int16 typLen,
     268             :                                          bool typByVal,
     269             :                                          TupleTableSlot *tableslot,
     270             :                                          MemoryContext outercontext,
     271             :                                          Datum *endpointDatum);
     272             : static RelOptInfo *find_join_input_rel(PlannerInfo *root, Relids relids);
     273             : static double btcost_correlation(IndexOptInfo *index,
     274             :                                  VariableStatData *vardata);
     275             : 
     276             : /* Define support routines for MCV hash tables */
     277             : #define SH_PREFIX               MCVHashTable
     278             : #define SH_ELEMENT_TYPE         MCVHashEntry
     279             : #define SH_KEY_TYPE             Datum
     280             : #define SH_KEY                  value
     281             : #define SH_HASH_KEY(tab,key)    hash_mcv(tab, key)
     282             : #define SH_EQUAL(tab,key0,key1) mcvs_equal(tab, key0, key1)
     283             : #define SH_SCOPE                static inline
     284             : #define SH_STORE_HASH
     285             : #define SH_GET_HASH(tab,ent)    (ent)->hash
     286             : #define SH_DEFINE
     287             : #define SH_DECLARE
     288             : #include "lib/simplehash.h"
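
/*
 * Including simplehash.h with the macros above emits an open-addressing hash
 * table specialized for MCVHashEntry, with functions named from SH_PREFIX
 * (MCVHashTable_create, _insert, _lookup, _destroy, ...).  The sketch below
 * is only an illustration of how that generated API is typically driven for
 * MCV matching; it is not the actual eqjoinsel_find_matches() code, and the
 * guard macro is invented.
 */
#ifdef SELFUNCS_ILLUSTRATION_ONLY   /* hypothetical guard; never defined */
static void
mcv_hash_usage_sketch(MCVHashContext *context,
                      AttStatsSlot *sslot1, AttStatsSlot *sslot2,
                      bool *hasmatch2)
{
    MCVHashTable_hash *hashtab;
    int         i;

    /* size the table for the MCV list we are going to load into it */
    hashtab = MCVHashTable_create(CurrentMemoryContext,
                                  sslot2->nvalues,
                                  context);

    /* insert phase: one entry per MCV of the hashed side */
    context->insert_mode = true;
    for (i = 0; i < sslot2->nvalues; i++)
    {
        bool        found;
        MCVHashEntry *entry = MCVHashTable_insert(hashtab,
                                                  sslot2->values[i],
                                                  &found);

        if (!found)
            entry->index = i;
    }

    /* probe phase: look up each MCV of the other side */
    context->insert_mode = false;
    for (i = 0; i < sslot1->nvalues; i++)
    {
        MCVHashEntry *entry = MCVHashTable_lookup(hashtab,
                                                  sslot1->values[i]);

        if (entry != NULL)
            hasmatch2[entry->index] = true;
    }

    MCVHashTable_destroy(hashtab);
}
#endif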
     289             : 
     290             : 
     291             : /*
     292             :  *      eqsel           - Selectivity of "=" for any data types.
     293             :  *
     294             :  * Note: this routine is also used to estimate selectivity for some
     295             :  * operators that are not "=" but have comparable selectivity behavior,
     296             :  * such as "~=" (geometric approximate-match).  Even for "=", we must
     297             :  * keep in mind that the left and right datatypes may differ.
     298             :  */
     299             : Datum
     300      704370 : eqsel(PG_FUNCTION_ARGS)
     301             : {
     302      704370 :     PG_RETURN_FLOAT8((float8) eqsel_internal(fcinfo, false));
     303             : }
     304             : 
     305             : /*
     306             :  * Common code for eqsel() and neqsel()
     307             :  */
     308             : static double
     309      751084 : eqsel_internal(PG_FUNCTION_ARGS, bool negate)
     310             : {
     311      751084 :     PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0);
     312      751084 :     Oid         operator = PG_GETARG_OID(1);
     313      751084 :     List       *args = (List *) PG_GETARG_POINTER(2);
     314      751084 :     int         varRelid = PG_GETARG_INT32(3);
     315      751084 :     Oid         collation = PG_GET_COLLATION();
     316             :     VariableStatData vardata;
     317             :     Node       *other;
     318             :     bool        varonleft;
     319             :     double      selec;
     320             : 
     321             :     /*
     322             :      * When asked about <>, we do the estimation using the corresponding =
     323             :      * operator, then convert to <> via "1.0 - eq_selectivity - nullfrac".
     324             :      */
     325      751084 :     if (negate)
     326             :     {
     327       46714 :         operator = get_negator(operator);
     328       46714 :         if (!OidIsValid(operator))
     329             :         {
     330             :             /* Use default selectivity (should we raise an error instead?) */
     331           0 :             return 1.0 - DEFAULT_EQ_SEL;
     332             :         }
     333             :     }
     334             : 
     335             :     /*
     336             :      * If expression is not variable = something or something = variable, then
     337             :      * punt and return a default estimate.
     338             :      */
     339      751084 :     if (!get_restriction_variable(root, args, varRelid,
     340             :                                   &vardata, &other, &varonleft))
     341        5124 :         return negate ? (1.0 - DEFAULT_EQ_SEL) : DEFAULT_EQ_SEL;
     342             : 
     343             :     /*
     344             :      * We can do a lot better if the something is a constant.  (Note: the
     345             :      * Const might result from estimation rather than being a simple constant
     346             :      * in the query.)
     347             :      */
     348      745954 :     if (IsA(other, Const))
     349      308924 :         selec = var_eq_const(&vardata, operator, collation,
     350      308924 :                              ((Const *) other)->constvalue,
     351      308924 :                              ((Const *) other)->constisnull,
     352             :                              varonleft, negate);
     353             :     else
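
/*
 * Worked example of the <> conversion described above (illustrative numbers
 * only): if the "=" estimate for a clause is 0.02 and the column's nullfrac
 * is 0.10, the "<>" estimate becomes 1.0 - 0.02 - 0.10 = 0.88, since NULLs
 * satisfy neither the "=" nor the "<>" operator.
 */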
     354      437030 :         selec = var_eq_non_const(&vardata, operator, collation, other,
     355             :                                  varonleft, negate);
     356             : 
     357      745954 :     ReleaseVariableStats(vardata);
     358             : 
     359      745954 :     return selec;
     360             : }
     361             : 
     362             : /*
     363             :  * var_eq_const --- eqsel for var = const case
     364             :  *
     365             :  * This is exported so that some other estimation functions can use it.
     366             :  */
     367             : double
     368      353842 : var_eq_const(VariableStatData *vardata, Oid oproid, Oid collation,
     369             :              Datum constval, bool constisnull,
     370             :              bool varonleft, bool negate)
     371             : {
     372             :     double      selec;
     373      353842 :     double      nullfrac = 0.0;
     374             :     bool        isdefault;
     375             :     Oid         opfuncoid;
     376             : 
     377             :     /*
     378             :      * If the constant is NULL, assume operator is strict and return zero, ie,
     379             :      * operator will never return TRUE.  (It's zero even for a negator op.)
     380             :      */
     381      353842 :     if (constisnull)
     382         410 :         return 0.0;
     383             : 
     384             :     /*
     385             :      * Grab the nullfrac for use below.  Note we allow use of nullfrac
     386             :      * regardless of security check.
     387             :      */
     388      353432 :     if (HeapTupleIsValid(vardata->statsTuple))
     389             :     {
     390             :         Form_pg_statistic stats;
     391             : 
     392      266238 :         stats = (Form_pg_statistic) GETSTRUCT(vardata->statsTuple);
     393      266238 :         nullfrac = stats->stanullfrac;
     394             :     }
     395             : 
     396             :     /*
     397             :      * If we matched the var to a unique index, DISTINCT or GROUP-BY clause,
     398             :      * assume there is exactly one match regardless of anything else.  (This
     399             :      * is slightly bogus, since the index or clause's equality operator might
     400             :      * be different from ours, but it's much more likely to be right than
     401             :      * ignoring the information.)
     402             :      */
     403      353432 :     if (vardata->isunique && vardata->rel && vardata->rel->tuples >= 1.0)
     404             :     {
     405       84422 :         selec = 1.0 / vardata->rel->tuples;
     406             :     }
     407      468974 :     else if (HeapTupleIsValid(vardata->statsTuple) &&
     408      199964 :              statistic_proc_security_check(vardata,
     409      199964 :                                            (opfuncoid = get_opcode(oproid))))
     410      199964 :     {
     411             :         AttStatsSlot sslot;
     412      199964 :         bool        match = false;
     413             :         int         i;
     414             : 
     415             :         /*
     416             :          * Is the constant "=" to any of the column's most common values?
     417             :          * (Although the given operator may not really be "=", we will assume
     418             :          * that seeing whether it returns TRUE is an appropriate test.  If you
     419             :          * don't like this, maybe you shouldn't be using eqsel for your
     420             :          * operator...)
     421             :          */
     422      199964 :         if (get_attstatsslot(&sslot, vardata->statsTuple,
     423             :                              STATISTIC_KIND_MCV, InvalidOid,
     424             :                              ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS))
     425             :         {
     426      178610 :             LOCAL_FCINFO(fcinfo, 2);
     427             :             FmgrInfo    eqproc;
     428             : 
     429      178610 :             fmgr_info(opfuncoid, &eqproc);
     430             : 
     431             :             /*
     432             :              * Save a few cycles by setting up the fcinfo struct just once.
     433             :              * Using FunctionCallInvoke directly also avoids failure if the
     434             :              * eqproc returns NULL, though really equality functions should
     435             :              * never do that.
     436             :              */
     437      178610 :             InitFunctionCallInfoData(*fcinfo, &eqproc, 2, collation,
     438             :                                      NULL, NULL);
     439      178610 :             fcinfo->args[0].isnull = false;
     440      178610 :             fcinfo->args[1].isnull = false;
     441             :             /* be careful to apply operator right way 'round */
     442      178610 :             if (varonleft)
     443      178578 :                 fcinfo->args[1].value = constval;
     444             :             else
     445          32 :                 fcinfo->args[0].value = constval;
     446             : 
     447     3069216 :             for (i = 0; i < sslot.nvalues; i++)
     448             :             {
     449             :                 Datum       fresult;
     450             : 
     451     2986322 :                 if (varonleft)
     452     2986266 :                     fcinfo->args[0].value = sslot.values[i];
     453             :                 else
     454          56 :                     fcinfo->args[1].value = sslot.values[i];
     455     2986322 :                 fcinfo->isnull = false;
     456     2986322 :                 fresult = FunctionCallInvoke(fcinfo);
     457     2986322 :                 if (!fcinfo->isnull && DatumGetBool(fresult))
     458             :                 {
     459       95716 :                     match = true;
     460       95716 :                     break;
     461             :                 }
     462             :             }
     463             :         }
     464             :         else
     465             :         {
     466             :             /* no most-common-value info available */
     467       21354 :             i = 0;              /* keep compiler quiet */
     468             :         }
     469             : 
     470      199964 :         if (match)
     471             :         {
     472             :             /*
     473             :              * Constant is "=" to this common value.  We know selectivity
     474             :              * exactly (or as exactly as ANALYZE could calculate it, anyway).
     475             :              */
     476       95716 :             selec = sslot.numbers[i];
     477             :         }
     478             :         else
     479             :         {
     480             :             /*
     481             :              * Comparison is against a constant that is neither NULL nor any
     482             :              * of the common values.  Its selectivity cannot be more than
     483             :              * this:
     484             :              */
     485      104248 :             double      sumcommon = 0.0;
     486             :             double      otherdistinct;
     487             : 
     488     2606120 :             for (i = 0; i < sslot.nnumbers; i++)
     489     2501872 :                 sumcommon += sslot.numbers[i];
     490      104248 :             selec = 1.0 - sumcommon - nullfrac;
     491      104248 :             CLAMP_PROBABILITY(selec);
     492             : 
     493             :             /*
     494             :              * and in fact it's probably a good deal less. We approximate that
     495             :              * all the not-common values share this remaining fraction
     496             :              * equally, so we divide by the number of other distinct values.
     497             :              */
     498      104248 :             otherdistinct = get_variable_numdistinct(vardata, &isdefault) -
     499      104248 :                 sslot.nnumbers;
     500      104248 :             if (otherdistinct > 1)
     501       53960 :                 selec /= otherdistinct;
     502             : 
     503             :             /*
     504             :              * Another cross-check: selectivity shouldn't be estimated as more
     505             :              * than the least common "most common value".
     506             :              */
     507      104248 :             if (sslot.nnumbers > 0 && selec > sslot.numbers[sslot.nnumbers - 1])
     508           0 :                 selec = sslot.numbers[sslot.nnumbers - 1];
     509             :         }
     510             : 
     511      199964 :         free_attstatsslot(&sslot);
     512             :     }
     513             :     else
     514             :     {
     515             :         /*
     516             :          * No ANALYZE stats available, so make a guess using estimated number
     517             :          * of distinct values and assuming they are equally common. (The guess
     518             :          * is unlikely to be very good, but we do know a few special cases.)
     519             :          */
     520       69046 :         selec = 1.0 / get_variable_numdistinct(vardata, &isdefault);
     521             :     }
     522             : 
     523             :     /* now adjust if we wanted <> rather than = */
     524      353432 :     if (negate)
     525       37730 :         selec = 1.0 - selec - nullfrac;
     526             : 
     527             :     /* result should be in range, but make sure... */
     528      353432 :     CLAMP_PROBABILITY(selec);
     529             : 
     530      353432 :     return selec;
     531             : }
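
/*
 * Worked example of the non-MCV branch above (illustrative numbers only):
 * with nullfrac = 0.05, an MCV list covering sumcommon = 0.60 of the rows,
 * and 200 distinct values of which 100 appear in the MCV list, a constant
 * matching no MCV gets selec = (1.0 - 0.60 - 0.05) / (200 - 100) = 0.0035,
 * further clamped to the least common MCV frequency if that is smaller.
 */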
     532             : 
     533             : /*
     534             :  * var_eq_non_const --- eqsel for var = something-other-than-const case
     535             :  *
     536             :  * This is exported so that some other estimation functions can use it.
     537             :  */
     538             : double
     539      437030 : var_eq_non_const(VariableStatData *vardata, Oid oproid, Oid collation,
     540             :                  Node *other,
     541             :                  bool varonleft, bool negate)
     542             : {
     543             :     double      selec;
     544      437030 :     double      nullfrac = 0.0;
     545             :     bool        isdefault;
     546             : 
     547             :     /*
     548             :      * Grab the nullfrac for use below.
     549             :      */
     550      437030 :     if (HeapTupleIsValid(vardata->statsTuple))
     551             :     {
     552             :         Form_pg_statistic stats;
     553             : 
     554      300094 :         stats = (Form_pg_statistic) GETSTRUCT(vardata->statsTuple);
     555      300094 :         nullfrac = stats->stanullfrac;
     556             :     }
     557             : 
     558             :     /*
     559             :      * If we matched the var to a unique index, DISTINCT or GROUP-BY clause,
     560             :      * assume there is exactly one match regardless of anything else.  (This
     561             :      * is slightly bogus, since the index or clause's equality operator might
     562             :      * be different from ours, but it's much more likely to be right than
     563             :      * ignoring the information.)
     564             :      */
     565      437030 :     if (vardata->isunique && vardata->rel && vardata->rel->tuples >= 1.0)
     566             :     {
     567      164588 :         selec = 1.0 / vardata->rel->tuples;
     568             :     }
     569      272442 :     else if (HeapTupleIsValid(vardata->statsTuple))
     570             :     {
     571             :         double      ndistinct;
     572             :         AttStatsSlot sslot;
     573             : 
     574             :         /*
     575             :          * Search is for a value that we do not know a priori, but we will
     576             :          * assume it is not NULL.  Estimate the selectivity as non-null
     577             :          * fraction divided by number of distinct values, so that we get a
     578             :          * result averaged over all possible values whether common or
     579             :          * uncommon.  (Essentially, we are assuming that the not-yet-known
     580             :          * comparison value is equally likely to be any of the possible
     581             :          * values, regardless of their frequency in the table.  Is that a good
     582             :          * idea?)
     583             :          */
     584      151796 :         selec = 1.0 - nullfrac;
     585      151796 :         ndistinct = get_variable_numdistinct(vardata, &isdefault);
     586      151796 :         if (ndistinct > 1)
     587      147968 :             selec /= ndistinct;
     588             : 
     589             :         /*
     590             :          * Cross-check: selectivity should never be estimated as more than the
     591             :          * most common value's.
     592             :          */
     593      151796 :         if (get_attstatsslot(&sslot, vardata->statsTuple,
     594             :                              STATISTIC_KIND_MCV, InvalidOid,
     595             :                              ATTSTATSSLOT_NUMBERS))
     596             :         {
     597      132488 :             if (sslot.nnumbers > 0 && selec > sslot.numbers[0])
     598         582 :                 selec = sslot.numbers[0];
     599      132488 :             free_attstatsslot(&sslot);
     600             :         }
     601             :     }
     602             :     else
     603             :     {
     604             :         /*
     605             :          * No ANALYZE stats available, so make a guess using estimated number
     606             :          * of distinct values and assuming they are equally common. (The guess
     607             :          * is unlikely to be very good, but we do know a few special cases.)
     608             :          */
     609      120646 :         selec = 1.0 / get_variable_numdistinct(vardata, &isdefault);
     610             :     }
     611             : 
     612             :     /* now adjust if we wanted <> rather than = */
     613      437030 :     if (negate)
     614        6590 :         selec = 1.0 - selec - nullfrac;
     615             : 
     616             :     /* result should be in range, but make sure... */
     617      437030 :     CLAMP_PROBABILITY(selec);
     618             : 
     619      437030 :     return selec;
     620             : }
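
/*
 * Worked example of the stats-based branch above (illustrative numbers
 * only): with nullfrac = 0.10 and ndistinct = 50, the estimate for
 * "var = <value not known until runtime>" is (1.0 - 0.10) / 50 = 0.018,
 * capped at the frequency of the most common value if that is smaller.
 */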
     621             : 
     622             : /*
     623             :  *      neqsel          - Selectivity of "!=" for any data types.
     624             :  *
     625             :  * This routine is also used for some operators that are not "!="
     626             :  * but have comparable selectivity behavior.  See above comments
     627             :  * for eqsel().
     628             :  */
     629             : Datum
     630       46714 : neqsel(PG_FUNCTION_ARGS)
     631             : {
     632       46714 :     PG_RETURN_FLOAT8((float8) eqsel_internal(fcinfo, true));
     633             : }
     634             : 
     635             : /*
     636             :  *  scalarineqsel       - Selectivity of "<", "<=", ">", ">=" for scalars.
     637             :  *
     638             :  * This is the guts of scalarltsel/scalarlesel/scalargtsel/scalargesel.
     639             :  * The isgt and iseq flags distinguish which of the four cases apply.
     640             :  *
     641             :  * The caller has commuted the clause, if necessary, so that we can treat
     642             :  * the variable as being on the left.  The caller must also make sure that
     643             :  * the other side of the clause is a non-null Const, and dissect that into
     644             :  * a value and datatype.  (This definition simplifies some callers that
     645             :  * want to estimate against a computed value instead of a Const node.)
     646             :  *
     647             :  * This routine works for any datatype (or pair of datatypes) known to
     648             :  * convert_to_scalar().  If it is applied to some other datatype,
     649             :  * it will return an approximate estimate based on assuming that the constant
     650             :  * value falls in the middle of the bin identified by binary search.
     651             :  */
     652             : static double
     653      379624 : scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, bool iseq,
     654             :               Oid collation,
     655             :               VariableStatData *vardata, Datum constval, Oid consttype)
     656             : {
     657             :     Form_pg_statistic stats;
     658             :     FmgrInfo    opproc;
     659             :     double      mcv_selec,
     660             :                 hist_selec,
     661             :                 sumcommon;
     662             :     double      selec;
     663             : 
     664      379624 :     if (!HeapTupleIsValid(vardata->statsTuple))
     665             :     {
     666             :         /*
     667             :          * No stats are available.  Typically this means we have to fall back
     668             :          * on the default estimate; but if the variable is CTID then we can
     669             :          * make an estimate based on comparing the constant to the table size.
     670             :          */
     671       28506 :         if (vardata->var && IsA(vardata->var, Var) &&
     672       23540 :             ((Var *) vardata->var)->varattno == SelfItemPointerAttributeNumber)
     673             :         {
     674             :             ItemPointer itemptr;
     675             :             double      block;
     676             :             double      density;
     677             : 
     678             :             /*
     679             :              * If the relation's empty, we're going to include all of it.
     680             :              * (This is mostly to avoid divide-by-zero below.)
     681             :              */
     682        2020 :             if (vardata->rel->pages == 0)
     683           0 :                 return 1.0;
     684             : 
     685        2020 :             itemptr = (ItemPointer) DatumGetPointer(constval);
     686        2020 :             block = ItemPointerGetBlockNumberNoCheck(itemptr);
     687             : 
     688             :             /*
     689             :              * Determine the average number of tuples per page (density).
     690             :              *
     691             :              * Since the last page will, on average, be only half full, we can
     692             :              * estimate it to have half as many tuples as earlier pages.  So
     693             :              * give it half the weight of a regular page.
     694             :              */
     695        2020 :             density = vardata->rel->tuples / (vardata->rel->pages - 0.5);
     696             : 
     697             :             /* If target is the last page, use half the density. */
     698        2020 :             if (block >= vardata->rel->pages - 1)
     699          30 :                 density *= 0.5;
     700             : 
     701             :             /*
     702             :              * Using the average tuples per page, calculate how far into the
     703             :              * page the itemptr is likely to be and adjust block accordingly,
     704             :              * by adding that fraction of a whole block (but never more than a
     705             :              * whole block, no matter how high the itemptr's offset is).  Here
     706             :              * we are ignoring the possibility of dead-tuple line pointers,
     707             :              * which is fairly bogus, but we lack the info to do better.
     708             :              */
     709        2020 :             if (density > 0.0)
     710             :             {
     711        2020 :                 OffsetNumber offset = ItemPointerGetOffsetNumberNoCheck(itemptr);
     712             : 
     713        2020 :                 block += Min(offset / density, 1.0);
     714             :             }
     715             : 
     716             :             /*
     717             :              * Convert relative block number to selectivity.  Again, the last
     718             :              * page has only half weight.
     719             :              */
     720        2020 :             selec = block / (vardata->rel->pages - 0.5);
     721             : 
     722             :             /*
     723             :              * The calculation so far gave us a selectivity for the "<=" case.
     724             :              * We'll have one fewer tuple for "<" and one additional tuple for
     725             :              * ">=", the latter of which we'll reverse the selectivity for
     726             :              * below, so we can simply subtract one tuple for both cases.  The
     727             :              * cases that need this adjustment can be identified by iseq being
     728             :              * equal to isgt.
     729             :              */
     730        2020 :             if (iseq == isgt && vardata->rel->tuples >= 1.0)
     731        1880 :                 selec -= (1.0 / vardata->rel->tuples);
     732             : 
     733             :             /* Finally, reverse the selectivity for the ">", ">=" cases. */
     734        2020 :             if (isgt)
     735        1862 :                 selec = 1.0 - selec;
     736             : 
     737        2020 :             CLAMP_PROBABILITY(selec);
     738        2020 :             return selec;
     739             :         }
     740             : 
     741             :         /* no stats available, so default result */
     742       26486 :         return DEFAULT_INEQ_SEL;
     743             :     }
     744      351118 :     stats = (Form_pg_statistic) GETSTRUCT(vardata->statsTuple);
     745             : 
     746      351118 :     fmgr_info(get_opcode(operator), &opproc);
     747             : 
     748             :     /*
     749             :      * If we have most-common-values info, add up the fractions of the MCV
     750             :      * entries that satisfy MCV OP CONST.  These fractions contribute directly
     751             :      * to the result selectivity.  Also add up the total fraction represented
     752             :      * by MCV entries.
     753             :      */
     754      351118 :     mcv_selec = mcv_selectivity(vardata, &opproc, collation, constval, true,
     755             :                                 &sumcommon);
     756             : 
     757             :     /*
     758             :      * If there is a histogram, determine which bin the constant falls in, and
     759             :      * compute the resulting contribution to selectivity.
     760             :      */
     761      351118 :     hist_selec = ineq_histogram_selectivity(root, vardata,
     762             :                                             operator, &opproc, isgt, iseq,
     763             :                                             collation,
     764             :                                             constval, consttype);
     765             : 
     766             :     /*
     767             :      * Now merge the results from the MCV and histogram calculations,
     768             :      * realizing that the histogram covers only the non-null values that are
     769             :      * not listed in MCV.
     770             :      */
     771      351118 :     selec = 1.0 - stats->stanullfrac - sumcommon;
     772             : 
     773      351118 :     if (hist_selec >= 0.0)
     774      219282 :         selec *= hist_selec;
     775             :     else
     776             :     {
     777             :         /*
     778             :          * If no histogram but there are values not accounted for by MCV,
     779             :          * arbitrarily assume half of them will match.
     780             :          */
     781      131836 :         selec *= 0.5;
     782             :     }
     783             : 
     784      351118 :     selec += mcv_selec;
     785             : 
     786             :     /* result should be in range, but make sure... */
     787      351118 :     CLAMP_PROBABILITY(selec);
     788             : 
     789      351118 :     return selec;
     790             : }
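
/*
 * Worked example of the MCV/histogram merge above (illustrative numbers
 * only): with stanullfrac = 0.05, an MCV list covering sumcommon = 0.30 of
 * the rows of which mcv_selec = 0.12 satisfy the clause, and hist_selec =
 * 0.40 for the non-null, non-MCV remainder, the estimate is
 *      (1.0 - 0.05 - 0.30) * 0.40 + 0.12 = 0.26 + 0.12 = 0.38
 */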
     791             : 
     792             : /*
     793             :  *  mcv_selectivity         - Examine the MCV list for selectivity estimates
     794             :  *
     795             :  * Determine the fraction of the variable's MCV population that satisfies
     796             :  * the predicate (VAR OP CONST), or (CONST OP VAR) if !varonleft.  Also
     797             :  * compute the fraction of the total column population represented by the MCV
     798             :  * list.  This code will work for any boolean-returning predicate operator.
     799             :  *
     800             :  * The function result is the MCV selectivity, and the fraction of the
     801             :  * total population is returned into *sumcommonp.  Zeroes are returned
     802             :  * if there is no MCV list.
     803             :  */
     804             : double
     805      357408 : mcv_selectivity(VariableStatData *vardata, FmgrInfo *opproc, Oid collation,
     806             :                 Datum constval, bool varonleft,
     807             :                 double *sumcommonp)
     808             : {
     809             :     double      mcv_selec,
     810             :                 sumcommon;
     811             :     AttStatsSlot sslot;
     812             :     int         i;
     813             : 
     814      357408 :     mcv_selec = 0.0;
     815      357408 :     sumcommon = 0.0;
     816             : 
     817      712348 :     if (HeapTupleIsValid(vardata->statsTuple) &&
     818      709550 :         statistic_proc_security_check(vardata, opproc->fn_oid) &&
     819      354610 :         get_attstatsslot(&sslot, vardata->statsTuple,
     820             :                          STATISTIC_KIND_MCV, InvalidOid,
     821             :                          ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS))
     822             :     {
     823      196240 :         LOCAL_FCINFO(fcinfo, 2);
     824             : 
     825             :         /*
     826             :          * We invoke the opproc "by hand" so that we won't fail on NULL
     827             :          * results.  Such cases won't arise for normal comparison functions,
     828             :          * but generic_restriction_selectivity could perhaps be used with
     829             :          * operators that can return NULL.  A small side benefit is to not
     830             :          * need to re-initialize the fcinfo struct from scratch each time.
     831             :          */
     832      196240 :         InitFunctionCallInfoData(*fcinfo, opproc, 2, collation,
     833             :                                  NULL, NULL);
     834      196240 :         fcinfo->args[0].isnull = false;
     835      196240 :         fcinfo->args[1].isnull = false;
     836             :         /* be careful to apply operator right way 'round */
     837      196240 :         if (varonleft)
     838      196240 :             fcinfo->args[1].value = constval;
     839             :         else
     840           0 :             fcinfo->args[0].value = constval;
     841             : 
     842     4683962 :         for (i = 0; i < sslot.nvalues; i++)
     843             :         {
     844             :             Datum       fresult;
     845             : 
     846     4487722 :             if (varonleft)
     847     4487722 :                 fcinfo->args[0].value = sslot.values[i];
     848             :             else
     849           0 :                 fcinfo->args[1].value = sslot.values[i];
     850     4487722 :             fcinfo->isnull = false;
     851     4487722 :             fresult = FunctionCallInvoke(fcinfo);
     852     4487722 :             if (!fcinfo->isnull && DatumGetBool(fresult))
     853     1732724 :                 mcv_selec += sslot.numbers[i];
     854     4487722 :             sumcommon += sslot.numbers[i];
     855             :         }
     856      196240 :         free_attstatsslot(&sslot);
     857             :     }
     858             : 
     859      357408 :     *sumcommonp = sumcommon;
     860      357408 :     return mcv_selec;
     861             : }
     862             : 
     863             : /*
     864             :  *  histogram_selectivity   - Examine the histogram for selectivity estimates
     865             :  *
     866             :  * Determine the fraction of the variable's histogram entries that satisfy
     867             :  * the predicate (VAR OP CONST), or (CONST OP VAR) if !varonleft.
     868             :  *
     869             :  * This code will work for any boolean-returning predicate operator, whether
     870             :  * or not it has anything to do with the histogram sort operator.  We are
     871             :  * essentially using the histogram just as a representative sample.  However,
     872             :  * small histograms are unlikely to be all that representative, so the caller
     873             :  * should be prepared to fall back on some other estimation approach when the
     874             :  * histogram is missing or very small.  It may also be prudent to combine this
     875             :  * approach with another one when the histogram is small.
     876             :  *
     877             :  * If the actual histogram size is not at least min_hist_size, we won't bother
     878             :  * to do the calculation at all.  Also, if the n_skip parameter is > 0, we
     879             :  * ignore the first and last n_skip histogram elements, on the grounds that
     880             :  * they are outliers and hence not very representative.  Typical values for
     881             :  * these parameters are 10 and 1.
     882             :  *
     883             :  * The function result is the selectivity, or -1 if there is no histogram
     884             :  * or it's smaller than min_hist_size.
     885             :  *
     886             :  * The output parameter *hist_size receives the actual histogram size,
     887             :  * or zero if no histogram.  Callers may use this number to decide how
     888             :  * much faith to put in the function result.
     889             :  *
     890             :  * Note that the result disregards both the most-common-values (if any) and
     891             :  * null entries.  The caller is expected to combine this result with
     892             :  * statistics for those portions of the column population.  It may also be
     893             :  * prudent to clamp the result range, ie, disbelieve exact 0 or 1 outputs.
     894             :  */
     895             : double
     896        6290 : histogram_selectivity(VariableStatData *vardata,
     897             :                       FmgrInfo *opproc, Oid collation,
     898             :                       Datum constval, bool varonleft,
     899             :                       int min_hist_size, int n_skip,
     900             :                       int *hist_size)
     901             : {
     902             :     double      result;
     903             :     AttStatsSlot sslot;
     904             : 
     905             :     /* check sanity of parameters */
     906             :     Assert(n_skip >= 0);
     907             :     Assert(min_hist_size > 2 * n_skip);
     908             : 
     909       10112 :     if (HeapTupleIsValid(vardata->statsTuple) &&
     910        7638 :         statistic_proc_security_check(vardata, opproc->fn_oid) &&
     911        3816 :         get_attstatsslot(&sslot, vardata->statsTuple,
     912             :                          STATISTIC_KIND_HISTOGRAM, InvalidOid,
     913             :                          ATTSTATSSLOT_VALUES))
     914             :     {
     915        3722 :         *hist_size = sslot.nvalues;
     916        3722 :         if (sslot.nvalues >= min_hist_size)
     917             :         {
     918        1790 :             LOCAL_FCINFO(fcinfo, 2);
     919        1790 :             int         nmatch = 0;
     920             :             int         i;
     921             : 
     922             :             /*
     923             :              * We invoke the opproc "by hand" so that we won't fail on NULL
     924             :              * results.  Such cases won't arise for normal comparison
     925             :              * functions, but generic_restriction_selectivity could perhaps be
     926             :              * used with operators that can return NULL.  A small side benefit
     927             :              * is that we need not re-initialize the fcinfo struct from
     928             :              * scratch each time.
     929             :              */
     930        1790 :             InitFunctionCallInfoData(*fcinfo, opproc, 2, collation,
     931             :                                      NULL, NULL);
     932        1790 :             fcinfo->args[0].isnull = false;
     933        1790 :             fcinfo->args[1].isnull = false;
     934             :             /* be careful to apply operator right way 'round */
     935        1790 :             if (varonleft)
     936        1790 :                 fcinfo->args[1].value = constval;
     937             :             else
     938           0 :                 fcinfo->args[0].value = constval;
     939             : 
     940      146566 :             for (i = n_skip; i < sslot.nvalues - n_skip; i++)
     941             :             {
     942             :                 Datum       fresult;
     943             : 
     944      144776 :                 if (varonleft)
     945      144776 :                     fcinfo->args[0].value = sslot.values[i];
     946             :                 else
     947           0 :                     fcinfo->args[1].value = sslot.values[i];
     948      144776 :                 fcinfo->isnull = false;
     949      144776 :                 fresult = FunctionCallInvoke(fcinfo);
     950      144776 :                 if (!fcinfo->isnull && DatumGetBool(fresult))
     951        9616 :                     nmatch++;
     952             :             }
     953        1790 :             result = ((double) nmatch) / ((double) (sslot.nvalues - 2 * n_skip));
     954             :         }
     955             :         else
     956        1932 :             result = -1;
     957        3722 :         free_attstatsslot(&sslot);
     958             :     }
     959             :     else
     960             :     {
     961        2568 :         *hist_size = 0;
     962        2568 :         result = -1;
     963             :     }
     964             : 
     965        6290 :     return result;
     966             : }
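/*
 * A hypothetical caller sketch (not part of selfuncs.c), illustrating the
 * calling convention described above with the typical parameter values 10
 * and 1.  The function name and the "default_sel" argument are invented for
 * illustration; vardata, opproc, collation, and constval are assumed to be
 * set up the same way generic_restriction_selectivity() does below.
 */
static double
example_histogram_fraction(VariableStatData *vardata, FmgrInfo *opproc,
                           Oid collation, Datum constval, double default_sel)
{
    int         hist_size;
    double      sel;

    sel = histogram_selectivity(vardata, opproc, collation,
                                constval, true, /* VAR OP CONST */
                                10, 1,          /* min_hist_size, n_skip */
                                &hist_size);
    if (sel < 0)
        sel = default_sel;      /* no histogram, or fewer than 10 entries */
    return sel;
}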
     967             : 
     968             : /*
     969             :  *  generic_restriction_selectivity     - Selectivity for almost anything
     970             :  *
     971             :  * This function estimates selectivity for operators that we don't have any
     972             :  * special knowledge about, but are on data types that we collect standard
     973             :  * MCV and/or histogram statistics for.  (Additional assumptions are that
     974             :  * the operator is strict and immutable, or at least stable.)
     975             :  *
     976             :  * If we have "VAR OP CONST" or "CONST OP VAR", selectivity is estimated by
     977             :  * applying the operator to each element of the column's MCV and/or histogram
     978             :  * stats, and merging the results using the assumption that the histogram is
     979             :  * a reasonable random sample of the column's non-MCV population.  Note that
     980             :  * if the operator's semantics are related to the histogram ordering, this
     981             :  * might not be such a great assumption; other functions such as
     982             :  * scalarineqsel() are probably a better match in such cases.
     983             :  *
     984             :  * Otherwise, fall back to the default selectivity provided by the caller.
     985             :  */
     986             : double
     987        1130 : generic_restriction_selectivity(PlannerInfo *root, Oid oproid, Oid collation,
     988             :                                 List *args, int varRelid,
     989             :                                 double default_selectivity)
     990             : {
     991             :     double      selec;
     992             :     VariableStatData vardata;
     993             :     Node       *other;
     994             :     bool        varonleft;
     995             : 
     996             :     /*
     997             :      * If expression is not variable OP something or something OP variable,
     998             :      * then punt and return the default estimate.
     999             :      */
    1000        1130 :     if (!get_restriction_variable(root, args, varRelid,
    1001             :                                   &vardata, &other, &varonleft))
    1002           0 :         return default_selectivity;
    1003             : 
    1004             :     /*
    1005             :      * If the something is a NULL constant, assume operator is strict and
    1006             :      * return zero, ie, operator will never return TRUE.
    1007             :      */
    1008        1130 :     if (IsA(other, Const) &&
    1009        1130 :         ((Const *) other)->constisnull)
    1010             :     {
    1011           0 :         ReleaseVariableStats(vardata);
    1012           0 :         return 0.0;
    1013             :     }
    1014             : 
    1015        1130 :     if (IsA(other, Const))
    1016             :     {
    1017             :         /* Variable is being compared to a known non-null constant */
    1018        1130 :         Datum       constval = ((Const *) other)->constvalue;
    1019             :         FmgrInfo    opproc;
    1020             :         double      mcvsum;
    1021             :         double      mcvsel;
    1022             :         double      nullfrac;
    1023             :         int         hist_size;
    1024             : 
    1025        1130 :         fmgr_info(get_opcode(oproid), &opproc);
    1026             : 
    1027             :         /*
    1028             :          * Calculate the selectivity for the column's most common values.
    1029             :          */
    1030        1130 :         mcvsel = mcv_selectivity(&vardata, &opproc, collation,
    1031             :                                  constval, varonleft,
    1032             :                                  &mcvsum);
    1033             : 
    1034             :         /*
    1035             :          * If the histogram is large enough, see what fraction of it matches
    1036             :          * the query, and assume that's representative of the non-MCV
    1037             :          * population.  Otherwise use the default selectivity for the non-MCV
    1038             :          * population.
    1039             :          */
    1040        1130 :         selec = histogram_selectivity(&vardata, &opproc, collation,
    1041             :                                       constval, varonleft,
    1042             :                                       10, 1, &hist_size);
    1043        1130 :         if (selec < 0)
    1044             :         {
    1045             :             /* Nope, fall back on default */
    1046        1130 :             selec = default_selectivity;
    1047             :         }
    1048           0 :         else if (hist_size < 100)
    1049             :         {
    1050             :             /*
    1051             :              * For histogram sizes from 10 to 100, we combine the histogram
    1052             :              * and default selectivities, putting increasingly more trust in
    1053             :              * the histogram for larger sizes.
    1054             :              */
    1055           0 :             double      hist_weight = hist_size / 100.0;
    1056             : 
    1057           0 :             selec = selec * hist_weight +
    1058           0 :                 default_selectivity * (1.0 - hist_weight);
    1059             :         }
    1060             : 
    1061             :         /* In any case, don't believe extremely small or large estimates. */
    1062        1130 :         if (selec < 0.0001)
    1063           0 :             selec = 0.0001;
    1064        1130 :         else if (selec > 0.9999)
    1065           0 :             selec = 0.9999;
    1066             : 
    1067             :         /* Don't forget to account for nulls. */
    1068        1130 :         if (HeapTupleIsValid(vardata.statsTuple))
    1069          84 :             nullfrac = ((Form_pg_statistic) GETSTRUCT(vardata.statsTuple))->stanullfrac;
    1070             :         else
    1071        1046 :             nullfrac = 0.0;
    1072             : 
    1073             :         /*
    1074             :          * Now merge the results from the MCV and histogram calculations,
    1075             :          * realizing that the histogram covers only the non-null values that
    1076             :          * are not listed in MCV.
    1077             :          */
    1078        1130 :         selec *= 1.0 - nullfrac - mcvsum;
    1079        1130 :         selec += mcvsel;
    1080             :     }
    1081             :     else
    1082             :     {
    1083             :         /* Comparison value is not constant, so we can't do anything */
    1084           0 :         selec = default_selectivity;
    1085             :     }
    1086             : 
    1087        1130 :     ReleaseVariableStats(vardata);
    1088             : 
    1089             :     /* result should be in range, but make sure... */
    1090        1130 :     CLAMP_PROBABILITY(selec);
    1091             : 
    1092        1130 :     return selec;
    1093             : }
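/*
 * Worked example of the final merge step above, using illustrative numbers
 * only.  Suppose stanullfrac = 0.10, the MCV list covers mcvsum = 0.30 of
 * the population with mcvsel = 0.12 of the table matching via MCVs, and the
 * histogram (or default) estimate for the remaining population is
 * selec = 0.20.  Then
 *
 *      selec = 0.20 * (1.0 - 0.10 - 0.30) + 0.12
 *            = 0.20 * 0.60 + 0.12
 *            = 0.24
 *
 * i.e. an estimated 12% of all rows match via MCV entries and another 12%
 * match among the non-null, non-MCV remainder.
 */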
    1094             : 
    1095             : /*
    1096             :  *  ineq_histogram_selectivity  - Examine the histogram for scalarineqsel
    1097             :  *
    1098             :  * Determine the fraction of the variable's histogram population that
    1099             :  * satisfies the inequality condition, ie, VAR < (or <=, >, >=) CONST.
    1100             :  * The isgt and iseq flags distinguish which of the four cases apply.
    1101             :  *
    1102             :  * While opproc could be looked up from the operator OID, common callers
    1103             :  * also need to call it separately, so we make the caller pass both.
    1104             :  *
    1105             :  * Returns -1 if there is no histogram (valid results will always be >= 0).
    1106             :  *
    1107             :  * Note that the result disregards both the most-common-values (if any) and
    1108             :  * null entries.  The caller is expected to combine this result with
    1109             :  * statistics for those portions of the column population.
    1110             :  *
    1111             :  * This is exported so that some other estimation functions can use it.
    1112             :  */
    1113             : double
    1114      356530 : ineq_histogram_selectivity(PlannerInfo *root,
    1115             :                            VariableStatData *vardata,
    1116             :                            Oid opoid, FmgrInfo *opproc, bool isgt, bool iseq,
    1117             :                            Oid collation,
    1118             :                            Datum constval, Oid consttype)
    1119             : {
    1120             :     double      hist_selec;
    1121             :     AttStatsSlot sslot;
    1122             : 
    1123      356530 :     hist_selec = -1.0;
    1124             : 
    1125             :     /*
    1126             :      * Someday, ANALYZE might store more than one histogram per rel/att,
    1127             :      * corresponding to more than one possible sort ordering defined for the
    1128             :      * column type.  Right now, we know there is only one, so just grab it and
    1129             :      * see if it matches the query.
    1130             :      *
    1131             :      * Note that we can't use opoid as search argument; the staop appearing in
    1132             :      * pg_statistic will be for the relevant '<' operator, but what we have
    1133             :      * might be some other inequality operator such as '>='.  (Even if opoid
    1134             :      * is a '<' operator, it could be cross-type.)  Hence we must use
    1135             :      * comparison_ops_are_compatible() to see if the operators match.
    1136             :      */
    1137      712358 :     if (HeapTupleIsValid(vardata->statsTuple) &&
    1138      711332 :         statistic_proc_security_check(vardata, opproc->fn_oid) &&
    1139      355504 :         get_attstatsslot(&sslot, vardata->statsTuple,
    1140             :                          STATISTIC_KIND_HISTOGRAM, InvalidOid,
    1141             :                          ATTSTATSSLOT_VALUES))
    1142             :     {
    1143      223990 :         if (sslot.nvalues > 1 &&
    1144      447904 :             sslot.stacoll == collation &&
    1145      223914 :             comparison_ops_are_compatible(sslot.staop, opoid))
    1146      223806 :         {
    1147             :             /*
    1148             :              * Use binary search to find the desired location, namely the
    1149             :              * right end of the histogram bin containing the comparison value,
    1150             :              * which is the leftmost entry for which the comparison operator
    1151             :              * succeeds (if isgt) or fails (if !isgt).
    1152             :              *
    1153             :              * In this loop, we pay no attention to whether the operator iseq
    1154             :              * or not; that detail will be mopped up below.  (We cannot tell,
    1155             :              * anyway, whether the operator thinks the values are equal.)
    1156             :              *
    1157             :              * If the binary search accesses the first or last histogram
    1158             :              * entry, we try to replace that endpoint with the true column min
    1159             :              * or max as found by get_actual_variable_range().  This
    1160             :              * ameliorates misestimates when the min or max is moving as a
    1161             :              * result of changes since the last ANALYZE.  Note that this could
    1162             :              * result in effectively including MCVs into the histogram that
    1163             :              * weren't there before, but we don't try to correct for that.
    1164             :              */
    1165             :             double      histfrac;
    1166      223806 :             int         lobound = 0;    /* first possible slot to search */
    1167      223806 :             int         hibound = sslot.nvalues;    /* last+1 slot to search */
    1168      223806 :             bool        have_end = false;
    1169             : 
    1170             :             /*
    1171             :              * If there are only two histogram entries, we'll want up-to-date
    1172             :              * values for both.  (If there are more than two, we need at most
    1173             :              * one of them to be updated, so we deal with that within the
    1174             :              * loop.)
    1175             :              */
    1176      223806 :             if (sslot.nvalues == 2)
    1177        2958 :                 have_end = get_actual_variable_range(root,
    1178             :                                                      vardata,
    1179             :                                                      sslot.staop,
    1180             :                                                      collation,
    1181             :                                                      &sslot.values[0],
    1182        2958 :                                                      &sslot.values[1]);
    1183             : 
    1184     1484066 :             while (lobound < hibound)
    1185             :             {
    1186     1260260 :                 int         probe = (lobound + hibound) / 2;
    1187             :                 bool        ltcmp;
    1188             : 
    1189             :                 /*
    1190             :                  * If we find ourselves about to compare to the first or last
    1191             :                  * histogram entry, first try to replace it with the actual
    1192             :                  * current min or max (unless we already did so above).
    1193             :                  */
    1194     1260260 :                 if (probe == 0 && sslot.nvalues > 2)
    1195      110346 :                     have_end = get_actual_variable_range(root,
    1196             :                                                          vardata,
    1197             :                                                          sslot.staop,
    1198             :                                                          collation,
    1199             :                                                          &sslot.values[0],
    1200             :                                                          NULL);
    1201     1149914 :                 else if (probe == sslot.nvalues - 1 && sslot.nvalues > 2)
    1202       76252 :                     have_end = get_actual_variable_range(root,
    1203             :                                                          vardata,
    1204             :                                                          sslot.staop,
    1205             :                                                          collation,
    1206             :                                                          NULL,
    1207       76252 :                                                          &sslot.values[probe]);
    1208             : 
    1209     1260260 :                 ltcmp = DatumGetBool(FunctionCall2Coll(opproc,
    1210             :                                                        collation,
    1211     1260260 :                                                        sslot.values[probe],
    1212             :                                                        constval));
    1213     1260260 :                 if (isgt)
    1214       69142 :                     ltcmp = !ltcmp;
    1215     1260260 :                 if (ltcmp)
    1216      475734 :                     lobound = probe + 1;
    1217             :                 else
    1218      784526 :                     hibound = probe;
    1219             :             }
    1220             : 
    1221      223806 :             if (lobound <= 0)
    1222             :             {
    1223             :                 /*
    1224             :                  * Constant is below lower histogram boundary.  More
    1225             :                  * precisely, we have found that no entry in the histogram
    1226             :                  * satisfies the inequality clause (if !isgt) or they all do
    1227             :                  * (if isgt).  We estimate that that's true of the entire
    1228             :                  * table, so set histfrac to 0.0 (which we'll flip to 1.0
    1229             :                  * below, if isgt).
    1230             :                  */
    1231       95610 :                 histfrac = 0.0;
    1232             :             }
    1233      128196 :             else if (lobound >= sslot.nvalues)
    1234             :             {
    1235             :                 /*
    1236             :                  * Inverse case: constant is above upper histogram boundary.
    1237             :                  */
    1238       38224 :                 histfrac = 1.0;
    1239             :             }
    1240             :             else
    1241             :             {
    1242             :                 /* We have values[i-1] <= constant <= values[i]. */
    1243       89972 :                 int         i = lobound;
    1244       89972 :                 double      eq_selec = 0;
    1245             :                 double      val,
    1246             :                             high,
    1247             :                             low;
    1248             :                 double      binfrac;
    1249             : 
    1250             :                 /*
    1251             :                  * In the cases where we'll need it below, obtain an estimate
    1252             :                  * of the selectivity of "x = constval".  We use a calculation
    1253             :                  * similar to what var_eq_const() does for a non-MCV constant,
    1254             :                  * ie, estimate that all distinct non-MCV values occur equally
    1255             :                  * often.  But multiplication by "1.0 - sumcommon - nullfrac"
    1256             :                  * will be done by our caller, so we shouldn't do that here.
    1257             :                  * Therefore we can't try to clamp the estimate by reference
    1258             :                  * to the least common MCV; the result would be too small.
    1259             :                  *
    1260             :                  * Note: since this is effectively assuming that constval
    1261             :                  * isn't an MCV, it's logically dubious if constval in fact is
    1262             :                  * one.  But we have to apply *some* correction for equality,
    1263             :                  * and anyway we cannot tell if constval is an MCV, since we
    1264             :                  * don't have a suitable equality operator at hand.
    1265             :                  */
    1266       89972 :                 if (i == 1 || isgt == iseq)
    1267             :                 {
    1268             :                     double      otherdistinct;
    1269             :                     bool        isdefault;
    1270             :                     AttStatsSlot mcvslot;
    1271             : 
    1272             :                     /* Get estimated number of distinct values */
    1273       38862 :                     otherdistinct = get_variable_numdistinct(vardata,
    1274             :                                                              &isdefault);
    1275             : 
    1276             :                     /* Subtract off the number of known MCVs */
    1277       38862 :                     if (get_attstatsslot(&mcvslot, vardata->statsTuple,
    1278             :                                          STATISTIC_KIND_MCV, InvalidOid,
    1279             :                                          ATTSTATSSLOT_NUMBERS))
    1280             :                     {
    1281        4096 :                         otherdistinct -= mcvslot.nnumbers;
    1282        4096 :                         free_attstatsslot(&mcvslot);
    1283             :                     }
    1284             : 
    1285             :                     /* If result doesn't seem sane, leave eq_selec at 0 */
    1286       38862 :                     if (otherdistinct > 1)
    1287       38820 :                         eq_selec = 1.0 / otherdistinct;
    1288             :                 }
    1289             : 
    1290             :                 /*
    1291             :                  * Convert the constant and the two nearest bin boundary
    1292             :                  * values to a uniform comparison scale, and do a linear
    1293             :                  * interpolation within this bin.
    1294             :                  */
    1295       89972 :                 if (convert_to_scalar(constval, consttype, collation,
    1296             :                                       &val,
    1297       89972 :                                       sslot.values[i - 1], sslot.values[i],
    1298             :                                       vardata->vartype,
    1299             :                                       &low, &high))
    1300             :                 {
    1301       89972 :                     if (high <= low)
    1302             :                     {
    1303             :                         /* cope if bin boundaries appear identical */
    1304           0 :                         binfrac = 0.5;
    1305             :                     }
    1306       89972 :                     else if (val <= low)
    1307       19872 :                         binfrac = 0.0;
    1308       70100 :                     else if (val >= high)
    1309        3200 :                         binfrac = 1.0;
    1310             :                     else
    1311             :                     {
    1312       66900 :                         binfrac = (val - low) / (high - low);
    1313             : 
    1314             :                         /*
    1315             :                          * Watch out for the possibility that we got a NaN or
    1316             :                          * Infinity from the division.  This can happen
    1317             :                          * despite the previous checks, if for example "low"
    1318             :                          * is -Infinity.
    1319             :                          */
    1320       66900 :                         if (isnan(binfrac) ||
    1321       66900 :                             binfrac < 0.0 || binfrac > 1.0)
    1322           0 :                             binfrac = 0.5;
    1323             :                     }
    1324             :                 }
    1325             :                 else
    1326             :                 {
    1327             :                     /*
    1328             :                      * Ideally we'd produce an error here, on the grounds that
    1329             :                      * the given operator shouldn't have scalarXXsel
    1330             :                      * registered as its selectivity func unless we can deal
    1331             :                      * with its operand types.  But currently, all manner of
    1332             :                      * stuff is invoking scalarXXsel, so give a default
    1333             :                      * estimate until that can be fixed.
    1334             :                      */
    1335           0 :                     binfrac = 0.5;
    1336             :                 }
    1337             : 
    1338             :                 /*
    1339             :                  * Now, compute the overall selectivity across the values
    1340             :                  * represented by the histogram.  We have i-1 full bins below the
    1341             :                  * constant, plus a binfrac fraction of the bin that contains it.
    1342             :                  */
    1343       89972 :                 histfrac = (double) (i - 1) + binfrac;
    1344       89972 :                 histfrac /= (double) (sslot.nvalues - 1);
    1345             : 
    1346             :                 /*
    1347             :                  * At this point, histfrac is an estimate of the fraction of
    1348             :                  * the population represented by the histogram that satisfies
    1349             :                  * "x <= constval".  Somewhat remarkably, this statement is
    1350             :                  * true regardless of which operator we were doing the probes
    1351             :                  * with, so long as convert_to_scalar() delivers reasonable
    1352             :                  * results.  If the probe constant is equal to some histogram
    1353             :                  * entry, we would have considered the bin to the left of that
    1354             :                  * entry if probing with "<" or ">=", or the bin to the right
    1355             :                  * if probing with "<=" or ">"; but binfrac would have come
    1356             :                  * out as 1.0 in the first case and 0.0 in the second, leading
    1357             :                  * to the same histfrac in either case.  For probe constants
    1358             :                  * between histogram entries, we find the same bin and get the
    1359             :                  * same estimate with any operator.
    1360             :                  *
    1361             :                  * The fact that the estimate corresponds to "x <= constval"
    1362             :                  * and not "x < constval" is because of the way that ANALYZE
    1363             :                  * constructs the histogram: each entry is, effectively, the
    1364             :                  * rightmost value in its sample bucket.  So selectivity
    1365             :                  * values that are exact multiples of 1/(histogram_size-1)
    1366             :                  * should be understood as estimates including a histogram
    1367             :                  * entry plus everything to its left.
    1368             :                  *
    1369             :                  * However, that breaks down for the first histogram entry,
    1370             :                  * which necessarily is the leftmost value in its sample
    1371             :                  * bucket.  That means the first histogram bin is slightly
    1372             :                  * narrower than the rest, by an amount equal to eq_selec.
    1373             :                  * Another way to say that is that we want "x <= leftmost" to
    1374             :                  * be estimated as eq_selec not zero.  So, if we're dealing
    1375             :                  * with the first bin (i==1), rescale to make that true while
    1376             :                  * adjusting the rest of that bin linearly.
    1377             :                  */
    1378       89972 :                 if (i == 1)
    1379       16672 :                     histfrac += eq_selec * (1.0 - binfrac);
    1380             : 
    1381             :                 /*
    1382             :                  * "x <= constval" is good if we want an estimate for "<=" or
    1383             :                  * ">", but if we are estimating for "<" or ">=", we now need
    1384             :                  * to decrease the estimate by eq_selec.
    1385             :                  */
    1386       89972 :                 if (isgt == iseq)
    1387       29502 :                     histfrac -= eq_selec;
    1388             :             }
    1389             : 
    1390             :             /*
    1391             :              * Now the estimate is finished for "<" and "<=" cases.  If we are
    1392             :              * estimating for ">" or ">=", flip it.
    1393             :              */
    1394      223806 :             hist_selec = isgt ? (1.0 - histfrac) : histfrac;
    1395             : 
    1396             :             /*
    1397             :              * The histogram boundaries are only approximate to begin with,
    1398             :              * and may well be out of date anyway.  Therefore, don't believe
    1399             :              * extremely small or large selectivity estimates --- unless we
    1400             :              * got actual current endpoint values from the table, in which
    1401             :              * case just do the usual sanity clamp.  Somewhat arbitrarily, we
    1402             :              * set the cutoff for other cases at a hundredth of the histogram
    1403             :              * resolution.
    1404             :              */
    1405      223806 :             if (have_end)
    1406      127314 :                 CLAMP_PROBABILITY(hist_selec);
    1407             :             else
    1408             :             {
    1409       96492 :                 double      cutoff = 0.01 / (double) (sslot.nvalues - 1);
    1410             : 
    1411       96492 :                 if (hist_selec < cutoff)
    1412       34080 :                     hist_selec = cutoff;
    1413       62412 :                 else if (hist_selec > 1.0 - cutoff)
    1414       22674 :                     hist_selec = 1.0 - cutoff;
    1415             :             }
    1416             :         }
    1417         184 :         else if (sslot.nvalues > 1)
    1418             :         {
    1419             :             /*
    1420             :              * If we get here, we have a histogram but it's not sorted the way
    1421             :              * we want.  Do a brute-force search to see how many of the
    1422             :              * entries satisfy the comparison condition, and take that
    1423             :              * fraction as our estimate.  (This is identical to the inner loop
    1424             :              * of histogram_selectivity; maybe share code?)
    1425             :              */
    1426         184 :             LOCAL_FCINFO(fcinfo, 2);
    1427         184 :             int         nmatch = 0;
    1428             : 
    1429         184 :             InitFunctionCallInfoData(*fcinfo, opproc, 2, collation,
    1430             :                                      NULL, NULL);
    1431         184 :             fcinfo->args[0].isnull = false;
    1432         184 :             fcinfo->args[1].isnull = false;
    1433         184 :             fcinfo->args[1].value = constval;
    1434      962508 :             for (int i = 0; i < sslot.nvalues; i++)
    1435             :             {
    1436             :                 Datum       fresult;
    1437             : 
    1438      962324 :                 fcinfo->args[0].value = sslot.values[i];
    1439      962324 :                 fcinfo->isnull = false;
    1440      962324 :                 fresult = FunctionCallInvoke(fcinfo);
    1441      962324 :                 if (!fcinfo->isnull && DatumGetBool(fresult))
    1442        2228 :                     nmatch++;
    1443             :             }
    1444         184 :             hist_selec = ((double) nmatch) / ((double) sslot.nvalues);
    1445             : 
    1446             :             /*
    1447             :              * As above, clamp to a hundredth of the histogram resolution.
    1448             :              * This case is surely even less trustworthy than the normal one,
    1449             :              * so we shouldn't believe exact 0 or 1 selectivity.  (Maybe the
    1450             :              * clamp should be more restrictive in this case?)
    1451             :              */
    1452             :             {
    1453         184 :                 double      cutoff = 0.01 / (double) (sslot.nvalues - 1);
    1454             : 
    1455         184 :                 if (hist_selec < cutoff)
    1456          12 :                     hist_selec = cutoff;
    1457         172 :                 else if (hist_selec > 1.0 - cutoff)
    1458          12 :                     hist_selec = 1.0 - cutoff;
    1459             :             }
    1460             :         }
    1461             : 
    1462      223990 :         free_attstatsslot(&sslot);
    1463             :     }
    1464             : 
    1465      356530 :     return hist_selec;
    1466             : }
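/*
 * Worked example of the interpolation above, using made-up statistics.
 * Suppose the histogram boundary values are {0, 10, 20, ..., 100} (11
 * entries, hence 10 bins) and we are estimating "x <= 25", so isgt = false
 * and iseq = true.  The binary search stops with lobound = i = 3, since
 * values[2] = 20 <= 25 <= values[3] = 30, and then
 *
 *      binfrac  = (25 - 20) / (30 - 20)      = 0.5
 *      histfrac = ((3 - 1) + 0.5) / (11 - 1) = 0.25
 *
 * Because i != 1 and isgt != iseq, no eq_selec adjustment applies, so
 * hist_selec = histfrac = 0.25 -- matching the intuition that a quarter of
 * a uniformly distributed 0..100 population satisfies x <= 25.
 */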
    1467             : 
    1468             : /*
    1469             :  * Common wrapper function for the selectivity estimators that simply
    1470             :  * invoke scalarineqsel().
    1471             :  */
    1472             : static Datum
    1473       51096 : scalarineqsel_wrapper(PG_FUNCTION_ARGS, bool isgt, bool iseq)
    1474             : {
    1475       51096 :     PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0);
    1476       51096 :     Oid         operator = PG_GETARG_OID(1);
    1477       51096 :     List       *args = (List *) PG_GETARG_POINTER(2);
    1478       51096 :     int         varRelid = PG_GETARG_INT32(3);
    1479       51096 :     Oid         collation = PG_GET_COLLATION();
    1480             :     VariableStatData vardata;
    1481             :     Node       *other;
    1482             :     bool        varonleft;
    1483             :     Datum       constval;
    1484             :     Oid         consttype;
    1485             :     double      selec;
    1486             : 
    1487             :     /*
    1488             :      * If expression is not variable op something or something op variable,
    1489             :      * then punt and return a default estimate.
    1490             :      */
    1491       51096 :     if (!get_restriction_variable(root, args, varRelid,
    1492             :                                   &vardata, &other, &varonleft))
    1493         644 :         PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    1494             : 
    1495             :     /*
    1496             :      * Can't do anything useful if the something is not a constant, either.
    1497             :      */
    1498       50452 :     if (!IsA(other, Const))
    1499             :     {
    1500        2842 :         ReleaseVariableStats(vardata);
    1501        2842 :         PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    1502             :     }
    1503             : 
    1504             :     /*
    1505             :      * If the constant is NULL, assume operator is strict and return zero, ie,
    1506             :      * operator will never return TRUE.
    1507             :      */
    1508       47610 :     if (((Const *) other)->constisnull)
    1509             :     {
    1510          66 :         ReleaseVariableStats(vardata);
    1511          66 :         PG_RETURN_FLOAT8(0.0);
    1512             :     }
    1513       47544 :     constval = ((Const *) other)->constvalue;
    1514       47544 :     consttype = ((Const *) other)->consttype;
    1515             : 
    1516             :     /*
    1517             :      * Force the var to be on the left to simplify logic in scalarineqsel.
    1518             :      */
    1519       47544 :     if (!varonleft)
    1520             :     {
    1521         384 :         operator = get_commutator(operator);
    1522         384 :         if (!operator)
    1523             :         {
    1524             :             /* Use default selectivity (should we raise an error instead?) */
    1525           0 :             ReleaseVariableStats(vardata);
    1526           0 :             PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    1527             :         }
    1528         384 :         isgt = !isgt;
    1529             :     }
    1530             : 
    1531             :     /* The rest of the work is done by scalarineqsel(). */
    1532       47544 :     selec = scalarineqsel(root, operator, isgt, iseq, collation,
    1533             :                           &vardata, constval, consttype);
    1534             : 
    1535       47544 :     ReleaseVariableStats(vardata);
    1536             : 
    1537       47544 :     PG_RETURN_FLOAT8((float8) selec);
    1538             : }
    1539             : 
    1540             : /*
    1541             :  *      scalarltsel     - Selectivity of "<" for scalars.
    1542             :  */
    1543             : Datum
    1544       15206 : scalarltsel(PG_FUNCTION_ARGS)
    1545             : {
    1546       15206 :     return scalarineqsel_wrapper(fcinfo, false, false);
    1547             : }
    1548             : 
    1549             : /*
    1550             :  *      scalarlesel     - Selectivity of "<=" for scalars.
    1551             :  */
    1552             : Datum
    1553        4636 : scalarlesel(PG_FUNCTION_ARGS)
    1554             : {
    1555        4636 :     return scalarineqsel_wrapper(fcinfo, false, true);
    1556             : }
    1557             : 
    1558             : /*
    1559             :  *      scalargtsel     - Selectivity of ">" for scalars.
    1560             :  */
    1561             : Datum
    1562       15660 : scalargtsel(PG_FUNCTION_ARGS)
    1563             : {
    1564       15660 :     return scalarineqsel_wrapper(fcinfo, true, false);
    1565             : }
    1566             : 
    1567             : /*
    1568             :  *      scalargesel     - Selectivity of ">=" for scalars.
    1569             :  */
    1570             : Datum
    1571       15594 : scalargesel(PG_FUNCTION_ARGS)
    1572             : {
    1573       15594 :     return scalarineqsel_wrapper(fcinfo, true, true);
    1574             : }
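/*
 * Summary of how the four wrappers above map onto scalarineqsel_wrapper's
 * flags:
 *
 *      operator        isgt        iseq
 *      --------        -----       -----
 *      <               false       false
 *      <=              false       true
 *      >               true        false
 *      >=              true        true
 *
 * A clause written as "CONST OP VAR" is commuted first (flipping isgt) so
 * that the variable ends up on the left before scalarineqsel() is called.
 */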
    1575             : 
    1576             : /*
    1577             :  *      boolvarsel      - Selectivity of Boolean variable.
    1578             :  *
    1579             :  * This can actually be called on any boolean-valued expression.  If it
    1580             :  * involves only Vars of the specified relation, and if there are statistics
    1581             :  * about the Var or expression (the latter is possible if it's indexed) then
    1582             :  * we'll produce a real estimate; otherwise it's just a default.
    1583             :  */
    1584             : Selectivity
    1585       56898 : boolvarsel(PlannerInfo *root, Node *arg, int varRelid)
    1586             : {
    1587             :     VariableStatData vardata;
    1588             :     double      selec;
    1589             : 
    1590       56898 :     examine_variable(root, arg, varRelid, &vardata);
    1591       56898 :     if (HeapTupleIsValid(vardata.statsTuple))
    1592             :     {
    1593             :         /*
    1594             :          * A boolean variable V is equivalent to the clause V = 't', so we
    1595             :          * compute the selectivity as if that is what we have.
    1596             :          */
    1597       35900 :         selec = var_eq_const(&vardata, BooleanEqualOperator, InvalidOid,
    1598             :                              BoolGetDatum(true), false, true, false);
    1599             :     }
    1600       20998 :     else if (is_funcclause(arg))
    1601             :     {
    1602             :         /*
    1603             :          * If we have no stats and it's a function call, estimate 0.3333333.
    1604             :          * This seems a pretty unprincipled choice, but Postgres has been
    1605             :          * using that estimate for function calls since 1992.  The hoariness
    1606             :          * of this behavior suggests that we should not be in too much of a
    1607             :          * hurry to use another value.
    1608             :          */
    1609       12460 :         selec = 0.3333333;
    1610             :     }
    1611             :     else
    1612             :     {
    1613             :         /* Otherwise, the default estimate is 0.5 */
    1614        8538 :         selec = 0.5;
    1615             :     }
    1616       56898 :     ReleaseVariableStats(vardata);
    1617       56898 :     return selec;
    1618             : }
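/*
 * Illustrative example (made-up numbers): for a clause that is simply a
 * boolean column "flag", this estimates it as "flag = true", so if the MCV
 * statistics record a frequency of 0.7 for the value 'true' the result is
 * roughly 0.7.  Without statistics, a bare function call such as f(x) gets
 * the historical 0.3333333 estimate, and any other boolean expression
 * defaults to 0.5.
 */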
    1619             : 
    1620             : /*
    1621             :  *      booltestsel     - Selectivity of BooleanTest Node.
    1622             :  */
    1623             : Selectivity
    1624         902 : booltestsel(PlannerInfo *root, BoolTestType booltesttype, Node *arg,
    1625             :             int varRelid, JoinType jointype, SpecialJoinInfo *sjinfo)
    1626             : {
    1627             :     VariableStatData vardata;
    1628             :     double      selec;
    1629             : 
    1630         902 :     examine_variable(root, arg, varRelid, &vardata);
    1631             : 
    1632         902 :     if (HeapTupleIsValid(vardata.statsTuple))
    1633             :     {
    1634             :         Form_pg_statistic stats;
    1635             :         double      freq_null;
    1636             :         AttStatsSlot sslot;
    1637             : 
    1638          12 :         stats = (Form_pg_statistic) GETSTRUCT(vardata.statsTuple);
    1639          12 :         freq_null = stats->stanullfrac;
    1640             : 
    1641          12 :         if (get_attstatsslot(&sslot, vardata.statsTuple,
    1642             :                              STATISTIC_KIND_MCV, InvalidOid,
    1643             :                              ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS)
    1644          12 :             && sslot.nnumbers > 0)
    1645          12 :         {
    1646             :             double      freq_true;
    1647             :             double      freq_false;
    1648             : 
    1649             :             /*
    1650             :              * Get first MCV frequency and derive frequency for true.
    1651             :              */
    1652          12 :             if (DatumGetBool(sslot.values[0]))
    1653           0 :                 freq_true = sslot.numbers[0];
    1654             :             else
    1655          12 :                 freq_true = 1.0 - sslot.numbers[0] - freq_null;
    1656             : 
    1657             :             /*
    1658             :              * Next derive frequency for false. Then use these as appropriate
    1659             :              * to derive frequency for each case.
    1660             :              */
    1661          12 :             freq_false = 1.0 - freq_true - freq_null;
    1662             : 
    1663          12 :             switch (booltesttype)
    1664             :             {
    1665           0 :                 case IS_UNKNOWN:
    1666             :                     /* select only NULL values */
    1667           0 :                     selec = freq_null;
    1668           0 :                     break;
    1669           0 :                 case IS_NOT_UNKNOWN:
    1670             :                     /* select non-NULL values */
    1671           0 :                     selec = 1.0 - freq_null;
    1672           0 :                     break;
    1673          12 :                 case IS_TRUE:
    1674             :                     /* select only TRUE values */
    1675          12 :                     selec = freq_true;
    1676          12 :                     break;
    1677           0 :                 case IS_NOT_TRUE:
    1678             :                     /* select non-TRUE values */
    1679           0 :                     selec = 1.0 - freq_true;
    1680           0 :                     break;
    1681           0 :                 case IS_FALSE:
    1682             :                     /* select only FALSE values */
    1683           0 :                     selec = freq_false;
    1684           0 :                     break;
    1685           0 :                 case IS_NOT_FALSE:
    1686             :                     /* select non-FALSE values */
    1687           0 :                     selec = 1.0 - freq_false;
    1688           0 :                     break;
    1689           0 :                 default:
    1690           0 :                     elog(ERROR, "unrecognized booltesttype: %d",
    1691             :                          (int) booltesttype);
    1692             :                     selec = 0.0;    /* Keep compiler quiet */
    1693             :                     break;
    1694             :             }
    1695             : 
    1696          12 :             free_attstatsslot(&sslot);
    1697             :         }
    1698             :         else
    1699             :         {
    1700             :             /*
    1701             :              * No most-common-value info available. Still have null fraction
    1702             :              * information, so use it for IS [NOT] UNKNOWN. Otherwise adjust
    1703             :              * for null fraction and assume a 50-50 split of TRUE and FALSE.
    1704             :              */
    1705           0 :             switch (booltesttype)
    1706             :             {
    1707           0 :                 case IS_UNKNOWN:
    1708             :                     /* select only NULL values */
    1709           0 :                     selec = freq_null;
    1710           0 :                     break;
    1711           0 :                 case IS_NOT_UNKNOWN:
    1712             :                     /* select non-NULL values */
    1713           0 :                     selec = 1.0 - freq_null;
    1714           0 :                     break;
    1715           0 :                 case IS_TRUE:
    1716             :                 case IS_FALSE:
    1717             :                     /* Assume we select half of the non-NULL values */
    1718           0 :                     selec = (1.0 - freq_null) / 2.0;
    1719           0 :                     break;
    1720           0 :                 case IS_NOT_TRUE:
    1721             :                 case IS_NOT_FALSE:
    1722             :                     /* Assume we select NULLs plus half of the non-NULLs */
    1723             :                     /* equiv. to freq_null + (1.0 - freq_null) / 2.0 */
    1724           0 :                     selec = (freq_null + 1.0) / 2.0;
    1725           0 :                     break;
    1726           0 :                 default:
    1727           0 :                     elog(ERROR, "unrecognized booltesttype: %d",
    1728             :                          (int) booltesttype);
    1729             :                     selec = 0.0;    /* Keep compiler quiet */
    1730             :                     break;
    1731             :             }
    1732             :         }
    1733             :     }
    1734             :     else
    1735             :     {
    1736             :         /*
    1737             :          * If we can't get variable statistics for the argument, perhaps
    1738             :          * clause_selectivity can do something with it.  We ignore the
    1739             :          * possibility of a NULL value when using clause_selectivity, and just
    1740             :          * assume the value is either TRUE or FALSE.
    1741             :          */
    1742         890 :         switch (booltesttype)
    1743             :         {
    1744          48 :             case IS_UNKNOWN:
    1745          48 :                 selec = DEFAULT_UNK_SEL;
    1746          48 :                 break;
    1747         108 :             case IS_NOT_UNKNOWN:
    1748         108 :                 selec = DEFAULT_NOT_UNK_SEL;
    1749         108 :                 break;
    1750         252 :             case IS_TRUE:
    1751             :             case IS_NOT_FALSE:
    1752         252 :                 selec = (double) clause_selectivity(root, arg,
    1753             :                                                     varRelid,
    1754             :                                                     jointype, sjinfo);
    1755         252 :                 break;
    1756         482 :             case IS_FALSE:
    1757             :             case IS_NOT_TRUE:
    1758         482 :                 selec = 1.0 - (double) clause_selectivity(root, arg,
    1759             :                                                           varRelid,
    1760             :                                                           jointype, sjinfo);
    1761         482 :                 break;
    1762           0 :             default:
    1763           0 :                 elog(ERROR, "unrecognized booltesttype: %d",
    1764             :                      (int) booltesttype);
    1765             :                 selec = 0.0;    /* Keep compiler quiet */
    1766             :                 break;
    1767             :         }
    1768             :     }
    1769             : 
    1770         902 :     ReleaseVariableStats(vardata);
    1771             : 
    1772             :     /* result should be in range, but make sure... */
    1773         902 :     CLAMP_PROBABILITY(selec);
    1774             : 
    1775         902 :     return (Selectivity) selec;
    1776             : }
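/*
 * Worked example of the MCV-based branch above, with illustrative numbers.
 * Suppose stanullfrac = 0.10 and the first MCV entry is 'false' with
 * frequency 0.60.  Then
 *
 *      freq_true  = 1.0 - 0.60 - 0.10 = 0.30
 *      freq_false = 1.0 - 0.30 - 0.10 = 0.60
 *
 * giving IS TRUE -> 0.30, IS NOT TRUE -> 0.70, IS FALSE -> 0.60,
 * IS UNKNOWN -> 0.10, and IS NOT UNKNOWN -> 0.90.
 */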
    1777             : 
    1778             : /*
    1779             :  *      nulltestsel     - Selectivity of NullTest Node.
    1780             :  */
    1781             : Selectivity
    1782       17516 : nulltestsel(PlannerInfo *root, NullTestType nulltesttype, Node *arg,
    1783             :             int varRelid, JoinType jointype, SpecialJoinInfo *sjinfo)
    1784             : {
    1785             :     VariableStatData vardata;
    1786             :     double      selec;
    1787             : 
    1788       17516 :     examine_variable(root, arg, varRelid, &vardata);
    1789             : 
    1790       17516 :     if (HeapTupleIsValid(vardata.statsTuple))
    1791             :     {
    1792             :         Form_pg_statistic stats;
    1793             :         double      freq_null;
    1794             : 
    1795        9782 :         stats = (Form_pg_statistic) GETSTRUCT(vardata.statsTuple);
    1796        9782 :         freq_null = stats->stanullfrac;
    1797             : 
    1798        9782 :         switch (nulltesttype)
    1799             :         {
    1800        7260 :             case IS_NULL:
    1801             : 
    1802             :                 /*
    1803             :                  * Use freq_null directly.
    1804             :                  */
    1805        7260 :                 selec = freq_null;
    1806        7260 :                 break;
    1807        2522 :             case IS_NOT_NULL:
    1808             : 
    1809             :                 /*
    1810             :                  * Select not unknown (not null) values. Calculate from
    1811             :                  * freq_null.
    1812             :                  */
    1813        2522 :                 selec = 1.0 - freq_null;
    1814        2522 :                 break;
    1815           0 :             default:
    1816           0 :                 elog(ERROR, "unrecognized nulltesttype: %d",
    1817             :                      (int) nulltesttype);
    1818             :                 return (Selectivity) 0; /* keep compiler quiet */
    1819             :         }
    1820             :     }
    1821        7734 :     else if (vardata.var && IsA(vardata.var, Var) &&
    1822        6974 :              ((Var *) vardata.var)->varattno < 0)
    1823             :     {
    1824             :         /*
    1825             :          * There are no stats for system columns, but we know they are never
    1826             :          * NULL.
    1827             :          */
    1828         104 :         selec = (nulltesttype == IS_NULL) ? 0.0 : 1.0;
    1829             :     }
    1830             :     else
    1831             :     {
    1832             :         /*
    1833             :          * No ANALYZE stats available, so make a guess
    1834             :          */
    1835        7630 :         switch (nulltesttype)
    1836             :         {
    1837        2098 :             case IS_NULL:
    1838        2098 :                 selec = DEFAULT_UNK_SEL;
    1839        2098 :                 break;
    1840        5532 :             case IS_NOT_NULL:
    1841        5532 :                 selec = DEFAULT_NOT_UNK_SEL;
    1842        5532 :                 break;
    1843           0 :             default:
    1844           0 :                 elog(ERROR, "unrecognized nulltesttype: %d",
    1845             :                      (int) nulltesttype);
    1846             :                 return (Selectivity) 0; /* keep compiler quiet */
    1847             :         }
    1848             :     }
    1849             : 
    1850       17516 :     ReleaseVariableStats(vardata);
    1851             : 
    1852             :     /* result should be in range, but make sure... */
    1853       17516 :     CLAMP_PROBABILITY(selec);
    1854             : 
    1855       17516 :     return (Selectivity) selec;
    1856             : }
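                     :
                     : /*
                     :  * Worked example (illustrative, with assumed numbers): if ANALYZE recorded
                     :  * stanullfrac = 0.25 for a column, "col IS NULL" is estimated at 0.25 and
                     :  * "col IS NOT NULL" at 0.75.  With no stats, the defaults above apply; in
                     :  * recent sources DEFAULT_UNK_SEL is 0.005 and DEFAULT_NOT_UNK_SEL is 0.995,
                     :  * though those constants are defined elsewhere and could change.
                     :  */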
    1857             : 
    1858             : /*
    1859             :  * strip_array_coercion - strip binary-compatible relabeling from an array expr
    1860             :  *
    1861             :  * For array values, the parser normally generates ArrayCoerceExpr conversions,
    1862             :  * but it seems possible that RelabelType might show up.  Also, the planner
    1863             :  * is not currently careful about collapsing stacked ArrayCoerceExpr nodes,
    1864             :  * so we need to be ready to deal with more than one level.
    1865             :  */
    1866             : static Node *
    1867      130888 : strip_array_coercion(Node *node)
    1868             : {
    1869             :     for (;;)
    1870             :     {
    1871      131000 :         if (node && IsA(node, ArrayCoerceExpr))
    1872         112 :         {
    1873        3022 :             ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node;
    1874             : 
    1875             :             /*
    1876             :              * If the per-element expression is just a RelabelType on top of
    1877             :              * CaseTestExpr, then we know it's a binary-compatible relabeling.
    1878             :              */
    1879        3022 :             if (IsA(acoerce->elemexpr, RelabelType) &&
    1880         112 :                 IsA(((RelabelType *) acoerce->elemexpr)->arg, CaseTestExpr))
    1881         112 :                 node = (Node *) acoerce->arg;
    1882             :             else
    1883             :                 break;
    1884             :         }
    1885      127978 :         else if (node && IsA(node, RelabelType))
    1886             :         {
    1887             :             /* We don't really expect this case, but may as well cope */
    1888           0 :             node = (Node *) ((RelabelType *) node)->arg;
    1889             :         }
    1890             :         else
    1891             :             break;
    1892             :     }
    1893      130888 :     return node;
    1894             : }
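                     :
                     : /*
                     :  * Illustrative example: given an input tree of the form
                     :  * ArrayCoerceExpr(ArrayCoerceExpr(Var)) in which each coercion's elemexpr
                     :  * is RelabelType(CaseTestExpr), the loop above peels both coercion levels
                     :  * and returns the underlying Var; a coercion whose elemexpr does real
                     :  * per-element work is left in place.
                     :  */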
    1895             : 
    1896             : /*
    1897             :  *      scalararraysel      - Selectivity of ScalarArrayOpExpr Node.
    1898             :  */
    1899             : Selectivity
    1900       22960 : scalararraysel(PlannerInfo *root,
    1901             :                ScalarArrayOpExpr *clause,
    1902             :                bool is_join_clause,
    1903             :                int varRelid,
    1904             :                JoinType jointype,
    1905             :                SpecialJoinInfo *sjinfo)
    1906             : {
    1907       22960 :     Oid         operator = clause->opno;
    1908       22960 :     bool        useOr = clause->useOr;
    1909       22960 :     bool        isEquality = false;
    1910       22960 :     bool        isInequality = false;
    1911             :     Node       *leftop;
    1912             :     Node       *rightop;
    1913             :     Oid         nominal_element_type;
    1914             :     Oid         nominal_element_collation;
    1915             :     TypeCacheEntry *typentry;
    1916             :     RegProcedure oprsel;
    1917             :     FmgrInfo    oprselproc;
    1918             :     Selectivity s1;
    1919             :     Selectivity s1disjoint;
    1920             : 
    1921             :     /* First, deconstruct the expression */
    1922             :     Assert(list_length(clause->args) == 2);
    1923       22960 :     leftop = (Node *) linitial(clause->args);
    1924       22960 :     rightop = (Node *) lsecond(clause->args);
    1925             : 
    1926             :     /* aggressively reduce both sides to constants */
    1927       22960 :     leftop = estimate_expression_value(root, leftop);
    1928       22960 :     rightop = estimate_expression_value(root, rightop);
    1929             : 
    1930             :     /* get nominal (after relabeling) element type of rightop */
    1931       22960 :     nominal_element_type = get_base_element_type(exprType(rightop));
    1932       22960 :     if (!OidIsValid(nominal_element_type))
    1933           0 :         return (Selectivity) 0.5;   /* probably shouldn't happen */
    1934             :     /* get nominal collation, too, for generating constants */
    1935       22960 :     nominal_element_collation = exprCollation(rightop);
    1936             : 
    1937             :     /* look through any binary-compatible relabeling of rightop */
    1938       22960 :     rightop = strip_array_coercion(rightop);
    1939             : 
    1940             :     /*
    1941             :      * Detect whether the operator is the default equality or inequality
    1942             :      * operator of the array element type.
    1943             :      */
    1944       22960 :     typentry = lookup_type_cache(nominal_element_type, TYPECACHE_EQ_OPR);
    1945       22960 :     if (OidIsValid(typentry->eq_opr))
    1946             :     {
    1947       22956 :         if (operator == typentry->eq_opr)
    1948       19590 :             isEquality = true;
    1949        3366 :         else if (get_negator(operator) == typentry->eq_opr)
    1950        2800 :             isInequality = true;
    1951             :     }
    1952             : 
    1953             :     /*
    1954             :      * If it is equality or inequality, we might be able to estimate this as a
    1955             :      * form of array containment; for instance "const = ANY(column)" can be
    1956             :      * treated as "ARRAY[const] <@ column".  scalararraysel_containment tries
    1957             :      * that, and returns the selectivity estimate if successful, or -1 if not.
    1958             :      */
    1959       22960 :     if ((isEquality || isInequality) && !is_join_clause)
    1960             :     {
    1961       22390 :         s1 = scalararraysel_containment(root, leftop, rightop,
    1962             :                                         nominal_element_type,
    1963             :                                         isEquality, useOr, varRelid);
    1964       22390 :         if (s1 >= 0.0)
    1965         118 :             return s1;
    1966             :     }
    1967             : 
    1968             :     /*
    1969             :      * Look up the underlying operator's selectivity estimator. Punt if it
    1970             :      * hasn't got one.
    1971             :      */
    1972       22842 :     if (is_join_clause)
    1973           0 :         oprsel = get_oprjoin(operator);
    1974             :     else
    1975       22842 :         oprsel = get_oprrest(operator);
    1976       22842 :     if (!oprsel)
    1977           4 :         return (Selectivity) 0.5;
    1978       22838 :     fmgr_info(oprsel, &oprselproc);
    1979             : 
    1980             :     /*
    1981             :      * In the array-containment check above, we must only believe that an
    1982             :      * operator is equality or inequality if it is the default btree equality
    1983             :      * operator (or its negator) for the element type, since those are the
    1984             :      * operators that array containment will use.  But in what follows, we can
    1985             :      * be a little laxer, and also believe that any operators using eqsel() or
    1986             :      * neqsel() as selectivity estimator act like equality or inequality.
    1987             :      */
    1988       22838 :     if (oprsel == F_EQSEL || oprsel == F_EQJOINSEL)
    1989       19668 :         isEquality = true;
    1990        3170 :     else if (oprsel == F_NEQSEL || oprsel == F_NEQJOINSEL)
    1991        2690 :         isInequality = true;
    1992             : 
    1993             :     /*
    1994             :      * We consider three cases:
    1995             :      *
    1996             :      * 1. rightop is an Array constant: deconstruct the array, apply the
    1997             :      * operator's selectivity function for each array element, and merge the
    1998             :      * results in the same way that clausesel.c does for AND/OR combinations.
    1999             :      *
    2000             :      * 2. rightop is an ARRAY[] construct: apply the operator's selectivity
    2001             :      * function for each element of the ARRAY[] construct, and merge.
    2002             :      *
    2003             :      * 3. otherwise, make a guess ...
    2004             :      */
    2005       22838 :     if (rightop && IsA(rightop, Const))
    2006       18496 :     {
    2007       18526 :         Datum       arraydatum = ((Const *) rightop)->constvalue;
    2008       18526 :         bool        arrayisnull = ((Const *) rightop)->constisnull;
    2009             :         ArrayType  *arrayval;
    2010             :         int16       elmlen;
    2011             :         bool        elmbyval;
    2012             :         char        elmalign;
    2013             :         int         num_elems;
    2014             :         Datum      *elem_values;
    2015             :         bool       *elem_nulls;
    2016             :         int         i;
    2017             : 
    2018       18526 :         if (arrayisnull)        /* qual can't succeed if null array */
    2019          30 :             return (Selectivity) 0.0;
    2020       18496 :         arrayval = DatumGetArrayTypeP(arraydatum);
    2021       18496 :         get_typlenbyvalalign(ARR_ELEMTYPE(arrayval),
    2022             :                              &elmlen, &elmbyval, &elmalign);
    2023       18496 :         deconstruct_array(arrayval,
    2024             :                           ARR_ELEMTYPE(arrayval),
    2025             :                           elmlen, elmbyval, elmalign,
    2026             :                           &elem_values, &elem_nulls, &num_elems);
    2027             : 
    2028             :         /*
    2029             :          * For generic operators, we assume the probability of success is
    2030             :          * independent for each array element.  But for "= ANY" or "<> ALL",
    2031             :          * if the array elements are distinct (which'd typically be the case)
    2032             :          * then the probabilities are disjoint, and we should just sum them.
    2033             :          *
    2034             :          * If we were being really thorough we would try to confirm that the
    2035             :          * elements are all distinct, but that would be expensive and it
    2036             :          * doesn't seem to be worth the cycles; it would amount to penalizing
    2037             :          * well-written queries in favor of poorly-written ones.  However, we
    2038             :          * do protect ourselves a little bit by checking whether the
    2039             :          * disjointness assumption leads to an impossible (out of range)
    2040             :          * probability; if so, we fall back to the normal calculation.
    2041             :          */
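                     :
                     :         /*
                     :          * Worked example (illustrative, with assumed numbers): for
                     :          * "x = ANY ('{1,2,3}')" with a per-element selectivity of 0.01
                     :          * each, the independent-probability merge gives 1 - (1 - 0.01)^3,
                     :          * about 0.0297, while the disjoint sum gives exactly 0.03; the
                     :          * latter is used as long as it stays within [0, 1].
                     :          */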
    2042       18496 :         s1 = s1disjoint = (useOr ? 0.0 : 1.0);
    2043             : 
    2044       78642 :         for (i = 0; i < num_elems; i++)
    2045             :         {
    2046             :             List       *args;
    2047             :             Selectivity s2;
    2048             : 
    2049       60146 :             args = list_make2(leftop,
    2050             :                               makeConst(nominal_element_type,
    2051             :                                         -1,
    2052             :                                         nominal_element_collation,
    2053             :                                         elmlen,
    2054             :                                         elem_values[i],
    2055             :                                         elem_nulls[i],
    2056             :                                         elmbyval));
    2057       60146 :             if (is_join_clause)
    2058           0 :                 s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
    2059             :                                                       clause->inputcollid,
    2060             :                                                       PointerGetDatum(root),
    2061             :                                                       ObjectIdGetDatum(operator),
    2062             :                                                       PointerGetDatum(args),
    2063             :                                                       Int16GetDatum(jointype),
    2064             :                                                       PointerGetDatum(sjinfo)));
    2065             :             else
    2066       60146 :                 s2 = DatumGetFloat8(FunctionCall4Coll(&oprselproc,
    2067             :                                                       clause->inputcollid,
    2068             :                                                       PointerGetDatum(root),
    2069             :                                                       ObjectIdGetDatum(operator),
    2070             :                                                       PointerGetDatum(args),
    2071             :                                                       Int32GetDatum(varRelid)));
    2072             : 
    2073       60146 :             if (useOr)
    2074             :             {
    2075       51464 :                 s1 = s1 + s2 - s1 * s2;
    2076       51464 :                 if (isEquality)
    2077       50420 :                     s1disjoint += s2;
    2078             :             }
    2079             :             else
    2080             :             {
    2081        8682 :                 s1 = s1 * s2;
    2082        8682 :                 if (isInequality)
    2083        8370 :                     s1disjoint += s2 - 1.0;
    2084             :             }
    2085             :         }
    2086             : 
    2087             :         /* accept disjoint-probability estimate if in range */
    2088       18496 :         if ((useOr ? isEquality : isInequality) &&
    2089       17866 :             s1disjoint >= 0.0 && s1disjoint <= 1.0)
    2090       17836 :             s1 = s1disjoint;
    2091             :     }
    2092        4312 :     else if (rightop && IsA(rightop, ArrayExpr) &&
    2093         384 :              !((ArrayExpr *) rightop)->multidims)
    2094         384 :     {
    2095         384 :         ArrayExpr  *arrayexpr = (ArrayExpr *) rightop;
    2096             :         int16       elmlen;
    2097             :         bool        elmbyval;
    2098             :         ListCell   *l;
    2099             : 
    2100         384 :         get_typlenbyval(arrayexpr->element_typeid,
    2101             :                         &elmlen, &elmbyval);
    2102             : 
    2103             :         /*
    2104             :          * We use the assumption of disjoint probabilities here too, although
    2105             :          * the odds of equal array elements are rather higher if the elements
    2106             :          * are not all constants (which they won't be, else constant folding
    2107             :          * would have reduced the ArrayExpr to a Const).  In this path it's
    2108             :          * critical to have the sanity check on the s1disjoint estimate.
    2109             :          */
    2110         384 :         s1 = s1disjoint = (useOr ? 0.0 : 1.0);
    2111             : 
    2112        1420 :         foreach(l, arrayexpr->elements)
    2113             :         {
    2114        1036 :             Node       *elem = (Node *) lfirst(l);
    2115             :             List       *args;
    2116             :             Selectivity s2;
    2117             : 
    2118             :             /*
    2119             :              * Theoretically, if elem isn't of nominal_element_type we should
    2120             :              * insert a RelabelType, but it seems unlikely that any operator
    2121             :              * estimation function would really care ...
    2122             :              */
    2123        1036 :             args = list_make2(leftop, elem);
    2124        1036 :             if (is_join_clause)
    2125           0 :                 s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
    2126             :                                                       clause->inputcollid,
    2127             :                                                       PointerGetDatum(root),
    2128             :                                                       ObjectIdGetDatum(operator),
    2129             :                                                       PointerGetDatum(args),
    2130             :                                                       Int16GetDatum(jointype),
    2131             :                                                       PointerGetDatum(sjinfo)));
    2132             :             else
    2133        1036 :                 s2 = DatumGetFloat8(FunctionCall4Coll(&oprselproc,
    2134             :                                                       clause->inputcollid,
    2135             :                                                       PointerGetDatum(root),
    2136             :                                                       ObjectIdGetDatum(operator),
    2137             :                                                       PointerGetDatum(args),
    2138             :                                                       Int32GetDatum(varRelid)));
    2139             : 
    2140        1036 :             if (useOr)
    2141             :             {
    2142        1036 :                 s1 = s1 + s2 - s1 * s2;
    2143        1036 :                 if (isEquality)
    2144        1036 :                     s1disjoint += s2;
    2145             :             }
    2146             :             else
    2147             :             {
    2148           0 :                 s1 = s1 * s2;
    2149           0 :                 if (isInequality)
    2150           0 :                     s1disjoint += s2 - 1.0;
    2151             :             }
    2152             :         }
    2153             : 
    2154             :         /* accept disjoint-probability estimate if in range */
    2155         384 :         if ((useOr ? isEquality : isInequality) &&
    2156         384 :             s1disjoint >= 0.0 && s1disjoint <= 1.0)
    2157         384 :             s1 = s1disjoint;
    2158             :     }
    2159             :     else
    2160             :     {
    2161             :         CaseTestExpr *dummyexpr;
    2162             :         List       *args;
    2163             :         Selectivity s2;
    2164             :         int         i;
    2165             : 
    2166             :         /*
    2167             :          * We need a dummy rightop to pass to the operator selectivity
    2168             :          * routine.  It can be pretty much anything that doesn't look like a
    2169             :          * constant; CaseTestExpr is a convenient choice.
    2170             :          */
    2171        3928 :         dummyexpr = makeNode(CaseTestExpr);
    2172        3928 :         dummyexpr->typeId = nominal_element_type;
    2173        3928 :         dummyexpr->typeMod = -1;
    2174        3928 :         dummyexpr->collation = clause->inputcollid;
    2175        3928 :         args = list_make2(leftop, dummyexpr);
    2176        3928 :         if (is_join_clause)
    2177           0 :             s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
    2178             :                                                   clause->inputcollid,
    2179             :                                                   PointerGetDatum(root),
    2180             :                                                   ObjectIdGetDatum(operator),
    2181             :                                                   PointerGetDatum(args),
    2182             :                                                   Int16GetDatum(jointype),
    2183             :                                                   PointerGetDatum(sjinfo)));
    2184             :         else
    2185        3928 :             s2 = DatumGetFloat8(FunctionCall4Coll(&oprselproc,
    2186             :                                                   clause->inputcollid,
    2187             :                                                   PointerGetDatum(root),
    2188             :                                                   ObjectIdGetDatum(operator),
    2189             :                                                   PointerGetDatum(args),
    2190             :                                                   Int32GetDatum(varRelid)));
    2191        3928 :         s1 = useOr ? 0.0 : 1.0;
    2192             : 
    2193             :         /*
    2194             :          * Arbitrarily assume 10 elements in the eventual array value (see
    2195             :          * also estimate_array_length).  We don't risk an assumption of
    2196             :          * disjoint probabilities here.
    2197             :          */
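                     :
                     :         /*
                     :          * Worked example (illustrative, with assumed numbers): if the
                     :          * underlying estimator returns s2 = 0.005 (a typical default when
                     :          * no stats are available), ten iterations of the OR merge give
                     :          * 1 - (1 - 0.005)^10, about 0.049.
                     :          */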
    2198       43208 :         for (i = 0; i < 10; i++)
    2199             :         {
    2200       39280 :             if (useOr)
    2201       39280 :                 s1 = s1 + s2 - s1 * s2;
    2202             :             else
    2203           0 :                 s1 = s1 * s2;
    2204             :         }
    2205             :     }
    2206             : 
    2207             :     /* result should be in range, but make sure... */
    2208       22808 :     CLAMP_PROBABILITY(s1);
    2209             : 
    2210       22808 :     return s1;
    2211             : }
    2212             : 
    2213             : /*
    2214             :  * Estimate number of elements in the array yielded by an expression.
    2215             :  *
    2216             :  * Note: the result is integral, but we use "double" to avoid overflow
    2217             :  * concerns.  Most callers will use it in double-type expressions anyway.
    2218             :  *
    2219             :  * Note: in some code paths root can be passed as NULL, resulting in
    2220             :  * slightly worse estimates.
    2221             :  */
    2222             : double
    2223      107928 : estimate_array_length(PlannerInfo *root, Node *arrayexpr)
    2224             : {
    2225             :     /* look through any binary-compatible relabeling of arrayexpr */
    2226      107928 :     arrayexpr = strip_array_coercion(arrayexpr);
    2227             : 
    2228      107928 :     if (arrayexpr && IsA(arrayexpr, Const))
    2229             :     {
    2230       48564 :         Datum       arraydatum = ((Const *) arrayexpr)->constvalue;
    2231       48564 :         bool        arrayisnull = ((Const *) arrayexpr)->constisnull;
    2232             :         ArrayType  *arrayval;
    2233             : 
    2234       48564 :         if (arrayisnull)
    2235          90 :             return 0;
    2236       48474 :         arrayval = DatumGetArrayTypeP(arraydatum);
    2237       48474 :         return ArrayGetNItems(ARR_NDIM(arrayval), ARR_DIMS(arrayval));
    2238             :     }
    2239       59364 :     else if (arrayexpr && IsA(arrayexpr, ArrayExpr) &&
    2240         680 :              !((ArrayExpr *) arrayexpr)->multidims)
    2241             :     {
    2242         680 :         return list_length(((ArrayExpr *) arrayexpr)->elements);
    2243             :     }
    2244       58684 :     else if (arrayexpr && root)
    2245             :     {
    2246             :         /* See if we can find any statistics about it */
    2247             :         VariableStatData vardata;
    2248             :         AttStatsSlot sslot;
    2249       58660 :         double      nelem = 0;
    2250             : 
    2251       58660 :         examine_variable(root, arrayexpr, 0, &vardata);
    2252       58660 :         if (HeapTupleIsValid(vardata.statsTuple))
    2253             :         {
    2254             :             /*
    2255             :              * Found stats, so use the average element count, which is stored
    2256             :              * in the last stanumbers element of the DECHIST statistics.
    2257             :              * Actually that is the average count of *distinct* elements;
    2258             :              * perhaps we should scale it up somewhat?
    2259             :              */
    2260       13556 :             if (get_attstatsslot(&sslot, vardata.statsTuple,
    2261             :                                  STATISTIC_KIND_DECHIST, InvalidOid,
    2262             :                                  ATTSTATSSLOT_NUMBERS))
    2263             :             {
    2264       13442 :                 if (sslot.nnumbers > 0)
    2265       13442 :                     nelem = clamp_row_est(sslot.numbers[sslot.nnumbers - 1]);
    2266       13442 :                 free_attstatsslot(&sslot);
    2267             :             }
    2268             :         }
    2269       58660 :         ReleaseVariableStats(vardata);
    2270             : 
    2271       58660 :         if (nelem > 0)
    2272       13442 :             return nelem;
    2273             :     }
    2274             : 
    2275             :     /* Else use a default guess --- this should match scalararraysel */
    2276       45242 :     return 10;
    2277             : }
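                     :
                     : /*
                     :  * Illustrative example: for "x = ANY (arraycol)", DECHIST stats on arraycol
                     :  * (if present) supply the average distinct-element count, say 4, and that
                     :  * value is returned; a Const such as '{1,2,3}' yields its exact element
                     :  * count of 3; otherwise the default guess of 10 is used, matching
                     :  * scalararraysel's assumption above.
                     :  */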
    2278             : 
    2279             : /*
    2280             :  *      rowcomparesel       - Selectivity of RowCompareExpr Node.
    2281             :  *
    2282             :  * We estimate RowCompare selectivity by considering just the first (high
    2283             :  * order) columns, which makes it equivalent to an ordinary OpExpr.  While
    2284             :  * this estimate could be refined by considering additional columns, it
    2285             :  * seems unlikely that we could do a lot better without multi-column
    2286             :  * statistics.
    2287             :  */
    2288             : Selectivity
    2289         252 : rowcomparesel(PlannerInfo *root,
    2290             :               RowCompareExpr *clause,
    2291             :               int varRelid, JoinType jointype, SpecialJoinInfo *sjinfo)
    2292             : {
    2293             :     Selectivity s1;
    2294         252 :     Oid         opno = linitial_oid(clause->opnos);
    2295         252 :     Oid         inputcollid = linitial_oid(clause->inputcollids);
    2296             :     List       *opargs;
    2297             :     bool        is_join_clause;
    2298             : 
    2299             :     /* Build equivalent arg list for single operator */
    2300         252 :     opargs = list_make2(linitial(clause->largs), linitial(clause->rargs));
    2301             : 
    2302             :     /*
    2303             :      * Decide if it's a join clause.  This should match clausesel.c's
    2304             :      * treat_as_join_clause(), except that we intentionally consider only the
    2305             :      * leading columns and not the rest of the clause.
    2306             :      */
    2307         252 :     if (varRelid != 0)
    2308             :     {
    2309             :         /*
    2310             :          * Caller is forcing restriction mode (eg, because we are examining an
    2311             :          * inner indexscan qual).
    2312             :          */
    2313          54 :         is_join_clause = false;
    2314             :     }
    2315         198 :     else if (sjinfo == NULL)
    2316             :     {
    2317             :         /*
    2318             :          * It must be a restriction clause, since it's being evaluated at a
    2319             :          * scan node.
    2320             :          */
    2321         186 :         is_join_clause = false;
    2322             :     }
    2323             :     else
    2324             :     {
    2325             :         /*
    2326             :          * Otherwise, it's a join if there's more than one base relation used.
    2327             :          */
    2328          12 :         is_join_clause = (NumRelids(root, (Node *) opargs) > 1);
    2329             :     }
    2330             : 
    2331         252 :     if (is_join_clause)
    2332             :     {
    2333             :         /* Estimate selectivity for a join clause. */
    2334          12 :         s1 = join_selectivity(root, opno,
    2335             :                               opargs,
    2336             :                               inputcollid,
    2337             :                               jointype,
    2338             :                               sjinfo);
    2339             :     }
    2340             :     else
    2341             :     {
    2342             :         /* Estimate selectivity for a restriction clause. */
    2343         240 :         s1 = restriction_selectivity(root, opno,
    2344             :                                      opargs,
    2345             :                                      inputcollid,
    2346             :                                      varRelid);
    2347             :     }
    2348             : 
    2349         252 :     return s1;
    2350             : }
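                     :
                     : /*
                     :  * Illustrative example: for "(a, b) < (10, 'foo')" only the leading
                     :  * comparison "a < 10" is estimated, via restriction_selectivity() or
                     :  * join_selectivity() on the first operator in opnos; the remaining
                     :  * columns do not affect the estimate.
                     :  */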
    2351             : 
    2352             : /*
    2353             :  *      eqjoinsel       - Join selectivity of "="
    2354             :  */
    2355             : Datum
    2356      268514 : eqjoinsel(PG_FUNCTION_ARGS)
    2357             : {
    2358      268514 :     PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0);
    2359      268514 :     Oid         operator = PG_GETARG_OID(1);
    2360      268514 :     List       *args = (List *) PG_GETARG_POINTER(2);
    2361             : 
    2362             : #ifdef NOT_USED
    2363             :     JoinType    jointype = (JoinType) PG_GETARG_INT16(3);
    2364             : #endif
    2365      268514 :     SpecialJoinInfo *sjinfo = (SpecialJoinInfo *) PG_GETARG_POINTER(4);
    2366      268514 :     Oid         collation = PG_GET_COLLATION();
    2367             :     double      selec;
    2368             :     double      selec_inner;
    2369             :     VariableStatData vardata1;
    2370             :     VariableStatData vardata2;
    2371             :     double      nd1;
    2372             :     double      nd2;
    2373             :     bool        isdefault1;
    2374             :     bool        isdefault2;
    2375             :     Oid         opfuncoid;
    2376             :     FmgrInfo    eqproc;
    2377      268514 :     Oid         hashLeft = InvalidOid;
    2378      268514 :     Oid         hashRight = InvalidOid;
    2379             :     AttStatsSlot sslot1;
    2380             :     AttStatsSlot sslot2;
    2381      268514 :     Form_pg_statistic stats1 = NULL;
    2382      268514 :     Form_pg_statistic stats2 = NULL;
    2383      268514 :     bool        have_mcvs1 = false;
    2384      268514 :     bool        have_mcvs2 = false;
    2385      268514 :     bool       *hasmatch1 = NULL;
    2386      268514 :     bool       *hasmatch2 = NULL;
    2387      268514 :     int         nmatches = 0;
    2388             :     bool        get_mcv_stats;
    2389             :     bool        join_is_reversed;
    2390             :     RelOptInfo *inner_rel;
    2391             : 
    2392      268514 :     get_join_variables(root, args, sjinfo,
    2393             :                        &vardata1, &vardata2, &join_is_reversed);
    2394             : 
    2395      268514 :     nd1 = get_variable_numdistinct(&vardata1, &isdefault1);
    2396      268514 :     nd2 = get_variable_numdistinct(&vardata2, &isdefault2);
    2397             : 
    2398      268514 :     opfuncoid = get_opcode(operator);
    2399             : 
    2400      268514 :     memset(&sslot1, 0, sizeof(sslot1));
    2401      268514 :     memset(&sslot2, 0, sizeof(sslot2));
    2402             : 
    2403             :     /*
    2404             :      * There is no use in fetching one side's MCVs if we lack MCVs for the
    2405             :      * other side, so do a quick check to verify that both stats exist.
    2406             :      */
    2407      740870 :     get_mcv_stats = (HeapTupleIsValid(vardata1.statsTuple) &&
    2408      363286 :                      HeapTupleIsValid(vardata2.statsTuple) &&
    2409      159444 :                      get_attstatsslot(&sslot1, vardata1.statsTuple,
    2410             :                                       STATISTIC_KIND_MCV, InvalidOid,
    2411      472356 :                                       0) &&
    2412       72834 :                      get_attstatsslot(&sslot2, vardata2.statsTuple,
    2413             :                                       STATISTIC_KIND_MCV, InvalidOid,
    2414             :                                       0));
    2415             : 
    2416      268514 :     if (HeapTupleIsValid(vardata1.statsTuple))
    2417             :     {
    2418             :         /* note we allow use of nullfrac regardless of security check */
    2419      203842 :         stats1 = (Form_pg_statistic) GETSTRUCT(vardata1.statsTuple);
    2420      232382 :         if (get_mcv_stats &&
    2421       28540 :             statistic_proc_security_check(&vardata1, opfuncoid))
    2422       28540 :             have_mcvs1 = get_attstatsslot(&sslot1, vardata1.statsTuple,
    2423             :                                           STATISTIC_KIND_MCV, InvalidOid,
    2424             :                                           ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS);
    2425             :     }
    2426             : 
    2427      268514 :     if (HeapTupleIsValid(vardata2.statsTuple))
    2428             :     {
    2429             :         /* note we allow use of nullfrac regardless of security check */
    2430      180130 :         stats2 = (Form_pg_statistic) GETSTRUCT(vardata2.statsTuple);
    2431      208670 :         if (get_mcv_stats &&
    2432       28540 :             statistic_proc_security_check(&vardata2, opfuncoid))
    2433       28540 :             have_mcvs2 = get_attstatsslot(&sslot2, vardata2.statsTuple,
    2434             :                                           STATISTIC_KIND_MCV, InvalidOid,
    2435             :                                           ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS);
    2436             :     }
    2437             : 
    2438             :     /* Prepare info usable by both eqjoinsel_inner and eqjoinsel_semi */
    2439      268514 :     if (have_mcvs1 && have_mcvs2)
    2440             :     {
    2441       28540 :         fmgr_info(opfuncoid, &eqproc);
    2442       28540 :         hasmatch1 = (bool *) palloc0(sslot1.nvalues * sizeof(bool));
    2443       28540 :         hasmatch2 = (bool *) palloc0(sslot2.nvalues * sizeof(bool));
    2444             : 
    2445             :         /*
    2446             :          * If the MCV lists are long enough to justify hashing, try to look up
    2447             :          * hash functions for the join operator.
    2448             :          */
    2449       28540 :         if ((sslot1.nvalues + sslot2.nvalues) >= EQJOINSEL_MCV_HASH_THRESHOLD)
    2450        1668 :             (void) get_op_hash_functions(operator, &hashLeft, &hashRight);
    2451             :     }
    2452             :     else
    2453      239974 :         memset(&eqproc, 0, sizeof(eqproc)); /* silence uninit-var warnings */
    2454             : 
    2455             :     /* We need to compute the inner-join selectivity in all cases */
    2456      268514 :     selec_inner = eqjoinsel_inner(&eqproc, collation,
    2457             :                                   hashLeft, hashRight,
    2458             :                                   &vardata1, &vardata2,
    2459             :                                   nd1, nd2,
    2460             :                                   isdefault1, isdefault2,
    2461             :                                   &sslot1, &sslot2,
    2462             :                                   stats1, stats2,
    2463             :                                   have_mcvs1, have_mcvs2,
    2464             :                                   hasmatch1, hasmatch2,
    2465             :                                   &nmatches);
    2466             : 
    2467      268514 :     switch (sjinfo->jointype)
    2468             :     {
    2469      257630 :         case JOIN_INNER:
    2470             :         case JOIN_LEFT:
    2471             :         case JOIN_FULL:
    2472      257630 :             selec = selec_inner;
    2473      257630 :             break;
    2474       10884 :         case JOIN_SEMI:
    2475             :         case JOIN_ANTI:
    2476             : 
    2477             :             /*
    2478             :              * Look up the join's inner relation.  min_righthand is sufficient
    2479             :              * information because neither SEMI nor ANTI joins permit any
    2480             :              * reassociation into or out of their RHS, so the righthand will
    2481             :              * always be exactly that set of rels.
    2482             :              */
    2483       10884 :             inner_rel = find_join_input_rel(root, sjinfo->min_righthand);
    2484             : 
    2485       10884 :             if (!join_is_reversed)
    2486        6734 :                 selec = eqjoinsel_semi(&eqproc, collation,
    2487             :                                        hashLeft, hashRight,
    2488             :                                        false,
    2489             :                                        &vardata1, &vardata2,
    2490             :                                        nd1, nd2,
    2491             :                                        isdefault1, isdefault2,
    2492             :                                        &sslot1, &sslot2,
    2493             :                                        stats1, stats2,
    2494             :                                        have_mcvs1, have_mcvs2,
    2495             :                                        hasmatch1, hasmatch2,
    2496             :                                        &nmatches,
    2497             :                                        inner_rel);
    2498             :             else
    2499        4150 :                 selec = eqjoinsel_semi(&eqproc, collation,
    2500             :                                        hashLeft, hashRight,
    2501             :                                        true,
    2502             :                                        &vardata2, &vardata1,
    2503             :                                        nd2, nd1,
    2504             :                                        isdefault2, isdefault1,
    2505             :                                        &sslot2, &sslot1,
    2506             :                                        stats2, stats1,
    2507             :                                        have_mcvs2, have_mcvs1,
    2508             :                                        hasmatch2, hasmatch1,
    2509             :                                        &nmatches,
    2510             :                                        inner_rel);
    2511             : 
    2512             :             /*
    2513             :              * We should never estimate the output of a semijoin to be more
    2514             :              * rows than we estimate for an inner join with the same input
    2515             :              * rels and join condition; it's obviously impossible for that to
    2516             :              * happen.  The former estimate is N1 * Ssemi while the latter is
    2517             :              * N1 * N2 * Sinner, so we may clamp Ssemi <= N2 * Sinner.  Doing
    2518             :              * this is worthwhile because of the shakier estimation rules we
    2519             :              * use in eqjoinsel_semi, particularly in cases where it has to
    2520             :              * punt entirely.
    2521             :              */
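                     :
                     :             /*
                     :              * Worked example (illustrative, with assumed numbers): if
                     :              * selec_inner came out as 0.001 and the inner relation is
                     :              * expected to produce 200 rows, the semijoin selectivity
                     :              * estimate is clamped to at most 200 * 0.001 = 0.2 of the
                     :              * outer rows.
                     :              */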
    2522       10884 :             selec = Min(selec, inner_rel->rows * selec_inner);
    2523       10884 :             break;
    2524           0 :         default:
    2525             :             /* other values not expected here */
    2526           0 :             elog(ERROR, "unrecognized join type: %d",
    2527             :                  (int) sjinfo->jointype);
    2528             :             selec = 0;          /* keep compiler quiet */
    2529             :             break;
    2530             :     }
    2531             : 
    2532      268514 :     free_attstatsslot(&sslot1);
    2533      268514 :     free_attstatsslot(&sslot2);
    2534             : 
    2535      268514 :     ReleaseVariableStats(vardata1);
    2536      268514 :     ReleaseVariableStats(vardata2);
    2537             : 
    2538      268514 :     if (hasmatch1)
    2539       28540 :         pfree(hasmatch1);
    2540      268514 :     if (hasmatch2)
    2541       28540 :         pfree(hasmatch2);
    2542             : 
    2543      268514 :     CLAMP_PROBABILITY(selec);
    2544             : 
    2545      268514 :     PG_RETURN_FLOAT8((float8) selec);
    2546             : }
    2547             : 
    2548             : /*
    2549             :  * eqjoinsel_inner --- eqjoinsel for normal inner join
    2550             :  *
    2551             :  * In addition to computing the selectivity estimate, this will fill
    2552             :  * hasmatch1[], hasmatch2[], and *p_nmatches (if have_mcvs1 && have_mcvs2).
    2553             :  * We may be able to re-use that data in eqjoinsel_semi.
    2554             :  *
    2555             :  * We also use this for LEFT/FULL outer joins; it's not presently clear
    2556             :  * that it's worth trying to distinguish them here.
    2557             :  */
    2558             : static double
    2559      268514 : eqjoinsel_inner(FmgrInfo *eqproc, Oid collation,
    2560             :                 Oid hashLeft, Oid hashRight,
    2561             :                 VariableStatData *vardata1, VariableStatData *vardata2,
    2562             :                 double nd1, double nd2,
    2563             :                 bool isdefault1, bool isdefault2,
    2564             :                 AttStatsSlot *sslot1, AttStatsSlot *sslot2,
    2565             :                 Form_pg_statistic stats1, Form_pg_statistic stats2,
    2566             :                 bool have_mcvs1, bool have_mcvs2,
    2567             :                 bool *hasmatch1, bool *hasmatch2,
    2568             :                 int *p_nmatches)
    2569             : {
    2570             :     double      selec;
    2571             : 
    2572      268514 :     if (have_mcvs1 && have_mcvs2)
    2573       28540 :     {
    2574             :         /*
    2575             :          * We have most-common-value lists for both relations.  Run through
    2576             :          * the lists to see which MCVs actually join to each other with the
    2577             :          * given operator.  This allows us to determine the exact join
    2578             :          * selectivity for the portion of the relations represented by the MCV
    2579             :          * lists.  We still have to estimate for the remaining population, but
    2580             :          * in a skewed distribution this gives us a big leg up in accuracy.
    2581             :          * For motivation see the analysis in Y. Ioannidis and S.
    2582             :          * Christodoulakis, "On the propagation of errors in the size of join
    2583             :          * results", Technical Report 1018, Computer Science Dept., University
    2584             :          * of Wisconsin, Madison, March 1991 (available from ftp.cs.wisc.edu).
    2585             :          */
    2586       28540 :         double      nullfrac1 = stats1->stanullfrac;
    2587       28540 :         double      nullfrac2 = stats2->stanullfrac;
    2588             :         double      matchprodfreq,
    2589             :                     matchfreq1,
    2590             :                     matchfreq2,
    2591             :                     unmatchfreq1,
    2592             :                     unmatchfreq2,
    2593             :                     otherfreq1,
    2594             :                     otherfreq2,
    2595             :                     totalsel1,
    2596             :                     totalsel2;
    2597             :         int         i,
    2598             :                     nmatches;
    2599             : 
    2600             :         /* Fill the match arrays */
    2601       28540 :         eqjoinsel_find_matches(eqproc, collation,
    2602             :                                hashLeft, hashRight,
    2603             :                                false,
    2604             :                                sslot1, sslot2,
    2605             :                                sslot1->nvalues, sslot2->nvalues,
    2606             :                                hasmatch1, hasmatch2,
    2607             :                                p_nmatches, &matchprodfreq);
    2608       28540 :         nmatches = *p_nmatches;
    2609       28540 :         CLAMP_PROBABILITY(matchprodfreq);
    2610             : 
    2611             :         /* Sum up frequencies of matched and unmatched MCVs */
    2612       28540 :         matchfreq1 = unmatchfreq1 = 0.0;
    2613      706610 :         for (i = 0; i < sslot1->nvalues; i++)
    2614             :         {
    2615      678070 :             if (hasmatch1[i])
    2616      294418 :                 matchfreq1 += sslot1->numbers[i];
    2617             :             else
    2618      383652 :                 unmatchfreq1 += sslot1->numbers[i];
    2619             :         }
    2620       28540 :         CLAMP_PROBABILITY(matchfreq1);
    2621       28540 :         CLAMP_PROBABILITY(unmatchfreq1);
    2622       28540 :         matchfreq2 = unmatchfreq2 = 0.0;
    2623      534640 :         for (i = 0; i < sslot2->nvalues; i++)
    2624             :         {
    2625      506100 :             if (hasmatch2[i])
    2626      294418 :                 matchfreq2 += sslot2->numbers[i];
    2627             :             else
    2628      211682 :                 unmatchfreq2 += sslot2->numbers[i];
    2629             :         }
    2630       28540 :         CLAMP_PROBABILITY(matchfreq2);
    2631       28540 :         CLAMP_PROBABILITY(unmatchfreq2);
    2632             : 
    2633             :         /*
    2634             :          * Compute total frequency of non-null values that are not in the MCV
    2635             :          * lists.
    2636             :          */
    2637       28540 :         otherfreq1 = 1.0 - nullfrac1 - matchfreq1 - unmatchfreq1;
    2638       28540 :         otherfreq2 = 1.0 - nullfrac2 - matchfreq2 - unmatchfreq2;
    2639       28540 :         CLAMP_PROBABILITY(otherfreq1);
    2640       28540 :         CLAMP_PROBABILITY(otherfreq2);
    2641             : 
    2642             :         /*
    2643             :          * We can estimate the total selectivity from the point of view of
    2644             :          * relation 1 as: the known selectivity for matched MCVs, plus
    2645             :          * unmatched MCVs that are assumed to match against random members of
    2646             :          * relation 2's non-MCV population, plus non-MCV values that are
    2647             :          * assumed to match against random members of relation 2's unmatched
    2648             :          * MCVs plus non-MCV values.
    2649             :          */
    2650       28540 :         totalsel1 = matchprodfreq;
    2651       28540 :         if (nd2 > sslot2->nvalues)
    2652        6234 :             totalsel1 += unmatchfreq1 * otherfreq2 / (nd2 - sslot2->nvalues);
    2653       28540 :         if (nd2 > nmatches)
    2654       11176 :             totalsel1 += otherfreq1 * (otherfreq2 + unmatchfreq2) /
    2655       11176 :                 (nd2 - nmatches);
    2656             :         /* Same estimate from the point of view of relation 2. */
    2657       28540 :         totalsel2 = matchprodfreq;
    2658       28540 :         if (nd1 > sslot1->nvalues)
    2659        7032 :             totalsel2 += unmatchfreq2 * otherfreq1 / (nd1 - sslot1->nvalues);
    2660       28540 :         if (nd1 > nmatches)
    2661        9896 :             totalsel2 += otherfreq2 * (otherfreq1 + unmatchfreq1) /
    2662        9896 :                 (nd1 - nmatches);
    2663             : 
    2664             :         /*
    2665             :          * Use the smaller of the two estimates.  This can be justified in
    2666             :          * essentially the same terms as given below for the no-stats case: to
    2667             :          * a first approximation, we are estimating from the point of view of
    2668             :          * the relation with smaller nd.
    2669             :          */
    2670       28540 :         selec = (totalsel1 < totalsel2) ? totalsel1 : totalsel2;
    2671             :     }
    2672             :     else
    2673             :     {
    2674             :         /*
    2675             :          * We do not have MCV lists for both sides.  Estimate the join
    2676             :          * selectivity as MIN(1/nd1,1/nd2)*(1-nullfrac1)*(1-nullfrac2). This
    2677             :          * is plausible if we assume that the join operator is strict and the
    2678             :          * non-null values are about equally distributed: a given non-null
    2679             :          * tuple of rel1 will join to either zero or N2*(1-nullfrac2)/nd2 rows
    2680             :          * of rel2, so total join rows are at most
    2681             :          * N1*(1-nullfrac1)*N2*(1-nullfrac2)/nd2 giving a join selectivity of
    2682             :          * not more than (1-nullfrac1)*(1-nullfrac2)/nd2. By the same logic it
    2683             :          * is not more than (1-nullfrac1)*(1-nullfrac2)/nd1, so the expression
    2684             :          * with MIN() is an upper bound.  Using the MIN() means we estimate
    2685             :          * from the point of view of the relation with smaller nd (since the
    2686             :          * larger nd is determining the MIN).  It is reasonable to assume that
    2687             :          * most tuples in this rel will have join partners, so the bound is
    2688             :          * probably reasonably tight and should be taken as-is.
    2689             :          *
    2690             :          * XXX Can we be smarter if we have an MCV list for just one side? It
    2691             :          * seems that if we assume equal distribution for the other side, we
    2692             :          * end up with the same answer anyway.
    2693             :          */
    2694      239974 :         double      nullfrac1 = stats1 ? stats1->stanullfrac : 0.0;
    2695      239974 :         double      nullfrac2 = stats2 ? stats2->stanullfrac : 0.0;
    2696             : 
    2697      239974 :         selec = (1.0 - nullfrac1) * (1.0 - nullfrac2);
    2698      239974 :         if (nd1 > nd2)
    2699      127206 :             selec /= nd1;
    2700             :         else
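                     :
                     :         /*
                     :          * Worked example (illustrative, with assumed numbers): with
                     :          * nd1 = 1000, nd2 = 10 and no nulls, selec = 1/1000.  Joining
                     :          * N1 = 100000 rows to N2 = 500 rows then yields about
                     :          * 100000 * 500 / 1000 = 50000 rows, i.e. each inner row pairs
                     :          * with roughly N1/nd1 = 100 outer rows on average.
                     :          */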
    2701      112768 :             selec /= nd2;
    2702             :     }
    2703             : 
    2704      268514 :     return selec;
    2705             : }
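                     :
                     : /*
                     :  * A minimal standalone sketch of the MCV-vs-MCV computation above, using
                     :  * toy integer MCV lists, plain "==" as the join operator, and made-up
                     :  * frequencies and ndistinct values.  The function name and all numbers are
                     :  * hypothetical; the real code works with Datums, the operator's FmgrInfo,
                     :  * optional hashing, and range guards that the toy numbers make unnecessary.
                     :  */
                     : static double
                     : example_mcv_join_selectivity_sketch(void)
                     : {
                     :     /* toy MCV lists: values with their relative frequencies */
                     :     int         mcv1[] = {1, 2, 3, 4};
                     :     double      freq1[] = {0.30, 0.20, 0.10, 0.05};    /* 65% of rel1 */
                     :     int         mcv2[] = {2, 3, 5};
                     :     double      freq2[] = {0.40, 0.25, 0.10};          /* 75% of rel2 */
                     :     double      nd1 = 50.0,
                     :                 nd2 = 20.0;     /* assumed ndistinct on each side */
                     :     double      nullfrac1 = 0.0,
                     :                 nullfrac2 = 0.0;
                     :     int         hasmatch1[4] = {0};
                     :     int         hasmatch2[3] = {0};
                     :     double      matchprodfreq = 0.0;
                     :     double      matchfreq1 = 0.0,
                     :                 unmatchfreq1 = 0.0,
                     :                 matchfreq2 = 0.0,
                     :                 unmatchfreq2 = 0.0;
                     :     double      otherfreq1,
                     :                 otherfreq2,
                     :                 totalsel1,
                     :                 totalsel2;
                     :     int         nmatches = 0;
                     :
                     :     /* see which MCVs join to each other (the real code may hash instead) */
                     :     for (int i = 0; i < 4; i++)
                     :         for (int j = 0; j < 3; j++)
                     :             if (mcv1[i] == mcv2[j])
                     :             {
                     :                 hasmatch1[i] = 1;
                     :                 hasmatch2[j] = 1;
                     :                 matchprodfreq += freq1[i] * freq2[j];
                     :                 nmatches++;
                     :             }
                     :
                     :     /* sum matched vs. unmatched MCV frequencies on each side */
                     :     for (int i = 0; i < 4; i++)
                     :     {
                     :         if (hasmatch1[i])
                     :             matchfreq1 += freq1[i];
                     :         else
                     :             unmatchfreq1 += freq1[i];
                     :     }
                     :     for (int j = 0; j < 3; j++)
                     :     {
                     :         if (hasmatch2[j])
                     :             matchfreq2 += freq2[j];
                     :         else
                     :             unmatchfreq2 += freq2[j];
                     :     }
                     :
                     :     /* frequency of non-null values outside each MCV list */
                     :     otherfreq1 = 1.0 - nullfrac1 - matchfreq1 - unmatchfreq1;
                     :     otherfreq2 = 1.0 - nullfrac2 - matchfreq2 - unmatchfreq2;
                     :
                     :     /* total selectivity from each side's point of view */
                     :     totalsel1 = matchprodfreq
                     :         + unmatchfreq1 * otherfreq2 / (nd2 - 3)
                     :         + otherfreq1 * (otherfreq2 + unmatchfreq2) / (nd2 - nmatches);
                     :     totalsel2 = matchprodfreq
                     :         + unmatchfreq2 * otherfreq1 / (nd1 - 4)
                     :         + otherfreq2 * (otherfreq1 + unmatchfreq1) / (nd1 - nmatches);
                     :
                     :     /* use the smaller of the two estimates; about 0.109 for these inputs */
                     :     return (totalsel1 < totalsel2) ? totalsel1 : totalsel2;
                     : }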
    2706             : 
    2707             : /*
    2708             :  * eqjoinsel_semi --- eqjoinsel for semi join
    2709             :  *
    2710             :  * (Also used for anti join, which we are supposed to estimate the same way.)
    2711             :  * Caller has ensured that vardata1 is the LHS variable; however, eqproc
    2712             :  * is for the original join operator, which might now need to have the inputs
    2713             :  * swapped in order to apply correctly.  Also, if have_mcvs1 && have_mcvs2
    2714             :  * then hasmatch1[], hasmatch2[], and *p_nmatches were filled by
    2715             :  * eqjoinsel_inner.
    2716             :  */
    2717             : static double
    2718       10884 : eqjoinsel_semi(FmgrInfo *eqproc, Oid collation,
    2719             :                Oid hashLeft, Oid hashRight,
    2720             :                bool op_is_reversed,
    2721             :                VariableStatData *vardata1, VariableStatData *vardata2,
    2722             :                double nd1, double nd2,
    2723             :                bool isdefault1, bool isdefault2,
    2724             :                AttStatsSlot *sslot1, AttStatsSlot *sslot2,
    2725             :                Form_pg_statistic stats1, Form_pg_statistic stats2,
    2726             :                bool have_mcvs1, bool have_mcvs2,
    2727             :                bool *hasmatch1, bool *hasmatch2,
    2728             :                int *p_nmatches,
    2729             :                RelOptInfo *inner_rel)
    2730             : {
    2731             :     double      selec;
    2732             : 
    2733             :     /*
    2734             :      * We clamp nd2 to be not more than what we estimate the inner relation's
    2735             :      * size to be.  This is intuitively somewhat reasonable since obviously
    2736             :      * there can't be more than that many distinct values coming from the
    2737             :      * inner rel.  The reason for the asymmetry (ie, that we don't clamp nd1
    2738             :      * likewise) is that this is the only pathway by which restriction clauses
    2739             :      * applied to the inner rel will affect the join result size estimate,
    2740             :      * since set_joinrel_size_estimates will multiply SEMI/ANTI selectivity by
    2741             :      * only the outer rel's size.  If we clamped nd1 we'd be double-counting
    2742             :      * the selectivity of outer-rel restrictions.
    2743             :      *
    2744             :      * We can apply this clamping both with respect to the base relation from
    2745             :      * which the join variable comes (if there is just one), and to the
    2746             :      * immediate inner input relation of the current join.
    2747             :      *
    2748             :      * If we clamp, we can treat nd2 as being a non-default estimate; it's not
    2749             :      * great, maybe, but it didn't come out of nowhere either.  This is most
    2750             :      * helpful when the inner relation is empty and consequently has no stats.
    2751             :      */
    2752       10884 :     if (vardata2->rel)
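                     :
                     :     /*
                     :      * Worked example (illustrative, with assumed numbers): if the column
                     :      * stats say nd2 = 10000 but restriction clauses leave the inner
                     :      * relation with an estimated 50 rows, nd2 is clamped to 50 below and
                     :      * is no longer treated as a default estimate.
                     :      */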
    2753             :     {
    2754       10878 :         if (nd2 >= vardata2->rel->rows)
    2755             :         {
    2756        8736 :             nd2 = vardata2->rel->rows;
    2757        8736 :             isdefault2 = false;
    2758             :         }
    2759             :     }
    2760       10884 :     if (nd2 >= inner_rel->rows)
    2761             :     {
    2762        8680 :         nd2 = inner_rel->rows;
    2763        8680 :         isdefault2 = false;
    2764             :     }
    2765             : 
    2766       10884 :     if (have_mcvs1 && have_mcvs2)
    2767         624 :     {
    2768             :         /*
    2769             :          * We have most-common-value lists for both relations.  Run through
    2770             :          * the lists to see which MCVs actually join to each other with the
    2771             :          * given operator.  This allows us to determine the exact join
    2772             :          * selectivity for the portion of the relations represented by the MCV
    2773             :          * lists.  We still have to estimate for the remaining population, but
    2774             :          * in a skewed distribution this gives us a big leg up in accuracy.
    2775             :          */
    2776         624 :         double      nullfrac1 = stats1->stanullfrac;
    2777             :         double      matchprodfreq,
    2778             :                     matchfreq1,
    2779             :                     uncertainfrac,
    2780             :                     uncertain;
    2781             :         int         i,
    2782             :                     nmatches,
    2783             :                     clamped_nvalues2;
    2784             : 
    2785             :         /*
    2786             :          * The clamping above could have resulted in nd2 being less than
     2787             :          * sslot2->nvalues, in which case we assume that precisely the nd2
    2788             :          * most common values in the relation will appear in the join input,
    2789             :          * and so compare to only the first nd2 members of the MCV list.  Of
    2790             :          * course this is frequently wrong, but it's the best bet we can make.
    2791             :          */
    2792         624 :         clamped_nvalues2 = Min(sslot2->nvalues, nd2);
    2793             : 
    2794             :         /*
    2795             :          * If we did not set clamped_nvalues2 to less than sslot2->nvalues,
    2796             :          * then the hasmatch1[] and hasmatch2[] match flags computed by
    2797             :          * eqjoinsel_inner are still perfectly applicable, so we need not
    2798             :          * re-do the matching work.  Note that it does not matter if
    2799             :          * op_is_reversed: we'd get the same answers.
    2800             :          *
    2801             :          * If we did clamp, then a different set of sslot2 values is to be
    2802             :          * compared, so we have to re-do the matching.
    2803             :          */
    2804         624 :         if (clamped_nvalues2 != sslot2->nvalues)
    2805             :         {
    2806             :             /* Must re-zero the arrays */
    2807           0 :             memset(hasmatch1, 0, sslot1->nvalues * sizeof(bool));
    2808           0 :             memset(hasmatch2, 0, clamped_nvalues2 * sizeof(bool));
    2809             :             /* Re-fill the match arrays */
    2810           0 :             eqjoinsel_find_matches(eqproc, collation,
    2811             :                                    hashLeft, hashRight,
    2812             :                                    op_is_reversed,
    2813             :                                    sslot1, sslot2,
    2814             :                                    sslot1->nvalues, clamped_nvalues2,
    2815             :                                    hasmatch1, hasmatch2,
    2816             :                                    p_nmatches, &matchprodfreq);
    2817             :         }
    2818         624 :         nmatches = *p_nmatches;
    2819             : 
    2820             :         /* Sum up frequencies of matched MCVs */
    2821         624 :         matchfreq1 = 0.0;
    2822       13738 :         for (i = 0; i < sslot1->nvalues; i++)
    2823             :         {
    2824       13114 :             if (hasmatch1[i])
    2825       11448 :                 matchfreq1 += sslot1->numbers[i];
    2826             :         }
    2827         624 :         CLAMP_PROBABILITY(matchfreq1);
    2828             : 
    2829             :         /*
    2830             :          * Now we need to estimate the fraction of relation 1 that has at
    2831             :          * least one join partner.  We know for certain that the matched MCVs
    2832             :          * do, so that gives us a lower bound, but we're really in the dark
    2833             :          * about everything else.  Our crude approach is: if nd1 <= nd2 then
    2834             :          * assume all non-null rel1 rows have join partners, else assume for
    2835             :          * the uncertain rows that a fraction nd2/nd1 have join partners. We
    2836             :          * can discount the known-matched MCVs from the distinct-values counts
    2837             :          * before doing the division.
    2838             :          *
    2839             :          * Crude as the above is, it's completely useless if we don't have
    2840             :          * reliable ndistinct values for both sides.  Hence, if either nd1 or
    2841             :          * nd2 is default, punt and assume half of the uncertain rows have
    2842             :          * join partners.
    2843             :          */
    2844         624 :         if (!isdefault1 && !isdefault2)
    2845             :         {
    2846         624 :             nd1 -= nmatches;
    2847         624 :             nd2 -= nmatches;
    2848         624 :             if (nd1 <= nd2 || nd2 < 0)
    2849         588 :                 uncertainfrac = 1.0;
    2850             :             else
    2851          36 :                 uncertainfrac = nd2 / nd1;
    2852             :         }
    2853             :         else
    2854           0 :             uncertainfrac = 0.5;
    2855         624 :         uncertain = 1.0 - matchfreq1 - nullfrac1;
    2856         624 :         CLAMP_PROBABILITY(uncertain);
    2857         624 :         selec = matchfreq1 + uncertainfrac * uncertain;
    2858             :     }
    2859             :     else
    2860             :     {
    2861             :         /*
    2862             :          * Without MCV lists for both sides, we can only use the heuristic
    2863             :          * about nd1 vs nd2.
    2864             :          */
    2865       10260 :         double      nullfrac1 = stats1 ? stats1->stanullfrac : 0.0;
    2866             : 
    2867       10260 :         if (!isdefault1 && !isdefault2)
    2868             :         {
    2869        7862 :             if (nd1 <= nd2 || nd2 < 0)
    2870        4918 :                 selec = 1.0 - nullfrac1;
    2871             :             else
    2872        2944 :                 selec = (nd2 / nd1) * (1.0 - nullfrac1);
    2873             :         }
    2874             :         else
    2875        2398 :             selec = 0.5 * (1.0 - nullfrac1);
    2876             :     }
    2877             : 
    2878       10884 :     return selec;
    2879             : }
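
The MCV branch above comes down to one piece of arithmetic: the matched-MCV frequency is known exactly, and the remaining ("uncertain") non-null fraction is scaled by nd2/nd1 after both distinct counts are discounted by the matched MCVs.  Below is a minimal standalone sketch of just that arithmetic; semi_mcv_selectivity and the numeric inputs are hypothetical illustrations, not part of selfuncs.c.

    #include <stdio.h>

    /*
     * Sketch of the eqjoinsel_semi MCV-branch arithmetic: the matched-MCV
     * frequency is known exactly; the rest of the non-null population is
     * scaled by nd2/nd1 once both distinct counts are discounted by the
     * number of matched MCVs.
     */
    static double
    semi_mcv_selectivity(double matchfreq1, double nullfrac1,
                         double nd1, double nd2, int nmatches)
    {
        double      uncertainfrac;
        double      uncertain;

        nd1 -= nmatches;
        nd2 -= nmatches;
        if (nd1 <= nd2 || nd2 < 0)
            uncertainfrac = 1.0;
        else
            uncertainfrac = nd2 / nd1;

        uncertain = 1.0 - matchfreq1 - nullfrac1;
        if (uncertain < 0.0)            /* clamp, like CLAMP_PROBABILITY */
            uncertain = 0.0;
        if (uncertain > 1.0)
            uncertain = 1.0;

        return matchfreq1 + uncertainfrac * uncertain;
    }

    int
    main(void)
    {
        /* hypothetical inputs: 40% of rel1 covered by matched MCVs,
         * 5% NULLs, 1000 vs. 200 distinct values, 20 matched MCV pairs */
        printf("selec = %g\n",
               semi_mcv_selectivity(0.40, 0.05, 1000, 200, 20));
        return 0;
    }

With the inputs shown it prints a selectivity of about 0.50: the known 0.40 matched fraction plus roughly 180/980 of the remaining 0.55.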
    2880             : 
    2881             : /*
    2882             :  * Identify matching MCVs for eqjoinsel_inner or eqjoinsel_semi.
    2883             :  *
    2884             :  * Inputs:
    2885             :  *  eqproc: FmgrInfo for equality function to use (might be reversed)
    2886             :  *  collation: OID of collation to use
    2887             :  *  hashLeft, hashRight: OIDs of hash functions associated with equality op,
    2888             :  *      or InvalidOid if we're not to use hashing
    2889             :  *  op_is_reversed: indicates that eqproc compares right type to left type
    2890             :  *  sslot1, sslot2: MCV values for the lefthand and righthand inputs
    2891             :  *  nvalues1, nvalues2: number of values to be considered (can be less than
    2892             :  *      sslotN->nvalues, but not more)
    2893             :  * Outputs:
    2894             :  *  hasmatch1[], hasmatch2[]: pre-zeroed arrays of lengths nvalues1, nvalues2;
    2895             :  *      entries are set to true if that MCV has a match on the other side
    2896             :  *  *p_nmatches: receives number of MCV pairs that match
    2897             :  *  *p_matchprodfreq: receives sum(sslot1->numbers[i] * sslot2->numbers[j])
    2898             :  *      for matching MCVs
    2899             :  *
    2900             :  * Note that hashLeft is for the eqproc's left-hand input type, hashRight
    2901             :  * for its right, regardless of op_is_reversed.
    2902             :  *
    2903             :  * Note we assume that each MCV will match at most one member of the other
    2904             :  * MCV list.  If the operator isn't really equality, there could be multiple
    2905             :  * matches --- but we don't look for them, both for speed and because the
    2906             :  * math wouldn't add up...
    2907             :  */
    2908             : static void
    2909       28540 : eqjoinsel_find_matches(FmgrInfo *eqproc, Oid collation,
    2910             :                        Oid hashLeft, Oid hashRight,
    2911             :                        bool op_is_reversed,
    2912             :                        AttStatsSlot *sslot1, AttStatsSlot *sslot2,
    2913             :                        int nvalues1, int nvalues2,
    2914             :                        bool *hasmatch1, bool *hasmatch2,
    2915             :                        int *p_nmatches, double *p_matchprodfreq)
    2916             : {
    2917       28540 :     LOCAL_FCINFO(fcinfo, 2);
    2918       28540 :     double      matchprodfreq = 0.0;
    2919       28540 :     int         nmatches = 0;
    2920             : 
    2921             :     /*
    2922             :      * Save a few cycles by setting up the fcinfo struct just once.  Using
    2923             :      * FunctionCallInvoke directly also avoids failure if the eqproc returns
    2924             :      * NULL, though really equality functions should never do that.
    2925             :      */
    2926       28540 :     InitFunctionCallInfoData(*fcinfo, eqproc, 2, collation,
    2927             :                              NULL, NULL);
    2928       28540 :     fcinfo->args[0].isnull = false;
    2929       28540 :     fcinfo->args[1].isnull = false;
    2930             : 
    2931       28540 :     if (OidIsValid(hashLeft) && OidIsValid(hashRight))
    2932        1668 :     {
    2933             :         /* Use a hash table to speed up the matching */
    2934        1668 :         LOCAL_FCINFO(hash_fcinfo, 1);
    2935             :         FmgrInfo    hash_proc;
    2936             :         MCVHashContext hashContext;
    2937             :         MCVHashTable_hash *hashTable;
    2938             :         AttStatsSlot *statsProbe;
    2939             :         AttStatsSlot *statsHash;
    2940             :         bool       *hasMatchProbe;
    2941             :         bool       *hasMatchHash;
    2942             :         int         nvaluesProbe;
    2943             :         int         nvaluesHash;
    2944             : 
    2945             :         /* Make sure we build the hash table on the smaller array. */
    2946        1668 :         if (sslot1->nvalues >= sslot2->nvalues)
    2947             :         {
    2948        1668 :             statsProbe = sslot1;
    2949        1668 :             statsHash = sslot2;
    2950        1668 :             hasMatchProbe = hasmatch1;
    2951        1668 :             hasMatchHash = hasmatch2;
    2952        1668 :             nvaluesProbe = nvalues1;
    2953        1668 :             nvaluesHash = nvalues2;
    2954             :         }
    2955             :         else
    2956             :         {
    2957             :             /* We'll have to reverse the direction of use of the operator. */
    2958           0 :             op_is_reversed = !op_is_reversed;
    2959           0 :             statsProbe = sslot2;
    2960           0 :             statsHash = sslot1;
    2961           0 :             hasMatchProbe = hasmatch2;
    2962           0 :             hasMatchHash = hasmatch1;
    2963           0 :             nvaluesProbe = nvalues2;
    2964           0 :             nvaluesHash = nvalues1;
    2965             :         }
    2966             : 
    2967             :         /*
    2968             :          * Build the hash table on the smaller array, using the appropriate
    2969             :          * hash function for its data type.
    2970             :          */
    2971        1668 :         fmgr_info(op_is_reversed ? hashLeft : hashRight, &hash_proc);
    2972        1668 :         InitFunctionCallInfoData(*hash_fcinfo, &hash_proc, 1, collation,
    2973             :                                  NULL, NULL);
    2974        1668 :         hash_fcinfo->args[0].isnull = false;
    2975             : 
    2976        1668 :         hashContext.equal_fcinfo = fcinfo;
    2977        1668 :         hashContext.hash_fcinfo = hash_fcinfo;
    2978        1668 :         hashContext.op_is_reversed = op_is_reversed;
    2979        1668 :         hashContext.insert_mode = true;
    2980        1668 :         get_typlenbyval(statsHash->valuetype,
    2981             :                         &hashContext.hash_typlen,
    2982             :                         &hashContext.hash_typbyval);
    2983             : 
    2984        1668 :         hashTable = MCVHashTable_create(CurrentMemoryContext,
    2985             :                                         nvaluesHash,
    2986             :                                         &hashContext);
    2987             : 
    2988      168468 :         for (int i = 0; i < nvaluesHash; i++)
    2989             :         {
    2990      166800 :             bool        found = false;
    2991      166800 :             MCVHashEntry *entry = MCVHashTable_insert(hashTable,
    2992      166800 :                                                       statsHash->values[i],
    2993             :                                                       &found);
    2994             : 
    2995             :             /*
    2996             :              * MCVHashTable_insert will only report "found" if the new value
    2997             :              * is equal to some previous one per datum_image_eq().  That
    2998             :              * probably shouldn't happen, since we're not expecting duplicates
    2999             :              * in the MCV list.  If we do find a dup, just ignore it, leaving
    3000             :              * the hash entry's index pointing at the first occurrence.  That
    3001             :              * matches the behavior that the non-hashed code path would have.
    3002             :              */
    3003      166800 :             if (likely(!found))
    3004      166800 :                 entry->index = i;
    3005             :         }
    3006             : 
    3007             :         /*
    3008             :          * Prepare to probe the hash table.  If the probe values are of a
    3009             :          * different data type, then we need to change hash functions.  (This
    3010             :          * code relies on the assumption that since we defined SH_STORE_HASH,
    3011             :          * simplehash.h will never need to compute hash values for existing
    3012             :          * hash table entries.)
    3013             :          */
    3014        1668 :         hashContext.insert_mode = false;
    3015        1668 :         if (hashLeft != hashRight)
    3016             :         {
    3017           0 :             fmgr_info(op_is_reversed ? hashRight : hashLeft, &hash_proc);
    3018             :             /* Resetting hash_fcinfo is probably unnecessary, but be safe */
    3019           0 :             InitFunctionCallInfoData(*hash_fcinfo, &hash_proc, 1, collation,
    3020             :                                      NULL, NULL);
    3021           0 :             hash_fcinfo->args[0].isnull = false;
    3022             :         }
    3023             : 
    3024             :         /* Look up each probe value in turn. */
    3025      168468 :         for (int i = 0; i < nvaluesProbe; i++)
    3026             :         {
    3027      166800 :             MCVHashEntry *entry = MCVHashTable_lookup(hashTable,
    3028      166800 :                                                       statsProbe->values[i]);
    3029             : 
    3030             :             /* As in the other code path, skip already-matched hash entries */
    3031      166800 :             if (entry != NULL && !hasMatchHash[entry->index])
    3032             :             {
    3033       65542 :                 hasMatchHash[entry->index] = hasMatchProbe[i] = true;
    3034       65542 :                 nmatches++;
    3035       65542 :                 matchprodfreq += statsHash->numbers[entry->index] * statsProbe->numbers[i];
    3036             :             }
    3037             :         }
    3038             : 
    3039        1668 :         MCVHashTable_destroy(hashTable);
    3040             :     }
    3041             :     else
    3042             :     {
    3043             :         /* We're not to use hashing, so do it the O(N^2) way */
    3044             :         int         index1,
    3045             :                     index2;
    3046             : 
    3047             :         /* Set up to supply the values in the order the operator expects */
    3048       26872 :         if (op_is_reversed)
    3049             :         {
    3050           0 :             index1 = 1;
    3051           0 :             index2 = 0;
    3052             :         }
    3053             :         else
    3054             :         {
    3055       26872 :             index1 = 0;
    3056       26872 :             index2 = 1;
    3057             :         }
    3058             : 
    3059      538142 :         for (int i = 0; i < nvalues1; i++)
    3060             :         {
    3061      511270 :             fcinfo->args[index1].value = sslot1->values[i];
    3062             : 
    3063    10535870 :             for (int j = 0; j < nvalues2; j++)
    3064             :             {
    3065             :                 Datum       fresult;
    3066             : 
    3067    10253476 :                 if (hasmatch2[j])
    3068     3110328 :                     continue;
    3069     7143148 :                 fcinfo->args[index2].value = sslot2->values[j];
    3070     7143148 :                 fcinfo->isnull = false;
    3071     7143148 :                 fresult = FunctionCallInvoke(fcinfo);
    3072     7143148 :                 if (!fcinfo->isnull && DatumGetBool(fresult))
    3073             :                 {
    3074      228876 :                     hasmatch1[i] = hasmatch2[j] = true;
    3075      228876 :                     matchprodfreq += sslot1->numbers[i] * sslot2->numbers[j];
    3076      228876 :                     nmatches++;
    3077      228876 :                     break;
    3078             :                 }
    3079             :             }
    3080             :         }
    3081             :     }
    3082             : 
    3083       28540 :     *p_nmatches = nmatches;
    3084       28540 :     *p_matchprodfreq = matchprodfreq;
    3085       28540 : }
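
Stripped of the fmgr machinery, the non-hashed branch of eqjoinsel_find_matches is a nested loop that pairs each value of one MCV list with at most one not-yet-matched value of the other, accumulating the product of the matched frequencies.  Here is a standalone sketch over plain doubles compared with ==; find_matches and the sample arrays are hypothetical stand-ins, and the real code instead invokes the join's equality operator via FunctionCallInvoke and switches to a hash table when usable hash functions are available.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * O(N^2) MCV matching: each value in list 1 claims at most one
     * not-yet-matched value in list 2, accumulating the product of the
     * matched pair's frequencies.
     */
    static void
    find_matches(const double *vals1, const double *freqs1, int n1,
                 const double *vals2, const double *freqs2, int n2,
                 bool *hasmatch1, bool *hasmatch2,
                 int *nmatches, double *matchprodfreq)
    {
        *nmatches = 0;
        *matchprodfreq = 0.0;
        for (int i = 0; i < n1; i++)
        {
            for (int j = 0; j < n2; j++)
            {
                if (hasmatch2[j])
                    continue;       /* already claimed by an earlier i */
                if (vals1[i] == vals2[j])
                {
                    hasmatch1[i] = hasmatch2[j] = true;
                    *matchprodfreq += freqs1[i] * freqs2[j];
                    (*nmatches)++;
                    break;          /* at most one match per MCV */
                }
            }
        }
    }

    int
    main(void)
    {
        double      vals1[] = {1, 2, 3};
        double      freqs1[] = {0.5, 0.3, 0.1};
        double      vals2[] = {2, 4};
        double      freqs2[] = {0.6, 0.2};
        bool        hasmatch1[3] = {false};
        bool        hasmatch2[2] = {false};
        int         nmatches;
        double      matchprodfreq;

        find_matches(vals1, freqs1, 3, vals2, freqs2, 2,
                     hasmatch1, hasmatch2, &nmatches, &matchprodfreq);
        printf("nmatches = %d, matchprodfreq = %g\n",
               nmatches, matchprodfreq);    /* 1 and 0.18 */
        return 0;
    }

Only the value 2 appears in both lists, so this prints nmatches = 1 and matchprodfreq = 0.3 * 0.6 = 0.18.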
    3086             : 
    3087             : /*
    3088             :  * Support functions for the hash tables used by eqjoinsel_find_matches
    3089             :  */
    3090             : static uint32
    3091      333600 : hash_mcv(MCVHashTable_hash *tab, Datum key)
    3092             : {
    3093      333600 :     MCVHashContext *context = (MCVHashContext *) tab->private_data;
    3094      333600 :     FunctionCallInfo fcinfo = context->hash_fcinfo;
    3095             :     Datum       fresult;
    3096             : 
    3097      333600 :     fcinfo->args[0].value = key;
    3098      333600 :     fcinfo->isnull = false;
    3099      333600 :     fresult = FunctionCallInvoke(fcinfo);
    3100             :     Assert(!fcinfo->isnull);
    3101      333600 :     return DatumGetUInt32(fresult);
    3102             : }
    3103             : 
    3104             : static bool
    3105       65542 : mcvs_equal(MCVHashTable_hash *tab, Datum key0, Datum key1)
    3106             : {
    3107       65542 :     MCVHashContext *context = (MCVHashContext *) tab->private_data;
    3108             : 
    3109       65542 :     if (context->insert_mode)
    3110             :     {
    3111             :         /*
    3112             :          * During the insertion step, any comparisons will be between two
    3113             :          * Datums of the hash table's data type, so if the given operator is
    3114             :          * cross-type it will be the wrong thing to use.  Fortunately, we can
    3115             :          * use datum_image_eq instead.  The MCV values should all be distinct
    3116             :          * anyway, so it's mostly pro-forma to compare them at all.
    3117             :          */
    3118           0 :         return datum_image_eq(key0, key1,
    3119           0 :                               context->hash_typbyval, context->hash_typlen);
    3120             :     }
    3121             :     else
    3122             :     {
    3123       65542 :         FunctionCallInfo fcinfo = context->equal_fcinfo;
    3124             :         Datum       fresult;
    3125             : 
    3126             :         /*
    3127             :          * Apply the operator the correct way around.  Although simplehash.h
    3128             :          * doesn't document this explicitly, during lookups key0 is from the
    3129             :          * hash table while key1 is the probe value, so we should compare them
    3130             :          * in that order only if op_is_reversed.
    3131             :          */
    3132       65542 :         if (context->op_is_reversed)
    3133             :         {
    3134           0 :             fcinfo->args[0].value = key0;
    3135           0 :             fcinfo->args[1].value = key1;
    3136             :         }
    3137             :         else
    3138             :         {
    3139       65542 :             fcinfo->args[0].value = key1;
    3140       65542 :             fcinfo->args[1].value = key0;
    3141             :         }
    3142       65542 :         fcinfo->isnull = false;
    3143       65542 :         fresult = FunctionCallInvoke(fcinfo);
    3144       65542 :         return (!fcinfo->isnull && DatumGetBool(fresult));
    3145             :     }
    3146             : }
    3147             : 
    3148             : /*
    3149             :  *      neqjoinsel      - Join selectivity of "!="
    3150             :  */
    3151             : Datum
    3152        3770 : neqjoinsel(PG_FUNCTION_ARGS)
    3153             : {
    3154        3770 :     PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0);
    3155        3770 :     Oid         operator = PG_GETARG_OID(1);
    3156        3770 :     List       *args = (List *) PG_GETARG_POINTER(2);
    3157        3770 :     JoinType    jointype = (JoinType) PG_GETARG_INT16(3);
    3158        3770 :     SpecialJoinInfo *sjinfo = (SpecialJoinInfo *) PG_GETARG_POINTER(4);
    3159        3770 :     Oid         collation = PG_GET_COLLATION();
    3160             :     float8      result;
    3161             : 
    3162        3770 :     if (jointype == JOIN_SEMI || jointype == JOIN_ANTI)
    3163        1278 :     {
    3164             :         /*
    3165             :          * For semi-joins, if there is more than one distinct value in the RHS
    3166             :          * relation then every non-null LHS row must find a row to join since
    3167             :          * it can only be equal to one of them.  We'll assume that there is
    3168             :          * always more than one distinct RHS value for the sake of stability,
    3169             :          * though in theory we could have special cases for empty RHS
    3170             :          * (selectivity = 0) and single-distinct-value RHS (selectivity =
    3171             :          * fraction of LHS that has the same value as the single RHS value).
    3172             :          *
    3173             :          * For anti-joins, if we use the same assumption that there is more
    3174             :          * than one distinct key in the RHS relation, then every non-null LHS
    3175             :          * row must be suppressed by the anti-join.
    3176             :          *
    3177             :          * So either way, the selectivity estimate should be 1 - nullfrac.
    3178             :          */
    3179             :         VariableStatData leftvar;
    3180             :         VariableStatData rightvar;
    3181             :         bool        reversed;
    3182             :         HeapTuple   statsTuple;
    3183             :         double      nullfrac;
    3184             : 
    3185        1278 :         get_join_variables(root, args, sjinfo, &leftvar, &rightvar, &reversed);
    3186        1278 :         statsTuple = reversed ? rightvar.statsTuple : leftvar.statsTuple;
    3187        1278 :         if (HeapTupleIsValid(statsTuple))
    3188        1042 :             nullfrac = ((Form_pg_statistic) GETSTRUCT(statsTuple))->stanullfrac;
    3189             :         else
    3190         236 :             nullfrac = 0.0;
    3191        1278 :         ReleaseVariableStats(leftvar);
    3192        1278 :         ReleaseVariableStats(rightvar);
    3193             : 
    3194        1278 :         result = 1.0 - nullfrac;
    3195             :     }
    3196             :     else
    3197             :     {
    3198             :         /*
    3199             :          * We want 1 - eqjoinsel() where the equality operator is the one
    3200             :          * associated with this != operator, that is, its negator.
    3201             :          */
    3202        2492 :         Oid         eqop = get_negator(operator);
    3203             : 
    3204        2492 :         if (eqop)
    3205             :         {
    3206             :             result =
    3207        2492 :                 DatumGetFloat8(DirectFunctionCall5Coll(eqjoinsel,
    3208             :                                                        collation,
    3209             :                                                        PointerGetDatum(root),
    3210             :                                                        ObjectIdGetDatum(eqop),
    3211             :                                                        PointerGetDatum(args),
    3212             :                                                        Int16GetDatum(jointype),
    3213             :                                                        PointerGetDatum(sjinfo)));
    3214             :         }
    3215             :         else
    3216             :         {
    3217             :             /* Use default selectivity (should we raise an error instead?) */
    3218           0 :             result = DEFAULT_EQ_SEL;
    3219             :         }
    3220        2492 :         result = 1.0 - result;
    3221             :     }
    3222             : 
    3223        3770 :     PG_RETURN_FLOAT8(result);
    3224             : }
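
Both branches of neqjoinsel reduce to simple complements: 1 - nullfrac for SEMI/ANTI joins, and 1 - eqjoinsel(...) otherwise.  A toy sketch with hypothetical numbers (nullfrac and eqsel are made up; no planner structures are involved):

    #include <stdio.h>

    int
    main(void)
    {
        double      nullfrac = 0.02;    /* LHS null fraction, per stats */
        double      eqsel = 0.001;      /* what eqjoinsel would return */

        /* SEMI/ANTI: every non-null LHS row is assumed to find an unequal
         * partner, so selectivity is just 1 - nullfrac. */
        printf("semi/anti: %g\n", 1.0 - nullfrac);  /* 0.98 */

        /* Other join types: complement of the matching "=" estimate. */
        printf("other:     %g\n", 1.0 - eqsel);     /* 0.999 */
        return 0;
    }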
    3225             : 
    3226             : /*
    3227             :  *      scalarltjoinsel - Join selectivity of "<" for scalars
    3228             :  */
    3229             : Datum
    3230         324 : scalarltjoinsel(PG_FUNCTION_ARGS)
    3231             : {
    3232         324 :     PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    3233             : }
    3234             : 
    3235             : /*
    3236             :  *      scalarlejoinsel - Join selectivity of "<=" for scalars
    3237             :  */
    3238             : Datum
    3239         276 : scalarlejoinsel(PG_FUNCTION_ARGS)
    3240             : {
    3241         276 :     PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    3242             : }
    3243             : 
    3244             : /*
    3245             :  *      scalargtjoinsel - Join selectivity of ">" for scalars
    3246             :  */
    3247             : Datum
    3248         276 : scalargtjoinsel(PG_FUNCTION_ARGS)
    3249             : {
    3250         276 :     PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    3251             : }
    3252             : 
    3253             : /*
    3254             :  *      scalargejoinsel - Join selectivity of ">=" for scalars
    3255             :  */
    3256             : Datum
    3257         184 : scalargejoinsel(PG_FUNCTION_ARGS)
    3258             : {
    3259         184 :     PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL);
    3260             : }
    3261             : 
    3262             : 
    3263             : /*
    3264             :  * mergejoinscansel         - Scan selectivity of merge join.
    3265             :  *
    3266             :  * A merge join will stop as soon as it exhausts either input stream.
    3267             :  * Therefore, if we can estimate the ranges of both input variables,
    3268             :  * we can estimate how much of the input will actually be read.  This
    3269             :  * can have a considerable impact on the cost when using indexscans.
    3270             :  *
    3271             :  * Also, we can estimate how much of each input has to be read before the
    3272             :  * first join pair is found, which will affect the join's startup time.
    3273             :  *
    3274             :  * clause should be a clause already known to be mergejoinable.  opfamily,
    3275             :  * cmptype, and nulls_first specify the sort ordering being used.
    3276             :  *
    3277             :  * The outputs are:
    3278             :  *      *leftstart is set to the fraction of the left-hand variable expected
    3279             :  *       to be scanned before the first join pair is found (0 to 1).
    3280             :  *      *leftend is set to the fraction of the left-hand variable expected
    3281             :  *       to be scanned before the join terminates (0 to 1).
    3282             :  *      *rightstart, *rightend similarly for the right-hand variable.
    3283             :  */
    3284             : void
    3285      143264 : mergejoinscansel(PlannerInfo *root, Node *clause,
    3286             :                  Oid opfamily, CompareType cmptype, bool nulls_first,
    3287             :                  Selectivity *leftstart, Selectivity *leftend,
    3288             :                  Selectivity *rightstart, Selectivity *rightend)
    3289             : {
    3290             :     Node       *left,
    3291             :                *right;
    3292             :     VariableStatData leftvar,
    3293             :                 rightvar;
    3294             :     Oid         opmethod;
    3295             :     int         op_strategy;
    3296             :     Oid         op_lefttype;
    3297             :     Oid         op_righttype;
    3298             :     Oid         opno,
    3299             :                 collation,
    3300             :                 lsortop,
    3301             :                 rsortop,
    3302             :                 lstatop,
    3303             :                 rstatop,
    3304             :                 ltop,
    3305             :                 leop,
    3306             :                 revltop,
    3307             :                 revleop;
    3308             :     StrategyNumber ltstrat,
    3309             :                 lestrat,
    3310             :                 gtstrat,
    3311             :                 gestrat;
    3312             :     bool        isgt;
    3313             :     Datum       leftmin,
    3314             :                 leftmax,
    3315             :                 rightmin,
    3316             :                 rightmax;
    3317             :     double      selec;
    3318             : 
    3319             :     /* Set default results if we can't figure anything out. */
    3320             :     /* XXX should default "start" fraction be a bit more than 0? */
    3321      143264 :     *leftstart = *rightstart = 0.0;
    3322      143264 :     *leftend = *rightend = 1.0;
    3323             : 
    3324             :     /* Deconstruct the merge clause */
    3325      143264 :     if (!is_opclause(clause))
    3326           0 :         return;                 /* shouldn't happen */
    3327      143264 :     opno = ((OpExpr *) clause)->opno;
    3328      143264 :     collation = ((OpExpr *) clause)->inputcollid;
    3329      143264 :     left = get_leftop((Expr *) clause);
    3330      143264 :     right = get_rightop((Expr *) clause);
    3331      143264 :     if (!right)
    3332           0 :         return;                 /* shouldn't happen */
    3333             : 
    3334             :     /* Look for stats for the inputs */
    3335      143264 :     examine_variable(root, left, 0, &leftvar);
    3336      143264 :     examine_variable(root, right, 0, &rightvar);
    3337             : 
    3338      143264 :     opmethod = get_opfamily_method(opfamily);
    3339             : 
    3340             :     /* Extract the operator's declared left/right datatypes */
    3341      143264 :     get_op_opfamily_properties(opno, opfamily, false,
    3342             :                                &op_strategy,
    3343             :                                &op_lefttype,
    3344             :                                &op_righttype);
    3345             :     Assert(IndexAmTranslateStrategy(op_strategy, opmethod, opfamily, true) == COMPARE_EQ);
    3346             : 
    3347             :     /*
    3348             :      * Look up the various operators we need.  If we don't find them all, it
    3349             :      * probably means the opfamily is broken, but we just fail silently.
    3350             :      *
    3351             :      * Note: we expect that pg_statistic histograms will be sorted by the '<'
    3352             :      * operator, regardless of which sort direction we are considering.
    3353             :      */
    3354      143264 :     switch (cmptype)
    3355             :     {
    3356      143228 :         case COMPARE_LT:
    3357      143228 :             isgt = false;
    3358      143228 :             ltstrat = IndexAmTranslateCompareType(COMPARE_LT, opmethod, opfamily, true);
    3359      143228 :             lestrat = IndexAmTranslateCompareType(COMPARE_LE, opmethod, opfamily, true);
    3360      143228 :             if (op_lefttype == op_righttype)
    3361             :             {
    3362             :                 /* easy case */
    3363      141454 :                 ltop = get_opfamily_member(opfamily,
    3364             :                                            op_lefttype, op_righttype,
    3365             :                                            ltstrat);
    3366      141454 :                 leop = get_opfamily_member(opfamily,
    3367             :                                            op_lefttype, op_righttype,
    3368             :                                            lestrat);
    3369      141454 :                 lsortop = ltop;
    3370      141454 :                 rsortop = ltop;
    3371      141454 :                 lstatop = lsortop;
    3372      141454 :                 rstatop = rsortop;
    3373      141454 :                 revltop = ltop;
    3374      141454 :                 revleop = leop;
    3375             :             }
    3376             :             else
    3377             :             {
    3378        1774 :                 ltop = get_opfamily_member(opfamily,
    3379             :                                            op_lefttype, op_righttype,
    3380             :                                            ltstrat);
    3381        1774 :                 leop = get_opfamily_member(opfamily,
    3382             :                                            op_lefttype, op_righttype,
    3383             :                                            lestrat);
    3384        1774 :                 lsortop = get_opfamily_member(opfamily,
    3385             :                                               op_lefttype, op_lefttype,
    3386             :                                               ltstrat);
    3387        1774 :                 rsortop = get_opfamily_member(opfamily,
    3388             :                                               op_righttype, op_righttype,
    3389             :                                               ltstrat);
    3390        1774 :                 lstatop = lsortop;
    3391        1774 :                 rstatop = rsortop;
    3392        1774 :                 revltop = get_opfamily_member(opfamily,
    3393             :                                               op_righttype, op_lefttype,
    3394             :                                               ltstrat);
    3395        1774 :                 revleop = get_opfamily_member(opfamily,
    3396             :                                               op_righttype, op_lefttype,
    3397             :                                               lestrat);
    3398             :             }
    3399      143228 :             break;
    3400          36 :         case COMPARE_GT:
    3401             :             /* descending-order case */
    3402          36 :             isgt = true;
    3403          36 :             ltstrat = IndexAmTranslateCompareType(COMPARE_LT, opmethod, opfamily, true);
    3404          36 :             gtstrat = IndexAmTranslateCompareType(COMPARE_GT, opmethod, opfamily, true);
    3405          36 :             gestrat = IndexAmTranslateCompareType(COMPARE_GE, opmethod, opfamily, true);
    3406          36 :             if (op_lefttype == op_righttype)
    3407             :             {
    3408             :                 /* easy case */
    3409          36 :                 ltop = get_opfamily_member(opfamily,
    3410             :                                            op_lefttype, op_righttype,
    3411             :                                            gtstrat);
    3412          36 :                 leop = get_opfamily_member(opfamily,
    3413             :                                            op_lefttype, op_righttype,
    3414             :                                            gestrat);
    3415          36 :                 lsortop = ltop;
    3416          36 :                 rsortop = ltop;
    3417          36 :                 lstatop = get_opfamily_member(opfamily,
    3418             :                                               op_lefttype, op_lefttype,
    3419             :                                               ltstrat);
    3420          36 :                 rstatop = lstatop;
    3421          36 :                 revltop = ltop;
    3422          36 :                 revleop = leop;
    3423             :             }
    3424             :             else
    3425             :             {
    3426           0 :                 ltop = get_opfamily_member(opfamily,
    3427             :                                            op_lefttype, op_righttype,
    3428             :                                            gtstrat);
    3429           0 :                 leop = get_opfamily_member(opfamily,
    3430             :                                            op_lefttype, op_righttype,
    3431             :                                            gestrat);
    3432           0 :                 lsortop = get_opfamily_member(opfamily,
    3433             :                                               op_lefttype, op_lefttype,
    3434             :                                               gtstrat);
    3435           0 :                 rsortop = get_opfamily_member(opfamily,
    3436             :                                               op_righttype, op_righttype,
    3437             :                                               gtstrat);
    3438           0 :                 lstatop = get_opfamily_member(opfamily,
    3439             :                                               op_lefttype, op_lefttype,
    3440             :                                               ltstrat);
    3441           0 :                 rstatop = get_opfamily_member(opfamily,
    3442             :                                               op_righttype, op_righttype,
    3443             :                                               ltstrat);
    3444           0 :                 revltop = get_opfamily_member(opfamily,
    3445             :                                               op_righttype, op_lefttype,
    3446             :                                               gtstrat);
    3447           0 :                 revleop = get_opfamily_member(opfamily,
    3448             :                                               op_righttype, op_lefttype,
    3449             :                                               gestrat);
    3450             :             }
    3451          36 :             break;
    3452           0 :         default:
    3453           0 :             goto fail;          /* shouldn't get here */
    3454             :     }
    3455             : 
    3456      143264 :     if (!OidIsValid(lsortop) ||
    3457      143264 :         !OidIsValid(rsortop) ||
    3458      143264 :         !OidIsValid(lstatop) ||
    3459      143264 :         !OidIsValid(rstatop) ||
    3460      143252 :         !OidIsValid(ltop) ||
    3461      143252 :         !OidIsValid(leop) ||
    3462      143252 :         !OidIsValid(revltop) ||
    3463             :         !OidIsValid(revleop))
    3464          12 :         goto fail;              /* insufficient info in catalogs */
    3465             : 
    3466             :     /* Try to get ranges of both inputs */
    3467      143252 :     if (!isgt)
    3468             :     {
    3469      143216 :         if (!get_variable_range(root, &leftvar, lstatop, collation,
    3470             :                                 &leftmin, &leftmax))
    3471       34760 :             goto fail;          /* no range available from stats */
    3472      108456 :         if (!get_variable_range(root, &rightvar, rstatop, collation,
    3473             :                                 &rightmin, &rightmax))
    3474       25442 :             goto fail;          /* no range available from stats */
    3475             :     }
    3476             :     else
    3477             :     {
    3478             :         /* need to swap the max and min */
    3479          36 :         if (!get_variable_range(root, &leftvar, lstatop, collation,
    3480             :                                 &leftmax, &leftmin))
    3481          30 :             goto fail;          /* no range available from stats */
    3482           6 :         if (!get_variable_range(root, &rightvar, rstatop, collation,
    3483             :                                 &rightmax, &rightmin))
    3484           0 :             goto fail;          /* no range available from stats */
    3485             :     }
    3486             : 
    3487             :     /*
    3488             :      * Now, the fraction of the left variable that will be scanned is the
    3489             :      * fraction that's <= the right-side maximum value.  But only believe
    3490             :      * non-default estimates, else stick with our 1.0.
    3491             :      */
    3492       83020 :     selec = scalarineqsel(root, leop, isgt, true, collation, &leftvar,
    3493             :                           rightmax, op_righttype);
    3494       83020 :     if (selec != DEFAULT_INEQ_SEL)
    3495       83014 :         *leftend = selec;
    3496             : 
    3497             :     /* And similarly for the right variable. */
    3498       83020 :     selec = scalarineqsel(root, revleop, isgt, true, collation, &rightvar,
    3499             :                           leftmax, op_lefttype);
    3500       83020 :     if (selec != DEFAULT_INEQ_SEL)
    3501       83020 :         *rightend = selec;
    3502             : 
    3503             :     /*
    3504             :      * Only one of the two "end" fractions can really be less than 1.0;
    3505             :      * believe the smaller estimate and reset the other one to exactly 1.0. If
    3506             :      * we get exactly equal estimates (as can easily happen with self-joins),
    3507             :      * believe neither.
    3508             :      */
    3509       83020 :     if (*leftend > *rightend)
    3510       24754 :         *leftend = 1.0;
    3511       58266 :     else if (*leftend < *rightend)
    3512       33650 :         *rightend = 1.0;
    3513             :     else
    3514       24616 :         *leftend = *rightend = 1.0;
    3515             : 
    3516             :     /*
    3517             :      * Also, the fraction of the left variable that will be scanned before the
    3518             :      * first join pair is found is the fraction that's < the right-side
    3519             :      * minimum value.  But only believe non-default estimates, else stick with
    3520             :      * our own default.
    3521             :      */
    3522       83020 :     selec = scalarineqsel(root, ltop, isgt, false, collation, &leftvar,
    3523             :                           rightmin, op_righttype);
    3524       83020 :     if (selec != DEFAULT_INEQ_SEL)
    3525       83020 :         *leftstart = selec;
    3526             : 
    3527             :     /* And similarly for the right variable. */
    3528       83020 :     selec = scalarineqsel(root, revltop, isgt, false, collation, &rightvar,
    3529             :                           leftmin, op_lefttype);
    3530       83020 :     if (selec != DEFAULT_INEQ_SEL)
    3531       83020 :         *rightstart = selec;
    3532             : 
    3533             :     /*
    3534             :      * Only one of the two "start" fractions can really be more than zero;
    3535             :      * believe the larger estimate and reset the other one to exactly 0.0. If
    3536             :      * we get exactly equal estimates (as can easily happen with self-joins),
    3537             :      * believe neither.
    3538             :      */
    3539       83020 :     if (*leftstart < *rightstart)
    3540       16984 :         *leftstart = 0.0;
    3541       66036 :     else if (*leftstart > *rightstart)
    3542       24436 :         *rightstart = 0.0;
    3543             :     else
    3544       41600 :         *leftstart = *rightstart = 0.0;
    3545             : 
    3546             :     /*
    3547             :      * If the sort order is nulls-first, we're going to have to skip over any
    3548             :      * nulls too.  These would not have been counted by scalarineqsel, and we
    3549             :      * can safely add in this fraction regardless of whether we believe
    3550             :      * scalarineqsel's results or not.  But be sure to clamp the sum to 1.0!
    3551             :      */
    3552       83020 :     if (nulls_first)
    3553             :     {
    3554             :         Form_pg_statistic stats;
    3555             : 
    3556           6 :         if (HeapTupleIsValid(leftvar.statsTuple))
    3557             :         {
    3558           6 :             stats = (Form_pg_statistic) GETSTRUCT(leftvar.statsTuple);
    3559           6 :             *leftstart += stats->stanullfrac;
    3560           6 :             CLAMP_PROBABILITY(*leftstart);
    3561           6 :             *leftend += stats->stanullfrac;
    3562           6 :             CLAMP_PROBABILITY(*leftend);
    3563             :         }
    3564           6 :         if (HeapTupleIsValid(rightvar.statsTuple))
    3565             :         {
    3566           6 :             stats = (Form_pg_statistic) GETSTRUCT(rightvar.statsTuple);
    3567           6 :             *rightstart += stats->stanullfrac;
    3568           6 :             CLAMP_PROBABILITY(*rightstart);
    3569           6 :             *rightend += stats->stanullfrac;
    3570           6 :             CLAMP_PROBABILITY(*rightend);
    3571             :         }
    3572             :     }
    3573             : 
    3574             :     /* Disbelieve start >= end, just in case that can happen */
    3575       83020 :     if (*leftstart >= *leftend)
    3576             :     {
    3577         164 :         *leftstart = 0.0;
    3578         164 :         *leftend = 1.0;
    3579             :     }
    3580       83020 :     if (*rightstart >= *rightend)
    3581             :     {
    3582        1110 :         *rightstart = 0.0;
    3583        1110 :         *rightend = 1.0;
    3584             :     }
    3585             : 
    3586       81910 : fail:
    3587      143264 :     ReleaseVariableStats(leftvar);
    3588      143264 :     ReleaseVariableStats(rightvar);
    3589             : }
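
The core of mergejoinscansel is four scalarineqsel calls: each side's "end" fraction is the portion of that side <= the other side's maximum, each "start" fraction is the portion < the other side's minimum, and then the weaker estimate of each pair is reset to its neutral value.  The following standalone sketch replaces the histogram math with a direct count over small sorted arrays; frac_below and the sample data are hypothetical.

    #include <stdio.h>

    /* Fraction of a sorted array that is <= (or <) a bound: a crude
     * stand-in for scalarineqsel's histogram interpolation. */
    static double
    frac_below(const double *vals, int n, double bound, int inclusive)
    {
        int         cnt = 0;

        for (int i = 0; i < n; i++)
            if (inclusive ? vals[i] <= bound : vals[i] < bound)
                cnt++;
        return (double) cnt / n;
    }

    int
    main(void)
    {
        double      left[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        double      right[] = {4, 5, 6, 7};
        int         nl = 10,
                    nr = 4;

        /* "end": how much of each input is read before one is exhausted */
        double      leftend = frac_below(left, nl, right[nr - 1], 1);
        double      rightend = frac_below(right, nr, left[nl - 1], 1);

        /* "start": how much is skipped before the first join pair */
        double      leftstart = frac_below(left, nl, right[0], 0);
        double      rightstart = frac_below(right, nr, left[0], 0);

        /* Only one side can really stop early / start late; believe the
         * stronger estimate and reset the other, as the real code does. */
        if (leftend > rightend)
            leftend = 1.0;
        else if (leftend < rightend)
            rightend = 1.0;
        else
            leftend = rightend = 1.0;

        if (leftstart < rightstart)
            leftstart = 0.0;
        else if (leftstart > rightstart)
            rightstart = 0.0;
        else
            leftstart = rightstart = 0.0;

        printf("left:  scan [%g, %g]\n", leftstart, leftend);
        printf("right: scan [%g, %g]\n", rightstart, rightend);
        return 0;
    }

For these inputs the left side is scanned over the fraction [0.3, 0.7] while the right side is scanned in full, which is the shape of answer the planner feeds into merge-join costing.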
    3590             : 
    3591             : 
    3592             : /*
    3593             :  *  matchingsel -- generic matching-operator selectivity support
    3594             :  *
    3595             :  * Use these for any operators that (a) are on data types for which we collect
    3596             :  * standard statistics, and (b) have behavior for which the default estimate
    3597             :  * (twice DEFAULT_EQ_SEL) is sane.  Typically that is good for match-like
    3598             :  * operators.
    3599             :  */
    3600             : 
    3601             : Datum
    3602        1130 : matchingsel(PG_FUNCTION_ARGS)
    3603             : {
    3604        1130 :     PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0);
    3605        1130 :     Oid         operator = PG_GETARG_OID(1);
    3606        1130 :     List       *args = (List *) PG_GETARG_POINTER(2);
    3607        1130 :     int         varRelid = PG_GETARG_INT32(3);
    3608        1130 :     Oid         collation = PG_GET_COLLATION();
    3609             :     double      selec;
    3610             : 
    3611             :     /* Use generic restriction selectivity logic. */
    3612        1130 :     selec = generic_restriction_selectivity(root, operator, collation,
    3613             :                                             args, varRelid,
    3614             :                                             DEFAULT_MATCHING_SEL);
    3615             : 
    3616        1130 :     PG_RETURN_FLOAT8((float8) selec);
    3617             : }
    3618             : 
    3619             : Datum
    3620           6 : matchingjoinsel(PG_FUNCTION_ARGS)
    3621             : {
    3622             :     /* Just punt, for the moment. */
    3623           6 :     PG_RETURN_FLOAT8(DEFAULT_MATCHING_SEL);
    3624             : }
    3625             : 
    3626             : 
    3627             : /*
    3628             :  * Helper routine for estimate_num_groups: add an item to a list of
    3629             :  * GroupVarInfos, but only if it's not known equal to any of the existing
    3630             :  * entries.
    3631             :  */
    3632             : typedef struct
    3633             : {
    3634             :     Node       *var;            /* might be an expression, not just a Var */
    3635             :     RelOptInfo *rel;            /* relation it belongs to */
    3636             :     double      ndistinct;      /* # distinct values */
    3637             :     bool        isdefault;      /* true if DEFAULT_NUM_DISTINCT was used */
    3638             : } GroupVarInfo;
    3639             : 
    3640             : static List *
    3641      397288 : add_unique_group_var(PlannerInfo *root, List *varinfos,
    3642             :                      Node *var, VariableStatData *vardata)
    3643             : {
    3644             :     GroupVarInfo *varinfo;
    3645             :     double      ndistinct;
    3646             :     bool        isdefault;
    3647             :     ListCell   *lc;
    3648             : 
    3649      397288 :     ndistinct = get_variable_numdistinct(vardata, &isdefault);
    3650             : 
    3651             :     /*
    3652             :      * The nullingrels bits within the var could cause the same var to be
    3653             :      * counted multiple times if it's marked with different nullingrels.  They
    3654             :      * could also prevent us from matching the var to the expressions in
    3655             :      * extended statistics (see estimate_multivariate_ndistinct).  So strip
    3656             :      * them out first.
    3657             :      */
    3658      397288 :     var = remove_nulling_relids(var, root->outer_join_rels, NULL);
    3659             : 
    3660      480306 :     foreach(lc, varinfos)
    3661             :     {
    3662       84150 :         varinfo = (GroupVarInfo *) lfirst(lc);
    3663             : 
    3664             :         /* Drop exact duplicates */
    3665       84150 :         if (equal(var, varinfo->var))
    3666        1132 :             return varinfos;
    3667             : 
    3668             :         /*
    3669             :          * Drop known-equal vars, but only if they belong to different
    3670             :          * relations (see comments for estimate_num_groups).  We aren't too
    3671             :          * fussy about the semantics of "equal" here.
    3672             :          */
    3673       90294 :         if (vardata->rel != varinfo->rel &&
    3674        7000 :             exprs_known_equal(root, var, varinfo->var, InvalidOid))
    3675             :         {
    3676         300 :             if (varinfo->ndistinct <= ndistinct)
    3677             :             {
    3678             :                 /* Keep older item, forget new one */
    3679         276 :                 return varinfos;
    3680             :             }
    3681             :             else
    3682             :             {
    3683             :                 /* Delete the older item */
    3684          24 :                 varinfos = foreach_delete_current(varinfos, lc);
    3685             :             }
    3686             :         }
    3687             :     }
    3688             : 
    3689      396156 :     varinfo = palloc_object(GroupVarInfo);
    3690             : 
    3691      396156 :     varinfo->var = var;
    3692      396156 :     varinfo->rel = vardata->rel;
    3693      396156 :     varinfo->ndistinct = ndistinct;
    3694      396156 :     varinfo->isdefault = isdefault;
    3695      396156 :     varinfos = lappend(varinfos, varinfo);
    3696      396156 :     return varinfos;
    3697             : }
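
add_unique_group_var's dedup policy is: drop exact duplicates outright, and for known-equal vars of different rels keep only the entry with the smaller ndistinct.  Below is a self-contained sketch of that keep-the-smaller rule; VarInfo, add_unique, and the name-based notion of "known equal" are simplifications invented for illustration, not the planner's equivalence-class test.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for GroupVarInfo: a name, a rel id, and an
     * ndistinct estimate. */
    typedef struct
    {
        const char *name;
        int         relid;
        double      ndistinct;
    } VarInfo;

    /*
     * Exact duplicates (same name, same rel) are ignored; "known equal"
     * vars of different rels keep only the smaller-ndistinct entry.
     */
    static int
    add_unique(VarInfo *list, int n, VarInfo newvar)
    {
        for (int i = 0; i < n; i++)
        {
            if (strcmp(list[i].name, newvar.name) == 0)
            {
                if (list[i].relid == newvar.relid)
                    return n;           /* exact duplicate: ignore */
                if (list[i].ndistinct <= newvar.ndistinct)
                    return n;           /* keep older, smaller entry */
                list[i] = list[n - 1];  /* drop older, larger entry */
                n--;
                break;
            }
        }
        list[n++] = newvar;
        return n;
    }

    int
    main(void)
    {
        VarInfo     list[8];
        int         n = 0;

        n = add_unique(list, n, (VarInfo) {"x", 1, 1000});
        n = add_unique(list, n, (VarInfo) {"x", 2, 50});
        n = add_unique(list, n, (VarInfo) {"y", 1, 10});

        for (int i = 0; i < n; i++)
            printf("%s (rel %d): %g\n",
                   list[i].name, list[i].relid, list[i].ndistinct);
        return 0;
    }

Running it leaves two entries: x from rel 2 with ndistinct 50 (the rel-1 copy with 1000 is dropped) and y from rel 1.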
    3698             : 
    3699             : /*
    3700             :  * estimate_num_groups      - Estimate number of groups in a grouped query
    3701             :  *
    3702             :  * Given a query having a GROUP BY clause, estimate how many groups there
    3703             :  * will be --- ie, the number of distinct combinations of the GROUP BY
    3704             :  * expressions.
    3705             :  *
    3706             :  * This routine is also used to estimate the number of rows emitted by
    3707             :  * a DISTINCT filtering step; that is an isomorphic problem.  (Note:
    3708             :  * actually, we only use it for DISTINCT when there's no grouping or
    3709             :  * aggregation ahead of the DISTINCT.)
    3710             :  *
    3711             :  * Inputs:
    3712             :  *  root - the query
    3713             :  *  groupExprs - list of expressions being grouped by
    3714             :  *  input_rows - number of rows estimated to arrive at the group/unique
    3715             :  *      filter step
    3716             :  *  pgset - NULL, or a List** pointing to a grouping set to filter the
    3717             :  *      groupExprs against
    3718             :  *
    3719             :  * Outputs:
    3720             :  *  estinfo - When passed as non-NULL, the function will set bits in the
    3721             :  *      "flags" field in order to provide callers with additional information
    3722             :  *      about the estimation.  Currently, we only set the SELFLAG_USED_DEFAULT
    3723             :  *      bit if we used any default values in the estimation.
    3724             :  *
    3725             :  * Given the lack of any cross-correlation statistics in the system, it's
    3726             :  * impossible to do anything really trustworthy with GROUP BY conditions
    3727             :  * involving multiple Vars.  We should however avoid assuming the worst
    3728             :  * case (all possible cross-product terms actually appear as groups) since
    3729             :  * very often the grouped-by Vars are highly correlated.  Our current approach
    3730             :  * is as follows:
    3731             :  *  1.  Expressions yielding boolean are assumed to contribute two groups,
    3732             :  *      independently of their content, and are ignored in the subsequent
    3733             :  *      steps.  This is mainly because tests like "col IS NULL" break the
    3734             :  *      heuristic used in step 2 especially badly.
    3735             :  *  2.  Reduce the given expressions to a list of unique Vars used.  For
    3736             :  *      example, GROUP BY a, a + b is treated the same as GROUP BY a, b.
    3737             :  *      It is clearly correct not to count the same Var more than once.
    3738             :  *      It is also reasonable to treat f(x) the same as x: f() cannot
    3739             :  *      increase the number of distinct values (unless it is volatile,
    3740             :  *      which we consider unlikely for grouping), but it probably won't
    3741             :  *      reduce the number of distinct values much either.
    3742             :  *      As a special case, if a GROUP BY expression can be matched to an
    3743             :  *      expressional index for which we have statistics, then we treat the
    3744             :  *      whole expression as though it were just a Var.
    3745             :  *  3.  If the list contains Vars of different relations that are known equal
    3746             :  *      due to equivalence classes, then drop all but one of the Vars from each
    3747             :  *      known-equal set, keeping the one with smallest estimated # of values
    3748             :  *      (since the extra values of the others can't appear in joined rows).
    3749             :  *      Note the reason we only consider Vars of different relations is that
    3750             :  *      if we considered ones of the same rel, we'd be double-counting the
    3751             :  *      restriction selectivity of the equality in the next step.
    3752             :  *  4.  For Vars within a single source rel, we multiply together the numbers
    3753             :  *      of values, clamp to the number of rows in the rel (divided by 10 if
    3754             :  *      more than one Var), and then multiply by a factor based on the
    3755             :  *      selectivity of the restriction clauses for that rel.  When there's
    3756             :  *      more than one Var, the initial product is probably too high (it's the
    3757             :  *      worst case) but clamping to a fraction of the rel's rows seems to be a
    3758             :  *      helpful heuristic for not letting the estimate get out of hand.  (The
    3759             :  *      factor of 10 is derived from pre-Postgres-7.4 practice.)  The factor
    3760             :  *      we multiply by to adjust for the restriction selectivity assumes that
    3761             :  *      the restriction clauses are independent of the grouping, which may not
    3762             :  *      be a valid assumption, but it's hard to do better.
    3763             :  *  5.  If there are Vars from multiple rels, we repeat step 4 for each such
    3764             :  *      rel, and multiply the results together.
    3765             :  * Note that rels not containing grouped Vars are ignored completely, as are
    3766             :  * join clauses.  Such rels cannot increase the number of groups, and we
    3767             :  * assume such clauses do not reduce the number either (somewhat bogus,
    3768             :  * but we don't have the info to do better).
    3769             :  */
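As a worked illustration of steps 2 through 4 above, the following minimal
standalone sketch (not part of selfuncs.c; the table size, surviving row
count, and per-column ndistinct values are purely hypothetical) shows how the
per-Var ndistinct product is clamped and then adjusted for restriction
selectivity:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* hypothetical rel: 100000 tuples, 25000 rows survive restrictions */
        double      tuples = 100000.0;
        double      rows = 25000.0;

        /* GROUP BY a, b with assumed ndistinct(a) = 50, ndistinct(b) = 400 */
        double      reldistinct = 50.0 * 400.0;    /* worst-case product */
        double      relmaxndistinct = 400.0;
        double      clamp = tuples * 0.1;   /* >1 Var: clamp to tuples / 10 */

        if (clamp < relmaxndistinct)
            clamp = relmaxndistinct;    /* never clamp below largest ndistinct */
        if (reldistinct > clamp)
            reldistinct = clamp;        /* 20000 -> 10000 */

        /* adjust for restriction selectivity (Dell'Era approximation) */
        reldistinct *= (1 - pow((tuples - rows) / tuples,
                                tuples / reldistinct));

        printf("estimated groups: %.0f\n", ceil(reldistinct));
        return 0;
    }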
    3770             : double
    3771      345682 : estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
    3772             :                     List **pgset, EstimationInfo *estinfo)
    3773             : {
    3774      345682 :     List       *varinfos = NIL;
    3775      345682 :     double      srf_multiplier = 1.0;
    3776             :     double      numdistinct;
    3777             :     ListCell   *l;
    3778             :     int         i;
    3779             : 
    3780             :     /* Zero the estinfo output parameter, if non-NULL */
    3781      345682 :     if (estinfo != NULL)
    3782      295294 :         memset(estinfo, 0, sizeof(EstimationInfo));
    3783             : 
    3784             :     /*
    3785             :      * We don't ever want to return an estimate of zero groups, as that tends
    3786             :      * to lead to division-by-zero and other unpleasantness.  The input_rows
    3787             :      * estimate is usually already at least 1, but clamp it just in case it
    3788             :      * isn't.
    3789             :      */
    3790      345682 :     input_rows = clamp_row_est(input_rows);
    3791             : 
    3792             :     /*
    3793             :      * If no grouping columns, there's exactly one group.  (This can't happen
    3794             :      * for normal cases with GROUP BY or DISTINCT, but it is possible for
    3795             :      * corner cases with set operations.)
    3796             :      */
    3797      345682 :     if (groupExprs == NIL || (pgset && *pgset == NIL))
    3798        1136 :         return 1.0;
    3799             : 
    3800             :     /*
    3801             :      * Count groups derived from boolean grouping expressions.  For other
    3802             :      * expressions, find the unique Vars used, treating an expression as a Var
    3803             :      * if we can find stats for it.  For each one, record the statistical
    3804             :      * estimate of number of distinct values (total in its table, without
    3805             :      * regard for filtering).
    3806             :      */
    3807      344546 :     numdistinct = 1.0;
    3808             : 
    3809      344546 :     i = 0;
    3810      740038 :     foreach(l, groupExprs)
    3811             :     {
    3812      395540 :         Node       *groupexpr = (Node *) lfirst(l);
    3813             :         double      this_srf_multiplier;
    3814             :         VariableStatData vardata;
    3815             :         List       *varshere;
    3816             :         ListCell   *l2;
    3817             : 
    3818             :         /* is expression in this grouping set? */
    3819      395540 :         if (pgset && !list_member_int(*pgset, i++))
    3820      326774 :             continue;
    3821             : 
    3822             :         /*
    3823             :          * Set-returning functions in grouping columns are a bit problematic.
    3824             :          * The code below will effectively ignore their SRF nature and come up
    3825             :          * with a numdistinct estimate as though they were scalar functions.
    3826             :          * We compensate by scaling up the end result by the largest SRF
    3827             :          * rowcount estimate.  (This will be an overestimate if the SRF
    3828             :          * produces multiple copies of any output value, but it seems best to
    3829             :          * assume the SRF's outputs are distinct.  In any case, it's probably
    3830             :          * pointless to worry too much about this without much better
    3831             :          * estimates for SRF output rowcounts than we have today.)
    3832             :          */
    3833      394728 :         this_srf_multiplier = expression_returns_set_rows(root, groupexpr);
    3834      394728 :         if (srf_multiplier < this_srf_multiplier)
    3835         144 :             srf_multiplier = this_srf_multiplier;
    3836             : 
    3837             :         /* Short-circuit for expressions returning boolean */
    3838      394728 :         if (exprType(groupexpr) == BOOLOID)
    3839             :         {
    3840         204 :             numdistinct *= 2.0;
    3841         204 :             continue;
    3842             :         }
    3843             : 
    3844             :         /*
    3845             :          * If examine_variable is able to deduce anything about the GROUP BY
    3846             :          * expression, treat it as a single variable even if it's really more
    3847             :          * complicated.
    3848             :          *
    3849             :          * XXX This has the consequence that if there's a statistics object on
    3850             :          * the expression, we don't split it into individual Vars. This
    3851             :          * affects our selection of statistics in
    3852             :          * estimate_multivariate_ndistinct, because it's probably better to
    3853             :          * use a more accurate estimate for each expression and treat them
    3854             :          * as independent than to combine estimates for the extracted
    3855             :          * variables when we don't know how those relate to the expressions.
    3856             :          */
    3857      394524 :         examine_variable(root, groupexpr, 0, &vardata);
    3858      394524 :         if (HeapTupleIsValid(vardata.statsTuple) || vardata.isunique)
    3859             :         {
    3860      325074 :             varinfos = add_unique_group_var(root, varinfos,
    3861             :                                             groupexpr, &vardata);
    3862      325074 :             ReleaseVariableStats(vardata);
    3863      325074 :             continue;
    3864             :         }
    3865       69450 :         ReleaseVariableStats(vardata);
    3866             : 
    3867             :         /*
    3868             :          * Else pull out the component Vars.  Handle PlaceHolderVars by
    3869             :          * recursing into their arguments (effectively assuming that the
    3870             :          * PlaceHolderVar doesn't change the number of groups, which boils
    3871             :          * down to ignoring the possible addition of nulls to the result set).
    3872             :          */
    3873       69450 :         varshere = pull_var_clause(groupexpr,
    3874             :                                    PVC_RECURSE_AGGREGATES |
    3875             :                                    PVC_RECURSE_WINDOWFUNCS |
    3876             :                                    PVC_RECURSE_PLACEHOLDERS);
    3877             : 
    3878             :         /*
    3879             :          * If we find any variable-free GROUP BY item, then either it is a
    3880             :          * constant (and we can ignore it) or it contains a volatile function;
    3881             :          * in the latter case we punt and assume that each input row will
    3882             :          * yield a distinct group.
    3883             :          */
    3884       69450 :         if (varshere == NIL)
    3885             :         {
    3886         732 :             if (contain_volatile_functions(groupexpr))
    3887          48 :                 return input_rows;
    3888         684 :             continue;
    3889             :         }
    3890             : 
    3891             :         /*
    3892             :          * Else add variables to varinfos list
    3893             :          */
    3894      140932 :         foreach(l2, varshere)
    3895             :         {
    3896       72214 :             Node       *var = (Node *) lfirst(l2);
    3897             : 
    3898       72214 :             examine_variable(root, var, 0, &vardata);
    3899       72214 :             varinfos = add_unique_group_var(root, varinfos, var, &vardata);
    3900       72214 :             ReleaseVariableStats(vardata);
    3901             :         }
    3902             :     }
    3903             : 
    3904             :     /*
    3905             :      * If now no Vars, we must have an all-constant or all-boolean GROUP BY
    3906             :      * list.
    3907             :      */
    3908      344498 :     if (varinfos == NIL)
    3909             :     {
    3910             :         /* Apply SRF multiplier as we would do in the long path */
    3911         400 :         numdistinct *= srf_multiplier;
    3912             :         /* Round off */
    3913         400 :         numdistinct = ceil(numdistinct);
    3914             :         /* Guard against out-of-range answers */
    3915         400 :         if (numdistinct > input_rows)
    3916          44 :             numdistinct = input_rows;
    3917         400 :         if (numdistinct < 1.0)
    3918           0 :             numdistinct = 1.0;
    3919         400 :         return numdistinct;
    3920             :     }
    3921             : 
    3922             :     /*
    3923             :      * Group Vars by relation and estimate total numdistinct.
    3924             :      *
    3925             :      * For each iteration of the outer loop, we process the frontmost Var in
    3926             :      * varinfos, plus all other Vars in the same relation.  We leave these
    3927             :      * Vars out of the newvarinfos list used for the next iteration.  This
    3928             :      * is the easiest way to group Vars of the same rel together.
    3929             :      */
    3930             :     do
    3931             :     {
    3932      347012 :         GroupVarInfo *varinfo1 = (GroupVarInfo *) linitial(varinfos);
    3933      347012 :         RelOptInfo *rel = varinfo1->rel;
    3934      347012 :         double      reldistinct = 1;
    3935      347012 :         double      relmaxndistinct = reldistinct;
    3936      347012 :         int         relvarcount = 0;
    3937      347012 :         List       *newvarinfos = NIL;
    3938      347012 :         List       *relvarinfos = NIL;
    3939             : 
    3940             :         /*
    3941             :          * Split the list of varinfos in two - one for the current rel, one
    3942             :          * for remaining Vars on other rels.
    3943             :          */
    3944      347012 :         relvarinfos = lappend(relvarinfos, varinfo1);
    3945      401720 :         for_each_from(l, varinfos, 1)
    3946             :         {
    3947       54708 :             GroupVarInfo *varinfo2 = (GroupVarInfo *) lfirst(l);
    3948             : 
    3949       54708 :             if (varinfo2->rel == varinfo1->rel)
    3950             :             {
    3951             :                 /* varinfos on current rel */
    3952       49120 :                 relvarinfos = lappend(relvarinfos, varinfo2);
    3953             :             }
    3954             :             else
    3955             :             {
    3956             :                 /* not time to process varinfo2 yet */
    3957        5588 :                 newvarinfos = lappend(newvarinfos, varinfo2);
    3958             :             }
    3959             :         }
    3960             : 
    3961             :         /*
    3962             :          * Get the numdistinct estimate for the Vars of this rel.  We
    3963             :          * iteratively search for multivariate n-distinct with maximum number
    3964             :          * of vars; assuming that each var group is independent of the others,
    3965             :          * we multiply them together.  Any remaining relvarinfos after no more
    3966             :          * multivariate matches are found are assumed independent too, so
    3967             :          * their individual ndistinct estimates are multiplied also.
    3968             :          *
    3969             :          * While iterating, count how many separate numdistinct values we
    3970             :          * apply.  We apply a fudge factor below, but only if we multiplied
    3971             :          * more than one such value.
    3972             :          */
    3973      694150 :         while (relvarinfos)
    3974             :         {
    3975             :             double      mvndistinct;
    3976             : 
    3977      347138 :             if (estimate_multivariate_ndistinct(root, rel, &relvarinfos,
    3978             :                                                 &mvndistinct))
    3979             :             {
    3980         414 :                 reldistinct *= mvndistinct;
    3981         414 :                 if (relmaxndistinct < mvndistinct)
    3982         402 :                     relmaxndistinct = mvndistinct;
    3983         414 :                 relvarcount++;
    3984             :             }
    3985             :             else
    3986             :             {
    3987      741980 :                 foreach(l, relvarinfos)
    3988             :                 {
    3989      395256 :                     GroupVarInfo *varinfo2 = (GroupVarInfo *) lfirst(l);
    3990             : 
    3991      395256 :                     reldistinct *= varinfo2->ndistinct;
    3992      395256 :                     if (relmaxndistinct < varinfo2->ndistinct)
    3993      348266 :                         relmaxndistinct = varinfo2->ndistinct;
    3994      395256 :                     relvarcount++;
    3995             : 
    3996             :                     /*
    3997             :                      * If varinfo2's isdefault is set, set the
    3998             :                      * SELFLAG_USED_DEFAULT bit in the EstimationInfo.
    3999             :                      */
    4000      395256 :                     if (estinfo != NULL && varinfo2->isdefault)
    4001       19072 :                         estinfo->flags |= SELFLAG_USED_DEFAULT;
    4002             :                 }
    4003             : 
    4004             :                 /* we're done with this relation */
    4005      346724 :                 relvarinfos = NIL;
    4006             :             }
    4007             :         }
    4008             : 
    4009             :         /*
    4010             :          * Sanity check --- don't divide by zero if empty relation.
    4011             :          */
    4012             :         Assert(IS_SIMPLE_REL(rel));
    4013      347012 :         if (rel->tuples > 0)
    4014             :         {
    4015             :             /*
    4016             :              * Clamp to size of rel, or size of rel / 10 if multiple Vars. The
    4017             :              * fudge factor is because the Vars are probably correlated but we
    4018             :              * don't know by how much.  We should never clamp to less than the
    4019             :              * largest ndistinct value for any of the Vars, though, since
    4020             :              * there will surely be at least that many groups.
    4021             :              */
    4022      345968 :             double      clamp = rel->tuples;
    4023             : 
    4024      345968 :             if (relvarcount > 1)
    4025             :             {
    4026       44330 :                 clamp *= 0.1;
    4027       44330 :                 if (clamp < relmaxndistinct)
    4028             :                 {
    4029       41652 :                     clamp = relmaxndistinct;
    4030             :                     /* for sanity in case some ndistinct is too large: */
    4031       41652 :                     if (clamp > rel->tuples)
    4032          78 :                         clamp = rel->tuples;
    4033             :                 }
    4034             :             }
    4035      345968 :             if (reldistinct > clamp)
    4036       36220 :                 reldistinct = clamp;
    4037             : 
    4038             :             /*
    4039             :              * Update the estimate based on the restriction selectivity,
    4040             :              * guarding against division by zero when reldistinct is zero.
    4041             :              * Also skip this if we know that we are returning all rows.
    4042             :              */
    4043      345968 :             if (reldistinct > 0 && rel->rows < rel->tuples)
    4044             :             {
    4045             :                 /*
    4046             :                  * Given a table containing N rows with n distinct values in a
    4047             :                  * uniform distribution, if we select p rows at random then
    4048             :                  * the expected number of distinct values selected is
    4049             :                  *
    4050             :                  * n * (1 - product((N-N/n-i)/(N-i), i=0..p-1))
    4051             :                  *
    4052             :                  * = n * (1 - (N-N/n)! / (N-N/n-p)! * (N-p)! / N!)
    4053             :                  *
    4054             :                  * See "Approximating block accesses in database
    4055             :                  * organizations", S. B. Yao, Communications of the ACM,
    4056             :                  * Volume 20 Issue 4, April 1977 Pages 260-261.
    4057             :                  *
    4058             :                  * Alternatively, re-arranging the terms from the factorials,
    4059             :                  * this may be written as
    4060             :                  *
    4061             :                  * n * (1 - product((N-p-i)/(N-i), i=0..N/n-1))
    4062             :                  *
    4063             :                  * This form of the formula is more efficient to compute in
    4064             :                  * the common case where p is larger than N/n.  Additionally,
    4065             :                  * as pointed out by Dell'Era, if i << N for all terms in the
    4066             :                  * product, it can be approximated by
    4067             :                  *
    4068             :                  * n * (1 - ((N-p)/N)^(N/n))
    4069             :                  *
    4070             :                  * See "Expected distinct values when selecting from a bag
    4071             :                  * without replacement", Alberto Dell'Era,
    4072             :                  * http://www.adellera.it/investigations/distinct_balls/.
    4073             :                  *
    4074             :                  * The condition i << N is equivalent to n >> 1, so this is a
    4075             :                  * good approximation when the number of distinct values in
    4076             :                  * the table is large.  It turns out that this formula also
    4077             :                  * works well even when n is small.
    4078             :                  */
    4079      108980 :                 reldistinct *=
    4080      108980 :                     (1 - pow((rel->tuples - rel->rows) / rel->tuples,
    4081      108980 :                              rel->tuples / reldistinct));
    4082             :             }
    4083      345968 :             reldistinct = clamp_row_est(reldistinct);
    4084             : 
    4085             :             /*
    4086             :              * Update estimate of total distinct groups.
    4087             :              */
    4088      345968 :             numdistinct *= reldistinct;
    4089             :         }
    4090             : 
    4091      347012 :         varinfos = newvarinfos;
    4092      347012 :     } while (varinfos != NIL);
    4093             : 
    4094             :     /* Now we can account for the effects of any SRFs */
    4095      344098 :     numdistinct *= srf_multiplier;
    4096             : 
    4097             :     /* Round off */
    4098      344098 :     numdistinct = ceil(numdistinct);
    4099             : 
    4100             :     /* Guard against out-of-range answers */
    4101      344098 :     if (numdistinct > input_rows)
    4102       71390 :         numdistinct = input_rows;
    4103      344098 :     if (numdistinct < 1.0)
    4104           0 :         numdistinct = 1.0;
    4105             : 
    4106      344098 :     return numdistinct;
    4107             : }
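The Yao formula and the Dell'Era approximation cited in the comment inside
estimate_num_groups can be compared directly.  A small standalone program
(the values of N, n and p are arbitrary assumptions, not PostgreSQL code)
evaluating both forms:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double      N = 1000.0;     /* rows in the table (assumed) */
        double      n = 100.0;      /* distinct values (assumed) */
        double      p = 250.0;      /* rows selected at random (assumed) */
        double      prod = 1.0;

        /* exact form: n * (1 - product((N-p-i)/(N-i), i = 0 .. N/n - 1)) */
        for (int i = 0; i < (int) (N / n); i++)
            prod *= (N - p - i) / (N - i);

        /* Dell'Era approximation: n * (1 - ((N-p)/N)^(N/n)) */
        printf("exact:  %.2f\n", n * (1.0 - prod));
        printf("approx: %.2f\n", n * (1.0 - pow((N - p) / N, N / n)));
        return 0;
    }

The two results agree closely here because n is reasonably large, which is
the regime the comment describes; the approximation is the form the function
body actually computes.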
    4108             : 
    4109             : /*
    4110             :  * Try to estimate the bucket size of the hash join inner side when the join
    4111             :  * condition contains two or more clauses by employing extended statistics.
    4112             :  *
    4113             :  * The main idea of this approach is that the ndistinct estimate produced by
    4114             :  * multivariate statistics on two or more columns yields a smaller bucket size
    4115             :  * than an estimate based on any one column separately.
    4116             :  *
    4117             :  * IMPORTANT: the way different estimates are combined here must be kept in
    4118             :  * sync with the caller's method.
    4119             :  *
    4120             :  * Return the clauses that could not be estimated with extended statistics.
    4121             :  */
    4122             : List *
    4123      454278 : estimate_multivariate_bucketsize(PlannerInfo *root, RelOptInfo *inner,
    4124             :                                  List *hashclauses,
    4125             :                                  Selectivity *innerbucketsize)
    4126             : {
    4127             :     List       *clauses;
    4128             :     List       *otherclauses;
    4129             :     double      ndistinct;
    4130             : 
    4131      454278 :     if (list_length(hashclauses) <= 1)
    4132             :     {
    4133             :         /*
    4134             :          * Nothing to do for a single clause.  Could we employ univariate
    4135             :          * extended stat here?
    4136             :          */
    4137      417536 :         return hashclauses;
    4138             :     }
    4139             : 
    4140             :     /* "clauses" is the list of hashclauses we've not dealt with yet */
    4141       36742 :     clauses = list_copy(hashclauses);
    4142             :     /* "otherclauses" holds clauses we are going to return to caller */
    4143       36742 :     otherclauses = NIL;
    4144             :     /* current estimate of ndistinct */
    4145       36742 :     ndistinct = 1.0;
    4146       73496 :     while (clauses != NIL)
    4147             :     {
    4148             :         ListCell   *lc;
    4149       36754 :         int         relid = -1;
    4150       36754 :         List       *varinfos = NIL;
    4151       36754 :         List       *origin_rinfos = NIL;
    4152             :         double      mvndistinct;
    4153             :          * Find clauses referencing the same single base relation and try to
    4154             :          * estimate such a group with extended statistics.  For each clause,
    4155             :          * either create a varinfo for it, push it to otherclauses if it
    4156             :          * can't be estimated here, or leave it for a later iteration.
    4157             :                    *lc2;
    4158             : 
    4159             :         /*
    4160             :          * Find clauses, referencing the same single base relation and try to
    4161             :          * estimate such a group with extended statistics.  Create varinfo for
    4162             :          * an approved clause, push it to otherclauses, if it can't be
    4163             :          * estimated here or ignore to process at the next iteration.
    4164             :          */
    4165      110874 :         foreach(lc, clauses)
    4166             :         {
    4167       74120 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);
    4168             :             Node       *expr;
    4169             :             Relids      relids;
    4170             :             GroupVarInfo *varinfo;
    4171             : 
    4172             :             /*
    4173             :              * Find the inner side of the join, which we need to estimate the
    4174             :              * number of buckets.  Use outer_is_left because
    4175             :              * clause_sides_match_join has already been called on the hash clauses.
    4176             :              */
    4177      148240 :             relids = rinfo->outer_is_left ?
    4178       74120 :                 rinfo->right_relids : rinfo->left_relids;
    4179      148240 :             expr = rinfo->outer_is_left ?
    4180       74120 :                 get_rightop(rinfo->clause) : get_leftop(rinfo->clause);
    4181             : 
    4182       74120 :             if (bms_get_singleton_member(relids, &relid) &&
    4183       71506 :                 root->simple_rel_array[relid]->statlist != NIL)
    4184          48 :             {
    4185          60 :                 bool        is_duplicate = false;
    4186             : 
    4187             :                 /*
    4188             :                  * This inner-side expression references only one relation.
    4189             :                  * Extended statistics on this clause can exist.
    4190             :                  */
    4191          60 :                 if (group_relid < 0)
    4192             :                 {
    4193          30 :                     RangeTblEntry *rte = root->simple_rte_array[relid];
    4194             : 
    4195          30 :                     if (!rte || (rte->relkind != RELKIND_RELATION &&
    4196           0 :                                  rte->relkind != RELKIND_MATVIEW &&
    4197           0 :                                  rte->relkind != RELKIND_FOREIGN_TABLE &&
    4198           0 :                                  rte->relkind != RELKIND_PARTITIONED_TABLE))
    4199             :                     {
    4200             :                         /* Extended statistics can't exist in principle */
    4201           0 :                         otherclauses = lappend(otherclauses, rinfo);
    4202           0 :                         clauses = foreach_delete_current(clauses, lc);
    4203           0 :                         continue;
    4204             :                     }
    4205             : 
    4206          30 :                     group_relid = relid;
    4207          30 :                     group_rel = root->simple_rel_array[relid];
    4208             :                 }
    4209          30 :                 else if (group_relid != relid)
    4210             :                 {
    4211             :                     /*
    4212             :                      * Being in the group forming state we don't need other
    4213             :                      * We're currently forming a group for another relation,
    4214             :                      * so leave this clause for a later iteration.
    4215           0 :                     continue;
    4216             :                 }
    4217             : 
    4218             :                 /*
    4219             :                  * We're going to add the new clause to the varinfos list.  We
    4220             :                  * might re-use add_unique_group_var(), but we don't do so for
    4221             :                  * two reasons.
    4222             :                  *
    4223             :                  * 1) We must keep the origin_rinfos list ordered exactly the
    4224             :                  * same way as varinfos.
    4225             :                  *
    4226             :                  * 2) add_unique_group_var() is designed for
    4227             :                  * estimate_num_groups(), where a larger number of groups is
    4228             :                  * worse.  While estimating the number of hash buckets, we
    4229             :                  * have the opposite: a smaller number of groups is worse.
    4230             :                  * Therefore, we don't have to remove "known equal" vars: a
    4231             :                  * removed var might usefully contribute to the multivariate
    4232             :                  * statistics and increase the number of groups.
    4233             :                  */
    4234             : 
    4235             :                 /*
    4236             :                  * Clear nullingrels to correctly match hash keys.  See
    4237             :                  * add_unique_group_var()'s comment for details.
    4238             :                  */
    4239          60 :                 expr = remove_nulling_relids(expr, root->outer_join_rels, NULL);
    4240             : 
    4241             :                 /*
    4242             :                  * Detect and exclude exact duplicates from the list of hash
    4243             :                  * keys (like add_unique_group_var does).
    4244             :                  */
    4245          84 :                 foreach(lc1, varinfos)
    4246             :                 {
    4247          36 :                     varinfo = (GroupVarInfo *) lfirst(lc1);
    4248             : 
    4249          36 :                     if (!equal(expr, varinfo->var))
    4250          24 :                         continue;
    4251             : 
    4252          12 :                     is_duplicate = true;
    4253          12 :                     break;
    4254             :                 }
    4255             : 
    4256          60 :                 if (is_duplicate)
    4257             :                 {
    4258             :                     /*
    4259             :                      * Skip exact duplicates. Adding them to the otherclauses
    4260             :                      * list also doesn't make sense.
    4261             :                      */
    4262          12 :                     continue;
    4263             :                 }
    4264             : 
    4265             :                 /*
    4266             :                  * Initialize GroupVarInfo.  We only use it to call
    4267             :                  * estimate_multivariate_ndistinct(), which doesn't care about
    4268             :                  * ndistinct and isdefault fields.  Thus, skip these fields.
    4269             :                  */
    4270          48 :                 varinfo = palloc0_object(GroupVarInfo);
    4271          48 :                 varinfo->var = expr;
    4272          48 :                 varinfo->rel = root->simple_rel_array[relid];
    4273          48 :                 varinfos = lappend(varinfos, varinfo);
    4274             : 
    4275             :                 /*
    4276             :                  * Remember the link to the RestrictInfo in case the clause
    4277             :                  * cannot be estimated here.
    4278             :                  */
    4279          48 :                 origin_rinfos = lappend(origin_rinfos, rinfo);
    4280             :             }
    4281             :             else
    4282             :             {
    4283             :                 /* This clause can't be estimated with extended statistics */
    4284       74060 :                 otherclauses = lappend(otherclauses, rinfo);
    4285             :             }
    4286             : 
    4287       74108 :             clauses = foreach_delete_current(clauses, lc);
    4288             :         }
    4289             : 
    4290       36754 :         if (list_length(varinfos) < 2)
    4291             :         {
    4292             :             /*
    4293             :              * Multivariate statistics don't apply to single columns.  They
    4294             :              * could apply to single expressions, but that isn't implemented yet.
    4295             :              */
    4296       36742 :             otherclauses = list_concat(otherclauses, origin_rinfos);
    4297       36742 :             list_free_deep(varinfos);
    4298       36742 :             list_free(origin_rinfos);
    4299       36742 :             continue;
    4300             :         }
    4301             : 
    4302             :         Assert(group_rel != NULL);
    4303             : 
    4304             :         /* Employ the extended statistics. */
    4305          12 :         origin_varinfos = varinfos;
    4306             :         for (;;)
    4307          12 :         {
    4308          24 :             bool        estimated = estimate_multivariate_ndistinct(root,
    4309             :                                                                     group_rel,
    4310             :                                                                     &varinfos,
    4311             :                                                                     &mvndistinct);
    4312             : 
    4313          24 :             if (!estimated)
    4314          12 :                 break;
    4315             : 
    4316             :             /*
    4317             :              * We've got an estimate.  Use the ndistinct value in a way
    4318             :              * consistent with the caller's logic (see
    4319             :              * final_cost_hashjoin).
    4320             :              */
    4321          12 :             if (ndistinct < mvndistinct)
    4322          12 :                 ndistinct = mvndistinct;
    4323             :             Assert(ndistinct >= 1.0);
    4324             :         }
    4325             : 
    4326             :         Assert(list_length(origin_varinfos) == list_length(origin_rinfos));
    4327             : 
    4328             :         /* Collect unmatched clauses as otherclauses. */
    4329          42 :         forboth(lc1, origin_varinfos, lc2, origin_rinfos)
    4330             :         {
    4331          30 :             GroupVarInfo *vinfo = lfirst(lc1);
    4332             : 
    4333          30 :             if (!list_member_ptr(varinfos, vinfo))
    4334             :                 /* Already estimated */
    4335          30 :                 continue;
    4336             : 
    4337             :             /* Can't be estimated here - push to the returning list */
    4338           0 :             otherclauses = lappend(otherclauses, lfirst(lc2));
    4339             :         }
    4340             :     }
    4341             : 
    4342       36742 :     *innerbucketsize = 1.0 / ndistinct;
    4343       36742 :     return otherclauses;
    4344             : }
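To make the idea in the header comment concrete: if the per-column ndistinct
for a hash key column a is 100, but an ndistinct statistics object on (a, b)
yields 150 for the pair, the inner bucket-size fraction becomes 1/150 rather
than 1/100.  A minimal sketch of that arithmetic (the ndistinct numbers are
hypothetical, not taken from any statistics object):

    #include <stdio.h>

    int
    main(void)
    {
        double      ndistinct_a = 100.0;    /* assumed single-column estimate */
        double      ndistinct_ab = 150.0;   /* assumed multivariate estimate */

        printf("bucket fraction from column a alone: %.4f\n", 1.0 / ndistinct_a);
        printf("bucket fraction from (a, b) stats:   %.4f\n", 1.0 / ndistinct_ab);
        return 0;
    }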
    4345             : 
    4346             : /*
    4347             :  * Estimate hash bucket statistics when the specified expression is used
    4348             :  * as a hash key for the given number of buckets.
    4349             :  *
    4350             :  * This attempts to determine two values:
    4351             :  *
    4352             :  * 1. The frequency of the most common value of the expression (returns
    4353             :  * zero into *mcv_freq if we can't get that).
    4354             :  *
    4355             :  * 2. The "bucketsize fraction", ie, average number of entries in a bucket
    4356             :  * divided by total tuples in relation.
    4357             :  *
    4358             :  * XXX This is really pretty bogus since we're effectively assuming that the
    4359             :  * distribution of hash keys will be the same after applying restriction
    4360             :  * clauses as it was in the underlying relation.  However, we are not nearly
    4361             :  * smart enough to figure out how the restrict clauses might change the
    4362             :  * distribution, so this will have to do for now.
    4363             :  *
    4364             :  * We are passed the number of buckets the executor will use for the given
    4365             :  * input relation.  If the data were perfectly distributed, with the same
    4366             :  * number of tuples going into each available bucket, then the bucketsize
    4367             :  * fraction would be 1/nbuckets.  But this happy state of affairs will occur
    4368             :  * only if (a) there are at least nbuckets distinct data values, and (b)
    4369             :  * we have a not-too-skewed data distribution.  Otherwise the buckets will
    4370             :  * be nonuniformly occupied.  If the other relation in the join has a key
    4371             :  * distribution similar to this one's, then the most-loaded buckets are
    4372             :  * exactly those that will be probed most often.  Therefore, the "average"
    4373             :  * bucket size for costing purposes should really be taken as something close
    4374             :  * to the "worst case" bucket size.  We try to estimate this by adjusting the
    4375             :  * fraction if there are too few distinct data values, and then scaling up
    4376             :  * by the ratio of the most common value's frequency to the average frequency.
    4377             :  *
    4378             :  * If no statistics are available, use a default estimate of 0.1.  This will
    4379             :  * discourage use of a hash rather strongly if the inner relation is large,
    4380             :  * which is what we want.  We do not want to hash unless we know that the
    4381             :  * inner rel is well-dispersed (or the alternatives seem much worse).
    4382             :  *
    4383             :  * The caller should also check that the mcv_freq is not so large that the
    4384             :  * most common value would by itself require an impractically large bucket.
    4385             :  * In a hash join, the executor can split buckets if they get too big, but
    4386             :  * obviously that doesn't help for a bucket that contains many duplicates of
    4387             :  * the same value.
    4388             :  */
    4389             : void
    4390      205774 : estimate_hash_bucket_stats(PlannerInfo *root, Node *hashkey, double nbuckets,
    4391             :                            Selectivity *mcv_freq,
    4392             :                            Selectivity *bucketsize_frac)
    4393             : {
    4394             :     VariableStatData vardata;
    4395             :     double      estfract,
    4396             :                 ndistinct,
    4397             :                 stanullfrac,
    4398             :                 avgfreq;
    4399             :     bool        isdefault;
    4400             :     AttStatsSlot sslot;
    4401             : 
    4402      205774 :     examine_variable(root, hashkey, 0, &vardata);
    4403             : 
    4404             :     /* Initialize *mcv_freq to "unknown" */
    4405      205774 :     *mcv_freq = 0.0;
    4406             : 
    4407             :     /* Look up the frequency of the most common value, if available */
    4408      205774 :     if (HeapTupleIsValid(vardata.statsTuple))
    4409             :     {
    4410      148262 :         if (get_attstatsslot(&sslot, vardata.statsTuple,
    4411             :                              STATISTIC_KIND_MCV, InvalidOid,
    4412             :                              ATTSTATSSLOT_NUMBERS))
    4413             :         {
    4414             :             /*
    4415             :              * The first MCV stat is for the most common value.
    4416             :              */
    4417       86660 :             if (sslot.nnumbers > 0)
    4418       86660 :                 *mcv_freq = sslot.numbers[0];
    4419       86660 :             free_attstatsslot(&sslot);
    4420             :         }
    4421       61602 :         else if (get_attstatsslot(&sslot, vardata.statsTuple,
    4422             :                                   STATISTIC_KIND_HISTOGRAM, InvalidOid,
    4423             :                                   0))
    4424             :         {
    4425             :             /*
    4426             :              * If there are no recorded MCVs, but we do have a histogram, then
    4427             :              * assume that ANALYZE determined that the column is unique.
    4428             :              */
    4429       59296 :             if (vardata.rel && vardata.rel->rows > 0)
    4430       59278 :                 *mcv_freq = 1.0 / vardata.rel->rows;
    4431             :         }
    4432             :     }
    4433             : 
    4434             :     /* Get number of distinct values */
    4435      205774 :     ndistinct = get_variable_numdistinct(&vardata, &isdefault);
    4436             : 
    4437             :     /*
    4438             :      * If ndistinct isn't real, punt.  We normally return 0.1, but if the
    4439             :      * mcv_freq is known to be even higher than that, use it instead.
    4440             :      */
    4441      205774 :     if (isdefault)
    4442             :     {
    4443       25724 :         *bucketsize_frac = (Selectivity) Max(0.1, *mcv_freq);
    4444       25724 :         ReleaseVariableStats(vardata);
    4445       25724 :         return;
    4446             :     }
    4447             : 
    4448             :     /* Get fraction that are null */
    4449      180050 :     if (HeapTupleIsValid(vardata.statsTuple))
    4450             :     {
    4451             :         Form_pg_statistic stats;
    4452             : 
    4453      148244 :         stats = (Form_pg_statistic) GETSTRUCT(vardata.statsTuple);
    4454      148244 :         stanullfrac = stats->stanullfrac;
    4455             :     }
    4456             :     else
    4457       31806 :         stanullfrac = 0.0;
    4458             : 
    4459             :     /* Compute avg freq of all distinct data values in raw relation */
    4460      180050 :     avgfreq = (1.0 - stanullfrac) / ndistinct;
    4461             : 
    4462             :     /*
    4463             :      * Adjust ndistinct to account for restriction clauses.  Observe we are
    4464             :      * assuming that the data distribution is affected uniformly by the
    4465             :      * restriction clauses!
    4466             :      *
    4467             :      * XXX Possibly better way, but much more expensive: multiply by
    4468             :      * selectivity of rel's restriction clauses that mention the target Var.
    4469             :      */
    4470      180050 :     if (vardata.rel && vardata.rel->tuples > 0)
    4471             :     {
    4472      179994 :         ndistinct *= vardata.rel->rows / vardata.rel->tuples;
    4473      179994 :         ndistinct = clamp_row_est(ndistinct);
    4474             :     }
    4475             : 
    4476             :     /*
    4477             :      * Initial estimate of bucketsize fraction is 1/nbuckets as long as the
    4478             :      * number of buckets is less than the expected number of distinct values;
    4479             :      * otherwise it is 1/ndistinct.
    4480             :      */
    4481      180050 :     if (ndistinct > nbuckets)
    4482          84 :         estfract = 1.0 / nbuckets;
    4483             :     else
    4484      179966 :         estfract = 1.0 / ndistinct;
    4485             : 
    4486             :     /*
    4487             :      * Adjust estimated bucketsize upward to account for skewed distribution.
    4488             :      */
    4489      180050 :     if (avgfreq > 0.0 && *mcv_freq > avgfreq)
    4490       96028 :         estfract *= *mcv_freq / avgfreq;
    4491             : 
    4492             :     /*
    4493             :      * Clamp bucketsize to sane range (the above adjustment could easily
    4494             :      * produce an out-of-range result).  We set the lower bound a little above
    4495             :      * zero, since zero isn't a very sane result.
    4496             :      */
    4497      180050 :     if (estfract < 1.0e-6)
    4498           0 :         estfract = 1.0e-6;
    4499      180050 :     else if (estfract > 1.0)
    4500       47776 :         estfract = 1.0;
    4501             : 
    4502      180050 :     *bucketsize_frac = (Selectivity) estfract;
    4503             : 
    4504      180050 :     ReleaseVariableStats(vardata);
    4505             : }
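The bucketsize-fraction arithmetic above can be followed end to end with a
small standalone sketch.  It mirrors the skew adjustment and clamping steps
but omits the restriction-clause adjustment to ndistinct; all statistics
values below are hypothetical:

    #include <stdio.h>

    static double
    bucketsize_frac(double ndistinct, double nbuckets,
                    double mcv_freq, double stanullfrac)
    {
        /* average frequency of a distinct non-null value in the raw rel */
        double      avgfreq = (1.0 - stanullfrac) / ndistinct;

        /* start from 1/nbuckets or 1/ndistinct, whichever is larger */
        double      estfract = (ndistinct > nbuckets) ?
            1.0 / nbuckets : 1.0 / ndistinct;

        /* inflate for skew: the most common value dominates probe cost */
        if (avgfreq > 0.0 && mcv_freq > avgfreq)
            estfract *= mcv_freq / avgfreq;

        /* clamp to a sane range */
        if (estfract < 1.0e-6)
            estfract = 1.0e-6;
        else if (estfract > 1.0)
            estfract = 1.0;
        return estfract;
    }

    int
    main(void)
    {
        /* hypothetical: 500 distinct values, 1024 buckets, 5% MCV, 1% NULLs */
        printf("bucketsize fraction: %g\n",
               bucketsize_frac(500.0, 1024.0, 0.05, 0.01));
        return 0;
    }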
    4506             : 
    4507             : /*
    4508             :  * estimate_hashagg_tablesize
    4509             :  *    estimate the number of bytes that a hash aggregate hashtable will
    4510             :  *    require based on the agg_costs, path width and number of groups.
    4511             :  *
    4512             :  * We return the result as "double" to forestall any possible overflow
    4513             :  * problem in the multiplication by dNumGroups.
    4514             :  *
    4515             :  * XXX this may be over-estimating the size now that hashagg knows to omit
    4516             :  * unneeded columns from the hashtable.  Also for mixed-mode grouping sets,
    4517             :  * grouping columns not in the hashed set are counted here even though hashagg
    4518             :  * won't store them.  Is this a problem?
    4519             :  */
    4520             : double
    4521        2614 : estimate_hashagg_tablesize(PlannerInfo *root, Path *path,
    4522             :                            const AggClauseCosts *agg_costs, double dNumGroups)
    4523             : {
    4524             :     Size        hashentrysize;
    4525             : 
    4526        2614 :     hashentrysize = hash_agg_entry_size(list_length(root->aggtransinfos),
    4527        2614 :                                         path->pathtarget->width,
    4528        2614 :                                         agg_costs->transitionSpace);
    4529             : 
    4530             :     /*
    4531             :      * Note that this disregards the effect of fill-factor and growth policy
    4532             :      * of the hash table.  That's probably ok, given that the default
    4533             :      * fill-factor is relatively high.  It'd be hard to meaningfully factor in
    4534             :      * "double-in-size" growth policies here.
    4535             :      */
    4536        2614 :     return hashentrysize * dNumGroups;
    4537             : }
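A usage-style illustration of the point about returning "double" (the entry
size and group count below are arbitrary assumptions, and hash_agg_entry_size
is not reimplemented here):

    #include <stdio.h>

    int
    main(void)
    {
        size_t      hashentrysize = 96; /* assumed per-entry size in bytes */
        double      dNumGroups = 5e8;   /* assumed number of groups */

        /* multiply in double, as estimate_hashagg_tablesize does, so the
         * product cannot overflow an integer type */
        double      tablesize = (double) hashentrysize * dNumGroups;

        printf("estimated hashtable size: %.0f bytes (%.1f GB)\n",
               tablesize, tablesize / 1e9);
        return 0;
    }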
    4538             : 
    4539             : 
    4540             : /*-------------------------------------------------------------------------
    4541             :  *
    4542             :  * Support routines
    4543             :  *
    4544             :  *-------------------------------------------------------------------------
    4545             :  */
    4546             : 
    4547             : /*
    4548             :  * Find the best matching ndistinct extended statistics for the given list of
    4549             :  * GroupVarInfos.
    4550             :  *
    4551             :  * Callers must ensure that the given GroupVarInfos all belong to 'rel' and
    4552             :  * the GroupVarInfos list does not contain any duplicate Vars or expressions.
    4553             :  *
    4554             :  * When statistics are found that match > 1 of the given GroupVarInfo, the
    4555             :  * *ndistinct parameter is set according to the ndistinct estimate and a new
    4556             :  * list is built with the matching GroupVarInfos removed, which is output via
    4557             :  * the *varinfos parameter before returning true.  When no matching stats are
    4558             :  * found, false is returned and the *varinfos and *ndistinct parameters are
    4559             :  * left untouched.
    4560             :  */
    4561             : static bool
    4562      347162 : estimate_multivariate_ndistinct(PlannerInfo *root, RelOptInfo *rel,
    4563             :                                 List **varinfos, double *ndistinct)
    4564             : {
    4565             :     ListCell   *lc;
    4566             :     int         nmatches_vars;
    4567             :     int         nmatches_exprs;
    4568      347162 :     Oid         statOid = InvalidOid;
    4569             :     MVNDistinct *stats;
    4570      347162 :     StatisticExtInfo *matched_info = NULL;
    4571      347162 :     RangeTblEntry *rte = planner_rt_fetch(rel->relid, root);
    4572             : 
    4573             :     /* bail out immediately if the table has no extended statistics */
    4574      347162 :     if (!rel->statlist)
    4575      346598 :         return false;
    4576             : 
    4577             :     /* look for the ndistinct statistics object matching the most vars */
    4578         564 :     nmatches_vars = 0;          /* we require at least two matches */
    4579         564 :     nmatches_exprs = 0;
    4580        2244 :     foreach(lc, rel->statlist)
    4581             :     {
    4582             :         ListCell   *lc2;
    4583        1680 :         StatisticExtInfo *info = (StatisticExtInfo *) lfirst(lc);
    4584        1680 :         int         nshared_vars = 0;
    4585        1680 :         int         nshared_exprs = 0;
    4586             : 
    4587             :         /* skip statistics of other kinds */
    4588        1680 :         if (info->kind != STATS_EXT_NDISTINCT)
    4589         792 :             continue;
    4590             : 
    4591             :         /* skip statistics with mismatching stxdinherit value */
    4592         888 :         if (info->inherit != rte->inh)
    4593          30 :             continue;
    4594             : 
    4595             :         /*
    4596             :          * Determine how many expressions (and variables in non-matched
    4597             :          * expressions) match. We'll then use these numbers to pick the
    4598             :          * statistics object that best matches the clauses.
    4599             :          */
    4600        2718 :         foreach(lc2, *varinfos)
    4601             :         {
    4602             :             ListCell   *lc3;
    4603        1860 :             GroupVarInfo *varinfo = (GroupVarInfo *) lfirst(lc2);
    4604             :             AttrNumber  attnum;
    4605             : 
    4606             :             Assert(varinfo->rel == rel);
    4607             : 
    4608             :             /* simple Var, search in statistics keys directly */
    4609        1860 :             if (IsA(varinfo->var, Var))
    4610             :             {
    4611        1494 :                 attnum = ((Var *) varinfo->var)->varattno;
    4612             : 
    4613             :                 /*
    4614             :                  * Ignore system attributes - we don't support statistics on
    4615             :                  * them, so can't match them (and it'd fail as the values are
    4616             :                  * negative).
    4617             :                  */
    4618        1494 :                 if (!AttrNumberIsForUserDefinedAttr(attnum))
    4619          12 :                     continue;
    4620             : 
    4621        1482 :                 if (bms_is_member(attnum, info->keys))
    4622         876 :                     nshared_vars++;
    4623             : 
    4624        1482 :                 continue;
    4625             :             }
    4626             : 
    4627             :             /* expression - see if it's in the statistics object */
    4628         660 :             foreach(lc3, info->exprs)
    4629             :             {
    4630         528 :                 Node       *expr = (Node *) lfirst(lc3);
    4631             : 
    4632         528 :                 if (equal(varinfo->var, expr))
    4633             :                 {
    4634         234 :                     nshared_exprs++;
    4635         234 :                     break;
    4636             :                 }
    4637             :             }
    4638             :         }
    4639             : 
    4640             :         /*
    4641             :          * Extended ndistinct statistics store estimates only for
    4642             :          * combinations of two or more of the columns they are defined on,
    4643             :          * never for individual columns.  So skip this statistics object
    4644             :          * unless we managed to match at least two columns or expressions.
    4645             :          */
    4646         858 :         if (nshared_vars + nshared_exprs < 2)
    4647         396 :             continue;
    4648             : 
    4649             :         /*
    4650             :          * Check if these statistics are a better match than the previous best
    4651             :          * match and if so, take note of the StatisticExtInfo.
    4652             :          *
    4653             :          * The statslist is sorted by statOid, so the StatisticExtInfo we
    4654             :          * select as the best match is deterministic even when multiple sets
    4655             :          * of statistics match equally as well.
    4656             :          */
    4657         462 :         if ((nshared_exprs > nmatches_exprs) ||
    4658         354 :             (((nshared_exprs == nmatches_exprs)) && (nshared_vars > nmatches_vars)))
    4659             :         {
    4660         438 :             statOid = info->statOid;
    4661         438 :             nmatches_vars = nshared_vars;
    4662         438 :             nmatches_exprs = nshared_exprs;
    4663         438 :             matched_info = info;
    4664             :         }
    4665             :     }
    4666             : 
    4667             :     /* No match? */
    4668         564 :     if (statOid == InvalidOid)
    4669         138 :         return false;
    4670             : 
    4671             :     Assert(nmatches_vars + nmatches_exprs > 1);
    4672             : 
    4673         426 :     stats = statext_ndistinct_load(statOid, rte->inh);
    4674             : 
    4675             :     /*
    4676             :      * If we have a match, search it for the specific item that matches (there
    4677             :      * must be one), and construct the output values.
    4678             :      */
    4679         426 :     if (stats)
    4680             :     {
    4681             :         int         i;
    4682         426 :         List       *newlist = NIL;
    4683         426 :         MVNDistinctItem *item = NULL;
    4684             :         ListCell   *lc2;
    4685         426 :         Bitmapset  *matched = NULL;
    4686             :         AttrNumber  attnum_offset;
    4687             : 
    4688             :         /*
    4689             :          * By how much do we need to offset the attnums?  If there are no
    4690             :          * expressions, no offset is needed.  Otherwise offset by enough to
    4691             :          * move the lowest one (minus the number of expressions) to 1.
    4692             :          */
    4693         426 :         if (matched_info->exprs)
    4694         150 :             attnum_offset = (list_length(matched_info->exprs) + 1);
    4695             :         else
    4696         276 :             attnum_offset = 0;
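                     :
                     :         /*
                     :          * A small worked illustration of the offsetting (assuming a
                     :          * statistics object with two expressions): attnum_offset is 3,
                     :          * so expression index 0 (attnum -1) maps to 2, expression index
                     :          * 1 (attnum -2) maps to 1, and a plain column with attnum 4 maps
                     :          * to 7, keeping every member of "matched" positive as
                     :          * bms_add_member requires.
                     :          */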
    4697             : 
    4698             :         /* see what actually matched */
    4699        1488 :         foreach(lc2, *varinfos)
    4700             :         {
    4701             :             ListCell   *lc3;
    4702             :             int         idx;
    4703        1062 :             bool        found = false;
    4704             : 
    4705        1062 :             GroupVarInfo *varinfo = (GroupVarInfo *) lfirst(lc2);
    4706             : 
    4707             :             /*
    4708             :              * Process a simple Var expression, by matching it to keys
    4709             :              * directly. If there's a matching expression, we'll try matching
    4710             :              * it later.
    4711             :              */
    4712        1062 :             if (IsA(varinfo->var, Var))
    4713             :             {
    4714         876 :                 AttrNumber  attnum = ((Var *) varinfo->var)->varattno;
    4715             : 
    4716             :                 /*
    4717             :                  * Ignore expressions on system attributes. Can't rely on the
    4718             :                  * bms check for negative values.
    4719             :                  */
    4720         876 :                 if (!AttrNumberIsForUserDefinedAttr(attnum))
    4721           6 :                     continue;
    4722             : 
    4723             :                 /* Is the variable covered by the statistics object? */
    4724         870 :                 if (!bms_is_member(attnum, matched_info->keys))
    4725         120 :                     continue;
    4726             : 
    4727         750 :                 attnum = attnum + attnum_offset;
    4728             : 
    4729             :                 /* ensure sufficient offset */
    4730             :                 Assert(AttrNumberIsForUserDefinedAttr(attnum));
    4731             : 
    4732         750 :                 matched = bms_add_member(matched, attnum);
    4733             : 
    4734         750 :                 found = true;
    4735             :             }
    4736             : 
    4737             :             /*
    4738             :              * XXX Maybe we should allow searching the expressions even if we
    4739             :              * found an attribute matching the expression? That would handle
    4740             :              * trivial expressions like "(a)" but it seems fairly useless.
    4741             :              */
    4742         936 :             if (found)
    4743         750 :                 continue;
    4744             : 
    4745             :             /* expression - see if it's in the statistics object */
    4746         186 :             idx = 0;
    4747         306 :             foreach(lc3, matched_info->exprs)
    4748             :             {
    4749         276 :                 Node       *expr = (Node *) lfirst(lc3);
    4750             : 
    4751         276 :                 if (equal(varinfo->var, expr))
    4752             :                 {
    4753         156 :                     AttrNumber  attnum = -(idx + 1);
    4754             : 
    4755         156 :                     attnum = attnum + attnum_offset;
    4756             : 
    4757             :                     /* ensure sufficient offset */
    4758             :                     Assert(AttrNumberIsForUserDefinedAttr(attnum));
    4759             : 
    4760         156 :                     matched = bms_add_member(matched, attnum);
    4761             : 
    4762             :                     /* there should be just one matching expression */
    4763         156 :                     break;
    4764             :                 }
    4765             : 
    4766         120 :                 idx++;
    4767             :             }
    4768             :         }
    4769             : 
    4770             :         /* Find the specific item that exactly matches the combination */
    4771         864 :         for (i = 0; i < stats->nitems; i++)
    4772             :         {
    4773             :             int         j;
    4774         864 :             MVNDistinctItem *tmpitem = &stats->items[i];
    4775             : 
    4776         864 :             if (tmpitem->nattributes != bms_num_members(matched))
    4777         162 :                 continue;
    4778             : 
    4779             :             /* assume it's the right item */
    4780         702 :             item = tmpitem;
    4781             : 
    4782             :             /* check that all item attributes/expressions fit the match */
    4783        1692 :             for (j = 0; j < tmpitem->nattributes; j++)
    4784             :             {
    4785        1266 :                 AttrNumber  attnum = tmpitem->attributes[j];
    4786             : 
    4787             :                 /*
    4788             :                  * Thanks to how we constructed the matched bitmap above, we
    4789             :                  * can just offset all attnums the same way.
    4790             :                  */
    4791        1266 :                 attnum = attnum + attnum_offset;
    4792             : 
    4793        1266 :                 if (!bms_is_member(attnum, matched))
    4794             :                 {
    4795             :                     /* nah, it's not this item */
    4796         276 :                     item = NULL;
    4797         276 :                     break;
    4798             :                 }
    4799             :             }
    4800             : 
    4801             :             /*
    4802             :              * If the item has all the matched attributes, we know it's the
    4803             :              * right one - there can't be a better one matching more.
    4804             :              */
    4805         702 :             if (item)
    4806         426 :                 break;
    4807             :         }
    4808             : 
    4809             :         /*
    4810             :          * Make sure we found an item. There has to be one, because ndistinct
    4811             :          * statistics include all combinations of attributes.
    4812             :          */
    4813         426 :         if (!item)
    4814           0 :             elog(ERROR, "corrupt MVNDistinct entry");
    4815             : 
    4816             :         /* Form the output varinfo list, keeping only unmatched ones */
    4817        1488 :         foreach(lc, *varinfos)
    4818             :         {
    4819        1062 :             GroupVarInfo *varinfo = (GroupVarInfo *) lfirst(lc);
    4820             :             ListCell   *lc3;
    4821        1062 :             bool        found = false;
    4822             : 
    4823             :             /*
    4824             :              * Let's look at plain variables first, because it's the most
    4825             :              * common case and the check is quite cheap. We can simply get the
    4826             :              * attnum and check (with an offset) matched bitmap.
    4827             :              * attnum and check it (with an offset) against the matched bitmap.
    4828        1062 :             if (IsA(varinfo->var, Var))
    4829         870 :             {
    4830         876 :                 AttrNumber  attnum = ((Var *) varinfo->var)->varattno;
    4831             : 
    4832             :                 /*
    4833             :                  * If it's a system attribute, we're done. We don't support
    4834             :                  * extended statistics on system attributes, so it's clearly
    4835             :                  * not matched. Just keep the expression and continue.
    4836             :                  */
    4837         876 :                 if (!AttrNumberIsForUserDefinedAttr(attnum))
    4838             :                 {
    4839           6 :                     newlist = lappend(newlist, varinfo);
    4840           6 :                     continue;
    4841             :                 }
    4842             : 
    4843             :                 /* apply the same offset as above */
    4844         870 :                 attnum += attnum_offset;
    4845             : 
    4846             :                 /* if it's not matched, keep the varinfo */
    4847         870 :                 if (!bms_is_member(attnum, matched))
    4848         120 :                     newlist = lappend(newlist, varinfo);
    4849             : 
    4850             :                 /* The rest of the loop deals with complex expressions. */
    4851         870 :                 continue;
    4852             :             }
    4853             : 
    4854             :             /*
    4855             :              * Process complex expressions, not just simple Vars.
    4856             :              *
    4857             :              * First, we search for an exact match of an expression. If we
    4858             :              * find one, we can just discard the whole GroupVarInfo, with all
    4859             :              * the variables we extracted from it.
    4860             :              *
    4861             :              * Otherwise we inspect the individual vars, and try matching them
    4862             :              * to variables in the item.
    4863             :              */
    4864         306 :             foreach(lc3, matched_info->exprs)
    4865             :             {
    4866         276 :                 Node       *expr = (Node *) lfirst(lc3);
    4867             : 
    4868         276 :                 if (equal(varinfo->var, expr))
    4869             :                 {
    4870         156 :                     found = true;
    4871         156 :                     break;
    4872             :                 }
    4873             :             }
    4874             : 
    4875             :             /* found exact match, skip */
    4876         186 :             if (found)
    4877         156 :                 continue;
    4878             : 
    4879          30 :             newlist = lappend(newlist, varinfo);
    4880             :         }
    4881             : 
    4882         426 :         *varinfos = newlist;
    4883         426 :         *ndistinct = item->ndistinct;
    4884         426 :         return true;
    4885             :     }
    4886             : 
    4887           0 :     return false;
    4888             : }
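                     :
                     : /*
                     :  * A minimal sketch (the helper name below is hypothetical, not part of
                     :  * this file) of how a caller can drive estimate_multivariate_ndistinct():
                     :  * keep applying it while some extended-statistics object still covers two
                     :  * or more of the remaining grouping entries, multiplying the resulting
                     :  * estimates together; whatever remains in "varinfos" afterwards has to be
                     :  * estimated from per-column statistics.
                     :  */
                     : static double
                     : fold_multivariate_ndistinct(PlannerInfo *root, RelOptInfo *rel,
                     :                             List *varinfos)
                     : {
                     :     double      result = 1.0;
                     :     double      mvndistinct;
                     :
                     :     /* consume as many grouping entries as extended statistics can cover */
                     :     while (estimate_multivariate_ndistinct(root, rel, &varinfos, &mvndistinct))
                     :         result *= mvndistinct;
                     :
                     :     /* entries still left in "varinfos" would be handled per column here */
                     :     return result;
                     : }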
    4889             : 
    4890             : /*
    4891             :  * convert_to_scalar
    4892             :  *    Convert non-NULL values of the indicated types to the comparison
    4893             :  *    scale needed by scalarineqsel().
    4894             :  *    Returns "true" if successful.
    4895             :  *
    4896             :  * XXX this routine is a hack: ideally we should look up the conversion
    4897             :  * subroutines in pg_type.
    4898             :  *
    4899             :  * All numeric datatypes are simply converted to their equivalent
    4900             :  * "double" values.  (NUMERIC values that are outside the range of "double"
    4901             :  * are clamped to +/- HUGE_VAL.)
    4902             :  *
    4903             :  * String datatypes are converted by convert_string_to_scalar(),
    4904             :  * which is explained below.  The reason why this routine deals with
    4905             :  * three values at a time, not just one, is that we need it for strings.
    4906             :  *
    4907             :  * The bytea datatype is just enough different from strings that it has
    4908             :  * to be treated separately.
    4909             :  *
    4910             :  * The several datatypes representing absolute times are all converted
    4911             :  * to Timestamp, which is actually an int64, and then we promote that to
    4912             :  * a double.  Note this will give correct results even for the "special"
    4913             :  * values of Timestamp, since those are chosen to compare correctly;
    4914             :  * see timestamp_cmp.
    4915             :  *
    4916             :  * The several datatypes representing relative times (intervals) are all
    4917             :  * converted to measurements expressed in seconds.
    4918             :  */
    4919             : static bool
    4920       89972 : convert_to_scalar(Datum value, Oid valuetypid, Oid collid, double *scaledvalue,
    4921             :                   Datum lobound, Datum hibound, Oid boundstypid,
    4922             :                   double *scaledlobound, double *scaledhibound)
    4923             : {
    4924       89972 :     bool        failure = false;
    4925             : 
    4926             :     /*
    4927             :      * Both the valuetypid and the boundstypid should exactly match the
    4928             :      * declared input type(s) of the operator we are invoked for.  However,
    4929             :      * extensions might try to use scalarineqsel as estimator for operators
    4930             :      * with input type(s) we don't handle here; in such cases, we want to
    4931             :      * return false, not fail.  In any case, we mustn't assume that valuetypid
    4932             :      * and boundstypid are identical.
    4933             :      *
    4934             :      * XXX The histogram we are interpolating between points of could belong
    4935             :      * to a column that's only binary-compatible with the declared type. In
    4936             :      * essence we are assuming that the semantics of binary-compatible types
    4937             :      * are enough alike that we can use a histogram generated with one type's
    4938             :      * operators to estimate selectivity for the other's.  This is outright
    4939             :      * wrong in some cases --- in particular signed versus unsigned
    4940             :      * interpretation could trip us up.  But it's useful enough in the
    4941             :      * majority of cases that we do it anyway.  Should think about more
    4942             :      * rigorous ways to do it.
    4943             :      */
    4944       89972 :     switch (valuetypid)
    4945             :     {
    4946             :             /*
    4947             :              * Built-in numeric types
    4948             :              */
    4949       82814 :         case BOOLOID:
    4950             :         case INT2OID:
    4951             :         case INT4OID:
    4952             :         case INT8OID:
    4953             :         case FLOAT4OID:
    4954             :         case FLOAT8OID:
    4955             :         case NUMERICOID:
    4956             :         case OIDOID:
    4957             :         case REGPROCOID:
    4958             :         case REGPROCEDUREOID:
    4959             :         case REGOPEROID:
    4960             :         case REGOPERATOROID:
    4961             :         case REGCLASSOID:
    4962             :         case REGTYPEOID:
    4963             :         case REGCOLLATIONOID:
    4964             :         case REGCONFIGOID:
    4965             :         case REGDICTIONARYOID:
    4966             :         case REGROLEOID:
    4967             :         case REGNAMESPACEOID:
    4968             :         case REGDATABASEOID:
    4969       82814 :             *scaledvalue = convert_numeric_to_scalar(value, valuetypid,
    4970             :                                                      &failure);
    4971       82814 :             *scaledlobound = convert_numeric_to_scalar(lobound, boundstypid,
    4972             :                                                        &failure);
    4973       82814 :             *scaledhibound = convert_numeric_to_scalar(hibound, boundstypid,
    4974             :                                                        &failure);
    4975       82814 :             return !failure;
    4976             : 
    4977             :             /*
    4978             :              * Built-in string types
    4979             :              */
    4980        7158 :         case CHAROID:
    4981             :         case BPCHAROID:
    4982             :         case VARCHAROID:
    4983             :         case TEXTOID:
    4984             :         case NAMEOID:
    4985             :             {
    4986        7158 :                 char       *valstr = convert_string_datum(value, valuetypid,
    4987             :                                                           collid, &failure);
    4988        7158 :                 char       *lostr = convert_string_datum(lobound, boundstypid,
    4989             :                                                          collid, &failure);
    4990        7158 :                 char       *histr = convert_string_datum(hibound, boundstypid,
    4991             :                                                          collid, &failure);
    4992             : 
    4993             :                 /*
    4994             :                  * Bail out if any of the values is not of string type.  We
    4995             :                  * might leak converted strings for the other value(s), but
    4996             :                  * that's not worth troubling over.
    4997             :                  */
    4998        7158 :                 if (failure)
    4999           0 :                     return false;
    5000             : 
    5001        7158 :                 convert_string_to_scalar(valstr, scaledvalue,
    5002             :                                          lostr, scaledlobound,
    5003             :                                          histr, scaledhibound);
    5004        7158 :                 pfree(valstr);
    5005        7158 :                 pfree(lostr);
    5006        7158 :                 pfree(histr);
    5007        7158 :                 return true;
    5008             :             }
    5009             : 
    5010             :             /*
    5011             :              * Built-in bytea type
    5012             :              */
    5013           0 :         case BYTEAOID:
    5014             :             {
    5015             :                 /* We only support bytea vs bytea comparison */
    5016           0 :                 if (boundstypid != BYTEAOID)
    5017           0 :                     return false;
    5018           0 :                 convert_bytea_to_scalar(value, scaledvalue,
    5019             :                                         lobound, scaledlobound,
    5020             :                                         hibound, scaledhibound);
    5021           0 :                 return true;
    5022             :             }
    5023             : 
    5024             :             /*
    5025             :              * Built-in time types
    5026             :              */
    5027           0 :         case TIMESTAMPOID:
    5028             :         case TIMESTAMPTZOID:
    5029             :         case DATEOID:
    5030             :         case INTERVALOID:
    5031             :         case TIMEOID:
    5032             :         case TIMETZOID:
    5033           0 :             *scaledvalue = convert_timevalue_to_scalar(value, valuetypid,
    5034             :                                                        &failure);
    5035           0 :             *scaledlobound = convert_timevalue_to_scalar(lobound, boundstypid,
    5036             :                                                          &failure);
    5037           0 :             *scaledhibound = convert_timevalue_to_scalar(hibound, boundstypid,
    5038             :                                                          &failure);
    5039           0 :             return !failure;
    5040             : 
    5041             :             /*
    5042             :              * Built-in network types
    5043             :              */
    5044           0 :         case INETOID:
    5045             :         case CIDROID:
    5046             :         case MACADDROID:
    5047             :         case MACADDR8OID:
    5048           0 :             *scaledvalue = convert_network_to_scalar(value, valuetypid,
    5049             :                                                      &failure);
    5050           0 :             *scaledlobound = convert_network_to_scalar(lobound, boundstypid,
    5051             :                                                        &failure);
    5052           0 :             *scaledhibound = convert_network_to_scalar(hibound, boundstypid,
    5053             :                                                        &failure);
    5054           0 :             return !failure;
    5055             :     }
    5056             :     /* Don't know how to convert */
    5057           0 :     *scaledvalue = *scaledlobound = *scaledhibound = 0;
    5058           0 :     return false;
    5059             : }
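                     :
                     : /*
                     :  * The three scaled values computed above feed a linear interpolation
                     :  * within a histogram bin; roughly (a sketch of the caller's arithmetic,
                     :  * not an exact excerpt):
                     :  *
                     :  *      binfrac = (scaledvalue - scaledlobound) /
                     :  *                (scaledhibound - scaledlobound);
                     :  *
                     :  * which is why the outputs only need to be mutually consistent, not in
                     :  * any particular unit.
                     :  */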
    5060             : 
    5061             : /*
    5062             :  * Do convert_to_scalar()'s work for any numeric data type.
    5063             :  *
    5064             :  * On failure (e.g., unsupported typid), set *failure to true;
    5065             :  * otherwise, that variable is not changed.
    5066             :  */
    5067             : static double
    5068      248442 : convert_numeric_to_scalar(Datum value, Oid typid, bool *failure)
    5069             : {
    5070      248442 :     switch (typid)
    5071             :     {
    5072           0 :         case BOOLOID:
    5073           0 :             return (double) DatumGetBool(value);
    5074          12 :         case INT2OID:
    5075          12 :             return (double) DatumGetInt16(value);
    5076       31842 :         case INT4OID:
    5077       31842 :             return (double) DatumGetInt32(value);
    5078           0 :         case INT8OID:
    5079           0 :             return (double) DatumGetInt64(value);
    5080           0 :         case FLOAT4OID:
    5081           0 :             return (double) DatumGetFloat4(value);
    5082          54 :         case FLOAT8OID:
    5083          54 :             return (double) DatumGetFloat8(value);
    5084           0 :         case NUMERICOID:
    5085             :             /* Note: out-of-range values will be clamped to +-HUGE_VAL */
    5086           0 :             return (double)
    5087           0 :                 DatumGetFloat8(DirectFunctionCall1(numeric_float8_no_overflow,
    5088             :                                                    value));
    5089      216534 :         case OIDOID:
    5090             :         case REGPROCOID:
    5091             :         case REGPROCEDUREOID:
    5092             :         case REGOPEROID:
    5093             :         case REGOPERATOROID:
    5094             :         case REGCLASSOID:
    5095             :         case REGTYPEOID:
    5096             :         case REGCOLLATIONOID:
    5097             :         case REGCONFIGOID:
    5098             :         case REGDICTIONARYOID:
    5099             :         case REGROLEOID:
    5100             :         case REGNAMESPACEOID:
    5101             :         case REGDATABASEOID:
    5102             :             /* we can treat OIDs as integers... */
    5103      216534 :             return (double) DatumGetObjectId(value);
    5104             :     }
    5105             : 
    5106           0 :     *failure = true;
    5107           0 :     return 0;
    5108             : }
    5109             : 
    5110             : /*
    5111             :  * Do convert_to_scalar()'s work for any character-string data type.
    5112             :  *
    5113             :  * String datatypes are converted to a scale that ranges from 0 to 1,
    5114             :  * where we visualize the bytes of the string as fractional digits.
    5115             :  *
    5116             :  * We do not want the base to be 256, however, since that tends to
    5117             :  * generate inflated selectivity estimates; few databases will have
    5118             :  * occurrences of all 256 possible byte values at each position.
    5119             :  * Instead, use the smallest and largest byte values seen in the bounds
    5120             :  * as the estimated range for each byte, after some fudging to deal with
    5121             :  * the fact that we probably aren't going to see the full range that way.
    5122             :  *
    5123             :  * An additional refinement is that we discard any common prefix of the
    5124             :  * three strings before computing the scaled values.  This allows us to
    5125             :  * "zoom in" when we encounter a narrow data range.  An example is a phone
    5126             :  * number database where all the values begin with the same area code.
    5127             :  * (Actually, the bounds will be adjacent histogram-bin-boundary values,
    5128             :  * so this is more likely to happen than you might think.)
    5129             :  */
    5130             : static void
    5131        7158 : convert_string_to_scalar(char *value,
    5132             :                          double *scaledvalue,
    5133             :                          char *lobound,
    5134             :                          double *scaledlobound,
    5135             :                          char *hibound,
    5136             :                          double *scaledhibound)
    5137             : {
    5138             :     int         rangelo,
    5139             :                 rangehi;
    5140             :     char       *sptr;
    5141             : 
    5142        7158 :     rangelo = rangehi = (unsigned char) hibound[0];
    5143       89742 :     for (sptr = lobound; *sptr; sptr++)
    5144             :     {
    5145       82584 :         if (rangelo > (unsigned char) *sptr)
    5146       16798 :             rangelo = (unsigned char) *sptr;
    5147       82584 :         if (rangehi < (unsigned char) *sptr)
    5148        8912 :             rangehi = (unsigned char) *sptr;
    5149             :     }
    5150       83552 :     for (sptr = hibound; *sptr; sptr++)
    5151             :     {
    5152       76394 :         if (rangelo > (unsigned char) *sptr)
    5153        1248 :             rangelo = (unsigned char) *sptr;
    5154       76394 :         if (rangehi < (unsigned char) *sptr)
    5155        3372 :             rangehi = (unsigned char) *sptr;
    5156             :     }
    5157             :     /* If range includes any upper-case ASCII chars, make it include all */
    5158        7158 :     if (rangelo <= 'Z' && rangehi >= 'A')
    5159             :     {
    5160        1516 :         if (rangelo > 'A')
    5161         222 :             rangelo = 'A';
    5162        1516 :         if (rangehi < 'Z')
    5163         480 :             rangehi = 'Z';
    5164             :     }
    5165             :     /* Ditto lower-case */
    5166        7158 :     if (rangelo <= 'z' && rangehi >= 'a')
    5167             :     {
    5168        6656 :         if (rangelo > 'a')
    5169         102 :             rangelo = 'a';
    5170        6656 :         if (rangehi < 'z')
    5171        6566 :             rangehi = 'z';
    5172             :     }
    5173             :     /* Ditto digits */
    5174        7158 :     if (rangelo <= '9' && rangehi >= '0')
    5175             :     {
    5176         836 :         if (rangelo > '0')
    5177         732 :             rangelo = '0';
    5178         836 :         if (rangehi < '9')
    5179          14 :             rangehi = '9';
    5180             :     }
    5181             : 
    5182             :     /*
    5183             :      * If the range includes fewer than 10 chars, assume we do not have
    5184             :      * enough data, and make it include the regular ASCII set.
    5185             :      */
    5186        7158 :     if (rangehi - rangelo < 9)
    5187             :     {
    5188           0 :         rangelo = ' ';
    5189           0 :         rangehi = 127;
    5190             :     }
    5191             : 
    5192             :     /*
    5193             :      * Now strip any common prefix of the three strings.
    5194             :      */
    5195       15374 :     while (*lobound)
    5196             :     {
    5197       15354 :         if (*lobound != *hibound || *lobound != *value)
    5198             :             break;
    5199        8216 :         lobound++, hibound++, value++;
    5200             :     }
    5201             : 
    5202             :     /*
    5203             :      * Now we can do the conversions.
    5204             :      */
    5205        7158 :     *scaledvalue = convert_one_string_to_scalar(value, rangelo, rangehi);
    5206        7158 :     *scaledlobound = convert_one_string_to_scalar(lobound, rangelo, rangehi);
    5207        7158 :     *scaledhibound = convert_one_string_to_scalar(hibound, rangelo, rangehi);
    5208        7158 : }
    5209             : 
    5210             : static double
    5211       21474 : convert_one_string_to_scalar(char *value, int rangelo, int rangehi)
    5212             : {
    5213       21474 :     int         slen = strlen(value);
    5214             :     double      num,
    5215             :                 denom,
    5216             :                 base;
    5217             : 
    5218       21474 :     if (slen <= 0)
    5219          20 :         return 0.0;             /* empty string has scalar value 0 */
    5220             : 
    5221             :     /*
    5222             :      * There seems little point in considering more than a dozen bytes from
    5223             :      * the string.  Since base is at least 10, that will give us nominal
    5224             :      * resolution of at least 12 decimal digits, which is surely far more
    5225             :      * precision than this estimation technique has got anyway (especially in
    5226             :      * non-C locales).  Also, even with the maximum possible base of 256, this
    5227             :      * ensures denom cannot grow larger than 256^13 = 2.03e31, which will not
    5228             :      * overflow on any known machine.
    5229             :      */
    5230       21454 :     if (slen > 12)
    5231        5390 :         slen = 12;
    5232             : 
    5233             :     /* Convert initial characters to fraction */
    5234       21454 :     base = rangehi - rangelo + 1;
    5235       21454 :     num = 0.0;
    5236       21454 :     denom = base;
    5237      176840 :     while (slen-- > 0)
    5238             :     {
    5239      155386 :         int         ch = (unsigned char) *value++;
    5240             : 
    5241      155386 :         if (ch < rangelo)
    5242         160 :             ch = rangelo - 1;
    5243      155226 :         else if (ch > rangehi)
    5244           0 :             ch = rangehi + 1;
    5245      155386 :         num += ((double) (ch - rangelo)) / denom;
    5246      155386 :         denom *= base;
    5247             :     }
    5248             : 
    5249       21454 :     return num;
    5250             : }
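                     :
                     : /*
                     :  * Worked example: with rangelo = 'a' and rangehi = 'z' the base is 26,
                     :  * so "ba" maps to 1/26 + 0/26^2 (about 0.0385) and "m" maps to 12/26
                     :  * (about 0.4615), placing both strings on the same 0..1 scale as the
                     :  * histogram bounds they are compared against.
                     :  */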
    5251             : 
    5252             : /*
    5253             :  * Convert a string-type Datum into a palloc'd, null-terminated string.
    5254             :  *
    5255             :  * On failure (e.g., unsupported typid), set *failure to true;
    5256             :  * otherwise, that variable is not changed.  (We'll return NULL on failure.)
    5257             :  *
    5258             :  * When using a non-C locale, we must pass the string through pg_strxfrm()
    5259             :  * before continuing, so as to generate correct locale-specific results.
    5260             :  */
    5261             : static char *
    5262       21474 : convert_string_datum(Datum value, Oid typid, Oid collid, bool *failure)
    5263             : {
    5264             :     char       *val;
    5265             :     pg_locale_t mylocale;
    5266             : 
    5267       21474 :     switch (typid)
    5268             :     {
    5269           0 :         case CHAROID:
    5270           0 :             val = (char *) palloc(2);
    5271           0 :             val[0] = DatumGetChar(value);
    5272           0 :             val[1] = '\0';
    5273           0 :             break;
    5274        6622 :         case BPCHAROID:
    5275             :         case VARCHAROID:
    5276             :         case TEXTOID:
    5277        6622 :             val = TextDatumGetCString(value);
    5278        6622 :             break;
    5279       14852 :         case NAMEOID:
    5280             :             {
    5281       14852 :                 NameData   *nm = (NameData *) DatumGetPointer(value);
    5282             : 
    5283       14852 :                 val = pstrdup(NameStr(*nm));
    5284       14852 :                 break;
    5285             :             }
    5286           0 :         default:
    5287           0 :             *failure = true;
    5288           0 :             return NULL;
    5289             :     }
    5290             : 
    5291       21474 :     mylocale = pg_newlocale_from_collation(collid);
    5292             : 
    5293       21474 :     if (!mylocale->collate_is_c)
    5294             :     {
    5295             :         char       *xfrmstr;
    5296             :         size_t      xfrmlen;
    5297             :         size_t      xfrmlen2 PG_USED_FOR_ASSERTS_ONLY;
    5298             : 
    5299             :         /*
    5300             :          * XXX: We could guess at a suitable output buffer size and only call
    5301             :          * pg_strxfrm() twice if our guess is too small.
    5302             :          *
    5303             :          * XXX: strxfrm doesn't support UTF-8 encoding on Win32, it can return
    5304             :          * bogus data or set an error. This is not really a problem unless it
    5305             :          * crashes since it will only give an estimation error and nothing
    5306             :          * fatal.
    5307             :          *
    5308             :          * XXX: we do not check pg_strxfrm_enabled(). On some platforms and in
    5309             :          * some cases, libc strxfrm() may return the wrong results, but that
    5310             :          * will only lead to an estimation error.
    5311             :          */
    5312          72 :         xfrmlen = pg_strxfrm(NULL, val, 0, mylocale);
    5313             : #ifdef WIN32
    5314             : 
    5315             :         /*
    5316             :          * On Windows, strxfrm returns INT_MAX when an error occurs. Instead
    5317             :          * of trying to allocate this much memory (and fail), just return the
    5318             :          * original string unmodified as if we were in the C locale.
    5319             :          */
    5320             :         if (xfrmlen == INT_MAX)
    5321             :             return val;
    5322             : #endif
    5323          72 :         xfrmstr = (char *) palloc(xfrmlen + 1);
    5324          72 :         xfrmlen2 = pg_strxfrm(xfrmstr, val, xfrmlen + 1, mylocale);
    5325             : 
    5326             :         /*
    5327             :          * Some systems (e.g., glibc) can return a smaller value from the
    5328             :          * second call than the first; thus the Assert must be <= not ==.
    5329             :          */
    5330             :         Assert(xfrmlen2 <= xfrmlen);
    5331          72 :         pfree(val);
    5332          72 :         val = xfrmstr;
    5333             :     }
    5334             : 
    5335       21474 :     return val;
    5336             : }
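                     :
                     : /*
                     :  * The point of the pg_strxfrm() transform above: comparing the
                     :  * transformed strings bytewise (as strcmp would) yields the same order
                     :  * as comparing the originals under the locale's collation, so the
                     :  * byte-fraction arithmetic in convert_string_to_scalar() remains
                     :  * meaningful for non-C locales.
                     :  */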
    5337             : 
    5338             : /*
    5339             :  * Do convert_to_scalar()'s work for any bytea data type.
    5340             :  *
    5341             :  * Very similar to convert_string_to_scalar except we can't assume
    5342             :  * null-termination and therefore pass explicit lengths around.
    5343             :  *
    5344             :  * Also, assumptions about likely "normal" ranges of characters have been
    5345             :  * removed - a data range of 0..255 is always used, for now.  (Perhaps
    5346             :  * someday we will add information about actual byte data range to
    5347             :  * pg_statistic.)
    5348             :  */
    5349             : static void
    5350           0 : convert_bytea_to_scalar(Datum value,
    5351             :                         double *scaledvalue,
    5352             :                         Datum lobound,
    5353             :                         double *scaledlobound,
    5354             :                         Datum hibound,
    5355             :                         double *scaledhibound)
    5356             : {
    5357           0 :     bytea      *valuep = DatumGetByteaPP(value);
    5358           0 :     bytea      *loboundp = DatumGetByteaPP(lobound);
    5359           0 :     bytea      *hiboundp = DatumGetByteaPP(hibound);
    5360             :     int         rangelo,
    5361             :                 rangehi,
    5362           0 :                 valuelen = VARSIZE_ANY_EXHDR(valuep),
    5363           0 :                 loboundlen = VARSIZE_ANY_EXHDR(loboundp),
    5364           0 :                 hiboundlen = VARSIZE_ANY_EXHDR(hiboundp),
    5365             :                 i,
    5366             :                 minlen;
    5367           0 :     unsigned char *valstr = (unsigned char *) VARDATA_ANY(valuep);
    5368           0 :     unsigned char *lostr = (unsigned char *) VARDATA_ANY(loboundp);
    5369           0 :     unsigned char *histr = (unsigned char *) VARDATA_ANY(hiboundp);
    5370             : 
    5371             :     /*
    5372             :      * Assume bytea data is uniformly distributed across all byte values.
    5373             :      */
    5374           0 :     rangelo = 0;
    5375           0 :     rangehi = 255;
    5376             : 
    5377             :     /*
    5378             :      * Now strip any common prefix of the three strings.
    5379             :      */
    5380           0 :     minlen = Min(Min(valuelen, loboundlen), hiboundlen);
    5381           0 :     for (i = 0; i < minlen; i++)
    5382             :     {
    5383           0 :         if (*lostr != *histr || *lostr != *valstr)
    5384             :             break;
    5385           0 :         lostr++, histr++, valstr++;
    5386           0 :         loboundlen--, hiboundlen--, valuelen--;
    5387             :     }
    5388             : 
    5389             :     /*
    5390             :      * Now we can do the conversions.
    5391             :      */
    5392           0 :     *scaledvalue = convert_one_bytea_to_scalar(valstr, valuelen, rangelo, rangehi);
    5393           0 :     *scaledlobound = convert_one_bytea_to_scalar(lostr, loboundlen, rangelo, rangehi);
    5394           0 :     *scaledhibound = convert_one_bytea_to_scalar(histr, hiboundlen, rangelo, rangehi);
    5395           0 : }
    5396             : 
    5397             : static double
    5398           0 : convert_one_bytea_to_scalar(unsigned char *value, int valuelen,
    5399             :                             int rangelo, int rangehi)
    5400             : {
    5401             :     double      num,
    5402             :                 denom,
    5403             :                 base;
    5404             : 
    5405           0 :     if (valuelen <= 0)
    5406           0 :         return 0.0;             /* empty string has scalar value 0 */
    5407             : 
    5408             :     /*
    5409             :      * Since base is 256, need not consider more than about 10 chars (even
    5410             :      * this many seems like overkill)
    5411             :      */
    5412           0 :     if (valuelen > 10)
    5413           0 :         valuelen = 10;
    5414             : 
    5415             :     /* Convert initial characters to fraction */
    5416           0 :     base = rangehi - rangelo + 1;
    5417           0 :     num = 0.0;
    5418           0 :     denom = base;
    5419           0 :     while (valuelen-- > 0)
    5420             :     {
    5421           0 :         int         ch = *value++;
    5422             : 
    5423           0 :         if (ch < rangelo)
    5424           0 :             ch = rangelo - 1;
    5425           0 :         else if (ch > rangehi)
    5426           0 :             ch = rangehi + 1;
    5427           0 :         num += ((double) (ch - rangelo)) / denom;
    5428           0 :         denom *= base;
    5429             :     }
    5430             : 
    5431           0 :     return num;
    5432             : }
    5433             : 
    5434             : /*
    5435             :  * Do convert_to_scalar()'s work for any timevalue data type.
    5436             :  *
    5437             :  * On failure (e.g., unsupported typid), set *failure to true;
    5438             :  * otherwise, that variable is not changed.
    5439             :  */
    5440             : static double
    5441           0 : convert_timevalue_to_scalar(Datum value, Oid typid, bool *failure)
    5442             : {
    5443           0 :     switch (typid)
    5444             :     {
    5445           0 :         case TIMESTAMPOID:
    5446           0 :             return DatumGetTimestamp(value);
    5447           0 :         case TIMESTAMPTZOID:
    5448           0 :             return DatumGetTimestampTz(value);
    5449           0 :         case DATEOID:
    5450           0 :             return date2timestamp_no_overflow(DatumGetDateADT(value));
    5451           0 :         case INTERVALOID:
    5452             :             {
    5453           0 :                 Interval   *interval = DatumGetIntervalP(value);
    5454             : 
    5455             :                 /*
    5456             :                  * Convert the month part of Interval to days using assumed
    5457             :                  * average month length of 365.25/12.0 days.  Not too
    5458             :                  * accurate, but plenty good enough for our purposes.
    5459             :                  *
    5460             :                  * This also works for infinite intervals, which just have all
    5461             :                  * fields set to INT_MIN/INT_MAX, and so will produce a result
    5462             :                  * smaller/larger than any finite interval.
    5463             :                  */
    5464           0 :                 return interval->time + interval->day * (double) USECS_PER_DAY +
    5465           0 :                     interval->month * ((DAYS_PER_YEAR / (double) MONTHS_PER_YEAR) * USECS_PER_DAY);
    5466             :             }
    5467           0 :         case TIMEOID:
    5468           0 :             return DatumGetTimeADT(value);
    5469           0 :         case TIMETZOID:
    5470             :             {
    5471           0 :                 TimeTzADT  *timetz = DatumGetTimeTzADTP(value);
    5472             : 
    5473             :                 /* use GMT-equivalent time */
    5474           0 :                 return (double) (timetz->time + (timetz->zone * 1000000.0));
    5475             :             }
    5476             :     }
    5477             : 
    5478           0 :     *failure = true;
    5479           0 :     return 0;
    5480             : }
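                     :
                     : /*
                     :  * Worked example for the interval case: '1 mon 2 days' scales to
                     :  * 1 * (365.25/12) * USECS_PER_DAY + 2 * USECS_PER_DAY, i.e. roughly
                     :  * 32.44 days expressed in microseconds (about 2.8e12), which compares
                     :  * correctly against other intervals converted the same way.
                     :  */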
    5481             : 
    5482             : 
    5483             : /*
    5484             :  * get_restriction_variable
    5485             :  *      Examine the args of a restriction clause to see if it's of the
    5486             :  *      form (variable op pseudoconstant) or (pseudoconstant op variable),
    5487             :  *      where "variable" could be either a Var or an expression in vars of a
    5488             :  *      single relation.  If so, extract information about the variable,
    5489             :  *      and also indicate which side it was on and the other argument.
    5490             :  *
    5491             :  * Inputs:
    5492             :  *  root: the planner info
    5493             :  *  args: clause argument list
    5494             :  *  varRelid: see specs for restriction selectivity functions
    5495             :  *
    5496             :  * Outputs: (these are valid only if true is returned)
    5497             :  *  *vardata: gets information about variable (see examine_variable)
    5498             :  *  *other: gets other clause argument, aggressively reduced to a constant
    5499             :  *  *varonleft: set true if variable is on the left, false if on the right
    5500             :  *
    5501             :  * Returns true if a variable is identified, otherwise false.
    5502             :  *
    5503             :  * Note: if there are Vars on both sides of the clause, we must fail, because
    5504             :  * callers are expecting that the other side will act like a pseudoconstant.
    5505             :  */
    5506             : bool
    5507      821598 : get_restriction_variable(PlannerInfo *root, List *args, int varRelid,
    5508             :                          VariableStatData *vardata, Node **other,
    5509             :                          bool *varonleft)
    5510             : {
    5511             :     Node       *left,
    5512             :                *right;
    5513             :     VariableStatData rdata;
    5514             : 
    5515             :     /* Fail if not a binary opclause (probably shouldn't happen) */
    5516      821598 :     if (list_length(args) != 2)
    5517           0 :         return false;
    5518             : 
    5519      821598 :     left = (Node *) linitial(args);
    5520      821598 :     right = (Node *) lsecond(args);
    5521             : 
    5522             :     /*
    5523             :      * Examine both sides.  Note that when varRelid is nonzero, Vars of other
    5524             :      * relations will be treated as pseudoconstants.
    5525             :      */
    5526      821598 :     examine_variable(root, left, varRelid, vardata);
    5527      821598 :     examine_variable(root, right, varRelid, &rdata);
    5528             : 
    5529             :     /*
    5530             :      * If one side is a variable and the other not, we win.
    5531             :      */
    5532      821598 :     if (vardata->rel && rdata.rel == NULL)
    5533             :     {
    5534      733046 :         *varonleft = true;
    5535      733046 :         *other = estimate_expression_value(root, rdata.var);
    5536             :         /* Assume we need no ReleaseVariableStats(rdata) here */
    5537      733040 :         return true;
    5538             :     }
    5539             : 
    5540       88552 :     if (vardata->rel == NULL && rdata.rel)
    5541             :     {
    5542       82340 :         *varonleft = false;
    5543       82340 :         *other = estimate_expression_value(root, vardata->var);
    5544             :         /* Assume we need no ReleaseVariableStats(*vardata) here */
    5545       82340 :         *vardata = rdata;
    5546       82340 :         return true;
    5547             :     }
    5548             : 
    5549             :     /* Oops, clause has wrong structure (probably var op var) */
    5550        6212 :     ReleaseVariableStats(*vardata);
    5551        6212 :     ReleaseVariableStats(rdata);
    5552             : 
    5553        6212 :     return false;
    5554             : }
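                     :
                     : /*
                     :  * A minimal sketch of how a restriction estimator typically uses this
                     :  * helper.  The function name and the DEFAULT_EQ_SEL fallback below are
                     :  * illustrative choices, not something defined by this file.
                     :  */
                     : static Selectivity
                     : example_restriction_sel(PlannerInfo *root, Oid operator, List *args,
                     :                         int varRelid)
                     : {
                     :     VariableStatData vardata;
                     :     Node       *other;
                     :     bool        varonleft;
                     :     Selectivity selec = DEFAULT_EQ_SEL;
                     :
                     :     if (!get_restriction_variable(root, args, varRelid,
                     :                                   &vardata, &other, &varonleft))
                     :         return selec;       /* can't isolate a single variable side */
                     :
                     :     if (IsA(other, Const) && !((Const *) other)->constisnull)
                     :     {
                     :         /* consult vardata.statsTuple here to refine the estimate */
                     :     }
                     :
                     :     ReleaseVariableStats(vardata);
                     :     return selec;
                     : }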
    5555             : 
    5556             : /*
    5557             :  * get_join_variables
    5558             :  *      Apply examine_variable() to each side of a join clause.
    5559             :  *      Also, attempt to identify whether the join clause has the same
    5560             :  *      or reversed sense compared to the SpecialJoinInfo.
    5561             :  *
    5562             :  * We consider the join clause "normal" if it is "lhs_var OP rhs_var",
    5563             :  * or "reversed" if it is "rhs_var OP lhs_var".  In complicated cases
    5564             :  * where we can't tell for sure, we default to assuming it's normal.
    5565             :  */
    5566             : void
    5567      269792 : get_join_variables(PlannerInfo *root, List *args, SpecialJoinInfo *sjinfo,
    5568             :                    VariableStatData *vardata1, VariableStatData *vardata2,
    5569             :                    bool *join_is_reversed)
    5570             : {
    5571             :     Node       *left,
    5572             :                *right;
    5573             : 
    5574      269792 :     if (list_length(args) != 2)
    5575           0 :         elog(ERROR, "join operator should take two arguments");
    5576             : 
    5577      269792 :     left = (Node *) linitial(args);
    5578      269792 :     right = (Node *) lsecond(args);
    5579             : 
    5580      269792 :     examine_variable(root, left, 0, vardata1);
    5581      269792 :     examine_variable(root, right, 0, vardata2);
    5582             : 
    5583      539330 :     if (vardata1->rel &&
    5584      269538 :         bms_is_subset(vardata1->rel->relids, sjinfo->syn_righthand))
    5585       91058 :         *join_is_reversed = true;   /* var1 is on RHS */
    5586      357222 :     else if (vardata2->rel &&
    5587      178488 :              bms_is_subset(vardata2->rel->relids, sjinfo->syn_lefthand))
    5588         206 :         *join_is_reversed = true;   /* var2 is on LHS */
    5589             :     else
    5590      178528 :         *join_is_reversed = false;
    5591      269792 : }
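                     :
                     : /*
                     :  * Illustration: with syn_lefthand = {a} and syn_righthand = {b}, the
                     :  * clause a.x = b.y comes out as "normal" (join_is_reversed = false),
                     :  * while b.y = a.x comes out reversed, letting callers line the operands
                     :  * up with the join's LHS and RHS.
                     :  */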
    5592             : 
    5593             : /* statext_expressions_load copies the tuple, so just pfree it. */
    5594             : static void
    5595        1650 : ReleaseDummy(HeapTuple tuple)
    5596             : {
    5597        1650 :     pfree(tuple);
    5598        1650 : }
    5599             : 
    5600             : /*
    5601             :  * examine_variable
    5602             :  *      Try to look up statistical data about an expression.
    5603             :  *      Fill in a VariableStatData struct to describe the expression.
    5604             :  *
    5605             :  * Inputs:
    5606             :  *  root: the planner info
    5607             :  *  node: the expression tree to examine
    5608             :  *  varRelid: see specs for restriction selectivity functions
    5609             :  *
    5610             :  * Outputs: *vardata is filled as follows:
    5611             :  *  var: the input expression (with any phvs or binary relabeling stripped,
    5612             :  *      if it is or contains a variable; but otherwise unchanged)
    5613             :  *  rel: RelOptInfo for relation containing variable; NULL if expression
    5614             :  *      contains no Vars (NOTE this could point to a RelOptInfo of a
    5615             :  *      subquery, not one in the current query).
    5616             :  *  statsTuple: the pg_statistic entry for the variable, if one exists;
    5617             :  *      otherwise NULL.
    5618             :  *  freefunc: pointer to a function to release statsTuple with.
    5619             :  *  vartype: exposed type of the expression; this should always match
    5620             :  *      the declared input type of the operator we are estimating for.
    5621             :  *  atttype, atttypmod: actual type/typmod of the "var" expression.  This is
    5622             :  *      commonly the same as the exposed type of the variable argument,
    5623             :  *      but can be different in binary-compatible-type cases.
    5624             :  *  isunique: true if we were able to match the var to a unique index, a
    5625             :  *      single-column DISTINCT or GROUP-BY clause, implying its values are
    5626             :  *      unique for this query.  (Caution: this should be trusted for
    5627             :  *      statistical purposes only, since we do not check indimmediate nor
    5628             :  *      verify that the exact same definition of equality applies.)
    5629             :  *  acl_ok: true if current user has permission to read all table rows from
    5630             :  *      the column(s) underlying the pg_statistic entry.  This is consulted by
    5631             :  *      statistic_proc_security_check().
    5632             :  *
    5633             :  * Caller is responsible for doing ReleaseVariableStats() before exiting.
    5634             :  */
    5635             : void
    5636     3298186 : examine_variable(PlannerInfo *root, Node *node, int varRelid,
    5637             :                  VariableStatData *vardata)
    5638             : {
    5639             :     Node       *basenode;
    5640             :     Relids      varnos;
    5641             :     Relids      basevarnos;
    5642             :     RelOptInfo *onerel;
    5643             : 
    5644             :     /* Make sure we don't return dangling pointers in vardata */
    5645    23087302 :     MemSet(vardata, 0, sizeof(VariableStatData));
    5646             : 
    5647             :     /* Save the exposed type of the expression */
    5648     3298186 :     vardata->vartype = exprType(node);
    5649             : 
    5650             :     /*
    5651             :      * PlaceHolderVars are transparent for the purpose of statistics lookup;
    5652             :      * they do not alter the value distribution of the underlying expression.
    5653             :      * However, they can obscure the structure, preventing us from recognizing
    5654             :      * matches to base columns, index expressions, or extended statistics.  So
    5655             :      * strip them out first.
    5656             :      */
    5657     3298186 :     basenode = strip_all_phvs_deep(root, node);
    5658             : 
    5659             :     /*
    5660             :      * Look inside any binary-compatible relabeling.  We need to handle nested
    5661             :      * RelabelType nodes here, because the prior stripping of PlaceHolderVars
    5662             :      * may have brought separate RelabelTypes into adjacency.
    5663             :      */
    5664     3346764 :     while (IsA(basenode, RelabelType))
    5665       48578 :         basenode = (Node *) ((RelabelType *) basenode)->arg;
    5666             : 
    5667             :     /* Fast path for a simple Var */
    5668     3298186 :     if (IsA(basenode, Var) &&
    5669      795670 :         (varRelid == 0 || varRelid == ((Var *) basenode)->varno))
    5670             :     {
    5671     2362614 :         Var        *var = (Var *) basenode;
    5672             : 
    5673             :         /* Set up result fields other than the stats tuple */
    5674     2362614 :         vardata->var = basenode; /* return Var without phvs or relabeling */
    5675     2362614 :         vardata->rel = find_base_rel(root, var->varno);
    5676     2362614 :         vardata->atttype = var->vartype;
    5677     2362614 :         vardata->atttypmod = var->vartypmod;
    5678     2362614 :         vardata->isunique = has_unique_index(vardata->rel, var->varattno);
    5679             : 
    5680             :         /* Try to locate some stats */
    5681     2362614 :         examine_simple_variable(root, var, vardata);
    5682             : 
    5683     2362614 :         return;
    5684             :     }
    5685             : 
    5686             :     /*
    5687             :      * Okay, it's a more complicated expression.  Determine variable
    5688             :      * membership.  Note that when varRelid isn't zero, only vars of that
    5689             :      * relation are considered "real" vars.
    5690             :      */
    5691      935572 :     varnos = pull_varnos(root, basenode);
    5692      935572 :     basevarnos = bms_difference(varnos, root->outer_join_rels);
    5693             : 
    5694      935572 :     onerel = NULL;
    5695             : 
    5696      935572 :     if (bms_is_empty(basevarnos))
    5697             :     {
    5698             :         /* No Vars at all ... must be pseudo-constant clause */
    5699             :     }
    5700             :     else
    5701             :     {
    5702             :         int         relid;
    5703             : 
    5704             :         /* Check if the expression is in vars of a single base relation */
    5705      473286 :         if (bms_get_singleton_member(basevarnos, &relid))
    5706             :         {
    5707      468458 :             if (varRelid == 0 || varRelid == relid)
    5708             :             {
    5709       69062 :                 onerel = find_base_rel(root, relid);
    5710       69062 :                 vardata->rel = onerel;
    5711       69062 :                 node = basenode;    /* strip any phvs or relabeling */
    5712             :             }
    5713             :             /* else treat it as a constant */
    5714             :         }
    5715             :         else
    5716             :         {
    5717             :             /* varnos has multiple relids */
    5718        4828 :             if (varRelid == 0)
    5719             :             {
    5720             :                 /* treat it as a variable of a join relation */
    5721        3516 :                 vardata->rel = find_join_rel(root, varnos);
    5722        3516 :                 node = basenode;    /* strip any phvs or relabeling */
    5723             :             }
    5724        1312 :             else if (bms_is_member(varRelid, varnos))
    5725             :             {
    5726             :                 /* ignore the vars belonging to other relations */
    5727        1198 :                 vardata->rel = find_base_rel(root, varRelid);
    5728        1198 :                 node = basenode;    /* strip any phvs or relabeling */
    5729             :                 /* note: no point in expressional-index search here */
    5730             :             }
    5731             :             /* else treat it as a constant */
    5732             :         }
    5733             :     }
    5734             : 
    5735      935572 :     bms_free(basevarnos);
    5736             : 
    5737      935572 :     vardata->var = node;
    5738      935572 :     vardata->atttype = exprType(node);
    5739      935572 :     vardata->atttypmod = exprTypmod(node);
    5740             : 
    5741      935572 :     if (onerel)
    5742             :     {
    5743             :         /*
    5744             :          * We have an expression in vars of a single relation.  Try to match
    5745             :          * it to expressional index columns, in hopes of finding some
    5746             :          * statistics.
    5747             :          *
    5748             :          * Note that we consider all index columns including INCLUDE columns,
    5749             :          * since there could be stats for such columns.  But the test for
    5750             :          * uniqueness needs to be warier.
    5751             :          *
    5752             :          * XXX it's conceivable that there are multiple matches with different
    5753             :          * index opfamilies; if so, we need to pick one that matches the
    5754             :          * operator we are estimating for.  FIXME later.
    5755             :          */
    5756             :         ListCell   *ilist;
    5757             :         ListCell   *slist;
    5758             : 
    5759             :         /*
    5760             :          * The nullingrels bits within the expression could prevent us from
    5761             :          * matching it to expressional index columns or to the expressions in
    5762             :          * extended statistics.  So strip them out first.
    5763             :          */
    5764       69062 :         if (bms_overlap(varnos, root->outer_join_rels))
    5765        2522 :             node = remove_nulling_relids(node, root->outer_join_rels, NULL);
    5766             : 
    5767      150262 :         foreach(ilist, onerel->indexlist)
    5768             :         {
    5769       84182 :             IndexOptInfo *index = (IndexOptInfo *) lfirst(ilist);
    5770             :             ListCell   *indexpr_item;
    5771             :             int         pos;
    5772             : 
    5773       84182 :             indexpr_item = list_head(index->indexprs);
    5774       84182 :             if (indexpr_item == NULL)
    5775       79310 :                 continue;       /* no expressions here... */
    5776             : 
    5777        6834 :             for (pos = 0; pos < index->ncolumns; pos++)
    5778             :             {
    5779        4944 :                 if (index->indexkeys[pos] == 0)
    5780             :                 {
    5781             :                     Node       *indexkey;
    5782             : 
    5783        4872 :                     if (indexpr_item == NULL)
    5784           0 :                         elog(ERROR, "too few entries in indexprs list");
    5785        4872 :                     indexkey = (Node *) lfirst(indexpr_item);
    5786        4872 :                     if (indexkey && IsA(indexkey, RelabelType))
    5787           0 :                         indexkey = (Node *) ((RelabelType *) indexkey)->arg;
    5788        4872 :                     if (equal(node, indexkey))
    5789             :                     {
    5790             :                         /*
    5791             :                          * Found a match ... is it a unique index? Tests here
    5792             :                          * should match has_unique_index().
    5793             :                          */
    5794        3618 :                         if (index->unique &&
    5795         438 :                             index->nkeycolumns == 1 &&
    5796         438 :                             pos == 0 &&
    5797         438 :                             (index->indpred == NIL || index->predOK))
    5798         438 :                             vardata->isunique = true;
    5799             : 
    5800             :                         /*
    5801             :                          * Has it got stats?  We only consider stats for
    5802             :                          * non-partial indexes, since partial indexes probably
    5803             :                          * don't reflect whole-relation statistics; the above
    5804             :                          * check for uniqueness is the only info we take from
    5805             :                          * a partial index.
    5806             :                          *
    5807             :                          * An index stats hook, however, must make its own
    5808             :                          * decisions about what to do with partial indexes.
    5809             :                          */
    5810        3618 :                         if (get_index_stats_hook &&
    5811           0 :                             (*get_index_stats_hook) (root, index->indexoid,
    5812           0 :                                                      pos + 1, vardata))
    5813             :                         {
    5814             :                             /*
    5815             :                              * The hook took control of acquiring a stats
    5816             :                              * tuple.  If it did supply a tuple, it'd better
    5817             :                              * have supplied a freefunc.
    5818             :                              */
    5819           0 :                             if (HeapTupleIsValid(vardata->statsTuple) &&
    5820           0 :                                 !vardata->freefunc)
    5821           0 :                                 elog(ERROR, "no function provided to release variable stats with");
    5822             :                         }
    5823        3618 :                         else if (index->indpred == NIL)
    5824             :                         {
    5825        3618 :                             vardata->statsTuple =
    5826        7236 :                                 SearchSysCache3(STATRELATTINH,
    5827             :                                                 ObjectIdGetDatum(index->indexoid),
    5828        3618 :                                                 Int16GetDatum(pos + 1),
    5829             :                                                 BoolGetDatum(false));
    5830        3618 :                             vardata->freefunc = ReleaseSysCache;
    5831             : 
    5832        3618 :                             if (HeapTupleIsValid(vardata->statsTuple))
    5833             :                             {
    5834             :                                 /*
    5835             :                                  * Test if user has permission to access all
    5836             :                                  * rows from the index's table.
    5837             :                                  *
    5838             :                                  * For simplicity, we insist on the whole
    5839             :                                  * table being selectable, rather than trying
    5840             :                                  * to identify which column(s) the index
    5841             :                                  * depends on.
    5842             :                                  *
    5843             :                                  * Note that for an inheritance child,
    5844             :                                  * permissions are checked on the inheritance
    5845             :                                  * root parent, and whole-table select
    5846             :                                  * privilege on the parent doesn't quite
    5847             :                                  * guarantee that the user could read all
    5848             :                                  * columns of the child.  But in practice it's
    5849             :                                  * unlikely that any interesting security
    5850             :                                  * violation could result from allowing access
    5851             :                                  * to the expression index's stats, so we
    5852             :                                  * allow it anyway.  See similar code in
    5853             :                                  * examine_simple_variable() for additional
    5854             :                                  * comments.
    5855             :                                  */
    5856        2982 :                                 vardata->acl_ok =
    5857        2982 :                                     all_rows_selectable(root,
    5858        2982 :                                                         index->rel->relid,
    5859             :                                                         NULL);
    5860             :                             }
    5861             :                             else
    5862             :                             {
    5863             :                                 /* suppress leakproofness checks later */
    5864         636 :                                 vardata->acl_ok = true;
    5865             :                             }
    5866             :                         }
    5867        3618 :                         if (vardata->statsTuple)
    5868        2982 :                             break;
    5869             :                     }
    5870        1890 :                     indexpr_item = lnext(index->indexprs, indexpr_item);
    5871             :                 }
    5872             :             }
    5873        4872 :             if (vardata->statsTuple)
    5874        2982 :                 break;
    5875             :         }
    5876             : 
    5877             :         /*
    5878             :          * Search extended statistics for one with a matching expression.
    5879             :          * There might be multiple ones, so just grab the first one. In the
    5880             :          * future, we might consider the statistics target (and pick the most
    5881             :          * accurate statistics) and maybe some other parameters.
    5882             :          */
    5883       73178 :         foreach(slist, onerel->statlist)
    5884             :         {
    5885        4404 :             StatisticExtInfo *info = (StatisticExtInfo *) lfirst(slist);
    5886        4404 :             RangeTblEntry *rte = planner_rt_fetch(onerel->relid, root);
    5887             :             ListCell   *expr_item;
    5888             :             int         pos;
    5889             : 
    5890             :             /*
    5891             :              * Stop once we've found statistics for the expression (either
    5892             :              * from extended stats, or for an index in the preceding loop).
    5893             :              */
    5894        4404 :             if (vardata->statsTuple)
    5895         288 :                 break;
    5896             : 
    5897             :             /* skip stats without per-expression stats */
    5898        4116 :             if (info->kind != STATS_EXT_EXPRESSIONS)
    5899        2106 :                 continue;
    5900             : 
    5901             :             /* skip stats with mismatching stxdinherit value */
    5902        2010 :             if (info->inherit != rte->inh)
    5903           6 :                 continue;
    5904             : 
    5905        2004 :             pos = 0;
    5906        3306 :             foreach(expr_item, info->exprs)
    5907             :             {
    5908        2952 :                 Node       *expr = (Node *) lfirst(expr_item);
    5909             : 
    5910             :                 Assert(expr);
    5911             : 
    5912             :                 /* strip RelabelType before comparing it */
    5913        2952 :                 if (expr && IsA(expr, RelabelType))
    5914           0 :                     expr = (Node *) ((RelabelType *) expr)->arg;
    5915             : 
    5916             :                 /* found a match, see if we can extract pg_statistic row */
    5917        2952 :                 if (equal(node, expr))
    5918             :                 {
    5919             :                     /*
    5920             :                      * XXX Not sure if we should cache the tuple somewhere.
    5921             :                      * Now we just create a new copy every time.
    5922             :                      */
    5923        1650 :                     vardata->statsTuple =
    5924        1650 :                         statext_expressions_load(info->statOid, rte->inh, pos);
    5925             : 
    5926        1650 :                     vardata->freefunc = ReleaseDummy;
    5927             : 
    5928             :                     /*
    5929             :                      * Test if user has permission to access all rows from the
    5930             :                      * table.
    5931             :                      *
    5932             :                      * For simplicity, we insist on the whole table being
    5933             :                      * selectable, rather than trying to identify which
    5934             :                      * column(s) the statistics object depends on.
    5935             :                      *
    5936             :                      * Note that for an inheritance child, permissions are
    5937             :                      * checked on the inheritance root parent, and whole-table
    5938             :                      * select privilege on the parent doesn't quite guarantee
    5939             :                      * that the user could read all columns of the child.  But
    5940             :                      * in practice it's unlikely that any interesting security
    5941             :                      * violation could result from allowing access to the
    5942             :                      * expression stats, so we allow it anyway.  See similar
    5943             :                      * code in examine_simple_variable() for additional
    5944             :                      * comments.
    5945             :                      */
    5946        1650 :                     vardata->acl_ok = all_rows_selectable(root,
    5947             :                                                           onerel->relid,
    5948             :                                                           NULL);
    5949             : 
    5950        1650 :                     break;
    5951             :                 }
    5952             : 
    5953        1302 :                 pos++;
    5954             :             }
    5955             :         }
    5956             :     }
    5957             : 
    5958      935572 :     bms_free(varnos);
    5959             : }
    5960             : 
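/*
 * Illustrative sketch, not part of selfuncs.c proper: the caller contract
 * for examine_variable() in miniature.  Fill a stack-allocated
 * VariableStatData, use it, then always pair the call with
 * ReleaseVariableStats().  "my_var_is_unique" is a hypothetical helper shown
 * only to demonstrate the pattern.
 */
static bool
my_var_is_unique(PlannerInfo *root, Node *node, int varRelid)
{
    VariableStatData vardata;
    bool        result;

    /* Fills every field of vardata; statsTuple may legitimately stay NULL */
    examine_variable(root, node, varRelid, &vardata);

    /* isunique is meant to be trusted for statistical purposes only */
    result = vardata.isunique;

    /* Releases the stats tuple if one was acquired; a no-op otherwise */
    ReleaseVariableStats(vardata);

    return result;
}
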
    5961             : /*
    5962             :  * strip_all_phvs_deep
    5963             :  *      Deeply strip all PlaceHolderVars in an expression.
    5964             :  *
    5965             :  * As a performance optimization, we first use a lightweight walker to check
    5966             :  * for the presence of any PlaceHolderVars.  The expensive mutator is invoked
    5967             :  * only if a PlaceHolderVar is found, avoiding unnecessary memory allocation
    5968             :  * and tree copying in the common case where no PlaceHolderVars are present.
    5969             :  */
    5970             : static Node *
    5971     3298186 : strip_all_phvs_deep(PlannerInfo *root, Node *node)
    5972             : {
    5973             :     /* If there are no PHVs anywhere, we needn't work hard */
    5974     3298186 :     if (root->glob->lastPHId == 0)
    5975     3263592 :         return node;
    5976             : 
    5977       34594 :     if (!contain_placeholder_walker(node, NULL))
    5978       30260 :         return node;
    5979        4334 :     return strip_all_phvs_mutator(node, NULL);
    5980             : }
    5981             : 
    5982             : /*
    5983             :  * contain_placeholder_walker
    5984             :  *      Lightweight walker to check if an expression contains any
    5985             :  *      PlaceHolderVars
    5986             :  */
    5987             : static bool
    5988       39192 : contain_placeholder_walker(Node *node, void *context)
    5989             : {
    5990       39192 :     if (node == NULL)
    5991         210 :         return false;
    5992       38982 :     if (IsA(node, PlaceHolderVar))
    5993        4334 :         return true;
    5994             : 
    5995       34648 :     return expression_tree_walker(node, contain_placeholder_walker, context);
    5996             : }
    5997             : 
    5998             : /*
    5999             :  * strip_all_phvs_mutator
    6000             :  *      Mutator to deeply strip all PlaceHolderVars
    6001             :  */
    6002             : static Node *
    6003       11686 : strip_all_phvs_mutator(Node *node, void *context)
    6004             : {
    6005       11686 :     if (node == NULL)
    6006          48 :         return NULL;
    6007       11638 :     if (IsA(node, PlaceHolderVar))
    6008             :     {
    6009             :         /* Strip it and recurse into its contained expression */
    6010        4484 :         PlaceHolderVar *phv = (PlaceHolderVar *) node;
    6011             : 
    6012        4484 :         return strip_all_phvs_mutator((Node *) phv->phexpr, context);
    6013             :     }
    6014             : 
    6015        7154 :     return expression_tree_mutator(node, strip_all_phvs_mutator, context);
    6016             : }
    6017             : 
    6018             : /*
    6019             :  * examine_simple_variable
    6020             :  *      Handle a simple Var for examine_variable
    6021             :  *
    6022             :  * This is split out as a subroutine so that we can recurse to deal with
    6023             :  * Vars referencing subqueries (either sub-SELECT-in-FROM or CTE style).
    6024             :  *
    6025             :  * We already filled in all the fields of *vardata except for the stats tuple.
    6026             :  */
    6027             : static void
    6028     2368988 : examine_simple_variable(PlannerInfo *root, Var *var,
    6029             :                         VariableStatData *vardata)
    6030             : {
    6031     2368988 :     RangeTblEntry *rte = root->simple_rte_array[var->varno];
    6032             : 
    6033             :     Assert(IsA(rte, RangeTblEntry));
    6034             : 
    6035     2368988 :     if (get_relation_stats_hook &&
    6036           0 :         (*get_relation_stats_hook) (root, rte, var->varattno, vardata))
    6037             :     {
    6038             :         /*
    6039             :          * The hook took control of acquiring a stats tuple.  If it did supply
    6040             :          * a tuple, it'd better have supplied a freefunc.
    6041             :          */
    6042           0 :         if (HeapTupleIsValid(vardata->statsTuple) &&
    6043           0 :             !vardata->freefunc)
    6044           0 :             elog(ERROR, "no function provided to release variable stats with");
    6045             :     }
    6046     2368988 :     else if (rte->rtekind == RTE_RELATION)
    6047             :     {
    6048             :         /*
    6049             :          * Plain table or parent of an inheritance appendrel, so look up the
    6050             :          * column in pg_statistic
    6051             :          */
    6052     2245734 :         vardata->statsTuple = SearchSysCache3(STATRELATTINH,
    6053             :                                               ObjectIdGetDatum(rte->relid),
    6054     2245734 :                                               Int16GetDatum(var->varattno),
    6055     2245734 :                                               BoolGetDatum(rte->inh));
    6056     2245734 :         vardata->freefunc = ReleaseSysCache;
    6057             : 
    6058     2245734 :         if (HeapTupleIsValid(vardata->statsTuple))
    6059             :         {
    6060             :             /*
    6061             :              * Test if user has permission to read all rows from this column.
    6062             :              *
    6063             :              * This requires that the user has the appropriate SELECT
    6064             :              * privileges and that there are no securityQuals from security
    6065             :              * barrier views or RLS policies.  If that's not the case, then we
    6066             :              * only permit leakproof functions to be passed pg_statistic data
    6067             :              * in vardata, otherwise the functions might reveal data that the
    6068             :              * user doesn't have permission to see --- see
    6069             :              * statistic_proc_security_check().
    6070             :              */
    6071     1664084 :             vardata->acl_ok =
    6072     1664084 :                 all_rows_selectable(root, var->varno,
    6073     1664084 :                                     bms_make_singleton(var->varattno - FirstLowInvalidHeapAttributeNumber));
    6074             :         }
    6075             :         else
    6076             :         {
    6077             :             /* suppress any possible leakproofness checks later */
    6078      581650 :             vardata->acl_ok = true;
    6079             :         }
    6080             :     }
    6081      123254 :     else if ((rte->rtekind == RTE_SUBQUERY && !rte->inh) ||
    6082      112898 :              (rte->rtekind == RTE_CTE && !rte->self_reference))
    6083             :     {
    6084             :         /*
    6085             :          * Plain subquery (not one that was converted to an appendrel) or
    6086             :          * non-recursive CTE.  In either case, we can try to find out what the
    6087             :          * Var refers to within the subquery.  We skip this for appendrel and
    6088             :          * recursive-CTE cases because any column stats we did find would
    6089             :          * likely not be very relevant.
    6090             :          */
    6091             :         PlannerInfo *subroot;
    6092             :         Query      *subquery;
    6093             :         List       *subtlist;
    6094             :         TargetEntry *ste;
    6095             : 
    6096             :         /*
    6097             :          * Punt if it's a whole-row var rather than a plain column reference.
    6098             :          */
    6099       17604 :         if (var->varattno == InvalidAttrNumber)
    6100           0 :             return;
    6101             : 
    6102             :         /*
    6103             :          * Otherwise, find the subquery's planner subroot.
    6104             :          */
    6105       17604 :         if (rte->rtekind == RTE_SUBQUERY)
    6106             :         {
    6107             :             RelOptInfo *rel;
    6108             : 
    6109             :             /*
    6110             :              * Fetch RelOptInfo for subquery.  Note that we don't change the
    6111             :              * rel returned in vardata, since caller expects it to be a rel of
    6112             :              * the caller's query level.  Because we might already be
    6113             :              * recursing, we can't use that rel pointer either, but have to
    6114             :              * look up the Var's rel afresh.
    6115             :              */
    6116       10356 :             rel = find_base_rel(root, var->varno);
    6117             : 
    6118       10356 :             subroot = rel->subroot;
    6119             :         }
    6120             :         else
    6121             :         {
    6122             :             /* CTE case is more difficult */
    6123             :             PlannerInfo *cteroot;
    6124             :             Index       levelsup;
    6125             :             int         ndx;
    6126             :             int         plan_id;
    6127             :             ListCell   *lc;
    6128             : 
    6129             :             /*
    6130             :              * Find the referenced CTE, and locate the subroot previously made
    6131             :              * for it.
    6132             :              */
    6133        7248 :             levelsup = rte->ctelevelsup;
    6134        7248 :             cteroot = root;
    6135       13634 :             while (levelsup-- > 0)
    6136             :             {
    6137        6386 :                 cteroot = cteroot->parent_root;
    6138        6386 :                 if (!cteroot)   /* shouldn't happen */
    6139           0 :                     elog(ERROR, "bad levelsup for CTE \"%s\"", rte->ctename);
    6140             :             }
    6141             : 
    6142             :             /*
    6143             :              * Note: cte_plan_ids can be shorter than cteList, if we are still
    6144             :              * working on planning the CTEs (ie, this is a side-reference from
    6145             :              * another CTE).  So we mustn't use forboth here.
    6146             :              */
    6147        7248 :             ndx = 0;
    6148        9470 :             foreach(lc, cteroot->parse->cteList)
    6149             :             {
    6150        9470 :                 CommonTableExpr *cte = (CommonTableExpr *) lfirst(lc);
    6151             : 
    6152        9470 :                 if (strcmp(cte->ctename, rte->ctename) == 0)
    6153        7248 :                     break;
    6154        2222 :                 ndx++;
    6155             :             }
    6156        7248 :             if (lc == NULL)     /* shouldn't happen */
    6157           0 :                 elog(ERROR, "could not find CTE \"%s\"", rte->ctename);
    6158        7248 :             if (ndx >= list_length(cteroot->cte_plan_ids))
    6159           0 :                 elog(ERROR, "could not find plan for CTE \"%s\"", rte->ctename);
    6160        7248 :             plan_id = list_nth_int(cteroot->cte_plan_ids, ndx);
    6161        7248 :             if (plan_id <= 0)
    6162           0 :                 elog(ERROR, "no plan was made for CTE \"%s\"", rte->ctename);
    6163        7248 :             subroot = list_nth(root->glob->subroots, plan_id - 1);
    6164             :         }
    6165             : 
    6166             :         /* If the subquery hasn't been planned yet, we have to punt */
    6167       17604 :         if (subroot == NULL)
    6168           0 :             return;
    6169             :         Assert(IsA(subroot, PlannerInfo));
    6170             : 
    6171             :         /*
    6172             :          * We must use the subquery parsetree as mangled by the planner, not
    6173             :          * the raw version from the RTE, because we need a Var that will refer
    6174             :          * to the subroot's live RelOptInfos.  For instance, if any subquery
    6175             :          * pullup happened during planning, Vars in the targetlist might have
    6176             :          * gotten replaced, and we need to see the replacement expressions.
    6177             :          */
    6178       17604 :         subquery = subroot->parse;
    6179             :         Assert(IsA(subquery, Query));
    6180             : 
    6181             :         /*
    6182             :          * Punt if subquery uses set operations or grouping sets, as these
    6183             :          * will mash underlying columns' stats beyond recognition.  (Set ops
    6184             :          * are particularly nasty; if we forged ahead, we would return stats
    6185             :          * relevant to only the leftmost subselect...)  DISTINCT is also
    6186             :          * problematic, but we check that later because there is a possibility
    6187             :          * of learning something even with it.
    6188             :          */
    6189       17604 :         if (subquery->setOperations ||
    6190       15318 :             subquery->groupingSets)
    6191        2382 :             return;
    6192             : 
    6193             :         /* Get the subquery output expression referenced by the upper Var */
    6194       15222 :         if (subquery->returningList)
    6195         206 :             subtlist = subquery->returningList;
    6196             :         else
    6197       15016 :             subtlist = subquery->targetList;
    6198       15222 :         ste = get_tle_by_resno(subtlist, var->varattno);
    6199       15222 :         if (ste == NULL || ste->resjunk)
    6200           0 :             elog(ERROR, "subquery %s does not have attribute %d",
    6201             :                  rte->eref->aliasname, var->varattno);
    6202       15222 :         var = (Var *) ste->expr;
    6203             : 
    6204             :         /*
    6205             :          * If subquery uses DISTINCT, we can't make use of any stats for the
    6206             :          * variable ... but, if it's the only DISTINCT column, we are entitled
    6207             :          * to consider it unique.  We do the test this way so that it works
    6208             :          * for cases involving DISTINCT ON.
    6209             :          */
    6210       15222 :         if (subquery->distinctClause)
    6211             :         {
    6212        1838 :             if (list_length(subquery->distinctClause) == 1 &&
    6213         616 :                 targetIsInSortList(ste, InvalidOid, subquery->distinctClause))
    6214         308 :                 vardata->isunique = true;
    6215             :             /* cannot go further */
    6216        1222 :             return;
    6217             :         }
    6218             : 
    6219             :         /* The same idea as with DISTINCT clause works for a GROUP-BY too */
    6220       14000 :         if (subquery->groupClause)
    6221             :         {
    6222        1080 :             if (list_length(subquery->groupClause) == 1 &&
    6223         450 :                 targetIsInSortList(ste, InvalidOid, subquery->groupClause))
    6224         338 :                 vardata->isunique = true;
    6225             :             /* cannot go further */
    6226         630 :             return;
    6227             :         }
    6228             : 
    6229             :         /*
    6230             :          * If the sub-query originated from a view with the security_barrier
    6231             :          * attribute, we must not look at the variable's statistics, though it
    6232             :          * seems all right to notice the existence of a DISTINCT clause. So
    6233             :          * stop here.
    6234             :          *
    6235             :          * This is probably a harsher restriction than necessary; it's
    6236             :          * certainly OK for the selectivity estimator (which is a C function,
    6237             :          * and therefore omnipotent anyway) to look at the statistics.  But
    6238             :          * many selectivity estimators will happily *invoke the operator
    6239             :          * function* to try to work out a good estimate - and that's not OK.
    6240             :          * So for now, don't dig down for stats.
    6241             :          */
    6242       13370 :         if (rte->security_barrier)
    6243        1374 :             return;
    6244             : 
    6245             :         /* Can only handle a simple Var of subquery's query level */
    6246       11996 :         if (var && IsA(var, Var) &&
    6247        6374 :             var->varlevelsup == 0)
    6248             :         {
    6249             :             /*
    6250             :              * OK, recurse into the subquery.  Note that the original setting
    6251             :              * of vardata->isunique (which will surely be false) is left
    6252             :              * unchanged in this situation.  That's what we want, since even
    6253             :              * if the underlying column is unique, the subquery may have
    6254             :              * joined to other tables in a way that creates duplicates.
    6255             :              */
    6256        6374 :             examine_simple_variable(subroot, var, vardata);
    6257             :         }
    6258             :     }
    6259             :     else
    6260             :     {
    6261             :         /*
    6262             :          * Otherwise, the Var comes from a FUNCTION or VALUES RTE.  (We won't
    6263             :          * see RTE_JOIN here because join alias Vars have already been
    6264             :          * flattened.)  There's not much we can do with function outputs, but
    6265             :          * maybe someday try to be smarter about VALUES.
    6266             :          */
    6267             :     }
    6268             : }
    6269             : 
    6270             : /*
    6271             :  * all_rows_selectable
    6272             :  *      Test whether the user has permission to select all rows from a given
    6273             :  *      relation.
    6274             :  *
    6275             :  * Inputs:
    6276             :  *  root: the planner info
    6277             :  *  varno: the index of the relation (assumed to be an RTE_RELATION)
    6278             :  *  varattnos: the attributes for which permission is required, or NULL if
    6279             :  *      whole-table access is required
    6280             :  *
    6281             :  * Returns true if the user has the required select permissions, and there are
    6282             :  * no securityQuals from security barrier views or RLS policies.
    6283             :  *
    6284             :  * Note that if the relation is an inheritance child relation, securityQuals
    6285             :  * and access permissions are checked against the inheritance root parent (the
    6286             :  * relation actually mentioned in the query) --- see the comments in
    6287             :  * expand_single_inheritance_child() for an explanation of why it has to be
    6288             :  * done this way.
    6289             :  *
    6290             :  * If varattnos is non-NULL, its attribute numbers should be offset by
    6291             :  * FirstLowInvalidHeapAttributeNumber so that system attributes can be
    6292             :  * checked.  If varattnos is NULL, only table-level SELECT privileges are
    6293             :  * checked, not any column-level privileges.
    6294             :  *
    6295             :  * Note: if the relation is accessed via a view, this function actually tests
    6296             :  * whether the view owner has permission to select from the relation.  To
    6297             :  * ensure that the current user has permission, it is also necessary to check
    6298             :  * that the current user has permission to select from the view, which we do
    6299             :  * at planner-startup --- see subquery_planner().
    6300             :  *
    6301             :  * This is exported so that other estimation functions can use it.
    6302             :  */
    6303             : bool
    6304     1668968 : all_rows_selectable(PlannerInfo *root, Index varno, Bitmapset *varattnos)
    6305             : {
    6306     1668968 :     RelOptInfo *rel = find_base_rel_noerr(root, varno);
    6307     1668968 :     RangeTblEntry *rte = planner_rt_fetch(varno, root);
    6308             :     Oid         userid;
    6309             :     int         varattno;
    6310             : 
    6311             :     Assert(rte->rtekind == RTE_RELATION);
    6312             : 
    6313             :     /*
    6314             :      * Determine the user ID to use for privilege checks (either the current
    6315             :      * user or the view owner, if we're accessing the table via a view).
    6316             :      *
    6317             :      * Normally the relation will have an associated RelOptInfo from which we
    6318             :      * can find the userid, but it might not if it's a RETURNING Var for an
    6319             :      * INSERT target relation.  In that case use the RTEPermissionInfo
    6320             :      * associated with the RTE.
    6321             :      *
    6322             :      * If we navigate up to a parent relation, we keep using the same userid,
    6323             :      * since it's the same in all relations of a given inheritance tree.
    6324             :      */
    6325     1668968 :     if (rel)
    6326     1668926 :         userid = rel->userid;
    6327             :     else
    6328             :     {
    6329             :         RTEPermissionInfo *perminfo;
    6330             : 
    6331          42 :         perminfo = getRTEPermissionInfo(root->parse->rteperminfos, rte);
    6332          42 :         userid = perminfo->checkAsUser;
    6333             :     }
    6334     1668968 :     if (!OidIsValid(userid))
    6335     1491400 :         userid = GetUserId();
    6336             : 
    6337             :     /*
    6338             :      * Permissions and securityQuals must be checked on the table actually
    6339             :      * mentioned in the query, so if this is an inheritance child, navigate up
    6340             :      * to the inheritance root parent.  If the user can read the whole table
    6341             :      * or the required columns there, then they can read from the child table
    6342             :      * too.  For per-column checks, we must find out which of the root
    6343             :      * parent's attributes the child relation's attributes correspond to.
    6344             :      */
    6345     1668968 :     if (root->append_rel_array != NULL)
    6346             :     {
    6347             :         AppendRelInfo *appinfo;
    6348             : 
    6349      234406 :         appinfo = root->append_rel_array[varno];
    6350             : 
    6351             :         /*
    6352             :          * Partitions are mapped to their immediate parent, not the root
    6353             :          * parent, so must be ready to walk up multiple AppendRelInfos.  But
    6354             :          * stop if we hit a parent that is not RTE_RELATION --- that's a
    6355             :          * flattened UNION ALL subquery, not an inheritance parent.
    6356             :          */
    6357      436174 :         while (appinfo &&
    6358      202140 :                planner_rt_fetch(appinfo->parent_relid,
    6359      202140 :                                 root)->rtekind == RTE_RELATION)
    6360             :         {
    6361      201768 :             Bitmapset  *parent_varattnos = NULL;
    6362             : 
    6363             :             /*
    6364             :              * For each child attribute, find the corresponding parent
    6365             :              * attribute.  In rare cases, the attribute may be local to the
    6366             :              * child table, in which case, we've got to live with having no
    6367             :              * access to this column.
    6368             :              */
    6369      201768 :             varattno = -1;
    6370      400686 :             while ((varattno = bms_next_member(varattnos, varattno)) >= 0)
    6371             :             {
    6372             :                 AttrNumber  attno;
    6373             :                 AttrNumber  parent_attno;
    6374             : 
    6375      198918 :                 attno = varattno + FirstLowInvalidHeapAttributeNumber;
    6376             : 
    6377      198918 :                 if (attno == InvalidAttrNumber)
    6378             :                 {
    6379             :                     /*
    6380             :                      * Whole-row reference, so must map each column of the
    6381             :                      * child to the parent table.
    6382             :                      */
    6383          36 :                     for (attno = 1; attno <= appinfo->num_child_cols; attno++)
    6384             :                     {
    6385          24 :                         parent_attno = appinfo->parent_colnos[attno - 1];
    6386          24 :                         if (parent_attno == 0)
    6387           0 :                             return false;   /* attr is local to child */
    6388             :                         parent_varattnos =
    6389          24 :                             bms_add_member(parent_varattnos,
    6390             :                                            parent_attno - FirstLowInvalidHeapAttributeNumber);
    6391             :                     }
    6392             :                 }
    6393             :                 else
    6394             :                 {
    6395      198906 :                     if (attno < 0)
    6396             :                     {
    6397             :                         /* System attnos are the same in all tables */
    6398           0 :                         parent_attno = attno;
    6399             :                     }
    6400             :                     else
    6401             :                     {
    6402      198906 :                         if (attno > appinfo->num_child_cols)
    6403           0 :                             return false;   /* safety check */
    6404      198906 :                         parent_attno = appinfo->parent_colnos[attno - 1];
    6405      198906 :                         if (parent_attno == 0)
    6406           0 :                             return false;   /* attr is local to child */
    6407             :                     }
    6408             :                     parent_varattnos =
    6409      198906 :                         bms_add_member(parent_varattnos,
    6410             :                                        parent_attno - FirstLowInvalidHeapAttributeNumber);
    6411             :                 }
    6412             :             }
    6413             : 
    6414             :             /* If the parent is itself a child, continue up */
    6415      201768 :             varno = appinfo->parent_relid;
    6416      201768 :             varattnos = parent_varattnos;
    6417      201768 :             appinfo = root->append_rel_array[varno];
    6418             :         }
    6419             : 
    6420             :         /* Perform the access check on this parent rel */
    6421      234406 :         rte = planner_rt_fetch(varno, root);
    6422             :         Assert(rte->rtekind == RTE_RELATION);
    6423             :     }
    6424             : 
    6425             :     /*
    6426             :      * For all rows to be accessible, there must be no securityQuals from
    6427             :      * security barrier views or RLS policies.
    6428             :      */
    6429     1668968 :     if (rte->securityQuals != NIL)
    6430         828 :         return false;
    6431             : 
    6432             :     /*
    6433             :      * Test for table-level SELECT privilege.
    6434             :      *
    6435             :      * If varattnos is non-NULL, this is sufficient to give access to all
    6436             :      * requested attributes, even for a child table, since we have verified
    6437             :      * that all required child columns have matching parent columns.
    6438             :      *
    6439             :      * If varattnos is NULL (whole-table access requested), this doesn't
    6440             :      * necessarily guarantee that the user can read all columns of a child
    6441             :      * table, but we allow it anyway (see comments in examine_variable()) and
    6442             :      * don't bother checking any column privileges.
    6443             :      */
    6444     1668140 :     if (pg_class_aclcheck(rte->relid, userid, ACL_SELECT) == ACLCHECK_OK)
    6445     1667688 :         return true;
    6446             : 
    6447         452 :     if (varattnos == NULL)
    6448          12 :         return false;           /* whole-table access requested */
    6449             : 
    6450             :     /*
    6451             :      * Don't have table-level SELECT privilege, so check per-column
    6452             :      * privileges.
    6453             :      */
    6454         440 :     varattno = -1;
    6455         646 :     while ((varattno = bms_next_member(varattnos, varattno)) >= 0)
    6456             :     {
    6457         440 :         AttrNumber  attno = varattno + FirstLowInvalidHeapAttributeNumber;
    6458             : 
    6459         440 :         if (attno == InvalidAttrNumber)
    6460             :         {
    6461             :             /* Whole-row reference, so must have access to all columns */
    6462           6 :             if (pg_attribute_aclcheck_all(rte->relid, userid, ACL_SELECT,
    6463             :                                           ACLMASK_ALL) != ACLCHECK_OK)
    6464           6 :                 return false;
    6465             :         }
    6466             :         else
    6467             :         {
    6468         434 :             if (pg_attribute_aclcheck(rte->relid, attno, userid,
    6469             :                                       ACL_SELECT) != ACLCHECK_OK)
    6470         228 :                 return false;
    6471             :         }
    6472             :     }
    6473             : 
    6474             :     /* If we reach here, have all required column privileges */
    6475         206 :     return true;
    6476             : }
    6477             : 
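/*
 * Illustrative sketch, not part of selfuncs.c proper: calling
 * all_rows_selectable() for a single column, using the
 * offset-by-FirstLowInvalidHeapAttributeNumber convention described above.
 * "my_column_stats_visible" is a hypothetical wrapper.
 */
static bool
my_column_stats_visible(PlannerInfo *root, Var *var)
{
    Bitmapset  *attnums;

    /* Offset the attno so that system attributes can be represented too */
    attnums = bms_make_singleton(var->varattno -
                                 FirstLowInvalidHeapAttributeNumber);

    /* True only if SELECT privilege is held and no securityQuals apply */
    return all_rows_selectable(root, var->varno, attnums);
}
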
    6478             : /*
    6479             :  * examine_indexcol_variable
    6480             :  *      Try to look up statistical data about an index column/expression.
    6481             :  *      Fill in a VariableStatData struct to describe the column.
    6482             :  *
    6483             :  * Inputs:
    6484             :  *  root: the planner info
    6485             :  *  index: the index whose column we're interested in
    6486             :  *  indexcol: 0-based index column number (subscripts index->indexkeys[])
    6487             :  *
    6488             :  * Outputs: *vardata is filled as follows:
    6489             :  *  var: the input expression (with any binary relabeling stripped, if
    6490             :  *      it is or contains a variable; but otherwise the type is preserved)
    6491             :  *  rel: RelOptInfo for table relation containing variable.
    6492             :  *  statsTuple: the pg_statistic entry for the variable, if one exists;
    6493             :  *      otherwise NULL.
    6494             :  *  freefunc: pointer to a function to release statsTuple with.
    6495             :  *
    6496             :  * Caller is responsible for doing ReleaseVariableStats() before exiting.
    6497             :  */
    6498             : static void
    6499      806606 : examine_indexcol_variable(PlannerInfo *root, IndexOptInfo *index,
    6500             :                           int indexcol, VariableStatData *vardata)
    6501             : {
    6502             :     AttrNumber  colnum;
    6503             :     Oid         relid;
    6504             : 
    6505      806606 :     if (index->indexkeys[indexcol] != 0)
    6506             :     {
    6507             :         /* Simple variable --- look to stats for the underlying table */
    6508      804392 :         RangeTblEntry *rte = planner_rt_fetch(index->rel->relid, root);
    6509             : 
    6510             :         Assert(rte->rtekind == RTE_RELATION);
    6511      804392 :         relid = rte->relid;
    6512             :         Assert(relid != InvalidOid);
    6513      804392 :         colnum = index->indexkeys[indexcol];
    6514      804392 :         vardata->rel = index->rel;
    6515             : 
    6516      804392 :         if (get_relation_stats_hook &&
    6517           0 :             (*get_relation_stats_hook) (root, rte, colnum, vardata))
    6518             :         {
    6519             :             /*
    6520             :              * The hook took control of acquiring a stats tuple.  If it did
    6521             :              * supply a tuple, it'd better have supplied a freefunc.
    6522             :              */
    6523           0 :             if (HeapTupleIsValid(vardata->statsTuple) &&
    6524           0 :                 !vardata->freefunc)
    6525           0 :                 elog(ERROR, "no function provided to release variable stats with");
    6526             :         }
    6527             :         else
    6528             :         {
    6529      804392 :             vardata->statsTuple = SearchSysCache3(STATRELATTINH,
    6530             :                                                   ObjectIdGetDatum(relid),
    6531             :                                                   Int16GetDatum(colnum),
    6532      804392 :                                                   BoolGetDatum(rte->inh));
    6533      804392 :             vardata->freefunc = ReleaseSysCache;
    6534             :         }
    6535             :     }
    6536             :     else
    6537             :     {
    6538             :         /* Expression --- maybe there are stats for the index itself */
    6539        2214 :         relid = index->indexoid;
    6540        2214 :         colnum = indexcol + 1;
    6541             : 
    6542        2214 :         if (get_index_stats_hook &&
    6543           0 :             (*get_index_stats_hook) (root, relid, colnum, vardata))
    6544             :         {
    6545             :             /*
    6546             :              * The hook took control of acquiring a stats tuple.  If it did
    6547             :              * supply a tuple, it'd better have supplied a freefunc.
    6548             :              */
    6549           0 :             if (HeapTupleIsValid(vardata->statsTuple) &&
    6550           0 :                 !vardata->freefunc)
    6551           0 :                 elog(ERROR, "no function provided to release variable stats with");
    6552             :         }
    6553             :         else
    6554             :         {
    6555        2214 :             vardata->statsTuple = SearchSysCache3(STATRELATTINH,
    6556             :                                                   ObjectIdGetDatum(relid),
    6557             :                                                   Int16GetDatum(colnum),
    6558             :                                                   BoolGetDatum(false));
    6559        2214 :             vardata->freefunc = ReleaseSysCache;
    6560             :         }
    6561             :     }
    6562      806606 : }
    6563             : 
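                     : /*
                     :  * To make the two lookup paths above concrete (the table and expression
                     :  * here are hypothetical, not taken from this file): for an index built on
                     :  * the expression lower(name) over some table t, indexkeys[0] is zero, so
                     :  * the stats lookup is keyed by the index itself, roughly
                     :  *
                     :  *      SearchSysCache3(STATRELATTINH,
                     :  *                      ObjectIdGetDatum(index->indexoid),
                     :  *                      Int16GetDatum(indexcol + 1),
                     :  *                      BoolGetDatum(false));
                     :  *
                     :  * whereas an index on the plain column t.name is keyed by the table's OID
                     :  * and the column's attnum, honoring rte->inh for inheritance trees.
                     :  */
                     : 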
    6564             : /*
    6565             :  * Check whether it is permitted to call func_oid passing some of the
    6566             :  * pg_statistic data in vardata.  We allow this if either of the following
    6567             :  * conditions is met: (1) the user has SELECT privileges on the table or
    6568             :  * column underlying the pg_statistic data and there are no securityQuals from
    6569             :  * security barrier views or RLS policies, or (2) the function is marked
    6570             :  * leakproof.
    6571             :  */
    6572             : bool
    6573     1171630 : statistic_proc_security_check(VariableStatData *vardata, Oid func_oid)
    6574             : {
    6575     1171630 :     if (vardata->acl_ok)
    6576     1169780 :         return true;            /* have SELECT privs and no securityQuals */
    6577             : 
    6578        1850 :     if (!OidIsValid(func_oid))
    6579           0 :         return false;
    6580             : 
    6581        1850 :     if (get_func_leakproof(func_oid))
    6582         916 :         return true;
    6583             : 
    6584         934 :     ereport(DEBUG2,
    6585             :             (errmsg_internal("not using statistics because function \"%s\" is not leakproof",
    6586             :                              get_func_name(func_oid))));
    6587         934 :     return false;
    6588             : }
    6589             : 
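                     : /*
                     :  * A sketch of the typical call pattern (variable names are illustrative;
                     :  * compare the MCV- and histogram-scanning callers elsewhere in this file):
                     :  *
                     :  *      if (statistic_proc_security_check(&vardata, opproc.fn_oid))
                     :  *          ... apply opproc to the MCV/histogram values ...
                     :  *      else
                     :  *          selec = some_default_estimate;
                     :  *
                     :  * i.e. when the caller lacks SELECT privilege (or securityQuals are in
                     :  * play) and the function is not leakproof, the statistics data must not
                     :  * be fed to the function at all, and a default estimate is used instead.
                     :  */
                     : 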
    6590             : /*
    6591             :  * get_variable_numdistinct
    6592             :  *    Estimate the number of distinct values of a variable.
    6593             :  *
    6594             :  * vardata: results of examine_variable
    6595             :  * *isdefault: set to true if the result is a default rather than based on
    6596             :  * anything meaningful.
    6597             :  *
    6598             :  * NB: be careful to produce a positive integral result, since callers may
    6599             :  * compare the result to exact integer counts, or might divide by it.
    6600             :  */
    6601             : double
    6602     1665030 : get_variable_numdistinct(VariableStatData *vardata, bool *isdefault)
    6603             : {
    6604             :     double      stadistinct;
    6605     1665030 :     double      stanullfrac = 0.0;
    6606             :     double      ntuples;
    6607             : 
    6608     1665030 :     *isdefault = false;
    6609             : 
    6610             :     /*
    6611             :      * Determine the stadistinct value to use.  There are cases where we can
    6612             :      * get an estimate even without a pg_statistic entry, or can get a better
    6613             :      * value than is in pg_statistic.  Grab stanullfrac too if we can find it
    6614             :      * (otherwise, assume no nulls, for lack of any better idea).
    6615             :      */
    6616     1665030 :     if (HeapTupleIsValid(vardata->statsTuple))
    6617             :     {
    6618             :         /* Use the pg_statistic entry */
    6619             :         Form_pg_statistic stats;
    6620             : 
    6621     1167104 :         stats = (Form_pg_statistic) GETSTRUCT(vardata->statsTuple);
    6622     1167104 :         stadistinct = stats->stadistinct;
    6623     1167104 :         stanullfrac = stats->stanullfrac;
    6624             :     }
    6625      497926 :     else if (vardata->vartype == BOOLOID)
    6626             :     {
    6627             :         /*
    6628             :          * Special-case boolean columns: presumably, two distinct values.
    6629             :          *
    6630             :          * Are there any other datatypes we should wire in special estimates
    6631             :          * for?
    6632             :          */
    6633         602 :         stadistinct = 2.0;
    6634             :     }
    6635      497324 :     else if (vardata->rel && vardata->rel->rtekind == RTE_VALUES)
    6636             :     {
    6637             :         /*
    6638             :          * If the Var represents a column of a VALUES RTE, assume it's unique.
    6639             :          * This could of course be very wrong, but it should tend to be true
    6640             :          * in well-written queries.  We could consider examining the VALUES'
    6641             :          * contents to get some real statistics; but that only works if the
    6642             :          * entries are all constants, and it would be pretty expensive anyway.
    6643             :          */
    6644        3588 :         stadistinct = -1.0;     /* unique (and all non null) */
    6645             :     }
    6646             :     else
    6647             :     {
    6648             :         /*
    6649             :          * We don't keep statistics for system columns, but in some cases we
    6650             :          * can infer distinctness anyway.
    6651             :          */
    6652      493736 :         if (vardata->var && IsA(vardata->var, Var))
    6653             :         {
    6654      455982 :             switch (((Var *) vardata->var)->varattno)
    6655             :             {
    6656        1224 :                 case SelfItemPointerAttributeNumber:
    6657        1224 :                     stadistinct = -1.0; /* unique (and all non null) */
    6658        1224 :                     break;
    6659       26398 :                 case TableOidAttributeNumber:
    6660       26398 :                     stadistinct = 1.0;  /* only 1 value */
    6661       26398 :                     break;
    6662      428360 :                 default:
    6663      428360 :                     stadistinct = 0.0;  /* means "unknown" */
    6664      428360 :                     break;
    6665             :             }
    6666             :         }
    6667             :         else
    6668       37754 :             stadistinct = 0.0;  /* means "unknown" */
    6669             : 
    6670             :         /*
    6671             :          * XXX consider using estimate_num_groups on expressions?
    6672             :          */
    6673             :     }
    6674             : 
    6675             :     /*
    6676             :      * If there is a unique index, DISTINCT or GROUP-BY clause for the
    6677             :      * variable, assume it is unique no matter what pg_statistic says; the
    6678             :      * statistics could be out of date, or we might have found a partial
    6679             :      * unique index that proves the var is unique for this query.  However,
    6680             :      * we'd better still believe the null-fraction statistic.
    6681             :      */
    6682     1665030 :     if (vardata->isunique)
    6683      413248 :         stadistinct = -1.0 * (1.0 - stanullfrac);
    6684             : 
    6685             :     /*
    6686             :      * If we had an absolute estimate, use that.
    6687             :      */
    6688     1665030 :     if (stadistinct > 0.0)
    6689      430674 :         return clamp_row_est(stadistinct);
    6690             : 
    6691             :     /*
    6692             :      * Otherwise we need to get the relation size; punt if not available.
    6693             :      */
    6694     1234356 :     if (vardata->rel == NULL)
    6695             :     {
    6696         704 :         *isdefault = true;
    6697         704 :         return DEFAULT_NUM_DISTINCT;
    6698             :     }
    6699     1233652 :     ntuples = vardata->rel->tuples;
    6700     1233652 :     if (ntuples <= 0.0)
    6701             :     {
    6702      118366 :         *isdefault = true;
    6703      118366 :         return DEFAULT_NUM_DISTINCT;
    6704             :     }
    6705             : 
    6706             :     /*
    6707             :      * If we had a relative estimate, use that.
    6708             :      */
    6709     1115286 :     if (stadistinct < 0.0)
    6710      809346 :         return clamp_row_est(-stadistinct * ntuples);
    6711             : 
    6712             :     /*
    6713             :      * With no data, estimate ndistinct = ntuples if the table is small, else
    6714             :      * use default.  We use DEFAULT_NUM_DISTINCT as the cutoff for "small" so
    6715             :      * that the behavior isn't discontinuous.
    6716             :      */
    6717      305940 :     if (ntuples < DEFAULT_NUM_DISTINCT)
    6718      143472 :         return clamp_row_est(ntuples);
    6719             : 
    6720      162468 :     *isdefault = true;
    6721      162468 :     return DEFAULT_NUM_DISTINCT;
    6722             : }
    6723             : 
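                     : /*
                     :  * A worked example of the rules above (all numbers invented): suppose
                     :  * pg_statistic shows stadistinct = -0.2 and stanullfrac = 0.1 for a
                     :  * column of a table with rel->tuples = 50000, and the variable is not
                     :  * known unique.  stadistinct < 0 is a relative estimate, so the result is
                     :  *
                     :  *      clamp_row_est(0.2 * 50000) = 10000
                     :  *
                     :  * If instead vardata->isunique were true, stadistinct would be overridden
                     :  * to -1.0 * (1.0 - 0.1) = -0.9, giving clamp_row_est(0.9 * 50000) = 45000,
                     :  * i.e. every non-null row is distinct.  With no stats tuple at all and
                     :  * ntuples >= DEFAULT_NUM_DISTINCT, we'd fall back to DEFAULT_NUM_DISTINCT
                     :  * and set *isdefault.
                     :  */
                     : 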
    6724             : /*
    6725             :  * get_variable_range
    6726             :  *      Estimate the minimum and maximum value of the specified variable.
    6727             :  *      If successful, store values in *min and *max, and return true.
    6728             :  *      If no data available, return false.
    6729             :  *
    6730             :  * sortop is the "<" comparison operator to use.  This should generally
    6731             :  * be "<" not ">", as only the former is likely to be found in pg_statistic.
    6732             :  * The collation must be specified too.
    6733             :  */
    6734             : static bool
    6735      251714 : get_variable_range(PlannerInfo *root, VariableStatData *vardata,
    6736             :                    Oid sortop, Oid collation,
    6737             :                    Datum *min, Datum *max)
    6738             : {
    6739      251714 :     Datum       tmin = 0;
    6740      251714 :     Datum       tmax = 0;
    6741      251714 :     bool        have_data = false;
    6742             :     int16       typLen;
    6743             :     bool        typByVal;
    6744             :     Oid         opfuncoid;
    6745             :     FmgrInfo    opproc;
    6746             :     AttStatsSlot sslot;
    6747             : 
    6748             :     /*
    6749             :      * XXX It's very tempting to try to use the actual column min and max, if
    6750             :      * we can get them relatively-cheaply with an index probe.  However, since
    6751             :      * this function is called many times during join planning, that could
    6752             :      * have unpleasant effects on planning speed.  Need more investigation
    6753             :      * before enabling this.
    6754             :      */
    6755             : #ifdef NOT_USED
    6756             :     if (get_actual_variable_range(root, vardata, sortop, collation, min, max))
    6757             :         return true;
    6758             : #endif
    6759             : 
    6760      251714 :     if (!HeapTupleIsValid(vardata->statsTuple))
    6761             :     {
    6762             :         /* no stats available, so default result */
    6763       56590 :         return false;
    6764             :     }
    6765             : 
    6766             :     /*
    6767             :      * If we can't apply the sortop to the stats data, just fail.  In
    6768             :      * principle, if there's a histogram and no MCVs, we could return the
    6769             :      * histogram endpoints without ever applying the sortop ... but it's
    6770             :      * probably not worth trying, because whatever the caller wants to do with
    6771             :      * the endpoints would likely fail the security check too.
    6772             :      */
    6773      195124 :     if (!statistic_proc_security_check(vardata,
    6774      195124 :                                        (opfuncoid = get_opcode(sortop))))
    6775           0 :         return false;
    6776             : 
    6777      195124 :     opproc.fn_oid = InvalidOid; /* mark this as not looked up yet */
    6778             : 
    6779      195124 :     get_typlenbyval(vardata->atttype, &typLen, &typByVal);
    6780             : 
    6781             :     /*
    6782             :      * If there is a histogram with the ordering we want, grab the first and
    6783             :      * last values.
    6784             :      */
    6785      195124 :     if (get_attstatsslot(&sslot, vardata->statsTuple,
    6786             :                          STATISTIC_KIND_HISTOGRAM, sortop,
    6787             :                          ATTSTATSSLOT_VALUES))
    6788             :     {
    6789      123448 :         if (sslot.stacoll == collation && sslot.nvalues > 0)
    6790             :         {
    6791      123448 :             tmin = datumCopy(sslot.values[0], typByVal, typLen);
    6792      123448 :             tmax = datumCopy(sslot.values[sslot.nvalues - 1], typByVal, typLen);
    6793      123448 :             have_data = true;
    6794             :         }
    6795      123448 :         free_attstatsslot(&sslot);
    6796             :     }
    6797             : 
    6798             :     /*
    6799             :      * Otherwise, if there is a histogram with some other ordering, scan it
    6800             :      * and get the min and max values according to the ordering we want.  This
    6801             :      * of course may not find values that are really extremal according to our
    6802             :      * ordering, but it beats ignoring available data.
    6803             :      */
    6804      266800 :     if (!have_data &&
    6805       71676 :         get_attstatsslot(&sslot, vardata->statsTuple,
    6806             :                          STATISTIC_KIND_HISTOGRAM, InvalidOid,
    6807             :                          ATTSTATSSLOT_VALUES))
    6808             :     {
    6809           0 :         get_stats_slot_range(&sslot, opfuncoid, &opproc,
    6810             :                              collation, typLen, typByVal,
    6811             :                              &tmin, &tmax, &have_data);
    6812           0 :         free_attstatsslot(&sslot);
    6813             :     }
    6814             : 
    6815             :     /*
    6816             :      * If we have most-common-values info, look for extreme MCVs.  This is
    6817             :      * needed even if we also have a histogram, since the histogram excludes
    6818             :      * the MCVs.  However, if we *only* have MCVs and no histogram, we should
    6819             :      * be pretty wary of deciding that that is a full representation of the
    6820             :      * data.  Proceed only if the MCVs represent the whole table (to within
    6821             :      * roundoff error).
    6822             :      */
    6823      195124 :     if (get_attstatsslot(&sslot, vardata->statsTuple,
    6824             :                          STATISTIC_KIND_MCV, InvalidOid,
    6825      195124 :                          have_data ? ATTSTATSSLOT_VALUES :
    6826             :                          (ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS)))
    6827             :     {
    6828      109608 :         bool        use_mcvs = have_data;
    6829             : 
    6830      109608 :         if (!have_data)
    6831             :         {
    6832       70242 :             double      sumcommon = 0.0;
    6833             :             double      nullfrac;
    6834             :             int         i;
    6835             : 
    6836      529628 :             for (i = 0; i < sslot.nnumbers; i++)
    6837      459386 :                 sumcommon += sslot.numbers[i];
    6838       70242 :             nullfrac = ((Form_pg_statistic) GETSTRUCT(vardata->statsTuple))->stanullfrac;
    6839       70242 :             if (sumcommon + nullfrac > 0.99999)
    6840       68034 :                 use_mcvs = true;
    6841             :         }
    6842             : 
    6843      109608 :         if (use_mcvs)
    6844      107400 :             get_stats_slot_range(&sslot, opfuncoid, &opproc,
    6845             :                                  collation, typLen, typByVal,
    6846             :                                  &tmin, &tmax, &have_data);
    6847      109608 :         free_attstatsslot(&sslot);
    6848             :     }
    6849             : 
    6850      195124 :     *min = tmin;
    6851      195124 :     *max = tmax;
    6852      195124 :     return have_data;
    6853             : }
    6854             : 
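                     : /*
                     :  * Example of the MCV-only acceptance test above (figures invented): with
                     :  * no histogram, an MCV list whose frequencies sum to 0.97 combined with
                     :  * stanullfrac = 0.03 gives 0.97 + 0.03 = 1.0 > 0.99999, so the MCVs are
                     :  * taken to represent the whole column and their extremes become the
                     :  * min/max.  Had the MCVs summed to only 0.60, the test would fail and we
                     :  * would return false rather than report a badly truncated range.
                     :  */
                     : 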
    6855             : /*
    6856             :  * get_stats_slot_range: scan sslot for min/max values
    6857             :  *
    6858             :  * Subroutine for get_variable_range: update min/max/have_data according
    6859             :  * to what we find in the statistics array.
    6860             :  */
    6861             : static void
    6862      107400 : get_stats_slot_range(AttStatsSlot *sslot, Oid opfuncoid, FmgrInfo *opproc,
    6863             :                      Oid collation, int16 typLen, bool typByVal,
    6864             :                      Datum *min, Datum *max, bool *p_have_data)
    6865             : {
    6866      107400 :     Datum       tmin = *min;
    6867      107400 :     Datum       tmax = *max;
    6868      107400 :     bool        have_data = *p_have_data;
    6869      107400 :     bool        found_tmin = false;
    6870      107400 :     bool        found_tmax = false;
    6871             : 
    6872             :     /* Look up the comparison function, if we didn't already do so */
    6873      107400 :     if (opproc->fn_oid != opfuncoid)
    6874      107400 :         fmgr_info(opfuncoid, opproc);
    6875             : 
    6876             :     /* Scan all the slot's values */
    6877     2628482 :     for (int i = 0; i < sslot->nvalues; i++)
    6878             :     {
    6879     2521082 :         if (!have_data)
    6880             :         {
    6881       68034 :             tmin = tmax = sslot->values[i];
    6882       68034 :             found_tmin = found_tmax = true;
    6883       68034 :             *p_have_data = have_data = true;
    6884       68034 :             continue;
    6885             :         }
    6886     2453048 :         if (DatumGetBool(FunctionCall2Coll(opproc,
    6887             :                                            collation,
    6888     2453048 :                                            sslot->values[i], tmin)))
    6889             :         {
    6890       61000 :             tmin = sslot->values[i];
    6891       61000 :             found_tmin = true;
    6892             :         }
    6893     2453048 :         if (DatumGetBool(FunctionCall2Coll(opproc,
    6894             :                                            collation,
    6895     2453048 :                                            tmax, sslot->values[i])))
    6896             :         {
    6897      264752 :             tmax = sslot->values[i];
    6898      264752 :             found_tmax = true;
    6899             :         }
    6900             :     }
    6901             : 
    6902             :     /*
    6903             :      * Copy the slot's values, if we found new extreme values.
    6904             :      */
    6905      107400 :     if (found_tmin)
    6906       91898 :         *min = datumCopy(tmin, typByVal, typLen);
    6907      107400 :     if (found_tmax)
    6908       72942 :         *max = datumCopy(tmax, typByVal, typLen);
    6909      107400 : }
    6910             : 
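                     : /*
                     :  * Note how a single "<" support function serves both endpoints above:
                     :  * opproc(values[i], tmin) asks "is this value below the current minimum?"
                     :  * while opproc(tmax, values[i]) asks "is the current maximum below this
                     :  * value?".  For example (invented data), scanning {5, 42, -7} with int4lt
                     :  * starts at tmin = tmax = 5, leaves tmin alone but raises tmax to 42 on
                     :  * the second value, and lowers tmin to -7 on the third, ending with the
                     :  * range [-7, 42].
                     :  */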
    6911             : 
    6912             : /*
    6913             :  * get_actual_variable_range
    6914             :  *      Attempt to identify the current *actual* minimum and/or maximum
    6915             :  *      of the specified variable, by looking for a suitable btree index
    6916             :  *      and fetching its low and/or high values.
    6917             :  *      If successful, store values in *min and *max, and return true.
    6918             :  *      (Either pointer can be NULL if that endpoint isn't needed.)
    6919             :  *      If unsuccessful, return false.
    6920             :  *
    6921             :  * sortop is the "<" comparison operator to use.
    6922             :  * collation is the required collation.
    6923             :  */
    6924             : static bool
    6925      189556 : get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
    6926             :                           Oid sortop, Oid collation,
    6927             :                           Datum *min, Datum *max)
    6928             : {
    6929      189556 :     bool        have_data = false;
    6930      189556 :     RelOptInfo *rel = vardata->rel;
    6931             :     RangeTblEntry *rte;
    6932             :     ListCell   *lc;
    6933             : 
    6934             :     /* No hope if no relation or it doesn't have indexes */
    6935      189556 :     if (rel == NULL || rel->indexlist == NIL)
    6936       13840 :         return false;
    6937             :     /* If it has indexes it must be a plain relation */
    6938      175716 :     rte = root->simple_rte_array[rel->relid];
    6939             :     Assert(rte->rtekind == RTE_RELATION);
    6940             : 
    6941             :     /* Ignore partitioned tables.  Any indexes here are not real indexes */
    6942      175716 :     if (rte->relkind == RELKIND_PARTITIONED_TABLE)
    6943         756 :         return false;
    6944             : 
    6945             :     /* Search through the indexes to see if any match our problem */
    6946      340738 :     foreach(lc, rel->indexlist)
    6947             :     {
    6948      293092 :         IndexOptInfo *index = (IndexOptInfo *) lfirst(lc);
    6949             :         ScanDirection indexscandir;
    6950             :         StrategyNumber strategy;
    6951             : 
    6952             :         /* Ignore non-ordering indexes */
    6953      293092 :         if (index->sortopfamily == NULL)
    6954           0 :             continue;
    6955             : 
    6956             :         /*
    6957             :          * Ignore partial indexes --- we only want stats that cover the entire
    6958             :          * relation.
    6959             :          */
    6960      293092 :         if (index->indpred != NIL)
    6961         288 :             continue;
    6962             : 
    6963             :         /*
    6964             :          * The index list might include hypothetical indexes inserted by a
    6965             :          * get_relation_info hook --- don't try to access them.
    6966             :          */
    6967      292804 :         if (index->hypothetical)
    6968           0 :             continue;
    6969             : 
    6970             :         /*
    6971             :          * get_actual_variable_endpoint uses the index-only-scan machinery, so
    6972             :          * ignore indexes that can't use it on their first column.
    6973             :          */
    6974      292804 :         if (!index->canreturn[0])
    6975           0 :             continue;
    6976             : 
    6977             :         /*
    6978             :          * The first index column must match the desired variable, sortop, and
    6979             :          * collation --- but we can use a descending-order index.
    6980             :          */
    6981      292804 :         if (collation != index->indexcollations[0])
    6982       38792 :             continue;           /* test first 'cause it's cheapest */
    6983      254012 :         if (!match_index_to_operand(vardata->var, 0, index))
    6984      126698 :             continue;
    6985      127314 :         strategy = get_op_opfamily_strategy(sortop, index->sortopfamily[0]);
    6986      127314 :         switch (IndexAmTranslateStrategy(strategy, index->relam, index->sortopfamily[0], true))
    6987             :         {
    6988      127314 :             case COMPARE_LT:
    6989      127314 :                 if (index->reverse_sort[0])
    6990           0 :                     indexscandir = BackwardScanDirection;
    6991             :                 else
    6992      127314 :                     indexscandir = ForwardScanDirection;
    6993      127314 :                 break;
    6994           0 :             case COMPARE_GT:
    6995           0 :                 if (index->reverse_sort[0])
    6996           0 :                     indexscandir = ForwardScanDirection;
    6997             :                 else
    6998           0 :                     indexscandir = BackwardScanDirection;
    6999           0 :                 break;
    7000           0 :             default:
    7001             :                 /* index doesn't match the sortop */
    7002           0 :                 continue;
    7003             :         }
    7004             : 
    7005             :         /*
    7006             :          * Found a suitable index to extract data from.  Set up some data that
    7007             :          * can be used by both invocations of get_actual_variable_endpoint.
    7008             :          */
    7009             :         {
    7010             :             MemoryContext tmpcontext;
    7011             :             MemoryContext oldcontext;
    7012             :             Relation    heapRel;
    7013             :             Relation    indexRel;
    7014             :             TupleTableSlot *slot;
    7015             :             int16       typLen;
    7016             :             bool        typByVal;
    7017             :             ScanKeyData scankeys[1];
    7018             : 
    7019             :             /* Make sure any cruft gets recycled when we're done */
    7020      127314 :             tmpcontext = AllocSetContextCreate(CurrentMemoryContext,
    7021             :                                                "get_actual_variable_range workspace",
    7022             :                                                ALLOCSET_DEFAULT_SIZES);
    7023      127314 :             oldcontext = MemoryContextSwitchTo(tmpcontext);
    7024             : 
    7025             :             /*
    7026             :              * Open the table and index so we can read from them.  We should
    7027             :              * already have some type of lock on each.
    7028             :              */
    7029      127314 :             heapRel = table_open(rte->relid, NoLock);
    7030      127314 :             indexRel = index_open(index->indexoid, NoLock);
    7031             : 
    7032             :             /* build some stuff needed for indexscan execution */
    7033      127314 :             slot = table_slot_create(heapRel, NULL);
    7034      127314 :             get_typlenbyval(vardata->atttype, &typLen, &typByVal);
    7035             : 
    7036             :             /* set up an IS NOT NULL scan key so that we ignore nulls */
    7037      127314 :             ScanKeyEntryInitialize(&scankeys[0],
    7038             :                                    SK_ISNULL | SK_SEARCHNOTNULL,
    7039             :                                    1,   /* index col to scan */
    7040             :                                    InvalidStrategy, /* no strategy */
    7041             :                                    InvalidOid,  /* no strategy subtype */
    7042             :                                    InvalidOid,  /* no collation */
    7043             :                                    InvalidOid,  /* no reg proc for this */
    7044             :                                    (Datum) 0);  /* constant */
    7045             : 
    7046             :             /* If min is requested ... */
    7047      127314 :             if (min)
    7048             :             {
    7049       71838 :                 have_data = get_actual_variable_endpoint(heapRel,
    7050             :                                                          indexRel,
    7051             :                                                          indexscandir,
    7052             :                                                          scankeys,
    7053             :                                                          typLen,
    7054             :                                                          typByVal,
    7055             :                                                          slot,
    7056             :                                                          oldcontext,
    7057             :                                                          min);
    7058             :             }
    7059             :             else
    7060             :             {
    7061             :                 /* If min not requested, still want to fetch max */
    7062       55476 :                 have_data = true;
    7063             :             }
    7064             : 
    7065             :             /* If max is requested, and we didn't already fail ... */
    7066      127314 :             if (max && have_data)
    7067             :             {
    7068             :                 /* scan in the opposite direction; all else is the same */
    7069       57228 :                 have_data = get_actual_variable_endpoint(heapRel,
    7070             :                                                          indexRel,
    7071       57228 :                                                          -indexscandir,
    7072             :                                                          scankeys,
    7073             :                                                          typLen,
    7074             :                                                          typByVal,
    7075             :                                                          slot,
    7076             :                                                          oldcontext,
    7077             :                                                          max);
    7078             :             }
    7079             : 
    7080             :             /* Clean everything up */
    7081      127314 :             ExecDropSingleTupleTableSlot(slot);
    7082             : 
    7083      127314 :             index_close(indexRel, NoLock);
    7084      127314 :             table_close(heapRel, NoLock);
    7085             : 
    7086      127314 :             MemoryContextSwitchTo(oldcontext);
    7087      127314 :             MemoryContextDelete(tmpcontext);
    7088             : 
    7089             :             /* And we're done */
    7090      127314 :             break;
    7091             :         }
    7092             :     }
    7093             : 
    7094      174960 :     return have_data;
    7095             : }
    7096             : 
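                     : /*
                     :  * Scan-direction choice above, by example (hypothetical index): for an
                     :  * ordinary ascending btree on the variable with sortop "<", the strategy
                     :  * translates to COMPARE_LT and reverse_sort[0] is false, so the minimum
                     :  * sits at the start of a ForwardScanDirection scan and the maximum is
                     :  * found by negating the direction for the second endpoint probe.  For an
                     :  * index declared DESC, reverse_sort[0] is true and both directions flip,
                     :  * but the endpoints obtained are the same.
                     :  */
                     : 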
    7097             : /*
    7098             :  * Get one endpoint datum (min or max depending on indexscandir) from the
    7099             :  * specified index.  Return true if successful, false if not.
    7100             :  * On success, endpoint value is stored to *endpointDatum (and copied into
    7101             :  * outercontext).
    7102             :  *
    7103             :  * scankeys is a 1-element scankey array set up to reject nulls.
    7104             :  * typLen/typByVal describe the datatype of the index's first column.
    7105             :  * tableslot is a slot suitable to hold table tuples, in case we need
    7106             :  * to probe the heap.
    7107             :  * (We could compute these values locally, but that would mean computing them
    7108             :  * twice when get_actual_variable_range needs both the min and the max.)
    7109             :  *
    7110             :  * Failure occurs either when the index is empty, or when we decide that
    7111             :  * it's taking too long to find a suitable tuple.
    7112             :  */
    7113             : static bool
    7114      129066 : get_actual_variable_endpoint(Relation heapRel,
    7115             :                              Relation indexRel,
    7116             :                              ScanDirection indexscandir,
    7117             :                              ScanKey scankeys,
    7118             :                              int16 typLen,
    7119             :                              bool typByVal,
    7120             :                              TupleTableSlot *tableslot,
    7121             :                              MemoryContext outercontext,
    7122             :                              Datum *endpointDatum)
    7123             : {
    7124      129066 :     bool        have_data = false;
    7125             :     SnapshotData SnapshotNonVacuumable;
    7126             :     IndexScanDesc index_scan;
    7127      129066 :     Buffer      vmbuffer = InvalidBuffer;
    7128      129066 :     BlockNumber last_heap_block = InvalidBlockNumber;
    7129      129066 :     int         n_visited_heap_pages = 0;
    7130             :     ItemPointer tid;
    7131             :     Datum       values[INDEX_MAX_KEYS];
    7132             :     bool        isnull[INDEX_MAX_KEYS];
    7133             :     MemoryContext oldcontext;
    7134             : 
    7135             :     /*
    7136             :      * We use the index-only-scan machinery for this.  With mostly-static
    7137             :      * tables that's a win because it avoids a heap visit.  It's also a win
    7138             :      * for dynamic data, but the reason is less obvious; read on for details.
    7139             :      *
    7140             :      * In principle, we should scan the index with our current active
    7141             :      * snapshot, which is the best approximation we've got to what the query
    7142             :      * will see when executed.  But that won't be exact if a new snap is taken
    7143             :      * before running the query, and it can be very expensive if a lot of
    7144             :      * recently-dead or uncommitted rows exist at the beginning or end of the
    7145             :      * index (because we'll laboriously fetch each one and reject it).
    7146             :      * Instead, we use SnapshotNonVacuumable.  That will accept recently-dead
    7147             :      * and uncommitted rows as well as normal visible rows.  On the other
    7148             :      * hand, it will reject known-dead rows, and thus not give a bogus answer
    7149             :      * when the extreme value has been deleted (unless the deletion was quite
    7150             :      * recent); that case motivates not using SnapshotAny here.
    7151             :      *
    7152             :      * A crucial point here is that SnapshotNonVacuumable, with
    7153             :      * GlobalVisTestFor(heapRel) as horizon, yields the inverse of the
    7154             :      * condition that the indexscan will use to decide that index entries are
    7155             :      * killable (see heap_hot_search_buffer()).  Therefore, if the snapshot
    7156             :      * rejects a tuple (or more precisely, all tuples of a HOT chain) and we
    7157             :      * have to continue scanning past it, we know that the indexscan will mark
    7158             :      * that index entry killed.  That means that the next
    7159             :      * get_actual_variable_endpoint() call will not have to re-consider that
    7160             :      * index entry.  In this way we avoid repetitive work when this function
    7161             :      * is used a lot during planning.
    7162             :      *
    7163             :      * But using SnapshotNonVacuumable creates a hazard of its own.  In a
    7164             :      * recently-created index, some index entries may point at "broken" HOT
    7165             :      * chains in which not all the tuple versions contain data matching the
    7166             :      * index entry.  The live tuple version(s) certainly do match the index,
    7167             :      * but SnapshotNonVacuumable can accept recently-dead tuple versions that
    7168             :      * don't match.  Hence, if we took data from the selected heap tuple, we
    7169             :      * might get a bogus answer that's not close to the index extremal value,
    7170             :      * or could even be NULL.  We avoid this hazard because we take the data
    7171             :      * from the index entry not the heap.
    7172             :      *
    7173             :      * Despite all this care, there are situations where we might find many
    7174             :      * non-visible tuples near the end of the index.  We don't want to expend
    7175             :      * a huge amount of time here, so we give up once we've read too many heap
    7176             :      * pages.  When we fail for that reason, the caller will end up using
    7177             :      * whatever extremal value is recorded in pg_statistic.
    7178             :      */
    7179      129066 :     InitNonVacuumableSnapshot(SnapshotNonVacuumable,
    7180             :                               GlobalVisTestFor(heapRel));
    7181             : 
    7182      129066 :     index_scan = index_beginscan(heapRel, indexRel,
    7183             :                                  &SnapshotNonVacuumable, NULL,
    7184             :                                  1, 0);
    7185             :     /* Set it up for index-only scan */
    7186      129066 :     index_scan->xs_want_itup = true;
    7187      129066 :     index_rescan(index_scan, scankeys, 1, NULL, 0);
    7188             : 
    7189             :     /* Fetch first/next tuple in specified direction */
    7190      151292 :     while ((tid = index_getnext_tid(index_scan, indexscandir)) != NULL)
    7191             :     {
    7192      151292 :         BlockNumber block = ItemPointerGetBlockNumber(tid);
    7193             : 
    7194      151292 :         if (!VM_ALL_VISIBLE(heapRel,
    7195             :                             block,
    7196             :                             &vmbuffer))
    7197             :         {
    7198             :             /* Rats, we have to visit the heap to check visibility */
    7199      105244 :             if (!index_fetch_heap(index_scan, tableslot))
    7200             :             {
    7201             :                 /*
    7202             :                  * No visible tuple for this index entry, so we need to
    7203             :                  * advance to the next entry.  Before doing so, count heap
    7204             :                  * page fetches and give up if we've done too many.
    7205             :                  *
    7206             :                  * We don't charge a page fetch if this is the same heap page
    7207             :                  * as the previous tuple.  This is on the conservative side,
    7208             :                  * since other recently-accessed pages are probably still in
    7209             :                  * buffers too; but it's good enough for this heuristic.
    7210             :                  */
    7211             : #define VISITED_PAGES_LIMIT 100
    7212             : 
    7213       22226 :                 if (block != last_heap_block)
    7214             :                 {
    7215        2826 :                     last_heap_block = block;
    7216        2826 :                     n_visited_heap_pages++;
    7217        2826 :                     if (n_visited_heap_pages > VISITED_PAGES_LIMIT)
    7218           0 :                         break;
    7219             :                 }
    7220             : 
    7221       22226 :                 continue;       /* no visible tuple, try next index entry */
    7222             :             }
    7223             : 
    7224             :             /* We don't actually need the heap tuple for anything */
    7225       83018 :             ExecClearTuple(tableslot);
    7226             : 
    7227             :             /*
    7228             :              * We don't care whether there's more than one visible tuple in
    7229             :              * the HOT chain; if any are visible, that's good enough.
    7230             :              */
    7231             :         }
    7232             : 
    7233             :         /*
    7234             :          * We expect that the index will return data in IndexTuple not
    7235             :          * HeapTuple format.
    7236             :          */
    7237      129066 :         if (!index_scan->xs_itup)
    7238           0 :             elog(ERROR, "no data returned for index-only scan");
    7239             : 
    7240             :         /*
    7241             :          * We do not yet support recheck here.
    7242             :          */
    7243      129066 :         if (index_scan->xs_recheck)
    7244           0 :             break;
    7245             : 
    7246             :         /* OK to deconstruct the index tuple */
    7247      129066 :         index_deform_tuple(index_scan->xs_itup,
    7248             :                            index_scan->xs_itupdesc,
    7249             :                            values, isnull);
    7250             : 
    7251             :         /* Shouldn't have got a null, but be careful */
    7252      129066 :         if (isnull[0])
    7253           0 :             elog(ERROR, "found unexpected null value in index \"%s\"",
    7254             :                  RelationGetRelationName(indexRel));
    7255             : 
    7256             :         /* Copy the index column value out to caller's context */
    7257      129066 :         oldcontext = MemoryContextSwitchTo(outercontext);
    7258      129066 :         *endpointDatum = datumCopy(values[0], typByVal, typLen);
    7259      129066 :         MemoryContextSwitchTo(oldcontext);
    7260      129066 :         have_data = true;
    7261      129066 :         break;
    7262             :     }
    7263             : 
    7264      129066 :     if (vmbuffer != InvalidBuffer)
    7265      116320 :         ReleaseBuffer(vmbuffer);
    7266      129066 :     index_endscan(index_scan);
    7267             : 
    7268      129066 :     return have_data;
    7269             : }
    7270             : 
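                     : /*
                     :  * To make the give-up heuristic above concrete (an invented scenario):
                     :  * suppose a bulk DELETE removed the largest keys long enough ago that the
                     :  * tuples are dead to every snapshot, but VACUUM has not yet cleaned them
                     :  * up.  A max-endpoint probe must then step over those index entries; each
                     :  * distinct heap page visited without finding a suitable tuple counts
                     :  * against VISITED_PAGES_LIMIT (100), and once the limit is exceeded we
                     :  * return false and the caller falls back to the endpoint recorded in
                     :  * pg_statistic.
                     :  */
                     : 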
    7271             : /*
    7272             :  * find_join_input_rel
    7273             :  *      Look up the input relation for a join.
    7274             :  *
    7275             :  * We assume that the input relation's RelOptInfo must have been constructed
    7276             :  * already.
    7277             :  */
    7278             : static RelOptInfo *
    7279       10884 : find_join_input_rel(PlannerInfo *root, Relids relids)
    7280             : {
    7281       10884 :     RelOptInfo *rel = NULL;
    7282             : 
    7283       10884 :     if (!bms_is_empty(relids))
    7284             :     {
    7285             :         int         relid;
    7286             : 
    7287       10884 :         if (bms_get_singleton_member(relids, &relid))
    7288       10568 :             rel = find_base_rel(root, relid);
    7289             :         else
    7290         316 :             rel = find_join_rel(root, relids);
    7291             :     }
    7292             : 
    7293       10884 :     if (rel == NULL)
    7294           0 :         elog(ERROR, "could not find RelOptInfo for given relids");
    7295             : 
    7296       10884 :     return rel;
    7297             : }
    7298             : 
    7299             : 
    7300             : /*-------------------------------------------------------------------------
    7301             :  *
    7302             :  * Index cost estimation functions
    7303             :  *
    7304             :  *-------------------------------------------------------------------------
    7305             :  */
    7306             : 
    7307             : /*
    7308             :  * Extract the actual indexquals (as RestrictInfos) from an IndexClause list
    7309             :  */
    7310             : List *
    7311      823854 : get_quals_from_indexclauses(List *indexclauses)
    7312             : {
    7313      823854 :     List       *result = NIL;
    7314             :     ListCell   *lc;
    7315             : 
    7316     1450382 :     foreach(lc, indexclauses)
    7317             :     {
    7318      626528 :         IndexClause *iclause = lfirst_node(IndexClause, lc);
    7319             :         ListCell   *lc2;
    7320             : 
    7321     1255978 :         foreach(lc2, iclause->indexquals)
    7322             :         {
    7323      629450 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc2);
    7324             : 
    7325      629450 :             result = lappend(result, rinfo);
    7326             :         }
    7327             :     }
    7328      823854 :     return result;
    7329             : }
    7330             : 
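                     : /*
                     :  * For instance (a made-up clause), an IndexClause whose original clause
                     :  * is "x LIKE 'abc%'" typically carries two derived indexquals,
                     :  * "x >= 'abc'" and "x < 'abd'"; this function flattens the indexquals
                     :  * lists of all the IndexClauses, so the result holds those derived
                     :  * RestrictInfos rather than the original LIKE clause.
                     :  */
                     : 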
    7331             : /*
    7332             :  * Compute the total evaluation cost of the comparison operands in a list
    7333             :  * of index qual expressions.  Since we know these will be evaluated just
    7334             :  * once per scan, there's no need to distinguish startup from per-row cost.
    7335             :  *
    7336             :  * This can be used either on the result of get_quals_from_indexclauses(),
    7337             :  * or directly on an indexorderbys list.  In both cases, we expect that the
    7338             :  * index key expression is on the left side of binary clauses.
    7339             :  */
    7340             : Cost
    7341     1634702 : index_other_operands_eval_cost(PlannerInfo *root, List *indexquals)
    7342             : {
    7343     1634702 :     Cost        qual_arg_cost = 0;
    7344             :     ListCell   *lc;
    7345             : 
    7346     2264614 :     foreach(lc, indexquals)
    7347             :     {
    7348      629912 :         Expr       *clause = (Expr *) lfirst(lc);
    7349             :         Node       *other_operand;
    7350             :         QualCost    index_qual_cost;
    7351             : 
    7352             :         /*
    7353             :          * Index quals will have RestrictInfos, indexorderbys won't.  Look
    7354             :          * through RestrictInfo if present.
    7355             :          */
    7356      629912 :         if (IsA(clause, RestrictInfo))
    7357      629438 :             clause = ((RestrictInfo *) clause)->clause;
    7358             : 
    7359      629912 :         if (IsA(clause, OpExpr))
    7360             :         {
    7361      614860 :             OpExpr     *op = (OpExpr *) clause;
    7362             : 
    7363      614860 :             other_operand = (Node *) lsecond(op->args);
    7364             :         }
    7365       15052 :         else if (IsA(clause, RowCompareExpr))
    7366             :         {
    7367         396 :             RowCompareExpr *rc = (RowCompareExpr *) clause;
    7368             : 
    7369         396 :             other_operand = (Node *) rc->rargs;
    7370             :         }
    7371       14656 :         else if (IsA(clause, ScalarArrayOpExpr))
    7372             :         {
    7373       11730 :             ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) clause;
    7374             : 
    7375       11730 :             other_operand = (Node *) lsecond(saop->args);
    7376             :         }
    7377        2926 :         else if (IsA(clause, NullTest))
    7378             :         {
    7379        2926 :             other_operand = NULL;
    7380             :         }
    7381             :         else
    7382             :         {
    7383           0 :             elog(ERROR, "unsupported indexqual type: %d",
    7384             :                  (int) nodeTag(clause));
    7385             :             other_operand = NULL;   /* keep compiler quiet */
    7386             :         }
    7387             : 
    7388      629912 :         cost_qual_eval_node(&index_qual_cost, other_operand, root);
    7389      629912 :         qual_arg_cost += index_qual_cost.startup + index_qual_cost.per_tuple;
    7390             :     }
    7391     1634702 :     return qual_arg_cost;
    7392             : }
    7393             : 
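                     : /*
                     :  * Example (hypothetical indexqual): for "x = $1 + 10" the index key x is
                     :  * on the left and the other operand is the expression ($1 + 10), whose
                     :  * startup plus per-tuple cost is charged once, since the comparison value
                     :  * is computed a single time per scan rather than per index tuple.  A bare
                     :  * Const on the right contributes nothing.
                     :  */
                     : 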
    7394             : void
    7395      810860 : genericcostestimate(PlannerInfo *root,
    7396             :                     IndexPath *path,
    7397             :                     double loop_count,
    7398             :                     GenericCosts *costs)
    7399             : {
    7400      810860 :     IndexOptInfo *index = path->indexinfo;
    7401      810860 :     List       *indexQuals = get_quals_from_indexclauses(path->indexclauses);
    7402      810860 :     List       *indexOrderBys = path->indexorderbys;
    7403             :     Cost        indexStartupCost;
    7404             :     Cost        indexTotalCost;
    7405             :     Selectivity indexSelectivity;
    7406             :     double      indexCorrelation;
    7407             :     double      numIndexPages;
    7408             :     double      numIndexTuples;
    7409             :     double      spc_random_page_cost;
    7410             :     double      num_sa_scans;
    7411             :     double      num_outer_scans;
    7412             :     double      num_scans;
    7413             :     double      qual_op_cost;
    7414             :     double      qual_arg_cost;
    7415             :     List       *selectivityQuals;
    7416             :     ListCell   *l;
    7417             : 
    7418             :     /*
    7419             :      * If the index is partial, AND the index predicate with the explicitly
    7420             :      * given indexquals to produce a more accurate idea of the index
    7421             :      * selectivity.
    7422             :      */
    7423      810860 :     selectivityQuals = add_predicate_to_index_quals(index, indexQuals);
    7424             : 
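                     :     /*
                     :      * For example (a hypothetical partial index): if the index was built
                     :      * WHERE deleted = false and the scan's qual is x > 10,
                     :      * selectivityQuals behaves like {x > 10, deleted = false}, so the
                     :      * fraction computed below is relative to the whole heap, not just the
                     :      * subset the index covers.
                     :      */
                     : 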
    7425             :     /*
    7426             :      * If caller didn't give us an estimate for ScalarArrayOpExpr index scans,
    7427             :      * just assume that the number of index descents is the number of distinct
    7428             :      * combinations of array elements from all of the scan's SAOP clauses.
    7429             :      */
    7430      810860 :     num_sa_scans = costs->num_sa_scans;
    7431      810860 :     if (num_sa_scans < 1)
    7432             :     {
    7433        7904 :         num_sa_scans = 1;
    7434       16592 :         foreach(l, indexQuals)
    7435             :         {
    7436        8688 :             RestrictInfo *rinfo = (RestrictInfo *) lfirst(l);
    7437             : 
    7438        8688 :             if (IsA(rinfo->clause, ScalarArrayOpExpr))
    7439             :             {
    7440          26 :                 ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) rinfo->clause;
    7441          26 :                 double      alength = estimate_array_length(root, lsecond(saop->args));
    7442             : 
    7443          26 :                 if (alength > 1)
    7444          26 :                     num_sa_scans *= alength;
    7445             :             }
    7446             :         }
    7447             :     }
    7448             : 
    7449             :     /* Estimate the fraction of main-table tuples that will be visited */
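                     :     /*
                     :      * For example (hypothetical quals): with x = ANY('{1,2,3}') and
                     :      * y = ANY('{10,20}') among the indexquals, estimate_array_length()
                     :      * yields 3 and 2, so num_sa_scans = 1 * 3 * 2 = 6 index descents per
                     :      * outer scan.
                     :      */
                     : 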
    7450      810860 :     indexSelectivity = clauselist_selectivity(root, selectivityQuals,
    7451      810860 :                                               index->rel->relid,
    7452             :                                               JOIN_INNER,
    7453             :                                               NULL);
    7454             : 
    7455             :     /*
    7456             :      * If caller didn't give us an estimate, estimate the number of index
    7457             :      * tuples that will be visited.  We do it in this rather peculiar-looking
    7458             :      * way in order to get the right answer for partial indexes.
    7459             :      */
    7460      810860 :     numIndexTuples = costs->numIndexTuples;
    7461      810860 :     if (numIndexTuples <= 0.0)
    7462             :     {
    7463       92682 :         numIndexTuples = indexSelectivity * index->rel->tuples;
    7464             : 
    7465             :         /*
    7466             :          * The above calculation counts all the tuples visited across all
    7467             :          * scans induced by ScalarArrayOpExpr nodes.  We want to consider the
    7468             :          * average per-indexscan number, so adjust.  This is a handy place to
    7469             :          * round to integer, too.  (If caller supplied tuple estimate, it's
    7470             :          * responsible for handling these considerations.)
    7471             :          */
    7472       92682 :         numIndexTuples = rint(numIndexTuples / num_sa_scans);
    7473             :     }
    7474             : 
    7475             :     /*
    7476             :      * We can bound the number of tuples by the index size in any case. Also,
    7477             :      * always estimate at least one tuple is touched, even when
    7478             :      * indexSelectivity estimate is tiny.
    7479             :      */
    7480      810860 :     if (numIndexTuples > index->tuples)
    7481        6602 :         numIndexTuples = index->tuples;
    7482      810860 :     if (numIndexTuples < 1.0)
    7483       93430 :         numIndexTuples = 1.0;
    7484             : 
    7485             :     /*
    7486             :      * Estimate the number of index pages that will be retrieved.
    7487             :      *
    7488             :      * We use the simplistic method of taking a pro-rata fraction of the total
    7489             :      * number of index pages.  In effect, this counts only leaf pages and not
    7490             :      * any overhead such as index metapage or upper tree levels.
    7491             :      *
    7492             :      * In practice access to upper index levels is often nearly free because
    7493             :      * those tend to stay in cache under load; moreover, the cost involved is
    7494             :      * highly dependent on index type.  We therefore ignore such costs here
    7495             :      * and leave it to the caller to add a suitable charge if needed.
    7496             :      */
    7497      810860 :     if (index->pages > 1 && index->tuples > 1)
    7498      746158 :         numIndexPages = ceil(numIndexTuples * index->pages / index->tuples);
    7499             :     else
    7500       64702 :         numIndexPages = 1.0;
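                     :
                     :     /*
                     :      * Continuing the illustrative numbers above: prorating 2500 tuples
                     :      * against an index of 10,000 pages and 1,000,000 tuples gives
                     :      * ceil(2500 * 10000 / 1000000) = 25 leaf pages.
                     :      */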
    7501             : 
    7502             :     /* fetch estimated page cost for tablespace containing index */
    7503      810860 :     get_tablespace_page_costs(index->reltablespace,
    7504             :                               &spc_random_page_cost,
    7505             :                               NULL);
    7506             : 
    7507             :     /*
    7508             :      * Now compute the disk access costs.
    7509             :      *
    7510             :      * The above calculations are all per-index-scan.  However, if we are in a
    7511             :      * nestloop inner scan, we can expect the scan to be repeated (with
    7512             :      * different search keys) for each row of the outer relation.  Likewise,
    7513             :      * ScalarArrayOpExpr quals result in multiple index scans.  This creates
    7514             :      * the potential for cache effects to reduce the number of disk page
    7515             :      * fetches needed.  We want to estimate the average per-scan I/O cost in
    7516             :      * the presence of caching.
    7517             :      *
    7518             :      * We use the Mackert-Lohman formula (see costsize.c for details) to
    7519             :      * estimate the total number of page fetches that occur.  While this
    7520             :      * wasn't what it was designed for, it seems a reasonable model anyway.
    7521             :      * Note that we are counting pages not tuples anymore, so we take N = T =
    7522             :      * index size, as if there were one "tuple" per page.
    7523             :      */
    7524      810860 :     num_outer_scans = loop_count;
    7525      810860 :     num_scans = num_sa_scans * num_outer_scans;
    7526             : 
    7527      810860 :     if (num_scans > 1)
    7528             :     {
    7529             :         double      pages_fetched;
    7530             : 
    7531             :         /* total page fetches ignoring cache effects */
    7532       96016 :         pages_fetched = numIndexPages * num_scans;
    7533             : 
    7534             :         /* use Mackert and Lohman formula to adjust for cache effects */
    7535       96016 :         pages_fetched = index_pages_fetched(pages_fetched,
    7536             :                                             index->pages,
    7537       96016 :                                             (double) index->pages,
    7538             :                                             root);
    7539             : 
    7540             :         /*
    7541             :          * Now compute the total disk access cost, and then report a pro-rated
    7542             :          * share for each outer scan.  (Don't pro-rate for ScalarArrayOpExpr,
    7543             :          * since that's internal to the indexscan.)
    7544             :          */
    7545       96016 :         indexTotalCost = (pages_fetched * spc_random_page_cost)
    7546             :             / num_outer_scans;
    7547             :     }
    7548             :     else
    7549             :     {
    7550             :         /*
    7551             :          * For a single index scan, we just charge spc_random_page_cost per
    7552             :          * page touched.
    7553             :          */
    7554      714844 :         indexTotalCost = numIndexPages * spc_random_page_cost;
    7555             :     }
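                     :
                     :     /*
                     :      * Sketch of the proration above (illustrative numbers): with
                     :      * numIndexPages = 25, num_sa_scans = 4 and loop_count = 10, the raw
                     :      * total is 25 * 40 = 1000 page fetches; index_pages_fetched()
                     :      * discounts that for expected cache hits, and the discounted total
                     :      * times spc_random_page_cost is divided by the 10 outer scans.  The 4
                     :      * SAOP scans are not prorated because they happen within a single
                     :      * executor index scan.
                     :      */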
    7556             : 
    7557             :     /*
    7558             :      * CPU cost: any complex expressions in the indexquals will need to be
    7559             :      * evaluated once at the start of the scan to reduce them to runtime keys
    7560             :      * to pass to the index AM (see nodeIndexscan.c).  We model the per-tuple
    7561             :      * CPU costs as cpu_index_tuple_cost plus one cpu_operator_cost per
    7562             :      * indexqual operator.  Because we have numIndexTuples as a per-scan
    7563             :      * number, we have to multiply by num_sa_scans to get the correct result
    7564             :      * for ScalarArrayOpExpr cases.  Similarly add in costs for any index
    7565             :      * ORDER BY expressions.
    7566             :      *
    7567             :      * Note: this neglects the possible costs of rechecking lossy operators.
    7568             :      * Detecting that that might be needed seems more expensive than it's
    7569             :      * worth, though, considering all the other inaccuracies here ...
    7570             :      */
    7571      810860 :     qual_arg_cost = index_other_operands_eval_cost(root, indexQuals) +
    7572      810860 :         index_other_operands_eval_cost(root, indexOrderBys);
    7573      810860 :     qual_op_cost = cpu_operator_cost *
    7574      810860 :         (list_length(indexQuals) + list_length(indexOrderBys));
    7575             : 
    7576      810860 :     indexStartupCost = qual_arg_cost;
    7577      810860 :     indexTotalCost += qual_arg_cost;
    7578      810860 :     indexTotalCost += numIndexTuples * num_sa_scans * (cpu_index_tuple_cost + qual_op_cost);
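                     :
                     :     /*
                     :      * Illustrative CPU charge (continuing the numbers above): with two
                     :      * indexquals, no ORDER BY operators, numIndexTuples = 2500 and
                     :      * num_sa_scans = 4, this adds 2500 * 4 * (cpu_index_tuple_cost +
                     :      * 2 * cpu_operator_cost) to the total cost, plus qual_arg_cost, which
                     :      * also becomes the startup cost.
                     :      */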
    7579             : 
    7580             :     /*
    7581             :      * Generic assumption about index correlation: there isn't any.
    7582             :      */
    7583      810860 :     indexCorrelation = 0.0;
    7584             : 
    7585             :     /*
    7586             :      * Return everything to caller.
    7587             :      */
    7588      810860 :     costs->indexStartupCost = indexStartupCost;
    7589      810860 :     costs->indexTotalCost = indexTotalCost;
    7590      810860 :     costs->indexSelectivity = indexSelectivity;
    7591      810860 :     costs->indexCorrelation = indexCorrelation;
    7592      810860 :     costs->numIndexPages = numIndexPages;
    7593      810860 :     costs->numIndexTuples = numIndexTuples;
    7594      810860 :     costs->spc_random_page_cost = spc_random_page_cost;
    7595      810860 :     costs->num_sa_scans = num_sa_scans;
    7596      810860 : }
    7597             : 
    7598             : /*
    7599             :  * If the index is partial, add its predicate to the given qual list.
    7600             :  *
    7601             :  * ANDing the index predicate with the explicitly given indexquals produces
    7602             :  * a more accurate idea of the index's selectivity.  However, we need to be
    7603             :  * careful not to insert redundant clauses, because clauselist_selectivity()
    7604             :  * is easily fooled into computing a too-low selectivity estimate.  Our
    7605             :  * approach is to add only the predicate clause(s) that cannot be proven to
    7606             :  * be implied by the given indexquals.  This successfully handles cases such
    7607             :  * as a qual "x = 42" used with a partial index "WHERE x >= 40 AND x < 50".
    7608             :  * There are many other cases where we won't detect redundancy, leading to a
    7609             :  * too-low selectivity estimate, which will bias the system in favor of using
    7610             :  * partial indexes where possible.  That is not necessarily bad though.
    7611             :  *
    7612             :  * Note that indexQuals contains RestrictInfo nodes while the indpred
    7613             :  * does not, so the output list will be mixed.  This is OK for both
    7614             :  * predicate_implied_by() and clauselist_selectivity(), but might be
    7615             :  * problematic if the result were passed to other things.
    7616             :  */
    7617             : List *
    7618     1369768 : add_predicate_to_index_quals(IndexOptInfo *index, List *indexQuals)
    7619             : {
    7620     1369768 :     List       *predExtraQuals = NIL;
    7621             :     ListCell   *lc;
    7622             : 
    7623     1369768 :     if (index->indpred == NIL)
    7624     1367750 :         return indexQuals;
    7625             : 
    7626        4048 :     foreach(lc, index->indpred)
    7627             :     {
    7628        2030 :         Node       *predQual = (Node *) lfirst(lc);
    7629        2030 :         List       *oneQual = list_make1(predQual);
    7630             : 
    7631        2030 :         if (!predicate_implied_by(oneQual, indexQuals, false))
    7632        1812 :             predExtraQuals = list_concat(predExtraQuals, oneQual);
    7633             :     }
    7634        2018 :     return list_concat(predExtraQuals, indexQuals);
    7635             : }
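                     :
                     : /*
                     :  * Hypothetical usage sketch (editorial illustration, not actual
                     :  * selfuncs.c code): this mirrors how the cost estimators in this file
                     :  * fold a partial-index predicate into a selectivity estimate; "quals"
                     :  * stands in for a caller's indexqual list.
                     :  */
                     : #ifdef NOT_USED
                     : static Selectivity
                     : example_partial_index_selectivity(PlannerInfo *root, IndexOptInfo *index,
                     :                                   List *quals)
                     : {
                     :     /* AND in predicate clauses that the quals don't already imply */
                     :     List       *selQuals = add_predicate_to_index_quals(index, quals);
                     :
                     :     /* fraction of heap tuples satisfying quals plus index predicate */
                     :     return clauselist_selectivity(root, selQuals, index->rel->relid,
                     :                                   JOIN_INNER, NULL);
                     : }
                     : #endif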
    7636             : 
    7637             : /*
    7638             :  * Estimate correlation of btree index's first column.
    7639             :  *
    7640             :  * If we can get an estimate of the first column's ordering correlation C
    7641             :  * from pg_statistic, estimate the index correlation as C for a single-column
    7642             :  * index, or C * 0.75 for multiple columns.  The idea here is that multiple
    7643             :  * columns dilute the importance of the first column's ordering, but don't
    7644             :  * negate it entirely.
    7645             :  *
    7646             :  * We already filled in the stats tuple for *vardata when called.
    7647             :  */
    7648             : static double
    7649      600654 : btcost_correlation(IndexOptInfo *index, VariableStatData *vardata)
    7650             : {
    7651             :     Oid         sortop;
    7652             :     AttStatsSlot sslot;
    7653      600654 :     double      indexCorrelation = 0;
    7654             : 
    7655             :     Assert(HeapTupleIsValid(vardata->statsTuple));
    7656             : 
    7657      600654 :     sortop = get_opfamily_member(index->opfamily[0],
    7658      600654 :                                  index->opcintype[0],
    7659      600654 :                                  index->opcintype[0],
    7660             :                                  BTLessStrategyNumber);
    7661     1201308 :     if (OidIsValid(sortop) &&
    7662      600654 :         get_attstatsslot(&sslot, vardata->statsTuple,
    7663             :                          STATISTIC_KIND_CORRELATION, sortop,
    7664             :                          ATTSTATSSLOT_NUMBERS))
    7665             :     {
    7666             :         double      varCorrelation;
    7667             : 
    7668             :         Assert(sslot.nnumbers == 1);
    7669      592424 :         varCorrelation = sslot.numbers[0];
    7670             : 
    7671      592424 :         if (index->reverse_sort[0])
    7672           0 :             varCorrelation = -varCorrelation;
    7673             : 
    7674      592424 :         if (index->nkeycolumns > 1)
    7675      207362 :             indexCorrelation = varCorrelation * 0.75;
    7676             :         else
    7677      385062 :             indexCorrelation = varCorrelation;
    7678             : 
    7679      592424 :         free_attstatsslot(&sslot);
    7680             :     }
    7681             : 
    7682      600654 :     return indexCorrelation;
    7683             : }
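                     :
                     : /*
                     :  * Example of the rule above (illustrative): a leading-column correlation
                     :  * of 0.9 from pg_statistic yields indexCorrelation = 0.9 for a
                     :  * single-column index, or 0.9 * 0.75 = 0.675 when the index has more key
                     :  * columns; the sign is flipped first if the column is indexed in reverse
                     :  * sort order.
                     :  */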
    7684             : 
    7685             : void
    7686      802956 : btcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
    7687             :                Cost *indexStartupCost, Cost *indexTotalCost,
    7688             :                Selectivity *indexSelectivity, double *indexCorrelation,
    7689             :                double *indexPages)
    7690             : {
    7691      802956 :     IndexOptInfo *index = path->indexinfo;
    7692      802956 :     GenericCosts costs = {0};
    7693      802956 :     VariableStatData vardata = {0};
    7694             :     double      numIndexTuples;
    7695             :     Cost        descentCost;
    7696             :     List       *indexBoundQuals;
    7697             :     List       *indexSkipQuals;
    7698             :     int         indexcol;
    7699             :     bool        eqQualHere;
    7700             :     bool        found_row_compare;
    7701             :     bool        found_array;
    7702             :     bool        found_is_null_op;
    7703      802956 :     bool        have_correlation = false;
    7704             :     double      num_sa_scans;
    7705      802956 :     double      correlation = 0.0;
    7706             :     ListCell   *lc;
    7707             : 
    7708             :     /*
    7709             :      * For a btree scan, only leading '=' quals plus inequality quals for the
    7710             :      * immediately next attribute contribute to index selectivity (these are
    7711             :      * the "boundary quals" that determine the starting and stopping points of
    7712             :      * the index scan).  Additional quals can suppress visits to the heap, so
    7713             :      * it's OK to count them in indexSelectivity, but they should not count
    7714             :      * for estimating numIndexTuples.  So we must examine the given indexquals
    7715             :      * to find out which ones count as boundary quals.  We rely on the
    7716             :      * knowledge that they are given in index column order.  Note that nbtree
    7717             :      * preprocessing can add skip arrays that act as leading '=' quals in the
    7718             :      * absence of ordinary input '=' quals, so in practice _most_ input quals
    7719             :      * are able to act as index bound quals (which we take into account here).
    7720             :      *
    7721             :      * For a RowCompareExpr, we consider only the first column, just as
    7722             :      * rowcomparesel() does.
    7723             :      *
    7724             :      * If there's a SAOP or skip array in the quals, we'll actually perform up
    7725             :      * to N index descents (not just one), but the underlying array key's
    7726             :      * operator can be considered to act the same as it normally does.
    7727             :      * operator can be considered to act the same as it normally does.
                     :      */
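                     :
                     :     /*
                     :      * Illustrative reading of the rules above (the quals are
                     :      * hypothetical): for an index on (a, b, c), the clauses
                     :      * "a = 1 AND b > 3 AND b < 7" are all boundary quals (a leading '='
                     :      * plus inequalities on the next column).  Given "a = 1 AND c = 10"
                     :      * instead, "c = 10" only becomes a boundary qual if the loop below
                     :      * decides that a skip array over "b" is worthwhile (roughly, when b's
                     :      * ndistinct is known and no larger than the index's page count);
                     :      * otherwise it still feeds indexSelectivity but not numIndexTuples.
                     :      */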
    7728      802956 :     indexBoundQuals = NIL;
    7729      802956 :     indexSkipQuals = NIL;
    7730      802956 :     indexcol = 0;
    7731      802956 :     eqQualHere = false;
    7732      802956 :     found_row_compare = false;
    7733      802956 :     found_array = false;
    7734      802956 :     found_is_null_op = false;
    7735      802956 :     num_sa_scans = 1;
    7736     1367134 :     foreach(lc, path->indexclauses)
    7737             :     {
    7738      601554 :         IndexClause *iclause = lfirst_node(IndexClause, lc);
    7739             :         ListCell   *lc2;
    7740             : 
    7741      601554 :         if (indexcol < iclause->indexcol)
    7742             :         {
    7743      117044 :             double      num_sa_scans_prev_cols = num_sa_scans;
    7744             : 
    7745             :             /*
    7746             :              * Beginning of a new column's quals.
    7747             :              *
    7748             :              * Skip scans use skip arrays, which are ScalarArrayOp style
    7749             :              * arrays that generate their elements procedurally and on demand.
    7750             :              * Given a multi-column index on "(a, b)", and an SQL WHERE clause
    7751             :              * "WHERE b = 42", a skip scan will effectively use an indexqual
    7752             :              * "WHERE a = ANY('{every col a value}') AND b = 42".  (Obviously,
    7753             :              * the array on "a" must also return "IS NULL" matches, since our
    7754             :              * WHERE clause used no strict operator on "a").
    7755             :              *
    7756             :              * Here we consider how nbtree will backfill skip arrays for any
    7757             :              * index columns that lacked an '=' qual.  This maintains our
    7758             :              * num_sa_scans estimate, and determines if this new column (the
    7759             :              * "iclause->indexcol" column, not the prior "indexcol" column)
    7760             :              * can have its RestrictInfos/quals added to indexBoundQuals.
    7761             :              *
    7762             :              * We'll need to handle columns that have inequality quals, where
    7763             :              * the skip array generates values from a range constrained by the
    7764             :              * quals (not every possible value).  We've been maintaining
    7765             :              * indexSkipQuals to help with this; it will now contain all of
    7766             :              * the prior column's quals (that is, indexcol's quals) when they
    7767             :              * might be used for this.
    7768             :              */
    7769      117044 :             if (found_row_compare)
    7770             :             {
    7771             :                 /*
    7772             :                  * Skip arrays can't be added after a RowCompare input qual
    7773             :                  * due to limitations in nbtree
    7774             :                  */
    7775          24 :                 break;
    7776             :             }
    7777      117020 :             if (eqQualHere)
    7778             :             {
    7779             :                 /*
    7780             :                  * Don't need to add a skip array for an indexcol that already
    7781             :                  * has an '=' qual/equality constraint
    7782             :                  */
    7783       80316 :                 indexcol++;
    7784       80316 :                 indexSkipQuals = NIL;
    7785             :             }
    7786      117020 :             eqQualHere = false;
    7787             : 
    7788      120010 :             while (indexcol < iclause->indexcol)
    7789             :             {
    7790             :                 double      ndistinct;
    7791       40342 :                 bool        isdefault = true;
    7792             : 
    7793       40342 :                 found_array = true;
    7794             : 
    7795             :                 /*
    7796             :                  * A skipped attribute's ndistinct forms the basis of our
    7797             :                  * estimate of the total number of "array elements" used by
    7798             :                  * its skip array at runtime.  Look that up first.
    7799             :                  */
    7800       40342 :                 examine_indexcol_variable(root, index, indexcol, &vardata);
    7801       40342 :                 ndistinct = get_variable_numdistinct(&vardata, &isdefault);
    7802             : 
    7803       40342 :                 if (indexcol == 0)
    7804             :                 {
    7805             :                     /*
    7806             :                      * Get an estimate of the leading column's correlation in
    7807             :                      * passing (avoids rereading variable stats below)
    7808             :                      */
    7809       36692 :                     if (HeapTupleIsValid(vardata.statsTuple))
    7810       23846 :                         correlation = btcost_correlation(index, &vardata);
    7811       36692 :                     have_correlation = true;
    7812             :                 }
    7813             : 
    7814       40342 :                 ReleaseVariableStats(vardata);
    7815             : 
    7816             :                 /*
    7817             :                  * If ndistinct is a default estimate, conservatively assume
    7818             :                  * that no skipping will happen at runtime
    7819             :                  */
    7820       40342 :                 if (isdefault)
    7821             :                 {
    7822       11544 :                     num_sa_scans = num_sa_scans_prev_cols;
    7823       37352 :                     break;      /* done building indexBoundQuals */
    7824             :                 }
    7825             : 
    7826             :                 /*
    7827             :                  * Apply indexcol's indexSkipQuals selectivity to ndistinct
    7828             :                  */
    7829       28798 :                 if (indexSkipQuals != NIL)
    7830             :                 {
    7831             :                     List       *partialSkipQuals;
    7832             :                     Selectivity ndistinctfrac;
    7833             : 
    7834             :                     /*
    7835             :                      * If the index is partial, AND the index predicate with
    7836             :                      * the index-bound quals to produce a more accurate idea
    7837             :                      * of the number of distinct values for prior indexcol
    7838             :                      */
    7839         664 :                     partialSkipQuals = add_predicate_to_index_quals(index,
    7840             :                                                                     indexSkipQuals);
    7841             : 
    7842         664 :                     ndistinctfrac = clauselist_selectivity(root, partialSkipQuals,
    7843         664 :                                                            index->rel->relid,
    7844             :                                                            JOIN_INNER,
    7845             :                                                            NULL);
    7846             : 
    7847             :                     /*
    7848             :                      * If ndistinctfrac is selective (on its own), the scan is
    7849             :                      * unlikely to benefit from repositioning itself using
    7850             :                      * later quals.  Do not allow iclause->indexcol's quals to
    7851             :                      * be added to indexBoundQuals (it would increase descent
    7852             :                      * costs, without lowering numIndexTuples costs by much).
    7853             :                      */
    7854         664 :                     if (ndistinctfrac < DEFAULT_RANGE_INEQ_SEL)
    7855             :                     {
    7856         374 :                         num_sa_scans = num_sa_scans_prev_cols;
    7857         374 :                         break;  /* done building indexBoundQuals */
    7858             :                     }
    7859             : 
    7860             :                     /* Adjust ndistinct downward */
    7861         290 :                     ndistinct = rint(ndistinct * ndistinctfrac);
    7862         290 :                     ndistinct = Max(ndistinct, 1);
    7863             :                 }
    7864             : 
    7865             :                 /*
    7866             :                  * When there's no inequality quals, account for the need to
    7867             :                  * find an initial value by counting -inf/+inf as a value.
    7868             :                  *
    7869             :                  * We don't charge anything extra for possible next/prior key
    7870             :                  * index probes, which are sometimes used to find the next
    7871             :                  * valid skip array element (ahead of using the located
    7872             :                  * element value to relocate the scan to the next position
    7873             :                  * that might contain matching tuples).  It seems hard to do
    7874             :                  * better here.  Use of the skip support infrastructure often
    7875             :                  * avoids most next/prior key probes.  But even when it can't,
    7876             :                  * there's a decent chance that most individual next/prior key
    7877             :                  * probes will locate a leaf page whose key space overlaps all
    7878             :                  * of the scan's keys (even the lower-order keys) -- which
    7879             :                  * also avoids the need for a separate, extra index descent.
    7880             :                  * Note also that these probes are much cheaper than non-probe
    7881             :                  * primitive index scans: they're reliably very selective.
    7882             :                  */
    7883       28424 :                 if (indexSkipQuals == NIL)
    7884       28134 :                     ndistinct += 1;
    7885             : 
    7886             :                 /*
    7887             :                  * Update num_sa_scans estimate by multiplying by ndistinct.
    7888             :                  *
    7889             :                  * We make the pessimistic assumption that there is no
    7890             :                  * naturally occurring cross-column correlation.  This is
    7891             :                  * often wrong, but it seems best to err on the side of not
    7892             :                  * expecting skipping to be helpful...
    7893             :                  */
    7894       28424 :                 num_sa_scans *= ndistinct;
    7895             : 
    7896             :                 /*
    7897             :                  * ...but back out of adding this latest group of 1 or more
    7898             :                  * skip arrays when num_sa_scans exceeds the total number of
    7899             :                  * index pages (revert to num_sa_scans from before indexcol).
    7900             :                  * This causes a sharp discontinuity in cost (as a function of
    7901             :                  * the indexcol's ndistinct), but that is representative of
    7902             :                  * actual runtime costs.
    7903             :                  *
    7904             :                  * Note that skipping is helpful when each primitive index
    7905             :                  * scan only manages to skip over 1 or 2 irrelevant leaf pages
    7906             :                  * on average.  Skip arrays bring savings in CPU costs due to
    7907             :                  * the scan not needing to evaluate indexquals against every
    7908             :                  * tuple, which can greatly exceed any savings in I/O costs.
    7909             :                  * This test is a test of whether num_sa_scans implies that
    7910             :                  * we're past the point where the ability to skip ceases to
    7911             :                  * lower the scan's costs (even qual evaluation CPU costs).
    7912             :                  */
    7913       28424 :                 if (index->pages < num_sa_scans)
    7914             :                 {
    7915       25434 :                     num_sa_scans = num_sa_scans_prev_cols;
    7916       25434 :                     break;      /* done building indexBoundQuals */
    7917             :                 }
    7918             : 
    7919        2990 :                 indexcol++;
    7920        2990 :                 indexSkipQuals = NIL;
    7921             :             }
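                     :
                     :             /*
                     :              * Illustrative pass through the loop above (hypothetical
                     :              * statistics): for an index on (a, b) with only "b = 42" as an
                     :              * input qual, suppose a's ndistinct is 50 (not a default
                     :              * estimate) and the index has 1000 pages.  With no inequality
                     :              * quals on "a", ndistinct becomes 51, num_sa_scans becomes 51
                     :              * (not more than 1000 pages), and indexcol advances so that
                     :              * "b = 42" can be treated as an index bound qual.
                     :              */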
    7922             : 
    7923             :             /*
    7924             :              * Finished considering the need to add skip arrays to bridge an
    7925             :              * initial eqQualHere gap between the old and new index columns
    7926             :              * (or there was no initial eqQualHere gap in the first place).
    7927             :              *
    7928             :              * If an initial gap could not be bridged, then the new column's quals
    7929             :              * (i.e. iclause->indexcol's quals) won't go into indexBoundQuals,
    7930             :              * and so won't affect our final numIndexTuples estimate.
    7931             :              */
    7932      117020 :             if (indexcol != iclause->indexcol)
    7933       37352 :                 break;          /* done building indexBoundQuals */
    7934             :         }
    7935             : 
    7936             :         Assert(indexcol == iclause->indexcol);
    7937             : 
    7938             :         /* Examine each indexqual associated with this index clause */
    7939     1131098 :         foreach(lc2, iclause->indexquals)
    7940             :         {
    7941      566920 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc2);
    7942      566920 :             Expr       *clause = rinfo->clause;
    7943      566920 :             Oid         clause_op = InvalidOid;
    7944             :             int         op_strategy;
    7945             : 
    7946      566920 :             if (IsA(clause, OpExpr))
    7947             :             {
    7948      552938 :                 OpExpr     *op = (OpExpr *) clause;
    7949             : 
    7950      552938 :                 clause_op = op->opno;
    7951             :             }
    7952       13982 :             else if (IsA(clause, RowCompareExpr))
    7953             :             {
    7954         396 :                 RowCompareExpr *rc = (RowCompareExpr *) clause;
    7955             : 
    7956         396 :                 clause_op = linitial_oid(rc->opnos);
    7957         396 :                 found_row_compare = true;
    7958             :             }
    7959       13586 :             else if (IsA(clause, ScalarArrayOpExpr))
    7960             :             {
    7961       11302 :                 ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) clause;
    7962       11302 :                 Node       *other_operand = (Node *) lsecond(saop->args);
    7963       11302 :                 double      alength = estimate_array_length(root, other_operand);
    7964             : 
    7965       11302 :                 clause_op = saop->opno;
    7966       11302 :                 found_array = true;
    7967             :                 /* estimate SA descents by indexBoundQuals only */
    7968       11302 :                 if (alength > 1)
    7969       10994 :                     num_sa_scans *= alength;
    7970             :             }
    7971        2284 :             else if (IsA(clause, NullTest))
    7972             :             {
    7973        2284 :                 NullTest   *nt = (NullTest *) clause;
    7974             : 
    7975        2284 :                 if (nt->nulltesttype == IS_NULL)
    7976             :                 {
    7977         240 :                     found_is_null_op = true;
    7978             :                     /* IS NULL is like = for selectivity/skip scan purposes */
    7979         240 :                     eqQualHere = true;
    7980             :                 }
    7981             :             }
    7982             :             else
    7983           0 :                 elog(ERROR, "unsupported indexqual type: %d",
    7984             :                      (int) nodeTag(clause));
    7985             : 
    7986             :             /* check for equality operator */
    7987      566920 :             if (OidIsValid(clause_op))
    7988             :             {
    7989      564636 :                 op_strategy = get_op_opfamily_strategy(clause_op,
    7990      564636 :                                                        index->opfamily[indexcol]);
    7991             :                 Assert(op_strategy != 0);   /* not a member of opfamily?? */
    7992      564636 :                 if (op_strategy == BTEqualStrategyNumber)
    7993      532338 :                     eqQualHere = true;
    7994             :             }
    7995             : 
    7996      566920 :             indexBoundQuals = lappend(indexBoundQuals, rinfo);
    7997             : 
    7998             :             /*
    7999             :              * We apply inequality selectivities to estimate index descent
    8000             :              * costs with scans that use skip arrays.  Save this indexcol's
    8001             :              * RestrictInfos if it looks like they'll be needed for that.
    8002             :              */
    8003      566920 :             if (!eqQualHere && !found_row_compare &&
    8004       33244 :                 indexcol < index->nkeycolumns - 1)
    8005        5704 :                 indexSkipQuals = lappend(indexSkipQuals, rinfo);
    8006             :         }
    8007             :     }
    8008             : 
    8009             :     /*
    8010             :      * If index is unique and we found an '=' clause for each column, we can
    8011             :      * just assume numIndexTuples = 1 and skip the expensive
    8012             :      * clauselist_selectivity calculations.  However, an array or NullTest
    8013             :      * always invalidates that theory (even when eqQualHere has been set).
    8014             :      */
    8015      802956 :     if (index->unique &&
    8016      655270 :         indexcol == index->nkeycolumns - 1 &&
    8017      253298 :         eqQualHere &&
    8018      253298 :         !found_array &&
    8019      247024 :         !found_is_null_op)
    8020      246976 :         numIndexTuples = 1.0;
    8021             :     else
    8022             :     {
    8023             :         List       *selectivityQuals;
    8024             :         Selectivity btreeSelectivity;
    8025             : 
    8026             :         /*
    8027             :          * If the index is partial, AND the index predicate with the
    8028             :          * index-bound quals to produce a more accurate idea of the number of
    8029             :          * rows covered by the bound conditions.
    8030             :          */
    8031      555980 :         selectivityQuals = add_predicate_to_index_quals(index, indexBoundQuals);
    8032             : 
    8033      555980 :         btreeSelectivity = clauselist_selectivity(root, selectivityQuals,
    8034      555980 :                                                   index->rel->relid,
    8035             :                                                   JOIN_INNER,
    8036             :                                                   NULL);
    8037      555980 :         numIndexTuples = btreeSelectivity * index->rel->tuples;
    8038             : 
    8039             :         /*
    8040             :          * btree automatically combines individual array element primitive
    8041             :          * index scans whenever the tuples covered by the next set of array
    8042             :          * keys are close to tuples covered by the current set.  That puts a
    8043             :          * natural ceiling on the worst case number of descents -- there
    8044             :          * cannot possibly be more than one descent per leaf page scanned.
    8045             :          *
    8046             :          * Clamp the number of descents to at most 1/3 the number of index
    8047             :          * pages.  This avoids implausibly high estimates with low selectivity
    8048             :          * paths, where scans usually require only one or two descents.  This
    8049             :          * is most likely to help when there are several SAOP clauses, where
    8050             :          * naively accepting the total number of distinct combinations of
    8051             :          * array elements as the number of descents would frequently lead to
    8052             :          * wild overestimates.
    8053             :          *
    8054             :          * We somewhat arbitrarily don't just make the cutoff the total number
    8055             :          * of leaf pages (we make it 1/3 the total number of pages instead) to
    8056             :          * give the btree code credit for its ability to continue on the leaf
    8057             :          * level with low selectivity scans.
    8058             :          *
    8059             :          * Note: num_sa_scans includes both ScalarArrayOp array elements and
    8060             :          * skip array elements whose qual affects our numIndexTuples estimate.
    8061             :          */
    8062      555980 :         num_sa_scans = Min(num_sa_scans, ceil(index->pages * 0.3333333));
    8063      555980 :         num_sa_scans = Max(num_sa_scans, 1);
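                     :
                     :         /*
                     :          * Worked example of the clamp above (illustrative numbers): three
                     :          * SAOP clauses with 20 elements each would naively suggest
                     :          * 20 * 20 * 20 = 8000 descents; for a 600-page index the clamp
                     :          * limits that to ceil(600 / 3) = 200 descents.
                     :          */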
    8064             : 
    8065             :         /*
    8066             :          * As in genericcostestimate(), we have to adjust for any array quals
    8067             :          * included in indexBoundQuals, and then round to integer.
    8068             :          *
    8069             :          * It is tempting to make genericcostestimate behave as if array
    8070             :          * clauses work in almost the same way as scalar operators during
    8071             :          * btree scans, making the top-level scan look like a continuous scan
    8072             :          * (as opposed to num_sa_scans-many primitive index scans).  After
    8073             :          * all, btree scans mostly work like that at runtime.  However, such a
    8074             :          * scheme would badly bias genericcostestimate's simplistic approach
    8075             :          * to calculating numIndexPages through prorating.
    8076             :          *
    8077             :          * Stick with the approach taken by non-native SAOP scans for now.
    8078             :          * genericcostestimate will use the Mackert-Lohman formula to
    8079             :          * compensate for repeat page fetches, even though that definitely
    8080             :          * won't happen during btree scans (not for leaf pages, at least).
    8081             :          * We're usually very pessimistic about the number of primitive index
    8082             :          * scans that will be required, but it's not clear how to do better.
    8083             :          */
    8084      555980 :         numIndexTuples = rint(numIndexTuples / num_sa_scans);
    8085             :     }
    8086             : 
    8087             :     /*
    8088             :      * Now do generic index cost estimation.
    8089             :      */
    8090      802956 :     costs.numIndexTuples = numIndexTuples;
    8091      802956 :     costs.num_sa_scans = num_sa_scans;
    8092             : 
    8093      802956 :     genericcostestimate(root, path, loop_count, &costs);
    8094             : 
    8095             :     /*
    8096             :      * Add a CPU-cost component to represent the costs of initial btree
    8097             :      * descent.  We don't charge any I/O cost for touching upper btree levels,
    8098             :      * since they tend to stay in cache, but we still have to do about log2(N)
    8099             :      * comparisons to descend a btree of N leaf tuples.  We charge one
    8100             :      * cpu_operator_cost per comparison.
    8101             :      *
    8102             :      * If there are SAOP or skip array keys, charge this once per estimated
    8103             :      * index descent.  The ones after the first one are not startup cost so
    8104             :      * far as the overall plan goes, so just add them to "total" cost.
    8105             :      */
    8106      802956 :     if (index->tuples > 1)        /* avoid computing log(0) */
    8107             :     {
    8108      747118 :         descentCost = ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;
    8109      747118 :         costs.indexStartupCost += descentCost;
    8110      747118 :         costs.indexTotalCost += costs.num_sa_scans * descentCost;
    8111             :     }
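                     :
                     :     /*
                     :      * Illustrative descent charge (numbers not from this report): an
                     :      * index with about 1,000,000 leaf tuples needs
                     :      * ceil(log2(1000000)) = 20 comparisons, i.e. 20 * cpu_operator_cost
                     :      * added to startup cost and charged once per estimated descent in the
                     :      * total cost.
                     :      */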
    8112             : 
    8113             :     /*
    8114             :      * Even though we're not charging I/O cost for touching upper btree pages,
    8115             :      * it's still reasonable to charge some CPU cost per page descended
    8116             :      * through.  Moreover, if we had no such charge at all, bloated indexes
    8117             :      * would appear to have the same search cost as unbloated ones, at least
    8118             :      * in cases where only a single leaf page is expected to be visited.  This
    8119             :      * cost is somewhat arbitrarily set at 50x cpu_operator_cost per page
    8120             :      * touched.  The number of such pages is btree tree height plus one (ie,
    8121             :      * we charge for the leaf page too).  As above, charge once per estimated
    8122             :      * SAOP/skip array descent.
    8123             :      */
    8124      802956 :     descentCost = (index->tree_height + 1) * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8125      802956 :     costs.indexStartupCost += descentCost;
    8126      802956 :     costs.indexTotalCost += costs.num_sa_scans * descentCost;
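                     :
                     :     /*
                     :      * Illustrative page charge (using the 50x multiplier described
                     :      * above): a btree with tree_height = 2 descends through 2 + 1 = 3
                     :      * pages, so each descent adds 3 * 50 * cpu_operator_cost =
                     :      * 150 * cpu_operator_cost.
                     :      */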
    8127             : 
    8128      802956 :     if (!have_correlation)
    8129             :     {
    8130      766264 :         examine_indexcol_variable(root, index, 0, &vardata);
    8131      766264 :         if (HeapTupleIsValid(vardata.statsTuple))
    8132      576808 :             costs.indexCorrelation = btcost_correlation(index, &vardata);
    8133      766264 :         ReleaseVariableStats(vardata);
    8134             :     }
    8135             :     else
    8136             :     {
    8137             :         /* btcost_correlation already called earlier on */
    8138       36692 :         costs.indexCorrelation = correlation;
    8139             :     }
    8140             : 
    8141      802956 :     *indexStartupCost = costs.indexStartupCost;
    8142      802956 :     *indexTotalCost = costs.indexTotalCost;
    8143      802956 :     *indexSelectivity = costs.indexSelectivity;
    8144      802956 :     *indexCorrelation = costs.indexCorrelation;
    8145      802956 :     *indexPages = costs.numIndexPages;
    8146      802956 : }
    8147             : 
    8148             : void
    8149         430 : hashcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
    8150             :                  Cost *indexStartupCost, Cost *indexTotalCost,
    8151             :                  Selectivity *indexSelectivity, double *indexCorrelation,
    8152             :                  double *indexPages)
    8153             : {
    8154         430 :     GenericCosts costs = {0};
    8155             : 
    8156         430 :     genericcostestimate(root, path, loop_count, &costs);
    8157             : 
    8158             :     /*
    8159             :      * A hash index has no descent costs as such, since the index AM can go
    8160             :      * directly to the target bucket after computing the hash value.  There
    8161             :      * are a couple of other hash-specific costs that we could conceivably add
    8162             :      * here, though:
    8163             :      *
    8164             :      * Ideally we'd charge spc_random_page_cost for each page in the target
    8165             :      * bucket, not just the numIndexPages pages that genericcostestimate
    8166             :      * thought we'd visit.  However in most cases we don't know which bucket
    8167             :      * that will be.  There's no point in considering the average bucket size
    8168             :      * because the hash AM makes sure that's always one page.
    8169             :      *
    8170             :      * Likewise, we could consider charging some CPU for each index tuple in
    8171             :      * the bucket, if we knew how many there were.  But the per-tuple cost is
    8172             :      * just a hash value comparison, not a general datatype-dependent
    8173             :      * comparison, so any such charge ought to be quite a bit less than
    8174             :      * cpu_operator_cost; which makes it probably not worth worrying about.
    8175             :      *
    8176             :      * A bigger issue is that chance hash-value collisions will result in
    8177             :      * wasted probes into the heap.  We don't currently attempt to model this
    8178             :      * cost on the grounds that it's rare, but maybe it's not rare enough.
    8179             :      * (Any fix for this ought to consider the generic lossy-operator problem,
    8180             :      * though; it's not entirely hash-specific.)
    8181             :      */
    8182             : 
    8183         430 :     *indexStartupCost = costs.indexStartupCost;
    8184         430 :     *indexTotalCost = costs.indexTotalCost;
    8185         430 :     *indexSelectivity = costs.indexSelectivity;
    8186         430 :     *indexCorrelation = costs.indexCorrelation;
    8187         430 :     *indexPages = costs.numIndexPages;
    8188         430 : }
    8189             : 
    8190             : void
    8191        4878 : gistcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
    8192             :                  Cost *indexStartupCost, Cost *indexTotalCost,
    8193             :                  Selectivity *indexSelectivity, double *indexCorrelation,
    8194             :                  double *indexPages)
    8195             : {
    8196        4878 :     IndexOptInfo *index = path->indexinfo;
    8197        4878 :     GenericCosts costs = {0};
    8198             :     Cost        descentCost;
    8199             : 
    8200        4878 :     genericcostestimate(root, path, loop_count, &costs);
    8201             : 
    8202             :     /*
    8203             :      * We model index descent costs similarly to those for btree, but to do
    8204             :      * that we first need an idea of the tree height.  We somewhat arbitrarily
    8205             :      * assume that the fanout is 100, meaning the tree height is at most
    8206             :      * log100(index->pages).
    8207             :      *
    8208             :      * Although this computation isn't really expensive enough to require
    8209             :      * caching, we might as well use index->tree_height to cache it.
    8210             :      */
    8211        4878 :     if (index->tree_height < 0) /* unknown? */
    8212             :     {
    8213        4864 :         if (index->pages > 1) /* avoid computing log(0) */
    8214        2720 :             index->tree_height = (int) (log(index->pages) / log(100.0));
    8215             :         else
    8216        2144 :             index->tree_height = 0;
    8217             :     }
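                     :
                     :     /*
                     :      * Illustrative tree-height estimate (numbers not from this report):
                     :      * with the assumed fanout of 100, a 50,000-page GiST index gets
                     :      * tree_height = (int) (log(50000) / log(100)) = 2.
                     :      */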
    8218             : 
    8219             :     /*
    8220             :      * Add a CPU-cost component to represent the costs of initial descent. We
    8221             :      * just use log(N) here not log2(N) since the branching factor isn't
    8222             :      * necessarily two anyway.  As for btree, charge once per SA scan.
    8223             :      */
    8224        4878 :     if (index->tuples > 1)        /* avoid computing log(0) */
    8225             :     {
    8226        4878 :         descentCost = ceil(log(index->tuples)) * cpu_operator_cost;
    8227        4878 :         costs.indexStartupCost += descentCost;
    8228        4878 :         costs.indexTotalCost += costs.num_sa_scans * descentCost;
    8229             :     }
    8230             : 
    8231             :     /*
    8232             :      * Likewise add a per-page charge, calculated the same as for btrees.
    8233             :      */
    8234        4878 :     descentCost = (index->tree_height + 1) * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8235        4878 :     costs.indexStartupCost += descentCost;
    8236        4878 :     costs.indexTotalCost += costs.num_sa_scans * descentCost;
    8237             : 
    8238        4878 :     *indexStartupCost = costs.indexStartupCost;
    8239        4878 :     *indexTotalCost = costs.indexTotalCost;
    8240        4878 :     *indexSelectivity = costs.indexSelectivity;
    8241        4878 :     *indexCorrelation = costs.indexCorrelation;
    8242        4878 :     *indexPages = costs.numIndexPages;
    8243        4878 : }
    8244             : 
    8245             : void
    8246        1784 : spgcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
    8247             :                 Cost *indexStartupCost, Cost *indexTotalCost,
    8248             :                 Selectivity *indexSelectivity, double *indexCorrelation,
    8249             :                 double *indexPages)
    8250             : {
    8251        1784 :     IndexOptInfo *index = path->indexinfo;
    8252        1784 :     GenericCosts costs = {0};
    8253             :     Cost        descentCost;
    8254             : 
    8255        1784 :     genericcostestimate(root, path, loop_count, &costs);
    8256             : 
    8257             :     /*
    8258             :      * We model index descent costs similarly to those for btree, but to do
    8259             :      * that we first need an idea of the tree height.  We somewhat arbitrarily
    8260             :      * assume that the fanout is 100, meaning the tree height is at most
    8261             :      * log100(index->pages).
    8262             :      *
    8263             :      * Although this computation isn't really expensive enough to require
    8264             :      * caching, we might as well use index->tree_height to cache it.
    8265             :      */
    8266        1784 :     if (index->tree_height < 0) /* unknown? */
    8267             :     {
    8268        1778 :         if (index->pages > 1) /* avoid computing log(0) */
    8269        1778 :             index->tree_height = (int) (log(index->pages) / log(100.0));
    8270             :         else
    8271           0 :             index->tree_height = 0;
    8272             :     }
    8273             : 
    8274             :     /*
    8275             :      * Add a CPU-cost component to represent the costs of initial descent. We
    8276             :      * just use log(N) here not log2(N) since the branching factor isn't
    8277             :      * necessarily two anyway.  As for btree, charge once per SA scan.
    8278             :      */
    8279        1784 :     if (index->tuples > 1)        /* avoid computing log(0) */
    8280             :     {
    8281        1784 :         descentCost = ceil(log(index->tuples)) * cpu_operator_cost;
    8282        1784 :         costs.indexStartupCost += descentCost;
    8283        1784 :         costs.indexTotalCost += costs.num_sa_scans * descentCost;
    8284             :     }
    8285             : 
    8286             :     /*
    8287             :      * Likewise add a per-page charge, calculated the same as for btrees.
    8288             :      */
    8289        1784 :     descentCost = (index->tree_height + 1) * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8290        1784 :     costs.indexStartupCost += descentCost;
    8291        1784 :     costs.indexTotalCost += costs.num_sa_scans * descentCost;
    8292             : 
    8293        1784 :     *indexStartupCost = costs.indexStartupCost;
    8294        1784 :     *indexTotalCost = costs.indexTotalCost;
    8295        1784 :     *indexSelectivity = costs.indexSelectivity;
    8296        1784 :     *indexCorrelation = costs.indexCorrelation;
    8297        1784 :     *indexPages = costs.numIndexPages;
    8298        1784 : }
    8299             : 
    8300             : 
    8301             : /*
    8302             :  * Support routines for gincostestimate
    8303             :  */
    8304             : 
    8305             : typedef struct
    8306             : {
    8307             :     bool        attHasFullScan[INDEX_MAX_KEYS];
    8308             :     bool        attHasNormalScan[INDEX_MAX_KEYS];
    8309             :     double      partialEntries;
    8310             :     double      exactEntries;
    8311             :     double      searchEntries;
    8312             :     double      arrayScans;
    8313             : } GinQualCounts;
    8314             : 
    8315             : /*
    8316             :  * Estimate the number of index terms that need to be searched for while
    8317             :  * testing the given GIN query, and increment the counts in *counts
    8318             :  * appropriately.  If the query is unsatisfiable, return false.
    8319             :  */
    8320             : static bool
    8321        2480 : gincost_pattern(IndexOptInfo *index, int indexcol,
    8322             :                 Oid clause_op, Datum query,
    8323             :                 GinQualCounts *counts)
    8324             : {
    8325             :     FmgrInfo    flinfo;
    8326             :     Oid         extractProcOid;
    8327             :     Oid         collation;
    8328             :     int         strategy_op;
    8329             :     Oid         lefttype,
    8330             :                 righttype;
    8331        2480 :     int32       nentries = 0;
    8332        2480 :     bool       *partial_matches = NULL;
    8333        2480 :     Pointer    *extra_data = NULL;
    8334        2480 :     bool       *nullFlags = NULL;
    8335        2480 :     int32       searchMode = GIN_SEARCH_MODE_DEFAULT;
    8336             :     int32       i;
    8337             : 
    8338             :     Assert(indexcol < index->nkeycolumns);
    8339             : 
    8340             :     /*
    8341             :      * Get the operator's strategy number and declared input data types within
    8342             :      * the index opfamily.  (We don't need the latter, but we use
    8343             :      * get_op_opfamily_properties because it will throw error if it fails to
    8344             :      * find a matching pg_amop entry.)
    8345             :      */
    8346        2480 :     get_op_opfamily_properties(clause_op, index->opfamily[indexcol], false,
    8347             :                                &strategy_op, &lefttype, &righttype);
    8348             : 
    8349             :     /*
    8350             :      * GIN always uses the "default" support functions, which are those with
    8351             :      * lefttype == righttype == the opclass' opcintype (see
    8352             :      * IndexSupportInitialize in relcache.c).
    8353             :      */
    8354        2480 :     extractProcOid = get_opfamily_proc(index->opfamily[indexcol],
    8355        2480 :                                        index->opcintype[indexcol],
    8356        2480 :                                        index->opcintype[indexcol],
    8357             :                                        GIN_EXTRACTQUERY_PROC);
    8358             : 
    8359        2480 :     if (!OidIsValid(extractProcOid))
    8360             :     {
    8361             :         /* should not happen; throw same error as index_getprocinfo */
    8362           0 :         elog(ERROR, "missing support function %d for attribute %d of index \"%s\"",
    8363             :              GIN_EXTRACTQUERY_PROC, indexcol + 1,
    8364             :              get_rel_name(index->indexoid));
    8365             :     }
    8366             : 
    8367             :     /*
    8368             :      * Choose collation to pass to extractProc (should match initGinState).
    8369             :      */
    8370        2480 :     if (OidIsValid(index->indexcollations[indexcol]))
    8371         414 :         collation = index->indexcollations[indexcol];
    8372             :     else
    8373        2066 :         collation = DEFAULT_COLLATION_OID;
    8374             : 
    8375        2480 :     fmgr_info(extractProcOid, &flinfo);
    8376             : 
    8377        2480 :     set_fn_opclass_options(&flinfo, index->opclassoptions[indexcol]);
    8378             : 
    8379        2480 :     FunctionCall7Coll(&flinfo,
    8380             :                       collation,
    8381             :                       query,
    8382             :                       PointerGetDatum(&nentries),
    8383             :                       UInt16GetDatum(strategy_op),
    8384             :                       PointerGetDatum(&partial_matches),
    8385             :                       PointerGetDatum(&extra_data),
    8386             :                       PointerGetDatum(&nullFlags),
    8387             :                       PointerGetDatum(&searchMode));
    8388             : 
    8389        2480 :     if (nentries <= 0 && searchMode == GIN_SEARCH_MODE_DEFAULT)
    8390             :     {
    8391             :         /* No match is possible */
    8392          12 :         return false;
    8393             :     }
    8394             : 
    8395        9676 :     for (i = 0; i < nentries; i++)
    8396             :     {
    8397             :         /*
    8398             :          * For a partial match we have no information with which to estimate
    8399             :          * the number of matched entries in the index, so just estimate it as 100.
    8400             :          */
    8401        7208 :         if (partial_matches && partial_matches[i])
    8402         694 :             counts->partialEntries += 100;
    8403             :         else
    8404        6514 :             counts->exactEntries++;
    8405             : 
    8406        7208 :         counts->searchEntries++;
    8407             :     }
    8408             : 
    8409        2468 :     if (searchMode == GIN_SEARCH_MODE_DEFAULT)
    8410             :     {
    8411        1984 :         counts->attHasNormalScan[indexcol] = true;
    8412             :     }
    8413         484 :     else if (searchMode == GIN_SEARCH_MODE_INCLUDE_EMPTY)
    8414             :     {
    8415             :         /* Treat "include empty" like an exact-match item */
    8416          44 :         counts->attHasNormalScan[indexcol] = true;
    8417          44 :         counts->exactEntries++;
    8418          44 :         counts->searchEntries++;
    8419             :     }
    8420             :     else
    8421             :     {
    8422             :         /* It's GIN_SEARCH_MODE_ALL */
    8423         440 :         counts->attHasFullScan[indexcol] = true;
    8424             :     }
    8425             : 
    8426        2468 :     return true;
    8427             : }
    8428             : 
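For context on the FunctionCall7Coll invocation above: the GIN extractQuery support function follows a fixed argument convention (query value, pointer to the entry count, strategy number, and output pointers for the partial-match flags, extra data, null flags, and search mode). The sketch below is a minimal, hypothetical opclass support function (names invented, not part of selfuncs.c) showing the outputs that gincost_pattern inspects: *nentries, the partial-match array, and *searchMode.

    /* Hypothetical example: a trivial extractQuery support function whose
     * queries are single keys.  Illustrative only. */
    #include "postgres.h"
    #include "fmgr.h"
    #include "access/gin.h"

    PG_FUNCTION_INFO_V1(demo_extractquery);

    Datum
    demo_extractquery(PG_FUNCTION_ARGS)
    {
        Datum       query = PG_GETARG_DATUM(0);
        int32      *nentries = (int32 *) PG_GETARG_POINTER(1);
        /* StrategyNumber strategy = PG_GETARG_UINT16(2); (unused here) */
        bool      **pmatch = (bool **) PG_GETARG_POINTER(3);
        /* args 4 (extra_data) and 5 (nullFlags) are left untouched */
        int32      *searchMode = (int32 *) PG_GETARG_POINTER(6);
        Datum      *entries = (Datum *) palloc(sizeof(Datum));

        /* Report one exact-match entry derived from the query value;
         * gincost_pattern counts it via counts->exactEntries++. */
        entries[0] = query;
        *nentries = 1;
        *pmatch = NULL;         /* no partial matches */
        *searchMode = GIN_SEARCH_MODE_DEFAULT;  /* zero entries in this mode
                                                 * would mean "no match" */
        PG_RETURN_POINTER(entries);
    }

In a real opclass this function would be registered as the GIN_EXTRACTQUERY_PROC support function in CREATE OPERATOR CLASS, which is how get_opfamily_proc locates it above.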
    8429             : /*
    8430             :  * Estimate the number of index terms that need to be searched for while
    8431             :  * testing the given GIN index clause, and increment the counts in *counts
    8432             :  * appropriately.  If the query is unsatisfiable, return false.
    8433             :  */
    8434             : static bool
    8435        2468 : gincost_opexpr(PlannerInfo *root,
    8436             :                IndexOptInfo *index,
    8437             :                int indexcol,
    8438             :                OpExpr *clause,
    8439             :                GinQualCounts *counts)
    8440             : {
    8441        2468 :     Oid         clause_op = clause->opno;
    8442        2468 :     Node       *operand = (Node *) lsecond(clause->args);
    8443             : 
    8444             :     /* aggressively reduce to a constant, and look through relabeling */
    8445        2468 :     operand = estimate_expression_value(root, operand);
    8446             : 
    8447        2468 :     if (IsA(operand, RelabelType))
    8448           0 :         operand = (Node *) ((RelabelType *) operand)->arg;
    8449             : 
    8450             :     /*
    8451             :      * We can't call the extractQuery method for an unknown operand, so
    8452             :      * unless the operand is a Const we can't do much; just assume there will be
    8453             :      * one ordinary search entry from the operand at runtime.
    8454             :      */
    8455        2468 :     if (!IsA(operand, Const))
    8456             :     {
    8457           0 :         counts->exactEntries++;
    8458           0 :         counts->searchEntries++;
    8459           0 :         return true;
    8460             :     }
    8461             : 
    8462             :     /* If Const is null, there can be no matches */
    8463        2468 :     if (((Const *) operand)->constisnull)
    8464           0 :         return false;
    8465             : 
    8466             :     /* Otherwise, apply extractQuery and get the actual term counts */
    8467        2468 :     return gincost_pattern(index, indexcol, clause_op,
    8468             :                            ((Const *) operand)->constvalue,
    8469             :                            counts);
    8470             : }
    8471             : 
    8472             : /*
    8473             :  * Estimate the number of index terms that need to be searched for while
    8474             :  * testing the given GIN index clause, and increment the counts in *counts
    8475             :  * appropriately.  If the query is unsatisfiable, return false.
    8476             :  *
    8477             :  * A ScalarArrayOpExpr will give rise to N separate indexscans at runtime,
    8478             :  * each of which involves one value from the RHS array, plus all the
    8479             :  * non-array quals (if any).  To model this, we average the counts across
    8480             :  * the RHS elements, and add the averages to the counts in *counts (which
    8481             :  * correspond to per-indexscan costs).  We also multiply counts->arrayScans
    8482             :  * by N, causing gincostestimate to scale up its estimates accordingly.
    8483             :  */
    8484             : static bool
    8485           6 : gincost_scalararrayopexpr(PlannerInfo *root,
    8486             :                           IndexOptInfo *index,
    8487             :                           int indexcol,
    8488             :                           ScalarArrayOpExpr *clause,
    8489             :                           double numIndexEntries,
    8490             :                           GinQualCounts *counts)
    8491             : {
    8492           6 :     Oid         clause_op = clause->opno;
    8493           6 :     Node       *rightop = (Node *) lsecond(clause->args);
    8494             :     ArrayType  *arrayval;
    8495             :     int16       elmlen;
    8496             :     bool        elmbyval;
    8497             :     char        elmalign;
    8498             :     int         numElems;
    8499             :     Datum      *elemValues;
    8500             :     bool       *elemNulls;
    8501             :     GinQualCounts arraycounts;
    8502           6 :     int         numPossible = 0;
    8503             :     int         i;
    8504             : 
    8505             :     Assert(clause->useOr);
    8506             : 
    8507             :     /* aggressively reduce to a constant, and look through relabeling */
    8508           6 :     rightop = estimate_expression_value(root, rightop);
    8509             : 
    8510           6 :     if (IsA(rightop, RelabelType))
    8511           0 :         rightop = (Node *) ((RelabelType *) rightop)->arg;
    8512             : 
    8513             :     /*
    8514             :      * We can't call the extractQuery method for an unknown operand, so
    8515             :      * unless the operand is a Const we can't do much; just assume there will be
    8516             :      * one ordinary search entry from each array entry at runtime, and fall
    8517             :      * back on a probably-bad estimate of the number of array entries.
    8518             :      */
    8519           6 :     if (!IsA(rightop, Const))
    8520             :     {
    8521           0 :         counts->exactEntries++;
    8522           0 :         counts->searchEntries++;
    8523           0 :         counts->arrayScans *= estimate_array_length(root, rightop);
    8524           0 :         return true;
    8525             :     }
    8526             : 
    8527             :     /* If Const is null, there can be no matches */
    8528           6 :     if (((Const *) rightop)->constisnull)
    8529           0 :         return false;
    8530             : 
    8531             :     /* Otherwise, extract the array elements and iterate over them */
    8532           6 :     arrayval = DatumGetArrayTypeP(((Const *) rightop)->constvalue);
    8533           6 :     get_typlenbyvalalign(ARR_ELEMTYPE(arrayval),
    8534             :                          &elmlen, &elmbyval, &elmalign);
    8535           6 :     deconstruct_array(arrayval,
    8536             :                       ARR_ELEMTYPE(arrayval),
    8537             :                       elmlen, elmbyval, elmalign,
    8538             :                       &elemValues, &elemNulls, &numElems);
    8539             : 
    8540           6 :     memset(&arraycounts, 0, sizeof(arraycounts));
    8541             : 
    8542          18 :     for (i = 0; i < numElems; i++)
    8543             :     {
    8544             :         GinQualCounts elemcounts;
    8545             : 
    8546             :         /* NULL can't match anything, so ignore, as the executor will */
    8547          12 :         if (elemNulls[i])
    8548           0 :             continue;
    8549             : 
    8550             :         /* Otherwise, apply extractQuery and get the actual term counts */
    8551          12 :         memset(&elemcounts, 0, sizeof(elemcounts));
    8552             : 
    8553          12 :         if (gincost_pattern(index, indexcol, clause_op, elemValues[i],
    8554             :                             &elemcounts))
    8555             :         {
    8556             :             /* We ignore array elements that are unsatisfiable patterns */
    8557          12 :             numPossible++;
    8558             : 
    8559          12 :             if (elemcounts.attHasFullScan[indexcol] &&
    8560           0 :                 !elemcounts.attHasNormalScan[indexcol])
    8561             :             {
    8562             :                 /*
    8563             :                  * Full index scan will be required.  We treat this as if
    8564             :                  * every key in the index had been listed in the query; is
    8565             :                  * that reasonable?
    8566             :                  */
    8567           0 :                 elemcounts.partialEntries = 0;
    8568           0 :                 elemcounts.exactEntries = numIndexEntries;
    8569           0 :                 elemcounts.searchEntries = numIndexEntries;
    8570             :             }
    8571          12 :             arraycounts.partialEntries += elemcounts.partialEntries;
    8572          12 :             arraycounts.exactEntries += elemcounts.exactEntries;
    8573          12 :             arraycounts.searchEntries += elemcounts.searchEntries;
    8574             :         }
    8575             :     }
    8576             : 
    8577           6 :     if (numPossible == 0)
    8578             :     {
    8579             :         /* No satisfiable patterns in the array */
    8580           0 :         return false;
    8581             :     }
    8582             : 
    8583             :     /*
    8584             :      * Now add the averages to the global counts.  This will give us an
    8585             :      * estimate of the average number of terms searched for in each indexscan,
    8586             :      * including contributions from both array and non-array quals.
    8587             :      */
    8588           6 :     counts->partialEntries += arraycounts.partialEntries / numPossible;
    8589           6 :     counts->exactEntries += arraycounts.exactEntries / numPossible;
    8590           6 :     counts->searchEntries += arraycounts.searchEntries / numPossible;
    8591             : 
    8592           6 :     counts->arrayScans *= numPossible;
    8593             : 
    8594           6 :     return true;
    8595             : }
    8596             : 
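A concrete (hypothetical) illustration of the averaging above: for a three-element array whose elements extract 2, 3, and 4 exact-match search entries respectively, arraycounts.searchEntries ends up as 9 and numPossible as 3, so 9 / 3 = 3 search entries (and 3 exact entries) are added to *counts, while counts->arrayScans is multiplied by 3 so that gincostestimate later charges the per-scan work once per array element.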
    8597             : /*
    8598             :  * GIN has search behavior completely different from other index types
    8599             :  */
    8600             : void
    8601        2264 : gincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
    8602             :                 Cost *indexStartupCost, Cost *indexTotalCost,
    8603             :                 Selectivity *indexSelectivity, double *indexCorrelation,
    8604             :                 double *indexPages)
    8605             : {
    8606        2264 :     IndexOptInfo *index = path->indexinfo;
    8607        2264 :     List       *indexQuals = get_quals_from_indexclauses(path->indexclauses);
    8608             :     List       *selectivityQuals;
    8609        2264 :     double      numPages = index->pages,
    8610        2264 :                 numTuples = index->tuples;
    8611             :     double      numEntryPages,
    8612             :                 numDataPages,
    8613             :                 numPendingPages,
    8614             :                 numEntries;
    8615             :     GinQualCounts counts;
    8616             :     bool        matchPossible;
    8617             :     bool        fullIndexScan;
    8618             :     double      partialScale;
    8619             :     double      entryPagesFetched,
    8620             :                 dataPagesFetched,
    8621             :                 dataPagesFetchedBySel;
    8622             :     double      qual_op_cost,
    8623             :                 qual_arg_cost,
    8624             :                 spc_random_page_cost,
    8625             :                 outer_scans;
    8626             :     Cost        descentCost;
    8627             :     Relation    indexRel;
    8628             :     GinStatsData ginStats;
    8629             :     ListCell   *lc;
    8630             :     int         i;
    8631             : 
    8632             :     /*
    8633             :      * Obtain statistical information from the meta page, if possible.  Else
    8634             :      * set ginStats to zeroes, and we'll cope below.
    8635             :      */
    8636        2264 :     if (!index->hypothetical)
    8637             :     {
    8638             :         /* Lock should have already been obtained in plancat.c */
    8639        2264 :         indexRel = index_open(index->indexoid, NoLock);
    8640        2264 :         ginGetStats(indexRel, &ginStats);
    8641        2264 :         index_close(indexRel, NoLock);
    8642             :     }
    8643             :     else
    8644             :     {
    8645           0 :         memset(&ginStats, 0, sizeof(ginStats));
    8646             :     }
    8647             : 
    8648             :     /*
    8649             :      * Assuming we got valid (nonzero) stats at all, nPendingPages can be
    8650             :      * trusted, but the other fields are data as of the last VACUUM.  We can
    8651             :      * scale them up to account for growth since then, but that method only
    8652             :      * goes so far; in the worst case, the stats might be for a completely
    8653             :      * empty index, and scaling them will produce pretty bogus numbers.
    8654             :      * Somewhat arbitrarily, set the cutoff for doing scaling at 4X growth; if
    8655             :      * it's grown more than that, fall back to estimating things only from the
    8656             :      * assumed-accurate index size.  But we'll trust nPendingPages in any case
    8657             :      * so long as it's not clearly insane, ie, more than the index size.
    8658             :      */
    8659        2264 :     if (ginStats.nPendingPages < numPages)
    8660        2264 :         numPendingPages = ginStats.nPendingPages;
    8661             :     else
    8662           0 :         numPendingPages = 0;
    8663             : 
    8664        2264 :     if (numPages > 0 && ginStats.nTotalPages <= numPages &&
    8665        2264 :         ginStats.nTotalPages > numPages / 4 &&
    8666        2212 :         ginStats.nEntryPages > 0 && ginStats.nEntries > 0)
    8667        1948 :     {
    8668             :         /*
    8669             :          * OK, the stats seem close enough to sane to be trusted.  But we
    8670             :          * still need to scale them by the ratio numPages / nTotalPages to
    8671             :          * account for growth since the last VACUUM.
    8672             :          */
    8673        1948 :         double      scale = numPages / ginStats.nTotalPages;
    8674             : 
    8675        1948 :         numEntryPages = ceil(ginStats.nEntryPages * scale);
    8676        1948 :         numDataPages = ceil(ginStats.nDataPages * scale);
    8677        1948 :         numEntries = ceil(ginStats.nEntries * scale);
    8678             :         /* ensure we didn't round up too much */
    8679        1948 :         numEntryPages = Min(numEntryPages, numPages - numPendingPages);
    8680        1948 :         numDataPages = Min(numDataPages,
    8681             :                            numPages - numPendingPages - numEntryPages);
    8682             :     }
    8683             :     else
    8684             :     {
    8685             :         /*
    8686             :          * We might get here because it's a hypothetical index, or an index
    8687             :          * created pre-9.1 and never vacuumed since upgrading (in which case
    8688             :          * its stats would read as zeroes), or just because it's grown too
    8689             :          * much since the last VACUUM for us to put our faith in scaling.
    8690             :          *
    8691             :          * Invent some plausible internal statistics based on the index page
    8692             :          * count (and clamp that to at least 10 pages, just in case).  We
    8693             :          * estimate that 90% of the index is entry pages, and the rest is data
    8694             :          * pages.  Estimate 100 entries per entry page; this is rather bogus
    8695             :          * since it'll depend on the size of the keys, but it's more robust
    8696             :          * than trying to predict the number of entries per heap tuple.
    8697             :          */
    8698         316 :         numPages = Max(numPages, 10);
    8699         316 :         numEntryPages = floor((numPages - numPendingPages) * 0.90);
    8700         316 :         numDataPages = numPages - numPendingPages - numEntryPages;
    8701         316 :         numEntries = floor(numEntryPages * 100);
    8702             :     }
    8703             : 
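A worked example of the scaling above, with invented numbers: suppose the last VACUUM recorded nTotalPages = 80, nEntryPages = 60, nDataPages = 15, and nEntries = 6000, and the index has since grown to numPages = 100 with numPendingPages = 2. The stats pass the 4X-growth test (80 > 100 / 4), so scale = 100 / 80 = 1.25, giving numEntryPages = 75, numDataPages = ceil(15 * 1.25) = 19, and numEntries = 7500; the Min() clamps leave these untouched since 75 <= 98 and 19 <= 100 - 2 - 75 = 23.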
    8704             :     /* In an empty index, numEntries could be zero.  Avoid divide-by-zero */
    8705        2264 :     if (numEntries < 1)
    8706           0 :         numEntries = 1;
    8707             : 
    8708             :     /*
    8709             :      * If the index is partial, AND the index predicate with the index-bound
    8710             :      * quals to produce a more accurate idea of the number of rows covered by
    8711             :      * the bound conditions.
    8712             :      */
    8713        2264 :     selectivityQuals = add_predicate_to_index_quals(index, indexQuals);
    8714             : 
    8715             :     /* Estimate the fraction of main-table tuples that will be visited */
    8716        4528 :     *indexSelectivity = clauselist_selectivity(root, selectivityQuals,
    8717        2264 :                                                index->rel->relid,
    8718             :                                                JOIN_INNER,
    8719             :                                                NULL);
    8720             : 
    8721             :     /* fetch estimated page cost for tablespace containing index */
    8722        2264 :     get_tablespace_page_costs(index->reltablespace,
    8723             :                               &spc_random_page_cost,
    8724             :                               NULL);
    8725             : 
    8726             :     /*
    8727             :      * Generic assumption about index correlation: there isn't any.
    8728             :      */
    8729        2264 :     *indexCorrelation = 0.0;
    8730             : 
    8731             :     /*
    8732             :      * Examine quals to estimate number of search entries & partial matches
    8733             :      */
    8734        2264 :     memset(&counts, 0, sizeof(counts));
    8735        2264 :     counts.arrayScans = 1;
    8736        2264 :     matchPossible = true;
    8737             : 
    8738        4738 :     foreach(lc, path->indexclauses)
    8739             :     {
    8740        2474 :         IndexClause *iclause = lfirst_node(IndexClause, lc);
    8741             :         ListCell   *lc2;
    8742             : 
    8743        4936 :         foreach(lc2, iclause->indexquals)
    8744             :         {
    8745        2474 :             RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc2);
    8746        2474 :             Expr       *clause = rinfo->clause;
    8747             : 
    8748        2474 :             if (IsA(clause, OpExpr))
    8749             :             {
    8750        2468 :                 matchPossible = gincost_opexpr(root,
    8751             :                                                index,
    8752        2468 :                                                iclause->indexcol,
    8753             :                                                (OpExpr *) clause,
    8754             :                                                &counts);
    8755        2468 :                 if (!matchPossible)
    8756          12 :                     break;
    8757             :             }
    8758           6 :             else if (IsA(clause, ScalarArrayOpExpr))
    8759             :             {
    8760           6 :                 matchPossible = gincost_scalararrayopexpr(root,
    8761             :                                                           index,
    8762           6 :                                                           iclause->indexcol,
    8763             :                                                           (ScalarArrayOpExpr *) clause,
    8764             :                                                           numEntries,
    8765             :                                                           &counts);
    8766           6 :                 if (!matchPossible)
    8767           0 :                     break;
    8768             :             }
    8769             :             else
    8770             :             {
    8771             :                 /* shouldn't be anything else for a GIN index */
    8772           0 :                 elog(ERROR, "unsupported GIN indexqual type: %d",
    8773             :                      (int) nodeTag(clause));
    8774             :             }
    8775             :         }
    8776             :     }
    8777             : 
    8778             :     /* Fall out if there were any provably-unsatisfiable quals */
    8779        2264 :     if (!matchPossible)
    8780             :     {
    8781          12 :         *indexStartupCost = 0;
    8782          12 :         *indexTotalCost = 0;
    8783          12 :         *indexSelectivity = 0;
    8784          12 :         return;
    8785             :     }
    8786             : 
    8787             :     /*
    8788             :      * If an attribute has a full scan and at the same time doesn't have a normal
    8789             :      * scan, then we'll have to scan all non-null entries of that attribute.
    8790             :      * Currently, we don't have per-attribute statistics for GIN.  Thus, we
    8791             :      * must assume the whole GIN index has to be scanned in this case.
    8792             :      */
    8793        2252 :     fullIndexScan = false;
    8794        4394 :     for (i = 0; i < index->nkeycolumns; i++)
    8795             :     {
    8796        2480 :         if (counts.attHasFullScan[i] && !counts.attHasNormalScan[i])
    8797             :         {
    8798         338 :             fullIndexScan = true;
    8799         338 :             break;
    8800             :         }
    8801             :     }
    8802             : 
    8803        2252 :     if (fullIndexScan || indexQuals == NIL)
    8804             :     {
    8805             :         /*
    8806             :          * Full index scan will be required.  We treat this as if every key in
    8807             :          * the index had been listed in the query; is that reasonable?
    8808             :          */
    8809         338 :         counts.partialEntries = 0;
    8810         338 :         counts.exactEntries = numEntries;
    8811         338 :         counts.searchEntries = numEntries;
    8812             :     }
    8813             : 
    8814             :     /* Will we have more than one iteration of a nestloop scan? */
    8815        2252 :     outer_scans = loop_count;
    8816             : 
    8817             :     /*
    8818             :      * Compute the cost to begin the scan; first of all, pay attention to the
    8819             :      * pending list.
    8820             :      */
    8821        2252 :     entryPagesFetched = numPendingPages;
    8822             : 
    8823             :     /*
    8824             :      * Estimate the number of entry pages read.  We need to perform
    8825             :      * counts.searchEntries searches.  A power function is used, as it
    8826             :      * should be, though the number of tuples on leaf pages is usually much
    8827             :      * greater.  Here we include all searches of the entry tree, including
    8828             :      * the search for the first entry in the partial match algorithm.
    8829             :      */
    8830        2252 :     entryPagesFetched += ceil(counts.searchEntries * rint(pow(numEntryPages, 0.15)));
    8831             : 
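For instance (hypothetical numbers): with numEntryPages = 1000, pow(1000, 0.15) is about 2.82, which rint() rounds to 3, so four search entries add ceil(4 * 3) = 12 entry pages to the estimate.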
    8832             :     /*
    8833             :      * Add an estimate of the entry pages read by the partial match
    8834             :      * algorithm.  It's a scan over leaf pages in the entry tree.  We don't
    8835             :      * have any useful stats here, so estimate it as a proportion.  Because
    8836             :      * counts.partialEntries is really pretty bogus (see the code above), it
    8837             :      * may exceed numEntries; clamp the proportion to ensure sanity.
    8838             :      */
    8839        2252 :     partialScale = counts.partialEntries / numEntries;
    8840        2252 :     partialScale = Min(partialScale, 1.0);
    8841             : 
    8842        2252 :     entryPagesFetched += ceil(numEntryPages * partialScale);
    8843             : 
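Continuing with invented numbers: a single partial-match entry contributed partialEntries = 100 in gincost_pattern; with numEntries = 10000 and numEntryPages = 1000 that gives partialScale = 0.01, so ceil(1000 * 0.01) = 10 entry pages are added here, and the same 1% of the data pages is charged as startup I/O just below.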
    8844             :     /*
    8845             :      * The partial match algorithm reads all data pages before doing the
    8846             :      * actual scan, so it's a startup cost.  Again, we don't have any useful
    8847             :      * stats here, so estimate it as a proportion.
    8848             :      */
    8849        2252 :     dataPagesFetched = ceil(numDataPages * partialScale);
    8850             : 
    8851        2252 :     *indexStartupCost = 0;
    8852        2252 :     *indexTotalCost = 0;
    8853             : 
    8854             :     /*
    8855             :      * Add a CPU-cost component to represent the costs of initial entry btree
    8856             :      * descent.  We don't charge any I/O cost for touching upper btree levels,
    8857             :      * since they tend to stay in cache, but we still have to do about log2(N)
    8858             :      * comparisons to descend a btree of N leaf tuples.  We charge one
    8859             :      * cpu_operator_cost per comparison.
    8860             :      *
    8861             :      * If there are ScalarArrayOpExprs, charge this once per SA scan.  The
    8862             :      * ones after the first one are not startup cost so far as the overall
    8863             :      * plan is concerned, so add them only to "total" cost.
    8864             :      */
    8865        2252 :     if (numEntries > 1)          /* avoid computing log(0) */
    8866             :     {
    8867        2252 :         descentCost = ceil(log(numEntries) / log(2.0)) * cpu_operator_cost;
    8868        2252 :         *indexStartupCost += descentCost * counts.searchEntries;
    8869        2252 :         *indexTotalCost += counts.arrayScans * descentCost * counts.searchEntries;
    8870             :     }
    8871             : 
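As a rough illustration with made-up figures: for numEntries = 10000 and the default cpu_operator_cost of 0.0025, descentCost = ceil(log2(10000)) * 0.0025 = 14 * 0.0025 = 0.035; with counts.searchEntries = 2 and counts.arrayScans = 3, this adds 0.07 to the startup cost and 0.21 to the total cost.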
    8872             :     /*
    8873             :      * Add a cpu cost per entry-page fetched. This is not amortized over a
    8874             :      * loop.
    8875             :      */
    8876        2252 :     *indexStartupCost += entryPagesFetched * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8877        2252 :     *indexTotalCost += entryPagesFetched * counts.arrayScans * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8878             : 
    8879             :     /*
    8880             :      * Add a cpu cost per data-page fetched. This is also not amortized over a
    8881             :      * loop. Since those are the data pages from the partial match algorithm,
    8882             :      * charge them as startup cost.
    8883             :      */
    8884        2252 :     *indexStartupCost += DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost * dataPagesFetched;
    8885             : 
    8886             :     /*
    8887             :      * Since we add the startup cost to the total cost later on, remove the
    8888             :      * initial arrayscan from the total.
    8889             :      */
    8890        2252 :     *indexTotalCost += dataPagesFetched * (counts.arrayScans - 1) * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8891             : 
    8892             :     /*
    8893             :      * Calculate cache effects if more than one scan due to nestloops or array
    8894             :      * quals.  The result is pro-rated per nestloop scan, but the array qual
    8895             :      * factor shouldn't be pro-rated (compare genericcostestimate).
    8896             :      */
    8897        2252 :     if (outer_scans > 1 || counts.arrayScans > 1)
    8898             :     {
    8899           6 :         entryPagesFetched *= outer_scans * counts.arrayScans;
    8900           6 :         entryPagesFetched = index_pages_fetched(entryPagesFetched,
    8901             :                                                 (BlockNumber) numEntryPages,
    8902             :                                                 numEntryPages, root);
    8903           6 :         entryPagesFetched /= outer_scans;
    8904           6 :         dataPagesFetched *= outer_scans * counts.arrayScans;
    8905           6 :         dataPagesFetched = index_pages_fetched(dataPagesFetched,
    8906             :                                                (BlockNumber) numDataPages,
    8907             :                                                numDataPages, root);
    8908           6 :         dataPagesFetched /= outer_scans;
    8909             :     }
    8910             : 
    8911             :     /*
    8912             :      * Here we use random page cost because logically-close pages could be far
    8913             :      * apart on disk.
    8914             :      */
    8915        2252 :     *indexStartupCost += (entryPagesFetched + dataPagesFetched) * spc_random_page_cost;
    8916             : 
    8917             :     /*
    8918             :      * Now compute the number of data pages fetched during the scan.
    8919             :      *
    8920             :      * We assume every entry to have the same number of items, and that there
    8921             :      * is no overlap between them. (XXX: tsvector and array opclasses collect
    8922             :      * statistics on the frequency of individual keys; it would be nice to use
    8923             :      * those here.)
    8924             :      */
    8925        2252 :     dataPagesFetched = ceil(numDataPages * counts.exactEntries / numEntries);
    8926             : 
    8927             :     /*
    8928             :      * If there is a lot of overlap among the entries, in particular if one of
    8929             :      * the entries is very frequent, the above calculation can grossly
    8930             :      * under-estimate.  As a simple cross-check, calculate a lower bound based
    8931             :      * on the overall selectivity of the quals.  At a minimum, we must read
    8932             :      * one item pointer for each matching entry.
    8933             :      *
    8934             :      * The width of each item pointer varies, based on the level of
    8935             :      * compression.  We don't have statistics on that, but an average of
    8936             :      * around 3 bytes per item is fairly typical.
    8937             :      */
    8938        2252 :     dataPagesFetchedBySel = ceil(*indexSelectivity *
    8939        2252 :                                  (numTuples / (BLCKSZ / 3)));
    8940        2252 :     if (dataPagesFetchedBySel > dataPagesFetched)
    8941        1866 :         dataPagesFetched = dataPagesFetchedBySel;
    8942             : 
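Illustrative numbers for the cross-check above: with the default BLCKSZ of 8192, a page holds roughly 8192 / 3 = 2730 item pointers, so numTuples = 1,000,000 and *indexSelectivity = 0.01 yield ceil(0.01 * 1000000 / 2730) = 4 pages as the selectivity-based floor; if the entry-based estimate came out smaller, it is raised to 4 here.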
    8943        2252 :     /* Add one page cpu-cost per search entry to the startup cost */
    8944        2252 :     *indexStartupCost += DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost * counts.searchEntries;
    8945             : 
    8946             :     /*
    8947             :      * Add once again a CPU-cost for those data pages, before amortizing for
    8948             :      * cache.
    8949             :      */
    8950        2252 :     *indexTotalCost += dataPagesFetched * counts.arrayScans * DEFAULT_PAGE_CPU_MULTIPLIER * cpu_operator_cost;
    8951             : 
    8952             :     /* Account for cache effects, the same as above */
    8953        2252 :     if (outer_scans > 1 || counts.arrayScans > 1)
    8954             :     {
    8955           6 :         dataPagesFetched *= outer_scans * counts.arrayScans;
    8956           6 :         dataPagesFetched = index_pages_fetched(dataPagesFetched,
    8957             :                                                (BlockNumber) numDataPages,
    8958             :                                                numDataPages, root);
    8959           6 :         dataPagesFetched /= outer_scans;
    8960             :     }
    8961             : 
    8962             :     /* And apply random_page_cost as the cost per page */
    8963        2252 :     *indexTotalCost += *indexStartupCost +
    8964        2252 :         dataPagesFetched * spc_random_page_cost;
    8965             : 
    8966             :     /*
    8967             :      * Add on index qual eval costs, much as in genericcostestimate. We charge
    8968             :      * cpu but we can disregard indexorderbys, since GIN doesn't support
    8969             :      * those.
    8970             :      */
    8971        2252 :     qual_arg_cost = index_other_operands_eval_cost(root, indexQuals);
    8972        2252 :     qual_op_cost = cpu_operator_cost * list_length(indexQuals);
    8973             : 
    8974        2252 :     *indexStartupCost += qual_arg_cost;
    8975        2252 :     *indexTotalCost += qual_arg_cost;
    8976             : 
    8977             :     /*
    8978             :      * Add a cpu cost per search entry, corresponding to the actual visited
    8979             :      * entries.
    8980             :      */
    8981        2252 :     *indexTotalCost += (counts.searchEntries * counts.arrayScans) * (qual_op_cost);
    8982             :     /* Now add a cpu cost per tuple in the posting lists / trees */
    8983        2252 :     *indexTotalCost += (numTuples * *indexSelectivity) * (cpu_index_tuple_cost);
    8984        2252 :     *indexPages = dataPagesFetched;
    8985             : }
    8986             : 
    8987             : /*
    8988             :  * BRIN has search behavior completely different from other index types
    8989             :  */
    8990             : void
    8991       10730 : brincostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
    8992             :                  Cost *indexStartupCost, Cost *indexTotalCost,
    8993             :                  Selectivity *indexSelectivity, double *indexCorrelation,
    8994             :                  double *indexPages)
    8995             : {
    8996       10730 :     IndexOptInfo *index = path->indexinfo;
    8997       10730 :     List       *indexQuals = get_quals_from_indexclauses(path->indexclauses);
    8998       10730 :     double      numPages = index->pages;
    8999       10730 :     RelOptInfo *baserel = index->rel;
    9000       10730 :     RangeTblEntry *rte = planner_rt_fetch(baserel->relid, root);
    9001             :     Cost        spc_seq_page_cost;
    9002             :     Cost        spc_random_page_cost;
    9003             :     double      qual_arg_cost;
    9004             :     double      qualSelectivity;
    9005             :     BrinStatsData statsData;
    9006             :     double      indexRanges;
    9007             :     double      minimalRanges;
    9008             :     double      estimatedRanges;
    9009             :     double      selec;
    9010             :     Relation    indexRel;
    9011             :     ListCell   *l;
    9012             :     VariableStatData vardata;
    9013             : 
    9014             :     Assert(rte->rtekind == RTE_RELATION);
    9015             : 
    9016             :     /* fetch estimated page cost for the tablespace containing the index */
    9017       10730 :     get_tablespace_page_costs(index->reltablespace,
    9018             :                               &spc_random_page_cost,
    9019             :                               &spc_seq_page_cost);
    9020             : 
    9021             :     /*
    9022             :      * Obtain some data from the index itself, if possible.  Otherwise invent
    9023             :      * some plausible internal statistics based on the relation page count.
    9024             :      */
    9025       10730 :     if (!index->hypothetical)
    9026             :     {
    9027             :         /*
    9028             :          * A lock should have already been obtained on the index in plancat.c.
    9029             :          */
    9030       10730 :         indexRel = index_open(index->indexoid, NoLock);
    9031       10730 :         brinGetStats(indexRel, &statsData);
    9032       10730 :         index_close(indexRel, NoLock);
    9033             : 
    9034             :         /* work out the actual number of ranges in the index */
    9035       10730 :         indexRanges = Max(ceil((double) baserel->pages /
    9036             :                                statsData.pagesPerRange), 1.0);
    9037             :     }
    9038             :     else
    9039             :     {
    9040             :         /*
    9041             :          * Assume default number of pages per range, and estimate the number
    9042             :          * of ranges based on that.
    9043             :          */
    9044           0 :         indexRanges = Max(ceil((double) baserel->pages /
    9045             :                                BRIN_DEFAULT_PAGES_PER_RANGE), 1.0);
    9046             : 
    9047           0 :         statsData.pagesPerRange = BRIN_DEFAULT_PAGES_PER_RANGE;
    9048           0 :         statsData.revmapNumPages = (indexRanges / REVMAP_PAGE_MAXITEMS) + 1;
    9049             :     }
    9050             : 
    9051             :     /*
    9052             :      * Compute index correlation
    9053             :      *
    9054             :      * Because we can use all index quals equally when scanning, we can use
    9055             :      * the largest correlation (in absolute value) among columns used by the
    9056             :      * query.  Start at zero, the worst possible case.  If we cannot find any
    9057             :      * correlation statistics, we will keep it as 0.
    9058             :      */
    9059       10730 :     *indexCorrelation = 0;
    9060             : 
    9061       21462 :     foreach(l, path->indexclauses)
    9062             :     {
    9063       10732 :         IndexClause *iclause = lfirst_node(IndexClause, l);
    9064       10732 :         AttrNumber  attnum = index->indexkeys[iclause->indexcol];
    9065             : 
    9066             :         /* attempt to lookup stats in relation for this index column */
    9067       10732 :         if (attnum != 0)
    9068             :         {
    9069             :             /* Simple variable -- look to stats for the underlying table */
    9070       10732 :             if (get_relation_stats_hook &&
    9071           0 :                 (*get_relation_stats_hook) (root, rte, attnum, &vardata))
    9072             :             {
    9073             :                 /*
    9074             :                  * The hook took control of acquiring a stats tuple.  If it
    9075             :                  * did supply a tuple, it'd better have supplied a freefunc.
    9076             :                  */
    9077           0 :                 if (HeapTupleIsValid(vardata.statsTuple) && !vardata.freefunc)
    9078           0 :                     elog(ERROR,
    9079             :                          "no function provided to release variable stats with");
    9080             :             }
    9081             :             else
    9082             :             {
    9083       10732 :                 vardata.statsTuple =
    9084       10732 :                     SearchSysCache3(STATRELATTINH,
    9085             :                                     ObjectIdGetDatum(rte->relid),
    9086             :                                     Int16GetDatum(attnum),
    9087             :                                     BoolGetDatum(false));
    9088       10732 :                 vardata.freefunc = ReleaseSysCache;
    9089             :             }
    9090             :         }
    9091             :         else
    9092             :         {
    9093             :             /*
    9094             :              * Looks like we've found an expression column in the index. Let's
    9095             :              * see if there are any stats for it.
    9096             :              */
    9097             : 
    9098             :             /* get the attnum from the 0-based index. */
    9099           0 :             attnum = iclause->indexcol + 1;
    9100             : 
    9101           0 :             if (get_index_stats_hook &&
    9102           0 :                 (*get_index_stats_hook) (root, index->indexoid, attnum, &vardata))
    9103             :             {
    9104             :                 /*
    9105             :                  * The hook took control of acquiring a stats tuple.  If it
    9106             :                  * did supply a tuple, it'd better have supplied a freefunc.
    9107             :                  */
    9108           0 :                 if (HeapTupleIsValid(vardata.statsTuple) &&
    9109           0 :                     !vardata.freefunc)
    9110           0 :                     elog(ERROR, "no function provided to release variable stats with");
    9111             :             }
    9112             :             else
    9113             :             {
    9114           0 :                 vardata.statsTuple = SearchSysCache3(STATRELATTINH,
    9115             :                                                      ObjectIdGetDatum(index->indexoid),
    9116             :                                                      Int16GetDatum(attnum),
    9117             :                                                      BoolGetDatum(false));
    9118           0 :                 vardata.freefunc = ReleaseSysCache;
    9119             :             }
    9120             :         }
    9121             : 
    9122       10732 :         if (HeapTupleIsValid(vardata.statsTuple))
    9123             :         {
    9124             :             AttStatsSlot sslot;
    9125             : 
    9126          36 :             if (get_attstatsslot(&sslot, vardata.statsTuple,
    9127             :                                  STATISTIC_KIND_CORRELATION, InvalidOid,
    9128             :                                  ATTSTATSSLOT_NUMBERS))
    9129             :             {
    9130          36 :                 double      varCorrelation = 0.0;
    9131             : 
    9132          36 :                 if (sslot.nnumbers > 0)
    9133          36 :                     varCorrelation = fabs(sslot.numbers[0]);
    9134             : 
    9135          36 :                 if (varCorrelation > *indexCorrelation)
    9136          36 :                     *indexCorrelation = varCorrelation;
    9137             : 
    9138          36 :                 free_attstatsslot(&sslot);
    9139             :             }
    9140             :         }
    9141             : 
    9142       10732 :         ReleaseVariableStats(vardata);
    9143             :     }
    9144             : 
    9145       10730 :     qualSelectivity = clauselist_selectivity(root, indexQuals,
    9146       10730 :                                              baserel->relid,
    9147             :                                              JOIN_INNER, NULL);
    9148             : 
    9149             :     /*
    9150             :      * Now calculate the minimum possible number of ranges we could match if
    9151             :      * all of the rows were in perfect order in the table's heap.
    9152             :      */
    9153       10730 :     minimalRanges = ceil(indexRanges * qualSelectivity);
    9154             : 
    9155             :     /*
    9156             :      * Now estimate the number of ranges that we'll touch by using the
    9157             :      * indexCorrelation from the stats. Careful not to divide by zero (note
    9158             :      * we're using the absolute value of the correlation).
    9159             :      */
    9160       10730 :     if (*indexCorrelation < 1.0e-10)
    9161       10694 :         estimatedRanges = indexRanges;
    9162             :     else
    9163          36 :         estimatedRanges = Min(minimalRanges / *indexCorrelation, indexRanges);
    9164             : 
    9165             :     /* we expect to visit this portion of the table */
    9166       10730 :     selec = estimatedRanges / indexRanges;
    9167             : 
    9168       10730 :     CLAMP_PROBABILITY(selec);
    9169             : 
    9170       10730 :     *indexSelectivity = selec;
    9171             : 
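For example, with the invented values indexRanges = 1000 and qualSelectivity = 0.01, minimalRanges = 10; a column correlation of 0.5 then gives estimatedRanges = Min(10 / 0.5, 1000) = 20 and selec = 20 / 1000 = 0.02, whereas a correlation below the 1.0e-10 threshold would force the pessimistic assumption that all 1000 ranges are visited (selec = 1.0).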
    9172             :     /*
    9173             :      * Compute the index qual costs, much as in genericcostestimate, to add to
    9174             :      * the index costs.  We can disregard indexorderbys, since BRIN doesn't
    9175             :      * support those.
    9176             :      */
    9177       10730 :     qual_arg_cost = index_other_operands_eval_cost(root, indexQuals);
    9178             : 
    9179             :     /*
    9180             :      * Compute the startup cost as the cost to read the whole revmap
    9181             :      * sequentially, including the cost to execute the index quals.
    9182             :      */
    9183       10730 :     *indexStartupCost =
    9184       10730 :         spc_seq_page_cost * statsData.revmapNumPages * loop_count;
    9185       10730 :     *indexStartupCost += qual_arg_cost;
    9186             : 
    9187             :     /*
    9188             :      * When reading a BRIN index there might be a bit of back and forth over
    9189             :      * regular pages, as the revmap might point to them out of sequential
    9190             :      * order; calculate the total cost as reading the whole index in random order.
    9191             :      */
    9192       10730 :     *indexTotalCost = *indexStartupCost +
    9193       10730 :         spc_random_page_cost * (numPages - statsData.revmapNumPages) * loop_count;
    9194             : 
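Continuing the example with invented values: for numPages = 100, revmapNumPages = 10, loop_count = 1, qual_arg_cost = 0, and the default page costs (spc_seq_page_cost = 1.0, spc_random_page_cost = 4.0), the startup cost is 1.0 * 10 = 10 and the total so far is 10 + 4.0 * (100 - 10) = 370, before the per-range bitmap charge below is added.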
    9195             :     /*
    9196             :      * Charge a small amount per range tuple that we expect to match.  This
    9197             :      * is meant to reflect the costs of manipulating the bitmap. The BRIN scan
    9198             :      * will set a bit for each page in the range when we find a matching
    9199             :      * range, so we must multiply the charge by the number of pages in the
    9200             :      * range.
    9201             :      */
    9202       10730 :     *indexTotalCost += 0.1 * cpu_operator_cost * estimatedRanges *
    9203       10730 :         statsData.pagesPerRange;
    9204             : 
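With estimatedRanges = 20 from the earlier example, the default cpu_operator_cost of 0.0025, and the default pagesPerRange of 128, this final term adds 0.1 * 0.0025 * 20 * 128 = 0.64 to the total cost.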
    9205       10730 :     *indexPages = index->pages;
    9206       10730 : }

Generated by: LCOV version 1.16