LCOV - code coverage report
Current view: top level - src/backend/executor - nodeHashjoin.c (source / functions)
Test: PostgreSQL 12beta2            Lines:     416 / 451 hit  (92.2 %)
Date: 2019-06-19 16:07:09           Functions:  18 /  18 hit (100.0 %)

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * nodeHashjoin.c
       4             :  *    Routines to handle hash join nodes
       5             :  *
       6             :  * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
       7             :  * Portions Copyright (c) 1994, Regents of the University of California
       8             :  *
       9             :  *
      10             :  * IDENTIFICATION
      11             :  *    src/backend/executor/nodeHashjoin.c
      12             :  *
      13             :  * PARALLELISM
      14             :  *
      15             :  * Hash joins can participate in parallel query execution in several ways.  A
      16             :  * parallel-oblivious hash join is one where the node is unaware that it is
      17             :  * part of a parallel plan.  In this case, a copy of the inner plan is used to
      18             :  * build a copy of the hash table in every backend, and the outer plan could
      19             :  * either be built from a partial or complete path, so that the results of the
      20             :  * hash join are correspondingly either partial or complete.  A parallel-aware
      21             :  * hash join is one that behaves differently, coordinating work between
      22             :  * backends, and appears as Parallel Hash Join in EXPLAIN output.  A Parallel
      23             :  * Hash Join always appears with a Parallel Hash node.
      24             :  *
      25             :  * Parallel-aware hash joins use the same per-backend state machine to track
      26             :  * progress through the hash join algorithm as parallel-oblivious hash joins.
      27             :  * In a parallel-aware hash join, there is also a shared state machine that
      28             :  * co-operating backends use to synchronize their local state machines and
      29             :  * program counters.  The shared state machine is managed with a Barrier IPC
      30             :  * primitive.  When all attached participants arrive at a barrier, the phase
      31             :  * advances and all waiting participants are released.
      32             :  *
      33             :  * When a participant begins working on a parallel hash join, it must first
      34             :  * figure out how much progress has already been made, because participants
      35             :  * don't wait for each other to begin.  For this reason there are switch
      36             :  * statements at key points in the code where we have to synchronize our local
      37             :  * state machine with the phase, and then jump to the correct part of the
      38             :  * algorithm so that we can get started.
      39             :  *
      40             :  * One barrier called build_barrier is used to coordinate the hashing phases.
      41             :  * The phase is represented by an integer which begins at zero and increments
      42             :  * one by one, but in the code it is referred to by symbolic names as follows:
      43             :  *
      44             :  *   PHJ_BUILD_ELECTING              -- initial state
      45             :  *   PHJ_BUILD_ALLOCATING            -- one sets up the batches and table 0
      46             :  *   PHJ_BUILD_HASHING_INNER         -- all hash the inner rel
      47             :  *   PHJ_BUILD_HASHING_OUTER         -- (multi-batch only) all hash the outer
      48             :  *   PHJ_BUILD_DONE                  -- building done, probing can begin
      49             :  *
      50             :  * While in the phase PHJ_BUILD_HASHING_INNER a separate pair of barriers may
      51             :  * be used repeatedly as required to coordinate expansions in the number of
      52             :  * batches or buckets.  Their phases are as follows:
      53             :  *
      54             :  *   PHJ_GROW_BATCHES_ELECTING       -- initial state
      55             :  *   PHJ_GROW_BATCHES_ALLOCATING     -- one allocates new batches
      56             :  *   PHJ_GROW_BATCHES_REPARTITIONING -- all repartition
      57             :  *   PHJ_GROW_BATCHES_FINISHING      -- one cleans up, detects skew
      58             :  *
      59             :  *   PHJ_GROW_BUCKETS_ELECTING       -- initial state
      60             :  *   PHJ_GROW_BUCKETS_ALLOCATING     -- one allocates new buckets
      61             :  *   PHJ_GROW_BUCKETS_REINSERTING    -- all insert tuples
      62             :  *
      63             :  * If the planner got the number of batches and buckets right, those won't be
      64             :  * necessary, but on the other hand we might finish up needing to expand the
      65             :  * buckets or batches multiple times while hashing the inner relation to stay
      66             :  * within our memory budget and load factor target.  For that reason it's a
      67             :  * separate pair of barriers using circular phases.
      68             :  *
      69             :  * The PHJ_BUILD_HASHING_OUTER phase is required only for multi-batch joins,
      70             :  * because we need to divide the outer relation into batches up front in order
      71             :  * to be able to process batches entirely independently.  In contrast, the
      72             :  * parallel-oblivious algorithm simply throws tuples 'forward' to 'later'
      73             :  * batches whenever it encounters them while scanning and probing, which it
      74             :  * can do because it processes batches in serial order.
      75             :  *
      76             :  * Once PHJ_BUILD_DONE is reached, backends then split up and process
      77             :  * different batches, or gang up and work together on probing batches if there
      78             :  * aren't enough to go around.  For each batch there is a separate barrier
      79             :  * with the following phases:
      80             :  *
      81             :  *  PHJ_BATCH_ELECTING       -- initial state
      82             :  *  PHJ_BATCH_ALLOCATING     -- one allocates buckets
      83             :  *  PHJ_BATCH_LOADING        -- all load the hash table from disk
      84             :  *  PHJ_BATCH_PROBING        -- all probe
      85             :  *  PHJ_BATCH_DONE           -- end
      86             :  *
      87             :  * Batch 0 is a special case, because it starts out in phase
      88             :  * PHJ_BATCH_PROBING; populating batch 0's hash table is done during
      89             :  * PHJ_BUILD_HASHING_INNER so we can skip loading.
      90             :  *
      91             :  * Initially we try to plan for a single-batch hash join using the combined
      92             :  * work_mem of all participants to create a large shared hash table.  If that
      93             :  * turns out either at planning or execution time to be impossible then we
      94             :  * fall back to regular work_mem sized hash tables.
      95             :  *
      96             :  * To avoid deadlocks, we never wait for any barrier unless it is known that
      97             :  * all other backends attached to it are actively executing the node or have
      98             :  * already arrived.  Practically, that means that we never return a tuple
      99             :  * while attached to a barrier, unless the barrier has reached its final
     100             :  * state.  In the slightly special case of the per-batch barrier, we return
     101             :  * tuples while in PHJ_BATCH_PROBING phase, but that's OK because we use
     102             :  * BarrierArriveAndDetach() to advance it to PHJ_BATCH_DONE without waiting.
     103             :  *
     104             :  *-------------------------------------------------------------------------
     105             :  */
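
The attach-then-switch pattern described above looks like the following in practice. This is a minimal sketch rather than code from this file: the Barrier functions and PHJ_BUILD_* constants are the real ones from storage/barrier.h and executor/hashjoin.h, but do_my_share_of_hashing() is a hypothetical placeholder for the work MultiExecParallelHash() actually farms out.

    #include "postgres.h"
    #include "executor/hashjoin.h"      /* PHJ_BUILD_* phase constants */
    #include "pgstat.h"                 /* WAIT_EVENT_HASH_* wait events */
    #include "storage/barrier.h"

    static void
    do_my_share_of_hashing(void)
    {
        /* stand-in for the per-worker build loop in MultiExecParallelHash() */
    }

    static void
    build_phase_sketch(Barrier *build_barrier)
    {
        /* BarrierAttach() reports the phase we arrived in. */
        switch (BarrierAttach(build_barrier))
        {
            case PHJ_BUILD_ELECTING:
                /* wait while one participant is elected to do the setup */
                BarrierArriveAndWait(build_barrier,
                                     WAIT_EVENT_HASH_BUILD_ELECTING);
                /* FALL THRU */
            case PHJ_BUILD_ALLOCATING:
                /* wait for the elected participant to set up batch 0 */
                BarrierArriveAndWait(build_barrier,
                                     WAIT_EVENT_HASH_BUILD_ALLOCATING);
                /* FALL THRU */
            case PHJ_BUILD_HASHING_INNER:
                /* now everyone attached can help hash the inner relation */
                do_my_share_of_hashing();
                break;
            default:
                /* PHJ_BUILD_HASHING_OUTER or PHJ_BUILD_DONE: too late to help */
                break;
        }
    }
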
     106             : 
     107             : #include "postgres.h"
     108             : 
     109             : #include "access/htup_details.h"
     110             : #include "access/parallel.h"
     111             : #include "executor/executor.h"
     112             : #include "executor/hashjoin.h"
     113             : #include "executor/nodeHash.h"
     114             : #include "executor/nodeHashjoin.h"
     115             : #include "miscadmin.h"
     116             : #include "pgstat.h"
     117             : #include "utils/memutils.h"
     118             : #include "utils/sharedtuplestore.h"
     119             : 
     120             : 
     121             : /*
     122             :  * States of the ExecHashJoin state machine
     123             :  */
     124             : #define HJ_BUILD_HASHTABLE      1
     125             : #define HJ_NEED_NEW_OUTER       2
     126             : #define HJ_SCAN_BUCKET          3
     127             : #define HJ_FILL_OUTER_TUPLE     4
     128             : #define HJ_FILL_INNER_TUPLES    5
     129             : #define HJ_NEED_NEW_BATCH       6
     130             : 
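
These six states drive the loop in ExecHashJoinImpl() below. Summarizing the transitions that loop actually performs (a reading aid derived from the code, not a comment from the original file):

    /*
     * HJ_BUILD_HASHTABLE   -> HJ_NEED_NEW_OUTER    (serial)
     *                      -> HJ_NEED_NEW_BATCH    (parallel-aware)
     * HJ_NEED_NEW_OUTER    -> HJ_SCAN_BUCKET       (tuple is in current batch)
     *                      -> HJ_FILL_INNER_TUPLES (outer done, right/full join)
     *                      -> HJ_NEED_NEW_BATCH    (outer done, otherwise)
     * HJ_SCAN_BUCKET       -> HJ_FILL_OUTER_TUPLE  (bucket exhausted)
     *                      -> HJ_NEED_NEW_OUTER    (antijoin or single match)
     * HJ_FILL_OUTER_TUPLE  -> HJ_NEED_NEW_OUTER
     * HJ_FILL_INNER_TUPLES -> HJ_NEED_NEW_BATCH    (no more unmatched inners)
     * HJ_NEED_NEW_BATCH    -> HJ_NEED_NEW_OUTER    (another batch to process)
     */
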
     131             : /* Returns true if doing null-fill on outer relation */
     132             : #define HJ_FILL_OUTER(hjstate)  ((hjstate)->hj_NullInnerTupleSlot != NULL)
     133             : /* Returns true if doing null-fill on inner relation */
     134             : #define HJ_FILL_INNER(hjstate)  ((hjstate)->hj_NullOuterTupleSlot != NULL)
     135             : 
     136             : static TupleTableSlot *ExecHashJoinOuterGetTuple(PlanState *outerNode,
     137             :                                                  HashJoinState *hjstate,
     138             :                                                  uint32 *hashvalue);
     139             : static TupleTableSlot *ExecParallelHashJoinOuterGetTuple(PlanState *outerNode,
     140             :                                                          HashJoinState *hjstate,
     141             :                                                          uint32 *hashvalue);
     142             : static TupleTableSlot *ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
     143             :                                                  BufFile *file,
     144             :                                                  uint32 *hashvalue,
     145             :                                                  TupleTableSlot *tupleSlot);
     146             : static bool ExecHashJoinNewBatch(HashJoinState *hjstate);
     147             : static bool ExecParallelHashJoinNewBatch(HashJoinState *hjstate);
     148             : static void ExecParallelHashJoinPartitionOuter(HashJoinState *node);
     149             : 
     150             : 
     151             : /* ----------------------------------------------------------------
     152             :  *      ExecHashJoinImpl
     153             :  *
     154             :  *      This function implements the Hybrid Hashjoin algorithm.  It is marked
     155             :  *      with an always-inline attribute so that ExecHashJoin() and
     156             :  *      ExecParallelHashJoin() can inline it.  Compilers that respect the
     157             :  *      attribute should create versions specialized for parallel == true and
     158             :  *      parallel == false with unnecessary branches removed.
     159             :  *
      160             :  *      Note: the relation we build the hash table on is the "inner" one;
      161             :  *            the other is the "outer".
     162             :  * ----------------------------------------------------------------
     163             :  */
     164             : static pg_attribute_always_inline TupleTableSlot *
     165     7094476 : ExecHashJoinImpl(PlanState *pstate, bool parallel)
     166             : {
     167     7094476 :     HashJoinState *node = castNode(HashJoinState, pstate);
     168             :     PlanState  *outerNode;
     169             :     HashState  *hashNode;
     170             :     ExprState  *joinqual;
     171             :     ExprState  *otherqual;
     172             :     ExprContext *econtext;
     173             :     HashJoinTable hashtable;
     174             :     TupleTableSlot *outerTupleSlot;
     175             :     uint32      hashvalue;
     176             :     int         batchno;
     177             :     ParallelHashJoinState *parallel_state;
     178             : 
     179             :     /*
     180             :      * get information from HashJoin node
     181             :      */
     182     7094476 :     joinqual = node->js.joinqual;
     183     7094476 :     otherqual = node->js.ps.qual;
     184     7094476 :     hashNode = (HashState *) innerPlanState(node);
     185     7094476 :     outerNode = outerPlanState(node);
     186     7094476 :     hashtable = node->hj_HashTable;
     187     7094476 :     econtext = node->js.ps.ps_ExprContext;
     188     7094476 :     parallel_state = hashNode->parallel_state;
     189             : 
     190             :     /*
     191             :      * Reset per-tuple memory context to free any expression evaluation
     192             :      * storage allocated in the previous tuple cycle.
     193             :      */
     194     7094476 :     ResetExprContext(econtext);
     195             : 
     196             :     /*
     197             :      * run the hash join state machine
     198             :      */
     199             :     for (;;)
     200             :     {
     201             :         /*
     202             :          * It's possible to iterate this loop many times before returning a
     203             :          * tuple, in some pathological cases such as needing to move much of
     204             :          * the current batch to a later batch.  So let's check for interrupts
     205             :          * each time through.
     206             :          */
     207    41990936 :         CHECK_FOR_INTERRUPTS();
     208             : 
     209    24542706 :         switch (node->hj_JoinState)
     210             :         {
     211             :             case HJ_BUILD_HASHTABLE:
     212             : 
     213             :                 /*
     214             :                  * First time through: build hash table for inner relation.
     215             :                  */
     216             :                 Assert(hashtable == NULL);
     217             : 
     218             :                 /*
     219             :                  * If the outer relation is completely empty, and it's not
     220             :                  * right/full join, we can quit without building the hash
     221             :                  * table.  However, for an inner join it is only a win to
     222             :                  * check this when the outer relation's startup cost is less
     223             :                  * than the projected cost of building the hash table.
     224             :                  * Otherwise it's best to build the hash table first and see
     225             :                  * if the inner relation is empty.  (When it's a left join, we
     226             :                  * should always make this check, since we aren't going to be
     227             :                  * able to skip the join on the strength of an empty inner
     228             :                  * relation anyway.)
     229             :                  *
     230             :                  * If we are rescanning the join, we make use of information
     231             :                  * gained on the previous scan: don't bother to try the
     232             :                  * prefetch if the previous scan found the outer relation
     233             :                  * nonempty. This is not 100% reliable since with new
     234             :                  * parameters the outer relation might yield different
     235             :                  * results, but it's a good heuristic.
     236             :                  *
     237             :                  * The only way to make the check is to try to fetch a tuple
     238             :                  * from the outer plan node.  If we succeed, we have to stash
     239             :                  * it away for later consumption by ExecHashJoinOuterGetTuple.
     240             :                  */
     241     1116880 :                 if (HJ_FILL_INNER(node))
     242             :                 {
     243             :                     /* no chance to not build the hash table */
     244        3856 :                     node->hj_FirstOuterTupleSlot = NULL;
     245             :                 }
     246     1113024 :                 else if (parallel)
     247             :                 {
     248             :                     /*
     249             :                      * The empty-outer optimization is not implemented for
     250             :                      * shared hash tables, because no one participant can
     251             :                      * determine that there are no outer tuples, and it's not
     252             :                      * yet clear that it's worth the synchronization overhead
     253             :                      * of reaching consensus to figure that out.  So we have
     254             :                      * to build the hash table.
     255             :                      */
     256         224 :                     node->hj_FirstOuterTupleSlot = NULL;
     257             :                 }
     258     1123730 :                 else if (HJ_FILL_OUTER(node) ||
     259       21562 :                          (outerNode->plan->startup_cost < hashNode->ps.plan->total_cost &&
     260       10632 :                           !node->hj_OuterNotEmpty))
     261             :                 {
     262     1111820 :                     node->hj_FirstOuterTupleSlot = ExecProcNode(outerNode);
     263     1372026 :                     if (TupIsNull(node->hj_FirstOuterTupleSlot))
     264             :                     {
     265      851614 :                         node->hj_OuterNotEmpty = false;
     266      851614 :                         return NULL;
     267             :                     }
     268             :                     else
     269      260206 :                         node->hj_OuterNotEmpty = true;
     270             :                 }
     271             :                 else
     272         980 :                     node->hj_FirstOuterTupleSlot = NULL;
     273             : 
     274             :                 /*
     275             :                  * Create the hash table.  If using Parallel Hash, then
     276             :                  * whoever gets here first will create the hash table and any
     277             :                  * later arrivals will merely attach to it.
     278             :                  */
     279      265266 :                 hashtable = ExecHashTableCreate(hashNode,
     280             :                                                 node->hj_HashOperators,
     281             :                                                 node->hj_Collations,
     282      265266 :                                                 HJ_FILL_INNER(node));
     283      265266 :                 node->hj_HashTable = hashtable;
     284             : 
     285             :                 /*
     286             :                  * Execute the Hash node, to build the hash table.  If using
     287             :                  * Parallel Hash, then we'll try to help hashing unless we
     288             :                  * arrived too late.
     289             :                  */
     290      265266 :                 hashNode->hashtable = hashtable;
     291      265266 :                 (void) MultiExecProcNode((PlanState *) hashNode);
     292             : 
     293             :                 /*
     294             :                  * If the inner relation is completely empty, and we're not
     295             :                  * doing a left outer join, we can quit without scanning the
     296             :                  * outer relation.
     297             :                  */
     298      265266 :                 if (hashtable->totalTuples == 0 && !HJ_FILL_OUTER(node))
     299        1796 :                     return NULL;
     300             : 
     301             :                 /*
     302             :                  * need to remember whether nbatch has increased since we
     303             :                  * began scanning the outer relation
     304             :                  */
     305      263470 :                 hashtable->nbatch_outstart = hashtable->nbatch;
     306             : 
     307             :                 /*
     308             :                  * Reset OuterNotEmpty for scan.  (It's OK if we fetched a
     309             :                  * tuple above, because ExecHashJoinOuterGetTuple will
     310             :                  * immediately set it again.)
     311             :                  */
     312      263470 :                 node->hj_OuterNotEmpty = false;
     313             : 
     314      263470 :                 if (parallel)
     315             :                 {
     316             :                     Barrier    *build_barrier;
     317             : 
     318         224 :                     build_barrier = &parallel_state->build_barrier;
     319             :                     Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER ||
     320             :                            BarrierPhase(build_barrier) == PHJ_BUILD_DONE);
     321         224 :                     if (BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER)
     322             :                     {
     323             :                         /*
     324             :                          * If multi-batch, we need to hash the outer relation
     325             :                          * up front.
     326             :                          */
     327         166 :                         if (hashtable->nbatch > 1)
     328          92 :                             ExecParallelHashJoinPartitionOuter(node);
     329         166 :                         BarrierArriveAndWait(build_barrier,
     330             :                                              WAIT_EVENT_HASH_BUILD_HASHING_OUTER);
     331             :                     }
     332             :                     Assert(BarrierPhase(build_barrier) == PHJ_BUILD_DONE);
     333             : 
     334             :                     /* Each backend should now select a batch to work on. */
     335         224 :                     hashtable->curbatch = -1;
     336         224 :                     node->hj_JoinState = HJ_NEED_NEW_BATCH;
     337             : 
     338         224 :                     continue;
     339             :                 }
     340             :                 else
     341      263246 :                     node->hj_JoinState = HJ_NEED_NEW_OUTER;
     342             : 
     343             :                 /* FALL THRU */
     344             : 
     345             :             case HJ_NEED_NEW_OUTER:
     346             : 
     347             :                 /*
     348             :                  * We don't have an outer tuple, try to get the next one
     349             :                  */
     350    10712642 :                 if (parallel)
     351     1200568 :                     outerTupleSlot =
     352             :                         ExecParallelHashJoinOuterGetTuple(outerNode, node,
     353             :                                                           &hashvalue);
     354             :                 else
     355     9512074 :                     outerTupleSlot =
     356             :                         ExecHashJoinOuterGetTuple(outerNode, node, &hashvalue);
     357             : 
     358    10712642 :                 if (TupIsNull(outerTupleSlot))
     359             :                 {
     360             :                     /* end of batch, or maybe whole join */
     361      264656 :                     if (HJ_FILL_INNER(node))
     362             :                     {
     363             :                         /* set up to scan for unmatched inner tuples */
     364        3660 :                         ExecPrepHashTableForUnmatched(node);
     365        3660 :                         node->hj_JoinState = HJ_FILL_INNER_TUPLES;
     366             :                     }
     367             :                     else
     368      260996 :                         node->hj_JoinState = HJ_NEED_NEW_BATCH;
     369      264656 :                     continue;
     370             :                 }
     371             : 
     372    10447986 :                 econtext->ecxt_outertuple = outerTupleSlot;
     373    10447986 :                 node->hj_MatchedOuter = false;
     374             : 
     375             :                 /*
     376             :                  * Find the corresponding bucket for this tuple in the main
     377             :                  * hash table or skew hash table.
     378             :                  */
     379    10447986 :                 node->hj_CurHashValue = hashvalue;
     380    10447986 :                 ExecHashGetBucketAndBatch(hashtable, hashvalue,
     381             :                                           &node->hj_CurBucketNo, &batchno);
     382    10447986 :                 node->hj_CurSkewBucketNo = ExecHashGetSkewBucket(hashtable,
     383             :                                                                  hashvalue);
     384    10447986 :                 node->hj_CurTuple = NULL;
     385             : 
     386             :                 /*
     387             :                  * The tuple might not belong to the current batch (where
     388             :                  * "current batch" includes the skew buckets if any).
     389             :                  */
     390    11434514 :                 if (batchno != hashtable->curbatch &&
     391      986528 :                     node->hj_CurSkewBucketNo == INVALID_SKEW_BUCKET_NO)
     392             :                 {
     393             :                     bool        shouldFree;
     394      986128 :                     MinimalTuple mintuple = ExecFetchSlotMinimalTuple(outerTupleSlot,
     395             :                                                                       &shouldFree);
     396             : 
     397             :                     /*
     398             :                      * Need to postpone this outer tuple to a later batch.
     399             :                      * Save it in the corresponding outer-batch file.
     400             :                      */
     401             :                     Assert(parallel_state == NULL);
     402             :                     Assert(batchno > hashtable->curbatch);
     403      986128 :                     ExecHashJoinSaveTuple(mintuple, hashvalue,
     404      986128 :                                           &hashtable->outerBatchFile[batchno]);
     405             : 
     406      986128 :                     if (shouldFree)
     407      986128 :                         heap_free_minimal_tuple(mintuple);
     408             : 
     409             :                     /* Loop around, staying in HJ_NEED_NEW_OUTER state */
     410      986128 :                     continue;
     411             :                 }
     412             : 
     413             :                 /* OK, let's scan the bucket for matches */
     414     9461858 :                 node->hj_JoinState = HJ_SCAN_BUCKET;
     415             : 
     416             :                 /* FALL THRU */
     417             : 
     418             :             case HJ_SCAN_BUCKET:
     419             : 
     420             :                 /*
     421             :                  * Scan the selected hash bucket for matches to current outer
     422             :                  */
     423    14313654 :                 if (parallel)
     424             :                 {
     425     2400032 :                     if (!ExecParallelScanHashBucket(node, econtext))
     426             :                     {
     427             :                         /* out of matches; check for possible outer-join fill */
     428     1200016 :                         node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
     429     1200016 :                         continue;
     430             :                     }
     431             :                 }
     432             :                 else
     433             :                 {
     434    11913622 :                     if (!ExecScanHashBucket(node, econtext))
     435             :                     {
     436             :                         /* out of matches; check for possible outer-join fill */
     437     6452058 :                         node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
     438     6452058 :                         continue;
     439             :                     }
     440             :                 }
     441             : 
     442             :                 /*
     443             :                  * We've got a match, but still need to test non-hashed quals.
     444             :                  * ExecScanHashBucket already set up all the state needed to
     445             :                  * call ExecQual.
     446             :                  *
     447             :                  * If we pass the qual, then save state for next call and have
     448             :                  * ExecProject form the projection, store it in the tuple
     449             :                  * table, and return the slot.
     450             :                  *
     451             :                  * Only the joinquals determine tuple match status, but all
     452             :                  * quals must pass to actually return the tuple.
     453             :                  */
     454     6661580 :                 if (joinqual == NULL || ExecQual(joinqual, econtext))
     455             :                 {
     456     6576822 :                     node->hj_MatchedOuter = true;
     457     6576822 :                     HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));
     458             : 
     459             :                     /* In an antijoin, we never return a matched tuple */
     460     6576822 :                     if (node->js.jointype == JOIN_ANTI)
     461             :                     {
     462     1453976 :                         node->hj_JoinState = HJ_NEED_NEW_OUTER;
     463     1453976 :                         continue;
     464             :                     }
     465             : 
     466             :                     /*
     467             :                      * If we only need to join to the first matching inner
     468             :                      * tuple, then consider returning this one, but after that
     469             :                      * continue with next outer tuple.
     470             :                      */
     471     5122846 :                     if (node->js.single_match)
     472      355768 :                         node->hj_JoinState = HJ_NEED_NEW_OUTER;
     473             : 
     474     5135448 :                     if (otherqual == NULL || ExecQual(otherqual, econtext))
     475     5110244 :                         return ExecProject(node->js.ps.ps_ProjInfo);
     476             :                     else
     477       12602 :                         InstrCountFiltered2(node, 1);
     478             :                 }
     479             :                 else
     480       84758 :                     InstrCountFiltered1(node, 1);
     481       97360 :                 break;
     482             : 
     483             :             case HJ_FILL_OUTER_TUPLE:
     484             : 
     485             :                 /*
     486             :                  * The current outer tuple has run out of matches, so check
     487             :                  * whether to emit a dummy outer-join tuple.  Whether we emit
     488             :                  * one or not, the next state is NEED_NEW_OUTER.
     489             :                  */
     490     7652074 :                 node->hj_JoinState = HJ_NEED_NEW_OUTER;
     491             : 
     492    11406732 :                 if (!node->hj_MatchedOuter &&
     493     3754658 :                     HJ_FILL_OUTER(node))
     494             :                 {
     495             :                     /*
     496             :                      * Generate a fake join tuple with nulls for the inner
     497             :                      * tuple, and return it if it passes the non-join quals.
     498             :                      */
     499     1182604 :                     econtext->ecxt_innertuple = node->hj_NullInnerTupleSlot;
     500             : 
     501     1182604 :                     if (otherqual == NULL || ExecQual(otherqual, econtext))
     502      663906 :                         return ExecProject(node->js.ps.ps_ProjInfo);
     503             :                     else
     504      518698 :                         InstrCountFiltered2(node, 1);
     505             :                 }
     506     6988168 :                 break;
     507             : 
     508             :             case HJ_FILL_INNER_TUPLES:
     509             : 
     510             :                 /*
     511             :                  * We have finished a batch, but we are doing right/full join,
     512             :                  * so any unmatched inner tuples in the hashtable have to be
     513             :                  * emitted before we continue to the next batch.
     514             :                  */
     515      207684 :                 if (!ExecScanHashTableForUnmatched(node, econtext))
     516             :                 {
     517             :                     /* no more unmatched tuples */
     518        3656 :                     node->hj_JoinState = HJ_NEED_NEW_BATCH;
     519        3656 :                     continue;
     520             :                 }
     521             : 
     522             :                 /*
     523             :                  * Generate a fake join tuple with nulls for the outer tuple,
     524             :                  * and return it if it passes the non-join quals.
     525             :                  */
     526      204028 :                 econtext->ecxt_outertuple = node->hj_NullOuterTupleSlot;
     527             : 
     528      204028 :                 if (otherqual == NULL || ExecQual(otherqual, econtext))
     529      203476 :                     return ExecProject(node->js.ps.ps_ProjInfo);
     530             :                 else
     531         552 :                     InstrCountFiltered2(node, 1);
     532         552 :                 break;
     533             : 
     534             :             case HJ_NEED_NEW_BATCH:
     535             : 
     536             :                 /*
     537             :                  * Try to advance to next batch.  Done if there are no more.
     538             :                  */
     539      264876 :                 if (parallel)
     540             :                 {
     541         776 :                     if (!ExecParallelHashJoinNewBatch(node))
     542         224 :                         return NULL;    /* end of parallel-aware join */
     543             :                 }
     544             :                 else
     545             :                 {
     546      264100 :                     if (!ExecHashJoinNewBatch(node))
     547      263216 :                         return NULL;    /* end of parallel-oblivious join */
     548             :                 }
     549        1436 :                 node->hj_JoinState = HJ_NEED_NEW_OUTER;
     550        1436 :                 break;
     551             : 
     552             :             default:
     553           0 :                 elog(ERROR, "unrecognized hashjoin state: %d",
     554             :                      (int) node->hj_JoinState);
     555             :         }
     556             :     }
     557             : }
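
The "never wait unless everyone is known to be active or arrived" rule from the header comment is what lets the HJ_NEED_NEW_BATCH case above hand control to ExecParallelHashJoinNewBatch() safely. A hedged sketch of how a participant leaves a finished batch, using the real BarrierArriveAndDetach() from storage/barrier.h with the cleanup elided:

    static void
    leave_batch_sketch(Barrier *batch_barrier)
    {
        /*
         * Detach without blocking; when the last participant does this,
         * the phase moves from PHJ_BATCH_PROBING to PHJ_BATCH_DONE.  The
         * return value is true only for that last participant, who then
         * owns freeing the batch's shared resources.
         */
        if (BarrierArriveAndDetach(batch_barrier))
        {
            /* last one out: clean up this batch */
        }
    }
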
     558             : 
     559             : /* ----------------------------------------------------------------
     560             :  *      ExecHashJoin
     561             :  *
     562             :  *      Parallel-oblivious version.
     563             :  * ----------------------------------------------------------------
     564             :  */
     565             : static TupleTableSlot *         /* return: a tuple or NULL */
     566     5894236 : ExecHashJoin(PlanState *pstate)
     567             : {
     568             :     /*
     569             :      * On sufficiently smart compilers this should be inlined with the
     570             :      * parallel-aware branches removed.
     571             :      */
     572     5894236 :     return ExecHashJoinImpl(pstate, false);
     573             : }
     574             : 
     575             : /* ----------------------------------------------------------------
     576             :  *      ExecParallelHashJoin
     577             :  *
     578             :  *      Parallel-aware version.
     579             :  * ----------------------------------------------------------------
     580             :  */
     581             : static TupleTableSlot *         /* return: a tuple or NULL */
     582     1200240 : ExecParallelHashJoin(PlanState *pstate)
     583             : {
     584             :     /*
     585             :      * On sufficiently smart compilers this should be inlined with the
     586             :      * parallel-oblivious branches removed.
     587             :      */
     588     1200240 :     return ExecHashJoinImpl(pstate, true);
     589             : }
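
The two wrappers above rely on nothing more exotic than constant propagation: each passes a compile-time constant for "parallel", so a compiler that honors the always-inline attribute can delete every branch the constant rules out. A self-contained illustration of the same technique (generic C, not PostgreSQL code):

    #include <stdbool.h>

    static inline int
    impl(int x, bool fast)
    {
        if (fast)
            return x << 1;      /* survives only in the fast variant */
        return x + x;           /* survives only in the slow variant */
    }

    int use_fast(int x) { return impl(x, true); }
    int use_slow(int x) { return impl(x, false); }
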
     590             : 
     591             : /* ----------------------------------------------------------------
     592             :  *      ExecInitHashJoin
     593             :  *
     594             :  *      Init routine for HashJoin node.
     595             :  * ----------------------------------------------------------------
     596             :  */
     597             : HashJoinState *
     598       26768 : ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
     599             : {
     600             :     HashJoinState *hjstate;
     601             :     Plan       *outerNode;
     602             :     Hash       *hashNode;
     603             :     List       *lclauses;
     604             :     List       *rclauses;
     605             :     List       *rhclauses;
     606             :     List       *hoperators;
     607             :     List       *hcollations;
     608             :     TupleDesc   outerDesc,
     609             :                 innerDesc;
     610             :     ListCell   *l;
     611             :     const TupleTableSlotOps *ops;
     612             : 
     613             :     /* check for unsupported flags */
     614             :     Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
     615             : 
     616             :     /*
     617             :      * create state structure
     618             :      */
     619       26768 :     hjstate = makeNode(HashJoinState);
     620       26768 :     hjstate->js.ps.plan = (Plan *) node;
     621       26768 :     hjstate->js.ps.state = estate;
     622             : 
     623             :     /*
     624             :      * See ExecHashJoinInitializeDSM() and ExecHashJoinInitializeWorker()
     625             :      * where this function may be replaced with a parallel version, if we
     626             :      * managed to launch a parallel query.
     627             :      */
     628       26768 :     hjstate->js.ps.ExecProcNode = ExecHashJoin;
     629       26768 :     hjstate->js.jointype = node->join.jointype;
     630             : 
     631             :     /*
     632             :      * Miscellaneous initialization
     633             :      *
     634             :      * create expression context for node
     635             :      */
     636       26768 :     ExecAssignExprContext(estate, &hjstate->js.ps);
     637             : 
     638             :     /*
     639             :      * initialize child nodes
     640             :      *
     641             :      * Note: we could suppress the REWIND flag for the inner input, which
     642             :      * would amount to betting that the hash will be a single batch.  Not
     643             :      * clear if this would be a win or not.
     644             :      */
     645       26768 :     outerNode = outerPlan(node);
     646       26768 :     hashNode = (Hash *) innerPlan(node);
     647             : 
     648       26768 :     outerPlanState(hjstate) = ExecInitNode(outerNode, estate, eflags);
     649       26768 :     outerDesc = ExecGetResultType(outerPlanState(hjstate));
     650       26768 :     innerPlanState(hjstate) = ExecInitNode((Plan *) hashNode, estate, eflags);
     651       26768 :     innerDesc = ExecGetResultType(innerPlanState(hjstate));
     652             : 
     653             :     /*
     654             :      * Initialize result slot, type and projection.
     655             :      */
     656       26768 :     ExecInitResultTupleSlotTL(&hjstate->js.ps, &TTSOpsVirtual);
     657       26768 :     ExecAssignProjectionInfo(&hjstate->js.ps, NULL);
     658             : 
     659             :     /*
     660             :      * tuple table initialization
     661             :      */
     662       26768 :     ops = ExecGetResultSlotOps(outerPlanState(hjstate), NULL);
     663       26768 :     hjstate->hj_OuterTupleSlot = ExecInitExtraTupleSlot(estate, outerDesc,
     664             :                                                         ops);
     665             : 
     666             :     /*
     667             :      * detect whether we need only consider the first matching inner tuple
     668             :      */
     669       42606 :     hjstate->js.single_match = (node->join.inner_unique ||
     670       15838 :                                 node->join.jointype == JOIN_SEMI);
     671             : 
     672             :     /* set up null tuples for outer joins, if needed */
     673       26768 :     switch (node->join.jointype)
     674             :     {
     675             :         case JOIN_INNER:
     676             :         case JOIN_SEMI:
     677       12264 :             break;
     678             :         case JOIN_LEFT:
     679             :         case JOIN_ANTI:
     680       10326 :             hjstate->hj_NullInnerTupleSlot =
     681       10326 :                 ExecInitNullTupleSlot(estate, innerDesc, &TTSOpsVirtual);
     682       10326 :             break;
     683             :         case JOIN_RIGHT:
     684        3802 :             hjstate->hj_NullOuterTupleSlot =
     685        3802 :                 ExecInitNullTupleSlot(estate, outerDesc, &TTSOpsVirtual);
     686        3802 :             break;
     687             :         case JOIN_FULL:
     688         376 :             hjstate->hj_NullOuterTupleSlot =
     689         376 :                 ExecInitNullTupleSlot(estate, outerDesc, &TTSOpsVirtual);
     690         376 :             hjstate->hj_NullInnerTupleSlot =
     691         376 :                 ExecInitNullTupleSlot(estate, innerDesc, &TTSOpsVirtual);
     692         376 :             break;
     693             :         default:
     694           0 :             elog(ERROR, "unrecognized join type: %d",
     695             :                  (int) node->join.jointype);
     696             :     }
     697             : 
     698             :     /*
     699             :      * now for some voodoo.  our temporary tuple slot is actually the result
     700             :      * tuple slot of the Hash node (which is our inner plan).  we can do this
     701             :      * because Hash nodes don't return tuples via ExecProcNode() -- instead
     702             :      * the hash join node uses ExecScanHashBucket() to get at the contents of
     703             :      * the hash table.  -cim 6/9/91
     704             :      */
     705             :     {
     706       26768 :         HashState  *hashstate = (HashState *) innerPlanState(hjstate);
     707       26768 :         TupleTableSlot *slot = hashstate->ps.ps_ResultTupleSlot;
     708             : 
     709       26768 :         hjstate->hj_HashTupleSlot = slot;
     710             :     }
     711             : 
     712             :     /*
     713             :      * initialize child expressions
     714             :      */
     715       26768 :     hjstate->js.ps.qual =
     716       26768 :         ExecInitQual(node->join.plan.qual, (PlanState *) hjstate);
     717       26768 :     hjstate->js.joinqual =
     718       26768 :         ExecInitQual(node->join.joinqual, (PlanState *) hjstate);
     719       26768 :     hjstate->hashclauses =
     720       26768 :         ExecInitQual(node->hashclauses, (PlanState *) hjstate);
     721             : 
     722             :     /*
     723             :      * initialize hash-specific info
     724             :      */
     725       26768 :     hjstate->hj_HashTable = NULL;
     726       26768 :     hjstate->hj_FirstOuterTupleSlot = NULL;
     727             : 
     728       26768 :     hjstate->hj_CurHashValue = 0;
     729       26768 :     hjstate->hj_CurBucketNo = 0;
     730       26768 :     hjstate->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
     731       26768 :     hjstate->hj_CurTuple = NULL;
     732             : 
     733             :     /*
     734             :      * Deconstruct the hash clauses into outer and inner argument values, so
     735             :      * that we can evaluate those subexpressions separately.  Also make a list
     736             :      * of the hash operator OIDs, in preparation for looking up the hash
     737             :      * functions to use.
     738             :      */
     739       26768 :     lclauses = NIL;
     740       26768 :     rclauses = NIL;
     741       26768 :     rhclauses = NIL;
     742       26768 :     hoperators = NIL;
     743       26768 :     hcollations = NIL;
     744       54848 :     foreach(l, node->hashclauses)
     745             :     {
     746       28080 :         OpExpr     *hclause = lfirst_node(OpExpr, l);
     747             : 
     748       28080 :         lclauses = lappend(lclauses, ExecInitExpr(linitial(hclause->args),
     749             :                                                   (PlanState *) hjstate));
     750       28080 :         rclauses = lappend(rclauses, ExecInitExpr(lsecond(hclause->args),
     751             :                                                   (PlanState *) hjstate));
     752       28080 :         rhclauses = lappend(rhclauses, ExecInitExpr(lsecond(hclause->args),
     753       28080 :                                                     innerPlanState(hjstate)));
     754       28080 :         hoperators = lappend_oid(hoperators, hclause->opno);
     755       28080 :         hcollations = lappend_oid(hcollations, hclause->inputcollid);
     756             :     }
     757       26768 :     hjstate->hj_OuterHashKeys = lclauses;
     758       26768 :     hjstate->hj_InnerHashKeys = rclauses;
     759       26768 :     hjstate->hj_HashOperators = hoperators;
     760       26768 :     hjstate->hj_Collations = hcollations;
     761             :     /* child Hash node needs to evaluate inner hash keys, too */
     762       26768 :     ((HashState *) innerPlanState(hjstate))->hashkeys = rhclauses;
     763             : 
     764       26768 :     hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
     765       26768 :     hjstate->hj_MatchedOuter = false;
     766       26768 :     hjstate->hj_OuterNotEmpty = false;
     767             : 
     768       26768 :     return hjstate;
     769             : }
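
One invariant the hashclause loop above depends on silently: the planner has already commuted each OpExpr so that the outer relation's key is the first argument and the inner relation's key the second, which is why positional access with linitial()/lsecond() is safe. A hedged restatement of the per-clause work, with hypothetical output parameters standing in for the lists:

    static void
    split_hashclause_sketch(OpExpr *hclause, PlanState *parent,
                            ExprState **outer_key, ExprState **inner_key)
    {
        /* for a clause like a.x = b.y: first argument is the outer key */
        *outer_key = ExecInitExpr(linitial(hclause->args), parent);
        /* ... and the second argument is the inner key */
        *inner_key = ExecInitExpr(lsecond(hclause->args), parent);
    }
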
     770             : 
     771             : /* ----------------------------------------------------------------
     772             :  *      ExecEndHashJoin
     773             :  *
     774             :  *      clean up routine for HashJoin node
     775             :  * ----------------------------------------------------------------
     776             :  */
     777             : void
     778       26722 : ExecEndHashJoin(HashJoinState *node)
     779             : {
     780             :     /*
     781             :      * Free hash table
     782             :      */
     783       26722 :     if (node->hj_HashTable)
     784             :     {
     785       16354 :         ExecHashTableDestroy(node->hj_HashTable);
     786       16354 :         node->hj_HashTable = NULL;
     787             :     }
     788             : 
     789             :     /*
     790             :      * Free the exprcontext
     791             :      */
     792       26722 :     ExecFreeExprContext(&node->js.ps);
     793             : 
     794             :     /*
     795             :      * clean out the tuple table
     796             :      */
     797       26722 :     ExecClearTuple(node->js.ps.ps_ResultTupleSlot);
     798       26722 :     ExecClearTuple(node->hj_OuterTupleSlot);
     799       26722 :     ExecClearTuple(node->hj_HashTupleSlot);
     800             : 
     801             :     /*
     802             :      * clean up subtrees
     803             :      */
     804       26722 :     ExecEndNode(outerPlanState(node));
     805       26722 :     ExecEndNode(innerPlanState(node));
     806       26722 : }
     807             : 
     808             : /*
     809             :  * ExecHashJoinOuterGetTuple
     810             :  *
     811             :  *      get the next outer tuple for a parallel oblivious hashjoin: either by
     812             :  *      executing the outer plan node in the first pass, or from the temp
     813             :  *      files for the hashjoin batches.
     814             :  *
     815             :  * Returns a null slot if no more outer tuples (within the current batch).
     816             :  *
     817             :  * On success, the tuple's hash value is stored at *hashvalue --- this is
     818             :  * either originally computed, or re-read from the temp file.
     819             :  */
     820             : static TupleTableSlot *
     821     9512074 : ExecHashJoinOuterGetTuple(PlanState *outerNode,
     822             :                           HashJoinState *hjstate,
     823             :                           uint32 *hashvalue)
     824             : {
     825     9512074 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     826     9512074 :     int         curbatch = hashtable->curbatch;
     827             :     TupleTableSlot *slot;
     828             : 
     829     9512074 :     if (curbatch == 0)          /* if it is the first pass */
     830             :     {
     831             :         /*
     832             :          * Check to see if first outer tuple was already fetched by
     833             :          * ExecHashJoin() and not used yet.
     834             :          */
     835     8525062 :         slot = hjstate->hj_FirstOuterTupleSlot;
     836     8525062 :         if (!TupIsNull(slot))
     837      258898 :             hjstate->hj_FirstOuterTupleSlot = NULL;
     838             :         else
     839     8266164 :             slot = ExecProcNode(outerNode);
     840             : 
     841    17050584 :         while (!TupIsNull(slot))
     842             :         {
     843             :             /*
     844             :              * We have to compute the tuple's hash value.
     845             :              */
     846     8262302 :             ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
     847             : 
     848     8262302 :             econtext->ecxt_outertuple = slot;
     849     8262302 :             if (ExecHashGetHashValue(hashtable, econtext,
     850             :                                      hjstate->hj_OuterHashKeys,
     851             :                                      true,  /* outer tuple */
     852     8262302 :                                      HJ_FILL_OUTER(hjstate),
     853             :                                      hashvalue))
     854             :             {
     855             :                 /* remember outer relation is not empty for possible rescan */
     856     8261842 :                 hjstate->hj_OuterNotEmpty = true;
     857             : 
     858     8261842 :                 return slot;
     859             :             }
     860             : 
     861             :             /*
     862             :              * That tuple couldn't match because of a NULL, so discard it and
     863             :              * continue with the next one.
     864             :              */
     865         460 :             slot = ExecProcNode(outerNode);
     866             :         }
     867             :     }
     868      987012 :     else if (curbatch < hashtable->nbatch)
     869             :     {
     870      987012 :         BufFile    *file = hashtable->outerBatchFile[curbatch];
     871             : 
     872             :         /*
     873             :          * In outer-join cases, we could get here even though the batch file
     874             :          * is empty.
     875             :          */
     876      987012 :         if (file == NULL)
     877           0 :             return NULL;
     878             : 
     879      987012 :         slot = ExecHashJoinGetSavedTuple(hjstate,
     880             :                                          file,
     881             :                                          hashvalue,
     882             :                                          hjstate->hj_OuterTupleSlot);
     883      987012 :         if (!TupIsNull(slot))
     884      986128 :             return slot;
     885             :     }
     886             : 
     887             :     /* End of this batch */
     888      264104 :     return NULL;
     889             : }
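
The temp-file records read back here were written by ExecHashJoinSaveTuple() (defined later in this file). The layout needs no delimiters: each record is the 32-bit hash value followed by the MinimalTuple, whose own first field gives its length. A hedged sketch of the write side, with the error handling of the real function elided:

    static void
    save_tuple_sketch(BufFile *file, uint32 hashvalue, MinimalTuple tuple)
    {
        /* record = hash value, then the variable-length minimal tuple */
        BufFileWrite(file, (void *) &hashvalue, sizeof(uint32));
        BufFileWrite(file, (void *) tuple, tuple->t_len);
    }
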
     890             : 
     891             : /*
     892             :  * ExecHashJoinOuterGetTuple variant for the parallel case.
     893             :  */
     894             : static TupleTableSlot *
     895     1200568 : ExecParallelHashJoinOuterGetTuple(PlanState *outerNode,
     896             :                                   HashJoinState *hjstate,
     897             :                                   uint32 *hashvalue)
     898             : {
     899     1200568 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     900     1200568 :     int         curbatch = hashtable->curbatch;
     901             :     TupleTableSlot *slot;
     902             : 
     903             :     /*
     904             :      * In the Parallel Hash case we only run the outer plan directly for
     905             :      * single-batch hash joins.  Otherwise we have to go to batch files, even
     906             :      * for batch 0.
     907             :      */
     908     1200644 :     if (curbatch == 0 && hashtable->nbatch == 1)
     909             :     {
     910      480076 :         slot = ExecProcNode(outerNode);
     911             : 
     912      960152 :         while (!TupIsNull(slot))
     913             :         {
     914      480000 :             ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
     915             : 
     916      480000 :             econtext->ecxt_outertuple = slot;
     917      480000 :             if (ExecHashGetHashValue(hashtable, econtext,
     918             :                                      hjstate->hj_OuterHashKeys,
     919             :                                      true,  /* outer tuple */
     920      480000 :                                      HJ_FILL_OUTER(hjstate),
     921             :                                      hashvalue))
     922      480000 :                 return slot;
     923             : 
     924             :             /*
     925             :              * That tuple couldn't match because of a NULL, so discard it and
     926             :              * continue with the next one.
     927             :              */
     928           0 :             slot = ExecProcNode(outerNode);
     929             :         }
     930             :     }
     931      720492 :     else if (curbatch < hashtable->nbatch)
     932             :     {
     933             :         MinimalTuple tuple;
     934             : 
     935      720492 :         tuple = sts_parallel_scan_next(hashtable->batches[curbatch].outer_tuples,
     936             :                                        hashvalue);
     937      720492 :         if (tuple != NULL)
     938             :         {
     939      720016 :             ExecForceStoreMinimalTuple(tuple,
     940             :                                        hjstate->hj_OuterTupleSlot,
     941             :                                        false);
     942      720016 :             slot = hjstate->hj_OuterTupleSlot;
     943      720016 :             return slot;
     944             :         }
     945             :         else
     946         476 :             ExecClearTuple(hjstate->hj_OuterTupleSlot);
     947             :     }
     948             : 
     949             :     /* End of this batch */
     950         552 :     return NULL;
     951             : }
     952             : 
     953             : /*
     954             :  * ExecHashJoinNewBatch
     955             :  *      switch to a new hashjoin batch
     956             :  *
     957             :  * Returns true if successful, false if there are no more batches.
     958             :  */
     959             : static bool
     960      264100 : ExecHashJoinNewBatch(HashJoinState *hjstate)
     961             : {
     962      264100 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     963             :     int         nbatch;
     964             :     int         curbatch;
     965             :     BufFile    *innerFile;
     966             :     TupleTableSlot *slot;
     967             :     uint32      hashvalue;
     968             : 
     969      264100 :     nbatch = hashtable->nbatch;
     970      264100 :     curbatch = hashtable->curbatch;
     971             : 
     972      264100 :     if (curbatch > 0)
     973             :     {
     974             :         /*
     975             :          * We no longer need the previous outer batch file; close it right
     976             :          * away to free disk space.
     977             :          */
     978         884 :         if (hashtable->outerBatchFile[curbatch])
     979         884 :             BufFileClose(hashtable->outerBatchFile[curbatch]);
     980         884 :         hashtable->outerBatchFile[curbatch] = NULL;
     981             :     }
     982             :     else                        /* we just finished the first batch */
     983             :     {
     984             :         /*
     985             :          * Reset some of the skew optimization state variables, since we no
     986             :          * longer need to consider skew tuples after the first batch. The
     987             :          * memory context reset we are about to do will release the skew
     988             :          * hashtable itself.
     989             :          */
     990      263216 :         hashtable->skewEnabled = false;
     991      263216 :         hashtable->skewBucket = NULL;
     992      263216 :         hashtable->skewBucketNums = NULL;
     993      263216 :         hashtable->nSkewBuckets = 0;
     994      263216 :         hashtable->spaceUsedSkew = 0;
     995             :     }
     996             : 
     997             :     /*
     998             :      * We can always skip over any batches that are completely empty on both
     999             :      * sides.  We can sometimes skip over batches that are empty on only one
    1000             :      * side, but there are exceptions:
    1001             :      *
    1002             :      * 1. In a left/full outer join, we have to process outer batches even if
    1003             :      * the inner batch is empty.  Similarly, in a right/full outer join, we
    1004             :      * have to process inner batches even if the outer batch is empty.
    1005             :      *
    1006             :      * 2. If we have increased nbatch since the initial estimate, we have to
    1007             :      * scan inner batches since they might contain tuples that need to be
    1008             :      * reassigned to later inner batches.
    1009             :      *
    1010             :      * 3. Similarly, if we have increased nbatch since starting the outer
    1011             :      * scan, we have to rescan outer batches in case they contain tuples that
    1012             :      * need to be reassigned.
    1013             :      */
    1014      264100 :     curbatch++;
    1015      529084 :     while (curbatch < nbatch &&
    1016        1768 :            (hashtable->outerBatchFile[curbatch] == NULL ||
    1017         884 :             hashtable->innerBatchFile[curbatch] == NULL))
    1018             :     {
    1019           0 :         if (hashtable->outerBatchFile[curbatch] &&
    1020           0 :             HJ_FILL_OUTER(hjstate))
    1021           0 :             break;              /* must process due to rule 1 */
    1022           0 :         if (hashtable->innerBatchFile[curbatch] &&
    1023           0 :             HJ_FILL_INNER(hjstate))
    1024           0 :             break;              /* must process due to rule 1 */
    1025           0 :         if (hashtable->innerBatchFile[curbatch] &&
    1026           0 :             nbatch != hashtable->nbatch_original)
    1027           0 :             break;              /* must process due to rule 2 */
    1028           0 :         if (hashtable->outerBatchFile[curbatch] &&
    1029           0 :             nbatch != hashtable->nbatch_outstart)
    1030           0 :             break;              /* must process due to rule 3 */
    1031             :         /* We can ignore this batch. */
    1032             :         /* Release associated temp files right away. */
    1033           0 :         if (hashtable->innerBatchFile[curbatch])
    1034           0 :             BufFileClose(hashtable->innerBatchFile[curbatch]);
    1035           0 :         hashtable->innerBatchFile[curbatch] = NULL;
    1036           0 :         if (hashtable->outerBatchFile[curbatch])
    1037           0 :             BufFileClose(hashtable->outerBatchFile[curbatch]);
    1038           0 :         hashtable->outerBatchFile[curbatch] = NULL;
    1039           0 :         curbatch++;
    1040             :     }
    1041             : 
    1042      264100 :     if (curbatch >= nbatch)
    1043      263216 :         return false;           /* no more batches */
    1044             : 
    1045         884 :     hashtable->curbatch = curbatch;
    1046             : 
    1047             :     /*
    1048             :      * Reload the hash table with the new inner batch (which could be empty)
    1049             :      */
    1050         884 :     ExecHashTableReset(hashtable);
    1051             : 
    1052         884 :     innerFile = hashtable->innerBatchFile[curbatch];
    1053             : 
    1054         884 :     if (innerFile != NULL)
    1055             :     {
    1056         884 :         if (BufFileSeek(innerFile, 0, 0L, SEEK_SET))
    1057           0 :             ereport(ERROR,
    1058             :                     (errcode_for_file_access(),
    1059             :                      errmsg("could not rewind hash-join temporary file: %m")));
    1060             : 
    1061     1855332 :         while ((slot = ExecHashJoinGetSavedTuple(hjstate,
    1062             :                                                  innerFile,
    1063             :                                                  &hashvalue,
    1064             :                                                  hjstate->hj_HashTupleSlot)))
    1065             :         {
    1066             :             /*
    1067             :              * NOTE: some tuples may be sent to future batches.  Also, it is
    1068             :              * possible for hashtable->nbatch to be increased here!
    1069             :              */
    1070     1853564 :             ExecHashTableInsert(hashtable, slot, hashvalue);
    1071             :         }
    1072             : 
    1073             :         /*
    1074             :          * after we build the hash table, the inner batch file is no longer
    1075             :          * needed
    1076             :          */
    1077         884 :         BufFileClose(innerFile);
    1078         884 :         hashtable->innerBatchFile[curbatch] = NULL;
    1079             :     }
    1080             : 
    1081             :     /*
    1082             :      * Rewind outer batch file (if present), so that we can start reading it.
    1083             :      */
    1084         884 :     if (hashtable->outerBatchFile[curbatch] != NULL)
    1085             :     {
    1086         884 :         if (BufFileSeek(hashtable->outerBatchFile[curbatch], 0, 0L, SEEK_SET))
    1087           0 :             ereport(ERROR,
    1088             :                     (errcode_for_file_access(),
    1089             :                      errmsg("could not rewind hash-join temporary file: %m")));
    1090             :     }
    1091             : 
    1092         884 :     return true;
    1093             : }
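
The reload above is one half of a simple BufFile round trip: earlier phases spill tuples to per-batch temp files with BufFileWrite, and switching batches rewinds each file and reads it back. Below is a minimal sketch of that round trip using the same API, with a hypothetical fixed-size record standing in for the hashvalue-plus-MinimalTuple records this file actually writes; it assumes a normal backend context.

        #include "postgres.h"
        #include "storage/buffile.h"

        typedef struct demo_record      /* hypothetical payload */
        {
            uint32      hashvalue;
            uint32      payload;
        } demo_record;

        static void
        demo_buffile_roundtrip(void)
        {
            BufFile    *file = BufFileCreateTemp(false);
            demo_record out = {42, 7};
            demo_record in;

            if (BufFileWrite(file, (void *) &out, sizeof(out)) != sizeof(out))
                elog(ERROR, "could not write to temporary file");

            /* Rewind before reading back, as ExecHashJoinNewBatch does. */
            if (BufFileSeek(file, 0, 0L, SEEK_SET))
                elog(ERROR, "could not rewind temporary file");

            if (BufFileRead(file, (void *) &in, sizeof(in)) != sizeof(in))
                elog(ERROR, "could not read from temporary file");

            BufFileClose(file);         /* release disk space promptly */
        }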
    1094             : 
    1095             : /*
    1096             :  * Choose a batch to work on, and attach to it.  Returns true if successful,
    1097             :  * false if there are no more batches.
    1098             :  */
    1099             : static bool
    1100         776 : ExecParallelHashJoinNewBatch(HashJoinState *hjstate)
    1101             : {
    1102         776 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1103             :     int         start_batchno;
    1104             :     int         batchno;
    1105             : 
    1106             :     /*
    1107             :      * If we started up so late that the batch tracking array has been freed
    1108             :      * already by ExecHashTableDetach(), then we are finished.  See also
    1109             :      * ExecParallelHashEnsureBatchAccessors().
    1110             :      */
    1111         776 :     if (hashtable->batches == NULL)
    1112           0 :         return false;
    1113             : 
    1114             :     /*
    1115             :      * If we were already attached to a batch, remember not to bother checking
    1116             :      * it again, and detach from it (possibly freeing the hash table if we are
    1117             :      * last to detach).
    1118             :      */
    1119         776 :     if (hashtable->curbatch >= 0)
    1120             :     {
    1121         552 :         hashtable->batches[hashtable->curbatch].done = true;
    1122         552 :         ExecHashTableDetachBatch(hashtable);
    1123             :     }
    1124             : 
    1125             :     /*
    1126             :      * Search for a batch that isn't done.  We use an atomic counter to start
    1127             :      * our search at a different batch in every participant when there are
    1128             :      * more batches than participants.
    1129             :      */
    1130         776 :     batchno = start_batchno =
    1131        1552 :         pg_atomic_fetch_add_u32(&hashtable->parallel_state->distributor, 1) %
    1132         776 :         hashtable->nbatch;
    1133             :     do
    1134             :     {
    1135             :         uint32      hashvalue;
    1136             :         MinimalTuple tuple;
    1137             :         TupleTableSlot *slot;
    1138             : 
    1139        1896 :         if (!hashtable->batches[batchno].done)
    1140             :         {
    1141             :             SharedTuplestoreAccessor *inner_tuples;
    1142        1088 :             Barrier    *batch_barrier =
    1143        1088 :             &hashtable->batches[batchno].shared->batch_barrier;
    1144             : 
    1145        1088 :             switch (BarrierAttach(batch_barrier))
    1146             :             {
    1147             :                 case PHJ_BATCH_ELECTING:
    1148             : 
    1149             :                     /* One backend allocates the hash table. */
    1150         376 :                     if (BarrierArriveAndWait(batch_barrier,
    1151             :                                              WAIT_EVENT_HASH_BATCH_ELECTING))
    1152         376 :                         ExecParallelHashTableAlloc(hashtable, batchno);
    1153             :                     /* Fall through. */
    1154             : 
    1155             :                 case PHJ_BATCH_ALLOCATING:
    1156             :                     /* Wait for allocation to complete. */
    1157         378 :                     BarrierArriveAndWait(batch_barrier,
    1158             :                                          WAIT_EVENT_HASH_BATCH_ALLOCATING);
    1159             :                     /* Fall through. */
    1160             : 
    1161             :                 case PHJ_BATCH_LOADING:
    1162             :                     /* Start (or join in) loading tuples. */
    1163         386 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1164         386 :                     inner_tuples = hashtable->batches[batchno].inner_tuples;
    1165         386 :                     sts_begin_parallel_scan(inner_tuples);
    1166      653890 :                     while ((tuple = sts_parallel_scan_next(inner_tuples,
    1167             :                                                            &hashvalue)))
    1168             :                     {
    1169      653118 :                         ExecForceStoreMinimalTuple(tuple,
    1170             :                                                    hjstate->hj_HashTupleSlot,
    1171             :                                                    false);
    1172      653118 :                         slot = hjstate->hj_HashTupleSlot;
    1173      653118 :                         ExecParallelHashTableInsertCurrentBatch(hashtable, slot,
    1174             :                                                                 hashvalue);
    1175             :                     }
    1176         386 :                     sts_end_parallel_scan(inner_tuples);
    1177         386 :                     BarrierArriveAndWait(batch_barrier,
    1178             :                                          WAIT_EVENT_HASH_BATCH_LOADING);
    1179             :                     /* Fall through. */
    1180             : 
    1181             :                 case PHJ_BATCH_PROBING:
    1182             : 
    1183             :                     /*
    1184             :                      * This batch is ready to probe.  Return control to
    1185             :                      * caller. We stay attached to batch_barrier so that the
    1186             :                      * hash table stays alive until everyone's finished
    1187             :                      * probing it, but no participant is allowed to wait at
    1188             :                      * this barrier again (or else a deadlock could occur).
    1189             :                      * All attached participants must eventually call
    1190             :                      * BarrierArriveAndDetach() so that the final phase
    1191             :                      * PHJ_BATCH_DONE can be reached.
    1192             :                      */
    1193         552 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1194         552 :                     sts_begin_parallel_scan(hashtable->batches[batchno].outer_tuples);
    1195         552 :                     return true;
    1196             : 
    1197             :                 case PHJ_BATCH_DONE:
    1198             : 
    1199             :                     /*
    1200             :                      * Already done.  Detach and go around again (if any
    1201             :                      * remain).
    1202             :                      */
    1203         536 :                     BarrierDetach(batch_barrier);
    1204         536 :                     hashtable->batches[batchno].done = true;
    1205         536 :                     hashtable->curbatch = -1;
    1206         536 :                     break;
    1207             : 
    1208             :                 default:
    1209           0 :                     elog(ERROR, "unexpected batch phase %d",
    1210             :                          BarrierPhase(batch_barrier));
    1211             :             }
    1212             :         }
    1213        1344 :         batchno = (batchno + 1) % hashtable->nbatch;
    1214        1344 :     } while (batchno != start_batchno);
    1215             : 
    1216         224 :     return false;
    1217             : }
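
The switch above is the attach-and-catch-up idiom: a participant attaches, learns the current phase from BarrierAttach's return value, and falls through the remaining case labels so that it joins the algorithm wherever its peers already are. Here is a minimal sketch of the idiom; DEMO_PHASE_* and wait_event are hypothetical stand-ins, not PostgreSQL symbols, and the real code above additionally avoids waiting during the probe phase to prevent deadlock.

        #include "postgres.h"
        #include "storage/barrier.h"

        #define DEMO_PHASE_ELECTING   0     /* hypothetical phase numbers */
        #define DEMO_PHASE_ALLOCATING 1
        #define DEMO_PHASE_WORKING    2
        #define DEMO_PHASE_DONE       3

        static void
        demo_attach_and_catch_up(Barrier *barrier, uint32 wait_event)
        {
            switch (BarrierAttach(barrier))
            {
                case DEMO_PHASE_ELECTING:
                    /* BarrierArriveAndWait returns true in exactly one backend. */
                    if (BarrierArriveAndWait(barrier, wait_event))
                    {
                        /* the elected participant runs the serial setup here */
                    }
                    /* Fall through. */

                case DEMO_PHASE_ALLOCATING:
                    /* everyone waits until the elected backend's setup is done */
                    BarrierArriveAndWait(barrier, wait_event);
                    /* Fall through. */

                case DEMO_PHASE_WORKING:
                    /* all attached participants share the parallel work */
                    BarrierArriveAndWait(barrier, wait_event);
                    /* Fall through. */

                case DEMO_PHASE_DONE:
                    BarrierDetach(barrier);
                    break;

                default:
                    elog(ERROR, "unexpected phase %d", BarrierPhase(barrier));
            }
        }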
    1218             : 
    1219             : /*
    1220             :  * ExecHashJoinSaveTuple
    1221             :  *      save a tuple to a batch file.
    1222             :  *
    1223             :  * The data recorded in the file for each tuple is its hash value,
    1224             :  * then the tuple in MinimalTuple format.
    1225             :  *
     1226             :  * Note: it is important always to call this in the regular executor
     1227             :  * context, not in a shorter-lived context; else the temp file buffers
     1228             :  * will be allocated in that context and destroyed when it is reset.
    1229             :  */
    1230             : void
    1231     2839692 : ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,
    1232             :                       BufFile **fileptr)
    1233             : {
    1234     2839692 :     BufFile    *file = *fileptr;
    1235             :     size_t      written;
    1236             : 
    1237     2839692 :     if (file == NULL)
    1238             :     {
    1239             :         /* First write to this batch file, so open it. */
    1240        1768 :         file = BufFileCreateTemp(false);
    1241        1768 :         *fileptr = file;
    1242             :     }
    1243             : 
    1244     2839692 :     written = BufFileWrite(file, (void *) &hashvalue, sizeof(uint32));
    1245     2839692 :     if (written != sizeof(uint32))
    1246           0 :         ereport(ERROR,
    1247             :                 (errcode_for_file_access(),
    1248             :                  errmsg("could not write to hash-join temporary file: %m")));
    1249             : 
    1250     2839692 :     written = BufFileWrite(file, (void *) tuple, tuple->t_len);
    1251     2839692 :     if (written != tuple->t_len)
    1252           0 :         ereport(ERROR,
    1253             :                 (errcode_for_file_access(),
    1254             :                  errmsg("could not write to hash-join temporary file: %m")));
    1255     2839692 : }
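
Concretely, since a MinimalTuple's first field is its uint32 t_len, each record written above has the following layout (assuming a 4-byte uint32); this is what lets ExecHashJoinGetSavedTuple below pick up the hash value and the tuple length in a single read.

        offset 0                 uint32 hashvalue
        offset 4                 uint32 t_len    (first word of the MinimalTuple)
        offset 8 .. t_len + 3    remaining t_len - 4 bytes of the MinimalTuple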
    1256             : 
    1257             : /*
    1258             :  * ExecHashJoinGetSavedTuple
    1259             :  *      read the next tuple from a batch file.  Return NULL if no more.
    1260             :  *
    1261             :  * On success, *hashvalue is set to the tuple's hash value, and the tuple
    1262             :  * itself is stored in the given slot.
    1263             :  */
    1264             : static TupleTableSlot *
    1265     2841460 : ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
    1266             :                           BufFile *file,
    1267             :                           uint32 *hashvalue,
    1268             :                           TupleTableSlot *tupleSlot)
    1269             : {
    1270             :     uint32      header[2];
    1271             :     size_t      nread;
    1272             :     MinimalTuple tuple;
    1273             : 
    1274             :     /*
    1275             :      * We check for interrupts here because this is typically taken as an
    1276             :      * alternative code path to an ExecProcNode() call, which would include
    1277             :      * such a check.
    1278             :      */
    1279     2841460 :     CHECK_FOR_INTERRUPTS();
    1280             : 
    1281             :     /*
    1282             :      * Since both the hash value and the MinimalTuple length word are uint32,
    1283             :      * we can read them both in one BufFileRead() call without any type
    1284             :      * cheating.
    1285             :      */
    1286     2841460 :     nread = BufFileRead(file, (void *) header, sizeof(header));
    1287     2841460 :     if (nread == 0)             /* end of file */
    1288             :     {
    1289        1768 :         ExecClearTuple(tupleSlot);
    1290        1768 :         return NULL;
    1291             :     }
    1292     2839692 :     if (nread != sizeof(header))
    1293           0 :         ereport(ERROR,
    1294             :                 (errcode_for_file_access(),
    1295             :                  errmsg("could not read from hash-join temporary file: %m")));
    1296     2839692 :     *hashvalue = header[0];
    1297     2839692 :     tuple = (MinimalTuple) palloc(header[1]);
    1298     2839692 :     tuple->t_len = header[1];
    1299     2839692 :     nread = BufFileRead(file,
    1300             :                         (void *) ((char *) tuple + sizeof(uint32)),
    1301     2839692 :                         header[1] - sizeof(uint32));
    1302     2839692 :     if (nread != header[1] - sizeof(uint32))
    1303           0 :         ereport(ERROR,
    1304             :                 (errcode_for_file_access(),
    1305             :                  errmsg("could not read from hash-join temporary file: %m")));
    1306     2839692 :     ExecForceStoreMinimalTuple(tuple, tupleSlot, true);
    1307     2839692 :     return tupleSlot;
    1308             : }
    1309             : 
    1310             : 
    1311             : void
    1312     1099144 : ExecReScanHashJoin(HashJoinState *node)
    1313             : {
    1314             :     /*
    1315             :      * In a multi-batch join, we currently have to do rescans the hard way,
    1316             :      * primarily because batch temp files may have already been released. But
    1317             :      * if it's a single-batch join, and there is no parameter change for the
    1318             :      * inner subnode, then we can just re-use the existing hash table without
    1319             :      * rebuilding it.
    1320             :      */
    1321     1099144 :     if (node->hj_HashTable != NULL)
    1322             :     {
    1323      497912 :         if (node->hj_HashTable->nbatch == 1 &&
    1324      248956 :             node->js.ps.righttree->chgParam == NULL)
    1325             :         {
    1326             :             /*
    1327             :              * Okay to reuse the hash table; needn't rescan inner, either.
    1328             :              *
    1329             :              * However, if it's a right/full join, we'd better reset the
    1330             :              * inner-tuple match flags contained in the table.
    1331             :              */
    1332          76 :             if (HJ_FILL_INNER(node))
    1333          20 :                 ExecHashTableResetMatchFlags(node->hj_HashTable);
    1334             : 
    1335             :             /*
    1336             :              * Also, we need to reset our state about the emptiness of the
    1337             :              * outer relation, so that the new scan of the outer will update
    1338             :              * it correctly if it turns out to be empty this time. (There's no
    1339             :              * harm in clearing it now because ExecHashJoin won't need the
    1340             :              * info.  In the other cases, where the hash table doesn't exist
    1341             :              * or we are destroying it, we leave this state alone because
    1342             :              * ExecHashJoin will need it the first time through.)
    1343             :              */
    1344          76 :             node->hj_OuterNotEmpty = false;
    1345             : 
    1346             :             /* ExecHashJoin can skip the BUILD_HASHTABLE step */
    1347          76 :             node->hj_JoinState = HJ_NEED_NEW_OUTER;
    1348             :         }
    1349             :         else
    1350             :         {
    1351             :             /* must destroy and rebuild hash table */
    1352      248880 :             ExecHashTableDestroy(node->hj_HashTable);
    1353      248880 :             node->hj_HashTable = NULL;
    1354      248880 :             node->hj_JoinState = HJ_BUILD_HASHTABLE;
    1355             : 
    1356             :             /*
     1357             :              * If chgParam of the subnode is not null, then the plan will be
     1358             :              * re-scanned by the first ExecProcNode call.
    1359             :              */
    1360      248880 :             if (node->js.ps.righttree->chgParam == NULL)
    1361           0 :                 ExecReScan(node->js.ps.righttree);
    1362             :         }
    1363             :     }
    1364             : 
    1365             :     /* Always reset intra-tuple state */
    1366     1099144 :     node->hj_CurHashValue = 0;
    1367     1099144 :     node->hj_CurBucketNo = 0;
    1368     1099144 :     node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
    1369     1099144 :     node->hj_CurTuple = NULL;
    1370             : 
    1371     1099144 :     node->hj_MatchedOuter = false;
    1372     1099144 :     node->hj_FirstOuterTupleSlot = NULL;
    1373             : 
    1374             :     /*
     1375             :      * If chgParam of the subnode is not null, then the plan will be
     1376             :      * re-scanned by the first ExecProcNode call.
    1377             :      */
    1378     1099144 :     if (node->js.ps.lefttree->chgParam == NULL)
    1379        1214 :         ExecReScan(node->js.ps.lefttree);
    1380     1099144 : }
    1381             : 
    1382             : void
    1383       25728 : ExecShutdownHashJoin(HashJoinState *node)
    1384             : {
    1385       25728 :     if (node->hj_HashTable)
    1386             :     {
    1387             :         /*
    1388             :          * Detach from shared state before DSM memory goes away.  This makes
    1389             :          * sure that we don't have any pointers into DSM memory by the time
    1390             :          * ExecEndHashJoin runs.
    1391             :          */
    1392       16648 :         ExecHashTableDetachBatch(node->hj_HashTable);
    1393       16648 :         ExecHashTableDetach(node->hj_HashTable);
    1394             :     }
    1395       25728 : }
    1396             : 
    1397             : static void
    1398          92 : ExecParallelHashJoinPartitionOuter(HashJoinState *hjstate)
    1399             : {
    1400          92 :     PlanState  *outerState = outerPlanState(hjstate);
    1401          92 :     ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
    1402          92 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1403             :     TupleTableSlot *slot;
    1404             :     uint32      hashvalue;
    1405             :     int         i;
    1406             : 
    1407             :     Assert(hjstate->hj_FirstOuterTupleSlot == NULL);
    1408             : 
    1409             :     /* Execute outer plan, writing all tuples to shared tuplestores. */
    1410             :     for (;;)
    1411             :     {
    1412     1440124 :         slot = ExecProcNode(outerState);
    1413      720108 :         if (TupIsNull(slot))
    1414             :             break;
    1415      720016 :         econtext->ecxt_outertuple = slot;
    1416      720016 :         if (ExecHashGetHashValue(hashtable, econtext,
    1417             :                                  hjstate->hj_OuterHashKeys,
    1418             :                                  true,  /* outer tuple */
    1419      720016 :                                  HJ_FILL_OUTER(hjstate),
    1420             :                                  &hashvalue))
    1421             :         {
    1422             :             int         batchno;
    1423             :             int         bucketno;
    1424             :             bool        shouldFree;
    1425      720016 :             MinimalTuple mintup = ExecFetchSlotMinimalTuple(slot, &shouldFree);
    1426             : 
    1427      720016 :             ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno,
    1428             :                                       &batchno);
    1429      720016 :             sts_puttuple(hashtable->batches[batchno].outer_tuples,
    1430             :                          &hashvalue, mintup);
    1431             : 
    1432      720016 :             if (shouldFree)
    1433      720016 :                 heap_free_minimal_tuple(mintup);
    1434             :         }
    1435      720016 :         CHECK_FOR_INTERRUPTS();
    1436             :     }
    1437             : 
    1438             :     /* Make sure all outer partitions are readable by any backend. */
    1439         812 :     for (i = 0; i < hashtable->nbatch; ++i)
    1440         720 :         sts_end_write(hashtable->batches[i].outer_tuples);
    1441          92 : }
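
Together with the reader in ExecParallelHashJoinOuterGetTuple above, this completes the SharedTuplestore lifecycle for a batch: participants write with sts_puttuple, close the write phase with sts_end_write, and then share a parallel scan in which each tuple is handed to exactly one reader. A minimal sketch of that lifecycle follows; the accessor is assumed to have been created elsewhere (via sts_initialize or sts_attach) with a uint32 of per-tuple meta-data for the hash value.

        #include "postgres.h"
        #include "access/htup_details.h"
        #include "utils/sharedtuplestore.h"

        static void
        demo_sts_lifecycle(SharedTuplestoreAccessor *accessor,
                           MinimalTuple tuple, uint32 hashvalue)
        {
            MinimalTuple read_tuple;
            uint32      read_hashvalue;

            /* Write phase: any participant may add tuples. */
            sts_puttuple(accessor, &hashvalue, tuple);
            sts_end_write(accessor);    /* partitions now readable anywhere */

            /* Read phase: the scan is shared across participants. */
            sts_begin_parallel_scan(accessor);
            while ((read_tuple = sts_parallel_scan_next(accessor,
                                                        &read_hashvalue)) != NULL)
            {
                /* ... process read_tuple and read_hashvalue ... */
            }
            sts_end_parallel_scan(accessor);
        }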
    1442             : 
    1443             : void
    1444          64 : ExecHashJoinEstimate(HashJoinState *state, ParallelContext *pcxt)
    1445             : {
    1446          64 :     shm_toc_estimate_chunk(&pcxt->estimator, sizeof(ParallelHashJoinState));
    1447          64 :     shm_toc_estimate_keys(&pcxt->estimator, 1);
    1448          64 : }
    1449             : 
    1450             : void
    1451          64 : ExecHashJoinInitializeDSM(HashJoinState *state, ParallelContext *pcxt)
    1452             : {
    1453          64 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1454             :     HashState  *hashNode;
    1455             :     ParallelHashJoinState *pstate;
    1456             : 
    1457             :     /*
    1458             :      * Disable shared hash table mode if we failed to create a real DSM
    1459             :      * segment, because that means that we don't have a DSA area to work with.
    1460             :      */
    1461          64 :     if (pcxt->seg == NULL)
    1462           0 :         return;
    1463             : 
    1464          64 :     ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin);
    1465             : 
    1466             :     /*
    1467             :      * Set up the state needed to coordinate access to the shared hash
    1468             :      * table(s), using the plan node ID as the toc key.
    1469             :      */
    1470          64 :     pstate = shm_toc_allocate(pcxt->toc, sizeof(ParallelHashJoinState));
    1471          64 :     shm_toc_insert(pcxt->toc, plan_node_id, pstate);
    1472             : 
    1473             :     /*
    1474             :      * Set up the shared hash join state with no batches initially.
    1475             :      * ExecHashTableCreate() will prepare at least one later and set nbatch
    1476             :      * and space_allowed.
    1477             :      */
    1478          64 :     pstate->nbatch = 0;
    1479          64 :     pstate->space_allowed = 0;
    1480          64 :     pstate->batches = InvalidDsaPointer;
    1481          64 :     pstate->old_batches = InvalidDsaPointer;
    1482          64 :     pstate->nbuckets = 0;
    1483          64 :     pstate->growth = PHJ_GROWTH_OK;
    1484          64 :     pstate->chunk_work_queue = InvalidDsaPointer;
    1485          64 :     pg_atomic_init_u32(&pstate->distributor, 0);
    1486          64 :     pstate->nparticipants = pcxt->nworkers + 1;
    1487          64 :     pstate->total_tuples = 0;
    1488          64 :     LWLockInitialize(&pstate->lock,
    1489             :                      LWTRANCHE_PARALLEL_HASH_JOIN);
    1490          64 :     BarrierInit(&pstate->build_barrier, 0);
    1491          64 :     BarrierInit(&pstate->grow_batches_barrier, 0);
    1492          64 :     BarrierInit(&pstate->grow_buckets_barrier, 0);
    1493             : 
    1494             :     /* Set up the space we'll use for shared temporary files. */
    1495          64 :     SharedFileSetInit(&pstate->fileset, pcxt->seg);
    1496             : 
    1497             :     /* Initialize the shared state in the hash node. */
    1498          64 :     hashNode = (HashState *) innerPlanState(state);
    1499          64 :     hashNode->parallel_state = pstate;
    1500             : }
    1501             : 
    1502             : /* ----------------------------------------------------------------
    1503             :  *      ExecHashJoinReInitializeDSM
    1504             :  *
    1505             :  *      Reset shared state before beginning a fresh scan.
    1506             :  * ----------------------------------------------------------------
    1507             :  */
    1508             : void
    1509          32 : ExecHashJoinReInitializeDSM(HashJoinState *state, ParallelContext *cxt)
    1510             : {
    1511          32 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1512          32 :     ParallelHashJoinState *pstate =
    1513          32 :     shm_toc_lookup(cxt->toc, plan_node_id, false);
    1514             : 
    1515             :     /*
    1516             :      * It would be possible to reuse the shared hash table in single-batch
    1517             :      * cases by resetting and then fast-forwarding build_barrier to
    1518             :      * PHJ_BUILD_DONE and batch 0's batch_barrier to PHJ_BATCH_PROBING, but
    1519             :      * currently shared hash tables are already freed by now (by the last
    1520             :      * participant to detach from the batch).  We could consider keeping it
    1521             :      * around for single-batch joins.  We'd also need to adjust
    1522             :      * finalize_plan() so that it doesn't record a dummy dependency for
    1523             :      * Parallel Hash nodes, preventing the rescan optimization.  For now we
    1524             :      * don't try.
    1525             :      */
    1526             : 
    1527             :     /* Detach, freeing any remaining shared memory. */
    1528          32 :     if (state->hj_HashTable != NULL)
    1529             :     {
    1530           0 :         ExecHashTableDetachBatch(state->hj_HashTable);
    1531           0 :         ExecHashTableDetach(state->hj_HashTable);
    1532             :     }
    1533             : 
    1534             :     /* Clear any shared batch files. */
    1535          32 :     SharedFileSetDeleteAll(&pstate->fileset);
    1536             : 
    1537             :     /* Reset build_barrier to PHJ_BUILD_ELECTING so we can go around again. */
    1538          32 :     BarrierInit(&pstate->build_barrier, 0);
    1539          32 : }
    1540             : 
    1541             : void
    1542         180 : ExecHashJoinInitializeWorker(HashJoinState *state,
    1543             :                              ParallelWorkerContext *pwcxt)
    1544             : {
    1545             :     HashState  *hashNode;
    1546         180 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1547         180 :     ParallelHashJoinState *pstate =
    1548         180 :     shm_toc_lookup(pwcxt->toc, plan_node_id, false);
    1549             : 
    1550             :     /* Attach to the space for shared temporary files. */
    1551         180 :     SharedFileSetAttach(&pstate->fileset, pwcxt->seg);
    1552             : 
    1553             :     /* Attach to the shared state in the hash node. */
    1554         180 :     hashNode = (HashState *) innerPlanState(state);
    1555         180 :     hashNode->parallel_state = pstate;
    1556             : 
    1557         180 :     ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin);
    1558         180 : }

Generated by: LCOV version 1.13