LCOV - code coverage report
Current view: top level - src/backend/executor - nodeHashjoin.c (source / functions)
Test: PostgreSQL 17devel
Date: 2024-03-29 14:11:41
Coverage: Lines: 427 of 468 hit (91.2 %) -- Functions: 18 of 18 hit (100.0 %)

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * nodeHashjoin.c
       4             :  *    Routines to handle hash join nodes
       5             :  *
       6             :  * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
       7             :  * Portions Copyright (c) 1994, Regents of the University of California
       8             :  *
       9             :  *
      10             :  * IDENTIFICATION
      11             :  *    src/backend/executor/nodeHashjoin.c
      12             :  *
      13             :  * HASH JOIN
      14             :  *
      15             :  * This is based on the "hybrid hash join" algorithm described briefly on the
      16             :  * following page
      17             :  *
      18             :  *   https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join
      19             :  *
      20             :  * and in detail in the referenced paper:
      21             :  *
      22             :  *   "An Adaptive Hash Join Algorithm for Multiuser Environments"
      23             :  *   Hansjörg Zeller; Jim Gray (1990). Proceedings of the 16th VLDB conference.
      24             :  *   Brisbane: 186–197.
      25             :  *
      26             :  * If the inner side tuples of a hash join do not fit in memory, the hash join
      27             :  * can be executed in multiple batches.
      28             :  *
      29             :  * If the statistics on the inner side relation are accurate, the planner
      30             :  * chooses a multi-batch strategy and estimates the number of batches up front.
      31             :  *
      32             :  * The query executor measures the real size of the hashtable and increases the
      33             :  * number of batches if the hashtable grows too large.
      34             :  *
      35             :  * The number of batches is always a power of two, so an increase in the number
      36             :  * of batches doubles it.
      37             :  *
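
As a rough illustration of why the power-of-two rule matters, here is a
hypothetical helper (not the actual ExecHashGetBucketAndBatch() in nodeHash.c):
with nbuckets and nbatch both powers of two, the bucket and batch numbers are
simply masked out of different bits of the hash value, so doubling nbatch
consumes one more hash bit and splits each old batch into exactly two new ones.

#include <stdint.h>

/*
 * Hypothetical sketch: the low bits of the hash choose the bucket, the next
 * bits choose the batch.  Doubling nbatch only widens the batch mask.
 */
static inline void
sketch_get_bucket_and_batch(uint32_t hashvalue,
                            uint32_t nbuckets,      /* power of two */
                            int log2_nbuckets,      /* == log2(nbuckets) */
                            uint32_t nbatch,        /* power of two */
                            uint32_t *bucketno,
                            uint32_t *batchno)
{
    *bucketno = hashvalue & (nbuckets - 1);
    *batchno = (hashvalue >> log2_nbuckets) & (nbatch - 1);
}
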
      38             :  * Serial hash join measures batch size lazily -- waiting until it is loading a
      39             :  * batch to determine whether it will fit in memory. While inserting tuples into
      40             :  * the hashtable, serial hash join will, if adding a tuple would push the
      41             :  * hashtable past work_mem, dump tuples out of the hashtable and reassign them
      42             :  * either to other batch files or to the current batch resident in the hashtable.
      43             :  *
      44             :  * Parallel hash join, on the other hand, completes all changes to the number
      45             :  * of batches during the build phase. If it increases the number of batches, it
      46             :  * dumps out all the tuples from all batches and reassigns them to entirely new
      47             :  * batch files. Then it checks every batch to ensure it will fit in the space
      48             :  * budget for the query.
      49             :  *
      50             :  * In both parallel and serial hash join, the executor currently makes a best
      51             :  * effort. If a particular batch will not fit in memory, it tries doubling the
      52             :  * number of batches. If, after a batch increase, there is a batch which
      53             :  * retained all or none of its tuples, doubling again cannot help, so the
      54             :  * executor disables growth in the number of batches globally. After growth is
      55             :  * disabled, any batch that would previously have triggered an increase in the
      56             :  * number of batches is instead allowed to exceed the space budget.
      57             :  *
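
The relocation decision described above can be pictured with the following
hedged sketch (hypothetical names and types; the real logic lives in
ExecHashIncreaseNumBatches() in nodeHash.c): after nbatch doubles, each
resident tuple either stays in the current batch or is dumped to the batch
file it now maps to, and if the split moved everything or nothing, the batch
cannot be shrunk by further doubling, so growth is disabled.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical in-memory tuple: only the stored hash value matters here. */
typedef struct SketchTuple
{
    uint32_t    hashvalue;
    struct SketchTuple *next;
} SketchTuple;

/*
 * Re-examine every tuple of the current batch after nbatch has doubled.
 * Returns false if further growth should be disabled because the doubling
 * failed to split this batch.
 */
static bool
sketch_repartition(SketchTuple *tuples, uint32_t curbatch,
                   int log2_nbuckets, uint32_t nbatch,
                   void (*dump_to_batch_file) (SketchTuple *, uint32_t))
{
    size_t      nkept = 0;
    size_t      nmoved = 0;

    for (SketchTuple *t = tuples; t != NULL; t = t->next)
    {
        uint32_t    batchno = (t->hashvalue >> log2_nbuckets) & (nbatch - 1);

        if (batchno == curbatch)
            nkept++;            /* tuple stays resident in the hashtable */
        else
        {
            dump_to_batch_file(t, batchno);
            nmoved++;
        }
    }

    /* Growth is futile if every tuple stayed put, or every tuple moved. */
    return nkept > 0 && nmoved > 0;
}
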
      58             :  * PARALLELISM
      59             :  *
      60             :  * Hash joins can participate in parallel query execution in several ways.  A
      61             :  * parallel-oblivious hash join is one where the node is unaware that it is
      62             :  * part of a parallel plan.  In this case, a copy of the inner plan is used to
      63             :  * build a copy of the hash table in every backend, and the outer plan could
      64             :  * either be built from a partial or complete path, so that the results of the
      65             :  * hash join are correspondingly either partial or complete.  A parallel-aware
      66             :  * hash join is one that behaves differently, coordinating work between
      67             :  * backends, and appears as Parallel Hash Join in EXPLAIN output.  A Parallel
      68             :  * Hash Join always appears with a Parallel Hash node.
      69             :  *
      70             :  * Parallel-aware hash joins use the same per-backend state machine to track
      71             :  * progress through the hash join algorithm as parallel-oblivious hash joins.
      72             :  * In a parallel-aware hash join, there is also a shared state machine that
      73             :  * co-operating backends use to synchronize their local state machines and
      74             :  * program counters.  The shared state machine is managed with a Barrier IPC
      75             :  * primitive.  When all attached participants arrive at a barrier, the phase
      76             :  * advances and all waiting participants are released.
      77             :  *
      78             :  * When a participant begins working on a parallel hash join, it must first
      79             :  * figure out how much progress has already been made, because participants
      80             :  * don't wait for each other to begin.  For this reason there are switch
      81             :  * statements at key points in the code where we have to synchronize our local
      82             :  * state machine with the phase, and then jump to the correct part of the
      83             :  * algorithm so that we can get started.
      84             :  *
      85             :  * One barrier called build_barrier is used to coordinate the hashing phases.
      86             :  * The phase is represented by an integer which begins at zero and increments
      87             :  * one by one, but in the code it is referred to by symbolic names as follows.
      88             :  * An asterisk indicates a phase that is performed by a single arbitrarily
      89             :  * chosen process.
      90             :  *
      91             :  *   PHJ_BUILD_ELECT                 -- initial state
      92             :  *   PHJ_BUILD_ALLOCATE*             -- one sets up the batches and table 0
      93             :  *   PHJ_BUILD_HASH_INNER            -- all hash the inner rel
      94             :  *   PHJ_BUILD_HASH_OUTER            -- (multi-batch only) all hash the outer
      95             :  *   PHJ_BUILD_RUN                   -- building done, probing can begin
      96             :  *   PHJ_BUILD_FREE*                 -- all work complete, one frees batches
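
The "jump to the correct part of the algorithm" pattern mentioned above looks
roughly like the sketch below. This is a simplified, hypothetical illustration,
not the actual code (which is split between ExecHashTableCreate() and
MultiExecParallelHash() in nodeHash.c): a worker attaches to the build barrier,
switches on whatever phase it finds, and falls through the remaining steps,
using the boolean result of BarrierArriveAndWait() to elect a single
participant where one is needed.

#include "postgres.h"
#include "executor/hashjoin.h"      /* PHJ_BUILD_* phase numbers */
#include "storage/barrier.h"
#include "utils/wait_event.h"

/*
 * Hypothetical sketch of the attach-and-jump pattern.  A worker that attaches
 * late simply starts at whatever phase the build barrier has reached; it
 * remains attached afterwards, until the join as a whole is finished.
 */
static void
sketch_help_build(Barrier *build_barrier)
{
    switch (BarrierAttach(build_barrier))
    {
        case PHJ_BUILD_ELECT:
            if (BarrierArriveAndWait(build_barrier, WAIT_EVENT_HASH_BUILD_ELECT))
            {
                /* we were elected: set up the batches and hash table 0 here */
            }
            /* FALLTHROUGH */
        case PHJ_BUILD_ALLOCATE:
            BarrierArriveAndWait(build_barrier, WAIT_EVENT_HASH_BUILD_ALLOCATE);
            /* FALLTHROUGH */
        case PHJ_BUILD_HASH_INNER:
            /* hash our share of the inner relation, then wait for the others */
            BarrierArriveAndWait(build_barrier, WAIT_EVENT_HASH_BUILD_HASH_INNER);
            /* FALLTHROUGH */
        default:
            /* PHJ_BUILD_HASH_OUTER and later phases are handled by the join */
            break;
    }
}
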
      97             :  *
      98             :  * While in the phase PHJ_BUILD_HASH_INNER a separate pair of barriers may
      99             :  * be used repeatedly as required to coordinate expansions in the number of
     100             :  * batches or buckets.  Their phases are as follows:
     101             :  *
     102             :  *   PHJ_GROW_BATCHES_ELECT          -- initial state
     103             :  *   PHJ_GROW_BATCHES_REALLOCATE*    -- one allocates new batches
     104             :  *   PHJ_GROW_BATCHES_REPARTITION    -- all repartition
     105             :  *   PHJ_GROW_BATCHES_DECIDE*        -- one detects skew and cleans up
     106             :  *   PHJ_GROW_BATCHES_FINISH         -- finished one growth cycle
     107             :  *
     108             :  *   PHJ_GROW_BUCKETS_ELECT          -- initial state
     109             :  *   PHJ_GROW_BUCKETS_REALLOCATE*    -- one allocates new buckets
     110             :  *   PHJ_GROW_BUCKETS_REINSERT       -- all insert tuples
     111             :  *
     112             :  * If the planner got the number of batches and buckets right, those won't be
     113             :  * necessary, but on the other hand we might end up needing to expand the
     114             :  * buckets or batches multiple times while hashing the inner relation to stay
     115             :  * within our memory budget and load factor target.  For that reason it's a
     116             :  * separate pair of barriers using circular phases.
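
Because the same two barriers are reused for every expansion cycle, their phase
numbers keep increasing while the algorithm's steps repeat. Below is a hedged
sketch of that circular mapping, using hypothetical macro names modeled on the
PHJ_GROW_BATCHES_* definitions in executor/hashjoin.h.

/*
 * Hypothetical sketch: reduce the ever-increasing barrier phase modulo the
 * number of steps in one growth cycle to find the current step.
 */
#define SKETCH_GROW_BATCHES_ELECT        0
#define SKETCH_GROW_BATCHES_REALLOCATE   1
#define SKETCH_GROW_BATCHES_REPARTITION  2
#define SKETCH_GROW_BATCHES_DECIDE       3
#define SKETCH_GROW_BATCHES_FINISH       4
#define SKETCH_GROW_BATCHES_PHASE_COUNT  5
#define SKETCH_GROW_BATCHES_PHASE(n)     ((n) % SKETCH_GROW_BATCHES_PHASE_COUNT)

/* For example, raw barrier phase 7 is the REPARTITION step of the second */
/* cycle: SKETCH_GROW_BATCHES_PHASE(7) == SKETCH_GROW_BATCHES_REPARTITION. */
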
     117             :  *
     118             :  * The PHJ_BUILD_HASH_OUTER phase is required only for multi-batch joins,
     119             :  * because we need to divide the outer relation into batches up front in order
     120             :  * to be able to process batches entirely independently.  In contrast, the
     121             :  * parallel-oblivious algorithm simply throws tuples 'forward' to 'later'
     122             :  * batches whenever it encounters them while scanning and probing, which it
     123             :  * can do because it processes batches in serial order.
     124             :  *
     125             :  * Once PHJ_BUILD_RUN is reached, backends then split up and process
     126             :  * different batches, or gang up and work together on probing batches if there
     127             :  * aren't enough to go around.  For each batch there is a separate barrier
     128             :  * with the following phases:
     129             :  *
     130             :  *  PHJ_BATCH_ELECT          -- initial state
     131             :  *  PHJ_BATCH_ALLOCATE*      -- one allocates buckets
     132             :  *  PHJ_BATCH_LOAD           -- all load the hash table from disk
     133             :  *  PHJ_BATCH_PROBE          -- all probe
     134             :  *  PHJ_BATCH_SCAN*          -- one does right/right-anti/full unmatched scan
     135             :  *  PHJ_BATCH_FREE*          -- one frees memory
     136             :  *
     137             :  * Batch 0 is a special case, because it starts out in phase
     138             :  * PHJ_BATCH_PROBE; populating batch 0's hash table is done during
     139             :  * PHJ_BUILD_HASH_INNER so we can skip loading.
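
How a backend "selects a batch to work on" can be sketched as follows (a
hypothetical helper, much simpler than the real ExecParallelHashJoinNewBatch()
later in this file): attach to a batch's barrier and either join in at
whatever phase it has reached, or detach again and move on if that batch is
already past the point where help is useful.

#include "postgres.h"
#include "executor/hashjoin.h"      /* PHJ_BATCH_* phase numbers */
#include "storage/barrier.h"

/*
 * Hypothetical sketch of batch selection.  Returns the first batch that still
 * has useful work, or -1 if every batch is already being scanned or freed.
 */
static int
sketch_pick_batch(Barrier *batch_barriers, int nbatch)
{
    for (int batchno = 0; batchno < nbatch; batchno++)
    {
        Barrier    *batch_barrier = &batch_barriers[batchno];

        switch (BarrierAttach(batch_barrier))
        {
            case PHJ_BATCH_ELECT:
            case PHJ_BATCH_ALLOCATE:
            case PHJ_BATCH_LOAD:
            case PHJ_BATCH_PROBE:
                /* we can still contribute to this batch; stay attached */
                return batchno;

            default:
                /* PHJ_BATCH_SCAN or PHJ_BATCH_FREE: too late to help here */
                BarrierDetach(batch_barrier);
                break;
        }
    }
    return -1;
}
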
     140             :  *
     141             :  * Initially we try to plan for a single-batch hash join using the combined
     142             :  * hash_mem of all participants to create a large shared hash table.  If that
     143             :  * turns out either at planning or execution time to be impossible then we
     144             :  * fall back to regular hash_mem sized hash tables.
     145             :  *
     146             :  * To avoid deadlocks, we never wait for any barrier unless it is known that
     147             :  * all other backends attached to it are actively executing the node or have
     148             :  * finished.  Practically, that means that we never emit a tuple while attached
     149             :  * to a barrier, unless the barrier has reached a phase that means that no
     150             :  * process will wait on it again.  We emit tuples while attached to the build
     151             :  * barrier in phase PHJ_BUILD_RUN, and to a per-batch barrier in phase
     152             :  * PHJ_BATCH_PROBE.  These are advanced to PHJ_BUILD_FREE and PHJ_BATCH_SCAN
     153             :  * respectively without waiting, using BarrierArriveAndDetach() and
     154             :  * BarrierArriveAndDetachExceptLast() respectively.  The last to detach
     155             :  * receives a different return value so that it knows that it's safe to
     156             :  * clean up.  Any straggler process that attaches after that phase is reached
     157             :  * will see that it's too late to participate or access the relevant shared
     158             :  * memory objects.
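
The probe-to-scan hand-off can be sketched like this (a hypothetical function
illustrating the pattern; the real calls are made in this file and in
nodeHash.c): every participant arrives and detaches from the per-batch barrier
without waiting, and only the one that turns out to be last remains attached,
making it safe for that one process to run the unmatched scan and release the
batch.

#include "postgres.h"
#include "storage/barrier.h"

/*
 * Hypothetical sketch: finish probing a batch without ever waiting on the
 * barrier while other participants might still be emitting tuples.
 */
static void
sketch_finish_probing(Barrier *batch_barrier)
{
    if (BarrierArriveAndDetachExceptLast(batch_barrier))
    {
        /*
         * We were the last attached participant, so it is now safe to scan
         * the hash table for unmatched inner tuples here (if the join type
         * needs it) ...
         */

        /* ... then detach for real, allowing the batch to be freed. */
        BarrierArriveAndDetach(batch_barrier);
    }
    /* else: we already detached; go look for another batch to help with. */
}
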
     159             :  *
     160             :  *-------------------------------------------------------------------------
     161             :  */
     162             : 
     163             : #include "postgres.h"
     164             : 
     165             : #include "access/htup_details.h"
     166             : #include "access/parallel.h"
     167             : #include "executor/executor.h"
     168             : #include "executor/hashjoin.h"
     169             : #include "executor/nodeHash.h"
     170             : #include "executor/nodeHashjoin.h"
     171             : #include "miscadmin.h"
     172             : #include "utils/sharedtuplestore.h"
     173             : #include "utils/wait_event.h"
     174             : 
     175             : 
     176             : /*
     177             :  * States of the ExecHashJoin state machine
     178             :  */
     179             : #define HJ_BUILD_HASHTABLE      1
     180             : #define HJ_NEED_NEW_OUTER       2
     181             : #define HJ_SCAN_BUCKET          3
     182             : #define HJ_FILL_OUTER_TUPLE     4
     183             : #define HJ_FILL_INNER_TUPLES    5
     184             : #define HJ_NEED_NEW_BATCH       6
     185             : 
     186             : /* Returns true if doing null-fill on outer relation */
     187             : #define HJ_FILL_OUTER(hjstate)  ((hjstate)->hj_NullInnerTupleSlot != NULL)
     188             : /* Returns true if doing null-fill on inner relation */
     189             : #define HJ_FILL_INNER(hjstate)  ((hjstate)->hj_NullOuterTupleSlot != NULL)
     190             : 
     191             : static TupleTableSlot *ExecHashJoinOuterGetTuple(PlanState *outerNode,
     192             :                                                  HashJoinState *hjstate,
     193             :                                                  uint32 *hashvalue);
     194             : static TupleTableSlot *ExecParallelHashJoinOuterGetTuple(PlanState *outerNode,
     195             :                                                          HashJoinState *hjstate,
     196             :                                                          uint32 *hashvalue);
     197             : static TupleTableSlot *ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
     198             :                                                  BufFile *file,
     199             :                                                  uint32 *hashvalue,
     200             :                                                  TupleTableSlot *tupleSlot);
     201             : static bool ExecHashJoinNewBatch(HashJoinState *hjstate);
     202             : static bool ExecParallelHashJoinNewBatch(HashJoinState *hjstate);
     203             : static void ExecParallelHashJoinPartitionOuter(HashJoinState *hjstate);
     204             : 
     205             : 
     206             : /* ----------------------------------------------------------------
     207             :  *      ExecHashJoinImpl
     208             :  *
     209             :  *      This function implements the Hybrid Hashjoin algorithm.  It is marked
     210             :  *      with an always-inline attribute so that ExecHashJoin() and
     211             :  *      ExecParallelHashJoin() can inline it.  Compilers that respect the
     212             :  *      attribute should create versions specialized for parallel == true and
     213             :  *      parallel == false with unnecessary branches removed.
     214             :  *
     215             :  *      Note: the relation we build the hash table on is the "inner" one;
     216             :  *            the other one is the "outer".
     217             :  * ----------------------------------------------------------------
     218             :  */
     219             : static pg_attribute_always_inline TupleTableSlot *
     220     9369912 : ExecHashJoinImpl(PlanState *pstate, bool parallel)
     221             : {
     222     9369912 :     HashJoinState *node = castNode(HashJoinState, pstate);
     223             :     PlanState  *outerNode;
     224             :     HashState  *hashNode;
     225             :     ExprState  *joinqual;
     226             :     ExprState  *otherqual;
     227             :     ExprContext *econtext;
     228             :     HashJoinTable hashtable;
     229             :     TupleTableSlot *outerTupleSlot;
     230             :     uint32      hashvalue;
     231             :     int         batchno;
     232             :     ParallelHashJoinState *parallel_state;
     233             : 
     234             :     /*
     235             :      * get information from HashJoin node
     236             :      */
     237     9369912 :     joinqual = node->js.joinqual;
     238     9369912 :     otherqual = node->js.ps.qual;
     239     9369912 :     hashNode = (HashState *) innerPlanState(node);
     240     9369912 :     outerNode = outerPlanState(node);
     241     9369912 :     hashtable = node->hj_HashTable;
     242     9369912 :     econtext = node->js.ps.ps_ExprContext;
     243     9369912 :     parallel_state = hashNode->parallel_state;
     244             : 
     245             :     /*
     246             :      * Reset per-tuple memory context to free any expression evaluation
     247             :      * storage allocated in the previous tuple cycle.
     248             :      */
     249     9369912 :     ResetExprContext(econtext);
     250             : 
     251             :     /*
     252             :      * run the hash join state machine
     253             :      */
     254             :     for (;;)
     255             :     {
     256             :         /*
     257             :          * It's possible to iterate this loop many times before returning a
     258             :          * tuple, in some pathological cases such as needing to move much of
     259             :          * the current batch to a later batch.  So let's check for interrupts
     260             :          * each time through.
     261             :          */
     262    35745084 :         CHECK_FOR_INTERRUPTS();
     263             : 
     264    35745084 :         switch (node->hj_JoinState)
     265             :         {
     266       24380 :             case HJ_BUILD_HASHTABLE:
     267             : 
     268             :                 /*
     269             :                  * First time through: build hash table for inner relation.
     270             :                  */
     271             :                 Assert(hashtable == NULL);
     272             : 
     273             :                 /*
     274             :                  * If the outer relation is completely empty, and it's not
     275             :                  * right/right-anti/full join, we can quit without building
     276             :                  * the hash table.  However, for an inner join it is only a
     277             :                  * win to check this when the outer relation's startup cost is
     278             :                  * less than the projected cost of building the hash table.
     279             :                  * Otherwise it's best to build the hash table first and see
     280             :                  * if the inner relation is empty.  (When it's a left join, we
     281             :                  * should always make this check, since we aren't going to be
     282             :                  * able to skip the join on the strength of an empty inner
     283             :                  * relation anyway.)
     284             :                  *
     285             :                  * If we are rescanning the join, we make use of information
     286             :                  * gained on the previous scan: don't bother to try the
     287             :                  * prefetch if the previous scan found the outer relation
     288             :                  * nonempty. This is not 100% reliable since with new
     289             :                  * parameters the outer relation might yield different
     290             :                  * results, but it's a good heuristic.
     291             :                  *
     292             :                  * The only way to make the check is to try to fetch a tuple
     293             :                  * from the outer plan node.  If we succeed, we have to stash
     294             :                  * it away for later consumption by ExecHashJoinOuterGetTuple.
     295             :                  */
     296       24380 :                 if (HJ_FILL_INNER(node))
     297             :                 {
     298             :                     /* no chance to not build the hash table */
     299        5326 :                     node->hj_FirstOuterTupleSlot = NULL;
     300             :                 }
     301       19054 :                 else if (parallel)
     302             :                 {
     303             :                     /*
     304             :                      * The empty-outer optimization is not implemented for
     305             :                      * shared hash tables, because no one participant can
     306             :                      * determine that there are no outer tuples, and it's not
     307             :                      * yet clear that it's worth the synchronization overhead
     308             :                      * of reaching consensus to figure that out.  So we have
     309             :                      * to build the hash table.
     310             :                      */
     311         326 :                     node->hj_FirstOuterTupleSlot = NULL;
     312             :                 }
     313       18728 :                 else if (HJ_FILL_OUTER(node) ||
     314       13920 :                          (outerNode->plan->startup_cost < hashNode->ps.plan->total_cost &&
     315       12962 :                           !node->hj_OuterNotEmpty))
     316             :                 {
     317       16354 :                     node->hj_FirstOuterTupleSlot = ExecProcNode(outerNode);
     318       16354 :                     if (TupIsNull(node->hj_FirstOuterTupleSlot))
     319             :                     {
     320        4144 :                         node->hj_OuterNotEmpty = false;
     321        4144 :                         return NULL;
     322             :                     }
     323             :                     else
     324       12210 :                         node->hj_OuterNotEmpty = true;
     325             :                 }
     326             :                 else
     327        2374 :                     node->hj_FirstOuterTupleSlot = NULL;
     328             : 
     329             :                 /*
     330             :                  * Create the hash table.  If using Parallel Hash, then
     331             :                  * whoever gets here first will create the hash table and any
     332             :                  * later arrivals will merely attach to it.
     333             :                  */
     334       20236 :                 hashtable = ExecHashTableCreate(hashNode,
     335             :                                                 node->hj_HashOperators,
     336             :                                                 node->hj_Collations,
     337       20236 :                                                 HJ_FILL_INNER(node));
     338       20236 :                 node->hj_HashTable = hashtable;
     339             : 
     340             :                 /*
     341             :                  * Execute the Hash node, to build the hash table.  If using
     342             :                  * Parallel Hash, then we'll try to help hashing unless we
     343             :                  * arrived too late.
     344             :                  */
     345       20236 :                 hashNode->hashtable = hashtable;
     346       20236 :                 (void) MultiExecProcNode((PlanState *) hashNode);
     347             : 
     348             :                 /*
     349             :                  * If the inner relation is completely empty, and we're not
     350             :                  * doing a left outer join, we can quit without scanning the
     351             :                  * outer relation.
     352             :                  */
     353       20236 :                 if (hashtable->totalTuples == 0 && !HJ_FILL_OUTER(node))
     354             :                 {
     355        1674 :                     if (parallel)
     356             :                     {
     357             :                         /*
     358             :                          * Advance the build barrier to PHJ_BUILD_RUN before
     359             :                          * proceeding so we can negotiate resource cleanup.
     360             :                          */
     361           6 :                         Barrier    *build_barrier = &parallel_state->build_barrier;
     362             : 
     363           8 :                         while (BarrierPhase(build_barrier) < PHJ_BUILD_RUN)
     364           2 :                             BarrierArriveAndWait(build_barrier, 0);
     365             :                     }
     366        1674 :                     return NULL;
     367             :                 }
     368             : 
     369             :                 /*
     370             :                  * need to remember whether nbatch has increased since we
     371             :                  * began scanning the outer relation
     372             :                  */
     373       18562 :                 hashtable->nbatch_outstart = hashtable->nbatch;
     374             : 
     375             :                 /*
     376             :                  * Reset OuterNotEmpty for scan.  (It's OK if we fetched a
     377             :                  * tuple above, because ExecHashJoinOuterGetTuple will
     378             :                  * immediately set it again.)
     379             :                  */
     380       18562 :                 node->hj_OuterNotEmpty = false;
     381             : 
     382       18562 :                 if (parallel)
     383             :                 {
     384             :                     Barrier    *build_barrier;
     385             : 
     386         392 :                     build_barrier = &parallel_state->build_barrier;
     387             :                     Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASH_OUTER ||
     388             :                            BarrierPhase(build_barrier) == PHJ_BUILD_RUN ||
     389             :                            BarrierPhase(build_barrier) == PHJ_BUILD_FREE);
     390         392 :                     if (BarrierPhase(build_barrier) == PHJ_BUILD_HASH_OUTER)
     391             :                     {
     392             :                         /*
     393             :                          * If multi-batch, we need to hash the outer relation
     394             :                          * up front.
     395             :                          */
     396         256 :                         if (hashtable->nbatch > 1)
     397         142 :                             ExecParallelHashJoinPartitionOuter(node);
     398         256 :                         BarrierArriveAndWait(build_barrier,
     399             :                                              WAIT_EVENT_HASH_BUILD_HASH_OUTER);
     400             :                     }
     401         136 :                     else if (BarrierPhase(build_barrier) == PHJ_BUILD_FREE)
     402             :                     {
     403             :                         /*
     404             :                          * If we attached so late that the job is finished and
     405             :                          * the batch state has been freed, we can return
     406             :                          * immediately.
     407             :                          */
     408           2 :                         return NULL;
     409             :                     }
     410             : 
     411             :                     /* Each backend should now select a batch to work on. */
     412             :                     Assert(BarrierPhase(build_barrier) == PHJ_BUILD_RUN);
     413         390 :                     hashtable->curbatch = -1;
     414         390 :                     node->hj_JoinState = HJ_NEED_NEW_BATCH;
     415             : 
     416         390 :                     continue;
     417             :                 }
     418             :                 else
     419       18170 :                     node->hj_JoinState = HJ_NEED_NEW_OUTER;
     420             : 
     421             :                 /* FALL THRU */
     422             : 
     423    16837462 :             case HJ_NEED_NEW_OUTER:
     424             : 
     425             :                 /*
     426             :                  * We don't have an outer tuple, try to get the next one
     427             :                  */
     428    16837462 :                 if (parallel)
     429             :                     outerTupleSlot =
     430     2160900 :                         ExecParallelHashJoinOuterGetTuple(outerNode, node,
     431             :                                                           &hashvalue);
     432             :                 else
     433             :                     outerTupleSlot =
     434    14676562 :                         ExecHashJoinOuterGetTuple(outerNode, node, &hashvalue);
     435             : 
     436    16837462 :                 if (TupIsNull(outerTupleSlot))
     437             :                 {
     438             :                     /* end of batch, or maybe whole join */
     439       20782 :                     if (HJ_FILL_INNER(node))
     440             :                     {
     441             :                         /* set up to scan for unmatched inner tuples */
     442        5086 :                         if (parallel)
     443             :                         {
     444             :                             /*
     445             :                              * Only one process is currently allowed to
     446             :                              * handle each batch's unmatched tuples in a
     447             :                              * parallel join.
     448             :                              */
     449          74 :                             if (ExecParallelPrepHashTableForUnmatched(node))
     450          66 :                                 node->hj_JoinState = HJ_FILL_INNER_TUPLES;
     451             :                             else
     452           8 :                                 node->hj_JoinState = HJ_NEED_NEW_BATCH;
     453             :                         }
     454             :                         else
     455             :                         {
     456        5012 :                             ExecPrepHashTableForUnmatched(node);
     457        5012 :                             node->hj_JoinState = HJ_FILL_INNER_TUPLES;
     458             :                         }
     459             :                     }
     460             :                     else
     461       15696 :                         node->hj_JoinState = HJ_NEED_NEW_BATCH;
     462       20782 :                     continue;
     463             :                 }
     464             : 
     465    16816680 :                 econtext->ecxt_outertuple = outerTupleSlot;
     466    16816680 :                 node->hj_MatchedOuter = false;
     467             : 
     468             :                 /*
     469             :                  * Find the corresponding bucket for this tuple in the main
     470             :                  * hash table or skew hash table.
     471             :                  */
     472    16816680 :                 node->hj_CurHashValue = hashvalue;
     473    16816680 :                 ExecHashGetBucketAndBatch(hashtable, hashvalue,
     474             :                                           &node->hj_CurBucketNo, &batchno);
     475    16816680 :                 node->hj_CurSkewBucketNo = ExecHashGetSkewBucket(hashtable,
     476             :                                                                  hashvalue);
     477    16816680 :                 node->hj_CurTuple = NULL;
     478             : 
     479             :                 /*
     480             :                  * The tuple might not belong to the current batch (where
     481             :                  * "current batch" includes the skew buckets if any).
     482             :                  */
     483    16816680 :                 if (batchno != hashtable->curbatch &&
     484     1471392 :                     node->hj_CurSkewBucketNo == INVALID_SKEW_BUCKET_NO)
     485             :                 {
     486             :                     bool        shouldFree;
     487     1470192 :                     MinimalTuple mintuple = ExecFetchSlotMinimalTuple(outerTupleSlot,
     488             :                                                                       &shouldFree);
     489             : 
     490             :                     /*
     491             :                      * Need to postpone this outer tuple to a later batch.
     492             :                      * Save it in the corresponding outer-batch file.
     493             :                      */
     494             :                     Assert(parallel_state == NULL);
     495             :                     Assert(batchno > hashtable->curbatch);
     496     1470192 :                     ExecHashJoinSaveTuple(mintuple, hashvalue,
     497     1470192 :                                           &hashtable->outerBatchFile[batchno],
     498             :                                           hashtable);
     499             : 
     500     1470192 :                     if (shouldFree)
     501     1470192 :                         heap_free_minimal_tuple(mintuple);
     502             : 
     503             :                     /* Loop around, staying in HJ_NEED_NEW_OUTER state */
     504     1470192 :                     continue;
     505             :                 }
     506             : 
     507             :                 /* OK, let's scan the bucket for matches */
     508    15346488 :                 node->hj_JoinState = HJ_SCAN_BUCKET;
     509             : 
     510             :                 /* FALL THRU */
     511             : 
     512    21803588 :             case HJ_SCAN_BUCKET:
     513             : 
     514             :                 /*
     515             :                  * Scan the selected hash bucket for matches to current outer
     516             :                  */
     517    21803588 :                 if (parallel)
     518             :                 {
     519     4200054 :                     if (!ExecParallelScanHashBucket(node, econtext))
     520             :                     {
     521             :                         /* out of matches; check for possible outer-join fill */
     522     2160030 :                         node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
     523     2160030 :                         continue;
     524             :                     }
     525             :                 }
     526             :                 else
     527             :                 {
     528    17603534 :                     if (!ExecScanHashBucket(node, econtext))
     529             :                     {
     530             :                         /* out of matches; check for possible outer-join fill */
     531     9813198 :                         node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
     532     9813198 :                         continue;
     533             :                     }
     534             :                 }
     535             : 
     536             :                 /*
     537             :                  * We've got a match, but still need to test non-hashed quals.
     538             :                  * ExecScanHashBucket already set up all the state needed to
     539             :                  * call ExecQual.
     540             :                  *
     541             :                  * If we pass the qual, then save state for next call and have
     542             :                  * ExecProject form the projection, store it in the tuple
     543             :                  * table, and return the slot.
     544             :                  *
     545             :                  * Only the joinquals determine tuple match status, but all
     546             :                  * quals must pass to actually return the tuple.
     547             :                  */
     548     9830360 :                 if (joinqual == NULL || ExecQual(joinqual, econtext))
     549             :                 {
     550     9677582 :                     node->hj_MatchedOuter = true;
     551             : 
     552             : 
     553             :                     /*
     554             :                      * This is really only needed if HJ_FILL_INNER(node), but
     555             :                      * we'll avoid the branch and just set it always.
     556             :                      */
     557     9677582 :                     if (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple)))
     558     5765032 :                         HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));
     559             : 
     560             :                     /* In an antijoin, we never return a matched tuple */
     561     9677582 :                     if (node->js.jointype == JOIN_ANTI)
     562             :                     {
     563     1543570 :                         node->hj_JoinState = HJ_NEED_NEW_OUTER;
     564     1543570 :                         continue;
     565             :                     }
     566             : 
     567             :                     /*
     568             :                      * In a right-antijoin, we never return a matched tuple.
     569             :                      * And we need to stay on the current outer tuple to
     570             :                      * continue scanning the inner side for matches.
     571             :                      */
     572     8134012 :                     if (node->js.jointype == JOIN_RIGHT_ANTI)
     573       24622 :                         continue;
     574             : 
     575             :                     /*
     576             :                      * If we only need to join to the first matching inner
     577             :                      * tuple, then consider returning this one, but after that
     578             :                      * continue with next outer tuple.
     579             :                      */
     580     8109390 :                     if (node->js.single_match)
     581     1829594 :                         node->hj_JoinState = HJ_NEED_NEW_OUTER;
     582             : 
     583     8109390 :                     if (otherqual == NULL || ExecQual(otherqual, econtext))
     584     7925170 :                         return ExecProject(node->js.ps.ps_ProjInfo);
     585             :                     else
     586      184220 :                         InstrCountFiltered2(node, 1);
     587             :                 }
     588             :                 else
     589      152778 :                     InstrCountFiltered1(node, 1);
     590      336998 :                 break;
     591             : 
     592    11973228 :             case HJ_FILL_OUTER_TUPLE:
     593             : 
     594             :                 /*
     595             :                  * The current outer tuple has run out of matches, so check
     596             :                  * whether to emit a dummy outer-join tuple.  Whether we emit
     597             :                  * one or not, the next state is NEED_NEW_OUTER.
     598             :                  */
     599    11973228 :                 node->hj_JoinState = HJ_NEED_NEW_OUTER;
     600             : 
     601    11973228 :                 if (!node->hj_MatchedOuter &&
     602     6998300 :                     HJ_FILL_OUTER(node))
     603             :                 {
     604             :                     /*
     605             :                      * Generate a fake join tuple with nulls for the inner
     606             :                      * tuple, and return it if it passes the non-join quals.
     607             :                      */
     608     2043266 :                     econtext->ecxt_innertuple = node->hj_NullInnerTupleSlot;
     609             : 
     610     2043266 :                     if (otherqual == NULL || ExecQual(otherqual, econtext))
     611      982022 :                         return ExecProject(node->js.ps.ps_ProjInfo);
     612             :                     else
     613     1061244 :                         InstrCountFiltered2(node, 1);
     614             :                 }
     615    10991206 :                 break;
     616             : 
     617      449924 :             case HJ_FILL_INNER_TUPLES:
     618             : 
     619             :                 /*
     620             :                  * We have finished a batch, but we are doing
     621             :                  * right/right-anti/full join, so any unmatched inner tuples
     622             :                  * in the hashtable have to be emitted before we continue to
     623             :                  * the next batch.
     624             :                  */
     625      779776 :                 if (!(parallel ? ExecParallelScanHashTableForUnmatched(node, econtext)
     626      329852 :                       : ExecScanHashTableForUnmatched(node, econtext)))
     627             :                 {
     628             :                     /* no more unmatched tuples */
     629        5066 :                     node->hj_JoinState = HJ_NEED_NEW_BATCH;
     630        5066 :                     continue;
     631             :                 }
     632             : 
     633             :                 /*
     634             :                  * Generate a fake join tuple with nulls for the outer tuple,
     635             :                  * and return it if it passes the non-join quals.
     636             :                  */
     637      444858 :                 econtext->ecxt_outertuple = node->hj_NullOuterTupleSlot;
     638             : 
     639      444858 :                 if (otherqual == NULL || ExecQual(otherqual, econtext))
     640      437762 :                     return ExecProject(node->js.ps.ps_ProjInfo);
     641             :                 else
     642        7096 :                     InstrCountFiltered2(node, 1);
     643        7096 :                 break;
     644             : 
     645       21160 :             case HJ_NEED_NEW_BATCH:
     646             : 
     647             :                 /*
     648             :                  * Try to advance to next batch.  Done if there are no more.
     649             :                  */
     650       21160 :                 if (parallel)
     651             :                 {
     652        1260 :                     if (!ExecParallelHashJoinNewBatch(node))
     653         390 :                         return NULL;    /* end of parallel-aware join */
     654             :                 }
     655             :                 else
     656             :                 {
     657       19900 :                     if (!ExecHashJoinNewBatch(node))
     658       18748 :                         return NULL;    /* end of parallel-oblivious join */
     659             :                 }
     660        2022 :                 node->hj_JoinState = HJ_NEED_NEW_OUTER;
     661        2022 :                 break;
     662             : 
     663           0 :             default:
     664           0 :                 elog(ERROR, "unrecognized hashjoin state: %d",
     665             :                      (int) node->hj_JoinState);
     666             :         }
     667             :     }
     668             : }
     669             : 
     670             : /* ----------------------------------------------------------------
     671             :  *      ExecHashJoin
     672             :  *
     673             :  *      Parallel-oblivious version.
     674             :  * ----------------------------------------------------------------
     675             :  */
     676             : static TupleTableSlot *         /* return: a tuple or NULL */
     677     7089478 : ExecHashJoin(PlanState *pstate)
     678             : {
     679             :     /*
     680             :      * On sufficiently smart compilers this should be inlined with the
     681             :      * parallel-aware branches removed.
     682             :      */
     683     7089478 :     return ExecHashJoinImpl(pstate, false);
     684             : }
     685             : 
     686             : /* ----------------------------------------------------------------
     687             :  *      ExecParallelHashJoin
     688             :  *
     689             :  *      Parallel-aware version.
     690             :  * ----------------------------------------------------------------
     691             :  */
     692             : static TupleTableSlot *         /* return: a tuple or NULL */
     693     2280434 : ExecParallelHashJoin(PlanState *pstate)
     694             : {
     695             :     /*
     696             :      * On sufficiently smart compilers this should be inlined with the
     697             :      * parallel-oblivious branches removed.
     698             :      */
     699     2280434 :     return ExecHashJoinImpl(pstate, true);
     700             : }
     701             : 
     702             : /* ----------------------------------------------------------------
     703             :  *      ExecInitHashJoin
     704             :  *
     705             :  *      Init routine for HashJoin node.
     706             :  * ----------------------------------------------------------------
     707             :  */
     708             : HashJoinState *
     709       29714 : ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
     710             : {
     711             :     HashJoinState *hjstate;
     712             :     Plan       *outerNode;
     713             :     Hash       *hashNode;
     714             :     TupleDesc   outerDesc,
     715             :                 innerDesc;
     716             :     const TupleTableSlotOps *ops;
     717             : 
     718             :     /* check for unsupported flags */
     719             :     Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
     720             : 
     721             :     /*
     722             :      * create state structure
     723             :      */
     724       29714 :     hjstate = makeNode(HashJoinState);
     725       29714 :     hjstate->js.ps.plan = (Plan *) node;
     726       29714 :     hjstate->js.ps.state = estate;
     727             : 
     728             :     /*
     729             :      * See ExecHashJoinInitializeDSM() and ExecHashJoinInitializeWorker()
     730             :      * where this function may be replaced with a parallel version, if we
     731             :      * managed to launch a parallel query.
     732             :      */
     733       29714 :     hjstate->js.ps.ExecProcNode = ExecHashJoin;
     734       29714 :     hjstate->js.jointype = node->join.jointype;
     735             : 
     736             :     /*
     737             :      * Miscellaneous initialization
     738             :      *
     739             :      * create expression context for node
     740             :      */
     741       29714 :     ExecAssignExprContext(estate, &hjstate->js.ps);
     742             : 
     743             :     /*
     744             :      * initialize child nodes
     745             :      *
     746             :      * Note: we could suppress the REWIND flag for the inner input, which
     747             :      * would amount to betting that the hash will be a single batch.  Not
     748             :      * clear if this would be a win or not.
     749             :      */
     750       29714 :     outerNode = outerPlan(node);
     751       29714 :     hashNode = (Hash *) innerPlan(node);
     752             : 
     753       29714 :     outerPlanState(hjstate) = ExecInitNode(outerNode, estate, eflags);
     754       29714 :     outerDesc = ExecGetResultType(outerPlanState(hjstate));
     755       29714 :     innerPlanState(hjstate) = ExecInitNode((Plan *) hashNode, estate, eflags);
     756       29714 :     innerDesc = ExecGetResultType(innerPlanState(hjstate));
     757             : 
     758             :     /*
     759             :      * Initialize result slot, type and projection.
     760             :      */
     761       29714 :     ExecInitResultTupleSlotTL(&hjstate->js.ps, &TTSOpsVirtual);
     762       29714 :     ExecAssignProjectionInfo(&hjstate->js.ps, NULL);
     763             : 
     764             :     /*
     765             :      * tuple table initialization
     766             :      */
     767       29714 :     ops = ExecGetResultSlotOps(outerPlanState(hjstate), NULL);
     768       29714 :     hjstate->hj_OuterTupleSlot = ExecInitExtraTupleSlot(estate, outerDesc,
     769             :                                                         ops);
     770             : 
     771             :     /*
     772             :      * detect whether we need only consider the first matching inner tuple
     773             :      */
     774       44638 :     hjstate->js.single_match = (node->join.inner_unique ||
     775       14924 :                                 node->join.jointype == JOIN_SEMI);
     776             : 
     777             :     /* set up null tuples for outer joins, if needed */
     778       29714 :     switch (node->join.jointype)
     779             :     {
     780       18050 :         case JOIN_INNER:
     781             :         case JOIN_SEMI:
     782       18050 :             break;
     783        5252 :         case JOIN_LEFT:
     784             :         case JOIN_ANTI:
     785        5252 :             hjstate->hj_NullInnerTupleSlot =
     786        5252 :                 ExecInitNullTupleSlot(estate, innerDesc, &TTSOpsVirtual);
     787        5252 :             break;
     788        5376 :         case JOIN_RIGHT:
     789             :         case JOIN_RIGHT_ANTI:
     790        5376 :             hjstate->hj_NullOuterTupleSlot =
     791        5376 :                 ExecInitNullTupleSlot(estate, outerDesc, &TTSOpsVirtual);
     792        5376 :             break;
     793        1036 :         case JOIN_FULL:
     794        1036 :             hjstate->hj_NullOuterTupleSlot =
     795        1036 :                 ExecInitNullTupleSlot(estate, outerDesc, &TTSOpsVirtual);
     796        1036 :             hjstate->hj_NullInnerTupleSlot =
     797        1036 :                 ExecInitNullTupleSlot(estate, innerDesc, &TTSOpsVirtual);
     798        1036 :             break;
     799           0 :         default:
     800           0 :             elog(ERROR, "unrecognized join type: %d",
     801             :                  (int) node->join.jointype);
     802             :     }
     803             : 
     804             :     /*
     805             :      * now for some voodoo.  our temporary tuple slot is actually the result
     806             :      * tuple slot of the Hash node (which is our inner plan).  we can do this
     807             :      * because Hash nodes don't return tuples via ExecProcNode() -- instead
     808             :      * the hash join node uses ExecScanHashBucket() to get at the contents of
     809             :      * the hash table.  -cim 6/9/91
     810             :      */
     811             :     {
     812       29714 :         HashState  *hashstate = (HashState *) innerPlanState(hjstate);
     813       29714 :         TupleTableSlot *slot = hashstate->ps.ps_ResultTupleSlot;
     814             : 
     815       29714 :         hjstate->hj_HashTupleSlot = slot;
     816             :     }
     817             : 
     818             :     /*
     819             :      * initialize child expressions
     820             :      */
     821       29714 :     hjstate->js.ps.qual =
     822       29714 :         ExecInitQual(node->join.plan.qual, (PlanState *) hjstate);
     823       29714 :     hjstate->js.joinqual =
     824       29714 :         ExecInitQual(node->join.joinqual, (PlanState *) hjstate);
     825       29714 :     hjstate->hashclauses =
     826       29714 :         ExecInitQual(node->hashclauses, (PlanState *) hjstate);
     827             : 
     828             :     /*
     829             :      * initialize hash-specific info
     830             :      */
     831       29714 :     hjstate->hj_HashTable = NULL;
     832       29714 :     hjstate->hj_FirstOuterTupleSlot = NULL;
     833             : 
     834       29714 :     hjstate->hj_CurHashValue = 0;
     835       29714 :     hjstate->hj_CurBucketNo = 0;
     836       29714 :     hjstate->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
     837       29714 :     hjstate->hj_CurTuple = NULL;
     838             : 
     839       29714 :     hjstate->hj_OuterHashKeys = ExecInitExprList(node->hashkeys,
     840             :                                                  (PlanState *) hjstate);
     841       29714 :     hjstate->hj_HashOperators = node->hashoperators;
     842       29714 :     hjstate->hj_Collations = node->hashcollations;
     843             : 
     844       29714 :     hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
     845       29714 :     hjstate->hj_MatchedOuter = false;
     846       29714 :     hjstate->hj_OuterNotEmpty = false;
     847             : 
     848       29714 :     return hjstate;
     849             : }
     850             : 
     851             : /* ----------------------------------------------------------------
     852             :  *      ExecEndHashJoin
     853             :  *
     854             :  *      clean up routine for HashJoin node
     855             :  * ----------------------------------------------------------------
     856             :  */
     857             : void
     858       29612 : ExecEndHashJoin(HashJoinState *node)
     859             : {
     860             :     /*
     861             :      * Free hash table
     862             :      */
     863       29612 :     if (node->hj_HashTable)
     864             :     {
     865       18586 :         ExecHashTableDestroy(node->hj_HashTable);
     866       18586 :         node->hj_HashTable = NULL;
     867             :     }
     868             : 
     869             :     /*
     870             :      * clean up subtrees
     871             :      */
     872       29612 :     ExecEndNode(outerPlanState(node));
     873       29612 :     ExecEndNode(innerPlanState(node));
     874       29612 : }
     875             : 
     876             : /*
     877             :  * ExecHashJoinOuterGetTuple
     878             :  *
     879             :  *      get the next outer tuple for a parallel-oblivious hashjoin: either by
     880             :  *      executing the outer plan node in the first pass, or from the temp
     881             :  *      files for the hashjoin batches.
     882             :  *
     883             :  * Returns a null slot if no more outer tuples (within the current batch).
     884             :  *
     885             :  * On success, the tuple's hash value is stored at *hashvalue --- this is
     886             :  * either originally computed, or re-read from the temp file.
     887             :  */
     888             : static TupleTableSlot *
     889    14676562 : ExecHashJoinOuterGetTuple(PlanState *outerNode,
     890             :                           HashJoinState *hjstate,
     891             :                           uint32 *hashvalue)
     892             : {
     893    14676562 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     894    14676562 :     int         curbatch = hashtable->curbatch;
     895             :     TupleTableSlot *slot;
     896             : 
     897    14676562 :     if (curbatch == 0)          /* if it is the first pass */
     898             :     {
     899             :         /*
      900             :          * Check to see if the first outer tuple was already fetched by
     901             :          * ExecHashJoin() and not used yet.
     902             :          */
     903    13205218 :         slot = hjstate->hj_FirstOuterTupleSlot;
     904    13205218 :         if (!TupIsNull(slot))
     905       11594 :             hjstate->hj_FirstOuterTupleSlot = NULL;
     906             :         else
     907    13193624 :             slot = ExecProcNode(outerNode);
     908             : 
     909    13206032 :         while (!TupIsNull(slot))
     910             :         {
     911             :             /*
     912             :              * We have to compute the tuple's hash value.
     913             :              */
     914    13187272 :             ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
     915             : 
     916    13187272 :             econtext->ecxt_outertuple = slot;
     917    13187272 :             if (ExecHashGetHashValue(hashtable, econtext,
     918             :                                      hjstate->hj_OuterHashKeys,
     919             :                                      true,  /* outer tuple */
     920    13187272 :                                      HJ_FILL_OUTER(hjstate),
     921             :                                      hashvalue))
     922             :             {
     923             :                 /* remember outer relation is not empty for possible rescan */
     924    13186458 :                 hjstate->hj_OuterNotEmpty = true;
     925             : 
     926    13186458 :                 return slot;
     927             :             }
     928             : 
     929             :             /*
     930             :              * That tuple couldn't match because of a NULL, so discard it and
     931             :              * continue with the next one.
     932             :              */
     933         814 :             slot = ExecProcNode(outerNode);
     934             :         }
     935             :     }
     936     1471344 :     else if (curbatch < hashtable->nbatch)
     937             :     {
     938     1471344 :         BufFile    *file = hashtable->outerBatchFile[curbatch];
     939             : 
     940             :         /*
     941             :          * In outer-join cases, we could get here even though the batch file
     942             :          * is empty.
     943             :          */
     944     1471344 :         if (file == NULL)
     945           0 :             return NULL;
     946             : 
     947     1471344 :         slot = ExecHashJoinGetSavedTuple(hjstate,
     948             :                                          file,
     949             :                                          hashvalue,
     950             :                                          hjstate->hj_OuterTupleSlot);
     951     1471344 :         if (!TupIsNull(slot))
     952     1470192 :             return slot;
     953             :     }
     954             : 
     955             :     /* End of this batch */
     956       19912 :     return NULL;
     957             : }
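/*
 * Illustrative sketch (not part of nodeHashjoin.c): a simplified caller loop
 * that relies only on the contract documented above: a NULL slot means the
 * current batch is exhausted, and on success *hashvalue has been filled in.
 * The real consumer is the state machine in ExecHashJoin() (its
 * HJ_NEED_NEW_OUTER handling), which additionally deals with outer-fill
 * logic and advancing to the next batch.
 */
for (;;)
{
    uint32      hashvalue;
    TupleTableSlot *outerTupleSlot;

    outerTupleSlot = ExecHashJoinOuterGetTuple(outerNode, hjstate,
                                               &hashvalue);
    if (TupIsNull(outerTupleSlot))
        break;                  /* end of the current batch */

    /* ... probe the hash table with hashvalue and emit join results ... */
}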
     958             : 
     959             : /*
     960             :  * ExecHashJoinOuterGetTuple variant for the parallel case.
     961             :  */
     962             : static TupleTableSlot *
     963     2160900 : ExecParallelHashJoinOuterGetTuple(PlanState *outerNode,
     964             :                                   HashJoinState *hjstate,
     965             :                                   uint32 *hashvalue)
     966             : {
     967     2160900 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     968     2160900 :     int         curbatch = hashtable->curbatch;
     969             :     TupleTableSlot *slot;
     970             : 
     971             :     /*
     972             :      * In the Parallel Hash case we only run the outer plan directly for
     973             :      * single-batch hash joins.  Otherwise we have to go to batch files, even
     974             :      * for batch 0.
     975             :      */
     976     2160900 :     if (curbatch == 0 && hashtable->nbatch == 1)
     977             :     {
     978      960134 :         slot = ExecProcNode(outerNode);
     979             : 
     980      960134 :         while (!TupIsNull(slot))
     981             :         {
     982      960006 :             ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
     983             : 
     984      960006 :             econtext->ecxt_outertuple = slot;
     985      960006 :             if (ExecHashGetHashValue(hashtable, econtext,
     986             :                                      hjstate->hj_OuterHashKeys,
     987             :                                      true,  /* outer tuple */
     988      960006 :                                      HJ_FILL_OUTER(hjstate),
     989             :                                      hashvalue))
     990      960006 :                 return slot;
     991             : 
     992             :             /*
     993             :              * That tuple couldn't match because of a NULL, so discard it and
     994             :              * continue with the next one.
     995             :              */
     996           0 :             slot = ExecProcNode(outerNode);
     997             :         }
     998             :     }
     999     1200766 :     else if (curbatch < hashtable->nbatch)
    1000             :     {
    1001             :         MinimalTuple tuple;
    1002             : 
    1003     1200766 :         tuple = sts_parallel_scan_next(hashtable->batches[curbatch].outer_tuples,
    1004             :                                        hashvalue);
    1005     1200766 :         if (tuple != NULL)
    1006             :         {
    1007     1200024 :             ExecForceStoreMinimalTuple(tuple,
    1008             :                                        hjstate->hj_OuterTupleSlot,
    1009             :                                        false);
    1010     1200024 :             slot = hjstate->hj_OuterTupleSlot;
    1011     1200024 :             return slot;
    1012             :         }
    1013             :         else
    1014         742 :             ExecClearTuple(hjstate->hj_OuterTupleSlot);
    1015             :     }
    1016             : 
    1017             :     /* End of this batch */
    1018         870 :     hashtable->batches[curbatch].outer_eof = true;
    1019             : 
    1020         870 :     return NULL;
    1021             : }
    1022             : 
    1023             : /*
    1024             :  * ExecHashJoinNewBatch
    1025             :  *      switch to a new hashjoin batch
    1026             :  *
    1027             :  * Returns true if successful, false if there are no more batches.
    1028             :  */
    1029             : static bool
    1030       19900 : ExecHashJoinNewBatch(HashJoinState *hjstate)
    1031             : {
    1032       19900 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1033             :     int         nbatch;
    1034             :     int         curbatch;
    1035             :     BufFile    *innerFile;
    1036             :     TupleTableSlot *slot;
    1037             :     uint32      hashvalue;
    1038             : 
    1039       19900 :     nbatch = hashtable->nbatch;
    1040       19900 :     curbatch = hashtable->curbatch;
    1041             : 
    1042       19900 :     if (curbatch > 0)
    1043             :     {
    1044             :         /*
    1045             :          * We no longer need the previous outer batch file; close it right
    1046             :          * away to free disk space.
    1047             :          */
    1048        1152 :         if (hashtable->outerBatchFile[curbatch])
    1049        1152 :             BufFileClose(hashtable->outerBatchFile[curbatch]);
    1050        1152 :         hashtable->outerBatchFile[curbatch] = NULL;
    1051             :     }
    1052             :     else                        /* we just finished the first batch */
    1053             :     {
    1054             :         /*
    1055             :          * Reset some of the skew optimization state variables, since we no
    1056             :          * longer need to consider skew tuples after the first batch. The
    1057             :          * memory context reset we are about to do will release the skew
    1058             :          * hashtable itself.
    1059             :          */
    1060       18748 :         hashtable->skewEnabled = false;
    1061       18748 :         hashtable->skewBucket = NULL;
    1062       18748 :         hashtable->skewBucketNums = NULL;
    1063       18748 :         hashtable->nSkewBuckets = 0;
    1064       18748 :         hashtable->spaceUsedSkew = 0;
    1065             :     }
    1066             : 
    1067             :     /*
    1068             :      * We can always skip over any batches that are completely empty on both
    1069             :      * sides.  We can sometimes skip over batches that are empty on only one
    1070             :      * side, but there are exceptions:
    1071             :      *
    1072             :      * 1. In a left/full outer join, we have to process outer batches even if
    1073             :      * the inner batch is empty.  Similarly, in a right/right-anti/full outer
    1074             :      * join, we have to process inner batches even if the outer batch is
    1075             :      * empty.
    1076             :      *
    1077             :      * 2. If we have increased nbatch since the initial estimate, we have to
    1078             :      * scan inner batches since they might contain tuples that need to be
    1079             :      * reassigned to later inner batches.
    1080             :      *
    1081             :      * 3. Similarly, if we have increased nbatch since starting the outer
    1082             :      * scan, we have to rescan outer batches in case they contain tuples that
    1083             :      * need to be reassigned.
    1084             :      */
    1085       19900 :     curbatch++;
    1086       19900 :     while (curbatch < nbatch &&
    1087        1152 :            (hashtable->outerBatchFile[curbatch] == NULL ||
    1088        1152 :             hashtable->innerBatchFile[curbatch] == NULL))
    1089             :     {
    1090           0 :         if (hashtable->outerBatchFile[curbatch] &&
    1091           0 :             HJ_FILL_OUTER(hjstate))
    1092           0 :             break;              /* must process due to rule 1 */
    1093           0 :         if (hashtable->innerBatchFile[curbatch] &&
    1094           0 :             HJ_FILL_INNER(hjstate))
    1095           0 :             break;              /* must process due to rule 1 */
    1096           0 :         if (hashtable->innerBatchFile[curbatch] &&
    1097           0 :             nbatch != hashtable->nbatch_original)
    1098           0 :             break;              /* must process due to rule 2 */
    1099           0 :         if (hashtable->outerBatchFile[curbatch] &&
    1100           0 :             nbatch != hashtable->nbatch_outstart)
    1101           0 :             break;              /* must process due to rule 3 */
    1102             :         /* We can ignore this batch. */
    1103             :         /* Release associated temp files right away. */
    1104           0 :         if (hashtable->innerBatchFile[curbatch])
    1105           0 :             BufFileClose(hashtable->innerBatchFile[curbatch]);
    1106           0 :         hashtable->innerBatchFile[curbatch] = NULL;
    1107           0 :         if (hashtable->outerBatchFile[curbatch])
    1108           0 :             BufFileClose(hashtable->outerBatchFile[curbatch]);
    1109           0 :         hashtable->outerBatchFile[curbatch] = NULL;
    1110           0 :         curbatch++;
    1111             :     }
    1112             : 
    1113       19900 :     if (curbatch >= nbatch)
    1114       18748 :         return false;           /* no more batches */
    1115             : 
    1116        1152 :     hashtable->curbatch = curbatch;
    1117             : 
    1118             :     /*
    1119             :      * Reload the hash table with the new inner batch (which could be empty)
    1120             :      */
    1121        1152 :     ExecHashTableReset(hashtable);
    1122             : 
    1123        1152 :     innerFile = hashtable->innerBatchFile[curbatch];
    1124             : 
    1125        1152 :     if (innerFile != NULL)
    1126             :     {
    1127        1152 :         if (BufFileSeek(innerFile, 0, 0, SEEK_SET))
    1128           0 :             ereport(ERROR,
    1129             :                     (errcode_for_file_access(),
    1130             :                      errmsg("could not rewind hash-join temporary file")));
    1131             : 
    1132     2353850 :         while ((slot = ExecHashJoinGetSavedTuple(hjstate,
    1133             :                                                  innerFile,
    1134             :                                                  &hashvalue,
    1135             :                                                  hjstate->hj_HashTupleSlot)))
    1136             :         {
    1137             :             /*
    1138             :              * NOTE: some tuples may be sent to future batches.  Also, it is
    1139             :              * possible for hashtable->nbatch to be increased here!
    1140             :              */
    1141     2352698 :             ExecHashTableInsert(hashtable, slot, hashvalue);
    1142             :         }
    1143             : 
    1144             :         /*
    1145             :          * after we build the hash table, the inner batch file is no longer
    1146             :          * needed
    1147             :          */
    1148        1152 :         BufFileClose(innerFile);
    1149        1152 :         hashtable->innerBatchFile[curbatch] = NULL;
    1150             :     }
    1151             : 
    1152             :     /*
    1153             :      * Rewind outer batch file (if present), so that we can start reading it.
    1154             :      */
    1155        1152 :     if (hashtable->outerBatchFile[curbatch] != NULL)
    1156             :     {
    1157        1152 :         if (BufFileSeek(hashtable->outerBatchFile[curbatch], 0, 0, SEEK_SET))
    1158           0 :             ereport(ERROR,
    1159             :                     (errcode_for_file_access(),
    1160             :                      errmsg("could not rewind hash-join temporary file")));
    1161             :     }
    1162             : 
    1163        1152 :     return true;
    1164             : }
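/*
 * Illustrative aside on rules 2 and 3 above (hypothetical helper, not part
 * of nodeHashjoin.c): the batch number is derived from hash bits masked
 * with (nbatch - 1), roughly as ExecHashGetBucketAndBatch() does (the real
 * code uses different hash bits for bucket and batch).  Because nbatch is
 * a power of two that only ever doubles, a tuple assigned to batch b before
 * a doubling can only stay in b or move to b + old_nbatch afterwards, so
 * tuples migrate forward to later batches and never backward.  That is why
 * growing nbatch merely forces rescans of already-written batch files.
 */
#include <stdint.h>

static inline int
illustrative_batchno(uint32_t hash_bits_for_batch, int nbatch)
{
    /* nbatch is a power of two, so this keeps the low-order batch bits */
    return (int) (hash_bits_for_batch & (uint32_t) (nbatch - 1));
}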
    1165             : 
    1166             : /*
    1167             :  * Choose a batch to work on, and attach to it.  Returns true if successful,
    1168             :  * false if there are no more batches.
    1169             :  */
    1170             : static bool
    1171        1260 : ExecParallelHashJoinNewBatch(HashJoinState *hjstate)
    1172             : {
    1173        1260 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1174             :     int         start_batchno;
    1175             :     int         batchno;
    1176             : 
    1177             :     /*
    1178             :      * If we were already attached to a batch, remember not to bother checking
     1179             :      * it again, and detach from it (possibly freeing the hash table if we are
     1180             :      * the last to detach).
    1181             :      */
    1182        1260 :     if (hashtable->curbatch >= 0)
    1183             :     {
    1184         862 :         hashtable->batches[hashtable->curbatch].done = true;
    1185         862 :         ExecHashTableDetachBatch(hashtable);
    1186             :     }
    1187             : 
    1188             :     /*
    1189             :      * Search for a batch that isn't done.  We use an atomic counter to start
    1190             :      * our search at a different batch in every participant when there are
    1191             :      * more batches than participants.
    1192             :      */
    1193        1260 :     batchno = start_batchno =
    1194        1260 :         pg_atomic_fetch_add_u32(&hashtable->parallel_state->distributor, 1) %
    1195        1260 :         hashtable->nbatch;
    1196             :     do
    1197             :     {
    1198             :         uint32      hashvalue;
    1199             :         MinimalTuple tuple;
    1200             :         TupleTableSlot *slot;
    1201             : 
    1202        3050 :         if (!hashtable->batches[batchno].done)
    1203             :         {
    1204             :             SharedTuplestoreAccessor *inner_tuples;
    1205        1764 :             Barrier    *batch_barrier =
    1206        1764 :                 &hashtable->batches[batchno].shared->batch_barrier;
    1207             : 
    1208        1764 :             switch (BarrierAttach(batch_barrier))
    1209             :             {
    1210         582 :                 case PHJ_BATCH_ELECT:
    1211             : 
    1212             :                     /* One backend allocates the hash table. */
    1213         582 :                     if (BarrierArriveAndWait(batch_barrier,
    1214             :                                              WAIT_EVENT_HASH_BATCH_ELECT))
    1215         582 :                         ExecParallelHashTableAlloc(hashtable, batchno);
    1216             :                     /* Fall through. */
    1217             : 
    1218             :                 case PHJ_BATCH_ALLOCATE:
    1219             :                     /* Wait for allocation to complete. */
    1220         582 :                     BarrierArriveAndWait(batch_barrier,
    1221             :                                          WAIT_EVENT_HASH_BATCH_ALLOCATE);
    1222             :                     /* Fall through. */
    1223             : 
    1224         596 :                 case PHJ_BATCH_LOAD:
    1225             :                     /* Start (or join in) loading tuples. */
    1226         596 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1227         596 :                     inner_tuples = hashtable->batches[batchno].inner_tuples;
    1228         596 :                     sts_begin_parallel_scan(inner_tuples);
    1229     1083524 :                     while ((tuple = sts_parallel_scan_next(inner_tuples,
    1230             :                                                            &hashvalue)))
    1231             :                     {
    1232     1082928 :                         ExecForceStoreMinimalTuple(tuple,
    1233             :                                                    hjstate->hj_HashTupleSlot,
    1234             :                                                    false);
    1235     1082928 :                         slot = hjstate->hj_HashTupleSlot;
    1236     1082928 :                         ExecParallelHashTableInsertCurrentBatch(hashtable, slot,
    1237             :                                                                 hashvalue);
    1238             :                     }
    1239         596 :                     sts_end_parallel_scan(inner_tuples);
    1240         596 :                     BarrierArriveAndWait(batch_barrier,
    1241             :                                          WAIT_EVENT_HASH_BATCH_LOAD);
    1242             :                     /* Fall through. */
    1243             : 
    1244         870 :                 case PHJ_BATCH_PROBE:
    1245             : 
    1246             :                     /*
    1247             :                      * This batch is ready to probe.  Return control to
    1248             :                      * caller. We stay attached to batch_barrier so that the
    1249             :                      * hash table stays alive until everyone's finished
    1250             :                      * probing it, but no participant is allowed to wait at
    1251             :                      * this barrier again (or else a deadlock could occur).
    1252             :                      * All attached participants must eventually detach from
    1253             :                      * the barrier and one worker must advance the phase so
    1254             :                      * that the final phase is reached.
    1255             :                      */
    1256         870 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1257         870 :                     sts_begin_parallel_scan(hashtable->batches[batchno].outer_tuples);
    1258             : 
    1259         870 :                     return true;
    1260           0 :                 case PHJ_BATCH_SCAN:
    1261             : 
    1262             :                     /*
    1263             :                      * In principle, we could help scan for unmatched tuples,
    1264             :                      * since that phase is already underway (the thing we
    1265             :                      * can't do under current deadlock-avoidance rules is wait
    1266             :                      * for others to arrive at PHJ_BATCH_SCAN, because
    1267             :                      * PHJ_BATCH_PROBE emits tuples, but in this case we just
    1268             :                      * got here without waiting).  That is not yet done.  For
    1269             :                      * now, we just detach and go around again.  We have to
    1270             :                      * use ExecHashTableDetachBatch() because there's a small
    1271             :                      * chance we'll be the last to detach, and then we're
    1272             :                      * responsible for freeing memory.
    1273             :                      */
    1274           0 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1275           0 :                     hashtable->batches[batchno].done = true;
    1276           0 :                     ExecHashTableDetachBatch(hashtable);
    1277           0 :                     break;
    1278             : 
    1279         894 :                 case PHJ_BATCH_FREE:
    1280             : 
    1281             :                     /*
    1282             :                      * Already done.  Detach and go around again (if any
    1283             :                      * remain).
    1284             :                      */
    1285         894 :                     BarrierDetach(batch_barrier);
    1286         894 :                     hashtable->batches[batchno].done = true;
    1287         894 :                     hashtable->curbatch = -1;
    1288         894 :                     break;
    1289             : 
    1290           0 :                 default:
    1291           0 :                     elog(ERROR, "unexpected batch phase %d",
    1292             :                          BarrierPhase(batch_barrier));
    1293             :             }
    1294             :         }
    1295        2180 :         batchno = (batchno + 1) % hashtable->nbatch;
    1296        2180 :     } while (batchno != start_batchno);
    1297             : 
    1298         390 :     return false;
    1299             : }
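/*
 * Illustrative restatement (not the real definitions, which live in the
 * hash join headers elsewhere in the tree): the per-batch barrier phases in
 * the order the switch above can encounter them, since its heavy use of
 * fall-through makes the progression easy to miss.
 */
enum illustrative_phj_batch_phase
{
    ILL_PHJ_BATCH_ELECT,        /* elect one backend to allocate the hash table */
    ILL_PHJ_BATCH_ALLOCATE,     /* wait for that allocation to complete */
    ILL_PHJ_BATCH_LOAD,         /* help load inner tuples into the hash table */
    ILL_PHJ_BATCH_PROBE,        /* probe; stay attached but never wait here again */
    ILL_PHJ_BATCH_SCAN,         /* unmatched-tuple scan underway; late arrivals detach */
    ILL_PHJ_BATCH_FREE          /* batch already finished; detach and move on */
};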
    1300             : 
    1301             : /*
    1302             :  * ExecHashJoinSaveTuple
    1303             :  *      save a tuple to a batch file.
    1304             :  *
    1305             :  * The data recorded in the file for each tuple is its hash value,
    1306             :  * then the tuple in MinimalTuple format.
    1307             :  *
    1308             :  * fileptr points to a batch file in one of the hashtable arrays.
    1309             :  *
    1310             :  * The batch files (and their buffers) are allocated in the spill context
    1311             :  * created for the hashtable.
    1312             :  */
    1313             : void
    1314     3822890 : ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,
    1315             :                       BufFile **fileptr, HashJoinTable hashtable)
    1316             : {
    1317     3822890 :     BufFile    *file = *fileptr;
    1318             : 
    1319             :     /*
    1320             :      * The batch file is lazily created. If this is the first tuple written to
    1321             :      * this batch, the batch file is created and its buffer is allocated in
    1322             :      * the spillCxt context, NOT in the batchCxt.
    1323             :      *
    1324             :      * During the build phase, buffered files are created for inner batches.
    1325             :      * Each batch's buffered file is closed (and its buffer freed) after the
    1326             :      * batch is loaded into memory during the outer side scan. Therefore, it
    1327             :      * is necessary to allocate the batch file buffer in a memory context
    1328             :      * which outlives the batch itself.
    1329             :      *
     1330             :      * Also, we use spillCxt instead of hashCxt so that the memory consumed by
     1331             :      * spilling is accounted for more accurately.
    1332             :      */
    1333     3822890 :     if (file == NULL)
    1334             :     {
    1335        2304 :         MemoryContext oldctx = MemoryContextSwitchTo(hashtable->spillCxt);
    1336             : 
    1337        2304 :         file = BufFileCreateTemp(false);
    1338        2304 :         *fileptr = file;
    1339             : 
    1340        2304 :         MemoryContextSwitchTo(oldctx);
    1341             :     }
    1342             : 
    1343     3822890 :     BufFileWrite(file, &hashvalue, sizeof(uint32));
    1344     3822890 :     BufFileWrite(file, tuple, tuple->t_len);
    1345     3822890 : }
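/*
 * Standalone sketch of the record layout described above (hypothetical
 * function names, plain stdio instead of BufFile): each record is a uint32
 * hash value followed by the MinimalTuple image, whose own first uint32 is
 * its total length.  That adjacency is what lets the reader below
 * (ExecHashJoinGetSavedTuple) pull both words in as one two-element header.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void
save_record(FILE *file, uint32_t hashvalue, const void *tuple, uint32_t t_len)
{
    /* hash value first, then the whole tuple (t_len is its first uint32) */
    fwrite(&hashvalue, sizeof(uint32_t), 1, file);
    fwrite(tuple, t_len, 1, file);
}

static void *
load_record(FILE *file, uint32_t *hashvalue)
{
    uint32_t    header[2];
    char       *tuple;

    /* hash value and tuple length word are adjacent, so read them together */
    if (fread(header, sizeof(header), 1, file) != 1)
        return NULL;            /* end of file */
    *hashvalue = header[0];
    tuple = malloc(header[1]);
    if (tuple == NULL)
        return NULL;
    memcpy(tuple, &header[1], sizeof(uint32_t));    /* restore the length word */
    if (fread(tuple + sizeof(uint32_t), header[1] - sizeof(uint32_t), 1, file) != 1)
    {
        free(tuple);
        return NULL;            /* truncated record */
    }
    return tuple;
}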
    1346             : 
    1347             : /*
    1348             :  * ExecHashJoinGetSavedTuple
    1349             :  *      read the next tuple from a batch file.  Return NULL if no more.
    1350             :  *
    1351             :  * On success, *hashvalue is set to the tuple's hash value, and the tuple
    1352             :  * itself is stored in the given slot.
    1353             :  */
    1354             : static TupleTableSlot *
    1355     3825194 : ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
    1356             :                           BufFile *file,
    1357             :                           uint32 *hashvalue,
    1358             :                           TupleTableSlot *tupleSlot)
    1359             : {
    1360             :     uint32      header[2];
    1361             :     size_t      nread;
    1362             :     MinimalTuple tuple;
    1363             : 
    1364             :     /*
    1365             :      * We check for interrupts here because this is typically taken as an
    1366             :      * alternative code path to an ExecProcNode() call, which would include
    1367             :      * such a check.
    1368             :      */
    1369     3825194 :     CHECK_FOR_INTERRUPTS();
    1370             : 
    1371             :     /*
    1372             :      * Since both the hash value and the MinimalTuple length word are uint32,
    1373             :      * we can read them both in one BufFileRead() call without any type
    1374             :      * cheating.
    1375             :      */
    1376     3825194 :     nread = BufFileReadMaybeEOF(file, header, sizeof(header), true);
    1377     3825194 :     if (nread == 0)             /* end of file */
    1378             :     {
    1379        2304 :         ExecClearTuple(tupleSlot);
    1380        2304 :         return NULL;
    1381             :     }
    1382     3822890 :     *hashvalue = header[0];
    1383     3822890 :     tuple = (MinimalTuple) palloc(header[1]);
    1384     3822890 :     tuple->t_len = header[1];
    1385     3822890 :     BufFileReadExact(file,
    1386             :                      (char *) tuple + sizeof(uint32),
    1387     3822890 :                      header[1] - sizeof(uint32));
    1388     3822890 :     ExecForceStoreMinimalTuple(tuple, tupleSlot, true);
    1389     3822890 :     return tupleSlot;
    1390             : }
    1391             : 
    1392             : 
    1393             : void
    1394        2790 : ExecReScanHashJoin(HashJoinState *node)
    1395             : {
    1396        2790 :     PlanState  *outerPlan = outerPlanState(node);
    1397        2790 :     PlanState  *innerPlan = innerPlanState(node);
    1398             : 
    1399             :     /*
    1400             :      * In a multi-batch join, we currently have to do rescans the hard way,
    1401             :      * primarily because batch temp files may have already been released. But
    1402             :      * if it's a single-batch join, and there is no parameter change for the
    1403             :      * inner subnode, then we can just re-use the existing hash table without
    1404             :      * rebuilding it.
    1405             :      */
    1406        2790 :     if (node->hj_HashTable != NULL)
    1407             :     {
    1408        2350 :         if (node->hj_HashTable->nbatch == 1 &&
    1409        2350 :             innerPlan->chgParam == NULL)
    1410             :         {
    1411             :             /*
    1412             :              * Okay to reuse the hash table; needn't rescan inner, either.
    1413             :              *
    1414             :              * However, if it's a right/right-anti/full join, we'd better
    1415             :              * reset the inner-tuple match flags contained in the table.
    1416             :              */
    1417         802 :             if (HJ_FILL_INNER(node))
    1418          14 :                 ExecHashTableResetMatchFlags(node->hj_HashTable);
    1419             : 
    1420             :             /*
    1421             :              * Also, we need to reset our state about the emptiness of the
    1422             :              * outer relation, so that the new scan of the outer will update
    1423             :              * it correctly if it turns out to be empty this time. (There's no
    1424             :              * harm in clearing it now because ExecHashJoin won't need the
    1425             :              * info.  In the other cases, where the hash table doesn't exist
    1426             :              * or we are destroying it, we leave this state alone because
    1427             :              * ExecHashJoin will need it the first time through.)
    1428             :              */
    1429         802 :             node->hj_OuterNotEmpty = false;
    1430             : 
    1431             :             /* ExecHashJoin can skip the BUILD_HASHTABLE step */
    1432         802 :             node->hj_JoinState = HJ_NEED_NEW_OUTER;
    1433             :         }
    1434             :         else
    1435             :         {
    1436             :             /* must destroy and rebuild hash table */
    1437        1548 :             HashState  *hashNode = castNode(HashState, innerPlan);
    1438             : 
    1439             :             Assert(hashNode->hashtable == node->hj_HashTable);
    1440             :             /* accumulate stats from old hash table, if wanted */
    1441             :             /* (this should match ExecShutdownHash) */
    1442        1548 :             if (hashNode->ps.instrument && !hashNode->hinstrument)
    1443           0 :                 hashNode->hinstrument = (HashInstrumentation *)
    1444           0 :                     palloc0(sizeof(HashInstrumentation));
    1445        1548 :             if (hashNode->hinstrument)
    1446           0 :                 ExecHashAccumInstrumentation(hashNode->hinstrument,
    1447             :                                              hashNode->hashtable);
    1448             :             /* for safety, be sure to clear child plan node's pointer too */
    1449        1548 :             hashNode->hashtable = NULL;
    1450             : 
    1451        1548 :             ExecHashTableDestroy(node->hj_HashTable);
    1452        1548 :             node->hj_HashTable = NULL;
    1453        1548 :             node->hj_JoinState = HJ_BUILD_HASHTABLE;
    1454             : 
    1455             :             /*
    1456             :              * if chgParam of subnode is not null then plan will be re-scanned
    1457             :              * by first ExecProcNode.
    1458             :              */
    1459        1548 :             if (innerPlan->chgParam == NULL)
    1460           0 :                 ExecReScan(innerPlan);
    1461             :         }
    1462             :     }
    1463             : 
    1464             :     /* Always reset intra-tuple state */
    1465        2790 :     node->hj_CurHashValue = 0;
    1466        2790 :     node->hj_CurBucketNo = 0;
    1467        2790 :     node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
    1468        2790 :     node->hj_CurTuple = NULL;
    1469             : 
    1470        2790 :     node->hj_MatchedOuter = false;
    1471        2790 :     node->hj_FirstOuterTupleSlot = NULL;
    1472             : 
    1473             :     /*
    1474             :      * if chgParam of subnode is not null then plan will be re-scanned by
    1475             :      * first ExecProcNode.
    1476             :      */
    1477        2790 :     if (outerPlan->chgParam == NULL)
    1478        2050 :         ExecReScan(outerPlan);
    1479        2790 : }
    1480             : 
    1481             : void
    1482       26286 : ExecShutdownHashJoin(HashJoinState *node)
    1483             : {
    1484       26286 :     if (node->hj_HashTable)
    1485             :     {
    1486             :         /*
    1487             :          * Detach from shared state before DSM memory goes away.  This makes
    1488             :          * sure that we don't have any pointers into DSM memory by the time
    1489             :          * ExecEndHashJoin runs.
    1490             :          */
    1491       18568 :         ExecHashTableDetachBatch(node->hj_HashTable);
    1492       18568 :         ExecHashTableDetach(node->hj_HashTable);
    1493             :     }
    1494       26286 : }
    1495             : 
    1496             : static void
    1497         142 : ExecParallelHashJoinPartitionOuter(HashJoinState *hjstate)
    1498             : {
    1499         142 :     PlanState  *outerState = outerPlanState(hjstate);
    1500         142 :     ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
    1501         142 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1502             :     TupleTableSlot *slot;
    1503             :     uint32      hashvalue;
    1504             :     int         i;
    1505             : 
    1506             :     Assert(hjstate->hj_FirstOuterTupleSlot == NULL);
    1507             : 
    1508             :     /* Execute outer plan, writing all tuples to shared tuplestores. */
    1509             :     for (;;)
    1510             :     {
    1511     1200166 :         slot = ExecProcNode(outerState);
    1512     1200166 :         if (TupIsNull(slot))
    1513             :             break;
    1514     1200024 :         econtext->ecxt_outertuple = slot;
    1515     1200024 :         if (ExecHashGetHashValue(hashtable, econtext,
    1516             :                                  hjstate->hj_OuterHashKeys,
    1517             :                                  true,  /* outer tuple */
    1518     1200024 :                                  HJ_FILL_OUTER(hjstate),
    1519             :                                  &hashvalue))
    1520             :         {
    1521             :             int         batchno;
    1522             :             int         bucketno;
    1523             :             bool        shouldFree;
    1524     1200024 :             MinimalTuple mintup = ExecFetchSlotMinimalTuple(slot, &shouldFree);
    1525             : 
    1526     1200024 :             ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno,
    1527             :                                       &batchno);
    1528     1200024 :             sts_puttuple(hashtable->batches[batchno].outer_tuples,
    1529             :                          &hashvalue, mintup);
    1530             : 
    1531     1200024 :             if (shouldFree)
    1532     1200024 :                 heap_free_minimal_tuple(mintup);
    1533             :         }
    1534     1200024 :         CHECK_FOR_INTERRUPTS();
    1535             :     }
    1536             : 
    1537             :     /* Make sure all outer partitions are readable by any backend. */
    1538        1206 :     for (i = 0; i < hashtable->nbatch; ++i)
    1539        1064 :         sts_end_write(hashtable->batches[i].outer_tuples);
    1540         142 : }
    1541             : 
    1542             : void
    1543         120 : ExecHashJoinEstimate(HashJoinState *state, ParallelContext *pcxt)
    1544             : {
    1545         120 :     shm_toc_estimate_chunk(&pcxt->estimator, sizeof(ParallelHashJoinState));
    1546         120 :     shm_toc_estimate_keys(&pcxt->estimator, 1);
    1547         120 : }
    1548             : 
    1549             : void
    1550         120 : ExecHashJoinInitializeDSM(HashJoinState *state, ParallelContext *pcxt)
    1551             : {
    1552         120 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1553             :     HashState  *hashNode;
    1554             :     ParallelHashJoinState *pstate;
    1555             : 
    1556             :     /*
    1557             :      * Disable shared hash table mode if we failed to create a real DSM
    1558             :      * segment, because that means that we don't have a DSA area to work with.
    1559             :      */
    1560         120 :     if (pcxt->seg == NULL)
    1561           0 :         return;
    1562             : 
    1563         120 :     ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin);
    1564             : 
    1565             :     /*
    1566             :      * Set up the state needed to coordinate access to the shared hash
    1567             :      * table(s), using the plan node ID as the toc key.
    1568             :      */
    1569         120 :     pstate = shm_toc_allocate(pcxt->toc, sizeof(ParallelHashJoinState));
    1570         120 :     shm_toc_insert(pcxt->toc, plan_node_id, pstate);
    1571             : 
    1572             :     /*
    1573             :      * Set up the shared hash join state with no batches initially.
    1574             :      * ExecHashTableCreate() will prepare at least one later and set nbatch
    1575             :      * and space_allowed.
    1576             :      */
    1577         120 :     pstate->nbatch = 0;
    1578         120 :     pstate->space_allowed = 0;
    1579         120 :     pstate->batches = InvalidDsaPointer;
    1580         120 :     pstate->old_batches = InvalidDsaPointer;
    1581         120 :     pstate->nbuckets = 0;
    1582         120 :     pstate->growth = PHJ_GROWTH_OK;
    1583         120 :     pstate->chunk_work_queue = InvalidDsaPointer;
    1584         120 :     pg_atomic_init_u32(&pstate->distributor, 0);
    1585         120 :     pstate->nparticipants = pcxt->nworkers + 1;
    1586         120 :     pstate->total_tuples = 0;
    1587         120 :     LWLockInitialize(&pstate->lock,
    1588             :                      LWTRANCHE_PARALLEL_HASH_JOIN);
    1589         120 :     BarrierInit(&pstate->build_barrier, 0);
    1590         120 :     BarrierInit(&pstate->grow_batches_barrier, 0);
    1591         120 :     BarrierInit(&pstate->grow_buckets_barrier, 0);
    1592             : 
    1593             :     /* Set up the space we'll use for shared temporary files. */
    1594         120 :     SharedFileSetInit(&pstate->fileset, pcxt->seg);
    1595             : 
    1596             :     /* Initialize the shared state in the hash node. */
    1597         120 :     hashNode = (HashState *) innerPlanState(state);
    1598         120 :     hashNode->parallel_state = pstate;
    1599             : }
    1600             : 
    1601             : /* ----------------------------------------------------------------
    1602             :  *      ExecHashJoinReInitializeDSM
    1603             :  *
    1604             :  *      Reset shared state before beginning a fresh scan.
    1605             :  * ----------------------------------------------------------------
    1606             :  */
    1607             : void
    1608          48 : ExecHashJoinReInitializeDSM(HashJoinState *state, ParallelContext *pcxt)
    1609             : {
    1610          48 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1611             :     ParallelHashJoinState *pstate =
    1612          48 :         shm_toc_lookup(pcxt->toc, plan_node_id, false);
    1613             : 
    1614             :     /*
    1615             :      * It would be possible to reuse the shared hash table in single-batch
    1616             :      * cases by resetting and then fast-forwarding build_barrier to
    1617             :      * PHJ_BUILD_FREE and batch 0's batch_barrier to PHJ_BATCH_PROBE, but
    1618             :      * currently shared hash tables are already freed by now (by the last
    1619             :      * participant to detach from the batch).  We could consider keeping it
    1620             :      * around for single-batch joins.  We'd also need to adjust
     1621             :      * finalize_plan() so that it doesn't record a dummy dependency for
     1622             :      * Parallel Hash nodes, which currently prevents the rescan optimization.
     1623             :      * For now we don't try.
    1624             :      */
    1625             : 
    1626             :     /* Detach, freeing any remaining shared memory. */
    1627          48 :     if (state->hj_HashTable != NULL)
    1628             :     {
    1629           0 :         ExecHashTableDetachBatch(state->hj_HashTable);
    1630           0 :         ExecHashTableDetach(state->hj_HashTable);
    1631             :     }
    1632             : 
    1633             :     /* Clear any shared batch files. */
    1634          48 :     SharedFileSetDeleteAll(&pstate->fileset);
    1635             : 
    1636             :     /* Reset build_barrier to PHJ_BUILD_ELECT so we can go around again. */
    1637          48 :     BarrierInit(&pstate->build_barrier, 0);
    1638          48 : }
    1639             : 
    1640             : void
    1641         308 : ExecHashJoinInitializeWorker(HashJoinState *state,
    1642             :                              ParallelWorkerContext *pwcxt)
    1643             : {
    1644             :     HashState  *hashNode;
    1645         308 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1646             :     ParallelHashJoinState *pstate =
    1647         308 :         shm_toc_lookup(pwcxt->toc, plan_node_id, false);
    1648             : 
    1649             :     /* Attach to the space for shared temporary files. */
    1650         308 :     SharedFileSetAttach(&pstate->fileset, pwcxt->seg);
    1651             : 
    1652             :     /* Attach to the shared state in the hash node. */
    1653         308 :     hashNode = (HashState *) innerPlanState(state);
    1654         308 :     hashNode->parallel_state = pstate;
    1655             : 
    1656         308 :     ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin);
    1657         308 : }

Generated by: LCOV version 1.14