LCOV - code coverage report
Current view: top level - src/backend/executor - nodeHashjoin.c
Test: PostgreSQL 17devel
Date: 2023-12-01 19:11:07
Coverage: Lines: 431 / 468 (92.1 %)   Functions: 18 / 18 (100.0 %)

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * nodeHashjoin.c
       4             :  *    Routines to handle hash join nodes
       5             :  *
       6             :  * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
       7             :  * Portions Copyright (c) 1994, Regents of the University of California
       8             :  *
       9             :  *
      10             :  * IDENTIFICATION
      11             :  *    src/backend/executor/nodeHashjoin.c
      12             :  *
      13             :  * HASH JOIN
      14             :  *
      15             :  * This is based on the "hybrid hash join" algorithm described briefly on the
      16             :  * following page
      17             :  *
      18             :  *   https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join
      19             :  *
      20             :  * and in detail in the referenced paper:
      21             :  *
      22             :  *   "An Adaptive Hash Join Algorithm for Multiuser Environments"
      23             :  *   Hansjörg Zeller; Jim Gray (1990). Proceedings of the 16th VLDB conference.
      24             :  *   Brisbane: 186–197.
      25             :  *
      26             :  * If the inner side tuples of a hash join do not fit in memory, the hash join
      27             :  * can be executed in multiple batches.
      28             :  *
      29             :  * If the statistics on the inner side relation are accurate, the planner
      30             :  * chooses a multi-batch strategy and estimates the number of batches.
      31             :  *
      32             :  * The query executor measures the real size of the hashtable and increases the
      33             :  * number of batches if the hashtable grows too large.
      34             :  *
      35             :  * The number of batches is always a power of two, so an increase in the number
      36             :  * of batches doubles it.
      37             :  *
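
As a reading aid, here is a minimal sketch of why a power-of-two batch count
is convenient.  It is not the exact bit assignment used by
ExecHashGetBucketAndBatch(), which takes the bucket and batch numbers from
different parts of the hash value; it only illustrates the masking property.

    #include <stdint.h>

    /* Sketch: with a power-of-two nbatch, the batch is just masked hash bits. */
    static inline int
    sketch_batchno(uint32_t hashvalue, int nbatch)
    {
        return (int) (hashvalue & (uint32_t) (nbatch - 1));    /* hashvalue % nbatch */
    }

    /*
     * Doubling nbatch widens the mask by one bit, so a tuple that used to land
     * in batch k now lands in either batch k or batch k + old_nbatch -- that is,
     * repartitioning only ever moves tuples to later batches.
     */
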
      38             :  * Serial hash join measures batch size lazily -- waiting until it is loading a
      39             :  * batch to determine if it will fit in memory. While inserting tuples into the
      40             :  * hashtable, serial hash join will, if adding a tuple would push the hashtable
      41             :  * past work_mem, dump the hashtable's tuples out and reassign them either to
      42             :  * other batch files or to the current batch resident in the hashtable.
      43             :  *
      44             :  * Parallel hash join, on the other hand, completes all changes to the number
      45             :  * of batches during the build phase. If it increases the number of batches, it
      46             :  * dumps out all the tuples from all batches and reassigns them to entirely new
      47             :  * batch files. Then it checks every batch to ensure it will fit in the space
      48             :  * budget for the query.
      49             :  *
      50             :  * In both parallel and serial hash join, the executor currently makes a best
      51             :  * effort. If a particular batch will not fit in memory, it tries doubling the
      52             :  * number of batches. If, after a batch increase, there is a batch that
      53             :  * retained all or none of its tuples, the executor disables growth in the
      54             :  * number of batches globally. After growth is disabled, all batches that would
      55             :  * have previously triggered an increase in the number of batches instead
      56             :  * exceed the space allowed.
      57             :  *
      58             :  * PARALLELISM
      59             :  *
      60             :  * Hash joins can participate in parallel query execution in several ways.  A
      61             :  * parallel-oblivious hash join is one where the node is unaware that it is
      62             :  * part of a parallel plan.  In this case, a copy of the inner plan is used to
      63             :  * build a copy of the hash table in every backend, and the outer plan could
      64             :  * be built from either a partial or a complete path, so that the results of the
      65             :  * hash join are correspondingly either partial or complete.  A parallel-aware
      66             :  * hash join is one that behaves differently, coordinating work between
      67             :  * backends, and appears as Parallel Hash Join in EXPLAIN output.  A Parallel
      68             :  * Hash Join always appears with a Parallel Hash node.
      69             :  *
      70             :  * Parallel-aware hash joins use the same per-backend state machine to track
      71             :  * progress through the hash join algorithm as parallel-oblivious hash joins.
      72             :  * In a parallel-aware hash join, there is also a shared state machine that
      73             :  * co-operating backends use to synchronize their local state machines and
      74             :  * program counters.  The shared state machine is managed with a Barrier IPC
      75             :  * primitive.  When all attached participants arrive at a barrier, the phase
      76             :  * advances and all waiting participants are released.
      77             :  *
      78             :  * When a participant begins working on a parallel hash join, it must first
      79             :  * figure out how much progress has already been made, because participants
      80             :  * don't wait for each other to begin.  For this reason there are switch
      81             :  * statements at key points in the code where we have to synchronize our local
      82             :  * state machine with the phase, and then jump to the correct part of the
      83             :  * algorithm so that we can get started.
      84             :  *
      85             :  * One barrier called build_barrier is used to coordinate the hashing phases.
      86             :  * The phase is represented by an integer which begins at zero and increments
      87             :  * one by one, but in the code it is referred to by symbolic names as follows.
      88             :  * An asterisk indicates a phase that is performed by a single arbitrarily
      89             :  * chosen process.
      90             :  *
      91             :  *   PHJ_BUILD_ELECT                 -- initial state
      92             :  *   PHJ_BUILD_ALLOCATE*             -- one sets up the batches and table 0
      93             :  *   PHJ_BUILD_HASH_INNER            -- all hash the inner rel
      94             :  *   PHJ_BUILD_HASH_OUTER            -- (multi-batch only) all hash the outer
      95             :  *   PHJ_BUILD_RUN                   -- building done, probing can begin
      96             :  *   PHJ_BUILD_FREE*                 -- all work complete, one frees batches
      97             :  *
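
A minimal sketch of the synchronize-then-jump pattern follows.  It relies only
on what is described above -- BarrierAttach() reports the phase at which we
joined, the PHJ_BUILD_* phase numbers are ordered integers, and build_barrier
lives in ParallelHashJoinState -- and it assumes the usual PostgreSQL Barrier
and hashjoin declarations are in scope; everything a real participant would do
in each case is elided.

    /* Sketch only: decide how to join in, based on how far the build has got. */
    static void
    sketch_attach_to_build(ParallelHashJoinState *pstate)
    {
        Barrier    *build_barrier = &pstate->build_barrier;
        int         phase = BarrierAttach(build_barrier);

        if (phase <= PHJ_BUILD_HASH_INNER)
        {
            /* Early enough to help hash the inner relation. */
        }
        else if (phase < PHJ_BUILD_FREE)
        {
            /* The inner build is done; sync up and go find a batch to probe. */
        }
        else
        {
            /* Too late to participate: the shared state is being freed. */
            BarrierDetach(build_barrier);
        }
    }
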
      98             :  * While in the phase PHJ_BUILD_HASH_INNER a separate pair of barriers may
      99             :  * be used repeatedly as required to coordinate expansions in the number of
     100             :  * batches or buckets.  Their phases are as follows:
     101             :  *
     102             :  *   PHJ_GROW_BATCHES_ELECT          -- initial state
     103             :  *   PHJ_GROW_BATCHES_REALLOCATE*    -- one allocates new batches
     104             :  *   PHJ_GROW_BATCHES_REPARTITION    -- all repartition
     105             :  *   PHJ_GROW_BATCHES_DECIDE*        -- one detects skew and cleans up
     106             :  *   PHJ_GROW_BATCHES_FINISH         -- finished one growth cycle
     107             :  *
     108             :  *   PHJ_GROW_BUCKETS_ELECT          -- initial state
     109             :  *   PHJ_GROW_BUCKETS_REALLOCATE*    -- one allocates new buckets
     110             :  *   PHJ_GROW_BUCKETS_REINSERT       -- all insert tuples
     111             :  *
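
The phases marked with an asterisk are typically driven by the return value of
BarrierArriveAndWait(), which is true in just one (arbitrarily chosen)
attached participant.  A minimal sketch, assuming the same Barrier API, passing
0 for the wait-event argument and eliding the actual work:

    /* Sketch only: elect one backend to do the serial part of a growth cycle. */
    static void
    sketch_grow_batches(Barrier *grow_batches_barrier)
    {
        /* PHJ_GROW_BATCHES_ELECT -> PHJ_GROW_BATCHES_REALLOCATE */
        if (BarrierArriveAndWait(grow_batches_barrier, 0))
        {
            /* Elected: allocate the new, larger array of batches. */
        }

        /*
         * Everyone blocks here until the elected backend has finished and
         * arrived too; then all repartition in PHJ_GROW_BATCHES_REPARTITION.
         */
        BarrierArriveAndWait(grow_batches_barrier, 0);
    }
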
     112             :  * If the planner got the number of batches and buckets right, those won't be
     113             :  * necessary, but on the other hand we might end up needing to expand the
     114             :  * buckets or batches multiple times while hashing the inner relation to stay
     115             :  * within our memory budget and load factor target.  For that reason it's a
     116             :  * separate pair of barriers using circular phases.
     117             :  *
     118             :  * The PHJ_BUILD_HASH_OUTER phase is required only for multi-batch joins,
     119             :  * because we need to divide the outer relation into batches up front in order
     120             :  * to be able to process batches entirely independently.  In contrast, the
     121             :  * parallel-oblivious algorithm simply throws tuples 'forward' to 'later'
     122             :  * batches whenever it encounters them while scanning and probing, which it
     123             :  * can do because it processes batches in serial order.
     124             :  *
     125             :  * Once PHJ_BUILD_RUN is reached, backends split up and process
     126             :  * different batches, or gang up and work together on probing batches if there
     127             :  * aren't enough to go around.  For each batch there is a separate barrier
     128             :  * with the following phases:
     129             :  *
     130             :  *  PHJ_BATCH_ELECT          -- initial state
     131             :  *  PHJ_BATCH_ALLOCATE*      -- one allocates buckets
     132             :  *  PHJ_BATCH_LOAD           -- all load the hash table from disk
     133             :  *  PHJ_BATCH_PROBE          -- all probe
     134             :  *  PHJ_BATCH_SCAN*          -- one does right/right-anti/full unmatched scan
     135             :  *  PHJ_BATCH_FREE*          -- one frees memory
     136             :  *
     137             :  * Batch 0 is a special case, because it starts out in phase
     138             :  * PHJ_BATCH_PROBE; populating batch 0's hash table is done during
     139             :  * PHJ_BUILD_HASH_INNER so we can skip loading.
     140             :  *
     141             :  * Initially we try to plan for a single-batch hash join using the combined
     142             :  * hash_mem of all participants to create a large shared hash table.  If that
     143             :  * turns out either at planning or execution time to be impossible then we
     144             :  * fall back to regular hash_mem sized hash tables.
     145             :  *
     146             :  * To avoid deadlocks, we never wait for any barrier unless it is known that
     147             :  * all other backends attached to it are actively executing the node or have
     148             :  * finished.  Practically, that means that we never emit a tuple while attached
     149             :  * to a barrier, unless the barrier has reached a phase that means that no
     150             :  * process will wait on it again.  We emit tuples while attached to the build
     151             :  * barrier in phase PHJ_BUILD_RUN, and to a per-batch barrier in phase
     152             :  * PHJ_BATCH_PROBE.  These are advanced to PHJ_BUILD_FREE and PHJ_BATCH_SCAN
     153             :  * respectively without waiting, using BarrierArriveAndDetach() and
     154             :  * BarrierArriveAndDetachExceptLast() respectively.  The last to detach
     155             :  * receives a different return value so that it knows that it's safe to
     156             :  * clean up.  Any straggler process that attaches after that phase is reached
     157             :  * will see that it's too late to participate or access the relevant shared
     158             :  * memory objects.
     159             :  *
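
(A concrete version of the hazard this rule avoids: a participant that has
emitted a tuple may block writing to its tuple queue until the leader drains
that queue, so if the leader were then allowed to wait on a barrier for that
participant, neither could make progress.)
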
     160             :  *-------------------------------------------------------------------------
     161             :  */
     162             : 
     163             : #include "postgres.h"
     164             : 
     165             : #include "access/htup_details.h"
     166             : #include "access/parallel.h"
     167             : #include "executor/executor.h"
     168             : #include "executor/hashjoin.h"
     169             : #include "executor/nodeHash.h"
     170             : #include "executor/nodeHashjoin.h"
     171             : #include "miscadmin.h"
     172             : #include "pgstat.h"
     173             : #include "utils/memutils.h"
     174             : #include "utils/sharedtuplestore.h"
     175             : 
     176             : 
     177             : /*
     178             :  * States of the ExecHashJoin state machine
     179             :  */
     180             : #define HJ_BUILD_HASHTABLE      1
     181             : #define HJ_NEED_NEW_OUTER       2
     182             : #define HJ_SCAN_BUCKET          3
     183             : #define HJ_FILL_OUTER_TUPLE     4
     184             : #define HJ_FILL_INNER_TUPLES    5
     185             : #define HJ_NEED_NEW_BATCH       6
     186             : 
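
For orientation, the transitions driven by the switch statement in
ExecHashJoinImpl() below can be summarized as follows (a reading aid distilled
from that code, not an exhaustive account of the special cases):

    /*
     * HJ_BUILD_HASHTABLE   -> HJ_NEED_NEW_OUTER (serial) or HJ_NEED_NEW_BATCH
     *                         (parallel); may finish early if one side is empty
     * HJ_NEED_NEW_OUTER    -> HJ_SCAN_BUCKET once an outer tuple of the current
     *                         batch is in hand; at end of batch, either
     *                         HJ_FILL_INNER_TUPLES or HJ_NEED_NEW_BATCH
     * HJ_SCAN_BUCKET       -> emit matches; HJ_FILL_OUTER_TUPLE when the bucket
     *                         is exhausted
     * HJ_FILL_OUTER_TUPLE  -> maybe emit a null-extended tuple, then
     *                         HJ_NEED_NEW_OUTER
     * HJ_FILL_INNER_TUPLES -> emit unmatched inner tuples, then HJ_NEED_NEW_BATCH
     * HJ_NEED_NEW_BATCH    -> HJ_NEED_NEW_OUTER, or done when no batches remain
     */
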
     187             : /* Returns true if doing null-fill on outer relation */
     188             : #define HJ_FILL_OUTER(hjstate)  ((hjstate)->hj_NullInnerTupleSlot != NULL)
     189             : /* Returns true if doing null-fill on inner relation */
     190             : #define HJ_FILL_INNER(hjstate)  ((hjstate)->hj_NullOuterTupleSlot != NULL)
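
A note on the naming, since it can look inverted at first glance: null-filling
the outer side requires a slot of NULLs shaped like the inner relation, so
HJ_FILL_OUTER() tests hj_NullInnerTupleSlot, and vice versa for
HJ_FILL_INNER().  ExecInitHashJoin() below sets up these slots per join type
(LEFT/ANTI and FULL joins get a null inner slot; RIGHT/RIGHT_ANTI and FULL
joins get a null outer slot).
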
     191             : 
     192             : static TupleTableSlot *ExecHashJoinOuterGetTuple(PlanState *outerNode,
     193             :                                                  HashJoinState *hjstate,
     194             :                                                  uint32 *hashvalue);
     195             : static TupleTableSlot *ExecParallelHashJoinOuterGetTuple(PlanState *outerNode,
     196             :                                                          HashJoinState *hjstate,
     197             :                                                          uint32 *hashvalue);
     198             : static TupleTableSlot *ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
     199             :                                                  BufFile *file,
     200             :                                                  uint32 *hashvalue,
     201             :                                                  TupleTableSlot *tupleSlot);
     202             : static bool ExecHashJoinNewBatch(HashJoinState *hjstate);
     203             : static bool ExecParallelHashJoinNewBatch(HashJoinState *hjstate);
     204             : static void ExecParallelHashJoinPartitionOuter(HashJoinState *hjstate);
     205             : 
     206             : 
     207             : /* ----------------------------------------------------------------
     208             :  *      ExecHashJoinImpl
     209             :  *
     210             :  *      This function implements the Hybrid Hashjoin algorithm.  It is marked
     211             :  *      with an always-inline attribute so that ExecHashJoin() and
     212             :  *      ExecParallelHashJoin() can inline it.  Compilers that respect the
     213             :  *      attribute should create versions specialized for parallel == true and
     214             :  *      parallel == false with unnecessary branches removed.
     215             :  *
     216             :  *      Note: the relation we build the hash table on is the "inner";
     217             :  *            the other one is the "outer".
     218             :  * ----------------------------------------------------------------
     219             :  */
     220             : static pg_attribute_always_inline TupleTableSlot *
     221     8773024 : ExecHashJoinImpl(PlanState *pstate, bool parallel)
     222             : {
     223     8773024 :     HashJoinState *node = castNode(HashJoinState, pstate);
     224             :     PlanState  *outerNode;
     225             :     HashState  *hashNode;
     226             :     ExprState  *joinqual;
     227             :     ExprState  *otherqual;
     228             :     ExprContext *econtext;
     229             :     HashJoinTable hashtable;
     230             :     TupleTableSlot *outerTupleSlot;
     231             :     uint32      hashvalue;
     232             :     int         batchno;
     233             :     ParallelHashJoinState *parallel_state;
     234             : 
     235             :     /*
     236             :      * get information from HashJoin node
     237             :      */
     238     8773024 :     joinqual = node->js.joinqual;
     239     8773024 :     otherqual = node->js.ps.qual;
     240     8773024 :     hashNode = (HashState *) innerPlanState(node);
     241     8773024 :     outerNode = outerPlanState(node);
     242     8773024 :     hashtable = node->hj_HashTable;
     243     8773024 :     econtext = node->js.ps.ps_ExprContext;
     244     8773024 :     parallel_state = hashNode->parallel_state;
     245             : 
     246             :     /*
     247             :      * Reset per-tuple memory context to free any expression evaluation
     248             :      * storage allocated in the previous tuple cycle.
     249             :      */
     250     8773024 :     ResetExprContext(econtext);
     251             : 
     252             :     /*
     253             :      * run the hash join state machine
     254             :      */
     255             :     for (;;)
     256             :     {
     257             :         /*
     258             :          * It's possible to iterate this loop many times before returning a
     259             :          * tuple, in some pathological cases such as needing to move much of
     260             :          * the current batch to a later batch.  So let's check for interrupts
     261             :          * each time through.
     262             :          */
     263    33548570 :         CHECK_FOR_INTERRUPTS();
     264             : 
     265    33548570 :         switch (node->hj_JoinState)
     266             :         {
     267       22510 :             case HJ_BUILD_HASHTABLE:
     268             : 
     269             :                 /*
     270             :                  * First time through: build hash table for inner relation.
     271             :                  */
     272             :                 Assert(hashtable == NULL);
     273             : 
     274             :                 /*
     275             :                  * If the outer relation is completely empty, and it's not
     276             :                  * right/right-anti/full join, we can quit without building
     277             :                  * the hash table.  However, for an inner join it is only a
     278             :                  * win to check this when the outer relation's startup cost is
     279             :                  * less than the projected cost of building the hash table.
     280             :                  * Otherwise it's best to build the hash table first and see
     281             :                  * if the inner relation is empty.  (When it's a left join, we
     282             :                  * should always make this check, since we aren't going to be
     283             :                  * able to skip the join on the strength of an empty inner
     284             :                  * relation anyway.)
     285             :                  *
     286             :                  * If we are rescanning the join, we make use of information
     287             :                  * gained on the previous scan: don't bother to try the
     288             :                  * prefetch if the previous scan found the outer relation
     289             :                  * nonempty. This is not 100% reliable since with new
     290             :                  * parameters the outer relation might yield different
     291             :                  * results, but it's a good heuristic.
     292             :                  *
     293             :                  * The only way to make the check is to try to fetch a tuple
     294             :                  * from the outer plan node.  If we succeed, we have to stash
     295             :                  * it away for later consumption by ExecHashJoinOuterGetTuple.
     296             :                  */
     297       22510 :                 if (HJ_FILL_INNER(node))
     298             :                 {
     299             :                     /* no chance to not build the hash table */
     300        4942 :                     node->hj_FirstOuterTupleSlot = NULL;
     301             :                 }
     302       17568 :                 else if (parallel)
     303             :                 {
     304             :                     /*
     305             :                      * The empty-outer optimization is not implemented for
     306             :                      * shared hash tables, because no one participant can
     307             :                      * determine that there are no outer tuples, and it's not
     308             :                      * yet clear that it's worth the synchronization overhead
     309             :                      * of reaching consensus to figure that out.  So we have
     310             :                      * to build the hash table.
     311             :                      */
     312         326 :                     node->hj_FirstOuterTupleSlot = NULL;
     313             :                 }
     314       17242 :                 else if (HJ_FILL_OUTER(node) ||
     315       12616 :                          (outerNode->plan->startup_cost < hashNode->ps.plan->total_cost &&
     316       11740 :                           !node->hj_OuterNotEmpty))
     317             :                 {
     318       15524 :                     node->hj_FirstOuterTupleSlot = ExecProcNode(outerNode);
     319       15524 :                     if (TupIsNull(node->hj_FirstOuterTupleSlot))
     320             :                     {
     321        3606 :                         node->hj_OuterNotEmpty = false;
     322        3606 :                         return NULL;
     323             :                     }
     324             :                     else
     325       11918 :                         node->hj_OuterNotEmpty = true;
     326             :                 }
     327             :                 else
     328        1718 :                     node->hj_FirstOuterTupleSlot = NULL;
     329             : 
     330             :                 /*
     331             :                  * Create the hash table.  If using Parallel Hash, then
     332             :                  * whoever gets here first will create the hash table and any
     333             :                  * later arrivals will merely attach to it.
     334             :                  */
     335       18904 :                 hashtable = ExecHashTableCreate(hashNode,
     336             :                                                 node->hj_HashOperators,
     337             :                                                 node->hj_Collations,
     338       18904 :                                                 HJ_FILL_INNER(node));
     339       18904 :                 node->hj_HashTable = hashtable;
     340             : 
     341             :                 /*
     342             :                  * Execute the Hash node, to build the hash table.  If using
     343             :                  * Parallel Hash, then we'll try to help hashing unless we
     344             :                  * arrived too late.
     345             :                  */
     346       18904 :                 hashNode->hashtable = hashtable;
     347       18904 :                 (void) MultiExecProcNode((PlanState *) hashNode);
     348             : 
     349             :                 /*
     350             :                  * If the inner relation is completely empty, and we're not
     351             :                  * doing a left outer join, we can quit without scanning the
     352             :                  * outer relation.
     353             :                  */
     354       18904 :                 if (hashtable->totalTuples == 0 && !HJ_FILL_OUTER(node))
     355             :                 {
     356        1050 :                     if (parallel)
     357             :                     {
     358             :                         /*
     359             :                          * Advance the build barrier to PHJ_BUILD_RUN before
     360             :                          * proceeding so we can negotiate resource cleanup.
     361             :                          */
     362           6 :                         Barrier    *build_barrier = &parallel_state->build_barrier;
     363             : 
     364           8 :                         while (BarrierPhase(build_barrier) < PHJ_BUILD_RUN)
     365           2 :                             BarrierArriveAndWait(build_barrier, 0);
     366             :                     }
     367        1050 :                     return NULL;
     368             :                 }
     369             : 
     370             :                 /*
     371             :                  * need to remember whether nbatch has increased since we
     372             :                  * began scanning the outer relation
     373             :                  */
     374       17854 :                 hashtable->nbatch_outstart = hashtable->nbatch;
     375             : 
     376             :                 /*
     377             :                  * Reset OuterNotEmpty for scan.  (It's OK if we fetched a
     378             :                  * tuple above, because ExecHashJoinOuterGetTuple will
     379             :                  * immediately set it again.)
     380             :                  */
     381       17854 :                 node->hj_OuterNotEmpty = false;
     382             : 
     383       17854 :                 if (parallel)
     384             :                 {
     385             :                     Barrier    *build_barrier;
     386             : 
     387         392 :                     build_barrier = &parallel_state->build_barrier;
     388             :                     Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASH_OUTER ||
     389             :                            BarrierPhase(build_barrier) == PHJ_BUILD_RUN ||
     390             :                            BarrierPhase(build_barrier) == PHJ_BUILD_FREE);
     391         392 :                     if (BarrierPhase(build_barrier) == PHJ_BUILD_HASH_OUTER)
     392             :                     {
     393             :                         /*
     394             :                          * If multi-batch, we need to hash the outer relation
     395             :                          * up front.
     396             :                          */
     397         252 :                         if (hashtable->nbatch > 1)
     398         140 :                             ExecParallelHashJoinPartitionOuter(node);
     399         252 :                         BarrierArriveAndWait(build_barrier,
     400             :                                              WAIT_EVENT_HASH_BUILD_HASH_OUTER);
     401             :                     }
     402         140 :                     else if (BarrierPhase(build_barrier) == PHJ_BUILD_FREE)
     403             :                     {
     404             :                         /*
     405             :                          * If we attached so late that the job is finished and
     406             :                          * the batch state has been freed, we can return
     407             :                          * immediately.
     408             :                          */
     409           0 :                         return NULL;
     410             :                     }
     411             : 
     412             :                     /* Each backend should now select a batch to work on. */
     413             :                     Assert(BarrierPhase(build_barrier) == PHJ_BUILD_RUN);
     414         392 :                     hashtable->curbatch = -1;
     415         392 :                     node->hj_JoinState = HJ_NEED_NEW_BATCH;
     416             : 
     417         392 :                     continue;
     418             :                 }
     419             :                 else
     420       17462 :                     node->hj_JoinState = HJ_NEED_NEW_OUTER;
     421             : 
     422             :                 /* FALL THRU */
     423             : 
     424    15854898 :             case HJ_NEED_NEW_OUTER:
     425             : 
     426             :                 /*
     427             :                  * We don't have an outer tuple; try to get the next one
     428             :                  */
     429    15854898 :                 if (parallel)
     430             :                     outerTupleSlot =
     431     2160948 :                         ExecParallelHashJoinOuterGetTuple(outerNode, node,
     432             :                                                           &hashvalue);
     433             :                 else
     434             :                     outerTupleSlot =
     435    13693950 :                         ExecHashJoinOuterGetTuple(outerNode, node, &hashvalue);
     436             : 
     437    15854898 :                 if (TupIsNull(outerTupleSlot))
     438             :                 {
     439             :                     /* end of batch, or maybe whole join */
     440       20078 :                     if (HJ_FILL_INNER(node))
     441             :                     {
     442             :                         /* set up to scan for unmatched inner tuples */
     443        4746 :                         if (parallel)
     444             :                         {
     445             :                             /*
     446             :                              * Only one process is currently allowed to handle
     447             :                              * each batch's unmatched tuples in a parallel
     448             :                              * join.
     449             :                              */
     450          70 :                             if (ExecParallelPrepHashTableForUnmatched(node))
     451          66 :                                 node->hj_JoinState = HJ_FILL_INNER_TUPLES;
     452             :                             else
     453           4 :                                 node->hj_JoinState = HJ_NEED_NEW_BATCH;
     454             :                         }
     455             :                         else
     456             :                         {
     457        4676 :                             ExecPrepHashTableForUnmatched(node);
     458        4676 :                             node->hj_JoinState = HJ_FILL_INNER_TUPLES;
     459             :                         }
     460             :                     }
     461             :                     else
     462       15332 :                         node->hj_JoinState = HJ_NEED_NEW_BATCH;
     463       20078 :                     continue;
     464             :                 }
     465             : 
     466    15834820 :                 econtext->ecxt_outertuple = outerTupleSlot;
     467    15834820 :                 node->hj_MatchedOuter = false;
     468             : 
     469             :                 /*
     470             :                  * Find the corresponding bucket for this tuple in the main
     471             :                  * hash table or skew hash table.
     472             :                  */
     473    15834820 :                 node->hj_CurHashValue = hashvalue;
     474    15834820 :                 ExecHashGetBucketAndBatch(hashtable, hashvalue,
     475             :                                           &node->hj_CurBucketNo, &batchno);
     476    15834820 :                 node->hj_CurSkewBucketNo = ExecHashGetSkewBucket(hashtable,
     477             :                                                                  hashvalue);
     478    15834820 :                 node->hj_CurTuple = NULL;
     479             : 
     480             :                 /*
     481             :                  * The tuple might not belong to the current batch (where
     482             :                  * "current batch" includes the skew buckets if any).
     483             :                  */
     484    15834820 :                 if (batchno != hashtable->curbatch &&
     485     1471392 :                     node->hj_CurSkewBucketNo == INVALID_SKEW_BUCKET_NO)
     486             :                 {
     487             :                     bool        shouldFree;
     488     1470192 :                     MinimalTuple mintuple = ExecFetchSlotMinimalTuple(outerTupleSlot,
     489             :                                                                       &shouldFree);
     490             : 
     491             :                     /*
     492             :                      * Need to postpone this outer tuple to a later batch.
     493             :                      * Save it in the corresponding outer-batch file.
     494             :                      */
     495             :                     Assert(parallel_state == NULL);
     496             :                     Assert(batchno > hashtable->curbatch);
     497     1470192 :                     ExecHashJoinSaveTuple(mintuple, hashvalue,
     498     1470192 :                                           &hashtable->outerBatchFile[batchno],
     499             :                                           hashtable);
     500             : 
     501     1470192 :                     if (shouldFree)
     502     1470192 :                         heap_free_minimal_tuple(mintuple);
     503             : 
     504             :                     /* Loop around, staying in HJ_NEED_NEW_OUTER state */
     505     1470192 :                     continue;
     506             :                 }
     507             : 
     508             :                 /* OK, let's scan the bucket for matches */
     509    14364628 :                 node->hj_JoinState = HJ_SCAN_BUCKET;
     510             : 
     511             :                 /* FALL THRU */
     512             : 
     513    20542156 :             case HJ_SCAN_BUCKET:
     514             : 
     515             :                 /*
     516             :                  * Scan the selected hash bucket for matches to current outer
     517             :                  */
     518    20542156 :                 if (parallel)
     519             :                 {
     520     4200054 :                     if (!ExecParallelScanHashBucket(node, econtext))
     521             :                     {
     522             :                         /* out of matches; check for possible outer-join fill */
     523     2160030 :                         node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
     524     2160030 :                         continue;
     525             :                     }
     526             :                 }
     527             :                 else
     528             :                 {
     529    16342102 :                     if (!ExecScanHashBucket(node, econtext))
     530             :                     {
     531             :                         /* out of matches; check for possible outer-join fill */
     532     8886594 :                         node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
     533     8886594 :                         continue;
     534             :                     }
     535             :                 }
     536             : 
     537             :                 /*
     538             :                  * We've got a match, but still need to test non-hashed quals.
     539             :                  * ExecScanHashBucket already set up all the state needed to
     540             :                  * call ExecQual.
     541             :                  *
     542             :                  * If we pass the qual, then save state for next call and have
     543             :                  * ExecProject form the projection, store it in the tuple
     544             :                  * table, and return the slot.
     545             :                  *
     546             :                  * Only the joinquals determine tuple match status, but all
     547             :                  * quals must pass to actually return the tuple.
     548             :                  */
     549     9495532 :                 if (joinqual == NULL || ExecQual(joinqual, econtext))
     550             :                 {
     551     9343036 :                     node->hj_MatchedOuter = true;
     552             : 
     553             : 
     554             :                     /*
     555             :                      * This is really only needed if HJ_FILL_INNER(node), but
     556             :                      * we'll avoid the branch and just set it always.
     557             :                      */
     558     9343036 :                     if (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple)))
     559     5731056 :                         HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));
     560             : 
     561             :                     /* In an antijoin, we never return a matched tuple */
     562     9343036 :                     if (node->js.jointype == JOIN_ANTI)
     563             :                     {
     564     1541966 :                         node->hj_JoinState = HJ_NEED_NEW_OUTER;
     565     1541966 :                         continue;
     566             :                     }
     567             : 
     568             :                     /*
     569             :                      * In a right-antijoin, we never return a matched tuple.
     570             :                      * And we need to stay on the current outer tuple to
     571             :                      * continue scanning the inner side for matches.
     572             :                      */
     573     7801070 :                     if (node->js.jointype == JOIN_RIGHT_ANTI)
     574       21852 :                         continue;
     575             : 
     576             :                     /*
     577             :                      * If we only need to join to the first matching inner
     578             :                      * tuple, then consider returning this one, but after that
     579             :                      * continue with next outer tuple.
     580             :                      */
     581     7779218 :                     if (node->js.single_match)
     582     1775970 :                         node->hj_JoinState = HJ_NEED_NEW_OUTER;
     583             : 
     584     7779218 :                     if (otherqual == NULL || ExecQual(otherqual, econtext))
     585     7596518 :                         return ExecProject(node->js.ps.ps_ProjInfo);
     586             :                     else
     587      182700 :                         InstrCountFiltered2(node, 1);
     588             :                 }
     589             :                 else
     590      152496 :                     InstrCountFiltered1(node, 1);
     591      335196 :                 break;
     592             : 
     593    11046624 :             case HJ_FILL_OUTER_TUPLE:
     594             : 
     595             :                 /*
     596             :                  * The current outer tuple has run out of matches, so check
     597             :                  * whether to emit a dummy outer-join tuple.  Whether we emit
     598             :                  * one or not, the next state is NEED_NEW_OUTER.
     599             :                  */
     600    11046624 :                 node->hj_JoinState = HJ_NEED_NEW_OUTER;
     601             : 
     602    11046624 :                 if (!node->hj_MatchedOuter &&
     603     6338898 :                     HJ_FILL_OUTER(node))
     604             :                 {
     605             :                     /*
     606             :                      * Generate a fake join tuple with nulls for the inner
     607             :                      * tuple, and return it if it passes the non-join quals.
     608             :                      */
     609     1711960 :                     econtext->ecxt_innertuple = node->hj_NullInnerTupleSlot;
     610             : 
     611     1711960 :                     if (otherqual == NULL || ExecQual(otherqual, econtext))
     612      721170 :                         return ExecProject(node->js.ps.ps_ProjInfo);
     613             :                     else
     614      990790 :                         InstrCountFiltered2(node, 1);
     615             :                 }
     616    10325454 :                 break;
     617             : 
     618      444008 :             case HJ_FILL_INNER_TUPLES:
     619             : 
     620             :                 /*
     621             :                  * We have finished a batch, but we are doing
     622             :                  * right/right-anti/full join, so any unmatched inner tuples
     623             :                  * in the hashtable have to be emitted before we continue to
     624             :                  * the next batch.
     625             :                  */
     626      767944 :                 if (!(parallel ? ExecParallelScanHashTableForUnmatched(node, econtext)
     627      323936 :                       : ExecScanHashTableForUnmatched(node, econtext)))
     628             :                 {
     629             :                     /* no more unmatched tuples */
     630        4736 :                     node->hj_JoinState = HJ_NEED_NEW_BATCH;
     631        4736 :                     continue;
     632             :                 }
     633             : 
     634             :                 /*
     635             :                  * Generate a fake join tuple with nulls for the outer tuple,
     636             :                  * and return it if it passes the non-join quals.
     637             :                  */
     638      439272 :                 econtext->ecxt_outertuple = node->hj_NullOuterTupleSlot;
     639             : 
     640      439272 :                 if (otherqual == NULL || ExecQual(otherqual, econtext))
     641      432290 :                     return ExecProject(node->js.ps.ps_ProjInfo);
     642             :                 else
     643        6982 :                     InstrCountFiltered2(node, 1);
     644        6982 :                 break;
     645             : 
     646       20464 :             case HJ_NEED_NEW_BATCH:
     647             : 
     648             :                 /*
     649             :                  * Try to advance to next batch.  Done if there are no more.
     650             :                  */
     651       20464 :                 if (parallel)
     652             :                 {
     653        1310 :                     if (!ExecParallelHashJoinNewBatch(node))
     654         392 :                         return NULL;    /* end of parallel-aware join */
     655             :                 }
     656             :                 else
     657             :                 {
     658       19154 :                     if (!ExecHashJoinNewBatch(node))
     659       17998 :                         return NULL;    /* end of parallel-oblivious join */
     660             :                 }
     661        2074 :                 node->hj_JoinState = HJ_NEED_NEW_OUTER;
     662        2074 :                 break;
     663             : 
     664           0 :             default:
     665           0 :                 elog(ERROR, "unrecognized hashjoin state: %d",
     666             :                      (int) node->hj_JoinState);
     667             :         }
     668             :     }
     669             : }
     670             : 
     671             : /* ----------------------------------------------------------------
     672             :  *      ExecHashJoin
     673             :  *
     674             :  *      Parallel-oblivious version.
     675             :  * ----------------------------------------------------------------
     676             :  */
     677             : static TupleTableSlot *         /* return: a tuple or NULL */
     678     6492590 : ExecHashJoin(PlanState *pstate)
     679             : {
     680             :     /*
     681             :      * On sufficiently smart compilers this should be inlined with the
     682             :      * parallel-aware branches removed.
     683             :      */
     684     6492590 :     return ExecHashJoinImpl(pstate, false);
     685             : }
     686             : 
     687             : /* ----------------------------------------------------------------
     688             :  *      ExecParallelHashJoin
     689             :  *
     690             :  *      Parallel-aware version.
     691             :  * ----------------------------------------------------------------
     692             :  */
     693             : static TupleTableSlot *         /* return: a tuple or NULL */
     694     2280434 : ExecParallelHashJoin(PlanState *pstate)
     695             : {
     696             :     /*
     697             :      * On sufficiently smart compilers this should be inlined with the
     698             :      * parallel-oblivious branches removed.
     699             :      */
     700     2280434 :     return ExecHashJoinImpl(pstate, true);
     701             : }
     702             : 
     703             : /* ----------------------------------------------------------------
     704             :  *      ExecInitHashJoin
     705             :  *
     706             :  *      Init routine for HashJoin node.
     707             :  * ----------------------------------------------------------------
     708             :  */
     709             : HashJoinState *
     710       27666 : ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
     711             : {
     712             :     HashJoinState *hjstate;
     713             :     Plan       *outerNode;
     714             :     Hash       *hashNode;
     715             :     TupleDesc   outerDesc,
     716             :                 innerDesc;
     717             :     const TupleTableSlotOps *ops;
     718             : 
     719             :     /* check for unsupported flags */
     720             :     Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
     721             : 
     722             :     /*
     723             :      * create state structure
     724             :      */
     725       27666 :     hjstate = makeNode(HashJoinState);
     726       27666 :     hjstate->js.ps.plan = (Plan *) node;
     727       27666 :     hjstate->js.ps.state = estate;
     728             : 
     729             :     /*
     730             :      * See ExecHashJoinInitializeDSM() and ExecHashJoinInitializeWorker()
     731             :      * where this function may be replaced with a parallel version, if we
     732             :      * managed to launch a parallel query.
     733             :      */
     734       27666 :     hjstate->js.ps.ExecProcNode = ExecHashJoin;
     735       27666 :     hjstate->js.jointype = node->join.jointype;
     736             : 
     737             :     /*
     738             :      * Miscellaneous initialization
     739             :      *
     740             :      * create expression context for node
     741             :      */
     742       27666 :     ExecAssignExprContext(estate, &hjstate->js.ps);
     743             : 
     744             :     /*
     745             :      * initialize child nodes
     746             :      *
     747             :      * Note: we could suppress the REWIND flag for the inner input, which
     748             :      * would amount to betting that the hash will be a single batch.  Not
     749             :      * clear if this would be a win or not.
     750             :      */
     751       27666 :     outerNode = outerPlan(node);
     752       27666 :     hashNode = (Hash *) innerPlan(node);
     753             : 
     754       27666 :     outerPlanState(hjstate) = ExecInitNode(outerNode, estate, eflags);
     755       27666 :     outerDesc = ExecGetResultType(outerPlanState(hjstate));
     756       27666 :     innerPlanState(hjstate) = ExecInitNode((Plan *) hashNode, estate, eflags);
     757       27666 :     innerDesc = ExecGetResultType(innerPlanState(hjstate));
     758             : 
     759             :     /*
     760             :      * Initialize result slot, type and projection.
     761             :      */
     762       27666 :     ExecInitResultTupleSlotTL(&hjstate->js.ps, &TTSOpsVirtual);
     763       27666 :     ExecAssignProjectionInfo(&hjstate->js.ps, NULL);
     764             : 
     765             :     /*
     766             :      * tuple table initialization
     767             :      */
     768       27666 :     ops = ExecGetResultSlotOps(outerPlanState(hjstate), NULL);
     769       27666 :     hjstate->hj_OuterTupleSlot = ExecInitExtraTupleSlot(estate, outerDesc,
     770             :                                                         ops);
     771             : 
     772             :     /*
     773             :      * detect whether we need only consider the first matching inner tuple
     774             :      */
     775       41872 :     hjstate->js.single_match = (node->join.inner_unique ||
     776       14206 :                                 node->join.jointype == JOIN_SEMI);
     777             : 
     778             :     /* set up null tuples for outer joins, if needed */
     779       27666 :     switch (node->join.jointype)
     780             :     {
     781       16588 :         case JOIN_INNER:
     782             :         case JOIN_SEMI:
     783       16588 :             break;
     784        5062 :         case JOIN_LEFT:
     785             :         case JOIN_ANTI:
     786        5062 :             hjstate->hj_NullInnerTupleSlot =
     787        5062 :                 ExecInitNullTupleSlot(estate, innerDesc, &TTSOpsVirtual);
     788        5062 :             break;
     789        4980 :         case JOIN_RIGHT:
     790             :         case JOIN_RIGHT_ANTI:
     791        4980 :             hjstate->hj_NullOuterTupleSlot =
     792        4980 :                 ExecInitNullTupleSlot(estate, outerDesc, &TTSOpsVirtual);
     793        4980 :             break;
     794        1036 :         case JOIN_FULL:
     795        1036 :             hjstate->hj_NullOuterTupleSlot =
     796        1036 :                 ExecInitNullTupleSlot(estate, outerDesc, &TTSOpsVirtual);
     797        1036 :             hjstate->hj_NullInnerTupleSlot =
     798        1036 :                 ExecInitNullTupleSlot(estate, innerDesc, &TTSOpsVirtual);
     799        1036 :             break;
     800           0 :         default:
     801           0 :             elog(ERROR, "unrecognized join type: %d",
     802             :                  (int) node->join.jointype);
     803             :     }
     804             : 
     805             :     /*
     806             :      * now for some voodoo.  our temporary tuple slot is actually the result
     807             :      * tuple slot of the Hash node (which is our inner plan).  we can do this
     808             :      * because Hash nodes don't return tuples via ExecProcNode() -- instead
     809             :      * the hash join node uses ExecScanHashBucket() to get at the contents of
     810             :      * the hash table.  -cim 6/9/91
     811             :      */
     812             :     {
     813       27666 :         HashState  *hashstate = (HashState *) innerPlanState(hjstate);
     814       27666 :         TupleTableSlot *slot = hashstate->ps.ps_ResultTupleSlot;
     815             : 
     816       27666 :         hjstate->hj_HashTupleSlot = slot;
     817             :     }
     818             : 
     819             :     /*
     820             :      * initialize child expressions
     821             :      */
     822       27666 :     hjstate->js.ps.qual =
     823       27666 :         ExecInitQual(node->join.plan.qual, (PlanState *) hjstate);
     824       27666 :     hjstate->js.joinqual =
     825       27666 :         ExecInitQual(node->join.joinqual, (PlanState *) hjstate);
     826       27666 :     hjstate->hashclauses =
     827       27666 :         ExecInitQual(node->hashclauses, (PlanState *) hjstate);
     828             : 
     829             :     /*
     830             :      * initialize hash-specific info
     831             :      */
     832       27666 :     hjstate->hj_HashTable = NULL;
     833       27666 :     hjstate->hj_FirstOuterTupleSlot = NULL;
     834             : 
     835       27666 :     hjstate->hj_CurHashValue = 0;
     836       27666 :     hjstate->hj_CurBucketNo = 0;
     837       27666 :     hjstate->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
     838       27666 :     hjstate->hj_CurTuple = NULL;
     839             : 
     840       27666 :     hjstate->hj_OuterHashKeys = ExecInitExprList(node->hashkeys,
     841             :                                                  (PlanState *) hjstate);
     842       27666 :     hjstate->hj_HashOperators = node->hashoperators;
     843       27666 :     hjstate->hj_Collations = node->hashcollations;
     844             : 
     845       27666 :     hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
     846       27666 :     hjstate->hj_MatchedOuter = false;
     847       27666 :     hjstate->hj_OuterNotEmpty = false;
     848             : 
     849       27666 :     return hjstate;
     850             : }
     851             : 
     852             : /* ----------------------------------------------------------------
     853             :  *      ExecEndHashJoin
     854             :  *
     855             :  *      clean up routine for HashJoin node
     856             :  * ----------------------------------------------------------------
     857             :  */
     858             : void
     859       27582 : ExecEndHashJoin(HashJoinState *node)
     860             : {
     861             :     /*
     862             :      * Free hash table
     863             :      */
     864       27582 :     if (node->hj_HashTable)
     865             :     {
     866       17862 :         ExecHashTableDestroy(node->hj_HashTable);
     867       17862 :         node->hj_HashTable = NULL;
     868             :     }
     869             : 
     870             :     /*
     871             :      * clean up subtrees
     872             :      */
     873       27582 :     ExecEndNode(outerPlanState(node));
     874       27582 :     ExecEndNode(innerPlanState(node));
     875       27582 : }
     876             : 
     877             : /*
     878             :  * ExecHashJoinOuterGetTuple
     879             :  *
     880             :  *      get the next outer tuple for a parallel-oblivious hashjoin: either by
     881             :  *      executing the outer plan node in the first pass, or from the temp
     882             :  *      files for the hashjoin batches.
     883             :  *
     884             :  * Returns a null slot if no more outer tuples (within the current batch).
     885             :  *
     886             :  * On success, the tuple's hash value is stored at *hashvalue --- this is
     887             :  * either originally computed, or re-read from the temp file.
     888             :  */
     889             : static TupleTableSlot *
     890    13693950 : ExecHashJoinOuterGetTuple(PlanState *outerNode,
     891             :                           HashJoinState *hjstate,
     892             :                           uint32 *hashvalue)
     893             : {
     894    13693950 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     895    13693950 :     int         curbatch = hashtable->curbatch;
     896             :     TupleTableSlot *slot;
     897             : 
     898    13693950 :     if (curbatch == 0)          /* if it is the first pass */
     899             :     {
     900             :         /*
     901             :          * Check to see if the first outer tuple was already fetched by
     902             :          * ExecHashJoin() and not used yet.
     903             :          */
     904    12222602 :         slot = hjstate->hj_FirstOuterTupleSlot;
     905    12222602 :         if (!TupIsNull(slot))
     906       11326 :             hjstate->hj_FirstOuterTupleSlot = NULL;
     907             :         else
     908    12211276 :             slot = ExecProcNode(outerNode);
     909             : 
     910    12223416 :         while (!TupIsNull(slot))
     911             :         {
     912             :             /*
     913             :              * We have to compute the tuple's hash value.
     914             :              */
     915    12205412 :             ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
     916             : 
     917    12205412 :             econtext->ecxt_outertuple = slot;
     918    12205412 :             if (ExecHashGetHashValue(hashtable, econtext,
     919             :                                      hjstate->hj_OuterHashKeys,
     920             :                                      true,  /* outer tuple */
     921    12205412 :                                      HJ_FILL_OUTER(hjstate),
     922             :                                      hashvalue))
     923             :             {
     924             :                 /* remember outer relation is not empty for possible rescan */
     925    12204598 :                 hjstate->hj_OuterNotEmpty = true;
     926             : 
     927    12204598 :                 return slot;
     928             :             }
     929             : 
     930             :             /*
     931             :              * That tuple couldn't match because of a NULL, so discard it and
     932             :              * continue with the next one.
     933             :              */
     934         814 :             slot = ExecProcNode(outerNode);
     935             :         }
     936             :     }
     937     1471348 :     else if (curbatch < hashtable->nbatch)
     938             :     {
     939     1471348 :         BufFile    *file = hashtable->outerBatchFile[curbatch];
     940             : 
     941             :         /*
     942             :          * In outer-join cases, we could get here even though the batch file
     943             :          * is empty.
     944             :          */
     945     1471348 :         if (file == NULL)
     946           0 :             return NULL;
     947             : 
     948     1471348 :         slot = ExecHashJoinGetSavedTuple(hjstate,
     949             :                                          file,
     950             :                                          hashvalue,
     951             :                                          hjstate->hj_OuterTupleSlot);
     952     1471348 :         if (!TupIsNull(slot))
     953     1470192 :             return slot;
     954             :     }
     955             : 
     956             :     /* End of this batch */
     957       19160 :     return NULL;
     958             : }
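
Both the serial routine above and the parallel variant that follows discard an outer tuple when its hash value cannot be computed because a join key is NULL and the join has no need to emit unmatched outer tuples. A minimal sketch of that keep-or-discard decision; the function and parameter names are illustrative only, not part of nodeHashjoin.c:

#include <stdbool.h>

/*
 * Sketch only: decide whether an outer tuple is worth keeping.  Under SQL
 * semantics a NULL join key can never equal anything, so the tuple is only
 * useful if the join must emit unmatched outer tuples anyway (left/full
 * outer join).
 */
static bool
sketch_keep_outer_tuple(bool has_null_join_key, bool fill_outer)
{
    return !has_null_join_key || fill_outer;
}
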
     959             : 
     960             : /*
     961             :  * ExecHashJoinOuterGetTuple variant for the parallel case.
     962             :  */
     963             : static TupleTableSlot *
     964     2160948 : ExecParallelHashJoinOuterGetTuple(PlanState *outerNode,
     965             :                                   HashJoinState *hjstate,
     966             :                                   uint32 *hashvalue)
     967             : {
     968     2160948 :     HashJoinTable hashtable = hjstate->hj_HashTable;
     969     2160948 :     int         curbatch = hashtable->curbatch;
     970             :     TupleTableSlot *slot;
     971             : 
     972             :     /*
     973             :      * In the Parallel Hash case we only run the outer plan directly for
     974             :      * single-batch hash joins.  Otherwise we have to go to batch files, even
     975             :      * for batch 0.
     976             :      */
     977     2160948 :     if (curbatch == 0 && hashtable->nbatch == 1)
     978             :     {
     979      960132 :         slot = ExecProcNode(outerNode);
     980             : 
     981      960132 :         while (!TupIsNull(slot))
     982             :         {
     983      960006 :             ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
     984             : 
     985      960006 :             econtext->ecxt_outertuple = slot;
     986      960006 :             if (ExecHashGetHashValue(hashtable, econtext,
     987             :                                      hjstate->hj_OuterHashKeys,
     988             :                                      true,  /* outer tuple */
     989      960006 :                                      HJ_FILL_OUTER(hjstate),
     990             :                                      hashvalue))
     991      960006 :                 return slot;
     992             : 
     993             :             /*
     994             :              * That tuple couldn't match because of a NULL, so discard it and
     995             :              * continue with the next one.
     996             :              */
     997           0 :             slot = ExecProcNode(outerNode);
     998             :         }
     999             :     }
    1000     1200816 :     else if (curbatch < hashtable->nbatch)
    1001             :     {
    1002             :         MinimalTuple tuple;
    1003             : 
    1004     1200816 :         tuple = sts_parallel_scan_next(hashtable->batches[curbatch].outer_tuples,
    1005             :                                        hashvalue);
    1006     1200816 :         if (tuple != NULL)
    1007             :         {
    1008     1200024 :             ExecForceStoreMinimalTuple(tuple,
    1009             :                                        hjstate->hj_OuterTupleSlot,
    1010             :                                        false);
    1011     1200024 :             slot = hjstate->hj_OuterTupleSlot;
    1012     1200024 :             return slot;
    1013             :         }
    1014             :         else
    1015         792 :             ExecClearTuple(hjstate->hj_OuterTupleSlot);
    1016             :     }
    1017             : 
    1018             :     /* End of this batch */
    1019         918 :     hashtable->batches[curbatch].outer_eof = true;
    1020             : 
    1021         918 :     return NULL;
    1022             : }
    1023             : 
    1024             : /*
    1025             :  * ExecHashJoinNewBatch
    1026             :  *      switch to a new hashjoin batch
    1027             :  *
    1028             :  * Returns true if successful, false if there are no more batches.
    1029             :  */
    1030             : static bool
    1031       19154 : ExecHashJoinNewBatch(HashJoinState *hjstate)
    1032             : {
    1033       19154 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1034             :     int         nbatch;
    1035             :     int         curbatch;
    1036             :     BufFile    *innerFile;
    1037             :     TupleTableSlot *slot;
    1038             :     uint32      hashvalue;
    1039             : 
    1040       19154 :     nbatch = hashtable->nbatch;
    1041       19154 :     curbatch = hashtable->curbatch;
    1042             : 
    1043       19154 :     if (curbatch > 0)
    1044             :     {
    1045             :         /*
    1046             :          * We no longer need the previous outer batch file; close it right
    1047             :          * away to free disk space.
    1048             :          */
    1049        1156 :         if (hashtable->outerBatchFile[curbatch])
    1050        1156 :             BufFileClose(hashtable->outerBatchFile[curbatch]);
    1051        1156 :         hashtable->outerBatchFile[curbatch] = NULL;
    1052             :     }
    1053             :     else                        /* we just finished the first batch */
    1054             :     {
    1055             :         /*
    1056             :          * Reset some of the skew optimization state variables, since we no
    1057             :          * longer need to consider skew tuples after the first batch. The
    1058             :          * memory context reset we are about to do will release the skew
    1059             :          * hashtable itself.
    1060             :          */
    1061       17998 :         hashtable->skewEnabled = false;
    1062       17998 :         hashtable->skewBucket = NULL;
    1063       17998 :         hashtable->skewBucketNums = NULL;
    1064       17998 :         hashtable->nSkewBuckets = 0;
    1065       17998 :         hashtable->spaceUsedSkew = 0;
    1066             :     }
    1067             : 
    1068             :     /*
    1069             :      * We can always skip over any batches that are completely empty on both
    1070             :      * sides.  We can sometimes skip over batches that are empty on only one
    1071             :      * side, but there are exceptions:
    1072             :      *
    1073             :      * 1. In a left/full outer join, we have to process outer batches even if
    1074             :      * the inner batch is empty.  Similarly, in a right/right-anti/full outer
    1075             :      * join, we have to process inner batches even if the outer batch is
    1076             :      * empty.
    1077             :      *
    1078             :      * 2. If we have increased nbatch since the initial estimate, we have to
    1079             :      * scan inner batches since they might contain tuples that need to be
    1080             :      * reassigned to later inner batches.
    1081             :      *
    1082             :      * 3. Similarly, if we have increased nbatch since starting the outer
    1083             :      * scan, we have to rescan outer batches in case they contain tuples that
    1084             :      * need to be reassigned.
    1085             :      */
    1086       19154 :     curbatch++;
    1087       19154 :     while (curbatch < nbatch &&
    1088        1156 :            (hashtable->outerBatchFile[curbatch] == NULL ||
    1089        1156 :             hashtable->innerBatchFile[curbatch] == NULL))
    1090             :     {
    1091           0 :         if (hashtable->outerBatchFile[curbatch] &&
    1092           0 :             HJ_FILL_OUTER(hjstate))
    1093           0 :             break;              /* must process due to rule 1 */
    1094           0 :         if (hashtable->innerBatchFile[curbatch] &&
    1095           0 :             HJ_FILL_INNER(hjstate))
    1096           0 :             break;              /* must process due to rule 1 */
    1097           0 :         if (hashtable->innerBatchFile[curbatch] &&
    1098           0 :             nbatch != hashtable->nbatch_original)
    1099           0 :             break;              /* must process due to rule 2 */
    1100           0 :         if (hashtable->outerBatchFile[curbatch] &&
    1101           0 :             nbatch != hashtable->nbatch_outstart)
    1102           0 :             break;              /* must process due to rule 3 */
    1103             :         /* We can ignore this batch. */
    1104             :         /* Release associated temp files right away. */
    1105           0 :         if (hashtable->innerBatchFile[curbatch])
    1106           0 :             BufFileClose(hashtable->innerBatchFile[curbatch]);
    1107           0 :         hashtable->innerBatchFile[curbatch] = NULL;
    1108           0 :         if (hashtable->outerBatchFile[curbatch])
    1109           0 :             BufFileClose(hashtable->outerBatchFile[curbatch]);
    1110           0 :         hashtable->outerBatchFile[curbatch] = NULL;
    1111           0 :         curbatch++;
    1112             :     }
    1113             : 
    1114       19154 :     if (curbatch >= nbatch)
    1115       17998 :         return false;           /* no more batches */
    1116             : 
    1117        1156 :     hashtable->curbatch = curbatch;
    1118             : 
    1119             :     /*
    1120             :      * Reload the hash table with the new inner batch (which could be empty)
    1121             :      */
    1122        1156 :     ExecHashTableReset(hashtable);
    1123             : 
    1124        1156 :     innerFile = hashtable->innerBatchFile[curbatch];
    1125             : 
    1126        1156 :     if (innerFile != NULL)
    1127             :     {
    1128        1156 :         if (BufFileSeek(innerFile, 0, 0, SEEK_SET))
    1129           0 :             ereport(ERROR,
    1130             :                     (errcode_for_file_access(),
    1131             :                      errmsg("could not rewind hash-join temporary file")));
    1132             : 
    1133     2433854 :         while ((slot = ExecHashJoinGetSavedTuple(hjstate,
    1134             :                                                  innerFile,
    1135             :                                                  &hashvalue,
    1136             :                                                  hjstate->hj_HashTupleSlot)))
    1137             :         {
    1138             :             /*
    1139             :              * NOTE: some tuples may be sent to future batches.  Also, it is
    1140             :              * possible for hashtable->nbatch to be increased here!
    1141             :              */
    1142     2432698 :             ExecHashTableInsert(hashtable, slot, hashvalue);
    1143             :         }
    1144             : 
    1145             :         /*
    1146             :          * after we build the hash table, the inner batch file is no longer
    1147             :          * needed
    1148             :          */
    1149        1156 :         BufFileClose(innerFile);
    1150        1156 :         hashtable->innerBatchFile[curbatch] = NULL;
    1151             :     }
    1152             : 
    1153             :     /*
    1154             :      * Rewind outer batch file (if present), so that we can start reading it.
    1155             :      */
    1156        1156 :     if (hashtable->outerBatchFile[curbatch] != NULL)
    1157             :     {
    1158        1156 :         if (BufFileSeek(hashtable->outerBatchFile[curbatch], 0, 0, SEEK_SET))
    1159           0 :             ereport(ERROR,
    1160             :                     (errcode_for_file_access(),
    1161             :                      errmsg("could not rewind hash-join temporary file")));
    1162             :     }
    1163             : 
    1164        1156 :     return true;
    1165             : }
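
The skip logic above condenses to a single predicate. The following is a minimal, self-contained sketch of rules 1-3 from the comment in ExecHashJoinNewBatch; the struct and field names are simplified stand-ins for illustration, not the real HashJoinTable and HashJoinState definitions.

#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins; not the real executor structs. */
typedef struct SketchBatch
{
    void       *outer_file;         /* NULL if the outer batch file is empty */
    void       *inner_file;         /* NULL if the inner batch file is empty */
} SketchBatch;

typedef struct SketchJoin
{
    bool        fill_outer;         /* left/full join must emit unmatched outer tuples */
    bool        fill_inner;         /* right/right-anti/full join must emit unmatched inner tuples */
    int         nbatch;             /* current number of batches */
    int         nbatch_original;    /* nbatch when the hash table was first built */
    int         nbatch_outstart;    /* nbatch when the outer scan started */
} SketchJoin;

/*
 * Return true if a batch must be processed even though it may be empty on one
 * side, mirroring rules 1-3 above; a false result means both temp files can
 * be closed right away and the batch skipped.
 */
static bool
sketch_batch_must_be_processed(const SketchJoin *join, const SketchBatch *batch)
{
    if (batch->outer_file != NULL && batch->inner_file != NULL)
        return true;                /* not empty on either side */
    if (batch->outer_file != NULL && join->fill_outer)
        return true;                /* rule 1, outer side */
    if (batch->inner_file != NULL && join->fill_inner)
        return true;                /* rule 1, inner side */
    if (batch->inner_file != NULL && join->nbatch != join->nbatch_original)
        return true;                /* rule 2: inner tuples may belong to later batches */
    if (batch->outer_file != NULL && join->nbatch != join->nbatch_outstart)
        return true;                /* rule 3: outer tuples may belong to later batches */
    return false;
}
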
    1166             : 
    1167             : /*
    1168             :  * Choose a batch to work on, and attach to it.  Returns true if successful,
    1169             :  * false if there are no more batches.
    1170             :  */
    1171             : static bool
    1172        1310 : ExecParallelHashJoinNewBatch(HashJoinState *hjstate)
    1173             : {
    1174        1310 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1175             :     int         start_batchno;
    1176             :     int         batchno;
    1177             : 
    1178             :     /*
    1179             :      * If we were already attached to a batch, remember not to bother checking
    1180             :      * it again, and detach from it (possibly freeing the hash table if we are
    1181             :      * last to detach).
    1182             :      */
    1183        1310 :     if (hashtable->curbatch >= 0)
    1184             :     {
    1185         914 :         hashtable->batches[hashtable->curbatch].done = true;
    1186         914 :         ExecHashTableDetachBatch(hashtable);
    1187             :     }
    1188             : 
    1189             :     /*
    1190             :      * Search for a batch that isn't done.  We use an atomic counter to start
    1191             :      * our search at a different batch in every participant when there are
    1192             :      * more batches than participants.
    1193             :      */
    1194        1310 :     batchno = start_batchno =
    1195        1310 :         pg_atomic_fetch_add_u32(&hashtable->parallel_state->distributor, 1) %
    1196        1310 :         hashtable->nbatch;
    1197             :     do
    1198             :     {
    1199             :         uint32      hashvalue;
    1200             :         MinimalTuple tuple;
    1201             :         TupleTableSlot *slot;
    1202             : 
    1203        3248 :         if (!hashtable->batches[batchno].done)
    1204             :         {
    1205             :             SharedTuplestoreAccessor *inner_tuples;
    1206        1862 :             Barrier    *batch_barrier =
    1207        1862 :                 &hashtable->batches[batchno].shared->batch_barrier;
    1208             : 
    1209        1862 :             switch (BarrierAttach(batch_barrier))
    1210             :             {
    1211         630 :                 case PHJ_BATCH_ELECT:
    1212             : 
    1213             :                     /* One backend allocates the hash table. */
    1214         630 :                     if (BarrierArriveAndWait(batch_barrier,
    1215             :                                              WAIT_EVENT_HASH_BATCH_ELECT))
    1216         630 :                         ExecParallelHashTableAlloc(hashtable, batchno);
    1217             :                     /* Fall through. */
    1218             : 
    1219             :                 case PHJ_BATCH_ALLOCATE:
    1220             :                     /* Wait for allocation to complete. */
    1221         630 :                     BarrierArriveAndWait(batch_barrier,
    1222             :                                          WAIT_EVENT_HASH_BATCH_ALLOCATE);
    1223             :                     /* Fall through. */
    1224             : 
    1225         644 :                 case PHJ_BATCH_LOAD:
    1226             :                     /* Start (or join in) loading tuples. */
    1227         644 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1228         644 :                     inner_tuples = hashtable->batches[batchno].inner_tuples;
    1229         644 :                     sts_begin_parallel_scan(inner_tuples);
    1230     1087310 :                     while ((tuple = sts_parallel_scan_next(inner_tuples,
    1231             :                                                            &hashvalue)))
    1232             :                     {
    1233     1086666 :                         ExecForceStoreMinimalTuple(tuple,
    1234             :                                                    hjstate->hj_HashTupleSlot,
    1235             :                                                    false);
    1236     1086666 :                         slot = hjstate->hj_HashTupleSlot;
    1237     1086666 :                         ExecParallelHashTableInsertCurrentBatch(hashtable, slot,
    1238             :                                                                 hashvalue);
    1239             :                     }
    1240         644 :                     sts_end_parallel_scan(inner_tuples);
    1241         644 :                     BarrierArriveAndWait(batch_barrier,
    1242             :                                          WAIT_EVENT_HASH_BATCH_LOAD);
    1243             :                     /* Fall through. */
    1244             : 
    1245         918 :                 case PHJ_BATCH_PROBE:
    1246             : 
    1247             :                     /*
    1248             :                      * This batch is ready to probe.  Return control to
    1249             :                      * caller. We stay attached to batch_barrier so that the
    1250             :                      * hash table stays alive until everyone's finished
    1251             :                      * probing it, but no participant is allowed to wait at
    1252             :                      * this barrier again (or else a deadlock could occur).
    1253             :                      * All attached participants must eventually detach from
    1254             :                      * the barrier and one worker must advance the phase so
    1255             :                      * that the final phase is reached.
    1256             :                      */
    1257         918 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1258         918 :                     sts_begin_parallel_scan(hashtable->batches[batchno].outer_tuples);
    1259             : 
    1260         918 :                     return true;
    1261           2 :                 case PHJ_BATCH_SCAN:
    1262             : 
    1263             :                     /*
    1264             :                      * In principle, we could help scan for unmatched tuples,
    1265             :                      * since that phase is already underway (the thing we
    1266             :                      * can't do under current deadlock-avoidance rules is wait
    1267             :                      * for others to arrive at PHJ_BATCH_SCAN, because
    1268             :                      * PHJ_BATCH_PROBE emits tuples, but in this case we just
    1269             :                      * got here without waiting).  That is not yet done.  For
    1270             :                      * now, we just detach and go around again.  We have to
    1271             :                      * use ExecHashTableDetachBatch() because there's a small
    1272             :                      * chance we'll be the last to detach, and then we're
    1273             :                      * responsible for freeing memory.
    1274             :                      */
    1275           2 :                     ExecParallelHashTableSetCurrentBatch(hashtable, batchno);
    1276           2 :                     hashtable->batches[batchno].done = true;
    1277           2 :                     ExecHashTableDetachBatch(hashtable);
    1278           2 :                     break;
    1279             : 
    1280         942 :                 case PHJ_BATCH_FREE:
    1281             : 
    1282             :                     /*
    1283             :                      * Already done.  Detach and go around again (if any
    1284             :                      * remain).
    1285             :                      */
    1286         942 :                     BarrierDetach(batch_barrier);
    1287         942 :                     hashtable->batches[batchno].done = true;
    1288         942 :                     hashtable->curbatch = -1;
    1289         942 :                     break;
    1290             : 
    1291           0 :                 default:
    1292           0 :                     elog(ERROR, "unexpected batch phase %d",
    1293             :                          BarrierPhase(batch_barrier));
    1294             :             }
    1295             :         }
    1296        2330 :         batchno = (batchno + 1) % hashtable->nbatch;
    1297        2330 :     } while (batchno != start_batchno);
    1298             : 
    1299         392 :     return false;
    1300             : }
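
The switch above attaches at whatever phase the per-batch barrier has already reached and relies on case fall-through to perform only the remaining steps. A schematic sketch of that pattern, using made-up phase names and printf placeholders in place of the real barrier and hash-table calls:

#include <stdio.h>

/*
 * Illustrative phase names only; the real PHJ_BATCH_* constants and the real
 * work (barrier waits, hash table allocation, loading, probing) are in the
 * function above and in nodeHash.c.
 */
typedef enum SketchBatchPhase
{
    SKETCH_BATCH_ELECT,
    SKETCH_BATCH_ALLOCATE,
    SKETCH_BATCH_LOAD,
    SKETCH_BATCH_PROBE,
    SKETCH_BATCH_SCAN,
    SKETCH_BATCH_FREE
} SketchBatchPhase;

/*
 * A worker attaching to a batch joins at whatever phase the batch barrier has
 * already reached and falls through the remaining steps.  Returns 1 if the
 * caller should go on to probe this batch, 0 if it should try another batch.
 */
static int
sketch_join_batch(SketchBatchPhase attached_at)
{
    switch (attached_at)
    {
        case SKETCH_BATCH_ELECT:
            printf("one elected worker allocates the hash table\n");
            /* fall through */
        case SKETCH_BATCH_ALLOCATE:
            printf("wait for the allocation to finish\n");
            /* fall through */
        case SKETCH_BATCH_LOAD:
            printf("help load inner tuples into the hash table\n");
            /* fall through */
        case SKETCH_BATCH_PROBE:
            printf("probe; never wait on this barrier again\n");
            return 1;
        case SKETCH_BATCH_SCAN:
        case SKETCH_BATCH_FREE:
            printf("batch is finished or being scanned; detach and move on\n");
            return 0;
    }
    return 0;
}
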
    1301             : 
    1302             : /*
    1303             :  * ExecHashJoinSaveTuple
    1304             :  *      save a tuple to a batch file.
    1305             :  *
    1306             :  * The data recorded in the file for each tuple is its hash value,
    1307             :  * then the tuple in MinimalTuple format.
    1308             :  *
    1309             :  * fileptr points to a batch file in one of the hashtable arrays.
    1310             :  *
    1311             :  * The batch files (and their buffers) are allocated in the spill context
    1312             :  * created for the hashtable.
    1313             :  */
    1314             : void
    1315     3902890 : ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,
    1316             :                       BufFile **fileptr, HashJoinTable hashtable)
    1317             : {
    1318     3902890 :     BufFile    *file = *fileptr;
    1319             : 
    1320             :     /*
    1321             :      * The batch file is lazily created. If this is the first tuple written to
    1322             :      * this batch, the batch file is created and its buffer is allocated in
    1323             :      * the spillCxt context, NOT in the batchCxt.
    1324             :      *
    1325             :      * During the build phase, buffered files are created for inner batches.
    1326             :      * Each batch's buffered file is closed (and its buffer freed) after the
    1327             :      * batch is loaded into memory during the outer side scan. Therefore, it
    1328             :      * is necessary to allocate the batch file buffer in a memory context
    1329             :      * which outlives the batch itself.
    1330             :      *
    1331             :      * Also, we use spillCxt instead of hashCxt so that memory consumed by
    1332             :      * spilling is accounted for more accurately.
    1333             :      */
    1334     3902890 :     if (file == NULL)
    1335             :     {
    1336        2312 :         MemoryContext oldctx = MemoryContextSwitchTo(hashtable->spillCxt);
    1337             : 
    1338        2312 :         file = BufFileCreateTemp(false);
    1339        2312 :         *fileptr = file;
    1340             : 
    1341        2312 :         MemoryContextSwitchTo(oldctx);
    1342             :     }
    1343             : 
    1344     3902890 :     BufFileWrite(file, &hashvalue, sizeof(uint32));
    1345     3902890 :     BufFileWrite(file, tuple, tuple->t_len);
    1346     3902890 : }
    1347             : 
    1348             : /*
    1349             :  * ExecHashJoinGetSavedTuple
    1350             :  *      read the next tuple from a batch file.  Return NULL if no more.
    1351             :  *
    1352             :  * On success, *hashvalue is set to the tuple's hash value, and the tuple
    1353             :  * itself is stored in the given slot.
    1354             :  */
    1355             : static TupleTableSlot *
    1356     3905202 : ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
    1357             :                           BufFile *file,
    1358             :                           uint32 *hashvalue,
    1359             :                           TupleTableSlot *tupleSlot)
    1360             : {
    1361             :     uint32      header[2];
    1362             :     size_t      nread;
    1363             :     MinimalTuple tuple;
    1364             : 
    1365             :     /*
    1366             :      * We check for interrupts here because this is typically taken as an
    1367             :      * alternative code path to an ExecProcNode() call, which would include
    1368             :      * such a check.
    1369             :      */
    1370     3905202 :     CHECK_FOR_INTERRUPTS();
    1371             : 
    1372             :     /*
    1373             :      * Since both the hash value and the MinimalTuple length word are uint32,
    1374             :      * we can read them both in one BufFileRead() call without any type
    1375             :      * cheating.
    1376             :      */
    1377     3905202 :     nread = BufFileReadMaybeEOF(file, header, sizeof(header), true);
    1378     3905202 :     if (nread == 0)             /* end of file */
    1379             :     {
    1380        2312 :         ExecClearTuple(tupleSlot);
    1381        2312 :         return NULL;
    1382             :     }
    1383     3902890 :     *hashvalue = header[0];
    1384     3902890 :     tuple = (MinimalTuple) palloc(header[1]);
    1385     3902890 :     tuple->t_len = header[1];
    1386     3902890 :     BufFileReadExact(file,
    1387             :                      (char *) tuple + sizeof(uint32),
    1388     3902890 :                      header[1] - sizeof(uint32));
    1389     3902890 :     ExecForceStoreMinimalTuple(tuple, tupleSlot, true);
    1390     3902890 :     return tupleSlot;
    1391             : }
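
ExecHashJoinSaveTuple and ExecHashJoinGetSavedTuple round-trip records laid out as a uint32 hash value followed by the MinimalTuple, whose own first word is its uint32 length (t_len); that is what lets the read side fetch both header words in a single call. A stand-alone sketch of the same layout, assuming plain stdio in place of BufFile and a simplified stand-in for MinimalTuple:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Simplified stand-in: like a real MinimalTuple, its first field is the total
 * length in bytes, including the length word itself.
 */
typedef struct SketchTuple
{
    uint32_t    t_len;
    char        data[];         /* opaque payload */
} SketchTuple;

/* Write one record: the hash value, then the tuple (length word first). */
static void
sketch_save_tuple(FILE *file, uint32_t hashvalue, const SketchTuple *tuple)
{
    fwrite(&hashvalue, sizeof(uint32_t), 1, file);
    fwrite(tuple, tuple->t_len, 1, file);
}

/*
 * Read one record back.  Returns NULL at end of file.  Because both the hash
 * value and t_len are uint32, they can be read together as a two-word header.
 */
static SketchTuple *
sketch_load_tuple(FILE *file, uint32_t *hashvalue)
{
    uint32_t    header[2];      /* header[0] = hash value, header[1] = t_len */
    SketchTuple *tuple;

    if (fread(header, sizeof(uint32_t), 2, file) != 2)
        return NULL;            /* end of file (errors not distinguished here) */

    *hashvalue = header[0];
    tuple = malloc(header[1]);
    tuple->t_len = header[1];
    /* The length word was already consumed as part of the header. */
    if (fread((char *) tuple + sizeof(uint32_t), 1,
              header[1] - sizeof(uint32_t), file) != header[1] - sizeof(uint32_t))
    {
        free(tuple);
        return NULL;            /* truncated record */
    }
    return tuple;
}
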
    1392             : 
    1393             : 
    1394             : void
    1395        2078 : ExecReScanHashJoin(HashJoinState *node)
    1396             : {
    1397        2078 :     PlanState  *outerPlan = outerPlanState(node);
    1398        2078 :     PlanState  *innerPlan = innerPlanState(node);
    1399             : 
    1400             :     /*
    1401             :      * In a multi-batch join, we currently have to do rescans the hard way,
    1402             :      * primarily because batch temp files may have already been released. But
    1403             :      * if it's a single-batch join, and there is no parameter change for the
    1404             :      * inner subnode, then we can just re-use the existing hash table without
    1405             :      * rebuilding it.
    1406             :      */
    1407        2078 :     if (node->hj_HashTable != NULL)
    1408             :     {
    1409        1684 :         if (node->hj_HashTable->nbatch == 1 &&
    1410        1684 :             innerPlan->chgParam == NULL)
    1411             :         {
    1412             :             /*
    1413             :              * Okay to reuse the hash table; needn't rescan inner, either.
    1414             :              *
    1415             :              * However, if it's a right/right-anti/full join, we'd better
    1416             :              * reset the inner-tuple match flags contained in the table.
    1417             :              */
    1418         726 :             if (HJ_FILL_INNER(node))
    1419          14 :                 ExecHashTableResetMatchFlags(node->hj_HashTable);
    1420             : 
    1421             :             /*
    1422             :              * Also, we need to reset our state about the emptiness of the
    1423             :              * outer relation, so that the new scan of the outer will update
    1424             :              * it correctly if it turns out to be empty this time. (There's no
    1425             :              * harm in clearing it now because ExecHashJoin won't need the
    1426             :              * info.  In the other cases, where the hash table doesn't exist
    1427             :              * or we are destroying it, we leave this state alone because
    1428             :              * ExecHashJoin will need it the first time through.)
    1429             :              */
    1430         726 :             node->hj_OuterNotEmpty = false;
    1431             : 
    1432             :             /* ExecHashJoin can skip the BUILD_HASHTABLE step */
    1433         726 :             node->hj_JoinState = HJ_NEED_NEW_OUTER;
    1434             :         }
    1435             :         else
    1436             :         {
    1437             :             /* must destroy and rebuild hash table */
    1438         958 :             HashState  *hashNode = castNode(HashState, innerPlan);
    1439             : 
    1440             :             Assert(hashNode->hashtable == node->hj_HashTable);
    1441             :             /* accumulate stats from old hash table, if wanted */
    1442             :             /* (this should match ExecShutdownHash) */
    1443         958 :             if (hashNode->ps.instrument && !hashNode->hinstrument)
    1444           0 :                 hashNode->hinstrument = (HashInstrumentation *)
    1445           0 :                     palloc0(sizeof(HashInstrumentation));
    1446         958 :             if (hashNode->hinstrument)
    1447           0 :                 ExecHashAccumInstrumentation(hashNode->hinstrument,
    1448             :                                              hashNode->hashtable);
    1449             :             /* for safety, be sure to clear child plan node's pointer too */
    1450         958 :             hashNode->hashtable = NULL;
    1451             : 
    1452         958 :             ExecHashTableDestroy(node->hj_HashTable);
    1453         958 :             node->hj_HashTable = NULL;
    1454         958 :             node->hj_JoinState = HJ_BUILD_HASHTABLE;
    1455             : 
    1456             :             /*
    1457             :              * if chgParam of subnode is not null then plan will be re-scanned
    1458             :              * by first ExecProcNode.
    1459             :              */
    1460         958 :             if (innerPlan->chgParam == NULL)
    1461           0 :                 ExecReScan(innerPlan);
    1462             :         }
    1463             :     }
    1464             : 
    1465             :     /* Always reset intra-tuple state */
    1466        2078 :     node->hj_CurHashValue = 0;
    1467        2078 :     node->hj_CurBucketNo = 0;
    1468        2078 :     node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
    1469        2078 :     node->hj_CurTuple = NULL;
    1470             : 
    1471        2078 :     node->hj_MatchedOuter = false;
    1472        2078 :     node->hj_FirstOuterTupleSlot = NULL;
    1473             : 
    1474             :     /*
    1475             :      * if chgParam of subnode is not null then plan will be re-scanned by
    1476             :      * first ExecProcNode.
    1477             :      */
    1478        2078 :     if (outerPlan->chgParam == NULL)
    1479        1380 :         ExecReScan(outerPlan);
    1480        2078 : }
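
The branch at the top of ExecReScanHashJoin reduces to a small reuse test. A sketch with illustrative field names standing in for hj_HashTable, its nbatch, and the inner plan's chgParam:

#include <stdbool.h>

/* Illustrative stand-in; not the real HashJoinState. */
typedef struct SketchRescan
{
    bool        have_hash_table;        /* hj_HashTable != NULL */
    int         nbatch;                 /* batches in that hash table */
    bool        inner_params_changed;   /* innerPlan->chgParam != NULL */
} SketchRescan;

/*
 * True if the existing hash table can be kept across a rescan, so the join
 * can restart at HJ_NEED_NEW_OUTER instead of HJ_BUILD_HASHTABLE.  Multi-batch
 * joins cannot do this because their batch temp files may already be gone,
 * and a parameter change on the inner side means the table contents are stale.
 */
static bool
sketch_can_reuse_hash_table(const SketchRescan *s)
{
    return s->have_hash_table &&
        s->nbatch == 1 &&
        !s->inner_params_changed;
}
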
    1481             : 
    1482             : void
    1483       24716 : ExecShutdownHashJoin(HashJoinState *node)
    1484             : {
    1485       24716 :     if (node->hj_HashTable)
    1486             :     {
    1487             :         /*
    1488             :          * Detach from shared state before DSM memory goes away.  This makes
    1489             :          * sure that we don't have any pointers into DSM memory by the time
    1490             :          * ExecEndHashJoin runs.
    1491             :          */
    1492       17844 :         ExecHashTableDetachBatch(node->hj_HashTable);
    1493       17844 :         ExecHashTableDetach(node->hj_HashTable);
    1494             :     }
    1495       24716 : }
    1496             : 
    1497             : static void
    1498         140 : ExecParallelHashJoinPartitionOuter(HashJoinState *hjstate)
    1499             : {
    1500         140 :     PlanState  *outerState = outerPlanState(hjstate);
    1501         140 :     ExprContext *econtext = hjstate->js.ps.ps_ExprContext;
    1502         140 :     HashJoinTable hashtable = hjstate->hj_HashTable;
    1503             :     TupleTableSlot *slot;
    1504             :     uint32      hashvalue;
    1505             :     int         i;
    1506             : 
    1507             :     Assert(hjstate->hj_FirstOuterTupleSlot == NULL);
    1508             : 
    1509             :     /* Execute outer plan, writing all tuples to shared tuplestores. */
    1510             :     for (;;)
    1511             :     {
    1512     1200164 :         slot = ExecProcNode(outerState);
    1513     1200164 :         if (TupIsNull(slot))
    1514             :             break;
    1515     1200024 :         econtext->ecxt_outertuple = slot;
    1516     1200024 :         if (ExecHashGetHashValue(hashtable, econtext,
    1517             :                                  hjstate->hj_OuterHashKeys,
    1518             :                                  true,  /* outer tuple */
    1519     1200024 :                                  HJ_FILL_OUTER(hjstate),
    1520             :                                  &hashvalue))
    1521             :         {
    1522             :             int         batchno;
    1523             :             int         bucketno;
    1524             :             bool        shouldFree;
    1525     1200024 :             MinimalTuple mintup = ExecFetchSlotMinimalTuple(slot, &shouldFree);
    1526             : 
    1527     1200024 :             ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno,
    1528             :                                       &batchno);
    1529     1200024 :             sts_puttuple(hashtable->batches[batchno].outer_tuples,
    1530             :                          &hashvalue, mintup);
    1531             : 
    1532     1200024 :             if (shouldFree)
    1533     1200024 :                 heap_free_minimal_tuple(mintup);
    1534             :         }
    1535     1200024 :         CHECK_FOR_INTERRUPTS();
    1536             :     }
    1537             : 
    1538             :     /* Make sure all outer partitions are readable by any backend. */
    1539        1300 :     for (i = 0; i < hashtable->nbatch; ++i)
    1540        1160 :         sts_end_write(hashtable->batches[i].outer_tuples);
    1541         140 : }
    1542             : 
    1543             : void
    1544         120 : ExecHashJoinEstimate(HashJoinState *state, ParallelContext *pcxt)
    1545             : {
    1546         120 :     shm_toc_estimate_chunk(&pcxt->estimator, sizeof(ParallelHashJoinState));
    1547         120 :     shm_toc_estimate_keys(&pcxt->estimator, 1);
    1548         120 : }
    1549             : 
    1550             : void
    1551         120 : ExecHashJoinInitializeDSM(HashJoinState *state, ParallelContext *pcxt)
    1552             : {
    1553         120 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1554             :     HashState  *hashNode;
    1555             :     ParallelHashJoinState *pstate;
    1556             : 
    1557             :     /*
    1558             :      * Disable shared hash table mode if we failed to create a real DSM
    1559             :      * segment, because that means that we don't have a DSA area to work with.
    1560             :      */
    1561         120 :     if (pcxt->seg == NULL)
    1562           0 :         return;
    1563             : 
    1564         120 :     ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin);
    1565             : 
    1566             :     /*
    1567             :      * Set up the state needed to coordinate access to the shared hash
    1568             :      * table(s), using the plan node ID as the toc key.
    1569             :      */
    1570         120 :     pstate = shm_toc_allocate(pcxt->toc, sizeof(ParallelHashJoinState));
    1571         120 :     shm_toc_insert(pcxt->toc, plan_node_id, pstate);
    1572             : 
    1573             :     /*
    1574             :      * Set up the shared hash join state with no batches initially.
    1575             :      * ExecHashTableCreate() will prepare at least one later and set nbatch
    1576             :      * and space_allowed.
    1577             :      */
    1578         120 :     pstate->nbatch = 0;
    1579         120 :     pstate->space_allowed = 0;
    1580         120 :     pstate->batches = InvalidDsaPointer;
    1581         120 :     pstate->old_batches = InvalidDsaPointer;
    1582         120 :     pstate->nbuckets = 0;
    1583         120 :     pstate->growth = PHJ_GROWTH_OK;
    1584         120 :     pstate->chunk_work_queue = InvalidDsaPointer;
    1585         120 :     pg_atomic_init_u32(&pstate->distributor, 0);
    1586         120 :     pstate->nparticipants = pcxt->nworkers + 1;
    1587         120 :     pstate->total_tuples = 0;
    1588         120 :     LWLockInitialize(&pstate->lock,
    1589             :                      LWTRANCHE_PARALLEL_HASH_JOIN);
    1590         120 :     BarrierInit(&pstate->build_barrier, 0);
    1591         120 :     BarrierInit(&pstate->grow_batches_barrier, 0);
    1592         120 :     BarrierInit(&pstate->grow_buckets_barrier, 0);
    1593             : 
    1594             :     /* Set up the space we'll use for shared temporary files. */
    1595         120 :     SharedFileSetInit(&pstate->fileset, pcxt->seg);
    1596             : 
    1597             :     /* Initialize the shared state in the hash node. */
    1598         120 :     hashNode = (HashState *) innerPlanState(state);
    1599         120 :     hashNode->parallel_state = pstate;
    1600             : }
    1601             : 
    1602             : /* ----------------------------------------------------------------
    1603             :  *      ExecHashJoinReInitializeDSM
    1604             :  *
    1605             :  *      Reset shared state before beginning a fresh scan.
    1606             :  * ----------------------------------------------------------------
    1607             :  */
    1608             : void
    1609          48 : ExecHashJoinReInitializeDSM(HashJoinState *state, ParallelContext *pcxt)
    1610             : {
    1611          48 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1612             :     ParallelHashJoinState *pstate =
    1613          48 :         shm_toc_lookup(pcxt->toc, plan_node_id, false);
    1614             : 
    1615             :     /*
    1616             :      * It would be possible to reuse the shared hash table in single-batch
    1617             :      * cases by resetting and then fast-forwarding build_barrier to
    1618             :      * PHJ_BUILD_FREE and batch 0's batch_barrier to PHJ_BATCH_PROBE, but
    1619             :      * currently shared hash tables are already freed by now (by the last
    1620             :      * participant to detach from the batch).  We could consider keeping it
    1621             :      * around for single-batch joins.  We'd also need to adjust
    1622             :      * finalize_plan() so that it doesn't record a dummy dependency for
    1623             :      * Parallel Hash nodes, preventing the rescan optimization.  For now we
    1624             :      * don't try.
    1625             :      */
    1626             : 
    1627             :     /* Detach, freeing any remaining shared memory. */
    1628          48 :     if (state->hj_HashTable != NULL)
    1629             :     {
    1630           0 :         ExecHashTableDetachBatch(state->hj_HashTable);
    1631           0 :         ExecHashTableDetach(state->hj_HashTable);
    1632             :     }
    1633             : 
    1634             :     /* Clear any shared batch files. */
    1635          48 :     SharedFileSetDeleteAll(&pstate->fileset);
    1636             : 
    1637             :     /* Reset build_barrier to PHJ_BUILD_ELECT so we can go around again. */
    1638          48 :     BarrierInit(&pstate->build_barrier, 0);
    1639          48 : }
    1640             : 
    1641             : void
    1642         308 : ExecHashJoinInitializeWorker(HashJoinState *state,
    1643             :                              ParallelWorkerContext *pwcxt)
    1644             : {
    1645             :     HashState  *hashNode;
    1646         308 :     int         plan_node_id = state->js.ps.plan->plan_node_id;
    1647             :     ParallelHashJoinState *pstate =
    1648         308 :         shm_toc_lookup(pwcxt->toc, plan_node_id, false);
    1649             : 
    1650             :     /* Attach to the space for shared temporary files. */
    1651         308 :     SharedFileSetAttach(&pstate->fileset, pwcxt->seg);
    1652             : 
    1653             :     /* Attach to the shared state in the hash node. */
    1654         308 :     hashNode = (HashState *) innerPlanState(state);
    1655         308 :     hashNode->parallel_state = pstate;
    1656             : 
    1657         308 :     ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin);
    1658         308 : }

Generated by: LCOV version 1.14