/*-------------------------------------------------------------------------
 *
 * inval.c
 *	  POSTGRES cache invalidation dispatcher code.
 *
 * This is subtle stuff, so pay attention:
 *
 * When a tuple is updated or deleted, our standard visibility rules
 * consider that it is *still valid* so long as we are in the same command,
 * ie, until the next CommandCounterIncrement() or transaction commit.
 * (See access/heap/heapam_visibility.c, and note that system catalogs are
 * generally scanned under the most current snapshot available, rather than
 * the transaction snapshot.)  At the command boundary, the old tuple stops
 * being valid and the new version, if any, becomes valid.  Therefore,
 * we cannot simply flush a tuple from the system caches during heap_update()
 * or heap_delete().  The tuple is still good at that point; what's more,
 * even if we did flush it, it might be reloaded into the caches by a later
 * request in the same command.  So the correct behavior is to keep a list
 * of outdated (updated/deleted) tuples and then do the required cache
 * flushes at the next command boundary.  We must also keep track of
 * inserted tuples so that we can flush "negative" cache entries that match
 * the new tuples; again, that mustn't happen until end of command.
 *
 * Once we have finished the command, we still need to remember inserted
 * tuples (including new versions of updated tuples), so that we can flush
 * them from the caches if we abort the transaction.  Similarly, we'd better
 * be able to flush "negative" cache entries that may have been loaded in
 * place of deleted tuples, so we still need the deleted ones too.
 *
 * If we successfully complete the transaction, we have to broadcast all
 * these invalidation events to other backends (via the SI message queue)
 * so that they can flush obsolete entries from their caches.  Note we have
 * to record the transaction commit before sending SI messages, otherwise
 * the other backends won't see our updated tuples as good.
 *
 * When a subtransaction aborts, we can process and discard any events
 * it has queued.  When a subtransaction commits, we just add its events
 * to the pending lists of the parent transaction.
 *
 * In short, we need to remember until xact end every insert or delete
 * of a tuple that might be in the system caches.  Updates are treated as
 * two events, delete + insert, for simplicity.  (If the update doesn't
 * change the tuple hash value, catcache.c optimizes this into one event.)
 *
 * We do not need to register EVERY tuple operation in this way, just those
 * on tuples in relations that have associated catcaches.  We do, however,
 * have to register every operation on every tuple that *could* be in a
 * catcache, whether or not it currently is in our cache.  Also, if the
 * tuple is in a relation that has multiple catcaches, we need to register
 * an invalidation message for each such catcache.  catcache.c's
 * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
 * catcaches may need invalidation for a given tuple.
 *
 * Also, whenever we see an operation on a pg_class, pg_attribute, or
 * pg_index tuple, we register a relcache flush operation for the relation
 * described by that tuple (as specified in CacheInvalidateHeapTuple()).
 * Likewise for pg_constraint tuples for foreign keys on relations.
 *
 * We keep the relcache flush requests in lists separate from the catcache
 * tuple flush requests.  This allows us to issue all the pending catcache
 * flushes before we issue relcache flushes, which saves us from loading
 * a catcache tuple during relcache load only to flush it again right away.
 * Also, we avoid queuing multiple relcache flush requests for the same
 * relation, since a relcache flush is relatively expensive to do.
 * (XXX is it worth testing likewise for duplicate catcache flush entries?
 * Probably not.)
 *
 * Many subsystems own higher-level caches that depend on relcache and/or
 * catcache, and they register callbacks here to invalidate their caches.
 * While building a higher-level cache entry, a backend may receive a
 * callback for the being-built entry or one of its dependencies.  This
 * implies the new higher-level entry would be born stale, and it might
 * remain stale for the life of the backend.  Many caches do not prevent
 * that.  They rely on DDL for can't-miss catalog changes taking
 * AccessExclusiveLock on suitable objects.  (For a change made with less
 * locking, backends might never read the change.)  The relation cache,
 * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
 * than the beginning of the next transaction.  Hence, when a relevant
 * invalidation callback arrives during a build, relcache.c reattempts that
 * build.  Caches with similar needs could do likewise.
 *
 * If a relcache flush is issued for a system relation that we preload
 * from the relcache init file, we must also delete the init file so that
 * it will be rebuilt during the next backend restart.  The actual work of
 * manipulating the init file is in relcache.c, but we keep track of the
 * need for it here.
 *
 * Currently, inval messages are sent without regard for the possibility
 * that the object described by the catalog tuple might be a session-local
 * object such as a temporary table.  This is because (1) this code has
 * no practical way to tell the difference, and (2) it is not certain that
 * other backends don't have catalog cache or even relcache entries for
 * such tables, anyway; there is nothing that prevents that.  It might be
 * worth trying to avoid sending such inval traffic in the future, if those
 * problems can be overcome cheaply.
 *
 * When making a nontransactional change to a cacheable object, we must
 * likewise send the invalidation immediately, before ending the change's
 * critical section.  This includes inplace heap updates, relmap, and smgr.
 *
 * When wal_level=logical, write invalidations into WAL at each command end
 * to support decoding of in-progress transactions.  See
 * CommandEndInvalidationMessages.
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/utils/cache/inval.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <limits.h>

#include "access/htup_details.h"
#include "access/xact.h"
#include "access/xloginsert.h"
#include "catalog/catalog.h"
#include "catalog/pg_constraint.h"
#include "miscadmin.h"
#include "storage/sinval.h"
#include "storage/smgr.h"
#include "utils/catcache.h"
#include "utils/inval.h"
#include "utils/memdebug.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/relmapper.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"


/*
 * Pending requests are stored as ready-to-send SharedInvalidationMessages.
 * We keep the messages themselves in arrays in TopTransactionContext (there
 * are separate arrays for catcache and relcache messages).  For transactional
 * messages, control information is kept in a chain of TransInvalidationInfo
 * structs, also allocated in TopTransactionContext.  (We could keep a
 * subtransaction's TransInvalidationInfo in its CurTransactionContext; but
 * that's more wasteful, not less so, since in very many scenarios it'd be
 * the only allocation in the subtransaction's CurTransactionContext.)  For
 * inplace update messages, control information appears in an
 * InvalidationInfo, allocated in CurrentMemoryContext.
 *
 * We can store the message arrays densely, and yet avoid moving data around
 * within an array, because within any one subtransaction we need only
 * distinguish between messages emitted by prior commands and those emitted
 * by the current command.  Once a command completes and we've done local
 * processing on its messages, we can fold those into the prior-commands
 * messages just by changing array indexes in the TransInvalidationInfo
 * struct.  Similarly, we need to distinguish messages of prior
 * subtransactions from those of the current subtransaction only until the
 * subtransaction completes, after which we adjust the array indexes in the
 * parent's TransInvalidationInfo to include the subtransaction's messages.
 * Inplace invalidations don't need a concept of command or subtransaction
 * boundaries, since we send them during the WAL insertion critical section.
 *
 * The ordering of the individual messages within a command's or
 * subtransaction's output is not considered significant, although this
 * implementation happens to preserve the order in which they were queued.
 * (Previous versions of this code did not preserve it.)
 *
 * For notational convenience, control information is kept in two-element
 * arrays, the first for catcache messages and the second for relcache
 * messages.
 */
#define CatCacheMsgs 0
#define RelCacheMsgs 1

/* Pointers to main arrays in TopTransactionContext */
typedef struct InvalMessageArray
{
	SharedInvalidationMessage *msgs;	/* palloc'd array (can be expanded) */
	int			maxmsgs;		/* current allocated size of array */
} InvalMessageArray;

static InvalMessageArray InvalMessageArrays[2];

/* Control information for one logical group of messages */
typedef struct InvalidationMsgsGroup
{
	int			firstmsg[2];	/* first index in relevant array */
	int			nextmsg[2];		/* last+1 index */
} InvalidationMsgsGroup;

/* Macros to help preserve InvalidationMsgsGroup abstraction */
#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
	do { \
		(targetgroup)->firstmsg[subgroup] = \
			(targetgroup)->nextmsg[subgroup] = \
			(priorgroup)->nextmsg[subgroup]; \
	} while (0)

#define SetGroupToFollow(targetgroup, priorgroup) \
	do { \
		SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
		SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
	} while (0)

#define NumMessagesInSubGroup(group, subgroup) \
	((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])

#define NumMessagesInGroup(group) \
	(NumMessagesInSubGroup(group, CatCacheMsgs) + \
	 NumMessagesInSubGroup(group, RelCacheMsgs))

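The dense-array scheme described above reduces every group operation to index arithmetic on [firstmsg, nextmsg) pairs. The following is a hypothetical stand-alone miniature of that bookkeeping, not code from this file: MiniGroup, mini_set_to_follow, mini_add, and mini_append are invented stand-ins for InvalidationMsgsGroup, SetGroupToFollow, AddInvalidationMessage, and AppendInvalidationMessageSubGroup, with a single flat int array playing the role of the message arrays.

```c
#include <assert.h>

/* A group is just a half-open index range [firstmsg, nextmsg) into one
 * shared flat array, like InvalidationMsgsGroup's per-subgroup indexes. */
typedef struct MiniGroup
{
	int			firstmsg;
	int			nextmsg;
} MiniGroup;

static int	mini_msgs[32];		/* stands in for InvalMessageArrays[] */

/* Position "g" as an empty range directly after "prior"
 * (cf. SetGroupToFollow / SetSubGroupToFollow). */
static void
mini_set_to_follow(MiniGroup *g, const MiniGroup *prior)
{
	g->firstmsg = g->nextmsg = prior->nextmsg;
}

/* Append one message at the end of the array; only the last active group
 * may grow (cf. AddInvalidationMessage). */
static void
mini_add(MiniGroup *g, int msg)
{
	mini_msgs[g->nextmsg++] = msg;
}

/* Fold "src" (which must directly follow "dest") into "dest" by index
 * adjustment alone, then reset "src" to empty
 * (cf. AppendInvalidationMessageSubGroup). */
static void
mini_append(MiniGroup *dest, MiniGroup *src)
{
	assert(dest->nextmsg == src->firstmsg);
	dest->nextmsg = src->nextmsg;
	mini_set_to_follow(src, dest);
}
```

No message data moves in mini_append; folding a completed command's messages into the prior-commands group is purely an index update, which is the point of keeping the ranges adjacent.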

/*----------------
 * Transactional invalidation messages are divided into two groups:
 *	1) events so far in current command, not yet reflected to caches.
 *	2) events in previous commands of current transaction; these have
 *	   been reflected to local caches, and must be either broadcast to
 *	   other backends or rolled back from local cache when we commit
 *	   or abort the transaction.
 * Actually, we need such groups for each level of nested transaction,
 * so that we can discard events from an aborted subtransaction.  When
 * a subtransaction commits, we append its events to the parent's groups.
 *
 * The relcache-file-invalidated flag can just be a simple boolean,
 * since we only act on it at transaction commit; we don't care which
 * command of the transaction set it.
 *----------------
 */

/* fields common to both transactional and inplace invalidation */
typedef struct InvalidationInfo
{
	/* Events emitted by current command */
	InvalidationMsgsGroup CurrentCmdInvalidMsgs;

	/* init file must be invalidated? */
	bool		RelcacheInitFileInval;
} InvalidationInfo;

/* subclass adding fields specific to transactional invalidation */
typedef struct TransInvalidationInfo
{
	/* Base class */
	struct InvalidationInfo ii;

	/* Events emitted by previous commands of this (sub)transaction */
	InvalidationMsgsGroup PriorCmdInvalidMsgs;

	/* Back link to parent transaction's info */
	struct TransInvalidationInfo *parent;

	/* Subtransaction nesting depth */
	int			my_level;
} TransInvalidationInfo;

static TransInvalidationInfo *transInvalInfo = NULL;

static InvalidationInfo *inplaceInvalInfo = NULL;

/* GUC storage */
int			debug_discard_caches = 0;

/*
 * Dynamically-registered callback functions.  Current implementation
 * assumes there won't be enough of these to justify a dynamically resizable
 * array; it'd be easy to improve that if needed.
 *
 * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
 * syscache are linked into a list pointed to by syscache_callback_links[id].
 * The link values are syscache_callback_list[] index plus 1, or 0 for none.
 */

#define MAX_SYSCACHE_CALLBACKS 64
#define MAX_RELCACHE_CALLBACKS 10

static struct SYSCACHECALLBACK
{
	int16		id;				/* cache number */
	int16		link;			/* next callback index+1 for same cache */
	SyscacheCallbackFunction function;
	Datum		arg;
}			syscache_callback_list[MAX_SYSCACHE_CALLBACKS];

static int16 syscache_callback_links[SysCacheSize];

static int	syscache_callback_count = 0;

static struct RELCACHECALLBACK
{
	RelcacheCallbackFunction function;
	Datum		arg;
}			relcache_callback_list[MAX_RELCACHE_CALLBACKS];

static int	relcache_callback_count = 0;

/* ----------------------------------------------------------------
 *				Invalidation subgroup support functions
 * ----------------------------------------------------------------
 */

/*
 * AddInvalidationMessage
 *		Add an invalidation message to a (sub)group.
 *
 * The group must be the last active one, since we assume we can add to the
 * end of the relevant InvalMessageArray.
 *
 * subgroup must be CatCacheMsgs or RelCacheMsgs.
 */
static void
AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
					   const SharedInvalidationMessage *msg)
{
	InvalMessageArray *ima = &InvalMessageArrays[subgroup];
	int			nextindex = group->nextmsg[subgroup];

	if (nextindex >= ima->maxmsgs)
	{
		if (ima->msgs == NULL)
		{
			/* Create new storage array in TopTransactionContext */
			int			reqsize = 32;	/* arbitrary */

			ima->msgs = (SharedInvalidationMessage *)
				MemoryContextAlloc(TopTransactionContext,
								   reqsize * sizeof(SharedInvalidationMessage));
			ima->maxmsgs = reqsize;
			Assert(nextindex == 0);
		}
		else
		{
			/* Enlarge storage array */
			int			reqsize = 2 * ima->maxmsgs;

			ima->msgs = (SharedInvalidationMessage *)
				repalloc(ima->msgs,
						 reqsize * sizeof(SharedInvalidationMessage));
			ima->maxmsgs = reqsize;
		}
	}
	/* Okay, add message to current group */
	ima->msgs[nextindex] = *msg;
	group->nextmsg[subgroup]++;
}
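The growth policy above (allocate a small fixed block on first use, then double on overflow) gives amortized O(1) appends. A hypothetical stand-alone version of just that policy, using malloc/realloc on ints in place of MemoryContextAlloc/repalloc on SharedInvalidationMessages (MiniArray and mini_array_add are invented names):

```c
#include <assert.h>
#include <stdlib.h>

/* Miniature of InvalMessageArray's lazily-allocated, doubling storage. */
typedef struct MiniArray
{
	int		   *msgs;			/* NULL until first append */
	int			maxmsgs;		/* current allocated size */
} MiniArray;

/* Store "msg" at slot "nextindex", growing the array first if needed
 * (cf. the if-block at the top of AddInvalidationMessage). */
static void
mini_array_add(MiniArray *ima, int nextindex, int msg)
{
	if (nextindex >= ima->maxmsgs)
	{
		if (ima->msgs == NULL)
		{
			/* Create new storage array */
			int			reqsize = 32;	/* arbitrary, as in the original */

			ima->msgs = malloc(reqsize * sizeof(int));
			ima->maxmsgs = reqsize;
			assert(nextindex == 0);
		}
		else
		{
			/* Enlarge storage array by doubling */
			int			reqsize = 2 * ima->maxmsgs;

			ima->msgs = realloc(ima->msgs, reqsize * sizeof(int));
			ima->maxmsgs = reqsize;
		}
	}
	ima->msgs[nextindex] = msg;
}
```

Because growth happens only when nextindex hits maxmsgs, the 33rd append to a fresh array is the first one that triggers the doubling branch.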

/*
 * Append one subgroup of invalidation messages to another, resetting
 * the source subgroup to empty.
 */
static void
AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
								  InvalidationMsgsGroup *src,
								  int subgroup)
{
	/* Messages must be adjacent in main array */
	Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);

	/* ... which makes this easy: */
	dest->nextmsg[subgroup] = src->nextmsg[subgroup];

	/*
	 * This is handy for some callers and irrelevant for others.  But we do it
	 * always, reasoning that it's bad to leave different groups pointing at
	 * the same fragment of the message array.
	 */
	SetSubGroupToFollow(src, dest, subgroup);
}

/*
 * Process a subgroup of invalidation messages.
 *
 * This is a macro that executes the given code fragment for each message in
 * a message subgroup.  The fragment should refer to the message as *msg.
 */
#define ProcessMessageSubGroup(group, subgroup, codeFragment) \
	do { \
		int		_msgindex = (group)->firstmsg[subgroup]; \
		int		_endmsg = (group)->nextmsg[subgroup]; \
		for (; _msgindex < _endmsg; _msgindex++) \
		{ \
			SharedInvalidationMessage *msg = \
				&InvalMessageArrays[subgroup].msgs[_msgindex]; \
			codeFragment; \
		} \
	} while (0)

/*
 * Process a subgroup of invalidation messages as an array.
 *
 * As above, but the code fragment can handle an array of messages.
 * The fragment should refer to the messages as msgs[], with n entries.
 */
#define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
	do { \
		int		n = NumMessagesInSubGroup(group, subgroup); \
		if (n > 0) { \
			SharedInvalidationMessage *msgs = \
				&InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
			codeFragment; \
		} \
	} while (0)


/* ----------------------------------------------------------------
 *				Invalidation group support functions
 *
 * These routines understand about the division of a logical invalidation
 * group into separate physical arrays for catcache and relcache entries.
 * ----------------------------------------------------------------
 */

/*
 * Add a catcache inval entry
 */
static void
AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
							   int id, uint32 hashValue, Oid dbId)
{
	SharedInvalidationMessage msg;

	Assert(id < CHAR_MAX);
	msg.cc.id = (int8) id;
	msg.cc.dbId = dbId;
	msg.cc.hashValue = hashValue;

	/*
	 * Mark the padding bytes in SharedInvalidationMessage structs as
	 * defined.  Otherwise the sinvaladt.c ringbuffer, which is accessed by
	 * multiple processes, will cause spurious valgrind warnings about
	 * undefined memory being used.  That's because valgrind remembers the
	 * undefined bytes from the last local process's store, not realizing
	 * that another process has written since, filling the previously
	 * uninitialized bytes.
	 */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a whole-catalog inval entry
 */
static void
AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
							  Oid dbId, Oid catId)
{
	SharedInvalidationMessage msg;

	msg.cat.id = SHAREDINVALCATALOG_ID;
	msg.cat.dbId = dbId;
	msg.cat.catId = catId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a relcache inval entry
 */
static void
AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
							   Oid dbId, Oid relId)
{
	SharedInvalidationMessage msg;

	/*
	 * Don't add a duplicate item.  We assume dbId need not be checked
	 * because it will never change.  InvalidOid for relId means all
	 * relations, so we don't need to add individual ones when it is present.
	 */
	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
							   (msg->rc.relId == relId ||
								msg->rc.relId == InvalidOid))
						   return);

	/* OK, add the item */
	msg.rc.id = SHAREDINVALRELCACHE_ID;
	msg.rc.dbId = dbId;
	msg.rc.relId = relId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, RelCacheMsgs, &msg);
}
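The duplicate check above has two parts: an exact relId match, and a wildcard match when an all-relations flush (relId == InvalidOid) is already queued. A hypothetical stand-alone miniature of just that scan (mini_add_relcache_inval, rel_msgs, and MINI_INVALID_OID are invented names; OIDs are plain unsigned ints here):

```c
#include <assert.h>

#define MINI_INVALID_OID 0		/* stands in for InvalidOid: "all relations" */

static unsigned int rel_msgs[16];	/* relcache subgroup: queued relation OIDs */
static int	rel_nextmsg = 0;

/* Queue a relcache inval for relId unless an equal or wildcard entry is
 * already present (cf. the ProcessMessageSubGroup scan in
 * AddRelcacheInvalidationMessage). */
static void
mini_add_relcache_inval(unsigned int relId)
{
	for (int i = 0; i < rel_nextmsg; i++)
	{
		if (rel_msgs[i] == relId || rel_msgs[i] == MINI_INVALID_OID)
			return;				/* duplicate, or wildcard already covers it */
	}
	rel_msgs[rel_nextmsg++] = relId;
}
```

Note the asymmetry: a queued wildcard suppresses later per-relation entries, but a queued per-relation entry does not suppress a later wildcard, matching the original's one-directional check.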

/*
 * Add a snapshot inval entry
 *
 * We put these into the relcache subgroup for simplicity.
 */
static void
AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
							   Oid dbId, Oid relId)
{
	SharedInvalidationMessage msg;

	/* Don't add a duplicate item */
	/* We assume dbId need not be checked because it will never change */
	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
							   msg->sn.relId == relId)
						   return);

	/* OK, add the item */
	msg.sn.id = SHAREDINVALSNAPSHOT_ID;
	msg.sn.dbId = dbId;
	msg.sn.relId = relId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Append one group of invalidation messages to another, resetting
 * the source group to empty.
 */
static void
AppendInvalidationMessages(InvalidationMsgsGroup *dest,
						   InvalidationMsgsGroup *src)
{
	AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
	AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
}

/*
 * Execute the given function for all the messages in an invalidation group.
 * The group is not altered.
 *
 * catcache entries are processed first, for reasons mentioned above.
 */
static void
ProcessInvalidationMessages(InvalidationMsgsGroup *group,
							void (*func) (SharedInvalidationMessage *msg))
{
	ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
	ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
}

/*
 * As above, but the function is able to process an array of messages
 * rather than just one at a time.
 */
static void
ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
								 void (*func) (const SharedInvalidationMessage *msgs, int n))
{
	ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
	ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
}

/* ----------------------------------------------------------------
 *					  private support functions
 * ----------------------------------------------------------------
 */

/*
 * RegisterCatcacheInvalidation
 *
 * Register an invalidation event for a catcache tuple entry.
 */
static void
RegisterCatcacheInvalidation(int cacheId,
							 uint32 hashValue,
							 Oid dbId,
							 void *context)
{
	InvalidationInfo *info = (InvalidationInfo *) context;

	AddCatcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs,
								   cacheId, hashValue, dbId);
}

/*
 * RegisterCatalogInvalidation
 *
 * Register an invalidation event for all catcache entries from a catalog.
 */
static void
RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
{
	AddCatalogInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, catId);
}

/*
 * RegisterRelcacheInvalidation
 *
 * As above, but register a relcache invalidation event.
 */
static void
RegisterRelcacheInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
{
	AddRelcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);

	/*
	 * Most of the time, relcache invalidation is associated with system
	 * catalog updates, but there are a few cases where it isn't.  Quick hack
	 * to ensure that the next CommandCounterIncrement() will think that we
	 * need to do CommandEndInvalidationMessages().
	 */
	(void) GetCurrentCommandId(true);

	/*
	 * If the relation being invalidated is one of those cached in a relcache
	 * init file, mark that we need to zap that file at commit.  For
	 * simplicity, invalidations for a specific database always invalidate
	 * the shared file as well.  Also zap when we are invalidating the whole
	 * relcache.
	 */
	if (relId == InvalidOid || RelationIdIsInInitFile(relId))
		info->RelcacheInitFileInval = true;
}

/*
 * RegisterSnapshotInvalidation
 *
 * Register an invalidation event for MVCC scans against a given catalog.
 * Only needed for catalogs that don't have catcaches.
 */
static void
RegisterSnapshotInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
{
	AddSnapshotInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
}

/*
 * PrepareInvalidationState
 *		Initialize inval data for the current (sub)transaction.
 */
static InvalidationInfo *
PrepareInvalidationState(void)
{
	TransInvalidationInfo *myInfo;

	Assert(IsTransactionState());
	/* Can't queue transactional message while collecting inplace messages. */
	Assert(inplaceInvalInfo == NULL);

	if (transInvalInfo != NULL &&
		transInvalInfo->my_level == GetCurrentTransactionNestLevel())
		return (InvalidationInfo *) transInvalInfo;

	myInfo = (TransInvalidationInfo *)
		MemoryContextAllocZero(TopTransactionContext,
							   sizeof(TransInvalidationInfo));
	myInfo->parent = transInvalInfo;
	myInfo->my_level = GetCurrentTransactionNestLevel();

	/* Now, do we have a previous stack entry? */
	if (transInvalInfo != NULL)
	{
		/* Yes; this one should be for a deeper nesting level. */
		Assert(myInfo->my_level > transInvalInfo->my_level);

		/*
		 * The parent (sub)transaction must not have any current (i.e.,
		 * not-yet-locally-processed) messages.  If it did, we'd have a
		 * semantic problem: the new subtransaction presumably ought not be
		 * able to see those events yet, but since the CommandCounter is
		 * linear, that can't work once the subtransaction advances the
		 * counter.  This is a convenient place to check for that, as well as
		 * being important to keep management of the message arrays simple.
		 */
		if (NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs) != 0)
			elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");

		/*
		 * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
		 * which is fine for the first (sub)transaction, but otherwise we need
		 * to update them to follow whatever is already in the arrays.
		 */
		SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
						 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
		SetGroupToFollow(&myInfo->ii.CurrentCmdInvalidMsgs,
						 &myInfo->PriorCmdInvalidMsgs);
	}
	else
	{
		/*
		 * Here, we need only clear any array pointers left over from a prior
		 * transaction.
		 */
		InvalMessageArrays[CatCacheMsgs].msgs = NULL;
		InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
		InvalMessageArrays[RelCacheMsgs].msgs = NULL;
		InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
	}

	transInvalInfo = myInfo;
	return (InvalidationInfo *) myInfo;
}
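PrepareInvalidationState maintains transInvalInfo as a lazily-pushed stack with at most one entry per subtransaction nesting level: reuse the top entry if it matches the current level, otherwise push a new zeroed entry linked to its parent. A hypothetical stand-alone miniature of just that stack discipline (MiniTransInfo, mini_stack, and mini_prepare are invented names; message groups and level checks beyond the push/reuse logic are omitted):

```c
#include <assert.h>
#include <stdlib.h>

/* One stack entry per subtransaction nesting level, like
 * TransInvalidationInfo's my_level/parent fields. */
typedef struct MiniTransInfo
{
	int			my_level;
	struct MiniTransInfo *parent;	/* back link to enclosing level */
} MiniTransInfo;

static MiniTransInfo *mini_stack = NULL;	/* cf. transInvalInfo */

/* Return the entry for cur_level, pushing a fresh one only on the first
 * call at that level (cf. the top of PrepareInvalidationState). */
static MiniTransInfo *
mini_prepare(int cur_level)
{
	MiniTransInfo *info;

	/* Fast path: top of stack already belongs to the current level. */
	if (mini_stack != NULL && mini_stack->my_level == cur_level)
		return mini_stack;

	info = calloc(1, sizeof(MiniTransInfo));
	info->my_level = cur_level;
	info->parent = mini_stack;
	mini_stack = info;
	return info;
}
```

In the real function the push also repositions the new entry's message groups past the parent's (SetGroupToFollow), after verifying the parent has no unprocessed current-command messages.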

/*
 * PrepareInplaceInvalidationState
 *		Initialize inval data for an inplace update.
 *
 * See the previous function for more background.
 */
static InvalidationInfo *
PrepareInplaceInvalidationState(void)
{
	InvalidationInfo *myInfo;

	Assert(IsTransactionState());
	/* limit of one inplace update under assembly */
	Assert(inplaceInvalInfo == NULL);

	/* gone after WAL insertion CritSection ends, so use current context */
	myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));

	/* Stash our messages past the end of the transactional messages, if any. */
	if (transInvalInfo != NULL)
		SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
						 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
	else
	{
		InvalMessageArrays[CatCacheMsgs].msgs = NULL;
		InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
		InvalMessageArrays[RelCacheMsgs].msgs = NULL;
		InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
	}

	inplaceInvalInfo = myInfo;
	return myInfo;
}

/* ----------------------------------------------------------------
 *					  public functions
 * ----------------------------------------------------------------
 */

void
InvalidateSystemCachesExtended(bool debug_discard)
{
	int			i;

	InvalidateCatalogSnapshot();
	ResetCatalogCaches();
	RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */

	for (i = 0; i < syscache_callback_count; i++)
	{
		struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;

		ccitem->function(ccitem->arg, ccitem->id, 0);
	}

	for (i = 0; i < relcache_callback_count; i++)
	{
		struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

		ccitem->function(ccitem->arg, InvalidOid);
	}
}

/*
 * LocalExecuteInvalidationMessage
 *
 * Process a single invalidation message (which could be of any type).
 * Only the local caches are flushed; this does not transmit the message
 * to other backends.
 */
void
LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
{
	if (msg->id >= 0)
	{
		if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
		{
			InvalidateCatalogSnapshot();

			SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);

			CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
		}
	}
	else if (msg->id == SHAREDINVALCATALOG_ID)
	{
		if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
		{
			InvalidateCatalogSnapshot();

			CatalogCacheFlushCatalog(msg->cat.catId);

			/* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
		}
	}
	else if (msg->id == SHAREDINVALRELCACHE_ID)
	{
		if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
		{
			int			i;

			if (msg->rc.relId == InvalidOid)
				RelationCacheInvalidate(false);
			else
				RelationCacheInvalidateEntry(msg->rc.relId);

			for (i = 0; i < relcache_callback_count; i++)
			{
				struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

				ccitem->function(ccitem->arg, msg->rc.relId);
			}
		}
	}
	else if (msg->id == SHAREDINVALSMGR_ID)
	{
		/*
		 * We could have smgr entries for relations of other databases, so no
		 * short-circuit test is possible here.
		 */
		RelFileLocatorBackend rlocator;

		rlocator.locator = msg->sm.rlocator;
		rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
		smgrreleaserellocator(rlocator);
	}
	else if (msg->id == SHAREDINVALRELMAP_ID)
	{
		/* We only care about our own database and shared catalogs */
		if (msg->rm.dbId == InvalidOid)
			RelationMapInvalidate(true);
		else if (msg->rm.dbId == MyDatabaseId)
			RelationMapInvalidate(false);
	}
	else if (msg->id == SHAREDINVALSNAPSHOT_ID)
	{
		/* We only care about our own database and shared catalogs */
		if (msg->sn.dbId == InvalidOid)
			InvalidateCatalogSnapshot();
		else if (msg->sn.dbId == MyDatabaseId)
			InvalidateCatalogSnapshot();
	}
	else
		elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}

/*
 * InvalidateSystemCaches
 *
 * This blows away all tuples in the system catalog caches and
 * all the cached relation descriptors and smgr cache entries.
 * Relation descriptors that have positive refcounts are then rebuilt.
 *
 * We call this when we see a shared-inval-queue overflow signal,
 * since that tells us we've lost some shared-inval messages and hence
 * don't know what needs to be invalidated.
 */
void
InvalidateSystemCaches(void)
{
	InvalidateSystemCachesExtended(false);
}
853 :
854 : /*
855 : * AcceptInvalidationMessages
856 : * Read and process invalidation messages from the shared invalidation
857 : * message queue.
858 : *
859 : * Note:
860 : * This should be called as the first step in processing a transaction.
861 : */
862 : void
863 32099914 : AcceptInvalidationMessages(void)
864 : {
865 32099914 : ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
866 : InvalidateSystemCaches);
867 :
868 : /*----------
869 : * Test code to force cache flushes anytime a flush could happen.
870 : *
871 : * This helps detect intermittent faults caused by code that reads a cache
872 : * entry and then performs an action that could invalidate the entry, but
873 : * rarely actually does so. This can spot issues that would otherwise
874 : * only arise with badly timed concurrent DDL, for example.
875 : *
876 : * The default debug_discard_caches = 0 does no forced cache flushes.
877 : *
878 : * If used with CLOBBER_FREED_MEMORY,
879 : * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
880 : * provides a fairly thorough test that the system contains no cache-flush
881 : * hazards. However, it also makes the system unbelievably slow --- the
882 : * regression tests take about 100 times longer than normal.
883 : *
884 : * If you're a glutton for punishment, try
885 : * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
886 : * This slows things by at least a factor of 10000, so I wouldn't suggest
887 : * trying to run the entire regression tests that way. It's useful to try
888 : * a few simple tests, to make sure that cache reload isn't subject to
889 : * internal cache-flush hazards, but after you've done a few thousand
890 : * recursive reloads it's unlikely you'll learn more.
891 : *----------
892 : */
893 : #ifdef DISCARD_CACHES_ENABLED
894 : {
895 : static int recursion_depth = 0;
896 :
897 : if (recursion_depth < debug_discard_caches)
898 : {
899 : recursion_depth++;
900 : InvalidateSystemCachesExtended(true);
901 : recursion_depth--;
902 : }
903 : }
904 : #endif
905 32099914 : }
906 :
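The comment above describes the recursion guard that lets the debug hook flush caches on every opportunity without recursing forever: a static depth counter bounds re-entry at the configured `debug_discard_caches` level. A minimal standalone sketch of that pattern (all names here are illustrative stand-ins, not the actual PostgreSQL definitions):

```c
#include <assert.h>

/* stand-in for the debug_discard_caches GUC */
static int debug_discard_level = 2;
static int flush_count = 0;

static void
accept_invalidations(void)
{
    /* static depth counter survives across the recursive calls */
    static int recursion_depth = 0;

    /* ... normal invalidation-message processing would happen here ... */

    if (recursion_depth < debug_discard_level)
    {
        recursion_depth++;
        flush_count++;              /* stands in for a forced cache flush */
        accept_invalidations();     /* cache rebuild re-enters this path */
        recursion_depth--;
    }
}
```

With `debug_discard_level = 2`, one outer call triggers exactly two forced flushes before the guard stops further recursion, which is why level 3 (the "recursively" setting) is so much more expensive than level 1.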
907 : /*
908 : * PostPrepare_Inval
909 : * Clean up after successful PREPARE.
910 : *
911 : * Here, we want to act as though the transaction aborted, so that we will
912 : * undo any syscache changes it made, thereby bringing us into sync with the
913 : * outside world, which doesn't believe the transaction committed yet.
914 : *
915 : * If the prepared transaction is later aborted, there is nothing more to
916 : * do; if it commits, we will receive the consequent inval messages just
917 : * like everyone else.
918 : */
919 : void
920 768 : PostPrepare_Inval(void)
921 : {
922 768 : AtEOXact_Inval(false);
923 768 : }
924 :
925 : /*
926 : * xactGetCommittedInvalidationMessages() is called by
927 : * RecordTransactionCommit() to collect invalidation messages to add to the
928 : * commit record. This applies only to commit message types, never to
929 : * abort records. Must always run before AtEOXact_Inval(), since that
930 : * removes the data we need to see.
931 : *
932 : * Remember that this runs before we have officially committed, so we
933 : * must not do anything here to change what might occur *if* we should
934 : * fail between here and the actual commit.
935 : *
936 : * see also xact_redo_commit() and xact_desc_commit()
937 : */
938 : int
939 370696 : xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
940 : bool *RelcacheInitFileInval)
941 : {
942 : SharedInvalidationMessage *msgarray;
943 : int nummsgs;
944 : int nmsgs;
945 :
946 : /* Quick exit if we haven't done anything with invalidation messages. */
947 370696 : if (transInvalInfo == NULL)
948 : {
949 219202 : *RelcacheInitFileInval = false;
950 219202 : *msgs = NULL;
951 219202 : return 0;
952 : }
953 :
954 : /* Must be at top of stack */
955 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
956 :
957 : /*
958 : * Relcache init file invalidation requires processing both before and
959 : * after we send the SI messages. However, we need not do anything unless
960 : * we committed.
961 : */
962 151494 : *RelcacheInitFileInval = transInvalInfo->ii.RelcacheInitFileInval;
963 :
964 : /*
965 : * Collect all the pending messages into a single contiguous array of
966 : * invalidation messages, to simplify what needs to happen while building
967 : * the commit WAL message. Maintain the order that they would be
968 : * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
969 : * is as similar as possible to original. We want the same bugs, if any,
970 : * not new ones.
971 : */
972 151494 : nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
973 151494 : NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs);
974 :
975 151494 : *msgs = msgarray = (SharedInvalidationMessage *)
976 151494 : MemoryContextAlloc(CurTransactionContext,
977 : nummsgs * sizeof(SharedInvalidationMessage));
978 :
979 151494 : nmsgs = 0;
980 151494 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
981 : CatCacheMsgs,
982 : (memcpy(msgarray + nmsgs,
983 : msgs,
984 : n * sizeof(SharedInvalidationMessage)),
985 : nmsgs += n));
986 151494 : ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
987 : CatCacheMsgs,
988 : (memcpy(msgarray + nmsgs,
989 : msgs,
990 : n * sizeof(SharedInvalidationMessage)),
991 : nmsgs += n));
992 151494 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
993 : RelCacheMsgs,
994 : (memcpy(msgarray + nmsgs,
995 : msgs,
996 : n * sizeof(SharedInvalidationMessage)),
997 : nmsgs += n));
998 151494 : ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
999 : RelCacheMsgs,
1000 : (memcpy(msgarray + nmsgs,
1001 : msgs,
1002 : n * sizeof(SharedInvalidationMessage)),
1003 : nmsgs += n));
1004 : Assert(nmsgs == nummsgs);
1005 :
1006 151494 : return nmsgs;
1007 : }
1008 :
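The collection step above flattens four message sub-groups into one contiguous array in a fixed order: catcache messages from the prior-commands list, then from the current-command list, then relcache messages in the same prior/current order. A toy sketch of that concatenation, with plain arrays standing in for the real message groups (names are illustrative, not the actual PostgreSQL API):

```c
#include <assert.h>
#include <string.h>

/*
 * Concatenate the four sub-groups in the order redo will replay them:
 * catcache messages (prior, then current), then relcache messages
 * (prior, then current).  Returns the total message count.
 */
static int
collect(const int *prior_cat, int npc,
        const int *cur_cat, int ncc,
        const int *prior_rel, int npr,
        const int *cur_rel, int ncr,
        int *out)
{
    int         n = 0;

    memcpy(out + n, prior_cat, npc * sizeof(int)); n += npc;
    memcpy(out + n, cur_cat, ncc * sizeof(int));   n += ncc;
    memcpy(out + n, prior_rel, npr * sizeof(int)); n += npr;
    memcpy(out + n, cur_rel, ncr * sizeof(int));   n += ncr;
    return n;
}
```

Keeping this order identical to what `AtEOXact_Inval()` would do locally is what makes redo-side processing behave like the original backend's.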
1009 : /*
1010 : * inplaceGetInvalidationMessages() is called by the inplace update to collect
1011 : * invalidation messages to add to its WAL record. Like the previous
1012 : * function, we might still fail.
1013 : */
1014 : int
1015 91088 : inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
1016 : bool *RelcacheInitFileInval)
1017 : {
1018 : SharedInvalidationMessage *msgarray;
1019 : int nummsgs;
1020 : int nmsgs;
1021 :
1022 : /* Quick exit if we haven't done anything with invalidation messages. */
1023 91088 : if (inplaceInvalInfo == NULL)
1024 : {
1025 26640 : *RelcacheInitFileInval = false;
1026 26640 : *msgs = NULL;
1027 26640 : return 0;
1028 : }
1029 :
1030 64448 : *RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
1031 64448 : nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
1032 64448 : *msgs = msgarray = (SharedInvalidationMessage *)
1033 64448 : palloc(nummsgs * sizeof(SharedInvalidationMessage));
1034 :
1035 64448 : nmsgs = 0;
1036 64448 : ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1037 : CatCacheMsgs,
1038 : (memcpy(msgarray + nmsgs,
1039 : msgs,
1040 : n * sizeof(SharedInvalidationMessage)),
1041 : nmsgs += n));
1042 64448 : ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1043 : RelCacheMsgs,
1044 : (memcpy(msgarray + nmsgs,
1045 : msgs,
1046 : n * sizeof(SharedInvalidationMessage)),
1047 : nmsgs += n));
1048 : Assert(nmsgs == nummsgs);
1049 :
1050 64448 : return nmsgs;
1051 : }
1052 :
1053 : /*
1054 : * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
1055 : * standby_redo() to process invalidation messages. Currently that happens
1056 : * only at end-of-xact.
1057 : *
1058 : * Relcache init file invalidation requires processing both
1059 : * before and after we send the SI messages. See AtEOXact_Inval()
1060 : */
1061 : void
1062 51934 : ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
1063 : int nmsgs, bool RelcacheInitFileInval,
1064 : Oid dbid, Oid tsid)
1065 : {
1066 51934 : if (nmsgs <= 0)
1067 9876 : return;
1068 :
1069 42058 : elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
1070 : (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
1071 :
1072 42058 : if (RelcacheInitFileInval)
1073 : {
1074 648 : elog(DEBUG4, "removing relcache init files for database %u", dbid);
1075 :
1076 : /*
1077 : * RelationCacheInitFilePreInvalidate, when the invalidation message
1078 : * is for a specific database, requires DatabasePath to be set, but we
1079 : * should not use SetDatabasePath during recovery, since it is
1080 : * intended to be used only once by normal backends. Hence, a quick
1081 : * hack: set DatabasePath directly then unset after use.
1082 : */
1083 648 : if (OidIsValid(dbid))
1084 648 : DatabasePath = GetDatabasePath(dbid, tsid);
1085 :
1086 648 : RelationCacheInitFilePreInvalidate();
1087 :
1088 648 : if (OidIsValid(dbid))
1089 : {
1090 648 : pfree(DatabasePath);
1091 648 : DatabasePath = NULL;
1092 : }
1093 : }
1094 :
1095 42058 : SendSharedInvalidMessages(msgs, nmsgs);
1096 :
1097 42058 : if (RelcacheInitFileInval)
1098 648 : RelationCacheInitFilePostInvalidate();
1099 : }
1100 :
1101 : /*
1102 : * AtEOXact_Inval
1103 : * Process queued-up invalidation messages at end of main transaction.
1104 : *
1105 : * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1106 : * to the shared invalidation message queue. Note that these will be read
1107 : * not only by other backends, but also by our own backend at the next
1108 : * transaction start (via AcceptInvalidationMessages). This means that
1109 : * we can skip immediate local processing of anything that's still in
1110 : * CurrentCmdInvalidMsgs, and just send that list out too.
1111 : *
1112 : * If not isCommit, we are aborting, and must locally process the messages
1113 : * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1114 : * since they'll not have seen our changed tuples anyway. We can forget
1115 : * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1116 : * the caches yet.
1117 : *
1118 : * In any case, reset our state to empty. We need not physically
1119 : * free memory here, since TopTransactionContext is about to be emptied
1120 : * anyway.
1121 : *
1122 : * Note:
1123 : * This should be called as the last step in processing a transaction.
1124 : */
1125 : void
1126 749162 : AtEOXact_Inval(bool isCommit)
1127 : {
1128 749162 : inplaceInvalInfo = NULL;
1129 :
1130 : /* Quick exit if no transactional messages */
1131 749162 : if (transInvalInfo == NULL)
1132 517596 : return;
1133 :
1134 : /* Must be at top of stack */
1135 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1136 :
1137 231566 : if (isCommit)
1138 : {
1139 : /*
1140 : * Relcache init file invalidation requires processing both before and
1141 : * after we send the SI messages. However, we need not do anything
1142 : * unless we committed.
1143 : */
1144 227336 : if (transInvalInfo->ii.RelcacheInitFileInval)
1145 32640 : RelationCacheInitFilePreInvalidate();
1146 :
1147 227336 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1148 227336 : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1149 :
1150 227336 : ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1151 : SendSharedInvalidMessages);
1152 :
1153 227336 : if (transInvalInfo->ii.RelcacheInitFileInval)
1154 32640 : RelationCacheInitFilePostInvalidate();
1155 : }
1156 : else
1157 : {
1158 4230 : ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1159 : LocalExecuteInvalidationMessage);
1160 : }
1161 :
1162 : /* Need not free anything explicitly */
1163 231566 : transInvalInfo = NULL;
1164 : }
1165 :
1166 : /*
1167 : * PreInplace_Inval
1168 : * Process queued-up invalidation before inplace update critical section.
1169 : *
1170 : * Tasks belong here if they are safe even if the inplace update does not
1171 : * complete. Currently, this just unlinks a cache file, which can fail. The
1172 : * sum of this and AtInplace_Inval() mirrors AtEOXact_Inval(isCommit=true).
1173 : */
1174 : void
1175 147304 : PreInplace_Inval(void)
1176 : {
1177 : Assert(CritSectionCount == 0);
1178 :
1179 147304 : if (inplaceInvalInfo && inplaceInvalInfo->RelcacheInitFileInval)
1180 30390 : RelationCacheInitFilePreInvalidate();
1181 147304 : }
1182 :
1183 : /*
1184 : * AtInplace_Inval
1185 : * Process queued-up invalidations after inplace update buffer mutation.
1186 : */
1187 : void
1188 147304 : AtInplace_Inval(void)
1189 : {
1190 : Assert(CritSectionCount > 0);
1191 :
1192 147304 : if (inplaceInvalInfo == NULL)
1193 26640 : return;
1194 :
1195 120664 : ProcessInvalidationMessagesMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1196 : SendSharedInvalidMessages);
1197 :
1198 120664 : if (inplaceInvalInfo->RelcacheInitFileInval)
1199 30390 : RelationCacheInitFilePostInvalidate();
1200 :
1201 120664 : inplaceInvalInfo = NULL;
1202 : }
1203 :
1204 : /*
1205 : * ForgetInplace_Inval
1206 : * Alternative to PreInplace_Inval()+AtInplace_Inval(): discard queued-up
1207 : * invalidations. This lets inplace update enumerate invalidations
1208 : * optimistically, before locking the buffer.
1209 : */
1210 : void
1211 93842 : ForgetInplace_Inval(void)
1212 : {
1213 93842 : inplaceInvalInfo = NULL;
1214 93842 : }
1215 :
1216 : /*
1217 : * AtEOSubXact_Inval
1218 : * Process queued-up invalidation messages at end of subtransaction.
1219 : *
1220 : * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1221 : * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1222 : * parent's PriorCmdInvalidMsgs list.
1223 : *
1224 : * If not isCommit, we are aborting, and must locally process the messages
1225 : * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1226 : * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1227 : * touched the caches yet.
1228 : *
1229 : * In any case, pop the transaction stack. We need not physically free memory
1230 : * here, since CurTransactionContext is about to be emptied anyway
1231 : * (if aborting). Beware of the possibility of aborting the same nesting
1232 : * level twice, though.
1233 : */
1234 : void
1235 20044 : AtEOSubXact_Inval(bool isCommit)
1236 : {
1237 : int my_level;
1238 : TransInvalidationInfo *myInfo;
1239 :
1240 : /*
1241 : * Successful inplace update must clear this, but we clear it on abort.
1242 : * Inplace updates allocate this in CurrentMemoryContext, which has
1243 : * lifespan <= subtransaction lifespan. Hence, don't free it explicitly.
1244 : */
1245 20044 : if (isCommit)
1246 : Assert(inplaceInvalInfo == NULL);
1247 : else
1248 9288 : inplaceInvalInfo = NULL;
1249 :
1250 : /* Quick exit if no transactional messages. */
1251 20044 : myInfo = transInvalInfo;
1252 20044 : if (myInfo == NULL)
1253 18412 : return;
1254 :
1255 : /* Also bail out quickly if messages are not for this level. */
1256 1632 : my_level = GetCurrentTransactionNestLevel();
1257 1632 : if (myInfo->my_level != my_level)
1258 : {
1259 : Assert(myInfo->my_level < my_level);
1260 1352 : return;
1261 : }
1262 :
1263 280 : if (isCommit)
1264 : {
1265 : /* If CurrentCmdInvalidMsgs still has anything, fix it */
1266 98 : CommandEndInvalidationMessages();
1267 :
1268 : /*
1269 : * We create invalidation stack entries lazily, so the parent might
1270 : * not have one. Instead of creating one, moving all the data over,
1271 : * and then freeing our own, we can just adjust the level of our own
1272 : * entry.
1273 : */
1274 98 : if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1275 : {
1276 74 : myInfo->my_level--;
1277 74 : return;
1278 : }
1279 :
1280 : /*
1281 : * Pass up my inval messages to parent. Notice that we stick them in
1282 : * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1283 : * already been locally processed. (This would trigger the Assert in
1284 : * AppendInvalidationMessageSubGroup if the parent's
1285 : * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1286 : * PrepareInvalidationState.)
1287 : */
1288 24 : AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1289 : &myInfo->PriorCmdInvalidMsgs);
1290 :
1291 : /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
1292 24 : SetGroupToFollow(&myInfo->parent->ii.CurrentCmdInvalidMsgs,
1293 : &myInfo->parent->PriorCmdInvalidMsgs);
1294 :
1295 : /* Pending relcache inval becomes parent's problem too */
1296 24 : if (myInfo->ii.RelcacheInitFileInval)
1297 0 : myInfo->parent->ii.RelcacheInitFileInval = true;
1298 :
1299 : /* Pop the transaction state stack */
1300 24 : transInvalInfo = myInfo->parent;
1301 :
1302 : /* Need not free anything else explicitly */
1303 24 : pfree(myInfo);
1304 : }
1305 : else
1306 : {
1307 182 : ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1308 : LocalExecuteInvalidationMessage);
1309 :
1310 : /* Pop the transaction state stack */
1311 182 : transInvalInfo = myInfo->parent;
1312 :
1313 : /* Need not free anything else explicitly */
1314 182 : pfree(myInfo);
1315 : }
1316 : }
1317 :
1318 : /*
1319 : * CommandEndInvalidationMessages
1320 : * Process queued-up invalidation messages at end of one command
1321 : * in a transaction.
1322 : *
1323 : * Here, we send no messages to the shared queue, since we don't know yet if
1324 : * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1325 : * list, so as to flush our caches of any entries we have outdated in the
1326 : * current command. We then move the current-cmd list over to become part
1327 : * of the prior-cmds list.
1328 : *
1329 : * Note:
1330 : * This should be called during CommandCounterIncrement(),
1331 : * after we have advanced the command ID.
1332 : */
1333 : void
1334 1067052 : CommandEndInvalidationMessages(void)
1335 : {
1336 : /*
1337 : * You might think this shouldn't be called outside any transaction, but
1338 : * bootstrap does it, and also ABORT issued when not in a transaction. So
1339 : * just quietly return if no state to work on.
1340 : */
1341 1067052 : if (transInvalInfo == NULL)
1342 365316 : return;
1343 :
1344 701736 : ProcessInvalidationMessages(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1345 : LocalExecuteInvalidationMessage);
1346 :
1347 : /* WAL Log per-command invalidation messages for wal_level=logical */
1348 701730 : if (XLogLogicalInfoActive())
1349 7942 : LogLogicalInvalidations();
1350 :
1351 701730 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1352 701730 : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1353 : }
1354 :
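The two-list protocol described in the comments above (locally process `CurrentCmdInvalidMsgs` at each command boundary, append it to `PriorCmdInvalidMsgs`, and broadcast the accumulated prior list only at commit) can be modeled with plain arrays. This is a toy sketch of the bookkeeping only; all names are illustrative, not the actual PostgreSQL structures:

```c
#include <assert.h>
#include <string.h>

#define MAXMSGS 16

static int  prior[MAXMSGS];     /* stands in for PriorCmdInvalidMsgs */
static int  nprior = 0;
static int  current[MAXMSGS];   /* stands in for CurrentCmdInvalidMsgs */
static int  ncurrent = 0;
static int  locally_processed = 0;

static void
add_message(int msg)
{
    current[ncurrent++] = msg;
}

/* command boundary: process current messages locally, move to prior list */
static void
command_end(void)
{
    locally_processed += ncurrent;
    memcpy(prior + nprior, current, ncurrent * sizeof(int));
    nprior += ncurrent;
    ncurrent = 0;
}

/* commit: append any straggling current messages, "broadcast" prior list */
static int
commit(void)
{
    memcpy(prior + nprior, current, ncurrent * sizeof(int));
    nprior += ncurrent;
    ncurrent = 0;
    return nprior;              /* number of messages broadcast */
}
```

Note that messages still sitting in the current list at commit need no local processing before being sent, because our own backend will read them back from the shared queue at the next `AcceptInvalidationMessages()`.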
1355 :
1356 : /*
1357 : * CacheInvalidateHeapTupleCommon
1358 : * Common logic for end-of-command and inplace variants.
1359 : */
1360 : static void
1361 21253008 : CacheInvalidateHeapTupleCommon(Relation relation,
1362 : HeapTuple tuple,
1363 : HeapTuple newtuple,
1364 : InvalidationInfo *(*prepare_callback) (void))
1365 : {
1366 : InvalidationInfo *info;
1367 : Oid tupleRelId;
1368 : Oid databaseId;
1369 : Oid relationId;
1370 :
1371 : /* Do nothing during bootstrap */
1372 21253008 : if (IsBootstrapProcessingMode())
1373 1172160 : return;
1374 :
1375 : /*
1376 : * We only need to worry about invalidation for tuples that are in system
1377 : * catalogs; user-relation tuples are never in catcaches and can't affect
1378 : * the relcache either.
1379 : */
1380 20080848 : if (!IsCatalogRelation(relation))
1381 16177668 : return;
1382 :
1383 : /*
1384 : * IsCatalogRelation() will return true for TOAST tables of system
1385 : * catalogs, but we don't care about those, either.
1386 : */
1387 3903180 : if (IsToastRelation(relation))
1388 30300 : return;
1389 :
1390 : /* Allocate any required resources. */
1391 3872880 : info = prepare_callback();
1392 :
1393 : /*
1394 : * First let the catcache do its thing
1395 : */
1396 3872880 : tupleRelId = RelationGetRelid(relation);
1397 3872880 : if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1398 : {
1399 973638 : databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1400 973638 : RegisterSnapshotInvalidation(info, databaseId, tupleRelId);
1401 : }
1402 : else
1403 2899242 : PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1404 : RegisterCatcacheInvalidation,
1405 : (void *) info);
1406 :
1407 : /*
1408 : * Now, is this tuple one of the primary definers of a relcache entry? See
1409 : * comments in file header for deeper explanation.
1410 : *
1411 : * Note we ignore newtuple here; we assume an update cannot move a tuple
1412 : * from being part of one relcache entry to being part of another.
1413 : */
1414 3872880 : if (tupleRelId == RelationRelationId)
1415 : {
1416 613486 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1417 :
1418 613486 : relationId = classtup->oid;
1419 613486 : if (classtup->relisshared)
1420 34094 : databaseId = InvalidOid;
1421 : else
1422 579392 : databaseId = MyDatabaseId;
1423 : }
1424 3259394 : else if (tupleRelId == AttributeRelationId)
1425 : {
1426 1048904 : Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1427 :
1428 1048904 : relationId = atttup->attrelid;
1429 :
1430 : /*
1431 : * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1432 : * even if the rel in question is shared (which we can't easily tell).
1433 : * This essentially means that only backends in this same database
1434 : * will react to the relcache flush request. This is in fact
1435 : * appropriate, since only those backends could see our pg_attribute
1436 : * change anyway. It looks a bit ugly though. (In practice, shared
1437 : * relations can't have schema changes after bootstrap, so we should
1438 : * never come here for a shared rel anyway.)
1439 : */
1440 1048904 : databaseId = MyDatabaseId;
1441 : }
1442 2210490 : else if (tupleRelId == IndexRelationId)
1443 : {
1444 61658 : Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1445 :
1446 : /*
1447 : * When a pg_index row is updated, we should send out a relcache inval
1448 : * for the index relation. As above, we don't know the shared status
1449 : * of the index, but in practice it doesn't matter since indexes of
1450 : * shared catalogs can't have such updates.
1451 : */
1452 61658 : relationId = indextup->indexrelid;
1453 61658 : databaseId = MyDatabaseId;
1454 : }
1455 2148832 : else if (tupleRelId == ConstraintRelationId)
1456 : {
1457 77130 : Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1458 :
1459 : /*
1460 : * Foreign keys are part of relcache entries, too, so send out an
1461 : * inval for the table that the FK applies to.
1462 : */
1463 77130 : if (constrtup->contype == CONSTRAINT_FOREIGN &&
1464 7518 : OidIsValid(constrtup->conrelid))
1465 : {
1466 7518 : relationId = constrtup->conrelid;
1467 7518 : databaseId = MyDatabaseId;
1468 : }
1469 : else
1470 69612 : return;
1471 : }
1472 : else
1473 2071702 : return;
1474 :
1475 : /*
1476 : * Yes. We need to register a relcache invalidation event.
1477 : */
1478 1731566 : RegisterRelcacheInvalidation(info, databaseId, relationId);
1479 : }
1480 :
1481 : /*
1482 : * CacheInvalidateHeapTuple
1483 : * Register the given tuple for invalidation at end of command
1484 : * (ie, current command is creating or outdating this tuple) and end of
1485 : * transaction. Also, detect whether a relcache invalidation is implied.
1486 : *
1487 : * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1488 : * For an update, we are called just once, with tuple being the old tuple
1489 : * version and newtuple the new version. This allows avoidance of duplicate
1490 : * effort during an update.
1491 : */
1492 : void
1493 21011862 : CacheInvalidateHeapTuple(Relation relation,
1494 : HeapTuple tuple,
1495 : HeapTuple newtuple)
1496 : {
1497 21011862 : CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1498 : PrepareInvalidationState);
1499 21011862 : }
1500 :
1501 : /*
1502 : * CacheInvalidateHeapTupleInplace
1503 : * Register the given tuple for nontransactional invalidation pertaining
1504 : * to an inplace update. Also, detect whether a relcache invalidation is
1505 : * implied.
1506 : *
1507 : * Like CacheInvalidateHeapTuple(), but for inplace updates.
1508 : */
1509 : void
1510 241146 : CacheInvalidateHeapTupleInplace(Relation relation,
1511 : HeapTuple tuple,
1512 : HeapTuple newtuple)
1513 : {
1514 241146 : CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1515 : PrepareInplaceInvalidationState);
1516 241146 : }
1517 :
1518 : /*
1519 : * CacheInvalidateCatalog
1520 : * Register invalidation of the whole content of a system catalog.
1521 : *
1522 : * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1523 : * changed any tuples as moved them around. Some uses of catcache entries
1524 : * expect their TIDs to be correct, so we have to blow away the entries.
1525 : *
1526 : * Note: we expect caller to verify that the rel actually is a system
1527 : * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1528 : */
1529 : void
1530 216 : CacheInvalidateCatalog(Oid catalogId)
1531 : {
1532 : Oid databaseId;
1533 :
1534 216 : if (IsSharedRelation(catalogId))
1535 36 : databaseId = InvalidOid;
1536 : else
1537 180 : databaseId = MyDatabaseId;
1538 :
1539 216 : RegisterCatalogInvalidation(PrepareInvalidationState(),
1540 : databaseId, catalogId);
1541 216 : }
1542 :
1543 : /*
1544 : * CacheInvalidateRelcache
1545 : * Register invalidation of the specified relation's relcache entry
1546 : * at end of command.
1547 : *
1548 : * This is used in places that need to force relcache rebuild but aren't
1549 : * changing any of the tuples recognized as contributors to the relcache
1550 : * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1551 : */
1552 : void
1553 120586 : CacheInvalidateRelcache(Relation relation)
1554 : {
1555 : Oid databaseId;
1556 : Oid relationId;
1557 :
1558 120586 : relationId = RelationGetRelid(relation);
1559 120586 : if (relation->rd_rel->relisshared)
1560 5268 : databaseId = InvalidOid;
1561 : else
1562 115318 : databaseId = MyDatabaseId;
1563 :
1564 120586 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1565 : databaseId, relationId);
1566 120586 : }
1567 :
1568 : /*
1569 : * CacheInvalidateRelcacheAll
1570 : * Register invalidation of the whole relcache at the end of command.
1571 : *
1572 : * This is used by ALTER PUBLICATION, since changes in publications may
1573 : * affect a large number of tables.
1574 : */
1575 : void
1576 120 : CacheInvalidateRelcacheAll(void)
1577 : {
1578 120 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1579 : InvalidOid, InvalidOid);
1580 120 : }
1581 :
1582 : /*
1583 : * CacheInvalidateRelcacheByTuple
1584 : * As above, but relation is identified by passing its pg_class tuple.
1585 : */
1586 : void
1587 68260 : CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1588 : {
1589 68260 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1590 : Oid databaseId;
1591 : Oid relationId;
1592 :
1593 68260 : relationId = classtup->oid;
1594 68260 : if (classtup->relisshared)
1595 2006 : databaseId = InvalidOid;
1596 : else
1597 66254 : databaseId = MyDatabaseId;
1598 68260 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1599 : databaseId, relationId);
1600 68260 : }
1601 :
1602 : /*
1603 : * CacheInvalidateRelcacheByRelid
1604 : * As above, but relation is identified by passing its OID.
1605 : * This is the least efficient of the three options; use one of
1606 : * the above routines if you have a Relation or pg_class tuple.
1607 : */
1608 : void
1609 26638 : CacheInvalidateRelcacheByRelid(Oid relid)
1610 : {
1611 : HeapTuple tup;
1612 :
1613 26638 : tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
1614 26638 : if (!HeapTupleIsValid(tup))
1615 0 : elog(ERROR, "cache lookup failed for relation %u", relid);
1616 26638 : CacheInvalidateRelcacheByTuple(tup);
1617 26638 : ReleaseSysCache(tup);
1618 26638 : }
1619 :
1620 :
1621 : /*
1622 : * CacheInvalidateSmgr
1623 : * Register invalidation of smgr references to a physical relation.
1624 : *
1625 : * Sending this type of invalidation msg forces other backends to close open
1626 : * smgr entries for the rel. This should be done to flush dangling open-file
1627 : * references when the physical rel is being dropped or truncated. Because
1628 : * these are nontransactional (i.e., not-rollback-able) operations, we just
1629 : * send the inval message immediately without any queuing.
1630 : *
1631 : * Note: in most cases there will have been a relcache flush issued against
1632 : * the rel at the logical level. We need a separate smgr-level flush because
1633 : * it is possible for backends to have open smgr entries for rels they don't
1634 : * have a relcache entry for, e.g. because the only thing they ever did with
1635 : * the rel is write out dirty shared buffers.
1636 : *
1637 : * Note: because these messages are nontransactional, they won't be captured
1638 : * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1639 : * should happen in low-level smgr.c routines, which are executed while
1640 : * replaying WAL as well as when creating it.
1641 : *
1642 : * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1643 : * three bytes of the ProcNumber using what would otherwise be padding space.
1644 : * Thus, the maximum possible ProcNumber is 2^23-1.
1645 : */
1646 : void
1647 93914 : CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1648 : {
1649 : SharedInvalidationMessage msg;
1650 :
1651 93914 : msg.sm.id = SHAREDINVALSMGR_ID;
1652 93914 : msg.sm.backend_hi = rlocator.backend >> 16;
1653 93914 : msg.sm.backend_lo = rlocator.backend & 0xffff;
1654 93914 : msg.sm.rlocator = rlocator.locator;
1655 : /* check AddCatcacheInvalidationMessage() for an explanation */
1656 : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1657 :
1658 93914 : SendSharedInvalidMessages(&msg, 1);
1659 93914 : }
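The packing scheme described in the comment above (three bytes of ProcNumber squeezed into what would otherwise be padding, split as a signed high byte plus an unsigned 16-bit low half) can be sketched in isolation. This is a toy illustration, not the real SharedInvalidationMessage layout: `pack_backend`/`unpack_backend` and the `ProcNumber` typedef here are stand-ins. Note that the signed high byte is what lets a sentinel value such as -1 round-trip, and that the largest representable positive value is 2^23-1, matching the comment.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for PostgreSQL's ProcNumber. */
typedef int32_t ProcNumber;

/*
 * Split a ProcNumber into a signed high byte and an unsigned 16-bit low
 * half, mirroring the backend_hi / backend_lo fields described above.
 */
static void
pack_backend(ProcNumber backend, int8_t *hi, uint16_t *lo)
{
    *hi = (int8_t) (backend >> 16); /* signed, so -1 survives the trip */
    *lo = (uint16_t) (backend & 0xffff);
}

/*
 * Reassemble the ProcNumber.  Multiplication is used instead of a left
 * shift to keep the arithmetic well-defined for negative high bytes.
 */
static ProcNumber
unpack_backend(int8_t hi, uint16_t lo)
{
    return (ProcNumber) hi * 65536 + lo;
}
```

With a signed 8-bit high byte, the representable range is -2^23 .. 2^23-1, so any valid ProcNumber up to 8388607 (and the -1 "invalid" sentinel) survives the round trip.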
1660 :
1661 : /*
1662 : * CacheInvalidateRelmap
1663 : * Register invalidation of the relation mapping for a database,
1664 : * or for the shared catalogs if databaseId is zero.
1665 : *
1666 : * Sending this type of invalidation msg forces other backends to re-read
1667 : * the indicated relation mapping file. It is also necessary to send a
1668 : * relcache inval for the specific relations whose mapping has been altered,
1669 : * else the relcache won't get updated with the new filenode data.
1670 : *
1671 : * Note: because these messages are nontransactional, they won't be captured
1672 : * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1673 : * should happen in low-level relmapper.c routines, which are executed while
1674 : * replaying WAL as well as when creating it.
1675 : */
1676 : void
1677 426 : CacheInvalidateRelmap(Oid databaseId)
1678 : {
1679 : SharedInvalidationMessage msg;
1680 :
1681 426 : msg.rm.id = SHAREDINVALRELMAP_ID;
1682 426 : msg.rm.dbId = databaseId;
1683 : /* check AddCatcacheInvalidationMessage() for an explanation */
1684 : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1685 :
1686 426 : SendSharedInvalidMessages(&msg, 1);
1687 426 : }
1688 :
1689 :
1690 : /*
1691 : * CacheRegisterSyscacheCallback
1692 : * Register the specified function to be called for all future
1693 : * invalidation events in the specified cache. The cache ID and the
1694 : * hash value of the tuple being invalidated will be passed to the
1695 : * function.
1696 : *
1697 : * NOTE: Hash value zero will be passed if a cache reset request is received.
1698 : * In this case the called routines should flush all cached state.
1699 : * Yes, there's a possibility of a false match to zero, but it doesn't seem
1700 : * worth troubling over, especially since most of the current callees just
1701 : * flush all cached state anyway.
1702 : */
1703 : void
1704 527820 : CacheRegisterSyscacheCallback(int cacheid,
1705 : SyscacheCallbackFunction func,
1706 : Datum arg)
1707 : {
1708 527820 : if (cacheid < 0 || cacheid >= SysCacheSize)
1709 0 : elog(FATAL, "invalid cache ID: %d", cacheid);
1710 527820 : if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
1711 0 : elog(FATAL, "out of syscache_callback_list slots");
1712 :
1713 527820 : if (syscache_callback_links[cacheid] == 0)
1714 : {
1715 : /* first callback for this cache */
1716 373506 : syscache_callback_links[cacheid] = syscache_callback_count + 1;
1717 : }
1718 : else
1719 : {
1720 : /* add to end of chain, so that older callbacks are called first */
1721 154314 : int i = syscache_callback_links[cacheid] - 1;
1722 :
1723 184836 : while (syscache_callback_list[i].link > 0)
1724 30522 : i = syscache_callback_list[i].link - 1;
1725 154314 : syscache_callback_list[i].link = syscache_callback_count + 1;
1726 : }
1727 :
1728 527820 : syscache_callback_list[syscache_callback_count].id = cacheid;
1729 527820 : syscache_callback_list[syscache_callback_count].link = 0;
1730 527820 : syscache_callback_list[syscache_callback_count].function = func;
1731 527820 : syscache_callback_list[syscache_callback_count].arg = arg;
1732 :
1733 527820 : ++syscache_callback_count;
1734 527820 : }
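The registration logic above chains callbacks for the same cache ID through 1-based indices: a zero in the per-cache head array means "no callback registered yet", each entry's link field points (again 1-based, zero-terminated) at the next entry for that cache, and new entries are appended at the tail so older callbacks fire first. The sketch below reproduces that scheme with toy names (`head`, `cb`, `register_cb`, `call_order` are illustrative, not the real identifiers), which may help when reading the code above.

```c
#include <assert.h>

#define NCACHES 4
#define MAXCB   8

/* head[cacheid] is the 1-based index of the first callback; 0 = none. */
static int head[NCACHES];
static struct
{
    int id;     /* cache this entry belongs to */
    int link;   /* 1-based index of next entry for same cache; 0 = end */
    int tag;    /* stands in for the function/arg pair */
} cb[MAXCB];
static int cb_count;

static void
register_cb(int cacheid, int tag)
{
    if (head[cacheid] == 0)
    {
        /* first callback for this cache */
        head[cacheid] = cb_count + 1;
    }
    else
    {
        /* walk to the tail, so that older callbacks are called first */
        int i = head[cacheid] - 1;

        while (cb[i].link > 0)
            i = cb[i].link - 1;
        cb[i].link = cb_count + 1;
    }
    cb[cb_count].id = cacheid;
    cb[cb_count].link = 0;
    cb[cb_count].tag = tag;
    cb_count++;
}

/* Collect the tags registered for cacheid, in the order they would run. */
static int
call_order(int cacheid, int *out)
{
    int n = 0;
    int i = head[cacheid] - 1;

    while (i >= 0)
    {
        assert(cb[i].id == cacheid);
        out[n++] = cb[i].tag;
        i = cb[i].link - 1;
    }
    return n;
}
```

Interleaved registrations for different cache IDs share one flat array, yet each per-cache chain still yields its callbacks in registration order, which is the property CallSyscacheCallbacks relies on.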
1735 :
1736 : /*
1737 : * CacheRegisterRelcacheCallback
1738 : * Register the specified function to be called for all future
1739 : * relcache invalidation events. The OID of the relation being
1740 : * invalidated will be passed to the function.
1741 : *
1742 : * NOTE: InvalidOid will be passed if a cache reset request is received.
1743 : * In this case the called routines should flush all cached state.
1744 : */
1745 : void
1746 42322 : CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1747 : Datum arg)
1748 : {
1749 42322 : if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
1750 0 : elog(FATAL, "out of relcache_callback_list slots");
1751 :
1752 42322 : relcache_callback_list[relcache_callback_count].function = func;
1753 42322 : relcache_callback_list[relcache_callback_count].arg = arg;
1754 :
1755 42322 : ++relcache_callback_count;
1756 42322 : }
1757 :
1758 : /*
1759 : * CallSyscacheCallbacks
1760 : *
1761 : * This is exported so that CatalogCacheFlushCatalog can call it, saving
1762 : * this module from knowing which catcache IDs correspond to which catalogs.
1763 : */
1764 : void
1765 19205194 : CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1766 : {
1767 : int i;
1768 :
1769 19205194 : if (cacheid < 0 || cacheid >= SysCacheSize)
1770 0 : elog(ERROR, "invalid cache ID: %d", cacheid);
1771 :
1772 19205194 : i = syscache_callback_links[cacheid] - 1;
1773 21871642 : while (i >= 0)
1774 : {
1775 2666448 : struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1776 :
1777 : Assert(ccitem->id == cacheid);
1778 2666448 : ccitem->function(ccitem->arg, cacheid, hashvalue);
1779 2666448 : i = ccitem->link - 1;
1780 : }
1781 19205194 : }
1782 :
1783 : /*
1784 : * LogLogicalInvalidations
1785 : *
1786 : * Emit WAL for invalidations caused by the current command.
1787 : *
1788 : * This is currently only used for logging invalidations at the command end
1789 : * or at commit time if any invalidations are pending.
1790 : */
1791 : void
1792 30610 : LogLogicalInvalidations(void)
1793 : {
1794 : xl_xact_invals xlrec;
1795 : InvalidationMsgsGroup *group;
1796 : int nmsgs;
1797 :
1798 : /* Quick exit if we haven't done anything with invalidation messages. */
1799 30610 : if (transInvalInfo == NULL)
1800 19468 : return;
1801 :
1802 11142 : group = &transInvalInfo->ii.CurrentCmdInvalidMsgs;
1803 11142 : nmsgs = NumMessagesInGroup(group);
1804 :
1805 11142 : if (nmsgs > 0)
1806 : {
1807 : /* prepare record */
1808 8956 : memset(&xlrec, 0, MinSizeOfXactInvals);
1809 8956 : xlrec.nmsgs = nmsgs;
1810 :
1811 : /* perform insertion */
1812 8956 : XLogBeginInsert();
1813 8956 : XLogRegisterData((char *) (&xlrec), MinSizeOfXactInvals);
1814 8956 : ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1815 : XLogRegisterData((char *) msgs,
1816 : n * sizeof(SharedInvalidationMessage)));
1817 8956 : ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1818 : XLogRegisterData((char *) msgs,
1819 : n * sizeof(SharedInvalidationMessage)));
1820 8956 : XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1821 : }
1822 : }
|