1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
4
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7 Will Deacon <will.deacon@arm.com>
8 Peter Zijlstra <peterz@infradead.org>
9
10 ==========
11 DISCLAIMER
12 ==========
13
14 This document is not a specification; it is intentionally (for the sake of
15 brevity) and unintentionally (due to being human) incomplete. This document is
16 meant as a guide to using the various memory barriers provided by Linux, but
17 in case of any doubt (and there are many) please ask.
18
19 To repeat, this document is not a specification of what Linux expects from
20 hardware.
21
22 The purpose of this document is twofold:
23
24 (1) to specify the minimum functionality that one can rely on for any
25 particular barrier, and
26
27 (2) to provide a guide as to how to use the barriers that are available.
28
29 Note that an architecture can provide more than the minimum requirement
30 for any particular barrier, but if the architecture provides less than
31 that, that architecture is incorrect.
32
33 Note also that it is possible that a barrier may be a no-op for an
34 architecture because the way that arch works renders an explicit barrier
35 unnecessary in that case.
36
37
38 ========
39 CONTENTS
40 ========
41
42 (*) Abstract memory access model.
43
44 - Device operations.
45 - Guarantees.
46
47 (*) What are memory barriers?
48
49 - Varieties of memory barrier.
50 - What may not be assumed about memory barriers?
51 - Data dependency barriers.
52 - Control dependencies.
53 - SMP barrier pairing.
54 - Examples of memory barrier sequences.
55 - Read memory barriers vs load speculation.
56 - Transitivity.
57
58 (*) Explicit kernel barriers.
59
60 - Compiler barrier.
61 - CPU memory barriers.
62 - MMIO write barrier.
63
64 (*) Implicit kernel memory barriers.
65
66 - Lock acquisition functions.
67 - Interrupt disabling functions.
68 - Sleep and wake-up functions.
69 - Miscellaneous functions.
70
71 (*) Inter-CPU acquiring barrier effects.
72
73 - Acquires vs memory accesses.
74 - Acquires vs I/O accesses.
75
76 (*) Where are memory barriers needed?
77
78 - Interprocessor interaction.
79 - Atomic operations.
80 - Accessing devices.
81 - Interrupts.
82
83 (*) Kernel I/O barrier effects.
84
85 (*) Assumed minimum execution ordering model.
86
87 (*) The effects of the cpu cache.
88
89 - Cache coherency.
90 - Cache coherency vs DMA.
91 - Cache coherency vs MMIO.
92
93 (*) The things CPUs get up to.
94
95 - And then there's the Alpha.
96 - Virtual Machine Guests.
97
98 (*) Example uses.
99
100 - Circular buffers.
101
102 (*) References.
103
104
105 ============================
106 ABSTRACT MEMORY ACCESS MODEL
107 ============================
108
109 Consider the following abstract model of the system:
110
111 : :
112 : :
113 : :
114 +-------+ : +--------+ : +-------+
115 | | : | | : | |
116 | | : | | : | |
117 | CPU 1 |<----->| Memory |<----->| CPU 2 |
118 | | : | | : | |
119 | | : | | : | |
120 +-------+ : +--------+ : +-------+
121 ^ : ^ : ^
122 | : | : |
123 | : | : |
124 | : v : |
125 | : +--------+ : |
126 | : | | : |
127 | : | | : |
128 +---------->| Device |<----------+
129 : | | :
130 : | | :
131 : +--------+ :
132 : :
133
134 Each CPU executes a program that generates memory access operations. In the
135 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
136 perform the memory operations in any order it likes, provided program causality
137 appears to be maintained. Similarly, the compiler may also arrange the
138 instructions it emits in any order it likes, provided it doesn't affect the
139 apparent operation of the program.
140
141 So in the above diagram, the effects of the memory operations performed by a
142 CPU are perceived by the rest of the system as the operations cross the
143 interface between the CPU and rest of the system (the dotted lines).
144
145
146 For example, consider the following sequence of events:
147
148 CPU 1 CPU 2
149 =============== ===============
150 { A == 1; B == 2 }
151 A = 3; x = B;
152 B = 4; y = A;
153
154 The set of accesses as seen by the memory system in the middle can be arranged
155 in 24 different combinations:
156
157 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
158 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
159 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
160 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
161 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
162 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
163 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
164 STORE B=4, ...
165 ...
166
167 and can thus result in four different combinations of values:
168
169 x == 2, y == 1
170 x == 2, y == 3
171 x == 4, y == 1
172 x == 4, y == 3
173
174
175 Furthermore, the stores committed by a CPU to the memory system may not be
176 perceived by the loads made by another CPU in the same order as the stores were
177 committed.
178
179
180 As a further example, consider this sequence of events:
181
182 CPU 1 CPU 2
183 =============== ===============
184 { A == 1, B == 2, C == 3, P == &A, Q == &C }
185 B = 4; Q = P;
186 P = &B; D = *Q;
187
188 There is an obvious data dependency here, as the value loaded into D depends on
189 the address retrieved from P by CPU 2. At the end of the sequence, any of the
190 following results are possible:
191
192 (Q == &A) and (D == 1)
193 (Q == &B) and (D == 2)
194 (Q == &B) and (D == 4)
195
196 Note that CPU 2 will never try and load C into D because the CPU will load P
197 into Q before issuing the load of *Q.
198
199
200 DEVICE OPERATIONS
201 -----------------
202
203 Some devices present their control interfaces as collections of memory
204 locations, but the order in which the control registers are accessed is very
205 important. For instance, imagine an ethernet card with a set of internal
206 registers that are accessed through an address port register (A) and a data
207 port register (D). To read internal register 5, the following code might then
208 be used:
209
210 *A = 5;
211 x = *D;
212
213 but this might show up as either of the following two sequences:
214
215 STORE *A = 5, x = LOAD *D
216 x = LOAD *D, STORE *A = 5
217
218 the second of which will almost certainly result in a malfunction, since it sets
219 the address _after_ attempting to read the register.
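
The cure is to enforce ordering between the two accesses.  A minimal
sketch (assuming A and D really are pointers to the memory-mapped address
and data ports) is to place a general memory barrier between them; real
drivers would normally use the readl()/writel() family of accessors
instead, as discussed under "Kernel I/O barrier effects":

	*A = 5;
	mb();
	x = *D;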
220
221
222 GUARANTEES
223 ----------
224
225 There are some minimal guarantees that may be expected of a CPU:
226
227 (*) On any given CPU, dependent memory accesses will be issued in order, with
228 respect to itself. This means that for:
229
230 Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
231
232 the CPU will issue the following memory operations:
233
234 Q = LOAD P, D = LOAD *Q
235
236 and always in that order. On most systems, smp_read_barrier_depends()
237 does nothing, but it is required for DEC Alpha. The READ_ONCE()
238 is required to prevent compiler mischief. Please note that you
239 should normally use something like rcu_dereference() instead of
240 open-coding smp_read_barrier_depends().
241
242 (*) Overlapping loads and stores within a particular CPU will appear to be
243 ordered within that CPU. This means that for:
244
245 a = READ_ONCE(*X); WRITE_ONCE(*X, b);
246
247 the CPU will only issue the following sequence of memory operations:
248
249 a = LOAD *X, STORE *X = b
250
251 And for:
252
253 WRITE_ONCE(*X, c); d = READ_ONCE(*X);
254
255 the CPU will only issue:
256
257 STORE *X = c, d = LOAD *X
258
259 (Loads and stores overlap if they are targeted at overlapping pieces of
260 memory).
261
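For example, rather than open-coding the sequence in the first guarantee
above, a reader would normally write something like the following sketch,
in which the pointer "gp" and its "data" member are illustrative only:

	rcu_read_lock();
	q = rcu_dereference(gp);	/* implies smp_read_barrier_depends() */
	if (q)
		d = READ_ONCE(q->data);
	rcu_read_unlock();
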
262 And there are a number of things that _must_ or _must_not_ be assumed:
263
264 (*) It _must_not_ be assumed that the compiler will do what you want
265 with memory references that are not protected by READ_ONCE() and
266 WRITE_ONCE(). Without them, the compiler is within its rights to
267 do all sorts of "creative" transformations, which are covered in
268 the COMPILER BARRIER section.
269
270 (*) It _must_not_ be assumed that independent loads and stores will be issued
271 in the order given. This means that for:
272
273 X = *A; Y = *B; *D = Z;
274
275 we may get any of the following sequences:
276
277 X = LOAD *A, Y = LOAD *B, STORE *D = Z
278 X = LOAD *A, STORE *D = Z, Y = LOAD *B
279 Y = LOAD *B, X = LOAD *A, STORE *D = Z
280 Y = LOAD *B, STORE *D = Z, X = LOAD *A
281 STORE *D = Z, X = LOAD *A, Y = LOAD *B
282 STORE *D = Z, Y = LOAD *B, X = LOAD *A
283
284 (*) It _must_ be assumed that overlapping memory accesses may be merged or
285 discarded. This means that for:
286
287 X = *A; Y = *(A + 4);
288
289 we may get any one of the following sequences:
290
291 X = LOAD *A; Y = LOAD *(A + 4);
292 Y = LOAD *(A + 4); X = LOAD *A;
293 {X, Y} = LOAD {*A, *(A + 4) };
294
295 And for:
296
297 *A = X; *(A + 4) = Y;
298
299 we may get any of:
300
301 STORE *A = X; STORE *(A + 4) = Y;
302 STORE *(A + 4) = Y; STORE *A = X;
303 STORE {*A, *(A + 4) } = {X, Y};
304
305 And there are anti-guarantees:
306
307 (*) These guarantees do not apply to bitfields, because compilers often
308 generate code to modify these using non-atomic read-modify-write
309 sequences. Do not attempt to use bitfields to synchronize parallel
310 algorithms.
311
312 (*) Even in cases where bitfields are protected by locks, all fields
313 in a given bitfield must be protected by one lock. If two fields
314 in a given bitfield are protected by different locks, the compiler's
315 non-atomic read-modify-write sequences can cause an update to one
316 field to corrupt the value of an adjacent field.
317
318 (*) These guarantees apply only to properly aligned and sized scalar
319 variables. "Properly sized" currently means variables that are
320 the same size as "char", "short", "int" and "long". "Properly
321 aligned" means the natural alignment, thus no constraints for
322 "char", two-byte alignment for "short", four-byte alignment for
323 "int", and either four-byte or eight-byte alignment for "long",
324 on 32-bit and 64-bit systems, respectively. Note that these
325 guarantees were introduced into the C11 standard, so beware when
326 using older pre-C11 compilers (for example, gcc 4.6). The portion
327 of the standard containing this guarantee is Section 3.14, which
328 defines "memory location" as follows:
329
330 memory location
331 either an object of scalar type, or a maximal sequence
332 of adjacent bit-fields all having nonzero width
333
334 NOTE 1: Two threads of execution can update and access
335 separate memory locations without interfering with
336 each other.
337
338 NOTE 2: A bit-field and an adjacent non-bit-field member
339 are in separate memory locations. The same applies
340 to two bit-fields, if one is declared inside a nested
341 structure declaration and the other is not, or if the two
342 are separated by a zero-length bit-field declaration,
343 or if they are separated by a non-bit-field member
344 declaration. It is not safe to concurrently update two
345 bit-fields in the same structure if all members declared
346 between them are also bit-fields, no matter what the
347 sizes of those intervening bit-fields happen to be.
348
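As an illustration of the C11 wording above, consider the following
purely illustrative structure; which members may safely be updated
concurrently follows directly from the "memory location" definition:

	struct foo {
		int  a;		/* its own memory location */
		int  b:4;	/* b and c share one memory location,  */
		int  c:4;	/*  so don't update them concurrently  */
		int  :0;	/* zero-length bit-field: ends that location */
		int  d:4;	/* separate memory location from b and c */
	};
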
349
350 =========================
351 WHAT ARE MEMORY BARRIERS?
352 =========================
353
354 As can be seen above, independent memory operations are effectively performed
355 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
356 What is required is some way of intervening to instruct the compiler and the
357 CPU to restrict the order.
358
359 Memory barriers are such interventions. They impose a perceived partial
360 ordering over the memory operations on either side of the barrier.
361
362 Such enforcement is important because the CPUs and other devices in a system
363 can use a variety of tricks to improve performance, including reordering,
364 deferral and combination of memory operations; speculative loads; speculative
365 branch prediction and various types of caching. Memory barriers are used to
366 override or suppress these tricks, allowing the code to sanely control the
367 interaction of multiple CPUs and/or devices.
368
369
370 VARIETIES OF MEMORY BARRIER
371 ---------------------------
372
373 Memory barriers come in four basic varieties:
374
375 (1) Write (or store) memory barriers.
376
377 A write memory barrier gives a guarantee that all the STORE operations
378 specified before the barrier will appear to happen before all the STORE
379 operations specified after the barrier with respect to the other
380 components of the system.
381
382 A write barrier is a partial ordering on stores only; it is not required
383 to have any effect on loads.
384
385 A CPU can be viewed as committing a sequence of store operations to the
386 memory system as time progresses. All stores before a write barrier will
387 occur in the sequence _before_ all the stores after the write barrier.
388
389 [!] Note that write barriers should normally be paired with read or data
390 dependency barriers; see the "SMP barrier pairing" subsection.
391
392
393 (2) Data dependency barriers.
394
395 A data dependency barrier is a weaker form of read barrier. In the case
396 where two loads are performed such that the second depends on the result
397 of the first (eg: the first load retrieves the address to which the second
398 load will be directed), a data dependency barrier would be required to
399 make sure that the target of the second load is updated before the address
400 obtained by the first load is accessed.
401
402 A data dependency barrier is a partial ordering on interdependent loads
403 only; it is not required to have any effect on stores, independent loads
404 or overlapping loads.
405
406 As mentioned in (1), the other CPUs in the system can be viewed as
407 committing sequences of stores to the memory system that the CPU being
408 considered can then perceive. A data dependency barrier issued by the CPU
409 under consideration guarantees that for any load preceding it, if that
410 load touches one of a sequence of stores from another CPU, then by the
411 time the barrier completes, the effects of all the stores prior to that
412 touched by the load will be perceptible to any loads issued after the data
413 dependency barrier.
414
415 See the "Examples of memory barrier sequences" subsection for diagrams
416 showing the ordering constraints.
417
418 [!] Note that the first load really has to have a _data_ dependency and
419 not a control dependency. If the address for the second load is dependent
420 on the first load, but the dependency is through a conditional rather than
421 actually loading the address itself, then it's a _control_ dependency and
422 a full read barrier or better is required. See the "Control dependencies"
423 subsection for more information.
424
425 [!] Note that data dependency barriers should normally be paired with
426 write barriers; see the "SMP barrier pairing" subsection.
427
428
429 (3) Read (or load) memory barriers.
430
431 A read barrier is a data dependency barrier plus a guarantee that all the
432 LOAD operations specified before the barrier will appear to happen before
433 all the LOAD operations specified after the barrier with respect to the
434 other components of the system.
435
436 A read barrier is a partial ordering on loads only; it is not required to
437 have any effect on stores.
438
439 Read memory barriers imply data dependency barriers, and so can substitute
440 for them.
441
442 [!] Note that read barriers should normally be paired with write barriers;
443 see the "SMP barrier pairing" subsection.
444
445
446 (4) General memory barriers.
447
448 A general memory barrier gives a guarantee that all the LOAD and STORE
449 operations specified before the barrier will appear to happen before all
450 the LOAD and STORE operations specified after the barrier with respect to
451 the other components of the system.
452
453 A general memory barrier is a partial ordering over both loads and stores.
454
455 General memory barriers imply both read and write memory barriers, and so
456 can substitute for either.
457
458
459 And a couple of implicit varieties:
460
461 (5) ACQUIRE operations.
462
463 This acts as a one-way permeable barrier. It guarantees that all memory
464 operations after the ACQUIRE operation will appear to happen after the
465 ACQUIRE operation with respect to the other components of the system.
466 ACQUIRE operations include LOCK operations and both smp_load_acquire()
467 and smp_cond_acquire() operations. The latter builds the necessary ACQUIRE
468 semantics from relying on a control dependency and smp_rmb().
469
470 Memory operations that occur before an ACQUIRE operation may appear to
471 happen after it completes.
472
473 An ACQUIRE operation should almost always be paired with a RELEASE
474 operation.
475
476
477 (6) RELEASE operations.
478
479 This also acts as a one-way permeable barrier. It guarantees that all
480 memory operations before the RELEASE operation will appear to happen
481 before the RELEASE operation with respect to the other components of the
482 system. RELEASE operations include UNLOCK operations and
483 smp_store_release() operations.
484
485 Memory operations that occur after a RELEASE operation may appear to
486 happen before it completes.
487
488 The use of ACQUIRE and RELEASE operations generally precludes the need
489 for other sorts of memory barrier (but note the exceptions mentioned in
490 the subsection "MMIO write barrier"). In addition, a RELEASE+ACQUIRE
491 pair is -not- guaranteed to act as a full memory barrier. However, after
492 an ACQUIRE on a given variable, all memory accesses preceding any prior
493 RELEASE on that same variable are guaranteed to be visible. In other
494 words, within a given variable's critical section, all accesses of all
495 previous critical sections for that variable are guaranteed to have
496 completed.
497
498 This means that ACQUIRE acts as a minimal "acquire" operation and
499 RELEASE acts as a minimal "release" operation.
500
501 A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
502 and RELEASE variants in addition to fully-ordered and relaxed (no barrier
503 semantics) definitions. For compound atomics performing both a load and a
504 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
505 only to the store portion of the operation.
506
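As a rough sketch of the ACQUIRE/RELEASE pairing described above (the
variables "data" and "ready" are purely illustrative and both initially
zero), a producer can publish a value and a consumer can pick it up as
follows:

	CPU 1					CPU 2
	===============================		===============================
	WRITE_ONCE(data, 42);
	smp_store_release(&ready, 1);
						while (!smp_load_acquire(&ready))
							cpu_relax();
						r1 = READ_ONCE(data);

Once CPU 2's smp_load_acquire() has returned 1, r1 is guaranteed to be 42.
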
507 Memory barriers are only required where there's a possibility of interaction
508 between two CPUs or between a CPU and a device. If it can be guaranteed that
509 there won't be any such interaction in any particular piece of code, then
510 memory barriers are unnecessary in that piece of code.
511
512
513 Note that these are the _minimum_ guarantees. Different architectures may give
514 more substantial guarantees, but they may _not_ be relied upon outside of arch
515 specific code.
516
517
518 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
519 ----------------------------------------------
520
521 There are certain things that the Linux kernel memory barriers do not guarantee:
522
523 (*) There is no guarantee that any of the memory accesses specified before a
524 memory barrier will be _complete_ by the completion of a memory barrier
525 instruction; the barrier can be considered to draw a line in that CPU's
526 access queue that accesses of the appropriate type may not cross.
527
528 (*) There is no guarantee that issuing a memory barrier on one CPU will have
529 any direct effect on another CPU or any other hardware in the system. The
530 indirect effect will be the order in which the second CPU sees the effects
531 of the first CPU's accesses occur, but see the next point:
532
533 (*) There is no guarantee that a CPU will see the correct order of effects
534 from a second CPU's accesses, even _if_ the second CPU uses a memory
535 barrier, unless the first CPU _also_ uses a matching memory barrier (see
536 the subsection on "SMP Barrier Pairing").
537
538 (*) There is no guarantee that some intervening piece of off-the-CPU
539 hardware[*] will not reorder the memory accesses. CPU cache coherency
540 mechanisms should propagate the indirect effects of a memory barrier
541 between CPUs, but might not do so in order.
542
543 [*] For information on bus mastering DMA and coherency please read:
544
545 Documentation/PCI/pci.txt
546 Documentation/DMA-API-HOWTO.txt
547 Documentation/DMA-API.txt
548
549
550 DATA DEPENDENCY BARRIERS
551 ------------------------
552
553 The usage requirements of data dependency barriers are a little subtle, and
554 it's not always obvious that they're needed. To illustrate, consider the
555 following sequence of events:
556
557 CPU 1 CPU 2
558 =============== ===============
559 { A == 1, B == 2, C == 3, P == &A, Q == &C }
560 B = 4;
561 <write barrier>
562 WRITE_ONCE(P, &B);
563 Q = READ_ONCE(P);
564 D = *Q;
565
566 There's a clear data dependency here, and it would seem that by the end of the
567 sequence, Q must be either &A or &B, and that:
568
569 (Q == &A) implies (D == 1)
570 (Q == &B) implies (D == 4)
571
572 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
573 leading to the following situation:
574
575 (Q == &B) and (D == 2) ????
576
577 Whilst this may seem like a failure of coherency or causality maintenance, it
578 isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
579 Alpha).
580
581 To deal with this, a data dependency barrier or better must be inserted
582 between the address load and the data load:
583
584 CPU 1 CPU 2
585 =============== ===============
586 { A == 1, B == 2, C == 3, P == &A, Q == &C }
587 B = 4;
588 <write barrier>
589 WRITE_ONCE(P, &B);
590 Q = READ_ONCE(P);
591 <data dependency barrier>
592 D = *Q;
593
594 This enforces the occurrence of one of the two implications, and prevents the
595 third possibility from arising.
596
597 A data-dependency barrier must also order against dependent writes:
598
599 CPU 1 CPU 2
600 =============== ===============
601 { A == 1, B == 2, C == 3, P == &A, Q == &C }
602 B = 4;
603 <write barrier>
604 WRITE_ONCE(P, &B);
605 Q = READ_ONCE(P);
606 <data dependency barrier>
607 *Q = 5;
608
609 The data-dependency barrier must order the read into Q with the store
610 into *Q. This prohibits this outcome:
611
612 (Q == &B) && (B == 4)
613
614 Please note that this pattern should be rare. After all, the whole point
615 of dependency ordering is to -prevent- writes to the data structure, along
616 with the expensive cache misses associated with those writes. This pattern
617 can be used to record rare error conditions and the like, and the ordering
618 prevents such records from being lost.
619
620
621 [!] Note that this extremely counterintuitive situation arises most easily on
622 machines with split caches, so that, for example, one cache bank processes
623 even-numbered cache lines and the other bank processes odd-numbered cache
624 lines. The pointer P might be stored in an odd-numbered cache line, and the
625 variable B might be stored in an even-numbered cache line. Then, if the
626 even-numbered bank of the reading CPU's cache is extremely busy while the
627 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
628 but the old value of the variable B (2).
629
630
631 The data dependency barrier is very important to the RCU system,
632 for example. See rcu_assign_pointer() and rcu_dereference() in
633 include/linux/rcupdate.h. This permits the current target of an RCU'd
634 pointer to be replaced with a new modified target, without the replacement
635 target appearing to be incompletely initialised.
636
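For example, an updater might publish a new structure and a reader might
pick it up as follows (a simplified sketch; "gp", "p", "q" and
do_something_with() are illustrative only):

	/* Updater */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->a = 1;
	p->b = 2;
	rcu_assign_pointer(gp, p);	/* provides the write barrier */

	/* Reader */
	rcu_read_lock();
	q = rcu_dereference(gp);	/* provides the data dependency barrier */
	if (q)
		do_something_with(q->a, q->b);
	rcu_read_unlock();

The reader is guaranteed to see either the old pointer or the fully
initialised new structure, never a partially initialised one.
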
637 See also the subsection on "Cache Coherency" for a more thorough example.
638
639
640 CONTROL DEPENDENCIES
641 --------------------
642
643 A load-load control dependency requires a full read memory barrier, not
644 simply a data dependency barrier to make it work correctly. Consider the
645 following bit of code:
646
647 q = READ_ONCE(a);
648 if (q) {
649 <data dependency barrier> /* BUG: No data dependency!!! */
650 p = READ_ONCE(b);
651 }
652
653 This will not have the desired effect because there is no actual data
654 dependency, but rather a control dependency that the CPU may short-circuit
655 by attempting to predict the outcome in advance, so that other CPUs see
656 the load from b as having happened before the load from a. In such a
657 case what's actually required is:
658
659 q = READ_ONCE(a);
660 if (q) {
661 <read barrier>
662 p = READ_ONCE(b);
663 }
664
665 However, stores are not speculated. This means that ordering -is- provided
666 for load-store control dependencies, as in the following example:
667
668 q = READ_ONCE(a);
669 if (q) {
670 WRITE_ONCE(b, p);
671 }
672
673 Control dependencies pair normally with other types of barriers. That
674 said, please note that READ_ONCE() is not optional! Without the
675 READ_ONCE(), the compiler might combine the load from 'a' with other
676 loads from 'a', and the store to 'b' with other stores to 'b', with
677 possible highly counterintuitive effects on ordering.
678
679 Worse yet, if the compiler is able to prove (say) that the value of
680 variable 'a' is always non-zero, it would be well within its rights
681 to optimize the original example by eliminating the "if" statement
682 as follows:
683
684 q = a;
685 b = p; /* BUG: Compiler and CPU can both reorder!!! */
686
687 So don't leave out the READ_ONCE().
688
689 It is tempting to try to enforce ordering on identical stores on both
690 branches of the "if" statement as follows:
691
692 q = READ_ONCE(a);
693 if (q) {
694 barrier();
695 WRITE_ONCE(b, p);
696 do_something();
697 } else {
698 barrier();
699 WRITE_ONCE(b, p);
700 do_something_else();
701 }
702
703 Unfortunately, current compilers will transform this as follows at high
704 optimization levels:
705
706 q = READ_ONCE(a);
707 barrier();
708 WRITE_ONCE(b, p); /* BUG: No ordering vs. load from a!!! */
709 if (q) {
710 /* WRITE_ONCE(b, p); -- moved up, BUG!!! */
711 do_something();
712 } else {
713 /* WRITE_ONCE(b, p); -- moved up, BUG!!! */
714 do_something_else();
715 }
716
717 Now there is no conditional between the load from 'a' and the store to
718 'b', which means that the CPU is within its rights to reorder them:
719 The conditional is absolutely required, and must be present in the
720 assembly code even after all compiler optimizations have been applied.
721 Therefore, if you need ordering in this example, you need explicit
722 memory barriers, for example, smp_store_release():
723
724 q = READ_ONCE(a);
725 if (q) {
726 smp_store_release(&b, p);
727 do_something();
728 } else {
729 smp_store_release(&b, p);
730 do_something_else();
731 }
732
733 In contrast, without explicit memory barriers, two-legged-if control
734 ordering is guaranteed only when the stores differ, for example:
735
736 q = READ_ONCE(a);
737 if (q) {
738 WRITE_ONCE(b, p);
739 do_something();
740 } else {
741 WRITE_ONCE(b, r);
742 do_something_else();
743 }
744
745 The initial READ_ONCE() is still required to prevent the compiler from
746 proving the value of 'a'.
747
748 In addition, you need to be careful what you do with the local variable 'q',
749 otherwise the compiler might be able to guess the value and again remove
750 the needed conditional. For example:
751
752 q = READ_ONCE(a);
753 if (q % MAX) {
754 WRITE_ONCE(b, p);
755 do_something();
756 } else {
757 WRITE_ONCE(b, r);
758 do_something_else();
759 }
760
761 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
762 equal to zero, in which case the compiler is within its rights to
763 transform the above code into the following:
764
765 q = READ_ONCE(a);
766 WRITE_ONCE(b, p);
767 do_something_else();
768
769 Given this transformation, the CPU is not required to respect the ordering
770 between the load from variable 'a' and the store to variable 'b'. It is
771 tempting to add a barrier(), but this does not help. The conditional
772 is gone, and the barrier won't bring it back. Therefore, if you are
773 relying on this ordering, you should make sure that MAX is greater than
774 one, perhaps as follows:
775
776 q = READ_ONCE(a);
777 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
778 if (q % MAX) {
779 WRITE_ONCE(b, p);
780 do_something();
781 } else {
782 WRITE_ONCE(b, r);
783 do_something_else();
784 }
785
786 Please note once again that the stores to 'b' differ. If they were
787 identical, as noted earlier, the compiler could pull this store outside
788 of the 'if' statement.
789
790 You must also be careful not to rely too much on boolean short-circuit
791 evaluation. Consider this example:
792
793 q = READ_ONCE(a);
794 if (q || 1 > 0)
795 WRITE_ONCE(b, 1);
796
797 Because the first condition cannot fault and the second condition is
798 always true, the compiler can transform this example as follows,
799 defeating the control dependency:
800
801 q = READ_ONCE(a);
802 WRITE_ONCE(b, 1);
803
804 This example underscores the need to ensure that the compiler cannot
805 out-guess your code. More generally, although READ_ONCE() does force
806 the compiler to actually emit code for a given load, it does not force
807 the compiler to use the results.
808
809 In addition, control dependencies apply only to the then-clause and
810 else-clause of the if-statement in question. In particular, they do
811 not necessarily apply to code following the if-statement:
812
813 q = READ_ONCE(a);
814 if (q) {
815 WRITE_ONCE(b, p);
816 } else {
817 WRITE_ONCE(b, r);
818 }
819 WRITE_ONCE(c, 1); /* BUG: No ordering against the read from "a". */
820
821 It is tempting to argue that there in fact is ordering because the
822 compiler cannot reorder volatile accesses and also cannot reorder
823 the writes to "b" with the condition. Unfortunately for this line
824 of reasoning, the compiler might compile the two writes to "b" as
825 conditional-move instructions, as in this fanciful pseudo-assembly
826 language:
827
828 ld r1,a
829 ld r2,p
830 ld r3,r
831 cmp r1,$0
832 cmov,ne r4,r2
833 cmov,eq r4,r3
834 st r4,b
835 st $1,c
836
837 A weakly ordered CPU would have no dependency of any sort between the load
838 from "a" and the store to "c". The control dependencies would extend
839 only to the pair of cmov instructions and the store depending on them.
840 In short, control dependencies apply only to the stores in the then-clause
841 and else-clause of the if-statement in question (including functions
842 invoked by those two clauses), not to code following that if-statement.
843
844 Finally, control dependencies do -not- provide transitivity. This is
845 demonstrated by two related examples, with the initial values of
846 x and y both being zero:
847
848 CPU 0 CPU 1
849 ======================= =======================
850 r1 = READ_ONCE(x); r2 = READ_ONCE(y);
851 if (r1 > 0) if (r2 > 0)
852 WRITE_ONCE(y, 1); WRITE_ONCE(x, 1);
853
854 assert(!(r1 == 1 && r2 == 1));
855
856 The above two-CPU example will never trigger the assert(). However,
857 if control dependencies guaranteed transitivity (which they do not),
858 then adding the following CPU would guarantee a related assertion:
859
860 CPU 2
861 =====================
862 WRITE_ONCE(x, 2);
863
864 assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
865
866 But because control dependencies do -not- provide transitivity, the above
867 assertion can fail after the combined three-CPU example completes. If you
868 need the three-CPU example to provide ordering, you will need smp_mb()
869 between the loads and stores in the CPU 0 and CPU 1 code fragments,
870 that is, just before or just after the "if" statements. Furthermore,
871 the original two-CPU example is very fragile and should be avoided.
872
873 These two examples are the LB and WWC litmus tests from this paper:
874 http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
875 site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
876
877 In summary:
878
879 (*) Control dependencies can order prior loads against later stores.
880 However, they do -not- guarantee any other sort of ordering:
881 Not prior loads against later loads, nor prior stores against
882 later anything. If you need these other forms of ordering,
883 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
884 later loads, smp_mb().
885
886 (*) If both legs of the "if" statement begin with identical stores to
887 the same variable, then those stores must be ordered, either by
888 preceding both of them with smp_mb() or by using smp_store_release()
889 to carry out the stores. Please note that it is -not- sufficient
890 to use barrier() at the beginning of each leg of the "if" statement
891 because, as shown by the example above, optimizing compilers can
892 destroy the control dependency while respecting the letter of the
893 barrier() law.
894
895 (*) Control dependencies require at least one run-time conditional
896 between the prior load and the subsequent store, and this
897 conditional must involve the prior load. If the compiler is able
898 to optimize the conditional away, it will have also optimized
899 away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
900 can help to preserve the needed conditional.
901
902 (*) Control dependencies require that the compiler avoid reordering the
903 dependency into nonexistence. Careful use of READ_ONCE() or
904 atomic{,64}_read() can help to preserve your control dependency.
905 Please see the COMPILER BARRIER section for more information.
906
907 (*) Control dependencies apply only to the then-clause and else-clause
908 of the if-statement containing the control dependency, including
909 any functions that these two clauses call. Control dependencies
910 do -not- apply to code following the if-statement containing the
911 control dependency.
912
913 (*) Control dependencies pair normally with other types of barriers.
914
915 (*) Control dependencies do -not- provide transitivity. If you
916 need transitivity, use smp_mb().
917
918
919 SMP BARRIER PAIRING
920 -------------------
921
922 When dealing with CPU-CPU interactions, certain types of memory barrier should
923 always be paired. A lack of appropriate pairing is almost certainly an error.
924
925 General barriers pair with each other, though they also pair with most
926 other types of barriers, albeit without transitivity. An acquire barrier
927 pairs with a release barrier, but both may also pair with other barriers,
928 including of course general barriers. A write barrier pairs with a data
929 dependency barrier, a control dependency, an acquire barrier, a release
930 barrier, a read barrier, or a general barrier. Similarly a read barrier,
931 control dependency, or a data dependency barrier pairs with a write
932 barrier, an acquire barrier, a release barrier, or a general barrier:
933
934 CPU 1 CPU 2
935 =============== ===============
936 WRITE_ONCE(a, 1);
937 <write barrier>
938 WRITE_ONCE(b, 2); x = READ_ONCE(b);
939 <read barrier>
940 y = READ_ONCE(a);
941
942 Or:
943
944 CPU 1 CPU 2
945 =============== ===============================
946 a = 1;
947 <write barrier>
948 WRITE_ONCE(b, &a); x = READ_ONCE(b);
949 <data dependency barrier>
950 y = *x;
951
952 Or even:
953
954 CPU 1 CPU 2
955 =============== ===============================
956 r1 = READ_ONCE(y);
957 <general barrier>
958 WRITE_ONCE(x, 1); if (r2 = READ_ONCE(x)) {
959 <implicit control dependency>
960 WRITE_ONCE(y, 1);
961 }
962
963 assert(r1 == 0 || r2 == 0);
964
965 Basically, the read barrier always has to be there, even though it can be of
966 the "weaker" type.
967
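In terms of the actual Linux primitives, the first pairing above might be
written as follows (a sketch; 'a' and 'b' are illustrative and initially
zero):

	CPU 1				CPU 2
	===============			===============
	WRITE_ONCE(a, 1);
	smp_wmb();
	WRITE_ONCE(b, 2);		x = READ_ONCE(b);
					smp_rmb();
					y = READ_ONCE(a);

If x turns out to be 2, then y is guaranteed to be 1.
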
968 [!] Note that the stores before the write barrier would normally be expected to
969 match the loads after the read barrier or the data dependency barrier, and vice
970 versa:
971
972 CPU 1 CPU 2
973 =================== ===================
974 WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c);
975 WRITE_ONCE(b, 2); } \ / { w = READ_ONCE(d);
976 <write barrier> \ <read barrier>
977 WRITE_ONCE(c, 3); } / \ { x = READ_ONCE(a);
978 WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b);
979
980
981 EXAMPLES OF MEMORY BARRIER SEQUENCES
982 ------------------------------------
983
984 Firstly, write barriers act as partial orderings on store operations.
985 Consider the following sequence of events:
986
987 CPU 1
988 =======================
989 STORE A = 1
990 STORE B = 2
991 STORE C = 3
992 <write barrier>
993 STORE D = 4
994 STORE E = 5
995
996 This sequence of events is committed to the memory coherence system in an order
997 that the rest of the system might perceive as the unordered set of { STORE A,
998 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
999 }:
1000
1001 +-------+ : :
1002 | | +------+
1003 | |------>| C=3 | } /\
1004 | | : +------+ }----- \ -----> Events perceptible to
1005 | | : | A=1 | } \/ the rest of the system
1006 | | : +------+ }
1007 | CPU 1 | : | B=2 | }
1008 | | +------+ }
1009 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
1010 | | +------+ } requires all stores prior to the
1011 | | : | E=5 | } barrier to be committed before
1012 | | : +------+ } further stores may take place
1013 | |------>| D=4 | }
1014 | | +------+
1015 +-------+ : :
1016 |
1017 | Sequence in which stores are committed to the
1018 | memory system by CPU 1
1019 V
1020
1021
1022 Secondly, data dependency barriers act as partial orderings on data-dependent
1023 loads. Consider the following sequence of events:
1024
1025 CPU 1 CPU 2
1026 ======================= =======================
1027 { B = 7; X = 9; Y = 8; C = &Y }
1028 STORE A = 1
1029 STORE B = 2
1030 <write barrier>
1031 STORE C = &B LOAD X
1032 STORE D = 4 LOAD C (gets &B)
1033 LOAD *C (reads B)
1034
1035 Without intervention, CPU 2 may perceive the events on CPU 1 in some
1036 effectively random order, despite the write barrier issued by CPU 1:
1037
1038 +-------+ : : : :
1039 | | +------+ +-------+ | Sequence of update
1040 | |------>| B=2 |----- --->| Y->8 | | of perception on
1041 | | : +------+ \ +-------+ | CPU 2
1042 | CPU 1 | : | A=1 | \ --->| C->&Y | V
1043 | | +------+ | +-------+
1044 | | wwwwwwwwwwwwwwww | : :
1045 | | +------+ | : :
1046 | | : | C=&B |--- | : : +-------+
1047 | | : +------+ \ | +-------+ | |
1048 | |------>| D=4 | ----------->| C->&B |------>| |
1049 | | +------+ | +-------+ | |
1050 +-------+ : : | : : | |
1051 | : : | |
1052 | : : | CPU 2 |
1053 | +-------+ | |
1054 Apparently incorrect ---> | | B->7 |------>| |
1055 perception of B (!) | +-------+ | |
1056 | : : | |
1057 | +-------+ | |
1058 The load of X holds ---> \ | X->9 |------>| |
1059 up the maintenance \ +-------+ | |
1060 of coherence of B ----->| B->2 | +-------+
1061 +-------+
1062 : :
1063
1064
1065 In the above example, CPU 2 perceives that B is 7, despite the load of *C
1066 (which would be B) coming after the LOAD of C.
1067
1068 If, however, a data dependency barrier were to be placed between the load of C
1069 and the load of *C (ie: B) on CPU 2:
1070
1071 CPU 1 CPU 2
1072 ======================= =======================
1073 { B = 7; X = 9; Y = 8; C = &Y }
1074 STORE A = 1
1075 STORE B = 2
1076 <write barrier>
1077 STORE C = &B LOAD X
1078 STORE D = 4 LOAD C (gets &B)
1079 <data dependency barrier>
1080 LOAD *C (reads B)
1081
1082 then the following will occur:
1083
1084 +-------+ : : : :
1085 | | +------+ +-------+
1086 | |------>| B=2 |----- --->| Y->8 |
1087 | | : +------+ \ +-------+
1088 | CPU 1 | : | A=1 | \ --->| C->&Y |
1089 | | +------+ | +-------+
1090 | | wwwwwwwwwwwwwwww | : :
1091 | | +------+ | : :
1092 | | : | C=&B |--- | : : +-------+
1093 | | : +------+ \ | +-------+ | |
1094 | |------>| D=4 | ----------->| C->&B |------>| |
1095 | | +------+ | +-------+ | |
1096 +-------+ : : | : : | |
1097 | : : | |
1098 | : : | CPU 2 |
1099 | +-------+ | |
1100 | | X->9 |------>| |
1101 | +-------+ | |
1102 Makes sure all effects ---> \ ddddddddddddddddd | |
1103 prior to the store of C \ +-------+ | |
1104 are perceptible to ----->| B->2 |------>| |
1105 subsequent loads +-------+ | |
1106 : : +-------+
1107
1108
1109 And thirdly, a read barrier acts as a partial order on loads. Consider the
1110 following sequence of events:
1111
1112 CPU 1 CPU 2
1113 ======================= =======================
1114 { A = 0, B = 9 }
1115 STORE A=1
1116 <write barrier>
1117 STORE B=2
1118 LOAD B
1119 LOAD A
1120
1121 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1122 some effectively random order, despite the write barrier issued by CPU 1:
1123
1124 +-------+ : : : :
1125 | | +------+ +-------+
1126 | |------>| A=1 |------ --->| A->0 |
1127 | | +------+ \ +-------+
1128 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1129 | | +------+ | +-------+
1130 | |------>| B=2 |--- | : :
1131 | | +------+ \ | : : +-------+
1132 +-------+ : : \ | +-------+ | |
1133 ---------->| B->2 |------>| |
1134 | +-------+ | CPU 2 |
1135 | | A->0 |------>| |
1136 | +-------+ | |
1137 | : : +-------+
1138 \ : :
1139 \ +-------+
1140 ---->| A->1 |
1141 +-------+
1142 : :
1143
1144
1145 If, however, a read barrier were to be placed between the load of B and the
1146 load of A on CPU 2:
1147
1148 CPU 1 CPU 2
1149 ======================= =======================
1150 { A = 0, B = 9 }
1151 STORE A=1
1152 <write barrier>
1153 STORE B=2
1154 LOAD B
1155 <read barrier>
1156 LOAD A
1157
1158 then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
1159 2:
1160
1161 +-------+ : : : :
1162 | | +------+ +-------+
1163 | |------>| A=1 |------ --->| A->0 |
1164 | | +------+ \ +-------+
1165 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1166 | | +------+ | +-------+
1167 | |------>| B=2 |--- | : :
1168 | | +------+ \ | : : +-------+
1169 +-------+ : : \ | +-------+ | |
1170 ---------->| B->2 |------>| |
1171 | +-------+ | CPU 2 |
1172 | : : | |
1173 | : : | |
1174 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1175 barrier causes all effects \ +-------+ | |
1176 prior to the storage of B ---->| A->1 |------>| |
1177 to be perceptible to CPU 2 +-------+ | |
1178 : : +-------+
1179
1180
1181 To illustrate this more completely, consider what could happen if the code
1182 contained a load of A either side of the read barrier:
1183
1184 CPU 1 CPU 2
1185 ======================= =======================
1186 { A = 0, B = 9 }
1187 STORE A=1
1188 <write barrier>
1189 STORE B=2
1190 LOAD B
1191 LOAD A [first load of A]
1192 <read barrier>
1193 LOAD A [second load of A]
1194
1195 Even though the two loads of A both occur after the load of B, they may both
1196 come up with different values:
1197
1198 +-------+ : : : :
1199 | | +------+ +-------+
1200 | |------>| A=1 |------ --->| A->0 |
1201 | | +------+ \ +-------+
1202 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1203 | | +------+ | +-------+
1204 | |------>| B=2 |--- | : :
1205 | | +------+ \ | : : +-------+
1206 +-------+ : : \ | +-------+ | |
1207 ---------->| B->2 |------>| |
1208 | +-------+ | CPU 2 |
1209 | : : | |
1210 | : : | |
1211 | +-------+ | |
1212 | | A->0 |------>| 1st |
1213 | +-------+ | |
1214 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1215 barrier causes all effects \ +-------+ | |
1216 prior to the storage of B ---->| A->1 |------>| 2nd |
1217 to be perceptible to CPU 2 +-------+ | |
1218 : : +-------+
1219
1220
1221 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1222 before the read barrier completes anyway:
1223
1224 +-------+ : : : :
1225 | | +------+ +-------+
1226 | |------>| A=1 |------ --->| A->0 |
1227 | | +------+ \ +-------+
1228 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1229 | | +------+ | +-------+
1230 | |------>| B=2 |--- | : :
1231 | | +------+ \ | : : +-------+
1232 +-------+ : : \ | +-------+ | |
1233 ---------->| B->2 |------>| |
1234 | +-------+ | CPU 2 |
1235 | : : | |
1236 \ : : | |
1237 \ +-------+ | |
1238 ---->| A->1 |------>| 1st |
1239 +-------+ | |
1240 rrrrrrrrrrrrrrrrr | |
1241 +-------+ | |
1242 | A->1 |------>| 2nd |
1243 +-------+ | |
1244 : : +-------+
1245
1246
1247 The guarantee is that the second load will always come up with A == 1 if the
1248 load of B came up with B == 2. No such guarantee exists for the first load of
1249 A; that may come up with either A == 0 or A == 1.
1250
1251
1252 READ MEMORY BARRIERS VS LOAD SPECULATION
1253 ----------------------------------------
1254
1255 Many CPUs speculate with loads: that is they see that they will need to load an
1256 item from memory, and they find a time where they're not using the bus for any
1257 other loads, and so do the load in advance - even though they haven't actually
1258 got to that point in the instruction execution flow yet. This permits the
1259 actual load instruction to potentially complete immediately because the CPU
1260 already has the value to hand.
1261
1262 It may turn out that the CPU didn't actually need the value - perhaps because a
1263 branch circumvented the load - in which case it can discard the value or just
1264 cache it for later use.
1265
1266 Consider:
1267
1268 CPU 1 CPU 2
1269 ======================= =======================
1270 LOAD B
1271 DIVIDE } Divide instructions generally
1272 DIVIDE } take a long time to perform
1273 LOAD A
1274
1275 Which might appear as this:
1276
1277 : : +-------+
1278 +-------+ | |
1279 --->| B->2 |------>| |
1280 +-------+ | CPU 2 |
1281 : :DIVIDE | |
1282 +-------+ | |
1283 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1284 division speculates on the +-------+ ~ | |
1285 LOAD of A : : ~ | |
1286 : :DIVIDE | |
1287 : : ~ | |
1288 Once the divisions are complete --> : : ~-->| |
1289 the CPU can then perform the : : | |
1290 LOAD with immediate effect : : +-------+
1291
1292
1293 Placing a read barrier or a data dependency barrier just before the second
1294 load:
1295
1296 CPU 1 CPU 2
1297 ======================= =======================
1298 LOAD B
1299 DIVIDE
1300 DIVIDE
1301 <read barrier>
1302 LOAD A
1303
1304 will force any value speculatively obtained to be reconsidered to an extent
1305 dependent on the type of barrier used. If there was no change made to the
1306 speculated memory location, then the speculated value will just be used:
1307
1308 : : +-------+
1309 +-------+ | |
1310 --->| B->2 |------>| |
1311 +-------+ | CPU 2 |
1312 : :DIVIDE | |
1313 +-------+ | |
1314 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1315 division speculates on the +-------+ ~ | |
1316 LOAD of A : : ~ | |
1317 : :DIVIDE | |
1318 : : ~ | |
1319 : : ~ | |
1320 rrrrrrrrrrrrrrrr~ | |
1321 : : ~ | |
1322 : : ~-->| |
1323 : : | |
1324 : : +-------+
1325
1326
1327 but if there was an update or an invalidation from another CPU pending, then
1328 the speculation will be cancelled and the value reloaded:
1329
1330 : : +-------+
1331 +-------+ | |
1332 --->| B->2 |------>| |
1333 +-------+ | CPU 2 |
1334 : :DIVIDE | |
1335 +-------+ | |
1336 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1337 division speculates on the +-------+ ~ | |
1338 LOAD of A : : ~ | |
1339 : :DIVIDE | |
1340 : : ~ | |
1341 : : ~ | |
1342 rrrrrrrrrrrrrrrrr | |
1343 +-------+ | |
1344 The speculation is discarded ---> --->| A->1 |------>| |
1345 and an updated value is +-------+ | |
1346 retrieved : : +-------+
1347
1348
1349 TRANSITIVITY
1350 ------------
1351
1352 Transitivity is a deeply intuitive notion about ordering that is not
1353 always provided by real computer systems. The following example
1354 demonstrates transitivity:
1355
1356 CPU 1 CPU 2 CPU 3
1357 ======================= ======================= =======================
1358 { X = 0, Y = 0 }
1359 STORE X=1 LOAD X STORE Y=1
1360 <general barrier> <general barrier>
1361 LOAD Y LOAD X
1362
1363 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1364 This indicates that CPU 2's load from X in some sense follows CPU 1's
1365 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1366 store to Y. The question is then "Can CPU 3's load from X return 0?"
1367
1368 Because CPU 2's load from X in some sense came after CPU 1's store, it
1369 is natural to expect that CPU 3's load from X must therefore return 1.
1370 This expectation is an example of transitivity: if a load executing on
1371 CPU A follows a load from the same variable executing on CPU B, then
1372 CPU A's load must either return the same value that CPU B's load did,
1373 or must return some later value.
1374
1375 In the Linux kernel, use of general memory barriers guarantees
1376 transitivity. Therefore, in the above example, if CPU 2's load from X
1377 returns 1 and its load from Y returns 0, then CPU 3's load from X must
1378 also return 1.
1379
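Written with the actual kernel primitives (a sketch; X and Y are
illustrative and initially zero), the example above becomes:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
	WRITE_ONCE(X, 1);	r1 = READ_ONCE(X);	WRITE_ONCE(Y, 1);
				smp_mb();		smp_mb();
				r2 = READ_ONCE(Y);	r3 = READ_ONCE(X);

Here, r1 == 1 && r2 == 0 guarantees that r3 == 1.
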
1380 However, transitivity is -not- guaranteed for read or write barriers.
1381 For example, suppose that CPU 2's general barrier in the above example
1382 is changed to a read barrier as shown below:
1383
1384 CPU 1 CPU 2 CPU 3
1385 ======================= ======================= =======================
1386 { X = 0, Y = 0 }
1387 STORE X=1 LOAD X STORE Y=1
1388 <read barrier> <general barrier>
1389 LOAD Y LOAD X
1390
1391 This substitution destroys transitivity: in this example, it is perfectly
1392 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1393 and CPU 3's load from X to return 0.
1394
1395 The key point is that although CPU 2's read barrier orders its pair
1396 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1397 this example runs on a system where CPUs 1 and 2 share a store buffer
1398 or a level of cache, CPU 2 might have early access to CPU 1's writes.
1399 General barriers are therefore required to ensure that all CPUs agree
1400 on the combined order of CPU 1's and CPU 2's accesses.
1401
1402 General barriers provide "global transitivity", so that all CPUs will
1403 agree on the order of operations. In contrast, a chain of release-acquire
1404 pairs provides only "local transitivity", so that only those CPUs on
1405 the chain are guaranteed to agree on the combined order of the accesses.
1406 For example, switching to C code in deference to Herman Hollerith:
1407
1408 int u, v, x, y, z;
1409
1410 void cpu0(void)
1411 {
1412 r0 = smp_load_acquire(&x);
1413 WRITE_ONCE(u, 1);
1414 smp_store_release(&y, 1);
1415 }
1416
1417 void cpu1(void)
1418 {
1419 r1 = smp_load_acquire(&y);
1420 r4 = READ_ONCE(v);
1421 r5 = READ_ONCE(u);
1422 smp_store_release(&z, 1);
1423 }
1424
1425 void cpu2(void)
1426 {
1427 r2 = smp_load_acquire(&z);
1428 smp_store_release(&x, 1);
1429 }
1430
1431 void cpu3(void)
1432 {
1433 WRITE_ONCE(v, 1);
1434 smp_mb();
1435 r3 = READ_ONCE(u);
1436 }
1437
1438 Because cpu0(), cpu1(), and cpu2() participate in a local transitive
1439 chain of smp_store_release()/smp_load_acquire() pairs, the following
1440 outcome is prohibited:
1441
1442 r0 == 1 && r1 == 1 && r2 == 1
1443
1444 Furthermore, because of the release-acquire relationship between cpu0()
1445 and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1446 outcome is prohibited:
1447
1448 r1 == 1 && r5 == 0
1449
1450 However, the transitivity of release-acquire is local to the participating
1451 CPUs and does not apply to cpu3(). Therefore, the following outcome
1452 is possible:
1453
1454 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1455
1456 As an aside, the following outcome is also possible:
1457
1458 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1459
1460 Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1461 writes in order, CPUs not involved in the release-acquire chain might
1462 well disagree on the order. This disagreement stems from the fact that
1463 the weak memory-barrier instructions used to implement smp_load_acquire()
1464 and smp_store_release() are not required to order prior stores against
1465 subsequent loads in all cases. This means that cpu3() can see cpu0()'s
1466 store to u as happening -after- cpu1()'s load from v, even though
1467 both cpu0() and cpu1() agree that these two operations occurred in the
1468 intended order.
1469
1470 However, please keep in mind that smp_load_acquire() is not magic.
1471 In particular, it simply reads from its argument with ordering. It does
1472 -not- ensure that any particular value will be read. Therefore, the
1473 following outcome is possible:
1474
1475 r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1476
1477 Note that this outcome can happen even on a mythical sequentially
1478 consistent system where nothing is ever reordered.
1479
1480 To reiterate, if your code requires global transitivity, use general
1481 barriers throughout.
1482
1483
1484 ========================
1485 EXPLICIT KERNEL BARRIERS
1486 ========================
1487
1488 The Linux kernel has a variety of different barriers that act at different
1489 levels:
1490
1491 (*) Compiler barrier.
1492
1493 (*) CPU memory barriers.
1494
1495 (*) MMIO write barrier.
1496
1497
1498 COMPILER BARRIER
1499 ----------------
1500
1501 The Linux kernel has an explicit compiler barrier function that prevents the
1502 compiler from moving the memory accesses either side of it to the other side:
1503
1504 barrier();
1505
1506 This is a general barrier -- there are no read-read or write-write
1507 variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
1508 thought of as weak forms of barrier() that affect only the specific
1509 accesses flagged by the READ_ONCE() or WRITE_ONCE().
1510
1511 The barrier() function has the following effects:
1512
1513 (*) Prevents the compiler from reordering accesses following the
1514 barrier() to precede any accesses preceding the barrier().
1515 One example use for this property is to ease communication between
1516 interrupt-handler code and the code that was interrupted.
1517
1518 (*) Within a loop, forces the compiler to load the variables used
1519 in that loop's conditional on each pass through that loop.
1520
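For example (a sketch in which 'flag' is an illustrative shared variable),
the second property is what forces the following busy-wait loop to re-read
'flag' on each iteration rather than being collapsed into a single test:

	while (!flag)
		barrier();

Without the barrier() (or a READ_ONCE() on 'flag'), the compiler would be
within its rights to hoist the load of 'flag' out of the loop.
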
1521 The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1522 optimizations that, while perfectly safe in single-threaded code, can
1523 be fatal in concurrent code. Here are some examples of these sorts
1524 of optimizations:
1525
1526 (*) The compiler is within its rights to reorder loads and stores
1527 to the same variable, and in some cases, the CPU is within its
1528 rights to reorder loads to the same variable. This means that
1529 the following code:
1530
1531 a[0] = x;
1532 a[1] = x;
1533
1534 Might result in an older value of x stored in a[1] than in a[0].
1535 Prevent both the compiler and the CPU from doing this as follows:
1536
1537 a[0] = READ_ONCE(x);
1538 a[1] = READ_ONCE(x);
1539
1540 In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1541 accesses from multiple CPUs to a single variable.
1542
1543 (*) The compiler is within its rights to merge successive loads from
1544 the same variable. Such merging can cause the compiler to "optimize"
1545 the following code:
1546
1547 while (tmp = a)
1548 do_something_with(tmp);
1549
1550 into the following code, which, although in some sense legitimate
1551 for single-threaded code, is almost certainly not what the developer
1552 intended:
1553
1554 if (tmp = a)
1555 for (;;)
1556 do_something_with(tmp);
1557
1558 Use READ_ONCE() to prevent the compiler from doing this to you:
1559
1560 while (tmp = READ_ONCE(a))
1561 do_something_with(tmp);
1562
1563 (*) The compiler is within its rights to reload a variable, for example,
1564 in cases where high register pressure prevents the compiler from
1565 keeping all data of interest in registers. The compiler might
1566 therefore optimize the variable 'tmp' out of our previous example:
1567
1568 while (tmp = a)
1569 do_something_with(tmp);
1570
1571 This could result in the following code, which is perfectly safe in
1572 single-threaded code, but can be fatal in concurrent code:
1573
1574 while (a)
1575 do_something_with(a);
1576
1577 For example, the optimized version of this code could result in
1578 passing a zero to do_something_with() in the case where the variable
1579 a was modified by some other CPU between the "while" statement and
1580 the call to do_something_with().
1581
1582 Again, use READ_ONCE() to prevent the compiler from doing this:
1583
1584 while (tmp = READ_ONCE(a))
1585 do_something_with(tmp);
1586
1587 Note that if the compiler runs short of registers, it might save
1588 tmp onto the stack. The overhead of this saving and later restoring
1589 is why compilers reload variables. Doing so is perfectly safe for
1590 single-threaded code, so you need to tell the compiler about cases
1591 where it is not safe.
1592
1593 (*) The compiler is within its rights to omit a load entirely if it knows
1594 what the value will be. For example, if the compiler can prove that
1595 the value of variable 'a' is always zero, it can optimize this code:
1596
1597 while (tmp = a)
1598 do_something_with(tmp);
1599
1600 Into this:
1601
1602 do { } while (0);
1603
1604 This transformation is a win for single-threaded code because it
1605 gets rid of a load and a branch. The problem is that the compiler
1606 will carry out its proof assuming that the current CPU is the only
1607 one updating variable 'a'. If variable 'a' is shared, then the
1608 compiler's proof will be erroneous. Use READ_ONCE() to tell the
1609 compiler that it doesn't know as much as it thinks it does:
1610
1611 while (tmp = READ_ONCE(a))
1612 do_something_with(tmp);
1613
1614 But please note that the compiler is also closely watching what you
1615 do with the value after the READ_ONCE(). For example, suppose you
1616 do the following and MAX is a preprocessor macro with the value 1:
1617
1618 while ((tmp = READ_ONCE(a)) % MAX)
1619 do_something_with(tmp);
1620
1621 Then the compiler knows that the result of the "%" operator applied
1622 to MAX will always be zero, again allowing the compiler to optimize
1623 the code into near-nonexistence. (It will still load from the
1624 variable 'a'.)
1625
1626 (*) Similarly, the compiler is within its rights to omit a store entirely
1627 if it knows that the variable already has the value being stored.
1628 Again, the compiler assumes that the current CPU is the only one
1629 storing into the variable, which can cause the compiler to do the
1630 wrong thing for shared variables. For example, suppose you have
1631 the following:
1632
1633 a = 0;
1634 ... Code that does not store to variable a ...
1635 a = 0;
1636
1637 The compiler sees that the value of variable 'a' is already zero, so
1638 it might well omit the second store. This would come as a fatal
1639 surprise if some other CPU might have stored to variable 'a' in the
1640 meantime.
1641
1642 Use WRITE_ONCE() to prevent the compiler from making this sort of
1643 wrong guess:
1644
1645 WRITE_ONCE(a, 0);
1646 ... Code that does not store to variable a ...
1647 WRITE_ONCE(a, 0);
1648
1649 (*) The compiler is within its rights to reorder memory accesses unless
1650 you tell it not to. For example, consider the following interaction
1651 between process-level code and an interrupt handler:
1652
1653 void process_level(void)
1654 {
1655 msg = get_message();
1656 flag = true;
1657 }
1658
1659 void interrupt_handler(void)
1660 {
1661 if (flag)
1662 process_message(msg);
1663 }
1664
1665 There is nothing to prevent the compiler from transforming
1666 process_level() to the following; in fact, this might well be a
1667 win for single-threaded code:
1668
1669 void process_level(void)
1670 {
1671 flag = true;
1672 msg = get_message();
1673 }
1674
1675 If the interrupt occurs between these two statements, then
1676 interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE()
1677 to prevent this as follows:
1678
1679 void process_level(void)
1680 {
1681 WRITE_ONCE(msg, get_message());
1682 WRITE_ONCE(flag, true);
1683 }
1684
1685 void interrupt_handler(void)
1686 {
1687 if (READ_ONCE(flag))
1688 process_message(READ_ONCE(msg));
1689 }
1690
1691 Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1692 interrupt_handler() are needed if this interrupt handler can itself
1693 be interrupted by something that also accesses 'flag' and 'msg',
1694 for example, a nested interrupt or an NMI. Otherwise, READ_ONCE()
1695 and WRITE_ONCE() are not needed in interrupt_handler() other than
1696 for documentation purposes. (Note also that nested interrupts
1697 do not typically occur in modern Linux kernels; in fact, if an
1698 interrupt handler returns with interrupts enabled, you will get a
1699 WARN_ONCE() splat.)
1700
1701 You should assume that the compiler can move READ_ONCE() and
1702 WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1703 barrier(), or similar primitives.
1704
1705 This effect could also be achieved using barrier(), but READ_ONCE()
1706 and WRITE_ONCE() are more selective: With READ_ONCE() and
1707 WRITE_ONCE(), the compiler need only forget the contents of the
1708 indicated memory locations, while with barrier() the compiler must
1709 discard the value of all memory locations that it has currently
1710 cached in any machine registers. Of course, the compiler must also
1711 respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1712 though the CPU of course need not do so.
1713
1714 (*) The compiler is within its rights to invent stores to a variable,
1715 as in the following example:
1716
1717 if (a)
1718 b = a;
1719 else
1720 b = 42;
1721
1722 The compiler might save a branch by optimizing this as follows:
1723
1724 b = 42;
1725 if (a)
1726 b = a;
1727
1728 In single-threaded code, this is not only safe, but also saves
1729 a branch. Unfortunately, in concurrent code, this optimization
1730 could cause some other CPU to see a spurious value of 42 -- even
1731 if variable 'a' was never zero -- when loading variable 'b'.
1732 Use WRITE_ONCE() to prevent this as follows:
1733
1734 if (a)
1735 WRITE_ONCE(b, a);
1736 else
1737 WRITE_ONCE(b, 42);
1738
1739 The compiler can also invent loads. These are usually less
1740 damaging, but they can result in cache-line bouncing and thus in
1741 poor performance and scalability. Use READ_ONCE() to prevent
1742 invented loads.
1743
1744 (*) For aligned memory locations whose size allows them to be accessed
1745 with a single memory-reference instruction, READ_ONCE() and WRITE_ONCE() prevent
1746 "load tearing" and "store tearing," in which a single large access is replaced by
1747 multiple smaller accesses. For example, given an architecture having
1748 16-bit store instructions with 7-bit immediate fields, the compiler
1749 might be tempted to use two 16-bit store-immediate instructions to
1750 implement the following 32-bit store:
1751
1752 p = 0x00010002;
1753
1754 Please note that GCC really does use this sort of optimization,
1755 which is not surprising given that it would likely take more
1756 than two instructions to build the constant and then store it.
1757 This optimization can therefore be a win in single-threaded code.
1758 In fact, a recent bug (since fixed) caused GCC to incorrectly use
1759 this optimization in a volatile store. In the absence of such bugs,
1760 use of WRITE_ONCE() prevents store tearing in the following example:
1761
1762 WRITE_ONCE(p, 0x00010002);
1763
1764 Use of packed structures can also result in load and store tearing,
1765 as in this example:
1766
1767 struct __attribute__((__packed__)) foo {
1768 short a;
1769 int b;
1770 short c;
1771 };
1772 struct foo foo1, foo2;
1773 ...
1774
1775 foo2.a = foo1.a;
1776 foo2.b = foo1.b;
1777 foo2.c = foo1.c;
1778
1779 Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1780 volatile markings, the compiler would be well within its rights to
1781 implement these three assignment statements as a pair of 32-bit
1782 loads followed by a pair of 32-bit stores. This would result in
1783 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
1784 and WRITE_ONCE() again prevent tearing in this example:
1785
1786 foo2.a = foo1.a;
1787 WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1788 foo2.c = foo1.c;
1789
1790 All that aside, it is never necessary to use READ_ONCE() and
1791 WRITE_ONCE() on a variable that has been marked volatile. For example,
1792 because 'jiffies' is marked volatile, it is never necessary to
1793 say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and
1794 WRITE_ONCE() are implemented as volatile casts, which have no effect when
1795 their argument is already marked volatile.
1796
1797 Please note that these compiler barriers have no direct effect on the CPU,
1798 which may then reorder things however it wishes.
1799
1800
1801 CPU MEMORY BARRIERS
1802 -------------------
1803
1804 The Linux kernel has eight basic CPU memory barriers:
1805
1806 TYPE MANDATORY SMP CONDITIONAL
1807 =============== ======================= ===========================
1808 GENERAL mb() smp_mb()
1809 WRITE wmb() smp_wmb()
1810 READ rmb() smp_rmb()
1811 DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends()
1812
1813
1814 All memory barriers except the data dependency barriers imply a compiler
1815 barrier. Data dependencies do not impose any additional compiler ordering.
1816
1817 Aside: In the case of data dependencies, the compiler would be expected
1818 to issue the loads in the correct order (eg. `a[b]` would have to load
1819 the value of b before loading a[b]); however, the C specification does not
1820 guarantee that the compiler will not speculate the value of b (eg. guess
1821 that it is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
1822 tmp = a[b]; ). There is also the problem of a compiler reloading b after
1823 having loaded a[b], thus having a newer copy of b than a[b]. A consensus
1824 has not yet been reached about these problems, however the READ_ONCE()
1825 macro is a good place to start looking.
1826
1827 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1828 systems because it is assumed that a CPU will appear to be self-consistent,
1829 and will order overlapping accesses correctly with respect to itself.
1830 However, see the subsection on "Virtual Machine Guests" below.
1831
1832 [!] Note that SMP memory barriers _must_ be used to control the ordering of
1833 references to shared memory on SMP systems, though the use of locking instead
1834 is sufficient.
1835
1836 Mandatory barriers should not be used to control SMP effects, since mandatory
1837 barriers impose unnecessary overhead on both SMP and UP systems. They may,
1838 however, be used to control MMIO effects on accesses through relaxed memory I/O
1839 windows. These barriers are required even on non-SMP systems as they affect
1840 the order in which memory operations appear to a device by prohibiting both the
1841 compiler and the CPU from reordering them.
1842
1843
1844 There are some more advanced barrier functions:
1845
1846 (*) smp_store_mb(var, value)
1847
1848 This assigns the value to the variable and then inserts a full memory
1849 barrier after it. It isn't guaranteed to insert anything more than a
1850 compiler barrier in a UP compilation.
1851
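As a hedged sketch (the flag names are made up), the store-then-load
pattern it is intended for looks like this:

	smp_store_mb(this_flag, 1);	/* STORE this_flag, then full barrier */
	if (READ_ONCE(other_flag))	/* cannot be hoisted before the STORE */
		do_something();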
1852
1853 (*) smp_mb__before_atomic();
1854 (*) smp_mb__after_atomic();
1855
1856 These are for use with atomic (such as add, subtract, increment and
1857 decrement) functions that don't return a value, especially when used for
1858 reference counting. These functions do not imply memory barriers.
1859
1860 These are also used for atomic bitop functions that do not return a
1861 value (such as set_bit and clear_bit).
1862
1863 As an example, consider a piece of code that marks an object as being dead
1864 and then decrements the object's reference count:
1865
1866 obj->dead = 1;
1867 smp_mb__before_atomic();
1868 atomic_dec(&obj->ref_count);
1869
1870 This makes sure that the death mark on the object is perceived to be set
1871 *before* the reference counter is decremented.
1872
1873 See Documentation/atomic_ops.txt for more information. See the "Atomic
1874 operations" subsection for information on where to use these.
1875
1876
1877 (*) lockless_dereference();
1878
1879 This can be thought of as a pointer-fetch wrapper around the
1880 smp_read_barrier_depends() data-dependency barrier.
1881
1882 This is also similar to rcu_dereference(), but in cases where
1883 object lifetime is handled by some mechanism other than RCU, for
1884 example, when the objects are removed only when the system goes down.
1885 In addition, lockless_dereference() is used in some data structures
1886 that can be used both with and without RCU.
1887
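As a hedged illustration, where the global pointer 'gp' and its target are
made-up names and the pointed-to object is assumed never to be freed while
readers can still see it:

	p = lockless_dereference(gp);
	if (p)
		do_something_with(p->a);

The data-dependency barrier hidden inside lockless_dereference() ensures
that the load of p->a cannot see data that is older than the pointer
itself.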
1888
1889 (*) dma_wmb();
1890 (*) dma_rmb();
1891
1892 These are for use with consistent memory to guarantee the ordering
1893 of writes or reads of shared memory accessible to both the CPU and a
1894 DMA capable device.
1895
1896 For example, consider a device driver that shares memory with a device
1897 and uses a descriptor status value to indicate if the descriptor belongs
1898 to the device or the CPU, and a doorbell to notify it when new
1899 descriptors are available:
1900
1901 if (desc->status != DEVICE_OWN) {
1902 /* do not read data until we own descriptor */
1903 dma_rmb();
1904
1905 /* read/modify data */
1906 read_data = desc->data;
1907 desc->data = write_data;
1908
1909 /* flush modifications before status update */
1910 dma_wmb();
1911
1912 /* assign ownership */
1913 desc->status = DEVICE_OWN;
1914
1915 /* force memory to sync before notifying device via MMIO */
1916 wmb();
1917
1918 /* notify device of new descriptors */
1919 writel(DESC_NOTIFY, doorbell);
1920 }
1921
1922 The dma_rmb() allows us to guarantee the device has released ownership
1923 before we read the data from the descriptor, and the dma_wmb() allows
1924 us to guarantee the data is written to the descriptor before the device
1925 can see it now has ownership. The wmb() is needed to guarantee that the
1926 cache coherent memory writes have completed before attempting a write to
1927 the cache incoherent MMIO region.
1928
1929 See Documentation/DMA-API.txt for more information on consistent memory.
1930
1931 MMIO WRITE BARRIER
1932 ------------------
1933
1934 The Linux kernel also has a special barrier for use with memory-mapped I/O
1935 writes:
1936
1937 mmiowb();
1938
1939 This is a variation on the mandatory write barrier that causes writes to weakly
1940 ordered I/O regions to be partially ordered. Its effects may go beyond the
1941 CPU->Hardware interface and actually affect the hardware at some level.
1942
1943 See the subsection "Acquires vs I/O accesses" for more information.
1944
1945
1946 ===============================
1947 IMPLICIT KERNEL MEMORY BARRIERS
1948 ===============================
1949
1950 Some of the other functions in the Linux kernel imply memory barriers, amongst
1951 which are locking and scheduling functions.
1952
1953 This specification is a _minimum_ guarantee; any particular architecture may
1954 provide more substantial guarantees, but these may not be relied upon outside
1955 of arch specific code.
1956
1957
1958 LOCK ACQUISITION FUNCTIONS
1959 --------------------------
1960
1961 The Linux kernel has a number of locking constructs:
1962
1963 (*) spin locks
1964 (*) R/W spin locks
1965 (*) mutexes
1966 (*) semaphores
1967 (*) R/W semaphores
1968
1969 In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1970 for each construct. These operations all imply certain barriers:
1971
1972 (1) ACQUIRE operation implication:
1973
1974 Memory operations issued after the ACQUIRE will be completed after the
1975 ACQUIRE operation has completed.
1976
1977 Memory operations issued before the ACQUIRE may be completed after
1978 the ACQUIRE operation has completed. An smp_mb__before_spinlock(),
1979 combined with a following ACQUIRE, orders prior stores against
1980 subsequent loads and stores. Note that this is weaker than smp_mb()!
1981 The smp_mb__before_spinlock() primitive is free on many architectures; a
sketch of its use follows this list.
1982
1983 (2) RELEASE operation implication:
1984
1985 Memory operations issued before the RELEASE will be completed before the
1986 RELEASE operation has completed.
1987
1988 Memory operations issued after the RELEASE may be completed before the
1989 RELEASE operation has completed.
1990
1991 (3) ACQUIRE vs ACQUIRE implication:
1992
1993 All ACQUIRE operations issued before another ACQUIRE operation will be
1994 completed before that ACQUIRE operation.
1995
1996 (4) ACQUIRE vs RELEASE implication:
1997
1998 All ACQUIRE operations issued before a RELEASE operation will be
1999 completed before the RELEASE operation.
2000
2001 (5) Failed conditional ACQUIRE implication:
2002
2003 Certain locking variants of the ACQUIRE operation may fail, either due to
2004 being unable to get the lock immediately, or due to receiving an unblocked
2005 signal whilst asleep waiting for the lock to become available. Failed
2006 locks do not imply any sort of barrier.
2007
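As an illustration of the smp_mb__before_spinlock() case mentioned in (1)
above, consider the following hedged sketch, in which 'x', 'y', 'z' and
'lock' are made-up names:

	WRITE_ONCE(x, 1);
	smp_mb__before_spinlock();
	spin_lock(&lock);		/* ACQUIRE */
	r1 = READ_ONCE(y);		/* ordered after the store to x */
	WRITE_ONCE(z, 1);		/* likewise ordered after the store to x */
	spin_unlock(&lock);

Without the smp_mb__before_spinlock(), the ACQUIRE alone would permit the
store to x to be reordered into, or even past, the critical section.
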
2008 [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2009 one-way barriers is that the effects of instructions outside of a critical
2010 section may seep into the inside of the critical section.
2011
2012 An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
2013 because it is possible for an access preceding the ACQUIRE to happen after the
2014 ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2015 the two accesses can themselves then cross:
2016
2017 *A = a;
2018 ACQUIRE M
2019 RELEASE M
2020 *B = b;
2021
2022 may occur as:
2023
2024 ACQUIRE M, STORE *B, STORE *A, RELEASE M
2025
2026 When the ACQUIRE and RELEASE are a lock acquisition and release,
2027 respectively, this same reordering can occur if the lock's ACQUIRE and
2028 RELEASE are to the same lock variable, but only from the perspective of
2029 another CPU not holding that lock. In short, an ACQUIRE followed by a
2030 RELEASE may -not- be assumed to be a full memory barrier.
2031
2032 Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2033 not imply a full memory barrier. Therefore, the CPU's execution of the
2034 critical sections corresponding to the RELEASE and the ACQUIRE can cross,
2035 so that:
2036
2037 *A = a;
2038 RELEASE M
2039 ACQUIRE N
2040 *B = b;
2041
2042 could occur as:
2043
2044 ACQUIRE N, STORE *B, STORE *A, RELEASE M
2045
2046 It might appear that this reordering could introduce a deadlock.
2047 However, this cannot happen because if such a deadlock threatened,
2048 the RELEASE would simply complete, thereby avoiding the deadlock.
2049
2050 Why does this work?
2051
2052 One key point is that we are only talking about the CPU doing
2053 the reordering, not the compiler. If the compiler (or, for
2054 that matter, the developer) switched the operations, deadlock
2055 -could- occur.
2056
2057 But suppose the CPU reordered the operations. In this case,
2058 the unlock precedes the lock in the assembly code. The CPU
2059 simply elected to try executing the later lock operation first.
2060 If there is a deadlock, this lock operation will simply spin (or
2061 try to sleep, but more on that later). The CPU will eventually
2062 execute the unlock operation (which preceded the lock operation
2063 in the assembly code), which will unravel the potential deadlock,
2064 allowing the lock operation to succeed.
2065
2066 But what if the lock is a sleeplock? In that case, the code will
2067 try to enter the scheduler, where it will eventually encounter
2068 a memory barrier, which will force the earlier unlock operation
2069 to complete, again unraveling the deadlock. There might be
2070 a sleep-unlock race, but the locking primitive needs to resolve
2071 such races properly in any case.
2072
2073 Locks and semaphores may not provide any guarantee of ordering on UP compiled
2074 systems, and so cannot be counted on in such a situation to actually achieve
2075 anything at all - especially with respect to I/O accesses - unless combined
2076 with interrupt disabling operations.
2077
2078 See also the section on "Inter-CPU locking barrier effects".
2079
2080
2081 As an example, consider the following:
2082
2083 *A = a;
2084 *B = b;
2085 ACQUIRE
2086 *C = c;
2087 *D = d;
2088 RELEASE
2089 *E = e;
2090 *F = f;
2091
2092 The following sequence of events is acceptable:
2093
2094 ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2095
2096 [+] Note that {*F,*A} indicates a combined access.
2097
2098 But none of the following are:
2099
2100 {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
2101 *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
2102 *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
2103 *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E
2104
2105
2106
2107 INTERRUPT DISABLING FUNCTIONS
2108 -----------------------------
2109
2110 Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2111 (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O
2112 barriers are required in such a situation, they must be provided by some
2113 other means.
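
For example (a hedged sketch with made-up variable names), disabling
interrupts does not order these two stores as seen by another CPU; an
explicit SMP barrier is still required:

	local_irq_save(flags);
	WRITE_ONCE(shared_data, val);
	smp_wmb();			/* not implied by the IRQ disabling */
	WRITE_ONCE(data_ready, 1);
	local_irq_restore(flags);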
2114
2115
2116 SLEEP AND WAKE-UP FUNCTIONS
2117 ---------------------------
2118
2119 Sleeping and waking on an event flagged in global data can be viewed as an
2120 interaction between two pieces of data: the task state of the task waiting for
2121 the event and the global data used to indicate the event. To make sure that
2122 these appear to happen in the right order, the primitives to begin the process
2123 of going to sleep, and the primitives to initiate a wake up imply certain
2124 barriers.
2125
2126 Firstly, the sleeper normally follows something like this sequence of events:
2127
2128 for (;;) {
2129 set_current_state(TASK_UNINTERRUPTIBLE);
2130 if (event_indicated)
2131 break;
2132 schedule();
2133 }
2134
2135 A general memory barrier is interpolated automatically by set_current_state()
2136 after it has altered the task state:
2137
2138 CPU 1
2139 ===============================
2140 set_current_state();
2141 smp_store_mb();
2142 STORE current->state
2143 <general barrier>
2144 LOAD event_indicated
2145
2146 set_current_state() may be wrapped by:
2147
2148 prepare_to_wait();
2149 prepare_to_wait_exclusive();
2150
2151 which therefore also imply a general memory barrier after setting the state.
2152 The whole sequence above is available in various canned forms, all of which
2153 interpolate the memory barrier in the right place:
2154
2155 wait_event();
2156 wait_event_interruptible();
2157 wait_event_interruptible_exclusive();
2158 wait_event_interruptible_timeout();
2159 wait_event_killable();
2160 wait_event_timeout();
2161 wait_on_bit();
2162 wait_on_bit_lock();
2163
2164
2165 Secondly, code that performs a wake up normally follows something like this:
2166
2167 event_indicated = 1;
2168 wake_up(&event_wait_queue);
2169
2170 or:
2171
2172 event_indicated = 1;
2173 wake_up_process(event_daemon);
2174
2175 A write memory barrier is implied by wake_up() and co. if and only if they
2176 wake something up. The barrier occurs before the task state is cleared, and so
2177 sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
2178
2179 CPU 1 CPU 2
2180 =============================== ===============================
2181 set_current_state(); STORE event_indicated
2182 smp_store_mb(); wake_up();
2183 STORE current->state <write barrier>
2184 <general barrier> STORE current->state
2185 LOAD event_indicated
2186
2187 To repeat, this write memory barrier is present if and only if something
2188 is actually awakened. To see this, consider the following sequence of
2189 events, where X and Y are both initially zero:
2190
2191 CPU 1 CPU 2
2192 =============================== ===============================
2193 X = 1; STORE event_indicated
2194 smp_mb(); wake_up();
2195 Y = 1; wait_event(wq, Y == 1);
2196 wake_up(); load from Y sees 1, no memory barrier
2197 load from X might see 0
2198
2199 In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
2200 to see 1.
2201
2202 The available waker functions include:
2203
2204 complete();
2205 wake_up();
2206 wake_up_all();
2207 wake_up_bit();
2208 wake_up_interruptible();
2209 wake_up_interruptible_all();
2210 wake_up_interruptible_nr();
2211 wake_up_interruptible_poll();
2212 wake_up_interruptible_sync();
2213 wake_up_interruptible_sync_poll();
2214 wake_up_locked();
2215 wake_up_locked_poll();
2216 wake_up_nr();
2217 wake_up_poll();
2218 wake_up_process();
2219
2220
2221 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
2222 order multiple stores before the wake-up with respect to loads of those stored
2223 values after the sleeper has called set_current_state(). For instance, if the
2224 sleeper does:
2225
2226 set_current_state(TASK_INTERRUPTIBLE);
2227 if (event_indicated)
2228 break;
2229 __set_current_state(TASK_RUNNING);
2230 do_something(my_data);
2231
2232 and the waker does:
2233
2234 my_data = value;
2235 event_indicated = 1;
2236 wake_up(&event_wait_queue);
2237
2238 there's no guarantee that the change to event_indicated will be perceived by
2239 the sleeper as coming after the change to my_data. In such a circumstance, the
2240 code on both sides must interpolate its own memory barriers between the
2241 separate data accesses. Thus the above sleeper ought to do:
2242
2243 set_current_state(TASK_INTERRUPTIBLE);
2244 if (event_indicated) {
2245 smp_rmb();
2246 do_something(my_data);
2247 }
2248
2249 and the waker should do:
2250
2251 my_data = value;
2252 smp_wmb();
2253 event_indicated = 1;
2254 wake_up(&event_wait_queue);
2255
2256
2257 MISCELLANEOUS FUNCTIONS
2258 -----------------------
2259
2260 Other functions that imply barriers:
2261
2262 (*) schedule() and similar imply full memory barriers.
2263
2264
2265 ===================================
2266 INTER-CPU ACQUIRING BARRIER EFFECTS
2267 ===================================
2268
2269 On SMP systems locking primitives give a more substantial form of barrier: one
2270 that does affect memory access ordering on other CPUs, within the context of
2271 conflict on any particular lock.
2272
2273
2274 ACQUIRES VS MEMORY ACCESSES
2275 ---------------------------
2276
2277 Consider the following: the system has a pair of spinlocks (M) and (Q), and
2278 three CPUs; then should the following sequence of events occur:
2279
2280 CPU 1 CPU 2
2281 =============================== ===============================
2282 WRITE_ONCE(*A, a); WRITE_ONCE(*E, e);
2283 ACQUIRE M ACQUIRE Q
2284 WRITE_ONCE(*B, b); WRITE_ONCE(*F, f);
2285 WRITE_ONCE(*C, c); WRITE_ONCE(*G, g);
2286 RELEASE M RELEASE Q
2287 WRITE_ONCE(*D, d); WRITE_ONCE(*H, h);
2288
2289 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2290 through *H occur in, other than the constraints imposed by the separate locks
2291 on the separate CPUs. It might, for example, see:
2292
2293 *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2294
2295 But it won't see any of:
2296
2297 *B, *C or *D preceding ACQUIRE M
2298 *A, *B or *C following RELEASE M
2299 *F, *G or *H preceding ACQUIRE Q
2300 *E, *F or *G following RELEASE Q
2301
2302
2303
2304 ACQUIRES VS I/O ACCESSES
2305 ------------------------
2306
2307 Under certain circumstances (especially involving NUMA), I/O accesses within
2308 two spinlocked sections on two different CPUs may be seen as interleaved by the
2309 PCI bridge, because the PCI bridge does not necessarily participate in the
2310 cache-coherence protocol, and is therefore incapable of issuing the required
2311 read memory barriers.
2312
2313 For example:
2314
2315 CPU 1 CPU 2
2316 =============================== ===============================
2317 spin_lock(Q)
2318 writel(0, ADDR)
2319 writel(1, DATA);
2320 spin_unlock(Q);
2321 spin_lock(Q);
2322 writel(4, ADDR);
2323 writel(5, DATA);
2324 spin_unlock(Q);
2325
2326 may be seen by the PCI bridge as follows:
2327
2328 STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2329
2330 which would probably cause the hardware to malfunction.
2331
2332
2333 What is necessary here is to intervene with an mmiowb() before dropping the
2334 spinlock, for example:
2335
2336 CPU 1 CPU 2
2337 =============================== ===============================
2338 spin_lock(Q)
2339 writel(0, ADDR)
2340 writel(1, DATA);
2341 mmiowb();
2342 spin_unlock(Q);
2343 spin_lock(Q);
2344 writel(4, ADDR);
2345 writel(5, DATA);
2346 mmiowb();
2347 spin_unlock(Q);
2348
2349 this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2350 before either of the stores issued on CPU 2.
2351
2352
2353 Furthermore, following a store by a load from the same device obviates the need
2354 for the mmiowb(), because the load forces the store to complete before the load
2355 is performed:
2356
2357 CPU 1 CPU 2
2358 =============================== ===============================
2359 spin_lock(Q)
2360 writel(0, ADDR)
2361 a = readl(DATA);
2362 spin_unlock(Q);
2363 spin_lock(Q);
2364 writel(4, ADDR);
2365 b = readl(DATA);
2366 spin_unlock(Q);
2367
2368
2369 See Documentation/DocBook/deviceiobook.tmpl for more information.
2370
2371
2372 =================================
2373 WHERE ARE MEMORY BARRIERS NEEDED?
2374 =================================
2375
2376 Under normal operation, memory operation reordering is generally not going to
2377 be a problem as a single-threaded linear piece of code will still appear to
2378 work correctly, even if it's in an SMP kernel. There are, however, four
2379 circumstances in which reordering definitely _could_ be a problem:
2380
2381 (*) Interprocessor interaction.
2382
2383 (*) Atomic operations.
2384
2385 (*) Accessing devices.
2386
2387 (*) Interrupts.
2388
2389
2390 INTERPROCESSOR INTERACTION
2391 --------------------------
2392
2393 When there's a system with more than one processor, more than one CPU in the
2394 system may be working on the same data set at the same time. This can cause
2395 synchronisation problems, and the usual way of dealing with them is to use
2396 locks. Locks, however, are quite expensive, and so it may be preferable to
2397 operate without the use of a lock if at all possible. In such a case
2398 operations that affect both CPUs may have to be carefully ordered to prevent
2399 a malfunction.
2400
2401 Consider, for example, the R/W semaphore slow path. Here a waiting process is
2402 queued on the semaphore, by virtue of it having a piece of its stack linked to
2403 the semaphore's list of waiting processes:
2404
2405 struct rw_semaphore {
2406 ...
2407 spinlock_t lock;
2408 struct list_head waiters;
2409 };
2410
2411 struct rwsem_waiter {
2412 struct list_head list;
2413 struct task_struct *task;
2414 };
2415
2416 To wake up a particular waiter, the up_read() or up_write() functions have to:
2417
2418 (1) read the next pointer from this waiter's record to know where the
2419 next waiter record is;
2420
2421 (2) read the pointer to the waiter's task structure;
2422
2423 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2424
2425 (4) call wake_up_process() on the task; and
2426
2427 (5) release the reference held on the waiter's task struct.
2428
2429 In other words, it has to perform this sequence of events:
2430
2431 LOAD waiter->list.next;
2432 LOAD waiter->task;
2433 STORE waiter->task;
2434 CALL wakeup
2435 RELEASE task
2436
2437 and if any of these steps occur out of order, then the whole thing may
2438 malfunction.
2439
2440 Once it has queued itself and dropped the semaphore lock, the waiter does not
2441 get the lock again; it instead just waits for its task pointer to be cleared
2442 before proceeding. Since the record is on the waiter's stack, this means that
2443 if the task pointer is cleared _before_ the next pointer in the list is read,
2444 another CPU might start processing the waiter and might clobber the waiter's
2445 stack before the up*() function has a chance to read the next pointer.
2446
2447 Consider then what might happen to the above sequence of events:
2448
2449 CPU 1 CPU 2
2450 =============================== ===============================
2451 down_xxx()
2452 Queue waiter
2453 Sleep
2454 up_yyy()
2455 LOAD waiter->task;
2456 STORE waiter->task;
2457 Woken up by other event
2458 <preempt>
2459 Resume processing
2460 down_xxx() returns
2461 call foo()
2462 foo() clobbers *waiter
2463 </preempt>
2464 LOAD waiter->list.next;
2465 --- OOPS ---
2466
2467 This could be dealt with using the semaphore lock, but then the down_xxx()
2468 function has to needlessly get the spinlock again after being woken up.
2469
2470 The way to deal with this is to insert a general SMP memory barrier:
2471
2472 LOAD waiter->list.next;
2473 LOAD waiter->task;
2474 smp_mb();
2475 STORE waiter->task;
2476 CALL wakeup
2477 RELEASE task
2478
2479 In this case, the barrier makes a guarantee that all memory accesses before the
2480 barrier will appear to happen before all the memory accesses after the barrier
2481 with respect to the other CPUs on the system. It does _not_ guarantee that all
2482 the memory accesses before the barrier will be complete by the time the barrier
2483 instruction itself is complete.
2484
2485 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2486 compiler barrier, thus making sure the compiler emits the instructions in the
2487 right order without actually intervening in the CPU. Since there's only one
2488 CPU, that CPU's dependency ordering logic will take care of everything else.
2489
2490
2491 ATOMIC OPERATIONS
2492 -----------------
2493
2494 Whilst they are technically interprocessor interaction considerations, atomic
2495 operations are noted specially as some of them imply full memory barriers and
2496 some don't, but they're very heavily relied on as a group throughout the
2497 kernel.
2498
2499 Any atomic operation that modifies some state in memory and returns information
2500 about the state (old or new) implies an SMP-conditional general memory barrier
2501 (smp_mb()) on each side of the actual operation (with the exception of
2502 explicit lock operations, described later). These include:
2503
2504 xchg();
2505 atomic_xchg(); atomic_long_xchg();
2506 atomic_inc_return(); atomic_long_inc_return();
2507 atomic_dec_return(); atomic_long_dec_return();
2508 atomic_add_return(); atomic_long_add_return();
2509 atomic_sub_return(); atomic_long_sub_return();
2510 atomic_inc_and_test(); atomic_long_inc_and_test();
2511 atomic_dec_and_test(); atomic_long_dec_and_test();
2512 atomic_sub_and_test(); atomic_long_sub_and_test();
2513 atomic_add_negative(); atomic_long_add_negative();
2514 test_and_set_bit();
2515 test_and_clear_bit();
2516 test_and_change_bit();
2517
2518 /* when succeeds */
2519 cmpxchg();
2520 atomic_cmpxchg(); atomic_long_cmpxchg();
2521 atomic_add_unless(); atomic_long_add_unless();
2522
2523 These are used for such things as implementing ACQUIRE-class and RELEASE-class
2524 operations and adjusting reference counters towards object destruction, and as
2525 such the implicit memory barrier effects are necessary.
2526
2527
2528 The following operations are potential problems as they do _not_ imply memory
2529 barriers, but might be used for implementing such things as RELEASE-class
2530 operations:
2531
2532 atomic_set();
2533 set_bit();
2534 clear_bit();
2535 change_bit();
2536
2537 With these the appropriate explicit memory barrier should be used if necessary
2538 (smp_mb__before_atomic() for instance).
2539
2540
2541 The following also do _not_ imply memory barriers, and so may require explicit
2542 memory barriers under some circumstances (smp_mb__before_atomic() for
2543 instance):
2544
2545 atomic_add();
2546 atomic_sub();
2547 atomic_inc();
2548 atomic_dec();
2549
2550 If they're used for statistics generation, then they probably don't need memory
2551 barriers, unless there's a coupling between statistical data.
2552
2553 If they're used for reference counting on an object to control its lifetime,
2554 they probably don't need memory barriers because either the reference count
2555 will be adjusted inside a locked section, or the caller will already hold
2556 sufficient references to make the lock, and thus a memory barrier, unnecessary.
2557
2558 If they're used for constructing a lock of some description, then they probably
2559 do need memory barriers as a lock primitive generally has to do things in a
2560 specific order.
2561
2562 Basically, each usage case has to be carefully considered as to whether memory
2563 barriers are needed or not.
2564
2565 The following operations are special locking primitives:
2566
2567 test_and_set_bit_lock();
2568 clear_bit_unlock();
2569 __clear_bit_unlock();
2570
2571 These implement ACQUIRE-class and RELEASE-class operations. These should be
2572 used in preference to other operations when implementing locking primitives,
2573 because their implementations can be optimised on many architectures.
2574
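As a hedged sketch of how they can be used to build a simple bit lock,
where 'OBJ_LOCK_BIT' and the 'flags' field are made-up names:

	while (test_and_set_bit_lock(OBJ_LOCK_BIT, &obj->flags))
		cpu_relax();		/* a successful set implies ACQUIRE */

	/* ... critical section protected by the bit ... */

	clear_bit_unlock(OBJ_LOCK_BIT, &obj->flags);	/* implies RELEASE */
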
2575 [!] Note that special memory barrier primitives are available for these
2576 situations because on some CPUs the atomic instructions used imply full memory
2577 barriers, and so barrier instructions are superfluous in conjunction with them,
2578 and in such cases the special barrier primitives will be no-ops.
2579
2580 See Documentation/atomic_ops.txt for more information.
2581
2582
2583 ACCESSING DEVICES
2584 -----------------
2585
2586 Many devices can be memory mapped, and so appear to the CPU as if they're just
2587 a set of memory locations. To control such a device, the driver usually has to
2588 make the right memory accesses in exactly the right order.
2589
2590 However, having a clever CPU or a clever compiler creates a potential problem
2591 in that the carefully sequenced accesses in the driver code won't reach the
2592 device in the requisite order if the CPU or the compiler thinks it is more
2593 efficient to reorder, combine or merge accesses - something that would cause
2594 the device to malfunction.
2595
2596 Inside of the Linux kernel, I/O should be done through the appropriate accessor
2597 routines - such as inb() or writel() - which know how to make such accesses
2598 appropriately sequential. Whilst this, for the most part, renders the explicit
2599 use of memory barriers unnecessary, there are a couple of situations where they
2600 might be needed:
2601
2602 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2603 so for _all_ general drivers locks should be used and mmiowb() must be
2604 issued prior to unlocking the critical section.
2605
2606 (2) If the accessor functions are used to refer to an I/O memory window with
2607 relaxed memory access properties, then _mandatory_ memory barriers are
2608 required to enforce ordering.
2609
2610 See Documentation/DocBook/deviceiobook.tmpl for more information.
2611
2612
2613 INTERRUPTS
2614 ----------
2615
2616 A driver may be interrupted by its own interrupt service routine, and thus the
2617 two parts of the driver may interfere with each other's attempts to control or
2618 access the device.
2619
2620 This may be alleviated - at least in part - by disabling local interrupts (a
2621 form of locking), such that the critical operations are all contained within
2622 the interrupt-disabled section in the driver. Whilst the driver's interrupt
2623 routine is executing, the driver's core may not run on the same CPU, and its
2624 interrupt is not permitted to happen again until the current interrupt has been
2625 handled, thus the interrupt handler does not need to lock against that.
2626
2627 However, consider a driver that was talking to an ethernet card that sports an
2628 address register and a data register. If that driver's core talks to the card
2629 under interrupt-disablement and then the driver's interrupt handler is invoked:
2630
2631 LOCAL IRQ DISABLE
2632 writew(ADDR, 3);
2633 writew(DATA, y);
2634 LOCAL IRQ ENABLE
2635 <interrupt>
2636 writew(ADDR, 4);
2637 q = readw(DATA);
2638 </interrupt>
2639
2640 The store to the data register might happen after the second store to the
2641 address register if ordering rules are sufficiently relaxed:
2642
2643 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2644
2645
2646 If ordering rules are relaxed, it must be assumed that accesses done inside an
2647 interrupt disabled section may leak outside of it and may interleave with
2648 accesses performed in an interrupt - and vice versa - unless implicit or
2649 explicit barriers are used.
2650
2651 Normally this won't be a problem because the I/O accesses done inside such
2652 sections will include synchronous load operations on strictly ordered I/O
2653 registers that form implicit I/O barriers. If this isn't sufficient then an
2654 mmiowb() may need to be used explicitly.
2655
2656
2657 A similar situation may occur between an interrupt routine and two routines
2658 running on separate CPUs that communicate with each other. If such a case is
2659 likely, then interrupt-disabling locks should be used to guarantee ordering.
2660
2661
2662 ==========================
2663 KERNEL I/O BARRIER EFFECTS
2664 ==========================
2665
2666 When accessing I/O memory, drivers should use the appropriate accessor
2667 functions:
2668
2669 (*) inX(), outX():
2670
2671 These are intended to talk to I/O space rather than memory space, but
2672 that's primarily a CPU-specific concept. The i386 and x86_64 processors
2673 do indeed have special I/O space access cycles and instructions, but many
2674 CPUs don't have such a concept.
2675
2676 The PCI bus, amongst others, defines an I/O space concept which - on such
2677 CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2678 space. However, it may also be mapped as a virtual I/O space in the CPU's
2679 memory map, particularly on those CPUs that don't support alternate I/O
2680 spaces.
2681
2682 Accesses to this space may be fully synchronous (as on i386), but
2683 intermediary bridges (such as the PCI host bridge) may not fully honour
2684 that.
2685
2686 They are guaranteed to be fully ordered with respect to each other.
2687
2688 They are not guaranteed to be fully ordered with respect to other types of
2689 memory and I/O operation.
2690
2691 (*) readX(), writeX():
2692
2693 Whether these are guaranteed to be fully ordered and uncombined with
2694 respect to each other on the issuing CPU depends on the characteristics
2695 defined for the memory window through which they're accessing. On later
2696 i386 architecture machines, for example, this is controlled by way of the
2697 MTRR registers.
2698
2699 Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2700 provided they're not accessing a prefetchable device.
2701
2702 However, intermediary hardware (such as a PCI bridge) may indulge in
2703 deferral if it so wishes; to flush a store, a load from the same location
2704 is preferred[*], but a load from the same device or from configuration
2705 space should suffice for PCI.
2706
2707 [*] NOTE! attempting to load from the same location as was written to may
2708 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2709 example.
2710
2711 Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2712 force stores to be ordered.
2713
2714 Please refer to the PCI specification for more information on interactions
2715 between PCI transactions.
2716
2717 (*) readX_relaxed(), writeX_relaxed()
2718
2719 These are similar to readX() and writeX(), but provide weaker memory
2720 ordering guarantees. Specifically, they do not guarantee ordering with
2721 respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2722 ordering with respect to LOCK or UNLOCK operations. If the latter is
2723 required, an mmiowb() barrier can be used. Note that relaxed accesses to
2724 the same peripheral are guaranteed to be ordered with respect to each
2725 other.  (A sketch follows this list of accessors.)
2726
2727 (*) ioreadX(), iowriteX()
2728
2729 These will perform appropriately for the type of access they're actually
2730 doing, be it inX()/outX() or readX()/writeX().
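
As a hedged sketch of the readX_relaxed()/writeX_relaxed() point above,
where 'base', the register offsets and the descriptor are all made-up
names: the two relaxed writes target the same peripheral and are therefore
ordered with respect to each other, but neither is ordered against the
update of the DMA buffer, hence the wmb() before the doorbell write:

	desc->len = len;			/* normal (DMA-able) memory */
	writel_relaxed(addr, base + REG_ADDR);	/* MMIO, same peripheral */
	writel_relaxed(len, base + REG_LEN);	/* ordered after REG_ADDR */
	wmb();					/* order desc->len ... */
	writel(1, base + REG_DOORBELL);		/* ... before the doorbell */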
2731
2732
2733 ========================================
2734 ASSUMED MINIMUM EXECUTION ORDERING MODEL
2735 ========================================
2736
2737 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2738 maintain the appearance of program causality with respect to itself. Some CPUs
2739 (such as i386 or x86_64) are more constrained than others (such as powerpc or
2740 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2741 of arch-specific code.
2742
2743 This means that it must be considered that the CPU will execute its instruction
2744 stream in any order it feels like - or even in parallel - provided that if an
2745 instruction in the stream depends on an earlier instruction, then that
2746 earlier instruction must be sufficiently complete[*] before the later
2747 instruction may proceed; in other words: provided that the appearance of
2748 causality is maintained.
2749
2750 [*] Some instructions have more than one effect - such as changing the
2751 condition codes, changing registers or changing memory - and different
2752 instructions may depend on different effects.
2753
2754 A CPU may also discard any instruction sequence that winds up having no
2755 ultimate effect. For example, if two adjacent instructions both load an
2756 immediate value into the same register, the first may be discarded.
2757
2758
2759 Similarly, it has to be assumed that the compiler might reorder the instruction
2760 stream in any way it sees fit, again provided the appearance of causality is
2761 maintained.
2762
2763
2764 ============================
2765 THE EFFECTS OF THE CPU CACHE
2766 ============================
2767
2768 The way cached memory operations are perceived across the system is affected to
2769 a certain extent by the caches that lie between CPUs and memory, and by the
2770 memory coherence system that maintains the consistency of state in the system.
2771
2772 As far as the way a CPU interacts with another part of the system through the
2773 caches goes, the memory system has to include the CPU's caches, and memory
2774 barriers for the most part act at the interface between the CPU and its cache
2775 (memory barriers logically act on the dotted line in the following diagram):
2776
2777 <--- CPU ---> : <----------- Memory ----------->
2778 :
2779 +--------+ +--------+ : +--------+ +-----------+
2780 | | | | : | | | | +--------+
2781 | CPU | | Memory | : | CPU | | | | |
2782 | Core |--->| Access |----->| Cache |<-->| | | |
2783 | | | Queue | : | | | |--->| Memory |
2784 | | | | : | | | | | |
2785 +--------+ +--------+ : +--------+ | | | |
2786 : | Cache | +--------+
2787 : | Coherency |
2788 : | Mechanism | +--------+
2789 +--------+ +--------+ : +--------+ | | | |
2790 | | | | : | | | | | |
2791 | CPU | | Memory | : | CPU | | |--->| Device |
2792 | Core |--->| Access |----->| Cache |<-->| | | |
2793 | | | Queue | : | | | | | |
2794 | | | | : | | | | +--------+
2795 +--------+ +--------+ : +--------+ +-----------+
2796 :
2797 :
2798
2799 Although any particular load or store may not actually appear outside of the
2800 CPU that issued it since it may have been satisfied within the CPU's own cache,
2801 it will still appear as if the full memory access had taken place as far as the
2802 other CPUs are concerned since the cache coherency mechanisms will migrate the
2803 cacheline over to the accessing CPU and propagate the effects upon conflict.
2804
2805 The CPU core may execute instructions in any order it deems fit, provided the
2806 expected program causality appears to be maintained. Some of the instructions
2807 generate load and store operations which then go into the queue of memory
2808 accesses to be performed. The core may place these in the queue in any order
2809 it wishes, and continue execution until it is forced to wait for an instruction
2810 to complete.
2811
2812 What memory barriers are concerned with is controlling the order in which
2813 accesses cross from the CPU side of things to the memory side of things, and
2814 the order in which the effects are perceived to happen by the other observers
2815 in the system.
2816
2817 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2818 their own loads and stores as if they had happened in program order.
2819
2820 [!] MMIO or other device accesses may bypass the cache system. This depends on
2821 the properties of the memory window through which devices are accessed and/or
2822 the use of any special device communication instructions the CPU may have.
2823
2824
2825 CACHE COHERENCY
2826 ---------------
2827
2828 Life isn't quite as simple as it may appear above, however: for while the
2829 caches are expected to be coherent, there's no guarantee that that coherency
2830 will be ordered. This means that whilst changes made on one CPU will
2831 eventually become visible on all CPUs, there's no guarantee that they will
2832 become apparent in the same order on those other CPUs.
2833
2834
2835 Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2836 has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2837
2838 :
2839 : +--------+
2840 : +---------+ | |
2841 +--------+ : +--->| Cache A |<------->| |
2842 | | : | +---------+ | |
2843 | CPU 1 |<---+ | |
2844 | | : | +---------+ | |
2845 +--------+ : +--->| Cache B |<------->| |
2846 : +---------+ | |
2847 : | Memory |
2848 : +---------+ | System |
2849 +--------+ : +--->| Cache C |<------->| |
2850 | | : | +---------+ | |
2851 | CPU 2 |<---+ | |
2852 | | : | +---------+ | |
2853 +--------+ : +--->| Cache D |<------->| |
2854 : +---------+ | |
2855 : +--------+
2856 :
2857
2858 Imagine the system has the following properties:
2859
2860 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2861 resident in memory;
2862
2863 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2864 resident in memory;
2865
2866 (*) whilst the CPU core is interrogating one cache, the other cache may be
2867 making use of the bus to access the rest of the system - perhaps to
2868 displace a dirty cacheline or to do a speculative load;
2869
2870 (*) each cache has a queue of operations that need to be applied to that cache
2871 to maintain coherency with the rest of the system;
2872
2873 (*) the coherency queue is not flushed by normal loads to lines already
2874 present in the cache, even though the contents of the queue may
2875 potentially affect those loads.
2876
2877 Imagine, then, that two writes are made on the first CPU, with a write barrier
2878 between them to guarantee that they will appear to reach that CPU's caches in
2879 the requisite order:
2880
2881 CPU 1 CPU 2 COMMENT
2882 =============== =============== =======================================
2883 u == 0, v == 1 and p == &u, q == &u
2884 v = 2;
2885 smp_wmb(); Make sure change to v is visible before
2886 change to p
2887 <A:modify v=2> v is now in cache A exclusively
2888 p = &v;
2889 <B:modify p=&v> p is now in cache B exclusively
2890
2891 The write memory barrier forces the other CPUs in the system to perceive that
2892 the local CPU's caches have apparently been updated in the correct order. But
2893 now imagine that the second CPU wants to read those values:
2894
2895 CPU 1 CPU 2 COMMENT
2896 =============== =============== =======================================
2897 ...
2898 q = p;
2899 x = *q;
2900
2901 The above pair of reads may then fail to happen in the expected order, as the
2902 cacheline holding p may get updated in one of the second CPU's caches whilst
2903 the update to the cacheline holding v is delayed in the other of the second
2904 CPU's caches by some other cache event:
2905
2906 CPU 1 CPU 2 COMMENT
2907 =============== =============== =======================================
2908 u == 0, v == 1 and p == &u, q == &u
2909 v = 2;
2910 smp_wmb();
2911 <A:modify v=2> <C:busy>
2912 <C:queue v=2>
2913 p = &v; q = p;
2914 <D:request p>
2915 <B:modify p=&v> <D:commit p=&v>
2916 <D:read p>
2917 x = *q;
2918 <C:read *q> Reads from v before v updated in cache
2919 <C:unbusy>
2920 <C:commit v=2>
2921
2922 Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2923 no guarantee that, without intervention, the order of update will be the same
2924 as that committed on CPU 1.
2925
2926
2927 To intervene, we need to interpolate a data dependency barrier or a read
2928 barrier between the loads. This will force the cache to commit its coherency
2929 queue before processing any further requests:
2930
2931 CPU 1 CPU 2 COMMENT
2932 =============== =============== =======================================
2933 u == 0, v == 1 and p == &u, q == &u
2934 v = 2;
2935 smp_wmb();
2936 <A:modify v=2> <C:busy>
2937 <C:queue v=2>
2938 p = &v; q = p;
2939 <D:request p>
2940 <B:modify p=&v> <D:commit p=&v>
2941 <D:read p>
2942 smp_read_barrier_depends()
2943 <C:unbusy>
2944 <C:commit v=2>
2945 x = *q;
2946 <C:read *q> Reads from v after v updated in cache
2947
2948
2949 This sort of problem can be encountered on DEC Alpha processors as they have a
2950 split cache that improves performance by making better use of the data bus.
2951 Whilst most CPUs do imply a data dependency barrier on the read when a memory
2952 access depends on a read, not all do, so it may not be relied on.
2953
2954 Other CPUs may also have split caches, but must coordinate between the various
2955 cachelets for normal memory accesses. The semantics of the Alpha removes the
2956 need for coordination in the absence of memory barriers.
2957
2958
2959 CACHE COHERENCY VS DMA
2960 ----------------------
2961
2962 Not all systems maintain cache coherency with respect to devices doing DMA. In
2963 such cases, a device attempting DMA may obtain stale data from RAM because
2964 dirty cache lines may be resident in the caches of various CPUs, and may not
2965 have been written back to RAM yet. To deal with this, the appropriate part of
2966 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2967 invalidate them as well).
2968
2969 In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2970 cache lines being written back to RAM from a CPU's cache after the device has
2971 installed its own data, or cache lines present in the CPU's cache may simply
2972 obscure the fact that RAM has been updated, until such time as the cacheline
2973 is discarded from the CPU's cache and reloaded. To deal with this, the
2974 appropriate part of the kernel must invalidate the overlapping bits of the
2975 cache on each CPU.
2976
2977 See Documentation/cachetlb.txt for more information on cache management.
2978
2979
2980 CACHE COHERENCY VS MMIO
2981 -----------------------
2982
2983 Memory mapped I/O usually takes place through memory locations that are part of
2984 a window in the CPU's memory space that has different properties assigned than
2985 the usual RAM directed window.
2986
2987 Amongst these properties is usually the fact that such accesses bypass the
2988 caching entirely and go directly to the device buses. This means MMIO accesses
2989 may, in effect, overtake accesses to cached memory that were emitted earlier.
2990 A memory barrier isn't sufficient in such a case, but rather the cache must be
2991 flushed between the cached memory write and the MMIO access if the two are in
2992 any way dependent.
2993
2994
2995 =========================
2996 THE THINGS CPUS GET UP TO
2997 =========================
2998
2999 A programmer might take it for granted that the CPU will perform memory
3000 operations in exactly the order specified, so that if the CPU is, for example,
3001 given the following piece of code to execute:
3002
3003 a = READ_ONCE(*A);
3004 WRITE_ONCE(*B, b);
3005 c = READ_ONCE(*C);
3006 d = READ_ONCE(*D);
3007 WRITE_ONCE(*E, e);
3008
3009 they would then expect that the CPU will complete the memory operation for each
3010 instruction before moving on to the next one, leading to a definite sequence of
3011 operations as seen by external observers in the system:
3012
3013 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
3014
3015
3016 Reality is, of course, much messier. With many CPUs and compilers, the above
3017 assumption doesn't hold because:
3018
3019 (*) loads are more likely to need to be completed immediately to permit
3020 execution progress, whereas stores can often be deferred without a
3021 problem;
3022
3023 (*) loads may be done speculatively, and the result discarded should it prove
3024 to have been unnecessary;
3025
3026 (*) loads may be done speculatively, leading to the result having been fetched
3027 at the wrong time in the expected sequence of events;
3028
3029 (*) the order of the memory accesses may be rearranged to promote better use
3030 of the CPU buses and caches;
3031
3032 (*) loads and stores may be combined to improve performance when talking to
3033 memory or I/O hardware that can do batched accesses of adjacent locations,
3034 thus cutting down on transaction setup costs (memory and PCI devices may
3035 both be able to do this); and
3036
3037 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
3038 mechanisms may alleviate this - once the store has actually hit the cache
3039 - there's no guarantee that the coherency management will be propagated in
3040 order to other CPUs.
3041
3042 So what another CPU, say, might actually observe from the above piece of code
3043 is:
3044
3045 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
3046
3047 (Where "LOAD {*C,*D}" is a combined load)
3048
3049
3050 However, it is guaranteed that a CPU will be self-consistent: it will see its
3051 _own_ accesses appear to be correctly ordered, without the need for a memory
3052 barrier. For instance with the following code:
3053
3054 U = READ_ONCE(*A);
3055 WRITE_ONCE(*A, V);
3056 WRITE_ONCE(*A, W);
3057 X = READ_ONCE(*A);
3058 WRITE_ONCE(*A, Y);
3059 Z = READ_ONCE(*A);
3060
and assuming no intervention by an external influence, the final result will
appear to be:
3063
3064 U == the original value of *A
3065 X == W
3066 Z == Y
3067 *A == Y
3068
3069 The code above may cause the CPU to generate the full sequence of memory
3070 accesses:
3071
3072 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
3073
3074 in that order, but, without intervention, the sequence may have almost any
3075 combination of elements combined or discarded, provided the program's view
3076 of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE()
3077 are -not- optional in the above example, as there are architectures
3078 where a given CPU might reorder successive loads to the same location.
3079 On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
3080 necessary to prevent this, for example, on Itanium the volatile casts
3081 used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
3082 and st.rel instructions (respectively) that prevent such reordering.
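
For reference, on most architectures READ_ONCE() and WRITE_ONCE() boil down to
volatile accesses.  The following is only a simplified sketch of the real
macros in include/linux/compiler.h, which also cope with non-scalar sizes:

        #define READ_ONCE(x)    (*(volatile typeof(x) *)&(x))

        #define WRITE_ONCE(x, val) \
                do { *(volatile typeof(x) *)&(x) = (val); } while (0)

The volatile qualifier is what forbids the compiler from discarding, merging
or reordering these particular accesses amongst themselves, and, as noted
above, is also what triggers the ld.acq/st.rel forms on Itanium.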
3083
3084 The compiler may also combine, discard or defer elements of the sequence before
3085 the CPU even sees them.
3086
3087 For instance:
3088
3089 *A = V;
3090 *A = W;
3091
3092 may be reduced to:
3093
3094 *A = W;
3095
since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the store of V to *A is lost.  Similarly:
3098
3099 *A = Y;
3100 Z = *A;
3101
may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
3103 reduced to:
3104
3105 *A = Y;
3106 Z = Y;
3107
and the LOAD operation never appears outside of the CPU.
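
If the stores and the load must actually be emitted - for example because *A
is shared with an interrupt handler or another thread - then WRITE_ONCE() and
READ_ONCE() can be used to stop the compiler from merging or discarding them.
A minimal sketch:

        WRITE_ONCE(*A, V);      /* this store is emitted... */
        WRITE_ONCE(*A, W);      /* ...as is this one, after it */
        Z = READ_ONCE(*A);      /* a real load, not a reuse of W */

Note that this only constrains the compiler; ordering as seen by other CPUs
still requires the barriers described earlier in this document.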
3109
3110
3111 AND THEN THERE'S THE ALPHA
3112 --------------------------
3113
3114 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
3115 some versions of the Alpha CPU have a split data cache, permitting them to have
3116 two semantically-related cache lines updated at separate times. This is where
the data dependency barrier really becomes necessary, as it synchronises both
caches with the memory coherence system, making it seem that a pointer update
and the new data it points to become visible in the correct order.
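
As a hedged illustration (gp, p, q and data are made-up names), consider
publishing a newly initialised structure through a global pointer:

        /* producer */
        p->data = 42;
        smp_wmb();                      /* commit the data before the pointer */
        WRITE_ONCE(gp, p);

        /* consumer */
        q = READ_ONCE(gp);
        smp_read_barrier_depends();     /* needed on Alpha; a no-op elsewhere */
        d = q->data;

Without the data dependency barrier, an Alpha CPU whose split cache holds the
pointer and the pointed-to data in different halves could observe the new
pointer but stale data.  Helpers such as rcu_dereference() include this
barrier for you.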
3120
3121 The Alpha defines the Linux kernel's memory barrier model.
3122
3123 See the subsection on "Cache Coherency" above.
3124
3125
3126 VIRTUAL MACHINE GUESTS
3127 ----------------------
3128
3129 Guests running within virtual machines might be affected by SMP effects even if
3130 the guest itself is compiled without SMP support. This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
3132 barriers for this use-case would be possible but is often suboptimal.
3133
3134 To handle this case optimally, low-level virt_mb() etc macros are available.
3135 These have the same effect as smp_mb() etc when SMP is enabled, but generate
3136 identical code for SMP and non-SMP systems. For example, virtual machine guests
3137 should use virt_mb() rather than smp_mb() when synchronizing against a
3138 (possibly SMP) host.
3139
These are equivalent to their smp_mb() etc. counterparts in all other
respects; in particular, they do not control MMIO effects: to control MMIO
effects, use mandatory barriers.
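
As a rough sketch (the ring layout here is invented and far simpler than a
real virtio ring), a guest handing a buffer to the host might order its
writes with virt_wmb() rather than with smp_wmb() or a mandatory barrier:

        ring->slot[idx] = buf_addr;     /* fill in the descriptor */
        virt_wmb();                     /* make it visible before the index... */
        ring->produced = idx + 1;       /* ...that the host looks at */

If the subsequent "kick" to the host is an MMIO access rather than a write to
shared memory, then, as noted above, a mandatory barrier is still required to
order it.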
3143
3144
3145 ============
3146 EXAMPLE USES
3147 ============
3148
3149 CIRCULAR BUFFERS
3150 ----------------
3151
Memory barriers can be used to implement circular buffering without the need
for a lock to serialise the producer with the consumer.  See:
3154
3155 Documentation/circular-buffers.txt
3156
3157 for details.
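
Roughly, the scheme described there has the producer publish a new head index
with a release operation and the consumer pick it up with an acquire
operation (and the reverse for the tail index).  A minimal sketch, assuming a
power-of-2 sized buffer with a single producer and a single consumer, where
buffer and item are made-up names and CIRC_SPACE()/CIRC_CNT() are the helpers
from include/linux/circ_buf.h:

        /* producer */
        unsigned long head = buffer->head;
        unsigned long tail = READ_ONCE(buffer->tail);

        if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
                buffer->data[head & (buffer->size - 1)] = item;
                /* the item must be visible before the new head index */
                smp_store_release(&buffer->head, head + 1);
        }

        /* consumer */
        unsigned long head = smp_load_acquire(&buffer->head);
        unsigned long tail = buffer->tail;

        if (CIRC_CNT(head, tail, buffer->size) >= 1) {
                item = buffer->data[tail & (buffer->size - 1)];
                /* finish reading the item before giving the slot back */
                smp_store_release(&buffer->tail, tail + 1);
        }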
3158
3159
3160 ==========
3161 REFERENCES
3162 ==========
3163
3164 Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
3165 Digital Press)
3166 Chapter 5.2: Physical Address Space Characteristics
3167 Chapter 5.4: Caches and Write Buffers
3168 Chapter 5.5: Data Sharing
3169 Chapter 5.6: Read/Write Ordering
3170
3171 AMD64 Architecture Programmer's Manual Volume 2: System Programming
3172 Chapter 7.1: Memory-Access Ordering
3173 Chapter 7.4: Buffering and Combining Memory Writes
3174
3175 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
3176 System Programming Guide
3177 Chapter 7.1: Locked Atomic Operations
3178 Chapter 7.2: Memory Ordering
3179 Chapter 7.4: Serializing Instructions
3180
3181 The SPARC Architecture Manual, Version 9
3182 Chapter 8: Memory Models
3183 Appendix D: Formal Specification of the Memory Models
3184 Appendix J: Programming with the Memory Models
3185
3186 UltraSPARC Programmer Reference Manual
3187 Chapter 5: Memory Accesses and Cacheability
3188 Chapter 15: Sparc-V9 Memory Models
3189
3190 UltraSPARC III Cu User's Manual
3191 Chapter 9: Memory Models
3192
3193 UltraSPARC IIIi Processor User's Manual
3194 Chapter 8: Memory Models
3195
3196 UltraSPARC Architecture 2005
3197 Chapter 9: Memory
3198 Appendix D: Formal Specifications of the Memory Models
3199
3200 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3201 Chapter 8: Memory Models
3202 Appendix F: Caches and Cache Coherency
3203
3204 Solaris Internals, Core Kernel Architecture, p63-68:
3205 Chapter 3.3: Hardware Considerations for Locks and
3206 Synchronization
3207
3208 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3209 for Kernel Programmers:
3210 Chapter 13: Other Memory Models
3211
3212 Intel Itanium Architecture Software Developer's Manual: Volume 1:
3213 Section 2.6: Speculation
3214 Section 4.4: Memory Access