powerpc: Make {cmp}xchg* and their atomic_ versions fully ordered
author     Boqun Feng <boqun.feng@gmail.com>
           Mon, 2 Nov 2015 01:30:32 +0000 (09:30 +0800)
committer  Michael Ellerman <mpe@ellerman.id.au>
           Mon, 14 Dec 2015 09:39:01 +0000 (20:39 +1100)
commit     81d7a3294de7e9828310bbf986a67246b13fa01e
tree       5bf300937eb52355f7719898e5ab00e2002a6525
parent     49e9cf3f0c04bf76ffa59242254110309554861d
powerpc: Make {cmp}xchg* and their atomic_ versions fully ordered

According to memory-barriers.txt, xchg*, cmpxchg* and their atomic_
versions all need to be fully ordered. However, on powerpc they are
currently only RELEASE+ACQUIRE, which is not fully ordered.
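
For reference, this matters because of what those barriers expand to on
powerpc. Below is a simplified sketch of the CONFIG_SMP definitions in
arch/powerpc/include/asm/synch.h (the in-tree macros go through
stringify_in_c() and feature-fixup sections, which are omitted here).
lwsync does not order an earlier store against a later load, and isync is
not a memory barrier on its own, so an lwsync-before/isync-after RMW cannot
provide the store->load ordering a fully ordered primitive needs, whereas
sync can:

/* Simplified sketch (CONFIG_SMP case); not the literal in-tree definitions. */
#define PPC_RELEASE_BARRIER	 "lwsync\n"	/* RELEASE: no prior-store vs later-load ordering */
#define PPC_ACQUIRE_BARRIER	 "isync\n"	/* ACQUIRE: relies on the ll/sc branch + isync     */
#define PPC_ATOMIC_ENTRY_BARRIER "sync\n"	/* full barrier before the lwarx/stwcx. loop      */
#define PPC_ATOMIC_EXIT_BARRIER	 "sync\n"	/* full barrier after the lwarx/stwcx. loop       */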

Therefore, also replace PPC_RELEASE_BARRIER and PPC_ACQUIRE_BARRIER with
PPC_ATOMIC_ENTRY_BARRIER and PPC_ATOMIC_EXIT_BARRIER respectively in
__{cmp,}xchg_{u32,u64}, to guarantee fully ordered semantics of
atomic{,64}_{cmp,}xchg() and {cmp,}xchg(). This complements commit
b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics").
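
To illustrate the shape of the change, here is a simplified sketch of
__xchg_u32() in arch/powerpc/include/asm/cmpxchg.h with the new barriers
(the in-tree version also carries the PPC405_ERR77 workaround between
lwarx and stwcx., omitted here); the entry and exit barriers are exactly
what this patch swaps:

static __always_inline unsigned long
__xchg_u32(volatile void *p, unsigned long val)
{
	unsigned long prev;

	__asm__ __volatile__(
	PPC_ATOMIC_ENTRY_BARRIER	/* was PPC_RELEASE_BARRIER (lwsync) */
"1:	lwarx	%0,0,%2 \n"		/* load-reserve the old value        */
"	stwcx.	%3,0,%2 \n"		/* store-conditional the new value   */
"	bne-	1b"			/* retry if the reservation was lost */
	PPC_ATOMIC_EXIT_BARRIER		/* was PPC_ACQUIRE_BARRIER (isync)   */
	: "=&r" (prev), "+m" (*(volatile unsigned int *)p)
	: "r" (p), "r" (val)
	: "cc", "memory");

	return prev;
}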

This patch depends on the patch "powerpc: Make value-returning atomics
fully ordered" for the PPC_ATOMIC_ENTRY_BARRIER definition.

Cc: stable@vger.kernel.org # 3.2+
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
arch/powerpc/include/asm/cmpxchg.h