author		2021-08-30 14:26:36 -0700
committer	2021-08-30 14:26:36 -0700
commit		e5e726f7bb9f711102edea7e5bd511835640e3b4
tree		e9f2d1696cd7a9664a04735568b2adbd2527e2e0 /scripts
parent		Merge tag 'smp-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
parent		locking/rtmutex: Return success on deadlock for ww_mutex waiters
Merge tag 'locking-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking and atomics updates from Thomas Gleixner:
"The regular pile:
- A few improvements to the mutex code
- Documentation updates for atomics to clarify the difference between
cmpxchg() and try_cmpxchg() and to explain the forward progress
expectations (a short sketch of the difference follows this list).
- Simplification of the atomics fallback generator
- The addition of arch_atomic_long*() variants and generic arch_*()
bitops based on them.
- Add the missing might_sleep() invocations to the down*() operations
of semaphores.
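
As a quick illustration of the cmpxchg()/try_cmpxchg() distinction those
documentation updates cover, here is a minimal sketch. The two helper
functions are made up for illustration; the atomic_cmpxchg() and
atomic_try_cmpxchg() calls are the real kernel API:

	/*
	 * Both loops atomically increment an atomic_t. With cmpxchg() the
	 * caller must feed the returned value back into 'old' by hand;
	 * try_cmpxchg() updates '*old' in place on failure, which gives
	 * the compact do/while idiom and saves a reload on retry.
	 */
	static inline void inc_with_cmpxchg(atomic_t *v)
	{
		int old = atomic_read(v);

		for (;;) {
			int tmp = atomic_cmpxchg(v, old, old + 1);
			if (tmp == old)		/* the swap happened */
				break;
			old = tmp;		/* lost a race; retry with the fresh value */
		}
	}

	static inline void inc_with_try_cmpxchg(atomic_t *v)
	{
		int old = atomic_read(v);

		do {
			/* on failure 'old' already holds the current value */
		} while (!atomic_try_cmpxchg(v, &old, old + 1));
	}

The forward progress discussion concerns exactly such loops: neither form
is wait-free, since a contended CPU can in principle retry indefinitely.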
The PREEMPT_RT locking core:
- Scheduler updates to support the state preserving mechanism for
'sleeping' spin- and rwlocks on RT.
This mechanism carefully preserves the state of the task when it
blocks on a 'sleeping' spin- or rwlock and takes regular wake-ups
targeted at the same task into account. The preserved or updated
(via a regular wakeup) state is restored when the lock has been
acquired.
- Restructuring of the rtmutex code so it can be utilized and
extended for the RT specific lock variants.
- Restructuring of the ww_mutex code to allow sharing of the ww_mutex
specific functionality for rtmutex based ww_mutexes.
- Header file disentangling to allow substitution of the regular lock
implementations with the PREEMPT_RT variants without creating an
unmaintainable #ifdef mess.
- Shared base code for the PREEMPT_RT specific rw_semaphore and
rwlock implementations.
In contrast to the regular rw_semaphores and rwlocks, the PREEMPT_RT
implementation is writer-unfair because it is infeasible to do
priority inheritance on multiple readers. Experience over the years
has shown that real-time workloads are not the typical workloads
which are sensitive to writer starvation.
The alternative solution would be to allow only a single reader;
that has been tried and discarded because it is a major bottleneck,
especially for mmap_sem. Aside from that, many of the writer-starvation
critical usage sites have been converted to a writer-side mutex/spinlock
plus RCU read-side protection over the past decade, so the issue is
less prominent than it used to be.
- The actual rtmutex based lock substitutions for PREEMPT_RT enabled
kernels which affect mutex, ww_mutex, rw_semaphore, spinlock_t and
rwlock_t. The spin/rw_lock*() functions disable migration across
the critical section to preserve the existing semantics vs per-CPU
variables (see the first sketch after this list).
- Rework of the futex REQUEUE_PI mechanism to handle the case of
early wake-ups which interleave with a re-queue operation to
prevent the situation that a task would be blocked on both the
rtmutex associated to the outer futex and the rtmutex based hash
bucket spinlock.
While this situation cannot happen on !RT enabled kernels, the
changes make the underlying concurrency problems easier to
understand in general. As a result, the difference between !RT and
RT kernels is reduced to the handling of waiting for the critical
section: !RT kernels simply spin-wait as before and RT kernels
utilize rcu_wait().
- The substitution of local_lock for PREEMPT_RT with a spinlock which
protects the critical section while staying preemptible. The CPU
locality is established by disabling migration (see the second
sketch after this list).
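
To make the lock substitution item concrete, here is a hedged sketch of
the RT spinlock_t paths, modeled loosely on kernel/locking/spinlock_rt.c
from this series; it is simplified, so treat the names and details as
approximate rather than the exact implementation:

	void rt_spin_lock(spinlock_t *lock)
	{
		rtlock_lock(&lock->lock);	/* may block on the underlying rtmutex */
		rcu_read_lock();		/* spinlocked regions imply an RCU read side */
		migrate_disable();		/* pin the task to this CPU, stay preemptible */
	}

	void rt_spin_unlock(spinlock_t *lock)
	{
		migrate_enable();
		rcu_read_unlock();
		rt_mutex_unlock(&lock->lock);	/* illustrative; the real code has a cmpxchg fast path */
	}

The point of migrate_disable() rather than preempt_disable() is that the
critical section remains preemptible while this_cpu_*() accesses keep
their meaning.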
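
And for the local_lock item, a similarly hedged sketch of the RT mapping,
simplified from include/linux/local_lock_internal.h and illustrative only:

	/* On PREEMPT_RT a local_lock_t is backed by a sleeping spinlock_t. */
	typedef spinlock_t local_lock_t;

	#define __local_lock(__lock)				\
		do {						\
			migrate_disable();			\
			spin_lock(this_cpu_ptr((__lock)));	\
		} while (0)

	#define __local_unlock(__lock)				\
		do {						\
			spin_unlock(this_cpu_ptr((__lock)));	\
			migrate_enable();			\
		} while (0)

migrate_disable() has to come first so that this_cpu_ptr() keeps resolving
to the same CPU's lock for the entire acquire; on !RT kernels local_lock()
remains a preempt/irq disable operation.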
The underlying concepts of this code have been in use in PREEMPT_RT for
well over a decade. The code has been refactored several times over
the years and this final incarnation has been optimized once again to be
as non-intrusive as possible, i.e. the RT specific parts are mostly
isolated.
It has been extensively tested in the 5.14-rt patch series and it has
been verified that !RT kernels are not affected by these changes"
* tag 'locking-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (92 commits)
locking/rtmutex: Return success on deadlock for ww_mutex waiters
locking/rtmutex: Prevent spurious EDEADLK return caused by ww_mutexes
locking/rtmutex: Dequeue waiter on ww_mutex deadlock
locking/rtmutex: Dont dereference waiter lockless
locking/semaphore: Add might_sleep() to down_*() family
locking/ww_mutex: Initialize waiter.ww_ctx properly
static_call: Update API documentation
locking/local_lock: Add PREEMPT_RT support
locking/spinlock/rt: Prepare for RT local_lock
locking/rtmutex: Add adaptive spinwait mechanism
locking/rtmutex: Implement equal priority lock stealing
preempt: Adjust PREEMPT_LOCK_OFFSET for RT
locking/rtmutex: Prevent lockdep false positive with PI futexes
futex: Prevent requeue_pi() lock nesting issue on RT
futex: Simplify handle_early_requeue_pi_wakeup()
futex: Reorder sanity checks in futex_requeue()
futex: Clarify comment in futex_requeue()
futex: Restructure futex_requeue()
futex: Correct the number of requeued waiters for PI
futex: Remove bogus condition for requeue PI
...
Diffstat (limited to 'scripts')
24 files changed, 90 insertions, 105 deletions
diff --git a/scripts/atomic/check-atomics.sh b/scripts/atomic/check-atomics.sh
index 9c7fbd4bcbce..0e7bab3eb0d1 100755
--- a/scripts/atomic/check-atomics.sh
+++ b/scripts/atomic/check-atomics.sh
@@ -14,9 +14,9 @@ if [ $? -ne 0 ]; then
 fi
 
 cat <<EOF |
-asm-generic/atomic-instrumented.h
-asm-generic/atomic-long.h
-linux/atomic-arch-fallback.h
+linux/atomic/atomic-instrumented.h
+linux/atomic/atomic-long.h
+linux/atomic/atomic-arch-fallback.h
 EOF
 while read header; do
 	OLDSUM="$(tail -n 1 ${LINUXDIR}/include/${header})"
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index 59c00529dc7c..ef764085c79a 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,8 +1,8 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_${pfx}${name}${sfx}_acquire(${params})
+arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
 {
-	${ret} ret = ${arch}${atomic}_${pfx}${name}${sfx}_relaxed(${args});
+	${ret} ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
 	__atomic_acquire_fence();
 	return ret;
 }
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index a66635bceefb..15caa2eb2371 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,6 +1,6 @@
 cat <<EOF
 /**
- * ${arch}${atomic}_add_negative - add and test if negative
+ * arch_${atomic}_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer of type ${atomic}_t
  *
@@ -9,8 +9,8 @@ cat <<EOF
  * result is greater than or equal to zero.
  */
 static __always_inline bool
-${arch}${atomic}_add_negative(${int} i, ${atomic}_t *v)
+arch_${atomic}_add_negative(${int} i, ${atomic}_t *v)
 {
-	return ${arch}${atomic}_add_return(i, v) < 0;
+	return arch_${atomic}_add_return(i, v) < 0;
 }
 EOF
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 2ff598a3f9ec..9e5159c2ccfc 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,6 +1,6 @@
 cat << EOF
 /**
- * ${arch}${atomic}_add_unless - add unless the number is already a given value
+ * arch_${atomic}_add_unless - add unless the number is already a given value
  * @v: pointer of type ${atomic}_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -9,8 +9,8 @@ cat << EOF
  * Returns true if the addition was done.
  */
 static __always_inline bool
-${arch}${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
 {
-	return ${arch}${atomic}_fetch_add_unless(v, a, u) != u;
+	return arch_${atomic}_fetch_add_unless(v, a, u) != u;
 }
 EOF
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 3f18663dcefb..5a42f54a3595 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,7 +1,7 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
+arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
 {
-	${retstmt}${arch}${atomic}_${pfx}and${sfx}${order}(~i, v);
+	${retstmt}arch_${atomic}_${pfx}and${sfx}${order}(~i, v);
 }
 EOF
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index e2e01f0574bb..8c144c818e9e 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,7 +1,7 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
+arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
 {
-	${retstmt}${arch}${atomic}_${pfx}sub${sfx}${order}(1, v);
+	${retstmt}arch_${atomic}_${pfx}sub${sfx}${order}(1, v);
 }
 EOF
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index e8a5e492eb5f..8549f359bd0e 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,6 +1,6 @@
 cat <<EOF
 /**
- * ${arch}${atomic}_dec_and_test - decrement and test
+ * arch_${atomic}_dec_and_test - decrement and test
  * @v: pointer of type ${atomic}_t
  *
  * Atomically decrements @v by 1 and
@@ -8,8 +8,8 @@ cat <<EOF
  * cases.
  */
 static __always_inline bool
-${arch}${atomic}_dec_and_test(${atomic}_t *v)
+arch_${atomic}_dec_and_test(${atomic}_t *v)
 {
-	return ${arch}${atomic}_dec_return(v) == 0;
+	return arch_${atomic}_dec_return(v) == 0;
 }
 EOF
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index 527adec89c37..86bdced3428d 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,14 +1,14 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_dec_if_positive(${atomic}_t *v)
+arch_${atomic}_dec_if_positive(${atomic}_t *v)
 {
-	${int} dec, c = ${arch}${atomic}_read(v);
+	${int} dec, c = arch_${atomic}_read(v);
 
 	do {
 		dec = c - 1;
 		if (unlikely(dec < 0))
 			break;
-	} while (!${arch}${atomic}_try_cmpxchg(v, &c, dec));
+	} while (!arch_${atomic}_try_cmpxchg(v, &c, dec));
 
 	return dec;
 }
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index dcab6848ca1e..c531d5afecc4 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,13 +1,13 @@
 cat <<EOF
 static __always_inline bool
-${arch}${atomic}_dec_unless_positive(${atomic}_t *v)
+arch_${atomic}_dec_unless_positive(${atomic}_t *v)
 {
-	${int} c = ${arch}${atomic}_read(v);
+	${int} c = arch_${atomic}_read(v);
 
 	do {
 		if (unlikely(c > 0))
 			return false;
-	} while (!${arch}${atomic}_try_cmpxchg(v, &c, c - 1));
+	} while (!arch_${atomic}_try_cmpxchg(v, &c, c - 1));
 
 	return true;
 }
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 3764fc8ce945..07757d8e338e 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,10 +1,10 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_${pfx}${name}${sfx}(${params})
+arch_${atomic}_${pfx}${name}${sfx}(${params})
 {
 	${ret} ret;
 	__atomic_pre_full_fence();
-	ret = ${arch}${atomic}_${pfx}${name}${sfx}_relaxed(${args});
+	ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
 	__atomic_post_full_fence();
 	return ret;
 }
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 0e0b9aef1515..68ce13c8b9da 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,6 +1,6 @@
 cat << EOF
 /**
- * ${arch}${atomic}_fetch_add_unless - add unless the number is already a given value
+ * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
  * @v: pointer of type ${atomic}_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -9,14 +9,14 @@ cat << EOF
  * Returns original value of @v
  */
 static __always_inline ${int}
-${arch}${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
 {
-	${int} c = ${arch}${atomic}_read(v);
+	${int} c = arch_${atomic}_read(v);
 
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!${arch}${atomic}_try_cmpxchg(v, &c, c + a));
+	} while (!arch_${atomic}_try_cmpxchg(v, &c, c + a));
 
 	return c;
 }
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 15ec62946e8c..3c2c3739169e 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,7 +1,7 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
+arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
 {
-	${retstmt}${arch}${atomic}_${pfx}add${sfx}${order}(1, v);
+	${retstmt}arch_${atomic}_${pfx}add${sfx}${order}(1, v);
 }
 EOF
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index cecc8322a21f..0cf23fe1efb8 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,6 +1,6 @@
 cat <<EOF
 /**
- * ${arch}${atomic}_inc_and_test - increment and test
+ * arch_${atomic}_inc_and_test - increment and test
  * @v: pointer of type ${atomic}_t
  *
  * Atomically increments @v by 1
@@ -8,8 +8,8 @@ cat <<EOF
  * other cases.
  */
 static __always_inline bool
-${arch}${atomic}_inc_and_test(${atomic}_t *v)
+arch_${atomic}_inc_and_test(${atomic}_t *v)
 {
-	return ${arch}${atomic}_inc_return(v) == 0;
+	return arch_${atomic}_inc_return(v) == 0;
 }
 EOF
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index 50f2d4d48279..ed8a1f562667 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,14 +1,14 @@
 cat <<EOF
 /**
- * ${arch}${atomic}_inc_not_zero - increment unless the number is zero
+ * arch_${atomic}_inc_not_zero - increment unless the number is zero
  * @v: pointer of type ${atomic}_t
  *
  * Atomically increments @v by 1, if @v is non-zero.
  * Returns true if the increment was done.
  */
 static __always_inline bool
-${arch}${atomic}_inc_not_zero(${atomic}_t *v)
+arch_${atomic}_inc_not_zero(${atomic}_t *v)
 {
-	return ${arch}${atomic}_add_unless(v, 1, 0);
+	return arch_${atomic}_add_unless(v, 1, 0);
 }
 EOF
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 87629e0d4a80..95d8ce48233f 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,13 +1,13 @@
 cat <<EOF
 static __always_inline bool
-${arch}${atomic}_inc_unless_negative(${atomic}_t *v)
+arch_${atomic}_inc_unless_negative(${atomic}_t *v)
 {
-	${int} c = ${arch}${atomic}_read(v);
+	${int} c = arch_${atomic}_read(v);
 
 	do {
 		if (unlikely(c < 0))
 			return false;
-	} while (!${arch}${atomic}_try_cmpxchg(v, &c, c + 1));
+	} while (!arch_${atomic}_try_cmpxchg(v, &c, c + 1));
 
 	return true;
 }
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index 341a88dccaa7..803ba7561076 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,6 +1,6 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_read_acquire(const ${atomic}_t *v)
+arch_${atomic}_read_acquire(const ${atomic}_t *v)
 {
 	return smp_load_acquire(&(v)->counter);
 }
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index f8906d537c0f..b46feb56d69c 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,8 +1,8 @@
 cat <<EOF
 static __always_inline ${ret}
-${arch}${atomic}_${pfx}${name}${sfx}_release(${params})
+arch_${atomic}_${pfx}${name}${sfx}_release(${params})
 {
 	__atomic_release_fence();
-	${retstmt}${arch}${atomic}_${pfx}${name}${sfx}_relaxed(${args});
+	${retstmt}arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
 }
 EOF
diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 76068272d5f5..86ede759f24e 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,6 +1,6 @@
 cat <<EOF
 static __always_inline void
-${arch}${atomic}_set_release(${atomic}_t *v, ${int} i)
+arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
 {
 	smp_store_release(&(v)->counter, i);
 }
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index c580f4c2136e..260f37341c88 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,6 +1,6 @@
 cat <<EOF
 /**
- * ${arch}${atomic}_sub_and_test - subtract value from variable and test result
+ * arch_${atomic}_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer of type ${atomic}_t
  *
@@ -9,8 +9,8 @@ cat <<EOF
  * other cases.
  */
 static __always_inline bool
-${arch}${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
+arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
 {
-	return ${arch}${atomic}_sub_return(i, v) == 0;
+	return arch_${atomic}_sub_return(i, v) == 0;
 }
 EOF
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 06db0f738e45..890f850ede37 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,9 +1,9 @@
 cat <<EOF
 static __always_inline bool
-${arch}${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
+arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
 {
 	${int} r, o = *old;
-	r = ${arch}${atomic}_cmpxchg${order}(v, o, new);
+	r = arch_${atomic}_cmpxchg${order}(v, o, new);
 	if (unlikely(r != o))
 		*old = r;
 	return likely(r == o);
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 317a6cec76e1..8e2da71f1d5f 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -2,11 +2,10 @@
 # SPDX-License-Identifier: GPL-2.0
 
 ATOMICDIR=$(dirname $0)
-ARCH=$2
 
 . ${ATOMICDIR}/atomic-tbl.sh
 
-#gen_template_fallback(template, meta, pfx, name, sfx, order, arch, atomic, int, args...)
+#gen_template_fallback(template, meta, pfx, name, sfx, order, atomic, int, args...)
 gen_template_fallback()
 {
 	local template="$1"; shift
@@ -15,11 +14,10 @@ gen_template_fallback()
 	local name="$1"; shift
 	local sfx="$1"; shift
 	local order="$1"; shift
-	local arch="$1"; shift
 	local atomic="$1"; shift
 	local int="$1"; shift
 
-	local atomicname="${arch}${atomic}_${pfx}${name}${sfx}${order}"
+	local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}"
 
 	local ret="$(gen_ret_type "${meta}" "${int}")"
 	local retstmt="$(gen_ret_stmt "${meta}")"
@@ -34,7 +32,7 @@ gen_template_fallback()
 	fi
 }
 
-#gen_proto_fallback(meta, pfx, name, sfx, order, arch, atomic, int, args...)
+#gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
 gen_proto_fallback()
 {
 	local meta="$1"; shift
@@ -65,44 +63,26 @@ gen_proto_order_variant()
 	local name="$1"; shift
 	local sfx="$1"; shift
 	local order="$1"; shift
-	local arch="$1"
-	local atomic="$2"
+	local atomic="$1"
 
-	local basename="${arch}${atomic}_${pfx}${name}${sfx}"
+	local basename="arch_${atomic}_${pfx}${name}${sfx}"
 
-	printf "#define arch_${basename}${order} ${basename}${order}\n"
+	printf "#define ${basename}${order} ${basename}${order}\n"
 }
 
-#gen_proto_order_variants(meta, pfx, name, sfx, arch, atomic, int, args...)
+#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
 gen_proto_order_variants()
 {
 	local meta="$1"; shift
 	local pfx="$1"; shift
 	local name="$1"; shift
 	local sfx="$1"; shift
-	local arch="$1"
-	local atomic="$2"
+	local atomic="$1"
 
-	local basename="${arch}${atomic}_${pfx}${name}${sfx}"
+	local basename="arch_${atomic}_${pfx}${name}${sfx}"
 
 	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
 
-	if [ -z "$arch" ]; then
-		gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
-
-		if meta_has_acquire "${meta}"; then
-			gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-		fi
-		if meta_has_release "${meta}"; then
-			gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-		fi
-		if meta_has_relaxed "${meta}"; then
-			gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
-		fi
-
-		echo ""
-	fi
-
 	# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
 	# read_acquire and set_release need to be templated, though
 	if ! meta_has_relaxed "${meta}"; then
@@ -128,7 +108,7 @@ gen_proto_order_variants()
 	gen_basic_fallbacks "${basename}"
 
 	if [ ! -z "${template}" ]; then
-		printf "#endif /* ${arch}${atomic}_${pfx}${name}${sfx} */\n\n"
+		printf "#endif /* ${basename} */\n\n"
 		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
 		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
 		gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
@@ -187,38 +167,38 @@ gen_try_cmpxchg_fallback()
 	local order="$1"; shift;
 
 cat <<EOF
-#ifndef ${ARCH}try_cmpxchg${order}
-#define ${ARCH}try_cmpxchg${order}(_ptr, _oldp, _new) \\
+#ifndef arch_try_cmpxchg${order}
+#define arch_try_cmpxchg${order}(_ptr, _oldp, _new) \\
 ({ \\
 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \\
-	___r = ${ARCH}cmpxchg${order}((_ptr), ___o, (_new)); \\
+	___r = arch_cmpxchg${order}((_ptr), ___o, (_new)); \\
 	if (unlikely(___r != ___o)) \\
 		*___op = ___r; \\
 	likely(___r == ___o); \\
 })
-#endif /* ${ARCH}try_cmpxchg${order} */
+#endif /* arch_try_cmpxchg${order} */
 
 EOF
 }
 
 gen_try_cmpxchg_fallbacks()
 {
-	printf "#ifndef ${ARCH}try_cmpxchg_relaxed\n"
-	printf "#ifdef ${ARCH}try_cmpxchg\n"
+	printf "#ifndef arch_try_cmpxchg_relaxed\n"
+	printf "#ifdef arch_try_cmpxchg\n"
 
-	gen_basic_fallbacks "${ARCH}try_cmpxchg"
+	gen_basic_fallbacks "arch_try_cmpxchg"
 
-	printf "#endif /* ${ARCH}try_cmpxchg */\n\n"
+	printf "#endif /* arch_try_cmpxchg */\n\n"
 
 	for order in "" "_acquire" "_release" "_relaxed"; do
 		gen_try_cmpxchg_fallback "${order}"
 	done
 
-	printf "#else /* ${ARCH}try_cmpxchg_relaxed */\n"
+	printf "#else /* arch_try_cmpxchg_relaxed */\n"
 
-	gen_order_fallbacks "${ARCH}try_cmpxchg"
+	gen_order_fallbacks "arch_try_cmpxchg"
 
-	printf "#endif /* ${ARCH}try_cmpxchg_relaxed */\n\n"
+	printf "#endif /* arch_try_cmpxchg_relaxed */\n\n"
 }
 
 cat << EOF
@@ -234,14 +214,14 @@ cat << EOF
 
 EOF
 
-for xchg in "${ARCH}xchg" "${ARCH}cmpxchg" "${ARCH}cmpxchg64"; do
+for xchg in "arch_xchg" "arch_cmpxchg" "arch_cmpxchg64"; do
 	gen_xchg_fallbacks "${xchg}"
 done
 
 gen_try_cmpxchg_fallbacks
 
 grep '^[a-z]' "$1" | while read name meta args; do
-	gen_proto "${meta}" "${name}" "${ARCH}" "atomic" "int" ${args}
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
 done
 
 cat <<EOF
@@ -252,7 +232,7 @@ cat <<EOF
 EOF
 
 grep '^[a-z]' "$1" | while read name meta args; do
-	gen_proto "${meta}" "${name}" "${ARCH}" "atomic64" "s64" ${args}
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
 done
 
 cat <<EOF
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index b0c45aee19d7..035ceb4ee85c 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -121,8 +121,8 @@ cat << EOF
  * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
  * double instrumentation.
  */
-#ifndef _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
-#define _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
+#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
+#define _LINUX_ATOMIC_INSTRUMENTED_H
 
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
@@ -138,6 +138,11 @@ grep '^[a-z]' "$1" | while read name meta args; do
 	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
 done
 
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic_long" "long" ${args}
+done
+
+
 for xchg in "xchg" "cmpxchg" "cmpxchg64" "try_cmpxchg"; do
 	for order in "" "_acquire" "_release" "_relaxed"; do
 		gen_xchg "${xchg}${order}" ""
@@ -158,5 +163,5 @@ gen_xchg "cmpxchg_double_local" "2 * "
 
 cat <<EOF
 
-#endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
+#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
 EOF
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index e318d3f92e53..eda89cea6e1d 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -47,9 +47,9 @@ gen_proto_order_variant()
 
 cat <<EOF
 static __always_inline ${ret}
-atomic_long_${name}(${params})
+arch_atomic_long_${name}(${params})
 {
-	${retstmt}${atomic}_${name}(${argscast});
+	${retstmt}arch_${atomic}_${name}(${argscast});
 }
 
 EOF
@@ -61,8 +61,8 @@ cat << EOF
 // Generated by $0
 // DO NOT MODIFY THIS FILE DIRECTLY
 
-#ifndef _ASM_GENERIC_ATOMIC_LONG_H
-#define _ASM_GENERIC_ATOMIC_LONG_H
+#ifndef _LINUX_ATOMIC_LONG_H
+#define _LINUX_ATOMIC_LONG_H
 
 #include <linux/compiler.h>
 #include <asm/types.h>
@@ -98,5 +98,5 @@ done
 cat <<EOF
 
 #endif /* CONFIG_64BIT */
-#endif /* _ASM_GENERIC_ATOMIC_LONG_H */
+#endif /* _LINUX_ATOMIC_LONG_H */
 EOF
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index f776a574224d..5b98a8307693 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -8,9 +8,9 @@ ATOMICTBL=${ATOMICDIR}/atomics.tbl
 LINUXDIR=${ATOMICDIR}/../..
 
 cat <<EOF |
-gen-atomic-instrumented.sh asm-generic/atomic-instrumented.h
-gen-atomic-long.sh asm-generic/atomic-long.h
-gen-atomic-fallback.sh linux/atomic-arch-fallback.h arch_
+gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
+gen-atomic-long.sh linux/atomic/atomic-long.h
+gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
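
To make the template changes above concrete: the generator now hard-codes
the arch_ prefix instead of receiving it as the ${ARCH} command-line
parameter (note the removal of ARCH=$2 and of the trailing arch_ argument
in gen-atomics.sh). Substituting ${atomic}=atomic and ${int}=int into the
inc_and_test and try_cmpxchg templates shown in the diff yields roughly
the following generated code (kernel-doc comments omitted here):

	static __always_inline bool
	arch_atomic_inc_and_test(atomic_t *v)
	{
		return arch_atomic_inc_return(v) == 0;
	}

	#ifndef arch_try_cmpxchg
	#define arch_try_cmpxchg(_ptr, _oldp, _new) \
	({ \
		typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
		___r = arch_cmpxchg((_ptr), ___o, (_new)); \
		if (unlikely(___r != ___o)) \
			*___op = ___r; \
		likely(___r == ___o); \
	})
	#endif /* arch_try_cmpxchg */

The instrumented atomic_inc_and_test() wrapper generated into
linux/atomic/atomic-instrumented.h then forwards to the arch_ variant,
keeping the KASAN/KCSAN instrumentation in exactly one layer, as the
"double instrumentation" comment in the diff describes.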