From nobody Mon Feb 9 05:40:21 2026
Date: Fri, 19 Dec 2025 16:40:03 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20251219154418.3592607-1-elver@google.com>
X-Mailer: git-send-email 2.52.0.322.g1dd061c0dc-goog
Message-ID: <20251219154418.3592607-15-elver@google.com>
Subject: [PATCH v5 14/36] rcu: Support Clang's context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, Chris Li, "Paul E. McKenney", Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Christoph Hellwig, Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker, Greg Kroah-Hartman, Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes, Johannes Berg, Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda, Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda, Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt, Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long, kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org

Improve the existing annotations to properly support Clang's context
analysis. The old annotations distinguished between RCU, RCU_BH, and
RCU_SCHED. However, to express "holds the RCU read lock" without caring
whether the normal, _bh(), or _sched() variant was used, the distinction
between the latter variants has to be relaxed: change the _bh() and
_sched() variants to also acquire "RCU".
When (and if) we introduce context locks to denote more generally that
"IRQ", "BH", or "PREEMPT" contexts are disabled, it would make sense to
acquire those instead of RCU_BH and RCU_SCHED respectively.

The above change also simplifies introducing __guarded_by support, where
only the "RCU" context lock needs to be held: introduce __rcu_guarded,
with which Clang's context analysis warns if a pointer is dereferenced
without any of the RCU locks held, or updated without the appropriate
helpers.

The primitives rcu_assign_pointer() and friends are wrapped with
context_unsafe(); together with __rcu_guarded, the analysis enforces
that RCU-protected pointers are only updated through these helpers.

Signed-off-by: Marco Elver
Acked-by: Paul E. McKenney
---
v5:
 * Rename "context guard" -> "context lock".

v3:
 * Properly support reentrancy via new compiler support.

v2:
 * Reword commit message and point out reentrancy caveat.
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/rcupdate.h                     | 77 ++++++++++++------
 lib/test_context-analysis.c                  | 85 ++++++++++++++++++++
 3 files changed, 139 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index b2d69fb4a884..3bc72f71fe25 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`.
+`bit_spinlock`, RCU.
 
 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index c5b30054cd01..50e63eade019 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -31,6 +31,16 @@
 #include
 #include
 
+token_context_lock(RCU, __reentrant_ctx_lock);
+token_context_lock_instance(RCU, RCU_SCHED);
+token_context_lock_instance(RCU, RCU_BH);
+
+/*
+ * A convenience macro that can be used for RCU-protected globals or struct
+ * members; adds type qualifier __rcu, and also enforces __guarded_by(RCU).
+ */
+#define __rcu_guarded __rcu __guarded_by(RCU)
+
 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
 
@@ -425,7 +435,8 @@ static inline void rcu_preempt_sleep_check(void) { }
 
 // See RCU_LOCKDEP_WARN() for an explanation of the double call to
 // debug_lockdep_rcu_enabled().
-static inline bool lockdep_assert_rcu_helper(bool c)
+static inline bool lockdep_assert_rcu_helper(bool c, const struct __ctx_lock_RCU *ctx)
+	__assumes_shared_ctx_lock(RCU) __assumes_shared_ctx_lock(ctx)
 {
 	return debug_lockdep_rcu_enabled() &&
 	       (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
@@ -438,7 +449,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * Splats if lockdep is enabled and there is no rcu_read_lock() in effect.
 */
 #define lockdep_assert_in_rcu_read_lock() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU))
 
 /**
 * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
@@ -448,7 +459,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * actual rcu_read_lock_bh() is required.
 */
 #define lockdep_assert_in_rcu_read_lock_bh() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), RCU_BH))
 
 /**
 * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched()
@@ -458,7 +469,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * instead an actual rcu_read_lock_sched() is required.
 */
 #define lockdep_assert_in_rcu_read_lock_sched() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map), RCU_SCHED))
 
 /**
 * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
@@ -476,17 +487,17 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) &&	\
 					       !lock_is_held(&rcu_bh_lock_map) && \
 					       !lock_is_held(&rcu_sched_lock_map) && \
-					       preemptible()))
+					       preemptible(), RCU))
 
 #else /* #ifdef CONFIG_PROVE_RCU */
 
 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
 #define rcu_sleep_check() do { } while (0)
 
-#define lockdep_assert_in_rcu_read_lock() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_bh() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_sched() do { } while (0)
-#define lockdep_assert_in_rcu_reader() do { } while (0)
+#define lockdep_assert_in_rcu_read_lock() __assume_shared_ctx_lock(RCU)
+#define lockdep_assert_in_rcu_read_lock_bh() __assume_shared_ctx_lock(RCU_BH)
+#define lockdep_assert_in_rcu_read_lock_sched() __assume_shared_ctx_lock(RCU_SCHED)
+#define lockdep_assert_in_rcu_reader() __assume_shared_ctx_lock(RCU)
 
 #endif /* #else #ifdef CONFIG_PROVE_RCU */
 
@@ -506,11 +517,11 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 #endif /* #else #ifdef __CHECKER__ */
 
 #define __unrcu_pointer(p, local) \
-({ \
+context_unsafe( \
 	typeof(*p) *local = (typeof(*p) *__force)(p); \
 	rcu_check_sparse(p, __rcu); \
-	((typeof(*p) __force __kernel *)(local)); \
-})
+	((typeof(*p) __force __kernel *)(local)) \
+)
 /**
 * unrcu_pointer - mark a pointer as not being RCU protected
 * @p: pointer needing to lose its __rcu property
@@ -586,7 +597,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * other macros that it invokes.
 */
 #define rcu_assign_pointer(p, v) \
-do { \
+context_unsafe( \
 	uintptr_t _r_a_p__v = (uintptr_t)(v); \
 	rcu_check_sparse(p, __rcu); \
 	\
@@ -594,7 +605,7 @@ do { \
 		WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \
 	else \
 		smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-} while (0)
+)
 
 /**
 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
@@ -861,9 +872,10 @@ do { \
 * only when acquiring spinlocks that are subject to priority inheritance.
 */
 static __always_inline void rcu_read_lock(void)
+	__acquires_shared(RCU)
 {
 	__rcu_read_lock();
-	__acquire(RCU);
+	__acquire_shared(RCU);
 	rcu_lock_acquire(&rcu_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock() used illegally while idle");
@@ -891,11 +903,12 @@ static __always_inline void rcu_read_lock(void)
 * See rcu_read_lock() for more information.
 */
 static inline void rcu_read_unlock(void)
+	__releases_shared(RCU)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock() used illegally while idle");
 	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
-	__release(RCU);
+	__release_shared(RCU);
 	__rcu_read_unlock();
 }
 
@@ -914,9 +927,11 @@ static inline void rcu_read_unlock(void)
 * was invoked from some other task.
 */
 static inline void rcu_read_lock_bh(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_BH)
 {
 	local_bh_disable();
-	__acquire(RCU_BH);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_BH);
 	rcu_lock_acquire(&rcu_bh_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_bh() used illegally while idle");
@@ -928,11 +943,13 @@ static inline void rcu_read_lock_bh(void)
 * See rcu_read_lock_bh() for more information.
 */
 static inline void rcu_read_unlock_bh(void)
+	__releases_shared(RCU) __releases_shared(RCU_BH)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock_bh() used illegally while idle");
 	rcu_lock_release(&rcu_bh_lock_map);
-	__release(RCU_BH);
+	__release_shared(RCU_BH);
+	__release_shared(RCU);
 	local_bh_enable();
 }
 
@@ -952,9 +969,11 @@ static inline void rcu_read_unlock_bh(void)
 * rcu_read_lock_sched() was invoked from an NMI handler.
 */
 static inline void rcu_read_lock_sched(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
 	preempt_disable();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
 	rcu_lock_acquire(&rcu_sched_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_sched() used illegally while idle");
@@ -962,9 +981,11 @@ static inline void rcu_read_lock_sched(void)
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_lock_sched_notrace(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
 	preempt_disable_notrace();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
 }
 
 /**
@@ -973,22 +994,27 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
 * See rcu_read_lock_sched() for more information.
 */
 static inline void rcu_read_unlock_sched(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock_sched() used illegally while idle");
 	rcu_lock_release(&rcu_sched_lock_map);
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
 	preempt_enable();
 }
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_unlock_sched_notrace(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
 	preempt_enable_notrace();
 }
 
 static __always_inline void rcu_read_lock_dont_migrate(void)
+	__acquires_shared(RCU)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
 		migrate_disable();
@@ -996,6 +1022,7 @@ static __always_inline void rcu_read_lock_dont_migrate(void)
 }
 
 static inline void rcu_read_unlock_migrate(void)
+	__releases_shared(RCU)
 {
 	rcu_read_unlock();
 	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
@@ -1041,10 +1068,10 @@ static inline void rcu_read_unlock_migrate(void)
 * ordering guarantees for either the CPU or the compiler.
 */
 #define RCU_INIT_POINTER(p, v) \
-	do { \
+	context_unsafe( \
 		rcu_check_sparse(p, __rcu); \
 		WRITE_ONCE(p, RCU_INITIALIZER(v)); \
-	} while (0)
+	)
 
 /**
 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
@@ -1206,4 +1233,6 @@ DEFINE_LOCK_GUARD_0(rcu,
 	} while (0),
 	rcu_read_unlock())
 
+DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU))
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index be0c5d462a48..559df32fb5f8 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -284,3 +285,87 @@ static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
 		bit_spin_unlock(3, &d->bits);
 	}
 }
+
+/*
+ * Test that we can mark a variable guarded by RCU, and we can dereference and
+ * write to the pointer with RCU's primitives.
+ */
+struct test_rcu_data {
+	long __rcu_guarded *data;
+};
+
+static void __used test_rcu_guarded_reader(struct test_rcu_data *d)
+{
+	rcu_read_lock();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_guard(struct test_rcu_data *d)
+{
+	guard(rcu)();
+	(void)rcu_dereference(d->data);
+}
+
+static void __used test_rcu_guarded_updater(struct test_rcu_data *d)
+{
+	rcu_assign_pointer(d->data, NULL);
+	RCU_INIT_POINTER(d->data, NULL);
+	(void)unrcu_pointer(d->data);
+}
+
+static void wants_rcu_held(void) __must_hold_shared(RCU) { }
+static void wants_rcu_held_bh(void) __must_hold_shared(RCU_BH) { }
+static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { }
+
+static void __used test_rcu_lock_variants(void)
+{
+	rcu_read_lock();
+	wants_rcu_held();
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	wants_rcu_held_bh();
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	wants_rcu_held_sched();
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_lock_reentrant(void)
+{
+	rcu_read_lock();
+	rcu_read_lock();
+	rcu_read_lock_bh();
+	rcu_read_lock_bh();
+	rcu_read_lock_sched();
+	rcu_read_lock_sched();
+
+	rcu_read_unlock_sched();
+	rcu_read_unlock_sched();
+	rcu_read_unlock_bh();
+	rcu_read_unlock_bh();
+	rcu_read_unlock();
+	rcu_read_unlock();
+}
+
+static void __used test_rcu_assert_variants(void)
+{
+	lockdep_assert_in_rcu_read_lock();
+	wants_rcu_held();
+
+	lockdep_assert_in_rcu_read_lock_bh();
+	wants_rcu_held_bh();
+
+	lockdep_assert_in_rcu_read_lock_sched();
+	wants_rcu_held_sched();
+}
-- 
2.52.0.322.g1dd061c0dc-goog