From nobody Thu Oct 2 10:55:49 2025
Date: Thu, 18 Sep 2025 15:59:26 +0200
In-Reply-To: <20250918140451.1289454-1-elver@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250918140451.1289454-1-elver@google.com>
X-Mailer: git-send-email 2.51.0.384.g4c02a37b29-goog
Message-ID: <20250918140451.1289454-16-elver@google.com>
Subject: [PATCH v3 15/35] srcu: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, "Paul E. McKenney",
 Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Bill Wendling,
 Christoph Hellwig, Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker,
 Greg Kroah-Hartman, Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes,
 Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda,
 Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt,
 Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long,
 kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Add support for Clang's capability analysis for SRCU.

Signed-off-by: Marco Elver <elver@google.com>
---
v3:
 * Switch to DECLARE_LOCK_GUARD_1_ATTRS() (suggested by Peter)
 * Support SRCU being reentrant.
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/srcu.h                  | 60 +++++++++++++------
 include/linux/srcutiny.h              |  4 ++
 include/linux/srcutree.h              |  6 +-
 lib/test_capability-analysis.c        | 24 ++++++++
 5 files changed, 75 insertions(+), 21 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index fdacc7f73da8..779ecb5ec17a 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -82,7 +82,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`).
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index f179700fecaf..6cafaf6dde71 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -21,7 +21,7 @@
 #include
 #include
 
-struct srcu_struct;
+struct_with_capability(srcu_struct, __reentrant_cap);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
@@ -53,7 +53,7 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #define SRCU_READ_FLAVOR_SLOWGP	SRCU_READ_FLAVOR_FAST
 					// Flavors requiring synchronize_rcu()
 					// instead of smp_mb().
-void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
+void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 
 #ifdef CONFIG_TINY_SRCU
 #include
@@ -107,14 +107,16 @@ static inline bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned
 }
 
 #ifdef CONFIG_NEED_SRCU_NMI_SAFE
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #else
 static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	return __srcu_read_lock(ssp);
 }
 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+	__releases_shared(ssp)
 {
 	__srcu_read_unlock(ssp, idx);
 }
@@ -186,6 +188,14 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+/*
+ * No-op helper to denote that ssp must be held. Because SRCU-protected pointers
+ * should still be marked with __rcu_guarded, and we do not want to mark them
+ * with __guarded_by(ssp) as it would complicate annotations for writers, we
+ * choose the following strategy: srcu_dereference_check() calls this helper
+ * that checks that the passed ssp is held, and then fake-acquires 'RCU'.
+ */
+static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { }
 
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
@@ -199,9 +209,15 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * to 1.  The @c argument will normally be a logical expression containing
  * lockdep_is_held() calls.
 */
-#define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
-				(c) || srcu_read_lock_held(ssp), __rcu)
+#define srcu_dereference_check(p, ssp, c)				\
+({									\
+	__srcu_read_lock_must_hold(ssp);				\
+	__acquire_shared_cap(RCU);					\
+	__auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu),	\
+					(c) || srcu_read_lock_held(ssp), __rcu); \
+	__release_shared_cap(RCU);					\
+	__v;								\
+})
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
@@ -244,7 +260,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * invoke srcu_read_unlock() from one task and the matching srcu_read_lock()
  * from another.
  */
-static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -271,7 +288,8 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 * where RCU is watching, that is, from contexts where it would be legal
 * to invoke rcu_read_lock().  Otherwise, lockdep will complain.
 */
-static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp)
+static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *retval;
 
@@ -292,7 +310,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct
 * The same srcu_struct may be used concurrently by srcu_down_read_fast()
 * and srcu_read_lock_fast().
*/ -static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_st= ruct *ssp) __acquires(ssp) +static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_st= ruct *ssp) __acquires_shared(ssp) { WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi()); srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST); @@ -310,7 +328,8 @@ static inline struct srcu_ctr __percpu *srcu_down_read_= fast(struct srcu_struct * * then none of the other flavors may be used, whether before, during, * or after. */ -static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquir= es(ssp) +static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) + __acquires_shared(ssp) { int retval; =20 @@ -322,7 +341,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_st= ruct *ssp) __acquires(ssp =20 /* Used by tracing, cannot be traced and cannot invoke lockdep. */ static inline notrace int -srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp) +srcu_read_lock_notrace(struct srcu_struct *ssp) + __acquires_shared(ssp) { int retval; =20 @@ -353,7 +373,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acqui= res(ssp) * which calls to down_read() may be nested. The same srcu_struct may be * used concurrently by srcu_down_read() and srcu_read_lock(). */ -static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp) +static inline int srcu_down_read(struct srcu_struct *ssp) + __acquires_shared(ssp) { WARN_ON_ONCE(in_nmi()); srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL); @@ -368,7 +389,7 @@ static inline int srcu_down_read(struct srcu_struct *ss= p) __acquires(ssp) * Exit an SRCU read-side critical section. 
 */
 static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -384,7 +405,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 * Exit a light-weight SRCU read-side critical section.
 */
 static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
 	srcu_lock_release(&ssp->dep_map);
@@ -400,7 +421,7 @@ static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ct
 * the same context as the maching srcu_down_read_fast().
 */
 static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
@@ -415,7 +436,7 @@ static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __
 * Exit an SRCU read-side critical section, but in an NMI-safe manner.
 */
 static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI);
@@ -425,7 +446,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
-srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
+srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
 	__srcu_read_unlock(ssp, idx);
@@ -440,7 +461,7 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 * the same context as the maching srcu_down_read().
 */
 static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	WARN_ON_ONCE(in_nmi());
@@ -480,6 +501,7 @@ DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
 		    _T->idx = srcu_read_lock(_T->lock),
 		    srcu_read_unlock(_T->lock, _T->idx),
 		    int idx)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu, __assumes_cap(_T), /* */)
 
 DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct,
 		    _T->scp = srcu_read_lock_fast(_T->lock),

diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 51ce25f07930..c194b3c7c43b 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -61,6 +61,7 @@ void synchronize_srcu(struct srcu_struct *ssp);
 * index that must be passed to the matching srcu_read_unlock().
 */
 static inline int __srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int idx;
 
@@ -68,6 +69,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
 	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
 	WRITE_ONCE(ssp->srcu_lock_nesting[idx], READ_ONCE(ssp->srcu_lock_nesting[idx]) + 1);
 	preempt_enable();
+	__acquire_shared(ssp);
 	return idx;
 }
 
@@ -84,11 +86,13 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
 }
 
 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
 }
 
 static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
 {
 	__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
 }

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index bf44d8d1e69e..43754472e07a 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -207,7 +207,7 @@ struct srcu_struct {
 #define DEFINE_SRCU(name)		__DEFINE_SRCU(name, /* not static */)
 #define DEFINE_STATIC_SRCU(name)	__DEFINE_SRCU(name, static)
 
-int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
+int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp);
 void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
@@ -241,6 +241,7 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
 * implementations of this_cpu_inc().
 */
 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
 
@@ -250,6 +251,7 @@ static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct
 	else
 		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); /* Z */
 	barrier();  /* Avoid leaking the critical section. */
+	__acquire_shared(ssp);
 	return scp;
 }
 
@@ -269,7 +271,9 @@ static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct
 * implementations of this_cpu_inc().
 */
 static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
 {
+	__release_shared(ssp);
 	barrier();  /* Avoid leaking the critical section. */
 	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
 		this_cpu_inc(scp->srcu_unlocks.counter); /* Z */

diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 31c9bc1e2405..5b17fd94f31e 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 /*
 * Test that helper macros work as expected.
@@ -362,3 +363,26 @@ static void __used test_rcu_assert_variants(void)
 	lockdep_assert_in_rcu_read_lock_sched();
 	wants_rcu_held_sched();
 }
+
+struct test_srcu_data {
+	struct srcu_struct srcu;
+	long __rcu_guarded *data;
+};
+
+static void __used test_srcu(struct test_srcu_data *d)
+{
+	init_srcu_struct(&d->srcu);
+
+	int idx = srcu_read_lock(&d->srcu);
+	long *data = srcu_dereference(d->data, &d->srcu);
+	(void)data;
+	srcu_read_unlock(&d->srcu, idx);
+
+	rcu_assign_pointer(d->data, NULL);
+}
+
+static void __used test_srcu_guard(struct test_srcu_data *d)
+{
+	guard(srcu)(&d->srcu);
+	(void)srcu_dereference(d->data, &d->srcu);
+}
-- 
2.51.0.384.g4c02a37b29-goog