Date: Thu, 18 Sep 2025 15:59:22 +0200
In-Reply-To: <20250918140451.1289454-1-elver@google.com>
References: <20250918140451.1289454-1-elver@google.com>
Message-ID: <20250918140451.1289454-12-elver@google.com>
Subject: [PATCH v3 11/35] locking/seqlock: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, "Paul E. McKenney",
    Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Bill Wendling,
    Christoph Hellwig, Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker,
    Greg Kroah-Hartman, Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes,
    Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda,
    Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
    Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt,
    Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long,
    kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org,
    llvm@lists.linux.dev, rcu@vger.kernel.org

Add support for Clang's capability analysis for seqlock_t.

Signed-off-by: Marco Elver <elver@google.com>
---
v3:
* __assert -> __assume rename
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/seqlock.h               | 24 +++++++++++
 include/linux/seqlock_types.h         |  5 ++-
 lib/test_capability-analysis.c        | 43 +++++++++++++++++++
 4 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 89f9c991f7cf..4789de7b019a 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -81,7 +81,7 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 5ce48eab7a2a..2c7a02b727de 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -816,6 +816,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
 	do {							\
 		spin_lock_init(&(sl)->lock);			\
 		seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \
+		__assume_cap(sl);				\
 	} while (0)
 
 /**
@@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
  * Return: count, to be passed to read_seqretry()
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	return read_seqcount_begin(&sl->seqcount);
 }
@@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
  * Return: true if a read section retry is required, else false
  */
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+	__releases_shared(sl) __no_capability_analysis
 {
 	return read_seqcount_retry(&sl->seqcount, start);
 }
@@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
  * _irqsave or _bh variants of this function instead.
  */
 static inline void write_seqlock(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	spin_lock(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl)
  * critical section of given seqlock_t.
  */
 static inline void write_sequnlock(seqlock_t *sl)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock(&sl->lock);
@@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl)
  * other write side sections, can be invoked from softirq contexts.
  */
 static inline void write_seqlock_bh(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	spin_lock_bh(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
  * write_seqlock_bh().
 */
 static inline void write_sequnlock_bh(seqlock_t *sl)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_bh(&sl->lock);
@@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
  * other write sections, can be invoked from hardirq contexts.
  */
 static inline void write_seqlock_irq(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	spin_lock_irq(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl)
  * seqlock_t write side section opened with write_seqlock_irq().
  */
 static inline void write_sequnlock_irq(seqlock_t *sl)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irq(&sl->lock);
 }
 
 static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	unsigned long flags;
 
@@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
  */
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irqrestore(&sl->lock, flags);
@@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
  * The opened read section must be closed with read_sequnlock_excl().
  */
 static inline void read_seqlock_excl(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	spin_lock(&sl->lock);
 }
@@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl(seqlock_t *sl)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock(&sl->lock);
 }
@@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl)
  * from softirq contexts.
  */
 static inline void read_seqlock_excl_bh(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	spin_lock_bh(&sl->lock);
 }
@@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_bh(seqlock_t *sl)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock_bh(&sl->lock);
 }
@@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl)
  * hardirq context.
  */
 static inline void read_seqlock_excl_irq(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	spin_lock_irq(&sl->lock);
 }
@@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_irq(seqlock_t *sl)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock_irq(&sl->lock);
 }
 
 static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	unsigned long flags;
 
@@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
  */
 static inline void
 read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
@@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
  * parameter of the next read_seqbegin_or_lock() iteration.
  */
 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_capability_analysis
 {
 	if (!(*seq & 1)) /* Even */
 		*seq = read_seqbegin(lock);
@@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
  * Return: true if a read section retry is required, false otherwise
  */
 static inline int need_seqretry(seqlock_t *lock, int seq)
+	__releases_shared(lock) __no_capability_analysis
 {
 	return !(seq & 1) && read_seqretry(lock, seq);
 }
@@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq)
  * with read_seqbegin_or_lock() and validated by need_seqretry().
  */
 static inline void done_seqretry(seqlock_t *lock, int seq)
+	__no_capability_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl(lock);
@@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq)
  */
 static inline unsigned long
 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_capability_analysis
 {
 	unsigned long flags = 0;
 
@@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
  */
 static inline void
 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
+	__no_capability_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl_irqrestore(lock, flags);
diff --git a/include/linux/seqlock_types.h b/include/linux/seqlock_types.h
index dfdf43e3fa3d..9775d6f1a234 100644
--- a/include/linux/seqlock_types.h
+++ b/include/linux/seqlock_types.h
@@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  * - Comments on top of seqcount_t
  * - Documentation/locking/seqlock.rst
  */
-typedef struct {
+struct_with_capability(seqlock) {
 	/*
 	 * Make sure that readers don't starve writers on PREEMPT_RT: use
 	 * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK().
 	 */
 	seqcount_spinlock_t seqcount;
 	spinlock_t lock;
-} seqlock_t;
+};
+typedef struct seqlock seqlock_t;
 
 #endif /* __LINUX_SEQLOCK_TYPES_H */
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 286723b47328..74d287740bb8 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -6,6 +6,7 @@
 
 #include <linux/build_bug.h>
 #include <linux/mutex.h>
+#include <linux/seqlock.h>
 #include <linux/spinlock.h>
 
 /*
@@ -208,3 +209,45 @@ static void __used test_mutex_cond_guard(struct test_mutex_data *d)
 		d->counter++;
 	}
 }
+
+struct test_seqlock_data {
+	seqlock_t sl;
+	int counter __guarded_by(&sl);
+};
+
+static void __used test_seqlock_init(struct test_seqlock_data *d)
+{
+	seqlock_init(&d->sl);
+	d->counter = 0;
+}
+
+static void __used test_seqlock_reader(struct test_seqlock_data *d)
+{
+	unsigned int seq;
+
+	do {
+		seq = read_seqbegin(&d->sl);
+		(void)d->counter;
+	} while (read_seqretry(&d->sl, seq));
+}
+
+static void __used test_seqlock_writer(struct test_seqlock_data *d)
+{
+	unsigned long flags;
+
+	write_seqlock(&d->sl);
+	d->counter++;
+	write_sequnlock(&d->sl);
+
+	write_seqlock_irq(&d->sl);
+	d->counter++;
+	write_sequnlock_irq(&d->sl);
+
+	write_seqlock_bh(&d->sl);
+	d->counter++;
+	write_sequnlock_bh(&d->sl);
+
+	write_seqlock_irqsave(&d->sl, flags);
+	d->counter++;
+	write_sequnlock_irqrestore(&d->sl, flags);
+}
-- 
2.51.0.384.g4c02a37b29-goog
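
For readers following the series, a minimal sketch of how the new annotations
are intended to be used by seqlock_t users, mirroring the test cases added in
lib/test_capability-analysis.c above. The struct, field, and function names
(stats, packets, stats_inc, stats_read) are purely illustrative and not part
of this patch:

#include <linux/seqlock.h>

/* A seqlock_t-protected field, annotated so the capability analysis
 * knows which capability guards it. */
struct stats {
	seqlock_t sl;
	u64 packets __guarded_by(&sl);
};

/* Writer side: write_seqlock()/write_sequnlock() acquire and release the
 * capability exclusively, so modifying the guarded field is accepted. */
static void stats_inc(struct stats *s)
{
	write_seqlock(&s->sl);
	s->packets++;
	write_sequnlock(&s->sl);
}

/* Reader side: the read_seqbegin()/read_seqretry() pair acquires and
 * releases the capability shared, so reads of the guarded field inside
 * the retry loop are accepted. */
static u64 stats_read(struct stats *s)
{
	unsigned int seq;
	u64 val;

	do {
		seq = read_seqbegin(&s->sl);
		val = s->packets;
	} while (read_seqretry(&s->sl, seq));

	return val;
}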