Date: Mon, 05 Jan 2026 15:54:36 -0000
From: "tip-bot2 for Marco Elver"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: locking/core] locking/seqlock: Support Clang's context analysis
Cc: Marco Elver, "Peter Zijlstra (Intel)", x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20251219154418.3592607-12-elver@google.com>
References: <20251219154418.3592607-12-elver@google.com>
Message-ID: <176762847665.510.9431094941065565293.tip-bot2@tip-bot2>

The following commit has been merged into the locking/core branch of tip:

Commit-ID:     8f8a55f49cda5fee914bbea1ab5af8df3a6ba8af
Gitweb:        https://git.kernel.org/tip/8f8a55f49cda5fee914bbea1ab5af8df3a6ba8af
Author:        Marco Elver
AuthorDate:    Fri, 19 Dec 2025 16:40:00 +01:00
Committer:     Peter Zijlstra
CommitterDate: Mon, 05 Jan 2026 16:43:29 +01:00

locking/seqlock: Support Clang's context analysis

Add support for Clang's context analysis for seqlock_t.

Signed-off-by: Marco Elver
Signed-off-by: Peter Zijlstra (Intel)
Link: https://patch.msgid.link/20251219154418.3592607-12-elver@google.com
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/seqlock.h                      | 38 +++++++++++++-
 include/linux/seqlock_types.h                |  5 +-
 lib/test_context-analysis.c                  | 50 +++++++++++++++++++-
 4 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 1864b6c..6905659 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -79,7 +79,7 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`.
 
 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 2211236..1133209 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -816,6 +816,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
 	do {								\
 		spin_lock_init(&(sl)->lock);				\
 		seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock);	\
+		__assume_ctx_lock(sl);					\
 	} while (0)
 
 /**
@@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
  * Return: count, to be passed to read_seqretry()
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	return read_seqcount_begin(&sl->seqcount);
 }
@@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
  * Return: true if a read section retry is required, else false
  */
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+	__releases_shared(sl) __no_context_analysis
 {
 	return read_seqcount_retry(&sl->seqcount, start);
 }
@@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
  * _irqsave or _bh variants of this function instead.
  */
 static inline void write_seqlock(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	spin_lock(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl)
  * critical section of given seqlock_t.
  */
 static inline void write_sequnlock(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock(&sl->lock);
@@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl)
  * other write side sections, can be invoked from softirq contexts.
  */
 static inline void write_seqlock_bh(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	spin_lock_bh(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
  * write_seqlock_bh().
  */
 static inline void write_sequnlock_bh(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_bh(&sl->lock);
@@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
  * other write sections, can be invoked from hardirq contexts.
  */
 static inline void write_seqlock_irq(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	spin_lock_irq(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl)
  * seqlock_t write side section opened with write_seqlock_irq().
  */
 static inline void write_sequnlock_irq(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irq(&sl->lock);
 }
 
 static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	unsigned long flags;
 
@@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
  */
 static inline void write_sequnlock_irqrestore(seqlock_t *sl,
 					      unsigned long flags)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irqrestore(&sl->lock, flags);
@@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
  * The opened read section must be closed with read_sequnlock_excl().
  */
 static inline void read_seqlock_excl(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	spin_lock(&sl->lock);
 }
@@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock(&sl->lock);
 }
@@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl)
  * from softirq contexts.
  */
 static inline void read_seqlock_excl_bh(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	spin_lock_bh(&sl->lock);
 }
@@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_bh(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock_bh(&sl->lock);
 }
@@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl)
  * hardirq context.
  */
 static inline void read_seqlock_excl_irq(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	spin_lock_irq(&sl->lock);
 }
@@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_irq(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock_irq(&sl->lock);
 }
 
 static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	unsigned long flags;
 
@@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
  */
 static inline void read_sequnlock_excl_irqrestore(seqlock_t *sl,
 						  unsigned long flags)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
@@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
  * parameter of the next read_seqbegin_or_lock() iteration.
  */
 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_context_analysis
 {
 	if (!(*seq & 1))	/* Even */
 		*seq = read_seqbegin(lock);
@@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
  * Return: true if a read section retry is required, false otherwise
  */
 static inline int need_seqretry(seqlock_t *lock, int seq)
+	__releases_shared(lock) __no_context_analysis
 {
 	return !(seq & 1) && read_seqretry(lock, seq);
 }
@@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq)
  * with read_seqbegin_or_lock() and validated by need_seqretry().
  */
 static inline void done_seqretry(seqlock_t *lock, int seq)
+	__no_context_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl(lock);
@@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq)
  */
 static inline unsigned long
 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_context_analysis
 {
 	unsigned long flags = 0;
 
@@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
  */
 static inline void
 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
+	__no_context_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl_irqrestore(lock, flags);
@@ -1225,6 +1249,7 @@ struct ss_tmp {
 };
 
 static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
+	__no_context_analysis
 {
 	if (sst->lock)
 		spin_unlock(sst->lock);
@@ -1254,6 +1279,7 @@ extern void __scoped_seqlock_bug(void);
 
 static __always_inline void
 __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
+	__no_context_analysis
 {
 	switch (sst->state) {
 	case ss_done:
@@ -1296,9 +1322,19 @@ __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
 	}
 }
 
+/*
+ * Context analysis no-op helper to release seqlock at the end of the for-scope;
+ * the alias analysis of the compiler will recognize that the pointer @s is an
+ * alias to @_seqlock passed to read_seqbegin(_seqlock) below.
+ */
+static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s)
+	__releases_shared(*((seqlock_t **)s)) __no_context_analysis {}
+
 #define __scoped_seqlock_read(_seqlock, _target, _s)			\
 	for (struct ss_tmp _s __cleanup(__scoped_seqlock_cleanup) =	\
-		{ .state = ss_lockless, .data = read_seqbegin(_seqlock) };	\
+		{ .state = ss_lockless, .data = read_seqbegin(_seqlock) },	\
+	     *__UNIQUE_ID(ctx) __cleanup(__scoped_seqlock_cleanup_ctx) =\
+		(struct ss_tmp *)_seqlock;				\
 	     _s.state != ss_done;					\
 	     __scoped_seqlock_next(&_s, _seqlock, _target))
 
diff --git a/include/linux/seqlock_types.h b/include/linux/seqlock_types.h
index dfdf43e..2d5d793 100644
--- a/include/linux/seqlock_types.h
+++ b/include/linux/seqlock_types.h
@@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  * - Comments on top of seqcount_t
  * - Documentation/locking/seqlock.rst
  */
-typedef struct {
+context_lock_struct(seqlock) {
 	/*
 	 * Make sure that readers don't starve writers on PREEMPT_RT: use
 	 * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK().
 	 */
 	seqcount_spinlock_t seqcount;
 	spinlock_t lock;
-} seqlock_t;
+};
+typedef struct seqlock seqlock_t;
 
 #endif /* __LINUX_SEQLOCK_TYPES_H */
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 2b28d20..53abea0 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include <linux/seqlock.h>
 #include
 
 /*
@@ -208,3 +209,52 @@ static void __used test_mutex_cond_guard(struct test_mutex_data *d)
 		d->counter++;
 	}
 }
+
+struct test_seqlock_data {
+	seqlock_t sl;
+	int counter __guarded_by(&sl);
+};
+
+static void __used test_seqlock_init(struct test_seqlock_data *d)
+{
+	seqlock_init(&d->sl);
+	d->counter = 0;
+}
+
+static void __used test_seqlock_reader(struct test_seqlock_data *d)
+{
+	unsigned int seq;
+
+	do {
+		seq = read_seqbegin(&d->sl);
+		(void)d->counter;
+	} while (read_seqretry(&d->sl, seq));
+}
+
+static void __used test_seqlock_writer(struct test_seqlock_data *d)
+{
+	unsigned long flags;
+
+	write_seqlock(&d->sl);
+	d->counter++;
+	write_sequnlock(&d->sl);
+
+	write_seqlock_irq(&d->sl);
+	d->counter++;
+	write_sequnlock_irq(&d->sl);
+
+	write_seqlock_bh(&d->sl);
+	d->counter++;
+	write_sequnlock_bh(&d->sl);
+
+	write_seqlock_irqsave(&d->sl, flags);
+	d->counter++;
+	write_sequnlock_irqrestore(&d->sl, flags);
+}
+
+static void __used test_seqlock_scoped(struct test_seqlock_data *d)
+{
+	scoped_seqlock_read (&d->sl, ss_lockless) {
+		(void)d->counter;
+	}
+}
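
[Editor's note: for readers unfamiliar with the feature, below is a minimal
sketch of the kind of misuse these annotations let the compiler catch once
seqlock_t is a recognized context lock. It assumes diagnostics analogous to
the existing spinlock/mutex support; `example_data`, `bad_writer()` and
`good_writer()` are hypothetical and not part of this patch.]

	struct example_data {
		seqlock_t sl;
		int counter __guarded_by(&sl);	/* counter is guarded by sl */
	};

	static void bad_writer(struct example_data *d)
	{
		/* Expected to trigger a context-analysis warning: the
		 * guarded member is written without holding 'sl'. */
		d->counter++;
	}

	static void good_writer(struct example_data *d)
	{
		/* write_seqlock() is annotated __acquires(sl), so the
		 * analysis knows 'sl' is held across the increment. */
		write_seqlock(&d->sl);
		d->counter++;
		write_sequnlock(&d->sl);	/* __releases(sl) */
	}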