[PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Marco Elver 3 weeks, 2 days ago
When a lock is initialized (e.g. mutex_init()), we assume/assert that
the context lock is held to allow initialization of guarded members
within the same scope.

However, this previously prevented actually acquiring the lock within
that same scope, as the analyzer would report a double-lock warning:

  mutex_init(&mtx);
  ...
  mutex_lock(&mtx); // acquiring mutex 'mtx' that is already held

To fix (without new init+lock APIs), we can tell the analysis to treat
the "held" context lock resulting from initialization as reentrant,
allowing subsequent acquisitions to succeed.

To do so *only* within the initialization scope, we can cast the lock
pointer to any reentrant type for the init assume/assert. Introduce a
generic reentrant context lock type `struct __ctx_lock_init` and add
`__inits_ctx_lock()` that casts the lock pointer to this type before
assuming/asserting it.

This ensures that the initial "held" state is reentrant, allowing
patterns like:

  mutex_init(&lock);
  ...
  mutex_lock(&lock);

to compile without false positives, and avoids having to make all
context lock types reentrant outside an initialization scope.
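
For illustration, the full pattern this enables, mirroring the updated
test in lib/test_context-analysis.c (struct shown abbreviated):

  struct test_mutex_data {
          struct mutex mtx;
          int counter __guarded_by(mtx);
  };

  static void test_mutex_init(struct test_mutex_data *d)
  {
          mutex_init(&d->mtx);    /* analysis assumes 'mtx' is held */
          d->counter = 0;         /* OK: guarded member initialization */

          mutex_lock(&d->mtx);    /* OK: initial 'held' state is reentrant */
          mutex_unlock(&d->mtx);
  }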

The caveat here is missing real double-lock bugs right after init scope.
However, this is a classic trade-off of avoiding false positives against
(unlikely) false negatives.

Longer-term, Peter suggested creating scoped init-guards [1], which
will both fix the issue in a more robust way and also denote clearly
where initialization starts and ends. However, that requires new APIs,
and won't help bridge the gap for code that just wants to opt into the
analysis with as few other changes as possible (as suggested in [2]).

Link: https://lore.kernel.org/all/20251212095943.GM3911114@noisy.programming.kicks-ass.net/ [1]
Link: https://lore.kernel.org/all/57062131-e79e-42c2-aa0b-8f931cb8cac2@acm.org/ [2]
Reported-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Marco Elver <elver@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/compiler-context-analysis.h | 12 ++++++++++++
 include/linux/local_lock_internal.h       |  6 +++---
 include/linux/mutex.h                     |  2 +-
 include/linux/rwlock.h                    |  4 ++--
 include/linux/rwlock_rt.h                 |  2 +-
 include/linux/rwsem.h                     |  4 ++--
 include/linux/seqlock.h                   |  2 +-
 include/linux/spinlock.h                  |  8 ++++----
 include/linux/spinlock_rt.h               |  2 +-
 include/linux/ww_mutex.h                  |  2 +-
 lib/test_context-analysis.c               |  3 +++
 11 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
index e86b8a3c2f89..89e893e47bb7 100644
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -43,6 +43,14 @@
 # define __assumes_ctx_lock(...)		__attribute__((assert_capability(__VA_ARGS__)))
 # define __assumes_shared_ctx_lock(...)	__attribute__((assert_shared_capability(__VA_ARGS__)))
 
+/*
+ * Generic reentrant context lock type that we cast to when initializing context
+ * locks with __assumes_ctx_lock(), so that we can support guarded member
+ * initialization, but also immediate use after initialization.
+ */
+struct __ctx_lock_type(init_generic) __reentrant_ctx_lock __ctx_lock_init;
+# define __inits_ctx_lock(var) __assumes_ctx_lock((const struct __ctx_lock_init *)(var))
+
 /**
  * __guarded_by - struct member and globals attribute, declares variable
  *                only accessible within active context
@@ -120,6 +128,8 @@
 		__attribute__((overloadable)) __assumes_ctx_lock(var) { }				\
 	static __always_inline void __assume_shared_ctx_lock(const struct name *var)			\
 		__attribute__((overloadable)) __assumes_shared_ctx_lock(var) { }			\
+	static __always_inline void __init_ctx_lock(const struct name *var)				\
+		__attribute__((overloadable)) __inits_ctx_lock(var) { }					\
 	struct name
 
 /**
@@ -162,6 +172,7 @@
 # define __releases_shared_ctx_lock(...)
 # define __assumes_ctx_lock(...)
 # define __assumes_shared_ctx_lock(...)
+# define __inits_ctx_lock(var)
 # define __returns_ctx_lock(var)
 # define __guarded_by(...)
 # define __pt_guarded_by(...)
@@ -176,6 +187,7 @@
 # define __release_shared_ctx_lock(var)		do { } while (0)
 # define __assume_ctx_lock(var)			do { (void)(var); } while (0)
 # define __assume_shared_ctx_lock(var)			do { (void)(var); } while (0)
+# define __init_ctx_lock(var)			do { (void)(var); } while (0)
 # define context_lock_struct(name, ...)		struct __VA_ARGS__ name
 # define disable_context_analysis()
 # define enable_context_analysis()
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 7843ab9059c2..53f44719db73 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -86,13 +86,13 @@ do {								\
 			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
 			      LD_LOCK_PERCPU);			\
 	local_lock_debug_init(lock);				\
-	__assume_ctx_lock(lock);				\
+	__init_ctx_lock(lock);					\
 } while (0)
 
 #define __local_trylock_init(lock)				\
 do {								\
 	__local_lock_init((local_lock_t *)lock);		\
-	__assume_ctx_lock(lock);				\
+	__init_ctx_lock(lock);					\
 } while (0)
 
 #define __spinlock_nested_bh_init(lock)				\
@@ -104,7 +104,7 @@ do {								\
 			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
 			      LD_LOCK_NORMAL);			\
 	local_lock_debug_init(lock);				\
-	__assume_ctx_lock(lock);				\
+	__init_ctx_lock(lock);					\
 } while (0)
 
 #define __local_lock_acquire(lock)					\
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 89977c215cbd..5d2ef75c4fdb 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -62,7 +62,7 @@ do {									\
 	static struct lock_class_key __key;				\
 									\
 	__mutex_init((mutex), #mutex, &__key);				\
-	__assume_ctx_lock(mutex);					\
+	__init_ctx_lock(mutex);						\
 } while (0)
 
 /**
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 65a5b55e1bcd..7e171634d2c4 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -22,11 +22,11 @@ do {								\
 	static struct lock_class_key __key;			\
 								\
 	__rwlock_init((lock), #lock, &__key);			\
-	__assume_ctx_lock(lock);				\
+	__init_ctx_lock(lock);					\
 } while (0)
 #else
 # define rwlock_init(lock)					\
-	do { *(lock) = __RW_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
+	do { *(lock) = __RW_LOCK_UNLOCKED(lock); __init_ctx_lock(lock); } while (0)
 #endif
 
 #ifdef CONFIG_DEBUG_SPINLOCK
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 37b387dcab21..1e087a6ce2cf 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -22,7 +22,7 @@ do {							\
 							\
 	init_rwbase_rt(&(rwl)->rwbase);			\
 	__rt_rwlock_init(rwl, #rwl, &__key);		\
-	__assume_ctx_lock(rwl);				\
+	__init_ctx_lock(rwl);				\
 } while (0)
 
 extern void rt_read_lock(rwlock_t *rwlock)	__acquires_shared(rwlock);
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 8da14a08a4e1..6ea7d2a23580 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -121,7 +121,7 @@ do {								\
 	static struct lock_class_key __key;			\
 								\
 	__init_rwsem((sem), #sem, &__key);			\
-	__assume_ctx_lock(sem);					\
+	__init_ctx_lock(sem);					\
 } while (0)
 
 /*
@@ -175,7 +175,7 @@ do {								\
 	static struct lock_class_key __key;			\
 								\
 	__init_rwsem((sem), #sem, &__key);			\
-	__assume_ctx_lock(sem);					\
+	__init_ctx_lock(sem);					\
 } while (0)
 
 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 113320911a09..a0670adb4b6e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -816,7 +816,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
 	do {								\
 		spin_lock_init(&(sl)->lock);				\
 		seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock);	\
-		__assume_ctx_lock(sl);					\
+		__init_ctx_lock(sl);					\
 	} while (0)
 
 /**
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 396b8c5d6c1b..e50372a5f7d1 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -106,12 +106,12 @@ do {									\
 	static struct lock_class_key __key;				\
 									\
 	__raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN);	\
-	__assume_ctx_lock(lock);					\
+	__init_ctx_lock(lock);						\
 } while (0)
 
 #else
 # define raw_spin_lock_init(lock)				\
-	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
+	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __init_ctx_lock(lock); } while (0)
 #endif
 
 #define raw_spin_is_locked(lock)	arch_spin_is_locked(&(lock)->raw_lock)
@@ -324,7 +324,7 @@ do {								\
 								\
 	__raw_spin_lock_init(spinlock_check(lock),		\
 			     #lock, &__key, LD_WAIT_CONFIG);	\
-	__assume_ctx_lock(lock);				\
+	__init_ctx_lock(lock);					\
 } while (0)
 
 #else
@@ -333,7 +333,7 @@ do {								\
 do {						\
 	spinlock_check(_lock);			\
 	*(_lock) = __SPIN_LOCK_UNLOCKED(_lock);	\
-	__assume_ctx_lock(_lock);		\
+	__init_ctx_lock(_lock);			\
 } while (0)
 
 #endif
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 0a585768358f..154d7290bd99 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -20,7 +20,7 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
 do {								\
 	rt_mutex_base_init(&(slock)->lock);			\
 	__rt_spin_lock_init(slock, name, key, percpu);		\
-	__assume_ctx_lock(slock);				\
+	__init_ctx_lock(slock);					\
 } while (0)
 
 #define _spin_lock_init(slock, percpu)				\
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 58e959ee10e9..ecb5564ee70d 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -107,7 +107,7 @@ context_lock_struct(ww_acquire_ctx) {
  */
 static inline void ww_mutex_init(struct ww_mutex *lock,
 				 struct ww_class *ww_class)
-	__assumes_ctx_lock(lock)
+	__inits_ctx_lock(lock)
 {
 	ww_mutex_base_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key);
 	lock->ctx = NULL;
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 1c5a381461fc..2f733b5cc650 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -165,6 +165,9 @@ static void __used test_mutex_init(struct test_mutex_data *d)
 {
 	mutex_init(&d->mtx);
 	d->counter = 0;
+
+	mutex_lock(&d->mtx);
+	mutex_unlock(&d->mtx);
 }
 
 static void __used test_mutex_lock(struct test_mutex_data *d)
-- 
2.52.0.457.g6b5491de43-goog
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Peter Zijlstra 3 weeks, 1 day ago
On Thu, Jan 15, 2026 at 01:51:25AM +0100, Marco Elver wrote:

> Longer-term, Peter suggested creating scoped init-guards [1], which
> will both fix the issue in a more robust way and also denote clearly
> where initialization starts and ends. However, that requires new APIs,
> and won't help bridge the gap for code that just wants to opt into the
> analysis with as few other changes as possible (as suggested in [2]).

OTOH, switching to that *now*, while we have minimal files with
CONTEXT_ANALYSIS enabled, is the easiest it will ever get.

The more files get enabled, the harder it gets to switch, no?
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Marco Elver 3 weeks, 1 day ago
On Thu, Jan 15, 2026 at 10:33PM +0100, Peter Zijlstra wrote:
> On Thu, Jan 15, 2026 at 01:51:25AM +0100, Marco Elver wrote:
> 
> > Longer-term, Peter suggested creating scoped init-guards [1], which
> > will both fix the issue in a more robust way and also denote clearly
> > where initialization starts and ends. However, that requires new APIs,
> > and won't help bridge the gap for code that just wants to opt into the
> > analysis with as few other changes as possible (as suggested in [2]).
> 
> OTOH, switching to that *now*, while we have minimal files with
> CONTEXT_ANALYSIS enabled, is the easiest it will ever get.
> 
> The more files get enabled, the harder it gets to switch, no?

Fair point; meaning, we should improve it sooner than later. :-)

In my sleep-deprived state, I came up with the below. I'd split it up
into maybe 3 patches (add guards; use guards where needed; remove
assume).

Thoughts?

------ >8 ------

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index e69896e597b6..0afe29398e26 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -83,9 +83,11 @@ Currently the following synchronization primitives are supported:
 `bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`,
 `ww_mutex`.
 
-For context locks with an initialization function (e.g., `spin_lock_init()`),
-calling this function before initializing any guarded members or globals
-prevents the compiler from issuing warnings about unguarded initialization.
+For context locks with an initialization function (e.g., ``spin_lock_init()``),
+use ``guard(foo_init)(&lock)`` or ``scoped_guard(foo_init, &lock) { ...  }``
+pattern to initialize guarded members or globals. This initializes the context
+lock, but also treats the context as active within the initialization scope
+(initialization implies exclusive access to the underlying object).
 
 Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
 context analysis that the associated synchronization primitive is held after
diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index 99c06e499375..0f971fd6d02a 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -104,6 +104,8 @@ DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu,
 		    local_lock_nested_bh(_T->lock),
 		    local_unlock_nested_bh(_T->lock))
 
+DEFINE_LOCK_GUARD_1(local_lock_init, local_lock_t __percpu, local_lock_init(_T->lock), /* */)
+
 DECLARE_LOCK_GUARD_1_ATTRS(local_lock, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
 #define class_local_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock, _T)
 DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irq, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
@@ -112,5 +114,11 @@ DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irqsave, __acquires(_T), __releases(*(loca
 #define class_local_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irqsave, _T)
 DECLARE_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
 #define class_local_lock_nested_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_init, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_init, _T)
+
+DEFINE_LOCK_GUARD_1(local_trylock_init, local_trylock_t __percpu, local_trylock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(local_trylock_init, __acquires(_T), __releases(*(local_trylock_t __percpu **)_T))
+#define class_local_trylock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_trylock_init, _T)
 
 #endif
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index e8c4803d8db4..66d4984eea62 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -86,13 +86,11 @@ do {								\
 			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
 			      LD_LOCK_PERCPU);			\
 	local_lock_debug_init(lock);				\
-	__assume_ctx_lock(lock);				\
 } while (0)
 
 #define __local_trylock_init(lock)				\
 do {								\
 	__local_lock_init((local_lock_t *)lock);		\
-	__assume_ctx_lock(lock);				\
 } while (0)
 
 #define __spinlock_nested_bh_init(lock)				\
@@ -104,7 +102,6 @@ do {								\
 			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
 			      LD_LOCK_NORMAL);			\
 	local_lock_debug_init(lock);				\
-	__assume_ctx_lock(lock);				\
 } while (0)
 
 #define __local_lock_acquire(lock)					\
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 89977c215cbd..ecaa0440f6ec 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -62,7 +62,6 @@ do {									\
 	static struct lock_class_key __key;				\
 									\
 	__mutex_init((mutex), #mutex, &__key);				\
-	__assume_ctx_lock(mutex);					\
 } while (0)
 
 /**
@@ -254,6 +253,7 @@ extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_a
 DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock))
 DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock))
 DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock), _RET == 0)
+DEFINE_LOCK_GUARD_1(mutex_init, struct mutex, mutex_init(_T->lock), /* */)
 
 DECLARE_LOCK_GUARD_1_ATTRS(mutex,	__acquires(_T), __releases(*(struct mutex **)_T))
 #define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
@@ -261,6 +261,8 @@ DECLARE_LOCK_GUARD_1_ATTRS(mutex_try,	__acquires(_T), __releases(*(struct mutex
 #define class_mutex_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_try, _T)
 DECLARE_LOCK_GUARD_1_ATTRS(mutex_intr,	__acquires(_T), __releases(*(struct mutex **)_T))
 #define class_mutex_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_intr, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_init,	__acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_init, _T)
 
 extern unsigned long mutex_get_owner(struct mutex *lock);
 
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 65a5b55e1bcd..3390d21c95dd 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -22,11 +22,10 @@ do {								\
 	static struct lock_class_key __key;			\
 								\
 	__rwlock_init((lock), #lock, &__key);			\
-	__assume_ctx_lock(lock);				\
 } while (0)
 #else
 # define rwlock_init(lock)					\
-	do { *(lock) = __RW_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
+	do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0)
 #endif
 
 #ifdef CONFIG_DEBUG_SPINLOCK
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 37b387dcab21..5353abbfdc0b 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -22,7 +22,6 @@ do {							\
 							\
 	init_rwbase_rt(&(rwl)->rwbase);			\
 	__rt_rwlock_init(rwl, #rwl, &__key);		\
-	__assume_ctx_lock(rwl);				\
 } while (0)
 
 extern void rt_read_lock(rwlock_t *rwlock)	__acquires_shared(rwlock);
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 8da14a08a4e1..9bf1d93d3d7b 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -121,7 +121,6 @@ do {								\
 	static struct lock_class_key __key;			\
 								\
 	__init_rwsem((sem), #sem, &__key);			\
-	__assume_ctx_lock(sem);					\
 } while (0)
 
 /*
@@ -175,7 +174,6 @@ do {								\
 	static struct lock_class_key __key;			\
 								\
 	__init_rwsem((sem), #sem, &__key);			\
-	__assume_ctx_lock(sem);					\
 } while (0)
 
 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
@@ -280,6 +278,10 @@ DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_try, __acquires(_T), __releases(*(struct
 DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_kill, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
 #define class_rwsem_write_kill_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_kill, _T)
 
+DEFINE_LOCK_GUARD_1(rwsem_init, struct rw_semaphore, init_rwsem(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_init, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_init, _T)
+
 /*
  * downgrade write lock to read lock
  */
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 113320911a09..c0c6235dff59 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -14,6 +14,7 @@
  */
 
 #include <linux/compiler.h>
+#include <linux/cleanup.h>
 #include <linux/kcsan-checks.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
@@ -816,7 +817,6 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
 	do {								\
 		spin_lock_init(&(sl)->lock);				\
 		seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock);	\
-		__assume_ctx_lock(sl);					\
 	} while (0)
 
 /**
@@ -1359,4 +1359,8 @@ static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s)
 #define scoped_seqlock_read(_seqlock, _target)				\
 	__scoped_seqlock_read(_seqlock, _target, __UNIQUE_ID(seqlock))
 
+DEFINE_LOCK_GUARD_1(seqlock_init, seqlock_t, seqlock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(seqlock_init, __acquires(_T), __releases(*(seqlock_t **)_T))
+#define class_seqlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(seqlock_init, _T)
+
 #endif /* __LINUX_SEQLOCK_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 396b8c5d6c1b..e1e2f144af9b 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -106,12 +106,11 @@ do {									\
 	static struct lock_class_key __key;				\
 									\
 	__raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN);	\
-	__assume_ctx_lock(lock);					\
 } while (0)
 
 #else
 # define raw_spin_lock_init(lock)				\
-	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
+	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0)
 #endif
 
 #define raw_spin_is_locked(lock)	arch_spin_is_locked(&(lock)->raw_lock)
@@ -324,7 +323,6 @@ do {								\
 								\
 	__raw_spin_lock_init(spinlock_check(lock),		\
 			     #lock, &__key, LD_WAIT_CONFIG);	\
-	__assume_ctx_lock(lock);				\
 } while (0)
 
 #else
@@ -333,7 +331,6 @@ do {								\
 do {						\
 	spinlock_check(_lock);			\
 	*(_lock) = __SPIN_LOCK_UNLOCKED(_lock);	\
-	__assume_ctx_lock(_lock);		\
 } while (0)
 
 #endif
@@ -582,6 +579,10 @@ DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try,
 DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
 #define class_raw_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, _T)
 
+DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_init, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_init, _T)
+
 DEFINE_LOCK_GUARD_1(spinlock, spinlock_t,
 		    spin_lock(_T->lock),
 		    spin_unlock(_T->lock))
@@ -626,6 +627,10 @@ DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try,
 DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, __acquires(_T), __releases(*(spinlock_t **)_T))
 #define class_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, _T)
 
+DEFINE_LOCK_GUARD_1(spinlock_init, spinlock_t, spin_lock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_init, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_init, _T)
+
 DEFINE_LOCK_GUARD_1(read_lock, rwlock_t,
 		    read_lock(_T->lock),
 		    read_unlock(_T->lock))
@@ -664,5 +669,9 @@ DEFINE_LOCK_GUARD_1(write_lock_irqsave, rwlock_t,
 DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T))
 #define class_write_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock_irqsave, _T)
 
+DEFINE_LOCK_GUARD_1(rwlock_init, rwlock_t, rwlock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(rwlock_init, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_rwlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwlock_init, _T)
+
 #undef __LINUX_INSIDE_SPINLOCK_H
 #endif /* __LINUX_SPINLOCK_H */
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 0a585768358f..373618a4243c 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -20,7 +20,6 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
 do {								\
 	rt_mutex_base_init(&(slock)->lock);			\
 	__rt_spin_lock_init(slock, name, key, percpu);		\
-	__assume_ctx_lock(slock);				\
 } while (0)
 
 #define _spin_lock_init(slock, percpu)				\
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 6cbc6e2d8aee..5397d0c14127 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -530,7 +530,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
 	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
 	if (!kcov)
 		return -ENOMEM;
-	spin_lock_init(&kcov->lock);
+	guard(spinlock_init)(&kcov->lock);
 	kcov->mode = KCOV_MODE_DISABLED;
 	kcov->sequence = 1;
 	refcount_set(&kcov->refcount, 1);
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 1c5a381461fc..0f05943d957f 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -35,7 +35,7 @@ static void __used test_common_helpers(void)
 	};											\
 	static void __used test_##class##_init(struct test_##class##_data *d)			\
 	{											\
-		type_init(&d->lock);								\
+		guard(type_init)(&d->lock);							\
 		d->counter = 0;									\
 	}											\
 	static void __used test_##class(struct test_##class##_data *d)				\
@@ -83,7 +83,7 @@ static void __used test_common_helpers(void)
 
 TEST_SPINLOCK_COMMON(raw_spinlock,
 		     raw_spinlock_t,
-		     raw_spin_lock_init,
+		     raw_spinlock_init,
 		     raw_spin_lock,
 		     raw_spin_unlock,
 		     raw_spin_trylock,
@@ -109,7 +109,7 @@ static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data
 
 TEST_SPINLOCK_COMMON(spinlock,
 		     spinlock_t,
-		     spin_lock_init,
+		     spinlock_init,
 		     spin_lock,
 		     spin_unlock,
 		     spin_trylock,
@@ -163,7 +163,7 @@ struct test_mutex_data {
 
 static void __used test_mutex_init(struct test_mutex_data *d)
 {
-	mutex_init(&d->mtx);
+	guard(mutex_init)(&d->mtx);
 	d->counter = 0;
 }
 
@@ -226,7 +226,7 @@ struct test_seqlock_data {
 
 static void __used test_seqlock_init(struct test_seqlock_data *d)
 {
-	seqlock_init(&d->sl);
+	guard(seqlock_init)(&d->sl);
 	d->counter = 0;
 }
 
@@ -275,7 +275,7 @@ struct test_rwsem_data {
 
 static void __used test_rwsem_init(struct test_rwsem_data *d)
 {
-	init_rwsem(&d->sem);
+	guard(rwsem_init)(&d->sem);
 	d->counter = 0;
 }
 
@@ -475,7 +475,7 @@ static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = {
 
 static void __used test_local_lock_init(struct test_local_lock_data *d)
 {
-	local_lock_init(&d->lock);
+	guard(local_lock_init)(&d->lock);
 	d->counter = 0;
 }
 
@@ -519,7 +519,7 @@ static DEFINE_PER_CPU(struct test_local_trylock_data, test_local_trylock_data) =
 
 static void __used test_local_trylock_init(struct test_local_trylock_data *d)
 {
-	local_trylock_init(&d->lock);
+	guard(local_trylock_init)(&d->lock);
 	d->counter = 0;
 }
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Peter Zijlstra 3 weeks, 1 day ago
+Steve +Christoph

On Fri, Jan 16, 2026 at 02:17:09AM +0100, Marco Elver wrote:
> On Thu, Jan 15, 2026 at 10:33PM +0100, Peter Zijlstra wrote:
> > On Thu, Jan 15, 2026 at 01:51:25AM +0100, Marco Elver wrote:
> > 
> > > Longer-term, Peter suggested creating scoped init-guards [1], which
> > > will both fix the issue in a more robust way and also denote clearly
> > > where initialization starts and ends. However, that requires new APIs,
> > > and won't help bridge the gap for code that just wants to opt into the
> > > analysis with as few other changes as possible (as suggested in [2]).
> > 
> > OTOH, switching to that *now*, while we have minimal files with
> > CONTEXT_ANALYSIS enabled, is the easiest it will ever get.
> > 
> > The more files get enabled, the harder it gets to switch, no?
> 
> Fair point; meaning, we should improve it sooner than later. :-)
> 
> In my sleep-deprived state, I came up with the below. I'd split it up
> into maybe 3 patches (add guards; use guards where needed; remove
> assume).
> 
> Thoughts?

LGTM; Steve, Christoph, does this work for you guys? Init and then lock
would look something like:

	scoped_guard (spinlock_init, &obj->lock) {
		// init obj
		refcount_init(&obj->ref);
		...
	}

	guard(spinlock)(&obj->lock);
	// obj is locked.

> ------ >8 ------
> 
> diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
> index e69896e597b6..0afe29398e26 100644
> --- a/Documentation/dev-tools/context-analysis.rst
> +++ b/Documentation/dev-tools/context-analysis.rst
> @@ -83,9 +83,11 @@ Currently the following synchronization primitives are supported:
>  `bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`,
>  `ww_mutex`.
>  
> -For context locks with an initialization function (e.g., `spin_lock_init()`),
> -calling this function before initializing any guarded members or globals
> -prevents the compiler from issuing warnings about unguarded initialization.
> +For context locks with an initialization function (e.g., ``spin_lock_init()``),
> +use ``guard(foo_init)(&lock)`` or ``scoped_guard(foo_init, &lock) { ...  }``
> +pattern to initialize guarded members or globals. This initializes the context
> +lock, but also treats the context as active within the initialization scope
> +(initialization implies exclusive access to the underlying object).
>  
>  Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
>  context analysis that the associated synchronization primitive is held after
> diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
> index 99c06e499375..0f971fd6d02a 100644
> --- a/include/linux/local_lock.h
> +++ b/include/linux/local_lock.h
> @@ -104,6 +104,8 @@ DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu,
>  		    local_lock_nested_bh(_T->lock),
>  		    local_unlock_nested_bh(_T->lock))
>  
> +DEFINE_LOCK_GUARD_1(local_lock_init, local_lock_t __percpu, local_lock_init(_T->lock), /* */)
> +
>  DECLARE_LOCK_GUARD_1_ATTRS(local_lock, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
>  #define class_local_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock, _T)
>  DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irq, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
> @@ -112,5 +114,11 @@ DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irqsave, __acquires(_T), __releases(*(loca
>  #define class_local_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irqsave, _T)
>  DECLARE_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
>  #define class_local_lock_nested_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, _T)
> +DECLARE_LOCK_GUARD_1_ATTRS(local_lock_init, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
> +#define class_local_lock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_init, _T)
> +
> +DEFINE_LOCK_GUARD_1(local_trylock_init, local_trylock_t __percpu, local_trylock_init(_T->lock), /* */)
> +DECLARE_LOCK_GUARD_1_ATTRS(local_trylock_init, __acquires(_T), __releases(*(local_trylock_t __percpu **)_T))
> +#define class_local_trylock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_trylock_init, _T)
>  
>  #endif
> diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
> index e8c4803d8db4..66d4984eea62 100644
> --- a/include/linux/local_lock_internal.h
> +++ b/include/linux/local_lock_internal.h
> @@ -86,13 +86,11 @@ do {								\
>  			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
>  			      LD_LOCK_PERCPU);			\
>  	local_lock_debug_init(lock);				\
> -	__assume_ctx_lock(lock);				\
>  } while (0)
>  
>  #define __local_trylock_init(lock)				\
>  do {								\
>  	__local_lock_init((local_lock_t *)lock);		\
> -	__assume_ctx_lock(lock);				\
>  } while (0)
>  
>  #define __spinlock_nested_bh_init(lock)				\
> @@ -104,7 +102,6 @@ do {								\
>  			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
>  			      LD_LOCK_NORMAL);			\
>  	local_lock_debug_init(lock);				\
> -	__assume_ctx_lock(lock);				\
>  } while (0)
>  
>  #define __local_lock_acquire(lock)					\
> diff --git a/include/linux/mutex.h b/include/linux/mutex.h
> index 89977c215cbd..ecaa0440f6ec 100644
> --- a/include/linux/mutex.h
> +++ b/include/linux/mutex.h
> @@ -62,7 +62,6 @@ do {									\
>  	static struct lock_class_key __key;				\
>  									\
>  	__mutex_init((mutex), #mutex, &__key);				\
> -	__assume_ctx_lock(mutex);					\
>  } while (0)
>  
>  /**
> @@ -254,6 +253,7 @@ extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_a
>  DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock))
>  DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock))
>  DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock), _RET == 0)
> +DEFINE_LOCK_GUARD_1(mutex_init, struct mutex, mutex_init(_T->lock), /* */)
>  
>  DECLARE_LOCK_GUARD_1_ATTRS(mutex,	__acquires(_T), __releases(*(struct mutex **)_T))
>  #define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
> @@ -261,6 +261,8 @@ DECLARE_LOCK_GUARD_1_ATTRS(mutex_try,	__acquires(_T), __releases(*(struct mutex
>  #define class_mutex_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_try, _T)
>  DECLARE_LOCK_GUARD_1_ATTRS(mutex_intr,	__acquires(_T), __releases(*(struct mutex **)_T))
>  #define class_mutex_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_intr, _T)
> +DECLARE_LOCK_GUARD_1_ATTRS(mutex_init,	__acquires(_T), __releases(*(struct mutex **)_T))
> +#define class_mutex_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_init, _T)
>  
>  extern unsigned long mutex_get_owner(struct mutex *lock);
>  
> diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
> index 65a5b55e1bcd..3390d21c95dd 100644
> --- a/include/linux/rwlock.h
> +++ b/include/linux/rwlock.h
> @@ -22,11 +22,10 @@ do {								\
>  	static struct lock_class_key __key;			\
>  								\
>  	__rwlock_init((lock), #lock, &__key);			\
> -	__assume_ctx_lock(lock);				\
>  } while (0)
>  #else
>  # define rwlock_init(lock)					\
> -	do { *(lock) = __RW_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
> +	do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0)
>  #endif
>  
>  #ifdef CONFIG_DEBUG_SPINLOCK
> diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
> index 37b387dcab21..5353abbfdc0b 100644
> --- a/include/linux/rwlock_rt.h
> +++ b/include/linux/rwlock_rt.h
> @@ -22,7 +22,6 @@ do {							\
>  							\
>  	init_rwbase_rt(&(rwl)->rwbase);			\
>  	__rt_rwlock_init(rwl, #rwl, &__key);		\
> -	__assume_ctx_lock(rwl);				\
>  } while (0)
>  
>  extern void rt_read_lock(rwlock_t *rwlock)	__acquires_shared(rwlock);
> diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
> index 8da14a08a4e1..9bf1d93d3d7b 100644
> --- a/include/linux/rwsem.h
> +++ b/include/linux/rwsem.h
> @@ -121,7 +121,6 @@ do {								\
>  	static struct lock_class_key __key;			\
>  								\
>  	__init_rwsem((sem), #sem, &__key);			\
> -	__assume_ctx_lock(sem);					\
>  } while (0)
>  
>  /*
> @@ -175,7 +174,6 @@ do {								\
>  	static struct lock_class_key __key;			\
>  								\
>  	__init_rwsem((sem), #sem, &__key);			\
> -	__assume_ctx_lock(sem);					\
>  } while (0)
>  
>  static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
> @@ -280,6 +278,10 @@ DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_try, __acquires(_T), __releases(*(struct
>  DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_kill, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
>  #define class_rwsem_write_kill_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_kill, _T)
>  
> +DEFINE_LOCK_GUARD_1(rwsem_init, struct rw_semaphore, init_rwsem(_T->lock), /* */)
> +DECLARE_LOCK_GUARD_1_ATTRS(rwsem_init, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
> +#define class_rwsem_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_init, _T)
> +
>  /*
>   * downgrade write lock to read lock
>   */
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index 113320911a09..c0c6235dff59 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -14,6 +14,7 @@
>   */
>  
>  #include <linux/compiler.h>
> +#include <linux/cleanup.h>
>  #include <linux/kcsan-checks.h>
>  #include <linux/lockdep.h>
>  #include <linux/mutex.h>
> @@ -816,7 +817,6 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
>  	do {								\
>  		spin_lock_init(&(sl)->lock);				\
>  		seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock);	\
> -		__assume_ctx_lock(sl);					\
>  	} while (0)
>  
>  /**
> @@ -1359,4 +1359,8 @@ static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s)
>  #define scoped_seqlock_read(_seqlock, _target)				\
>  	__scoped_seqlock_read(_seqlock, _target, __UNIQUE_ID(seqlock))
>  
> +DEFINE_LOCK_GUARD_1(seqlock_init, seqlock_t, seqlock_init(_T->lock), /* */)
> +DECLARE_LOCK_GUARD_1_ATTRS(seqlock_init, __acquires(_T), __releases(*(seqlock_t **)_T))
> +#define class_seqlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(seqlock_init, _T)
> +
>  #endif /* __LINUX_SEQLOCK_H */
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 396b8c5d6c1b..e1e2f144af9b 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -106,12 +106,11 @@ do {									\
>  	static struct lock_class_key __key;				\
>  									\
>  	__raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN);	\
> -	__assume_ctx_lock(lock);					\
>  } while (0)
>  
>  #else
>  # define raw_spin_lock_init(lock)				\
> -	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } while (0)
> +	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0)
>  #endif
>  
>  #define raw_spin_is_locked(lock)	arch_spin_is_locked(&(lock)->raw_lock)
> @@ -324,7 +323,6 @@ do {								\
>  								\
>  	__raw_spin_lock_init(spinlock_check(lock),		\
>  			     #lock, &__key, LD_WAIT_CONFIG);	\
> -	__assume_ctx_lock(lock);				\
>  } while (0)
>  
>  #else
> @@ -333,7 +331,6 @@ do {								\
>  do {						\
>  	spinlock_check(_lock);			\
>  	*(_lock) = __SPIN_LOCK_UNLOCKED(_lock);	\
> -	__assume_ctx_lock(_lock);		\
>  } while (0)
>  
>  #endif
> @@ -582,6 +579,10 @@ DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try,
>  DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
>  #define class_raw_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, _T)
>  
> +DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */)
> +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_init, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
> +#define class_raw_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_init, _T)
> +
>  DEFINE_LOCK_GUARD_1(spinlock, spinlock_t,
>  		    spin_lock(_T->lock),
>  		    spin_unlock(_T->lock))
> @@ -626,6 +627,10 @@ DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try,
>  DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, __acquires(_T), __releases(*(spinlock_t **)_T))
>  #define class_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, _T)
>  
> +DEFINE_LOCK_GUARD_1(spinlock_init, spinlock_t, spin_lock_init(_T->lock), /* */)
> +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_init, __acquires(_T), __releases(*(spinlock_t **)_T))
> +#define class_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_init, _T)
> +
>  DEFINE_LOCK_GUARD_1(read_lock, rwlock_t,
>  		    read_lock(_T->lock),
>  		    read_unlock(_T->lock))
> @@ -664,5 +669,9 @@ DEFINE_LOCK_GUARD_1(write_lock_irqsave, rwlock_t,
>  DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T))
>  #define class_write_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock_irqsave, _T)
>  
> +DEFINE_LOCK_GUARD_1(rwlock_init, rwlock_t, rwlock_init(_T->lock), /* */)
> +DECLARE_LOCK_GUARD_1_ATTRS(rwlock_init, __acquires(_T), __releases(*(rwlock_t **)_T))
> +#define class_rwlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwlock_init, _T)
> +
>  #undef __LINUX_INSIDE_SPINLOCK_H
>  #endif /* __LINUX_SPINLOCK_H */
> diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
> index 0a585768358f..373618a4243c 100644
> --- a/include/linux/spinlock_rt.h
> +++ b/include/linux/spinlock_rt.h
> @@ -20,7 +20,6 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
>  do {								\
>  	rt_mutex_base_init(&(slock)->lock);			\
>  	__rt_spin_lock_init(slock, name, key, percpu);		\
> -	__assume_ctx_lock(slock);				\
>  } while (0)
>  
>  #define _spin_lock_init(slock, percpu)				\
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 6cbc6e2d8aee..5397d0c14127 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -530,7 +530,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
>  	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
>  	if (!kcov)
>  		return -ENOMEM;
> -	spin_lock_init(&kcov->lock);
> +	guard(spinlock_init)(&kcov->lock);
>  	kcov->mode = KCOV_MODE_DISABLED;
>  	kcov->sequence = 1;
>  	refcount_set(&kcov->refcount, 1);
> diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
> index 1c5a381461fc..0f05943d957f 100644
> --- a/lib/test_context-analysis.c
> +++ b/lib/test_context-analysis.c
> @@ -35,7 +35,7 @@ static void __used test_common_helpers(void)
>  	};											\
>  	static void __used test_##class##_init(struct test_##class##_data *d)			\
>  	{											\
> -		type_init(&d->lock);								\
> +		guard(type_init)(&d->lock);							\
>  		d->counter = 0;									\
>  	}											\
>  	static void __used test_##class(struct test_##class##_data *d)				\
> @@ -83,7 +83,7 @@ static void __used test_common_helpers(void)
>  
>  TEST_SPINLOCK_COMMON(raw_spinlock,
>  		     raw_spinlock_t,
> -		     raw_spin_lock_init,
> +		     raw_spinlock_init,
>  		     raw_spin_lock,
>  		     raw_spin_unlock,
>  		     raw_spin_trylock,
> @@ -109,7 +109,7 @@ static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data
>  
>  TEST_SPINLOCK_COMMON(spinlock,
>  		     spinlock_t,
> -		     spin_lock_init,
> +		     spinlock_init,
>  		     spin_lock,
>  		     spin_unlock,
>  		     spin_trylock,
> @@ -163,7 +163,7 @@ struct test_mutex_data {
>  
>  static void __used test_mutex_init(struct test_mutex_data *d)
>  {
> -	mutex_init(&d->mtx);
> +	guard(mutex_init)(&d->mtx);
>  	d->counter = 0;
>  }
>  
> @@ -226,7 +226,7 @@ struct test_seqlock_data {
>  
>  static void __used test_seqlock_init(struct test_seqlock_data *d)
>  {
> -	seqlock_init(&d->sl);
> +	guard(seqlock_init)(&d->sl);
>  	d->counter = 0;
>  }
>  
> @@ -275,7 +275,7 @@ struct test_rwsem_data {
>  
>  static void __used test_rwsem_init(struct test_rwsem_data *d)
>  {
> -	init_rwsem(&d->sem);
> +	guard(rwsem_init)(&d->sem);
>  	d->counter = 0;
>  }
>  
> @@ -475,7 +475,7 @@ static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = {
>  
>  static void __used test_local_lock_init(struct test_local_lock_data *d)
>  {
> -	local_lock_init(&d->lock);
> +	guard(local_lock_init)(&d->lock);
>  	d->counter = 0;
>  }
>  
> @@ -519,7 +519,7 @@ static DEFINE_PER_CPU(struct test_local_trylock_data, test_local_trylock_data) =
>  
>  static void __used test_local_trylock_init(struct test_local_trylock_data *d)
>  {
> -	local_trylock_init(&d->lock);
> +	guard(local_trylock_init)(&d->lock);
>  	d->counter = 0;
>  }
>
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Christoph Hellwig 3 weeks, 1 day ago
On Fri, Jan 16, 2026 at 04:07:50PM +0100, Peter Zijlstra wrote:
> LGTM; Steve, Christoph, does this work for you guys? Init and then lock
> would look something like:

Please do something that works without all these messy guards that just
obfuscate the code.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Peter Zijlstra 3 weeks, 1 day ago
On Fri, Jan 16, 2026 at 04:10:43PM +0100, Christoph Hellwig wrote:
> On Fri, Jan 16, 2026 at 04:07:50PM +0100, Peter Zijlstra wrote:
> > LGTM; Steve, Christoph, does this work for you guys? Init and then lock
> > would look something like:
> 
> Please do something that works without all these messy guards that just
> obfuscate the code.

I think we're going to have to agree to disagree on this.

Something like:

	scoped_guard (spinlock_init, &obj->lock) {
		// init
	}

is *much* clearer than something like:

	spinlock_init(&obj->lock);
	// init
	spinlock_deinit(&obj->lock);

Exactly because it has explicit scope. (also my deinit naming might not
be optimal, it is ambiguous at best, probably confusing).

Not to mention that the scope things are far more robust vs error paths.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Christoph Hellwig 3 weeks, 1 day ago
On Fri, Jan 16, 2026 at 04:20:16PM +0100, Peter Zijlstra wrote:
> is *much* clearer than something like:
> 
> 	spinlock_init(&obj->lock);
> 	// init
> 	spinlock_deinit(&obj->lock);
> 
> Exactly because it has explicit scope. (also my deinit naming might not
> be optimal, it is ambiguous at best, probably confusing).

WTF is spinlock_deinit even supposed to be?

I thought this is about:

	spin_lock_init(&obj->lock);
	spin_lock(&obj->lock);

> Not to mention that the scope things are far more robust vs error paths.

They are just a really hacked up, clumsy way to provide a very
limited version of what the capability analysis provides, while messing
up the code.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Peter Zijlstra 3 weeks, 1 day ago
On Fri, Jan 16, 2026 at 04:27:41PM +0100, Christoph Hellwig wrote:
> On Fri, Jan 16, 2026 at 04:20:16PM +0100, Peter Zijlstra wrote:
> > is *much* clearer than something like:
> > 
> > 	spinlock_init(&obj->lock);
> > 	// init
> > 	spinlock_deinit(&obj->lock);
> > 
> > Exactly because it has explicit scope. (also my deinit naming might not
> > be optimal, it is ambiguous at best, probably confusing).
> 
> WTF is spinlock_deinit even supposed to be?
> 
> I thought this is about:
> 
> 	spin_lock_init(&obj->lock);
> 	spin_lock(&obj->lock);
> 
> > Not to mention that the scope things are far more robust vs error paths.
> 
> They are just a really hacked up, clumsy way to provide a very
> limited version of what the capability analysis provides, while messing
> up the code.

So the base problem here is something like:

struct obj {
	spinlock_t	lock;
	int		state __guarded_by(lock);
};

struct obj *create_obj(void)
{
	struct obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return NULL;

	spin_lock_init(&obj->lock);
	obj->state = INIT_STATE; // error: ->state demands ->lock is held
}

So if you want/can take spin_lock() directly after spin_lock_init(),
then yes, you can write:


	spin_lock_init(&obj->lock);
	spin_lock(&obj->lock);
	obj->state = INIT_STATE; // OK

However, if code is structured such that you need to init fields before
taking the lock, you need a 'fake' lock acquire to wrap the
initialization -- which is safe because there is no concurrency yet and
all that, furthermore, by holding the fake lock you also ensure you
cannot in fact take the lock and create undue concurrency before
initialization is complete.

So the fairly common pattern where an object is first (fully) initialized
before it can be used will need this fake acquisition. For this we get:

	scoped_guard (spinlock_init, &obj->lock) {
		// init goes here
	}

Or you can manually __acquire_ctx_lock() / __release_ctx_lock().
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Steven Rostedt 2 weeks, 2 days ago
On Fri, 16 Jan 2026 16:47:54 +0100
Peter Zijlstra <peterz@infradead.org> wrote:

> struct obj {
> 	spinlock_t	lock;
> 	int		state __guarded_by(lock);
> };
> 
> struct obj *create_obj(void)
> {
> 	struct obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> 	if (!obj)
> 		return NULL;
> 
> 	spin_lock_init(&obj->lock);
> 	obj->state = INIT_STATE; // error: ->state demands ->lock is held
> }

I haven't seen all the other approaches, but would a macro be able to hide
it with some kind of obfuscation from the compiler?


	GUARD_INIT(obj->state, INIT_STATE);

which would be something like a WRITE_ONCE() macro. I'm not sure what
tooling there is to disable checks for a small bit of code like this.
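
Purely as a sketch of what I mean (hypothetical, and assuming the
disable/enable helpers in compiler-context-analysis.h can be used for a
single statement like this):

	#define GUARD_INIT(var, val)			\
	do {						\
		disable_context_analysis();		\
		(var) = (val);				\
		enable_context_analysis();		\
	} while (0)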

-- Steve
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Marco Elver 2 weeks, 2 days ago
On Thu, 22 Jan 2026 at 02:24, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Fri, 16 Jan 2026 16:47:54 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > struct obj {
> >       spinlock_t      lock;
> >       int             state __guarded_by(lock);
> > };
> >
> > struct obj *create_obj(void)
> > {
> >       struct obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> >       if (!obj)
> >               return NULL;
> >
> >       spin_lock_init(&obj->lock);
> >       obj->state = INIT_STATE; // error: ->state demands ->lock is held
> > }
>
> I haven't seen all the other approaches, but would a macro be able to hide
> it with some kind of obfuscation from the compiler?
>
>
>         GUARD_INIT(obj->state, INIT_STATE);
>
> which would be something like a WRITE_ONCE() macro. I'm not sure what
> tooling there is to disable checks for a small bit of code like this.

Something like this is now a documented alternative [1]. Basically this works:

   context_unsafe(obj->state = INIT_STATE);

For single guarded fields, that's as simple as it gets.

[1] https://lore.kernel.org/all/20260119094029.1344361-1-elver@google.com/
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Christoph Hellwig 2 weeks, 2 days ago
On Wed, Jan 21, 2026 at 08:24:27PM -0500, Steven Rostedt wrote:
> I haven't seen all the other approaches, but would a macro be able to hide
> it with some kind of obfuscation from the compiler?
> 
> 
> 	GUARD_INIT(obj->state, INIT_STATE);
> 
> which would be something like a WRITE_ONCE() macro. I'm not sure what
> tooling there is to disable checks for a small bit of code like this.

Well, you don't really want WRITE_ONCE for every field, but basically
a barrier.  And initializing the lock seems like a very logical
place for such a barrier.  So we'd probably need to pair it with some
kind of 'start initializing fields' that starts the context, and the
init then ends it.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Christoph Hellwig 2 weeks, 5 days ago
On Fri, Jan 16, 2026 at 04:47:54PM +0100, Peter Zijlstra wrote:
> So the base problem here is something like:
> 
> struct obj {
> 	spinlock_t	lock;
> 	int		state __guarded_by(lock);
> };
> 
> struct obj *create_obj(void)
> {
> 	struct obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> 	if (!obj)
> 		return NULL;
> 
> 	spin_lock_init(&obj->lock);
> 	obj->state = INIT_STATE; // error: ->state demands ->lock is held
> }
> 
> So if you want/can take spin_lock() directly after spin_lock_init(),
> then yes, you can write:

Which really is the normal case.

> However, if code is structured such that you need to init fields before
> taking the lock, you need a 'fake' lock acquire to wrap the
> initialization -- which is safe because there is no concurrency yet and
> all that, furthermore, by holding the fake lock you also ensure you
> cannot in fact take the lock and create undue concurrency before
> initialization is complete.

Well.  That assumes you have fields, or probably pointed-to data
structures, where use under the lock is fine, but initializing them is
not.  Which sounds really weird.  And if you do that, splitting out a
separate function like in the patch that triggered all this sounds
perfectly fine.  It's the simple case you mentioned above that is fairly
common and really needs to work.  Especially as the allocate and init
helpers for them are often pretty trivial.

That being said, even outside this pattern the concept of allowing
initialization to touch fields before the containing structure is
published and can be found by other threads is a common and useful
one.  Having that supported natively in context tracking would probably
be useful if not required sooner or later.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Marco Elver 3 weeks, 1 day ago
On Fri, Jan 16, 2026 at 04:27PM +0100, Christoph Hellwig wrote:
> On Fri, Jan 16, 2026 at 04:20:16PM +0100, Peter Zijlstra wrote:
> > is *much* clearer than something like:
> > 
> > 	spinlock_init(&obj->lock);
> > 	// init
> > 	spinlock_deinit(&obj->lock);
> > 
> > Exactly because it has explicit scope. (also my deinit naming might not
> > be optimal, it is ambiguous at best, probably confusing).
> 
> WTF is spinlock_deinit even supposed to be?
> 
> > I thought this is about:
> 
> 	spin_lock_init(&obj->lock);
> 	spin_lock(&obj->lock);
> 
> > Not to mention that the scope things are far more robust vs error paths.
> 
> They are just a really hacked up, clumsy way to provide a very
> limited version of what the capability analysis provides, while messing
> up the code.

There might be more design options we're missing, but thus far I think
it's this patch (using the "reentrant promotion" approach) vs. scoped
init guards.

   * Scoped init guards [1]: Sound, requires explicit
     guard(type_init) (or scoped_guard) for guarded member
     initialization.

   * Reentrant init (this patch): Less intrusive, foo_init() just
     works. Misses double-locks immediately after init.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/melver/linux.git/log/?h=ctx-analysis/init-guards
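
To make the difference concrete, roughly, reusing Peter's hypothetical
obj example from earlier in the thread:

   /* Scoped init guard [1]: explicit init scope, sound. */
   scoped_guard (spinlock_init, &obj->lock)
           obj->state = INIT_STATE;
   spin_lock(&obj->lock);		/* first real acquisition */

   /* Reentrant init (this patch): foo_init() just works, but a buggy
    * double-lock right after init would also be accepted. */
   spin_lock_init(&obj->lock);
   obj->state = INIT_STATE;
   spin_lock(&obj->lock);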

FWIW, on the C++ side, Clang's Thread Safety Analysis just completely
disables itself in constructors to allow guarded member init. So we're
already doing better than that. :-)

As for why this simpler patch, I stand by my points from [2]; trading
false positives against false negatives so that things "just work" does
have merit, too.

[2] https://lore.kernel.org/all/CANpmjNPm5861mmHYMHoC9ErRfbLxmTy=MYwfsGC-YTpgP+z-Bw@mail.gmail.com/

I'm more or less indifferent, though would slightly favor the simpler
patch (this one), but can live with either. I can send out [1] for
reference, and you can choose.

Thanks,
-- Marco
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Peter Zijlstra 3 weeks, 1 day ago
On Fri, Jan 16, 2026 at 04:20:16PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 16, 2026 at 04:10:43PM +0100, Christoph Hellwig wrote:
> > On Fri, Jan 16, 2026 at 04:07:50PM +0100, Peter Zijlstra wrote:
> > > LGTM; Steve, Christoph, does this work for you guys? Init and then lock
> > > would look something like:
> > 
> > Please do something that works without all these messy guards that just
> > obfuscate the code.
> 
> I think we're going to have to agree to disagree on this.
> 
> Something like:
> 
> 	scoped_guard (spinlock_init, &obj->lock) {
> 		// init
> 	}
> 
> is *much* clearer than something like:
> 
> 	spinlock_init(&obj->lock);
> 	// init
> 	spinlock_deinit(&obj->lock);
> 
> Exactly because it has explicit scope. (also my deinit naming might not
> be optimal, it is ambiguous at best, probably confusing).
> 
> Not to mention that the scope things are far more robust vs error paths.

That said; you can just write:

	spin_lock_init(&obj->lock);
	__acquire_ctx_lock(&obj->lock);
	// init
	__release_ctx_lock(&obj->lock);

But I really don't see how that is 'better' in any way.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Bart Van Assche 3 weeks, 2 days ago
On 1/14/26 4:51 PM, Marco Elver wrote:
> To do so *only* within the initialization scope, we can cast the lock
> pointer to any reentrant type for the init assume/assert. Introduce a
> generic reentrant context lock type `struct __ctx_lock_init` and add
> `__inits_ctx_lock()` that casts the lock pointer to this type before
> assuming/asserting it.
> 
> This ensures that the initial "held" state is reentrant, allowing
> patterns like:
> 
>    mutex_init(&lock);
>    ...
>    mutex_lock(&lock);
> 
> to compile without false positives, and avoids having to make all
> context lock types reentrant outside an initialization scope.
> 
> The caveat here is missing real double-lock bugs right after init scope.
> However, this is a classic trade-off of avoiding false positives against
> (unlikely) false negatives.

The goal of lock context analysis is to detect as many locking bugs at
compile time as possible. As mentioned above, with this patch applied, a
class of real bugs won't be detected: recursive locking in the context
of the initialization of the synchronization object. Hence, I think this
patch is a step in the wrong direction.

Even without this patch there is a class of locking bugs that is not
detected, namely unpaired synchronization object release calls in the
context of synchronization object initialization. An example for struct
mutex of this type of incorrect use of the mutex API that won't be
detected even without this patch:

	mutex_init(&mutex);
	...
	mutex_unlock(&mutex);

My preference is to remove __assume_ctx_lock() from mutex_init() and
similar macros in .h files and to add __assume_ctx_lock() explicitly in
the .c code that needs it. This will reduce significantly the chance
that the locking bug mentioned above is not detected at compile time.
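
I.e., roughly (using the test code as an example):

	mutex_init(&d->mtx);
	__assume_ctx_lock(&d->mtx);	/* explicit: no concurrency yet */
	d->counter = 0;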

Bart.
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Marco Elver 3 weeks, 2 days ago
On Thu, 15 Jan 2026 at 18:22, Bart Van Assche <bvanassche@acm.org> wrote:
> On 1/14/26 4:51 PM, Marco Elver wrote:
> > To do so *only* within the initialization scope, we can cast the lock
> > pointer to any reentrant type for the init assume/assert. Introduce a
> > generic reentrant context lock type `struct __ctx_lock_init` and add
> > `__inits_ctx_lock()` that casts the lock pointer to this type before
> > assuming/asserting it.
> >
> > This ensures that the initial "held" state is reentrant, allowing
> > patterns like:
> >
> >    mutex_init(&lock);
> >    ...
> >    mutex_lock(&lock);
> >
> > to compile without false positives, and avoids having to make all
> > context lock types reentrant outside an initialization scope.
> >
> > The caveat here is missing real double-lock bugs right after init scope.
> > However, this is a classic trade-off of avoiding false positives against
> > (unlikely) false negatives.
>
> The goal of lock context analysis is to detect as many locking bugs at
> compile time as possible. As mentioned above, with this patch applied, a

An analysis that detects as many bugs as possible is useless if nobody
wants to use it.

> class of real bugs won't be detected: recursive locking in the context
> of the initialization of the synchronization object. Hence, I think this
> patch is a step in the wrong direction.
>
> Even without this patch there is a class of locking bugs that is not
> detected, namely unpaired synchronization object release calls in the
> context of synchronization object initialization. An example for struct
> mutex of this type of incorrect use of the mutex API that won't be
> detected even without this patch:
>
>         mutex_init(&mutex);
>         ...
>         mutex_unlock(&mutex);
>
> My preference is to remove __assume_ctx_lock() from mutex_init() and
> similar macros in .h files and to add __assume_ctx_lock() explicitly in
> the .c code that needs it. This will reduce significantly the chance
> that the locking bug mentioned above is not detected at compile time.

It's the fundamental mismatch between our philosophies here:
completeness vs. soundness, where I'm advocating for the former
(compile most code with few changes) and you for the latter (detect
all bugs), where one or the other approach trades false negatives
against false positives respectively. Our experience with kernel bug
detection and analysis tools (KASAN, KCSAN, KMSAN, UBSAN, syzkaller,
syzbot) has shown time and time again that false positives or an undue
amount of churn (in the form of awkward annotations) on developers is
not acceptable -- I tried hard to improve ergonomics and avoid false
positives of the current context analysis infra -- and the necessary
trade-off is all too often more false negatives, but with the benefit
of greater resulting coverage.

For the patch in question here, I took Steve's comment [1] to heart:

> If tooling can't handle a simple pattern of initializing a lock than
> taking it, that's a hard show stopper of adding that tooling.

[1] https://lore.kernel.org/all/20260109080715.0a390f6b@gandalf.local.home/

A corollary of this would be "If tooling can't handle a simple pattern
of initializing a lock and guarded members, that's a hard show
stopper". So we need to support both: initializing guarded members,
and being able to take the lock after initialization in the same
scope.

And to give you a bit of compromise: As with other tooling, the more
pragmatic approach is to claw back soundness once coverage and users
have reached critical mass (months or years.. hard to say).

We have to walk before we run.

Thanks,
-- Marco
Re: [PATCH tip/locking/core] compiler-context-analysis: Support immediate acquisition after initialization
Posted by Bart Van Assche 3 weeks, 2 days ago
On 1/15/26 10:58 AM, Marco Elver wrote:
> A corollary of this would be "If tooling can't handle a simple pattern
> of initializing a lock and guarded members, that's a hard show
> stopper".

That's your opinion. I'm not sure anyone else shares this opinion.

If an __assume_ctx_lock() annotation is missing from initialization
code, that will result in a clear and easy to fix error message.

Silently ignoring two classes of real bugs is a much worse choice in my
opinion than requesting __guarded_by() users to add an
__assume_ctx_lock() annotation in initialization code.

Bart.