Date: Thu, 6 Feb 2025 19:10:12 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-19-elver@google.com>
Subject: [PATCH RFC 18/24] locking/rwsem: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
    Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
    Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
    Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
    Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
    Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
    Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
    llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

Add support for Clang's capability analysis for rw_semaphore.

Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/rwsem.h                 | 56 +++++++++-------
 lib/test_capability-analysis.c        | 64 +++++++++++++++++++
 3 files changed, 97 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 3766ac466470..719986739b0e 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`).
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index c8b543d428b0..0c84e3072370 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -45,7 +45,7 @@
  * reduce the chance that they will share the same cacheline causing
  * cacheline bouncing problem.
  */
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
 	atomic_long_t count;
 	/*
 	 * Write owner or one of the read owners as well flags regarding
@@ -76,11 +76,13 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
 }
 
 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
 }
@@ -119,6 +121,7 @@ do {						\
 	static struct lock_class_key __key;		\
 							\
 	__init_rwsem((sem), #sem, &__key);		\
+	__assert_cap(sem);				\
 } while (0)
 
 /*
@@ -136,7 +139,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)
 
 #include <linux/rwbase_rt.h>
 
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
 	struct rwbase_rt	rwbase;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
@@ -160,6 +163,7 @@ do {						\
 	static struct lock_class_key __key;		\
 							\
 	__init_rwsem((sem), #sem, &__key);		\
+	__assert_cap(sem);				\
 } while (0)
 
 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
@@ -168,11 +172,13 @@ static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
 }
 
 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!rwsem_is_locked(sem));
 }
 
 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
 }
@@ -190,6 +196,7 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
  */
 
 static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held(sem);
@@ -198,6 +205,7 @@ static inline void rwsem_assert_held(const struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held_write(sem);
@@ -208,47 +216,47 @@ static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-extern void down_read(struct rw_semaphore *sem);
-extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
-extern int __must_check down_read_killable(struct rw_semaphore *sem);
+extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem);
+extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
+extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
 
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */
-extern int down_read_trylock(struct rw_semaphore *sem);
+extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(1, sem);
 
 /*
  * lock for writing
  */
-extern void down_write(struct rw_semaphore *sem);
-extern int __must_check down_write_killable(struct rw_semaphore *sem);
+extern void down_write(struct rw_semaphore *sem) __acquires(sem);
+extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem);
 
 /*
  * trylock for writing -- returns 1 if successful, 0 if contention
  */
-extern int down_write_trylock(struct rw_semaphore *sem);
+extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(1, sem);
 
 /*
  * release a read lock
  */
-extern void up_read(struct rw_semaphore *sem);
+extern void up_read(struct rw_semaphore *sem) __releases_shared(sem);
 
 /*
  * release a write lock
  */
-extern void up_write(struct rw_semaphore *sem);
+extern void up_write(struct rw_semaphore *sem) __releases(sem);
 
-DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T))
-DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T))
-DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T) == 0)
+DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock) == 0)
 
-DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T))
-DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T))
+DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock))
 
 /*
  * downgrade write lock to read lock
  */
-extern void downgrade_write(struct rw_semaphore *sem);
+extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 /*
@@ -264,11 +272,11 @@ extern void downgrade_write(struct rw_semaphore *sem);
  * lockdep_set_class() at lock initialization time.
  * See Documentation/locking/lockdep-design.rst for more details.)
  */
-extern void down_read_nested(struct rw_semaphore *sem, int subclass);
-extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void down_write_nested(struct rw_semaphore *sem, int subclass);
-extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
+extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem);
+extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem);
+extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem);
+extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem);
+extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem);
 
 # define down_write_nest_lock(sem, nest_lock)		\
 do {							\
@@ -282,8 +290,8 @@ do {							\
  * [ This API should be avoided as much as possible - the
  *   proper abstraction for this case is completions. ]
  */
-extern void down_read_non_owner(struct rw_semaphore *sem);
-extern void up_read_non_owner(struct rw_semaphore *sem);
+extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem);
+extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem);
 #else
 # define down_read_nested(sem, subclass)		down_read(sem)
 # define down_read_killable_nested(sem, subclass)	down_read_killable(sem)
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 8bc8c3e6cb5c..4638d220f474 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <linux/rwsem.h>
 #include
 #include
 #include
@@ -255,6 +256,69 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d)
 	write_sequnlock_irqrestore(&d->sl, flags);
 }
 
+struct test_rwsem_data {
+	struct rw_semaphore sem;
+	int counter __var_guarded_by(&sem);
+};
+
+static void __used test_rwsem_init(struct test_rwsem_data *d)
+{
+	init_rwsem(&d->sem);
+	d->counter = 0;
+}
+
+static void __used test_rwsem_reader(struct test_rwsem_data *d)
+{
+	down_read(&d->sem);
+	(void)d->counter;
+	up_read(&d->sem);
+
+	if (down_read_trylock(&d->sem)) {
+		(void)d->counter;
+		up_read(&d->sem);
+	}
+}
+
+static void __used test_rwsem_writer(struct test_rwsem_data *d)
+{
+	down_write(&d->sem);
+	d->counter++;
+	up_write(&d->sem);
+
+	down_write(&d->sem);
+	d->counter++;
+	downgrade_write(&d->sem);
+	(void)d->counter;
+	up_read(&d->sem);
+
+	if (down_write_trylock(&d->sem)) {
+		d->counter++;
+		up_write(&d->sem);
+	}
+}
+
+static void __used test_rwsem_assert(struct test_rwsem_data *d)
+{
+	rwsem_assert_held_nolockdep(&d->sem);
+	d->counter++;
+}
+
+static void __used test_rwsem_guard(struct test_rwsem_data *d)
+{
+	{ guard(rwsem_read)(&d->sem); (void)d->counter; }
+	{ guard(rwsem_write)(&d->sem); d->counter++; }
+}
+
+static void __used test_rwsem_cond_guard(struct test_rwsem_data *d)
+{
+	scoped_cond_guard(rwsem_read_try, return, &d->sem) {
+		(void)d->counter;
+	}
+	scoped_cond_guard(rwsem_write_try, return, &d->sem) {
+		d->counter++;
+	}
+}
+
 struct test_bit_spinlock_data {
 	unsigned long bits;
 	int counter __var_guarded_by(__bitlock(3, &bits));
-- 
2.48.1.502.g6dc24dfdaf-goog
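
An illustrative sketch, not part of the patch: the fragment below shows how a
hypothetical user of these annotations could look once the series is applied.
The struct and function names (foo_config, foo_set, foo_get) are invented; the
__var_guarded_by() attribute and the annotated down_read()/down_write()
primitives are the ones added above, mirroring the cases exercised in
lib/test_capability-analysis.c.

/*
 * Hypothetical example: a member guarded by an rw_semaphore. Names are
 * made up; only the annotations come from this series.
 */
struct foo_config {
	struct rw_semaphore sem;
	int value __var_guarded_by(&sem);
};

/* Exclusive writer: down_write() grants the capability, up_write() returns it. */
static void foo_set(struct foo_config *cfg, int v)
{
	down_write(&cfg->sem);
	cfg->value = v;
	up_write(&cfg->sem);
}

/* Shared reader: the read lock is sufficient to read the guarded member. */
static int foo_get(struct foo_config *cfg)
{
	int v;

	down_read(&cfg->sem);
	v = cfg->value;
	up_read(&cfg->sem);
	return v;
}

With the analysis enabled, accessing cfg->value outside such a critical
section would be expected to produce a compile-time warning.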