From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:09:55 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
Precedence: bulk
X-Mailing-List:
linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-2-elver@google.com>
Subject: [PATCH RFC 01/24] compiler_types: Move lock checking attributes to compiler-capability-analysis.h
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The conditional definition of lock checking macros and attributes is
about to become more complex. Factor them out into their own header for
better readability, and to make it obvious which features are supported
by which mode (currently only Sparse).

This is the first step towards generalizing to "capability analysis".

No functional change intended.
Signed-off-by: Marco Elver
---
 include/linux/compiler-capability-analysis.h | 32 ++++++++++++++++++++
 include/linux/compiler_types.h               | 18 ++---------
 2 files changed, 34 insertions(+), 16 deletions(-)
 create mode 100644 include/linux/compiler-capability-analysis.h

diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
new file mode 100644
index 000000000000..7546ddb83f86
--- /dev/null
+++ b/include/linux/compiler-capability-analysis.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros and attributes for compiler-based static capability analysis.
+ */
+
+#ifndef _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
+#define _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
+
+#ifdef __CHECKER__
+
+/* Sparse context/lock checking support. */
+# define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __acquires(x)		__attribute__((context(x,0,1)))
+# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
+# define __releases(x)		__attribute__((context(x,1,0)))
+# define __acquire(x)		__context__(x,1)
+# define __release(x)		__context__(x,-1)
+# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+
+#else /* !__CHECKER__ */
+
+# define __must_hold(x)
+# define __acquires(x)
+# define __cond_acquires(x)
+# define __releases(x)
+# define __acquire(x)	(void)0
+# define __release(x)	(void)0
+# define __cond_lock(x, c) (c)
+
+#endif /* __CHECKER__ */
+
+#endif /* _LINUX_COMPILER_CAPABILITY_ANALYSIS_H */
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 981cc3d7e3aa..4a458e41293c 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -24,6 +24,8 @@
 # define BTF_TYPE_TAG(value) /* nothing */
 #endif
 
+#include <linux/compiler-capability-analysis.h>
+
 /* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */
 #ifdef __CHECKER__
 /* address spaces */
@@ -34,14 +36,6 @@
 # define __rcu		__attribute__((noderef, address_space(__rcu)))
 static inline void __chk_user_ptr(const volatile void __user *ptr) { }
 static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
-/* context/locking */
-# define __must_hold(x)	__attribute__((context(x,1,1)))
-# define __acquires(x)	__attribute__((context(x,0,1)))
-# define __cond_acquires(x) __attribute__((context(x,0,-1)))
-# define __releases(x)	__attribute__((context(x,1,0)))
-# define __acquire(x)	__context__(x,1)
-# define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 /* other */
 # define __force	__attribute__((force))
 # define __nocast	__attribute__((nocast))
@@ -62,14 +56,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 
 # define __chk_user_ptr(x)	(void)0
 # define __chk_io_ptr(x)	(void)0
-/* context/locking */
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x,c) (c)
 /* other */
 # define __force
 # define __nocast
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:09:56 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-3-elver@google.com>
Subject: [PATCH RFC 02/24] compiler-capability-analysis: Rename __cond_lock() to __cond_acquire()
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Just like the pairing of the attribute __acquires() with a matching
function-like macro __acquire(), the attribute __cond_acquires() should
have a matching function-like macro __cond_acquire().

To be consistent, rename __cond_lock() to __cond_acquire().
Signed-off-by: Marco Elver
---
 drivers/net/wireless/intel/iwlwifi/iwl-trans.h     |  2 +-
 drivers/net/wireless/intel/iwlwifi/pcie/internal.h |  2 +-
 include/linux/compiler-capability-analysis.h       |  4 ++--
 include/linux/mm.h                                 |  6 +++---
 include/linux/rwlock.h                             |  4 ++--
 include/linux/rwlock_rt.h                          |  4 ++--
 include/linux/sched/signal.h                       |  2 +-
 include/linux/spinlock.h                           | 12 ++++++------
 include/linux/spinlock_rt.h                        |  6 +++---
 kernel/time/posix-timers.c                         |  2 +-
 tools/include/linux/compiler_types.h               |  4 ++--
 11 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index f6234065dbdd..560a5a899d1f 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -1136,7 +1136,7 @@ void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg,
 bool _iwl_trans_grab_nic_access(struct iwl_trans *trans);
 
 #define iwl_trans_grab_nic_access(trans)		\
-	__cond_lock(nic_access,				\
+	__cond_acquire(nic_access,			\
		    likely(_iwl_trans_grab_nic_access(trans)))
 
 void __releases(nic_access)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
index 856b7e9f717d..a1becf833dc5 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
@@ -560,7 +560,7 @@ void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions
 
 bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans);
 #define _iwl_trans_pcie_grab_nic_access(trans)		\
-	__cond_lock(nic_access_nobh,			\
+	__cond_acquire(nic_access_nobh,			\
		     likely(__iwl_trans_pcie_grab_nic_access(trans)))
 
 void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev);
diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
index 7546ddb83f86..dfed4e7e6ab8 100644
--- a/include/linux/compiler-capability-analysis.h
+++ b/include/linux/compiler-capability-analysis.h
@@ -15,7 +15,7 @@
 # define __releases(x)	__attribute__((context(x,1,0)))
 # define __acquire(x)	__context__(x,1)
 # define __release(x)	__context__(x,-1)
-# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+# define __cond_acquire(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
 
 #else /* !__CHECKER__ */
 
@@ -25,7 +25,7 @@
 # define __releases(x)
 # define __acquire(x)	(void)0
 # define __release(x)	(void)0
-# define __cond_lock(x, c) (c)
+# define __cond_acquire(x, c) (c)
 
 #endif /* __CHECKER__ */
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b1068ddcbb7..a2365f4d6826 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2738,7 +2738,7 @@ static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
				spinlock_t **ptl)
 {
	pte_t *ptep;
-	__cond_lock(*ptl, ptep = __get_locked_pte(mm, addr, ptl));
+	__cond_acquire(*ptl, ptep = __get_locked_pte(mm, addr, ptl));
	return ptep;
 }
 
@@ -3029,7 +3029,7 @@ static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr,
 {
	pte_t *pte;
 
-	__cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp));
+	__cond_acquire(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp));
	return pte;
 }
 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr)
@@ -3044,7 +3044,7 @@ static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
 {
	pte_t *pte;
 
-	__cond_lock(RCU, __cond_lock(*ptlp,
+	__cond_acquire(RCU, __cond_acquire(*ptlp,
			pte = __pte_offset_map_lock(mm, pmd, addr, ptlp)));
	return pte;
 }
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 5b87c6f4a243..58c346947aa2 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -49,8 +49,8 @@ do {								\
  * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various
  * methods are defined as nops in the case they are not required.
  */
-#define read_trylock(lock)	__cond_lock(lock, _raw_read_trylock(lock))
-#define write_trylock(lock)	__cond_lock(lock, _raw_write_trylock(lock))
+#define read_trylock(lock)	__cond_acquire(lock, _raw_read_trylock(lock))
+#define write_trylock(lock)	__cond_acquire(lock, _raw_write_trylock(lock))
 
 #define write_lock(lock)	_raw_write_lock(lock)
 #define read_lock(lock)		_raw_read_lock(lock)
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 7d81fc6918ee..5320b4b66405 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -55,7 +55,7 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock)
		flags = 0;			\
	} while (0)
 
-#define read_trylock(lock)	__cond_lock(lock, rt_read_trylock(lock))
+#define read_trylock(lock)	__cond_acquire(lock, rt_read_trylock(lock))
 
 static __always_inline void read_unlock(rwlock_t *rwlock)
 {
@@ -111,7 +111,7 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock)
		flags = 0;			\
	} while (0)
 
-#define write_trylock(lock)	__cond_lock(lock, rt_write_trylock(lock))
+#define write_trylock(lock)	__cond_acquire(lock, rt_write_trylock(lock))
 
 #define write_trylock_irqsave(lock, flags)	\
 ({						\
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index d5d03d919df8..3304cce4b1bf 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -741,7 +741,7 @@ static inline struct sighand_struct *lock_task_sighand(struct task_struct *task,
	struct sighand_struct *ret;
 
	ret = __lock_task_sighand(task, flags);
-	(void)__cond_lock(&task->sighand->siglock, ret);
+	(void)__cond_acquire(&task->sighand->siglock, ret);
	return ret;
 }
 
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 63dd8cf3c3c2..678e6f0679a1 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -212,7 +212,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
  * various methods are defined as nops in the case they are not
  * required.
  */
-#define raw_spin_trylock(lock)	__cond_lock(lock, _raw_spin_trylock(lock))
+#define raw_spin_trylock(lock)	__cond_acquire(lock, _raw_spin_trylock(lock))
 
 #define raw_spin_lock(lock)	_raw_spin_lock(lock)
 
@@ -284,7 +284,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 #define raw_spin_unlock_bh(lock)	_raw_spin_unlock_bh(lock)
 
 #define raw_spin_trylock_bh(lock) \
-	__cond_lock(lock, _raw_spin_trylock_bh(lock))
+	__cond_acquire(lock, _raw_spin_trylock_bh(lock))
 
 #define raw_spin_trylock_irq(lock) \
 ({ \
@@ -499,21 +499,21 @@ static inline int rwlock_needbreak(rwlock_t *lock)
  */
 extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
 #define atomic_dec_and_lock(atomic, lock) \
-		__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
+		__cond_acquire(lock, _atomic_dec_and_lock(atomic, lock))
 
 extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
					unsigned long *flags);
 #define atomic_dec_and_lock_irqsave(atomic, lock, flags) \
-		__cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)))
+		__cond_acquire(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)))
 
 extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock);
 #define atomic_dec_and_raw_lock(atomic, lock) \
-		__cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock))
+		__cond_acquire(lock, _atomic_dec_and_raw_lock(atomic, lock))
 
 extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
					    unsigned long *flags);
 #define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \
-		__cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)))
+		__cond_acquire(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)))
 
 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask,
			     size_t max_size, unsigned int cpu_mult,
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index f6499c37157d..eaad4dd2baac 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -123,13 +123,13 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 }
 
 #define spin_trylock(lock)			\
-	__cond_lock(lock, rt_spin_trylock(lock))
+	__cond_acquire(lock, rt_spin_trylock(lock))
 
 #define spin_trylock_bh(lock)			\
-	__cond_lock(lock, rt_spin_trylock_bh(lock))
+	__cond_acquire(lock, rt_spin_trylock_bh(lock))
 
 #define spin_trylock_irq(lock)			\
-	__cond_lock(lock, rt_spin_trylock(lock))
+	__cond_acquire(lock, rt_spin_trylock(lock))
 
 #define spin_trylock_irqsave(lock, flags)	\
 ({						\
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 1b675aee99a9..dbada41c10ad 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -63,7 +63,7 @@ static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags);
 
 #define lock_timer(tid, flags)						\
 ({	struct k_itimer *__timr;					\
-	__cond_lock(&__timr->it_lock, __timr = __lock_timer(tid, flags)); \
+	__cond_acquire(&__timr->it_lock, __timr = __lock_timer(tid, flags)); \
	__timr;								\
 })
 
diff --git a/tools/include/linux/compiler_types.h b/tools/include/linux/compiler_types.h
index d09f9dc172a4..b1db30e510d0 100644
--- a/tools/include/linux/compiler_types.h
+++ b/tools/include/linux/compiler_types.h
@@ -20,7 +20,7 @@
 # define __releases(x)	__attribute__((context(x,1,0)))
 # define __acquire(x)	__context__(x,1)
 # define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
+# define __cond_acquire(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 #else /* __CHECKER__ */
 /* context/locking */
 # define __must_hold(x)
@@ -28,7 +28,7 @@
 # define __releases(x)
 # define __acquire(x)	(void)0
 # define __release(x)	(void)0
-# define __cond_lock(x,c) (c)
+# define __cond_acquire(x,c) (c)
 #endif /* __CHECKER__ */
 
 /* Compiler specific macros. */
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:09:57 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-4-elver@google.com>
Subject: [PATCH RFC 03/24] compiler-capability-analysis: Add infrastructure for Clang's capability analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Capability analysis is a C language extension, which enables statically
checking that user-definable "capabilities" are acquired and released
where required. An obvious application is lock-safety checking for the
kernel's various synchronization primitives (each of which represents a
"capability"), and checking that locking rules are not violated.

Clang originally called the feature "Thread Safety Analysis" [1], with
some terminology still using the thread-safety-analysis-only names. This
was later changed and the feature became more flexible, gaining the
ability to define custom "capabilities". Its foundations can be found in
"capability systems" [2], used to specify the permissibility of
operations to depend on some capability being held (or not held).
[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
[2] https://www.cs.cornell.edu/talc/papers/capabilities.pdf

Because the feature is not just able to express capabilities related to
synchronization primitives, the naming chosen for the kernel departs
from Clang's initial "Thread Safety" nomenclature and refers to the
feature as "Capability Analysis" to avoid confusion. The implementation
still makes references to the older terminology in some places, such as
`-Wthread-safety` being the warning-enabling option that also still
appears in diagnostic messages.

See more details in the kernel-doc documentation added in this and the
subsequent changes.

[ RFC Note: A Clang version that supports -Wthread-safety-addressof is
  recommended, but not required:
  https://github.com/llvm/llvm-project/pull/123063
  Should this patch series reach non-RFC stage, it is planned to be
  committed to Clang before. ]

Signed-off-by: Marco Elver
---
 Makefile                                     |   1 +
 include/linux/compiler-capability-analysis.h | 385 ++++++++++++++++++-
 lib/Kconfig.debug                            |  29 ++
 scripts/Makefile.capability-analysis         |   5 +
 scripts/Makefile.lib                         |  10 +
 5 files changed, 423 insertions(+), 7 deletions(-)
 create mode 100644 scripts/Makefile.capability-analysis

diff --git a/Makefile b/Makefile
index 9e0d63d9d94b..e89b9f7d4a08 100644
--- a/Makefile
+++ b/Makefile
@@ -1082,6 +1082,7 @@ include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
 include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
 include-$(CONFIG_AUTOFDO_CLANG)	+= scripts/Makefile.autofdo
 include-$(CONFIG_PROPELLER_CLANG) += scripts/Makefile.propeller
+include-$(CONFIG_WARN_CAPABILITY_ANALYSIS) += scripts/Makefile.capability-analysis
 include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins
 
 include $(addprefix $(srctree)/, $(include-y))
diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
index dfed4e7e6ab8..ca63b6513dc3 100644
--- a/include/linux/compiler-capability-analysis.h
+++ 
b/include/linux/compiler-capability-analysis.h @@ -6,26 +6,397 @@ #ifndef _LINUX_COMPILER_CAPABILITY_ANALYSIS_H #define _LINUX_COMPILER_CAPABILITY_ANALYSIS_H =20 +#if defined(WARN_CAPABILITY_ANALYSIS) + +/* + * The below attributes are used to define new capability types. Internal = only. + */ +# define __cap_type(name) __attribute__((capability(#name))) +# define __acquires_cap(var) __attribute__((acquire_capability(var))) +# define __acquires_shared_cap(var) __attribute__((acquire_shared_capabil= ity(var))) +# define __try_acquires_cap(ret, var) __attribute__((try_acquire_capabili= ty(ret, var))) +# define __try_acquires_shared_cap(ret, var) __attribute__((try_acquire_sh= ared_capability(ret, var))) +# define __releases_cap(var) __attribute__((release_capability(var))) +# define __releases_shared_cap(var) __attribute__((release_shared_capabil= ity(var))) +# define __asserts_cap(var) __attribute__((assert_capability(var))) +# define __asserts_shared_cap(var) __attribute__((assert_shared_capabilit= y(var))) +# define __returns_cap(var) __attribute__((lock_returned(var))) + +/* + * The below are used to annotate code being checked. Internal only. + */ +# define __excludes_cap(var) __attribute__((locks_excluded(var))) +# define __requires_cap(var) __attribute__((requires_capability(var))) +# define __requires_shared_cap(var) __attribute__((requires_shared_capabil= ity(var))) + +/** + * __var_guarded_by - struct member and globals attribute, declares variab= le + * protected by capability + * @var: the capability instance that guards the member or global + * + * Declares that the struct member or global variable must be guarded by t= he + * given capability @var. Read operations on the data require shared acces= s, + * while write operations require exclusive access. + * + * .. 
code-block:: c + * + * struct some_state { + * spinlock_t lock; + * long counter __var_guarded_by(&lock); + * }; + */ +# define __var_guarded_by(var) __attribute__((guarded_by(var))) + +/** + * __ref_guarded_by - struct member and globals attribute, declares pointe= d-to + * data is protected by capability + * @var: the capability instance that guards the member or global + * + * Declares that the data pointed to by the struct member pointer or global + * pointer must be guarded by the given capability @var. Read operations o= n the + * data require shared access, while write operations require exclusive ac= cess. + * + * .. code-block:: c + * + * struct some_state { + * spinlock_t lock; + * long *counter __ref_guarded_by(&lock); + * }; + */ +# define __ref_guarded_by(var) __attribute__((pt_guarded_by(var))) + +/** + * struct_with_capability() - declare or define a capability struct + * @name: struct name + * + * Helper to declare or define a struct type with capability of the same n= ame. + * + * .. code-block:: c + * + * struct_with_capability(my_handle) { + * int foo; + * long bar; + * }; + * + * struct some_state { + * ... + * }; + * // ... declared elsewhere ... + * struct_with_capability(some_state); + * + * Note: The implementation defines several helper functions that can acqu= ire, + * release, and assert the capability. 
+ */
+# define struct_with_capability(name) \
+	struct __cap_type(name) name; \
+	static __always_inline void __acquire_cap(const struct name *var) \
+		__attribute__((overloadable)) __no_capability_analysis __acquires_cap(var) { } \
+	static __always_inline void __acquire_shared_cap(const struct name *var) \
+		__attribute__((overloadable)) __no_capability_analysis __acquires_shared_cap(var) { } \
+	static __always_inline bool __try_acquire_cap(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_capability_analysis __try_acquires_cap(1, var) \
+	{ return ret; } \
+	static __always_inline bool __try_acquire_shared_cap(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_capability_analysis __try_acquires_shared_cap(1, var) \
+	{ return ret; } \
+	static __always_inline void __release_cap(const struct name *var) \
+		__attribute__((overloadable)) __no_capability_analysis __releases_cap(var) { } \
+	static __always_inline void __release_shared_cap(const struct name *var) \
+		__attribute__((overloadable)) __no_capability_analysis __releases_shared_cap(var) { } \
+	static __always_inline void __assert_cap(const struct name *var) \
+		__attribute__((overloadable)) __asserts_cap(var) { } \
+	static __always_inline void __assert_shared_cap(const struct name *var) \
+		__attribute__((overloadable)) __asserts_shared_cap(var) { } \
+	struct name
+
+/**
+ * disable_capability_analysis() - disables capability analysis
+ *
+ * Disables capability analysis. Must be paired with a later
+ * enable_capability_analysis().
+ */
+# define disable_capability_analysis() \
+	__diag_push(); \
+	__diag_ignore_all("-Wunknown-warning-option", "") \
+	__diag_ignore_all("-Wthread-safety", "") \
+	__diag_ignore_all("-Wthread-safety-addressof", "")
+
+/**
+ * enable_capability_analysis() - re-enables capability analysis
+ *
+ * Re-enables capability analysis. Must be paired with a prior
+ * disable_capability_analysis().
+ */
+# define enable_capability_analysis() __diag_pop()
+
+/**
+ * __no_capability_analysis - function attribute, disables capability analysis
+ *
+ * Function attribute denoting that capability analysis is disabled for the
+ * whole function. Prefer use of `capability_unsafe()` where possible.
+ */
+# define __no_capability_analysis __attribute__((no_thread_safety_analysis))
+
+#else /* !WARN_CAPABILITY_ANALYSIS */
+
+# define __cap_type(name)
+# define __acquires_cap(var)
+# define __acquires_shared_cap(var)
+# define __try_acquires_cap(ret, var)
+# define __try_acquires_shared_cap(ret, var)
+# define __releases_cap(var)
+# define __releases_shared_cap(var)
+# define __asserts_cap(var)
+# define __asserts_shared_cap(var)
+# define __returns_cap(var)
+# define __var_guarded_by(var)
+# define __ref_guarded_by(var)
+# define __excludes_cap(var)
+# define __requires_cap(var)
+# define __requires_shared_cap(var)
+# define __acquire_cap(var)			do { } while (0)
+# define __acquire_shared_cap(var)		do { } while (0)
+# define __try_acquire_cap(var, ret)		(ret)
+# define __try_acquire_shared_cap(var, ret)	(ret)
+# define __release_cap(var)			do { } while (0)
+# define __release_shared_cap(var)		do { } while (0)
+# define __assert_cap(var)			do { (void)(var); } while (0)
+# define __assert_shared_cap(var)		do { (void)(var); } while (0)
+# define struct_with_capability(name)		struct name
+# define disable_capability_analysis()
+# define enable_capability_analysis()
+# define __no_capability_analysis
+
+#endif /* WARN_CAPABILITY_ANALYSIS */
+
+/**
+ * capability_unsafe() - disable capability checking for contained code
+ *
+ * Disables capability checking for contained statements or expression.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_data {
+ *		spinlock_t lock;
+ *		int counter __var_guarded_by(&lock);
+ *	};
+ *
+ *	int foo(struct some_data *d)
+ *	{
+ *		// ...
+ *		// other code that is still checked ...
+ *		// ...
+ *		return capability_unsafe(d->counter);
+ *	}
+ */
+#define capability_unsafe(...) \
+({ \
+	disable_capability_analysis(); \
+	__VA_ARGS__; \
+	enable_capability_analysis() \
+})
+
+/**
+ * token_capability() - declare an abstract global capability instance
+ * @name: token capability name
+ *
+ * Helper that declares an abstract global capability instance @name that can be
+ * used as a token capability, but not backed by a real data structure (linker
+ * error if accidentally referenced). The type name is `__capability_@name`.
+ */
+#define token_capability(name) \
+	struct_with_capability(__capability_##name) {}; \
+	extern const struct __capability_##name *name
+
+/**
+ * token_capability_instance() - declare another instance of a global capability
+ * @cap: token capability previously declared with token_capability()
+ * @name: name of additional global capability instance
+ *
+ * Helper that declares an additional instance @name of the same token
+ * capability class @cap. This is helpful where multiple related token
+ * capabilities are declared, as it also allows using the same underlying type
+ * (`__capability_@cap`) as function arguments.
+ */
+#define token_capability_instance(cap, name) \
+	extern const struct __capability_##cap *name
+
+/*
+ * Common keywords for static capability analysis. Both Clang's capability
+ * analysis and Sparse's context tracking are currently supported.
+ */
 #ifdef __CHECKER__
 
 /* Sparse context/lock checking support. */
 # define __must_hold(x)	__attribute__((context(x,1,1)))
+# define __must_not_hold(x)
 # define __acquires(x)	__attribute__((context(x,0,1)))
 # define __cond_acquires(x)	__attribute__((context(x,0,-1)))
 # define __releases(x)	__attribute__((context(x,1,0)))
 # define __acquire(x)	__context__(x,1)
 # define __release(x)	__context__(x,-1)
 # define __cond_acquire(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+/* For Sparse, there's no distinction between exclusive and shared locks. */
+# define __must_hold_shared	__must_hold
+# define __acquires_shared	__acquires
+# define __cond_acquires_shared	__cond_acquires
+# define __releases_shared	__releases
+# define __acquire_shared	__acquire
+# define __release_shared	__release
+# define __cond_acquire_shared	__cond_acquire
 
 #else /* !__CHECKER__ */
 
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_acquire(x, c)	(c)
+/**
+ * __must_hold() - function attribute, caller must hold exclusive capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given capability
+ * instance @x exclusively.
+ */
+# define __must_hold(x)	__requires_cap(x)
+
+/**
+ * __must_not_hold() - function attribute, caller must not hold capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must not hold the given
+ * capability instance @x.
+ */
+# define __must_not_hold(x)	__excludes_cap(x)
+
+/**
+ * __acquires() - function attribute, function acquires capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * capability instance @x exclusively, but does not release it.
+ */
+# define __acquires(x)	__acquires_cap(x)
+
+/**
+ * __cond_acquires() - function attribute, function conditionally
+ *                     acquires a capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given capability instance @x exclusively, but does not release it.
+ */
+# define __cond_acquires(x)	__try_acquires_cap(1, x)
+
+/**
+ * __releases() - function attribute, function releases a capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function releases the given capability
+ * instance @x exclusively. The capability must be held on entry.
+ */
+# define __releases(x)	__releases_cap(x)
+
+/**
+ * __acquire() - function to acquire capability exclusively
+ * @x: capability instance pointer
+ *
+ * No-op function that acquires the given capability instance @x exclusively.
+ */
+# define __acquire(x)	__acquire_cap(x)
+
+/**
+ * __release() - function to release capability exclusively
+ * @x: capability instance pointer
+ *
+ * No-op function that releases the given capability instance @x.
+ */
+# define __release(x)	__release_cap(x)
+
+/**
+ * __cond_acquire() - function that conditionally acquires a capability
+ *                    exclusively
+ * @x: capability instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires capability instance @x
+ * exclusively, if the boolean expression @c is true. The result of @c is the
+ * return value, to be able to create a capability-enabled interface; for
+ * example:
+ *
+ * .. code-block:: c
+ *
+ *	#define spin_trylock(l) __cond_acquire(l, _spin_trylock(l))
+ */
+# define __cond_acquire(x, c)	__try_acquire_cap(x, c)
+
+/**
+ * __must_hold_shared() - function attribute, caller must hold shared capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given capability
+ * instance @x with shared access.
+ */
+# define __must_hold_shared(x)	__requires_shared_cap(x)
+
+/**
+ * __acquires_shared() - function attribute, function acquires capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * capability instance @x with shared access, but does not release it.
+ */
+# define __acquires_shared(x)	__acquires_shared_cap(x)
+
+/**
+ * __cond_acquires_shared() - function attribute, function conditionally
+ *                            acquires a capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given capability instance @x with shared access, but does not release it.
+ */
+# define __cond_acquires_shared(x)	__try_acquires_shared_cap(1, x)
+
+/**
+ * __releases_shared() - function attribute, function releases a
+ *                       capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function releases the given capability
+ * instance @x with shared access. The capability must be held on entry.
+ */
+# define __releases_shared(x)	__releases_shared_cap(x)
+
+/**
+ * __acquire_shared() - function to acquire capability shared
+ * @x: capability instance pointer
+ *
+ * No-op function that acquires the given capability instance @x with shared
+ * access.
+ */
+# define __acquire_shared(x)	__acquire_shared_cap(x)
+
+/**
+ * __release_shared() - function to release capability shared
+ * @x: capability instance pointer
+ *
+ * No-op function that releases the given capability instance @x with shared
+ * access.
+ */
+# define __release_shared(x)	__release_shared_cap(x)
+
+/**
+ * __cond_acquire_shared() - function that conditionally acquires a capability
+ *                           shared
+ * @x: capability instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires capability instance @x with shared
+ * access, if the boolean expression @c is true. The result of @c is the return
+ * value, to be able to create a capability-enabled interface.
+ */
+# define __cond_acquire_shared(x, c)	__try_acquire_shared_cap(x, c)
 
 #endif /* __CHECKER__ */
 

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1af972a92d06..801ad28fe6d7 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -603,6 +603,35 @@ config DEBUG_FORCE_WEAK_PER_CPU
 	  To ensure that generic code follows the above rules, this option
 	  forces all percpu variables to be defined as weak.
 
+config WARN_CAPABILITY_ANALYSIS
+	bool "Compiler capability-analysis warnings"
+	depends on CC_IS_CLANG && $(cc-option,-Wthread-safety -fexperimental-late-parse-attributes)
+	# Branch profiling re-defines "if", which messes with the compiler's
+	# ability to analyze __cond_acquire(..), resulting in false positives.
+	depends on !TRACE_BRANCH_PROFILING
+	default y
+	help
+	  Capability analysis is a C language extension, which enables
+	  statically checking that user-definable "capabilities" are acquired
+	  and released where required.
+
+	  Clang's name for the feature, "Thread Safety Analysis", is its
+	  original name; the feature was later expanded into a generic
+	  "Capability Analysis" framework.
+
+	  Produces warnings by default. Select CONFIG_WERROR if you wish to
+	  turn these warnings into errors.
+
+config WARN_CAPABILITY_ANALYSIS_ALL
+	bool "Enable capability analysis for all source files"
+	depends on WARN_CAPABILITY_ANALYSIS
+	depends on EXPERT && !COMPILE_TEST
+	help
+	  Enable tree-wide capability analysis. This is likely to produce a
+	  large number of false positives - enable at your own risk.
+
+	  If unsure, say N.
+
 endmenu # "Compiler options"
 
 menu "Generic Kernel Debugging Instruments"

diff --git a/scripts/Makefile.capability-analysis b/scripts/Makefile.capability-analysis
new file mode 100644
index 000000000000..71383812201c
--- /dev/null
+++ b/scripts/Makefile.capability-analysis
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+export CFLAGS_CAPABILITY_ANALYSIS := -DWARN_CAPABILITY_ANALYSIS \
+	-fexperimental-late-parse-attributes -Wthread-safety \
+	$(call cc-option,-Wthread-safety-addressof)

diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index ad55ef201aac..5bf37af96cdf 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -191,6 +191,16 @@ _c_flags += $(if $(patsubst n%,, \
 		-D__KCSAN_INSTRUMENT_BARRIERS__)
 endif
 
+#
+# Enable capability analysis flags only where explicitly opted in.
+# (depends on variables CAPABILITY_ANALYSIS_obj.o, CAPABILITY_ANALYSIS)
+#
+ifeq ($(CONFIG_WARN_CAPABILITY_ANALYSIS),y)
+_c_flags += $(if $(patsubst n%,, \
+	$(CAPABILITY_ANALYSIS_$(target-stem).o)$(CAPABILITY_ANALYSIS)$(if $(is-kernel-object),$(CONFIG_WARN_CAPABILITY_ANALYSIS_ALL))), \
+	$(CFLAGS_CAPABILITY_ANALYSIS))
+endif
+
 #
 # Enable AutoFDO build flags except some files or directories we don't want to
 # enable (depends on variables AUTOFDO_PROFILE_obj.o and AUTOFDO_PROFILE).
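To make the opt-in behavior above concrete, the value-preserving semantics of
`capability_unsafe()` can be sketched in stand-alone C. This is a minimal
sketch, not kernel code: the no-op macro definitions mirror the
`!WARN_CAPABILITY_ANALYSIS` fallback, and `struct some_data` /
`read_counter_unchecked()` are hypothetical illustrations.

```c
#include <assert.h>

/* No-op fallbacks, mirroring the !WARN_CAPABILITY_ANALYSIS case. */
#define disable_capability_analysis()
#define enable_capability_analysis()

/*
 * Value-preserving wrapper: a GNU statement expression whose result is
 * the last contained expression statement.
 */
#define capability_unsafe(...) \
({ \
	disable_capability_analysis(); \
	__VA_ARGS__; \
	enable_capability_analysis() \
})

/* Hypothetical example type; the guarded-by association is conceptual here. */
struct some_data {
	long counter; /* in the kernel: __var_guarded_by(&lock) */
};

/* Deliberately unchecked read, e.g. for a racy diagnostic counter. */
static long read_counter_unchecked(const struct some_data *d)
{
	return capability_unsafe(d->counter);
}
```

Because the whole macro is a statement expression, `capability_unsafe(expr)`
can appear anywhere an expression can, which is what allows wrapping a single
guarded access without restructuring the surrounding code.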
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:09:58 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-5-elver@google.com>
Subject: [PATCH RFC 04/24] compiler-capability-analysis: Add test stub
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

Add a simple test stub where we will add common supported patterns that
should not generate false positives for each newly supported capability.

Signed-off-by: Marco Elver
---
 lib/Kconfig.debug              | 14 ++++++++++++++
 lib/Makefile                   |  3 +++
 lib/test_capability-analysis.c | 18 ++++++++++++++++++
 3 files changed, 35 insertions(+)
 create mode 100644 lib/test_capability-analysis.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 801ad28fe6d7..b76fa3dc59ec 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2764,6 +2764,20 @@ config LINEAR_RANGES_TEST
 
 	  If unsure, say N.
 
+config CAPABILITY_ANALYSIS_TEST
+	bool "Compiler capability-analysis warnings test"
+	depends on EXPERT
+	help
+	  This builds the test for compiler-based capability analysis.
+	  The test does not add executable code to the kernel, but is meant
+	  to test that common patterns supported by the analysis do not
+	  result in false positive warnings.
+
+	  When adding support for new capabilities, it is strongly recommended
+	  to add supported patterns to this test.
+
+	  If unsure, say N.
+
 config CMDLINE_KUNIT_TEST
 	tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS
 	depends on KUNIT

diff --git a/lib/Makefile b/lib/Makefile
index d5cfc7afbbb8..1dbb59175eb0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -394,6 +394,9 @@ obj-$(CONFIG_CRC_KUNIT_TEST) += crc_kunit.o
 obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
 obj-$(CONFIG_USERCOPY_KUNIT_TEST) += usercopy_kunit.o
 
+CAPABILITY_ANALYSIS_test_capability-analysis.o := y
+obj-$(CONFIG_CAPABILITY_ANALYSIS_TEST) += test_capability-analysis.o
+
 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
 
 obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o

diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
new file mode 100644
index 000000000000..a0adacce30ff
--- /dev/null
+++ b/lib/test_capability-analysis.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Compile-only tests for common patterns that should not generate false
+ * positive errors when compiled with Clang's capability analysis.
+ */
+
+#include
+
+/*
+ * Test that helper macros work as expected.
+ */
+static void __used test_common_helpers(void)
+{
+	BUILD_BUG_ON(capability_unsafe(3) != 3);		/* plain expression */
+	BUILD_BUG_ON(capability_unsafe((void)2; 3;) != 3);	/* does not swallow semi-colon */
+	BUILD_BUG_ON(capability_unsafe((void)2, 3) != 3);	/* does not swallow commas */
+	capability_unsafe(do { } while (0));	/* works with void statements */
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:09:59 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-6-elver@google.com>
Subject: [PATCH RFC 05/24] Documentation: Add documentation for
 Compiler-Based Capability Analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

Adds documentation in Documentation/dev-tools/capability-analysis.rst,
adds it to the index, and cross-references it from Sparse's document.
Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst         | 147 ++++++++++++++++++
 Documentation/dev-tools/index.rst             |   1 +
 Documentation/dev-tools/sparse.rst            |   4 +
 3 files changed, 152 insertions(+)
 create mode 100644 Documentation/dev-tools/capability-analysis.rst

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
new file mode 100644
index 000000000000..2211af90e01b
--- /dev/null
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -0,0 +1,147 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright (C) 2025, Google LLC.
+
+.. _capability-analysis:
+
+Compiler-Based Capability Analysis
+==================================
+
+Capability analysis is a C language extension, which enables statically
+checking that user-definable "capabilities" are acquired and released where
+required. An obvious application is lock-safety checking for the kernel's
+various synchronization primitives (each of which represents a "capability"),
+and checking that locking rules are not violated.
+
+The Clang compiler currently supports the full set of capability analysis
+features. To enable it for Clang, configure the kernel with::
+
+	CONFIG_WARN_CAPABILITY_ANALYSIS=y
+
+The analysis is *opt-in by default*, and requires declaring which modules and
+subsystems should be analyzed in the respective `Makefile`::
+
+	CAPABILITY_ANALYSIS_mymodule.o := y
+
+Or for all translation units in the directory::
+
+	CAPABILITY_ANALYSIS := y
+
+It is possible to enable the analysis tree-wide; however, this currently
+results in numerous false positive warnings and is *not* generally
+recommended::
+
+	CONFIG_WARN_CAPABILITY_ANALYSIS_ALL=y
+
+Independent of the above Clang support, a subset of the analysis is supported
+by :ref:`Sparse <sparse>`, with weaker guarantees (fewer false positives with
+tree-wide analysis, but more false negatives).
Compared to Sparse, Clang's
+analysis is more complete.
+
+Programming Model
+-----------------
+
+The below describes the programming model around using capability-enabled
+types.
+
+.. note::
+   Enabling capability analysis can be seen as enabling a dialect of Linux C
+   with a Capability System. Some valid patterns involving complex
+   control-flow are constrained (such as conditional acquisition and later
+   conditional release in the same function, or returning pointers to
+   capabilities from functions).
+
+Capability analysis is a way to specify permissibility of operations to depend
+on capabilities being held (or not held). Typically we are interested in
+protecting data and code by requiring some capability to be held, for example a
+specific lock. The analysis ensures that the caller cannot perform the
+operation without holding the appropriate capability.
+
+Capabilities are associated with named structs, along with functions that
+operate on capability-enabled struct instances to acquire and release the
+associated capability.
+
+Capabilities can be held either exclusively or shared. This mechanism allows
+assigning more precise privileges when holding a capability, typically to
+distinguish where a thread may only read (shared) or also write (exclusive) to
+guarded data.
+
+The set of capabilities that are actually held by a given thread at a given
+point in program execution is a run-time concept. The static analysis works by
+calculating an approximation of that set, called the capability environment.
+The capability environment is calculated for every program point, and describes
+the set of capabilities that are statically known to be held, or not held, at
+that particular point. This environment is a conservative approximation of the
+full set of capabilities that will actually be held by a thread at run-time.
+
+More details are also documented `here
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_.
+
+.. note::
+   Unlike Sparse's context tracking analysis, Clang's analysis explicitly does
+   not infer capabilities acquired or released by inline functions. It requires
+   explicit annotations to (a) assert that it's not a bug if a capability is
+   released or acquired, and (b) retain consistency between inline and
+   non-inline function declarations.
+
+Supported Kernel Primitives
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Currently the following synchronization primitives are supported:
+
+For capabilities with an initialization function (e.g., `spin_lock_init()`),
+calling this function on the capability instance before initializing any
+guarded members or globals prevents the compiler from issuing warnings about
+unguarded initialization.
+
+Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
+capability analysis that the associated synchronization primitive is held after
+the assertion. This avoids false positives in complex control-flow scenarios
+and encourages the use of Lockdep where static analysis is limited. For
+example, this is useful when a function doesn't *always* require a lock, making
+`__must_hold()` inappropriate.
+
+Keywords
+~~~~~~~~
+
+.. kernel-doc:: include/linux/compiler-capability-analysis.h
+   :identifiers: struct_with_capability
+                 token_capability token_capability_instance
+                 __var_guarded_by __ref_guarded_by
+                 __must_hold
+                 __must_not_hold
+                 __acquires
+                 __cond_acquires
+                 __releases
+                 __must_hold_shared
+                 __acquires_shared
+                 __cond_acquires_shared
+                 __releases_shared
+                 __acquire
+                 __release
+                 __cond_acquire
+                 __acquire_shared
+                 __release_shared
+                 __cond_acquire_shared
+                 capability_unsafe
+                 __no_capability_analysis
+                 disable_capability_analysis enable_capability_analysis
+
+Background
+----------
+
+Clang originally called the feature `Thread Safety Analysis
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_, with some
+terminology still using the thread-safety-analysis-only names.
This was la= ter +changed and the feature become more flexible, gaining the ability to define +custom "capabilities". + +Indeed, its foundations can be found in `capability systems +`_, used to speci= fy +the permissibility of operations to depend on some capability being held (= or +not held). + +Because the feature is not just able to express capabilities related to +synchronization primitives, the naming chosen for the kernel departs from +Clang's initial "Thread Safety" nomenclature and refers to the feature as +"Capability Analysis" to avoid confusion. The implementation still makes +references to the older terminology in some places, such as `-Wthread-safe= ty` +being the warning enabled option that also still appears in diagnostic +messages. diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/in= dex.rst index 65c54b27a60b..62ac23f797cd 100644 --- a/Documentation/dev-tools/index.rst +++ b/Documentation/dev-tools/index.rst @@ -18,6 +18,7 @@ Documentation/process/debugging/index.rst :maxdepth: 2 =20 testing-overview + capability-analysis checkpatch clang-format coccinelle diff --git a/Documentation/dev-tools/sparse.rst b/Documentation/dev-tools/s= parse.rst index dc791c8d84d1..8c2077834b6f 100644 --- a/Documentation/dev-tools/sparse.rst +++ b/Documentation/dev-tools/sparse.rst @@ -2,6 +2,8 @@ .. Copyright 2004 Pavel Machek .. Copyright 2006 Bob Copeland =20 +.. _sparse: + Sparse =3D=3D=3D=3D=3D=3D =20 @@ -72,6 +74,8 @@ releasing the lock inside the function in a balanced way,= no annotation is needed. The three annotations above are for cases where sparse would otherwise report a context imbalance. =20 +Also see :ref:`Compiler-Based Capability Analysis `. 
+
 Getting sparse
 --------------
 
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:10:00 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-7-elver@google.com>
Subject: [PATCH RFC 06/24] checkpatch: Warn about capability_unsafe() without
 comment
From: Marco Elver <elver@google.com>

Warn about applications of capability_unsafe() without a comment, to
encourage documenting the reasoning behind why it was deemed safe.

Signed-off-by: Marco Elver <elver@google.com>
---
 scripts/checkpatch.pl | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 7b28ad331742..c28efdb1d404 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -6693,6 +6693,14 @@ sub process {
 			}
 		}
 
+# check for capability_unsafe without a comment.
+		if ($line =~ /\bcapability_unsafe\b/) {
+			if (!ctx_has_comment($first_line, $linenr)) {
+				WARN("CAPABILITY_UNSAFE",
+				     "capability_unsafe without comment\n" .
+				     $herecurr);
+			}
+		}
+
 # check of hardware specific defines
 	if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ &&
 	    $realfile !~ m@include/asm-@) {
 		CHK("ARCH_DEFINES",
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:56:59 2025
Date: Thu, 6 Feb 2025 19:10:01 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-8-elver@google.com>
Subject: [PATCH RFC 07/24] cleanup: Basic compatibility with capability
 analysis
From: Marco Elver <elver@google.com>

Because the scoped cleanup helpers used for lock guards wrap
acquire/release in their own constructors/destructors, and store
pointers to the passed locks in a separate struct, we currently cannot
accurately annotate in *destructors* which lock was released.

While it's possible to annotate the constructor to say which lock was
acquired, that alone would result in false positives claiming the lock
was not released on function return.

Instead, to avoid false positives, we can claim that the constructor
"asserts" that the taken lock is held. This ensures we can still benefit
from the analysis where scoped guards are used to protect access to
guarded variables, while avoiding false positives.
The only downside is false negatives where we might accidentally lock the
same lock again:

	raw_spin_lock(&my_lock);
	...
	guard(raw_spinlock)(&my_lock);	// no warning

Arguably, lockdep will immediately catch issues like this.

While Clang's analysis supports scoped guards in C++ [1], there's no way
to apply this to C right now. Better support for Linux's scoped guard
design could be added in the future if deemed critical.

[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html#scoped-capability

Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/cleanup.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index ec00e3f7af2b..93a166549add 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -223,7 +223,7 @@ const volatile void * __must_check_fn(const volatile void *val)
  * @exit is an expression using '_T' -- similar to FREE above.
  * @init is an expression in @init_args resulting in @type
  *
- * EXTEND_CLASS(name, ext, init, init_args...):
+ * EXTEND_CLASS(name, ext, ctor_attrs, init, init_args...):
  *	extends class @name to @name@ext with the new constructor
  *
  * CLASS(name, var)(args...):
@@ -243,15 +243,18 @@ const volatile void * __must_check_fn(const volatile void *val)
 #define DEFINE_CLASS(_name, _type, _exit, _init, _init_args...)		\
 typedef _type class_##_name##_t;					\
 static inline void class_##_name##_destructor(_type *p)			\
+	__no_capability_analysis					\
 { _type _T = *p; _exit; }						\
 static inline _type class_##_name##_constructor(_init_args)		\
+	__no_capability_analysis					\
 { _type t = _init; return t; }
 
-#define EXTEND_CLASS(_name, ext, _init, _init_args...)			\
+#define EXTEND_CLASS(_name, ext, ctor_attrs, _init, _init_args...)	\
 typedef class_##_name##_t class_##_name##ext##_t;			\
 static inline void class_##_name##ext##_destructor(class_##_name##_t *p)\
 { class_##_name##_destructor(p); }					\
 static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+	__no_capability_analysis ctor_attrs				\
 { class_##_name##_t t = _init; return t; }
 
 #define CLASS(_name, var)						\
@@ -299,7 +302,7 @@ static __maybe_unused const bool class_##_name##_is_conditional = _is_cond
 
 #define DEFINE_GUARD_COND(_name, _ext, _condlock) \
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
-	EXTEND_CLASS(_name, _ext, \
+	EXTEND_CLASS(_name, _ext,, \
 		     ({ void *_t = _T; if (_T && !(_condlock)) _t = NULL; _t; }), \
 		     class_##_name##_t _T) \
 	static inline void * class_##_name##_ext##_lock_ptr(class_##_name##_t *_T) \
@@ -371,6 +374,7 @@ typedef struct {							\
 } class_##_name##_t;							\
									\
 static inline void class_##_name##_destructor(class_##_name##_t *_T)	\
+	__no_capability_analysis					\
 {									\
	if (_T->lock) { _unlock; }					\
 }									\
@@ -383,6 +387,7 @@ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T)	\
 
 #define __DEFINE_LOCK_GUARD_1(_name, _type, _lock)			\
 static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
+	__no_capability_analysis __asserts_cap(l)			\
 {									\
	class_##_name##_t _t = { .lock = l }, *_T = &_t;		\
	_lock;								\
@@ -391,6 +396,7 @@ static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
 
 #define __DEFINE_LOCK_GUARD_0(_name, _lock)				\
 static inline class_##_name##_t class_##_name##_constructor(void)	\
+	__no_capability_analysis					\
 {									\
	class_##_name##_t _t = { .lock = (void*)1 },			\
	 *_T __maybe_unused = &_t;					\
@@ -410,7 +416,7 @@ __DEFINE_LOCK_GUARD_0(_name, _lock)
 
 #define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock) \
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
-	EXTEND_CLASS(_name, _ext, \
+	EXTEND_CLASS(_name, _ext, __asserts_cap(l), \
 		     ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
			if (_T->lock && !(_condlock)) _T->lock = NULL;	\
			_t; }),						\
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:02 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-9-elver@google.com>
Subject: [PATCH RFC 08/24] lockdep: Annotate lockdep assertions for capability
 analysis
From: Marco Elver <elver@google.com>

Clang's capability analysis can be made aware of functions that assert
that capabilities/locks are held.

Presence of these annotations causes the analysis to assume the
capability is held after calls to the annotated function, and avoids
false positives with complex control-flow; for example, where not all
control-flow paths in a function require a held lock, and marking the
function with __must_hold(..) would therefore be inappropriate.
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/lockdep.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 67964dc4db95..5cea929b2219 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -282,16 +282,16 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
 
 #define lockdep_assert_held(l)		\
-	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
+	do { lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD); __assert_cap(l); } while (0)
 
 #define lockdep_assert_not_held(l)	\
 	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD)
 
 #define lockdep_assert_held_write(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 0))
+	do { lockdep_assert(lockdep_is_held_type(l, 0)); __assert_cap(l); } while (0)
 
 #define lockdep_assert_held_read(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 1))
+	do { lockdep_assert(lockdep_is_held_type(l, 1)); __assert_shared_cap(l); } while (0)
 
 #define lockdep_assert_held_once(l)		\
 	lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
@@ -389,10 +389,10 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert(c)			do { } while (0)
 #define lockdep_assert_once(c)			do { } while (0)
 
-#define lockdep_assert_held(l)			do { (void)(l); } while (0)
+#define lockdep_assert_held(l)			__assert_cap(l)
 #define lockdep_assert_not_held(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_write(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
+#define lockdep_assert_held_write(l)		__assert_cap(l)
+#define lockdep_assert_held_read(l)		__assert_shared_cap(l)
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()		do { } while (0)
 
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:03 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-10-elver@google.com>
Subject: [PATCH RFC 09/24] locking/rwlock, spinlock: Support Clang's
 capability analysis
From: Marco Elver <elver@google.com>

Add support for Clang's capability analysis for raw_spinlock_t,
spinlock_t, and rwlock. This wholesale conversion is required because
all three of them are interdependent.

To avoid warnings in constructors, the initialization functions mark a
capability as acquired when initialized before guarded variables.

The test verifies that common patterns do not generate false positives.
Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst |   3 +-
 include/linux/rwlock.h                |  25 ++--
 include/linux/rwlock_api_smp.h        |  29 +++-
 include/linux/rwlock_rt.h             |  35 +++--
 include/linux/rwlock_types.h          |  10 +-
 include/linux/spinlock.h              |  45 +++---
 include/linux/spinlock_api_smp.h      |  14 +-
 include/linux/spinlock_api_up.h       |  71 +++++----
 include/linux/spinlock_rt.h           |  21 +--
 include/linux/spinlock_types.h        |  10 +-
 include/linux/spinlock_types_raw.h    |   5 +-
 lib/test_capability-analysis.c        | 128 ++++++++++++++++++
 12 files changed, 299 insertions(+), 97 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 2211af90e01b..904448605a77 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -84,7 +84,8 @@ More details are also documented `here
 Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. Currently the following synchronization primitives are supported:
+Currently the following synchronization primitives are supported:
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`.
=20 For capabilities with an initialization function (e.g., `spin_lock_init()`= ), calling this function on the capability instance before initializing any diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 58c346947aa2..44755fd96c27 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -22,23 +22,24 @@ do { \ static struct lock_class_key __key; \ \ __rwlock_init((lock), #lock, &__key); \ + __assert_cap(lock); \ } while (0) #else # define rwlock_init(lock) \ - do { *(lock) =3D __RW_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) =3D __RW_LOCK_UNLOCKED(lock); __assert_cap(lock); } while (0) #endif =20 #ifdef CONFIG_DEBUG_SPINLOCK - extern void do_raw_read_lock(rwlock_t *lock) __acquires(lock); + extern void do_raw_read_lock(rwlock_t *lock) __acquires_shared(lock); extern int do_raw_read_trylock(rwlock_t *lock); - extern void do_raw_read_unlock(rwlock_t *lock) __releases(lock); + extern void do_raw_read_unlock(rwlock_t *lock) __releases_shared(lock); extern void do_raw_write_lock(rwlock_t *lock) __acquires(lock); extern int do_raw_write_trylock(rwlock_t *lock); extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock); #else -# define do_raw_read_lock(rwlock) do {__acquire(lock); arch_read_lock(&(rw= lock)->raw_lock); } while (0) +# define do_raw_read_lock(rwlock) do {__acquire_shared(lock); arch_read_lo= ck(&(rwlock)->raw_lock); } while (0) # define do_raw_read_trylock(rwlock) arch_read_trylock(&(rwlock)->raw_lock) -# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lo= ck); __release(lock); } while (0) +# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lo= ck); __release_shared(lock); } while (0) # define do_raw_write_lock(rwlock) do {__acquire(lock); arch_write_lock(&(= rwlock)->raw_lock); } while (0) # define do_raw_write_trylock(rwlock) arch_write_trylock(&(rwlock)->raw_lo= ck) # define do_raw_write_unlock(rwlock) do {arch_write_unlock(&(rwlock)->raw_= lock); __release(lock); } while 
(0) @@ -49,7 +50,7 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. */ -#define read_trylock(lock) __cond_acquire(lock, _raw_read_trylock(lock)) +#define read_trylock(lock) __cond_acquire_shared(lock, _raw_read_trylock(l= ock)) #define write_trylock(lock) __cond_acquire(lock, _raw_write_trylock(lock)) =20 #define write_lock(lock) _raw_write_lock(lock) @@ -112,12 +113,12 @@ do { \ } while (0) #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) =20 -#define write_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - write_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define write_trylock_irqsave(lock, flags) \ + __cond_acquire(lock, ({ \ + local_irq_save(flags); \ + _raw_write_trylock(lock) ? \ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) =20 #ifdef arch_rwlock_is_contended #define rwlock_is_contended(lock) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index 31d3d1116323..3e975105a606 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -15,12 +15,12 @@ * Released under the General Public License (GPL). 
*/ =20 -void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) __acq= uires(lock); -void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock); -void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_irq(rwlock_t *lock) __acquires(lock); unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock) __acquires(lock); @@ -28,11 +28,11 @@ unsigned long __lockfunc _raw_write_lock_irqsave(rwlock= _t *lock) __acquires(lock); int __lockfunc _raw_read_trylock(rwlock_t *lock); int __lockfunc _raw_write_trylock(rwlock_t *lock); -void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock= ); void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases_shared(loc= k); void __lockfunc _raw_write_unlock_irq(rwlock_t *lock) __releases(lock); void __lockfunc _raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) @@ -145,6 +145,7 @@ static inline int __raw_write_trylock(rwlock_t *lock) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) =20 static inline void __raw_read_lock(rwlock_t *lock) + __acquires_shared(lock) 
__no_capability_analysis { preempt_disable(); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -152,6 +153,7 @@ static inline void __raw_read_lock(rwlock_t *lock) } =20 static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { unsigned long flags; =20 @@ -163,6 +165,7 @@ static inline unsigned long __raw_read_lock_irqsave(rwl= ock_t *lock) } =20 static inline void __raw_read_lock_irq(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -171,6 +174,7 @@ static inline void __raw_read_lock_irq(rwlock_t *lock) } =20 static inline void __raw_read_lock_bh(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -178,6 +182,7 @@ static inline void __raw_read_lock_bh(rwlock_t *lock) } =20 static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { unsigned long flags; =20 @@ -189,6 +194,7 @@ static inline unsigned long __raw_write_lock_irqsave(rw= lock_t *lock) } =20 static inline void __raw_write_lock_irq(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -197,6 +203,7 @@ static inline void __raw_write_lock_irq(rwlock_t *lock) } =20 static inline void __raw_write_lock_bh(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -204,6 +211,7 @@ static inline void __raw_write_lock_bh(rwlock_t *lock) } =20 static inline void __raw_write_lock(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -211,6 +219,7 @@ static inline void __raw_write_lock(rwlock_t *lock) } =20 static inline void __raw_write_lock_nested(rwlock_t *lock, int 
subclass) + __acquires(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_); @@ -220,6 +229,7 @@ static inline void __raw_write_lock_nested(rwlock_t *lo= ck, int subclass) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ =20 static inline void __raw_write_unlock(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -227,6 +237,7 @@ static inline void __raw_write_unlock(rwlock_t *lock) } =20 static inline void __raw_read_unlock(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -235,6 +246,7 @@ static inline void __raw_read_unlock(rwlock_t *lock) =20 static inline void __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -243,6 +255,7 @@ __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned l= ong flags) } =20 static inline void __raw_read_unlock_irq(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -251,6 +264,7 @@ static inline void __raw_read_unlock_irq(rwlock_t *lock) } =20 static inline void __raw_read_unlock_bh(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -259,6 +273,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock) =20 static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -267,6 +282,7 @@ static inline void __raw_write_unlock_irqrestore(rwlock= _t *lock, } =20 static inline void __raw_write_unlock_irq(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -275,6 +291,7 @@ static inline void __raw_write_unlock_irq(rwlock_t *loc= k) } =20 static 
inline void __raw_write_unlock_bh(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 5320b4b66405..c6280b0e4503 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -22,28 +22,32 @@ do { \ \ init_rwbase_rt(&(rwl)->rwbase); \ __rt_rwlock_init(rwl, #rwl, &__key); \ + __assert_cap(rwl); \ } while (0) =20 -extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock); +extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); extern int rt_read_trylock(rwlock_t *rwlock); -extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock); +extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquire= s(rwlock); extern int rt_write_trylock(rwlock_t *rwlock); extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); =20 static __always_inline void read_lock(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } =20 static __always_inline void read_lock_bh(rwlock_t *rwlock) + __acquires_shared(rwlock) { local_bh_disable(); rt_read_lock(rwlock); } =20 static __always_inline void read_lock_irq(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } @@ -55,37 +59,43 @@ static __always_inline void read_lock_irq(rwlock_t *rwl= ock) flags =3D 0; \ } while (0) =20 -#define read_trylock(lock) __cond_acquire(lock, rt_read_trylock(lock)) +#define read_trylock(lock) __cond_acquire_shared(lock, rt_read_trylock(loc= k)) =20 static __always_inline void read_unlock(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } =20 static __always_inline void read_unlock_bh(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); local_bh_enable(); } =20 static __always_inline void read_unlock_irq(rwlock_t *rwlock) + 
__releases_shared(rwlock) { rt_read_unlock(rwlock); } =20 static __always_inline void read_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } =20 static __always_inline void write_lock(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } =20 #ifdef CONFIG_DEBUG_LOCK_ALLOC static __always_inline void write_lock_nested(rwlock_t *rwlock, int subcla= ss) + __acquires(rwlock) { rt_write_lock_nested(rwlock, subclass); } @@ -94,12 +104,14 @@ static __always_inline void write_lock_nested(rwlock_t= *rwlock, int subclass) #endif =20 static __always_inline void write_lock_bh(rwlock_t *rwlock) + __acquires(rwlock) { local_bh_disable(); rt_write_lock(rwlock); } =20 static __always_inline void write_lock_irq(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } @@ -114,33 +126,34 @@ static __always_inline void write_lock_irq(rwlock_t *= rwlock) #define write_trylock(lock) __cond_acquire(lock, rt_write_trylock(lock)) =20 #define write_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags =3D 0; \ - __locked =3D write_trylock(lock); \ - __locked; \ -}) + __cond_acquire(lock, ({ \ + typecheck(unsigned long, flags); \ + flags =3D 0; \ + rt_write_trylock(lock); \ + })) =20 static __always_inline void write_unlock(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } =20 static __always_inline void write_unlock_bh(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); local_bh_enable(); } =20 static __always_inline void write_unlock_irq(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } =20 static __always_inline void write_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases(rwlock) { rt_write_unlock(rwlock); } diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h index 1948442e7750..231489cc30f2 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h 
@@ -22,7 +22,7 @@ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar * Released under the General Public License (GPL). */ -typedef struct { +struct_with_capability(rwlock) { arch_rwlock_t raw_lock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned int magic, owner_cpu; @@ -31,7 +31,8 @@ typedef struct { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; =20 #define RWLOCK_MAGIC 0xdeaf1eed =20 @@ -54,13 +55,14 @@ typedef struct { =20 #include =20 -typedef struct { +struct_with_capability(rwlock) { struct rwbase_rt rwbase; atomic_t readers; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; =20 #define __RWLOCK_RT_INITIALIZER(name) \ { \ diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 678e6f0679a1..1646a9920fd7 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -106,11 +106,12 @@ do { \ static struct lock_class_key __key; \ \ __raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN); \ + __assert_cap(lock); \ } while (0) =20 #else # define raw_spin_lock_init(lock) \ - do { *(lock) =3D __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) =3D __RAW_SPIN_LOCK_UNLOCKED(lock); __assert_cap(lock); } wh= ile (0) #endif =20 #define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock) @@ -286,19 +287,19 @@ static inline void do_raw_spin_unlock(raw_spinlock_t = *lock) __releases(lock) #define raw_spin_trylock_bh(lock) \ __cond_acquire(lock, _raw_spin_trylock_bh(lock)) =20 -#define raw_spin_trylock_irq(lock) \ -({ \ - local_irq_disable(); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_enable(); 0; }); \ -}) +#define raw_spin_trylock_irq(lock) \ + __cond_acquire(lock, ({ \ + local_irq_disable(); \ + _raw_spin_trylock(lock) ? \ + 1 : ({ local_irq_enable(); 0; }); \ + })) =20 -#define raw_spin_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - raw_spin_trylock(lock) ? 
\ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define raw_spin_trylock_irqsave(lock, flags) \ + __cond_acquire(lock, ({ \ + local_irq_save(flags); \ + _raw_spin_trylock(lock) ? \ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) =20 #ifndef CONFIG_PREEMPT_RT /* Include rwlock functions for !RT */ @@ -334,6 +335,7 @@ do { \ \ __raw_spin_lock_init(spinlock_check(lock), \ #lock, &__key, LD_WAIT_CONFIG); \ + __assert_cap(lock); \ } while (0) =20 #else @@ -342,21 +344,25 @@ do { \ do { \ spinlock_check(_lock); \ *(_lock) =3D __SPIN_LOCK_UNLOCKED(_lock); \ + __assert_cap(_lock); \ } while (0) =20 #endif =20 static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock(&lock->rlock); } =20 static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock_bh(&lock->rlock); } =20 static __always_inline int spin_trylock(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock(&lock->rlock); } @@ -372,6 +378,7 @@ do { \ } while (0) =20 static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock_irq(&lock->rlock); } @@ -379,47 +386,53 @@ static __always_inline void spin_lock_irq(spinlock_t = *lock) #define spin_lock_irqsave(lock, flags) \ do { \ raw_spin_lock_irqsave(spinlock_check(lock), flags); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) =20 #define spin_lock_irqsave_nested(lock, flags, subclass) \ do { \ raw_spin_lock_irqsave_nested(spinlock_check(lock), flags, subclass); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) =20 static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock(&lock->rlock); } =20 static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock_bh(&lock->rlock); } =20 static 
__always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock_irq(&lock->rlock); } =20 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsig= ned long flags) + __releases(lock) __no_capability_analysis { raw_spin_unlock_irqrestore(&lock->rlock, flags); } =20 static __always_inline int spin_trylock_bh(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock_bh(&lock->rlock); } =20 static __always_inline int spin_trylock_irq(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock_irq(&lock->rlock); } =20 #define spin_trylock_irqsave(lock, flags) \ -({ \ - raw_spin_trylock_irqsave(spinlock_check(lock), flags); \ -}) + __cond_acquire(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags= )) =20 /** * spin_is_locked() - Check whether a spinlock is locked. diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_= smp.h index 9ecb0ab504e3..fab02d8bf0c9 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h @@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinl= ock_t *lock) unsigned long __lockfunc _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass) __acquires(lock); -int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock); -int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock); +int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lo= ck); +int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(= lock); void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock= ); @@ -84,6 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigne= d long flags) #endif =20 static inline int __raw_spin_trylock(raw_spinlock_t *lock) + 
__cond_acquires(lock) { preempt_disable(); if (do_raw_spin_trylock(lock)) { @@ -102,6 +103,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lo= ck) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) =20 static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { unsigned long flags; =20 @@ -113,6 +115,7 @@ static inline unsigned long __raw_spin_lock_irqsave(raw= _spinlock_t *lock) } =20 static inline void __raw_spin_lock_irq(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -121,6 +124,7 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *= lock) } =20 static inline void __raw_spin_lock_bh(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -128,6 +132,7 @@ static inline void __raw_spin_lock_bh(raw_spinlock_t *l= ock) } =20 static inline void __raw_spin_lock(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { preempt_disable(); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -137,6 +142,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ =20 static inline void __raw_spin_unlock(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -145,6 +151,7 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lo= ck) =20 static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -153,6 +160,7 @@ static inline void __raw_spin_unlock_irqrestore(raw_spi= nlock_t *lock, } =20 static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -161,6 
+169,7 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t= *lock) } =20 static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -168,6 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t = *lock) } =20 static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock) + __cond_acquires(lock) { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); if (do_raw_spin_trylock(lock)) { diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_u= p.h index 819aeba1c87e..018f5aabc1be 100644 --- a/include/linux/spinlock_api_up.h +++ b/include/linux/spinlock_api_up.h @@ -24,68 +24,77 @@ * flags straight, to suppress compiler warnings of unused lock * variables, and to add the proper checker annotations: */ -#define ___LOCK(lock) \ - do { __acquire(lock); (void)(lock); } while (0) +#define ___LOCK_void(lock) \ + do { (void)(lock); } while (0) =20 -#define __LOCK(lock) \ - do { preempt_disable(); ___LOCK(lock); } while (0) +#define ___LOCK_(lock) \ + do { __acquire(lock); ___LOCK_void(lock); } while (0) =20 -#define __LOCK_BH(lock) \ - do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK(lock= ); } while (0) +#define ___LOCK_shared(lock) \ + do { __acquire_shared(lock); ___LOCK_void(lock); } while (0) =20 -#define __LOCK_IRQ(lock) \ - do { local_irq_disable(); __LOCK(lock); } while (0) +#define __LOCK(lock, ...) \ + do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0) =20 -#define __LOCK_IRQSAVE(lock, flags) \ - do { local_irq_save(flags); __LOCK(lock); } while (0) +#define __LOCK_BH(lock, ...) \ + do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK_##__= VA_ARGS__(lock); } while (0) =20 -#define ___UNLOCK(lock) \ +#define __LOCK_IRQ(lock, ...) \ + do { local_irq_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0) + +#define __LOCK_IRQSAVE(lock, flags, ...) 
\ + do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0) + +#define ___UNLOCK_(lock) \ do { __release(lock); (void)(lock); } while (0) =20 -#define __UNLOCK(lock) \ - do { preempt_enable(); ___UNLOCK(lock); } while (0) +#define ___UNLOCK_shared(lock) \ + do { __release_shared(lock); (void)(lock); } while (0) =20 -#define __UNLOCK_BH(lock) \ +#define __UNLOCK(lock, ...) \ + do { preempt_enable(); ___UNLOCK_##__VA_ARGS__(lock); } while (0) + +#define __UNLOCK_BH(lock, ...) \ do { __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); \ - ___UNLOCK(lock); } while (0) + ___UNLOCK_##__VA_ARGS__(lock); } while (0) =20 -#define __UNLOCK_IRQ(lock) \ - do { local_irq_enable(); __UNLOCK(lock); } while (0) +#define __UNLOCK_IRQ(lock, ...) \ + do { local_irq_enable(); __UNLOCK(lock, ##__VA_ARGS__); } while (0) =20 -#define __UNLOCK_IRQRESTORE(lock, flags) \ - do { local_irq_restore(flags); __UNLOCK(lock); } while (0) +#define __UNLOCK_IRQRESTORE(lock, flags, ...) \ + do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0) =20 #define _raw_spin_lock(lock) __LOCK(lock) #define _raw_spin_lock_nested(lock, subclass) __LOCK(lock) -#define _raw_read_lock(lock) __LOCK(lock) +#define _raw_read_lock(lock) __LOCK(lock, shared) #define _raw_write_lock(lock) __LOCK(lock) #define _raw_write_lock_nested(lock, subclass) __LOCK(lock) #define _raw_spin_lock_bh(lock) __LOCK_BH(lock) -#define _raw_read_lock_bh(lock) __LOCK_BH(lock) +#define _raw_read_lock_bh(lock) __LOCK_BH(lock, shared) #define _raw_write_lock_bh(lock) __LOCK_BH(lock) #define _raw_spin_lock_irq(lock) __LOCK_IRQ(lock) -#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock) +#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock, shared) #define _raw_write_lock_irq(lock) __LOCK_IRQ(lock) #define _raw_spin_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) +#define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags, sh= 
ared) #define _raw_write_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_spin_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_read_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_write_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock); 1; }) +#define _raw_spin_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_read_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_write_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock, void); 1; }) #define _raw_spin_unlock(lock) __UNLOCK(lock) -#define _raw_read_unlock(lock) __UNLOCK(lock) +#define _raw_read_unlock(lock) __UNLOCK(lock, shared) #define _raw_write_unlock(lock) __UNLOCK(lock) #define _raw_spin_unlock_bh(lock) __UNLOCK_BH(lock) #define _raw_write_unlock_bh(lock) __UNLOCK_BH(lock) -#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock) +#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock, shared) #define _raw_spin_unlock_irq(lock) __UNLOCK_IRQ(lock) -#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock) +#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock, shared) #define _raw_write_unlock_irq(lock) __UNLOCK_IRQ(lock) #define _raw_spin_unlock_irqrestore(lock, flags) \ __UNLOCK_IRQRESTORE(lock, flags) #define _raw_read_unlock_irqrestore(lock, flags) \ - __UNLOCK_IRQRESTORE(lock, flags) + __UNLOCK_IRQRESTORE(lock, flags, shared) #define _raw_write_unlock_irqrestore(lock, flags) \ __UNLOCK_IRQRESTORE(lock, flags) =20 diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h index eaad4dd2baac..5d9ebc3ec521 100644 --- a/include/linux/spinlock_rt.h +++ b/include/linux/spinlock_rt.h @@ -20,6 +20,7 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, = const char *name, do { \ rt_mutex_base_init(&(slock)->lock); \ __rt_spin_lock_init(slock, name, key, percpu); \ + __assert_cap(slock); \ } while (0) =20 #define _spin_lock_init(slock, percpu) \ @@ -40,6 +41,7 @@ extern int 
rt_spin_trylock_bh(spinlock_t *lock); extern int rt_spin_trylock(spinlock_t *lock); =20 static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) { rt_spin_lock(lock); } @@ -82,6 +84,7 @@ static __always_inline void spin_lock(spinlock_t *lock) __spin_lock_irqsave_nested(lock, flags, subclass) =20 static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) { /* Investigate: Drop bh when blocking ? */ local_bh_disable(); @@ -89,6 +92,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock) } =20 static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) { rt_spin_lock(lock); } @@ -101,23 +105,27 @@ static __always_inline void spin_lock_irq(spinlock_t = *lock) } while (0) =20 static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); } =20 static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); local_bh_enable(); } =20 static __always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); } =20 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) + __releases(lock) { rt_spin_unlock(lock); } @@ -132,14 +140,11 @@ static __always_inline void spin_unlock_irqrestore(sp= inlock_t *lock, __cond_acquire(lock, rt_spin_trylock(lock)) =20 #define spin_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags =3D 0; \ - __locked =3D spin_trylock(lock); \ - __locked; \ -}) + __cond_acquire(lock, ({ \ + typecheck(unsigned long, flags); \ + flags =3D 0; \ + rt_spin_trylock(lock); \ + })) =20 #define spin_is_contended(lock) (((void)(lock), 0)) =20 diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h index 2dfa35ffec76..2c5db5b5b990 100644 --- a/include/linux/spinlock_types.h +++ b/include/linux/spinlock_types.h @@ -14,7 +14,7 @@ #ifndef CONFIG_PREEMPT_RT =20 /* Non PREEMPT_RT kernels map 
spinlock to raw_spinlock */
-typedef struct spinlock {
+struct_with_capability(spinlock) {
 	union {
 		struct raw_spinlock rlock;

@@ -26,7 +26,8 @@ typedef struct spinlock {
 	};
 #endif
 	};
-} spinlock_t;
+};
+typedef struct spinlock spinlock_t;

 #define ___SPIN_LOCK_INITIALIZER(lockname) \
 { \
@@ -47,12 +48,13 @@ typedef struct spinlock {
 /* PREEMPT_RT kernels map spinlock to rt_mutex */
 #include

-typedef struct spinlock {
+struct_with_capability(spinlock) {
 	struct rt_mutex_base lock;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
 #endif
-} spinlock_t;
+};
+typedef struct spinlock spinlock_t;

 #define __SPIN_LOCK_UNLOCKED(name) \
 { \
diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
index 91cb36b65a17..07792ff2c2b5 100644
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -11,7 +11,7 @@

 #include

-typedef struct raw_spinlock {
+struct_with_capability(raw_spinlock) {
 	arch_spinlock_t raw_lock;
 #ifdef CONFIG_DEBUG_SPINLOCK
 	unsigned int magic, owner_cpu;
@@ -20,7 +20,8 @@ typedef struct raw_spinlock {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
 #endif
-} raw_spinlock_t;
+};
+typedef struct raw_spinlock raw_spinlock_t;

 #define SPINLOCK_MAGIC		0xdead4ead

diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index a0adacce30ff..f63980e134cf 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -5,6 +5,7 @@
  */

 #include
+#include

 /*
  * Test that helper macros work as expected.
@@ -16,3 +17,130 @@ static void __used test_common_helpers(void)
 	BUILD_BUG_ON(capability_unsafe((void)2, 3) != 3); /* does not swallow commas */
 	capability_unsafe(do { } while (0)); /* works with void statements */
 }
+
+#define TEST_SPINLOCK_COMMON(class, type, type_init, type_lock, type_unlock, type_trylock, op) \
+	struct test_##class##_data { \
+		type lock; \
+		int counter __var_guarded_by(&lock); \
+		int *pointer __ref_guarded_by(&lock); \
+	}; \
+	static void __used test_##class##_init(struct test_##class##_data *d) \
+	{ \
+		type_init(&d->lock); \
+		d->counter = 0; \
+	} \
+	static void __used test_##class(struct test_##class##_data *d) \
+	{ \
+		unsigned long flags; \
+		d->pointer++; \
+		type_lock(&d->lock); \
+		op(d->counter); \
+		op(*d->pointer); \
+		type_unlock(&d->lock); \
+		type_lock##_irq(&d->lock); \
+		op(d->counter); \
+		op(*d->pointer); \
+		type_unlock##_irq(&d->lock); \
+		type_lock##_bh(&d->lock); \
+		op(d->counter); \
+		op(*d->pointer); \
+		type_unlock##_bh(&d->lock); \
+		type_lock##_irqsave(&d->lock, flags); \
+		op(d->counter); \
+		op(*d->pointer); \
+		type_unlock##_irqrestore(&d->lock, flags); \
+	} \
+	static void __used test_##class##_trylock(struct test_##class##_data *d) \
+	{ \
+		if (type_trylock(&d->lock)) { \
+			op(d->counter); \
+			type_unlock(&d->lock); \
+		} \
+	} \
+	static void __used test_##class##_assert(struct test_##class##_data *d) \
+	{ \
+		lockdep_assert_held(&d->lock); \
+		op(d->counter); \
+	} \
+	static void __used test_##class##_guard(struct test_##class##_data *d) \
+	{ \
+		{ guard(class)(&d->lock); op(d->counter); } \
+		{ guard(class##_irq)(&d->lock); op(d->counter); } \
+		{ guard(class##_irqsave)(&d->lock); op(d->counter); } \
+	}
+
+#define TEST_OP_RW(x) (x)++
+#define TEST_OP_RO(x) ((void)(x))
+
+TEST_SPINLOCK_COMMON(raw_spinlock,
+		     raw_spinlock_t,
+		     raw_spin_lock_init,
+		     raw_spin_lock,
+		     raw_spin_unlock,
+		     raw_spin_trylock,
+		     TEST_OP_RW);
+static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data *d)
+{
+	unsigned long flags;
+
+	if (raw_spin_trylock_irq(&d->lock)) {
+		d->counter++;
+		raw_spin_unlock_irq(&d->lock);
+	}
+	if (raw_spin_trylock_irqsave(&d->lock, flags)) {
+		d->counter++;
+		raw_spin_unlock_irqrestore(&d->lock, flags);
+	}
+	scoped_cond_guard(raw_spinlock_try, return, &d->lock) {
+		d->counter++;
+	}
+}
+
+TEST_SPINLOCK_COMMON(spinlock,
+		     spinlock_t,
+		     spin_lock_init,
+		     spin_lock,
+		     spin_unlock,
+		     spin_trylock,
+		     TEST_OP_RW);
+static void __used test_spinlock_trylock_extra(struct test_spinlock_data *d)
+{
+	unsigned long flags;
+
+	if (spin_trylock_irq(&d->lock)) {
+		d->counter++;
+		spin_unlock_irq(&d->lock);
+	}
+	if (spin_trylock_irqsave(&d->lock, flags)) {
+		d->counter++;
+		spin_unlock_irqrestore(&d->lock, flags);
+	}
+	scoped_cond_guard(spinlock_try, return, &d->lock) {
+		d->counter++;
+	}
+}
+
+TEST_SPINLOCK_COMMON(write_lock,
+		     rwlock_t,
+		     rwlock_init,
+		     write_lock,
+		     write_unlock,
+		     write_trylock,
+		     TEST_OP_RW);
+static void __used test_write_trylock_extra(struct test_write_lock_data *d)
+{
+	unsigned long flags;
+
+	if (write_trylock_irqsave(&d->lock, flags)) {
+		d->counter++;
+		write_unlock_irqrestore(&d->lock, flags);
+	}
+}
+
+TEST_SPINLOCK_COMMON(read_lock,
+		     rwlock_t,
+		     rwlock_init,
+		     read_lock,
+		     read_unlock,
+		     read_trylock,
+		     TEST_OP_RO);
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
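The `__var_guarded_by()` annotations exercised by the tests above build on Clang's `-Wthread-safety` (capability) analysis: a field annotated as guarded may only be touched while its lock is held, and violations are compile-time warnings. For readers without a kernel tree handy, here is a minimal user-space sketch of the same idea using Clang's raw attributes directly; the names (`struct lock`, `increment()`) are illustrative, not kernel API, and the fake lock is just a flag so the example stays self-contained.

```c
#include <assert.h>

/* Under Clang, these expand to thread-safety attributes; elsewhere they
 * vanish, so the code still compiles with any C compiler. */
#if defined(__clang__)
#define CAPABILITY(x) __attribute__((capability(x)))
#define GUARDED_BY(x) __attribute__((guarded_by(x)))
#define ACQUIRE(x)    __attribute__((acquire_capability(x)))
#define RELEASE(x)    __attribute__((release_capability(x)))
#else
#define CAPABILITY(x)
#define GUARDED_BY(x)
#define ACQUIRE(x)
#define RELEASE(x)
#endif

struct CAPABILITY("mutex") lock {
	int held; /* stand-in for a real lock implementation */
};

struct data {
	struct lock lock;
	int counter GUARDED_BY(&lock); /* analogous to __var_guarded_by(&lock) */
};

static void lock_acquire(struct lock *l) ACQUIRE(l) { l->held = 1; }
static void lock_release(struct lock *l) RELEASE(l) { l->held = 0; }

static int increment(struct data *d)
{
	int v;

	lock_acquire(&d->lock); /* removing this line trips -Wthread-safety */
	v = ++d->counter;
	lock_release(&d->lock);
	return v;
}
```

Compiling with `clang -Wthread-safety` reports any access to `counter` made without the lock held; the attributes have no run-time cost.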
Date: Thu, 6 Feb 2025 19:10:04 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-11-elver@google.com>
Subject: [PATCH RFC 10/24] compiler-capability-analysis: Change __cond_acquires to take return value
From: Marco Elver
To: elver@google.com
Cc: "Paul E.
McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling, Boqun Feng,
	Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman, Ingo Molnar,
	Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett, Justin Stitt,
	Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
	Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
	Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
	Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
	llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

While Sparse is oblivious to the return value of conditional acquire
functions, Clang's capability analysis needs to know the return value
which indicates successful acquisition.

Add the additional argument, and convert existing uses.

No functional change intended.

Signed-off-by: Marco Elver
---
 fs/dlm/lock.c | 2 +-
 include/linux/compiler-capability-analysis.h | 14 +++++++++-----
 include/linux/refcount.h | 6 +++---
 include/linux/spinlock.h | 6 +++---
 include/linux/spinlock_api_smp.h | 8 ++++----
 net/ipv4/tcp_sigpool.c | 2 +-
 6 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index c8ff88f1cdcf..e39ca02b793e 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -343,7 +343,7 @@ void dlm_hold_rsb(struct dlm_rsb *r)
 /* TODO move this to lib/refcount.c */
 static __must_check bool dlm_refcount_dec_and_write_lock_bh(refcount_t *r,
							    rwlock_t *lock)
-__cond_acquires(lock)
+	__cond_acquires(1, lock)
 {
 	if (refcount_dec_not_one(r))
 		return false;
diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
index ca63b6513dc3..10c03133ac4d 100644
--- a/include/linux/compiler-capability-analysis.h
+++ b/include/linux/compiler-capability-analysis.h
@@ -231,7 +231,7 @@
 # define __must_hold(x) __attribute__((context(x,1,1)))
 # define __must_not_hold(x)
 # define __acquires(x) __attribute__((context(x,0,1)))
-# define __cond_acquires(x) __attribute__((context(x,0,-1)))
+# define __cond_acquires(ret, x) __attribute__((context(x,0,-1)))
 # define __releases(x) __attribute__((context(x,1,0)))
 # define __acquire(x) __context__(x,1)
 # define __release(x) __context__(x,-1)
@@ -277,12 +277,14 @@
 /**
  * __cond_acquires() - function attribute, function conditionally
  *                     acquires a capability exclusively
+ * @ret: value returned by function if capability acquired
  * @x: capability instance pointer
  *
  * Function attribute declaring that the function conditionally acquires the
- * given capability instance @x exclusively, but does not release it.
+ * given capability instance @x exclusively, but does not release it. The
+ * function return value @ret denotes when the capability is acquired.
  */
-# define __cond_acquires(x) __try_acquires_cap(1, x)
+# define __cond_acquires(ret, x) __try_acquires_cap(ret, x)

 /**
  * __releases() - function attribute, function releases a capability exclusively
@@ -349,12 +351,14 @@
 /**
  * __cond_acquires_shared() - function attribute, function conditionally
  *                            acquires a capability shared
+ * @ret: value returned by function if capability acquired
  * @x: capability instance pointer
  *
  * Function attribute declaring that the function conditionally acquires the
- * given capability instance @x with shared access, but does not release it.
+ * given capability instance @x with shared access, but does not release it. The
+ * function return value @ret denotes when the capability is acquired.
  */
-# define __cond_acquires_shared(x) __try_acquires_shared_cap(1, x)
+# define __cond_acquires_shared(ret, x) __try_acquires_shared_cap(ret, x)

 /**
  * __releases_shared() - function attribute, function releases a
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..f63ce3fadfa3 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -353,9 +353,9 @@ static inline void refcount_dec(refcount_t *r)

 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
-extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(1, lock);
+extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(1, lock);
 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
						       spinlock_t *lock,
-						       unsigned long *flags) __cond_acquires(lock);
+						       unsigned long *flags) __cond_acquires(1, lock);
 #endif /* _LINUX_REFCOUNT_H */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 1646a9920fd7..de5118d0e718 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -362,7 +362,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock)
 }

 static __always_inline int spin_trylock(spinlock_t *lock)
-	__cond_acquires(lock) __no_capability_analysis
+	__cond_acquires(1, lock) __no_capability_analysis
 {
 	return raw_spin_trylock(&lock->rlock);
 }
@@ -420,13 +420,13 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned lo
 }

 static __always_inline int spin_trylock_bh(spinlock_t *lock)
-	__cond_acquires(lock) __no_capability_analysis
+	__cond_acquires(1, lock) __no_capability_analysis
 {
 	return raw_spin_trylock_bh(&lock->rlock);
 }

 static __always_inline int spin_trylock_irq(spinlock_t *lock)
-	__cond_acquires(lock) __no_capability_analysis
+	__cond_acquires(1, lock) __no_capability_analysis
 {
 	return raw_spin_trylock_irq(&lock->rlock);
 }
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index fab02d8bf0c9..9b6f7a5a0705 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
 unsigned long __lockfunc
 _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass)
								__acquires(lock);
-int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock)		__cond_acquires(lock);
-int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock)	__cond_acquires(lock);
+int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock)		__cond_acquires(1, lock);
+int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock)	__cond_acquires(1, lock);
 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock)		__releases(lock);
 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)	__releases(lock);
 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock)	__releases(lock);
@@ -84,7 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #endif

 static inline int __raw_spin_trylock(raw_spinlock_t *lock)
-	__cond_acquires(lock)
+	__cond_acquires(1, lock)
 {
 	preempt_disable();
 	if (do_raw_spin_trylock(lock)) {
@@ -177,7 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
 }

 static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock)
-	__cond_acquires(lock)
+	__cond_acquires(1, lock)
 {
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
 	if (do_raw_spin_trylock(lock)) {
diff --git a/net/ipv4/tcp_sigpool.c b/net/ipv4/tcp_sigpool.c
index d8a4f192873a..10b2e5970c40 100644
--- a/net/ipv4/tcp_sigpool.c
+++ b/net/ipv4/tcp_sigpool.c
@@ -257,7 +257,7 @@ void tcp_sigpool_get(unsigned int id)
 }
 EXPORT_SYMBOL_GPL(tcp_sigpool_get);

-int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH)
+int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(0, RCU_BH)
 {
 	struct crypto_ahash *hash;

-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
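The patch above threads a `ret` argument through `__cond_acquires()` because kernel APIs disagree on which return value means "acquired": `spin_trylock()` returns nonzero on success, while `mutex_lock_interruptible()` and `tcp_sigpool_start()` return 0. A minimal user-space sketch of both polarities, using the Clang attribute that `__try_acquires_cap()` ultimately maps to (the `struct lock` and function names are illustrative, and the lock itself is a plain flag for self-containment):

```c
#include <assert.h>

#if defined(__clang__)
#define CAPABILITY(x)       __attribute__((capability(x)))
#define TRY_ACQUIRE(ret, x) __attribute__((try_acquire_capability(ret, x)))
#define RELEASE(x)          __attribute__((release_capability(x)))
#else
#define CAPABILITY(x)
#define TRY_ACQUIRE(ret, x)
#define RELEASE(x)
#endif

struct CAPABILITY("mutex") lock {
	int held;
};

/* trylock-style: nonzero return means acquired (like spin_trylock()). */
static int lock_trylock(struct lock *l) TRY_ACQUIRE(1, l)
{
	if (l->held)
		return 0;
	l->held = 1;
	return 1;
}

/* errno-style: zero return means acquired (like mutex_lock_interruptible()). */
static int lock_lock_intr(struct lock *l) TRY_ACQUIRE(0, l)
{
	l->held = 1;
	return 0;
}

static void lock_unlock(struct lock *l) RELEASE(l)
{
	l->held = 0;
}
```

With the success value declared per function, Clang only considers the capability held on the branch where the call returned that value, so `if (lock_trylock(&l)) { ... }` and `if (!lock_lock_intr(&l)) { ... }` both analyze correctly.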
Date: Thu, 6 Feb 2025 19:10:05 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-12-elver@google.com>
Subject: [PATCH RFC 11/24] locking/mutex: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
	Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
	Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
	Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
	Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
	Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
	Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
	llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Add support for Clang's capability analysis for mutex.
Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst | 2 +-
 include/linux/mutex.h | 29 +++++----
 include/linux/mutex_types.h | 4 +-
 lib/test_capability-analysis.c | 64 +++++++++++++++++++
 4 files changed, 82 insertions(+), 17 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 904448605a77..31f76e877be5 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -85,7 +85,7 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`.

 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 2bf91b57591b..09ee3b89d342 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -62,6 +62,7 @@ do { \
	static struct lock_class_key __key; \
 \
	__mutex_init((mutex), #mutex, &__key); \
+	__assert_cap(mutex); \
 } while (0)

 /**
@@ -154,14 +155,14 @@ static inline int __devm_mutex_init(struct device *dev, struct mutex *lock)
  * Also see Documentation/locking/mutex-design.rst.
  */
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
+extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);

 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
-					unsigned int subclass);
+					unsigned int subclass) __cond_acquires(0, lock);
 extern int __must_check mutex_lock_killable_nested(struct mutex *lock,
-					unsigned int subclass);
-extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass);
+					unsigned int subclass) __cond_acquires(0, lock);
+extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);

 #define mutex_lock(lock) mutex_lock_nested(lock, 0)
 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0)
@@ -175,10 +176,10 @@ do { \
 } while (0)

 #else
-extern void mutex_lock(struct mutex *lock);
-extern int __must_check mutex_lock_interruptible(struct mutex *lock);
-extern int __must_check mutex_lock_killable(struct mutex *lock);
-extern void mutex_lock_io(struct mutex *lock);
+extern void mutex_lock(struct mutex *lock) __acquires(lock);
+extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock);
+extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock);
+extern void mutex_lock_io(struct mutex *lock) __acquires(lock);

 # define mutex_lock_nested(lock, subclass) mutex_lock(lock)
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
@@ -193,13 +194,13 @@ extern void mutex_lock_io(struct mutex *lock);
  *
  * Returns 1 if the mutex has been acquired successfully, and 0 on contention.
  */
-extern int mutex_trylock(struct mutex *lock);
-extern void mutex_unlock(struct mutex *lock);
+extern int mutex_trylock(struct mutex *lock) __cond_acquires(1, lock);
+extern void mutex_unlock(struct mutex *lock) __releases(lock);

-extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
+extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(1, lock);

-DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T))
-DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T))
-DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T) == 0)
+DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock) == 0)

 #endif /* __LINUX_MUTEX_H */
diff --git a/include/linux/mutex_types.h b/include/linux/mutex_types.h
index fdf7f515fde8..e1a5ea12d53c 100644
--- a/include/linux/mutex_types.h
+++ b/include/linux/mutex_types.h
@@ -38,7 +38,7 @@
  * - detects multi-task circular deadlocks and prints out all affected
  *   locks and tasks (and only those tasks)
  */
-struct mutex {
+struct_with_capability(mutex) {
	atomic_long_t owner;
	raw_spinlock_t wait_lock;
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
@@ -59,7 +59,7 @@ struct mutex {
  */
 #include

-struct mutex {
+struct_with_capability(mutex) {
	struct rt_mutex_base rtmutex;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map dep_map;
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index f63980e134cf..3410c04c2b76 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -5,6 +5,7 @@
  */

 #include
+#include
 #include

 /*
@@ -144,3 +145,66 @@ TEST_SPINLOCK_COMMON(read_lock,
		     read_unlock,
		     read_trylock,
		     TEST_OP_RO);
+
+struct test_mutex_data {
+	struct mutex mtx;
+	int counter __var_guarded_by(&mtx);
+};
+
+static void __used test_mutex_init(struct test_mutex_data *d)
+{
+	mutex_init(&d->mtx);
+	d->counter = 0;
+}
+
+static void __used test_mutex_lock(struct test_mutex_data *d)
+{
+	mutex_lock(&d->mtx);
+	d->counter++;
+	mutex_unlock(&d->mtx);
+	mutex_lock_io(&d->mtx);
+	d->counter++;
+	mutex_unlock(&d->mtx);
+}
+
+static void __used test_mutex_trylock(struct test_mutex_data *d, atomic_t *a)
+{
+	if (!mutex_lock_interruptible(&d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+	if (!mutex_lock_killable(&d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+	if (mutex_trylock(&d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+	if (atomic_dec_and_mutex_lock(a, &d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+}
+
+static void __used test_mutex_assert(struct test_mutex_data *d)
+{
+	lockdep_assert_held(&d->mtx);
+	d->counter++;
+}
+
+static void __used test_mutex_guard(struct test_mutex_data *d)
+{
+	guard(mutex)(&d->mtx);
+	d->counter++;
+}
+
+static void __used test_mutex_cond_guard(struct test_mutex_data *d)
+{
+	scoped_cond_guard(mutex_try, return, &d->mtx) {
+		d->counter++;
+	}
+	scoped_cond_guard(mutex_intr, return, &d->mtx) {
+		d->counter++;
+	}
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
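The `guard(mutex)` and `scoped_cond_guard()` tests above rely on the kernel's scope-based lock guards, which release the lock automatically when the guard variable leaves scope. A minimal user-space sketch of that pattern, built on the compiler `cleanup` attribute the kernel's guards also use (GCC and Clang both support it); the `guard_lock()` macro and `struct lock` are illustrative stand-ins, not the kernel's `DEFINE_LOCK_GUARD_1()` machinery:

```c
#include <assert.h>

struct lock {
	int held; /* stand-in for a real lock */
};

static void lock_acquire(struct lock *l) { l->held = 1; }
static void lock_release(struct lock *l) { l->held = 0; }

struct lock_guard {
	struct lock *lock;
};

static inline struct lock_guard lock_guard_init(struct lock *l)
{
	lock_acquire(l);
	return (struct lock_guard){ .lock = l };
}

/* Called automatically when the guard variable goes out of scope. */
static inline void lock_guard_cleanup(struct lock_guard *g)
{
	lock_release(g->lock);
}

#define guard_lock(l) \
	struct lock_guard __guard __attribute__((cleanup(lock_guard_cleanup))) = \
		lock_guard_init(l)

static int critical_section(struct lock *l, int *counter)
{
	guard_lock(l);		/* lock dropped automatically at '}' */
	return ++*counter;	/* lock held for the rest of the scope */
}
```

Because release is tied to scope exit, early returns cannot leak the lock, which is also what lets the capability analysis treat the whole scope as a critical section.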
Date: Thu, 6 Feb 2025 19:10:06 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-13-elver@google.com>
Subject: [PATCH RFC 12/24] locking/seqlock: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E.
McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add support for Clang's capability analysis for seqlock_t. Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/seqlock.h | 24 +++++++++++ include/linux/seqlock_types.h | 5 ++- lib/test_capability-analysis.c | 43 +++++++++++++++++++ 4 files changed, 71 insertions(+), 3 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentatio= n/dev-tools/capability-analysis.rst index 31f76e877be5..8d9336e91ce2 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -85,7 +85,7 @@ Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ =20 Currently the following synchronization primitives are supported: -`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`. +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`. 
=20 For capabilities with an initialization function (e.g., `spin_lock_init()`= ), calling this function on the capability instance before initializing any diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index 5ce48eab7a2a..c914eb9714e9 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -816,6 +816,7 @@ static __always_inline void write_seqcount_latch_end(se= qcount_latch_t *s) do { \ spin_lock_init(&(sl)->lock); \ seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \ + __assert_cap(sl); \ } while (0) =20 /** @@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(se= qcount_latch_t *s) * Return: count, to be passed to read_seqretry() */ static inline unsigned read_seqbegin(const seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { return read_seqcount_begin(&sl->seqcount); } @@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *s= l) * Return: true if a read section retry is required, else false */ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) + __releases_shared(sl) __no_capability_analysis { return read_seqcount_retry(&sl->seqcount, start); } @@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *s= l, unsigned start) * _irqsave or _bh variants of this function instead. */ static inline void write_seqlock(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl) * critical section of given seqlock_t. */ static inline void write_sequnlock(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock(&sl->lock); @@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl) * other write side sections, can be invoked from softirq contexts. 
  */
 static inline void write_seqlock_bh(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	spin_lock_bh(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
  * write_seqlock_bh().
  */
 static inline void write_sequnlock_bh(seqlock_t *sl)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_bh(&sl->lock);
@@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
  * other write sections, can be invoked from hardirq contexts.
  */
 static inline void write_seqlock_irq(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	spin_lock_irq(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl)
  * seqlock_t write side section opened with write_seqlock_irq().
  */
 static inline void write_sequnlock_irq(seqlock_t *sl)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irq(&sl->lock);
 }
 
 static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+	__acquires(sl) __no_capability_analysis
 {
 	unsigned long flags;
 
@@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
  */
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases(sl) __no_capability_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irqrestore(&sl->lock, flags);
@@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
  * The opened read section must be closed with read_sequnlock_excl().
  */
 static inline void read_seqlock_excl(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	spin_lock(&sl->lock);
 }
@@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl(seqlock_t *sl)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock(&sl->lock);
 }
@@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl)
  * from softirq contexts.
  */
 static inline void read_seqlock_excl_bh(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	spin_lock_bh(&sl->lock);
 }
@@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_bh(seqlock_t *sl)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock_bh(&sl->lock);
 }
@@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl)
  * hardirq context.
  */
 static inline void read_seqlock_excl_irq(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	spin_lock_irq(&sl->lock);
 }
@@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_irq(seqlock_t *sl)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock_irq(&sl->lock);
 }
 
 static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
+	__acquires_shared(sl) __no_capability_analysis
 {
 	unsigned long flags;
 
@@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
  */
 static inline void
 read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases_shared(sl) __no_capability_analysis
 {
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
@@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
  * parameter of the next read_seqbegin_or_lock() iteration.
  */
 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_capability_analysis
 {
 	if (!(*seq & 1))	/* Even */
 		*seq = read_seqbegin(lock);
@@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
  * Return: true if a read section retry is required, false otherwise
  */
 static inline int need_seqretry(seqlock_t *lock, int seq)
+	__releases_shared(lock) __no_capability_analysis
 {
 	return !(seq & 1) && read_seqretry(lock, seq);
 }
@@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq)
  * with read_seqbegin_or_lock() and validated by need_seqretry().
  */
 static inline void done_seqretry(seqlock_t *lock, int seq)
+	__no_capability_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl(lock);
@@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq)
  */
 static inline unsigned long
 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_capability_analysis
 {
 	unsigned long flags = 0;
 
@@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
  */
 static inline void
 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
+	__no_capability_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl_irqrestore(lock, flags);
diff --git a/include/linux/seqlock_types.h b/include/linux/seqlock_types.h
index dfdf43e3fa3d..9775d6f1a234 100644
--- a/include/linux/seqlock_types.h
+++ b/include/linux/seqlock_types.h
@@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  * - Comments on top of seqcount_t
  * - Documentation/locking/seqlock.rst
  */
-typedef struct {
+struct_with_capability(seqlock) {
 	/*
 	 * Make sure that readers don't starve writers on PREEMPT_RT: use
 	 * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK().
	 */
	seqcount_spinlock_t seqcount;
	spinlock_t lock;
-} seqlock_t;
+};
+typedef struct seqlock seqlock_t;
 
 #endif /* __LINUX_SEQLOCK_TYPES_H */
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 3410c04c2b76..1e4b90f76420 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -6,6 +6,7 @@
 
 #include <linux/build_bug.h>
 #include <linux/mutex.h>
+#include <linux/seqlock.h>
 #include <linux/spinlock.h>
 
@@ -208,3 +209,45 @@ static void __used test_mutex_cond_guard(struct test_mutex_data *d)
 		d->counter++;
 	}
 }
+
+struct test_seqlock_data {
+	seqlock_t sl;
+	int counter __var_guarded_by(&sl);
+};
+
+static void __used test_seqlock_init(struct test_seqlock_data *d)
+{
+	seqlock_init(&d->sl);
+	d->counter = 0;
+}
+
+static void __used test_seqlock_reader(struct test_seqlock_data *d)
+{
+	unsigned int seq;
+
+	do {
+		seq = read_seqbegin(&d->sl);
+		(void)d->counter;
+	} while (read_seqretry(&d->sl, seq));
+}
+
+static void __used test_seqlock_writer(struct test_seqlock_data *d)
+{
+	unsigned long flags;
+
+	write_seqlock(&d->sl);
+	d->counter++;
+	write_sequnlock(&d->sl);
+
+	write_seqlock_irq(&d->sl);
+	d->counter++;
+	write_sequnlock_irq(&d->sl);
+
+	write_seqlock_bh(&d->sl);
+	d->counter++;
+	write_sequnlock_bh(&d->sl);
+
+	write_seqlock_irqsave(&d->sl, flags);
+	d->counter++;
+	write_sequnlock_irqrestore(&d->sl, flags);
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:07 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Mime-Version: 1.0
Message-ID: <20250206181711.1902989-14-elver@google.com>
Subject: [PATCH RFC 13/24] bit_spinlock: Include missing <asm/processor.h>
From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: "Paul E.
 McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling, Boqun Feng,
 Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman, Ingo Molnar,
 Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett, Justin Stitt,
 Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda, Nathan Chancellor,
 Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra, Steven Rostedt,
 Thomas Gleixner, Uladzislau Rezki, Waiman Long, Will Deacon,
 kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Including <linux/bit_spinlock.h> into an empty TU will result in the
compiler complaining:

  ./include/linux/bit_spinlock.h:34:4: error: call to undeclared function 'cpu_relax'; <...>
     34 |    cpu_relax();
        |    ^
  1 error generated.

Include <asm/processor.h> to allow including bit_spinlock.h where
<asm/processor.h> is not otherwise included.

Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/bit_spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505..f1174a2fcc4d 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -7,6 +7,8 @@
 #include <linux/atomic.h>
 #include <linux/bug.h>
 
+#include <asm/processor.h> /* for cpu_relax() */
+
 /*
  * bit-based spin_lock()
  *
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:08 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Mime-Version: 1.0
Message-ID: <20250206181711.1902989-15-elver@google.com>
Subject: [PATCH RFC 14/24] bit_spinlock: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: "Paul E.
 McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling, Boqun Feng,
 Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman, Ingo Molnar,
 Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett, Justin Stitt,
 Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda, Nathan Chancellor,
 Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra, Steven Rostedt,
 Thomas Gleixner, Uladzislau Rezki, Waiman Long, Will Deacon,
 kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

The annotations for bit_spinlock.h have simply been using "bitlock" as
the token. For Sparse, that was likely sufficient in most cases. But
Clang's capability analysis is more precise, and we need to ensure we
can distinguish different bitlocks.

To do so, add a token capability, and a macro __bitlock(bitnum, addr)
that is used to construct unique per-bitlock tokens.

Add the appropriate test.

<linux/list_bl.h> is implicitly included through other includes, and
requires 2 annotations to indicate that acquisition (without release)
and release (without prior acquisition) of its bitlock is intended.

Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  3 ++-
 include/linux/bit_spinlock.h          | 22 +++++++++++++---
 include/linux/list_bl.h               |  2 ++
 lib/test_capability-analysis.c        | 26 +++++++++++++++++++
 4 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 8d9336e91ce2..a34dfe7b0b09 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -85,7 +85,8 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
+`bit_spinlock`.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index f1174a2fcc4d..57114b44ce5d 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -9,6 +9,16 @@
 
 #include <asm/processor.h> /* for cpu_relax() */
 
+/*
+ * For static capability analysis, we need a unique token for each possible bit
+ * that can be used as a bit_spinlock. The easiest way to do that is to create a
+ * fake capability that we can cast to with the __bitlock(bitnum, addr) macro
+ * below, which will give us unique instances for each (bit, addr) pair that the
+ * static analysis can use.
+ */
+struct_with_capability(__capability_bitlock) { };
+#define __bitlock(bitnum, addr) (struct __capability_bitlock *)(bitnum + (addr))
+
 /*
  * bit-based spin_lock()
  *
@@ -16,6 +26,7 @@
  * are significantly faster.
  */
 static inline void bit_spin_lock(int bitnum, unsigned long *addr)
+	__acquires(__bitlock(bitnum, addr))
 {
 	/*
 	 * Assuming the lock is uncontended, this never enters
@@ -34,13 +45,14 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
 		preempt_disable();
 	}
 #endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
 }
 
 /*
  * Return true if it was acquired
  */
 static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+	__cond_acquires(1, __bitlock(bitnum, addr))
 {
 	preempt_disable();
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -49,7 +61,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 		return 0;
 	}
 #endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
 	return 1;
 }
 
@@ -57,6 +69,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
  * bit-based spin_unlock()
  */
 static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -65,7 +78,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
 	clear_bit_unlock(bitnum, addr);
 #endif
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
 }
 
 /*
@@ -74,6 +87,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
  * protecting the rest of the flags in the word.
  */
 static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -82,7 +96,7 @@ static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
 	__clear_bit_unlock(bitnum, addr);
 #endif
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
 }
 
 /*
diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index ae1b541446c9..df9eebe6afca 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -144,11 +144,13 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
 }
 
 static inline void hlist_bl_lock(struct hlist_bl_head *b)
+	__acquires(__bitlock(0, b))
 {
 	bit_spin_lock(0, (unsigned long *)b);
 }
 
 static inline void hlist_bl_unlock(struct hlist_bl_head *b)
+	__releases(__bitlock(0, b))
 {
 	__bit_spin_unlock(0, (unsigned long *)b);
 }
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 1e4b90f76420..fc8dcad2a994 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -4,6 +4,7 @@
  * positive errors when compiled with Clang's capability analysis.
  */
 
+#include <linux/bit_spinlock.h>
 #include <linux/build_bug.h>
 #include <linux/mutex.h>
 #include <linux/seqlock.h>
@@ -251,3 +252,28 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d)
 	d->counter++;
 	write_sequnlock_irqrestore(&d->sl, flags);
 }
+
+struct test_bit_spinlock_data {
+	unsigned long bits;
+	int counter __var_guarded_by(__bitlock(3, &bits));
+};
+
+static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
+{
+	/*
+	 * Note, the analysis seems to have false negatives, because it won't
+	 * precisely recognize the bit of the fake __bitlock() token.
+	 */
+	bit_spin_lock(3, &d->bits);
+	d->counter++;
+	bit_spin_unlock(3, &d->bits);
+
+	bit_spin_lock(3, &d->bits);
+	d->counter++;
+	__bit_spin_unlock(3, &d->bits);
+
+	if (bit_spin_trylock(3, &d->bits)) {
+		d->counter++;
+		bit_spin_unlock(3, &d->bits);
+	}
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:09 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Mime-Version: 1.0
Message-ID: <20250206181711.1902989-16-elver@google.com>
Subject: [PATCH RFC 15/24] rcu: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long, Will Deacon,
 kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Improve the existing annotations to properly support Clang's capability
analysis.

The old annotations distinguished between RCU, RCU_BH, and RCU_SCHED.
However, it does not make sense to acquire rcu_read_lock_bh() after
rcu_read_lock() - annotate the _bh() and _sched() variants to also
acquire 'RCU', so that Clang (and also Sparse) can warn about it.
The above change also simplified introducing annotations, where it would
not matter if RCU, RCU_BH, or RCU_SCHED is acquired: through the
introduction of __rcu_guarded, we can use Clang's capability analysis to
warn if a pointer is dereferenced without any of the RCU locks held, or
updated without the appropriate helpers.

The primitives rcu_assign_pointer() and friends are wrapped with
capability_unsafe(), which enforces using them to update RCU-protected
pointers marked with __rcu_guarded.

Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/cleanup.h               |  4 +
 include/linux/rcupdate.h              | 73 +++++++++++++------
 lib/test_capability-analysis.c        | 68 +++++++++++++++++
 4 files changed, 123 insertions(+), 24 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index a34dfe7b0b09..73dd28a23b11 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`.
+`bit_spinlock`, RCU.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index 93a166549add..7d70d308357a 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -404,6 +404,10 @@ static inline class_##_name##_t class_##_name##_constructor(void)	\
 	return _t;							\
 }
 
+#define DECLARE_LOCK_GUARD_0_ATTRS(_name, _lock, _unlock)		\
+static inline class_##_name##_t class_##_name##_constructor(void) _lock;\
+static inline void class_##_name##_destructor(class_##_name##_t *_T) _unlock
+
 #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...)		\
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false);				\
 __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__)		\
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 48e5c03df1dd..ee68095ba9f0 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -31,6 +31,16 @@
 #include <asm/processor.h>
 #include <linux/context_tracking_irq.h>
 
+token_capability(RCU);
+token_capability_instance(RCU, RCU_SCHED);
+token_capability_instance(RCU, RCU_BH);
+
+/*
+ * A convenience macro that can be used for RCU-protected globals or struct
+ * members; adds type qualifier __rcu, and also enforces __var_guarded_by(RCU).
+ */
+#define __rcu_guarded __rcu __var_guarded_by(RCU)
+
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
 
@@ -431,7 +441,8 @@ static inline void rcu_preempt_sleep_check(void) { }
 
 // See RCU_LOCKDEP_WARN() for an explanation of the double call to
 // debug_lockdep_rcu_enabled().
-static inline bool lockdep_assert_rcu_helper(bool c)
+static inline bool lockdep_assert_rcu_helper(bool c, const struct __capability_RCU *cap)
+	__asserts_shared_cap(RCU) __asserts_shared_cap(cap)
 {
 	return debug_lockdep_rcu_enabled() &&
 	       (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
@@ -444,7 +455,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * Splats if lockdep is enabled and there is no rcu_read_lock() in effect.
  */
 #define lockdep_assert_in_rcu_read_lock() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU))
 
 /**
  * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
@@ -454,7 +465,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * actual rcu_read_lock_bh() is required.
*/ #define lockdep_assert_in_rcu_read_lock_bh() \ - WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map))) + WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), R= CU_BH)) =20 /** * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_re= ad_lock_sched() @@ -464,7 +475,7 @@ static inline bool lockdep_assert_rcu_helper(bool c) * instead an actual rcu_read_lock_sched() is required. */ #define lockdep_assert_in_rcu_read_lock_sched() \ - WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)= )) + WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)= , RCU_SCHED)) =20 /** * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU read= er @@ -482,17 +493,17 @@ static inline bool lockdep_assert_rcu_helper(bool c) WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) && \ !lock_is_held(&rcu_bh_lock_map) && \ !lock_is_held(&rcu_sched_lock_map) && \ - preemptible())) + preemptible(), RCU)) =20 #else /* #ifdef CONFIG_PROVE_RCU */ =20 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c)) #define rcu_sleep_check() do { } while (0) =20 -#define lockdep_assert_in_rcu_read_lock() do { } while (0) -#define lockdep_assert_in_rcu_read_lock_bh() do { } while (0) -#define lockdep_assert_in_rcu_read_lock_sched() do { } while (0) -#define lockdep_assert_in_rcu_reader() do { } while (0) +#define lockdep_assert_in_rcu_read_lock() __assert_shared_cap(RCU) +#define lockdep_assert_in_rcu_read_lock_bh() __assert_shared_cap(RCU_BH) +#define lockdep_assert_in_rcu_read_lock_sched() __assert_shared_cap(RCU_SC= HED) +#define lockdep_assert_in_rcu_reader() __assert_shared_cap(RCU) =20 #endif /* #else #ifdef CONFIG_PROVE_RCU */ =20 @@ -512,11 +523,11 @@ static inline bool lockdep_assert_rcu_helper(bool c) #endif /* #else #ifdef __CHECKER__ */ =20 #define __unrcu_pointer(p, local) \ -({ \ +capability_unsafe( \ typeof(*p) *local =3D (typeof(*p) *__force)(p); \ rcu_check_sparse(p, 
__rcu); \ ((typeof(*p) __force __kernel *)(local)); \ -}) +) /** * unrcu_pointer - mark a pointer as not being RCU protected * @p: pointer needing to lose its __rcu property @@ -592,7 +603,7 @@ static inline bool lockdep_assert_rcu_helper(bool c) * other macros that it invokes. */ #define rcu_assign_pointer(p, v) \ -do { \ +capability_unsafe( \ uintptr_t _r_a_p__v =3D (uintptr_t)(v); \ rcu_check_sparse(p, __rcu); \ \ @@ -600,7 +611,7 @@ do { \ WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \ else \ smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \ -} while (0) +) =20 /** * rcu_replace_pointer() - replace an RCU pointer, returning its old value @@ -843,9 +854,10 @@ do { \ * only when acquiring spinlocks that are subject to priority inheritance. */ static __always_inline void rcu_read_lock(void) + __acquires_shared(RCU) { __rcu_read_lock(); - __acquire(RCU); + __acquire_shared(RCU); rcu_lock_acquire(&rcu_lock_map); RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_lock() used illegally while idle"); @@ -874,11 +886,12 @@ static __always_inline void rcu_read_lock(void) * See rcu_read_lock() for more information. */ static inline void rcu_read_unlock(void) + __releases_shared(RCU) { RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_unlock() used illegally while idle"); rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */ - __release(RCU); + __release_shared(RCU); __rcu_read_unlock(); } =20 @@ -897,9 +910,11 @@ static inline void rcu_read_unlock(void) * was invoked from some other task. */ static inline void rcu_read_lock_bh(void) + __acquires_shared(RCU) __acquires_shared(RCU_BH) { local_bh_disable(); - __acquire(RCU_BH); + __acquire_shared(RCU); + __acquire_shared(RCU_BH); rcu_lock_acquire(&rcu_bh_lock_map); RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_lock_bh() used illegally while idle"); @@ -911,11 +926,13 @@ static inline void rcu_read_lock_bh(void) * See rcu_read_lock_bh() for more information. 
*/ static inline void rcu_read_unlock_bh(void) + __releases_shared(RCU) __releases_shared(RCU_BH) { RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_unlock_bh() used illegally while idle"); rcu_lock_release(&rcu_bh_lock_map); - __release(RCU_BH); + __release_shared(RCU_BH); + __release_shared(RCU); local_bh_enable(); } =20 @@ -935,9 +952,11 @@ static inline void rcu_read_unlock_bh(void) * rcu_read_lock_sched() was invoked from an NMI handler. */ static inline void rcu_read_lock_sched(void) + __acquires_shared(RCU) __acquires_shared(RCU_SCHED) { preempt_disable(); - __acquire(RCU_SCHED); + __acquire_shared(RCU); + __acquire_shared(RCU_SCHED); rcu_lock_acquire(&rcu_sched_lock_map); RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_lock_sched() used illegally while idle"); @@ -945,9 +964,11 @@ static inline void rcu_read_lock_sched(void) =20 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */ static inline notrace void rcu_read_lock_sched_notrace(void) + __acquires_shared(RCU) __acquires_shared(RCU_SCHED) { preempt_disable_notrace(); - __acquire(RCU_SCHED); + __acquire_shared(RCU); + __acquire_shared(RCU_SCHED); } =20 /** @@ -956,18 +977,22 @@ static inline notrace void rcu_read_lock_sched_notrac= e(void) * See rcu_read_lock_sched() for more information. */ static inline void rcu_read_unlock_sched(void) + __releases_shared(RCU) __releases_shared(RCU_SCHED) { RCU_LOCKDEP_WARN(!rcu_is_watching(), "rcu_read_unlock_sched() used illegally while idle"); rcu_lock_release(&rcu_sched_lock_map); - __release(RCU_SCHED); + __release_shared(RCU_SCHED); + __release_shared(RCU); preempt_enable(); } =20 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. 
*/ static inline notrace void rcu_read_unlock_sched_notrace(void) + __releases_shared(RCU) __releases_shared(RCU_SCHED) { - __release(RCU_SCHED); + __release_shared(RCU_SCHED); + __release_shared(RCU); preempt_enable_notrace(); } =20 @@ -1010,10 +1035,10 @@ static inline notrace void rcu_read_unlock_sched_no= trace(void) * ordering guarantees for either the CPU or the compiler. */ #define RCU_INIT_POINTER(p, v) \ - do { \ + capability_unsafe( \ rcu_check_sparse(p, __rcu); \ WRITE_ONCE(p, RCU_INITIALIZER(v)); \ - } while (0) + ) =20 /** * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected poin= ter @@ -1172,4 +1197,6 @@ DEFINE_LOCK_GUARD_0(rcu, } while (0), rcu_read_unlock()) =20 +DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(= RCU)); + #endif /* __LINUX_RCUPDATE_H */ diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index fc8dcad2a994..f5a1dda6ca38 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include =20 @@ -277,3 +278,70 @@ static void __used test_bit_spin_lock(struct test_bit_= spinlock_data *d) bit_spin_unlock(3, &d->bits); } } + +/* + * Test that we can mark a variable guarded by RCU, and we can dereference= and + * write to the pointer with RCU's primitives. 
+ */
+struct test_rcu_data {
+	long __rcu_guarded *data;
+};
+
+static void __used test_rcu_guarded_reader(struct test_rcu_data *d)
+{
+	rcu_read_lock();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_guard(struct test_rcu_data *d)
+{
+	guard(rcu)();
+	(void)rcu_dereference(d->data);
+}
+
+static void __used test_rcu_guarded_updater(struct test_rcu_data *d)
+{
+	rcu_assign_pointer(d->data, NULL);
+	RCU_INIT_POINTER(d->data, NULL);
+	(void)unrcu_pointer(d->data);
+}
+
+static void wants_rcu_held(void) __must_hold_shared(RCU) { }
+static void wants_rcu_held_bh(void) __must_hold_shared(RCU_BH) { }
+static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { }
+
+static void __used test_rcu_lock_variants(void)
+{
+	rcu_read_lock();
+	wants_rcu_held();
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	wants_rcu_held_bh();
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	wants_rcu_held_sched();
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_assert_variants(void)
+{
+	lockdep_assert_in_rcu_read_lock();
+	wants_rcu_held();
+
+	lockdep_assert_in_rcu_read_lock_bh();
+	wants_rcu_held_bh();
+
+	lockdep_assert_in_rcu_read_lock_sched();
+	wants_rcu_held_sched();
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:10 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-17-elver@google.com>
Subject: [PATCH RFC 16/24] srcu: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
    Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
    Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
    Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
    Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
    Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
    Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
    llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

Add support for Clang's capability analysis for SRCU.

Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/srcu.h                  | 61 +++++++++++++------
 lib/test_capability-analysis.c        | 24 ++++++++
 3 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 73dd28a23b11..3766ac466470 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`).
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index d7ba46e74f58..560310643c54 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -21,7 +21,7 @@
 #include
 #include
 
-struct srcu_struct;
+struct_with_capability(srcu_struct);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
@@ -60,14 +60,14 @@ int init_srcu_struct(struct srcu_struct *ssp);
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
		void (*func)(struct rcu_head *head));
 void cleanup_srcu_struct(struct srcu_struct *ssp);
-int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #ifdef CONFIG_TINY_SRCU
 #define __srcu_read_lock_lite __srcu_read_lock
 #define __srcu_read_unlock_lite __srcu_read_unlock
 #else // #ifdef CONFIG_TINY_SRCU
-int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #endif // #else // #ifdef CONFIG_TINY_SRCU
 void synchronize_srcu(struct srcu_struct *ssp);
 
@@ -110,14 +110,16 @@ static inline bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned
 }
 
 #ifdef CONFIG_NEED_SRCU_NMI_SAFE
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #else
 static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	return __srcu_read_lock(ssp);
 }
 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+	__releases_shared(ssp)
 {
	__srcu_read_unlock(ssp, idx);
 }
@@ -189,6 +191,14 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+/*
+ * No-op helper to denote that ssp must be held. Because SRCU-protected pointers
+ * should still be marked with __rcu_guarded, and we do not want to mark them
+ * with __var_guarded_by(ssp) as it would complicate annotations for writers, we
+ * choose the following strategy: srcu_dereference_check() calls this helper
+ * that checks that the passed ssp is held, and then fake-acquires 'RCU'.
+ */
+static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { }
 
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
@@ -202,9 +212,15 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * to 1.  The @c argument will normally be a logical expression containing
  * lockdep_is_held() calls.
  */
-#define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
-				(c) || srcu_read_lock_held(ssp), __rcu)
+#define srcu_dereference_check(p, ssp, c) \
+({ \
+	__srcu_read_lock_must_hold(ssp); \
+	__acquire_shared_cap(RCU); \
+	__auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || srcu_read_lock_held(ssp), __rcu); \
+	__release_shared_cap(RCU); \
+	__v; \
+})
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
@@ -247,7 +263,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * invoke srcu_read_unlock() from one task and the matching srcu_read_lock()
  * from another.
  */
-static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	int retval;
 
@@ -274,7 +291,8 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
  * where RCU is watching, that is, from contexts where it would be legal
  * to invoke rcu_read_lock().  Otherwise, lockdep will complain.
  */
-static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_lite(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	int retval;
 
@@ -295,7 +313,8 @@ static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
  * then none of the other flavors may be used, whether before, during,
  * or after.
  */
-static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	int retval;
 
@@ -307,7 +326,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
 
 /* Used by tracing, cannot be traced and cannot invoke lockdep. */
 static inline notrace int
-srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
+srcu_read_lock_notrace(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	int retval;
 
@@ -337,7 +357,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
  * Calls to srcu_down_read() may be nested, similar to the manner in
  * which calls to down_read() may be nested.
  */
-static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_down_read(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	WARN_ON_ONCE(in_nmi());
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -352,7 +373,7 @@ static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
 * Exit an SRCU read-side critical section.
 */
 static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
	WARN_ON_ONCE(idx & ~0x1);
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -368,7 +389,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 * Exit a light-weight SRCU read-side critical section.
 */
 static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
	WARN_ON_ONCE(idx & ~0x1);
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_LITE);
@@ -384,7 +405,7 @@ static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
 * Exit an SRCU read-side critical section, but in an NMI-safe manner.
 */
 static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
	WARN_ON_ONCE(idx & ~0x1);
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI);
@@ -394,7 +415,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
-srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
+srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp)
 {
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
	__srcu_read_unlock(ssp, idx);
@@ -409,7 +430,7 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 * the same context as the maching srcu_down_read().
 */
 static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
	WARN_ON_ONCE(idx & ~0x1);
	WARN_ON_ONCE(in_nmi());
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index f5a1dda6ca38..8bc8c3e6cb5c 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Test that helper macros work as expected.
@@ -345,3 +346,26 @@ static void __used test_rcu_assert_variants(void)
 	lockdep_assert_in_rcu_read_lock_sched();
 	wants_rcu_held_sched();
 }
+
+struct test_srcu_data {
+	struct srcu_struct srcu;
+	long __rcu_guarded *data;
+};
+
+static void __used test_srcu(struct test_srcu_data *d)
+{
+	init_srcu_struct(&d->srcu);
+
+	int idx = srcu_read_lock(&d->srcu);
+	long *data = srcu_dereference(d->data, &d->srcu);
+	(void)data;
+	srcu_read_unlock(&d->srcu, idx);
+
+	rcu_assign_pointer(d->data, NULL);
+}
+
+static void __used test_srcu_guard(struct test_srcu_data *d)
+{
+	guard(srcu)(&d->srcu);
+	(void)srcu_dereference(d->data, &d->srcu);
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:11 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-18-elver@google.com>
Subject: [PATCH RFC 17/24] kref: Add capability-analysis annotations
From: Marco Elver
To: elver@google.com

Mark functions that conditionally acquire the passed lock.
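For readers unfamiliar with the annotation, a user-space sketch of the contract that `__cond_acquires(1, mutex)` describes may help: the function returns 1 *with the mutex held* after running the release callback on the final put, and 0 *without* the mutex otherwise. All names below are hypothetical, and pthreads plus C11 atomics stand in for kernel primitives; the real helper uses refcount_dec_and_mutex_lock(), which rechecks the count under the lock.

```c
#include <pthread.h>
#include <stdatomic.h>

/*
 * Hypothetical user-space analogue of the kref_put_mutex() contract
 * annotated by __cond_acquires(1, mutex): return 1 with *mutex held
 * after invoking release() on the final put; return 0 without the
 * mutex otherwise.
 */
struct ref { atomic_int refcount; };

static int release_calls;
static void release_ref(struct ref *r) { (void)r; release_calls++; }

static int ref_put_mutex(struct ref *r, void (*release)(struct ref *),
			 pthread_mutex_t *mutex)
{
	/* Not the last reference: drop it and return without the mutex. */
	if (atomic_fetch_sub(&r->refcount, 1) > 1)
		return 0;
	/* Last reference: acquire the mutex, then run release(). */
	pthread_mutex_lock(mutex);
	release(r);
	return 1;	/* Caller is responsible for unlocking. */
}
```

A capability checker can then verify that every caller unlocks the mutex exactly on the `return 1` path.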
Signed-off-by: Marco Elver
---
 include/linux/kref.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index 88e82ab1367c..c1bd26936f41 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -81,6 +81,7 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
 static inline int kref_put_mutex(struct kref *kref,
				 void (*release)(struct kref *kref),
				 struct mutex *mutex)
+	__cond_acquires(1, mutex)
 {
	if (refcount_dec_and_mutex_lock(&kref->refcount, mutex)) {
		release(kref);
@@ -102,6 +103,7 @@ static inline int kref_put_mutex(struct kref *kref,
 static inline int kref_put_lock(struct kref *kref,
				void (*release)(struct kref *kref),
				spinlock_t *lock)
+	__cond_acquires(1, lock)
 {
	if (refcount_dec_and_lock(&kref->refcount, lock)) {
		release(kref);
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:12 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-19-elver@google.com>
Subject: [PATCH RFC 18/24] locking/rwsem: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com

Add support for Clang's capability analysis for rw_semaphore.
Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/rwsem.h                 | 56 +++++++++-------
 lib/test_capability-analysis.c        | 64 +++++++++++++++++++
 3 files changed, 97 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 3766ac466470..719986739b0e 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives

 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`).
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.

 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index c8b543d428b0..0c84e3072370 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -45,7 +45,7 @@
  * reduce the chance that they will share the same cacheline causing
  * cacheline bouncing problem.
  */
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
 	atomic_long_t count;
 	/*
 	 * Write owner or one of the read owners as well flags regarding
@@ -76,11 +76,13 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 }

 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
 }

 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
 }
@@ -119,6 +121,7 @@ do {						\
 	static struct lock_class_key __key;		\
 							\
 	__init_rwsem((sem), #sem, &__key);		\
+	__assert_cap(sem);				\
 } while (0)

 /*
@@ -136,7 +139,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)

 #include

-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
 	struct rwbase_rt rwbase;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
@@ -160,6 +163,7 @@ do {						\
 	static struct lock_class_key __key;		\
 							\
 	__init_rwsem((sem), #sem, &__key);		\
+	__assert_cap(sem);				\
 } while (0)

 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
@@ -168,11 +172,13 @@ static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
 }

 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!rwsem_is_locked(sem));
 }

 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
 }
@@ -190,6 +196,7 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
  */

 static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held(sem);
@@ -198,6 +205,7 @@ static inline void rwsem_assert_held(const struct rw_semaphore *sem)
 }

 static inline
 void rwsem_assert_held_write(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held_write(sem);
@@ -208,47 +216,47 @@ static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-extern void down_read(struct rw_semaphore *sem);
-extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
-extern int __must_check down_read_killable(struct rw_semaphore *sem);
+extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem);
+extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
+extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);

 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */
-extern int down_read_trylock(struct rw_semaphore *sem);
+extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(1, sem);

 /*
  * lock for writing
  */
-extern void down_write(struct rw_semaphore *sem);
-extern int __must_check down_write_killable(struct rw_semaphore *sem);
+extern void down_write(struct rw_semaphore *sem) __acquires(sem);
+extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem);

 /*
  * trylock for writing -- returns 1 if successful, 0 if contention
  */
-extern int down_write_trylock(struct rw_semaphore *sem);
+extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(1, sem);

 /*
  * release a read lock
  */
-extern void up_read(struct rw_semaphore *sem);
+extern void up_read(struct rw_semaphore *sem) __releases_shared(sem);

 /*
  * release a write lock
  */
-extern void up_write(struct rw_semaphore *sem);
+extern void up_write(struct rw_semaphore *sem) __releases(sem);

-DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T))
-DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T))
-DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T) == 0)
+DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock) == 0)

-DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T))
-DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T))
+DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock))

 /*
  * downgrade write lock to read lock
  */
-extern void downgrade_write(struct rw_semaphore *sem);
+extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem);

 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 /*
@@ -264,11 +272,11 @@ extern void downgrade_write(struct rw_semaphore *sem);
  * lockdep_set_class() at lock initialization time.
  * See Documentation/locking/lockdep-design.rst for more details.)
  */
-extern void down_read_nested(struct rw_semaphore *sem, int subclass);
-extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void down_write_nested(struct rw_semaphore *sem, int subclass);
-extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
+extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem);
+extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem);
+extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem);
+extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem);
+extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem);

 # define down_write_nest_lock(sem, nest_lock)		\
 do {							\
@@ -282,8 +290,8 @@ do {						\
  * [ This API should be avoided as much as possible - the
  *   proper abstraction for this case is completions. ]
  */
-extern void down_read_non_owner(struct rw_semaphore *sem);
-extern void up_read_non_owner(struct rw_semaphore *sem);
+extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem);
+extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem);
 #else
 # define down_read_nested(sem, subclass)		down_read(sem)
 # define down_read_killable_nested(sem, subclass)	down_read_killable(sem)
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 8bc8c3e6cb5c..4638d220f474 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -255,6 +256,69 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d)
 	write_sequnlock_irqrestore(&d->sl, flags);
 }

+struct test_rwsem_data {
+	struct rw_semaphore sem;
+	int counter __var_guarded_by(&sem);
+};
+
+static void __used test_rwsem_init(struct test_rwsem_data *d)
+{
+	init_rwsem(&d->sem);
+	d->counter = 0;
+}
+
+static void __used test_rwsem_reader(struct test_rwsem_data *d)
+{
+	down_read(&d->sem);
+	(void)d->counter;
+	up_read(&d->sem);
+
+	if (down_read_trylock(&d->sem)) {
+		(void)d->counter;
+		up_read(&d->sem);
+	}
+}
+
+static void __used test_rwsem_writer(struct test_rwsem_data *d)
+{
+	down_write(&d->sem);
+	d->counter++;
+	up_write(&d->sem);
+
+	down_write(&d->sem);
+	d->counter++;
+	downgrade_write(&d->sem);
+	(void)d->counter;
+	up_read(&d->sem);
+
+	if (down_write_trylock(&d->sem)) {
+		d->counter++;
+		up_write(&d->sem);
+	}
+}
+
+static void __used test_rwsem_assert(struct test_rwsem_data *d)
+{
+	rwsem_assert_held_nolockdep(&d->sem);
+	d->counter++;
+}
+
+static void __used test_rwsem_guard(struct test_rwsem_data *d)
+{
+	{ guard(rwsem_read)(&d->sem); (void)d->counter; }
+	{ guard(rwsem_write)(&d->sem); d->counter++; }
+}
+
+static void __used test_rwsem_cond_guard(struct test_rwsem_data *d)
+{
+	scoped_cond_guard(rwsem_read_try, return, &d->sem) {
+		(void)d->counter;
+	}
+	scoped_cond_guard(rwsem_write_try, return, &d->sem) {
+		d->counter++;
+	}
+}
+
 struct test_bit_spinlock_data {
 	unsigned long bits;
 	int counter __var_guarded_by(__bitlock(3, &bits));
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:13 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-20-elver@google.com>
Subject: [PATCH RFC 19/24] locking/local_lock: Support Clang's capability analysis
From: Marco Elver
To: elver@google.com

Add support for Clang's capability analysis for local_lock_t.
Signed-off-by: Marco Elver
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/local_lock.h            | 18 ++++----
 include/linux/local_lock_internal.h   | 41 ++++++++++++++---
 lib/test_capability-analysis.c        | 46 +++++++++++++++++++
 4 files changed, 90 insertions(+), 17 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 719986739b0e..1e9ce018e30e 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives

 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`.

 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index 091dc0b6bdfb..63fadcf66216 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -51,12 +51,12 @@
 #define local_unlock_irqrestore(lock, flags)		\
	__local_unlock_irqrestore(lock, flags)

-DEFINE_GUARD(local_lock, local_lock_t __percpu*,
-	     local_lock(_T),
-	     local_unlock(_T))
-DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*,
-	     local_lock_irq(_T),
-	     local_unlock_irq(_T))
+DEFINE_LOCK_GUARD_1(local_lock, local_lock_t __percpu,
+		    local_lock(_T->lock),
+		    local_unlock(_T->lock))
+DEFINE_LOCK_GUARD_1(local_lock_irq, local_lock_t __percpu,
+		    local_lock_irq(_T->lock),
+		    local_unlock_irq(_T->lock))
 DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu,
		    local_lock_irqsave(_T->lock, _T->flags),
		    local_unlock_irqrestore(_T->lock, _T->flags),
@@ -68,8 +68,8 @@ DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu,
 #define local_unlock_nested_bh(_lock)			\
	__local_unlock_nested_bh(_lock)

-DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*,
-	     local_lock_nested_bh(_T),
-	     local_unlock_nested_bh(_T))
+DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu,
+		    local_lock_nested_bh(_T->lock),
+		    local_unlock_nested_bh(_T->lock))

 #endif
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 8dd71fbbb6d2..031de28d8ffb 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -8,12 +8,13 @@

 #ifndef CONFIG_PREEMPT_RT

-typedef struct {
+struct_with_capability(local_lock) {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map dep_map;
	struct task_struct *owner;
 #endif
-} local_lock_t;
+};
+typedef struct local_lock local_lock_t;

 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define LOCAL_LOCK_DEBUG_INIT(lockname)		\
@@ -60,6 +61,7 @@ do {						\
			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
			      LD_LOCK_PERCPU);		\
	local_lock_debug_init(lock);			\
+	__assert_cap(lock);				\
 } while (0)

 #define __spinlock_nested_bh_init(lock)		\
@@ -71,40 +73,47 @@ do {					\
			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
			      LD_LOCK_NORMAL);		\
	local_lock_debug_init(lock);			\
+	__assert_cap(lock);				\
 } while (0)

 #define __local_lock(lock)				\
	do {						\
		preempt_disable();			\
		local_lock_acquire(this_cpu_ptr(lock));	\
+		__acquire(lock);			\
	} while (0)

 #define __local_lock_irq(lock)				\
	do {						\
		local_irq_disable();			\
		local_lock_acquire(this_cpu_ptr(lock));	\
+		__acquire(lock);			\
	} while (0)

 #define __local_lock_irqsave(lock, flags)		\
	do {						\
		local_irq_save(flags);			\
		local_lock_acquire(this_cpu_ptr(lock));	\
+		__acquire(lock);			\
	} while (0)

 #define __local_unlock(lock)				\
	do {						\
+		__release(lock);			\
		local_lock_release(this_cpu_ptr(lock));	\
		preempt_enable();			\
	} while (0)

 #define __local_unlock_irq(lock)			\
	do {						\
+		__release(lock);			\
		local_lock_release(this_cpu_ptr(lock));	\
		local_irq_enable();			\
	} while (0)

 #define __local_unlock_irqrestore(lock, flags)		\
	do {						\
+		__release(lock);			\
		local_lock_release(this_cpu_ptr(lock));	\
		local_irq_restore(flags);		\
	} while (0)
@@ -113,19 +122,37 @@ do {					\
	do {						\
		lockdep_assert_in_softirq();		\
		local_lock_acquire(this_cpu_ptr(lock));	\
+		__acquire(lock);			\
	} while (0)

 #define __local_unlock_nested_bh(lock)			\
-	local_lock_release(this_cpu_ptr(lock))
+	do {						\
+		__release(lock);			\
+		local_lock_release(this_cpu_ptr(lock));	\
+	} while (0)

 #else /* !CONFIG_PREEMPT_RT */

+#include
+
 /*
  * On PREEMPT_RT local_lock maps to a per CPU spinlock, which protects the
  * critical section while staying preemptible.
  */
 typedef spinlock_t local_lock_t;

+/*
+ * Because the compiler only knows about the base per-CPU variable, use this
+ * helper function to make the compiler think we lock/unlock the @base variable,
+ * and hide the fact we actually pass the per-CPU instance @pcpu to lock/unlock
+ * functions.
+ */
+static inline local_lock_t *__local_lock_alias(local_lock_t __percpu *base, local_lock_t *pcpu)
+	__returns_cap(base)
+{
+	return pcpu;
+}
+
 #define INIT_LOCAL_LOCK(lockname) __LOCAL_SPIN_LOCK_UNLOCKED((lockname))

 #define __local_lock_init(l)				\
@@ -136,7 +163,7 @@ typedef spinlock_t local_lock_t;
 #define __local_lock(__lock)				\
	do {						\
		migrate_disable();			\
-		spin_lock(this_cpu_ptr((__lock)));	\
+		spin_lock(__local_lock_alias(__lock, this_cpu_ptr((__lock)))); \
	} while (0)

 #define __local_lock_irq(lock)		__local_lock(lock)
@@ -150,7 +177,7 @@ typedef spinlock_t local_lock_t;

 #define __local_unlock(__lock)				\
	do {						\
-		spin_unlock(this_cpu_ptr((__lock)));	\
+		spin_unlock(__local_lock_alias(__lock, this_cpu_ptr((__lock)))); \
		migrate_enable();			\
	} while (0)

@@ -161,12 +188,12 @@ typedef spinlock_t local_lock_t;
 #define __local_lock_nested_bh(lock)			\
	do {						\
		lockdep_assert_in_softirq_func();	\
-		spin_lock(this_cpu_ptr(lock));		\
+		spin_lock(__local_lock_alias(lock, this_cpu_ptr(lock))); \
	} while (0)

 #define __local_unlock_nested_bh(lock)			\
	do {						\
-		spin_unlock(this_cpu_ptr((lock)));	\
+		spin_unlock(__local_lock_alias(lock, this_cpu_ptr((lock)))); \
	} while (0)

 #endif /* CONFIG_PREEMPT_RT */
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 4638d220f474..dd3fccff2352 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -6,7 +6,9 @@

 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -433,3 +435,47 @@ static void __used test_srcu_guard(struct test_srcu_data *d)
	guard(srcu)(&d->srcu);
	(void)srcu_dereference(d->data, &d->srcu);
 }
+
+struct test_local_lock_data {
+	local_lock_t lock;
+	int counter __var_guarded_by(&lock);
+};
+
+static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
+
+static void __used test_local_lock_init(struct test_local_lock_data *d)
+{
+	local_lock_init(&d->lock);
+	d->counter = 0;
+}
+
+static void __used test_local_lock(void)
+{
+	unsigned long flags;
+
+	local_lock(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock(&test_local_lock_data.lock);
+
+	local_lock_irq(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_irq(&test_local_lock_data.lock);
+
+	local_lock_irqsave(&test_local_lock_data.lock, flags);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_irqrestore(&test_local_lock_data.lock, flags);
+
+	local_lock_nested_bh(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_nested_bh(&test_local_lock_data.lock);
+}
+
+static void __used test_local_lock_guard(void)
+{
+	{ guard(local_lock)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_irq)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:14 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-21-elver@google.com>
Subject: [PATCH RFC 20/24] debugfs: Make debugfs_cancellation a capability struct
From: Marco Elver
To: elver@google.com

When compiling include/linux/debugfs.h with CAPABILITY_ANALYSIS enabled,
we can see this error:

./include/linux/debugfs.h:239:17: error: use of undeclared identifier 'cancellation'
  239 | void __acquires(cancellation)

Move the __acquires(..) attribute after the declaration, so that the
compiler can see the cancellation function argument, as well as making
struct debugfs_cancellation a real capability to benefit from Clang's
capability analysis.
Signed-off-by: Marco Elver
---
 include/linux/debugfs.h | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index fa2568b4380d..c6a429381887 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -240,18 +240,16 @@ ssize_t debugfs_read_file_str(struct file *file, char __user *user_buf,
  * @cancel: callback to call
  * @cancel_data: extra data for the callback to call
  */
-struct debugfs_cancellation {
+struct_with_capability(debugfs_cancellation) {
	struct list_head list;
	void (*cancel)(struct dentry *, void *);
	void *cancel_data;
 };

-void __acquires(cancellation)
-debugfs_enter_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
-void __releases(cancellation)
-debugfs_leave_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
+void debugfs_enter_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __acquires(cancellation);
+void debugfs_leave_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __releases(cancellation);

 #else

-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:15 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-22-elver@google.com>
Subject: [PATCH RFC 21/24] kfence: Enable capability analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche,
 Bill Wendling, Boqun Feng, Dmitry Vyukov, Frederic Weisbecker,
 Greg Kroah-Hartman, Ingo Molnar, Jann Horn, Joel Fernandes,
 Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Mark Rutland,
 Mathieu Desnoyers, Miguel Ojeda, Nathan Chancellor, Neeraj Upadhyay,
 Nick Desaulniers, Peter Zijlstra, Steven Rostedt, Thomas Gleixner,
 Uladzislau Rezki, Waiman Long, Will Deacon, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org,
 linux-crypto@vger.kernel.org

Enable capability analysis for the KFENCE subsystem. Notably,
kfence_handle_page_fault() required a minor restructure, which also
fixed a subtle race; arguably that function is more readable now.

Signed-off-by: Marco Elver
---
 mm/kfence/Makefile      |  2 ++
 mm/kfence/core.c        | 24 +++++++++++++++++-------
 mm/kfence/kfence.h      | 18 ++++++++++++------
 mm/kfence/kfence_test.c |  4 ++++
 mm/kfence/report.c      |  8 ++++++--
 5 files changed, 41 insertions(+), 15 deletions(-)

diff --git a/mm/kfence/Makefile b/mm/kfence/Makefile
index 2de2a58d11a1..b3640bdc3c69 100644
--- a/mm/kfence/Makefile
+++ b/mm/kfence/Makefile
@@ -1,5 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
+CAPABILITY_ANALYSIS := y
+
 obj-y := core.o report.o
 
 CFLAGS_kfence_test.o := -fno-omit-frame-pointer -fno-optimize-sibling-calls

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 102048821c22..c2d1ffd20a1f 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -7,6 +7,8 @@
 
 #define pr_fmt(fmt) "kfence: " fmt
 
+disable_capability_analysis();
+
 #include
 #include
 #include
@@ -34,6 +36,8 @@
 
 #include
 
+enable_capability_analysis();
+
 #include "kfence.h"
 
 /* Disables KFENCE on the first warning assuming an irrecoverable error.
 */
@@ -132,8 +136,8 @@ struct kfence_metadata *kfence_metadata __read_mostly;
 static struct kfence_metadata *kfence_metadata_init __read_mostly;
 
 /* Freelist with available objects. */
-static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
-static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
+DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
+static struct list_head kfence_freelist __var_guarded_by(&kfence_freelist_lock) = LIST_HEAD_INIT(kfence_freelist);
 
 /*
  * The static key to set up a KFENCE allocation; or if static keys are not used
@@ -253,6 +257,7 @@ static bool kfence_unprotect(unsigned long addr)
 }
 
 static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta)
+	__must_hold(&meta->lock)
 {
 	unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2;
 	unsigned long pageaddr = (unsigned long)&__kfence_pool[offset];
@@ -288,6 +293,7 @@ static inline bool kfence_obj_allocated(const struct kfence_metadata *meta)
 static noinline void
 metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next,
 		      unsigned long *stack_entries, size_t num_stack_entries)
+	__must_hold(&meta->lock)
 {
 	struct kfence_track *track =
 		next == KFENCE_OBJECT_ALLOCATED ? &meta->alloc_track : &meta->free_track;
@@ -485,7 +491,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	alloc_covered_add(alloc_stack_hash, 1);
 
 	/* Set required slab fields. */
-	slab = virt_to_slab((void *)meta->addr);
+	slab = virt_to_slab(addr);
 	slab->slab_cache = cache;
 	slab->objects = 1;
 
@@ -514,6 +520,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie)
 {
 	struct kcsan_scoped_access assert_page_exclusive;
+	u32 alloc_stack_hash;
 	unsigned long flags;
 	bool init;
 
@@ -546,9 +553,10 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	/* Mark the object as freed. */
 	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
 	init = slab_want_init_on_free(meta->cache);
+	alloc_stack_hash = meta->alloc_stack_hash;
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
-	alloc_covered_add(meta->alloc_stack_hash, -1);
+	alloc_covered_add(alloc_stack_hash, -1);
 
 	/* Check canary bytes for memory corruption. */
 	check_canary(meta);
@@ -593,6 +601,7 @@ static void rcu_guarded_free(struct rcu_head *h)
  * which partial initialization succeeded.
  */
 static unsigned long kfence_init_pool(void)
+	__no_capability_analysis
 {
 	unsigned long addr;
 	struct page *pages;
@@ -1192,6 +1201,7 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 {
 	const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE;
 	struct kfence_metadata *to_report = NULL;
+	unsigned long unprotected_page = 0;
 	enum kfence_error_type error_type;
 	unsigned long flags;
 
@@ -1225,9 +1235,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 	if (!to_report)
 		goto out;
 
-	raw_spin_lock_irqsave(&to_report->lock, flags);
-	to_report->unprotected_page = addr;
 	error_type = KFENCE_ERROR_OOB;
+	unprotected_page = addr;
 
 	/*
 	 * If the object was freed before we took the look we can still
@@ -1239,7 +1248,6 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 	if (!to_report)
 		goto out;
 
-	raw_spin_lock_irqsave(&to_report->lock, flags);
 	error_type = KFENCE_ERROR_UAF;
 	/*
 	 * We may race with __kfence_alloc(), and it is possible that a
@@ -1251,6 +1259,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 
 out:
 	if (to_report) {
+		raw_spin_lock_irqsave(&to_report->lock, flags);
+		to_report->unprotected_page = unprotected_page;
 		kfence_report_error(addr, is_write, regs, to_report, error_type);
 		raw_spin_unlock_irqrestore(&to_report->lock, flags);
 	} else {

diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index dfba5ea06b01..27829d70baf6 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -9,6 +9,8 @@
 #ifndef MM_KFENCE_KFENCE_H
 #define MM_KFENCE_KFENCE_H
 
+disable_capability_analysis();
+
 #include
 #include
 #include
@@ -16,6 +18,8 @@
 
 #include "../slab.h" /* for struct kmem_cache */
 
+enable_capability_analysis();
+
 /*
  * Get the canary byte pattern for @addr. Use a pattern that varies based on the
  * lower 3 bits of the address, to detect memory corruptions with higher
@@ -34,6 +38,8 @@
 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64
 
+extern raw_spinlock_t kfence_freelist_lock;
+
 /* KFENCE object states. */
 enum kfence_object_state {
 	KFENCE_OBJECT_UNUSED,		/* Object is unused. */
@@ -53,7 +59,7 @@ struct kfence_track {
 
 /* KFENCE metadata per guarded allocation. */
 struct kfence_metadata {
-	struct list_head list;		/* Freelist node; access under kfence_freelist_lock. */
+	struct list_head list __var_guarded_by(&kfence_freelist_lock);	/* Freelist node. */
 	struct rcu_head rcu_head;	/* For delayed freeing. */
 
 	/*
@@ -91,13 +97,13 @@ struct kfence_metadata {
 	 * In case of an invalid access, the page that was unprotected; we
 	 * optimistically only store one address.
 	 */
-	unsigned long unprotected_page;
+	unsigned long unprotected_page __var_guarded_by(&lock);
 
 	/* Allocation and free stack information. */
-	struct kfence_track alloc_track;
-	struct kfence_track free_track;
+	struct kfence_track alloc_track __var_guarded_by(&lock);
+	struct kfence_track free_track __var_guarded_by(&lock);
 	/* For updating alloc_covered on frees. */
-	u32 alloc_stack_hash;
+	u32 alloc_stack_hash __var_guarded_by(&lock);
 #ifdef CONFIG_MEMCG
 	struct slabobj_ext obj_exts;
 #endif
@@ -141,6 +147,6 @@ enum kfence_error_type {
 void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *regs,
 			 const struct kfence_metadata *meta, enum kfence_error_type type);
 
-void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta);
+void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta) __must_hold(&meta->lock);
 
 #endif /* MM_KFENCE_KFENCE_H */

diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 00034e37bc9f..67eca6e9a8de 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -11,6 +11,8 @@
  * Marco Elver
  */
 
+disable_capability_analysis();
+
 #include
 #include
 #include
@@ -26,6 +28,8 @@
 
 #include
 
+enable_capability_analysis();
+
 #include "kfence.h"
 
 /* May be overridden by . */

diff --git a/mm/kfence/report.c b/mm/kfence/report.c
index 10e6802a2edf..bbee90d0034d 100644
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -5,6 +5,8 @@
  * Copyright (C) 2020, Google LLC.
  */
 
+disable_capability_analysis();
+
 #include
 
 #include
@@ -22,6 +24,8 @@
 
 #include
 
+enable_capability_analysis();
+
 #include "kfence.h"
 
 /* May be overridden by . */
@@ -106,6 +110,7 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
 
 static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta,
 			       bool show_alloc)
+	__must_hold(&meta->lock)
 {
 	const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track;
 	u64 ts_sec = track->ts_nsec;
@@ -207,8 +212,6 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta))
 		return;
 
-	if (meta)
-		lockdep_assert_held(&meta->lock);
 	/*
 	 * Because we may generate reports in printk-unfriendly parts of the
 	 * kernel, such as scheduler code, the use of printk() could deadlock.
@@ -263,6 +266,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0);
 
 	if (meta) {
+		lockdep_assert_held(&meta->lock);
 		pr_err("\n");
 		kfence_print_object(NULL, meta);
 	}
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:16 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-23-elver@google.com>
Subject: [PATCH RFC 22/24] kcov: Enable capability analysis
From: Marco Elver
To: elver@google.com

Enable capability analysis for the KCOV subsystem.
Signed-off-by: Marco Elver
---
 kernel/Makefile |  2 ++
 kernel/kcov.c   | 40 +++++++++++++++++++++++++++++-----------
 2 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/kernel/Makefile b/kernel/Makefile
index 87866b037fbe..7e399998532d 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -39,6 +39,8 @@ KASAN_SANITIZE_kcov.o := n
 KCSAN_SANITIZE_kcov.o := n
 UBSAN_SANITIZE_kcov.o := n
 KMSAN_SANITIZE_kcov.o := n
+
+CAPABILITY_ANALYSIS_kcov.o := y
 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector
 
 obj-y += sched/

diff --git a/kernel/kcov.c b/kernel/kcov.c
index 187ba1b80bda..d89c933fe682 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 #define pr_fmt(fmt) "kcov: " fmt
 
+disable_capability_analysis();
+
 #define DISABLE_BRANCH_PROFILING
 #include
 #include
@@ -27,6 +29,8 @@
 #include
 #include
 
+enable_capability_analysis();
+
 #define kcov_debug(fmt, ...) pr_debug("%s: " fmt, __func__, ##__VA_ARGS__)
 
 /* Number of 64-bit words written per one comparison: */
@@ -55,13 +59,13 @@ struct kcov {
 	refcount_t refcount;
 	/* The lock protects mode, size, area and t. */
 	spinlock_t lock;
-	enum kcov_mode mode;
+	enum kcov_mode mode __var_guarded_by(&lock);
 	/* Size of arena (in long's). */
-	unsigned int size;
+	unsigned int size __var_guarded_by(&lock);
 	/* Coverage buffer shared with user space. */
-	void *area;
+	void *area __var_guarded_by(&lock);
 	/* Task for which we collect coverage, or NULL. */
-	struct task_struct *t;
+	struct task_struct *t __var_guarded_by(&lock);
 	/* Collecting coverage from remote (background) threads. */
 	bool remote;
 	/* Size of remote area (in long's). */
@@ -391,6 +395,7 @@ void kcov_task_init(struct task_struct *t)
 }
 
 static void kcov_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov->t = NULL;
 	kcov->mode = KCOV_MODE_INIT;
@@ -400,6 +405,7 @@ static void kcov_reset(struct kcov *kcov)
 }
 
 static void kcov_remote_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	int bkt;
 	struct kcov_remote *remote;
@@ -419,6 +425,7 @@ static void kcov_remote_reset(struct kcov *kcov)
 }
 
 static void kcov_disable(struct task_struct *t, struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov_task_reset(t);
 	if (kcov->remote)
@@ -435,8 +442,11 @@ static void kcov_get(struct kcov *kcov)
 static void kcov_put(struct kcov *kcov)
 {
 	if (refcount_dec_and_test(&kcov->refcount)) {
-		kcov_remote_reset(kcov);
-		vfree(kcov->area);
+		/* Capability-safety: no references left, object being destroyed. */
+		capability_unsafe(
+			kcov_remote_reset(kcov);
+			vfree(kcov->area);
+		);
 		kfree(kcov);
 	}
 }
@@ -491,6 +501,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 	unsigned long size, off;
 	struct page *page;
 	unsigned long flags;
+	unsigned long *area;
 
 	spin_lock_irqsave(&kcov->lock, flags);
 	size = kcov->size * sizeof(unsigned long);
@@ -499,10 +510,11 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 		res = -EINVAL;
 		goto exit;
 	}
+	area = kcov->area;
 	spin_unlock_irqrestore(&kcov->lock, flags);
 	vm_flags_set(vma, VM_DONTEXPAND);
 	for (off = 0; off < size; off += PAGE_SIZE) {
-		page = vmalloc_to_page(kcov->area + off);
+		page = vmalloc_to_page(area + off);
 		res = vm_insert_page(vma, vma->vm_start + off, page);
 		if (res) {
 			pr_warn_once("kcov: vm_insert_page() failed\n");
@@ -522,10 +534,10 @@ static int kcov_open(struct inode *inode, struct file *filep)
 	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
 	if (!kcov)
 		return -ENOMEM;
+	spin_lock_init(&kcov->lock);
 	kcov->mode = KCOV_MODE_DISABLED;
 	kcov->sequence = 1;
 	refcount_set(&kcov->refcount, 1);
-	spin_lock_init(&kcov->lock);
 	filep->private_data = kcov;
 	return nonseekable_open(inode, filep);
 }
@@ -556,6 +568,7 @@ static int kcov_get_mode(unsigned long arg)
  * vmalloc fault handling path is instrumented.
  */
 static void kcov_fault_in_area(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
 	unsigned long *area = kcov->area;
@@ -584,6 +597,7 @@ static inline bool kcov_check_handle(u64 handle, bool common_valid,
 
 static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			     unsigned long arg)
+	__must_hold(&kcov->lock)
 {
 	struct task_struct *t;
 	unsigned long flags, unused;
@@ -814,6 +828,7 @@ static inline bool kcov_mode_enabled(unsigned int mode)
 }
 
 static void kcov_remote_softirq_start(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 	unsigned int mode;
@@ -831,6 +846,7 @@ static void kcov_remote_softirq_start(struct task_struct *t)
 }
 
 static void kcov_remote_softirq_stop(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 
@@ -896,10 +912,12 @@ void kcov_remote_start(u64 handle)
 	/* Put in kcov_remote_stop(). */
 	kcov_get(kcov);
 	/*
-	 * Read kcov fields before unlock to prevent races with
-	 * KCOV_DISABLE / kcov_remote_reset().
+	 * Read kcov fields before unlocking kcov_remote_lock to prevent races
+	 * with KCOV_DISABLE and kcov_remote_reset(); cannot acquire kcov->lock
+	 * here, because it might lead to deadlock given kcov_remote_lock is
+	 * acquired _after_ kcov->lock elsewhere.
	 */
-	mode = kcov->mode;
+	mode = capability_unsafe(kcov->mode);
 	sequence = kcov->sequence;
 	if (in_task()) {
 		size = kcov->remote_size;
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Date: Thu, 6 Feb 2025 19:10:17 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-24-elver@google.com>
Subject: [PATCH RFC 23/24] stackdepot: Enable capability analysis
From: Marco Elver
To: elver@google.com

Enable capability analysis for stackdepot.

Signed-off-by: Marco Elver
---
 lib/Makefile     |  1 +
 lib/stackdepot.c | 24 ++++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/lib/Makefile b/lib/Makefile
index 1dbb59175eb0..f40ba93c9a94 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -270,6 +270,7 @@ obj-$(CONFIG_POLYNOMIAL) += polynomial.o
 # Prevent the compiler from calling builtins like memcmp() or bcmp() from this
 # file.
 CFLAGS_stackdepot.o += -fno-builtin
+CAPABILITY_ANALYSIS_stackdepot.o := y
 obj-$(CONFIG_STACKDEPOT) += stackdepot.o
 KASAN_SANITIZE_stackdepot.o := n
 # In particular, instrumenting stackdepot.c with KMSAN will result in infinite

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 245d5b416699..6664146d1f31 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -14,6 +14,8 @@
 
 #define pr_fmt(fmt) "stackdepot: " fmt
 
+disable_capability_analysis();
+
 #include
 #include
 #include
@@ -36,6 +38,8 @@
 #include
 #include
 
+enable_capability_analysis();
+
 #define DEPOT_POOLS_CAP 8192
 /* The pool_index is offset by 1 so the first record does not have a 0 handle. */
 #define DEPOT_MAX_POOLS \
@@ -61,18 +65,18 @@ static unsigned int stack_bucket_number_order;
 /* Hash mask for indexing the table. */
 static unsigned int stack_hash_mask;
 
+/* The lock must be held when performing pool or freelist modifications. */
+static DEFINE_RAW_SPINLOCK(pool_lock);
 /* Array of memory regions that store stack records. */
-static void *stack_pools[DEPOT_MAX_POOLS];
+static void *stack_pools[DEPOT_MAX_POOLS] __var_guarded_by(&pool_lock);
 /* Newly allocated pool that is not yet added to stack_pools. */
 static void *new_pool;
 /* Number of pools in stack_pools. */
 static int pools_num;
 /* Offset to the unused space in the currently used pool. */
-static size_t pool_offset = DEPOT_POOL_SIZE;
+static size_t pool_offset __var_guarded_by(&pool_lock) = DEPOT_POOL_SIZE;
 /* Freelist of stack records within stack_pools. */
-static LIST_HEAD(free_stacks);
-/* The lock must be held when performing pool or freelist modifications. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
+static __var_guarded_by(&pool_lock) LIST_HEAD(free_stacks);
 
 /* Statistics counters for debugfs. */
 enum depot_counter_id {
@@ -242,6 +246,7 @@ EXPORT_SYMBOL_GPL(stack_depot_init);
  * Initializes new stack pool, and updates the list of pools.
  */
 static bool depot_init_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -289,6 +294,7 @@ static bool depot_init_pool(void **prealloc)
 
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -308,6 +314,7 @@ static void depot_keep_new_pool(void **prealloc)
  * the current pre-allocation.
  */
 static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 	void *current_pool;
@@ -342,6 +349,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
 
 /* Try to find next free usable entry from the freelist. */
 static struct stack_record *depot_pop_free(void)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 
@@ -379,6 +387,7 @@ static inline size_t depot_stack_record_size(struct stack_record *s, unsigned in
 /* Allocates a new stack in a stack depot pool. */
 static struct stack_record *
 depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash,
		  depot_flags_t flags, void **prealloc)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack = NULL;
 	size_t record_size;
@@ -437,6 +446,7 @@ depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, dep
 }
 
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
+	__must_not_hold(&pool_lock)
 {
 	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
@@ -453,7 +463,8 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 		return NULL;
 	}
 
-	pool = stack_pools[pool_index];
+	/* @pool_index either valid, or user passed in corrupted value. */
+	pool = capability_unsafe(stack_pools[pool_index]);
 	if (WARN_ON(!pool))
 		return NULL;
 
@@ -466,6 +477,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 
 /* Links stack into the freelist. */
 static void depot_free_stack(struct stack_record *stack)
+	__must_not_hold(&pool_lock)
 {
 	unsigned long flags;
 
-- 
2.48.1.502.g6dc24dfdaf-goog

From nobody Tue Dec 16 05:57:00 2025
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="K9kTI7MY" Received: by mail-ej1-f74.google.com with SMTP id a640c23a62f3a-ab6e2b653a0so145212966b.3 for ; Thu, 06 Feb 2025 10:18:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1738865926; x=1739470726; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=M3npqKmcD7idosYB3Rql5Qlei3J/L1gfZsrIz0sfHUY=; b=K9kTI7MY6mQK9tOS73I2v9K0v65AtDd4n2pwmcc2R29p6EGO8u8elONw3yS7WssT8K 2FHQRGGn5qIIdvXVfYQH2et5AXGmhlmxy0BmuU3xEwqdcM/o8ba5J2zJcA1CL/xpeBdx F4jVmtxMSilRSRL6zzEIXnSCGo4sRE06XlD2vkwUvbNqSfXTTu17vThH93vHfWfHUhqw lBF8udw97YWafgY7Z7tih/DRm0fv7V9kgZyjbQX0MAWKUwbaKoFg8Ks9267UqO8bkIcL cmLrOtlVN8sW1+T3WWfve5PL4QtjE3xGXyCfPiCfGQ27JIqGDQrC1CJqUmZNGH6BnJr6 kiFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1738865926; x=1739470726; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=M3npqKmcD7idosYB3Rql5Qlei3J/L1gfZsrIz0sfHUY=; b=s3SeG21Q2zXMAhOZt7VHwAi2Rzdq1QiOqkEKIRMYZeKnBip45JGwimXY6xpuLlX8We qyc8rVc6Apjgh1zbOldP9ZJUTerteAfWZZI/gmxWcIeOo96y3qvT5/q5alky6z8k15+O GjScaHOCMgPcwp8o9Z47sVa9U+LxBYT/kelN7zWYRwv9ueC1M8EDIkJZs5We8se4qrqd kyTT0Lt+XS+YE9sfr4uSUQ/xrZOHwRtvg/dVYpBXqlzuBv6Z2kpfjQQkEiNmfrxesMxc HjsHDd6HkUUqLIXA+lBQ6uni+c51ogvehwVar5ONpBPfHImZnv8O6uIttUJUuuFdpLPp ZMrQ== X-Forwarded-Encrypted: i=1; AJvYcCUM/eoSNmFHScU/GpaNmA+JCBbOd5t++kOLs2tz9afZTuIjKUWSwKYlnmk2XVZIhMLXgCLka37IYyvSc64=@vger.kernel.org X-Gm-Message-State: AOJu0YyGQrgwxboFSb9EangtWpPlfTZ/jr2G6Zy6R6ze0C4/cEbQQdEM pNxaqwd82s3RtCNuPgquD5dWgPAta3d8ODCBYOqjB1fs5nJ0JKcVnUWzSj6VMqwdtkxffluBdg= = X-Google-Smtp-Source: AGHT+IHaR4OtYgVkoAVUDStYqYv+9thvNuzjxTMXa8LJQryBI8+iut5U512tyfYG9g1hUPYb+umuGfZNFg== X-Received: from 
edben5.prod.google.com ([2002:a05:6402:5285:b0:5dc:c943:7c1]) (user=elver job=prod-delivery.src-stubby-dispatcher) by 2002:a17:906:dc8d:b0:aac:622:8f6 with SMTP id a640c23a62f3a-ab75e26558fmr633944866b.17.1738865926699; Thu, 06 Feb 2025 10:18:46 -0800 (PST) Date: Thu, 6 Feb 2025 19:10:18 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250206181711.1902989-1-elver@google.com> X-Mailer: git-send-email 2.48.1.502.g6dc24dfdaf-goog Message-ID: <20250206181711.1902989-25-elver@google.com> Subject: [PATCH RFC 24/24] rhashtable: Enable capability analysis From: Marco Elver To: elver@google.com Cc: "Paul E. McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Enable capability analysis for rhashtable, which was used as an initial test as it contains a combination of RCU, mutex, and bit_spinlock usage. Users of rhashtable now also benefit from annotations on the API, which will now warn if the RCU read lock is not held where required. 
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/rhashtable.h | 14 +++++++++++---
 lib/Makefile               |  2 ++
 lib/rhashtable.c           | 12 +++++++++---
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
index 8463a128e2f4..c6374691ccc7 100644
--- a/include/linux/rhashtable.h
+++ b/include/linux/rhashtable.h
@@ -245,16 +245,17 @@ void *rhashtable_insert_slow(struct rhashtable *ht, const void *key,
 void rhashtable_walk_enter(struct rhashtable *ht,
 			   struct rhashtable_iter *iter);
 void rhashtable_walk_exit(struct rhashtable_iter *iter);
-int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires(RCU);
+int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires_shared(RCU);
 
 static inline void rhashtable_walk_start(struct rhashtable_iter *iter)
+	__acquires_shared(RCU)
 {
 	(void)rhashtable_walk_start_check(iter);
 }
 
 void *rhashtable_walk_next(struct rhashtable_iter *iter);
 void *rhashtable_walk_peek(struct rhashtable_iter *iter);
-void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases(RCU);
+void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases_shared(RCU);
 
 void rhashtable_free_and_destroy(struct rhashtable *ht,
 				 void (*free_fn)(void *ptr, void *arg),
@@ -325,6 +326,7 @@ static inline struct rhash_lock_head __rcu **rht_bucket_insert(
 
 static inline unsigned long rht_lock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt)
+	__acquires(__bitlock(0, bkt))
 {
 	unsigned long flags;
 
@@ -337,6 +339,7 @@ static inline unsigned long rht_lock(struct bucket_table *tbl,
 static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 					    struct rhash_lock_head __rcu **bucket,
 					    unsigned int subclass)
+	__acquires(__bitlock(0, bucket))
 {
 	unsigned long flags;
 
@@ -349,6 +352,7 @@ static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 static inline void rht_unlock(struct bucket_table *tbl,
 			      struct rhash_lock_head __rcu **bkt,
 			      unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	lock_map_release(&tbl->dep_map);
 	bit_spin_unlock(0, (unsigned long *)bkt);
@@ -402,13 +406,14 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
 					  struct rhash_lock_head __rcu **bkt,
 					  struct rhash_head *obj,
 					  unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	if (rht_is_a_nulls(obj))
 		obj = NULL;
 	lock_map_release(&tbl->dep_map);
 	rcu_assign_pointer(*bkt, (void *)obj);
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(0, bkt));
 	local_irq_restore(flags);
 }
 
@@ -589,6 +594,7 @@ static inline int rhashtable_compare(struct rhashtable_compare_arg *arg,
 static inline struct rhash_head *__rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhashtable_compare_arg arg = {
 		.ht = ht,
@@ -642,6 +648,7 @@ static inline struct rhash_head *__rhashtable_lookup(
 static inline void *rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params);
 
@@ -692,6 +699,7 @@ static inline void *rhashtable_lookup_fast(
 static inline struct rhlist_head *rhltable_lookup(
 	struct rhltable *hlt, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params);
 
diff --git a/lib/Makefile b/lib/Makefile
index f40ba93c9a94..c7004270ad5f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -45,6 +45,8 @@ lib-$(CONFIG_MIN_HEAP) += min_heap.o
 lib-y += kobject.o klist.o
 obj-y += lockref.o
 
+CAPABILITY_ANALYSIS_rhashtable.o := y
+
 obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
 	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
 	 list_sort.o uuid.o iov_iter.o clz_ctz.o \
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 3e555d012ed6..47a61e214621 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -11,6 +11,10 @@
  * pointer as suggested by Josh Triplett
  */
 
+#include
+
+disable_capability_analysis();
+
 #include
 #include
 #include
@@ -22,10 +26,11 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
+enable_capability_analysis();
+
 #define HASH_DEFAULT_SIZE	64UL
 #define HASH_MIN_SIZE		4U
 
@@ -358,6 +363,7 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
 static int rhashtable_rehash_alloc(struct rhashtable *ht,
 				   struct bucket_table *old_tbl,
 				   unsigned int size)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *new_tbl;
 	int err;
@@ -392,6 +398,7 @@ static int rhashtable_rehash_alloc(struct rhashtable *ht,
  * bucket locks or concurrent RCU protected lookups and traversals.
  */
 static int rhashtable_shrink(struct rhashtable *ht)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
 	unsigned int nelems = atomic_read(&ht->nelems);
@@ -724,7 +731,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
  * resize events and always continue.
  */
 int rhashtable_walk_start_check(struct rhashtable_iter *iter)
-	__acquires(RCU)
+	__acquires_shared(RCU)
 {
 	struct rhashtable *ht = iter->ht;
 	bool rhlist = ht->rhlist;
@@ -940,7 +947,6 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_peek);
  * hash table.
  */
 void rhashtable_walk_stop(struct rhashtable_iter *iter)
-	__releases(RCU)
 {
 	struct rhashtable *ht;
 	struct bucket_table *tbl = iter->walker.tbl;
-- 
2.48.1.502.g6dc24dfdaf-goog