Date: Thu, 18 Sep 2025 15:59:34 +0200
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, "Paul E. McKenney",
 Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Bill Wendling,
 Christoph Hellwig, Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker,
 Greg Kroah-Hartman, Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes,
 Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda,
 Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt,
 Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long,
 kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org
In-Reply-To: <20250918140451.1289454-1-elver@google.com>
References: <20250918140451.1289454-1-elver@google.com>
Message-ID: <20250918140451.1289454-24-elver@google.com>
Subject: [PATCH v3 23/35] compiler-capability-analysis: Remove __cond_lock()
 function-like helper

As discussed in [1], removing __cond_lock() will improve the readability
of trylock code. Now that Sparse context tracking support has been
removed, we can also remove __cond_lock().

Change existing APIs to either drop __cond_lock() completely, or make use
of the __cond_acquires() function attribute instead. In particular,
spinlock and rwlock implementations required switching over to inline
helpers rather than statement-expressions for their trylock_* variants.

Link: https://lore.kernel.org/all/20250207082832.GU7145@noisy.programming.kicks-ass.net/ [1]
Suggested-by: Peter Zijlstra
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
 * New patch.
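Note for reviewers: the conversion pattern applied throughout the hunks
below, as a minimal sketch (the foo_trylock() identifiers are illustrative
only and not taken from any single hunk):

	/* Before: a __cond_lock() wrapper macro told the checker that
	 * the lock is held whenever the wrapped expression is true. */
	int _foo_trylock(struct foo *f);
	#define foo_trylock(f) __cond_lock(f, _foo_trylock(f))

	/* After: the __cond_acquires() attribute on the prototype carries
	 * the same information for the compiler's capability analysis, so
	 * callers use the function directly and the wrapper disappears. */
	int foo_trylock(struct foo *f) __cond_acquires(true, f);

The real conversions follow.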
---
 .../dev-tools/capability-analysis.rst         |  2 -
 Documentation/mm/process_addrs.rst            |  6 +-
 .../net/wireless/intel/iwlwifi/iwl-trans.c    |  4 +-
 .../net/wireless/intel/iwlwifi/iwl-trans.h    |  6 +-
 .../intel/iwlwifi/pcie/gen1_2/internal.h      |  5 +-
 .../intel/iwlwifi/pcie/gen1_2/trans.c         |  4 +-
 include/linux/compiler-capability-analysis.h  | 33 ----------
 include/linux/mm.h                            | 33 ++--------
 include/linux/rwlock.h                        | 11 +---
 include/linux/rwlock_api_smp.h                | 14 ++++-
 include/linux/rwlock_rt.h                     | 21 ++++---
 include/linux/sched/signal.h                  | 14 +----
 include/linux/spinlock.h                      | 45 +++++---------
 include/linux/spinlock_api_smp.h              | 20 ++++++
 include/linux/spinlock_api_up.h               | 61 ++++++++++++++++---
 include/linux/spinlock_rt.h                   | 26 ++++----
 kernel/signal.c                               |  4 +-
 kernel/time/posix-timers.c                    | 13 +---
 lib/dec_and_lock.c                            |  8 +--
 mm/memory.c                                   |  4 +-
 mm/pgtable-generic.c                          | 19 +++---
 tools/include/linux/compiler_types.h          |  2 -
 22 files changed, 162 insertions(+), 193 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 2b89d346723b..3456132261c6 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -115,10 +115,8 @@ Keywords
    __releases_shared
    __acquire
    __release
-   __cond_lock
    __acquire_shared
    __release_shared
-   __cond_lock_shared
    capability_unsafe
    __capability_unsafe
    disable_capability_analysis
    enable_capability_analysis
diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index be49e2a269e4..25d551a01f16 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -582,7 +582,7 @@ To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock`
 or :c:func:`!pte_offset_map` can be used depending on stability
 requirements. These map the page table into kernel memory if required, take the RCU lock, and
 depending on variant, may also look up or acquire the PTE lock.
-See the comment on :c:func:`!__pte_offset_map_lock`.
+See the comment on :c:func:`!pte_offset_map_lock`.
 
 Atomicity
 ^^^^^^^^^
@@ -666,7 +666,7 @@ must be released via :c:func:`!pte_unmap_unlock`.
 .. note:: There are some variants on this, such as
    :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but
    for brevity we do not explore this. See the comment for
-   :c:func:`!__pte_offset_map_lock` for more details.
+   :c:func:`!pte_offset_map_lock` for more details.
 
 When modifying data in ranges we typically only wish to allocate higher page
 tables as necessary, using these locks to avoid races or overwriting anything,
@@ -685,7 +685,7 @@ At the leaf page table, that is the PTE, we can't entirely rely on this pattern
 as we have separate PMD and PTE locks and a THP collapse for instance might
 have eliminated the PMD entry as well as the PTE from under us.
 
-This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry
+This is why :c:func:`!pte_offset_map_lock` locklessly retrieves the PMD entry
 for the PTE, carefully checking it is as expected, before acquiring the
 PTE-specific lock, and then *again* checking that the PMD entry is as expected.
 
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
index 3694b41d6621..5c32f0d95da4 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
@@ -566,11 +566,11 @@ int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs,
 	return iwl_trans_pcie_read_config32(trans, ofs, val);
 }
 
-bool _iwl_trans_grab_nic_access(struct iwl_trans *trans)
+bool iwl_trans_grab_nic_access(struct iwl_trans *trans)
 {
 	return iwl_trans_pcie_grab_nic_access(trans);
 }
-IWL_EXPORT_SYMBOL(_iwl_trans_grab_nic_access);
+IWL_EXPORT_SYMBOL(iwl_trans_grab_nic_access);
 
 void __releases(nic_access)
 iwl_trans_release_nic_access(struct iwl_trans *trans)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index d0e658801c2e..d6b11893e6c7 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -1101,11 +1101,7 @@ int iwl_trans_sw_reset(struct iwl_trans *trans);
 void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg,
			     u32 mask, u32 value);
 
-bool _iwl_trans_grab_nic_access(struct iwl_trans *trans);
-
-#define iwl_trans_grab_nic_access(trans)	\
-	__cond_lock(nic_access,			\
-		    likely(_iwl_trans_grab_nic_access(trans)))
+bool iwl_trans_grab_nic_access(struct iwl_trans *trans);
 
 void __releases(nic_access)
 iwl_trans_release_nic_access(struct iwl_trans *trans);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h
index f48aeebb151c..ccc891e99d8f 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h
@@ -542,10 +542,7 @@ void iwl_trans_pcie_free(struct iwl_trans *trans);
 void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions,
					   struct device *dev);
 
-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent);
-#define _iwl_trans_pcie_grab_nic_access(trans, silent)	\
-	__cond_lock(nic_access_nobh,			\
-		    likely(__iwl_trans_pcie_grab_nic_access(trans, silent)))
+bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent);
 
 void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev);
 void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c
index 327366bf87de..c9ab1d124fc6 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c
@@ -2304,7 +2304,7 @@ EXPORT_SYMBOL(iwl_trans_pcie_reset);
  * This version doesn't disable BHs but rather assumes they're
  * already disabled.
  */
-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent)
+bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent)
 {
 	int ret;
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -2392,7 +2392,7 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
 	bool ret;
 
 	local_bh_disable();
-	ret = __iwl_trans_pcie_grab_nic_access(trans, false);
+	ret = _iwl_trans_pcie_grab_nic_access(trans, false);
 	if (ret) {
 		/* keep BHs disabled until iwl_trans_pcie_release_nic_access */
 		return ret;
diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
index 6046fca44f17..f8a1da67589c 100644
--- a/include/linux/compiler-capability-analysis.h
+++ b/include/linux/compiler-capability-analysis.h
@@ -326,25 +326,6 @@ static inline void _capability_unsafe_alias(void **p) { }
  */
 #define __release(x) __release_cap(x)
 
-/**
- * __cond_lock() - function that conditionally acquires a capability
- *                 exclusively
- * @x: capability instance pinter
- * @c: boolean expression
- *
- * Return: result of @c
- *
- * No-op function that conditionally acquires capability instance @x
- * exclusively, if the boolean expression @c is true. The result of @c is the
- * return value, to be able to create a capability-enabled interface; for
- * example:
- *
- * .. code-block:: c
- *
- *    #define spin_trylock(l) __cond_lock(&lock, _spin_trylock(&lock))
- */
-#define __cond_lock(x, c) __try_acquire_cap(x, c)
-
 /**
  * __must_hold_shared() - function attribute, caller must hold shared capability
  *
@@ -401,20 +382,6 @@ static inline void _capability_unsafe_alias(void **p) { }
  */
 #define __release_shared(x) __release_shared_cap(x)
 
-/**
- * __cond_lock_shared() - function that conditionally acquires a capability
- *                        shared
- * @x: capability instance pinter
- * @c: boolean expression
- *
- * Return: result of @c
- *
- * No-op function that conditionally acquires capability instance @x with shared
- * access, if the boolean expression @c is true. The result of @c is the return
- * value, to be able to create a capability-enabled interface.
- */
-#define __cond_lock_shared(x, c) __try_acquire_shared_cap(x, c)
-
 /**
  * __acquire_ret() - helper to acquire capability of return value
  * @call: call expression
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ae97a0b8ec7..0ca9005378c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2713,15 +2713,8 @@ static inline pud_t pud_mkspecial(pud_t pud)
 }
 #endif	/* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */
 
-extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
-			       spinlock_t **ptl);
-static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
-				    spinlock_t **ptl)
-{
-	pte_t *ptep;
-	__cond_lock(*ptl, ptep = __get_locked_pte(mm, addr, ptl));
-	return ptep;
-}
+extern pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
+			     spinlock_t **ptl);
 
 #ifdef __PAGETABLE_P4D_FOLDED
 static inline int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
@@ -3005,31 +2998,15 @@ static inline bool pagetable_pte_ctor(struct mm_struct *mm,
 	return true;
 }
 
-pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp);
-static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr,
-				      pmd_t *pmdvalp)
-{
-	pte_t *pte;
+pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp);
 
-	__cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp));
-	return pte;
-}
 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr)
 {
 	return __pte_offset_map(pmd, addr, NULL);
 }
 
-pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp);
-static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
-			unsigned long addr, spinlock_t **ptlp)
-{
-	pte_t *pte;
-
-	__cond_lock(RCU, __cond_lock(*ptlp,
-			pte = __pte_offset_map_lock(mm, pmd, addr, ptlp)));
-	return pte;
-}
+pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
+			   unsigned long addr, spinlock_t **ptlp);
 
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr, spinlock_t **ptlp);
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 78e4d02ee2c6..827ea95c9e06 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -50,8 +50,8 @@ do {								\
  * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various
  * methods are defined as nops in the case they are not required.
  */
-#define read_trylock(lock)	__cond_lock_shared(lock, _raw_read_trylock(lock))
-#define write_trylock(lock)	__cond_lock(lock, _raw_write_trylock(lock))
+#define read_trylock(lock)	_raw_read_trylock(lock)
+#define write_trylock(lock)	_raw_write_trylock(lock)
 
 #define write_lock(lock)	_raw_write_lock(lock)
 #define read_lock(lock)		_raw_read_lock(lock)
@@ -113,12 +113,7 @@ do {								\
 } while (0)
 #define write_unlock_bh(lock)		_raw_write_unlock_bh(lock)
 
-#define write_trylock_irqsave(lock, flags) \
-	__cond_lock(lock, ({ \
-		local_irq_save(flags); \
-		_raw_write_trylock(lock) ? \
-		1 : ({ local_irq_restore(flags); 0; }); \
-	}))
+#define write_trylock_irqsave(lock, flags) _raw_write_trylock_irqsave(lock, &(flags))
 
 #ifdef arch_rwlock_is_contended
 #define rwlock_is_contended(lock) \
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index 3e975105a606..b289c3089ab7 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -26,8 +26,8 @@ unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock)
 							__acquires(lock);
 unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock)
 							__acquires(lock);
-int __lockfunc _raw_read_trylock(rwlock_t *lock);
-int __lockfunc _raw_write_trylock(rwlock_t *lock);
+int __lockfunc _raw_read_trylock(rwlock_t *lock) __cond_acquires_shared(true, lock);
+int __lockfunc _raw_write_trylock(rwlock_t *lock) __cond_acquires(true, lock);
 void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock);
 void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock);
 void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock);
@@ -41,6 +41,16 @@ void __lockfunc
 _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
 							__releases(lock);
 
+static inline bool _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	local_irq_save(*flags);
+	if (_raw_write_trylock(lock))
+		return true;
+	local_irq_restore(*flags);
+	return false;
+}
+
 #ifdef CONFIG_INLINE_READ_LOCK
 #define _raw_read_lock(lock)	__raw_read_lock(lock)
 #endif
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 52ef2dc63a96..6015e296914f 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -26,11 +26,11 @@ do {							\
 } while (0)
 
 extern void rt_read_lock(rwlock_t *rwlock)	__acquires_shared(rwlock);
-extern int rt_read_trylock(rwlock_t *rwlock);
+extern int rt_read_trylock(rwlock_t *rwlock)	__cond_acquires_shared(true, rwlock);
 extern void rt_read_unlock(rwlock_t *rwlock)	__releases_shared(rwlock);
 extern void rt_write_lock(rwlock_t *rwlock)	__acquires(rwlock);
 extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass)	__acquires(rwlock);
-extern int rt_write_trylock(rwlock_t *rwlock);
+extern int rt_write_trylock(rwlock_t *rwlock)	__cond_acquires(true, rwlock);
 extern void rt_write_unlock(rwlock_t *rwlock)	__releases(rwlock);
 
 static __always_inline void read_lock(rwlock_t *rwlock)
@@ -59,7 +59,7 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock)
 		flags = 0;			\
 	} while (0)
 
-#define read_trylock(lock)	__cond_lock_shared(lock, rt_read_trylock(lock))
+#define read_trylock(lock)	rt_read_trylock(lock)
 
 static __always_inline void read_unlock(rwlock_t *rwlock)
 	__releases_shared(rwlock)
@@ -123,14 +123,15 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock)
 		flags = 0;			\
 	} while (0)
 
-#define write_trylock(lock)	__cond_lock(lock, rt_write_trylock(lock))
+#define write_trylock(lock)	rt_write_trylock(lock)
 
-#define write_trylock_irqsave(lock, flags)		\
-	__cond_lock(lock, ({				\
-		typecheck(unsigned long, flags);	\
-		flags = 0;				\
-		rt_write_trylock(lock);			\
-	}))
+static __always_inline bool _write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags)
+	__cond_acquires(true, rwlock)
+{
+	*flags = 0;
+	return rt_write_trylock(rwlock);
+}
+#define write_trylock_irqsave(lock, flags) _write_trylock_irqsave(lock, &(flags))
 
 static __always_inline void write_unlock(rwlock_t *rwlock)
 	__releases(rwlock)
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 1ef1edbaaf79..bc7f83b012fb 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -733,18 +733,8 @@ static inline int thread_group_empty(struct task_struct *p)
 #define delay_group_leader(p) \
 		(thread_group_leader(p) && !thread_group_empty(p))
 
-extern struct sighand_struct *__lock_task_sighand(struct task_struct *task,
-						  unsigned long *flags);
-
-static inline struct sighand_struct *lock_task_sighand(struct task_struct *task,
-						       unsigned long *flags)
-{
-	struct sighand_struct *ret;
-
-	ret = __lock_task_sighand(task, flags);
-	(void)__cond_lock(&task->sighand->siglock, ret);
-	return ret;
-}
+extern struct sighand_struct *lock_task_sighand(struct task_struct *task,
+						unsigned long *flags);
 
 static inline void unlock_task_sighand(struct task_struct *task,
 				       unsigned long *flags)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 22295a126c3a..d0cef13bfb33 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -213,7 +213,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
  * various methods are defined as nops in the case they are not
  * required.
  */
-#define raw_spin_trylock(lock)	__cond_lock(lock, _raw_spin_trylock(lock))
+#define raw_spin_trylock(lock)	_raw_spin_trylock(lock)
 
 #define raw_spin_lock(lock)	_raw_spin_lock(lock)
 
@@ -284,22 +284,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 } while (0)
 #define raw_spin_unlock_bh(lock)	_raw_spin_unlock_bh(lock)
 
-#define raw_spin_trylock_bh(lock) \
-	__cond_lock(lock, _raw_spin_trylock_bh(lock))
+#define raw_spin_trylock_bh(lock)	_raw_spin_trylock_bh(lock)
 
-#define raw_spin_trylock_irq(lock) \
-	__cond_lock(lock, ({ \
-		local_irq_disable(); \
-		_raw_spin_trylock(lock) ? \
-		1 : ({ local_irq_enable(); 0; }); \
-	}))
+#define raw_spin_trylock_irq(lock)	_raw_spin_trylock_irq(lock)
 
-#define raw_spin_trylock_irqsave(lock, flags) \
-	__cond_lock(lock, ({ \
-		local_irq_save(flags); \
-		_raw_spin_trylock(lock) ? \
-		1 : ({ local_irq_restore(flags); 0; }); \
-	}))
+#define raw_spin_trylock_irqsave(lock, flags) _raw_spin_trylock_irqsave(lock, &(flags))
 
 #ifndef CONFIG_PREEMPT_RT
 /* Include rwlock functions for !RT */
@@ -433,8 +422,12 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	return raw_spin_trylock_irq(&lock->rlock);
 }
 
-#define spin_trylock_irqsave(lock, flags) \
-	__cond_lock(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags))
+static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock) __no_capability_analysis
+{
+	return raw_spin_trylock_irqsave(spinlock_check(lock), *flags);
+}
+#define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags))
 
 /**
  * spin_is_locked() - Check whether a spinlock is locked.
@@ -512,23 +505,17 @@ static inline int rwlock_needbreak(rwlock_t *lock)
  * Decrements @atomic by 1. If the result is 0, returns true and locks
 * @lock. Returns false for all other cases.
 */
-extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
-#define atomic_dec_and_lock(atomic, lock) \
-	__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
+extern int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) __cond_acquires(true, lock);
 
 extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
-					unsigned long *flags);
-#define atomic_dec_and_lock_irqsave(atomic, lock, flags) \
-	__cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)))
+					unsigned long *flags) __cond_acquires(true, lock);
+#define atomic_dec_and_lock_irqsave(atomic, lock, flags) _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))
 
-extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock);
-#define atomic_dec_and_raw_lock(atomic, lock) \
-	__cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock))
+extern int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) __cond_acquires(true, lock);
 
 extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
-					    unsigned long *flags);
-#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \
-	__cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)))
+					    unsigned long *flags) __cond_acquires(true, lock);
+#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))
 
 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask,
			     size_t max_size, unsigned int cpu_mult,
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index a77b76003ebb..1b1896595cbc 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -95,6 +95,26 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 	return 0;
 }
 
+static __always_inline bool _raw_spin_trylock_irq(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	local_irq_disable();
+	if (_raw_spin_trylock(lock))
+		return true;
+	local_irq_enable();
+	return false;
+}
+
+static __always_inline bool _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	local_irq_save(*flags);
+	if (_raw_spin_trylock(lock))
+		return true;
+	local_irq_restore(*flags);
+	return false;
+}
+
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
  * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index 018f5aabc1be..a9d5c7c66e03 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -24,14 +24,11 @@
  * flags straight, to suppress compiler warnings of unused lock
  * variables, and to add the proper checker annotations:
  */
-#define ___LOCK_void(lock) \
-	do { (void)(lock); } while (0)
-
 #define ___LOCK_(lock) \
-	do { __acquire(lock); ___LOCK_void(lock); } while (0)
+	do { __acquire(lock); (void)(lock); } while (0)
 
 #define ___LOCK_shared(lock) \
-	do { __acquire_shared(lock); ___LOCK_void(lock); } while (0)
+	do { __acquire_shared(lock); (void)(lock); } while (0)
 
 #define __LOCK(lock, ...) \
 	do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0)
@@ -78,10 +75,56 @@
 #define _raw_spin_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
 #define _raw_read_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags, shared)
 #define _raw_write_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
-#define _raw_spin_trylock(lock)		({ __LOCK(lock, void); 1; })
-#define _raw_read_trylock(lock)		({ __LOCK(lock, void); 1; })
-#define _raw_write_trylock(lock)	({ __LOCK(lock, void); 1; })
-#define _raw_spin_trylock_bh(lock)	({ __LOCK_BH(lock, void); 1; })
+
+static __always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_bh(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK_BH(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_irq(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQ(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQSAVE(lock, *(flags));
+	return 1;
+}
+
+static __always_inline int _raw_read_trylock(rwlock_t *lock)
+	__cond_acquires_shared(true, lock)
+{
+	__LOCK(lock, shared);
+	return 1;
+}
+
+static __always_inline int _raw_write_trylock(rwlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK(lock);
+	return 1;
+}
+
+static __always_inline int _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQSAVE(lock, *(flags));
+	return 1;
+}
+
 #define _raw_spin_unlock(lock)			__UNLOCK(lock)
 #define _raw_read_unlock(lock)			__UNLOCK(lock, shared)
 #define _raw_write_unlock(lock)		__UNLOCK(lock)
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 9688675b7536..23760f0e35e2 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -37,8 +37,8 @@ extern void rt_spin_lock_nested(spinlock_t *lock, int subclass) __acquires(lock);
 extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock);
 extern void rt_spin_unlock(spinlock_t *lock) __releases(lock);
 extern void rt_spin_lock_unlock(spinlock_t *lock);
-extern int rt_spin_trylock_bh(spinlock_t *lock);
-extern int rt_spin_trylock(spinlock_t *lock);
+extern int rt_spin_trylock_bh(spinlock_t *lock) __cond_acquires(true, lock);
+extern int rt_spin_trylock(spinlock_t *lock) __cond_acquires(true, lock);
 
 static __always_inline void spin_lock(spinlock_t *lock)
 	__acquires(lock)
@@ -130,21 +130,19 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 	rt_spin_unlock(lock);
 }
 
-#define spin_trylock(lock)	\
-	__cond_lock(lock, rt_spin_trylock(lock))
+#define spin_trylock(lock)	rt_spin_trylock(lock)
 
-#define spin_trylock_bh(lock)	\
-	__cond_lock(lock, rt_spin_trylock_bh(lock))
+#define spin_trylock_bh(lock)	rt_spin_trylock_bh(lock)
 
-#define spin_trylock_irq(lock)	\
-	__cond_lock(lock, rt_spin_trylock(lock))
+#define spin_trylock_irq(lock)	rt_spin_trylock(lock)
 
-#define spin_trylock_irqsave(lock, flags)	\
-	__cond_lock(lock, ({			\
-		typecheck(unsigned long, flags);	\
-		flags = 0;			\
-		rt_spin_trylock(lock);		\
-	}))
+static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	*flags = 0;
+	return rt_spin_trylock(lock);
+}
+#define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags))
 
 #define spin_is_contended(lock)		(((void)(lock), 0))
 
diff --git a/kernel/signal.c b/kernel/signal.c
index fe9190d84f28..9ff96a341e42 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1355,8 +1355,8 @@ int zap_other_threads(struct task_struct *p)
 	return count;
 }
 
-struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
-					   unsigned long *flags)
+struct sighand_struct *lock_task_sighand(struct task_struct *tsk,
+					 unsigned long *flags)
 {
 	struct sighand_struct *sighand;
 
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 8b582174b1f9..3c043330aa21 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -66,14 +66,7 @@ static const struct k_clock clock_realtime, clock_monotonic;
 #error "SIGEV_THREAD_ID must not share bit with other SIGEV values!"
 #endif
 
-static struct k_itimer *__lock_timer(timer_t timer_id);
-
-#define lock_timer(tid)						\
-({	struct k_itimer *__timr;				\
-	__cond_lock(&__timr->it_lock, __timr = __lock_timer(tid));	\
-	__timr;							\
-})
-
+static struct k_itimer *lock_timer(timer_t timer_id);
 static inline void unlock_timer(struct k_itimer *timr)
 {
 	if (likely((timr)))
@@ -85,7 +78,7 @@ static inline void unlock_timer(struct k_itimer *timr)
 
 #define scoped_timer				(scope)
 
-DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), __lock_timer(id), timer_t id);
+DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), lock_timer(id), timer_t id);
 DEFINE_CLASS_IS_COND_GUARD(lock_timer);
 
 static struct timer_hash_bucket *hash_bucket(struct signal_struct *sig, unsigned int nr)
@@ -601,7 +594,7 @@ COMPAT_SYSCALL_DEFINE3(timer_create, clockid_t, which_clock,
 }
 #endif
 
-static struct k_itimer *__lock_timer(timer_t timer_id)
+static struct k_itimer *lock_timer(timer_t timer_id)
 {
 	struct k_itimer *timr;
 
diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index 1dcca8f2e194..8c7c398fd770 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -18,7 +18,7 @@
  * because the spin-lock and the decrement must be
  * "atomic".
  */
-int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
+int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
@@ -32,7 +32,7 @@ int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 	return 0;
 }
 
-EXPORT_SYMBOL(_atomic_dec_and_lock);
+EXPORT_SYMBOL(atomic_dec_and_lock);
 
 int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
 				 unsigned long *flags)
@@ -50,7 +50,7 @@ int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
 }
 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave);
 
-int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
+int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
@@ -63,7 +63,7 @@ int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
 	raw_spin_unlock(lock);
 	return 0;
 }
-EXPORT_SYMBOL(_atomic_dec_and_raw_lock);
+EXPORT_SYMBOL(atomic_dec_and_raw_lock);
 
 int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
 				     unsigned long *flags)
diff --git a/mm/memory.c b/mm/memory.c
index 0ba4f6b71847..454baefb4989 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2086,8 +2086,8 @@ static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr)
 	return pmd;
 }
 
-pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
-			spinlock_t **ptl)
+pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
+		      spinlock_t **ptl)
 {
 	pmd_t *pmd = walk_to_pmd(mm, addr);
 
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 567e2d084071..808f18d68279 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -278,7 +278,7 @@ static unsigned long pmdp_get_lockless_start(void) { return 0; }
 static void pmdp_get_lockless_end(unsigned long irqflags) { }
 #endif
 
-pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
+pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
 	unsigned long irqflags;
 	pmd_t pmdval;
@@ -330,13 +330,12 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 }
 
 /*
- * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation
- * __pte_offset_map_lock() below, is usually called with the pmd pointer for
- * addr, reached by walking down the mm's pgd, p4d, pud for addr: either while
- * holding mmap_lock or vma lock for read or for write; or in truncate or rmap
- * context, while holding file's i_mmap_lock or anon_vma lock for read (or for
- * write). In a few cases, it may be used with pmd pointing to a pmd_t already
- * copied to or constructed on the stack.
+ * pte_offset_map_lock(mm, pmd, addr, ptlp) is usually called with the pmd
+ * pointer for addr, reached by walking down the mm's pgd, p4d, pud for addr:
+ * either while holding mmap_lock or vma lock for read or for write; or in
+ * truncate or rmap context, while holding file's i_mmap_lock or anon_vma lock
+ * for read (or for write). In a few cases, it may be used with pmd pointing to
+ * a pmd_t already copied to or constructed on the stack.
 *
 * When successful, it returns the pte pointer for addr, with its page table
 * kmapped if necessary (when CONFIG_HIGHPTE), and locked against concurrent
@@ -387,8 +386,8 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 * table, and may not use RCU at all: "outsiders" like khugepaged should avoid
 * pte_offset_map() and co once the vma is detached from mm or mm_users is zero.
 */
-pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp)
+pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
+			   unsigned long addr, spinlock_t **ptlp)
 {
 	spinlock_t *ptl;
 	pmd_t pmdval;
diff --git a/tools/include/linux/compiler_types.h b/tools/include/linux/compiler_types.h
index d09f9dc172a4..067a5b4e0f7b 100644
--- a/tools/include/linux/compiler_types.h
+++ b/tools/include/linux/compiler_types.h
@@ -20,7 +20,6 @@
 # define __releases(x)	__attribute__((context(x,1,0)))
 # define __acquire(x)	__context__(x,1)
 # define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 #else /* __CHECKER__ */
 /* context/locking */
 # define __must_hold(x)
@@ -28,7 +27,6 @@
 # define __releases(x)
 # define __acquire(x)	(void)0
 # define __release(x)	(void)0
-# define __cond_lock(x,c) (c)
 #endif /* __CHECKER__ */
 
 /* Compiler specific macros. */
-- 
2.51.0.384.g4c02a37b29-goog