From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:50 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-2-elver@google.com>
Subject: [PATCH v5 01/36] compiler_types: Move lock checking attributes to compiler-context-analysis.h
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, Chris Li, "Paul E. McKenney",
 Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Christoph Hellwig,
 Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker, Greg Kroah-Hartman,
 Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes, Johannes Berg,
 Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda,
 Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt,
 Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long,
 kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org,
 linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The conditional definition of lock checking macros and attributes is about
to become more complex. Factor them out into their own header for better
readability, and to make it obvious which features are supported by which
mode (currently only Sparse). This is the first step towards generalizing
to "context analysis".

No functional change intended.
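For illustration, a minimal sketch of how these Sparse annotations are used
(the lock and the functions below are hypothetical, not part of this patch):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(foo_lock);
	static int foo_count;

	/* Tells Sparse this function enters the foo_lock context. */
	static void foo_lock_acquire(void) __acquires(&foo_lock)
	{
		spin_lock(&foo_lock);
	}

	/* Tells Sparse this function exits the foo_lock context. */
	static void foo_lock_release(void) __releases(&foo_lock)
	{
		spin_unlock(&foo_lock);
	}

	/* Callers must already hold foo_lock. */
	static void foo_inc(void) __must_hold(&foo_lock)
	{
		foo_count++;
	}

Building with `make C=1` then warns about imbalanced contexts, e.g. a path
that calls foo_lock_acquire() twice without an intervening
foo_lock_release().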
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Bart Van Assche
---
v4:
 * Rename capability -> context analysis.
---
 include/linux/compiler-context-analysis.h | 32 +++++++++++++++++++++++
 include/linux/compiler_types.h            | 18 ++----------
 2 files changed, 34 insertions(+), 16 deletions(-)
 create mode 100644 include/linux/compiler-context-analysis.h

diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
new file mode 100644
index 000000000000..f8af63045281
--- /dev/null
+++ b/include/linux/compiler-context-analysis.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros and attributes for compiler-based static context analysis.
+ */
+
+#ifndef _LINUX_COMPILER_CONTEXT_ANALYSIS_H
+#define _LINUX_COMPILER_CONTEXT_ANALYSIS_H
+
+#ifdef __CHECKER__
+
+/* Sparse context/lock checking support. */
+# define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __acquires(x)		__attribute__((context(x,0,1)))
+# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
+# define __releases(x)		__attribute__((context(x,1,0)))
+# define __acquire(x)		__context__(x,1)
+# define __release(x)		__context__(x,-1)
+# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+
+#else /* !__CHECKER__ */
+
+# define __must_hold(x)
+# define __acquires(x)
+# define __cond_acquires(x)
+# define __releases(x)
+# define __acquire(x)	(void)0
+# define __release(x)	(void)0
+# define __cond_lock(x, c) (c)
+
+#endif /* __CHECKER__ */
+
+#endif /* _LINUX_COMPILER_CONTEXT_ANALYSIS_H */

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 1280693766b9..ddada1ed88ea 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -41,6 +41,8 @@
 # define BTF_TYPE_TAG(value) /* nothing */
 #endif
 
+#include <linux/compiler-context-analysis.h>
+
 /* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */
 #ifdef __CHECKER__
 /* address spaces */
@@ -51,14 +53,6 @@
 # define __rcu		__attribute__((noderef, address_space(__rcu)))
 static inline void __chk_user_ptr(const volatile void __user *ptr) { }
 static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
-/* context/locking */
-# define __must_hold(x)		__attribute__((context(x,1,1)))
-# define __acquires(x)		__attribute__((context(x,0,1)))
-# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
-# define __releases(x)		__attribute__((context(x,1,0)))
-# define __acquire(x)		__context__(x,1)
-# define __release(x)		__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 /* other */
 # define __force	__attribute__((force))
 # define __nocast	__attribute__((nocast))
@@ -79,14 +73,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 
 # define __chk_user_ptr(x)	(void)0
 # define __chk_io_ptr(x)	(void)0
-/* context/locking */
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x,c)	(c)
 /* other */
 # define __force
 # define __nocast
-- 
2.52.0.322.g1dd061c0dc-goog
From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:51 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-3-elver@google.com>
Subject: [PATCH v5 02/36] compiler-context-analysis: Add infrastructure for Context Analysis with Clang
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

Context Analysis is a language extension, which enables statically
checking that required contexts are active (or inactive), by acquiring and
releasing user-definable "context locks". An obvious application is
lock-safety checking for the kernel's various synchronization primitives
(each of which represents a "context lock"), and checking that locking
rules are not violated.

Clang originally called the feature "Thread Safety Analysis" [1]. This was
later changed and the feature became more flexible, gaining the ability to
define custom "capabilities". Its foundations can be found in "Capability
Systems" [2], used to specify that the permissibility of operations
depends on some "capability" being held (or not held).

Because the feature is not just able to express "capabilities" related to
synchronization primitives, and "capability" is already overloaded in the
kernel, the naming chosen for the kernel departs from Clang's "Thread
Safety" and "capability" nomenclature; we refer to the feature as "Context
Analysis" to avoid confusion. The internal implementation still references
Clang's terminology in a few places, such as `-Wthread-safety` being the
warning option, which also still appears in diagnostic messages.

[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
[2] https://www.cs.cornell.edu/talc/papers/capabilities.pdf

See more details in the kernel-doc documentation added in this and
subsequent changes. Clang version 22+ is required.
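To illustrate the programming model, a minimal sketch (the type and
functions are hypothetical, and it assumes spinlock_t has been wired up as
a context lock, which later patches in this series do):

	struct counter {
		spinlock_t lock;
		long value __guarded_by(&lock);
	};

	static void counter_inc(struct counter *c)
	{
		spin_lock(&c->lock);
		c->value++;	/* OK: lock is held */
		spin_unlock(&c->lock);
	}

	static void counter_broken(struct counter *c)
	{
		c->value++;	/* warning: requires holding 'c->lock' */
	}

With the analysis enabled for the translation unit, Clang flags the second
function at compile time (the warning text shown is approximate).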
McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Christoph Hellwig , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ian Rogers , Jann Horn , Joel Fernandes , Johannes Berg , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Lukas Bulwahn , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Thomas Graf , Uladzislau Rezki , Waiman Long , kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Context Analysis is a language extension, which enables statically checking that required contexts are active (or inactive), by acquiring and releasing user-definable "context locks". An obvious application is lock-safety checking for the kernel's various synchronization primitives (each of which represents a "context lock"), and checking that locking rules are not violated. Clang originally called the feature "Thread Safety Analysis" [1]. This was later changed and the feature became more flexible, gaining the ability to define custom "capabilities". Its foundations can be found in "Capability Systems" [2], used to specify the permissibility of operations to depend on some "capability" being held (or not held). Because the feature is not just able to express "capabilities" related to synchronization primitives, and "capability" is already overloaded in the kernel, the naming chosen for the kernel departs from Clang's "Thread Safety" and "capability" nomenclature; we refer to the feature as "Context Analysis" to avoid confusion. The internal implementation still makes references to Clang's terminology in a few places, such as `-Wthread-safety` being the warning option that also still appears in diagnostic messages. [1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html [2] https://www.cs.cornell.edu/talc/papers/capabilities.pdf See more details in the kernel-doc documentation added in this and subsequent changes. Clang version 22+ is required. Signed-off-by: Marco Elver --- v5: * Rename "context guard" -> "context lock". * Better document Clang's `assert_capability` attribute. v4: * Rename capability -> context analysis. v3: * Require Clang 22 or later (reentrant capabilities, basic alias analysis). * Rename __assert_cap/__asserts_cap -> __assume_cap/__assumes_cap (suggeste= d by Peter). * Add __acquire_ret and __acquire_shared_ret helper macros - can be used to define function-like macros that return objects which contains a held capabilities. Works now because of capability alias analysis. * Add capability_unsafe_alias() helper, where the analysis rightfully points out we're doing strange things with aliases but we don't care. * Support multi-argument attributes. v2: * New -Wthread-safety feature rename to -Wthread-safety-pointer (was -Wthread-safety-addressof). * Introduce __capability_unsafe() function attribute. * Rename __var_guarded_by to simply __guarded_by. The initial idea was to be explicit if the variable or pointed-to data is guarded by, but having a shorter attribute name is likely better long-term. 
 * Rename __ref_guarded_by to __pt_guarded_by (pointed-to guarded by).
---
 Makefile                                  |   1 +
 include/linux/compiler-context-analysis.h | 464 +++++++++++++++++++++-
 lib/Kconfig.debug                         |  30 ++
 scripts/Makefile.context-analysis         |   7 +
 scripts/Makefile.lib                      |  10 +
 5 files changed, 505 insertions(+), 7 deletions(-)
 create mode 100644 scripts/Makefile.context-analysis

diff --git a/Makefile b/Makefile
index e404e4767944..d4c2aa2df79c 100644
--- a/Makefile
+++ b/Makefile
@@ -1118,6 +1118,7 @@ include-$(CONFIG_RANDSTRUCT) += scripts/Makefile.randstruct
 include-$(CONFIG_KSTACK_ERASE) += scripts/Makefile.kstack_erase
 include-$(CONFIG_AUTOFDO_CLANG) += scripts/Makefile.autofdo
 include-$(CONFIG_PROPELLER_CLANG) += scripts/Makefile.propeller
+include-$(CONFIG_WARN_CONTEXT_ANALYSIS) += scripts/Makefile.context-analysis
 include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins
 
 include $(addprefix $(srctree)/, $(include-y))

diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
index f8af63045281..afff910d8930 100644
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -6,27 +6,477 @@
 #ifndef _LINUX_COMPILER_CONTEXT_ANALYSIS_H
 #define _LINUX_COMPILER_CONTEXT_ANALYSIS_H
 
+#if defined(WARN_CONTEXT_ANALYSIS)
+
+/*
+ * These attributes define new context lock (Clang: capability) types.
+ * Internal only.
+ */
+# define __ctx_lock_type(name)		__attribute__((capability(#name)))
+# define __reentrant_ctx_lock		__attribute__((reentrant_capability))
+# define __acquires_ctx_lock(...)	__attribute__((acquire_capability(__VA_ARGS__)))
+# define __acquires_shared_ctx_lock(...) __attribute__((acquire_shared_capability(__VA_ARGS__)))
+# define __try_acquires_ctx_lock(ret, var) __attribute__((try_acquire_capability(ret, var)))
+# define __try_acquires_shared_ctx_lock(ret, var) __attribute__((try_acquire_shared_capability(ret, var)))
+# define __releases_ctx_lock(...)	__attribute__((release_capability(__VA_ARGS__)))
+# define __releases_shared_ctx_lock(...) __attribute__((release_shared_capability(__VA_ARGS__)))
+# define __returns_ctx_lock(var)	__attribute__((lock_returned(var)))
+
+/*
+ * The below are used to annotate code being checked. Internal only.
+ */
+# define __excludes_ctx_lock(...)	__attribute__((locks_excluded(__VA_ARGS__)))
+# define __requires_ctx_lock(...)	__attribute__((requires_capability(__VA_ARGS__)))
+# define __requires_shared_ctx_lock(...) __attribute__((requires_shared_capability(__VA_ARGS__)))
+
+/*
+ * The "assert_capability" attribute is a bit confusingly named. It does not
+ * generate a check. Instead, it tells the analysis to *assume* the capability
+ * is held. This is used for:
+ *
+ * 1. Augmenting runtime assertions, that can then help with patterns beyond
+ *    the compiler's static reasoning abilities.
+ *
+ * 2. Initialization of context locks, so we can access guarded variables right
+ *    after initialization (nothing else should access the same object yet).
+ */
+# define __assumes_ctx_lock(...)	__attribute__((assert_capability(__VA_ARGS__)))
+# define __assumes_shared_ctx_lock(...) __attribute__((assert_shared_capability(__VA_ARGS__)))
+
+/**
+ * __guarded_by - struct member and globals attribute, declares variable
+ *                only accessible within active context
+ *
+ * Declares that the struct member or global variable is only accessible within
+ * the context entered by the given context lock.
+ * Read operations on the data require shared access, while write operations
+ * require exclusive access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long counter __guarded_by(&lock);
+ *	};
+ */
+# define __guarded_by(...) __attribute__((guarded_by(__VA_ARGS__)))
+
+/**
+ * __pt_guarded_by - struct member and globals attribute, declares pointed-to
+ *                   data only accessible within active context
+ *
+ * Declares that the data pointed to by the struct member pointer or global
+ * pointer is only accessible within the context entered by the given context
+ * lock. Read operations on the data require shared access, while write
+ * operations require exclusive access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long *counter __pt_guarded_by(&lock);
+ *	};
+ */
+# define __pt_guarded_by(...) __attribute__((pt_guarded_by(__VA_ARGS__)))
+
+/**
+ * context_lock_struct() - declare or define a context lock struct
+ * @name: struct name
+ *
+ * Helper to declare or define a struct type that is also a context lock.
+ *
+ * .. code-block:: c
+ *
+ *	context_lock_struct(my_handle) {
+ *		int foo;
+ *		long bar;
+ *	};
+ *
+ *	struct some_state {
+ *		...
+ *	};
+ *	// ... declared elsewhere ...
+ *	context_lock_struct(some_state);
+ *
+ * Note: The implementation defines several helper functions that can acquire
+ * and release the context lock.
+ */
+# define context_lock_struct(name, ...)						\
+	struct __ctx_lock_type(name) __VA_ARGS__ name;				\
+	static __always_inline void __acquire_ctx_lock(const struct name *var)	\
+		__attribute__((overloadable)) __no_context_analysis __acquires_ctx_lock(var) { } \
+	static __always_inline void __acquire_shared_ctx_lock(const struct name *var) \
+		__attribute__((overloadable)) __no_context_analysis __acquires_shared_ctx_lock(var) { } \
+	static __always_inline bool __try_acquire_ctx_lock(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_context_analysis __try_acquires_ctx_lock(1, var) \
+	{ return ret; }								\
+	static __always_inline bool __try_acquire_shared_ctx_lock(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_context_analysis __try_acquires_shared_ctx_lock(1, var) \
+	{ return ret; }								\
+	static __always_inline void __release_ctx_lock(const struct name *var)	\
+		__attribute__((overloadable)) __no_context_analysis __releases_ctx_lock(var) { } \
+	static __always_inline void __release_shared_ctx_lock(const struct name *var) \
+		__attribute__((overloadable)) __no_context_analysis __releases_shared_ctx_lock(var) { } \
+	static __always_inline void __assume_ctx_lock(const struct name *var)	\
+		__attribute__((overloadable)) __assumes_ctx_lock(var) { }	\
+	static __always_inline void __assume_shared_ctx_lock(const struct name *var) \
+		__attribute__((overloadable)) __assumes_shared_ctx_lock(var) { } \
+	struct name
+
+/**
+ * disable_context_analysis() - disables context analysis
+ *
+ * Disables context analysis. Must be paired with a later
+ * enable_context_analysis().
+ */
+# define disable_context_analysis()				\
+	__diag_push();						\
+	__diag_ignore_all("-Wunknown-warning-option", "")	\
+	__diag_ignore_all("-Wthread-safety", "")		\
+	__diag_ignore_all("-Wthread-safety-pointer", "")
+
+/**
+ * enable_context_analysis() - re-enables context analysis
+ *
+ * Re-enables context analysis. Must be paired with a prior
+ * disable_context_analysis().
+ */
+# define enable_context_analysis() __diag_pop()
+
+/**
+ * __no_context_analysis - function attribute, disables context analysis
+ *
+ * Function attribute denoting that context analysis is disabled for the
+ * whole function. Prefer use of `context_unsafe()` where possible.
+ */
+# define __no_context_analysis __attribute__((no_thread_safety_analysis))
+
+#else /* !WARN_CONTEXT_ANALYSIS */
+
+# define __ctx_lock_type(name)
+# define __reentrant_ctx_lock
+# define __acquires_ctx_lock(...)
+# define __acquires_shared_ctx_lock(...)
+# define __try_acquires_ctx_lock(ret, var)
+# define __try_acquires_shared_ctx_lock(ret, var)
+# define __releases_ctx_lock(...)
+# define __releases_shared_ctx_lock(...)
+# define __assumes_ctx_lock(...)
+# define __assumes_shared_ctx_lock(...)
+# define __returns_ctx_lock(var)
+# define __guarded_by(...)
+# define __pt_guarded_by(...)
+# define __excludes_ctx_lock(...)
+# define __requires_ctx_lock(...)
+# define __requires_shared_ctx_lock(...)
+# define __acquire_ctx_lock(var)		do { } while (0)
+# define __acquire_shared_ctx_lock(var)		do { } while (0)
+# define __try_acquire_ctx_lock(var, ret)	(ret)
+# define __try_acquire_shared_ctx_lock(var, ret) (ret)
+# define __release_ctx_lock(var)		do { } while (0)
+# define __release_shared_ctx_lock(var)		do { } while (0)
+# define __assume_ctx_lock(var)			do { (void)(var); } while (0)
+# define __assume_shared_ctx_lock(var)		do { (void)(var); } while (0)
+# define context_lock_struct(name, ...)		struct __VA_ARGS__ name
+# define disable_context_analysis()
+# define enable_context_analysis()
+# define __no_context_analysis
+
+#endif /* WARN_CONTEXT_ANALYSIS */
+
+/**
+ * context_unsafe() - disable context checking for contained code
+ *
+ * Disables context checking for the contained statements or expression.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_data {
+ *		spinlock_t lock;
+ *		int counter __guarded_by(&lock);
+ *	};
+ *
+ *	int foo(struct some_data *d)
+ *	{
+ *		// ...
+ *		// other code that is still checked ...
+ *		// ...
+ *		return context_unsafe(d->counter);
+ *	}
+ */
+#define context_unsafe(...)			\
+({						\
+	disable_context_analysis();		\
+	__VA_ARGS__;				\
+	enable_context_analysis()		\
+})
+
+/**
+ * __context_unsafe() - function attribute, disable context checking
+ * @comment: comment explaining why opt-out is safe
+ *
+ * Function attribute denoting that context analysis is disabled for the
+ * whole function. Forces adding an inline comment as argument.
+ */
+#define __context_unsafe(comment) __no_context_analysis
+
+/**
+ * context_unsafe_alias() - helper to insert a context lock "alias barrier"
+ * @p: pointer aliasing a context lock or object containing context locks
+ *
+ * No-op function that acts as a "context lock alias barrier", where the
+ * analysis rightfully detects that we're switching aliases, but the switch is
+ * safe, albeit beyond the analysis's reasoning abilities.
+ *
+ * This should be inserted before the first use of such an alias.
+ *
+ * Implementation Note: The compiler ignores aliases that may be reassigned but
+ * whose value cannot be determined (e.g. when passing a non-const pointer to
+ * an alias as a function argument).
+ */
+#define context_unsafe_alias(p) _context_unsafe_alias((void **)&(p))
+static inline void _context_unsafe_alias(void **p) { }
+
+/**
+ * token_context_lock() - declare an abstract global context lock instance
+ * @name: token context lock name
+ *
+ * Helper that declares an abstract global context lock instance @name, not
+ * backed by a real data structure (linker error if accidentally referenced).
+ * The type name is `__ctx_lock_@name`.
+ */
+#define token_context_lock(name, ...)				\
+	context_lock_struct(__ctx_lock_##name, ##__VA_ARGS__) {}; \
+	extern const struct __ctx_lock_##name *name
+
+/**
+ * token_context_lock_instance() - declare another instance of a global context lock
+ * @ctx: token context lock previously declared with token_context_lock()
+ * @name: name of additional global context lock instance
+ *
+ * Helper that declares an additional instance @name of the same token context
+ * lock class @ctx. This is helpful where multiple related token contexts are
+ * declared, to allow using the same underlying type (`__ctx_lock_@ctx`) as
+ * function arguments.
+ */
+#define token_context_lock_instance(ctx, name)			\
+	extern const struct __ctx_lock_##ctx *name
+
+/*
+ * Common keywords for static context analysis. Both Clang's "capability
+ * analysis" and Sparse's "context tracking" are currently supported.
+ */
 #ifdef __CHECKER__
 
 /* Sparse context/lock checking support. */
 # define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __must_not_hold(x)
 # define __acquires(x)		__attribute__((context(x,0,1)))
 # define __cond_acquires(x)	__attribute__((context(x,0,-1)))
 # define __releases(x)		__attribute__((context(x,1,0)))
 # define __acquire(x)		__context__(x,1)
 # define __release(x)		__context__(x,-1)
 # define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+/* For Sparse, there's no distinction between exclusive and shared locks. */
+# define __must_hold_shared	__must_hold
+# define __acquires_shared	__acquires
+# define __cond_acquires_shared	__cond_acquires
+# define __releases_shared	__releases
+# define __acquire_shared	__acquire
+# define __release_shared	__release
+# define __cond_lock_shared	__cond_lock
 
 #else /* !__CHECKER__ */
 
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x, c) (c)
+/**
+ * __must_hold() - function attribute, caller must hold exclusive context lock
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given context
+ * lock instance @x exclusively.
+ */
+# define __must_hold(x) __requires_ctx_lock(x)
+
+/**
+ * __must_not_hold() - function attribute, caller must not hold context lock
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the caller must not hold the given context
+ * lock instance @x.
+ */
+# define __must_not_hold(x) __excludes_ctx_lock(x)
+
+/**
+ * __acquires() - function attribute, function acquires context lock exclusively
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function acquires the given context
+ * lock instance @x exclusively, but does not release it.
+ */
+# define __acquires(x) __acquires_ctx_lock(x)
+
+/**
+ * __cond_acquires() - function attribute, function conditionally
+ *                     acquires a context lock exclusively
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given context lock instance @x exclusively, but does not release it.
+ */
+# define __cond_acquires(x) __try_acquires_ctx_lock(1, x)
+
+/**
+ * __releases() - function attribute, function releases a context lock exclusively
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function releases the given context
+ * lock instance @x exclusively. The associated context must be active on
+ * entry.
+ */
+# define __releases(x) __releases_ctx_lock(x)
+
+/**
+ * __acquire() - function to acquire context lock exclusively
+ * @x: context lock instance pointer
+ *
+ * No-op function that acquires the given context lock instance @x exclusively.
+ */
+# define __acquire(x) __acquire_ctx_lock(x)
+
+/**
+ * __release() - function to release context lock exclusively
+ * @x: context lock instance pointer
+ *
+ * No-op function that releases the given context lock instance @x.
+ */
+# define __release(x) __release_ctx_lock(x)
+
+/**
+ * __cond_lock() - function that conditionally acquires a context lock
+ *                 exclusively
+ * @x: context lock instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires context lock instance @x
+ * exclusively, if the boolean expression @c is true. The result of @c is the
+ * return value; for example:
+ *
+ * .. code-block:: c
+ *
+ *	#define spin_trylock(lock) __cond_lock(&lock, _spin_trylock(&lock))
+ */
+# define __cond_lock(x, c) __try_acquire_ctx_lock(x, c)
+
+/**
+ * __must_hold_shared() - function attribute, caller must hold shared context lock
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given context
+ * lock instance @x with shared access.
+ */
+# define __must_hold_shared(x) __requires_shared_ctx_lock(x)
+
+/**
+ * __acquires_shared() - function attribute, function acquires context lock shared
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * context lock instance @x with shared access, but does not release it.
+ */
+# define __acquires_shared(x) __acquires_shared_ctx_lock(x)
+
+/**
+ * __cond_acquires_shared() - function attribute, function conditionally
+ *                            acquires a context lock shared
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given context lock instance @x with shared access, but does not release it.
+ */
+# define __cond_acquires_shared(x) __try_acquires_shared_ctx_lock(1, x)
+
+/**
+ * __releases_shared() - function attribute, function releases a
+ *                       context lock shared
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function releases the given context
+ * lock instance @x with shared access. The associated context must be active
+ * on entry.
+ */
+# define __releases_shared(x) __releases_shared_ctx_lock(x)
+
+/**
+ * __acquire_shared() - function to acquire context lock shared
+ * @x: context lock instance pointer
+ *
+ * No-op function that acquires the given context lock instance @x with shared
+ * access.
+ */
+# define __acquire_shared(x) __acquire_shared_ctx_lock(x)
+
+/**
+ * __release_shared() - function to release context lock shared
+ * @x: context lock instance pointer
+ *
+ * No-op function that releases the given context lock instance @x with shared
+ * access.
+ */
+# define __release_shared(x) __release_shared_ctx_lock(x)
+
+/**
+ * __cond_lock_shared() - function that conditionally acquires a context lock shared
+ * @x: context lock instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires context lock instance @x with
+ * shared access, if the boolean expression @c is true. The result of @c is the
+ * return value.
+ */
+# define __cond_lock_shared(x, c) __try_acquire_shared_ctx_lock(x, c)
 
 #endif /* __CHECKER__ */
 
+/**
+ * __acquire_ret() - helper to acquire context lock of return value
+ * @call: call expression
+ * @ret_expr: acquire expression that uses __ret
+ */
+#define __acquire_ret(call, ret_expr)		\
+	({					\
+		__auto_type __ret = call;	\
+		__acquire(ret_expr);		\
+		__ret;				\
+	})
+
+/**
+ * __acquire_shared_ret() - helper to acquire context lock shared of return value
+ * @call: call expression
+ * @ret_expr: acquire shared expression that uses __ret
+ */
+#define __acquire_shared_ret(call, ret_expr)	\
+	({					\
+		__auto_type __ret = call;	\
+		__acquire_shared(ret_expr);	\
+		__ret;				\
+	})
+
+/*
+ * Attributes to mark functions returning acquired context locks.
+ *
+ * This is purely cosmetic to help readability, and should be used with the
+ * above macros as follows:
+ *
+ *	struct foo { spinlock_t lock; ... };
+ *	...
+ *	#define myfunc(...) __acquire_ret(_myfunc(__VA_ARGS__), &__ret->lock)
+ *	struct foo *_myfunc(int bar) __acquires_ret;
+ *	...
+ */
+#define __acquires_ret __no_context_analysis
+#define __acquires_shared_ret __no_context_analysis
+
 #endif /* _LINUX_COMPILER_CONTEXT_ANALYSIS_H */

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ba36939fda79..cd557e7653a4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -621,6 +621,36 @@ config DEBUG_FORCE_WEAK_PER_CPU
 	  To ensure that generic code follows the above rules, this option
 	  forces all percpu variables to be defined as weak.
 
+config WARN_CONTEXT_ANALYSIS
+	bool "Compiler context-analysis warnings"
+	depends on CC_IS_CLANG && CLANG_VERSION >= 220000
+	# Branch profiling re-defines "if", which messes with the compiler's
+	# ability to analyze __cond_acquires(..), resulting in false positives.
+	depends on !TRACE_BRANCH_PROFILING
+	default y
+	help
+	  Context Analysis is a language extension, which enables statically
+	  checking that required contexts are active (or inactive) by acquiring
+	  and releasing user-definable "context locks".
+
+	  Clang's name of the feature is "Thread Safety Analysis". Requires
+	  Clang 22 or later.
+
+	  Produces warnings by default. Select CONFIG_WERROR if you wish to
+	  turn these warnings into errors.
+
+	  For more details, see Documentation/dev-tools/context-analysis.rst.
+
+config WARN_CONTEXT_ANALYSIS_ALL
+	bool "Enable context analysis for all source files"
+	depends on WARN_CONTEXT_ANALYSIS
+	depends on EXPERT && !COMPILE_TEST
+	help
+	  Enable tree-wide context analysis. This is likely to produce a
+	  large number of false positives - enable at your own risk.
+
+	  If unsure, say N.
+
 endmenu # "Compiler options"
 
 menu "Generic Kernel Debugging Instruments"

diff --git a/scripts/Makefile.context-analysis b/scripts/Makefile.context-analysis
new file mode 100644
index 000000000000..70549f7fae1a
--- /dev/null
+++ b/scripts/Makefile.context-analysis
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+
+context-analysis-cflags := -DWARN_CONTEXT_ANALYSIS \
+			   -fexperimental-late-parse-attributes -Wthread-safety \
+			   -Wthread-safety-pointer -Wthread-safety-beta
+
+export CFLAGS_CONTEXT_ANALYSIS := $(context-analysis-cflags)

diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 28a1c08e3b22..e429d68b8594 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -105,6 +105,16 @@ _c_flags += $(if $(patsubst n%,, \
 		-D__KCSAN_INSTRUMENT_BARRIERS__)
 endif
 
+#
+# Enable context analysis flags only where explicitly opted in.
+# (depends on variables CONTEXT_ANALYSIS_obj.o, CONTEXT_ANALYSIS)
+#
+ifeq ($(CONFIG_WARN_CONTEXT_ANALYSIS),y)
+_c_flags += $(if $(patsubst n%,, \
+	$(CONTEXT_ANALYSIS_$(target-stem).o)$(CONTEXT_ANALYSIS)$(if $(is-kernel-object),$(CONFIG_WARN_CONTEXT_ANALYSIS_ALL))), \
+	$(CFLAGS_CONTEXT_ANALYSIS))
+endif
+
 #
 # Enable AutoFDO build flags except some files or directories we don't want to
 # enable (depends on variables AUTOFDO_PROFILE_obj.o and AUTOFDO_PROFILE).
-- 
2.52.0.322.g1dd061c0dc-goog
From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:52 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-4-elver@google.com>
Subject: [PATCH v5 03/36] compiler-context-analysis: Add test stub
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

Add a simple test stub, to which we will add common supported patterns
that should not generate false positive warnings for each newly supported
context lock.
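Once later patches in this series annotate the locking primitives, the
patterns added here will look roughly like the following sketch
(illustrative only, not part of this patch; it assumes spinlock_t has been
wired up as a context lock):

	struct test_data {
		spinlock_t lock;
		int counter __guarded_by(&lock);
	};

	/* Compile-only: a plain lock/unlock section must be accepted. */
	static void __used test_spin_lock_guarded(struct test_data *d)
	{
		spin_lock(&d->lock);
		d->counter++;
		spin_unlock(&d->lock);
	}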
Signed-off-by: Marco Elver <elver@google.com>
---
v5:
 * Rename "context guard" -> "context lock".

v4:
 * Rename capability -> context analysis.
---
 lib/Kconfig.debug           | 14 ++++++++++++++
 lib/Makefile                |  3 +++
 lib/test_context-analysis.c | 18 ++++++++++++++++++
 3 files changed, 35 insertions(+)
 create mode 100644 lib/test_context-analysis.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cd557e7653a4..8ca42526ee43 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2835,6 +2835,20 @@ config LINEAR_RANGES_TEST
 
 	  If unsure, say N.
 
+config CONTEXT_ANALYSIS_TEST
+	bool "Compiler context-analysis warnings test"
+	depends on EXPERT
+	help
+	  This builds the test for compiler-based context analysis. The test
+	  does not add executable code to the kernel, but is meant to test that
+	  common patterns supported by the analysis do not result in false
+	  positive warnings.
+
+	  When adding support for new context locks, it is strongly recommended
+	  to add supported patterns to this test.
+
+	  If unsure, say N.
+
 config CMDLINE_KUNIT_TEST
 	tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS
 	depends on KUNIT

diff --git a/lib/Makefile b/lib/Makefile
index aaf677cf4527..89defefbf6c0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -331,4 +331,7 @@ obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
 
 obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o
 
+CONTEXT_ANALYSIS_test_context-analysis.o := y
+obj-$(CONFIG_CONTEXT_ANALYSIS_TEST) += test_context-analysis.o
+
 subdir-$(CONFIG_FORTIFY_SOURCE) += test_fortify

diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
new file mode 100644
index 000000000000..68f075dec0e0
--- /dev/null
+++ b/lib/test_context-analysis.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Compile-only tests for common patterns that should not generate false
+ * positive errors when compiled with Clang's context analysis.
+ */
+
+#include <linux/build_bug.h>
+
+/*
+ * Test that helper macros work as expected.
+ */
+static void __used test_common_helpers(void)
+{
+	BUILD_BUG_ON(context_unsafe(3) != 3);		/* plain expression */
+	BUILD_BUG_ON(context_unsafe((void)2; 3) != 3);	/* does not swallow semicolon */
+	BUILD_BUG_ON(context_unsafe((void)2, 3) != 3);	/* does not swallow commas */
+	context_unsafe(do { } while (0));		/* works with void statements */
+}
-- 
2.52.0.322.g1dd061c0dc-goog
From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:53 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-5-elver@google.com>
Subject: [PATCH v5 04/36] Documentation: Add documentation for Compiler-Based Context Analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

Add documentation in Documentation/dev-tools/context-analysis.rst, and add
it to the index.
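For example, the Lockdep-assisted pattern described in the new document
can be sketched as follows (illustrative only; `struct stats` is
hypothetical, and it assumes spinlock_t is wired up as a context lock by
later patches in this series):

	struct stats {
		spinlock_t lock;
		unsigned long events __guarded_by(&lock);
	};

	/* Called from several paths that already hold stats->lock. */
	static void stats_inc(struct stats *stats)
	{
		/*
		 * The runtime assertion doubles as an annotation: after it,
		 * the analysis assumes stats->lock is held.
		 */
		lockdep_assert_held(&stats->lock);
		stats->events++;
	}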
An obvious application is lock-safety chec= king +for the kernel's various synchronization primitives (each of which represe= nts a +"context lock"), and checking that locking rules are not violated. + +The Clang compiler currently supports the full set of context analysis +features. To enable for Clang, configure the kernel with:: + + CONFIG_WARN_CONTEXT_ANALYSIS=3Dy + +The feature requires Clang 22 or later. + +The analysis is *opt-in by default*, and requires declaring which modules = and +subsystems should be analyzed in the respective `Makefile`:: + + CONTEXT_ANALYSIS_mymodule.o :=3D y + +Or for all translation units in the directory:: + + CONTEXT_ANALYSIS :=3D y + +It is possible to enable the analysis tree-wide, however, which will resul= t in +numerous false positive warnings currently and is *not* generally recommen= ded:: + + CONFIG_WARN_CONTEXT_ANALYSIS_ALL=3Dy + +Programming Model +----------------- + +The below describes the programming model around using context lock types. + +.. note:: + Enabling context analysis can be seen as enabling a dialect of Linux C = with + a Context System. Some valid patterns involving complex control-flow are + constrained (such as conditional acquisition and later conditional rele= ase + in the same function). + +Context analysis is a way to specify permissibility of operations to depen= d on +context locks being held (or not held). Typically we are interested in +protecting data and code in a critical section by requiring a specific con= text +to be active, for example by holding a specific lock. The analysis ensures= that +callers cannot perform an operation without the required context being act= ive. + +Context locks are associated with named structs, along with functions that +operate on struct instances to acquire and release the associated context = lock. + +Context locks can be held either exclusively or shared. This mechanism all= ows +assigning more precise privileges when a context is active, typically to +distinguish where a thread may only read (shared) or also write (exclusive= ) to +data guarded within a context. + +The set of contexts that are actually active in a given thread at a given = point +in program execution is a run-time concept. The static analysis works by +calculating an approximation of that set, called the context environment. = The +context environment is calculated for every program point, and describes t= he +set of contexts that are statically known to be active, or inactive, at th= at +particular point. This environment is a conservative approximation of the = full +set of contexts that will actually be active in a thread at run-time. + +More details are also documented `here +`_. + +.. note:: + Clang's analysis explicitly does not infer context locks acquired or + released by inline functions. It requires explicit annotations to (a) a= ssert + that it's not a bug if a context lock is released or acquired, and (b) = to + retain consistency between inline and non-inline function declarations. + +Supported Kernel Primitives +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. Currently the following synchronization primitives are supported: + +For context locks with an initialization function (e.g., `spin_lock_init()= `), +calling this function before initializing any guarded members or globals +prevents the compiler from issuing warnings about unguarded initialization. 
+
+Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
+context analysis that the associated synchronization primitive is held after
+the assertion. This avoids false positives in complex control-flow scenarios
+and encourages the use of Lockdep where static analysis is limited. For
+example, this is useful when a function doesn't *always* require a lock, making
+`__must_hold()` inappropriate.
+
+Keywords
+~~~~~~~~
+
+.. kernel-doc:: include/linux/compiler-context-analysis.h
+   :identifiers: context_lock_struct
+                 token_context_lock token_context_lock_instance
+                 __guarded_by __pt_guarded_by
+                 __must_hold
+                 __must_not_hold
+                 __acquires
+                 __cond_acquires
+                 __releases
+                 __must_hold_shared
+                 __acquires_shared
+                 __cond_acquires_shared
+                 __releases_shared
+                 __acquire
+                 __release
+                 __cond_lock
+                 __acquire_shared
+                 __release_shared
+                 __cond_lock_shared
+                 __acquire_ret
+                 __acquire_shared_ret
+                 context_unsafe
+                 __context_unsafe
+                 disable_context_analysis enable_context_analysis
+
+.. note::
+   The function attribute `__no_context_analysis` is reserved for internal
+   implementation of context lock types, and should be avoided in normal code.
+
+Background
+----------
+
+Clang originally called the feature `Thread Safety Analysis
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_, with some keywords
+and documentation still using the thread-safety-analysis-only terminology. This
+was later changed and the feature became more flexible, gaining the ability to
+define custom "capabilities". Its foundations can be found in `Capability
+Systems <https://www.cs.cornell.edu/talc/papers/capabilities.pdf>`_, used to
+specify the permissibility of operations to depend on some "capability" being
+held (or not held).
+
+Because the feature is not just able to express capabilities related to
+synchronization primitives, and "capability" is already overloaded in the
+kernel, the naming chosen for the kernel departs from Clang's initial "Thread
+Safety" and "capability" nomenclature; we refer to the feature as "Context
+Analysis" to avoid confusion. The internal implementation still makes
+references to Clang's terminology in a few places, such as `-Wthread-safety`
+being the warning option that also still appears in diagnostic messages.
diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 4b8425e348ab..d864b3da4cc7 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -21,6 +21,7 @@ Documentation/process/debugging/index.rst
    checkpatch
    clang-format
    coccinelle
+   context-analysis
    sparse
    kcov
    gcov
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:54 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-6-elver@google.com>
Subject: [PATCH v5 05/36] checkpatch: Warn about context_unsafe() without comment
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

Warn about applications of context_unsafe() without a comment, to encourage
documenting the reasoning behind why it was deemed safe.

Signed-off-by: Marco Elver
---
v4:
 * Rename capability -> context analysis.
 * Avoid nested if.
---
 scripts/checkpatch.pl | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index c0250244cf7a..c4fd8bdff528 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -6733,6 +6733,13 @@ sub process {
 		}
 	}
 
+# check for context_unsafe without a comment.
+		if ($line =~ /\bcontext_unsafe\b/ &&
+		    !ctx_has_comment($first_line, $linenr)) {
+			WARN("CONTEXT_UNSAFE",
+			     "context_unsafe without comment\n" . $herecurr);
+		}
+
 # check of hardware specific defines
 		if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ &&
 		    $realfile !~ m@include/asm-@) {
 			CHK("ARCH_DEFINES",
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:44 2026
X-Gm-Message-State: AOJu0YydOtV9An1+LXq+qCnntIGQJ18ba2H+n3OcYkejjDQZITnUGcTp 75jdFqwJaj3zVGGaJcpX09asg9x6SAbmRqv45UBJnPq9AUr1OxpAFVxspBPOezfo2T1/wcB/NQU wfg== X-Google-Smtp-Source: AGHT+IET+LEGDJRWCHLmorOjhXicoZnuOWLnFvU0WuT5JtyAAqi51qU04r6fOHC0UWSg/g7d1u9qxqHVsA== X-Received: from wmv18.prod.google.com ([2002:a05:600c:26d2:b0:475:dadb:c8f2]) (user=elver job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:820d:b0:477:7c7d:d9b7 with SMTP id 5b1f17b1804b1-47d1958e475mr30951665e9.33.1766159144675; Fri, 19 Dec 2025 07:45:44 -0800 (PST) Date: Fri, 19 Dec 2025 16:39:55 +0100 In-Reply-To: <20251219154418.3592607-1-elver@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20251219154418.3592607-1-elver@google.com> X-Mailer: git-send-email 2.52.0.322.g1dd061c0dc-goog Message-ID: <20251219154418.3592607-7-elver@google.com> Subject: [PATCH v5 06/36] cleanup: Basic compatibility with context analysis From: Marco Elver To: elver@google.com, Peter Zijlstra , Boqun Feng , Ingo Molnar , Will Deacon Cc: "David S. Miller" , Luc Van Oostenryck , Chris Li , "Paul E. McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Christoph Hellwig , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ian Rogers , Jann Horn , Joel Fernandes , Johannes Berg , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Lukas Bulwahn , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Thomas Graf , Uladzislau Rezki , Waiman Long , kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce basic compatibility with cleanup.h infrastructure. We need to allow the compiler to see the acquisition and release of the context lock at the start and end of a scope. However, the current "cleanup" helpers wrap the lock in a struct passed through separate helper functions, which hides the lock alias from the compiler (no inter-procedural analysis). While Clang supports scoped guards in C++, it's not possible to apply in C code: https://clang.llvm.org/docs/ThreadSafetyAnalysis.html#scoped-context However, together with recent improvements to Clang's alias analysis abilities, idioms such as this work correctly now: void spin_unlock_cleanup(spinlock_t **l) __releases(*l) { .. } ... { spinlock_t *lock_scope __cleanup(spin_unlock_cleanup) =3D &lock; spin_lock(&lock); // lock through &lock ... critical section ... } // unlock through lock_scope -[alias]-> &lock (no warnings) To generalize this pattern and make it work with existing lock guards, introduce DECLARE_LOCK_GUARD_1_ATTRS() and WITH_LOCK_GUARD_1_ATTRS(). These allow creating an explicit alias to the context lock instance that is "cleaned" up with a separate cleanup helper. This helper is a dummy function that does nothing at runtime, but has the release attributes to tell the compiler what happens at the end of the scope. 
Example usage:

	DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T))
	#define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)

Note: To support the for-loop based scoped helpers, the auxiliary variable
must be a pointer to the "class" type because it is defined in the same
statement as the guard variable. However, we initialize it with the lock
pointer (despite the type mismatch, the compiler's alias analysis still
works as expected). The "_unlock" attribute receives a pointer to the
auxiliary variable (a double pointer to the class type), and must be cast
and dereferenced appropriately.

Signed-off-by: Marco Elver
---
v5:
 * Rework infrastructure to properly release at scope end with reworked
   DECLARE_LOCK_GUARD_1_ATTRS() and WITH_LOCK_GUARD_1_ATTRS().
v4:
 * Rename capability -> context analysis.
v3:
 * Add *_ATTRS helpers instead of implicit __assumes_cap (suggested by Peter)
 * __assert -> __assume rename
---
 include/linux/cleanup.h | 50 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index 8d41b917c77d..ee6df68c2177 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -278,16 +278,21 @@ const volatile void * __must_check_fn(const volatile void *val)
 
 #define DEFINE_CLASS(_name, _type, _exit, _init, _init_args...)	\
 typedef _type class_##_name##_t;					\
+typedef _type lock_##_name##_t;						\
 static __always_inline void class_##_name##_destructor(_type *p)	\
+	__no_context_analysis						\
 { _type _T = *p; _exit; }						\
 static __always_inline _type class_##_name##_constructor(_init_args)	\
+	__no_context_analysis						\
 { _type t = _init; return t; }
 
 #define EXTEND_CLASS(_name, ext, _init, _init_args...)			\
+typedef lock_##_name##_t lock_##_name##ext##_t;				\
 typedef class_##_name##_t class_##_name##ext##_t;			\
 static __always_inline void class_##_name##ext##_destructor(class_##_name##_t *p) \
 { class_##_name##_destructor(p); }					\
 static __always_inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+	__no_context_analysis						\
 { class_##_name##_t t = _init; return t; }
 
 #define CLASS(_name, var)						\
@@ -474,12 +479,14 @@ _label:									\
  */
 
 #define __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, ...)		\
+typedef _type lock_##_name##_t;						\
 typedef struct {							\
 	_type *lock;							\
 	__VA_ARGS__;							\
 } class_##_name##_t;							\
 									\
 static __always_inline void class_##_name##_destructor(class_##_name##_t *_T) \
+	__no_context_analysis						\
 {									\
 	if (!__GUARD_IS_ERR(_T->lock)) { _unlock; }			\
 }									\
@@ -488,6 +495,7 @@ __DEFINE_GUARD_LOCK_PTR(_name, &_T->lock)
 
 #define __DEFINE_LOCK_GUARD_1(_name, _type, _lock)			\
 static __always_inline class_##_name##_t class_##_name##_constructor(_type *l) \
+	__no_context_analysis						\
 {									\
 	class_##_name##_t _t = { .lock = l }, *_T = &_t;		\
 	_lock;								\
@@ -496,6 +504,7 @@ static __always_inline class_##_name##_t class_##_name##_constructor(_type *l) \
 
 #define __DEFINE_LOCK_GUARD_0(_name, _lock)				\
 static __always_inline class_##_name##_t class_##_name##_constructor(void) \
+	__no_context_analysis						\
 {									\
 	class_##_name##_t _t = { .lock = (void*)1 },			\
 		*_T __maybe_unused = &_t;				\
@@ -503,6 +512,47 @@ static __always_inline class_##_name##_t class_##_name##_constructor(void) \
 	return _t;							\
 }
 
+#define DECLARE_LOCK_GUARD_0_ATTRS(_name, _lock, _unlock)		\
+static inline class_##_name##_t class_##_name##_constructor(void) _lock;\
+static inline void class_##_name##_destructor(class_##_name##_t *_T) _unlock;
+
+/*
+ * To support Context Analysis, we need to allow the compiler to see the
+ * acquisition and release of the context lock. However, the "cleanup" helpers
+ * wrap the lock in a struct passed through separate helper functions, which
+ * hides the lock alias from the compiler (no inter-procedural analysis).
+ *
+ * To make it work, we introduce an explicit alias to the context lock instance
+ * that is "cleaned" up with a separate cleanup helper. This helper is a dummy
+ * function that does nothing at runtime, but has the "_unlock" attribute to
+ * tell the compiler what happens at the end of the scope.
+ *
+ * To generalize the pattern, the WITH_LOCK_GUARD_1_ATTRS() macro should be used
+ * to redefine the constructor, which then also creates the alias variable with
+ * the right "cleanup" attribute, *after* DECLARE_LOCK_GUARD_1_ATTRS() has been
+ * used.
+ *
+ * Example usage:
+ *
+ *   DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T))
+ *   #define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
+ *
+ * Note: To support the for-loop based scoped helpers, the auxiliary variable
+ * must be a pointer to the "class" type because it is defined in the same
+ * statement as the guard variable. However, we initialize it with the lock
+ * pointer (despite the type mismatch, the compiler's alias analysis still works
+ * as expected). The "_unlock" attribute receives a pointer to the auxiliary
+ * variable (a double pointer to the class type), and must be cast and
+ * dereferenced appropriately.
+ */
+#define DECLARE_LOCK_GUARD_1_ATTRS(_name, _lock, _unlock)		\
+static inline class_##_name##_t class_##_name##_constructor(lock_##_name##_t *_T) _lock;\
+static __always_inline void __class_##_name##_cleanup_ctx(class_##_name##_t **_T) \
+	__no_context_analysis _unlock { }
+#define WITH_LOCK_GUARD_1_ATTRS(_name, _T)				\
+	class_##_name##_constructor(_T),				\
+	*__UNIQUE_ID(unlock) __cleanup(__class_##_name##_cleanup_ctx) = (void *)(unsigned long)(_T)
+
 #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...)		\
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false);				\
 __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__)		\
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:56 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-8-elver@google.com>
Subject: [PATCH v5 07/36] lockdep: Annotate lockdep assertions for context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

Clang's context analysis can be made aware of functions that assert that
locks are held. The presence of these annotations causes the analysis to
assume the context lock is held after calls to the annotated function,
which avoids false positives in complex control flow; for example, where
not all control-flow paths in a function require a held lock, and marking
the function with __must_hold(..) is therefore inappropriate.

Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".
v4:
 * Rename capability -> context analysis.
v3:
 * __assert -> __assume rename
---
 include/linux/lockdep.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index dd634103b014..621566345406 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -282,16 +282,16 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
 
 #define lockdep_assert_held(l)		\
-	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
+	do { lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD); __assume_ctx_lock(l); } while (0)
 
 #define lockdep_assert_not_held(l)	\
 	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD)
 
 #define lockdep_assert_held_write(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 0))
+	do { lockdep_assert(lockdep_is_held_type(l, 0)); __assume_ctx_lock(l); } while (0)
 
 #define lockdep_assert_held_read(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 1))
+	do { lockdep_assert(lockdep_is_held_type(l, 1)); __assume_shared_ctx_lock(l); } while (0)
 
 #define lockdep_assert_held_once(l)		\
 	lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
@@ -389,10 +389,10 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert(c)			do { } while (0)
 #define lockdep_assert_once(c)			do { } while (0)
 
-#define lockdep_assert_held(l)			do { (void)(l); } while (0)
+#define lockdep_assert_held(l)			__assume_ctx_lock(l)
 #define lockdep_assert_not_held(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_write(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
+#define lockdep_assert_held_write(l)		__assume_ctx_lock(l)
+#define lockdep_assert_held_read(l)		__assume_shared_ctx_lock(l)
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
 
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:57 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-9-elver@google.com>
Subject: [PATCH v5 08/36] locking/rwlock, spinlock: Support Clang's context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

Add support for Clang's context analysis for raw_spinlock_t, spinlock_t,
and rwlock_t. This wholesale conversion is required because all three of
them are interdependent.

To avoid warnings in constructors, the initialization functions mark a
lock as acquired when it is initialized before guarded variables.

The test verifies that common patterns do not generate false positives.

Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".
 * Use new cleanup.h helpers to properly support scoped lock guards.
v4:
 * Rename capability -> context analysis.
v3:
 * Switch to DECLARE_LOCK_GUARD_1_ATTRS() (suggested by Peter)
 * __assert -> __assume rename
---
 Documentation/dev-tools/context-analysis.rst |   3 +-
 include/linux/rwlock.h                       |  25 ++--
 include/linux/rwlock_api_smp.h               |  29 ++++-
 include/linux/rwlock_rt.h                    |  35 +++--
 include/linux/rwlock_types.h                 |  10 +-
 include/linux/spinlock.h                     |  93 +++++++++++---
 include/linux/spinlock_api_smp.h             |  14 +-
 include/linux/spinlock_api_up.h              |  71 +++++-----
 include/linux/spinlock_rt.h                  |  21 +--
 include/linux/spinlock_types.h               |  10 +-
 include/linux/spinlock_types_raw.h           |   5 +-
 lib/test_context-analysis.c                  | 128 +++++++++++++++++++
 12 files changed, 347 insertions(+), 97 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 47eb547eb716..746a2d275fb2 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -78,7 +78,8 @@ More details are also documented `here
 Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. Currently the following synchronization primitives are supported:
+Currently the following synchronization primitives are supported:
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`.
=20 For context locks with an initialization function (e.g., `spin_lock_init()= `), calling this function before initializing any guarded members or globals diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 5b87c6f4a243..151f9d5f3288 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -22,23 +22,24 @@ do { \ static struct lock_class_key __key; \ \ __rwlock_init((lock), #lock, &__key); \ + __assume_ctx_lock(lock); \ } while (0) #else # define rwlock_init(lock) \ - do { *(lock) =3D __RW_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) =3D __RW_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock); } whi= le (0) #endif =20 #ifdef CONFIG_DEBUG_SPINLOCK - extern void do_raw_read_lock(rwlock_t *lock) __acquires(lock); + extern void do_raw_read_lock(rwlock_t *lock) __acquires_shared(lock); extern int do_raw_read_trylock(rwlock_t *lock); - extern void do_raw_read_unlock(rwlock_t *lock) __releases(lock); + extern void do_raw_read_unlock(rwlock_t *lock) __releases_shared(lock); extern void do_raw_write_lock(rwlock_t *lock) __acquires(lock); extern int do_raw_write_trylock(rwlock_t *lock); extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock); #else -# define do_raw_read_lock(rwlock) do {__acquire(lock); arch_read_lock(&(rw= lock)->raw_lock); } while (0) +# define do_raw_read_lock(rwlock) do {__acquire_shared(lock); arch_read_lo= ck(&(rwlock)->raw_lock); } while (0) # define do_raw_read_trylock(rwlock) arch_read_trylock(&(rwlock)->raw_lock) -# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lo= ck); __release(lock); } while (0) +# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lo= ck); __release_shared(lock); } while (0) # define do_raw_write_lock(rwlock) do {__acquire(lock); arch_write_lock(&(= rwlock)->raw_lock); } while (0) # define do_raw_write_trylock(rwlock) arch_write_trylock(&(rwlock)->raw_lo= ck) # define do_raw_write_unlock(rwlock) do {arch_write_unlock(&(rwlock)->raw_= lock); __release(lock); } while (0) @@ -49,7 +50,7 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. */ -#define read_trylock(lock) __cond_lock(lock, _raw_read_trylock(lock)) +#define read_trylock(lock) __cond_lock_shared(lock, _raw_read_trylock(lock= )) #define write_trylock(lock) __cond_lock(lock, _raw_write_trylock(lock)) =20 #define write_lock(lock) _raw_write_lock(lock) @@ -112,12 +113,12 @@ do { \ } while (0) #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) =20 -#define write_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - write_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define write_trylock_irqsave(lock, flags) \ + __cond_lock(lock, ({ \ + local_irq_save(flags); \ + _raw_write_trylock(lock) ? \ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) =20 #ifdef arch_rwlock_is_contended #define rwlock_is_contended(lock) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index 31d3d1116323..6d5cc0b7be1f 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -15,12 +15,12 @@ * Released under the General Public License (GPL). 
*/ =20 -void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) __acq= uires(lock); -void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock); -void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_irq(rwlock_t *lock) __acquires(lock); unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock) __acquires(lock); @@ -28,11 +28,11 @@ unsigned long __lockfunc _raw_write_lock_irqsave(rwlock= _t *lock) __acquires(lock); int __lockfunc _raw_read_trylock(rwlock_t *lock); int __lockfunc _raw_write_trylock(rwlock_t *lock); -void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock= ); void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases_shared(loc= k); void __lockfunc _raw_write_unlock_irq(rwlock_t *lock) __releases(lock); void __lockfunc _raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) @@ -145,6 +145,7 @@ static inline int __raw_write_trylock(rwlock_t *lock) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) =20 static inline void __raw_read_lock(rwlock_t *lock) + __acquires_shared(lock) __no_context_analysis { preempt_disable(); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -152,6 +153,7 @@ static inline void __raw_read_lock(rwlock_t *lock) } =20 static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) + __acquires_shared(lock) __no_context_analysis { unsigned long flags; =20 @@ -163,6 +165,7 @@ static inline unsigned long __raw_read_lock_irqsave(rwl= ock_t *lock) } =20 static inline void __raw_read_lock_irq(rwlock_t *lock) + __acquires_shared(lock) __no_context_analysis { local_irq_disable(); preempt_disable(); @@ -171,6 +174,7 @@ static inline void __raw_read_lock_irq(rwlock_t *lock) } =20 static inline void __raw_read_lock_bh(rwlock_t *lock) + __acquires_shared(lock) __no_context_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -178,6 +182,7 @@ static inline void __raw_read_lock_bh(rwlock_t *lock) } =20 static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) + __acquires(lock) __no_context_analysis { unsigned long flags; =20 @@ -189,6 +194,7 @@ static inline unsigned long __raw_write_lock_irqsave(rw= lock_t *lock) } =20 static inline void __raw_write_lock_irq(rwlock_t *lock) + __acquires(lock) __no_context_analysis { local_irq_disable(); preempt_disable(); @@ -197,6 +203,7 @@ static inline void __raw_write_lock_irq(rwlock_t *lock) } =20 static inline void __raw_write_lock_bh(rwlock_t *lock) + __acquires(lock) __no_context_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); 
rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -204,6 +211,7 @@ static inline void __raw_write_lock_bh(rwlock_t *lock) } =20 static inline void __raw_write_lock(rwlock_t *lock) + __acquires(lock) __no_context_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -211,6 +219,7 @@ static inline void __raw_write_lock(rwlock_t *lock) } =20 static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) + __acquires(lock) __no_context_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_); @@ -220,6 +229,7 @@ static inline void __raw_write_lock_nested(rwlock_t *lo= ck, int subclass) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ =20 static inline void __raw_write_unlock(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -227,6 +237,7 @@ static inline void __raw_write_unlock(rwlock_t *lock) } =20 static inline void __raw_read_unlock(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -235,6 +246,7 @@ static inline void __raw_read_unlock(rwlock_t *lock) =20 static inline void __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -243,6 +255,7 @@ __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned l= ong flags) } =20 static inline void __raw_read_unlock_irq(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -251,6 +264,7 @@ static inline void __raw_read_unlock_irq(rwlock_t *lock) } =20 static inline void __raw_read_unlock_bh(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -259,6 +273,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock) =20 static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -267,6 +282,7 @@ static inline void __raw_write_unlock_irqrestore(rwlock= _t *lock, } =20 static inline void __raw_write_unlock_irq(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -275,6 +291,7 @@ static inline void __raw_write_unlock_irq(rwlock_t *loc= k) } =20 static inline void __raw_write_unlock_bh(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 7d81fc6918ee..f64d6d319a47 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -22,28 +22,32 @@ do { \ \ init_rwbase_rt(&(rwl)->rwbase); \ __rt_rwlock_init(rwl, #rwl, &__key); \ + __assume_ctx_lock(rwl); \ } while (0) =20 -extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock); +extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); extern int rt_read_trylock(rwlock_t *rwlock); -extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock); +extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquire= s(rwlock); extern int rt_write_trylock(rwlock_t *rwlock); extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); =20 static __always_inline void read_lock(rwlock_t *rwlock) + 
__acquires_shared(rwlock) { rt_read_lock(rwlock); } =20 static __always_inline void read_lock_bh(rwlock_t *rwlock) + __acquires_shared(rwlock) { local_bh_disable(); rt_read_lock(rwlock); } =20 static __always_inline void read_lock_irq(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } @@ -55,37 +59,43 @@ static __always_inline void read_lock_irq(rwlock_t *rwl= ock) flags =3D 0; \ } while (0) =20 -#define read_trylock(lock) __cond_lock(lock, rt_read_trylock(lock)) +#define read_trylock(lock) __cond_lock_shared(lock, rt_read_trylock(lock)) =20 static __always_inline void read_unlock(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } =20 static __always_inline void read_unlock_bh(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); local_bh_enable(); } =20 static __always_inline void read_unlock_irq(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } =20 static __always_inline void read_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } =20 static __always_inline void write_lock(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } =20 #ifdef CONFIG_DEBUG_LOCK_ALLOC static __always_inline void write_lock_nested(rwlock_t *rwlock, int subcla= ss) + __acquires(rwlock) { rt_write_lock_nested(rwlock, subclass); } @@ -94,12 +104,14 @@ static __always_inline void write_lock_nested(rwlock_t= *rwlock, int subclass) #endif =20 static __always_inline void write_lock_bh(rwlock_t *rwlock) + __acquires(rwlock) { local_bh_disable(); rt_write_lock(rwlock); } =20 static __always_inline void write_lock_irq(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } @@ -114,33 +126,34 @@ static __always_inline void write_lock_irq(rwlock_t *= rwlock) #define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock)) =20 #define write_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags =3D 0; \ - __locked =3D write_trylock(lock); \ - __locked; \ -}) + __cond_lock(lock, ({ \ + typecheck(unsigned long, flags); \ + flags =3D 0; \ + rt_write_trylock(lock); \ + })) =20 static __always_inline void write_unlock(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } =20 static __always_inline void write_unlock_bh(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); local_bh_enable(); } =20 static __always_inline void write_unlock_irq(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } =20 static __always_inline void write_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases(rwlock) { rt_write_unlock(rwlock); } diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h index 1948442e7750..d5e7316401e7 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -22,7 +22,7 @@ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar * Released under the General Public License (GPL). 
*/ -typedef struct { +context_lock_struct(rwlock) { arch_rwlock_t raw_lock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned int magic, owner_cpu; @@ -31,7 +31,8 @@ typedef struct { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; =20 #define RWLOCK_MAGIC 0xdeaf1eed =20 @@ -54,13 +55,14 @@ typedef struct { =20 #include =20 -typedef struct { +context_lock_struct(rwlock) { struct rwbase_rt rwbase; atomic_t readers; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; =20 #define __RWLOCK_RT_INITIALIZER(name) \ { \ diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index d3561c4a080e..72aabdd4fa3f 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -106,11 +106,12 @@ do { \ static struct lock_class_key __key; \ \ __raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN); \ + __assume_ctx_lock(lock); \ } while (0) =20 #else # define raw_spin_lock_init(lock) \ - do { *(lock) =3D __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) =3D __RAW_SPIN_LOCK_UNLOCKED(lock); __assume_ctx_lock(lock);= } while (0) #endif =20 #define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock) @@ -286,19 +287,19 @@ static inline void do_raw_spin_unlock(raw_spinlock_t = *lock) __releases(lock) #define raw_spin_trylock_bh(lock) \ __cond_lock(lock, _raw_spin_trylock_bh(lock)) =20 -#define raw_spin_trylock_irq(lock) \ -({ \ - local_irq_disable(); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_enable(); 0; }); \ -}) +#define raw_spin_trylock_irq(lock) \ + __cond_lock(lock, ({ \ + local_irq_disable(); \ + _raw_spin_trylock(lock) ? \ + 1 : ({ local_irq_enable(); 0; }); \ + })) =20 -#define raw_spin_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define raw_spin_trylock_irqsave(lock, flags) \ + __cond_lock(lock, ({ \ + local_irq_save(flags); \ + _raw_spin_trylock(lock) ? 
\ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) =20 #ifndef CONFIG_PREEMPT_RT /* Include rwlock functions for !RT */ @@ -334,6 +335,7 @@ do { \ \ __raw_spin_lock_init(spinlock_check(lock), \ #lock, &__key, LD_WAIT_CONFIG); \ + __assume_ctx_lock(lock); \ } while (0) =20 #else @@ -342,21 +344,25 @@ do { \ do { \ spinlock_check(_lock); \ *(_lock) =3D __SPIN_LOCK_UNLOCKED(_lock); \ + __assume_ctx_lock(_lock); \ } while (0) =20 #endif =20 static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) __no_context_analysis { raw_spin_lock(&lock->rlock); } =20 static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) __no_context_analysis { raw_spin_lock_bh(&lock->rlock); } =20 static __always_inline int spin_trylock(spinlock_t *lock) + __cond_acquires(lock) __no_context_analysis { return raw_spin_trylock(&lock->rlock); } @@ -364,14 +370,17 @@ static __always_inline int spin_trylock(spinlock_t *l= ock) #define spin_lock_nested(lock, subclass) \ do { \ raw_spin_lock_nested(spinlock_check(lock), subclass); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) =20 #define spin_lock_nest_lock(lock, nest_lock) \ do { \ raw_spin_lock_nest_lock(spinlock_check(lock), nest_lock); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) =20 static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) __no_context_analysis { raw_spin_lock_irq(&lock->rlock); } @@ -379,47 +388,53 @@ static __always_inline void spin_lock_irq(spinlock_t = *lock) #define spin_lock_irqsave(lock, flags) \ do { \ raw_spin_lock_irqsave(spinlock_check(lock), flags); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) =20 #define spin_lock_irqsave_nested(lock, flags, subclass) \ do { \ raw_spin_lock_irqsave_nested(spinlock_check(lock), flags, subclass); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) =20 static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) __no_context_analysis { raw_spin_unlock(&lock->rlock); } =20 static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) __no_context_analysis { raw_spin_unlock_bh(&lock->rlock); } =20 static __always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) __no_context_analysis { raw_spin_unlock_irq(&lock->rlock); } =20 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsig= ned long flags) + __releases(lock) __no_context_analysis { raw_spin_unlock_irqrestore(&lock->rlock, flags); } =20 static __always_inline int spin_trylock_bh(spinlock_t *lock) + __cond_acquires(lock) __no_context_analysis { return raw_spin_trylock_bh(&lock->rlock); } =20 static __always_inline int spin_trylock_irq(spinlock_t *lock) + __cond_acquires(lock) __no_context_analysis { return raw_spin_trylock_irq(&lock->rlock); } =20 #define spin_trylock_irqsave(lock, flags) \ -({ \ - raw_spin_trylock_irqsave(spinlock_check(lock), flags); \ -}) + __cond_lock(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags)) =20 /** * spin_is_locked() - Check whether a spinlock is locked. 
@@ -535,86 +550,132 @@ void free_bucket_spinlocks(spinlock_t *locks); DEFINE_LOCK_GUARD_1(raw_spinlock, raw_spinlock_t, raw_spin_lock(_T->lock), raw_spin_unlock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock, __acquires(_T), __releases(*(raw_= spinlock_t **)_T)) +#define class_raw_spinlock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spi= nlock, _T) =20 DEFINE_LOCK_GUARD_1_COND(raw_spinlock, _try, raw_spin_trylock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_try, __acquires(_T), __releases(*(= raw_spinlock_t **)_T)) +#define class_raw_spinlock_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw= _spinlock_try, _T) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_nested, raw_spinlock_t, raw_spin_lock_nested(_T->lock, SINGLE_DEPTH_NESTING), raw_spin_unlock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_nested, __acquires(_T), __releases= (*(raw_spinlock_t **)_T)) +#define class_raw_spinlock_nested_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(= raw_spinlock_nested, _T) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t, raw_spin_lock_irq(_T->lock), raw_spin_unlock_irq(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irq, __acquires(_T), __releases(*(= raw_spinlock_t **)_T)) +#define class_raw_spinlock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw= _spinlock_irq, _T) =20 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq(_T->= lock)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irq_try, __acquires(_T), __release= s(*(raw_spinlock_t **)_T)) +#define class_raw_spinlock_irq_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS= (raw_spinlock_irq_try, _T) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t, raw_spin_lock_bh(_T->lock), raw_spin_unlock_bh(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_bh, __acquires(_T), __releases(*(r= aw_spinlock_t **)_T)) +#define class_raw_spinlock_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_= spinlock_bh, _T) =20 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_bh, _try, raw_spin_trylock_bh(_T->lo= ck)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_bh_try, __acquires(_T), __releases= (*(raw_spinlock_t **)_T)) +#define class_raw_spinlock_bh_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(= raw_spinlock_bh_try, _T) =20 DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t, raw_spin_lock_irqsave(_T->lock, _T->flags), raw_spin_unlock_irqrestore(_T->lock, _T->flags), unsigned long flags) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave, __acquires(_T), __release= s(*(raw_spinlock_t **)_T)) +#define class_raw_spinlock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS= (raw_spinlock_irqsave, _T) =20 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try, raw_spin_trylock_irqsave(_T->lock, _T->flags)) +DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, __acquires(_T), __rel= eases(*(raw_spinlock_t **)_T)) +#define class_raw_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_A= TTRS(raw_spinlock_irqsave_try, _T) =20 DEFINE_LOCK_GUARD_1(spinlock, spinlock_t, spin_lock(_T->lock), spin_unlock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock, __acquires(_T), __releases(*(spinlock= _t **)_T)) +#define class_spinlock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock, _= T) =20 DEFINE_LOCK_GUARD_1_COND(spinlock, _try, spin_trylock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_try, __acquires(_T), __releases(*(spin= lock_t **)_T)) +#define class_spinlock_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinloc= k_try, _T) =20 DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t, spin_lock_irq(_T->lock), spin_unlock_irq(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irq, 
__acquires(_T), __releases(*(spin= lock_t **)_T)) +#define class_spinlock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinloc= k_irq, _T) =20 DEFINE_LOCK_GUARD_1_COND(spinlock_irq, _try, spin_trylock_irq(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irq_try, __acquires(_T), __releases(*(= spinlock_t **)_T)) +#define class_spinlock_irq_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spi= nlock_irq_try, _T) =20 DEFINE_LOCK_GUARD_1(spinlock_bh, spinlock_t, spin_lock_bh(_T->lock), spin_unlock_bh(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_bh, __acquires(_T), __releases(*(spinl= ock_t **)_T)) +#define class_spinlock_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock= _bh, _T) =20 DEFINE_LOCK_GUARD_1_COND(spinlock_bh, _try, spin_trylock_bh(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_bh_try, __acquires(_T), __releases(*(s= pinlock_t **)_T)) +#define class_spinlock_bh_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spin= lock_bh_try, _T) =20 DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t, spin_lock_irqsave(_T->lock, _T->flags), spin_unlock_irqrestore(_T->lock, _T->flags), unsigned long flags) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave, __acquires(_T), __releases(*(= spinlock_t **)_T)) +#define class_spinlock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spi= nlock_irqsave, _T) =20 DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try, spin_trylock_irqsave(_T->lock, _T->flags)) +DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, __acquires(_T), __release= s(*(spinlock_t **)_T)) +#define class_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS= (spinlock_irqsave_try, _T) =20 DEFINE_LOCK_GUARD_1(read_lock, rwlock_t, read_lock(_T->lock), read_unlock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(read_lock, __acquires(_T), __releases(*(rwlock_= t **)_T)) +#define class_read_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(read_lock,= _T) =20 DEFINE_LOCK_GUARD_1(read_lock_irq, rwlock_t, read_lock_irq(_T->lock), read_unlock_irq(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(read_lock_irq, __acquires(_T), __releases(*(rwl= ock_t **)_T)) +#define class_read_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(read_l= ock_irq, _T) =20 DEFINE_LOCK_GUARD_1(read_lock_irqsave, rwlock_t, read_lock_irqsave(_T->lock, _T->flags), read_unlock_irqrestore(_T->lock, _T->flags), unsigned long flags) +DECLARE_LOCK_GUARD_1_ATTRS(read_lock_irqsave, __acquires(_T), __releases(*= (rwlock_t **)_T)) +#define class_read_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(re= ad_lock_irqsave, _T) =20 DEFINE_LOCK_GUARD_1(write_lock, rwlock_t, write_lock(_T->lock), write_unlock(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(write_lock, __acquires(_T), __releases(*(rwlock= _t **)_T)) +#define class_write_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_loc= k, _T) =20 DEFINE_LOCK_GUARD_1(write_lock_irq, rwlock_t, write_lock_irq(_T->lock), write_unlock_irq(_T->lock)) +DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irq, __acquires(_T), __releases(*(rw= lock_t **)_T)) +#define class_write_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write= _lock_irq, _T) =20 DEFINE_LOCK_GUARD_1(write_lock_irqsave, rwlock_t, write_lock_irqsave(_T->lock, _T->flags), write_unlock_irqrestore(_T->lock, _T->flags), unsigned long flags) +DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irqsave, __acquires(_T), __releases(= *(rwlock_t **)_T)) +#define class_write_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(w= rite_lock_irqsave, _T) =20 #undef __LINUX_INSIDE_SPINLOCK_H #endif /* __LINUX_SPINLOCK_H */ diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_= smp.h 
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 9ecb0ab504e3..d19327e04df9 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
 unsigned long __lockfunc
 _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass)
								__acquires(lock);
-int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock);
-int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock);
+int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lock);
+int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(lock);
 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock);
@@ -84,6 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #endif

 static inline int __raw_spin_trylock(raw_spinlock_t *lock)
+	__cond_acquires(lock)
 {
	preempt_disable();
	if (do_raw_spin_trylock(lock)) {
@@ -102,6 +103,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)

 static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
	unsigned long flags;

@@ -113,6 +115,7 @@ static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
 }

 static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
	local_irq_disable();
	preempt_disable();
@@ -121,6 +124,7 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
 }

 static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
@@ -128,6 +132,7 @@ static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
 }

 static inline void __raw_spin_lock(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
	preempt_disable();
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
@@ -137,6 +142,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock)
 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */

 static inline void __raw_spin_unlock(raw_spinlock_t *lock)
+	__releases(lock)
 {
	spin_release(&lock->dep_map, _RET_IP_);
	do_raw_spin_unlock(lock);
@@ -145,6 +151,7 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lock)

 static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock,
					    unsigned long flags)
+	__releases(lock)
 {
	spin_release(&lock->dep_map, _RET_IP_);
	do_raw_spin_unlock(lock);
@@ -153,6 +160,7 @@ static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock,
 }

 static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
+	__releases(lock)
 {
	spin_release(&lock->dep_map, _RET_IP_);
	do_raw_spin_unlock(lock);
@@ -161,6 +169,7 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
 }

 static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
+	__releases(lock)
 {
	spin_release(&lock->dep_map, _RET_IP_);
	do_raw_spin_unlock(lock);
@@ -168,6 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
 }

 static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock)
+	__cond_acquires(lock)
 {
	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
	if (do_raw_spin_trylock(lock)) {
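A sketch of the caller pattern that __cond_acquires() encodes (illustrative
only, not part of this patch): the lock is held exactly on the branch where
the trylock succeeded, and must be released on that branch:

	static void try_update(raw_spinlock_t *lock, int *counter)
	{
		if (raw_spin_trylock(lock)) {
			(*counter)++;
			raw_spin_unlock(lock);	/* balanced: acquired above */
		}
		/* an unlock here, outside the branch, would be flagged */
	}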
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index 819aeba1c87e..018f5aabc1be 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -24,68 +24,77 @@
  * flags straight, to suppress compiler warnings of unused lock
  * variables, and to add the proper checker annotations:
  */
-#define ___LOCK(lock) \
-  do { __acquire(lock); (void)(lock); } while (0)
+#define ___LOCK_void(lock) \
+  do { (void)(lock); } while (0)

-#define __LOCK(lock) \
-  do { preempt_disable(); ___LOCK(lock); } while (0)
+#define ___LOCK_(lock) \
+  do { __acquire(lock); ___LOCK_void(lock); } while (0)

-#define __LOCK_BH(lock) \
-  do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK(lock); } while (0)
+#define ___LOCK_shared(lock) \
+  do { __acquire_shared(lock); ___LOCK_void(lock); } while (0)

-#define __LOCK_IRQ(lock) \
-  do { local_irq_disable(); __LOCK(lock); } while (0)
+#define __LOCK(lock, ...) \
+  do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0)

-#define __LOCK_IRQSAVE(lock, flags) \
-  do { local_irq_save(flags); __LOCK(lock); } while (0)
+#define __LOCK_BH(lock, ...) \
+  do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK_##__VA_ARGS__(lock); } while (0)

-#define ___UNLOCK(lock) \
+#define __LOCK_IRQ(lock, ...) \
+  do { local_irq_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0)
+
+#define __LOCK_IRQSAVE(lock, flags, ...) \
+  do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0)
+
+#define ___UNLOCK_(lock) \
   do { __release(lock); (void)(lock); } while (0)

-#define __UNLOCK(lock) \
-  do { preempt_enable(); ___UNLOCK(lock); } while (0)
+#define ___UNLOCK_shared(lock) \
+  do { __release_shared(lock); (void)(lock); } while (0)

-#define __UNLOCK_BH(lock) \
+#define __UNLOCK(lock, ...) \
+  do { preempt_enable(); ___UNLOCK_##__VA_ARGS__(lock); } while (0)
+
+#define __UNLOCK_BH(lock, ...) \
   do { __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); \
-       ___UNLOCK(lock); } while (0)
+       ___UNLOCK_##__VA_ARGS__(lock); } while (0)

-#define __UNLOCK_IRQ(lock) \
-  do { local_irq_enable(); __UNLOCK(lock); } while (0)
+#define __UNLOCK_IRQ(lock, ...) \
+  do { local_irq_enable(); __UNLOCK(lock, ##__VA_ARGS__); } while (0)

-#define __UNLOCK_IRQRESTORE(lock, flags) \
-  do { local_irq_restore(flags); __UNLOCK(lock); } while (0)
+#define __UNLOCK_IRQRESTORE(lock, flags, ...) \
+  do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0)

 #define _raw_spin_lock(lock)			__LOCK(lock)
 #define _raw_spin_lock_nested(lock, subclass)	__LOCK(lock)
-#define _raw_read_lock(lock)			__LOCK(lock)
+#define _raw_read_lock(lock)			__LOCK(lock, shared)
 #define _raw_write_lock(lock)			__LOCK(lock)
 #define _raw_write_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_spin_lock_bh(lock)			__LOCK_BH(lock)
-#define _raw_read_lock_bh(lock)			__LOCK_BH(lock)
+#define _raw_read_lock_bh(lock)			__LOCK_BH(lock, shared)
 #define _raw_write_lock_bh(lock)		__LOCK_BH(lock)
 #define _raw_spin_lock_irq(lock)		__LOCK_IRQ(lock)
-#define _raw_read_lock_irq(lock)		__LOCK_IRQ(lock)
+#define _raw_read_lock_irq(lock)		__LOCK_IRQ(lock, shared)
 #define _raw_write_lock_irq(lock)		__LOCK_IRQ(lock)
 #define _raw_spin_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
-#define _raw_read_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
+#define _raw_read_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags, shared)
 #define _raw_write_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
-#define _raw_spin_trylock(lock)			({ __LOCK(lock); 1; })
-#define _raw_read_trylock(lock)			({ __LOCK(lock); 1; })
-#define _raw_write_trylock(lock)		({ __LOCK(lock); 1; })
-#define _raw_spin_trylock_bh(lock)		({ __LOCK_BH(lock); 1; })
+#define _raw_spin_trylock(lock)			({ __LOCK(lock, void); 1; })
+#define _raw_read_trylock(lock)			({ __LOCK(lock, void); 1; })
+#define _raw_write_trylock(lock)		({ __LOCK(lock, void); 1; })
+#define _raw_spin_trylock_bh(lock)		({ __LOCK_BH(lock, void); 1; })
 #define _raw_spin_unlock(lock)			__UNLOCK(lock)
-#define _raw_read_unlock(lock)			__UNLOCK(lock)
+#define _raw_read_unlock(lock)			__UNLOCK(lock, shared)
 #define _raw_write_unlock(lock)			__UNLOCK(lock)
 #define _raw_spin_unlock_bh(lock)		__UNLOCK_BH(lock)
 #define _raw_write_unlock_bh(lock)		__UNLOCK_BH(lock)
-#define _raw_read_unlock_bh(lock)		__UNLOCK_BH(lock)
+#define _raw_read_unlock_bh(lock)		__UNLOCK_BH(lock, shared)
 #define _raw_spin_unlock_irq(lock)		__UNLOCK_IRQ(lock)
-#define _raw_read_unlock_irq(lock)		__UNLOCK_IRQ(lock)
+#define _raw_read_unlock_irq(lock)		__UNLOCK_IRQ(lock, shared)
 #define _raw_write_unlock_irq(lock)		__UNLOCK_IRQ(lock)
 #define _raw_spin_unlock_irqrestore(lock, flags) \
					__UNLOCK_IRQRESTORE(lock, flags)
 #define _raw_read_unlock_irqrestore(lock, flags) \
-					__UNLOCK_IRQRESTORE(lock, flags)
+					__UNLOCK_IRQRESTORE(lock, flags, shared)
 #define _raw_write_unlock_irqrestore(lock, flags) \
					__UNLOCK_IRQRESTORE(lock, flags)
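To see how the variadic suffix selects the annotation variant, here is a
manual expansion of the UP read-lock path (illustration only):

	_raw_read_lock(lock)
	  => __LOCK(lock, shared)
	  => do { preempt_disable(); ___LOCK_shared(lock); } while (0)
	  => do { preempt_disable();
	          do { __acquire_shared(lock); (void)(lock); } while (0); } while (0)

An empty suffix selects the exclusive ___LOCK_() variant, and the trylock
macros pass "void" to select ___LOCK_void(), which adds no acquire
annotation, presumably because the conditional-acquire attribute on the
corresponding trylock declarations already accounts for the acquisition.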
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index f6499c37157d..6bab73ee1384 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -20,6 +20,7 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
 do {								\
	rt_mutex_base_init(&(slock)->lock);			\
	__rt_spin_lock_init(slock, name, key, percpu);		\
+	__assume_ctx_lock(slock);				\
 } while (0)

 #define _spin_lock_init(slock, percpu)				\
@@ -40,6 +41,7 @@ extern int rt_spin_trylock_bh(spinlock_t *lock);
 extern int rt_spin_trylock(spinlock_t *lock);

 static __always_inline void spin_lock(spinlock_t *lock)
+	__acquires(lock)
 {
	rt_spin_lock(lock);
 }
@@ -82,6 +84,7 @@ static __always_inline void spin_lock(spinlock_t *lock)
	__spin_lock_irqsave_nested(lock, flags, subclass)

 static __always_inline void spin_lock_bh(spinlock_t *lock)
+	__acquires(lock)
 {
	/* Investigate: Drop bh when blocking ? */
	local_bh_disable();
@@ -89,6 +92,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock)
 }

 static __always_inline void spin_lock_irq(spinlock_t *lock)
+	__acquires(lock)
 {
	rt_spin_lock(lock);
 }
@@ -101,23 +105,27 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 } while (0)

 static __always_inline void spin_unlock(spinlock_t *lock)
+	__releases(lock)
 {
	rt_spin_unlock(lock);
 }

 static __always_inline void spin_unlock_bh(spinlock_t *lock)
+	__releases(lock)
 {
	rt_spin_unlock(lock);
	local_bh_enable();
 }

 static __always_inline void spin_unlock_irq(spinlock_t *lock)
+	__releases(lock)
 {
	rt_spin_unlock(lock);
 }

 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
						   unsigned long flags)
+	__releases(lock)
 {
	rt_spin_unlock(lock);
 }
@@ -132,14 +140,11 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
	__cond_lock(lock, rt_spin_trylock(lock))

 #define spin_trylock_irqsave(lock, flags)		\
-({							\
-	int __locked;					\
-							\
-	typecheck(unsigned long, flags);		\
-	flags = 0;					\
-	__locked = spin_trylock(lock);			\
-	__locked;					\
-})
+	__cond_lock(lock, ({				\
+		typecheck(unsigned long, flags);	\
+		flags = 0;				\
+		rt_spin_trylock(lock);			\
+	}))

 #define spin_is_contended(lock)		(((void)(lock), 0))

diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
index 2dfa35ffec76..b65bb6e4451c 100644
--- a/include/linux/spinlock_types.h
+++ b/include/linux/spinlock_types.h
@@ -14,7 +14,7 @@
 #ifndef CONFIG_PREEMPT_RT

 /* Non PREEMPT_RT kernels map spinlock to raw_spinlock */
-typedef struct spinlock {
+context_lock_struct(spinlock) {
	union {
		struct raw_spinlock rlock;

@@ -26,7 +26,8 @@ typedef struct spinlock {
	};
 #endif
	};
-} spinlock_t;
+};
+typedef struct spinlock spinlock_t;

 #define ___SPIN_LOCK_INITIALIZER(lockname)	\
 {						\
@@ -47,12 +48,13 @@ typedef struct spinlock {
 /* PREEMPT_RT kernels map spinlock to rt_mutex */
 #include

-typedef struct spinlock {
+context_lock_struct(spinlock) {
	struct rt_mutex_base	lock;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
 #endif
-} spinlock_t;
+};
+typedef struct spinlock spinlock_t;

 #define __SPIN_LOCK_UNLOCKED(name)	\
 {					\

diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
index 91cb36b65a17..e5644ab2161f 100644
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -11,7 +11,7 @@

 #include

-typedef struct raw_spinlock {
+context_lock_struct(raw_spinlock) {
	arch_spinlock_t raw_lock;
 #ifdef CONFIG_DEBUG_SPINLOCK
	unsigned int magic, owner_cpu;
@@ -20,7 +20,8 @@ typedef struct raw_spinlock {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map dep_map;
 #endif
-} raw_spinlock_t;
+};
+typedef struct raw_spinlock raw_spinlock_t;

 #define SPINLOCK_MAGIC		0xdead4ead

diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 68f075dec0e0..273fa9d34657 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -5,6 +5,7 @@
  */

 #include <linux/build_bug.h>
+#include <linux/spinlock.h>

 /*
  * Test that helper macros work as expected.
@@ -16,3 +17,130 @@ static void __used test_common_helpers(void)
	BUILD_BUG_ON(context_unsafe((void)2, 3) != 3);	/* does not swallow commas */
	context_unsafe(do { } while (0));		/* works with void statements */
 }
+
+#define TEST_SPINLOCK_COMMON(class, type, type_init, type_lock, type_unlock, type_trylock, op)	\
+	struct test_##class##_data {						\
+		type lock;							\
+		int counter __guarded_by(&lock);				\
+		int *pointer __pt_guarded_by(&lock);				\
+	};									\
+	static void __used test_##class##_init(struct test_##class##_data *d)	\
+	{									\
+		type_init(&d->lock);						\
+		d->counter = 0;							\
+	}									\
+	static void __used test_##class(struct test_##class##_data *d)		\
+	{									\
+		unsigned long flags;						\
+		d->pointer++;							\
+		type_lock(&d->lock);						\
+		op(d->counter);							\
+		op(*d->pointer);						\
+		type_unlock(&d->lock);						\
+		type_lock##_irq(&d->lock);					\
+		op(d->counter);							\
+		op(*d->pointer);						\
+		type_unlock##_irq(&d->lock);					\
+		type_lock##_bh(&d->lock);					\
+		op(d->counter);							\
+		op(*d->pointer);						\
+		type_unlock##_bh(&d->lock);					\
+		type_lock##_irqsave(&d->lock, flags);				\
+		op(d->counter);							\
+		op(*d->pointer);						\
+		type_unlock##_irqrestore(&d->lock, flags);			\
+	}									\
+	static void __used test_##class##_trylock(struct test_##class##_data *d) \
+	{									\
+		if (type_trylock(&d->lock)) {					\
+			op(d->counter);						\
+			type_unlock(&d->lock);					\
+		}								\
+	}									\
+	static void __used test_##class##_assert(struct test_##class##_data *d) \
+	{									\
+		lockdep_assert_held(&d->lock);					\
+		op(d->counter);							\
+	}									\
+	static void __used test_##class##_guard(struct test_##class##_data *d)	\
+	{									\
+		{ guard(class)(&d->lock); op(d->counter); }			\
+		{ guard(class##_irq)(&d->lock); op(d->counter); }		\
+		{ guard(class##_irqsave)(&d->lock); op(d->counter); }		\
+	}
+
+#define TEST_OP_RW(x) (x)++
+#define TEST_OP_RO(x) ((void)(x))
+
+TEST_SPINLOCK_COMMON(raw_spinlock,
+		     raw_spinlock_t,
+		     raw_spin_lock_init,
+		     raw_spin_lock,
+		     raw_spin_unlock,
+		     raw_spin_trylock,
+		     TEST_OP_RW);
+static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data *d)
+{
+	unsigned long flags;
+
+	if (raw_spin_trylock_irq(&d->lock)) {
+		d->counter++;
+		raw_spin_unlock_irq(&d->lock);
+	}
+	if (raw_spin_trylock_irqsave(&d->lock, flags)) {
+		d->counter++;
+		raw_spin_unlock_irqrestore(&d->lock, flags);
+	}
+	scoped_cond_guard(raw_spinlock_try, return, &d->lock) {
+		d->counter++;
+	}
+}
+
+TEST_SPINLOCK_COMMON(spinlock,
+		     spinlock_t,
+		     spin_lock_init,
+		     spin_lock,
+		     spin_unlock,
+		     spin_trylock,
+		     TEST_OP_RW);
+static void __used test_spinlock_trylock_extra(struct test_spinlock_data *d)
+{
+	unsigned long flags;
+
+	if (spin_trylock_irq(&d->lock)) {
+		d->counter++;
+		spin_unlock_irq(&d->lock);
+	}
+	if (spin_trylock_irqsave(&d->lock, flags)) {
+		d->counter++;
+		spin_unlock_irqrestore(&d->lock, flags);
+	}
+	scoped_cond_guard(spinlock_try, return, &d->lock) {
+		d->counter++;
+	}
+}
+
+TEST_SPINLOCK_COMMON(write_lock,
+		     rwlock_t,
+		     rwlock_init,
+		     write_lock,
+		     write_unlock,
+		     write_trylock,
+		     TEST_OP_RW);
+static void __used test_write_trylock_extra(struct test_write_lock_data *d)
+{
+	unsigned long flags;
+
+	if (write_trylock_irqsave(&d->lock, flags)) {
+		d->counter++;
+		write_unlock_irqrestore(&d->lock, flags);
+	}
+}
+
+TEST_SPINLOCK_COMMON(read_lock,
+		     rwlock_t,
+		     rwlock_init,
+		     read_lock,
+		     read_unlock,
+		     read_trylock,
+		     TEST_OP_RO);
--
2.52.0.322.g1dd061c0dc-goog
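For contrast with the tests above (which must compile cleanly), a
hypothetical counter-example of the kind of code the analysis rejects:

	static void buggy(struct test_spinlock_data *d)
	{
		d->counter++;	/* warning: writing counter requires holding d->lock */
	}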
From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:58 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-10-elver@google.com>
Subject: [PATCH v5 09/36] compiler-context-analysis: Change __cond_acquires to take return value
From: Marco Elver <elver@google.com>

While Sparse is oblivious to the return value of conditional acquire
functions, Clang's context analysis needs to know the return value
which indicates successful acquisition.

Add the additional argument, and convert existing uses.

Notably, Clang's interpretation of the value relates only to how it is
used in a later conditional branch, i.e. 1 ==> context lock acquired in
the branch taken if the condition is non-zero, and 0 ==> context lock
acquired in the branch taken if the condition is zero. Given the precise
value does not matter, introduce symbolic variants to use instead of
either 0 or 1, which should be more intuitive.

No functional change intended.

Signed-off-by: Marco Elver <elver@google.com>
---
v5:
 * Rename "context guard" -> "context lock".
v4:
 * Rename capability -> context analysis.
v2:
 * Use symbolic values for __cond_acquires() and __cond_acquires_shared()
   (suggested by Bart).
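To make the new contract concrete, an illustrative pair of declarations
(hypothetical functions, not from this patch):

	/* Acquired when the return value is non-zero (trylock-style): */
	extern bool my_trylock(spinlock_t *lock) __cond_acquires(true, lock);

	/* Acquired when the return value is zero (functions returning -errno): */
	extern int my_lock_interruptible(spinlock_t *lock) __cond_acquires(0, lock);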
---
 fs/dlm/lock.c                             |  2 +-
 include/linux/compiler-context-analysis.h | 31 +++++++++++++++++++----
 include/linux/refcount.h                  |  6 ++---
 include/linux/spinlock.h                  |  6 ++---
 include/linux/spinlock_api_smp.h          |  8 +++---
 net/ipv4/tcp_sigpool.c                    |  2 +-
 6 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index be938fdf17d9..0ce04be0d3de 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -343,7 +343,7 @@ void dlm_hold_rsb(struct dlm_rsb *r)
 /* TODO move this to lib/refcount.c */
 static __must_check bool dlm_refcount_dec_and_write_lock_bh(refcount_t *r,
							    rwlock_t *lock)
-__cond_acquires(lock)
+	__cond_acquires(true, lock)
 {
	if (refcount_dec_not_one(r))
		return false;

diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
index afff910d8930..9ad800e27692 100644
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -271,7 +271,7 @@ static inline void _context_unsafe_alias(void **p) { }
 # define __must_hold(x)		__attribute__((context(x,1,1)))
 # define __must_not_hold(x)
 # define __acquires(x)		__attribute__((context(x,0,1)))
-# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
+# define __cond_acquires(ret, x) __attribute__((context(x,0,-1)))
 # define __releases(x)		__attribute__((context(x,1,0)))
 # define __acquire(x)		__context__(x,1)
 # define __release(x)		__context__(x,-1)
@@ -314,15 +314,32 @@ static inline void _context_unsafe_alias(void **p) { }
  */
 # define __acquires(x)	__acquires_ctx_lock(x)

+/*
+ * Clang's analysis does not care precisely about the value, only that it is
+ * either zero or non-zero. So the __cond_acquires() interface might be
+ * misleading if we say that @ret is the value returned if acquired. Instead,
+ * provide symbolic variants which we translate.
+ */
+#define __cond_acquires_impl_true(x, ...)	__try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_false(x, ...)	__try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_nonzero(x, ...)	__try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_0(x, ...)		__try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_nonnull(x, ...)	__try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_NULL(x, ...)	__try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+
 /**
  * __cond_acquires() - function attribute, function conditionally
  *                     acquires a context lock exclusively
+ * @ret: abstract value returned by function if context lock acquired
  * @x: context lock instance pointer
  *
  * Function attribute declaring that the function conditionally acquires the
- * given context lock instance @x exclusively, but does not release it.
+ * given context lock instance @x exclusively, but does not release it. The
+ * function return value @ret denotes when the context lock is acquired.
+ *
+ * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
  */
-# define __cond_acquires(x)	__try_acquires_ctx_lock(1, x)
+# define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x)

 /**
  * __releases() - function attribute, function releases a context lock exclusively
@@ -389,12 +406,16 @@ static inline void _context_unsafe_alias(void **p) { }
 /**
  * __cond_acquires_shared() - function attribute, function conditionally
  *                            acquires a context lock shared
+ * @ret: abstract value returned by function if context lock acquired
  * @x: context lock instance pointer
  *
  * Function attribute declaring that the function conditionally acquires the
- * given context lock instance @x with shared access, but does not release it.
+ * given context lock instance @x with shared access, but does not release it.
+ * The function return value @ret denotes when the context lock is acquired.
+ *
+ * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
  */
-# define __cond_acquires_shared(x) __try_acquires_shared_ctx_lock(1, x)
+# define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared)

 /**
  * __releases_shared() - function attribute, function releases a

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 80dc023ac2bf..3da377ffb0c2 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -478,9 +478,9 @@ static inline void refcount_dec(refcount_t *r)

 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
-extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(true, lock);
+extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(true, lock);
 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
						       spinlock_t *lock,
-						       unsigned long *flags) __cond_acquires(lock);
+						       unsigned long *flags) __cond_acquires(true, lock);
 #endif /* _LINUX_REFCOUNT_H */

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 72aabdd4fa3f..7e560c7a7b23 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -362,7 +362,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock)
 }

 static __always_inline int spin_trylock(spinlock_t *lock)
-	__cond_acquires(lock) __no_context_analysis
+	__cond_acquires(true, lock) __no_context_analysis
 {
	return raw_spin_trylock(&lock->rlock);
 }
@@ -422,13 +422,13 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 }

 static __always_inline int spin_trylock_bh(spinlock_t *lock)
-	__cond_acquires(lock) __no_context_analysis
+	__cond_acquires(true, lock) __no_context_analysis
 {
	return raw_spin_trylock_bh(&lock->rlock);
 }

 static __always_inline int spin_trylock_irq(spinlock_t *lock)
-	__cond_acquires(lock) __no_context_analysis
+	__cond_acquires(true, lock) __no_context_analysis
 {
	return raw_spin_trylock_irq(&lock->rlock);
 }
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index d19327e04df9..7e7d7d373213 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
 unsigned long __lockfunc
 _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass)
								__acquires(lock);
-int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lock);
-int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(lock);
+int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(true, lock);
+int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(true, lock);
 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock);
@@ -84,7 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #endif

 static inline int __raw_spin_trylock(raw_spinlock_t *lock)
-	__cond_acquires(lock)
+	__cond_acquires(true, lock)
 {
	preempt_disable();
	if (do_raw_spin_trylock(lock)) {
@@ -177,7 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
 }

 static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock)
-	__cond_acquires(lock)
+	__cond_acquires(true, lock)
 {
	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
	if (do_raw_spin_trylock(lock)) {

diff --git a/net/ipv4/tcp_sigpool.c b/net/ipv4/tcp_sigpool.c
index d8a4f192873a..10b2e5970c40 100644
--- a/net/ipv4/tcp_sigpool.c
+++ b/net/ipv4/tcp_sigpool.c
@@ -257,7 +257,7 @@ void tcp_sigpool_get(unsigned int id)
 }
 EXPORT_SYMBOL_GPL(tcp_sigpool_get);

-int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH)
+int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(0, RCU_BH)
 {
	struct crypto_ahash *hash;

--
2.52.0.322.g1dd061c0dc-goog
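An illustrative caller for the zero-means-acquired case above (a sketch under
the assumption that tcp_sigpool_end() is the matching release, as in the
existing API; not part of this patch):

	static int use_sigpool(unsigned int id, struct tcp_sigpool *c)
	{
		int err = tcp_sigpool_start(id, c);

		if (err)
			return err;	/* RCU_BH not entered on the non-zero path */
		/* ... RCU BH read-side critical section held here ... */
		tcp_sigpool_end(c);
		return 0;
	}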
From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:39:59 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-11-elver@google.com>
Subject: [PATCH v5 10/36] locking/mutex: Support Clang's context analysis
From: Marco Elver <elver@google.com>

Add support for Clang's context analysis for mutex.
Signed-off-by: Marco Elver <elver@google.com>
Reported-by: Bart Van Assche
---
v5:
 * Rename "context guard" -> "context lock".
v4:
 * Rename capability -> context analysis.
v3:
 * Switch to DECLARE_LOCK_GUARD_1_ATTRS() (suggested by Peter)
 * __assert -> __assume rename
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/mutex.h                        | 38 +++++++-----
 include/linux/mutex_types.h                  |  4 +-
 lib/test_context-analysis.c                  | 64 ++++++++++++++++++++
 4 files changed, 90 insertions(+), 18 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 746a2d275fb2..1864b6cba4d1 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -79,7 +79,7 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`.

 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index bf535f0118bb..89977c215cbd 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -62,6 +62,7 @@ do {									\
	static struct lock_class_key __key;				\
									\
	__mutex_init((mutex), #mutex, &__key);				\
+	__assume_ctx_lock(mutex);					\
 } while (0)

 /**
@@ -182,13 +183,13 @@ static inline int __must_check __devm_mutex_init(struct device *dev, struct mutex
  * Also see Documentation/locking/mutex-design.rst.
  */
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
+extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
-					unsigned int subclass);
+					unsigned int subclass) __cond_acquires(0, lock);
 extern int __must_check _mutex_lock_killable(struct mutex *lock,
-		unsigned int subclass, struct lockdep_map *nest_lock);
-extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass);
+		unsigned int subclass, struct lockdep_map *nest_lock) __cond_acquires(0, lock);
+extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);

 #define mutex_lock(lock) mutex_lock_nested(lock, 0)
 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0)
@@ -211,10 +212,10 @@ do {									\
	_mutex_lock_killable(lock, subclass, NULL)

 #else
-extern void mutex_lock(struct mutex *lock);
-extern int __must_check mutex_lock_interruptible(struct mutex *lock);
-extern int __must_check mutex_lock_killable(struct mutex *lock);
-extern void mutex_lock_io(struct mutex *lock);
+extern void mutex_lock(struct mutex *lock) __acquires(lock);
+extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock);
+extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock);
+extern void mutex_lock_io(struct mutex *lock) __acquires(lock);

 # define mutex_lock_nested(lock, subclass) mutex_lock(lock)
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
@@ -232,7 +233,7 @@ extern void mutex_lock_io(struct mutex *lock);
  */

 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock) __cond_acquires(true, lock);

 #define mutex_trylock_nest_lock(lock, nest_lock)		\
 (								\
@@ -242,17 +243,24 @@ extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest

 #define mutex_trylock(lock) _mutex_trylock_nest_lock(lock, NULL)
 #else
-extern int mutex_trylock(struct mutex *lock);
+extern int mutex_trylock(struct mutex *lock) __cond_acquires(true, lock);
 #define mutex_trylock_nest_lock(lock, nest_lock) mutex_trylock(lock)
 #endif

-extern void mutex_unlock(struct mutex *lock);
+extern void mutex_unlock(struct mutex *lock) __releases(lock);

-extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
+extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(true, lock);

-DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T))
-DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T))
-DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T), _RET == 0)
+DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock), _RET == 0)
+
+DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_try, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_try, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_intr, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_intr, _T)

 extern unsigned long mutex_get_owner(struct mutex *lock);

diff --git a/include/linux/mutex_types.h b/include/linux/mutex_types.h
index fdf7f515fde8..80975935ec48 100644
--- a/include/linux/mutex_types.h
+++ b/include/linux/mutex_types.h
@@ -38,7 +38,7 @@
  * - detects multi-task circular deadlocks and prints out all affected
  *   locks and tasks (and only those tasks)
  */
-struct mutex {
+context_lock_struct(mutex) {
	atomic_long_t		owner;
	raw_spinlock_t		wait_lock;
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
@@ -59,7 +59,7 @@ struct mutex {
  */
 #include

-struct mutex {
+context_lock_struct(mutex) {
	struct rt_mutex_base	rtmutex;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 273fa9d34657..2b28d20c5f51 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -5,6 +5,7 @@
  */

 #include <linux/build_bug.h>
+#include <linux/mutex.h>
 #include <linux/spinlock.h>

 /*
@@ -144,3 +145,66 @@ TEST_SPINLOCK_COMMON(read_lock,
		     read_unlock,
		     read_trylock,
		     TEST_OP_RO);
+
+struct test_mutex_data {
+	struct mutex mtx;
+	int counter __guarded_by(&mtx);
+};
+
+static void __used test_mutex_init(struct test_mutex_data *d)
+{
+	mutex_init(&d->mtx);
+	d->counter = 0;
+}
+
+static void __used test_mutex_lock(struct test_mutex_data *d)
+{
+	mutex_lock(&d->mtx);
+	d->counter++;
+	mutex_unlock(&d->mtx);
+	mutex_lock_io(&d->mtx);
+	d->counter++;
+	mutex_unlock(&d->mtx);
+}
+
+static void __used test_mutex_trylock(struct test_mutex_data *d, atomic_t *a)
+{
+	if (!mutex_lock_interruptible(&d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+	if (!mutex_lock_killable(&d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+	if (mutex_trylock(&d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+	if (atomic_dec_and_mutex_lock(a, &d->mtx)) {
+		d->counter++;
+		mutex_unlock(&d->mtx);
+	}
+}
+
+static void __used test_mutex_assert(struct test_mutex_data *d)
+{
+	lockdep_assert_held(&d->mtx);
+	d->counter++;
+}
+
+static void __used test_mutex_guard(struct test_mutex_data *d)
+{
+	guard(mutex)(&d->mtx);
+	d->counter++;
+}
+
+static void __used test_mutex_cond_guard(struct test_mutex_data *d)
+{
+	scoped_cond_guard(mutex_try, return, &d->mtx) {
+		d->counter++;
+	}
+	scoped_cond_guard(mutex_intr, return, &d->mtx) {
+		d->counter++;
+	}
+}
--
2.52.0.322.g1dd061c0dc-goog
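A sketch of how the new __cond_acquires(0, lock) annotation on
mutex_lock_interruptible() plays out in a typical caller (hypothetical
struct and function, not from this patch):

	struct obj {
		struct mutex mtx;
		int state __guarded_by(&mtx);
	};

	static int obj_update(struct obj *o)
	{
		int err = mutex_lock_interruptible(&o->mtx);

		if (err)
			return err;	/* mtx not held on the error path */
		o->state++;		/* ok: mtx held when 0 was returned */
		mutex_unlock(&o->mtx);
		return 0;
	}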
From nobody Sat Feb 7 12:29:44 2026
Date: Fri, 19 Dec 2025 16:40:00 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-12-elver@google.com>
Subject: [PATCH v5 11/36] locking/seqlock: Support Clang's context analysis
From: Marco Elver <elver@google.com>

Add support for Clang's context analysis for seqlock_t.

Signed-off-by: Marco Elver <elver@google.com>
---
v5:
 * Support scoped_seqlock_read().
 * Rename "context guard" -> "context lock".
v3:
 * __assert -> __assume rename
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/seqlock.h                      | 38 ++++++++++++++-
 include/linux/seqlock_types.h                |  5 +-
 lib/test_context-analysis.c                  | 50 ++++++++++++++++++++
 4 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 1864b6cba4d1..690565910084 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -79,7 +79,7 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`.
 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 221123660e71..113320911a09 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -816,6 +816,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
	do {							\
		spin_lock_init(&(sl)->lock);			\
		seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \
+		__assume_ctx_lock(sl);				\
	} while (0)

 /**
@@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
  * Return: count, to be passed to read_seqretry()
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
	return read_seqcount_begin(&sl->seqcount);
 }
@@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
  * Return: true if a read section retry is required, else false
  */
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+	__releases_shared(sl) __no_context_analysis
 {
	return read_seqcount_retry(&sl->seqcount, start);
 }
@@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
  * _irqsave or _bh variants of this function instead.
  */
 static inline void write_seqlock(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
	spin_lock(&sl->lock);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl)
  * critical section of given seqlock_t.
  */
 static inline void write_sequnlock(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock(&sl->lock);
@@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl)
  * other write side sections, can be invoked from softirq contexts.
  */
 static inline void write_seqlock_bh(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
	spin_lock_bh(&sl->lock);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
  * write_seqlock_bh().
  */
 static inline void write_sequnlock_bh(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock_bh(&sl->lock);
@@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
  * other write sections, can be invoked from hardirq contexts.
  */
 static inline void write_seqlock_irq(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
	spin_lock_irq(&sl->lock);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl)
  * seqlock_t write side section opened with write_seqlock_irq().
  */
 static inline void write_sequnlock_irq(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock_irq(&sl->lock);
 }

 static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
	unsigned long flags;

@@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
  */
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases(sl) __no_context_analysis
 {
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock_irqrestore(&sl->lock, flags);
@@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
  * The opened read section must be closed with read_sequnlock_excl().
  */
 static inline void read_seqlock_excl(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
	spin_lock(&sl->lock);
 }
@@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
	spin_unlock(&sl->lock);
 }
@@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl)
  * from softirq contexts.
  */
 static inline void read_seqlock_excl_bh(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
	spin_lock_bh(&sl->lock);
 }
@@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_bh(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
	spin_unlock_bh(&sl->lock);
 }
@@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl)
  * hardirq context.
  */
 static inline void read_seqlock_excl_irq(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
	spin_lock_irq(&sl->lock);
 }
@@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_irq(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
	spin_unlock_irq(&sl->lock);
 }

 static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
	unsigned long flags;

@@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
  */
 static inline void
 read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases_shared(sl) __no_context_analysis
 {
	spin_unlock_irqrestore(&sl->lock, flags);
 }
@@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
  * parameter of the next read_seqbegin_or_lock() iteration.
  */
 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_context_analysis
 {
	if (!(*seq & 1))	/* Even */
		*seq = read_seqbegin(lock);
@@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
  * Return: true if a read section retry is required, false otherwise
  */
 static inline int need_seqretry(seqlock_t *lock, int seq)
+	__releases_shared(lock) __no_context_analysis
 {
	return !(seq & 1) && read_seqretry(lock, seq);
 }
@@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq)
  * with read_seqbegin_or_lock() and validated by need_seqretry().
  */
 static inline void done_seqretry(seqlock_t *lock, int seq)
+	__no_context_analysis
 {
	if (seq & 1)
		read_sequnlock_excl(lock);
@@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq)
  */
 static inline unsigned long
 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_context_analysis
 {
	unsigned long flags = 0;

@@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
  */
 static inline void
 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
+	__no_context_analysis
 {
	if (seq & 1)
		read_sequnlock_excl_irqrestore(lock, flags);
@@ -1225,6 +1249,7 @@ struct ss_tmp {
 };

 static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
+	__no_context_analysis
 {
	if (sst->lock)
		spin_unlock(sst->lock);
@@ -1254,6 +1279,7 @@ extern void __scoped_seqlock_bug(void);

 static __always_inline void
 __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
+	__no_context_analysis
 {
	switch (sst->state) {
	case ss_done:
@@ -1296,9 +1322,19 @@ __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
	}
 }

+/*
+ * Context analysis no-op helper to release seqlock at the end of the for-scope;
+ * the alias analysis of the compiler will recognize that the pointer @s is an
+ * alias to @_seqlock passed to read_seqbegin(_seqlock) below.
+ */
+static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s)
+	__releases_shared(*((seqlock_t **)s)) __no_context_analysis {}
+
 #define __scoped_seqlock_read(_seqlock, _target, _s)				\
	for (struct ss_tmp _s __cleanup(__scoped_seqlock_cleanup) =		\
-		{ .state = ss_lockless, .data = read_seqbegin(_seqlock) };	\
+		{ .state = ss_lockless, .data = read_seqbegin(_seqlock) },	\
+	     *__UNIQUE_ID(ctx) __cleanup(__scoped_seqlock_cleanup_ctx) =	\
+		(struct ss_tmp *)_seqlock;					\
	     _s.state != ss_done;						\
	     __scoped_seqlock_next(&_s, _seqlock, _target))

diff --git a/include/linux/seqlock_types.h b/include/linux/seqlock_types.h
index dfdf43e3fa3d..2d5d793ef660 100644
--- a/include/linux/seqlock_types.h
+++ b/include/linux/seqlock_types.h
@@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  * - Comments on top of seqcount_t
  * - Documentation/locking/seqlock.rst
  */
-typedef struct {
+context_lock_struct(seqlock) {
	/*
	 * Make sure that readers don't starve writers on PREEMPT_RT: use
	 * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK().
	 */
	 */
	seqcount_spinlock_t seqcount;
	spinlock_t lock;
-} seqlock_t;
+};
+typedef struct seqlock seqlock_t;

 #endif /* __LINUX_SEQLOCK_TYPES_H */
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 2b28d20c5f51..53abea0008f2 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -6,6 +6,7 @@

 #include
 #include
+#include <linux/seqlock.h>
 #include

 /*
@@ -208,3 +209,52 @@ static void __used test_mutex_cond_guard(struct test_mutex_data *d)
			d->counter++;
	}
}
+
+struct test_seqlock_data {
+	seqlock_t sl;
+	int counter __guarded_by(&sl);
+};
+
+static void __used test_seqlock_init(struct test_seqlock_data *d)
+{
+	seqlock_init(&d->sl);
+	d->counter = 0;
+}
+
+static void __used test_seqlock_reader(struct test_seqlock_data *d)
+{
+	unsigned int seq;
+
+	do {
+		seq = read_seqbegin(&d->sl);
+		(void)d->counter;
+	} while (read_seqretry(&d->sl, seq));
+}
+
+static void __used test_seqlock_writer(struct test_seqlock_data *d)
+{
+	unsigned long flags;
+
+	write_seqlock(&d->sl);
+	d->counter++;
+	write_sequnlock(&d->sl);
+
+	write_seqlock_irq(&d->sl);
+	d->counter++;
+	write_sequnlock_irq(&d->sl);
+
+	write_seqlock_bh(&d->sl);
+	d->counter++;
+	write_sequnlock_bh(&d->sl);
+
+	write_seqlock_irqsave(&d->sl, flags);
+	d->counter++;
+	write_sequnlock_irqrestore(&d->sl, flags);
+}
+
+static void __used test_seqlock_scoped(struct test_seqlock_data *d)
+{
+	scoped_seqlock_read (&d->sl, ss_lockless) {
+		(void)d->counter;
+	}
+}
-- 
2.52.0.322.g1dd061c0dc-goog
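For illustration (not part of the patch): the "lockless first, locked on retry"
reader pattern that read_seqbegin_or_lock(), need_seqretry() and done_seqretry()
implement, sketched after existing callers such as fs/dcache.c. The function
name and the body of the critical section are hypothetical:

	static void reader(seqlock_t *sl)
	{
		int seq = 0;		/* even: first pass is lockless */
	again:
		read_seqbegin_or_lock(sl, &seq);
		/* ... read the protected data ... */
		if (need_seqretry(sl, seq)) {
			seq = 1;	/* odd: retry with the lock held */
			goto again;
		}
		done_seqretry(sl, seq);
	}

With the annotations above, both the lockless and the locked pass count as
holding 'sl' shared for the duration of the section.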
From nobody Sat Feb 7 12:29:44 2026
From: Marco Elver <elver@google.com>
Date: Fri, 19 Dec 2025 16:40:01 +0100
Message-ID: <20251219154418.3592607-13-elver@google.com>
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Subject: [PATCH v5 12/36] bit_spinlock: Include missing <asm/processor.h>
Including <linux/bit_spinlock.h> into an empty TU will result in the
compiler complaining:

  ./include/linux/bit_spinlock.h:34:4: error: call to undeclared function 'cpu_relax'; <...>
     34 |			cpu_relax();
        |			^
  1 error generated.

Include <asm/processor.h> to allow including bit_spinlock.h where
<asm/processor.h> is not otherwise included.

Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/bit_spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index c0989b5b0407..59e345f74b0e 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -7,6 +7,8 @@
 #include
 #include

+#include <asm/processor.h> /* for cpu_relax() */
+
 /*
  * bit-based spin_lock()
  *
-- 
2.52.0.322.g1dd061c0dc-goog
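For illustration (not part of the patch), a minimal translation unit that
reproduces the error above before this change and builds cleanly after it:

	/* Hypothetical empty TU: nothing but the header under test. */
	#include <linux/bit_spinlock.h>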
From nobody Sat Feb 7 12:29:45 2026
From: Marco Elver <elver@google.com>
Date: Fri, 19 Dec 2025 16:40:02 +0100
Message-ID: <20251219154418.3592607-14-elver@google.com>
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Subject: [PATCH v5 13/36] bit_spinlock: Support Clang's context analysis
The annotations for bit_spinlock.h have simply been using "bitlock" as
the token. For Sparse, that was likely sufficient in most cases. But
Clang's context analysis is more precise, and we need to ensure we can
distinguish different bitlocks.

To do so, add a token context, and a macro __bitlock(bitnum, addr) that
is used to construct unique per-bitlock tokens.

Add the appropriate test.

<linux/list_bl.h> is implicitly included through other includes, and
requires 2 annotations to indicate that acquisition (without release)
and release (without prior acquisition) of its bitlock is intended.

Signed-off-by: Marco Elver <elver@google.com>
---
v5:
 * Rename "context guard" -> "context lock".

v4:
 * Rename capability -> context analysis.
---
 Documentation/dev-tools/context-analysis.rst |  3 ++-
 include/linux/bit_spinlock.h                 | 22 ++++++++++++++---
 include/linux/list_bl.h                      |  2 ++
 lib/test_context-analysis.c                  | 26 ++++++++++++++++++++
 4 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 690565910084..b2d69fb4a884 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -79,7 +79,8 @@ Supported Kernel Primitives
~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
+`bit_spinlock`.

 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index 59e345f74b0e..7869a6e59b6a 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -9,6 +9,16 @@

 #include <asm/processor.h> /* for cpu_relax() */

+/*
+ * For static context analysis, we need a unique token for each possible bit
+ * that can be used as a bit_spinlock. The easiest way to do that is to create a
+ * fake context that we can cast to with the __bitlock(bitnum, addr) macro
+ * below, which will give us unique instances for each (bit, addr) pair that the
+ * static analysis can use.
+ */
+context_lock_struct(__context_bitlock) { };
+#define __bitlock(bitnum, addr) (struct __context_bitlock *)(bitnum + (addr))
+
 /*
  * bit-based spin_lock()
  *
@@ -16,6 +26,7 @@
  * are significantly faster.
  */
static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr)
+	__acquires(__bitlock(bitnum, addr))
{
	/*
	 * Assuming the lock is uncontended, this never enters
@@ -34,13 +45,14 @@ static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr)
		preempt_disable();
	}
#endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
}

/*
 * Return true if it was acquired
 */
static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+	__cond_acquires(true, __bitlock(bitnum, addr))
{
	preempt_disable();
#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -49,7 +61,7 @@ static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
		return 0;
	}
#endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
	return 1;
}

@@ -57,6 +69,7 @@ static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 * bit-based spin_unlock()
 */
static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
{
#ifdef CONFIG_DEBUG_SPINLOCK
	BUG_ON(!test_bit(bitnum, addr));
@@ -65,7 +78,7 @@ static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
	clear_bit_unlock(bitnum, addr);
#endif
	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
}

/*
@@ -74,6 +87,7 @@ static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
 * protecting the rest of the flags in the word.
 */
static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
{
#ifdef CONFIG_DEBUG_SPINLOCK
	BUG_ON(!test_bit(bitnum, addr));
@@ -82,7 +96,7 @@ static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
	__clear_bit_unlock(bitnum, addr);
#endif
	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
}

/*
diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index ae1b541446c9..df9eebe6afca 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -144,11 +144,13 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
}

static inline void hlist_bl_lock(struct hlist_bl_head *b)
+	__acquires(__bitlock(0, b))
{
	bit_spin_lock(0, (unsigned long *)b);
}

static inline void hlist_bl_unlock(struct hlist_bl_head *b)
+	__releases(__bitlock(0, b))
{
	__bit_spin_unlock(0, (unsigned long *)b);
}
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 53abea0008f2..be0c5d462a48 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -4,6 +4,7 @@
 * positive errors when compiled with Clang's context analysis.
 */

+#include <linux/bit_spinlock.h>
 #include
 #include
 #include
@@ -258,3 +259,28 @@ static void __used test_seqlock_scoped(struct test_seqlock_data *d)
		(void)d->counter;
	}
}
+
+struct test_bit_spinlock_data {
+	unsigned long bits;
+	int counter __guarded_by(__bitlock(3, &bits));
+};
+
+static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
+{
+	/*
+	 * Note, the analysis seems to have false negatives, because it won't
+	 * precisely recognize the bit of the fake __bitlock() token.
+	 */
+	bit_spin_lock(3, &d->bits);
+	d->counter++;
+	bit_spin_unlock(3, &d->bits);
+
+	bit_spin_lock(3, &d->bits);
+	d->counter++;
+	__bit_spin_unlock(3, &d->bits);
+
+	if (bit_spin_trylock(3, &d->bits)) {
+		d->counter++;
+		bit_spin_unlock(3, &d->bits);
+	}
+}
-- 
2.52.0.322.g1dd061c0dc-goog
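For illustration (not part of the patch): because __bitlock() folds both the
bit number and the address into the token, the analysis can tell apart locks
that live in the same word. A hedged sketch with hypothetical names:

	struct two_locks {
		unsigned long bits;
		int a __guarded_by(__bitlock(0, &bits));
		int b __guarded_by(__bitlock(1, &bits));
	};

	static void update_a(struct two_locks *t)
	{
		bit_spin_lock(0, &t->bits);
		t->a++;			/* ok: __bitlock(0, &bits) held */
		bit_spin_unlock(0, &t->bits);
	}

Holding __bitlock(0, &bits) does not grant access to 'b', which is guarded by
the distinct token __bitlock(1, &bits).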
From nobody Sat Feb 7 12:29:45 2026
From: Marco Elver <elver@google.com>
Date: Fri, 19 Dec 2025 16:40:03 +0100
Message-ID: <20251219154418.3592607-15-elver@google.com>
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Subject: [PATCH v5 14/36] rcu: Support Clang's context analysis

Improve the existing annotations to properly support Clang's context
analysis. The old annotations distinguished between RCU, RCU_BH, and
RCU_SCHED; however, to more easily be able to express that "hold the
RCU read lock" without caring if the normal, _bh(), or _sched() variant
was used, we'd have to remove the distinction of the latter variants:
change the _bh() and _sched() variants to also acquire "RCU".

When (and if) we introduce context locks to denote more generally that
"IRQ", "BH", "PREEMPT" contexts are disabled, it would make sense to
acquire these instead of RCU_BH and RCU_SCHED respectively.

The above change also simplified introducing __guarded_by support,
where only the "RCU" context lock needs to be held: introduce
__rcu_guarded, where Clang's context analysis warns if a pointer is
dereferenced without any of the RCU locks held, or updated without the
appropriate helpers.

The primitives rcu_assign_pointer() and friends are wrapped with
context_unsafe(), which enforces using them to update RCU-protected
pointers marked with __rcu_guarded.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
---
v5:
 * Rename "context guard" -> "context lock".

v3:
 * Properly support reentrancy via new compiler support.

v2:
 * Reword commit message and point out reentrancy caveat.
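For illustration (not part of the patch), the kind of usage __rcu_guarded
enables; function and struct names below are hypothetical:

	struct foo {
		long __rcu_guarded *data;	/* must hold RCU to dereference */
	};

	static long read_foo(struct foo *f)
	{
		long v;

		rcu_read_lock();
		v = *rcu_dereference(f->data);	/* ok: RCU held (assumes non-NULL) */
		rcu_read_unlock();
		return v;
	}

Dereferencing f->data outside an RCU read-side critical section, or assigning
it without rcu_assign_pointer()/RCU_INIT_POINTER(), would be flagged by the
analysis.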
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/rcupdate.h                     | 77 ++++++++++++------
 lib/test_context-analysis.c                  | 85 ++++++++++++++++++++
 3 files changed, 139 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index b2d69fb4a884..3bc72f71fe25 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives

 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`.
+`bit_spinlock`, RCU.

 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index c5b30054cd01..50e63eade019 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -31,6 +31,16 @@
 #include
 #include

+token_context_lock(RCU, __reentrant_ctx_lock);
+token_context_lock_instance(RCU, RCU_SCHED);
+token_context_lock_instance(RCU, RCU_BH);
+
+/*
+ * A convenience macro that can be used for RCU-protected globals or struct
+ * members; adds type qualifier __rcu, and also enforces __guarded_by(RCU).
+ */
+#define __rcu_guarded __rcu __guarded_by(RCU)
+
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

@@ -425,7 +435,8 @@ static inline void rcu_preempt_sleep_check(void) { }

 // See RCU_LOCKDEP_WARN() for an explanation of the double call to
 // debug_lockdep_rcu_enabled().
-static inline bool lockdep_assert_rcu_helper(bool c)
+static inline bool lockdep_assert_rcu_helper(bool c, const struct __ctx_lock_RCU *ctx)
+	__assumes_shared_ctx_lock(RCU) __assumes_shared_ctx_lock(ctx)
 {
	return debug_lockdep_rcu_enabled() &&
	       (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
@@ -438,7 +449,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * Splats if lockdep is enabled and there is no rcu_read_lock() in effect.
 */
 #define lockdep_assert_in_rcu_read_lock() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU))

 /**
 * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
@@ -448,7 +459,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * actual rcu_read_lock_bh() is required.
 */
 #define lockdep_assert_in_rcu_read_lock_bh() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), RCU_BH))

 /**
 * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched()
@@ -458,7 +469,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * instead an actual rcu_read_lock_sched() is required.
 */
 #define lockdep_assert_in_rcu_read_lock_sched() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map), RCU_SCHED))

 /**
 * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
@@ -476,17 +487,17 @@ static inline bool lockdep_assert_rcu_helper(bool c)
	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) &&	\
					       !lock_is_held(&rcu_bh_lock_map) && \
					       !lock_is_held(&rcu_sched_lock_map) && \
-					       preemptible()))
+					       preemptible(), RCU))

 #else /* #ifdef CONFIG_PROVE_RCU */

 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
 #define rcu_sleep_check() do { } while (0)

-#define lockdep_assert_in_rcu_read_lock() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_bh() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_sched() do { } while (0)
-#define lockdep_assert_in_rcu_reader() do { } while (0)
+#define lockdep_assert_in_rcu_read_lock() __assume_shared_ctx_lock(RCU)
+#define lockdep_assert_in_rcu_read_lock_bh() __assume_shared_ctx_lock(RCU_BH)
+#define lockdep_assert_in_rcu_read_lock_sched() __assume_shared_ctx_lock(RCU_SCHED)
+#define lockdep_assert_in_rcu_reader() __assume_shared_ctx_lock(RCU)

 #endif /* #else #ifdef CONFIG_PROVE_RCU */

@@ -506,11 +517,11 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 #endif /* #else #ifdef __CHECKER__ */

 #define __unrcu_pointer(p, local)					\
-({									\
+context_unsafe(							\
	typeof(*p) *local = (typeof(*p) *__force)(p);			\
	rcu_check_sparse(p, __rcu);					\
-	((typeof(*p) __force __kernel *)(local));			\
-})
+	((typeof(*p) __force __kernel *)(local))			\
+)
 /**
 * unrcu_pointer - mark a pointer as not being RCU protected
 * @p: pointer needing to lose its __rcu property
@@ -586,7 +597,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 * other macros that it invokes.
 */
 #define rcu_assign_pointer(p, v)					\
-do {									\
+context_unsafe(							\
	uintptr_t _r_a_p__v = (uintptr_t)(v);				\
	rcu_check_sparse(p, __rcu);					\
									\
@@ -594,7 +605,7 @@ do { \
		WRITE_ONCE((p), (typeof(p))(_r_a_p__v));		\
	else								\
		smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-} while (0)
+)

 /**
 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
@@ -861,9 +872,10 @@ do { \
 * only when acquiring spinlocks that are subject to priority inheritance.
 */
static __always_inline void rcu_read_lock(void)
+	__acquires_shared(RCU)
{
	__rcu_read_lock();
-	__acquire(RCU);
+	__acquire_shared(RCU);
	rcu_lock_acquire(&rcu_lock_map);
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock() used illegally while idle");
@@ -891,11 +903,12 @@ static __always_inline void rcu_read_lock(void)
 * See rcu_read_lock() for more information.
 */
static inline void rcu_read_unlock(void)
+	__releases_shared(RCU)
{
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock() used illegally while idle");
	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
-	__release(RCU);
+	__release_shared(RCU);
	__rcu_read_unlock();
}

@@ -914,9 +927,11 @@ static inline void rcu_read_unlock(void)
 * was invoked from some other task.
 */
static inline void rcu_read_lock_bh(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_BH)
{
	local_bh_disable();
-	__acquire(RCU_BH);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_BH);
	rcu_lock_acquire(&rcu_bh_lock_map);
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_bh() used illegally while idle");
@@ -928,11 +943,13 @@ static inline void rcu_read_lock_bh(void)
 * See rcu_read_lock_bh() for more information.
 */
static inline void rcu_read_unlock_bh(void)
+	__releases_shared(RCU) __releases_shared(RCU_BH)
{
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock_bh() used illegally while idle");
	rcu_lock_release(&rcu_bh_lock_map);
-	__release(RCU_BH);
+	__release_shared(RCU_BH);
+	__release_shared(RCU);
	local_bh_enable();
}

@@ -952,9 +969,11 @@ static inline void rcu_read_unlock_bh(void)
 * rcu_read_lock_sched() was invoked from an NMI handler.
 */
static inline void rcu_read_lock_sched(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
{
	preempt_disable();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
	rcu_lock_acquire(&rcu_sched_lock_map);
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_sched() used illegally while idle");
@@ -962,9 +981,11 @@ static inline void rcu_read_lock_sched(void)

/* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
static inline notrace void rcu_read_lock_sched_notrace(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
{
	preempt_disable_notrace();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
}

/**
@@ -973,22 +994,27 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
 * See rcu_read_lock_sched() for more information.
 */
static inline void rcu_read_unlock_sched(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
{
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_unlock_sched() used illegally while idle");
	rcu_lock_release(&rcu_sched_lock_map);
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
	preempt_enable();
}

/* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
static inline notrace void rcu_read_unlock_sched_notrace(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
{
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
	preempt_enable_notrace();
}

static __always_inline void rcu_read_lock_dont_migrate(void)
+	__acquires_shared(RCU)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
		migrate_disable();
@@ -996,6 +1022,7 @@ static __always_inline void rcu_read_lock_dont_migrate(void)
}

static inline void rcu_read_unlock_migrate(void)
+	__releases_shared(RCU)
{
	rcu_read_unlock();
	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
@@ -1041,10 +1068,10 @@ static inline void rcu_read_unlock_migrate(void)
 * ordering guarantees for either the CPU or the compiler.
 */
 #define RCU_INIT_POINTER(p, v)						\
-	do {								\
+	context_unsafe(						\
		rcu_check_sparse(p, __rcu);				\
		WRITE_ONCE(p, RCU_INITIALIZER(v));			\
-	} while (0)
+	)

 /**
 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
@@ -1206,4 +1233,6 @@ DEFINE_LOCK_GUARD_0(rcu,
	} while (0),
	rcu_read_unlock())

+DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU))
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index be0c5d462a48..559df32fb5f8 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <linux/rcupdate.h>
 #include
 #include

 /*
@@ -284,3 +285,87 @@ static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
		bit_spin_unlock(3, &d->bits);
	}
}
+
+/*
+ * Test that we can mark a variable guarded by RCU, and we can dereference and
+ * write to the pointer with RCU's primitives.
+ */
+struct test_rcu_data {
+	long __rcu_guarded *data;
+};
+
+static void __used test_rcu_guarded_reader(struct test_rcu_data *d)
+{
+	rcu_read_lock();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_guard(struct test_rcu_data *d)
+{
+	guard(rcu)();
+	(void)rcu_dereference(d->data);
+}
+
+static void __used test_rcu_guarded_updater(struct test_rcu_data *d)
+{
+	rcu_assign_pointer(d->data, NULL);
+	RCU_INIT_POINTER(d->data, NULL);
+	(void)unrcu_pointer(d->data);
+}
+
+static void wants_rcu_held(void) __must_hold_shared(RCU) { }
+static void wants_rcu_held_bh(void) __must_hold_shared(RCU_BH) { }
+static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { }
+
+static void __used test_rcu_lock_variants(void)
+{
+	rcu_read_lock();
+	wants_rcu_held();
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	wants_rcu_held_bh();
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	wants_rcu_held_sched();
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_lock_reentrant(void)
+{
+	rcu_read_lock();
+	rcu_read_lock();
+	rcu_read_lock_bh();
+	rcu_read_lock_bh();
+	rcu_read_lock_sched();
+	rcu_read_lock_sched();
+
+	rcu_read_unlock_sched();
+	rcu_read_unlock_sched();
+	rcu_read_unlock_bh();
+	rcu_read_unlock_bh();
+	rcu_read_unlock();
+	rcu_read_unlock();
+}
+
+static void __used test_rcu_assert_variants(void)
+{
+	lockdep_assert_in_rcu_read_lock();
+	wants_rcu_held();
+
+	lockdep_assert_in_rcu_read_lock_bh();
+	wants_rcu_held_bh();
+
+	lockdep_assert_in_rcu_read_lock_sched();
+	wants_rcu_held_sched();
+}
-- 
2.52.0.322.g1dd061c0dc-goog
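For illustration (not part of the patch): with __rcu_guarded in place, plain
stores to such pointers are rejected, and the context_unsafe()-wrapped helpers
are the sanctioned way to update them. A hedged sketch reusing the patch's
test_rcu_data type; 'newp' is hypothetical:

	static void update(struct test_rcu_data *d, long *newp)
	{
		/* d->data = newp;  <-- would warn: bypasses the RCU helpers */
		rcu_assign_pointer(d->data, newp);	/* ok */
	}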
From nobody Sat Feb 7 12:29:45 2026
From: Marco Elver <elver@google.com>
Date: Fri, 19 Dec 2025 16:40:04 +0100
Message-ID: <20251219154418.3592607-16-elver@google.com>
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
Subject: [PATCH v5 15/36] srcu: Support Clang's context analysis
Add support for Clang's context analysis for SRCU.

Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
---
v5:
 * Fix up annotation for recently added SRCU interfaces.
 * Rename "context guard" -> "context lock".
 * Use new cleanup.h helpers to properly support scoped lock guards.

v4:
 * Rename capability -> context analysis.

v3:
 * Switch to DECLARE_LOCK_GUARD_1_ATTRS() (suggested by Peter)
 * Support SRCU being reentrant.
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/srcu.h                         | 73 ++++++++++++++------
 include/linux/srcutiny.h                     |  6 ++
 include/linux/srcutree.h                     | 10 ++-
 lib/test_context-analysis.c                  | 25 +++++++
 5 files changed, 91 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 3bc72f71fe25..f7736f1c0767 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives

 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`).

 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 344ad51c8f6c..bb44a0bd7696 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -21,7 +21,7 @@
 #include
 #include

-struct srcu_struct;
+context_lock_struct(srcu_struct, __reentrant_ctx_lock);

 #ifdef CONFIG_DEBUG_LOCK_ALLOC

@@ -77,7 +77,7 @@ int init_srcu_struct_fast_updown(struct srcu_struct *ssp);
 #define SRCU_READ_FLAVOR_SLOWGP	(SRCU_READ_FLAVOR_FAST | SRCU_READ_FLAVOR_FAST_UPDOWN)
					// Flavors requiring synchronize_rcu()
					// instead of smp_mb().
-void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
+void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp);

 #ifdef CONFIG_TINY_SRCU
 #include <linux/srcutiny.h>
@@ -131,14 +131,16 @@ static inline bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned
}

 #ifdef CONFIG_NEED_SRCU_NMI_SAFE
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #else
 static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	return __srcu_read_lock(ssp);
 }
 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+	__releases_shared(ssp)
 {
	__srcu_read_unlock(ssp, idx);
 }
@@ -210,6 +212,14 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)

 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */

+/*
+ * No-op helper to denote that ssp must be held. Because SRCU-protected pointers
+ * should still be marked with __rcu_guarded, and we do not want to mark them
+ * with __guarded_by(ssp) as it would complicate annotations for writers, we
+ * choose the following strategy: srcu_dereference_check() calls this helper
+ * that checks that the passed ssp is held, and then fake-acquires 'RCU'.
+ */
+static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { }

 /**
 * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
@@ -223,9 +233,15 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 * to 1.  The @c argument will normally be a logical expression containing
 * lockdep_is_held() calls.
 */
-#define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
-				(c) || srcu_read_lock_held(ssp), __rcu)
+#define srcu_dereference_check(p, ssp, c)				\
+({									\
+	__srcu_read_lock_must_hold(ssp);				\
+	__acquire_shared_ctx_lock(RCU);					\
+	__auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || srcu_read_lock_held(ssp), __rcu); \
+	__release_shared_ctx_lock(RCU);					\
+	__v;								\
+})

 /**
 * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
@@ -268,7 +284,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 * invoke srcu_read_unlock() from one task and the matching srcu_read_lock()
 * from another.
 */
-static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	int retval;

@@ -304,7 +321,8 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 * contexts where RCU is watching, that is, from contexts where it would
 * be legal to invoke rcu_read_lock().  Otherwise, lockdep will complain.
 */
-static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp)
+static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
	struct srcu_ctr __percpu *retval;

@@ -344,7 +362,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *
 * complain.
 */
static inline struct srcu_ctr __percpu *srcu_read_lock_fast_updown(struct srcu_struct *ssp)
-__acquires(ssp)
+	__acquires_shared(ssp)
{
	struct srcu_ctr __percpu *retval;

@@ -360,7 +378,7 @@ __acquires(ssp)
 * See srcu_read_lock_fast() for more information.
 */
static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_struct *ssp)
-	__acquires(ssp)
+	__acquires_shared(ssp)
{
	struct srcu_ctr __percpu *retval;

@@ -381,7 +399,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_
 * and srcu_read_lock_fast().  However, the same definition/initialization
 * requirements called out for srcu_read_lock_safe() apply.
 */
-static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires(ssp)
+static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires_shared(ssp)
{
	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
	RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_down_read_fast().");
@@ -400,7 +418,8 @@ static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *
 * then none of the other flavors may be used, whether before, during,
 * or after.
 */
-static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	int retval;

@@ -412,7 +431,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp

/* Used by tracing, cannot be traced and cannot invoke lockdep. */
static inline notrace int
-srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
+srcu_read_lock_notrace(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	int retval;

@@ -443,7 +463,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
 * which calls to down_read() may be nested.  The same srcu_struct may be
 * used concurrently by srcu_down_read() and srcu_read_lock().
 */
-static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_down_read(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	WARN_ON_ONCE(in_nmi());
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -458,7 +479,7 @@ static inline int srcu_down_read(struct srcu_struct *ssp)
 * Exit an SRCU read-side critical section.
 */
static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
{
	WARN_ON_ONCE(idx & ~0x1);
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -474,7 +495,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 * Exit a light-weight SRCU read-side critical section.
 */
static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
-	__releases(ssp)
+	__releases_shared(ssp)
{
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
	srcu_lock_release(&ssp->dep_map);
@@ -490,7 +511,7 @@ static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ct
 * Exit an SRCU-fast-updown read-side critical section.
 */
static inline void
-srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases(ssp)
+srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases_shared(ssp)
{
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
	srcu_lock_release(&ssp->dep_map);
@@ -504,7 +525,7 @@ srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *
 * See srcu_read_unlock_fast() for more information.
 */
static inline void srcu_read_unlock_fast_notrace(struct srcu_struct *ssp,
-		struct srcu_ctr __percpu *scp) __releases(ssp)
+		struct srcu_ctr __percpu *scp) __releases_shared(ssp)
{
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
	__srcu_read_unlock_fast(ssp, scp);
@@ -519,7 +540,7 @@ static inline void srcu_read_unlock_fast_notrace(struct srcu_struct *ssp,
 * the same context as the maching srcu_down_read_fast().
 */
static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
-	__releases(ssp)
+	__releases_shared(ssp)
{
	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
@@ -535,7 +556,7 @@ static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __
 * Exit an SRCU read-side critical section, but in an NMI-safe manner.
 */
static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
{
	WARN_ON_ONCE(idx & ~0x1);
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI);
@@ -545,7 +566,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)

/* Used by tracing, cannot be traced and cannot call lockdep. */
static inline notrace void
-srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
+srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp)
{
	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
	__srcu_read_unlock(ssp, idx);
@@ -560,7 +581,7 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 * the same context as the maching srcu_down_read().
 */
static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
{
	WARN_ON_ONCE(idx & ~0x1);
	WARN_ON_ONCE(in_nmi());
@@ -600,15 +621,21 @@ DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
		    _T->idx = srcu_read_lock(_T->lock),
		    srcu_read_unlock(_T->lock, _T->idx),
		    int idx)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T))
+#define class_srcu_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu, _T)

DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct,
		    _T->scp = srcu_read_lock_fast(_T->lock),
		    srcu_read_unlock_fast(_T->lock, _T->scp),
		    struct srcu_ctr __percpu *scp)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu_fast, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T))
+#define class_srcu_fast_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu_fast, _T)

DEFINE_LOCK_GUARD_1(srcu_fast_notrace, struct srcu_struct,
		    _T->scp = srcu_read_lock_fast_notrace(_T->lock),
		    srcu_read_unlock_fast_notrace(_T->lock, _T->scp),
		    struct srcu_ctr __percpu *scp)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu_fast_notrace, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T))
+#define class_srcu_fast_notrace_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu_fast_notrace, _T)

#endif
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index e0698024667a..dec7cbe015aa 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -73,6 +73,7 @@ void synchronize_srcu(struct srcu_struct *ssp);
 * index that must be passed to the matching srcu_read_unlock().
 */
static inline int __srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	int idx;

@@ -80,6 +81,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
	WRITE_ONCE(ssp->srcu_lock_nesting[idx], READ_ONCE(ssp->srcu_lock_nesting[idx]) + 1);
	preempt_enable();
+	__acquire_shared(ssp);
	return idx;
}

@@ -96,22 +98,26 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
}

static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
}

static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
{
	__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
}

static inline struct srcu_ctr __percpu *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
}

static inline void
__srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
{
	__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
}
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index d6f978b50472..958cb7ef41cb 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -233,7 +233,7 @@ struct srcu_struct {
 #define DEFINE_STATIC_SRCU_FAST_UPDOWN(name) \
	__DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST_UPDOWN, static)

-int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
+int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp);
 void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_expedite_current(struct srcu_struct *ssp);
@@ -286,6 +286,7 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
 * implementations of this_cpu_inc().
 */
static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);

@@ -294,6 +295,7 @@ static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct src
	else
		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  // Y, and implicit RCU reader.
	barrier();  /* Avoid leaking the critical section. */
+	__acquire_shared(ssp);
	return scp;
}

@@ -308,7 +310,9 @@ static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct src
 */
static inline void notrace
__srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
{
+	__release_shared(ssp);
	barrier();  /* Avoid leaking the critical section. */
	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
		this_cpu_inc(scp->srcu_unlocks.counter);  // Z, and implicit RCU reader.
@@ -326,6 +330,7 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
 */
static inline
struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
{
	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);

@@ -334,6 +339,7 @@ struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struc
	else
		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  // Y, and implicit RCU reader.
	barrier();  /* Avoid leaking the critical section. */
+	__acquire_shared(ssp);
	return scp;
}

@@ -348,7 +354,9 @@ struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struc
 */
static inline void notrace
__srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
{
+	__release_shared(ssp);
	barrier();  /* Avoid leaking the critical section. */
	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
		this_cpu_inc(scp->srcu_unlocks.counter);  // Z, and implicit RCU reader.
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 559df32fb5f8..39e03790c0f6 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/srcu.h>

 /*
  * Test that helper macros work as expected.
@@ -369,3 +370,27 @@ static void __used test_rcu_assert_variants(void) lockdep_assert_in_rcu_read_lock_sched(); wants_rcu_held_sched(); } + +struct test_srcu_data { + struct srcu_struct srcu; + long __rcu_guarded *data; +}; + +static void __used test_srcu(struct test_srcu_data *d) +{ + init_srcu_struct(&d->srcu); + + int idx =3D srcu_read_lock(&d->srcu); + long *data =3D srcu_dereference(d->data, &d->srcu); + (void)data; + srcu_read_unlock(&d->srcu, idx); + + rcu_assign_pointer(d->data, NULL); +} + +static void __used test_srcu_guard(struct test_srcu_data *d) +{ + { guard(srcu)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } + { guard(srcu_fast)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } + { guard(srcu_fast_notrace)(&d->srcu); (void)srcu_dereference(d->data, &d-= >srcu); } +} --=20 2.52.0.322.g1dd061c0dc-goog From nobody Sat Feb 7 12:29:45 2026 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2A5D93491DE for ; Fri, 19 Dec 2025 15:46:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766159192; cv=none; b=FjufyXjVx9jAtPN0zlGlVZBDAnLqJf9o4a08JOW/oplZyJf8Yd252I9l/jFGgdWX/CTEQEPiHFRMm+ytc8uHb2yovNlZ/a0Urgig3xQJxbPv9uIhnwQgPnrOmEwjARNYCipjz1lL6wn7SgpFpYLtVInK2da2eFjrrQL+s/TtZ8c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766159192; c=relaxed/simple; bh=+TxyiziKtAssraZX/W9BG58Rc5SRTSVNFHhEJgAI80M=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Yc0rK0deRYlRZjfl77B3nN9/6OBm8pFg0C7Zp/w+/V55Nn5A7QAobRgegPG3mDNh4EZcqWYsQ1A4J3au7RyUKVkP/CB38I9iK/ENJsLQAu40HLF81IdiOm4a9So/t8V/rCQx5vKTRsFo+IEjswARSN8sqCdpKNOVwBGrzZq9yww= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--elver.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Btmm9Ian; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--elver.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Btmm9Ian" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-477cf25ceccso19115785e9.0 for ; Fri, 19 Dec 2025 07:46:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1766159189; x=1766763989; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=JNejWFdtMyTjodgPbCP3MYwy/kg9I0zeQniFTuJdWnY=; b=Btmm9Ian9vkW2R1CmBw+4UEGFIcU19VTH1if1ZDzBv6i7+qkwK8FB/mchliyLeX7ea Lv3TTAaSEl6CyhgCR+GVo+O385fSiZ8SbrQyzBZqEUR/rpjDaMx/DKaQMGqWRGMwYmbI su6Lg1c7UP1tDmGTlM0gi4ZPnBCuLD2W78C8eggo3adVNqKxuLE9wxwaDVyiBk4InD+D PrPKasVfuRx8a41HcrByYZw3oCLNoOkUIIxynFHot93y8oFhIBAnXnNbxjmB4ldktlKr eQz8DC14OOMXKZswgtZN8/umOcDiIYb/xDa6S8yC6KraIgzXaB9Dq52KnlM6o/q4Igwo TKZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1766159189; x=1766763989; 
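[Editorial note, not part of the patch: a minimal reader sketch showing what
the shared-acquire annotations above express. struct my_cfg and read_cfg are
hypothetical names, and the sketch assumes the guarded pointer is never NULL.]

	struct my_cfg {
		struct srcu_struct srcu;
		long __rcu_guarded *val;
	};

	static long read_cfg(struct my_cfg *c)
	{
		int idx = srcu_read_lock(&c->srcu);	/* shared acquire of c->srcu */
		long v = *srcu_dereference(c->val, &c->srcu);

		srcu_read_unlock(&c->srcu, idx);	/* leaving this out is now flagged at compile time */
		return v;
	}

The scoped form, guard(srcu)(&c->srcu), releases automatically at end of scope.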
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:05 +0100
Message-ID: <20251219154418.3592607-17-elver@google.com>
Subject: [PATCH v5 16/36] kref: Add context-analysis annotations
From: Marco Elver

Mark functions that conditionally acquire the passed lock.
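[Editorial note, not part of the patch: with __cond_acquires(true, mutex), the
analysis accepts an unlock only on the branch where the function returned
nonzero. my_release and my_put are hypothetical, and this sketch assumes the
release callback does not itself drop the mutex.]

	static void my_release(struct kref *kref)
	{
		/* free the containing object */
	}

	static int my_put(struct kref *kref, struct mutex *lock)
	{
		if (kref_put_mutex(kref, my_release, lock)) {
			/* Refcount hit zero: the analysis knows 'lock' is held here. */
			mutex_unlock(lock);
			return 1;
		}
		/* Lock was never taken on this path; a mutex_unlock() here would warn. */
		return 0;
	}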
Signed-off-by: Marco Elver
Reviewed-by: Bart Van Assche
---
 include/linux/kref.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index 88e82ab1367c..9bc6abe57572 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -81,6 +81,7 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
 static inline int kref_put_mutex(struct kref *kref,
				 void (*release)(struct kref *kref),
				 struct mutex *mutex)
+	__cond_acquires(true, mutex)
 {
	if (refcount_dec_and_mutex_lock(&kref->refcount, mutex)) {
		release(kref);
@@ -102,6 +103,7 @@ static inline int kref_put_mutex(struct kref *kref,
 static inline int kref_put_lock(struct kref *kref,
				void (*release)(struct kref *kref),
				spinlock_t *lock)
+	__cond_acquires(true, lock)
 {
	if (refcount_dec_and_lock(&kref->refcount, lock)) {
		release(kref);
-- 
2.52.0.322.g1dd061c0dc-goog
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:06 +0100
Message-ID: <20251219154418.3592607-18-elver@google.com>
Subject: [PATCH v5 17/36] locking/rwsem: Support Clang's context analysis
From: Marco Elver

Add support for Clang's context analysis for rw_semaphore.

Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".
 * Use new cleanup.h helpers to properly support scoped lock guards.

v4:
 * Rename capability -> context analysis.
v3:
 * Switch to DECLARE_LOCK_GUARD_1_ATTRS() (suggested by Peter)
 * __assert -> __assume rename
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/rwsem.h                        | 76 +++++++++++++-------
 lib/test_context-analysis.c                  | 64 +++++++++++++++++
 3 files changed, 114 insertions(+), 28 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index f7736f1c0767..7b660c3003a0 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`).
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.
 
 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index f1aaf676a874..8da14a08a4e1 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -45,7 +45,7 @@
  * reduce the chance that they will share the same cacheline causing
  * cacheline bouncing problem.
  */
-struct rw_semaphore {
+context_lock_struct(rw_semaphore) {
	atomic_long_t count;
	/*
	 * Write owner or one of the read owners as well flags regarding
@@ -76,11 +76,13 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
 }
 
 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
 }
@@ -119,6 +121,7 @@ do { \
	static struct lock_class_key __key; \
 \
	__init_rwsem((sem), #sem, &__key); \
+	__assume_ctx_lock(sem); \
 } while (0)
 
 /*
@@ -148,7 +151,7 @@ extern bool is_rwsem_reader_owned(struct rw_semaphore *sem);
 
 #include
 
-struct rw_semaphore {
+context_lock_struct(rw_semaphore) {
	struct rwbase_rt rwbase;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map dep_map;
@@ -172,6 +175,7 @@ do { \
	static struct lock_class_key __key; \
 \
	__init_rwsem((sem), #sem, &__key); \
+	__assume_ctx_lock(sem); \
 } while (0)
 
 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
@@ -180,11 +184,13 @@ static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
 }
 
 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
	WARN_ON(!rwsem_is_locked(sem));
 }
 
 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
	WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
 }
@@ -202,6 +208,7 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
 */
 
 static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
	if (IS_ENABLED(CONFIG_LOCKDEP))
		lockdep_assert_held(sem);
@@ -210,6 +217,7 @@ static inline void rwsem_assert_held(const struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
	if (IS_ENABLED(CONFIG_LOCKDEP))
		lockdep_assert_held_write(sem);
@@ -220,48 +228,62 @@ static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-extern void down_read(struct rw_semaphore *sem);
-extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
-extern int __must_check down_read_killable(struct rw_semaphore *sem);
+extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem);
+extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
+extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
 
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */
-extern int down_read_trylock(struct rw_semaphore *sem);
+extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(true, sem);
 
 /*
  * lock for writing
  */
-extern void down_write(struct rw_semaphore *sem);
-extern int __must_check down_write_killable(struct rw_semaphore *sem);
+extern void down_write(struct rw_semaphore *sem) __acquires(sem);
+extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem);
 
 /*
  * trylock for writing -- returns 1 if successful, 0 if contention
  */
-extern int down_write_trylock(struct rw_semaphore *sem);
+extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(true, sem);
 
 /*
  * release a read lock
  */
-extern void up_read(struct rw_semaphore *sem);
+extern void up_read(struct rw_semaphore *sem) __releases_shared(sem);
 
 /*
  * release a write lock
  */
-extern void up_write(struct rw_semaphore *sem);
-
-DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T))
-DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T))
-DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T), _RET == 0)
-
-DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T))
-DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T))
-DEFINE_GUARD_COND(rwsem_write, _kill, down_write_killable(_T), _RET == 0)
+extern void up_write(struct rw_semaphore *sem) __releases(sem);
+
+DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock), _RET == 0)
+
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T))
+#define class_rwsem_read_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_read, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read_try, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T))
+#define class_rwsem_read_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_read_try, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read_intr, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T))
+#define class_rwsem_read_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_read_intr, _T)
+
+DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _kill, down_write_killable(_T->lock), _RET == 0)
+
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_write_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_try, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_write_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_try, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_kill, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_write_kill_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_kill, _T)
 
 /*
  * downgrade write lock to read lock
  */
-extern void downgrade_write(struct rw_semaphore *sem);
+extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 /*
@@ -277,11 +299,11 @@ extern void downgrade_write(struct rw_semaphore *sem);
  * lockdep_set_class() at lock initialization time.
  * See Documentation/locking/lockdep-design.rst for more details.)
  */
-extern void down_read_nested(struct rw_semaphore *sem, int subclass);
-extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void down_write_nested(struct rw_semaphore *sem, int subclass);
-extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
+extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem);
+extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem);
+extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem);
+extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem);
+extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem);
 
 # define down_write_nest_lock(sem, nest_lock) \
 do { \
@@ -295,8 +317,8 @@ do { \
  * [ This API should be avoided as much as possible - the
  *   proper abstraction for this case is completions. ]
  */
-extern void down_read_non_owner(struct rw_semaphore *sem);
-extern void up_read_non_owner(struct rw_semaphore *sem);
+extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem);
+extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem);
 #else
 # define down_read_nested(sem, subclass)		down_read(sem)
 # define down_read_killable_nested(sem, subclass)	down_read_killable(sem)
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 39e03790c0f6..1c96c56cf873 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -262,6 +263,69 @@ static void __used test_seqlock_scoped(struct test_seqlock_data *d)
	}
 }
 
+struct test_rwsem_data {
+	struct rw_semaphore sem;
+	int counter __guarded_by(&sem);
+};
+
+static void __used test_rwsem_init(struct test_rwsem_data *d)
+{
+	init_rwsem(&d->sem);
+	d->counter = 0;
+}
+
+static void __used test_rwsem_reader(struct test_rwsem_data *d)
+{
+	down_read(&d->sem);
+	(void)d->counter;
+	up_read(&d->sem);
+
+	if (down_read_trylock(&d->sem)) {
+		(void)d->counter;
+		up_read(&d->sem);
+	}
+}
+
+static void __used test_rwsem_writer(struct test_rwsem_data *d)
+{
+	down_write(&d->sem);
+	d->counter++;
+	up_write(&d->sem);
+
+	down_write(&d->sem);
+	d->counter++;
+	downgrade_write(&d->sem);
+	(void)d->counter;
+	up_read(&d->sem);
+
+	if (down_write_trylock(&d->sem)) {
+		d->counter++;
+		up_write(&d->sem);
+	}
+}
+
+static void __used test_rwsem_assert(struct test_rwsem_data *d)
+{
+	rwsem_assert_held_nolockdep(&d->sem);
+	d->counter++;
+}
+
+static void __used test_rwsem_guard(struct test_rwsem_data *d)
+{
+	{ guard(rwsem_read)(&d->sem); (void)d->counter; }
+	{ guard(rwsem_write)(&d->sem); d->counter++; }
+}
+
+static void __used test_rwsem_cond_guard(struct test_rwsem_data *d)
+{
+	scoped_cond_guard(rwsem_read_try, return, &d->sem) {
+		(void)d->counter;
+	}
+	scoped_cond_guard(rwsem_write_try, return, &d->sem) {
+		d->counter++;
+	}
+}
+
 struct test_bit_spinlock_data {
	unsigned long bits;
	int counter __guarded_by(__bitlock(3, &bits));
-- 
2.52.0.322.g1dd061c0dc-goog
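[Editorial note, not part of the patch: a sketch of what the annotations catch
for rw_semaphore. The struct and function names are hypothetical, and the
exact diagnostic wording is the compiler's, paraphrased here.]

	struct inode_stats {
		struct rw_semaphore sem;
		u64 nr_dirty __guarded_by(&sem);
	};

	static void bad_update(struct inode_stats *s)
	{
		s->nr_dirty++;		/* warning: 'sem' must be held exclusively */
	}

	static u64 good_read(struct inode_stats *s)
	{
		u64 v;

		down_read(&s->sem);	/* shared acquire suffices for reads */
		v = s->nr_dirty;
		up_read(&s->sem);
		return v;
	}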
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:07 +0100
Message-ID: <20251219154418.3592607-19-elver@google.com>
Subject: [PATCH v5 18/36] locking/local_lock: Include missing headers
From: Marco Elver

Including <linux/local_lock.h> into an empty TU will result in the compiler
complaining:

  ./include/linux/local_lock.h: In function 'class_local_lock_irqsave_constructor':
  ./include/linux/local_lock_internal.h:95:17: error: implicit declaration of function 'local_irq_save'; <...>
     95 |         local_irq_save(flags); \
        |         ^~~~~~~~~~~~~~

As well as (some architectures only, such as 'sh'):

  ./include/linux/local_lock_internal.h: In function 'local_lock_acquire':
  ./include/linux/local_lock_internal.h:33:20: error: 'current' undeclared (first use in this function)
     33 |         l->owner = current;

Include missing headers to allow including local_lock.h where the required
headers are not otherwise included.
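[Editorial note, not part of the patch: the failure mode is reproducible with
a translation unit containing nothing but the include, illustrated below.]

	/* empty-tu.c: before this patch, this alone failed to build on some
	 * architectures, because local_lock_internal.h used local_irq_save()
	 * and 'current' without pulling in the headers that declare them. */
	#include <linux/local_lock.h>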
Signed-off-by: Marco Elver
Reviewed-by: Bart Van Assche
---
 include/linux/local_lock_internal.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 8f82b4eb542f..1a1ea1232add 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -4,7 +4,9 @@
 #endif
 
 #include
+#include
 #include
+#include
 
 #ifndef CONFIG_PREEMPT_RT
 
-- 
2.52.0.322.g1dd061c0dc-goog
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:08 +0100
Message-ID: <20251219154418.3592607-20-elver@google.com>
Subject: [PATCH v5 19/36] locking/local_lock: Support Clang's context analysis
From: Marco Elver

Add support for Clang's context analysis for local_lock_t and
local_trylock_t.

Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".
 * Use new cleanup.h helpers to properly support scoped lock guards.

v4:
 * Rename capability -> context analysis.
v3:
 * Switch to DECLARE_LOCK_GUARD_1_ATTRS() (suggested by Peter)
 * __assert -> __assume rename
 * Rework __this_cpu_local_lock helper
 * Support local_trylock_t
---
 Documentation/dev-tools/context-analysis.rst |  2 +-
 include/linux/local_lock.h                   | 51 ++++++------
 include/linux/local_lock_internal.h          | 71 +++++++++++++----
 lib/test_context-analysis.c                  | 73 ++++++++++++++++++++
 4 files changed, 161 insertions(+), 36 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index 7b660c3003a0..a48b75f45e79 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`.
 
 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index b0e6ab329b00..99c06e499375 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -14,13 +14,13 @@
  * local_lock - Acquire a per CPU local lock
  * @lock: The lock variable
  */
-#define local_lock(lock)	__local_lock(this_cpu_ptr(lock))
+#define local_lock(lock)	__local_lock(__this_cpu_local_lock(lock))
 
 /**
  * local_lock_irq - Acquire a per CPU local lock and disable interrupts
  * @lock: The lock variable
  */
-#define local_lock_irq(lock)	__local_lock_irq(this_cpu_ptr(lock))
+#define local_lock_irq(lock)	__local_lock_irq(__this_cpu_local_lock(lock))
 
 /**
  * local_lock_irqsave - Acquire a per CPU local lock, save and disable
@@ -29,19 +29,19 @@
  * @flags: Storage for interrupt flags
  */
 #define local_lock_irqsave(lock, flags) \
-	__local_lock_irqsave(this_cpu_ptr(lock), flags)
+	__local_lock_irqsave(__this_cpu_local_lock(lock), flags)
 
 /**
  * local_unlock - Release a per CPU local lock
  * @lock: The lock variable
  */
-#define local_unlock(lock)	__local_unlock(this_cpu_ptr(lock))
+#define local_unlock(lock)	__local_unlock(__this_cpu_local_lock(lock))
 
 /**
  * local_unlock_irq - Release a per CPU local lock and enable interrupts
  * @lock: The lock variable
  */
-#define local_unlock_irq(lock)	__local_unlock_irq(this_cpu_ptr(lock))
+#define local_unlock_irq(lock)	__local_unlock_irq(__this_cpu_local_lock(lock))
 
 /**
  * local_unlock_irqrestore - Release a per CPU local lock and restore
@@ -50,7 +50,7 @@
  * @flags: Interrupt flags to restore
  */
 #define local_unlock_irqrestore(lock, flags) \
-	__local_unlock_irqrestore(this_cpu_ptr(lock), flags)
+	__local_unlock_irqrestore(__this_cpu_local_lock(lock), flags)
 
 /**
  * local_trylock_init - Runtime initialize a lock instance
@@ -66,7 +66,7 @@
  * locking constrains it will _always_ fail to acquire the lock in NMI or
  * HARDIRQ context on PREEMPT_RT.
  */
-#define local_trylock(lock)	__local_trylock(this_cpu_ptr(lock))
+#define local_trylock(lock)	__local_trylock(__this_cpu_local_lock(lock))
 
 #define local_lock_is_locked(lock) __local_lock_is_locked(lock)
 
@@ -81,27 +81,36 @@
  * HARDIRQ context on PREEMPT_RT.
  */
 #define local_trylock_irqsave(lock, flags) \
-	__local_trylock_irqsave(this_cpu_ptr(lock), flags)
-
-DEFINE_GUARD(local_lock, local_lock_t __percpu*,
-	     local_lock(_T),
-	     local_unlock(_T))
-DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*,
-	     local_lock_irq(_T),
-	     local_unlock_irq(_T))
+	__local_trylock_irqsave(__this_cpu_local_lock(lock), flags)
+
+DEFINE_LOCK_GUARD_1(local_lock, local_lock_t __percpu,
+		    local_lock(_T->lock),
+		    local_unlock(_T->lock))
+DEFINE_LOCK_GUARD_1(local_lock_irq, local_lock_t __percpu,
+		    local_lock_irq(_T->lock),
+		    local_unlock_irq(_T->lock))
 DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu,
		    local_lock_irqsave(_T->lock, _T->flags),
		    local_unlock_irqrestore(_T->lock, _T->flags),
		    unsigned long flags)
 
 #define local_lock_nested_bh(_lock) \
-	__local_lock_nested_bh(this_cpu_ptr(_lock))
+	__local_lock_nested_bh(__this_cpu_local_lock(_lock))
 
 #define local_unlock_nested_bh(_lock) \
-	__local_unlock_nested_bh(this_cpu_ptr(_lock))
-
-DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*,
-	     local_lock_nested_bh(_T),
-	     local_unlock_nested_bh(_T))
+	__local_unlock_nested_bh(__this_cpu_local_lock(_lock))
+
+DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu,
+		    local_lock_nested_bh(_T->lock),
+		    local_unlock_nested_bh(_T->lock))
+
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irq, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irq, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irqsave, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irqsave, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_nested_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, _T)
 
 #endif
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 1a1ea1232add..e8c4803d8db4 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -10,21 +10,23 @@
 
 #ifndef CONFIG_PREEMPT_RT
 
-typedef struct {
+context_lock_struct(local_lock) {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
	struct task_struct	*owner;
 #endif
-} local_lock_t;
+};
+typedef struct local_lock local_lock_t;
 
 /* local_trylock() and local_trylock_irqsave() only work with local_trylock_t */
-typedef struct {
+context_lock_struct(local_trylock) {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
	struct task_struct	*owner;
 #endif
	u8 acquired;
-} local_trylock_t;
+};
+typedef struct local_trylock local_trylock_t;
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define LOCAL_LOCK_DEBUG_INIT(lockname) \
@@ -84,9 +86,14 @@ do { \
			      0, LD_WAIT_CONFIG, LD_WAIT_INV, \
			      LD_LOCK_PERCPU); \
	local_lock_debug_init(lock); \
+	__assume_ctx_lock(lock); \
 } while (0)
 
-#define __local_trylock_init(lock) __local_lock_init((local_lock_t *)lock)
+#define __local_trylock_init(lock) \
+do { \
+	__local_lock_init((local_lock_t *)lock); \
+	__assume_ctx_lock(lock); \
+} while (0)
 
 #define __spinlock_nested_bh_init(lock) \
 do { \
@@ -97,6 +104,7 @@ do { \
			      0, LD_WAIT_CONFIG, LD_WAIT_INV, \
			      LD_LOCK_NORMAL); \
	local_lock_debug_init(lock); \
+	__assume_ctx_lock(lock); \
 } while (0)
 
 #define __local_lock_acquire(lock) \
@@ -119,22 +127,25 @@ do { \
 do { \
	preempt_disable(); \
	__local_lock_acquire(lock); \
+	__acquire(lock); \
 } while (0)
 
 #define __local_lock_irq(lock) \
 do { \
	local_irq_disable(); \
	__local_lock_acquire(lock); \
+	__acquire(lock); \
 } while (0)
 
 #define __local_lock_irqsave(lock, flags) \
 do { \
	local_irq_save(flags); \
	__local_lock_acquire(lock); \
+	__acquire(lock); \
 } while (0)
 
 #define __local_trylock(lock) \
-	({ \
+	__try_acquire_ctx_lock(lock, ({ \
		local_trylock_t *__tl; \
 \
		preempt_disable(); \
@@ -148,10 +159,10 @@ do { \
				(local_lock_t *)__tl); \
		} \
		!!__tl; \
-	})
+	}))
 
 #define __local_trylock_irqsave(lock, flags) \
-	({ \
+	__try_acquire_ctx_lock(lock, ({ \
		local_trylock_t *__tl; \
 \
		local_irq_save(flags); \
@@ -165,7 +176,7 @@ do { \
				(local_lock_t *)__tl); \
		} \
		!!__tl; \
-	})
+	}))
 
 /* preemption or migration must be disabled before calling __local_lock_is_locked */
 #define __local_lock_is_locked(lock) READ_ONCE(this_cpu_ptr(lock)->acquired)
@@ -188,18 +199,21 @@ do { \
 
 #define __local_unlock(lock) \
 do { \
+	__release(lock); \
	__local_lock_release(lock); \
	preempt_enable(); \
 } while (0)
 
 #define __local_unlock_irq(lock) \
 do { \
+	__release(lock); \
	__local_lock_release(lock); \
	local_irq_enable(); \
 } while (0)
 
 #define __local_unlock_irqrestore(lock, flags) \
 do { \
+	__release(lock); \
	__local_lock_release(lock); \
	local_irq_restore(flags); \
 } while (0)
@@ -208,13 +222,19 @@ do { \
 do { \
	lockdep_assert_in_softirq(); \
	local_lock_acquire((lock)); \
+	__acquire(lock); \
 } while (0)
 
 #define __local_unlock_nested_bh(lock) \
-	local_lock_release((lock))
+	do { \
+		__release(lock); \
+		local_lock_release((lock)); \
+	} while (0)
 
 #else /* !CONFIG_PREEMPT_RT */
 
+#include
+
 /*
  * On PREEMPT_RT local_lock maps to a per CPU spinlock, which protects the
  * critical section while staying preemptible.
@@ -269,7 +289,7 @@ do { \
 } while (0)
 
 #define __local_trylock(lock) \
-	({ \
+	__try_acquire_ctx_lock(lock, context_unsafe(({ \
		int __locked; \
 \
		if (in_nmi() | in_hardirq()) { \
@@ -281,17 +301,40 @@ do { \
			migrate_enable(); \
		} \
		__locked; \
-	})
+	})))
 
 #define __local_trylock_irqsave(lock, flags) \
-	({ \
+	__try_acquire_ctx_lock(lock, ({ \
		typecheck(unsigned long, flags); \
		flags = 0; \
		__local_trylock(lock); \
-	})
+	}))
 
 /* migration must be disabled before calling __local_lock_is_locked */
 #define __local_lock_is_locked(__lock) \
	(rt_mutex_owner(&this_cpu_ptr(__lock)->lock) == current)
 
 #endif /* CONFIG_PREEMPT_RT */
+
+#if defined(WARN_CONTEXT_ANALYSIS)
+/*
+ * Because the compiler only knows about the base per-CPU variable, use this
+ * helper function to make the compiler think we lock/unlock the @base variable,
+ * and hide the fact we actually pass the per-CPU instance to lock/unlock
+ * functions.
+ */
+static __always_inline local_lock_t *__this_cpu_local_lock(local_lock_t __percpu *base)
+	__returns_ctx_lock(base) __attribute__((overloadable))
+{
+	return this_cpu_ptr(base);
+}
+#ifndef CONFIG_PREEMPT_RT
+static __always_inline local_trylock_t *__this_cpu_local_lock(local_trylock_t __percpu *base)
+	__returns_ctx_lock(base) __attribute__((overloadable))
+{
+	return this_cpu_ptr(base);
+}
+#endif /* CONFIG_PREEMPT_RT */
+#else /* WARN_CONTEXT_ANALYSIS */
+#define __this_cpu_local_lock(base) this_cpu_ptr(base)
+#endif /* WARN_CONTEXT_ANALYSIS */
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 1c96c56cf873..003e64cac540 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -6,7 +6,9 @@
 
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -458,3 +460,74 @@ static void __used test_srcu_guard(struct test_srcu_data *d)
	{ guard(srcu_fast)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); }
	{ guard(srcu_fast_notrace)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); }
 }
+
+struct test_local_lock_data {
+	local_lock_t lock;
+	int counter __guarded_by(&lock);
+};
+
+static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = {
+	.lock = INIT_LOCAL_LOCK(lock),
+};
+
+static void __used test_local_lock_init(struct test_local_lock_data *d)
+{
+	local_lock_init(&d->lock);
+	d->counter = 0;
+}
+
+static void __used test_local_lock(void)
+{
+	unsigned long flags;
+
+	local_lock(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock(&test_local_lock_data.lock);
+
+	local_lock_irq(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_irq(&test_local_lock_data.lock);
+
+	local_lock_irqsave(&test_local_lock_data.lock, flags);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_irqrestore(&test_local_lock_data.lock, flags);
+
+	local_lock_nested_bh(&test_local_lock_data.lock);
+	this_cpu_add(test_local_lock_data.counter, 1);
+	local_unlock_nested_bh(&test_local_lock_data.lock);
+}
+
+static void __used test_local_lock_guard(void)
+{
+	{ guard(local_lock)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_irq)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+	{ guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
+}
+
+struct test_local_trylock_data {
+	local_trylock_t lock;
+	int counter __guarded_by(&lock);
+};
+
+static DEFINE_PER_CPU(struct test_local_trylock_data, test_local_trylock_data) = {
+	.lock = INIT_LOCAL_TRYLOCK(lock),
+};
+
+static void __used test_local_trylock_init(struct test_local_trylock_data *d)
+{
+	local_trylock_init(&d->lock);
+	d->counter = 0;
+}
+
+static void __used test_local_trylock(void)
+{
+	local_lock(&test_local_trylock_data.lock);
+	this_cpu_add(test_local_trylock_data.counter, 1);
+	local_unlock(&test_local_trylock_data.lock);
+
+	if (local_trylock(&test_local_trylock_data.lock)) {
+		this_cpu_add(test_local_trylock_data.counter, 1);
+		local_unlock(&test_local_trylock_data.lock);
+	}
+}
-- 
2.52.0.322.g1dd061c0dc-goog
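[Editorial note, not part of the patch: the common pattern this enables,
mirroring the test module above. Because __this_cpu_local_lock() tells the
analysis the *base* per-CPU variable is acquired, guarded members can be
accessed via this_cpu_*() accessors on the same base variable. The struct and
function names below are hypothetical.]

	struct counters {
		local_lock_t lock;
		u64 events __guarded_by(&lock);
	};

	static DEFINE_PER_CPU(struct counters, counters) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	static void count_event(void)
	{
		local_lock(&counters.lock);	/* analysis sees the base variable acquired */
		this_cpu_add(counters.events, 1);
		local_unlock(&counters.lock);
	}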
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:09 +0100
Message-ID: <20251219154418.3592607-21-elver@google.com>
Subject: [PATCH v5 20/36] locking/ww_mutex: Support Clang's context analysis
From: Marco Elver

Add support for Clang's context analysis for ww_mutex.

The programming model for ww_mutex is subtly more complex than other
locking primitives when using ww_acquire_ctx. Encoding the respective
pre-conditions for ww_mutex lock/unlock based on ww_acquire_ctx state
using Clang's context analysis makes incorrect use of the API harder.

Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".

v4:
 * Rename capability -> context analysis.

v3:
 * __assert -> __assume rename

v2:
 * New patch.
---
 Documentation/dev-tools/context-analysis.rst |  3 +-
 include/linux/ww_mutex.h                     | 22 +++++--
 lib/test_context-analysis.c                  | 69 ++++++++++++++++++++
 3 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index a48b75f45e79..8dd6c0d695aa 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,8 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`,
+`ww_mutex`.
Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".
v4:
 * Rename capability -> context analysis.
v3:
 * __assert -> __assume rename
v2:
 * New patch.
---
 Documentation/dev-tools/context-analysis.rst |  3 +-
 include/linux/ww_mutex.h                     | 22 +++++--
 lib/test_context-analysis.c                  | 69 ++++++++++++++++++++
 3 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/dev-tools/context-analysis.rst
index a48b75f45e79..8dd6c0d695aa 100644
--- a/Documentation/dev-tools/context-analysis.rst
+++ b/Documentation/dev-tools/context-analysis.rst
@@ -80,7 +80,8 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`,
+`ww_mutex`.
 
 For context locks with an initialization function (e.g., `spin_lock_init()`),
 calling this function before initializing any guarded members or globals
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 45ff6f7a872b..58e959ee10e9 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -44,7 +44,7 @@ struct ww_class {
 	unsigned int is_wait_die;
 };
 
-struct ww_mutex {
+context_lock_struct(ww_mutex) {
 	struct WW_MUTEX_BASE base;
 	struct ww_acquire_ctx *ctx;
 #ifdef DEBUG_WW_MUTEXES
@@ -52,7 +52,7 @@ struct ww_mutex {
 #endif
 };
 
-struct ww_acquire_ctx {
+context_lock_struct(ww_acquire_ctx) {
 	struct task_struct *task;
 	unsigned long stamp;
 	unsigned int acquired;
@@ -107,6 +107,7 @@ struct ww_acquire_ctx {
  */
 static inline void ww_mutex_init(struct ww_mutex *lock,
 				 struct ww_class *ww_class)
+	__assumes_ctx_lock(lock)
 {
 	ww_mutex_base_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key);
 	lock->ctx = NULL;
@@ -141,6 +142,7 @@ static inline void ww_mutex_init(struct ww_mutex *lock,
  */
 static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
 				   struct ww_class *ww_class)
+	__acquires(ctx) __no_context_analysis
 {
 	ctx->task = current;
 	ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp);
@@ -179,6 +181,7 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
  * data structures.
  */
 static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
+	__releases(ctx) __acquires_shared(ctx) __no_context_analysis
 {
 #ifdef DEBUG_WW_MUTEXES
 	lockdep_assert_held(ctx);
@@ -196,6 +199,7 @@ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
  * mutexes have been released with ww_mutex_unlock.
  */
 static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
+	__releases_shared(ctx) __no_context_analysis
 {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	mutex_release(&ctx->first_lock_dep_map, _THIS_IP_);
@@ -245,7 +249,8 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
  *
  * A mutex acquired with this function must be released with ww_mutex_unlock.
  */
-extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx);
+extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx);
 
 /**
  * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
@@ -278,7 +283,8 @@ extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acq
  * A mutex acquired with this function must be released with ww_mutex_unlock.
  */
 extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
-						    struct ww_acquire_ctx *ctx);
+						    struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx);
 
 /**
  * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
@@ -305,6 +311,7 @@ extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
  */
 static inline void
 ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+	__acquires(lock) __must_hold(ctx) __no_context_analysis
 {
 	int ret;
 #ifdef DEBUG_WW_MUTEXES
@@ -342,6 +349,7 @@ ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 static inline int __must_check
 ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
 				 struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx)
 {
 #ifdef DEBUG_WW_MUTEXES
 	DEBUG_LOCKS_WARN_ON(!ctx->contending_lock);
@@ -349,10 +357,11 @@ ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
 	return ww_mutex_lock_interruptible(lock, ctx);
 }
 
-extern void ww_mutex_unlock(struct ww_mutex *lock);
+extern void ww_mutex_unlock(struct ww_mutex *lock) __releases(lock);
 
 extern int __must_check ww_mutex_trylock(struct ww_mutex *lock,
-					 struct ww_acquire_ctx *ctx);
+					 struct ww_acquire_ctx *ctx)
+	__cond_acquires(true, lock) __must_hold(ctx);
 
 /***
  * ww_mutex_destroy - mark a w/w mutex unusable
@@ -363,6 +372,7 @@ extern int __must_check ww_mutex_trylock(struct ww_mutex *lock,
  * this function is called.
  */
 static inline void ww_mutex_destroy(struct ww_mutex *lock)
+	__must_not_hold(lock)
 {
 #ifndef CONFIG_PREEMPT_RT
 	mutex_destroy(&lock->base);
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 003e64cac540..2dc404456497 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include <linux/ww_mutex.h>
 
 /*
  * Test that helper macros work as expected.
@@ -531,3 +532,71 @@ static void __used test_local_trylock(void)
 		local_unlock(&test_local_trylock_data.lock);
 	}
 }
+
+static DEFINE_WD_CLASS(ww_class);
+
+struct test_ww_mutex_data {
+	struct ww_mutex mtx;
+	int counter __guarded_by(&mtx);
+};
+
+static void __used test_ww_mutex_init(struct test_ww_mutex_data *d)
+{
+	ww_mutex_init(&d->mtx, &ww_class);
+	d->counter = 0;
+}
+
+static void __used test_ww_mutex_lock_noctx(struct test_ww_mutex_data *d)
+{
+	if (!ww_mutex_lock(&d->mtx, NULL)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (!ww_mutex_lock_interruptible(&d->mtx, NULL)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (ww_mutex_trylock(&d->mtx, NULL)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	ww_mutex_lock_slow(&d->mtx, NULL);
+	d->counter++;
+	ww_mutex_unlock(&d->mtx);
+
+	ww_mutex_destroy(&d->mtx);
+}
+
+static void __used test_ww_mutex_lock_ctx(struct test_ww_mutex_data *d)
+{
+	struct ww_acquire_ctx ctx;
+
+	ww_acquire_init(&ctx, &ww_class);
+
+	if (!ww_mutex_lock(&d->mtx, &ctx)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (!ww_mutex_lock_interruptible(&d->mtx, &ctx)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	if (ww_mutex_trylock(&d->mtx, &ctx)) {
+		d->counter++;
+		ww_mutex_unlock(&d->mtx);
+	}
+
+	ww_mutex_lock_slow(&d->mtx, &ctx);
+	d->counter++;
+	ww_mutex_unlock(&d->mtx);
+
+	ww_acquire_done(&ctx);
+	ww_acquire_fini(&ctx);
+
+	ww_mutex_destroy(&d->mtx);
+}
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:10 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-22-elver@google.com>
Subject: [PATCH v5 21/36] debugfs: Make debugfs_cancellation a context lock struct
From: Marco Elver

When compiling include/linux/debugfs.h with CONTEXT_ANALYSIS enabled,
we see this error:

  ./include/linux/debugfs.h:239:17: error: use of undeclared identifier 'cancellation'
    239 |                 void __acquires(cancellation)

Move the __acquires(..) attribute after the declaration, so that the
compiler can see the cancellation function argument, and make struct
debugfs_cancellation a real context lock so that it benefits from
Clang's context analysis.

This is a preparatory change to allow enabling context analysis in
subsystems that include the above header.

Signed-off-by: Marco Elver
Reviewed-by: Bart Van Assche
---
v5:
 * Rename "context guard" -> "context lock".
v4:
 * Rename capability -> context analysis.
---
 include/linux/debugfs.h | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index 7cecda29447e..4177c4738282 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -239,18 +239,16 @@ ssize_t debugfs_read_file_str(struct file *file, char __user *user_buf,
  * @cancel: callback to call
  * @cancel_data: extra data for the callback to call
  */
-struct debugfs_cancellation {
+context_lock_struct(debugfs_cancellation) {
 	struct list_head list;
 	void (*cancel)(struct dentry *, void *);
 	void *cancel_data;
 };
 
-void __acquires(cancellation)
-debugfs_enter_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
-void __releases(cancellation)
-debugfs_leave_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
+void debugfs_enter_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __acquires(cancellation);
+void debugfs_leave_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __releases(cancellation);
 
 #else
 
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:11 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-23-elver@google.com>
Subject: [PATCH v5 22/36] um: Fix incorrect __acquires/__releases annotations
From: Marco Elver

With Clang's context analysis, the compiler is a bit more strict about
what goes into the __acquires/__releases annotations and cannot refer
to non-existent variables. On a UM build, mm_id.h is transitively
included into mm_types.h, and we can observe the following error (if
context analysis is enabled in e.g. stackdepot.c):

  In file included from lib/stackdepot.c:17:
  In file included from include/linux/debugfs.h:15:
  In file included from include/linux/fs.h:5:
  In file included from include/linux/fs/super.h:5:
  In file included from include/linux/fs/super_types.h:7:
  In file included from include/linux/list_lru.h:14:
  In file included from include/linux/xarray.h:16:
  In file included from include/linux/gfp.h:7:
  In file included from include/linux/mmzone.h:22:
  In file included from include/linux/mm_types.h:26:
  In file included from arch/um/include/asm/mmu.h:12:
  >> arch/um/include/shared/skas/mm_id.h:24:54: error: use of undeclared identifier 'turnstile'
     24 | void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile);
        |                                                      ^~~~~~~~~
     arch/um/include/shared/skas/mm_id.h:25:53: error: use of undeclared identifier 'turnstile'
     25 | void exit_turnstile(struct mm_id *mm_id) __releases(turnstile);
        |                                                     ^~~~~~~~~

One (discarded) option was to use token_context_lock(turnstile) to just
define a token with the already used name, but that would not allow the
compiler to distinguish between different mm_id-dependent instances.
Another constraint is that struct mm_id is only declared and incomplete
in the header, so even if we tried to construct an expression to get to
the mutex instance, this would fail (and transitively including more
headers everywhere should also be avoided).

Instead, declare an mm_id-dependent helper that returns the mutex, and
use the mm_id-dependent call expression in the __acquires/__releases
attributes; the compiler considers the identity of the mutex to be the
call expression. Then, by using __get_turnstile() in the lock/unlock
wrappers (with context analysis enabled for mmu.c), the compiler is
able to verify the implementation of the wrappers as-is.

We leave context analysis disabled in arch/um/kernel/skas/ for now.

This is a preparatory change to allow enabling context analysis in
subsystems that include any of the above headers. No functional change
intended.
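The same pattern generalizes to any lock that is only reachable through
an incomplete type; below is a minimal sketch using hypothetical
foo/foo_lock() names (not taken from this patch):

	/* foo.h: struct foo is incomplete, so its lock member cannot be named. */
	struct foo;

	struct mutex *__get_foo_lock(struct foo *foo);
	/* The call expression itself names the lock instance. */
	void foo_lock(struct foo *foo) __acquires(__get_foo_lock(foo));
	void foo_unlock(struct foo *foo) __releases(__get_foo_lock(foo));

	/* foo.c: with context analysis enabled, the wrappers verify as-is. */
	struct foo {
		struct mutex lock;
	};

	struct mutex *__get_foo_lock(struct foo *foo)
	{
		return &foo->lock;
	}

	void foo_lock(struct foo *foo)
	{
		mutex_lock(__get_foo_lock(foo));
	}

	void foo_unlock(struct foo *foo)
	{
		mutex_unlock(__get_foo_lock(foo));
	}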
Closes: https://lore.kernel.org/oe-kbuild-all/202512171220.vHlvhpCr-lkp@intel.com/
Reported-by: kernel test robot
Signed-off-by: Marco Elver
Cc: Johannes Berg
Cc: Tiwei Bie
---
 arch/um/include/shared/skas/mm_id.h |  5 +++--
 arch/um/kernel/skas/mmu.c           | 13 ++++++++-----
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/um/include/shared/skas/mm_id.h b/arch/um/include/shared/skas/mm_id.h
index fb96c0bd8222..18c0621430d2 100644
--- a/arch/um/include/shared/skas/mm_id.h
+++ b/arch/um/include/shared/skas/mm_id.h
@@ -21,8 +21,9 @@ struct mm_id {
 	int syscall_fd_map[STUB_MAX_FDS];
 };
 
-void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile);
-void exit_turnstile(struct mm_id *mm_id) __releases(turnstile);
+struct mutex *__get_turnstile(struct mm_id *mm_id);
+void enter_turnstile(struct mm_id *mm_id) __acquires(__get_turnstile(mm_id));
+void exit_turnstile(struct mm_id *mm_id) __releases(__get_turnstile(mm_id));
 
 void notify_mm_kill(int pid);
 
diff --git a/arch/um/kernel/skas/mmu.c b/arch/um/kernel/skas/mmu.c
index 00957788591b..b5017096028b 100644
--- a/arch/um/kernel/skas/mmu.c
+++ b/arch/um/kernel/skas/mmu.c
@@ -23,18 +23,21 @@ static_assert(sizeof(struct stub_data) == STUB_DATA_PAGES * UM_KERN_PAGE_SIZE);
 static spinlock_t mm_list_lock;
 static struct list_head mm_list;
 
-void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile)
+struct mutex *__get_turnstile(struct mm_id *mm_id)
 {
 	struct mm_context *ctx = container_of(mm_id, struct mm_context, id);
 
-	mutex_lock(&ctx->turnstile);
+	return &ctx->turnstile;
 }
 
-void exit_turnstile(struct mm_id *mm_id) __releases(turnstile)
+void enter_turnstile(struct mm_id *mm_id)
 {
-	struct mm_context *ctx = container_of(mm_id, struct mm_context, id);
+	mutex_lock(__get_turnstile(mm_id));
+}
 
-	mutex_unlock(&ctx->turnstile);
+void exit_turnstile(struct mm_id *mm_id)
+{
+	mutex_unlock(__get_turnstile(mm_id));
 }
 
 int init_new_context(struct task_struct *task, struct mm_struct *mm)
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:12 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-24-elver@google.com>
Subject: [PATCH v5 23/36] compiler-context-analysis: Remove Sparse support
From: Marco Elver

Remove Sparse context tracking support, as discussed at [1]. The kernel
codebase is still scattered with numerous places that try to appease
Sparse's context tracking ("annotation for sparse", "fake out sparse",
"work around sparse", etc.). Eventually, as more subsystems enable
Clang's context analysis, these places will show up and need
adjustment, or removal of the workarounds altogether.

Link: https://lore.kernel.org/all/20250207083335.GW7145@noisy.programming.kicks-ass.net/ [1]
Link: https://lore.kernel.org/all/Z6XTKTo_LMj9KmbY@elver.google.com/ [2]
Cc: Chris Li
Cc: Luc Van Oostenryck
Cc: Peter Zijlstra
Signed-off-by: Marco Elver
---
v5:
 * Rename "context guard" -> "context lock".
v4:
 * Rename capability -> context analysis.
v2:
 * New patch.
---
 Documentation/dev-tools/sparse.rst        | 19 -----
 include/linux/compiler-context-analysis.h | 85 +++++++----------------
 include/linux/rcupdate.h                  | 15 +---
 3 files changed, 28 insertions(+), 91 deletions(-)

diff --git a/Documentation/dev-tools/sparse.rst b/Documentation/dev-tools/sparse.rst
index dc791c8d84d1..37b20170835d 100644
--- a/Documentation/dev-tools/sparse.rst
+++ b/Documentation/dev-tools/sparse.rst
@@ -53,25 +53,6 @@ sure that bitwise types don't get mixed up (little-endian vs big-endian
 vs cpu-endian vs whatever), and there the constant "0" really _is_
 special.
 
-Using sparse for lock checking
-------------------------------
-
-The following macros are undefined for gcc and defined during a sparse
-run to use the "context" tracking feature of sparse, applied to
-locking.  These annotations tell sparse when a lock is held, with
-regard to the annotated function's entry and exit.
-
-__must_hold - The specified lock is held on function entry and exit.
-
-__acquires - The specified lock is held on function exit, but not entry.
-
-__releases - The specified lock is held on function entry, but not exit.
-
-If the function enters and exits without the lock held, acquiring and
-releasing the lock inside the function in a balanced way, no
-annotation is needed.  The three annotations above are for cases where
-sparse would otherwise report a context imbalance.
-
 Getting sparse
 --------------
 
diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
index 9ad800e27692..fccd6d68158e 100644
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -262,57 +262,32 @@ static inline void _context_unsafe_alias(void **p) { }
 	extern const struct __ctx_lock_##ctx *name
 
 /*
- * Common keywords for static context analysis. Both Clang's "capability
- * analysis" and Sparse's "context tracking" are currently supported.
- */
-#ifdef __CHECKER__
-
-/* Sparse context/lock checking support. */
-# define __must_hold(x)		__attribute__((context(x,1,1)))
-# define __must_not_hold(x)
-# define __acquires(x)		__attribute__((context(x,0,1)))
-# define __cond_acquires(ret, x) __attribute__((context(x,0,-1)))
-# define __releases(x)		__attribute__((context(x,1,0)))
-# define __acquire(x)		__context__(x,1)
-# define __release(x)		__context__(x,-1)
-# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
-/* For Sparse, there's no distinction between exclusive and shared locks. */
-# define __must_hold_shared	__must_hold
-# define __acquires_shared	__acquires
-# define __cond_acquires_shared __cond_acquires
-# define __releases_shared	__releases
-# define __acquire_shared	__acquire
-# define __release_shared	__release
-# define __cond_lock_shared	__cond_acquire
-
-#else /* !__CHECKER__ */
+ * Common keywords for static context analysis.
+ */
 
 /**
  * __must_hold() - function attribute, caller must hold exclusive context lock
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the caller must hold the given context
- * lock instance @x exclusively.
+ * lock instance(s) exclusively.
  */
-# define __must_hold(x)	__requires_ctx_lock(x)
+#define __must_hold(...)	__requires_ctx_lock(__VA_ARGS__)
 
 /**
  * __must_not_hold() - function attribute, caller must not hold context lock
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the caller must not hold the given context
- * lock instance @x.
+ * lock instance(s).
  */
-# define __must_not_hold(x)	__excludes_ctx_lock(x)
+#define __must_not_hold(...)	__excludes_ctx_lock(__VA_ARGS__)
 
 /**
  * __acquires() - function attribute, function acquires context lock exclusively
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the function acquires the given context
- * lock instance @x exclusively, but does not release it.
+ * lock instance(s) exclusively, but does not release them.
  */
-# define __acquires(x)	__acquires_ctx_lock(x)
+#define __acquires(...)	__acquires_ctx_lock(__VA_ARGS__)
 
 /*
  * Clang's analysis does not care precisely about the value, only that it is
@@ -339,17 +314,16 @@ static inline void _context_unsafe_alias(void **p) { }
  *
  * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
  */
-# define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x)
+#define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x)
 
 /**
  * __releases() - function attribute, function releases a context lock exclusively
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the function releases the given context
- * lock instance @x exclusively. The associated context must be active on
+ * lock instance(s) exclusively. The associated context(s) must be active on
  * entry.
  */
-# define __releases(x)	__releases_ctx_lock(x)
+#define __releases(...)	__releases_ctx_lock(__VA_ARGS__)
 
 /**
  * __acquire() - function to acquire context lock exclusively
@@ -357,7 +331,7 @@ static inline void _context_unsafe_alias(void **p) { }
  *
  * No-op function that acquires the given context lock instance @x exclusively.
  */
-# define __acquire(x)	__acquire_ctx_lock(x)
+#define __acquire(x)	__acquire_ctx_lock(x)
 
 /**
  * __release() - function to release context lock exclusively
@@ -365,7 +339,7 @@ static inline void _context_unsafe_alias(void **p) { }
  *
  * No-op function that releases the given context lock instance @x.
  */
-# define __release(x)	__release_ctx_lock(x)
+#define __release(x)	__release_ctx_lock(x)
 
 /**
  * __cond_lock() - function that conditionally acquires a context lock
@@ -383,25 +357,23 @@ static inline void _context_unsafe_alias(void **p) { }
  *
  *   #define spin_trylock(l) __cond_lock(&lock, _spin_trylock(&lock))
  */
-# define __cond_lock(x, c)	__try_acquire_ctx_lock(x, c)
+#define __cond_lock(x, c)	__try_acquire_ctx_lock(x, c)
 
 /**
  * __must_hold_shared() - function attribute, caller must hold shared context lock
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the caller must hold the given context
- * lock instance @x with shared access.
+ * lock instance(s) with shared access.
  */
-# define __must_hold_shared(x)	__requires_shared_ctx_lock(x)
+#define __must_hold_shared(...)	__requires_shared_ctx_lock(__VA_ARGS__)
 
 /**
  * __acquires_shared() - function attribute, function acquires context lock shared
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the function acquires the given
- * context lock instance @x with shared access, but does not release it.
+ * context lock instance(s) with shared access, but does not release them.
  */
-# define __acquires_shared(x)	__acquires_shared_ctx_lock(x)
+#define __acquires_shared(...)	__acquires_shared_ctx_lock(__VA_ARGS__)
 
 /**
  * __cond_acquires_shared() - function attribute, function conditionally
@@ -410,23 +382,22 @@ static inline void _context_unsafe_alias(void **p) { }
  * @x: context lock instance pointer
  *
  * Function attribute declaring that the function conditionally acquires the
- * given context lock instance @x with shared access, but does not release it. The
- * function return value @ret denotes when the context lock is acquired.
+ * given context lock instance @x with shared access, but does not release it.
+ * The function return value @ret denotes when the context lock is acquired.
  *
  * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
  */
-# define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared)
+#define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared)
 
 /**
  * __releases_shared() - function attribute, function releases a
  * context lock shared
- * @x: context lock instance pointer
  *
  * Function attribute declaring that the function releases the given context
- * lock instance @x with shared access. The associated context must be active
- * on entry.
+ * lock instance(s) with shared access. The associated context(s) must be
+ * active on entry.
  */
-# define __releases_shared(x)	__releases_shared_ctx_lock(x)
+#define __releases_shared(...)	__releases_shared_ctx_lock(__VA_ARGS__)
 
 /**
  * __acquire_shared() - function to acquire context lock shared
@@ -435,7 +406,7 @@ static inline void _context_unsafe_alias(void **p) { }
  * No-op function that acquires the given context lock instance @x with shared
  * access.
  */
-# define __acquire_shared(x)	__acquire_shared_ctx_lock(x)
+#define __acquire_shared(x)	__acquire_shared_ctx_lock(x)
 
 /**
  * __release_shared() - function to release context lock shared
@@ -444,7 +415,7 @@ static inline void _context_unsafe_alias(void **p) { }
  * No-op function that releases the given context lock instance @x with shared
  * access.
  */
-# define __release_shared(x)	__release_shared_ctx_lock(x)
+#define __release_shared(x)	__release_shared_ctx_lock(x)
 
 /**
  * __cond_lock_shared() - function that conditionally acquires a context lock shared
@@ -457,9 +428,7 @@ static inline void _context_unsafe_alias(void **p) { }
  * shared access, if the boolean expression @c is true. The result of @c is the
  * return value.
  */
-# define __cond_lock_shared(x, c)	__try_acquire_shared_ctx_lock(x, c)
-
-#endif /* __CHECKER__ */
+#define __cond_lock_shared(x, c)	__try_acquire_shared_ctx_lock(x, c)
 
 /**
  * __acquire_ret() - helper to acquire context lock of return value
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 50e63eade019..d828a4673441 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1219,20 +1219,7 @@ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
 extern int rcu_expedited;
 extern int rcu_normal;
 
-DEFINE_LOCK_GUARD_0(rcu,
-	do {
-		rcu_read_lock();
-		/*
-		 * sparse doesn't call the cleanup function,
-		 * so just release immediately and don't track
-		 * the context. We don't need to anyway, since
-		 * the whole point of the guard is to not need
-		 * the explicit unlock.
-		 */
-		__release(RCU);
-	} while (0),
-	rcu_read_unlock())
-
+DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock())
 DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU))
 
 #endif /* __LINUX_RCUPDATE_H */
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:13 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-25-elver@google.com>
Subject: [PATCH v5 24/36] compiler-context-analysis: Remove __cond_lock() function-like helper
From: Marco Elver

McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Christoph Hellwig , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ian Rogers , Jann Horn , Joel Fernandes , Johannes Berg , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Lukas Bulwahn , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Thomas Graf , Uladzislau Rezki , Waiman Long , kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" As discussed in [1], removing __cond_lock() will improve the readability of trylock code. Now that Sparse context tracking support has been removed, we can also remove __cond_lock(). Change existing APIs to either drop __cond_lock() completely, or make use of the __cond_acquires() function attribute instead. In particular, spinlock and rwlock implementations required switching over to inline helpers rather than statement-expressions for their trylock_* variants. Link: https://lore.kernel.org/all/20250207082832.GU7145@noisy.programming.k= icks-ass.net/ [1] Suggested-by: Peter Zijlstra Signed-off-by: Marco Elver --- v5: * Fix up include/linux/lockref.h, too. v2: * New patch. --- Documentation/dev-tools/context-analysis.rst | 2 - Documentation/mm/process_addrs.rst | 6 +- .../net/wireless/intel/iwlwifi/iwl-trans.c | 4 +- .../net/wireless/intel/iwlwifi/iwl-trans.h | 6 +- .../intel/iwlwifi/pcie/gen1_2/internal.h | 5 +- .../intel/iwlwifi/pcie/gen1_2/trans.c | 4 +- include/linux/compiler-context-analysis.h | 31 ---------- include/linux/lockref.h | 4 +- include/linux/mm.h | 33 ++-------- include/linux/rwlock.h | 11 +--- include/linux/rwlock_api_smp.h | 14 ++++- include/linux/rwlock_rt.h | 21 ++++--- include/linux/sched/signal.h | 14 +---- include/linux/spinlock.h | 45 +++++--------- include/linux/spinlock_api_smp.h | 20 ++++++ include/linux/spinlock_api_up.h | 61 ++++++++++++++++--- include/linux/spinlock_rt.h | 26 ++++---- kernel/signal.c | 4 +- kernel/time/posix-timers.c | 13 +--- lib/dec_and_lock.c | 8 +-- lib/lockref.c | 1 - mm/memory.c | 4 +- mm/pgtable-generic.c | 19 +++--- tools/include/linux/compiler_types.h | 2 - 24 files changed, 163 insertions(+), 195 deletions(-) diff --git a/Documentation/dev-tools/context-analysis.rst b/Documentation/d= ev-tools/context-analysis.rst index 8dd6c0d695aa..e69896e597b6 100644 --- a/Documentation/dev-tools/context-analysis.rst +++ b/Documentation/dev-tools/context-analysis.rst @@ -112,10 +112,8 @@ Keywords __releases_shared __acquire __release - __cond_lock __acquire_shared __release_shared - __cond_lock_shared __acquire_ret __acquire_shared_ret context_unsafe diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_= addrs.rst index 7f2f3e87071d..851680ead45f 100644 --- a/Documentation/mm/process_addrs.rst +++ b/Documentation/mm/process_addrs.rst @@ -583,7 +583,7 @@ To access PTE-level page tables, a helper like :c:func:= `!pte_offset_map_lock` or :c:func:`!pte_offset_map` can be used depending on stability requirements. 
These map the page table into kernel memory if required, take the RCU lock= , and depending on variant, may also look up or acquire the PTE lock. -See the comment on :c:func:`!__pte_offset_map_lock`. +See the comment on :c:func:`!pte_offset_map_lock`. =20 Atomicity ^^^^^^^^^ @@ -667,7 +667,7 @@ must be released via :c:func:`!pte_unmap_unlock`. .. note:: There are some variants on this, such as :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable= but for brevity we do not explore this. See the comment for - :c:func:`!__pte_offset_map_lock` for more details. + :c:func:`!pte_offset_map_lock` for more details. =20 When modifying data in ranges we typically only wish to allocate higher pa= ge tables as necessary, using these locks to avoid races or overwriting anyth= ing, @@ -686,7 +686,7 @@ At the leaf page table, that is the PTE, we can't entir= ely rely on this pattern as we have separate PMD and PTE locks and a THP collapse for instance migh= t have eliminated the PMD entry as well as the PTE from under us. =20 -This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD = entry +This is why :c:func:`!pte_offset_map_lock` locklessly retrieves the PMD en= try for the PTE, carefully checking it is as expected, before acquiring the PTE-specific lock, and then *again* checking that the PMD entry is as expe= cted. =20 diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/w= ireless/intel/iwlwifi/iwl-trans.c index cc8a84018f70..fa1442246662 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c @@ -548,11 +548,11 @@ int iwl_trans_read_config32(struct iwl_trans *trans, = u32 ofs, return iwl_trans_pcie_read_config32(trans, ofs, val); } =20 -bool _iwl_trans_grab_nic_access(struct iwl_trans *trans) +bool iwl_trans_grab_nic_access(struct iwl_trans *trans) { return iwl_trans_pcie_grab_nic_access(trans); } -IWL_EXPORT_SYMBOL(_iwl_trans_grab_nic_access); +IWL_EXPORT_SYMBOL(iwl_trans_grab_nic_access); =20 void __releases(nic_access) iwl_trans_release_nic_access(struct iwl_trans *trans) diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/w= ireless/intel/iwlwifi/iwl-trans.h index a552669db6e2..688f9fee2821 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h @@ -1063,11 +1063,7 @@ int iwl_trans_sw_reset(struct iwl_trans *trans); void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, u32 mask, u32 value); =20 -bool _iwl_trans_grab_nic_access(struct iwl_trans *trans); - -#define iwl_trans_grab_nic_access(trans) \ - __cond_lock(nic_access, \ - likely(_iwl_trans_grab_nic_access(trans))) +bool iwl_trans_grab_nic_access(struct iwl_trans *trans); =20 void __releases(nic_access) iwl_trans_release_nic_access(struct iwl_trans *trans); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h b/dr= ivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h index 207c56e338dd..7b7b35e442f9 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h @@ -553,10 +553,7 @@ void iwl_trans_pcie_free(struct iwl_trans *trans); void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_r= egions, struct device *dev); =20 -bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent= ); -#define _iwl_trans_pcie_grab_nic_access(trans, silent) \ - __cond_lock(nic_access_nobh, \ - 
likely(__iwl_trans_pcie_grab_nic_access(trans, silent))) +bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent); =20 void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev); void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c b/drive= rs/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c index 164d060ec617..415a19ea9f06 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c @@ -2327,7 +2327,7 @@ EXPORT_SYMBOL(iwl_trans_pcie_reset); * This version doesn't disable BHs but rather assumes they're * already disabled. */ -bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent) +bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent) { int ret; struct iwl_trans_pcie *trans_pcie =3D IWL_TRANS_GET_PCIE_TRANS(trans); @@ -2415,7 +2415,7 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans = *trans) bool ret; =20 local_bh_disable(); - ret =3D __iwl_trans_pcie_grab_nic_access(trans, false); + ret =3D _iwl_trans_pcie_grab_nic_access(trans, false); if (ret) { /* keep BHs disabled until iwl_trans_pcie_release_nic_access */ return ret; diff --git a/include/linux/compiler-context-analysis.h b/include/linux/comp= iler-context-analysis.h index fccd6d68158e..db7e0d48d8f2 100644 --- a/include/linux/compiler-context-analysis.h +++ b/include/linux/compiler-context-analysis.h @@ -341,24 +341,6 @@ static inline void _context_unsafe_alias(void **p) { } */ #define __release(x) __release_ctx_lock(x) =20 -/** - * __cond_lock() - function that conditionally acquires a context lock - * exclusively - * @x: context lock instance pinter - * @c: boolean expression - * - * Return: result of @c - * - * No-op function that conditionally acquires context lock instance @x - * exclusively, if the boolean expression @c is true. The result of @c is = the - * return value; for example: - * - * .. code-block:: c - * - * #define spin_trylock(l) __cond_lock(&lock, _spin_trylock(&lock)) - */ -#define __cond_lock(x, c) __try_acquire_ctx_lock(x, c) - /** * __must_hold_shared() - function attribute, caller must hold shared cont= ext lock * @@ -417,19 +399,6 @@ static inline void _context_unsafe_alias(void **p) { } */ #define __release_shared(x) __release_shared_ctx_lock(x) =20 -/** - * __cond_lock_shared() - function that conditionally acquires a context l= ock shared - * @x: context lock instance pinter - * @c: boolean expression - * - * Return: result of @c - * - * No-op function that conditionally acquires context lock instance @x with - * shared access, if the boolean expression @c is true. The result of @c i= s the - * return value. 
- */ -#define __cond_lock_shared(x, c) __try_acquire_shared_ctx_lock(x, c) - /** * __acquire_ret() - helper to acquire context lock of return value * @call: call expression diff --git a/include/linux/lockref.h b/include/linux/lockref.h index 815d871fadfc..6ded24cdb4a8 100644 --- a/include/linux/lockref.h +++ b/include/linux/lockref.h @@ -49,9 +49,7 @@ static inline void lockref_init(struct lockref *lockref) void lockref_get(struct lockref *lockref); int lockref_put_return(struct lockref *lockref); bool lockref_get_not_zero(struct lockref *lockref); -bool lockref_put_or_lock(struct lockref *lockref); -#define lockref_put_or_lock(_lockref) \ - (!__cond_lock((_lockref)->lock, !lockref_put_or_lock(_lockref))) +bool lockref_put_or_lock(struct lockref *lockref) __cond_acquires(false, &= lockref->lock); =20 void lockref_mark_dead(struct lockref *lockref); bool lockref_get_not_dead(struct lockref *lockref); diff --git a/include/linux/mm.h b/include/linux/mm.h index 15076261d0c2..f369cb633516 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2975,15 +2975,8 @@ static inline pud_t pud_mkspecial(pud_t pud) } #endif /* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */ =20 -extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr, - spinlock_t **ptl); -static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long ad= dr, - spinlock_t **ptl) -{ - pte_t *ptep; - __cond_lock(*ptl, ptep =3D __get_locked_pte(mm, addr, ptl)); - return ptep; -} +extern pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, + spinlock_t **ptl); =20 #ifdef __PAGETABLE_P4D_FOLDED static inline int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, @@ -3337,31 +3330,15 @@ static inline bool pagetable_pte_ctor(struct mm_str= uct *mm, return true; } =20 -pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); -static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, - pmd_t *pmdvalp) -{ - pte_t *pte; +pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); =20 - __cond_lock(RCU, pte =3D ___pte_offset_map(pmd, addr, pmdvalp)); - return pte; -} static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) { return __pte_offset_map(pmd, addr, NULL); } =20 -pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, - unsigned long addr, spinlock_t **ptlp); -static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, - unsigned long addr, spinlock_t **ptlp) -{ - pte_t *pte; - - __cond_lock(RCU, __cond_lock(*ptlp, - pte =3D __pte_offset_map_lock(mm, pmd, addr, ptlp))); - return pte; -} +pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, + unsigned long addr, spinlock_t **ptlp); =20 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd, unsigned long addr, spinlock_t **ptlp); diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 151f9d5f3288..65a5b55e1bcd 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -50,8 +50,8 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. 
*/ -#define read_trylock(lock) __cond_lock_shared(lock, _raw_read_trylock(lock= )) -#define write_trylock(lock) __cond_lock(lock, _raw_write_trylock(lock)) +#define read_trylock(lock) _raw_read_trylock(lock) +#define write_trylock(lock) _raw_write_trylock(lock) =20 #define write_lock(lock) _raw_write_lock(lock) #define read_lock(lock) _raw_read_lock(lock) @@ -113,12 +113,7 @@ do { \ } while (0) #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) =20 -#define write_trylock_irqsave(lock, flags) \ - __cond_lock(lock, ({ \ - local_irq_save(flags); \ - _raw_write_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ - })) +#define write_trylock_irqsave(lock, flags) _raw_write_trylock_irqsave(lock= , &(flags)) =20 #ifdef arch_rwlock_is_contended #define rwlock_is_contended(lock) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index 6d5cc0b7be1f..d903b17c46ca 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -26,8 +26,8 @@ unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t = *lock) __acquires(lock); unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock) __acquires(lock); -int __lockfunc _raw_read_trylock(rwlock_t *lock); -int __lockfunc _raw_write_trylock(rwlock_t *lock); +int __lockfunc _raw_read_trylock(rwlock_t *lock) __cond_acquires_shared(tr= ue, lock); +int __lockfunc _raw_write_trylock(rwlock_t *lock) __cond_acquires(true, lo= ck); void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock= ); @@ -41,6 +41,16 @@ void __lockfunc _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) __releases(lock); =20 +static inline bool _raw_write_trylock_irqsave(rwlock_t *lock, unsigned lon= g *flags) + __cond_acquires(true, lock) +{ + local_irq_save(*flags); + if (_raw_write_trylock(lock)) + return true; + local_irq_restore(*flags); + return false; +} + #ifdef CONFIG_INLINE_READ_LOCK #define _raw_read_lock(lock) __raw_read_lock(lock) #endif diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index f64d6d319a47..37b387dcab21 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -26,11 +26,11 @@ do { \ } while (0) =20 extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); -extern int rt_read_trylock(rwlock_t *rwlock); +extern int rt_read_trylock(rwlock_t *rwlock) __cond_acquires_shared(true, = rwlock); extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquire= s(rwlock); -extern int rt_write_trylock(rwlock_t *rwlock); +extern int rt_write_trylock(rwlock_t *rwlock) __cond_acquires(true, rwlock= ); extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); =20 static __always_inline void read_lock(rwlock_t *rwlock) @@ -59,7 +59,7 @@ static __always_inline void read_lock_irq(rwlock_t *rwloc= k) flags =3D 0; \ } while (0) =20 -#define read_trylock(lock) __cond_lock_shared(lock, rt_read_trylock(lock)) +#define read_trylock(lock) rt_read_trylock(lock) =20 static __always_inline void read_unlock(rwlock_t *rwlock) __releases_shared(rwlock) @@ -123,14 +123,15 @@ static __always_inline void write_lock_irq(rwlock_t *= rwlock) flags =3D 0; \ } while (0) =20 -#define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock)) +#define 
+#define write_trylock(lock)	rt_write_trylock(lock)

-#define write_trylock_irqsave(lock, flags)		\
-	__cond_lock(lock, ({				\
-		typecheck(unsigned long, flags);	\
-		flags = 0;				\
-		rt_write_trylock(lock);			\
-	}))
+static __always_inline bool _write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags)
+	__cond_acquires(true, rwlock)
+{
+	*flags = 0;
+	return rt_write_trylock(rwlock);
+}
+#define write_trylock_irqsave(lock, flags)	_write_trylock_irqsave(lock, &(flags))

 static __always_inline void write_unlock(rwlock_t *rwlock)
 	__releases(rwlock)
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 7d6449982822..a63f65aa5bdd 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -737,18 +737,8 @@ static inline int thread_group_empty(struct task_struct *p)
 #define delay_group_leader(p) \
 		(thread_group_leader(p) && !thread_group_empty(p))

-extern struct sighand_struct *__lock_task_sighand(struct task_struct *task,
-						  unsigned long *flags);
-
-static inline struct sighand_struct *lock_task_sighand(struct task_struct *task,
-						       unsigned long *flags)
-{
-	struct sighand_struct *ret;
-
-	ret = __lock_task_sighand(task, flags);
-	(void)__cond_lock(&task->sighand->siglock, ret);
-	return ret;
-}
+extern struct sighand_struct *lock_task_sighand(struct task_struct *task,
+						unsigned long *flags);

 static inline void unlock_task_sighand(struct task_struct *task,
 						unsigned long *flags)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 7e560c7a7b23..396b8c5d6c1b 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -213,7 +213,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
  * various methods are defined as nops in the case they are not
  * required.
  */
-#define raw_spin_trylock(lock)	__cond_lock(lock, _raw_spin_trylock(lock))
+#define raw_spin_trylock(lock)	_raw_spin_trylock(lock)

 #define raw_spin_lock(lock)	_raw_spin_lock(lock)

@@ -284,22 +284,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 } while (0)
 #define raw_spin_unlock_bh(lock)	_raw_spin_unlock_bh(lock)

-#define raw_spin_trylock_bh(lock) \
-	__cond_lock(lock, _raw_spin_trylock_bh(lock))
+#define raw_spin_trylock_bh(lock)	_raw_spin_trylock_bh(lock)

-#define raw_spin_trylock_irq(lock) \
-	__cond_lock(lock, ({ \
-		local_irq_disable(); \
-		_raw_spin_trylock(lock) ? \
-		1 : ({ local_irq_enable(); 0;  }); \
-	}))
+#define raw_spin_trylock_irq(lock)	_raw_spin_trylock_irq(lock)

-#define raw_spin_trylock_irqsave(lock, flags) \
-	__cond_lock(lock, ({ \
-		local_irq_save(flags); \
-		_raw_spin_trylock(lock) ? \
-		1 : ({ local_irq_restore(flags); 0; }); \
-	}))
+#define raw_spin_trylock_irqsave(lock, flags)	_raw_spin_trylock_irqsave(lock, &(flags))

 #ifndef CONFIG_PREEMPT_RT
 /* Include rwlock functions for !RT */
@@ -433,8 +422,12 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	return raw_spin_trylock_irq(&lock->rlock);
 }

-#define spin_trylock_irqsave(lock, flags) \
-	__cond_lock(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags))
+static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock) __no_context_analysis
+{
+	return raw_spin_trylock_irqsave(spinlock_check(lock), *flags);
+}
+#define spin_trylock_irqsave(lock, flags)	_spin_trylock_irqsave(lock, &(flags))

 /**
  * spin_is_locked() - Check whether a spinlock is locked.
@@ -512,23 +505,17 @@ static inline int rwlock_needbreak(rwlock_t *lock)
  * Decrements @atomic by 1.  If the result is 0, returns true and locks
  * @lock.  Returns false for all other cases.
  */
-extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
-#define atomic_dec_and_lock(atomic, lock) \
-		__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
+extern int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) __cond_acquires(true, lock);

 extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
-					unsigned long *flags);
-#define atomic_dec_and_lock_irqsave(atomic, lock, flags) \
-		__cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)))
+					unsigned long *flags) __cond_acquires(true, lock);
+#define atomic_dec_and_lock_irqsave(atomic, lock, flags)	_atomic_dec_and_lock_irqsave(atomic, lock, &(flags))

-extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock);
-#define atomic_dec_and_raw_lock(atomic, lock) \
-		__cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock))
+extern int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) __cond_acquires(true, lock);

 extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
-					    unsigned long *flags);
-#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \
-		__cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)))
+					    unsigned long *flags) __cond_acquires(true, lock);
+#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags)	_atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))

 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask,
 			     size_t max_size, unsigned int cpu_mult,
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 7e7d7d373213..bda5e7a390cd 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -95,6 +95,26 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 	return 0;
 }

+static __always_inline bool _raw_spin_trylock_irq(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	local_irq_disable();
+	if (_raw_spin_trylock(lock))
+		return true;
+	local_irq_enable();
+	return false;
+}
+
+static __always_inline bool _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	local_irq_save(*flags);
+	if (_raw_spin_trylock(lock))
+		return true;
+	local_irq_restore(*flags);
+	return false;
+}
+
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
  * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index 018f5aabc1be..a9d5c7c66e03 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -24,14 +24,11 @@
  * flags straight, to suppress compiler warnings of unused lock
  * variables, and to add the proper checker annotations:
  */
-#define ___LOCK_void(lock) \
-  do { (void)(lock); } while (0)
-
 #define ___LOCK_(lock) \
-  do { __acquire(lock); ___LOCK_void(lock); } while (0)
+  do { __acquire(lock); (void)(lock); } while (0)

 #define ___LOCK_shared(lock) \
-  do { __acquire_shared(lock); ___LOCK_void(lock); } while (0)
+  do { __acquire_shared(lock); (void)(lock); } while (0)

 #define __LOCK(lock, ...) \
  do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0)
@@ -78,10 +75,56 @@
 #define _raw_spin_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
 #define _raw_read_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags, shared)
 #define _raw_write_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
-#define _raw_spin_trylock(lock)			({ __LOCK(lock, void); 1; })
-#define _raw_read_trylock(lock)			({ __LOCK(lock, void); 1; })
-#define _raw_write_trylock(lock)		({ __LOCK(lock, void); 1; })
-#define _raw_spin_trylock_bh(lock)		({ __LOCK_BH(lock, void); 1; })
+
+static __always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_bh(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK_BH(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_irq(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQ(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQSAVE(lock, *(flags));
+	return 1;
+}
+
+static __always_inline int _raw_read_trylock(rwlock_t *lock)
+	__cond_acquires_shared(true, lock)
+{
+	__LOCK(lock, shared);
+	return 1;
+}
+
+static __always_inline int _raw_write_trylock(rwlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK(lock);
+	return 1;
+}
+
+static __always_inline int _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQSAVE(lock, *(flags));
+	return 1;
+}
+
 #define _raw_spin_unlock(lock)			__UNLOCK(lock)
 #define _raw_read_unlock(lock)			__UNLOCK(lock, shared)
 #define _raw_write_unlock(lock)			__UNLOCK(lock)
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 6bab73ee1384..0a585768358f 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -37,8 +37,8 @@ extern void rt_spin_lock_nested(spinlock_t *lock, int subclass)	__acquires(lock)
 extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock)	__acquires(lock);
 extern void rt_spin_unlock(spinlock_t *lock)	__releases(lock);
 extern void rt_spin_lock_unlock(spinlock_t *lock);
-extern int rt_spin_trylock_bh(spinlock_t *lock);
-extern int rt_spin_trylock(spinlock_t *lock);
+extern int rt_spin_trylock_bh(spinlock_t *lock)	__cond_acquires(true, lock);
+extern int rt_spin_trylock(spinlock_t *lock)	__cond_acquires(true, lock);

 static __always_inline void spin_lock(spinlock_t *lock)
 	__acquires(lock)
@@ -130,21 +130,19 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 	rt_spin_unlock(lock);
 }

-#define spin_trylock(lock)				\
-	__cond_lock(lock, rt_spin_trylock(lock))
+#define spin_trylock(lock)	rt_spin_trylock(lock)

-#define spin_trylock_bh(lock)				\
-	__cond_lock(lock, rt_spin_trylock_bh(lock))
+#define spin_trylock_bh(lock)	rt_spin_trylock_bh(lock)

-#define spin_trylock_irq(lock)				\
-	__cond_lock(lock, rt_spin_trylock(lock))
+#define spin_trylock_irq(lock)	rt_spin_trylock(lock)

-#define spin_trylock_irqsave(lock, flags)		\
-	__cond_lock(lock, ({				\
-		typecheck(unsigned long, flags);	\
-		flags = 0;				\
-		rt_spin_trylock(lock);			\
-	}))
+static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	*flags = 0;
+	return rt_spin_trylock(lock);
+}
+#define spin_trylock_irqsave(lock, flags)	_spin_trylock_irqsave(lock, &(flags))
 #define spin_is_contended(lock)		(((void)(lock), 0))

diff --git a/kernel/signal.c b/kernel/signal.c
index e42b8bd6922f..d65d0fe24bfb 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1355,8 +1355,8 @@ int zap_other_threads(struct task_struct *p)
 	return count;
 }

-struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
-					   unsigned long *flags)
+struct sighand_struct *lock_task_sighand(struct task_struct *tsk,
+					 unsigned long *flags)
 {
 	struct sighand_struct *sighand;

diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 80a8a09a21a0..413e2389f0a5 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -66,14 +66,7 @@ static const struct k_clock clock_realtime, clock_monotonic;
 #error "SIGEV_THREAD_ID must not share bit with other SIGEV values!"
 #endif

-static struct k_itimer *__lock_timer(timer_t timer_id);
-
-#define lock_timer(tid)							\
-({	struct k_itimer *__timr;					\
-	__cond_lock(&__timr->it_lock, __timr = __lock_timer(tid));	\
-	__timr;								\
-})
-
+static struct k_itimer *lock_timer(timer_t timer_id);
 static inline void unlock_timer(struct k_itimer *timr)
 {
 	if (likely((timr)))
@@ -85,7 +78,7 @@ static inline void unlock_timer(struct k_itimer *timr)

 #define scoped_timer				(scope)

-DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), __lock_timer(id), timer_t id);
+DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), lock_timer(id), timer_t id);
 DEFINE_CLASS_IS_COND_GUARD(lock_timer);

 static struct timer_hash_bucket *hash_bucket(struct signal_struct *sig, unsigned int nr)
@@ -600,7 +593,7 @@ COMPAT_SYSCALL_DEFINE3(timer_create, clockid_t, which_clock,
 }
 #endif

-static struct k_itimer *__lock_timer(timer_t timer_id)
+static struct k_itimer *lock_timer(timer_t timer_id)
 {
 	struct k_itimer *timr;

diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index 1dcca8f2e194..8c7c398fd770 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -18,7 +18,7 @@
  * because the spin-lock and the decrement must be
  * "atomic".
  */
-int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
+int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
@@ -32,7 +32,7 @@ int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 	return 0;
 }

-EXPORT_SYMBOL(_atomic_dec_and_lock);
+EXPORT_SYMBOL(atomic_dec_and_lock);

 int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
 				 unsigned long *flags)
@@ -50,7 +50,7 @@ int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
 }
 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave);

-int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
+int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
@@ -63,7 +63,7 @@ int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
 	raw_spin_unlock(lock);
 	return 0;
 }
-EXPORT_SYMBOL(_atomic_dec_and_raw_lock);
+EXPORT_SYMBOL(atomic_dec_and_raw_lock);

 int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
 				     unsigned long *flags)
diff --git a/lib/lockref.c b/lib/lockref.c
index 9210fc6ae714..5d8e3ef3860e 100644
--- a/lib/lockref.c
+++ b/lib/lockref.c
@@ -105,7 +105,6 @@ EXPORT_SYMBOL(lockref_put_return);
 * @lockref: pointer to lockref structure
 * Return: 1 if count updated successfully or 0 if count <= 1 and lock taken
 */
-#undef lockref_put_or_lock
 bool lockref_put_or_lock(struct lockref *lockref)
 {
 	CMPXCHG_LOOP(
diff --git a/mm/memory.c b/mm/memory.c
index 2a55edc48a65..b751e1f85abc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2210,8 +2210,8 @@ static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr)
 	return pmd;
 }

-pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
-			spinlock_t **ptl)
+pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
+		      spinlock_t **ptl)
 {
 	pmd_t *pmd = walk_to_pmd(mm, addr);

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index d3aec7a9926a..af7966169d69 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -280,7 +280,7 @@ static unsigned long pmdp_get_lockless_start(void) { return 0; }
 static void pmdp_get_lockless_end(unsigned long irqflags) { }
 #endif

-pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
+pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
 	unsigned long irqflags;
 	pmd_t pmdval;
@@ -332,13 +332,12 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 }

 /*
- * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation
- * __pte_offset_map_lock() below, is usually called with the pmd pointer for
- * addr, reached by walking down the mm's pgd, p4d, pud for addr: either while
- * holding mmap_lock or vma lock for read or for write; or in truncate or rmap
- * context, while holding file's i_mmap_lock or anon_vma lock for read (or for
- * write). In a few cases, it may be used with pmd pointing to a pmd_t already
- * copied to or constructed on the stack.
+ * pte_offset_map_lock(mm, pmd, addr, ptlp) is usually called with the pmd
+ * pointer for addr, reached by walking down the mm's pgd, p4d, pud for addr:
+ * either while holding mmap_lock or vma lock for read or for write; or in
+ * truncate or rmap context, while holding file's i_mmap_lock or anon_vma lock
+ * for read (or for write). In a few cases, it may be used with pmd pointing to
+ * a pmd_t already copied to or constructed on the stack.
 *
 * When successful, it returns the pte pointer for addr, with its page table
 * kmapped if necessary (when CONFIG_HIGHPTE), and locked against concurrent
@@ -389,8 +388,8 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 * table, and may not use RCU at all: "outsiders" like khugepaged should avoid
 * pte_offset_map() and co once the vma is detached from mm or mm_users is zero.
 */
-pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp)
+pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
+			   unsigned long addr, spinlock_t **ptlp)
 {
 	spinlock_t *ptl;
 	pmd_t pmdval;
diff --git a/tools/include/linux/compiler_types.h b/tools/include/linux/compiler_types.h
index d09f9dc172a4..067a5b4e0f7b 100644
--- a/tools/include/linux/compiler_types.h
+++ b/tools/include/linux/compiler_types.h
@@ -20,7 +20,6 @@
 # define __releases(x)	__attribute__((context(x,1,0)))
 # define __acquire(x)	__context__(x,1)
 # define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 #else /* __CHECKER__ */
 /* context/locking */
 # define __must_hold(x)
@@ -28,7 +27,6 @@
 # define __releases(x)
 # define __acquire(x)	(void)0
 # define __release(x)	(void)0
-# define __cond_lock(x,c)	(c)
 #endif /* __CHECKER__ */

 /* Compiler specific macros. */
--
2.52.0.322.g1dd061c0dc-goog
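For illustration, a minimal sketch of the conversion pattern this patch applies everywhere (my_trylock and my_user are hypothetical names, not from the patch). Under the old Sparse scheme, a conditionally acquiring function needed a real function plus a __cond_lock() macro wrapper; with __cond_acquires(), a single annotated declaration states "acquires @lock iff the return value compares equal to the given constant":

	/* Before: double definition so the checker sees the conditional acquire. */
	int _my_trylock(spinlock_t *lock);
	#define my_trylock(lock) __cond_lock(lock, _my_trylock(lock))

	/* After: one declaration carries the whole contract. */
	bool my_trylock(spinlock_t *lock) __cond_acquires(true, lock);

	void my_user(spinlock_t *lock)
	{
		if (my_trylock(lock)) {
			/* analysis: lock is held in this branch */
			spin_unlock(lock);
		}
		/* analysis: lock is not held here */
	}

This is also why the out-of-line irqsave trylock helpers appear above: a statement-expression macro cannot carry a function attribute, so the conditional-acquire contract is moved onto a real (inline) function.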
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:14 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-26-elver@google.com>
Subject: [PATCH v5 25/36] compiler-context-analysis: Introduce header suppressions
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon

While we can opt in individual subsystems which add the required
annotations, such subsystems inevitably include headers from other
subsystems that may not yet have the right annotations, which then
results in false positive warnings.

Making all common headers compatible by adding annotations currently
requires an excessive number of __no_context_analysis annotations, or
carefully analyzing non-trivial cases to add the correct annotations.
While this is desirable long-term, providing an incremental path causes
less churn and headaches for maintainers not yet interested in dealing
with such warnings.
Rather than clutter headers unnecessarily and mandate that all subsystem
maintainers keep their headers working with context analysis, suppress
all -Wthread-safety warnings in headers. Explicitly opt in headers with
context-enabled primitives.

With this in place, we can start enabling the analysis on more complex
subsystems in subsequent changes.

Signed-off-by: Marco Elver
---
v4:
* Rename capability -> context analysis.
---
 scripts/Makefile.context-analysis        |  4 +++
 scripts/context-analysis-suppression.txt | 32 ++++++++++++++++++++++++
 2 files changed, 36 insertions(+)
 create mode 100644 scripts/context-analysis-suppression.txt

diff --git a/scripts/Makefile.context-analysis b/scripts/Makefile.context-analysis
index 70549f7fae1a..cd3bb49d3f09 100644
--- a/scripts/Makefile.context-analysis
+++ b/scripts/Makefile.context-analysis
@@ -4,4 +4,8 @@ context-analysis-cflags := -DWARN_CONTEXT_ANALYSIS \
 	-fexperimental-late-parse-attributes -Wthread-safety \
 	-Wthread-safety-pointer -Wthread-safety-beta

+ifndef CONFIG_WARN_CONTEXT_ANALYSIS_ALL
+context-analysis-cflags += --warning-suppression-mappings=$(srctree)/scripts/context-analysis-suppression.txt
+endif
+
 export CFLAGS_CONTEXT_ANALYSIS := $(context-analysis-cflags)
diff --git a/scripts/context-analysis-suppression.txt b/scripts/context-analysis-suppression.txt
new file mode 100644
index 000000000000..df25c3d07a5b
--- /dev/null
+++ b/scripts/context-analysis-suppression.txt
@@ -0,0 +1,32 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# The suppressions file should only match common paths such as header files.
+# For individual subsystems use Makefile directive CONTEXT_ANALYSIS := [yn].
+#
+# The suppressions are ignored when CONFIG_WARN_CONTEXT_ANALYSIS_ALL is
+# selected.
+
+[thread-safety]
+src:*arch/*/include/*
+src:*include/acpi/*
+src:*include/asm-generic/*
+src:*include/linux/*
+src:*include/net/*
+
+# Opt-in headers:
+src:*include/linux/bit_spinlock.h=emit
+src:*include/linux/cleanup.h=emit
+src:*include/linux/kref.h=emit
+src:*include/linux/list*.h=emit
+src:*include/linux/local_lock*.h=emit
+src:*include/linux/lockdep.h=emit
+src:*include/linux/mutex*.h=emit
+src:*include/linux/rcupdate.h=emit
+src:*include/linux/refcount.h=emit
+src:*include/linux/rhashtable.h=emit
+src:*include/linux/rwlock*.h=emit
+src:*include/linux/rwsem.h=emit
+src:*include/linux/seqlock*.h=emit
+src:*include/linux/spinlock*.h=emit
+src:*include/linux/srcu*.h=emit
+src:*include/linux/ww_mutex.h=emit
--
2.52.0.322.g1dd061c0dc-goog
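To make the false-positive problem concrete, consider a hypothetical not-yet-annotated header (struct foo and its helpers are invented for illustration, assuming the annotated spinlock primitives from this series). Split lock/unlock helpers like these warn in every translation unit that includes the header, because the analysis sees an acquire without a matching release (and vice versa) and no attribute to justify it:

	struct foo {
		spinlock_t lock;
	};

	static inline void foo_lock(struct foo *f)
	{
		spin_lock(&f->lock);	/* warns: f->lock still held on return */
	}

	static inline void foo_unlock(struct foo *f)
	{
		spin_unlock(&f->lock);	/* warns: releasing a lock not held */
	}

With the suppression mappings in place, such headers stay quiet until they gain __acquires()/__releases() annotations and opt in via a "=emit" entry.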
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:15 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-27-elver@google.com>
Subject: [PATCH v5 26/36] compiler: Let data_race() imply disabled context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Miller" , Luc Van Oostenryck , Chris Li , "Paul E. McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Christoph Hellwig , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ian Rogers , Jann Horn , Joel Fernandes , Johannes Berg , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Lukas Bulwahn , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Thomas Graf , Uladzislau Rezki , Waiman Long , kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Many patterns that involve data-racy accesses often deliberately ignore normal synchronization rules to avoid taking a lock. If we have a lock-guarded variable on which we do a lock-less data-racy access, rather than having to write context_unsafe(data_race(..)), simply make the data_race(..) macro imply context-unsafety. The data_race() macro already denotes the intent that something subtly unsafe is about to happen, so it should be clear enough as-is. Signed-off-by: Marco Elver --- v4: * Rename capability -> context analysis. v2: * New patch. --- include/linux/compiler.h | 2 ++ lib/test_context-analysis.c | 2 ++ 2 files changed, 4 insertions(+) diff --git a/include/linux/compiler.h b/include/linux/compiler.h index 04487c9bd751..110b28dfd1d1 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -190,7 +190,9 @@ void ftrace_likely_update(struct ftrace_likely_data *f,= int val, #define data_race(expr) \ ({ \ __kcsan_disable_current(); \ + disable_context_analysis(); \ auto __v =3D (expr); \ + enable_context_analysis(); \ __kcsan_enable_current(); \ __v; \ }) diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c index 2dc404456497..1c5a381461fc 100644 --- a/lib/test_context-analysis.c +++ b/lib/test_context-analysis.c @@ -92,6 +92,8 @@ static void __used test_raw_spinlock_trylock_extra(struct= test_raw_spinlock_data { unsigned long flags; =20 + data_race(d->counter++); /* no warning */ + if (raw_spin_trylock_irq(&d->lock)) { d->counter++; raw_spin_unlock_irq(&d->lock); --=20 2.52.0.322.g1dd061c0dc-goog From nobody Sat Feb 7 12:29:45 2026 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C388234DB7E for ; Fri, 19 Dec 2025 15:47:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766159238; cv=none; b=jfBru2ECaYY245awCxiOLcmX5YiZadw9I5vHKy2n6UNVJGLvrezn9+BFebOWMgCA7oRq/UhJoF26DW3L2DeN8cHxKM9fDeKS2hwR2hgnt0Ju7rAosZ7/pTF3AKflD2kqW6Me6Wu1pFwOGGTITavopcluxdgMqMMx9fJkHGsMUWA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766159238; c=relaxed/simple; bh=CnQRybaMloUAntIxAZhPHUkaIjFKvuXDqT9A4epNaZ4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; 
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:16 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-28-elver@google.com>
Subject: [PATCH v5 27/36] MAINTAINERS: Add entry for Context Analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Christoph Hellwig , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ian Rogers , Jann Horn , Joel Fernandes , Johannes Berg , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Lukas Bulwahn , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Thomas Graf , Uladzislau Rezki , Waiman Long , kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add entry for all new files added for Clang's context analysis. Signed-off-by: Marco Elver Reviewed-by: Bart Van Assche --- v4: * Rename capability -> context analysis. --- MAINTAINERS | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 5b11839cba9d..2953b466107e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -6132,6 +6132,17 @@ M: Nelson Escobar S: Supported F: drivers/infiniband/hw/usnic/ =20 +CLANG CONTEXT ANALYSIS +M: Marco Elver +R: Bart Van Assche +L: llvm@lists.linux.dev +S: Maintained +F: Documentation/dev-tools/context-analysis.rst +F: include/linux/compiler-context-analysis.h +F: lib/test_context-analysis.c +F: scripts/Makefile.context-analysis +F: scripts/context-analysis-suppression.txt + CLANG CONTROL FLOW INTEGRITY SUPPORT M: Sami Tolvanen M: Kees Cook --=20 2.52.0.322.g1dd061c0dc-goog From nobody Sat Feb 7 12:29:45 2026 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 88B0734E25E for ; Fri, 19 Dec 2025 15:47:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766159242; cv=none; b=Cirqxcx+FdQoT7HuuxoqZhSd2IMR8Iw38ACqkr4k2GBxJ7QK+UxXK9C7BMFrKIJpSjj+4o+UYUlHW4qhIq9HgtNP7cWF0FTepItwnPzW2n6L90jfZosrIYbr+Lm/JqHQy7h5B+XcEQ9WX/y/3IDC7ZksOxhfQ7JC0Bzv0yWEz4U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1766159242; c=relaxed/simple; bh=8D0FoN0KkM0sT6D/shmnMDBi0BAc5PIEso3HlVAe2ZY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=kw/c75LD43bnAYEoG4VR5629JiDXiGeYaci9Vh906Dv6vKGYoZnlRo/AGTukaYb5fI2OIq1MOCqw2xQatNtrAjsbIK8FtHlF0eI1TA5jK2HXZdjtVwsxABvyyBUgvRr4I1pnENt3sP4UeDlqmar36FLtkzAUuk5QsfOzUugjZh4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--elver.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=WoVqqcU8; arc=none smtp.client-ip=209.85.128.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--elver.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="WoVqqcU8" Received: by 
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:17 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-29-elver@google.com>
Subject: [PATCH v5 28/36] kfence: Enable context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
McKenney" , Alexander Potapenko , Arnd Bergmann , Bart Van Assche , Christoph Hellwig , Dmitry Vyukov , Eric Dumazet , Frederic Weisbecker , Greg Kroah-Hartman , Herbert Xu , Ian Rogers , Jann Horn , Joel Fernandes , Johannes Berg , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Kentaro Takeda , Lukas Bulwahn , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Steven Rostedt , Tetsuo Handa , Thomas Gleixner , Thomas Graf , Uladzislau Rezki , Waiman Long , kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org, linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Enable context analysis for the KFENCE subsystem. Notable, kfence_handle_page_fault() required minor restructure, which also fixed a subtle race; arguably that function is more readable now. Signed-off-by: Marco Elver --- v4: * Rename capability -> context analysis. v2: * Remove disable/enable_context_analysis() around headers. * Use __context_unsafe() instead of __no_context_analysis. --- mm/kfence/Makefile | 2 ++ mm/kfence/core.c | 20 +++++++++++++------- mm/kfence/kfence.h | 14 ++++++++------ mm/kfence/report.c | 4 ++-- 4 files changed, 25 insertions(+), 15 deletions(-) diff --git a/mm/kfence/Makefile b/mm/kfence/Makefile index 2de2a58d11a1..a503e83e74d9 100644 --- a/mm/kfence/Makefile +++ b/mm/kfence/Makefile @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 =20 +CONTEXT_ANALYSIS :=3D y + obj-y :=3D core.o report.o =20 CFLAGS_kfence_test.o :=3D -fno-omit-frame-pointer -fno-optimize-sibling-ca= lls diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 577a1699c553..ebf442fb2c2b 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -133,8 +133,8 @@ struct kfence_metadata *kfence_metadata __read_mostly; static struct kfence_metadata *kfence_metadata_init __read_mostly; =20 /* Freelist with available objects. */ -static struct list_head kfence_freelist =3D LIST_HEAD_INIT(kfence_freelist= ); -static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freel= ist. */ +DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ +static struct list_head kfence_freelist __guarded_by(&kfence_freelist_lock= ) =3D LIST_HEAD_INIT(kfence_freelist); =20 /* * The static key to set up a KFENCE allocation; or if static keys are not= used @@ -254,6 +254,7 @@ static bool kfence_unprotect(unsigned long addr) } =20 static inline unsigned long metadata_to_pageaddr(const struct kfence_metad= ata *meta) + __must_hold(&meta->lock) { unsigned long offset =3D (meta - kfence_metadata + 1) * PAGE_SIZE * 2; unsigned long pageaddr =3D (unsigned long)&__kfence_pool[offset]; @@ -289,6 +290,7 @@ static inline bool kfence_obj_allocated(const struct kf= ence_metadata *meta) static noinline void metadata_update_state(struct kfence_metadata *meta, enum kfence_object_sta= te next, unsigned long *stack_entries, size_t num_stack_entries) + __must_hold(&meta->lock) { struct kfence_track *track =3D next =3D=3D KFENCE_OBJECT_ALLOCATED ? &meta->alloc_track : &meta->free_t= rack; @@ -486,7 +488,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *ca= che, size_t size, gfp_t g alloc_covered_add(alloc_stack_hash, 1); =20 /* Set required slab fields. 
-	slab = virt_to_slab((void *)meta->addr);
+	slab = virt_to_slab(addr);
 	slab->slab_cache = cache;
 	slab->objects = 1;

@@ -515,6 +517,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie)
 {
 	struct kcsan_scoped_access assert_page_exclusive;
+	u32 alloc_stack_hash;
 	unsigned long flags;
 	bool init;

@@ -547,9 +550,10 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	/* Mark the object as freed. */
 	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
 	init = slab_want_init_on_free(meta->cache);
+	alloc_stack_hash = meta->alloc_stack_hash;
 	raw_spin_unlock_irqrestore(&meta->lock, flags);

-	alloc_covered_add(meta->alloc_stack_hash, -1);
+	alloc_covered_add(alloc_stack_hash, -1);

 	/* Check canary bytes for memory corruption. */
 	check_canary(meta);
@@ -594,6 +598,7 @@ static void rcu_guarded_free(struct rcu_head *h)
 * which partial initialization succeeded.
 */
 static unsigned long kfence_init_pool(void)
+	__context_unsafe(/* constructor */)
 {
 	unsigned long addr, start_pfn;
 	int i;
@@ -1220,6 +1225,7 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 {
 	const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE;
 	struct kfence_metadata *to_report = NULL;
+	unsigned long unprotected_page = 0;
 	enum kfence_error_type error_type;
 	unsigned long flags;

@@ -1253,9 +1259,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 	if (!to_report)
 		goto out;

-	raw_spin_lock_irqsave(&to_report->lock, flags);
-	to_report->unprotected_page = addr;
 	error_type = KFENCE_ERROR_OOB;
+	unprotected_page = addr;

 	/*
	 * If the object was freed before we took the lock we can still
@@ -1267,7 +1272,6 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 	if (!to_report)
 		goto out;

-	raw_spin_lock_irqsave(&to_report->lock, flags);
 	error_type = KFENCE_ERROR_UAF;
 	/*
	 * We may race with __kfence_alloc(), and it is possible that a
@@ -1279,6 +1283,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs

 out:
 	if (to_report) {
+		raw_spin_lock_irqsave(&to_report->lock, flags);
+		to_report->unprotected_page = unprotected_page;
 		kfence_report_error(addr, is_write, regs, to_report, error_type);
 		raw_spin_unlock_irqrestore(&to_report->lock, flags);
 	} else {
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index dfba5ea06b01..f9caea007246 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -34,6 +34,8 @@
 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64

+extern raw_spinlock_t kfence_freelist_lock;
+
 /* KFENCE object states. */
 enum kfence_object_state {
 	KFENCE_OBJECT_UNUSED,		/* Object is unused. */
@@ -53,7 +55,7 @@ struct kfence_track {

 /* KFENCE metadata per guarded allocation. */
 struct kfence_metadata {
-	struct list_head list;		/* Freelist node; access under kfence_freelist_lock. */
+	struct list_head list __guarded_by(&kfence_freelist_lock);	/* Freelist node. */
 	struct rcu_head rcu_head;	/* For delayed freeing. */

 	/*
@@ -91,13 +93,13 @@ struct kfence_metadata {
	 * In case of an invalid access, the page that was unprotected; we
	 * optimistically only store one address.
	 */
-	unsigned long unprotected_page;
+	unsigned long unprotected_page __guarded_by(&lock);

 	/* Allocation and free stack information. */
-	struct kfence_track alloc_track;
-	struct kfence_track free_track;
+	struct kfence_track alloc_track __guarded_by(&lock);
+	struct kfence_track free_track __guarded_by(&lock);
 	/* For updating alloc_covered on frees. */
-	u32 alloc_stack_hash;
+	u32 alloc_stack_hash __guarded_by(&lock);
 #ifdef CONFIG_MEMCG
 	struct slabobj_ext obj_exts;
 #endif
@@ -141,6 +143,6 @@ enum kfence_error_type {
 void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *regs,
 			 const struct kfence_metadata *meta, enum kfence_error_type type);

-void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta);
+void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta) __must_hold(&meta->lock);

 #endif /* MM_KFENCE_KFENCE_H */
diff --git a/mm/kfence/report.c b/mm/kfence/report.c
index 10e6802a2edf..787e87c26926 100644
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -106,6 +106,7 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries

 static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta,
			       bool show_alloc)
+	__must_hold(&meta->lock)
 {
 	const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track;
 	u64 ts_sec = track->ts_nsec;
@@ -207,8 +208,6 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta))
 		return;

-	if (meta)
-		lockdep_assert_held(&meta->lock);
 	/*
	 * Because we may generate reports in printk-unfriendly parts of the
	 * kernel, such as scheduler code, the use of printk() could deadlock.
@@ -263,6 +262,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0);

 	if (meta) {
+		lockdep_assert_held(&meta->lock);
 		pr_err("\n");
 		kfence_print_object(NULL, meta);
 	}
--
2.52.0.322.g1dd061c0dc-goog
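The annotation pattern applied to kfence_metadata generalizes to any lock-guarded structure. As a hedged sketch (struct obj and its helpers are invented for illustration, not part of the patch): a member annotated __guarded_by() may only be touched while the named lock is held, and __must_hold() pushes that obligation onto the caller, which the analysis then checks at every call site:

	struct obj {
		raw_spinlock_t lock;
		int state __guarded_by(&lock);	/* only touched under @lock */
	};

	static void obj_update(struct obj *o, int state)
		__must_hold(&o->lock)		/* caller acquires and releases */
	{
		o->state = state;		/* OK: contract guarantees the lock */
	}

	static void obj_set(struct obj *o, int state)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&o->lock, flags);
		obj_update(o, state);		/* OK: lock held here */
		raw_spin_unlock_irqrestore(&o->lock, flags);
	}

Calling obj_update() without the lock, or reading o->state outside the critical section, would produce a -Wthread-safety warning at compile time.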
From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:18 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-30-elver@google.com>
Subject: [PATCH v5 29/36] kcov: Enable context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Signed-off-by: Marco Elver <elver@google.com>
---
v4:
* Fix new temporary variable type.
* Rename capability -> context analysis.

v2:
* Remove disable/enable_context_analysis() around headers.
---
 kernel/Makefile |  2 ++
 kernel/kcov.c   | 36 +++++++++++++++++++++++-----------
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/kernel/Makefile b/kernel/Makefile
index e83669841b8c..6785982013dc 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -43,6 +43,8 @@ KASAN_SANITIZE_kcov.o := n
 KCSAN_SANITIZE_kcov.o := n
 UBSAN_SANITIZE_kcov.o := n
 KMSAN_SANITIZE_kcov.o := n
+
+CONTEXT_ANALYSIS_kcov.o := y
 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector
 
 obj-y += sched/
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 6563141f5de9..6cbc6e2d8aee 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -55,13 +55,13 @@ struct kcov {
 	refcount_t refcount;
 	/* The lock protects mode, size, area and t. */
 	spinlock_t lock;
-	enum kcov_mode mode;
+	enum kcov_mode mode __guarded_by(&lock);
 	/* Size of arena (in long's). */
-	unsigned int size;
+	unsigned int size __guarded_by(&lock);
 	/* Coverage buffer shared with user space. */
-	void *area;
+	void *area __guarded_by(&lock);
 	/* Task for which we collect coverage, or NULL. */
-	struct task_struct *t;
+	struct task_struct *t __guarded_by(&lock);
 	/* Collecting coverage from remote (background) threads. */
 	bool remote;
 	/* Size of remote area (in long's). */
@@ -391,6 +391,7 @@ void kcov_task_init(struct task_struct *t)
 }
 
 static void kcov_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov->t = NULL;
 	kcov->mode = KCOV_MODE_INIT;
@@ -400,6 +401,7 @@ static void kcov_reset(struct kcov *kcov)
 }
 
 static void kcov_remote_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	int bkt;
 	struct kcov_remote *remote;
@@ -419,6 +421,7 @@ static void kcov_remote_reset(struct kcov *kcov)
 }
 
 static void kcov_disable(struct task_struct *t, struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov_task_reset(t);
 	if (kcov->remote)
@@ -435,8 +438,11 @@ static void kcov_get(struct kcov *kcov)
 static void kcov_put(struct kcov *kcov)
 {
 	if (refcount_dec_and_test(&kcov->refcount)) {
-		kcov_remote_reset(kcov);
-		vfree(kcov->area);
+		/* Context-safety: no references left, object being destroyed. */
+		context_unsafe(
+			kcov_remote_reset(kcov);
+			vfree(kcov->area);
+		);
 		kfree(kcov);
 	}
 }
@@ -491,6 +497,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 	unsigned long size, off;
 	struct page *page;
 	unsigned long flags;
+	void *area;
 
 	spin_lock_irqsave(&kcov->lock, flags);
 	size = kcov->size * sizeof(unsigned long);
@@ -499,10 +506,11 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 		res = -EINVAL;
 		goto exit;
 	}
+	area = kcov->area;
 	spin_unlock_irqrestore(&kcov->lock, flags);
 	vm_flags_set(vma, VM_DONTEXPAND);
 	for (off = 0; off < size; off += PAGE_SIZE) {
-		page = vmalloc_to_page(kcov->area + off);
+		page = vmalloc_to_page(area + off);
 		res = vm_insert_page(vma, vma->vm_start + off, page);
 		if (res) {
 			pr_warn_once("kcov: vm_insert_page() failed\n");
@@ -522,10 +530,10 @@ static int kcov_open(struct inode *inode, struct file *filep)
 	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
 	if (!kcov)
 		return -ENOMEM;
+	spin_lock_init(&kcov->lock);
 	kcov->mode = KCOV_MODE_DISABLED;
 	kcov->sequence = 1;
 	refcount_set(&kcov->refcount, 1);
-	spin_lock_init(&kcov->lock);
 	filep->private_data = kcov;
 	return nonseekable_open(inode, filep);
 }
@@ -556,6 +564,7 @@ static int kcov_get_mode(unsigned long arg)
  * vmalloc fault handling path is instrumented.
  */
 static void kcov_fault_in_area(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
 	unsigned long *area = kcov->area;
@@ -584,6 +593,7 @@ static inline bool kcov_check_handle(u64 handle, bool common_valid,
 
 static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			     unsigned long arg)
+	__must_hold(&kcov->lock)
 {
 	struct task_struct *t;
 	unsigned long flags, unused;
@@ -814,6 +824,7 @@ static inline bool kcov_mode_enabled(unsigned int mode)
 }
 
 static void kcov_remote_softirq_start(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 	unsigned int mode;
@@ -831,6 +842,7 @@ static void kcov_remote_softirq_start(struct task_struct *t)
 }
 
 static void kcov_remote_softirq_stop(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 
@@ -896,10 +908,12 @@ void kcov_remote_start(u64 handle)
 	/* Put in kcov_remote_stop(). */
 	kcov_get(kcov);
 	/*
-	 * Read kcov fields before unlock to prevent races with
-	 * KCOV_DISABLE / kcov_remote_reset().
+	 * Read kcov fields before unlocking kcov_remote_lock to prevent races
+	 * with KCOV_DISABLE and kcov_remote_reset(); cannot acquire kcov->lock
+	 * here, because it might lead to deadlock given kcov_remote_lock is
+	 * acquired _after_ kcov->lock elsewhere.
 	 */
-	mode = kcov->mode;
+	mode = context_unsafe(kcov->mode);
 	sequence = kcov->sequence;
 	if (in_task()) {
 		size = kcov->remote_size;
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:19 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-31-elver@google.com>
Subject: [PATCH v5 30/36] kcsan: Enable context analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com

Enable context analysis for the KCSAN subsystem.
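
The conversion below leans on __must_not_hold() and the updated
__cond_acquires(ret, lock) form, where the lock is held on return only
if the function returns "ret". A rough sketch of the conditional-acquire
pattern (hypothetical example_lock/try_begin_report(), not code from
this patch):

  static DEFINE_RAW_SPINLOCK(example_lock);

  /* Returns true with example_lock held; false with it not held. */
  static bool try_begin_report(void)
  	__cond_acquires(true, &example_lock)
  {
  	return raw_spin_trylock(&example_lock);
  }

  static void end_report(void)
  	__releases(&example_lock)
  {
  	raw_spin_unlock(&example_lock);
  }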
Signed-off-by: Marco Elver <elver@google.com>
---
v4:
* Rename capability -> context analysis.

v3:
* New patch.
---
 kernel/kcsan/Makefile |  2 ++
 kernel/kcsan/report.c | 11 ++++++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
index a45f3dfc8d14..824f30c93252 100644
--- a/kernel/kcsan/Makefile
+++ b/kernel/kcsan/Makefile
@@ -1,4 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
+CONTEXT_ANALYSIS := y
+
 KCSAN_SANITIZE := n
 KCOV_INSTRUMENT := n
 UBSAN_SANITIZE := n
diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
index e95ce7d7a76e..11a48b78f8d1 100644
--- a/kernel/kcsan/report.c
+++ b/kernel/kcsan/report.c
@@ -116,6 +116,7 @@ static DEFINE_RAW_SPINLOCK(report_lock);
  * been reported since (now - KCSAN_REPORT_ONCE_IN_MS).
  */
 static bool rate_limit_report(unsigned long frame1, unsigned long frame2)
+	__must_hold(&report_lock)
 {
 	struct report_time *use_entry = &report_times[0];
 	unsigned long invalid_before;
@@ -366,6 +367,7 @@ static int sym_strcmp(void *addr1, void *addr2)
 
 static void
 print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to)
+	__must_hold(&report_lock)
 {
 	stack_trace_print(stack_entries, num_entries, 0);
 	if (reordered_to)
@@ -373,6 +375,7 @@ print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long
 }
 
 static void print_verbose_info(struct task_struct *task)
+	__must_hold(&report_lock)
 {
 	if (!task)
 		return;
@@ -389,6 +392,7 @@ static void print_report(enum kcsan_value_change value_change,
 			 const struct access_info *ai,
 			 struct other_info *other_info,
 			 u64 old, u64 new, u64 mask)
+	__must_hold(&report_lock)
 {
 	unsigned long reordered_to = 0;
 	unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
@@ -496,6 +500,7 @@ static void print_report(enum kcsan_value_change value_change,
 }
 
 static void release_report(unsigned long *flags, struct other_info *other_info)
+	__releases(&report_lock)
 {
 	/*
 	 * Use size to denote valid/invalid, since KCSAN entirely ignores
@@ -507,13 +512,11 @@ static void release_report(unsigned long *flags, struct other_info *other_info)
 
 /*
  * Sets @other_info->task and awaits consumption of @other_info.
- *
- * Precondition: report_lock is held.
- * Postcondition: report_lock is held.
  */
 static void set_other_info_task_blocking(unsigned long *flags,
 					 const struct access_info *ai,
 					 struct other_info *other_info)
+	__must_hold(&report_lock)
 {
 	/*
 	 * We may be instrumenting a code-path where current->state is already
@@ -572,6 +575,7 @@ static void set_other_info_task_blocking(unsigned long *flags,
 static void prepare_report_producer(unsigned long *flags,
 				    const struct access_info *ai,
 				    struct other_info *other_info)
+	__must_not_hold(&report_lock)
 {
 	raw_spin_lock_irqsave(&report_lock, *flags);
 
@@ -603,6 +607,7 @@ static void prepare_report_producer(unsigned long *flags,
 static bool prepare_report_consumer(unsigned long *flags,
 				    const struct access_info *ai,
 				    struct other_info *other_info)
+	__cond_acquires(true, &report_lock)
 {
 
 	raw_spin_lock_irqsave(&report_lock, *flags);
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:20 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-32-elver@google.com>
Subject: [PATCH v5 31/36] stackdepot: Enable context analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com

Enable context analysis for stackdepot.
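
The conversion distinguishes __guarded_by (the variable itself is
protected) from __pt_guarded_by (only the pointed-to data is protected;
the pointer itself may be read locklessly). A minimal sketch with
hypothetical names:

  static DEFINE_RAW_SPINLOCK(example_lock);

  static int *example_ptr __pt_guarded_by(&example_lock);	/* *example_ptr needs the lock */
  static int example_val __guarded_by(&example_lock);		/* example_val itself needs the lock */

  static void example(void)
  {
  	int *p = example_ptr;		/* ok: reading the pointer is not guarded */

  	raw_spin_lock(&example_lock);
  	*p = example_val;		/* ok: lock held for both guarded accesses */
  	raw_spin_unlock(&example_lock);
  }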
Signed-off-by: Marco Elver <elver@google.com>
---
v4:
* Rename capability -> context analysis.

v2:
* Remove disable/enable_context_analysis() around headers.
---
 lib/Makefile     |  1 +
 lib/stackdepot.c | 20 ++++++++++++++------
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/lib/Makefile b/lib/Makefile
index 89defefbf6c0..e755eee4e76f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -250,6 +250,7 @@ obj-$(CONFIG_POLYNOMIAL) += polynomial.o
 # Prevent the compiler from calling builtins like memcmp() or bcmp() from this
 # file.
 CFLAGS_stackdepot.o += -fno-builtin
+CONTEXT_ANALYSIS_stackdepot.o := y
 obj-$(CONFIG_STACKDEPOT) += stackdepot.o
 KASAN_SANITIZE_stackdepot.o := n
 # In particular, instrumenting stackdepot.c with KMSAN will result in infinite
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index de0b0025af2b..166f50ad8391 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -61,18 +61,18 @@ static unsigned int stack_bucket_number_order;
 /* Hash mask for indexing the table. */
 static unsigned int stack_hash_mask;
 
+/* The lock must be held when performing pool or freelist modifications. */
+static DEFINE_RAW_SPINLOCK(pool_lock);
 /* Array of memory regions that store stack records. */
-static void **stack_pools;
+static void **stack_pools __pt_guarded_by(&pool_lock);
 /* Newly allocated pool that is not yet added to stack_pools. */
 static void *new_pool;
 /* Number of pools in stack_pools. */
 static int pools_num;
 /* Offset to the unused space in the currently used pool. */
-static size_t pool_offset = DEPOT_POOL_SIZE;
+static size_t pool_offset __guarded_by(&pool_lock) = DEPOT_POOL_SIZE;
 /* Freelist of stack records within stack_pools. */
-static LIST_HEAD(free_stacks);
-/* The lock must be held when performing pool or freelist modifications. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
+static __guarded_by(&pool_lock) LIST_HEAD(free_stacks);
 
 /* Statistics counters for debugfs. */
 enum depot_counter_id {
@@ -291,6 +291,7 @@ EXPORT_SYMBOL_GPL(stack_depot_init);
  * Initializes new stack pool, and updates the list of pools.
  */
 static bool depot_init_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -338,6 +339,7 @@ static bool depot_init_pool(void **prealloc)
 
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -357,6 +359,7 @@ static void depot_keep_new_pool(void **prealloc)
  * the current pre-allocation.
  */
 static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 	void *current_pool;
@@ -391,6 +394,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
 
 /* Try to find next free usable entry from the freelist. */
 static struct stack_record *depot_pop_free(void)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 
@@ -428,6 +432,7 @@ static inline size_t depot_stack_record_size(struct stack_record *s, unsigned in
 /* Allocates a new stack in a stack depot pool. */
 static struct stack_record *
 depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, depot_flags_t flags, void **prealloc)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack = NULL;
 	size_t record_size;
@@ -486,6 +491,7 @@ depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, dep
 }
 
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
+	__must_not_hold(&pool_lock)
 {
 	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
@@ -502,7 +508,8 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 		return NULL;
 	}
 
-	pool = stack_pools[pool_index];
+	/* @pool_index either valid, or user passed in corrupted value. */
+	pool = context_unsafe(stack_pools[pool_index]);
 	if (WARN_ON(!pool))
 		return NULL;
 
@@ -515,6 +522,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 
 /* Links stack into the freelist. */
 static void depot_free_stack(struct stack_record *stack)
+	__must_not_hold(&pool_lock)
 {
 	unsigned long flags;
 
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:21 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-33-elver@google.com>
Subject: [PATCH v5 32/36] rhashtable: Enable context analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: Thomas Graf, Herbert Xu, linux-crypto@vger.kernel.org

Enable context analysis for rhashtable, which was used as an initial
test as it contains a combination of RCU, mutex, and bit_spinlock
usage.

Users of rhashtable now also benefit from annotations on the API, which
will now warn if the RCU read lock is not held where required.
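
For instance, with rhashtable_lookup() now annotated
__must_hold_shared(RCU), a caller that forgets rcu_read_lock() is
flagged at compile time. A sketch of a correctly annotated caller
(hypothetical example_find(), assuming a suitable table and params):

  static void *example_find(struct rhashtable *ht, const void *key,
  			  const struct rhashtable_params params)
  {
  	void *obj;

  	rcu_read_lock();	/* enters the shared RCU context */
  	obj = rhashtable_lookup(ht, key, params);	/* ok: RCU held */
  	rcu_read_unlock();	/* obj must not be dereferenced past this point */

  	return obj;
  }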
Signed-off-by: Marco Elver <elver@google.com>
Cc: Thomas Graf
Cc: Herbert Xu
Cc: linux-crypto@vger.kernel.org
---
v5:
* Fix annotations for recently modified/added functions.

v4:
* Rename capability -> context analysis.

v2:
* Remove disable/enable_context_analysis() around headers.
---
 include/linux/rhashtable.h | 16 +++++++++++++---
 lib/Makefile               |  2 ++
 lib/rhashtable.c           |  5 +++--
 3 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
index 08e664b21f5a..133ccb39137a 100644
--- a/include/linux/rhashtable.h
+++ b/include/linux/rhashtable.h
@@ -245,16 +245,17 @@ void *rhashtable_insert_slow(struct rhashtable *ht, const void *key,
 void rhashtable_walk_enter(struct rhashtable *ht,
 			   struct rhashtable_iter *iter);
 void rhashtable_walk_exit(struct rhashtable_iter *iter);
-int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires(RCU);
+int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires_shared(RCU);
 
 static inline void rhashtable_walk_start(struct rhashtable_iter *iter)
+	__acquires_shared(RCU)
 {
 	(void)rhashtable_walk_start_check(iter);
 }
 
 void *rhashtable_walk_next(struct rhashtable_iter *iter);
 void *rhashtable_walk_peek(struct rhashtable_iter *iter);
-void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases(RCU);
+void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases_shared(RCU);
 
 void rhashtable_free_and_destroy(struct rhashtable *ht,
 				 void (*free_fn)(void *ptr, void *arg),
@@ -325,6 +326,7 @@ static inline struct rhash_lock_head __rcu **rht_bucket_insert(
 
 static inline unsigned long rht_lock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt)
+	__acquires(__bitlock(0, bkt))
 {
 	unsigned long flags;
 
@@ -337,6 +339,7 @@ static inline unsigned long rht_lock(struct bucket_table *tbl,
 static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 					    struct rhash_lock_head __rcu **bucket,
 					    unsigned int subclass)
+	__acquires(__bitlock(0, bucket))
 {
 	unsigned long flags;
 
@@ -349,6 +352,7 @@ static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 static inline void rht_unlock(struct bucket_table *tbl,
 			      struct rhash_lock_head __rcu **bkt,
 			      unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	lock_map_release(&tbl->dep_map);
 	bit_spin_unlock(0, (unsigned long *)bkt);
@@ -424,13 +428,14 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt,
 				     struct rhash_head *obj,
 				     unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	if (rht_is_a_nulls(obj))
 		obj = NULL;
 	lock_map_release(&tbl->dep_map);
 	rcu_assign_pointer(*bkt, (void *)obj);
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(0, bkt));
 	local_irq_restore(flags);
 }
 
@@ -612,6 +617,7 @@ static __always_inline struct rhash_head *__rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params,
 	const enum rht_lookup_freq freq)
+	__must_hold_shared(RCU)
 {
 	struct rhashtable_compare_arg arg = {
 		.ht = ht,
@@ -666,6 +672,7 @@ static __always_inline struct rhash_head *__rhashtable_lookup(
 static __always_inline void *rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params,
 						    RHT_LOOKUP_NORMAL);
@@ -676,6 +683,7 @@ static __always_inline void *rhashtable_lookup(
 static __always_inline void *rhashtable_lookup_likely(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params,
 						    RHT_LOOKUP_LIKELY);
@@ -727,6 +735,7 @@ static __always_inline void *rhashtable_lookup_fast(
 static __always_inline struct rhlist_head *rhltable_lookup(
 	struct rhltable *hlt, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params,
 						    RHT_LOOKUP_NORMAL);
@@ -737,6 +746,7 @@ static __always_inline struct rhlist_head *rhltable_lookup(
 static __always_inline struct rhlist_head *rhltable_lookup_likely(
 	struct rhltable *hlt, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params,
 						    RHT_LOOKUP_LIKELY);
diff --git a/lib/Makefile b/lib/Makefile
index e755eee4e76f..22d8742bba57 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -50,6 +50,8 @@ lib-$(CONFIG_MIN_HEAP) += min_heap.o
 lib-y += kobject.o klist.o
 obj-y += lockref.o
 
+CONTEXT_ANALYSIS_rhashtable.o := y
+
 obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
 	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
 	 list_sort.o uuid.o iov_iter.o clz_ctz.o \
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index fde0f0e556f8..6074ed5f66f3 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -358,6 +358,7 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
 static int rhashtable_rehash_alloc(struct rhashtable *ht,
 				   struct bucket_table *old_tbl,
 				   unsigned int size)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *new_tbl;
 	int err;
@@ -392,6 +393,7 @@ static int rhashtable_rehash_alloc(struct rhashtable *ht,
  * bucket locks or concurrent RCU protected lookups and traversals.
  */
 static int rhashtable_shrink(struct rhashtable *ht)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
 	unsigned int nelems = atomic_read(&ht->nelems);
@@ -724,7 +726,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
  * resize events and always continue.
  */
 int rhashtable_walk_start_check(struct rhashtable_iter *iter)
-	__acquires(RCU)
+	__acquires_shared(RCU)
 {
 	struct rhashtable *ht = iter->ht;
 	bool rhlist = ht->rhlist;
@@ -940,7 +942,6 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_peek);
 * hash table.
 */
 void rhashtable_walk_stop(struct rhashtable_iter *iter)
-	__releases(RCU)
 {
 	struct rhashtable *ht;
 	struct bucket_table *tbl = iter->walker.tbl;
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:22 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-34-elver@google.com>
Subject: [PATCH v5 33/36] printk: Move locking annotation to printk.c
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: linux-kernel@vger.kernel.org

With Sparse support gone, Clang is a bit more strict and warns:

  ./include/linux/console.h:492:50: error: use of undeclared identifier 'console_mutex'
    492 | extern void console_list_lock(void) __acquires(console_mutex);

Since it does not make sense to make console_mutex itself global, move
the annotation to printk.c. Context analysis remains disabled for
printk.c.

This is needed to enable context analysis for modules that include
<linux/console.h>.
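
The resulting pattern, roughly (a sketch with hypothetical example_*
names): the header declaration stays unannotated because it cannot name
the file-local lock, and the acquire/release annotations move to the
definitions that can:

  /* example.h: the mutex is not visible here, so no annotation. */
  void example_list_lock(void);
  void example_list_unlock(void);

  /* example.c: annotations on the definitions, next to the lock. */
  static DEFINE_MUTEX(example_mutex);

  void example_list_lock(void)
  	__acquires(&example_mutex)
  {
  	mutex_lock(&example_mutex);
  }

  void example_list_unlock(void)
  	__releases(&example_mutex)
  {
  	mutex_unlock(&example_mutex);
  }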
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* New patch.
---
 include/linux/console.h | 4 ++--
 kernel/printk/printk.c  | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/console.h b/include/linux/console.h
index fc9f5c5c1b04..f882833bedf0 100644
--- a/include/linux/console.h
+++ b/include/linux/console.h
@@ -492,8 +492,8 @@ static inline bool console_srcu_read_lock_is_held(void)
 extern int console_srcu_read_lock(void);
 extern void console_srcu_read_unlock(int cookie);
 
-extern void console_list_lock(void) __acquires(console_mutex);
-extern void console_list_unlock(void) __releases(console_mutex);
+extern void console_list_lock(void);
+extern void console_list_unlock(void);
 
 extern struct hlist_head console_list;
 
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 1d765ad242b8..37d16ef27f13 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -245,6 +245,7 @@ int devkmsg_sysctl_set_loglvl(const struct ctl_table *table, int write,
  * For console list or console->flags updates
  */
 void console_list_lock(void)
+	__acquires(&console_mutex)
 {
 	/*
 	 * In unregister_console() and console_force_preferred_locked(),
@@ -269,6 +270,7 @@ EXPORT_SYMBOL(console_list_lock);
  * Counterpart to console_list_lock()
  */
 void console_list_unlock(void)
+	__releases(&console_mutex)
 {
 	mutex_unlock(&console_mutex);
 }
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:23 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-35-elver@google.com>
Subject: [PATCH v5 34/36] security/tomoyo: Enable context analysis
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: Kentaro Takeda, Tetsuo Handa, linux-security-module@vger.kernel.org

Enable context analysis for security/tomoyo. This demonstrates a larger
conversion to use Clang's context analysis. The benefit is additional
static checking of locking rules, along with better documentation.

Tomoyo makes use of several synchronization primitives, yet its clear
design made it relatively straightforward to enable context analysis.

One notable finding was:

  security/tomoyo/gc.c:664:20: error: reading variable 'write_buf' requires holding mutex '&tomoyo_io_buffer::io_sem'
    664 |         is_write = head->write_buf != NULL;

For which Tetsuo writes:

  "Good catch. This should be data_race(), for tomoyo_write_control()
   might concurrently update head->write_buf from non-NULL to non-NULL
   with head->io_sem held."
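
A fix along the lines Tetsuo suggests would mark the racy read
explicitly (a sketch only; this fix is not part of this patch):

  	/* Writers only ever update write_buf from non-NULL to non-NULL. */
  	is_write = data_race(head->write_buf) != NULL;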
Signed-off-by: Marco Elver <elver@google.com>
Cc: Kentaro Takeda
Cc: Tetsuo Handa
---
v4:
* Rename capability -> context analysis.

v2:
* New patch.
---
 security/tomoyo/Makefile  |  2 +
 security/tomoyo/common.c  | 52 ++++++++++++++++++++++++--
 security/tomoyo/common.h  | 77 ++++++++++++++++++++-------------------
 security/tomoyo/domain.c  |  1 +
 security/tomoyo/environ.c |  1 +
 security/tomoyo/file.c    |  5 +++
 security/tomoyo/gc.c      | 28 ++++++++++----
 security/tomoyo/mount.c   |  2 +
 security/tomoyo/network.c |  3 ++
 9 files changed, 122 insertions(+), 49 deletions(-)

diff --git a/security/tomoyo/Makefile b/security/tomoyo/Makefile
index 55c67b9846a9..e3c0f853aa3b 100644
--- a/security/tomoyo/Makefile
+++ b/security/tomoyo/Makefile
@@ -1,4 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
+CONTEXT_ANALYSIS := y
+
 obj-y = audit.o common.o condition.o domain.o environ.o file.o gc.o group.o load_policy.o memory.o mount.o network.o realpath.o securityfs_if.o tomoyo.o util.o
 
 targets += builtin-policy.h
diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c
index 0f78898bce09..86ce56c32d37 100644
--- a/security/tomoyo/common.c
+++ b/security/tomoyo/common.c
@@ -268,6 +268,7 @@ static void tomoyo_io_printf(struct tomoyo_io_buffer *head, const char *fmt,
  */
 static void tomoyo_io_printf(struct tomoyo_io_buffer *head, const char *fmt, ...)
+	__must_hold(&head->io_sem)
 {
 	va_list args;
 	size_t len;
@@ -416,8 +417,9 @@ static void tomoyo_print_name_union_quoted(struct tomoyo_io_buffer *head,
  *
  * Returns nothing.
  */
-static void tomoyo_print_number_union_nospace
-(struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr)
+static void
+tomoyo_print_number_union_nospace(struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr)
+	__must_hold(&head->io_sem)
 {
 	if (ptr->group) {
 		tomoyo_set_string(head, "@");
@@ -466,6 +468,7 @@ static void tomoyo_print_number_union_nospace
  */
 static void tomoyo_print_number_union(struct tomoyo_io_buffer *head,
 				      const struct tomoyo_number_union *ptr)
+	__must_hold(&head->io_sem)
 {
 	tomoyo_set_space(head);
 	tomoyo_print_number_union_nospace(head, ptr);
@@ -664,6 +667,7 @@ static int tomoyo_set_mode(char *name, const char *value,
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_write_profile(struct tomoyo_io_buffer *head)
+	__must_hold(&head->io_sem)
 {
 	char *data = head->write_buf;
 	unsigned int i;
@@ -719,6 +723,7 @@ static int tomoyo_write_profile(struct tomoyo_io_buffer *head)
  * Caller prints functionality's name.
  */
 static void tomoyo_print_config(struct tomoyo_io_buffer *head, const u8 config)
+	__must_hold(&head->io_sem)
 {
 	tomoyo_io_printf(head, "={ mode=%s grant_log=%s reject_log=%s }\n",
 			 tomoyo_mode[config & 3],
@@ -734,6 +739,7 @@ static void tomoyo_print_config(struct tomoyo_io_buffer *head, const u8 config)
 * Returns nothing.
 */
 static void tomoyo_read_profile(struct tomoyo_io_buffer *head)
+	__must_hold(&head->io_sem)
 {
 	u8 index;
 	struct tomoyo_policy_namespace *ns =
@@ -852,6 +858,7 @@ static bool tomoyo_same_manager(const struct tomoyo_acl_head *a,
  */
 static int tomoyo_update_manager_entry(const char *manager,
 				       const bool is_delete)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_manager e = { };
 	struct tomoyo_acl_param param = {
@@ -883,6 +890,8 @@ static int tomoyo_update_manager_entry(const char *manager,
 * Caller holds tomoyo_read_lock().
 */
 static int tomoyo_write_manager(struct tomoyo_io_buffer *head)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	char *data = head->write_buf;
 
@@ -901,6 +910,7 @@ static int tomoyo_write_manager(struct tomoyo_io_buffer *head)
 * Caller holds tomoyo_read_lock().
 */
 static void tomoyo_read_manager(struct tomoyo_io_buffer *head)
+	__must_hold_shared(&tomoyo_ss)
 {
 	if (head->r.eof)
 		return;
@@ -927,6 +937,7 @@ static void tomoyo_read_manager(struct tomoyo_io_buffer *head)
 * Caller holds tomoyo_read_lock().
 */
 static bool tomoyo_manager(void)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_manager *ptr;
 	const char *exe;
@@ -981,6 +992,8 @@ static struct tomoyo_domain_info *tomoyo_find_domain_by_qid
 */
 static bool tomoyo_select_domain(struct tomoyo_io_buffer *head,
 				 const char *data)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	unsigned int pid;
 	struct tomoyo_domain_info *domain = NULL;
@@ -1051,6 +1064,7 @@ static bool tomoyo_same_task_acl(const struct tomoyo_acl_info *a,
 * Caller holds tomoyo_read_lock().
 */
 static int tomoyo_write_task(struct tomoyo_acl_param *param)
+	__must_hold_shared(&tomoyo_ss)
 {
 	int error = -EINVAL;
 
@@ -1079,6 +1093,7 @@ static int tomoyo_write_task(struct tomoyo_acl_param *param)
 * Caller holds tomoyo_read_lock().
 */
 static int tomoyo_delete_domain(char *domainname)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_domain_info *domain;
 	struct tomoyo_path_info name;
@@ -1118,6 +1133,7 @@ static int tomoyo_delete_domain(char *domainname)
 static int tomoyo_write_domain2(struct tomoyo_policy_namespace *ns,
 				struct list_head *list, char *data,
 				const bool is_delete)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_acl_param param = {
 		.ns = ns,
@@ -1162,6 +1178,8 @@ const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS] = {
 * Caller holds tomoyo_read_lock().
 */
 static int tomoyo_write_domain(struct tomoyo_io_buffer *head)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	char *data = head->write_buf;
 	struct tomoyo_policy_namespace *ns;
@@ -1223,6 +1241,7 @@ static int tomoyo_write_domain(struct tomoyo_io_buffer *head)
 */
 static bool tomoyo_print_condition(struct tomoyo_io_buffer *head,
 				   const struct tomoyo_condition *cond)
+	__must_hold(&head->io_sem)
 {
 	switch (head->r.cond_step) {
 	case 0:
@@ -1364,6 +1383,7 @@ static bool tomoyo_print_condition(struct tomoyo_io_buffer *head,
 */
 static void tomoyo_set_group(struct tomoyo_io_buffer *head,
 			     const char *category)
+	__must_hold(&head->io_sem)
 {
 	if (head->type == TOMOYO_EXCEPTIONPOLICY) {
 		tomoyo_print_namespace(head);
@@ -1383,6 +1403,7 @@ static void tomoyo_set_group(struct tomoyo_io_buffer *head,
 */
 static bool tomoyo_print_entry(struct tomoyo_io_buffer *head,
 			       struct tomoyo_acl_info *acl)
+	__must_hold(&head->io_sem)
 {
 	const u8 acl_type = acl->type;
 	bool first = true;
@@ -1588,6 +1609,8 @@ static bool tomoyo_print_entry(struct tomoyo_io_buffer *head,
 */
 static bool tomoyo_read_domain2(struct tomoyo_io_buffer *head,
 				struct list_head *list)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	list_for_each_cookie(head->r.acl, list) {
 		struct tomoyo_acl_info *ptr =
@@ -1608,6 +1631,8 @@ static bool tomoyo_read_domain2(struct tomoyo_io_buffer *head,
 * Caller holds tomoyo_read_lock().
 */
 static void tomoyo_read_domain(struct tomoyo_io_buffer *head)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	if (head->r.eof)
 		return;
@@ -1686,6 +1711,7 @@ static int tomoyo_write_pid(struct tomoyo_io_buffer *head)
 * using read()/write() interface rather than sysctl() interface.
 */
 static void tomoyo_read_pid(struct tomoyo_io_buffer *head)
+	__must_hold(&head->io_sem)
 {
 	char *buf = head->write_buf;
 	bool global_pid = false;
@@ -1746,6 +1772,8 @@ static const char *tomoyo_group_name[TOMOYO_MAX_GROUP] = {
 * Caller holds tomoyo_read_lock().
 */
 static int tomoyo_write_exception(struct tomoyo_io_buffer *head)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	const bool is_delete = head->w.is_delete;
 	struct tomoyo_acl_param param = {
@@ -1787,6 +1815,8 @@ static int tomoyo_write_exception(struct tomoyo_io_buffer *head)
 * Caller holds tomoyo_read_lock().
 */
 static bool tomoyo_read_group(struct tomoyo_io_buffer *head, const int idx)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	struct tomoyo_policy_namespace *ns =
 		container_of(head->r.ns, typeof(*ns), namespace_list);
@@ -1846,6 +1876,7 @@ static bool tomoyo_read_group(struct tomoyo_io_buffer *head, const int idx)
 * Caller holds tomoyo_read_lock().
 */
 static bool tomoyo_read_policy(struct tomoyo_io_buffer *head, const int idx)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_policy_namespace *ns =
 		container_of(head->r.ns, typeof(*ns), namespace_list);
@@ -1906,6 +1937,8 @@ static bool tomoyo_read_policy(struct tomoyo_io_buffer *head, const int idx)
 * Caller holds tomoyo_read_lock().
 */
 static void tomoyo_read_exception(struct tomoyo_io_buffer *head)
+	__must_hold_shared(&tomoyo_ss)
+	__must_hold(&head->io_sem)
 {
 	struct tomoyo_policy_namespace *ns =
 		container_of(head->r.ns, typeof(*ns), namespace_list);
@@ -2097,6 +2130,7 @@ static void tomoyo_patternize_path(char *buffer, const int len, char *entry)
 * Returns nothing.
*/ static void tomoyo_add_entry(struct tomoyo_domain_info *domain, char *head= er) + __must_hold_shared(&tomoyo_ss) { char *buffer; char *realpath =3D NULL; @@ -2301,6 +2335,7 @@ static __poll_t tomoyo_poll_query(struct file *file, = poll_table *wait) * @head: Pointer to "struct tomoyo_io_buffer". */ static void tomoyo_read_query(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { struct list_head *tmp; unsigned int pos =3D 0; @@ -2362,6 +2397,7 @@ static void tomoyo_read_query(struct tomoyo_io_buffer= *head) * Returns 0 on success, -EINVAL otherwise. */ static int tomoyo_write_answer(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { char *data =3D head->write_buf; struct list_head *tmp; @@ -2401,6 +2437,7 @@ static int tomoyo_write_answer(struct tomoyo_io_buffe= r *head) * Returns version information. */ static void tomoyo_read_version(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { if (!head->r.eof) { tomoyo_io_printf(head, "2.6.0"); @@ -2449,6 +2486,7 @@ void tomoyo_update_stat(const u8 index) * Returns nothing. */ static void tomoyo_read_stat(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { u8 i; unsigned int total =3D 0; @@ -2493,6 +2531,7 @@ static void tomoyo_read_stat(struct tomoyo_io_buffer = *head) * Returns 0. */ static int tomoyo_write_stat(struct tomoyo_io_buffer *head) + __must_hold(&head->io_sem) { char *data =3D head->write_buf; u8 i; @@ -2717,6 +2756,8 @@ ssize_t tomoyo_read_control(struct tomoyo_io_buffer *= head, char __user *buffer, * Caller holds tomoyo_read_lock(). */ static int tomoyo_parse_policy(struct tomoyo_io_buffer *head, char *line) + __must_hold_shared(&tomoyo_ss) + __must_hold(&head->io_sem) { /* Delete request? */ head->w.is_delete =3D !strncmp(line, "delete ", 7); @@ -2969,8 +3010,11 @@ void __init tomoyo_load_builtin_policy(void) break; *end =3D '\0'; tomoyo_normalize_line(start); - head.write_buf =3D start; - tomoyo_parse_policy(&head, start); + /* head is stack-local and not shared. */ + context_unsafe( + head.write_buf =3D start; + tomoyo_parse_policy(&head, start); + ); start =3D end + 1; } } diff --git a/security/tomoyo/common.h b/security/tomoyo/common.h index 3b2a97d10a5d..4f1704c911ef 100644 --- a/security/tomoyo/common.h +++ b/security/tomoyo/common.h @@ -827,13 +827,13 @@ struct tomoyo_io_buffer { bool is_delete; } w; /* Buffer for reading. */ - char *read_buf; + char *read_buf __guarded_by(&io_sem); /* Size of read buffer. */ - size_t readbuf_size; + size_t readbuf_size __guarded_by(&io_sem); /* Buffer for writing. */ - char *write_buf; + char *write_buf __guarded_by(&io_sem); /* Size of write buffer. */ - size_t writebuf_size; + size_t writebuf_size __guarded_by(&io_sem); /* Type of this interface. */ enum tomoyo_securityfs_interface_index type; /* Users counter protected by tomoyo_io_buffer_list_lock. */ @@ -922,6 +922,35 @@ struct tomoyo_task { struct tomoyo_domain_info *old_domain_info; }; =20 +/********** External variable definitions. 
+/********** External variable definitions. **********/
+
+extern bool tomoyo_policy_loaded;
+extern int tomoyo_enabled;
+extern const char * const tomoyo_condition_keyword
+[TOMOYO_MAX_CONDITION_KEYWORD];
+extern const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS];
+extern const char * const tomoyo_mac_keywords[TOMOYO_MAX_MAC_INDEX
+					      + TOMOYO_MAX_MAC_CATEGORY_INDEX];
+extern const char * const tomoyo_mode[TOMOYO_CONFIG_MAX_MODE];
+extern const char * const tomoyo_path_keyword[TOMOYO_MAX_PATH_OPERATION];
+extern const char * const tomoyo_proto_keyword[TOMOYO_SOCK_MAX];
+extern const char * const tomoyo_socket_keyword[TOMOYO_MAX_NETWORK_OPERATION];
+extern const u8 tomoyo_index2category[TOMOYO_MAX_MAC_INDEX];
+extern const u8 tomoyo_pn2mac[TOMOYO_MAX_PATH_NUMBER_OPERATION];
+extern const u8 tomoyo_pnnn2mac[TOMOYO_MAX_MKDEV_OPERATION];
+extern const u8 tomoyo_pp2mac[TOMOYO_MAX_PATH2_OPERATION];
+extern struct list_head tomoyo_condition_list;
+extern struct list_head tomoyo_domain_list;
+extern struct list_head tomoyo_name_list[TOMOYO_MAX_HASH];
+extern struct list_head tomoyo_namespace_list;
+extern struct mutex tomoyo_policy_lock;
+extern struct srcu_struct tomoyo_ss;
+extern struct tomoyo_domain_info tomoyo_kernel_domain;
+extern struct tomoyo_policy_namespace tomoyo_kernel_namespace;
+extern unsigned int tomoyo_memory_quota[TOMOYO_MAX_MEMORY_STAT];
+extern unsigned int tomoyo_memory_used[TOMOYO_MAX_MEMORY_STAT];
+extern struct lsm_blob_sizes tomoyo_blob_sizes;
+
 /********** Function prototypes. **********/
 
 int tomoyo_interface_init(void);
@@ -971,10 +1000,10 @@ const struct tomoyo_path_info *tomoyo_path_matches_group
 int tomoyo_check_open_permission(struct tomoyo_domain_info *domain,
 				 const struct path *path, const int flag);
 void tomoyo_close_control(struct tomoyo_io_buffer *head);
-int tomoyo_env_perm(struct tomoyo_request_info *r, const char *env);
+int tomoyo_env_perm(struct tomoyo_request_info *r, const char *env) __must_hold_shared(&tomoyo_ss);
 int tomoyo_execute_permission(struct tomoyo_request_info *r,
-			      const struct tomoyo_path_info *filename);
-int tomoyo_find_next_domain(struct linux_binprm *bprm);
+			      const struct tomoyo_path_info *filename) __must_hold_shared(&tomoyo_ss);
+int tomoyo_find_next_domain(struct linux_binprm *bprm) __must_hold_shared(&tomoyo_ss);
 int tomoyo_get_mode(const struct tomoyo_policy_namespace *ns, const u8 profile,
 		    const u8 index);
 int tomoyo_init_request_info(struct tomoyo_request_info *r,
@@ -1002,6 +1031,7 @@ int tomoyo_socket_listen_permission(struct socket *sock);
 int tomoyo_socket_sendmsg_permission(struct socket *sock, struct msghdr *msg,
 				     int size);
 int tomoyo_supervisor(struct tomoyo_request_info *r, const char *fmt, ...)
+	__must_hold_shared(&tomoyo_ss)
 	__printf(2, 3);
 int tomoyo_update_domain(struct tomoyo_acl_info *new_entry, const int size,
 			 struct tomoyo_acl_param *param,
@@ -1061,7 +1091,7 @@ void tomoyo_print_ulong(char *buffer, const int buffer_len,
 			const unsigned long value, const u8 type);
 void tomoyo_put_name_union(struct tomoyo_name_union *ptr);
 void tomoyo_put_number_union(struct tomoyo_number_union *ptr);
-void tomoyo_read_log(struct tomoyo_io_buffer *head);
+void tomoyo_read_log(struct tomoyo_io_buffer *head) __must_hold(&head->io_sem);
 void tomoyo_update_stat(const u8 index);
 void tomoyo_warn_oom(const char *function);
 void tomoyo_write_log(struct tomoyo_request_info *r, const char *fmt, ...)
@@ -1069,35 +1099,6 @@ void tomoyo_write_log(struct tomoyo_request_info *r, const char *fmt, ...)
 void tomoyo_write_log2(struct tomoyo_request_info *r, int len, const char *fmt,
 		       va_list args) __printf(3, 0);
 
-/********** External variable definitions. **********/
-
-extern bool tomoyo_policy_loaded;
-extern int tomoyo_enabled;
-extern const char * const tomoyo_condition_keyword
-[TOMOYO_MAX_CONDITION_KEYWORD];
-extern const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS];
-extern const char * const tomoyo_mac_keywords[TOMOYO_MAX_MAC_INDEX
-					      + TOMOYO_MAX_MAC_CATEGORY_INDEX];
-extern const char * const tomoyo_mode[TOMOYO_CONFIG_MAX_MODE];
-extern const char * const tomoyo_path_keyword[TOMOYO_MAX_PATH_OPERATION];
-extern const char * const tomoyo_proto_keyword[TOMOYO_SOCK_MAX];
-extern const char * const tomoyo_socket_keyword[TOMOYO_MAX_NETWORK_OPERATION];
-extern const u8 tomoyo_index2category[TOMOYO_MAX_MAC_INDEX];
-extern const u8 tomoyo_pn2mac[TOMOYO_MAX_PATH_NUMBER_OPERATION];
-extern const u8 tomoyo_pnnn2mac[TOMOYO_MAX_MKDEV_OPERATION];
-extern const u8 tomoyo_pp2mac[TOMOYO_MAX_PATH2_OPERATION];
-extern struct list_head tomoyo_condition_list;
-extern struct list_head tomoyo_domain_list;
-extern struct list_head tomoyo_name_list[TOMOYO_MAX_HASH];
-extern struct list_head tomoyo_namespace_list;
-extern struct mutex tomoyo_policy_lock;
-extern struct srcu_struct tomoyo_ss;
-extern struct tomoyo_domain_info tomoyo_kernel_domain;
-extern struct tomoyo_policy_namespace tomoyo_kernel_namespace;
-extern unsigned int tomoyo_memory_quota[TOMOYO_MAX_MEMORY_STAT];
-extern unsigned int tomoyo_memory_used[TOMOYO_MAX_MEMORY_STAT];
-extern struct lsm_blob_sizes tomoyo_blob_sizes;
-
 /********** Inlined functions. **********/
 
 /**
@@ -1106,6 +1107,7 @@ extern struct lsm_blob_sizes tomoyo_blob_sizes;
  * Returns index number for tomoyo_read_unlock().
  */
 static inline int tomoyo_read_lock(void)
+	__acquires_shared(&tomoyo_ss)
 {
 	return srcu_read_lock(&tomoyo_ss);
 }
@@ -1118,6 +1120,7 @@ static inline int tomoyo_read_lock(void)
  * Returns nothing.
  */
 static inline void tomoyo_read_unlock(int idx)
+	__releases_shared(&tomoyo_ss)
 {
 	srcu_read_unlock(&tomoyo_ss, idx);
 }
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 90cf0e2969df..0612eac7f2f2 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -611,6 +611,7 @@ struct tomoyo_domain_info *tomoyo_assign_domain(const char *domainname,
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_environ(struct tomoyo_execve *ee)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_request_info *r = &ee->r;
 	struct linux_binprm *bprm = ee->bprm;
diff --git a/security/tomoyo/environ.c b/security/tomoyo/environ.c
index 7f0a471f19b2..bcb05910facc 100644
--- a/security/tomoyo/environ.c
+++ b/security/tomoyo/environ.c
@@ -32,6 +32,7 @@ static bool tomoyo_check_env_acl(struct tomoyo_request_info *r,
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_env_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_supervisor(r, "misc env %s\n",
 				 r->param.environ.name->name);
diff --git a/security/tomoyo/file.c b/security/tomoyo/file.c
index 8f3b90b6e03d..e9b67dbb38e7 100644
--- a/security/tomoyo/file.c
+++ b/security/tomoyo/file.c
@@ -164,6 +164,7 @@ static bool tomoyo_get_realpath(struct tomoyo_path_info *buf, const struct path
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_path_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_supervisor(r, "file %s %s\n", tomoyo_path_keyword
 				 [r->param.path.operation],
@@ -178,6 +179,7 @@ static int tomoyo_audit_path_log(struct tomoyo_request_info *r)
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_path2_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_supervisor(r, "file %s %s %s\n", tomoyo_mac_keywords
 				 [tomoyo_pp2mac[r->param.path2.operation]],
@@ -193,6 +195,7 @@ static int tomoyo_audit_path2_log(struct tomoyo_request_info *r)
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_mkdev_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_supervisor(r, "file %s %s 0%o %u %u\n",
 				 tomoyo_mac_keywords
@@ -210,6 +213,7 @@ static int tomoyo_audit_mkdev_log(struct tomoyo_request_info *r)
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_path_number_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	const u8 type = r->param.path_number.operation;
 	u8 radix;
@@ -572,6 +576,7 @@ static int tomoyo_update_path2_acl(const u8 perm,
  */
 static int tomoyo_path_permission(struct tomoyo_request_info *r, u8 operation,
 				  const struct tomoyo_path_info *filename)
+	__must_hold_shared(&tomoyo_ss)
 {
 	int error;
 
diff --git a/security/tomoyo/gc.c b/security/tomoyo/gc.c
index 026e29ea3796..8e2008863af8 100644
--- a/security/tomoyo/gc.c
+++ b/security/tomoyo/gc.c
@@ -23,11 +23,10 @@ static inline void tomoyo_memory_free(void *ptr)
 	tomoyo_memory_used[TOMOYO_MEMORY_POLICY] -= ksize(ptr);
 	kfree(ptr);
 }
-
-/* The list for "struct tomoyo_io_buffer". */
-static LIST_HEAD(tomoyo_io_buffer_list);
 /* Lock for protecting tomoyo_io_buffer_list. */
 static DEFINE_SPINLOCK(tomoyo_io_buffer_list_lock);
+/* The list for "struct tomoyo_io_buffer". */
+static __guarded_by(&tomoyo_io_buffer_list_lock) LIST_HEAD(tomoyo_io_buffer_list);
 
 /**
  * tomoyo_struct_used_by_io_buffer - Check whether the list element is used by /sys/kernel/security/tomoyo/ users or not.
@@ -385,6 +384,7 @@ static inline void tomoyo_del_number_group(struct list_head *element)
  */
 static void tomoyo_try_to_gc(const enum tomoyo_policy_id type,
 			     struct list_head *element)
+	__must_hold(&tomoyo_policy_lock)
 {
 	/*
 	 * __list_del_entry() guarantees that the list element became no longer
@@ -484,6 +484,7 @@ static void tomoyo_try_to_gc(const enum tomoyo_policy_id type,
  */
 static void tomoyo_collect_member(const enum tomoyo_policy_id id,
 				  struct list_head *member_list)
+	__must_hold(&tomoyo_policy_lock)
 {
 	struct tomoyo_acl_head *member;
 	struct tomoyo_acl_head *tmp;
@@ -504,6 +505,7 @@ static void tomoyo_collect_member(const enum tomoyo_policy_id id,
  * Returns nothing.
  */
 static void tomoyo_collect_acl(struct list_head *list)
+	__must_hold(&tomoyo_policy_lock)
 {
 	struct tomoyo_acl_info *acl;
 	struct tomoyo_acl_info *tmp;
@@ -627,8 +629,11 @@ static int tomoyo_gc_thread(void *unused)
 			if (head->users)
 				continue;
 			list_del(&head->list);
-			kfree(head->read_buf);
-			kfree(head->write_buf);
+			/* Safe destruction because no users are left. */
+			context_unsafe(
+				kfree(head->read_buf);
+				kfree(head->write_buf);
+			);
 			kfree(head);
 		}
 		spin_unlock(&tomoyo_io_buffer_list_lock);
@@ -656,11 +661,18 @@ void tomoyo_notify_gc(struct tomoyo_io_buffer *head, const bool is_register)
 		head->users = 1;
 		list_add(&head->list, &tomoyo_io_buffer_list);
 	} else {
-		is_write = head->write_buf != NULL;
+		/*
+		 * tomoyo_write_control() can concurrently update write_buf from
+		 * a non-NULL to new non-NULL pointer with io_sem held.
+		 */
+		is_write = data_race(head->write_buf != NULL);
 		if (!--head->users) {
 			list_del(&head->list);
-			kfree(head->read_buf);
-			kfree(head->write_buf);
+			/* Safe destruction because no users are left. */
+			context_unsafe(
+				kfree(head->read_buf);
+				kfree(head->write_buf);
+			);
 			kfree(head);
 		}
 	}
diff --git a/security/tomoyo/mount.c b/security/tomoyo/mount.c
index 2755971f50df..322dfd188ada 100644
--- a/security/tomoyo/mount.c
+++ b/security/tomoyo/mount.c
@@ -28,6 +28,7 @@ static const char * const tomoyo_mounts[TOMOYO_MAX_SPECIAL_MOUNT] = {
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_mount_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_supervisor(r, "file mount %s %s %s 0x%lX\n",
 				 r->param.mount.dev->name,
@@ -78,6 +79,7 @@ static int tomoyo_mount_acl(struct tomoyo_request_info *r,
 			    const char *dev_name,
 			    const struct path *dir, const char *type,
 			    unsigned long flags)
+	__must_hold_shared(&tomoyo_ss)
 {
 	struct tomoyo_obj_info obj = { };
 	struct path path;
diff --git a/security/tomoyo/network.c b/security/tomoyo/network.c
index 8dc61335f65e..cfc2a019de1e 100644
--- a/security/tomoyo/network.c
+++ b/security/tomoyo/network.c
@@ -363,6 +363,7 @@ int tomoyo_write_unix_network(struct tomoyo_acl_param *param)
 static int tomoyo_audit_net_log(struct tomoyo_request_info *r,
 				const char *family, const u8 protocol,
 				const u8 operation, const char *address)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_supervisor(r, "network %s %s %s %s\n", family,
 				 tomoyo_proto_keyword[protocol],
@@ -377,6 +378,7 @@ static int tomoyo_audit_net_log(struct tomoyo_request_info *r,
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_inet_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	char buf[128];
 	int len;
@@ -402,6 +404,7 @@ static int tomoyo_audit_inet_log(struct tomoyo_request_info *r)
  * Returns 0 on success, negative value otherwise.
  */
 static int tomoyo_audit_unix_log(struct tomoyo_request_info *r)
+	__must_hold_shared(&tomoyo_ss)
 {
 	return tomoyo_audit_net_log(r, "unix", r->param.unix_network.protocol,
 				    r->param.unix_network.operation,
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:24 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-36-elver@google.com>
Subject: [PATCH v5 35/36] crypto: Enable context analysis
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, Chris Li, "Paul E. McKenney",
 Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Christoph Hellwig,
 Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker, Greg Kroah-Hartman,
 Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes, Johannes Berg,
 Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda,
 Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt,
 Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long,
 kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org,
 linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org

Enable context analysis for the crypto subsystem.

This demonstrates a larger conversion to use Clang's context analysis.
The benefit is additional static checking of locking rules, along with
better documentation.

Note how the __acquire_ret macro is used to define an API where a
function returns a pointer to an object (struct scomp_scratch) with
that object's lock held.

Additionally, the analysis only resolves aliases where it can
unambiguously see that a variable is not reassigned after
initialization; this required minor code changes.

Signed-off-by: Marco Elver
Cc: Herbert Xu
Cc: "David S. Miller"
Cc: linux-crypto@vger.kernel.org
---
v4:
 * Rename capability -> context analysis.

v3:
 * Rebase - make use of __acquire_ret macro for new functions.
 * Initialize variables once where we want the analysis to recognize
   aliases.

v2:
 * New patch.
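For illustration, a minimal sketch of the lock-returning pattern (this is
not part of the diff below; "struct item" and its helpers are made-up
names, while __acquire_ret, __acquires_ret, __guarded_by and __releases
are the annotations provided by this series):

	struct item {
		spinlock_t lock;
		int data __guarded_by(&lock);
	};

	/* The analysis sees &__ret->lock as held on return. */
	#define item_lock(...) __acquire_ret(_item_lock(__VA_ARGS__), &__ret->lock)
	struct item *_item_lock(void) __acquires_ret;

	static inline void item_unlock(struct item *it)
		__releases(&it->lock)
	{
		spin_unlock(&it->lock);
	}

	static int use_item(void)
	{
		struct item *it = item_lock();
		int val = it->data;	/* OK: &it->lock is held here. */

		item_unlock(it);
		return val;
	}

The wrapper macro is what lets the analysis name the returned object's
lock (&__ret->lock), which a plain function attribute cannot refer to.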
---
 crypto/Makefile                     |  2 ++
 crypto/acompress.c                  |  6 +++---
 crypto/algapi.c                     |  2 ++
 crypto/api.c                        |  1 +
 crypto/crypto_engine.c              |  2 +-
 crypto/drbg.c                       |  5 +++++
 crypto/internal.h                   |  2 +-
 crypto/proc.c                       |  3 +++
 crypto/scompress.c                  | 24 ++++++++++++------------
 include/crypto/internal/acompress.h |  7 ++++---
 include/crypto/internal/engine.h    |  2 +-
 11 files changed, 35 insertions(+), 21 deletions(-)

diff --git a/crypto/Makefile b/crypto/Makefile
index 16a35649dd91..db264feab7e7 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -3,6 +3,8 @@
 # Cryptographic API
 #
 
+CONTEXT_ANALYSIS := y
+
 obj-$(CONFIG_CRYPTO) += crypto.o
 crypto-y := api.o cipher.o
 
diff --git a/crypto/acompress.c b/crypto/acompress.c
index be28cbfd22e3..25df368df098 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -449,8 +449,8 @@ int crypto_acomp_alloc_streams(struct crypto_acomp_streams *s)
 }
 EXPORT_SYMBOL_GPL(crypto_acomp_alloc_streams);
 
-struct crypto_acomp_stream *crypto_acomp_lock_stream_bh(
-	struct crypto_acomp_streams *s) __acquires(stream)
+struct crypto_acomp_stream *_crypto_acomp_lock_stream_bh(
+	struct crypto_acomp_streams *s)
 {
 	struct crypto_acomp_stream __percpu *streams = s->streams;
 	int cpu = raw_smp_processor_id();
@@ -469,7 +469,7 @@ struct crypto_acomp_stream *crypto_acomp_lock_stream_bh(
 	spin_lock(&ps->lock);
 	return ps;
 }
-EXPORT_SYMBOL_GPL(crypto_acomp_lock_stream_bh);
+EXPORT_SYMBOL_GPL(_crypto_acomp_lock_stream_bh);
 
 void acomp_walk_done_src(struct acomp_walk *walk, int used)
 {
diff --git a/crypto/algapi.c b/crypto/algapi.c
index e604d0d8b7b4..abc9333327d4 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -244,6 +244,7 @@ EXPORT_SYMBOL_GPL(crypto_remove_spawns);
 
 static void crypto_alg_finish_registration(struct crypto_alg *alg,
 					   struct list_head *algs_to_put)
+	__must_hold(&crypto_alg_sem)
 {
 	struct crypto_alg *q;
 
@@ -299,6 +300,7 @@ static struct crypto_larval *crypto_alloc_test_larval(struct crypto_alg *alg)
 
 static struct crypto_larval *
 __crypto_register_alg(struct crypto_alg *alg, struct list_head *algs_to_put)
+	__must_hold(&crypto_alg_sem)
 {
 	struct crypto_alg *q;
 	struct crypto_larval *larval;
diff --git a/crypto/api.c b/crypto/api.c
index 5724d62e9d07..05629644a688 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -57,6 +57,7 @@ EXPORT_SYMBOL_GPL(crypto_mod_put);
 
 static struct crypto_alg *__crypto_alg_lookup(const char *name, u32 type,
 					      u32 mask)
+	__must_hold_shared(&crypto_alg_sem)
 {
 	struct crypto_alg *q, *alg = NULL;
 	int best = -2;
diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 18e1689efe12..1653a4bf5b31 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -453,8 +453,8 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
 	snprintf(engine->name, sizeof(engine->name),
 		 "%s-engine", dev_name(dev));
 
-	crypto_init_queue(&engine->queue, qlen);
 	spin_lock_init(&engine->queue_lock);
+	crypto_init_queue(&engine->queue, qlen);
 
 	engine->kworker = kthread_run_worker(0, "%s", engine->name);
 	if (IS_ERR(engine->kworker)) {
diff --git a/crypto/drbg.c b/crypto/drbg.c
index 1d433dae9955..0a6f6c05a78f 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -232,6 +232,7 @@ static inline unsigned short drbg_sec_strength(drbg_flag_t flags)
  */
 static int drbg_fips_continuous_test(struct drbg_state *drbg,
 				     const unsigned char *entropy)
+	__must_hold(&drbg->drbg_mutex)
 {
 	unsigned short entropylen = drbg_sec_strength(drbg->core->flags);
 	int ret = 0;
@@ -848,6 +849,7 @@ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed,
 static inline int drbg_get_random_bytes(struct drbg_state *drbg,
 					unsigned char *entropy,
 					unsigned int entropylen)
+	__must_hold(&drbg->drbg_mutex)
 {
 	int ret;
 
@@ -862,6 +864,7 @@ static inline int drbg_get_random_bytes(struct drbg_state *drbg,
 }
 
 static int drbg_seed_from_random(struct drbg_state *drbg)
+	__must_hold(&drbg->drbg_mutex)
 {
 	struct drbg_string data;
 	LIST_HEAD(seedlist);
@@ -919,6 +922,7 @@ static bool drbg_nopr_reseed_interval_elapsed(struct drbg_state *drbg)
  */
 static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers,
 		     bool reseed)
+	__must_hold(&drbg->drbg_mutex)
 {
 	int ret;
 	unsigned char entropy[((32 + 16) * 2)];
@@ -1153,6 +1157,7 @@ static inline int drbg_alloc_state(struct drbg_state *drbg)
 static int drbg_generate(struct drbg_state *drbg,
 			 unsigned char *buf, unsigned int buflen,
 			 struct drbg_string *addtl)
+	__must_hold(&drbg->drbg_mutex)
 {
 	int len = 0;
 	LIST_HEAD(addtllist);
diff --git a/crypto/internal.h b/crypto/internal.h
index b9afd68767c1..8fbe0226d48e 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -61,8 +61,8 @@ enum {
 /* Maximum number of (rtattr) parameters for each template. */
 #define CRYPTO_MAX_ATTRS 32
 
-extern struct list_head crypto_alg_list;
 extern struct rw_semaphore crypto_alg_sem;
+extern struct list_head crypto_alg_list __guarded_by(&crypto_alg_sem);
 extern struct blocking_notifier_head crypto_chain;
 
 int alg_test(const char *driver, const char *alg, u32 type, u32 mask);
diff --git a/crypto/proc.c b/crypto/proc.c
index 82f15b967e85..5fb9fe86d023 100644
--- a/crypto/proc.c
+++ b/crypto/proc.c
@@ -19,17 +19,20 @@
 #include "internal.h"
 
 static void *c_start(struct seq_file *m, loff_t *pos)
+	__acquires_shared(&crypto_alg_sem)
 {
 	down_read(&crypto_alg_sem);
 	return seq_list_start(&crypto_alg_list, *pos);
 }
 
 static void *c_next(struct seq_file *m, void *p, loff_t *pos)
+	__must_hold_shared(&crypto_alg_sem)
 {
 	return seq_list_next(p, &crypto_alg_list, pos);
 }
 
 static void c_stop(struct seq_file *m, void *p)
+	__releases_shared(&crypto_alg_sem)
 {
 	up_read(&crypto_alg_sem);
 }
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 1a7ed8ae65b0..7aee1d50e148 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -28,8 +28,8 @@ struct scomp_scratch {
 	spinlock_t	lock;
 	union {
-		void *src;
-		unsigned long saddr;
+		void *src __guarded_by(&lock);
+		unsigned long saddr __guarded_by(&lock);
 	};
 };
 
@@ -38,8 +38,8 @@ static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
 };
 
 static const struct crypto_type crypto_scomp_type;
-static int scomp_scratch_users;
 static DEFINE_MUTEX(scomp_lock);
+static int scomp_scratch_users __guarded_by(&scomp_lock);
 
 static cpumask_t scomp_scratch_want;
 static void scomp_scratch_workfn(struct work_struct *work);
@@ -67,6 +67,7 @@ static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg)
 }
 
 static void crypto_scomp_free_scratches(void)
+	__context_unsafe(/* frees @scratch */)
 {
 	struct scomp_scratch *scratch;
 	int i;
@@ -101,7 +102,7 @@ static void scomp_scratch_workfn(struct work_struct *work)
 		struct scomp_scratch *scratch;
 
 		scratch = per_cpu_ptr(&scomp_scratch, cpu);
-		if (scratch->src)
+		if (context_unsafe(scratch->src))
 			continue;
 		if (scomp_alloc_scratch(scratch, cpu))
 			break;
@@ -111,6 +112,7 @@ static void scomp_scratch_workfn(struct work_struct *work)
 }
 
 static int crypto_scomp_alloc_scratches(void)
+	__context_unsafe(/* allocates @scratch */)
 {
 	unsigned int i = cpumask_first(cpu_possible_mask);
 	struct scomp_scratch *scratch;
@@ -139,7 +141,8 @@ static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
 	return ret;
 }
 
-static struct scomp_scratch *scomp_lock_scratch(void) __acquires(scratch)
+#define scomp_lock_scratch(...) __acquire_ret(_scomp_lock_scratch(__VA_ARGS__), &__ret->lock)
+static struct scomp_scratch *_scomp_lock_scratch(void) __acquires_ret
 {
 	int cpu = raw_smp_processor_id();
 	struct scomp_scratch *scratch;
@@ -159,7 +162,7 @@ static struct scomp_scratch *scomp_lock_scratch(void) __acquires(scratch)
 }
 
 static inline void scomp_unlock_scratch(struct scomp_scratch *scratch)
-	__releases(scratch)
+	__releases(&scratch->lock)
 {
 	spin_unlock(&scratch->lock);
 }
@@ -171,8 +174,6 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	bool src_isvirt = acomp_request_src_isvirt(req);
 	bool dst_isvirt = acomp_request_dst_isvirt(req);
 	struct crypto_scomp *scomp = *tfm_ctx;
-	struct crypto_acomp_stream *stream;
-	struct scomp_scratch *scratch;
 	unsigned int slen = req->slen;
 	unsigned int dlen = req->dlen;
 	struct page *spage, *dpage;
@@ -232,13 +233,12 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 		} while (0);
 	}
 
-	stream = crypto_acomp_lock_stream_bh(&crypto_scomp_alg(scomp)->streams);
+	struct crypto_acomp_stream *stream = crypto_acomp_lock_stream_bh(&crypto_scomp_alg(scomp)->streams);
 
 	if (!src_isvirt && !src) {
-		const u8 *src;
+		struct scomp_scratch *scratch = scomp_lock_scratch();
+		const u8 *src = scratch->src;
 
-		scratch = scomp_lock_scratch();
-		src = scratch->src;
 		memcpy_from_sglist(scratch->src, req->src, 0, slen);
 
 		if (dir)
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 2d97440028ff..9a3f28baa804 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -191,11 +191,12 @@ static inline bool crypto_acomp_req_virt(struct crypto_acomp *tfm)
 void crypto_acomp_free_streams(struct crypto_acomp_streams *s);
 int crypto_acomp_alloc_streams(struct crypto_acomp_streams *s);
 
-struct crypto_acomp_stream *crypto_acomp_lock_stream_bh(
-	struct crypto_acomp_streams *s) __acquires(stream);
+#define crypto_acomp_lock_stream_bh(...) __acquire_ret(_crypto_acomp_lock_stream_bh(__VA_ARGS__), &__ret->lock);
+struct crypto_acomp_stream *_crypto_acomp_lock_stream_bh(
+	struct crypto_acomp_streams *s) __acquires_ret;
 
 static inline void crypto_acomp_unlock_stream_bh(
-	struct crypto_acomp_stream *stream) __releases(stream)
+	struct crypto_acomp_stream *stream) __releases(&stream->lock)
 {
 	spin_unlock_bh(&stream->lock);
 }
diff --git a/include/crypto/internal/engine.h b/include/crypto/internal/engine.h
index f19ef376833f..6a1d27880615 100644
--- a/include/crypto/internal/engine.h
+++ b/include/crypto/internal/engine.h
@@ -45,7 +45,7 @@ struct crypto_engine {
 
 	struct list_head	list;
 	spinlock_t		queue_lock;
-	struct crypto_queue	queue;
+	struct crypto_queue	queue __guarded_by(&queue_lock);
 	struct device		*dev;
 
 	struct kthread_worker	*kworker;
-- 
2.52.0.322.g1dd061c0dc-goog

From nobody Sat Feb 7 12:29:45 2026
Date: Fri, 19 Dec 2025 16:40:25 +0100
In-Reply-To: <20251219154418.3592607-1-elver@google.com>
References: <20251219154418.3592607-1-elver@google.com>
Message-ID: <20251219154418.3592607-37-elver@google.com>
Subject: [PATCH v5 36/36] sched: Enable context analysis for core.c and fair.c
From: Marco Elver
To: elver@google.com, Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon
Cc: "David S. Miller", Luc Van Oostenryck, Chris Li, "Paul E. McKenney",
 Alexander Potapenko, Arnd Bergmann, Bart Van Assche, Christoph Hellwig,
 Dmitry Vyukov, Eric Dumazet, Frederic Weisbecker, Greg Kroah-Hartman,
 Herbert Xu, Ian Rogers, Jann Horn, Joel Fernandes, Johannes Berg,
 Jonathan Corbet, Josh Triplett, Justin Stitt, Kees Cook, Kentaro Takeda,
 Lukas Bulwahn, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Steven Rostedt,
 Tetsuo Handa, Thomas Gleixner, Thomas Graf, Uladzislau Rezki, Waiman Long,
 kasan-dev@googlegroups.com, linux-crypto@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-security-module@vger.kernel.org, linux-sparse@vger.kernel.org,
 linux-wireless@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org,
 Ingo Molnar

This demonstrates a larger conversion to use Clang's context analysis.
The benefit is additional static checking of locking rules, along with
better documentation.

Notably, kernel/sched contains sufficiently complex synchronization
patterns; applying the analysis to core.c and fair.c demonstrates that
the latest Clang version has become powerful enough to start covering
more complex subsystems (with some modest annotations and changes).

Signed-off-by: Marco Elver
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
v5:
 * Rename "context guard" -> "context lock".
 * Use new cleanup.h helpers to properly support scoped lock guards.

v4:
 * Rename capability -> context analysis.

v3:
 * New patch.
---
 include/linux/sched.h                    |   6 +-
 include/linux/sched/signal.h             |   4 +-
 include/linux/sched/task.h               |   6 +-
 include/linux/sched/wake_q.h             |   3 +
 kernel/sched/Makefile                    |   3 +
 kernel/sched/core.c                      |  89 +++++++++++-----
 kernel/sched/fair.c                      |   7 +-
 kernel/sched/sched.h                     | 126 ++++++++++++++++-------
 scripts/context-analysis-suppression.txt |   1 +
 9 files changed, 177 insertions(+), 68 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d395f2810fac..c4022647282e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2094,9 +2094,9 @@ static inline int _cond_resched(void)
 	_cond_resched(); \
 })
 
-extern int __cond_resched_lock(spinlock_t *lock);
-extern int __cond_resched_rwlock_read(rwlock_t *lock);
-extern int __cond_resched_rwlock_write(rwlock_t *lock);
+extern int __cond_resched_lock(spinlock_t *lock) __must_hold(lock);
+extern int __cond_resched_rwlock_read(rwlock_t *lock) __must_hold_shared(lock);
+extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);
 
 #define MIGHT_RESCHED_RCU_SHIFT		8
 #define MIGHT_RESCHED_PREEMPT_MASK	((1U << MIGHT_RESCHED_RCU_SHIFT) - 1)
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index a63f65aa5bdd..a22248aebcf9 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -738,10 +738,12 @@ static inline int thread_group_empty(struct task_struct *p)
 	(thread_group_leader(p) && !thread_group_empty(p))
 
 extern struct sighand_struct *lock_task_sighand(struct task_struct *task,
-						unsigned long *flags);
+						unsigned long *flags)
+	__acquires(&task->sighand->siglock);
 
 static inline void unlock_task_sighand(struct task_struct *task,
 				       unsigned long *flags)
+	__releases(&task->sighand->siglock)
 {
 	spin_unlock_irqrestore(&task->sighand->siglock, *flags);
 }
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 525aa2a632b2..41ed884cffc9 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -214,15 +214,19 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
 * write_lock_irq(&tasklist_lock), neither inside nor outside.
 */
 static inline void task_lock(struct task_struct *p)
+	__acquires(&p->alloc_lock)
 {
	spin_lock(&p->alloc_lock);
 }
 
 static inline void task_unlock(struct task_struct *p)
+	__releases(&p->alloc_lock)
 {
	spin_unlock(&p->alloc_lock);
 }
 
-DEFINE_GUARD(task_lock, struct task_struct *, task_lock(_T), task_unlock(_T))
+DEFINE_LOCK_GUARD_1(task_lock, struct task_struct, task_lock(_T->lock), task_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(task_lock, __acquires(&_T->alloc_lock), __releases(&(*(struct task_struct **)_T)->alloc_lock))
+#define class_task_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(task_lock, _T)
 
 #endif /* _LINUX_SCHED_TASK_H */
diff --git a/include/linux/sched/wake_q.h b/include/linux/sched/wake_q.h
index 0f28b4623ad4..765bbc3d54be 100644
--- a/include/linux/sched/wake_q.h
+++ b/include/linux/sched/wake_q.h
@@ -66,6 +66,7 @@ extern void wake_up_q(struct wake_q_head *head);
 /* Spin unlock helpers to unlock and call wake_up_q with preempt disabled */
 static inline
 void raw_spin_unlock_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
+	__releases(lock)
 {
	guard(preempt)();
	raw_spin_unlock(lock);
@@ -77,6 +78,7 @@ void raw_spin_unlock_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
 
 static inline
 void raw_spin_unlock_irq_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
+	__releases(lock)
 {
	guard(preempt)();
	raw_spin_unlock_irq(lock);
@@ -89,6 +91,7 @@ void raw_spin_unlock_irq_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
 static inline
 void raw_spin_unlock_irqrestore_wake(raw_spinlock_t *lock, unsigned long flags,
				     struct wake_q_head *wake_q)
+	__releases(lock)
 {
	guard(preempt)();
	raw_spin_unlock_irqrestore(lock, flags);
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 8ae86371ddcd..b1f1a367034f 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -1,5 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 
+CONTEXT_ANALYSIS_core.o := y
+CONTEXT_ANALYSIS_fair.o := y
+
 # The compilers are complaining about unused variables inside an if(0) scope
 # block. This is daft, shut them up.
 ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 41ba0be16911..ae543ee91272 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -396,6 +396,8 @@ static atomic_t sched_core_count;
 static struct cpumask sched_core_mask;
 
 static void sched_core_lock(int cpu, unsigned long *flags)
+	__context_unsafe(/* acquires multiple */)
+	__acquires(&runqueues.__lock) /* overapproximation */
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
 	int t, i = 0;
@@ -406,6 +408,8 @@ static void sched_core_lock(int cpu, unsigned long *flags)
 }
 
 static void sched_core_unlock(int cpu, unsigned long *flags)
+	__context_unsafe(/* releases multiple */)
+	__releases(&runqueues.__lock) /* overapproximation */
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
 	int t;
@@ -630,6 +634,7 @@ EXPORT_SYMBOL(__trace_set_current_state);
  */
 
 void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
+	__context_unsafe()
 {
 	raw_spinlock_t *lock;
 
@@ -655,6 +660,7 @@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
 }
 
 bool raw_spin_rq_trylock(struct rq *rq)
+	__context_unsafe()
 {
 	raw_spinlock_t *lock;
 	bool ret;
@@ -696,15 +702,16 @@ void double_rq_lock(struct rq *rq1, struct rq *rq2)
 	raw_spin_rq_lock(rq1);
 	if (__rq_lockp(rq1) != __rq_lockp(rq2))
 		raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
+	else
+		__acquire_ctx_lock(__rq_lockp(rq2)); /* fake acquire */
 
 	double_rq_clock_clear_update(rq1, rq2);
 }
 
 /*
- * __task_rq_lock - lock the rq @p resides on.
+ * ___task_rq_lock - lock the rq @p resides on.
  */
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock)
+struct rq *___task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 {
 	struct rq *rq;
 
@@ -727,9 +734,7 @@ struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 /*
  * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
  */
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock)
+struct rq *_task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 {
 	struct rq *rq;
 
@@ -2431,6 +2436,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
  */
 static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
 				   struct task_struct *p, int new_cpu)
+	__must_hold(__rq_lockp(rq))
 {
 	lockdep_assert_rq_held(rq);
 
@@ -2477,6 +2483,7 @@ struct set_affinity_pending {
  */
 static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
 				 struct task_struct *p, int dest_cpu)
+	__must_hold(__rq_lockp(rq))
 {
 	/* Affinity changed (again). */
 	if (!is_cpu_allowed(p, dest_cpu))
@@ -2513,6 +2520,12 @@ static int migration_cpu_stop(void *data)
 	 */
 	flush_smp_call_function_queue();
 
+	/*
+	 * We may change the underlying rq, but the locks held will
+	 * appropriately be "transferred" when switching.
+	 */
+	context_unsafe_alias(rq);
+
 	raw_spin_lock(&p->pi_lock);
 	rq_lock(rq, &rf);
 
@@ -2624,6 +2637,8 @@ int push_cpu_stop(void *arg)
 	if (!lowest_rq)
 		goto out_unlock;
 
+	lockdep_assert_rq_held(lowest_rq);
+
 	// XXX validate p is still the highest prio task
 	if (task_rq(p) == rq) {
 		move_queued_task_locked(rq, lowest_rq, p);
@@ -2834,8 +2849,7 @@ void release_user_cpus_ptr(struct task_struct *p)
  */
 static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flags *rf,
 			    int dest_cpu, unsigned int flags)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
+	__releases(__rq_lockp(rq), &p->pi_lock)
 {
 	struct set_affinity_pending my_pending = { }, *pending = NULL;
 	bool stop_pending, complete = false;
@@ -2990,8 +3004,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 					 struct affinity_context *ctx,
 					 struct rq *rq,
 					 struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
+	__releases(__rq_lockp(rq), &p->pi_lock)
 {
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
@@ -4273,29 +4286,30 @@ static bool __task_needs_rq_lock(struct task_struct *p)
  */
 int task_call_func(struct task_struct *p, task_call_f func, void *arg)
 {
-	struct rq *rq = NULL;
 	struct rq_flags rf;
 	int ret;
 
 	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
 
-	if (__task_needs_rq_lock(p))
-		rq = __task_rq_lock(p, &rf);
+	if (__task_needs_rq_lock(p)) {
+		struct rq *rq = __task_rq_lock(p, &rf);
 
-	/*
-	 * At this point the task is pinned; either:
-	 * - blocked and we're holding off wakeups (pi->lock)
-	 * - woken, and we're holding off enqueue (rq->lock)
-	 * - queued, and we're holding off schedule (rq->lock)
-	 * - running, and we're holding off de-schedule (rq->lock)
-	 *
-	 * The called function (@func) can use: task_curr(), p->on_rq and
-	 * p->__state to differentiate between these states.
-	 */
-	ret = func(p, arg);
+		/*
+		 * At this point the task is pinned; either:
+		 * - blocked and we're holding off wakeups (pi->lock)
+		 * - woken, and we're holding off enqueue (rq->lock)
+		 * - queued, and we're holding off schedule (rq->lock)
+		 * - running, and we're holding off de-schedule (rq->lock)
+		 *
+		 * The called function (@func) can use: task_curr(), p->on_rq and
+		 * p->__state to differentiate between these states.
+		 */
+		ret = func(p, arg);
 
-	if (rq)
 		__task_rq_unlock(rq, p, &rf);
+	} else {
+		ret = func(p, arg);
+	}
 
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
 	return ret;
@@ -4968,6 +4982,8 @@ void balance_callbacks(struct rq *rq, struct balance_callback *head)
 
 static inline void
 prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
+	__releases(__rq_lockp(rq))
+	__acquires(__rq_lockp(this_rq()))
 {
 	/*
 	 * Since the runqueue lock will be released by the next
@@ -4981,9 +4997,15 @@ prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf
 	/* this is a valid case when another task releases the spinlock */
 	rq_lockp(rq)->owner = next;
 #endif
+	/*
+	 * Model the rq reference switcheroo.
+	 */
+	__release(__rq_lockp(rq));
+	__acquire(__rq_lockp(this_rq()));
 }
 
 static inline void finish_lock_switch(struct rq *rq)
+	__releases(__rq_lockp(rq))
 {
 	/*
 	 * If we are tracking spinlock dependencies then we have to
@@ -5039,6 +5061,7 @@ static inline void kmap_local_sched_in(void)
 static inline void
 prepare_task_switch(struct rq *rq, struct task_struct *prev,
 		    struct task_struct *next)
+	__must_hold(__rq_lockp(rq))
 {
 	kcov_prepare_switch(prev);
 	sched_info_switch(rq, prev, next);
@@ -5069,7 +5092,7 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
  * because prev may have moved to another CPU.
  */
 static struct rq *finish_task_switch(struct task_struct *prev)
-	__releases(rq->lock)
+	__releases(__rq_lockp(this_rq()))
 {
 	struct rq *rq = this_rq();
 	struct mm_struct *mm = rq->prev_mm;
@@ -5165,7 +5188,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
  * @prev: the thread we just switched away from.
  */
 asmlinkage __visible void schedule_tail(struct task_struct *prev)
-	__releases(rq->lock)
+	__releases(__rq_lockp(this_rq()))
 {
 	/*
 	 * New tasks start with FORK_PREEMPT_COUNT, see there and
@@ -5197,6 +5220,7 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 static __always_inline struct rq *
 context_switch(struct rq *rq, struct task_struct *prev,
 	       struct task_struct *next, struct rq_flags *rf)
+	__releases(__rq_lockp(rq))
 {
 	prepare_task_switch(rq, prev, next);
 
@@ -5865,6 +5889,7 @@ static void prev_balance(struct rq *rq, struct task_struct *prev,
  */
 static inline struct task_struct *
 __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	const struct sched_class *class;
 	struct task_struct *p;
@@ -5965,6 +5990,7 @@ static void queue_core_balance(struct rq *rq);
 
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	struct task_struct *next, *p, *max;
 	const struct cpumask *smt_mask;
@@ -6273,6 +6299,7 @@ static bool steal_cookie_task(int cpu, struct sched_domain *sd)
 }
 
 static void sched_core_balance(struct rq *rq)
+	__must_hold(__rq_lockp(rq))
 {
 	struct sched_domain *sd;
 	int cpu = cpu_of(rq);
@@ -6418,6 +6445,7 @@ static inline void sched_core_cpu_dying(unsigned int cpu) {}
 
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	return __pick_next_task(rq, prev, rf);
 }
@@ -8043,6 +8071,12 @@ static int __balance_push_cpu_stop(void *arg)
 	int cpu;
 
 	scoped_guard (raw_spinlock_irq, &p->pi_lock) {
+		/*
+		 * We may change the underlying rq, but the locks held will
+		 * appropriately be "transferred" when switching.
+		 */
+		context_unsafe_alias(rq);
+
 		cpu = select_fallback_rq(rq->cpu, p);
 
 		rq_lock(rq, &rf);
@@ -8066,6 +8100,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
  * effective when the hotplug motion is down.
  */
 static void balance_push(struct rq *rq)
+	__must_hold(__rq_lockp(rq))
 {
 	struct task_struct *push_task = rq->curr;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da46c3164537..d0c929ecdb6a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2860,6 +2860,7 @@ static int preferred_group_nid(struct task_struct *p, int nid)
 }
 
 static void task_numa_placement(struct task_struct *p)
+	__context_unsafe(/* conditional locking */)
 {
 	int seq, nid, max_nid = NUMA_NO_NODE;
 	unsigned long max_faults = 0;
@@ -4781,7 +4782,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
-static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf);
+static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
+	__must_hold(__rq_lockp(this_rq));
 
 static inline unsigned long task_util(struct task_struct *p)
 {
@@ -6188,6 +6190,7 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
  * used to track this state.
  */
 static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, unsigned long flags)
+	__must_hold(&cfs_b->lock)
 {
 	int throttled;
 
@@ -8919,6 +8922,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 
 struct task_struct *
 pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	struct sched_entity *se;
 	struct task_struct *p;
@@ -12858,6 +12862,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  *  > 0 - success, new (fair) tasks present
  */
 static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
+	__must_hold(__rq_lockp(this_rq))
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d30cca6870f5..25d2ff265227 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1358,8 +1358,13 @@ static inline u32 sched_rng(void)
 	return prandom_u32_state(this_cpu_ptr(&sched_rnd_state));
 }
 
+static __always_inline struct rq *__this_rq(void)
+{
+	return this_cpu_ptr(&runqueues);
+}
+
 #define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-#define this_rq()		this_cpu_ptr(&runqueues)
+#define this_rq()		__this_rq()
 #define task_rq(p)		cpu_rq(task_cpu(p))
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
@@ -1404,6 +1409,7 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
 }
 
 static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
+	__returns_ctx_lock(rq_lockp(rq)) /* alias them */
 {
 	if (rq->core_enabled)
 		return &rq->core->__lock;
@@ -1503,6 +1509,7 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
 }
 
 static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
+	__returns_ctx_lock(rq_lockp(rq)) /* alias them */
 {
 	return &rq->__lock;
 }
@@ -1545,32 +1552,42 @@ static inline bool rt_group_sched_enabled(void)
 #endif /* !CONFIG_RT_GROUP_SCHED */
 
 static inline void lockdep_assert_rq_held(struct rq *rq)
+	__assumes_ctx_lock(__rq_lockp(rq))
 {
 	lockdep_assert_held(__rq_lockp(rq));
 }
 
-extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-extern bool raw_spin_rq_trylock(struct rq *rq);
-extern void raw_spin_rq_unlock(struct rq *rq);
+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
+	__acquires(__rq_lockp(rq));
+
+extern bool raw_spin_rq_trylock(struct rq *rq)
+	__cond_acquires(true, __rq_lockp(rq));
+
+extern void raw_spin_rq_unlock(struct rq *rq)
+	__releases(__rq_lockp(rq));
 
 static inline void raw_spin_rq_lock(struct rq *rq)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock_nested(rq, 0);
 }
 
 static inline void raw_spin_rq_lock_irq(struct rq *rq)
+	__acquires(__rq_lockp(rq))
 {
 	local_irq_disable();
 	raw_spin_rq_lock(rq);
 }
 
 static inline void raw_spin_rq_unlock_irq(struct rq *rq)
+	__releases(__rq_lockp(rq))
 {
 	raw_spin_rq_unlock(rq);
 	local_irq_enable();
 }
 
 static inline unsigned long _raw_spin_rq_lock_irqsave(struct rq *rq)
+	__acquires(__rq_lockp(rq))
 {
 	unsigned long flags;
 
@@ -1581,6 +1598,7 @@ static inline unsigned long _raw_spin_rq_lock_irqsave(struct rq *rq)
 }
 
 static inline void raw_spin_rq_unlock_irqrestore(struct rq *rq, unsigned long flags)
+	__releases(__rq_lockp(rq))
 {
 	raw_spin_rq_unlock(rq);
 	local_irq_restore(flags);
@@ -1829,18 +1847,16 @@ static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
 	rq->clock_update_flags |= rf->clock_update_flags;
 }
 
-extern
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock);
+#define __task_rq_lock(...) __acquire_ret(___task_rq_lock(__VA_ARGS__), __rq_lockp(__ret))
+extern struct rq *___task_rq_lock(struct task_struct *p, struct rq_flags *rf) __acquires_ret;
 
-extern
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock);
+#define task_rq_lock(...) __acquire_ret(_task_rq_lock(__VA_ARGS__), __rq_lockp(__ret))
+extern struct rq *_task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(&p->pi_lock) __acquires_ret;
 
 static inline void
 __task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock(rq);
@@ -1848,8 +1864,7 @@ __task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 
 static inline void
 task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
+	__releases(__rq_lockp(rq), &p->pi_lock)
 {
 	__task_rq_unlock(rq, p, rf);
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
@@ -1859,6 +1874,8 @@ DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct,
 		    _T->rq = task_rq_lock(_T->lock, &_T->rf),
 		    task_rq_unlock(_T->rq, _T->lock, &_T->rf),
 		    struct rq *rq; struct rq_flags rf)
+DECLARE_LOCK_GUARD_1_ATTRS(task_rq_lock, __acquires(_T->pi_lock), __releases((*(struct task_struct **)_T)->pi_lock))
+#define class_task_rq_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(task_rq_lock, _T)
 
 DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct,
 		    _T->rq = __task_rq_lock(_T->lock, &_T->rf),
@@ -1866,42 +1883,42 @@ DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct,
 		    struct rq *rq; struct rq_flags rf)
 
 static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock_irqsave(rq, rf->flags);
 	rq_pin_lock(rq, rf);
 }
 
 static inline void rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock_irq(rq);
 	rq_pin_lock(rq, rf);
 }
 
 static inline void rq_lock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock(rq);
 	rq_pin_lock(rq, rf);
 }
 
 static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock_irqrestore(rq, rf->flags);
 }
 
 static inline void rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock_irq(rq);
 }
 
 static inline void rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock(rq);
@@ -1912,18 +1929,27 @@ DEFINE_LOCK_GUARD_1(rq_lock, struct rq,
 		    rq_unlock(_T->lock, &_T->rf),
 		    struct rq_flags rf)
 
+DECLARE_LOCK_GUARD_1_ATTRS(rq_lock, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T)));
+#define class_rq_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock, _T)
+
 DEFINE_LOCK_GUARD_1(rq_lock_irq, struct rq,
 		    rq_lock_irq(_T->lock, &_T->rf),
 		    rq_unlock_irq(_T->lock, &_T->rf),
 		    struct rq_flags rf)
 
+DECLARE_LOCK_GUARD_1_ATTRS(rq_lock_irq, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T)));
+#define class_rq_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock_irq, _T)
+
 DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
 		    rq_lock_irqsave(_T->lock, &_T->rf),
 		    rq_unlock_irqrestore(_T->lock, &_T->rf),
 		    struct rq_flags rf)
 
-static inline struct rq *this_rq_lock_irq(struct rq_flags *rf)
-	__acquires(rq->lock)
+DECLARE_LOCK_GUARD_1_ATTRS(rq_lock_irqsave, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T)));
+#define class_rq_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock_irqsave, _T)
+
+#define this_rq_lock_irq(...) __acquire_ret(_this_rq_lock_irq(__VA_ARGS__), __rq_lockp(__ret))
+static inline struct rq *_this_rq_lock_irq(struct rq_flags *rf) __acquires_ret
 {
 	struct rq *rq;
 
@@ -3050,8 +3076,20 @@ static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
 #define DEFINE_LOCK_GUARD_2(name, type, _lock, _unlock, ...)		\
 __DEFINE_UNLOCK_GUARD(name, type, _unlock, type *lock2; __VA_ARGS__)	\
 static inline class_##name##_t class_##name##_constructor(type *lock, type *lock2) \
+	__no_context_analysis						\
 { class_##name##_t _t = { .lock = lock, .lock2 = lock2 }, *_T = &_t;	\
   _lock; return _t; }
+#define DECLARE_LOCK_GUARD_2_ATTRS(_name, _lock, _unlock1, _unlock2)	\
+static inline class_##_name##_t class_##_name##_constructor(lock_##_name##_t *_T1, \
+							     lock_##_name##_t *_T2) _lock; \
+static __always_inline void __class_##_name##_cleanup_ctx1(class_##_name##_t **_T1) \
+	__no_context_analysis _unlock1 { }				\
+static __always_inline void __class_##_name##_cleanup_ctx2(class_##_name##_t **_T2) \
+	__no_context_analysis _unlock2 { }
+#define WITH_LOCK_GUARD_2_ATTRS(_name, _T1, _T2)			\
+	class_##_name##_constructor(_T1, _T2),				\
+	*__UNIQUE_ID(unlock1) __cleanup(__class_##_name##_cleanup_ctx1) = (void *)(_T1),\
+	*__UNIQUE_ID(unlock2) __cleanup(__class_##_name##_cleanup_ctx2) = (void *)(_T2)
 
 static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
 {
@@ -3079,7 +3117,8 @@ static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
 	return rq1->cpu < rq2->cpu;
 }
 
-extern void double_rq_lock(struct rq *rq1, struct rq *rq2);
+extern void double_rq_lock(struct rq *rq1, struct rq *rq2)
+	__acquires(__rq_lockp(rq1), __rq_lockp(rq2));
 
 #ifdef CONFIG_PREEMPTION
 
@@ -3092,9 +3131,8 @@ extern void double_rq_lock(struct rq *rq1, struct rq *rq2);
  * also adds more overhead and therefore may reduce throughput.
  */
 static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
-	__releases(this_rq->lock)
-	__acquires(busiest->lock)
-	__acquires(this_rq->lock)
+	__must_hold(__rq_lockp(this_rq))
+	__acquires(__rq_lockp(busiest))
 {
 	raw_spin_rq_unlock(this_rq);
 	double_rq_lock(this_rq, busiest);
@@ -3111,12 +3149,16 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
  * regardless of entry order into the function.
  */
 static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
-	__releases(this_rq->lock)
-	__acquires(busiest->lock)
-	__acquires(this_rq->lock)
+	__must_hold(__rq_lockp(this_rq))
+	__acquires(__rq_lockp(busiest))
 {
-	if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
-	    likely(raw_spin_rq_trylock(busiest))) {
+	if (__rq_lockp(this_rq) == __rq_lockp(busiest)) {
+		__acquire(__rq_lockp(busiest)); /* already held */
+		double_rq_clock_clear_update(this_rq, busiest);
+		return 0;
+	}
+
+	if (likely(raw_spin_rq_trylock(busiest))) {
 		double_rq_clock_clear_update(this_rq, busiest);
 		return 0;
 	}
@@ -3139,6 +3181,8 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
 * double_lock_balance - lock the busiest runqueue, this_rq is locked already.
 */
 static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest)
+	__must_hold(__rq_lockp(this_rq))
+	__acquires(__rq_lockp(busiest))
 {
 	lockdep_assert_irqs_disabled();
 
@@ -3146,14 +3190,17 @@ static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest)
 }
 
 static inline void double_unlock_balance(struct rq *this_rq, struct rq *busiest)
-	__releases(busiest->lock)
+	__releases(__rq_lockp(busiest))
 {
 	if (__rq_lockp(this_rq) != __rq_lockp(busiest))
 		raw_spin_rq_unlock(busiest);
+	else
+		__release(__rq_lockp(busiest)); /* fake release */
 	lock_set_subclass(&__rq_lockp(this_rq)->dep_map, 0, _RET_IP_);
 }
 
 static inline void double_lock(spinlock_t *l1, spinlock_t *l2)
+	__acquires(l1, l2)
 {
 	if (l1 > l2)
 		swap(l1, l2);
@@ -3163,6 +3210,7 @@ static inline void double_lock(spinlock_t *l1, spinlock_t *l2)
 }
 
 static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
+	__acquires(l1, l2)
 {
 	if (l1 > l2)
 		swap(l1, l2);
@@ -3172,6 +3220,7 @@ static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
 }
 
 static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
+	__acquires(l1, l2)
 {
 	if (l1 > l2)
 		swap(l1, l2);
@@ -3181,6 +3230,7 @@ static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
 }
 
 static inline void double_raw_unlock(raw_spinlock_t *l1, raw_spinlock_t *l2)
+	__releases(l1, l2)
 {
 	raw_spin_unlock(l1);
 	raw_spin_unlock(l2);
@@ -3190,6 +3240,13 @@ DEFINE_LOCK_GUARD_2(double_raw_spinlock, raw_spinlock_t,
 		    double_raw_lock(_T->lock, _T->lock2),
 		    double_raw_unlock(_T->lock, _T->lock2))
 
+DECLARE_LOCK_GUARD_2_ATTRS(double_raw_spinlock,
+			   __acquires(_T1, _T2),
+			   __releases(*(raw_spinlock_t **)_T1),
+			   __releases(*(raw_spinlock_t **)_T2));
+#define class_double_raw_spinlock_constructor(_T1, _T2) \
+	WITH_LOCK_GUARD_2_ATTRS(double_raw_spinlock, _T1, _T2)
+
 /*
  * double_rq_unlock - safely unlock two runqueues
  *
@@ -3197,13 +3254,12 @@ DEFINE_LOCK_GUARD_2(double_raw_spinlock, raw_spinlock_t,
  * you need to do so manually after calling.
*/ static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2) - __releases(rq1->lock) - __releases(rq2->lock) + __releases(__rq_lockp(rq1), __rq_lockp(rq2)) { if (__rq_lockp(rq1) !=3D __rq_lockp(rq2)) raw_spin_rq_unlock(rq2); else - __release(rq2->lock); + __release(__rq_lockp(rq2)); /* fake release */ raw_spin_rq_unlock(rq1); } =20 diff --git a/scripts/context-analysis-suppression.txt b/scripts/context-ana= lysis-suppression.txt index df25c3d07a5b..fd8951d06706 100644 --- a/scripts/context-analysis-suppression.txt +++ b/scripts/context-analysis-suppression.txt @@ -26,6 +26,7 @@ src:*include/linux/refcount.h=3Demit src:*include/linux/rhashtable.h=3Demit src:*include/linux/rwlock*.h=3Demit src:*include/linux/rwsem.h=3Demit +src:*include/linux/sched*=3Demit src:*include/linux/seqlock*.h=3Demit src:*include/linux/spinlock*.h=3Demit src:*include/linux/srcu*.h=3Demit --=20 2.52.0.322.g1dd061c0dc-goog
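
Addendum (not part of the patch): for readers unfamiliar with how annotations
like __must_hold()/__acquires() above are checked, the context analysis maps
onto Clang's thread safety analysis when building with Clang. Below is a
minimal, self-contained user-space sketch of the _double_lock_balance()
annotation pattern using the raw Clang attributes directly; it compiles with
`clang -Wthread-safety -c sketch.c`. The names mutex_t, mutex_lock(),
mutex_unlock() and double_lock_sketch() are illustrative only, not kernel API.

/* sketch.c - build with: clang -Wthread-safety -c sketch.c */
#include <pthread.h>

/* A lockable type the analysis tracks as a "capability". */
typedef struct __attribute__((capability("mutex"))) mutex {
	pthread_mutex_t m;
} mutex_t;

/* Trusted wrappers: the attributes are ground truth for callers. */
static void mutex_lock(mutex_t *mu)
	__attribute__((acquire_capability(*mu), no_thread_safety_analysis))
{
	pthread_mutex_lock(&mu->m);
}

static void mutex_unlock(mutex_t *mu)
	__attribute__((release_capability(*mu), no_thread_safety_analysis))
{
	pthread_mutex_unlock(&mu->m);
}

/*
 * Analogue of _double_lock_balance(): 'held' must already be held on
 * entry (kernel: __must_hold()), and 'busiest' is additionally held on
 * return (kernel: __acquires()).  The analysis verifies that every
 * return path leaves both locks held, even though 'held' is dropped
 * and retaken in the middle to enforce a fixed locking order.
 */
static void double_lock_sketch(mutex_t *held, mutex_t *busiest)
	__attribute__((requires_capability(*held),
		       acquire_capability(*busiest)))
{
	mutex_unlock(held);	/* drop, then retake both in a fixed order */
	mutex_lock(busiest);
	mutex_lock(held);
}

The bare __acquire()/__release() statements in the diff (the "already held"
and "fake release" branches) play a similar balancing role inside function
bodies: they adjust the analysis' notion of which locks are held without
touching a real lock, which is what lets the shared-__rq_lockp() paths
satisfy the checker.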