Date: Wed, 11 Mar 2026 22:57:40 +0000
In-Reply-To: <20260309223156.GA73501@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260309223156.GA73501@google.com>
X-Mailer: git-send-email 2.53.0.473.g4a7958ca14-goog
Message-ID: <20260311225822.1565895-1-cmllamas@google.com>
Subject: [PATCH] static_call: use CFI-compliant return0 stubs
From: Carlos Llamas
To: Sami Tolvanen, Catalin Marinas, Will Deacon, Peter Zijlstra,
	Josh Poimboeuf, Jason Baron, Alice Ryhl, Steven Rostedt,
	Ard Biesheuvel, Ingo Molnar, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, James Clark, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman,
	Valentin Schneider, Kees Cook, Linus Walleij,
	"Borislav Petkov (AMD)", Nathan Chancellor, Thomas Gleixner,
	Mathieu Desnoyers, Shaopeng Tan, Jens Remus, Juergen Gross,
	Carlos Llamas, Conor Dooley, David Kaplan, Lukas Bulwahn,
	Jinjie Ruan, James Morse, Thomas Huth, Sean Christopherson,
	Paolo Bonzini
Cc: kernel-team@android.com, linux-kernel@vger.kernel.org,
	Will McVicker, Thomas Weißschuh,
	moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	open list:PERFORMANCE EVENTS SUBSYSTEM
Content-Type: text/plain; charset="utf-8"

Architectures with !HAVE_STATIC_CALL (such as arm64) rely on the
generic static_call implementation via indirect calls. In particular,
users of DEFINE_STATIC_CALL_RET0 default to the generic
__static_call_return0 stub to optimize the unset path. However,
__static_call_return0 has a fixed signature of "long (*)(void)", which
may not match the prototype expected at the callsite. This triggers
CFI failures when CONFIG_CFI is enabled.
A trivial perf command triggers it:

  $ perf record -a sleep 1

  CFI failure at perf_prepare_sample+0x98/0x7f8 (target: __static_call_return0+0x0/0x10; expected type: 0x837de525)
  Internal error: Oops - CFI: 00000000f2008228 [#1] SMP
  Modules linked in:
  CPU: 0 UID: 0 PID: 638 Comm: perf Not tainted 7.0.0-rc3 #25 PREEMPT
  Hardware name: linux,dummy-virt (DT)
  pstate: 900000c5 (NzcV daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc : perf_prepare_sample+0x98/0x7f8
  lr : perf_prepare_sample+0x70/0x7f8
  sp : ffff80008289bc20
  x29: ffff80008289bc30 x28: 000000000000001f x27: 0000000000000018
  x26: 0000000000000100 x25: ffffffffffffffff x24: 0000000000000000
  x23: 0000000000010187 x22: ffff8000851eba40 x21: 0000000000010087
  x20: ffff0000098c9ea0 x19: ffff80008289bdc0 x18: 0000000000000000
  x17: 00000000837de525 x16: 0000000072923c8f x15: 7fffffffffffffff
  x14: 00007fffffffffff x13: 00000000ffffffea x12: 0000000000000000
  x11: 0000000000000015 x10: 0000000000000000 x9 : ffff8000822f2240
  x8 : ffff800080276e4c x7 : 0000000000000000 x6 : 0000000000000000
  x5 : 0000000000000000 x4 : ffff8000851eba10 x3 : ffff8000851eba40
  x2 : ffff8000822f2240 x1 : 0000000000000000 x0 : 00000009d377c3a0
  Call trace:
   perf_prepare_sample+0x98/0x7f8 (P)
   perf_event_output_forward+0x5c/0x17c
   __perf_event_overflow+0x2fc/0x460
   perf_event_overflow+0x1c/0x28
   armv8pmu_handle_irq+0x134/0x210
   [...]

To fix this, let architectures provide an ARCH_DEFINE_TYPED_STUB_RET0
implementation that generates individual signature-matching stubs for
users of DEFINE_STATIC_CALL_RET0. This ensures the CFI type hash of
the call target matches the one expected at the callsite.
Cc: Sami Tolvanen
Cc: Sean Christopherson
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Will McVicker
Fixes: 87b940a0675e ("perf/core: Use static_call to optimize perf_guest_info_callbacks")
Closes: https://lore.kernel.org/all/YfrQzoIWyv9lNljh@google.com/
Suggested-by: Sami Tolvanen
Signed-off-by: Carlos Llamas
---
 arch/Kconfig                         |  4 ++++
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/linkage.h     |  3 ++-
 arch/arm64/include/asm/static_call.h | 23 +++++++++++++++++++++++
 include/linux/static_call.h          | 19 ++++++++++++++++++-
 kernel/events/core.c                 | 11 +++++++----
 kernel/sched/core.c                  |  4 ++--
 7 files changed, 57 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/static_call.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 102ddbd4298e..7735d548f02e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1678,6 +1678,10 @@ config HAVE_STATIC_CALL_INLINE
 	depends on HAVE_STATIC_CALL
 	select OBJTOOL
 
+config HAVE_STATIC_CALL_TYPED_STUBS
+	bool
+	depends on !HAVE_STATIC_CALL
+
 config HAVE_PREEMPT_DYNAMIC
 	bool
 
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 38dba5f7e4d2..b370c31a23cf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -252,6 +252,7 @@ config ARM64
 	select HAVE_RSEQ
 	select HAVE_RUST if RUSTC_SUPPORTS_ARM64
 	select HAVE_STACKPROTECTOR
+	select HAVE_STATIC_CALL_TYPED_STUBS if CFI
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index 40bd17add539..5625ea365d27 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -4,9 +4,10 @@
 #ifdef __ASSEMBLER__
 #include <asm/assembler.h>
 #endif
+#include <linux/stringify.h>
 
 #define __ALIGN		.balign CONFIG_FUNCTION_ALIGNMENT
-#define __ALIGN_STR	".balign " #CONFIG_FUNCTION_ALIGNMENT
+#define __ALIGN_STR	__stringify(__ALIGN)
 
 /*
  * When using in-kernel BTI we need to ensure that PCS-conformant
diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h
new file mode 100644
index 000000000000..ef754b58b1c9
--- /dev/null
+++ b/arch/arm64/include/asm/static_call.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_STATIC_CALL_H
+#define _ASM_ARM64_STATIC_CALL_H
+
+#include <linux/compiler.h>
+#include <linux/linkage.h>
+
+/* Generates a CFI-compliant "return 0" stub matching @reffunc signature */
+#define __ARCH_DEFINE_TYPED_STUB_RET0(name, reffunc)			\
+	typeof(reffunc) name;						\
+	__ADDRESSABLE(name);						\
+	asm(								\
+	"	" __ALIGN_STR "				\n"		\
+	"	.4byte __kcfi_typeid_" #name "	\n"		\
+	#name ":				\n"		\
+	"	bti c				\n"		\
+	"	mov x0, xzr			\n"		\
+	"	ret"						\
+	);
+#define ARCH_DEFINE_TYPED_STUB_RET0(name, reffunc)			\
+	__ARCH_DEFINE_TYPED_STUB_RET0(name, reffunc)
+
+#endif /* _ASM_ARM64_STATIC_CALL_H */
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 78a77a4ae0ea..6cb44441dfe0 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -184,6 +184,8 @@ extern int static_call_text_reserved(void *start, void *end);
 
 extern long __static_call_return0(void);
 
+#define STATIC_CALL_STUB_RET0(...)	((void *)&__static_call_return0)
+
 #define DEFINE_STATIC_CALL(name, _func)					\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
@@ -270,6 +272,8 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 extern long __static_call_return0(void);
 
+#define STATIC_CALL_STUB_RET0(...)	((void *)&__static_call_return0)
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
@@ -294,6 +298,18 @@ static inline long __static_call_return0(void)
 	return 0;
 }
 
+#ifdef CONFIG_HAVE_STATIC_CALL_TYPED_STUBS
+#include <asm/static_call.h>
+
+#define STATIC_CALL_STUB_RET0(name)	__static_call_##name
+#define DEFINE_STATIC_CALL_STUB_RET0(name, _func)			\
+	ARCH_DEFINE_TYPED_STUB_RET0(STATIC_CALL_STUB_RET0(name), _func)
+#else
+/* Fall back to the generic __static_call_return0 stub */
+#define STATIC_CALL_STUB_RET0(...)	((void *)&__static_call_return0)
+#define DEFINE_STATIC_CALL_STUB_RET0(...)
+#endif
+
 #define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
@@ -307,7 +323,8 @@ static inline long __static_call_return0(void)
 	__DEFINE_STATIC_CALL(name, _func, NULL)
 
 #define DEFINE_STATIC_CALL_RET0(name, _func)				\
-	__DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+	DEFINE_STATIC_CALL_STUB_RET0(name, _func)			\
+	__DEFINE_STATIC_CALL(name, _func, STATIC_CALL_STUB_RET0(name))
 
 static inline void __static_call_nop(void) { }
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1f5699b339ec..6ac00e89d320 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7695,16 +7695,19 @@ void perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 }
 EXPORT_SYMBOL_GPL(perf_register_guest_info_callbacks);
 
+#define static_call_disable(name)					\
+	static_call_update(name, STATIC_CALL_STUB_RET0(name))
+
 void perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs)
 {
 	if (WARN_ON_ONCE(rcu_access_pointer(perf_guest_cbs) != cbs))
 		return;
 
 	rcu_assign_pointer(perf_guest_cbs, NULL);
-	static_call_update(__perf_guest_state, (void *)&__static_call_return0);
-	static_call_update(__perf_guest_get_ip, (void *)&__static_call_return0);
-	static_call_update(__perf_guest_handle_intel_pt_intr, (void *)&__static_call_return0);
-	static_call_update(__perf_guest_handle_mediated_pmi, (void *)&__static_call_return0);
+	static_call_disable(__perf_guest_state);
+	static_call_disable(__perf_guest_get_ip);
+	static_call_disable(__perf_guest_handle_intel_pt_intr);
+	static_call_disable(__perf_guest_handle_mediated_pmi);
 	synchronize_rcu();
 }
 EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c165a6e..57c441d01564 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7443,12 +7443,12 @@ EXPORT_SYMBOL(__cond_resched);
 #ifdef CONFIG_PREEMPT_DYNAMIC
 # ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
 # define cond_resched_dynamic_enabled	__cond_resched
-# define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
+# define cond_resched_dynamic_disabled	STATIC_CALL_STUB_RET0(cond_resched)
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(cond_resched);
 
 # define might_resched_dynamic_enabled	__cond_resched
-# define might_resched_dynamic_disabled	((void *)&__static_call_return0)
+# define might_resched_dynamic_disabled	STATIC_CALL_STUB_RET0(might_resched)
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(might_resched);
 # elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-- 
2.53.0.473.g4a7958ca14-goog