From nobody Fri Oct 3 06:33:24 2025
Date: Thu, 4 Sep 2025 22:38:50 +0000
In-Reply-To:
 <20250904223850.884188-1-dylanbhatch@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250904223850.884188-1-dylanbhatch@google.com>
X-Mailer: git-send-email 2.51.0.355.g5224444f11-goog
Message-ID: <20250904223850.884188-7-dylanbhatch@google.com>
Subject: [PATCH v2 6/6] unwind: arm64: Add reliable stacktrace with sframe unwinder.
From: Dylan Hatch
To: Josh Poimboeuf, Steven Rostedt, Indu Bhagat, Peter Zijlstra,
	Will Deacon, Catalin Marinas, Jiri Kosina
Cc: Dylan Hatch, Roman Gushchin, Weinan Liu, Mark Rutland, Ian Rogers,
	linux-toolchains@vger.kernel.org, linux-kernel@vger.kernel.org,
	live-patching@vger.kernel.org, joe.lawrence@redhat.com,
	Puranjay Mohan, Song Liu, Prasanna Kumar T S M
Content-Type: text/plain; charset="utf-8"

From: Weinan Liu

Add an unwind_next_frame_sframe() function to unwind using sframe info.
Built with GNU Binutils 2.42 to verify that this sframe unwinder can
backtrace correctly on arm64.

To support livepatch, we also add arch_stack_walk_reliable() to provide
reliable stacktraces according to the requirements in
https://docs.kernel.org/livepatch/reliable-stacktrace.html#requirements.
The stacktrace is reported as not reliable if we are unable to unwind
the stack with the sframe unwinder and fall back to the FP-based
unwinder.

Signed-off-by: Weinan Liu
Signed-off-by: Dylan Hatch
Reviewed-by: Prasanna Kumar T S M
---
 arch/arm64/include/asm/stacktrace/common.h |   6 ++
 arch/arm64/kernel/setup.c                  |   2 +
 arch/arm64/kernel/stacktrace.c             | 102 +++++++++++++++++++++
 3 files changed, 110 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 821a8fdd31af..26449cd402db 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -25,6 +25,8 @@ struct stack_info {
  * @stack: The stack currently being unwound.
  * @stacks: An array of stacks which can be unwound.
  * @nr_stacks: The number of stacks in @stacks.
+ * @cfa: The sp value at the call site of the current function.
+ * @unreliable: Stacktrace is unreliable.
  */
 struct unwind_state {
 	unsigned long fp;
@@ -33,6 +35,10 @@ struct unwind_state {
 	struct stack_info stack;
 	struct stack_info *stacks;
 	int nr_stacks;
+#ifdef CONFIG_SFRAME_UNWINDER
+	unsigned long cfa;
+	bool unreliable;
+#endif
 };
 
 static inline struct stack_info stackinfo_get_unknown(void)
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 77c7926a4df6..ac1da45da532 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -375,6 +376,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 		    "This indicates a broken bootloader or old kernel\n",
 		    boot_args[1], boot_args[2], boot_args[3]);
 	}
+	init_sframe_table();
 }
 
 static inline bool cpu_can_disable(unsigned int cpu)
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 3ebcf8c53fb0..72e78024d05e 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -244,6 +245,53 @@ kunwind_next_frame_record(struct kunwind_state *state)
 	return 0;
 }
 
+#ifdef CONFIG_SFRAME_UNWINDER
+/*
+ * Unwind to the next frame according to sframe.
+ */
+static __always_inline int
+unwind_next_frame_sframe(struct unwind_state *state)
+{
+	unsigned long fp = state->fp, ip = state->pc;
+	unsigned long base_reg, cfa;
+	unsigned long pc_addr, fp_addr;
+	struct sframe_ip_entry entry;
+	struct stack_info *info;
+	struct frame_record *record = (struct frame_record *)fp;
+
+	int err;
+
+	/* frame record alignment 8 bytes */
+	if (fp & 0x7)
+		return -EINVAL;
+
+	info = unwind_find_stack(state, fp, sizeof(*record));
+	if (!info)
+		return -EINVAL;
+
+	err = sframe_find_pc(ip, &entry);
+	if (err)
+		return -EINVAL;
+
+	unwind_consume_stack(state, info, fp, sizeof(*record));
+
+	base_reg = entry.use_fp ? fp : state->cfa;
+
+	/* Set up the initial CFA using fp based info if CFA is not set */
+	if (!state->cfa)
+		cfa = fp - entry.fp_offset;
+	else
+		cfa = base_reg + entry.cfa_offset;
+	fp_addr = cfa + entry.fp_offset;
+	pc_addr = cfa + entry.ra_offset;
+	state->cfa = cfa;
+	state->fp = READ_ONCE(*(unsigned long *)(fp_addr));
+	state->pc = READ_ONCE(*(unsigned long *)(pc_addr));
+
+	return 0;
+}
+#endif
+
 /*
  * Unwind from one frame record (A) to the next frame record (B).
  *
@@ -263,7 +311,20 @@ kunwind_next(struct kunwind_state *state)
 	case KUNWIND_SOURCE_CALLER:
 	case KUNWIND_SOURCE_TASK:
 	case KUNWIND_SOURCE_REGS_PC:
+#ifdef CONFIG_SFRAME_UNWINDER
+		if (!state->common.unreliable)
+			err = unwind_next_frame_sframe(&state->common);
+
+		/* Fallback to FP based unwinder */
+		if (err || state->common.unreliable) {
 			err = kunwind_next_frame_record(state);
+			/* Mark its stacktrace result as unreliable if it is unwindable via FP */
+			if (!err)
+				state->common.unreliable = true;
+		}
+#else
+		err = kunwind_next_frame_record(state);
+#endif
 		break;
 	default:
 		err = -EINVAL;
@@ -350,6 +411,9 @@ kunwind_stack_walk(kunwind_consume_fn consume_state,
 		.common = {
 			.stacks = stacks,
 			.nr_stacks = ARRAY_SIZE(stacks),
+#ifdef CONFIG_SFRAME_UNWINDER
+			.cfa = 0,
+#endif
 		},
 	};
 
@@ -390,6 +454,43 @@ noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 	kunwind_stack_walk(arch_kunwind_consume_entry, &data, task, regs);
 }
 
+#ifdef CONFIG_SFRAME_UNWINDER
+struct kunwind_reliable_consume_entry_data {
+	stack_trace_consume_fn consume_entry;
+	void *cookie;
+	bool unreliable;
+};
+
+static __always_inline bool
+arch_kunwind_reliable_consume_entry(const struct kunwind_state *state, void *cookie)
+{
+	struct kunwind_reliable_consume_entry_data *data = cookie;
+
+	if (state->common.unreliable) {
+		data->unreliable = true;
+		return false;
+	}
+	return data->consume_entry(data->cookie, state->common.pc);
+}
+
+noinline notrace int arch_stack_walk_reliable(
+				stack_trace_consume_fn consume_entry,
+				void *cookie, struct task_struct *task)
+{
+	struct kunwind_reliable_consume_entry_data data = {
+		.consume_entry = consume_entry,
+		.cookie = cookie,
+		.unreliable = false,
+	};
+
+	kunwind_stack_walk(arch_kunwind_reliable_consume_entry, &data, task, NULL);
+
+	if (data.unreliable)
+		return -EINVAL;
+
+	return 0;
+}
+#else
 static __always_inline bool
 arch_reliable_kunwind_consume_entry(const struct kunwind_state *state, void *cookie)
 {
@@ -419,6 +520,7 @@ noinline noinstr int arch_stack_walk_reliable(stack_trace_consume_fn consume_ent
 	return kunwind_stack_walk(arch_reliable_kunwind_consume_entry, &data,
 				  task, NULL);
 }
+#endif
 
 struct bpf_unwind_consume_entry_data {
 	bool (*consume_entry)(void *cookie, u64 ip, u64 sp, u64 fp);
-- 
2.51.0.355.g5224444f11-goog