From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:12 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-2-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 01/17] arm64: stacktrace: Add shared header for common
 stack unwinding code
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com,
 mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
 yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

In order to reuse the arm64 stack unwinding logic for the nVHE
hypervisor stack, move the common code to a shared header
(arch/arm64/include/asm/stacktrace/common.h).

The nVHE hypervisor cannot safely link against kernel code, so we
make use of the shared header to avoid duplicated logic later in
this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
---

Changes in v5:
  - Add Reviewed-by tags from Mark Brown and Fuad

 arch/arm64/include/asm/stacktrace.h        |  35 +------
 arch/arm64/include/asm/stacktrace/common.h | 105 +++++++++++++++++++++
 arch/arm64/kernel/stacktrace.c             |  57 -----------
 3 files changed, 106 insertions(+), 91 deletions(-)
 create mode 100644 arch/arm64/include/asm/stacktrace/common.h

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index aec9315bf156..79f455b37c84 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -8,52 +8,19 @@
 #include
 #include
 #include
-#include
 #include

 #include
 #include
 #include

-enum stack_type {
-	STACK_TYPE_UNKNOWN,
-	STACK_TYPE_TASK,
-	STACK_TYPE_IRQ,
-	STACK_TYPE_OVERFLOW,
-	STACK_TYPE_SDEI_NORMAL,
-	STACK_TYPE_SDEI_CRITICAL,
-	__NR_STACK_TYPES
-};
-
-struct stack_info {
-	unsigned long low;
-	unsigned long high;
-	enum stack_type type;
-};
+#include <asm/stacktrace/common.h>

 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
			   const char *loglvl);

 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

-static inline bool on_stack(unsigned long sp, unsigned long size,
-			    unsigned long low, unsigned long high,
-			    enum stack_type type, struct stack_info *info)
-{
-	if (!low)
-		return false;
-
-	if (sp < low || sp + size < sp || sp + size > high)
-		return false;
-
-	if (info) {
-		info->low = low;
-		info->high = high;
-		info->type = type;
-	}
-	return true;
-}
-
 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
				struct stack_info *info)
 {
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
new file mode 100644
index 000000000000..64ae4f6b06fe
--- /dev/null
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Common arm64 stack unwinder code.
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ */
+#ifndef __ASM_STACKTRACE_COMMON_H
+#define __ASM_STACKTRACE_COMMON_H
+
+#include
+#include
+#include
+
+enum stack_type {
+	STACK_TYPE_UNKNOWN,
+	STACK_TYPE_TASK,
+	STACK_TYPE_IRQ,
+	STACK_TYPE_OVERFLOW,
+	STACK_TYPE_SDEI_NORMAL,
+	STACK_TYPE_SDEI_CRITICAL,
+	__NR_STACK_TYPES
+};
+
+struct stack_info {
+	unsigned long low;
+	unsigned long high;
+	enum stack_type type;
+};
+
+/*
+ * A snapshot of a frame record or fp/lr register values, along with some
+ * accounting information necessary for robust unwinding.
+ *
+ * @fp:          The fp value in the frame record (or the real fp)
+ * @pc:          The lr value in the frame record (or the real lr)
+ *
+ * @stacks_done: Stacks which have been entirely unwound, for which it is no
+ *               longer valid to unwind to.
+ *
+ * @prev_fp:     The fp that pointed to this frame record, or a synthetic value
+ *               of 0. This is used to ensure that within a stack, each
+ *               subsequent frame record is at an increasing address.
+ * @prev_type:   The type of stack this frame record was on, or a synthetic
+ *               value of STACK_TYPE_UNKNOWN. This is used to detect a
+ *               transition from one stack to another.
+ *
+ * @kr_cur:      When KRETPROBES is selected, holds the kretprobe instance
+ *               associated with the most recently encountered replacement lr
+ *               value.
+ *
+ * @task:        The task being unwound.
+ */
+struct unwind_state {
+	unsigned long fp;
+	unsigned long pc;
+	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
+	unsigned long prev_fp;
+	enum stack_type prev_type;
+#ifdef CONFIG_KRETPROBES
+	struct llist_node *kr_cur;
+#endif
+	struct task_struct *task;
+};
+
+static inline bool on_stack(unsigned long sp, unsigned long size,
+			    unsigned long low, unsigned long high,
+			    enum stack_type type, struct stack_info *info)
+{
+	if (!low)
+		return false;
+
+	if (sp < low || sp + size < sp || sp + size > high)
+		return false;
+
+	if (info) {
+		info->low = low;
+		info->high = high;
+		info->type = type;
+	}
+	return true;
+}
+
+static inline void unwind_init_common(struct unwind_state *state,
+				      struct task_struct *task)
+{
+	state->task = task;
+#ifdef CONFIG_KRETPROBES
+	state->kr_cur = NULL;
+#endif
+
+	/*
+	 * Prime the first unwind.
+	 *
+	 * In unwind_next() we'll check that the FP points to a valid stack,
+	 * which can't be STACK_TYPE_UNKNOWN, and the first unwind will be
+	 * treated as a transition to whichever stack that happens to be. The
+	 * prev_fp value won't be used, but we set it to 0 such that it is
+	 * definitely not an accessible stack address.
+	 */
+	bitmap_zero(state->stacks_done, __NR_STACK_TYPES);
+	state->prev_fp = 0;
+	state->prev_type = STACK_TYPE_UNKNOWN;
+}
+
+#endif /* __ASM_STACKTRACE_COMMON_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index fcaa151b81f1..94a5dd2ab8fd 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,63 +18,6 @@
 #include
 #include

-/*
- * A snapshot of a frame record or fp/lr register values, along with some
- * accounting information necessary for robust unwinding.
- *
- * @fp:          The fp value in the frame record (or the real fp)
- * @pc:          The lr value in the frame record (or the real lr)
- *
- * @stacks_done: Stacks which have been entirely unwound, for which it is no
- *               longer valid to unwind to.
- *
- * @prev_fp:     The fp that pointed to this frame record, or a synthetic value
- *               of 0. This is used to ensure that within a stack, each
- *               subsequent frame record is at an increasing address.
- * @prev_type:   The type of stack this frame record was on, or a synthetic
- *               value of STACK_TYPE_UNKNOWN. This is used to detect a
- *               transition from one stack to another.
- *
- * @kr_cur:      When KRETPROBES is selected, holds the kretprobe instance
- *               associated with the most recently encountered replacement lr
- *               value.
- *
- * @task:        The task being unwound.
- */
-struct unwind_state {
-	unsigned long fp;
-	unsigned long pc;
-	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
-	unsigned long prev_fp;
-	enum stack_type prev_type;
-#ifdef CONFIG_KRETPROBES
-	struct llist_node *kr_cur;
-#endif
-	struct task_struct *task;
-};
-
-static void unwind_init_common(struct unwind_state *state,
-			       struct task_struct *task)
-{
-	state->task = task;
-#ifdef CONFIG_KRETPROBES
-	state->kr_cur = NULL;
-#endif
-
-	/*
-	 * Prime the first unwind.
-	 *
-	 * In unwind_next() we'll check that the FP points to a valid stack,
-	 * which can't be STACK_TYPE_UNKNOWN, and the first unwind will be
-	 * treated as a transition to whichever stack that happens to be. The
-	 * prev_fp value won't be used, but we set it to 0 such that it is
-	 * definitely not an accessible stack address.
-	 */
-	bitmap_zero(state->stacks_done, __NR_STACK_TYPES);
-	state->prev_fp = 0;
-	state->prev_type = STACK_TYPE_UNKNOWN;
-}
-
 /*
  * Start an unwind from a pt_regs.
 *
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:13 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-3-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 02/17] arm64: stacktrace: Factor out on_accessible_stack_common()
From: Kalesh Singh <kaleshsingh@google.com>

Move common on_accessible_stack checks to stacktrace/common.h. This
is used in the implementation of the nVHE hypervisor unwinder later
in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
---

Changes in v5:
  - Add Reviewed-by tags from Mark Brown and Fuad
  - Remove random whitespace change, per Mark Brown

 arch/arm64/include/asm/stacktrace.h        |  6 ++----
 arch/arm64/include/asm/stacktrace/common.h | 18 ++++++++++++++++++
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 79f455b37c84..43f4b4a6d383 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -65,8 +65,8 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
				       unsigned long sp, unsigned long size,
				       struct stack_info *info)
 {
-	if (info)
-		info->type = STACK_TYPE_UNKNOWN;
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;

	if (on_task_stack(tsk, sp, size, info))
		return true;
@@ -74,8 +74,6 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
		return false;
	if (on_irq_stack(sp, size, info))
		return true;
-	if (on_overflow_stack(sp, size, info))
-		return true;
	if (on_sdei_stack(sp, size, info))
		return true;

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 64ae4f6b06fe..f58b786460d3 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -62,6 +62,9 @@ struct unwind_state {
	struct task_struct *task;
 };

+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info);
+
 static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high,
			    enum stack_type type, struct stack_info *info)
@@ -80,6 +83,21 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
	return true;
 }

+static inline bool on_accessible_stack_common(const struct task_struct *tsk,
+					      unsigned long sp,
+					      unsigned long size,
+					      struct stack_info *info)
+{
+	if (info)
+		info->type = STACK_TYPE_UNKNOWN;
+
+	/*
+	 * Both the kernel and nvhe hypervisor make use of
+	 * an overflow_stack
+	 */
+	return on_overflow_stack(sp, size, info);
+}
+
 static inline void unwind_init_common(struct unwind_state *state,
				      struct task_struct *task)
 {
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:14 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-4-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 03/17] arm64: stacktrace: Factor out unwind_next_common()
From: Kalesh Singh <kaleshsingh@google.com>

Move common unwind_next logic to stacktrace/common.h. This allows
reusing the code in the implementation of the nVHE hypervisor stack
unwinder, later in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
---

Changes in v5:
  - Add Reviewed-by tags from Mark Brown and Fuad

 arch/arm64/include/asm/stacktrace/common.h | 50 ++++++++++++++++++++++
 arch/arm64/kernel/stacktrace.c             | 41 ++----------------
 2 files changed, 54 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index f58b786460d3..0c5cbfdb56b5 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -65,6 +65,10 @@ struct unwind_state {
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
				     struct stack_info *info);

+static inline bool on_accessible_stack(const struct task_struct *tsk,
+				       unsigned long sp, unsigned long size,
+				       struct stack_info *info);
+
 static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high,
			    enum stack_type type, struct stack_info *info)
@@ -120,4 +124,50 @@ static inline void unwind_init_common(struct unwind_state *state,
	state->prev_type = STACK_TYPE_UNKNOWN;
 }

+static inline int unwind_next_common(struct unwind_state *state,
+				     struct stack_info *info)
+{
+	struct task_struct *tsk = state->task;
+	unsigned long fp = state->fp;
+
+	if (fp & 0x7)
+		return -EINVAL;
+
+	if (!on_accessible_stack(tsk, fp, 16, info))
+		return -EINVAL;
+
+	if (test_bit(info->type, state->stacks_done))
+		return -EINVAL;
+
+	/*
+	 * As stacks grow downward, any valid record on the same stack must be
+	 * at a strictly higher address than the prior record.
+	 *
+	 * Stacks can nest in several valid orders, e.g.
+	 *
+	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
+	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+	 *
+	 * ... but the nesting itself is strict. Once we transition from one
+	 * stack to another, it's never valid to unwind back to that first
+	 * stack.
+	 */
+	if (info->type == state->prev_type) {
+		if (fp <= state->prev_fp)
+			return -EINVAL;
+	} else {
+		__set_bit(state->prev_type, state->stacks_done);
+	}
+
+	/*
+	 * Record this frame record's values and location. The prev_fp and
+	 * prev_type are only meaningful to the next unwind_next() invocation.
+	 */
+	state->fp = READ_ONCE(*(unsigned long *)(fp));
+	state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
+	state->prev_fp = fp;
+	state->prev_type = info->type;
+
+	return 0;
+}
 #endif /* __ASM_STACKTRACE_COMMON_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 94a5dd2ab8fd..834851939364 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -81,48 +81,15 @@ static int notrace unwind_next(struct unwind_state *state)
	struct task_struct *tsk = state->task;
	unsigned long fp = state->fp;
	struct stack_info info;
+	int err;

	/* Final frame; nothing to unwind */
	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
		return -ENOENT;

-	if (fp & 0x7)
-		return -EINVAL;
-
-	if (!on_accessible_stack(tsk, fp, 16, &info))
-		return -EINVAL;
-
-	if (test_bit(info.type, state->stacks_done))
-		return -EINVAL;
-
-	/*
-	 * As stacks grow downward, any valid record on the same stack must be
-	 * at a strictly higher address than the prior record.
-	 *
-	 * Stacks can nest in several valid orders, e.g.
-	 *
-	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
-	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
-	 *
-	 * ...
but the nesting itself is strict. Once we transition from one - * stack to another, it's never valid to unwind back to that first - * stack. - */ - if (info.type =3D=3D state->prev_type) { - if (fp <=3D state->prev_fp) - return -EINVAL; - } else { - __set_bit(state->prev_type, state->stacks_done); - } - - /* - * Record this frame record's values and location. The prev_fp and - * prev_type are only meaningful to the next unwind_next() invocation. - */ - state->fp =3D READ_ONCE(*(unsigned long *)(fp)); - state->pc =3D READ_ONCE(*(unsigned long *)(fp + 8)); - state->prev_fp =3D fp; - state->prev_type =3D info.type; + err =3D unwind_next_common(state, &info); + if (err) + return err; =20 state->pc =3D ptrauth_strip_insn_pac(state->pc); =20 --=20 2.37.0.170.g444d1eabd0-goog From nobody Fri Apr 17 22:34:56 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 644ADC43334 for ; Thu, 21 Jul 2022 05:57:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231219AbiGUF5t (ORCPT ); Thu, 21 Jul 2022 01:57:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46470 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231131AbiGUF5o (ORCPT ); Thu, 21 Jul 2022 01:57:44 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73DD679691 for ; Wed, 20 Jul 2022 22:57:43 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id v1-20020a259d81000000b0066ec7dff8feso570239ybp.18 for ; Wed, 20 Jul 2022 22:57:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; 
bh=BrC0iFUi7vnhcT+nMUoOxg3oXGqYAC5+3i/gvJsLlbw=; b=SBHl0MnAKRFV5/9Cx0u9IGW1uKO26lo5LpMs/VKqDXqZhOUXW2q5KRdxO6ttiP3nDu 0Ss8X8zAd7TS0qW4woYj8Mjk96gW1L50N3DM2DUASrIbtpsq92qGZVMoMSgieuWz2Wrm 8cjfA/1BLtzpfvEgXZVCWeoF4ywAgZq0n8dt5kaCexB/mncyP9CD+WPCJYmygQFLEL33 Rubo2DNaZC9AKNomceQdKYajP5aPFZijnBFtGYU96f6i9aM5TaiflC0BGzgKuEnV2cn4 FN+Y5kmROrmAnRggZj9Vx3Lfcde7qoDWA9mly3RVHb6AJq54+dMsmrBrkvmtidf8Lt14 BxOA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=BrC0iFUi7vnhcT+nMUoOxg3oXGqYAC5+3i/gvJsLlbw=; b=UjBiwyCKdxbVRSdAxWJ3w8O2Dt+RmRfF+StkSVdZY5raFW85wIO2GdBshJp825U5dT sTf/1UP+/reu7L/+6LYQwXi9RVRth4ygtdrF7tUiB53086SVu0fBN/0cO8okw9f6pHGT vZqDOE2l0mQjAcT9+lEQQNsOP47dAxeZS02IslaPVenaq1Xo+DIYqM41yl2x2mgZWAj+ 6XFGEwZ1xoafcSuSYh4klbky/EGdiR3bLDzu4GQKGQBNnbKMdAy59F9vqZWTyVB030ai s328j9M8lUMHrPHMe+pBNX/K+H4rm2Z1+9AaWXX8faQfPETlOfe90gA6CU/Fhtk9ZWNx diFg== X-Gm-Message-State: AJIora8Vx6Qsjo2efkhl8Fjh9nNa83gv9s+j43D4TP9w2mi47M6wHihK vFWrn6G08Cd3xitbpDyTTdQCI82umo15cRjmWw== X-Google-Smtp-Source: AGRyM1s6dEnzSM/TjATo5XryVR3TbDiGD8oaFkh/6JrgATgoBXNgg6oZxo446DCtWSijqKIcoJISLrUkRSBrVKoaaQ== X-Received: from kaleshsingh.mtv.corp.google.com ([2620:15c:211:200:5a87:b61e:76b5:d1e0]) (user=kaleshsingh job=sendgmr) by 2002:a81:4e04:0:b0:31e:5782:ed76 with SMTP id c4-20020a814e04000000b0031e5782ed76mr13240666ywb.183.1658383063172; Wed, 20 Jul 2022 22:57:43 -0700 (PDT) Date: Wed, 20 Jul 2022 22:57:15 -0700 In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com> Message-Id: <20220721055728.718573-5-kaleshsingh@google.com> Mime-Version: 1.0 References: <20220721055728.718573-1-kaleshsingh@google.com> X-Mailer: git-send-email 2.37.0.170.g444d1eabd0-goog Subject: [PATCH v5 04/17] arm64: stacktrace: Handle frame pointer from different address spaces From: Kalesh Singh To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com, 
 tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com,
 mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
 yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The unwinder code is made reusable so that it can be used to unwind
various types of stacks. One use case is unwinding the nVHE hyp stack
from the host (EL1) in non-protected mode. This means that the unwinder
must be able to translate HYP stack addresses to kernel addresses.

Add a callback (stack_trace_translate_fp_fn) to allow specifying the
translation function.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Fix typo in commit text, per Fuad
  - Update unwind_next_common() to not have side effects on failure, per Fuad
  - Use regular comment instead of doc comments, per Fuad

 arch/arm64/include/asm/stacktrace/common.h | 29 +++++++++++++++++++---
 arch/arm64/kernel/stacktrace.c             |  2 +-
 2 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 0c5cbfdb56b5..e89c8c39858d 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -124,11 +124,25 @@ static inline void unwind_init_common(struct unwind_state *state,
        state->prev_type = STACK_TYPE_UNKNOWN;
 }
 
+/*
+ * stack_trace_translate_fp_fn() - Translates a non-kernel frame pointer to
+ * a kernel address.
+ *
+ * @fp:   the frame pointer to be updated to its kernel address.
+ * @type: the stack type associated with frame pointer @fp
+ *
+ * Returns true on success and @fp is updated to the corresponding
+ * kernel virtual address; otherwise returns false.
+ */
+typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
+                                            enum stack_type type);
+
 static inline int unwind_next_common(struct unwind_state *state,
-                                     struct stack_info *info)
+                                     struct stack_info *info,
+                                     stack_trace_translate_fp_fn translate_fp)
 {
+       unsigned long fp = state->fp, kern_fp = fp;
        struct task_struct *tsk = state->task;
-       unsigned long fp = state->fp;
 
        if (fp & 0x7)
                return -EINVAL;
@@ -139,6 +153,13 @@ static inline int unwind_next_common(struct unwind_state *state,
        if (test_bit(info->type, state->stacks_done))
                return -EINVAL;
 
+       /*
+        * If fp is not from the current address space perform the necessary
+        * translation before dereferencing it to get the next fp.
+        */
+       if (translate_fp && !translate_fp(&kern_fp, info->type))
+               return -EINVAL;
+
        /*
         * As stacks grow downward, any valid record on the same stack must be
         * at a strictly higher address than the prior record.
@@ -163,8 +184,8 @@ static inline int unwind_next_common(struct unwind_state *state,
         * Record this frame record's values and location. The prev_fp and
         * prev_type are only meaningful to the next unwind_next() invocation.
         */
-       state->fp = READ_ONCE(*(unsigned long *)(fp));
-       state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
+       state->fp = READ_ONCE(*(unsigned long *)(kern_fp));
+       state->pc = READ_ONCE(*(unsigned long *)(kern_fp + 8));
        state->prev_fp = fp;
        state->prev_type = info->type;
 
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 834851939364..eef3cf6bf2d7 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -87,7 +87,7 @@ static int notrace unwind_next(struct unwind_state *state)
        if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
                return -ENOENT;
 
-       err = unwind_next_common(state, &info);
+       err = unwind_next_common(state, &info, NULL);
        if (err)
                return err;
 
-- 
2.37.0.170.g444d1eabd0-goog
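The new hook's contract (update *fp in place on success, return false and leave the caller's state untouched on failure) can be illustrated with a small userspace sketch. Everything below is illustrative: the fixed HYP_OFFSET is a made-up stand-in for the real hyp VA translation, and the enum is reduced to what the example needs.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reduced model of the kernel types used by the new callback. */
enum stack_type { STACK_TYPE_UNKNOWN, STACK_TYPE_HYP };

typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
                                            enum stack_type type);

/* Hypothetical linear HYP->kernel offset; the real translation depends
 * on the hyp VA layout and is not a constant. */
#define HYP_OFFSET 0x1000UL

static bool hyp_to_kern_fp(unsigned long *fp, enum stack_type type)
{
        if (type != STACK_TYPE_HYP)
                return false;   /* refuse addresses we cannot translate */
        *fp -= HYP_OFFSET;      /* success: update @fp in place */
        return true;
}

/* Caller pattern from unwind_next_common(): translate into a local copy,
 * so a failed translation has no side effects on the unwind state. */
static bool translate_for_deref(unsigned long fp, enum stack_type type,
                                stack_trace_translate_fp_fn translate_fp,
                                unsigned long *kern_fp)
{
        *kern_fp = fp;
        if (translate_fp && !translate_fp(kern_fp, type))
                return false;
        return true;
}
```

A NULL callback means "no translation needed", which is exactly what the kernel-side caller passes.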
From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:16 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-6-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 05/17] arm64: stacktrace: Factor out common unwind()
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com

Move unwind() to stacktrace/common.h, and as a result the kernel
unwind_next() to asm/stacktrace.h. This allows reusing unwind() in
the implementation of the nVHE HYP stack unwinder, later in the
series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Reviewed-by: Mark Brown
Tested-by: Fuad Tabba
---
Changes in v5:
  - Add Reviewed-by tag from Fuad

 arch/arm64/include/asm/stacktrace.h        | 51 ++++++++++++++++
 arch/arm64/include/asm/stacktrace/common.h | 19 ++++++
 arch/arm64/kernel/stacktrace.c             | 67 ----------------------
 3 files changed, 70 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 43f4b4a6d383..ea828579a98b 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -11,6 +11,7 @@
 #include
 
 #include
+#include
 #include
 #include
 
@@ -80,4 +81,54 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
        return false;
 }
 
+/*
+ * Unwind from one frame record (A) to the next frame record (B).
+ *
+ * We terminate early if the location of B indicates a malformed chain of frame
+ * records (e.g. a cycle), determined based on the location and fp value of A
+ * and the location (but not the fp value) of B.
+ */
+static inline int notrace unwind_next(struct unwind_state *state)
+{
+       struct task_struct *tsk = state->task;
+       unsigned long fp = state->fp;
+       struct stack_info info;
+       int err;
+
+       /* Final frame; nothing to unwind */
+       if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+               return -ENOENT;
+
+       err = unwind_next_common(state, &info, NULL);
+       if (err)
+               return err;
+
+       state->pc = ptrauth_strip_insn_pac(state->pc);
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+       if (tsk->ret_stack &&
+               (state->pc == (unsigned long)return_to_handler)) {
+               unsigned long orig_pc;
+               /*
+                * This is a case where function graph tracer has
+                * modified a return address (LR) in a stack frame
+                * to hook a function return.
+                * So replace it to an original value.
+                */
+               orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
+                                               (void *)state->fp);
+               if (WARN_ON_ONCE(state->pc == orig_pc))
+                       return -EINVAL;
+               state->pc = orig_pc;
+       }
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+#ifdef CONFIG_KRETPROBES
+       if (is_kretprobe_trampoline(state->pc))
+               state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
+#endif
+
+       return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+
 #endif /* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index e89c8c39858d..7807752aaab1 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 #include
 
 enum stack_type {
@@ -69,6 +70,8 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
                                       unsigned long sp, unsigned long size,
                                       struct stack_info *info);
 
+static inline int unwind_next(struct unwind_state *state);
+
 static inline bool on_stack(unsigned long sp, unsigned long size,
                            unsigned long low, unsigned long high,
                            enum stack_type type, struct stack_info *info)
@@ -191,4 +194,20 @@ static inline int unwind_next_common(struct unwind_state *state,
 
        return 0;
 }
+
+static inline void notrace unwind(struct unwind_state *state,
+                                 stack_trace_consume_fn consume_entry,
+                                 void *cookie)
+{
+       while (1) {
+               int ret;
+
+               if (!consume_entry(cookie, state->pc))
+                       break;
+               ret = unwind_next(state);
+               if (ret < 0)
+                       break;
+       }
+}
+NOKPROBE_SYMBOL(unwind);
 #endif /* __ASM_STACKTRACE_COMMON_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index eef3cf6bf2d7..9fa60ee48499 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -7,14 +7,12 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 
 #include
-#include
 #include
 #include
 
@@ -69,71 +67,6 @@ static inline void unwind_init_from_task(struct unwind_state *state,
        state->pc = thread_saved_pc(task);
 }
 
-/*
- * Unwind from one frame record (A) to the next frame record (B).
- *
- * We terminate early if the location of B indicates a malformed chain of frame
- * records (e.g. a cycle), determined based on the location and fp value of A
- * and the location (but not the fp value) of B.
- */
-static int notrace unwind_next(struct unwind_state *state)
-{
-       struct task_struct *tsk = state->task;
-       unsigned long fp = state->fp;
-       struct stack_info info;
-       int err;
-
-       /* Final frame; nothing to unwind */
-       if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-               return -ENOENT;
-
-       err = unwind_next_common(state, &info, NULL);
-       if (err)
-               return err;
-
-       state->pc = ptrauth_strip_insn_pac(state->pc);
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-       if (tsk->ret_stack &&
-               (state->pc == (unsigned long)return_to_handler)) {
-               unsigned long orig_pc;
-               /*
-                * This is a case where function graph tracer has
-                * modified a return address (LR) in a stack frame
-                * to hook a function return.
-                * So replace it to an original value.
-                */
-               orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
-                                               (void *)state->fp);
-               if (WARN_ON_ONCE(state->pc == orig_pc))
-                       return -EINVAL;
-               state->pc = orig_pc;
-       }
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-#ifdef CONFIG_KRETPROBES
-       if (is_kretprobe_trampoline(state->pc))
-               state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
-#endif
-
-       return 0;
-}
-NOKPROBE_SYMBOL(unwind_next);
-
-static void notrace unwind(struct unwind_state *state,
-                          stack_trace_consume_fn consume_entry, void *cookie)
-{
-       while (1) {
-               int ret;
-
-               if (!consume_entry(cookie, state->pc))
-                       break;
-               ret = unwind_next(state);
-               if (ret < 0)
-                       break;
-       }
-}
-NOKPROBE_SYMBOL(unwind);
-
 static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
        char *loglvl = arg;
-- 
2.37.0.170.g444d1eabd0-goog
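The factored-out unwind() loop above has a simple shape: hand each pc to a consumer callback, stop when the consumer declines or the unwinder runs out of frames. A toy userspace model of that control flow (names mirror the kernel API, but the "state" here is just an array of pcs rather than a real frame-record chain):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy unwind state: a fixed list of return addresses. */
struct unwind_state {
        const unsigned long *pcs;
        int idx, len;
};

typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long pc);

/* Advance to the next frame; negative return means no more frames. */
static int unwind_next(struct unwind_state *state)
{
        if (++state->idx >= state->len)
                return -1;
        return 0;
}

/* Same loop structure as the kernel's unwind(): consume, then advance. */
static void unwind(struct unwind_state *state,
                   stack_trace_consume_fn consume_entry, void *cookie)
{
        while (1) {
                if (!consume_entry(cookie, state->pcs[state->idx]))
                        break;
                if (unwind_next(state) < 0)
                        break;
        }
}

/* Example consumer: count entries, never stop early. */
static bool count_entries(void *cookie, unsigned long pc)
{
        (void)pc;
        ++*(int *)cookie;
        return true;
}
```

Because the consumer returns a bool, a caller can also truncate the walk early (e.g. stop after N frames) without touching the loop itself.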
From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:17 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-7-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 06/17] arm64: stacktrace: Add description of stacktrace/common.h
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com

Add a brief description of how to use stacktrace/common.h to
implement a stack unwinder.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Add short description of each required function, per Fuad and Marc
  - Add Reviewed-by tag from Fuad

 arch/arm64/include/asm/stacktrace/common.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 7807752aaab1..be7920ba70b0 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -2,6 +2,21 @@
 /*
  * Common arm64 stack unwinder code.
  *
+ * To implement a new arm64 stack unwinder:
+ *     1) Include this header
+ *
+ *     2) Provide implementations for the following functions:
+ *          on_overflow_stack():   Returns true if SP is on the overflow
+ *                                 stack.
+ *          on_accessible_stack(): Returns true if SP is on any accessible
+ *                                 stack.
+ *          unwind_next():         Performs validation checks on the frame
+ *                                 pointer, and transitions unwind_state
+ *                                 to the next frame.
+ *
+ * See: arch/arm64/include/asm/stacktrace.h for reference
+ *     implementations.
+ *
  * Copyright (C) 2012 ARM Ltd.
  */
 
 #ifndef __ASM_STACKTRACE_COMMON_H
-- 
2.37.0.170.g444d1eabd0-goog
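The on_overflow_stack()/on_accessible_stack() contract described above ultimately reduces to a range check of the kind the common header's on_stack() helper performs: the object [sp, sp + size) must lie entirely within [low, high). A standalone sketch of that check, simplified to drop the stack_info bookkeeping, including the overflow-safe comparison:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified on_stack()-style range check: is the object of `size` bytes
 * starting at `sp` fully contained in the stack [low, high)? The
 * `sp + size < sp` test guards against address-arithmetic wraparound. */
static bool on_stack(unsigned long sp, unsigned long size,
                     unsigned long low, unsigned long high)
{
        if (!low)
                return false;           /* stack not mapped/known */
        if (sp < low || sp + size < sp || sp + size > high)
                return false;
        return true;
}
```

An unwinder's on_accessible_stack() would typically just apply this check once per stack it knows about (task, IRQ, overflow, ...).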
From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:18 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-8-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 07/17] KVM: arm64: On stack overflow switch to hyp overflow_stack
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com

On hyp stack overflow, switch to a 16-byte aligned secondary stack.
This provides us stack space to better handle overflows, and is used
in a subsequent patch to dump the hypervisor stacktrace.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Add Reviewed-by tag from Fuad

 arch/arm64/kvm/hyp/nvhe/Makefile     |  2 +-
 arch/arm64/kvm/hyp/nvhe/host.S       |  9 ++-------
 arch/arm64/kvm/hyp/nvhe/stacktrace.c | 11 +++++++++++
 3 files changed, 14 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/stacktrace.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f9fe4dc21b1f..524e7dad5739 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
         hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
-        cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o
+        cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
         ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ea6a397b64a6..b6c0188c4b35 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -177,13 +177,8 @@ SYM_FUNC_END(__host_hvc)
        b       hyp_panic
 
 .L__hyp_sp_overflow\@:
-       /*
-        * Reset SP to the top of the stack, to allow handling the hyp_panic.
-        * This corrupts the stack but is ok, since we won't be attempting
-        * any unwinding here.
-        */
-       ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
-       mov     sp, x0
+       /* Switch to the overflow stack */
+       adr_this_cpu sp, overflow_stack + OVERFLOW_STACK_SIZE, x0
 
        b       hyp_panic_bad_stack
        ASM_BUG()
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
new file mode 100644
index 000000000000..a3d5b34e1249
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM nVHE hypervisor stack tracing support.
+ *
+ * Copyright (C) 2022 Google LLC
+ */
+#include
+#include
+
+DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
+       __aligned(16);
-- 
2.37.0.170.g444d1eabd0-goog
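The `adr_this_cpu sp, overflow_stack + OVERFLOW_STACK_SIZE` line above points SP at the *top* of the per-CPU array, since stacks grow down. A userspace model of that arithmetic (the per-CPU machinery is replaced by a plain static array; sizes as in the patch):

```c
#include <assert.h>
#include <stdint.h>

#define OVERFLOW_STACK_SIZE 4096

/* Stand-in for the DEFINE_PER_CPU(..., overflow_stack) __aligned(16)
 * declaration: a 16-byte aligned array of longs. */
static unsigned long overflow_stack[OVERFLOW_STACK_SIZE / sizeof(long)]
        __attribute__((aligned(16)));

/* On overflow, SP is set to base + size: the highest address of the
 * stack, which is where a downward-growing stack starts. */
static uintptr_t overflow_stack_top(void)
{
        return (uintptr_t)overflow_stack + OVERFLOW_STACK_SIZE;
}
```

Because the base is 16-byte aligned and the size is a multiple of 16, the resulting SP satisfies the AArch64 16-byte stack-alignment requirement.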
From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:19 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-9-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 08/17] KVM: arm64: Add PROTECTED_NVHE_STACKTRACE Kconfig
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com

This can be used to disable stacktraces for the protected KVM nVHE
hypervisor, in order to save on the associated memory usage. This
option is disabled by default, since protected KVM is not currently
widely used on platforms other than Android.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Make PROTECTED_NVHE_STACKTRACE depend on NVHE_EL2_DEBUG, per Marc

 arch/arm64/kvm/Kconfig | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8a5fbbf084df..09c995869916 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -46,6 +46,21 @@ menuconfig KVM
 
          If unsure, say N.
 
+config PROTECTED_NVHE_STACKTRACE
+       bool "Protected KVM hypervisor stacktraces"
+       depends on NVHE_EL2_DEBUG
+       default n
+       help
+         Say Y here to enable pKVM hypervisor stacktraces on hyp_panic()
+
+         If you are not using protected nVHE (pKVM), say N.
+
+         If using protected nVHE mode, but cannot afford the associated
+         memory cost (less than 0.75 page per CPU) of pKVM stacktraces,
+         say N.
+
+         If unsure, say N.
+
 config NVHE_EL2_DEBUG
        bool "Debug mode for non-VHE EL2 object"
        depends on KVM
-- 
2.37.0.170.g444d1eabd0-goog
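Because of the `depends on NVHE_EL2_DEBUG` added above, the option only becomes selectable when its parent debug option is also set. A minimal config fragment illustrating this (option names taken from the patch; a sketch, not a recommended production configuration):

```
CONFIG_KVM=y
CONFIG_NVHE_EL2_DEBUG=y
CONFIG_PROTECTED_NVHE_STACKTRACE=y
```

With `NVHE_EL2_DEBUG` unset, `PROTECTED_NVHE_STACKTRACE` is silently dropped by the Kconfig machinery.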
tx2sDHDtTtsjWbg0eFCL9tWpKll8J3z7+gnqeZ8hdIBPAHE61P13XhIlLr0qBID867F3 GPsuMIv0HqDb7uQuHy7SV9lpWxMkDNym3UOXh0pxU7R+s3qb13cfURUwza16RVLkcZ1z x+xF+vwlkkLDeA9IeuAaze7fEQcXwHE5+eYLG4s/wlZPNRcJH17UcqHHa9Qt74aGFDP4 /g4JBwEOGEaEwBWUc9L/jWiv6YlxUWdU/qPgVk+v9FeAXOlxykBBH5QZCx8r8Evb1ZDO Lr3Q== X-Gm-Message-State: AJIora9YBA9KifFdLqsaaHHSrGnszZhark5GU/tOAi9t62l2oBRu1FBN N5Fe+CXgJNtf/9ewV4ul/785apPy0I7gD0Xa/g== X-Google-Smtp-Source: AGRyM1uvMhMsMnmREMTDtB8EyA11iiDGNnM5EU/vVjV/cfBEZrOG6zJ3oBHMRQm5wTLQpoQkhzUqzuOm5KQq0n22Og== X-Received: from kaleshsingh.mtv.corp.google.com ([2620:15c:211:200:5a87:b61e:76b5:d1e0]) (user=kaleshsingh job=sendgmr) by 2002:a25:a1a9:0:b0:66f:8387:d3e1 with SMTP id a38-20020a25a1a9000000b0066f8387d3e1mr37333703ybi.547.1658383075762; Wed, 20 Jul 2022 22:57:55 -0700 (PDT) Date: Wed, 20 Jul 2022 22:57:20 -0700 In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com> Message-Id: <20220721055728.718573-10-kaleshsingh@google.com> Mime-Version: 1.0 References: <20220721055728.718573-1-kaleshsingh@google.com> X-Mailer: git-send-email 2.37.0.170.g444d1eabd0-goog Subject: [PATCH v5 09/17] KVM: arm64: Allocate shared pKVM hyp stacktrace buffers From: Kalesh Singh To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com, tabba@google.com Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In 
protected nVHE mode the host cannot directly access hypervisor memory, so we will dump the hypervisor stacktrace to a shared buffer with the host. The minimum size for the buffer required, assuming the min frame size of [x29, x30] (2 * sizeof(long)), is half the combined size of the hypervisor and overflow stacks plus an additional entry to delimit the end of the stacktrace. The stacktrace buffers are used later in the seried to dump the nVHE hypervisor stacktrace when using protected-mode. Signed-off-by: Kalesh Singh Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- Changes in v5: - Fix typo in commit text, per Marc arch/arm64/include/asm/memory.h | 8 ++++++++ arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++ 2 files changed, 12 insertions(+) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memor= y.h index 0af70d9abede..cab80a9a4086 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -113,6 +113,14 @@ =20 #define OVERFLOW_STACK_SIZE SZ_4K =20 +/* + * With the minimum frame size of [x29, x30], exactly half the combined + * sizes of the hyp and overflow stacks is the maximum size needed to + * save the unwinded stacktrace; plus an additional entry to delimit the + * end. + */ +#define NVHE_STACKTRACE_SIZE ((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + size= of(long)) + /* * Alignment of kernel segments (e.g. .text, .data). 
 *

diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index a3d5b34e1249..69e65b457f1c 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,3 +9,7 @@
 
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
+
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:21 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-11-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 10/17] KVM: arm64: Stub implementation of pKVM HYP stack unwinder
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org,
drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com

Add stub implementations of the protected nVHE stack unwinder so the
series builds; they are implemented later in this series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Mark unwind_next() as inline, per Marc

 arch/arm64/include/asm/stacktrace/nvhe.h | 59 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     |  3 +-
 2 files changed, 60 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/stacktrace/nvhe.h

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
new file mode 100644
index 000000000000..80d71932afff
--- /dev/null
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * KVM nVHE hypervisor stack tracing support.
+ *
+ * The unwinder implementation depends on the nVHE mode:
+ *
+ *   1) pKVM (protected nVHE) mode - the host cannot directly access
+ *      the HYP memory. The stack is unwinded in EL2 and dumped to a shared
+ *      buffer where the host can read and print the stacktrace.
+ *
+ * Copyright (C) 2022 Google LLC
+ */
+#ifndef __ASM_STACKTRACE_NVHE_H
+#define __ASM_STACKTRACE_NVHE_H
+
+#include
+
+static inline bool on_accessible_stack(const struct task_struct *tsk,
+				       unsigned long sp, unsigned long size,
+				       struct stack_info *info)
+{
+	return false;
+}
+
+#ifdef __KVM_NVHE_HYPERVISOR__
+/*
+ * Protected nVHE HYP stack unwinder
+ *
+ * In protected mode, the unwinding is done by the hypervisor in EL2.
+ */
+
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	return false;
+}
+
+static inline int notrace unwind_next(struct unwind_state *state)
+{
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	return false;
+}
+
+static inline int notrace unwind_next(struct unwind_state *state)
+{
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+#endif /* __KVM_NVHE_HYPERVISOR__ */
+#endif /* __ASM_STACKTRACE_NVHE_H */

diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 69e65b457f1c..96c8b93320eb 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -4,8 +4,7 @@
  *
  * Copyright (C) 2022 Google LLC
  */
-#include
-#include
+#include
 
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:22 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-12-kaleshsingh@google.com>
References:
<20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 11/17] KVM: arm64: Stub implementation of non-protected nVHE HYP stack unwinder
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com

Add stub implementations of the non-protected nVHE stack unwinder so the
series builds; they are implemented later in this series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Mark unwind_next() as inline, per Marc
  - Comment !__KVM_NVHE_HYPERVISOR__ unwinder path, per Marc

 arch/arm64/include/asm/stacktrace/nvhe.h | 26 ++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 80d71932afff..3078501f8e22 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -8,6 +8,12 @@
  *      the HYP memory. The stack is unwinded in EL2 and dumped to a shared
  *      buffer where the host can read and print the stacktrace.
  *
+ *   2) Non-protected nVHE mode - the host can directly access the
+ *      HYP stack pages and unwind the HYP stack in EL1. This saves having
+ *      to allocate shared buffers for the host to read the unwinded
+ *      stacktrace.
+ *
+ *
  * Copyright (C) 2022 Google LLC
  */
 #ifndef __ASM_STACKTRACE_NVHE_H
@@ -55,5 +61,25 @@ static inline int notrace unwind_next(struct unwind_state *state)
 NOKPROBE_SYMBOL(unwind_next);
 #endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
 
+#else /* !__KVM_NVHE_HYPERVISOR__ */
+/*
+ * Conventional (non-protected) nVHE HYP stack unwinder
+ *
+ * In non-protected mode, the unwinding is done from kernel proper context
+ * (by the host in EL1).
+ */
+
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	return false;
+}
+
+static inline int notrace unwind_next(struct unwind_state *state)
+{
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+
 #endif /* __KVM_NVHE_HYPERVISOR__ */
 #endif /* __ASM_STACKTRACE_NVHE_H */
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:23 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-13-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 12/17] KVM: arm64: Save protected-nVHE (pKVM) hyp stacktrace
From: Kalesh Singh
To: maz@kernel.org,
mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com

In protected nVHE mode, the host cannot access privately owned hypervisor
memory. The hypervisor also aims to remain simple to reduce the attack
surface, and does not provide any printk support.

For these reasons, the approach taken to provide hypervisor stacktraces in
protected mode is:
  1) Unwind and save the hyp stack addresses in EL2 to a buffer shared
     with the host (done in this patch).
  2) Delegate the dumping and symbolization of the addresses to the
     host in EL1 (later patch in the series).

On hyp_panic(), the hypervisor prepares the stacktrace before returning to
the host.
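[Editor's note: the two-step scheme above — an EL2 writer appending entries to a fixed shared buffer with a trailing zero delimiter, and an EL1 reader walking the buffer up to that zero — can be sketched in plain C. This is an illustrative model only; the names, buffer size, and helpers below are hypothetical, not the kernel's.]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative capacity; the kernel sizes this from the hyp stacks. */
#define TRACE_ENTRIES 8

/*
 * "EL2" side: append one return address, always keeping a trailing zero
 * entry as the end-of-trace delimiter. Returns 0 when the buffer is full
 * (two free slots are needed: one for the entry, one for the delimiter).
 */
static int trace_save_entry(unsigned long *buf, size_t *pos, unsigned long where)
{
	if (*pos > TRACE_ENTRIES - 2)
		return 0;
	buf[*pos] = where;	/* save the current entry */
	buf[*pos + 1] = 0UL;	/* trailing zero delimiter */
	(*pos)++;		/* next entry overwrites the delimiter */
	return 1;
}

/* "EL1" host side: count entries up to (not including) the delimiter. */
static size_t trace_count(const unsigned long *buf)
{
	size_t n = 0;

	while (n < TRACE_ENTRIES && buf[n] != 0)
		n++;
	return n;
}
```

The delimiter-based layout means the two sides need not share a separate length variable: the buffer contents alone tell the host where the trace ends.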
Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Comment/clarify pkvm_save_backtrace_entry(), per Fuad
  - kvm_nvhe_unwind_init() doesn't need to be always inline; make it
    inline instead to avoid linking issues, per Marc
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/include/asm/stacktrace/nvhe.h | 17 ++++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     | 78 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c         |  6 ++
 3 files changed, 101 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 3078501f8e22..05d7e03e0a8c 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -21,6 +21,23 @@
 
 #include
 
+/*
+ * kvm_nvhe_unwind_init - Start an unwind from the given nVHE HYP fp and pc
+ *
+ * @state : unwind_state to initialize
+ * @fp    : frame pointer at which to start the unwinding.
+ * @pc    : program counter at which to start the unwinding.
+ */
+static inline void kvm_nvhe_unwind_init(struct unwind_state *state,
+					unsigned long fp,
+					unsigned long pc)
+{
+	unwind_init_common(state, NULL);
+
+	state->fp = fp;
+	state->pc = pc;
+}
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
 				       unsigned long sp, unsigned long size,
 				       struct stack_info *info)
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 96c8b93320eb..60461c033a04 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -11,4 +11,82 @@ DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+
+/*
+ * pkvm_save_backtrace_entry - Saves a protected nVHE HYP stacktrace entry
+ *
+ * @arg   : the position of the entry in the stacktrace buffer
+ * @where : the program counter corresponding to the stack frame
+ *
+ * Save the return address of a stack frame to the shared stacktrace buffer.
+ * The host can access this shared buffer from EL1 to dump the backtrace.
+ */
+static bool pkvm_save_backtrace_entry(void *arg, unsigned long where)
+{
+	unsigned long **stacktrace_entry = (unsigned long **)arg;
+	int nr_entries = NVHE_STACKTRACE_SIZE / sizeof(long);
+	unsigned long *stacktrace_start, *stacktrace_end;
+
+	stacktrace_start = (unsigned long *)this_cpu_ptr(pkvm_stacktrace);
+	stacktrace_end = stacktrace_start + nr_entries;
+
+	/*
+	 * Need 2 free slots: 1 for current entry and 1 for the
+	 * trailing zero entry delimiter.
+	 */
+	if (*stacktrace_entry > stacktrace_end - 2)
+		return false;
+
+	/* Save the current entry */
+	**stacktrace_entry = where;
+
+	/* Add trailing zero entry delimiter */
+	*(*stacktrace_entry + 1) = 0UL;
+
+	/*
+	 * Increment the current entry position. The zero entry
+	 * will be overwritten by the next backtrace entry (if any)
+	 */
+	++*stacktrace_entry;
+
+	return true;
+}
+
+/*
+ * pkvm_save_backtrace - Saves the protected nVHE HYP stacktrace
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the unwinded stack addresses to the shared stacktrace buffer.
+ * The host can access this shared buffer from EL1 to dump the backtrace.
+ */
+static void pkvm_save_backtrace(unsigned long fp, unsigned long pc)
+{
+	void *stacktrace_entry = (void *)this_cpu_ptr(pkvm_stacktrace);
+	struct unwind_state state;
+
+	kvm_nvhe_unwind_init(&state, fp, pc);
+
+	unwind(&state, pkvm_save_backtrace_entry, &stacktrace_entry);
+}
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static void pkvm_save_backtrace(unsigned long fp, unsigned long pc)
+{
+}
 #endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+/*
+ * kvm_nvhe_prepare_backtrace - prepare to dump the nVHE backtrace
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Saves the information needed by the host to dump the nVHE hypervisor
+ * backtrace.
+ */
+void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
+{
+	if (is_protected_kvm_enabled())
+		pkvm_save_backtrace(fp, pc);
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..64e13445d0d9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -34,6 +34,8 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
+extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
@@ -375,6 +377,10 @@ asmlinkage void __noreturn hyp_panic(void)
 		__sysreg_restore_state_nvhe(host_ctxt);
 	}
 
+	/* Prepare to dump kvm nvhe hyp stacktrace */
+	kvm_nvhe_prepare_backtrace((unsigned long)__builtin_frame_address(0),
+				   _THIS_IP_);
+
 	__hyp_do_panic(host_ctxt, spsr, elr, par);
 	unreachable();
 }
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:24 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-14-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 13/17] KVM:
arm64: Prepare non-protected nVHE hypervisor stacktrace
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com

In non-protected nVHE mode (non-pKVM) the host can directly access
hypervisor memory, and unwinding of the hypervisor stacktrace is done from
EL1 to save on memory for shared buffers.

To unwind the hypervisor stack from EL1 the host needs to know the
starting point for the unwind and information that will allow it to
translate hypervisor stack addresses to the corresponding kernel
addresses. This patch sets up this bookkeeping. It is used later in the
series.
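[Editor's note: the address translation mentioned above — mapping a hyp VA on the stack back to a kernel VA the host can symbolize — amounts to applying a constant offset once the host knows the hyp VA of a stack base and the kernel VA of the same underlying memory. The sketch below is illustrative only; the function name and values are hypothetical, not the kernel's.]

```c
#include <assert.h>

/*
 * Translate a hypervisor VA within a known region to the corresponding
 * kernel VA, given both views of the region's base address. The offset
 * (kern_base - hyp_base) is constant for the whole region.
 */
static unsigned long hyp_to_kern_va(unsigned long hyp_va,
				    unsigned long hyp_base,
				    unsigned long kern_base)
{
	return kern_base + (hyp_va - hyp_base);
}
```

This is why the per-CPU bookkeeping in this patch records base addresses (stack base, overflow-stack base) rather than every frame: one base pair per region is enough for the host to translate any address the unwinder produces.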
Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
Changes in v5:
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/include/asm/kvm_asm.h         | 16 ++++++++++++++++
 arch/arm64/include/asm/stacktrace/nvhe.h |  4 ++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     | 24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..53035763e48e 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -176,6 +176,22 @@ struct kvm_nvhe_init_params {
 	unsigned long vtcr;
 };
 
+/*
+ * Used by the host in EL1 to dump the nVHE hypervisor backtrace on
+ * hyp_panic() in non-protected mode.
+ *
+ * @stack_base:          hyp VA of the hyp_stack base.
+ * @overflow_stack_base: hyp VA of the hyp_overflow_stack base.
+ * @fp:                  hyp FP where the backtrace begins.
+ * @pc:                  hyp PC where the backtrace begins.
+ */
+struct kvm_nvhe_stacktrace_info {
+	unsigned long stack_base;
+	unsigned long overflow_stack_base;
+	unsigned long fp;
+	unsigned long pc;
+};
+
 /* Translate a kernel address @ptr into its equivalent linear mapping */
 #define kvm_ksym_ref(ptr)						\
 	({								\
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 05d7e03e0a8c..8f02803a005f 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_STACKTRACE_NVHE_H
 #define __ASM_STACKTRACE_NVHE_H
 
+#include
 #include
 
 /*
@@ -52,6 +53,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
  * In protected mode, the unwinding is done by the hypervisor in EL2.
 */
 
+DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 60461c033a04..cbd365f4f26a 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,6 +9,28 @@
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
 
+DEFINE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+
+/*
+ * hyp_prepare_backtrace - Prepare non-protected nVHE backtrace.
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the information needed by the host to unwind the non-protected
+ * nVHE hypervisor stack in EL1.
+ */
+static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info = this_cpu_ptr(&kvm_stacktrace_info);
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+
+	stacktrace_info->stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
+	stacktrace_info->overflow_stack_base = (unsigned long)this_cpu_ptr(overflow_stack);
+	stacktrace_info->fp = fp;
+	stacktrace_info->pc = pc;
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
 
@@ -89,4 +111,6 @@ void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_save_backtrace(fp, pc);
+	else
+		hyp_prepare_backtrace(fp, pc);
 }
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Fri Apr 17 22:34:56 2026
Date: Wed, 20 Jul 2022 22:57:25 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-15-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 14/17] KVM: arm64: Implement protected nVHE hyp stack unwinder
From: Kalesh Singh
To: maz@kernel.org,
Implements the common framework necessary for unwind() to work
in the protected nVHE context:
 - on_accessible_stack()
 - on_overflow_stack()
 - unwind_next()

Protected nVHE unwind() is used to unwind and save the hyp stack
addresses to the shared stacktrace buffer. The host reads the
entries in this buffer, symbolizes and dumps the stacktrace (later
patch in the series).

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/stacktrace/common.h |  2 ++
 arch/arm64/include/asm/stacktrace/nvhe.h   | 34 ++++++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index be7920ba70b0..73fd9e143c4a 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -34,6 +34,7 @@ enum stack_type {
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+	STACK_TYPE_HYP,
 	__NR_STACK_TYPES
 };
 
@@ -186,6 +187,7 @@ static inline int unwind_next_common(struct unwind_state *state,
  *
  * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
  * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+ * HYP -> OVERFLOW
  *
  * ... but the nesting itself is strict. Once we transition from one
  * stack to another, it's never valid to unwind back to that first
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 8f02803a005f..c3688e717136 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -39,10 +39,19 @@ static inline void kvm_nvhe_unwind_init(struct unwind_state *state,
 	state->pc = pc;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info);
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
 				       unsigned long sp, unsigned long size,
 				       struct stack_info *info)
 {
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;
+
+	if (on_hyp_stack(sp, size, info))
+		return true;
+
 	return false;
 }
 
@@ -60,12 +69,27 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
 {
-	return false;
+	unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long high = params->stack_hyp_va;
+	unsigned long low = high - PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static inline int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, NULL);
 }
 NOKPROBE_SYMBOL(unwind_next);
 #else	/* !CONFIG_PROTECTED_NVHE_STACKTRACE */
@@ -75,6 +99,12 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 	return false;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	return false;
+}
+
 static inline int notrace unwind_next(struct unwind_state *state)
 {
 	return 0;
-- 
2.37.0.170.g444d1eabd0-goog
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 20 Jul 2022 22:57:26 -0700
Subject: [PATCH v5 15/17] KVM: arm64: Implement non-protected nVHE hyp stack unwinder
Message-Id: <20220721055728.718573-16-kaleshsingh@google.com>
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Implements the common framework necessary for unwind() to work
for non-protected nVHE mode:
 - on_accessible_stack()
 - on_overflow_stack()
 - unwind_next()

Non-protected nVHE unwind() is used by the host in EL1 to unwind
and dump the hypervisor stacktrace.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
---
Changes in v5:
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/include/asm/stacktrace/nvhe.h | 67 +++++++++++++++++++++++-
 arch/arm64/kvm/arm.c                     |  2 +-
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index c3688e717136..7a6e761aa443 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -120,15 +120,78 @@ NOKPROBE_SYMBOL(unwind_next);
  * (by the host in EL1).
  */
 
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack);
+DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+
+/*
+ * kvm_nvhe_stack_kern_va - Convert KVM nVHE HYP stack addresses to kernel VAs
+ *
+ * The nVHE hypervisor stack is mapped in the flexible 'private' VA range, to
+ * allow for guard pages below the stack. Consequently, the fixed offset address
+ * translation macros won't work here.
+ *
+ * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
+ * stack base.
+ *
+ * Returns true on success and updates @addr to its corresponding kernel VA;
+ * otherwise returns false.
+ */
+static inline bool kvm_nvhe_stack_kern_va(unsigned long *addr,
+					  enum stack_type type)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	switch (type) {
+	case STACK_TYPE_HYP:
+		kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
+		hyp_base = (unsigned long)stacktrace_info->stack_base;
+		break;
+	case STACK_TYPE_OVERFLOW:
+		kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack);
+		hyp_base = (unsigned long)stacktrace_info->overflow_stack_base;
+		break;
+	default:
+		return false;
+	}
+
+	hyp_offset = *addr - hyp_base;
+
+	*addr = kern_base + hyp_offset;
+
+	return true;
+}
+
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
 {
-	return false;
+	struct kvm_nvhe_stacktrace_info *stacktrace_info
+		= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+	unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base;
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info
+		= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+	unsigned long low = (unsigned long)stacktrace_info->stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static inline int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, kvm_nvhe_stack_kern_va);
 }
 NOKPROBE_SYMBOL(unwind_next);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a0188144a122..6a64293108c5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -49,7 +49,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
-- 
2.37.0.170.g444d1eabd0-goog
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 20 Jul 2022 22:57:27 -0700
Subject: [PATCH v5 16/17] KVM: arm64: Introduce pkvm_dump_backtrace()
Message-Id: <20220721055728.718573-17-kaleshsingh@google.com>
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Dumps the pKVM hypervisor backtrace from EL1 by reading the unwound
addresses from the shared stacktrace buffer.

The nVHE hyp backtrace is dumped on hyp_panic(), before panicking
the host.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
---
Changes in v5:
  - Move code out from nvhe.h header to handle_exit.c, per Marc
  - Fix stacktrace symbolization when CONFIG_RANDOMIZE_BASE is enabled,
    per Fuad
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/kvm/handle_exit.c | 54 ++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index f66c0142b335..ad568da5c7d7 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -318,6 +318,57 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
 }
 
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
+			 pkvm_stacktrace);
+
+/*
+ * pkvm_dump_backtrace - Dump the protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * Dumping of the pKVM HYP backtrace is done by reading the
+ * stack addresses from the shared stacktrace buffer, since the
+ * host cannot directly access hypervisor memory in protected
+ * mode.
+ */
+static void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+	unsigned long *stacktrace_entry
+		= (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
+	unsigned long va_mask, pc;
+
+	va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	kvm_err("Protected nVHE HYP call trace:\n");
+
+	/* The stack trace is terminated by a null entry */
+	for (; *stacktrace_entry; stacktrace_entry++) {
+		/* Mask tags and convert to kern addr */
+		pc = (*stacktrace_entry & va_mask) + hyp_offset;
+		kvm_err(" [<%016lx>] %pB\n", pc, (void *)(pc + kaslr_offset()));
+	}
+
+	kvm_err("---- End of Protected nVHE HYP call trace ----\n");
+}
+#else	/* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+	kvm_err("Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE\n");
+}
+#endif	/* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+/*
+ * kvm_nvhe_dump_backtrace - Dump KVM nVHE hypervisor backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ */
+static void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
+{
+	if (is_protected_kvm_enabled())
+		pkvm_dump_backtrace(hyp_offset);
+}
+
 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 					      u64 elr_virt, u64 elr_phys,
 					      u64 par, uintptr_t vcpu,
@@ -353,6 +404,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 			(void *)panic_addr);
 	}
 
+	/* Dump the nVHE hypervisor backtrace */
+	kvm_nvhe_dump_backtrace(hyp_offset);
+
 	/*
 	 * Hyp has panicked and we're going to handle that by panicking the
 	 * kernel. The kernel offset will be revealed in the panic so we're
-- 
2.37.0.170.g444d1eabd0-goog
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 20 Jul 2022 22:57:28 -0700
Subject: [PATCH v5 17/17] KVM: arm64: Introduce hyp_dump_backtrace()
Message-Id: <20220721055728.718573-18-kaleshsingh@google.com>
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>

In non-protected nVHE mode, unwinds and dumps the hypervisor backtrace
from EL1. This is possible because the host can directly access the
hypervisor stack pages in non-protected mode.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
---
Changes in v5:
  - Move code out from nvhe.h header to handle_exit.c, per Marc
  - Fix stacktrace symbolization when CONFIG_RANDOMIZE_BASE is enabled,
    per Fuad
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/kvm/handle_exit.c | 65 +++++++++++++++++++++++++++++-----
 1 file changed, 56 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index ad568da5c7d7..432b6b26f4ad 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <asm/stacktrace/nvhe.h>
 #include
 
@@ -318,6 +319,56 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
 }
 
+/*
+ * kvm_nvhe_print_backtrace_entry - Symbolizes and prints the HYP stack address
+ */
+static void kvm_nvhe_print_backtrace_entry(unsigned long addr,
+					   unsigned long hyp_offset)
+{
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	/* Mask tags and convert to kern addr */
+	addr = (addr & va_mask) + hyp_offset;
+	kvm_err(" [<%016lx>] %pB\n", addr, (void *)(addr + kaslr_offset()));
+}
+
+/*
+ * hyp_dump_backtrace_entry - Dump an entry of the non-protected nVHE HYP stacktrace
+ *
+ * @arg   : the hypervisor offset, used for address translation
+ * @where : the program counter corresponding to the stack frame
+ */
+static bool hyp_dump_backtrace_entry(void *arg, unsigned long where)
+{
+	kvm_nvhe_print_backtrace_entry(where, (unsigned long)arg);
+
+	return true;
+}
+
+/*
+ * hyp_dump_backtrace - Dump the non-protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * The host can directly access HYP stack pages in non-protected
+ * mode, so the unwinding is done directly from EL1. This removes
+ * the need for shared buffers between host and hypervisor for
+ * the stacktrace.
+ */
+static void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	struct unwind_state state;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);
+
+	kvm_err("Non-protected nVHE HYP call trace:\n");
+	unwind(&state, hyp_dump_backtrace_entry, (void *)hyp_offset);
+	kvm_err("---- End of Non-protected nVHE HYP call trace ----\n");
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
 			 pkvm_stacktrace);
@@ -336,18 +387,12 @@ static void pkvm_dump_backtrace(unsigned long hyp_offset)
 {
 	unsigned long *stacktrace_entry
 		= (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
-	unsigned long va_mask, pc;
-
-	va_mask = GENMASK_ULL(vabits_actual - 1, 0);
 
 	kvm_err("Protected nVHE HYP call trace:\n");
 
-	/* The stack trace is terminated by a null entry */
-	for (; *stacktrace_entry; stacktrace_entry++) {
-		/* Mask tags and convert to kern addr */
-		pc = (*stacktrace_entry & va_mask) + hyp_offset;
-		kvm_err(" [<%016lx>] %pB\n", pc, (void *)(pc + kaslr_offset()));
-	}
+	/* The saved stacktrace is terminated by a null entry */
+	for (; *stacktrace_entry; stacktrace_entry++)
+		kvm_nvhe_print_backtrace_entry(*stacktrace_entry, hyp_offset);
 
 	kvm_err("---- End of Protected nVHE HYP call trace ----\n");
 }
@@ -367,6 +412,8 @@ static void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_dump_backtrace(hyp_offset);
+	else
+		hyp_dump_backtrace(hyp_offset);
 }
 
 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
-- 
2.37.0.170.g444d1eabd0-goog