From: Kalesh Singh <kaleshsingh@google.com>
Date: Thu, 14 Jul 2022 23:10:10 -0700
Subject: [PATCH v4 01/18] arm64: stacktrace: Add shared header for common stack unwinding code
Message-Id: <20220715061027.1612149-2-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com, kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com, russell.king@oracle.com, vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com

In order to reuse the arm64 stack
unwinding logic for the nVHE hypervisor stack, move the common code to
a shared header (arch/arm64/include/asm/stacktrace/common.h).

The nVHE hypervisor cannot safely link against kernel code, so we make
use of the shared header to avoid duplicated logic later in this
series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/stacktrace.h        |  35 +------
 arch/arm64/include/asm/stacktrace/common.h | 105 +++++++++++++++++++++
 arch/arm64/kernel/stacktrace.c             |  57 -----------
 3 files changed, 106 insertions(+), 91 deletions(-)
 create mode 100644 arch/arm64/include/asm/stacktrace/common.h

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index aec9315bf156..79f455b37c84 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -8,52 +8,19 @@
 #include
 #include
 #include
-#include
 #include

 #include
 #include
 #include

-enum stack_type {
-	STACK_TYPE_UNKNOWN,
-	STACK_TYPE_TASK,
-	STACK_TYPE_IRQ,
-	STACK_TYPE_OVERFLOW,
-	STACK_TYPE_SDEI_NORMAL,
-	STACK_TYPE_SDEI_CRITICAL,
-	__NR_STACK_TYPES
-};
-
-struct stack_info {
-	unsigned long low;
-	unsigned long high;
-	enum stack_type type;
-};
+#include

 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
			   const char *loglvl);

 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

-static inline bool on_stack(unsigned long sp, unsigned long size,
-			    unsigned long low, unsigned long high,
-			    enum stack_type type, struct stack_info *info)
-{
-	if (!low)
-		return false;
-
-	if (sp < low || sp + size < sp || sp + size > high)
-		return false;
-
-	if (info) {
-		info->low = low;
-		info->high = high;
-		info->type = type;
-	}
-	return true;
-}
-
 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
				struct stack_info *info)
 {
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
new file mode 100644
index
000000000000..64ae4f6b06fe
--- /dev/null
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Common arm64 stack unwinder code.
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ */
+#ifndef __ASM_STACKTRACE_COMMON_H
+#define __ASM_STACKTRACE_COMMON_H
+
+#include
+#include
+#include
+
+enum stack_type {
+	STACK_TYPE_UNKNOWN,
+	STACK_TYPE_TASK,
+	STACK_TYPE_IRQ,
+	STACK_TYPE_OVERFLOW,
+	STACK_TYPE_SDEI_NORMAL,
+	STACK_TYPE_SDEI_CRITICAL,
+	__NR_STACK_TYPES
+};
+
+struct stack_info {
+	unsigned long low;
+	unsigned long high;
+	enum stack_type type;
+};
+
+/*
+ * A snapshot of a frame record or fp/lr register values, along with some
+ * accounting information necessary for robust unwinding.
+ *
+ * @fp:          The fp value in the frame record (or the real fp)
+ * @pc:          The lr value in the frame record (or the real lr)
+ *
+ * @stacks_done: Stacks which have been entirely unwound, for which it is no
+ *               longer valid to unwind to.
+ *
+ * @prev_fp:     The fp that pointed to this frame record, or a synthetic value
+ *               of 0. This is used to ensure that within a stack, each
+ *               subsequent frame record is at an increasing address.
+ * @prev_type:   The type of stack this frame record was on, or a synthetic
+ *               value of STACK_TYPE_UNKNOWN. This is used to detect a
+ *               transition from one stack to another.
+ *
+ * @kr_cur:      When KRETPROBES is selected, holds the kretprobe instance
+ *               associated with the most recently encountered replacement lr
+ *               value.
+ *
+ * @task:        The task being unwound.
+ */
+struct unwind_state {
+	unsigned long fp;
+	unsigned long pc;
+	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
+	unsigned long prev_fp;
+	enum stack_type prev_type;
+#ifdef CONFIG_KRETPROBES
+	struct llist_node *kr_cur;
+#endif
+	struct task_struct *task;
+};
+
+static inline bool on_stack(unsigned long sp, unsigned long size,
+			    unsigned long low, unsigned long high,
+			    enum stack_type type, struct stack_info *info)
+{
+	if (!low)
+		return false;
+
+	if (sp < low || sp + size < sp || sp + size > high)
+		return false;
+
+	if (info) {
+		info->low = low;
+		info->high = high;
+		info->type = type;
+	}
+	return true;
+}
+
+static inline void unwind_init_common(struct unwind_state *state,
+				      struct task_struct *task)
+{
+	state->task = task;
+#ifdef CONFIG_KRETPROBES
+	state->kr_cur = NULL;
+#endif
+
+	/*
+	 * Prime the first unwind.
+	 *
+	 * In unwind_next() we'll check that the FP points to a valid stack,
+	 * which can't be STACK_TYPE_UNKNOWN, and the first unwind will be
+	 * treated as a transition to whichever stack that happens to be. The
+	 * prev_fp value won't be used, but we set it to 0 such that it is
+	 * definitely not an accessible stack address.
+	 */
+	bitmap_zero(state->stacks_done, __NR_STACK_TYPES);
+	state->prev_fp = 0;
+	state->prev_type = STACK_TYPE_UNKNOWN;
+}
+
+#endif	/* __ASM_STACKTRACE_COMMON_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index fcaa151b81f1..94a5dd2ab8fd 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,63 +18,6 @@
 #include
 #include

-/*
- * A snapshot of a frame record or fp/lr register values, along with some
- * accounting information necessary for robust unwinding.
- *
- * @fp:          The fp value in the frame record (or the real fp)
- * @pc:          The lr value in the frame record (or the real lr)
- *
- * @stacks_done: Stacks which have been entirely unwound, for which it is no
- *               longer valid to unwind to.
- *
- * @prev_fp:     The fp that pointed to this frame record, or a synthetic value
- *               of 0. This is used to ensure that within a stack, each
- *               subsequent frame record is at an increasing address.
- * @prev_type:   The type of stack this frame record was on, or a synthetic
- *               value of STACK_TYPE_UNKNOWN. This is used to detect a
- *               transition from one stack to another.
- *
- * @kr_cur:      When KRETPROBES is selected, holds the kretprobe instance
- *               associated with the most recently encountered replacement lr
- *               value.
- *
- * @task:        The task being unwound.
- */
-struct unwind_state {
-	unsigned long fp;
-	unsigned long pc;
-	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
-	unsigned long prev_fp;
-	enum stack_type prev_type;
-#ifdef CONFIG_KRETPROBES
-	struct llist_node *kr_cur;
-#endif
-	struct task_struct *task;
-};
-
-static void unwind_init_common(struct unwind_state *state,
-			       struct task_struct *task)
-{
-	state->task = task;
-#ifdef CONFIG_KRETPROBES
-	state->kr_cur = NULL;
-#endif
-
-	/*
-	 * Prime the first unwind.
-	 *
-	 * In unwind_next() we'll check that the FP points to a valid stack,
-	 * which can't be STACK_TYPE_UNKNOWN, and the first unwind will be
-	 * treated as a transition to whichever stack that happens to be. The
-	 * prev_fp value won't be used, but we set it to 0 such that it is
-	 * definitely not an accessible stack address.
-	 */
-	bitmap_zero(state->stacks_done, __NR_STACK_TYPES);
-	state->prev_fp = 0;
-	state->prev_type = STACK_TYPE_UNKNOWN;
-}
-
 /*
  * Start an unwind from a pt_regs.
  *
--
2.37.0.170.g444d1eabd0-goog

From: Kalesh Singh <kaleshsingh@google.com>
Date: Thu, 14 Jul 2022 23:10:11 -0700
Subject: [PATCH v4 02/18] arm64: stacktrace: Factor out on_accessible_stack_common()
Message-Id: <20220715061027.1612149-3-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com

Move common on_accessible_stack checks to
stacktrace/common.h. This is used in the implementation of the nVHE
hypervisor unwinder later in this series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/stacktrace.h        |  8 ++------
 arch/arm64/include/asm/stacktrace/common.h | 18 ++++++++++++++++++
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 79f455b37c84..a4f8b84fb459 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -56,7 +56,6 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
			struct stack_info *info) { return false; }
 #endif

-
 /*
  * We can only safely access per-cpu stacks from current in a non-preemptible
  * context.
@@ -65,8 +64,8 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
				       unsigned long sp, unsigned long size,
				       struct stack_info *info)
 {
-	if (info)
-		info->type = STACK_TYPE_UNKNOWN;
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;

	if (on_task_stack(tsk, sp, size, info))
		return true;
@@ -74,12 +73,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
		return false;
	if (on_irq_stack(sp, size, info))
		return true;
-	if (on_overflow_stack(sp, size, info))
-		return true;
	if (on_sdei_stack(sp, size, info))
		return true;

	return false;
 }
-
 #endif	/* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 64ae4f6b06fe..f58b786460d3 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -62,6 +62,9 @@ struct unwind_state {
	struct task_struct *task;
 };

+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info);
+
 static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high,
			    enum stack_type
type, struct stack_info *info)
@@ -80,6 +83,21 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
	return true;
 }

+static inline bool on_accessible_stack_common(const struct task_struct *tsk,
+					      unsigned long sp,
+					      unsigned long size,
+					      struct stack_info *info)
+{
+	if (info)
+		info->type = STACK_TYPE_UNKNOWN;
+
+	/*
+	 * Both the kernel and nvhe hypervisor make use of
+	 * an overflow_stack
+	 */
+	return on_overflow_stack(sp, size, info);
+}
+
 static inline void unwind_init_common(struct unwind_state *state,
				      struct task_struct *task)
 {
--
2.37.0.170.g444d1eabd0-goog

From: Kalesh Singh <kaleshsingh@google.com>
Date: Thu, 14 Jul 2022 23:10:12 -0700
Subject: [PATCH v4 03/18] arm64: stacktrace: Factor out unwind_next_common()
Message-Id: <20220715061027.1612149-4-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com

Move common unwind_next logic to stacktrace/common.h. This allows
reusing the code in the implementation of the nVHE hypervisor stack
unwinder, later in this series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/stacktrace/common.h | 50 ++++++++++++++++++++++
 arch/arm64/kernel/stacktrace.c             | 41 ++----------------
 2 files changed, 54 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index f58b786460d3..0c5cbfdb56b5 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -65,6 +65,10 @@ struct unwind_state {
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
				     struct stack_info *info);

+static inline bool on_accessible_stack(const struct task_struct *tsk,
+				       unsigned long sp, unsigned long size,
+				       struct stack_info *info);
+
 static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high,
			    enum stack_type type, struct stack_info *info)
@@ -120,4 +124,50 @@ static inline void unwind_init_common(struct unwind_state *state,
	state->prev_type = STACK_TYPE_UNKNOWN;
 }

+static inline int unwind_next_common(struct unwind_state *state,
+				     struct stack_info *info)
+{
+	struct task_struct *tsk = state->task;
+	unsigned long fp = state->fp;
+
+	if (fp & 0x7)
+		return -EINVAL;
+
+	if
(!on_accessible_stack(tsk, fp, 16, info))
+		return -EINVAL;
+
+	if (test_bit(info->type, state->stacks_done))
+		return -EINVAL;
+
+	/*
+	 * As stacks grow downward, any valid record on the same stack must be
+	 * at a strictly higher address than the prior record.
+	 *
+	 * Stacks can nest in several valid orders, e.g.
+	 *
+	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
+	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+	 *
+	 * ... but the nesting itself is strict. Once we transition from one
+	 * stack to another, it's never valid to unwind back to that first
+	 * stack.
+	 */
+	if (info->type == state->prev_type) {
+		if (fp <= state->prev_fp)
+			return -EINVAL;
+	} else {
+		__set_bit(state->prev_type, state->stacks_done);
+	}
+
+	/*
+	 * Record this frame record's values and location. The prev_fp and
+	 * prev_type are only meaningful to the next unwind_next() invocation.
+	 */
+	state->fp = READ_ONCE(*(unsigned long *)(fp));
+	state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
+	state->prev_fp = fp;
+	state->prev_type = info->type;
+
+	return 0;
+}
 #endif	/* __ASM_STACKTRACE_COMMON_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 94a5dd2ab8fd..834851939364 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -81,48 +81,15 @@ static int notrace unwind_next(struct unwind_state *state)
	struct task_struct *tsk = state->task;
	unsigned long fp = state->fp;
	struct stack_info info;
+	int err;

	/* Final frame; nothing to unwind */
	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
		return -ENOENT;

-	if (fp & 0x7)
-		return -EINVAL;
-
-	if (!on_accessible_stack(tsk, fp, 16, &info))
-		return -EINVAL;
-
-	if (test_bit(info.type, state->stacks_done))
-		return -EINVAL;
-
-	/*
-	 * As stacks grow downward, any valid record on the same stack must be
-	 * at a strictly higher address than the prior record.
-	 *
-	 * Stacks can nest in several valid orders, e.g.
-	 *
-	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
-	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
-	 *
-	 * ... but the nesting itself is strict. Once we transition from one
-	 * stack to another, it's never valid to unwind back to that first
-	 * stack.
-	 */
-	if (info.type == state->prev_type) {
-		if (fp <= state->prev_fp)
-			return -EINVAL;
-	} else {
-		__set_bit(state->prev_type, state->stacks_done);
-	}
-
-	/*
-	 * Record this frame record's values and location. The prev_fp and
-	 * prev_type are only meaningful to the next unwind_next() invocation.
-	 */
-	state->fp = READ_ONCE(*(unsigned long *)(fp));
-	state->pc = READ_ONCE(*(unsigned long *)(fp + 8));
-	state->prev_fp = fp;
-	state->prev_type = info.type;
+	err = unwind_next_common(state, &info);
+	if (err)
+		return err;

	state->pc = ptrauth_strip_insn_pac(state->pc);

--
2.37.0.170.g444d1eabd0-goog

From: Kalesh Singh <kaleshsingh@google.com>
Date: Thu, 14 Jul 2022 23:10:13 -0700
Subject: [PATCH v4 04/18] arm64: stacktrace: Handle frame pointer from different address spaces
Message-Id: <20220715061027.1612149-5-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com

The unwinder code is made reusable so that it can be used to unwind
various types of stacks. One use case is unwinding the nVHE hyp stack
from the host (EL1) in non-protected mode. This means that the unwinder
must be able to translate HYP stack addresses to kernel addresses.

Add a callback (stack_trace_translate_fp_fn) to allow specifying the
translation function.

Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/stacktrace/common.h | 26 ++++++++++++++++++++--
 arch/arm64/kernel/stacktrace.c             |  2 +-
 2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 0c5cbfdb56b5..5f5d74a286f3 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -123,9 +123,22 @@ static inline void unwind_init_common(struct unwind_state *state,
	state->prev_fp = 0;
	state->prev_type = STACK_TYPE_UNKNOWN;
 }
+/**
+ * stack_trace_translate_fp_fn() - Translates a non-kernel frame pointer to
+ *                                 a kernel address.
+ *
+ * @fp:   the frame pointer to be updated to its kernel address.
+ * @type: the stack type associated with frame pointer @fp
+ *
+ * Returns true on success and @fp is updated to the corresponding
+ * kernel virtual address; otherwise returns false.
+ */
+typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
+					    enum stack_type type);

 static inline int unwind_next_common(struct unwind_state *state,
-				     struct stack_info *info)
+				     struct stack_info *info,
+				     stack_trace_translate_fp_fn translate_fp)
 {
	struct task_struct *tsk = state->task;
	unsigned long fp = state->fp;
@@ -159,13 +172,22 @@ static inline int unwind_next_common(struct unwind_state *state,
		__set_bit(state->prev_type, state->stacks_done);
	}

+	/* Record fp as prev_fp before attempting to get the next fp */
+	state->prev_fp = fp;
+
+	/*
+	 * If fp is not from the current address space perform the necessary
+	 * translation before dereferencing it to get the next fp.
+	 */
+	if (translate_fp && !translate_fp(&fp, info->type))
+		return -EINVAL;
+
	/*
	 * Record this frame record's values and location. The prev_fp and
	 * prev_type are only meaningful to the next unwind_next() invocation.
*/ state->fp =3D READ_ONCE(*(unsigned long *)(fp)); state->pc =3D READ_ONCE(*(unsigned long *)(fp + 8)); - state->prev_fp =3D fp; state->prev_type =3D info->type; =20 return 0; diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index 834851939364..eef3cf6bf2d7 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -87,7 +87,7 @@ static int notrace unwind_next(struct unwind_state *state) if (fp =3D=3D (unsigned long)task_pt_regs(tsk)->stackframe) return -ENOENT; =20 - err =3D unwind_next_common(state, &info); + err =3D unwind_next_common(state, &info, NULL); if (err) return err; =20 --=20 2.37.0.170.g444d1eabd0-goog From nobody Sat Apr 18 09:23:29 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1BC16C433EF for ; Fri, 15 Jul 2022 06:11:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229688AbiGOGLZ (ORCPT ); Fri, 15 Jul 2022 02:11:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54434 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230283AbiGOGLO (ORCPT ); Fri, 15 Jul 2022 02:11:14 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8340466AE0 for ; Thu, 14 Jul 2022 23:11:13 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id t10-20020a5b07ca000000b0066ec1bb6e2cso3281099ybq.14 for ; Thu, 14 Jul 2022 23:11:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=FYBq07VQ4DkWTq1d4ACftXBrb374xXzNK5S2gkLiMO4=; b=NLk/3Rkt1n9uJrc345lFPf00C8qzQsyt6y0eZMNx0g5v0lsiCyWYkQx9+2s0nx+FLY 
From: Kalesh Singh
Date: Thu, 14 Jul 2022 23:10:14 -0700
Message-Id: <20220715061027.1612149-6-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 05/18] arm64: stacktrace: Factor out common unwind()
Move unwind() to stacktrace/common.h and, as a result, move the kernel's
unwind_next() to asm/stacktrace.h. This allows reusing unwind() in the
implementation of the nVHE HYP stack unwinder, later in the series.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/stacktrace.h        | 51 ++++++++++++++++
 arch/arm64/include/asm/stacktrace/common.h | 19 ++++++
 arch/arm64/kernel/stacktrace.c             | 67 ----------------------
 3 files changed, 70 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index a4f8b84fb459..4fa07f0f913d 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -11,6 +11,7 @@
 #include
 
 #include
+#include
 #include
 #include
 
@@ -78,4 +79,54 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 
 	return false;
 }
+
+/*
+ * Unwind from one frame record (A) to the next frame record (B).
+ *
+ * We terminate early if the location of B indicates a malformed chain of frame
+ * records (e.g. a cycle), determined based on the location and fp value of A
+ * and the location (but not the fp value) of B.
+ */
+static inline int notrace unwind_next(struct unwind_state *state)
+{
+	struct task_struct *tsk = state->task;
+	unsigned long fp = state->fp;
+	struct stack_info info;
+	int err;
+
+	/* Final frame; nothing to unwind */
+	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+		return -ENOENT;
+
+	err = unwind_next_common(state, &info, NULL);
+	if (err)
+		return err;
+
+	state->pc = ptrauth_strip_insn_pac(state->pc);
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	if (tsk->ret_stack &&
+	    (state->pc == (unsigned long)return_to_handler)) {
+		unsigned long orig_pc;
+		/*
+		 * This is a case where function graph tracer has
+		 * modified a return address (LR) in a stack frame
+		 * to hook a function return.
+		 * So replace it to an original value.
+		 */
+		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
+						(void *)state->fp);
+		if (WARN_ON_ONCE(state->pc == orig_pc))
+			return -EINVAL;
+		state->pc = orig_pc;
+	}
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+#ifdef CONFIG_KRETPROBES
+	if (is_kretprobe_trampoline(state->pc))
+		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
+#endif
+
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
 #endif /* __ASM_STACKTRACE_H */

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 5f5d74a286f3..f86efe71479d 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 #include
 
 enum stack_type {
@@ -69,6 +70,8 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 				       unsigned long sp, unsigned long size,
 				       struct stack_info *info);
 
+static inline int unwind_next(struct unwind_state *state);
+
 static inline bool on_stack(unsigned long sp, unsigned long size,
 			    unsigned long low, unsigned long high,
 			    enum stack_type type, struct stack_info *info)
@@ -192,4 +195,20 @@ static inline int unwind_next_common(struct unwind_state
*state,
 
 	return 0;
 }
+
+static inline void notrace unwind(struct unwind_state *state,
+				  stack_trace_consume_fn consume_entry,
+				  void *cookie)
+{
+	while (1) {
+		int ret;
+
+		if (!consume_entry(cookie, state->pc))
+			break;
+		ret = unwind_next(state);
+		if (ret < 0)
+			break;
+	}
+}
+NOKPROBE_SYMBOL(unwind);
 #endif /* __ASM_STACKTRACE_COMMON_H */

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index eef3cf6bf2d7..9fa60ee48499 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -7,14 +7,12 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 
 #include
-#include
 #include
 #include
 
@@ -69,71 +67,6 @@ static inline void unwind_init_from_task(struct unwind_state *state,
 	state->pc = thread_saved_pc(task);
 }
 
-/*
- * Unwind from one frame record (A) to the next frame record (B).
- *
- * We terminate early if the location of B indicates a malformed chain of frame
- * records (e.g. a cycle), determined based on the location and fp value of A
- * and the location (but not the fp value) of B.
- */
-static int notrace unwind_next(struct unwind_state *state)
-{
-	struct task_struct *tsk = state->task;
-	unsigned long fp = state->fp;
-	struct stack_info info;
-	int err;
-
-	/* Final frame; nothing to unwind */
-	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-		return -ENOENT;
-
-	err = unwind_next_common(state, &info, NULL);
-	if (err)
-		return err;
-
-	state->pc = ptrauth_strip_insn_pac(state->pc);
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	if (tsk->ret_stack &&
-	    (state->pc == (unsigned long)return_to_handler)) {
-		unsigned long orig_pc;
-		/*
-		 * This is a case where function graph tracer has
-		 * modified a return address (LR) in a stack frame
-		 * to hook a function return.
-		 * So replace it to an original value.
-		 */
-		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
-						(void *)state->fp);
-		if (WARN_ON_ONCE(state->pc == orig_pc))
-			return -EINVAL;
-		state->pc = orig_pc;
-	}
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-#ifdef CONFIG_KRETPROBES
-	if (is_kretprobe_trampoline(state->pc))
-		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
-#endif
-
-	return 0;
-}
-NOKPROBE_SYMBOL(unwind_next);
-
-static void notrace unwind(struct unwind_state *state,
-			   stack_trace_consume_fn consume_entry, void *cookie)
-{
-	while (1) {
-		int ret;
-
-		if (!consume_entry(cookie, state->pc))
-			break;
-		ret = unwind_next(state);
-		if (ret < 0)
-			break;
-	}
-}
-NOKPROBE_SYMBOL(unwind);
-
 static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
 	char *loglvl = arg;
-- 
2.37.0.170.g444d1eabd0-goog
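[Editorial note] The unwind() loop factored out above drives any concrete unwind_next() through a consume_entry callback. A self-contained user-space sketch of the same control flow (the toy unwind_next(), the two-element frame records, and count_entry are all hypothetical; the real kernel state carries more fields):

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal state mirroring the shape of struct unwind_state. */
struct unwind_state {
	unsigned long fp;
	unsigned long pc;
};

typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long where);

/* Toy unwind_next(): follow a frame record [next_fp, pc]; fp == 0 ends. */
int toy_unwind_next(struct unwind_state *state)
{
	unsigned long *frame = (unsigned long *)state->fp;

	if (!frame)
		return -1;
	state->fp = frame[0];
	state->pc = frame[1];
	return 0;
}

/* Same loop structure as the common unwind(): consume, then step. */
void toy_unwind(struct unwind_state *state,
		stack_trace_consume_fn consume_entry, void *cookie)
{
	while (1) {
		if (!consume_entry(cookie, state->pc))
			break;
		if (toy_unwind_next(state) < 0)
			break;
	}
}

/* Example consumer: count the entries it is handed. */
bool count_entry(void *cookie, unsigned long where)
{
	(void)where;
	++*(int *)cookie;
	return true;
}
```

Because the loop only talks to unwind_next() and consume_entry(), the same body works whether unwind_next() walks kernel stacks or (later in the series) nVHE HYP stacks.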
From: Kalesh Singh
Date: Thu, 14 Jul 2022 23:10:15 -0700
Message-Id: <20220715061027.1612149-7-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 06/18] arm64: stacktrace: Add description of stacktrace/common.h
Add a brief description of how to use stacktrace/common.h to implement
a stack unwinder.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/stacktrace/common.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index f86efe71479d..b362086f4c70 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -2,6 +2,14 @@
 /*
  * Common arm64 stack unwinder code.
  *
+ * To implement a new arm64 stack unwinder:
+ *	1) Include this header
+ *
+ *	2) Provide implementations for the following functions:
+ *		- on_overflow_stack()
+ *		- on_accessible_stack()
+ *		- unwind_next()
+ *
  * Copyright (C) 2012 ARM Ltd.
 */
 #ifndef __ASM_STACKTRACE_COMMON_H
-- 
2.37.0.170.g444d1eabd0-goog
From: Kalesh Singh
Date: Thu, 14 Jul 2022 23:10:16 -0700
Message-Id: <20220715061027.1612149-8-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 07/18] KVM: arm64: On stack overflow switch to hyp overflow_stack
On hyp stack overflow, switch to a 16-byte aligned secondary stack.
This provides stack space to better handle overflows, and is used in
a subsequent patch to dump the hypervisor stacktrace.

Signed-off-by: Kalesh Singh
Reviewed-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/Makefile     |  2 +-
 arch/arm64/kvm/hyp/nvhe/host.S       |  9 ++-------
 arch/arm64/kvm/hyp/nvhe/stacktrace.c | 11 +++++++++++
 3 files changed, 14 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/stacktrace.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f9fe4dc21b1f..524e7dad5739 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
-	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o
+	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ea6a397b64a6..b6c0188c4b35 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -177,13 +177,8 @@ SYM_FUNC_END(__host_hvc)
	b	hyp_panic
 
 .L__hyp_sp_overflow\@:
-	/*
-	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
-	 * This corrupts the stack but is ok, since we won't be attempting
-	 * any unwinding here.
-	 */
-	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
-	mov	sp, x0
+	/* Switch to the overflow stack */
+	adr_this_cpu sp, overflow_stack + OVERFLOW_STACK_SIZE, x0
 
	b	hyp_panic_bad_stack
	ASM_BUG()

diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
new file mode 100644
index 000000000000..a3d5b34e1249
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM nVHE hypervisor stack tracing support.
+ *
+ * Copyright (C) 2022 Google LLC
+ */
+#include
+#include
+
+DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
+	__aligned(16);
-- 
2.37.0.170.g444d1eabd0-goog
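[Editorial note] The `adr_this_cpu sp, overflow_stack + OVERFLOW_STACK_SIZE, x0` line above points SP one past the end of the per-CPU array, because AArch64 uses a full-descending stack. A small user-space sketch of that address arithmetic (overflow_stack_top is a hypothetical helper; the array mimics the per-CPU variable declared in the new stacktrace.c):

```c
#include <stdint.h>
#include <stdio.h>

#define OVERFLOW_STACK_SIZE 4096

/* Stand-in for the per-CPU overflow stack, 16-byte aligned as on arm64. */
unsigned long overflow_stack[OVERFLOW_STACK_SIZE / sizeof(long)]
	__attribute__((aligned(16)));

/*
 * Full-descending stack: the initial SP is the address one byte past the
 * highest element, i.e. base + size, and must stay 16-byte aligned per
 * the AAPCS64 stack constraint.
 */
uintptr_t overflow_stack_top(void)
{
	return (uintptr_t)overflow_stack + OVERFLOW_STACK_SIZE;
}
```

The panic path can then push frames downward from this top without touching the corrupted primary stack, which is what makes a later stacktrace dump possible.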
From: Kalesh Singh
Date: Thu, 14 Jul 2022 23:10:17 -0700
Message-Id: <20220715061027.1612149-9-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 08/18] KVM: arm64: Add PROTECTED_NVHE_STACKTRACE Kconfig
This option can be used to disable stacktraces for the protected KVM
nVHE hypervisor, in order to save the associated memory cost. It is
disabled by default, since protected KVM is currently not widely used
on platforms other than Android.

Signed-off-by: Kalesh Singh
---
 arch/arm64/kvm/Kconfig | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8a5fbbf084df..1edab6f8a3b8 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -46,6 +46,21 @@ menuconfig KVM
 
	  If unsure, say N.
 
+config PROTECTED_NVHE_STACKTRACE
+	bool "Protected KVM hypervisor stacktraces"
+	depends on KVM
+	default n
+	help
+	  Say Y here to enable pKVM hypervisor stacktraces on hyp_panic()
+
+	  If you are not using protected nVHE (pKVM), say N.
+
+	  If using protected nVHE mode, but cannot afford the associated
+	  memory cost (less than 0.75 page per CPU) of pKVM stacktraces,
+	  say N.
+
+	  If unsure, say N.
+
 config NVHE_EL2_DEBUG
	bool "Debug mode for non-VHE EL2 object"
	depends on KVM
-- 
2.37.0.170.g444d1eabd0-goog
From: Kalesh Singh
Date: Thu, 14 Jul 2022 23:10:18 -0700
Message-Id: <20220715061027.1612149-10-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 09/18] KVM: arm64: Allocate shared pKVM hyp stacktrace buffers
In protected nVHE mode the host cannot directly access hypervisor
memory, so we will dump the hypervisor stacktrace to a buffer shared
with the host.

The minimum size of the buffer required, assuming a minimum frame size
of [x29, x30] (2 * sizeof(long)), is half the combined size of the
hypervisor and overflow stacks, plus an additional entry to delimit the
end of the stacktrace.

The stacktrace buffers are used later in the series to dump the nVHE
hypervisor stacktrace when using protected mode.

Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/memory.h      | 7 +++++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++
 2 files changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0af70d9abede..28a4893d4b84 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -113,6 +113,13 @@
 
 #define OVERFLOW_STACK_SIZE	SZ_4K
 
+/*
+ * With the minimum frame size of [x29, x30], exactly half the combined
+ * sizes of the hyp and overflow stacks is needed to save the unwound
+ * stacktrace; plus an additional entry to delimit the end.
+ */
+#define NVHE_STACKTRACE_SIZE	((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
+
 /*
  * Alignment of kernel segments (e.g. .text, .data).
 *

diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index a3d5b34e1249..69e65b457f1c 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,3 +9,7 @@
 
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
	__aligned(16);
+
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
-- 
2.37.0.170.g444d1eabd0-goog
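[Editorial note] The NVHE_STACKTRACE_SIZE sizing argument from the patch above can be checked with a few lines of arithmetic: each minimal frame record [x29, x30] occupies 16 bytes of stack but contributes only one sizeof(long) entry to the trace, so at most half the stack bytes become trace bytes, plus one delimiter entry. A sketch (nvhe_stacktrace_size is a hypothetical helper; 4K pages are assumed for the commented numbers):

```c
#include <stdio.h>

#define SZ_4K			4096UL
#define TOY_PAGE_SIZE		SZ_4K	/* assumption: 4K pages */
#define TOY_OVERFLOW_STACK_SIZE	SZ_4K	/* matches OVERFLOW_STACK_SIZE */

/*
 * Mirrors NVHE_STACKTRACE_SIZE: half the combined hyp + overflow stack
 * sizes (one 8-byte trace entry per 16-byte minimal frame record),
 * plus one entry to delimit the end of the trace.
 */
unsigned long nvhe_stacktrace_size(unsigned long hyp_stack_size,
				   unsigned long overflow_stack_size)
{
	return (overflow_stack_size + hyp_stack_size) / 2 + sizeof(long);
}
```

With a one-page hyp stack and one-page overflow stack this comes to 4096 + sizeof(long) bytes, i.e. just over one page per CPU, consistent with the "half the combined size ... plus an additional entry" wording in the commit message.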
From: Kalesh Singh
Date: Thu, 14 Jul 2022 23:10:19 -0700
Message-Id: <20220715061027.1612149-11-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 10/18] KVM: arm64: Stub implementation of pKVM HYP stack unwinder
mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add some stub implementations of protected nVHE stack unwinder, for building. These are implemented later in this series. Signed-off-by: Kalesh Singh --- arch/arm64/include/asm/stacktrace/nvhe.h | 57 ++++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/stacktrace.c | 3 +- 2 files changed, 58 insertions(+), 2 deletions(-) create mode 100644 arch/arm64/include/asm/stacktrace/nvhe.h diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/= asm/stacktrace/nvhe.h new file mode 100644 index 000000000000..1eac4e57f2ae --- /dev/null +++ b/arch/arm64/include/asm/stacktrace/nvhe.h @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * KVM nVHE hypervisor stack tracing support. + * + * The unwinder implementation depends on the nVHE mode: + * + * 1) pKVM (protected nVHE) mode - the host cannot directly access + * the HYP memory. The stack is unwinded in EL2 and dumped to a shared + * buffer where the host can read and print the stacktrace. 
+ *
+ * Copyright (C) 2022 Google LLC
+ */
+#ifndef __ASM_STACKTRACE_NVHE_H
+#define __ASM_STACKTRACE_NVHE_H
+
+#include
+
+static inline bool on_accessible_stack(const struct task_struct *tsk,
+				       unsigned long sp, unsigned long size,
+				       struct stack_info *info)
+{
+	return false;
+}
+
+/*
+ * Protected nVHE HYP stack unwinder
+ */
+#ifdef __KVM_NVHE_HYPERVISOR__
+
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	return false;
+}
+
+static int notrace unwind_next(struct unwind_state *state)
+{
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	return false;
+}
+
+static int notrace unwind_next(struct unwind_state *state)
+{
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+#endif /* __KVM_NVHE_HYPERVISOR__ */
+#endif /* __ASM_STACKTRACE_NVHE_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 69e65b457f1c..96c8b93320eb 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -4,8 +4,7 @@
  *
  * Copyright (C) 2022 Google LLC
  */
-#include
-#include
+#include
 
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack) __aligned(16);
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:20 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-12-kaleshsingh@google.com>
Subject: [PATCH v4 11/18] KVM: arm64: Stub implementation of non-protected nVHE HYP stack unwinder
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, russell.king@oracle.com,
 vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com,
 ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

Add stub implementations of the non-protected nVHE stack unwinder so that
the series keeps building; they are replaced with real implementations later
in the series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/stacktrace/nvhe.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 1eac4e57f2ae..36cf7858ddd8 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -8,6 +8,12 @@
  *    the HYP memory. The stack is unwound in EL2 and dumped to a shared
  *    buffer where the host can read and print the stacktrace.
  *
+ * 2) Non-protected nVHE mode - the host can directly access the
+ *    HYP stack pages and unwind the HYP stack in EL1. This saves having
+ *    to allocate shared buffers for the host to read the unwound
+ *    stacktrace.
+ *
+ *
  * Copyright (C) 2022 Google LLC
  */
 #ifndef __ASM_STACKTRACE_NVHE_H
@@ -53,5 +59,21 @@ static int notrace unwind_next(struct unwind_state *state)
 NOKPROBE_SYMBOL(unwind_next);
 #endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
 
+/*
+ * Non-protected nVHE HYP stack unwinder
+ */
+#else /* !__KVM_NVHE_HYPERVISOR__ */
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	return false;
+}
+
+static int notrace unwind_next(struct unwind_state *state)
+{
+	return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+
 #endif /* __KVM_NVHE_HYPERVISOR__ */
 #endif /* __ASM_STACKTRACE_NVHE_H */
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:21 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-13-kaleshsingh@google.com>
Subject: [PATCH v4 12/18] KVM: arm64: Save protected-nVHE (pKVM) hyp stacktrace
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, russell.king@oracle.com,
 vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com,
 ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

In protected nVHE mode, the host cannot access the hypervisor's privately
owned memory. The hypervisor also aims to remain simple to reduce the
attack surface, and provides no printk support.

For these reasons, the approach taken to provide hypervisor stacktraces in
protected mode is:

 1) Unwind and save the hyp stack addresses in EL2 to a shared buffer
    with the host (done in this patch).

 2) Delegate the dumping and symbolization of the addresses to the host
    in EL1 (later patch in the series).

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/stacktrace/nvhe.h | 18 ++++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     | 70 ++++++++++++++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 36cf7858ddd8..456a6ae08433 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -21,6 +21,22 @@
 
 #include
 
+/**
+ * kvm_nvhe_unwind_init - Start an unwind from the given nVHE HYP fp and pc
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ */
+static __always_inline void kvm_nvhe_unwind_init(struct unwind_state *state,
+						 unsigned long fp,
+						 unsigned long pc)
+{
+	unwind_init_common(state, NULL);
+
+	state->fp = fp;
+	state->pc = pc;
+}
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
 				       unsigned long sp, unsigned long size,
 				       struct stack_info *info)
@@ -33,6 +49,8 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
  */
 #ifdef __KVM_NVHE_HYPERVISOR__
 
+extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 96c8b93320eb..832a536e440f 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -11,4 +11,74 @@ DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+
+/**
+ * pkvm_save_backtrace_entry - Saves a protected nVHE HYP stacktrace entry
+ *
+ * @arg   : the position of the entry in the stacktrace buffer
+ * @where : the program counter corresponding to the stack frame
+ *
+ * Save the return address of a stack frame to the shared stacktrace buffer.
+ * The host can access this shared buffer from EL1 to dump the backtrace.
+ */
+static bool pkvm_save_backtrace_entry(void *arg, unsigned long where)
+{
+	unsigned long **stacktrace_pos = (unsigned long **)arg;
+	unsigned long stacktrace_start, stacktrace_end;
+
+	stacktrace_start = (unsigned long)this_cpu_ptr(pkvm_stacktrace);
+	stacktrace_end = stacktrace_start + NVHE_STACKTRACE_SIZE - (2 * sizeof(long));
+
+	if ((unsigned long) *stacktrace_pos > stacktrace_end)
+		return false;
+
+	/* Save the entry to the current pos in stacktrace buffer */
+	**stacktrace_pos = where;
+
+	/* A zero entry delimits the end of the stacktrace. */
+	*(*stacktrace_pos + 1) = 0UL;
+
+	/* Increment the current pos */
+	++*stacktrace_pos;
+
+	return true;
+}
+
+/**
+ * pkvm_save_backtrace - Saves the protected nVHE HYP stacktrace
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the unwound stack addresses to the shared stacktrace buffer.
+ * The host can access this shared buffer from EL1 to dump the backtrace.
+ */
+static void pkvm_save_backtrace(unsigned long fp, unsigned long pc)
+{
+	void *stacktrace_start = (void *)this_cpu_ptr(pkvm_stacktrace);
+	struct unwind_state state;
+
+	kvm_nvhe_unwind_init(&state, fp, pc);
+
+	unwind(&state, pkvm_save_backtrace_entry, &stacktrace_start);
+}
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static void pkvm_save_backtrace(unsigned long fp, unsigned long pc)
+{
+}
 #endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+/**
+ * kvm_nvhe_prepare_backtrace - prepare to dump the nVHE backtrace
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Saves the information needed by the host to dump the nVHE hypervisor
+ * backtrace.
+ */
+void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
+{
+	if (is_protected_kvm_enabled())
+		pkvm_save_backtrace(fp, pc);
+}
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:22 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-14-kaleshsingh@google.com>
Subject: [PATCH v4 13/18] KVM: arm64: Prepare non-protected nVHE hypervisor stacktrace
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, russell.king@oracle.com,
 vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com,
 ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

In non-protected nVHE mode (non-pKVM) the host can directly access
hypervisor memory, and unwinding of the hypervisor stacktrace is done from
EL1 to save on memory for shared buffers.

To unwind the hypervisor stack from EL1 the host needs to know the starting
point for the unwind, plus information that lets it translate hypervisor
stack addresses to the corresponding kernel addresses.

This patch sets up this bookkeeping; it is put to use later in the series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/kvm_asm.h         | 16 ++++++++++++++++
 arch/arm64/include/asm/stacktrace/nvhe.h |  4 ++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     | 24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..0ae9d12c2b5a 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -176,6 +176,22 @@ struct kvm_nvhe_init_params {
 	unsigned long vtcr;
 };
 
+/**
+ * Used by the host in EL1 to dump the nVHE hypervisor backtrace on
+ * hyp_panic() in non-protected mode.
+ *
+ * @stack_base:          hyp VA of the hyp_stack base.
+ * @overflow_stack_base: hyp VA of the hyp_overflow_stack base.
+ * @fp:                  hyp FP where the backtrace begins.
+ * @pc:                  hyp PC where the backtrace begins.
+ */
+struct kvm_nvhe_stacktrace_info {
+	unsigned long stack_base;
+	unsigned long overflow_stack_base;
+	unsigned long fp;
+	unsigned long pc;
+};
+
 /* Translate a kernel address @ptr into its equivalent linear mapping */
 #define kvm_ksym_ref(ptr)						\
 	({								\
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 456a6ae08433..1aadfd8d7ac9 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_STACKTRACE_NVHE_H
 #define __ASM_STACKTRACE_NVHE_H
 
+#include
 #include
 
 /**
@@ -49,6 +50,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
  */
 #ifdef __KVM_NVHE_HYPERVISOR__
 
+DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 832a536e440f..315eb41c37a2 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,6 +9,28 @@
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack) __aligned(16);
 
+DEFINE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+
+/**
+ * hyp_prepare_backtrace - Prepare non-protected nVHE backtrace.
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the information needed by the host to unwind the non-protected
+ * nVHE hypervisor stack in EL1.
+ */
+static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info = this_cpu_ptr(&kvm_stacktrace_info);
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+
+	stacktrace_info->stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
+	stacktrace_info->overflow_stack_base = (unsigned long)this_cpu_ptr(overflow_stack);
+	stacktrace_info->fp = fp;
+	stacktrace_info->pc = pc;
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
 
@@ -81,4 +103,6 @@ void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_save_backtrace(fp, pc);
+	else
+		hyp_prepare_backtrace(fp, pc);
 }
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:23 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-15-kaleshsingh@google.com>
Subject: [PATCH v4 14/18] KVM: arm64: Implement protected nVHE hyp stack unwinder
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org, madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, russell.king@oracle.com,
 vincenzo.frascino@arm.com, mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com, yuzenghui@huawei.com,
 ardb@kernel.org, oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

Implement the common framework needed for unwind() to work in the protected
nVHE context:
 - on_accessible_stack()
 - on_overflow_stack()
 - unwind_next()

The protected nVHE unwind() is used to unwind and save the hyp stack
addresses to the shared stacktrace buffer. The host reads the entries in
this buffer, then symbolizes and dumps the stacktrace (later patch in the
series).

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/stacktrace/common.h |  2 ++
 arch/arm64/include/asm/stacktrace/nvhe.h   | 34 ++++++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index b362086f4c70..cf442e67dccd 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -27,6 +27,7 @@ enum stack_type {
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+	STACK_TYPE_HYP,
 	__NR_STACK_TYPES
 };
 
@@ -171,6 +172,7 @@ static inline int unwind_next_common(struct unwind_state *state,
  *
  * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
  * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+ * HYP -> OVERFLOW
  *
  * ... but the nesting itself is strict. Once we transition from one
  * stack to another, it's never valid to unwind back to that first
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 1aadfd8d7ac9..c7c8ac889ec1 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -38,10 +38,19 @@ static __always_inline void kvm_nvhe_unwind_init(struct unwind_state *state,
 	state->pc = pc;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info);
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
 				       unsigned long sp, unsigned long size,
 				       struct stack_info *info)
 {
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;
+
+	if (on_hyp_stack(sp, size, info))
+		return true;
+
 	return false;
 }
 
@@ -59,12 +68,27 @@ extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
 {
-	return false;
+	unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long high = params->stack_hyp_va;
+	unsigned long low = high - PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, NULL);
 }
 NOKPROBE_SYMBOL(unwind_next);
 #else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
@@ -74,6 +98,12 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 	return false;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	return false;
+}
+
 static int notrace unwind_next(struct unwind_state *state)
 {
 	return 0;
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 09:23:29 2026
From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:24 -0700
Message-Id: <20220715061027.1612149-16-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 15/18] KVM: arm64: Implement non-protected nVHE hyp stack unwinder
From: Kalesh Singh

Implements the common framework necessary for unwind() to work
for non-protected nVHE mode:

 - on_accessible_stack()
 - on_overflow_stack()
 - unwind_next()

Non-protected nVHE unwind() is used to unwind and dump the hypervisor
stacktrace by the host in EL1.

Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/stacktrace/nvhe.h | 67 +++++++++++++++++++++++-
 arch/arm64/kvm/arm.c                     |  2 +-
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index c7c8ac889ec1..c3f94b10f8f0 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -115,15 +115,78 @@ NOKPROBE_SYMBOL(unwind_next);
  * Non-protected nVHE HYP stack unwinder
  */
 #else	/* !__KVM_NVHE_HYPERVISOR__ */
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack);
+DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+
+/**
+ * kvm_nvhe_stack_kern_va - Convert KVM nVHE HYP stack addresses to kernel VAs
+ *
+ * The nVHE hypervisor stack is mapped in the flexible 'private' VA range, to
+ * allow for guard pages below the stack. Consequently, the fixed offset address
+ * translation macros won't work here.
+ *
+ * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
+ * stack base.
+ *
+ * Returns true on success and updates @addr to its corresponding kernel VA;
+ * otherwise returns false.
+ */
+static inline bool kvm_nvhe_stack_kern_va(unsigned long *addr,
+					  enum stack_type type)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	switch (type) {
+	case STACK_TYPE_HYP:
+		kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
+		hyp_base = (unsigned long)stacktrace_info->stack_base;
+		break;
+	case STACK_TYPE_OVERFLOW:
+		kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack);
+		hyp_base = (unsigned long)stacktrace_info->overflow_stack_base;
+		break;
+	default:
+		return false;
+	}
+
+	hyp_offset = *addr - hyp_base;
+
+	*addr = kern_base + hyp_offset;
+
+	return true;
+}
+
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
 {
-	return false;
+	struct kvm_nvhe_stacktrace_info *stacktrace_info
+		= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+	unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base;
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info
+		= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+	unsigned long low = (unsigned long)stacktrace_info->stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, kvm_nvhe_stack_kern_va);
 }
 NOKPROBE_SYMBOL(unwind_next);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a0188144a122..6a64293108c5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -49,7 +49,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
-- 
2.37.0.170.g444d1eabd0-goog
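The core of `kvm_nvhe_stack_kern_va()` above is an offset-preserving translation between two aliases of the same stack page. A minimal userspace sketch of that idea, with a hypothetical `struct stack_bases` standing in for the per-CPU kernel/hypervisor base pair the kernel code reads:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical stand-in for the bases kvm_nvhe_stack_kern_va() looks up:
 * the kernel VA and the hypervisor (private-range) VA of one stack page.
 */
struct stack_bases {
	unsigned long kern_base;
	unsigned long hyp_base;
};

/*
 * Offset-preserving translation: the hypervisor stack lives in the
 * flexible private VA range, so the only usable invariant is that an
 * address keeps its offset from the stack base across both mappings.
 */
static bool stack_kern_va(unsigned long *addr, const struct stack_bases *b)
{
	unsigned long offset = *addr - b->hyp_base;

	*addr = b->kern_base + offset;
	return true;
}
```

The real helper additionally selects which base pair to use from the `stack_type` (HYP stack vs. overflow stack) and rejects unknown types.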
From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:25 -0700
Message-Id: <20220715061027.1612149-17-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 16/18] KVM: arm64: Introduce pkvm_dump_backtrace()
From: Kalesh Singh

Dumps the pKVM hypervisor backtrace from EL1 by reading the unwound
addresses from the shared stacktrace buffer.

Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/stacktrace/nvhe.h | 49 ++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index c3f94b10f8f0..ec1a4ee21c21 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -190,5 +190,54 @@ static int notrace unwind_next(struct unwind_state *state)
 }
 NOKPROBE_SYMBOL(unwind_next);
 
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+
+/**
+ * pkvm_dump_backtrace - Dump the protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * Dumping of the pKVM HYP backtrace is done by reading the
+ * stack addresses from the shared stacktrace buffer, since the
+ * host cannot directly access hypervisor memory in protected
+ * mode.
+ */
+static inline void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+	unsigned long *stacktrace_pos;
+	unsigned long va_mask, pc;
+
+	stacktrace_pos = (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
+	va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	kvm_err("Protected nVHE HYP call trace:\n");
+
+	/* The stack trace is terminated by a null entry */
+	for (; *stacktrace_pos; stacktrace_pos++) {
+		/* Mask tags and convert to kern addr */
+		pc = (*stacktrace_pos & va_mask) + hyp_offset;
+		kvm_err(" [<%016lx>] %pB\n", pc, (void *)pc);
+	}
+
+	kvm_err("---- End of Protected nVHE HYP call trace ----\n");
+}
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static inline void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+	kvm_err("Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE\n");
+}
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+/**
+ * kvm_nvhe_dump_backtrace - Dump KVM nVHE hypervisor backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ */
+static inline void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
+{
+	if (is_protected_kvm_enabled())
+		pkvm_dump_backtrace(hyp_offset);
+}
 #endif /* __KVM_NVHE_HYPERVISOR__ */
 #endif	/* __ASM_STACKTRACE_NVHE_H */
-- 
2.37.0.170.g444d1eabd0-goog
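The `pkvm_dump_backtrace()` loop above has a simple shape worth noting: walk a null-terminated buffer of saved addresses, mask off pointer-tag bits with a VA mask, and rebase by the hypervisor offset. A stand-alone sketch of that loop (`walk_trace` is an illustrative name; the mask value below assumes 48 VA bits for the example only):

```c
#include <assert.h>
#include <stdio.h>

/*
 * Sketch of the pkvm_dump_backtrace() loop: consume a null-terminated
 * buffer of saved return addresses, mask off the tag bits above the VA
 * range, and rebase each entry by the hypervisor offset.
 * Returns the number of entries consumed.
 */
static int walk_trace(const unsigned long *trace, unsigned long va_mask,
		      unsigned long hyp_offset)
{
	int n = 0;

	for (; *trace; trace++, n++) {
		unsigned long pc = (*trace & va_mask) + hyp_offset;

		printf(" [<%016lx>]\n", pc);
	}
	return n;
}
```

The null terminator doubles as the producer/consumer contract: the hypervisor side only has to zero-terminate whatever it managed to record before panicking.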
From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:26 -0700
Message-Id: <20220715061027.1612149-18-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 17/18] KVM: arm64: Introduce hyp_dump_backtrace()
From: Kalesh Singh

In non-protected nVHE mode, unwind and dump the hypervisor backtrace
from EL1. This is possible because the host can directly access the
hypervisor stack pages in non-protected mode.

Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/stacktrace/nvhe.h | 64 +++++++++++++++++++++---
 1 file changed, 56 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index ec1a4ee21c21..c322ac95b256 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -190,6 +190,56 @@ static int notrace unwind_next(struct unwind_state *state)
 }
 NOKPROBE_SYMBOL(unwind_next);
 
+/**
+ * kvm_nvhe_print_backtrace_entry - Symbolize and print a HYP stack address
+ */
+static inline void kvm_nvhe_print_backtrace_entry(unsigned long addr,
+						  unsigned long hyp_offset)
+{
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	/* Mask tags and convert to kern addr */
+	addr = (addr & va_mask) + hyp_offset;
+	kvm_err(" [<%016lx>] %pB\n", addr, (void *)addr);
+}
+
+/**
+ * hyp_dump_backtrace_entry - Dump an entry of the non-protected nVHE HYP stacktrace
+ *
+ * @arg   : the hypervisor offset, used for address translation
+ * @where : the program counter corresponding to the stack frame
+ */
+static inline bool hyp_dump_backtrace_entry(void *arg, unsigned long where)
+{
+	kvm_nvhe_print_backtrace_entry(where, (unsigned long)arg);
+
+	return true;
+}
+
+/**
+ * hyp_dump_backtrace - Dump the non-protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * The host can directly access HYP stack pages in non-protected
+ * mode, so the unwinding is done directly from EL1. This removes
+ * the need for shared buffers between host and hypervisor for
+ * the stacktrace.
+ */
+static inline void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	struct unwind_state state;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);
+
+	kvm_err("Non-protected nVHE HYP call trace:\n");
+	unwind(&state, hyp_dump_backtrace_entry, (void *)hyp_offset);
+	kvm_err("---- End of Non-protected nVHE HYP call trace ----\n");
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
 
@@ -206,22 +256,18 @@ DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm
 static inline void pkvm_dump_backtrace(unsigned long hyp_offset)
 {
 	unsigned long *stacktrace_pos;
-	unsigned long va_mask, pc;
 
 	stacktrace_pos = (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
-	va_mask = GENMASK_ULL(vabits_actual - 1, 0);
 
 	kvm_err("Protected nVHE HYP call trace:\n");
 
-	/* The stack trace is terminated by a null entry */
-	for (; *stacktrace_pos; stacktrace_pos++) {
-		/* Mask tags and convert to kern addr */
-		pc = (*stacktrace_pos & va_mask) + hyp_offset;
-		kvm_err(" [<%016lx>] %pB\n", pc, (void *)pc);
-	}
+	/* The saved stacktrace is terminated by a null entry */
+	for (; *stacktrace_pos; stacktrace_pos++)
+		kvm_nvhe_print_backtrace_entry(*stacktrace_pos, hyp_offset);
 
 	kvm_err("---- End of Protected nVHE HYP call trace ----\n");
 }
+
 #else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
 static inline void pkvm_dump_backtrace(unsigned long hyp_offset)
 {
@@ -238,6 +284,8 @@ static inline void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_dump_backtrace(hyp_offset);
+	else
+		hyp_dump_backtrace(hyp_offset);
 }
 #endif /* __KVM_NVHE_HYPERVISOR__ */
 #endif	/* __ASM_STACKTRACE_NVHE_H */
-- 
2.37.0.170.g444d1eabd0-goog
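`hyp_dump_backtrace()` above drives `unwind()` with a consumer callback, which is the standard callback-per-frame pattern. A self-contained sketch of that pattern over synthetic AArch64-style frame records — the names (`frame_record`, `unwind_frames`, `count_entry`) are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* A synthetic AArch64-style frame record: FP points at {caller FP, LR}. */
struct frame_record {
	struct frame_record *fp;	/* caller's record; NULL terminates */
	unsigned long lr;		/* return address saved in this frame */
};

typedef bool (*consume_fn)(void *arg, unsigned long where);

/*
 * Shape of unwind(): follow the frame-pointer chain, handing each saved
 * return address to the consumer until the chain ends or the consumer
 * returns false to stop early. Returns the number of frames visited.
 */
static int unwind_frames(const struct frame_record *fp, consume_fn fn,
			 void *arg)
{
	int depth = 0;

	while (fp) {
		if (!fn(arg, fp->lr))
			break;
		depth++;
		fp = fp->fp;
	}
	return depth;
}

/* Example consumer: counts entries via the opaque argument. */
static bool count_entry(void *arg, unsigned long where)
{
	(void)where;
	++*(int *)arg;
	return true;
}
```

In the patch, the opaque argument carries the hypervisor offset instead of a counter, and the consumer symbolizes and prints each address; the kernel's real unwinder also validates each frame against the accessible-stack checks from earlier in the series.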
From nobody Sat Apr 18 09:23:29 2026
Date: Thu, 14 Jul 2022 23:10:27 -0700
Message-Id: <20220715061027.1612149-19-kaleshsingh@google.com>
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 18/18] KVM: arm64: Dump nVHE hypervisor stack on panic
From: Kalesh Singh

On hyp_panic(), unwind and dump the nVHE hypervisor stack trace.
In protected nVHE mode, hypervisor stacktraces are only produced
if CONFIG_PROTECTED_NVHE_STACKTRACE is enabled.

Example backtrace:

[  126.862960] kvm [371]: nVHE hyp panic at: [] __kvm_nvhe_recursive_death+0x10/0x34!
[  126.869920] kvm [371]: Protected nVHE HYP call trace:
[  126.870528] kvm [371]:  [] __kvm_nvhe_hyp_panic+0xac/0xf8
[  126.871342] kvm [371]:  [] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[  126.872174] kvm [371]:  [] __kvm_nvhe_recursive_death+0x24/0x34
[  126.872971] kvm [371]:  [] __kvm_nvhe_recursive_death+0x24/0x34
   .
   .
   .
[  126.927314] kvm [371]:  [] __kvm_nvhe_recursive_death+0x24/0x34
[  126.927727] kvm [371]:  [] __kvm_nvhe_recursive_death+0x24/0x34
[  126.928137] kvm [371]:  [] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
[  126.928561] kvm [371]:  [] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
[  126.928984] kvm [371]:  [] __kvm_nvhe_handle_trap+0xc4/0x128
[  126.929385] kvm [371]:  [] __kvm_nvhe___host_exit+0x64/0x64
[  126.929804] kvm [371]: ---- End of Protected nVHE HYP call trace ----

Signed-off-by: Kalesh Singh
---
 arch/arm64/kvm/handle_exit.c     | 4 ++++
 arch/arm64/kvm/hyp/nvhe/switch.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index f66c0142b335..ef8b57953aa2 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include
@@ -353,6 +354,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 			(void *)panic_addr);
 	}
 
+	/* Dump the nVHE hypervisor backtrace */
+	kvm_nvhe_dump_backtrace(hyp_offset);
+
 	/*
 	 * Hyp has panicked and we're going to handle that by panicking the
 	 * kernel. The kernel offset will be revealed in the panic so we're
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..a50cfd39dedb 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -375,6 +376,10 @@ asmlinkage void __noreturn hyp_panic(void)
 		__sysreg_restore_state_nvhe(host_ctxt);
 	}
 
+	/* Prepare to dump kvm nvhe hyp stacktrace */
+	kvm_nvhe_prepare_backtrace((unsigned long)__builtin_frame_address(0),
+				   _THIS_IP_);
+
 	__hyp_do_panic(host_ctxt, spsr, elr, par);
 	unreachable();
 }
-- 
2.37.0.170.g444d1eabd0-goog
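The panic-path handoff in this last patch is a small producer/consumer contract: `hyp_panic()` records its own frame pointer and PC before `__hyp_do_panic()`, so the host-side unwinder has a starting state. A hypothetical mirror of that handoff (`stacktrace_start`/`prepare_backtrace` are illustrative names, standing in for the per-CPU `kvm_nvhe_stacktrace_info` and `kvm_nvhe_prepare_backtrace()`):

```c
#include <assert.h>

/*
 * Hypothetical mirror of the panic-path handoff: the panicking side
 * stashes its frame pointer and PC in a slot the unwinding side can
 * read later, instead of unwinding in the panic path itself.
 */
struct stacktrace_start {
	unsigned long fp;
	unsigned long pc;
};

static struct stacktrace_start percpu_start;

static void prepare_backtrace(unsigned long fp, unsigned long pc)
{
	percpu_start.fp = fp;
	percpu_start.pc = pc;
}
```

In the patch the two arguments come from `__builtin_frame_address(0)` and `_THIS_IP_` at the panic site, so the recorded state points exactly at the frame that panicked.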